By Yuval Peres, Oded Schramm, Scott Sheffield, David B. Wilson
Abstract
We prove that every bounded Lipschitz function $F$ on a subset $Y$ of a length space $X$ admits a tautest extension to $X$, i.e., a unique Lipschitz extension $u:X \rightarrow \mathbb{R}$ for which $\operatorname {Lip}_U u =\operatorname {Lip}_{\partial U} u$ for all open $U \subset X\smallsetminus Y$. This was previously known only for bounded domains in $\mathbb{R}^n$, in which case $u$ is infinity harmonic; that is, a viscosity solution to $\Delta _\infty u = 0$, where $\Delta _\infty u = |\nabla u|^{-2}\sum _{i,j} u_{x_i}\, u_{x_j}\, u_{x_i x_j}$.
We also prove the first general uniqueness results for $\Delta _{\infty } u = g$ on bounded subsets of $\mathbb{R}^n$ (when $g$ is uniformly continuous and bounded away from $0$) and analogous results for bounded length spaces. The proofs rely on a new game-theoretic description of $u$. Let $u^\varepsilon (x)$ be the value of the following two-player zero-sum game, called tug-of-war: fix $x_0=x\in X \smallsetminus Y$. At the $k^{\text{th}}$ turn, the players toss a coin and the winner chooses an $x_k$ with $d(x_k, x_{k-1})< \varepsilon$. The game ends when $x_k \in Y$, and player I’s payoff is $F(x_k) - \frac {\varepsilon ^2}{2}\sum _{i=0}^{k-1} g(x_i)$. We show that $\|u^\varepsilon - u\|_{\infty } \to 0$. Even for bounded domains in $\mathbb{R}^n$, the game-theoretic description of infinity harmonic functions yields new intuition and estimates; for instance, we prove power law bounds for infinity harmonic functions in the unit disk with boundary values supported in a $\delta$-neighborhood of a Cantor set on the unit circle.
1. Introduction and preliminaries
1.1. Overview
We consider a class of zero-sum two-player stochastic games called tug-of-war and use them to prove that every bounded real-valued Lipschitz function $F$ on a subset $Y$ of a length space $X$ admits a unique absolutely minimal (AM) extension to $X$, i.e., a unique Lipschitz extension $u:X \rightarrow {\mathbb{R}}$ for which ${\operatorname {Lip}}_Uu = {\operatorname {Lip}}_{\partial U}u$ for all open $U \subset X\smallsetminus Y$. We present an example that shows this is not generally true when $F$ is merely Lipschitz and positive. (Recall that a metric space $(X,d)$ is a length space if for all $x,y\in X$, the distance $d(x,y)$ is the infimum of the lengths of continuous paths in $X$ that connect $x$ to $y$. Length spaces are more general than geodesic spaces, where the infima are achieved.)
When $X$ is the closure of a bounded domain $U \subset {\mathbb{R}}^n$ and $Y$ is its boundary, a Lipschitz extension $u$ of $F$ is AM if and only if it is infinity harmonic in the interior of $X \smallsetminus Y$; i.e., it is a viscosity solution (defined below) to $\Delta _\infty u = 0$, where $\Delta _\infty$ is the so-called infinity Laplacian
$$\begin{equation} \Delta _\infty u = |\nabla u|^{-2}\sum _{i,j} u_{x_i}\, u_{x_j}\, u_{x_i x_j} \tag{1.1} \end{equation}$$
(informally, this is the second derivative of $u$ in the direction of the gradient of $u$). Aronsson proved this equivalence for smooth $u$ in 1967, and Jensen proved the general statement in 1993 [1, 13]. Our analysis of tug-of-war also shows that in this setting $\Delta _\infty u = g$ has a unique viscosity solution (extending $F$) when $g:\overline U\to {\mathbb{R}}$ is continuous and $\inf g > 0$ or $\sup g < 0$, but not necessarily when $g$ assumes values of both signs. We note that in the study of the homogeneous equation $\Delta _\infty u=0$, the normalizing factor $|\nabla u|^{-2}$ in (1.1) is sometimes omitted; however, it is important to include it in the non-homogeneous equation. Observe that with the normalization, $\Delta _\infty$ coincides with the ordinary Laplacian $\Delta$ in the one-dimensional case.
Unlike the ordinary Laplacian or the $p$-Laplacian for $p < \infty$, the infinity Laplacian can be defined on any length space with no additional structure (such as a measure or a canonical Markov semigroup)—that is, we will see that viscosity solutions to $\Delta _\infty u = g$ are well defined in this generality. We will establish the above stated uniqueness of solutions $u$ to $\Delta _\infty u = g$ in the setting of length spaces.
Originally, we were motivated not by the infinity Laplacian but by random turn Hex [22] and its generalizations, which led us to consider the tug-of-war game. As we later learned, tug-of-war games have been considered by Lazarus, Loeb, Propp and Ullman in [16] (see also [15]).
Tug-of-war on a metric space is very natural and conceivably applicable (like differential game theory) to economic and political modeling.
The intuition provided by thinking of strategies for tug-of-war yields new results even in the classical setting of domains in ${\mathbb{R}}^n$. For instance, in Section 4 we show that if $u$ is infinity harmonic in the unit disk and its boundary values are in $[0,1]$ and supported on a $\delta$-neighborhood of the ternary Cantor set on the unit circle, then $u(0)<\delta ^\beta$ for some $\beta >0$.
Before precisely stating our main results, we need several definitions.
1.2. Random turn games and values
We consider two-player, zero-sum random-turn games, which are defined by the following parameters: a set $X$ of states of the game, two directed transition graphs $E_{{\mathrm{I}}{}}, E_{{\mathrm{II}}{}}$ with vertex set $X$, a non-empty set $Y \subset X$ of terminal states (a.k.a. absorbing states), a terminal payoff function $F:Y \rightarrow {\mathbb{R}}$, a running payoff function $f:X \smallsetminus Y \rightarrow {\mathbb{R}}$, and an initial state $x_0 \in X$.
The game play is as follows: a token is initially placed at position $x_0$. At the $k^{\text{th}}\,$ step of the game, a fair coin is tossed, and the player who wins the toss may move the token to any $x_k$ for which $(x_{k-1},x_k)$ is a directed edge in her transition graph. The game ends the first time $x_k \in Y$, and player I’s payoff is $F(x_k) + \sum _{i=0}^{k-1} f(x_i)$. Player I seeks to maximize this payoff, and since the game is zero-sum, player II seeks to minimize it.
We will use the term tug-of-war (on the graph with edges $E$) to describe the game in which $E:=E_{\mathrm{I}}{} = E_{\mathrm{II}}{}$ (i.e., players have identical move options) and $E$ is undirected (i.e., all moves are reversible). Generally, our results pertain only to the undirected setting. Occasionally, we will also mention some counterexamples showing that the corresponding results do not hold in the directed case.
In the most conventional version of tug-of-war on a graph, $Y$ is a union of “target sets” $Y^{\mathrm{I}}{}$ and $Y^{{\mathrm{II}}{}}$, there is no running payoff ($f=0$), and $F$ is identically $1$ on $Y^{\mathrm{I}}{}$ and identically $0$ on $Y^{{\mathrm{II}}{}}$. Players then try to “tug” the game token to their respective targets (and away from their opponent’s targets), and the game ends when a target is reached.
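To make this concrete, consider the conventional game on the path graph $\{0,\dots ,n\}$ with $Y^{{\mathrm{I}}{}}=\{n\}$ and $Y^{{\mathrm{II}}{}}=\{0\}$: if both players play greedily (each tugging the token one step toward her own target), the trajectory is a simple random walk, and the probability that player I wins starting from vertex $k$ is the gambler's-ruin value $k/n$. The following sketch (illustrative code, not from the paper; all names are ours) estimates this by Monte Carlo.

```python
import random

def play_conventional_game(n, start, rng):
    """Play one conventional tug-of-war game on the path 0..n.

    Player I tugs toward n (payoff 1), player II toward 0 (payoff 0).
    Under greedy play the token performs a simple random walk."""
    x = start
    while 0 < x < n:
        x += 1 if rng.random() < 0.5 else -1  # fair coin toss per turn
    return 1 if x == n else 0

def estimate_value(n, start, trials, seed=0):
    """Monte Carlo estimate of player I's expected payoff."""
    rng = random.Random(seed)
    return sum(play_conventional_game(n, start, rng)
               for _ in range(trials)) / trials

# Value from vertex 3 of {0,...,8} is 3/8 = 0.375; the estimate
# should agree up to Monte Carlo error.
print(estimate_value(8, 3, 20000))
```

With 20,000 seeded games the estimate lands within a couple of percent of $3/8$; the dynamic-programming equation of Lemma 1.1 below makes this value exact.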
A strategy for a player is a way of choosing the player’s next move as a function of all previously played moves and all previous coin tosses. It is a map from the set of partially played games to moves (or in the case of a random strategy, a probability distribution on moves). Normally, one would think of a good strategy as being Markovian, i.e., as a map from the current state to the next move, but it is useful to allow more general strategies that take into account the history.
Given two strategies $\mathcal{S}_{{\mathrm{I}}{}}, \mathcal{S}_{{\mathrm{II}}{}}$, let $F_-(\mathcal{S}_{{\mathrm{I}}{}}, \mathcal{S}_{{\mathrm{II}}{}})$ and $F_+(\mathcal{S}_{{\mathrm{I}}{}}, \mathcal{S}_{{\mathrm{II}}{}})$ be the expected total payoff (including the running payoffs received) at the termination of the game, if the game terminates with probability one and this expectation exists in $[-\infty , \infty ]$; otherwise, let $F_-(\mathcal{S}_{{\mathrm{I}}{}}, \mathcal{S}_{{\mathrm{II}}{}})=-\infty$ and $F_+(\mathcal{S}_{{\mathrm{I}}{}}, \mathcal{S}_{{\mathrm{II}}{}})=+\infty$.
The value of the game for player I is defined as $\sup _{\mathcal{S}_{\mathrm{I}}{}} \inf _{\mathcal{S}_{\mathrm{II}}{}} F_-(\mathcal{S}_{\mathrm{I}}{}, \mathcal{S}_{\mathrm{II}}{})$. The value for player II is $\inf _{\mathcal{S}_{{\mathrm{II}}{}}}\sup _{\mathcal{S}_{{\mathrm{I}}{}}} F_+(\mathcal{S}_{{\mathrm{I}}{}},\mathcal{S}_{{\mathrm{II}}{}})$. We use the expressions $u_{{\mathrm{I}}{}}(x)$ and $u_{{\mathrm{II}}{}}(x)$ to denote the values for players I and II, respectively, as a function of the starting state $x$ of the game.
Note that if player I cannot force the game to end almost surely, then $u_{{\mathrm{I}}{}}=-\infty$, and if player II cannot force the game to end almost surely, then $u_{{\mathrm{II}}{}}=\infty$. Clearly, $u_{{\mathrm{I}}{}}(x) \leq u_{{\mathrm{II}}{}}(x)$. When $u_{{\mathrm{I}}{}}(x) = u_{{\mathrm{II}}{}}(x)$, we say that the game has a value, given by $u(x):=u_{{\mathrm{I}}{}}(x)=u_{{\mathrm{II}}{}}(x)$.
Our definition of value for player I penalizes player I severely for not forcing the game to terminate with probability one, awarding $-\infty$ in this case.
(As an alternative definition, one could define $F_-$, and hence player I’s value, by assigning payoffs to all of the non-terminating sequences $x_0, x_1, x_2, \ldots$. If the payoff function for the non-terminating games is a zero-sum Borel-measurable function of the infinite sequence, then player I’s value is equal to player II’s value in great generality [17]; see also [20] for more on stochastic games. The existence of a value by our strong definition implies the existence and equality of the values defined by these alternative definitions.)
Considering the two possibilities for the first coin toss yields the following lemma, a variant of which appears in [16].
Lemma 1.1
The function $u=u_{{\mathrm{I}}{}}$ satisfies the equation
$$\begin{equation} u(x) = \frac 12 \Bigl (\sup _{y:\,(x,y)\in E_{\mathrm{I}}{}} u(y) + \inf _{y:\,(x,y)\in E_{\mathrm{II}}{}} u(y)\Bigr ) + f(x) \tag{1.2} \end{equation}$$
for every non-terminal state $x \in X\smallsetminus Y$ for which the right-hand side is well defined, and $u_{{\mathrm{I}}{}}(x)=-\infty$ when the right-hand side is of the form $\frac 12 (\infty + (-\infty ))+f(x)$. The analogous statement holds for $u_{{\mathrm{II}}{}}$, except that $u_{{\mathrm{II}}{}}(x)=+\infty$ when the right-hand side of (1.2) is of the form $\frac 12 (\infty + (-\infty ))+f(x)$.
The operator
$$\Delta _\infty u(x) := \sup _{y:\,(x,y)\in E} u(y) + \inf _{y:\,(x,y)\in E} u(y) - 2\,u(x)$$
is called the (discrete) infinity Laplacian. A function $u$ is infinity harmonic if (1.2) holds and $f(x)=0$ at all non-terminal $x \in X\smallsetminus Y$. When $u$ is finite, this is equivalent to $\Delta _{\infty }u=0$. However, it will be convenient to adopt the convention that $u$ is infinity harmonic at $x$ if $u(x)=+\infty$ (resp. $-\infty$) and the right-hand side in (1.2) is also $+\infty$ (resp. $-\infty$). Similarly, it will be convenient to say “$u$ is a solution to $\Delta _\infty u = -2\,f$” at $x$ if $u(x)=+\infty$ (resp. $-\infty$) and the right-hand side in (1.2) is also $+\infty$ (resp. $-\infty$).
In a tug-of-war game, it is natural to guess that the value $u=u_{{\mathrm{I}}{}}=u_{{\mathrm{II}}{}}$ exists and is the unique solution to
$$\Delta _\infty u(x) = -2f(x),$$
and also that (at least when $E$ is locally finite) player I’s optimal strategy will be to always move to a neighbor that maximizes $u$ and that player II’s optimal strategy will be to always move to a neighbor that minimizes $u$. This is easy to prove when $E$ is undirected and finite and $f$ is everywhere positive or everywhere negative. Subtleties arise in more general cases ($X$ infinite, $E$ directed, $F$ unbounded, $f$ having values of both signs, etc.).
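On a small finite undirected graph this guess can be tested directly by iterating the dynamic-programming identity $u(x)=\frac 12\bigl (\max _{y\sim x}u(y)+\min _{y\sim x}u(y)\bigr )+f(x)$ to a fixed point. The sketch below (illustrative code, names ours, not from the paper) does this for tug-of-war on a path with $f=0$; the fixed point is the linear, i.e., discrete infinity harmonic, interpolation of the terminal payoffs.

```python
def game_value(neighbors, terminal, F, f, sweeps=2000):
    """Iterate u(x) = (max_nbr u + min_nbr u)/2 + f(x) to a fixed point.

    neighbors: dict vertex -> list of adjacent vertices (undirected graph)
    terminal:  set of terminal states Y, where u is pinned to F
    f:         running payoff on the non-terminal states."""
    u = {x: F[x] if x in terminal else 0.0 for x in neighbors}
    for _ in range(sweeps):
        for x in neighbors:
            if x not in terminal:
                vals = [u[y] for y in neighbors[x]]
                u[x] = 0.5 * (max(vals) + min(vals)) + f[x]
    return u

# Path 0-1-2-3-4 with terminal payoffs F(0) = 0, F(4) = 1 and f = 0:
# the fixed point is the discrete infinity harmonic function u(k) = k/4.
n = 4
nbrs = {k: [j for j in (k - 1, k + 1) if 0 <= j <= n] for k in range(n + 1)}
u = game_value(nbrs, {0, n}, {0: 0.0, n: 1.0}, {k: 0.0 for k in range(1, n)})
print([round(u[k], 6) for k in range(n + 1)])  # → [0.0, 0.25, 0.5, 0.75, 1.0]
```

For this small example the iteration converges quickly; in general, whether a value exists at all is exactly the question Theorem 1.2 addresses.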
Our first theorem addresses the question of the existence of a value.
Theorem 1.2
A tug-of-war game with parameters $X, E, Y, F, f$ has a value whenever the following hold:
(1)
Either $f=0$ everywhere or $\inf f > 0$.
(2)
$\inf F >-\infty$.
(3)
$E$ is undirected.
Counterexamples exist when any one of the three criteria is removed. In Section 5 (a section devoted to counterexamples) we give an example of a tug-of-war game without a value, where $E$ is undirected, $F=0$, and the running payoff satisfies $f > 0$ but $\inf f = 0$.
The case where $f=0$, $F$ is bounded, and $(X,E)$ is locally finite was proved earlier in [15]. That paper discusses an (essentially non-random) game in which the two players bid for the right to choose the next move. That game, called the Richman game, has the same value as tug-of-war with $f=0$, where $F$ takes the values $0$ and $1$. Additionally, a simple and efficient algorithm for calculating the value when $f=0$ and $(X,E)$ is finite is presented there.
1.3. Tug-of-war on a metric space
Now consider the special case where $(X,d)$ is a metric space, $Y \subset X$, and Lipschitz functions $F:Y \rightarrow {\mathbb{R}}$ and $f:X \smallsetminus Y \rightarrow {\mathbb{R}}$ are given. Let $E_\varepsilon$ be the edge-set in which $x \sim y$ if and only if $d(x,y) <\varepsilon$, and let $u^\varepsilon$ be the value (if it exists) of the game played on $E_\varepsilon$ with terminal payoff $F$ and running payoff normalized to be $\varepsilon ^2 f$.
In other words, $u^\varepsilon (x)$ is the value of the following two-player zero-sum game, called $\varepsilon$-tug-of-war: fix $x_0=x\in X \smallsetminus Y$. At the $k^{\text{th}}\,$ turn, the players toss a coin and the winner chooses an $x_k$ with $d(x_k, x_{k-1}) <\varepsilon$. The game ends when $x_k \in Y$, and player I’s payoff is $F(x_k) + \varepsilon ^2\sum _{i=0}^{k-1} f(x_i)$.
When the limit $u:=\lim _{\varepsilon \rightarrow 0} u^\varepsilon$ exists pointwise, we call $u$ the continuum value (or just “value”) of the quintuple $(X,d,Y,F,f)$. We define the continuum value for player I (or II) analogously.
The reader may wonder why we have chosen not to put an edge in $E_\varepsilon$ between $x$ and $y$ when $d(x,y)=\varepsilon$ exactly. This choice has some technical implications. Specifically, we will compare the $\varepsilon$-game with the $2\,\varepsilon$-game. If $x,z$ are such that $d(x,z)\le 2\,\varepsilon$, then in a length space it does not follow that there is a $y$ such that $d(x,y)\le \varepsilon$ and $d(y,z)\le \varepsilon$. However, it does follow if one replaces the weak inequalities with strict inequalities throughout.
We prove the following:
Theorem 1.3
Suppose $X$ is a length space, $Y\subset X$ is non-empty, $F:Y \rightarrow {\mathbb{R}}$ is bounded below and has an extension to a uniformly continuous function on $X$, and either $f:X \smallsetminus Y \rightarrow {\mathbb{R}}$ satisfies $f=0$ or all three of the following hold: $\inf |f| > 0$, $f$ is uniformly continuous, and $X$ has finite diameter. Then the continuum value $u$ exists and is a uniformly continuous function extending $F$. Furthermore, $\|u-u^\varepsilon \|_\infty \to 0$ as $\varepsilon \searrow 0$. If $F$ is Lipschitz, then so is $u$. If $F$ and $f$ are Lipschitz, then $\|u-u^\varepsilon \|_\infty = O(\varepsilon )$.
The above condition that $F:Y\rightarrow {\mathbb{R}}$ extends to a uniformly continuous function on $X$ is equivalent to having $F$ uniformly continuous on $Y$ and “Lipschitz on large scales,” as we prove in Lemma 3.9 below.
We will see in Section 5.1 that this fails in general when $f > 0$ but $\inf f = 0$. When $f$ assumes values of both signs, it fails even when $X$ is a closed disk in ${\mathbb{R}}^2$, $Y$ is its boundary, and $F=0$. In Section 5.3 we show by means of an example that in such circumstances it may happen that $u_{\mathrm{I}}{} ^\varepsilon \ne u_{\mathrm{II}}{} ^\varepsilon$ and moreover, $\liminf _{\varepsilon \searrow 0} \|u_{\mathrm{I}}{} ^\varepsilon - u_{\mathrm{II}}{} ^\varepsilon \|_\infty >0$.
1.4. Absolutely minimal Lipschitz extensions
Given a metric space $(X,d)$, a subset $Y \subset X$ and a function $u:X \rightarrow {\mathbb{R}}$, we write
$$\operatorname {Lip}_Yu = \sup _{x,y\in Y,\; x\neq y} \frac {|u(x)-u(y)|}{d(x,y)}\,,$$
and ${\operatorname {Lip}}\, u = {\operatorname {Lip}}_Xu$. Thus $u$ is Lipschitz iff ${\operatorname {Lip}}\, u < \infty$. Given $F: Y \to {\mathbb{R}}$, we say that $u:X\rightarrow {\mathbb{R}}$ is a minimal extension of $F$ if ${\operatorname {Lip}}_X u = {\operatorname {Lip}}_YF$ and $u(y) = F(y)$ for all $y \in Y$.
It is well known that for any metric space $X$, any Lipschitz $F$ on a subset $Y$ of $X$ admits a minimal extension. The largest and smallest minimal extensions (introduced by McShane [18] and Whitney [24] in the 1930s) are respectively
$$u^+(x) := \inf _{y\in Y} \bigl (F(y) + \operatorname {Lip}_YF \, d(x,y)\bigr ) \quad \text{and} \quad u^-(x) := \sup _{y\in Y} \bigl (F(y) - \operatorname {Lip}_YF \, d(x,y)\bigr ).$$
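On a finite metric space both extensions can be computed by brute force directly from the classical McShane and Whitney formulas $\inf _{y\in Y}\bigl (F(y)+\operatorname {Lip}_YF\,d(x,y)\bigr )$ and $\sup _{y\in Y}\bigl (F(y)-\operatorname {Lip}_YF\,d(x,y)\bigr )$. The sketch below (illustrative code, names ours) evaluates them for a few points on the real line: between the points of $Y$ the extension is forced, outside it the two extensions differ, and both attain the boundary Lipschitz constant $\operatorname {Lip}_YF$.

```python
def lip(h, d, pts):
    """Brute-force Lipschitz constant of h (a dict) over the points pts."""
    return max((abs(h[x] - h[y]) / d(x, y)
                for x in pts for y in pts if x != y), default=0.0)

def mcshane_whitney(X, Y, F, d):
    """Largest (McShane) and smallest (Whitney) minimal extensions of F."""
    L = lip(F, d, Y)
    upper = {x: min(F[y] + L * d(x, y) for y in Y) for x in X}
    lower = {x: max(F[y] - L * d(x, y) for y in Y) for x in X}
    return upper, lower

# X = four points on the real line, Y = {0, 1}, F(0) = 0, F(1) = 1,
# so Lip_Y F = 1.  Between 0 and 1 the extension is forced (u(x) = x);
# at the outside point 2 the two extensions differ.
d = lambda x, y: abs(x - y)
X = [0.0, 0.5, 1.0, 2.0]
Y = [0.0, 1.0]
F = {0.0: 0.0, 1.0: 1.0}
up, lo = mcshane_whitney(X, Y, F, d)
print(up[2.0], lo[2.0])              # → 2.0 0.0
print(lip(up, d, X), lip(lo, d, X))  # → 1.0 1.0
```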
We say $u$ is an absolutely minimal (AM) extension of $F$ if ${\operatorname {Lip}}\, u <\infty$ and ${\operatorname {Lip}}_U u = {\operatorname {Lip}}_{\partial U} u$ for every open set $U \subset X \smallsetminus Y$. We say that $u$ is AM on $U$ if it is defined on $\overline U$ and is an AM extension of its restriction to $\partial U$. AM extensions were first introduced by Aronsson in 1967 [1] and have applications in engineering and image processing (see [4] for a recent survey).
We prove the following:
Theorem 1.4
Let $X$ be a length space and let $F:Y\rightarrow {\mathbb{R}}$ be Lipschitz, where $\emptyset \ne Y\subset X$. If $\inf F>-\infty$, then the continuum value function $u$ described in Theorem 1.3 (with $f=0$) is an AM extension of $F$. If $F$ is also bounded, then $u$ is the unique AM extension of $F$.
We present in the counterexample section, Section 5, an example in which $F$ is Lipschitz, non-negative, and unbounded, and although the continuum value is an AM extension, it is not the only AM extension.
Prior to our work, the existence of AM extensions in the above settings was known only for separable length spaces [14] (see also [19]). The uniqueness in Theorem 1.4 was known only in the case that $X$ is the closure of a bounded domain $U\subset {\mathbb{R}}^n$ and $Y=\partial U$. (To deduce this case from Theorem 1.4, one needs to replace $X$ by the smallest closed ball containing $\overline U$, say.) Three uniqueness proofs in this setting have been published, by Jensen [13], by Barles and Busca [6], and by Aronsson, Crandall, and Juutinen [4]. The third proof generalizes from the Euclidean norm to uniformly convex norms.
Our proof applies to more general spaces because it invokes no outside theorems from analysis (which assume existence of a local Euclidean geometry, a measure, a notion of twice differentiability, etc.), and relies only on the structure of $X$ as a length space.
As noted in [4], AM extensions do not generally exist on metric spaces that are not length spaces. (For example, if $X$ is the $L$-shaped region $\{0\} \times [0,1] \cup [0,1] \times \{0\} \subset {\mathbb{R}}^2$ with the Euclidean metric, and $Y = \{(0,1), (1,0) \}$, then no non-constant $F:Y\rightarrow {\mathbb{R}}$ has an AM extension. Indeed, suppose that $u:X\rightarrow {\mathbb{R}}$ is an AM extension of $F:Y\rightarrow {\mathbb{R}}$. Let $a:=u(0,0)$, $b:=u(0,1)$ and $c:= u(1,0)$. Then, considering $U=\{0\}\times (0,1)$, it follows that $u(0,s)=a+s\,(b-a)$. Likewise, $u(s,0)=a+s\,(c-a)$. Now taking $U_\varepsilon :=\{0\}\times [0,1)\cup [0,\varepsilon )\times \{0\}$, we see that $\lim _{\varepsilon \searrow 0} {\operatorname {Lip}}_{U_\varepsilon }u=|b-a|$. Hence $|c-a|\le |b-a|$. By symmetry, $|c-a|=|b-a|$. Since $F$ is assumed to be non-constant, $b\ne c$, and hence $c-a=a-b$. Then $\bigl |u(0,s)-u(s,0)\bigr |/\bigl (\sqrt 2 s\bigr ) = \sqrt 2\,|b-a|$, which contradicts $\lim _{\varepsilon \searrow 0} {\operatorname {Lip}}_{U_\varepsilon }u=|b-a|$.)
One property that makes length spaces special is the fact that the Lipschitz norm is determined locally. More precisely, if $W\subset X$ is closed, then either ${\operatorname {Lip}}_W u={\operatorname {Lip}}_{\partial W}u$ or for every $\delta >0$
$$\operatorname {Lip}_W u = \sup \Bigl \{\frac {|u(x)-u(y)|}{d(x,y)} \, : \, x,y\in W,\ 0<d(x,y)<\delta \Bigr \}.$$
The definition of AM was inspired by the notion that if $u$ is the “tautest possible” Lipschitz extension of $F$, it should be tautest possible on any open $V \subset X \smallsetminus Y$, given the values of $u$ on $\partial V$ and ignoring the rest of the metric space. Without locality, the rest of the metric space cannot be ignored (since long-distance effects may change the global Lipschitz constant), and the definition of AM is less natural. Another important property of length spaces is the fact that the graph distance metric on $E_\varepsilon$ scaled by $\varepsilon$ approximates the original metric, namely, it is within $\varepsilon$ of $d(\cdot , \cdot )$.
1.5. Infinity Laplacian on $\mathbb{R}^n$
The continuum version of the infinity Laplacian is defined for $C^2$ functions $u$ on domains $U \subset {\mathbb{R}}^n$ by
$$\Delta _\infty u := |\nabla u|^{-2}\sum _{i,j} u_{x_i}\, u_{x_j}\, u_{x_i x_j}.$$
This is the same as $\eta ^T H \eta$, where $H$ is the Hessian of $u$ and $\eta =\nabla u/|\nabla u|$. Informally, $\Delta _\infty u$ is the second derivative of $u$ in the direction of the gradient of $u$. If $\nabla u(x) = 0$, then $\Delta _\infty u(x)$ is undefined; however, we adopt the convention that if the second derivative of $u$ at $x$ happens to be the same in every direction (i.e., the matrix $\{u_{x_i x_j} \}$ is $\lambda$ times the identity), then $\Delta _\infty u(x) = \lambda$, which is the second derivative in any direction. (As mentioned above, some texts on infinity harmonic functions define $\Delta _\infty$ without the normalizing factor $|\nabla u|^{-2}$. When discussing viscosity solutions to $\Delta _\infty u = 0$, the two definitions are equivalent. The fact that the normalized version is sometimes undefined when $\nabla u = 0$ does not matter because it is always well defined at $x$ when $\varphi$ is a cone function, i.e., when $\varphi (y)$ has the form $a\,|y-z|+b$ for some $a, b \in {\mathbb{R}}$ and $z \in {\mathbb{R}}^n$ with $z \neq x$, and viscosity solutions can be defined via comparison with cones; see Section 1.6.) As in the discrete setting, $u$ is infinity harmonic if $\Delta _\infty u = 0$.
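As a numerical sanity check (illustrative code, not from the paper), $\Delta _\infty$ can be evaluated by central finite differences. Aronsson's classical function $u(x,y)=x^{4/3}-y^{4/3}$ is infinity harmonic wherever it is smooth, so the computed value should vanish up to discretization error, while for $u(x,y)=x^2$ the normalized operator returns the second derivative $2$ in the gradient direction.

```python
def infinity_laplacian(u, x, y, h=1e-4):
    """Normalized infinity Laplacian (grad u)^T H (grad u) / |grad u|^2
    at (x, y), evaluated with second-order central finite differences."""
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
    uyy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h**2
    uxy = (u(x + h, y + h) - u(x + h, y - h)
           - u(x - h, y + h) + u(x - h, y - h)) / (4 * h**2)
    grad_sq = ux**2 + uy**2
    return (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / grad_sq

# Aronsson's function is infinity harmonic on the quadrant x, y > 0,
# so the result should be ~0; for x^2 the operator should give ~2.
aronsson = lambda x, y: x ** (4 / 3) - y ** (4 / 3)
print(infinity_laplacian(aronsson, 1.0, 2.0))
print(infinity_laplacian(lambda x, y: x * x, 1.0, 2.0))
```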
While discrete infinity harmonic functions are a recent concept, introduced in finite-difference schemes for approximating continuous infinity harmonic functions [21], related notions of value for stochastic games are of course much older. The continuous infinity Laplacian first appeared in the work of Aronsson [1] and has been very thoroughly studied [4]. Key motivations for studying this operator are the following:
(1)
AM extensions: Aronsson proved that $C^2$ extensions $u$ on domains $U\subset {\mathbb{R}}^n$ (of functions $F$ on $\partial U$) are infinity harmonic if and only if they are AM.
(2)
$p$-harmonic functions: As noted by Aronsson [1], the infinity Laplacian is the formal limit, as $p \rightarrow \infty$, of the (properly normalized) $p$-Laplacians. Recall that $p$-harmonic functions, i.e., minimizers $u$ of $\int |\nabla u(x)|^p \,dx$ subject to boundary conditions, solve the Euler-Lagrange equation $$\nabla \cdot (|\nabla u|^{p-2}\nabla u) = 0,$$
which can be rewritten as $$|\nabla u|^{p-2} \left(\Delta u + (p-2)\Delta _\infty u\right) = 0,$$
where $\Delta$ is the ordinary Laplacian. Dividing by $(p-2)\,|\nabla u|^{p-2}$, we see that (at least when $|\nabla u| \neq 0$) $p$-harmonic functions satisfy $\Delta _p u = 0$, where $\Delta _p := \Delta _\infty + (p-2)^{-1} \Delta$; the second term vanishes in the large $p$ limit. It is not too hard to see that as $p$ tends to infinity, the Lipschitz norm of any limit of the $p$-harmonic functions extending $F$ will be ${\operatorname {Lip}}_{\partial U}F$. So it is natural to guess (and was proved in [7]) that as $p$ tends to infinity the $p$-harmonic extensions of $F$ converge to a limit that is both absolutely minimal and a viscosity solution to $\Delta _\infty u= 0$.
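The rewriting above is a pointwise algebraic identity, and it can be checked numerically (illustrative code; the sample function $u(x,y)=x^3+2y^3+xy$ and the choice $p=4$ are ours): a finite-difference divergence of the flux $|\nabla u|^{p-2}\nabla u$ should match $|\nabla u|^{p-2}\bigl (\Delta u+(p-2)\Delta _\infty u\bigr )$ computed from the analytic derivatives.

```python
# Check  div(|grad u|^(p-2) grad u) = |grad u|^(p-2) (Lap u + (p-2) Lap_inf u)
# for the sample function u(x,y) = x^3 + 2y^3 + xy at (1, 1), with p = 4.
p = 4

def grad(x, y):                       # analytic gradient of u
    return (3 * x**2 + y, 6 * y**2 + x)

def flux(x, y):                       # |grad u|^(p-2) grad u
    ux, uy = grad(x, y)
    g = (ux**2 + uy**2) ** ((p - 2) / 2)
    return (g * ux, g * uy)

def p_laplacian_lhs(x, y, h=1e-5):    # divergence of the flux, central FD
    return ((flux(x + h, y)[0] - flux(x - h, y)[0]) / (2 * h)
            + (flux(x, y + h)[1] - flux(x, y - h)[1]) / (2 * h))

def p_laplacian_rhs(x, y):            # |grad u|^(p-2) (Lap + (p-2) Lap_inf)
    ux, uy = grad(x, y)
    uxx, uxy, uyy = 6 * x, 1, 12 * y  # analytic Hessian of u
    g2 = ux**2 + uy**2
    lap = uxx + uyy
    lap_inf = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2
    return g2 ** ((p - 2) / 2) * (lap + (p - 2) * lap_inf)

# The two sides should agree up to finite-difference error.
print(p_laplacian_lhs(1.0, 1.0), p_laplacian_rhs(1.0, 1.0))
```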
In the above setting, Aronsson also proved that there always exists an AM extension, and that in the planar case $U\subset {\mathbb{R}}^2$, there exists at most one $C^2$ infinity harmonic extension; however, $C^2$ infinity harmonic extensions do not always exist [2].
To define the infinity Laplacian in the non-$C^2$ setting requires us to consider weak solutions; the right notion here is that of viscosity solution, as introduced by Crandall and Lions (1983) [11]. Start by observing that if $u$ and $v$ are $C^2$ functions, $u(x) = v(x)$, and $v \geq u$ in a neighborhood of $x$, then $v-u$ has a local minimum at $x$, whence $\Delta _\infty v(x) \geq \Delta _\infty u(x)$ (if both sides of this inequality are defined). This comparison principle (which has analogs for more general degenerate elliptic PDEs [5]) suggests that if $u$ is not $C^2$, in order to define $\Delta _\infty u(x)$ we want to compare it to $C^2$ functions $\varphi$ for which $\Delta _\infty \varphi (x)$ is defined. Let $S(x)$ be the set of real valued functions $\varphi$ defined and $C^2$ in a neighborhood of $x$ for which $\Delta _\infty \varphi (x)$ has been defined; that is, either $\nabla \varphi (x)\ne 0$, or $\nabla \varphi (x)=0$ and the limit $\Delta _\infty \varphi (x):= \lim _{x'\to x} 2\,\frac {\varphi (x')-\varphi (x)}{|x'-x|^2}$ exists.
Definition
Let $X$ be a domain in ${\mathbb{R}}^n$ and let $u: X \to {\mathbb{R}}$ be continuous. Set
$$\begin{equation} \Delta _{\infty }^{+}u(x) =\inf \{ \Delta _{\infty }\varphi (x) \, : \, \varphi \in S(x)\text{ and $x$ is a local minimum of }\varphi -u\}\,. \tag{1.3}\cssId{texmlid13}{} \end{equation}$$
Thus $u$ satisfies $\Delta _{\infty }^{+}u \ge g$ in a domain $X$ iff every $\varphi \in S(x)$ such that $\varphi -u$ has a local minimum at some $x\in X$ satisfies $\Delta _{\infty }\varphi (x) \ge g(x)$. In this case $u$ is called a viscosity subsolution of $\Delta _{\infty }(\cdot ) =g$. Note that if $\varphi \in C^2$, then $\Delta _\infty ^+\varphi =\Delta _\infty \varphi$ wherever $\nabla \varphi \ne 0$.
Similarly, let
$$\begin{equation} \Delta _{\infty }^{-}u(x) =\sup \{ \Delta _{\infty }\varphi (x) \, : \, \varphi \in S(x)\text{ and $x$ is a local maximum of }\varphi -u\}\,, \tag{1.4}\cssId{del-}{} \end{equation}$$
and call $u$ a viscosity supersolution of $\Delta _{\infty }(\cdot ) =g$ iff $\Delta _{\infty }^{-}u \le g$ in $X$.
Finally, $u$ is a viscosity solution of $\Delta _{\infty }(\cdot ) =g$ if $\Delta _{\infty }^{-}u \le g \le \Delta _{\infty }^{+}u$ in $X$ (i.e., $u$ is both a supersolution and a subsolution).
Here is a little caveat. At present, we do not know how to show that $\Delta _\infty u=g$ in the viscosity sense determines $g$. For example, if $u$ is Lipschitz, $g_1$ and $g_2$ are continuous, and $\Delta _\infty u=g_j$ holds for $j=1,2$ (in the viscosity sense), how does one prove that $g_1=g_2$?
The following result of Jensen (alluded to above) is now well known [1, 13, 4]: if $X$ is a domain in ${\mathbb{R}}^n$ and $u:X \rightarrow {\mathbb{R}}$ is continuous, then ${\operatorname {Lip}}_Uu = {\operatorname {Lip}}_{\partial U}u < \infty$ for every bounded open set $U\subset \overline U \subset X$ (i.e., $u$ is AM) if and only if $u$ is a viscosity solution to $\Delta _\infty u = 0$ in $X$.
Let $A\subset Y\subset X$, where $A$ is closed, $Y\ne \emptyset$ and $X$ is a length space. If $x\in X$, one can define the $\infty$-harmonic measure of $A$ from $x$ as the infimum of $u(x)$ over all functions $u:X\to [0,\infty )$ that are Lipschitz on $X$, AM in $X\smallsetminus Y$ and satisfy $u\ge 1$ on $A$. This quantity will be denoted by $\omega _\infty (A)=\omega ^{(x,Y,X)}_\infty (A)$. In Section 4 we prove
Theorem 1.5
Let $X$ be the unit ball in ${\mathbb{R}}^n$, $n>1$, let $Y=\partial X$, let $x$ be the center of the ball, and for each $\delta >0$ let $A_\delta \subset Y$ be a spherical cap of radius $\delta$ (of dimension $n-1$). Then
$$c\,\delta ^{1/3} \;\le \; \omega _\infty ^{(x,Y,X)}(A_\delta ) \;\le \; C\,\delta ^{1/3},$$
where $c,C>0$ are absolute constants (which do not depend on $n$).
Numerical calculations [21] had suggested that in the setting of the theorem $\omega _\infty (A_\delta )$ tends to $0$ as $\delta \to 0$, but this was only recently proved [12], and the proof did not yield any quantitative information on the rate of decay. In contrast to our other theorems in the paper, the proof of this theorem does not use tug-of-war. The primary tool is comparison with a specific AM function in ${\mathbb{R}}^2\smallsetminus \{0\}$ with decay $r^{-1/3}$ discovered by Aronsson [3].
1.6. Quadratic comparison on length spaces
To motivate the next definition, observe that for continuous functions $u:{\mathbb{R}}\to {\mathbb{R}}$, the inequality $\Delta _\infty ^+ u \ge 0$ reduces to convexity. The definition of convexity requiring a function to lie below its chords has an analog, comparison with cones, which characterizes infinity harmonic functions. Call the function $\varphi (y)=b\,|y-z|+c$ a cone based at $z \in {\mathbb{R}}^n$. For an open $U \subset {\mathbb{R}}^n$, say that a continuous $u: U \to {\mathbb{R}}$ satisfies comparison with cones from above on $U$ if for every open $W\subset \overline W \subset U$, for every $z\in {\mathbb{R}}^n\smallsetminus W$, and for every cone $\varphi$ based at $z$ such that the inequality $u \le \varphi$ holds on $\partial W$, the same inequality is valid throughout $W$. Comparison with cones from below is defined similarly using the inequality $u \ge \varphi$.
Jensen [13] proved that viscosity solutions to $\Delta _\infty u = 0$ for domains in ${\mathbb{R}}^n$ satisfy comparison with cones (from above and below), and Crandall, Evans, and Gariepy [9] proved that a function on ${\mathbb{R}}^n$ is absolutely minimal in a bounded domain $U$ if and only if it satisfies comparison with cones in $U$.
Champion and De Pascale [8] adapted this definition to length spaces, where cones are replaced by functions of the form $\varphi (x)=b\,d(x,z)+c$, where $b>0$. Their precise definition is as follows. Let $U$ be an open subset of a length space $X$ and let $u: \overline U\to {\mathbb{R}}$ be continuous. Then $u$ is said to satisfy comparison with distance functions from above on $U$ if for every open $W\subset U$, for every $z\in X\smallsetminus W$, for every $b\ge 0$ and for every $c\in {\mathbb{R}}$, if $u(x)\le b\,d(x,z)+c$ holds on $\partial W$, then it also holds in $W$. The function $u$ is said to satisfy comparison with distance functions from below if $-u$ satisfies comparison with distance functions from above. Finally, $u$ satisfies comparison with distance functions if it satisfies comparison with distance functions from above and from below.
The following result from [8] will be used in the proof of Theorem 1.4:
Let $U$ be an open subset of a length space. A continuous $u:\overline U\to {\mathbb{R}}$ satisfies comparison with distance functions in $U$ if and only if it is AM in $U$.
To study the inhomogeneous equation $\Delta _\infty u=g$, it is natural to extend these definitions to comparison with functions that have a quadratic term, since $u$ will in general have a non-zero second derivative in its gradient direction.
Definitions
Let $Q(r)=a r^2+b r+c$ with $r,a,b,c\in {\mathbb{R}}$ and let $X$ be a length space.
•
Let $z \in X$. We call the function $\varphi (x) = Q(d(x,z))$ a quadratic distance function (centered at $z$).
•
We say that a quadratic distance function $\varphi (x)=Q(d(x,z))$ is $\star$-increasing (in distance from $z$) on an open set $V\subset X$ if either (1) $z\notin V$ and for every $x \in V$, we have $Q'(d(x,z)) > 0$, or (2) $z\in V$, $b=0$, and $a>0$. Similarly, we say that a quadratic distance function $\varphi$ is $\star$-decreasing on $V$ if $-\varphi$ is $\star$-increasing on $V$.
•
If $u:U\to {\mathbb{R}}$ is a continuous function defined on an open set $U$ in a length space $X$, we say that $u$ satisfies $g$-quadratic comparison on $U$ if the following two conditions hold:
(1)
$g$-quadratic comparison from above: For every open $V \subset \overline V\subset U$ and $\star$-increasing quadratic distance function $\varphi$ on $V$ with quadratic term $a \leq \inf _{y \in V} \frac {g(y)}{2}$, the inequality $\varphi \geq u$ on $\partial V$ implies $\varphi \geq u$ on $V$.
(2)
$g$-quadratic comparison from below: For every open $V \subset \overline V\subset U$ and $\star$-decreasing quadratic distance function $\varphi$ on $V$ with quadratic term $a \geq \sup _{y \in V} \frac {g(y)}{2}$, the inequality $\varphi \leq u$ on $\partial V$ implies $\varphi \leq u$ on $V$.
Theorem 1.7
Let $u$ be a real-valued continuous function on a bounded domain $U$ in ${\mathbb{R}}^n$, and suppose that $g$ is a continuous function on $U$. Then $u$ satisfies $g$-quadratic comparison on $U$ if and only if $u$ is a viscosity solution to $\Delta _\infty u = g$ in $U$.
This equivalence motivates the study of functions satisfying quadratic comparison. Note that satisfying $\Delta _\infty u(x)=g(x)$ in the viscosity sense depends only on the local behavior of $u$ near $x$. We may use Theorem 1.7 to extend the definition of $\Delta _\infty$ to length spaces, saying that $\Delta _\infty u=g$ on an open subset $U$ of a length space if and only if every $x\in U$ has a neighborhood $V\subset U$ on which $u$ satisfies $g$-quadratic comparison. We warn the reader, however, that even for length spaces $X$ contained within ${\mathbb{R}}$, there can be solutions to $\Delta _\infty u=0$ that do not satisfy comparison with distance functions (or $0$-quadratic comparison, for that matter): for example, $X=U=(0,1)$ and $u(x)=x$. The point here is that when we take $V\subset (0,1)$ and compare $u$ with some function $\varphi$ on $\partial V$, the “appropriate” notion of the boundary $\partial V$ is the boundary in ${\mathbb{R}}$, not in $X$.
The continuum value of the tug-of-war game sometimes gives a construction of a function satisfying $g$-quadratic comparison. We prove:
Theorem 1.8
Suppose $X$ is a length space, $Y\subset X$ is non-empty, $F:Y \rightarrow {\mathbb{R}}$ is uniformly continuous and bounded, and either $f:X \smallsetminus Y \rightarrow {\mathbb{R}}$ satisfies $f=0$ or all three of the following hold: $\inf |f| > 0$, $f$ is uniformly continuous, and $X$ has finite diameter. Then the continuum value $u$ is the unique continuous function satisfying $(-2f)$-quadratic comparison on $X\smallsetminus Y$ and $u=F$ on $Y$. Moreover, if $\tilde{u}:X\to {\mathbb{R}}$ is continuous, satisfies $\tilde{u}\ge F$ on $Y$ and $(-2f)$-quadratic comparison from below on $X\smallsetminus Y$, then $\tilde{u}\ge u$ throughout $X$.
Putting these last two theorems together, we obtain
Corollary 1.9
Suppose $U\subset {\mathbb{R}}^n$ is a bounded open set, $F:\partial U\to {\mathbb{R}}$ is uniformly continuous, and $f:U\to {\mathbb{R}}$ satisfies either $f=0$ or $\inf f>0$ and $f$ is uniformly continuous. Then there is a unique continuous function $u:\overline U\to {\mathbb{R}}$ that is a viscosity solution to $\Delta _\infty u = -2f$ on $U$ and satisfies $u=F$ on $\partial U$. This unique solution is the continuum value of tug-of-war on $(\overline U,d,\partial U,F,f)$.
In order to deduce the corollary from Theorem 1.8, we may take the length space $X$ to be a ball in ${\mathbb{R}}^n$ which contains $U$ and extend $F$ to $X\smallsetminus U$, say; it is then easy to verify that $F$ indeed satisfies the assumptions in Theorem 1.8. Alternatively, we may consider $U$ with its intrinsic metric and lift $F$ to the completion of $U$.
We present in Section 5 an example showing that the corollary may fail if $f$ is permitted to take values of both signs. Specifically, the example describes two functions $u_1,u_2$ defined in the closed unit disk in ${\mathbb{R}}^2$ and having boundary values identically zero on the unit circle such that with some Lipschitz function $g$ we have $\Delta _\infty u_j=g$ for $j=1,2$ (in the viscosity sense), while $u_1\ne u_2$.
The plan of the paper is as follows. Section 2 discusses the discrete tug-of-war on graphs and proves Theorem 1.2, and Section 3 deals with tug-of-war on length spaces and proves Theorems 1.3, 1.4 and 1.8. Section 4 is devoted to estimates of $\infty$-harmonic measure. In Section 5, we present a few counterexamples showing that some of the assumptions in the theorems we prove are necessary. Section 6 is devoted to the proof of Theorem 1.7. Section 7 presents some heuristic arguments describing what the limiting trajectories of some $\varepsilon$-tug-of-war games on domains in ${\mathbb{R}}^n$ may look like and states a question regarding the length of the game. We conclude with additional open problems in Section 8.
2. Discrete game value existence
2.1. Tug-of-war on graphs without running payoffs
In this section, we will generally assume that $E=E_{{\mathrm{I}}{}}=E_{{\mathrm{II}}{}}$ is undirected and connected and that $f=0$.
Though we will not use this fact, it is interesting to point out that in the case where $E$ is finite, there is a simple algorithm from Reference15 which calculates the value $u$ of the game. It proceeds as follows: assuming the value $u(v)$ is already calculated on some set $V'\supseteq Y$ of vertices (initially $V'=Y$), find a path $v_0,v_1,\dotsc ,v_k$, $k>1$, with interior vertices $v_1,\dotsc ,v_{k-1}\in X\smallsetminus V'$ and endpoints $v_0,v_k\in V'$ which maximizes $\bigl (u(v_k)-u(v_0)\bigr )/k$, and set $u(v_i)=u(v_0)+i\,\bigl (u(v_k)-u(v_0)\bigr )/k$ for $i=1,2,\dotsc ,k-1$. Repeat this as long as $V'\ne X$.
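As an illustration of this procedure, the following Python sketch (our own transcription, not code from Reference15) computes the extension by brute force: it enumerates simple paths whose endpoints already carry values and whose interior does not, picks a path of maximal slope, and interpolates along it. It is exponential in the worst case and intended only for small finite graphs.

```python
def game_value(adj, F):
    """adj: vertex -> set of neighbors (finite, undirected, connected).
    F: terminal vertex -> final payoff.  Returns the game value u on X."""
    u = dict(F)  # u is known on V', the current key set

    def steep_paths(v0):
        # Yield (slope, path) over paths v0,...,v_k with k > 1 edges,
        # interior vertices outside V', and endpoint v_k in V'.
        stack = [[v0]]
        while stack:
            path = stack.pop()
            for w in adj[path[-1]]:
                if w in path[1:]:
                    continue  # keep the path simple (start may repeat)
                if w in u:
                    if len(path) > 1:  # k = len(path) edges
                        yield (u[w] - u[v0]) / len(path), path + [w]
                else:
                    stack.append(path + [w])

    while len(u) < len(adj):
        slope, path = max((sp for v in list(u) for sp in steep_paths(v)),
                          key=lambda sp: sp[0])
        for i, v in enumerate(path[1:-1], start=1):
            u[v] = u[path[0]] + i * slope
    return u

# The path graph 0-1-2-3 with Y = {0, 3}, F(0) = 0, F(3) = 1:
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
u = game_value(adj, {0: 0.0, 3: 1.0})  # u(1) = 1/3, u(2) = 2/3
```

On the path graph the steepest admissible path is $0,1,2,3$ with slope $1/3$, so the algorithm interpolates linearly, as expected of the $\infty$-harmonic extension.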
Recall that $u_{\mathrm{I}}{}$ is the value function for player I.
Lemma 2.1
Suppose that $\inf _{Y} F >-\infty$ and $f=0$. Then $u_{\mathrm{I}}{}$ is the smallest $\infty$-harmonic function bounded from below on $X$ that extends $F$. More generally, if $v$ is an $\infty$-harmonic function which is bounded from below on $X$ and $v \ge F$ on $Y$, then $v \ge u_{\mathrm{I}}{}$ on $X$.
Similarly, if $F$ is bounded from above on $Y$, then $u_{\mathrm{II}}{}$ is the largest $\infty$-harmonic function bounded from above on $X$ that extends $F$.
Proof.
Player I could always try to move closer to some specific point $y\in Y$. Since in almost every infinite sequence of fair coin tosses there will be a time when the number of tails exceeds the number of heads by $d(x_0,y)$, this ensures that the game terminates a.s., and we have $u_{\mathrm{I}}{} \geq \inf _Y F > -\infty$. Suppose that $v\ge F$ on $Y$ and $v$ is $\infty$-harmonic on $X$. Given $\delta >0$, consider an arbitrary strategy for player I, and let II play the strategy in which, at step $k$, if II wins the coin toss, she selects a state where $v(\cdot )$ is within $\delta 2^{-k}$ of its infimum among the states to which II could move. We will show that the expected payoff for player I is at most $v(x_0)+\delta$. We may assume the game terminates a.s. at a time $\tau <\infty$, since otherwise the payoff to player I is $-\infty$. Let $\{x_j\}_{j \ge 0}$ denote the random sequence of states encountered in the game. Since $v$ is $\infty$-harmonic, the sequence $M_k=v(x_{k\wedge \tau }) + \delta 2^{-k}$ is a supermartingale. Optional sampling and Fatou’s lemma imply that $v(x_0)+\delta =M_0 \ge {\mathbb{E}}[M_\tau ] \ge {\mathbb{E}}[F(x_\tau )]$. Thus $u_{\mathrm{I}}{} \le v$. By Lemma 1.1, this completes the proof.
■
Now, we prove the first part of Theorem 1.2. When the graph $(X,E)$ is locally finite and $Y$ is finite, this was proven in Reference15, Thm. 17.
Theorem 2.2
Suppose that $(X,E)$ is connected, and $Y\ne \emptyset$. If $F$ is bounded below (or above) on $Y$ and $f=0$, then $u_{\mathrm{I}}{} =u_{\mathrm{II}}{}$, so the game has a value.
Before we prove this, we discuss several counterexamples that occur when the conditions of the theorem are not met. First, this theorem can fail if $E$ is directed. A trivial counterexample is when $X$ is finite and there is no directed path from the initial state to $Y$.
If $X$ is infinite and $E$ is directed, then there are counterexamples to Theorem 2.2, even when every vertex lies on a directed path towards a terminal state. For example, suppose $X={\mathbb{N}}$ and $Y=\{0\}$, with $F(0)=0$. If $E$ consists of directed edges of the form $(n,n-1)$ and $(n,n+2)$, then II may play so that with positive probability the game never terminates, and hence the value for player I is by definition $-\infty$.
Even in the undirected case, a game may not have a value if $F$ is not bounded either from above or below. The reader may check that if $X$ is the integer lattice ${\mathbb{Z}}^2$ and the terminal states are the $x$-axis with $F((x,0)) = x$, then the players’ value functions are given by $u_{{\mathrm{I}}{}}((x,y)) = x-|y|$ and $u_{{\mathrm{II}}{}}((x,y)) = x +|y|$. (Roughly speaking, this is because, in order to force the game to end in a finite amount of time, player I has to “give up” $|y|$ opportunities to move to the right.) Observe also that in this case any linear function which agrees with $F$ on the $x$-axis is $\infty$-harmonic.
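These claims are easy to test numerically. In the sketch below (our own, with the discrete $\infty$-Laplacian normalized so that $\infty$-harmonicity at a vertex means $u$ there equals the average of the largest and smallest neighboring values), we check that $x-|y|$, $x+|y|$, and a linear function agreeing with $F$ on the $x$-axis are all $\infty$-harmonic off the terminal set:

```python
def neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def inf_harmonic_at(u, p, tol=1e-12):
    # u(p) should equal (max + min)/2 over the four lattice neighbors
    vals = [u(q) for q in neighbors(p)]
    return abs(u(p) - 0.5 * (max(vals) + min(vals))) < tol

u_I = lambda p: p[0] - abs(p[1])   # player I's value function
u_II = lambda p: p[0] + abs(p[1])  # player II's value function
lin = lambda p: p[0] + 2 * p[1]    # linear, agrees with F on the x-axis

interior = [(x, y) for x in range(-5, 6) for y in range(-5, 6) if y != 0]
assert all(inf_harmonic_at(f, p) for f in (u_I, u_II, lin) for p in interior)
```

The check passes on every sampled lattice point off the $x$-axis, consistent with the discussion above.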
As a final remark before proving this theorem, let us consider the obvious strategy for the two players, namely for player I to always maximize $u$ on her turn and for II to always minimize $u$ on her turn. Even when $E$ is (undirected and) locally finite and the payoff function satisfies $0\leq F\leq 1$, these obvious strategies need not be optimal. Consider, e.g., the game shown in Figure 1, where $X\subset {\mathbb{R}}^2$ is given by $X = \bigl \{v(k,j) :j=1,2,\mathinner {\ldotp \ldotp \ldotp }; k =0,1,\dotsc ,2^j\bigr \}$,$v(k,j) =(2^{-j}k,1-2^{-j+1})$, and $E$ consists of edges of the form $\{v(k,j),v(k+1,j)\}$ and $\{v(k,j),v(2k,j+1)\}$. The terminal states are on the left and right edges of the square, and the payoff is $1$ on the left and $0$ on the right. Clearly the function $u(v(k,j))=1-k/2^j$ (the Euclidean distance from the right edge of the square) is infinity harmonic, and by Corollary 2.3 below, we have $u_{\mathrm{I}}{} =u=u_{\mathrm{II}}{}$. The obvious strategy for player I is to always move left, and for II it is to always move right. Suppose however that player I always pulls left and player II always pulls up. It is easy to check that the probability that the game ever terminates when starting from $v(k,j)$ is at most $2/(k+2)$ (this function is a supermartingale under the corresponding Markov chain). Therefore the game continues forever with positive probability, resulting in a payoff of $-\infty$ to player I. Thus, a near-optimal strategy for player I must be able to force the game to end, and must be prepared to make moves which do not maximize $u$. (This is a well-known phenomenon, not particular to tug-of-war.)
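The non-termination phenomenon can also be examined numerically. Under these two strategies the horizontal index $k$ of $v(k,j)$ evolves as a Markov chain, $k\mapsto k-1$ when player I wins the toss and $k\mapsto 2k$ when II does, and (starting from $k<2^j$) the game ends only at $k=0$. The sketch below is our own; the truncation level $K$ and sweep count are ad hoc, and the iteration produces lower bounds on the termination probabilities, which stay consistent with the $2/(k+2)$ bound at $k=2$.

```python
K = 1024               # ad hoc truncation: states k > K treated as never ending
p = [0.0] * (K + 1)
p[0] = 1.0             # k = 0: the game has terminated
for _ in range(2000):  # sweeps increase p monotonically toward the truncated limit
    for k in range(1, K + 1):
        up = p[2 * k] if 2 * k <= K else 0.0  # doubling move, pessimistic cap
        p[k] = 0.5 * (p[k - 1] + up)

# p[k] is a lower bound on the true termination probability from v(k, j);
# e.g. p[2] stays below 2/(2+2) = 1/2, so non-termination has positive probability.
```

Since $p[2]$ is bounded away from $1$, the game indeed continues forever with positive probability under these strategies.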
Proof.
If $F$ is bounded above but not below, then we may exchange the roles of players ${\mathrm{I}}{}$ and ${\mathrm{II}}{}$ and negate $F$ to reduce to the case where $F$ is bounded from below. Since $u_{{\mathrm{I}}{}} \leq u_{{\mathrm{II}}{}}$ always holds, we just need to show that $u_{{\mathrm{II}}{}}\le u_{\mathrm{I}}{}$. Since player I could always pull towards a point in $Y$ and thereby ensure that the game terminates, we have $u_{\mathrm{I}}{} \geq \inf _Y F > -\infty$. Let $u=u_{\mathrm{I}}{}$ and write $\delta (x) = \sup _{y: y \sim x} |u(y)-u(x)|$. Let $x_0,x_1,\dotsc$ be the sequence of positions of the game. For ease of exposition, we begin by assuming that $E$ is locally finite (so that the suprema and infima in the definition of the $\infty$-Laplacian are achieved) and that $\delta (x_0)>0$; later we will remove these assumptions.
To motivate the following argument, we make a few observations. In order to prove that $u_{\mathrm{II}}{} \le u_{\mathrm{I}}{}$, we need to show that player II can guarantee that the game terminates while also making sure that the expected payoff is not much larger than $u(x_0)$. These are two different goals, and it is not a priori clear how to combine them. To resolve this difficulty, observe that $\delta (x_j)$ is non-decreasing in $j$ if players I and II adopt the strategies of maximizing (respectively, minimizing) $u$ at every step. As we will later see, this implies that the game terminates a.s. under these strategies. On the other hand, if player I deviates from this strategy and thereby reduces $\delta (x_j)$, then perhaps player II can spend some turns playing suboptimally with respect to $u$ in order to increase $\delta$. Let $X_0:=\{x\in X:\delta (x)\ge \delta (x_0)\}\cup Y$. For $n=0,1,2,\mathinner {\ldotp \ldotp \ldotp }\,$ let $j_n=\max \{j\le n:x_j\in X_0\}$ and $v_n=x_{j_n}$, which is the last position in $X_0$ up to time $n$. We will shortly describe a strategy for II based on the idea of backtracking to $X_0$ when not in $X_0$. If $v_n\ne x_n$, we may define a backtracking move from $x_n$ as any move to a neighbor $y_n$ of $x_n$ that is closer to $v_n$ than $x_n$ in the subgraph $G_n\subset (X,E)$ spanned by the vertices $x_{j_n},x_{j_n+1},\dotsc ,x_n$. Here, “closer” refers to the graph metric of $G_n$. When II plays the backtracking strategy, she backtracks whenever not in $X_0$ and plays to a neighbor minimizing $u_{\mathrm{I}}{}$ when in $X_0$.
Now consider the game evolving under any strategy for player I and the backtracking strategy for II. Let $d_n$ be the distance from $x_n$ to $v_n$ in the subgraph $G_n$. Set
$$m_n:=u(v_n)+\delta (x_0)\,d_n\,.$$
It is clear that $u(x_n)\le m_n$, because there is a path of length $d_n$ from $x_n$ to $v_n$ in $G_n$ and the change in $u$ across any edge in this path is less than $\delta (x_0)$, by the definition of $X_0$. It is easy to verify that $m_n$ is a supermartingale, as follows. If $x_n\in X_0$ and player I plays, then $m_{n+1}\le u(x_n)+\delta (x_n)= m_n+\delta (x_n)$, while if II gets the turn, then $m_{n+1}=u(x_n)-\delta (x_n)$. If $x_n\notin X_0$ and II plays, then $m_{n+1}=m_n-\delta (x_0)$. If $x_n\notin X_0$ and player I does not play into $X_0$, then $m_{n+1}\le m_n+\delta (x_0)$. The last case to consider is that $x_n\notin X_0$ and player I plays into a vertex in $X_0$. In such a situation, $v_{n+1}=x_{n+1}$ and $d_{n+1}=0$, so $$m_{n+1}=u(x_{n+1})\le u(x_n)+\delta (x_n)< m_n+\delta (x_0)\,,$$ since $u(x_n)\le m_n$ and $\delta (x_n)<\delta (x_0)$ for $x_n\notin X_0$.
Thus, indeed, $m_n$ is a supermartingale (bounded from below). Let $\tau$ denote the first time a terminal state is reached (so $\tau =\infty$ if the game does not terminate). By the martingale convergence theorem, the limit $\lim _{n\to \infty }m_{n \wedge \tau }$ exists and is finite. But whenever player II wins the coin toss we have $m_{n+1}\le m_n-\delta (x_0)$, and on the event that the game never terminates she wins infinitely many tosses a.s., which would contradict convergence. Therefore, the game must terminate with probability $1$. The expected outcome of the game thus played is at most $m_0=u(x_0)$. Consequently, $u_{{\mathrm{II}}{}}\le u$, which completes the proof in the case where $E$ is locally finite and $\delta (x_0)>0$.
Next, what if $E$ is not locally finite, so that suprema and infima might not be achieved? In this case, we fix a small $\eta > 0$, and use the same strategy as above, except that if $x_n\in X_0$ and II gets the turn, she moves to a neighbor at which $u$ is at most $\eta 2^{-n-1}$ larger than its infimum value among neighbors of $x_n$. In this case, $m_n + \eta \,2^{-n}$ is a supermartingale, and hence the expected payoff is at most $u(x_0) + \eta$. Since this can be done for any $\eta >0$, we again have that $u_{{\mathrm{II}}{}} \le u$.
Finally, suppose that $\delta (x_0)=0$. Let $y\in Y$, and let player II pull toward $y$ until the first time a vertex $x_0^*$ with $\delta (x_0^*)>0$ or $x_0^*\in Y$ is reached. After that, II continues as above. Since $u(x_0)=u(x_0^*)$, this completes the proof.
■
Corollary 2.3
If $E$ is connected, $Y\ne \emptyset$ and $\sup |F|<\infty$, then $u=u_{\mathrm{I}}{} =u_{\mathrm{II}}{}$ is the unique bounded $\infty$-harmonic function agreeing with $F$ on $Y$.
Proof.
This is an immediate consequence of Lemma 2.1, the remark that follows it, and Theorem 2.2.
■
If $E=\{(n,n+1):n=0,1,2,\mathinner {\ldotp \ldotp \ldotp }\,\}$,$Y=\{0\}$ and $F(0)=0$, then $\tilde{u}(n)=n$ is an example of an (unbounded) $\infty$-harmonic function that is different from $u$.
2.2. Tug-of-war on graphs with running payoffs
Suppose now that $f \neq 0$. Then the analog of Theorem 2.2 does not hold without additional assumptions. For a simple counterexample, suppose that $E$ is a triangle with self-loops at its vertices (i.e., a player may opt to remain in the same position), that the vertex $v_0$ is a terminal vertex with final payoff $F(v_0)=0$, and the running payoff is given by $f(v_1)=-1$ and $f(v_2)=1$. Then the function given by $u(v_0)=0$, $u(v_1)=a-1$, $u(v_2) = a+1$ is a solution to $\Delta _{\infty } u = -2f$, provided $-1 \leq a \leq 1$. The reader may check that $u_{{\mathrm{I}}{}}$ is the smallest of these functions and $u_{{\mathrm{II}}{}}$ is the largest. The gap of $2$ between $u_{{\mathrm{I}}{}}$ and $u_{{\mathrm{II}}{}}$ appears because a player would have to give up a move, sacrificing one turn’s running payoff, in order to force the game to end. This is analogous to the ${\mathbb{Z}}^2$ example given in Section 2.1: both players are earning payoffs in the interior of the game, and moving to a terminal vertex costs a player a turn.
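The whole one-parameter family of solutions can be verified mechanically. In the sketch below (our own, with the discrete $\infty$-Laplacian normalized as $\Delta _\infty u(x)=\max _{y\sim x}u(y)+\min _{y\sim x}u(y)-2u(x)$, and with the self-loops putting each of $v_1,v_2$ in its own neighbor set), we check that $\Delta _\infty u=-2f$ at $v_1$ and $v_2$ for sampled $a\in [-1,1]$, and that the equation fails once $a$ leaves this range:

```python
# Triangle on {0, 1, 2} with self-loops at 1 and 2; vertex 0 is terminal.
nbrs = {1: [0, 1, 2], 2: [0, 1, 2]}
f = {1: -1.0, 2: 1.0}  # running payoffs f(v1), f(v2)

def inf_laplacian(u, v):
    vals = [u[w] for w in nbrs[v]]
    return max(vals) + min(vals) - 2 * u[v]

for a in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    u = {0: 0.0, 1: a - 1.0, 2: a + 1.0}
    assert all(abs(inf_laplacian(u, v) + 2 * f[v]) < 1e-12 for v in (1, 2))

# For a = 3 the same formula is no longer a solution at v1:
u_bad = {0: 0.0, 1: 2.0, 2: 4.0}
assert abs(inf_laplacian(u_bad, 1) + 2 * f[1]) > 1e-9
```

The self-loops matter here: without them the extreme neighboring values change and the family of solutions is different.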
One way around this is to assume that $f$ is either uniformly positive or uniformly negative, as in the following analog of Theorem 2.2. We now prove the second half of Theorem 1.2:
Theorem 2.4
Suppose that $E$ is connected and $Y\ne \emptyset$. Assume that $F$ is bounded from below and $\inf f >0$. Then $u_{\mathrm{I}}{} = u_{\mathrm{II}}{}$. If, additionally, $f$ and $F$ are bounded from above, then any bounded solution $\tilde{u}$ to $\Delta _\infty \tilde{u} = -2f$ with the given boundary conditions is equal to $u$.
Proof.
By considering a strategy for player I that always pulls toward a specific terminal state $y$, we see that $\inf _X u_{\mathrm{I}}{} \ge \inf _{Y} F$. By Lemma 1.1, $\Delta _\infty u_{\mathrm{I}}{} =-2f$ on $X\smallsetminus Y$. Let $\tilde{u}$ be any solution to $\Delta _\infty \tilde{u}=-2f$ on $X\smallsetminus Y$ that is bounded from below on $X$ and has the given boundary values on $Y$.
Claim
$u_{\mathrm{II}}{} \le \tilde{u}$. In proving this, we may assume without loss of generality that $G\smallsetminus Y$ is connected, where $G$ is the graph $(X,E)$. Then if $\tilde{u}=\infty$ at some vertex in $X\smallsetminus Y$, we also have $\tilde{u}=\infty$ throughout $X\smallsetminus Y$, in which case the Claim is obvious. Thus, assume that $\tilde{u}$ is finite on $X\smallsetminus Y$. Fix $\delta \in (0,\inf f)$ and let II use the strategy in which, at step $k$, if the current state is $x_{k-1}$ and II wins the coin toss, she selects a state $x_k$ with $\tilde{u}(x_k)<\inf _{z:z \sim x_{k-1}} \tilde{u}(z)+2^{-k}\delta .$ Then for any strategy chosen by player I, the sequence $M_k=\tilde{u}(x_k)+2^{-k}\delta +\sum _{j=0}^{k-1} f(x_j)$ is a supermartingale bounded from below, which must converge a.s. to a finite limit. Since $\inf f>0$, this also forces the game to terminate a.s. Let $\tau$ denote the termination time. Then, by optional sampling and Fatou’s lemma, $$\tilde{u}(x_0)+\delta =M_0 \ge {\mathbb{E}}[M_\tau ] \ge {\mathbb{E}}\Bigl [F(x_\tau )+\sum _{j=0}^{\tau -1} f(x_j)\Bigr ].$$
Thus this strategy for II shows that $u_{\mathrm{II}}{} (x_0) \le \tilde{u}(x_0)+\delta$. Since $\delta >0$ is arbitrary, this verifies the claim. In particular, $u_{\mathrm{II}}{} \le u_I$ in $X$, so $u_{\mathrm{II}}{} =u_{\mathrm{I}}{}$.
Now suppose that $\sup F<\infty$ and $\sup f < \infty$, and $\tilde{u}$ is a bounded solution to $\Delta _\infty \tilde{u}=-2f$ with the given boundary values. By the claim above, $\tilde{u}\ge u_{{\mathrm{II}}{}}=u_{\mathrm{I}}{}$. On the other hand, for any fixed $\varepsilon >0$, player I can play so as to come within $\varepsilon \,2^{-k}$ of maximizing $\tilde{u}$ at her $k^{\text{th}}$ move. Under such a strategy, she guarantees that by turn $k$ the expected payoff is at least $\tilde{u}(x_0)-{\mathbb{E}}[Q(k)]-\varepsilon$, where $Q(k)$ is $0$ if the game has terminated by time $k$, and $\tilde{u}(x_k)$ otherwise. If the expected number of moves played is infinite, then the expected payoff is infinite, since $\inf f>0$. Otherwise, the game terminates a.s., and $\lim _k{\mathbb{E}}[Q(k)]=0$, since $\tilde{u}$ is bounded. Thus, $u_{\mathrm{II}}{} =u_{{\mathrm{I}}{}}\ge \tilde{u}$ in any case.
■
3. Continuum value of tug-of-war on a length space
3.1. Preliminaries and outline
In this section, we will prove Theorem 1.3, Theorem 1.8 and Theorem 1.4. Throughout this section, we assume $(X,d,Y,F,f)$ denotes a tug-of-war game, i.e., $X$ is a length space with distance function $d$,$Y\subset X$ is a non-empty set of terminal states, $F:Y\to {\mathbb{R}}$ is the final payoff function, and $f:X\smallsetminus Y\to {\mathbb{R}}$ is the running payoff function. We let $x_k$ denote the game state at time $k$ in $\varepsilon$-tug-of-war.
It is natural to ask for a continuous-time version of tug-of-war on a length space. Precisely and rigorously defining such a game (which would presumably involve replacing coin tosses with white noise, making sense of what a continuum no-look-ahead strategy means, etc.) is a technical challenge we will not undertake in this paper (though we include some discussion of the small-