Abstract
In this manuscript we deal with regularity issues and the asymptotic behaviour (as \(p \rightarrow \infty \)) of solutions for elliptic free boundary problems of \(p-\)Laplacian type (\(2 \le p< \infty \)):
with a prescribed Dirichlet boundary datum, where \(\lambda _0>0\) is a bounded function and \(\Omega \) is a regular domain. First, we prove the convergence as \(p\rightarrow \infty \) of any family of solutions \((u_p)_{p\ge 2}\), and we obtain the corresponding limit operator (in non-divergence form) ruling the limit equation,
Next, we obtain uniqueness for solutions to this limit problem. Finally, we show that any solution to the limit operator is a limit of value functions for a specific Tug-of-War game.
1 Introduction
In this article we study diffusion processes governed by quasi-linear operators of \(p-\)Laplacian type with (possibly) a phase transition regime for solutions, i.e. solutions satisfy a PDE in each of the a priori unknown sets (of positivity and negativity, respectively)
for a suitable measurable function \(f:[0, \infty )\times [0, \infty ) \rightarrow {\mathbb {R}}\) with a discontinuity at the origin. These models have become mathematically relevant due to their connections with phenomena in the applied sciences, as well as with several free boundary problems such as obstacle-type problems, minimization problems with free boundaries and dead core problems, just to mention a few. The problem we are particularly interested in is given by
where \(\Delta _p u = \mathrm {div}(|\nabla u|^{p-2}\nabla u)\) stands for the p-Laplace operator, \(\lambda _0>0\) is a function (bounded away from zero and from infinity), F is a continuous boundary data and \(\Omega \subset {\mathbb {R}}^N\) is a bounded and regular domain. In this context, \(\partial \{u>0\} \cap \Omega \) is the free boundary of the problem.
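For the reader's convenience, and consistently with the characterization recalled in the proof of Lemma 4.1 below, the problem under consideration can be written schematically as

```latex
\begin{cases}
\Delta_p u \,=\, \lambda_0(x)\,\chi_{\{u>0\}} & \text{in } \Omega,\\[2pt]
u \,=\, F & \text{on } \partial\Omega.
\end{cases}
```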
It is worth mentioning that the unique weak solution (cf. [8, Theorem 1.1]) to (1.1) appears when we minimize the following functional
over the admissible set \({\mathbb {K}} = \left\{ v \in W^{1, p}(\Omega )\, : \, v=F \,\, \text{ on } \,\,\partial \Omega \right\} \). Variational problems like (1.2) are connected with several applications and have been widely studied in recent decades, see [1, 8, 13, 15, 19].
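As an illustration of the variational characterization above, the following sketch (not from the paper) minimizes a one-dimensional discretization of a functional of type (1.2), here taken as \(J_p(v)=\int_0^1 \big(\tfrac{1}{p}|v'|^p+\lambda_0 \max \{v,0\}\big)\,dx\) with Dirichlet data, whose Euler–Lagrange equation is the \(p-\)Laplacian problem with right-hand side \(\lambda_0\chi_{\{v>0\}}\). The grid size, the constant \(\lambda_0\), the boundary datum and the plain gradient descent are all choices made for the sketch.

```python
# Illustrative 1-D sketch: minimize a discretized functional of type (1.2),
#   J_p(v) = \int_0^1 ( |v'|^p / p + lam * max(v, 0) ) dx,  v(0) = v(1) = F0.
# All parameters are assumptions made for this sketch.

def energy(v, p, lam, h):
    """Discrete J_p via forward differences on a uniform grid of spacing h."""
    e = 0.0
    for i in range(len(v) - 1):
        dv = (v[i + 1] - v[i]) / h
        e += (abs(dv) ** p / p + lam * max(v[i], 0.0)) * h
    return e

def minimize(p=4, lam=1.0, n=40, F0=1.0, steps=5000, lr=2e-3):
    """Crude gradient descent with the boundary nodes held fixed."""
    h = 1.0 / n
    v = [F0] * (n + 1)                   # start from the constant datum
    for _ in range(steps):
        g = [0.0] * (n + 1)
        for i in range(n):               # gradient of the discrete energy
            dv = (v[i + 1] - v[i]) / h
            q = abs(dv) ** (p - 2) * dv  # corresponds to |v'|^{p-2} v'
            g[i + 1] += q
            g[i] -= q
            g[i] += lam * h * (1.0 if v[i] > 0 else 0.0)
        for i in range(1, n):            # update interior nodes only
            v[i] -= lr * g[i]
    return v
```

With these illustrative parameters the minimizer dips below the boundary datum in the interior, in line with the dead-core-type behaviour mentioned above.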
In our first result, we quantify how weak solutions leave their free boundaries within the positivity set.
Theorem 1.1
(Strong Non-degeneracy) Let u be a bounded weak solution to (1.1), \(\Omega ^{\prime } \Subset \Omega \) and let \(x_0 \in \overline{\{u >0\}} \cap \Omega ^{\prime }\). Then, there exists a universal constant \(C_0 = C_0 (N, p, \inf _{\Omega } \lambda _0(x))\) such that for all \(0<r<\min \{1, \mathrm {dist}(\Omega ^{\prime }, \partial \Omega )\}\) there holds
We also deal with the analysis of the asymptotic behaviour as p diverges. Recently, motivated by game theory (“Tug-of-War games”), the following variational problem was studied in [12]
with a forcing term \(f\ge 0\) and a continuous boundary data. In this context, \((u_p)_{p\ge 2}\) converges, up to a subsequence, to a limiting function \(u_{\infty }\), which fulfils the following problem in the viscosity sense
where \(\Delta _\infty u(x) {:}{=}\,\nabla u(x)^T D^2u(x) \nabla u(x)\) is the \(\infty -\)Laplace operator (cf. [2] for a survey). Such limit problems are known as problems with gradient constraint. Gradient constraint problems like
where \(h\ge 0\), appeared in [10]. By considering solutions to
Jensen provides a mechanism to obtain solutions of the infinity Laplace equation \(-\Delta _\infty u=0\) via an approximation procedure. In this context, he proved uniqueness for the infinity Laplace equation by first showing that it holds for the approximating equations and then sending \(\epsilon \rightarrow 0\). A similar strategy was used in the anisotropic counterpart in [16], and a variant of (1.5) appears in the so-called \(\infty \)-eigenvalue problem, see, for example, [11].
We highlight that, in general, uniqueness of solutions to (1.5) is straightforward if h is continuous and strictly positive everywhere. Moreover, uniqueness is known to hold if \(h\equiv 0\), see [10]. Nevertheless, the case \(h\ge 0\) yields significant obstacles. This situation resembles the one for the infinity Poisson equation \(-\Delta _\infty u=h\), where uniqueness is known to hold if \(h>0\) or \(h\equiv 0\), while the case \(h\ge 0\) is an open problem. In this direction, [12, Theorem 4.1] proved uniqueness for (1.5) in the special case \(h=\chi _D\) under the mild topological condition \({{\overline{D}}}=\overline{D^{\circ }}\) on the set \(D \subset {\mathbb {R}}^N\). Furthermore, they show counterexamples where uniqueness fails if such a topological condition is not satisfied, see [12, Section 4.1]. Finally, from a regularity viewpoint, [12] also establishes that viscosity solutions to (1.5) are Lipschitz continuous.
Hence, in our case a natural question arises: what is the expected behaviour of a family of solutions and their free boundaries as \(p\rightarrow \infty \)? This question is one of our motivations to study existence, uniqueness, regularity and further properties of solutions to gradient constraint models like (1.5).
In our next result, we establish existence and regularity of limit solutions. We will assume in this limit procedure that the boundary datum F is a fixed Lipschitz function.
Theorem 1.2
(Limiting problem) Let \((u_p)_{p\ge 2}\) be the family of weak solutions to (1.1). Then, up to a subsequence, \(u_p \rightarrow u_{\infty }\) uniformly in \({\overline{\Omega }}\). Furthermore, such a limit fulfils in the viscosity sense
Finally, \(u_{\infty }\) is a Lipschitz continuous function with
Notice that (1.6) can be written as a fully nonlinear second order operator as follows:
which is non-decreasing in s. Moreover, \(\mathrm {F}_\infty \) is a degenerate elliptic operator in the sense that
Nevertheless, \(\mathrm {F}_\infty \) does not fit in the framework of [6, Theorem 3.3]. Hence, proving uniqueness of limit solutions becomes a non-trivial task. We overcome this difficulty by using ideas from [12, Section 4] and show that solutions to the limit problem are unique.
Theorem 1.3
(Uniqueness) There is a unique viscosity solution to (1.6). Moreover, a comparison principle holds, i.e. if \(F_1 \le F_2\) on \(\partial \Omega \), then the corresponding solutions \(u_{\infty }^1\) and \(u_{\infty }^2\) verify \(u_{\infty }^1 \le u_{\infty }^2\) in \(\Omega \).
Notice that since we have uniqueness for the limit problem, we have convergence of the whole family \((u_p)_{p \ge 2}\) as \(p\rightarrow \infty \) in Theorem 1.2 (and not only convergence along a subsequence).
Next, we turn our attention to several geometric and analytical properties of limit solutions and their free boundaries. This study has been motivated by the analysis of the asymptotic behaviour of several variational problems (see, for example, [7, 12, 21,22,23]). We have a sharp lower control on how limit solutions detach from their free boundaries.
Theorem 1.4
(Linear growth for limit solutions) Let \(u_{\infty }\) be a uniform limit of solutions \(u_p\) to (1.1) and \(\Omega ^{\prime } \Subset \Omega \). Then, for any \(x_0 \in \partial \{u_{\infty }>0\} \cap \Omega ^{\prime }\) and any \(0<r \ll 1\), the following estimate holds:
Our main motivation for considering (1.6) comes from its connection to modern game theory. Recently, in [20] the authors introduced a two-player random turn game called “Tug-of-War” and showed that, as the “step size” converges to zero, the value functions of this game converge to the unique viscosity solution of the infinity Laplace equation \(-\Delta _\infty u=0\). We define and study a variant of the Tug-of-War game, called Pay or Leave Tug-of-War, inspired by the one in [12]. In our game, one of the players decides whether to play the usual Tug-of-War or to pass the turn to the other player, who then decides either to end the game immediately (and get 0 as final payoff) or to move and pay \({\varepsilon }\) (the step size). It is then shown that the value functions of this new game, namely \(u^{\varepsilon } \), fulfil a dynamic programming principle (DPP) given by
Moreover, we show that the sequence \((u^{\varepsilon })_{\varepsilon >0}\) converges and the corresponding limit is a viscosity solution to (1.6). Therefore, besides its own interest, the game-theoretic scheme provides an alternative mechanism to prove the existence of a viscosity solution to (1.6).
Theorem 1.5
Let \(u^{\varepsilon }\) be the value functions of the game previously described. Then, it holds that
where u is the unique viscosity solution to Eq. (1.6).
It is important to mention that we have been able to obtain a game approximation for a free boundary problem that involves the set where the solution is positive, \(\{u>0\}\). This task involves the following difficulty: if one tries to play with a rule of the form “one player sells the turn when the expected payoff is positive”, then the value of the game will not be well defined, since this rule is an anticipating strategy (the player needs to see the future in order to decide where he is going to play). We overcome this difficulty by giving the other player the chance to stop the game (obtaining 0 as final payoff in this case) or to buy the turn (when the first player gives this option). In this way we obtain a set of rules that are non-anticipating and yield a DPP that can be seen as a discretization of the limit PDE.
2 Preliminaries
Definition 2.1
(Weak solution) \(u \in W^{1, p}_{\text{ loc }}(\Omega )\) is a weak supersolution (resp. subsolution) to
if for all \(0\le \varphi \in C^1_0(\Omega )\) it holds
Finally, u is a weak solution to (2.1) when it is simultaneously a supersolution and a subsolution.
Since we are assuming that p is large, (1.1) is not singular at points where the gradient vanishes. Consequently, the mapping
is well defined and continuous for all \(\phi \in C^2(\Omega )\).
Taking into account that the limiting solutions need not be smooth, and the fact that the infinity Laplace operator is not in divergence form, we must use an appropriate weak notion of solution. Next, we introduce the notion of viscosity solution to (1.1). We refer to the survey [6] for the general theory of viscosity solutions.
Definition 2.2
(Viscosity solution) An upper (resp. lower) semi-continuous function \(u: \Omega \rightarrow {\mathbb {R}}\) is called a viscosity subsolution (resp. supersolution) to (1.1) if, whenever \(x_0 \in \Omega \) and \(\phi \in C^2(\Omega )\) are such that \(u-\phi \) has a strict local maximum (resp. minimum) at \(x_0\), then
Finally, \(u \in C(\Omega )\) is a viscosity solution to (1.1) if it is simultaneously a viscosity subsolution and a viscosity supersolution.
Now we state the definition of viscosity solution to (1.6). Notice that here we are using the sets \(\{u\ge 0\}\) and \(\{u> 0\}\) instead of the set that corresponds to the test function, \(\{\phi > 0\}\), as we did in the previous definition.
Definition 2.3
An upper semi-continuous (resp. lower semi-continuous) function \(u:\Omega \rightarrow {\mathbb {R}}\) is a viscosity subsolution (resp. supersolution) to (1.6) in \(\Omega \) if, whenever \( x_0 \in \Omega \) and \(\varphi \in C^2(\Omega )\) are such that \(u-\varphi \) has a strict local maximum (resp. minimum) at \( x_0\), then
respectively
Finally, a continuous function \(u:\Omega \rightarrow {\mathbb {R}}\) is a viscosity solution to (1.6) in \(\Omega \) if it is both a viscosity subsolution and a viscosity supersolution.
Remark that since (2.2) does not depend on \(\phi (x_0)\), we can assume that \(\phi \) satisfies \(u(x_0) = \phi (x_0)\) and \(u(x)<\phi (x)\) for \(x \ne x_0\). Analogously, in (2.3) we can assume that \(u(x_0) = \phi (x_0)\) and \(u(x)>\phi (x)\) for \(x \ne x_0\). We also remark that (2.2) is equivalent to
and that (2.3) is equivalent to
The following lemma gives a relation between weak and viscosity sub- and supersolutions to (1.1).
Lemma 2.4
A continuous weak subsolution (resp. supersolution) \(u \in W_{\text{ loc }}^{1,p}(\Omega )\) to (1.1) is a viscosity subsolution (resp. supersolution) to
Proof
Let us proceed with the case of supersolutions. Fix \(x_0 \in \Omega \) and \(\phi \in C^2(\Omega )\) such that \(\phi \) touches u from below, i.e. \(u(x_0) = \phi (x_0)\) and \(u(x)> \phi (x)\) for \(x \ne x_0\). Our goal is to show that
Let us suppose, for the sake of contradiction, that the inequality does not hold. Then, by continuity, there exists \(r>0\) small enough such that
provided that \(x \in B_r(x_0)\). Now, we consider
Notice that \(\Psi \) verifies \(\Psi < u\) on \(\partial B_r(x_0)\), \(\Psi (x_0)> u(x_0)\) and
By extending by zero outside \(B_r(x_0)\), we may use \((\Psi -u)_{+}\) as a test function in (1.1). Moreover, since u is a weak supersolution, we obtain
On the other hand, multiplying (2.4) by \(\Psi - u\) and integrating by parts we get
Next, subtracting (2.5) from (2.6) we obtain
Finally, since the left-hand side is bounded below by \(\displaystyle 2^{-p}\int _{\{\Psi >u\}} |\nabla \Psi - \nabla u|^p\mathrm{d}x \ge 0,\) this forces \(\Psi \le u\) in \(B_r(x_0)\). However, this contradicts the fact that \(\Psi (x_0)>u(x_0)\) and proves the result.
Similarly, one can prove that a continuous weak subsolution is a viscosity subsolution. \(\square \)
Theorem 2.5
(Morrey’s inequality) Let \(N<p\le \infty \). Then, for \(u \in W^{1, p}(\Omega )\), there exists a constant \(C(N, p)>0\) such that
We must highlight that the dependence of C on p does not deteriorate as \(p \rightarrow \infty \). In fact,
where \(c(N)>0\) is a dimensional constant.
3 Non-degeneracy of solutions
This section is devoted to establishing a weak geometric property which plays a key role in the description of how solutions leave their free boundaries: the non-degeneracy of solutions.
Proof of Theorem 1.1
Due to the continuity of solutions, it is enough to prove such an estimate only at points \(x_0 \in \{u>0\} \cap \Omega ^{\prime }\). Let us define the scaled function
and the auxiliary barrier
It is easy to check that
in the weak sense, where \({\hat{\lambda }}_0(x) {:}{=}\,\lambda _0(x_0 + rx)\). Now, if \(u_r \le \Psi \) on the whole boundary of \(B_1\), then the comparison principle yields that
which contradicts the assumption that \(u_r(0)>0\). Therefore, there exists a point \(y \in \partial B_1\) such that
The proof finishes by scaling back \(u_r\). \(\square \)
4 The limit problem
This section is devoted to proving Theorems 1.2 and 1.4, concerning the limit as \(p\rightarrow \infty \). First, we will prove the existence of a uniform limit as \(p\rightarrow \infty \), as stated in Theorem 1.2. Recall that, since the boundary datum F is assumed to be Lipschitz continuous, we can extend it to the whole \(\Omega \) as a Lipschitz function (still denoted by F).
Lemma 4.1
Assume \(\max \{2, N\}<p < \infty \) and let \(u_p \in W^{1, p}(\Omega )\) be a weak solution to (1.1). Then,
Additionally, \(u_p \in C^{0, \alpha }(\Omega )\), where \(\alpha = 1- \frac{N}{p}\) with the following estimate
where \(C_1, C_2>0\) are constants depending on N, \( \Vert \lambda _0\Vert _{L^{\infty }(\Omega )}\), \(\Vert F\Vert _{L^{\infty }(\Omega )}\), \(\Vert \nabla F\Vert _{L^{\infty }(\Omega )}\).
Proof
The unique weak solution \(u_p\in W^{1,p} (\Omega )\cap C({{\overline{\Omega }}})\) to \(\Delta _p u_p= \lambda _0 \chi _{\{u_p>0\}}\) with fixed Lipschitz continuous boundary values F can be characterized as the minimizer of the functional
in the set of functions \({\mathbb {K}} = \{ u \in W^{1,p} (\Omega ) \ : \ u =F \text{ on } \partial \Omega \}\). Using F as a test function and the fact that \(\Vert u_p\Vert _{L^{\infty }(\Omega )}\le \Vert F\Vert _{L^{\infty }(\Omega )}\), we obtain
Therefore,
Next, for \(p>N\) by Morrey’s estimates we get
\(\square \)
Next, we show that any family of weak solutions to (1.1) is pre-compact, and therefore we get the existence of a uniform limit (as stated in Theorem 1.2).
Lemma 4.2
(Existence of limit solutions) Let \((u_p)_{p>2}\) be a sequence of weak solutions to (1.1). Then, there exists a subsequence \(p_j \rightarrow \infty \) and a limit function \(u_{\infty }\) such that
uniformly in \(\Omega \). Moreover, \(u_{\infty }\) is Lipschitz continuous with
Proof
Existence of a uniform limit \(u_{\infty }\) is a direct consequence of our estimates in Lemma 4.1 combined with the Arzelà–Ascoli compactness criterion. Finally, the last statement holds by passing to the limit in the Hölder estimates from Lemma 4.1. \(\square \)
Next, we will show that any uniform limit, \(u_\infty \), is a viscosity solution to the limit equation.
Proof of Theorem 1.2
Notice that from the uniform convergence, it holds that \(u_{\infty } = F\) on \(\partial \Omega \). Next, we prove that the limit function \(u_{\infty }\) is a viscosity solution to
First, let us prove that \(u_{\infty }\) is a viscosity supersolution. To this end, fix \(x_0 \in \{u_{\infty }>0\} \cap \Omega \) and let \(\phi \in C^2(\Omega )\) be a test function such that \(u_{\infty }(x_0) = \phi (x_0)\) and the inequality \(u_{\infty }(x) > \phi (x)\) holds for all \(x \ne x_0\). Notice that since we have \(x_0 \in \{u_{\infty }>0\} \cap \Omega \), it holds that \(\chi _{\{u_{\infty }\ge 0\}}(x_0) =\chi _{\{u_{\infty }> 0\}}(x_0) =1\).
We want to show that
Notice that if \(-|\nabla \phi (x_0)|+1 \ge 0\) there is nothing to prove. Hence, we may assume that
Since, up to a subsequence, \(u_p \rightarrow u_{\infty }\) uniformly, there exists a sequence \(x_{p} \rightarrow x_0\) such that \(u_{p}-\phi \) has a local minimum at \(x_{p}\). Since \(u_{p}\) is a weak supersolution (and then a viscosity supersolution by Lemma 2.4) to (1.1), we get
Now, dividing both sides by \((p-2)|\nabla \phi (x_{p})|^{p-4}\) (which is not zero for \(p\gg 1\) due to (4.1)) we get
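Schematically, this step rests on the standard expansion of the \(p-\)Laplacian for smooth \(\phi \) with non-vanishing gradient; starting from the supersolution inequality \(-\Delta_p\phi(x_p)\ge \lambda_0(x_p)\) (as in Case 2 below), one obtains

```latex
\Delta_p \phi = |\nabla\phi|^{p-4}\Big(|\nabla\phi|^2\,\Delta\phi + (p-2)\,\Delta_\infty\phi\Big)
\quad\Longrightarrow\quad
-\Delta_\infty\phi(x_p) - \frac{|\nabla\phi(x_p)|^2\,\Delta\phi(x_p)}{p-2}
\;\ge\; \frac{\lambda_0(x_p)}{(p-2)\,|\nabla\phi(x_p)|^{p-4}} \;\ge\; 0.
```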
Passing the limit as \(p \rightarrow \infty \) in the above inequality we conclude that
which proves that \(u_{\infty }\) is a viscosity supersolution.
Now, let us show that \(u_{\infty }\) is a viscosity subsolution. To this end, fix \(x_0 \in \{u_{\infty }>0\} \cap \Omega \) and a test function \(\phi \in C^2(\Omega )\) such that \(u_{\infty }(x_0) = \phi (x_0)\) and the inequality \(u_{\infty }(x) < \phi (x)\) holds for \(x \ne x_0\). We want to prove that
One more time, there exists a sequence \(x_{p} \rightarrow x_0\) such that \(u_{p}-\phi \) has a local maximum at \(x_{p}\), and since \(u_{p}\) is a weak subsolution (and then a viscosity subsolution) to (1.1), we have that
Thus, letting \(p \rightarrow \infty \) we obtain \(- \Delta _{\infty } \phi (x_0) \le 0\). Furthermore, if \(-|\nabla \phi (x_0)|+ \chi _{\{u_{\infty }> 0\}}(x_0) > 0\), then the right-hand side diverges to \(-\infty \) as \(p \rightarrow \infty \), giving a contradiction. Therefore, (4.2) holds.
Next, let us establish the limit equation in the null set. To this end, fix \(x_0 \in \Omega \cap \{u_{\infty } = 0\}\) and \(\phi \in C^2(\Omega )\) such that \(u_{\infty }(x_0) = \phi (x_0)=0\) and \(u_{\infty }(x) < \phi (x)\) holds for \(x \ne x_0\). As before, there exists a sequence \(x_{p} \rightarrow x_0\) such that \(u_{p}-\phi \) has a local minimum at \(x_{p}\). We consider two cases:
-
Case 1:
\(\phi (x_{p_k}) \le 0\) for a subsequence \((p_k)_{k\ge 1}\). In this case, since \(u_{p_k}\) is a weak supersolution (and then a viscosity supersolution) to (1.1), we obtain, after passing to the limit as \(p_k \rightarrow \infty \), that \(-\Delta _{\infty } \phi (x_0) \ge 0\).
-
Case 2:
\(\phi (x_{p_k}) > 0\) for a subsequence \((p_k)_{k\ge 1}\). In this case, since \(u_{p_k}\) is a weak supersolution (and then a viscosity supersolution) to (1.1), we have that
$$\begin{aligned} - \Delta _{p_k} \phi (x_{p_k}) \ge \lambda _0(x_{p_k}). \end{aligned}$$
As in the first part of this proof, we obtain after passing to the limit as \(p_k \rightarrow \infty \) that
$$\begin{aligned} -\Delta _{\infty } \phi (x_0) \ge 0 \quad \text{ or } \quad -|\nabla \phi (x_0)| + 1\ge 0. \end{aligned}$$
In both cases, we conclude that
which assures that \(u_{\infty }\) is a viscosity supersolution to (1.6) in its null set.
Now, fix \(x_0 \in \Omega \cap \{u_{\infty } = 0\}\) and \(\phi \in C^2(\Omega )\) such that \(u_{\infty }(x_0) = \phi (x_0)=0\) and \(u_{\infty }(x) > \phi (x)\) holds for \(x \ne x_0\). One more time, there exists a sequence \(x_{p} \rightarrow x_0\) such that \(u_{p}-\phi \) has a local maximum at \(x_{p}\). As before, let us consider two possibilities:
-
Case 1:
\(\phi (x_{p_k}) \le 0\) for a subsequence \((p_k)_{k\ge 1}\). In this case, since \(u_{p_k}\) is a weak subsolution (and then a viscosity subsolution) to (1.1), we obtain \(-\Delta _{\infty } \phi (x_0) \le 0\). Moreover, we also have \(-|\nabla \phi (x_0)|+ \chi _{\{u> 0\}} = -|\nabla \phi (x_0)| \le 0\).
-
Case 2:
\(\phi (x_{p_k}) > 0\) for a subsequence \((p_k)_{k\ge 1}\). In this case, since \(u_{p_k}\) is a weak subsolution (and then a viscosity subsolution) to (1.1), we have that
$$\begin{aligned} - \Delta _{p_k} \phi (x_{p_k}) \le \lambda _0(x_{p_k}). \end{aligned}$$
Once again, we obtain after passing to the limit as \(p_k \rightarrow \infty \),
$$\begin{aligned} -\Delta _{\infty } \phi (x_0) \le 0 \quad \text{ and } \quad -|\nabla \phi (x_0)|+ 1\le 0. \end{aligned}$$
Therefore, in any of the two cases, we conclude that
which shows that \(u_{\infty }\) is a viscosity subsolution to (1.6) in its null set.
Finally, proving that \(u_{\infty }\) is \(\infty -\)harmonic in its negativity set is a standard task, and the reasoning is similar to the one employed in [21, Theorem 1], [22, page 384] and [23, Theorem 1.1]. We omit the details here. \(\square \)
Proof of Theorem 1.4
Any sequence of weak solutions \((u_p)_{p\ge 2}\) converges, up to a subsequence, to a limit, \(u_{\infty }\), uniformly in \(\Omega \). From Theorem 1.1 we have that
As before, for \({\hat{x}} \in \overline{\{u_{\infty }>0\}} \cap \Omega ^{\prime }\) there exists a sequence \(x_p \rightarrow {\hat{x}}\) with \(x_p \in \overline{\{u_p>0\}} \cap \Omega ^{\prime }\). Hence, we get
\(\square \)
5 Uniqueness for the limit problem
Our main goal throughout this section is to show uniqueness of viscosity solutions to
Recall that existence of a solution \(u_\infty \) was obtained as the uniform limit (along subsequences) of solutions to the \(p-\)Laplacian problems (1.1); see Theorem 1.2 for more details. Next, we deliver the proof of Theorem 1.3, which is based on [12, Section 4]. For this reason, we will only include some details.
Proof of Theorem 1.3
To prove such a result we first construct a function v and then show that any possible viscosity solution to (5.1) coincides with v. To construct such a special v, we first consider h, the unique (see [10]) viscosity solution to
Then, let z be the unique viscosity solution to
Remark that for this problem we have uniqueness, as well as the validity of a comparison principle, see [12, Theorem 4.5]. Hence, we have
Moreover, from [12, Theorem 4.2], we have
Now, we modify z in the set \(\{x\in \Omega \, : \, z (x) < 0 \}\) to obtain the function v as follows: Let w be the solution to
and then we set
Remark that this function v is uniquely determined by the boundary datum F since all the involved PDE problems have uniqueness. Moreover, since we have a comparison principle for the involved PDE problems, we have a comparison principle for v, that is, if \(F_1 \le F_2\) on \(\partial \Omega \), then the corresponding functions \(v_1\) and \(v_2\) verify
Now our aim is to show that
Firstly, let us show that \(u_\infty =z=v\) in the set \( \{x\in \Omega \, : \, z (x) \ge 0 \}\). To this end, we observe that in the set \( \{ x\in \Omega \, : \, |\nabla h(x)| \ge 1 \}\) we have \( z(x) = u_\infty (x) = h(x) \). Hence, we have to deal with \( \{x\in \Omega \, : \, z (x) \ge 0 \text{ and } |\nabla h(x)| < 1 \}\). Now, as in [12, Theorem 4.2], we argue by contradiction and suppose that there is \({\hat{x}} \in \{x\in \Omega \, : \, z (x) \ge 0 \text{ and } |\nabla h(x)| < 1 \}\) such that \(u_\infty ({{\hat{x}}})-z({{\hat{x}}})>0\). If \(u_\infty \) were smooth, we would have \(|\nabla u_\infty ({{\hat{x}}})|\ge 1\) by the second part of the equation, and from \(\Delta _\infty u_\infty \ge 0\) it would follow that \(t\mapsto |\nabla u_\infty (\gamma (t))|\) is non-decreasing along the curve \(\gamma \) for which \(\gamma (0)={{\hat{x}}}\) and \({\dot{\gamma }}(t)=\nabla u_\infty (\gamma (t))\). Using this information and the fact that \(|z(x)-z(y)|\le |x-y|\) in \(\{|\nabla h| < 1\}\), we could then follow \(\gamma \) up to the boundary to find a point y where \(u_\infty (y)>z(y)\); but this is a contradiction since \(u_\infty \) and z coincide on \(\partial \Omega \).
To overcome the lack of smoothness of \(u_\infty \) and to rigorously justify the steps outlined above, we use an approximation procedure with the sup-convolution. Let \(\delta >0\) and
be the standard sup-convolution of \(u_\infty \). Observe that since \(u_\infty \) is bounded in \(\Omega \), we in fact have
with \(R(\delta )=2\sqrt{\delta \Vert u_\infty \Vert _{L^\infty (\Omega )}}\). We assume that \(\delta >0\) is small. In what follows we will use the notation
for the point-wise Lipschitz constant of a function f. Next we observe that since \(u_\infty \) is a solution to (5.1), it follows that \(\Delta _\infty (u_\infty )_\delta \ge 0\) and \(|\nabla (u_\infty )_\delta |-\chi _{(u_\infty )_\delta > 0}\ge 0\). In particular, since \((u_\infty )_\delta \) is semi-convex, there exists \(x_0\) such that
and
Now let \(r_0=\frac{1}{2}\text{ dist }(x_0,\partial \Omega )\) and let \(x_1\in \partial B_{r_0}(x_0)\) be a point such that
Since \(\Delta _\infty (u_\infty )_\delta \ge 0\), the increasing slope estimate, see [4], implies
By defining \(r_1=\frac{1}{2}\mathrm {dist}(x_1,\partial \Omega )\), choosing \(x_2\in \partial B_{r_1}(x_1)\) so that
and using the increasing slope estimate again yields
and
Repeating this construction we obtain a sequence \((x_k)\) such that \(x_k\rightarrow a\in \partial \{x\in \Omega \, : \, z (x) \ge 0 \text{ and } |\nabla h(x)| < 1 \} \cap \partial \Omega \) as \(k\rightarrow \infty \) and
On the other hand, since \(|z(x)-z(y)|\le |x-y|\) whenever the line segment [x, y] is contained in \(\{|\nabla h| \le 1\}\) (see [5]), we have
Thus, by continuity,
which is clearly a contradiction. Therefore, we conclude that \(u_\infty =z=v\) in the set \( \{x\in \Omega \, : \, z (x) \ge 0 \}\).
To extend the equality \(u_\infty = v\) to the set \( \{x\in \Omega \, : \, z (x) < 0 \}\), we just observe that \(-\Delta _\infty v = 0\) there, and also that \(-\Delta _\infty u_\infty =0\): indeed, \(u_\infty \le 0\) on the boundary of \( \{x\in \Omega \, : \, z (x) < 0 \}\), and then \(u_\infty \le 0\) in the set \( \{x\in \Omega \, : \, z (x) < 0 \}\) (notice that if \(u_\infty =0\) there, then trivially \(-\Delta _\infty u_\infty =0\)). Therefore, we conclude that
in the whole \(\Omega \). \(\square \)
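The sup-convolution used in the proof above admits a simple numerical illustration. The following 1-D sketch (the grid and the sample function are our own choices, not from the paper) computes the standard sup-convolution \((u)_\delta (x)=\sup_y\{u(y)-|x-y|^2/(2\delta)\}\):

```python
# 1-D numerical sketch (illustrative) of the standard sup-convolution
#   u_delta(x) = sup_y { u(y) - |x - y|^2 / (2*delta) }.
import math

def sup_convolution(u, xs, delta):
    """Sup-convolution of the function sampled as u on the grid xs."""
    return [max(uy - (x - y) ** 2 / (2.0 * delta) for y, uy in zip(xs, u))
            for x in xs]

# illustrative sample: a Lipschitz function on [0, 1]
n = 100
xs = [i / n for i in range(n + 1)]
u = [math.sin(6.0 * x) * (1.0 - x) for x in xs]
delta = 0.05
ud = sup_convolution(u, xs, delta)
```

One can check numerically the two key properties used above: \((u)_\delta \ge u\) point-wise, and semi-convexity, i.e. \(x\mapsto (u)_\delta(x)+|x|^2/(2\delta)\) is convex (a supremum of affine functions of x).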
Remark 5.1
From the previous proof we have that the positivity sets of \(u_\infty \) and z coincide. The function z can be computed as follows (see [12, Section 2.2]): since h is everywhere differentiable, see [9], and \(|\nabla h(x)|\) equals the point-wise Lipschitz constant of h,
for every \(x\in \Omega \), using that the map \(x\mapsto L(h,x)\) is upper semi-continuous, see, for example, [4], we have that the set
is an open subset of \(\Omega \). Now, define the “patched function” \(z :{{\overline{\Omega }}} \rightarrow {\mathbb {R}}\) by first setting
and then, for each connected component U of V and \(x\in U\), we let
where \(d_{U} (x, y)\) stands for the (interior) distance between x and y in U.
6 Games: pay or leave Tug-of-War
In this section, we consider a variant of the Tug-of-War games introduced in [20] and [12]. Let us describe the two-player zero-sum game that we call Pay or Leave Tug-of-War.
Let \(\Omega \) be a bounded open set and \({\varepsilon }>0\). A token is placed at \(x_0\in \Omega \). Player II, the player seeking to minimize the final payoff, can either pass the turn to Player I or decide to toss a fair coin and play Tug-of-War. In the latter case, the winner of the coin toss gets to move the token to any \(x_1\in B_{\varepsilon }(x_0)\). If Player II passes the turn to Player I, then she can either move the game token to any \(x_1\in B_{\varepsilon }(x_0)\) at the price \({\varepsilon }\) or decide to end the game immediately with no payoff for either of the players. After the first round, the game continues from \(x_1\) according to the same rules.
This procedure yields a possibly infinite sequence of game states \(x_0,x_1,\ldots \) where every \(x_k\) is a random variable. If the game is not ended by the rules described above, the game ends when the token leaves \(\Omega \), and at this point the token will be in the boundary strip of width \({\varepsilon }\) given by
We denote by \(x_\tau \in \Gamma _{\varepsilon }\) the first point in the sequence of game states that lies in \(\Gamma _{\varepsilon }\) so that \(\tau \) refers to the first time we hit \(\Gamma _{{\varepsilon }}\).
At this time the game ends with the terminal payoff given by \(F(x_\tau )\), where \(F:\Gamma _{\varepsilon }\rightarrow {\mathbb {R}}\) is a given continuous payoff function. Player I earns \(F(x_\tau )\), while Player II earns \(-F(x_\tau )\).
A strategy \(S_\text {I}\) for Player I is a function defined on the partial histories that gives the next game position \(S_\text {I}{\left( x_0,x_1,\ldots ,x_k\right) }=x_{k+1}\in B_{\varepsilon }(x_k)\) if Player I gets to move the token. Similarly, Player II plays according to a strategy \(S_\text {II}\). In addition, we define a decision variable for Player II, which tells when Player II decides to pass the turn,
and one for Player I which tells when Player I decides to end the game immediately
Given the sequence \(x_0,\ldots ,x_k\) with \(x_k\in \Omega \) the game will end immediately when
Otherwise, the one-step transition probabilities will be
By using Kolmogorov’s extension theorem and the one-step transition probabilities, we can build a probability measure \({\mathbb {P}}^{x_0}_{S_\text {I},S_\text {II},\theta _\text {I},\theta _\text {II}}\) on the game sequences. The expected payoff, when starting from \(x_0\) and using the strategies \(S_\text {I},S_\text {II},\theta _\text {I},\theta _\text {II}\), is
where \(F:\Gamma _{\varepsilon }\rightarrow {\mathbb {R}}\) is a given continuous function prescribing the terminal payoff extended as \(F\equiv 0\) in \(\Omega \).
The value of the game for Player I is given by
while the value of the game for Player II is given by
Intuitively, the values \(u_\text {I}(x_0)\) and \(u_\text {II}(x_0)\) are the best expected outcomes each player can guarantee when the game starts at \(x_0\). Observe that if the game does not end almost surely, then the expectation (6.1) is undefined. In this case, we define \({\mathbb {E}}_{S_{\text {I}},S_\text {II},\theta _\text {I},\theta _\text {II}}^{x_0}\) to take value \(-\infty \) when evaluating \(u_\text {I}(x_0)\) and \(+\infty \) when evaluating \(u_\text {II}(x_0)\). If \(u_\text {I}= u_\text {II}\), we say that the game has a value.
6.1 The game value function and its dynamic programming principle
In this section, we prove that the game has a value, i.e. \(u {:}{=}\,u_\text {I}= u_\text {II}\), and that such a value function satisfies the dynamic programming principle (DPP) given by
for \(x\in \Omega \) and \(u(x)=F(x)\) for \(x\in \Gamma _{\varepsilon }\).
Let us see intuitively why this holds. At each step, with the token at a given \(x\in \Omega \), Player II chooses whether to play Tug-of-War or to pass the turn to Player I. In the first case, with probability \(\frac{1}{2}\), Player I gets to move and will try to maximize the expected outcome; and with probability \(\frac{1}{2}\), Player II gets to move and will try to minimize the expected outcome. In this case the expected payoff will be
On the other hand, if Player II passes the turn to Player I, he will have two options: to end the game immediately, obtaining 0, or to move, paying \({\varepsilon }\), trying to maximize the expected outcome. Player I will prefer the option that gives the greater payoff, that is, the expected payoff is given by
Finally, Player II will decide between the two possible payoffs mentioned here, preferring the one with the minimum payoff.
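Collecting the three observations above, the DPP can be written schematically as follows (we reconstruct the display from the verbal description of the rules, so normalization details may differ from the original display):
$$\begin{aligned} u(x) = \min \left\{ \frac{1}{2}\sup _{B_{\varepsilon }(x)} u + \frac{1}{2}\inf _{B_{\varepsilon }(x)} u,\ \max \left\{ 0,\ \sup _{B_{\varepsilon }(x)} u - {\varepsilon }\right\} \right\} , \qquad x\in \Omega , \end{aligned}$$
where the first argument of the minimum is the expected payoff of a Tug-of-War round, and the second is the best Player I can do after receiving the turn: either stopping with payoff 0, or moving and paying \({\varepsilon }\).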
To prove that the DPP holds for our game, we borrow some ideas from [3] and [12]. We choose a path that keeps the presentation self-contained.
We define \(\Omega _{\varepsilon }=\Omega \cup \Gamma _{\varepsilon }\) and a sequence of functions \(u_n:\Omega _{\varepsilon }\rightarrow {\mathbb {R}}\) inductively: we let \(u_n=F\) on \(\Gamma _{\varepsilon }\),
on \(\Omega \) and
on \(\Omega \) for all \(n\in {\mathbb {N}}\).
Let us observe that \(u_0\ge u_1\) and, in addition, if \(u_{n-1}\ge u_n\), then by the recursive definition \(u_n\ge u_{n+1}\). Hence, by induction, the sequence of functions is decreasing. By definition, the sequence is bounded below by \(\displaystyle \min \left\{ 0,\min _{\Gamma _{\varepsilon }}F\right\} \). Therefore, \(u_n\) converges pointwise to a bounded Borel function u.
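To make the construction concrete, here is a minimal one-dimensional numerical sketch of the iteration. The recursion implemented below is our schematic reading of the rules (the minimum between the Tug-of-War expectation \(\tfrac{1}{2}\sup + \tfrac{1}{2}\inf \) over \(B_{\varepsilon }\) and Player I's stop-or-move option \(\max \{0, \sup - {\varepsilon }\}\)); the grid, \({\varepsilon }\), and boundary datum are illustrative choices, not taken from the paper.

```python
import numpy as np

# 1-D sketch of the decreasing iteration u_n: u_0 is a large initial guess,
# and each step applies (our schematic reading of) the DPP operator
#   u(x) = min{ (sup + inf)/2 over B_eps(x),  max(0, sup over B_eps(x) - eps) }.
h, eps = 0.01, 0.05                      # grid step and ball radius (illustrative)
k = int(round(eps / h))                  # grid points spanned by B_eps
x = np.arange(-eps, 1.0 + eps + h / 2, h)
interior = (x > 0.0) & (x < 1.0)         # Omega = (0,1); the rest plays Gamma_eps
F = np.where(x >= 1.0, 1.0, 0.0)         # illustrative boundary datum on Gamma_eps

def dpp_step(u):
    """One application of the schematic DPP operator on the grid."""
    v = u.copy()
    for i in np.where(interior)[0]:
        ball = u[max(i - k, 0): i + k + 1]
        sup, inf = ball.max(), ball.min()
        v[i] = min(0.5 * (sup + inf), max(0.0, sup - eps))
    return v

us = [np.where(interior, F.max(), F)]    # u_0 chosen large, so u_0 >= u_1
for n in range(500):
    us.append(dpp_step(us[-1]))

u = us[-1]
residual = np.max(np.abs(u - dpp_step(u)))
print(residual)                          # near-fixed point of the DPP operator
```

One can check numerically the two facts used in the text: the sequence is pointwise decreasing, and it is bounded below by \(\min \{0,\min F\}\).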
We want to prove that the limit u satisfies the dynamic programming principle, and we can attempt to do so by passing to the limit in the recursive formula. Since \(u_n\) is a decreasing sequence that converges pointwise to u, we can show that
However, this convergence is not immediate for the supremum. For this reason, in order to pass to the limit in the recursive formula, we want to show that the sequence converges uniformly. To this end, let us prove an auxiliary lemma.
Lemma 6.1
Let \(x\in \Omega \), \(n\in {\mathbb {N}}\) and fix \(\lambda _1\), \(\lambda _2\) and \(\delta \) such that
and \(\delta >0\). Then, there exists \(y\in B_{\varepsilon }(x)\) such that
Proof
Given \(\lambda _1\le u_n(x)-u_{n+1}(x)\), by the recursive definition, we have
From the standard inequalities
we get
Since \(u_{n-1}\ge u_n\), we can discard the term 0 on the RHS, and we obtain
Bounding the difference between the suprema and between the infima by \(\Vert u_{n-1}-u_n\Vert _\infty \le \lambda _2\), we obtain
that is,
Finally, we can choose \(y\in B_{\varepsilon }(x)\) such that
which gives the desired inequality. \(\square \)
Proposition 6.2
The sequence \(u_n\) converges uniformly, and the limit u is a solution to the DPP.
Proof
We want to show that the convergence is uniform; suppose not. Observe that if \(||u_n-u_{n+1}||_\infty \rightarrow 0\), we can extract a uniformly Cauchy subsequence, and this subsequence converges uniformly to a limit u; by monotonicity, the whole sequence \(u_n\) then converges uniformly to u. By the recursive definition we have \(0\le \Vert u_n-u_{n+1}\Vert _\infty \le \Vert u_{n-1}-u_n \Vert _\infty \). Then, since we are assuming the convergence is not uniform, we have
for some \(M>0\).
Given \(\delta >0\), let \(n_0\in {\mathbb {N}}\) such that for all \(n\ge n_0\),
We fix \(k\in {\mathbb {N}}\). Let \(x_0\in \Omega \) such that
Now we apply Lemma 6.1 with \(n=n_0+k-1\), \(\lambda _1=M-\delta \) and \(\lambda _2=M+\delta \), and we get
for some \(x_1\in B_{\varepsilon }(x_0)\). If we repeat the argument for \(x_1\), but now with \(\lambda _1=t\delta -M\), we obtain
Inductively, we obtain a sequence \(x_l\), \(1\le l \le k-1\) such that
If we add the inequalities
for \(1\le l \le k-1\) and \(u_{n_0+k}(x_0)\le u_{n_0+k-1}(x_0)+\delta -M\), we get
which is a contradiction, since \(u_n\) is bounded but we can make the RHS arbitrarily small by choosing k large and \(\delta \) small. \(\square \)
Now, we are ready to prove one of the main results of this section.
Theorem 6.3
(Dynamic Programming Principle) The game has a value \(u=u_\text {I}= u_\text {II}\), and it satisfies
for \(x\in \Omega \) and \(u(x)=F(x)\) in \(\Gamma _{\varepsilon }\).
Proof
By definition, \(u_\text {I}\le u_\text {II}\). We will show that \(u_\text {II}\le u\) and \(u\le u_\text {I}\) for the u constructed in Proposition 6.2. This, together with the fact that u satisfies the DPP, will complete the proof. For the first inequality we will use the constructed sequence of functions \(u_n\), as in [3]. For the second inequality we will use an argument similar to the one in [12].
We want to show that \(u_\text {II}\le u\). Given \(\eta >0\), let \(n>0\) be such that \(u_n(x_0)<u(x_0)+\frac{\eta }{2}\). We build a strategy (\(S^0_\text {II}, \theta ^0_\text {II}\)) for Player II: for the first n moves, given \(x_{k-1}\), she will choose to play Tug-of-War or pass the turn depending on whether
is larger. When playing Tug-of-War, she will move to a point that almost minimizes \(u_{n-k}\), that is, she chooses \(x_k\in B_{\varepsilon }(x_{k-1})\) such that
After the first n moves, she will choose to play Tug-of-War, following a strategy that ends the game almost surely (for example, pulling in a fixed direction).
We have
where we have estimated the strategy of Player I by \(\sup \) and used the construction for the \(u_k\)’s. Thus
is a supermartingale.
Now we have
where \(\tau \wedge k {:}{=}\,\min \{\tau ,k\}\), and we used the optional stopping theorem for \(M_{k}\). Since \(\eta \) is arbitrary, this proves the claim.
Now, we will show that \(u\le u_\text {I}\). We want to find a strategy (\(S^0_\text {I}, \theta ^0_\text {I}\)) for Player I that ensures a payoff close to u. He has to maximize the expected payoff and, at the same time, make sure that the game ends almost surely. This is done by using the backtracking strategy (cf. [20, Theorem 2.2] for more details).
To that end, we define
Fix \(\eta >0\) and a starting point \(x_0\in \Omega \), and set \(\delta _0 =\min \{\delta (x_0),{\varepsilon }\}/2\). We suppose for now that \(\delta _0>0\), and define
We consider a strategy \(S^0_\text {I}\) for Player I that distinguishes between the cases \(x_k\in X_0\) and \(x_k\notin X_0\). To that end, we define
and
where \(y_k\) denotes the last game position in \(X_0\) up to time k, and \(d_k\) is the distance, measured in number of steps, from \(x_k\) to \(y_k\) along the graph spanned by the previous points \(y_k=x_{k-j},x_{k-j+1},\ldots ,x_k\) that were used to get from \(y_k\) to \(x_k\).
In what follows we define a strategy for Player I and prove that \(M_k\) is a submartingale. Observe that \(M_{k+1}-m_{k+1}=M_k-m_k\) or \(M_{k+1}-m_{k+1}=M_k-m_k-{\varepsilon }\), so to prove the desired submartingale property we will mostly make computations in terms of \(m_k\).
First, if \(x_k\in X_0\), then Player I chooses to step to a point \(x_{k+1}\) satisfying
where \(\eta _{k+1}\in (0,\eta ]\) is small enough to guarantee that \(x_{k+1}\in X_0\). Let us remark that
and hence
Therefore, we can guarantee that \(x_{k+1}\in X_0\) by choosing \(\eta _{k+1}\) such that
Thus if \(x_k\in X_0\) and Player I gets to choose the next position, it holds that
When Tug-of-War is played, if Player II wins the toss and moves from \(x_k\in X_0\) to \(x_{k+1}\in X_0\), it holds, in view of (6.3), that
If Player II wins the toss and she moves to a point \(x_{k+1}\notin X_0\) (whether \(x_k\in X_0\) or not), it holds that
When Player II passes the turn to Player I, he can choose to end the game immediately or to move by paying \({\varepsilon }\). If \(\delta (x_k)\ge {\varepsilon }\), he will choose to play, and we get \(M_{k+1}\ge M_k+\delta (x_k)-{\varepsilon }\ge M_k\). If \({\varepsilon }>\delta (x_k)\), the DPP implies that \(0\ge u(x_k)\) and hence he can finish the game immediately, earning more than \(m_k\).
In the case \(x_k\notin X_0\), the strategy for Player I is to backtrack to \(y_k\), that is, if he wins the coin toss, he moves the token to one of the points \(x_{k-j},x_{k-j+1},\ldots ,x_{k-1}\) closer to \(y_k\) so that \(d_{k+1}= d_k-1\).
Thus if Player I wins and \(x_k\notin X_0\) (whether \(x_{k+1}\in X_0\) or not),
When Tug-of-War is played, if Player II wins the coin toss and moves from \(x_k\notin X_0\) to \(x_{k+1}\in X_0\), then
where the first inequality is due to (6.3), and the second follows from the fact \(m_k=u(y_k)-d_k\delta _0-\eta 2^{-k}\le u(x_k)-\eta 2^{-k}\). The same was obtained in (6.4) when \(x_{k+1}\notin X_0\).
It remains to analyze what happens when Player II passes the turn to Player I in this case. Since \(\delta (x_k)\le {\varepsilon }/2<{\varepsilon }\), we have \(0\ge u(x_k)\) and as before he can finish the game immediately earning more than \(m_k\).
Taking into account all the different cases, we see that \(M_k\) is a submartingale. We can also see that when the game ends Player I ensures a payoff of at least \(M_k\). Let us observe that \(m_k\) is also a submartingale, and it is bounded. Since Player I can ensure that \(m_{k+1}\ge m_k+\delta _0\) whenever he gets to move the token, the game must terminate almost surely. Indeed, there are arbitrarily long sequences of moves made by Player I (unless he ends the game first): if Player II passes a turn, then Player I gets to move, and otherwise this is a consequence of the zero-one law.
We can now conclude the proof with an inequality analogous to that in (6.2).
Finally, let us remove the assumption that \(\delta (x_0)>0\). If \(\delta (x_0)=0\) for \(x_0\in \Omega \), when Tug-of-War is played, Player I adopts a strategy of pulling towards a boundary point until the game token reaches a point \(x_0'\) such that \(\delta (x_0')>0\) or \(x_0'\) is outside \(\Omega \). It holds that \(u(x_0)= u(x_0')\) by (6.3). If Player II passes the turn, Player I ends the game immediately, earning 0 (recall that \(\delta (x)=0\) implies \(0\ge u(x)\) because of the DPP). \(\square \)
6.2 Game value convergence
In this subsection we study the behaviour of the game values as \({\varepsilon }\rightarrow 0\). In the previous sections we analyzed the game for a fixed value of \({\varepsilon }\); here we consider the game value for different values of \({\varepsilon }\). For this purpose, we will write the game value as \(u^{\varepsilon }\), emphasizing its dependence on \({\varepsilon }\). We want to prove that
uniformly on \(\overline{\Omega }\) as \({\varepsilon }\rightarrow 0\), and that u is a viscosity solution to
To this end, we would like to apply the following Arzelà–Ascoli type lemma. We refer the interested reader to [18, Lemma 4.2] for a proof.
Lemma 6.4
Let \(\{u^{\varepsilon }: {\overline{\Omega }} \rightarrow {\mathbb {R}},\ {\varepsilon }>0\}\) be a set of functions such that
-
1.
there exists \(C>0\) such that \(\left| u^{\varepsilon }(x)\right| <C\) for every \({\varepsilon }>0\) and every \(x \in \overline{\Omega }\),
-
2.
given \(\eta >0\) there are constants \(r_0\) and \({\varepsilon }_0\) such that for every \({\varepsilon }< {\varepsilon }_0\) and any \(x, y \in {\overline{\Omega }}\) with \(|x - y | < r_0 \) it holds
$$\begin{aligned} |u^{\varepsilon }(x) - u^{\varepsilon }(y)| < \eta . \end{aligned}$$
Then, there exists a uniformly continuous function \(u: {\overline{\Omega }} \rightarrow {\mathbb {R}}\) and a subsequence still denoted by \(\{u^{\varepsilon }\}\) such that
as \({\varepsilon }\rightarrow 0\).
So our task now is to show that the family \(u^{\varepsilon }\) satisfies the hypotheses of the previous lemma. In the next lemma, we prove that the family is asymptotically uniformly continuous, that is, it satisfies condition 2 of Lemma 6.4. To do that, we follow [12].
Lemma 6.5
The family \(u^{\varepsilon }\) is asymptotically uniformly continuous.
Proof
We prove the required oscillation estimate by arguing by contradiction: We define
We claim that
for all \(x\in \Omega \). Aiming for a contradiction, suppose that there exists \(x_0\in \Omega \) such that
In this case, we have that
The reason is that the alternative
would imply
which is a contradiction with \(A(x_0) > 4 \max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }\). It follows from (6.6) that
Let \(\eta >0\) and take \(x_1 \in B_{\varepsilon }(x_0)\) such that
We obtain
and, since \(x_0\in B_{\varepsilon }(x_1)\), also
Arguing as before, (6.6) also holds at \(x_1\), since otherwise the above inequality would lead to a contradiction, similarly to (6.7), for small enough \(\eta \),
so that
Iterating this procedure, we obtain \(x_i \in B_{\varepsilon }(x_{i-1})\) such that
and
We can proceed with an analogous argument considering points where the infimum is nearly attained to obtain \(x_{-1}\), \(x_{-2}\),... such that \(x_{-i} \in B_{\varepsilon }(x_{-(i-1)})\), and (6.9) and (6.10) hold. Since \(u^{\varepsilon }\) is bounded, there must exist k and l such that \(x_k, x_{-l}\in \Gamma _{\varepsilon }\), and we have
a contradiction. Therefore
for every \(x\in \Omega \). \(\square \)
Lemma 6.6
Let \(u^{\varepsilon }\) be a family of game values for a Lipschitz continuous boundary data F. Then, there exists a Lipschitz continuous function u such that, up to selecting a subsequence,
as \({\varepsilon }\rightarrow 0\).
Proof
By always choosing to play Tug-of-War and moving with any strategy that ends the game almost surely (such as pulling in a fixed direction), Player II can ensure that the final payoff is at most \(\max _{\Gamma _{\varepsilon }}F\). Similarly, by ending the game immediately when given the option, and moving with any strategy that ends the game almost surely when Tug-of-War is played, Player I can ensure that the final payoff is at least \(\displaystyle \min \{0,\min _{\Gamma _{\varepsilon }}F\}\). We have
This, together with Lemma 6.5, shows that the family \(u^{\varepsilon }\) satisfies the hypotheses of Lemma 6.4. \(\square \)
Theorem 6.7
The function u obtained as a limit in Lemma 6.6 is a viscosity solution to (6.5).
Proof
First, we observe that \(u=F\) on \(\partial \Omega \), since \(u^{\varepsilon }=F\) on \(\partial \Omega \) for all \({\varepsilon }>0\). Hence, we can focus our attention on showing that u satisfies the equation inside \(\Omega \) in the viscosity sense.
To this end, we obtain the following asymptotic expansions, as in [17]. Choose a point \(x\in \Omega \) and a \(C^2\)-function \(\psi \) defined in a neighbourhood of x. Since \(\psi \) is continuous, we have
for all \(x\in \Omega \). Let \(x_1^{\varepsilon }\) and \(x_2^{\varepsilon }\) be a minimum point and a maximum point, respectively, for \(\psi \) in \(\overline{B}_{\varepsilon }(x)\). It follows from the Taylor expansions in [17] that
and
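The expansion from [17] that drives these computations can be checked numerically on a smooth function. In the sketch below we use the standard form \(\tfrac{1}{2}\big (\sup _{B_{\varepsilon }}\psi + \inf _{B_{\varepsilon }}\psi \big ) = \psi (x) + \tfrac{{\varepsilon }^2}{2}\,\Delta _\infty \psi (x) + o({\varepsilon }^2)\) with the normalized infinity Laplacian; the test function and evaluation point are illustrative choices.

```python
import numpy as np

# Numerical check of the asymptotic mean value expansion (as in [17]):
#   (sup + inf)/2 over B_eps(x) = psi(x) + (eps^2/2) * Dinf_psi(x) + o(eps^2),
# where Dinf_psi = <D^2 psi g, g> with g = grad psi / |grad psi| (normalized
# infinity Laplacian). The quadratic psi below is an illustrative choice.
def psi(p):
    return p[0]**2 + 3.0 * p[1]**2

x = np.array([1.0, 1.0])
grad = np.array([2.0 * x[0], 6.0 * x[1]])            # exact gradient of psi at x
g = grad / np.linalg.norm(grad)
H = np.diag([2.0, 6.0])                              # exact (constant) Hessian
dinf = g @ H @ g                                     # normalized infinity Laplacian

eps = 1e-3
theta = np.linspace(0.0, 2.0 * np.pi, 40001)
ring = x[:, None] + eps * np.vstack((np.cos(theta), np.sin(theta)))
vals = ring[0]**2 + 3.0 * ring[1]**2                 # psi on the sphere of radius eps
approx = (vals.max() + vals.min()) / 2.0 - psi(x)    # (sup+inf)/2 - psi(x)
print(approx / (eps**2 / 2.0), dinf)                 # the two quantities agree to O(eps)
```

Since \(\nabla \psi (x)\ne 0\), the extrema over \(\overline{B}_{\varepsilon }(x)\) are attained on the sphere, which is why sampling the boundary circle suffices.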
Suppose that \(u-\psi \) has a strict local minimum at x. We want to prove that
If \(\nabla \psi (x)=0\), we have \(-\Delta _\infty \psi (x)=0\) and hence the inequality holds, so we can assume \(\nabla \psi (x)\ne 0\). By the uniform convergence, there exists a sequence \(x_{{\varepsilon }}\) converging to x such that \(u^{{\varepsilon }} - \psi \) has an approximate minimum at \(x_{{\varepsilon }}\), that is, for \(\eta _{\varepsilon }>0\), there exists \(x_{{\varepsilon }}\) such that
Moreover, considering \({\tilde{\psi }}= \psi - \left( \psi (x_{{\varepsilon }}) - u^{{\varepsilon }} (x_{{\varepsilon }})\right) \), we can assume that \(\psi (x_{{\varepsilon }}) = u^{{\varepsilon }} (x_{{\varepsilon }})\).
If \(u(x)<0\), we have to show that
Since u is continuous and \(u^{\varepsilon }\) converges uniformly, we can assume that \(u^{\varepsilon }(x_{\varepsilon })<0\). Thus, by recalling the fact that \(u^{\varepsilon }\) satisfies the DPP (Theorem 6.3), and observing that
we conclude that
We obtain
and thus, by (6.11), and choosing \(\eta _{\varepsilon }= o({\varepsilon }^2)\), we have
Next, we observe that
provided \(\nabla \psi (x)\ne 0\). Furthermore, such a limit is bounded below and above by the quantities \(\lambda _{\min } (D^2\psi (x))\) and \(\lambda _{\max } (D^2\psi (x))\). Therefore, by dividing by \({\varepsilon }^2\) and letting \({\varepsilon }\rightarrow 0\), we get the desired inequality.
If \(u(x)\ge 0\), we have to show that
As above, by (6.11) and (6.12), we obtain
and hence, we conclude,
as desired.
We have shown that u is a supersolution to our equation. Similarly, we obtain the subsolution counterpart. Let us remark, as part of those computations, that when \(u^{\varepsilon }(x)>0\) the DPP implies
and hence
Then, in this case we have
\(\square \)
We proved (see Theorem 1.3) that viscosity solutions to (6.5) are unique by using pure PDE methods. Therefore, we conclude that the convergence of \(u^{\varepsilon }\) as \({\varepsilon }\rightarrow 0\) holds for the whole family, not only along subsequences. This ends the proof of Theorem 1.5.
7 Further properties for limit solutions
Now, we present some relevant geometric and measure theoretic properties for limit solutions and their free boundaries.
Theorem 7.1
(Uniform positive density) Let \(u_{\infty }\) be a limit solution to (1.2) in \(B_1\) and \(x_0 \in \partial \{u_{\infty } > 0\} \cap B_{\frac{1}{2}}\) be a free boundary point. Then, for any \(0<\rho < \frac{1}{2}\),
for a universal constant \(\theta >0\).
Proof
By Theorem 1.4, there exists a point \({\hat{y}} \in \partial B_r(x_0) \cap \{u_{\infty }>0\}\) such that
Moreover, we claim that there exists \(\kappa >0\) small enough such that
The constant \(\kappa \) is given by
In fact, if this does not hold, there exists a free boundary point \({\hat{z}} \in B_{\kappa r}({\hat{y}})\). Then, from (7.1), we obtain
which is a contradiction. Therefore,
and hence
which proves the result. \(\square \)
Definition 7.2
(\(\zeta \)-Porous set) A set \(S \subset {\mathbb {R}}^N\) is said to be porous with porosity constant \(0<\zeta \le 1\) if there exists \(R > 0\) such that for each \(x \in S\) and each \(0< r < R\) there exists a point y such that \(B_{\zeta r}(y) \subset B_r(x) \setminus S\).
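A classical example of a porous set is the middle-thirds Cantor set, and the definition can be tested numerically on it. The sketch below searches, for sampled Cantor points \(x\) and radii \(r\), for a hole \(B_{\zeta r}(y)\subset B_r(x)\setminus S\); the constant \(\zeta = 0.05\) and all sampling parameters are illustrative choices of ours, not the optimal porosity constant.

```python
import numpy as np

# Illustration of the porosity definition for the middle-thirds Cantor set C.
# We work with the level-m approximation C_m (a union of 2^m intervals with
# C contained in C_m), so a ball disjoint from C_m is disjoint from C as well.
m = 8
intervals = np.array([[0.0, 1.0]])
for _ in range(m):                       # remove middle thirds m times
    a, b = intervals[:, 0], intervals[:, 1]
    third = (b - a) / 3.0
    intervals = np.vstack([np.column_stack([a, a + third]),
                           np.column_stack([b - third, b])])
a, b = intervals[:, 0], intervals[:, 1]

def dist_to_Cm(y):
    """Distance from each point of the array y to the closed set C_m."""
    d = np.maximum.reduce([a - y[:, None], y[:, None] - b,
                           np.zeros((y.size, a.size))])
    return d.min(axis=1)

zeta = 0.05                              # candidate porosity constant (illustrative)
found_all = True
xs = np.sort(a)[::9][:30]                # interval endpoints: genuine Cantor points
for x in xs:
    for r in [3.0**(-j) for j in range(1, 6)]:
        # candidate centers y with B_{zeta r}(y) inside B_r(x)
        y = np.linspace(x - (1 - zeta) * r, x + (1 - zeta) * r, 1200)
        ok = np.any(dist_to_Cm(y) >= zeta * r)   # a hole avoiding C_m (hence C)
        found_all = found_all and bool(ok)
print(found_all)
```

The search succeeds at every sampled scale because each level-\(k\) interval containing \(x\) carries a removed middle third of comparable length inside \(B_r(x)\).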
Theorem 7.3
(Porosity of limiting free boundary) Let \(u_{\infty }\) be a limit solution to (1.2) in \(\Omega \). There exists a constant \(0<\xi = \xi (N, \text{ Lip }[g]) \le 1\) such that
Proof
Let \(R>0\) and \(x_0\in \Omega \) be such that \(\overline{B_{4R}(x_0)}\subset \Omega \). We will show that \(\partial \{u_{\infty } >0\} \cap B_R(x_0)\) is a \(\frac{\zeta }{2}\)-porous set for a universal constant \(0< \zeta \le 1\). To this end, let \(x\in \partial \{u_{\infty } >0\} \cap B_{R}(x_0)\). For each \(r\in (0, R)\) we have \(\overline{B_r(x)}\subset B_{2R}(x_0)\subset \Omega \). Now, let \(y\in \partial B_r(x)\) be such that \(u_{\infty }(y) = \sup \limits _{\partial B_r(x)} u_{\infty }\). From Theorem 1.4,
On the other hand, near the free boundary, from Lipschitz regularity we have
where \(d(y) {:}{=}\,\mathrm {dist}(y, \partial \{u_{\infty }>0\} \cap \overline{B_{2R}(x_0)})\). From (7.4) and (7.5) we get
for a positive constant \(0<\zeta {:}{=}\,\left( \frac{1}{[u_{\infty }]_{\text{ Lip }({\overline{\Omega }})}+1}\right) <1\).
Now, let \({\hat{y}}\), on the segment joining x and y, be such that \(|y-{\hat{y}}|=\frac{\zeta r}{2}\). Then there holds
indeed, for each \(z\in B_{\frac{\zeta }{2}r}({\hat{y}})\)
Then, since by (7.6) \(B_{\zeta r}(y)\subset B_{d(y)}(y)\subset \{u_{\infty }>0\}\), we get \(B_{\zeta r}(y)\cap B_r(x)\subset \{u_{\infty }>0\}\), which together with (7.7) implies that
Therefore, \(\partial \{u_{\infty }>0\} \cap B_{R}(x_0)\) is a \(\frac{\zeta }{2}\)-porous set. Finally, the \((N-\xi )\)-Hausdorff measure estimate in (7.3) follows from [14]. \(\square \)
In particular, Theorem 7.3 implies that the free boundary \(\partial \{u_{\infty }>0\}\) has Lebesgue measure zero.
In the last part of this paper we include two examples to see what kind of solutions to (1.6) one can expect.
Example 7.4
(Radial solutions) First of all, let us study the following boundary value problem:
where \(R, \lambda _0\) and \(\kappa \) are positive constants.
Observe that, by the uniqueness of solutions for the Dirichlet problem (7.8) and the rotational invariance of the \(p-\)Laplace operator, u must be a radially symmetric function. Hence, let us deal with the following one-dimensional ODE
It is straightforward to check that \(v(t)=\Theta (1, \lambda _0, p) t^{\,\frac{p}{p-1}}\) is a solution to (7.9), where
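The display above gives the explicit constant \(\Theta (1,\lambda _0,p)\). As a sanity check, for the one-dimensional dead-core ODE \(\big (|v'|^{p-2}v'\big )' = \lambda _0\) one would get the closed form \(\Theta = \frac{p-1}{p}\,\lambda _0^{1/(p-1)}\); taking this formula as an assumption (it is our reconstruction, not quoted from the paper's display), a quick numerical verification:

```python
# Sketch: verify numerically that v(t) = Theta * t^(p/(p-1)) solves
# (|v'|^{p-2} v')' = lambda0, ASSUMING Theta = ((p-1)/p) * lambda0^(1/(p-1)).
p, lam = 5.0, 2.0
Theta = (p - 1.0) / p * lam**(1.0 / (p - 1.0))

def v(t):
    return Theta * t**(p / (p - 1.0))

def flux(t, h=1e-6):
    vp = (v(t + h) - v(t - h)) / (2.0 * h)   # numerical v'(t), positive for t > 0
    return vp**(p - 1.0)                      # |v'|^{p-2} v' when v' >= 0

h = 1e-3
residuals = [abs((flux(t + h) - flux(t - h)) / (2.0 * h) - lam)
             for t in (0.5, 1.0, 2.0)]        # residual of (|v'|^{p-2} v')' = lam
print(max(residuals))

# As p -> infinity, Theta -> 1 and t^(p/(p-1)) -> t: the profile becomes linear.
Theta_big = (999.0 / 1000.0) * lam**(1.0 / 999.0)
print(abs(Theta_big - 1.0))
```

The last lines illustrate the limit behaviour used below: as \(p\rightarrow \infty \) the radial profile straightens out to a linear (distance-like) function.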
Now, in order to characterize the unique solution to (7.8), fix \(x_0 \in {\mathbb {R}}^N\) and \(0<r_0<R\). We assume the compatibility condition for the dead-core problem, namely \(R > T\). Thus, for \(r_0 = R-T\), the radially symmetric function given by
fulfils (7.8) in the weak sense, where \(r_0 {:}{=}\,R - \left( \frac{\kappa }{\Theta } \right) ^{\frac{p-1}{p}}\). Moreover, the dead core is given by \(B_{r_0}(x_0)\).
Also it is easy to see that the limit radial profile as \(p \rightarrow \infty \) becomes
which satisfies (1.6) in the viscosity sense with \(\Omega = B_{R}(x_0)\), dead core \(B_{r_0}(x_0)\) with \(r_0 = R - 1\), and \(g\equiv \kappa \) on \(\partial B_{R}(x_0)\).
Example 7.5
Finally, by considering the one-dimensional problem
it is straightforward to verify that \(u(x) = \left\{ \begin{array}{lll} -x &{} \text{ if } &{} x\in (-1, 0] \\ -\frac{1}{4}x &{} \text{ if } &{} x\in [0, 4) \end{array} \right. \) is the unique viscosity solution to our gradient constraint problem.
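The elementary checks behind this verification (the boundary values, continuity at the junction \(x=0\), and the fact that both slopes are compatible with the gradient constraint \(|u'|\le 1\)) can be scripted; the implementation details below are ours:

```python
# Quick check of the piecewise formula in Example 7.5:
#   u(x) = -x on (-1, 0],  u(x) = -x/4 on [0, 4).
def u(x):
    return -x if x <= 0 else -0.25 * x

left_limit = u(-1e-12)            # value just left of the junction x = 0
right_limit = u(1e-12)            # value just right of the junction x = 0
print(u(-1.0), u(4.0))            # boundary values: 1.0 and -1.0
slopes = (-1.0, -0.25)            # u' on (-1, 0) and on (0, 4)
print(all(abs(s) <= 1.0 for s in slopes))
```

Note that the two pieces meet continuously at 0 with a convex kink (the slope jumps from \(-1\) to \(-\tfrac{1}{4}\)).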
References
Alt, H.W., Phillips, D.: A free boundary problem for semilinear elliptic equations. J. Reine Angew. Math. 368, 63–107 (1986)
Aronsson, G., Crandall, M.G., Juutinen, P.: A tour of the theory of absolutely minimizing functions. Bull. Am. Math. Soc. (N.S.) 41(4), 439–505 (2004)
Blanc, P., Pinasco, J.P., Rossi, J.D.: Maximal operators for the \(p-\)Laplacian family. Pac. J. Math. 287(2), 257–295 (2017)
Crandall, M.G., Evans, L.C., Gariepy, R.F.: Optimal Lipschitz extensions and the infinity Laplacian. Calc. Var. Part. Differ. Equ. 13(2), 123–139 (2001)
Crandall, M.G., Gunnarsson, G., Wang, P.Y.: Uniqueness of \(\infty -\)harmonic functions and the eikonal equation. Comm. Part. Differ. Equ. 32(10–12), 1587–1615 (2007)
Crandall, M.G., Ishii, H., Lions, P.L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. (N.S.) 27(1), 1–67 (1992)
da Silva, J.V., Rossi, J.D., Salort, A.: Maximal solutions for the \(\infty -\)eigenvalue problem. Adv. Calc. Var. https://doi.org/10.1515/acv-2017-0024 (to appear)
Díaz, J.I.: Nonlinear Partial Differential Equations and Free Boundaries. Vol. I: Elliptic Equations. Research Notes in Mathematics, vol. 106. Pitman (Advanced Publishing Program), Boston (1985)
Evans, L.C., Smart, C.K.: Everywhere differentiability of infinity harmonic functions. Calc. Var. Part. Differ. Equ. 42(1–2), 289–299 (2011)
Jensen, R.: Uniqueness of Lipschitz extensions: minimizing the sup norm of the gradient. Arch. Ration. Mech. Anal. 123(1), 51–74 (1993)
Juutinen, P., Lindqvist, P., Manfredi, J.J.: The \(\infty \)-eigenvalue problem. Arch. Ration. Mech. Anal. 148(2), 89–105 (1999)
Juutinen, P., Parviainen, M., Rossi, J.D.: Discontinuous gradient constraints and the infinity Laplacian. Int. Math. Res. Not. IMRN 8, 2451–2492 (2016)
Karp, L., Kilpeläinen, T., Petrosyan, A., Shahgholian, H.: On the porosity of free boundaries in degenerate variational inequalities. J. Differ. Equ. 164(1), 110–117 (2000)
Koskela, P., Rohde, S.: Hausdorff dimension and mean porosity. Math. Ann. 309(4), 593–609 (1997)
Lee, K.-A., Shahgholian, H.: Hausdorff measure and stability for the \(p\)-obstacle problem \((2 < p < \infty )\). J. Differ. Equ. 195(1), 14–24 (2003)
Lindqvist, P., Lukkari, T.: A curious equation involving the \(\infty \)-Laplacian. Adv. Calc. Var. 3(4), 409–421 (2010)
Manfredi, J.J., Parviainen, M., Rossi, J.D.: An asymptotic mean value characterization for \(p-\)harmonic functions. Proc. Am. Math. Soc. 138(3), 881–889 (2010)
Manfredi, J.J., Parviainen, M., Rossi, J.D.: On the definition and properties of \(p\)-harmonious functions. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 11(2), 215–241 (2012)
Manfredi, J.J.: Regularity for minima of functionals with \(p\)-growth. J. Differ. Equ. 76(2), 203–212 (1988)
Peres, Y., Schramm, O., Sheffield, S., Wilson, D.: Tug-of-war and the infinity Laplacian. J. Am. Math. Soc. 22(1), 167–210 (2009)
Rossi, J.D., Teixeira, E.V.: A limiting free boundary problem ruled by Aronsson’s equation. Trans. Am. Math. Soc. 364(2), 703–719 (2012)
Rossi, J.D., Teixeira, E.V., Urbano, J.M.: Optimal regularity at the free boundary for the infinity obstacle problem. Interfaces Free Bound 17(3), 381–398 (2015)
Rossi, J.D., Wang, P.: The limit as \(p \rightarrow \infty \) in a two-phase free boundary problem for the \(p\)-Laplacian. Interfaces Free Bound. 18(1), 115–135 (2016)
Acknowledgements
This work was partially supported by Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET-Argentina).
Blanc, P., da Silva, J.V. & Rossi, J.D. A limiting free boundary problem with gradient constraint and Tug-of-War games. Annali di Matematica 198, 1441–1469 (2019). https://doi.org/10.1007/s10231-019-00825-0
Keywords
- Lipschitz regularity estimates
- Free boundary problems
- \(\infty \)-Laplace operator
- Existence/uniqueness of solutions
- Tug-of-War games