Abstract
In this paper we study the system
$$\begin{aligned} \min \biggl \{-\mathcal H u_{i}-\psi _i,\,u_{i}-\max _{j\ne i}(-c_{i,j}+u_{j})\biggr \}&=0 \text{ in } \mathbb R ^N\times (0,T),\nonumber \\ u_{i}(x,T)&=g_{i}(x) \text{ for } x\in \mathbb R ^N,\quad i\in \{1,\ldots ,d\}, \end{aligned}$$
where \((x,t)\in \mathbb R ^{N}\times [0,T]\). A special case of this type of system of variational inequalities with terminal data occurs in the context of optimal switching problems. We establish a general comparison principle for viscosity sub- and supersolutions to the system under mild regularity, growth, and structural assumptions on the data, i.e., on the operator \(\mathcal H \) and on the continuous functions \(\psi _i\), \(c_{i,j}\), and \(g_i\). A key aspect is that we make no sign assumption on the switching costs \(\{c_{i,j}\}\) and that \(c_{i,j}\) is allowed to depend on \(x\) as well as \(t\). Using the comparison principle, we establish the existence of a unique viscosity solution \((u_1,\ldots ,u_d)\) to the system, constructed as the limit of an increasing sequence of solutions to associated obstacle problems. Having settled existence and uniqueness, we subsequently focus on regularity of \((u_1,\ldots ,u_d)\) beyond continuity. In this context, in particular, we assume that \(\mathcal H \) belongs to a class of second-order differential operators of Kolmogorov type of the form
$$\begin{aligned} \mathcal H =\sum _{i,j=1}^{m} a_{i,j}(x,t)\partial _{x_ix_j}+\sum _{i=1}^{m} a_{i}(x,t)\partial _{x_i}+\sum _{i,j=1}^{N} b_{i,j}x_j\partial _{x_i}+\partial _t, \end{aligned}$$
where \(1\le m\le N\). The matrix \(\{a_{i,j}(x,t)\}_{i,j=1,\ldots ,m}\) is assumed to be symmetric and uniformly positive definite in \(\mathbb R ^m\). In particular, uniform ellipticity is only assumed in the first \(m\) coordinate directions, and hence, \(\mathcal H \) may be degenerate.
1 Introduction
In this paper we consider the problem
$$\begin{aligned} \min \biggl \{-\mathcal H u_{i}-\psi _i,\,u_{i}-\max _{j\ne i}(-c_{i,j}+u_{j})\biggr \}&=0 \text{ in } \mathbb R ^N\times (0,T),\nonumber \\ u_{i}(x,T)&=g_{i}(x) \text{ for } x\in \mathbb R ^N,\quad i\in \{1,\ldots ,d\}, \end{aligned}$$(1.1)
set in \(\mathbb R ^N\times [0,T]\), \(T>0\), where \(\psi _i\), \(c_{i,j}\), and \(g_i\) are continuous functions and \(\mathcal H \) is a partial differential operator. Concerning \(\mathcal H \), we assume that
$$\begin{aligned} \mathcal H =\sum _{i,j=1}^{m} a_{i,j}(x,t)\partial _{x_ix_j}+\sum _{i=1}^{m} a_{i}(x,t)\partial _{x_i}+\sum _{i,j=1}^{N} b_{i,j}x_j\partial _{x_i}+\partial _t, \end{aligned}$$(1.2)
where \(m\le N\). Here \(a_{i,j}\) and \(a_i\) are continuous functions and \(B:=\{b_{i,j}\}_{i,j=1}^N\) is a matrix of real constant entries. This type of operators occurs, for instance, in the context of financial markets where the dynamics of the state variables is described by an \(N\)-dimensional diffusion process \(X=\left(X_s^{x,t}\right)\) which is a solution to the system of stochastic differential equations
where \((x,t)\in \mathbb{R }^{N}\times [0,T]\) and \(W=\{W_t\}\) denotes an \(m\)-dimensional Brownian motion, \(m\le N\). \(\mathcal H \) is, in the context of (1.3), the infinitesimal generator associated with \(X=\left(X_s^{x,t}\right)\). The process \(X=\left(X_s^{x,t}\right)\) can, for instance, be the electricity price, functionals of the electricity price, or other factors which determine the price. In the Markovian setting when the randomness stems from the Ito diffusion \(X=\left(X_s^{x,t}\right)\) in (1.3), the problem in (1.1) is a system of variational inequalities with inter-connected obstacles related to multi-modes optimal switching problems. Our main results concern existence, uniqueness, and regularity for solutions to the system in (1.1) under mild regularity, growth, and structural assumptions on the data, i.e., on the operator \(\mathcal H \) and on continuous functions \(\psi _i\), \(c_{i,j}\), and \(g_i\).
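For intuition, a diffusion of the type in (1.3) can be simulated directly. The following is a minimal Euler–Maruyama sketch in which, as in the text, only the first \(m\) of the \(N\) coordinates are driven by Brownian noise; the linear drift \(Bx\), the constant \(\sigma \), and the example \(N=2\), \(m=1\) (matching the classical Kolmogorov operator) are illustrative assumptions, not data taken from the paper.

```python
import numpy as np

def euler_maruyama(x0, B, sigma, T, n_steps, rng):
    """Simulate one path of dX = B X dt + sigma dW, where sigma is N x m,
    so only the first m coordinate directions receive Brownian noise."""
    N, m = sigma.shape
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=m)  # m-dimensional increment
        x = x + B @ x * dt + sigma @ dW
        path.append(x.copy())
    return np.array(path)

# Classical Kolmogorov example (N = 2, m = 1): dX1 = dW, dX2 = X1 dt.
rng = np.random.default_rng(0)
B = np.array([[0.0, 0.0], [1.0, 0.0]])   # second coordinate integrates the first
sigma = np.array([[1.0], [0.0]])          # noise only in the first coordinate
path = euler_maruyama([0.0, 0.0], B, sigma, T=1.0, n_steps=1000, rng=rng)
```

In this example the second coordinate is the time integral of the first, so the diffusion is degenerate exactly as described after (1.2).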
In multi-modes optimal switching problems, the system in (1.1) occurs with \(g=(g_1,\ldots ,g_d)\equiv (0,\ldots ,0)\). To outline the setting for this class of problems, we consider a production facility that can run the production in \(d\), \(d\ge 2\), production modes. Let \(X=\left(X_s^{x,t}\right)\) denote the vector of stochastic processes in (1.3) which, as discussed above, represents the market price of the underlying commodities and other finance assets that influence the production. Let the payoff rate in production mode \(i\), at time \(t\), be \(\psi _i(X_t,t)\), and let \(c_{i,j}(X_t,t)\) denote the switching cost for switching from mode \(i\) to mode \(j\) at time \(t\). A management strategy is a combination of a non-decreasing sequence of stopping times \(\{\tau _k\}_{k\ge 0}\), where, at time \(\tau _k\), the manager decides to switch production from its current mode to another one, and a sequence of indicators \(\{\xi _k\}_{k\ge 0}\), taking values in \(\{1,\ldots ,d\}\), indicating the mode to which the production is switched. At \(\tau _k\) the production is switched from mode \(\xi _{k-1}\) (current mode) to \(\xi _k\). When the production is run under a strategy \((\delta ,\mu )=(\{\tau _k\}_{k\ge 0},\{\xi _k\}_{k\ge 0})\), over a finite horizon \([0,T]\), the total expected profit up to time \(T\) is
The optimal switching problem now consists in finding an optimal management strategy \((\delta ^*,\mu ^*)=(\{\tau _k^*\}_{k\ge 0},\{\xi _k^*\}_{k\ge 0})\) such that
Let \((Y_t^1,\ldots ,Y_t^d)\) be the value function associated with the optimal switching problem, on the time interval \([t,T]\), where \(Y_t^i\) stands for the optimal expected profit if, at time \(t\), the production is in mode \(i\). Under sufficient assumptions, it can then be proved that \((Y_t^1,\ldots ,Y_t^d)=(u_1(X_t,t),\ldots ,u_d(X_t,t))\), where the vector of deterministic functions \((u_1(x,t),\ldots ,u_d(x,t))\) satisfies (1.1) with \(g=(g_1,\ldots ,g_d)\equiv (0,\ldots ,0)\). Another perspective is that, under sufficient assumptions, one can prove that the optimal value vector \((Y_t^1,\ldots ,Y_t^d)\) solves the reflected backward stochastic differential equation
where \(i\in \{1,\ldots ,d\}\), \(0\le t\le T\). Concerning references for the above connections, we first note that the case \(d=2\), often referred to as the two-modes switching or starting-and-stopping problem, has attracted a lot of interest during the last decades, and we refer to [12] for an extensive list of references to the literature in this case. The more general multi-modes optimal switching problem is by now also a quite well-established field of research in economics and mathematics; see, e.g., [5, 6, 8, 10, 11, 20] and the references therein. For more on reflected backward stochastic differential equations in the context of optimal switching problems, we refer to [5–7, 11, 12]. In general, the research on optimal switching problems follows two approaches: the probabilistic or stochastic approach, and the deterministic approach, which focuses more on variational inequalities and partial differential equations. For the first approach we refer to the references above concerning reflected backward stochastic differential equations. We note that the stochastic approach heavily uses probabilistic tools such as the Snell envelope of processes and backward stochastic differential equations. For the second approach we refer to [1, 2, 8]. In general, the two approaches are used in combination due to the connection between reflected backward stochastic differential equations, see (1.6), and systems of variational inequalities, see (1.1). A good reference for these connections in the context of optimal stopping problems is [15].
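To make the total expected profit described above concrete, the following hedged sketch evaluates, by Monte Carlo, the profit of one fixed (non-optimal) strategy: the payoff accrues at rate \(\psi _i(X_t,t)\) in the current mode and each switch costs \(c_{i,j}\). The payoff rates, the constant switching cost, the driftless random-walk price, and the threshold rule are all hypothetical choices made for illustration.

```python
import numpy as np

def expected_profit(n_paths, n_steps, T, rng):
    """Monte Carlo estimate of E[ sum psi_mode(X, t) dt - switching costs ]
    for a fixed threshold strategy on a 1-d driftless diffusion (toy model)."""
    dt = T / n_steps
    psi = [lambda x: x, lambda x: -x]  # hypothetical payoff rates, modes 0 and 1
    c = 0.1                            # hypothetical constant switching cost
    total = 0.0
    for _ in range(n_paths):
        x, mode, profit = 0.0, 0, 0.0
        for _ in range(n_steps):
            profit += psi[mode](x) * dt
            # threshold rule: run mode 0 when x > 0, mode 1 otherwise
            want = 0 if x > 0 else 1
            if want != mode:
                profit -= c            # pay the switching cost
                mode = want
            x += rng.normal(0.0, np.sqrt(dt))
        total += profit
    return total / n_paths
```

The optimal switching problem then amounts to maximizing this quantity over all admissible strategies, which is what the system (1.1) encodes.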
The results in the literature concerning the optimal switching problem, i.e., system (1.1) with \(g=(g_1,\ldots ,g_d)\equiv (0,\ldots ,0)\), are derived under different assumptions on the underlying stochastic process, and hence on the type of operator \(\mathcal H \) considered, and under different assumptions on the switching costs \(\{c_{i,j}\}\). Concerning the first issue, the operator \(\mathcal H \) can be of local or non-local nature depending on whether or not the underlying process is allowed to jump. While most of the papers listed above concern linear second-order local operators, we mention that [2] deals with Lévy-like processes and non-local operators. Concerning the assumptions made on the switching costs \(\{c_{i,j}\}\), in essentially all of the references listed above these are assumed to be nonnegative, i.e., \(c_{i,j}(x,t)\ge \alpha \) for all \((x,t)\in \mathbb R ^N\times [0,T]\), \(i,j\in \{1,\ldots ,d\},\, i\ne j\), and for some \(\alpha \ge 0\). Often, even \(\alpha >0\) is assumed; see [8] and [5] for instance. Furthermore, additional restrictions are often imposed on \(\{c_{i,j}\}\), such as \(c_{i,j}\) being independent of \(x\), see [6] for instance, or \(c_{i,j}\) even being constant, see [2, 5].
The only paper we are aware of where the switching costs are assumed to change sign is [7]. However, in [7] there are two additional conditions concerning the nonnegativity of the switching costs: that \(c_{i,j}(x, T ) = 0\) and that the number of negative switching costs is limited in a certain sense, see condition \((v)\) in [7].
In this paper we establish existence, uniqueness, and regularity for the system in (1.1) under various assumptions on the operator \(\mathcal H \) and on the continuous functions \(\psi _i\), \(c_{i,j}\), and \(g_i\). The structural assumptions on the functions \(\{c_{i,j}\}\) that we impose in this paper to establish our general comparison principle for the system in (1.1), see Theorem 2.1, are the following.
In our final existence theorem, see Theorem 2.4, we assume additional regularity of \(c_{i,j}(x,t)\) beyond continuity, see (2.16), and that
for any sequence of indices \(i_1\), \(i_2\), \(i_3\), \(i_j\in \{1,\ldots ,d\}\) for each \(j\in \{1,2,3\}\), and for all \((x,t)\in \mathbb R ^N\times [0,T]\). Note that (1.8) is an additional structural restriction compared to (1.7).
We emphasize though that in general we make no sign assumption on the switching costs \(\{c_{i,j}\}\) and that \(c_{i,j}\) is allowed to depend on \(x\) as well as \(t\). Naturally, one may ask whether the allowance for possibly negative switching costs is of importance beyond the mathematical issues. In fact, it is very natural to allow for negative switching costs, as these make it possible to model the situation when, for example, a government, through environmental policies, provides subsidies, grants, or other financial support to energy production facilities in case they switch to more ‘green energy’ production or to cleaner methods of production. In this case, switching is not a cost for the facility but a gain. However, the final decision to switch or not is naturally also influenced by the running payoffs \(\{\psi _i\}\).
2 Main results
Focusing on existence and uniqueness of solutions to the system in (1.1), we impose additional structural assumptions on the matrix \(\{a_{i,j}\}_{i,j=1}^m\), assumptions which seem difficult to avoid when constructing solutions along the lines outlined below and using the formalism of viscosity solutions. In particular, we assume that
where \(\sigma =\sigma (x,t)\) is an \(N\times m\) matrix and \(\sigma ^*\) is the transpose of \(\sigma \). Concerning regularity and growth conditions on \(\{a_{i,j}\}_{i,j=1}^m\) and \(\{a_{i}\}_{i=1}^m\), we assume that
for some constant \(A\), \(1\le A<\infty \), for all \( i,j\in \{1,\ldots ,m\}\), and whenever \((x,t),(y,s)\in \mathbb R ^N\times [0,T]\). Note that \((i)\) implies \((ii)\) in (2.2). Here, \(|x|\) is the standard Euclidean norm of \(x\in \mathbb R ^N\). Concerning regularity and growth conditions on \(\psi _i\), \(c_{i,j}\), and \(g_i\), we assume that
Finally, concerning the interplay between the terminal data \(\{g_i\}\) and the switching costs \(\{c_{i,j}\}\), at \(t=T\), we assume that
for all \(i\in \{1,\ldots ,d\}\), and whenever \(x\in \mathbb R ^N\). Note that in the special case of the optimal switching problem discussed above, i.e., \(g=(g_1,\ldots ,g_d)\equiv (0,\ldots ,0)\), (2.4) implies that \(c_{i,j}(x,T)\ge 0\) for all \(i,j\in \{1,\ldots ,d\}\) and whenever \(x\in \mathbb R ^N\). Finally, we impose the structural assumptions on the functions \(\{c_{i,j}\}\) stated in (1.8). We are now ready to formulate our first results. For the definition of \(\text{ LSC}_p(\mathbb R ^N\times [0,T])\), \(\text{ USC}_p(\mathbb R ^N\times [0,T])\), and \(\text{ C}_p(\mathbb R ^N\times [0,T])\), as well as for the definition of viscosity sub- and supersolutions, we refer to the bulk of the paper. We first prove the following general comparison theorem.
Theorem 2.1
Let \(\mathcal H \) be as in (1.2) and assume (1.8), (2.1), (2.2), (2.3), and (2.4). Assume that \(( u_1^+,\ldots ,u_d^+)\) is a viscosity supersolution and that \((u_1^-,\ldots ,u_d^-)\) is a viscosity subsolution to the problem in (1.1). Then \(u_i^-\le u_i^+\) in \(\mathbb R ^N\times (0,T]\) for all \(i\in \{1,\ldots ,d\}\).
Our proof of Theorem 2.1 relies on by now classical arguments in the theory of viscosity solutions developed in [3, 14]. Along the lines of these works, a key idea is to double the variables, \((x,y)\in \mathbb R ^N\times \mathbb R ^N\), and to construct an appropriate function \(\varphi _\epsilon (x,y,t)\), for \(\epsilon \) small and tending to \(0\) in the final argument, such that the general theory of Crandall–Ishii–Lions applies to a function
where \(\tilde{u}_k^-\) and \(\tilde{u}_k^+\) in our case are slight modifications of our original sub- and supersolutions. It is at this crucial step that (1.7) is used. The assumptions in (2.1) and (2.2) are then used to complete the argument along the lines of the classical theory. The spatial dependence of the switching costs is dealt with as in [7].
Theorem 2.1 allows us to conclude uniqueness of viscosity solutions \((u_1,\ldots ,u_d)\), \(u_i\in \text{ C}_p(\mathbb R ^N\times [0,T])\), to the problem in (1.1). We next focus on the existence of solutions to (1.1), and we establish the existence of solutions as a limit of solutions to iteratively defined auxiliary problems. Before stating our existence theorems, we make the following definition.
Definition 1
A barrier for the system in (1.1), component \(i \in \{1,\ldots , d \}\) and the point \(y\in \mathbb{R }^N\), \(\{u^{+,i,y}\}\), is a family of continuous supersolutions, \(\{u^{+,i,y,{\epsilon }}\}_{{\epsilon }>0}\), to system (1.1) such that \(\lim _{{\epsilon }\rightarrow 0} u_i^{+,i,y,{\epsilon }}(y,T) = g_i(y)\).
To stress generality, we first prove the following theorem.
Theorem 2.2
Let \(\mathcal H \) be as in (1.2) and assume (1.7), (2.1), (2.2), (2.3), (2.4), and the following statements hold:
-
(1)
There exists, for each \(i\in \{1,\ldots ,d\}\) and \(y\in \mathbb{R }^N\), a barrier \(u^+=\{u^{+,i,y}\}\) to the system in (1.1) in the sense of Definition 1.
-
(2)
There exists, for each \(i\in \{1,\ldots ,d\}\), a viscosity subsolution \(u_i^-\) to the problem
$$\begin{aligned} -\mathcal H v-\psi _i =0 \text{ in } \mathbb R ^N\times [0,T),\quad v(x,T) = g_{i}(x) \text{ for } x\in \mathbb R ^N. \end{aligned}$$(2.5)
-
(3)
Given \((u_1^-,\ldots ,u_d^-)\), let \((u_{1}^0,\ldots ,u_{d}^0):=(u_1^-,\ldots ,u_d^-)\). The following iteratively defined sequence \(\{(u_{1}^k,\ldots ,u_{d}^k)\}_{k=1}^\infty \) is well defined. Assuming that \((u_{1}^k,\ldots ,u_{d}^k)\), \(u_i^k\in \text{ C}_p(\mathbb R ^N\times [0,T])\), \(i\in \{1,\ldots ,d\}\), has been constructed for some \(k\ge 0\). Let \(u_{i}^{k+1}\in \text{ C}_p(\mathbb R ^N\times [0,T])\), for each \( i \in \{1,\ldots ,d\}\), be the viscosity solution to the problem
$$\begin{aligned}&\min \biggl \{-\mathcal H u_{i}^{k+1}-\psi _i,u_{i}^{k+1}-\max _{j\ne i}(-c_{i,j}+u_{j}^k)\biggr \}=0 \text{ in } \mathbb R ^N\times (0,T),\nonumber \\&u_{i}^{k+1}(x,T)=g_{i}(x) \text{ for } x\in \mathbb R ^N. \end{aligned}$$(2.6)
Then there exists a viscosity solution \((u_1,\ldots ,u_d)\) to the problem in (1.1), \( u_i^-\le u_i\le u_i^+\) on \(\mathbb R ^N\times [0,T]\), for \(i\in \{1,\ldots ,d\}\), and this solution is unique in the class of continuous solutions \((u_1,\ldots ,u_d)\) which satisfy the polynomial growth condition formulated in (2.3).
Note that we have chosen this formulation of our first existence theorem to stress generality; through Theorems 2.1 and 2.2, the existence of a unique viscosity solution \((u_1,\ldots ,u_d)\) to the problem in (1.1) is reduced to proving the following:
-
(1)
existence of a barrier \(\{u^{+,i,y}\}\) for (1.1) for each \(i\in \{1,\ldots , d\}\) and \(y\in \mathbb{R }^N\),
-
(2)
existence of viscosity subsolutions \((u_1^-,\ldots , u_d^-)\) to the problems in (2.5),
-
(3)
existence of viscosity solutions \((u_{1}^k,\ldots ,u_{d}^k)\) to the iteratively defined sequence of obstacle problems in (2.6).
To expand on this, we first note that in Lemma 5.1 below we establish a general comparison principle for the obstacle problem under weak assumptions on the data. In fact, Lemma 5.1 is essentially Theorem 2.1 but for obstacle problems. Using Lemma 5.1 and the assumptions on existence in (1)–(3) above, we can conclude that
where \(u_i^+\) is a supersolution for system (1.1). Hence, \(\{ u_{i}^{k}\}_{k=0}^\infty \), for \(i\in \{1,\ldots ,d\}\), is bounded from below and above by \(u_{i}^{-}\) and \(u_{i}^{+}\), respectively, and the monotonicity in (2.7) allows us to conclude that
pointwise as \(k\rightarrow \infty \) for some function \( (u_{1}(x,t),\ldots ,u_{d}(x,t))\). To prove Theorem 2.2, we then prove that \((u_{1}(x,t),\ldots ,u_{d}(x,t))\) is in fact a solution to the problem in (1.1).
In the approach to existence above, the assumptions in Theorem 2.2 concerning existence of \((u_1^-,\ldots ,u_d^-)\) and barriers \(\{u^{+,i,y}\}\) yield the lower and upper bounds in (2.7), bounds with at most polynomial growth (in \(x\)) at infinity. However, assumption \((3)\) in Theorem 2.2, concerning existence for the iteratively constructed obstacle problems, requires further discussion. All of the obstacle problems in (2.6) have the form
where, at each step, \(\psi ,\, \theta \), and \(g\) belong to the class \(\text{ C}_p(\mathbb R ^N\times [0,T])\). Hence, assumption \((3)\) in Theorem 2.2 boils down to the existence of solutions to the problem in (2.8) under the assumption of existence of a supersolution \(u^+\) and a subsolution \(u^-\) such that \(u^-\le u\le u^+\) in \(\mathbb R ^N\times [0,T]\). Based on these preliminaries, there are at least three approaches one can attempt to use to construct viscosity solutions to the obstacle problems in (2.6).
The first is a Perron-type approach, and in this case one considers
Let \(\tilde{u}^*\) denote the upper semi-continuous envelope of \(\tilde{u}(x,t)\), and let \(\tilde{u}_*(x,t)=-(-\tilde{u}(x,t))^*\). Then by definition of \(\tilde{u}\) and the comparison principle in Lemma 5.1, it follows that \( u^-\le \tilde{u}^*\), \(\tilde{u}_{*} \le u^+\), and \(\tilde{u}_*\le \tilde{u}^*\). The idea in this approach is now to prove that \(\tilde{u}^*\) and \(\tilde{u}_*\) are, respectively, sub- and supersolutions to the problem in (2.8). Then, using Lemma 5.1, it follows that \(\tilde{u}^*\le \tilde{u}_*\); hence, \(\tilde{u}_*=\tilde{u}^*\) is a viscosity solution to the equation. To prove that \(\tilde{u}_*=\tilde{u}^*\) is in fact a viscosity solution to (2.8), it then only remains to prove that
To establish (2.9), appropriate bounds from below and above have to be constructed.
The second approach is to construct domains \(\{D_l\}\) such that \(D_l\) is, for each \(l\), bounded, the closure of \(D_l\) is compactly contained in \(D_{l+1}\), \(\cup D_l=\mathbb R ^N\), and such that the existence of a solution \(u_l\) to
can be ensured. Here \(\partial _p(D_l\times (0,T])\) is the appropriate parabolic boundary of \(D_l\times (0,T]\). One way to construct \(g_l\) is as follows, and we note that the following argument requires that \(u^+\in \text{ C}_p(\mathbb R ^N\times [0,T])\). For each \(l\in \mathbb Z _+\), let \(\chi _l\) be a cutoff function such that \(\chi _{l}\in C^\infty _0(\mathbb R ^N)\), \(\chi _{l}\ge 0\), \(\chi _{l}\equiv 1\) on \(D_l\) and such that the support of \(\chi _{l}\) is compactly contained in \(D_{l+1}\). We then let
where \(u^+\) is the globally defined supersolution existing by assumption. Note that the restriction of \(g_{l}\) to \(\partial D_l\times (0,T)\) equals \(u^+\). By continuity and the growth conditions, \(u^+\), \(\psi \), \(\theta \), and \(g_l\) are bounded functions on the closure of \(D_l\times [0,T]\), and the existence of \(u_l\) as in (2.8) boils down to the problem concerning whether or not
Assuming that (2.11) has an affirmative answer, it follows by a local version of the comparison principle in Lemma 5.1 that
Hence,
A solution to the unbounded problem on \(\mathbb R ^N\times [0,T]\) can now be constructed pointwise as the monotone limit of \(\{u_l(x,t)\}\).
The essence of this rather lengthy outline is that the subtleties in both of the above approaches stem from the problem of whether or not we can ensure that the boundary data are attained continuously, see (2.9) and (2.11). Working along the first approach above, we have to ensure (2.9), and this can be done by assuming, for instance, that \((u_1^+,\ldots ,u_d^+)\) and \((u_1^-,\ldots ,u_d^-)\) are continuous and identical to \((g_1,\ldots ,g_d)\) at \(T\), or by imposing potentially more restrictive assumptions on the data \(\psi _i\), \(c_{i,j}\), and \(g_i\) based on which appropriate bounds can be constructed. Working along the lines of the second approach above, we have to ensure (2.11), i.e., we have to make sure that all points on \(\partial _p(D_l\times (0,T])\) are regular for the continuous Dirichlet problem for the operator \(\mathcal H \).
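For intuition regarding (2.8), a model case can be approximated numerically. The sketch below assumes the simplest possible operator \(\mathcal H =\partial _t+\partial _{xx}\) (the heat operator in one space dimension) and time-independent data, which are illustrative simplifications: it marches backward from the terminal data and projects onto the obstacle constraint at every step. Iterating such solves with obstacles built from the previous iterate, as in (2.6), produces the increasing scheme discussed above.

```python
import numpy as np

def solve_obstacle(psi, theta, g, dx, dt, n_t):
    """Explicit finite-difference sketch of
        min{ -u_t - u_xx - psi, u - theta } = 0,  u(., T) = g,
    on a 1-d grid with time-independent psi and obstacle theta.
    Marches backward from t = T; boundary values are kept fixed."""
    u = g.astype(float).copy()
    assert dt <= dx**2 / 2, "stability (CFL) condition of the explicit scheme"
    for _ in range(n_t):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (lap + psi)   # one unconstrained backward step
        u = np.maximum(u, theta)   # project onto the constraint u >= theta
    return u

# Toy run: hat-shaped terminal data, inactive obstacle theta = -1.
x = np.linspace(-1.0, 1.0, 101)
u = solve_obstacle(np.zeros_like(x), np.full_like(x, -1.0),
                   np.maximum(0.0, 0.5 - np.abs(x)),
                   dx=0.02, dt=0.0002, n_t=200)
```

The projection step is exactly what distinguishes the obstacle problem from the unconstrained terminal-value problem (2.5).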
Finally, a third approach to the obstacle problem in (2.8) is to use a stochastic formalism and to approach the problem using reflected backward stochastic differential equations (BSDEs). While BSDEs were introduced in [18, 19], the notion of reflected BSDE was introduced in [15]. A solution to a one-dimensional reflected BSDE is a triple of processes \((Y,Z,K)\) satisfying
where the lower bound \(S\) is a given (one-dimensional) stochastic process. \(K\) is a continuous increasing process, \(K_0=0\), pushing the process \(Y\) upward in order to keep it above \(S\). This is done with minimal energy in the sense that
and \(K\) increases only when \(Y\) is at the boundary of the time-dependent space-time domain defined by \((t,S_t)\). In the context of the obstacle problem in (2.8), the data in problem (2.14), i.e., \(\xi \), \(f\), and \(S_t\), are defined based on the data in problem (2.8), i.e., \(\psi \), \(\theta \), and \(g\), and, as outlined in Sect. 6, it is under modest assumptions sufficient to prove existence of the triple \((Y_t, Z_t, K_t)\) to ensure the existence of a solution to (2.8). Uniqueness follows from the comparison principle of Lemma 5.1.
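To connect (2.14) with a computation, the following hedged sketch treats the simplest discrete-time analogue with driver \(f\equiv 0\) on a symmetric binomial tree: the backward recursion \(Y_k=\max (S_k,\mathbb E [Y_{k+1}])\) is the discrete Snell envelope, and the recorded reflection plays the role of the increasing process \(K\). The tree, the probabilities, and the obstacle are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

def reflected_bsde_binomial(xi, obstacle, n_steps):
    """Discrete reflected BSDE with driver f = 0 on a binomial tree.
    xi: terminal values on level n_steps (length n_steps + 1);
    obstacle(k, j): barrier S at node j of level k.
    Returns the root value Y_0 and the total expected reflection
    (a discrete analogue of K_T)."""
    Y = np.asarray(xi, dtype=float)
    K_total = 0.0
    for k in range(n_steps - 1, -1, -1):
        cont = 0.5 * (Y[:-1] + Y[1:])                  # E[Y_{k+1} | level k]
        S = np.array([obstacle(k, j) for j in range(k + 1)])
        K_total += np.mean(np.maximum(S - cont, 0.0))  # push needed to stay >= S
        Y = np.maximum(S, cont)                        # reflected value
    return Y[0], K_total
```

When the obstacle is never active the recursion returns the plain conditional expectation, and the reflection term vanishes, mirroring the minimality condition (2.15).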
Building on Theorem 2.2, and the methods outlined above for the obstacle problem, we prove the following theorem.
Theorem 2.3
Let \(\mathcal H \) be as in (1.2) and assume (1.7), (2.1), (2.2), (2.3), and (2.4). Assume that there exists, for each \(i\in \{1,\ldots ,d\}\) and \(y\in \mathbb{R }^N\), a barrier \(\{u^{+,i,y}\}\) to the system in (1.1) in the sense of Definition 1. Then there exists a viscosity solution \((u_1,\ldots ,u_d)\) to the problem in (1.1), and this solution is unique in the class of solutions which satisfy the polynomial growth condition formulated in (2.3).
To prove existence of a viscosity solution \((u_1,\ldots ,u_d)\) to the problem in (1.1), it hence remains to construct barriers as stated in \((1)\) of Theorem 2.2. To do so, we impose additional assumptions on the switching costs \(c_{i,j}\). In particular, by assuming (1.8), (2.3), and \(g_i=g\), for all \(i \in \{1,\ldots ,d\}\), we are able to prove that there exist positive constants \(K\) and \(\lambda \) such that \((u_1^{+,i,y,{\epsilon }},\ldots , u_d^{+,i,y,{\epsilon }})\),
for all \(j \in \{1,\ldots , d\}\), is a supersolution to the problem in (1.1). Since \(c_{i,i} = 0\) by assumption, \(u_i^{+,i,y,{\epsilon }}(x,t)\) attains the terminal data \(g\) as \(\epsilon \rightarrow 0\). Hence, from Theorem 2.3, we can deduce existence of a viscosity solution to the problem in (1.1). In particular, we prove the following theorem.
Theorem 2.4
Let \(\mathcal H \) be as in (1.2) and assume (1.8), (2.1), (2.2), (2.16), and (2.4). Assume also that \(g_i=g\), for all \(i \in \{1,\ldots , d\}\), for some Lipschitz continuous function \(g\), that
and that (1.8) holds for any sequence of indices \(i_1\), \(i_2\), \(i_3\), \(i_j\in \{1,\ldots ,d\}\) for each \(j\in \{1,2,3\}\), and for all \((x,t)\in \mathbb R ^N\times [0,T]\). Then there exists a unique viscosity solution \((u_1,\ldots ,u_d)\) to the problem in (1.1), unique in the sense defined in Theorem 2.3.
2.1 Regularity in the context of operators of Kolmogorov type
We here formulate results concerning regularity of viscosity solutions \((u_1,\ldots ,u_d)\) to the problem in (1.1) in the context of operators of Kolmogorov type. In particular, in the following we assume the existence of a vector of functions \((u_1,\ldots ,u_d)\) which is continuous on \(\mathbb R ^N\times [0,T]\) and which satisfies (1.1) in the viscosity sense. We will formulate results concerning further regularity of a viscosity solution \((u_1,\ldots ,u_d)\), beyond continuity. Our regularity results can be stated under much weaker regularity assumptions on the coefficients of the operator \(\mathcal H \) and on the data \(\psi _i\), \(c_{i,j}\), and \(g_i\). In particular, as our results concerning regularity rely on the results in [9] and [17] concerning regularity in obstacle problems involving operators of Kolmogorov type, we here impose the same assumptions on \(\mathcal H \) as in those papers. In particular, concerning the matrix \(A:=\{a_{i,j}\}_{i,j=1}^m\), we assume that \(A\) is real, symmetric, and uniformly positive definite in \(\mathbb{R }^{m}\), and that there exists a positive constant \(\lambda \) such that
The matrix \(B=\{b_{i,j}\}_{i,j=1}^N\) is assumed to have real constant entries. Consider the constant-coefficient operator
We assume that
i.e., every distributional solution of \(\mathcal K u=f\) is a smooth classical solution whenever \(f\) is smooth. The operator \(\mathcal K \) can also be written as
where
and \(\{\bar{a}_{i,j}\}\) are the entries of the unique positive matrix \(\bar{A}\) such that \(A = \bar{A}^2\). The hypothesis (2.19) is equivalent to the Hörmander condition,
see [13]. Equation (2.19) is also equivalent, see [16], to the following structural assumption on \(B\): there exists a basis for \(\mathbb{R }^N\) such that the matrix \(B\) has the form
where \(B_j\) is an \(m_{j-1}\times m_j\) matrix of rank \(m_j\) for \(j\in \{1,\ldots ,\kappa \}\), \(1\le m_\kappa \le \cdots \le m_1 \le m_0=m\) and \(m+ m_1+\cdots +m_\kappa =N\), while \(*\) represents arbitrary matrices with constant entries. Based on (2.20), we introduce the family of dilations \((\delta _r)_{r>0}\) on \(\mathbb{R }^{N+1}\) defined by
where \(I_k\), \(k\in \mathbb N \), is the \(k\)-dimensional unit matrix. Let
We say that \(\mathbf{q}+2\) is the homogeneous dimension of \(\mathbb{R }^{N+1}\) with respect to the dilations group \((\delta _r)_{r>0}\). For simplicity, in this paper we additionally assume that all the blocks denoted by \(*\) in (2.20) are null. This is equivalent, see [16], to the technical condition
In (2.23) the symbol \(\circ \) refers to the underlying group law defined in (7.1). In the following we let, for \(\alpha \in (0,1]\), \(C_K^{0,\alpha }(\mathbb R ^N\times [0,T])\) be the space of functions which are Hölder continuous of order \(\alpha \) on \(\mathbb R ^N\times [0,T]\), where Hölder continuity is defined in terms of the metric induced by the Lie group structure underlying the operators of Kolmogorov type. \(C_K^{2,\alpha }(\mathbb R ^N\times [0,T])\) is defined similarly. We refer to the bulk of the paper for the precise definition of \(C_K^{0,\alpha }(\mathbb R ^N\times [0,T])\), \(C_K^{2,\alpha }(\mathbb R ^N\times [0,T])\). Concerning the regularity of the coefficients \(a_{i,j}\) and \(a_i\), we assume that
We prove the following theorems.
Theorem 2.5
(Interior regularity) Let \(\mathcal H \) be as in (1.2) and assume (2.17), (2.19), (2.23), and (2.24). Assume that the continuous functions \(\psi _i,\ c_{i,j},\ g_i\in C_K^{0,\alpha }(\mathbb R ^N\times [0,T])\) for some \(\alpha \in (0,1]\) and that \(\{c_{i,j}\}\) satisfy (1.7). If \((u_1,\ldots ,u_d)\) is a viscosity solution to the problem in (1.1) with data \(\psi _i,\ c_{i,j},\ g_i\), then
for each \(i\in \{1,\ldots ,d\}\). Furthermore,
where
Theorem 2.6
(Regularity at the terminal state) Let \(\mathcal H \), \(\psi _i,\ c_{i,j},\ g_i\), \((u_1,\ldots ,u_d)\), be as in Theorem 2.5. Assume also (2.4). Then
for each \(i\in \{1,\ldots ,d\}\).
Concerning proofs, we here simply note that Theorems 2.5 and 2.6 are proved using results and techniques developed in [1, 9, 17].
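To make the structural quantities of this subsection concrete, the sketch below computes the homogeneous dimension \(\mathbf q +2\) from the block dimensions \(m_0,\ldots ,m_\kappa \) in (2.20), using the standard formula \(\mathbf q =\sum _{j}(2j+1)m_j\) induced by the dilations (2.21), and checks hypoellipticity through a Kalman-type rank condition that, for this constant-coefficient class, is equivalent to the Hörmander condition. The rank formulation and the \(N=2\) example are assumptions made for illustration.

```python
import numpy as np

def homogeneous_q(block_dims):
    """block_dims = [m_0, m_1, ..., m_kappa]; q = sum_j (2j + 1) * m_j."""
    return sum((2 * j + 1) * m for j, m in enumerate(block_dims))

def kalman_rank(B, S):
    """Rank of the controllability matrix [S, BS, B^2 S, ..., B^{N-1} S];
    full rank N corresponds to the Hormander/hypoellipticity condition
    (assumed formulation for constant A and B)."""
    N = B.shape[0]
    blocks, M = [S], S
    for _ in range(N - 1):
        M = B @ M
        blocks.append(M)
    return np.linalg.matrix_rank(np.hstack(blocks))

# Classical Kolmogorov operator in R^2: m_0 = m_1 = 1, so q = 1 + 3 = 4,
# and the pair (B, S) below has full Kalman rank 2.
B = np.array([[0.0, 0.0], [1.0, 0.0]])
S = np.array([[1.0], [0.0]])
```

For the classical Kolmogorov example the homogeneous dimension of \(\mathbb R ^3\) is thus \(\mathbf q +2=6\), while dropping the coupling block \(B_1\) destroys the rank condition.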
We emphasize that, in contrast to much of the literature on optimal switching problems, the main results in this paper rely on techniques from the theory of partial differential equations and make little reference to stochastic formalisms. Furthermore, compared to, e.g., [5, 6, 11], our results do not require viscosity sub- and supersolutions to be continuous; only semi-continuity is required.
2.2 Outline of the paper
The rest of the paper is organized as follows. Section 3 is of preliminary nature, and we here introduce some notation and state the definition of viscosity sub- and supersolutions and that of viscosity solutions. Section 4 is devoted to the proof of Theorem 2.1, and in Sect. 5 we prove Theorem 2.2. In Sect. 6 we then continue to prove Theorems 2.3 and 2.4 by proceeding as outlined above and using the results established in the previous sections. One part of Sect. 6 is devoted to establishing an appropriate version of Theorem 2.1 but for the obstacle problem. Section 7 is devoted to operators of Kolmogorov type and to the proofs of Theorems 2.5 and 2.6.
3 Preliminaries
In this section we introduce some notation used throughout the paper, and we define the appropriate notion of viscosity sub- and supersolutions to the problem in (1.1).
3.1 Notation
We denote by \(\text{ LSC}(\mathbb R ^N\times [0,T])\) the set of lower semi-continuous functions, i.e., all functions \(f: (\mathbb R ^N\times [0,T]) \rightarrow \mathbb{R }\) such that for all points \((x_0,t_0)\) and for any sequence \(\{(x_n, t_n)\}_n\) with \((x_n,t_n) \rightarrow (x_0,t_0)\) in \(\mathbb R ^N\times [0,T]\) as \(n \rightarrow \infty \), we have
Likewise, we denote by \(\text{ USC}(\mathbb R ^N\times [0,T])\) the set of upper semi-continuous functions, i.e., all functions \(f: (\mathbb R ^N\times [0,T])\rightarrow \mathbb{R }\) such that for all points \((x_0,t_0 )\) and for any sequence \(\{(x_n,t_n)\}_n\) with \((x_n,t_n) \rightarrow (x_0,t_0)\) in \(\mathbb R ^N\times [0,T]\) as \(n \rightarrow \infty \), we have
Note that a function \(f\) is upper semi-continuous if and only if \(-f\) is lower semi-continuous. Also, a real function \(g\) is continuous if and only if it is both upper and lower semi-continuous. The function space \(\text{ LSC}_p(\mathbb R ^N\times [0,T])\) is defined to consist of functions \(h\in \text{ LSC}(\mathbb R ^N\times [0,T])\) which satisfy the growth condition
for some \(c,\gamma \in [1,\infty )\), whenever \((x,t)\in \mathbb R ^N\times [0,T]\). \(\text{ USC}_p(\mathbb R ^N\times [0,T])\) is defined by analogy. Furthermore, \(\text{ C}_p(\mathbb R ^N\times [0,T])=\text{ USC}_p(\mathbb R ^N\times [0,T])\cap \text{ LSC}_p(\mathbb R ^N\times [0,T])\). We will denote by \(c\) a generic constant, \(1\le c < \infty \), that may change value from line to line.
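As a toy illustration of these notions (a hypothetical grid-based sketch, not part of the paper's arguments), the upper and lower semi-continuous envelopes appearing in the Perron-type argument of Sect. 2 can be approximated on a grid by local maxima and minima; a function is continuous precisely when the two envelopes agree.

```python
import numpy as np

def usc_envelope(f, window=1):
    """Grid approximation of the upper semi-continuous envelope f*:
    a running maximum over a small neighborhood of each grid point."""
    f = np.asarray(f, dtype=float)
    return np.array([f[max(0, i - window): i + window + 1].max()
                     for i in range(len(f))])

def lsc_envelope(f, window=1):
    """Lower envelope via f_* = -(-f)^*, as in the text."""
    return -usc_envelope(-np.asarray(f, dtype=float), window)

f = np.zeros(11)
f[5] = 1.0   # a single spike: upper semi-continuous but not continuous
```

On this example the upper envelope retains the spike while the lower envelope removes it, so the two differ exactly where continuity fails.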
3.2 Viscosity solutions
We here define the notion of viscosity solutions to the problem in (1.1).
Definition 1
-
(i)
A vector \((u_1^+,\ldots ,u_d^+)\), \(u_i^+\in \text{ LSC}_p(\mathbb R ^N\times [0,T])\) for \(i\in \{1,\ldots ,d\}\), is a viscosity supersolution to the problem in (1.1) if \(u_i^+(x,T)\ge g_i(x)\) whenever \(x\in \mathbb R ^N\), \(i\in \{1,\ldots ,d\}\), and if the following holds. If \((x_0,t_0)\in \mathbb R ^N\times (0,T)\) and if, for some \(i\in \{1,\ldots ,d\}\), we have \(\phi _i\in C^{1,2}(\mathbb R ^N\times [0,T])\) such that
$$\begin{aligned} (i)&\quad \phi _i(x_0, t_0) = u_i^+(x_0,t_0),\nonumber \\ (ii)&\quad (x_0,t_0) \text{ is a local maximum of } \phi _i-u_i^+, \end{aligned}$$then
$$\begin{aligned}&\min \biggl \{-\mathcal H \phi _i-\psi _i, u_i^+-\max _{j\ne i}(-c_{i,j}+u_j^+)\biggr \} \ge 0. \end{aligned}$$ -
(ii)
A vector \((u_1^-,\ldots ,u_d^-)\), \(u_i^-\in \text{ USC}_p(\mathbb R ^N\times [0,T])\) for \(i\in \{1,\ldots ,d\}\), is a viscosity subsolution to the problem in (1.1) if \(u_i^-(x,T)\le g_i(x)\) whenever \(x\in \mathbb R ^N\), \(i\in \{1,\ldots ,d\}\), and if the following holds. If \((x_0,t_0)\in \mathbb R ^N\times (0,T)\) and if, for some \(i\in \{1,\ldots ,d\}\), we have \(\phi _i\in C^{1,2}(\mathbb R ^N\times [0,T])\) such that
$$\begin{aligned} (i)&\phi _i(x_0, t_0) = u_i^-(x_0,t_0),\nonumber \\ (ii)&(x_0,t_0) \text{ is a local minimum of } \phi _i-u_i^-, \end{aligned}$$then
$$\begin{aligned}&\min \biggl \{-\mathcal H \phi _i-\psi _i, u_i^--\max _{j\ne i}(-c_{i,j}+u_j^-)\biggr \} \le 0. \end{aligned}$$ -
(iii)
If \((u_1,\ldots ,u_d)\) is both a viscosity supersolution and subsolution to the problem in (1.1), then \((u_1,\ldots ,u_d)\) is a viscosity solution to the problem in (1.1).
In particular, if \((u_1,\ldots ,u_d)\) is a viscosity solution to the problem in (1.1), then \(u_i\in \text{ C}_p(\mathbb R ^N\times [0,T])\) for \(i\in \{1,\ldots ,d\}\).
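To make the coupling in Definition 1 concrete, consider the two-mode case \(d=2\). Written out from the definition (a sketch; the general form of (1.1) is as stated in the introduction), the system consists of two obstacle problems in which each component is constrained from below by the other component minus the corresponding switching cost:

```latex
\min\Bigl\{-\mathcal{H} u_1-\psi_1,\; u_1-\bigl(u_2-c_{1,2}\bigr)\Bigr\}=0,
\qquad u_1(x,T)=g_1(x),\\
\min\Bigl\{-\mathcal{H} u_2-\psi_2,\; u_2-\bigl(u_1-c_{2,1}\bigr)\Bigr\}=0,
\qquad u_2(x,T)=g_2(x).
```

In the region where \(u_1>u_2-c_{1,2}\) the first line reduces to the equation \(-\mathcal{H} u_1=\psi_1\), while on the coincidence set the switching constraint is active.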
Definition 2
Let \(v\in \text{ USC}_p(\mathbb R ^N\times [0,T])\) and \(u\in \text{ LSC}_p(\mathbb R ^N\times [0,T])\), \((x,t)\in \mathbb R ^N\times (0,T)\), and let \(S_N\) be the set of all \(N\times N\)-dimensional symmetric matrices. The superjet \(J^{2,+}v(x,t)\) of \(v\) at \((x,t)\) is the set of all triples \((p,q,X)\in \mathbb R \times \mathbb R ^N\times S_N\) such that
$$\begin{aligned} v(y,s)\le v(x,t)+p(s-t)+\langle q,y-x\rangle +\frac{1}{2}\langle X(y-x),y-x\rangle +o\left(|s-t|+|y-x|^2\right) \end{aligned}$$
as \((y,s)\rightarrow (x,t)\).
The subjet \(J^{2,-}u(x,t)\) of \(u\) at \((x,t)\) is the set of all triples \((p,q,X)\in \mathbb R \times \mathbb R ^N\times S_N\) such that
$$\begin{aligned} u(y,s)\ge u(x,t)+p(s-t)+\langle q,y-x\rangle +\frac{1}{2}\langle X(y-x),y-x\rangle +o\left(|s-t|+|y-x|^2\right) \end{aligned}$$
as \((y,s)\rightarrow (x,t)\).
Note that if \(\varphi -v\) has a local maximum (minimum) at \((x,t)\), then
$$\begin{aligned} \left(\partial _t\varphi (x,t),\nabla \varphi (x,t),\nabla ^2\varphi (x,t)\right)\in J^{2,-}v(x,t)\quad \left(J^{2,+}v(x,t)\right). \end{aligned}$$
Let
whenever \((x,t,p,q,X)\in \mathbb R ^N\times \mathbb R \times \mathbb R \times \mathbb R ^N\times S_N\). It is well known, see, e.g., [3], that the notion of viscosity super- and subsolutions in Definition 1 can be equivalently defined using the notion of super- and subjets. We next state such an equivalent definition for further reference.
Definition 3
-
(i)
Let \((u_1^+,\ldots ,u_d^+)\), \(u_i^+\in \text{ LSC}_p(\mathbb R ^N\times [0,T])\) for \(i\in \{1,\ldots ,d\}\), be real-valued functions such that \(u_i^+(x,T)\ge g_i(x)\) whenever \(x\in \mathbb R ^N\), \(i\in \{1,\ldots ,d\}\). Then \((u_1^+,\ldots ,u_d^+)\) is called a viscosity supersolution to the problem in (1.1) if for any \(i\in \{1,\ldots ,d\}\), \((x,t) \in \mathbb R ^N\times (0,T)\) and \((p,q,X) \in J^{2,-}u_i^+(x,t)\), we have
$$\begin{aligned} \min \biggl \{-\mathcal H (x,t,p,q,X) -\psi _i(x,t),u^+ _i(x,t)-\max _{j\ne i}(-c_{i,j}(x,t)+u^+_j(x,t))\biggr \}\ge 0. \end{aligned}$$ -
(ii)
Let \((u_1^-,\ldots ,u_d^-)\), \(u_i^-\in \text{ USC}_p(\mathbb R ^N\times [0,T])\) for \(i\in \{1,\ldots ,d\}\), be real-valued functions such that \(u_i^-(x,T)\le g_i(x)\) whenever \(x\in \mathbb R ^N\), \(i\in \{1,\ldots ,d\}\). Then \((u_1^-,\ldots ,u_d^-)\) is called a viscosity subsolution to the problem in (1.1) if for any \(i\in \{1,\ldots ,d\}\), \((x,t) \in \mathbb R ^N\times (0,T)\) and \((p,q,X) \in J^{2,+}u_i^-(x,t)\), we have
$$\begin{aligned} \min \biggl \{-\mathcal H (x,t,p,q,X) -\psi _i(x,t),u_i^-(x,t)-\max _{j\ne i}(-c_{i,j}(x,t)+u^-_j(x,t))\biggr \}\le 0. \end{aligned}$$ -
(iii)
If \((u_1,\ldots ,u_d)\) is both a viscosity supersolution and subsolution to the problem in (1.1), then \((u_1,\ldots ,u_d)\) is a viscosity solution to the problem in (1.1).
4 The comparison principle: proof of Theorem 2.1
The purpose of this section is to prove Theorem 2.1, and hence, throughout the section we adopt the assumptions stated in Theorem 2.1. In particular, let \(\mathcal H \) be as in (1.2) and assume (2.1) and (2.2). Assume that \(\psi _i,\, c_{i,j}\), and \(g_i\) are as stated in Theorem 2.1, and assume that \((u_1^-,\ldots ,u_d^-)\) and \((u_1^+,\ldots ,u_d^+)\) are viscosity sub- and supersolutions, respectively, to the problem in (1.1). We first prove the following lemma.
Lemma 4.1
The following is true for any \(\gamma >0\). Let \(\theta \ge 0\). Then there exists \(\eta >0\), independent of \(\theta \), such that if \(\lambda \ge \eta \), then \((\bar{u}_1^+,\ldots ,\bar{u}_d^+)\),
$$\begin{aligned} \bar{u}_i^+(x,t):=u_i^+(x,t)+\theta e^{-\lambda t}\left(|x|^{2\gamma +2}+1\right),\quad i\in \{1,\ldots ,d\}, \end{aligned}$$
is a viscosity supersolution of (1.1).
Proof
Since \(u_i^+ \in \text{ LSC}_p(\mathbb R ^N\times [0,T])\), we have \(\bar{u}_i^+ \in \text{ LSC}_p(\mathbb R ^N\times [0,T])\). Let \((x_0,t_0)\in \mathbb R ^N\times [0,T]\) and assume, for some \(i\in \{1,\ldots ,d\}\), that \(\phi _i\in C^{1,2}(\mathbb R ^N\times [0,T])\) satisfies
To prove the lemma, it is enough to prove that there exists \(\eta >0\), independent of \(\theta \), such that if \(\lambda \ge \eta \), then
Let \(\Phi _i=\phi _i-\theta e^{-\lambda t}(|x|^{2\gamma +2}+1)\) and note that by construction \(\Phi _i-u_i^+\) has a local maximum at \((x_0,t_0)\). Using that \(u_i^+\) is a supersolution, we have that
Using (4.1) we see that
since \(\bar{u}_i^+-u_i^+\) is independent of \(i\). To conclude the proof, we hence only have to ensure that
To do this we first note that, at \((x_0,t_0)\),
Hence,
where
Using the assumption on the operator \(\mathcal H \) stated in Theorem 2.1, we see that
and hence
In view of (4.2) we see that (4.3) completes the proof of the lemma.\(\square \)
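The size of \(\eta \) can be traced explicitly. Write \(w(x,t)=\theta e^{-\lambda t}\left(|x|^{2\gamma +2}+1\right)\) and assume, as the growth conditions (2.1)–(2.2) are designed to give, that \(\partial _t\) enters \(\mathcal H \) additively as in (1.2) and that the spatial part of \(\mathcal H \) satisfies \(\bigl |(\mathcal H -\partial _t)\bigl (|x|^{2\gamma +2}+1\bigr )\bigr |\le c\left(1+|x|^{2\gamma +2}\right)\) with \(c\) independent of \(\theta \) and \(\lambda \) (an assumption in this sketch). Then:

```latex
-\mathcal{H} w(x,t)
=\theta e^{-\lambda t}\Bigl[\lambda\bigl(|x|^{2\gamma+2}+1\bigr)
-\bigl(\mathcal{H}-\partial_t\bigr)\bigl(|x|^{2\gamma+2}+1\bigr)\Bigr]
\;\ge\;\theta e^{-\lambda t}\,(\lambda-c)\bigl(|x|^{2\gamma+2}+1\bigr)\;\ge\;0,
```

provided \(\lambda \ge c=:\eta \); note that \(\eta \) obtained in this way is independent of \(\theta \), as required by the lemma.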
4.1 Proof of Theorem 2.1
Assume that \((u_1^+,\ldots ,u_d^+)\) is a viscosity supersolution and that \((u_1^-,\ldots ,u_d^-)\) is a viscosity subsolution, respectively, to the problem in (1.1). We want to prove that
whenever \((x,t)\in \mathbb R ^N\times [0,T]\). In fact, we will prove a slightly modified version of (4.4). We let \(\tilde{u}_i^-(x,t)=e^tu_i^-(x,t)\) and \(\tilde{u}_i^+(x,t)=e^tu_i^+(x,t)\) for all \(i\in \{1,\ldots ,d\}\). One can easily verify that \((\tilde{u}_1^-,\ldots ,\tilde{u}_d^-)\) is a viscosity subsolution to the problem
where
if and only if \(u_i^-\) is a subsolution to (1.1). Similarly, \((\tilde{u}_1^+,\ldots ,\tilde{u}_d^+)\) is a viscosity supersolution to the problem in (4.5) if and only if \(u_i^+\) is a supersolution to (1.1). Proving (4.4) is now equivalent to proving
whenever \((x,t)\in \mathbb R ^N\times [0,T]\), but according to Lemma 4.1, it is enough to prove
whenever \((x,t)\in \mathbb R ^N\times [0,T]\), where \(\bar{u}_i^+:=\tilde{u}_i^+ + \theta e^{-(\lambda -1) t}(|x|^{2\gamma +2}+1)\), since we easily recover (4.6) by letting \(\theta \rightarrow 0\) in (4.7). In fact, it is enough to show that for any \(\theta >0\)
since the desired result is still retrieved in the limit as \(\theta \rightarrow 0\). In particular, in the following argument \(\theta \) will be fixed and we will prove (4.7). Once that is done, we let \(\theta \rightarrow 0\).
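The effect of the rescaling \(\tilde{u}_i^{\pm }=e^tu_i^{\pm }\) can be checked directly. Assuming, as in (1.2), that \(\partial _t\) enters \(\mathcal H \) with a plus sign, and writing \(\tilde{\psi }_i=e^t\psi _i\) and \(\tilde{c}_{i,j}=e^tc_{i,j}\) (a sketch of the notation behind (4.5)):

```latex
\mathcal{H}\tilde{u}_i=\mathcal{H}\bigl(e^{t}u_i\bigr)=e^{t}\,\mathcal{H}u_i+e^{t}u_i,
\qquad\text{so}\qquad
e^{t}\bigl(-\mathcal{H}u_i-\psi_i\bigr)=-\mathcal{H}\tilde{u}_i+\tilde{u}_i-\tilde{\psi}_i,
```

and multiplying the obstacle part by \(e^t>0\) gives \(e^t\bigl (u_i-\max _{j\ne i}(-c_{i,j}+u_j)\bigr )=\tilde{u}_i-\max _{j\ne i}(-\tilde{c}_{i,j}+\tilde{u}_j)\). The zeroth-order term \(+\tilde{u}_i\) created by the rescaling is what makes the transformed problem strictly monotone, which is the point of the device.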
Let in the following \(B(0,R)\), \(R>0\), be the standard Euclidean ball of radius \(R\) centered at 0. Using that \(u_i^-, - u_i^+ \in \text{ USC}_p(\mathbb R ^N\times [0,T])\) for \(i\in \{1,\dots ,d\}\), we have that \(|\tilde{u}_i^-(x,t)|+|\tilde{u}_i^+(x,t)|\le c(1+|x|^{\gamma })\) for some \(\gamma >0\), and we see that there exists \(R>0\) such that
Assuming that (4.6) does not hold we see from (4.8) that
for some \((\bar{x},\bar{t})\in B(0,R)\times (0,T]\). We will now prove (4.7) by contradiction. Indeed, assume that
Using that \(\tilde{u}_i^-(x,T)\le \bar{u}_i^+(x,T)\), for all \(x\in \mathbb R ^N\), by the definition of sub- and supersolutions, we see that \((\bar{x},\bar{t})\in B(0,R)\times (0,T)\). For \((\bar{x},\bar{t})\in B(0,R)\times (0,T)\) fixed, we let \(\mathcal I \) be the non-empty set of all \(j \in \{1,\ldots ,d\}\) such that
Given degrees of freedom \(\beta >0\), \(\Lambda > 0\), we introduce the function \(\varphi _{\epsilon }: \mathbb{R }^N \times \mathbb{R }^N \times [0,T] \rightarrow \mathbb{R }\)
Note that \(\varphi _{\epsilon }\) is nonnegative. Furthermore, for \(j\in \mathcal I \) fixed, and \(\epsilon \), \(0<\epsilon \ll 1\), we consider the function
where \((x,y,t) \in B(0,R) \times B(0,R)\times [0,T]\). Using that \(\tilde{u}_j^-\) is upper semi-continuous and that \(\bar{u}_j^+\) is lower semi-continuous, we can conclude that \(\phi ^j_\epsilon \) is upper semi-continuous and hence that there exists \((x_\epsilon , y_\epsilon ,t_\epsilon )\) such that
Note that the points \((x_{\epsilon }, y_{\epsilon }, t_{\epsilon })\) depend also on \(\beta \) and \(\Lambda \). However, in this part of the argument \(\beta \) and \(\Lambda \) are kept fixed and hence the dependence is harmless in the following. Using that \(2\phi ^j_\epsilon (x_{\epsilon },y_{\epsilon },t_{\epsilon }) \ge \phi ^j_\epsilon (x_{\epsilon },x_{\epsilon },t_{\epsilon })+\phi ^j_\epsilon (y_{\epsilon },y_{\epsilon },t_{\epsilon })\), we see that
and as the right-hand side in (4.11) is bounded, we have that \(|x_{\epsilon }-y_{\epsilon }|\rightarrow 0\) as \(\epsilon \rightarrow 0\). Using that \((\bar{x},\bar{t})\in \overline{B(0,R)}\times [0,T)\), and the construction of \(\varphi _{\epsilon }\), we see that
Furthermore, we see that we must have, using the definition of \((\bar{x},\bar{t})\), \((x_{\epsilon }, y_{\epsilon },t_{\epsilon })\), and the upper semi-continuity of \(\tilde{u}^-_j-\bar{u}^+_j\), that
The above display also shows that, for \({\epsilon }\) small enough, we have \(t_{\epsilon }\in [0,T)\) since \(t_{\epsilon }\rightarrow \bar{t}\) and \(\bar{t} \in [0,T)\). Note also that
Indeed, recall that \(\tilde{u}_j^-\) is upper semi-continuous and assume, taking (4.13) into account, that \(\limsup _{\epsilon \rightarrow 0}\tilde{u}_j^-(x_{\epsilon },t_{\epsilon }) < \tilde{u}_j^-(\bar{x},\bar{t})\). Then, using (4.12) we have that \(\liminf _{\epsilon \rightarrow 0}\bar{u}_j^+(y_{\epsilon },t_{\epsilon }) < \bar{u}_j^+(\bar{x},\bar{t})\), but this contradicts the lower semi-continuity of \(\bar{u}_j^+\). Similarly, assuming that \(\liminf _{\epsilon \rightarrow 0}\bar{u}_j^+(y_{\epsilon },t_{\epsilon }) >\bar{u}_j^+(\bar{x},\bar{t})\) we see that
which again is a contradiction. Repeating (4.12) we also have that
In particular,
and using (4.14) we see that
In particular,
To proceed we will now argue as in [14], using the no-loop condition (1.7) \((ii)\), to conclude that there exists \(k \in \mathcal I \) such that
Indeed, assume, on the contrary, that
for all \(k\in \mathcal I \) and hence, in particular, that \(\tilde{u}_k^-(\bar{x},\bar{t}) + \tilde{c}_{k,j}(\bar{x}, \bar{t}) \le \tilde{u}_j^-(\bar{x},\bar{t})\) for some \(j \in \{1,\ldots ,k-1,k+1,\ldots ,d\}\). Furthermore, since \((\bar{u}_1^+,\ldots ,\bar{u}_d^+)\) is a supersolution to (4.5), we have that
Combining the two inequalities above yields
and hence
But \(k\in \mathcal I \) so (4.22) is actually an equality and hence \(j\in \mathcal I \). Repeating this argument as many times as necessary, we get the existence of a loop of indices \(\{i_1, i_2, \ldots ,i_p,i_{p+1}\}\) such that \(i_1=i_{p+1}\) and
This contradicts our assumptions on the switching costs and hence (4.19) must hold.
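The contradiction can be displayed explicitly. Along the loop \(\{i_1,\ldots ,i_p,i_{p+1}\}\), \(i_1=i_{p+1}\), the inequalities \(\tilde{u}_{i_l}^-(\bar{x},\bar{t})+\tilde{c}_{i_l,i_{l+1}}(\bar{x},\bar{t})\le \tilde{u}_{i_{l+1}}^-(\bar{x},\bar{t})\) telescope when summed over \(l=1,\ldots ,p\):

```latex
\sum_{l=1}^{p}\tilde{c}_{i_l,i_{l+1}}(\bar{x},\bar{t})
\;\le\;
\sum_{l=1}^{p}\Bigl(\tilde{u}_{i_{l+1}}^-(\bar{x},\bar{t})-\tilde{u}_{i_l}^-(\bar{x},\bar{t})\Bigr)=0,
```

which is ruled out when, as in the no-loop condition (1.7) \((ii)\), the switching costs accumulated along any closed loop of indices are strictly positive (this reading of (1.7) \((ii)\) is the standard one in the optimal switching literature and is assumed in this sketch).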
We now consider \(k\in \mathcal I \) such that (4.19) holds, and we intend to derive a contradiction to the assumption in (4.10). First, using (4.14) and (4.19) we see that there exists \(\bar{\epsilon }\), \(0<\bar{\epsilon }\ll 1\), such that
Inequality (4.25) ensures that \(\tilde{u}_k^-\) is above the obstacle at the points \(\{(x_{\epsilon },t_{\epsilon })\}_{\epsilon <\bar{\epsilon }}\). We now intend to apply Theorem 8.3 of [3] to the function
in a neighborhood of the point \((x_{\epsilon }, y_{\epsilon },t_{\epsilon })\). We first have to calculate, following Theorem 8.3 of [3], \(\partial _t\varphi _{\epsilon }\), \(\partial _{x_i}\varphi _{\epsilon }\), \(\partial _{y_i}\varphi _{\epsilon }\), and \(\partial _{x_iy_j}\varphi _{\epsilon }\). We see that
Furthermore,
where
Let
Applying Theorem 8.3 of [3], we conclude that there exist \(C,D \in \mathbb{R }\) and \(X,Y \in S_N\) such that
and such that
In addition, for every \(\epsilon _1>0\) we have that
Using this notation, (4.25), and the above, we see, by the definition of subsolutions, that
Similarly, we have, by the definition of supersolutions, that
Adding (4.28) and (4.29), we see that
Using the assumptions in (2.2), we see that
and
Next we note that there exists a constant \(c\) such that
Using this and (4.27) with \(\epsilon _1=\epsilon \) we find that
Assuming \(\gamma > 1\) and using (2.1) and (2.2) it follows by standard deductions that
Putting these estimates together, we find that
where \(0\le h(x_{\epsilon }, y_{\epsilon },\bar{x},\bar{y})\le c\) for all \(\epsilon \), \(0<\epsilon \le \bar{\epsilon }\). Hence, using the relation for \(C+D\) we see that
Now, letting first \(\epsilon \rightarrow 0\), using (4.13), (4.18), and the continuity of \(\tilde{\psi }_k\), we can conclude that
Finally, letting \(\Lambda \rightarrow 0\) in the last display, we see that
which contradicts (4.10). This completes the proof of Theorem 2.1.\(\square \)
5 Existence: proof of Theorem 2.2
We here consider the following obstacle problem for the operator \(\mathcal H \),
Throughout the section we assume that
Definition 1
-
(i)
A function \(u^+\), \(u^+\in \text{ LSC}_p(\mathbb{R }^N \times [0,T])\), is a viscosity supersolution to the problem in (5.1) if \(u^+\ge g\) on \(\mathbb{R }^N \times \{T\}\), and if the following holds. If \((x_0,t_0)\in \mathbb{R }^N \times [0,T]\) and if \(\phi \in C^{1,2}(\mathbb{R }^N \times [0,T])\) satisfies
$$\begin{aligned} (i)&\phi (x_0, t_0) = u^+(x_0,t_0),\nonumber \\ (ii)&(x_0,t_0) \text{ is a local maximum of } \phi -u^+, \end{aligned}$$(5.3)then
$$\begin{aligned}&\min \biggl \{-\mathcal H \phi -\psi , u^+-\theta \biggr \} \ge 0. \end{aligned}$$ -
(ii)
A function \(u^-\), \(u^-\in \text{ USC}_p(\mathbb{R }^N \times [0,T])\), is a viscosity subsolution to the problem in (5.1) if \(u^-\le g\) on \(\mathbb{R }^N \times \{T\}\), and if the following holds. If \((x_0,t_0)\in \mathbb{R }^N \times [0,T]\) and if \(\phi \in C^{1,2}(\mathbb{R }^N \times [0,T])\) satisfies
$$\begin{aligned} (i)&\phi (x_0, t_0) = u^-(x_0,t_0),\nonumber \\ (ii)&(x_0,t_0) \text{ is a local minimum of } \phi -u^-, \end{aligned}$$(5.4)then
$$\begin{aligned}&\min \biggl \{-\mathcal H \phi -\psi , u^--\theta \biggr \} \le 0. \end{aligned}$$ -
(iii)
If \(u\) is both a viscosity supersolution and subsolution to the problem in (5.1), then \(u\) is a viscosity solution to the problem in (5.1).
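The link between this obstacle problem and the system (1.1) is that freezing the other components turns each row of (1.1) into a problem of the form (5.1) (a sketch; this is exactly the device used later in this section, where the frozen obstacle is denoted \(\theta _i^+\)):

```latex
\theta_i:=\max_{j\ne i}\bigl(-c_{i,j}+u_j\bigr),
\qquad
\min\bigl\{-\mathcal{H} u_i-\psi_i,\ u_i-\theta_i\bigr\}=0
\ \text{in }\mathbb{R}^N\times(0,T),
\qquad u_i(x,T)=g_i(x).
```

In particular, comparison and existence results for (5.1) can be fed back into the system one component at a time.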
5.1 Comparison principle for the obstacle problem
Lemma 5.1
(Comparison principle). Let \(\mathcal H \) be as in (1.2) and assume (2.1), (2.2), and (5.2). Assume that \(u^-\) and \(u^+\) are viscosity sub- and supersolutions to (5.1), respectively. Then \(u^-\le u^+\) in \( \mathbb R ^N\times (0,T]\).
Proof
The proof is along the lines of the proof of Theorem 2.1. We let \(\tilde{u}^-(x,t)=e^tu^-(x,t)\), \(\tilde{u}^+(x,t)=e^tu^+(x,t)\), and we see that we then want to prove
whenever \((x,t)\in \mathbb R ^N\times (0,T)\), where \(\tilde{u}^-\) now is a viscosity subsolution to the problem
In (5.6),
Similarly, \(\tilde{u}^+\) is now a viscosity supersolution to the problem in (5.6). As in Lemma 4.1 we can construct a new supersolution \(\bar{u}^+(x,t)\) to (5.6),
by choosing \(\lambda \) large enough. As with Theorem 2.1 it is enough to show
for every \({\epsilon }\), and let \({\epsilon }\rightarrow 0\) to derive the result. Note that the construction of \(\bar{u}^+(x,t)\) and the growth conditions of \(\tilde{u}^-(x,t)\) ensure that
for some \((\bar{x},\bar{t})\in B(0,R) \times [0,T]\), where \(B(0,R)\) is a Euclidean ball of radius \(R\) centered at \(0\). We now assume that
and we want to derive a contradiction. Using the definition of sub- and supersolutions, we have \(\tilde{u}^-\le \tilde{u}^+\) on \(\mathbb{R }^N\times \{T\}\), and hence, we see that \((\bar{x},\bar{t})\in \mathbb{R }^N \times [0,T)\). We now need to prove that
as a replacement for (4.19). However, now this is trivial since, if \(\tilde{u}^-(\bar{x},\bar{t})\le \tilde{\theta }(\bar{x}, \bar{t})\), then using that \(\bar{u}^+\) is a supersolution, we must have that \(\tilde{u}^-(\bar{x},\bar{t})\le \bar{u}^+(\bar{x},\bar{t})\), which contradicts (5.8). Hence, (5.9) holds and we can then proceed along the lines of the proof of Theorem 2.1 to complete the proof of the lemma. We omit further details. \(\square \)
5.2 Proof of Theorem 2.2
Recall the assumptions made in Theorem 2.2 and recall, in particular, the assumptions concerning the existence, for \(k\in \mathbb N \), of \((u_{1}^k,\ldots ,u_{d}^k)\).
Lemma 5.2
Let \((u_{1}^k,\ldots ,u_{d}^k)\), for \(k=0,1,\ldots \), be as in the statement of Theorem 2.2. Then
whenever \((x,t)\in \mathbb{R }^N \times [0,T]\).
Proof
We prove the statement of the lemma by induction. Indeed, first consider \((u_1^0,\ldots ,u_d^0)\) and \(i\in \{1,\ldots ,d\}\). By assumption
Hence, since \(\mathcal H \) is linear, \(-\mathcal H (u_i^{0}-u_i^1)\le 0\) in \(\mathbb{R }^N \times [0,T]\) and \(u_i^{0}-u_i^1=0\) on \(\mathbb{R }^N \times \{T\}\). By the comparison principle (we can here use Lemma 5.1, as by assumption \(u_i^{0}-u_i^1\) has at most polynomial growth in \(x\) at infinity), it follows that \(u_i^{1}\ge u_i^0\) in \(\mathbb{R }^N \times [0,T]\), and this concludes the proof of Lemma 5.2 for \(k=0\). Assume now that
We then want to prove that
Note that \(u_{i}^{k_0+1}\) and \(u_{i}^{k_0+2}\) both solve the same obstacle problem but with obstacles
respectively. Using (5.12) we see that \(u_j^{k_0+1}\ge u_j^{k_0}\) for \(j\in \{1,\ldots ,d\}\) and hence \(\Gamma _{i}^{k_0+2}\ge \Gamma _{i}^{k_0+1}\) in \(\mathbb{R }^N\times [0,T]\). Hence, \(u_{i}^{k_0+2}\ge u_{i}^{k_0+1}\) in \(\mathbb{R }^N \times [0,T]\) as we see from Lemma 5.1, and we can conclude that (5.13) holds. The proof of the lemma now follows by induction. \(\square \)
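The monotone scheme behind Lemmas 5.2–5.4 can be summarized as follows (a sketch in the notation of the proof above; \(\Gamma _i^{k+1}\) denotes the obstacle at step \(k+1\)):

```latex
\min\Bigl\{-\mathcal{H} u_i^{k+1}-\psi_i,\ u_i^{k+1}-\Gamma_i^{k+1}\Bigr\}=0,
\qquad
\Gamma_i^{k+1}:=\max_{j\ne i}\bigl(-c_{i,j}+u_j^{k}\bigr),
\qquad
u_i^{k+1}(x,T)=g_i(x).
```

Monotonicity of the obstacles, \(\Gamma _i^{k+1}\le \Gamma _i^{k+2}\), is inherited by the solutions via the comparison principle of Lemma 5.1; this is exactly the induction step carried out above.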
Lemma 5.3
Let \((u_1^-,\ldots ,u_d^-)\) and \((u_{1}^k,\ldots ,u_{d}^k)\), for \(k=0,1,\ldots \), be as in the statement of Theorem 2.2. Let \((u_1^{+},\ldots ,u_d^+)\) be any supersolution to system (1.1). Then
for \(i\in \{1,\ldots ,d\}\) whenever \((x,t)\in \mathbb{R }^N \times [0,T]\).
Proof
Obviously, the lower bound for (5.14) holds for \(k=0\). By Lemma 5.2 the lower bound for \(k=1,2,\ldots \) then follows immediately. For \(k=0\), the upper bound holds by the comparison principle, i.e., Theorem 2.1. We now prove the upper bound for \(k=1,2, \ldots \) by induction. Suppose that we have verified the inequality on the right-hand side in (5.14) for \(k\in \{0,\ldots ,k_0\}\). We then consider \(k=k_0+1\). By assumption
By the induction hypothesis, \(\max _{j\ne i}(-c_{i,j}+u_{j}^{k_0})\le \max _{j\ne i}(-c_{i,j}+u_j^{+})\). Now we note that
where \(\theta _i^+=\max _{j\ne i}(-c_{i,j}+u_{j}^{+})\). In particular, \(u_{i}^{+}\) is a supersolution to the obstacle problem in (5.15). Since \(\max _{j\ne i}(-c_{i,j}+u_{j}^{k_0})\le \theta _i^+\) it now immediately follows from Lemma 5.1 that
for all \(i\in \{1,\ldots ,d\}\) whenever \((x,t)\in \mathbb{R }^N \times [0,T]\). This completes the proof of the lemma.\(\square \)
Lemma 5.4
Let the assumptions of Theorem 2.2 hold. Let \((u_{1}^k,\ldots ,u_{d}^k)\), for \(k=0,1,\ldots \), be as in the statement of Theorem 2.2. Then \((u_{1}^k,\ldots ,u_{d}^k)\) converges, as \(k\rightarrow \infty \), for every \((x,t)\in \mathbb R ^N\times [0,T]\), and the limit, denoted by \((u_1(x,t),\ldots ,u_d(x,t))\), is a viscosity solution of (1.1).
Proof
Using Lemmas 5.2 and 5.3 we can conclude that
Note that by construction and the assumptions on \((u_{1}^k,\ldots ,u_{d}^k)\) stated in Theorem 2.2, we have that
for every \(x\in \mathbb R ^N\). We now intend to prove that \((u_1,\ldots ,u_d)\) is a sub- as well as a supersolution to (1.1). We first prove that \((u_1,\ldots ,u_d)\) is a supersolution to (1.1).
As \(u_i\) is, for all \(i\in \{1,\ldots ,d\}\), a limit of an increasing sequence of continuous functions, we can immediately conclude that \(u_i\) is lower semi-continuous. From (5.18) it is clear that the terminal condition is satisfied, i.e.,
Consider \((x_0,t_0)\in \mathbb R ^N\times (0,T)\) and assume that, for some \(i\in \{1,\ldots ,d\}\) fixed, \(\phi _i\in C^{1,2}(\mathbb R ^N\times [0,T])\) satisfies
We may assume that \((x_0, t_0)\) is a strict maximum, i.e., that there exists \({\epsilon }\), \(0<{\epsilon }\ll 1\) such that
since if this is not the case, we can subtract a smooth function \(\kappa _i\) from \(\phi _i\) such that \(\kappa _i(x_0,t_0) =0\), \(\nabla \kappa _i(x_0,t_0)=0\), \(\nabla ^2 \kappa _i(x_0,t_0) = 0\), and such that \((\phi _i-\kappa _i)-u_i\) has a strict maximum at \((x_0, t_0)\). In this case we can replace \(\phi _i\) by \(\bar{\phi }_i = \phi _i-\kappa _i\). Consider the sequence \(\{u_{i}^k\}_k\). Using (5.21) and that \(u_{i}^k \rightarrow u_i\) as \(k \rightarrow \infty \) it follows, for \(k\) large enough, that there exists \((x^k, t^k)\) such that \(\phi _i-u_{i}^k\) has a (local) maximum at \((x^k,t^k)\) and such that
By assumption, \(u_{i}^k\) is a viscosity solution, and hence a viscosity supersolution, of a particular obstacle problem and hence
at \((x,t)=(x^k,t^k)\). Using continuity of \(\phi _i\), \(\psi _i\), \(c_{i,j}\) as well as the lower semi-continuity of \(u_i\) we can conclude, using (5.22) and letting \(k\rightarrow \infty \) in the last display, that
at \((x,t)=(x_0,t_0)\). The display above together with (5.19) shows that \((u_1,\ldots ,u_d)\) is a supersolution to (1.1).
It remains to prove that \((u_1,\ldots ,u_d)\) is a subsolution to (1.1). To prove this we let \((u_1^*,\ldots ,u_d^*)\) denote the upper semi-continuous envelope of \((u_1,\ldots ,u_d)\), i.e.,
$$\begin{aligned} u_i^*(x,t)=\limsup _{(y,s)\rightarrow (x,t)}u_i(y,s) \end{aligned}$$
for \((x,t)\in \mathbb R ^N\times (0,T]\) and for \(i\in \{1,\ldots ,d\}\). By construction \(u_i^*\) is upper semi-continuous and
whenever \((x,t)\in \mathbb R ^N\times (0,T]\) and for \(i\in \{1,\ldots ,d\}\). Furthermore, Lemma 5.3 shows that for every supersolution \((u_1^{+}, \ldots , u_d^+)\) of (1.1), \(u^*_i \le {(u^{+}_i)}^*\) for every \(i \in \{1,\ldots ,d\}\). Now let \(i \in \{1,\ldots ,d\}\) and \(y\in \mathbb{R }^N\) be fixed. By assumption there exists a barrier \(\{u^{+,i,y}\}\) for \((i,y)\) and hence
where we have used that \( (u_i^{+,i,y,{\epsilon }})^*= u_i^{+,i,y,{\epsilon }}\) since \( u_i^{+,i,y,{\epsilon }}\) is continuous. Since \(i\) and \(y\) were arbitrary, (5.24) shows that the terminal condition for \((u_1^*,\ldots ,u_d^*)\) holds. Consider now \((x_0,t_0)\in \mathbb R ^N\times (0,T)\) and assume that, for some \(i\in \{1,\ldots ,d\}\) fixed, \(\phi _i\in C^{1,2}(\mathbb R ^N\times [0,T])\) satisfies
Arguing as in (5.21), we can assume that the minimum is strict, i.e., that there exists \(\epsilon \), \(0<\epsilon \ll 1\) such that
Using (5.26) and arguing as above, we see that there exists a sequence of points \(\{(x^k,t^k)\}\) such that
Furthermore, by assumption, \(u_{i}^k\) is a viscosity solution, and hence a viscosity subsolution, of a particular obstacle problem and
at \((x,t)=(x^k,t^k)\). Using continuity of \(\phi _i\), \(\psi _i\), \(c_{i,j}\) we can conclude, using (5.27) and letting \(k\rightarrow \infty \) in the last display, that
at \((x,t)=(x_0,t_0)\). From the terminal condition in (5.24) and from the above display, it now follows that \((u_1^*,\ldots ,u_d^*)\) is a subsolution to (1.1). Using (5.23) and that \((u_1,\ldots ,u_d)\) is a supersolution, we can conclude, by an application of Theorem 2.1, that \((u_1^*,\ldots ,u_d^*)=(u_1,\ldots ,u_d)\). Since \(u_i\in \text{ LSC}_p(\mathbb R ^N\times [0,T])\) and \(u_i^*\in \text{ USC}_p(\mathbb R ^N\times [0,T])\) for all \(i\in \{1,\ldots ,d\}\), we can conclude that \(u_i\in C_p(\mathbb R ^N\times [0,T])\) and hence that \((u_1,\ldots ,u_d)\) is a viscosity solution of (1.1).\(\square \)
To complete the proof, we note that the existence part of Theorem 2.2 follows from Lemma 5.4 and the uniqueness from Theorem 2.1. \(\square \)
6 Existence: proof of Theorems 2.3 and 2.4
In this section we first prove Theorem 2.3 by verifying that the assumptions made in Theorem 2.3 are sufficient to ensure that we can apply Theorem 2.2. Theorem 2.4 is then proved relying on Theorem 2.3 and by constructing, for each \(i\in \{1,\ldots ,d\}\) and \(y\in \mathbb{R }^N\), a barrier \(\{u^{+,i,y}\}\) to the system in (1.1) in the sense of Definition 1. We prove Theorem 2.3 using the notion of backward stochastic differential equations and reflected backward stochastic differential equations as outlined in [15, 18, 19].
6.1 Backward stochastic differential equations: a few general theorems
Using Theorem 2.2 we see that the underlying questions of existence are reduced to establishing existence for the obstacle problem in (5.1), assuming that the data \(\psi \), \(\theta \), and \(g\) are continuous functions satisfying (5.2), and the existence of a viscosity subsolution in \(\text{ C}_p(\mathbb R ^N\times [0,T])\) to the corresponding Cauchy problem
Recall that a system of stochastic differential equations was defined in (1.3) with \(W=\{W_t\}\), a standard \(m\)-dimensional Brownian motion defined on some probability space \((\Omega ,\mathcal F ,\mathbb P )\). Let \((\mathcal F _t, t\in [0,T])\) be the natural filtration generated by \(W=\{W_t\}\), augmented with the \(\mathbb P \)-null sets of \(\mathcal F \). Let \(L^2(\Omega , \mathcal F _T, \mathbb P )\) be the space of square integrable, \(\mathcal F _T\)-measurable random variables, and let \(\left|\cdot \right|\) denote the standard Euclidean norm on \(\mathbb R ^m\). Let \(\xi \) be a real-valued random variable such that
Consider \(f: \Omega \times [0,T] \rightarrow \mathbb{R }\) such that
In addition, we let \(S_t\) be a continuous, progressively measurable one-dimensional stochastic process which satisfies
In the context of BSDEs and reflected BSDEs, \(\xi \) is referred to as the terminal data, \(f\) is the driver of the reflected BSDE, and \(S\) is the obstacle. For our purposes, it is enough to consider the special cases of the more general BSDEs and reflected BSDEs considered in [15, 18, 19]. Indeed, we consider the following BSDE with data \((\xi ,f)\),
as well as the following reflected BSDE with data \((\xi ,f,S)\),
Note that in (6.5) we solve for \((\tilde{Y}, \tilde{Z})\) and in (6.6) we solve for \((Y,Z,K)\). Given \((\xi ,f)\) as in (6.2)–(6.3), a pair \((\tilde{Y}_t, \tilde{Z}_t)\) of progressively measurable processes with values in \(\mathbb{R }\times \mathbb{R }^{ m}\) is said to be a solution to the BSDE in (6.5) if
and if (6.5) holds a.s. whenever \(0 \le t \le T\). Similarly, given \((\xi ,f,S)\) as in (6.2)–(6.4), a triple \((Y_t, Z_t, K_t)\) of progressively measurable processes with values in \(\mathbb{R }\times \mathbb{R }^{m}\times \mathbb{R }\) is said to be a solution to the reflected BSDE in (6.6) with data \((\xi ,f,S)\) if (6.6) holds, if the integrability condition above holds with \((\tilde{Y}_t, \tilde{Z}_t)\) replaced by \((Y_t, Z_t)\), and if \(Y_t\ge S_t\) holds a.s. whenever \(0 \le t \le T\), \(K_T\in L^2(\Omega , \mathcal F _T, \mathbb P )\), \(K_t\) is continuous and increasing, \(K_0=0\), and
For our purposes, we can specialize even further. Indeed, using the assumptions in (2.2), standard results on stochastic differential equations ensure the existence of a unique \(N\)-dimensional diffusion process \(X=\left(X_s^{x,t}\right)\) solving (1.3). We first have the following lemma.
Lemma 6.1
Assume (2.2) and let \(X=\left(X_s^{x,t}\right)\) be the unique strong solution to (1.3). Consider the problems in (5.1), (6.1), and assume that \(\psi \), \(\theta \), and \(g\) are continuous functions satisfying (2.16). Let \((\xi ,f,S)\) be defined as \(\xi =g(X_T)\), \(f(\omega ,t)=\psi (X_t,t)\), \(S_t=\theta (X_t,t)\). Then \((\xi ,f,S)\) satisfies (6.2)–(6.4).
Proof
This follows immediately from the fact that for any \(\eta \ge 2\), there exists a constant \(C\) such that the process \(X^{x,t}\) satisfies
$$\begin{aligned} E\Bigl [\,\sup _{t\le s\le T}\bigl |X_s^{x,t}\bigr |^{\eta }\Bigr ]\le C\left(1+|x|^{\eta }\right). \end{aligned}$$
See, e.g., [21] for a proof of this fact. \(\square \)
We now consider the problems in (6.5) and (6.6) with data \((g(X_T),\psi (X_t,t))\) and \((g(X_T),\psi (X_t,t),\theta (X_t,t))\), respectively. Indeed, in this context we consider
and
Lemma 6.2
Assume (2.2) and let \(X=\left(X_s^{x,t}\right)\) be the unique strong solution to (1.3). Consider the problem in (5.1) and assume that \(\psi \) and \(g\) satisfy (2.16). Then there exists a unique solution \((\tilde{Y},\tilde{Z})\) to the BSDE in (6.9). Furthermore, if we let \(u(x,t) := \tilde{Y}^{x,t}_t\), then \(u\) is a deterministic function, continuous on \(\mathbb R ^N\times [0,T]\), solving the Cauchy problem in (6.1) in the viscosity sense.
Proof
See Theorem 4.1 in [18] and Theorem 4.3 in [19].
Lemma 6.3
Assume (2.2) and let \(X=\left(X_s^{x,t}\right)\) be the unique strong solution to (1.3). Consider the problem in (5.1) and assume that \(\psi \), \(\theta \), and \(g\) satisfy (2.16). Then there exists a unique solution \((Y,Z,K)\) to the reflected BSDE in (6.10). Furthermore, if we let \(u(x,t) := Y^{x,t}_t\), then \(u\) is a deterministic function, continuous on \(\mathbb R ^N\times [0,T]\), solving the obstacle problem in (5.1) in the viscosity sense.
Proof
See Theorems 5.2 and 8.5 in [15].
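For orientation, with a driver \(f\) independent of \((y,z)\) as in (6.2)–(6.3), the BSDE and reflected BSDE above take the classical form of [15, 18, 19] (a sketch; in particular, the last condition on \(K\) is the standard Skorokhod flat-off condition):

```latex
\tilde{Y}_t=\xi+\int_t^T f(s)\,ds-\int_t^T \tilde{Z}_s\,dW_s,
\qquad 0\le t\le T,\\
Y_t=\xi+\int_t^T f(s)\,ds+K_T-K_t-\int_t^T Z_s\,dW_s,
\qquad Y_t\ge S_t,
\qquad \int_0^T\bigl(Y_t-S_t\bigr)\,dK_t=0.
```

The flat-off condition forces the increasing process \(K\) to act only when \(Y\) touches the obstacle \(S\), which is what makes \(u(x,t)=Y_t^{x,t}\) a solution of the obstacle problem rather than of the unconstrained Cauchy problem.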
6.2 Proof of Theorem 2.3
Using Theorem 2.2, we see that to complete the proof of Theorem 2.3, we only need to show that there exist a subsolution and unique viscosity solutions to the problems in (2.5) and (2.6), respectively. However, the existence part of these statements is now an immediate consequence of Lemmas 6.1, 6.2, and 6.3, and the existence proof can be made rigorous by induction. Uniqueness in (2.6) follows from the comparison principle of Lemma 5.1. Hence, the proof of Theorem 2.3 is complete.
6.3 Proof of Theorem 2.4
In light of Theorem 2.3, we only have to prove the existence of a barrier for (1.1) for each \(i \in \{1,\ldots ,d \}\) and \(y\in \mathbb{R }^N\). Let \(\mathcal H \), \(\psi _i\), \(c_{i,j}\), and \(g_i\) be as in the statement of Theorem 2.4. To construct an appropriate barrier, for fixed \(i \in \{1,\ldots , d\}\) and \(y \in \mathbb{R }^N\), we let, for all \(j\in \{1,\ldots ,d\}\),
where \(K\) and \(\lambda \) are nonnegative degrees of freedom and \(L\) is the Lipschitz constant of \(g\). Using the assumptions on \(\psi _i\), \(c_{i,j}\), and \(g_i\) stated in (2.16), we see, for \(K\) and \(\lambda \) large enough, that
for all \(j \in \{1,\ldots ,d\}\), where
To prove that \((u_1^{+,i,y,{\epsilon }},\ldots ,u_d^{+,i,y,{\epsilon }})\) is a viscosity supersolution to (1.1), we hence only have to verify the terminal condition and that
First note that
and hence
where the last inequality is a consequence of (1.8). Concerning the terminal value, condition (2.4) yields \(u_j^{+,i,y,{\epsilon }}(x,T) \ge g(x)\) for all \(j \in \{1, \ldots , d\}\). Hence, \((u_1^{+,i,y,{\epsilon }},\ldots ,u_d^{+,i,y,{\epsilon }})\) is a continuous viscosity supersolution to (1.1) for every \({\epsilon }>0\). Moreover, assumption (1.7) \((i)\) implies that \(u_i^{+,i,y,{\epsilon }}(y,T) = g(y) + L (e^{-\lambda T}+1){\epsilon }^{\frac{1}{2}}\) and hence \(\lim _{{\epsilon }\rightarrow 0} u_i^{+,i,y,{\epsilon }}(y,T) =g(y)\). Repeating the above argument for each \(i \in \{1,\ldots , d\}\) and \(y \in \mathbb{R }^N\), we are hence able to produce the barriers needed to apply Theorem 2.3. This completes the proof of Theorem 2.4.\(\square \)
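The explicit formula for the barrier is not reproduced above. Purely for illustration, one candidate consistent with the properties used in the proof — Lipschitz in \(x\) with constant comparable to \(L\), dominating the terminal data for \(K\) large, and equal to \(g(y)+L(e^{-\lambda T}+1)\epsilon ^{1/2}\) at \((y,T)\) when \(j=i\) — is the following (a hypothetical form, not the authors' exact choice):

```latex
u_j^{+,i,y,\epsilon}(x,t)
= g(y)+L\bigl(e^{-\lambda t}+1\bigr)\bigl(|x-y|^2+\epsilon\bigr)^{1/2}
+K(T-t)+c_{i,j}(y,T),
```

where, if (1.7) \((i)\) is the normalization \(c_{i,i}\equiv 0\), the stated value at \((y,T)\) for \(j=i\) is recovered.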
7 Regularity: proof of Theorems 2.5 and 2.6
The purpose of this section is to prove Theorems 2.5 and 2.6. Recall that operators of Kolmogorov type were introduced in Sect. 2.1. The relevant Lie group related to the operator \(\mathcal K \) in (2.18) is defined using the group law
where \( B^*\) denotes the transpose of \(B\).
In particular, the vector fields \(X_1, \ldots , X_m\), and \(Y\) are left-invariant with respect to the group law (7.1) in the sense that
for every \(\zeta \in \mathbb{R }^{N+1}\). In particular, \(\mathcal L \left( u (\zeta \circ \, \cdot \, ) \right) = \left( \mathcal L u \right) (\zeta \circ \, \cdot \, )\). Note that the operation of dilation in (2.21) can be rewritten in the form
where we set \(\alpha _1=\cdots =\alpha _{m}\! =1\), and \(\alpha _{m + m_1 + \cdots + m_{j-1}+1}\!= \cdots = \alpha _{m + m_1 + \cdots + m_j}= 2 j+1\) for \(j=1,\ldots , \kappa \). Following (2.21), we split the coordinate \(x\in \mathbb{R }^N\) as
and we define
Note that \( \Vert \delta _r z\Vert _K=r \Vert z\Vert _K\) for every \(r>0\) and \(z\in \mathbb{R }^{N+1}\). We recall the following pseudo-triangular inequality: There exists a positive constant \(\mathbf{c}\) such that
We also define the quasi-distance \(d_K\) by setting
and the ball
Note that from (7.6) it directly follows that
For any \(z\in \mathbb{R }^{N+1}\) and \(H\subset \mathbb{R }^{N+1}\), we define
and we let
Using this notation, we say that a function \(f:\mathcal{O }\rightarrow \mathbb{R }\) is Hölder continuous of exponent \(\alpha \in ]0,1]\), in short \(f\in C^{0,\alpha }_K(\mathcal{O })\), if there exists a positive constant \(c\) such that
We let
Furthermore, we denote by \(C^{2,{\alpha }}_K(\mathcal{O })\) the Hölder space defined by the following norm,
Moreover, we let \(C^0(\mathcal{O })\) denote the set of functions which are continuous on \(\mathcal{O }\). Note that any \(u\in C^{0,{\alpha }}_K(\mathcal{O })\), \(\mathcal{O }\) bounded, is Hölder continuous in the usual sense since
Let \(k\in \{0,2\}\), \(\alpha \in (0,1]\). If \(\psi \in C_{K}^{k,\alpha }(\mathcal{O }^{\prime })\) for every compact subset \(\mathcal{O }^{\prime }\) of \(\Omega \), then we write \(\psi \in C_{K,\text{ loc}}^{k,\alpha }(\Omega )\).
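For the reader's convenience, we also recall the standard explicit form of the quasi-distance and of the Hölder norm introduced above, consistent with the conventions used in the literature on Kolmogorov operators (cf. [9, 17]):

```latex
d_K(z,w)=\bigl\|w^{-1}\circ z\bigr\|_K,
\qquad
\|f\|_{C^{0,\alpha}_K(\mathcal{O})}
=\sup_{\mathcal{O}}|f|
+\sup_{\substack{z,w\in\mathcal{O}\\ z\neq w}}
\frac{|f(z)-f(w)|}{d_K(z,w)^{\alpha}} .
```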
7.1 Local interior regularity
In the following we let, for \(x\in \mathbb{R }^{N}\) and \(r>0\), \(B(x,r)\) denote the standard Euclidean open ball in \(\mathbb{R }^{N}\) with center \(x\) and radius \(r\). We let \(e_1\) be the unit vector pointing in the \(x_1\)-direction in the canonical base for \(\mathbb{R }^{N}\), and we let
Then \(Q\) is a space-time cylinder, and we also let, whenever \((x,t)\in \mathbb{R }^{N+1},\, \rho >0\),
Then \(Q_\rho (x,t)\) is the cylinder \(Q\) scaled to size \(\rho \) and translated to the point \((x,t)\). We also note that the volume of \(Q_\rho (x,t)\) is \(\rho ^{\mathbf{q} +2}\) times the volume of \(Q\), where \(\mathbf{q} \) is the homogeneous dimension in (2.22). In the following, we say that a constant depends on the operator \(\mathcal H \) when it depends on the dimension \(N\), the constant of parabolicity \(\lambda \), and the Hölder norms of the coefficients of \(\mathcal H \). Let \(\partial _PQ_R(x,t)\) denote the parabolic boundary of \(Q_R(x,t)\). Concerning local interior regularity, the following theorems can be proved.
Theorem 7.1
(Theorem 4.2 in [4]) Assume (2.17), (2.19), (2.23), (2.24). Let \(R>0\) and \((x,t)\in \mathbb{R }^{N+1}\), assume \(\psi \in C_{K}^{2,\alpha }(Q_R(x,t))\), \(g\in C^0(\partial _PQ_R(x,t))\). Then there exists a unique classical solution \(u\in C_{K,\text{ loc}}^{2,\alpha }(Q_R(x,t))\cap C^0(Q_R(x,t)\cup \partial _PQ_R(x,t))\) to the Dirichlet problem
7.2 Regularity for obstacle and Cauchy problems
Concerning regularity in obstacle problems, we will rely on results established in [9, 17]. We here let, for \(T>0\), \(R>0\),
Then \(\mathcal{O }\subset \mathbb{R }^{N+1}\) is an open subset. Let \(\partial _P\mathcal{O }\) denote the parabolic boundary of \(\mathcal{O }\), let \(g,\psi ,\theta :\bar{\mathcal{O }}\rightarrow \mathbb{R }\) be such that \(g\ge \theta \) on \(\bar{\mathcal{O }}\), and assume that \(g,\psi ,\theta \) are continuous and bounded on \(\bar{\mathcal{O }}\). Consider the following obstacle problem for the operator \(\mathcal H \),
In [9] the following interior estimate was proved.
Theorem 7.3
Assume hypotheses (2.17), (2.19), (2.23), (2.24). Let \({\alpha }\in ]0,1]\) and let \(\mathcal{O },\mathcal{O }^{\prime }\) be domains of \(\mathbb{R }^{N+1}\) such that \(\mathcal{O }^{\prime }\subset \subset \mathcal{O }\). Let \(u\) be a viscosity solution to problem (7.18) and assume that \(\psi ,\theta \in C_{K}^{0,\alpha }(\mathcal{O })\) and that \(g\) is continuous on the closure of \(\mathcal{O }\). Then \(u\in C_{K}^{0,\alpha }(\mathcal{O }^{\prime })\) and
Let
and consider \(\mathcal{O }^{\prime }_{t_{0}}=\mathcal{O }^{\prime }\cap \{t<t_{0}\}\) for every \(\mathcal{O }^{\prime }\subset \subset \mathcal{O }\). We explicitly remark that \(\mathcal{O }^{\prime }_{t_{0}}\) is not a compact subset of \(\mathcal{O }_{t_{0}}\). In [17] the following estimate was established.
Theorem 7.4
Assume (2.17), (2.19), (2.23), (2.24). Let \({\alpha }\in ]0,1]\) and let \(\mathcal{O },\mathcal{O }^{\prime }\) be domains of \(\mathbb{R }^{N+1}\) such that \(\mathcal{O }^{\prime }\subset \subset \mathcal{O }\). Let \(u\) be a viscosity solution to problem (7.18) in the domain \(\mathcal{O }_{t_{0}}\), \(t_{0}\in \mathbb{R }\), defined in (7.19), and let \(g,\psi ,\theta \in C_{K}^{0,\alpha }(\mathcal{O }_{t_{0}})\). Then \(u\in C_{K}^{0,\alpha }(\mathcal{O }^{\prime }_{t_{0}})\) and
Consider the Cauchy–Dirichlet problem
Also, the following result was proved in [17].
Theorem 7.5
Assume (2.17), (2.19), (2.23), (2.24). Let \({\alpha }\in ]0,1]\) and let \(\mathcal{O },\mathcal{O }^{\prime }\) be domains of \(\mathbb{R }^{N+1}\) such that \(\mathcal{O }^{\prime }\subset \subset \mathcal{O }\). Let \(u\) be a viscosity solution to the Cauchy–Dirichlet problem above in the domain \(\mathcal{O }_{t_{0}}\), \(t_{0}\in \mathbb{R }\), defined in (7.19), and let \(g,\psi \in C_{K}^{0,\alpha }(\mathcal{O }_{t_{0}})\). Then \(u\in C_{K}^{0,\alpha }(\mathcal{O }^{\prime }_{t_{0}})\) and
7.3 Proof of Theorems 2.5 and 2.6
Let \(\mathcal H \) be as in (1.2) and assume (2.17), (2.19), (2.23), (2.24). Assume that \(\psi _i,\ c_{i,j},\ g_i\in C_K^{0,\alpha }(\mathbb R ^N\times [0,T])\) for some \(\alpha \in (0,1]\), and that \(\{c_{i,j}\}\) satisfy (1.7); in particular, \(\psi _i,\ c_{i,j},\ g_i\) are then continuous. Let \((u_1,\ldots ,u_d)\) be a viscosity solution to the problem in (1.1) with data \(\psi _i,\ c_{i,j},\ g_i\). Let
for \(i\in \{1,\ldots ,d\}\), \(j\ne i\). Note that \(\Lambda _i\) is closed for all \(i \in \{1,\ldots ,d\}\). To prove Theorem 2.5, we first note that (1.7) implies that \(\cap _i \Lambda _i = \emptyset \) and, by Theorem 7.1, that
Consider \((x_0,t_0)\in \mathbb R ^N\times (0,T)\) and \(i_0\in \{1,\ldots ,d\}\). We want to prove that \(u_{i_0}\) is at least \(C_K^{0,\alpha }\)-regular in a small neighborhood of \((x_0,t_0)\). If \((x_0,t_0)\in (\mathbb R ^N\times [0,T))\setminus \Lambda _{i_0}\), then, by (7.21), we are done. Hence, we can assume that \((x_0,t_0)\in \Lambda _{i_0}\). Let \(I_1(i_0)\) be the set of all indices \(i_1\in \{1,\ldots ,d\}\setminus \{i_0\}\) such that
Hence, \(u_{i_0}(x_0,t_0)>-c_{i_0,j}(x_0,t_0)+u_{j}(x_0,t_0)\) whenever \(j\in \{1,\ldots ,d\}\setminus \{i_0\}\setminus I_1\) and by continuity
in a neighborhood \(\mathcal N _1\) of \((x_0,t_0)\). In particular,
where \(\theta _1(x,t)=\max _{j\in I_1}(-c_{i_0,j}(x,t)+u_j(x,t))\), and hence \(u_{i_0}\) is a solution to an obstacle problem in \(\mathcal N _1\) with obstacle \(\theta _1\). If \((x_0,t_0)\notin \cup _{i_1\in I_1}\Lambda _{i_1}\), then by (7.21) the regularity of \(\theta _1\) is determined by the regularity of the functions \(c_{i_0,j}\). Hence, \(\theta _1\) is \(C_K^{0,\alpha }\)-regular in a small neighborhood of \((x_0,t_0)\). Using Theorem 7.2, we can then conclude that \(u_{i_0}\) must also be \(C_K^{0,\alpha }\)-regular in a small neighborhood of \((x_0,t_0)\), and we are done. Therefore, assume instead that \((x_0,t_0)\in \cup _{i_1\in I_1}\Lambda _{i_1}\), and let \(J_1\) be the set of those indices \(i_1 \in I_1(i_0)\) which satisfy \((x_0,t_0)\in \Lambda _{i_1}\). Consider \(i_1\in J_1\) and let \(I_2(i_1)\) be the set of all indices \(i_2\in \{1,\ldots ,d\}\setminus \{i_1\}\) such that
Assume that \(i_0\in I_2(i_1)\) for some \(i_1\in J_1\). Then, by combining (7.22) and (7.25) we would have
i.e.,
which is impossible by (1.7). Hence, \(i_0\notin I_2(i_1)\) for all \(i_1\in J_1\), and we can conclude that while the cardinality of \(I_1\) is at most \(d-1\), the cardinality of \(I_2\) is at most \(d-2\). By the same argument as above, we can conclude that
in a neighborhood of \((x_0,t_0)\). Using this notation, we have that \(\theta _1\) can be expressed as
where
If now \((x_0,t_0)\notin \Lambda _{i_2}\) for all \(i_2\in I_2(i_1)\) with \(i_1\in J_1\), then, as above, we can again conclude that \(u_{i_0}\) must be \(C_K^{0,\alpha }\)-regular in a small neighborhood of \((x_0,t_0)\) and we are done. If not, we repeat the argument above. By finiteness of the set \(\{1,\ldots ,d\}\), and since each step of the argument strictly reduces the cardinality of the index set \(I_n\), we must sooner or later end up in a situation where \((x_0,t_0) \notin \cup _{i \in I_n} \Lambda _i\), and we can then conclude that \(u_{i_0}\) is indeed \(C_K^{0,\alpha }\)-regular in a small neighborhood of \((x_0,t_0)\). Repeating this procedure for every \(i_0 \in \{1,\ldots ,d\}\) proves Theorem 2.5.
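The contradiction derived above rests on the structural condition (1.7) on the switching costs, whose role is precisely to rule out "free loops" of switches. As a purely illustrative aside (not part of the proof; the cost matrix below is hypothetical, and we only check the no-free-loop part of (1.7)), the following sketch verifies numerically that every closed chain of distinct modes has strictly positive total switching cost, even though individual costs \(c_{i,j}\) may be negative:

```python
from itertools import permutations

def no_free_loop(c):
    """Check the no-free-loop condition on a switching-cost matrix c:
    every closed chain of distinct modes i_1 -> i_2 -> ... -> i_k -> i_1
    (k >= 2) must have strictly positive total switching cost."""
    d = len(c)
    for k in range(2, d + 1):
        # permutations enumerates each cycle several times; the redundant
        # checks are harmless for this illustration.
        for loop in permutations(range(d), k):
            total = sum(c[loop[j]][loop[(j + 1) % k]] for j in range(k))
            if total <= 0:
                return False
    return True

# Hypothetical cost matrix: some individual costs are negative,
# yet no closed loop of switches is free.
c = [[0, 2, -1],
     [1, 0, 3],
     [2, 1, 0]]
print(no_free_loop(c))                      # every loop has positive cost
print(no_free_loop([[0, -1], [1, 0]]))      # the loop 0 -> 1 -> 0 costs 0
```

In this toy check, the first matrix satisfies the condition, while in the second the round trip \(0\to 1\to 0\) costs \(-1+1=0\), which is exactly the degenerate situation the strict inequality in (1.7) excludes.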
To prove Theorem 2.6 we first recall that we are also assuming (2.4) in this case. Consider \((x_0,t_0)\in \mathbb R ^N\times (0,T]\), and note that if \(t_0<T\) then we are qualitatively in an interior situation which is covered by Theorem 2.5. Hence, we consider \((x_0,T)\) for some \(x_0\in \mathbb R ^N\), and we want to prove that \(u_{i_0}\) is \(C_K^{0,\alpha }\)-regular in a small neighborhood in \(\mathbb R ^N\times (0,T]\) of \((x_0,T)\). If
then by continuity and regularity in the Cauchy problem up to the terminal state, see Theorem 7.4, \(u_{i_0}\) is \(C_K^{0,\alpha }\)-regular in a small neighborhood in \(\mathbb R ^N\times (0,T]\) of \((x_0,T)\). Hence, we are done in this case. Assume therefore instead that
However, if we now argue as in the proof of Theorem 2.5, using Theorem 7.3 instead of Theorem 7.2, we again, sooner or later, end up in a situation where \(u_{i_0}\) is \(C_K^{0,\alpha }\)-regular in a small neighborhood in \(\mathbb R ^N\times (0,T]\) of \((x_0,T)\). Repeating this for every \(i_0 \in \{1,\ldots ,d\}\) proves Theorem 2.6.
References
Arnarson, T., Djehiche, B., Poghosyan, M., Shahgholian, H.: A PDE approach to regularity of solutions to finite horizon optimal switching problems. Nonlinear Anal. Theory Methods Appl. 71, 6054–6067 (2009)
Biswas, I.H., Jakobsen, E.R., Karlsen, K.H.: Viscosity solutions for a system of integro-PDEs and connections to optimal switching and control of jump-diffusion processes. Appl. Math. Optim. 62, 47–80 (2010)
Crandall, M.G., Ishii, H., Lions, P.-L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. 27, 1–67 (1992)
Di Francesco, M., Polidoro, S.: Schauder estimates, Harnack inequality and Gaussian lower bound for Kolmogorov-type operators in non-divergence form. Adv. Differ. Equ. 11, 1261–1320 (2006)
Djehiche, B., Hamadene, S.: On a finite horizon starting and stopping problem with risk of abandonment. Int. J. Theor. Appl. Finance 12, 523–543 (2009)
Djehiche, B., Hamadene, S., Popier, A.: A finite horizon optimal multiple switching problem. SIAM J. Control Optim. 48, 2751–2770 (2010)
El-Asri, B., Fakhouri, I.: Optimal multi-modes switching with the switching cost not necessarily positive. arXiv:1204.1683v1 (2012)
El-Asri, B., Hamadène, S.: The finite horizon optimal multi-modes switching problem: the viscosity solution approach. Appl. Math. Optim. 60, 213–235 (2009)
Frentz, M., Nyström, K., Pascucci, A., Polidoro, S.: Optimal regularity in the obstacle problem for Kolmogorov operators related to American Asian options. Mathematische Annalen 347, 805–838 (2010)
Hamadène, S., Morlais, M.A.: Viscosity solutions of systems of PDEs with interconnected obstacles and multi-modes switching problem. arXiv:1104.2689v2 (2011)
Hu, Y., Tang, S.: Multi-dimensional BSDE with oblique reflection and optimal switching. Probab. Theory Rel. Fields 147, 89–121 (2010)
Hamadène, S., Zhang, J.: Switching problem and related system of reflected backward SDEs. Stoch. Process. Appl. 120, 403–426 (2010)
Hörmander, L.: Hypoelliptic second order differential equations. Acta Mathematica 119, 147–171 (1967)
Ishii, H., Koike, S.: Viscosity solutions of a system of nonlinear second-order elliptic PDEs arising in switching games. Funkcialaj Ekvacioj 34, 143–155 (1991)
El Karoui, N., Kapoudjian, C., Pardoux, E., Peng, S., Quenez, M.C.: Reflected solutions of backward SDE's and related obstacle problems for PDE's. Ann. Probab. 25, 702–737 (1997)
Lanconelli, E., Polidoro, S.: On a class of hypoelliptic evolution operators. Rend. Sem. Mat. Univ. Politec. Torino 52, 29–63 (1994)
Nyström, K., Pascucci, A., Polidoro, S.: Regularity near the initial state in the obstacle problem for a class of hypoelliptic ultraparabolic operators. J. Differ. Equ. 249, 2044–2060 (2010)
Pardoux, E., Peng, S.: Adapted solutions of a backward stochastic differential equation. Syst. Control Lett. 14, 55–61 (1990)
Pardoux, E., Peng, S.: Backward stochastic differential equations and quasilinear parabolic partial differential equations. In: Stochastic Partial Differential Equations and Their Applications. Lecture Notes in Control and Information Sciences, vol. 176. Springer, Berlin (1992)
Pham, H., Vath, V.L., Zhou, X.Y.: Optimal switching over multiple regimes. SIAM J. Control Optim. 48, 2217–2253 (2009)
Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Springer, Berlin (1991)
Niklas L. P. Lundström and Marcus Olofsson were financed by Jan Wallanders och Tom Hedelius Stiftelse and Tore Browaldhs Stiftelse through the project Optimal switching problems and their applications in economics and finance, P2010-0033:1.
Lundström, N.L.P., Nyström, K. & Olofsson, M. Systems of variational inequalities in the context of optimal switching problems and operators of Kolmogorov type. Annali di Matematica 193, 1213–1247 (2014). https://doi.org/10.1007/s10231-013-0325-y
Keywords
- System
- Variational inequality
- Existence
- Viscosity solution
- Obstacle problem
- Regularity
- Kolmogorov equation
- Ultraparabolic
- Hypoelliptic
- Backward stochastic differential equation
- Reflected backward stochastic differential equation
- Optimal switching problem