# Min max formulas for nonlocal equations

(30 Oct 2016)

Recently, Russell Schwab and I finished a project related to the question of which operators satisfy the global comparison principle. The preprint can be found here.

The manuscript turned out to be longer than we had anticipated. In part, this is due to our having to revisit several facts about Whitney extensions for functions on a Riemannian manifold, which took considerable extra space. We could not find a reference for Whitney extensions on manifolds that stated what we needed explicitly, but our proofs follow closely the well-known ideas for the case of $\mathbb{R}^d$. In any case, we intend to write a shorter paper presenting our result in the case of $\mathbb{R}^d$ only, where several technical matters, including the Whitney extension, become much simpler.

This post will be an even shorter discussion of the ideas in the paper. I might do a later post discussing matters in greater generality (operators in a manifold or in a metric space). For now, this post will deal only with operators acting on $C^2$ functions in $\mathbb{R}^d$.

(Let me stress that the case of operators on a Riemannian manifold merits attention for several reasons: one is the study of Dirichlet-to-Neumann maps for elliptic equations, and another is that many free boundary problems can be posed as parabolic integro-differential equations on a manifold; both are topics for another post.)

(1) The global comparison property

We are considering scalar equations in $\mathbb{R}^d$; concretely, functions $u:\mathbb{R}^d\to\mathbb{R}$ which solve the equation $$I(u,x) = 0 \;\text{ in }\; \mathbb{R}^d,$$

where $I$ is some (possibly nonlinear) mapping between functions. The operators $I$ we are interested in are (heuristically) those for which one expects the comparison principle to hold, which roughly speaking says that $$I(u,\cdot)\geq I(v,\cdot) \text{ in } \mathbb{R}^d \;\text{ and }\; u\leq v \text{ near infinity} \;\Longrightarrow\; u\leq v \text{ in } \mathbb{R}^d.$$

This is the case, for instance, for the Laplace operator. If one looks at the proof of the comparison principle for harmonic functions, then one sees that the crucial fact is the following: $$\text{if } v \text{ touches } u \text{ from above at } x_0, \text{ then } \Delta u(x_0) \leq \Delta v(x_0).$$

This motivates the following definition.

Definition: An operator $I:C^2_b(\mathbb{R}^d)\to C^0_b(\mathbb{R}^d)$ is said to have the global comparison property (GCP) if whenever $u,v$ are such that $v$ touches $u$ from above at some $x_0\in\mathbb{R}^d$, we have $I(u,x_0)\leq I(v,x_0)$.

Recall that "$v$ touches $u$ from above at $x_0$" means that $$u(x) \leq v(x) \;\text{ for all } x\in\mathbb{R}^d, \;\text{ and }\; u(x_0) = v(x_0).$$

By its very definition, the class of equations having the GCP is the class of equations that are amenable to treatment by methods based on the comparison property (i.e. barrier arguments, viscosity solutions, Perron's method, etc).
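Before moving on, here is a quick numerical sanity check of the GCP for the Laplacian (a sketch only; the grid, the functions, and the touching point below are all made up for the demo):

```python
# Finite-difference check: if v touches u from above at x0, then
# Delta u(x0) <= Delta v(x0). (1D, central differences; demo values only.)
import numpy as np

h = 1e-3
x = np.arange(-1.0, 1.0 + h, h)
i0 = len(x) // 2                       # touching point x0 = x[i0] ~ 0

u = np.cos(x)
v = np.cos(x) + (x - x[i0])**2         # v >= u everywhere, v(x0) = u(x0)

second_diff = lambda w: (w[i0 + 1] - 2 * w[i0] + w[i0 - 1]) / h**2
lap_u, lap_v = second_diff(u), second_diff(v)
assert lap_u <= lap_v                  # the GCP inequality at x0
```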

Question: Is there a simple characterization of the class of operators which have the GCP?

(2) A few Examples

1) $I u= \Delta u(x)$.

2) $Iu = H(\nabla u(x))$, for some differentiable function $H:\mathbb{R}^d\to\mathbb{R}$.

3) The operator known as "the fractional Laplacian", $Lu = -(-\Delta)^{\alpha/2}u$ with $\alpha \in (0,2)$, also written for $u\in C^2_b(\mathbb{R}^d)$ by the formula $$Lu(x) = c_{d,\alpha}\int_{\mathbb{R}^d} \frac{u(x+y)+u(x-y)-2u(x)}{|y|^{d+\alpha}}\;dy.$$

4) Any nonnegative finite Borel measure $\mu$ defines such an operator, via $$Iu(x) = \int_{\mathbb{R}^d} u(x+y)-u(x)\;\mu(dy).$$

5) If $L_1$ and $L_2$ are two operators having the GCP, then $$\min\{L_1u,L_2u\} \;\text{ and }\; \max\{L_1u,L_2u\}$$ also have the GCP.

6) Given $u \in C^2_b(\mathbb{R}^{d-1})$, let $U_u$ denote the unique bounded solution to the elliptic Dirichlet problem $$\begin{cases} \Delta U_u = 0 & \text{in } \mathbb{R}^{d-1}\times(0,\infty),\\ U_u = u & \text{on } \mathbb{R}^{d-1}\times\{0\}.\end{cases}$$

Then, $I(u,x):= \partial_{d} U_u(x,0)$ satisfies the global comparison property.
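Most of these examples can be tested numerically. The snippet below (a toy discretization; the kernels and the grid are invented for the demo) checks examples 4) and 5) at once: discrete operators $Lu(x)=\sum_y k(x,y)(u(y)-u(x))$ with $k\geq 0$ satisfy the GCP, and so do the pointwise min and max of two of them.

```python
# Discrete illustration of examples 4) and 5): operators of the form
#   L u(x) = sum_y k(x,y) (u(y) - u(x)),  k >= 0,
# have the GCP, and so do pointwise min/max of two such operators.
import numpy as np

rng = np.random.default_rng(0)
n = 50
K1, K2 = rng.random((n, n)), rng.random((n, n))   # nonnegative kernels

def L(K, u):
    return K @ u - K.sum(axis=1) * u   # sum_y k(x,y)(u(y) - u(x))

x = np.linspace(-1, 1, n)
i0 = n // 2
u = np.sin(3 * x)
v = u + (x - x[i0])**2                 # v touches u from above at x[i0]

for op in (lambda w: L(K1, w),
           lambda w: L(K2, w),
           lambda w: np.minimum(L(K1, w), L(K2, w)),
           lambda w: np.maximum(L(K1, w), L(K2, w))):
    assert op(u)[i0] <= op(v)[i0] + 1e-12   # GCP at the touching point
```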

(3) A warm up exercise

Lemma: Suppose $L$ is a bounded linear map $L:C^2_b(\mathbb{R}^d)\to C_b^0(\mathbb{R}^d)$ such that $Lu(x_0)\leq 0$ for any $u\in C^2_b(\mathbb{R}^d)$ having a nonnegative local maximum at $x_0$. Then $$Lu(x) = \text{tr}(A(x)D^2u(x))+b(x)\cdot Du(x)+c(x)u(x)$$ where $A(x)\geq 0$ and $c(x)\leq 0$.

Proof: Fix $u\in C^2_b(\mathbb{R}^d)$ and $x_0\in\mathbb{R}^d$, and let $P(x)$ denote the second order Taylor polynomial for $u$ at $x_0$.

For any $\delta>0$ one can construct a function $\eta \in C^2_b(\mathbb{R}^d)$ with $\| \eta\|_{C^2(\mathbb{R}^d)} \leq \delta$ such that $\eta(x_0)=0$ and $$u \leq P + \eta \;\text{ in a neighborhood of } x_0.$$

In which case $u - P - \eta$ has a local maximum equal to zero at $x_0$, and the assumption on $L$ yields $$L(u,x_0) \leq L(P,x_0) + L(\eta,x_0) \leq L(P,x_0) + C\delta.$$

Since $\delta>0$ was arbitrary (and one can argue symmetrically, touching $u$ from below), it follows that $$L(u,x_0) = L(P,x_0);$$

in other words, for every $x \in \mathbb{R}^d$ we have that $L(u,x)$ is a (linear) function of the second order polynomial of $u$ at $x$. In particular, there must be a symmetric matrix $A(x)$, a vector $b(x)$ and a scalar $c(x)$ such that $$L(u,x) = \text{tr}(A(x)D^2u(x))+b(x)\cdot Du(x)+c(x)u(x).$$

From here it is not difficult to see that $A(x)\geq 0$ and $c(x)\leq 0$. ∎
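The last step can also be checked numerically: if $A\geq 0$ and $c\leq 0$, then $L$ of the above form is nonpositive at any nonnegative local maximum, since there $Du=0$, $D^2u\leq 0$, and $u\geq 0$. A random-sample sketch (dimensions and distributions chosen arbitrarily for the demo):

```python
# If A >= 0 and c <= 0, then L u(x0) = tr(A D^2u(x0)) + b . Du(x0) + c u(x0)
# is <= 0 at a nonnegative local max (where Du = 0, D^2u <= 0, u >= 0).
import numpy as np

rng = np.random.default_rng(1)
d = 4
for _ in range(100):
    M = rng.standard_normal((d, d)); A = M @ M.T      # A >= 0 (psd)
    N = rng.standard_normal((d, d)); H = -(N @ N.T)   # D^2u(x0) <= 0 (nsd)
    u0 = rng.random()                                 # u(x0) >= 0
    c = -rng.random()                                 # c(x0) <= 0
    # b . Du(x0) vanishes at a local max, so only these two terms remain:
    assert np.trace(A @ H) + c * u0 <= 1e-9
```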

Keeping in mind the proof of the above lemma, think about the following:

-What can be said if, instead of asking $L(u,x_0)\leq 0$ at every local maximum $x_0$, we only assume this happens at global maxima?

-What if the operator is not linear?

The first question was answered by P. Courrège in the 60's, and the answer to this (seemingly) purely analytical question leads to an important class of operators from the theory of stochastic processes.

Definition: By a Lévy measure we will refer to a Borel measure $\nu$ on $\mathbb{R}^d \setminus \{ 0 \}$ which may not have finite total mass, but is at least such that $$\int_{\mathbb{R}^d\setminus\{0\}} \min\{1,|y|^2\}\;\nu(dy) < +\infty.$$

Theorem (Courrège): A linear operator $L:C^2_b(\mathbb{R}^d)\to C_b^0(\mathbb{R}^d)$ has the global comparison property if and only if it is of the form $$L = L_{\text{loc}}+L_{\text{Lévy}},$$ where the operators $L_{\text{loc}}$ and $L_{\text{Lévy}}$ are given by $$L_{\text{loc}}(u,x) = \text{tr}(A(x)D^2u(x))+b(x)\cdot D u(x) + c(x) u(x),$$ $$L_{\text{Lévy}}(u,x) = \int_{\mathbb{R}^d\setminus \{ 0\}} \left( u(x+y)-u(x)-\chi_{B_1(0)}(y)\, \nabla u(x)\cdot y \right) \nu(x,dy),$$ where $A(x)\geq 0$, $A,b,c\in L^\infty$, and $\{ \nu(x,dy) \}_{x\in\mathbb{R}^d}$ is a family of Lévy measures.
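To see the Lévy part in action, here is a rough quadrature (1D, $\nu(dy)=|y|^{-1-\alpha}dy$ with $\alpha=1$; the truncation and step size are picked arbitrarily) evaluating $L_{\text{Lévy}}u$ at a global max of $u$, where the GCP forces a nonpositive value. Since $u'(0)=0$, the gradient compensator drops out:

```python
# Rough quadrature for a 1D Levy-type operator with nu(dy) = |y|^(-1-alpha) dy:
#   L u(x) = int ( u(x+y) - u(x) - 1_{|y|<1} u'(x) y ) |y|^(-1-alpha) dy,
# evaluated at a global max x0 of u, where the GCP forces L u(x0) <= 0.
import numpy as np

alpha = 1.0
h = 1e-4
k = np.arange(1, 200000)
y = np.concatenate([-(k - 0.5) * h, (k - 0.5) * h])   # midpoints, |y| up to ~20

u = lambda z: np.exp(-z**2)
x0 = 0.0                               # global max of u, and u'(x0) = 0,
                                       # so the compensator term vanishes
integrand = (u(x0 + y) - u(x0)) / np.abs(y)**(1 + alpha)
Lu = np.sum(integrand) * h             # midpoint rule
assert Lu < 0                          # consistent with the GCP at a max
```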

(4) A new min-max formula

Theorem (joint with Russell Schwab): Let $I:C^2_b(\mathbb{R}^d)\to C_b^\gamma(\mathbb{R}^d)$ ($\gamma\in (0,1)$) be a Lipschitz continuous map which satisfies the GCP, and for which there exist a modulus of continuity $\omega(\cdot)$ and a constant $C$ such that $$\|I(u)-I(v)\|_{L^\infty(B_r)} \leq C\|u-v\|_{C^2(B_{2r})}+C\omega(r)\|u-v\|_{L^\infty(\mathbb{R}^d)}.$$ Then there exist i) a (uniformly continuous) family of linear operators $$L_{ab}:C^2_b(\mathbb{R}^d) \to C^\gamma_b(\mathbb{R}^d),$$ each having the global comparison property, and ii) a (bounded) family of functions $f_{ab}\in C^\gamma_b(\mathbb{R}^d)$, such that for any function $u \in C^2_b(\mathbb{R}^d)$ we have the formula $$I(u,x) = \min\limits_{a} \max \limits_{b} \{ f_{ab}(x)+ L_{ab}(u,x)\}.$$
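A degenerate but instructive instance of such a formula: a Pucci-type extremal operator $I(u,x)=\min_{a\in[1,2]} a\,u''(x)$ (a toy choice of mine, where no max is needed) is literally a min of linear GCP operators, and the representation can be verified on a grid:

```python
# Toy min-max representation (1D, finite differences):
#   I(u, x) = min_{a in [1,2]} a * u''(x)
# is a min over the linear GCP operators L_a u = a u''.
import numpy as np

h = 1e-3
x = np.arange(-2.0, 2.0 + h, h)
u = np.sin(x)

upp = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2   # u'' via central diffs
upp = upp[1:-1]                                         # drop wrapped endpoints

a_grid = np.linspace(1.0, 2.0, 11)                      # includes both endpoints
I_minmax = np.min(np.outer(a_grid, upp), axis=0)        # min_a L_a u
I_direct = np.minimum(upp, 2 * upp)                     # closed form of the min
assert np.allclose(I_minmax, I_direct)
```

The min over $a\in[1,2]$ is attained at an endpoint, which is why the small grid of values of $a$ reproduces it exactly.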

Remark: If one asks that $I$ be Lipschitz as a map between the spaces $C^\beta_b(\mathbb{R}^d)$ and $C^\gamma_b(\mathbb{R}^d)$ (where now $\beta,\gamma\in(0,1)$), then one can say more about the terms appearing in the min-max formula; in fact, in that case the theorem says that

(5) Elementary proof when $I$ is Fréchet differentiable

Let us suppose that $I:C^2_b(\mathbb{R}^d) \to C^0_b(\mathbb{R}^d)$ is Fréchet differentiable.

1. Fix $u,v\in C^2_b(\mathbb{R}^d)$ and, for $t\in[0,1]$, let $$u_t := (1-t)v + tu.$$

Then $$I(u)-I(v) = \int_0^1 \frac{d}{dt} I(u_t)\;dt.$$

Then, the chain rule says that $$\frac{d}{dt} I(u_t) = DI(u_t)(u-v).$$

That is, if we define an operator $L$ by $\int_0^1 DI(u_t)\;dt$, then $$I(u)-I(v) = L(u-v).$$

2. For any $u\in C^2_b(\mathbb{R}^d)$, the linear operator $DI(u)$ is a continuous linear map from $C^2_b(\mathbb{R}^d)$ to $C^0_b(\mathbb{R}^d)$ which has the GCP.

3. The GCP is closed under convex combinations. Therefore, if we define $$\mathcal{D}(I) := \overline{\text{conv}}\,\{ DI(u) \;:\; u\in C^2_b(\mathbb{R}^d) \},$$

then every element of $\mathcal{D}(I)$ has the GCP.

4. Thanks to step 1), for any $u,v\in C^2_b(\mathbb{R}^d)$ $$I(u,x) \leq I(v,x) + \max\limits_{L\in\mathcal{D}(I)} L(u-v,x).$$

Since we have equality for $v=u$, it follows that $$I(u,x) = \min\limits_{v\in C^2_b(\mathbb{R}^d)} \max\limits_{L\in\mathcal{D}(I)} \{ I(v,x) + L(u-v,x)\}.$$

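These steps can be checked numerically for a concrete smooth map (the operator below, built from increasing functions of nearest-neighbor differences on a periodic grid, is a toy example of mine; its derivative has nonnegative off-diagonal coefficients, hence the GCP):

```python
# Numerical check of step 1):
#   I(u) - I(v) = int_0^1 DI(u_t)(u - v) dt,   u_t = (1-t)v + t u,
# for a smooth finite-dimensional map with the GCP.
import numpy as np

def I(u):   # toy GCP operator on a periodic grid
    return np.tanh(np.roll(u, -1) - u) + np.tanh(np.roll(u, 1) - u)

def DI(u, h):   # directional derivative of I at u in direction h
    s1 = 1 / np.cosh(np.roll(u, -1) - u)**2   # nonnegative coefficients
    s2 = 1 / np.cosh(np.roll(u, 1) - u)**2
    return s1 * (np.roll(h, -1) - h) + s2 * (np.roll(h, 1) - h)

rng = np.random.default_rng(2)
u, v = rng.standard_normal(20), rng.standard_normal(20)

m = 4000
ts = (np.arange(m) + 0.5) / m                         # midpoint rule on [0,1]
integral = sum(DI((1 - t) * v + t * u, u - v) for t in ts) / m
assert np.allclose(I(u) - I(v), integral, atol=1e-5)  # mean value identity
```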
(6) A finite dimensional version

Let $G$ be a finite set, and let $C(G)$ denote the space of real-valued functions on $G$.

Lemma: Let $I:C(G)\to C(G)$ be a Lipschitz map satisfying the GCP. Then $$I(u,x) = \min \limits_{v \in C(G)} \max \limits_{L \in \mathcal{D}I} \{ I(v,x) + L(u-v,x)\},$$ where each $L$ is a linear map from $C(G)$ to $C(G)$ having the form $$L(u,x) = c(x)u(x) + \sum \limits_{y\in G} (u(y)-u(x))k(x,y),$$ for some $c\in C(G)$ and some $k:G\times G\to\mathbb{R}$ with $k(x,y)\geq 0$ for all $x$ and $y$ in $G$.
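The structure of the linear pieces in this lemma is easy to verify directly: on a finite set, a linear map given by a matrix with nonnegative off-diagonal entries can always be rewritten in the above form (the matrix below is random, just for illustration):

```python
# A matrix L with nonnegative off-diagonal entries is exactly of the form
#   L(u, x) = c(x) u(x) + sum_y k(x,y) (u(y) - u(x)),  k >= 0,
# with k = off-diagonal part and c(x) = the x-th row sum.
import numpy as np

rng = np.random.default_rng(3)
n = 6
M = rng.random((n, n))                        # nonnegative off-diagonal entries
np.fill_diagonal(M, rng.standard_normal(n))   # diagonal is unconstrained

k = M - np.diag(np.diag(M))                   # k(x,y), y != x, all >= 0
c = M.sum(axis=1)                             # c(x) = sum_y M[x,y]

u = rng.standard_normal(n)
Lu = c * u + k @ u - k.sum(axis=1) * u        # c(x)u(x) + sum_y k(x,y)(u(y)-u(x))
assert np.allclose(Lu, M @ u)                 # same operator in both forms
```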

The key difference now is that in this case $I:C(G)\to C(G)$ amounts to a Lipschitz map between two finite dimensional vector spaces, and here we have Clarke’s non-smooth calculus at our disposal.

Definition (Clarke Jacobian): Let $I:C(G)\to C(G)$ be a Lipschitz continuous map and $f \in C(G)$. The Clarke Jacobian of $I$ at $f$ is defined as the set $$\partial I(f) := \text{conv}\left\{ \lim\limits_{n\to\infty} DI(f_n) \;:\; f_n \to f,\; I \text{ differentiable at each } f_n \right\}.$$

Finally, we will consider the total Clarke Jacobian of $I$, denoted $\mathcal{D}I$, and defined by $$\mathcal{D}I := \overline{\text{conv}} \bigcup\limits_{f\in C(G)} \partial I(f).$$

Lemma (mean value theorem): Let $I:C(G)\to C(G)$ be a Lipschitz mapping. For any $u,v\in C(G)$, there exists some $L\in \mathcal{D}I$ such that $$I(u)-I(v) = L(u-v).$$

Corollary: Let $I$ be as before. Then for any $u,v\in C(G)$ and any $x\in G$, we have $$I(u,x) \leq I(v,x) + \max \limits_{L \in \mathcal{D}I} L(u-v,x).$$
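For a concrete nonsmooth example, take $I(u)=\min(L_1u, L_2u)$ pointwise, for two linear GCP operators (random matrices below, invented for the demo); the Clarke Jacobian is built from $L_1$, $L_2$ and their convex combinations, so the corollary's inequality can be tested directly:

```python
# Checking the corollary on a nonsmooth GCP map: I(u) = min(L1 u, L2 u)
# pointwise. Here the relevant Jacobians are L1, L2 (and convex combinations),
# so the corollary reads  I(u,x) <= I(v,x) + max( L1(u-v,x), L2(u-v,x) ).
import numpy as np

rng = np.random.default_rng(4)
n = 8

def gcp_matrix():
    M = rng.random((n, n))                    # nonnegative off-diagonal => GCP
    np.fill_diagonal(M, rng.standard_normal(n))
    return M

L1, L2 = gcp_matrix(), gcp_matrix()
I = lambda w: np.minimum(L1 @ w, L2 @ w)

for _ in range(100):
    u, v = rng.standard_normal(n), rng.standard_normal(n)
    rhs = I(v) + np.maximum(L1 @ (u - v), L2 @ (u - v))
    assert np.all(I(u) <= rhs + 1e-12)        # the corollary, componentwise
```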

Proposition: If $I:C(G)\to C(G)$ is Lipschitz and has the GCP, then each $L \in \mathcal{D}(I)$ has the GCP.

The proof of this last proposition is quite simple. First, let $u\in C(G)$ be a point of differentiability for $I(\cdot)$, and let $L_u$ denote the Fréchet derivative of $I(\cdot)$ at $u$.

Then, let $v_1,v_2 \in C(G)$ be such that $v_1\leq v_2$ in $G$ with equality at some $x_0 \in G$. Then for every $t>0$, $u+tv_2$ touches $u+tv_1$ from above at $x_0$, so the GCP gives $$I(u+tv_1,x_0) \leq I(u+tv_2,x_0).$$

Then $$\frac{I(u+tv_1,x_0)-I(u,x_0)}{t} \leq \frac{I(u+tv_2,x_0)-I(u,x_0)}{t}.$$

Using the fact that $I(\cdot)$ is Fréchet differentiable at $u$, it follows, upon letting $t\to 0^+$, that $$L_u(v_1,x_0) \leq L_u(v_2,x_0),$$ that is, $L_u$ has the GCP. The general case follows, since the GCP is preserved under convex combinations and limits.

(7) The min-max formula via finite dimensional approximations

With this finite dimensional result at hand, one can try to prove the result by approximating the space $C^2_b(\mathbb{R}^d)$ by finite dimensional subspaces, obtained roughly as follows: one constructs an increasing sequence of finite graphs $G_n$, which converge to $\mathbb{R}^d$.

Consider the following increasing sequence of discrete sets in $\mathbb{R}^d$: $$G_n := 2^{-n}\mathbb{Z}^d \cap B_{2^n}(0);$$

in terms of these sets, we define restriction (projection) operators $$\pi_n : C^0_b(\mathbb{R}^d) \to C(G_n),\;\;\; \pi_n u := u|_{G_n}.$$

Then, for each $n$, we define a finite dimensional approximation to $I$, $I_n :C^2_b(\mathbb{R}^d) \to C^0_b(\mathbb{R}^d)$, by

Now, we can think of $I_n$ also as a (Lipschitz) map $C(G_n) \to C(G_n)$, and apply the min-max formula in this case:

Lemma: For every $n \in \mathbb{N}$ and $x\in G_n$, we have, for any $u \in C^2_b(\mathbb{R}^d)$, $$I_n(u,x) = \min\limits_{v \in C^2_b(\mathbb{R}^d)} \max \limits_{L \in \mathcal{D}(I_n)} \{ I_n(v,x) + L(u-v,x) \};$$ moreover, for each $L \in \mathcal{D}(I_n)$, there are some $c\in C(G_n)$ and measures $\{ m(x,dy) \}_{x\in G_n}$ such that $$L(v,x) = c(x)v(x) + \int_{\mathbb{R}^d} v(y)-v(x)\; m(x,dy),\;\;\;\forall\;x \in G_n.$$ In fact, for each $x\in G_n$ there is some nonnegative function $k:G_n\times G_n \to \mathbb{R}$ such that $$m(x,dy) = \sum \limits_{z \in G_n\setminus \{x\}} k(x,z)\delta_z(dy).$$