= Bootstrapping =
Bootstrapping is one of the simplest methods to prove regularity of solutions to a nonlinear equation. The general idea is described below.

Assume that $u$ is a solution to some nonlinear equation of any kind. By being a solution to the nonlinear equation, it is also a solution to a linear equation whose coefficients depend on $u$. Typically this is some form of linearization of the equation. If an a priori estimate is known on $u$, then that provides some assumption on the coefficients of the linear equation that $u$ satisfies. The linear equation, in turn, may provide a new regularity estimate for the solution $u$. If this regularity estimate is stronger than the original a priori estimate, then we can start over and repeat the process to obtain better and better regularity estimates.

This is the most elementary example of [[perturbation methods]].

== Examples ==

=== A simple example ===

Imagine that we have a general semilinear equation of the form
\[ u_t + (-\Delta)^s u = H(u,Du), \]
where $H$ is some smooth function and $s \in (1/2,1]$. Assume that a solution $u$ is known to be Lipschitz. Therefore, $u$ coincides with the solution $v$ of the linear equation
\begin{align*}
v(0,x) &= u(0,x), \\
v_t + (-\Delta)^s v &= H(u,Du).
\end{align*}
Since the right hand side $H(u,Du)$ is bounded, the solution $v$ must be $C^{2s}$ in space. Since $2s > 1$, we have improved our regularity on $u$ (which is the same as $v$). But now $H(u,Du) \in C^{2s-1}$ and therefore $v \in C^{4s-1}$. Continuing the iteration, we obtain that $u \in C^\infty$.

The above example is relatively simple because the only estimates used are an assumption that $u$ is Lipschitz and the classical estimates for the fractional heat equation. The bootstrapping method usually works when the equation is semilinear and the a priori estimate or assumption on the solution has subcritical scaling.

=== A slightly more complicated example ===

Imagine now that we have a fractional conservation law of the form
\[ u_t + (-\Delta)^s u + \mathrm{div} \, F(u) = 0, \]
where $F$ is some smooth vector valued function and $s \in (0,1/2)$. Assume that a solution $u$ is known to be $C^\alpha$ for some $\alpha>1-2s$. As before, $u$ coincides with the solution $v$ of a linear equation whose coefficients depend on $u$. However, the equation is now more complicated.
\begin{align*}
v(0,x) &= u(0,x), \\
v_t + (-\Delta)^s v + b(x,t) \cdot \nabla v &= 0,
\end{align*}
where $b(x,t) = F'(u)$. Since $F$ is smooth and $u \in C^\alpha$ in space, we have that $b \in C^\alpha$ in space, which implies that $v \in C^{1,\alpha}$ in space by applying estimates for linear [[drift-diffusion equations]]. Therefore $u \in C^{1,\alpha}$. Differentiating the equation and repeating the procedure we get $u \in C^{2,\alpha}$, $u \in C^{3,\alpha}$, etc.

The procedure is slightly more complicated because the linear equation has variable coefficients and a less standard estimate for linear equations is used. Still, the outline of the idea is the same. Bootstrap arguments are more or less automatic once we have an a priori estimate which is sufficient for a stronger regularity result for linear equations with coefficients.

= Lecture notes on nonlocal equations =

Indication: A star (*) in an exercise indicates that I don't know how to solve it.

=Lecture 1=

==Definitions: linear equations==

The first lecture serves as an overview of the subject and to familiarize ourselves with the type of equations under study.

The aim of the course is to see some regularity results for elliptic equations. Most of these results can be generalized to parabolic equations as well. However, this generalization presents extra difficulties that involve nontrivial ideas.

The prime example of an elliptic equation is the Laplace equation.
\[ \Delta u(x) = 0 \text{ in } \Omega.\]
 
Elliptic equations are those which have properties similar to those of the Laplace equation. This is a vague definition.
 
The class of fully nonlinear elliptic equations of second order has the form
\[ F(D^2u, Du, u, x)=0 \text{ in } \Omega.\]
for a function $F$ such that
\[ \frac{\partial F}{\partial M_{ij}} > 0 \text{ and } \frac{\partial F}{\partial u} \leq 0.\]
 
These are the minimal monotonicity conditions for which you can expect a [[comparison principle]] to hold. The appropriate notion of weak solution, [[viscosity solutions]], is based on this monotonicity.
 
What is the Laplacian? The most natural (coordinate independent) definition may be
\[ \Delta u(x) = \lim_{r \to 0} \frac c {r^{n+2}} \int_{B_r} u(x+y)-u(x) dy.\]
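To see, at least formally, that this limit is a multiple of the usual Laplacian, take $u \in C^3$ and expand:
\[ \int_{B_r} u(x+y)-u(x) \, dy = \int_{B_r} \left( Du(x) \cdot y + \tfrac 12 \, y^t D^2 u(x) \, y + O(|y|^3) \right) dy = \frac{|\partial B_1|}{2n(n+2)} \, r^{n+2} \, \Delta u(x) + O(r^{n+3}), \]
since the gradient term integrates to zero by symmetry and $\int_{B_r} y_i y_j \, dy = \delta_{ij} \frac{|\partial B_1|}{n(n+2)} r^{n+2}$. Dividing by $r^{n+2}$ and letting $r \to 0$ recovers $\Delta u(x)$, with the normalizing constant $c = 2n(n+2)/|\partial B_1|$.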
 
A simple (although rather uninteresting) example of a nonlocal equation would be the following non infinitesimal version of the Laplace equation
\[ \frac c {r^{n+2}} \int_{B_r} u(x+y)-u(x) dy = 0 \text{ for all } x \in \Omega.\]
 
The equation tells us that the value $u(x)$ equals the average of $u$ in the ball $B_r(x)$. A more general integral equation is a ''weighted'' version of the above.
\[ \int_{\R^n} (u(x+y)-u(x)) K(y) dy = 0 \text{ for all } x \in \Omega.\]
where $K:\R^n \to \R$ is a non negative kernel.
 
The equations show that $u(x)$ is a weighted average of the values of $u$ in the neighborhood of $x$. This is true in some sense for all elliptic equations, but it is most apparent for integro-differential ones.
 
For the Dirichlet problem, the boundary values have to be prescribed in the whole complement of the domain.
\begin{align*}
\int_{\R^n} (u(x+y)-u(x)) K(y) dy &= 0 \text{ for all } x \in \Omega, \\
u(x) &= g(x) \text{ for all } x \notin \Omega.
\end{align*}
 
This type of equation has a natural motivation from probability, as we will see below.
 
==Probabilistic derivation==
 
Let us start with an overview of how to derive the Laplace equation from Brownian motion.
 
Let $B_t^x$ be Brownian motion starting at the point $x$ and $\tau$ be the first time it hits the boundary $\partial \Omega$. If we call $u(x) = \mathbb E[g(B_\tau^x)]$ for some prescribed function $g: \partial \Omega \to \R$, then $u$ will solve the classical Laplace equation
\begin{align*}
\Delta u(x) &= 0 \text{ in } \Omega,\\
u(x) &= g(x) \text{ on } \partial \Omega.
\end{align*}
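Heuristically, this can be seen through the mean value property. If $B_r(x) \subset \Omega$ and $\tau_r$ denotes the first exit time from $B_r(x)$, the strong Markov property gives
\[ u(x) = \mathbb E\left[ u\big(B^x_{\tau_r}\big) \right] = \frac 1{|\partial B_r|} \int_{\partial B_r(x)} u \, d\sigma, \]
since, by rotational symmetry, the exit position is uniformly distributed on the sphere $\partial B_r(x)$. A function with this mean value property on every small ball is harmonic, which is consistent with the averaged characterization of the Laplacian given above.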
 
A variation would be to consider diffusions other than Brownian motion. If $X^x_t$ is the stochastic process given by the SDE: $X_0^x = x$ and $dX_t^x = \sigma(X_t^x) \, dB_t$, and we define as before $u(x) = \mathbb E[g(X_\tau^x)]$, then $u$ will solve
\begin{align*}
a_{ij}(x) \partial_{ij} u(x) &= 0 \text{ in } \Omega,\\
u(x) &= g(x) \text{ on } \partial \Omega.
\end{align*}
where $a_{ij}(x) = \sigma^*(x) \sigma(x)$ is a non negative definite matrix for each point $x$.
 
Nonlinear equations arise from [[stochastic control]] problems. Say that we can choose the coefficients $a_{ij}(x)$ from a family of possible matrices $\{a_{ij}^\alpha\}$ indexed by a parameter $\alpha \in A$. For every point $x$, we can choose a different $a_{ij}(x)$ and our objective is to make $u(x)$ as large as possible. The maximum possible value of $u(x)$ will satisfy the equation
\begin{align*}
\sup_{\alpha} a_{ij}^\alpha \partial_{ij} u &= 0  \text{ in } \Omega,\\
u(x) &= g(x) \text{ on } \partial \Omega.
\end{align*}
<div style="background:#EEEEEE;">
'''Sketch of the proof.'''
If $v$ is any solution to
\begin{align*}
a_{ij}(x) \partial_{ij} v(x) &= 0 \text{ in } \Omega,\\
v(x) &= g(x) \text{ on } \partial \Omega.
\end{align*}
with $a_{ij}(x) \in \{a_{ij}^\alpha : \alpha \in A\}$, then from the equation that $u$ solves, we have
\[ a_{ij}(x) \partial_{ij} u(x) \leq 0 \text{ in } \Omega. \]
Therefore $u \geq v$ in $\Omega$ by the comparison principle for linear elliptic PDE.
</div>
 
Integro-differential equations are derived from discontinuous stochastic processes: [[Levy processes]] with jumps.
 
Let $X_t^x$ be a pure jump Levy process starting at $x$. Now $\tau$ is the first exit time from $\Omega$. The point $X_\tau$ may be anywhere outside of $\Omega$ since $X_t$ jumps. The jumps take place at random times determined by a Poisson process. The jumps with increment $y \in A$, for any set $A \subset \R^n$, follow a Poisson process with intensity
\[ \int_A K(y) dy. \]
The kernel $K$ then represents the frequency of jumps in each direction. These processes are well understood and well studied in the probability community.
 
The small jumps may happen more often than large ones. In fact, small jumps may happen infinitely often and still give a well defined stochastic process. This means that the kernels $K$ may have a singularity at the origin. The exact assumption one has to make is
\[ \int_{\R^n} K(y) (1 \wedge |y|^2) dy < +\infty.\]
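For example, for the kernel $K(y) = |y|^{-n-s}$ of the fractional Laplacian introduced below, this condition reads
\[ \int_{\R^n} \frac{1 \wedge |y|^2}{|y|^{n+s}} \, dy = |\partial B_1| \left( \int_0^1 r^{1-s} \, dr + \int_1^\infty r^{-1-s} \, dr \right) = |\partial B_1| \left( \frac 1{2-s} + \frac 1s \right), \]
which is finite precisely when $s \in (0,2)$.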
The generator operator of the [[Levy process]] is
\[ Lu(x) = \int_{\R^n} (u(x+y) - u(x) - y \cdot Du(x) \chi_{B_1}(y)) K(y) dy. \]
 
We may assume that $K(y)=K(-y)$ in order to simplify the expression. This assumption is not essential, but it makes the computations more compact. This way we can write
\begin{align*}
Lu(x) &= PV \int_{\R^n} (u(x+y) - u(x)) K(y) dy, \text{ or } \\
&= \frac 12 \int_{\R^n} (u(x+y) + u(x-y) - 2u(x)) K(y) dy.
\end{align*}
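The second expression follows, at least formally, by changing variables $y \to -y$ and averaging the two resulting integrals:
\begin{align*}
PV \int_{\R^n} (u(x+y) - u(x)) K(y) dy &= \frac 12 \, PV \int_{\R^n} \big( (u(x+y) - u(x)) K(y) + (u(x-y) - u(x)) K(-y) \big) dy \\
&= \frac 12 \int_{\R^n} (u(x+y) + u(x-y) - 2u(x)) K(y) dy,
\end{align*}
using $K(y)=K(-y)$. The last integrand no longer needs a principal value: near the origin it is bounded by $\|D^2u\|_{L^\infty(B_1(x))} \, |y|^2 K(y)$ whenever $u$ is $C^2$ near $x$, which is integrable by the assumption on $K$ above.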
 
An optimal control problem for jump processes leads to the integro-differential [[Bellman equation]]
\[ Iu(x) := \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^\alpha(y) dy = 0 \text{ in } \Omega.\]
 
Another possibility is to consider a problem with two parameters, which are controlled by two competing players. This is the integro-differential [[Isaacs equation]].
\[ Iu(x) := \inf_\beta \ \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^{\alpha\beta}(y) dy = 0 \text{ in } \Omega.\]
 
Other contexts in which integral equations arise are the following:
* Population dynamics.
* [[Kinetic models]]. See the work of [http://www2.math.umd.edu/~mellet/ Antoine Mellet].
* [[Nonlocal electrostatics]].
* [[Nonlocal image processing]].
* Fluid mechanics. Mostly toy problems like the [[surface quasi-geostrophic equation]] or [[active scalar equations]].
 
==Uniform ellipticity==
Regularity results require stronger monotonicity assumptions. For fully nonlinear elliptic equations of second order $F(D^2u)=0$, uniform ellipticity means that there exist two constants $\Lambda \geq \lambda > 0$ such that
\[ \lambda I \leq \frac{\partial F}{\partial M_{ij}}(M) \leq \Lambda I.\]
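As a quick example, the Bellman operator $F(M) = \sup_{\alpha} a^{\alpha}_{ij} M_{ij}$ coming from the stochastic control problem above is uniformly elliptic whenever $\lambda I \leq \{a^{\alpha}_{ij}\} \leq \Lambda I$ for every $\alpha$: for any symmetric matrix $M$ and any symmetric matrix $N \geq 0$,
\[ \lambda \, tr(N) \leq \inf_\alpha a^{\alpha}_{ij} N_{ij} \leq F(M+N) - F(M) \leq \sup_\alpha a^{\alpha}_{ij} N_{ij} \leq \Lambda \, tr(N), \]
which is the integrated form of the derivative bounds above.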
 
'''Big Theorems''':
* [[Krylov-Safonov]] (1981): Solutions to fully nonlinear uniformly elliptic equations are $C^{1,\alpha}$ for some $\alpha>0$.
* [[Evans-Krylov]] (1983): Solutions to convex fully nonlinear uniformly elliptic equations are $C^{2,\alpha}$ for some $\alpha>0$.
 
At the end of this course, we should be able to understand the proof of these two theorems and their generalizations to nonlocal equations.
 
We first need to understand what ellipticity means in an integro-differential equation. The prime example will be the [[fractional Laplacian]]. For $s \in (0,2)$, define
\[ -(-\Delta)^{s/2} u(x) = \int_{\R^n} (u(x+y)-u(x)) \frac{c_{n,s}}{|y|^{n+s}} dy.\]
 
This is an integro-differential operator with a kernel which is radially symmetric, homogeneous, and singular at the origin.
 
A natural ellipticity condition for [[linear integro-differential operators]] would be to impose that the kernel is comparable to that of the fractional Laplacian. The condition could be
\[ c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y).\]
But other conditions are possible.
 
Uniform ellipticity is linked to [[extremal operators]]. The classical Pucci maximal operators are the extremal operators among all uniformly elliptic operators which vanish at zero.
\begin{align*}
M^+(D^2 u) &= \sup_{\lambda I \leq \{a_{ij}\} \leq \Lambda I} a_{ij} \partial_{ij} u(x) = \Lambda \, tr(D^2u)^+ - \lambda \, tr(D^2u)^-,\\
M^-(D^2 u) &= \inf_{\lambda I \leq \{a_{ij}\} \leq \Lambda I} a_{ij} \partial_{ij} u(x) = \lambda \, tr(D^2u)^+ - \Lambda \, tr(D^2u)^-.
\end{align*}
Here $tr(D^2u)^+$ and $tr(D^2u)^-$ denote the sums of the positive parts and of the negative parts of the eigenvalues of $D^2u$.
A fully nonlinear equation $F(D^2u)=0$ is uniformly elliptic if and only if for any two symmetric matrices $X$ and $Y$,
\[M^-(X-Y) \leq F(X) - F(Y) \leq M^+(X-Y).\]
This definition is originally from <ref name="cpam"/>.
 
Given any family of kernels $\mathcal L$, we define
\begin{align*}
M_{\mathcal L}^+ u(x) &= \sup_{K \in \mathcal L} \int (u(x+y)-u(x)) K(y) dy, \\
M_{\mathcal L}^- u(x) &= \inf_{K \in \mathcal L} \int (u(x+y)-u(x)) K(y) dy.
\end{align*}
Thus, for a nonlocal operator $I$ (which is a black box that maps $C^2$ functions into continuous functions), we can say it is [[uniformly elliptic]] if for any two $C^2$ functions $u$ and $v$,
\[ M_{\mathcal L}^- (u-v)(x) \leq Iu(x) - Iv(x) \leq M_{\mathcal L}^+ (u-v)(x).\]

The first choice of $\mathcal L$ would be the one described above
\[ \mathcal L = \left\{ K :  c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y) \right\}.\]

In this case, the maximal operators take a particularly simple form
\begin{align*}
M_{\mathcal L}^+ u(x) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{\Lambda (u(x+y)+u(x-y)-2u(x))^+ - \lambda (u(x+y)+u(x-y)-2u(x))^-}{|y|^{n+s}} dy, \\
M_{\mathcal L}^- u(x) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{\lambda (u(x+y)+u(x-y)-2u(x))^+ - \Lambda (u(x+y)+u(x-y)-2u(x))^-}{|y|^{n+s}} dy.
\end{align*}
For other choices of $\mathcal L$, the operators $M^+_{\mathcal L}$ and $M^-_{\mathcal L}$ may not have an explicit expression.
 
'''Exercise 1.''' Let $I : C^2(\R^n) \to C(\R^n)$ be a nonlinear operator which satisfies
\[ M^-(D^2(u-v)) \leq Iu - Iv \leq M^+(D^2(u-v)),\]
for any two functions $u$ and $v$, where $M^+$ and $M^-$ are the classical Pucci operators. Prove that $I$ is a fully nonlinear uniformly elliptic operator of the form $Iu(x) = F(D^2u(x))$ (in particular, you have to show that $I$ is local).
 
'''Exercise 2 (*).''' Let $I : C^2(\R^n) \to C(\R^n)$ be a nonlinear operator, uniformly elliptic with respect to $\mathcal L$ in the sense that for any two functions $u$ and $v$,
\[ M_{\mathcal L}^-(u-v) \leq Iu - Iv \leq M_{\mathcal L}^+(u-v).\]
Is it true that there always exists a family of kernels $K^{\alpha \beta} \in \mathcal L$ and constants $c^{\alpha \beta}$ such that
\[ Iu(x) = \inf_{\alpha} \ \sup_{\beta} \ c^{\alpha \beta} + \int_{\R^n} (u(x+y)-u(x)) K^{\alpha \beta}(y) dy \ ?\]
 
= Lecture 2 =
== Viscosity solutions ==
'''Definition'''. We say that $Iu \leq 0$ in $\Omega$ in the viscosity sense if, whenever there is a function $\varphi : \R^n \to \R$ and a point $x \in \Omega$ such that
# $\varphi$ is $C^2$ in a neighborhood of $x$,
# $\varphi(x) = u(x)$,
# $\varphi(y) \leq u(y)$ everywhere in $\R^n$,
then $I\varphi(x) \leq 0$.
 
The point of the definition is to translate the difficulty of evaluating the operator $I$ into a smooth test function $\varphi$. In this way, the function $u$ is only required to be continuous (lower semicontinuous for the inequality $Iu \leq 0$). The function $\varphi$ is a test function ''touching $u$ from below'' at $x$.
 
The inequality $Iu \geq 0$ is defined analogously using test functions touching $u$ from above. A ''viscosity solution'' is a function $u$ for which both $Iu \leq 0$ and $Iu \geq 0$ hold in $\Omega$.
 
[[Viscosity solutions]] have the following basic properties:
* Stability under uniform limits.
For second order equations this means that if $F_n(D^2 u_n) = 0$ in $\Omega$ and we have both $F_n \to F$ and $u_n \to u$ locally uniformly, then $F(D^2 u)=0$ also holds in the viscosity sense.
* Uniqueness by the [[comparison principle]].
This is available under several sets of assumptions. Some are rather difficult to prove, like the case of second order equations with variable coefficients.
* Existence by [[Perron's method]].
The method can be applied to find the viscosity solution of the Dirichlet problem every time the comparison principle holds and some barrier construction can be used to assure the boundary condition.
 
Let us analyze the case of integral equations. Whenever such a test function $\varphi$ exists (say, touching $u$ from above at $x$), there is a vector $b$ ($=\nabla \varphi(x)$) and a constant $c$ ($=|D^2 \varphi(x)|$) such that, for $y$ in a neighborhood of the origin,
\[ u(x+y) \leq u(x) + b \cdot y + c|y|^2.\]
Therefore, the positive part of the integral
\[ \int_{\R^n} (u(x+y) + u(x-y) - 2u(x))^+ K(y) dy \]
has an $L^1$ integrand. The negative part can a priori integrate to $-\infty$. In any case, we can assign a value to the integral in $[-\infty,\infty)$, and also to any expression of the form
\[Iu(x) = \inf_\beta \ \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^{\alpha\beta}(y) dy\]
Thus, the value of $Iu(x)$ can be evaluated classically. In the case that $I$ is uniformly elliptic one can even show that the negative part of the integral is also finite. This small observation makes it more comfortable to deal with viscosity solutions of integro-differential equations than in the classical PDE case, since the equation is evaluated directly into the solution $u$ at all points $x$ where there is a test function $\varphi$ touching $u$ either from above or below.
 
==An open problem==
'''Uniqueness with variable coefficients'''
 
'''Exercise 3 (*).''' Prove that the [[comparison principle]] holds for equations of the form
\[ \inf_\alpha \ \sup_\beta \int_{\R^n} (u(x+y)-u(x)) K^{\alpha \beta}(x,y) dy = 0,\]
under appropriate ellipticity and continuity conditions on the kernel $K$.
 
The [[comparison principle|closest result available]], due to Cyril Imbert and Guy Barles <ref name="BI"/>, is for equations of the form
\[ \inf_{\alpha} \ \sup_\beta \int_{\R^n} (u(x+j(x,y))-u(x)) K^{\alpha \beta}(y) dy = 0.\]
Here $j$ is assumed to be essentially Lipschitz continuous with respect to $x$, among other nondegeneracy conditions for $j$ and $K$.
 
==Second order equations as limits of integro-differential equations==
We can recover second order elliptic operators as limits of integral ones. Consider
\[ \lim_{s \to 2} \int_{\R^n} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy.\]
 
For $u \in C^3$, we write the expansion
\[ u(x+y) = u(x) + Du(x) \cdot y + \frac 12 \, y^t \ D^2u(x)\ y + O(|y|^3).\]
 
Let us split the integral above in the domains $B_R$ and $\R^n \setminus B_R$ for some small $R>0$.
 
For the first part, assuming for simplicity that $a(\theta) = a(-\theta)$ so that the first order term $Du(x) \cdot y$ integrates to zero, we have
\begin{align*}
\int_{B_R} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy &= \int_{B_R} \left( \tfrac 12 \, y^t \ D^2u(x) \ y + O(|y|^3) \right) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy \\
&= \int_0^R (2-s) \frac{r^2}{2 \, r^{n+s}} r^{n-1} \int_{\partial B_1} (\theta^t \ D^2u(x) \ \theta) \, a(\theta) \, d\theta \, dr + (2-s) O(R^{3-s}) \\
&= \frac{R^{2-s}}2 \int_{\partial B_1} (\theta^t \ D^2u(x) \ \theta) \, a(\theta) \, d\theta + (2-s) O(R^{3-s}).
\end{align*}
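For the second part, since $u$ is bounded (and assuming $a$ is bounded on the sphere), the factor $(2-s)$ makes the contribution of the tail vanish in the limit:
\[ \left| \int_{\R^n \setminus B_R} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy \right| \leq 2 \|u\|_{L^\infty} \|a\|_{L^\infty} \, (2-s) \, \frac{|\partial B_1|}{s \, R^s} \longrightarrow 0 \qquad \text{as } s \to 2. \]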
 
Therefore,
\[\lim_{s \to 2} \int_{B_R} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy = \frac 12 \int_{\partial B_1} \theta^t \ D^2u(x) \ \theta \ a(\theta) \, d\theta,\]
which is a linear operator in $D^2u$, hence it equals $a_{ij} \partial_{ij}u$ for some matrix $a_{ij}$.
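Matching the quadratic forms, one explicit choice is
\[ a_{ij} = \frac 12 \int_{\partial B_1} \theta_i \theta_j \, a(\theta) \, d\theta, \]
which is a uniformly elliptic matrix provided $a(\theta)$ is bounded between two positive constants.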
 
==Smooth approximations of viscosity solutions to fully nonlinear elliptic equations==
 
One of the common difficulties one encounters when dealing with viscosity solutions is that it is difficult to make density type arguments. More precisely, a viscosity solution cannot be approximated by a classical $C^2$ solution in any standard way. We can do it, however, if we use nonlocal equations <ref name="smooth"/>.
 
Given the equation
\begin{align*}
0 = F(D^2u) &= \inf_\alpha \ \sup_\beta a^{\alpha \beta}_{ij} \partial_{ij} u\\
&= \frac \lambda 2 \Delta u + \inf_\alpha \ \sup_\beta b^{\alpha \beta}_{ij} \partial_{ij} u.
\end{align*}
 
We approximate each linear operator $b^{\alpha \beta}_{ij} \partial_{ij} u$ by an integro-differential one
\[b^{\alpha \beta}_{ij} \partial_{ij} u = \lim_{r\to 0} \int_{\R^n} (u(x+y)-u(x)) K_r^{\alpha \beta}(y) dy,\]
where
\[ K_r^{\alpha \beta}(y) = \frac 1 {r^{n+2}} K^{\alpha \beta} \left( \frac y r \right),\]
and each $K^{\alpha \beta}$ is smooth and compactly supported. Then, we approximate the equation with
\[ \frac \lambda 2 \Delta u_r + \inf_\alpha \ \sup_\beta \int_{\R^n} (u_r(x+y)-u_r(x)) K_r^{\alpha \beta}(y) dy = 0. \]
For each $r>0$, the solution $u_r$ will be $C^{2,1}$ (very smooth), and $u_r \to u$ as $r \to 0$, where $u$ is the solution to $F(D^2 u)=0$.
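To see, at least formally, why this approximation works, one natural normalization (it is not specified above, so take it as an assumption) is to choose each $K^{\alpha \beta}$ even with $\frac 12 \int_{\R^n} y_i y_j K^{\alpha \beta}(y) dy = b^{\alpha \beta}_{ij}$. Then, for $u \in C^3$, the change of variables $y = rz$ gives
\begin{align*}
\int_{\R^n} (u(x+y)-u(x)) K_r^{\alpha \beta}(y) dy &= \frac 1{r^2} \int_{\R^n} (u(x+rz)-u(x)) K^{\alpha \beta}(z) dz \\
&= \frac 12 \int_{\R^n} z^t \, D^2u(x) \, z \ K^{\alpha \beta}(z) dz + O(r) = b^{\alpha \beta}_{ij} \partial_{ij} u(x) + O(r),
\end{align*}
since the gradient term integrates to zero for an even kernel and $K^{\alpha \beta}$ has compact support.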
 
Regularity results, such as Harnack or $C^{1,\alpha}$, can be proved uniformly in $r$ bypassing the technical difficulties of viscosity solutions if we are willing to deal with integral equations.
 
==Regularity of nonlinear equations: how to start==
 
In order to show that the solution to a fully nonlinear equation $F(D^2 u)=0$ is $C^{1,\alpha}$ for some $\alpha>0$, we differentiate the equation and study the equation that the derivative satisfies. Formally, if we differentiate in an arbitrary direction $e$,
\[ \frac{\partial F}{\partial M_{ij}} (D^2u) \partial_{ij} (\partial_e u) = 0.\]
 
If we call $a_{ij}(x) = \frac{\partial F}{\partial M_{ij}} (D^2u(x))$, we do not know much about these coefficients a priori (they are technically not well defined), but we know that for all $x$
\[ \lambda I \leq a_{ij}(x) \leq \Lambda I,\]
because of the uniform ellipticity assumption on $F$.
 
What we need is to prove that a solution to an equation of the form
\[ a_{ij}(x) \partial_{ij} v = 0\]
is Holder continuous, with an estimate which depends on the ellipticity constants of $a_{ij}$ but is independent of any other property of $a_{ij}$ (no smoothness assumption can be made). This is the fundamental result by Krylov and Safonov.
 
===Differentiating the equation===
When we try to make the argument above rigorous, we encounter some technical difficulties. The first obvious one is that $\partial_e u$ may not be a well defined function. We must take incremental quotients.
\[ v(x) = \frac{u(x+h)-u(x)}{|h|}.\]
The coefficients of the equation may not be well defined either, but what can be shown is that
\[ M^+(D^2 v) \geq 0 \text{ and } M^-(D^2 v) \leq 0,\]
for the classical Pucci operators of order 2.
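Indeed, at least when $u$ is smooth, this follows from writing the equation at $x+h$ and at $x$ and using the uniform ellipticity of $F$ together with the fact that $M^{\pm}$ are positively homogeneous of degree one:
\begin{align*}
0 = F(D^2u(x+h)) - F(D^2u(x)) &\leq M^+\big(D^2u(x+h) - D^2u(x)\big) = |h| \, M^+(D^2 v)(x), \\
0 = F(D^2u(x+h)) - F(D^2u(x)) &\geq M^-\big(D^2u(x+h) - D^2u(x)\big) = |h| \, M^-(D^2 v)(x).
\end{align*}
The same two inequalities can be verified in the viscosity sense when $u$ is only a viscosity solution.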
 
For [[fully nonlinear integro-differential equations]], one gets the same thing with the appropriate extremal operators corresponding to the uniform ellipticity assumption. If $Iu=0$ in $B_1$ and $v$ is defined as above, then
\[ M_{\mathcal L}^+(v) \geq 0 \text{ and } M_{\mathcal L}^-(v) \leq 0,\]
wherever $x \in \Omega$ and $x+h \in \Omega$.
 
The challenge is then to find a Holder estimate based on these two inequalities. The result says that if $v$ satisfies in the viscosity sense both inequalities $M_{\mathcal L}^+(v) \geq 0$ and $M_{\mathcal L}^-(v) \leq 0$ in (say) $B_1$, then $v$ is $C^\alpha(B_{1/2})$ with the estimate
\[ \|v\|_{C^\alpha(B_{1/2})} \leq C \|v\|_{L^\infty(\R^n)}.\]
 
The fact that the $L^\infty$ norm is taken in the full space $\R^n$ is an unavoidable consequence of the fact that the equation is nonlocal. This feature does make the proof of $C^{1,\alpha}$ [[differentiability estimates | regularity]] more involved, and it even forces us to add extra assumptions.
 
It is good to keep in mind that for smooth functions $v$, the two inequalities above are equivalent to the existence of some kernel $K(x,y)$ such that
\[ \int_{\R^n} (v(x+y)-v(x)) K(x,y) dy = 0, \]
and that $K(x,\cdot) \in \mathcal L$ for all $x$. But no assumption can be made about the regularity of $K$ with respect to $x$.
 
===Holder estimates===
 
The proof of the [[Holder estimates]] is relatively simple if we do not care about how the constants $C$ and $\alpha$ depend on $s$. If we want a robust estimate that passes to the limit as $s \to 2$, the proof will be much harder. We will start with the simple case.
 
This simple case was originally proved in <ref name="holder"/>. The harder case with uniform constants is in <ref name="cpam"/>.
 
Let $\mathcal L$ be the usual class of kernels
\[ \mathcal L = \left\{ K :  c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y) \right\}.\]
 
Let $u$ be a continuous function, bounded in $\R^n$ such that
\begin{align*}
M^+_{\mathcal L} u &\geq 0 \text{ in } B_1, \\
M^-_{\mathcal L} u &\leq 0 \text{ in } B_1.
\end{align*}
where both of the inequalities above are understood in the viscosity sense.
 
Then, there are constants $C$ and $\alpha>0$ (depending only on $\lambda$, $\Lambda$, $n$ and $s$) such that
\[ |u(x) - u(0)| \leq C |x|^\alpha \|u\|_{L^\infty(\R^n)}.\]
There is nothing special about the point $0$. Thus, the estimate can be made uniformly in any set of points compactly contained in $B_1$.
 
<div style="background:#EEEEEE;">
'''Proof.'''
The factor $\|u\|_{L^\infty(\R^n)}$ can be assumed to be $1$ thanks to the simple normalization $u/\|u\|_{L^\infty}$. So, we assume that $\|u\|_{L^\infty}=1$ and will prove that there is a constant $\theta>0$ such that
\[ osc_{B_{2^{-k}}} u \leq (1-\theta)^k.\]
The result then follows taking $\alpha = \log(1-\theta)/\log(1/2)$ and $C = (1-\theta)^{-1}$.
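To check this, note that for $2^{-k-1} \leq |x| \leq 2^{-k}$ with $k \geq 0$,
\[ |u(x) - u(0)| \leq osc_{B_{2^{-k}}} u \leq (1-\theta)^k = \left(2^{-k}\right)^{\alpha} = (1-\theta)^{-1} \left(2^{-k-1}\right)^{\alpha} \leq (1-\theta)^{-1} |x|^\alpha, \]
using that $2^{-\alpha} = 1-\theta$ for this choice of $\alpha$.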
 
We will prove the above estimate for dyadic balls inductively. It is certainly true for $k \leq 0$ since $\|u\|_{L^\infty} = 1$. Now we assume it holds up to some value of $k$ and want to prove it for $k+1$.
 
In order to prove the inductive step, we rescale the function so that $B_{2^{-k}}$ corresponds to $B_1$. Let
\[ v(x) = (1-\theta)^{-k} u(2^{-k} x) - a_k .\]
The function $v$ is scaled, and the constant $a_k$ is chosen, so that $-1/2 \leq v \leq 1/2$ in $B_1$.
 
The scale invariance of $M^+_{\mathcal L}$ and $M^-_{\mathcal L}$ plays a crucial role here in that $v$ satisfies the same extremal equations as the original function $u$.
 
From the inductive hypothesis, $osc_{B_{2^{-j}}} u \leq (1-\theta)^j$ for all $j \leq k$, so we have
that $osc_{B_{2^{j}}} v \leq (1-\theta)^{-j}$ for all $j \geq 0$.
 
There are two obvious ways in which the oscillation of $v$ in the ball of radius $1/2$ can be smaller than its oscillation in $B_1$: either the supremum of $v$ is smaller in $B_{1/2}$ or the infimum is larger. We prove one or the other depending on which of the sets $\{v < 0\} \cap B_1$ or $\{v > 0\} \cap B_1$ has larger measure. Let us assume the former. The other case follows by exchanging $v$ with $-v$. We want to prove now that $v \leq (1/2-\theta)$ in $B_{1/2}$.
 
Note that since we know that $osc_{B_{2^{j}}} v \leq (1-\theta)^{-j}$ for all $j \geq 0$, then
\[ v(x) \leq (2|x|)^\alpha-1/2 \text{ for } x \notin B_1.\]
 
The point is to choose $\theta$ and $\alpha$ appropriately so that the following three points
* $v(x) \leq (2|x|)^\alpha-1/2 \ \text{ for all } x \notin B_1$.
* $|\{v < 0\} \cap B_1| > 1/2 |B_1|$.
* $M^+_{\mathcal L} v \geq 0$ in $B_1$
imply that $v \leq (1/2-\theta)$ in $B_{1/2}$.
 
If that holds for any choice of $\alpha$ and $\theta$, it also holds for smaller values. Thus, a posteriori, we can make one of them smaller so that $\alpha = \log(1-\theta)/\log(1/2)$.
 
Let $\rho$ be a smooth radial function supported in $B_{3/4}$ such that $\rho \equiv 1$ in $B_{1/2}$.
 
If $v > 1/2-\theta$ at some point in $B_{1/2}$, then $(v+\theta \rho)$ would attain its maximum over $\overline{B_1}$ at a point $x_0 \in B_{3/4}$ (the maximum must be attained in the support of $\rho$, since $v+\theta \rho = v \leq 1/2$ elsewhere), for which
\[ \max_{\overline{B_1}} (v+\theta \rho) = (v+\theta \rho)(x_0) > 1/2.\]
In order to obtain a contradiction, we evaluate $M^+ (v+\theta \rho)(x_0)$.
 
On one hand
\begin{align*}
M^+ (v+\theta \rho)(x_0) &\geq M^+ v(x_0) + \theta M^- \rho(x_0) \\
&\geq \theta \ \min_{B_{3/4}} \ M^- \rho.
\end{align*}
 
On the other hand, the estimate from above is more delicate. Let $w = v+\theta \rho$.
\begin{align*}
M^+ w(x_0) &= \int_{\R^n} \frac{\Lambda (w(x_0+y)+w(x_0-y)-2w(x_0))^+ - \lambda (w(x_0+y)+w(x_0-y)-2w(x_0))^-}{|y|^{n+s}} dy \\
&\leq \int_{\R^n} \frac{\Lambda (w(x_0+y)-w(x_0))^+ - \lambda (w(x_0+y)-w(x_0))^-}{|y|^{n+s}} dy \\
&\leq \int_{x_0+y \notin B_1} (\dots) + \int_{x_0+y \in B_1} (\dots)
\end{align*}
 
The first integral can be bounded using that $v(x) \leq (2|x|)^\alpha-1/2$ for all $x \notin B_1$. In fact, it is arbitrarily small if $\alpha$ is chosen close to $0$.
\[ \int_{x_0+y \notin B_1} (\dots) \leq \int_{x_0+y \notin B_1} ((2|x_0+y|)^\alpha-1) \frac{\Lambda}{|y|^{n+s}} dy \ll 1.\]
 
The second integral has a nonpositive integrand just because $w$ takes its maximum over $B_1$ at $x_0$. But we can say more using the set $G = \{v < 0\} \cap B_1$.
\begin{align*}
\int_{x_0+y \in B_1} (\dots) &\leq \int_{x_0+y \in G} (\dots) + \int_{x_0+y \in B_1 \setminus G} (\dots) \\
&\leq \int_{x_0+y \in G} (\dots) = \int_{x_0+y \in G} - \lambda \frac{(w(x_0+y)-w(x_0))^-}{|y|^{n+s}} dy \\
&\leq \int_{x_0+y \in G} - \lambda \frac{1/2-\theta}{|y|^{n+s}} dy \leq -C.
\end{align*}
In the last inequality we use that $|y|^{-n-s}$ is bounded below, $\theta$ is chosen less than $1/2$, and $|G|>|B_1|/2$.
 
So, for $\theta$ and $\alpha$ small enough the sum of the two terms will be negative and less than $\theta \min M^- \rho$, arriving at a contradiction. This finishes the proof.
</div>
 
Inspecting the proof above we see that the argument is much more general than presented. The only assumptions used on $\mathcal L$ are that:
# The extremal operators are scale invariant.
# For the smooth bump function $\rho$, $M^- \rho$ is bounded.
# $M^+ w(x_0)$ can be bounded above by a negative constant at a point $x_0 \in B_{3/4}$ which achieves the maximum of $w$ in $B_1$, provided that
#* $w(x) \leq w(x_0) + (2|x|)^\alpha-1$ for $x \notin B_1$.
#* $|\{w(x) \leq w(x_0)-1\} \cap B_1| \geq |B_1|/2$.
 
There are very general families of non local operators which satisfy those conditions above.
 
'''Exercise 4.''' Verify that the proof above also holds for equations of the form
\[ \int_{\R^n} (u(x+y) - u(x)) K(x,y) dy = 0 \text{ in } B_1.\]
for which we assume that for every $x \in B_1$,
\[ \frac{\lambda}{|y|^{n+s}} \leq K(x,y) \leq \frac{\Lambda}{|y|^{n+s}},\]
where $s \in (0,1)$ but we do not assume that $K$ is symmetric in $y$.
 
'''Exercise 5.''' Verify that the proof above also holds for equations of the form
\[ \int_{\R^n} (u(x+y) - u(x) - y \cdot Du(x)) K(x,y) dy = 0 \text{ in } B_1.\]
for which we assume that for every $x \in B_1$,
\[ \frac{\lambda}{|y|^{n+s}} \leq K(x,y) \leq \frac{\Lambda}{|y|^{n+s}},\]
where $s \in (1,2)$ but we do not assume that $K$ is symmetric in $y$.
 
= Lecture 3 =
 
== $C^{1,\alpha}$ [[differentiability estimates|estimates]] for nonlinear nonlocal equations ==
Let $u$ be a bounded function in $\R^n$ which solves $Iu = 0$ in $B_1$ in the viscosity sense, where $I$ is a nonlocal operator uniformly elliptic with respect to a class $\mathcal L$. Let us also assume that $I$ is translation invariant, meaning that if $u$ solves $Iu = 0$ in $\Omega$, then $u(\cdot-x)$ also solves $Iu = 0$ in $x+\Omega$.
 
We want to obtain a $C^{1,\alpha}$ estimate of the following form.
\[ \|u\|_{C^{1,\alpha}(B_{1/2})} \leq C \|u\|_{L^\infty(B_1)}.\]
 
The strategy of the proof is the following. Let us assume that $I0=0$ (the value of $I$ applied to the zero function is zero). From the ellipticity assumption
\[ M^-_{\mathcal L} u \leq Iu - I0 \leq M^+_{\mathcal L} u.\]
Thus, the two inequalities hold
\begin{align*}
M^-_{\mathcal L} u &\leq 0  \text{ in } B_1, \\
M^+_{\mathcal L} u &\geq 0  \text{ in } B_1.
\end{align*}
So, from the [[Holder estimates]], $u \in C^\alpha$ in the interior of $B_1$.
 
Now, for any small vector $h \in \R^n$, we define the incremental quotient
\[ v(x) = \frac{u(x+h)-u(x)}{|h|^\alpha}. \]
This function $v$ is bounded independently of $h$ in any set compactly contained in $B_1$ (say $B_{1-\varepsilon}$). From this we would like to apply the Holder estimates to obtain that $v \in C^\alpha$ in the interior of $B_1$ independently of $h$. The problem is that the right hand side in the Holder estimate depends on the $L^\infty$ norm of $v$ in the full space $\R^n$ and not only in $B_{1-\varepsilon}$.
 
One way to overcome this difficulty is to impose stronger assumptions on the family of kernels $\mathcal L$. Let us define the following more restrictive family, where we impose a bound on the derivatives of the kernels
\[ \mathcal L_1 = \left\{ K : c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ and } |\nabla K(y)| \leq \frac C{|y|^{n+s+1}}, \text{ plus } K(y)=K(-y) \right\}.\]
 
Now, we can "integrate by parts" the contribution of the tails of the integrals in $M^+_{\mathcal L_1} v$ and $M^-_{\mathcal L_1} v$. If we split the domain of the integrals of each kernel in $\mathcal L_1$
\begin{align*}
\int_{\R^n} (v(x+y)-v(x)) K(y) dy &= \int_{B_r} (v(x+y)-v(x)) K(y) dy + \int_{\R^n \setminus B_r} (v(x+y)-v(x)) K(y) dy, \\
&= \int_{B_r} (v(x+y)-v(x)) K(y) dy + \int_{\R^n \setminus B_r} (u(x+y)-u(x)) (K(y)-K(y+h)) dy
\end{align*}
 
The second term is bounded (depending on $r$) thanks to the bound on $DK$ away from zero, and the first term is what we really need to work out the $C^\alpha$ norm of $v$ in terms of the $L^\infty$ norm of $v$ in $B_{1-2 \varepsilon}$.
 
From the equation above we get that $v \in C^\alpha$ independently of $h$. That implies that $u \in C^{2\alpha}$. Iterating the procedure we get $u \in C^{3\alpha}$, $u \in C^{4\alpha}$, \dots, up to $u$ Lipschitz. Then one more iteration gives $u\in C^{1,\alpha}$, but no further gain in regularity is possible with this method because the $C^{1,\alpha}$ estimate of $u$ is not equivalent to any uniform bound of an incremental quotient of $u$.
 
'''Exercise 6 (*).''' Is the extra assumption on the boundedness of the derivatives of the kernels really necessary to obtain $C^{1,\alpha}$ estimates? This condition is unnecessary if the equation holds in the full space. But in fact this condition is necessary if $s<1$ even for linear equations. The result is not clear (and in fact open) for $s>1$.
 
== Holder estimates in the parabolic case ==
 
We will now work out the parabolic version of the Holder estimates that we obtained in the previous lecture. This will show some of the extra difficulties that one faces when dealing with parabolic equations.
 
The result that we prove is the following.
 
Let $\mathcal L$ be the usual class of kernels
\[ \mathcal L = \left\{ K :  c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y) \right\}.\]
 
Let $u$ be a continuous function, bounded in $\R^n \times [-1,0]$ such that
\begin{align*}
u_t - M^+_{\mathcal L} u &\leq 0 \text{ in } B_1 \times (-1,0], \\
u_t - M^-_{\mathcal L} u &\geq 0 \text{ in } B_1 \times (-1,0].
\end{align*}
 
Then $u \in C^\alpha(B_{1/2} \times [-1/2,0])$ and
\[ \|u\|_{C^\alpha(B_{1/2} \times [-1/2,0])} \leq C \|u\|_{L^\infty},\]
for constants $\alpha$ and $C$ that depend on $\lambda$, $\Lambda$, $s$ and $n$.
 
As in the elliptic case, the proof is much harder if we want to make sure that $C$ and $\alpha$ have a finite positive limit as $s \to 2$. We will do the simple case now, in which we do not care about how $C$ and $\alpha$ depend on $s$.
 
This result was proved with gradient dependence in the equations in <ref name="HJ"/> and <ref name="DD"/>.
 
Let us normalize the function $u$ such that $osc_{\R^n \times [-1,0]} u = 1$. We will show that there is a Holder modulus of continuity at the origin, i.e.
\[ |u(x,-t) - u(0,0)| \leq C(|x|^\alpha+t^{\alpha/s}).\]
 
It is convenient to keep in mind the natural scaling of the equation. The function $u_r(x,t) = u(rx,r^st)$ satisfies the same two inequalities
\begin{align*}
\partial_t u_r - M^+_{\mathcal L} u_r &\leq 0 \text{ in } B_{1/r} \times (-1/r^s,0], \\
\partial_t u_r - M^-_{\mathcal L} u_r &\geq 0 \text{ in } B_{1/r} \times (-1/r^s,0].
\end{align*}
Thus, $|x|^\alpha$ has the same scaling as $t^{\alpha/s}$.
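This follows from the way the class $\mathcal L$ rescales: changing variables $z = ry$ in the integrals,
\[ \int_{\R^n} (u_r(x+y,t) - u_r(x,t)) K(y) dy = r^s \int_{\R^n} \big(u(rx+z, r^s t) - u(rx, r^s t)\big) \, r^{-n-s} K(z/r) \, dz, \]
and $r^{-n-s}K(\cdot/r)$ belongs to $\mathcal L$ whenever $K$ does, while $\partial_t u_r(x,t) = r^s \, \partial_t u(rx, r^s t)$.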
 
Let us define the ''parabolic'' cylinders $Q_r$ with the right scaling as
\[ Q_r := B_r \times [-r^s,0].\]
 
What we will prove is the inequality
\begin{equation} \label{e1} osc_{Q_{2^{-k}}} u \leq (1-\theta)^k.\end{equation}
From this, the Holder continuity follows as in the elliptic case.
 
From the assumption that $osc_{\R^n \times [-1,0]} u = 1$, we know that \eqref{e1} holds for all $k \leq 0$. That gives us the base for the induction. Now we assume it is true up to some value of $k$ and want to show it also holds for $k+1$.
 
We start by rescaling the function so as to map $Q_{2^{-k}}$ to $Q_1$. Let $v(x,t) = (1-\theta)^{-k} u(2^{-k}x, 2^{-ks}t) - a_k$, where $a_k$ is chosen so that $-1/2 \leq v \leq 1/2$ in $Q_1$.
 
From the inductive hypothesis, $osc_{Q_{2^j}} v \leq (1-\theta)^{-j}$ for all $j \geq 0$.
 
In order to show that $osc_{Q_{1/2}} v \leq (1-\theta)$ we must show either that $v \leq 1/2-\theta$ in $Q_{1/2}$ or that $v \geq -1/2+\theta$ in $Q_{1/2}$. Which of the two alternatives we manage to prove depends on which of the two sets
$\{v \leq 0\} \cap (B_1 \times [-1,-1/2^s])$ or $\{v \geq 0\} \cap (B_1 \times [-1,-1/2^s])$ has larger measure. Let us assume it is the first; otherwise the same proof upside down would work with the opposite inequalities.
 
The function $v$ satisfies the following three conditions
* $v(x) \leq (2|x|)^\alpha - 1/2$ for all $x \notin B_1$.
* $|\{v \leq 0 \} \cap (B_1 \times [-1,-1/2^s])| \geq \frac 12 |B_1 \times [-1,-1/2^s]|$.
* $\partial_t v - M^+_{\mathcal L} v \leq 0$ in $Q_1$
 
We need to show that for small enough $\theta>0$ and $\alpha>0$, these three conditions imply that $v \leq 1/2-\theta$ in $Q_{1/2}$
 
Let $\rho$ be a smooth radial function supported in $B_{3/4}$ such that $\rho \equiv 1$ in $B_{1/2}$. We will show that the function $v$ stays below the function $b(x,t) = 1/2 + \epsilon + \delta (t+1) - m(t) \rho(x)$ in $B_1 \times [-1,0]$ where $m$ is the solution to the ODE:
\begin{align*}
m(-1) &= 0, \\
m'(t) &= c_0 | \{x \in B_1: v(x,t) \leq 0\}| - C_1 m(t).
\end{align*}
for constants $c_0$ and $C_1$ to be chosen later.
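For later use, note that this ODE can be solved explicitly with the integrating factor $e^{C_1 t}$:
\[ m(t) = c_0 \int_{-1}^t e^{-C_1(t-\tau)} \, |\{x \in B_1: v(x,\tau) \leq 0\}| \, d\tau, \]
so for $t \in [-1/2^s, 0]$ the value $m(t)$ is bounded below by $c_0 e^{-C_1}$ times the measure of $\{v \leq 0\} \cap (B_1 \times [-1,-1/2^s])$.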
 
We show that the inequality holds by proving that it can never be invalidated for the first time. Indeed, assume there was a point $(x_0,t_0)$ where equality holds. This point must be in the support of $\rho$ (strict inequality holds in the rest since $v \leq 1/2$), thus $x_0 \in B_{3/4}$.
 
We have the simple inequality
\[v_t(x_0,t_0) \geq b_t(x_0,t_0) = -m'(t_0) \rho(x_0) + \delta.\]
 
Let $G(t) = \{x \in B_1: v(x,t) \leq 0\}$. We know, by the assumption above, that $\int_{-1}^{-1/2^s} |G(t)| \, dt > c$.
 
We write
\begin{align*}
M^+_{\mathcal L} v(x_0,t_0) &= \int_{x_0 + y \notin B_1} (\dots) dy + \int_{x_0 + y \in B_1\setminus G(t_0)} (\dots) dy + \int_{x_0 + y \in G(t_0)} (\dots) dy \\
&\leq (\text{something arbitrarily small as } \alpha\to 0) + C m(t_0) M^+_{\mathcal L} \rho(x_0) - c_0 |G(t_0)| \\
&= C(\alpha) + C m(t_0) M^+_{\mathcal L} \rho(x_0) - c_0 |G(t_0)|.
\end{align*}
 
Plugging these inequalities into the equation, we obtain
\[ v_t(x_0,t_0) - M^+_{\mathcal L} v(x_0,t_0) \geq -m'(t_0) \rho(x_0) + \delta - C(\alpha) - C m(t_0) M^+_{\mathcal L} \rho(x_0) + c_0 |G(t_0)|. \]
Recall that $m'(t) = c_0 |G(t)| - C_1 m(t)$ by definition (this is where $c_0$ is chosen). Since $\rho \leq 1$, we have
\[ v_t(x_0,t_0) - M^+_{\mathcal L} v(x_0,t_0) \geq \delta - C(\alpha) - C m(t_0) M^+_{\mathcal L} \rho(x_0) + C_1 m(t_0) \rho(x_0). \]
We choose $\alpha$ small so that $C(\alpha) < \delta$, so we have
\[ v_t(x_0,t_0) - M^+_{\mathcal L} v(x_0,t_0) \geq - C m(t_0) M^+_{\mathcal L} \rho(x_0) + C_1 m(t_0) \rho(x_0). \]
Now we have to choose $C_1$ appropriately to make this right hand side positive and contradict the equation for $v$.
 
This is clearly possible if we know a lower bound for $\rho(x_0)$. However, we must also consider that $x_0$ may be a point where $\rho$ is very small. It turns out that $M^+_{\mathcal L} \rho > 0$ where $\rho$ is small since trivially $M^+_{\mathcal L} \rho(x) > 0$ if $\rho(x)=0$ (from the formula for $M^+_{\mathcal L}$) and $M^+_{\mathcal L} \rho$ is a continuous function. Thus, where $\rho$ is small, the right hand side is automatically positive. We choose $C_1$ large so that this right hand side is also positive where $\rho$ is large. This gives us a contradiction with the equation and proves that $v$ must stay below the function $b$.
 
To finish the proof, all we need is to show that $b \leq 1/2-\theta$ in $Q_{1/2}$. We analyze the ODE that defines $m(t)$ and realize that $m(t)$ is bounded below for $t \in [-1/2^s,0]$ in terms of the measure of the set $\{v \leq 0\} \cap (B_1 \times [-1,-1/2^s])$. In fact, an explicit formula can be given for $m$. Let $\theta$ be half of this lower bound for $m(t)$ and let us choose $\delta$ and $\epsilon$ smaller than $\theta/4$. We will then have $b(x,t) = 1/2 + \epsilon + \delta(t+1) - m(t) \rho(x) \leq 1/2-\theta$ in $Q_{1/2}$, which finishes the proof.
 
'''Exercise 7.''' Adapt the proof of the previous result to equations with drift and diffusion. Let $u$ be a continuous function, bounded in $\R^n \times [-1,0]$, such that for some $B>0$ and $s \geq 1$,
\begin{align*}
u_t - M^+_{\mathcal L} u - B|\nabla u| &\leq 0 \text{ in } B_1 \times (-1,0], \\
u_t - M^-_{\mathcal L} u + B|\nabla u| &\geq 0 \text{ in } B_1 \times (-1,0].
\end{align*}
Then $u \in C^\alpha(Q_{1/2})$ with
\[ \|u\|_{C^\alpha(Q_{1/2})} \leq C \|u\|_{L^\infty(\R^n \times [-1,0])}.\]
 
The two inequalities above are implied by an equation of the form
\[ u_t + b \cdot \nabla u - \int_{\R^n} (u(x+y)-u(x)) K(x,y) dy = 0.\]
where $\|b\|_{L^\infty} \leq B$ and $K(x,\cdot)$ belongs to the class $\mathcal L$ for all $x$.
 
'''Exercise 8.''' Let $I$ be uniformly elliptic with respect to the usual class $\mathcal L$ (without any condition on the derivatives of the kernel) and translation invariant. Prove that bounded solutions to the equation (''in the full space'')
\[ u_t - Iu = 0 \text{ in } \R^n \times (0,\infty),\]
become immediately $C^{1,\alpha}$ in space and time for positive time.
 
== References ==
{{reflist|refs=
<ref name="smooth"> {{Citation | last1=Caffarelli | first1=Luis | last2=Silvestre | first2=Luis | title=Smooth Approximations of Solutions to Nonconvex Fully Nonlinear Elliptic Equations | publisher=Amer Mathematical Society | year=2010 | journal=Nonlinear partial differential equations and related topics: dedicated to Nina N. Uraltseva | volume=229 | pages=67}} </ref>
<ref name="holder"> {{Citation | last1=Silvestre | first1=Luis | title=Holder estimates for solutions of integro-differential equations like the fractional laplace | publisher=Bloomington, Ind.: Dept. of Mathematics, Indiana University, c1970- | year=2006 | journal=Indiana University Mathematics Journal | issn=0022-2518 | volume=55 | issue=3 | pages=1155–1174}}</ref>
<ref name="HJ"> {{Citation | last1=Silvestre | first1=Luis | title=On the differentiability of the solution to the Hamilton--Jacobi equation with critical fractional diffusion | publisher=[[Elsevier]] | year=2011 | journal=Advances in Mathematics | issn=0001-8708 | volume=226 | issue=2 | pages=2020–2039}}</ref>
<ref name="DD"> {{Citation | last1=Silvestre | first1=Luis | title=Holder estimates for advection fractional-diffusion equations | year=2010 | journal=Arxiv preprint Arxiv:1009.5723}} </ref>
<ref name="cpam">{{Citation | last1=Caffarelli | first1=Luis | last2=Silvestre | first2=Luis | title=Regularity theory for fully nonlinear integro-differential equations | publisher=Wiley Online Library | year=2009 | journal=[[Communications on Pure and Applied Mathematics]] | issn=0010-3640 | volume=62 | issue=5 | pages=597–638}}</ref>
<ref name="BI">{{Citation | last1=Barles | first1=Guy | last2=Imbert | first2=Cyril | title=Second-order elliptic integro-differential equations: viscosity solutions' theory revisited | url=http://dx.doi.org/10.1016/j.anihpc.2007.02.007 | doi=10.1016/j.anihpc.2007.02.007 | year=2008 | journal=Annales de l'Institut Henri Poincaré. Analyse Non Linéaire | issn=0294-1449 | volume=25 | issue=3 | pages=567–585}}</ref>
}}

Revision as of 21:47, 8 May 2012

Indication: A star (*) in an exercise indicates that I don't know how to solve it.

Lecture 1

Definitions: linear equations

The first lecture serves as an overview of the subject and to familiarize ourselves with the type of equations under study.

The aim of the course is to see some regularity results for elliptic equations. Most of these results can be generalized to parabolic equations as well. However, this generalization presents extra difficulties that involve nontrivial ideas.

The prime example of an elliptic equation is the Laplace equation. \[ \Delta u(x) = 0 \text{ in } \Omega.\]

Elliptic equations are those which have similar properties as the Laplace equation. This is a vague definition.

The class of fully nonlinear elliptic equations of second order have the form \[ F(D^2u, Du, u, x)=0 \text{ in } \Omega.\] for a function $F$ such that \[ \frac{\partial F}{\partial M_{ij}} > 0 \text{ and } \frac{\partial F}{\partial u} \leq 0.\]

These are the minimal monotonicity conditions for which you can expect a comparison principle to hold. The appropriate notion of weak solution, viscosity solutions, is based on this monotonicity.

What is the Laplacian? The most natural (coordinate independent) definition may be \[ \Delta u(x) = \lim_{r \to 0} \frac c {r^{n+2}} \int_{B_r} u(x+y)-u(x) dy.\]

A simple (although rather uninteresting) example of a nonlocal equation would be the following non infinitesimal version of the Laplace equation \[ \frac c {r^{n+2}} \int_{B_r} u(x+y)-u(x) dy = 0 \text{ for all } x \in \Omega.\]

The equation tells us that the value $u(x)$ equals the average of $u$ in the ball $B_r(x)$. A more general integral equation is a weighted version of the above. \[ \int_{\R^n} (u(x+y)-u(x)) K(y) dy = 0 \text{ for all } x \in \Omega.\] where $K:\R^n \to \R$ is a non negative kernel.

The equations show that $u(x)$ is a weighted average of the values of $u$ in the neighborhood of $x$. This is true in some sense for all elliptic equations, but it is most apparent for integro-differential ones.

For the Dirichlet problem, the boundary values have to be prescribed in the whole complement of the domain. \begin{align*} \int_{\R^n} (u(x+y)-u(x)) K(y) dy &= 0 \text{ for all } x \in \Omega, \\ u(x) &= g(x) \text{ for all } x \notin \Omega. \end{align*}

These type of equations have a natural motivation from probability, as we will see below.

Probabilistic derivation

Let us start by an overview on how to derive the Laplace equation from Brownian motion.

Let $B_t^x$ be Brownian motion starting at the point $x$ and $\tau$ be the first time it hits the boundary $\partial \Omega$. If we call $u(x) = \mathbb E[g(B_\tau^x)]$ for some prescribed function $g: \partial \Omega \to \R$, then $u$ will solve the classical Laplace equation \begin{align*} \Delta u(x) &= 0 \text{ in } \Omega,\\ u(x) &= g(x) \text{ on } \partial \Omega. \end{align*}

A variation would be to consider diffusions other than Brownian motion. If $X^x_t$ is the stochastic process given by the SDE: $X_0^x = x$ and $dX_t^x = \sigma(X) dB$, and we define as before $u(x) = \mathbb E[g(X_\tau^x)]$, then $u$ will solve \begin{align*} a_{ij}(x) \partial_{ij} u(x) &= 0 \text{ in } \Omega,\\ u(x) &= g(x) \text{ on } \partial \Omega. \end{align*} where $a_{ij}(x) = \sigma^*(x) \sigma(x)$ is a non negative definite matrix for each point $x$.

Nonlinear equations arise from stochastic control problems. Say that we can choose the coefficients $a_{ij}(x)$ from a family of possible matrices $\{a_{ij}^\alpha\}$ indexed by a parameter $\alpha \in A$. For every point $x$, we can choose a different $a_{ij}(x)$ and our objective is to make $u(x)$ as large as possible. The maximum possible value of $u(x)$ will satisfy the equation \begin{align*} \sup_{\alpha} a_{ij}^\alpha \partial_{ij} u &= 0 \text{ in } \Omega,\\ u(x) &= g(x) \text{ on } \partial \Omega. \end{align*}

Sketch of the proof. If $v$ is any solution to \begin{align*} a_{ij}(x) \partial_{ij} v(x) &= 0 \text{ in } \Omega,\\ v(x) &= g(x) \text{ on } \partial \Omega. \end{align*} with $a_{ij}(x) \in \{a_{ij}^\alpha : \alpha \in A\}$, then from the equation that $u$ solves, we have \[ a_{ij}(x) \partial_{ij} u(x) \leq 0 \text{ in } \Omega. \] Therefore $u \geq v$ in $\Omega$ by the comparison principle for linear elliptic PDE.

Integro-differential equations are derived from discontinuous stochastic processes: Levy processes with jumps.

Let $X_t^x$ be a pure jump Levy process starting at $x$. Now $\tau$ is the first exit time from $\Omega$. The point $X_\tau$ may be anywhere outside of $\Omega$ since $X_t$ jumps. The jumps take place at random times determined by a Poisson process. The jumps in any direction $y \in A$, for some set $A \subset \R^n$ follow a Poisson process with intensity \[ \int_A K(y) dy. \] The kernel $K$ represents then the frequency of jumps in each direction. This type of processes are well understood and studied in the probability community.

The small jumps may happen more often than large ones. In fact, small jumps may happen infinitely often and still have a well defined stochastic process. This mean that the kernels $K$ may have a singularity at the origin. The exact assumption one has to make is \[ \int_{\R^n} K(y) (1 \wedge |y|^2) dy , +\infty.\] The generator operator of the Levy process is \[ Lu(x) = \int_{\R^n} (u(x+y) - u(x) - y \cdot Du(x) \chi_{B_1}(y)) K(y) dy. \]

We may assume that $K(y)=K(-y)$ in order to simplify the expression. This assumption is not essential, but it makes the computations more compact. This way we can write \begin{align*} Lu(x) &= PV \int_{\R^n} (u(x+y) - u(x)) K(y) dy, \text{ or } &= \int_{\R^n} (u(x+y) + u(x-y) - 2u(x)) K(y) dy. \end{align*}

An optimal control problem for jump processes leads to the integro-differntial Bellman equation \[ Iu(x) := \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^\alpha(y) dy = 0 \text{ in } \Omega.\]

Another possibility is to consider a problem with two parameters, which are controlled by two competitive players. This is the integro-differential Isaacs equation. \[ Iu(x) := \inf_\beta \ \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^{\alpha\beta}(y) dy = 0 \text{ in } \Omega.\]

Other contexts in which integral equations arise are the following:

Uniform ellipticity

Regularity result require stronger monotonicity assumptions. For fully nonlinear elliptic equations of second order F(D^2u)=0, uniform ellipticity is defined as that there exist two constants $\Lambda \geq \lambda > 0$ such that \[ \lambda I \leq \frac{\partial F}{\partial M_{ij}}(M) \leq \Lambda I.\]

Big Theorems:

  • Krylov-Safonov (1981): Solutions to fully nonlinear uniformly elliptic equations are $C^{1,\alpha}$ for some $\alpha>0$.
  • Evans-Krylov (1983): Solutions to convex fully nonlinear uniformly elliptic equations are $C^{2,\alpha}$ for some $\alpha>0$.

At the end of this course, we should be able to understand the proof of these two theorems and their generalizations to nonlocal equations.

We first need to understand what ellipticity means in an integro-differential equation. The prime example will be the fractional Laplacian. For $s \in (0,2)$, define \[ -(-\Delta)^{s/2} u(x) = \int_{\R^n} (u(x+y)-u(x)) \frac{c_{n,s}}{|y|^{n+s}} dy.\]

This is an integro-differential operator with a kernel which is radially symmetric, homogeneous, and singular at the origin.

A natural ellipticity condition for linear integro-differential operators would be to impose that the kernel is comparable to that of the fractional Laplacian. The condition could be \[ c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y).\] But other conditions are possible.

Uniform ellipticity is linked to extremal operators. The classical Pucci maximal operators are the extremal of all uniformly elliptic operators which vanish at zero. \begin{align*} M^+(D^2 u) &= \sup_{\lambda I \leq \{a_{ij}\} \leq \Lambda I} a_{ij} \partial_{ij} u(x) = \Lambda tr(D^2u)^+ - \lambda tr(D^2u)^+,\\ M^-(D^2 u) &= \inf_{\lambda I \leq \{a_{ij}\} \leq \Lambda I} a_{ij} \partial_{ij} u(x) = \lambda tr(D^2u)^+ - \Lambda tr(D^2u)^+.\\ \end{align*} A fully nonlinear equation $F(D^2u)=0$ is uniformly elliptic if and only if for any two symmetric matrices $X$ and $Y$, \[M^-(X-Y) \leq F(X) - F(Y) \leq M^+(X-Y).\] This definition is originally from [1].

Given any family of kernels $\mathcal L$, we define \begin{align*} M_{\mathcal L}^+ u(x) &= \sup_{K \in \mathcal L} \int (u(x+y)-u(x)) K(y) dy, \\ M_{\mathcal L}^- u(x) &= \inf_{K \in \mathcal L} \int (u(x+y)-u(x)) K(y) dy. \end{align*} Thus, for a nonlocal operator $I$ (which is a black box that maps $C^2$ functions into continuous functions), we can say it is uniformly elliptic if for any two $C^2$ functions $u$ and $v$, \[ M_{\mathcal L}^- (u-v)(x) \leq Iu(x) - Iv(x) \leq M_{\mathcal L}^+ (u-v)(x).\]

The first choice of $\mathcal L$ would be the one described above \[ \mathcal L = \left\{ K : c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y) \right\}.\]

In this case, the maximal operators take a particularly simple form

\begin{align*} M_{\mathcal L}^+ u(x) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{\Lambda (u(x+y)+u(x-y)-2u(x))^+ - \lambda (u(x+y)+u(x-y)-2u(x))^-}{|y|^{n+s}} dy, \\ M_{\mathcal L}^- u(x) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{\lambda (u(x+y)+u(x-y)-2u(x))^+ - \Lambda (u(x+y)+u(x-y)-2u(x))^-}{|y|^{n+s}} dy. \end{align*}

For other choices of $\mathcal L$, the operators $M^+_{\mathcal L}$ and $M^-_{\mathcal L}$ may not have an explicit expression.

Exercise 1. Let $I : C^2(\R^2) \to C(\R)$ be a nonlinear operator which satisfies \[ M^-(D^2(u-v)) \leq Iu - Iv \leq M^+(D^2(u-v)),\] for any two functions $u$ and $v$, where $M^+$ and $M^-$ are the classical Pucci operators, then prove that $Iu$ is a fully nonlinear uniformly elliptic operator of the form $Iu(x) = F(D^2u(x))$ (in particular you have to show that $I$ is local).

Exercise 2 (*). Let $I : C^2(\R^2) \to C(\R)$ be a nonlinear operator, uniformly elliptic respect to $\mathcal L$ in the sense that for any two functions $u$ and $v$, \[ M_{\mathcal L}^-(u-v) \leq Iu - Iv \leq M_{\mathcal L}^+(u-v).\] Is it true that there always exists a family of kernels $K^{\alpha \beta} \in \mathcal L$ and constants $c^{\alpha \beta}$ such that \[ Iu(x) = \inf_{\alpha} \ \sup_{\beta} \ c^{\alpha \beta} + \int_{\R^n} (u(x+y)-u(x)) K^{\alpha \beta}(y) dy \ ?\]

Lecture 2

Viscosity solutions

Definition. We say that $Iu \leq 0$ in $\Omega$ in the viscosity sense if every time there exists a function $\varphi : \R^n \to \R$ such that for some point $x \in \Omega$,

  1. $\varphi$ is $C^2$ in a neighborhood of $x$,
  2. $\varphi(x) = u(x)$,
  3. $\varphi(y) \leq u(y)$ everywhere in $\R^n$,

then $I\varphi(x) \leq 0$.

The point of the definition is to translate the difficulty of evaluating the operator $I$ into a smooth test function $\varphi$. In this way, the function $u$ is only required to be continuous (lower semicontinuous for the inequality $Iu \leq 0$). The function $\varphi$ is a test function touching $u$ from below at $x$.

The inequality $Iu \geq 0$ is defined analogously using tests functions touching $u$ from above. A viscosity solution is a function $u$ for which both $Iu \leq 0$ and $Iu \geq 0$ hold in $\Omega$.

Viscosity solutions have the following basic properties:

  • Stability under uniform limits.

For second order equations this means that if $F_n(D^2 u_n) = 0$ in $\Omega$ and we have both $F_n \to F$ and $u_n \to u$ locally uniformly, then $F(D^2 u)=0$ also holds in the viscosity sense.

This is available under several set of assumptions. Some are rather difficult to prove, like the case of second order equations with variable coefficients.

The method can be applied to find the viscosity solution of the Dirichlet problem every time the comparison principle holds and some barrier construction can be used to assure the boundary condition.

Let us analyze the case of integral equations. Whenever a test function $\varphi$ exists, there is a vector $b$ ($=\nabla \varphi(x)$) and a constant $c$ ($=|D^2 \varphi(x)|$) such that \[ u(x+y) \leq u(x) + b \cdot y + c|y|^2.\] Therefore, the positive part of the integral \[ \int_{\R^n} (u(x+y) + u(x-y) - 2u(x))^+ K(y) dy \] has an $L^1$ integrand. The negative part can a priori integrate to $-\infty$. In any case, we can assign a value to the integral in $[-\infty,\infty)$, and also to any expression of the form \[Iu(x) = \inf_\beta \ \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^{\alpha\beta}(y) dy\] Thus, the value of $Iu(x)$ can be evaluated classically. In the case that $I$ is uniformly elliptic one can even show that the negative part of the integral is also finite. This small observation makes it more comfortable to deal with viscosity solutions of integro-differential equations than in the classical PDE case, since the equation is evaluated directly into the solution $u$ at all points $x$ where there is a test function $\varphi$ touching $u$ either from above or below.

==An open problem==

===Uniqueness with variable coefficients===

Exercise 3 (*). Prove that the comparison principle holds for equations of the form \[ \inf_\alpha \ \sup_\beta \int_{\R^n} (u(x+y)-u(x)) K^{\alpha \beta}(x,y) dy = 0,\] under appropriate ellipticity and continuity conditions on the kernel $K$.

The closest result available, due to Cyril Imbert and Guy Barles [2], is for equations of the form \[ \inf_{\alpha} \ \sup_\beta \int_{\R^n} (u(x+j(x,y))-u(x)) K^{\alpha \beta}(y) dy = 0.\] Here $j$ is assumed to be essentially Lipschitz continuous with respect to $x$, among other nondegeneracy conditions on $j$ and $K$.

==Second order equations as limits of integro-differential equations==

We can recover second order elliptic operators as limits of integral ones. Consider \[ \lim_{s \to 2} \int_{\R^n} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy.\]

For $u \in C^3$, we write the expansion \[ u(x+y) = u(x) + Du(x) \cdot y + \frac 1 2 \, y^t \, D^2u(x)\, y + O(|y|^3).\]

Let us split the integral above in the domains $B_R$ and $\R^n \setminus B_R$ for some small $R>0$.

For the first part, the first order term $Du(x) \cdot y$ integrates to zero by symmetry (we take $a(\theta)=a(-\theta)$, as for the symmetric kernels in $\mathcal L$; for $s \geq 1$ this is understood in the principal value sense), so we have \begin{align*} \int_{B_R} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy &= \int_{B_R} \left( \tfrac 1 2 \, y^t \, D^2u(x)\, y + O(|y|^3) \right) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy \\ &= \frac 1 2 \int_0^R (2-s) \frac{r^2}{r^{n+s}} r^{n-1} \left( \int_{\partial B_1} (\theta^t \, D^2u(x) \, \theta)\, a(\theta)\, d\theta \right) dr + (2-s) O(R^{3-s}) \\ &= \frac{R^{2-s}}{2} \int_{\partial B_1} (\theta^t \, D^2u(x) \, \theta)\, a(\theta)\, d\theta + (2-s) O(R^{3-s}). \end{align*}

Therefore, letting $s\to 2$, we obtain \[\lim_{s \to 2} \int_{B_R} (u(x+y)-u(x)) \frac{(2-s)a(y/|y|)}{|y|^{n+s}} dy = \frac 1 2 \int_{\partial B_1} \theta^t \, D^2u(x) \, \theta \ a(\theta)\, d\theta,\] while the integral over $\R^n \setminus B_R$ is of order $(2-s)$ (for bounded $u$) and therefore disappears in the limit. The right hand side is a linear operator in $D^2u$, hence it equals $a_{ij} \partial_{ij}u$ for some matrix $a_{ij}$.
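As a sanity check (not part of the notes), one can verify the limit numerically in one dimension with $a \equiv 1$, where the computation above gives $\lim_{s \to 2} (2-s)\int_{\R} (u(x+y)-u(x))\,|y|^{-1-s} dy = u''(x)$. The sketch below splits off the quadratic part of the second difference near the origin, exactly as in the expansion above; the helper name nonlocal_op, the cutoffs, and the test function are ours, chosen only for illustration.

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

def nonlocal_op(u, d2u, x, s, eps=1e-3, R=100.0):
    """Approximate (2-s) * int_R (u(x+y)-u(x)) |y|^(-1-s) dy in one dimension."""
    delta2 = lambda y: u(x + y) + u(x - y) - 2.0 * u(x)   # symmetric second difference
    # quadratic part: (2-s) * int_0^1 u''(x) y^(1-s) dy = u''(x), computed exactly
    main = d2u(x)
    # remainder near the origin: the integrand is O(y^(3-s)), so the piece below eps is negligible
    near, _ = quad(lambda y: (delta2(y) - d2u(x) * y**2) / y**(1 + s), eps, 1.0)
    # tail: the integrand decays like |y|^(-1-s), so the piece beyond R is negligible
    far, _ = quad(lambda y: delta2(y) / y**(1 + s), 1.0, R)
    return main + (2.0 - s) * (near + far)

u, d2u, x = np.cos, lambda t: -np.cos(t), 0.3
for s in [1.5, 1.9, 1.99, 1.999]:
    print(f"s = {s:5.3f}   operator = {nonlocal_op(u, d2u, x, s):+.5f}   u''(x) = {d2u(x):+.5f}")
</syntaxhighlight>

As $s \to 2$ the printed values approach $u''(x)$, illustrating the limit computed above.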

==Smooth approximations of viscosity solutions to fully nonlinear elliptic equations==

One of the common difficulties one encounters when dealing with viscosity solutions is that it is difficult to make density-type arguments. More precisely, a viscosity solution cannot be approximated by classical $C^2$ solutions in any standard way. We can do it, however, if we use nonlocal equations [3].

Consider the equation \begin{align*} 0 = F(D^2u) &= \inf_\alpha \ \sup_\beta \ a^{\alpha \beta}_{ij} \partial_{ij} u\\ &= \frac \lambda 2 \Delta u + \inf_\alpha \ \sup_\beta \ b^{\alpha \beta}_{ij} \partial_{ij} u. \end{align*}

We approximate each linear operator $b^{\alpha \beta}_{ij} \partial_{ij} u$ by an integro-differential one \[b^{\alpha \beta}_{ij} \partial_{ij} u = \lim_{r\to 0} \int_{\R^n} (u(x+y)-u(x)) K_r^{\alpha \beta}(y) dy,\] where \[ K_r^{\alpha \beta}(y) = \frac 1 {r^{n+2}} K^{\alpha \beta} \left( \frac y r \right),\] and each $K^{\alpha \beta}$ is smooth and compactly supported. Then, we approximate the equation with \[ \frac \lambda 2 \Delta u_r + \inf_\alpha \ \sup_\beta \int_{\R^n} (u_r(x+y)-u_r(x)) K_r^{\alpha \beta}(y) dy = 0. \] For each $r>0$, the solution $u_r$ will be $C^{2,1}$ (very smooth), and $u_r \to u$ as $r \to 0$, where $u$ is the solution to $F(D^2 u)=0$.
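To see why each linear operator is recovered in the limit, here is a short computation (added here, not in the original notes), for a smooth function $u$ and assuming each $K^{\alpha \beta}$ is even, nonnegative and normalized so that $\frac 1 2 \int y_i y_j K^{\alpha \beta}(y) dy = b^{\alpha \beta}_{ij}$: \begin{align*} \int_{\R^n} (u(x+y)-u(x)) K_r^{\alpha \beta}(y) dy &= \frac 1 2 \int_{\R^n} \big(u(x+y)+u(x-y)-2u(x)\big) K_r^{\alpha \beta}(y) dy \\ &= \frac 1 2 \, \partial_{ij} u(x) \int_{\R^n} y_i y_j K_r^{\alpha \beta}(y) dy + O(r^2), \end{align*} and the change of variables $y = rz$ shows that $\int y_i y_j K_r^{\alpha \beta}(y) dy = \int z_i z_j K^{\alpha \beta}(z) dz$ for every $r$, so the limit as $r \to 0$ is indeed $b^{\alpha \beta}_{ij} \partial_{ij} u(x)$.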

Regularity results, such as Harnack or $C^{1,\alpha}$, can be proved uniformly in $r$ bypassing the technical difficulties of viscosity solutions if we are willing to deal with integral equations.

==Regularity of nonlinear equations: how to start==

In order to show that the solution to a fully nonlinear equation $F(D^2 u)=0$ is $C^{1,\alpha}$ for some $\alpha>0$, we differentiate the equation and study the equation that the derivative satisfies. Formally, if we differentiate in an arbitrary direction $e$, \[ \frac{\partial F}{\partial M_{ij}} (D^2u) \partial_{ij} (\partial_e u) = 0.\]

If we call $a_{ij}(x) = \frac{\partial F}{\partial M_{ij}} (D^2u(x))$, we do not know much about these coefficients a priori (they are technically not even well defined), but we know that for all $x$ \[ \lambda I \leq a_{ij}(x) \leq \Lambda I,\] because of the uniform ellipticity assumption on $F$.

What we need is to prove that a solution to an equation of the form \[ a_{ij}(x) \partial_{ij} v = 0\] is Holder continuous, with an estimate which depends on the ellipticity constants of $a_{ij}$ but is independent of any other property of $a_{ij}$ (no smoothness assumption can be made). This is the fundamental result by Krylov and Safonov.

==Differentiating the equation==

When we try to make the argument above rigorous, we encounter some technical difficulties. The first obvious one is that $\partial_e u$ may not be a well defined function. We must take incremental quotients. \[ v(x) = \frac{u(x+h)-u(x)}{|h|}.\] The coefficients of the equation may not be well defined either, but what can be shown is that \[ M^+(D^2 v) \geq 0 \text{ and } M^-(D^2 v) \leq 0,\] for the classical Pucci operators of order 2.
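Formally, the two inequalities follow in one line (a justification added here): since $u(\cdot+h)$ and $u$ solve the same equation, \[ 0 = F(D^2u(x+h)) - F(D^2u(x)) \leq M^+\big(D^2u(x+h) - D^2u(x)\big) = |h| \, M^+(D^2 v(x)), \] using the standard bound $F(A)-F(B) \leq M^+(A-B)$ for uniformly elliptic $F$ and the positive homogeneity of the Pucci operators; the inequality $M^-(D^2 v) \leq 0$ follows in the same way from $F(A)-F(B) \geq M^-(A-B)$.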

For fully nonlinear integro-differential equations, one gets the same thing with the appropriate extremal operators corresponding to the uniform ellipticity assumption. If $Iu=0$ in $\Omega$ and $v$ is defined as above, then \[ M_{\mathcal L}^+(v) \geq 0 \text{ and } M_{\mathcal L}^-(v) \leq 0,\] wherever $x \in \Omega$ and $x+h \in \Omega$.

The challenge is then to find a Holder estimate based on these two inequalities. The result says that if $v$ satisfies in the viscosity sense both inequalities $M_{\mathcal L}^+(v) \geq 0$ and $M_{\mathcal L}^-(v) \leq 0$ in (say) $B_1$, then $v$ is $C^\alpha(B_{1/2})$ with the estimate \[ \|v\|_{C^\alpha(B_{1/2})} \leq C \|v\|_{L^\infty(\R^n)}.\]

The fact that the $L^\infty$ norm is taken in the full space $\R^n$ is an unavoidable consequence of the fact that the equation is nonlocal. This feature makes the proof of $C^{1,\alpha}$ regularity more involved, and it even forces us to add extra assumptions.

It is good to keep in mind that, for smooth functions $v$, the two inequalities above are equivalent to the existence of some kernel $K(x,y)$ such that \[ \int_{\R^n} (v(x+y)-v(x)) K(x,y) dy = 0, \] with $K(x,\cdot) \in \mathcal L$ for all $x$. But no assumption can be made about the regularity of $K$ with respect to $x$.

==Holder estimates==

The proof of the Holder estimates is relatively simple if we do not care about how the constants $C$ and $\alpha$ depend on $s$. If we want a robust estimate that passes to the limit as $s \to 2$, the proof will be much harder. We will start with the simple case.

This simple case was originally proved in [4]. The harder case with uniform constants is in [1].

Let $\mathcal L$ be the usual class of kernels \[ \mathcal L = \left\{ K : c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y) \right\}.\]

Let $u$ be a continuous function, bounded in $\R^n$, such that \begin{align*} M^+_{\mathcal L} u &\geq 0 \text{ in } B_1, \\ M^-_{\mathcal L} u &\leq 0 \text{ in } B_1, \end{align*} where both inequalities are understood in the viscosity sense.

Then, there are constants $C$ and $\alpha>0$ (depending only on $\lambda$, $\Lambda$, $n$ and $s$) such that \[ |u(x) - u(0)| \leq C |x|^\alpha \|u\|_{L^\infty(\R^n)}.\] There is nothing special about the point $0$. Thus, the estimate can be made uniformly in any set of points compactly contained in $B_1$.

Proof. The factor $\|u\|_{L^\infty(\R^n)}$ can be assumed to be $1$ by replacing $u$ with $u/\|u\|_{L^\infty}$. So, we assume that $\|u\|_{L^\infty}=1$ and will prove that there is a constant $\theta>0$ such that \[ osc_{B_{2^{-k}}} u \leq (1-\theta)^k.\] The result then follows by taking $\alpha = \log(1-\theta)/\log(1/2)$ and $C = (1-\theta)^{-1}$.
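The last step is the following standard computation (spelled out here for completeness): if $2^{-k-1} \leq |x| \leq 2^{-k}$, then \[ |u(x)-u(0)| \leq osc_{B_{2^{-k}}} u \leq (1-\theta)^k = 2^{-\alpha k} = 2^{\alpha} \, 2^{-\alpha(k+1)} \leq (1-\theta)^{-1} |x|^\alpha, \] where we used that $2^{-\alpha} = 1-\theta$ precisely when $\alpha = \log(1-\theta)/\log(1/2)$.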

We will prove the above estimate for dyadic balls inductively. It is certainly true for $k \leq 0$ since $\|u\|_{L^\infty} = 1$. Now we assume it holds up to some value of $k$ and want to prove it for $k+1$.

In order to prove the inductive step, we rescale the function so that $B_{2^{-k}}$ corresponds to $B_1$. Let \[ v(x) = (1-\theta)^{-k} u(2^{-k} x) - a_k .\] The function $v$ is scaled, and the constant $a_k$ is chosen, so that $-1/2 \leq v \leq 1/2$ in $B_1$.

The scale invariance of $M^+_{\mathcal L}$ and $M^-_{\mathcal L}$ plays a crucial role here in that $v$ satisfies the same extremal equations as the original function $u$.

From the inductive hypothesis, $osc_{B_{2^{-j}}} u \leq (1-\theta)^j$ for all $j \leq k$, so we have that $osc_{B_{2^{j}}} v \leq (1-\theta)^{-j}$ for all $j \geq 0$.

There are two obvious ways in which the oscillation of $v$ in the ball of radius $1/2$ can be smaller than its oscillation in $B_1$: either the supremum of $v$ is smaller in $B_{1/2}$ or the infimum is larger. We prove one or the other depending on which of the sets $\{v < 0\} \cap B_1$ or $\{v > 0\} \cap B_1$ has larger measure. Let us assume it is the former; the other case follows by exchanging $v$ with $-v$. We want to prove now that $v \leq 1/2-\theta$ in $B_{1/2}$.

Note that since we know that $osc_{B_{2^{j}}} v \leq (1-\theta)^{-j}$ for all $j \geq 0$, then \[ v(x) \leq (2|x|)^\alpha-1/2 \text{ for } x \notin B_1.\]
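This tail bound can be checked as follows (a short computation added here, assuming, as we may, that $a_k$ is chosen so that $\inf_{B_1} v = -1/2$): if $2^{j-1} \leq |x| \leq 2^{j}$ with $j \geq 1$, then \[ v(x) \leq \inf_{B_{2^j}} v + osc_{B_{2^j}} v \leq \inf_{B_1} v + (1-\theta)^{-j} = -\frac 1 2 + 2^{j \alpha} \leq (2|x|)^\alpha - \frac 1 2, \] using $(1-\theta)^{-1} = 2^{\alpha}$.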

The point is to choose $\theta$ and $\alpha$ appropriately so that the following three facts

  • $v(x) \leq (2|x|)^\alpha-1/2 \ \text{ for all } x \notin B_1$.
  • $|\{v < 0\} \cap B_1| > 1/2 |B_1|$.
  • $M^+_{\mathcal L} v \geq 0$ in $B_1$.

imply that $v \leq 1/2-\theta$ in $B_{1/2}$.

If that holds for any choice of $\alpha$ and $\theta$, it also holds for smaller values. Thus, a posteriori, we can make one of them smaller so that $\alpha = \log(1-\theta)/\log(1/2)$.

Let $\rho$ be a smooth radial function supported in $B_{3/4}$ such that $\rho \equiv 1$ in $B_{1/2}$.

If $v \geq 1/2-\theta$ at any point of $B_{1/2}$, then $(v+\theta \rho)$ would attain its maximum over $\overline{B_1}$ at some point $x_0 \in B_{3/4}$, with \[ \max_{\overline{B_1}} (v+\theta \rho) = (v+\theta \rho)(x_0) \geq 1/2.\] In order to obtain a contradiction, we evaluate $M^+ (v+\theta \rho)(x_0)$.

On one hand, \begin{align*} M^+ (v+\theta \rho)(x_0) &\geq M^+ v(x_0) + \theta M^- \rho(x_0) \\ &\geq \theta \, \min_{B_{3/4}} \, M^- \rho, \end{align*} using that $M^+ v \geq 0$ in $B_1$.
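The first inequality above relies on the following elementary relation (a short justification added here): writing $\delta f(x,y) = f(x+y)-f(x)$, for any two functions $f$ and $g$ we have \[ M^+ (f+g)(x) = \sup_{K \in \mathcal L} \int \big(\delta f(x,y) + \delta g(x,y)\big) K(y) dy \geq \sup_{K \in \mathcal L} \int \delta f(x,y) K(y) dy + \inf_{K \in \mathcal L} \int \delta g(x,y) K(y) dy = M^+ f(x) + M^- g(x), \] together with the positive homogeneity $M^-(\theta \rho) = \theta M^- \rho$ for $\theta > 0$.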

On the other hand, the estimate from above is more delicate. Let $w = v+\theta \rho$. \begin{align*} M^+ (v+\theta \rho)(x_0) &= \int_{\R^n} \frac{\Lambda (w(x_0+y)+w(x_0-y)-2w(x_0))^+ - \lambda (w(x_0+y)+w(x_0-y)-2w(x_0))^-}{|y|^{n+s}} dy \\ &\leq \int_{\R^n} \frac{\Lambda (w(x_0+y)-w(x_0))^+ - \lambda (w(x_0+y)-w(x_0))^-}{|y|^{n+s}} dy \\ &= \int_{x_0+y \notin B_1} (\dots) + \int_{x_0+y \in B_1} (\dots) \end{align*}

The first integral can be bounded using that $v(x) \leq (2|x|)^\alpha-1/2$ for all $x \notin B_1$ and that $w(x_0) \geq 1/2$. In fact, it is arbitrarily small if $\alpha$ is chosen close to $0$: \[ \int_{x_0+y \notin B_1} (\dots) \leq \int_{x_0+y \notin B_1} \left((2|x_0+y|)^\alpha-1\right) \frac{\Lambda}{|y|^{n+s}} dy \ll 1.\]

The second integral has a nonpositive integrand, simply because $w = v+\theta \rho$ attains its maximum over $\overline{B_1}$ at $x_0$. But we can say more using the set $G = \{v < 0\} \cap B_1$. \begin{align*} \int_{x_0+y \in B_1} (\dots) &= \int_{x_0+y \in G} (\dots) + \int_{x_0+y \in B_1 \setminus G} (\dots) \\ &\leq \int_{x_0+y \in G} (\dots) = \int_{x_0+y \in G} - \lambda \frac{(w(x_0+y)-w(x_0))^-}{|y|^{n+s}} dy \\ &\leq - \int_{x_0+y \in G} \lambda \frac{1/2-\theta}{|y|^{n+s}} dy \leq -C. \end{align*} In the last inequality we use that $|y|^{-n-s}$ is bounded below (here $|y| \leq 2$), that $\theta$ is chosen less than $1/2$, and that $|G| \geq |B_1|/2$.

So, for $\theta$ and $\alpha$ small enough, the sum of the two terms is negative and smaller than $\theta \min_{B_{3/4}} M^- \rho$, arriving at a contradiction. This finishes the proof.

Inspecting the proof above we see that the argument is much more general than presented. The only assumptions used on $\mathcal L$ are that:

  1. The extremal operators are scale invariant.
  2. For the smooth bump function $\rho$, $M^- \rho$ is bounded.
  3. $M^+ w(x_0)$ can be bounded above by a negative constant at any point $x_0 \in B_{3/4}$ which achieves the maximum of $w$ in $B_1$, provided that
    • $w(x) \leq w(x_0) + (2|x|)^\alpha-1$ for $x \notin B_1$.
    • $|\{w(x) \leq w(x_0)-1\} \cap B_1| \geq |B_1|/2$.

There are very general families of nonlocal operators which satisfy the conditions above.

Exercise 4. Verify that the proof above also holds for equations of the form \[ \int_{\R^n} (u(x+y) - u(x)) K(x,y) dy = 0 \text{ in } B_1,\] where we assume that for every $x \in B_1$, \[ \frac{\lambda}{|y|^{n+s}} \leq K(x,y) \leq \frac{\Lambda}{|y|^{n+s}},\] with $s \in (0,1)$, but we do not assume that $K$ is symmetric in $y$.

Exercise 5. Verify that the proof above also holds for equations of the form \[ \int_{\R^n} (u(x+y) - u(x) - y \cdot Du(x)) K(x,y) dy = 0 \text{ in } B_1,\] where we assume that for every $x \in B_1$, \[ \frac{\lambda}{|y|^{n+s}} \leq K(x,y) \leq \frac{\Lambda}{|y|^{n+s}},\] with $s \in (1,2)$, but we do not assume that $K$ is symmetric in $y$.

=Lecture 3=

==$C^{1,\alpha}$ estimates for nonlinear nonlocal equations==

Let $u$ be a bounded function in $\R^n$ which solves $Iu = 0$ in $B_1$ in the viscosity sense, where $I$ is a nonlocal operator uniformly elliptic with respect to a class $\mathcal L$. Let us also assume that $I$ is translation invariant, meaning that if $u$ solves $Iu = 0$ in $\Omega$, then $u(\cdot-x)$ solves the same equation in $x+\Omega$.

We want to obtain a $C^{1,\alpha}$ estimate of the following form: \[ \|u\|_{C^{1,\alpha}(B_{1/2})} \leq C \|u\|_{L^\infty(\R^n)}.\]

The strategy of the proof is the following. Let us assume that $I0=0$ (the value of $I$ applied to the zero function is zero). From the ellipticity assumption \[ M^-_{\mathcal L} u \leq Iu - I0 \leq M^+_{\mathcal L} u.\] Thus, the two inequalities hold \begin{align*} M^-_{\mathcal L} u &\leq 0 \text{ in } B_1, \\ M^+_{\mathcal L} u &\geq 0 \text{ in } B_1. \end{align*} So, from the Holder estimates, $u \in C^\alpha$ in the interior of $B_1$.

Now, for any small vector $h \in \R^n$, we define the incremental quotient \[ v(x) = \frac{u(x+h)-u(x)}{|h|^\alpha}. \] This function $v$ is bounded independently of $h$ in any set compactly contained in $B_1$ (say $B_{1-\varepsilon}$). From this we would like to apply the Holder estimates to obtain that $v \in C^\alpha$ in the interior of $B_1$, independently of $h$. The problem is that the right hand side in the Holder estimate depends on the $L^\infty$ norm of $v$ in the full space $\R^n$ and not only in $B_{1-\varepsilon}$.

One way to overcome this difficulty is imposing stronger assumptions to the family of kernels $\mathcal L$. Let us define the following more restrictive family, where we impose a bound on the derivatives of the kernels \[ \mathcal L_1 = \left\{ K : c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ and } |\nabla K(y)| \leq \frac C{|y|^{n+s+1}}, \text{ plus } K(y)=K(-y) \right\}.\]

Now, we can "integrate by parts" the contribution of the tails of the integrals in $M^+_{\mathcal L_1} v$ and $M^-_{\mathcal L_1} v$. If we split the domain of the integral for each kernel in $\mathcal L_1$, \begin{align*} \int_{\R^n} (v(x+y)-v(x)) K(y) dy &= \int_{B_r} (v(x+y)-v(x)) K(y) dy + \int_{\R^n \setminus B_r} (v(x+y)-v(x)) K(y) dy \\ &\approx \int_{B_r} (v(x+y)-v(x)) K(y) dy + \frac 1 {|h|^\alpha} \int_{\R^n \setminus B_r} (u(x+y)-u(x)) \big( K(y-h)-K(y) \big) dy, \end{align*} where the second line comes from changing variables $y \mapsto y-h$ in part of the tail integral; it holds up to the term $-v(x) \int_{\R^n \setminus B_r} K(y) dy$ and lower order errors from shifting the domain of integration, all of which are bounded in terms of $r$ and $\sup_{B_{1-\varepsilon}} |v|$.

The second term is bounded (depending on $r$) thanks to the bound on $\nabla K$ away from the origin; the first term is the one that matters, and it allows us to obtain the $C^\alpha$ norm of $v$ in terms of the $L^\infty$ norm of $v$ in $B_{1-2 \varepsilon}$ only.
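Concretely, the gradient bound on the kernels controls the tail term as follows (a sketch of this step, under the assumption $|h| \leq r/2$): since $|K(y-h)-K(y)| \leq C|h| \, |y|^{-n-s-1}$ for $|y| \geq r$, \[ \left| \frac 1 {|h|^\alpha} \int_{\R^n \setminus B_r} (u(x+y)-u(x)) \big(K(y-h)-K(y)\big) dy \right| \leq \frac{2\|u\|_{L^\infty(\R^n)}}{|h|^\alpha} \int_{\R^n \setminus B_r} \frac{C|h|}{|y|^{n+s+1}} dy \leq C(r) \, |h|^{1-\alpha} \, \|u\|_{L^\infty(\R^n)}, \] which is bounded uniformly for $|h| \leq 1$ because $\alpha \leq 1$.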

From the equation above we get that $v \in C^\alpha$ independently of $h$. That implies that $u \in C^{2\alpha}$. Iterating the procedure we get $u \in C^{3\alpha}$, $u \in C^{4\alpha}$, \dots, up to $u$ Lipschitz. Then one more iteration gives $u\in C^{1,\alpha}$, but no further gains in regularity are possible with this method, because the $C^{1,\alpha}$ estimate for $u$ is not equivalent to a uniform bound on any incremental quotient of $u$.

Exercise 6 (*). Is the extra assumption on the boundedness of the derivatives of the kernels really necessary to obtain $C^{1,\alpha}$ estimates? This condition is unnecessary if the equation holds in the full space. In fact, the condition is necessary if $s<1$, even for linear equations. The answer is not clear (and in fact open) for $s>1$.

==Holder estimates in the parabolic case==

We will now work out the parabolic version of the Holder estimates that we obtained in the previous lecture. This will show some of the extra difficulties that one faces when dealing with parabolic equations.

The result that we prove is the following.

Let $\mathcal L$ be the usual class of kernels \[ \mathcal L = \left\{ K : c_{s,n} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{s,n} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y) \right\}.\]

Let $u$ be a continuous function, bounded in $\R^n \times [-1,0]$ such that \begin{align*} u_t - M^+_{\mathcal L} u &\leq 0 \text{ in } B_1 \times (-1,0], \\ u_t - M^-_{\mathcal L} u &\geq 0 \text{ in } B_1 \times (-1,0]. \end{align*}

Then $u \in C^\alpha(B_{1/2} \times [-1/2,0])$ and \[ \|u\|_{C^\alpha(B_{1/2} \times [-1/2,0])} \leq C \|u\|_{L^\infty(\R^n \times [-1,0])},\] for constants $\alpha$ and $C$ that depend on $\lambda$, $\Lambda$, $s$ and $n$.

As in the elliptic case, the proof is much harder if we want to make sure that $C$ and $\alpha$ have a finite positive limit as $s \to 2$. We will do the simple case now, in which we do not care about how $C$ and $\alpha$ depend on $s$.

This result was proved with gradient dependence in the equations in [5] and [6].

Let us normalize the function $u$ such that $osc_{\R^n \times [-1,0]} u = 1$. We will show that there is a Holder modulus of continuity at the origin, i.e. \[ |u(x,-t) - u(0,0)| \leq C(|x|^\alpha+t^{\alpha/s}).\]

It is convenient to keep in mind the natural scaling of the equation. The function $u_r(x,t) = u(rx,r^st)$ satisfies the same two inequalities \begin{align*} \partial_t u_r - M^+_{\mathcal L} u_r &\leq 0 \text{ in } B_{1/r} \times (-1/r^s,0], \\ \partial_t u_r - M^-_{\mathcal L} u_r &\geq 0 \text{ in } B_{1/r} \times (-1/r^s,0]. \end{align*} Thus, $|x|^\alpha$ has the same scaling as $t^{\alpha/s}$.
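To verify the scaling (a short check added here): for any kernel $K \in \mathcal L$, the rescaled kernel $\tilde K(z) = r^{-n-s} K(z/r)$ also belongs to $\mathcal L$, and the change of variables $z = ry$ gives \[ \int_{\R^n} \big(u_r(x+y,t)-u_r(x,t)\big) K(y) dy = r^s \int_{\R^n} \big(u(rx+z, r^s t)-u(rx, r^s t)\big) \tilde K(z) dz, \] so that $M^{\pm}_{\mathcal L} u_r(x,t) = r^s \, M^{\pm}_{\mathcal L} u(rx, r^s t)$, while $\partial_t u_r(x,t) = r^s \, u_t(rx, r^s t)$; the factors $r^s$ cancel in the two inequalities.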

Let us define the parabolic cylinders $Q_r$ with the right scaling as \[ Q_r := B_r \times [-r^s,0].\]

What we will prove is the inequality \begin{equation} \label{e1} osc_{Q_{2^{-k}}} u \leq (1-\theta)^k.\end{equation} From this, the Holder continuity follows as in the elliptic case.

From the assumption that $osc_{\R^n \times [-1,0]} u = 1$, we know that \eqref{e1} holds for all $k \leq 0$. That gives us the base for the induction. Now we assume it is true up to some value of $k$ and want to show it also holds for $k+1$.

We start by rescaling the function so as to map $Q_{2^{-k}}$ to $Q_1$. Let $v(x,t) = (1-\theta)^{-k} u(2^{-k}x, 2^{-ks}t) - a_k$, where $a_k$ is chosen so that $-1/2 \leq v \leq 1/2$ in $Q_1$.

From the inductive hypothesis, $osc_{Q_{2^j}} v \leq (1-\theta)^{-j}$ for all $j \geq 0$.

In order to show that $osc_{Q_{1/2}} v \leq 1-\theta$ we must show either that $v \leq 1/2-\theta$ in $Q_{1/2}$ or that $v \geq -1/2+\theta$ in $Q_{1/2}$. Which of the two alternatives we manage to prove depends on which of the two sets $\{v \geq 0\} \cap (B_1 \times [-1,-1/2^s])$ or $\{v \leq 0\} \cap (B_1 \times [-1,-1/2^s])$ has larger measure. Let us assume it is the latter; otherwise the same proof upside down works with the opposite inequalities.

The function $v$ satisfies the following three conditions

  • $v(x,t) \leq (2|x|)^\alpha - 1/2$ for all $x \notin B_1$ and $t \in [-1,0]$.
  • $|\{v \leq 0 \} \cap (B_1 \times [-1,-1/2^s])| \geq \frac 12 |B_1 \times [-1,-1/2^s]|$.
  • $\partial_t v - M^+_{\mathcal L} v \leq 0$ in $Q_1$

We need to show that, for small enough $\theta>0$ and $\alpha>0$, these three conditions imply that $v \leq 1/2-\theta$ in $Q_{1/2}$.

Let $\rho$ be a smooth radial function supported in $B_{3/4}$ such that $\rho \equiv 1$ in $B_{1/2}$. We will show that the function $v$ stays below the function $b(x,t) = 1/2 + \epsilon + \delta (t+1) - m(t) \rho(x)$ in $B_1 \times [-1,0]$, where $m$ is the solution to the ODE \begin{align*} m(-1) &= 0, \\ m'(t) &= c_0 | \{x \in B_1: v(x,t) \leq 0\}| - C_1 m(t), \end{align*} for constants $c_0$ and $C_1$ to be chosen later.
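For later reference, the ODE can be solved explicitly (a small computation added here): \[ m(t) = c_0 \int_{-1}^{t} e^{-C_1 (t-\tau)} \, \big| \{x \in B_1 : v(x,\tau) \leq 0\} \big| \, d\tau, \] so for $t \in [-1/2^s, 0]$ we have $m(t) \geq c_0 e^{-C_1} \, \big| \{v \leq 0\} \cap (B_1 \times [-1,-1/2^s]) \big|$, which is the lower bound used at the end of the proof.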

We show that the inequality holds by proving that it can never be invalidated for a first time. Indeed, assume there were a first point $(x_0,t_0)$ where equality holds. This point must be in the support of $\rho$ (strict inequality holds elsewhere, since $v \leq 1/2 < b$ wherever $\rho = 0$), thus $x_0 \in B_{3/4}$.

Since $v-b$ is nonpositive for $t \leq t_0$ and vanishes at $(x_0,t_0)$, we have the simple inequality \[v_t(x_0,t_0) \geq b_t(x_0,t_0) = -m'(t_0) \rho(x_0) + \delta.\]

Let $G(t) = \{x \in B_1: v(x,t) \leq 0\}$. We know, by the assumption above, that \[ \int_{-1}^{-1/2^s} |G(t)| \, dt \geq \frac 1 2 \, |B_1 \times [-1,-1/2^s]|.\]

We write \begin{align*} M^+_{\mathcal L} v(x_0,t_0) &= \int_{x_0 + y \notin B_1} (\dots) dy + \int_{x_0 + y \in B_1\setminus G(t_0)} (\dots) dy + \int_{x_0 + y \in G(t_0)} (\dots) dy \\ &\leq (\text{something arbitrarily small as } \alpha\to 0) - m(t_0) \, M^-_{\mathcal L} \rho(x_0) - c_0 |G(t_0)| \\ &= C(\alpha) - m(t_0) M^-_{\mathcal L} \rho(x_0) - c_0 |G(t_0)|. \end{align*} Here the first term is controlled as in the elliptic case, the second comes from comparing $v$ with the barrier $b$ inside $B_1$, and the last one from the fact that $v \leq 0$ on $G(t_0)$ while $v(x_0,t_0)$ is close to $1/2$.

Plugging these inequalities into the equation, we obtain \[ v_t(x_0,t_0) - M^+_{\mathcal L} v(x_0,t_0) \geq -m'(t_0) \rho(x_0) + \delta - C(\alpha) + m(t_0) M^-_{\mathcal L} \rho(x_0) + c_0 |G(t_0)|. \] Recall that $m'(t) = c_0 |G(t)| - C_1 m(t)$ by definition (this is how $c_0$ is chosen). Since $\rho \leq 1$, we have \[ v_t(x_0,t_0) - M^+_{\mathcal L} v(x_0,t_0) \geq \delta - C(\alpha) + m(t_0) \left( M^-_{\mathcal L} \rho(x_0) + C_1 \rho(x_0) \right). \] We choose $\alpha$ small so that $C(\alpha) < \delta$, so we have \[ v_t(x_0,t_0) - M^+_{\mathcal L} v(x_0,t_0) > m(t_0) \left( M^-_{\mathcal L} \rho(x_0) + C_1 \rho(x_0) \right). \] Now we have to choose $C_1$ appropriately to make this right hand side nonnegative and contradict the equation for $v$.

This is clearly possible if we know a lower bound for $\rho(x_0)$. However, we must also consider that $x_0$ may be a point where $\rho$ is very small. It turns out that $M^-_{\mathcal L} \rho > 0$ where $\rho$ is small: indeed $M^-_{\mathcal L} \rho(x) > 0$ whenever $\rho(x)=0$ (this follows from the formula for $M^-_{\mathcal L}$, since $\rho \geq 0$ and positive on an open set), and $M^-_{\mathcal L} \rho$ is a continuous function. Thus, where $\rho$ is small, the right hand side is automatically nonnegative. We choose $C_1$ large so that the right hand side is also nonnegative where $\rho$ is not small. This gives us a contradiction with the equation and proves that $v$ must stay below the function $b$.

To finish the proof, all we need is to show that $b \leq 1/2-\theta$ in $Q_{1/2}$. We analyze the ODE that defines $m(t)$ and we see that $m(t)$ is bounded below for $t \in [-1/2^s,0]$ in terms of the measure of the set $\{v \leq 0\} \cap (B_1 \times [-1,-1/2^s])$ (in fact, an explicit formula can be given for $m$). Let $\theta$ be half of this lower bound for $m(t)$, and let us choose $\delta$ and $\epsilon$ less than $\theta/4$. We then have $b(x,t) = 1/2 + \epsilon + \delta(t+1) - m(t) \rho(x) \leq 1/2-\theta$ in $Q_{1/2}$, which finishes the proof.

Exercise 7. Adapt the proof of the previous result to equations with drift and diffusion. Let $u$ be a continuous function, bounded in $\R^n \times [-1,0]$, such that for some $B>0$ and $s \geq 1$, \begin{align*} u_t - M^+_{\mathcal L} u - B|\nabla u| &\leq 0 \text{ in } B_1 \times (-1,0], \\ u_t - M^-_{\mathcal L} u + B|\nabla u| &\geq 0 \text{ in } B_1 \times (-1,0]. \end{align*} Then $u \in C^\alpha(Q_{1/2})$ with \[ \|u\|_{C^\alpha(Q_{1/2})} \leq C \|u\|_{L^\infty(\R^n \times [-1,0])}.\]

The two inequalities above are implied by an equation of the form \[ u_t + b \cdot \nabla u - \int_{\R^n} (u(x+y)-u(x)) K(x,y) dy = 0.\] where $\|b\|_{L^\infty} \leq B$ and $K(x,\cdot)$ belongs to the class $\mathcal L$ for all $x$.

Exercise 8. Let $I$ be uniformly elliptic with respect to the usual class $\mathcal L$ (without any condition on the derivatives of the kernel) and translation invariant. Prove that bounded solutions to the equation (in the full space) \[ u_t - Iu = 0 \text{ in } \R^n \times (0,\infty),\] become immediately $C^{1,\alpha}$ in space and time for positive time.

==References==

  1. Caffarelli, Luis; Silvestre, Luis (2009), "Regularity theory for fully nonlinear integro-differential equations", Communications on Pure and Applied Mathematics 62 (5): 597–638, ISSN 0010-3640.
  2. Barles, Guy; Imbert, Cyril (2008), "Second-order elliptic integro-differential equations: viscosity solutions' theory revisited", Annales de l'Institut Henri Poincaré. Analyse Non Linéaire 25 (3): 567–585, doi:10.1016/j.anihpc.2007.02.007, ISSN 0294-1449.
  3. Caffarelli, Luis; Silvestre, Luis (2010), "Smooth approximations of solutions to nonconvex fully nonlinear elliptic equations", Nonlinear partial differential equations and related topics: dedicated to Nina N. Uraltseva, Amer. Math. Soc. 229: 67.
  4. Silvestre, Luis (2006), "Holder estimates for solutions of integro-differential equations like the fractional Laplace", Indiana University Mathematics Journal 55 (3): 1155–1174, ISSN 0022-2518.
  5. Silvestre, Luis (2011), "On the differentiability of the solution to the Hamilton-Jacobi equation with critical fractional diffusion", Advances in Mathematics 226 (2): 2020–2039, ISSN 0001-8708.
  6. Silvestre, Luis (2010), "Holder estimates for advection fractional-diffusion equations", arXiv preprint arXiv:1009.5723.