Lecture notes on nonlocal equations

Lecture 1

Definitions: linear equations

The first lecture serves as an overview of the subject and familiarizes us with the type of equations under study.

The aim of the course is to see some regularity results for elliptic equations. Most of these results can be generalized to parabolic equations as well. However, this generalization presents extra difficulties that involve nontrivial ideas.

The prime example of an elliptic equation is the Laplace equation. \[ \Delta u(x) = 0 \text{ in } \Omega.\]

Elliptic equations are those which have properties similar to those of the Laplace equation. This is, admittedly, a vague definition.

The class of fully nonlinear elliptic equations of second order has the form \[ F(D^2u, Du, u, x)=0 \text{ in } \Omega\] for a function $F$ such that \[ \frac{\partial F}{\partial M_{ij}} > 0 \text{ and } \frac{\partial F}{\partial u} \leq 0.\]

These are the minimal monotonicity conditions for which you can expect a comparison principle to hold. The appropriate notion of weak solution, viscosity solutions, is based on this monotonicity.

What is the Laplacian? The most natural (coordinate independent) definition may be \[ \Delta u(x) = \lim_{r \to 0} \frac c {r^{n+2}} \int_{B_r} (u(x+y)-u(x))\, dy.\]
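
To see why this limit recovers the usual Laplacian (up to the choice of the dimensional constant $c$), one can Taylor expand a $C^2$ function $u$ around $x$: the first order term integrates to zero over the ball by symmetry, and \begin{align*} \int_{B_r} \big(u(x+y)-u(x)\big)\, dy &= \int_{B_r} \Big( Du(x)\cdot y + \tfrac12\, y^T D^2u(x)\, y + o(|y|^2) \Big)\, dy \\ &= \frac{\Delta u(x)}{2n} \int_{B_r} |y|^2\, dy + o(r^{n+2}), \end{align*} using that $\int_{B_r} y_i y_j\, dy = \frac{\delta_{ij}}{n} \int_{B_r} |y|^2\, dy$. Since $\int_{B_r} |y|^2\, dy$ is a dimensional constant times $r^{n+2}$, dividing by $r^{n+2}$ and letting $r \to 0$ gives $\Delta u(x)$ up to the constant $c$.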

A simple (although rather uninteresting) example of a nonlocal equation would be the following non-infinitesimal version of the Laplace equation: \[ \frac c {r^{n+2}} \int_{B_r} (u(x+y)-u(x))\, dy = 0 \text{ for all } x \in \Omega.\]

The equation tells us that the value $u(x)$ equals the average of $u$ in the ball $B_r(x)$. A more general integral equation is a weighted version of the above: \[ \int_{\R^n} (u(x+y)-u(x)) K(y)\, dy = 0 \text{ for all } x \in \Omega,\] where $K:\R^n \to \R$ is a nonnegative kernel.

The equation says that $u(x)$ is a weighted average of the values of $u$ in a neighborhood of $x$. This is true in some sense for all elliptic equations, but it is most apparent for integro-differential ones.

For the Dirichlet problem, the boundary values have to be prescribed in the whole complement of the domain. \begin{align*} \int_{\R^n} (u(x+y)-u(x)) K(y) dy &= 0 \text{ for all } x \in \Omega, \\ u(x) &= g(x) \text{ for all } x \notin \Omega. \end{align*}

This type of equation has a natural motivation from probability, as we will see below.

Probabilistic derivation

Let us start with an overview of how to derive the Laplace equation from Brownian motion.

Let $B_t^x$ be Brownian motion starting at the point $x$ and $\tau$ be the first time it hits the boundary $\partial \Omega$. If we call $u(x) = \mathbb E[g(B_\tau^x)]$ for some prescribed function $g: \partial \Omega \to \R$, then $u$ will solve the classical Laplace equation \begin{align*} \Delta u(x) &= 0 \text{ in } \Omega,\\ u(x) &= g(x) \text{ on } \partial \Omega. \end{align*}
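
A heuristic sketch of why this works: by the rotational symmetry of Brownian motion, the first exit position from a small ball $B_r(x) \subset \Omega$ is uniformly distributed on $\partial B_r(x)$, so the strong Markov property gives \[ u(x) = \mathbb E\big[ u(B^x_{\tau_r}) \big] = \frac 1{|\partial B_r(x)|} \int_{\partial B_r(x)} u \, d\sigma, \] where $\tau_r$ is the first exit time from $B_r(x)$. Thus $u$ satisfies the mean value property on every small ball, which characterizes harmonic functions and is consistent with the averaged formulation of $\Delta u = 0$ above.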

A variation would be to consider diffusions other than Brownian motion. If $X^x_t$ is the stochastic process given by the SDE $X_0^x = x$ and $dX_t^x = \sigma(X_t^x)\, dB_t$, and we define as before $u(x) = \mathbb E[g(X_\tau^x)]$, then $u$ will solve \begin{align*} a_{ij}(x) \partial_{ij} u(x) &= 0 \text{ in } \Omega,\\ u(x) &= g(x) \text{ on } \partial \Omega, \end{align*} where $a_{ij}(x) = \sigma(x) \sigma^*(x)$ is a nonnegative definite matrix for each point $x$.
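
A quick heuristic for where the coefficients $a_{ij}$ come from (a sketch, assuming $u$ is smooth) is Itô's formula: along the trajectories of $X^x_t$, \[ du(X^x_t) = Du(X^x_t) \cdot \sigma(X^x_t)\, dB_t + \frac12\, a_{ij}(X^x_t)\, \partial_{ij} u(X^x_t)\, dt. \] The first term has zero expectation, so $u(X^x_t)$ is a martingale exactly when the drift vanishes, i.e. when $a_{ij}\partial_{ij}u = 0$ (the factor $\tfrac12$ is harmless and can be absorbed into the coefficients); evaluating at the exit time recovers $u(x) = \mathbb E[g(X^x_\tau)]$.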

Nonlinear equations arise from stochastic control problems. Say that we can choose the coefficients $a_{ij}(x)$ from a family of possible matrices $\{a_{ij}^\alpha\}$ indexed by a parameter $\alpha \in A$. For every point $x$, we can choose a different $a_{ij}(x)$ and our objective is to make $u(x)$ as large as possible. The maximum possible value of $u(x)$ will satisfy the equation \begin{align*} \sup_{\alpha} a_{ij}^\alpha \partial_{ij} u &= 0 \text{ in } \Omega,\\ u(x) &= g(x) \text{ on } \partial \Omega. \end{align*}

Sketch of the proof. If $v$ is any solution to \begin{align*} a_{ij}(x) \partial_{ij} v(x) &= 0 \text{ in } \Omega,\\ v(x) &= g(x) \text{ on } \partial \Omega, \end{align*} with $a_{ij}(x) \in \{a_{ij}^\alpha : \alpha \in A\}$, then from the equation that $u$ solves we have \[ a_{ij}(x) \partial_{ij} u(x) \leq \sup_\alpha a_{ij}^\alpha \partial_{ij} u(x) = 0 \text{ in } \Omega, \] so $u$ is a supersolution of the equation satisfied by $v$, with the same boundary data. Therefore $u \geq v$ in $\Omega$ by the comparison principle for linear elliptic PDE.

Integro-differential equations are derived from discontinuous stochastic processes: Lévy processes with jumps.

Let $X_t^x$ be a pure jump Lévy process starting at $x$. Now $\tau$ is the first exit time from $\Omega$. The point $X_\tau$ may be anywhere outside of $\Omega$, since $X_t$ jumps. The jumps take place at random times determined by a Poisson process. The jumps in directions $y$ belonging to a set $A \subset \R^n$ occur according to a Poisson process with intensity \[ \int_A K(y)\, dy. \] The kernel $K$ thus represents the frequency of jumps in each direction. These processes are well understood and well studied in the probability community.
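
A simple special case to keep in mind: if the kernel is integrable, say $\Lambda_0 := \int_{\R^n} K(y)\, dy < \infty$ (here $\Lambda_0$ is just notation for the total jump rate), then $X_t$ is a compound Poisson process: it stays put for an exponential waiting time with rate $\Lambda_0$ and then jumps by a random vector distributed according to \[ \frac{K(y)\, dy}{\Lambda_0}. \]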

Small jumps may happen more often than large ones. In fact, small jumps may happen infinitely often and still give a well defined stochastic process. This translates into kernels $K$ with a singularity at the origin. The exact assumption one has to make is \[ \int_{\R^n} K(y) (1 \wedge |y|^2)\, dy < +\infty.\] The generator of the Lévy process is \[ Lu(x) = \int_{\R^n} \big(u(x+y) - u(x) - y \cdot Du(x)\, \chi_{B_1}(y)\big) K(y)\, dy. \]
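
For example, for the fractional kernel $K(y) = |y|^{-n-s}$, which will play a central role below, the condition holds exactly when $s \in (0,2)$: \[ \int_{\R^n} (1 \wedge |y|^2)\, \frac{dy}{|y|^{n+s}} = |S^{n-1}| \left( \int_0^1 r^{1-s}\, dr + \int_1^\infty r^{-1-s}\, dr \right) = |S^{n-1}| \left( \frac1{2-s} + \frac1s \right) < +\infty. \]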

We may assume that $K(y)=K(-y)$ in order to simplify the expression. This assumption is not essential, but it makes the computations more compact. This way we can write \begin{align*} Lu(x) &= \mathrm{PV} \int_{\R^n} (u(x+y) - u(x)) K(y)\, dy \\ &= \frac12 \int_{\R^n} (u(x+y) + u(x-y) - 2u(x)) K(y)\, dy. \end{align*}
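
Indeed, changing variables $y \mapsto -y$ and using $K(y)=K(-y)$, \[ \mathrm{PV} \int_{\R^n} (u(x+y)-u(x)) K(y)\, dy = \mathrm{PV} \int_{\R^n} (u(x-y)-u(x)) K(y)\, dy, \] and averaging the two expressions gives the second form. Note that the second difference $u(x+y)+u(x-y)-2u(x)$ is $O(|y|^2)$ for $u \in C^2$, so the last integral converges absolutely near the origin and no principal value is needed.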

An optimal control problem for jump processes leads to the integro-differential Bellman equation \[ Iu(x) := \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^\alpha(y)\, dy = 0 \text{ in } \Omega.\]

Another possibility is to consider a problem with two parameters, which are controlled by two competing players. This is the integro-differential Isaacs equation. \[ Iu(x) := \inf_\beta \ \sup_{\alpha} \int_{\R^n}(u(x+y)-u(x)) K^{\alpha\beta}(y)\, dy = 0 \text{ in } \Omega.\]

Integral equations also arise in a number of other contexts.

Uniform ellipticity

Regularity results require stronger monotonicity assumptions. For fully nonlinear elliptic equations of second order $F(D^2u)=0$, uniform ellipticity means that there exist two constants $\Lambda \geq \lambda > 0$ such that \[ \lambda I \leq \frac{\partial F}{\partial M_{ij}}(M) \leq \Lambda I.\]
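
For example, the Laplace equation corresponds to $F(M) = tr(M)$, which is uniformly elliptic with $\lambda = \Lambda = 1$; more generally, a linear operator $F(M) = a_{ij} M_{ij}$ is uniformly elliptic precisely when \[ \lambda I \leq \{a_{ij}\} \leq \Lambda I. \]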

Big Theorems:

  • Krylov-Safonov (1981): Solutions to fully nonlinear uniformly elliptic equations are $C^{1,\alpha}$ for some $\alpha>0$.
  • Evans-Krylov (1983): Solutions to convex fully nonlinear uniformly elliptic equations are $C^{2,\alpha}$ for some $\alpha>0$.

At the end of this course, we should be able to understand the proofs of these two theorems and their generalizations to nonlocal equations.

We first need to understand what ellipticity means in an integro-differential equation. The prime example will be the fractional Laplacian. For $s \in (0,2)$, define \[ -(-\Delta)^{s/2} u(x) = \int_{\R^n} (u(x+y)-u(x)) \frac{c_{n,s}}{|y|^{n+s}}\, dy\] (with the integral understood in the principal value sense when $s \geq 1$).

This is an integro-differential operator with a kernel which is radially symmetric, homogeneous, and singular at the origin.
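
The normalizing constant $c_{n,s}$ is chosen so that the operator agrees with the Fourier multiplier that justifies its name: \[ \widehat{(-\Delta)^{s/2} u}(\xi) = |\xi|^s\, \hat u(\xi). \]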

A natural ellipticity condition for linear integro-differential operators would be to impose that the kernel is comparable to that of the fractional Laplacian. The condition could be \[ c_{n,s} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{n,s} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y).\] But other conditions are possible.

Uniform ellipticity is linked to extremal operators. The classical Pucci maximal operators are the extremal operators among all linear uniformly elliptic operators which vanish at zero. \begin{align*} M^+(D^2 u) &= \sup_{\lambda I \leq \{a_{ij}\} \leq \Lambda I} a_{ij} \partial_{ij} u(x) = \Lambda tr(D^2u)^+ - \lambda tr(D^2u)^-,\\ M^-(D^2 u) &= \inf_{\lambda I \leq \{a_{ij}\} \leq \Lambda I} a_{ij} \partial_{ij} u(x) = \lambda tr(D^2u)^+ - \Lambda tr(D^2u)^-. \end{align*} A fully nonlinear equation $F(D^2u)=0$ is uniformly elliptic if and only if for any two symmetric matrices $X$ and $Y$, \[M^-(X-Y) \leq F(X) - F(Y) \leq M^+(X-Y).\]
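
Here $tr(D^2u)^+$ and $tr(D^2u)^-$ denote the sum of the positive eigenvalues and the sum of the absolute values of the negative eigenvalues of $D^2u$. Concretely, if $e_1,\dots,e_n$ are the eigenvalues of $D^2u(x)$, then \begin{align*} M^+(D^2u) &= \Lambda \sum_{e_i > 0} e_i + \lambda \sum_{e_i < 0} e_i, \\ M^-(D^2u) &= \lambda \sum_{e_i > 0} e_i + \Lambda \sum_{e_i < 0} e_i. \end{align*}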

Given any family of kernels $\mathcal L$, we define the extremal operators \begin{align*} M_{\mathcal L}^+ u(x) &= \sup_{K \in \mathcal L} \int (u(x+y)-u(x)) K(y)\, dy, \\ M_{\mathcal L}^- u(x) &= \inf_{K \in \mathcal L} \int (u(x+y)-u(x)) K(y)\, dy. \end{align*} Then, for a nonlocal operator $I$ (which is a black box that maps $C^2$ functions into continuous functions), we say that $I$ is uniformly elliptic with respect to $\mathcal L$ if for any two $C^2$ functions $u$ and $v$, \[ M_{\mathcal L}^- (u-v)(x) \leq Iu(x) - Iv(x) \leq M_{\mathcal L}^+ (u-v)(x).\]
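
For instance, the Bellman operator from the previous section, $Iu(x) = \sup_\alpha \int (u(x+y)-u(x)) K^\alpha(y)\, dy$, is uniformly elliptic with respect to any class $\mathcal L$ that contains all the kernels $K^\alpha$: since a supremum of differences bounds the difference of suprema, \[ Iu(x) - Iv(x) \leq \sup_\alpha \int_{\R^n} \big( (u-v)(x+y) - (u-v)(x) \big) K^\alpha(y)\, dy \leq M_{\mathcal L}^+ (u-v)(x), \] and the lower bound follows by exchanging the roles of $u$ and $v$.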

The first choice of $\mathcal L$ would be the one described above: \[ \mathcal L = \left\{ K : c_{n,s} \frac \lambda {|y|^{n+s}} \leq K(y) \leq c_{n,s} \frac \Lambda {|y|^{n+s}}, \text{ plus } K(y)=K(-y) \right\}.\]

In this case, the maximal operators take a particularly simple form

\begin{align*} M_{\mathcal L}^+ u(x) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{\Lambda (u(x+y)+u(x-y)-2u(x))^+ - \lambda (u(x+y)+u(x-y)-2u(x))^-}{|y|^{n+s}} dy, \\ M_{\mathcal L}^- u(x) &= \frac{c_{n,s}}2 \int_{\R^n} \frac{\lambda (u(x+y)+u(x-y)-2u(x))^+ - \Lambda (u(x+y)+u(x-y)-2u(x))^-}{|y|^{n+s}} dy. \end{align*}
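
These formulas follow from choosing, for each $y$, the most favorable admissible kernel: writing $\delta(u,x,y) = u(x+y)+u(x-y)-2u(x)$ (an even function of $y$), the supremum defining $M_{\mathcal L}^+ u(x)$ is attained by the kernel \[ K(y) = \frac{c_{n,s}}{|y|^{n+s}} \Big( \Lambda\, \chi_{\{\delta(u,x,y) > 0\}} + \lambda\, \chi_{\{\delta(u,x,y) \leq 0\}} \Big), \] which is symmetric and stays between the two admissible bounds; the infimum defining $M_{\mathcal L}^- u(x)$ is computed in the same way with $\lambda$ and $\Lambda$ exchanged.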

For other choices of $\mathcal L$, the operators $M^+_{\mathcal L}$ and $M^-_{\mathcal L}$ may not have an explicit expression.

Lecture 2