Stochastic control

Stochastic control refers to the general area in which the distribution of some random variable depends on the choice of certain controls, and one looks for an optimal strategy for choosing those controls in order to maximize or minimize the expected value of the random variable.
 
The random variable to optimize is computed in terms of some stochastic process. It is usually the value of some given function evaluated at the end point of the stochastic process.
 
== Standard stochastic control: the [[Bellman equation]] ==
Consider a family of stochastic processes $X_t^\alpha$ indexed by a parameter $\alpha \in A$, whose corresponding generators are the operators $L^\alpha$. We consider the following dynamic programming setting: the parameter $\alpha$ is a control that can be changed at any time.
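For instance (an illustrative choice of notation, not specified in this article), the processes may be pure-jump processes whose generators are integro-differential operators of the form
\[ L^\alpha u(x) = \int_{\mathbb{R}^n} \big( u(x+y) - u(x) - y \cdot \nabla u(x) \, \chi_{B_1}(y) \big) K_\alpha(y) \, dy, \]
with nonnegative kernels $K_\alpha$; in the classical second order case one can take $L^\alpha u = \mathrm{tr}(A_\alpha \, D^2 u)$ for a family of uniformly elliptic matrices $A_\alpha$.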
 
We look for the choice of control that maximizes the expected value of a given function $g$ evaluated at the point where the process $X_t^\alpha$ first exits a domain $D$. If we call this maximal expected value $u(x)$, as a function of the initial point $X_0 = x$, then the function $u$ solves the [[Bellman equation]]
\[ \sup_{\alpha \in A} L^\alpha u = 0 \qquad \text{in } D.\]
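In probabilistic terms (a standard formulation, written here for concreteness), the value function is
\[ u(x) = \sup_{\alpha(\cdot)} \mathbb{E} \left[ \, g\big(X^\alpha_\tau\big) \ \big| \ X_0 = x \right], \]
where $\tau$ is the first exit time from $D$ and the supremum is taken over admissible controls $\alpha(\cdot)$. The equation is complemented by the condition $u = g$ wherever the process lands upon exiting $D$ (on $\partial D$ for continuous processes, in $\mathbb{R}^n \setminus D$ for jump processes).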
 
This is a fully nonlinear convex equation.
 
When the operators $L^\alpha$ are second order and uniformly elliptic, the solution is $C^{2,\gamma}$ for some $\gamma > 0$, and therefore classical. This is the result of the [[Evans-Krylov theorem]]. When the operators $L^\alpha$ are integro-differential operators, one can still prove that the solution is classical provided the kernels satisfy suitable uniform assumptions. This is the [[nonlocal Evans-Krylov theorem]].
 
There are many variants of this problem. If instead the value of $g$ is prescribed at a previously specified time $T$ (rather than at the exit time from a domain), then the value function $u(x,t)$ solves the backwards parabolic Bellman equation.
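Written out (a standard formulation, included here for illustration), the equation reads
\[ \partial_t u + \sup_{\alpha \in A} L^\alpha u = 0 \qquad \text{for } t < T, \]
with the terminal condition $u(x,T) = g(x)$, so that $u(x,t)$ is the maximal expected value of $g(X_T^\alpha)$ for a process starting from $x$ at time $t$.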
 
== [[Optimal stopping problem]] ==
 
== Zero sum games: the [[Isaacs equation]] ==
