In mathematics, Itô's lemma or Itô's formula (also called the Itô–Döblin formula<ref>Template:Cite journal</ref>) is an identity used in Itô calculus to find the differential of a time-dependent function of a stochastic process. It serves as the stochastic calculus counterpart of the chain rule. It can be heuristically derived by forming the Taylor series expansion of the function up to its second derivatives and retaining terms up to first order in the time increment and second order in the Wiener process increment. The lemma is widely employed in mathematical finance, and its best known application is in the derivation of the Black–Scholes equation for option values.

This result was discovered by Japanese mathematician Kiyoshi Itô in 1951.<ref>Template:Cite journal</ref>

Motivation

Suppose we are given the stochastic differential equation <math display="block">dX_t = \mu_t\ dt + \sigma_t\ dB_t,</math> where <math>B_t</math> is a Wiener process and the functions <math>\mu_t, \sigma_t</math> are deterministic (not stochastic) functions of time. In general, it is not possible to write a solution <math>X_t</math> directly in terms of <math>B_t.</math> However, we can formally write an integral solution <math display="block">X_t = \int_0^t \mu_s\ ds + \int_0^t \sigma_s\ dB_s.</math>

This expression lets us easily read off the mean and variance of <math>X_t</math> (which, being Gaussian, is completely determined by these two moments). First, notice that each increment <math>dB_s</math> individually has mean 0, so the expected value of <math>X_t</math> is simply the integral of the drift function: <math display="block">\mathrm E[X_t]=\int_0^t \mu_s\ ds.</math>

Similarly, because the <math>dB_s</math> increments are independent and each has variance equal to its duration <math>ds</math>, the variance of <math>X_t</math> is simply the integral of the variance of each infinitesimal step in the random walk: <math display="block">\mathrm{Var}[X_t] = \int_0^t\sigma_s^2\ ds.</math>
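These two formulas can be checked numerically. The sketch below is illustrative only: it fixes constant coefficients <math>\mu_s = 2</math> and <math>\sigma_s = 0.5</math> (an arbitrary choice), so the formulas above predict <math>\mathrm E[X_t] = 2t</math> and <math>\mathrm{Var}[X_t] = 0.25\,t</math>, and simulates the integral solution directly.

```python
import numpy as np

# Illustrative constant coefficients (an arbitrary choice): mu_s = 2.0 and
# sigma_s = 0.5, so the formulas above predict E[X_t] = 2t, Var[X_t] = 0.25t.
def simulate_X(t=1.0, n_steps=500, n_paths=5000, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    mu, sigma = 2.0, 0.5
    # Brownian increments dB have mean 0 and variance dt.
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X = np.sum(mu * dt + sigma * dB, axis=1)  # X_t as a sum of increments
    return float(X.mean()), float(X.var())

mean, var = simulate_X()
print(mean, var)  # ≈ 2.0 and ≈ 0.25
```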

However, sometimes we are faced with a stochastic differential equation for a more complex process <math>Y_t,</math> in which the process appears on both sides of the differential equation. That is, say <math display="block">dY_t = a_1(Y_t,t) \ dt + a_2(Y_t,t)\ dB_t,</math> for some functions <math>a_1</math> and <math>a_2.</math> In this case, we cannot immediately write a formal solution as we did for the simpler case above. Instead, we hope to write the process <math>Y_t</math> as a function of a simpler process <math>X_t</math> taking the form above. That is, we want to identify three functions <math>f(t,x), \mu_t,</math> and <math>\sigma_t,</math> such that <math>Y_t=f(t, X_t)</math> and <math>dX_t = \mu_t\ dt + \sigma_t\ dB_t.</math> In practice, Itô's lemma is used to find this transformation. Finally, once we have transformed the problem into the simpler type of problem, we can determine the mean and higher moments of the process.

Derivation

We derive Itô's lemma by expanding a Taylor series and applying the rules of stochastic calculus.

Suppose <math>X_t</math> is an Itô drift-diffusion process that satisfies the stochastic differential equation

<math display="block"> dX_t= \mu_t \, dt + \sigma_t \, dB_t,</math>

where <math>B_t</math> is a Wiener process.

If <math>f(t,x)</math> is a twice-differentiable scalar function, its expansion in a Taylor series is

<math display="block"> \begin{align} \frac{\Delta f(t)}{dt}dt &= f(t+dt, x) - f(t,x) \\ &= \frac{\partial f}{\partial t}\,dt + \frac{1}{2}\frac{\partial^2 f}{\partial t^2}\,(dt)^2 + \cdots \\[1ex] \frac{\Delta f(x)}{dx}dx &= f(t, x+dx) - f(t,x) \\ &= \frac{\partial f}{\partial x}\,dx + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}\,(dx)^2 + \cdots \end{align}</math>

Then use the total derivative and the definition of the partial derivative <math>f_y=\lim_{dy\to0}\frac{\Delta f(y)}{dy}</math>:

<math display="block"> \begin{align} df &= f_t dt + f_x dx \\[1ex] &= \lim_{dx \to 0 \atop dt \to 0} \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,dx + \frac{1}{2} \left(\frac{\partial^2 f}{\partial t^2}\,(dt)^2 + \frac{\partial^2 f}{\partial x^2}\,(dx)^2\right) + \cdots . \end{align}</math>

Substituting <math>x=X_t</math> and therefore <math>dx=dX_t=\mu_t\,dt + \sigma_t\,dB_t</math>, we get

<math display="block"> \begin{align} df = \lim_{dB_t \to 0 \atop dt \to 0} \; & \frac{\partial f}{\partial t}\,dt

+ \frac{\partial f}{\partial x} \left(\mu_t\,dt + \sigma_t\,dB_t\right) \\

&+ \frac{1}{2} \left[

      \frac{\partial^2 f}{\partial t^2}\,{\left(dt\right)}^2
    + \frac{\partial^2 f}{\partial x^2} \left (\mu_t^2\,{\left(dt\right)}^2 + 2 \mu_t \sigma_t \, dt \, dB_t + \sigma_t^2 \, {\left(dB_t\right)}^2 \right )

\right] + \cdots. \end{align} </math>

In the limit <math>dt\to0</math>, the terms <math>(dt)^2</math> and <math>dt\,dB_t</math> tend to zero faster than <math>dt</math>, while <math>(dB_t)^2</math> is <math>O(dt)</math> (because the quadratic variation of a Wiener process satisfies <math>[B]_t = t</math>). Setting the <math>(dt)^2</math>, <math>dt\,dB_t</math> and higher-order terms to zero, substituting <math>dt</math> for <math>(dB_t)^2</math>, and then collecting the <math>dt</math> terms, we obtain
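The substitution of <math>dt</math> for <math>(dB_t)^2</math> can be illustrated numerically: the sum of squared Brownian increments over <math>[0,T]</math> concentrates around <math>T</math> as the partition is refined. A minimal sketch (the step count is an arbitrary choice):

```python
import numpy as np

# Sum the squared increments of a Brownian path on [0, T]; by the quadratic
# variation property the sum concentrates at T as the mesh shrinks, which is
# what justifies the substitution (dB_t)^2 -> dt.
def quadratic_variation(T=1.0, n_steps=100_000, seed=1):
    rng = np.random.default_rng(seed)
    dB = rng.normal(0.0, np.sqrt(T / n_steps), size=n_steps)
    return float(np.sum(dB**2))

print(quadratic_variation())  # ≈ T = 1.0
```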

<math display="block"> df = \lim_{dt\to0}\left(\frac{\partial f}{\partial t} + \mu_t\frac{\partial f}{\partial x} + \frac{\sigma_t^2}{2}\frac{\partial^2 f}{\partial x^2}\right)dt + \sigma_t\frac{\partial f}{\partial x}\,dB_t </math>

as required.

Alternatively,

<math display="block"> df = \lim_{dt\to0}\left(\frac{\partial f}{\partial t} + \frac{\sigma_t^2}{2}\frac{\partial^2 f}{\partial x^2}\right)dt + \frac{\partial f}{\partial x}\,dX_t </math>

Geometric intuition

File:Ito lemma 1D illustration.svg
When <math>X_{t+dt}</math> is a Gaussian random variable, <math>f(X_{t+dt})</math> is also approximately a Gaussian random variable, but its mean <math>E[f(X_{t+dt})]</math> differs from <math>f(E[X_{t+dt}])</math> by an amount proportional to <math>f''(E[X_{t+dt}])</math> and the variance of <math>X_{t+dt}</math>.

Suppose we know that <math>X_t, X_{t+dt}</math> are two jointly Gaussian random variables, and <math>f</math> is nonlinear but has a continuous second derivative. Then in general neither of <math>f(X_t), f(X_{t+dt})</math> is Gaussian, and their joint distribution is also not Gaussian. However, since <math>X_{t+dt} \mid X_t</math> is Gaussian, <math>f(X_{t+dt}) \mid f(X_t)</math> may still be approximately Gaussian: this fails for finite <math>dt</math>, but becomes exact in the limit of infinitesimal <math>dt</math>.

The key idea is that <math>X_{t+dt} = X_t + \mu_t \, dt + \sigma_t \, dB_t</math> has a deterministic part and a noisy part. When <math>f</math> is nonlinear, the noisy part makes a deterministic contribution. If <math>f</math> is convex, then the deterministic contribution is positive (by Jensen's inequality).

To find out how large the contribution is, we write <math>X_{t + dt} = X_t + \mu_t \, dt + \sigma_t \sqrt{dt} \, z</math>, where <math>z</math> is a standard Gaussian, then perform a Taylor expansion. <math display="block">\begin{aligned} f(X_{t+dt}) ={}& f(X_t) + f'(X_t) \mu_t \, dt + f'(X_t) \sigma_t \sqrt{dt} \, z \\[1ex]
  & + \frac{1}{2} f''(X_t) \left(\sigma_t^2 z^2 \, dt + 2 \mu_t \sigma_t z \, dt^{3/2} + \mu_t^2 dt^2\right) + o(dt) \\[2ex]
={}& \left[f(X_t) + f'(X_t) \mu_t \, dt + \frac{1}{2} f''(X_t) \sigma_t^2 \, dt + o(dt)\right] \\[1ex]
  & + \left[f'(X_t)\sigma_t \sqrt{dt} \, z + \frac{1}{2} f''(X_t) \sigma_t^2 \left(z^2 - 1\right) \, dt + o(dt)\right]
\end{aligned}</math>We have split it into two parts, a deterministic part and a random part with mean zero. The random part is non-Gaussian, but the non-Gaussian parts decay faster than the Gaussian part, and in the <math>dt \to 0</math> limit only the Gaussian part remains. The deterministic part has the expected <math>f(X_t) + f'(X_t) \mu_t \, dt</math>, but also a part contributed by the convexity: <math display="inline">\frac{1}{2} f''(X_t) \sigma_t^2 \, dt</math>.

To understand why there should be a contribution due to convexity, consider the simplest case of geometric Brownian motion (a model of the stock market): <math>S_{t+dt} = S_t ( 1 + dB_t)</math>. Naively, this suggests <math>d(\ln S_t) = dB_t</math>. Let <math>X_t = \ln S_t</math>, so that <math>S_t = e^{X_t}</math> and <math>X_t</math> is a Brownian motion. However, although the expectation of <math>X_t</math> remains constant, the expectation of <math>S_t</math> grows. Intuitively, this is because the downside is limited at zero, but the upside is unlimited. That is, while <math>X_t</math> is normally distributed, <math>S_t</math> is log-normally distributed.
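This effect is easy to observe numerically. The sketch below (sample size is an arbitrary choice) draws <math>B_t \sim N(0,t)</math> directly and compares the mean of <math>X_t = B_t</math>, which stays at 0, with the mean of <math>S_t = e^{B_t}</math>, which grows toward <math>E[e^{B_t}] = e^{t/2}</math>:

```python
import numpy as np

# Compare the mean of X_t = B_t (constant at 0) with the mean of
# S_t = exp(B_t), which grows to E[exp(B_t)] = exp(t/2): the convexity of
# exp turns zero-mean noise into upward drift.
def means(t=1.0, n_paths=200_000, seed=2):
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, np.sqrt(t), size=n_paths)
    return float(B.mean()), float(np.exp(B).mean())

mean_X, mean_S = means()
print(mean_X, mean_S)  # ≈ 0.0 and ≈ exp(0.5) ≈ 1.649
```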

Mathematical formulation of Itô's lemma

In the following subsections we discuss versions of Itô's lemma for different types of stochastic processes.

Itô drift-diffusion processes (due to: Kunita–Watanabe)

In its simplest form, Itô's lemma states the following: for an Itô drift-diffusion process

<math display="block"> dX_t= \mu_t \, dt + \sigma_t \, dB_t</math>

and any twice differentiable scalar function <math>f(t,x)</math> of two real variables <math>t</math> and <math>x</math>, one has

<math display="block">df(t,X_t) =\left(\frac{\partial f}{\partial t} + \mu_t \frac{\partial f}{\partial x} + \frac{\sigma_t^2}{2}\frac{\partial^2f}{\partial x^2}\right)dt+ \sigma_t \frac{\partial f}{\partial x}\,dB_t.</math>

This immediately implies that <math>f(t,X_t)</math> is itself an Itô drift-diffusion process.
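As a concrete check, take <math>f(t,x) = x^2</math> with constant <math>\mu</math>, <math>\sigma</math> and <math>X_0 = 0</math>, so <math>X_t = \mu t + \sigma B_t</math>. The formula gives <math>d(X_t^2) = (2\mu X_t + \sigma^2)\,dt + 2\sigma X_t\,dB_t</math>, and taking expectations yields <math>E[X_t^2] = \mu^2 t^2 + \sigma^2 t</math>. A Monte Carlo sketch with illustrative parameter values:

```python
import numpy as np

# With constant mu, sigma and X_0 = 0 we have X_t = mu*t + sigma*B_t, and
# taking expectations in d(X^2) = (2 mu X + sigma^2) dt + 2 sigma X dB
# gives E[X_t^2] = mu^2 t^2 + sigma^2 t. Parameters here are illustrative.
def msq(t=1.0, mu=1.0, sigma=1.0, n_paths=100_000, seed=3):
    rng = np.random.default_rng(seed)
    X = mu * t + sigma * rng.normal(0.0, np.sqrt(t), size=n_paths)
    return float((X**2).mean())

print(msq())  # ≈ mu^2 t^2 + sigma^2 t = 2.0
```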

In higher dimensions, if <math>\mathbf{X}_t = (X^1_t, X^2_t, \ldots, X^n_t)^T</math> is a vector of Itô processes such that

<math display="block">d\mathbf{X}_t = \boldsymbol{\mu}_t\, dt + \mathbf{G}_t\, d\mathbf{B}_t</math>

for a vector <math>\boldsymbol{\mu}_t</math> and matrix <math>\mathbf{G}_t</math>, Itô's lemma then states that

<math display="block">\begin{align} df(t,\mathbf{X}_t)

&= \frac{\partial f}{\partial t}\, dt + \left (\nabla_\mathbf{X} f \right )^T\, d\mathbf{X}_t + \frac{1}{2} \left(d\mathbf{X}_t \right )^T \left( H_\mathbf{X} f \right) \, d\mathbf{X}_t, \\[4pt]
&= \left\{ \frac{\partial f}{\partial t} + \left (\nabla_\mathbf{X} f \right)^T \boldsymbol{\mu}_t + \frac{1}{2} \operatorname{Tr} \left[ \mathbf{G}_t^T \left( H_\mathbf{X} f \right) \mathbf{G}_t \right] \right\} \, dt + \left (\nabla_\mathbf{X} f \right)^T \mathbf{G}_t\, d\mathbf{B}_t

\end{align}</math>

where <math>\nabla_\mathbf{X} f</math> is the gradient of <math>f</math> with respect to <math>\mathbf{X}</math>, <math>H_\mathbf{X} f</math> is the Hessian matrix of <math>f</math> with respect to <math>\mathbf{X}</math>, and <math>\operatorname{Tr}</math> is the trace operator.

Poisson jump processes

We may also define functions on discontinuous stochastic processes.

Let <math>h</math> be the jump intensity. The Poisson process model for jumps is that the probability of one jump in the interval <math>[t, t+\Delta t]</math> is <math>h\,\Delta t</math> plus higher order terms. <math>h</math> could be a constant, a deterministic function of time, or a stochastic process. The survival probability <math>p_s(t)</math> is the probability that no jump has occurred in the interval <math>[0, t]</math>. The change in the survival probability is

<math display="block">d p_s(t) = -p_s(t) h(t) \, dt.</math>

So

<math display="block">p_s(t) = \exp \left(-\int_0^t h(u) \, du \right).</math>
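For constant intensity <math>h</math> this reduces to <math>p_s(t) = e^{-ht}</math>, which can be checked against simulated jump times. The sketch below uses illustrative parameter values:

```python
import numpy as np

# For constant intensity h, p_s(t) = exp(-h t). The first jump time of a
# Poisson process with rate h is exponential with mean 1/h, so the fraction
# of paths with no jump by time t estimates p_s(t).
def survival(t=1.0, h=2.0, n=100_000, seed=4):
    rng = np.random.default_rng(seed)
    first_jump = rng.exponential(1.0 / h, size=n)
    return float(np.mean(first_jump > t))

print(survival(), np.exp(-2.0))  # both ≈ 0.135
```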

Let <math>S(t)</math> be a discontinuous stochastic process. Write <math>S(t^-)</math> for the value of <math>S</math> as we approach <math>t</math> from the left. Write <math>d_j S(t)</math> for the non-infinitesimal change in <math>S(t)</math> as a result of a jump. Then

<math display="block">d_j S(t) = \lim_{\Delta t \to 0} \left[S(t + \Delta t) - S(t^-)\right]</math>

Let <math>z</math> be the magnitude of the jump and let <math>\eta(S(t^-),z)</math> be the distribution of <math>z</math>. The expected magnitude of the jump is

<math display="block">\operatorname{E}[d_j S(t)]=h(S(t^-)) \, dt \int_z z \eta(S(t^-),z) \, dz.</math>

Define <math>d J_S(t)</math>, a compensated process and martingale, as

<math display="block">\begin{align} dJ_S(t) &= d_j S(t) - \operatorname{E}[d_j S(t)] \\[1ex] &= S(t)-S(t^-) - \left ( h(S(t^-))\int_z z \eta \left (S(t^-),z \right) \, dz \right ) \, dt. \end{align}</math>

Then

<math display="block">\begin{align} d_j S(t) &= E[d_j S(t)] + d J_S(t) \\[1ex] &= h(S(t^-)) \left(\int_z z \eta(S(t^-),z) \, dz \right) dt + d J_S(t). \end{align}</math>
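In the simplest special case of unit jump sizes (<math>z = 1</math>) and constant intensity <math>h</math>, the compensated process reduces to <math>N_t - ht</math> for a Poisson process <math>N</math>, and being a martingale started at 0 its mean stays 0. A minimal numerical sketch of this simplified case (illustrative parameters):

```python
import numpy as np

# Simplified to unit jump sizes (z = 1) and constant intensity h: then the
# compensated process is J(t) = N_t - h t for a Poisson process N, and as a
# martingale started at 0 its mean stays 0.
def compensated_mean(t=1.0, h=3.0, n=100_000, seed=5):
    rng = np.random.default_rng(seed)
    N = rng.poisson(h * t, size=n)
    return float(np.mean(N - h * t))

print(compensated_mean())  # ≈ 0.0
```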

Consider a function <math>g(S(t),t)</math> of the jump process <math>S(t)</math>. If <math>S(t)</math> jumps by <math>\Delta s</math>, then <math>g(t)</math> jumps by <math>\Delta g</math>. <math>\Delta g</math> is drawn from the distribution <math>\eta_g(\cdot)</math>, which may depend on <math>g(t^-)</math>, <math>dg</math> and <math>S(t^-)</math>. The jump part of <math>g</math> is

<math display="block">g(t)-g(t^-) =h(t) \, dt \int_{\Delta g} \, \Delta g \eta_g(\cdot) \, d\Delta g + d J_g(t).</math>

If <math>S</math> contains drift, diffusion and jump parts, then Itô's Lemma for <math>g(S(t),t)</math> is

<math display="block">\begin{align} dg(t) ={}& \left( \frac{\partial g}{\partial t}+\mu \frac{\partial g}{\partial S}+\frac{\sigma^2}{2} \frac{\partial^2 g}{\partial S^2} + h(t) \int_{\Delta g} \left (\Delta g \eta_g(\cdot) \, d{\Delta}g \right ) \, \right) dt \\ & + \frac{\partial g}{\partial S} \sigma \, dW(t) + dJ_g(t). \end{align}</math>

Itô's lemma for a process that is the sum of a drift-diffusion process and a jump process is simply the sum of the corresponding terms for the individual parts.

Discontinuous semimartingales

Itô's lemma can also be applied to general <math>d</math>-dimensional semimartingales, which need not be continuous. In general, a semimartingale is a càdlàg process, and an additional jump term needs to be added to Itô's formula. For any càdlàg process <math>Y_t</math>, the left limit at <math>t</math> is denoted by <math>Y_{t-}</math>, which is a left-continuous process. The jumps are written as <math>\Delta Y_t = Y_t - Y_{t-}</math>. Then, Itô's lemma states that if <math>X = (X^1, X^2, \ldots, X^d)</math> is a <math>d</math>-dimensional semimartingale and <math>f</math> is a twice continuously differentiable real-valued function on <math>\mathbb{R}^d</math>, then <math>f(X)</math> is a semimartingale, and

<math display="block">\begin{align} f(X_t) = f(X_0) &+ \sum_{i=1}^d\int_0^t f_{i}(X_{s-}) \, dX^i_s

+ \frac{1}{2}\sum_{i,j=1}^d \int_0^t f_{i,j}(X_{s-})\,d[X^i,X^j]_s \\

&+ \sum_{s\le t} \left(\Delta f(X_s)-\sum_{i=1}^df_{i}(X_{s-})\,\Delta X^i_s

- \frac{1}{2}\sum_{i,j=1}^d f_{i,j}(X_{s-})\,\Delta X^i_s \, \Delta X^j_s\right).

\end{align}</math>

This differs from the formula for continuous semimartingales by the additional term summing over the jumps of <math>X</math>, which ensures that the jump of the right-hand side at time <math>t</math> is <math>\Delta f(X_t)</math>.

Examples

Geometric Brownian motion

A process <math>S</math> is said to follow a geometric Brownian motion with constant volatility <math>\sigma</math> and constant drift <math>\mu</math> if it satisfies the stochastic differential equation <math>dS_t = \sigma S_t\,dB_t + \mu S_t\,dt</math>, for a Brownian motion <math>B_t</math>. Applying Itô's lemma with <math>f(S_t) = \log(S_t)</math> gives

<math display="block">\begin{align} df & = f'(S_t) \, dS_t + \frac{1}{2} f''(S_t) \, {\left(dS_t\right)}^2 \\[4pt] & = \frac{1}{S_t}\,dS_t + \frac{1}{2} \left(-S_t^{-2}\right) \left(S_t^2 \sigma^2 \, dt\right) \\[4pt] & = \frac{1}{S_t} \left( \sigma S_t \, dB_t + \mu S_t \, dt\right) - \frac{\sigma^2}{2} \, dt \\[4pt] &= \sigma \, dB_t + \left(\mu - \tfrac{\sigma^2}{2} \right) dt. \end{align}</math>

It follows that

<math display="block">\log (S_t) = \log (S_0) + \sigma B_t + \left (\mu-\tfrac{\sigma^2}{2} \right )t,</math>

and exponentiating gives the expression for <math>S_t</math>,

<math display="block">S_t = S_0 \exp\left(\sigma B_t + \left (\mu - \tfrac{\sigma^2}{2} \right )t\right).</math>

The correction term of <math>-\tfrac{\sigma^2}{2}</math> corresponds to the difference between the median and mean of the log-normal distribution, or equivalently for this distribution, the geometric mean and arithmetic mean, with the median (geometric mean) being lower. This is due to the AM–GM inequality, and corresponds to the logarithm being concave (or convex upwards), so the correction term can accordingly be interpreted as a convexity correction. This is an infinitesimal version of the fact that the annualized return is less than the average return, with the difference proportional to the variance. See geometric moments of the log-normal distribution for further discussion.

The same factor of <math>\tfrac{\sigma^2}{2}</math> appears in the <math>d_1</math> and <math>d_2</math> auxiliary variables of the Black–Scholes formula, and can be interpreted as a consequence of Itô's lemma.
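The closed-form solution can be checked numerically. Since <math>E[e^{\sigma B_t}] = e^{\sigma^2 t/2}</math>, the <math>-\tfrac{\sigma^2}{2}</math> correction exactly cancels the convexity gain, leaving <math>E[S_t] = S_0 e^{\mu t}</math>. A Monte Carlo sketch with illustrative parameter values:

```python
import numpy as np

# The -sigma^2/2 correction exactly cancels the convexity gain
# E[exp(sigma B_t)] = exp(sigma^2 t / 2), so E[S_t] = S_0 exp(mu t).
# Parameter values below are illustrative.
def gbm_mean(S0=1.0, mu=0.1, sigma=0.3, t=1.0, n_paths=200_000, seed=6):
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, np.sqrt(t), size=n_paths)
    S = S0 * np.exp(sigma * B + (mu - 0.5 * sigma**2) * t)
    return float(S.mean())

print(gbm_mean(), np.exp(0.1))  # both ≈ 1.105
```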

Doléans-Dade exponential

The Doléans-Dade exponential (or stochastic exponential) of a continuous semimartingale <math>X</math> can be defined as the solution to the SDE <math>dY_t = Y_t\,dX_t</math> with initial condition <math>Y_0 = 1</math>. It is sometimes denoted by <math>\mathcal{E}(X)</math>. Applying Itô's lemma with <math>f(Y) = \log(Y)</math> gives

<math display="block">\begin{align} d\log(Y) &= \frac{1}{Y}\,dY -\frac{1}{2Y^2}\,d[Y] \\[6pt] &= dX - \tfrac{1}{2}\,d[X]. \end{align} </math>

Exponentiating gives the solution

<math display="block">Y_t = \exp\left(X_t-X_0-\tfrac{1}{2} [X]_t\right).</math>
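For example, taking <math>X = B</math> a standard Brownian motion (so <math>[X]_t = t</math>) gives <math>Y_t = \exp\left(B_t - \tfrac{t}{2}\right)</math>, a martingale whose expectation stays at <math>Y_0 = 1</math>. A simulation sketch (sample size is an arbitrary choice):

```python
import numpy as np

# For X = B a standard Brownian motion, [X]_t = t and the stochastic
# exponential is Y_t = exp(B_t - t/2), a martingale with Y_0 = 1, so
# E[Y_t] = 1 for all t.
def stoch_exp_mean(t=1.0, n_paths=500_000, seed=7):
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, np.sqrt(t), size=n_paths)
    return float(np.exp(B - 0.5 * t).mean())

print(stoch_exp_mean())  # ≈ 1.0
```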

Black–Scholes formula

Itô's lemma can be used to derive the Black–Scholes equation for an option.<ref>Template:Cite book</ref> Suppose a stock price follows a geometric Brownian motion given by the stochastic differential equation <math>dS_t = S_t(\sigma\,dB_t + \mu\,dt)</math>. Then, if the value of an option at time <math>t</math> is <math>f(t,S_t)</math>, Itô's lemma gives

<math display="block">df(t,S_t) = \left(\frac{\partial f}{\partial t} + \frac{1}{2}\left(S_t\sigma\right)^2\frac{\partial^2 f}{\partial S^2}\right)\,dt +\frac{\partial f}{\partial S}\,dS_t.</math>

The term <math>\tfrac{\partial f}{\partial S}\,dS_t</math> represents the change in value in time <math>dt</math> of the trading strategy consisting of holding an amount <math>\tfrac{\partial f}{\partial S}</math> of the stock. If this trading strategy is followed, and any cash held is assumed to grow at the risk-free rate <math>r</math>, then the total value <math>V_t</math> of this portfolio satisfies the SDE

<math display="block"> dV_t = r\left(V_t-\frac{\partial f}{\partial S}S_t\right)\,dt + \frac{\partial f}{\partial S}\,dS_t.</math>

This strategy replicates the option if <math>V_t = f(t,S_t)</math>. Combining these equations gives the celebrated Black–Scholes equation

<math display="block">\frac{\partial f}{\partial t} + \frac{\sigma^2S^2}{2}\frac{\partial^2 f}{\partial S^2} + rS\frac{\partial f}{\partial S}-rf = 0.</math>
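One way to verify the equation is to check numerically that the closed-form Black–Scholes European call price satisfies it. The sketch below approximates the partial derivatives by finite differences; the strike, maturity, rate and volatility are illustrative choices, not part of the derivation above:

```python
from math import log, sqrt, exp, erf

# The closed-form European call price should make the PDE residual vanish.
# K, T, r and sigma below are illustrative parameter choices; derivatives
# are approximated by central finite differences.
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call(t, S, K=100.0, T=1.0, r=0.05, sigma=0.2):
    tau = T - t
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)

def pde_residual(t=0.5, S=110.0, r=0.05, sigma=0.2, h=1e-3):
    f_t = (call(t + h, S) - call(t - h, S)) / (2 * h)
    f_S = (call(t, S + h) - call(t, S - h)) / (2 * h)
    f_SS = (call(t, S + h) - 2 * call(t, S) + call(t, S - h)) / h**2
    # f_t + (sigma^2 S^2 / 2) f_SS + r S f_S - r f should be ~ 0.
    return f_t + 0.5 * sigma**2 * S**2 * f_SS + r * S * f_S - r * call(t, S)

print(pde_residual())  # ≈ 0
```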

Product rule for Itô processes

Let <math>\mathbf X_t</math> be a two-dimensional Itô process with SDE: <math display="block">d\mathbf X_t = d\begin{pmatrix} X_t^1 \\ X_t^2 \end{pmatrix} = \begin{pmatrix} \mu_t^1 \\ \mu_t^2 \end{pmatrix} dt + \begin{pmatrix} \sigma_t^1 \\ \sigma_t^2 \end{pmatrix} \, dB_t</math>

Then we can use the multi-dimensional form of Itô's lemma to find an expression for <math>d(X_t^1X_t^2)</math>.

We have <math> \mu_t = \begin{pmatrix} \mu_t^1 \\ \mu_t^2 \end{pmatrix}</math> and <math>\mathbf G = \begin{pmatrix} \sigma_t^1 \\ \sigma_t^2 \end{pmatrix}</math>.

We set <math>f(t,\mathbf X_t) = X_t^1 X_t^2</math> and observe that <math>\frac{\partial f}{\partial t} = 0,</math> <math>\left(\nabla_\mathbf X f\right)^T = \begin{pmatrix} X_t^2 & X_t^1 \end{pmatrix}</math> and <math>H_\mathbf X f = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}</math>

Substituting these values in the multi-dimensional version of the lemma gives us:

<math display="block">\begin{align} d(X_t^1X_t^2) &= df(t, \mathbf X_t) \\ &= 0 \cdot dt

+ \begin{pmatrix} X_t^2 & X_t^1 \end{pmatrix} \, d\mathbf X_t
+ \frac{1}{2} \begin{pmatrix} dX_t^1 & dX_t^2 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} dX_t^1 \\ dX_t^2 \end{pmatrix} \\[1ex]

&= X_t^2 \, dX_t^1 + X^1_t \, dX_t^2 + dX_t^1 \, dX_t^2 \end{align}</math>

This is a generalisation of Leibniz's product rule to Itô processes, which are non-differentiable.

Further, using the second form of the multidimensional version above gives us

<math display="block">\begin{align} d(X_t^1 X_t^2) &= \left\{

 0 + \begin{pmatrix} X_t^2 & X_t^1 \end{pmatrix}
 \begin{pmatrix} \mu_t^1 \\ \mu_t^2 \end{pmatrix}
 + \frac{1}{2} \operatorname{Tr}
 \left[
   \begin{pmatrix} \sigma_t^1 & \sigma_t^2 \end{pmatrix}
   \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
   \begin{pmatrix} \sigma_t^1 \\ \sigma_t^2 \end{pmatrix}
 \right]

\right\} dt \\[1ex] & \qquad + \left(X_t^2 \sigma_t^1 + X^1_t \sigma_t^2\right) dB_t\\[2ex] &= \left(X_t^2 \mu_t^1 + X^1_t \mu_t^2 + \sigma_t^1\sigma_t^2\right) dt

+ \left(X_t^2 \sigma_t^1 + X^1_t \sigma_t^2\right) dB_t

\end{align}</math>

so we see that the product <math>X_t^1X_t^2</math> is itself an Itô drift-diffusion process.
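A numerical sanity check of the product rule: with the illustrative choice <math>X_t^1 = B_t</math> and <math>X_t^2 = B_t + t</math> (so <math>\mu^1 = 0</math>, <math>\mu^2 = 1</math>, <math>\sigma^1 = \sigma^2 = 1</math>, both driven by the same <math>B</math>), the drift <math>X_t^2\mu_t^1 + X_t^1\mu_t^2 + \sigma_t^1\sigma_t^2</math> has expectation 1, so <math>E[X_t^1 X_t^2] = t</math>; the <math>\sigma_t^1\sigma_t^2</math> term is exactly the Itô correction the ordinary product rule would miss.

```python
import numpy as np

# Illustrative choice: X1_t = B_t and X2_t = B_t + t, i.e. mu1 = 0, mu2 = 1,
# sigma1 = sigma2 = 1, same driving B. The product-rule drift
# X2*mu1 + X1*mu2 + sigma1*sigma2 has expectation 1, so E[X1_t X2_t] = t.
def product_mean(t=1.0, n_paths=200_000, seed=8):
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, np.sqrt(t), size=n_paths)
    return float((B * (B + t)).mean())

print(product_mean())  # ≈ t = 1.0
```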

Itô's formula for functions with finite quadratic variation

Hans Föllmer provided a non-probabilistic proof of the Itô formula and showed that it holds for all functions with finite quadratic variation.<ref>Template:Cite journal</ref>

Let <math>f\in C^2</math> be a real-valued function and <math>x:[0,\infty)\to \mathbb{R}</math> a right-continuous function with left limits and finite quadratic variation <math>[x]</math>. Then <math display="block">\begin{align} f(x_t) = f(x_0) &+ \int_0^t f'(x_{s-}) \, \mathrm{d}x_s + \frac{1}{2}\int_{]0,t]} f''(x_{s-}) \, d[x]_s \\ & + \sum_{0\leq s\leq t}\left[f(x_s)-f(x_{s-})-f'(x_{s-})\Delta x_s - \frac{1}{2} f''(x_{s-})(\Delta x_s)^2\right]. \end{align}</math>

where the quadratic variation of <math>x</math> is defined as a limit along a sequence of partitions <math>D_n</math> of <math>[0,t]</math> with step decreasing to zero:

<math display="block"> [x](t) = \lim_{n\to\infty} \sum_{t^n_k \in D_n} \left(x_{t^n_{k+1}} - x_{t^n_k}\right)^2.</math>

Higher-order Itô formula

Rama Cont and Nicholas Perkowski extended the Itô formula to functions with finite <math>p</math>-th variation, where <math>p\geq 2</math> is an arbitrarily large integer.<ref>Template:Cite journal</ref>

Given a continuous function with finite p-th variation

<math display="block"> [x]^p(t) = \lim_{n\to\infty} \sum_{t^n_k \in D_n} {\left(x_{t^n_{k+1}} - x_{t^n_k}\right)}^p,</math>

Cont and Perkowski's change of variable formula states that for any <math> f\in C^p(\mathbb{R}^d,\mathbb{R})</math>:

<math display="block">f(x_t) = f(x_0)+\int_0^t \nabla_{p-1}f(x_{s-}) \, \mathrm{d}x_s + \frac{1}{p!}\int_{]0,t]} f^{(p)}(x_{s-}) \, d[x]^p_s</math>

where the first integral is defined as a limit of compensated left Riemann sums along a sequence of partitions <math>D_n</math>:

<math display="block">\int_0^t \nabla_{p-1}f(x_{s-}) \, \mathrm{d}x_s := \lim_{n\to\infty} \sum_{t^n_j\in D_n} \sum_{k=1}^{p-1} \frac{f^{(k)}(x_{t_j^n})}{k!} \left(x_{t^n_{j+1}} - x_{t^n_j}\right)^k.</math> An extension to the case of fractional regularity (non-integer <math>p</math>) was obtained by Cont and Jin.<ref>Template:Cite journal</ref>

Infinite-dimensional formulas

There exist some extensions to infinite-dimensional spaces (e.g. Pardoux,<ref>Template:Cite journal</ref> Gyöngy-Krylov,<ref>Template:Cite encyclopedia</ref> Brzezniak-van Neerven-Veraar-Weis<ref>Template:Cite journal</ref>).


Notes

Template:Reflist

References

Template:Refbegin

  • Kiyosi Itô (1944). "Stochastic Integral". Proc. Imperial Acad. Tokyo 20, 519–524. (This is the paper with the Itô formula.)
  • Kiyosi Itô (1951). "On stochastic differential equations". Memoirs, American Mathematical Society 4, 1–51.
  • Bernt Øksendal (2000). Stochastic Differential Equations. An Introduction with Applications, 5th edition, corrected 2nd printing. Springer. Template:ISBN. Sections 4.1 and 4.2.
  • Philip E. Protter (2005). Stochastic Integration and Differential Equations, 2nd edition. Springer. Template:ISBN. Section 2.7.

Template:Refend
