== Properties of a one-dimensional Wiener process ==
[[Image:Wiener-process-5traces.svg|thumb|upright=1.5|Five sampled processes, with expected standard deviation in gray.]]

=== Basic properties ===
The unconditional [[probability density function]] follows a [[normal distribution]] with mean 0 and variance ''t'', at a fixed time {{mvar|t}}: <math display="block">f_{W_t}(x) = \frac{1}{\sqrt{2 \pi t}} e^{-x^2/(2t)}.</math> The [[expected value|expectation]] is zero: <math display="block">\operatorname E[W_t] = 0.</math> The [[variance]], using the computational formula, is {{mvar|t}}: <math display="block">\operatorname{Var}(W_t) = t.</math> These results follow immediately from the definition that increments have a [[normal distribution]], centered at zero. Thus <math display="block">W_t = W_t-W_0 \sim N(0,t).</math>

=== Covariance and correlation ===
The [[covariance function|covariance]] and [[correlation function|correlation]] (where <math>s \leq t</math>) are <math display="block">\begin{align} \operatorname{cov}(W_s, W_t) &= s, \\ \operatorname{corr}(W_s,W_t) &= \frac{\operatorname{cov}(W_s,W_t)}{\sigma_{W_s} \sigma_{W_t}} = \frac{s}{\sqrt{st}} = \sqrt{\frac{s}{t}}. \end{align}</math> These results follow from the definition that non-overlapping increments are independent, of which only the property that they are uncorrelated is used. Suppose that <math>t_1\leq t_2</math>.
<math display="block">\operatorname{cov}(W_{t_1}, W_{t_2}) = \operatorname{E}\left[(W_{t_1}-\operatorname{E}[W_{t_1}]) \cdot (W_{t_2}-\operatorname{E}[W_{t_2}])\right] = \operatorname{E}\left[W_{t_1} \cdot W_{t_2} \right].</math> Substituting <math display="block"> W_{t_2} = ( W_{t_2} - W_{t_1} ) + W_{t_1} </math> we arrive at: <math display="block">\begin{align} \operatorname{E}[W_{t_1} \cdot W_{t_2}] & = \operatorname{E}\left[W_{t_1} \cdot ((W_{t_2} - W_{t_1})+ W_{t_1}) \right] \\ & = \operatorname{E}\left[W_{t_1} \cdot (W_{t_2} - W_{t_1} )\right] + \operatorname{E}\left[ W_{t_1}^2 \right]. \end{align}</math> Since <math> W_{t_1} = W_{t_1} - W_0 </math> and <math> W_{t_2} - W_{t_1} </math> are independent increments, <math display="block"> \operatorname{E}\left [W_{t_1} \cdot (W_{t_2} - W_{t_1} ) \right ] = \operatorname{E}[W_{t_1}] \cdot \operatorname{E}[W_{t_2} - W_{t_1}] = 0.</math> Thus <math display="block">\operatorname{cov}(W_{t_1}, W_{t_2}) = \operatorname{E} \left [W_{t_1}^2 \right ] = t_1.</math> A corollary useful for simulation is that we can write, for {{math|''t''<sub>1</sub> < ''t''<sub>2</sub>}}: <math display="block">W_{t_2} = W_{t_1}+\sqrt{t_2-t_1}\cdot Z</math> where {{mvar|Z}} is an independent standard normal variable.

=== Wiener representation ===
Wiener (1923) also gave a representation of a Brownian path in terms of a random [[Fourier series]]. If <math>\xi_n</math> are independent Gaussian variables with mean zero and variance one, then <math display="block">W_t = \xi_0 t+ \sqrt{2}\sum_{n=1}^\infty \xi_n\frac{\sin \pi n t}{\pi n}</math> and <math display="block"> W_t = \sqrt{2} \sum_{n=1}^\infty \xi_n \frac{\sin \left(\left(n - \frac{1}{2}\right) \pi t\right)}{ \left(n - \frac{1}{2}\right) \pi} </math> represent a Brownian motion on <math>[0,1]</math>. The scaled process <math display="block">\sqrt{c}\, W\left(\frac{t}{c}\right)</math> is a Brownian motion on <math>[0,c]</math> (cf. [[Karhunen–Loève theorem]]).
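The corollary above gives the standard way to simulate a Wiener path on a time grid. A minimal Python sketch (not part of the article; grid size, sample count, and seed are illustrative), which also checks the moment and covariance formulas of the preceding subsections empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many Wiener paths on [0, 1] using the increment recursion
# W_{t2} = W_{t1} + sqrt(t2 - t1) * Z, with Z ~ N(0, 1) independent.
n_paths, n_steps = 20_000, 100
dt = 1.0 / n_steps
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(increments, axis=1)   # W[:, k] is the path at time (k + 1) * dt

Ws = W[:, n_steps // 2 - 1]         # W at s = 0.5
Wt = W[:, -1]                       # W at t = 1.0

print(np.mean(Wt))                  # theory: E[W_t] = 0
print(np.var(Wt))                   # theory: Var(W_t) = t = 1
print(np.mean(Ws * Wt))             # theory: cov(W_s, W_t) = s = 0.5
print(np.corrcoef(Ws, Wt)[0, 1])    # theory: corr = sqrt(s / t) ≈ 0.707
```

With 20,000 paths the sample statistics typically match the theoretical values to within a few percent; the recursion is exact in distribution at the grid points, so the only approximation here is Monte Carlo error.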
=== Running maximum ===
The joint distribution of the running maximum <math display="block"> M_t = \max_{0 \leq s \leq t} W_s </math> and {{math|''W<sub>t</sub>''}} is <math display="block"> f_{M_t,W_t}(m,w) = \frac{2(2m - w)}{t\sqrt{2 \pi t}} e^{-\frac{(2m-w)^2}{2t}}, \qquad m \ge 0, w \leq m.</math> To get the unconditional distribution <math>f_{M_t}</math>, integrate over {{math|−∞ < ''w'' ≤ ''m''}}: <math display="block">\begin{align} f_{M_t}(m) & = \int_{-\infty}^m f_{M_t,W_t}(m,w)\,dw = \int_{-\infty}^m \frac{2(2m - w)}{t\sqrt{2 \pi t}} e^{-\frac{(2m-w)^2}{2t}} \,dw \\[5pt] & = \sqrt{\frac{2}{\pi t}}e^{-\frac{m^2}{2t}}, \qquad m \ge 0, \end{align}</math> the probability density function of a [[half-normal distribution]]. The expectation<ref>{{cite book|last=Shreve|first=Steven E| title=Stochastic Calculus for Finance II: Continuous Time Models|year=2008|publisher=Springer| isbn=978-0-387-40101-0| pages=114}}</ref> is <math display="block"> \operatorname{E}[M_t] = \int_0^\infty m f_{M_t}(m)\,dm = \int_0^\infty m \sqrt{\frac{2}{\pi t}}e^{-\frac{m^2}{2t}}\,dm = \sqrt{\frac{2t}{\pi}}. </math> If at time <math>t</math> the Wiener process has a known value <math>W_{t}</math>, it is possible to calculate the conditional probability distribution of the maximum in the interval <math>[0, t]</math> (cf. [[Probability distribution of extreme points of a Wiener stochastic process]]). The [[cumulative probability distribution function]] of the maximum value, [[Conditional probability|conditioned]] by the known value <math>W_t</math>, is: <math display="block"> F_{M_{W_t}}(m) = \Pr \left( M_{W_t} = \max_{0 \leq s \leq t} W(s) \leq m \mid W(t) = W_t \right) = 1 - e^{-2\frac{m(m - W_t)}{t}}, \qquad m > \max(0,W_t).</math>

=== Self-similarity ===
[[File:Wiener process animated.gif|thumb|500px|A demonstration of Brownian scaling, showing <math>V_t = (1/\sqrt c) W_{ct}</math> for decreasing ''c''.
Note that the average features of the function do not change while zooming in, and note that it zooms in quadratically faster horizontally than vertically.]]

==== Brownian scaling ====
For every {{math|''c'' > 0}} the process <math> V_t = (1 / \sqrt c) W_{ct} </math> is another Wiener process.

==== Time reversal ====
The process <math> V_t = W_{1-t} - W_{1} </math> for {{math|0 ≤ ''t'' ≤ 1}} is distributed like {{math|''W<sub>t</sub>''}} for {{math|0 ≤ ''t'' ≤ 1}}.

==== Time inversion ====
The process <math> V_t = t W_{1/t} </math> for {{math|''t'' > 0}}, with <math>V_0 = 0</math>, is another Wiener process.

==== Projective invariance ====
Consider a Wiener process <math>W(t)</math>, <math>t\in\mathbb R</math>, conditioned so that <math>\lim_{t\to\pm\infty}tW(t)=0</math> (which holds almost surely) and as usual <math>W(0)=0</math>. Then the following are all Wiener processes {{harv|Takenaka|1988}}: <math display="block"> \begin{array}{rcl} W_{1,s}(t) &=& W(t+s)-W(s), \quad s\in\mathbb R\\ W_{2,\sigma}(t) &=& \sigma^{-1/2}W(\sigma t),\quad \sigma > 0\\ W_3(t) &=& tW(-1/t). \end{array} </math> Thus the Wiener process is invariant under the projective group [[PSL(2,R)]], being invariant under the generators of the group. The action of an element <math>g = \begin{bmatrix}a&b\\c&d\end{bmatrix}</math> is <math>W_g(t) = (ct+d)W\left(\frac{at+b}{ct+d}\right) - ctW\left(\frac{a}{c}\right) - dW\left(\frac{b}{d}\right),</math> which defines a [[group action]], in the sense that <math>(W_g)_h = W_{gh}.</math>

==== Conformal invariance in two dimensions ====
Let <math>W(t)</math> be a two-dimensional Wiener process, regarded as a complex-valued process with <math>W(0)=0\in\mathbb C</math>.
Let <math>D\subset\mathbb C</math> be an open set containing 0, and <math>\tau_D</math> be the associated Markov time: <math display="block">\tau_D = \inf \{ t\ge 0 \mid W(t)\not\in D\}.</math> If <math>f:D\to \mathbb C</math> is a [[holomorphic function]] which is not constant, such that <math>f(0)=0</math>, then <math>f(W_t)</math> is a time-changed Wiener process in <math>f(D)</math> {{harv|Lawler|2005}}. More precisely, the process <math>Y(t)</math> is Wiener in <math>f(D)</math> with the Markov time <math>S(t)</math>, where <math display="block">Y(t) = f(W(\sigma(t))),</math> <math display="block">S(t) = \int_0^t|f'(W(s))|^2\,ds,</math> <math display="block">\sigma(t) = S^{-1}(t):\quad t = \int_0^{\sigma(t)}|f'(W(s))|^2\,ds.</math>

=== A class of Brownian martingales ===
If a [[polynomial]] {{math|''p''(''x'', ''t'')}} satisfies the [[partial differential equation]] <math display="block">\left( \frac{\partial}{\partial t} + \frac{1}{2} \frac{\partial^2}{\partial x^2} \right) p(x,t) = 0 </math> then the stochastic process <math display="block"> M_t = p ( W_t, t )</math> is a [[martingale (probability theory)|martingale]].

'''Example:''' <math> W_t^2 - t </math> is a martingale, which shows that the [[quadratic variation]] of ''W'' on {{closed-closed|0, ''t''}} is equal to {{mvar|t}}. It follows that the expected [[first exit time|time of first exit]] of ''W'' from (−''c'', ''c'') is equal to {{math|''c''<sup>2</sup>}}.

More generally, for every polynomial {{math|''p''(''x'', ''t'')}} the following stochastic process is a martingale: <math display="block"> M_t = p ( W_t, t ) - \int_0^t a(W_s,s) \, \mathrm{d}s, </math> where ''a'' is the polynomial <math display="block"> a(x,t) = \left( \frac{\partial}{\partial t} + \frac 1 2 \frac{\partial^2}{\partial x^2} \right) p(x,t).
</math>

'''Example:''' <math> p(x,t) = \left(x^2 - t\right)^2, </math> <math> a(x,t) = 4x^2; </math> the process <math display="block"> \left(W_t^2 - t\right)^2 - 4 \int_0^t W_s^2 \, \mathrm{d}s </math> is a martingale, which shows that the quadratic variation of the martingale <math> W_t^2 - t </math> on [0, ''t''] is equal to <math display="block"> 4 \int_0^t W_s^2 \, \mathrm{d}s.</math> About functions {{math|''p''(''x'', ''t'')}} more general than polynomials, see [[Local martingale#Martingales via local martingales|local martingales]].

=== Some properties of sample paths ===
The set of all functions ''w'' with these properties is of full Wiener measure. That is, a path (sample function) of the Wiener process has all these properties almost surely:

==== Qualitative properties ====
* For every ε > 0, the function ''w'' takes both (strictly) positive and (strictly) negative values on (0, ε).
* The function ''w'' is continuous everywhere but differentiable nowhere (like the [[Weierstrass function]]).
* For any <math>\epsilon > 0</math>, <math>w(t)</math> is almost surely not <math>(\tfrac 1 2 + \epsilon)</math>-[[Hölder continuous]], and almost surely <math>(\tfrac 1 2 - \epsilon)</math>-Hölder continuous.<ref>{{Cite book |last1=Mörters |first1=Peter |title=Brownian motion |last2=Peres |first2=Yuval |last3=Schramm |first3=Oded |last4=Werner |first4=Wendelin |date=2010 |publisher=Cambridge University Press |isbn=978-0-521-76018-8 |series=Cambridge series in statistical and probabilistic mathematics |location=Cambridge |pages=18}}</ref>
* Points of [[Maxima and minima|local maximum]] of the function ''w'' are a dense countable set; the maximum values are pairwise different; each local maximum is sharp in the following sense: if ''w'' has a local maximum at {{mvar|t}} then <math display="block">\lim_{s \to t} \frac{|w(s)-w(t)|}{|s-t|} = \infty.</math> The same holds for local minima.
* The function ''w'' has no points of local increase, that is, no ''t'' > 0 satisfies the following for some ε in (0, ''t''): first, ''w''(''s'') ≤ ''w''(''t'') for all ''s'' in (''t'' − ε, ''t''), and second, ''w''(''s'') ≥ ''w''(''t'') for all ''s'' in (''t'', ''t'' + ε). (Local increase is a weaker condition than that ''w'' is increasing on (''t'' − ''ε'', ''t'' + ''ε'').) The same holds for local decrease.
* The function ''w'' is of [[bounded variation|unbounded variation]] on every interval.
* The [[quadratic variation]] of ''w'' over [0, ''t''] is ''t''.
* [[root of a function|Zeros]] of the function ''w'' are a [[nowhere dense set|nowhere dense]] [[perfect set]] of Lebesgue measure 0 and [[Hausdorff dimension]] 1/2 (therefore, uncountable).

==== Quantitative properties ====

===== [[Law of the iterated logarithm]] =====
<math display="block"> \limsup_{t\to+\infty} \frac{ |w(t)| }{ \sqrt{ 2t \log\log t } } = 1, \quad \text{almost surely}. </math>

===== [[Modulus of continuity]] =====
Local modulus of continuity: <math display="block"> \limsup_{\varepsilon \to 0+} \frac{ |w(\varepsilon)| }{ \sqrt{ 2\varepsilon \log\log(1/\varepsilon) } } = 1, \qquad \text{almost surely}. </math> [[Lévy's modulus of continuity theorem|Global modulus of continuity]] (Lévy): <math display="block"> \limsup_{\varepsilon\to0+} \sup_{0\le s<t\le 1, t-s\le\varepsilon}\frac{|w(s)-w(t)|}{\sqrt{ 2\varepsilon \log(1/\varepsilon)}} = 1, \qquad \text{almost surely}. </math>

===== [[Dimension doubling theorem]] =====
The dimension doubling theorems say that the [[Hausdorff dimension]] of a set under a Brownian motion doubles almost surely.

==== Local time ====
The image of the [[Lebesgue measure]] on [0, ''t''] under the map ''w'' (the [[pushforward measure]]) has a density {{math|''L''<sub>''t''</sub>}}.
Thus, <math display="block"> \int_0^t f(w(s)) \, \mathrm{d}s = \int_{-\infty}^{+\infty} f(x) L_t(x) \, \mathrm{d}x </math> for a wide class of functions ''f'' (namely: all continuous functions; all locally integrable functions; all non-negative measurable functions). The density ''L<sub>t</sub>'' is (more exactly, can and will be chosen to be) continuous. The number ''L<sub>t</sub>''(''x'') is called the [[local time (mathematics)|local time]] at ''x'' of ''w'' on [0, ''t'']. It is strictly positive for all ''x'' in the interval (''a'', ''b''), where ''a'' and ''b'' are the least and the greatest values of ''w'' on [0, ''t''], respectively. (For ''x'' outside this interval the local time evidently vanishes.) Treated as a function of two variables ''x'' and ''t'', the local time is still continuous. Treated as a function of ''t'' (while ''x'' is fixed), the local time is a [[singular function]] corresponding to a [[atom (measure theory)|nonatomic]] measure on the set of zeros of ''w''.

These continuity properties are fairly non-trivial. Consider that the local time can also be defined (as the density of the pushforward measure) for a smooth function. Then, however, the density is discontinuous, unless the given function is monotone. In other words, there is a conflict between good behavior of a function and good behavior of its local time. In this sense, the continuity of the local time of the Wiener process is another manifestation of non-smoothness of the trajectory.

=== Information rate ===
The [[information rate]] of the Wiener process with respect to the squared error distance, i.e. its quadratic [[rate-distortion function]], is given by<ref>T. Berger, "Information rates of Wiener processes," in IEEE Transactions on Information Theory, vol. 16, no. 2, pp. 134–139, March 1970.
doi: 10.1109/TIT.1970.1054423.</ref> <math display="block">R(D) = \frac{2}{\pi^2 D \ln 2} \approx 0.29D^{-1}.</math> Therefore, it is impossible to encode <math>\{w_t \}_{t \in [0,T]}</math> using a [[binary code]] of less than <math>T R(D)</math> [[bit]]s and recover it with expected mean squared error less than <math>D</math>. On the other hand, for any <math> \varepsilon>0</math>, there exists <math>T</math> large enough and a [[binary code]] of no more than <math>2^{TR(D)}</math> distinct elements such that the expected [[mean squared error]] in recovering <math>\{w_t \}_{t \in [0,T]}</math> from this code is at most <math>D + \varepsilon</math>.

In many cases, it is impossible to [[binary code|encode]] the Wiener process without [[Sampling (signal processing)|sampling]] it first. When the Wiener process is sampled at intervals <math>T_s</math> before applying a binary code to represent these samples, the optimal trade-off between [[code rate]] <math>R(T_s,D)</math> and expected [[mean square error]] <math>D</math> (in estimating the continuous-time Wiener process) follows the parametric representation<ref>Kipnis, A., Goldsmith, A.J. and Eldar, Y.C., 2019. The distortion-rate function of sampled Wiener processes. IEEE Transactions on Information Theory, 65(1), pp. 482–499.</ref> <math display="block"> R(T_s,D_\theta) = \frac{T_s}{2} \int_0^1 \log_2^+\left[\frac{S(\varphi)- \frac{1}{6}}{\theta}\right] d\varphi, </math> <math display="block"> D_\theta = \frac{T_s}{6} + T_s\int_0^1 \min\left\{S(\varphi)-\frac{1}{6},\theta \right\} d\varphi, </math> where <math>S(\varphi) = (2 \sin(\pi \varphi /2))^{-2}</math> and <math>\log^+[x] = \max\{0,\log(x)\}</math>. In particular, <math>T_s/6</math> is the mean squared error associated only with the sampling operation (without encoding).
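The expressions above can be evaluated numerically. A minimal Python sketch (not part of the article; the function name, parameter values <math>T_s = 1</math>, <math>\theta = 1</math>, and grid resolution are illustrative assumptions) checks the constant <math>2/(\pi^2 \ln 2) \approx 0.29</math> and traces one point of the parametric <math>(R, D)</math> curve by midpoint-rule integration:

```python
import numpy as np

# Constant in the quadratic rate-distortion function R(D) = 2 / (pi^2 * D * ln 2).
const = 2.0 / (np.pi**2 * np.log(2))
print(round(const, 3))   # 0.292

def sampled_rate_distortion(Ts, theta, n=100_000):
    """Evaluate the parametric (R, D) pair for sampling interval Ts and
    water-filling parameter theta, by midpoint-rule integration over (0, 1)."""
    phi = (np.arange(n) + 0.5) / n
    S = (2.0 * np.sin(np.pi * phi / 2.0)) ** -2.0
    # log2^+ [x] = max{0, log2(x)}
    log_plus = np.maximum(0.0, np.log2((S - 1.0 / 6.0) / theta))
    R = (Ts / 2.0) * np.mean(log_plus)
    D = Ts / 6.0 + Ts * np.mean(np.minimum(S - 1.0 / 6.0, theta))
    return R, D

R, D = sampled_rate_distortion(Ts=1.0, theta=1.0)
```

Sweeping <math>\theta</math> traces the whole curve: a larger <math>\theta</math> lowers the rate and raises the distortion, while <math>D</math> never drops below the pure sampling error <math>T_s/6</math>.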