{{Short description|Model in probability theory}} {{For|the martingale betting strategy|martingale (betting system)}} In [[probability theory]], a '''martingale''' is a [[stochastic process]] in which the expected value of the next observation, given all prior observations, is equal to the most recent value. In other words, the [[conditional expectation]] of the next value, given the past, is equal to the present value. Martingales are used to model fair games, where future expected winnings are equal to the current amount regardless of past outcomes.[[Image:HittingTimes1.png|thumb|340px|[[Stopped process#Brownian motion|Stopped Brownian motion]] is an example of a martingale. It can model an even coin-toss betting game with the possibility of bankruptcy.]] ==History== Originally, ''[[martingale (betting system)|martingale]]'' referred to a class of [[betting strategy|betting strategies]] that was popular in 18th-century [[France]].<ref>{{cite book| first=N. J. |last=Balsara|title=Money Management Strategies for Futures Traders|publisher= Wiley Finance|year= 1992| isbn =978-0-471-52215-7 |page=[https://archive.org/details/moneymanagements00bals/page/122 122]|url=https://archive.org/details/moneymanagements00bals| url-access=registration | quote=martingale. }}</ref><ref>{{cite journal|url=http://www.jehps.net/juin2009/Mansuy.pdf|title=The origins of the Word "Martingale"|last1=Mansuy|first1=Roger|date=June 2009|volume=5|number=1|journal=Electronic Journal for History of Probability and Statistics|access-date=2011-10-22|archive-url=https://web.archive.org/web/20120131103618/http://www.jehps.net/juin2009/Mansuy.pdf|archive-date=2012-01-31|url-status=live}}</ref> The simplest of these strategies was designed for a game in which the [[gambler]] wins their stake if a coin comes up heads and loses it if the coin comes up tails. 
The strategy had the gambler double their bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and available time jointly approach infinity, their probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a [[almost surely|sure thing]]. However, the [[exponential growth]] of the bets eventually bankrupts its users due to finite bankrolls. [[Stopped process#Brownian motion|Stopped Brownian motion]], which is a martingale process, can be used to model the trajectory of such games. The concept of martingale in probability theory was introduced by [[Paul Lévy (mathematician)|Paul Lévy]] in 1934, though he did not name it. The term "martingale" was introduced later by {{harvtxt|Ville|1939}}, who also extended the definition to continuous martingales. Much of the original development of the theory was done by [[Joseph Leo Doob]] among others. Part of the motivation for that work was to show the impossibility of successful betting strategies in games of chance. ==Definitions== A basic definition of a [[Discrete-time stochastic process|discrete-time]] martingale is a discrete-time [[stochastic process]] (i.e., a [[sequence]] of [[random variable]]s) ''X''<sub>1</sub>, ''X''<sub>2</sub>, ''X''<sub>3</sub>, ... that satisfies for any time ''n'', :<math>\mathbf{E} ( \vert X_n \vert )< \infty </math> :<math>\mathbf{E} (X_{n+1}\mid X_1,\ldots,X_n)=X_n.</math> That is, the [[conditional expected value]] of the next observation, given all the past observations, is equal to the most recent observation. ===Martingale sequences with respect to another sequence=== More generally, a sequence ''Y''<sub>1</sub>, ''Y''<sub>2</sub>, ''Y''<sub>3</sub> ... is said to be a '''martingale with respect to''' another sequence ''X''<sub>1</sub>, ''X''<sub>2</sub>, ''X''<sub>3</sub> ... 
if for all ''n'' :<math>\mathbf{E} ( \vert Y_n \vert )< \infty </math> :<math>\mathbf{E} (Y_{n+1}\mid X_1,\ldots,X_n)=Y_n.</math> Similarly, a '''[[continuous time|continuous-time]] martingale with respect to''' the [[stochastic process]] ''X<sub>t</sub>'' is a [[stochastic process]] ''Y<sub>t</sub>'' such that for all ''t'' :<math>\mathbf{E} ( \vert Y_t \vert )<\infty </math> :<math>\mathbf{E} ( Y_{t} \mid \{ X_{\tau}, \tau \leq s \} ) = Y_s\quad \forall s \le t.</math> This expresses the property that the conditional expectation of an observation at time ''t'', given all the observations up to time <math> s </math>, is equal to the observation at time ''s'' (of course, provided that ''s'' ≤ ''t''). The second property implies that <math>Y_n</math> is measurable with respect to <math>X_1 \dots X_n</math>. ===General definition=== In full generality, a [[stochastic process]] <math>Y:T\times\Omega\to S</math> taking values in a [[Banach space]] <math>S</math> with norm <math>\lVert \cdot \rVert_{S}</math> is a '''martingale with respect to a filtration''' <math>\Sigma_*</math> '''and [[probability measure]] <math>\mathbb P</math>''' if * Σ<sub>∗</sub> is a [[Filtration (probability theory)|filtration]] of the underlying [[probability space]] (Ω, Σ, <math>\mathbb P</math>); * ''Y'' is [[adapted process|adapted]] to the filtration Σ<sub>∗</sub>, i.e., for each ''t'' in the [[index set]] ''T'', the random variable ''Y<sub>t</sub>'' is a Σ<sub>''t''</sub>-[[measurable function]]; * for each ''t'', ''Y<sub>t</sub>'' lies in the [[Lp space|''L<sup>p</sup>'' space]] ''L''<sup>1</sup>(Ω, Σ<sub>''t''</sub>, <math>\mathbb P</math>; ''S''), i.e. ::<math>\mathbf{E}_{\mathbb{P}} (\lVert Y_{t} \rVert_{S}) < + \infty;</math> * for all ''s'' and ''t'' with ''s'' < ''t'' and all ''F'' ∈ Σ<sub>''s''</sub>, ::<math>\mathbf{E}_{\mathbb{P}} \left([Y_t-Y_s]\chi_F\right) =0,</math> :where ''χ<sub>F</sub>'' denotes the [[indicator function]] of the event ''F''. 
In Grimmett and Stirzaker's ''Probability and Random Processes'', this last condition is denoted as ::<math>Y_s = \mathbf{E}_{\mathbb{P}} ( Y_t \mid \Sigma_s ),</math> :which is a general form of [[conditional expectation]].<ref>{{cite book|first1=G. |last1=Grimmett |first2= D.|last2= Stirzaker|title=Probability and Random Processes|edition= 3rd|publisher= Oxford University Press|year= 2001| isbn =978-0-19-857223-7}}</ref> It is important to note that the property of being a martingale involves both the filtration ''and'' the probability measure (with respect to which the expectations are taken). It is possible that ''Y'' could be a martingale with respect to one measure but not another one; the [[Girsanov theorem]] offers a way to find a measure with respect to which an [[Itō process]] is a martingale. In the Banach space setting the conditional expectation is also denoted in operator notation as <math>\mathbf{E}^{\Sigma_s} Y_t</math>.<ref>{{cite book |last=Bogachev |first=Vladimir |title=Gaussian Measures |publisher=American Mathematical Society |pages=372–373 |year=1998 |isbn=978-1470418694}}</ref> ==Examples of martingales== * An unbiased [[random walk]], in any number of dimensions, is an example of a martingale. For example, consider a 1-dimensional random walk where at each time step a move to the right or left is equally likely. * A gambler's fortune (capital) is a martingale if all the betting games which the gambler plays are fair. The gambler is playing a game of [[coin flipping]]. Suppose ''X<sub>n</sub>'' is the gambler's fortune after ''n'' tosses of a [[fair coin]], such that the gambler wins $1 if the coin toss outcome is heads and loses $1 if the coin toss outcome is tails. The gambler's conditional expected fortune after the next game, given the history, is equal to his present fortune. This sequence is thus a martingale. 
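The coin-flipping fortune example can be checked numerically. The following Python sketch (illustrative only; the function name and parameter values are invented for this example) simulates many paths of the fair game and verifies empirically that, conditioned on the current fortune ''X<sub>n</sub>'', the average of the next fortune ''X''<sub>''n''+1</sub> is close to ''X<sub>n</sub>'':

```python
import random

def simulate_fortunes(n_paths=100_000, n_steps=20, seed=0):
    """Simulate the fair $1 coin-flipping game and return, for each path,
    the fortune after n_steps tosses and after one further toss."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_paths):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
        x_next = x + (1 if rng.random() < 0.5 else -1)
        pairs.append((x, x_next))
    return pairs

# Empirical martingale check: group paths by the current fortune X_n and
# compare the conditional average of X_{n+1} with X_n itself.
by_current = {}
for x, x_next in simulate_fortunes():
    by_current.setdefault(x, []).append(x_next)
for x, nexts in sorted(by_current.items()):
    if len(nexts) > 1000:  # only well-populated conditioning events
        print(x, sum(nexts) / len(nexts))
```

Because the next toss is independent of the history and has mean zero, each printed conditional average sits near the conditioning value, up to sampling noise of order <math>1/\sqrt{m}</math> for a bin of <math>m</math> paths.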
* Let ''Y<sub>n</sub>'' = ''X<sub>n</sub>''<sup>2</sup> − ''n'' where ''X<sub>n</sub>'' is the gambler's fortune from the prior example. Then the sequence {''Y<sub>n</sub>'' : ''n'' = 1, 2, 3, ... } is a martingale. This can be used to show that the gambler's total gain or loss varies roughly between plus or minus the [[square root]] of the number of games of coin flipping played. * [[Abraham de Moivre|de Moivre]]'s martingale: Suppose the [[Fair coin|coin toss outcomes are unfair]], i.e., biased, with probability ''p'' of coming up heads and probability ''q'' = 1 − ''p'' of tails. Let ::<math>X_{n+1}=X_n\pm 1</math> :with "+" in case of "heads" and "−" in case of "tails". Let ::<math>Y_n=(q/p)^{X_n}</math> :Then {''Y<sub>n</sub>'' : ''n'' = 1, 2, 3, ... } is a martingale with respect to {''X<sub>n</sub>'' : ''n'' = 1, 2, 3, ... }. To show this ::<math> \begin{align} E[Y_{n+1} \mid X_1,\dots,X_n] & = p (q/p)^{X_n+1} + q (q/p)^{X_n-1} \\[6pt] & = p (q/p) (q/p)^{X_n} + q (p/q) (q/p)^{X_n} \\[6pt] & = q (q/p)^{X_n} + p (q/p)^{X_n} = (q/p)^{X_n}=Y_n. \end{align} </math> * [[Pólya's urn]] contains a number of different-coloured marbles; at each [[iterative method|iteration]] a marble is randomly selected from the urn and replaced with several more of that same colour. For any given colour, the fraction of marbles in the urn with that colour is a martingale. For example, if currently 95% of the marbles are red then, though the next iteration is more likely to add red marbles than another color, this bias is exactly balanced out by the fact that adding more red marbles alters the fraction much less significantly than adding the same number of non-red marbles would. * [[Likelihood-ratio test]]ing in [[statistics]]: A random variable ''X'' is thought to be distributed according either to probability density ''f'' or to a different probability density ''g''. A [[random sample]] ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub> is taken. 
Let ''Y''<sub>''n''</sub> be the "likelihood ratio" ::<math>Y_n=\prod_{i=1}^n\frac{g(X_i)}{f(X_i)}</math> : If ''X'' is actually distributed according to the density ''f'' rather than according to ''g'', then {''Y<sub>n</sub>'' : ''n'' = 1, 2, 3, ...} is a martingale with respect to {''X<sub>n</sub>'' : ''n'' = 1, 2, 3, ...}. [[Image:Martingale1.svg|thumb|250px|Software-created martingale series]] * In an [[ecological community]], i.e. a group of species that are in a particular trophic level, competing for similar resources in a local area, the number of individuals of any particular species of fixed size is a function of (discrete) time, and may be viewed as a sequence of random variables. This sequence is a martingale under the [[unified neutral theory of biodiversity and biogeography]]. * If { ''N<sub>t</sub>'' : ''t'' ≥ 0 } is a [[Poisson process]] with intensity ''λ'', then the compensated Poisson process { ''N<sub>t</sub>'' − ''λt'' : ''t'' ≥ 0 } is a continuous-time martingale with [[Classification of discontinuities|right-continuous/left-limit]] sample paths. * [[Wald's martingale]] * A <math>d</math>-dimensional process <math>M=(M^{(1)},\dots,M^{(d)})</math> in some space <math>S^d</math> is a martingale in <math>S^d</math> if each component <math>T_i(M)=M^{(i)}</math> is a one-dimensional martingale in <math>S</math>. ==Submartingales, supermartingales, and relationship to harmonic functions{{anchor|Submartingales and supermartingales}}== There are two generalizations of a martingale that also include cases when the current observation ''X<sub>n</sub>'' is not necessarily equal to the future conditional expectation ''E''[''X''<sub>''n''+1</sub> | ''X''<sub>1</sub>,...,''X<sub>n</sub>''] but instead an upper or lower bound on the conditional expectation. These generalizations reflect the relationship between martingale theory and [[potential theory]], that is, the study of [[harmonic function]]s.
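The likelihood-ratio example above can also be checked by simulation. The sketch below (an illustration, not part of the article; the choice of densities and all names are this example's own) takes ''f'' as the standard normal density and ''g'' as the normal density with mean 1, so that the ratio is <math>g(x)/f(x) = e^{x - 1/2}</math>, and verifies that <math>\mathbf{E}[Y_n] = Y_0 = 1</math> for each ''n'' when the samples really come from ''f'':

```python
import math
import random

def likelihood_ratio_means(n_paths=100_000, n_steps=4, seed=1):
    """Empirical E[Y_n] for Y_n = prod_i g(X_i)/f(X_i) with X_i ~ f,
    where f = N(0,1) and g = N(1,1); the ratio simplifies to exp(x - 1/2)."""
    rng = random.Random(seed)
    totals = [0.0] * n_steps
    for _ in range(n_paths):
        y = 1.0
        for n in range(n_steps):
            x = rng.gauss(0.0, 1.0)      # sample from the true density f
            y *= math.exp(x - 0.5)       # multiply by g(x)/f(x)
            totals[n] += y
    return [t / n_paths for t in totals]

print(likelihood_ratio_means())  # each entry should be close to 1
```

The martingale property is what keeps these averages pinned at 1 even though individual paths of ''Y<sub>n</sub>'' drift toward 0 (under ''f'', log ''Y<sub>n</sub>'' has negative drift).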
Just as a continuous-time martingale satisfies E[''X''<sub>''t''</sub> | {''X''<sub>''τ''</sub> : ''τ'' ≤ ''s''}] − ''X''<sub>''s''</sub> = 0 ∀''s'' ≤ ''t'', a harmonic function ''f'' satisfies the [[partial differential equation]] Δ''f'' = 0 where Δ is the [[Laplace operator|Laplacian operator]]. Given a [[Brownian motion]] process ''W''<sub>''t''</sub> and a harmonic function ''f'', the resulting process ''f''(''W''<sub>''t''</sub>) is also a martingale. * A discrete-time '''submartingale''' is a sequence <math>X_1,X_2,X_3,\ldots</math> of [[Integrable function|integrable]] random variables satisfying ::<math>\operatorname E[X_{n+1}\mid X_1,\ldots,X_n] \ge X_n.</math> : Likewise, a continuous-time submartingale satisfies ::<math>\operatorname E[X_t\mid\{X_\tau : \tau \le s\}] \ge X_s \quad \forall s \le t.</math> :In potential theory, a [[subharmonic function]] ''f'' satisfies Δ''f'' ≥ 0. Any subharmonic function that is bounded above by a harmonic function for all points on the boundary of a ball is bounded above by the harmonic function for all points inside the ball. Similarly, if a submartingale and a martingale have equivalent expectations for a given time, the history of the submartingale tends to be bounded above by the history of the martingale. Roughly speaking, the [[prefix]] "sub-" is consistent because the current observation ''X<sub>n</sub>'' is ''less than'' (or equal to) the conditional expectation ''E''[''X<sub>n</sub>''<sub>+1</sub> | ''X''<sub>1</sub>,...,''X<sub>n</sub>'']. Consequently, the current observation provides support ''from below'' the future conditional expectation, and the process tends to increase in future time. 
* Analogously, a discrete-time '''supermartingale''' satisfies ::<math>\operatorname E[X_{n+1}\mid X_1,\ldots,X_n] \le X_n.</math> : Likewise, a continuous-time supermartingale satisfies ::<math>\operatorname E[X_t\mid\{X_\tau : \tau \le s\}] \le X_s \quad \forall s \le t.</math> :In potential theory, a [[superharmonic function]] ''f'' satisfies Δ''f'' ≤ 0. Any superharmonic function that is bounded below by a harmonic function for all points on the boundary of a ball is bounded below by the harmonic function for all points inside the ball. Similarly, if a supermartingale and a martingale have equivalent expectations for a given time, the history of the supermartingale tends to be bounded below by the history of the martingale. Roughly speaking, the prefix "super-" is consistent because the current observation ''X<sub>n</sub>'' is ''greater than'' (or equal to) the conditional expectation ''E''[''X<sub>n</sub>''<sub>+1</sub> | ''X''<sub>1</sub>,...,''X<sub>n</sub>'']. Consequently, the current observation provides support ''from above'' the future conditional expectation, and the process tends to decrease in future time. ===Examples of submartingales and supermartingales=== * Every martingale is also a submartingale and a supermartingale. Conversely, any stochastic process that is ''both'' a submartingale and a supermartingale is a martingale. * Consider again the gambler who wins $1 when a coin comes up heads and loses $1 when the coin comes up tails. Suppose now that the coin may be biased, so that it comes up heads with probability ''p''. ** If ''p'' is equal to 1/2, the gambler on average neither wins nor loses money, and the gambler's fortune over time is a martingale. ** If ''p'' is less than 1/2, the gambler loses money on average, and the gambler's fortune over time is a supermartingale. ** If ''p'' is greater than 1/2, the gambler wins money on average, and the gambler's fortune over time is a submartingale. 
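The three biased-coin cases can be illustrated numerically. This Python sketch (parameter values are arbitrary choices for the illustration) estimates the gambler's mean fortune after many rounds for ''p'' below, at, and above 1/2, showing the downward drift of a supermartingale, the flat behaviour of a martingale, and the upward drift of a submartingale:

```python
import random

def mean_fortune(p, n_paths=20_000, n_steps=50, seed=2):
    """Average fortune after n_steps rounds of the $1 coin game,
    where heads (probability p) wins $1 and tails loses $1."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_paths):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < p else -1
        total += x
    return total / n_paths

for p in (0.4, 0.5, 0.6):
    print(p, mean_fortune(p))
# The drift per step is 2p - 1: negative for p < 1/2 (supermartingale),
# zero for p = 1/2 (martingale), positive for p > 1/2 (submartingale).
```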
* A [[convex function]] of a martingale is a submartingale, by [[Jensen's inequality]]. For example, the square of the gambler's fortune in the fair coin game is a submartingale (which also follows from the fact that ''X<sub>n</sub>''<sup>2</sup> − ''n'' is a martingale). Similarly, a [[concave function]] of a martingale is a supermartingale. ==Martingales and stopping times== {{Main|Stopping time}} A [[stopping time]] with respect to a sequence of random variables ''X''<sub>1</sub>, ''X''<sub>2</sub>, ''X''<sub>3</sub>, ... is a random variable τ with the property that for each ''t'', the occurrence or non-occurrence of the event ''τ'' = ''t'' depends only on the values of ''X''<sub>1</sub>, ''X''<sub>2</sub>, ''X''<sub>3</sub>, ..., ''X''<sub>''t''</sub>. The intuition behind the definition is that at any particular time ''t'', you can look at the sequence so far and tell if it is time to stop. An example in real life might be the time at which a gambler leaves the gambling table, which might be a function of their previous winnings (for example, they might leave only when they go broke), but they can't choose to go or stay based on the outcome of games that haven't been played yet. In some contexts the concept of ''stopping time'' is defined by requiring only that the occurrence or non-occurrence of the event ''τ'' = ''t'' is [[statistical independence|probabilistically independent]] of ''X''<sub>''t'' + 1</sub>, ''X''<sub>''t'' + 2</sub>, ... but not that it is completely determined by the history of the process up to time ''t''. That is a weaker condition than the one appearing in the paragraph above, but is strong enough to serve in some of the proofs in which stopping times are used.
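A concrete stopping time for the fair coin game is the first time the fortune reaches ±5: whether it has occurred by round ''t'' depends only on the first ''t'' outcomes. The sketch below (illustrative; the barrier and horizon values are arbitrary) simulates the walk stopped at this time, capped at a fixed horizon, and shows that the expected stopped fortune stays at the starting value 0:

```python
import random

def stopped_walk_mean(barrier=5, horizon=100, n_paths=50_000, seed=3):
    """Run a fair +-1 walk, stopping at the first exit from (-barrier, barrier)
    -- a stopping time -- or at the fixed horizon, and return the average
    stopped value over many paths."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_paths):
        x = 0
        for _ in range(horizon):
            x += 1 if rng.random() < 0.5 else -1
            if abs(x) >= barrier:   # stopping decision uses only the past
                break
        total += x
    return total / n_paths

print(stopped_walk_mean())  # close to 0, the initial fortune
```

Note the stopping rule inspects only the path so far; peeking at future tosses would break both the definition and the numerical result.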
One of the basic properties of martingales is that, if <math>(X_t)_{t>0}</math> is a (sub-/super-) martingale and <math>\tau</math> is a stopping time, then the corresponding stopped process <math>(X_t^\tau)_{t>0}</math> defined by <math>X_t^\tau:=X_{\min\{\tau,t\}}</math> is also a (sub-/super-) martingale. The concept of a stopped martingale leads to a series of important theorems, including, for example, the [[optional stopping theorem]] which states that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial value. == Martingale problem == The martingale problem is a framework in stochastic analysis for characterizing solutions to stochastic differential equations (SDEs) through martingale conditions. === General martingale problem (A, μ) === Let <math>E</math> be a Polish space with Borel <math>\sigma</math>-algebra <math>\mathcal{E}</math>, and let <math>\mathcal{P}(E)</math> be the set of probability measures on <math>E</math>. Suppose <math>A : \mathcal{D}(A) \to C(E)</math> is a Markov pregenerator, where <math>\mathcal{D}(A)</math> is a dense subspace of <math>C(E)</math>. A probability measure <math>\mathbb{P}</math> on the Skorokhod space <math>D_E[0,\infty)</math> solves the martingale problem <math>(A, \mu)</math> for <math>\mu \in \mathcal{P}(E)</math> if:
# For every <math>\Gamma \in \mathcal{E}</math>, <math>\mathbb{P}\{\zeta : \zeta_0 \in \Gamma\} = \mu(\Gamma).</math>
# For every <math>f \in \mathcal{D}(A)</math>, the process <math>f(\zeta_t) - \int_0^t A f(\zeta_s)\,ds</math> is a local martingale under <math>\mathbb{P}</math> with respect to its natural filtration.
If <math>\mu = \delta_\eta</math> (the Dirac measure at <math>\eta</math>), then <math>\mathbb{P}</math> is said to solve the martingale problem for <math>A</math> with initial point <math>\eta</math>.
=== Martingale problem for diffusions M(a, b) === A process <math>X = (X_t)_{t \ge 0}</math> on a filtered probability space <math>(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})</math> solves the martingale problem <math>M(a, b)</math> for measurable functions <math>a : \mathbb{R}^d \to \mathbb{S}_+^d</math> and <math>b : \mathbb{R}^d \to \mathbb{R}^d</math> if:
# For each <math>1 \le i \le d</math>, <math>M^i_t = X^i_t - \int_0^t b_i(X_s)\,ds</math> is a local martingale.
# For each <math>1 \le i,j \le d</math>, <math>M^i_t M^j_t - \int_0^t a_{ij}(X_s)\,ds</math> is a local martingale.
=== Connection to stochastic differential equations === Solutions to <math>M(a, b)</math> correspond (in a weak sense) to solutions of the SDE <math>dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t</math>, where <math>\sigma\sigma^\top = a</math>. One sees this by applying the generator <math>A</math> to simple functions such as <math>x_i</math> or <math>x_i x_j</math>, thereby recovering the drift <math>b</math> and the diffusion matrix <math>a</math>. ==See also== {{Div col|colwidth=25em}} * [[Azuma's inequality]] * [[Brownian motion]] * [[Doob martingale]] * [[Doob's martingale convergence theorems]] * [[Doob's martingale inequality]] * [[Doob–Meyer decomposition theorem]] * [[Local martingale]] * [[Markov chain]] * [[Markov property]] * [[Martingale (betting system)]] * [[Martingale central limit theorem]] * [[Martingale difference sequence]] * [[Martingale representation theorem]] * [[Normal number]] * [[Semimartingale]] {{div col end}} ==Notes== {{Reflist}} ==References== * {{springer|title=Martingale|id=p/m062570}} * {{cite journal|title=The Splendors and Miseries of Martingales|journal= Electronic Journal for History of Probability and Statistics|volume=5|date=June 2009|issue=1|url=http://www.jehps.net/juin2009.html}} Entire issue dedicated to Martingale probability theory (Laurent Mazliak and Glenn Shafer, Editors).
* {{cite book|first1=Paolo |last1=Baldi |first2=Laurent |last2=Mazliak |first3=Pierre |last3=Priouret|title=Martingales and Markov Chains|publisher= Chapman and Hall|year=1991| isbn =978-1-584-88329-6}} * {{cite book| author-link=David Williams (mathematician)|first=David |last=Williams|title=Probability with Martingales|publisher= Cambridge University Press|year=1991| isbn =978-0-521-40605-5}} * {{cite book|first=Hagen|last= Kleinert|author-link=Hagen Kleinert|title=Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets|edition= 4th|publisher= World Scientific |location=Singapore|year= 2004| isbn =981-238-107-4|url=http://www.physik.fu-berlin.de/~kleinert/b5 }} * {{cite journal |title= Efficiency Testing of Prediction Markets: Martingale Approach, Likelihood Ratio and Bayes Factor Analysis | year=2021 |first1=Mark |last1=Richard |first2=Jan |last2=Vecer |journal=Risks |volume=9 |issue=2|page= 31 |doi= 10.3390/risks9020031 |doi-access= free |hdl=10419/258120 |hdl-access=free }} * {{cite web |title=Martingales and Stopping Times: Use of martingales in obtaining bounds and analyzing algorithms |url=http://www.corelab.ece.ntua.gr/courses/rand-alg/slides/Martingales-Stopping_Times.pdf |publisher=University of Athens |first=Paris |last=Siminelakis |year=2010 |access-date=2010-06-18 |archive-url=https://web.archive.org/web/20180219020828/http://www.corelab.ece.ntua.gr/courses/rand-alg/slides/Martingales-Stopping_Times.pdf |archive-date=2018-02-19 |url-status=dead }} * {{cite journal |zbl=0021.14601|last= Ville|first= Jean |title=Étude critique de la notion de collectif|journal= Bulletin of the American Mathematical Society|language=fr|series=Monographies des Probabilités |volume=3 |issue= 11|pages= 824–825|place=Paris|year=1939|id=[https://dx.doi.org/10.1090/S0002-9904-1939-07089-4 Review by Doob]|url=https://books.google.com/books?id=ETY7AQAAIAAJ|doi= 10.1090/S0002-9904-1939-07089-4|doi-access=free|url-access=subscription}} * {{cite book|first1=Daniel W. |last1=Stroock |first2=S. R. Srinivasa |last2=Varadhan|title=Multidimensional Diffusion Processes|publisher=Springer|year=1979}} * {{cite book|first1=Stewart N. |last1=Ethier |first2=Thomas G. |last2=Kurtz|title=Markov Processes: Characterization and Convergence|publisher=Wiley|year=1986}} {{Stochastic processes}} {{Authority control}} [[Category:Stochastic processes]] [[Category:Martingale theory]] [[Category:Game theory]] [[Category:Paul Lévy (mathematician)]]