Transclusion refers to the inclusion of the content from one document within another document by reference. In Wikipedia, transclusion means the MediaWiki software retrieving the content from a source page, often a template, and incorporating it into the content of a target page.
Similar to creating a wikilink using double square brackets (<syntaxhighlight lang="wikitext" inline="">[[Pagename]]</syntaxhighlight>), a page can be transcluded as a template by enclosing its title in double curly braces (also called curly brackets): <syntaxhighlight lang="wikitext" inline="">{{Namespace:Pagename}}</syntaxhighlight>. Any changes made to the source page, or template, are automatically reflected on all pages that include the transcluded content.
If no namespace is specified, it is assumed to be in the Template namespace. To refer to a page in the main article namespace, it is necessary to prefix its title with a colon (:). For example:
- <syntaxhighlight lang="wikitext" inline="">{{Pagename}}</syntaxhighlight> is the same as <syntaxhighlight lang="wikitext" inline="">{{Template:Pagename}}</syntaxhighlight>
- <syntaxhighlight lang="wikitext" inline="">{{Stochastic processes}}</syntaxhighlight> will transclude content from the page Template:Stochastic processes
- <syntaxhighlight lang="wikitext" inline="">{{:Stochastic process}}</syntaxhighlight> will transclude content from the article Stochastic process in the main namespace, which is reproduced below.
In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables in a probability space, where the index of the family often has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule.<ref name="doob1953stochasticP46to47">Template:Cite book</ref><ref name="Parzen1999">Template:Cite book</ref><ref name="GikhmanSkorokhod1969page1">Template:Cite book</ref> Stochastic processes have applications in many disciplines such as biology,<ref name="Bressloff2014">Template:Cite book</ref> chemistry,<ref name="Kampen2011">Template:Cite book</ref> ecology,<ref name="LandeEngen2003">Template:Cite book</ref> neuroscience,<ref name="LaingLord2010">Template:Cite book</ref> physics,<ref name="PaulBaschnagel2013">Template:Cite book</ref> image processing, signal processing,<ref name="Dougherty1999">Template:Cite book</ref> control theory,<ref name="Bertsekas1996">Template:Cite book</ref> information theory,<ref name="CoverThomas2012page71">Template:Cite book</ref> computer science,<ref name="Baron2015">Template:Cite book</ref> and telecommunications.<ref name="BaccelliBlaszczyszyn2009">Template:Cite book</ref> Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.<ref name="Steele2001">Template:Cite book</ref><ref name="MusielaRutkowski2006">Template:Cite book</ref><ref name="Shreve2004">Template:Cite book</ref>
Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process,Template:Efn used by Louis Bachelier to study price changes on the Paris Bourse,<ref name="JarrowProtter2004">Template:Cite book</ref> and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time.<ref name="Stirzaker2000">Template:Cite journal</ref> These two stochastic processes are considered the most important and central in the theory of stochastic processes,<ref name="doob1953stochasticP46to47"/><ref name="Parzen1999"/><ref>Template:Cite book</ref> and were invented repeatedly and independently, both before and after Bachelier and Erlang, in different settings and countries.<ref name="JarrowProtter2004"/><ref name="GuttorpThorarinsdottir2012">Template:Cite journal</ref>
The term random function is also used to refer to a stochastic or random process,<ref name="GusakKukush2010page21">Template:Cite book</ref><ref name="Skorokhod2005page42">Template:Cite book</ref> because a stochastic process can also be interpreted as a random element in a function space.<ref name="Kallenberg2002page24"/><ref name="Lamperti1977page1">Template:Cite book</ref> The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables.<ref name="Kallenberg2002page24">Template:Cite book</ref><ref name="ChaumontYor2012">Template:Cite book</ref> But often these two terms are used when the random variables are indexed by the integers or an interval of the real line.<ref name="GikhmanSkorokhod1969page1"/><ref name="ChaumontYor2012"/> If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead.<ref name="GikhmanSkorokhod1969page1"/><ref name="AdlerTaylor2009page7">Template:Cite book</ref> The values of a stochastic process are not always numbers and can be vectors or other mathematical objects.<ref name="GikhmanSkorokhod1969page1"/><ref name="Lamperti1977page1"/>
Based on their mathematical properties, stochastic processes can be grouped into various categories, which include random walks,<ref name="LawlerLimic2010">Template:Cite book</ref> martingales,<ref name="Williams1991">Template:Cite book</ref> Markov processes,<ref name="RogersWilliams2000">Template:Cite book</ref> Lévy processes,<ref name="ApplebaumBook2004">Template:Cite book</ref> Gaussian processes,<ref>Template:Cite book</ref> random fields,<ref name="Adler2010">Template:Cite book</ref> renewal processes, and branching processes.<ref name="KarlinTaylor2012">Template:Cite book</ref> The study of stochastic processes uses mathematical knowledge and techniques from probability, calculus, linear algebra, set theory, and topology<ref name="Hajek2015">Template:Cite book</ref><ref name="LatoucheRamaswami1999">Template:Cite book</ref><ref name="DaleyVere-Jones2007">Template:Cite book</ref> as well as branches of mathematical analysis such as real analysis, measure theory, Fourier analysis, and functional analysis.<ref name="Billingsley2008">Template:Cite book</ref><ref name="Brémaud2014">Template:Cite book</ref><ref name="Bobrowski2005">Template:Cite book</ref> The theory of stochastic processes is considered to be an important contribution to mathematics<ref name="Applebaum2004">Template:Cite journal</ref> and it continues to be an active topic of research for both theoretical reasons and applications.<ref name="BlathImkeller2011">Template:Cite book</ref><ref name="Talagrand2014">Template:Cite book</ref><ref name="Bressloff2014VII">Template:Cite book</ref>
Introduction
A stochastic or random process can be defined as a collection of random variables that is indexed by some mathematical set, meaning that each random variable of the stochastic process is uniquely associated with an element in the set.<ref name="Parzen1999"/><ref name="GikhmanSkorokhod1969page1"/> The set used to index the random variables is called the index set. Historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time.<ref name="doob1953stochasticP46to47"/> Each random variable in the collection takes values from the same mathematical space known as the state space. This state space can be, for example, the integers, the real line or <math>n</math>-dimensional Euclidean space.<ref name="doob1953stochasticP46to47"/><ref name="GikhmanSkorokhod1969page1"/> An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time.<ref name="KarlinTaylor2012page27"/><ref name="Applebaum2004page1337"/> A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization.<ref name="Lamperti1977page1"/><ref name="RogersWilliams2000page121b"/>
Classifications
A stochastic process can be classified in different ways, for example, by its state space, its index set, or the dependence among the random variables. One common way of classification is by the cardinality of the index set and the state space.<ref name="Florescu2014page294"/><ref name="KarlinTaylor2012page26">Template:Cite book</ref><ref>Template:Cite book</ref>
When interpreted as time, if the index set of a stochastic process has a finite or countable number of elements, such as a finite set of numbers, the set of integers, or the natural numbers, then the stochastic process is said to be in discrete time.<ref name="Billingsley2008page482"/><ref name="Borovkov2013page527">Template:Cite book</ref> If the index set is some interval of the real line, then time is said to be continuous. The two types of stochastic processes are respectively referred to as discrete-time and continuous-time stochastic processes.<ref name="KarlinTaylor2012page27"/><ref name="Brémaud2014page120"/><ref name="Rosenthal2006page177">Template:Cite book</ref> Discrete-time stochastic processes are considered easier to study because continuous-time processes require more advanced mathematical techniques and knowledge, particularly due to the index set being uncountable.<ref name="KloedenPlaten2013page63">Template:Cite book</ref><ref name="Khoshnevisan2006page153">Template:Cite book</ref> If the index set is the integers, or some subset of them, then the stochastic process can also be called a random sequence.<ref name="Borovkov2013page527"/>
If the state space is the integers or natural numbers, then the stochastic process is called a discrete or integer-valued stochastic process. If the state space is the real line, then the stochastic process is referred to as a real-valued stochastic process or a process with continuous state space. If the state space is <math>n</math>-dimensional Euclidean space, then the stochastic process is called an <math>n</math>-dimensional vector process or <math>n</math>-vector process.<ref name="Florescu2014page294"/><ref name="KarlinTaylor2012page26"/>
Etymology
The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence.<ref name="OxfordStochastic">Template:Cite OED</ref> In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics".<ref name="Sheĭnin2006page5">Template:Cite book</ref> This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz<ref name="SheyninStrecker2011page136">Template:Cite book</ref> who in 1917 wrote in German the word stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph Doob.<ref name="OxfordStochastic"/> For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin,<ref name="Doob1934"/><ref name="Khintchine1934">Template:Cite journal</ref> though the German term had been used earlier, for example, by Andrei Kolmogorov in 1931.<ref name="Kolmogoroff1931page1">Template:Cite journal</ref>
According to the Oxford English Dictionary, early occurrences of the word random in English with its current meaning, which relates to chance or luck, date back to the 16th century, while earlier recorded usages started in the 14th century as a noun meaning "impetuosity, great speed, force, or violence (in riding, running, striking, etc.)". The word itself comes from a Middle French word meaning "speed, haste", and it is probably derived from a French verb meaning "to run" or "to gallop". The first written appearance of the term random process pre-dates stochastic process, which the Oxford English Dictionary also gives as a synonym, and was used in an article by Francis Edgeworth published in 1888.<ref name="OxfordRandom">Template:Cite OED</ref>
Terminology
The definition of a stochastic process varies,<ref name="FristedtGray2013page580">Template:Cite book</ref> but a stochastic process is traditionally defined as a collection of random variables indexed by some set.<ref name="RogersWilliams2000page121"/><ref name="Asmussen2003page408"/> The terms random process and stochastic process are considered synonyms and are used interchangeably, without the index set being precisely specified.<ref name="Kallenberg2002page24"/><ref name="ChaumontYor2012"/><ref name="AdlerTaylor2009page7"/><ref name="Stirzaker2005page45">Template:Cite book</ref><ref name="Rosenblatt1962page91">Template:Cite book</ref><ref name="Gubner2006page383">Template:Cite book</ref> Both "collection"<ref name="Lamperti1977page1"/><ref name="Stirzaker2005page45"/> and "family"<ref name="Parzen1999"/><ref name="Ito2006page13">Template:Cite book</ref> are used, while instead of "index set", sometimes the terms "parameter set"<ref name="Lamperti1977page1"/> or "parameter space"<ref name="AdlerTaylor2009page7"/> are used.
The term random function is also used to refer to a stochastic or random process,<ref name="GikhmanSkorokhod1969page1"/><ref name="Loeve1978">Template:Cite book</ref><ref name="Brémaud2014page133">Template:Cite book</ref> though sometimes it is only used when the stochastic process takes real values.<ref name="Lamperti1977page1"/><ref name="Ito2006page13"/> This term is also used when the index sets are mathematical spaces other than the real line,<ref name="GikhmanSkorokhod1969page1"/><ref name="GusakKukush2010page1">Template:Harvtxt, p. 1</ref> while the terms stochastic process and random process are usually used when the index set is interpreted as time,<ref name="GikhmanSkorokhod1969page1"/><ref name="GusakKukush2010page1"/><ref name="Bass2011page1">Template:Cite book</ref> and other terms are used such as random field when the index set is <math>n</math>-dimensional Euclidean space <math>\mathbb{R}^n</math> or a manifold.<ref name="GikhmanSkorokhod1969page1"/><ref name="Lamperti1977page1"/><ref name="AdlerTaylor2009page7"/>
Notation
A stochastic process can be denoted, among other ways, by <math>\{X(t)\}_{t\in T} </math>,<ref name="Brémaud2014page120"/> <math>\{X_t\}_{t\in T} </math>,<ref name="Asmussen2003page408"/> <math>\{X_t\}</math>,<ref name="Lamperti1977page3">Template:Cite book</ref> <math>\{X(t)\}</math>, or simply as <math>X</math>. Some authors mistakenly write <math>X(t)</math> even though it is an abuse of function notation.<ref name="Klebaner2005page55">Template:Cite book</ref> For example, <math>X(t)</math> or <math>X_t</math> are used to refer to the random variable with the index <math>t</math>, and not the entire stochastic process.<ref name="Lamperti1977page3"/> If the index set is <math>T=[0,\infty)</math>, then one can write, for example, <math>(X_t , t \geq 0)</math> to denote the stochastic process.<ref name="ChaumontYor2012"/>
Examples
Bernoulli process
One of the simplest stochastic processes is the Bernoulli process,<ref name="Florescu2014page293"/> which is a sequence of independent and identically distributed (iid) random variables, where each random variable takes either the value one or zero, say one with probability <math>p</math> and zero with probability <math>1-p</math>. This process can be linked to an idealisation of repeatedly flipping a coin, where the probability of obtaining a head is taken to be <math>p</math> and its value is one, while the value of a tail is zero.<ref name= "Florescu2014page301">Template:Cite book</ref> In other words, a Bernoulli process is a sequence of iid Bernoulli random variables,<ref name="BertsekasTsitsiklis2002page273">Template:Cite book</ref> where each idealised coin flip is an example of a Bernoulli trial.<ref name="Ibe2013page11">Template:Cite book</ref>
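The following short simulation sketch illustrates this definition; it assumes the NumPy library, and the value of <math>p</math> and the number of trials chosen here are arbitrary.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)
p = 0.5        # probability of drawing a one (a "head")
trials = 20    # number of Bernoulli trials

# Each entry is an independent Bernoulli(p) random variable:
# 1 with probability p, 0 with probability 1 - p.
bernoulli_process = rng.binomial(n=1, p=p, size=trials)
print(bernoulli_process)
</syntaxhighlight>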
Random walk
Random walks are stochastic processes that are usually defined as sums of iid random variables or random vectors in Euclidean space, so they are processes that change in discrete time.<ref name="Klenke2013page347">Template:Cite book</ref><ref name="LawlerLimic2010page1">Template:Cite book</ref><ref name="Kallenberg2002page136">Template:Cite book</ref><ref name="Florescu2014page383">Template:Cite book</ref><ref name="Durrett2010page277">Template:Cite book</ref> But some also use the term to refer to processes that change in continuous time,<ref name="Weiss2006page1">Template:Cite book</ref> particularly the Wiener process used in financial models, which has led to some confusion, resulting in its criticism.<ref name="Spanos1999page454">Template:Cite book</ref> There are various other types of random walks, defined so their state spaces can be other mathematical objects, such as lattices and groups, and in general they are highly studied and have many applications in different disciplines.<ref name="Weiss2006page1"/><ref name="Klebaner2005page81">Template:Cite book</ref>
A classic example of a random walk is known as the simple random walk, which is a stochastic process in discrete time with the integers as the state space, and is based on a Bernoulli process, where each Bernoulli variable takes either the value positive one or negative one. In other words, the simple random walk takes place on the integers, and its value increases by one with probability, say, <math>p</math>, or decreases by one with probability <math>1-p</math>, so the index set of this random walk is the natural numbers, while its state space is the integers. If <math>p=0.5</math>, this random walk is called a symmetric random walk.<ref name="Gut2012page88">Template:Cite book</ref><ref name="GrimmettStirzaker2001page71">Template:Cite book</ref>
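As an illustrative sketch (again assuming NumPy, with arbitrary parameter values), a simple random walk can be simulated by summing independent <math>\pm 1</math> steps:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)
p = 0.5          # probability that a step equals +1
n_steps = 100    # number of steps of the walk

# Steps are iid: +1 with probability p and -1 with probability 1 - p.
steps = rng.choice([1, -1], size=n_steps, p=[p, 1 - p])

# The walk is the sequence of partial sums, started at zero.
walk = np.concatenate(([0], np.cumsum(steps)))
print(walk[:10])
</syntaxhighlight>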
Wiener process
The Wiener process is a stochastic process with stationary and independent increments that are normally distributed based on the size of the increments.<ref name="RogersWilliams2000page1">Template:Cite book</ref><ref name="Klebaner2005page56">Template:Cite book</ref> The Wiener process is named after Norbert Wiener, who proved its mathematical existence, but the process is also called the Brownian motion process or just Brownian motion due to its historical connection as a model for Brownian movement in liquids.<ref name="Brush1968page1">Template:Cite journal</ref><ref name="Applebaum2004page1338">Template:Cite journal</ref><ref name="GikhmanSkorokhod1969page21">Template:Cite book</ref>
Playing a central role in the theory of probability, the Wiener process is often considered the most important and studied stochastic process, with connections to other stochastic processes.<ref name="doob1953stochasticP46to47"/><ref name="RogersWilliams2000page1"/><ref name="Steele2012page29">Template:Cite book</ref><ref name="Florescu2014page471">Template:Cite book</ref><ref name="KarlinTaylor2012page21">Template:Cite book</ref><ref name="KaratzasShreve2014pageVIII">Template:Cite book</ref><ref name="RevuzYor2013pageIX">Template:Cite book</ref> Its index set and state space are the non-negative numbers and real numbers, respectively, so it has both continuous index set and state space.<ref name="Rosenthal2006page186">Template:Cite book</ref> But the process can be defined more generally so its state space can be <math>n</math>-dimensional Euclidean space.<ref name="Klebaner2005page81"/><ref name="KarlinTaylor2012page21"/><ref>Template:Cite book</ref> If the mean of any increment is zero, then the resulting Wiener or Brownian motion process is said to have zero drift. If the mean of the increment for any two points in time is equal to the time difference multiplied by some constant <math> \mu</math>, which is a real number, then the resulting stochastic process is said to have drift <math> \mu</math>.<ref name="Steele2012page118">Template:Cite book</ref><ref name="MörtersPeres2010page1"/><ref name="KaratzasShreve2014page78">Template:Cite book</ref>
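In symbols, the drift condition just described can be written as follows, for any two points in time <math>t_1 \leq t_2</math>:
<math display="block">\operatorname{E}\left[X_{t_2}-X_{t_1}\right] = \mu \, (t_2-t_1).</math>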
Almost surely, a sample path of a Wiener process is continuous everywhere but nowhere differentiable. It can be considered as a continuous version of the simple random walk.<ref name="Applebaum2004page1337">Template:Cite journal</ref><ref name="MörtersPeres2010page1">Template:Cite book</ref> The process arises as the mathematical limit of other stochastic processes such as certain random walks rescaled,<ref name="KaratzasShreve2014page61">Template:Cite book</ref><ref name="Shreve2004page93">Template:Cite book</ref> which is the subject of Donsker's theorem or invariance principle, also known as the functional central limit theorem.<ref name="Kallenberg2002page225and260">Template:Cite book</ref><ref name="KaratzasShreve2014page70">Template:Cite book</ref><ref name="MörtersPeres2010page131">Template:Cite book</ref>
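A minimal simulation sketch of this rescaling idea, assuming NumPy and an arbitrary number of steps, approximates a Wiener path on <math>[0,1]</math> by compressing time by <math>n</math> and space by <math>\sqrt{n}</math>:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10000   # number of random-walk steps used for the approximation

# Simple symmetric random walk: iid +/-1 steps and their partial sums.
steps = rng.choice([1, -1], size=n)
partial_sums = np.concatenate(([0], np.cumsum(steps)))

# Donsker-type rescaling: entry k approximates the Wiener process at time k / n.
approximate_wiener_path = partial_sums / np.sqrt(n)
</syntaxhighlight>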
The Wiener process is a member of some important families of stochastic processes, including Markov processes, Lévy processes and Gaussian processes.<ref name="RogersWilliams2000page1"/><ref name="Applebaum2004page1337"/> The process also has many applications and is the main stochastic process used in stochastic calculus.<ref name="Klebaner2005">Template:Cite book</ref><ref name="KaratzasShreve2014page">Template:Cite book</ref> It plays a central role in quantitative finance,<ref name="Applebaum2004page1341">Template:Cite journal</ref><ref name="KarlinTaylor2012page340">Template:Cite book</ref> where it is used, for example, in the Black–Scholes–Merton model.<ref name="Klebaner2005page124">Template:Cite book</ref> The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena.<ref name="Steele2012page29"/><ref name="KaratzasShreve2014page47">Template:Cite book</ref><ref name="Wiersema2008page2">Template:Cite book</ref>
Poisson process
The Poisson process is a stochastic process that has different forms and definitions.<ref name="Tijms2003page1">Template:Cite book</ref><ref name="DaleyVere-Jones2006chap2">Template:Cite book</ref> It can be defined as a counting process, which is a stochastic process that represents the random number of points or events up to some time. The number of points of the process that are located in the interval from zero to some given time is a Poisson random variable that depends on that time and some parameter. This process has the natural numbers as its state space and the non-negative numbers as its index set. This process is also called the Poisson counting process, since it can be interpreted as an example of a counting process.<ref name="Tijms2003page1"/>
If a Poisson process is defined with a single positive constant, then the process is called a homogeneous Poisson process.<ref name="Tijms2003page1"/><ref name="PinskyKarlin2011">Template:Cite book</ref> The homogeneous Poisson process is a member of important classes of stochastic processes such as Markov processes and Lévy processes.<ref name="Applebaum2004page1337"/>
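As an illustrative sketch of the homogeneous case (assuming NumPy; the rate, horizon, and buffer size below are arbitrary), a Poisson counting process can be simulated from its exponentially distributed inter-arrival times:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)
rate = 2.0       # parameter of the process (expected points per unit time)
horizon = 10.0   # observe the process on the interval [0, horizon]

# Inter-arrival times of a homogeneous Poisson process are iid exponential
# with mean 1 / rate; the buffer of 1000 draws is simply taken large enough.
arrival_times = np.cumsum(rng.exponential(scale=1.0 / rate, size=1000))
arrival_times = arrival_times[arrival_times <= horizon]

# N(t): number of points in [0, t], a Poisson(rate * t) random variable.
def count_up_to(t):
    return int(np.searchsorted(arrival_times, t, side="right"))

print(count_up_to(5.0))
</syntaxhighlight>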
The homogeneous Poisson process can be defined and generalized in different ways. It can be defined such that its index set is the real line, and this stochastic process is also called the stationary Poisson process.<ref name="Kingman1992page38">Template:Cite book</ref><ref name="DaleyVere-Jones2006page19">Template:Cite book</ref> If the parameter constant of the Poisson process is replaced with some non-negative integrable function of <math>t</math>, the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant.<ref name="Kingman1992page22">Template:Cite book</ref> Serving as a fundamental process in queueing theory, the Poisson process is an important process for mathematical models, where it finds applications for models of events randomly occurring in certain time windows.<ref name="KarlinTaylor2012page118">Template:Cite book</ref><ref name="Kleinrock1976page61">Template:Cite book</ref>
Defined on the real line, the Poisson process can be interpreted as a stochastic process,<ref name="Applebaum2004page1337"/><ref name="Rosenblatt1962page94">Template:Cite book</ref> among other random objects.<ref name="Haenggi2013page10and18">Template:Cite book</ref><ref name="ChiuStoyan2013page41and108">Template:Cite book</ref> But it can also be defined on the <math>n</math>-dimensional Euclidean space or other mathematical spaces,<ref name="Kingman1992page11">Template:Cite book</ref> where it is often interpreted as a random set or a random counting measure, instead of a stochastic process.<ref name="Haenggi2013page10and18"/><ref name="ChiuStoyan2013page41and108"/> In this setting, the Poisson process, also called the Poisson point process, is one of the most important objects in probability theory, both for applications and theoretical reasons.<ref name="Stirzaker2000"/><ref name="Streit2010page1">Template:Cite book</ref> But it has been remarked that the Poisson process does not receive as much attention as it should, partly due to it often being considered just on the real line, and not on other mathematical spaces.<ref name="Streit2010page1"/><ref name="Kingman1992pagev">Template:Cite book</ref>
Definitions
Stochastic process
A stochastic process is defined as a collection of random variables defined on a common probability space <math>(\Omega, \mathcal{F}, P)</math>, where <math>\Omega</math> is a sample space, <math>\mathcal{F}</math> is a <math>\sigma</math>-algebra, and <math>P</math> is a probability measure; and the random variables, indexed by some set <math>T</math>, all take values in the same mathematical space <math>S</math>, which must be measurable with respect to some <math>\sigma</math>-algebra <math>\Sigma</math>.<ref name="Lamperti1977page1"/>
In other words, for a given probability space <math>(\Omega, \mathcal{F}, P)</math> and a measurable space <math>(S,\Sigma)</math>, a stochastic process is a collection of <math>S</math>-valued random variables, which can be written as:<ref name="Florescu2014page293">Template:Cite book</ref>
<math display="block">\{X(t):t\in T \}.</math>
Historically, in many problems from the natural sciences a point <math>t\in T</math> had the meaning of time, so <math>X(t)</math> is a random variable representing a value observed at time <math>t</math>.<ref name="Borovkov2013page528">Template:Cite book</ref> A stochastic process can also be written as <math> \{X(t,\omega):t\in T \}</math> to reflect that it is actually a function of two variables, <math>t\in T</math> and <math>\omega\in \Omega</math>.<ref name="Lamperti1977page1"/><ref name="LindgrenRootzen2013page11">Template:Cite book</ref>
There are other ways to consider a stochastic process, with the above definition being considered the traditional one.<ref name="RogersWilliams2000page121">Template:Cite book</ref><ref name="Asmussen2003page408">Template:Cite book</ref> For example, a stochastic process can be interpreted or defined as a <math>S^T</math>-valued random variable, where <math>S^T</math> is the space of all the possible functions from the set <math>T</math> into the space <math>S</math>.<ref name="Kallenberg2002page24"/><ref name="RogersWilliams2000page121"/> However this alternative definition as a "function-valued random variable" in general requires additional regularity assumptions to be well-defined.<ref name="aumann">Template:Cite journal</ref>
Index set
The set <math>T</math> is called the index set<ref name="Parzen1999"/><ref name="Florescu2014page294"/> or parameter set<ref name="Lamperti1977page1"/><ref name="Skorokhod2005page93">Template:Cite book</ref> of the stochastic process. Often this set is some subset of the real line, such as the natural numbers or an interval, giving the set <math>T</math> the interpretation of time.<ref name="doob1953stochasticP46to47"/> In addition to these sets, the index set <math>T</math> can be another set with a total order or a more general set,<ref name="doob1953stochasticP46to47"/><ref name="Billingsley2008page482">Template:Cite book</ref> such as the Cartesian plane <math>\mathbb{R}^2</math> or <math>n</math>-dimensional Euclidean space, where an element <math>t\in T</math> can represent a point in space.<ref name="KarlinTaylor2012page27">Template:Cite book</ref><ref>Template:Cite book</ref> That said, many results and theorems are only possible for stochastic processes with a totally ordered index set.<ref name="Skorokhod2005page104">Template:Cite book</ref>
State space
The mathematical space <math>S</math> of a stochastic process is called its state space. This mathematical space can be defined using integers, real lines, <math>n</math>-dimensional Euclidean spaces, complex planes, or more abstract mathematical spaces. The state space is defined using elements that reflect the different values that the stochastic process can take.<ref name="doob1953stochasticP46to47"/><ref name="GikhmanSkorokhod1969page1"/><ref name="Lamperti1977page1"/><ref name="Florescu2014page294">Template:Cite book</ref><ref name="Brémaud2014page120">Template:Cite book</ref>
Sample function
A sample function is a single outcome of a stochastic process, so it is formed by taking a single possible value of each random variable of the stochastic process.<ref name="Lamperti1977page1"/><ref name="Florescu2014page296">Template:Cite book</ref> More precisely, if <math>\{X(t,\omega):t\in T \}</math> is a stochastic process, then for any point <math>\omega\in\Omega</math>, the mapping
<math display="block">X(\cdot,\omega): T \rightarrow S,</math>
is called a sample function, a realization, or, particularly when <math>T</math> is interpreted as time, a sample path of the stochastic process <math>\{X(t,\omega):t\in T \}</math>.<ref name="RogersWilliams2000page121b">Template:Cite book</ref> This means that for a fixed <math>\omega\in\Omega</math>, there exists a sample function that maps the index set <math>T</math> to the state space <math>S</math>.<ref name="Lamperti1977page1"/> Other names for a sample function of a stochastic process include trajectory, path function<ref name="Billingsley2008page493">Template:Cite book</ref> or path.<ref name="Øksendal2003page10">Template:Cite book</ref>
Increment
An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period. For example, if <math>\{X(t):t\in T \}</math> is a stochastic process with state space <math>S</math> and index set <math>T=[0,\infty)</math>, then for any two non-negative numbers <math>t_1\in [0,\infty)</math> and <math>t_2\in [0,\infty)</math> such that <math>t_1\leq t_2</math>, the difference <math>X_{t_2}-X_{t_1}</math> is an <math>S</math>-valued random variable known as an increment.<ref name="KarlinTaylor2012page27"/><ref name="Applebaum2004page1337"/> When interested in the increments, often the state space <math>S</math> is the real line or the natural numbers, but it can be <math>n</math>-dimensional Euclidean space or more abstract spaces such as Banach spaces.<ref name="Applebaum2004page1337"/>
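For example, for the standard Wiener process discussed above, the increment over the interval <math>[t_1, t_2]</math> is normally distributed with mean zero and variance equal to the length of the interval:
<math display="block">X_{t_2}-X_{t_1} \sim N(0,\, t_2-t_1).</math>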
Further definitions
Law
For a stochastic process <math>X\colon\Omega \rightarrow S^T</math> defined on the probability space <math>(\Omega, \mathcal{F}, P)</math>, the law of stochastic process <math>X</math> is defined as the pushforward measure:
<math display="block">\mu=P\circ X^{-1},</math>
where <math>P</math> is a probability measure, the symbol <math>\circ </math> denotes function composition and <math>X^{-1}</math> is the pre-image of the measurable function or, equivalently, the <math>S^T</math>-valued random variable <math>X</math>, where <math>S^T</math> is the space of all the possible <math>S</math>-valued functions of <math>t\in T</math>, so the law of a stochastic process is a probability measure.<ref name="Kallenberg2002page24"/><ref name="RogersWilliams2000page121"/><ref name="FrizVictoir2010page571"/><ref name="Resnick2013page40">Template:Cite book</ref>
For a measurable subset <math>B</math> of <math>S^T</math>, the pre-image of <math>X</math> gives
<math display="block">X^{-1}(B)=\{\omega\in \Omega: X(\omega)\in B \},</math>
so the law of <math>X</math> can be written as:<ref name="Lamperti1977page1"/>
<math display="block">\mu(B)=P(\{\omega\in \Omega: X(\omega)\in B \}).</math>
The law of a stochastic process or a random variable is also called the probability law, probability distribution, or the distribution.<ref name="Borovkov2013page528"/><ref name="FrizVictoir2010page571"/><ref name="Whitt2006page23">Template:Cite book</ref><ref name="ApplebaumBook2004page4">Template:Cite book</ref><ref name="RevuzYor2013page10">Template:Cite book</ref>
Finite-dimensional probability distributions
For a stochastic process <math>X</math> with law <math>\mu</math>, its finite-dimensional distribution for <math>t_1,\dots,t_n\in T</math> is defined as:
<math display="block">\mu_{t_1,\dots,t_n} =P\circ (X({t_1}),\dots, X({t_n}))^{-1}.</math>
This measure <math>\mu_{t_1,\dots,t_n}</math> is the joint distribution of the random vector <math> (X({t_1}),\dots, X({t_n})) </math>; it can be viewed as a "projection" of the law <math>\mu</math> onto a finite subset of <math>T</math>.<ref name="Kallenberg2002page24"/><ref name="RogersWilliams2000page123">Template:Cite book</ref>
For any measurable subset <math>C</math> of the <math>n</math>-fold Cartesian power <math>S^n=S\times\dots \times S</math>, the finite-dimensional distributions of a stochastic process <math>X</math> can be written as:<ref name="Lamperti1977page1"/>
<math display="block">\mu_{t_1,\dots,t_n}(C) =P \Big(\big\{\omega\in \Omega: \big( X_{t_1}(\omega), \dots, X_{t_n}(\omega) \big) \in C \big\} \Big).</math>
The finite-dimensional distributions of a stochastic process satisfy two mathematical conditions known as consistency conditions.<ref name="Rosenthal2006page177"/>
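For instance, one of these conditions (in the usual formulation) requires that adjoining an extra index and then ignoring it leaves the finite-dimensional distribution unchanged: for measurable sets <math>A_1,\dots,A_n \subseteq S</math> and any additional index <math>t_{n+1}\in T</math>,
<math display="block">\mu_{t_1,\dots,t_n}(A_1\times\cdots\times A_n) = \mu_{t_1,\dots,t_n,t_{n+1}}(A_1\times\cdots\times A_n\times S).</math>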
Stationarity
Stationarity is a mathematical property that a stochastic process has when all the random variables of that stochastic process are identically distributed. In other words, if <math>X</math> is a stationary stochastic process, then for any <math>t\in T</math> the random variable <math>X_t</math> has the same distribution, which means that for any set of <math>n</math> index set values <math>t_1,\dots, t_n</math>, the corresponding <math>n</math> random variables
<math display="block">X_{t_1}, \dots, X_{t_n},</math>
all have the same probability distribution. The index set of a stationary stochastic process is usually interpreted as time, so it can be the integers or the real line.<ref name="Lamperti1977page6">Template:Cite book</ref><ref name="GikhmanSkorokhod1969page4">Template:Cite book</ref> But the concept of stationarity also exists for point processes and random fields, where the index set is not interpreted as time.<ref name="Lamperti1977page6"/><ref name="Adler2010page14">Template:Cite book</ref><ref name="ChiuStoyan2013page112">Template:Cite book</ref>
When the index set <math>T</math> can be interpreted as time, a stochastic process is said to be stationary if its finite-dimensional distributions are invariant under translations of time. This type of stochastic process can be used to describe a physical system that is in steady state, but still experiences random fluctuations.<ref name="Lamperti1977page6"/> The intuition behind stationarity is that as time passes the distribution of the stationary stochastic process remains the same.<ref name="Doob1990page94">Template:Cite book</ref> A sequence of random variables forms a stationary stochastic process only if the random variables are identically distributed.<ref name="Lamperti1977page6"/>
A stochastic process with the above definition of stationarity is sometimes said to be strictly stationary, but there are other forms of stationarity. One example is wide-sense stationarity: a discrete-time or continuous-time stochastic process <math>X</math> is said to be stationary in the wide sense if <math>X</math> has a finite second moment for all <math>t\in T</math> and the covariance of the two random variables <math>X_t</math> and <math>X_{t+h}</math> depends only on the number <math>h</math> for all <math>t\in T</math>.<ref name="Doob1990page94"/><ref name="Florescu2014page298">Template:Cite book</ref> Khinchin introduced the related concept of stationarity in the wide sense, which has other names including covariance stationarity or stationarity in the broad sense.<ref name="Florescu2014page298"/><ref name="GikhmanSkorokhod1969page8">Template:Cite book</ref>
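In most textbook formulations, which also require a constant mean, these conditions can be written as
<math display="block">\operatorname{E}[X_t] = m \quad \text{and} \quad \operatorname{Cov}(X_t, X_{t+h}) = C(h) \qquad \text{for all } t, t+h \in T,</math>
where <math>m</math> is a constant and <math>C</math> is a function of the time difference <math>h</math> alone.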
Filtration
A filtration is an increasing sequence of sigma-algebras defined in relation to some probability space and an index set that has some total order relation, such as in the case of the index set being some subset of the real numbers. More formally, if a stochastic process has an index set with a total order, then a filtration <math>\{\mathcal{F}_t\}_{t\in T} </math>, on a probability space <math>(\Omega, \mathcal{F}, P)</math> is a family of sigma-algebras such that <math> \mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F} </math> for all <math>s \leq t</math>, where <math>t, s\in T</math> and <math>\leq</math> denotes the total order of the index set <math>T</math>.<ref name="Florescu2014page294"/> With the concept of a filtration, it is possible to study the amount of information contained in a stochastic process <math>X_t</math> at <math>t\in T</math>, which can be interpreted as time <math>t</math>.<ref name="Florescu2014page294"/><ref name="Williams1991page93"/> The intuition behind a filtration <math>\mathcal{F}_t</math> is that as time <math>t</math> passes, more and more information on <math>X_t</math> is known or available, which is captured in <math>\mathcal{F}_t</math>, resulting in finer and finer partitions of <math>\Omega</math>.<ref name="Klebaner2005page22">Template:Cite book</ref><ref name="MörtersPeres2010page37">Template:Cite book</ref>
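A standard example is the natural filtration generated by the process itself, in which <math>\mathcal{F}_t</math> records the history of the process up to time <math>t</math>:
<math display="block">\mathcal{F}_t = \sigma\left( X_s : s \in T,\ s \leq t \right).</math>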
Modification
A modification of a stochastic process is another stochastic process, which is closely related to the original stochastic process. More precisely, a stochastic process <math>X</math> that has the same index set <math>T</math>, state space <math>S</math>, and probability space <math>(\Omega,{\cal F},P)</math> as another stochastic process <math>Y</math> is said to be a modification of <math>Y</math> if for all <math>t\in T</math> the following
<math display="block">P(X_t=Y_t)=1,</math>
holds. Two stochastic processes that are modifications of each other have the same finite-dimensional law<ref name="RogersWilliams2000page130">Template:Cite book</ref> and they are said to be stochastically equivalent or equivalent.<ref name="Borovkov2013page530">Template:Cite book</ref>
Instead of modification, the term version is also used,<ref name="Adler2010page14"/><ref name="Klebaner2005page48">Template:Cite book</ref><ref name="Øksendal2003page14">Template:Cite book</ref><ref name="Florescu2014page472">Template:Cite book</ref> however some authors use the term version when two stochastic processes have the same finite-dimensional distributions but may be defined on different probability spaces, so two processes that are modifications of each other are also versions of each other in the latter sense, but not conversely.<ref name="RevuzYor2013page18">Template:Cite book</ref><ref name="FrizVictoir2010page571"/>
If a continuous-time real-valued stochastic process meets certain moment conditions on its increments, then the Kolmogorov continuity theorem says that there exists a modification of this process that has continuous sample paths with probability one, so the stochastic process has a continuous modification or version.<ref name="Øksendal2003page14"/><ref name="Florescu2014page472"/><ref name="ApplebaumBook2004page20">Template:Cite book</ref> The theorem can also be generalized to random fields so the index set is <math>n</math>-dimensional Euclidean space<ref name="Kunita1997page31">Template:Cite book</ref> as well as to stochastic processes with metric spaces as their state spaces.<ref name="Kallenberg2002page">Template:Cite book</ref>
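A common form of the moment condition requires constants <math>\alpha, \beta, K > 0</math> such that
<math display="block">\operatorname{E}\left[\,|X_t - X_s|^{\alpha}\,\right] \leq K\,|t-s|^{1+\beta} \qquad \text{for all } s,t \in T,</math>
in which case the process has a modification with continuous (in fact Hölder continuous) sample paths.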
Indistinguishable
Two stochastic processes <math>X</math> and <math>Y</math> defined on the same probability space <math>(\Omega,\mathcal{F},P)</math> with the same index set <math>T</math> and state space <math>S</math> are said to be indistinguishable if the following
<math display="block">P(X_t=Y_t \text{ for all } t\in T )=1,</math>
holds.<ref name="FrizVictoir2010page571"/><ref name="RogersWilliams2000page130"/> If two processes <math>X</math> and <math>Y</math> are modifications of each other and are almost surely continuous, then <math>X</math> and <math>Y</math> are indistinguishable.<ref name="JeanblancYor2009page11">Template:Cite book</ref>
Separability
Separability is a property of a stochastic process based on its index set in relation to the probability measure. The property is assumed so that functionals of stochastic processes or random fields with uncountable index sets can form random variables. For a stochastic process to be separable, in addition to other conditions, its index set must be a separable space,Template:Efn which means that the index set has a dense countable subset.<ref name="Adler2010page14"/><ref name="Ito2006page32">Template:Cite book</ref>
More precisely, a real-valued continuous-time stochastic process <math>X</math> with a probability space <math>(\Omega,{\cal F},P)</math> is separable if its index set <math>T</math> has a dense countable subset <math>U\subset T</math> and there is a set <math>\Omega_0 \subset \Omega</math> of probability zero, so <math>P(\Omega_0)=0</math>, such that for every open set <math>G\subset T</math> and every closed set <math>F\subset \textstyle R =(-\infty,\infty) </math>, the two events <math>\{ X_t \in F \text{ for all } t \in G\cap U\}</math> and <math>\{ X_t \in F \text{ for all } t \in G\}</math> differ from each other at most on a subset of <math>\Omega_0</math>.<ref name="GikhmanSkorokhod1969page150">Template:Cite book</ref><ref name="Todorovic2012page19">Template:Cite book</ref><ref name="Molchanov2005page340">Template:Cite book</ref> The definition of separabilityTemplate:Efn can also be stated for other index sets and state spaces,<ref name="GusakKukush2010page22">Template:Harvtxt, p. 22</ref> such as in the case of random fields, where the index set as well as the state space can be <math>n</math>-dimensional Euclidean space.<ref name="AdlerTaylor2009page7"/><ref name="Adler2010page14"/>
The concept of separability of a stochastic process was introduced by Joseph Doob.<ref name="Ito2006page32"/> The underlying idea of separability is to make a countable set of points of the index set determine the properties of the stochastic process.<ref name="Billingsley2008page526"/> Any stochastic process with a countable index set already meets the separability conditions, so discrete-time stochastic processes are always separable.<ref name="Doob1990page56">Template:Cite book</ref> A theorem by Doob, sometimes known as Doob's separability theorem, says that any real-valued continuous-time stochastic process has a separable modification.<ref name="Ito2006page32"/><ref name="Todorovic2012page19"/><ref name="Khoshnevisan2006page155">Template:Cite book</ref> Versions of this theorem also exist for more general stochastic processes with index sets and state spaces other than the real line.<ref name="Skorokhod2005page93"/>
Independence
Two stochastic processes <math>X</math> and <math>Y</math> defined on the same probability space <math>(\Omega,\mathcal{F},P)</math> with the same index set <math>T</math> are said to be independent if for all <math>n \in \mathbb{N}</math> and for every choice of epochs <math>t_1,\ldots,t_n \in T</math>, the random vectors <math>\left( X(t_1),\ldots,X(t_n) \right)</math> and <math>\left( Y(t_1),\ldots,Y(t_n) \right)</math> are independent.<ref name=Lapidoth>Lapidoth, Amos, A Foundation in Digital Communication, Cambridge University Press, 2009.</ref>Template:Rp
Uncorrelatedness
Two stochastic processes <math>\left\{X_t\right\}</math> and <math>\left\{Y_t\right\}</math> are called uncorrelated if their cross-covariance <math>\operatorname{K}_{\mathbf{X}\mathbf{Y}}(t_1,t_2) = \operatorname{E} \left[ \left( X(t_1)- \mu_X(t_1) \right) \left( Y(t_2)- \mu_Y(t_2) \right) \right]</math> is zero for all times.<ref name=KunIlPark>Kun Il Park, Fundamentals of Probability and Stochastic Processes with Applications to Communications, Springer, 2018, 978-3-319-68074-3</ref>Template:Rp Formally:
- <math>\left\{X_t\right\},\left\{Y_t\right\} \text{ uncorrelated} \quad \iff \quad \operatorname{K}_{\mathbf{X}\mathbf{Y}}(t_1,t_2) = 0 \quad \forall t_1,t_2</math>.
Independence implies uncorrelatedness
If two stochastic processes <math>X</math> and <math>Y</math> are independent, then they are also uncorrelated.<ref name=KunIlPark/>Template:Rp
Orthogonality
Two stochastic processes <math>\left\{X_t\right\}</math> and <math>\left\{Y_t\right\}</math> are called orthogonal if their cross-correlation <math>\operatorname{R}_{\mathbf{X}\mathbf{Y}}(t_1,t_2) = \operatorname{E}[X(t_1) \overline{Y(t_2)}]</math> is zero for all times.<ref name=KunIlPark/>Template:Rp Formally:
- <math>\left\{X_t\right\},\left\{Y_t\right\} \text{ orthogonal} \quad \iff \quad \operatorname{R}_{\mathbf{X}\mathbf{Y}}(t_1,t_2) = 0 \quad \forall t_1,t_2</math>.
Skorokhod space
A Skorokhod space, also written as Skorohod space, is a mathematical space of all the functions that are right-continuous with left limits, defined on some interval of the real line such as <math>[0,1]</math> or <math>[0,\infty)</math>, and take values on the real line or on some metric space.<ref name="Whitt2006page78">Template:Cite book</ref><ref name="GusakKukush2010page24">Template:Harvtxt, p. 24</ref><ref name="Bogachev2007Vol2page53">Template:Cite book</ref> Such functions are known as càdlàg or cadlag functions, based on the acronym of the French phrase continue à droite, limite à gauche.<ref name="Whitt2006page78"/><ref name="Klebaner2005page4">Template:Cite book</ref> A Skorokhod function space, introduced by Anatoliy Skorokhod,<ref name="Bogachev2007Vol2page53"/> is often denoted with the letter <math>D</math>,<ref name="Whitt2006page78"/><ref name="GusakKukush2010page24"/><ref name="Bogachev2007Vol2page53"/><ref name="Klebaner2005page4"/> so the function space is also referred to as space <math>D</math>.<ref name="Whitt2006page78"/><ref name="Asmussen2003page420">Template:Cite book</ref><ref name="Billingsley2013page121">Template:Cite book</ref> The notation of this function space can also include the interval on which all the càdlàg functions are defined, so, for example, <math>D[0,1]</math> denotes the space of càdlàg functions defined on the unit interval <math>[0,1]</math>.<ref name="Klebaner2005page4"/><ref name="Billingsley2013page121"/><ref name="Bass2011page34">Template:Cite book</ref>
Skorokhod function spaces are frequently used in the theory of stochastic processes because it is often assumed that the sample functions of continuous-time stochastic processes belong to a Skorokhod space.<ref name="Bogachev2007Vol2page53"/><ref name="Asmussen2003page420"/> Such spaces contain continuous functions, which correspond to sample functions of the Wiener process. But the space also has functions with discontinuities, which means that the sample functions of stochastic processes with jumps, such as the Poisson process (on the real line), are also members of this space.<ref name="Billingsley2013page121"/><ref name="BinghamKiesel2013page154">Template:Cite book</ref>
Regularity
In the context of mathematical construction of stochastic processes, the term regularity is used when discussing and assuming certain conditions for a stochastic process to resolve possible construction issues.<ref name="Borovkov2013page532">Template:Cite book</ref><ref name="Khoshnevisan2006page148to165">Template:Cite book</ref> For example, to study stochastic processes with uncountable index sets, it is assumed that the stochastic process adheres to some type of regularity condition such as the sample functions being continuous.<ref name="Todorovic2012page22">Template:Cite book</ref><ref name="Whitt2006page79">Template:Cite book</ref>
Further examples
Markov processes and chains
Markov processes are stochastic processes, traditionally in discrete or continuous time, that have the Markov property, which means the next value of the Markov process depends on the current value, but it is conditionally independent of the previous values of the stochastic process. In other words, the behavior of the process in the future is stochastically independent of its behavior in the past, given the current state of the process.<ref name="Serfozo2009page2">Template:Cite book</ref><ref name="Rozanov2012page58">Template:Cite book</ref>
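For a discrete-time process with a countable state space, for example, the Markov property can be written as
<math display="block">P(X_{n+1}=x \mid X_0=x_0, X_1=x_1, \dots, X_n=x_n) = P(X_{n+1}=x \mid X_n=x_n),</math>
whenever the conditioning event has positive probability.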
The Brownian motion process and the Poisson process (in one dimension) are both examples of Markov processes<ref name="Ross1996page235and358">Template:Cite book</ref> in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time.<ref name="Florescu2014page373">Template:Cite book</ref><ref name="KarlinTaylor2012page49">Template:Cite book</ref>
A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies.<ref name="Asmussen2003page7">Template:Cite book</ref> For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time),<ref name="Parzen1999page188">Template:Cite book</ref><ref name="KarlinTaylor2012page29">Template:Cite book</ref><ref name="Lamperti1977chap6">Template:Cite book</ref><ref name="Ross1996page174and231">Template:Cite book</ref> but it has been also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).<ref name="Asmussen2003page7" /> It has been argued that the first definition of a Markov chain, where it has discrete time, now tends to be used, despite the second definition having been used by researchers like Joseph Doob and Kai Lai Chung.<ref name="MeynTweedie2009">Template:Cite book</ref>
Markov processes form an important class of stochastic processes and have applications in many areas.<ref name="LatoucheRamaswami1999"/><ref name="KarlinTaylor2012page47">Template:Cite book</ref> For example, they are the basis for a general stochastic simulation method known as Markov chain Monte Carlo, which is used for simulating random objects with specific probability distributions, and has found application in Bayesian statistics.<ref name="RubinsteinKroese2011page225">Template:Cite book</ref><ref name="GamermanLopes2006">Template:Cite book</ref>
The concept of the Markov property was originally for stochastic processes in continuous and discrete time, but the property has been adapted for other index sets such as <math>n</math>-dimensional Euclidean space, which results in collections of random variables known as Markov random fields.<ref name="Rozanov2012page61">Template:Cite book</ref><ref>Template:Cite book</ref><ref name="Bremaud2013page253">Template:Cite book</ref>
Martingale
A martingale is a discrete-time or continuous-time stochastic process with the property that, at every instant, given the current value and all the past values of the process, the conditional expectation of every future value is equal to the current value. In discrete time, if this property holds for the next value, then it holds for all future values. The exact mathematical definition of a martingale requires two other conditions coupled with the mathematical concept of a filtration, which is related to the intuition of increasing available information as time passes. Martingales are usually defined to be real-valued,<ref name="Klebaner2005page65">Template:Cite book</ref><ref name="KaratzasShreve2014page11">Template:Cite book</ref><ref name="Williams1991page93">Template:Cite book</ref> but they can also be complex-valued<ref name="Doob1990page292">Template:Cite book</ref> or even more general.<ref name="Pisier2016">Template:Cite book</ref>
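In discrete time, with a filtration <math>\{\mathcal{F}_n\}</math>, this defining property can be written as
<math display="block">\operatorname{E}\left[ X_{n+1} \mid \mathcal{F}_n \right] = X_n \qquad \text{for all } n,</math>
and the two other conditions mentioned above are that each <math>X_n</math> is integrable, <math>\operatorname{E}|X_n|<\infty</math>, and that each <math>X_n</math> is <math>\mathcal{F}_n</math>-measurable, that is, the process is adapted to the filtration.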
A symmetric random walk and a Wiener process (with zero drift) are both examples of martingales, respectively, in discrete and continuous time.<ref name="Klebaner2005page65"/><ref name="KaratzasShreve2014page11"/> For a sequence of independent and identically distributed random variables <math>X_1, X_2, X_3, \dots</math> with zero mean, the stochastic process formed from the successive partial sums <math>X_1,X_1+ X_2, X_1+ X_2+X_3, \dots</math> is a discrete-time martingale.<ref name="Steele2012page12">Template:Cite book</ref> In this aspect, discrete-time martingales generalize the idea of partial sums of independent random variables.<ref name="HallHeyde2014page2">Template:Cite book</ref>
Martingales can also be created from stochastic processes by applying some suitable transformations, which is the case for the homogeneous Poisson process (on the real line) resulting in a martingale called the compensated Poisson process.<ref name="KaratzasShreve2014page11"/> Martingales can also be built from other martingales.<ref name="Steele2012page12"/> For example, there are martingales based on the Wiener process, itself a martingale, forming continuous-time martingales.<ref name="Klebaner2005page65"/><ref name="Steele2012page115">Template:Cite book</ref>
Martingales mathematically formalize the idea of a 'fair game' where it is possible to form reasonable expectations for payoffs,<ref name="Ross1996page295">Template:Cite book</ref> and they were originally developed to show that it is not possible to gain an 'unfair' advantage in such a game.<ref name="Steele2012page11"/> But now they are used in many areas of probability, which is one of the main reasons for studying them.<ref name="Williams1991page93"/><ref name="Steele2012page11">Template:Cite book</ref><ref name="Kallenberg2002page96">Template:Cite book</ref> Many problems in probability have been solved by finding a martingale in the problem and studying it.<ref name="Steele2012page371">Template:Cite book</ref> Martingales will converge, given some conditions on their moments, so they are often used to derive convergence results, due largely to martingale convergence theorems.<ref name="HallHeyde2014page2"/><ref name="Steele2012page22">Template:Cite book</ref><ref name="GrimmettStirzaker2001page336">Template:Cite book</ref>
Martingales have many applications in statistics, but it has been remarked that their use and application are not as widespread as they could be in the field of statistics, particularly statistical inference.<ref name="GlassermanKou2006">Template:Cite journal</ref> They have found applications in areas in probability theory such as queueing theory and Palm calculus<ref name="BaccelliBremaud2013">Template:Cite book</ref> and other fields such as economics<ref name="HallHeyde2014pageX">Template:Cite book</ref> and finance.<ref name="MusielaRutkowski2006"/>
Lévy process
Lévy processes are types of stochastic processes that can be considered as generalizations of random walks in continuous time.<ref name="Applebaum2004page1337"/><ref name="Bertoin1998pageVIII">Template:Cite book</ref> These processes have many applications in fields such as finance, fluid mechanics, physics and biology.<ref name="Applebaum2004page1336">Template:Cite journal</ref><ref name="ApplebaumBook2004page69">Template:Cite book</ref> The main defining characteristics of these processes are their stationarity and independence properties, so they were known as processes with stationary and independent increments. In other words, a stochastic process <math>X</math> is a Lévy process if for <math>n</math> non-negative numbers, <math>0\leq t_1\leq \dots \leq t_n</math>, the corresponding <math>n-1</math> increments
<math display="block">X_{t_2}-X_{t_1}, \dots , X_{t_n}-X_{t_{n-1}},</math>
are all independent of each other, and the distribution of each increment only depends on the difference in time.<ref name="Applebaum2004page1337"/>
A Lévy process can be defined such that its state space is some abstract mathematical space, such as a Banach space, but the processes are often defined so that they take values in Euclidean space. The index set is the non-negative numbers, so <math> I= [0,\infty) </math>, which gives the interpretation of time. Important stochastic processes such as the Wiener process, the homogeneous Poisson process (in one dimension), and subordinators are all Lévy processes.<ref name="Applebaum2004page1337"/><ref name="Bertoin1998pageVIII"/>
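The following Python sketch (illustrative only, with arbitrary parameter values) builds discretised paths of two canonical Lévy processes, a Wiener process and a homogeneous Poisson process, directly from stationary, independent increments on a uniform time grid.
<syntaxhighlight lang="python">
import numpy as np

# Sketch: discretised paths of two canonical Levy processes built from stationary,
# independent increments on a uniform time grid; parameter values are illustrative.
rng = np.random.default_rng(2)
T, n_steps = 1.0, 1_000
dt = T / n_steps

wiener_increments = rng.normal(0.0, np.sqrt(dt), n_steps)   # N(0, dt) increments
poisson_increments = rng.poisson(5.0 * dt, n_steps)         # increments of a rate-5 Poisson process

wiener_path = np.concatenate(([0.0], np.cumsum(wiener_increments)))
poisson_path = np.concatenate(([0], np.cumsum(poisson_increments)))

print("Wiener process value at time T:", round(float(wiener_path[-1]), 3))
print("Poisson process count at time T:", int(poisson_path[-1]))
</syntaxhighlight>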
Random field
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}} A random field is a collection of random variables indexed by an <math>n</math>-dimensional Euclidean space or some manifold. In general, a random field can be considered an example of a stochastic or random process, where the index set is not necessarily a subset of the real line.<ref name="AdlerTaylor2009page7"/> But there is a convention that an indexed collection of random variables is called a random field when the index has two or more dimensions.<ref name="GikhmanSkorokhod1969page1"/><ref name="Lamperti1977page1"/><ref name="KoralovSinai2007page171">Template:Cite book</ref> If the specific definition of a stochastic process requires the index set to be a subset of the real line, then the random field can be considered as a generalization of a stochastic process.<ref name="ApplebaumBook2004page19">Template:Cite book</ref>
Point process
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}} A point process is a collection of points randomly located on some mathematical space such as the real line, <math>n</math>-dimensional Euclidean space, or more abstract spaces. Sometimes the term point process is not preferred, as historically the word process denoted an evolution of some system in time, so a point process is also called a random point field.<ref name="ChiuStoyan2013page109">Template:Cite book</ref> There are different interpretations of a point process, such as a random counting measure or a random set.<ref name="ChiuStoyan2013page108">Template:Cite book</ref><ref name="Haenggi2013page10">Template:Cite book</ref> Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process,<ref name="DaleyVere-Jones2006page194">Template:Cite book</ref><ref name="CoxIsham1980page3">Template:Cite book</ref> though it has been remarked that the difference between point processes and stochastic processes is not clear.<ref name="CoxIsham1980page3"/>
Other authors consider a point process as a stochastic process, where the process is indexed by sets of the underlying spaceTemplate:Efn on which it is defined, such as the real line or <math>n</math>-dimensional Euclidean space.<ref name="KarlinTaylor2012page31">Template:Cite book</ref><ref name="Schmidt2014page99">Template:Cite book</ref> Other stochastic processes such as renewal and counting processes are studied in the theory of point processes.<ref name="DaleyVere-Jones200">Template:Cite book</ref><ref name="CoxIsham1980page3" />
History
Early probability theory
Probability theory has its origins in games of chance, which have a long history, with some games being played thousands of years ago,<ref name="David1955">Template:Cite journal</ref> but very little analysis on them was done in terms of probability.<ref name="Maistrov2014page1">Template:Cite book</ref> The year 1654 is often considered the birth of probability theory when French mathematicians Pierre Fermat and Blaise Pascal had a written correspondence on probability, motivated by a gambling problem.<ref name="Seneta2006page1">Template:Cite book</ref><ref name="Tabak2014page24to26">Template:Cite book</ref> But there was earlier mathematical work done on the probability of gambling games such as Liber de Ludo Aleae by Gerolamo Cardano, written in the 16th century but posthumously published later in 1663.<ref name="Bellhouse2005">Template:Cite journal</ref>
After Cardano, Jakob BernoulliTemplate:Efn wrote Ars Conjectandi, which is considered a significant event in the history of probability theory. Bernoulli's book was published, also posthumously, in 1713 and inspired many mathematicians to study probability.<ref name="Maistrov2014page56">Template:Cite book</ref><ref name="Tabak2014page37">Template:Cite book</ref> But despite some renowned mathematicians contributing to probability theory, such as Pierre-Simon Laplace, Abraham de Moivre, Carl Gauss, Siméon Poisson and Pafnuty Chebyshev,<ref name="Chung1998">Template:Cite journal</ref><ref name="Bingham2000">Template:Cite journal</ref> most of the mathematical communityTemplate:Efn did not consider probability theory to be part of mathematics until the 20th century.<ref name="Chung1998"/><ref name="BenziBenzi2007"/><ref name="Doob1996">Template:Cite journal</ref><ref name="Cramer1976">Template:Cite journal</ref>
Statistical mechanics
In the physical sciences, scientists developed in the 19th century the discipline of statistical mechanics, where physical systems, such as containers filled with gases, are regarded or treated mathematically as collections of many moving particles. Although there were attempts to incorporate randomness into statistical physics by some scientists, such as Rudolf Clausius, most of the work had little or no randomness.<ref name="Truesdell1975page22">Template:Cite journal</ref><ref name="Brush1967page150">Template:Cite journal</ref> This changed in 1859 when James Clerk Maxwell contributed significantly to the field, more specifically, to the kinetic theory of gases, by presenting work where he modelled the gas particles as moving in random directions at random velocities.<ref name="Truesdell1975page31">Template:Cite journal</ref><ref name="Brush1958">Template:Cite journal</ref> The kinetic theory of gases and statistical physics continued to be developed in the second half of the 19th century, with work done chiefly by Clausius, Ludwig Boltzmann and Josiah Gibbs, which would later have an influence on Albert Einstein's mathematical model for Brownian movement.<ref name="Brush1968page15">Template:Cite journal</ref>
Measure theory and probability theory
At the International Congress of Mathematicians in Paris in 1900, David Hilbert presented a list of mathematical problems, where his sixth problem asked for a mathematical treatment of physics and probability involving axioms.<ref name="Bingham2000"/> Around the start of the 20th century, mathematicians developed measure theory, a branch of mathematics for studying integrals of mathematical functions, whose founders included the French mathematicians Henri Lebesgue and Émile Borel. In 1925, another French mathematician, Paul Lévy, published the first probability book that used ideas from measure theory.<ref name="Bingham2000"/>
In the 1920s, fundamental contributions to probability theory were made in the Soviet Union by mathematicians such as Sergei Bernstein, Aleksandr Khinchin,Template:Efn and Andrei Kolmogorov.<ref name="Cramer1976"/> Kolmogorov published in 1929 his first attempt at presenting a mathematical foundation, based on measure theory, for probability theory.<ref name="KendallBatchelor1990page33">Template:Cite journal</ref> In the early 1930s, Khinchin and Kolmogorov set up probability seminars, which were attended by researchers such as Eugene Slutsky and Nikolai Smirnov,<ref name="Vere-Jones2006page1">Template:Cite book</ref> and Khinchin gave the first mathematical definition of a stochastic process as a set of random variables indexed by the real line.<ref name="Doob1934"/><ref name="Vere-Jones2006page4">Template:Cite book</ref>Template:Efn
Birth of modern probability theory
In 1933, Andrei Kolmogorov published in German his book on the foundations of probability theory, titled Grundbegriffe der Wahrscheinlichkeitsrechnung,Template:Efn where Kolmogorov used measure theory to develop an axiomatic framework for probability theory. The publication of this book is now widely considered to be the birth of modern probability theory, when the theories of probability and stochastic processes became parts of mathematics.<ref name="Bingham2000"/><ref name="Cramer1976"/>
After the publication of Kolmogorov's book, further fundamental work on probability theory and stochastic processes was done by Khinchin and Kolmogorov as well as other mathematicians such as Joseph Doob, William Feller, Maurice Fréchet, Paul Lévy, Wolfgang Doeblin, and Harald Cramér.<ref name="Bingham2000"/><ref name="Cramer1976"/> Decades later, Cramér referred to the 1930s as the "heroic period of mathematical probability theory".<ref name="Cramer1976"/> World War II greatly interrupted the development of probability theory, causing, for example, the migration of Feller from Sweden to the United States of America<ref name="Cramer1976"/> and the death of Doeblin, considered now a pioneer in stochastic processes.<ref name="Lindvall1991">Template:Cite journal</ref>
Stochastic processes after World War II
After World War II, the study of probability theory and stochastic processes gained more attention from mathematicians, with significant contributions made in many areas of probability and mathematics as well as the creation of new areas.<ref name="Cramer1976"/><ref name="Meyer2009">Template:Cite journal</ref> Starting in the 1940s, Kiyosi Itô published papers developing the field of stochastic calculus, which involves stochastic integrals and stochastic differential equations based on the Wiener or Brownian motion process.<ref name="Ito1998Prize">Template:Cite journal</ref>
Also starting in the 1940s, connections were made between stochastic processes, particularly martingales, and the mathematical field of potential theory, with early ideas by Shizuo Kakutani and then later work by Joseph Doob.<ref name="Meyer2009"/> Further work, considered pioneering, was done by Gilbert Hunt in the 1950s, connecting Markov processes and potential theory, which had a significant effect on the theory of Lévy processes and led to more interest in studying Markov processes with methods developed by Itô.<ref name="JarrowProtter2004"/><ref name="Bertoin1998pageVIIIandIX">Template:Cite book</ref><ref name="Steele2012page176">Template:Cite book</ref>
In 1953, Doob published his book Stochastic processes, which had a strong influence on the theory of stochastic processes and stressed the importance of measure theory in probability.<ref name="Meyer2009"/> <ref name="Bingham2005">Template:Cite journal</ref> Doob also chiefly developed the theory of martingales, with later substantial contributions by Paul-André Meyer. Earlier work had been carried out by Sergei Bernstein, Paul Lévy and Jean Ville, the latter adopting the term martingale for the stochastic process.<ref name="HallHeyde2014page1">Template:Cite book</ref><ref name="Dynkin1989">Template:Cite journal</ref> Methods from the theory of martingales became popular for solving various probability problems. Techniques and theory were developed to study Markov processes and then applied to martingales. Conversely, methods from the theory of martingales were established to treat Markov processes.<ref name="Meyer2009"/>
Other fields of probability were developed and used to study stochastic processes, with one main approach being the theory of large deviations.<ref name="Meyer2009"/> The theory has many applications in statistical physics, among other fields, and has core ideas going back to at least the 1930s. Later in the 1960s and 1970s, fundamental work was done by Alexander Wentzell in the Soviet Union and Monroe D. Donsker and Srinivasa Varadhan in the United States of America,<ref name="Ellis1995page98">Template:Cite journal</ref> which would later result in Varadhan winning the 2007 Abel Prize.<ref name="RaussenSkau2008">Template:Cite journal</ref> In the 1990s and 2000s the theories of Schramm–Loewner evolution<ref name="HenkelKarevski2012page113">Template:Cite book</ref> and rough paths<ref name="FrizVictoir2010page571">Template:Cite book</ref> were introduced and developed to study stochastic processes and other mathematical objects in probability theory, which respectively resulted in Fields Medals being awarded to Wendelin Werner<ref name="Werner2004Fields">Template:Cite journal</ref> in 2008 and to Martin Hairer in 2014.<ref name="Hairer2004Fields">Template:Cite journal</ref>
The theory of stochastic processes continues to be a focus of research, with yearly international conferences on the topic.<ref name="BlathImkeller2011"/><ref name="Applebaum2004page1336"/>
Discoveries of specific stochastic processes
Although Khinchin gave mathematical definitions of stochastic processes in the 1930s,<ref name="Doob1934"/><ref name="Vere-Jones2006page4"/> specific stochastic processes had already been discovered in different settings, such as the Brownian motion process and the Poisson process.<ref name="JarrowProtter2004"/><ref name="GuttorpThorarinsdottir2012"/> Some families of stochastic processes such as point processes or renewal processes have long and complex histories, stretching back centuries.<ref name="DaleyVere-Jones2006chap1">Template:Cite book</ref>
Bernoulli process
The Bernoulli process, which can serve as a mathematical model for flipping a biased coin, is possibly the first stochastic process to have been studied.<ref name="Florescu2014page301"/> The process is a sequence of independent Bernoulli trials,<ref name="BertsekasTsitsiklis2002page273"/> which are named after Jacob Bernoulli, who used them to study games of chance, including probability problems proposed and studied earlier by Christiaan Huygens.<ref name="Hald2005page226">Template:Cite book</ref> Bernoulli's work, including the Bernoulli process, was published in his book Ars Conjectandi in 1713.<ref name="Lebowitz1984">Template:Cite book</ref>
Random walks
In 1905, Karl Pearson coined the term random walk while posing a problem describing a random walk on the plane, which was motivated by an application in biology, but such problems involving random walks had already been studied in other fields. Certain gambling problems that were studied centuries earlier can be considered as problems involving random walks.<ref name="Weiss2006page1"/><ref name="Lebowitz1984"/> For example, the problem known as the Gambler's ruin is based on a simple random walk,<ref name="KarlinTaylor2012page49"/><ref name="Florescu2014page374">Template:Cite book</ref> and is an example of a random walk with absorbing barriers.<ref name="Seneta2006page1"/><ref name="Ibe2013page5">Template:Cite book</ref> Pascal, Fermat and Huygens all gave numerical solutions to this problem without detailing their methods,<ref name="Hald2005page63">Template:Cite book</ref> and then more detailed solutions were presented by Jakob Bernoulli and Abraham de Moivre.<ref name="Hald2005page202">Template:Cite book</ref>
For random walks in <math>n</math>-dimensional integer lattices, George Pólya published, in 1919 and 1921, work where he studied the probability of a symmetric random walk returning to a previous position in the lattice. Pólya showed that a symmetric random walk, which has an equal probability to advance in any direction in the lattice, will return to a previous position in the lattice an infinite number of times with probability one in one and two dimensions, but with probability zero in three or higher dimensions.<ref name="Florescu2014page385">Template:Cite book</ref><ref name="Hughes1995page111">Template:Cite book</ref>
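A rough Monte Carlo illustration of this contrast can be obtained by estimating, over a finite horizon, how often a symmetric lattice walk returns to the origin; the sketch below only approximates the limiting statement, and the walk counts and step limits are arbitrary choices.
<syntaxhighlight lang="python">
import numpy as np

# Rough Monte Carlo sketch of Polya's result: estimate how often a symmetric lattice
# random walk returns to the origin within a finite horizon (the theorem itself
# concerns an infinite horizon, so this only indicates the trend).
def estimated_return_probability(dim, n_walks=2_000, n_steps=1_000, seed=0):
    rng = np.random.default_rng(seed)
    returned = 0
    for _ in range(n_walks):
        position = np.zeros(dim, dtype=int)
        for _ in range(n_steps):
            axis = rng.integers(dim)
            position[axis] += rng.choice((-1, 1))
            if not position.any():          # back at the origin
                returned += 1
                break
    return returned / n_walks

for d in (1, 2, 3):
    print(d, "dimension(s): estimated return probability", estimated_return_probability(d))
</syntaxhighlight>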
Wiener process
The Wiener process or Brownian motion process has its origins in different fields including statistics, finance and physics.<ref name="JarrowProtter2004"/> In 1880, Danish astronomer Thorvald Thiele wrote a paper on the method of least squares, where he used the process to study the errors of a model in time-series analysis.<ref name="Thiele1880">Template:Cite journal</ref><ref name="Hald1981page1and18">Template:Cite journal</ref><ref name="Lauritzen1981page319">Template:Cite journal</ref> The work is now considered as an early discovery of the statistical method known as Kalman filtering, but the work was largely overlooked. It is thought that the ideas in Thiele's paper were too advanced to have been understood by the broader mathematical and statistical community at the time.<ref name="Lauritzen1981page319"/>
The French mathematician Louis Bachelier used a Wiener process in his 1900 thesis<ref name=Bachelier1900a>Template:Cite journal</ref><ref name=Bachelier1900b>Template:Cite journal</ref> in order to model price changes on the Paris Bourse, a stock exchange,<ref name="CourtaultKabanov2000">Template:Cite journal</ref> without knowing the work of Thiele.<ref name="JarrowProtter2004"/> It has been speculated that Bachelier drew ideas from the random walk model of Jules Regnault, but Bachelier did not cite him,<ref name="Jovanovic2012">Template:Cite journal</ref> and Bachelier's thesis is now considered pioneering in the field of financial mathematics.<ref name="CourtaultKabanov2000"/><ref name="Jovanovic2012"/>
It is commonly thought that Bachelier's work gained little attention and was forgotten for decades until it was rediscovered in the 1950s by Leonard Savage, and then became more popular after Bachelier's thesis was translated into English in 1964. But the work was never forgotten in the mathematical community, as Bachelier published a book in 1912 detailing his ideas,<ref name="Jovanovic2012"/> which was cited by mathematicians including Doob, Feller<ref name="Jovanovic2012"/> and Kolmogorov.<ref name="JarrowProtter2004"/> The book continued to be cited, but then starting in the 1960s, the original thesis by Bachelier began to be cited more than his book when economists started citing Bachelier's work.<ref name="Jovanovic2012"/>
In 1905, Albert Einstein published a paper where he studied the physical observation of Brownian motion or movement to explain the seemingly random movements of particles in liquids by using ideas from the kinetic theory of gases. Einstein derived a differential equation, known as a diffusion equation, for describing the probability of finding a particle in a certain region of space. Shortly after Einstein's first paper on Brownian movement, Marian Smoluchowski published work where he cited Einstein, but wrote that he had independently derived the equivalent results by using a different method.<ref name="Brush1968page25">Template:Cite journal</ref>
Einstein's work, as well as experimental results obtained by Jean Perrin, later inspired Norbert Wiener in the 1920s<ref name="Brush1968page30">Template:Cite journal</ref> to use a type of measure theory, developed by Percy Daniell, and Fourier analysis to prove the existence of the Wiener process as a mathematical object.<ref name="JarrowProtter2004"/>
Poisson process
The Poisson process is named after Siméon Poisson, due to its definition involving the Poisson distribution, but Poisson never studied the process.<ref name="Stirzaker2000"/><ref name="DaleyVere-Jones2006page8">Template:Cite book</ref> There are a number of claims for early uses or discoveries of the Poisson process.<ref name="Stirzaker2000"/><ref name="GuttorpThorarinsdottir2012"/> At the beginning of the 20th century, the Poisson process would arise independently in different situations.<ref name="Stirzaker2000"/><ref name="GuttorpThorarinsdottir2012"/> In Sweden in 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, where he proposed to model insurance claims with a homogeneous Poisson process.<ref name="EmbrechtsFrey2001page367">Template:Cite book</ref><ref name="Cramér1969">Template:Cite journal</ref>
Another discovery occurred in Denmark in 1909 when A.K. Erlang derived the Poisson distribution while developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was not at the time aware of Poisson's earlier work and assumed that the numbers of phone calls arriving in different intervals of time were independent of each other. He then found the limiting case, which effectively recasts the Poisson distribution as a limit of the binomial distribution.<ref name="Stirzaker2000"/>
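The binomial-to-Poisson limit that Erlang relied on can be illustrated numerically; in the sketch below the rate and the count are placeholder values chosen only for the example.
<syntaxhighlight lang="python">
from math import comb, exp, factorial

# Numerical sketch of the binomial-to-Poisson limit: Binomial(n, lam/n) probabilities
# approach Poisson(lam) probabilities as n grows; lam and k are placeholder values.
lam, k = 3.0, 4          # e.g. 3 expected calls per interval, probability of exactly 4 calls
poisson = exp(-lam) * lam**k / factorial(k)

for n in (10, 100, 1_000, 10_000):
    p = lam / n
    binomial = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"n={n:>6}: binomial={binomial:.6f}   poisson={poisson:.6f}")
</syntaxhighlight>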
In 1910, Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Motivated by their work, Harry Bateman studied the counting problem and derived Poisson probabilities as a solution to a family of differential equations, resulting in the independent discovery of the Poisson process.<ref name="Stirzaker2000"/> After this time there were many studies and applications of the Poisson process, but its early history is complicated, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and various physical scientists.<ref name="Stirzaker2000"/>
Markov processes
Markov processes and Markov chains are named after Andrey Markov, who studied Markov chains in the early 20th century. Markov was interested in studying an extension of independent random sequences. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, thus proving a weak law of large numbers without the independence assumption,<ref name="GrinsteadSnell1997page464">Template:Cite book</ref><ref name="Bremaud2013pageIX">Template:Cite book</ref><ref name="Hayes2013">Template:Cite journal</ref> which had been commonly regarded as a requirement for such mathematical laws to hold.<ref name="Hayes2013"/> Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.
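This convergence of average outcomes can be seen numerically for a small chain; the two-state transition matrix below is hypothetical, chosen only to show the rows of successive matrix powers approaching a common stationary vector.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical two-state Markov chain: rows of successive powers of the transition
# matrix converge to a common fixed vector, the stationary distribution.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

for n in (1, 5, 20, 100):
    print(f"P^{n} =\n{np.linalg.matrix_power(P, n).round(4)}\n")

# For this P the stationary distribution pi, satisfying pi P = pi, is (0.8, 0.2):
# every row of P^n approaches that vector regardless of the starting state.
</syntaxhighlight>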
In 1912, Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov.<ref name="GrinsteadSnell1997page464"/><ref name="Bremaud2013pageIX"/> After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé.<ref name="Seneta1998">Template:Cite journal</ref> Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.<ref name="GrinsteadSnell1997page464"/><ref name="BruHertz2001">Template:Cite book</ref>
Andrei Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes.<ref name="Cramer1976"/><ref name="KendallBatchelor1990page33"/> Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement.<ref name="KendallBatchelor1990page33"/><ref name="BarbutLocker2016page5">Template:Cite book</ref> He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.<ref name="KendallBatchelor1990page33"/><ref name="Skorokhod2005page146">Template:Cite book</ref> Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.<ref name="Bernstein2005">Template:Cite journal</ref> The differential equations are now called the Kolmogorov equations<ref name="Anderson2012pageVII">Template:Cite book</ref> or the Kolmogorov–Chapman equations.<ref name="KendallBatchelor1990page57">Template:Cite journal</ref> Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.<ref name="Cramer1976"/>
Lévy processes
Lévy processes such as the Wiener process and the Poisson process (on the real line) are named after Paul Lévy who started studying them in the 1930s,<ref name="Applebaum2004page1336"/> but they have connections to infinitely divisible distributions going back to the 1920s.<ref name="Bertoin1998pageVIII"/> In a 1932 paper, Kolmogorov derived a characteristic function for random variables associated with Lévy processes. This result was later derived under more general conditions by Lévy in 1934, and then Khinchin independently gave an alternative form for this characteristic function in 1937.<ref name="Cramer1976"/><ref name="ApplebaumBook2004page67">Template:Cite book</ref> In addition to Lévy, Khinchin and Kolmogorov, early fundamental contributions to the theory of Lévy processes were made by Bruno de Finetti and Kiyosi Itô.<ref name="Bertoin1998pageVIII"/>
Mathematical construction
In mathematics, constructions of mathematical objects are needed to prove that they exist mathematically, which is also the case for stochastic processes.<ref name="Rosenthal2006page177"/> There are two main approaches for constructing a stochastic process. One approach involves considering a measurable space of functions, defining a suitable measurable mapping from a probability space to this measurable space of functions, and then deriving the corresponding finite-dimensional distributions.<ref name="Adler2010page13">Template:Cite book</ref>
Another approach involves defining a collection of random variables to have specific finite-dimensional distributions, and then using Kolmogorov's existence theoremTemplate:Efn to prove a corresponding stochastic process exists.<ref name="Rosenthal2006page177"/><ref name="Adler2010page13"/> This theorem, which is an existence theorem for measures on infinite product spaces,<ref name="Durrett2010page410">Template:Cite book</ref> says that if any finite-dimensional distributions satisfy two conditions, known as consistency conditions, then there exists a stochastic process with those finite-dimensional distributions.<ref name="Rosenthal2006page177"/>
Construction issues
When constructing continuous-time stochastic processes, certain mathematical difficulties arise due to the uncountable index sets; these difficulties do not occur with discrete-time processes.<ref name="KloedenPlaten2013page63"/><ref name="Khoshnevisan2006page153"/> One problem is that it is possible to have more than one stochastic process with the same finite-dimensional distributions. For example, both the left-continuous modification and the right-continuous modification of a Poisson process have the same finite-dimensional distributions.<ref name="Billingsley2008page493to494">Template:Cite book</ref> This means that the distribution of the stochastic process does not, necessarily, specify uniquely the properties of the sample functions of the stochastic process.<ref name="Adler2010page13"/><ref name="Borovkov2013page529">Template:Cite book</ref>
Another problem is that functionals of a continuous-time process that rely upon an uncountable number of points of the index set may not be measurable, so the probabilities of certain events may not be well-defined.<ref name="Ito2006page32"/> For example, the supremum of a stochastic process or random field is not necessarily a well-defined random variable.<ref name="AdlerTaylor2009page7"/><ref name="Khoshnevisan2006page153"/> For a continuous-time stochastic process <math>X</math>, other characteristics that depend on an uncountable number of points of the index set <math>T</math> include:<ref name="Ito2006page32"/>
- a sample function of a stochastic process <math>X</math> is a continuous function of <math>t\in T</math>;
- a sample function of a stochastic process <math>X</math> is a bounded function of <math>t\in T</math>; and
- a sample function of a stochastic process <math>X</math> is an increasing function of <math>t\in T</math>.
where the symbol ∈ can be read "a member of the set", as in <math>t</math> a member of the set <math>T</math>.
To overcome the two difficulties described above, i.e., "more than one..." and "functionals of...", different assumptions and approaches are possible.<ref name="Asmussen2003page408"/>
Resolving construction issues
One approach for avoiding mathematical construction issues of stochastic processes, proposed by Joseph Doob, is to assume that the stochastic process is separable.<ref name="AthreyaLahiri2006page221">Template:Cite book</ref> Separability ensures that infinite-dimensional distributions determine the properties of sample functions by requiring that sample functions are essentially determined by their values on a dense countable set of points in the index set.<ref name="AdlerTaylor2009page14">Template:Cite book</ref> Furthermore, if a stochastic process is separable, then functionals of an uncountable number of points of the index set are measurable and their probabilities can be studied.<ref name="Ito2006page32"/><ref name="AdlerTaylor2009page14"/>
Another approach is possible, originally developed by Anatoliy Skorokhod and Andrei Kolmogorov,<ref name="AthreyaLahiri2006page211">Template:Cite book</ref> for a continuous-time stochastic process with any metric space as its state space. For the construction of such a stochastic process, it is assumed that the sample functions of the stochastic process belong to some suitable function space, which is usually the Skorokhod space consisting of all right-continuous functions with left limits. This approach is now used more often than the separability assumption,<ref name="Asmussen2003page408"/><ref name="Getoor2009">Template:Cite journal</ref> but a stochastic process constructed with this approach will be automatically separable.<ref name="Borovkov2013page536">Template:Cite book</ref>
Although less used, the separability assumption is considered more general because every stochastic process has a separable version.<ref name="Getoor2009"/> It is also used when it is not possible to construct a stochastic process in a Skorokhod space.<ref name="Borovkov2013page535"/> For example, separability is assumed when constructing and studying random fields, where the collection of random variables is now indexed by sets other than the real line such as <math>n</math>-dimensional Euclidean space.<ref name="AdlerTaylor2009page7"/><ref name="Yakir2013page5">Template:Cite book</ref>
Applications
Applications in Finance
Black-Scholes Model
One of the most famous applications of stochastic processes in finance is the Black-Scholes model for option pricing. Developed by Fischer Black, Myron Scholes, and Robert Merton, this model uses geometric Brownian motion, a specific type of stochastic process, to describe the dynamics of asset prices.<ref>Template:Cite journal</ref><ref>Template:Citation</ref> The model assumes that the price of a stock follows a continuous-time stochastic process and provides a closed-form solution for pricing European-style options. The Black-Scholes formula has had a profound impact on financial markets, forming the basis for much of modern options trading.
The key assumption of the Black-Scholes model is that the price of a financial asset, such as a stock, follows a log-normal distribution, with its continuously compounded returns following a normal distribution. Although the model has limitations, such as the assumption of constant volatility, it remains widely used due to its simplicity and practical relevance.
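The closed-form call price can be computed directly from the standard Black-Scholes formula; in the sketch below the spot price, strike, maturity, interest rate, and volatility are placeholder inputs, not values from any cited source.
<syntaxhighlight lang="python">
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(spot, strike, maturity, rate, volatility):
    """Black-Scholes price of a European call option on a non-dividend-paying stock."""
    N = NormalDist().cdf
    d1 = (log(spot / strike) + (rate + 0.5 * volatility**2) * maturity) / (volatility * sqrt(maturity))
    d2 = d1 - volatility * sqrt(maturity)
    return spot * N(d1) - strike * exp(-rate * maturity) * N(d2)

# Placeholder inputs: spot 100, strike 105, one year to maturity, 5% rate, 20% volatility.
print(round(black_scholes_call(100.0, 105.0, 1.0, 0.05, 0.20), 2))
</syntaxhighlight>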
Stochastic Volatility Models
Another significant application of stochastic processes in finance is in stochastic volatility models, which aim to capture the time-varying nature of market volatility. The Heston model<ref>Template:Cite journal</ref> is a popular example, allowing for the volatility of asset prices to follow its own stochastic process. Unlike the Black-Scholes model, which assumes constant volatility, stochastic volatility models provide a more flexible framework for modeling market dynamics, particularly during periods of high uncertainty or market stress.
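A minimal simulation sketch of the Heston dynamics, using a simple Euler discretisation with truncation of negative variance, is shown below; all parameter values (drift, mean-reversion speed, long-run variance, volatility of volatility, correlation) are illustrative assumptions rather than calibrated values.
<syntaxhighlight lang="python">
import numpy as np

# Minimal Euler discretisation of the Heston dynamics with illustrative parameters:
#   dS = mu * S dt + sqrt(v) * S dW1,    dv = kappa * (theta - v) dt + xi * sqrt(v) dW2,
# where corr(dW1, dW2) = rho; negative variance is truncated at zero for the square roots.
rng = np.random.default_rng(1)
S, v = 100.0, 0.04                                    # initial price and variance
mu, kappa, theta, xi, rho = 0.05, 1.5, 0.04, 0.3, -0.7
T, n_steps = 1.0, 252
dt = T / n_steps

for _ in range(n_steps):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal()   # correlated shocks
    v_pos = max(v, 0.0)
    S += mu * S * dt + np.sqrt(v_pos * dt) * S * z1
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

print("one simulated year-end price:", round(S, 2))
</syntaxhighlight>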
Applications in Biology
Population Dynamics
One of the primary applications of stochastic processes in biology is in population dynamics. In contrast to deterministic models, which assume that populations change in predictable ways, stochastic models account for the inherent randomness in births, deaths, and migration. The birth-death process,<ref name="Ross 2010">Template:Cite book</ref> a simple stochastic model, describes how populations fluctuate over time due to random births and deaths. These models are particularly important when dealing with small populations, where random events can have large impacts, such as in the case of endangered species or small microbial populations.
Another example is the branching process,<ref name="Ross 2010"/> which models the growth of a population where each individual reproduces independently. The branching process is often used to describe population extinction or explosion, particularly in epidemiology, where it can model the spread of infectious diseases within a population.
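A Galton-Watson branching process with Poisson offspring gives a simple way to see extinction behaviour by simulation; the offspring means, run counts, and generation caps in the sketch below are arbitrary choices, and the finite caps only approximate the limiting extinction probabilities.
<syntaxhighlight lang="python">
import numpy as np

# Galton-Watson branching process with Poisson(m) offspring per individual.
# The run counts, generation cap and population cap are arbitrary; the finite
# caps only approximate the true (limiting) extinction probabilities.
def estimated_extinction_probability(m, n_runs=3_000, max_generations=100, seed=0):
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_runs):
        population = 1
        for _ in range(max_generations):
            if population == 0:
                extinct += 1
                break
            if population > 10_000:      # treat a very large population as having survived
                break
            population = int(rng.poisson(m, size=population).sum())
    return extinct / n_runs

for m in (0.8, 1.0, 1.5):                # subcritical, critical and supercritical offspring means
    print(f"mean offspring {m}: estimated extinction probability "
          f"{estimated_extinction_probability(m):.2f}")
</syntaxhighlight>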
Applications in Computer Science
Randomized Algorithms
Stochastic processes play a critical role in computer science, particularly in the analysis and development of randomized algorithms. These algorithms utilize random inputs to simplify problem-solving or enhance performance in complex computational tasks. For instance, Markov chains are widely used in probabilistic algorithms for optimization and sampling tasks, such as those employed in search engines like Google's PageRank.<ref name="Randomized algorithms">Template:Cite book</ref> These methods balance computational efficiency with accuracy, making them invaluable for handling large datasets. Randomized algorithms are also extensively applied in areas such as cryptography, large-scale simulations, and artificial intelligence, where uncertainty must be managed effectively.<ref name="Randomized algorithms"/>
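A toy version of the PageRank idea, a power iteration over the random-surfer Markov chain of a small, made-up link graph, might look like the following sketch; the graph, damping factor, and iteration count are assumptions chosen only for illustration.
<syntaxhighlight lang="python">
import numpy as np

# Power-iteration PageRank on a tiny, made-up link graph with damping factor 0.85.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}           # page -> pages it links to
n = len(links)
damping = 0.85

# Column-stochastic matrix of the random surfer's Markov chain.
M = np.zeros((n, n))
for page, outgoing in links.items():
    for target in outgoing:
        M[target, page] = 1.0 / len(outgoing)

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1.0 - damping) / n + damping * M @ rank    # one power-iteration step

print(rank.round(3))                                   # larger value = more "important" page
</syntaxhighlight>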
Queuing Theory
Another significant application of stochastic processes in computer science is in queuing theory, which models the random arrival and service of tasks in a system.<ref>Template:Cite book</ref> This is particularly relevant in network traffic analysis and server management. For instance, queuing models help predict delays, manage resource allocation, and optimize throughput in web servers and communication networks. The flexibility of stochastic models allows researchers to simulate and improve the performance of high-traffic environments. For example, queueing theory is crucial for designing efficient data centers and cloud computing infrastructures.<ref>Template:Cite book</ref>
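For the simplest queueing model, the M/M/1 queue with Poisson arrivals and exponential service times, the standard closed-form results can be evaluated directly; the arrival and service rates below are placeholder values, not figures from the cited sources.
<syntaxhighlight lang="python">
# Standard closed-form results for an M/M/1 queue (Poisson arrivals, exponential
# service times, one server); the arrival and service rates are placeholder values.
arrival_rate = 8.0     # lambda: tasks arriving per second
service_rate = 10.0    # mu: tasks the server can complete per second

utilisation = arrival_rate / service_rate                        # must be < 1 for stability
mean_jobs_in_system = utilisation / (1.0 - utilisation)          # L  = rho / (1 - rho)
mean_response_time = 1.0 / (service_rate - arrival_rate)         # W  = 1 / (mu - lambda)
mean_waiting_time = utilisation / (service_rate - arrival_rate)  # Wq = rho / (mu - lambda)

print(f"utilisation={utilisation:.2f}, mean jobs={mean_jobs_in_system:.2f}, "
      f"mean response={mean_response_time:.3f} s, mean wait={mean_waiting_time:.3f} s")
</syntaxhighlight>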
See also
Notes
References
External links
Template:Stochastic processes Template:Industrial and applied mathematics
Template:Authority control</syntaxhighlight> will transclude from the page Stochastic process (an article, in the Main namespace)
- <syntaxhighlight lang="wikitext" inline="">{{Wikipedia:Assume good faith}}</syntaxhighlight> will transclude from the page Wikipedia:Assume good faith
Transclusion, and what content it includes or excludes, can be modified by using the invisible wikitext tags Template:Tag, Template:Tag, Template:Tag, and Template:Tag on the Template:Em, as further outlined in Template:Section link.Template:Refn The first three tags enable Template:Section link, as opposed to the default behavior of double curly braces, which transclude the entire source page. For Template:Section link, the Template:Tag or Template:Tag tags can be used to name sections of the source page; a named section can then be transcluded with a parser function that takes the section name as a parameter: <syntaxhighlight lang="wikitext" inline="">{{#section:Pagename|Sectionname}}</syntaxhighlight>.
Transclusion events occur each time the target page is loaded and the template is rendered. A related mechanism is substitution, where a template call is replaced with its source wikitext at the time it is invoked. Unlike transclusion, which continuously updates the target page with changes from the source, substitution is a one-time inclusion of the content, meaning that subsequent updates to the source will not be reflected in the target page. For example, adding the <syntaxhighlight lang="wikitext" inline="">subst:</syntaxhighlight> prefix to a template call for <syntaxhighlight lang="wikitext" inline="">Template:Pagename</syntaxhighlight> gives the substitution call <syntaxhighlight lang="wikitext" inline="">{{subst:Pagename}}</syntaxhighlight>. When invoked, this call is replaced, also referred to as substituted, with the actual wikitext of the source page at the time of the call, thereby making it a permanent part of the target page.Template:Refn
It is possible to transclude content from Wikidata into Wikipedia articles or other wikis.
How transclusion works
Template:Transcluded section Help:Transclusion/How Transclusion Works
Transclusion syntax
The general syntax for transclusion on Wikipedia follows the format <syntaxhighlight lang="wikitext" inline="">Template:Namespace:Pagename</syntaxhighlight>, where Namespace:Pagename specifies the title of a Wikipedia page.
Similar to creating a wikilink using double square brackets (<syntaxhighlight lang="wikitext" inline="">Pagename</syntaxhighlight>), a page can be transcluded as a template by enclosing its title in double curly braces: <syntaxhighlight lang="wikitext" inline="">Template:Namespace:Pagename</syntaxhighlight>. Any changes made to the source page, or template, are automatically reflected on all pages that include the transcluded content.Template:Refn
Wikipedia is structured using namespaces, which organize pages based on their function. For example, a page titled Template:Xtn belongs to the Wikipedia namespace, with Wikipedia: as its namespace and Tips as its pagename. However, articles in the Main namespace, such as Template:Xtn, do not require a namespace prefix when linked using <syntaxhighlight lang="wikitext" inline="">Potato</syntaxhighlight>, as Wikipedia assumes any wikilink without a specified namespace belongs to the Article namespace.
When transcluding pages, if no namespace is specified, Wikipedia defaults to the Template namespace. To reference a page in the Article namespace within transclusion syntax, it must be explicitly prefixed with a colon Template:Char (e.g., <syntaxhighlight lang="wikitext" inline="">Template:Short description Template:Good article {{#invoke:other uses|otheruses}} Template:Pp-vandalism Template:Use dmy dates Template:Speciesbox
The potato (Template:IPAc-en) is a starchy tuberous vegetable native to the Americas that is consumed as a staple food in many parts of the world. Potatoes are underground stem tubers of the plant Solanum tuberosum, a perennial in the nightshade family Solanaceae.
Wild potato species can be found from the southern United States to southern Chile. Genetic studies show that the cultivated potato has a single origin, in the area of present-day southern Peru and extreme northwestern Bolivia. Potatoes were domesticated there about 7,000–10,000 years ago from a species in the S. brevicaule complex. Many varieties of the potato are cultivated in the Andes region of South America, where the species is indigenous.
The Spanish introduced potatoes to Europe in the second half of the 16th century from the Americas. They are a staple food in many parts of the world and an integral part of much of the world's food supply. Following millennia of selective breeding, there are now over 5,000 different varieties of potatoes. The potato remains an essential crop in Europe, especially Northern and Eastern Europe, where per capita production is still the highest in the world, while the most rapid expansion in production during the 21st century was in southern and eastern Asia, with China and India leading the world production as of 2023.
Like the tomato and the nightshades, the potato is in the genus Solanum; the aerial parts of the potato contain the toxin solanine. Normal potato tubers that have been grown and stored properly produce glycoalkaloids in negligible amounts, but, if sprouts and potato skins are exposed to light, tubers can become toxic.
Etymology
The English word "potato" comes from Spanish {{#invoke:Lang|lang}}, in turn from Taíno {{#invoke:Lang|lang}}, which means "sweet potato", not the plant now known as simply "potato".<ref>Template:Cite book</ref>
The name "spud" for a potato is from the 15th century spudde, a short and stout knife or dagger, probably related to Danish spyd, "spear". Through semantic change, the general sense of short and thick was transferred to the tuber from around 1840.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
At least seven languages—Afrikaans, Dutch, Low Saxon, French, (West) Frisian, Hebrew, Persian<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> and some variants of German—use a term for "potato" that means "earth apple" or "ground apple",<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> from an earlier sense of both pome and apple, referring in general to an (apple-shaped) fruit or vegetable.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Description
Potato plants are herbaceous perennials that grow up to Template:Convert high. The stems are hairy. The leaves have roughly four pairs of leaflets. The flowers range from white or pink to blue or purple; they are yellow at the centre, and are insect-pollinated.<ref name="Kew">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
The plant develops tubers to store nutrients. These are not roots but stems that form from thickened rhizomes at the tips of long thin stolons. On the surface of the tubers there are "eyes," which act as sinks to protect the vegetative buds from which the stems originate. The "eyes" are arranged in helical form. In addition, the tubers have small holes that allow breathing, called lenticels. The lenticels are circular and their number varies depending on the size of the tuber and environmental conditions.<ref name="Ewing Struik 1992">Template:Cite book</ref> Tubers form in response to decreasing day length, although this tendency has been minimized in commercial varieties.<ref>Template:Cite journal</ref>
After flowering, potato plants produce small green fruits that resemble green cherry tomatoes, each containing about 300 very small seeds.<ref name="Plaisted">Template:Cite book</ref>
Phylogeny
Like the tomato, potatoes belong to the genus Solanum, which is a member of the nightshade family, the Solanaceae. That is a diverse family of flowering plants, often poisonous, that includes the mandrake (Mandragora), deadly nightshade (Atropa), and tobacco (Nicotiana), as shown in the outline phylogenetic tree (many branches omitted). The most commonly cultivated potato is S. tuberosum; there are several other species.<ref>Template:Cite journal</ref>
The major species grown worldwide is S. tuberosum (a tetraploid with 48 chromosomes), and modern varieties of this species are the most widely cultivated. There are also four diploid species (with 24 chromosomes): S. stenotomum, S. phureja, S. goniocalyx, and S. ajanhuiri. There are two triploid species (with 36 chromosomes): S. chaucha and S. juzepczukii. There is one pentaploid cultivated species (with 60 chromosomes): S. curtilobum.<ref name="Raker Spooner 2002"/>
There are two major subspecies of S. tuberosum.<ref name="Raker Spooner 2002">Template:Cite journal</ref> The Andean potato, S. tuberosum andigena, is adapted to the short-day conditions prevalent in the mountainous equatorial and tropical regions where it originated. The Chilean potato S. tuberosum tuberosum, native to the Chiloé Archipelago, is in contrast adapted to the long-day conditions prevalent in the higher latitude region of southern Chile.<ref name="Rodríguez"/>
History
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}}
Domestication
Wild potato species occur from the southern United States to southern Chile.<ref>Template:Cite journal</ref> The potato was first domesticated in southern Peru and northwestern Bolivia<ref name="Spooner 2005 14694–99"/> by pre-Columbian farmers, around Lake Titicaca.<ref name="LostCrops"/> Potatoes were domesticated there about 7,000–10,000 years ago from a species in the S. brevicaule complex.<ref name="Spooner 2005 14694–99">Template:Cite journal</ref><ref name="LostCrops">Template:Cite book</ref><ref name="John Michael Francis 2005">Template:Cite book</ref>
The earliest archaeologically verified potato tuber remains have been found at the coastal site of Ancon (central Peru), dating to 2500 BC.<ref>Martins-Farias 1976; Moseley 1975</ref><ref>Template:Cite book</ref> The most widely cultivated variety, Solanum tuberosum tuberosum, is indigenous to the Chiloé Archipelago, and has been cultivated by the local indigenous people since before the Spanish conquest.<ref name="Rodríguez">Template:Cite journal</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Spread
Following the Spanish conquest of the Inca Empire, the Spanish introduced the potato to Europe in the second half of the 16th century as part of the Columbian exchange. The staple was subsequently conveyed by European mariners (possibly including the Russian-American Company) to territories and ports throughout the world, especially their colonies.<ref name="Sauer-2017">Template:Cite book Template:Isbn Template:Isbn Template:Isbn Template:Isbn Template:Isbn</ref> European and colonial farmers were slow to adopt farming potatoes. However, after 1750, they became an important food staple and field crop<ref name="Sauer-2017" /> and played a major role in the European 19th century population boom.<ref name="John Michael Francis 2005"/> According to conservative estimates, the introduction of the potato was responsible for a quarter of the growth in Old World population and urbanization between 1700 and 1900.<ref>Template:Cite journal</ref> However, lack of genetic diversity, due to the very limited number of varieties initially introduced, left the crop vulnerable to disease. In 1845, a plant disease known as late blight, caused by the fungus-like oomycete Phytophthora infestans, spread rapidly through the poorer communities of western Ireland as well as parts of the Scottish Highlands, resulting in the crop failures that led to the Great Irish Famine.<ref name="PlDis2011">Template:Cite journal</ref><ref name="Sauer-2017" />
The International Potato Center, based in Lima, Peru, holds 4,870 types of potato germplasm, most of which are traditional landrace cultivars.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In 2009, a draft sequence of the potato genome was made, containing 12 chromosomes and 860 million base pairs, making it a medium-sized plant genome.<ref>Template:Cite journal</ref>
It had been thought that most potato cultivars derived from a single origin in southern Peru and extreme Northwestern Bolivia, from a species in the S. brevicaule complex.<ref name="Spooner 2005 14694–99"/><ref name="LostCrops"/><ref name="John Michael Francis 2005"/> DNA analysis however shows that more than 99% of all current varieties of potatoes are direct descendants of a subspecies that once grew in the lowlands of south-central Chile.<ref name="Ames2008">Template:Cite journal</ref>
Most modern potatoes grown in North America arrived through European settlement and not independently from the South American sources. At least one wild potato species, S. fendleri, occurs in North America; it is used in breeding for resistance to a nematode species that attacks cultivated potatoes. A secondary center of genetic variability of the potato is Mexico, where important wild species that have been used extensively in modern breeding are found, such as the hexaploid S. demissum, used as a source of resistance to the devastating late blight disease (Phytophthora infestans).<ref name="PlDis2011" /> Another relative native to this region, Solanum bulbocastanum, has been used to genetically engineer the potato to resist potato blight.<ref>Template:Cite journal</ref> Template:Anchor Many such wild relatives are useful for breeding resistance to P. infestans.<ref name="Genes">Template:Cite journal</ref>
Little of the diversity found in Solanum ancestral and wild relatives is found outside the original South American range.<ref name="Resources">Template:Cite journal</ref> This makes these South American species highly valuable in breeding.<ref name="Resources"/> The importance of the potato to humanity is recognised in the United Nations International Day of Potato, to be celebrated on 30 May each year, starting in 2024.<ref name="UN Potato Day">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Breeding
Potatoes, both S. tuberosum and most of its wild relatives, are self-incompatible: they bear no useful fruit when self-pollinated. This trait is problematic for crop breeding, as all sexually-produced plants must be hybrids. The gene responsible for self-incompatibility, as well as mutations to disable it, are now known. Self-compatibility has successfully been introduced to diploid potatoes (including a special line of S. tuberosum) by CRISPR-Cas9.<ref name="Neofunctionalisation"/> Plants having a 'Sli' gene produce pollen which is compatible with its own parent and with plants having similar S genes.<ref name="Hosaka Hanneman, Jr. 1998 pp. 191–197" >Template:Cite journal</ref> This gene was cloned by Wageningen University and Solynta in 2021, which would allow for faster and more focused breeding.<ref name="Neofunctionalisation">Template:Cite journal</ref><ref>Template:Cite journal </ref>
Diploid hybrid potato breeding is a recent area of potato genetics supported by the finding that simultaneous homozygosity and fixation of donor alleles is possible.<ref name="Lindhout Meijer Schotte Hutten 2011 pp. 301–312">Template:Cite journal</ref> Wild potato species useful for breeding blight resistance include Solanum demissum and S. stoloniferum, among others.<ref name="Strategies">Template:Cite journal</ref>
Varieties
There are some 5,000 potato varieties worldwide, 3,000 of them in the Andes alone — mainly in Peru, Bolivia, Ecuador, Chile, and Colombia. Over 100 cultivars might be found in a single valley, and a dozen or more might be maintained by a single agricultural household.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> The European Cultivated Potato Database is an online collaborative database of potato variety descriptions updated and maintained by the Scottish Agricultural Science Agency within the framework of the European Cooperative Programme for Crop Genetic Resources Networks—which is run by the International Plant Genetic Resources Institute.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Around 80 varieties are commercially available in the UK.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
For culinary purposes, varieties are often differentiated by their waxiness: floury or mealy baking potatoes have more starch (20–22%) than waxy boiling potatoes (16–18%). The distinction may also arise from variation in the comparative ratio of two different potato starch compounds: amylose and amylopectin. Amylose, a long-chain molecule, diffuses from the starch granule when cooked in water, and lends itself to dishes where the potato is mashed. Varieties that contain a slightly higher amylopectin content, which is a highly branched molecule, help the potato retain its shape after being boiled in water.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Potatoes that are good for making potato chips or potato crisps are sometimes called "chipping potatoes", which means they meet the basic requirements of similar varietal characteristics, being firm, fairly clean, and fairly well-shaped.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Immature potatoes may be sold fresh from the field as "Template:Vanchor" or "Template:Vanchor" potatoes and are particularly valued for their taste. They are typically small in size and tender, with a loose skin, and flesh containing a lower level of starch than other potatoes. In the United States they are generally either a Yukon Gold potato or a red potato, called gold creamers or red creamers respectively.<ref name="recipe tips">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>Template:Cite news</ref> In the UK, the Jersey Royal is a famous type of new potato.<ref>Template:Cite news</ref>
Dozens of potato cultivars have been selectively bred specifically for their skin or flesh color, including gold, red, and blue varieties.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> These contain varying amounts of phytochemicals, including carotenoids for gold/yellow or polyphenols for red or blue cultivars.<ref name="Hirsch">Template:Cite journal</ref> Carotenoid compounds include provitamin A alpha-carotene and beta-carotene, which are converted to the essential nutrient, vitamin A, during digestion. Anthocyanins mainly responsible for red or blue pigmentation in potato cultivars do not have nutritional significance, but are used for visual variety and consumer appeal.<ref>Template:Cite journal</ref> In 2010, potatoes were bioengineered specifically for these pigmentation traits.<ref>Template:Cite book</ref>
Genetic engineering
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}}
Genetic research has produced several genetically modified varieties. 'New Leaf', owned by Monsanto Company, incorporates genes from Bacillus thuringiensis (the source of most Bt toxins used in transgenic crops), which confers resistance to the Colorado potato beetle; 'New Leaf Plus' and 'New Leaf Y', approved by US regulatory agencies during the 1990s, also include resistance to viruses. McDonald's, Burger King, Frito-Lay, and Procter & Gamble announced they would not use genetically modified potatoes, and Monsanto published its intent to discontinue the line in March 2001.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Potato starch contains two types of glucan, amylose and amylopectin, the latter of which is most industrially useful. Waxy potato varieties produce waxy potato starch, which is almost entirely amylopectin, with little or no amylose. BASF developed the 'Amflora' potato, which was modified to express antisense RNA to inactivate the gene for granule bound starch synthase, an enzyme which catalyzes the formation of amylose.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> 'Amflora' potatoes therefore produce starch consisting almost entirely of amylopectin, and are thus more useful for the starch industry. In 2010, the European Commission cleared the way for 'Amflora' to be grown in the European Union for industrial purposes only—not for food. Nevertheless, under EU rules, individual countries have the right to decide whether they will allow this potato to be grown on their territory. Commercial planting of 'Amflora' was expected in the Czech Republic and Germany in the spring of 2010, and Sweden and the Netherlands in subsequent years.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
The 'Fortuna' GM potato variety developed by BASF was made resistant to late blight by introgressing two resistance genes, Template:Visible anchor and Template:Visible anchor, from S. bulbocastanum, a wild potato native to Mexico.<ref name="Receptor-Mediated"/><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Template:Vanchor is a nucleotide-binding leucine-rich repeat (NB-LRR/NLR), an R-gene-produced immunoreceptor.<ref name="Receptor-Mediated"> Template:Cite journal</ref>
In October 2011, BASF requested approval from the EFSA for cultivation and marketing of 'Fortuna' as feed and food. In 2012, BASF stopped GM crop development in Europe.<ref>BASF stops GM crop development in Europe, Deutsche Welle, 17 January 2012</ref><ref>Template:Cite news</ref> In November 2014, the United States Department of Agriculture (USDA) approved a genetically modified potato developed by Simplot, which contains genetic modifications that prevent bruising and produce less acrylamide when fried than conventional potatoes; the modifications do not cause new proteins to be made, but rather prevent proteins from being made via RNA interference.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Genetically modified varieties have met public resistance in the U.S. and in the European Union.<ref>Template:Cite news</ref><ref name="nytimes1">Template:Cite news</ref>
CultivationEdit
Seed potatoesEdit
Potatoes are generally grown from "seed potatoes", tubers specifically grown to be free from diseaseTemplate:Clarify and to provide consistent and healthy plants. To keep them disease free, the areas where seed potatoes are grown are selected with care. In the US, this restricts seed potato production to only 15 of the 50 states where potatoes are grown. These locations are selected for their cold, hard winters, which kill pests, and their summers with long sunshine hours for optimum growth.<ref name="US Potato Board - Seed Potatoes">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In the UK, most seed potatoes originate in Scotland, in areas where westerly winds reduce aphid attacks and the spread of potato virus pathogens.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Phases of growthEdit
Potato growth can be divided into five phases. During the first phase, sprouts emerge from the seed potatoes and root growth begins. During the second, photosynthesis begins as the plant develops leaves and branches above-ground and stolons develop from lower leaf axils on the below-ground stem. In the third phase the tips of the stolons swell, forming new tubers, and the shoots continue to grow, with flowers typically developing soon after. Tuber bulking occurs during the fourth phase, when the plant begins investing the majority of its resources in its newly formed tubers. At this phase, several factors are critical to a good yield: optimal soil moisture and temperature, soil nutrient availability and balance, and resistance to pest attacks. The fifth phase is the maturation of the tubers: the leaves and stems senesce and the tuber skins harden.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref name="JefferiesLawson1991">Template:Cite journal</ref>
New tubers may start growing at the surface of the soil. Since exposure to light leads to an undesirable greening of the skins and the development of solanine as a protection from the sun's rays, growers cover surface tubers. Commercial growers cover them by piling additional soil around the base of the plant as it grows (called "hilling" up, or in British English "earthing up"). An alternative method, used by home gardeners and smaller-scale growers, involves covering the growing area with mulches such as straw or plastic sheets.<ref name="cornell1">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
At farm scale, potatoes require a well-drained neutral or mildly acidic soil (pH 6 or 7) such as a sandy loam. The soil is prepared using deep tillage, for example with a chisel plow or ripper. In areas where irrigation is needed, the field is leveled using a landplane so that water can be supplied evenly. Manure can be added after initial irrigation; the soil is then broken up with a disc harrow. The potatoes are planted using a potato planter machine in rows Template:Convert apart.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> At garden scale, potatoes are planted in trenches or individual holes some Template:Convert deep in soil, preferably with additional organic matter such as garden compost or manure. Alternatively, they can be planted in containers or bags filled with a free-draining compost.<ref name="RHS planting">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Potatoes are sensitive to heavy frosts, which damage them in the ground or when stored.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
- Planting Potatoes.jpg: Planting
- Tractors in Potato Field.jpg: Field in Fort Fairfield, Maine
- Potato plants.jpg: Immature potato plants
- Potato bag cultivation.JPG: Potatoes grown in a tall bag are common in gardens as they minimize digging.
Pests and diseasesEdit
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}}
The historically significant Phytophthora infestans, the cause of late blight, remains an ongoing problem in Europe<ref name="PlDis2011"/> and the United States.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Other potato diseases include Rhizoctonia, Sclerotinia, Pectobacterium carotovorum (black leg), powdery mildew, powdery scab and leafroll virus.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Insects that commonly transmit potato diseases or damage the plants include the Colorado potato beetle, the potato tuber moth, the green peach aphid (Myzus persicae), the potato aphid, Tuta absoluta, beet leafhoppers, thrips, and mites. The Colorado potato beetle is considered the most important insect defoliator of potatoes, devastating entire crops.<ref name='Alyokhin'>Template:Cite book</ref> The potato cyst nematode is a microscopic worm that feeds on the roots, thus causing the potato plants to wilt. Since its eggs can survive in the soil for several years, crop rotation is recommended.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
HarvestEdit
On a small scale, potatoes can be harvested using a hoe or spade, or simply by hand. Commercial harvesting is done with large potato harvesters, which scoop up the plant and surrounding earth. This is transported up an apron chain consisting of steel links several feet wide, which separates some of the earth. The chain deposits into an area where further separation occurs. The most complex designs use vine choppers and shakers, along with a blower system to separate the potatoes from the plant. The result is then usually run past workers who continue to sort out plant material, stones, and rotten potatoes before the potatoes are continuously delivered to a wagon or truck. Further inspection and separation occurs when the potatoes are unloaded from the field vehicles and put into storage.<ref name="Johnson Auat Cheein 2023">Template:Cite journal</ref>
Potatoes are usually cured after harvest to improve skin-set. Skin-set is the process by which the skin of the potato becomes resistant to skinning damage. Potato tubers may be susceptible to skinning at harvest and suffer skinning damage during harvest and handling operations. Curing allows the skin to fully set and any wounds to heal. Wound-healing prevents infection and water-loss from the tubers during storage. Curing is normally done at relatively warm temperatures (Template:Convert) with high humidity and good gas-exchange if at all possible.<ref>Template:Cite book</ref>
StorageEdit
Storage facilities need to be carefully designed to keep the potatoes alive and to slow the natural process of sprouting, which involves the breakdown of starch. It is crucial that the storage area be dark, well ventilated and, for long-term storage, maintained at temperatures near Template:Convert. For short-term storage, temperatures of about Template:Convert are preferred.<ref name="crosstree">Potato storage, value Preservation: {{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Temperatures below Template:Convert convert the starch in potatoes into sugar, which alters their taste and cooking qualities and leads to higher acrylamide levels in the cooked product, especially in deep-fried dishes. The discovery of acrylamides in starchy foods in 2002 caused concern, but it is not likely that the acrylamides in food, even if somewhat burnt, cause cancer in humans.<ref name="cruk">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Chemicals are used to suppress sprouting of tubers during storage. Chlorpropham is the main chemical used, but it has been banned in the EU over toxicity concerns.<ref name="epp">Template:Cite news</ref> Alternatives include ethylene, spearmint and orange oils, and 1,4-dimethylnaphthalene.<ref name="epp"/>
Under optimum conditions in commercial warehouses, potatoes can be stored for up to 10–12 months.<ref name="crosstree" /> The commercial storage and retrieval of potatoes involves several phases: first drying surface moisture; wound healing at 85% to 95% relative humidity and temperatures below Template:Convert; a staged cooling phase; a holding phase; and a reconditioning phase, during which the tubers are slowly warmed. Mechanical ventilation is used at various points during the process to prevent condensation and the accumulation of carbon dioxide.<ref name="crosstree" />
Potato production, 2023 (millions of tonnes)
Country | Production |
---|---|
China | 93.4 |
India | 60.1 |
Ukraine | 21.4 |
United States | 20.0 |
{{#invoke:flag | }} | 19.4 |
World | 383 |
Template:Small<ref name="faostat">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
ProductionEdit
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}}
In 2023, world production of potatoes was 383 million tonnes, led by China with 25% of the total and India as a major secondary producer (table).
The world dedicated Template:Convert to potato cultivation in 2010; the world average yield was Template:Convert. The United States was the most productive country, with a nationwide average yield of Template:Convert.<ref name="yield2010">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
New Zealand farmers have demonstrated some of the best commercial yields in the world, ranging between 60 and 80 tonnes per hectare, with some reporting yields of 88 tonnes of potatoes per hectare.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
There is a large gap between high and low yields among countries, even with the same variety of potato. Average potato yields in developed economies range between Template:Convert. China and India accounted for over a third of the world's production in 2010, and had yields of Template:Convert respectively.<ref name="yield2010" /> The yield gap between farms in developing and developed economies represents an opportunity loss of over Template:Convert of potato, an amount greater than 2010 world potato production. Potato crop yields are determined by factors such as the crop breed, seed age and quality, crop management practices and the plant environment. Improvements in one or more of these yield determinants, and a closure of the yield gap, could be a major boost to food supply and farmer incomes in the developing world.<ref>Template:Cite book</ref><ref>Template:Cite journal</ref> The food energy yield of potatoes—about Template:Convert—is higher than that of maize (Template:Convert), rice (Template:Convert), wheat (Template:Convert), or soybeans (Template:Convert).<ref name="Ensminger">Template:Cite book</ref>
Effects of climate change on productionEdit
Climate change is predicted to have significant effects on global potato production.<ref name="supply">Template:Cite journal</ref> Like many crops, potatoes are likely to be affected by changes in atmospheric carbon dioxide, temperature and precipitation, as well as interactions between these factors.<ref name="supply" /> As well as affecting potatoes directly, climate change will also affect the distributions and populations of many potato diseases and pests. While the potato is less important than maize, rice, wheat and soybeans, which are collectively responsible for around two-thirds of all calories consumed by humans (both directly and indirectly as animal feed),<ref name="Zhao2017">Template:Cite journal</ref> it still is one of the world's most important food crops.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Altogether, one 2003 estimate suggests that future (2040–2069) worldwide potato yield would be 18–32% lower than it was at the time, driven by declines in hotter areas like Sub-Saharan Africa,<ref name="supply" /> unless farmers and potato cultivars can adapt to the new environment.<ref name="Luck-et-al-2011">Template:Cite journal</ref>
Potato plants and crop yields are predicted to benefit from the CO2 fertilization effect,<ref name="UK">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> which would increase photosynthetic rates and therefore growth, reduce water consumption through lower transpiration from stomata and increase starch content in the edible tubers.<ref name="supply" /> However, potatoes are more sensitive to soil water deficits than some other staple crops like wheat.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In the UK, the amount of arable land suitable for rainfed potato production is predicted to decrease by at least 75%.<ref>Template:Cite journal</ref> These changes are likely to lead to increased demand for irrigation water, particularly during the potato growing season.<ref name="supply" />
Potatoes grow best under temperate conditions.<ref name="global">Template:Cite journal</ref> Temperatures above Template:Convert have negative effects on potato crops, from physiological damage such as brown spots on tubers, to slower growth, premature sprouting, and lower starch content.<ref name="Levy">Template:Cite journal</ref> These effects reduce crop yield, affecting both the number and the weight of tubers. As a result, areas where current temperatures are near the limits of potatoes' temperature range (e.g. much of sub-Saharan Africa)<ref name="supply"/> will likely suffer large reductions in potato crop yields in the future.<ref name="global"/> On the other hand, low temperatures reduce potato growth and present risk of frost damage.<ref name="supply"/>
Changes in pests and diseasesEdit
Climate change is predicted to affect many potato pests and diseases. These include:
- Insect pests such as the potato tuber moth and Colorado potato beetle, which are predicted to spread into areas currently too cold for them.<ref name="supply"/>
- Aphids which act as vectors for many potato viruses and will spread under increased temperatures.<ref>{{#invoke:citation/CS1|citation
|CitationClass=web }}</ref>
- Pathogens causing potato blackleg disease (e.g. Dickeya) grow and reproduce faster at higher temperatures.<ref>{{#invoke:citation/CS1|citation
|CitationClass=web }}</ref>
- Bacterial infections such as Ralstonia solanacearum will benefit from higher temperatures and spread more easily through flash flooding.<ref name="supply"/>
- Late blight benefits from higher temperatures and wetter conditions.<ref>{{#invoke:citation/CS1|citation
|CitationClass=web }}</ref> Late blight is predicted to become a greater threat in some areas (e.g. in Finland)<ref name="supply"/> and become a lesser threat in others (e.g. in the United Kingdom).<ref name="UK"/>
Adaptation strategiesEdit
Potato production is expected to decline in many areas due to hotter temperatures and decreased water availability. Conversely, production is predicted to become possible in high altitude and latitude areas where it has been limited by frost damage, such as in Canada and Russia.<ref name="global"/> This will shift potato production to cooler areas, mitigating much of the projected decline in yield. However, this may trigger competition for land between potato crops and other land uses, mostly due to changes in water and temperature regimes.<ref name="global"/>
Another approach is the development of varieties or cultivars better adapted to the altered conditions. This can be done through 'traditional' plant breeding techniques and genetic modification, which allow specific traits to be selected as a new cultivar is developed. Traits such as heat stress tolerance, drought tolerance, fast growth or early maturation, and disease resistance may play an important role in creating new cultivars able to maintain yields under the stressors induced by climate change.<ref name="Levy"/>
For instance, developing cultivars with greater heat stress tolerance would be critical for maintaining yields in countries with potato production areas near current cultivars' maximum temperature limits (e.g. Sub-Saharan Africa, India).<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Superior drought resistance can be achieved through improved water use efficiency (amount of food produced per amount of water used) or the ability to recover from short drought periods and still produce acceptable yields. Further, selecting for deeper root systems may reduce the need for irrigation.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
NutritionEdit
In a reference amount of Template:Convert, a boiled potato with skin supplies 87 calories and is 77% water, 20% carbohydrates (including 2% dietary fiber in the skin and flesh), 2% protein, and contains negligible fat (table). The protein content is comparable to other starchy vegetable staples, as well as grains.<ref name="Beals">Template:Cite journal</ref>
Boiled potatoes are a moderate source (10–19% of the Daily Value, DV) of vitamin C (14% DV) and the B vitamins, vitamin B6 and pantothenic acid (table). Other than a moderate source of potassium (13% DV), boiled potatoes do not supply significant amounts of dietary minerals (table).
The potato is rarely eaten raw because raw potato starch is poorly digested by humans.<ref>Template:Cite journal</ref> Depending on the cultivar and preparation method, potatoes can have a high glycemic index (GI) and so are often excluded from the diets of individuals trying to follow a low-GI diet.<ref name="gi">Template:Cite journal</ref><ref name="Beals"/> There is a lack of evidence on the effect of potato consumption on obesity and diabetes.<ref name="Beals"/>
In the UK, the National Health Service does not count potatoes towards the recommended five daily portions of fruit and vegetables in the 5-A-Day program.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
ToxicityEdit
Raw potatoes contain toxic glycoalkaloids, of which the most prevalent are solanine and chaconine. Solanine is found in other plants in the same family, Solanaceae, which includes such plants as deadly nightshade (Atropa belladonna), henbane (Hyoscyamus niger) and tobacco (Nicotiana spp.), as well as food plants like tomato. These compounds, which protect the potato plant from its predators, are especially concentrated in the aerial parts of the plant. The tubers are low in these toxins, unless they are exposed to light, which makes them go green.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref name="fried">Template:Cite journal</ref>
Exposure to light, physical damage, and age increase glycoalkaloid content within the tuber.<ref name="Greening of potatoes">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Different potato varieties contain different levels of glycoalkaloids. The 'Lenape' variety, released in 1967, was withdrawn in 1970 as it contained high levels of glycoalkaloids.<ref name="boing">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Since then, breeders of new varieties test for this, sometimes discarding an otherwise promising cultivar. Breeders try to keep glycoalkaloid levels below Template:Cvt. However, when these commercial varieties turn green, their solanine concentrations can go well above this limit,<ref>Template:Cite journal</ref> with higher levels in the potato's skin.<ref>Template:Cite book</ref>
UsesEdit
CulinaryEdit
Potato dishes vary around the world. Peruvian cuisine naturally contains the potato as a primary ingredient in many dishes, as around 3,000 varieties of the tuber are grown there.<ref>Template:Cite news</ref> Chuño is a freeze-dried potato product traditionally made by Quechua and Aymara communities of Peru and Bolivia.<ref>Timothy Johns: With bitter Herbs They Shall Eat it : Chemical ecology and the origins of human diet and medicine, The University of Arizona Press, Tucson 1990, Template:ISBN, pp. 82–84</ref> In the UK, potatoes form part of the traditional dish fish and chips. Roast potatoes are commonly served as part of a Sunday roast dinner and mashed potatoes form a major component of several other traditional dishes, such as shepherd's pie, bubble and squeak, and bangers and mash. New potatoes may be cooked with mint and are often served with butter. In Germany, Northern Europe (Finland, Latvia and especially Scandinavian countries), Eastern Europe (Russia, Belarus and Ukraine) and Poland, newly harvested, early ripening varieties are considered a special delicacy. Boiled whole and served un-peeled with dill, these "new potatoes" are traditionally consumed with Baltic herring. Puddings made from grated potatoes (kugel, kugelis, and potato babka) are popular items of Ashkenazi, Lithuanian, and Belarusian cuisine.<ref name="Bremzen90">Template:Cite book</ref> Cepelinai, the national dish of Lithuania, are dumplings made from boiled grated potatoes, usually stuffed with minced meat.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In Italy, in the Friuli region, potatoes serve to make a type of pasta called gnocchi.<ref>Template:Cite book</ref> Potato is used in northern China where rice is not easily grown, a popular dish being {{#invoke:Lang|lang}} (qīng jiāo tǔ dòu sī), made with green pepper, vinegar and thin slices of potato. In the winter, roadside sellers in northern China sell roasted potatoes.<ref name=Solomon>Template:Cite book</ref>
- Flickr - cyclonebill - Pommes frites med salatmayonnaise.jpg: Pommes frites, also called chips and French fries
- Peru PapasRellenas2.jpg
- Baked Potato (3662019664).jpg: Baked potato with sour cream and chives
- Bauernfrühstück-01.jpg: Bauernfrühstück ("farmer's breakfast")
- Cepelinai 2, Vilnius, Lithuania - Diliff.jpg
Other usesEdit
Potatoes are sometimes used to brew alcoholic spirits such as vodka, poitín, akvavit, and brännvin.<ref name=ermochkine>Ermochkine, Nicholas and Iglikowski, Peter (2003). 40 degrees east : an anatomy of vodka, Nova Publishers, p. 65, Template:ISBN.</ref><ref>Brännvinsbränning Template:Webarchive in Nordisk familjebok, volume 4 (1905)</ref>
Potatoes are used as fodder for livestock. They may be made into silage which can be stored for some months before use.<ref name="Halliday_2015">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Potato starch is used in the food industry as a thickener and binder for soups and sauces, in the textile industry as an adhesive, and in the paper industry for manufacturing paper and board.<ref>Template:Cite book</ref><ref name="jai">Template:Cite book</ref>
Potatoes are commonly used in plant research. The consistent parenchyma tissue, the clonal nature of the plant and the low metabolic activity make it an ideal model tissue for experiments on wound-response studies and electron transport.<ref name="Espinoza Estrada Silva-Rodriguez Tovar 1986">Template:Cite journal</ref>
Cultural significanceEdit
In mythologyEdit
In Inca mythology, a daughter of the earth mother Pachamama, Axomamma, is the goddess of potatoes. She ensured the fertility of the soil and the growth of the tubers.<ref name="Thurner 2021">Template:Cite book</ref> According to Iroquois mythology, the first potatoes grew out of Earth Woman's feet after she died giving birth to her twin sons, Sapling and Flint.<ref name="Converse 1908">Template:Cite journal</ref>
In artEdit
The potato has been an essential crop in the Andes since the pre-Columbian era. The Moche culture of northern Peru made ceramics from earth, water, and fire. This pottery was a sacred substance, formed into significant shapes and used to represent important themes. Potatoes are represented anthropomorphically as well as naturally.<ref>Berrin, Katherine & Larco Museum. The Spirit of Ancient Peru: Treasures from the Museo Arqueológico Rafael Larco Herrera. New York: Thames and Hudson, 1997.</ref> During the late 19th century, numerous images of potato harvesting appeared in European art, including the works of Willem Witsen and Anton Mauve.<ref>Template:Cite book</ref> Van Gogh's 1885 painting The Potato Eaters portrays a family eating potatoes. Van Gogh said he wanted to depict peasants as they really were. He deliberately chose coarse and ugly models, thinking that they would be natural and unspoiled in his finished work.<ref name="vgg">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Jean-François Millet's The Potato Harvest depicts peasants working in the plains between Barbizon and Chailly. It presents a theme representative of the peasants' struggle for survival. Millet's technique for this work incorporated paste-like pigments thickly applied over a coarsely textured canvas.<ref name="William Johnston">Johnston, W.R., Nineteenth Century Art: From Romanticism to Art Nouveau, The Walters Art Gallery, p. 56, Template:ISBN</ref>
- Papamuseolarco.jpg: Potato ceramic from the Moche culture
- Jean-François Millet - The Potato Harvest - Walters 37115.jpg
- Bastien Lepage Saison d-Octobre Recolte des pommes de terre.jpg: The potato harvest by Jules Bastien-Lepage, 1877, National Gallery of Victoria
- Van-willem-vincent-gogh-die-kartoffelesser-03850.jpg: The Potato Eaters by Van Gogh, 1885 (Van Gogh Museum)
- Anker Die kleine Kartoffelschälerin 1886.jpg: Girl peeling potatoes by Albert Anker, 1886, oil on canvas
In popular cultureEdit
Invented in 1949, and marketed and sold commercially by Hasbro in 1952, Mr. Potato Head is an American toy that consists of a plastic potato and attachable plastic parts, such as ears and eyes, to make a face. It was the first toy ever advertised on television.<ref name="VAC">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref name="historyofhasbro">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref name=WTToys>Template:Cite book</ref>
In the 2015 science fiction film The Martian, the protagonist, a stranded astronaut and botanist named Mark Watney, cultivates potatoes on Mars using Martian soil fertilized with frozen feces.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
See alsoEdit
- Great Famine (Ireland)
- Irish potato candy
- List of potato dishes
- List of potato museums
- Loy (spade), a form of early spade used in Ireland for the cultivation of potatoes
- New World crops
- Potato battery
- International Year of the Potato
ReferencesEdit
Further readingEdit
- Template:Cite book
- Atlas of Wild Potatoes (2002), Systematic and Ecogeographic Studies on Crop Genepools 10, International Plant Genetic Resources Institute (IPGRI), Template:ISBN
- Economist. "Llamas and mash", The Economist 28 February 2008
- Template:Cite book
- Template:Cite journal
- Gauldie, Enid (1981). The Scottish Miller 1700–1900. Pub. John Donald. Template:ISBN.
- Hawkes, J.G. (1990). The Potato: Evolution, Biodiversity & Genetic Resources, Smithsonian Institution Press, Washington, DC
- Template:Cite book
- Template:Cite journal
- McNeill, William H. "How the Potato Changed the World's History." Social Research (1999) 66#1 pp. 67–83. ISSN 0037-783X. Fulltext: Ebsco; by a leading historian
- Template:Cite journal
- Ó Gráda, Cormac. Black '47 and Beyond: The Great Irish Famine in History, Economy, and Memory. (1999). 272 pp.
- Ó Gráda, Cormac, Richard Paping, and Eric Vanhaute, eds. When the Potato Failed: Causes and Effects of the Last European Subsistence Crisis, 1845–1850. (2007). 342 pp. Template:ISBN. 15 essays by scholars looking at Ireland and all of Europe
- Reader, John. Propitious Esculent: The Potato in World History (2008), 315pp a standard scholarly history
- Salaman, Redcliffe N. (1989) [1949]. The History and Social Influence of the Potato, Cambridge University Press.
- Template:Cite journal
- Stevenson, W.R., Loria, R., Franc, G.D., and Weingartner, D.P. (2001) Compendium of Potato Diseases, 2nd ed, Amer. Phytopathological Society, St. Paul, MN.
- The World Potato Atlas, released by the International Potato Center in 2006 and regularly updated.
- World Geography of the Potato at UGA.edu, released in 1993.
- Zuckerman, Larry. The Potato: How the Humble Spud Rescued the Western World. (1998). 304 pp. Douglas & McIntyre. Template:ISBN.
Template:Potato cultivars Template:Bioenergy Template:Taxonbar Template:Authority control Template:Sister bar</syntaxhighlight>).
ExamplesEdit
Using <syntaxhighlight lang="wikitext" inline>Template:Pagename</syntaxhighlight> or <syntaxhighlight lang="wikitext" inline>Template:Pagename</syntaxhighlight> will transclude the content of Template:Xtn. Using <syntaxhighlight lang="wikitext" inline>Pagename</syntaxhighlight> instead will transclude the mainspace article titled Template:Xtn. Including <syntaxhighlight lang="wikitext" inline>Template:Namespace:Pagename</syntaxhighlight> transcludes a page in the defined namespace, such as Template:Xtn.
- Specifying namespace: <syntaxhighlight lang="wikitext" inline>Template:Namespace:Pagename</syntaxhighlight> will transclude the page titled <syntaxhighlight lang="wikitext" inline>Namespace:Pagename</syntaxhighlight>. For example, if a page has the wikitext <syntaxhighlight lang="wikitext" inline>{{Wikipedia:Notability}}</syntaxhighlight> in it, it will transclude the page Template:Xtn into it. Please note that <syntaxhighlight lang="wikitext" inline>Template:WP:Notability</syntaxhighlight> would do exactly the same thing, as WP: is a namespace alias, which is automatically translated by the Wikipedia servers to Wikipedia:.
- Calling from the Article namespace: If the namespace is omitted, but the colon is included, like <syntaxhighlight lang="wikitext" inline>Pagename</syntaxhighlight>, the mainspace article Pagename will be transcluded. For example, <syntaxhighlight lang="wikitext" inline>Notability</syntaxhighlight> will transclude the article [[Notability|Template:Xtn]].
- Template namespace: If both namespace and colon are omitted, like <syntaxhighlight lang="wikitext" inline>Template:Pagename</syntaxhighlight>, the page Template:Pagename will be transcluded. For example, <syntaxhighlight lang="wikitext" inline>Template:Notability</syntaxhighlight> and <syntaxhighlight lang="wikitext" inline>Template:Notability</syntaxhighlight> will both transclude the [[Template:Notability|Template:Xtn]].
Additionally, specific Template:Section link and Template:Section link allow parameters to be passed to templates and alter how transclusion occurs, so that the output can be customized. This is explained in more detail below.
SubpagesEdit
Subpages, identifiable by a / in their page names, are pages related to a 'parent' page (e.g., Namespace:Pagename/Subpagename is a subpage of Namespace:Pagename). This feature is disabled in the Main, File, and MediaWiki namespaces, but not in their corresponding talk namespaces.
To transclude subpages:
- In general, use <syntaxhighlight lang="wikitext" inline>Template:Namespace:Pagename/Subpagename</syntaxhighlight>, with the following exceptions:
- On the parent page of a subpage, either the general syntax above or simply <syntaxhighlight lang="wikitext" inline>Help:Transclusion/Subpagename</syntaxhighlight> can be used.
- For a template namespace page, use either the general syntax or <syntaxhighlight lang="wikitext" inline>Template:Pagename/Subpagename</syntaxhighlight>.
- Article subpages are disabled on this wiki, but would otherwise use <syntaxhighlight lang="wikitext" inline>Pagename/Subpagename</syntaxhighlight>.
- Alternatively, you can use Template:Section link and Template:Section link.
For example, to transclude Template:Xtn, you could use <syntaxhighlight lang="wikitext" inline>Template:Like/doc</syntaxhighlight> or <syntaxhighlight lang="wikitext" inline>Template:Like/doc</syntaxhighlight>. Note that subpage names are case sensitive, and <syntaxhighlight lang="wikitext" inline>Template:Like/Doc</syntaxhighlight> would lead to a different page.
Template parametersEdit
The most common use of transclusion on Wikipedia is for templates. Templates are specially designed pages intended to be included in other pages using either transclusion or substitution. The standard syntax for transcluding a template titled Template:Xtn is <syntaxhighlight lang="wikitext" inline="">Template:Pagename</syntaxhighlight>.
Additionally, many templates support parameters: variables that allow a template to behave differently depending on the specific values, also termed arguments, passed to it. Depending on its design, a template may take no parameters, a fixed number of parameters, or a variable number of parameters.
The exact syntax for using parameters varies by template. However, for a hypothetical template titled Template:Xtn that accepts three parameters, the general format would be:
<syntaxhighlight lang="wikitext" inline="">Template:Pagename</syntaxhighlight>
{{#invoke:Shortcut|main}}
In practice, each parameter in a template call can be supplied either as a bare value or in the Template:Para (name=value) format. Each parameter is separated by a vertical bar (|). Parameters supplied as bare values are called unnamed or positional parameters, while those supplied in name=value form are known as named parameters. With unnamed parameters, the first, second, and third values correspond to |1=, |2= and |3=, respectively, in template documentation. Unnamed parameters must be provided in the correct order and are best placed before named parameters.Template:Refn
For example, using the Template:Xtn with two unnamed parameters and one named parameter:
- Template:Y <syntaxhighlight lang="wikitext" inline="">Template:Collapse top</syntaxhighlight>
- Template:Y <syntaxhighlight lang="wikitext" inline="">Template:Collapse top</syntaxhighlight>
- Template:N <syntaxhighlight lang="wikitext" inline="" style="background-color:#ffdad3;">Template:Collapse top</syntaxhighlight>
In this case, "This is the title text" and "This is a custom warning line" are the values of the first and second unnamed parameters, while true is the value assigned to the named parameter. The last example shows how unnamed parameters should not be used after named parameters without their 'name'. Although this example includes three parameters, Template:Xtn can accept a variable number of parameters.
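Since the literal wikitext of the examples above was lost in rendering, the following sketch illustrates the same idea with a hypothetical template name and parameter names (both invented for illustration):

<syntaxhighlight lang="wikitext">
{{Example box|This is the title text|This is a custom warning line|collapsed=true}}
<!-- "This is the title text" fills unnamed parameter |1=,
     "This is a custom warning line" fills unnamed parameter |2=,
     and collapsed=true sets the named parameter |collapsed= -->
</syntaxhighlight>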
For more details, see Help:Template. Additionally, Wikipedia:Template index provides a categorized list of templates, including those for mainspace and other namespaces, along with a search function. Template parameters also play a role in the Template:Section link of Template:Section link, allowing for more dynamic content inclusion.
SubstitutionEdit
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}} Transclusion occurs each time the target page is loaded and the template is rendered. A related mechanism is substitution, where a template call is replaced with the template's source content at the time the edit is saved. Unlike transclusion, which continuously updates the target page with changes from the source, substitution is a one-time inclusion of the content, meaning that subsequent updates to the source will not be reflected in the target page. For example, adding the <syntaxhighlight lang="wikitext" inline="">subst:</syntaxhighlight> prefix to a template call for <syntaxhighlight lang="wikitext" inline="">Template:Pagename</syntaxhighlight> gives the substitution call <syntaxhighlight lang="wikitext" inline="">{{subst:Pagename}}</syntaxhighlight>. When invoked, this call is replaced, also referred to as substituted, with the actual wikitext of the source page at the time of the call, thereby making it a permanent part of the target page.Template:Refn
For example, when <syntaxhighlight lang="wikitext" inline="">{{subst:Like}}</syntaxhighlight> is inserted in a page and the changes published, it would substitute that wikitext with the actual wikitext from Template:Xtn. In practice, subsequent updates to Template:Xtn will not be reflected in the page it was substituted into.
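As a sketch, contrast the two forms for a hypothetical template named Welcome note (the template name is invented for illustration):

<syntaxhighlight lang="wikitext">
{{Welcome note}}        <!-- transclusion: re-rendered from the current template each time the page is viewed -->
{{subst:Welcome note}}  <!-- substitution: replaced by the template's wikitext when the edit is saved -->
</syntaxhighlight>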
Magic wordsEdit
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}}
Magic words are not examples of transclusion, but some have a near-identical double-curly-bracket syntax and a similar action to transclusion. For example, the magic word <syntaxhighlight lang="wikitext" inline="">{{FULLPAGENAME}}</syntaxhighlight> renders the full page name of the current page; on this page it returns Help:Transclusion. Like templates, some magic words can also take parameters, which are separated using a colon (:); for example <syntaxhighlight lang="wikitext" inline="">Help talk:Transclusion</syntaxhighlight> returns Template:Mono.
Templates do exist for some magic words, for example Template:FULLPAGENAME; but these just invoke the related magic word and pass any parameters to it using the vertical bar (|) in any case. Magic word parameters are best passed directly using a colon, which also bypasses the unnecessary template call. For example, <syntaxhighlight lang="wikitext" inline="">{{FULLPAGENAME|Value}}</syntaxhighlight> is synonymous with <syntaxhighlight lang="wikitext" inline="">{{FULLPAGENAME:Value}}</syntaxhighlight>, where the latter is preferred.
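As a minimal sketch (using an arbitrary page name, and assuming the wrapper template simply forwards its first parameter to the magic word):

<syntaxhighlight lang="wikitext">
{{FULLPAGENAME|Pathology}}   <!-- template wrapper: the value is passed with a vertical bar -->
{{FULLPAGENAME:Pathology}}   <!-- magic word used directly: the value is passed with a colon (preferred) -->
</syntaxhighlight>

Both lines render the same page name, Pathology.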
Transclusion modifiersEdit
Template:See
A transclusion modifier is a type of specialist magic word for altering transclusion in some manner. One example is the subst: modifier discussed above in Template:Section link. Another example is <syntaxhighlight lang="wikitext" inline="">{{:Notability}}</syntaxhighlight>, where the colon character (:) forces transclusion from the main namespace. There are additional transclusion modifiers such as safesubst:, int:, msg:, msgnw:, and raw:. For more details on their usage, see Template:Slink. There are also the modified commands #section:, #section-x: and #section-h:, used for labeled section transclusion; see Help:Labeled section transclusion and the section Template:Section link.
UsageEdit
Transclusion is commonly used in templates, allowing content to be embedded dynamically across multiple pages. However, it is also applied in other contexts, particularly within project space, where it facilitates the management of structured content.
Composite pagesEdit
{{#invoke:Hatnote|hatnote}}
Composite pages are created by transcluding multiple component pages, either entirely or in part, into a central page. The wikitext of a composite page may include HTML tags to embed content, typically from standalone pages that are not part of the template namespace. The primary purpose of composite pages is to consolidate related content for easier access.
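As an illustration, the wikitext of a composite page might consist mainly of transclusions of its component pages; the page names below are hypothetical:

<syntaxhighlight lang="wikitext">
<!-- Wikitext of a hypothetical composite page -->
== Proposals ==
{{Wikipedia:Example noticeboard/Proposals}}

== Technical ==
{{Wikipedia:Example noticeboard/Technical}}
</syntaxhighlight>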
ExamplesEdit
- Wikipedia:Village Pump – mostly consists of the transcluded page Template:Xtn.
- Wikipedia:Articles for deletion/Log – for example, Wikipedia:Articles for deletion/Log/2005 May 31 aggregates discussions from multiple individual deletion pages, such as Wikipedia:Articles for deletion/Sp3tt. On this particular day, 75 component pages were included.
- Meta:Translation requests – pages such as m:Meta:Translation/Coordination/List/Meta and m:Meta:Translation/Coordination/List/Main are transcluded within m:Meta:Translation/Coordination.
The use of composite pages allows users to view multiple related pages in one location rather than navigating through individual links.
CharacteristicsEdit
Composite pages function independently from their component pages in several ways. While changes made to a component page are reflected on the composite page, the composite page maintains its own edit history, recent changes log, page-watch settings, and protection levels, separate from those of its transcluded content.
The talk page of a composite page is used specifically for discussions about the composite itself rather than for the individual component pages it includes. However, in some cases, a composite talk page may also transclude discussions from its component pages, allowing for a centralized discussion space.
When editing, users can modify sections of a component page directly from the composite page, see Template:Section link. Once changes are saved, they are applied to the original component page, ensuring consistency across all instances where the content appears.
For projects that support interlanguage links, a composite page aggregates all interlanguage links from its component pages. This can sometimes lead to multiple links pointing to the same language or page, reflecting the structure of the transcluded content.
Pages with a common sectionEdit
{{#invoke:Shortcut|main}} When two pages need to discuss the same material in the same way, they can share a section. For example, a section of an existing page may be transcluded to other pages. This may also involve creating a third page and transcluding that page onto both pages. This third page may be a standalone page in its own right for another purpose, or a subpage of either of the other two – except in the mainspace, where subpages are not allowed. The third page may be placed in the same namespace as the other pages or in template namespace – again, except for use in mainspace, where templates should not be used to store article text, as this makes it more difficult to edit the content (see WP:TG). Common sections like this should be marked with an explanatory header, such as using the templates Template:Tl or Template:Tl to create hatnotes above the transcluded content, and/or given a special layout, to inform the reader that this section of the page is in a different location, since transcluding shared article sections can easily confuse novice editors and readers alike if left unmarked. All templates can be found at Category:Transclude page content templates.
This can be very useful when two disambiguation pages share content,Template:Disputed inline or a list page and a disambiguation page share content (see third example below).
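As a sketch of the shared-page approach described above (all page names hypothetical):

<syntaxhighlight lang="wikitext">
<!-- On Wikipedia:Example guideline/Common criteria (the shared page): -->
The criteria that both pages need to present identically.

<!-- On each of the two pages that share the section: -->
== Common criteria ==
{{Wikipedia:Example guideline/Common criteria}}
</syntaxhighlight>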
Examples:
- The Help:Editing sections of included templates article is included in Help:Section with the markup <syntaxhighlight lang="wikitext" inline>Help:Editing sections of included templates</syntaxhighlight>. By including a heading in the included article, a user clicking the "Edit" link on that heading in Help:Section is automatically directed to edit Help:Editing sections of included templates.
- Template:Pim
- Joseph Gordon-Levitt transcludes the introduction of HitRecord into a summary section of the same name, rather than maintaining two copies of the identical text.
Repetition within a pageEdit
On pages where there is a lot of repetitive information — various kinds of lists, usually — it is sometimes useful to make a template that contains the repeating text, and then call that template multiple times. For example, Template:EH listed building row is used repeatedly to construct tables in many articles.
Simple repetition of the same text can be handled with repetition of a parameter in a single template: e.g., Template:Tl, where <syntaxhighlight lang="wikitext" inline>Template:3x</syntaxhighlight> produces Template:3x.
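For instance, a table-building row template like the one mentioned above might be called once per entry; the template name and parameters below are invented for illustration:

<syntaxhighlight lang="wikitext">
{| class="wikitable"
! Name !! Year listed
{{Example listed building row|name=Old Mill|year=1952}}
{{Example listed building row|name=Corn Exchange|year=1961}}
|}
</syntaxhighlight>

Each call would expand to one table row, so adding an entry only requires adding one more template call.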
For more information on repetition, see also m:Help:Recursive conversion of wikitext.
For more information on the current template system, see Wikipedia:Template namespace.
Partial transclusionEdit
{{#invoke:Shortcut|main}} By using Template:Tag, Template:Tag and Template:Tag markup, it is possible to transclude part of a page rather than all of it. Such partial transclusions can be achieved by transcluding from any page, including subpages. It is often useful to exclude part of a page from a transclusion, a common example being template documentation.
For an example of how this technique can be applied to simplify the creation of summary articles, see how part of the History of pathology (see the diff here) was transcluded into Pathology (see the diff here) using the <syntaxhighlight lang="wikitext" inline>History of pathology</syntaxhighlight> markup. The Pathology article at that time (see here) mainly consisted of transcluded lead paragraphs and other sections from a number of articles. Since then, the Pathology article has been rewritten, and does not include all these transclusions.
Another example can be found in the transclusion of part of HitRecord (introductory paragraph only) into a same-named summary section in Joseph Gordon-Levitt. Template:Anchor
MarkupEdit
In transclusion, a source page is transcluded into a destination page. With partial transclusion, only part of that source page is transcluded into the destination page. In addition, what is transcluded to the destination page does not have to be visible on the source page.
Page rendering of a source page can be defined as the rendering of that source page when it is saved, which will be the same as the preview. We can call this rendering here.
Transclusion rendering of a source page can be defined as the rendering of a destination page that has a source page transcluded into it; but only that part of the destination page that was transcluded from the source page. The preview of the transclusion rendering will again be identical. We can call this rendering there.
There are three pairs of tags involved in cases where page rendering here should differ from transclusion rendering there. As described earlier, these are Template:Tag, Template:Tag and Template:Tag. These tags are invisible, but affect both page rendering here and transclusion rendering there. These tags pair-off to demarcate sections that will create differences. Each tag will describe exceptions to transcluding the whole page named.
<noinclude> This section is visible here; but this section is not visible there. Sections outside of these tags will be visible both here and there. </noinclude> <onlyinclude> This section is visible here; this section is also visible there. Sections outside of these tags will be visible here, but will not be visible there. </onlyinclude> <includeonly> This section is not visible here; but it is visible there. Sections outside of these tags will be visible both here and there. </includeonly>
Wikitext | What is rendered here (source page) | What is transcluded there (destination page) |
---|---|---|
<noinclude>text1</noinclude> text2 | text1 text2 | text2 |
<onlyinclude>text1</onlyinclude> text2 | text1 text2 | text1 |
<includeonly>text1</includeonly> text2 | text2 | text1 text2 |
An important point to note is that <noinclude> and <onlyinclude> do not affect what is page rendered here at all, unlike <includeonly>. The <noinclude> tags stop the text inside the tags from being transcluded there, while <onlyinclude> has the opposite effect: it stops the text outside of the tags from being transcluded there.
Only <includeonly> stops text from being page rendered here; naturally enough, that text is still transcluded there. Text outside of the tags will be both rendered here and transcluded there.
There can be several such sections. Also, they can be nested. All possible differences between here and there are achievable.
One example is a content editor who marks one section of an article for transclusion, excludes a passage within it, and then marks another section to append to the transcluded output; none of this affects how their article renders.
Another example is the template programmer, who will mark up a template page so that its code section is transcluded while its documentation section is rendered only on the template page itself.
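One possible arrangement, sketched here, keeps the transcluded output inside <includeonly> tags and the documentation inside <noinclude> tags:

<syntaxhighlight lang="wikitext">
<includeonly>This text appears only on pages that transclude the template.</includeonly><noinclude>
== Documentation ==
This text appears only on the template page itself, not where it is transcluded.
</noinclude>
</syntaxhighlight>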
Selective transclusionEdit
{{#invoke:Shortcut|main}} Template:See also Selective transclusion is the process of partially transcluding one selected section of a document that has more than one transcludable section. As noted above, if only one section of a document is to be transcluded, this can be done by simply surrounding the section of interest with <onlyinclude> … </onlyinclude> tags and transcluding the whole page. However, selectively transcluding one section from a template or document into one page, and another section from the same template or document into a second page and/or a different section of the same page, requires a way to:
- a) uniquely mark each transcludable section in the source document; and
- b) in the target document(s) (those to show the transcluded sections), a way to specify which section is to be transcluded.
This section describes three ways of accomplishing this: (1) section header-based transclusion, (2) labeled section transclusion, and (3) the parametrization method.
Standard section transclusionEdit
Template:Notice Standard section transclusion uses <syntaxhighlight lang="wikitext" inline>{{#section-h:PAGENAME|SECTIONNAME}}</syntaxhighlight>. One can easily transclude the content within a section from one page to another using the ubiquitous headline-based section headers used throughout Wikipedia. To transclude the lead section of an article with this method, omit the section name: <syntaxhighlight lang="wikitext" inline>{{#section-h:PAGENAME}}</syntaxhighlight>. This method is simpler than other selective transclusion methods, which require special markup in the source article or page to specify what content should be included or excluded.
Template:AnchorStandard section transclusion may introduce a leading or trailing line break or newline, depending on the markup in the source and target pages. To prevent this, wrap the transclusion code in a Template:Tlx template. For example:
- To transclude a section of an article: <syntaxhighlight lang="wikitext" inline>{{trim|{{#section-h:PAGENAME|SECTIONNAME}}}}</syntaxhighlight>
- To transclude the lead of an article: <syntaxhighlight lang="wikitext" inline>{{trim|{{#section-h:PAGENAME}}}}</syntaxhighlight>
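For instance, using a placeholder article name (Pathology) and section name (History), both chosen only for illustration:

<syntaxhighlight lang="wikitext">
{{trim|{{#section-h:Pathology|History}}}}  <!-- transcludes the "History" section of the article "Pathology" -->
{{trim|{{#section-h:Pathology}}}}          <!-- transcludes the lead section of "Pathology" -->
</syntaxhighlight>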
Hatnote on the target pageEdit
To indicate on the Template:Em where selectively transcluded content originates (its source), a Template:Tlx hatnote must be placed at the top of the corresponding section in the Template:Em where the content is being transcluded Template:Em.Template:RefnTemplate:Refn Use either of the following, depending on whether the transcluded content is the entire section or only part of it:
- If an entire section in the target page is transcluded from the source: <syntaxhighlight lang="wikitext" inline>Template:Transcluded section</syntaxhighlight>, which renders as:
- If only part of a section in the target page is transcluded from the source, specify Template:Para: <syntaxhighlight lang="wikitext" inline>Template:Transcluded section</syntaxhighlight>, which renders as:
Hidden comment on the source pageEdit
It is recommended to include a hidden comment at the beginning of the transcluded section in the Template:Em. This comment informs editors that the content is being used elsewhere and serves as a reminder to consider the broader audience when modifying the wording. Additionally, it helps maintain the integrity of the transcluded material on the target page. For example (replace Template:Mono with the name of the target page):
Template:Talk quote block minimalist
No hatnote should be placed on the Template:Em, in other words no hatnote is needed on the page being transcluded Template:Em, as readers do not need to know where else the content appears.
Using the labeled section methodEdit
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}} Labeled-section selective transclusion uses the parser functions listed in mw:Extension:Labeled Section Transclusion, which are enabled on all Wikimedia wikis, to selectively transclude content. See Help:Labeled section transclusion for how labeled section transclusion works.
Parametrization methodEdit
Source document markupEdit
Insert the following line into the "source" document (the one from which text is to be transcluded), immediately preceding the first line of each section to be transcluded, substituting SECTIONNAME (twice) with the unique name of the respective section. The section name can be any identifier and must be unique within that document:
- <syntaxhighlight lang="wikitext" inline>{{#ifeq:{{{transcludesection|SECTIONNAME}}}|SECTIONNAME|</syntaxhighlight>
End each such transcludable section with:
- <syntaxhighlight lang="wikitext" inline>}}</syntaxhighlight>
Target document markupEdit
To transclude a section marked as above into another page (the "target page"), use the following line on that page, substituting PAGENAME with the name of the "source" document from which text is to be transcluded, and SECTIONNAME with the name of the section you want to transclude:
- <syntaxhighlight lang="wikitext" inline>{{PAGENAME|transcludesection=SECTIONNAME}}</syntaxhighlight>
Thus each section enclosed within such <syntaxhighlight lang="wikitext" inline>{{#ifeq: … }}</syntaxhighlight> markup will always be rendered when the transcludesection parameter is not set (when the document is viewed ordinarily, or when the document is transcluded without setting the transcludesection parameter as shown below), and will be rendered by transclusion on any page that does set transcludesection to the section's name. It will not be rendered by a transclusion that uses the transcludesection parameter but sets it to anything other than the name of the section.
Also, when PAGENAME is provided without a namespace, the wiki will assume that the page belongs to the Template namespace. To transclude from a mainspace article, prefix it with a colon, as :PAGENAME:
- <syntaxhighlight lang="wikitext" inline>{{:PAGENAME|transcludesection=SECTIONNAME}}</syntaxhighlight>
ExampleEdit
If we want to make the "Principal Criteria" and "Common Name" sections of WP:TITLE independently transcludable, we edit the WP:TITLE page and enclose the "Principal Criteria" section as follows: <syntaxhighlight lang="wikitext"> {{#ifeq:{{{transcludesection|principalcriteria}}}|principalcriteria| ... (text of "Principal Criteria" section) ... }} </syntaxhighlight> Similarly, we enclose the "Common Name" section with: <syntaxhighlight lang="wikitext"> {{#ifeq:{{{transcludesection|commonname}}}|commonname| ... (text of "Common Name" section) ... }} </syntaxhighlight> Then, to transclude the "Principal Criteria" section into another page, we insert into that page:
- <syntaxhighlight lang="wikitext" inline>{{WP:TITLE|transcludesection=principalcriteria}}</syntaxhighlight>
To transclude the "Common Name" section into another page, we insert into that page:
- <syntaxhighlight lang="wikitext" inline>{{WP:TITLE|transcludesection=commonname}}</syntaxhighlight>
Of course, the same page can transclude two or more sections this way by including multiple such lines.
There is no limit to how many selectable sections for transclusion a document can have. The only requirement is that each transcludesection be given a value that is unique within that page.
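As a further illustration, the following minimal sketch (the page title Wikipedia:Example policy and the section names naming and scope are placeholders, not actual pages) shows how the transcludesection parameter controls which sections are rendered:
<syntaxhighlight lang="wikitext">
<!-- On the source page "Wikipedia:Example policy": -->
{{#ifeq:{{{transcludesection|naming}}}|naming|
Text of the "Naming" section.
}}
{{#ifeq:{{{transcludesection|scope}}}|scope|
Text of the "Scope" section.
}}

<!-- On a target page: -->
{{Wikipedia:Example policy|transcludesection=naming}}  <!-- renders only the "Naming" section -->
{{Wikipedia:Example policy|transcludesection=scope}}   <!-- renders only the "Scope" section -->
{{Wikipedia:Example policy}}                           <!-- parameter not set: renders both sections -->
</syntaxhighlight>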
Additional markup for selectively transcluded sub-article leads
Per MOS:LEAD#Format of the first sentence, the first instance of the sub-article title should appear in bold in the first lead sentence of that article; this is often not desirable for a transclusion to a section of the parent article. In addition, the parent article is often wikilinked in the lead of a sub-article; when transcluded to the parent article, this wikilink will appear as bold text. The wikitext markup listed below can be used to address both of these problems.
To ensure that the article title is bolded in the first sentence of the sub-article, but unbolded and wikilinked in the transclusion to the parent article, make the following replacement in the sub-article's first lead sentence:
- Replace the bolded article title with <syntaxhighlight lang="wikitext" inline>Template:No selflink</syntaxhighlight>
If there is a wikilink to the parent article in the lead section of the sub-article, replacing the wikilink to the parent article with a Template:Tlx template will ensure that it is wikilinked in the sub-article's lead but not in the transclusion to the parent article. In other words:
- If the wikilink to the parent article is not a WP:Piped link, replace [[Template:Var]] with Template:Tnull in the sub-article's lead.
- If the wikilink to the parent article includes a pipe (e.g., this link), replace [[Template:Var|Template:Var]] with Template:Tnull in the sub-article's lead.
Drawbacks
{{#invoke:Shortcut|main}} Template:See
Like many software technologies, transclusion comes with a number of drawbacks. The most obvious is the cost in increased machine resources; to mitigate this to some extent, the software imposes template limits to reduce the complexity of pages. Some further drawbacks are listed below.
- Transcluded text may lack sources for statements that should be sourced where they appear, follow a different established reference style, or contain no-text cite errors or duplicate-key errors. (To help mitigate these, see Help:Cite errors.)
- Excerpts break the link between article code and article output.
- Changes made to transcluded content often do not appear in watchlists, resulting in unseen changes on the target page.
- Transcluded text may cause repeated links or have different varieties of English and date formats than the target page.
- Transclusions may not reflect protection levels, so transcluded text may have a different level of protection than the target page. See Cascading protection.
- Template:Tl and related templates may require using Template:Tag, Template:Tag and Template:Tag markup at the transcluded page to have selective content; that would require monitoring that the markup is sustained.
- Excerpts cause editors to monitor transcluded pages for "section heading" changes to ensure transclusion continues to work. (To help mitigate this, see MOS:BROKENSECTIONLINKS)
- Excerpts can result in content discussions over multiple talk pages that may have different considerations or objectives for readers.
Special pages
Some pages on Special:Specialpages can be transcluded, such as AllPages, PrefixIndex, NewFiles, NewPages, RecentChanges, WhatLinksHere (see help page), and RecentChangesLinked. Samples:
- <syntaxhighlight lang="wikitext" inline>{{Special:AllPages/General}}</syntaxhighlight> – a list of pages starting at "General".
- <syntaxhighlight lang="wikitext" inline>{{Special:PrefixIndex/General}}</syntaxhighlight> – a list of pages with prefix "General".
- <syntaxhighlight lang="wikitext" inline>{{Special:NewFiles/4}}</syntaxhighlight> – a gallery of the four most recently uploaded files.
- <syntaxhighlight lang="wikitext" inline>{{Special:NewPages/5}}</syntaxhighlight> – a list of the five most recently created pages.
- <syntaxhighlight lang="wikitext" inline>{{Special:RecentChanges/5}}</syntaxhighlight> – the five most recent changes.
- <syntaxhighlight lang="wikitext" inline>{{Special:RecentChangesLinked/General}}</syntaxhighlight> – recent changes to the pages linked from "General".
- <syntaxhighlight lang="wikitext" inline></syntaxhighlight> – user contributions prior to November 2002, limited to 50. Attempting to transclude <syntaxhighlight lang="wikitext" inline>Special:Categories</syntaxhighlight> will not result in an actual list of categories, but <syntaxhighlight lang="wikitext" inline></syntaxhighlight> can be used for this purpose.
Except for <syntaxhighlight lang="wikitext" inline>Special:RecentChangesLinked</syntaxhighlight>, the slash and the word or number after it can be omitted, giving a list of pages without a specific starting point, or a list of the default length.
URL parameters can be given like template parameters:
- <syntaxhighlight lang="wikitext" inline>{{Special:RecentChanges|namespace=10|limit=5}}</syntaxhighlight> – the five most recent changes in the "Template" namespace.
- <syntaxhighlight lang="wikitext" inline></syntaxhighlight> – the subpages for User:Jimbo Wales, but without the user page prefix.
Note: Transcluding certain special pages (such as Special:NewPages) can change the displayed title of the page.
Advanced concepts
{{#invoke:Labelled list hatnote|labelledList|Main article|Main articles|Main page|Main pages}}
Transclusion occurs before parsing: the content being transcluded is processed and embedded before the target page is parsed and fully rendered. Because of this, certain content – such as a character entity reference like &amp; assembled from syntax fragments (for example &a followed by mp;), or specific HTML components – can be inserted in its original form and preserved in the final render. However, this approach may cause pages to render incorrectly or violate the principle of least surprise for the reader.Template:Refn It should be used sparingly, and only when cleaner alternatives are not available. Emitting fragments of template syntax, such as opening braces (<syntaxhighlight lang="wikitext" inline>{{</syntaxhighlight>), is unlikely to re-parse correctly as template syntax in the target page, and it is unwise to rely on such behavior unless it is formally documented.
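As a rough sketch of this composition behavior (the template names Template:Entity-first and Template:Entity-second are hypothetical, invented for illustration), two templates could each hold a fragment of a character entity reference, which is only assembled and interpreted after transclusion:
<syntaxhighlight lang="wikitext">
<!-- Hypothetical Template:Entity-first contains only the text: -->
&a
<!-- Hypothetical Template:Entity-second contains only the text: -->
mp;
<!-- A target page combining the two fragments: -->
{{Entity-first}}{{Entity-second}}
<!-- After transclusion the assembled wikitext reads "&amp;",
     which the parser then renders as a single ampersand. -->
</syntaxhighlight>
Attempting the same with template braces – for example, one template emitting <syntaxhighlight lang="wikitext" inline>{{</syntaxhighlight> and another emitting the rest of a template call – is, as noted above, unlikely to re-parse correctly.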
Notes
See also
MediaWiki transclusion
- mw:Transclusion: a simple introduction (at MediaWiki).
- meta:Help:Embed page: gives basic information (at Meta-Wiki).
- Wikipedia:MediaWiki namespace
- meta:Help:MediaWiki namespace: at Meta-Wiki.
- meta:Help:Variable: information on MediaWiki variables (at Meta-Wiki).
- Help:Labeled section transclusion
- mw:Extension:Labeled Section Transclusion: at MediaWiki.
- Template:Slink: at Meta-Wiki.
Templates
- Help:A quick guide to templates: a simple introduction.
- Help:Template: more detailed description.
- meta:Help:Template: help at Meta-Wiki. Links to various other guides in the lead.
- mw:Help:Template: a simple introduction at MediaWiki.
- Wikipedia:Template index: a directory of available templates.
- Wikipedia:Template namespace: about the template namespace.
- Wikipedia:Template limits: limitations to complexity of pages.
Other
- Template:Tl and Template:Tl – userboxes for declaring one's stance on transclusion
- Wikipedia:Purge: to force transclusion of newly updated templates.
- Wikipedia:Substitution: the opposite of transclusion.
- Wikipedia:WikiProject Modular Articles: now defunct.
- Bugzilla:Request for template transclusion from Commons: a proposal for interwiki template support.
- Mw:User:Peter17/Reasonably efficient interwiki transclusion
Template:Portal templates navbox Template:Wikipedia technical help