{{Short description|Subject of study in ergodic theory}}
{{redirect|Area-preserving map|the map projection concept|Equal-area map}}

In [[mathematics]], a '''measure-preserving dynamical system''' is an object of study in the abstract formulation of [[dynamical systems]], and [[ergodic theory]] in particular. Measure-preserving systems obey the [[Poincaré recurrence theorem]], and are a special case of [[conservative system]]s. They provide the formal, mathematical basis for a broad range of physical systems, in particular many systems from [[classical mechanics]] (most [[dissipative system|non-dissipative]] systems) as well as systems in [[thermodynamic equilibrium]].

==Definition==
A measure-preserving dynamical system is defined as a [[probability space]] and a [[Invariant measure|measure-preserving]] transformation on it. In more detail, it is a system

:<math>(X, \mathcal{B}, \mu, T)</math>

with the following structure:
*<math>X</math> is a set,
*<math>\mathcal B</math> is a [[sigma-algebra|σ-algebra]] over <math>X</math>,
*<math>\mu:\mathcal{B}\rightarrow[0,1]</math> is a [[probability measure]], so that <math>\mu (X) = 1</math> and <math>\mu(\varnothing) = 0</math>,
*<math>T:X \rightarrow X</math> is a [[measurable function|measurable]] transformation which [[Invariant measure|preserves]] the measure <math>\mu</math>, i.e., <math>\forall A\in \mathcal{B}\;\; \mu(T^{-1}(A))=\mu(A)</math>.

==Discussion==
One may ask why the measure-preserving transformation is defined in terms of the inverse, <math>\mu(T^{-1}(A))=\mu(A)</math>, instead of the forward transformation, <math>\mu(T(A))=\mu(A)</math>. This can be understood intuitively.

Consider the typical measure on the unit interval <math>[0, 1]</math>, and the map

:<math>Tx = 2x \bmod 1 = \begin{cases} 2x & \text{if } x < 1/2, \\ 2x-1 & \text{if } x \geq 1/2. \end{cases}</math>

This is the [[Bernoulli map]]. Now, distribute an even layer of paint on the unit interval <math>[0, 1]</math>, and then map the paint forward. The paint on the <math>[0, 1/2]</math> half is spread thinly over all of <math>[0, 1]</math>, and so is the paint on the <math>[1/2, 1]</math> half. The two thin layers, taken together, recreate exactly the original paint thickness. More generally, the paint that arrives at a subset <math>A \subset [0, 1]</math> comes from the subset <math>T^{-1}(A)</math>. For the paint thickness to remain unchanged (measure-preserving), the mass of incoming paint should be the same: <math>\mu(A) = \mu(T^{-1}(A))</math>.

Consider a mapping <math>\mathcal{T}</math> of [[power set]]s:

:<math>\mathcal{T}:P(X)\to P(X)</math>

Consider now the special case of maps <math>\mathcal{T}</math> which preserve intersections, unions and complements (so that they are maps of [[Borel set]]s) and also send <math>X</math> to <math>X</math> (because we want them to be [[conservative system|conservative]]). Every such conservative, Borel-preserving map can be specified by some [[surjective]] map <math>T:X\to X</math> by writing <math>\mathcal{T}(A)=T^{-1}(A)</math>. Of course, one could also define <math>\mathcal{T}(A)=T(A)</math>, but this is not enough to specify all such possible maps <math>\mathcal{T}</math>. That is, conservative, Borel-preserving maps <math>\mathcal{T}</math> cannot, in general, be written in the form <math>\mathcal{T}(A)=T(A)</math>. The quantity <math>\mu(T^{-1}(A))</math> has the form of a [[Pushforward measure|pushforward]], whereas <math>\mu(T(A))</math> is generically called a [[pullback]].
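The measure-preserving property of the Bernoulli map can be checked numerically. The following is a minimal Python sketch (the interval <math>A</math> and the sample size are arbitrary illustrative choices, not part of the formal development): it estimates the Lebesgue measure of an interval <math>A</math> and of its preimage <math>T^{-1}(A)</math> by uniform sampling, and both estimates agree with <math>\mu(A)</math>.

<syntaxhighlight lang="python">
import random

def T(x):
    """Bernoulli (doubling) map on [0, 1)."""
    return (2 * x) % 1.0

random.seed(0)
n = 200_000
a, b = 0.3, 0.8                       # A = [0.3, 0.8), so mu(A) = 0.5
samples = [random.random() for _ in range(n)]

mu_A = sum(a <= x < b for x in samples) / n
mu_preimage_A = sum(a <= T(x) < b for x in samples) / n   # x is in T^{-1}(A) iff T(x) is in A

print(mu_A, mu_preimage_A)            # both are close to b - a = 0.5
</syntaxhighlight>

By contrast, the forward image <math>T([0.3, 0.8))</math> is all of <math>[0, 1)</math>, with measure 1, which illustrates why the forward condition <math>\mu(T(A))=\mu(A)</math> is not the right notion of measure preservation here.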
Almost all properties and behaviors of dynamical systems are defined in terms of the pushforward. For example, the [[transfer operator]] is defined in terms of the pushforward of the transformation map <math>T</math>; the measure <math>\mu</math> can now be understood as an [[invariant measure]]; it is just the [[Perron–Frobenius theorem|Frobenius–Perron eigenvector]] of the transfer operator (recall that the Frobenius–Perron eigenvector is the eigenvector of a matrix with the largest eigenvalue; here, it is the eigenvector with eigenvalue one: the invariant measure).

There are two classification problems of interest. One, discussed below, fixes <math>(X, \mathcal{B}, \mu)</math> and asks about the isomorphism classes of a transformation map <math>T</math>. The other, discussed in [[transfer operator]], fixes <math>(X, \mathcal{B})</math> and <math>T</math>, and asks about maps <math>\mu</math> that are measure-like: they preserve the Borel properties, but are no longer invariant; they are in general dissipative and so give insights into [[dissipative system]]s and the route to equilibrium.

In terms of physics, the measure-preserving dynamical system <math>(X, \mathcal{B}, \mu, T)</math> often describes a physical system that is in equilibrium, for example, [[thermodynamic equilibrium]]. One might ask: how did it get that way? Often, the answer is by stirring, [[mixing (mathematics)|mixing]], [[turbulence]], [[thermalization]] or other such processes. If a transformation map <math>T</math> describes this stirring, mixing, etc., then the system <math>(X, \mathcal{B}, \mu, T)</math> is all that is left after all of the transient modes have decayed away. The transient modes are precisely those eigenvectors of the transfer operator that have eigenvalue less than one; the invariant measure <math>\mu</math> is the one mode that does not decay away. The rates of decay of the transient modes are given by (the logarithms of) their eigenvalues; the eigenvalue one corresponds to an infinite half-life.
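For the Bernoulli map above, this picture can be made concrete with a small numerical sketch (the discretization into 64 equal bins is an arbitrary illustrative choice): the transfer operator restricted to densities that are piecewise constant on the bins becomes a matrix, and power iteration recovers its leading (Frobenius–Perron) eigenvector, namely the uniform density, i.e. the invariant Lebesgue measure.

<syntaxhighlight lang="python">
import numpy as np

# Transfer operator of T(x) = 2x mod 1 acting on densities that are piecewise
# constant on n equal bins of [0, 1).  For such densities,
#   (L rho)(bin i) = ( rho(bin i//2) + rho(bin (i+n)//2) ) / 2
# exactly, which gives an n x n matrix representation.
n = 64
L = np.zeros((n, n))
for i in range(n):
    L[i, i // 2] += 0.5
    L[i, (i + n) // 2] += 0.5

# Power iteration: the eigenvector with eigenvalue one is the invariant density.
rho = np.random.default_rng(0).random(n)
rho /= rho.sum()
for _ in range(100):
    rho = L @ rho

print(rho * n)   # approximately all ones: the invariant density is uniform
</syntaxhighlight>

All other eigenvalues of this matrix are strictly smaller in modulus; they govern the transient modes, which decay away under repeated application of the operator.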
==Informal example==
The [[microcanonical ensemble]] from physics provides an informal example. Consider, for example, a fluid, gas or plasma in a box of width, length and height <math>w\times l\times h,</math> consisting of <math>N</math> atoms. A single atom in that box might be anywhere, having arbitrary velocity; it would be represented by a single point in <math>w\times l\times h\times \mathbb{R}^3.</math> A given collection of <math>N</math> atoms would then be a ''single point'' somewhere in the space <math>(w\times l\times h)^N \times \mathbb{R}^{3N}.</math> The "ensemble" is the collection of all such points, that is, the collection of all such possible boxes (of which there are an uncountably-infinite number). This ensemble of all-possible-boxes is the space <math>X</math> above.

In the case of an [[ideal gas]], the measure <math>\mu</math> is given by the [[Maxwell–Boltzmann distribution]]. It is a [[product measure]], in that if <math>p_i(x,y,z,v_x,v_y,v_z)\,d^3x\,d^3p</math> is the probability of atom <math>i</math> having position and velocity <math>x,y,z,v_x,v_y,v_z</math>, then, for <math>N</math> atoms, the probability is the product of <math>N</math> of these. This measure is understood to apply to the ensemble. So, for example, one of the possible boxes in the ensemble has all of the atoms on one side of the box. One can compute the likelihood of this, in the Maxwell–Boltzmann measure. It is vanishingly small, of order <math>\mathcal{O}\left(2^{-3N}\right).</math> Of all the possible boxes in the ensemble, such configurations make up only a tiny fraction.

The only reason that this is an "informal example" is because writing down the transition function <math>T</math> is difficult and, even if written down, it is hard to perform practical computations with it. Difficulties are compounded if there are interactions between the particles themselves, like a [[Van der Waals force|van der Waals interaction]] or some other interaction suitable for a liquid or a plasma; in such cases, the invariant measure is no longer the Maxwell–Boltzmann distribution. The art of physics is finding reasonable approximations.

This system does exhibit one key idea from the classification of measure-preserving dynamical systems: two ensembles having different temperatures are inequivalent. The entropy for a given canonical ensemble depends on its temperature; as physical systems, it is "obvious" that when the temperatures differ, so do the systems. This holds in general: systems with different entropy are not isomorphic.

==Examples==
[[Image:Exampleergodicmap.svg|thumb|Example of a ([[Lebesgue measure]]) preserving map: ''T'' : [0,1) → [0,1), <math>x \mapsto 2x \mod 1.</math>]]
Unlike the informal example above, the examples below are sufficiently well-defined and tractable that explicit, formal computations can be performed.
* μ could be the normalized angle measure dθ/2π on the [[unit circle]], and ''T'' a rotation. See [[equidistribution theorem]];
* the [[Bernoulli scheme]];
* the [[interval exchange transformation]];
* with the definition of an appropriate measure, a [[subshift of finite type]];
* the [[base flow (random dynamical systems)|base flow]] of a [[random dynamical system]];
* the flow of a Hamiltonian vector field on the tangent bundle of a closed connected smooth manifold is measure-preserving (using the measure induced on the Borel sets by the [[Volume form#Symplectic manifolds|symplectic volume form]]) by [[Liouville's theorem (Hamiltonian)]];<ref name=walters2000>{{cite book |last=Walters |first=Peter |title=An Introduction to Ergodic Theory |year=2000 |publisher=Springer |isbn=0-387-95152-0 }}</ref>
* for certain maps and [[Markov chain|Markov processes]], the [[Krylov–Bogolyubov theorem]] establishes the existence of a suitable measure to form a measure-preserving dynamical system.

==Generalization to groups and monoids==
The definition of a measure-preserving dynamical system can be generalized to the case in which ''T'' is not a single transformation that is iterated to give the dynamics of the system, but instead is a [[monoid]] (or even a [[group (mathematics)|group]], in which case we have the [[Group action|action of a group]] upon the given probability space) of transformations ''T<sub>s</sub>'' : ''X'' → ''X'' parametrized by ''s'' ∈ '''Z''' (or '''R''', or '''N''' ∪ {0}, or [0, +∞)), where each transformation ''T<sub>s</sub>'' satisfies the same requirements as ''T'' above.<ref name=walters2000/> In particular, the transformations obey the rules:
* <math>T_0 = \mathrm{id}_X :X \rightarrow X</math>, the [[identity function]] on ''X'';
* <math>T_{s} \circ T_{t} = T_{t + s}</math>, whenever all the terms are [[well-defined]];
* <math>T_{s}^{-1} = T_{-s}</math>, whenever all the terms are well-defined.
The earlier, simpler case fits into this framework by defining ''T<sub>s</sub>'' = ''T<sup>s</sup>'' for ''s'' ∈ '''N'''.
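A one-parameter group of measure-preserving transformations can be illustrated with the circle rotation flow. The following minimal Python sketch (the rotation speed <code>ALPHA</code>, the test interval and the sample size are arbitrary illustrative choices) checks the group laws and estimates that each <math>T_s</math> preserves the uniform measure on the circle.

<syntaxhighlight lang="python">
import random

ALPHA = 0.1234   # rotation speed (an arbitrary choice)

def T(s, x):
    """One-parameter group of circle rotations on [0, 1): T_s(x) = x + s*ALPHA (mod 1)."""
    return (x + s * ALPHA) % 1.0

# Group laws: T_0 = id, T_s o T_t = T_{s+t}, and T_{-s} inverts T_s.
x, s, t = 0.625, 1.7, -0.4
assert abs(T(0, x) - x) < 1e-9
assert abs(T(s, T(t, x)) - T(s + t, x)) < 1e-9
assert abs(T(-s, T(s, x)) - x) < 1e-9

# Each T_s preserves the uniform (Lebesgue) measure: the Monte Carlo estimate of
# mu(T_s^{-1}(A)) matches mu(A) for A = [0.2, 0.5).
random.seed(1)
n = 200_000
a, b = 0.2, 0.5
hits = sum(a <= T(s, random.random()) < b for _ in range(n))
print(hits / n)   # close to b - a = 0.3
</syntaxhighlight>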
==Homomorphisms==
The concept of a [[homomorphism]] and an [[isomorphism]] may be defined.

Consider two dynamical systems <math>(X, \mathcal{A}, \mu, T)</math> and <math>(Y, \mathcal{B}, \nu, S)</math>. Then a mapping

:<math>\varphi:X \to Y</math>

is a '''homomorphism of dynamical systems''' if it satisfies the following three properties:
# The map <math>\varphi</math> is [[measurable function|measurable]].
# For each <math>B \in \mathcal{B}</math>, one has <math>\mu (\varphi^{-1}B) = \nu(B)</math>.
# For [[almost everywhere|<math>\mu</math>-almost all]] <math>x \in X</math>, one has <math>\varphi(Tx) = S(\varphi x)</math>.
The system <math>(Y, \mathcal{B}, \nu, S)</math> is then called a '''factor''' of <math>(X, \mathcal{A}, \mu, T)</math>.

The map <math>\varphi</math> is an '''isomorphism of dynamical systems''' if, in addition, there exists another mapping

:<math>\psi:Y \to X</math>

that is also a homomorphism, which satisfies
# for <math>\mu</math>-almost all <math>x \in X</math>, one has <math>x = \psi(\varphi x)</math>;
# for <math>\nu</math>-almost all <math>y \in Y</math>, one has <math>y = \varphi(\psi y)</math>.

Hence, one may form a [[category (mathematics)|category]] of dynamical systems and their homomorphisms.

==Generic points==
A point ''x'' ∈ ''X'' is called a '''generic point''' if the [[orbit (dynamics)|orbit]] of the point is [[ergodic theorem|distributed uniformly]] according to the measure.

==Symbolic names and generators==
Consider a dynamical system <math>(X, \mathcal{B}, T, \mu)</math>, and let ''Q'' = {''Q''<sub>1</sub>, ..., ''Q<sub>k</sub>''} be a [[partition of a set|partition]] of ''X'' into ''k'' measurable pair-wise disjoint sets. Given a point ''x'' ∈ ''X'', clearly ''x'' belongs to only one of the ''Q<sub>i</sub>''. Similarly, the iterated point ''T<sup>n</sup>x'' can belong to only one of the parts as well. The '''symbolic name''' of ''x'', with regards to the partition ''Q'', is the sequence of integers {''a''<sub>''n''</sub>} such that

:<math>T^nx \in Q_{a_n}.</math>

The set of symbolic names with respect to a partition is called the [[symbolic dynamics]] of the dynamical system. A partition ''Q'' is called a '''generator''' or '''generating partition''' if μ-almost every point ''x'' has a unique symbolic name. A worked example for the Bernoulli map is sketched below.

==Operations on partitions==
Given a partition ''Q'' = {''Q''<sub>1</sub>, ..., ''Q''<sub>''k''</sub>} and a dynamical system <math>(X, \mathcal{B}, T, \mu)</math>, define the ''T''-pullback of ''Q'' as

:<math> T^{-1}Q = \{T^{-1}Q_1,\ldots,T^{-1}Q_k\}.</math>

Further, given two [[partition of a set|partitions]] ''Q'' = {''Q''<sub>1</sub>, ..., ''Q<sub>k</sub>''} and ''R'' = {''R''<sub>1</sub>, ..., ''R''<sub>''m''</sub>}, define their [[join (sigma algebra)|refinement]] as

:<math> Q \vee R = \{Q_i \cap R_j \mid i=1,\ldots,k,\ j=1,\ldots,m,\ \mu(Q_i \cap R_j) > 0 \}.</math>

With these two constructs, the ''refinement of an iterated pullback'' is defined as

:<math> \begin{align} \bigvee_{n=0}^N T^{-n}Q & = \{Q_{i_0} \cap T^{-1}Q_{i_1} \cap \cdots \cap T^{-N}Q_{i_N} \\ & {} \qquad \mbox { where }i_\ell = 1,\ldots,k ,\ \ell=0,\ldots,N,\ \\ & {} \qquad \qquad \mu \left (Q_{i_0} \cap T^{-1}Q_{i_1} \cap \cdots \cap T^{-N}Q_{i_N} \right )>0 \} \\ \end{align} </math>

which plays a crucial role in the construction of the measure-theoretic entropy of a dynamical system.
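For the Bernoulli map of the Discussion section, symbolic names are easy to compute explicitly. The following minimal Python sketch (the choice of map, partition and starting point is purely illustrative) computes the symbolic name of a point with respect to the two-element partition ''Q''<sub>0</sub> = [0, 1/2), ''Q''<sub>1</sub> = [1/2, 1); for this map the symbolic name is the binary expansion of the point, so this partition is a generator.

<syntaxhighlight lang="python">
def T(x):
    """Bernoulli (doubling) map T(x) = 2x mod 1 on [0, 1)."""
    return (2 * x) % 1.0

def symbolic_name(x, n_steps):
    """Symbolic name of x for the partition Q_0 = [0, 1/2), Q_1 = [1/2, 1):
    the sequence a_n such that T^n(x) lies in Q_{a_n}."""
    name = []
    for _ in range(n_steps):
        name.append(0 if x < 0.5 else 1)
        x = T(x)
    return name

x = 0.8125                        # = 0.1101 in binary
print(symbolic_name(x, 6))        # [1, 1, 0, 1, 0, 0]
</syntaxhighlight>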
==Measure-theoretic entropy==
{{see also|approximate entropy}}
The [[information entropy|entropy]] of a partition <math>\mathcal{Q}</math> is defined as<ref>{{cite journal |first=Ya. G. |last=Sinai |year=1959 |title=On the Notion of Entropy of a Dynamical System |journal=[[Proceedings of the USSR Academy of Sciences|Doklady Akademii Nauk SSSR]] |volume=124 |pages=768–771 }}</ref><ref>{{cite web |first=Ya. G. |last=Sinai |year=2007 |url=https://web.math.princeton.edu/facultypapers/Sinai/MetricEntropy2.pdf |title=Metric Entropy of Dynamical System }}</ref>

:<math>H(\mathcal{Q})=-\sum_{Q \in \mathcal{Q}}\mu (Q) \log \mu(Q).</math>

The measure-theoretic entropy of a dynamical system <math>(X, \mathcal{B}, T, \mu)</math> with respect to a partition ''Q'' = {''Q''<sub>1</sub>, ..., ''Q''<sub>''k''</sub>} is then defined as

:<math>h_\mu(T,\mathcal{Q}) = \lim_{N \rightarrow \infty} \frac{1}{N} H\left(\bigvee_{n=0}^N T^{-n}\mathcal{Q}\right).</math>

Finally, the '''Kolmogorov–Sinai entropy''', '''metric entropy''' or '''measure-theoretic entropy''' of a dynamical system <math>(X, \mathcal{B},T,\mu)</math> is defined as

:<math>h_\mu(T) = \sup_{\mathcal{Q}} h_\mu(T,\mathcal{Q}),</math>

where the [[supremum]] is taken over all finite measurable partitions. A theorem of [[Yakov Sinai]] in 1959 shows that the supremum is actually obtained on partitions that are generators. Thus, for example, the entropy of the [[Bernoulli process]] is log 2, since [[almost every]] [[real number]] has a unique [[binary expansion]]. That is, one may partition the [[unit interval]] into the intervals <nowiki>[</nowiki>0, 1/2<nowiki>)</nowiki> and [1/2, 1]. Every real number ''x'' is either less than 1/2 or not; and likewise so is the fractional part of 2<sup>''n''</sup>''x''.

If the space ''X'' is compact and endowed with a topology, or is a metric space, then the [[topological entropy]] may also be defined.

If <math>T</math> is ergodic, piecewise expanding, and Markov on <math>X \subset \R</math>, and <math>\mu</math> is absolutely continuous with respect to the Lebesgue measure, then we have the Rokhlin formula<ref>''[https://web.archive.org/web/20240117051216/https://www.mat.univie.ac.at/~bruin/ET1_lect15.pdf The Shannon-McMillan-Breiman Theorem]''</ref> (section 4.3 and section 12.3 of the book by Pollicott and Yuri<ref>{{Cite book |last=Pollicott |first=Mark |url=https://www.cambridge.org/core/books/dynamical-systems-and-ergodic-theory/3C1AA7BE85F5D2EE027D60CC72FDBEB8 |title=Dynamical Systems and Ergodic Theory |last2=Yuri |first2=Michiko |date=1998 |publisher=Cambridge University Press |isbn=978-0-521-57294-1 |series=London Mathematical Society Student Texts |location=Cambridge}}</ref>):

<math display="block">h_{\mu }(T) = \int \ln |dT/dx| \, \mu(dx).</math>

This allows calculation of the entropy of many interval maps, such as the [[logistic map]]. Here, ergodic means that <math>T^{-1}(A) = A</math> implies that <math>A</math> has full measure or zero measure. Piecewise expanding means that there is a partition of <math>X</math> into finitely many open intervals such that, for some <math>\epsilon > 0</math>, <math>|T'| \geq 1 + \epsilon</math> on each open interval. Markov means that for each pair <math>I_i, I_j</math> of those open intervals, either <math>T(I_i) \cap I_j = \emptyset</math> or <math>T(I_i) \cap I_j = I_j</math>.
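The Rokhlin formula can be evaluated numerically. The following minimal Python sketch (not drawn from the article's sources) uses the standard facts that the logistic map <math>T(x) = 4x(1-x)</math> has the invariant density <math>1/\left(\pi\sqrt{x(1-x)}\right)</math> (the arcsine distribution) and metric entropy <math>\ln 2</math>; it estimates <math>\int \ln|T'(x)|\,\mu(dx)</math> by Monte Carlo sampling from that density.

<syntaxhighlight lang="python">
import math
import random

# Rokhlin formula h_mu(T) = integral of ln|T'(x)| dmu(x) for the logistic map T(x) = 4x(1-x).
# The invariant (arcsine) measure is sampled via x = sin^2(pi*u/2) with u uniform on [0, 1).
random.seed(0)
n = 1_000_000
total = 0.0
for _ in range(n):
    u = random.random()
    x = math.sin(math.pi * u / 2.0) ** 2
    total += math.log(abs(4.0 - 8.0 * x))    # ln|T'(x)| with T'(x) = 4 - 8x

print(total / n)   # approximately ln 2 = 0.6931...
</syntaxhighlight>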
==Classification and anti-classification theorems==
One of the primary activities in the study of measure-preserving systems is their classification according to their properties. That is, let <math>(X, \mathcal{B}, \mu)</math> be a measure space, and let <math>U</math> be the set of all measure-preserving systems <math>(X, \mathcal{B}, \mu, T)</math>. An isomorphism <math>S\sim T</math> of two transformations <math>S, T</math> defines an [[equivalence relation]] <math>\mathcal{R}\subset U\times U.</math> The goal is then to describe the relation <math>\mathcal{R}</math>. A number of classification theorems have been obtained, but, interestingly, a number of anti-classification theorems have been found as well. The anti-classification theorems state that there are uncountably many isomorphism classes, and that a countable amount of information is not sufficient to classify isomorphisms.<ref>{{cite journal |first1=Matthew |last1=Foreman |first2=Benjamin |last2=Weiss |year=2019 |arxiv=1703.07093 |title=From Odometers to Circular Systems: A Global Structure Theorem |journal=Journal of Modern Dynamics |volume=15 |pages=345–423 |doi=10.3934/jmd.2019024 |s2cid=119128525 }}</ref><ref>{{cite journal |first1=Matthew |last1=Foreman |first2=Benjamin |last2=Weiss |date=2022 |arxiv=1705.04414 |title=Measure preserving Diffeomorphisms of the Torus are unclassifiable |journal=[[Journal of the European Mathematical Society]] |volume=24 |issue=8 |pages=2605–2690 |doi=10.4171/JEMS/1151 |doi-access=free}}</ref>

The first anti-classification theorem, due to Hjorth, states that if <math>U</math> is endowed with the [[weak topology]], then the set <math>\mathcal{R}</math> is not a [[Borel set]].<ref>{{cite journal |first=G. |last=Hjorth |year=2001 |title=On invariants for measure preserving transformations |journal=Fund. Math. |volume=169 |issue=1 |pages=51–84 |doi=10.4064/FM169-1-2 |s2cid=55619325 |doi-access=free }}</ref> There are a variety of other anti-classification results. For example, replacing isomorphism with [[Kakutani's theorem (measure theory)|Kakutani equivalence]], it can be shown that there are uncountably many non-Kakutani-equivalent ergodic measure-preserving transformations of each entropy type.<ref>{{cite book |first1=D. |last1=Ornstein |authorlink1=Donald Samuel Ornstein |first2=D. |last2=Rudolph |first3=B. |last3=Weiss |year=1982 |title=Equivalence of measure preserving transformations |series=Mem. American Mathematical Soc. |volume=37 |issue=262 |isbn=0-8218-2262-4 }}</ref>

These results stand in contrast to the classification theorems, which include:
* Ergodic measure-preserving transformations with a pure point spectrum have been classified.<ref>{{cite journal |first1=P. |last1=Halmos |first2=J. |last2=von Neumann |year=1942 |title=Operator methods in classical mechanics. II. |journal=Annals of Mathematics |series=(2) |volume=43 |issue=2 |pages=332–350 |doi=10.2307/1968872 |jstor=1968872 }}</ref>
* [[Bernoulli shift]]s are classified by their metric entropy.<ref>{{cite journal |first=Ya. |last=Sinai |year=1962 |title=A weak isomorphism of transformations with invariant measure |journal=[[Proceedings of the USSR Academy of Sciences|Doklady Akademii Nauk SSSR]] |volume=147 |pages=797–800 }}</ref><ref>{{cite journal |first=D. |last=Ornstein |authorlink=Donald Samuel Ornstein |year=1970 |title=Bernoulli shifts with the same entropy are isomorphic |journal=[[Advances in Mathematics]] |volume=4 |issue=3 |pages=337–352 |doi=10.1016/0001-8708(70)90029-0 |doi-access=free }}</ref><ref>{{cite book |first1=A. |last1=Katok |first2=B. |last2=Hasselblatt |year=1995 |chapter=Introduction to the modern theory of dynamical systems |title=Encyclopedia of Mathematics and its Applications |volume=54 |publisher=Cambridge University Press }}</ref> See [[Ornstein theory]] for more.
{{Math theorem
| math_statement = Let <math display="inline">T</math> be an invertible, measure-preserving, ergodic transformation of a Lebesgue space of measure 1. If <math>h_T \leq \ln k</math> for some integer <math>k</math>, then the system has a size-<math>k</math> generator. If the entropy is exactly equal to <math>\ln k</math>, then such a generator exists iff the system is isomorphic to the Bernoulli shift on <math>k</math> symbols with equal measures.
| name = Krieger finite generator theorem<ref>{{Cite book |last=Downarowicz |first=Tomasz |title=Entropy in dynamical systems |date=2011 |publisher=Cambridge University Press |isbn=978-0-521-88885-1 |series=New Mathematical Monographs |location=Cambridge |page=106}}</ref>
| note = Krieger 1970
}}

==See also==
* {{annotated link|Krylov–Bogolyubov theorem}} on the existence of invariant measures
* {{annotated link|Poincaré recurrence theorem}}

==References==
{{reflist}}

==Further reading==
* Michael S. Keane, "Ergodic theory and subshifts of finite type", (1991), appearing as Chapter 2 in ''Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces'', Tim Bedford, Michael Keane and Caroline Series, Eds. Oxford University Press, Oxford (1991). {{isbn|0-19-853390-X}} ''(Provides an expository introduction, with exercises and extensive references.)''
* [[Lai-Sang Young]], "Entropy in Dynamical Systems" ([http://www.math.nyu.edu/~lsy/papers/entropy.pdf pdf]; [http://www.math.nyu.edu/~lsy/papers/entropy.ps ps]), appearing as Chapter 16 in ''Entropy'', Andreas Greven, Gerhard Keller, and Gerald Warnecke, eds. Princeton University Press, Princeton, NJ (2003). {{isbn|0-691-11338-6}}
* T. Schürmann and I. Hoffmann, ''The entropy of strange billiards inside n-simplexes.'' J. Phys. A 28(17), page 5033, 1995. [https://arxiv.org/abs/nlin/0208048 PDF-Document] ''(Gives a more involved example of a measure-preserving dynamical system.)''

{{Measure theory}}

[[Category:Dynamical systems]]
[[Category:Entropy]]
[[Category:Entropy and information]]
[[Category:Information theory]]
[[Category:Measure theory]]