
[[File:Total variation distance.svg|thumb|Total variation distance is half the absolute area between the two curves: half of the shaded area above.]]

In probability theory, the total variation distance is a statistical distance between probability distributions; it is sometimes called the statistical distance, statistical difference, or variational distance.

Definition

Consider a measurable space <math>(\Omega, \mathcal{F})</math> and probability measures <math>P</math> and <math>Q</math> defined on <math>(\Omega, \mathcal{F})</math>. The total variation distance between <math>P</math> and <math>Q</math> is defined as<ref name=Chatterjee2007>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>

<math>\delta(P,Q)=\sup_{ A\in \mathcal{F}}\left|P(A)-Q(A)\right|.</math>

This is the largest absolute difference between the probabilities that the two probability distributions assign to the same event.
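For illustration, the supremum can be evaluated by brute force on a small finite sample space (a minimal Python sketch; the two distributions below are arbitrary illustrative choices):

<syntaxhighlight lang="python">
from itertools import chain, combinations

# Two probability mass functions on the same three-point sample space (illustrative values).
P = {1: 0.5, 2: 0.3, 3: 0.2}
Q = {1: 0.2, 2: 0.4, 3: 0.4}

def prob(mass, event):
    """Probability assigned by a pmf to an event (a subset of the sample space)."""
    return sum(mass[x] for x in event)

# Enumerate every event A of the sample space and take the largest |P(A) - Q(A)|.
points = sorted(P)
events = chain.from_iterable(combinations(points, r) for r in range(len(points) + 1))
tv = max(abs(prob(P, A) - prob(Q, A)) for A in events)
print(tv)  # 0.3 (up to floating-point rounding), attained for example at A = {1}
</syntaxhighlight>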

Properties

The total variation distance is an f-divergence and an integral probability metric.

Relation to other distances

The total variation distance is related to the Kullback–Leibler divergence by Pinsker’s inequality:

<math>\delta(P,Q) \le \sqrt{\frac{1}{2} D_{\mathrm{KL}}(P\parallel Q)}.</math>

One also has the following inequality, due to Bretagnolle and Huber<ref>Bretagnolle, J.; Huber, C., Estimation des densités: risque minimax, Séminaire de Probabilités, XII (Univ. Strasbourg, Strasbourg, 1976/1977), pp. 342–363, Lecture Notes in Math., 649, Springer, Berlin, 1978, Lemma 2.1 (French).</ref> (see also<ref>Tsybakov, Alexandre B., Introduction to nonparametric estimation, Revised and extended from the 2004 French original. Translated by Vladimir Zaiats. Springer Series in Statistics. Springer, New York, 2009. xii+214 pp., Equation 2.25.</ref>), which has the advantage of providing a non-vacuous bound even when <math>\textstyle D_{\mathrm{KL}}(P\parallel Q)>2</math>:

<math>\delta(P,Q) \le \sqrt{1-e^{ -D_{\mathrm{KL}}(P\parallel Q) }}.</math>
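As a rough numerical comparison of the two bounds, the following Python sketch uses two arbitrarily chosen Bernoulli distributions whose Kullback–Leibler divergence exceeds 2:

<syntaxhighlight lang="python">
import math

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence D(Ber(p) || Ber(q)), in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

p, q = 0.5, 0.001                   # illustrative parameters
tv = abs(p - q)                     # total variation distance between Ber(p) and Ber(q)
d = kl_bernoulli(p, q)              # about 2.76 nats, so D_KL > 2

print(tv)                           # 0.499
print(math.sqrt(d / 2))             # Pinsker bound: about 1.17, vacuous here
print(math.sqrt(1 - math.exp(-d)))  # Bretagnolle-Huber bound: about 0.97
</syntaxhighlight>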

The total variation distance is half of the L1 distance between the probability functions. On discrete domains, it is half of the L1 distance between the probability mass functions:<ref>David A. Levin, Yuval Peres, Elizabeth L. Wilmer, Markov Chains and Mixing Times, 2nd rev. ed. (AMS, 2017), Proposition 4.2, p. 48.</ref>

<math>\delta(P, Q) = \frac12 \sum_{x} |P(x) - Q(x)|.</math>
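Continuing the small illustrative example above, the same value is obtained as half the L1 distance between the two probability mass functions:

<syntaxhighlight lang="python">
P = {1: 0.5, 2: 0.3, 3: 0.2}   # same illustrative pmfs as above
Q = {1: 0.2, 2: 0.4, 3: 0.4}

tv = 0.5 * sum(abs(P[x] - Q[x]) for x in P)
print(tv)  # 0.3 (up to floating-point rounding), matching the supremum over events
</syntaxhighlight>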

When the distributions have probability density functions <math>p</math> and <math>q</math>, one has similarly<ref>Template:Cite book</ref>

<math>\delta(P, Q) = \frac12 \int | p(x) - q(x) | \, \mathrm{d}x</math>

(or the analogous distance between Radon-Nikodym derivatives with any common dominating measure). This result can be shown by noticing that the supremum in the definition is achieved exactly at the set where one distribution dominates the other.<ref>Template:Cite book</ref>
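Concretely, writing <math>B = \{x : p(x) \ge q(x)\}</math> for the set where <math>p</math> dominates <math>q</math> (so that, as noted above, <math>\delta(P,Q) = P(B) - Q(B)</math>), one has

<math>\frac12 \int |p(x) - q(x)| \, \mathrm{d}x = \frac12 \int_B (p - q) \, \mathrm{d}x + \frac12 \int_{B^c} (q - p) \, \mathrm{d}x = \frac12 \big( P(B) - Q(B) \big) + \frac12 \big( Q(B^c) - P(B^c) \big) = P(B) - Q(B),</math>

since <math>Q(B^c) - P(B^c) = \big(1 - Q(B)\big) - \big(1 - P(B)\big) = P(B) - Q(B)</math>.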

The total variation distance is related to the Hellinger distance <math>H(P,Q)</math> as follows:<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>

<math>H^2(P,Q) \leq \delta(P,Q) \leq \sqrt 2 H(P,Q).</math>

These inequalities follow immediately from the inequalities between the 1-norm and the 2-norm.
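A quick numerical check of these bounds, using the illustrative discrete distributions above and assuming the normalization <math>H^2(P,Q) = \tfrac12 \sum_x \big(\sqrt{P(x)} - \sqrt{Q(x)}\big)^2</math> (other conventions for the Hellinger distance differ by constant factors):

<syntaxhighlight lang="python">
import math

P = {1: 0.5, 2: 0.3, 3: 0.2}   # same illustrative pmfs as above
Q = {1: 0.2, 2: 0.4, 3: 0.4}

tv = 0.5 * sum(abs(P[x] - Q[x]) for x in P)
# Hellinger distance with the 1/2 normalization: H^2 = (1/2) * sum (sqrt P - sqrt Q)^2
h = math.sqrt(0.5 * sum((math.sqrt(P[x]) - math.sqrt(Q[x])) ** 2 for x in P))

print(h**2, tv, math.sqrt(2) * h)       # roughly 0.054 <= 0.3 <= 0.330
print(h**2 <= tv <= math.sqrt(2) * h)   # True
</syntaxhighlight>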

Connection to transportation theory

The total variation distance (or half the L1 norm) arises as the optimal transportation cost when the cost function is <math>c(x,y) = {\mathbf{1}}_{x \neq y}</math>, that is,

<math>\frac{1}{2} \| P - Q \|_1 = \delta(P,Q) = \inf\big\{ \mathbb{P}(X\neq Y ) : \text{Law}(X) = P , \text{Law}(Y) = Q\big\} = \inf_\pi \operatorname{E}_{\pi}[{\mathbf{1}}_{x\neq y}],</math>

where the expectation is taken with respect to the probability measure <math>\pi</math> on the product space <math>\Omega \times \Omega</math>, and the infimum is taken over all such probability measures <math>\pi</math> with marginals <math>P</math> and <math>Q</math>, respectively.<ref>Template:Cite book</ref>
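On a finite sample space, one standard way to attain this infimum is the maximal-coupling construction; a minimal Python sketch (again with arbitrary illustrative distributions, assuming <math>P \neq Q</math> so that <math>\delta(P,Q) > 0</math>):

<syntaxhighlight lang="python">
# Maximal coupling of two pmfs: a joint law pi with marginals P and Q
# for which the probability of {X != Y} equals the total variation distance.
P = {1: 0.5, 2: 0.3, 3: 0.2}
Q = {1: 0.2, 2: 0.4, 3: 0.4}

overlap = {x: min(P[x], Q[x]) for x in P}       # common mass; its total is 1 - delta
delta = 1.0 - sum(overlap.values())             # total variation distance, here 0.3
p_excess = {x: P[x] - overlap[x] for x in P}    # mass of P above the overlap
q_excess = {x: Q[x] - overlap[x] for x in Q}    # mass of Q above the overlap

# Joint pmf: put the overlap on the diagonal and spread the excess mass off it.
# p_excess and q_excess have disjoint supports, so the product term never adds
# mass on the diagonal, and the off-diagonal mass is exactly delta.
pi = {(x, y): (overlap[x] if x == y else 0.0) + p_excess[x] * q_excess[y] / delta
      for x in P for y in Q}

print(sum(pi[(x, y)] for x in P for y in Q if x != y))            # 0.3 (up to rounding)
print(all(abs(sum(pi[(x, y)] for y in Q) - P[x]) < 1e-12 for x in P))  # True: first marginal is P
</syntaxhighlight>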


References

Template:Reflist

