In [[information theory]], the '''typical set''' is a set of sequences whose [[probability]] is close to two raised to the negative power of the [[Information entropy|entropy]] of their source distribution. That this set has total [[probability]] close to one is a consequence of the [[asymptotic equipartition property]] (AEP), which is a kind of [[law of large numbers]]. The notion of typicality is concerned only with the probability of a sequence, not with the actual sequence itself.

This has great use in [[data compression|compression]] theory, as it provides a theoretical means for compressing data: any sequence ''X''<sup>''n''</sup> can be represented using ''nH''(''X'') bits on average, which justifies the use of entropy as a measure of information from a source. The AEP can also be proven for a large class of [[stationary ergodic process]]es, allowing the typical set to be defined in more general cases.

Additionally, the typical set concept is foundational in understanding the limits of data transmission and error correction in communication systems. By leveraging the properties of typical sequences, one obtains efficient coding schemes, such as those underlying Shannon's [[Shannon's source coding theorem|source coding theorem]] and the [[Noisy-channel coding theorem|channel coding theorem]], enabling near-optimal data compression and reliable transmission over noisy channels.

==(Weakly) typical sequences (weak typicality, entropy typicality)==

If a sequence ''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub> is drawn from an [[Independent identically-distributed random variables|independent identically-distributed random variable]] (IID) ''X'' defined over a finite alphabet <math>\mathcal{X}</math>, then the typical set ''A''<sub>''ε''</sub><sup>(''n'')</sup> <math>\subseteq\mathcal{X}^n</math> is defined as the set of sequences which satisfy

:<math> 2^{-n( H(X)+\varepsilon)} \leqslant p(x_1, x_2, \dots , x_n) \leqslant 2^{-n( H(X)-\varepsilon)}, </math>

where

:<math> H(X) = - \sum_{x \in \mathcal{X}}p(x)\log_2 p(x) </math>

is the information entropy of ''X''. The probability of a typical sequence is thus required only to be within a factor of 2<sup>''nε''</sup> of 2<sup>−''nH''(''X'')</sup>. Taking the logarithm on all sides and dividing by −''n'', this definition can be equivalently stated as

:<math> H(X) - \varepsilon \leq -\frac{1}{n}\log_2 p(x_1, x_2, \ldots, x_n) \leq H(X) + \varepsilon.</math>

For an i.i.d. sequence, since

:<math>p(x_1, x_2, \ldots, x_n) = \prod_{i=1}^n p(x_i),</math>

we further have

:<math> H(X) - \varepsilon \leq -\frac{1}{n} \sum_{i=1}^n \log_2 p(x_i) \leq H(X) + \varepsilon.</math>

By the law of large numbers, for sufficiently large ''n'',

:<math>-\frac{1}{n} \sum_{i=1}^n \log_2 p(x_i) \rightarrow H(X).</math>

===Properties===

An essential characteristic of the typical set is that, if one draws a large number ''n'' of independent random samples from the distribution ''X'', the resulting sequence (''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x''<sub>''n''</sub>) is very likely to be a member of the typical set, even though the typical set comprises only a small fraction of all the possible sequences. Formally, given any <math>\varepsilon>0</math>, one can choose ''n'' such that:

#The probability of a sequence drawn from ''X'' being in ''A''<sub>''ε''</sub><sup>(''n'')</sup> is at least 1 − ''ε'', i.e. <math>\Pr\left[x^{(n)} \in A_\varepsilon^{(n)}\right] \geq 1 - \varepsilon.</math>
#<math>\left| {A_\varepsilon}^{(n)} \right| \leqslant 2^{n(H(X)+\varepsilon)}</math>
#<math>\left| {A_\varepsilon}^{(n)} \right| \geqslant (1-\varepsilon)2^{n(H(X)-\varepsilon)}</math>
#If the distribution over <math>\mathcal{X}</math> is not uniform, then the fraction of sequences that are typical is
#:<math>\frac{|A_\varepsilon^{(n)}|}{|\mathcal{X}|^n} \approx \frac{2^{nH(X)}}{2^{n\log_2|\mathcal{X}|}} = 2^{-n(\log_2|\mathcal{X}|-H(X))} \rightarrow 0 </math>
#:as ''n'' becomes very large, since <math>H(X) < \log_2|\mathcal{X}|,</math> where <math>|\mathcal{X}|</math> is the [[cardinality]] of <math>\mathcal{X}</math>.

For a general stochastic process {''X''(''t'')} with AEP, the (weakly) typical set can be defined similarly, with ''p''(''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x''<sub>''n''</sub>) replaced by ''p''(''x''<sub>0</sub><sup>''τ''</sup>) (i.e. the probability of the sample limited to the time interval [0, ''τ'']), ''n'' being the [[degrees of freedom (physics and chemistry)|degrees of freedom]] of the process in the time interval and ''H''(''X'') being the [[entropy rate]]. If the process is continuous-valued, [[differential entropy]] is used instead.
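For illustration, the membership test implied by the definition and the first property can be checked numerically. The following sketch is not part of the standard treatment; the source distribution, block length ''n'', number of trials, and ''ε'' are arbitrary choices, and the simulation merely estimates the probability that an i.i.d. sequence falls in the weakly typical set.

<syntaxhighlight lang="python">
import math
import random

def entropy(p):
    """Shannon entropy (in bits) of a distribution given as {symbol: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def is_weakly_typical(seq, p, eps):
    """True if -(1/n) log2 p(seq) lies within eps of H(X) for the i.i.d. source p."""
    avg_neg_log = -sum(math.log2(p[x]) for x in seq) / len(seq)
    h = entropy(p)
    return h - eps <= avg_neg_log <= h + eps

# Example source distribution, block length and epsilon (arbitrary choices).
p = {'a': 0.5, 'b': 0.25, 'c': 0.25}
n, eps, trials = 1000, 0.05, 2000

rng = random.Random(0)
symbols, weights = zip(*p.items())
hits = sum(is_weakly_typical(rng.choices(symbols, weights, k=n), p, eps)
           for _ in range(trials))

print(f"H(X) = {entropy(p):.3f} bits")
print(f"fraction of sampled sequences that are typical: {hits / trials:.3f}")
</syntaxhighlight>

For this source the printed fraction approaches 1 as ''n'' grows, in line with the first property above, even though the typical set contains only about 2<sup>''nH''(''X'')</sup> of the 3<sup>''n''</sup> possible sequences.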
===Example===

Counter-intuitively, the most likely sequence is often not a member of the typical set. For example, suppose that ''X'' is an i.i.d. [[Bernoulli distribution|Bernoulli random variable]] with ''p''(0) = 0.1 and ''p''(1) = 0.9. In ''n'' independent trials, since ''p''(1) > ''p''(0), the most likely sequence of outcomes is the sequence of all 1's, (1,1,...,1). Here the entropy of ''X'' is ''H''(''X'') = 0.469, while

:<math> -\frac{1}{n}\log_2 p\left(x^{(n)}=(1,1,\ldots,1)\right) = -\frac{1}{n}\log_2 (0.9^n) = 0.152.</math>

So this sequence is not in the typical set, because its average logarithmic probability cannot come arbitrarily close to the entropy of the random variable ''X'' no matter how large we take the value of ''n''.

For Bernoulli random variables, the typical set consists of sequences in which the numbers of 0's and 1's are close to their average values over ''n'' independent trials. This is easily demonstrated: if ''p''(1) = ''p'' and ''p''(0) = 1 − ''p'', then for ''n'' trials with ''m'' 1's, we have

:<math> -\frac{1}{n} \log_2 p(x^{(n)}) = -\frac{1}{n} \log_2 p^m (1-p)^{n-m} = -\frac{m}{n} \log_2 p - \left( \frac{n-m}{n} \right) \log_2 (1-p).</math>

The average number of 1's in a sequence of Bernoulli trials is ''m'' = ''np''. Thus, we have

:<math> -\frac{1}{n} \log_2 p(x^{(n)}) = - p \log_2 p - (1-p) \log_2 (1-p) = H(X).</math>

For this example, if ''n'' = 10, then the typical set consists of all sequences that have a single 0 in the entire sequence. If ''p''(0) = ''p''(1) = 0.5, then every possible binary sequence belongs to the typical set.
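The numbers in this example can be reproduced with a short computation; the value of ''ε'' below is an arbitrary illustrative choice.

<syntaxhighlight lang="python">
import math

p1 = 0.9                                    # p(1) from the example above
h = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))
print(f"H(X) = {h:.3f} bits")               # about 0.469

# Average log-probability of the all-ones sequence (independent of n):
print(f"-1/n log2 p(1,...,1) = {-math.log2(p1):.3f}")   # about 0.152, far from H(X)

# For n = 10 and a small eps, list the counts m of 1's that give typical sequences.
n, eps = 10, 0.1
for m in range(n + 1):
    avg = -(m * math.log2(p1) + (n - m) * math.log2(1 - p1)) / n
    if h - eps <= avg <= h + eps:
        print(f"m = {m} ones: -1/n log2 p = {avg:.3f} (typical)")
</syntaxhighlight>

Only ''m'' = 9 (a single 0) is reported as typical for these parameters, matching the statement above; the all-ones sequence (''m'' = 10) gives 0.152 and is excluded.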
==Strongly typical sequences (strong typicality, letter typicality)==

If a sequence ''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub> is drawn from some specified joint distribution defined over a finite or an infinite alphabet <math>\mathcal{X}</math>, then the strongly typical set ''A''<sub>''ε'',strong</sub><sup>(''n'')</sup> <math>\subseteq\mathcal{X}^n</math> is defined as the set of sequences which satisfy

:<math> \left|\frac{N(a)}{n}-p(a)\right| < \frac{\varepsilon}{|\mathcal{X}|} \quad \text{for every } a \in \mathcal{X}, </math>

where <math>N(a)</math> is the number of occurrences of the symbol <math>a</math> in the sequence.

It can be shown that strongly typical sequences are also weakly typical (with a different constant ''ε''), and hence the name. The two forms, however, are not equivalent. Strong typicality is often easier to work with in proving theorems for memoryless channels. However, as is apparent from the definition, this form of typicality is only defined for random variables having finite support.

==Jointly typical sequences==

Two sequences <math>x^n</math> and <math>y^n</math> are jointly ε-typical if the pair <math>(x^n,y^n)</math> is ε-typical with respect to the joint distribution <math>p(x^n,y^n)=\prod_{i=1}^n p(x_i,y_i)</math> and both <math>x^n</math> and <math>y^n</math> are ε-typical with respect to their marginal distributions <math>p(x^n)</math> and <math>p(y^n)</math>. The set of all such pairs of sequences <math>(x^n,y^n)</math> is denoted by <math>A_{\varepsilon}^n(X,Y)</math>. Jointly ε-typical ''n''-tuple sequences are defined similarly.

Let <math>\tilde{X}^n</math> and <math>\tilde{Y}^n</math> be two independent sequences of random variables with the same marginal distributions <math>p(x^n)</math> and <math>p(y^n)</math>. Then for any ε > 0, for sufficiently large ''n'', jointly typical sequences satisfy the following properties:

#<math> P\left[ (X^n,Y^n) \in A_{\varepsilon}^n(X,Y) \right] \geqslant 1 - \varepsilon </math>
#<math> \left| A_{\varepsilon}^n(X,Y) \right| \leqslant 2^{n (H(X,Y) + \varepsilon)} </math>
#<math> \left| A_{\varepsilon}^n(X,Y) \right| \geqslant (1 - \varepsilon) 2^{n (H(X,Y) - \varepsilon)} </math>
#<math> P\left[ (\tilde{X}^n,\tilde{Y}^n) \in A_{\varepsilon}^n(X,Y) \right] \leqslant 2^{-n (I(X;Y) - 3 \varepsilon)} </math>
#<math> P\left[ (\tilde{X}^n,\tilde{Y}^n) \in A_{\varepsilon}^n(X,Y) \right] \geqslant (1 - \varepsilon) 2^{-n (I(X;Y) + 3 \varepsilon)}</math>

{{Expand section|date=December 2009}}

==Applications of typicality==
{{Expand section|date=December 2009}}

===Typical set encoding===
{{see|Shannon's source coding theorem}}
In [[information theory]], typical set encoding encodes only the sequences in the typical set of a stochastic source with fixed-length block codes. Since the size of the typical set is about 2<sup>''nH''(''X'')</sup>, only ''nH''(''X'') bits are required for the coding, while ensuring that the probability of an encoding error is limited to ''ε''. Asymptotically, it is, by the AEP, lossless and achieves the minimum rate equal to the entropy rate of the source.
{{Expand section|date=December 2009}}
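The idea can be sketched for a toy source by brute-force enumeration of the typical set, which is feasible only for very small ''n''; the helper names below are for illustration only, and the probability of an atypical (unencodable) block is only guaranteed to be small for large ''n''.

<syntaxhighlight lang="python">
import math
from itertools import product

def neg_log_prob_rate(seq, p):
    """-(1/n) log2 p(seq) for an i.i.d. source p."""
    return -sum(math.log2(p[x]) for x in seq) / len(seq)

def typical_set(p, n, eps):
    """Enumerate the weakly eps-typical set by brute force."""
    h = -sum(q * math.log2(q) for q in p.values() if q > 0)
    return [s for s in product(p, repeat=n)
            if h - eps <= neg_log_prob_rate(s, p) <= h + eps]

p = {0: 0.1, 1: 0.9}            # the Bernoulli source from the example above
n, eps = 10, 0.1

A = typical_set(p, n, eps)
index = {s: i for i, s in enumerate(A)}      # fixed-length codeword = index into A
bits = math.ceil(math.log2(len(A) + 1))      # +1 reserves an escape code for atypical blocks

print(f"|A| = {len(A)} sequences, {bits} bits per block (n*H(X) = {n * 0.469:.2f})")

block = (1, 1, 1, 1, 0, 1, 1, 1, 1, 1)       # a typical block: exactly one 0
code = index.get(block)                      # atypical blocks map to None (escape/error)
print(code, A[code] == block if code is not None else "atypical")
</syntaxhighlight>

Each typical block is mapped to a fixed-length index of roughly ''nH''(''X'') bits; atypical blocks, which occur with probability at most ''ε'' for sufficiently large ''n'', are handled by a reserved escape code or counted as errors.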
===Typical set decoding===
In [[information theory]], typical set decoding is used in conjunction with [[random coding]] to estimate the transmitted message as the unique message whose codeword is jointly ε-typical with the observation, i.e.

:<math>\hat{w}=w \iff w \text{ is the unique index such that } (x_1^n(w),y_1^n)\in A_{\varepsilon}^n(X,Y), </math>

where <math>\hat{w}</math>, <math>x_1^n(w)</math> and <math>y_1^n</math> are the message estimate, the codeword of message <math>w</math> and the observation, respectively. <math>A_{\varepsilon}^n(X,Y)</math> is defined with respect to the joint distribution <math>p(x_1^n)p(y_1^n|x_1^n)</math>, where <math>p(y_1^n|x_1^n)</math> is the transition probability that characterizes the channel statistics, and <math>p(x_1^n)</math> is some input distribution used to generate the codewords in the random codebook.
{{Expand section|date=December 2009}}

===Universal null-hypothesis testing===
{{Empty section|date=December 2009}}

===Universal channel code===
{{Expand section|date=December 2009}}
{{See also|algorithmic complexity theory}}

==See also==
* [[Asymptotic equipartition property]]
* [[Source coding theorem]]
* [[Noisy-channel coding theorem]]

==References==
* [[C. E. Shannon]], "[http://plan9.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf A Mathematical Theory of Communication]", ''[[Bell System Technical Journal]]'', vol. 27, pp. 379–423, 623–656, July and October 1948.
* {{Cite book | last = Cover | first = Thomas M. | last2 = Thomas | first2 = Joy A. | title = Elements of Information Theory | chapter = Chapter 3: Asymptotic Equipartition Property, Chapter 5: Data Compression, Chapter 8: Channel Capacity | year = 2006 | publisher = John Wiley & Sons | isbn = 0-471-24195-4 }}
* [[David J. C. MacKay]]. ''[http://www.inference.phy.cam.ac.uk/mackay/itila/book.html Information Theory, Inference, and Learning Algorithms]''. Cambridge: Cambridge University Press, 2003. {{isbn|0-521-64298-1}}

{{DEFAULTSORT:Typical Set}}
[[Category:Information theory]]
[[Category:Probability theory]]