{{short description|Basic quantity derived from the probability of a particular event occurring from a random variable}}
{{cleanup|reason=unclear terminology|date=June 2017}}
In [[information theory]], the '''information content''', '''self-information''', '''surprisal''', or '''Shannon information''' is a basic quantity derived from the [[probability]] of a particular [[Event (probability theory)|event]] occurring from a [[random variable]]. It can be thought of as an alternative way of expressing probability, much like [[odds]] or [[log-odds]], but one with particular mathematical advantages in the setting of information theory.

The Shannon information can be interpreted as quantifying the level of "surprise" of a particular outcome. As it is such a basic quantity, it also appears in several other settings, such as the length of a message needed to transmit the event given an optimal [[Shannon's source coding theorem|source coding]] of the random variable.

The Shannon information is closely related to ''[[Entropy (information theory)|entropy]]'', which is the expected value of the self-information of a random variable, quantifying how surprising the random variable is "on average". This is the average amount of self-information an observer would expect to gain about a random variable when measuring it.<ref>Jones, D. S., ''Elementary Information Theory'', Clarendon Press, Oxford, pp. 11–15, 1979.</ref>

The information content can be expressed in various [[units of information]], of which the most common is the "bit" (more formally called the ''shannon''), as explained below. The term 'perplexity' has been used in language modelling to quantify the uncertainty inherent in a set of prospective events.{{Citation needed|reason=This is not common knowledge, needs a source|date=May 2025}}

== Definition ==
[[Claude Shannon]]'s definition of self-information was chosen to meet several axioms:
# An event with probability 100% is perfectly unsurprising and yields no information.
# The less probable an event is, the more surprising it is and the more information it yields.
# If two independent events are measured separately, the total amount of information is the sum of the self-informations of the individual events.

The detailed derivation is below, but it can be shown that there is a unique function of probability that meets these three axioms, up to a multiplicative scaling factor. Broadly, given a real number <math>b>1</math> and an [[Event (probability theory)|event]] <math>x</math> with [[probability]] <math>P</math>, the information content is defined as follows:
<math display="block">\mathrm{I}(x) := - \log_b{\left[\Pr{\left(x\right)}\right]} = -\log_b{\left(P\right)}. </math>

The base ''b'' corresponds to the scaling factor above. Different choices of ''b'' correspond to different units of information: when {{nowrap|1=''b'' = 2}}, the unit is the [[Shannon (unit)|shannon]] (symbol Sh), often called a 'bit'; when {{nowrap|1=''b'' = [[Euler's number|e]]}}, the unit is the [[Nat (unit)|natural unit of information]] (symbol nat); and when {{nowrap|1=''b'' = 10}}, the unit is the [[Hartley (unit)|hartley]] (symbol Hart).
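The definition is straightforward to evaluate numerically. The following minimal sketch (the function name and the example probability are arbitrary, chosen only for illustration) computes the information content of the same event in shannons, nats and hartleys:

<syntaxhighlight lang="python">
import math

def information_content(p: float, base: float = 2) -> float:
    """Self-information -log_b(p) of an event with probability p (illustrative helper)."""
    if not 0 < p <= 1:
        raise ValueError("p must lie in (0, 1]")
    return -math.log(p, base)

p = 0.25
print(information_content(p, base=2))       # 2.0 shannons (bits)
print(information_content(p, base=math.e))  # about 1.386 nats
print(information_content(p, base=10))      # about 0.602 hartleys
</syntaxhighlight>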
Formally, given a discrete random variable <math>X</math> with [[probability mass function]] <math>p_{X}{\left(x\right)}</math>, the self-information of measuring <math>X</math> as [[Outcome (probability)|outcome]] <math>x</math> is defined as<ref name=":0">{{Cite book|title=Quantum Computing Explained|last=McMahon|first=David M.|publisher=Wiley-Interscience|year=2008|isbn=9780470181386 |location=Hoboken, NJ|oclc=608622533}}</ref>
<math display="block">\operatorname I_X(x) := - \log{\left[p_{X}{\left(x\right)}\right]} = \log{\left(\frac{1}{p_{X}{\left(x\right)}}\right)}. </math>

The use of the notation <math>I_X(x)</math> for self-information above is not universal. Since the notation <math>I(X;Y)</math> is also often used for the related quantity of [[mutual information]], many authors use a lowercase <math>h_X(x)</math> for self-entropy instead, mirroring the use of the capital <math>H(X)</math> for the entropy.

== Properties ==
{{Expand section|date=October 2018}}

=== Monotonically decreasing function of probability ===
For a given [[probability space]], measurements of rarer [[event (probability theory)|event]]s are intuitively more "surprising", and yield more information content, than more common values. Thus, self-information is a [[Monotonic function|strictly decreasing monotonic function]] of the probability, sometimes called an "antitonic" function.

While standard probabilities are represented by real numbers in the interval <math>[0, 1]</math>, self-informations are represented by [[extended real number]]s in the interval <math>[0, \infty]</math>. In particular, we have the following, for any choice of logarithmic base:
* If a particular event has a 100% probability of occurring, then its self-information is <math>-\log(1) = 0</math>: its occurrence is "perfectly non-surprising" and yields no information.
* If a particular event has a 0% probability of occurring, then its self-information is <math>-\log(0) = \infty</math>: its occurrence is "infinitely surprising".

From this, we can get a few general properties:
* Intuitively, more information is gained from observing an unexpected event, since it is "surprising".
** For example, if there is a [[wikt:one in a million|one-in-a-million]] chance of Alice winning the [[lottery]], her friend Bob will gain significantly more information from learning that she [[Winning the lottery|won]] than that she lost on a given day. (See also ''[[Lottery mathematics]]''.)
* This establishes an implicit relationship between the self-information of a [[random variable]] and its [[variance]].

=== Relationship to log-odds ===
The Shannon information is closely related to the [[log-odds]]. In particular, given some event <math>x</math>, suppose that <math>p(x)</math> is the probability of <math>x</math> occurring, and that <math>p(\lnot x) = 1-p(x)</math> is the probability of <math>x</math> not occurring. Then we have the following definition of the log-odds:
<math display="block">\text{log-odds}(x) = \log\left(\frac{p(x)}{p(\lnot x)}\right)</math>

This can be expressed as a difference of two Shannon informations:
<math display="block">\text{log-odds}(x) = \mathrm{I}(\lnot x) - \mathrm{I}(x)</math>

In other words, the log-odds can be interpreted as the level of surprise when the event ''doesn't'' happen, minus the level of surprise when the event ''does'' happen.
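This identity can be checked numerically; in the sketch below (base 2, with an arbitrarily chosen probability) the directly computed log-odds equals the difference of the two surprisals:

<syntaxhighlight lang="python">
import math

def surprisal(p: float) -> float:
    """Self-information in shannons (base 2)."""
    return -math.log2(p)

p = 0.8                                        # probability that the event occurs (arbitrary)
log_odds = math.log2(p / (1 - p))              # direct definition of the log-odds
difference = surprisal(1 - p) - surprisal(p)   # I(not x) - I(x)
print(log_odds, difference)                    # both equal 2.0
</syntaxhighlight>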
=== Additivity of independent events ===
The information content of two [[independent events]] is the sum of each event's information content. This property is known as [[Additive map|additivity]] in mathematics, and [[sigma additivity]] in particular in [[Measure (mathematics)|measure]] and probability theory. Consider two [[independent random variables]] <math display="inline">X,\, Y</math> with [[probability mass function]]s <math>p_X(x)</math> and <math>p_Y(y)</math> respectively. The [[joint probability mass function]] is
<math display="block"> p_{X, Y}\!\left(x, y\right) = \Pr(X = x,\, Y = y) = p_X\!(x)\,p_Y\!(y) </math>
because <math display="inline">X</math> and <math display="inline">Y</math> are [[Independence (probability theory)|independent]]. The information content of the [[Outcome (probability)|outcome]] <math> (X, Y) = (x, y)</math> is
<math display="block"> \begin{align} \operatorname{I}_{X,Y}(x, y) &= -\log_2\left[p_{X,Y}(x, y)\right] = -\log_2 \left[p_X\!(x)p_Y\!(y)\right] \\[5pt] &= -\log_2 \left[p_X{(x)}\right] -\log_2 \left[p_Y{(y)}\right] \\[5pt] &= \operatorname{I}_X(x) + \operatorname{I}_Y(y) \end{align} </math>
See ''{{Section link||Two independent, identically distributed dice|nopage=y}}'' below for an example.

The corresponding property for [[likelihood]]s is that the [[log-likelihood]] of independent events is the sum of the log-likelihoods of each event. Interpreting log-likelihood as "support" or negative surprisal (the degree to which an event supports a given model: a model is supported by an event to the extent that the event is unsurprising, given the model), this states that independent events add support: the information that the two events together provide for statistical inference is the sum of their independent information.

==Relationship to entropy==
The [[Shannon entropy]] of the random variable <math>X </math> above is [[Shannon entropy#Definition|defined as]]
<math display="block">\begin{alignat}{2} \Eta(X) &= \sum_{x} {-p_{X}{\left(x\right)} \log{p_{X}{\left(x\right)}}} \\ &= \sum_{x} {p_{X}{\left(x\right)} \operatorname{I}_X(x)} \\ &{\overset{\underset{\mathrm{def}}{}}{=}} \ \operatorname{E}{\left[\operatorname{I}_X (X)\right]}, \end{alignat} </math>
which is by definition equal to the [[Expected value|expected]] information content of a measurement of <math>X </math>.<ref>{{cite book|url=https://books.google.com/books?id=Lyte2yl1SPAC&pg=PA11|title=Fundamentals in Information Theory and Coding|author=Borda, Monica|publisher=Springer|year=2011|isbn=978-3-642-20346-6}}</ref>{{rp|11}}<ref>{{cite book|url=https://books.google.com/books?id=VpRESN24Zj0C&pg=PA19|title=Mathematics of Information and Coding|publisher=American Mathematical Society|year=2002|isbn=978-0-8218-4256-0|author1=Han, Te Sun |author2=Kobayashi, Kingo }}</ref>{{rp|19–20}} The expectation is taken over the [[discrete random variable|discrete values]] in its [[Support (mathematics)|support]].

Sometimes, the entropy itself is called the "self-information" of the random variable, possibly because the entropy satisfies <math>\Eta(X) = \operatorname{I}(X; X)</math>, where <math>\operatorname{I}(X;X)</math> is the [[mutual information]] of <math>X</math> with itself.<ref>Thomas M. Cover, Joy A. Thomas, ''Elements of Information Theory'', p. 20, 1991.</ref>

For [[Continuous Random Variables|continuous random variables]] the corresponding concept is [[differential entropy]].
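The relation <math>\Eta(X) = \operatorname{E}\left[\operatorname{I}_X(X)\right]</math> can be evaluated directly. The sketch below (using an arbitrary biased coin, chosen only for illustration) computes the entropy as the probability-weighted average of the self-informations:

<syntaxhighlight lang="python">
import math

def entropy(pmf):
    """Shannon entropy H(X) = E[I_X(X)] in shannons, for a dict of probabilities."""
    return sum(-p * math.log2(p) for p in pmf.values() if p > 0)

# A biased coin (probabilities chosen arbitrarily for illustration):
pmf = {"H": 0.9, "T": 0.1}
print(entropy(pmf))  # about 0.469 Sh, less than the 1 Sh of a fair coin
</syntaxhighlight>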
== Notes ==
This measure has also been called '''surprisal''', as it represents the "[[surprise (emotion)|surprise]]" of seeing the outcome (a highly improbable outcome is very surprising). This term (as a log-probability measure) was introduced by [[Edward W. Samson]] in his 1951 report "Fundamental natural concepts of information theory".<ref name="samson53"> {{Cite journal| volume = 10| issue = 4, Summer 1953, special issue on information theory| pages = 283–297| last = Samson| first = Edward W.| title = Fundamental natural concepts of information theory| journal = ETC: A Review of General Semantics| date = 1953| url = https://www.jstor.org/stable/42581366| jstor = 42581366|orig-date = Originally published October 1951 as Tech Report No. E5079, Air Force Cambridge Research Center}} </ref><ref name="attneave">{{cite book | last=Attneave | first=Fred | title=Applications of Information Theory to Psychology: A Summary of Basic Concepts, Methods, and Results |publisher=Holt, Rinehart and Winston | publication-place=New York| edition=1 | date=1959 }}</ref> An early appearance in the physics literature is in [[Myron Tribus]]' 1961 book ''Thermostatics and Thermodynamics''.<ref name="Bernstein1972">{{cite journal | url=https://aip.scitation.org/doi/abs/10.1063/1.1677983 | doi=10.1063/1.1677983 | title=Entropy and Chemical Change. I. Characterization of Product (And Reactant) Energy Distributions in Reactive Molecular Collisions: Information and Entropy Deficiency | date=1972 | last1=Bernstein | first1=R. B. | last2=Levine | first2=R. D. | journal=The Journal of Chemical Physics | volume=57 | issue=1 | pages=434–449 | bibcode=1972JChPh..57..434B | url-access=subscription }}</ref><ref name="Tribus1961">[http://www.eoht.info/page/Myron+Tribus Myron Tribus] (1961), ''Thermostatics and Thermodynamics: An Introduction to Energy, Information and States of Matter, with Engineering Applications'', D. Van Nostrand, New York, pp. 64–66 [https://archive.org/details/thermostaticsthe00trib borrow].</ref>

When the event is a random realization of a variable, the self-information of the variable is defined as the [[expected value]] of the self-information of its realizations.{{citation-needed|date=February 2025}}

==Examples==

=== Fair coin toss ===
Consider the [[Bernoulli trial]] of [[coin flipping|tossing a fair coin]] <math>X</math>. The [[Probability|probabilities]] of the [[Event (probability theory)|events]] of the coin landing as heads <math>\text{H}</math> and tails <math>\text{T}</math> (see [[fair coin]] and [[obverse and reverse]]) are [[one half]] each, <math display="inline">p_X{(\text{H})} = p_X{(\text{T})} = \tfrac{1}{2} = 0.5</math>. Upon [[Sampling (signal processing)|measuring]] the variable as heads, the associated information gain is
<math display="block">\operatorname{I}_X(\text{H}) = -\log_2 {p_X{(\text{H})}} = -\log_2\!{\tfrac{1}{2}} = 1,</math>
so the information gain of a fair coin landing as heads is 1 [[Shannon (unit)|shannon]].<ref name=":0" /> Likewise, the information gain of measuring tails <math>\text{T}</math> is
<math display="block">\operatorname{I}_X(\text{T}) = -\log_2 {p_X{(\text{T})}} = -\log_2 {\tfrac{1}{2}} = 1 \text{ Sh}.</math>

=== Fair die roll ===
Suppose we have a [[Fair dice|fair six-sided die]]. The value of a die roll is a [[Discrete uniform distribution|discrete uniform random variable]] <math>X \sim \mathrm{DU}[1, 6]</math> with [[probability mass function]]
<math display="block">p_X(k) = \begin{cases} \frac{1}{6}, & k \in \{1, 2, 3, 4, 5, 6\} \\ 0, & \text{otherwise} \end{cases}</math>
The probability of rolling a 4 is <math display="inline">p_X(4) = \frac{1}{6}</math>, as for any other valid roll. The information content of rolling a 4 is thus
<math display="block">\operatorname{I}_{X}(4) = -\log_2{p_X{(4)}} = -\log_2{\tfrac{1}{6}} \approx 2.585 \text{ Sh}.</math>
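Both the coin and die values follow directly from the base-2 definition, as this minimal sketch illustrates:

<syntaxhighlight lang="python">
import math

def surprisal(p: float) -> float:
    """Self-information in shannons (base 2)."""
    return -math.log2(p)

print(surprisal(1 / 2))  # fair coin, either face: 1.0 Sh
print(surprisal(1 / 6))  # fair die, any face: about 2.585 Sh
</syntaxhighlight>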
=== Two independent, identically distributed dice ===
Suppose we have two [[Independent and identically distributed random variables|independent, identically distributed random variables]] <math display="inline">X,\, Y \sim \mathrm{DU}[1, 6]</math> each corresponding to an [[Independent random variables|independent]] fair 6-sided die roll. The [[Joint probability distribution|joint distribution]] of <math>X</math> and <math>Y</math> is
<math display="block"> \begin{align} p_{X, Y}\!\left(x, y\right) & {} = \Pr(X = x,\, Y = y) = p_X\!(x)\,p_Y\!(y) \\ & {} = \begin{cases} \displaystyle{1 \over 36}, \ &x, y \in [1, 6] \cap \mathbb{N} \\ 0 & \text{otherwise.} \end{cases} \end{align}</math>

The information content of the [[random variate]] <math> (X, Y) = (2,\, 4)</math> is
<math display="block"> \begin{align} \operatorname{I}_{X, Y}{(2, 4)} &= -\log_2\!{\left[p_{X,Y}{(2, 4)}\right]} = \log_2\!{36} = 2 \log_2\!{6} \\ & \approx 5.169925 \text{ Sh}, \end{align} </math>
and can also be calculated by [[#Additivity of independent events|additivity of events]]
<math display="block"> \begin{align} \operatorname{I}_{X, Y}{(2, 4)} &= -\log_2\!{\left[p_{X,Y}{(2, 4)}\right]} = -\log_2\!{\left[p_X(2)\right]} -\log_2\!{\left[p_Y(4)\right]} \\ & = 2\log_2\!{6} \\ & \approx 5.169925 \text{ Sh}. \end{align} </math>

==== Information from frequency of rolls ====
If we receive information about the value of the dice [[Twelvefold way#case fx|without knowledge]] of which die had which value, we can formalize the approach with so-called counting variables
<math display="block"> C_k := \delta_k(X) + \delta_k(Y) = \begin{cases} 0, & \neg\, (X = k \vee Y = k) \\ 1, & \quad X = k\, \veebar \, Y = k \\ 2, & \quad X = k\, \wedge \, Y = k \end{cases} </math>
for <math> k \in \{1, 2, 3, 4, 5, 6\}</math>. Then <math display="inline"> \sum_{k=1}^{6}{C_k} = 2</math> and the counts have the [[multinomial distribution]]
<math display="block"> \begin{align} f(c_1,\ldots,c_6) & {} = \Pr(C_1 = c_1 \text{ and } \dots \text{ and } C_6 = c_6) \\ & {} = \begin{cases} { \displaystyle {1\over{18}}{1 \over c_1!\cdots c_6!}}, \ & \text{when } \sum_{i=1}^6 c_i=2 \\ 0 & \text{otherwise,} \end{cases} \\ & {} = \begin{cases} {1 \over 18}, \ & \text{when two of the } c_k \text{ are } 1 \\ {1 \over 36}, \ & \text{when exactly one } c_k = 2 \\ 0, \ & \text{otherwise.} \end{cases} \end{align}</math>

To verify this, the 6 outcomes <math display="inline">(X, Y) \in \left\{(k, k)\right\}_{k = 1}^{6} = \left\{ (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6) \right\}</math> correspond to the event <math>C_k = 2</math> and have a [[total probability]] of {{Sfrac|6}}. These are the only outcomes for which the counts determine exactly which value each die showed, because both values are the same. The remaining <math display="inline"> \binom{6}{2} = 15</math> [[combination]]s correspond to one die showing one number and the other die showing a different number; without knowledge of which die showed which, each such combination has probability {{Sfrac|18}}. Indeed, <math display="inline"> 6 \cdot \tfrac{1}{36} + 15 \cdot \tfrac{1}{18} = 1</math>, as required.
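These probabilities can also be verified by enumerating the 36 equally likely ordered outcomes. The sketch below (using exact fractions; the variable names are illustrative only) recovers the values {{Sfrac|36}} and {{Sfrac|18}} above, together with the probabilities of a matching and a non-matching pair used below:

<syntaxhighlight lang="python">
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely ordered outcomes of two fair dice and
# group them by their unordered value pair (which die showed which is forgotten;
# the pair determines the count vector (C_1, ..., C_6)).
counts = {}
for x, y in product(range(1, 7), repeat=2):
    pair = tuple(sorted((x, y)))
    counts[pair] = counts.get(pair, Fraction(0)) + Fraction(1, 36)

print(counts[(2, 2)])   # 1/36: both dice show 2
print(counts[(3, 4)])   # 1/18: one die shows 3, the other 4
same = sum(p for pair, p in counts.items() if pair[0] == pair[1])
print(same, 1 - same)   # 1/6 and 5/6 for matching and non-matching pairs
</syntaxhighlight>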
Unsurprisingly, the information content of learning that both dice showed the same particular number is more than the information content of learning that one die showed one number and the other showed a different number. Take for example the events <math> A_k = \{(X, Y) = (k, k)\}</math> and <math> B_{j, k} = \{C_j = 1\} \cap \{C_k = 1\}</math> for <math> j \ne k, 1 \leq j, k \leq 6</math>. For instance, <math> A_2 = \{X = 2 \text{ and } Y = 2\}</math> and <math> B_{3, 4} = \{(3, 4), (4, 3)\}</math>.

The information contents are
<math display="block"> \operatorname{I}(A_2) = -\log_2\!{\tfrac{1}{36}} \approx 5.169925 \text{ Sh}</math>
<math display="block"> \operatorname{I}\left(B_{3, 4}\right) = - \log_2 \! \tfrac{1}{18} \approx 4.169925 \text{ Sh}</math>

Let <math display="inline"> \text{Same} = \bigcup_{i = 1}^{6}{A_i}</math> be the event that both dice rolled the same value and <math> \text{Diff} = \overline{\text{Same}}</math> be the event that the dice differed. Then <math display="inline"> \Pr(\text{Same}) = \tfrac{1}{6}</math> and <math display="inline"> \Pr(\text{Diff}) = \tfrac{5}{6}</math>. The information contents of the events are
<math display="block"> \operatorname{I}(\text{Same}) = -\log_2\!{\tfrac{1}{6}} \approx 2.5849625 \text{ Sh}</math>
<math display="block"> \operatorname{I}(\text{Diff}) = -\log_2\!{\tfrac{5}{6}} \approx 0.2630344 \text{ Sh}.</math>

==== Information from sum of dice ====
The probability mass or density function (collectively [[probability measure]]) of the [[Sum of independent random variables|sum of two independent random variables]] is the [[Convolution#Convolution of measures|convolution of their probability measures]]. In the case of independent fair 6-sided dice rolls, the random variable <math> Z = X + Y</math> has probability mass function <math display="inline"> p_Z(z) = \left(p_X * p_Y\right)(z) = {6 - |z - 7| \over 36} </math>, where <math> *</math> represents the [[discrete convolution]]. The [[Outcome (probability)|outcome]] <math> Z = 5 </math> has probability <math display="inline"> p_Z(5) = \frac{4}{36} = {1 \over 9} </math>. Therefore, the information content of this outcome is
<math display="block"> \operatorname{I}_Z(5) = -\log_2{\tfrac{1}{9}} = \log_2{9} \approx 3.169925 \text{ Sh}. </math>
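The convolution and the resulting information content can be reproduced with a short sketch (exact fractions; the variable names are arbitrary):

<syntaxhighlight lang="python">
import math
from fractions import Fraction

# Probability mass function of one fair die.
die = {k: Fraction(1, 6) for k in range(1, 7)}

# Discrete convolution: distribution of Z = X + Y for two independent dice.
p_Z = {}
for x, px in die.items():
    for y, py in die.items():
        p_Z[x + y] = p_Z.get(x + y, Fraction(0)) + px * py

print(p_Z[5])                     # 1/9
print(-math.log2(float(p_Z[5])))  # about 3.1699 Sh
</syntaxhighlight>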
=== General discrete uniform distribution ===
Generalizing the {{Section link||Fair die roll|nopage=y}} example above, consider a general [[discrete uniform random variable]] (DURV) <math>X \sim \mathrm{DU}[a,b]; \quad a, b \in \mathbb{Z}, \ b \ge a.</math> For convenience, define <math display="inline">N := b - a + 1</math>. The [[probability mass function]] is
<math display="block">p_X(k) = \begin{cases} \frac{1}{N}, & k \in [a, b] \cap \mathbb{Z} \\ 0, & \text{otherwise}. \end{cases}</math>
In general, the values of the DURV need not be [[integer]]s, or for the purposes of information theory even uniformly spaced; they need only be [[equiprobable]].<ref name=":0" /> The information gain of any observation <math>X = k</math> is
<math display="block">\operatorname{I}_X(k) = -\log_2{\frac{1}{N}} = \log_2{N} \text{ Sh}.</math>

==== Special case: constant random variable ====
If <math>b = a</math> above, <math>X</math> [[Degeneracy (mathematics)|degenerates]] to a [[constant random variable]] with probability distribution deterministically given by <math>X = b</math> and probability measure given by the [[Dirac measure]] <math display="inline">p_X(k) = \delta_{b}(k)</math>. The only value <math>X</math> can take is [[Deterministic system|deterministically]] <math>b</math>, so the information content of any measurement of <math>X</math> is
<math display="block">\operatorname{I}_X(b) = - \log_2{1} = 0.</math>
In general, there is no information gained from measuring a known value.<ref name=":0" />

=== Categorical distribution ===
Generalizing all of the above cases, consider a [[Categorical variable|categorical]] [[discrete random variable]] with [[Support (mathematics)|support]] <math display="inline">\mathcal{S} = \bigl\{s_i\bigr\}_{i=1}^{N}</math> and [[probability mass function]] given by
<math display="block">p_X(k) = \begin{cases} p_i, & k = s_i \in \mathcal{S} \\ 0, & \text{otherwise} . \end{cases}</math>

For the purposes of information theory, the values <math>s \in \mathcal{S}</math> do not have to be [[number]]s; they can be any [[Mutually exclusive#Probability|mutually exclusive]] [[Event (probability theory)|events]] on a [[measure space]] of [[finite measure]] that has been [[Normalization (statistics)|normalized]] to a [[probability measure]] <math>p</math>. [[Without loss of generality]], we can assume the categorical distribution is supported on the set <math display="inline">[N] = \left\{1, 2, \dots, N \right\}</math>; the mathematical structure is [[Isomorphism|isomorphic]] in terms of [[probability theory]] and therefore [[information theory]] as well. The information of the outcome <math>X = x</math> is given by
<math display="block">\operatorname{I}_X(x) = -\log_2{p_X(x)}.</math>

From these examples, it is possible to calculate the information of any set of [[Independent random variables|independent]] [[Discrete Random Variable|DRVs]] with known [[Probability distribution|distributions]] by [[Sigma additivity|additivity]].
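Since only the probabilities enter the formula, the computation is identical for any labelling of the support. The sketch below uses arbitrary, purely illustrative categories and probabilities:

<syntaxhighlight lang="python">
import math

# The support need not be numeric; only the probabilities matter.
# Categories and probabilities below are illustrative.
pmf = {"red": 0.5, "green": 0.25, "blue": 0.125, "yellow": 0.125}
for outcome, p in pmf.items():
    print(outcome, -math.log2(p), "Sh")  # 1.0, 2.0, 3.0 and 3.0 Sh respectively
</syntaxhighlight>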
==Derivation==
By definition, information is transferred from an originating entity possessing the information to a receiving entity only when the receiver did not already know the information [[A priori knowledge|a priori]]. If the receiving entity had previously known the content of a message with certainty before receiving the message, the amount of information of the message received is zero. Only when the advance knowledge of the content of the message by the receiver is less than 100% certain does the message actually convey information.

For example, quoting a character (the Hippy Dippy Weatherman) of comedian [[George Carlin]]:
<blockquote>''Weather forecast for tonight: dark.'' ''Continued dark overnight, with widely scattered light by morning.''<ref>{{Cite web|title=A quote by George Carlin |url=https://www.goodreads.com/quotes/94336-weather-forecast-for-tonight-dark-continued-dark-overnight-with-widely|access-date=2021-04-01|website=www.goodreads.com}}</ref></blockquote>
Assuming that one does not reside near the [[Polar regions of Earth|polar regions]], the amount of information conveyed in that forecast is zero because it is known, in advance of receiving the forecast, that darkness always comes with the night.

Accordingly, the amount of self-information contained in a message conveying the occurrence of an [[event (probability theory)|event]] <math>\omega_n</math> depends only on the probability of that event:
<math display="block">\operatorname I(\omega_n) = f(\operatorname P(\omega_n)) </math>
for some function <math>f(\cdot)</math> to be determined below. If <math>\operatorname P(\omega_n) = 1</math>, then <math>\operatorname I(\omega_n) = 0</math>. If <math>\operatorname P(\omega_n) < 1</math>, then <math>\operatorname I(\omega_n) > 0</math>.

Further, by definition, the [[Measure (mathematics)|measure]] of self-information is nonnegative and additive. If a message informing of event <math>C</math> is the '''intersection''' of two [[statistical independence|independent]] events <math>A</math> and <math>B</math>, then the information of event <math>C</math> occurring is that of the compound message of both independent events <math>A</math> and <math>B</math> occurring. The quantity of information of compound message <math>C</math> would be expected to equal the '''sum''' of the amounts of information of the individual component messages <math>A</math> and <math>B</math> respectively:
<math display="block">\operatorname I(C) = \operatorname I(A \cap B) = \operatorname I(A) + \operatorname I(B).</math>

Because of the independence of events <math>A</math> and <math>B</math>, the probability of event <math>C</math> is
<math display="block">\operatorname P(C) = \operatorname P(A \cap B) = \operatorname P(A) \cdot \operatorname P(B).</math>

However, applying function <math>f(\cdot)</math> results in
<math display="block">\begin{align} \operatorname I(C) & = \operatorname I(A) + \operatorname I(B) \\ f(\operatorname P(C)) & = f(\operatorname P(A)) + f(\operatorname P(B)) \\ & = f\big(\operatorname P(A) \cdot \operatorname P(B)\big) \\ \end{align}</math>

Thanks to work on [[Cauchy's functional equation]], the only monotone functions <math>f(\cdot)</math> satisfying
<math display="block">f(x \cdot y) = f(x) + f(y)</math>
are the [[logarithm]] functions <math>\log_b(x)</math>. The only operational difference between logarithms of different bases is that of different scaling constants, so we may assume
<math display="block">f(x) = K \log(x)</math>
where <math>\log</math> is the [[natural logarithm]]. Since the probabilities of events are always between 0 and 1 and the information associated with these events must be nonnegative, this requires that <math>K<0</math>.

Taking into account these properties, the self-information <math>\operatorname I(\omega_n)</math> associated with outcome <math>\omega_n</math> with probability <math>\operatorname P(\omega_n)</math> is defined as:
<math display="block">\operatorname I(\omega_n) = -\log(\operatorname P(\omega_n)) = \log \left(\frac{1}{\operatorname P(\omega_n)} \right) </math>

The smaller the probability of event <math>\omega_n</math>, the larger the quantity of self-information associated with the message that the event indeed occurred. If the above logarithm is base 2, the unit of <math> I(\omega_n)</math> is the [[Shannon (unit)|shannon]]. This is the most common practice. When using the [[natural logarithm]] of base <math> e</math>, the unit will be the [[Nat (unit)|nat]]. For the base 10 logarithm, the unit of information is the [[Hartley (unit)|hartley]].

As a quick illustration, the information content associated with an outcome of 4 heads (or any specific outcome) in 4 consecutive tosses of a coin would be 4 shannons (probability 1/16), and the information content associated with getting a result other than the one specified would be ~0.09 shannons (probability 15/16). See above for detailed examples.

== See also ==
* [[Kolmogorov complexity]]
* [[Surprisal analysis]]

== References ==
{{Reflist}}

== Further reading ==
*[[Claude Shannon|C.E. Shannon]], [[A Mathematical Theory of Communication]], ''Bell System Technical Journal'', Vol. 27, pp. 379–423 (Part I), 1948.
== External links ==
* [http://www.umsl.edu/~fraundor/egsurpri.html Examples of surprisal measures]
* [https://web.archive.org/web/20120717011943/http://www.lecb.ncifcrf.gov/~toms/glossary.html#surprisal "Surprisal" entry in a glossary of molecular information theory]
* [http://ilab.usc.edu/surprise/ Bayesian Theory of Surprise]

{{Authority control}}

[[Category:Information theory]]
[[Category:Entropy and information]]