{{short description|Basic quantity derived from the probability of a particular event occurring from a random variable}}
{{cleanup|reason=unclear terminology|date=June 2017}}
In [[information theory]], the '''information content''', '''self-information''', '''surprisal''', or '''Shannon information''' is a basic quantity derived from the [[probability]] of a particular [[Event (probability theory)|event]] occurring from a [[random variable]]. It can be thought of as an alternative way of expressing probability, much like [[odds]] or [[log-odds]], but one which has particular mathematical advantages in the setting of information theory.

The Shannon information can be interpreted as quantifying the level of "surprise" of a particular outcome. As it is such a basic quantity, it also appears in several other settings, such as the length of a message needed to transmit the event given an optimal [[Shannon's source coding theorem|source coding]] of the random variable.

The Shannon information is closely related to ''[[Entropy (information theory)|entropy]]'', which is the expected value of the self-information of a random variable, quantifying how surprising the random variable is "on average". This is the average amount of self-information an observer would expect to gain about a random variable when measuring it.<ref>Jones, D.S., ''Elementary Information Theory'', Vol., Clarendon Press, Oxford, pp. 11–15, 1979</ref>

The information content can be expressed in various [[units of information]], of which the most common is the "bit" (more formally called the ''shannon''), as explained below. The term 'perplexity' has been used in language modelling to quantify the uncertainty inherent in a set of prospective events.{{Citation needed|reason=This is not common knowledge, needs a source|date=May 2025}}
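As a minimal sketch of the quantities described above (taking logarithms to base 2, so that values are measured in shannons, i.e. bits), the self-information of an outcome <math>x</math> and the entropy of a discrete random variable <math>X</math> are

<math display="block">\operatorname{I}(x) = -\log_2 \Pr(x), \qquad \operatorname{H}(X) = \operatorname{E}\bigl[\operatorname{I}(X)\bigr] = -\sum_x \Pr(x)\,\log_2 \Pr(x).</math>

For example, a fair coin toss assigns <math>\Pr(\text{heads}) = \tfrac{1}{2}</math>, so observing heads carries <math>-\log_2 \tfrac{1}{2} = 1</math> shannon (one bit) of information; the entropy of the toss is likewise 1 bit, the average self-information over the two equally likely outcomes.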