== Capacity ==
[[Image:Noisy-channel coding theorem — channel capacity graph.png|thumb|right|300px|Graph showing the proportion of a channel's capacity (''y''-axis) that can be used for payload based on how noisy the channel is (probability of bit flips; ''x''-axis).]]

The [[channel capacity]] of the binary symmetric channel, in [[bit]]s, is:{{sfnp|MacKay|2003|p=15}}

:<math>\ C_{\text{BSC}} = 1 - \operatorname H_\text{b}(p), </math>

where <math>\operatorname H_\text{b}(p)</math> is the [[binary entropy function]], defined by:{{sfnp|MacKay|2003|p=15}}

:<math>\operatorname H_\text{b}(x)=x\log_2\frac{1}{x}+(1-x)\log_2\frac{1}{1-x}</math>

:{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Proof{{sfnp|Cover|Thomas|1991|p=187}}
|-
|The capacity is defined as the maximum [[mutual information]] between input and output over all possible input distributions <math>p_X(x)</math>:

:<math> C = \max_{p_X(x)} \left \{\, I(X;Y)\, \right \} </math>

The mutual information can be reformulated as

:<math>\begin{align}
I(X;Y) &= H(Y) - H(Y|X) \\
&= H(Y) - \sum_{x \in \{0,1\} }{p_X(x) H(Y|X=x)} \\
&= H(Y) - \sum_{x \in \{0,1\} }{p_X(x)} \operatorname H_\text{b}(p) \\
&= H(Y) - \operatorname H_\text{b}(p),
\end{align}</math>

where the first and second steps follow from the definitions of mutual information and [[conditional entropy]], respectively. The third step uses the fact that the entropy of the output for a fixed input symbol, <math>H(Y|X=x)</math>, equals the binary entropy function <math>\operatorname H_\text{b}(p)</math> regardless of <math>x</math>; since this term is constant, the sum over <math>p_X(x)</math> collapses to <math>\operatorname H_\text{b}(p)</math> in the last line.

In the last line, only the first term <math>H(Y)</math> depends on the input distribution <math>p_X(x)</math>. The entropy of a binary variable is at most 1 bit, with equality exactly when its probability distribution is uniform. It therefore suffices to exhibit an input distribution that yields a uniform probability distribution for the output <math>Y</math>. For this, note that it is a property of any binary symmetric channel that a uniform probability distribution of the input results in a uniform probability distribution of the output. Hence the value <math>H(Y)</math> will be 1 when we choose a uniform distribution for <math>p_X(x)</math>. We conclude that the channel capacity of the binary symmetric channel is <math>C_{\text{BSC}} = 1 - \operatorname H_\text{b}(p)</math>.
|}
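The following Python sketch (illustrative only, not taken from the cited sources; the function names <code>binary_entropy</code>, <code>bsc_capacity</code>, and <code>mutual_information</code> are chosen here for exposition) evaluates the capacity formula and numerically checks the claim in the proof that the uniform input distribution maximizes the mutual information:

<syntaxhighlight lang="python">
import math


def binary_entropy(p: float) -> float:
    """Binary entropy function H_b(p) in bits, with H_b(0) = H_b(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))


def bsc_capacity(p: float) -> float:
    """Capacity C = 1 - H_b(p) of a BSC with crossover probability p."""
    return 1.0 - binary_entropy(p)


def mutual_information(q: float, p: float) -> float:
    """I(X;Y) = H(Y) - H_b(p) for input distribution P(X = 1) = q on a BSC(p).

    The output Y equals 1 with probability q(1 - p) + (1 - q)p, so
    H(Y) = H_b(q(1 - p) + (1 - q)p).
    """
    return binary_entropy(q * (1.0 - p) + (1.0 - q) * p) - binary_entropy(p)


if __name__ == "__main__":
    p = 0.1
    print(f"C_BSC({p}) = {bsc_capacity(p):.6f} bits")  # about 0.531

    # Grid search over input distributions: the maximum of I(X;Y) should
    # occur at the uniform distribution q = 0.5 and equal the capacity.
    best_q = max((q / 1000.0 for q in range(1001)),
                 key=lambda q: mutual_information(q, p))
    print(f"argmax_q I(X;Y) = {best_q}, I = {mutual_information(best_q, p):.6f}")
</syntaxhighlight>

For example, at crossover probability <math>p = 0.1</math> this gives <math>\operatorname H_\text{b}(0.1) \approx 0.469</math> and hence <math>C_{\text{BSC}} \approx 0.531</math> bits per channel use, with the maximizing input distribution at <math>q = 1/2</math>, as the proof predicts.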