Channel capacity
== Feedback capacity ==
Feedback capacity is the greatest rate at which [[information]] can be reliably transmitted, per unit time, over a point-to-point [[communication channel]] in which the receiver feeds the channel outputs back to the transmitter. The information-theoretic analysis of communication systems with feedback is more complicated and challenging than that of systems without feedback; this may be why [[C.E. Shannon]] chose feedback as the subject of the first Shannon Lecture, delivered at the 1973 IEEE International Symposium on Information Theory in Ashkelon, Israel. The feedback capacity is characterized by the maximum of the [[directed information]] between the channel inputs and the channel outputs, where the maximization is with respect to the causal conditioning of the input given the output. The term [[directed information]] was coined by [[James Massey]]<ref>{{Cite journal |last=Massey |first=James |date=Nov 1990 |title=Causality, Feedback and Directed Information |url=http://www.isiweb.ee.ethz.ch/archive/massey_pub/pdf/BI532.pdf |journal=Proc. 1990 Int. Symp. On Information Theory and Its Applications (ISITA-90), Waikiki, HI. |pages=303–305}}</ref> in 1990, who showed that it is an upper bound on the feedback capacity. For [[memoryless channels]], Shannon showed<ref>{{cite journal |last1=Shannon |first1=C. |title=The zero error capacity of a noisy channel |journal=IEEE Transactions on Information Theory |date=September 1956 |volume=2 |issue=3 |pages=8–19 |doi=10.1109/TIT.1956.1056798}}</ref> that feedback does not increase the capacity, and that the feedback capacity coincides with the channel capacity characterized by the [[mutual information]] between the input and the output.
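For instance, for a binary symmetric channel with crossover probability <math>p</math>, the capacity is <math>1-H_2(p)</math>, and by the result above this is also the feedback capacity. A minimal numerical check (the crossover value 0.1 is an arbitrary illustrative choice):

```python
import math

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p.

    C = 1 - H_2(p), achieved by a uniform input distribution. By Shannon's
    1956 result, feedback does not increase the capacity of a memoryless
    channel, so this is the feedback capacity as well.
    """
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # binary entropy
    return 1 - h2

print(bsc_capacity(0.1))  # about 0.531 bits per channel use
```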
The feedback capacity is known in closed form only for a few channels, such as the trapdoor channel<ref>{{Cite journal |last1=Permuter |first1=Haim |last2=Cuff |first2=Paul |last3=Van Roy |first3=Benjamin |last4=Weissman |first4=Tsachy |date=July 2008 |title=Capacity of the trapdoor channel with feedback |url=https://www.ee.bgu.ac.il/~haimp/trapdoor_channel_it.pdf |journal=IEEE Trans. Inf. Theory |volume=54 |issue=7 |pages=3150–3165 |doi=10.1109/TIT.2008.924681 |arxiv=cs/0610047 |s2cid=1265}}</ref> and the Ising channel.<ref>{{cite journal |last1=Elishco |first1=Ohad |last2=Permuter |first2=Haim |title=Capacity and Coding for the Ising Channel With Feedback |journal=IEEE Transactions on Information Theory |date=September 2014 |volume=60 |issue=9 |pages=5138–5149 |doi=10.1109/TIT.2014.2331951 |arxiv=1205.4674 |s2cid=9761759}}</ref><ref>{{cite journal |last1=Aharoni |first1=Ziv |last2=Sabag |first2=Oron |last3=Permuter |first3=Haim H. |title=Feedback Capacity of Ising Channels With Large Alphabet via Reinforcement Learning |journal=IEEE Transactions on Information Theory |date=September 2022 |volume=68 |issue=9 |pages=5637–5656 |doi=10.1109/TIT.2022.3168729 |s2cid=248306743}}</ref> For some other channels, the feedback capacity is characterized through constant-size optimization problems, such as the binary erasure channel with a no-consecutive-ones input constraint<ref>{{cite journal |last1=Sabag |first1=Oron |last2=Permuter |first2=Haim H. |last3=Kashyap |first3=Navin |date=2016 |title=The Feedback Capacity of the Binary Erasure Channel With a No-Consecutive-Ones Input Constraint |journal=IEEE Transactions on Information Theory |volume=62 |issue=1 |pages=8–22 |doi=10.1109/TIT.2015.2495239}}</ref> and the NOST channel.<ref>{{cite journal |last1=Shemuel |first1=Eli |last2=Sabag |first2=Oron |last3=Permuter |first3=Haim H.
|date=2022 |title=The Feedback Capacity of Noisy Output Is the State (NOST) Channels |journal=IEEE Transactions on Information Theory |volume=68 |issue=8 |pages=5044–5059 |doi=10.1109/TIT.2022.3165538 |arxiv=2107.07164}}</ref>

The basic mathematical model for a communication system is the following:
[[File:Communication with feedback.png|none|thumb|417x417px|Communication with feedback]]
Here is the formal definition of each element (where the only difference from the non-feedback setting is the encoder definition):
* <math>W</math> is the message to be transmitted, taken from an [[Alphabet (formal languages)|alphabet]] <math>\mathcal{W}</math>;
* <math>X</math> is the channel input symbol (<math>X^n</math> is a sequence of <math>n</math> symbols), taken from an alphabet <math>\mathcal{X}</math>;
* <math>Y</math> is the channel output symbol (<math>Y^n</math> is a sequence of <math>n</math> symbols), taken from an alphabet <math>\mathcal{Y}</math>;
* <math>\hat{W}</math> is the estimate of the transmitted message;
* <math>f_i: \mathcal{W} \times \mathcal{Y}^{i-1} \to \mathcal{X}</math> is the encoding function at time <math>i</math>, for a block of length <math>n</math>;
* <math>p(y_i|x^i,y^{i-1}) = p_{Y_i|X^i,Y^{i-1}}(y_i|x^i,y^{i-1})</math> is the noisy channel at time <math>i</math>, modeled by a [[conditional probability distribution]]; and
* <math>\hat{w}: \mathcal{Y}^n \to \mathcal{W}</math> is the decoding function for a block of length <math>n</math>.
That is, at each time <math>i</math> the previous output <math>Y_{i-1}</math> is fed back, so the encoder has access to all previous outputs <math>Y^{i-1}</math>. A <math>(2^{nR},n)</math> code is a pair of encoding and decoding mappings with <math>\mathcal{W}=[1,2,\dots, 2^{nR}]</math>, where <math>W</math> is uniformly distributed.
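The model above can be sketched as a simulation. This is a toy illustration, not a scheme from the literature: the encoder's signature exposes the feedback link (it receives <math>y^{i-1}</math>), but since the channel below is a memoryless binary symmetric channel it simply ignores the feedback, and the decoder is a majority vote; the crossover probability and block length are arbitrary assumptions.

```python
import random

random.seed(0)

P_FLIP = 0.2   # BSC crossover probability (arbitrary illustrative value)
N = 15         # block length n
TRIALS = 2000

def channel(x):
    """Memoryless binary symmetric channel p(y_i | x_i): flip w.p. P_FLIP."""
    return x ^ (random.random() < P_FLIP)

def encoder(w, i, y_prev):
    """f_i : W x Y^{i-1} -> X. The signature shows the feedback link: the
    encoder sees all previous outputs y^{i-1}. For a memoryless channel
    feedback does not increase capacity, and this toy repetition encoder
    simply ignores it."""
    return w

def decoder(y):
    """w_hat : Y^n -> W, here a majority vote over the received block."""
    return int(sum(y) > len(y) / 2)

errors = 0
for _ in range(TRIALS):
    w = random.randint(0, 1)        # one-bit message, uniformly distributed
    y = []
    for i in range(N):
        x = encoder(w, i, y)        # y holds y^{i-1} when f_i is applied
        y.append(channel(x))
    errors += (decoder(y) != w)

p_e = errors / TRIALS               # empirical error rate of the toy code
print(p_e)
```

With these parameters the empirical error rate is small, since a majority-vote error requires at least 8 of the 15 repetitions to be flipped.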
A rate <math>R</math> is said to be '''achievable''' if there exists a sequence of codes <math>(2^{nR},n)</math> such that the ''average probability of error'' <math>P_e^{(n)}\triangleq \Pr (\hat{W}\neq W)</math> tends to zero as <math>n\to \infty</math>. The '''feedback capacity''', denoted by <math>C_{\text{feedback}}</math>, is defined as the supremum over all achievable rates.

=== Main results on feedback capacity ===
Let <math>X</math> and <math>Y</math> be modeled as random variables. The [[causal conditioning]] <math>P(y^n||x^n) \triangleq \prod_{i=1}^n P(y_i|y^{i-1},x^{i})</math> describes the given channel. The choice of the [[causally conditional distribution]] <math>P(x^n||y^{n-1}) \triangleq \prod_{i=1}^n P(x_i|x^{i-1},y^{i-1})</math> determines the [[Joint probability distribution|joint distribution]] <math>p_{X^n,Y^n}(x^n,y^n)</math>, due to the chain rule for causal conditioning<ref name="2008.2009849">{{cite journal |last1=Permuter |first1=Haim Henry |last2=Weissman |first2=Tsachy |last3=Goldsmith |first3=Andrea J. |date=February 2009 |title=Finite State Channels With Time-Invariant Deterministic Feedback |journal=IEEE Transactions on Information Theory |volume=55 |issue=2 |pages=644–662 |arxiv=cs/0608070 |doi=10.1109/TIT.2008.2009849 |s2cid=13178}}</ref> <math>P(y^n, x^n) = P(y^n||x^n) P(x^n||y^{n-1})</math>, which, in turn, induces the [[directed information]] <math>I(X^n \rightarrow Y^n)=\mathbf E\left[ \log \frac{P(Y^n||X^n)}{P(Y^n)} \right]</math>. The '''feedback capacity''' is given by
: <math>\ C_{\text{feedback}} = \lim_{n \to \infty} \frac{1}{n} \sup_{P_{X^n||Y^{n-1}}} I(X^n \to Y^n)\, ,</math>
where the [[Infimum and supremum|supremum]] is taken over all possible choices of <math>P_{X^n||Y^{n-1}}(x^n||y^{n-1})</math>.

=== Gaussian feedback capacity ===
When the Gaussian noise is colored, the channel has memory.
Consider, for instance, the simple case of an [[autoregressive model]] noise process <math> z_i = z_{i-1}+w_i</math>, where <math>w_i\sim N(0,1)</math> is an i.i.d. process.

=== Solution techniques ===
The feedback capacity is difficult to compute in the general case. For discrete channels, there are solution techniques related to control theory and [[Markov decision processes]].
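The autoregressive noise example above can be sampled directly. The short sketch below (sample size and seed are arbitrary) estimates the lag-1 correlation of the noise process <math> z_i = z_{i-1}+w_i</math> and finds it close to 1: consecutive noise samples are strongly dependent, which is precisely what gives the Gaussian channel memory.

```python
import random

random.seed(0)

# Sample the colored-noise process z_i = z_{i-1} + w_i with w_i ~ N(0, 1)
# i.i.d., then estimate the correlation between consecutive samples.
n = 100_000
z, samples = 0.0, []
for _ in range(n):
    z = z + random.gauss(0.0, 1.0)
    samples.append(z)

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
cov = sum((a - mean) * (b - mean)
          for a, b in zip(samples, samples[1:])) / (n - 1)
print(cov / var)   # close to 1: the noise is far from i.i.d.
```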