==Channel capacity==
The AWGN channel is represented by a series of outputs <math>Y_i</math> at discrete-time event index <math>i</math>. <math>Y_i</math> is the sum of the input <math>X_i</math> and noise, <math>Z_i</math>, where <math>Z_i</math> is [[Independent and identically distributed random variables|independent and identically distributed]] and drawn from a zero-mean [[normal distribution]] with [[variance]] <math>N</math> (the noise). The <math>Z_i</math> are further assumed to not be correlated with the <math>X_i</math>.

:<math> Z_i \sim \mathcal{N}(0, N) \,\!</math>
:<math> Y_i = X_i + Z_i. \,\!</math>

The capacity of the channel is infinite unless the noise <math>N</math> is nonzero and the <math>X_i</math> are sufficiently constrained. The most common constraint on the input is the so-called "power" constraint, requiring that for a codeword <math>(x_1, x_2, \dots , x_k)</math> transmitted through the channel, we have:

:<math> \frac{1}{k}\sum_{i=1}^k x_i^2 \leq P, </math>

where <math>P</math> represents the maximum channel power. Therefore, the [[channel capacity]] for the power-constrained channel is given by:

:<math> C = \max \left\{ I(X;Y) : f \text{ s.t. } E \left( X^2 \right) \leq P \right\} \,\!</math>

where <math>f</math> is the distribution of <math>X</math> and <math>I(X;Y)</math> is the [[mutual information]] between the input and the output. Expanding <math>I(X;Y)</math> in terms of the [[differential entropy]]:

:<math>
\begin{align}
& I(X;Y) = h(Y) - h(Y\mid X) \\[5pt]
= {} & h(Y)-h(X+Z\mid X) \\[5pt]
= {} & h(Y)-h(Z\mid X)
\end{align}
\,\!</math>

But <math>X</math> and <math>Z</math> are independent, therefore:

:<math> I(X;Y) = h(Y) - h(Z) \,\!</math>

Evaluating the [[differential entropy]] of a Gaussian gives:

:<math> h(Z) = \frac{1}{2} \log(2 \pi e N) \,\!</math>

Because <math>X</math> and <math>Z</math> are independent and their sum gives <math>Y</math>:

:<math> E(Y^2) = E((X+Z)^2) = E(X^2) + 2E(X)E(Z)+E(Z^2) \leq P + N \,\!</math>

From this bound, we infer from a property of the differential entropy (the normal distribution maximizes entropy for a given second moment) that

:<math> h(Y) \leq \frac{1}{2} \log(2 \pi e(P+N)) \,\!</math>

Therefore, the channel capacity is given by the highest achievable bound on the [[mutual information]]:

:<math> I(X;Y) \leq \frac{1}{2}\log(2 \pi e (P+N)) - \frac {1}{2}\log(2 \pi e N) \,\!</math>

where <math>I(X;Y)</math> is maximized when:

:<math> X \sim \mathcal{N}(0, P) \,\!</math>

Thus the channel capacity <math>C</math> for the AWGN channel is given by:

:<math> C = \frac {1}{2} \log\left(1+\frac{P}{N}\right) \,\!</math>
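For a concrete illustration (the numbers here are chosen purely for convenience and are not taken from any particular system), take the logarithm to base 2, so that <math>C</math> is measured in bits per channel use, and suppose the signal-to-noise ratio is <math>P/N = 15</math>. Then

:<math> C = \frac{1}{2}\log_2(1+15) = \frac{1}{2}\log_2 16 = 2 \text{ bits per channel use}. </math>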
=== Channel capacity and sphere packing ===
Suppose that we are sending messages through the channel with index ranging from <math>1</math> to <math>M</math>, the number of distinct possible messages. If we encode the <math>M</math> messages into codewords of length <math>n</math>, then we define the rate <math>R</math> as:

:<math> R = \frac {\log M}{n} \,\!</math>

A rate is said to be achievable if there is a sequence of codes so that the maximum probability of error tends to zero as <math>n</math> approaches infinity. The capacity <math>C</math> is the highest achievable rate.

Consider a codeword of length <math>n</math> sent through the AWGN channel with noise level <math>N</math>. When received, each component of the codeword vector has been perturbed by noise of variance <math>N</math>, and its mean is the codeword sent. The received vector is very likely to be contained in a sphere of radius <math display=inline>\sqrt{n(N+\varepsilon)}</math> around the codeword sent. If we decode by mapping every received vector onto the codeword at the center of this sphere, then an error occurs only when the received vector is outside of this sphere, which is very unlikely.

Each codeword vector has an associated sphere of received vectors that are decoded to it, and each such sphere must map uniquely onto a codeword. Because these spheres therefore must not intersect, we are faced with the problem of [[sphere packing]]. How many non-intersecting decoding spheres can we pack into the space of possible received vectors? The received vectors have a maximum energy of <math>n(P+N)</math> and therefore must occupy a sphere of radius <math display=inline>\sqrt{n(P+N)}</math>. Each decoding sphere has radius <math>\sqrt{nN}</math>. The volume of an ''n''-dimensional sphere is directly proportional to <math>r^n</math>, so the maximum number of uniquely decodable spheres that can be packed into our sphere with transmission power ''P'' is:

:<math> \frac{(n(P+N))^{n/2}}{(nN)^{n/2}} = 2^{(n/2) \log\left(1+P/N \right)} \,\!</math>

By this argument, the rate ''R'' can be no more than <math>\frac{1}{2} \log \left( 1+\frac P N \right)</math>.
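Continuing the illustrative figures above (<math>P/N = 15</math>, logarithms to base 2) and choosing a hypothetical block length of <math>n = 100</math>, the packing bound allows at most

:<math> 2^{(n/2)\log_2(1+P/N)} = 2^{50 \cdot 4} = 2^{200} \,\!</math>

non-intersecting decoding spheres, i.e. at most <math>M = 2^{200}</math> codewords, and hence a rate of at most <math>R = \tfrac{\log_2 M}{n} = \tfrac{200}{100} = 2</math> bits per channel use, in agreement with the capacity formula.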
===Achievability===
In this section, we show achievability of the upper bound on the rate from the last section.

A codebook, known to both encoder and decoder, is generated by selecting codewords of length ''n'', i.i.d. Gaussian with variance <math>P-\varepsilon</math> and mean zero. For large ''n'', the empirical variance of the codebook will be very close to the variance of its distribution, thereby avoiding violation of the power constraint probabilistically.

Received messages are decoded to a message in the codebook which is uniquely jointly typical. If there is no such message or if the power constraint is violated, a decoding error is declared.

Let <math>X^n(i)</math> denote the codeword for message <math>i</math>, while <math>Y^n</math> is, as before, the received vector. Define the following three events:
# Event <math>U</math>: the power of the received message is larger than <math>P</math>.
# Event <math>V</math>: the transmitted and received codewords are not jointly typical.
# Event <math>E_j</math>: <math>(X^n(j), Y^n)</math> is in <math>A_\varepsilon^{(n)}</math>, the [[typical set]], where <math>j \neq i</math>; that is, an incorrect codeword is jointly typical with the received vector.

An error therefore occurs if <math>U</math>, <math>V</math> or any of the <math>E_j</math> occur. By the law of large numbers, <math>P(U)</math> goes to zero as ''n'' approaches infinity, and by the joint [[Asymptotic Equipartition Property]] the same applies to <math>P(V)</math>. Therefore, for a sufficiently large <math>n</math>, both <math>P(U)</math> and <math>P(V)</math> are each less than <math>\varepsilon</math>. Since <math>X^n(i)</math> and <math>X^n(j)</math> are independent for <math>i \neq j</math>, we have that <math>X^n(j)</math> and <math>Y^n</math> are also independent. Therefore, by the joint AEP, <math>P(E_j) = 2^{-n(I(X;Y)-3\varepsilon)}</math>.

This allows us to calculate <math>P^{(n)}_e</math>, the probability of error, as follows:

: <math>
\begin{align}
P^{(n)}_e & \leq P(U) + P(V) + \sum_{j \neq i} P(E_j) \\
& \leq \varepsilon + \varepsilon + \sum_{j \neq i} 2^{-n(I(X;Y)-3\varepsilon)} \\
& \leq 2\varepsilon + (2^{nR}-1)2^{-n(I(X;Y)-3\varepsilon)} \\
& \leq 2\varepsilon + (2^{3n\varepsilon})2^{-n(I(X;Y)-R)} \\
& \leq 3\varepsilon
\end{align}
</math>

Therefore, provided that <math>R < I(X;Y) - 3\varepsilon</math>, the probability of error <math>P^{(n)}_e</math> goes to zero as ''n'' approaches infinity. Hence there is a code of rate ''R'' arbitrarily close to the capacity derived earlier.

=== Coding theorem converse ===
Here we show that rates above the capacity <math>C = \frac {1}{2} \log\left( 1+\frac P N \right)</math> are not achievable.

Suppose that the power constraint is satisfied for a codebook, and further suppose that the messages follow a uniform distribution. Let <math>W</math> be the input messages and <math>\hat{W}</math> the output messages. Thus the information flows as:

<math>W \longrightarrow X^{(n)}(W) \longrightarrow Y^{(n)} \longrightarrow \hat{W}</math>

Making use of [[Fano's inequality]] gives:

<math>H(W\mid\hat{W}) \leq 1+nRP^{(n)}_e = n \varepsilon_n</math>

where <math>\varepsilon_n \rightarrow 0</math> as <math>P^{(n)}_e \rightarrow 0</math>.

Let <math>X_i</math> denote the ''i''-th symbol of the transmitted codeword. Then:

: <math>
\begin{align}
nR & = H(W) \\
& = I(W;\hat{W}) + H(W\mid\hat{W}) \\
& \leq I(W;\hat{W}) + n\varepsilon_n \\
& \leq I(X^{(n)}; Y^{(n)}) + n\varepsilon_n \\
& = h(Y^{(n)}) - h(Y^{(n)}\mid X^{(n)}) + n\varepsilon_n \\
& = h(Y^{(n)}) - h(Z^{(n)}) + n\varepsilon_n \\
& \leq \sum_{i=1}^n h(Y_i)- h(Z^{(n)}) + n\varepsilon_n \\
& \leq \sum_{i=1}^n I(X_i; Y_i) + n\varepsilon_n
\end{align}
</math>

Let <math>P_i</math> be the average power at codeword position ''i'':

:<math> P_i = \frac{1}{2^{nR}}\sum_{w}x^2_i(w) \,\!</math>

where the sum is over all input messages <math>w</math>. <math>X_i</math> and <math>Z_i</math> are independent, thus the expectation of the power of <math>Y_i</math> is, for noise level <math>N</math>:

:<math> E(Y_i^2) = P_i+N \,\!</math>

Because the normal distribution maximizes differential entropy for a given second moment, we have that

:<math> h(Y_i) \leq \frac{1}{2}\log(2 \pi e (P_i +N)) \,\!</math>

Therefore,

: <math>
\begin{align}
nR & \leq \sum(h(Y_i)-h(Z_i)) + n \varepsilon_n \\
& \leq \sum \left( \frac{1}{2} \log(2 \pi e (P_i + N)) - \frac{1}{2} \log(2 \pi e N)\right) + n \varepsilon_n \\
& = \sum \frac{1}{2} \log \left(1 + \frac{P_i}N \right) + n \varepsilon_n
\end{align}
</math>

We may apply [[Jensen's inequality]] to <math>\log(1+x)</math>, a concave (downward) function of ''x'', to get:

:<math> \frac{1}{n} \sum_{i=1}^n \frac{1}{2}\log\left(1+\frac{P_i}{N}\right) \leq \frac{1}{2}\log\left(1+\frac{1}{n}\sum_{i=1}^n \frac{P_i}{N}\right) \,\!</math>

Because each codeword individually satisfies the power constraint, the average also satisfies the power constraint. Therefore,

:<math> \frac{1}{n}\sum_{i=1}^n \frac{P_i}{N} \leq \frac{P}{N}, \,\!</math>

which we may apply to simplify the inequality above and get:

:<math> \frac{1}{2}\log\left(1+\frac{1}{n}\sum_{i=1}^{n}\frac{P_i}{N}\right) \leq \frac{1}{2}\log\left(1+\frac{P}{N}\right). \,\!</math>

Therefore, it must be that <math>R \leq \frac{1}{2}\log \left(1+ \frac{P}{N}\right) + \varepsilon_n</math>. Thus ''R'' must be less than a value arbitrarily close to the capacity derived earlier, as <math>\varepsilon_n \rightarrow 0</math>.
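The closed-form capacity and the second-moment relation <math>E(Y^2) \leq P + N</math> used throughout this section are straightforward to check numerically. The following is a minimal Python sketch (the parameter values, variable names, and use of NumPy are illustrative assumptions, not part of the derivation above) that simulates the AWGN channel with the capacity-achieving Gaussian input and compares the empirical quantities with the closed-form capacity, with logarithms taken to base 2 so that the result is in bits per channel use.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative Monte Carlo check of the AWGN relations derived above.
# P (power constraint) and N (noise variance) are arbitrary example values.
rng = np.random.default_rng(0)
P, N = 15.0, 1.0
n = 1_000_000  # number of simulated channel uses

X = rng.normal(0.0, np.sqrt(P), size=n)  # capacity-achieving input, X ~ N(0, P)
Z = rng.normal(0.0, np.sqrt(N), size=n)  # noise, Z ~ N(0, N)
Y = X + Z                                # channel output

# Empirical second moment of Y should be close to P + N.
print("E[Y^2] =", np.mean(Y**2), "; expected", P + N)

# Closed-form capacity in bits per channel use.
C = 0.5 * np.log2(1.0 + P / N)
print("C =", C, "bits per channel use")

# Since Y is Gaussian here, h(Y) - h(Z) evaluated from the sample variances
# should approach C as n grows.
h_Y = 0.5 * np.log2(2 * np.pi * np.e * np.var(Y))
h_Z = 0.5 * np.log2(2 * np.pi * np.e * np.var(Z))
print("h(Y) - h(Z) =", h_Y - h_Z)
</syntaxhighlight>

With these example values the closed-form capacity is <math>\tfrac{1}{2}\log_2 16 = 2</math> bits per channel use, matching the worked figures in the earlier subsections.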