=== Information rate ===
The [[information rate]] of the Wiener process with respect to the squared error distance, i.e. its quadratic [[rate-distortion function]], is given by<ref>T. Berger, "Information rates of Wiener processes," ''IEEE Transactions on Information Theory'', vol. 16, no. 2, pp. 134–139, March 1970. doi:10.1109/TIT.1970.1054423.</ref>
<math display="block">R(D) = \frac{2}{\pi^2 D \ln 2} \approx 0.29 D^{-1}.</math>
Therefore, it is impossible to encode <math>\{w_t \}_{t \in [0,T]}</math> using a [[binary code]] of fewer than <math>T R(D)</math> [[bit]]s and recover it with expected mean squared error less than <math>D</math>. On the other hand, for any <math>\varepsilon>0</math>, there exist a sufficiently large <math>T</math> and a [[binary code]] of no more than <math>2^{TR(D)}</math> distinct elements such that the expected [[mean squared error]] in recovering <math>\{w_t \}_{t \in [0,T]}</math> from this code is at most <math>D + \varepsilon</math>.

In many cases, it is impossible to [[binary code|encode]] the Wiener process without [[Sampling (signal processing)|sampling]] it first. When the Wiener process is sampled at intervals <math>T_s</math> before applying a binary code to represent these samples, the optimal trade-off between [[code rate]] <math>R(T_s,D)</math> and expected [[mean squared error]] <math>D</math> (in estimating the continuous-time Wiener process) follows the parametric representation<ref>A. Kipnis, A. J. Goldsmith, and Y. C. Eldar, "The distortion-rate function of sampled Wiener processes," ''IEEE Transactions on Information Theory'', vol. 65, no. 1, pp. 482–499, 2019.</ref>
<math display="block"> R(T_s,D_\theta) = \frac{T_s}{2} \int_0^1 \log_2^+\left[\frac{S(\varphi)- \frac{1}{6}}{\theta}\right] d\varphi, </math>
<math display="block"> D_\theta = \frac{T_s}{6} + T_s\int_0^1 \min\left\{S(\varphi)-\frac{1}{6},\theta \right\} d\varphi, </math>
where <math>S(\varphi) = (2 \sin(\pi \varphi /2))^{-2}</math> and <math>\log^+[x] = \max\{0,\log(x)\}</math>. In particular, <math>T_s/6</math> is the mean squared error associated only with the sampling operation (without encoding).
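Both formulas can be checked numerically. The following is a minimal sketch, assuming Python with NumPy; the function names <code>wiener_rate</code> and <code>sampled_wiener_rd_curve</code> are illustrative and not taken from the cited papers. It evaluates the closed-form <math>R(D)</math> and traces the parametric <math>(R, D_\theta)</math> curve by midpoint quadrature:

<syntaxhighlight lang="python">
import numpy as np

# Closed-form quadratic rate-distortion function of the Wiener process:
# R(D) = 2 / (pi^2 * D * ln 2), approximately 0.29 / D.
def wiener_rate(D):
    return 2.0 / (np.pi ** 2 * D * np.log(2.0))

print(wiener_rate(1.0))  # prints ~0.2923

# Parametric (rate, distortion) trade-off for the Wiener process sampled
# at intervals T_s, evaluated by midpoint quadrature over phi in (0, 1).
# The log singularity of the integrand at phi = 0 is integrable, so a
# plain midpoint rule converges.
def sampled_wiener_rd_curve(T_s, thetas, n_grid=100_000):
    phi = (np.arange(n_grid) + 0.5) / n_grid        # midpoints avoid phi = 0
    S = (2.0 * np.sin(np.pi * phi / 2.0)) ** -2.0   # S(phi)
    g = S - 1.0 / 6.0                               # S(phi) - 1/6 > 0 on (0, 1]
    rates, dists = [], []
    for theta in thetas:
        # log2+[x] = max{0, log2(x)}; averaging over the midpoints
        # approximates the integral over [0, 1].
        rates.append(T_s / 2.0 * np.maximum(0.0, np.log2(g / theta)).mean())
        dists.append(T_s / 6.0 + T_s * np.minimum(g, theta).mean())
    return np.array(rates), np.array(dists)

rates, dists = sampled_wiener_rd_curve(T_s=0.01, thetas=np.logspace(-3, 1, 9))
for R, D in zip(rates, dists):
    print(f"R = {R:10.6f}, D = {D:.6f}")
</syntaxhighlight>

Sweeping <math>\theta</math> traces the whole curve: as <math>\theta \to 0</math> the distortion approaches the sampling-only error <math>T_s/6</math> while the rate grows without bound, and as <math>\theta \to \infty</math> the rate tends to zero while the distortion grows without bound (as it must, since the variance of the Wiener process grows linearly in time).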