Quantization (signal processing)
===Neglecting the entropy constraint: Lloyd–Max quantization===
In the above formulation, if the bit rate constraint is neglected by setting <math>\lambda</math> equal to 0, or equivalently if it is assumed that a fixed-length code (FLC) will be used to represent the quantized data instead of a [[variable-length code]] (or some other entropy coding technology such as arithmetic coding that is better than an FLC in the rate–distortion sense), the optimization problem reduces to minimization of distortion <math>D</math> alone.

The indices produced by an <math>M</math>-level quantizer can be coded using a fixed-length code using <math> R = \lceil \log_2 M \rceil </math> bits/symbol. For example, when <math>M=256</math> levels, the FLC bit rate <math>R</math> is 8 bits/symbol. For this reason, such a quantizer has sometimes been called an 8-bit quantizer. However, using an FLC eliminates the compression improvement that can be obtained by use of better entropy coding.

Assuming an FLC with <math>M</math> levels, the rate–distortion minimization problem can be reduced to distortion minimization alone. The reduced problem can be stated as follows: given a source <math>X</math> with PDF <math>f(x)</math> and the constraint that the quantizer must use only <math>M</math> classification regions, find the decision boundaries <math>\{b_k\}_{k=1}^{M-1} </math> and reconstruction levels <math>\{y_k\}_{k=1}^M</math> to minimize the resulting distortion

:<math> D=E[(x-Q(x))^2] = \int_{-\infty}^{\infty} (x-Q(x))^2f(x)dx = \sum_{k=1}^{M} \int_{b_{k-1}}^{b_k} (x-y_k)^2 f(x)dx =\sum_{k=1}^{M} d_k </math>.

Finding an optimal solution to the above problem results in a quantizer sometimes called a MMSQE (minimum mean-square quantization error) solution, and the resulting PDF-optimized (non-uniform) quantizer is referred to as a ''Lloyd–Max'' quantizer, named after Stuart Lloyd and Joel Max, who independently developed iterative methods<ref name=GrayNeuhoff/><ref>{{cite journal | last=Lloyd | first=S. 
| title=Least squares quantization in PCM | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=28 | issue=2 | year=1982 | issn=0018-9448 | doi=10.1109/tit.1982.1056489 | pages=129–137| s2cid=10833328 | citeseerx=10.1.1.131.1338 }} (work documented in a manuscript circulated for comments at [[Bell Laboratories]] with a department log date of 31 July 1957 and also presented at the 1957 meeting of the [[Institute of Mathematical Statistics]], although not formally published until 1982).</ref><ref>{{cite journal | last=Max | first=J. | title=Quantizing for minimum distortion | journal=IEEE Transactions on Information Theory | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=6 | issue=1 | year=1960 | issn=0018-9448 | doi=10.1109/tit.1960.1057548 | pages=7–12| bibcode=1960ITIT....6....7M }}</ref> to solve the two sets of simultaneous equations resulting from <math> {\partial D / \partial b_k} = 0 </math> and <math>{\partial D/ \partial y_k} = 0 </math>, as follows:

:<math> {\partial D \over\partial b_k} = 0 \Rightarrow b_k = {y_k + y_{k+1} \over 2} </math>,

which places each threshold at the midpoint between each pair of reconstruction values, and

:<math> {\partial D \over\partial y_k} = 0 \Rightarrow y_k = { \int_{b_{k-1}}^{b_k} x f(x) dx \over \int_{b_{k-1}}^{b_k} f(x)dx } = \frac1{p_k} \int_{b_{k-1}}^{b_k} x f(x) dx </math>

which places each reconstruction value at the centroid (conditional expected value) of its associated classification interval.

[[Lloyd's algorithm|Lloyd's Method I algorithm]], originally described in 1957, can be generalized in a straightforward way for application to vector data. This generalization results in the [[Linde–Buzo–Gray algorithm|Linde–Buzo–Gray (LBG)]] or [[k-means]] classifier optimization methods.
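The two optimality conditions above suggest the alternating iteration of Lloyd's Method I directly. As a minimal sketch (not from the article itself), the following Python snippet applies the midpoint and centroid conditions to empirical samples standing in for the source PDF; the function name <code>lloyd_max</code>, the quantile-based initialization, and the Gaussian test source are all illustrative choices, not part of the original method's specification:

```python
import numpy as np

def lloyd_max(samples, M, iters=100):
    """Alternate the two Lloyd-Max conditions on empirical data:
    thresholds at midpoints of adjacent levels, levels at cell centroids."""
    samples = np.asarray(samples, dtype=float)
    # Initialize the M reconstruction levels from evenly spaced sample quantiles.
    y = np.quantile(samples, (np.arange(M) + 0.5) / M)
    b = (y[:-1] + y[1:]) / 2
    for _ in range(iters):
        # Condition 1: each threshold b_k is the midpoint between y_k and y_{k+1}.
        b = (y[:-1] + y[1:]) / 2
        # Condition 2: each level y_k is the centroid (sample mean) of its cell.
        idx = np.digitize(samples, b)
        for k in range(M):
            cell = samples[idx == k]
            if cell.size:
                y[k] = cell.mean()
    return b, y

# Illustrative use: a unit-variance Gaussian source, M = 4 levels.
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
b, y = lloyd_max(x, M=4)
xq = y[np.digitize(x, b)]        # quantized signal
D = np.mean((x - xq) ** 2)       # empirical distortion
```

For the unit Gaussian with <math>M=4</math>, the converged empirical distortion should be close to the known optimum of roughly 0.1175. Replacing the scalar samples with vectors and the midpoint rule with nearest-neighbor assignment yields the LBG/k-means generalization mentioned above.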
Moreover, the technique can be further generalized in a straightforward way to also include an entropy constraint for vector data.<ref name=ChouLookabaughGray>{{cite journal | last1=Chou | first1=P.A. | last2=Lookabaugh | first2=T. | last3=Gray | first3=R.M. |author-link3=Robert M. Gray| title=Entropy-constrained vector quantization | journal=IEEE Transactions on Acoustics, Speech, and Signal Processing | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=37 | issue=1 | year=1989 | issn=0096-3518 | doi=10.1109/29.17498 | pages=31–42}}</ref>
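To illustrate the entropy-constrained variant in the scalar case, one hedged sketch (assumptions: empirical samples in place of the PDF, codeword lengths estimated as <math>-\log_2 p_k</math> from empirical cell probabilities, and the function name <code>ecsq</code> chosen for illustration) alternates a Lagrangian-biased assignment with centroid and probability updates to minimize <math>D + \lambda R</math>:

```python
import numpy as np

def ecsq(samples, M=8, lam=0.1, iters=50):
    """Entropy-constrained scalar quantizer design (sketch): assign each
    sample to the cell minimizing squared error plus lam times an estimated
    codeword length, then update cell probabilities and centroids."""
    x = np.asarray(samples, dtype=float)
    y = np.quantile(x, (np.arange(M) + 0.5) / M)   # initial levels
    p = np.full(M, 1.0 / M)                        # initial cell probabilities
    for _ in range(iters):
        # Biased assignment: squared error + lam * (-log2 p_k) per sample.
        cost = (x[:, None] - y[None, :]) ** 2 - lam * np.log2(p)[None, :]
        idx = np.argmin(cost, axis=1)
        # Update probabilities and centroids of the non-empty cells.
        counts = np.bincount(idx, minlength=M)
        keep = counts > 0
        p = np.where(keep, counts / x.size, 1e-12)
        sums = np.bincount(idx, weights=x, minlength=M)
        y = np.where(keep, sums / np.maximum(counts, 1), y)
    xq = y[idx]
    D = np.mean((x - xq) ** 2)                     # empirical distortion
    R = -(p[keep] * np.log2(p[keep])).sum()        # entropy rate estimate
    return y, D, R

# Illustrative use on a unit Gaussian source.
rng = np.random.default_rng(1)
y, D, R = ecsq(rng.normal(size=50_000), M=8, lam=0.1)
```

Setting <math>\lambda=0</math> recovers the Lloyd–Max iteration, since the length bias then vanishes and the assignment reduces to nearest-neighbor classification.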