== Techniques ==
Most lossless compression programs do two things in sequence: the first step generates a ''statistical model'' for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (i.e. frequently encountered) data will produce shorter output than "improbable" data.

The primary encoding algorithms used to produce bit sequences are [[Huffman coding]] (also used by the [[Deflate|deflate algorithm]]) and [[arithmetic coding]]. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the [[information entropy]], whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.

There are two primary ways of constructing statistical models: in a ''static'' model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. ''Adaptive'' models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.

Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm (''general-purpose'' meaning that it can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that are not of the form they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for [[Indexed color|indexed image]]s.

=== Multimedia ===
These techniques take advantage of the specific characteristics of images, such as the common phenomenon of contiguous 2-D areas of similar tones. Every pixel but the first is replaced by the difference from its left neighbor. This leads to small values having a much higher probability than large values. The same idea is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes. For images, this step can be repeated by taking the difference from the pixel above, and in videos, the difference from the corresponding pixel in the next frame can be taken.

A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums. This is called a [[discrete wavelet transform]]. [[JPEG2000]] additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. The values are therefore increased, increasing file size, but the resulting distribution of values may be more strongly peaked, which benefits the subsequent entropy coding.{{Citation needed|date=December 2007}} <!-- can someone please explain JPEG2000 for dummies!? Wavelets are fun if taken as continuous real valued function. But all this math for simple integer linear algebra? -->
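
The two-step scheme described at the start of this section can be illustrated with a minimal sketch. Assuming a static model given as a symbol-frequency table, the following Python fragment builds a prefix code with Huffman's algorithm and uses it to map input symbols to bit sequences; the helper name <code>huffman_code</code> and the dictionary-merging construction are illustrative choices, not taken from any particular library.

<syntaxhighlight lang="python">
import heapq
from collections import Counter

def huffman_code(frequencies):
    """Build a prefix code from a {symbol: count} model (classic Huffman construction)."""
    # Each heap entry: (total count, tie-breaker, {symbol: codeword-so-far}).
    heap = [(count, i, {symbol: ""}) for i, (symbol, count) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {symbol: "0" for symbol in frequencies}
    while len(heap) > 1:
        count_a, _, codes_a = heapq.heappop(heap)
        count_b, i, codes_b = heapq.heappop(heap)
        # Prefix the two least-frequent subtrees with 0 and 1 and merge them.
        merged = {s: "0" + c for s, c in codes_a.items()}
        merged.update({s: "1" + c for s, c in codes_b.items()})
        heapq.heappush(heap, (count_a + count_b, i, merged))
    return heap[0][2]

text = "abracadabra"
model = Counter(text)                        # step 1: a (static) statistical model
codes = huffman_code(model)                  # step 2a: derive codewords from the model
encoded = "".join(codes[s] for s in text)    # step 2b: map input data to bit sequences
# Frequent symbols ("a") receive short codewords, rare ones ("c", "d") longer ones.
</syntaxhighlight>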
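
The distinction between static and adaptive modelling can be sketched in the same spirit. The fragment below is illustrative only and not taken from an existing coder: it maintains an adaptive order-0 model in which both encoder and decoder start from the same trivial counts and update them identically after each symbol, so no model needs to be stored with the compressed data.

<syntaxhighlight lang="python">
class AdaptiveModel:
    """Order-0 adaptive model: starts trivial, learns symbol statistics on the fly."""

    def __init__(self, alphabet):
        # Trivial starting point: every symbol counted once (uniform probabilities).
        self.counts = {s: 1 for s in alphabet}
        self.total = len(alphabet)

    def probability(self, symbol):
        # Current estimate handed to the entropy coder (e.g. an arithmetic coder).
        return self.counts[symbol] / self.total

    def update(self, symbol):
        # Encoder and decoder both call this after every symbol, staying in sync.
        self.counts[symbol] += 1
        self.total += 1

model = AdaptiveModel("abcd")
for s in "aababc":
    p = model.probability(s)   # early symbols get poor estimates ...
    model.update(s)            # ... but the model improves as more data is seen
</syntaxhighlight>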
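
The left-neighbour differencing described under ''Multimedia'' can be shown directly. The sketch below assumes, purely for illustration, a single row of integer samples (an image row or a run of audio samples); it replaces every value but the first with its difference from the previous one, and inverts the step exactly.

<syntaxhighlight lang="python">
def delta_filter(row):
    """Replace every sample but the first with its difference from the left neighbour."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def delta_unfilter(filtered):
    """Invert the filter by cumulative summation."""
    row = [filtered[0]]
    for d in filtered[1:]:
        row.append(row[-1] + d)
    return row

row = [100, 101, 103, 103, 102, 100, 99]   # smooth tones in an image row, or audio samples
filtered = delta_filter(row)               # [100, 1, 2, 0, -1, -2, -1]
assert delta_unfilter(filtered) == row     # losslessly invertible
# The small, frequently repeating differences are what the entropy coder exploits.
</syntaxhighlight>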
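
The hierarchical pair scheme above (a difference and a sum per pair, then recursion on the sums) can be sketched as one level of an integer Haar-style wavelet step. The rounding convention used here, storing the difference together with a rounded average, is a common reversible choice (often called the S-transform) and is shown only for illustration; JPEG 2000's actual reversible lifting filters are more elaborate.

<syntaxhighlight lang="python">
def haar_forward(samples):
    """One level: per pair, store a rounded average ("sum" band) and a difference band."""
    averages, differences = [], []
    for a, b in zip(samples[0::2], samples[1::2]):
        differences.append(a - b)
        averages.append((a + b) // 2)   # integer average; reversible together with a - b
    return averages, differences

def haar_inverse(averages, differences):
    samples = []
    for s, d in zip(averages, differences):
        b = s - (d // 2)                # floor division undoes the rounded average
        a = b + d
        samples.extend([a, b])
    return samples

data = [10, 12, 14, 13, 9, 8, 7, 7]     # even length assumed for simplicity
avg, diff = haar_forward(data)
assert haar_inverse(avg, diff) == data
# A further level would apply haar_forward to `avg`, the lower-resolution "sums".
</syntaxhighlight>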

The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy.<ref name="Unser" />

=== Historical legal issues ===
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the [[United States]] and other countries, and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of [[Lempel–Ziv–Welch|LZW]] compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the [[Graphics Interchange Format]] (GIF) for compressing still image files in favor of [[Portable Network Graphics]] (PNG), which combines the [[LZ77 and LZ78|LZ77]]-based [[Deflate|deflate algorithm]] with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003.<ref>{{cite web |url=http://www.unisys.com/about__unisys/lzw |publisher=Unisys |title=LZW Patent Information |website=About Unisys |url-status=dead |archive-url=https://web.archive.org/web/20090602212118/http://www.unisys.com/about__unisys/lzw |archive-date=2009-06-02 }}</ref>

Many of the lossless compression techniques used for text also work reasonably well for [[indexed image]]s, but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space).

As mentioned previously, lossless sound compression is a somewhat specialized area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the {{nowrap|data{{px2}}{{mdash}}{{px2}}}}essentially using [[autoregressive]] models to predict the "next" value and encoding the (possibly small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the ''error'') tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.

It is sometimes beneficial to compress only the differences between two versions of a file (or, in [[video compression]], of successive images within a sequence). This is called [[delta encoding]] (from the Greek letter [[delta (letter)|Δ]], which, in mathematics, denotes a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
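
The autoregressive prediction of sound samples described above can be sketched as follows. The fragment uses a fixed second-order linear predictor, extrapolating a straight line from the two previous samples (similar in spirit to the fixed polynomial predictors of codecs such as FLAC, though deliberately simplified); the residuals, not the samples themselves, are what would be handed to the entropy coder.

<syntaxhighlight lang="python">
def predict_residuals(samples):
    """Second-order fixed predictor: guess 2*prev - prev2, store the prediction errors."""
    residuals = list(samples[:2])                 # first two samples stored verbatim
    for i in range(2, len(samples)):
        prediction = 2 * samples[i - 1] - samples[i - 2]
        residuals.append(samples[i] - prediction)
    return residuals

def reconstruct(residuals):
    samples = list(residuals[:2])
    for r in residuals[2:]:
        prediction = 2 * samples[-1] - samples[-2]
        samples.append(prediction + r)
    return samples

wave = [0, 3, 6, 8, 9, 9, 8, 6, 3, 0]             # a smooth, wave-like signal
errors = predict_residuals(wave)                  # [0, 3, 0, -1, -1, -1, -1, -1, -1, 0]
assert reconstruct(errors) == wave
# The residuals cluster around small values such as 0 and -1, which an entropy coder
# can represent in very few bits.
</syntaxhighlight>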
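
The domain-specific prediction filters mentioned for PNG, and the use of the left and upper pixel described earlier in this section, can be illustrated with a sketch of PNG's ''Average'' filter (filter type 3). The code assumes 8-bit greyscale rows for simplicity; real PNG filtering operates byte by byte on the raw scanline, so this is an approximation for illustration only.

<syntaxhighlight lang="python">
def average_filter(row, prior_row):
    """PNG-style Average filter: predict each pixel as floor((left + above) / 2)."""
    out = []
    for x, value in enumerate(row):
        left = row[x - 1] if x > 0 else 0
        above = prior_row[x] if prior_row is not None else 0
        out.append((value - (left + above) // 2) % 256)   # residual, stored mod 256
    return out

def average_unfilter(filtered, prior_row):
    row = []
    for x, residual in enumerate(filtered):
        left = row[x - 1] if x > 0 else 0
        above = prior_row[x] if prior_row is not None else 0
        row.append((residual + (left + above) // 2) % 256)
    return row

prior = [100, 102, 104, 106]
row   = [101, 103, 105, 107]
filtered = average_filter(row, prior)        # [51, 2, 2, 2]: mostly small residuals in a smooth area
assert average_unfilter(filtered, prior) == row
</syntaxhighlight>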