== Limitations ==

Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument called the [[pigeonhole principle]], as follows:{{Sfn|Sayood|2002|p=41}}<ref name=ISSEP-2015>{{cite book |doi=10.1007/978-3-319-25396-1_1 |chapter=Surprising Computer Science |title=Informatics in Schools. Curricula, Competences, and Competitions |series=Lecture Notes in Computer Science |date=2015 |last1=Bell |first1=Tim |volume=9378 |pages=1–11 |isbn=978-3-319-25395-4 }} See in particular [https://books.google.com/books?id=exicCgAAQBAJ&pg=PA9 pp. 8–9].</ref>

* Assume that each file is represented as a string of bits of some arbitrary length.
* Suppose that there is a compression algorithm that transforms every file into an output file that is no longer than the original file, and that at least one file will be compressed into an output file that is shorter than the original file.
* Let ''M'' be the least number such that there is a file ''F'' with length ''M'' bits that compresses to something shorter. Let ''N'' be the length (in bits) of the compressed version of ''F''.
* Because ''N'' < ''M'', '''every''' file of length ''N'' keeps its size during compression. There are 2<sup>''N''</sup> such files possible. Together with ''F'', this makes 2<sup>''N''</sup> + 1 files that all compress into one of the 2<sup>''N''</sup> files of length ''N''.
* But 2<sup>''N''</sup> is smaller than 2<sup>''N''</sup> + 1, so by the pigeonhole principle there must be some file of length ''N'' that is simultaneously the output of the compression function on two different inputs. That file cannot be decompressed reliably (which of the two originals should it yield?), which contradicts the assumption that the algorithm was lossless.
* We must therefore conclude that our original hypothesis (that the compression function makes no file longer) is necessarily untrue.

Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. In theory, only a single additional bit is required to tell the decoder that the normal coding has been turned off for the entire input; however, most encoding algorithms use at least one full byte (and typically more than one) for this purpose. For example, [[deflate]] compressed files never need to grow by more than 5 bytes per 65,535 bytes of input.

In fact, if we consider files of length ''N'' and assume that all such files are equally probable, then for any lossless compression that reduces the size of some file, the expected length of a compressed file (averaged over all possible files of length ''N'') must necessarily be ''greater'' than ''N''.{{fact|date=January 2025}} So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is useful only when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better. Thus, the main lesson from the argument is not that one risks big losses, but merely that one cannot always win.
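Such an escape facility can be sketched in a few lines. The wrapper below is purely illustrative: the one-byte flag values and the <code>pack</code>/<code>unpack</code> names are invented for this example, and Python's zlib merely stands in for whatever coder a real format would use. It shows how a single flag byte bounds the worst-case expansion.

<syntaxhighlight lang="python">
import os
import zlib

# Illustrative one-byte flags; not part of any real container format.
FLAG_RAW = b"\x00"         # payload follows verbatim
FLAG_COMPRESSED = b"\x01"  # payload is zlib-compressed

def pack(data: bytes) -> bytes:
    """Compress with zlib, but store the input verbatim whenever
    compression would not make the payload smaller, so the worst-case
    growth is the single flag byte."""
    compressed = zlib.compress(data)
    if len(compressed) < len(data):
        return FLAG_COMPRESSED + compressed
    return FLAG_RAW + data

def unpack(blob: bytes) -> bytes:
    """Invert pack(): the flag byte selects inflate-or-copy."""
    flag, payload = blob[:1], blob[1:]
    return zlib.decompress(payload) if flag == FLAG_COMPRESSED else payload

redundant = b"abab" * 10_000     # easily modeled redundancy: shrinks a lot
random_ish = os.urandom(40_000)  # high-entropy data: stored verbatim
for sample in (redundant, random_ish):
    packed = pack(sample)
    assert unpack(packed) == sample
    print(len(sample), "->", len(packed))
</syntaxhighlight>

On the redundant input the smaller zlib output is passed through; on the high-entropy input the bytes are stored verbatim, so the packed result is never more than one byte longer than the original.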
To choose an algorithm always means implicitly to select a ''subset'' of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data.

The "trick" that allows lossless compression algorithms, used on the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithms are designed to act on all have some form of easily modeled [[redundancy (information theory)|redundancy]] that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa. In particular, files of [[random]] data cannot be consistently compressed by any conceivable lossless data compression algorithm; indeed, this result is used to ''define'' the concept of randomness in [[Kolmogorov complexity]].{{Sfn|Sayood|2002|p=38}}

It is provably impossible to create an algorithm that can losslessly compress any data. While there have been many claims through the years of companies achieving "perfect compression" where an arbitrary number ''N'' of random bits can always be compressed to ''N'' − 1 bits, these kinds of claims can be safely discarded without even looking at any further details regarding the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 1.<ref name=ISSEP-2015/>

On the other hand, it has also been proven<ref>{{cite book |first1=Ming |last1=Li |first2=Paul |last2=Vitányi |title=An Introduction to Kolmogorov Complexity and its Applications |year=1993 |location=New York |publisher=Springer |page=102 |isbn=0-387-94053-7 |url=https://archive.org/details/introductiontoko00limi/page/102/mode/2up |quote=Theorem 2.6 The function <math>C(x)</math> is not partial recursive.}}</ref> that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity. Hence it is possible that any particular file, even if it appears random, may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant ''[[pi]]'', which appear random but can be generated by a very small program. However, even though it cannot be determined whether a particular file is incompressible, a [[Kolmogorov complexity#Compression|simple theorem about incompressible strings]] shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor).

=== Mathematical background ===

Abstractly, a [[compression algorithm]] can be viewed as a [[Function (mathematics)|function]] on sequences (normally of octets). Compression is successful if the resulting sequence is shorter than the original sequence (and the instructions for the decompression map). For a compression algorithm to be [[lossless]], the compression map must form an [[Injective function|injection]] from "plain" to "compressed" bit sequences. The pigeonhole principle prohibits a bijection between the collection of sequences of length ''N'' and any subset of the collection of sequences of length ''N'' − 1. Therefore, it is not possible to produce a lossless algorithm that reduces the size of every possible input sequence.<ref>{{cite book |doi=10.1007/978-3-319-16250-8_3 |chapter=The Pigeonhole Principle |title=Proof Patterns |date=2015 |last1=Joshi |first1=Mark |pages=19–23 |isbn=978-3-319-16249-2 }}</ref>
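Counting the candidate outputs makes this concrete. There are 2<sup>''N''</sup> distinct bit sequences of length ''N'', while even admitting every sequence of length at most ''N'' − 1 as a compressed output provides only

:<math>\sum_{k=0}^{N-1} 2^k = 2^N - 1</math>

possibilities, one too few for an injection. The same count illustrates the incompressibility figure quoted above: there are fewer than <math>2^{N-8}</math> sequences shorter than ''N'' − 8 bits, so under any fixed lossless scheme fewer than one in 256 (under 0.4%) of the length-''N'' sequences can be compressed by more than one byte.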
=== Points of application in real compression theory ===

Real compression algorithm designers accept that streams of high information entropy cannot be compressed, and accordingly include facilities for detecting and handling this condition. An obvious way of detection is applying a raw compression algorithm and testing whether its output is smaller than its input. Sometimes, detection is made by [[heuristics]]; for example, a compression application may consider files whose names end in ".zip", ".arj" or ".lha" uncompressible without any more sophisticated detection. A common way of handling this situation is quoting the input, or uncompressible parts of the input, in the output, minimizing the compression overhead. For example, the [[ZIP (file format)|zip]] data format specifies the 'compression method' of 'Stored' for input files that have been copied into the archive verbatim.<ref>{{cite web |url=http://www.pkware.com/documents/casestudies/APPNOTE.TXT |title=.ZIP File Format Specification |publisher=[[PKWARE, Inc.]] |at=chapter V, section J}}</ref>

=== The Million Random Digit Challenge ===

Mark Nelson, in response to claims of "magic" compression algorithms appearing in comp.compression, constructed a 415,241-byte binary file of highly entropic content and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute it without error.<ref>{{cite web |last=Nelson |first=Mark |title=The Million Random Digit Challenge Revisited |work=Mark Nelson |date=2006-06-20 |url=https://marknelson.us/posts/2006/06/20/million-digit-challenge.html}}</ref> A similar challenge, with $5,000 as the reward, was issued by Mike Goldman.<ref>{{cite web |last=Craig |first=Patrick |title=The $5000 Compression Challenge |url=http://www.patrickcraig.co.uk/other/compression.htm |access-date=2009-06-08}}</ref>