Kolmogorov complexity
==History and context==
[[Algorithmic information theory]] is the area of computer science that studies Kolmogorov complexity and other complexity measures on strings (or other [[data structure]]s).

The concept and theory of Kolmogorov complexity is based on a crucial theorem first discovered by [[Ray Solomonoff]], who published it in 1960, describing it in "A Preliminary Report on a General Theory of Inductive Inference"<ref>{{cite report |author-link=Ray Solomonoff | last=Solomonoff |first= Ray | url=http://world.std.com/~rjs/rayfeb60.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://world.std.com/~rjs/rayfeb60.pdf |archive-date=2022-10-09 |url-status=live | title=A Preliminary Report on a General Theory of Inductive Inference | work = Report V-131 | date= February 4, 1960|id=[http://world.std.com/~rjs/z138.pdf Revision] published November 1960}}</ref> as part of his invention of [[algorithmic probability]]. He gave a more complete description in his 1964 publications, "A Formal Theory of Inductive Inference," Part 1 and Part 2 in ''Information and Control''.<ref>{{cite journal | doi=10.1016/S0019-9958(64)90223-2 | last=Solomonoff | first= Ray | title=A Formal Theory of Inductive Inference Part I | journal = Information and Control | url=http://world.std.com/~rjs/1964pt1.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://world.std.com/~rjs/1964pt1.pdf |archive-date=2022-10-09 |url-status=live | volume=7 | issue= 1 | pages= 1–22 | date=March 1964 | doi-access=free }}</ref><ref>{{cite journal | doi=10.1016/S0019-9958(64)90131-7 | last=Solomonoff |first= Ray | title=A Formal Theory of Inductive Inference Part II | journal = Information and Control | url=http://world.std.com/~rjs/1964pt2.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://world.std.com/~rjs/1964pt2.pdf |archive-date=2022-10-09 |url-status=live |volume=7 |issue= 2 |pages= 224–254 |date=June 1964 |doi-access=free }}</ref>

Andrey Kolmogorov later [[multiple discovery|independently published]] this theorem in ''Problems Inform. Transmission''<ref>{{cite journal|volume=1 |issue=1 |year=1965 |pages=1–7 |title=Three Approaches to the Quantitative Definition of Information |url=http://www.ece.umd.edu/~abarg/ppi/contents/1-65-abstracts.html#1-65.2 |journal=Problems Inform. Transmission |first=A.N. |last=Kolmogorov |url-status=dead |archive-url=https://web.archive.org/web/20110928032821/http://www.ece.umd.edu/~abarg/ppi/contents/1-65-abstracts.html |archive-date=September 28, 2011 }}</ref> in 1965. [[Gregory Chaitin]] also presents this theorem in ''J. ACM'' – Chaitin's paper was submitted October 1966 and revised in December 1968, and cites both Solomonoff's and Kolmogorov's papers.<ref>{{cite journal | last1 = Chaitin | first1 = Gregory J. | title = On the Simplicity and Speed of Programs for Computing Infinite Sets of Natural Numbers| journal = Journal of the ACM | volume = 16 | pages = 407–422| year = 1969 | doi = 10.1145/321526.321530 | issue = 3| citeseerx = 10.1.1.15.3821 | s2cid = 12584692 }}</ref>

The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one. This algorithm, for all strings, allows codes as short as allowed by any other algorithm, up to an additive constant that depends on the algorithms but not on the strings themselves. Solomonoff used this algorithm and the code lengths it allows to define a "universal probability" of a string, on which inductive inference of the subsequent digits of the string can be based. Kolmogorov used this theorem to define several functions of strings, including complexity, randomness, and information.

When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority.<ref>{{cite journal | last1=Kolmogorov | first1=A. | title=Logical basis for information theory and probability theory | journal=IEEE Transactions on Information Theory | volume=14|issue=5 | pages=662–664 | year=1968 | doi =10.1109/TIT.1968.1054210 | s2cid=11402549 }}</ref> For several years, Solomonoff's work was better known in the Soviet Union than in the Western world. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with the randomness of a sequence, while algorithmic probability became associated with Solomonoff, who focused on prediction using his invention of the universal prior probability distribution. The broader area encompassing descriptional complexity and probability is often called Kolmogorov complexity. The computer scientist [[Ming Li]] considers this an example of the [[Matthew effect (sociology)|Matthew effect]]: "...to everyone who has, more will be given..."<ref>{{Cite book |doi=10.1007/978-0-387-49820-1_1 |chapter=Preliminaries |title=An Introduction to Kolmogorov Complexity and its Applications |url=https://archive.org/details/introductiontoko00limi_695 |url-access=limited |pages=[https://archive.org/details/introductiontoko00limi_695/page/n23 1]–99 |series=Texts in Computer Science |year=2008 |last1=Li |first1=Ming |last2=Vitányi |first2=Paul |isbn=978-0-387-33998-6 }}</ref>

There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on [[self-delimiting program]]s and is mainly due to [[Leonid Levin]] (1974). An axiomatic approach to Kolmogorov complexity based on [[Blum axioms]] (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov.<ref name=Burgin1982>{{cite journal |last=Burgin |first=M. |year=1982 |title=Generalized Kolmogorov complexity and duality in theory of computations |journal=Notices of the Russian Academy of Sciences |volume=25 |issue=3 |pages=19–23 |url=https://www.mathnet.ru/eng/dan45265}}</ref>

In the late 1990s and early 2000s, methods developed to approximate Kolmogorov complexity relied on popular compression algorithms like LZW,<ref name="zenil20202">{{cite journal |last1=Zenil |first1=Hector |year=2020 |title=A Review of Methods for Estimating Algorithmic Complexity: Options, Challenges, and New Directions |journal=Entropy |volume=22 |issue=6 |pages=612 |doi=10.3390/e22060612 |doi-access=free |pmid=33286384 |pmc=7517143 }}</ref> which made it difficult or impossible to provide any estimate for short strings, until a method based on [[algorithmic probability]] was introduced,<ref>{{cite journal |last1=Delahaye |first1=Jean-Paul |last2=Zenil |first2=Hector |title=Numerical evaluation of algorithmic complexity for short strings: A glance into the innermost structure of randomness |journal=Applied Mathematics and Computation |volume=219 |issue=1 |pages=63–77 |year=2012 |doi=10.1016/j.amc.2011.09.020 |url=https://www.sciencedirect.com/science/article/pii/S0096300311012367 }}</ref><ref>{{cite journal |last1=Soler-Toscano |first1=Fernando |last2=Zenil |first2=Hector |last3=Delahaye |first3=Jean-Paul |last4=Gauvrit |first4=Nicolas |title=Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines |journal=PLOS ONE |volume=9 |issue=5 |pages=74–85 |year=2014 |doi=10.1371/journal.pone.0096223 |doi-access=free |pmid=24787763 |pmc=4013017 |bibcode=2014PLoSO...996223S }}</ref> offering the only alternative to compression-based methods.<ref>{{cite book |last1=Zenil |first1=Hector |last2=Soler Toscano |first2=Fernando |last3=Gauvrit |first3=Nicolas |year=2022 |title=Methods and Applications of Algorithmic Complexity: Beyond Statistical Lossless Compression |series=Emergence, Complexity and Computation |publisher=Springer |location=Berlin, Heidelberg |doi=10.1007/978-3-662-64985-5 |isbn=978-3-662-64983-1 }}</ref>
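The compression-based approximation described above can be sketched in a few lines of Python. This is an illustrative sketch only, not any of the cited methods: the length of a string after running it through a general-purpose lossless compressor (here the standard library's zlib/DEFLATE, standing in for LZW) gives an upper bound, up to a constant, on its Kolmogorov complexity. The sketch also shows why such estimates fail for short strings: the compressor's fixed overhead dominates.

```python
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Compressed length of s under DEFLATE: a crude upper bound
    (up to an additive constant) on the Kolmogorov complexity of s."""
    return len(zlib.compress(s, 9))

# A long, highly regular string compresses far below its own length,
# reflecting its low algorithmic complexity.
regular = b"ab" * 500
print(complexity_upper_bound(regular), "vs length", len(regular))

# For a very short string, the compressor's fixed header and checksum
# overhead exceed the string itself, so the "estimate" is uninformative;
# this is the shortcoming the algorithmic-probability method addresses.
short = b"abcd"
print(complexity_upper_bound(short), "vs length", len(short))
```

The exact compressed sizes depend on the zlib version and compression level; only the qualitative contrast (regular strings compress well, short strings do not) carries over to other compressors.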