{{short description|Formal information theory restatement of Occam's Razor}}
'''Minimum message length''' ('''MML''') is a Bayesian information-theoretic method for statistical model comparison and selection.<ref>{{Cite book|title=Statistical and inductive inference by minimum message length|last=Wallace|first=C. S.|date=2005|publisher=Springer|isbn=9780387237954|location=New York|oclc=62889003}}</ref> It provides a formal [[information theory]] restatement of [[Occam's Razor]]: even when models fit the observed data equally well, the one generating the most concise ''explanation'' of the data is more likely to be correct (where the ''explanation'' consists of the statement of the model, followed by the [[Lossless compression|lossless encoding]] of the data using the stated model). MML was invented by [[Chris Wallace (computer scientist)|Chris Wallace]], first appearing in the seminal paper "An information measure for classification".<ref>{{Cite journal|last1=Wallace|first1=C. S.|last2=Boulton|first2=D. M.|date=1968-08-01|title=An Information Measure for Classification|journal=The Computer Journal|language=en|volume=11|issue=2|pages=185–194|doi=10.1093/comjnl/11.2.185|issn=0010-4620|doi-access=free}}</ref> MML is intended not just as a theoretical construct, but as a technique that may be deployed in practice.<ref>{{Cite book|title=Coding Ockham's Razor|last=Allison|first=Lloyd|date=2019|publisher=Springer|isbn=978-3030094881|oclc=1083131091}}</ref> It differs from the related concept of [[Kolmogorov complexity]] in that it does not require use of a [[Turing completeness|Turing-complete]] language to model data.<ref name=":0">{{Cite journal|last1=Wallace|first1=C. S.|last2=Dowe|first2=D. L.|date=1999-01-01|title=Minimum Message Length and Kolmogorov Complexity|url=https://academic.oup.com/comjnl/article/42/4/270/558949|journal=The Computer Journal|language=en|volume=42|issue=4|pages=270–283|doi=10.1093/comjnl/42.4.270|issn=0010-4620|url-access=subscription}}</ref>

==Definition==
[[Claude E. Shannon|Shannon]]'s ''[[A Mathematical Theory of Communication]]'' (1948) states that in an optimal code, the message length (in binary) of an event <math>E</math>, <math>\operatorname{length}(E)</math>, where <math>E</math> has probability <math>P(E)</math>, is given by <math>\operatorname{length}(E) = -\log_2(P(E))</math>.

[[Bayes's theorem]] states that the probability of a (variable) hypothesis <math>H</math> given fixed evidence <math>E</math> is proportional to <math>P(E|H) P(H)</math>, which, by the definition of [[conditional probability]], is equal to <math>P(H \land E)</math>. We want the model (hypothesis) with the highest such [[posterior probability]]. Suppose we encode a message which represents (describes) both model and data jointly. Since <math>\operatorname{length}(H \land E) = -\log_2(P(H \land E))</math>, the most probable model will have the shortest such message. The message breaks into two parts: <math>-\log_2(P(H \land E)) = -\log_2(P(H)) - \log_2(P(E|H))</math>. The first part encodes the model itself. The second part contains information (e.g., values of parameters, or initial conditions, etc.) that, when processed by the model, outputs the observed data.

MML naturally and precisely trades model complexity for goodness of fit. A more complicated model takes longer to state (longer first part) but probably fits the data better (shorter second part). So an MML metric will not choose a complicated model unless that model pays for itself.
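The two-part trade-off can be sketched numerically. The following Python fragment is a toy illustration only, not a code scheme from the MML literature: the two-model family and the 32-level discretization of the bias parameter are assumptions made for the example.

<syntaxhighlight lang="python">
import math

def data_bits(k, n, p):
    """Bits needed to losslessly encode n Bernoulli trials with k
    successes, using an optimal code built from success probability p."""
    return -(k * math.log2(p) + (n - k) * math.log2(1 - p))

n, k = 100, 58   # illustrative sample: 58 heads in 100 flips

# Hypothesis A: a fair coin. One bit names the model within the
# assumed two-model family; no parameter needs to be stated.
len_a = 1 + data_bits(k, n, 0.5)

# Hypothesis B: a biased coin whose bias is stated to one of 32
# equally spaced values, costing 5 extra bits in the first part.
p_hat = round(k / n * 32) / 32
len_b = 1 + 5 + data_bits(k, n, p_hat)

print(f"fair coin:   {len_a:.1f} bits")   # about 101.0
print(f"biased coin: {len_b:.1f} bits")   # about 104.2
</syntaxhighlight>

MML prefers the shorter total message: here the extra parameter does not pay for itself on a mildly biased sample, so the fair coin is selected; a more strongly biased sample (say, 70 heads) reverses the outcome.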
==Continuous-valued parameters==
One reason why a model might be longer would be simply because its various parameters are stated to greater precision, thus requiring transmission of more digits. Much of the power of MML derives from its handling of how accurately to state parameters in a model, and from a variety of approximations that make this feasible in practice. This makes it possible to usefully compare, say, a model with many parameters imprecisely stated against a model with fewer parameters more accurately stated.
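The optimal precision is itself found by minimising the total message length. The following Python sketch assumes a uniform discretization of a Bernoulli parameter, chosen only for simplicity; the Wallace–Freeman (1987) approximation instead uses the Fisher information to set the quantization, as noted in the list below.

<syntaxhighlight lang="python">
import math

def total_length(k, n, bits):
    """Two-part length (in bits) when a Bernoulli parameter is stated
    with `bits` bits of precision, under a uniform discretization
    (an illustrative choice, not the Wallace-Freeman quantization)."""
    levels = 2 ** bits
    p = round(k / n * levels) / levels          # nearest stateable value
    p = min(max(p, 1 / (2 * levels)), 1 - 1 / (2 * levels))  # keep p inside (0, 1)
    data = -(k * math.log2(p) + (n - k) * math.log2(1 - p))
    return bits + data   # part one: the parameter; part two: the data

n, k = 1000, 583   # illustrative sample: 583 successes in 1000 trials
for bits in range(1, 13):
    print(f"{bits:2d} bits of precision -> {total_length(k, n, bits):8.2f} bits total")
</syntaxhighlight>

The total first falls, as extra precision buys a better fit, then rises once further digits cost more to state than they save in the second part; the minimum identifies the precision to which the parameter should be stated.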
==Key features of MML==
* MML can be used to compare models of different structure. For example, its earliest application was in finding [[mixture model]]s with the optimal number of classes. Adding extra classes to a mixture model will always allow the data to be fitted to greater accuracy, but according to MML this must be weighed against the extra bits required to encode the parameters defining those classes.
* MML is a method of [[Bayesian model comparison]]. It gives every model a score.
* MML is scale-invariant and statistically invariant. Unlike many Bayesian selection methods, MML is unaffected by a change from, say, measuring length to volume, or from Cartesian to polar co-ordinates.
* MML is statistically consistent. For problems like the [[#{{harvid|Dowe|Wallace|1997}}|Neyman–Scott]] (1948) problem or factor analysis, where the amount of data per parameter is bounded above, MML can estimate all parameters with [[statistical consistency]].
* MML accounts for the precision of measurement. It uses the [[Fisher information]] (in the Wallace–Freeman 1987 approximation, or other hyper-volumes in [[#{{harvid|Wallace (posthumous)|2005}}|other approximations]]) to optimally discretize continuous parameters. Therefore the posterior is always a probability, not a probability density.
* MML has been in use since 1968. MML coding schemes have been developed for several distributions, and for many kinds of machine learners, including unsupervised classification, decision trees and graphs, DNA sequences, [[Bayesian network]]s, neural networks (one-layer only so far), image compression, and image and function segmentation.

==See also==
* [[Algorithmic probability]]
* [[Algorithmic information theory]]
* [[Grammar induction]]
* [[Inductive inference]]
* [[Inductive probability]]
* [[Kolmogorov complexity]] – absolute complexity (within a constant, depending on the particular choice of universal [[Turing machine|Turing machine]]); MML is typically a computable approximation (see <ref name=":0" />)
* [[Minimum description length]] – an alternative with a possibly different (non-Bayesian) motivation, developed 10 years after MML
* [[Occam's razor]]

==References==
{{Reflist}}

==External links==
''Original publication:''
*{{cite journal|last1=Wallace|last2=Boulton|url=http://www.csse.monash.edu.au/~dld/CSWallacePublications/WallaceBoultonAnInformationMeasureForClassification_ComputerJournal1968_pp185-194.txt|title=An information measure for classification|journal=Computer Journal|volume=11|issue=2|date=August 1968|pages=185–194|doi=10.1093/comjnl/11.2.185|doi-access=free}}

Books:
* {{cite book |authorlink=Chris Wallace (computer scientist) |first=C.S. |last=Wallace |url=https://link.springer.com/book/10.1007/0-387-27656-4 |title=Statistical and Inductive Inference by Minimum Message Length |publisher=Springer-Verlag |series=Information Science and Statistics |isbn=978-0-387-23795-4 |date=May 2005 |doi=10.1007/0-387-27656-4 |ref=CITEREFWallace (posthumous)2005}}
* {{cite book|first=L.|last=Allison|title=Coding Ockham's Razor|publisher=Springer|isbn=978-3319764320|date=2018|doi=10.1007/978-3-319-76433-7|s2cid=19136282}}, on implementing MML, and [https://www.cantab.net/users/mmlist/MML/A/ source-code].

Related links:
* Links to all [http://www.csse.monash.edu.au/~dld/CSWallacePublications/ Chris Wallace]'s known publications.
* A [http://www.allisons.org/ll/Images/People/Wallace/ searchable database of Chris Wallace's publications].
*{{cite journal|title=Minimum Message Length and Kolmogorov Complexity|first1=C.S.|last1=Wallace|first2=D.L.|last2=Dowe|journal=Computer Journal|volume=42|issue=4|year=1999|pages=270–283|doi=10.1093/comjnl/42.4.270|citeseerx=10.1.1.17.321}}
*{{cite journal|url=http://comjnl.oxfordjournals.org/content/42/4.toc|title=Special Issue on Kolmogorov Complexity|journal=Computer Journal|volume=42|issue=4|year=1999|ref={{harvid|Special Issue on Kolmogorov Complexity|1999}}}}{{dead link|date=January 2025|bot=medic}}{{cbignore|bot=medic}}
*{{cite conference|last1=Dowe|first1=D.L.|first2=C.S.|last2=Wallace|year=1997|title=Resolving the Neyman-Scott Problem by Minimum Message Length|journal=Computing Science and Statistics|volume=28|conference=28th Symposium on the interface, Sydney, Australia|pages=614–618|url=http://www.csse.monash.edu.au/~dld/David.Dowe.publications.html#DoweWallace1997}}
*[http://www.allisons.org/ll/MML/20031120e/ History of MML, CSW's last talk].
*{{cite conference|title=Message Length as an Effective Ockham's Razor in Decision Tree Induction|first1=S.|last1=Needham|first2=D.|last2=Dowe|conference-url=http://www.ai.mit.edu/conferences/aistats2001|conference=Proc. 8th International Workshop on AI and Statistics|year=2001|url=http://www.csse.monash.edu.au/~dld/Publications/2001/Needham+Dowe2001_Ockham.pdf|pages=253–260}} (Shows how [[Occam's razor]] works fine when interpreted as MML.)
*{{cite journal|first=L.|last=Allison|title=Models for machine learning and data mining in functional programming|journal=Journal of Functional Programming|volume=15|issue=1|pages=15–32|date=Jan 2005|doi=10.1017/S0956796804005301|doi-broken-date=17 April 2025|s2cid=5218889|doi-access=free}} (MML, FP, and Haskell [http://www.allisons.org/ll/Publications/200309/READ-ME.shtml code]).
*{{cite book|first1=J.W.|last1=Comley|first2=D.L.|last2=Dowe|chapter-url=http://www.csse.monash.edu.au/~dld/David.Dowe.publications.html#ComleyDowe2005|chapter=Chapter 11: Minimum Message Length, MDL and Generalised Bayesian Networks with Asymmetric Languages|pages=265–294|editor1-first=P.|editor1-last=Grunwald|editor2-first=M. A.|editor2-last=Pitt|editor3-first=I. J.|editor3-last=Myung|url=http://mitpress.mit.edu/catalog/item/default.asp?sid=4C100C6F-2255-40FF-A2ED-02FC49FEBE7C&ttype=2&tid=10478|title=Advances in Minimum Description Length: Theory and Applications|publisher=M.I.T. Press|date=April 2005|isbn=978-0-262-07262-5}}
*{{cite conference|url=http://www.csse.monash.edu.au/~dld/David.Dowe.publications.html#ComleyDowe2003|last1=Comley|first1=Joshua W.|first2=D.L.|last2=Dowe|title=General Bayesian Networks and Asymmetric Languages|conference=Proc. 2nd Hawaii International Conference on Statistics and Related Fields|date=5–8 June 2003}}, [http://www.csse.monash.edu.au/~dld/Publications/2003/Comley+Dowe03_HICS2003_GeneralBayesianNetworksAsymmetricLanguages.pdf .pdf]. Comley & Dowe ([http://www.csse.monash.edu.au/~dld/David.Dowe.publications.html#ComleyDowe2003 2003], [http://www.csse.monash.edu.au/~dld/David.Dowe.publications.html#ComleyDowe2005 2005]) are the first two papers on MML Bayesian nets using both discrete and continuous valued parameters.
*{{cite book|last=Dowe|first=David L.|year=2010|chapter-url=http://www.csse.monash.edu.au/~dld/Publications/2010/Dowe2010_MML_HandbookPhilSci_Vol7_HandbookPhilStat_MML+hybridBayesianNetworkGraphicalModels+StatisticalConsistency+InvarianceAndUniqueness_pp901-982.pdf|chapter=MML, hybrid Bayesian network graphical models, statistical consistency, invariance and uniqueness|title=Handbook of Philosophy of Science (Volume 7: Handbook of Philosophy of Statistics)|publisher=Elsevier|isbn=978-0-444-51862-0|pages=901–982}}
*[http://www.csse.monash.edu.au/~lloyd/tildeMML/ Minimum Message Length (MML)], LA's MML introduction, [http://www.allisons.org/ll/MML/ (MML alt.)].
*[http://www.csse.monash.edu.au/~dld/MML.html Minimum Message Length (MML), researchers and links].
*{{cite web|url=http://www.csse.monash.edu.au/mml/|title=Another MML research website|archive-url=https://web.archive.org/web/20170412041031/http://www.csse.monash.edu.au/mml/|archive-date=12 April 2017}}
*[http://www.csse.monash.edu.au/~dld/Snob.html Snob page] for MML [[mixture model]]ling.
*[http://ai.ato.ms/MITECS/Entry/wallace MITECS]: [http://www.csse.monash.edu.au/~dld/CSWallacePublications/ Chris Wallace] wrote an entry on MML for MITECS. (Requires account.)
*[https://web.archive.org/web/20170706095733/https://www.cs.helsinki.fi/u/floreen/sem/mikko.ps mikko.ps]: short introductory slides by Mikko Koivisto in Helsinki.
*[[Akaike information criterion]] ([[Akaike information criterion|AIC]]) method of [[model selection]], and a [http://www.csse.monash.edu.au/~dld/David.Dowe.publications.html#DoweGardnerOppy2007 comparison] with MML: {{cite journal|first1=D.L.|last1=Dowe|first2=S.|last2=Gardner|first3=G.|last3=Oppy|title=Bayes not Bust! Why Simplicity is no Problem for Bayesians|journal=Br. J. Philos. Sci.|volume=58|issue=4|date=Dec 2007|pages=709–754|doi=10.1093/bjps/axm033}}

{{Statistics}}
{{Least Squares and Regression Analysis}}

[[Category:Algorithmic information theory]]