Minimum description length
===Related concepts===
Statistical MDL learning is strongly connected to [[probability theory]] and [[statistics]] through the correspondence between codes and probability distributions mentioned above. This has led some researchers to view MDL as equivalent to [[Bayesian inference]]: the code lengths of the model and of the data given the model correspond respectively to the [[prior probability]] and the [[marginal likelihood]] in the Bayesian framework.<ref name="mackay">{{cite book |last=MacKay |first=David J. C. |title=Information Theory, Inference and Learning Algorithms |date=2003 |publisher=Cambridge University Press |isbn=978-0-521-64298-9 }}{{page needed|date=May 2020}}</ref> While Bayesian machinery is often useful in constructing efficient MDL codes, the MDL framework also accommodates codes that are not Bayesian. An example is the Shtarkov ''normalized maximum likelihood code'', which plays a central role in current MDL theory but has no equivalent in Bayesian inference.

Furthermore, Rissanen stresses that we should make no assumptions about the ''true'' [[Probabilistic model|data-generating process]]: in practice, a model class is typically a simplification of reality and thus does not contain any code or probability distribution that is true in any objective sense.<ref name="cwi">{{cite news |url=http://www.mdl-research.net/jorma.rissanen/ |title=Homepage of Jorma Rissanen |last=Rissanen |first=Jorma |date= |publisher= |accessdate=2010-07-03 |archive-url=https://web.archive.org/web/20151210054325/http://www.mdl-research.net/jorma.rissanen/ |archive-date=2015-12-10 |url-status=dead }}</ref>{{self-published inline|date=May 2020}}<ref name="springer">{{cite book |url=https://www.springer.com/computer/foundations/book/978-0-387-36610-4 |title=Information and Complexity in Statistical Modeling |last=Rissanen |first=J. |year=2007 |publisher=Springer |accessdate=2010-07-03}}{{page needed|date=May 2020}}</ref> In the latter reference, Rissanen bases the mathematical underpinning of MDL on the [[Kolmogorov structure function]]. According to the MDL philosophy, Bayesian methods should be dismissed if they are based on unsafe [[Prior probability|priors]] that would lead to poor results. The priors that are acceptable from an MDL point of view also tend to be favored in so-called [[Objective Bayesian probability|objective Bayesian]] analysis; there, however, the motivation is usually different.<ref name="volker">{{cite journal |last1=Nannen |first1=Volker |title=A Short Introduction to Model Selection, Kolmogorov Complexity and Minimum Description Length (MDL) |date=May 2010 |arxiv=1005.2364 |bibcode=2010arXiv1005.2364N }}</ref>
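The code–probability correspondence and the contrast between Bayesian and NML codes can be made concrete on a toy example. The following sketch (an illustration written for this section, not from the cited sources; the function names are invented) computes two valid codelengths for binary sequences of length ''n'' under the Bernoulli model class: the Shtarkov normalized maximum likelihood codelength and a Bayesian codelength with a uniform prior over the parameter. In both cases the codelength of a sequence is −log₂ of a probability, and the implied codelengths satisfy the [[Kraft inequality]] with equality.

```python
from math import comb, lgamma, log, log2

def ml_prob(k, n):
    """Probability of one specific binary sequence with k ones, evaluated
    at its own maximum-likelihood Bernoulli parameter theta_hat = k/n."""
    if k == 0 or k == n:
        return 1.0  # theta_hat is 0 or 1, and the sequence has probability 1
    th = k / n
    return th ** k * (1 - th) ** (n - k)

def nml_codelength(k, n):
    """Shtarkov NML codelength in bits:
    -log2[ P(x | theta_hat(x)) / sum_y P(y | theta_hat(y)) ],
    where the normalizer sums over all 2^n sequences y, grouped here
    by their number of ones j."""
    norm = sum(comb(n, j) * ml_prob(j, n) for j in range(n + 1))
    return -log2(ml_prob(k, n) / norm)

def bayes_codelength(k, n):
    """Bayesian codelength in bits under a uniform (Beta(1,1)) prior:
    the marginal likelihood of one specific sequence with k ones is
    k! (n-k)! / (n+1)!, computed via log-gamma for stability."""
    log_marginal = lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)
    return -log_marginal / log(2)
```

For moderate ''n'' the two codelengths are close, illustrating that Bayesian machinery yields efficient MDL codes; but the NML code is defined by minimizing worst-case regret relative to the maximum-likelihood fit, a construction with no Bayesian counterpart.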