===Other paradigms for inference===

====Minimum description length====
{{Main|Minimum description length}}
The minimum description length (MDL) principle has been developed from ideas in [[information theory]]<ref name="Soofi 2000 1349–1353">Soofi (2000)</ref> and the theory of [[Kolmogorov complexity]].<ref name=HY>Hansen & Yu (2001)</ref> The MDL principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or [[probability models]] for the data, as might be done in frequentist or Bayesian approaches. However, if a "data-generating mechanism" does exist in reality, then according to [[Claude Shannon|Shannon]]'s [[source coding theorem]] it provides the MDL description of the data, on average and asymptotically.<ref name=HY747>Hansen and Yu (2001), page 747.</ref> In minimizing description length (or descriptive complexity), MDL estimation is similar to [[maximum likelihood estimation]] and [[maximum a posteriori estimation]] (using [[Maximum entropy probability distribution|maximum-entropy]] [[Bayesian probability|Bayesian priors]]). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that, for example, the data arose from independent sampling.<ref name=HY747/><ref name=JR>Rissanen (1989), page 84</ref> The MDL principle has been applied in communication-[[coding theory]] in [[information theory]], in [[linear regression]],<ref name=JR/> and in [[data mining]].<ref name=HY/> The evaluation of MDL-based inferential procedures often uses techniques or criteria from [[computational complexity theory]].<ref>Joseph F. Traub, G. W. Wasilkowski, and H. Wozniakowski. (1988) {{page needed|date=June 2011}}</ref>

====Fiducial inference====
{{Main|Fiducial inference}}
[[Fiducial inference]] was an approach to statistical inference based on [[fiducial probability]], also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious.<ref>Neyman (1956)</ref><ref>Zabell (1992)</ref> However, this argument is the same as that which shows<ref>Cox (2006), page 66</ref> that a so-called [[confidence distribution]] is not a valid [[probability distribution]]; since this has not invalidated the application of [[confidence interval]]s, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's [[Fiducial probability|fiducial argument]] as a special case of an inference theory using [[upper and lower probabilities]].{{sfn|Hampel|2003}}

====Structural inference====
Developing ideas of Fisher and of Pitman from 1938 to 1939,<ref>Davison, page 12. {{full citation needed|date=November 2012}}</ref> [[George A. Barnard]] developed "structural inference" or "pivotal inference",<ref>Barnard, G.A. (1995) "Pivotal Models and the Fiducial Argument", International Statistical Review, 63 (3), 309–323. {{JSTOR|1403482}}</ref> an approach using [[Haar measure|invariant probabilities]] on [[group family|group families]]. Barnard reformulated the arguments behind fiducial inference for a restricted class of models on which "fiducial" procedures would be well-defined and useful. [[Donald A. S. Fraser]] developed a general theory for structural inference<ref>{{Cite book|last=Fraser|first=D. A. S.|url=https://www.worldcat.org/oclc/440926|title=The structure of inference|date=1968|publisher=Wiley|isbn=0-471-27548-4|location=New York|oclc=440926}}</ref> based on [[group theory]] and applied this to linear models.<ref>{{Cite book|last=Fraser|first=D. A. S.|url=https://www.worldcat.org/oclc/3559629|title=Inference and linear models|date=1979|publisher=McGraw-Hill|isbn=0-07-021910-9|location=London|oclc=3559629}}</ref> The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist.<ref>{{Cite journal|last1=Taraldsen|first1=Gunnar|last2=Lindqvist|first2=Bo Henry|date=2013-02-01|title=Fiducial theory and optimal inference|url=https://projecteuclid.org/journals/annals-of-statistics/volume-41/issue-1/Fiducial-theory-and-optimal-inference/10.1214/13-AOS1083.full|journal=The Annals of Statistics|volume=41|issue=1|doi=10.1214/13-AOS1083|arxiv=1301.1717|s2cid=88520957|issn=0090-5364}}</ref>
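The idea of selecting the model that maximally compresses the data can be illustrated with a minimal sketch of a crude two-part code, one standard elementary form of MDL: the total description length is the cost of encoding the model's parameters (here the conventional (1/2) log₂ n bits per real-valued parameter) plus the cost of encoding the data under that model. The function names and the two candidate models (a parameter-free fair coin versus a biased coin with a fitted rate) are illustrative choices, not taken from the sources cited above.

```python
import math

def two_part_codelength(data, model):
    """Total description length in bits: parameter cost + data cost under the model."""
    n = len(data)
    k = sum(data)  # number of ones in a 0/1 sequence
    if model == "fair":
        # No free parameters: each binary symbol costs exactly 1 bit.
        return float(n)
    # Biased-coin model: pay roughly (1/2) log2(n) bits to encode the
    # maximum-likelihood estimate of p, then code the data at its entropy rate.
    p = k / n
    if p in (0.0, 1.0):
        data_bits = 0.0
    else:
        data_bits = -(k * math.log2(p) + (n - k) * math.log2(1 - p))
    return 0.5 * math.log2(n) + data_bits

def select_model(data):
    """Pick whichever model gives the shorter two-part description of the data."""
    return min(("fair", "biased"), key=lambda m: two_part_codelength(data, m))

# A balanced sequence is described most briefly by the parameter-free model;
# a heavily skewed sequence is worth the extra parameter cost of the biased model.
balanced = [0, 1] * 50
skewed = [1] * 90 + [0] * 10
```

Note that no "true" probability model is assumed: the comparison is purely between total codelengths, which is what distinguishes this selection rule from a likelihood-ratio test even though the data-cost term coincides with a negative log-likelihood.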