== Paradigms for inference ==
Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms.

Bandyopadhyay and Forster describe four paradigms: The classical (or [[Frequentist inference|frequentist]]) paradigm, the [[Bayesian inference|Bayesian]] paradigm, the [[Likelihoodism|likelihoodist]] paradigm, and the [[Akaike information criterion|Akaikean-Information Criterion]]-based paradigm.<ref>Bandyopadhyay & Forster (2011). See the book's Introduction (p.3) and "Section III: Four Paradigms of Statistics".</ref>

=== Frequentist inference ===
{{Main|Frequentist inference}}
This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging.

==== Examples of frequentist inference ====
* [[p-value|''p''-value]]
* [[Confidence interval]]
* [[Null hypothesis]] significance testing

==== Frequentist inference, objectivity, and decision theory ====
One interpretation of [[frequentist inference]] (or classical inference) is that it is applicable only in terms of [[frequency probability]]; that is, in terms of repeated sampling from a population. However, the approach of Neyman<ref>{{cite journal | last = Neyman | first = J. | author-link = Jerzy Neyman | year = 1937 | title = Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability | jstor = 91337 | journal = Philosophical Transactions of the Royal Society of London A | volume = 236 | issue = 767| pages = 333–380 | doi=10.1098/rsta.1937.0005 | bibcode = 1937RSPTA.236..333N | doi-access = free }}</ref> develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach.

The frequentist procedures of significance testing and confidence intervals can be constructed without regard to [[utility function]]s. However, some elements of frequentist statistics, such as [[statistical decision theory]], do incorporate [[utility function]]s.{{Citation needed|date=April 2012}} In particular, frequentist developments of optimal inference (such as [[minimum-variance unbiased estimator]]s, or [[uniformly most powerful test]]ing) make use of [[loss function]]s, which play the role of (negative) utility functions.
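The pre-experiment calibration described above can be illustrated by simulation: a rule for constructing a 95% confidence interval is fixed in advance, and its long-run coverage is then estimated over many notional repetitions. The following is a minimal sketch in Python (the population mean, standard deviation, and sample size are arbitrary illustrative values, not drawn from the cited sources):

<syntaxhighlight lang="python">
# Minimal sketch (arbitrary illustrative values): empirical coverage of a
# 95% confidence interval for a normal mean under notional repeated sampling.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 10.0, 2.0, 25, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    mean = sample.mean()
    sem = sample.std(ddof=1) / np.sqrt(n)
    # Two-sided 95% t-interval; the rule is fixed before "seeing" the data.
    tcrit = stats.t.ppf(0.975, df=n - 1)
    lo, hi = mean - tcrit * sem, mean + tcrit * sem
    covered += (lo <= mu <= hi)

print(f"Empirical coverage: {covered / reps:.3f}")  # close to the nominal 0.95
</syntaxhighlight>

Under repeated sampling the reported proportion approaches the nominal 95% level, which is the pre-experiment probability that the procedure is designed to control.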
Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property.<ref>Preface to Pfanzagl.</ref> However, loss functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under [[absolute value]] loss functions, in that they minimize expected loss, and [[least squares]] estimators are optimal under squared error loss functions, in that they minimize expected loss.

While statisticians using frequentist inference must choose for themselves the parameters of interest, and the [[estimators]]/[[Test statistic#Common test statistics|test statistic]] to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'.<ref>{{Cite journal|last=Little|first=Roderick J.|date=2006|title=Calibrated Bayes: A Bayes/Frequentist Roadmap|journal=The American Statistician|volume=60|issue=3|pages=213–223|issn=0003-1305|jstor=27643780|doi=10.1198/000313006X117837|s2cid=53505632}}</ref>

=== Bayesian inference ===
{{See also|Bayesian inference}}
The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions.<ref>{{Cite journal |last=Lee|first=Se Yoon| title = Gibbs sampler and coordinate ascent variational inference: A set-theoretical review|journal=Communications in Statistics - Theory and Methods|year=2021|volume=51|issue=6|pages=1549–1568|doi=10.1080/03610926.2021.1921214|arxiv=2008.01006|s2cid=220935477}}</ref> There are [[Bayesian probability#Justification of Bayesian probabilities|several different justifications]] for using the Bayesian approach.

==== Examples of Bayesian inference ====
* [[Credible interval]] for [[interval estimation]]
* [[Bayes factor]]s for model comparison

==== Bayesian inference, subjectivity and decision theory ====
Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes factors can all be motivated in this way. While a user's [[utility function]] need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been [[Bayesian probability#Personal probabilities and objective methods for constructing priors|proposed]] but not yet fully developed.) Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides [[optimal decision]]s in a [[decision theory|decision theoretic]] sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) [[Coherence (statistics)|incoherent]]; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be [[Coherence (statistics)|coherent]].
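As a minimal sketch of posterior summarization (an illustration with an arbitrary Beta prior and made-up data, not drawn from the cited sources), a conjugate Beta-Binomial analysis gives the posterior in closed form, from which the usual "intuitively reasonable" summaries follow directly; under squared-error loss the Bayes rule reports the posterior mean, and under absolute-error loss it reports the posterior median:

<syntaxhighlight lang="python">
# Minimal sketch (illustrative values only): conjugate Beta-Binomial posterior.
# Prior: theta ~ Beta(a, b); data: k successes in n Bernoulli(theta) trials.
from scipy import stats

a, b = 1.0, 1.0   # Beta(1, 1) prior, an arbitrary (and explicitly subjective) choice
k, n = 7, 20      # made-up data: 7 successes in 20 trials

# By conjugacy, the posterior is Beta(a + k, b + n - k).
posterior = stats.beta(a + k, b + n - k)

post_mean = posterior.mean()                     # Bayes rule under squared-error loss
post_median = posterior.median()                 # Bayes rule under absolute-error loss
ci_low, ci_high = posterior.ppf([0.025, 0.975])  # equal-tailed 95% credible interval

print(f"posterior mean   = {post_mean:.3f}")
print(f"posterior median = {post_median:.3f}")
print(f"95% credible interval = ({ci_low:.3f}, {ci_high:.3f})")
</syntaxhighlight>

Changing the prior parameters changes these summaries, which is the sense in which such conclusions depend on stated prior beliefs.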
Some advocates of [[Bayesian inference]] assert that inference ''must'' take place in this decision-theoretic framework, and that [[Bayesian inference]] should not conclude with the evaluation and summarization of posterior beliefs.

=== Likelihood-based inference ===
{{Main|Likelihoodism}}
Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. [[Likelihoodism]] approaches statistics by using the [[likelihood function]], denoted as <math>L(x | \theta)</math>, which quantifies the probability of observing the given data <math>x</math>, assuming a specific set of parameter values <math>\theta</math>. In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data.

The process of likelihood-based inference usually involves the following steps (a brief worked sketch combining these steps with AIC-based model comparison is given below):
# Formulating the statistical model: A statistical model is defined based on the problem at hand, specifying the distributional assumptions and the relationship between the observed data and the unknown parameters. The model can be simple, such as a normal distribution with known variance, or complex, such as a hierarchical model with multiple levels of random effects.
# Constructing the likelihood function: Given the statistical model, the likelihood function is constructed by evaluating the joint probability density or mass function of the observed data as a function of the unknown parameters. This function represents the probability of observing the data for different values of the parameters.
# Maximizing the likelihood function: The next step is to find the set of parameter values that maximizes the likelihood function. This can be achieved using optimization techniques such as numerical optimization algorithms. The estimated parameter values, often denoted as <math>\hat{\theta}</math>, are the [[Maximum likelihood estimation|maximum likelihood estimates]] (MLEs).
# Assessing uncertainty: Once the MLEs are obtained, it is crucial to quantify the uncertainty associated with the parameter estimates. This can be done by calculating [[standard error]]s, confidence intervals, or conducting [[hypothesis test]]s based on asymptotic theory or simulation techniques such as [[Bootstrapping (statistics)|bootstrapping]].
# Model checking: After obtaining the parameter estimates and assessing their uncertainty, it is important to assess the adequacy of the statistical model. This involves checking the assumptions made in the model and evaluating the fit of the model to the data using goodness-of-fit tests, residual analysis, or graphical diagnostics.
# Inference and interpretation: Finally, based on the estimated parameters and model assessment, statistical inference can be performed. This involves drawing conclusions about the population parameters, making predictions, or testing hypotheses based on the estimated model.

=== AIC-based inference ===
{{Main|Akaike information criterion}}
{{expand section|date=November 2017}}
The ''[[Akaike information criterion]]'' (AIC) is an [[estimator]] of the relative quality of [[statistical model]]s for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for [[model selection]].
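The following is a minimal sketch of both the likelihood-based workflow above and AIC-based model comparison (using simulated data and two simple candidate models chosen purely for illustration, not drawn from the cited sources). Each candidate is fitted by maximum likelihood and the fits are compared with <math>\mathrm{AIC} = 2k - 2\ln\hat{L}</math>, where <math>k</math> is the number of estimated parameters and <math>\hat{L}</math> is the maximized likelihood:

<syntaxhighlight lang="python">
# Minimal sketch (simulated data, arbitrary values): maximum likelihood fits of
# two candidate models and their comparison by AIC = 2k - 2*ln(L_hat).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.exponential(scale=3.0, size=200)   # made-up data from an exponential model

# Candidate 1: exponential model (k = 1 free parameter: the scale), fitted by MLE.
loc_e, scale_e = stats.expon.fit(data, floc=0)
loglik_expon = np.sum(stats.expon.logpdf(data, loc=loc_e, scale=scale_e))
aic_expon = 2 * 1 - 2 * loglik_expon

# Candidate 2: gamma model (k = 2 free parameters: shape and scale), fitted by MLE.
shape_g, loc_g, scale_g = stats.gamma.fit(data, floc=0)
loglik_gamma = np.sum(stats.gamma.logpdf(data, shape_g, loc=loc_g, scale=scale_g))
aic_gamma = 2 * 2 - 2 * loglik_gamma

print(f"AIC (exponential) = {aic_expon:.1f}")
print(f"AIC (gamma)       = {aic_gamma:.1f}")

# Assessing uncertainty (step 4 above), sketched via a nonparametric bootstrap
# of the exponential scale estimate.
boot = [stats.expon.fit(rng.choice(data, size=data.size, replace=True), floc=0)[1]
        for _ in range(1000)]
print(f"bootstrap SE of the scale MLE = {np.std(boot, ddof=1):.3f}")
</syntaxhighlight>

The model with the smaller AIC is estimated to lose less information about the data-generating process; the absolute AIC values carry no meaning on their own.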
AIC is founded on [[information theory]]: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the [[goodness of fit]] of the model and the simplicity of the model.)

=== Other paradigms for inference ===

==== Minimum description length ====
{{Main|Minimum description length}}
The minimum description length (MDL) principle has been developed from ideas in [[information theory]]<ref name="Soofi 2000 1349–1353">Soofi (2000)</ref> and the theory of [[Kolmogorov complexity]].<ref name=HY>Hansen & Yu (2001)</ref> The MDL principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or [[probability models]] for the data, as might be done in frequentist or Bayesian approaches. However, if a "data generating mechanism" does exist in reality, then according to [[Claude Shannon|Shannon]]'s [[source coding theorem]] it provides the MDL description of the data, on average and asymptotically.<ref name=HY747>Hansen and Yu (2001), page 747.</ref> In minimizing description length (or descriptive complexity), MDL estimation is similar to [[maximum likelihood estimation]] and [[maximum a posteriori estimation]] (using [[Maximum entropy probability distribution|maximum-entropy]] [[Bayesian probability|Bayesian priors]]). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling.<ref name=HY747/><ref name=JR>Rissanen (1989), page 84</ref>

The MDL principle has been applied in communication-[[coding theory]] in [[information theory]], in [[linear regression]],<ref name=JR/> and in [[data mining]].<ref name=HY/> The evaluation of MDL-based inferential procedures often uses techniques or criteria from [[computational complexity theory]].<ref>Joseph F. Traub, G. W. Wasilkowski, and H. Wozniakowski. (1988) {{page needed|date=June 2011}}</ref>

==== Fiducial inference ====
{{Main|Fiducial inference}}
[[Fiducial inference]] was an approach to statistical inference based on [[fiducial probability]], also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious.<ref>Neyman (1956)</ref><ref>Zabell (1992)</ref> However, this argument is the same as that which shows<ref>Cox (2006) page 66</ref> that a so-called [[confidence distribution]] is not a valid [[probability distribution]] and, since this has not invalidated the application of [[confidence interval]]s, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's [[Fiducial probability|fiducial argument]] as a special case of an inference theory using [[upper and lower probabilities]].{{sfn|Hampel|2003}}

==== Structural inference ====
Developing ideas of Fisher and of Pitman from 1938 to 1939,<ref>Davison, page 12. {{full citation needed|date=November 2012}}</ref> [[George A. Barnard]] developed "structural inference" or "pivotal inference",<ref>Barnard, G.A. (1995) "Pivotal Models and the Fiducial Argument", International Statistical Review, 63 (3), 309–323. {{JSTOR|1403482}}</ref> an approach using [[Haar measure|invariant probabilities]] on [[group family|group families]].
Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful. [[Donald A. S. Fraser]] developed a general theory for structural inference<ref>{{Cite book|last=Fraser|first=D. A. S.|url=https://www.worldcat.org/oclc/440926|title=The structure of inference|date=1968|publisher=Wiley|isbn=0-471-27548-4|location=New York|oclc=440926}}</ref> based on [[group theory]] and applied this to linear models.<ref>{{Cite book|last=Fraser|first=D. A. S.|url=https://www.worldcat.org/oclc/3559629|title=Inference and linear models|date=1979|publisher=McGraw-Hill|isbn=0-07-021910-9|location=London|oclc=3559629}}</ref> The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist.<ref>{{Cite journal|last1=Taraldsen|first1=Gunnar|last2=Lindqvist|first2=Bo Henry|date=2013-02-01|title=Fiducial theory and optimal inference|url=https://projecteuclid.org/journals/annals-of-statistics/volume-41/issue-1/Fiducial-theory-and-optimal-inference/10.1214/13-AOS1083.full|journal=The Annals of Statistics|volume=41|issue=1|doi=10.1214/13-AOS1083|arxiv=1301.1717|s2cid=88520957|issn=0090-5364}}</ref>