{{Short description|In Bayesian probability theory}}
{{Bayesian statistics}}
A '''marginal likelihood''' is a [[likelihood function]] that has been [[Integral|integrated]] over the [[parameter space]]. In [[Bayesian statistics]], it represents the probability of generating the [[Sampling (statistics)|observed sample]] for all possible values of the parameters; it can be understood as the probability of the model itself and is therefore often referred to as '''model evidence''' or simply '''evidence'''. Due to the integration over the parameter space, the marginal likelihood does not directly depend upon the parameters. If the focus is not on model comparison, the marginal likelihood is simply the normalizing constant that ensures that the [[posterior probability|posterior]] is a proper probability. It is related to the [[Partition function (statistical mechanics)|partition function in statistical mechanics]].<ref>{{cite book |first=Václav |last=Šmídl |first2=Anthony |last2=Quinn |chapter=Bayesian Theory |title=The Variational Bayes Method in Signal Processing |pages=13–23 |year=2006 |publisher=Springer |doi=10.1007/3-540-28820-1_2 }}</ref>

==Concept==
Given a set of [[independent identically distributed]] data points <math>\mathbf{X}=(x_1,\ldots,x_n),</math> where <math>x_i \sim p(x\mid\theta)</math> according to some [[probability distribution]] parameterized by <math>\theta</math>, where <math>\theta</math> itself is a [[random variable]] described by a distribution, i.e. <math>\theta \sim p(\theta\mid\alpha),</math> the marginal likelihood is the probability <math>p(\mathbf{X}\mid\alpha)</math> of the data, with <math>\theta</math> [[marginal distribution|marginalized out]] (integrated out):

:<math>p(\mathbf{X}\mid\alpha) = \int_\theta p(\mathbf{X}\mid\theta) \, p(\theta\mid\alpha)\ \operatorname{d}\!\theta </math>

The above definition is phrased in the context of [[Bayesian statistics]], in which case <math>p(\theta\mid\alpha)</math> is called the prior density and <math>p(\mathbf{X}\mid\theta)</math> is the likelihood. Recognizing that the marginal likelihood is the normalizing constant of the Bayesian posterior density <math>p(\theta\mid\mathbf{X},\alpha)</math>, one also has the alternative expression<ref>{{cite journal |first=Siddhartha |last=Chib |title=Marginal likelihood from the Gibbs output |journal=Journal of the American Statistical Association |year=1995 |volume=90 |issue=432 |pages=1313–1321 |doi=10.1080/01621459.1995.10476635 }}</ref>

:<math>p(\mathbf{X} \mid \alpha) = \frac{p(\mathbf{X} \mid \theta, \alpha) \, p(\theta \mid \alpha)}{p(\theta \mid \mathbf{X}, \alpha)},</math>

which is an identity in <math>\theta</math>. The marginal likelihood quantifies the agreement between data and prior in a geometric sense made precise{{How|date=February 2023}} in de Carvalho et al. (2019).
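As a simple worked example in which the integral is available in closed form, suppose the data are <math>n</math> [[Bernoulli distribution|Bernoulli]] observations with <math>s = \textstyle\sum_i x_i</math> successes, and the prior on <math>\theta</math> is a [[conjugate prior|conjugate]] [[beta distribution|Beta]]<math>(a,b)</math> distribution (the hyperparameters <math>a, b</math>, which play the role of <math>\alpha</math> above, are chosen only for illustration). The marginal likelihood is then

:<math>p(\mathbf{X}\mid a,b) = \int_0^1 \theta^{s}(1-\theta)^{n-s} \, \frac{\theta^{a-1}(1-\theta)^{b-1}}{\mathrm{B}(a,b)} \, \operatorname{d}\!\theta = \frac{\mathrm{B}(a+s,\, b+n-s)}{\mathrm{B}(a,b)},</math>

where <math>\mathrm{B}(\cdot,\cdot)</math> denotes the [[beta function]].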
In classical ([[frequentist statistics|frequentist]]) statistics, the concept of marginal likelihood occurs instead in the context of a joint parameter <math>\theta = (\psi,\lambda)</math>, where <math>\psi</math> is the actual parameter of interest, and <math>\lambda</math> is a [[nuisance parameter]] that is not of direct interest. If there exists a probability distribution for <math>\lambda</math>{{Dubious|1=Frequentist_marginal_likelihood|reason=Parameters do not have distributions in frequentist statistics|date=February 2023}}, it is often desirable to consider the likelihood function only in terms of <math>\psi</math>, by marginalizing out <math>\lambda</math>:

:<math>\mathcal{L}(\psi;\mathbf{X}) = p(\mathbf{X}\mid\psi) = \int_\lambda p(\mathbf{X}\mid\lambda,\psi) \, p(\lambda\mid\psi) \ \operatorname{d}\!\lambda </math>

Unfortunately, marginal likelihoods are generally difficult to compute. Exact solutions are known for a small class of distributions, particularly when the marginalized-out parameter is the [[conjugate prior]] of the distribution of the data. In other cases, some kind of [[numerical integration]] method is needed, either a general method such as [[Gaussian integration]] or a [[Monte Carlo method]], or a method specialized to statistical problems such as the [[Laplace approximation]], [[Gibbs sampling|Gibbs]]/[[Metropolis–Hastings algorithm|Metropolis]] sampling, or the [[EM algorithm]].

It is also possible to apply the above considerations to a single random variable (data point) <math>x</math>, rather than a set of observations. In a Bayesian context, this is equivalent to the [[prior predictive distribution]] of a data point.

== Applications ==

=== Bayesian model comparison ===
In [[Bayesian model comparison]], the marginalized variables <math>\theta</math> are parameters for a particular type of model, and the remaining variable <math>M</math> is the identity of the model itself. In this case, the marginal likelihood is the probability of the data given the model type, not assuming any particular model parameters. Writing <math>\theta</math> for the model parameters, the marginal likelihood for the model ''M'' is

:<math> p(\mathbf{X}\mid M) = \int p(\mathbf{X}\mid\theta, M) \, p(\theta\mid M) \, \operatorname{d}\!\theta </math>

It is in this context that the term ''model evidence'' is normally used. This quantity is important because the posterior odds ratio for a model ''M''<sub>1</sub> against another model ''M''<sub>2</sub> involves a ratio of marginal likelihoods, called the [[Bayes factor]]:

:<math> \frac{p(M_1\mid \mathbf{X})}{p(M_2\mid \mathbf{X})} = \frac{p(M_1)}{p(M_2)} \, \frac{p(\mathbf{X}\mid M_1)}{p(\mathbf{X}\mid M_2)} </math>

which can be stated schematically as

:posterior [[odds]] = prior odds × [[Bayes factor]]
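A minimal numerical sketch of these quantities, using the hypothetical Beta–Bernoulli setup from the worked example above: each model's evidence is estimated by the simplest Monte Carlo estimator, averaging the likelihood over draws from the prior, and checked against the conjugate closed form; the two estimates are then combined into a Bayes factor. The data, prior hyperparameters, and function names are chosen purely for illustration, and in practice more efficient estimators are usually needed.

<syntaxhighlight lang="python">
# Naive Monte Carlo estimate of the marginal likelihood (model evidence)
# for n Bernoulli observations with a Beta(a, b) prior on theta.
# Illustrative sketch only: the data and priors are hypothetical.
import numpy as np
from scipy.special import betaln  # log of the beta function, for the exact answer

rng = np.random.default_rng(0)

x = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])  # hypothetical data
n, s = len(x), int(x.sum())

def log_evidence_mc(a, b, num_draws=100_000):
    """Estimate log p(X | a, b) by averaging the likelihood over prior draws."""
    theta = rng.beta(a, b, size=num_draws)              # theta_i ~ p(theta | a, b)
    log_lik = s * np.log(theta) + (n - s) * np.log1p(-theta)
    # log of the Monte Carlo average, computed stably in log space
    return np.logaddexp.reduce(log_lik) - np.log(num_draws)

def log_evidence_exact(a, b):
    """Closed form B(a + s, b + n - s) / B(a, b) for the conjugate case."""
    return betaln(a + s, b + n - s) - betaln(a, b)

# Evidence under two candidate priors ("models") and the resulting Bayes factor
log_ev1 = log_evidence_mc(1.0, 1.0)   # M1: uniform prior on theta
log_ev2 = log_evidence_mc(5.0, 5.0)   # M2: prior concentrated near 1/2
print("log evidence M1:", log_ev1, "exact:", log_evidence_exact(1.0, 1.0))
print("log evidence M2:", log_ev2, "exact:", log_evidence_exact(5.0, 5.0))
print("Bayes factor M1 vs M2:", np.exp(log_ev1 - log_ev2))
</syntaxhighlight>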
==See also==
* [[Empirical Bayes methods]]
* [[Lindley's paradox]]
* [[Marginal probability]]
* [[Bayesian information criterion]]

== References ==
{{Reflist}}

== Further reading ==
* Charles S. Bos. "A comparison of marginal likelihood computation methods". In W. Härdle and B. Ronz, editors, ''COMPSTAT 2002: Proceedings in Computational Statistics'', pp. 111–117. 2002. ''(Available as a preprint on {{SSRN|332860}})''
* de Carvalho, Miguel; Page, Garritt; Barney, Bradley (2019). "On the geometry of Bayesian inference". ''Bayesian Analysis''. 14 (4): 1013–1036. ''(Available as a preprint on the web: [https://www.maths.ed.ac.uk/~mdecarv/papers/decarvalho2018.pdf])''
* {{cite book |first=Ben |last=Lambert |chapter=The devil is in the denominator |pages=109–120 |title=A Student's Guide to Bayesian Statistics |publisher=Sage |year=2018 |isbn=978-1-4739-1636-4 }}
* [http://www.inference.phy.cam.ac.uk/mackay/itila/ The on-line textbook: Information Theory, Inference, and Learning Algorithms], by [[David J.C. MacKay]].

[[Category:Bayesian statistics]]