{{Short description|Statistical factor used to compare competing hypotheses}}
{{Bayesian statistics}}

The '''Bayes factor''' is a ratio of two competing [[statistical model]]s represented by their [[marginal likelihood|evidence]], and is used to quantify the support for one model over the other.<ref>{{cite journal |last1=Morey |first1=Richard D. |first2=Jan-Willem |last2=Romeijn |first3=Jeffrey N. |last3=Rouder |title=The philosophy of Bayes factors and the quantification of statistical evidence |journal=Journal of Mathematical Psychology |volume=72 |year=2016 |pages=6–18 |doi=10.1016/j.jmp.2015.11.001 |doi-access=free }}</ref> The models in question can have a common set of parameters, such as a [[null hypothesis]] and an alternative, but this is not necessary; for instance, one could also compare a non-linear model to its [[linear approximation]]. The Bayes factor can be thought of as a Bayesian analog to the [[likelihood-ratio test]], although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. As such, the two quantities coincide only under simple hypotheses (e.g., two specific parameter values).<ref>{{cite book |first1=Emmanuel |last1=Lesaffre |last2=Lawson |first2=Andrew B. |title=Bayesian Biostatistics |location=Somerset |publisher=John Wiley & Sons |year=2012 |isbn=978-0-470-01823-1 |chapter=Bayesian hypothesis testing |pages=72–78 |doi=10.1002/9781119942412.ch3 }}</ref> Also, in contrast with [[null hypothesis significance testing]], Bayes factors support evaluation of evidence ''in favor'' of a null hypothesis, rather than only allowing the null to be rejected or not rejected.<ref>{{cite journal |first1=Alexander |last1=Ly |first2=Angelika |last2=Stefan |first3=Johnny |last3=van Doorn |first4=Fabian |last4=Dablander |display-authors=1 |title=The Bayesian Methodology of Sir Harold Jeffreys as a Practical Alternative to the ''P'' Value Hypothesis Test |journal=Computational Brain & Behavior |volume=3 |issue=2 |pages=153–161 |year=2020 |doi=10.1007/s42113-019-00070-x |doi-access=free |hdl=2066/226717 |hdl-access=free }}</ref>

Although conceptually simple, the computation of the Bayes factor can be challenging depending on the complexity of the model and the hypotheses.<ref>{{cite journal |first1=Fernando |last1=Llorente |first2=Luca |last2=Martino |first3=David |last3=Delgado |first4=Javier |display-authors=1 |last4=Lopez-Santiago |title=Marginal likelihood computation for model selection and hypothesis testing: an extensive review |journal=SIAM Review |volume=65 |issue=1 |year=2023 |pages=3–58 |doi=10.1137/20M1310849 |arxiv=2005.08334 |s2cid=210156537 }}</ref> Since closed-form expressions of the marginal likelihood are generally not available, numerical approximations based on [[Markov chain Monte Carlo|MCMC samples]] have been suggested.<ref>{{cite book |first=Peter |last=Congdon |chapter=Estimating model probabilities or marginal likelihoods in practice |pages=38–40 |title=Applied Bayesian Modelling |publisher=Wiley |edition=2nd |year=2014 |isbn=978-1-119-95151-3 }}</ref> A widely used approach is the method proposed by Chib (1995).<ref>{{cite journal |last=Chib |first=Siddhartha |year=1995 |title=Marginal Likelihood from the Gibbs Output |journal=Journal of the American Statistical Association |volume=90 |issue=432 |pages=1313–1321 |doi=10.2307/2291521 }}</ref> Chib and Jeliazkov (2001) later extended this method to handle cases where Metropolis–Hastings samplers are used.<ref>{{cite journal |last1=Chib |first1=Siddhartha |last2=Jeliazkov |first2=Ivan |year=2001 |title=Marginal Likelihood from the Metropolis–Hastings Output |journal=Journal of the American Statistical Association |volume=96 |issue=453 |pages=270–281 |doi=10.1198/016214501750332848 }}</ref> For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio in the case of a precise (equality constrained) hypothesis against an unrestricted alternative.<ref>{{cite book |first=Gary |last=Koop |title=Bayesian Econometrics |location=Somerset |publisher=John Wiley & Sons |year=2003 |isbn=0-470-84567-8 |chapter=Model Comparison: The Savage–Dickey Density Ratio |pages=69–71 }}</ref><ref>{{cite journal |first1=Eric-Jan |last1=Wagenmakers |first2=Tom |last2=Lodewyckx |first3=Himanshu |last3=Kuriyal |first4=Raoul |last4=Grasman |title=Bayesian hypothesis testing for psychologists: A tutorial on the Savage–Dickey method |journal=Cognitive Psychology |volume=60 |issue=3 |year=2010 |pages=158–189 |doi=10.1016/j.cogpsych.2009.12.001 |pmid=20064637 |s2cid=206867662 |url=http://www.ejwagenmakers.com/2010/WagenmakersEtAlCogPsy2010.pdf }}</ref> Another approximation, derived by applying [[Laplace's approximation]] to the integrated likelihoods, is known as the [[Bayesian information criterion]] (BIC);<ref>{{cite book |first1=Joseph G. |last1=Ibrahim |first2=Ming-Hui |last2=Chen |first3=Debajyoti |last3=Sinha |chapter=Model Comparison |series=Springer Series in Statistics |title=Bayesian Survival Analysis |location=New York |publisher=Springer |year=2001 |isbn=0-387-95277-2 |doi=10.1007/978-1-4757-3447-8_6 |pages=246–254 }}</ref> in large data sets the Bayes factor will approach the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not be [[improper prior|improper]], since the Bayes factor will be undefined if either of the two integrals in its ratio is not finite.

==Definition==

The Bayes factor is the ratio of two marginal likelihoods; that is, the [[likelihood function|likelihoods]] of two statistical models integrated over the [[prior probability|prior probabilities]] of their parameters.<ref>{{cite book |first=Jeff |last=Gill |authorlink=Jeff Gill (academic) |title=Bayesian Methods : A Social and Behavioral Sciences Approach |publisher=Chapman & Hall |year=2002 |isbn=1-58488-288-3 |chapter=Bayesian Hypothesis Testing and the Bayes Factor |pages=199–237 }}</ref>

The [[posterior probability]] <math>\Pr(M|D)</math> of a model ''M'' given data ''D'' is given by [[Bayes' theorem]]:

:<math>\Pr(M|D) = \frac{\Pr(D|M)\Pr(M)}{\Pr(D)}.</math>

The key data-dependent term <math>\Pr(D|M)</math> represents the probability that some data are produced under the assumption of the model ''M''; evaluating it correctly is the key to Bayesian model comparison.

Given a [[model selection]] problem in which one wishes to choose between two models on the basis of observed data ''D'', the plausibility of the two different models ''M''<sub>1</sub> and ''M''<sub>2</sub>, parametrised by model parameter vectors <math>\theta_1</math> and <math>\theta_2</math>, is assessed by the Bayes factor ''K'' given by

:<math>K = \frac{\Pr(D|M_1)}{\Pr(D|M_2)}
     = \frac{\int \Pr(\theta_1|M_1)\Pr(D|\theta_1,M_1)\,d\theta_1}{\int \Pr(\theta_2|M_2)\Pr(D|\theta_2,M_2)\,d\theta_2}
     = \frac{\frac{\Pr(M_1|D)\Pr(D)}{\Pr(M_1)}}{\frac{\Pr(M_2|D)\Pr(D)}{\Pr(M_2)}}
     = \frac{\Pr(M_1|D)}{\Pr(M_2|D)}\frac{\Pr(M_2)}{\Pr(M_1)}.</math>

When the two models have equal prior probability, so that <math>\Pr(M_1) = \Pr(M_2)</math>, the Bayes factor is equal to the ratio of the posterior probabilities of ''M''<sub>1</sub> and ''M''<sub>2</sub>. If, instead of the Bayes factor integral, the likelihood corresponding to the [[maximum likelihood|maximum likelihood estimate]] of the parameter for each statistical model is used, the test becomes a classical [[likelihood-ratio test]]. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors).

An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure.<ref name=kassraftery1995>{{Cite journal |author1=Robert E. Kass |author2=Adrian E. Raftery |name-list-style=amp |year=1995 |title=Bayes Factors |url=http://www.andrew.cmu.edu/user/kk3n/simplicity/KassRaftery1995.pdf |journal=Journal of the American Statistical Association |volume=90 |number=430 |page=791 |doi=10.2307/2291091 |jstor=2291091 }}</ref> It thus guards against [[overfitting]]. For models where an explicit version of the likelihood is not available or is too costly to evaluate numerically, [[approximate Bayesian computation]] can be used for model selection in a Bayesian framework,<ref name=Toni2009b>{{cite journal |author1=Toni, T. |author2=Stumpf, M.P.H. |year=2009 |title=Simulation-based model selection for dynamical systems in systems and population biology |journal=Bioinformatics |volume=26 |issue=1 |pages=104–10 |doi=10.1093/bioinformatics/btp619 |pmid=19880371 |pmc=2796821 |arxiv=0911.1705 }}</ref> with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.<ref name=Robert2011>{{cite journal |last1=Robert |first1=C.P. |author2=J. Cornuet |author3=J. Marin |author4=N.S. Pillai |name-list-style=amp |year=2011 |title=Lack of confidence in approximate Bayesian computation model choice |journal=Proceedings of the National Academy of Sciences |volume=108 |issue=37 |pages=15112–15117 |doi=10.1073/pnas.1102900108 |pmid=21876135 |pmc=3174657 |bibcode=2011PNAS..10815112R |doi-access=free }}</ref>

Other approaches are:
* to treat model comparison as a [[Decision theory#Choice under uncertainty|decision problem]], computing the expected value or cost of each model choice;
* to use [[minimum message length]] (MML);
* to use [[minimum description length]] (MDL).
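The integrals in the definition above rarely have closed forms, but they can be approximated crudely by Monte Carlo: draw parameter values from the prior and average the likelihood over the draws. The sketch below is a minimal illustration of that idea only, not one of the Chib-type estimators cited in the lead; the helper name <code>marginal_likelihood_mc</code> is ours, and the data are taken from the worked example later in this article (115 successes in 200 trials), for which the exact evidence under the uniform prior is 1/201.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Monte Carlo estimate of the evidence Pr(D|M) = ∫ Pr(θ|M) Pr(D|θ,M) dθ:
# draw θ from the prior and average the likelihood over the draws.
def marginal_likelihood_mc(likelihood, prior_draws):
    return likelihood(prior_draws).mean()

rng = np.random.default_rng(1)

# Data from the worked example below: 115 successes in n = 200 trials.
n, k = 200, 115
likelihood = lambda q: stats.binom.pmf(k, n, q)

# M2: q ~ Uniform(0, 1).  The exact evidence is 1/201 ≈ 0.00498.
evidence_m2 = marginal_likelihood_mc(likelihood, rng.uniform(0.0, 1.0, size=1_000_000))

# M1 has no free parameter (q = 1/2), so its evidence is just the likelihood there.
evidence_m1 = likelihood(0.5)

print(evidence_m1 / evidence_m2)  # Bayes factor K ≈ 1.2
</syntaxhighlight>

Prior sampling is the simplest such estimator; its variance grows quickly when the likelihood is much more concentrated than the prior, which is what motivates the MCMC-based methods cited in the lead.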
== Interpretation ==

A value of ''K'' > 1 means that ''M''<sub>1</sub> is more strongly supported by the data under consideration than ''M''<sub>2</sub>. Note that classical [[hypothesis testing]] gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence ''against'' it. The fact that a Bayes factor can produce evidence ''for'', and not just against, a null hypothesis is one of the key advantages of this analysis method.<ref>{{cite journal |last1=Williams |first1=Matt |last2=Bååth |first2=Rasmus |last3=Philipp |first3=Michael |title=Using Bayes Factors to Test Hypotheses in Developmental Research |journal=Research in Human Development |date=2017 |volume=14 |issue=4 |pages=321–337 |doi=10.1080/15427609.2017.1370964 |url=https://osf.io/88c5k/ }}</ref>

[[Harold Jeffreys]] gave a scale ('''Jeffreys' scale''') for interpretation of <math>K</math>:<ref>{{cite book |url=https://books.google.com/books?id=vh9Act9rtzQC&pg=PA432 |first=Harold |last=Jeffreys |title=The Theory of Probability |edition=3rd |location=Oxford, England |orig-year=1961 |page=432 |year=1998 |isbn=9780191589676 }}</ref>

{{alternating rows table|class=wikitable style="text-align: center; margin-left: auto; margin-right: auto; border: none;"}}
! ''K'' !! dHart !! bits !! Strength of evidence
|-
| '''< 10<sup>0</sup>''' || < 0 || < 0 || Negative (supports ''M''<sub>2</sub>)
|-
| '''10<sup>0</sup> to 10<sup>1/2</sup>''' || 0 to 5 || 0 to 1.6 || Barely worth mentioning
|-
| '''10<sup>1/2</sup> to 10<sup>1</sup>''' || 5 to 10 || 1.6 to 3.3 || Substantial
|-
| '''10<sup>1</sup> to 10<sup>3/2</sup>''' || 10 to 15 || 3.3 to 5.0 || Strong
|-
| '''10<sup>3/2</sup> to 10<sup>2</sup>''' || 15 to 20 || 5.0 to 6.6 || Very strong
|-
| '''> 10<sup>2</sup>''' || > 20 || > 6.6 || Decisive
|-
|}

The second column gives the corresponding weights of evidence in [[hartley (unit)|decihartley]]s (also known as [[deciban]]s); [[bit]]s are added in the third column for clarity. The table continues in the other direction, so that, for example, <math>K \leq 10^{-2}</math> is decisive evidence for <math>M_2</math>.

An alternative table, widely cited, is provided by Kass and Raftery (1995):<ref name=kassraftery1995/>

{{alternating rows table|class=wikitable style="text-align: center; margin-left: auto; margin-right: auto; border: none;"}}
! log<sub>10</sub> ''K'' !! ''K'' !! Strength of evidence
|-
| '''0 to 1/2''' || 1 to 3.2 || Not worth more than a bare mention
|-
| '''1/2 to 1''' || 3.2 to 10 || Substantial
|-
| '''1 to 2''' || 10 to 100 || Strong
|-
| '''> 2''' || > 100 || Decisive
|-
|}

According to [[I. J. Good]], the [[just-noticeable difference]] of humans in their everyday life, when it comes to a change in [[Bayesian probability|degree of belief]] in a hypothesis, is about a factor of 1.3, or 1 deciban, or 1/3 of a bit, or a shift in odds from 1:1 to 5:4.<ref>{{cite journal |last=Good |first=I.J. |author-link=I. J. Good |year=1979 |title=Studies in the History of Probability and Statistics. XXXVII A. M. Turing's statistical work in World War II |journal=[[Biometrika]] |volume=66 |issue=2 |pages=393–396 |doi=10.1093/biomet/66.2.393 |mr=548210}}</ref>
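The two evidence columns of Jeffreys' table are simply logarithms of <math>K</math>: decihartleys are <math>10\log_{10}K</math> and bits are <math>\log_2 K</math>. A small sketch of this mapping (the function name and lower-case labels are ours, following the table above):

<syntaxhighlight lang="python">
import math

# Evidence measures for a Bayes factor K, matching Jeffreys' table:
# decihartleys (decibans) are 10·log10(K); bits are log2(K).
def jeffreys_scale(K):
    dhart = 10 * math.log10(K)
    bits = math.log2(K)
    if dhart < 0:
        label = "negative (supports M2)"
    elif dhart < 5:
        label = "barely worth mentioning"
    elif dhart < 10:
        label = "substantial"
    elif dhart < 15:
        label = "strong"
    elif dhart < 20:
        label = "very strong"
    else:
        label = "decisive"
    return dhart, bits, label

print(jeffreys_scale(1.2))  # ≈ (0.79 dHart, 0.26 bits, 'barely worth mentioning')
</syntaxhighlight>

For the value ''K'' = 1.2 from the example in the next section this returns roughly 0.79 decihartleys, or 0.26 bits, in the "barely worth mentioning" band.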
==Example==

Suppose we have a [[random variable]] that produces either a success or a failure. We want to compare a model ''M''<sub>1</sub> where the probability of success is ''q'' = {{frac|1|2}}, and another model ''M''<sub>2</sub> where ''q'' is unknown and we take a [[prior distribution]] for ''q'' that is [[uniform distribution (continuous)|uniform]] on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the [[binomial distribution]]:

:<math>{200 \choose 115}q^{115}(1-q)^{85}.</math>

Thus we have for ''M''<sub>1</sub>

:<math>P(X=115 \mid M_1)={200 \choose 115}\left({1 \over 2}\right)^{200} \approx 0.006,</math>

whereas for ''M''<sub>2</sub> we have

:<math>P(X=115 \mid M_2) = \int_0^1 {200 \choose 115}q^{115}(1-q)^{85}\,dq = {1 \over 201} \approx 0.005.</math>

The ratio is then 1.2, which is "barely worth mentioning" even if it points very slightly towards ''M''<sub>1</sub>.

A [[frequentist]] [[Statistical hypothesis testing|hypothesis test]] of ''M''<sub>1</sub> (here considered as a [[null hypothesis]]) would have produced a very different result. Such a test says that ''M''<sub>1</sub> should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if ''q'' = {{frac|1|2}} is 0.02, while the two-tailed probability of a result as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield [[Statistical significance|significant results]] at the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example, one that reflects the fact that you expect the numbers of successes and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test.

A classical [[likelihood-ratio test]] would have found the [[maximum likelihood]] estimate for ''q'', namely <math>\hat q = \tfrac{115}{200} = 0.575</math>, whence

:<math>\textstyle P(X=115 \mid M_2) = {200 \choose 115}\hat q^{115}(1-\hat q)^{85} \approx 0.06</math>

(rather than averaging over all possible ''q''). That gives a likelihood ratio of 0.1 and points towards ''M''<sub>2</sub>.

''M''<sub>2</sub> is a more complex model than ''M''<sub>1</sub> because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why [[Bayesian inference]] has been put forward as a theoretical justification for and generalisation of [[Occam's razor]], reducing [[Type I error]]s.<ref>[http://www.stat.duke.edu/~berger/papers/ockham.html Sharpening Ockham's Razor On a Bayesian Strop]</ref>

On the other hand, the modern method of [[relative likelihood]] takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model ''M''<sub>1</sub> has 0 parameters, and so its [[Akaike information criterion]] (AIC) value is <math>2\cdot 0 - 2\cdot \ln(0.005956) \approx 10.2467</math>. Model ''M''<sub>2</sub> has 1 parameter, and so its AIC value is <math>2\cdot 1 - 2\cdot\ln(0.056991) \approx 7.7297</math>. Hence ''M''<sub>1</sub> is about <math>\exp\left(\frac{7.7297 - 10.2467}{2}\right) \approx 0.284</math> times as probable as ''M''<sub>2</sub> to minimize the information loss. Thus ''M''<sub>2</sub> is slightly preferred, but ''M''<sub>1</sub> cannot be excluded.
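Every number quoted in this example can be checked directly. The following sketch (variable names are ours) recomputes the two marginal likelihoods, the one- and two-tailed frequentist probabilities, the maximized likelihood ratio, and the AIC comparison, assuming SciPy is available:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

n, k = 200, 115

# Evidence under each model: M1 fixes q = 1/2; under M2 the uniform prior
# integrates the binomial pmf to exactly 1/(n + 1).
p_m1 = stats.binom.pmf(k, n, 0.5)            # ≈ 0.005956
p_m2 = 1 / (n + 1)                           # = 1/201 ≈ 0.004975
print("K =", p_m1 / p_m2)                    # ≈ 1.2

# Frequentist tail probabilities quoted above.
one_tailed = stats.binom.sf(k - 1, n, 0.5)   # P(X >= 115) ≈ 0.02
print(one_tailed, 2 * one_tailed)            # two-tailed ≈ 0.04

# Classical likelihood ratio at the MLE q̂ = 115/200 = 0.575.
p_mle = stats.binom.pmf(k, n, k / n)         # ≈ 0.056991
print("likelihood ratio =", p_m1 / p_mle)    # ≈ 0.1

# AIC comparison from the relative-likelihood paragraph.
aic_m1 = 2 * 0 - 2 * np.log(p_m1)            # ≈ 10.2467
aic_m2 = 2 * 1 - 2 * np.log(p_mle)           # ≈ 7.7297
print(np.exp((aic_m2 - aic_m1) / 2))         # ≈ 0.284
</syntaxhighlight>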
== See also ==
{{Portal|Mathematics}}
* [[Akaike information criterion]]
* [[Approximate Bayesian computation]]
* [[Bayesian information criterion]]
* [[Deviance information criterion]]
* [[Lindley's paradox]]
* [[Minimum message length]]
* [[Model selection]]
* [[E-values|E-value]]

; Statistical ratios
* [[Odds ratio]]
* [[Relative risk]]

== References ==
{{Reflist}}

== Further reading ==
* {{cite book |last1=Bernardo |first1=J. |last2=Smith |first2=A. F. M. |title=Bayesian Theory |publisher=John Wiley |year=1994 |isbn=0-471-92416-4 }}
* {{cite book |last1=Denison |first1=D. G. T. |last2=Holmes |first2=C. C. |last3=Mallick |first3=B. K. |last4=Smith |first4=A. F. M. |title=Bayesian Methods for Nonlinear Classification and Regression |publisher=John Wiley |year=2002 |isbn=0-471-49036-9 }}
* Dienes, Z. (2019). How do I know what my theory predicts? ''Advances in Methods and Practices in Psychological Science''. {{doi|10.1177/2515245919876960}}
* {{cite book |first1=Richard O. |last1=Duda |first2=Peter E. |last2=Hart |first3=David G. |last3=Stork |year=2000 |title=Pattern classification |edition=2nd |chapter=Section 9.6.5 |pages=487–489 |publisher=Wiley |isbn=0-471-05669-3 }}
* {{cite book |last1=Gelman |first1=A. |last2=Carlin |first2=J. |last3=Stern |first3=H. |last4=Rubin |first4=D. |title=Bayesian Data Analysis |location=London |publisher=[[Chapman & Hall]] |year=1995 |isbn=0-412-03991-5 }}
* [[Edwin Thompson Jaynes|Jaynes, E. T.]] (1994), ''[http://omega.math.albany.edu:8008/JaynesBook.html Probability Theory: the logic of science]'', chapter 24.
* {{cite book |last1=Kadane |first1=Joseph B. |last2=Dickey |first2=James M. |chapter=Bayesian Decision Theory and the Simplification of Models |pages=245–268 |title=Evaluation of Econometric Models |editor-first=Jan |editor-last=Kmenta |editor2-first=James B. |editor2-last=Ramsey |location=New York |publisher=Academic Press |year=1980 |isbn=0-12-416550-8 }}
* {{cite book |last=Lee |first=P. M. |title=Bayesian Statistics: an introduction |publisher=Wiley |year=2012 |isbn=9781118332573 }}
* {{cite journal |title=Efficiency Testing of Prediction Markets: Martingale Approach, Likelihood Ratio and Bayes Factor Analysis |year=2021 |first1=Mark |last1=Richard |first2=Jan |last2=Vecer |journal=Risks |volume=9 |issue=2 |page=31 |doi=10.3390/risks9020031 |doi-access=free |hdl=10419/258120 |hdl-access=free }}
* {{cite book |last=Winkler |first=Robert |title=Introduction to Bayesian Inference and Decision |edition=2nd |year=2003 |publisher=Probabilistic |isbn=0-9647938-4-9 }}

== External links ==
* [http://bayesfactorpcl.r-forge.r-project.org/ BayesFactor] – an R package for computing Bayes factors in common research designs
* [http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/inference/Bayes.htm Bayes factor calculator] – online calculator for informed Bayes factors
* [http://pcl.missouri.edu/bayesfactor Bayes Factor Calculators] {{Webarchive|url=https://web.archive.org/web/20150507195546/http://pcl.missouri.edu/bayesfactor |date=2015-05-07 }} – web-based version of much of the BayesFactor package

{{DEFAULTSORT:Bayes Factor}}
[[Category:Bayesian inference|Factor]]
[[Category:Model selection]]
[[Category:Statistical ratios]]