===Bayesian inference===
{{Main|Bayesian inference}}
[[File:Beta(1,1) Uniform distribution - J. Rodal.png|thumb|<math>Beta(1,1)</math>: The [[uniform distribution (continuous)|uniform distribution]] probability density was proposed by [[Thomas Bayes]] to represent ignorance of prior probabilities in [[Bayesian inference]].]]
The use of Beta distributions in [[Bayesian inference]] is due to the fact that they provide a family of [[conjugate prior distribution|conjugate prior probability distribution]]s for [[binomial distribution|binomial]] (including [[Bernoulli distribution|Bernoulli]]) and [[geometric distribution]]s. The domain of the beta distribution can be viewed as a probability, and in fact the beta distribution is often used to describe the distribution of a probability value ''p'':<ref name=MacKay/>
:<math>P(p;\alpha,\beta) = \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\Beta(\alpha,\beta)}.</math>
Examples of beta distributions used as prior probabilities to represent ignorance of prior parameter values in Bayesian inference are Beta(1,1), Beta(0,0) and Beta(1/2,1/2).

====Rule of succession====
{{Main|Rule of succession}}
A classic application of the beta distribution is the [[rule of succession]], introduced in the 18th century by [[Pierre-Simon Laplace]]<ref name=Laplace>{{cite book|last=Laplace|first=Pierre Simon, marquis de|title=A philosophical essay on probabilities|year=1902|publisher=New York : J. Wiley; London : Chapman & Hall|isbn=978-1-60206-328-0|url=https://archive.org/details/philosophicaless00lapliala}}</ref> in the course of treating the [[sunrise problem]]. It states that, given ''s'' successes in ''n'' [[conditional independence|conditionally independent]] [[Bernoulli trial]]s with probability ''p'', the estimate of the expected value in the next trial is <math>\frac{s+1}{n+2}</math>. This estimate is the expected value of the posterior distribution over ''p'', namely Beta(''s''+1, ''n''−''s''+1), which is given by [[Bayes' rule]] if one assumes a uniform prior probability over ''p'' (i.e., Beta(1, 1)) and then observes that ''p'' generated ''s'' successes in ''n'' trials. Laplace's rule of succession has been criticized by prominent scientists. R. T. Cox described Laplace's application of the rule of succession to the [[sunrise problem]] (<ref name=CoxRT>{{cite book|last=Cox|first=Richard T.|title=Algebra of Probable Inference|year=1961|publisher=The Johns Hopkins University Press|isbn=978-0801869822}}</ref> p. 89) as "a travesty of the proper use of the principle". Keynes remarks (<ref name=KeynesTreatise>{{cite book|last=Keynes|first=John Maynard|title=A Treatise on Probability: The Connection Between Philosophy and the History of Science|orig-year=1921|year=2010|publisher=Wildside Press|isbn=978-1434406965}}</ref> Ch.XXX, p. 382) "indeed this is so foolish a theorem that to entertain it is discreditable". Karl Pearson<ref name=PearsonRuleSuccession>{{cite journal|last=Pearson|first=Karl|title=On the Influence of Past Experience on Future Expectation|journal=Philosophical Magazine|year=1907|volume=6|issue=13|pages=365–378}}</ref> showed that the probability that the next (''n'' + 1) trials will all be successes, after ''n'' successes in ''n'' trials, is only 50%, which has been considered too low by scientists like Jeffreys and unacceptable as a representation of the scientific process of experimentation to test a proposed scientific law. As pointed out by Jeffreys (<ref name=Jeffreys/> p. 128) (crediting [[C. D. Broad]]<ref name=BroadMind>{{cite journal|last=Broad|first=C.
D.|title=On the relation between induction and probability|journal=MIND, A Quarterly Review of Psychology and Philosophy|date=October 1918|volume=27 (New Series)|issue=108|pages=389–404|jstor=2249035|doi=10.1093/mind/XXVII.4.389}}</ref> ) Laplace's rule of succession establishes a high probability of success ((n+1)/(n+2)) in the next trial, but only a moderate probability (50%) that a further sample (n+1) comparable in size will be equally successful. As pointed out by Perks,<ref name=Perks>{{cite journal|last=Perks|first=Wilfred|title=Some observations on inverse probability including a new indifference rule|journal=Journal of the Institute of Actuaries|date=January 1947|volume=73|issue=2|pages=285–334|url=http://www.actuaries.org.uk/research-and-resources/documents/some-observations-inverse-probability-including-new-indifference-ru|doi=10.1017/S0020268100012270|access-date=2012-09-19|archive-date=2014-01-12|archive-url=https://web.archive.org/web/20140112111032/http://www.actuaries.org.uk/research-and-resources/documents/some-observations-inverse-probability-including-new-indifference-ru|url-status=dead}}</ref> "The rule of succession itself is hard to accept. It assigns a probability to the next trial which implies the assumption that the actual run observed is an average run and that we are always at the end of an average run. It would, one would think, be more reasonable to assume that we were in the middle of an average run. Clearly a higher value for both probabilities is necessary if they are to accord with reasonable belief." These problems with Laplace's rule of succession motivated Haldane, Perks, Jeffreys and others to search for other forms of prior probability (see the next {{section link||Bayesian inference}}). According to Jaynes,<ref name=Jaynes/> the main problem with the rule of succession is that it is not valid when s=0 or s=n (see [[rule of succession]], for an analysis of its validity). ====Bayes–Laplace prior probability (Beta(1,1))==== The beta distribution achieves maximum differential entropy for Beta(1,1): the [[Uniform density|uniform]] probability density, for which all values in the domain of the distribution have equal density. This uniform distribution Beta(1,1) was suggested ("with a great deal of doubt") by [[Thomas Bayes]]<ref name="ThomasBayes"/> as the prior probability distribution to express ignorance about the correct prior distribution. This prior distribution was adopted (apparently, from his writings, with little sign of doubt<ref name=Laplace/>) by [[Pierre-Simon Laplace]], and hence it was also known as the "Bayes–Laplace rule" or the "Laplace rule" of "[[inverse probability]]" in publications of the first half of the 20th century. In the later part of the 19th century and early part of the 20th century, scientists realized that the assumption of uniform "equal" probability density depended on the actual functions (for example whether a linear or a logarithmic scale was most appropriate) and parametrizations used. In particular, the behavior near the ends of distributions with finite support (for example near ''x'' = 0, for a distribution with initial support at ''x'' = 0) required particular attention. Keynes (<ref name=KeynesTreatise/> Ch.XXX, p. 
381) criticized the use of Bayes's uniform prior probability (Beta(1,1)) that all values between zero and one are equiprobable, as follows: "Thus experience, if it shows anything, shows that there is a very marked clustering of statistical ratios in the neighborhoods of zero and unity, of those for positive theories and for correlations between positive qualities in the neighborhood of zero, and of those for negative theories and for correlations between negative qualities in the neighborhood of unity."

===={{Anchor|Haldane prior}}Haldane's prior probability (Beta(0,0))====
[[File:Beta distribution for alpha and beta approaching zero - J. Rodal.png|thumb|<math>Beta(0,0)</math>: The Haldane prior probability expressing total ignorance about prior information, where we are not even sure whether it is physically possible for an experiment to yield either a success or a failure. As α, β → 0, the beta distribution approaches a two-point [[Bernoulli distribution]] with all probability density concentrated at each end, at 0 and 1, and nothing in between: a coin toss, with one face of the coin at 0 and the other face at 1.]]
The Beta(0,0) distribution was proposed by [[J.B.S. Haldane]],<ref>{{cite journal|last=Haldane |first=J. B. S.| authorlink1=J. B. S. Haldane |title=A note on inverse probability|journal=[[Mathematical Proceedings of the Cambridge Philosophical Society]]|year=1932|volume=28|issue=1|pages=55–61|doi=10.1017/s0305004100010495|bibcode=1932PCPS...28...55H|s2cid=122773707 }}</ref> who suggested that the prior probability representing complete uncertainty should be proportional to ''p''<sup>−1</sup>(1−''p'')<sup>−1</sup>. The function ''p''<sup>−1</sup>(1−''p'')<sup>−1</sup> can be viewed as the limit of the numerator of the beta distribution as both shape parameters approach zero: α, β → 0. The Beta function (in the denominator of the beta distribution) approaches infinity as both parameters approach zero, α, β → 0. Therefore, ''p''<sup>−1</sup>(1−''p'')<sup>−1</sup> divided by the Beta function approaches a two-point [[Bernoulli distribution]] with equal probability 1/2 at each end, at 0 and 1, and nothing in between, as α, β → 0: a coin toss, with one face of the coin at 0 and the other face at 1. The Haldane prior probability distribution Beta(0,0) is an "[[improper prior]]" because its integration (from 0 to 1) fails to strictly converge to 1 due to the singularities at each end. However, this is not an issue for computing posterior probabilities unless the sample size is very small. Furthermore, Zellner<ref name=Zellner>{{cite book|last=Zellner |first=Arnold|title=An Introduction to Bayesian Inference in Econometrics|year=1971|publisher=Wiley-Interscience|isbn=978-0471169376}}</ref> points out that on the [[log-odds]] scale (the [[logit]] transformation <math>\log(p/(1-p))</math>), the Haldane prior is the uniformly flat prior. The fact that a uniform prior probability on the [[logit]]-transformed variable ln(''p''/(1 − ''p'')) (with domain (−∞, ∞)) is equivalent to the Haldane prior on the domain [0, 1] was pointed out by [[Harold Jeffreys]] in the first edition (1939) of his book ''Theory of Probability'' (<ref name=Jeffreys/> p. 123). Jeffreys writes "Certainly if we take the Bayes–Laplace rule right up to the extremes we are led to results that do not correspond to anybody's way of thinking. The (Haldane) rule d''x''/(''x''(1 − ''x'')) goes too far the other way.
It would lead to the conclusion that if a sample is of one type with respect to some property there is a probability 1 that the whole population is of that type." The fact that "uniform" depends on the parametrization led Jeffreys to seek a form of prior that would be invariant under different parametrizations.

====Jeffreys' prior probability (Beta(1/2,1/2) for a Bernoulli or for a binomial distribution)====
{{Main|Jeffreys prior}}
[[File:Jeffreys prior probability for the beta distribution - J. Rodal.png|thumb|[[Jeffreys prior]] probability for the beta distribution: the square root of the determinant of [[Fisher's information]] matrix: <math>\scriptstyle\sqrt{\det(\mathcal{I}(\alpha, \beta))} = \sqrt{\psi_1(\alpha)\psi_1(\beta)-( \psi_1(\alpha)+\psi_1(\beta) )\psi_1(\alpha + \beta)}</math> is a function of the [[trigamma function]] ψ<sub>1</sub> of shape parameters α, β]]
[[File:Beta distribution for 3 different prior probability functions - J. Rodal.png|thumb|Posterior Beta densities with samples having success = ''s'', failure = ''f'' of ''s''/(''s'' + ''f'') = 1/2, and ''s'' + ''f'' ∈ {3,10,50}, based on three different prior probability functions: Haldane (Beta(0,0)), Jeffreys (Beta(1/2,1/2)) and Bayes (Beta(1,1)). The image shows that there is little difference between the priors for the posterior with sample size of 50 (with more pronounced peak near ''p'' = 1/2). Significant differences appear for very small sample sizes (the flatter distribution for sample size of 3).]]
[[File:Beta distribution for 3 different prior probability functions, skewed case - J. Rodal.png|thumb|Posterior Beta densities with samples having success = ''s'', failure = ''f'' of ''s''/(''s'' + ''f'') = 1/4, and ''s'' + ''f'' ∈ {3,10,50}, based on three different prior probability functions: Haldane (Beta(0,0)), Jeffreys (Beta(1/2,1/2)) and Bayes (Beta(1,1)). The image shows that there is little difference between the priors for the posterior with sample size of 50 (with more pronounced peak near ''p'' = 1/4). Significant differences appear for very small sample sizes (the very skewed distribution for the degenerate case of sample size = 3; in this degenerate and unlikely case the Haldane prior results in a reverse "J" shape with mode at ''p'' = 0 instead of ''p'' = 1/4). If there is sufficient [[Sample (statistics)|sampling data]], the three priors of Bayes (Beta(1,1)), Jeffreys (Beta(1/2,1/2)) and Haldane (Beta(0,0)) should yield similar [[posterior probability|''posterior'' probability]] densities.]]
[[File:Beta distribution for 3 different prior probability functions, skewed case sample size = (4,12,40) - J. Rodal.png|thumb|Posterior Beta densities with samples having success = ''s'', failure = ''f'' of ''s''/(''s'' + ''f'') = 1/4, and ''s'' + ''f'' ∈ {4,12,40}, based on three different prior probability functions: Haldane (Beta(0,0)), Jeffreys (Beta(1/2,1/2)) and Bayes (Beta(1,1)). The image shows that there is little difference between the priors for the posterior with sample size of 40 (with more pronounced peak near ''p'' = 1/4).
Significant differences appear for very small sample sizes]] [[Harold Jeffreys]]<ref name=Jeffreys>{{cite book|last=Jeffreys|first=Harold|title=Theory of Probability|year=1998|publisher=Oxford University Press, 3rd edition|isbn=978-0198503682}}</ref><ref name=JeffreysPRIOR>{{cite journal|last=Jeffreys|first=Harold|title=An Invariant Form for the Prior Probability in Estimation Problems|journal=Proceedings of the Royal Society|date=September 1946|volume=186|series=A 24|issue=1007|pages=453–461|doi=10.1098/rspa.1946.0056|pmid=20998741|bibcode=1946RSPSA.186..453J|doi-access=free}}</ref> proposed to use an [[uninformative prior]] probability measure that should be [[Parametrization invariance|invariant under reparameterization]]: proportional to the square root of the [[determinant]] of [[Fisher's information]] matrix. For the [[Bernoulli distribution]], this can be shown as follows: for a coin that is "heads" with probability ''p'' ∈ [0, 1] and is "tails" with probability 1 − ''p'', for a given (H,T) ∈ {(0,1), (1,0)} the probability is ''p<sup>H</sup>''(1 − ''p'')<sup>''T''</sup>. Since ''T'' = 1 − ''H'', the [[Bernoulli distribution]] is ''p<sup>H</sup>''(1 − ''p'')<sup>1 − ''H''</sup>. Considering ''p'' as the only parameter, it follows that the log likelihood for the Bernoulli distribution is :<math>\ln \mathcal{L} (p\mid H) = H \ln(p)+ (1-H) \ln(1-p).</math> The Fisher information matrix has only one component (it is a scalar, because there is only one parameter: ''p''), therefore: :<math>\begin{align} \sqrt{\mathcal{I}(p)} &= \sqrt{\operatorname{E}\!\left[ \left( \frac{d}{dp} \ln(\mathcal{L} (p\mid H)) \right)^2\right]} \\[6pt] &= \sqrt{\operatorname{E}\!\left[ \left( \frac{H}{p} - \frac{1-H}{1-p}\right)^2 \right]} \\[6pt] &= \sqrt{p^1 (1-p)^0 \left( \frac{1}{p} - \frac{0}{1-p}\right)^2 + p^0 (1-p)^1 \left(\frac{0}{p} - \frac{1}{1-p}\right)^2} \\ &= \frac{1}{\sqrt{p(1-p)}}. \end{align}</math> Similarly, for the [[Binomial distribution]] with ''n'' [[Bernoulli trials]], it can be shown that :<math>\sqrt{\mathcal{I}(p)}= \frac{\sqrt{n}}{\sqrt{p(1-p)}}.</math> Thus, for the [[Bernoulli distribution|Bernoulli]], and [[Binomial distribution]]s, [[Jeffreys prior]] is proportional to <math>\scriptstyle \frac{1}{\sqrt{p(1-p)}}</math>, which happens to be proportional to a beta distribution with domain variable ''x'' = ''p'', and shape parameters α = β = 1/2, the [[arcsine distribution]]: :<math>Beta(\tfrac{1}{2}, \tfrac{1}{2}) = \frac{1}{\pi \sqrt{p(1-p)}}.</math> It will be shown in the next section that the normalizing constant for Jeffreys prior is immaterial to the final result because the normalizing constant cancels out in Bayes' theorem for the posterior probability. Hence Beta(1/2,1/2) is used as the Jeffreys prior for both Bernoulli and binomial distributions. As shown in the next section, when using this expression as a prior probability times the likelihood in [[Bayes' theorem]], the posterior probability turns out to be a beta distribution. It is important to realize, however, that Jeffreys prior is proportional to <math>\scriptstyle \frac{1}{\sqrt{p(1-p)}}</math> for the Bernoulli and binomial distribution, but not for the beta distribution. 
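The Fisher-information calculation above is easy to check numerically. A minimal sketch follows, assuming Python with NumPy and SciPy; it is an illustration added here, not material from the cited sources. It verifies that the Fisher information of a single Bernoulli trial equals 1/(''p''(1 − ''p'')) and that the Beta(1/2,1/2) density is exactly <math>\sqrt{\mathcal{I}(p)}/\pi</math>:
<syntaxhighlight lang="python">
# Numerical check (assumes NumPy and SciPy): for a single Bernoulli trial the Fisher
# information E[(d/dp ln L)^2] equals 1/(p(1-p)), so the Jeffreys prior sqrt(I(p)) is
# proportional to the Beta(1/2,1/2) (arcsine) density 1/(pi*sqrt(p(1-p))).
import numpy as np
from scipy.stats import beta

for p in (0.1, 0.3, 0.5, 0.8):
    score_heads = 1.0 / p              # d/dp ln L when H = 1
    score_tails = -1.0 / (1.0 - p)     # d/dp ln L when H = 0
    fisher_info = p * score_heads**2 + (1.0 - p) * score_tails**2
    assert np.isclose(fisher_info, 1.0 / (p * (1.0 - p)))
    assert np.isclose(beta.pdf(p, 0.5, 0.5), np.sqrt(fisher_info) / np.pi)
print("sqrt(Fisher information) matches the Beta(1/2,1/2) density up to the factor 1/pi")
</syntaxhighlight>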
Jeffreys prior for the beta distribution is given by the determinant of Fisher's information for the beta distribution, which, as shown in the {{section link||Fisher information matrix}} is a function of the [[trigamma function]] ψ<sub>1</sub> of shape parameters α and β as follows: :<math> \begin{align} \sqrt{\det(\mathcal{I}(\alpha, \beta))} &= \sqrt{\psi_1(\alpha)\psi_1(\beta)-(\psi_1(\alpha)+\psi_1(\beta))\psi_1(\alpha + \beta)} \\ \lim_{\alpha\to 0} \sqrt{\det(\mathcal{I}(\alpha, \beta))} &=\lim_{\beta \to 0} \sqrt{\det(\mathcal{I}(\alpha, \beta))} = \infty\\ \lim_{\alpha\to \infty} \sqrt{\det(\mathcal{I}(\alpha, \beta))} &=\lim_{\beta \to \infty} \sqrt{\det(\mathcal{I}(\alpha, \beta))} = 0 \end{align}</math> As previously discussed, Jeffreys prior for the Bernoulli and binomial distributions is proportional to the [[arcsine distribution]] Beta(1/2,1/2), a one-dimensional ''curve'' that looks like a basin as a function of the parameter ''p'' of the Bernoulli and binomial distributions. The walls of the basin are formed by ''p'' approaching the singularities at the ends ''p'' → 0 and ''p'' → 1, where Beta(1/2,1/2) approaches infinity. Jeffreys prior for the beta distribution is a ''2-dimensional surface'' (embedded in a three-dimensional space) that looks like a basin with only two of its walls meeting at the corner α = β = 0 (and missing the other two walls) as a function of the shape parameters α and β of the beta distribution. The two adjoining walls of this 2-dimensional surface are formed by the shape parameters α and β approaching the singularities (of the trigamma function) at α, β → 0. It has no walls for α, β → ∞ because in this case the determinant of Fisher's information matrix for the beta distribution approaches zero. It will be shown in the next section that Jeffreys prior probability results in posterior probabilities (when multiplied by the binomial likelihood function) that are intermediate between the posterior probability results of the Haldane and Bayes prior probabilities. Jeffreys prior may be difficult to obtain analytically, and for some cases it just doesn't exist (even for simple distribution functions like the asymmetric [[triangular distribution]]). Berger, Bernardo and Sun, in a 2009 paper<ref name="BergerBernardoSun">{{cite journal|last=Berger|first=James |author2=Bernardo, Jose |author3=Sun, Dongchu|title=The formal definition of reference priors|journal=The Annals of Statistics|year=2009|volume=37|issue=2|pages=905–938|doi=10.1214/07-AOS587|url= http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdfview_1&handle=euclid.aos/1236693154|arxiv=0904.0156|bibcode=2009arXiv0904.0156B |s2cid=3221355 }}</ref> defined a reference prior probability distribution that (unlike Jeffreys prior) exists for the asymmetric [[triangular distribution]]. They cannot obtain a closed-form expression for their reference prior, but numerical calculations show it to be nearly perfectly fitted by the (proper) prior :<math> \operatorname{Beta}(\tfrac{1}{2}, \tfrac{1}{2}) \sim\frac{1}{\sqrt{\theta(1-\theta)}}</math> where θ is the vertex variable for the asymmetric triangular distribution with support [0, 1] (corresponding to the following parameter values in Wikipedia's article on the [[triangular distribution]]: vertex ''c'' = ''θ'', left end ''a'' = 0,and right end ''b'' = 1). Berger et al. also give a heuristic argument that Beta(1/2,1/2) could indeed be the exact Berger–Bernardo–Sun reference prior for the asymmetric triangular distribution. 
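The trigamma expression above can be evaluated directly. The short sketch below is an illustration added here (it assumes SciPy; the helper name <code>jeffreys_beta</code> is chosen for this example only) and reproduces the limiting behaviour just described: the surface diverges as either shape parameter tends to zero and vanishes as both grow large.
<syntaxhighlight lang="python">
# Evaluate the unnormalized Jeffreys prior for the beta distribution,
# sqrt(det I(alpha, beta)), using the trigamma function psi_1 = polygamma(1, .).
import numpy as np
from scipy.special import polygamma

def jeffreys_beta(a, b):
    psi1 = lambda x: polygamma(1, x)
    return np.sqrt(psi1(a) * psi1(b) - (psi1(a) + psi1(b)) * psi1(a + b))

print(jeffreys_beta(0.001, 1.0))    # very large: diverges as alpha -> 0
print(jeffreys_beta(1.0, 1.0))      # finite value at alpha = beta = 1
print(jeffreys_beta(100.0, 100.0))  # close to 0: vanishes as alpha, beta -> infinity
</syntaxhighlight>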
Therefore, Beta(1/2,1/2) not only is Jeffreys prior for the Bernoulli and binomial distributions, but also seems to be the Berger–Bernardo–Sun reference prior for the asymmetric triangular distribution (for which the Jeffreys prior does not exist), a distribution used in project management and [[PERT]] analysis to describe the cost and duration of project tasks. Clarke and Barron<ref>{{cite journal|last=Clarke|first=Bertrand S.|author2=Andrew R. Barron|title=Jeffreys' prior is asymptotically least favorable under entropy risk|journal=Journal of Statistical Planning and Inference|year=1994|volume=41|pages=37–60|url=http://www.stat.yale.edu/~arb4/publications_files/jeffery's%20prior.pdf|doi=10.1016/0378-3758(94)90153-8}}</ref> prove that, among continuous positive priors, Jeffreys prior (when it exists) asymptotically maximizes Shannon's [[mutual information]] between a sample of size n and the parameter, and therefore ''Jeffreys prior is the most uninformative prior'' (measuring information as Shannon information). The proof rests on an examination of the [[Kullback–Leibler divergence]] between probability density functions for [[iid]] random variables. ====Effect of different prior probability choices on the posterior beta distribution==== If samples are drawn from the population of a random variable ''X'' that result in ''s'' successes and ''f'' failures in ''n'' [[Bernoulli trial]]s ''n'' = ''s'' + ''f'', then the [[likelihood function]] for parameters ''s'' and ''f'' given ''x'' = ''p'' (the notation ''x'' = ''p'' in the expressions below will emphasize that the domain ''x'' stands for the value of the parameter ''p'' in the binomial distribution), is the following [[binomial distribution]]: :<math>\mathcal{L}(s,f\mid x=p) = {s+f \choose s} x^s(1-x)^f = {n \choose s} x^s(1-x)^{n - s}. 
</math> If beliefs about [[prior probability]] information are reasonably well approximated by a beta distribution with parameters ''α'' Prior and ''β'' Prior, then: :<math>{\operatorname{PriorProbability}}(x=p;\alpha \operatorname{Prior},\beta \operatorname{Prior}) = \frac{ x^{\alpha \operatorname{Prior}-1}(1-x)^{\beta \operatorname{Prior}-1}}{\Beta(\alpha \operatorname{Prior},\beta \operatorname{Prior})}</math> According to [[Bayes' theorem]] for a continuous event space, the [[posterior probability]] density is given by the product of the [[prior probability]] and the likelihood function (given the evidence ''s'' and ''f'' = ''n'' − ''s''), normalized so that the area under the curve equals one, as follows: :<math>\begin{align} & \text{posterior probability density}(x=p\mid s,n-s) \\[6pt] = {} & \frac{\operatorname{prior probability density}(x=p;\alpha \operatorname{prior},\beta \operatorname{prior}) \mathcal{L}(s,f\mid x=p)} {\int_0^1\text{prior probability density}(x=p;\alpha \operatorname{prior},\beta \operatorname{prior}) \mathcal{L}(s,f\mid x=p) \, dx} \\[6pt] = {} & \frac{{{n \choose s} x^{s+\alpha \operatorname{prior}-1}(1-x)^{n-s+\beta \operatorname{prior}-1} / \Beta(\alpha \operatorname{prior},\beta \operatorname{prior})}}{\int_0^1 \left({n \choose s} x^{s+\alpha \operatorname{prior}-1}(1-x)^{n-s+\beta \operatorname{prior}-1} /\Beta(\alpha \operatorname{prior}, \beta \operatorname{prior})\right) \, dx} \\[6pt] = {} & \frac{x^{s+\alpha \operatorname{prior}-1}(1-x)^{n-s+\beta \operatorname{prior}-1}}{\int_0^1 \left(x^{s+\alpha \operatorname{prior}-1}(1-x)^{n-s+\beta \operatorname{prior}-1}\right) \, dx} \\[6pt] = {} & \frac{x^{s+\alpha \operatorname{prior}-1}(1-x)^{n-s+\beta \operatorname{prior}-1}}{\Beta(s+\alpha \operatorname{prior},n-s+\beta \operatorname{prior})}. \end{align}</math> The [[binomial coefficient]] :<math>{s+f \choose s}={n \choose s}=\frac{(s+f)!}{s! f!}=\frac{n!}{s!(n-s)!}</math> appears both in the numerator and the denominator of the posterior probability, and it does not depend on the integration variable ''x'', hence it cancels out, and it is irrelevant to the final result. Similarly the normalizing factor for the prior probability, the beta function B(αPrior,βPrior) cancels out and it is immaterial to the final result. The same posterior probability result can be obtained if one uses an un-normalized prior :<math>x^{\alpha \operatorname{prior}-1}(1-x)^{\beta \operatorname{prior}-1}</math> because the normalizing factors all cancel out. Several authors (including Jeffreys himself) thus use an un-normalized prior formula since the normalization constant cancels out. The numerator of the posterior probability ends up being just the (un-normalized) product of the prior probability and the likelihood function, and the denominator is its integral from zero to one. The beta function in the denominator, B(''s'' + ''α'' Prior, ''n'' − ''s'' + ''β'' Prior), appears as a normalization constant to ensure that the total posterior probability integrates to unity. The ratio ''s''/''n'' of the number of successes to the total number of trials is a [[sufficient statistic]] in the binomial case, which is relevant for the following results. 
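The cancellation argument above means the conjugate update can be performed without ever computing the binomial coefficient or the prior's normalizing Beta function. The following is a minimal sketch, assuming SciPy; the helper name <code>posterior</code> and the counts 7 successes in 10 trials are arbitrary choices for this illustration, not values from the sources.
<syntaxhighlight lang="python">
# Conjugate update: a Beta(a_prior, b_prior) prior and s successes in n Bernoulli
# trials give the posterior Beta(s + a_prior, n - s + b_prior).
from scipy.stats import beta

def posterior(s, n, a_prior, b_prior):
    return beta(s + a_prior, n - s + b_prior)

post = posterior(s=7, n=10, a_prior=1.0, b_prior=1.0)   # Bayes-Laplace uniform prior
print(post.mean())    # (s + 1)/(n + 2) = 8/12, Laplace's rule of succession
print(post.pdf(0.7))  # posterior density evaluated at x = p = 0.7
</syntaxhighlight>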
For the '''Bayes'''' prior probability (Beta(1,1)), the posterior probability is:
:<math>\operatorname{posterior probability}(p=x\mid s,f) = \frac{x^{s}(1-x)^{n-s}}{\Beta(s+1,n-s+1)}, \text{ with mean }=\frac{s+1}{n+2},\text{ (and mode}=\frac{s}{n}\text{ if } 0 < s < n).</math>
For the '''Jeffreys'''' prior probability (Beta(1/2,1/2)), the posterior probability is:
:<math>\operatorname{posterior probability}(p=x\mid s,f) = {x^{s-\tfrac{1}{2}}(1-x)^{n-s-\frac{1}{2}} \over \Beta(s+\tfrac{1}{2},n-s+\tfrac{1}{2})} ,\text{ with mean} = \frac{s+\tfrac{1}{2}}{n+1},\text{ (and mode}=\frac{s-\tfrac{1}{2}}{n-1}\text{ if } \tfrac{1}{2} < s < n-\tfrac{1}{2}).</math>
and for the '''Haldane''' prior probability (Beta(0,0)), the posterior probability is:
:<math>\operatorname{posterior probability}(p=x\mid s,f) = \frac{x^{s-1}(1-x)^{n-s-1}}{\Beta(s,n-s)}, \text{ with mean} = \frac{s}{n},\text{ (and mode}=\frac{s-1}{n-2}\text{ if } 1 < s < n -1).</math>
From the above expressions it follows that for ''s''/''n'' = 1/2 all three prior probabilities result in the identical location for the posterior probability mean = mode = 1/2. For ''s''/''n'' < 1/2, the means of the posterior distributions under these priors are ordered such that: mean for Bayes prior > mean for Jeffreys prior > mean for Haldane prior. For ''s''/''n'' > 1/2 the order of these inequalities is reversed, such that the Haldane prior probability results in the largest posterior mean. The ''Haldane'' prior probability Beta(0,0) results in a posterior probability density with ''mean'' (the expected value for the probability of success in the "next" trial) identical to the ratio ''s''/''n'' of the number of successes to the total number of trials. Therefore, the Haldane prior results in a posterior probability with expected value in the next trial equal to the maximum likelihood estimate ''s''/''n''. The ''Bayes'' prior probability Beta(1,1) results in a posterior probability density with ''mode'' identical to the ratio ''s''/''n'' (the maximum likelihood estimate). In the case that 100% of the trials have been successful (''s'' = ''n''), the ''Bayes'' prior probability Beta(1,1) results in a posterior expected value equal to the rule of succession (''n'' + 1)/(''n'' + 2), while the Haldane prior Beta(0,0) results in a posterior expected value of 1 (absolute certainty of success in the next trial). Jeffreys prior probability results in a posterior expected value equal to (''n'' + 1/2)/(''n'' + 1). Perks<ref name=Perks/> (p. 303) points out: "This provides a new rule of succession and expresses a 'reasonable' position to take up, namely, that after an unbroken run of n successes we assume a probability for the next trial equivalent to the assumption that we are about half-way through an average run, i.e. that we expect a failure once in (2''n'' + 2) trials. The Bayes–Laplace rule implies that we are about at the end of an average run or that we expect a failure once in (''n'' + 2) trials. The comparison clearly favours the new result (what is now called Jeffreys prior) from the point of view of 'reasonableness'." Conversely, in the case that 100% of the trials have resulted in failure (''s'' = 0), the ''Bayes'' prior probability Beta(1,1) results in a posterior expected value for success in the next trial equal to 1/(''n'' + 2), while the Haldane prior Beta(0,0) results in a posterior expected value of success in the next trial of 0 (absolute certainty of failure in the next trial).
Jeffreys prior probability results in a posterior expected value for success in the next trial equal to (1/2)/(''n'' + 1), which Perks<ref name=Perks/> (p. 303) points out: "is a much more reasonably remote result than the Bayes–Laplace result 1/(''n'' + 2)". Jaynes<ref name=Jaynes/> questions (for the uniform prior Beta(1,1)) the use of these formulas for the cases ''s'' = 0 or ''s'' = ''n'' because the integrals do not converge (Beta(1,1) is an improper prior for ''s'' = 0 or ''s'' = ''n''). In practice, the conditions 0<s<n necessary for a mode to exist between both ends for the Bayes prior are usually met, and therefore the Bayes prior (as long as 0 < ''s'' < ''n'') results in a posterior mode located between both ends of the domain. As remarked in the section on the rule of succession, K. Pearson showed that after ''n'' successes in ''n'' trials the posterior probability (based on the Bayes Beta(1,1) distribution as the prior probability) that the next (''n'' + 1) trials will all be successes is exactly 1/2, whatever the value of ''n''. Based on the Haldane Beta(0,0) distribution as the prior probability, this posterior probability is 1 (absolute certainty that after n successes in ''n'' trials the next (''n'' + 1) trials will all be successes). Perks<ref name=Perks/> (p. 303) shows that, for what is now known as the Jeffreys prior, this probability is ((''n'' + 1/2)/(''n'' + 1))((''n'' + 3/2)/(''n'' + 2))...(2''n'' + 1/2)/(2''n'' + 1), which for ''n'' = 1, 2, 3 gives 15/24, 315/480, 9009/13440; rapidly approaching a limiting value of <math>1/\sqrt{2} = 0.70710678\ldots</math> as n tends to infinity. Perks remarks that what is now known as the Jeffreys prior: "is clearly more 'reasonable' than either the Bayes–Laplace result or the result on the (Haldane) alternative rule rejected by Jeffreys which gives certainty as the probability. It clearly provides a very much better correspondence with the process of induction. Whether it is 'absolutely' reasonable for the purpose, i.e. whether it is yet large enough, without the absurdity of reaching unity, is a matter for others to decide. But it must be realized that the result depends on the assumption of complete indifference and absence of knowledge prior to the sampling experiment." Following are the variances of the posterior distribution obtained with these three prior probability distributions: for the '''Bayes'''' prior probability (Beta(1,1)), the posterior variance is: :<math>\text{variance} = \frac{(n-s+1)(s+1)}{(3+n)(2+n)^2},\text{ which for } s=\frac{n}{2} \text{ results in variance} =\frac{1}{12+4n}</math> for the '''Jeffreys'''' prior probability (Beta(1/2,1/2)), the posterior variance is: : <math>\text{variance} = \frac{(n-s+\frac{1}{2})(s+\frac{1}{2})}{(2+n)(1+n)^2} ,\text{ which for } s=\frac n 2 \text{ results in var} = \frac 1 {8 + 4n}</math> and for the '''Haldane''' prior probability (Beta(0,0)), the posterior variance is: :<math>\text{variance} = \frac{(n-s)s}{(1+n)n^2}, \text{ which for }s=\frac{n}{2}\text{ results in variance} =\frac{1}{4+4n}</math> So, as remarked by Silvey,<ref name=Silvey/> for large ''n'', the variance is small and hence the posterior distribution is highly concentrated, whereas the assumed prior distribution was very diffuse. This is in accord with what one would hope for, as vague prior knowledge is transformed (through Bayes' theorem) into a more precise posterior knowledge by an informative experiment. 
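The posterior means and variances quoted above can be verified with a few lines of code. The sketch below is an illustration added here, assuming SciPy; the data ''s'' = 3, ''n'' = 12 are arbitrary example values, not taken from the sources.
<syntaxhighlight lang="python">
# Compare posterior mean and variance under the Haldane, Jeffreys and Bayes priors.
from scipy.stats import beta

priors = {"Haldane Beta(0,0)":      (0.0, 0.0),
          "Jeffreys Beta(1/2,1/2)": (0.5, 0.5),
          "Bayes Beta(1,1)":        (1.0, 1.0)}
s, n = 3, 12
for name, (a0, b0) in priors.items():
    post = beta(s + a0, n - s + b0)
    print(f"{name:23s} mean = {post.mean():.4f}, variance = {post.var():.5f}")
# The Haldane posterior reproduces mean s/n = 0.25 and variance s(n-s)/((1+n)n^2);
# since s/n < 1/2 the means come out ordered Bayes > Jeffreys > Haldane, as stated above.
</syntaxhighlight>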
For small ''n'' the Haldane Beta(0,0) prior results in the largest posterior variance, while the Bayes Beta(1,1) prior results in the most concentrated posterior. Jeffreys prior Beta(1/2,1/2) results in a posterior variance in between the other two. As ''n'' increases, the variance rapidly decreases so that the posterior variance for all three priors converges to approximately the same value (approaching zero variance as ''n'' → ∞). Recalling the previous result that the ''Haldane'' prior probability Beta(0,0) results in a posterior probability density with ''mean'' (the expected value for the probability of success in the "next" trial) identical to the ratio ''s''/''n'' of the number of successes to the total number of trials, it follows from the above expression that the ''Haldane'' prior Beta(0,0) also results in a posterior with ''variance'' identical to the variance expressed in terms of the maximum likelihood estimate ''s''/''n'' and sample size (in {{section link||Variance}}):
:<math>\text{variance} = \frac{\mu(1-\mu)}{1 + \nu}= \frac{(n-s)s}{(1+n) n^2} </math>
with the mean ''μ'' = ''s''/''n'' and the sample size ''ν'' = ''n''. In Bayesian inference, using a Beta(''α''Prior,''β''Prior) [[prior distribution]] for the parameter of a binomial distribution is equivalent to adding (''α''Prior − 1) pseudo-observations of "success" and (''β''Prior − 1) pseudo-observations of "failure" to the actual number of successes and failures observed, then estimating the parameter ''p'' of the binomial distribution by the proportion of successes over both real and pseudo-observations. A uniform prior Beta(1,1) does not add (or subtract) any pseudo-observations since for Beta(1,1) it follows that (''α''Prior − 1) = 0 and (''β''Prior − 1) = 0. The Haldane prior Beta(0,0) subtracts one pseudo-observation from each, and Jeffreys prior Beta(1/2,1/2) subtracts half a pseudo-observation of success and half a pseudo-observation of failure. This subtraction has the effect of [[smoothing]] out the posterior distribution. If the proportion of successes is not 50% (''s''/''n'' ≠ 1/2), values of ''α''Prior and ''β''Prior less than 1 (and therefore negative (''α''Prior − 1) and (''β''Prior − 1)) favor sparsity, i.e. distributions where the parameter ''p'' is closer to either 0 or 1. In effect, values of ''α''Prior and ''β''Prior between 0 and 1, when operating together, function as a [[concentration parameter]]. The accompanying plots show the posterior probability density functions for sample sizes ''n'' ∈ {3,10,50}, successes ''s'' ∈ {''n''/2,''n''/4} and Beta(''α''Prior,''β''Prior) ∈ {Beta(0,0),Beta(1/2,1/2),Beta(1,1)}. Also shown are the cases for ''n'' ∈ {4,12,40}, successes ''s'' ∈ {''n''/4} and Beta(''α''Prior,''β''Prior) ∈ {Beta(0,0),Beta(1/2,1/2),Beta(1,1)}. The first plot shows the symmetric cases, for successes ''s'' ∈ {''n''/2}, with mean = mode = 1/2, and the second plot shows the skewed cases ''s'' ∈ {''n''/4}. The images show that there is little difference between the priors for the posterior with sample size of 50 (characterized by a more pronounced peak near ''p'' = 1/2). Significant differences appear for very small sample sizes (in particular for the flatter distribution for the degenerate case of sample size = 3). Therefore, the skewed cases, with successes ''s'' ∈ {''n''/4}, show a larger effect from the choice of prior, at small sample size, than the symmetric cases.
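The pseudo-observation reading described above can also be checked directly. The following sketch is an illustration added here (plain Python, no libraries needed); the counts ''s'' = 9, ''n'' = 12 and the informative Beta(2,2) prior are hypothetical example values. It confirms that the posterior mode equals the proportion of successes over the real and pseudo-observations combined.
<syntaxhighlight lang="python">
# For a Beta(a_prior, b_prior) prior, the posterior mode equals the proportion of
# successes after adding (a_prior - 1) pseudo-successes and (b_prior - 1) pseudo-failures.
s, n = 9, 12
for a_prior, b_prior in [(1.0, 1.0), (0.5, 0.5), (2.0, 2.0)]:
    a_post, b_post = s + a_prior, n - s + b_prior
    posterior_mode = (a_post - 1) / (a_post + b_post - 2)
    pseudo_proportion = (s + a_prior - 1) / (n + a_prior + b_prior - 2)
    print(a_prior, b_prior, posterior_mode, pseudo_proportion)   # last two columns agree
</syntaxhighlight>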
For symmetric distributions, the Bayes prior Beta(1,1) results in the most "peaky" and highest posterior distributions, and the Haldane prior Beta(0,0) results in the flattest and lowest peak distribution. The Jeffreys prior Beta(1/2,1/2) lies in between them. For nearly symmetric, not too skewed distributions, the effect of the priors is similar. For very small sample size (in this case for a sample size of 3) and skewed distribution (in this example for ''s'' ∈ {''n''/4}), the Haldane prior can result in a reverse-J-shaped distribution with a singularity at the left end. However, this happens only in degenerate cases (in this example ''n'' = 3 and hence ''s'' = 3/4 < 1, a degenerate value because ''s'' should be greater than unity in order for the posterior of the Haldane prior to have a mode located between the ends, and because ''s'' = 3/4 is not an integer, which violates the initial assumption of a binomial distribution for the likelihood) and it is not an issue in generic cases of reasonable sample size (such that the condition 1 < ''s'' < ''n'' − 1, necessary for a mode to exist between both ends, is fulfilled). In Chapter 12 (p. 385) of his book, Jaynes<ref name=Jaynes/> asserts that the ''Haldane prior'' Beta(0,0) describes a ''prior state of knowledge of complete ignorance'', where we are not even sure whether it is physically possible for an experiment to yield either a success or a failure, while the ''Bayes (uniform) prior Beta(1,1) applies if'' one knows that ''both binary outcomes are possible''. Jaynes states: "''interpret the Bayes–Laplace (Beta(1,1)) prior as describing not a state of complete ignorance'', but the state of knowledge in which we have observed one success and one failure...once we have seen at least one success and one failure, then we know that the experiment is a true binary one, in the sense of physical possibility." Jaynes<ref name=Jaynes/> does not specifically discuss Jeffreys prior Beta(1/2,1/2) (Jaynes's discussion of "Jeffreys prior" on pp. 181, 423 and in chapter 12 of his book<ref name=Jaynes/> refers instead to the improper, un-normalized prior "1/''p'' ''dp''" introduced by Jeffreys in the 1939 edition of his book,<ref name=Jeffreys/> seven years before he introduced what is now known as Jeffreys' invariant prior: the square root of the determinant of Fisher's information matrix. ''"1/p" is Jeffreys' (1946) invariant prior for the [[exponential distribution]], not for the Bernoulli or binomial distributions''). However, it follows from the above discussion that Jeffreys Beta(1/2,1/2) prior represents a state of knowledge in between the Haldane Beta(0,0) and Bayes Beta(1,1) prior. Similarly, [[Karl Pearson]] in his 1892 book [[The Grammar of Science]]<ref name=PearsonGrammar>{{cite book| last=Pearson|first=Karl|title=The Grammar of Science|year=1892|publisher=Walter Scott, London|url=https://books.google.com/books?id=IvdsEcFwcnsC&q=grammar+of+science&pg=PR19}}</ref><ref name=PearsnGrammar2009>{{cite book|last=Pearson|first=Karl|title=The Grammar of Science|year=2009|publisher=BiblioLife|isbn=978-1110356119}}</ref> (p. 144 of 1900 edition) maintained that the Bayes (Beta(1,1)) uniform prior was not a complete ignorance prior, and that it should be used when prior information justified the decision to "distribute our ignorance equally". K.
Pearson wrote: "Yet the only supposition that we appear to have made is this: that, knowing nothing of nature, routine and anomy (from the Greek ανομία, namely: a- "without", and nomos "law") are to be considered as equally likely to occur. Now we were not really justified in making even this assumption, for it involves a knowledge that we do not possess regarding nature. We use our ''experience'' of the constitution and action of coins in general to assert that heads and tails are equally probable, but we have no right to assert before experience that, as we know nothing of nature, routine and breach are equally probable. In our ignorance we ought to consider before experience that nature may consist of all routines, all anomies (normlessness), or a mixture of the two in any proportion whatever, and that all such are equally probable. Which of these constitutions after experience is the most probable must clearly depend on what that experience has been like." If there is sufficient [[Sample (statistics)|sampling data]], ''and the posterior probability mode is not located at one of the extremes of the domain'' (''x'' = 0 or ''x'' = 1), the three priors of Bayes (Beta(1,1)), Jeffreys (Beta(1/2,1/2)) and Haldane (Beta(0,0)) should yield similar [[posterior probability|''posterior'' probability]] densities. Otherwise, as Gelman et al.<ref name=Gelman>{{cite book|last=Gelman|first=A., Carlin, J. B., Stern, H. S., and Rubin, D. B.|title=Bayesian Data Analysis| year=2003|publisher=Chapman and Hall/CRC|isbn=978-1584883883}}</ref> (p. 65) point out, "if so few data are available that the choice of noninformative prior distribution makes a difference, one should put relevant information into the prior distribution", or as Berger<ref name=BergerDecisionTheory/> (p. 125) points out "when different reasonable priors yield substantially different answers, can it be right to state that there ''is'' a single answer? Would it not be better to admit that there is scientific uncertainty, with the conclusion depending on prior beliefs?."
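The closing point can be illustrated with a short sensitivity sketch, assuming SciPy; it is an illustration added here, and the sample sizes used are arbitrary example values. With plenty of data the Haldane, Jeffreys and Bayes priors lead to nearly identical posteriors, while for a very small sample the choice of noninformative prior is clearly visible in the result.
<syntaxhighlight lang="python">
# Posterior means under the three noninformative priors for a small and a large sample.
from scipy.stats import beta

def posterior_mean(s, n, a_prior, b_prior):
    return beta(s + a_prior, n - s + b_prior).mean()

for s, n in [(1, 4), (250, 1000)]:            # small sample vs. large sample
    means = [posterior_mean(s, n, a0, b0) for a0, b0 in [(0, 0), (0.5, 0.5), (1, 1)]]
    print(f"s = {s}, n = {n}: Haldane {means[0]:.4f}, Jeffreys {means[1]:.4f}, Bayes {means[2]:.4f}")
</syntaxhighlight>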