{{short description|Probability distribution}} {{Probability distribution | name =Dirichlet distribution| type =density| pdf_image =Dirichlet.pdf| cdf_image =| parameters =<math>K \geq 2</math> number of categories ([[integer]])<br /><math>\boldsymbol\alpha=(\alpha_1,\ldots,\alpha_K)</math> [[concentration parameter]]s, where <math>\alpha_i > 0</math>| support =<math>x_1, \ldots, x_K</math> where <math>x_i \in [0,1]</math> and <math>\sum_{i=1}^K x_i = 1</math> <br /> (i.e. a <math>K-1</math> [[simplex]])| pdf =<math>\frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1} </math><br />where <math>\mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma\bigl(\alpha_0\bigr)}</math><br />where <math>\alpha_0 = \sum_{i=1}^K\alpha_i</math>| cdf =| mean =<math>\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0}</math><br /><math> \operatorname{E}[\ln X_i] = \psi(\alpha_i)-\psi(\alpha_0)</math><br />(where <math>\psi</math> is the [[digamma function]])| median =| mode =<math>x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \quad \alpha_i > 1. </math>| variance =<math>\operatorname{Var}[X_i] = \frac{\tilde{\alpha}_i(1-\tilde{\alpha}_i)}{\alpha_0+1},</math> <math>\operatorname{Cov}[X_i,X_j] = \frac{\delta_{ij}\,\tilde{\alpha}_i-\tilde{\alpha}_i \tilde{\alpha}_j}{\alpha_0+1}</math> <br/>where <math>\tilde{\alpha}_i = \frac{\alpha_i}{\alpha_0}</math>, and <math>\delta_{ij}</math> is the [[Kronecker delta]] | skewness =| kurtosis =| entropy = <math> H(X) = \log \mathrm{B}(\boldsymbol\alpha)</math><math> + (\alpha_0-K)\psi(\alpha_0) -</math><math> \sum_{j=1}^K (\alpha_j-1)\psi(\alpha_j) </math><br/>with <math>\alpha_0</math> defined as for variance, above; and <math>\psi</math> is the [[digamma function]]| moments = <math> \alpha_i = E[X_i]\left(\frac{E[X_j](1 - E[X_j])}{V[X_j]} - 1 \right)</math> where {{mvar|j}} is any index, possibly {{mvar|i}} itself }} In [[probability]] and [[statistics]], the '''Dirichlet distribution''' (after [[Peter Gustav Lejeune Dirichlet]]), often denoted <math>\operatorname{Dir}(\boldsymbol\alpha)</math>, is a family of [[Continuous probability distribution|continuous]] [[multivariate random variable|multivariate]] [[probability distribution]]s parameterized by a vector {{math|'''α'''}} of positive [[real number|reals]]. It is a multivariate generalization of the [[beta distribution]],<ref name=KBJ>{{cite book|author1=S. Kotz |author2=N. Balakrishnan |author3=N. L. Johnson |title= Continuous Multivariate Distributions. Volume 1: Models and Applications|year=2000| publisher=Wiley|location= New York|isbn=978-0-471-18387-7}} (Chapter 49: Dirichlet and Inverted Dirichlet Distributions)</ref> hence its alternative name of '''multivariate beta distribution''' ('''MBD''').<ref>{{Cite journal |jstor = 2238036|title = Multivariate Beta Distributions and Independence Properties of the Wishart Distribution|journal = The Annals of Mathematical Statistics|volume = 35|issue = 1|pages = 261–269|last1 = Olkin|first1 = Ingram|last2 = Rubin|first2 = Herman|year = 1964|doi=10.1214/aoms/1177703748|doi-access = free}}</ref> Dirichlet distributions are commonly used as [[prior distribution]]s in [[Bayesian statistics]], and in fact, the Dirichlet distribution is the [[conjugate prior]] of the [[categorical distribution]] and [[multinomial distribution]]. The infinite-dimensional generalization of the Dirichlet distribution is the ''[[Dirichlet process]]''. 
==Definitions== ===Probability density function=== [[Image:LogDirichletDensity-alpha 0.3 to alpha 2.0.gif|thumb|right|250px|Illustrating how the log of the density function changes when {{math|1=''K'' = 3}} as we change the vector {{math|'''α'''}} from {{math|1='''α''' = (0.3, 0.3, 0.3)}} to {{math|(2.0, 2.0, 2.0)}}, keeping all the individual <math>\alpha_i</math>'s equal to each other.]] The Dirichlet distribution of order {{math|''K'' ≥ 2}} with parameters {{math|''α''{{sub|1}}, ..., ''α''{{sub|''K''}} > 0}} has a [[probability density function]] with respect to [[Lebesgue measure]] on the [[Euclidean space]] {{math|'''R'''{{isup|''K''−1}}}} given by <math display=block>f \left(x_1,\ldots, x_{K}; \alpha_1,\ldots, \alpha_K \right) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1}</math> where <math>\{x_k\}_{k=1}^{k=K}</math> belong to the standard <math>K-1</math> [[simplex]], or in other words: <math display=block>\sum_{i=1}^{K} x_i = 1 \mbox{ and } x_i \in \left[0,1\right] \mbox{ for all } i \in \{1,\dots,K\}\,.</math> The [[normalizing constant]] is the multivariate [[beta function]], which can be expressed in terms of the [[gamma function]]: <math display=block>\mathrm{B}(\boldsymbol\alpha) = \frac{\prod\limits_{i=1}^K \Gamma(\alpha_i)}{\Gamma\left(\sum\limits_{i=1}^K \alpha_i\right)},\qquad\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_K).</math> ===Support=== The [[support (mathematics)|support]] of the Dirichlet distribution is the set of {{mvar|K}}-dimensional vectors {{math|'''x'''}} whose entries are real numbers in the interval [0,1] such that <math>\|\boldsymbol x\|_1 = 1</math>, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of a {{mvar|K}}-way [[categorical distribution|categorical]] event. Another way to express this is that the domain of the Dirichlet distribution is itself a set of [[probability distribution]]s, specifically the set of {{mvar|K}}-dimensional [[discrete distribution]]s. The technical term for the set of points in the support of a {{mvar|K}}-dimensional Dirichlet distribution is the [[open set|open]] [[standard simplex|standard {{math|(''K'' − 1)}}-simplex]],<ref name=FKG>{{cite web |url=https://www.ee.washington.edu/techsite/papers/documents/UWEETR-2010-0006.pdf |title=Introduction to the Dirichlet Distribution and Related Processes |year=2010 |author1=Bela A. Frigyik |author2=Amol Kapila |author3=Maya R. Gupta |access-date= |format=Technical Report UWEETR-2010-006 |publisher=University of Washington Department of Electrical Engineering |archive-url=https://web.archive.org/web/20150219021331/https://www.ee.washington.edu/techsite/papers/documents/UWEETR-2010-0006.pdf |archive-date=2015-02-19 |url-status=dead }}</ref> which is a generalization of a [[triangle]], embedded in the next-higher dimension. For example, with {{math|1=''K'' = 3}}, the support is an [[equilateral triangle]] embedded in a downward-angle fashion in three-dimensional space, with vertices at (1,0,0), (0,1,0) and (0,0,1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin. ===Special cases=== A common special case is the '''symmetric Dirichlet distribution''', where all of the elements making up the parameter vector {{math|'''α'''}} have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. 
Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value {{mvar|α}}, called the [[concentration parameter]]. In terms of {{mvar|α}}, the density function has the form <math display=block>f(x_1,\dots, x_{K}; \alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{i=1}^K x_i^{\alpha - 1}.</math> When {{math|1=''α'' = 1}},{{ref|concentration-parameter-disambiguation}} the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open [[standard simplex|standard {{math|(''K''−1)}}-simplex]], i.e. it is uniform over all points in its [[support (mathematics)|support]]. This particular distribution is known as the '''flat Dirichlet distribution'''. Values of the concentration parameter above 1 prefer [[random variate|variate]]s that are dense, evenly distributed distributions, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 prefer sparse distributions, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of the values. When {{math|1=''α'' = 1/2}}, the distribution is the same as would be obtained by choosing a point uniformly at random from the surface of a {{math|(''K''−1)}}-dimensional [[unit hypersphere]] and squaring each coordinate. The symmetric Dirichlet distribution with {{math|1=''α'' = 1/2}} is the [[Jeffreys prior]] for the parameters of the [[categorical distribution|categorical]] and [[multinomial distribution]]s. More generally, the parameter vector is sometimes written as the product <math>\alpha \boldsymbol n</math> of a ([[Scalar (mathematics)|scalar]]) [[concentration parameter]] {{mvar|α}} and a ([[Vector (mathematics and physics)|vector]]) [[base measure]] <math>\boldsymbol n=(n_1,\dots,n_K)</math> where {{math|'''n'''}} lies within the {{math|(''K'' − 1)}}-simplex (i.e.: its coordinates <math>n_i</math> sum to one). The concentration parameter in this case is larger by a factor of {{mvar|K}} than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing [[Dirichlet process]]es and is often used in the topic modelling literature. <div style="font-size:smaller"> :{{note|concentration-parameter-disambiguation}} If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameter {{mvar|K}}, the dimension of the distribution, is the uniform distribution on the {{math|(''K'' − 1)}}-simplex. </div> ==Properties== ===Moments=== Let <math>X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\boldsymbol\alpha)</math>. Let <math display=block>\alpha_0 = \sum_{i=1}^K \alpha_i.</math> Then<ref>Eq. (49.9) on page 488 of [http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471183873.html Kotz, Balakrishnan & Johnson (2000). Continuous Multivariate Distributions. Volume 1: Models and Applications. New York: Wiley.]</ref><ref>{{cite book|author1=Balakrishnan, N.|author2=Nevzorov, V. B.|year=2005|title=A Primer on Statistical Distributions|publisher=John Wiley & Sons, Inc.|location=Hoboken, NJ|isbn=978-0-471-42798-8|chapter="Chapter 27. 
Dirichlet Distribution"|page=[https://archive.org/details/primeronstatisti0000bala/page/274 274]|chapter-url=https://archive.org/details/primeronstatisti0000bala/page/274}}</ref> <math display=block>\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0},</math> <math display=block>\operatorname{Var}[X_i] = \frac{\alpha_i (\alpha_0-\alpha_i)}{\alpha_0^2 (\alpha_0+1)}.</math> Furthermore, if <math> i\neq j</math> <math display=block>\operatorname{Cov}[X_i,X_j] = \frac{- \alpha_i \alpha_j}{\alpha_0^2 (\alpha_0+1)}.</math> The covariance matrix is [[invertible matrix|singular]]. More generally, moments of Dirichlet-distributed random variables can be expressed in the following way. For <math> \boldsymbol{t}=(t_1,\dotsc,t_K) \in \mathbb{R}^K</math>, denote by <math>\boldsymbol{t}^{\circ i} = (t_1^i,\dotsc,t_K^i)</math> its {{mvar|i}}-th [[Hadamard product (matrices)#Analogous operations|Hadamard power]]. Then,<ref>{{Cite journal |last=Dello Schiavo |first=Lorenzo |date=2019 |title=Characteristic functionals of Dirichlet measures |journal=Electron. J. Probab. |volume=24 |pages=1–38 |doi=10.1214/19-EJP371 |doi-access=free|arxiv=1810.09790 }}</ref> <math>\operatorname{E}\left[ (\boldsymbol{t} \cdot \boldsymbol{X})^n \right] = \frac{n! \, \Gamma ( \alpha_0 )}{\Gamma (\alpha_0+n)} \sum \frac{{t_1}^{k_1} \cdots {t_K}^{k_K}}{k_1! \cdots k_K!} \prod_{i=1}^K \frac{\Gamma(\alpha_i + k_i)}{\Gamma(\alpha_i)} = \frac{n! \, \Gamma ( \alpha_0 )}{\Gamma (\alpha_0+n)} Z_n(\boldsymbol{t}^{\circ 1} \cdot \boldsymbol{\alpha}, \cdots, \boldsymbol{t}^{\circ n} \cdot \boldsymbol{\alpha}),</math> where the sum is over non-negative integers <math>k_1,\ldots,k_K</math> with <math>n=k_1+\cdots+k_K</math>, and <math>Z_n</math> is the [[Cycle index#Symmetric group Sn|cycle index polynomial]] of the [[Symmetric group]] of degree {{mvar|n}}. We have the special case <math>\operatorname{E}\left[ \boldsymbol{t} \cdot \boldsymbol{X} \right] = \frac{\boldsymbol{t} \cdot \boldsymbol{\alpha}}{\alpha_0}. </math> The multivariate analogue <math display="inline">\operatorname{E}\left[ (\boldsymbol{t}_1 \cdot \boldsymbol{X})^{n_1} \cdots (\boldsymbol{t}_q \cdot \boldsymbol{X})^{n_q} \right]</math> for vectors <math>\boldsymbol{t}_1, \dotsc, \boldsymbol{t}_q \in \mathbb{R}^K</math> can be expressed<ref>{{ cite arXiv | last1=Dello Schiavo | first1=Lorenzo | last2=Quattrocchi | first2=Filippo | date=2023 | title=Multivariate Dirichlet Moments and a Polychromatic Ewens Sampling Formula | eprint=2309.11292 | class=math.PR }}</ref> in terms of a color pattern of the exponents <math>n_1, \dotsc, n_q</math> in the sense of [[Pólya enumeration theorem]]. Particular cases include the simple computation<ref>{{cite web|last1=Hoffmann|first1=Till|title=Moments of the Dirichlet distribution|url=https://tillahoffmann.github.io/Moments-of-the-Dirichlet-distribution/|archive-url=https://web.archive.org/web/20160214015422/https://tillahoffmann.github.io/Moments-of-the-Dirichlet-distribution/ |access-date=14 February 2016|archive-date=2016-02-14 }}</ref> <math display=block>\operatorname{E}\left[\prod_{i=1}^K X_i^{\beta_i}\right] = \frac{B\left(\boldsymbol{\alpha} + \boldsymbol{\beta}\right)}{B\left(\boldsymbol{\alpha}\right)} = \frac{\Gamma\left(\sum\limits_{i=1}^K \alpha_{i}\right)}{\Gamma\left[\sum\limits_{i=1}^K (\alpha_i+\beta_i)\right]}\times\prod_{i=1}^K \frac{\Gamma(\alpha_i+\beta_i)}{\Gamma(\alpha_i)}.</math> ===Mode=== The [[mode (statistics)|mode]] of the distribution is<ref name="Bishop2006">{{cite book|author=Christopher M. 
Bishop|title=Pattern Recognition and Machine Learning|url=https://books.google.com/books?id=kTNoQgAACAAJ|date=17 August 2006|publisher=Springer|isbn=978-0-387-31073-2}}</ref> the vector {{math|(''x''{{sub|1}}, ..., ''x{{sub|K}}'')}} with <math display=block>x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \qquad \alpha_i > 1. </math> ===Marginal distributions=== The [[marginal distribution]]s are [[beta distribution]]s:<ref>{{cite web|last=Farrow|first=Malcolm|title=MAS3301 Bayesian Statistics|url=http://www.mas.ncl.ac.uk/~nmf16/teaching/mas3301/week6.pdf|work=Newcastle University|access-date=10 April 2013}}</ref> <math display=block>X_i \sim \operatorname{Beta} (\alpha_i, \alpha_0 - \alpha_i). </math> Also see {{slink||Related distributions}} below. ===Conjugate to categorical or multinomial=== The Dirichlet distribution is the [[conjugate prior]] distribution of the [[categorical distribution]] (a generic [[discrete probability distribution]] with a given number of possible outcomes) and [[multinomial distribution]] (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the [[prior distribution]] of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the [[posterior distribution]] of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we can then update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties. Formally, this can be expressed as follows. Given a model <math display=block>\begin{array}{rcccl} \boldsymbol\alpha &=& \left(\alpha_1, \ldots, \alpha_K \right) &=& \text{concentration hyperparameter} \\ \mathbf{p}\mid\boldsymbol\alpha &=& \left(p_1, \ldots, p_K \right ) &\sim& \operatorname{Dir}(K, \boldsymbol\alpha) \\ \mathbb{X}\mid\mathbf{p} &=& \left(\mathbf{x}_1, \ldots, \mathbf{x}_N \right ) &\sim& \operatorname{Cat}(K,\mathbf{p}) \end{array}</math> then the following holds: <math display=block>\begin{array}{rcccl} \mathbf{c} &=& \left(c_1, \ldots, c_K \right ) &=& \text{number of occurrences of category }i \\ \mathbf{p} \mid \mathbb{X},\boldsymbol\alpha &\sim& \operatorname{Dir}(K,\mathbf{c}+\boldsymbol\alpha) &=& \operatorname{Dir} \left (K,c_1+\alpha_1,\ldots,c_K+\alpha_K \right) \end{array}</math> This relationship is used in [[Bayesian statistics]] to estimate the underlying parameter {{math|'''p'''}} of a [[categorical distribution]] given a collection of {{mvar|N}} samples. Intuitively, we can view the [[hyperprior]] vector {{math|'''α'''}} as [[pseudocount]]s, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector {{math|'''c'''}}) in order to derive the posterior distribution. In Bayesian [[mixture model]]s and other [[hierarchical Bayesian model]]s with mixture components, Dirichlet distributions are commonly used as the prior distributions for the [[categorical distribution|categorical variable]]s appearing in the models. See the section on [[#Occurrence and applications|applications]] below for more information.
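Below is example Python code illustrating the pseudocount update described above (a minimal sketch assuming [[NumPy]] is available; the prior pseudocounts and the observations are arbitrary example values):
<syntaxhighlight lang="python">
import numpy as np

alpha = np.array([1.0, 1.0, 1.0])      # prior pseudocounts (example values)
observations = [0, 2, 2, 1, 2, 0, 2]   # N categorical observations, categories 0..K-1

counts = np.bincount(observations, minlength=len(alpha))  # the count vector c
posterior_alpha = alpha + counts                          # posterior is Dir(c + alpha)

print(posterior_alpha)                          # [3. 2. 5.]
print(posterior_alpha / posterior_alpha.sum())  # posterior mean of p
</syntaxhighlight>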
===Relation to Dirichlet-multinomial distribution=== In a model where a Dirichlet prior distribution is placed over a set of [[categorical distribution|categorical-valued]] observations, the [[marginal distribution|marginal]] [[joint distribution]] of the observations (i.e. the joint distribution of the observations, with the prior parameter [[marginalized out]]) is a [[Dirichlet-multinomial distribution]]. This distribution plays an important role in [[hierarchical Bayesian model]]s, because when doing [[statistical inference|inference]] over such models using methods such as [[Gibbs sampling]] or [[variational Bayes]], Dirichlet prior distributions are often marginalized out. See the [[Dirichlet-multinomial distribution|article on this distribution]] for more details. ===Entropy=== If {{mvar|X}} is a <math>\operatorname{Dir}(\boldsymbol\alpha)</math> random variable, the [[differential entropy]] of {{mvar|X}} (in [[nat (unit)|nat units]]) is<ref>{{cite book |last1=Lin |first1=Jiayu |title=On The Dirichlet Distribution |date=2016 |publisher=Queen's University |location=Kingston, Canada |pages=§ 2.4.9 |url=https://mast.queensu.ca/~communications/Papers/msc-jiayu-lin.pdf}}</ref> <math display=block>h(\boldsymbol X) = \operatorname{E}[- \ln f(\boldsymbol X)] = \ln \operatorname{B}(\boldsymbol\alpha) + (\alpha_0-K)\psi(\alpha_0) - \sum_{j=1}^K (\alpha_j-1)\psi(\alpha_j) </math> where <math>\psi</math> is the [[digamma function]]. The following formula for <math> \operatorname{E}[\ln(X_i)]</math> can be used to derive the differential [[information entropy|entropy]] above. Since the functions <math>\ln(X_i)</math> are the sufficient statistics of the Dirichlet distribution, the [[Exponential family#Moments and cumulants of the sufficient statistic|exponential family differential identities]] can be used to get an analytic expression for the expectation of <math>\ln(X_i)</math> (see equation (2.62) in <ref>{{cite web|last=Nguyen|first=Duy|title=AN IN DEPTH INTRODUCTION TO VARIATIONAL BAYES NOTE|date=15 August 2023 |ssrn=4541076 |url=https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4541076|access-date=15 August 2023}}</ref>) and its associated covariance matrix: <math display=block>\operatorname{E}[\ln(X_i)] = \psi(\alpha_i)-\psi(\alpha_0)</math> and <math display=block>\operatorname{Cov}[\ln(X_i),\ln(X_j)] = \psi'(\alpha_i) \delta_{ij} - \psi'(\alpha_0)</math> where <math>\psi</math> is the [[digamma function]], <math>\psi'</math> is the [[trigamma function]], and <math>\delta_{ij}</math> is the [[Kronecker delta]]. The spectrum of [[Rényi entropy|Rényi information]] for values other than <math> \lambda = 1</math> is given by<ref>{{cite journal | journal=Journal of Statistical Planning and Inference | volume=93 | issue=325 | pages=51–69 | year=2001 | author=Song, Kai-Sheng | title=Rényi information, loglikelihood, and an intrinsic distribution measure| doi = 10.1016/S0378-3758(00)00169-5 | publisher=Elsevier}}</ref> <math display=block>F_R(\lambda) = (1-\lambda)^{-1} \left( - \lambda \log \mathrm{B}(\boldsymbol\alpha) + \sum_{i=1}^K \log \Gamma(\lambda(\alpha_i - 1) + 1) - \log \Gamma(\lambda (\alpha_0 - K) + K ) \right) </math> and the information entropy is the limit as <math>\lambda</math> goes to 1. Another related interesting measure is the entropy of a discrete categorical (one-of-K binary) vector {{math|'''Z'''}} with probability-mass distribution {{math|'''X'''}}, i.e., <math> P(Z_i=1, Z_{j\ne i} = 0 | \boldsymbol X) = X_i </math>. 
The conditional [[information entropy]] of {{math|'''Z'''}}, given {{math|'''X'''}} is <math display=block>S(\boldsymbol X) = H(\boldsymbol Z | \boldsymbol X) = \operatorname{E}_{\boldsymbol Z}[- \log P(\boldsymbol Z | \boldsymbol X ) ] = \sum_{i=1}^K - X_i \log X_i </math> This function of {{math|'''X'''}} is a scalar random variable. If {{math|'''X'''}} has a symmetric Dirichlet distribution with all <math>\alpha_i = \alpha</math>, the expected value of the entropy (in [[nat (unit)|nat units]]) is<ref>{{cite conference |last1=Nemenman |first1=Ilya |last2=Shafee |first2=Fariel |last3=Bialek |first3=William |title= Entropy and Inference, revisited |date=2002 |conference=NIPS 14 |url=http://papers.nips.cc/paper/1965-entropy-and-inference-revisited.pdf}}, eq. 8</ref> <math display=block>\operatorname{E}[S(\boldsymbol X)] = \sum_{i=1}^K \operatorname{E}[- X_i \ln X_i] = \psi(K\alpha + 1) - \psi(\alpha + 1) </math> ===Aggregation=== If <math display=block>X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\alpha_1,\ldots,\alpha_K)</math> then, if the random variables with subscripts {{mvar|i}} and {{mvar|j}} are dropped from the vector and replaced by their sum, <math display=block>X' = (X_1, \ldots, X_i + X_j, \ldots, X_K)\sim\operatorname{Dir} (\alpha_1, \ldots, \alpha_i + \alpha_j, \ldots, \alpha_K).</math> This aggregation property may be used to derive the marginal distribution of <math>X_i</math> mentioned above. ===Neutrality=== {{main|Neutral vector}} If <math>X = (X_1, \ldots, X_K)\sim\operatorname{Dir}(\boldsymbol\alpha)</math>, then the vector {{mvar|X}} is said to be ''neutral''<ref>{{cite journal | journal=Journal of the American Statistical Association | volume=64 | issue=325 | pages=194–206 | year=1969 | author=Connor, Robert J. | title=Concepts of Independence for Proportions with a Generalization of the Dirichlet Distribution | doi = 10.2307/2283728 | jstor=2283728 | author2=Mosimann, James E | publisher=American Statistical Association }}</ref> in the sense that ''X{{sub|K}}'' is independent of <math>X^{(-K)}</math><ref name=FKG/> where <math display=block>X^{(-K)}=\left(\frac{X_1}{1-X_K},\frac{X_2}{1-X_K},\ldots,\frac{X_{K-1}}{1-X_K} \right),</math> and similarly for removing any of <math>X_2,\ldots,X_{K-1}</math>. Observe that any permutation of {{mvar|X}} is also neutral (a property not possessed by samples drawn from a [[generalized Dirichlet distribution]]).<ref>See Kotz, Balakrishnan & Johnson (2000), Section 8.5, "Connor and Mosimann's Generalization", pp. 519–521.</ref> Combining this with the property of aggregation it follows that {{math|''X''{{sub|''j''}} + ... + ''X''{{sub|''K''}}}} is independent of <math>\left(\frac{X_1}{X_1+\cdots +X_{j-1}},\frac{X_2}{X_1+\cdots +X_{j-1}},\ldots,\frac{X_{j-1}}{X_1+\cdots +X_{j-1}} \right)</math>. In fact it is true, further, for the Dirichlet distribution, that for <math>3\le j\le K-1</math>, the pair <math>\left(X_1+\cdots +X_{j-1}, X_j+\cdots +X_K\right)</math>, and the two vectors <math>\left(\frac{X_1}{X_1+\cdots +X_{j-1}},\frac{X_2}{X_1+\cdots +X_{j-1}},\ldots,\frac{X_{j-1}}{X_1+\cdots +X_{j-1}} \right)</math> and <math>\left(\frac{X_j}{X_j+\cdots +X_K},\frac{X_{j+1}}{X_j+\cdots +X_K},\ldots,\frac{X_K}{X_j+\cdots +X_K} \right)</math>, viewed as triple of normalised random vectors, are [[Independence (probability theory)#More than two random variables|mutually independent]]. The analogous result is true for partition of the indices {{math|{{mset|1, 2, ..., ''K''}}}} into any other pair of non-singleton subsets. 
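Both the aggregation property and neutrality can be checked numerically. Below is example Python code (a minimal Monte Carlo sketch assuming [[NumPy]] is available; the parameters and sample size are arbitrary example values):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([1.5, 2.5, 3.0])
x = rng.dirichlet(alpha, size=200_000)

# Aggregation: (X_1 + X_2, X_3) ~ Dir(alpha_1 + alpha_2, alpha_3),
# so X_1 + X_2 should have mean (alpha_1 + alpha_2) / alpha_0.
print((x[:, 0] + x[:, 1]).mean(), (alpha[0] + alpha[1]) / alpha.sum())

# Neutrality: X_3 is independent of X_1 / (1 - X_3); as a (weak) necessary
# consequence, their sample correlation should be close to zero.
print(np.corrcoef(x[:, 2], x[:, 0] / (1 - x[:, 2]))[0, 1])
</syntaxhighlight>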
===Characteristic function=== The characteristic function of the Dirichlet distribution is a [[confluent hypergeometric function|confluent]] form of the [[Lauricella hypergeometric series]]. It is given by [[Peter C. B. Phillips|Phillips]] as<ref name="phillips1988">{{cite journal |first=P. C. B. |last=Phillips |year=1988 |url=https://cowles.yale.edu/sites/default/files/files/pub/d08/d0865.pdf |title=The characteristic function of the Dirichlet and multivariate F distribution |journal=Cowles Foundation Discussion Paper 865 }}</ref> <math display=block> CF\left(s_1,\ldots,s_{K-1}\right) = \operatorname{E}\left(e^{i\left(s_1X_1+\cdots+s_{K-1}X_{K-1} \right)} \right)= \Psi^{\left[K-1\right]} (\alpha_1,\ldots,\alpha_{K-1};\alpha_0;is_1,\ldots, is_{K-1}) </math> where <math display=block> \Psi^{[m]} (a_1,\ldots,a_m;c;z_1,\ldots z_m) = \sum\frac{(a_1)_{k_1} \cdots (a_m)_{k_m} \, z_1^{k_1} \cdots z_m^{k_m}}{(c)_k\,k_1!\cdots k_m!}. </math> The sum is over non-negative integers <math>k_1,\ldots,k_m</math> and <math>k=k_1+\cdots+k_m</math>. Phillips goes on to state that this form is "inconvenient for numerical calculation" and gives an alternative in terms of a [[Methods of contour integration|complex path integral]]: <math display=block> \Psi^{[m]} = \frac{\Gamma(c)}{2\pi i}\int_L e^t\,t^{a_1+\cdots+a_m-c}\,\prod_{j=1}^m (t-z_j)^{-a_j} \, dt</math> where {{mvar|L}} denotes any path in the complex plane originating at <math>-\infty</math>, encircling in the positive direction all the singularities of the integrand and returning to <math>-\infty</math>. ===Inequality=== Probability density function <math>f \left(x_1,\ldots, x_{K-1}; \alpha_1,\ldots, \alpha_K \right)</math> plays a key role in a multifunctional inequality which implies various bounds for the Dirichlet distribution.<ref>{{cite journal | last1=Grinshpan | first1=A. Z. | title=An inequality for multiple convolutions with respect to Dirichlet probability measure | doi=10.1016/j.aam.2016.08.001 | year=2017 | journal=Advances in Applied Mathematics | volume=82 | issue=1 | pages=102–119 | doi-access=free }}</ref> Another inequality relates the moment-generating function of the Dirichlet distribution to the convex conjugate of the scaled reversed Kullback-Leibler divergence:<ref>{{cite arXiv | last1=Perrault| first1=P. | title=A New Bound on the Cumulant Generating Function of Dirichlet Processes |eprint=2409.18621 | year=2024| class=math.PR }} Theorem 3.3</ref> <math display=block> \log \operatorname{E}\left(\exp{\sum_{i=1}^K s_i X_i } \right) \leq \sup_p \sum_{i=1}^K \left(p_i s_i - \alpha_i\log\left(\frac{\alpha_i}{\alpha_0 p_i} \right)\right), </math> where the supremum is taken over {{mvar|p}} spanning the {{math|(''K'' − 1)}}-simplex. ==Related distributions== When <math>\boldsymbol X=(X_1, \ldots,X_K)\sim \operatorname{Dir}\left(\alpha_1, \ldots, \alpha_K \right)</math>, the marginal distribution of each component <math>X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0-\alpha_i)</math>, a [[Beta distribution]]. In particular, if {{math|''K'' {{=}} 2}} then <math>X_1 \sim \operatorname{Beta}(\alpha_1, \alpha_2)</math> is equivalent to <math>\boldsymbol X=(X_1,1-X_1) \sim \operatorname{Dir}\left(\alpha_1, \alpha_2 \right)</math>. 
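Below is example Python code comparing the empirical distribution of a single coordinate with this beta marginal (a minimal Monte Carlo sketch assuming [[NumPy]] and [[SciPy]] are available; the parameters and sample size are arbitrary example values):
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = np.array([2.0, 3.0, 4.0])
samples = rng.dirichlet(alpha, size=50_000)

i = 0                                              # which coordinate to check
marginal = stats.beta(alpha[i], alpha.sum() - alpha[i])
print(stats.kstest(samples[:, i], marginal.cdf))   # large p-value: consistent with the Beta marginal
</syntaxhighlight>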
For {{mvar|K}} independently distributed [[Gamma distribution]]s: <math display=block>Y_1 \sim \operatorname{Gamma}(\alpha_1, \theta), \ldots, Y_K \sim \operatorname{Gamma}(\alpha_K, \theta)</math> we have:<ref name=devroye>{{cite book |publisher=Springer-Verlag |year=1986 |last=Devroye |first=Luc |url=http://luc.devroye.org/rnbookindex.html |title=Non-Uniform Random Variate Generation |isbn=0-387-96305-7}}</ref>{{Rp|402}} <math display=block>V=\sum_{i=1}^K Y_i\sim\operatorname{Gamma} \left(\alpha_0, \theta \right ),</math> <math display=block>X = (X_1, \ldots, X_K) = \left(\frac{Y_1}{V}, \ldots, \frac{Y_K}{V} \right)\sim \operatorname{Dir}\left (\alpha_1, \ldots, \alpha_K \right).</math> Although the ''X{{sub|i}}''s are not independent from one another, they can be seen to be generated from a set of {{mvar|K}} independent [[Gamma distribution|gamma]] random variables.<ref name=devroye/>{{Rp|594}} Unfortunately, since the sum {{mvar|V}} is lost in forming {{mvar|X}} (in fact it can be shown that {{mvar|V}} is stochastically independent of {{mvar|X}}), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution. ===Conjugate prior of the Dirichlet distribution=== Because the Dirichlet distribution is an [[exponential family|exponential family distribution]] it has a conjugate prior. The conjugate prior is of the form:<ref name=Lefkimmiatis2009>{{cite journal |first1=Stamatios |last1=Lefkimmiatis |first2=Petros |last2=Maragos |first3=George |last3=Papandreou |year=2009 |title=Bayesian Inference on Multiscale Models for Poisson Intensity Estimation: Applications to Photon-Limited Image Denoising |journal=IEEE Transactions on Image Processing |volume=18 |issue=8 |pages=1724–1741 |doi=10.1109/TIP.2009.2022008 |pmid=19414285 |bibcode=2009ITIP...18.1724L |s2cid=859561 }}</ref> <math display=block>\operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol{v},\eta) \propto \left(\frac{1}{\operatorname{B}(\boldsymbol\alpha)}\right)^\eta \exp\left(-\sum_k v_k \alpha_k\right).</math> Here <math>\boldsymbol{v}</math> is a {{mvar|K}}-dimensional real vector and <math>\eta</math> is a scalar parameter. The domain of <math>(\boldsymbol{v},\eta)</math> is restricted to the set of parameters for which the above unnormalized density function can be normalized. The (necessary and sufficient) condition is:<ref name=Andreoli2018>{{cite arXiv |last=Andreoli |first=Jean-Marc |year=2018 |eprint=1811.05266 |title=A conjugate prior for the Dirichlet distribution |class=cs.LG }}</ref> <math display=block> \forall k\;\;v_k>0\;\;\;\;\text{ and } \;\;\;\;\eta>-1 \;\;\;\;\text{ and } \;\;\;\;(\eta\leq0\;\;\;\;\text{ or }\;\;\;\;\sum_k \exp-\frac{v_k} \eta < 1) </math> The conjugation property can be expressed as : if [''prior'': <math>\boldsymbol{\alpha}\sim\operatorname{CD}(\cdot \mid \boldsymbol{v},\eta)</math>] and [''observation'': <math>\boldsymbol{x}\mid\boldsymbol{\alpha}\sim\operatorname{Dirichlet}(\cdot \mid \boldsymbol{\alpha})</math>] then [''posterior'': <math>\boldsymbol{\alpha}\mid\boldsymbol{x}\sim\operatorname{CD}(\cdot \mid \boldsymbol{v}-\log \boldsymbol{x}, \eta+1)</math>]. In the published literature there is no practical algorithm to efficiently generate samples from <math>\operatorname{CD}(\boldsymbol{\alpha} \mid \boldsymbol{v},\eta)</math>. 
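Below is example Python code sketching the conjugate-prior update stated above (assuming [[NumPy]] and [[SciPy]] are available; the hyperparameter values, the observed variate, and the helper-function name are hypothetical examples). The density is evaluated only up to its unknown normalizing constant:
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gammaln

def log_unnormalized_cd(alpha, v, eta):
    """log of (1/B(alpha))^eta * exp(-sum_k v_k alpha_k), without the normalizing constant."""
    log_B = gammaln(alpha).sum() - gammaln(alpha.sum())
    return -eta * log_B - np.dot(v, alpha)

v, eta = np.array([0.5, 0.5, 0.5]), 2.0      # hypothetical hyperparameters of CD(. | v, eta)
x = np.array([0.2, 0.3, 0.5])                # an observed Dirichlet variate

v_post, eta_post = v - np.log(x), eta + 1    # posterior hyperparameters after observing x
print(log_unnormalized_cd(np.array([1.0, 2.0, 3.0]), v_post, eta_post))
</syntaxhighlight>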
===Generalization by scaling and translation of log-probabilities=== As noted above, Dirichlet variates can be generated by normalizing independent [[Gamma distribution|gamma]] variates. If instead one normalizes [[Generalized gamma distribution|generalized gamma]] variates, one obtains variates from the [[simplicial generalized beta distribution]] (SGB).<ref name="sgb">{{cite web |last1=Graf |first1=Monique |year=2019 |title=The Simplicial Generalized Beta distribution - R-package SGB and applications |url=https://libra.unine.ch/server/api/core/bitstreams/dd593778-b1fd-4856-855b-7b21e005ee77/content |website=Libra |access-date=26 May 2025}}</ref> On the other hand, SGB variates can also be obtained by applying the [[softmax function]] to scaled and translated logarithms of Dirichlet variates. Specifically, let <math>\mathbf x = (x_1, \ldots, x_K)\sim\operatorname{Dir}(\boldsymbol\alpha)</math> and let <math>\mathbf y = (y_1, \ldots, y_K)</math>, where, applying the logarithm elementwise: <math display=block> \mathbf y = \operatorname{softmax}(a^{-1}\log\mathbf x + \log\mathbf b)\;\iff\;\mathbf x = \operatorname{softmax}(a\log\mathbf y - a\log\mathbf b) </math> or <math display=block> y_k = \frac{b_kx_k^{1/a}}{\sum_{i=1}^Kb_ix_i^{1/a}}\; \iff\; x_k = \frac{(y_k/b_k)^a}{\sum_{i=1}^K(y_i/b_i)^a} </math> where <math>a>0</math> and <math>\mathbf b = (b_1, \ldots, b_K)</math>, with all <math>b_k>0</math>, then <math>\mathbf y\sim\operatorname{SGB}(a, \mathbf b, \boldsymbol\alpha)</math>. The SGB density function can be derived by noting that the transformation <math>\mathbf x\mapsto\mathbf y</math>, which is a [[bijection]] from the simplex to itself, induces a differential volume change factor<ref name='manifold_flow'>{{cite web |last1=Sorrenson |first1=Peter |display-authors=etal |year=2024 |title=Learning Distributions on Manifolds with Free-Form Flows |url=https://arxiv.org/abs/2312.09852 |website=arXiv}}</ref> of: <math display=block> R(\mathbf y, a,\mathbf b) = a^{1-K}\prod_{k=1}^K\frac{y_k}{x_k} </math> where it is understood that <math>\mathbf x</math> is recovered as a function of <math>\mathbf y</math>, as shown above. This facilitates writing the SGB density in terms of the Dirichlet density, as: <math display=block> f_{\text{SGB}}(\mathbf y\mid a, \mathbf b, \boldsymbol\alpha) = \frac{f_{\text{Dir}}(\mathbf x\mid\boldsymbol\alpha)}{R(\mathbf y,a,\mathbf b)} </math> This generalization of the Dirichlet density, via a [[change of variables]], is closely related to a [[normalizing flow]]; note, however, that the differential volume change is not given by the [[Jacobian determinant]] of <math>\mathbf x\mapsto\mathbf y:\mathbb R^K\to\mathbb R^K</math>, which is zero, but by the Jacobian determinant of <math>(x_1,\ldots,x_{K-1})\mapsto (y_1,\ldots,y_{K-1})</math>. For further insight into the interaction between the Dirichlet shape parameters <math>\boldsymbol\alpha</math> and the transformation parameters <math>a, \mathbf b</math>, it may be helpful to consider the logarithmic marginals, <math>\log\frac{x_k}{1-x_k}</math>, which follow the [[logistic-beta distribution]], <math>B_\sigma(\alpha_k,\sum_{i\ne k} \alpha_i)</math>. See in particular the sections on [[Generalized_logistic_distribution#Tail_behaviour|tail behaviour]] and [[Generalized_logistic_distribution#Generalization_with_location_and_scale_parameters|generalization with location and scale parameters]].
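Below is example Python code implementing the forward and inverse transformations above (a minimal sketch assuming [[NumPy]] is available; the values of <math>a</math>, <math>\mathbf b</math> and <math>\boldsymbol\alpha</math> are arbitrary examples):
<syntaxhighlight lang="python">
import numpy as np

def softmax(z):
    z = z - z.max()                 # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

a = 2.0
b = np.array([0.2, 0.5, 0.3])
x = np.random.default_rng(2).dirichlet([2.0, 3.0, 4.0])  # a Dirichlet variate

y = softmax(np.log(x) / a + np.log(b))           # an SGB(a, b, alpha) variate
x_back = softmax(a * np.log(y) - a * np.log(b))  # recovers x (up to rounding)

print(x, y, x_back)
</syntaxhighlight>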
====Application==== When <math>b_1=b_2=\cdots=b_K</math>, the transformation simplifies to <math>\mathbf x\mapsto\operatorname{softmax}(a^{-1}\log\mathbf x)</math>, which is known as [[Platt_scaling#Analysis|temperature scaling]] in [[machine learning]], where it is used as a calibration transform for multiclass probabilistic classifiers.<ref>{{cite journal |last1=Ferrer |first1=Luciana |last2=Ramos |first2=Daniel |title=Evaluating Posterior Probabilities: Decision Theory, Proper Scoring Rules, and Calibration |journal=Transactions on Machine Learning Research |date=2025 |url=https://openreview.net/forum?id=qbrE0LR7fF}}</ref> Traditionally, the temperature parameter (<math>a</math> here) is learnt [[Discriminative_model|discriminatively]] by minimizing multiclass [[cross-entropy]] over a supervised calibration data set with known class labels. But the above PDF transformation mechanism can also be used to facilitate the design of [[Generative_model|generatively trained]] calibration models with a temperature scaling component. ==Occurrence and applications== ===Bayesian models=== Dirichlet distributions are most commonly used as the [[prior distribution]] of [[categorical distribution|categorical variable]]s or [[multinomial distribution|multinomial variable]]s in Bayesian [[mixture model]]s and other [[hierarchical Bayesian model]]s. (In many fields, such as in [[natural language processing]], categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when [[Bernoulli distribution]]s and [[binomial distribution]]s are commonly conflated.) Inference over hierarchical Bayesian models is often done using [[Gibbs sampling]], and in such a case, instances of the Dirichlet distribution are typically [[Marginal distribution|marginalized out]] of the model by integrating out the Dirichlet [[random variable]]. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a [[Dirichlet-multinomial distribution]], conditioned on the hyperparameters of the Dirichlet distribution (the [[concentration parameter]]s). One of the reasons for doing this is that Gibbs sampling of the [[Dirichlet-multinomial distribution]] is extremely easy; see that article for more information. ===Intuitive interpretations of the parameters=== ====The concentration parameter==== Dirichlet distributions are very often used as [[prior distribution]]s in [[Bayesian inference]]. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value {{mvar|α}} to which all parameters are set is called the [[concentration parameter]]. If the sample space of the Dirichlet distribution is interpreted as a [[discrete probability distribution]], then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of the Dirichlet distribution is likely to be: with a value much less than 1, the mass will be highly concentrated in a few components and all the rest will have almost no mass, while with a value much greater than 1, the mass will be dispersed almost equally among all the components.
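Below is example Python code illustrating this effect for a symmetric Dirichlet distribution (a minimal sketch assuming [[NumPy]] is available; the dimension and parameter values are arbitrary examples):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
K = 5
for alpha in (0.1, 1.0, 10.0):
    sample = rng.dirichlet([alpha] * K)
    print(alpha, np.round(sample, 3))   # small alpha: sparse; large alpha: nearly uniform
</syntaxhighlight>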
See the article on the [[concentration parameter]] for further discussion. ====String cutting==== One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) into {{mvar|K}} pieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall that <math>\alpha_0 = \sum_{i=1}^K \alpha_i.</math> The <math>\alpha_i/\alpha_0</math> values specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely with <math>\alpha_0</math>. [[Image:Dirichlet example.png|center|Example of Dirichlet(1/2,1/3,1/6) distribution]] ====[[Pólya urn model|Pólya's urn]]==== Consider an urn containing balls of {{mvar|K}} different colors. Initially, the urn contains {{math|''α''{{sub|1}}}} balls of color 1, {{math|''α''{{sub|2}}}} balls of color 2, and so on. Now perform {{mvar|N}} draws from the urn, where after each draw, the ball is placed back into the urn with an additional ball of the same color. In the limit as {{mvar|N}} approaches infinity, the proportions of different colored balls in the urn will be distributed as {{math|Dir(''α''{{sub|1}}, ..., ''α{{sub|K}}'')}}.<ref>{{cite journal | journal=Ann. Stat. | volume=1 | issue=2 | pages=353–355 | year=1973 | author=Blackwell, David | title=Ferguson distributions via Polya urn schemes | doi = 10.1214/aos/1176342372 | last2=MacQueen | first2=James B. | doi-access=free }}</ref> For a formal proof, note that the proportions of the different colored balls form a bounded {{math|[0,1]{{isup|''K''}}}}-valued [[martingale (probability theory)|martingale]], hence by the [[martingale convergence theorem]], these proportions converge [[almost sure convergence|almost surely]] and [[convergence in mean|in mean]] to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed [[moment (mathematics)|moments]] agree. Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls. ==Random variate generation== {{further|Non-uniform random variate generation}} ===From gamma distribution=== With a source of Gamma-distributed random variates, one can easily sample a random vector <math>x=(x_1, \ldots, x_K)</math> from the {{mvar|K}}-dimensional Dirichlet distribution with parameters <math>(\alpha_1, \ldots, \alpha_K)</math> . 
First, draw {{mvar|K}} independent random samples <math>y_1, \ldots, y_K</math> from [[Gamma distribution]]s each with density <math display=block>\operatorname{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i-1} \; e^{-y_i}}{\Gamma (\alpha_i)}, \!</math> and then set <math display=block>x_i = \frac{y_i}{\sum_{j=1}^K y_j}.</math> {{hidden begin|style=width:60%|ta1=center|border=1px #aaa solid|title=[Proof]}} The joint distribution of the independently sampled gamma variates, <math>\{y_{i}\}</math>, is given by the product: <math display=block>e^{-\sum_{i}y_{i}} \prod _{i=1}^{K} \frac{y_{i}^{\alpha _{i}-1}}{\Gamma (\alpha _{i})} </math> Next, one uses a change of variables, parametrising <math> \{y_{i}\}</math> in terms of <math> y_{1}, y_{2}, \ldots , y_{K-1} </math> and <math> \sum _{i=1}^{K}y_{i}</math> , and performs a change of variables from <math> y \to x </math> such that <math>\bar x = \textstyle\sum_{i=1}^{K}y_{i}, x_{1} = \frac{y_{1}}{\bar x}, x_{2} = \frac{y_{2}}{\bar x}, \ldots , x_{K-1} = \frac{y_{K-1}}{\bar x}</math>. Each of the variables <math>0 \leq x_{1}, x_{2}, \ldots , x_{k-1} \leq 1 </math> and likewise <math>0 \leq \textstyle\sum _{i=1}^{K-1}x_{i} \leq 1 </math>. One must then use the change of variables formula, <math> P(x) = P(y(x))\bigg|\frac{\partial y}{\partial x}\bigg| </math> in which <math>\bigg|\frac{\partial y}{\partial x}\bigg|</math> is the transformation Jacobian. Writing y explicitly as a function of x, one obtains <math>y_{1} = \bar xx_{1}, y_{2} = \bar xx_{2} \ldots y_{K-1} = \bar xx_{K-1}, y_{K} = \bar x(1-\textstyle\sum_{i=1}^{K-1}x_{i}) </math> The Jacobian now looks like <math display=block>\begin{vmatrix}\bar x & 0 & \ldots & x_{1} \\ 0 & \bar x & \ldots & x_{2} \\ \vdots & \vdots & \ddots & \vdots \\ -\bar x & -\bar x & \ldots & 1-\sum_{i=1}^{K-1}x_{i} \end{vmatrix}</math> The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row, and adding each of the first K-1 rows to the bottom row to obtain <math display=block>\begin{vmatrix}\bar x & 0 & \ldots & x_{1} \\ 0 & \bar x & \ldots & x_{2} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{vmatrix} </math> which can be expanded about the bottom row to obtain the determinant value <math>\bar x^{K-1}</math>. Substituting for x in the joint pdf and including the Jacobian determinant, one obtains: <math display=block> \begin{align} &\frac{\left[\prod _{i=1}^{K-1}(\bar xx_{i})^{\alpha _{i}-1} \right] \left[\bar x(1-\sum_{i=1}^{K-1}x_{i})\right]^{\alpha_{K}-1}}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}\bar x^{K-1}e^{-\bar x} \\ =&\frac{\Gamma(\bar\alpha)\left[\prod _{i=1}^{K-1}(x_{i})^{\alpha _{i}-1} \right] \left[1-\sum_{i=1}^{K-1}x_{i}\right]^{\alpha_{K}-1}}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}\times\frac{\bar x^{\bar\alpha-1}e^{-\bar x}}{\Gamma(\bar\alpha)} \end{align} </math> where <math>\bar\alpha=\textstyle\sum_{i=1}^K\alpha_i</math>. The right-hand side can be recognized as the product of a Dirichlet pdf for the <math>x_i</math> and a gamma pdf for <math>\bar x</math>. 
The product form shows the Dirichlet and gamma variables are independent, so the latter can be integrated out by simply omitting it, to obtain: <math display=block>x_{1}, x_{2}, \ldots, x_{K-1} \sim \frac{(1-\sum_{i=1}^{K-1}x_{i})^{\alpha _{K}-1}\prod _{i=1}^{K-1}x_{i}^{\alpha _{i} -1}}{B(\boldsymbol{\alpha})} </math> which is equivalent to <math display=block>\frac{\prod _{i=1}^{K} x_{i}^{\alpha_{i}-1}}{B(\boldsymbol{\alpha})} </math> with support <math> \sum_{i=1}^{K}x_{i}=1 </math>. {{hidden end}} Below is example Python code to draw the sample: <syntaxhighlight lang="python">
import random

params = [a1, a2, ..., ak]  # the concentration parameters alpha_1, ..., alpha_K
sample = [random.gammavariate(a, 1) for a in params]
sample = [v / sum(sample) for v in sample]
</syntaxhighlight> This formulation is correct regardless of how the Gamma distributions are parameterized (shape/scale vs. shape/rate) because they are equivalent when scale and rate equal 1.0. ===From marginal beta distributions=== A less efficient algorithm<ref>{{cite book |author1=A. Gelman |author2=J. B. Carlin |author3=H. S. Stern |author4=D. B. Rubin | year=2003 | title= Bayesian Data Analysis |url=https://archive.org/details/bayesiandataanal00gelm |url-access=limited | edition=2nd | isbn=1-58488-388-X | pages=[https://archive.org/details/bayesiandataanal00gelm/page/n607 582]|publisher=Chapman & Hall/CRC }}</ref> relies on the univariate marginal and conditional distributions being beta and proceeds as follows. Simulate <math>x_1</math> from <math display=block>\textrm{Beta}\left(\alpha_1, \sum_{i=2}^K \alpha_i \right)</math> Then simulate <math>x_2, \ldots, x_{K-1}</math> in order, as follows. For <math>j=2, \ldots, K-1</math>, simulate <math>\phi_j</math> from <math display=block>\textrm{Beta} \left(\alpha_j, \sum_{i=j+1}^K \alpha_i \right ),</math> and let <math display=block>x_j= \left(1-\sum_{i=1}^{j-1} x_i \right )\phi_j.</math> Finally, set <math display=block>x_K=1-\sum_{i=1}^{K-1} x_i.</math> This iterative procedure corresponds closely to the "string cutting" intuition described above. Below is example Python code to draw the sample: <syntaxhighlight lang="python">
import random

params = [a1, a2, ..., ak]  # the concentration parameters alpha_1, ..., alpha_K
xs = [random.betavariate(params[0], sum(params[1:]))]
for j in range(1, len(params) - 1):
    phi = random.betavariate(params[j], sum(params[j + 1 :]))
    xs.append((1 - sum(xs)) * phi)
xs.append(1 - sum(xs))
</syntaxhighlight> ===When each alpha is 1=== When {{math|1=''α''{{sub|1}} = ... = ''α''{{sub|''K''}} = 1}}, a sample from the distribution can be found by randomly drawing a set of {{math|''K'' − 1}} values independently and uniformly from the interval {{math|[0, 1]}}, adding the values {{math|0}} and {{math|1}} to the set to make it have {{math|''K'' + 1}} values, sorting the set, and computing the difference between each pair of order-adjacent values, to give {{math|''x''{{sub|1}}}}, ..., {{math|''x''{{sub|''K''}}}}. ===When each alpha is 1/2 and relationship to the hypersphere=== When {{math|1=''α''{{sub|1}} = ... = ''α''{{sub|''K''}} = 1/2}}, a sample from the distribution can be found by randomly drawing {{mvar|K}} values independently from the standard normal distribution, squaring these values, and normalizing them by dividing by their sum, to give {{math|''x''{{sub|1}}}}, ..., {{math|''x''{{sub|''K''}}}}. A point {{math|(''x''{{sub|1}}}}, ..., {{math|''x''{{sub|''K''}})}} can be drawn uniformly at random from the ({{math|''K''−1}})-dimensional unit hypersphere (which is the surface of a {{mvar|K}}-dimensional [[Ball (mathematics)|hyperball]]) via a similar procedure.
Randomly draw {{mvar|K}} values independently from the standard normal distribution and normalize these coordinate values by dividing each by the constant that is the square root of the sum of their squares. ==See also== * [[Generalized Dirichlet distribution]] * [[Grouped Dirichlet distribution]] * [[Inverted Dirichlet distribution]] * [[Latent Dirichlet allocation]] * [[Dirichlet process]] * [[Matrix variate Dirichlet distribution]] ==References== {{reflist}} ==External links== * {{springer|title=Dirichlet distribution|id=p/d032840}} * [http://users.ics.aalto.fi/ahonkela/dippa/node95.html Dirichlet Distribution] *[http://mayagupta.org/publications/EMbookGuptaChen2010.pdf How to estimate the parameters of the compound Dirichlet distribution (Pólya distribution) using expectation-maximization (EM)] * {{cite web|url=http://luc.devroye.org/rnbookindex.html |title=Non-Uniform Random Variate Generation|author= Luc Devroye|access-date=19 October 2019}} * [http://www.cs.princeton.edu/courses/archive/fall07/cos597C/scribe/20071130.pdf Dirichlet Random Measures, Method of Construction via Compound Poisson Random Variables, and Exchangeability Properties of the resulting Gamma Distribution] * [https://cran.r-project.org/web/packages/SciencesPo/index.html SciencesPo]: R package that contains functions for simulating parameters of the Dirichlet distribution. {{ProbDistributions|multivariate}} {{Peter Gustav Lejeune Dirichlet}} [[Category:Multivariate continuous distributions]] [[Category:Conjugate prior distributions]] [[Category:Exponential family distributions]] [[Category:Continuous distributions]]