Geometric distribution


Template:Short description Template:Distinguish
{{Infobox probability distribution
| parameters  = <math>0 < p \leq 1</math> success probability (real)
| support     = k trials where <math>k \in \mathbb{N} = \{1, 2, 3, \dotsc\}</math>
| pdf         = <math>(1 - p)^{k-1} p</math>
| cdf         = <math>1-(1 - p)^{\lfloor x\rfloor}</math> for <math>x\geq 1</math>, <math>0</math> for <math>x<1</math>
| mean        = <math>\frac{1}{p}</math>
| median      = <math>\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil</math> (not unique if <math>-1/\log_2(1-p)</math> is an integer)
| mode        = <math>1</math>
| variance    = <math>\frac{1-p}{p^2}</math>
| skewness    = <math>\frac{2-p}{\sqrt{1-p}}</math>
| kurtosis    = <math>6+\frac{p^2}{1-p}</math>
| entropy     = <math>\tfrac{-(1-p)\log (1-p) - p \log p}{p}</math>
| fisher      = <math>\tfrac{1}{p^2 \cdot(1-p)}</math>
| mgf         = <math>\frac{pe^t}{1-(1-p) e^t},</math> for <math>t<-\ln(1-p)</math>
| char        = <math>\frac{pe^{it}}{1-(1-p)e^{it}}</math>
| pgf         = <math>\frac{pz}{1-(1-p)z}</math>
| parameters2 = <math>0 < p \leq 1</math> success probability (real)
| support2    = k failures where <math>k \in \mathbb{N}_0 = \{0, 1, 2, \dotsc\}</math>
| pdf2        = <math>(1 - p)^k p</math>
| cdf2        = <math>1-(1 - p)^{\lfloor x\rfloor+1}</math> for <math>x\geq 0</math>, <math>0</math> for <math>x<0</math>
| mean2       = <math>\frac{1-p}{p}</math>
| median2     = <math>\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil - 1</math> (not unique if <math>-1/\log_2(1-p)</math> is an integer)
| mode2       = <math>0</math>
| variance2   = <math>\frac{1-p}{p^2}</math>
| skewness2   = <math>\frac{2-p}{\sqrt{1-p}}</math>
| kurtosis2   = <math>6+\frac{p^2}{1-p}</math>
| entropy2    = <math>\tfrac{-(1-p)\log (1-p) - p \log p}{p}</math>
| fisher2     = <math>\tfrac{1}{p^2 \cdot(1-p)}</math>
| mgf2        = <math>\frac{p}{1-(1-p)e^t},</math> for <math>t<-\ln(1-p)</math>
| char2       = <math>\frac{p}{1-(1-p)e^{it}}</math>
| pgf2        = <math>\frac{p}{1-(1-p)z}</math>
}}
In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:

  • The probability distribution of the number <math>X</math> of Bernoulli trials needed to get one success, supported on <math>\mathbb{N} = \{1,2,3,\ldots\}</math>;
  • The probability distribution of the number <math>Y=X-1</math> of failures before the first success, supported on <math>\mathbb{N}_0 = \{0, 1, 2, \ldots \} </math>.

These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former (the distribution of <math>X</math>); however, to avoid ambiguity, it is wise to state which is intended by mentioning the support explicitly.

The geometric distribution gives the probability that the first occurrence of success requires <math>k</math> independent trials, each with success probability <math>p</math>. If the probability of success on each trial is <math>p</math>, then the probability that the <math>k</math>-th trial is the first success is

<math>\Pr(X = k) = (1-p)^{k-1}p</math>

for <math>k=1,2,3,4,\dots</math>

The above form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form of the geometric distribution is used for modeling the number of failures until the first success:

<math>\Pr(Y=k) =\Pr(X=k+1)= (1 - p)^k p</math>

for <math>k=0,1,2,3,\dots</math>

The geometric distribution gets its name because its probabilities follow a geometric sequence. It is sometimes called the Furry distribution after Wendell H. Furry.<ref name=":8" />Template:Rp

Definition

The geometric distribution is the discrete probability distribution that describes when the first success in an infinite sequence of independent and identically distributed Bernoulli trials occurs. Its probability mass function depends on its parameterization and support. When supported on <math>\mathbb{N}</math>, the probability mass function is <math display="block">P(X = k) = (1 - p)^{k-1} p</math> where <math>k = 1, 2, 3, \dotsc</math> is the number of trials and <math>p</math> is the probability of success in each trial.<ref name=":1">Template:Cite book</ref>Template:Rp

The support may also be <math>\mathbb{N}_0</math>, defining <math>Y=X-1</math>. This alters the probability mass function into <math display="block">P(Y = k) = (1 - p)^k p</math> where <math>k = 0, 1, 2, \dotsc</math> is the number of failures before the first success.<ref name=":2">Template:Cite book</ref>Template:Rp

An alternative parameterization of the distribution gives the probability mass function <math display="block">P(Y = k) = \left(\frac{P}{Q}\right)^k \left(1-\frac{P}{Q}\right)</math> where <math>P = \frac{1-p}{p}</math> and <math>Q = \frac{1}{p}</math>.<ref name=":8" />Template:Rp

An example of a geometric distribution arises from rolling a six-sided die until a "1" appears. Each roll is independent with a <math>1/6</math> chance of success. The number of rolls needed follows a geometric distribution with <math>p=1/6</math>.
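
As a rough numerical check, the following Python sketch simulates this experiment; the helper name <code>trials_until_first_success</code> and the sample size are illustrative choices.

<syntaxhighlight lang="python">
import random

def trials_until_first_success(p):
    """Count Bernoulli(p) trials up to and including the first success."""
    k = 1
    while random.random() >= p:  # a draw below p counts as a success
        k += 1
    return k

p = 1 / 6  # probability of rolling a "1" with a fair six-sided die
samples = [trials_until_first_success(p) for _ in range(100_000)]

print(sum(samples) / len(samples))      # close to the theoretical mean 1/p = 6
print(samples.count(3) / len(samples))  # close to Pr(X = 3) = (1 - p)**2 * p ≈ 0.116
</syntaxhighlight>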

Properties

Memorylessness

Template:Main article The geometric distribution is the only memoryless discrete probability distribution.<ref>Template:Cite book</ref> It is the discrete version of the same property found in the exponential distribution.<ref name=":8">Template:Cite book</ref>Template:Rp The property asserts that the number of previously failed trials does not affect the number of future trials needed for a success.

Because there are two definitions of the geometric distribution, there are also two definitions of memorylessness for discrete random variables.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Expressed in terms of conditional probability, the two definitions are<math display="block">\Pr(X>m+n\mid X>n)=\Pr(X>m),</math>

and<math display="block">\Pr(Y>m+n\mid Y\geq n)=\Pr(Y>m),</math>

where <math>m</math> and <math>n</math> are natural numbers, <math>X</math> is a geometrically distributed random variable defined over <math>\mathbb{N}</math>, and <math>Y</math> is a geometrically distributed random variable defined over <math>\mathbb{N}_0</math>. Note that these definitions are not equivalent for discrete random variables; <math>Y</math> does not satisfy the first equation and <math>X</math> does not satisfy the second.
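
Since <math>\Pr(X>k)=(1-p)^k</math> for the variant supported on <math>\mathbb{N}</math>, the first identity can be checked directly; in the sketch below the values of <math>p</math>, <math>m</math> and <math>n</math> are arbitrary.

<syntaxhighlight lang="python">
# Verify Pr(X > m + n | X > n) = Pr(X > m) using the tail formula Pr(X > k) = (1 - p)**k.
p, m, n = 0.3, 4, 7

def tail(k):
    return (1 - p) ** k  # Pr(X > k) for the distribution supported on {1, 2, 3, ...}

lhs = tail(m + n) / tail(n)  # Pr(X > m + n | X > n)
rhs = tail(m)                # Pr(X > m)
print(lhs, rhs)              # both equal 0.7**4 = 0.2401
</syntaxhighlight>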

Moments and cumulants

The expected value and variance of a geometrically distributed random variable <math>X</math> defined over <math>\mathbb{N}</math> are<ref name=":1" />Template:Rp<math display="block">\operatorname{E}(X) = \frac{1}{p}, \qquad\operatorname{var}(X) = \frac{1-p}{p^2}.</math> With a geometrically distributed random variable <math>Y</math> defined over <math>\mathbb{N}_0</math>, the expected value changes into<math display="block">\operatorname{E}(Y) = \frac{1-p} p,</math>while the variance stays the same.<ref name=":0">Template:Cite book</ref>Template:Rp

For example, when rolling a six-sided die until landing on a "1", the average number of rolls needed is <math>\frac{1}{1/6} = 6</math> and the average number of failures is <math>\frac{1 - 1/6}{1/6} = 5</math>.

The moment generating function of the geometric distribution when defined over <math> \mathbb{N} </math> and <math>\mathbb{N}_0</math> respectively is<ref>Template:Cite book</ref><ref name=":0" />Template:Rp<math display="block">\begin{align} M_X(t) &= \frac{pe^t}{1-(1-p)e^t} \\ M_Y(t) &= \frac{p}{1-(1-p)e^t}, \quad t < -\ln(1-p). \end{align}</math>The moments for the number of failures before the first success are given by<math display="block">\begin{align} \mathrm{E}(Y^n) & {} =\sum_{k=0}^\infty (1-p)^k p\cdot k^n \\ & {} =p \operatorname{Li}_{-n}(1-p) & (\text{for }n \neq 0) \end{align}</math>where <math> \operatorname{Li}_{-n}(1-p) </math> is the polylogarithm function.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>

The cumulant generating function of the geometric distribution defined over <math>\mathbb{N}_0</math> is<ref name=":8" />Template:Rp <math display="block">K(t) = \ln p - \ln (1 - (1-p)e^t)</math>The cumulants <math>\kappa_r</math> satisfy the recursion<math display="block">\kappa_{r+1} = q \frac{d\kappa_r}{d q}, \quad r=1,2,\dotsc</math>where <math>q = 1-p</math>, when defined over <math>\mathbb{N}_0</math>.<ref name=":8" />Template:Rp
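
The recursion can be checked symbolically for the first cumulants. The following sketch uses the SymPy library, writes the cumulant generating function in terms of <math>q = 1-p</math>, and only verifies the case <math>r = 1</math>.

<syntaxhighlight lang="python">
import sympy as sp

q, t = sp.symbols('q t', positive=True)

# Cumulant generating function of the distribution on N_0, with q = 1 - p.
K = sp.log(1 - q) - sp.log(1 - q * sp.exp(t))

kappa1 = sp.diff(K, t, 1).subs(t, 0)  # first cumulant (mean):      q/(1-q)
kappa2 = sp.diff(K, t, 2).subs(t, 0)  # second cumulant (variance): q/(1-q)**2

# kappa_{r+1} = q * d(kappa_r)/dq for r = 1:
print(sp.simplify(kappa2 - q * sp.diff(kappa1, q)))  # 0
</syntaxhighlight>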

Proof of expected value

Consider the expected value <math>\mathrm{E}(X)</math> of <math>X</math> as above, i.e. the average number of trials until a success. The first trial either succeeds with probability <math>p</math>, or fails with probability <math>1-p</math>. If it fails, the mean number of remaining trials until a success is identical to the original mean; this follows from the fact that all trials are independent.

From this we get the formula:

<math>\operatorname{E}(X) = p + (1-p)(1 + \operatorname{E}(X)),</math>

which, when solved for <math> \mathrm{E}(X) </math>, gives:

<math>\operatorname E(X) = \frac{1}{p}.</math>

The expected number of failures <math>Y</math> can be found from the linearity of expectation, <math>\mathrm{E}(Y) = \mathrm{E}(X-1) = \mathrm{E}(X) - 1 = \frac 1 p - 1 = \frac{1-p}{p}</math>. It can also be shown in the following way:

<math display="block">\begin{align} \operatorname E(Y) & =p\sum_{k=0}^\infty(1-p)^k k \\ & = p (1-p) \sum_{k=0}^\infty (1-p)^{k-1} k\\ & = p (1-p) \left(-\sum_{k=0}^\infty \frac{d}{dp}\left[(1-p)^k\right]\right) \\ & = p (1-p) \left[\frac{d}{dp}\left(-\sum_{k=0}^\infty (1-p)^k\right)\right] \\ & = p(1-p)\frac{d}{dp}\left(-\frac{1}{p}\right) \\ & = \frac{1-p}{p}. \end{align}</math>

The interchange of summation and differentiation is justified by the fact that convergent power series converge uniformly on compact subsets of the set of points where they converge.

Summary statistics

The mean of the geometric distribution is its expected value which is, as previously discussed in § Moments and cumulants, <math>\frac{1}{p}</math> or <math>\frac{1-p}{p}</math> when defined over <math>\mathbb{N}</math> or <math>\mathbb{N}_0</math> respectively.

The median of the geometric distribution is <math>\left\lceil -\frac{\log 2}{\log(1-p)} \right\rceil</math> when defined over <math>\mathbb{N}</math><ref>Template:Cite book</ref> and <math>\left\lfloor-\frac{\log 2}{\log(1-p)}\right\rfloor</math> when defined over <math>\mathbb{N}_0</math>.<ref name=":2" />Template:Rp
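
For example, with <math>p = 1/6</math> (rolling a die until a "1" appears) the median number of trials can be computed as follows; the script only illustrates the formula above.

<syntaxhighlight lang="python">
import math

p = 1 / 6
median = math.ceil(-math.log(2) / math.log(1 - p))
print(median)                              # 4
print(1 - (1 - p) ** 3, 1 - (1 - p) ** 4)  # ≈ 0.42 < 0.5 <= 0.52, so 4 is the median
</syntaxhighlight>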

The mode of the geometric distribution is the first value in the support set. This is 1 when defined over <math>\mathbb{N}</math> and 0 when defined over <math>\mathbb{N}_0</math>.<ref name=":2" />Template:Rp

The skewness of the geometric distribution is <math>\frac{2-p}{\sqrt{1-p}}</math>.<ref name=":0" />Template:Rp

The kurtosis of the geometric distribution is <math>9 + \frac{p^2}{1-p}</math>.<ref name=":0" />Template:Rp The excess kurtosis of a distribution is the difference between its kurtosis and the kurtosis of a normal distribution, <math>3</math>.<ref name=":4">Template:Cite book</ref>Template:Rp Therefore, the excess kurtosis of the geometric distribution is <math>6 + \frac{p^2}{1-p}</math>. Since <math>\frac{p^2}{1-p} \geq 0</math>, the excess kurtosis is always positive, so the distribution is leptokurtic.<ref name=":2" />Template:Rp In other words, the tail of a geometric distribution decays more slowly than that of a Gaussian.<ref name=":4" />Template:Rp

Entropy and Fisher's Information

Entropy (Geometric Distribution, Failures Before Success)

Entropy is a measure of uncertainty in a probability distribution. For the geometric distribution that models the number of failures before the first success, the probability mass function is:

<math>P(X = k) = (1 - p)^k p, \quad k = 0, 1, 2, \dots</math>

The entropy <math>H(X)</math> for this distribution is defined as:

<math display="block">\begin{align}
H(X) &= - \sum_{k=0}^{\infty} P(X = k) \ln P(X = k) \\
&= - \sum_{k=0}^{\infty} (1 - p)^k p \ln \left( (1 - p)^k p \right) \\
&= - \sum_{k=0}^{\infty} (1 - p)^k p \left[ k \ln(1 - p) + \ln p \right] \\
&= - \ln p - \ln(1 - p) \sum_{k=0}^{\infty} k (1 - p)^k p \\
&= - \ln p - \frac{1 - p}{p} \ln(1 - p)
\end{align}</math>

The entropy increases as the probability <math>p</math> decreases, reflecting greater uncertainty as success becomes rarer.
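
The closed form can be compared with a truncated version of the defining sum; the value of <math>p</math> below is arbitrary and the entropy is measured in nats.

<syntaxhighlight lang="python">
import math

p = 0.25
closed_form = -math.log(p) - (1 - p) / p * math.log(1 - p)

# Truncating the infinite sum at k = 1000 is more than enough for this p.
direct = -sum((1 - p) ** k * p * math.log((1 - p) ** k * p) for k in range(1000))
print(closed_form, direct)  # the two values agree to many decimal places
</syntaxhighlight>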

Fisher's Information (Geometric Distribution, Failures Before Success)

Fisher information measures the amount of information that an observable random variable <math>X</math> carries about an unknown parameter <math>p</math>. For the geometric distribution (failures before the first success), the Fisher information with respect to <math>p</math> is given by:

<math>I(p) = \frac{1}{p^2(1 - p)}</math>

Proof:

  • The Likelihood Function for a geometric random variable <math>X</math> is:
<math>L(p; X) = (1 - p)^X p</math>
  • The Log-Likelihood Function is:
<math>\ln L(p; X) = X \ln(1 - p) + \ln p</math>
  • The Score Function (first derivative of the log-likelihood w.r.t. <math>p</math>) is:
<math>\frac{\partial}{\partial p} \ln L(p; X) = \frac{1}{p} - \frac{X}{1 - p}</math>
  • The second derivative of the log-likelihood function is:
<math>\frac{\partial^2}{\partial p^2} \ln L(p; X) = -\frac{1}{p^2} - \frac{X}{(1 - p)^2}</math>
  • Fisher Information is calculated as the negative expected value of the second derivative:
<math display="block">\begin{align}
I(p) &= -\mathrm{E}\left[\frac{\partial^2}{\partial p^2} \ln L(p; X)\right] \\
&= \frac{1}{p^2} + \frac{\mathrm{E}(X)}{(1 - p)^2} \\
&= \frac{1}{p^2} + \frac{1 - p}{p (1 - p)^2} \\
&= \frac{1}{p^2(1 - p)}
\end{align}</math>
using <math>\mathrm{E}(X) = \frac{1-p}{p}</math> for this parameterization.

Fisher information increases as <math>p</math> decreases, indicating that rarer successes provide more information about the parameter <math>p</math>.
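
The same calculation can be reproduced symbolically. The sketch below uses the SymPy library and substitutes <math>\mathrm{E}(X) = \tfrac{1-p}{p}</math> into the negated second derivative, mirroring the proof above.

<syntaxhighlight lang="python">
import sympy as sp

p, k = sp.symbols('p k', positive=True)

# Log-likelihood of a single observation k (failures before the first success).
loglik = k * sp.log(1 - p) + sp.log(p)

second = sp.diff(loglik, p, 2)
# Negative expectation: replace k by E(X) = (1 - p)/p, since the expression is linear in k.
fisher = -second.subs(k, (1 - p) / p)
print(sp.simplify(fisher))  # simplifies to 1/(p**2 * (1 - p))
</syntaxhighlight>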

Entropy (Geometric Distribution, Trials Until Success)

For the geometric distribution modeling the number of trials until the first success, the probability mass function is:

<math>P(X = k) = (1 - p)^{k - 1} p, \quad k = 1, 2, 3, \dots</math>

The entropy <math>H(X)</math> for this distribution is given by:

<math display="block">\begin{align}
H(X) &= - \sum_{k=1}^{\infty} P(X = k) \ln P(X = k) \\
&= - \sum_{k=1}^{\infty} (1 - p)^{k - 1} p \ln \left( (1 - p)^{k - 1} p \right) \\
&= - \sum_{k=1}^{\infty} (1 - p)^{k - 1} p \left[ (k - 1) \ln(1 - p) + \ln p \right] \\
&= - \ln p - \ln(1 - p) \sum_{k=1}^{\infty} (k - 1) (1 - p)^{k - 1} p \\
&= - \ln p - \frac{1 - p}{p} \ln(1 - p)
\end{align}</math>

Entropy increases as <math>p</math> decreases, reflecting greater uncertainty as the probability of success in each trial becomes smaller.

Fisher's Information (Geometric Distribution, Trials Until Success)

Fisher information for the geometric distribution modeling the number of trials until the first success is given by:

<math>I(p) = \frac{1}{p^2(1 - p)}</math>

Proof:

  • The Likelihood Function for a geometric random variable <math>X</math> is:
<math>L(p; X) = (1 - p)^{X - 1} p</math>
  • The Log-Likelihood Function is:
<math>\ln L(p; X) = (X - 1) \ln(1 - p) + \ln p</math>
  • The Score Function (first derivative of the log-likelihood w.r.t. <math>p</math>) is:
<math>\frac{\partial}{\partial p} \ln L(p; X) = \frac{1}{p} - \frac{X - 1}{1 - p}</math>
  • The second derivative of the log-likelihood function is:
<math>\frac{\partial^2}{\partial p^2} \ln L(p; X) = -\frac{1}{p^2} - \frac{X - 1}{(1 - p)^2}</math>
  • Fisher Information is calculated as the negative expected value of the second derivative:
<math display="block">\begin{align}
I(p) &= -\mathrm{E}\left[\frac{\partial^2}{\partial p^2} \ln L(p; X)\right] \\
&= \frac{1}{p^2} + \frac{\mathrm{E}(X - 1)}{(1 - p)^2} \\
&= \frac{1}{p^2} + \frac{1 - p}{p (1 - p)^2} \\
&= \frac{1}{p^2(1 - p)}
\end{align}</math>
using <math>\mathrm{E}(X - 1) = \frac{1-p}{p}</math> for this parameterization.

General properties

  • The probability generating functions of geometric random variables <math> X </math> and <math> Y </math> defined over <math> \mathbb{N} </math> and <math> \mathbb{N}_0 </math> are, respectively,<ref name=":0" />Template:Rp

<math display="block">\begin{align} G_X(s) & = \frac{s\,p}{1-s\,(1-p)}, \\[10pt] G_Y(s) & = \frac{p}{1-s\,(1-p)}, \quad |s| < (1-p)^{-1}. \end{align}</math>

  • The characteristic function <math>\varphi(t)</math> is equal to <math>G(e^{it})</math> so the geometric distribution's characteristic function, when defined over <math> \mathbb{N} </math> and <math> \mathbb{N}_0 </math> respectively, is<ref name=":9">Template:Cite book</ref>Template:Rp<math display="block">\begin{align} \varphi_X(t) &= \frac{pe^{it}}{1-(1-p)e^{it}},\\[10pt] \varphi_Y(t) &= \frac{p}{1-(1-p)e^{it}}. \end{align}</math>

  • The geometric distribution defined over <math> \mathbb{N}_0 </math> is infinitely divisible, that is, for any positive integer <math>n</math>, there exist <math>n</math> independent identically distributed random variables whose sum is also geometrically distributed. This is because the negative binomial distribution can be derived from a Poisson-stopped sum of logarithmic random variables.<ref name=":9" />Template:Rp

  • The decimal digits of the geometrically distributed random variable Y are a sequence of independent (and not identically distributed) random variables.Template:Citation needed For example, the hundreds digit D has this probability distribution:
<math>\Pr(D=d) = {q^{100d} \over 1 + q^{100} + q^{200} + \cdots + q^{900}},</math>
where <math>q = 1 - p</math>; the same holds for the other digits and, more generally, for numeral systems with bases other than 10. When the base is 2, this shows that a geometrically distributed random variable can be written as a sum of independent random variables whose probability distributions are indecomposable.

Related distributions

  • The sum of <math>r</math> independent geometric random variables with parameter <math>p</math> is a negative binomial random variable with parameters <math>r</math> and <math>p</math>.<ref>Template:Cite book</ref> The geometric distribution is a special case of the negative binomial distribution, with <math>r=1</math>.
  • The geometric distribution is a special case of discrete compound Poisson distribution.<ref name=":9" />Template:Rp
  • The minimum of <math>n</math> geometric random variables with parameters <math>p_1, \dotsc, p_n</math> is also geometrically distributed with parameter <math>1 - \prod_{i=1}^n (1-p_i)</math>.<ref>Template:Cite journal</ref>
  • Suppose 0 < r < 1, and for k = 1, 2, 3, ... the random variable Xk has a Poisson distribution with expected value rk/k. Then
<math>\sum_{k=1}^\infty k\,X_k</math>
has a geometric distribution taking values in <math>\mathbb{N}_0</math>, with expected value r/(1 − r).Template:Citation needed
  • The exponential distribution is the continuous analogue of the geometric distribution. Applying the floor function to the exponential distribution with parameter <math>\lambda</math> creates a geometric distribution with parameter <math>p=1-e^{-\lambda}</math> defined over <math>\mathbb{N}_0</math>.<ref name=":2" />Template:Rp This can be used to generate geometrically distributed random numbers as detailed in § Random variate generation.
  • If p = 1/n and X is geometrically distributed with parameter p, then the distribution of X/n approaches an exponential distribution with expected value 1 as n → ∞, since<math display="block">\begin{align} \Pr(X/n>a)=\Pr(X>na) & = (1-p)^{na} = \left(1-\frac 1 n \right)^{na} = \left[ \left( 1-\frac 1 n \right)^n \right]^{a} \\ & \to [e^{-1}]^{a} = e^{-a} \text{ as } n\to\infty. \end{align}</math>More generally, if p = λ/n, where λ is a parameter, then as n → ∞ the distribution of X/n approaches an exponential distribution with rate λ, since<math display="block">\lim_{n \to \infty}\Pr(X/n>x)=\lim_{n \to \infty}(1-\lambda /n)^{nx}=e^{-\lambda x},</math>therefore the distribution function of X/n converges to <math>1-e^{-\lambda x}</math>, which is that of an exponential random variable.Template:Cn

Statistical inference

The true parameter <math>p</math> of an unknown geometric distribution can be inferred through estimators and conjugate distributions.

Method of moments

Provided they exist, the first <math>l</math> moments of a probability distribution can be estimated from a sample <math>x_1, \dotsc, x_n</math> using the formula<math display="block">m_i = \frac{1}{n} \sum_{j=1}^n x^i_j</math>where <math>m_i</math> is the <math>i</math>th sample moment and <math>1 \leq i \leq l</math>.<ref name=":5">Template:Cite book</ref>Template:Rp Estimating <math>\mathrm{E}(X)</math> with <math>m_1</math> gives the sample mean, denoted <math> \bar{x} </math>. Substituting this estimate in the formula for the expected value of a geometric distribution and solving for <math> p </math> gives the estimators <math> \hat{p} = \frac{1}{\bar{x}} </math> and <math> \hat{p} = \frac{1}{\bar{x}+1} </math> when supported on <math>\mathbb{N}</math> and <math>\mathbb{N}_0</math> respectively. These estimators are biased since <math>\mathrm{E}\left(\frac{1}{\bar{x}}\right) > \frac{1}{\mathrm{E}(\bar{x})} = p</math> as a result of Jensen's inequality.<ref name=":3">Template:Cite book</ref>Template:Rp
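
As an illustration of the <math>\mathbb{N}_0</math> case, the following sketch applies the estimator to a small, purely hypothetical sample of failure counts.

<syntaxhighlight lang="python">
sample = [0, 2, 1, 4, 0, 1, 3, 0, 2, 1]  # hypothetical numbers of failures before success

x_bar = sum(sample) / len(sample)  # sample mean, here 1.4
p_hat = 1 / (x_bar + 1)            # method-of-moments estimate for the N_0 parameterization
print(p_hat)                       # ≈ 0.417
</syntaxhighlight>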

Maximum likelihood estimation

The maximum likelihood estimator of <math>p</math> is the value that maximizes the likelihood function given a sample.<ref name=":5" />Template:Rp By finding the zero of the derivative of the log-likelihood function when the distribution is defined over <math>\mathbb{N}</math>, the maximum likelihood estimator can be found to be <math>\hat{p} = \frac{1}{\bar{x}}</math>, where <math>\bar{x}</math> is the sample mean.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> If the domain is <math>\mathbb{N}_0</math>, then the estimator shifts to <math>\hat{p} = \frac{1}{\bar{x}+1}</math>. As previously discussed in § Method of moments, these estimators are biased.

Regardless of the domain, the bias is equal to

<math>b \equiv \operatorname{E}\left[\hat p_\mathrm{mle} - p\right] = \frac{p\,(1-p)}{n},</math>

which yields the bias-corrected maximum likelihood estimator,Template:Cn

<math>\hat{p\,}^*_\text{mle} = \hat{p\,}_\text{mle} - \hat{b\,}.</math>
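
A minimal sketch of this correction, plugging the plain estimate into the bias formula above (the sample of trial counts is hypothetical):

<syntaxhighlight lang="python">
sample = [3, 1, 7, 2, 4, 1, 5, 2, 3, 6]  # hypothetical numbers of trials until success

n = len(sample)
p_mle = n / sum(sample)         # maximum likelihood estimate, 1 / (sample mean)
bias = p_mle * (1 - p_mle) / n  # plug-in estimate of the bias p(1-p)/n
p_corrected = p_mle - bias
print(p_mle, p_corrected)       # ≈ 0.294 and ≈ 0.273
</syntaxhighlight>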

Bayesian inference

In Bayesian inference, the parameter <math>p</math> is a random variable from a prior distribution with a posterior distribution calculated using Bayes' theorem after observing samples.<ref name=":3" />Template:Rp If a beta distribution is chosen as the prior distribution, then the posterior will also be a beta distribution and it is called the conjugate distribution. In particular, if a <math>\mathrm{Beta}(\alpha,\beta)</math> prior is selected, then the posterior, after observing samples <math>k_1, \dotsc, k_n \in \mathbb{N}</math>, is<ref>Template:Cite CiteSeerX</ref><math display="block">p \sim \mathrm{Beta}\left(\alpha+n,\ \beta+\sum_{i=1}^n (k_i-1)\right). \!</math>Alternatively, if the samples are in <math>\mathbb{N}_0</math>, the posterior distribution is<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><math display="block">p \sim \mathrm{Beta}\left(\alpha+n,\beta+\sum_{i=1}^n k_i\right).</math>Since the expected value of a <math>\mathrm{Beta}(\alpha,\beta)</math> distribution is <math>\frac{\alpha}{\alpha+\beta}</math>,<ref name=":9" />Template:Rp as <math>\alpha</math> and <math>\beta</math> approach zero, the posterior mean approaches its maximum likelihood estimate.
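
A short sketch of the conjugate update for samples supported on <math>\mathbb{N}</math>; the prior parameters and the observations below are illustrative.

<syntaxhighlight lang="python">
alpha, beta = 1.0, 1.0    # Beta(1, 1) prior, i.e. uniform on p
sample = [2, 5, 1, 3, 4]  # hypothetical numbers of trials until success

alpha_post = alpha + len(sample)
beta_post = beta + sum(k - 1 for k in sample)
posterior_mean = alpha_post / (alpha_post + beta_post)
print(alpha_post, beta_post, posterior_mean)  # 6.0, 11.0, ≈ 0.353
</syntaxhighlight>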

Random variate generation

Template:Further The geometric distribution can be generated experimentally from i.i.d. standard uniform random variables by finding the first such random variable to be less than or equal to <math>p</math>. However, the number of random variables needed is also geometrically distributed and the algorithm slows as <math>p</math> decreases.<ref name=":6">Template:Cite book</ref>Template:Rp

Random generation can be done in constant time by truncating exponential random numbers. An exponential random variable <math>E</math> can become geometrically distributed with parameter <math>p</math> through <math>\lceil -E/\log(1-p) \rceil</math>. In turn, <math>E</math> can be generated from a standard uniform random variable <math>U</math>, which turns the formula into <math>\lceil \log(U) / \log(1-p)\rceil</math>.<ref name=":6" />Template:Rp<ref>Template:Cite book</ref>
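
A minimal sketch of this inversion method, producing variates supported on <math>\mathbb{N}</math> and using only the Python standard library:

<syntaxhighlight lang="python">
import math
import random

def geometric_variate(p):
    """Draw from the geometric distribution on {1, 2, 3, ...} via an inverted uniform variate."""
    u = random.random()  # uniform on [0, 1); the chance of exactly 0 is negligible
    return math.ceil(math.log(u) / math.log(1 - p))

samples = [geometric_variate(0.2) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 1/p = 5
</syntaxhighlight>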

Applications

The geometric distribution is used in many disciplines. In queueing theory, the M/M/1 queue has a steady state following a geometric distribution.<ref>Template:Cite book</ref> In stochastic processes, the Yule–Furry process is geometrically distributed.<ref>Template:Cite book</ref> The distribution also arises when modeling the lifetime of a device in discrete contexts.<ref>Template:Citation</ref> It has also been used to fit data, including in modeling the spread of COVID-19 among patients.<ref>Template:Cite journal</ref>

See also

References

Template:Reflist

Template:ProbDistributions