Negative binomial distribution
==Properties==

===Expectation===
The expected total number of trials needed to see {{mvar|r}} successes is <math>\frac{r}{p}</math>. Thus, the expected number of ''failures'' is this value minus the number of successes:

:<math> E[\operatorname{NB}(r, p)] = \frac{r}{p} - r = \frac{r(1-p)}{p} </math>

===Expectation of failures===
The expected total number of failures in a negative binomial distribution with parameters {{math|(''r'', ''p'')}} is {{math|''r''(1 − ''p'')/''p''}}. To see this, imagine an experiment simulating the negative binomial is performed many times. That is, a set of trials is performed until {{mvar|r}} successes are obtained, then another set of trials, and so on. Write down the number of trials performed in each experiment: {{math|''a'', ''b'', ''c'', ...}} and set {{math|''a'' + ''b'' + ''c'' + ... {{=}} ''N''}}. Now we would expect about {{math|''Np''}} successes in total. Say the experiment was performed {{mvar|n}} times. Then there are {{math|''nr''}} successes in total, so we would expect {{math|''nr'' {{=}} ''Np''}}, giving {{math|''N''/''n'' {{=}} ''r''/''p''}}. Note that {{math|''N''/''n''}} is just the average number of trials per experiment, which is what is meant by "expectation" here. The average number of failures per experiment is {{math|1=''N''/''n'' − ''r'' = ''r''/''p'' − ''r'' = ''r''(1 − ''p'')/''p''}}. This agrees with the mean given in the box on the right-hand side of this page.

A rigorous derivation represents the negative binomial distribution as a sum of waiting times. Let <math>X_r \sim\operatorname{NB}(r, p)</math>, with the convention that <math>X_r</math> is the number of failures observed before the <math>r</math>th success, the probability of success being <math>p</math>, and let <math>Y_i \sim \operatorname{Geom}(p)</math>, where <math>Y_i</math> is the number of failures before seeing a success.
We can think of <math>Y_i</math> as the waiting time (number of failures) between the <math>(i-1)</math>th and <math>i</math>th successes. Thus

:<math> X_r = Y_1 + Y_2 + \cdots + Y_r. </math>

The mean is

:<math> E[X_r] = E[Y_1] + E[Y_2] + \cdots + E[Y_r] = \frac{r(1-p)}{p}, </math>

which follows from the fact that <math>E[Y_i] = (1-p)/p</math>.

===Variance===
When counting the number of failures before the {{mvar|r}}-th success, the variance is {{math|''r''(1 − ''p'')/''p''{{sup|2}}}}. When counting the number of successes before the {{mvar|r}}-th failure, as in alternative formulation (3) above, the variance is {{math|''rp''/(1 − ''p''){{sup|2}}}}.

===Relation to the binomial theorem===
Suppose {{mvar|Y}} is a random variable with a [[binomial distribution]] with parameters {{mvar|n}} and {{mvar|p}}. Assume {{math|1=''p'' + ''q'' = 1}}, with {{math|''p'', ''q'' ≥ 0}}. Then

:<math>1=1^n=(p+q)^n.</math>

Using [[Newton's binomial theorem]], this can equally be written as

:<math>(p+q)^n=\sum_{k=0}^\infty \binom{n}{k} p^k q^{n-k},</math>

in which the upper bound of summation is infinite. In this case, the [[binomial coefficient]]

:<math>\binom{n}{k} = {n(n-1)(n-2)\cdots(n-k+1) \over k! }</math>

is defined when {{mvar|n}} is a real number, instead of just a positive integer. In the case of the binomial distribution, it is zero when {{math|''k'' > ''n''}}. We can then say, for example,

:<math>(p+q)^{8.3}=\sum_{k=0}^\infty \binom{8.3}{k} p^k q^{8.3 - k}.</math>

Now suppose {{math|''r'' > 0}} and we use a negative exponent:

:<math>1=p^r\cdot p^{-r}=p^r (1-q)^{-r}=p^r \sum_{k=0}^\infty \binom{-r}{k} (-q)^k.</math>

Then all of the terms are positive, and the term

:<math>p^r \binom{-r}{k} (-q)^k = \binom{k + r - 1}{k} p^r q^k</math>

is just the probability that the number of failures before the {{mvar|r}}-th success is equal to {{mvar|k}}, provided {{mvar|r}} is an integer.
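For integer {{mvar|r}}, this term-by-term identity, together with the facts that the series sums to 1 and has mean {{math|''r''(1 − ''p'')/''p''}}, can be checked numerically. The following Python sketch (the parameter values {{math|''r'' {{=}} 4}}, {{math|''p'' {{=}} 0.35}} and the truncation point are illustrative choices, not part of the article) compares the generalized-binomial-series terms against the negative binomial pmf:

```python
from math import comb, factorial

def gen_binom(a, k):
    """Generalized binomial coefficient a(a-1)...(a-k+1)/k! for real a."""
    num = 1.0
    for i in range(k):
        num *= a - i
    return num / factorial(k)

r, p = 4, 0.35          # illustrative values; r must be a positive integer here
q = 1 - p
K = 120                 # truncation point; the terms decay like q**k

# Terms of p^r (1 - q)^(-r) expanded with the generalized binomial theorem
series_terms = [p**r * gen_binom(-r, k) * (-q) ** k for k in range(K)]
# Negative binomial pmf terms: C(k+r-1, k) p^r q^k
pmf_terms = [comb(k + r - 1, k) * p**r * q**k for k in range(K)]

# Term-by-term identity from the text
assert all(abs(s - f) < 1e-12 for s, f in zip(series_terms, pmf_terms))
print(sum(series_terms))                               # close to 1
print(sum(k * t for k, t in enumerate(series_terms)))  # close to r*q/p
```

The truncated sums agree with 1 and with {{math|''rq''/''p'' {{=}} 4 × 0.65/0.35 ≈ 7.43}} up to a negligible tail.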
(If {{mvar|r}} is a negative non-integer, so that the exponent is a positive non-integer, then some of the terms in the sum above are negative, so we do not have a probability distribution on the set of all nonnegative integers.)

Now we also allow non-integer values of {{mvar|r}}. Recall from above that

:The sum of independent negative-binomially distributed random variables {{math|''r''{{sub|1}}}} and {{math|''r''{{sub|2}}}} with the same value for parameter {{mvar|p}} is negative-binomially distributed with the same {{mvar|p}} but with {{mvar|r}}-value {{math|''r''{{sub|1}} + ''r''{{sub|2}}}}.

This property persists when the definition is thus generalized, and affords a quick way to see that the negative binomial distribution is [[Infinite divisibility (probability)|infinitely divisible]].

===Recurrence relations===
The following [[recurrence relations]] hold:

For the probability mass function

:<math> \begin{cases} (k+1) \Pr (X=k+1)-p \Pr (X=k) (k+r)=0, \\[5pt] \Pr (X=0)=(1-p)^r. \end{cases} </math>

For the moments <math>m_k = \mathbb E(X^k),</math>

:<math> m_{k+1} = r P m_k + (P^2 + P) {d m_k \over dP}, \quad P:=(1-p)/p, \quad m_0=1. </math>

For the cumulants

:<math> \kappa_{k+1} = (Q-1)Q {d \kappa_k \over dQ}, \quad Q:=1/p, \quad \kappa_1=r(Q-1). </math>
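The pmf recurrence can be checked numerically. Note that, as stated, it pins down the pmf in the parametrization {{math|''f''(''k'') {{=}} C(''k'' + ''r'' − 1, ''k'') (1 − ''p''){{sup|''r''}} ''p''{{sup|''k''}}}}, i.e. with the roles of {{mvar|p}} and {{math|1 − ''p''}} exchanged relative to the expectation sections above. A minimal Python sketch (the values {{math|''r'' {{=}} 3}}, {{math|''p'' {{=}} 0.4}} are illustrative):

```python
from math import comb

def pmf_by_recurrence(r, p, kmax):
    """Iterate (k+1) f(k+1) = p (k+r) f(k) starting from f(0) = (1-p)**r."""
    f = [(1 - p) ** r]
    for k in range(kmax):
        f.append(p * (k + r) * f[k] / (k + 1))
    return f

r, p = 3, 0.4
rec = pmf_by_recurrence(r, p, 12)
# Closed form implied by the recurrence: C(k+r-1, k) (1-p)^r p^k
closed = [comb(k + r - 1, k) * (1 - p) ** r * p**k for k in range(13)]
assert all(abs(a - b) < 1e-12 for a, b in zip(rec, closed))
```

Here the recurrence reproduces the closed-form values exactly up to floating-point rounding, e.g. {{math|''f''(0) {{=}} 0.6{{sup|3}} {{=}} 0.216}}.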