Negative binomial distribution
===Expectation of successes===
The expected total number of failures in a negative binomial distribution with parameters {{math|(''r'', ''p'')}} is {{math|''r''(1 − ''p'')/''p''}}. To see this, imagine an experiment simulating the negative binomial is performed many times. That is, a set of trials is performed until {{mvar|r}} successes are obtained, then another set of trials, then another, and so on. Write down the number of trials performed in each experiment: {{math|''a'', ''b'', ''c'', ...}}, and set {{math|''a'' + ''b'' + ''c'' + ... {{=}} ''N''}}. Now we would expect about {{math|''Np''}} successes in total. Say the experiment was performed {{mvar|n}} times. Then there are {{math|''nr''}} successes in total, so we would expect {{math|''nr'' {{=}} ''Np''}}, which gives {{math|''N''/''n'' {{=}} ''r''/''p''}}. Note that {{math|''N''/''n''}} is just the average number of trials per experiment; that is what we mean by "expectation". The average number of failures per experiment is {{math|1=''N''/''n'' − ''r'' = ''r''/''p'' − ''r'' = ''r''(1 − ''p'')/''p''}}. This agrees with the mean given in the infobox.

A rigorous derivation can be done by representing the negative binomial distribution as a sum of waiting times. Let <math>X_r \sim \operatorname{NB}(r, p)</math>, with the convention that <math>X_r</math> is the number of failures observed before the <math>r</math>th success, where <math>p</math> is the probability of success. And let <math>Y_i \sim \operatorname{Geom}(p)</math>, where <math>Y_i</math> is the number of failures before seeing a success. We can think of <math>Y_i</math> as the waiting time (number of failures) between the <math>(i-1)</math>th and <math>i</math>th successes. Thus

:<math>
X_r = Y_1 + Y_2 + \cdots + Y_r.
</math>

By linearity of expectation, the mean is

:<math>
\operatorname{E}[X_r] = \operatorname{E}[Y_1] + \operatorname{E}[Y_2] + \cdots + \operatorname{E}[Y_r] = \frac{r(1-p)}{p},
</math>

which follows from the fact that <math>\operatorname{E}[Y_i] = (1-p)/p</math>.
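The informal argument above can be checked numerically. The following sketch (an illustration only, not part of the article; the function name `neg_binomial_failures` and the parameter values are chosen for this example) simulates counting failures before the {{mvar|r}}th success and compares the empirical mean against {{math|''r''(1 − ''p'')/''p''}}:

```python
import random

def neg_binomial_failures(r, p, rng):
    """Count failures before the r-th success in repeated Bernoulli(p) trials."""
    failures, successes = 0, 0
    while successes < r:
        if rng.random() < p:  # a trial succeeds with probability p
            successes += 1
        else:
            failures += 1
    return failures

rng = random.Random(42)          # fixed seed for reproducibility
r, p, n = 5, 0.3, 100_000        # hypothetical example parameters
empirical = sum(neg_binomial_failures(r, p, rng) for _ in range(n)) / n
theoretical = r * (1 - p) / p    # 5 * 0.7 / 0.3 = 35/3
```

With 100,000 replications the sample mean should land well within a few hundredths of the theoretical value, since the standard error of the mean is roughly 0.02 for these parameters.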