==Bonferroni inequalities==
Boole's inequality for a finite number of events may be generalized to certain [[upper bound|upper]] and [[upper bound|lower bound]]s on the probability of [[finite unions]] of events.<ref>{{cite book |first1=George |last1=Casella |first2=Roger L. |last2=Berger |title=Statistical Inference |publisher=Duxbury |year=2002 |isbn=0-534-24312-6 |pages=11–13 |url=https://books.google.com/books?id=0x_vAAAAMAAJ&pg=PA11 }}</ref> These bounds are known as '''Bonferroni inequalities''', after [[Carlo Emilio Bonferroni]]; see {{harvtxt|Bonferroni|1936}}.

Let
:<math>S_1 := \sum_{i=1}^n {\mathbb P}(A_i), \quad S_2 := \sum_{1\le i_1 < i_2\le n} {\mathbb P}(A_{i_1} \cap A_{i_2} ),\quad \ldots,\quad S_k := \sum_{1\le i_1<\cdots<i_k\le n} {\mathbb P}(A_{i_1}\cap \cdots \cap A_{i_k} ) </math>
for all integers ''k'' in {1, ..., ''n''}.

Then, when <math>K \leq n</math> is odd:
:<math> \sum_{j=1}^K (-1)^{j-1} S_j \geq \mathbb{P}\Big(\bigcup_{i=1}^n A_i\Big) = \sum_{j=1}^n (-1)^{j-1} S_j </math>
holds, and when <math>K \leq n</math> is even:
:<math> \sum_{j=1}^K (-1)^{j-1} S_j \leq \mathbb{P}\Big(\bigcup_{i=1}^n A_i\Big) = \sum_{j=1}^n (-1)^{j-1} S_j </math>
holds.

The inequalities follow from the [[inclusion–exclusion principle]], and Boole's inequality is the special case <math>K=1</math>. Since the proof of the inclusion–exclusion principle requires only the finite additivity (and nonnegativity) of <math>\mathbb{P}</math>, the Bonferroni inequalities hold more generally when <math>\mathbb{P}</math> is replaced by any finite [[Content (measure theory)|content]], in the sense of measure theory.

=== Proof for odd K ===
Let <math> E = \bigcap_{i=1}^n B_i </math>, where <math> B_i \in \{A_i, A_i^c\} </math> for each <math> i = 1, \dots, n </math>. Such events <math> E </math> partition the [[sample space]], and each <math> E </math> is, for every <math> i </math>, either contained in <math> A_i </math> or disjoint from it.

If <math> E = \bigcap_{i=1}^n A_i^c </math>, then <math> E </math> contributes 0 to both sides of the inequality. Otherwise, assume <math> E </math> is contained in exactly <math> L \geq 1 </math> of the <math> A_i </math>. Then <math> E </math> contributes exactly <math> \mathbb{P}(E) </math> to the right side of the inequality, while it contributes
:<math> \sum_{j=1}^K (-1)^{j-1} {L \choose j} \mathbb{P}(E) </math>
to the left side, since <math> E </math> lies in exactly <math> \tbinom{L}{j} </math> of the ''j''-fold intersections counted by <math> S_j </math>. By [[Pascal's rule]], this is equal to
:<math> \sum_{j=1}^K (-1)^{j-1} \Big({L-1 \choose j-1} + {L-1 \choose j} \Big)\mathbb{P}(E) </math>
which telescopes, because <math> K </math> is odd, to
:<math> \Big( 1 + {L-1 \choose K}\Big) \mathbb{P}(E) \geq \mathbb{P}(E) </math>
Thus, the inequality holds for all events <math> E </math>, and so by summing over <math> E </math> we obtain the desired inequality:
:<math> \sum_{j=1}^K (-1)^{j-1} S_j \geq \mathbb{P}\Big(\bigcup_{i=1}^n A_i\Big) </math>

The proof for even <math> K </math> is nearly identical.<ref>{{cite book |first1=Santosh |last1=Venkatesh |title=The Theory of Probability |publisher=Cambridge University Press |year=2012 |isbn=978-1-107-02447-2 |pages=94–99, 113–115 |url=http://www.cambridge.org/9781107024472 }}</ref>
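The alternating pattern of the bounds can be checked numerically. The following is a minimal Python sketch, not taken from the cited references: the sample space and events are arbitrary randomly generated choices, and each <math>S_k</math> is computed by exact counting over a finite equiprobable sample space, so that the odd partial sums over-bound and the even partial sums under-bound the probability of the union.

<syntaxhighlight lang="python">
# Minimal check of the Bonferroni inequalities by exact counting.
# The sample space and events below are arbitrary illustrative choices.
from itertools import combinations
import random

random.seed(0)
N = 1000                       # equally likely outcomes 0, ..., N-1
events = [set(random.sample(range(N), random.randint(100, 400)))
          for _ in range(4)]   # n = 4 arbitrary events A_1, ..., A_4
n = len(events)

union_count = len(set().union(*events))          # N * P(A_1 ∪ ... ∪ A_n)

# counts[k-1] = N * S_k: sum over all k-element subsets of the events
# of the size of their intersection.
counts = [sum(len(set.intersection(*combo))
              for combo in combinations(events, k))
          for k in range(1, n + 1)]

partial = 0
for K, c in enumerate(counts, start=1):
    partial += (-1) ** (K - 1) * c
    if K % 2 == 1:
        assert partial >= union_count    # odd K: partial sum is an upper bound
    else:
        assert partial <= union_count    # even K: partial sum is a lower bound
    print(f"K={K}: {partial / N:.4f} vs P(union) = {union_count / N:.4f}")

assert partial == union_count            # K = n: inclusion–exclusion equality
</syntaxhighlight>

Because all quantities are integer counts, the comparisons are exact; the final assertion is the inclusion–exclusion identity recovered at <math>K = n</math>.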
=== Example ===
Suppose that you are estimating five [[Parameter|parameters]] based on a random sample, and you can control the confidence level of each estimate separately. If you want all five estimates to be good simultaneously with probability at least 95%, what should you require of each one?

Controlling each estimate at the 95% level is not enough, because "all five are good" is a subset of each event "estimate ''i'' is good". Boole's inequality solves the problem: passing to the complement of the event "all five are good" turns the requirement into

:''P''(at least one estimate is bad) ≤ ''P''(''A''<sub>1</sub> is bad) + ''P''(''A''<sub>2</sub> is bad) + ''P''(''A''<sub>3</sub> is bad) + ''P''(''A''<sub>4</sub> is bad) + ''P''(''A''<sub>5</sub> is bad),

so it suffices to make the right-hand side at most 0.05. One way is to make each term equal to 0.05/5 = 0.01, that is, 1%. In other words, guaranteeing each estimate at the 99% level (for example, by constructing a 99% [[confidence interval]]) ensures that all five estimates are good with probability at least 95%. This is called the Bonferroni method of simultaneous inference.
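For concreteness, here is a short Python sketch of the arithmetic above; the variable names are illustrative assumptions, not from any cited source.

<syntaxhighlight lang="python">
# Sketch of the Bonferroni method from the example above (arithmetic only;
# the variable names are illustrative, not from the cited sources).
m = 5                          # number of parameters to estimate
overall_alpha = 0.05           # allowed chance that any estimate is bad

per_estimate_alpha = overall_alpha / m   # 0.05 / 5 = 0.01
print(f"confidence level per estimate: {1 - per_estimate_alpha:.0%}")  # 99%

# Boole's inequality: P(at least one bad) <= m * per_estimate_alpha = 0.05,
# so all m estimates are simultaneously good with probability >= 95%.
assert m * per_estimate_alpha <= overall_alpha + 1e-12
</syntaxhighlight>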