==False positive paradox==
An example of the base rate fallacy is the '''false positive paradox''' (also known as the '''accuracy paradox'''). This paradox describes situations in which there are more [[false positive]] test results than true positives, meaning the classifier has a low [[Precision and recall|precision]]. For example, if a facial recognition camera identifies wanted criminals with 99% accuracy but analyzes 10,000 people a day, the high accuracy is outweighed by the sheer number of tests: the program's list of suspects will likely contain far more innocents (false positives) than criminals (true positives), because there are far more innocents than criminals overall. The probability of a positive test result is determined not only by the accuracy of the test but also by the characteristics of the sampled population.<ref>{{cite book |last1=Rheinfurth |first1=M. H. |url=https://ntrs.nasa.gov/citations/19980045313 |title=Probability and Statistics in Aerospace Engineering |last2=Howell |first2=L. W. |date=March 1998 |publisher=[[NASA]] |page=16 |quote=MESSAGE: False positive tests are more probable than true positive tests when the overall population has a low prevalence of the disease. This is called the false-positive paradox.}}</ref>

The fundamental issue is that the far higher prevalence of true negatives means the pool of people testing positive will be dominated by false positives: even a small fraction of the much larger [negative] group produces a larger number of indicated positives than the larger fraction of the much smaller [positive] group. When the prevalence (the proportion of those who have a given condition) is lower than the test's [[false positive rate]], even tests that have a very low risk of giving a false positive ''in an individual case'' will give more false than true positives ''overall''.<ref name="Vacher">{{cite journal |last=Vacher |first=H. L. |date=May 2003 |title=Quantitative literacy - drug testing, cancer screening, and the identification of igneous rocks |url=http://findarticles.com/p/articles/mi_qa4089/is_200305/ai_n9252796/pg_2/ |journal=Journal of Geoscience Education |page=2 |quote=At first glance, this seems perverse: the less the students as a whole use [[steroids]], the more likely a student identified as a user will be a non-user. This has been called the False Positive Paradox}} - Citing: {{cite book |last1=Gonick |first1=L. |title=The cartoon guide to statistics |last2=Smith |first2=W. |publisher=Harper Collins |year=1993 |location=New York |page=49}}</ref>

The effect is especially counter-intuitive when interpreting a positive result in a test on a low-prevalence [[population (statistics)|population]] after having dealt with positive results drawn from a high-prevalence population.<ref name="Vacher" /> If the false positive rate of the test is higher than the proportion of the ''new'' population with the condition, then a test administrator whose experience has been drawn from testing in a high-prevalence population may [[rule of thumb|conclude from experience]] that a positive test result usually indicates a positive subject, when in fact a false positive is far more likely to have occurred.
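The arithmetic behind the facial-recognition example can be sketched numerically. The figures below are illustrative assumptions (the source does not state a criminal count): 10 wanted criminals among 10,000 people scanned, with the camera's 99% accuracy taken as both a 99% detection rate and a 1% false positive rate.

```python
def positive_counts(population, prevalence, sensitivity, false_positive_rate):
    """Expected true and false positives for a screening test."""
    positives = population * prevalence           # people with the condition
    negatives = population - positives            # people without it
    true_positives = sensitivity * positives      # correctly flagged
    false_positives = false_positive_rate * negatives  # wrongly flagged
    return true_positives, false_positives

# Assumed scenario: 10 criminals among 10,000 people, 99% accurate camera.
tp, fp = positive_counts(population=10_000, prevalence=10 / 10_000,
                         sensitivity=0.99, false_positive_rate=0.01)
precision = tp / (tp + fp)
print(f"expected true positives:  {tp:.1f}")    # 9.9
print(f"expected false positives: {fp:.1f}")    # 99.9
print(f"precision:                {precision:.2f}")
```

With these assumed numbers the flagged list contains roughly ten innocents for every criminal, so precision is about 9% despite the 99% accuracy, which is exactly the paradox the text describes.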