{{short description|Statistical measures of whether a finding is likely to be true}}
{{More citations needed|date=March 2012}}
[[File:Positive and negative predictive values.pdf|thumb|541x541px|Positive and negative predictive values]]
[[File:PPV, NPV, Sensitivity and Specificity.svg|thumb|417x417px|Positive and negative predictive values - 2]]
The '''positive and negative predictive values''' ('''PPV''' and '''NPV''' respectively) are the proportions of positive and negative results in [[Predictive value of tests|statistics]] and [[diagnostic test]]s that are [[true positive]] and [[true negative]] results, respectively.<ref>{{cite book |last1=Fletcher |first1=Robert H. |last2=Fletcher |first2=Suzanne W. |title=Clinical epidemiology: the essentials |url=https://archive.org/details/clinicalepidemio00flet |url-access=limited |year=2005 |publisher=Lippincott Williams & Wilkins |location=Baltimore, Md. |isbn=0-7817-5215-9 |pages=[https://archive.org/details/clinicalepidemio00flet/page/n55 45] |edition=4th}}</ref> The PPV and NPV describe the performance of a diagnostic test or other statistical measure. A high result can be interpreted as indicating the accuracy of such a statistic. The PPV and NPV are not intrinsic to the test (as [[true positive rate]] and [[true negative rate]] are); they depend also on the [[prevalence]].<ref name=AltmanBland1994>{{cite journal |pmid=8038641 |year=1994 |last1=Altman |first1=DG |last2=Bland |first2=JM |title=Diagnostic tests 2: Predictive values |volume=309 |issue=6947 |pages=102 |pmc=2540558 |journal=BMJ |doi=10.1136/bmj.309.6947.102}}</ref> Both PPV and NPV can be derived using [[Bayes' theorem]].

Although sometimes used synonymously, a ''positive predictive value'' generally refers to what is established by control groups, while a [[Pre- and post-test probability|post-test probability]] refers to a probability for an individual. Still, if the individual's [[pre-test probability]] of the target condition is the same as the prevalence in the control group used to establish the positive predictive value, the two are numerically equal.

In [[information retrieval]], the PPV statistic is often called the [[Precision and recall|precision]].

==Definition==

===Positive predictive value (PPV)===
The positive predictive value (PPV), or [[Precision and recall|precision]], is defined as

::<math> \text{PPV} = \frac{\text{Number of true positives}}{\text{Number of true positives} + \text{Number of false positives}} = \frac{\text{Number of true positives}}{\text{Number of positive calls}}</math>

where a "[[true positive]]" is the event that the test makes a positive prediction and the subject has a positive result under the [[Gold standard (test)|gold standard]], and a "[[false positive]]" is the event that the test makes a positive prediction and the subject has a negative result under the gold standard. The ideal value of the PPV, with a perfect test, is 1 (100%), and the worst possible value would be zero.

The PPV can also be computed from [[Sensitivity and specificity|sensitivity]], [[Sensitivity and specificity|specificity]], and the [[prevalence]] of the condition:

::<math> \text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})} </math>

cf. [[Bayes' theorem]].

The complement of the PPV is the [[false discovery rate]] (FDR):

::<math> \text{FDR} = 1 - \text{PPV} = \frac{\text{Number of false positives}}{\text{Number of true positives} + \text{Number of false positives}} = \frac{\text{Number of false positives}}{\text{Number of positive calls}}</math>
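As a brief numerical illustration of the formulas above, the following Python sketch computes the PPV both from raw counts and from sensitivity, specificity, and prevalence, along with the corresponding FDR; the function names and example values are hypothetical and chosen purely for illustration.

<syntaxhighlight lang="python">
# Illustrative sketch of the PPV and FDR formulas above; the function names
# and example values are hypothetical, not taken from any cited source.

def ppv_from_counts(true_positives, false_positives):
    """PPV as the fraction of positive calls that are true positives."""
    return true_positives / (true_positives + false_positives)

def ppv_from_rates(sensitivity, specificity, prevalence):
    """PPV via Bayes' theorem from sensitivity, specificity, and prevalence."""
    numerator = sensitivity * prevalence
    denominator = numerator + (1 - specificity) * (1 - prevalence)
    return numerator / denominator

print(ppv_from_counts(90, 10))            # 0.9
ppv = ppv_from_rates(0.9, 0.9, 0.01)
print(round(ppv, 3), round(1 - ppv, 3))   # 0.083 0.917 (PPV and FDR)
</syntaxhighlight>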
===Negative predictive value (NPV)===
The negative predictive value is defined as:

::<math> \text{NPV} = \frac{\text{Number of true negatives}}{\text{Number of true negatives}+\text{Number of false negatives}} = \frac{\text{Number of true negatives}}{\text{Number of negative calls}} </math>

where a "[[true negative]]" is the event that the test makes a negative prediction and the subject has a negative result under the gold standard, and a "[[false negative]]" is the event that the test makes a negative prediction and the subject has a positive result under the gold standard. With a perfect test, one which returns no false negatives, the value of the NPV is 1 (100%), and with a test which returns no true negatives the NPV value is zero.

The NPV can also be computed from [[Sensitivity and specificity|sensitivity]], [[Sensitivity and specificity|specificity]], and [[prevalence]]:

::<math> \text{NPV} = \frac{\text{specificity} \times (1-\text{prevalence})}{\text{specificity} \times (1-\text{prevalence}) + (1-\text{sensitivity}) \times \text{prevalence}} </math>

::<math> \text{NPV} = \frac{TN}{TN + FN} </math>

The complement of the NPV is the '''{{visible anchor|false omission rate}}''' (FOR):

::<math> \text{FOR} = 1 - \text{NPV} = \frac{\text{Number of false negatives}}{\text{Number of true negatives}+\text{Number of false negatives}} = \frac{\text{Number of false negatives}}{\text{Number of negative calls}} </math>

Although sometimes used synonymously, a ''negative predictive value'' generally refers to what is established by control groups, while a negative [[Pre- and post-test probability|post-test probability]] refers to a probability for an individual. Still, if the individual's [[pre-test probability]] of the target condition is the same as the prevalence in the control group used to establish the negative predictive value, then the two are numerically equal.
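Analogously, a minimal sketch of the NPV and false omission rate formulas above; the sensitivity, specificity, and prevalence values are assumed for illustration and are not those of any study cited here.

<syntaxhighlight lang="python">
# Illustrative sketch of the NPV and false omission rate (FOR) formulas above;
# the sensitivity, specificity, and prevalence values are assumed for illustration.

def npv_from_rates(sensitivity, specificity, prevalence):
    """NPV from sensitivity, specificity, and prevalence."""
    numerator = specificity * (1 - prevalence)
    denominator = numerator + (1 - sensitivity) * prevalence
    return numerator / denominator

npv = npv_from_rates(0.8, 0.95, 0.02)  # assumed test characteristics, 2% prevalence
print(round(npv, 4))                   # 0.9957: a negative result is highly reassuring at low prevalence
print(round(1 - npv, 4))               # 0.0043 false omission rate
</syntaxhighlight>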
===Relationship===
The following diagram illustrates how the ''positive predictive value'', ''negative predictive value'', [[Sensitivity and specificity|sensitivity, and specificity]] are related.

{{diagnostic testing diagram}}

Note that the positive and negative predictive values can only be estimated using data from a [[cross-sectional study]] or other population-based study in which valid [[prevalence]] estimates may be obtained. In contrast, the sensitivity and specificity can be estimated from [[case-control study|case-control studies]].

===Worked example===
Suppose the [[fecal occult blood]] (FOB) screen test is used in 2030 people to look for bowel cancer:

{{diagnostic testing example}}

The small positive predictive value (PPV = 10%) indicates that many of the positive results from this testing procedure are false positives. Thus it will be necessary to follow up any positive result with a more reliable test to obtain a more accurate assessment as to whether cancer is present. Nevertheless, such a test may be useful if it is inexpensive and convenient. The strength of the FOB screen test is instead in its negative predictive value, which, if negative for an individual, gives us high confidence that its negative result is true.

==Problems==

===Other individual factors===
Note that the PPV is not intrinsic to the test; it depends also on the prevalence.<ref name=AltmanBland1994/> Due to the large effect of prevalence upon predictive values, a standardized approach has been proposed, where the PPV is normalized to a prevalence of 50%.<ref>{{cite journal |doi=10.1002/jmri.22466 |title=Standardizing predictive values in diagnostic imaging research |year=2011 |last1=Heston |first1=Thomas F. |journal=Journal of Magnetic Resonance Imaging |volume=33 |issue=2 |pages=505; author reply 506–7 |pmid=21274995 |doi-access=free |url=https://zenodo.org/records/8329256/files/Magnetic%20Resonance%20Imaging%20-%202011%20-%20Heston.pdf }}</ref> PPV is directly proportional{{Dubious|Is PPV directly proportional to the prevalence?|date=December 2020}} to the prevalence of the disease or condition. In the above example, if the group of people tested had included a higher proportion of people with bowel cancer, then the PPV would probably come out higher and the NPV lower. If everybody in the group had bowel cancer, the PPV would be 100% and the NPV 0%.{{citation needed|date=August 2023}}

To overcome this problem, NPV and PPV should only be used if the ratio of the number of patients in the disease group to the number of patients in the healthy control group used to establish the NPV and PPV is equivalent to the prevalence of the disease in the studied population, or, in case two disease groups are compared, if the ratio of the number of patients in disease group 1 to the number of patients in disease group 2 is equivalent to the ratio of the prevalences of the two diseases studied. Otherwise, positive and negative [[Likelihood ratios in diagnostic testing|likelihood ratios]] are more accurate than NPV and PPV, because likelihood ratios do not depend on prevalence.{{citation needed|date=August 2023}}

When an individual being tested has a different [[pre-test probability]] of having a condition than the control groups used to establish the PPV and NPV, the PPV and NPV are generally distinguished from the positive and negative [[post-test probabilities]], with the PPV and NPV referring to the ones established by the control groups, and the post-test probabilities referring to the ones for the tested individual (as estimated, for example, by [[Likelihood ratios in diagnostic testing|likelihood ratios]]). Preferably, in such cases, a large group of equivalent individuals should be studied, in order to establish separate positive and negative predictive values for use of the test in such individuals.{{citation needed|date=August 2023}}
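The dependence on prevalence described above can be made concrete with a short sketch; the sensitivity and specificity values (0.9 each) are assumed purely for illustration. Holding the test fixed, the PPV rises from under 1% to 90% as prevalence increases, reaching its standardized value at the 50% prevalence mentioned above.

<syntaxhighlight lang="python">
# Illustrative sketch of how PPV varies with prevalence for a fixed test;
# the sensitivity and specificity values (0.9 each) are assumed for illustration.

def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prevalence in (0.001, 0.01, 0.1, 0.5):
    print(f"prevalence {prevalence:>5}: PPV = {ppv(0.9, 0.9, prevalence):.3f}")

# prevalence 0.001: PPV = 0.009
# prevalence  0.01: PPV = 0.083
# prevalence   0.1: PPV = 0.500
# prevalence   0.5: PPV = 0.900
</syntaxhighlight>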
===Bayesian updating===
[[Bayes' theorem]] confers inherent limitations on the accuracy of screening tests as a function of disease prevalence or pre-test probability. It has been shown that a testing system can tolerate significant drops in prevalence, up to a certain well-defined point known as the [[prevalence threshold]], below which the reliability of a positive screening test drops precipitously. That said, Balayla et al.<ref name=":0">Jacques Balayla. Bayesian Updating and Sequential Testing: Overcoming Inferential Limitations of Screening Tests. ''BMC Med Inform Decis Mak'' 22, 6 (2022). https://doi.org/10.1186/s12911-021-01738-w</ref> showed that sequential testing overcomes the aforementioned Bayesian limitations and thus improves the reliability of screening tests.

For a desired positive predictive value <math>\rho</math>, where <math>\rho < 1</math>, that approaches some constant <math>k</math>, the number of positive test iterations <math>n_i</math> needed is:

:<math>n_i =\lim_{k \to \rho}\left\lceil\frac{\ln\left[\frac{k(\phi-1)}{\phi(k-1)}\right]}{\ln\left[\frac{a}{1-b}\right]}\right\rceil</math>

where
* <math>\rho</math> is the desired PPV
* <math>n_i</math> is the number of testing iterations necessary to achieve <math>\rho</math>
* <math>a</math> is the sensitivity
* <math>b</math> is the specificity
* <math>\phi</math> is the disease prevalence

Of note, the denominator of the above equation is the natural logarithm of the positive [[likelihood ratios in diagnostic testing|likelihood ratio]] (LR+). A critical assumption is that the tests must be independent. As described by Balayla et al.,<ref name=":0" /> repeating the same test may violate this independence assumption, and in fact "A more natural and reliable method to enhance the positive predictive value would be, when available, to use a different test with different parameters altogether after an initial positive result is obtained."<ref name=":0" />
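As a concrete reading of the expression above, the following sketch evaluates it directly, substituting the target PPV for <math>k</math> and assuming each positive result comes from an independent test; the sensitivity, specificity, prevalence, and target PPV values are assumed for illustration, and the code is not taken from the cited paper.

<syntaxhighlight lang="python">
import math

# Illustrative sketch of the sequential-testing expression above, evaluated at
# k = rho; parameter values are assumed for illustration, not from the cited paper.

def positive_iterations_needed(rho, a, b, phi):
    """Number of consecutive independent positive tests needed for the PPV
    to reach the desired value rho, given sensitivity a, specificity b,
    and prevalence phi."""
    numerator = math.log(rho * (phi - 1) / (phi * (rho - 1)))
    denominator = math.log(a / (1 - b))  # ln of the positive likelihood ratio, LR+
    return math.ceil(numerator / denominator)

# Example: sensitivity 0.9, specificity 0.9, prevalence 1%, target PPV 95%.
print(positive_iterations_needed(rho=0.95, a=0.9, b=0.9, phi=0.01))  # 4
</syntaxhighlight>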
===Different target conditions===
PPV is used to indicate the probability that, in the case of a positive test, the patient really has the specified disease. However, there may be more than one cause for a disease, and any single potential cause may not always result in the overt disease seen in a patient. There is potential to mix up related target conditions of PPV and NPV, such as interpreting the PPV or NPV of a test as referring to having a disease, when that PPV or NPV value actually refers only to a predisposition to having that disease.<ref name="EPV">{{cite journal |doi=10.1002/sim.1119 |title=The predictive value of microbiologic diagnostic tests if asymptomatic carriers are present |year=2002 |last1=Gunnarsson |first1=Ronny K. |last2=Lanke |first2=Jan |journal=Statistics in Medicine |volume=21 |issue=12 |pages=1773–85 |pmid=12111911 |s2cid=26163122 }}</ref>

An example is the microbiological throat swab used in patients with a [[sore throat]]. Publications stating the PPV of a throat swab usually report the probability that this bacterium is present in the throat, rather than that the patient is ill from the bacteria found. If the presence of this bacterium always resulted in a sore throat, then the PPV would be very useful. However, the bacteria may colonise individuals in a harmless way and never result in infection or disease. Sore throats occurring in these individuals are caused by other agents, such as a virus. In this situation the gold standard used in the evaluation study represents only the presence of bacteria (which might be harmless), not a causal bacterial sore throat illness. It can be proven that this problem will affect positive predictive value far more than negative predictive value.<ref>{{cite journal |doi=10.1016/j.ijid.2016.02.002 |title=Etiologic predictive value of a rapid immunoassay for the detection of group A Streptococcus antigen from throat swabs in patients presenting with a sore throat |year=2016 |last1=Orda |first1=Ulrich |last2=Gunnarsson |first2=Ronny K. |last3=Orda |first3=Sabine |last4=Fitzgerald |first4=Mark |last5=Rofe |first5=Geoffry |last6=Dargan |first6=Anna |journal=International Journal of Infectious Diseases |volume=45 |issue=April |pages=32–5 |pmid=26873279 |url=https://researchonline.jcu.edu.au/45048/1/2016%20---%20Orda%20-%20EPV.pdf |doi-access=free }}</ref> To evaluate diagnostic tests where the gold standard looks only at potential causes of disease, one may use an extension of the predictive value termed the [http://www.infovoice.se/fou/epv Etiologic Predictive Value].<ref name="EPV" /><ref>{{cite web |last1=Gunnarsson |first1=Ronny K. |url=http://science-network.tv/epv-calculator/ |title=EPV Calculator |website=Science Network TV}}</ref>

==See also==
* [[Binary classification]]
* [[Sensitivity and specificity]]
* [[False discovery rate]]
* [[Relevance (information retrieval)]]
* [[Receiver-operator characteristic]]
* [[Diagnostic odds ratio]]
* [[Sensitivity index]]

==References==
{{Reflist}}

[[Category:Biostatistics]]
[[Category:Statistical ratios]]
[[Category:Categorical data]]
[[Category:Summary statistics for contingency tables]]