[External] [IMPROVEDX] Sens/spec, +/-LRs, PPV/NPVs

Ely, John john-ely at UIOWA.EDU
Mon Sep 10 14:32:57 UTC 2018


I agree that sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), likelihood ratio positive (LR+), and likelihood ratio negative (LR-) all seem to tell the same story, but there are some crucial differences too.

The crucial difference between PPV and specificity is that PPV depends on the prevalence of disease in the population, or, for the clinician, the probability of disease in the patient in the exam room. If that probability is very low, you can't trust a positive test even if the specificity is high. In other words, there is a large risk that the positive test is a false positive even if the test is very specific. An example would be the requirement that all couples, regardless of risk, get tested for HIV before getting married. I think this actually happened in Chicago in the early days of HIV. It was a disaster. Even though the HIV test had good specificity, almost all the positive tests were false.
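For what it's worth, a few lines of Python make the arithmetic concrete. The numbers below are made up for illustration (they are not the actual HIV test characteristics or the actual premarital prevalence), but the pattern is the point:

    # Illustrative numbers only -- not the real HIV test characteristics.
    sens = 0.999    # assumed sensitivity
    spec = 0.995    # assumed specificity (very good)
    prev = 0.0001   # assumed prevalence in low-risk premarital couples (1 in 10,000)

    tp = sens * prev              # true positives per person tested
    fp = (1 - spec) * (1 - prev)  # false positives per person tested
    ppv = tp / (tp + fp)
    print(round(ppv, 3))          # about 0.02: roughly 98% of positive tests are false

Even with 99.5% specificity, a positive result in this low-prevalence group is almost certainly a false positive.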

The situation is analogous for a highly sensitive test and NPV: if the pre-test probability is very high, you can't trust a negative test even if the sensitivity is high, because there is a large risk that it is a false negative.
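The mirror-image sketch, again with made-up numbers, shows a sensitive test losing its negative predictive value when the pre-test probability is high:

    # Illustrative numbers only.
    sens = 0.95   # assumed sensitivity (quite good)
    spec = 0.90   # assumed specificity
    prev = 0.70   # assumed pre-test probability (high)

    fn = (1 - sens) * prev    # false negatives per person tested
    tn = spec * (1 - prev)    # true negatives per person tested
    npv = tn / (tn + fn)
    print(round(npv, 2))      # about 0.89: roughly 11% of negative tests are false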

Likelihood ratios are very difficult to understand because they are not proportions. Proportions (sensitivity, specificity, PPV, NPV) are easier to understand because you can picture, for example, 100 people in a room who all have the disease; 90 of them have a positive test, so the sensitivity is 90%. The other 10 have negative tests, and they are all falsely negative. It's the false negatives that largely determine the sensitivity, because false negatives appear only in the denominator of the sensitivity formula, whereas true positives are in both the numerator and denominator. So when you hear sensitivity, think false negatives. A highly sensitive test has few false negatives, so you can trust a negative test and you won't miss anybody with the disease, UNLESS the prevalence (pre-test probability) is high, in which case many of the negative tests will be false.

However, it can be confusing to say "when you hear sensitivity, think false negatives," because the sensitivity is the true positive rate. So it seems like we should say, "When you hear sensitivity, think true positives." No wonder we have trouble understanding this stuff.
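A couple of lines of Python make the room picture concrete, using the same made-up counts as above:

    # 100 people in the room, all of whom have the disease
    true_positives = 90    # diseased people with a positive test
    false_negatives = 10   # diseased people with a (falsely) negative test

    sensitivity = true_positives / (true_positives + false_negatives)
    print(sensitivity)     # 0.9 -- shrink the false negatives and the sensitivity climbs toward 1.0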

The problem with PPV, NPV, LR+, and LR- is that we never know the prevalence of disease in the population we care about. The population the clinician cares about is a single patient, so the "prevalence" is actually the pre-test probability of disease in that one person. And we never know that with any degree of certainty, at least not in the people for whom it might be reasonable to do the test. Likelihood ratios themselves do not depend on pre-test probability, but the way you use them does. You use them to convert pre-test probability to post-test probability, but since you don't know the pre-test probability, this doesn't work very well.
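The mechanics of using a likelihood ratio are simple; the hard part is the pre-test probability you have to feed in. A minimal sketch of the odds-form calculation, with an arbitrary 30% pre-test probability and an arbitrary LR+ of 8:

    def post_test_probability(pre_test_prob, likelihood_ratio):
        # Convert pre-test probability to post-test probability via odds.
        pre_odds = pre_test_prob / (1 - pre_test_prob)
        post_odds = pre_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    print(round(post_test_probability(0.30, 8), 2))   # about 0.77

The function is trivial; the uncertainty lives entirely in that first argument.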

I think the bottom line is that the clinician should be facile with sensitivity and specificity.  The others are OK to understand on a superficial level.  The only time you need to know about them is when you hear them used in a lecture.  You don’t need them at the bedside.

Your questions:

1.  No, there are no tests for which specificity is not related to the positive likelihood ratio, but the positive likelihood ratio also depends on the sensitivity. It does not depend on prevalence (pre-test probability), although the way you use it does. (The standard formulas are spelled out in the short sketch after answer 3.)

2.  Yes, there are instances when using sensitivity/specificity vs. LR vs. PPV/NPV will result in an incorrect conclusion if you don't account for the fact that sensitivity/specificity and LR do not depend on pre-test probability, whereas PPV/NPV (and the way you use LRs) do.

3.  If a patient has a moderate pretest probability (say 20%) of acute MI and the troponin is negative, it does not matter whether the resident refers to the LR-, the NPV, or the sensitivity. The sensitivity is 90% (pretty good). The LR- is 0.11 (assuming a specificity of 94%), which is really good. The NPV is 97%, which is also really good. The question for the clinician is: Can I trust a negative test? The answer is "yes," regardless of which you use (LR-, NPV, or sensitivity).
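For anyone who wants to check those numbers, here is a short sketch using the standard formulas (LR+ = sensitivity / (1 - specificity), LR- = (1 - sensitivity) / specificity) and the figures assumed above (90% sensitivity, 94% specificity, 20% pre-test probability):

    sens, spec, pretest = 0.90, 0.94, 0.20

    lr_neg = (1 - sens) / spec    # about 0.11; prevalence never enters LR- or LR+
    npv = spec * (1 - pretest) / (spec * (1 - pretest) + (1 - sens) * pretest)   # about 0.97

    # Post-test probability of MI after the negative troponin, via the odds form:
    pre_odds = pretest / (1 - pretest)
    post_prob = (pre_odds * lr_neg) / (1 + pre_odds * lr_neg)   # about 0.026, i.e. 1 - NPV

    print(round(lr_neg, 2), round(npv, 2), round(post_prob, 3))

All three routes land on the same answer: after the negative troponin, the chance of MI is down around 3%, so you can trust the negative test.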



John Ely



From: David Meyers [mailto:dm0015 at COMCAST.NET]
Sent: Monday, September 10, 2018 6:26 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: [External] [IMPROVEDX] Sens/spec, +/-LRs, PPV/NPVs

Dr Yuen writes again:


“Not sure if this was posted; reposting:


Thanks for the suggestion, including Dr. Brush's direct reply.


However, I had a copy of the book from a few years ago and read it several times before posting. In particular I combed through chapter 4 ("Decision Making: Making Choices"), which is extremely well written and helpful, just not for the specific questions I (and my residents) had. In the book John makes the excellent point of how pre-test probability affects our interpretation of test results and the role of sensitivity and specificity, and introduces likelihood ratios and Bayesian reasoning, all concepts I taught during my lecture (citing his book of course!).

However, he doesn't specifically address the following questions and I can't seem to find an answer to [them].  Again I understand the mathematical differences, but I am just questioning the practical differences.  For example, specifically:


1. Are there any tests for which specificity is NOT directly related to a positive likelihood ratio (+LR) and positive predictive value (PPV)? If not, then a very specific test ROUGHLY means a test with a high PPV, which ROUGHLY means a high +LR. In my residents' minds (and my own, actually), there really isn't any difference in knowing the subtle nuances between those numbers: a positive test would mean that the patient very likely has the condition tested for, regardless of which statistic you looked up. And yes, if you use exact numbers there certainly are numeric differences in the actual probabilities, but for a busy clinician it all "means about the same thing".


2. Are there instances when using sensitivity/specificity vs. LR vs. PPV/NPV results in an incorrect conclusion? In other words, are there specific cases when interpreting a test result using a test's published sensitivity would lead you to a different conclusion than if you looked up the -LR? Again, if not, then for the typical learner I understand why it seems like a "waste of time" trying to figure out the differences between all these terms if they roughly lead one to the same conclusion.


3. When would you preferentially use specificity vs. LR vs. PPV? If the resident has a patient with a moderate pretest clinical probability of acute myocardial infarction (AMI) and the high-sensitivity troponin is negative, does it really matter whether they refer to the test's LR vs. NPV vs. sensitivity? They all seem to lead to the same clinical conclusion.


I apologize if I am misunderstanding an elementary concept and these seem like silly questions, but as I am asked these questions I really didn't have any good answer, and I don't feel any closer to a satisfactory answer than I was two days ago. Again, I can point out the statistical and mathematical differences in calculating these values, but to me they all seem to "tell the same story."

Your thoughts appreciated
Tom”


