Sens/spec, +/-LRs, PPV/NPVs
John Brush
jebrush at ME.COM
Tue Sep 11 13:01:10 UTC 2018
Dr. Yuen,
I would teach your learners that you just have to be a little mathematical in practice. There’s no getting around it! Science is a quantitative discipline, and if you want to practice medicine based on science, you have to apply just a little math. In my book, I tried to make it as simple as possible so that the translation of quantitative science to semi-quantitative practice would become intuitive.
The fundamental question that I tried to answer in my book is this: How do you apply the science of medicine (which is derived from populations of patients) to an individual patient? You can do this, but it takes some mathematical thinking and some practice. When we learned mathematics as school kids, we learned by practicing problems until the concepts finally sank in. You need to do this with likelihood ratios as well. If you play around with them a bit, you start to see how the strength of a test is well represented by the likelihood ratio number. You start to know intuitively that a likelihood ratio of 5 is pretty amazing. 10 is really compelling. ST elevation in the proper clinical setting is so compelling that we activate a cath lab based on that test alone. It has an LR(+) of about 13. The point of statistical inference is that numbers can have meaning. For likelihood ratios, learners need to understand what the numbers mean.
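The update Dr. Brush describes can be worked through in a few lines. A minimal sketch in Python, using the LR(+) of 13 quoted above for ST elevation; the pretest probability of 0.50 is an assumption chosen purely for illustration:

```python
def posttest_probability(pretest_p, lr):
    """Convert probability to odds, multiply by the LR, convert back."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Assumed pretest probability of 0.50; LR(+) = 13 for ST elevation.
p = posttest_probability(0.50, 13)
print(round(p, 3))  # 0.929
```

Even a coin-flip pretest probability becomes near certainty after one strong test result, which is why a single ECG finding can justify activating the cath lab.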
LRs are calculated directly from the sensitivity and specificity. Any test with a known sensitivity and specificity has an LR that can be easily calculated. LR(+) is interesting because it rises dramatically when specificity rises above 80%, as shown below. If you play around with probability, odds, and likelihood ratios, you start to get a sense of this, but you have to play with the numbers a bit to become facile at converting probability to odds, multiplying by the LR, then converting posttest odds back to probability. The point of the exercise is ultimately to develop an intuitive sense of how much emphasis you should put on a test result, depending on the test's likelihood ratio.
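The steep rise above 80% specificity is easy to see numerically. A short sketch, with sensitivity fixed at 0.90 as an assumption for illustration (the exact curve in the attached graphic may use different values):

```python
def positive_lr(sensitivity, specificity):
    """LR(+) = sensitivity / (1 - specificity): how much a positive
    result multiplies the pretest odds."""
    return sensitivity / (1 - specificity)

def negative_lr(sensitivity, specificity):
    """LR(-) = (1 - sensitivity) / specificity: how much a negative
    result shrinks the pretest odds."""
    return (1 - sensitivity) / specificity

for spec in (0.70, 0.80, 0.90, 0.95, 0.99):
    print(f"specificity {spec:.2f} -> LR(+) = {positive_lr(0.90, spec):.1f}")
# specificity 0.70 -> LR(+) = 3.0
# specificity 0.80 -> LR(+) = 4.5
# specificity 0.90 -> LR(+) = 9.0
# specificity 0.95 -> LR(+) = 18.0
# specificity 0.99 -> LR(+) = 90.0
```

Because the denominator is (1 - specificity), each step toward perfect specificity buys disproportionately more diagnostic power, which is the nonlinearity the graph illustrates.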
When your learners start to think this way, they start to appreciate that you have to think about baseline risk (based on an estimate of prevalence of disease), the strength of test results, and thresholds of posttest probability that will become your cutoff for decision-making. You start to think in ways that avoid base-rate neglect, anchoring, and other ways of jumping to conclusions. Your diagnostic decision making becomes more organized and precise. But it takes practice, just like anything else.
John E. Brush, Jr., M.D., FACC
Professor of Medicine
Eastern Virginia Medical School
Sentara Cardiology Specialists
844 Kempsville Road, Suite 204
Norfolk, VA 23502
757-261-0700
Cell: 757-477-1990
jebrush at me.com
On Sep 10, 2018, at 7:26 AM, David Meyers <dm0015 at COMCAST.NET> wrote:
Dr Yuen writes again:
“Not sure if this was posted-repost:
Thanks for the suggestion, including Dr. Brush's direct reply.
However, I had a copy of the book from a few years ago and read it several times before posting. In particular, I combed through chapter 4 ("Decision Making: Making Choices") - extremely well written and helpful - just not for the specific questions I (and my residents) had. In the book, John makes the excellent point of how pre-test probability affects our interpretation of test results and the role of sensitivity and specificity, and introduces likelihood ratios and Bayesian reasoning - all concepts I taught during my lecture (citing his book, of course!)
However, he doesn't specifically address the following questions and I can't seem to find an answer to [them]. Again I understand the mathematical differences, but I am just questioning the practical differences. For example, specifically:
1. Are there any tests for which specificity is NOT directly related to a positive likelihood ratio (+LR) and positive predictive value (PPV)? If not, then a very specific test ROUGHLY means a test with a high PPV, which ROUGHLY means a high +LR. In my residents' minds (and my own, actually), then, there really isn't any difference in knowing the subtle nuances between those numbers - a positive test would mean that the patient very likely has the condition tested for, regardless of which statistic you looked up. And yes, if you use exact numbers there certainly are numeric differences in the actual probabilities, but for a busy clinician it all "means about the same thing".
2. Are there instances when using sensitivity/specificity vs LR vs PPV/NPV results in an incorrect conclusion? In other words, are there specific cases when interpreting a test result using a test's published sensitivity would lead you to a different conclusion than if you looked up the -LR? Again, if not, for the typical learner I understand why it seems like a "waste of time" trying to figure out the differences between all these terms if they roughly lead one to the same conclusion.
3. When would you preferentially use specificity vs LR vs PPV? If the resident has a patient with a moderate pretest clinical probability of acute myocardial infarction (AMI) and the high-sensitivity troponin is negative, does it really matter if they refer to the test's LR vs NPV vs sensitivity? They all seem to lead to the same clinical conclusion.
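The premise in question 1 can be checked numerically: LR(+) is fixed by the test alone, while PPV also depends on prevalence, so the two can diverge sharply. A small sketch (sensitivity 0.90 and specificity 0.95 are assumed values for illustration):

```python
def ppv(sens, spec, prevalence):
    """Positive predictive value from a 2x2 table built at a given
    prevalence: true positives / all positives."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.90, 0.95
lr_pos = sens / (1 - spec)       # 18.0, regardless of prevalence
print(round(ppv(sens, spec, 0.30), 2))  # 0.89: same test, high prevalence
print(round(ppv(sens, spec, 0.01), 2))  # 0.15: same test, low prevalence
```

The same highly specific test yields a PPV of about 89% in a high-prevalence setting but only about 15% when the disease is rare, which is exactly the base-rate effect the earlier reply emphasizes.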
I apologize if I am misunderstanding an elementary concept and these seem like silly questions, but as I am asked these questions I really don't have any good answer, and I don't feel any closer to a satisfactory answer than 2 days ago. Again, I can point out the statistical and mathematical differences in calculating these values, but to me they all seem to "tell the same story".
Your thoughts appreciated
Tom”
Moderator: David Meyers, Board Member, Society to Improve Diagnosis in Medicine
[Attachment: PastedGraphic-2.png, image/png, 107697 bytes]