[IMPROVEDX] IOM report is released - Diagnosis in actual practice

John Brush jebrush at ME.COM
Sun Oct 11 17:56:06 UTC 2015


Mark, can you send a reference to your study on thresholds? As you point out, thinking probabilistically about diagnosis raises important questions about thresholds and clinical decision rules.
	Here’s something to think about: for clinical research, we have conventionally and arbitrarily chosen thresholds of 0.05 and 0.20 for alpha and beta errors. In clinical medicine, we know that we would rather make errors of commission than errors of omission, and I wonder whether the 4:1 ratio of the beta to alpha error thresholds is about the same ratio we would choose for commission versus omission; to my knowledge, this has never been quantified. For rejecting a diagnosis, I would submit that the probability does need to be less than 5%. And a diagnosis with a probability of greater than 20% would be very much in play.
	What would the threshold be for rejecting the diagnosis of CAD in a patient with chest pain? If we assume that it is <0.05, we can back-calculate using likelihood ratios to determine which tests and which pretest probabilities will get us there. For an EKG stress test with a sensitivity of 67%, a specificity of 72%, and a negative likelihood ratio of 0.46, you would have to start with a pretest probability of less than 10% to get a post-test probability of less than 5%. No wonder we don’t use EKG stress tests much anymore, even though they are cheap. An imaging stress test, however, has a negative LR of 0.12. To get to <0.05, you have to start with a pretest probability of <30%. For patients with a higher prior probability, even a nuclear stress test won’t quite get you to our arbitrary threshold for dismissing CAD.
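	If you want to check that arithmetic, here is a minimal sketch in Python (my own back-of-the-envelope illustration, not a published calculator). It uses the odds form of Bayes’ theorem: post-test odds = pretest odds × LR.

    def prob_to_odds(p):
        # convert a probability to odds
        return p / (1.0 - p)

    def odds_to_prob(o):
        # convert odds back to a probability
        return o / (1.0 + o)

    def post_test_probability(pretest_prob, lr):
        # apply a likelihood ratio to a pretest probability (odds form of Bayes' theorem)
        return odds_to_prob(prob_to_odds(pretest_prob) * lr)

    def max_pretest_for_rule_out(threshold, negative_lr):
        # highest pretest probability whose post-test probability still falls
        # below the rule-out threshold after a negative test
        return odds_to_prob(prob_to_odds(threshold) / negative_lr)

    # EKG stress test: sens 67%, spec 72% -> LR- = (1 - 0.67) / 0.72 = 0.46
    print(round(max_pretest_for_rule_out(0.05, 0.46), 2))  # 0.1, i.e. pretest must be below about 10%
    # Imaging stress test: LR- = 0.12
    print(round(max_pretest_for_rule_out(0.05, 0.12), 2))  # 0.3, i.e. pretest must be below about 30%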
	I had a patient recently with exertional chest pain that had some typical and some atypical features. Her prior probability was about 50% (54% from the Diamond and Forrester paper). She had a negative nuclear stress test. Because of persistent, unexplained chest pain, I performed a cardiac cath. She had a severe left main lesion (causing a balanced perfusion defect, which is a known source of false-negative nuclear stress tests). It was only through persistence, and the knowledge that a negative nuclear stress test does not mean a 0% posterior probability of CAD, that we avoided a missed diagnosis and a serious mistake. For patients with a pretest probability in the 50% range, you either need a very strong test or several independent tests to rule out a diagnosis. If you can’t definitively rule out a diagnosis like CAD, you need to follow up and keep thinking about it.
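	Plugging the case above into the same kind of calculation, assuming the Diamond-Forrester pretest probability of 54% and a negative LR of about 0.12 for the nuclear stress test:

    # pretest 54% (Diamond and Forrester), nuclear/imaging stress test negative LR ~0.12
    pre_odds = 0.54 / (1 - 0.54)
    post_odds = pre_odds * 0.12
    print(round(post_odds / (1 + post_odds), 2))  # ~0.12

A post-test probability of roughly 12% after the negative study is still well above the 5% rule-out threshold, which is exactly why the diagnosis could not be dismissed.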
 
John

John E. Brush, Jr., M.D., FACC
Professor of Medicine
Eastern Virginia Medical School
Sentara Cardiology Specialists
844 Kempsville Road, Suite 204
Norfolk, VA 23502
757-261-0700
Cell: 757-477-1990
jebrush at me.com



On Oct 10, 2015, at 7:20 PM, Mark H Ebell <ebell at UGA.EDU> wrote:

Our study looked at chest pain, suspected DVT, and cough, among physicians in the US and Switzerland.

Rob, we should talk.

Mark

From: "Hamm, Robert M. (HSC)"
Date: Saturday, October 10, 2015 at 6:48 PM
To: Society to Improve Diagnosis in Medicine, Mark Ebell
Subject: RE: [IMPROVEDX] [IMPROVEDX] IOM report is released - Diagnosis in actual practice

I have done a survey on “what is the threshold” for giving antibiotics to a teenager with strep, with clinicians and patients. First, they don’t consciously have thresholds; second, the thresholds produced when people are asked to weigh the benefits and harms vary widely between people; third, they differ within people when the question is asked in different ways (decomposed to different extents); fourth, after dropping responses that just don’t make any sense whatsoever, the lowest thresholds are those of the third-year medical students (lower than patients and first-year medical students, and lower than residents and practicing clinicians). A poster will be presented at the Society for Judgment and Decision Making meeting in Chicago this November.
 
I would be very interested if there were other studies making these comparisons, using different clinical problems.
 
Rob Hamm, PhD
Clinical Decision Making Program
Department of Family and Preventive Medicine
University of Oklahoma Health Sciences Center.
 
From: Mark H Ebell [mailto:ebell at UGA.EDU] 
Sent: Saturday, October 10, 2015 5:06 PM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] [IMPROVEDX] IOM report is released - Diagnosis in actual practice
 
I think an interesting area of study is: what are the test and treatment thresholds of patients vs physicians? We published a methodology for determining thresholds, and it could be used for that purpose. 
 
Ebell MH, Locatelli I, Senn N. A novel approach to the determination of clinical decision thresholds. Evid Based Med. 2015 Mar 3. pii: ebmed-2014-110140. doi: 10.1136/ebmed-2014-110140.
 
Mark
 
From: robert bell
Reply-To: Society to Improve Diagnosis in Medicine, robert bell
Date: Saturday, October 10, 2015 at 5:15 PM
To: "IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG"
Subject: Re: [IMPROVEDX] [IMPROVEDX] IOM report is released - Diagnosis in actual practice
 
It raises the question: could physicians/HCPs be more open with patients, providing a disclaimer of sorts, or at least information that takes probability into consideration and acknowledges how difficult it is to make a diagnosis with certain rare conditions, and suggesting a way forward if a solution/diagnosis is not reached in a reasonable period of time?
 
Rob Bell, MD.
 
 
> On Oct 9, 2015, at 10:16 AM, John Brush <jebrush at ME.COM> wrote:
>  
> Mark Ebell’s insightful email points to the absurdity of not thinking probabilistically. The probabilities that skilled clinicians use in an intuitive fashion in practice are derived from scientific evidence and experience (experiential knowledge). I am a big believer in evidence-based medicine. Without the scientific underpinnings for our clinical activities, and explicit acknowledgement of where the science is lacking, we would be adrift. 
> In David Sackett’s original BMJ article on evidence-based medicine, he promoted the use of the best available scientific evidence "especially from patient centred clinical research into the accuracy and precision of diagnostic tests.” He included with his book a nomogram that a clinician could use to incorporate likelihood ratios for diagnostic tests with prior probabilities to yield a posterior probability for a single patient undergoing a single test. To him and other EBM founders, probability was the very foundation of statistical inference and evidence-based medicine.
> In my book, I stated "We can make highly accurate actuarial predictions for populations, but we have trouble even comprehending what probability means for a single patient or event.” I think this email trail points to the difficulty of thinking about probability in medicine. I am concerned that Dr. Jain’s paper only adds to the confusion. I would submit that in every one of the CPCs that Dr. Jain refers to, the discussant makes a statement like, “I think the most likely diagnosis is….” The statement "most likely" is another way of saying "most probable." Dr. Jain’s statement that probability is only a theoretical consideration and is not used in the practice of medicine is, I think, absurd. Not acknowledging the uncertainty in medicine through some statement of probability (which is simply a way to quantify uncertainty) leads to an illusion of certainty, arrogance on the part of the practitioner, and unrealistic patient and family expectations.
> In my book, it took me 5 chapters to fully develop the idea of probability and how it should be used to think about the diagnosis and treatment of individual patients. This is tough to think about and comprehend, and I think it can be misrepresented in an email listserv.
> For excellent reading on probability, I would suggest Ian Hacking’s “An Introduction to Probability and Inductive Logic” or Gerd Gigerenzer’s “Risk Savvy: How to Make Good Decisions” or “Calculated Risks: How to Know When Numbers Deceive You.” 
> Ian Hacking and Gerd Gigerenzer participated in a year-long sabbatical with other philosophers, scientists, and cognitive psychologists where they explored the meaning of probability. Hacking has also written “The Taming of Chance” and “The Emergence of Probability.” Gigerenzer has also written “The Empire of Chance: How Probability Changed Science and Everyday Life” for more in-depth reading.
> In clinical medicine, we use conditional probability every day, but because the exact numbers are only estimates, we actually use a heuristic called anchoring and adjusting. We use reason every day, but because the ultimate truth may be unknown, we use a type of reasoning called abductive reasoning. (My spell-checker incorrectly changed the spelling to adductive reasoning in my prior email.) Abductive reasoning, described by Charles Sanders Peirce, is "reasoning toward the most plausible hypothesis." When we start the diagnostic process, we are dealing with multiple hypotheses (plausible conjectures). We work through the process toward the most plausible hypothesis and, again, the term “most plausible" implies some concept of relative probability. With abductive reasoning, we blend both inductive reasoning and causal reasoning to make an argument (meaning a logical statement) that combines both probability and pathophysiologic rationale.
> As W. Edwards Deming said, "if you don’t understand the process of what you are doing, you don’t know what you are doing.” It is important for clinicians to have a better understanding of the process of making a diagnosis. By developing and using good habits based on a deep understanding of process, the clinician will have the best chance of making the correct diagnosis, as reliably as humanly possible.
> My apologies about the long email, but I am very serious and passionate about improving the quality of medical decisions.
> John
>  
> John E. Brush, Jr., M.D., FACC
> Professor of Medicine
> Eastern Virginia Medical School
> Sentara Cardiology Specialists
> 844 Kempsville Road, Suite 204
> Norfolk, VA 23502
> 757-261-0700
> Cell: 757-477-1990
> jebrush at me.com
>  
>  
>  
> On Oct 8, 2015, at 2:54 PM, Mark H Ebell <ebell at UGA.EDU> wrote:
>  
> So, I should order a chest CT for every patient with cough, to rule out lung cancer.
>  
> And I should order a stress thallium for every single 25-year-old with chest pain that appears to be musculoskeletal, so I don’t miss the rare MI.
>  
> And of course I should get a CT or MRI for every patient with a headache, to not miss the rare (and generally untreatable) CNS cancer.
>  
> Do you realize the cost and harm of this approach? The complications of invasive tests and biopsies and follow-up that go nowhere? The false alarms? Radiation? 
>  
> But at least you won’t successfully sue me. I guess that’s all that matters.
>  
> Mark
>  
> Mark H. Ebell MD, MS
> Professor of Epidemiology, University of Georgia
> Editor, Essential Evidence; Deputy Editor, American Family Physician
> ebell at uga.edu
>  
>  
> From: Phillip Benton
> Reply-To: Society to Improve Diagnosis in Medicine, "pgbentonmd at AOL.COM"
> Date: Thursday, October 8, 2015 at 1:17 PM
> To: "IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG"
> Subject: Re: [IMPROVEDX] Fwd: [IMPROVEDX] IOM report is released - Diagnosis in actual practice
>  
> I am an experienced physician-attorney (medicine 54 years, law 44 years) who has been teaching Medical Malpractice at a top-tier law school for almost 20 years.
> Relevant to this discussion is the fact that missed or delayed diagnosis tops the list of causes for awards to injured plaintiffs at mediation or in jury trials. Accounts of serious medical error by the Institute of Medicine (1999) and the Journal of Patient Safety (September 2013) document missed or delayed diagnosis as a leading cause of preventable harm. The IOM estimate of up to 98,000 preventable deaths per year means one every 5 minutes 22 seconds; Dr. John James' evidence-based 2013 update of up to 440,000 preventable deaths per year equals one every 72 seconds.
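> (A quick arithmetic check of those intervals, sketched in Python; the inputs are just the two published estimates cited above:)
>
>     SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000
>     for deaths_per_year in (98_000, 440_000):
>         interval = SECONDS_PER_YEAR / deaths_per_year  # seconds between preventable deaths
>         print(deaths_per_year, round(interval), "seconds")
>     # 98,000/year -> about 322 s (5 min 22 s); 440,000/year -> about 72 s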
> One standard approach by the plaintiff's attorney is to have the defendant physician or the defense expert witnesses agree that it is important to make a differential diagnosis, and then to rule out first the most serious and dangerous diagnoses on that list. In other words, it is not the odds but the stakes that matter most. Greater experience on the part of the physician may move this process from System II (rational) toward System I (intuitive) thinking, but the point is that juries (i.e., patients) routinely agree that you should always deal with the most important things first.
> A common expression heard when an uncommon disease is misdiagnosed is that "when you hear hoofbeats, you think of horses, not zebras." The savvy attorney will then ask, "And how do you tell the difference? (pause) You look!" Adequate testing to first rule out life-threatening conditions, treatable if caught early, may often allow a successful defense.
>  
> Phillip G Benton, MD, JD
> Atlanta Georgia
>  
> 
> -----Original Message-----
> From: Bob Latino <blatino at RELIABILITY.COM>
> To: IMPROVEDX <IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG>
> Sent: Thu, Oct 8, 2015 10:55 am
> Subject: Re: [IMPROVEDX] Fwd: [IMPROVEDX] IOM report is released - Diagnosis in actual practice
> 
> I am not a physician nor a clinician, so I come at this issue basically from the perspective of a patient.
>  
> When the physician becomes the patient, what do they expect of their own care provider in terms of diagnosis?
>  
> Physicians themselves would obviously be more critical of their peers' diagnoses when their own lives are involved, because they are 'insiders' and know the probing questions to ask about how the diagnosis was derived. What are those questions? What should the non-clinical patient be asking of their doctors when they provide a diagnosis?
>  
> I am in the investigation business and work in aviation, nuclear power, the military, and other potentially life-threatening industries. Many people in these industries have to make spur-of-the-moment decisions (diagnosing the problem) and then quickly act on them.
>  
> I will take pilots for instance.  I know healthcare has taken an interest in Crew Resource Management (CRM) from the training that pilots receive about effective cockpit communications and teamwork.  They too have to quickly make a diagnosis and act on it accordingly.
>  
> The difference between a pilot and a doctor in these situations is that the pilot's and crew's own lives (along with the passengers') are also at stake, based on the accuracy of their diagnosis, decisions, and actions.
>  
> Given this informative debate about probabilities, and about looking at situations/patients singly versus as a population, how does a pilot make their quick assessment compared with a doctor making a diagnosis? Does the fact that the pilot's own life is at stake change the decision, as opposed to a doctor, whose life is not likely at stake based on their decision? Does it matter? Should it?
>  
>  
> Robert J. Latino, CEO
> Reliability Center, Inc.
> 1.800.457.0645
> blatino at reliability.com
> www.reliability.com
>  
> From: Jason Maude [mailto:Jason.Maude at ISABELHEALTHCARE.COM] 
> Sent: Thursday, October 08, 2015 10:09 AM
> To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
> Subject: Re: [IMPROVEDX] Fwd: [IMPROVEDX] IOM report is released - Diagnosis in actual practice
>  
> Although this is a very stimulating debate, I am struggling to understand how relevant it actually is to the diagnosis of individual patients, because a key additional variable will always be the personal consequences of a wrong decision. The key difference between a probabilistic approach in life assurance or similar fields and the diagnosis of a particular patient has to be the consequences of getting it wrong. This means that nobody is likely to follow a purely probabilistic approach if they know the patient might die if they don’t check for something, even if it has a low probability. The odds of winning the lottery are ludicrously bad, but because the prize is so big (upside consequences) people still try their luck. Personal consequences will always seriously affect rational calculations of probability.
>  
>  
> Jason Maude
> Founder and CEO Isabel Healthcare
>  
> From: "Jain, Bimal P.,M.D." <BJAIN at PARTNERS.ORG>
> Reply-To: Society to Improve Diagnosis in Medicine <IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG>, "Jain, Bimal P.,M.D." <BJAIN at PARTNERS.ORG>
> Date: Thursday, 8 October 2015 12:13
> To: "IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG" <IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG>
> Subject: Re: [IMPROVEDX] Fwd: [IMPROVEDX] IOM report is released - Diagnosis in actual practice
>  
> We can only comment on and critically evaluate material that is published. I find it simply amazing that a probabilistic approach in which probability is treated as evidence has not been employed in even a single one of the hundreds of published CPCs and clinical problem-solving exercises. Dr. Brush dismisses CPCs as artificial, pedagogical exercises employing System 2 thinking over days or weeks. This is all the more reason to employ a probabilistic approach, as the discussants then have plenty of time to estimate prior probabilities and calculate posterior probabilities. This is not done simply because this approach has not been found useful for diagnosis. Some time back, I carefully examined 50 consecutive CPCs in the NEJM from July 2013 to October 2014 and found the word probability mentioned only once in these 50 CPCs. If Dr. Brush thinks this approach is suitable only for System 1 thinking in diagnosis, Croskerry has pointed out the danger of such thinking in causing diagnostic errors. At present, the emperor does not appear to have any clothes with regard to the probabilistic approach to diagnosis in these exercises. What is needed, I think, are head-to-head observational or experimental studies comparing the usual approach to a probabilistic approach in real patients.
>  
> The adage ‘Common things are common’ is useful only in indicating the chance of a disease in a given patient. Certainly, we should look for a common disease first, as it has the greatest chance of being found. The problem arises when a frequency or probability is taken as evidence for a disease. There is little doubt in my mind that the diagnostic errors due to failure to suspect a disease in patients with atypical presentations, in the studies of Hardeep Singh and John Ely, arose from interpreting a low prior probability as absence of evidence for the disease.
>  
> In the discussion about STEMI, Dr. Brush rightly deals with all patients with STEMI in the same manner, regardless of prior probability, by taking them all for cardiac cath. His accuracy rate for acute MI of 85 percent in these patients is close to the rate of 90 percent in my paper. If he were to analyze his data, he would find that the majority of patients with acute MI have intermediate or high prior probability.
>  
> I referred to the Central Limit Theorem with regard to the distribution of prior probability, which is a continuous variable.
>  
> The main problem with a probabilistic approach is that it takes probability as evidence in a given individual patient, while it is true only in groups of patients. There is no proof that it improves diagnosis in actual practice. Its use appears to have become a dogma which is hindering efforts to reduce diagnostic errors. It is only by looking at diagnosis in actual practice, such as in the studies of H. Singh and J. Ely, and analyzing the results without putting on probabilistic glasses that we shall make progress.
>  
> I mention three examples from the history of science of dogmatic beliefs hindering progress, which was made only when phenomena as they actually occur were analyzed.
> 1.      Since the time of Plato, planetary orbits were believed to be circular, owing to the perfection of the circle as a geometrical figure. All contrary observations were explained away by drawing circles within circles (epicycles). It was only two thousand years later that Kepler determined the orbit of Mars to be an ellipse, when he actually observed and analyzed its movement.
> 2.      Since the time of Aristotle, every motion was believed to require a mover. Contrary observations, such as the flight of an arrow, were explained away in an absurd manner. Again, about two thousand years later, the true law of motion, that it is change in motion and not motion itself that requires a force, was discovered when Galileo observed and analyzed the actual motion of rolling balls.
> 3.      And nearer to our age, there was a widespread belief in Absolute Time after Newton declared it to exist in the 17th century. It was only in the early 20th century that this belief was overthrown by Einstein through his insightful analysis of actual time in terms of clocks and trains.
>  
>  
> Bimal
>  
>  
> Bimal P Jain MD
> Pulmonary-Critical Care
> North Shore Medical Center
> Lynn MA 01904
>  
>  
> From: John Brush [mailto:jebrush at ME.COM] 
> Sent: Saturday, October 03, 2015 8:58 AM
> To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
> Subject: Re: [IMPROVEDX] Fwd: [IMPROVEDX] IOM report is released - Diagnosis in actual practice
>  
> I’m afraid that I can’t agree with Dr. Jain’s argument. I think his argument is circular, difficult to follow, and selectively self-serving.
>             We have an adage in medicine: “Common things are common.” Otherwise, every diagnostic exercise would become a wild goose chase, leading us to look into every remote possibility every time. Having said that, I can also say that if we collect cases over time, uncommon things become common. Someone somewhere will eventually win the lottery. Uncommon diagnoses do occur eventually. But the exceptions should not define the rules.
>             The STEMI case that Dr. Jain presents proves my point. I am an interventional cardiologist who frequently takes patients with suspected STEMI to the cath lab for intervention. I have been getting direct feedback on these cases for about 25 years. I can tell you that there is a false positive rate of about 15% among STEMI alerts that are taken to the cath lab (numerous reports in the literature confirm that estimate). We allow that false positive rate because we make a subjective calculation of expected value. Even if a patient has a relatively low initial prior probability of STEMI, like Dr. Jain’s example, we don’t want to miss a serious diagnosis like a STEMI. The EKG findings change the probability estimate and make a STEMI quite plausible in such a patient. In a patient like Dr. Jain’s example, we know that there is about a 50-50 chance of finding an occluded artery, which is certainly high enough to activate the cath lab. And sure enough, over time, 50% is about the frequency that we find in such patients.
>             Dr. Jain references the central limit theorem. That theorem applies to probability for a continuous variable, and states that for any distribution, the means of repeated samples will approach a normal distribution. I’m not sure I follow his argument that it applies to a probability distribution of categorical variables. A diagnostic category is a countable variable. Kolmogorov’s principles, however, do apply: the probabilities of all of the possibilities add up to one, provided they are mutually exclusive and exhaustive. General knowledge of these probability principles can help us organize our thinking.
>             When we see a patient with chest pain in the ED, we start to narrow the sample space by asking questions and making observations. For example, we can eliminate the possibility of a stab wound very quickly by noticing that there is no knife in the chest. Through early hypothesis generation, we narrow the range of possibilities to the point where we can start the process of iterative hypothesis testing. We have at our disposal many possible tests that we can perform. We can send a troponin, do a CT scan for dissection, do a stress echo, go directly to the cath lab, etc. We can’t do all of these tests at the same time, and we probably don’t want to do every test on every patient. So how do we decide which test to do first? We do a little mental calculation of the subjective probabilities, which gives us an idea of the expected value of each test. We don’t want to miss a diagnosis with serious consequences, like MI or dissection, so an EKG and CXR are done on virtually everyone, regardless of the prior probability. But we narrow the sample space as we home in on the correct diagnosis. We don’t want to narrow the search prematurely, and we use a differential diagnosis to help us guard against jumping to conclusions. All of this is guided by some notion of relative probabilities.
>             Dr. Jain talks about the CPC method of diagnosis. This is a useful pedagogical exercise, where an expert can expound on clinical medicine, but it is very artificial, as compared to real world practice. An expert may spend days or weeks preparing a CPC discussion. His/her main goals are to not miss the diagnosis, and to eloquently discuss all of the possibilities. It is almost purely System 2 thinking. In the real world, with time constraints and uncertainty, we have to employ System 1’s intuition. It is helpful, however, if we calibrate our intuition through knowledge of the relative strength of evidence and the base rates of various diagnostic possibilities. I think that having an intuitive sense of probability is the essence of experiential knowledge. Savvy clinicians make good bets. 
>             The fundamental assumption of evidence-based medicine is that the frequencies we measure in populations of patients can be applied to an individual patient. The measured frequencies from our aggregated experience, or from reports in the literature, inform how we should think about an individual. Single-event or single-patient probability then becomes a degree of belief, which is then modified by additional information that we gain through diagnostic testing. In fact, the sensitivity and specificity of diagnostic tests are defined using a frequency notion of probability. They are cumulative probabilities, depending on where we draw the line of demarcation. Some tests, like an x-ray for a broken arm, are so compelling that they lead to absolute certainty. Other tests, like EKGs, stress tests, troponins, etc., don’t have perfect operating characteristics, and we are left with a probability estimate for each diagnostic possibility that is somewhere between 0 and 1. Usually we get to a point of certainty, but sometimes, through abductive reasoning, we are left with the most plausible diagnosis but never really know for sure.
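>             To make the point about cumulative probabilities concrete, here is a small sketch in Python (hypothetical test values, purely illustrative): sensitivity and specificity are simply the fractions of diseased and non-diseased patients falling on either side of whatever cutoff we choose, so both shift as the line of demarcation moves.
>
>     diseased = [0.3, 0.8, 1.2, 2.5, 4.0, 6.1]         # hypothetical test values, disease present
>     non_diseased = [0.01, 0.02, 0.05, 0.2, 0.4, 0.9]  # hypothetical test values, disease absent
>
>     def sens_spec(cutoff):
>         sensitivity = sum(x >= cutoff for x in diseased) / len(diseased)
>         specificity = sum(x < cutoff for x in non_diseased) / len(non_diseased)
>         return sensitivity, specificity
>
>     for cutoff in (0.1, 0.5, 1.0):
>         print(cutoff, sens_spec(cutoff))
>     # lowering the cutoff raises sensitivity and lowers specificity, and vice versa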
>             I hate to drag the listserv through this back-and-forth again, but to me, Dr. Jain’s arguments seem to run counter to what we have been taught about evidence-based medicine, and also to principles of cognitive psychology. Without some intuitive idea of probability and likelihood, we would be totally adrift in clinical medicine, so I just can’t let this go.
> John
>  
> John E. Brush, Jr., M.D., FACC
> Professor of Medicine
> Eastern Virginia Medical School
> Sentara Cardiology Specialists
> 844 Kempsville Road, Suite 204
> Norfolk, VA 23502
> 757-261-0700
> Cell: 757-477-1990
> jebrush at me.com
>  
>  
>  
> On Oct 2, 2015, at 2:45 PM, Mark Graber <mark.graber at IMPROVEDIAGNOSIS.ORG> wrote:
>  
> Note and manuscript forwarded on behalf of Dr Bimal Jain.
>  
> <Bimal Jain - The Role of Probability in Diagnosis.docx>
>  
>  
> From: Jain, Bimal P.,M.D. 
> Sent: Thursday, October 01, 2015 1:54 PM
> To: 'Mark Graber'
> Subject: RE: [IMPROVEDX] IOM report is released - Diagnosis in actual practice
>  
> Hi Mark and all,
>  
> It is important to understand how diagnosis is performed in actual practice, as a correct diagnosis is, after all, made 85 percent of the time in practice. To reduce diagnostic errors, we need to know whether the method used in practice needs to be improved or whether certain deviations from it need to be eliminated. The most puzzling issue in this regard is the role that probability plays, or does not play, in diagnosis. The puzzle arises because a probabilistic approach has been prescribed for a long time, but it does not appear to be employed in practice when we look at published CPCs and clinical problem-solving exercises. Does this disparity imply that a probabilistic approach is not suitable for diagnosis in actual practice? This is certainly possible, as diagnosis is performed in a given, individual patient with the aim of determining a disease correctly in that particular patient. And probability, as is well known, has been employed most successfully in areas such as epidemiology and the life insurance business, where the focus is on accuracy of prediction in a large group of persons, not on prediction in a given individual person.
> If we look closely, we note that a strict probabilistic approach, in which a probability represents evidence, may actually increase diagnostic errors, especially in patients with atypical presentations, by encouraging the cognitive bias of representativeness and inhibiting comprehensive differential diagnosis (discussed in the attached paper).
> I have put together my thoughts on this subject in the attached paper, ‘The role of probability in diagnosis’. Please review and comment on it. Thanks.
>  
> Bimal
>  
>  
> Bimal P Jain MD
> Pulmonary-Critical Care
> North Shore Medical Center
> Lynn MA 01904
>  

 
Robert M. Bell, M.D., Ph.C.
P.O. Box 3668
West Sedona, AZ  86340-3668
USA
Tel: Fax: 928 203-4517
 
 
 
 


To unsubscribe from IMPROVEDX: click the following link:
http://list.improvediagnosis.org/scripts/wa-IMPDIAG.exe?SUBED1=IMPROVEDX&A=1

or send email to: IMPROVEDX-SIGNOFF-REQUEST at LIST.IMPROVEDIAGNOSIS.ORG

Visit the searchable archives or adjust your subscription at: http://list.improvediagnosis.org/scripts/wa-IMPDIAG.exe?INDEX


Moderator: David Meyers, Board Member, Society to Improve Diagnosis in Medicine

To learn more about SIDM visit:
http://www.improvediagnosis.org/







