quick ?

Thomas, Eric Eric.Thomas at UTH.TMC.EDU
Fri Apr 25 13:51:36 UTC 2014

Steve and Colleagues,

In some of the research I have done with Hardeep Singh, we have tried to use definitions of diagnostic errors that allow a reliable and valid measurement to occur.  We mostly avoided the issue of diagnoses that evolve over time.

In some of our work we used the following definition: "An error was judged to have occurred if adequate data to suggest the final, correct diagnosis were already present at the index visit or if the documented abnormal findings at the index visit should have prompted additional evaluation that would have revealed the correct, ultimate diagnosis.  Thus, errors occurred only when missed opportunities to make an earlier diagnosis occurred based on retrospective review."  The "index visit" is the visit we sampled for review.  I won't get into all the details here, but this definition was used for a study in which we sampled primary care visits that preceded an unexpected return visit to the primary care office or the ED.

So, when that definition is used, we are pretty much eliminating the cases that evolve over time.  We called it a dx error only when all the data needed to make the right dx were already there at the time of the visit.  As a practicing primary care doc, I am very sensitive to the fact that diagnoses evolve over time, and it is often unclear what the dx is at the time of a single visit.  Our research does not label a delay as an error when all the data are not yet available.

I agree with others that we will never know THE rate of diagnostic error.  However, with good measurement we can come to understand the frequency, types, and contributing factors of dx error within certain practice settings and for certain diseases.  I think a disease-specific and setting-specific approach will lead to the most improvement.

While I have your attention (wishful thinking, I know), I'd also say that we are a very, very long way from measures of dx error that could be useful for any external body (CMS, Leapfrog, etc.) as some type of publicly reported performance measure.  Groups like that have already gone too far with efforts to measure safety: in many organizations those externally mandated, top-down measures create cultures of accountability and even blame, such that caregivers end up redefining or even hiding events so they don't have to be reported to management.  Also, those externally mandated measures capture only a small fraction of all the harm that occurs.  What we need, especially for diagnostic errors, are cultures where learning and improvement are valued.  Externally mandated measures, especially those not based on good science, will not help us reduce diagnostic errors.



Eric J Thomas MD, MPH
Professor of Medicine
Associate Dean for Healthcare Quality
Director, UT Houston-Memorial Hermann Center for Healthcare Quality and Safety
The University of Texas Medical School at Houston
6410 Fannin UPB 1100.44
Houston, TX 77030

From: Pauker, Stephen [mailto:SPauker at TUFTSMEDICALCENTER.ORG]
Sent: Thursday, April 24, 2014 11:15 AM
Subject: Re: [IMPROVEDX] quick ?

Patient care and diagnoses evolve over time as things are revealed, so labeling something as a diagnostic error depends on when in the patient's course it is measured. In the course of disease evolution, the primary diagnosis can change. So perhaps we should never make a diagnosis at all, but instead say, "At this moment I think the probability of X is P." Of course, the evolving issue is when to treat or test, and with what modalities.


Stephen G. Pauker, MD, MACP, FACC, ABMH
Professor of Medicine and Psychiatry
Please note new email address:
spauker at tuftsmedicalcenter.org

From: Danny Long [mailto:dannylong at EARTHLINK.NET]
Sent: Thu 4/24/2014 8:42 AM
Subject: Re: [IMPROVEDX] quick ?
When cover-up is the standard of care, who really knows the facts besides the ones doing the cover-up? That is the underlying motivation for nearly ending autopsies: avoiding the truth.


Errors related to missed or delayed diagnoses are a frequent cause of patient harm. In 2003, a systematic review of 53 autopsy studies from 1966 to 2002 was undertaken to determine the rate at which autopsies detect important, clinically missed diagnoses. Diagnostic error rates ranged from 4.1% to 49.8%, with a median error rate of 23.5%. Furthermore, approximately 4% of these cases revealed lethal diagnostic errors for which a correct diagnosis coupled with treatment could have averted death. Other autopsy studies have shown similar rates of missed diagnoses; one study reported the rate to be between 10% and 12%, while another placed it at 14%. Autopsies are considered the gold standard for definitive evidence of diagnostic error, but they are being performed less frequently and provide only retrospective information.


The CDC is well aware that death certificates are often falsified, and even the Joint Commission is against autopsies. So the prevailing logic is: keep the facts blurry, and the conversation about how bad the problem is will keep the public in the dark and make the diagnosis problem nearly impossible to correct.  = keep the excuses alive.

:-( Garbage in, garbage out, to keep the data corrupt.



Visit the searchable archives or adjust your subscription at: http://list.improvediagnosis.org/scripts/wa-IMPDIAG.exe?INDEX

Moderator: Lorri Zipperer, Lorri at ZPM1.com, Communication co-chair, Society for Improving Diagnosis in Medicine
