Solving the problem of DX error

Jason Maude Jason.Maude at ISABELHEALTHCARE.COM
Tue Apr 29 14:58:10 UTC 2014


I agree strongly with Rob that much more time should be spent talking about how to solve the problem rather than about its various facets.

There is now a good body of research describing the scale of diagnostic error and why it happens. We may not know everything we would like to, but we do know what the main issues are, and enough to start implementing solutions.

A significant body of papers concludes that the clinician should have broadened their differential diagnosis or done one in the first place. Building a differential, or hypothesis list, accounts for several steps in John Brush's excellent 12-point diagnostic process outlined earlier. It has been taught for over 100 years. Surely we have more than enough evidence to tell us that routinely building a differential diagnosis and recording it in the notes would help reduce a significant proportion of diagnostic errors?

Modern diagnosis decision support tools enable clinicians to build a good differential in the short time they have.

Why can we not roll this out as a solution today and carry on researching the finer points of diagnostic error, so that we can eventually eliminate all the diagnostic error it is possible to eliminate?

Regards
Jason


Jason Maude
Founder and CEO Isabel Healthcare
Tel: +44 1428 644886
Tel: +1 703 879 1890
www.isabelhealthcare.com

From: Pauker, Stephen <SPauker at TUFTSMEDICALCENTER.ORG>
Reply-To: Society to Improve Diagnosis in Medicine <IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG>, "Pauker, Stephen" <SPauker at TUFTSMEDICALCENTER.ORG>
Date: Tuesday, 29 April 2014 14:55
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] Fwd: [IMPROVEDX] quick ?

If the numbers are correct and these causes of death are not just an end-stage cause, then indeed it's time to get moving. But we cannot eliminate or decrease all errors, or even all types of errors, with a shotgun, six-sigma, fix-it-all approach. Rather, the first step is to identify the core problem or core conflict that underlies them all. Finding that is possible with hard thinking, analysis, and logic. Although the Joint Commission has made progress, errors remain.

Let me suggest that human error (Reason; cf. Dave Pryor's post) is a characteristic of humans and will persist. Perhaps the solution is not to eliminate all errors, or all carbon-based providers, but rather to build in oversight and correction mechanisms that will identify a pre-error (an error precursor) and intervene before it manifests as harm.

Steve


Stephen G. Pauker, MD, MACP, FACC, ABMH
Professor of Medicine and Psychiatry
===========================
Please note new email address;
spauker at tuftsmedicalcenter.org
===========================

________________________________
From: robert bell [mailto:rmsbell at ESEDONA.NET]
Sent: Mon 4/28/2014 9:49 PM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: [IMPROVEDX] Fwd: [IMPROVEDX] quick ?

Dear all,

I too have been following the very interesting discussion on this list. And I have also been following the discussions on the National Patient Safety Foundation list for years, as I am sure many of you have.

I have realized that on these two lists we talk at length about the problems but very little about what is being done, or should be done, to solve them.

Here we have 100,000 - 210,000 (or whatever the upper figure is) dying each year, and still more maimed or injured. And I ask: what have we done since the IOM report on errors, issued nearly 15 years ago? http://www.iom.edu/~/media/Files/Report%20Files/1999/To-Err-is-Human/To%20Err%20is%20Human%201999%20%20report%20brief.pdf

Many think that the situation has become worse - hence my 210,000 upper figure (http://www.npr.org/blogs/health/2013/09/20/224507654/how-many-die-from-medical-mistakes-in-u-s-hospitals). At 210,000 a year, this is something like the 3rd, 4th, or 5th leading cause of death in the US. That we cannot get significant funding for this is beyond comprehension and disgusting.

On the NPSF list, which serves the hospitals of the nation, hardly anyone talks about solutions or about research taking place in their hospitals to create change - presumably for fear of losing or compromising their jobs, because significant change might impact the bottom line. I realize that may be harsh thinking.

Further, there seems to be a reluctance to talk about advances in the private sector, presumably for proprietary reasons relating to copyrights and patents.

And then, moving to academia, there is the same reluctance to talk about what research is ongoing and what advances are being made, presumably for reasons associated with the secrecy surrounding the "publish or perish" mantra.

Unless I am missing a lot, the situation seems to me to be an all-around disaster. Surely, with this level of death and injury, we need to be at constant war with this problem. Surely there needs to be the equivalent of a national war room with a coordinated strategy?

Is there an up-to-date State of the Union report that I am not aware of? Are there significant plans in the works, from both the national government and the societies, that I am not aware of? Is there already the equivalent of a national war room, with intense planning taking place? Does academia have a national or even global plan to get on top of things? Are there leaders I am not aware of who are making progress and starting to effect change?

I am reminded that it took roughly 20 years for Nobel Prize winner Barry Marshall's work on peptic ulceration and H. pylori to translate into antibiotic treatment being accepted as standard therapy in US clinical practice (http://en.wikipedia.org/wiki/Barry_Marshall). So even if a breakthrough in errors came, it would not be fully accepted by all for many years, with more dying and injured needlessly.

What I would like to see is a list of priorities for reducing errors at all levels of the healthcare system, with each priority discussed in terms of the various ways that change could quickly be effected. Perhaps there is one; if not, which organization would be best placed to make one?

Sorry to be so blunt, but how many died while I was typing this? If it took half an hour, that would be about 12 at the 210,000-a-year figure above - and whose relatives are they?
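
As a rough check of that arithmetic (a sketch only, using the 210,000-a-year upper figure cited above):

    # Rough arithmetic behind the "about 12 in half an hour" figure above.
    # Assumes the 210,000-deaths-per-year upper estimate cited earlier.
    ANNUAL_DEATHS = 210_000
    HALF_HOURS_PER_YEAR = 365 * 24 * 2   # roughly 17,520 half-hour intervals per year

    deaths_per_half_hour = ANNUAL_DEATHS / HALF_HOURS_PER_YEAR
    print(f"{deaths_per_half_hour:.1f} deaths per half hour")  # -> about 12.0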

Rob Bell, M.D., Ph.C.



Begin forwarded message:

From: robert bell <rmsbell at ESEDONA.NET>
Subject: Re: [IMPROVEDX] quick ?
Date: April 27, 2014 at 2:11:21 PM MST
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Reply-To: Society to Improve Diagnosis in Medicine <IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG>, robert bell <rmsbell at ESEDONA.NET>

Well said, I agree totally.

I am not exactly sure what the solution is, but it would seem that we need pretty intense education. I have always felt that simulation labs, like those used in the airline industry, built around evidence-based algorithms that include frequency estimates and cost issues, would offer the best chance of getting medical students, residents, and HCPs doing CME to the level of superstars - a basketball player after 100,000 shots!

Well-designed programs that mimic the clinical situation would themselves provide information on where trainees make errors; that information could then be made public so that the algorithms can be tweaked and clinical practice altered.

It would seem that something like this could be high on the list of the thought leaders in this field of endeavor.

Does anyone know of this kind of research taking place anywhere in the world?

One could start with potentially life-threatening situations in the ER - be it cellulitis, pneumonia, pulmonary embolism, or other conditions that are "often" missed.

Perhaps it could develop to a state where everyone with a certain set of symptoms could quickly be "put through" a computer program or algorithm that suggests a course of action, lab tests, and treatment. Or something like that!
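
A minimal sketch of what such a program might look like; the symptom patterns and suggested workups below are hypothetical placeholders for illustration, not clinical guidance and not a description of any existing tool:

    # Purely illustrative sketch of a symptom-set -> suggested-workup lookup.
    # The patterns and workups are hypothetical placeholders, not clinical guidance.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Suggestion:
        differential: list[str]
        workup: list[str]

    RULES = {
        frozenset({"pleuritic chest pain", "dyspnea", "tachycardia"}):
            Suggestion(["pulmonary embolism", "pneumonia"],
                       ["risk score", "D-dimer or CT angiogram"]),
        frozenset({"fever", "productive cough"}):
            Suggestion(["pneumonia"], ["chest X-ray", "CBC"]),
    }

    def suggest(symptoms: set[str]) -> Optional[Suggestion]:
        """Return the first rule whose symptom pattern is contained in the presentation."""
        for pattern, suggestion in RULES.items():
            if pattern <= symptoms:
                return suggestion
        return None

    print(suggest({"pleuritic chest pain", "dyspnea", "tachycardia", "cough"}))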

Rob Bell


On Apr 27, 2014, at 10:40 AM, John Brush <jebrush at ME.COM> wrote:

I have been following this conversation and I continue to believe that our best way to improve diagnosis in medicine is to teach good habits. Teach people to be systematic, to appropriately use the available tools, and to calibrate their intuitive approaches with better numerical literacy and probability estimates.  Focusing on errors doesn’t seem like a very fruitful path forward.
Anecdotes and patient stories can motivate us to improve, but they don't necessarily give us lessons on how to improve. And measuring diagnostic error rates will always be difficult in the real world. No doubt, the studies that have estimated the error rates are important for raising public awareness and motivating our efforts. Diagnostic errors occur way too commonly. But measuring diagnostic error rates will always be problematic for analysis of individual performance over time. Effective practitioners seek follow-up, measurement, and feedback for internal use, but external reporting of error rates could have lots of unintended consequences.
Measuring diagnostic skill is like measuring fielding in baseball. Baseball uses fielding percentages, but we know that that measure is flawed. A player can improve his fielding percentage by limiting his range. Likewise, a hospitalist can lower his/her error rate by consulting on every patient and ordering lots of tests, which in effect limits his/her range and is very inefficient. And certain positions, like first base, have a better chance of getting a higher fielding percentage. Likewise, certain medical specialties have a better chance of avoiding diagnostic error than others. Also, to measure fielding percentage, you have to track every single opportunity, which is virtually impossible in clinical medicine. This email string is a testament to the difficulties in defining errors and tracking opportunities.
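A small worked illustration of that gaming problem, with invented numbers: two clinicians of identical skill can post very different error rates simply by curating the denominator.

    # Illustration with invented numbers: limiting one's "range" flatters an error rate,
    # just as a fielder can protect a fielding percentage by not reaching for hard chances.
    def error_rate(missed: int, opportunities: int) -> float:
        return missed / opportunities

    # Clinician A takes on 500 undifferentiated cases and misses 10.
    # Clinician B refers away the 100 hardest cases, keeps 400 easier ones, and misses 2.
    print(f"A: {error_rate(10, 500):.1%}")  # 2.0%
    print(f"B: {error_rate(2, 400):.1%}")   # 0.5% - looks better, but the denominator was curated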
So, in my humble opinion, we need to focus on education efforts, to help practitioners and patients develop better diagnostic habits, and facilitate the consistent use of these good habits.
John

John E. Brush, Jr., M.D., FACC
Professor of Medicine
Eastern Virginia Medical School
Sentara Cardiology Specialists
844 Kempsville Road, Suite 204
Norfolk, VA 23502
757-261-0700
Cell: 757-477-1990
jebrush at me.com



On Apr 26, 2014, at 12:30 PM, Pauker, Stephen <SPauker at tuftsmedicalcenter.org> wrote:

Let me extend Kassirer's thought, perhaps. Allow me to suggest, depending on definitions, that our quest to be free of diagnostic errors is similarly stubborn and may sometimes be unattainable when diagnostic uncertainty still exists at a given point in time.
Steve



Sent with Good (www.good.com)


-----Original Message-----
From: Gerrit Jager [gerrit.jager at PLANET.NL]
Sent: Saturday, April 26, 2014 11:11 AM Eastern Standard Time
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] quick ?

Indeed, it gets complex. The discussion started with the definition of diagnostic errors.

I like the word “error” (“errare”: wandering from the truth, even if the truth may be unknown). In our language (Dutch) we use the negative terms “misses” and “faults”.

The words of Jerome Kassirer “Absolute certainty in diagnosis is unattainable, no matter how much information we gather, how many observations we make, or how many tests we perform.” (Our Stubborn Quest for Diagnostic Certainty, N Engl J Med 1989) are still up-to-date.
We are often wrong for the right reason.

If possible, it will be very challenging to define the line between “no fault” diagnostic errors and preventable errors.

Gerrit

Gerrit Jager
Radiologist
The Netherlands



On 25-04-14 22:46, robert bell <rmsbell at ESEDONA.NET> wrote:

David, Edward,

Agree, and this is also tied to cost effectiveness, which in turn is linked to the services available - there may not even be a CT scanner available.

Doesn't it get complex?

Rob


On Apr 25, 2014, at 12:34 PM, Hoffer, Edward P.,M.D. <EHOFFER at MGH.HARVARD.EDU> wrote:

Excellent point, to which I would like to add that Dr. A will find all sorts of "incidentalomas" on the CT scans, which will require FURTHER testing, most of it of no avail to the patient.


Ed
Edward P Hoffer MD, FACC, FACP
________________________________
From: David Gordon, M.D. [davidc.gordon at DUKE.EDU]
Sent: Friday, April 25, 2014 1:11 PM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] quick ?

I am SO GLAD to see Eric's and Manoj's comments appear because they highlight a really important piece of the puzzle.

If all we focus on is the miss or delay rate, we will fail to appreciate the harm that can come from overdiagnosis and overtreatment. Take a theoretical and extreme example for illustrative purposes:
--Doctor A obtains a CT scan on every patient with RLQ pain, across the board, to make sure appendicitis is not missed. He may still miss a case or two over the course of several years due to the intrinsic miss rate of CT.
--Doctor B takes a selective approach. Using his judgment, and sometimes guided by labs, he will sometimes do a CT, sometimes recommend a clinical recheck the next day, and sometimes tell the patient that everything looks fine today and to come back as needed for worsening. In doing this, he may miss a few more cases of appendicitis over the years than Doctor A, but he avoids hundreds of unnecessary CT scans.

So if the only variable studied is how often a diagnosis is missed, Doctor A will always come out on top, when in the more complete picture his overtesting and overtreating style can lead to greater public harm, not only through greater cost but also through radiation exposure, adverse medication reactions, drug resistance, and so on.
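
A toy calculation makes the trade-off explicit; all numbers below are invented for illustration, not taken from any study:

    # Toy comparison of the two hypothetical strategies; all numbers are invented.
    def summarize(name: str, patients: int, scan_rate: float, miss_rate: float) -> None:
        scans = patients * scan_rate
        misses = patients * miss_rate
        print(f"{name}: {scans:.0f} CT scans, {misses:.1f} expected missed appendicitis cases")

    PATIENTS_PER_YEAR = 1_000  # RLQ-pain visits seen by each doctor (assumed)
    summarize("Doctor A (scan everyone)", PATIENTS_PER_YEAR, scan_rate=1.00, miss_rate=0.001)
    summarize("Doctor B (selective)    ", PATIENTS_PER_YEAR, scan_rate=0.40, miss_rate=0.003)
    # A miss-rate-only ranking puts A first, yet B avoided ~600 scans at the cost of
    # roughly 2 additional misses per 1,000 patients - a trade-off the metric never sees.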

I am sure we are going to see an explosion of studies looking at how often diagnoses are missed or delayed. I gather the majority will be retrospective studies - at least in these early phases. My fear is that retrospective analysis has many limitations and unmeasured variables, yet it is going to be the results rather than the limitations that receive greater public attention. Ultimately, this evolving science of how to improve diagnostic efficacy is going to have to balance the harm that can come from both under- and overdiagnosis. I hope the caution expressed by Eric that we are a long way from safely implementing performance metrics and regulations is heard widely and embraced strongly.

Thanks
David




David Gordon, MD
Associate Professor
Undergraduate Education Director
Division of Emergency Medicine
Duke University

________________________________
From: Mittal, Manoj K [MITTAL at EMAIL.CHOP.EDU]
Sent: Friday, April 25, 2014 10:50 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] quick ?

Thanks, Eric.
That is a useful definition.
What concerns me a little bit is that we are labeling events as diagnostic errors based on retrospective review of the chart. This may lead to over-diagnosis of diagnostic errors.

It is far easier to see something as a missed opportunity when one knows the future.
When you are with a patient in the office or in the emergency department, though, and the case is not straightforward, there may be some pointers to the final diagnosis, but the trick is to find the signal amongst all the noise.

It will be useful to test the various differential diagnosis list generators, such as Isabel, prospectively, to see how much they help and at what cost (in terms of increased testing, imaging, increased LOS, false positive tests, etc.).
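
A bare-bones sketch of the per-encounter bookkeeping such a prospective comparison might need; the field names and arm labels are assumptions for illustration, not an existing protocol:

    # Minimal sketch of per-encounter bookkeeping for a prospective DDx-generator comparison.
    # Field names and arm labels are assumptions for illustration, not an existing protocol.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Encounter:
        arm: str                        # "ddx_tool" or "usual_care"
        final_dx_in_initial_ddx: bool   # was the eventual diagnosis on the initial list?
        tests_ordered: int
        length_of_stay_hours: float

    def summarize(encounters: list[Encounter], arm: str) -> None:
        subset = [e for e in encounters if e.arm == arm]
        hit_rate = mean(float(e.final_dx_in_initial_ddx) for e in subset)
        print(f"{arm}: ddx hit rate {hit_rate:.0%}, "
              f"mean tests {mean(e.tests_ordered for e in subset):.1f}, "
              f"mean LOS {mean(e.length_of_stay_hours for e in subset):.1f} h")

    data = [  # made-up encounters, just to show the shape of the analysis
        Encounter("ddx_tool", True, 4, 18.0),
        Encounter("ddx_tool", True, 5, 20.0),
        Encounter("usual_care", False, 3, 30.0),
        Encounter("usual_care", True, 3, 16.0),
    ]
    for arm in ("ddx_tool", "usual_care"):
        summarize(data, arm)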

Regards,
Manoj Mittal, MD
________________________________
From: Thomas, Eric [Eric.Thomas at UTH.TMC.EDU]
Sent: Friday, April 25, 2014 9:51 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] quick ?

Steve and Colleagues,

In some of the research I have done with Hardeep Singh, we have tried to use definitions of diagnostic errors that allow a reliable and valid measurement to occur.  We mostly avoided the issue of diagnoses that evolve over time.

In some of our work we used the following definition, “An error was judged to have occurred if adequate data to suggest the final, correct diagnosis were already present at the index visit or if the documented abnormal findings at the index visit should have prompted additional evaluation that would have revealed the correct, ultimate diagnosis.  Thus, errors occurred only when missed opportunities to make an earlier diagnosis occurred based on retrospective review.”  The “index visit” is the visit we sampled for review.  I won’t get into all the details here, but this definition was used for a study where we sampled primary care visits which preceded an unexpected return visit to the primary care office or the ED.

So, when that definition is used, we are pretty much eliminating the cases that are evolving over time.  We called it a dx error when all the data was there at the time of the visit to make the right dx.  As a practicing primary care doc, I am very sensitive to the fact that diagnoses evolve over time and it is often unclear what the dx is at the time of a single visit.  Our research does not label delays as errors when all the data is not available.
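
The decision rule in that definition can be written compactly; the following is an illustrative paraphrase in code, not the study's actual review instrument:

    # Illustrative paraphrase of the published definition as reviewer logic -
    # not the study's actual review instrument.
    def missed_opportunity(
        adequate_data_for_final_dx_at_index_visit: bool,
        abnormal_findings_warranting_further_evaluation: bool,
    ) -> bool:
        """A diagnostic error is recorded only if the index visit already held the opportunity."""
        return (adequate_data_for_final_dx_at_index_visit
                or abnormal_findings_warranting_further_evaluation)

    # A diagnosis that genuinely evolved later, with neither condition met, is not labeled an error.
    print(missed_opportunity(False, False))  # False -> "no-fault" evolution over time, not an error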

I agree with others that we will never know THE rate of diagnostic error.  However, with good measurement we can come to understand the frequency, types, and contributing factors of dx error within certain practice settings and for certain diseases.  I think a disease-specific and setting-specific approach will lead to the most improvement.

While I have your attention (wishful thinking, I know) I'd also say that we are a very, very long way from measures of dx error that could be useful for any external body (CMS, Leapfrog, etc.) to use as some type of publicly reported performance measure.  Groups like that have already gone too far with efforts to measure safety – in many organizations those externally mandated, top-down measures create cultures of accountability and even blame, such that caregivers end up redefining or even hiding events so they don't have to be reported to management.  Also, those externally mandated measures capture only a small fraction of all the harm that occurs.  What we need, especially for diagnostic errors, are cultures where learning and improvement are valued.  Externally mandated measures, especially those not based on good science, will not help us reduce diagnostic errors.

Best,

Eric

Eric J Thomas MD, MPH
Professor of Medicine
Associate Dean for Healthcare Quality
Director, UT Houston-Memorial Hermann Center for Healthcare Quality and Safety
The University of Texas Medical School at Houston
6410 Fannin UPB 1100.44
Houston, TX 77030
713-500-7958
www.utpatientsafety.org




From: Pauker, Stephen [mailto:SPauker at TUFTSMEDICALCENTER.ORG]
Sent: Thursday, April 24, 2014 11:15 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] quick ?


Patient care and diagnoses evolve over time as things are revealed, so labeling something as a diagnostic error depends on when in the patient's course it is measured. In the course of disease evolution, the primary diagnosis can change. So perhaps we should never make a diagnosis at all, but rather say, "At this moment I think the probability of X is P." Of course, the evolving issue is when to treat or test, and with what modalities.
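
A tiny numerical sketch of that framing, with assumed numbers: the probability attached to diagnosis X shifts with each new finding (here via likelihood ratios and Bayes' rule).

    # Sketch with assumed numbers: the probability of diagnosis X is re-stated as findings arrive,
    # by converting to odds, applying a likelihood ratio, and converting back (Bayes' rule).
    def update(prior: float, likelihood_ratio: float) -> float:
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    p = 0.10                    # initial estimate for X at presentation (assumed)
    p = update(p, 6.0)          # a positive finding with an assumed LR+ of 6
    print(f"after finding 1: {p:.0%}")   # about 40%
    p = update(p, 0.2)          # a later negative result with an assumed LR- of 0.2
    print(f"after finding 2: {p:.0%}")   # about 12%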


Steve


Stephen G. Pauker, MD, MACP, FACC, ABMH
Professor of Medicine and Psychiatry
===========================
Please note new email address;
spauker at tuftsmedicalcenter.org
===========================

________________________________

From: Danny Long [mailto:dannylong at EARTHLINK.NET]
Sent: Thu 4/24/2014 8:42 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] quick ?

When cover-up is the standard of care, who really knows the facts besides the ones doing the cover-up? The underlying motivation for nearly ending autopsies: just the truth.
Statistics
Errors related to missed or delayed diagnoses are a frequent cause of patient harm. In 2003, a systematic review of 53 autopsy studies from 1966 to 2002 was undertaken to determine the rate at which autopsies detect important, clinically missed diagnoses. Diagnostic error rates ranged from 4.1% to 49.8%, with a median error rate of 23.5%. Furthermore, approximately 4% of these cases revealed lethal diagnostic errors for which a correct diagnosis coupled with treatment could have averted death.[4] Other autopsy studies have shown similar rates of missed diagnoses; one study reported the rate to be between 10% and 12%,[5] while another placed it at 14%.[6] Autopsies are considered the gold standard for definitive evidence of diagnostic error, but they are being performed less frequently and provide only retrospective information.


http://patientsafetyauthority.org/ADVISORIES/AdvisoryLibrary/2010/Sep7%283%29/Pages/76.aspx


The CDC is well aware that death certificates are often falsified, and even the Joint Commission is against autopsies. So the prevailing logic is: keep the facts blurry, and the conversation about how bad the problem is will keep the public in the dark and make the diagnosis problem nearly impossible to do anything about. In other words, keep the excuses alive.




:-( Garbage in, garbage out, to keep the data corrupt.









