Misdiagnosis teaching and assessment tool

Bob Swerlick rswerli at GMAIL.COM
Sun Jul 28 16:05:38 UTC 2013


Yes, it is all about providing feedback in a time frame that is relevant.
The challenge in medicine is that this does not happen in any meaningful
way in the outpatient setting. The feedback physicians and trainees
receive is often simply about money and clinical volume, with a few
"quality" metrics salted in. At this point those metrics are generally
selected because they can be measured, not because they have real meaning.

We may be able to identify specific exercises within a training environment
where trainees can receive meaningful feedback, and Dr. Zamir deserves to be
lauded for his insight and clever approach. Perhaps, as trainees come to
appreciate the power of real-time feedback, a cultural change will follow,
with similar inroads being made in everyday practice. We can hope, but at
this point our cutting-edge feedback tools are Press Ganey surveys, which
tell us only whether our patients are happy or unhappy.


On Sat, Jul 27, 2013 at 9:27 PM, Ehud Zamir <ezamir at unimelb.edu.au> wrote:

> I have recently conducted an experiment where real patients volunteered to
> be examined by senior ophthalmology residents in a mock clinical exam.
> Several cases were chosen because they had a history of (usually harmless)
> misdiagnosis, but were presented to the trainees just as they had presented
> to the doctor, with (erroneous) presumptive diagnoses from previous doctors
> or with other contextual/cognitive biases. In the vast majority of cases,
> trainees misdiagnosed them in exactly the same way the original doctors
> had, stumbling over the cognitive biases or misleading contextual
> information inherent in the cases. I found this a good way to teach about
> diagnostic error, as it is not just a list of theoretical biases; it is a
> very authentic simulation. Trainees are in fact led towards making a
> diagnostic error (as the original doctor was), make the error in a
> controlled environment, and then receive feedback and "learn the lesson".
> In addition to being a good teaching model for diagnostic error, I believe
> it is also a very useful method for assessing diagnostic skills. It is a
> bit difficult logistically (finding patients with a history of
> misdiagnosis who still have the findings) but certainly feasible. If we
> value the skill of diagnosis, we have to test it under authentic
> conditions, in the presence of proven misleading factors. Lessons learnt
> from misdiagnosis should be used for training and assessment in a concrete
> manner.
>
> Ehud Zamir
> Centre for Eye Research Australia



-- 
Bob Swerlick







