The Path to Diagnostic Excellence Includes Feedback to Calibrate How Clinicians Think. | Health Care Quality | JAMA | JAMA Network

Bruno, Michael mbruno at PENNSTATEHEALTH.PSU.EDU
Mon Feb 11 21:45:21 UTC 2019

Excellent article!!!  Succinct and so powerful.

I’ve shared it with several of our residents and will also send it to our departmental Quality & Safety Committee.  Thanks Ashley & Hardeep!!

Michael A. Bruno, M.D., M.S., F.A.C.R.
Professor of Radiology & Medicine
Vice Chair for Quality & Patient Safety
Chief, Division of Emergency Radiology
Department of Radiology, H-066
Penn State Milton S. Hershey Medical Center
500 University Drive, Hershey PA 17033
(717) 531-8703  |  (717) 531-5737
mbruno at PENNSTATEHEALTH.PSU.EDU

From: David L Meyers [mailto:dm0015 at COMCAST.NET]
Sent: Friday, February 08, 2019 5:36 PM
Subject: [IMPROVEDX] The Path to Diagnostic Excellence Includes Feedback to Calibrate How Clinicians Think. | Health Care Quality | JAMA | JAMA Network

Sorry. Here’s the article I referred to a short while ago.
The Path to Diagnostic Excellence Includes Feedback to Calibrate How Clinicians Think
Improving diagnosis in health care is considered the next imperative for patient safety.1,2 Rapidly evolving diagnostic tests and treatments and competing priorities and pressures encountered by clinicians to deliver high-quality, low-cost health care make this a major challenge. Clinicians frequently balance undertesting, possibly missing a diagnosis, with pursuing overzealous diagnostic testing, which could be harmful and costly. Rigorous multidisciplinary research and innovation from cognitive psychology, human factors, informatics, and social sciences are needed to stimulate previous efforts to reduce diagnostic errors. The Moore Foundation’s recently announced $85 million, 6-year initiative on improving diagnostic excellence could be particularly transformative because it “aims to reduce harm from erroneous or delayed diagnoses” but also “goes beyond avoiding errors and includes consideration of cost, timeliness and patient convenience.”3
Achieving diagnostic excellence requires focusing on not just accuracy of diagnosis but concurrently on minimizing costs and enhancing timeliness and patient centeredness. The best diagnostician may be the clinician who makes the diagnosis using the fewest resources,4 while improving patient experience. Decision-making must optimize the balance between reducing overuse and addressing underuse of diagnostic tests and other resources, because both simultaneously exist in health care. Diagnostic excellence also requires managing diagnostic uncertainty appropriately and communicating that uncertainty to patients, and at times watchful waiting for complex and evolving diagnoses for which unfocused treatment efforts may be even more harmful.
Much overuse and underuse can be attributed to cognitive errors involving suboptimal analytic thinking or erroneous intuitive decision-making that can be averted by well-calibrated clinicians. Achieving diagnostic calibration (when clinicians’ confidence in the accuracy of their diagnostic decision-making aligns with their actual accuracy5) is a prerequisite for achieving diagnostic excellence, because it accounts for competing demands of addressing underuse and overuse (low confidence can lead to overtesting and high confidence to undertesting). In addition to strong foundational knowledge, well-calibrated clinicians have the appropriate amount of confidence in their diagnostic decisions and recognize and manage uncertainty appropriately, and this acumen guides their analytic thinking, intuition, and approach to diagnostic tests. These clinicians minimize harm from missed, delayed, and incorrect diagnoses (such as from delayed or wrong tests and treatments) as well as from diagnosing problems that would never cause symptoms (overdiagnosis). Although systems and policy approaches are also needed, developing well-calibrated clinicians is one potential solution to the dilemma of how to simultaneously reduce preventable harms from both underdiagnosis and overtesting.
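The notion of calibration described above (a clinician’s confidence aligning with actual diagnostic accuracy) can be made concrete with a small numerical sketch. Nothing below comes from the Viewpoint itself: the cases, confidence values, and binning scheme are invented for illustration, following the standard expected-calibration-error (ECE) style of confidence-versus-accuracy comparison used in forecasting and machine-learning research.

```python
# Hypothetical sketch: quantify diagnostic calibration as the gap
# between stated confidence and observed accuracy, binned by
# confidence level. All data are invented for illustration.

def calibration_report(cases, n_bins=5):
    """cases: list of (confidence in [0, 1], correct: bool) pairs.
    Returns per-bin rows (mean confidence, accuracy, count) and the
    expected calibration error (ECE)."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in cases:
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    rows, ece, total = [], 0.0, len(cases)
    for b in bins:
        if not b:
            continue
        mean_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        rows.append((mean_conf, accuracy, len(b)))
        # Weight each bin's confidence-accuracy gap by its share of cases.
        ece += (len(b) / total) * abs(mean_conf - accuracy)
    return rows, ece

# A well-calibrated clinician has ECE near 0; here the 90%-confidence
# cases are correct only a third of the time -- an overconfidence gap.
cases = [(0.9, True), (0.9, False), (0.9, False), (0.6, True),
         (0.6, False), (0.3, False), (0.3, False), (0.3, True)]
rows, ece = calibration_report(cases)
```

In this toy data the high-confidence bin is far less accurate than the stated confidence, so the summary surfaces exactly the kind of overconfidence (risking undertesting) that the article argues feedback should target.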
Research from applied psychology outside health care suggests that calibration can be achieved through consistent learning about an individual’s performance through feedback. Very little feedback occurs once clinicians enter practice, but measurement of diagnostic performance, if done correctly, could create appropriate feedback,2 as long as the communication occurs constructively and facilitates learning. However, determining how to provide diagnostic performance feedback to practicing clinicians for learning and improvement requires additional inquiry. Clinicians must learn about the ultimate accuracy of their diagnoses, as well as the processes that led them to those diagnoses (eg, which tests were ordered and whether they should have been) or why diagnostic performance was suboptimal.
Fundamental scientific principles underlying how to calibrate clinicians through feedback are not well established, and recommendations from other clinical feedback domains may not necessarily apply to diagnosis. For example, clinicians are concerned about both giving and receiving feedback about diagnostic delays in cancer6 and may be uncomfortable discussing diagnostic errors. As opposed to more systems-oriented and patient-focused factors that could explain poorer performance on quality measures of chronic disease prevention or management, diagnostic feedback may generate conversations about knowledge base or individual decision-making; consequently, at the individual level, clinicians may be uncomfortable with such feedback. Despite the strong influence of systems-related factors on diagnosis,1 current norms erroneously emphasize that when outcomes are poor, cognitive processes were more error prone and thus to blame. Feedback about diagnostic decision-making must account for an understanding of cognitive processes as well as the associated patient-related (case-specific) and systems (context-specific) factors. Hence, feedback on diagnostic performance requires additional implementation considerations beyond those for feedback related to quality measurement of chronic disease prevention or management, which by now many clinicians are accustomed to receiving.
To inform scientific principles of diagnostic performance feedback, certain general feedback principles that lead to learning and improved performance in other domains are important, along with specific considerations for diagnosis. For quantitative summaries (eg, monthly proportions of test results with timely follow-up), general principles should include using measures perceived as important and based on valid underlying data, ensuring timely presentation of performance data, focusing on areas with improvement opportunities (ie, for which performance is not at ceiling), having an ability to compare a clinician’s performance with another group’s performance or with a goal, and incorporating delivery of feedback in a routine and ongoing fashion contained within an overarching quality improvement structure. Aspects of performance that should be measured should include diagnostic processes that can be more easily tracked over time and acted upon, rather than just diagnostic outcomes that do not provide actionable information about which processes went wrong and which to improve. Additionally, feedback should ideally highlight patterns in performance rather than merely reflect case-specific or context-specific phenomena. Such specificity makes transfer of learning to other cases and contexts less likely. However, distilling lessons into specific behavior changes from aggregate quantitative data alone may be difficult because of the complexity of diagnosis and the myriad patient and situational factors involved in errors.7 Learning using summarized data may therefore be complemented through qualitative analysis of specific instances for which case-specific factors, context-specific factors, and diagnostic decision-making can be examined in depth. Feedback should include these specifics to allow clinicians receiving feedback to delve into their performance and investigate improvement opportunities for themselves, encouraging both self-assessment and accountability.
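As a minimal sketch of the kind of quantitative summary described above — a monthly proportion of abnormal test results with timely follow-up, compared with a peer group and an explicit goal — consider the following. The clinician names, counts, and the 80% goal are all hypothetical, not drawn from the article.

```python
# Hypothetical feedback summary: each clinician's timely-follow-up
# rate compared against a peer-group mean (excluding themselves) and
# a performance goal. Names, counts, and the goal are invented.

def followup_rate(timely, total):
    return timely / total if total else None

def feedback_summary(clinicians, goal=0.80):
    """clinicians: {name: (timely_count, total_count)}.
    Returns {name: (own_rate, peer_mean_excluding_self, meets_goal)}."""
    rates = {name: followup_rate(t, n) for name, (t, n) in clinicians.items()}
    report = {}
    for name, rate in rates.items():
        peers = [r for other, r in rates.items()
                 if other != name and r is not None]
        peer_mean = sum(peers) / len(peers) if peers else None
        report[name] = (rate, peer_mean, rate is not None and rate >= goal)
    return report

monthly = {"Clinician A": (42, 50),   # 84% timely follow-up
           "Clinician B": (30, 50),   # 60%
           "Clinician C": (45, 50)}   # 90%
report = feedback_summary(monthly)
```

Reporting the peer comparison alongside the goal follows the general feedback principle noted above: performance data are more actionable when a clinician can see both an external benchmark and an explicit target, delivered routinely rather than as a one-off.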
Although feedback should be given for both positive and negative outcomes, delivering feedback about suboptimal performance requires creation of a receptive learning environment because feedback conversations are poised to generate discomfort and feelings of blame. If clinicians can self-identify areas for which improvement is needed, providing positive, constructive feedback in those areas could create such an environment. Such sensitive feedback should be delivered verbally and confidentially in a supportive, nonjudgmental, nonpunitive fashion by a trusted source, but by whom and how it is delivered can be individualized. For example, a performance review with a supervisor may increase levels of concern and discomfort compared with a debrief or coaching session with a respected senior colleague. Clinicians without formal supervisors to provide feedback, which is true for most physicians in practice, could receive feedback through developing informal peer-to-peer collaborative learning networks.7 Feedback should highlight specific plans to improve performance that target simple behavior changes (eg, changes in test ordering) and learning opportunities that could improve diagnosis in the future. Additionally, because diagnosis is a team endeavor, feedback will often need to involve teams. Future scientific exploration will need to answer current unknowns such as how to deliver feedback effectively while maintaining clinician accountability, how to develop effective peer-to-peer networks, what unintended consequences occur with such feedback (eg, relative inattention to other clinical priorities, hypervigilance for certain previously missed conditions, or mistrust in feedback content or measures), and which specific diagnostic processes and outcomes should be tracked routinely and fed back to clinicians of different specialties.
Grounding diagnostic performance measurement and feedback using principles discussed in this Viewpoint could help ensure that such feedback does not become a threat to a clinician’s professional image. Creating effective feedback pathways within a learning health care system could produce better calibrated clinicians who prevent harm from missed diagnostic opportunities as well as from overdiagnosis, overtesting, and overtreatment. The future well-calibrated clinician is one of the most promising paths to achieving diagnostic excellence.
Article Information
Corresponding Author: Hardeep Singh, MD, MPH, Center for Innovations in Quality, Effectiveness and Safety (152), Michael E. DeBakey Veterans Affairs Medical Center (MEDVAMC), 2002 Holcombe Blvd 152, Houston, TX 77030 (hardeeps at).
Published Online: February 8, 2019. doi:10.1001/jama.2019.0113
Conflict of Interest Disclosures: Dr Meyer reported receiving the VA Health Services Research and Development Career Development Award (CDA-17-167). Dr Singh reported receiving the VA Health Services Research and Development Service Award (CRE12-033; Presidential Early Career Award for Scientists and Engineers USA 14-274), the VA National Center for Patient Safety, the Agency for Healthcare Research and Quality (R01HS022087), and the Gordon and Betty Moore Foundation. No other disclosures were reported.
Funding/Support: This work was supported in part by the Houston VA Health Services Research and Development Center for Innovations in Quality, Effectiveness and Safety (CIN13-413).
Role of the Funder/Sponsor: These funding sources had no role in the preparation, review or approval of the manuscript.
Disclaimer: The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Department of Veterans Affairs or the US government.
The Moore Foundation invests $85 million to improve diagnostic performance [news release]. Palo Alto, CA: Gordon and Betty Moore Foundation; November 1, 2018. $85-million-to-improve-diagnostic-performance. Accessed January 24, 2019.
McNamara  P, Shaller  D, De La Mare  D, Ivers  N.  Confidential Physician Feedback Reports: Designing for Optimal Impact on Performance. Rockville, MD: Agency for Healthcare Research and Quality; 2016.



To unsubscribe from IMPROVEDX, click the following link:

Visit the searchable archives or adjust your subscription at:

Moderator: David Meyers, Board Member, Society to Improve Diagnosis in Medicine

To learn more about SIDM visit:
