Two terrific new articles from JAMA this week!

Bruno, Michael mbruno at PENNSTATEHEALTH.PSU.EDU
Fri Dec 22 20:32:33 UTC 2017


Dear SIDM Colleagues and Friends:

Attached in PDF format are two terrific articles that were just published online by the Journal of the American Medical Association (JAMA).  The first is entitled "What This Computer Needs Is a Physician: Humanism and Artificial Intelligence," written by Drs. Abraham Verghese, Nigam Shah and Robert Harrington, all of Stanford.  Dr. Verghese, the lead author, is a prolific writer and thought leader in the field of humanistic medicine.  He has written a number of excellent articles in venues such as Health Affairs, NEJM, JAMA, and Annals of Internal Medicine, as well as in the popular press, most notably the New York Times, Wall Street Journal, The Washington Post and the San Francisco Chronicle, among others.  He is the author of three books, most recently the novel Cutting for Stone; his first two books were physician memoirs.  You can read more at his website, http://abrahamverghese.com/home/articles/.  Dr. Shah is a well-known expert in medical informatics.

The second article is by Andrew Bindman of UCSF, on the need for streamlined mechanisms to fund research on health systems science, such as reducing diagnostic error (the kind of work we try to foster in SIDM's Research Committee).

All best wishes in this Holiday season!

Michael A. Bruno, M.S., M.D., F.A.C.R.
Professor of Radiology & Medicine
Vice Chair for Quality & Patient Safety
Chief, Division of Emergency Radiology
Penn State Milton S. Hershey Medical Center
Phone: (717) 531-8703  |  Email: mbruno at hmc.psu.edu  |  Fax: (717) 531-5737




Viewpoint
December 20, 2017

What This Computer Needs Is a Physician
Humanism and Artificial Intelligence
Abraham Verghese, MD1; Nigam H. Shah, MBBS, PhD1; Robert A. Harrington, MD1
1 Department of Medicine, Stanford University School of Medicine, Stanford, California
JAMA. Published online December 20, 2017. doi:10.1001/jama.2017.19198



The nationwide implementation of electronic medical records (EMRs) resulted in many unanticipated consequences, even as these systems enabled most of a patient's data to be gathered in one place and made those data readily accessible to clinicians caring for that patient. The redundancy of the notes, the burden of alerts, and the overflowing inbox have led to the "4000 keystrokes a day" problem [1] and have contributed to, and perhaps even accelerated, physician reports of symptoms of burnout. Even though the EMR may serve as an efficient administrative business and billing tool, and even as a powerful research warehouse for clinical data, most EMRs serve their front-line users quite poorly. The unanticipated consequences include the loss of important social rituals (between physicians, and between physicians and nurses and other health care workers) around the chart rack and in the radiology suite, where all specialties converged to discuss patients.

The lessons learned with the EMR should serve as a guide as artificial intelligence and machine learning are developed to help process and creatively use the vast amounts of data being generated in the health care system. Outside of medicine, the use of artificial intelligence in predictive policing, bail decisions, and credit scoring has shown that artificial intelligence can actually exaggerate racial and other bias. For example, a program used for risk assessment by US courts mistakenly flagged black prisoners as likely to reoffend at twice the rate it mistakenly flagged white prisoners [2].
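The disparity reported in that analysis is, concretely, a gap in false positive rates between groups. A minimal sketch of how such an audit is computed (synthetic data and hypothetical group labels, not the actual COMPAS analysis):

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Fraction of truly negative cases the model flags as positive."""
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives] == 1)

# Synthetic illustration only: outcomes (1 = reoffended) and model flags.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group = rng.choice(["A", "B"], size=1000)  # hypothetical group labels

# A fair model, by this criterion, would show similar rates per group.
for g in ["A", "B"]:
    mask = (group == g)
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
```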

Similar concerns around artificial intelligence predictive models in health care have been discussed: clearly, in the 3-step process of selecting a dataset, creating an appropriate predictive model, and evaluating and refining the model, there is nothing more critical than the data. Bad data (such as from the EMR) can be amplified into worse models. For example, a model might classify patients with a history of asthma who present with pneumonia as having a lower risk of mortality than those with pneumonia alone [3], not registering that this is an artifact of clinicians admitting and treating such patients earlier and more aggressively. And because a machine-learned model presents no human interface and cannot be interrogated, some clinicians are likely to view the "black box" with suspicion, even if its predictions are extraordinarily accurate.
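To make the asthma example concrete, here is a synthetic sketch (all risk numbers invented for illustration) of how a model trained on confounded outcome data "learns" that asthma is protective, when the real driver is that asthma patients were treated earlier and more aggressively:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000
asthma = rng.integers(0, 2, size=n)

# Hidden confounder: asthma patients with pneumonia are admitted and
# treated aggressively, so their *observed* mortality is lower, even
# though asthma itself does not reduce risk.
mortality_prob = np.where(asthma == 1, 0.05, 0.10)
died = rng.random(n) < mortality_prob

# A model fit on the observed outcomes, blind to the treatment pathway,
# absorbs the artifact: asthma gets a negative ("protective") weight.
model = LogisticRegression().fit(asthma.reshape(-1, 1), died)
print("asthma coefficient:", model.coef_[0][0])          # negative
print("predicted mortality risk (asthma=1 vs asthma=0):",
      model.predict_proba([[1], [0]])[:, 1])
```

Deployed naively as a triage tool, such a model would send the very patients who benefited from aggressive care home with less of it.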

The missing piece in the dialectic around artificial intelligence and machine learning in health care is the key step of separating prediction from action and recommendation. That separation requires a change in how clinicians think about using models developed with machine learning. In 2001, the statistician Breiman [4] suggested the need to move away from the culture of assuming that models that are not causal and cannot explain the underlying process are useless. Instead, clinicians should seek a partnership in which the machine predicts (at a demonstrably higher accuracy) and the human explains and decides on action. The same sentiment was expressed by Califf and Rosati as early as 1981, in an editorial on predictive risk factors emerging from a computer database on exercise testing for coronary artery disease: "Proper interpretation and use of computerized data will depend as much on wise doctors as any other source of data in the past." [5]

The 2 cultures—the computer and the physician—must work together. For example, clinicians are biased toward optimistic prediction, often overestimating life expectancy by a factor of 5, while predictive models trained on vast amounts of data do better; using well-calibrated probability estimates of an outcome, clinicians can then act appropriately for patients at the highest risk [6]. The lead time a predictive model can offer to allow for an alternative action matters a great deal: a model is useful only if it provides well-calibrated levels of risk for each outcome and leaves time for the alternative action to be executed. In short, a black-box model can lead physicians to good decisions, but only if they keep human intelligence in the loop, bringing in the societal, clinical, and personal context. Additionally, the unique human brain and clinical training can generate new ideas, see new applications and uses of artificial intelligence and machine learning, and connect these technologies to the humanities and the social sciences in ways that current computers do not.
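As a sketch of what "well-calibrated" means in practice, the following compares predicted probabilities against observed outcome frequencies, then flags only the highest-risk patients for clinician review (toy data; ACT_THRESHOLD is an illustrative policy choice, not a clinical recommendation):

```python
import numpy as np
from sklearn.calibration import calibration_curve

# Stand-ins for a held-out set: y_true (outcomes) and p_hat (model
# probability estimates); here generated as perfectly calibrated toy data.
rng = np.random.default_rng(1)
p_hat = rng.random(5000)
y_true = rng.random(5000) < p_hat

# Reliability check: within each bin of predicted probability, the
# observed outcome frequency should match the prediction.
frac_pos, mean_pred = calibration_curve(y_true, p_hat, n_bins=10)
for mp, fp in zip(mean_pred, frac_pos):
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")

# Prediction is separate from action: the model scores, the clinician
# (or a policy layer they control) decides who gets reviewed and when.
ACT_THRESHOLD = 0.8
flagged = np.where(p_hat >= ACT_THRESHOLD)[0]
print(f"{len(flagged)} patients flagged for clinician review")
```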

The ability of artificial intelligence to automate and help with the clerical functions (such as servicing the EMR) that now take up so much of a clinician's time would also be welcome. Although not yet accurate enough, automated charting using speech recognition during a patient visit would be valuable and could free clinicians to return to facing the patient rather than spending almost twice as much time on the "iPatient"—the patient file in the EMR [7]. More time for human-to-patient interaction might both improve care and allow physicians to record, and accurately register, more phenotypes [8] and more nuance. Better diagnosis, and diagnostic algorithms providing more accurate differential diagnoses, might reshape the traditional CPC (clinicopathological conference) exercise, just as the development of imaging modalities and sophisticated laboratory testing made the autopsy less relevant.

As with the EMR, there are legitimate concerns that artificial intelligence applications might jeopardize critical social interactions between colleagues and with the patient, affecting the lived experiences of both groups. But concerns about physician "unemployment" and "de-skilling" are overblown [9]. In the same manner that automated blood pressure measurement and automated blood cell counts freed clinicians from some tasks, artificial intelligence could bring back meaning and purpose to the practice of medicine while providing new levels of efficiency and accuracy. Physicians must proactively guide, oversee, and monitor the adoption of artificial intelligence as a partner in patient care.

In the care of the sick, physicians play what Tinsley Harrison called the "priestly function of the physician." Human intelligence working with artificial intelligence—a well-informed, empathetic clinician armed with good predictive tools and unburdened from clerical drudgery—can bring physicians closer to fulfilling Peabody's maxim that the secret of care is in "caring for the patient."
References


1. Hill RG Jr, Sears LM, Melanson SW. 4000 clicks: a productivity analysis of electronic medical records in a community hospital ED. Am J Emerg Med. 2013;31(11):1591-1594. PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24060331

2. Angwin J, Larson J, Mattu S, Kirchner L. Machine bias: there's software used across the country to predict future criminals—and it's biased against blacks. ProPublica. May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed November 30, 2017.

3. Caruana R, Lou Y, Gehrke J, Koch P, Sturm M, Elhadad N. Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY: Association for Computing Machinery; 2015:1721-1730.

4. Breiman L. Statistical modeling: the two cultures. Stat Sci. 2001;16(3):199-231.

5. Califf RM, Rosati RA. The doctor and the computer. West J Med. 1981;135(4):321-323. PubMed: https://www.ncbi.nlm.nih.gov/pubmed/7342460

6. Avati A, Jung K, Harman S, Downing L, Ng A, Shah NH. Improving palliative care with deep learning. Presented at: 2017 IEEE International Conference on Bioinformatics and Biomedicine; Kansas City, MO; November 13-16, 2017.

7. Verghese A. Culture shock—patient as icon, icon as patient. N Engl J Med. 2008;359(26):2748-2751. PubMed: https://www.ncbi.nlm.nih.gov/pubmed/19109572

8. Halpern Y, Horng S, Choi Y, Sontag D. Electronic medical record phenotyping using the anchor and learn framework. J Am Med Inform Assoc. 2016;23(4):731-740. PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27107443

9. Cabitza F, Rasoini R, Gensini GF. Unintended consequences of machine learning in medicine. JAMA. 2017;318(6):517-518. PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28727867



Article Information

Corresponding Author: Abraham Verghese, MD, Department of Medicine, Stanford University School of Medicine, 300 Pasteur Dr, S102, Stanford, CA 94305-5110 (abrahamv at stanford.edu).

Published Online: December 20, 2017. doi:10.1001/jama.2017.19198

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Verghese reported receiving royalties from Random House and Scribner; receiving speaker fees from Leigh Bureau; and receiving fees for serving on the Gilead Global Access Advisory Board. Dr Harrington reported receiving research grants outside the topic area from Merck, CSL Behring, GlaxoSmithKline, Regado, Sanofi-Aventis, AstraZeneca, Portola, Janssen, Bristol Myers Squibb, Novartis, and The Medicines Company; serving as a consultant for Amgen, Adverse Events, Bayer, Element Science, Gilead, MyoKardia, Merck, The Medicines Company, and WebMD; and serving on the boards of directors of Signal Path, Scanadu, the American Heart Association, and Stanford Healthcare. Dr Shah reported no disclosures.

