Artificial intelligence in radiology: Friend or foe? | Diagnostic Imaging

HM Epstein hmepstein at GMAIL.COM
Tue Oct 30 16:13:46 UTC 2018

Thank you, Michael. I appreciate your detailed answer. I’m fascinated by the dialogue currently happening amongst radiologists. I think AI has a lot of promise, but I’ve always thought of it as an aid to, not a director of, decision-making. I’m curious: what did you mean by a “throw-away journal”? Is the journal Diagnostic Imaging low-circulation, low-impact, or just filled with junk science?

On Oct 30, 2018, at 10:44 AM, Bruno, Michael <mbruno at> wrote:

Hi Helene,
I think the short answer is: nobody really knows.  But probably.
All of us in the field of Diagnostic Radiology have been wondering about this very question (I actually wrote a book chapter on the topic very recently), probably since the noted computer scientist Geoff Hinton of the U of Toronto—who now works for Google—famously opined that we should stop training radiologists immediately, since computers will very soon replace them all.  He went on to say that if you work as a radiologist you’re like the Coyote in the Road Runner cartoons, who has gone off a cliff but hasn’t looked down yet: your doom is sealed and everybody knows it but you!  He has since recanted a bit, but he still maintains that computers will soon do everything better than people.
So perhaps as a result of these very provocative comments by such a major leader in the field, and perhaps driven by large amounts of speculative investment money, or maybe by the recent and remarkable success of a DL algorithm at beating the world champion at the complex game of “Go,” there is a good deal of breathless hyperbole about how AI, especially ML and DL, will change the field of radiology and, by extension, medicine as a whole.  There is a growing faith that revolutionary change is imminent.  I suspect this will be followed by disappointment.  But in general I’ve come to the conclusion that the technology is promising in many ways, and it may well have a positive impact on diagnostic accuracy in the future.  But it is very far from replacing human diagnostic acumen.
AI, especially the subsets of machine-learning and “deep-learning,” has captured the attention of the entire field of diagnostic radiology, and a great deal of time is now being devoted to the topic at Radiology’s large national meetings, especially the Annual RSNA Meeting in Chicago and recent Annual Meetings of the American College of Radiology (ACR) in Washington, DC.  In fact, I just got back from the 10th Annual ACR Quality & Safety Meeting in Boston this past weekend, and a LOT of time was devoted to AI and how it is going to impact the field.  The speakers point to a bright utopian future which is “just around the corner,” when AI will have a dramatic impact.  But there is no operational plan to take us from where we are to this Emerald City in the distance.
From the standpoint of error in Radiology, the most promising AI technology is merely an improvement on an existing one, namely computer-assisted detection (CAD).  CAD, a conceptual forerunner of today’s DL-based approaches, has been with us since the 1990s, primarily in mammography, and in general it has been a huge disappointment: there has been no meaningful separation in accuracy between radiologists who use CAD to help them with mammograms and those who do not.  But computer technology has been improving exponentially since then (as predicted by Moore’s Law, named after the same Moore whose charitable Foundation is doing so much for us in SIDM), and as a result CAD will probably get a lot better in the near future.  Keep in mind that the most common errors we make in radiology are simple perceptual errors, i.e., those in which the abnormality is simply not seen, though it is often easily appreciated in retrospect—or by a second reader.  We would dramatically improve our error rates in Radiology if every study were independently interpreted by two radiologists.  Each would bring their human 3–4% error rate, but it is extremely unlikely that BOTH readers would miss the SAME abnormalities in any given study.  L. Henry Garland proposed this back in 1949, and that basic reality hasn’t changed; it is probably neuro-biological in origin.  In a recent study, double-reading was found to have a 12x benefit for error detection before those errors reached the patient!
Sadly, there aren’t enough radiologists to allow uniform double-reading of all of the imaging studies performed each year, even if we could afford the cost of that double-reading.  But if an AI classifier based on DL could be developed with 70–80% accuracy, not a far-fetched goal, it could be a workable second reader, and the promise of CAD would at last be realized.  That ought to be enough to cut radiologist errors in half or better.  It would be huge.
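The arithmetic behind the second-reader argument can be sketched in a few lines. The numbers below are illustrative assumptions (a 3.5% per-reader miss rate taken from the 3–4% figure above, and a hypothetical DL reader at 75% accuracy), and the calculation assumes the two readers’ misses are statistically independent — an idealization, since human readers share perceptual biases, so the real-world benefit would be somewhat smaller:

```python
# Back-of-envelope error rates for double reading, under the
# (loudly labeled) assumption that the two readers miss independently.

human_miss = 0.035   # ~3-4% per-study perceptual error rate (illustrative)
ai_miss = 0.25       # hypothetical DL second reader at 75% accuracy

# Probability that an abnormality slips past BOTH readers:
both_humans_miss = human_miss ** 2          # two independent radiologists
human_plus_ai_miss = human_miss * ai_miss   # radiologist + AI second reader

print(f"Two human readers: {both_humans_miss:.4%}")   # ~0.12%
print(f"Human + AI reader: {human_plus_ai_miss:.4%}") # ~0.88%
```

Even under these rough assumptions, pairing a radiologist with an imperfect AI second reader drops the expected miss rate from 3.5% to under 1% — consistent with the “half or better” estimate, though independence is the optimistic case.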
Along the same lines, using eye-tracking technology it has been shown that in cases where abnormalities are ultimately missed on images, the radiologist did in fact look at those areas of the image.  There may be benefit for AI technologies that incorporate eye-tracking to highlight areas where the doctor’s gaze had stalled, as a flag to show radiologists where they might take another look before signing off on their reports.  Both of these technologies exist currently in sub-optimal form, and both are within reach of getting to a useful level.  Right now, AI is being used in some areas to prioritize cases for reading, monitor individual radiologist performance, and the like.  Each of these applications has its own potential value (and threat).
The article you mentioned from the ‘throw-away journal’ Diagnostic Imaging highlights the very real fact that there are still MAJOR problems in the two-way communication between radiologists and other health care providers, which is another major source of errors that reach patients.  I have long lamented the loss of the traditional face-to-face communication that we used to have with the referring clinician on every case, every day.  Our written reports are no substitute for this, and a lot of error flows from the loss of reciprocity in the human interaction, which cannot be duplicated using an IT intermediary.  I do think that better AI / IT methods may help with these communication lapses, but ultimately communication between people is a human enterprise, is subject to human limitations, and will ultimately require a human solution.
For example, at Hershey we’ve instituted a communication program known as “Failsafe,” in which cases with incidental findings requiring delayed follow-up (such as possible, but not certain, cancers) are placed into a queue where, using an IT-based infrastructure, the patient receives a letter notifying them that there is a potential issue.  We found that this IT-based solution was completely ineffective by itself, so we added the human touch: a nurse who repeatedly calls these patients urging them to take ownership of this issue, to take it seriously, and take action to obtain the needed resolution to their diagnostic question.  The nurse also allows us to collect data on these cases which is showing the positive effect the program is having.  We found that the human element was essential.  Other places have put all of their “eggs” in the “basket” of IT, which I think places far too much faith in the automated solutions to human problems.
The American College of Radiology has established a Data Science Institute, which is largely focused on future AI applications in Radiology.  I’ve been involved as a member of the AI Task Force since this was put together a few years ago, and it has been fun watching this unfold.  A few interesting publications, White Papers and future-looking statements have come forth from this body.  The bulk of it is very forward-thinking, imagining a terrific future state in which we all benefit from the amazing power of AI, but leaving large gaps unfilled between “where we are now” and “where we are going.”  To my way of thinking, a great deal of it seems to be driven by speculation and wild optimism.  Take the idea that, in the future, Radiology and Pathology will merge into a single specialty of “information management,” and that radiologists and pathologists who don’t get with this program will soon become roadkill.  This idea was published in a very high-impact journal by a real leader in the field (Dr. Jha of the U of P).  But no one has yet proposed a workable plan to take us down the yellow brick road that stretches out before us.  We are told that it leads to an Emerald City where there lives a Great Wizard of AI, a whiz of a wiz if ever a wiz there was!
Sorry this has been such a long message.  I’ll sum it up by saying that there seems to be some justification for cautious optimism that AI methods, especially those based on Deep Learning (DL), may be beneficial in reducing diagnostic error in Radiology, and that these methods will be broadly, if slowly, adopted by the (very conservative) field of Diagnostic Radiology once they are shown to have real benefits for patients.  I don’t think that radiologists will allow market forces to push them into using technologies that no one understands how to use properly, and I also don’t think that AI is on the verge of bringing about some sort of future imaging utopia where all the Luddites have been purged and the enlightened few AI-enhanced radiologists who remain become diagnostic superheroes, powered by a machine intelligence that makes the human variety seem quaint (and redundant).
Just one man’s opinion!
All the best,
Michael A. Bruno, M.D., M.S., F.A.C.R.   
Professor of Radiology & Medicine
Vice Chair for Quality & Patient Safety
Chief, Division of Emergency Radiology
Penn State Milton S. Hershey Medical Center
Phone: (717) 531-8703  |  Email: mbruno at  |  Fax: (717) 531-5737
From: HM Epstein [mailto:hmepstein at GMAIL.COM] 
Sent: Monday, October 29, 2018 12:08 PM
Subject: [IMPROVEDX] Artificial intelligence in radiology: Friend or foe? | Diagnostic Imaging
Will AI in radiology reduce diagnostic error? I know that many on this listserv believe that the loss of communication between clinicians and radiologists has caused as many diagnostic issues as errors made by radiologists interpreting images.
Will AI help? 
And even if it doesn’t help, do we think that this quote from the article is true: “Artificial intelligence won’t necessarily replace radiologists, but it will replace radiologists who don’t use artificial intelligence.” 

Will market forces mean that radiologists have to adopt AI even if they don’t understand how to use it properly?


Moderator: David Meyers, Board Member, Society to Improve Diagnosis in Medicine
