Deep Learning—A Technology With the Potential to Transform Health Care | Health Informatics | JAMA | JAMA Network
lehmann at JHMI.EDU
Thu Sep 13 14:53:09 UTC 2018
Geoffrey Hinton is a Big Name in neural networks, way before Google, so it's great that he's tackling health care.
However, I think SIDM needs to hold Deep Learning's feet to the fire. All sorts of people want to apply these algorithms to EHR data. When I asked another Big Name at Google AI how they plan to deal with all the errors and biases in EHRs, he admitted not having much of an answer ("we have a guy working on it"). Physicians' (and others') expertise, whether as prior distributions or prior causal knowledge, is not included in Deep Learning.
Social services offer an object lesson in the unintended consequences (or perhaps they were not so unintended) of applying machine learning to poverty. [Virginia Eubanks: Automating Inequality] Steve Downs and I have a more technical article on the 10 Commandments of sharing knowledge models across institutions [Learning Health Systems<https://onlinelibrary.wiley.com/doi/10.1002/lrh2.10065> 3 Aug 2018].
On Sep 13, 2018, at 9:51 AM, David L Meyers <dm0015 at COMCAST.NET> wrote:
An interesting article on neural networks, a form of artificial intelligence.
Deep Learning—A Technology With the Potential to Transform Health Care
Widespread application of artificial intelligence in health care has been anticipated for half a century. For most of that time, the dominant approach to artificial intelligence was inspired by logic: researchers assumed that the essence of intelligence was manipulating symbolic expressions, using rules of inference. This approach produced expert systems and graphical models that attempted to automate the reasoning processes of experts. In the last decade, however, a radically different approach to artificial intelligence, called deep learning, has produced major breakthroughs and is now used on billions of digital devices for complex tasks such as speech recognition, image interpretation, and language translation. The purpose of this Viewpoint is to give health care professionals an intuitive understanding of the technology underlying deep learning. In an accompanying Viewpoint, Naylor1 outlines some of the factors propelling adoption of this technology in medicine and health care.
What Neural Networks Can Do
Artificial neural networks are inspired by the ability of brains to learn complicated patterns in data by changing the strengths of synaptic connections between neurons. Deep learning uses deep networks with many intermediate layers of artificial “neurons” between the input and the output, and, like the visual cortex, these artificial neurons learn a hierarchy of progressively more complex feature detectors. By learning feature detectors that are optimized for classification, deep learning can substantially outperform systems that rely on features supplied by domain experts or that are designed by hand.2
Deep learning excels at modeling extremely complicated relationships between inputs and outputs. This technology can be used for tasks as different as predicting future medical events from past events3 and predicting cardiovascular health from fundus images of the retina.4 Deep learning is already achieving results that equal or surpass those of human experts. For example, in a 2017 report, Esteva et al5 compiled a database of 129 450 labeled images of hundreds of different skin lesions. Approximately 2000 images with accurate diagnostic labels based on skin biopsies were used for test purposes, and the rest were used to retrain a convolutional neural network that had previously been trained to recognize everyday objects in cluttered images. The skin lesion images used for retraining varied widely in quality, and no further information was provided to the convolutional neural network other than the image pixels and the lesion label. The network and groups of 21 to 25 board-certified dermatologists then reviewed subsets of the unlabeled test images and decided whether the correct clinical course was a biopsy for possible malignancy or reassurance of the patient. Sensitivity for the majority of the dermatologists was lower than that of the convolutional neural network when matched for specificity, and their specificity was lower than that of the convolutional neural network when matched for sensitivity for identifying images with melanoma, as well as for images of basal and squamous cell carcinoma.
A Brief History of Artificial Neural Networks
The simple neural nets of the 1960s had to be provided with hand-designed feature detectors and they simply learned how much weight to give to each detector. The introduction in 1986 of the back-propagation procedure6 (explained below) allowed neural networks to design their own feature detectors, which made them much more powerful at modeling complicated relationships between their inputs and outputs, especially when they used multiple layers of learned features. However, despite some promising results in the 1990s in reading the numeric amounts on checks, it proved difficult to train deep neural networks and they did not consistently outperform other simpler machine-learning techniques.
What changed? In simple terms, computers became millions of times faster, data sets got thousands of times bigger, and researchers discovered many technical refinements that made neural nets easier to train.
How Deep Learning Works
Consider the problem of deciding whether a patient has a specific disease when given a large number of numeric input variables that represent characteristics of the patient. One standard approach is to use simple logistic regression that estimates how to weight each input variable so that their weighted sum is a good indicator of the disease. Since health and disease often involve complex interactions, a statistician can add extra inputs, known as interaction terms, each representing the product of 2 or more input variables. However, if multiway interactions need to be modeled, the number of interaction terms increases exponentially.
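The combinatorial growth the paragraph describes is easy to see by counting. The sketch below is illustrative only (the function name and variable counts are invented for the example, not taken from the article):

```python
from itertools import combinations

def count_interaction_terms(n_inputs, max_order):
    """Number of product terms needed to represent every interaction
    of order 2 up to max_order among n_inputs input variables."""
    return sum(len(list(combinations(range(n_inputs), k)))
               for k in range(2, max_order + 1))

# With 20 patient variables, pairwise interactions stay manageable...
pairwise = count_interaction_terms(20, 2)    # 190 extra terms
# ...but allowing up to 5-way interactions explodes the model.
five_way = count_interaction_terms(20, 5)    # 21679 extra terms
print(pairwise, five_way)
```

This is why adding hand-chosen interaction terms to a regression does not scale to the multiway interactions that health and disease can involve.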
The neural network alternative is to add a layer of “hidden factors” (ie, features). The first step is to determine which hidden factors are active, and then the active ones are used to determine whether the disease is present. To prevent the model from becoming too big while allowing factors to reflect many input variables, the number of hidden factors is limited, rather than the number of input variables that contribute to each factor. The challenge is then to learn a good set of hidden factors by repeatedly modifying the weights on connections from the input variables to the hidden factors and the weights on connections from the hidden factors to the output variable.
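A minimal sketch of this architecture, with many input variables funneled through a small layer of hidden factors (all sizes, weight scales, and names here are illustrative assumptions, not the article's):

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 100, 10  # many inputs, deliberately few hidden factors

# Weights from inputs to hidden factors, and from hidden factors to output.
W_in = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
w_out = rng.normal(scale=0.1, size=n_hidden)

def predict(x):
    """Forward pass: determine how active each hidden factor is,
    then combine the active factors into a disease probability."""
    hidden = np.tanh(x @ W_in)           # activation of each hidden factor
    logit = hidden @ w_out               # weighted vote of the factors
    return 1.0 / (1.0 + np.exp(-logit))  # squash to a probability

x = rng.normal(size=n_inputs)            # one synthetic patient
p = predict(x)
```

Each hidden factor can depend on all 100 inputs, yet the model has only 100 × 10 + 10 weights; the learning problem is to find values for those weights.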
In principle, a learning procedure could repeatedly choose single weights at random, make a small change, and keep this change if it improves the performance of the whole net, but this would be extremely slow. In a neural net with a million weights, back-propagation achieves the same goal about a million times faster than blind trial and error. Instead of changing weights and measuring the effect, the neural network takes the discrepancy between the output produced by the network for each patient and the target output and propagates this discrepancy backward through the network to compute, for all of the weights, how a small change in a weight would reduce the discrepancy. The network then changes every weight in the direction that reduces the discrepancy by an amount proportional to how rapidly it reduces the discrepancy.
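The backward computation described above can be written out explicitly for a tiny network. This is a hedged sketch with invented sizes and a squared-error discrepancy, not code from the article; the point is that one backward pass yields the gradient for every weight at once:

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_hidden = 4, 3
W_in = rng.normal(scale=0.5, size=(n_inputs, n_hidden))
w_out = rng.normal(scale=0.5, size=n_hidden)

def step(x, target, lr=0.05):
    """One forward pass, then back-propagation of the discrepancy."""
    global W_in, w_out
    h = np.tanh(x @ W_in)        # hidden-factor activations
    y = h @ w_out                # network output
    err = y - target             # the discrepancy
    # Backward pass: how a small change in each weight changes the error,
    # computed for all weights in one sweep rather than one at a time.
    grad_w_out = err * h
    grad_W_in = np.outer(x, err * w_out * (1 - h**2))
    # Change every weight in proportion to how much it reduces the error.
    w_out -= lr * grad_w_out
    W_in -= lr * grad_W_in
    return 0.5 * err**2

x, target = np.array([1.0, -1.0, 0.5, 2.0]), 1.0
losses = [step(x, target) for _ in range(100)]
```

After repeated steps the discrepancy shrinks, without ever perturbing weights one at a time as the blind trial-and-error procedure would.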
Back-propagation can be used to train deep networks that have many layers of hidden factors, with each layer of features depending on the features in the preceding layer. For complex image interpretation, as occurs in many medical applications, neural networks can be improved by making a separate copy of each feature detector for every position in the image. After updating of the incoming weights of each copy, the corresponding weights are averaged so that all copies use an identical set of weights. This is called a convolutional neural network,7 and it allows knowledge acquired by looking at one part of an image to be applied at every location in subsequent images.
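Weight sharing across positions is the essence of the convolution. A one-dimensional toy version (the kernel and signal here are invented for illustration) shows the same three weights detecting a pattern wherever it occurs:

```python
import numpy as np

def conv1d(signal, kernel):
    """Slide one shared feature detector (the kernel) across every
    position of the input, reusing the same weights at each location."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

# A simple "edge" detector: it responds positively to a rising edge
# and negatively to a falling one, no matter where the edge appears.
edge = np.array([-1.0, 0.0, 1.0])
signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
response = conv1d(signal, edge)
```

In a real convolutional network the kernel weights are learned by back-propagation, with the gradients from every position averaged so that all copies stay identical.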
For modeling sequences, such as a patient’s medical history, a “recurrent” neural network can be used that takes in one term at a time.7 In addition to the connections coming from the layer below, each layer of a recurrent network has weighted connections coming from its own activations at the previous time step, and this allows the layers to accumulate and transform information over time. The back-propagation phase then sends the discrepancy between a prediction of the next term in the sequence and the actual next term backward through the layers and also backward through the time steps.
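The recurrence can be sketched in a few lines. This toy encoder (sizes, scales, and the synthetic "histories" are all assumptions for illustration) shows how each step combines the new input with the layer's own previous activations:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden = 5, 8
W_x = rng.normal(scale=0.3, size=(n_in, n_hidden))      # input connections
W_h = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # recurrent connections

def encode(sequence):
    """Take in one term at a time; each step mixes the new input with
    the hidden activations carried over from the previous time step."""
    h = np.zeros(n_hidden)
    for x in sequence:
        h = np.tanh(x @ W_x + h @ W_h)
    return h

# Two synthetic event sequences that share a prefix but then diverge
# leave the accumulated hidden state in different places.
history_a = [rng.normal(size=n_in) for _ in range(4)]
history_b = history_a[:3] + [rng.normal(size=n_in)]
h_a, h_b = encode(history_a), encode(history_b)
```

Training such a network back-propagates the prediction discrepancy through the layers and backward through the time steps, exactly as the paragraph describes.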
A Caveat About Interpretability
Understandably, clinicians, scientists, patients, and regulators would all prefer to have a simple explanation for how a neural net arrives at its classification of a particular case. In the example of predicting whether a patient has a disease, they would like to know what hidden factors the network is using. However, when a deep neural network is trained to make predictions on a big data set, it typically uses its layers of learned, nonlinear features to model a huge number of complicated but weak regularities in the data. It is generally infeasible to interpret these features because their meaning depends on complex interactions with uninterpreted features in other layers. Also, if the same neural net is refit to the same data, but with changes in the initial random values of the weights, there will be different features in the intermediate layers. This reflects that unlike models in which an expert specifies the hidden factors, a neural net has many different and equally good ways of modeling the same data set. It is not trying to identify the “correct” hidden factors. It is merely using hidden factors to model the complicated relationship between the input variables and the output variables.
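The claim that refitting with different initial random weights yields different hidden features can be demonstrated on a toy problem. Everything below (the data, sizes, and training loop) is an invented illustration, not the article's method:

```python
import numpy as np

def fit_hidden_features(seed, steps=500, lr=0.2):
    """Fit the same tiny net to the same data from a different random
    starting point; return the learned input-to-hidden weights."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    t = np.array([0., 1., 1., 0.])        # the data never change
    W = rng.normal(size=(2, 4))           # only the initialization differs
    v = rng.normal(size=4)
    for _ in range(steps):
        H = np.tanh(X @ W)                # hidden-feature activations
        err = H @ v - t                   # discrepancy on every case
        v -= lr * (H.T @ err) / len(t)
        W -= lr * (X.T @ (np.outer(err, v) * (1 - H**2))) / len(t)
    return W

W_run1 = fit_hidden_features(seed=0)
W_run2 = fit_hidden_features(seed=1)      # different features, same data
```

The two runs model the same data with different intermediate features, which is why asking "what do the hidden factors mean?" has no single answer.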
The Future of Deep Learning
As data sets get bigger and computers become more powerful, the results achieved by deep learning will get better, even with no improvement in the basic learning techniques, although these techniques are being improved. The neural networks in the human brain learn from fewer data and develop a deeper, more abstract understanding of the world. In contrast to machine-learning algorithms that rely on provision of large amounts of labeled data, human cognition can find structure in unlabeled data, a process commonly termed unsupervised learning. The creation of a smorgasbord of complex feature detectors based on unlabeled data appears to set the stage for humans to learn a classifier from only a small amount of labeled data. How the brain does this is still a mystery, but will not remain so. As new unsupervised learning algorithms are discovered, the data efficiency of deep learning will be greatly augmented in the years ahead, and its potential applications in health care and other fields will increase rapidly.
Corresponding Author: Geoffrey Hinton, PhD, Google Brain Team and Department of Computer Science, University of Toronto, 6 King's College Rd, Toronto, ON M5S 1A1, Canada (geoffrey.hinton at gmail.com).
Published Online: August 30, 2018. doi:10.1001/jama.2018.11100<http://jamanetwork.com/article.aspx?doi=10.1001/jama.2018.11100>
Conflict of Interest Disclosures: The author has completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Hinton reports owning stock in Google.
Additional Contributions: I am deeply indebted to C. David Naylor, MD, DPhil, for discussions that shaped this Viewpoint and for sharing his knowledge of health care.
Naylor CD. On the prospects for a (deep) learning health care system [published online August 30, 2018]. JAMA. doi:10.1001/jama.2018.11103
Goodfellow I, Bengio Y, Courville A. Deep Learning. Vol 1. Cambridge, MA: MIT Press; 2016.
David L Meyers, MD FACEP
Listserv Moderator/Board member
Society to Improve Diagnosis in Medicine
Save the Dates: Diagnostic Error in Medicine, November 4-6, 2018; New Orleans, LA
Diagnostic Error in Medicine-2nd European Conference, August 30-31, 2018; Bern, Switzerland
AusDEM2019, April 28-30, 2019;
Address messages to: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG<mailto:IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG>
To unsubscribe from IMPROVEDX, click the following link: http://list.improvediagnosis.org/scripts/wa-IMPDIAG.exe?SUBED1=IMPROVEDX&A=1
or send email to: IMPROVEDX-SIGNOFF-REQUEST at LIST.IMPROVEDIAGNOSIS.ORG
Visit the searchable archives or adjust your subscription at: http://list.improvediagnosis.org/scripts/wa-IMPDIAG.exe?INDEX