A.I. Shows Promise as a Physician Assistant - The New York Times

Mark Gusack gusackm at COMCAST.NET
Tue Feb 12 18:14:52 UTC 2019


I like that!  I use the term Artificial Imbecility.

 

Mark Gusack, M.D.

President

MANX Enterprises, Ltd.

304 521-1980

www.manxenterprises.com

 

From: Swerlick, Robert A <rswerli at EMORY.EDU> 
Sent: Tuesday, February 12, 2019 10:57 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] A.I. Shows Promise as a Physician Assistant - The
New York Times

 

I call this the other AI - Artificial Ignorance....

 

Robert A. Swerlick, MD

Alicia Leizman Stonecipher Chair of Dermatology

Professor and Chairman, Department of Dermatology

Emory University School of Medicine

404-727-3669 

  _____  

From: Bruno, Michael <mbruno at PENNSTATEHEALTH.PSU.EDU>
Sent: Tuesday, February 12, 2019 10:04 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] A.I. Shows Promise as a Physician Assistant - The New York Times

 

Exactly.  And there is a lot of bias, as was highlighted in this article:

 

 



 



This is how AI bias really happens - and why it's so hard to fix

Bias can creep in at many stages of the deep-learning process, and the
standard practices in computer science aren't designed to detect it.

by Karen Hao <https://www.technologyreview.com/profile/karen-hao/> | February 4, 2019

Over the past few months, we've documented how the
<https://www.technologyreview.com/s/612404/is-this-ai-we-drew-you-a-flowchart-to-work-it-out/>
vast majority of AI's applications today are based on the category of
algorithms known as deep learning, and how
<https://www.technologyreview.com/s/612437/what-is-machine-learning-we-drew-you-another-flowchart/>
deep-learning algorithms find patterns in data. We've also covered how these
technologies affect people's lives: how they can perpetuate injustice in
<https://www.technologyreview.com/s/612846/making-face-recognition-less-biased-doesnt-make-it-less-scary/>
hiring, retail, and security and may already be doing so in the
<https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/>
criminal legal system. But it's not enough just to know that this bias
exists. If we want to be able to fix it, we need to understand the mechanics
of how it arises in the first place.

 


How AI bias happens


We often shorthand our explanation of AI bias by blaming it on biased
training data. The reality is more nuanced: bias can creep in
<https://dl.acm.org/citation.cfm> long before the data is collected as well
as at
<http://www.californialawreview.org/wp-content/uploads/2016/06/2Barocas-Selbst.pdf>
many other stages of the deep-learning process. For the purposes of this
discussion, we'll focus on three key stages.

 

Framing the problem. The first thing computer scientists do when they create
a deep-learning model is decide what they actually want it to achieve. A
credit card company, for example, might want to predict a customer's
creditworthiness, but "creditworthiness" is a rather nebulous concept. In
order to translate it into something that can be computed, the company must
decide whether it wants to, say, maximize its profit margins or maximize the
number of loans that get repaid. It could then define creditworthiness
within the context of that goal. The problem is that "those decisions are
made for various business reasons other than fairness or discrimination,"
explains Solon Barocas, an assistant professor at Cornell University who
specializes in fairness in machine learning. If the algorithm discovered
that giving out subprime loans was an effective way to maximize profit, it
would end up engaging in predatory behavior even if that wasn't the
company's intention.
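
To make the framing step concrete, here is a minimal toy sketch; the data
and column names are invented for illustration and are not drawn from any
real lender. The same loan history yields different labels, and therefore a
different model, depending on how "creditworthiness" is defined.

    # Toy illustration of problem framing: two ways to turn "creditworthiness"
    # into a computable label from the same (invented) loan records.
    import pandas as pd

    loans = pd.DataFrame({
        "repaid": [1, 1, 0, 1, 0, 1],           # was the loan paid back?
        "profit": [120, 80, 300, 40, 250, 60],  # interest and fees collected
    })

    # Framing A: creditworthy = "the loan was repaid"
    label_repaid = loans["repaid"]

    # Framing B: creditworthy = "the loan cleared a profit threshold"
    # (high-fee subprime loans can look good here even when repayment is shaky)
    label_profitable = (loans["profit"] > 100).astype(int)

    disagreements = (label_repaid != label_profitable).sum()
    print(f"{disagreements} of {len(loans)} records get different labels")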

 

Collecting the data. There are two main ways that bias shows up in training
data: either the data you collect is unrepresentative of reality, or it
reflects existing prejudices. The first case might occur, for example, if a
deep-learning algorithm is fed more photos of light-skinned faces than
dark-skinned faces. The resulting face recognition system would inevitably be
<https://www.technologyreview.com/s/612846/making-face-recognition-less-biased-doesnt-make-it-less-scary/>
worse at recognizing darker-skinned faces. The second case is precisely what
happened when Amazon discovered that its internal recruiting tool was
<https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G>
dismissing female candidates. Because it was trained on historical hiring
decisions, which favored men over women, it learned to do the same.

 

Preparing the data. Finally, it is possible to introduce bias during the
data preparation stage, which involves selecting which attributes you want
the algorithm to consider. (This is not to be confused with the
problem-framing stage. You can use the same attributes to train a model for
very different goals or use very different attributes to train a model for
the same goal.) In the case of modeling creditworthiness, an "attribute"
could be the customer's age, income, or number of paid-off loans. In the
case of Amazon's recruiting tool, an "attribute" could be the candidate's
gender, education level, or years of experience. This is what people often
call the "art" of deep learning: choosing which attributes to consider or
ignore can significantly influence your model's prediction accuracy. But
while its impact on accuracy is easy to measure, its impact on the model's
bias is not.
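
A small sketch of that attribute-selection effect, using invented data and
scikit-learn as a stand-in (this is not Amazon's system): dropping the
explicit gender column is easy, but a correlated proxy feature still carries
the signal, and the usual accuracy number barely moves while the gap in
selection rates persists.

    # Toy sketch: attribute choice and bias. "gender" is the protected
    # attribute; "proxy" is an invented feature correlated with it (think of
    # implicitly gendered verbs on a resume). Labels reflect biased history.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    gender = rng.integers(0, 2, n)
    proxy = (gender + rng.normal(0, 0.3, n) > 0.5).astype(float)
    skill = rng.normal(0, 1, n)
    hired = ((skill + 1.5 * gender + rng.normal(0, 1, n)) > 1).astype(int)

    feature_sets = {
        "with gender":    np.column_stack([skill, gender, proxy]),
        "gender dropped": np.column_stack([skill, proxy]),
    }
    for name, X in feature_sets.items():
        model = LogisticRegression().fit(X, hired)
        preds = model.predict(X)
        gap = preds[gender == 1].mean() - preds[gender == 0].mean()
        print(f"{name}: accuracy={model.score(X, hired):.2f}, "
              f"selection-rate gap={gap:.2f}")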

 


Why AI bias is hard to fix


Given that context, some of the challenges of mitigating bias may already be
apparent to you. Here we highlight four main ones.

Unknown unknowns. The introduction of bias isn't always obvious during a
model's construction because you may not realize the downstream impacts of
your data and choices until much later. Once you do, it's hard to
retroactively identify where that bias came from and then figure out how to
get rid of it. In Amazon's case, when the engineers initially discovered
that the tool was penalizing female candidates, they reprogrammed it to
ignore explicitly gendered words like "women's." They soon discovered that
the revised system was still picking up on
<https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G>
implicitly gendered words (verbs that were highly correlated with men over
women, such as "executed" and "captured") and using them to make its
decisions.

 

Imperfect processes. First, many of the standard practices in deep learning
are not designed with bias detection in mind. Deep-learning models are
tested for performance before they are deployed, creating what would seem to
be a perfect opportunity for catching bias. But in practice, testing usually
looks like this: computer scientists randomly split their data before
training into one group that's actually used for training and another that's
reserved for validation once training is done. That means the data you use
to test the performance of your model has the same biases as the data you
used to train it. Thus, it will fail to flag skewed or prejudiced results.
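
A minimal sketch of why the standard check misses this, again with invented
data: a random train/test split draws the held-out set from the same skewed
distribution, so the headline accuracy looks fine unless you deliberately
break the score out by group.

    # Toy sketch: a random train/test split inherits the training data's bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 20000
    group = rng.integers(0, 2, n)
    # Group 1 is underrepresented and its labels are noisier in this data.
    keep = (group == 0) | (rng.random(n) < 0.2)
    x = rng.normal(0, 1, (n, 5))
    y = (x[:, 0] + np.where(group == 1, rng.normal(0, 2, n), 0) > 0).astype(int)
    x, y, group = x[keep], y[keep], group[keep]

    x_tr, x_te, y_tr, y_te, g_tr, g_te = train_test_split(
        x, y, group, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(x_tr, y_tr)

    print("overall test accuracy:", round(model.score(x_te, y_te), 2))
    for g in (0, 1):
        m = g_te == g
        print(f"group {g} test accuracy:", round(model.score(x_te[m], y_te[m]), 2))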

 

Lack of social context. Similarly, the way in which computer scientists are
taught to frame problems often isn't compatible with the best way to think
about social problems. For example, in
<https://dl.acm.org/citation.cfm?id=3287598> a new paper, Andrew Selbst, a
postdoc at the Data & Society Research Institute, identifies what he calls
the "portability trap." Within computer science, it is considered good
practice to design a system that can be used for different tasks in
different contexts. "But what that does is ignore a lot of social context,"
says Selbst. "You can't have a system designed in Utah and then applied in
Kentucky directly because different communities have different versions of
fairness. Or you can't have a system that you apply for 'fair' criminal
justice results then applied to employment. How we think about fairness in
those contexts is just totally different."

 

The definitions of fairness. It's also not clear what the absence of bias
should look like. This isn't true just in computer science; this question has
a long history of debate in philosophy, social science, and law. What's
different about computer science is that the concept of fairness has to be
defined in mathematical terms, like balancing the false positive and false
negative rates of a prediction system. But as researchers have discovered,
there are many different mathematical definitions of fairness that are also
mutually exclusive. Does fairness mean, for example, that the
<https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>
same proportion of black and white individuals should get high risk
assessment scores? Or that the
<https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/?utm_term=.2276d78de3c1>
same level of risk should result in the same score regardless of race? It's
impossible to fulfill both definitions at the same time (
<https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/?utm_term=.2276d78de3c1>
here's a more in-depth look at why), so at some point you have to pick one.
But whereas in other fields this decision is understood to be something that
can change over time, the computer science field has a notion that it should
be fixed. "By fixing the answer, you're solving a problem that looks very
different than how society tends to think about these issues," says Selbst.

 


Where we go from here


If you're reeling from our whirlwind tour of the full scope of the AI bias
problem, so am I. But fortunately a strong contingent of AI researchers are
working hard to address the problem. They've taken a variety of approaches:
algorithms that help <https://arxiv.org/abs/1805.12002> detect and
<http://aif360.mybluemix.net/> mitigate hidden biases within training data
or that
<https://www.technologyreview.com/the-download/612502/ai-has-a-culturally-biased-worldview-that-google-has-a-plan-to-change/>
mitigate the
<http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_220.pdf>
biases learned by the model regardless of the data quality;
<http://gendershades.org/overview.html> processes that hold companies
<http://www.aies-conference.com/wp-content/uploads/2019/01/AIES-19_paper_223.pdf>
accountable to fairer outcomes; and <http://aif360.mybluemix.net/>
discussions that hash out the different definitions of fairness. "'Fixing'
discrimination in algorithmic systems is not something that can be solved
easily," says Selbst. "It's a process ongoing, just like discrimination in
any other aspect of society."

 

This originally appeared in our AI newsletter The Algorithm. 

 

 
https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/

 

 


From: Edward Hoffer [mailto:ehoffer at GMAIL.COM]
Sent: Tuesday, February 12, 2019 8:28 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] A.I. Shows Promise as a Physician Assistant - The New York Times

Fascinating study. The biggest problem with neural networks is their
opacity: they cannot explain in a comprehensible way why or how they reach
their conclusions, which makes many reluctant to accept those conclusions.
The biggest problem with a "big data" approach is that one may be finding
correlations rather than cause and effect, and correlation does not prove
causation. Only when these systems can explain their reasoning will they be
widely accepted.

Ed

Edward P Hoffer MD

Co-creator, DXplain

 

On Mon, Feb 11, 2019 at 11:17 PM HM Epstein <hmepstein at gmail.com> wrote:

 

I still believe that AI is there to help, not take over. But it's still an
interesting article.
https://www.nytimes.com/2019/02/11/health/artificial-intelligence-medical-diagnosis.html

 

Best,

Helene 

 

 


A.I. Shows Promise as a Physician Assistant


Feb. 11, 2019

Doctors competed against A.I. computers to recognize illnesses on magnetic
resonance images of a human brain during a competition in Beijing last year.
The human doctors lost. Mark Schiefelbein/Associated Press

 

Each year, millions of Americans walk out of a doctor's office
<https://www.sciencedaily.com/releases/2014/04/140416190948.htm>
with a misdiagnosis. Physicians try to be systematic when identifying
illness and disease, but bias creeps in. Alternatives are overlooked. Now a
group of researchers in the United States and China has tested a potential
remedy for all-too-human frailties: artificial intelligence.

In a paper published on Monday in Nature Medicine, the scientists reported
that they had built a system that
<https://www.nature.com/articles/s41591-018-0335-9>
automatically diagnoses common childhood conditions - from influenza to
meningitis - after processing the patient's symptoms, history, lab results
and other clinical data.

The system was highly accurate, the researchers said, and one day may assist
doctors in diagnosing complex or rare conditions.

Drawing on the records of nearly 600,000 Chinese patients who had visited a
pediatric hospital over an 18-month period, the vast collection of data used
to train this new system highlights an advantage for China in the worldwide
race toward artificial intelligence.

Because its population is so large - and because its privacy norms put fewer
restrictions on the sharing of digital data - it may be easier for Chinese
companies and researchers to build and train the "deep learning" systems
that are rapidly changing the trajectory of health care.

On Monday, President Trump
<https://www.nytimes.com/2019/02/11/business/ai-artificial-intelligence-trump.html?module=inline>
signed an executive order meant to spur the development of A.I. across
government, academia and industry in the United States. As part of this
"American A.I. Initiative," the administration will encourage federal
agencies and universities to share data that can drive the development of
automated systems.

Pooling health care data is a particularly difficult endeavor. Whereas
researchers went to a single Chinese hospital for all the data they needed
to develop their artificial-intelligence system, gathering such data from
American facilities is rarely so straightforward.

"You have go to multiple places," said Dr. George Shih, associate professor
of clinical radiology at Weill Cornell Medical Center and co-founder of
MD.ai, a company that helps researchers label data for A.I. services. "The
equipment is never the same. You have to make sure the data is anonymized.
Even if you get permission, it is a massive amount of work."

After reshaping internet services, consumer devices and driverless cars in
the early part of the decade, deep learning is moving rapidly into myriad
areas of health care. Many organizations,
<https://ai.googleblog.com/2018/05/deep-learning-for-electronic-health.html>
including Google, are developing and testing systems that analyze electronic
health records in an effort to flag medical conditions such as osteoporosis,
diabetes, hypertension and heart failure.

Similar technologies are being built to automatically detect signs of
illness and disease in X-rays, M.R.I.s and eye scans.

The new system relies on a
<https://www.nytimes.com/2018/03/06/technology/google-artificial-intelligence.html?module=inline>
neural network, a breed of artificial intelligence that is accelerating the
development of everything from health care to
<https://www.nytimes.com/2018/01/04/technology/self-driving-cars-aurora.html?module=inline>
driverless cars to
<https://www.nytimes.com/2018/02/20/technology/artificial-intelligence-risks.html?module=inline>
military applications. A neural network can learn tasks largely on its own
by analyzing vast amounts of data.

Using the technology, Dr. Kang Zhang, chief of ophthalmic genetics at the
University of California, San Diego, has built systems that can analyze eye
scans for hemorrhages, lesions and other signs of diabetic blindness.
Ideally, such systems would serve as a first line of defense, screening
patients and pinpointing those who need further attention.

Now Dr. Zhang and his colleagues have created a system that can diagnose an
even wider range of conditions by recognizing patterns in text, not just in
medical images. This may augment what doctors can do on their own, he said.

"In some situations, physicians cannot consider all the possibilities," he
said. "This system can spot-check and make sure the physician didn't miss
anything."

The experimental system analyzed the electronic medical records of nearly
600,000 patients at the Guangzhou Women and Children's Medical Center in
southern China, learning to associate common medical conditions with
specific patient information gathered by doctors, nurses and other
technicians.

First, a group of trained physicians annotated the hospital records, adding
labels that identified information related to certain medical conditions.
The system then analyzed the labeled data.

Then the neural network was given new information, including a patient's
symptoms as determined during a physical examination. Soon it was able to
make connections on its own between written records and observed symptoms.
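
The workflow described above (physician-labeled records, then a model that
maps new clinical text to a diagnosis) is ordinary supervised text
classification. As a rough sketch of that shape only, using TF-IDF and
logistic regression on invented snippets, not the deep-learning system the
researchers actually built:

    # Toy sketch of the labeled-text -> diagnosis workflow; NOT the authors'
    # system. Records and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    records = [
        "wheezing and shortness of breath, worse at night",
        "recurrent cough with wheeze, improves with bronchodilator",
        "vomiting and watery diarrhea for two days",
        "abdominal pain and diarrhea after meals",
    ]
    labels = ["asthma", "asthma", "gastrointestinal", "gastrointestinal"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(records, labels)

    # "New information": a record the model has not seen before.
    print(model.predict(["nighttime cough and wheezing on exertion"]))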

When tested on unlabeled data, the software could rival the performance of
experienced physicians. It was more than 90 percent accurate at diagnosing
asthma; the accuracy of physicians in the study ranged from 80 to 94
percent.

In diagnosing gastrointestinal disease, the system was 87 percent accurate,
compared with the physicians' accuracy of 82 to 90 percent.

Able to recognize patterns in data that humans could never identify on their
own, neural networks can be enormously powerful in the right situation. But
even experts have difficulty understanding why such networks make particular
decisions and how they teach themselves.

As a result, extensive testing is needed to reassure both doctors and
patients that these systems are reliable.

Experts said extensive clinical trials are now needed for Dr. Zhang's
system, given the difficulty of interpreting decisions made by neural
networks.

"Medicine is a slow-moving field," said Ben Shickel, a researcher at the
University of Florida who specializes in the use of deep learning for health
care. "No one is just going to deploy one of these techniques without
rigorous testing that shows exactly what is going on."

It could be years before deep-learning systems are deployed in emergency
rooms and clinics. But some are closer to real-world use: Google is now
running clinical trials of its eye-scan system at two hospitals in southern
India.

Deep-learning diagnostic tools are more likely to flourish in countries
outside the United States, Dr. Zhang said. Automated screening systems may
be particularly useful in places where doctors are scarce, including in
India and China.

The system built by Dr. Zhang and his colleagues benefited from the large
scale of the data set gathered from the hospital in Guangzhou. Similar data
sets from American hospitals are typically smaller, both because the average
hospital is smaller and because regulations make it difficult to pool data
from multiple facilities.

Dr. Zhang said he and his colleagues were careful to protect patients'
privacy in the new study. But he acknowledged that researchers in China may
have an advantage when it comes to collecting and analyzing this kind of
data.

"The sheer size of the population - the sheer size of the data - is a big
difference," he said.

 

 




Moderator: David Meyers, Board Member, Society to Improve Diagnosis in Medicine

To unsubscribe from the IMPROVEDX list, click the following link:
http://list.improvediagnosis.org/scripts/wa-IMPDIAG.exe?SUBED1=IMPROVEDX&A=1

or send email to: IMPROVEDX-SIGNOFF-REQUEST at LIST.IMPROVEDIAGNOSIS.ORG

To learn more about SIDM visit:
http://www.improvediagnosis.org/
