Fwd: PDF - www.pnas.org

Peggy Zuckerman peggyzuckerman at GMAIL.COM
Wed Aug 31 15:04:19 UTC 2016


One of the great difficulties in understanding this data is knowing how one
defines 'second opinion'.  When I was found to have a large kidney tumor
and not a 'scabbed-over' stomach ulcer, I did not ask that doctor to suggest
a second opinion.  I just found an oncologist with expertise in kidney
cancer.  In my experience, if a patient realizes that his doctor is not
doing a good job, the patient rarely wants the discredited doctor's help in
finding a second opinion.  Is this a 'second opinion' or simply shifting to
a new doctor?

The only follow-up from my 'ulcer' doctor was a request for payment of the
last appointment's fees.  I declined that request, not very politely.

Peggy Zuckerman
www.peggyRCC.com

On Tue, Aug 30, 2016 at 11:54 PM, HM Epstein <hmepstein at gmail.com> wrote:

> I found articles on the topic in PubMed, but most aren't available to read
> in full text without a subscription. A recent example is [J Cancer Res Clin
> Oncol. 2016 Jul;
> "Is there evidence for a better health care for cancer patients after a
> second opinion? A systematic review."] The abstract's results section
> wasn't very helpful:
>
> *"Depending on the study, between 6.5 and 36 % of patients search for a
> second opinion, due to a variety of reasons. Changes in diagnosis,
> treatment recommendations or prognosis as a result of the second opinion
> occurred in 12-69 % of cases. In 43-82 % of cases, the original diagnosis
> or treatment was verified. Patient satisfaction was high, and the second
> opinion was deemed as helpful and reassuring in most cases. Yet, data on
> patient-relevant outcomes or on the quality of the second opinion are
> missing." *
>
> Perhaps if I could see the charts there would be more helpful data, but
> there's a paywall even for healthcare journalists.
>>
> Best,
> Helene​
>
>
> hmepstein.com
> @hmepstein <https://twitter.com/hmepstein>
> Mobile: 914-522-2116
>
> On Wed, Aug 31, 2016 at 12:16 AM, <Michael.H.Kanter at kp.org> wrote:
>
>> Great questions.  I don't know the answers to any of these.  Part of the
>> barrier to getting second opinions, besides the need for them not being
>> recognized, is the cost, which can be borne either by the patient or the
>> delivery system but is not trivial.
>>
>>
>> Michael Kanter, M.D., CPPS
>> Regional Medical Director of Quality & Clinical Analysis
>> Southern California Permanente Medical Group
>> (626) 405-5722 (tie line 8+335)
>>
>> Executive Vice President,
>> Chief Quality Officer,
>> The Permanente Federation
>>
>> THRIVE By Getting Regular Exercise
>>
>>
>>
>>
>>
>> From:        HM Epstein <hmepstein at GMAIL.COM>
>> To:        IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
>> Date:        08/29/2016 12:02 PM
>> Subject:        Re: [IMPROVEDX] Fwd: PDF - www.pnas.org
>> ------------------------------
>>
>>
>>
>> Michael:
>> You wrote, "Frequently, in practice getting a second opinion is NOT a
>> routine and so the second opinion approaches the case very differently as
>> he/she knows someone is questioning the diagnosis.  Of course, if a second
>> or third opinion is the routine, this is not the case."
>>
>> Does anyone have statistics on how often a patient gets a second opinion?
>> I would think it varies by specialty. Also, I expect those with a diagnosed
>> illness seek second opinions more often than those without.
>>
>> Plus a little wishful thinking: I would love to see a study comparing use
>> of second opinions between patients who are told their tests are negative
>> but still have symptoms vs. those who don't. (Even that group can be broken
>> down into patients who never had symptoms and those whose symptoms
>> dissipated when told their test results were clean.)
>>
>> Thanks.
>>
>> Best,
>> Helene
>>
>> --
>> hmepstein.com
>> @hmepstein
>> Mobile: 914-522-2116
>>
>> Sent from my iPhone
>>
>>
>>
>> On Aug 29, 2016, at 10:37 AM, Ely, John <john-ely at UIOWA.EDU> wrote:
>>
>> 0.01 = 1%
>> 0.01 x 0.01 = 0.0001 = 0.01% (not 0.001%)
>>
>> 0.01=1%
>> 0.001=0.1%
>> 0.0001=0.01%
>>
>> At least I think that’s right.  It actually took me a little while and my
>> calculator to figure it out.  Embarrassing.  Apart from the math, I don’t
>> think this would work, because the 1% of cases the first radiologist misses
>> will be tough cases, and the second, equally skilled radiologist won’t have
>> the same 1% error rate on tough cases.
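>>
>> A minimal sketch of the arithmetic in Python, assuming independent,
>> equally skilled readers; all the numbers below are illustrative, not from
>> any study:
>>
>> # Combined error rate for two readers if their errors were independent,
>> # versus the more realistic case where errors cluster on "tough" cases.
>> p_single = 0.01                                   # assumed 1% per-reader error rate
>>
>> # Independence: both readers must miss the same case.
>> p_both_independent = p_single * p_single          # 0.0001 = 0.01%, not 0.001%
>>
>> # Correlated errors: suppose (hypothetically) the second reader errs on
>> # 30% of the tough cases the first reader missed.
>> p_second_given_miss = 0.30
>> p_both_correlated = p_single * p_second_given_miss    # 0.003 = 0.3%
>>
>> print(f"independent: {p_both_independent:.4%}")   # 0.0100%
>> print(f"correlated:  {p_both_correlated:.4%}")    # 0.3000%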
>>
>> John Ely
>>
>>
>>
>> From: Mark Graber <mark.graber at IMPROVEDIAGNOSIS.ORG>
>> Sent: Monday, August 29, 2016 7:31 AM
>> To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
>> Subject: Re: [IMPROVEDX] Fwd: PDF - www.pnas.org
>>
>> Second opinions are an attractive intervention to help reduce diagnostic
>> errors, but will require a LOT more study to understand when and how to go
>> this route for maximal benefit and value.  As a start, second (and third,
>> etc) reviews seem to be an ideal answer for all the diagnoses that depend
>> on visual findings, including imaging, pathology and cytology-based tests.
>> I'm just guessing that 50 years from now, cancer cytologies will all be
>> read by teams of people.
>>
>> In general medical practice, the story is a little different.  In theory,
>> if the error rate after a single reading is 1%, that could be reduced to
>> 0.001% by a second reading by an equally skilled clinician.  But, if a
>> patient only pays attention to the second opinion, the error rate doesn't
>> improve at all.  Another factor that degrades the potential improvement is
>> that most people with a 'normal' reading from the first clinician won't
>> bother to get a second opinion (which would pick up most false negatives).
>> The math on that gets a little complicated, but is explained here:  Lindsey
>> PA, Newhouse JP. The cost and value of second surgical opinion programs: a
>> critical review of the literature. J Health Polit Policy Law.
>> 1990;15(3):543-570.
>>
>>
>> Mark L Graber MD FACP
>> President, SIDM  www.improvediagnosis.org
>>
>>
>> On Aug 29, 2016, at 12:01 AM, Michael.H.Kanter at KP.ORG wrote:
>>
>> I found this article really interesting.  I think there are some
>> significant limitations though.
>> 1)  It was confined to situations in which all of the information was
>> made available to the physician(s) to make a diagnosis.  This works best
>> for images, but in most situations a physician who is less than certain can
>> get other information: more clinical information, lab tests, etc.
>> Even in interpreting images, physicians can get more views or images prior
>> to making a decision.  None of this was available in this study, so the
>> generalizability to the real world is somewhat limited.
>> 2)  I would agree with Linda that having a dichotomous outcome is also
>> a bit artificial.  Physicians can have an uncertain diagnosis and do close
>> follow-up.
>> 3)  Not having discussion among the differing opinions is also
>> artificial.  Ideally, in practice, if there were a difference of opinion,
>> there would be a discussion of what each physician is thinking and
>> why.  Of course, in some settings this may not occur, as when a second
>> opinion is obtained at a totally separate institution, so the lack of
>> discussion among the physicians in this study may reflect much of current
>> practice.
>> 4)  Frequently, in practice, getting a second opinion is NOT routine, and
>> so the physician giving the second opinion approaches the case very
>> differently because he/she knows someone is questioning the diagnosis.  Of
>> course, if a second or third opinion is routine, this is not the case.
>>
>> Overall, though, this article addresses the really important issue of how
>> to aggregate different opinions and use the collective wisdom of the crowd.
>> I think it forms the basis of a theoretical framework that deserves more
>> study.
>>
>>
>> Michael Kanter, M.D., CPPS
>> Regional Medical Director of Quality & Clinical Analysis
>> Southern California Permanente Medical Group
>> (626) 405-5722 (tie line 8+335)
>>
>> Executive Vice President,
>> Chief Quality Officer,
>> The Permanente Federation
>>
>> THRIVE By Getting Regular Exercise
>>
>>
>>
>>
>>
>> From:        "Linda M. Isbell" <lisbell at PSYCH.UMASS.EDU>
>> To:        IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
>> Date:        08/25/2016 09:03 AM
>> Subject:        Re: [IMPROVEDX] Fwd: PDF - www.pnas.org
>> ------------------------------
>>
>>
>>
>>
>>
>> Hi Mark and others -
>> OK, I'll take a stab at this...
>> Yes, this is a complicated piece methodologically and statistically, but
>> the implications you suggest (Mark) below are generally correct.  It is a
>> great paper and one that I am sure psychologists especially like, but I
>> think there are some important caveats to keep in mind if you were to apply
>> in practice.  I'll describe them below and also try to elaborate some on
>> the methods/stats as I understand them.
>> First, be careful not to draw wide-ranging conclusions about how this
>> might actually work in practice - that is, these were independent judgments
>> from a group of doctors (for whom there were meaningful individual scores
>> of diagnostic accuracy available) who diagnosed and rated their confidence
>> for each of many images for which there were correct/known diagnoses (101
>> radiologists for the mammograms, 40 dermatologists for the skin images) -
>> images were divided into different groups for different docs so not
>> everyone rated all of them (due to the large number I'm sure).  No one ever
>> communicated with anyone.  Following each diagnosis, docs rated their
>> confidence.  Virtual groups were created by randomly sampling up to 1000
>> groups for each of the two types of images (breast and skin) AND for each
>> of three different doc group sizes (2 v. 3 v. 5 doctors).  So, what they
>> did is essentially create a bunch of randomly selected groups of doctors
>>  by repeatedly sampling from their "population" of doctors/diagnoses (for
>> these 6 "conditions" in what can be thought of as a 2 [skin v. breast
>> images] x 3 [group size: 2 v. 3 v. 5 doctors] design).  So they created up
>> to 6000 virtual groups (up to 1000 for each) - something I think is really
>> cool methodologically!   Each doctor got a sensitivity score (that is,
>> proportion of positive results identified as such) and a specificity score
>> (that is, proportion of negative results identified as such).  Youden’s
>> index (J) is an accuracy measure that takes into account both of these
>> scores and is equal to (sensitivity + specificity) - 1.  The index ranges
>> from -1 to +1 where a score of 0 means that the proportion of people
>> identified with the disease is the same regardless of whether they actually
>> have it or not.  A 1 means the test is perfect (no false positives or
>> negatives).  For a pair of docs in any given group, a change in J was
>> computed (ΔJ), which is the difference in accuracy between that pair of
>> doctors.  So, basically then, when ΔJ is small, the docs have
>> similar accuracy (based on all of the cases they judged).
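>>
>> A small sketch in Python of how J and ΔJ fall out of those definitions;
>> the counts and variable names below are mine, purely for illustration:
>>
>> # Youden's index J = sensitivity + specificity - 1, from one doctor's
>> # counts over a set of cases with known answers.
>> def youden_j(tp, fn, tn, fp):
>>     sensitivity = tp / (tp + fn)   # positives correctly called positive
>>     specificity = tn / (tn + fp)   # negatives correctly called negative
>>     return sensitivity + specificity - 1
>>
>> # Hypothetical counts for two doctors who read the same cases.
>> j_doc_a = youden_j(tp=45, fn=5, tn=90, fp=10)    # J = 0.80
>> j_doc_b = youden_j(tp=40, fn=10, tn=85, fp=15)   # J = 0.65
>> delta_j = abs(j_doc_a - j_doc_b)                 # ΔJ = 0.15: dissimilar accuracy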
>> The "confidence rule" means that the doctor with the most confidence in
>> his/her diagnosis in any given group "wins" on a specific diagnostic
>> assessment - and that becomes the outcome/diagnosis for the group (and that
>> outcome is compared to the diagnosis of the best doctor in the group - the
>> one with the highest accuracy score based on all diagnoses from all images
>> rated).  So, regardless of group size, it turns out that if you have a
>> group of doctors who generally perform similarly well across all of their
>> diagnostic assessments, then going with the diagnosis in any given
>> case/image that is associated with doc who is most confident with it will
>> be best/most accurate.    For groups of 3 or 5 docs, if they have similar
>> accuracy levels in general, then going with the majority "vote" (diagnosis)
>> is more accurate than the diagnosis of the single best/most accurate doc in
>> the group.  As you can see in Figure 2 in the article, if docs aren't
>> pretty similar in their overall accuracy in a given group, then they are
>> MUCH better off going with the diagnosis of the best diagnostician in the
>> group.
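>>
>> And a toy Python sketch of the two aggregation rules as I read them; the
>> diagnoses and confidence values are invented for illustration:
>>
>> # One case, seen independently by a virtual group of three doctors.
>> # Each tuple is a hypothetical (diagnosis, confidence): 1 = disease, 0 = none.
>> group = [(1, 0.55), (0, 0.90), (1, 0.60)]
>>
>> # Confidence rule: adopt the diagnosis of the most confident doctor.
>> confidence_rule = max(group, key=lambda d: d[1])[0]   # -> 0
>>
>> # Majority rule: adopt the diagnosis most doctors gave.
>> votes = [diagnosis for diagnosis, _ in group]
>> majority_rule = max(set(votes), key=votes.count)      # -> 1
>>
>> Whether either rule beats the single best doctor depends on whether the
>> group members' overall accuracies (their Js) are similar, which is exactly
>> the caveat above.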
>> SO that's how I read/understand all this.   The tricky part, I think,
>> about applying this to practice prior to more research is that these were
>> all independent judgments/diagnoses and accuracy scores were computed for
>> each doc based on a large number of images that each evaluated.  This is
>> how it was determined who the docs are that are similar in accuracy to one
>> another.  In the real world (everyday practice), I am not sure you would
>> actually know this - would you?  (I'm an outsider here - a social cognition
>> expert, not an MD or clinician).  I am guessing you have a sense of who the
>> good docs are who make good judgments, but I wonder how much more info you
>> need about their general accuracy in order for the effects reported in this
>> article to emerge in real clinical practice (in a way that is CLINICALLY
>> significant and not just statistically significant)?  There is a noteworthy
>> literature in social psychology that demonstrates that group decisions can
>> sometimes lead to bad outcomes and in some cases to very good ones - the
>> trick is to figure out what those conditions are that take you one way or
>> the other.  If group members can truly add some expertise/understanding to
>> a problem, outcomes can improve.  However, much work suggests that groups
>> can lead to situations in which individuals are overly influenced by others
>> and thinking gets kind of stuck or overly influenced by some ideas that may
>> well be wrong (which can lead to confirmatory hypothesis testing around
>> those ideas if people engage in more discussion/thought, and may ultimately
>> lead to premature closure either with or without the confirmatory
>> hypothesis testing).  Of course much of this work also has been done with
>> group discussions and interactions - something that is noticeably missing
>> in the study reported in the PNAS article (but appropriately, they do note
>> this in their discussion).
>> Overall, it seems that in diagnostic decisions that are relatively
>> dichotomous (as in this article - though I also wonder how many decisions
>> really are quite this dichotomous??  If there are few, then more research
>> is needed to see what happens when there are multiple
>> possibilities/diagnoses/outcomes), these simple decision rules (majority
>> and confidence rules) could work out well and be relatively efficient IF
>> one actually knows the diagnostic accuracy of the group members and knows
>> that they are similarly good.  Personally, I see that as kind of a big if
>> --- because if you are wrong about this - ugh - these decision rules lead
>> to MORE error than if you just went with the best doc! (Again see figure
>> 2).  I guess this is where I wonder most about applying this in practice.
>> SO at the moment at least, this research looks very promising to me for
>> application down the road, but more work would be needed to get there and
>> feel confident that the rules actually do lead to fewer errors in practice
>> (and not to more errors... yikes!).  Plus that whole issue of
>> communication between docs seems extremely important for practice too.
>> All that said, I like the paper VERY much as an important contribution to
>> basic research with the strong potential to one day have applied
>> implications - but I don't think we are there yet.
>> Very interested also in others' thoughts,
>> Linda
>>
>> ---
>> Linda M. Isbell, Ph.D.
>> Professor, Psychology
>> Department of Psychological and Brain Sciences
>> University of Massachusetts
>> 135 Hicks Way -- 630 Tobin Hall
>> Amherst, Massachusetts 01003
>> Office Phone:  413-545-5960
>> Website:  http://people.umass.edu/lisbell/
>> On 2016-08-23 12:17, graber.mark at GMAIL.COM wrote:
>> Thanks to Nick Argy for bringing this article to our attention.  The methods
>> and findings are a bit hard to follow, but if I understand things
>> correctly, the article finds that diagnostic accuracy can be improved by
>> second opinions or larger groups if the diagnosticians have similarly high
>> skill levels, but that accuracy is degraded to the extent that the
>> variability increases.  I'd really like to hear what others get out of this
>> paper, because these findings have important implications for
>> recommendations to move in the direction of getting more second opinions,
>> or using the new group-based diagnosis approaches.
>>
>> Mark
>>
>>
>>
Moderator: David Meyers, Board Member, Society to Improve Diagnosis in Medicine

