Ten Commandments to Reduce Diagnostic Errors

John Brush jebrush at ME.COM
Wed Sep 10 13:24:26 UTC 2014


Bimal,
I take issue with your statement: "In any case, a probabilistic approach does not appear to be employed by physicians in actual practice. Most physicians, at least the ones I know, have no knowledge of probability theory, yet many of them are excellent diagnosticians."

1. Every day, in every hospital and every clinic you hear physicians say “I think the most likely diagnosis is ….”  In that statement, they are stating that the probability of a particular diagnosis is greater than the probability of other possible diagnoses.
2. Every day I hear surgeons tell patients that they have x% risk of complications and x% probability of success with a surgical procedure. The Society of Thoracic Surgeons provides a website where you can calculate the probability of success, mortality, and morbidity for individual patients undergoing cardiac surgery.
3. Recently, the AHA and ACC released a major guideline document that instructed physicians to prescribe statins to all patients with a calculated 10-year risk of hard cardiovascular endpoints of 7.5% or greater. That’s an explicit use of calculated probability applied to an individual. The term “risk” is synonymous with “probability.” The Mayo Clinic has an excellent website that creates pictorial displays of probability to help patients understand their cardiovascular risk.

With these 3 examples you can clearly see that probability is actually ubiquitous in practice. Physicians, like all people, estimate probability intuitively using a heuristic called anchoring and adjusting. That heuristic is good, but far from perfect. Anchoring and adjusting isn’t going to go away, but it could be improved upon by calibrating our intuitive estimates a bit and by being more reflective about how we think about probability. 
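To make that concrete, here is a minimal sketch (mine, in Python) of the odds form of Bayes' theorem that such calibration would rest on; the 20% pretest estimate and the likelihood ratio of 6 are purely hypothetical numbers chosen for illustration.

    def posttest_probability(pretest_prob, likelihood_ratio):
        # Odds form of Bayes' theorem: posttest odds = pretest odds x LR.
        pretest_odds = pretest_prob / (1.0 - pretest_prob)
        posttest_odds = pretest_odds * likelihood_ratio
        return posttest_odds / (1.0 + posttest_odds)

    # Hypothetical example: an intuitive anchor of 20% and a positive test
    # with a likelihood ratio of 6 yield a posttest probability of about 60%.
    print(posttest_probability(0.20, 6.0))  # ~0.60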

BTW, I think you have misinterpreted Nate Silver’s methods. He used a Bayesian method of constantly updating his probability estimates based on incremental information from various sources. His probability estimates were not simply poll results, they were progressively updated estimates based on a number of information inputs. He explains his Bayesian methods very well in his book.

John

John E. Brush, Jr., M.D., FACC
Professor of Medicine
Eastern Virginia Medical School
Sentara Cardiology Specialists
844 Kempsville Road, Suite 204
Norfolk, VA 23502
757-261-0700
Cell: 757-477-1990
jebrush at me.com



On Sep 9, 2014, at 10:45 AM, Jain, Bimal P.,M.D. <BJAIN at PARTNERS.ORG> wrote:

John,
 
The core issue we are discussing is the role of probability in clinical diagnosis. You hold that diagnosis is best performed from a post test probability generated by  Bayesian reasoning while I have grave reservations about this approach. Let us unravel the various strands of this issue.
1.       Probability is a  mathematical concept that has been interpreted in application subjectively as degree of belief and objectively as a frequency or distribution.
2.       It has been applied with great success in many fields such as statistical mechanics in physics, epidemiology, insurance, stock portfolio management, weather forecasting, betting, forecasting of election results etc.
3.       In all these fields, its application leads to predictions in large groups or series made up of large numbers of objects or events, billions of gas molecules in statistical mechanics, millions of voters in election result forecasting.
4.       A characteristic feature of its application in these fields is that our focus is on accuracy in the result for the group as a whole; error in prediction for individual members is tolerated. For example, we do not care if we have a loss in a particular stock as long as the whole portfolio makes money.
5.       Nate Silver’s remarkable result in correctly predicting election results in all 50 states that you mention, was based on predictions in groups of millions of voters in each state. He did not and could not predict how a particular person voted in a given state. What he did  was what an epidemiologist does, not what a physician does in diagnosing a disease in a given patient.
6.       His amazing success was due to the Law of Large Numbers, as the eminent Fields Medal-winning mathematician Terence Tao has pointed out (The Best Writing on Mathematics 2013).
7.       This law states that a predicted frequency from a probability gets progressively closer to an actual frequency as a series gets larger.
8.       By this law, an actual frequency will tend to deviate more and more from a frequency predicted from a probability as a series gets progressively smaller (see the short simulation sketch after this list). In the limit, when the series consists of one member only, the frequency or distribution vanishes and all we have is presence or absence of an outcome, which a probability cannot correctly predict.
9.       In clinical diagnosis we are dealing with a given, individual patient who makes up a series of one in whom probability considerations do not apply.
10.   It is not clear to me why a probabilistic approach has been proposed for diagnosis. Perhaps it is due to its success in epidemiologic studies, which is inappropriate, as the clinical and the epidemiologic are two different domains, one dealing with an individual patient and the other with groups of patients. The term clinical epidemiology is most unfortunate, in my view, as it seems to imply that the method of epidemiology can be applied in the clinical domain.
11.   In any case, a probabilistic approach does not appear to be employed by physicians in actual practice. Most physicians, at least the ones I know, have no knowledge of probability theory, yet many of them are excellent diagnosticians. Furthermore, in hundreds of published discussions of diagnosis in actual patients in CPCs and clinical problem solving exercises, a Bayesian approach has not been employed. Thus in none of these discussions is the pretest probability of a suspected disease estimated, which is a hallmark of the probabilistic approach. I have come across only one instance in which Bayesian diagnosis was attempted (Pauker, NEJM 1992), where it performed poorly, leading to an incorrect diagnosis.
12.   The method of diagnosis employed in actual practice consists, I suggest, of suspecting one or more diseases from a presentation, formulating them as hypotheses, and diagnosing one of them definitively when a highly informative test result is observed. In this method, a suspected disease is not assigned a pretest probability; its status is indeterminate, that of a hypothesis. It is proven correct or incorrect, which corresponds to presence or absence of disease in the given patient, from a highly informative test result.
13.   This is the method used in all of science. It is the same method used by Feynman in finding the cause of explosion of space shuttle Challenger as I discussed earlier.
14.   The accuracy of diagnosis in actual practice, Mark Graber tells us is 85 percent. The aim of our Society is to reduce or eliminate the 15 percent diagnostic error rate.
15.   The pioneering studies of Gordon Schiff, Hardeep Singh, John Ely have shown that failure to suspect a disease in patients with atypical presentations is the commonest cause of diagnostic errors.
16.   A probabilistic approach is likely to increase this failure, as the low pretest probability in these patients is likely to be interpreted as low plausibility or low pretest evidence, increasing the likelihood of a disease being ruled out without further testing.
17.   I would like to emphasize I am not questioning the mathematical correctness of Bayes’ theorem on which the probabilistic approach is based. It is an elegant, mathematically consistent theorem which is derived from the axioms of probability in a straightforward manner.
18.   But its mathematical correctness does not ensure its correctness in application. In this regard, the fine saying of Einstein ‘As far as the laws of mathematics refer to reality they are not certain and as far as they are certain they do not refer to reality’ in his great essay Geometry and Experience is very relevant.
19.   A correct application depends on features in the real world which  correspond to mathematical concepts. Bayes’ theorem fails to apply because  probability gives correct results in groups of patients only such as in epidemiologic studies while clinically we diagnose a disease in a given individual patient.
20.   Finally, regardless of what you or I believe, the issue about probabilistic diagnosis can only be settled like all issues in science by experiment.
21.   I suggest studies be conducted in large numbers of patients to compare accuracy rates in Bayesian  and usual diagnosis.
22.   If well conducted studies show clear-cut superiority of one or the other method, we should accept the verdict regardless of our personal belief.
23.   Our Society consisting of physicians interested in diagnosis could take the initiative in conducting such studies.
24.   The results of such studies would have important implications, I believe, in teaching clinical diagnosis to novice physicians including medical students and in reducing diagnostic errors.
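A minimal simulation sketch (added for illustration, not part of Bimal's argument) of the point in items 7 and 8: the observed frequency approaches the underlying probability only as the series grows, while a series of one yields nothing but the presence or absence of the outcome. The probability of 0.07 is an arbitrary choice.

    import random

    random.seed(0)
    p = 0.07  # assumed probability of the outcome, chosen arbitrarily
    for n in (1, 10, 100, 10000, 1000000):
        hits = sum(random.random() < p for _ in range(n))
        print(n, hits / n)  # the observed frequency converges toward 0.07 as n grows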
 
 
Bimal
 
 
Bimal P Jain MD
Pulmonary-CriticalCare
NorthShore Medical Center
Lynn MA 01904
 
 
 
 
 
 
 
 
 
 
 
 
 
From: John Brush [mailto:jebrush at ME.COM] 
Sent: Saturday, August 30, 2014 10:18 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] Ten Commandments to Reduce Diagnostic Errors
 
Bimal,
                You are confusing variability of clinical manifestations with probability of a disease. The range of possible manifestations of a disease is not the same thing as the range of possible diagnoses that might explain a manifestation. This is shown by the figure from the article by Goodman that was referenced in a previous email response from David Newman-Toker (see below). We explain variability of disease deductively. CPCs and book chapters are set up this way.  But we make diagnoses inductively, starting with the manifestation and working backward toward a general category, or diagnosis. This requires inverse probability, induction, Bayesian inference, and the use of conditional probability. 
                 
[image001.png: figure from the Goodman article referenced above]
                This has actually been thoroughly discussed in the literature (see the classic article by Diamond and Forrester: Analysis of probability in the clinical diagnosis of coronary-artery disease. N Engl J Med 1979;300:1350-1358). I don’t think you could pass the cardiology boards without an understanding of conditional probability and Bayesian logic as it applies to stress testing, troponin, BNP, d-dimers, and other tests. There was a classic series in JAMA, commissioned by David Sackett, that was compiled into an excellent book, The Rational Clinical Examination, by Simel and Rennie, where the use of conditional probability was expanded to simple physical exam findings. Bayesian logic was also discussed in the lay literature by Nate Silver in The Signal and the Noise. He used Bayesian logic to predict the outcome of the last election and was 50 for 50 in his state-by-state predictions.
                Your call to only use tests with likelihood ratios of greater than 10 is simply unrealistic. A test has to have a sensitivity of at least 90% and a specificity of at least 91% to have a LR(+) of 10. A test that good is almost non-existent. As I stated in a prior email, imaging stress tests have a LR(+) of 6 and a positive troponin has a LR(+) of 4.7. ST elevation on EKG has a high LR(+), but only when used in a specific setting, and using a very restrictive criterion of ST elevation.
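For readers who want the arithmetic behind that claim, LR(+) = sensitivity / (1 - specificity). A short sketch in Python; the 85%/86% pair is a hypothetical combination that happens to give an LR near 6, not a published figure.

    def positive_likelihood_ratio(sensitivity, specificity):
        # LR(+) = sensitivity / (1 - specificity)
        return sensitivity / (1.0 - specificity)

    print(positive_likelihood_ratio(0.90, 0.91))  # 10.0, the threshold discussed above
    print(positive_likelihood_ratio(0.85, 0.86))  # ~6.1, a hypothetical sens/spec pair giving an LR near 6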
                Learning about variability of clinical manifestations is already part of our training. We know that there can be formes frustes of disease. We are all taught that an unlikely manifestation of a common disease is more likely than an uncommon disease. Representativeness is falling for an uncommon disease because of unusual manifestations while ignoring the more likely possibilities. In fact, virtually all of the fallacies described by Daniel Kahneman are misinterpretations of probability (representativeness, availability, anchoring and adjusting). The solution is to think more about probabilities, not less, and to try to be a bit more quantitative and precise. Probabilistic thinking, even if it is semi-quantitative, provides a framework for double-checking our own thinking and thoughtfully examining our conclusions.
                Likelihood ratios are useful as multipliers to help us calculate probability. And they help us understand the relative strength of new information. But they don’t really make sense unless you use them in conjunction with pre-test odds to calculate post-test odds, and then post-test probability. I don’t know how you can talk about likelihood ratios while criticizing the value of using probability in practice. This doesn’t seem logical to me.
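A minimal sketch of that workflow, chaining two findings on the odds scale. The 10% pretest probability is hypothetical; the likelihood ratios of 4.7 and 6 are the troponin and imaging stress test figures mentioned above; and multiplying them assumes the two findings are conditionally independent.

    def prob_to_odds(p):
        return p / (1.0 - p)

    def odds_to_prob(odds):
        return odds / (1.0 + odds)

    pretest_prob = 0.10                # hypothetical pretest probability
    odds = prob_to_odds(pretest_prob)
    for lr in (4.7, 6.0):              # positive troponin, then positive imaging stress test
        odds *= lr                     # each likelihood ratio scales the odds (assumes independence)
    print(odds_to_prob(odds))          # ~0.76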
                We seem to be going over the same ground over and over. But I think we need to have some clear logic for the readers of the listserv.
Thanks.
John
John E. Brush, Jr., M.D., FACC
Professor of Medicine
Eastern Virginia Medical School
Sentara Cardiology Specialists
844 Kempsville Road, Suite 204
Norfolk, VA 23502
757-261-0700
Cell: 757-477-1990
jebrush at me.com
 
 
 
On Aug 27, 2014, at 9:59 PM, Swerlick, Robert A <rswerli at EMORY.EDU> wrote:
 
Bimal,
 
I am perplexed by your logic. The examples you chose to highlight are ones where the diagnostic tests are especially robust. They represent the exceptions rather than the rule. The utility of most diagnostic tests is heavily dependent upon the context in which they are deployed, and for the most part they nudge us in certain directions. They do not close the deal.

Are you rejecting the probabilistic nature of diagnoses in general? What alternative do you offer? Hypotheses are rarely "proven true". The best you can hope for is that they are not refuted and that, with mounting evidence, the probability they are true becomes greater.
 
In addition, each diagnosis has predictions, which depend upon probabilities, built into it. A diagnosis of AMI or PE or any other diagnosis is linked to likelihoods of specific outcomes, which can only be viewed through the lens of probability. Some of those outcomes will come about, some will not, and for given populations the particular outcomes happen with certain predictable frequencies.
 
Feynman did an autopsy on the Challenger. What we do in medicine is more like doing the ice water test on the o-ring before the crash and predicting an outcome. 
 
Bob
 
Robert A. Swerlick, MD
Alicia Leizman Stonecipher Chair of Dermatology
Professor and Chairman, Department of Dermatology
Emory University School of Medicine
404-727-3669
From: Jain, Bimal P.,M.D. [BJAIN at PARTNERS.ORG]
Sent: Wednesday, August 27, 2014 9:54 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] Ten Commandments to Reduce Diagnostic Errors

1.       To improve diagnosis, it is important to understand, I believe, how it is performed in actual practice.
2.       The real life method of diagnosis overcomes two major challenges:
(a)    The varying typicality of presentation of a given disease in different patients
(b)   The need to determine a disease correctly in every individual patient with symptoms.
3.       The notion of typicality is equivalent to that of pretest probability, as both indicate the frequency of a disease in patients with a given presentation. Thus a highly typical presentation, like a high pretest probability of a disease, indicates that most patients with that presentation have the disease.
4.       A given patient with a certain presentation can thus be looked upon as being drawn from a series of patients with similar presentations.
5.       The typicality of a presentation is therefore not evidence for or against a disease in a given patient as it refers only to a frequency in a series and not to presence or absence of disease in the given patient.
6.       In actual practice therefore, a presentation is employed only as a clue, I suggest, from which we suspect a disease in a given patient. Thus highly characteristic chest pain in a 65 year old male with multiple cardiac risk factors as well as highly uncharacteristic chest pain in a healthy 40 year old woman with no cardiac risk factor make us only suspect acute myocardial infarction (acute MI) in both these patients.
7.       The suspected disease is then assumed or postulated to be present and thus given the status of a hypothesis.
8.       The hypothesis is then evaluated by a test and if a highly informative test result with likelihood ratio (LR) of 10 or higher is observed, the hypothesis is considered correct and the suspected disease diagnosed definitively.
9.       The hypothesis of acute MI in the above two patients is evaluated by performing an EKG. If acute Q wave and ST elevation changes (acute EKG changes) with LR of 13 are observed in both patients, acute MI is diagnosed with near certainty in both patients.
10.   A test result with LR of 10 or higher is usually obtained by performing a laboratory, imaging or biopsy study and occasionally from physical examination. For example, observation of unilateral erythematous, vesicular skin lesions would confirm diagnosis of herpes zoster suspected in a patient with unilateral back pain.
11.   Clinical diagnosis in actual practice is performed, I suggest, in two sequential steps:
(a)    A disease is suspected from a presentation
(b)   It is diagnosed definitively from a test result with LR of 10 or higher.
        12. During diagnosis therefore, the status of a disease is
                (a) That of a postulated disease or hypothesis in the first stage and
                (b) That of a confirmed or definitively diagnosed disease in the second stage.
        13. In the CPCs published in NEJM, these two stages are clearly seen
               (a) In the first stage, the discussing physician postulates a disease from given information (presentation) which has the status of a hypothesis
               (b) In the second stage, the postulated disease is proven correct (or not) when the pathologist gives a highly informative test result, which is usually a biopsy (or autopsy) finding but
                      may be a laboratory or imaging test result.
        14. In general, it is rare to make a definitive diagnosis from a presentation alone. This may occur, however, when a highly informative test result is part of the presentation.
               For example, herpes zoster is diagnosed definitively if a patient presents with painful, unilateral erythematous, vesicular skin lesions.
        15. A presentation ceases to play any further role in diagnosis once a test result with LR of 10 or higher is observed. For example, pulmonary embolism is diagnosed definitively when a positive chest
               CT angiogram (LR 21) is observed and deep vein thrombophlebitis is diagnosed definitively when a positive venous ultrasound study (LR 19) is found regardless of typicality of presentation.
        16. It is seen from the above account that probability does not seem to play any significant role in diagnosis in actual practice.
        17. It is difficult to assess the value of the proposed Bayesian (probabilistic) approach to diagnosis in actual practice as there are hardly any published accounts of Bayesian diagnosis in actual patients. I present
               below a patient discussed in a clinical problem solving exercise (Pauker, NEJM 1992) in which Bayesian diagnosis was attempted.
        18. A healthy 40 year old woman without any cardiac risk factor presents with highly uncharacteristic chest pain and is found to have acute Q wave and ST elevation EKG changes.
        19. The pretest probability of acute MI was estimated to be 7 percent, which was combined with the known LR of acute EKG changes of 13 by Bayes’ theorem to generate a post test probability of acute
               MI of 50 percent (this arithmetic is reproduced in the short sketch after this list). The Bayesian diagnosis from this post test probability obviously is that acute MI is indeterminate in this patient.
        20. But the discussing physician ignored the Bayesian diagnosis and correctly diagnosed acute MI with near certainty from the strong evidence provided by acute EKG changes alone.
        21. He diagnosed in this manner, I suggest, because acute EKG changes are known to diagnose acute MI correctly in 90 percent of patients regardless of pretest probability (Rude, Am J Card 1983).
        22. I consider diagnosis to be a problem solving process which is similar to problem solving in any other field. It is strikingly similar, for example, as I discuss below, to the manner in which the great
               American physicist Richard Feynman ‘diagnosed’ the cause of the explosion of the space shuttle Challenger in 1986.
        23. He carefully studied all available information about the launch of Challenger, much as a physician discussing a CPC would study available information about the patient he is to discuss. From his study he
               suspected malfunction of a rubber O-ring, which served as a seal, due to the extremely cold temperature (28 F) at the time of launch. He postulated this explanation as a hypothesis, which he evaluated
               with his famous experiment conducted on television in which he dipped a replica of the O-ring in a glass of ice cold water. He found the O-ring to become brittle and therefore incapable of functioning
               properly as a seal, thereby proving his hypothesis correct.
         24. It will be noted Feynman did not employ probabilities; therefore his method is non-Bayesian. In fact, he had harsh words to say about what he felt was improper use of probabilities by NASA
                engineers. Feynman has narrated his investigation of the Challenger incident in his inimitable style in his highly entertaining and instructive book ‘What Do You Care What Other People Think?’
         25. It would be immensely useful to all of us if the value (or not) of the Bayesian approach in diagnosis is decisively established by a well conducted experimental study.
          26. For if such a study shows a clear-cut superiority over the usual approach, which is about 85 percent accurate, we should all adopt it in our daily practice. If, however, it is not found to be superior, or is
                 found to be inferior, we should stop thinking about employing it and focus instead on learning more about how diagnosis is performed in actual practice.
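The arithmetic in items 18 and 19, worked out explicitly (a sketch added for illustration; the 7 percent pretest probability and the LR of 13 are the figures quoted from the Pauker exercise):

    pretest_prob = 0.07
    lr = 13.0
    pretest_odds = pretest_prob / (1 - pretest_prob)      # about 0.075
    posttest_odds = pretest_odds * lr                     # about 0.98
    posttest_prob = posttest_odds / (1 + posttest_odds)   # about 0.49, i.e. roughly 50 percent
    print(posttest_prob)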
 
         Bimal
 
 
        Bimal P Jain MD
        Pulmonary-CriticalCare
        NorthShore Medical Center
        Lynn MA 01904
               
                      
 
 
 
 
 
 
 
 
 
 
From: Pauker, Stephen [mailto:SPauker at TUFTSMEDICALCENTER.ORG] 
Sent: Tuesday, August 26, 2014 1:12 PM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] Ten Commandments to Reduce Diagnostic Errors
 
Well now, Dr Crab, one can craft similar problems
With every (any) commandment or rule.
 
 
Rule 1: Every rule has exceptions. (Yes, but)
Rule 2: Uncertainty and variation are not going away. (So manage them)
Rule 3: Although History was important, it pales at the autopsy (or is it the MRI)
Rule 4: Since history is a kind of test, testing only if it will change plans,
             Is not an enforceable rule.
Rule 5: Publication is merely telling a convincing story to reviewers. (What  makes evidence?)
Rule 6: In a pinch, a brain outsmarts an iPad.
Rule 7: Be wary of Intuition and “Evidence.”
Rule 8: Although the uncommon can be important, Prevalence and Bayes Rule triumphs.
Rule 9: Don’t ignore System I thinking (gut feelings)
Rule 10: It’s only a guideline.
 
Steve ;-)
 
From: Harold Lehmann [mailto:lehmann at JHMI.EDU] 
Sent: Tuesday, August 26, 2014 9:30 AM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: Re: [IMPROVEDX] Ten Commandments to Reduce Diagnostic Errors
 
Am I a crab to point out that “First Do No Harm” is false? Because we harm all the time—because we (and hopefully the patient) think it’s worth it. (Asking embarrassing History questions (#)…asking for disrobing…cold stethoscope…rectal exam…gagging pharyngeal exam…blood test…IV…VCUG…bone marrow aspiration…Need I go on?)
 
So: “First, Do the Least Necessary Harm”?
 
Also—re "think of serious and treatable conditions and act on them without delay”—does that reward availability bias? Or are we saying that any such “thought” means the likelihood is > 1/1,000, which I have found (in 20 years of eliciting from residents) is the threshold for referring infants to the ED for an LP, and therefore above threshold?
 
Or should we say: “Think of serious and treatable conditions and act on them without delay, if the likelihood is high enough”?
 
Harold
 
From: "<Patrice F. Hirning>", <MD>, <MACP>, CPHRM <phirning at UMIA.COM>
Reply-To: Society to Improve Diagnosis in Medicine <IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG>, "Patrice F. Hirning, MD, MACP, CPHRM" <phirning at UMIA.COM>
Date: Monday, August 25, 2014 at 8:52 PM
To: "IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG" <IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG>
Subject: Re: [IMPROVEDX] Ten Commandments to Reduce Diagnostic Errors
 
What a great list. This should be shared with all medical students, house staff and practicing physicians. I plan to add these to my presentation to physicians about diagnostic error.
 
Patrice
 
Patrice F. Hirning, MD, MACP, CPHRM
Medical Director
UMIA Insurance, Inc.
310 East 4500 South, Suite 550
Salt Lake City, Utah 84107
Office 801.554.1145 
Fax 801.531.0381 
Toll Free 800.748.4380
phirning at umia.com
 
 
From: Lorri Zipperer [mailto:Lorri at ZPM1.COM] 
Sent: Sunday, August 24, 2014 7:13 PM
To: IMPROVEDX at LIST.IMPROVEDIAGNOSIS.ORG
Subject: [IMPROVEDX] Ten Commandments to Reduce Diagnostic Errors
 
Forwarded by the moderator:
 
From Dr. Leonardo Leonidas, Bangor, Maine 20 May 2001  Copyright 2001
Given to his Son Len and Class 2001 Tufts University School of Medicine
 
   1. Thou shalt First "Do No Harm."
   2. Thou shalt think of serious and treatable conditions and act on them without delay.
   3. Thou shalt remember that Diagnosis is History, History, History.  Then confirm with clinical examination and more History.
   4. Thou shalt request a test only if it will change your plan or help in predicting the outcome.
   5. Thou shalt question "authority" such as your senior residents, attendings, experts, or even National guidelines.
   6.  Thou shalt continue the debate and questioning even though the data is "IN."
   7. Thou shalt maintain a high index of suspicion for uncommon presentations of the common.
   8. Thou shalt recognize your own beliefs, biases, prejudices, and thinking style.
   9. Thou shalt be wary of your hunches and intuitions. It is better to use Evidence Based Medicine.
  10.  Thou shalt have an iPad* or a smartphone in your palm.
 
*Palm Pilot in the first edition.
 
Leonardo L. Leonidas, MD
Assistant Clinical Professor in Pediatrics (retired 2008)
Tufts University School of Medicine, Boston, USA
nonieleonidas68 at gmail.com <mailto:nonieleonidas68 at gmail.com>
 
 

