UNBOUNDED VERSUS BOUNDED (ECOLOGICAL) RATIONALITY IN DIAGNOSIS

Woods, David woods.2 at OSU.EDU
Tue Nov 14 18:34:33 UTC 2017


To clarify: you are referring to abductive reasoning. For a general real-world view of abductive reasoning, see chapters 8 and 9 of my 2006 book (attached). [Also note the oversimplification tendencies covered briefly in chapter 9: discussions about diagnosis can fall into these traps, just as diagnostic processes themselves can.]

I also include an excerpt from a review which addresses this topic of anomaly response and abductive reasoning. You might also want to go through most of the chapter, as it provides a quick overview of how and why the ability to revise is the most important attribute of diagnostic processes, and of how machine reasoners (automation, autonomy, algorithms) are fundamentally brittle.

David


David Woods
Releasing the Adaptive Power of Human Systems

follow @ddwoods2<https://twitter.com/ddwoods2>

Professor
Department of Integrated Systems Engineering
The Ohio State University

Past-President
Resilience Engineering Association

7th Biennial International Symposium on Resilience Engineering
Liège, Belgium, June 26-29, 2017

woods.2 at osu dot edu
614-946-0123

SNAFU Catchers Consortium
 http://bit.ly/StellaReportVelocity2017
stella.report  <https://drive.google.com/file/d/0B7kFkt5WxLeDTml5cTFsWXFCb1U/view>

For the keynote on autonomy and people, see:
part 1: https://youtu.be/b8xEpjW0Sqk   part 2: https://youtu.be/as0LipGTm5s  part 3: https://youtu.be/2GEsxMuLWIE

For keynotes on resilience and complexity, see:
https://www.youtube.com/watch?v=7STcaWjJoww&index=7&list=PL055Epbe6d5YDU6sikjqcd_YM9XT4OehD
or
https://www.youtube.com/watch?v=zHJdDMQJXiw&index=8&list=PL7_JAXDeVTvIZ_Y-ddqCiGF-ZKxtM5MLe




Excerpt from Woods (2018), On the Origins of Cognitive Systems Engineering, on anomaly response and abductive reasoning.



Anomaly Response

The studies of control rooms during emergencies examined a major class of cognitive work — anomaly response. In anomaly response there is some underlying engineered or physiological process, referred to here as the monitored process, whose state changes over time. Faults disturb the functions that go on in the monitored process and generate the demand for practitioners to act to compensate for these disturbances in order to maintain process integrity—what is sometimes referred to as ‘safing’ activities. In parallel, practitioners carry out diagnostic activities to determine the source of the disturbances in order to correct the underlying problem.

Anomaly response situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks (Woods, 1988; 1994). Typical examples of fields of practice where this form of cognitive work occurs include flight deck operations in commercial aviation, control of space systems, anesthetic management under surgery, process control, and response to natural disasters.

In the early 1980s the dominant view assumed a static situation in which diagnosis was a classification task, or, in artificial intelligence terms, heuristic classification (Clancey, 1980). Internal medicine seemed archetypal: a set of symptoms was classified into one of several diagnostic categories in the differential diagnosis. However, this approach seemed limited in its capacity to capture the dynamism and risk I was observing in anomaly response. Among many problems, classification missed the complications that arise when events can cascade. Plus, waiting until the classification was complete guaranteed that responses would be too slow and stale to handle an evolving situation.

In anomaly response, incidents rarely spring full blown and complete; incidents evolve. Practitioners make provisional assessments and form expectancies based on partial and uncertain data. These assessments are incrementally updated and revised as more evidence comes in. Furthermore, situation assessment and plan revision are not distinct sequential stages, but rather they are closely interwoven processes with partial and provisional plan development and feedback that lead to revised situation assessments. As a result, it may be necessary for practitioners to make therapeutic interventions, which have the goal of mitigating the disturbances before additional malfunctions occur, even when the diagnosis of what is producing the anomaly is unknown or uncertain (or even to just buy time to obtain or process more evidence about what is going on). It can be necessary for practitioners to entertain and evaluate assessments that later turn out to be erroneous. To reduce uncertainties it can be necessary to intervene for diagnostic purposes to generate information about the nature and source of the anomaly. And interventions intended to be therapeutic may turn out to have mostly diagnostic value when the response of the monitored process to the intervention is different than expected. Diagnostic value of interventions is also important to help generate possible hypotheses to be ruled out or considered as possibilities.

The early studies of operators in nuclear power emergencies provided a great deal of data about anomaly response as a generic form of cognitive work, later buttressed by studies in the surgical operating room and in space shuttle mission control. Figure 2 reproduces the original figure summarizing the model of anomaly response as a form of cognitive work. The best description of the model uses NASA cases and is presented in detail in Chapter 8 of Woods and Hollnagel (2006).

In the early 1980s cognitive modeling was just beginning to become popular in the form of software made to reason similarly to people (e.g., Allen Newell’s SOAR, 1990; or John Anderson’s ACT-R, 1983). Emilie Roth and I were engaged to develop a cognitive model for nuclear control rooms during emergencies. We approached the modeling task in two new ways (Woods and Roth, 1986). First, we did not try to model a person, a set of people, or a team. Instead we set out to model the cognitive demands any person, intelligent machine, or combination of people and machines would have to be able to handle. For example, in anomaly response there is the potential for a cascade of disturbances to occur, and all joint cognitive systems have to be able to keep pace with this flow of events, anomalies, and malfunctions.

Second, we set out to model the cognitive demands of anomaly response as a form of cognitive work. The model of anomaly response that we built from observations was a form of abductive reasoning (Peirce, 1936). In abductive reasoning there is a set of findings to be explained, potential explanations for these findings are generated (if hypothesis A were true, then A would account for the presence of Finding X), and competing hypotheses are evaluated in a search for the “best” explanation based on criteria such as parsimony. Abduction allows for more than one hypothesis to account for the set of findings, so deciding on what subset of hypotheses best “covers” the set of findings is a treacherous process.
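As a rough illustration of the set-covering flavor of abduction described above, here is a minimal sketch in Python. It is not Pople's Caduceus or our actual system; the findings, hypothesis names, and coverage relations are invented purely for illustration.

from itertools import combinations

def best_explanations(findings, hypotheses):
    """Return the most parsimonious subsets of hypotheses that cover all findings.

    findings: set of finding labels to be explained
    hypotheses: dict mapping hypothesis name -> set of findings it accounts for
    """
    names = list(hypotheses)
    for size in range(1, len(names) + 1):            # prefer fewer hypotheses (parsimony)
        covers = [combo for combo in combinations(names, size)
                  if findings <= set().union(*(hypotheses[h] for h in combo))]
        if covers:
            return covers                            # may return several competing "best" covers
    return []                                        # no combination explains everything

# Illustrative (invented) findings and hypotheses
findings = {"high_temp", "low_flow", "alarm_A"}
hypotheses = {
    "pump_degraded":   {"low_flow"},
    "valve_stuck":     {"low_flow", "alarm_A"},
    "sensor_fault":    {"alarm_A"},
    "loss_of_cooling": {"high_temp", "low_flow"},
}
print(best_explanations(findings, hypotheses))
# -> [('valve_stuck', 'loss_of_cooling'), ('sensor_fault', 'loss_of_cooling')]

Even in this toy version two different two-hypothesis covers tie for parsimony, which is exactly the point above: deciding which subset of hypotheses "best" covers the findings is where the trouble starts.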

There were people developing AI software to reason abductively, so we teamed with one developer, Harry Pople, who was working on the Caduceus software to perform diagnosis in one area of medicine (Pople, 1985). With Pople we tried to work through how his goals for software development could be adapted to do anomaly response in control rooms for engineered processes. The collaboration formed in part because of a common interest in approaches based on abduction, and in part because Pople had run into trouble as he tried to develop Caduceus to handle cases his previous software system, Internist, couldn’t handle: cases where his experimental software had gotten stuck on one explanation. The way the findings to be explained presented themselves over time turned out to make a difference in which explanations appeared to be best, and his software had difficulty revising as additional information became available. Pople thought that looking at a time-dependent process would help him improve his software under development. Besides, Emilie Roth and I had lots of data on how anomaly response worked and sometimes didn’t work so well (Woods, 1984; 1994; 1995).

However, improving the software didn’t go as planned. Characteristics of anomaly response kept breaking his experimental software. The first problem was determining what the findings to be explained are. The AI-ers had used a trick previously: they had determined, in advance, a fixed set of findings to be explained. The machine didn’t have to figure out what counts as a finding or deal with a changing set of findings; the findings were handed to it on a silver (human developer) platter. By the way, when it comes to all things AI, remember there is always a hidden trick; good CSE-ers find ways to break automata and plans by focusing on patterns of demands that challenge any agent or set of agents, whatever the combination of human and machine roles.

Modeling the cognitive demands of anomaly response required hooking the machine reasoning software to multiple dynamic data channels: hundreds and even thousands for a nuclear plant at that time (and think of the advances in sensing since then that provide access to huge data streams). Even though we narrowed in on a very limited set of critical sensor feeds, the machine was quickly victimized by data overload. There were many changes going on all the time; which of these changes needed to be explained? In our model, based on the data on how people do anomaly response, the answer is unexpected changes. But figuring out which events are unexpected is quite difficult and requires a model of what is expected or typical depending on context. Some changes could be abnormal but expected given the situation and context, and therefore not in need of a new or modified explanation. Imposing even more difficulty, the absence of an expected change is a finding very much in need of explanation. Abductive inference engines had no way to compute expectations, but people do use expectations, and, more surprisingly, the mind generates expectations very early in the processing of external cues in order to determine which out of very many changes need more processing (Christoffersen et al., 2007).
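A minimal sketch of that idea, with invented signal names and an invented expectation table, might look like this: findings are departures from context-dependent expectations, including the absence of an expected change, rather than raw changes themselves.

def unexpected_findings(observed_changes, expected_changes):
    """Compare what changed against what the current context says should change.

    observed_changes: set of signals that actually changed
    expected_changes: set of signals expected to change in this context
    Returns the findings in need of a new or revised explanation.
    """
    surprises = observed_changes - expected_changes    # changed, but not expected to
    missing = expected_changes - observed_changes      # expected to change, but didn't
    return {"unexpected_change": surprises, "absent_expected_change": missing}

# Illustrative context: a reactor trip is in progress, so pressure and flow are expected to drop.
expected = {"pressure_drop", "flow_drop"}
observed = {"flow_drop", "temp_rise"}
print(unexpected_findings(observed, expected))
# {'unexpected_change': {'temp_rise'}, 'absent_expected_change': {'pressure_drop'}}
# Only these two items become findings to be explained; the expected flow_drop does not.

The hard part, of course, is the expectation table itself, which has to shift with context; the sketch simply assumes it is given.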

Next, the software had to deal with actions that occurred before any diagnosis was reached and accepted. Abnormal changes demanded interventions, or at least the consideration of intervention, without waiting for more definitive assessments. Many of these actions were taken by automated systems or by other human agents. These actions produced more changes and both expected and unexpected responses. If there was an action to reduce pressure and pressure stayed the same, oops, that’s a new unexpected finding in need of an explanation: what is producing that behavior? Did the action not happen as intended or instructed? What broke down in moving from intention to effect? Or perhaps some other unrecognized process or disturbance is going on whose effects offset the action to reduce pressure, which did occur as intended?
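The pressure example can be made concrete with a small hypothetical sketch: the intervention carries its own expectation, and a mismatch between expected and observed response becomes a new finding to be explained, without by itself settling whether the action failed or some other process offset it. The function, parameter values, and candidate explanations below are illustrative assumptions, not drawn from any fielded system.

def check_intervention(action, expected_effect, before, after, tolerance=0.05):
    """Compare the monitored process response to the effect the action should have had."""
    actual_effect = after - before
    if abs(actual_effect - expected_effect) <= abs(expected_effect) * tolerance:
        return None                                  # response as intended: nothing new to explain
    return {                                         # mismatch becomes a finding to be explained
        "finding": f"{action}: expected change {expected_effect}, observed {actual_effect}",
        "candidate_explanations": [
            "action did not take effect as intended or instructed",
            "another unrecognized process or disturbance offset the action",
        ],
    }

# Operator (or automation) commands a pressure reduction of 10 units, but pressure stays flat.
print(check_intervention("reduce_pressure", expected_effect=-10.0, before=152.0, after=151.8))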

Another issue in abductive reasoning and in anomaly response is where hypotheses come from. AI approaches snuck in a trick again: the base set of hypotheses was preloaded. Yes, in abductive reasoning computations the software could build lots of different composites from the base set. But in actual anomaly response, generating hypotheses is part of the cognitive work, a difficult demand. Studies showed that having diverse perspectives helps generate a wider set of possible hypotheses, as do many other factors. One thing is clear though: asking a single agent, human or machine, to generate the widest set of possible hypotheses all by itself is too much. The right teaming with the right interplay will do much better. These and other factors created an ironic twist: we didn’t need a more sophisticated diagnostic evaluation in our abductive reasoner. What we needed were new modules beyond the standard AI software to track what was unexpected and to generate provisional assessments ready to revise as everything kept changing (Woods et al., 1987; 1988; 1990; Roth et al., 1992).

While I could go on in great depth about different aspects of anomaly response as a critical activity of cognitive systems at work, several things stand out. First, anomaly response is the broadest reference model for diagnostic processes; some settings relax parts of the demands or function in a default mode as mere classification, but broadly speaking all diagnostic activities are some version of anomaly response. Second, the model we developed in the mid-1980s is still the best account of the demands that have to be met by any set of agents, of whatever type and in whatever collaborative configuration. And third, we still don’t have software that can contribute effectively to the hard parts of anomaly response as part of a joint cognitive system. It’s disappointing.

On Nov 14, 2017, at 1:00 PM, Jain, Bimal P., M.D. <BJAIN at PARTNERS.ORG> wrote:

In this short attached paper, I point out that the Bayesian method, in which probability is evidence, has been prescribed as the method of diagnosis because of the unbounded rationality of this method for inference about any uncertain event.
But we find this method is not employed for diagnosis in practice because it fails to achieve our goal in diagnosis, the accurate determination of a disease in a given, individual patient, and is therefore not ecologically rational.
The method that is employed in practice consists of hypothesis generation and confirmation, which achieves the goal of accuracy in a given, individual patient and is thus ecologically rational.

We suggest that the prescription of the Bayesian method be replaced by the ecologically rational method of hypothesis generation and confirmation as long as our goal in diagnosis remains accuracy in a given, individual patient.

Please review and comment on this paper.

Thanks.

Bimal

Bimal P Jain MD
Northshore Medical Center
Lynn MA 01907.




<Unbounded versus bounded (ecological) rationality in diagnosis.pdf>



Attachments: chapter8_JCS-Patterns.pdf, chapter9_JCS-Patterns.pdf, Origins_C003-corrections.pdf

