Thursday, October 10, 2024

field guide to human error investigations (Sidney Dekker)

 
Acknowledgements

I want to thank those who alerted me to the need for this book and who inspired me to write it, in particular Air Safety Investigator Maurice Peters and Captain Örjan Goteman. It was written on a grant from the Swedish Flight Safety Directorate and Arne Axelsson, its director.  Kip Smith and Captain Robert van Gelder and his colleagues were invaluable for their comments and suggestions during the writing of earlier drafts. 

S.D.
Linköping, Sweden
Summer 2001

  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.4  (pdf page: 8/154)
Investigators intend to find the systemic vulnerabilities behind individual errors. They want to address the error-producing conditions that, if left in place, will repeat the same basic pattern of failure. 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.20  (pdf page: 23/154)
Focusing on people at the sharp end

Reactions to failure focus firstly and predominantly on those people who were closest to producing and to potentially avoiding the mishap. It is easy to see these people as the engine of action. If it were not for them, the trouble would not have occurred. 

p.20  (pdf page: 23/154)
Blunt end and sharp end

In order to understand error, you have to examine the larger system in which these people worked.  You can divide an operational system into a sharp end and a blunt end: 

 •  At the sharp end (for example the train cab, the cockpit, the surgical 
    operating table), people are in direct contact with the safety-
    critical process; 
 •  The blunt end is the organization or set of organizations that supports 
    and drives and shapes activities at the sharp end (for example the
    airline or hospital; equipment vendors and regulators). 

pp.20-21  (pdf page: 23-24/154)
The blunt end gives the sharp end resources (for example equipment, training, colleagues) to accomplish what it needs to accomplish. But at the same time it puts on constraints and pressures (“don't be late, don't cost us any unnecessary money, keep the customers happy”).  Thus the blunt end shapes, creates, and can even encourage opportunities for errors at the sharp end.  Figure 2.3 shows this flow of causes through a system.  From blunt to sharp end; from upstream to downstream; from distal to proximal.  It also shows where the focus of our reactions to failure is trained:  on the proximal 

p.21  (pdf page: 24/154)
Figure 2.3:  Failures can only be understood by looking at the whole system in which they took place.  But in our reactions to failure, we often focus on the sharp end, where people were closest to causing or potentially preventing the mishap. 

p.22  (pdf page: 25/154)
Why do people focus on the proximal?

Looking for sources of failure far away from people at the sharp end is counter-intuitive.  And it can be difficult.  If you find that sources of failure lie really at the blunt end, this may call into question beliefs about the safety of the entire system.  It challenges previous views.  Perhaps things are not as well-organized or well-designed as people had hoped.  Perhaps this could have happened any time.  Or worse, perhaps it could happen again. 


    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.23  (pdf page: 26/154)
Some people and organizations see surprise as an opportunity to learn. Failures offer them a window through which they can see the true internal workings of the system that produced the incident or accident. These people and organizations are willing to change their views, to modify their beliefs about the safety or robustness of their system on the basis of what the system has just gone through. This is where real learning about failure occurs, and where it can create lasting changes for the good. But such learning does not come easy. And it does not come often. Challenges to existing views are generally uncomfortable. Indeed, for most people and organizations, coming face to face with a mismatch between what they believed and what they have just experienced is difficult. These people and organizations will do anything to reduce the nature of the surprise. People and organizations often want the surprise in the failure to go away, and with it the challenge to their view and beliefs. 

p.24  (pdf page: 27/154)
Potentially disruptive lessons about the system as a whole are transformed into isolated hiccups by a few uncharacteristically ill-performing individuals. 
     This transformation relieves the larger organization of any need to change views and beliefs, or associated policies or spending practices. The system is safe, if only it weren't for a few unreliable humans in it. 

p.25  (pdf page: 28/154)
As far as organizational learning is concerned, the mishap might as well not have happened. 

p.30  (pdf page: 33/154)
FAILURES AS THE BY-PRODUCT OF NORMAL WORK

What is striking about many accidents is that people were doing exactly the sorts of things they would usually be doing──the things that usually lead to success and safety.  People are doing what makes sense given the situational indications, operational pressures and organizational norms existing at the time.  Accidents are seldom preceded by bizarre behavior. 
    If this is the most profound lesson you and your organization can learn from a mishap, it is also the most frightening.  The difficulty of accepting this reality lies behind our reaction to failure.  Going beyond reacting to failure means acknowledging that failures are baked into the very nature of your work and organization; that they are symptoms of deeper trouble or by-products of systemic brittleness in the way you do your business.  It means having to acknowledge that mishaps are the result of everyday influences on everyday decision making, not isolated cases of erratic individuals behaving unrepresentatively.  Going beyond your reactions to failure means having to find out why what people did back there and then actually made sense given the organization and operation that surrounded them. 


    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.31  (pdf page: 35/154)
Two persistent myths drive our search for the cause of failure: 

 •  We think there is something like  the cause  of a mishap (sometimes 
    we call it the root cause, or the primary cause), and if we look in the 
    rubble hard enough, we will find it there.  The reality is that there
    is no such thing as  the cause, or the primary cause or root cause. 
    Cause is something we construct, not find.  The first part of this chapter
    talks about that. 
 •  We think we can make a distinction between human cause and mechanical
    cause, and that a mishap has to be caused by either one or the other. 
    This is an oversimplification.  Once you acknowledge the complexity 
    of pathways to failure, you will find that the distinction between 
    mechanical and human contributions becomes very blurred; even impossible 
    to maintain.  The second part of this chapter talks about that. 

p.34  (pdf page: 38/154)
The causal web quickly multiplies and fans out, like cracks in a window.  What you call “root cause” is simply the place where you stop looking any further.  As far as the causal web is concerned, there is no such thing as a root or primary cause──there is in fact no end anywhere.  If you find a root or primary cause, it was your decision to distinguish something in the dense causal pattern by those labels. 

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.38  (pdf page: 42/154)
In safety debates there are three ways of using the label “error”: 

 • Error as the  cause  of failure.  For example: This event was due to 
    human error. 
 • Error as the  failure itself.  For example: The pilot's selection of that 
    mode was an error. 
 • Error as a  process , or, more specifically, as a departure from some 
    kind of standard.  This may be operating procedures, or simply good 
    airmanship.  Depending on what you use as standard, you will come to
    different conclusions about what is an error. 

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

pp.49-50  (pdf page: 52-53/154)
Human error ── matter over mind

Things are different when you begin your investigation with the unfolding situation in which people found themselves.  Methods that attribute human error to structures inside the brain easily ignore the situation in which human behavior took place, or they at least underestimate its importance. Yet it makes  sense to start with the situation: 

 •  Past situations can be objectively reconstructed to a great extent, 
    and documented in detail; 
 •  There are tight and systematic connections between situations and 
    behavior; between what people did and what happened in the world 
    around them. 

    These connections between situations and behavior work both ways: 

 •  People change the situation by doing what they do; by managing 
    their processes; 
 •  But the evolving situation also changes people's behavior. An 
    evolving situation provides changing and new evidence; it updates 
    people's understanding; it presents more difficulties; it forecloses or
    opens pathways to recovery. 
  
    You can uncover the connections between situation and behavior, investigate them, document them, describe them, represent them graphically.  Other people can look at the reconstructed situation and how you related it to the behavior that took place inside of it. Other people can actually trace your explanations and conclusions. Starting with the situation brings a human error investigation out in the open. It does not rely on hidden psychological structures or processes, but instead allows verification and debate by those who understand the domain.  When a human error investigation starts with the situation, it sponsors its own credibility. 
    A large part of a human error investigation, then, is not at all about the human behind the error.  It is not about supposed structures in a human's mind; about psychological constructs that were putatively [supposedly, generally assumed] involved in causing mental hiccups.  A large part of a human error investigation is about the situation in which the human was working; about the tasks he or she was carrying out; about the tools that were used. 

   ****************************************
   *                                      *
   *   THE RECONSTRUCTION OF MINDSET      *  
   *   BEGINS NOT WITH THE MIND           *
   *                                      *
   *   IT BEGINS WITH THE CIRCUMSTANCES   *
   *   IN WHICH THE MIND FOUND ITSELF     *
   *                                      *
   ****************************************


 •  how the process, the situation, changed over time;
 •  how people's assessments and actions evolved in parallel with their
    changing situation; 
 •  how features of people's tools and tasks and their organizational 
    and operational environment influenced their assessments and 
    actions inside that situation. 

This is what the reconstruction of unfolding mindset, the topic of chapter 9, is all about. 

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.54  (pdf page: 57/154)
By referring to procedures, physically available data or standards of good practice, investigators can micro-match controversial fragments of behavior with standards that seem applicable from their after-the-fact position. Referent worlds are constructed from outside the accident sequence, based on data investigators now have access to, based on the facts they now know to be true. The problem is that these after-the-fact worlds may have very little relevance to the circumstances of the accident sequence.  They do not explain the observed behavior. The investigator has substituted his own world for the one that surrounded the people in question. 

p.56  (pdf page: 59/154)
PUT DATA IN CONTEXT

Taking data out of context, either by: 

 •  micro-matching them with a world you now know to be true, or by 
 •  lumping selected bits together under one condition identified in 
    hindsight 

p.57  (pdf page: 60/154)
robs data of its original meaning. And these data out of context are simultaneously given a new meaning──imposed from the outside and from hindsight. 

p.57  (pdf page: 60/154)
You impose this new meaning when you look at the data in a context you  now  know to be true. Or you impose meaning by tagging an outside label on a loose collection of seemingly similar fragments. 

p.57  (pdf page: 60/154)
    To understand the actual meaning that data had at the time and place it was produced, you need to step into the past yourself. When left or relocated in the context that produced and surrounded it, human behavior is inherently meaningful. 

p.57  (pdf page: 60/154)
Historian Barbara Tuchman put it this way: “Every scripture is entitled to be read in the light of the circumstances that brought it forth. To understand the choices open to people in another time, one must limit oneself to what they knew; see the past in its own clothes, as it were, not in ours.”4

4    Tuchman, B. (1981).  Practicing history: Selected essays. New York: Norton, page 75. 

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.62  (pdf page: 63/154)

 •  Safety is never the only goal in the systems that people operate. 
    Multiple interacting pressures and goals are always at work. There
    are economic pressures; pressures that have to do with schedules, 
    competition, customer service, public image. 
 •  Trade-offs between safety and other goals often have to be made
    under uncertainty and ambiguity. Goals other than safety are easy 
    to measure (How much fuel will we save?  Will we get to our 
    destination?).  However, how much people borrow from safety to 
    achieve those goals is very difficult to measure. 
 •  Systems are not basically safe. People in them have to create safety
    by tying together the patchwork of technologies, adapting under 
    pressure and acting under uncertainty. 

Trade-offs between safety and other goals enter, recognizably or not, into thousands of little and larger decisions and considerations that practitioners make every day.  Will we depart or won't we?  Will we push on or won't we?  Will we accept the directive or won't we?  Will we accept this display or alarm as indication of trouble or won't we?  These trade-offs need to be made under much uncertainty and often under time pressure. 

p.63  (pdf page: 64/154)

   ****************************************
   *                                      *
   *   HUMAN ERRORS ARE SYMPTOMS OF       *  
   *   DEEPER TROUBLE                     *
   *                                      *
   ****************************************

Human error is the starting point of an investigation. The investigation is interested in what the error points to. What are the sources of people's difficulties?  Investigations target what lies behind the error──the organizational trade-offs pushed down into the individual operating units; the effects of new technology; the complexity buried in the circumstances surrounding human performance; the nature of the mental work that went on in difficult situations; the way in which people coordinated or communicated to get their jobs done; the uncertainty of the evidence around them. 
    Why are investigations in the new view interested in these things?  Because this is where the action is. 

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

pp.74-75  (pdf page: 74-75/154)
    Recognize that data is not something absolute.  There is not a finite amount of data that you could gather about a human error mishap and then think you have it all.  Data about human error is infinite, and you will often have to reconstruct certain data from other data, cross-linking and bridging between different sources in order to arrive at what you want to know. 

p.75  (pdf page: 75/154)
    This can take you into some new problems.  For example, investigations may need to make a distinction between factual data and analysis. So where is the border between these two if you start to derive or infer certain data from other data?  It all depends on what you can factually establish and how factually you establish it. If there is structure behind your inferences──in other words, if you can show what you did and why you concluded what you concluded──it may be quite acceptable to present well-derived data as factual evidence. 

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.86  (pdf page: 85/154)
So what, out of all the data available, did people actually see and how did they interpret it?   You may have laid out all the possible and relevant parameters, but what did people actually notice?  What did they understand their situation to be?  The answer lies in three things: 

 •  People have goals.  They are in a situation to get a job done; to 
    achieve a particular aim. 
 •  People have knowledge.  They use this to interpret what goes on 
    around them. 
 •  People's goals and knowledge together determine their focus of 
    attention.  Where people look depends on what they know and what 
    they want to accomplish. 

pp.86-87  (pdf page: 85-86/154)
In this section, we will first look at goals──at conflicts between interacting goals in particular, as those almost always exist in systems operations and have a profound influence on how people decide and act. Then we shift to knowledge and focus of attention. 

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.91  (pdf page: 90/154)
It is hard for organizations, especially in highly regulated industries, to admit that these kinds of tricky goal trade-offs arise; even arise frequently. But denying the existence of goal conflicts does not make them disappear. For a human error investigation it is critical to get these goals, and the conflicts they produce, out in the open. If not, organizations easily produce something that looks like a solution to a particular incident, but that in fact makes certain goal conflicts worse. 

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

pp.102-103  (pdf page: 100-101/154)

The new view on the role of technology

How does the new view on human error look at the role of technology? 
New technology does not remove the potential for human error, but changes it. New technology can give a system and its operators new capabilities, but it inevitably brings new complexities too: 

 •  New technology can lead to an increase in operational demands by 
    allowing the systems to be driven faster; harder; longer; more 
    precisely or minutely. Although first introduced as greater protection
    against failure (more precise approaches to the runway with a 
    Head-Up-Display, for example), the new technology allows a system
    to be driven closer to its margins, eroding the safety advantage 
    that was gained. 
 •  New technology is also often ill-adapted to the way in which people 
    do or did their work, or to the actual circumstances in which people 
    have to carry out their work, or to other technologies that were
    already there.
 •  New technology often forces practitioners to tailor it in locally
    pragmatic ways, to make it work in real practice. 
 •  New technology shifts the ways in which systems break down. 
 •  It asks people to acquire more knowledge and skills, to remember 
    new facts. 
 •  It adds new vulnerabilities that did not exist before. It can open
    new and unprecedented doors to system breakdown. 

 The new view of human error maintains that: 

 •  People are the only ones who can hold together the patchwork of 
    technologies introduced into their worlds; the only ones who can
    make it all work in actual practice; 
 •  It is never surprising to find human errors at the heart of system 
    failure because people are at the heart of making these systems
    work in the first place. 

p.107  (pdf page: 105/154)
     Larry Hirschhorn talks about a law of systems development, which is that every system always operates at its capacity.  Improvements in the form of new technology get stretched in some way, pushing operators back to the edge of the operational envelope from which the technological innovation was supposed to buffer them. 

p.109  (pdf page: 107/154)
     Bainbridge wrote about the ironies of automation in 1987.  She observed that automation took away the easy parts of the jobs, and made the difficult parts more difficult. Automation counted on human monitoring, but people are bad at monitoring for very infrequent events. Indeed, automation did not fail often, which limited people's ability to practice the kinds of breakdown scenarios that still justified their presence in the system. The human is painted as a passive monitor, whose greatest safety risks would lie in deskilling, complacency, vigilance decrements and the inability to intervene successfully in deteriorating circumstances. 

1   Material for this section comes from Woods, D. D., Johannesen, L. J., 
    Cook, R. I., & Sarter, N. B. (1994). Behind human error: Cognitive 
    systems, computers and hindsight.  Dayton, OH: CSERIAC, and Dekker, S. 
    W. A., & Hollnagel, E. (Eds.) (1999).  Coping with Computers in the 
    Cockpit. Aldershot, UK: Ashgate.    

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.111  (pdf page: 109/154)
People generally interpret cues about the world on the basis of what they have told their automated systems to do, rather than on the basis of what their automated systems are actually doing. In fact, people do not act on the basis of reality, they act on the basis of their perception of reality. Once they have programmed their ship to steer to Boston in NAV mode, they may interpret cues about the world as if the ship is doing just that. Evidence about a mismatch has to be very compelling for people to break out of the misconstruction of mindset. They have no expectation of a mismatch (the system has behaved reliably in the past), and such feedback as there is (a tiny mode annunciation) is not compelling when viewed from inside the situation. 

p.114  (pdf page: 112/154)
     The pattern is typical because people in dynamic worlds always face a trade-off between changing their assessments and actions with every little change (or possible indication of change) in the world, versus providing some stability in interpretation to better manage and oversee an unfolding situation; creating a framework in which to place newly incoming information. There are errors of judgment on both ends. At one end, people may revise their assessments with every little cue; at the other, people can get fixated: they do not revise their assessment in the face of cues that (in hindsight) suggested it could be good to do so. 

    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.116  (pdf page: 114/154)
 •  Find out what organizational history or pressures exist behind
    these routine departures from the routine; what other goals help
    shape the new norms for what is acceptable risk and behavior. 
 •  Understand that the rewards of departures from the routine are 
    probably immediate and tangible:  happy customers, happy bosses, 
    money made, and so forth.  The potential risks (how much did
    people borrow from safety to achieve those goals?) are unclear, 
    unquantifiable or even unknown. 
 •  Realize that continued absence of adverse consequences may 
    confirm people in their beliefs (in their eyes justified!) that their
    behavior was safe, while also achieving other important system 
    goals. 

p.116  (pdf page: 114/154)
Borrowing from safety

With rewards constant and tangible, departures from the routine may become routine across an entire operation or organization. 

   ****************************************
   *                                      *
   *   DEVIATIONS FROM THE NORM CAN       *  
   *   THEMSELVES BECOME THE NORM         *
   *                                      *
   ****************************************

Without realizing it, people start to borrow from safety, and achieve other system goals because of it──production, economics, customer service, political satisfaction. Behavior shifts over time because other parts of the system send messages, in subtle ways or not, about the importance of these goals.  In fact, organizations reward and punish operational people in daily trade-offs (“We are an ON-TIME operation!”), focusing them on goals other than safety. The lack of adverse consequences with each trade-off that bends to goals other than safety strengthens people's tacit belief that it is safe to borrow from safety. 


    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

Sidney Dekker, The field guide to human error investigations, 2002 

p.123  (pdf page: 121/154)
High reliability organizations do not try to constantly close the gap between procedures and practice by exhorting people to stick to the rules. Instead, they continually invest in their understanding of the reasons beneath the gap.  This is where they try to learn──learn about ineffective guidance; learn about novel, adaptive strategies and where they do and do not work.     

p.134  (pdf page: 131/154)
In this sense your recommendations are a prediction, a hypothesis. You propose to modify something, and you implicitly predict it will have a certain effect on human behavior.  The strength of your prediction, of course, hinges on the credibility of the connection you have shown earlier in your investigation: between the observed human errors and critical features of tasks, tools and environment.  With this prediction in hand, you challenge those responsible for implementing your recommendations to go along in your experiment──to see if, over time, the proposed changes indeed have the desired effect on human performance. 

p.134  (pdf page: 131/154)
 •  the ease with which your recommendation can be implemented; 
 •  the effectiveness of your recommended change. 

The ease of implementation and the effectiveness of an implemented recommendation generally work in opposite directions. In other words: the easier the recommendation can be sold and implemented, the less effective it will be (see Figure 11.1). 

p.135  (pdf page: 132/154)
     But after implementation, the potential for the same kinds of error is left in the organization or operation. The error is almost guaranteed to repeat itself in some shape or form, through someone else who finds him or herself in a similar situation.  Low-end recommendations really deal with symptoms, not with causes. After their implementation, the system as a whole has not become much wiser or better. 

p.140  (pdf page: 137/154)
A really good investigation does not necessarily lead to the implementation of really good countermeasures.  In fact, the opposite may be true if you look at figure 11.1.  Really good investigations may reveal systemic shortcomings that necessitate fundamental interventions which are too expensive or sensitive to be accepted. 

p.141  (pdf page: 138/154)
Many organizations──even those with mature safety departments and high investments in investigations──lack a coherent strategy on continuous improvement. Resources for quality control and operational safety are directed primarily at finding out what went wrong in the past, rather than assuring that it will go right in the future. The focus is on diagnosis, not change. 

p.141  (pdf page: 138/154)
The issue, in the end, is one of sponsoring and maximizing organizational learning; organizational change, which is what Chapter 12 is about. 

p.142  (pdf page: 139/154)
12.  Learning from Failure 

p.142  (pdf page: 139/154)
Learning is about modifying an organization's basic assumptions and beliefs. It is about identifying, acknowledging and influencing the real sources of operational vulnerability.  This can actually be done even before real failures occur, and the remainder of this chapter is about the opportunities and difficulties of organizational learning──before as well as after failures. 

p.147  (pdf page: 143/154)
     What seems to characterize high reliability organizations (ones that invest heavily in learning from failure) more than anything is the ability to identify commonalities across incidents. Instead of departments distancing themselves from problems that occur at other times or places and focusing on the differences and unique features (real or imagined), they seek similarities that contain lessons for all to learn. 

p.150  (pdf page: 146/154)
Fear of prosecution stifles the flow of information about such conditions. And information is the prime asset that makes a safety culture work. A flow of information earlier could in fact have told the bad news. It could have revealed these features of people's tasks and tools; these long-standing vulnerabilities that form the stuff that accidents are made of. It would have shown how human error is inextricably connected to how the work is done, with what resources, and under what circumstances and pressures. 

p.155  (pdf page: 151/154)
 •  Learning is about changing these beliefs, and changing the system 
    accordingly. 
 •  Learning is about seeing the failure as a part of the system. 
 •  Learning is about countermeasures that remove error-producing 
    conditions so there won't be a next time. 
 •  Learning is about increasing that flow [of safety-related information].
 •  Learning is about continuity, about continuous improvement
    that comes from firmly integrating the terrible event in what 
    the system knows about itself. 


    source:  The field guide to human error investigations, by Sidney Dekker,  
             Cranfield university press 
    filename:  DekkersFieldGuide.pdf 

   (Sidney Dekker, The field guide to human error investigations, 2002, )
  <------------------------------------------------------------------------>     

