Organisational Accidents Reason


Organizational Accidents in Aviation
Jim Reason, University of Manchester

Two kinds of accidents

Individual accidents:
- Frequent
- Limited consequences
- Few or no defences
- Limited causes
- Slips, trips and lapses
- Short history

Organizational accidents:
- Rare
- Widespread consequences
- Many defences
- Multiple causes
- Product of new technology
- Long history

Two ways of looking at accidents
The person approach: focuses on the errors and violations of individuals; remedial efforts are directed at people at the sharp end.
The system approach: traces the causal factors back into the system as a whole; remedial efforts are directed at situations and organisations.

The person approach
Since human actions are implicated in 80-90 per cent of accidents, and human actions are perceived as under voluntary control, and behaviour is viewed as the least constrained factor in the prior events, then accidents must be due to carelessness, negligence, incompetence, recklessness, etc.

Although . . .
Blaming individuals is emotionally satisfying and legally convenient, but it gets us nowhere. Fallibility is part of the human condition. You can't change the human condition, but you can change the conditions in which humans work.

The system approach
Accidents arise from a (usually rare) linked sequence of failures in the many defences, safeguards, barriers and controls established to protect against known hazards. The important questions are: How and why did the defences fail? What can we do to reduce the chances of a recurrence?

Hazards, losses & defences
[Diagram: hard and soft defences stand between hazards and losses.]

The Swiss cheese model of accident causation
[Diagram: successive layers of defences, barriers, and safeguards stand between hazards and losses. Some holes in the layers are due to active failures; other holes are due to latent conditions (resident pathogens). A loss occurs when the holes line up.]
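The Swiss cheese idea can be made concrete with a small Monte Carlo sketch. This is not from the slides; it is a hypothetical illustration in which each layer independently has a "hole" with some probability, and a loss occurs only when every layer happens to be holed at once. The function name and the probabilities are invented for the example.

```python
import random

def hazard_penetrates(hole_probs, trials=100_000, seed=42):
    """Estimate the chance that a hazard passes through every
    defensive layer, where hole_probs[i] is the probability that
    layer i has a hole (an active failure or latent condition)."""
    rng = random.Random(seed)
    losses = sum(
        # A loss on this trial only if ALL layers are holed at once.
        all(rng.random() < p for p in hole_probs)
        for _ in range(trials)
    )
    return losses / trials

# Four layers, each fairly reliable on its own (illustrative numbers).
layers = [0.1, 0.05, 0.2, 0.1]
print(hazard_penetrates(layers))  # close to the analytic product 0.0001
```

The point the simulation makes is the model's: individually leaky defences still yield very rare losses, because the holes must align, which is why organizational accidents are rare but multi-causal.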

How and why defences fail
[Diagram: latent condition pathways run from organisational factors through local workplace factors to unsafe acts, which breach the defences between hazards and losses. Investigation traces the causes back in the reverse direction.]

Types of unsafe act

Errors:
- Slips, lapses and fumbles
- Mistakes (rule-based mistakes, knowledge-based mistakes)

Violations:
- Routine violations
- Violations for kicks
- Situational violations

Two influential accidents
Mount Erebus, 1979: one accident, two inquiries: the Chippindale Report (pilot error) and the Mahon Report ("an orchestrated litany of lies").
Dryden, 1989: the Moshansky Report, an indictment of the entire Canadian air transport system.

Three aviation applications of the Swiss cheese model Bureau of Air Safety Investigation (BASI), Canberra. International Civil Aviation Organization (ICAO): Amendment to Annex 13, the guide to accident investigators. As applied by an airline to a recent air accident in North America.

BASI
In the early 1990s, BASI resolved to apply the model to all accident investigations. In June 1993, a small commuter aircraft crashed at Young, NSW; all aboard died. The BASI report focused on the deficiencies of the regulator, the Oz CAA. Following a similar accident in 1994, the Oz CAA was disbanded and replaced by CASA.

ICAO Accident Investigation Divisional Meeting (2/92)
"Traditionally, investigations have been limited to the persons directly involved. Current accident prevention views supported the notion that additional preventive measures could be derived from investigations if management policies and organisational factors were also investigated." (excerpt from minutes) Implemented in the 8th Edition of Annex 13 (1994).

The Harrytown accident
A modern 50-passenger glass-cockpit jet aircraft hit the runway wing down on landing, slid off, and ended up in trees 2,000 ft away. Nine people were injured. The F/O was flying. Temperature -8°C; visibility 1/8 mile; fog and snow, no wind; 23:47 hrs. Both pilots were seriously surprised.

HARRYTOWN: SPECULATIVE EVENT TREE
[Diagram: + = AND gates (each factor necessary, none sufficient); lines = causal pathways. The wing hits the ground when a stall combines with the aircraft being too slow and too low. Aircraft handling features feeding the tree: prone to stall wing down? no leading edge flaps? leading edge contamination? nose-up response to power increase? prone to inducing ground shyness; stick pusher problems? Operational factors: poor visibility ("white hole"); inexperienced copilot (jet) assigned the landing (SOPs?); inappropriate aircraft attitude at 100 ft and below; inadequate monitoring by Captain (SOPs? sufficient?); delayed go-around order.]

Implications of event tree - 1
Two main clusters of contributing factors: those relating to the aircraft, and those relating to handling and flight operations.
Two main pathways for back-tracking: to the manufacturer (not our immediate concern), and to flight operations and the system as a whole (the priority pathway).

Implications of event tree - 2
The Harrytown accident involved the combination of several contributing factors that were very hard to anticipate. The local circumstances were such that it took very little in the way of less-than-adequate pilot performance to push the system over the edge. This was an organizational accident.

Pruned event tree: ORGANIZATIONAL ISSUES
[Diagram mapping contributing factors to organisational issues. Aircraft factors (prone to inducing ground shyness; nose-up response to power increase) are tagged TRAINING. Operational factors (poor visibility; inexperienced co-pilot (jet) assigned the landing; problems with aircraft attitude at 100 ft and below; monitoring and cross-checking problems; delayed go-around order) carry the tags TRAINING, SOPs; SOPs, HIRING, etc.; TRAINING, CHECKING; TRAINING, HIRING, SOPs; and TRAINING, SOPs.]

Defences that failed in the Harrytown accident
- Stall protection system
- Airmanship
- Training, checking, SOPs
- Hiring, placement, contracts, exposure to a safe culture

Key issues for review
- Operating procedures
- Training
- Checking
- Hiring and placement of pilots
- Assimilation of new hires into the airline culture

Comments on the culture
A strong culture: embodied in a few widely known and well understood beliefs and values.
A safe culture: values solidity, reliability, accuracy; proud to be dull in the pursuit of quality and safety.
A collective culture: no one person is indispensable; people are interchangeable units.

The moral
There is no point replacing "pilot error" attribution with "management error". All top-level decisions, even sound commercial ones, have a downside for someone, somewhere in the system, at some time. All create resident pathogens. The challenge: to identify and rectify latent conditions before they combine to cause accidents.

Accident/incident questions
What defences failed? How did they fail? Why did they fail?
- Unsafe acts?
- Team factors?
- Workplace factors?
- Technical factors?
- Organizational factors?