
HOW GOOD PILOTS MAKE BAD DECISIONS: A MODEL FOR UNDERSTANDING AND TEACHING FAILURE MANAGEMENT TO PILOTS

Steve Swauger
Chandler, Arizona

12th International Symposium on Aviation Psychology
April 14-17, 2003, Dayton, OH

Experienced airline pilots operate in an environment effectively free of failure. Over time, highly successful performance leads pilots to develop a very high level of trust in their judgment. In those very few situations where they encounter failure, many Good Pilots are unable to recognize their judgment error and incorrectly apply familiar and common decision paths to uncommon situations. These Good Pilots fall into a Recognition Trap and commit errors with highly undesirable consequences. Masterful pilots avoid Recognition Trap errors by using fact-based decision path validation. Directed training measures can increase awareness of Recognition Trap errors and promote effective decision validation.

Discussion

This paper focuses on occurrences of poor decision-making by experienced, well-intentioned, and compliant air carrier pilots. A survey of airline accidents and incidents indicates that they share many characteristics. Mishap pilots were recognized as Good Pilots with little or no history of performance problems; they were sufficiently proficient, experienced, knowledgeable, and rested; they chose every decision they made, even the really bad ones; and they had performed the failed maneuver successfully many times in the past. Airline Management is challenged with addressing this problem since there is little apparent evidence that these Good Pilots were deficient in any aviation skill category.

Mishap and Accident Analysis

Many airline incidents share the following characteristics. In general, they appear to follow Klein’s model for Recognition Primed Decision-Making (RPD) (Klein, 1998).

Mishaps followed the Recognition Primed Decision (RPD) Path. Mishap pilots selected their decision path following a quick assessment of their situation. The decision path is a series of selection choices, each one dependent on the previously selected choices, and all directed toward the desired goal. Pilots appeared to use mental simulation to select familiar choices along a familiar decision path toward a familiar desired goal. Once they chose their path and goal, they rarely discarded it for an alternative path or goal.

Corporate and time pressures were reported as insignificant. While time or flight-schedule pressures were present, pilots reported that these factors did not influence their decisions. Self-imposed professional standards appeared to have a greater influence. For example, pilots generally viewed aborting an approach as a personal failure, not as an operational problem for the airline.

They accessed a single source of information. Often, these pilots selected their decision path based on a single information source. For example, they made a visual assessment of their position and determined that they could safely maneuver from that position to the desired goal using a familiar series of actions and maneuvers. Once decided, they did not access other sources of information to validate their path. This supports Klein’s findings that experts focus on a strong sense of typicality and miss subtle signs of trouble (Klein, 1998, p. 280).

The initial decision was of low consequence. These pilots started with a familiar and common decision that was flawed. While this was the first error in the chain, it was remarkably inconsequential. For landing accidents, the initial choice was to land on a particular runway. Once problems with this decision path began to emerge, these pilots had plenty of time to select a different runway or abort the approach. Instead, they stuck doggedly with the first path. This initial flawed decision had the effect of solidifying the decision path and their progression toward the original goal. Additionally, it created a false legitimacy for the chosen path. This made it very difficult to change goals and start over.

The Goal was viewed as familiar and successful. Accepting this, the pilots were strongly motivated to continue along the chosen decision path. When they detected problems with their plan, they applied appropriate corrections. They deemed the effects of these problems as minimal (de minimus error) (Klein, 1998, p. 65). The problem was always considered “safe” and “manageable”.

The visible goal drew the pilots in. The goal was often clearly visible and seemingly achievable, even if it was of marginal quality. This encouraged pilots to discard safer options, most notably the option to abort the maneuver and escape the situation. For landing accidents, pilots could see the runway and discarded safer options such as circling to a more favorable runway.

There was a perception of complexity and acceleration of events. Klein observed that a failing plan becomes increasingly complex as the person continues to patch the flaws to hold it together (Klein, 1998, p. 69). Pilots perceived an acceleration of events. They tunneled their focus increasingly on the goal. For landing accidents, this point of focus was often the runway landing zone.

Problems were always addressed, however ineffectively. Mishap pilots always detected indications of failure. These indications, however, were addressed by common corrective measures and then dismissed. For example, if they determined that their landing profile was too high or too fast, they reduced thrust and increased drag to regain the desired profile path. The problem reemerged, only to be addressed with further actions to regain the original path and desired goal. Pilots always felt that they were doing all they could to address the problem. This seemed to justify continuing the path.

Pilots attempted to simplify the problem. As the path began to fail, pilots experienced frustration and stress. To reduce the stress, they tried to simplify their plan (Klein, 1998, p. 152). They discarded or minimized conflicting cues and focused on a single parameter to achieve the desired goal.

Achieving the goal mitigated all prior errors. Often, achieving the desired goal effectively erased the previous flaws and errors. For example, a successful landing eliminated the effects of the poor approach. Pilots appeared to use this as a justification for continuing an out-of-limits approach.

The Problem

These Good Pilots appeared to go to great lengths to drive square pegs into round holes. They selected unachievable goals and pursued them to the end. Why would experienced, capable pilots choose an unsuccessful decision path over safer alternatives? Why would they continue to discount conflicting facts? Why wouldn’t they abort the failing path? Klein’s model of Recognition Primed Decision-Making (RPD) describes the decision-making behavior of experts (Klein, 1998). While generally behaving like experts, many mishap Good Pilots behaved like novices. What would cause a 10,000-hour airline Captain to make novice-like errors? One explanation is that, unlike many of Klein’s focus groups, air carrier pilots generally operate in an environment virtually free of novelty and failure. A seasoned airline pilot effectively performs the same task over and over again with great success and reliability. In fact, many airline Captains operate for years without ever flying a failed approach.

The characteristics discussed here seem to indicate a flaw in these Good Pilots’ abilities to validate and discard failing decision paths. This flaw leads to behaviors that indicate cognitive dissonance. In short, a decision-making flaw leads to psychological conflict, which inhibits selection of logical and safe alternatives.

Expert-like Pilot Judgment

Many airline Captains have not experienced a significant error in their judgment. While the particular conditions of each flight segment change, the flight path, the choices, and the mission goals remain virtually constant. The experienced airline pilot develops a very high confidence in his or her ability to adapt to the changing conditions and achieve a consistent, successful goal. Over time, this becomes the standard that measures the quality of an airline pilot’s judgment. Consistent flight accomplishment is the task that airline pilots are experts at completing.

Novice-like Pilot Judgment

On the other hand, because they are so rarely exposed to failure, many very experienced air carrier pilots are actually novices at detecting and rejecting failed decision paths. Perhaps they possessed these skills earlier in their careers, but over the years, they lost them. Good judgment becomes a pilot’s ability to adapt to changing conditions and to achieve a consistent successful goal, not his or her ability to detect and respond to a failing decision path. It is precisely this inexperience, combined with confidence in their judgment, that triggers the Recognition Trap.

The Recognition Trap

Pilot decision-making follows a simple model. Following RPD logic, pilots view the problem in a stereotyped way (Klein, 1998, p. 280). More specifically, they quickly determine whether the given situation is common or familiar. If it is, then the pilot applies a mental simulation to select a decision path (Klein, 1998, Chapter 5). Conversely, if they determine that the situation is uncommon or unfamiliar, then they follow an alternate decision path to an alternate safe conclusion. In the vast majority of line operations, pilots select and execute common and familiar paths to desired goals. Rarely do they encounter uncommon situations that require uncommon goals. Errors emerge when the pilot’s assessment of commonness or familiarity is wrong. Consider the following matrix.

The matrix crosses what we think the situation is (Common or Uncommon) against what the situation actually is (Common or Uncommon):

Think Common, Actually Common: Recognized Common Situation - Success (land when you should).
Think Uncommon, Actually Common: Conservative Error - Failure (go-around when you shouldn't).
Think Common, Actually Uncommon: Recognition Trap Error - Failure (land when you shouldn't).
Think Uncommon, Actually Uncommon: Recognized Uncommon Situation - Success (go-around when you should).

Figure 1: Four Decision Paths and Outcomes

The two successful quadrants (Recognized Common and Recognized Uncommon) clearly model successful RPD. The error quadrants (Recognition Trap Error and Conservative Error) model errors committed under RPD. The error quadrants differ greatly in their level of consequence. The Conservative Error fails to achieve the desired goal, but has very low consequence. Since Conservative Errors are deemed acceptable in daily airline operations, they will not be addressed further. Instead, we will focus our attention on the Recognition Trap Error. The Recognition Trap is not acceptable because it leads to highly consequential damage or injury. It is in the Recognition Trap quadrant that aircraft accidents, mishaps, and incidents fall. Again, the vast majority of all situations fall in the Recognized Common quadrant. The remaining quadrants reflect only a tiny fraction of the remaining situations.
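
For training materials, the four quadrants can be expressed as a simple lookup. The following Python fragment is an illustrative sketch only: the quadrant names and outcomes are taken from Figure 1, while the function and variable names are invented here.

```python
# Hypothetical encoding of Figure 1: the outcome depends on how the situation
# is perceived versus what it actually is. Labels follow the matrix above.
OUTCOMES = {
    # (perceived, actual): (quadrant, result)
    ("common", "common"): ("Recognized Common Situation", "success: land when you should"),
    ("uncommon", "uncommon"): ("Recognized Uncommon Situation", "success: go-around when you should"),
    ("common", "uncommon"): ("Recognition Trap Error", "failure: land when you shouldn't"),
    ("uncommon", "common"): ("Conservative Error", "low-consequence failure: go-around when you shouldn't"),
}

def classify(perceived, actual):
    """Return the quadrant name and outcome for a perceived/actual pairing."""
    return OUTCOMES[(perceived, actual)]

# The mishap case described in the text: the pilot thinks the situation is
# common, but it is actually uncommon.
print(classify("common", "uncommon"))
```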

Procedure and Policy Design

Pilots do not choose to make these poor decision errors. Instead, Good Pilots err in their assessment of the parameters, select familiar decisions for unfamiliar situations, and fall into the Recognition Trap. For a clearer picture, consider the following graph.

Figure 2: Regions of flight operations. (The graph spans a continuum from More Risk/Less Time to Less Risk/More Time, with the Failure Line and, to its right, the Desired Line marked on it.)

Both airline management and individual pilots try to balance their operations and performance between failure and efficiency. Often, the most efficient operation falls just to the right of the Failure Line. Since we cannot accept such a risky position, we create a safety margin. Then, we design our policies and procedures at the Desired Line. Our goal is to achieve failure-free performance that maintains a safety margin without the waste of excess margin.

Ideal Pilot Performance

When we superimpose actual pilot performance over the Desired Line, we obtain the following bell-shaped distribution.

Figure 3: Ideal Pilot performance. (Frequency of occurrence plotted over the More Risk/Less Time to Less Risk/More Time continuum: a bell-shaped distribution centered on the Desired Line, with no operations crossing the Failure Line.)

Despite their best efforts, some paths fall within the safety margin and others within excess margin. In all cases, pilots will avoid a situation that strays over the Failure Line. No pilot intentionally chooses failure. In fact, the Recognition Trap Error is absent from this ideal environment.

Real-World Pilot Performance

In the Real World, the pilot does not know the actual location of the Failure Line. Indeed, pilots who rarely experience failure exhibit a novice-like inability to recognize that they have crossed into this extreme territory. This is modeled by a shift of the Failure Line toward the right. Mishap pilots fail to detect this shift.

Figure 4: Real-World shift of the Failure Line. (The same frequency-of-occurrence distribution, but with the Failure Line shifted to the right toward the Desired Line, so that a small shaded region of operations now lies beyond it.)

This shaded region represents the small proportion of situations where the pilot selects and follows a familiar decision path toward a familiar goal, but the Real World delivers an uncommon situation headed toward a failing outcome. This is the region of the Recognition Trap Error.

Judgment and Verification

Assume that our Good Pilots have encountered a rare situation within the shaded Recognition Trap region. Ideally, these pilots should recognize the uncommon failure, abort the plan, and select a safer path. Instead, these Good Pilots typically continue with the familiar path. Remember, these pilots THINK they are performing a familiar and successful path within the safety margin. Accordingly, they apply normal path corrections. As Klein explains, flaws in the mental simulation will lead to an increasing level of complexity (Klein, 1998, p. 69). But this complexity is not unique to failing scenarios. Operations within the safety margin exhibit this same complexity. The difference is that corrective measures applied within the safety margin solve the problem, while corrective measures applied within the Failure Zone mitigate, but never solve, the problem. For example, if the approach path is too aggressive, the aircraft will drift above the desired flight path or accelerate beyond the desired speed. A pilot would apply the same corrective steps (reducing power and increasing drag) for a within-the-safety-margin, successful approach as he or she would for a failing, unsuccessful approach. The only difference is that flight corrections employed for a failing scenario will not be sufficient to address the energy problem. The critical difference is in how the Good Pilot validates the effectiveness of his or her decision path. Consider the following flow chart.

The flow chart shows the Good Pilot's flawed decision-making process: start with a naturalistic assessment/perception of the situation; apply experience and expectations and select a familiar goal and decision path (RPD); validate the plan through an assessment of facts/parameters; and ask, "Is the plan working? Do corrections work?" If yes, the assessment is accurate and the pilot continues with the selected decision path. If no, the assessment is wrong, and the Good Pilot asks, "What's wrong with my judgment?" This leads to cognitive dissonance and rationalization.

Figure 5: The Good Pilot’s Flawed Decision-Making Process

The Successful Approach

We will use the common example of an approach-to-landing. The Good Pilot assesses the situation (a visual approach to landing), determines that it is familiar (he or she has seen it hundreds of times), and selects a decision path (a common sequence of configuration and power adjustments) to achieve a common goal (a normal landing). Since our pilots have performed the expected task many times, they are experts at adapting to the particular features of the given situation. Their problem begins with the validation step. Our Good Pilots assess parameters to determine if their plan is working. They determine that it is not working (the aircraft is getting high and fast on the flight path). So, they apply corrections (reduce power and increase drag) and reassess. If the corrections work, then they validate the desired decision path and continue to a safe landing. This is the case over many hundreds of successful iterations. The flaw is that our Good Pilots verify their judgment of the decision path, not the facts. As long as they are operating outside of the Failure Zone, this process works. The flaw is inconsequential. It is only inside the Failure Zone that this flaw manifests itself.
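
As an illustration of that flawed loop, consider the following sketch. It is not taken from any operator's procedures; the thresholds, the 0.5 correction factor, and the hidden-energy term are invented purely to show how a loop that only asks "did the correction seem to help?" converges inside the safety margin but never converges inside the Failure Zone.

```python
# Hypothetical sketch of the Good Pilot's flawed validation loop (Figure 5).
# The only test applied is "did the familiar correction seem to work?";
# the underlying facts (wind, weight, excess energy) are never re-examined.

def apply_familiar_correction(state):
    state["corrections_left"] -= 1
    state["deviation"] *= 0.5                             # the correction helps a little...
    state["deviation"] += state["hidden_excess_energy"]   # ...but the unexamined problem persists

def fly_approach(state, tolerance=1.0):
    while state["deviation"] > tolerance:
        if state["corrections_left"] == 0:
            # Out of corrective steps: accept the remaining error and press on.
            return "continue the out-of-tolerance approach (Recognition Trap)"
        apply_familiar_correction(state)
        # Flaw: the loop asks "am I still correcting?", never "do the facts support this path?"
    return "stabilized: continue to landing"

# Within the safety margin the hidden term is ~0 and the loop converges;
# inside the Failure Zone it never does.
print(fly_approach({"deviation": 3.0, "hidden_excess_energy": 0.0, "corrections_left": 4}))
print(fly_approach({"deviation": 3.0, "hidden_excess_energy": 1.5, "corrections_left": 4}))
```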


The Failed Approach

This one time, however, conditions are different. This time, the Good Pilots miss indications that they are in a failing situation (heavy weight, hot day, tailwind, etc.). They continue to believe that this approach is safe and manageable. The corrections that have worked hundreds of times in the past now fail to solve their problem. Sometimes, they will take extraordinary steps to make the approach look normal and familiar, even though the parameters are well out of tolerance. Instead of discarding the plan, they begin questioning their judgment. It is no longer an exercise of completing a familiar task. It has become personal. These Good Pilots view the situation as an impeachment of their aviation skills and professional reputation. Instead of ACCEPTING that the plan is not working, they question WHY it is not working and WHY they judged it so poorly. Their judgment and verification always worked in the past, so it shouldn’t be wrong now.

The key is that the verification process that works well more than 99% of the time hides the mechanism that springs the Recognition Trap in these very rare situations.

If they had enough time, they might eventually detect the fatal flaw and abort the approach. But events start moving too quickly. There is no pause button in the cockpit. As stress builds, they process information poorly (Klein, 1998, p. 275). When they run out of corrective steps (full flaps, gear, and idle power), they accept the remaining error and press forward. They feel confusion, embarrassment, and cognitive dissonance over their perceived personal failure (“How did I go wrong?”). They attempt to simplify the increasingly complex approach (ignore the excess speed and concentrate on the landing zone). Sometimes they ignore or minimize the main problem and substitute an unrelated problem (waiting for the previous landing aircraft to clear the runway). Klein also reported this effect (Klein, 1998, p. 69). In the end, they just want to terminate the failing scenario by moving on to the next phase (trade a bad approach problem for a tough runway stopping problem). Again, they err because this option sacrifices safe escape alternatives (go around and fly a new approach). In the heat of the moment, these Good Pilots rarely consider aborting the approach. Aborting an approach is perceived as a further impeachment of their historically flawless judgment. This behavior reflects rationalization through cognitive dissonance.

Masterful Pilot’s Verification Process

Masterful Pilots do not fall into the Recognition Trap. They detect the failure and abort the flawed path. Consider the following flow chart.

The flow chart begins the same way: start with a naturalistic assessment/perception of the situation; apply experience and expectations and select a familiar goal and decision path (RPD); validate the plan through an assessment of facts/parameters; and ask, "Is the plan working? Do corrections work?" If yes, the assessment is accurate and the pilot continues with the selected decision path. If no, the assessment is wrong, and the Masterful Pilot asks, "What am I missing?" This leads to fact-based evaluation, and the path is adjusted or discarded.

Figure 6: The Masterful Pilot’s Decision-Making Process

Masterful Pilots are just as confident in their good judgment as Good Pilots are. They just apply their judgment differently. This becomes evident when they verify their path. Instead of questioning WHY they judged the situation incorrectly, they ACCEPT that they selected a failing path because of incomplete facts. They accurately conclude that missing information led to their flawed decision path. So, the same energy that the Good Pilots expend on self-recrimination and psychological conflict, the Masterful Pilots expend on fact-based exploration and situation evaluation. Klein calls this the search for counterfactuals (Klein, 1998, p. 154). They search for, and discover, information that explains the failing approach. For example, they discover a strong tailwind and conclude that they need more distance to effectively deplete the excess energy. In this way, they detect the critical fact that turns the “common” approach-to-landing into an “uncommon” go-around from an unsafe approach. Once the situation is accurately categorized as uncommon, they make safe, expert choices. Their good judgment is validated.
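
The fact-based step can be sketched as a search for counterfactuals against known limits. The fragment below is purely illustrative: the fact names and limit values are assumptions invented for this example, not figures from the paper or from any operator.

```python
# Hypothetical sketch of the Masterful Pilot's fact-based check (Figure 6):
# instead of asking "what's wrong with my judgment?", ask "what fact am I missing?"

# Invented limits for illustration only.
LIMITS = {
    "tailwind_kts": 10,
    "airspeed_excess_kts": 10,
    "sink_rate_fpm": 1000,
}

def find_counterfactuals(facts):
    """Return the facts that contradict the 'common approach' assumption."""
    return [name for name, limit in LIMITS.items() if facts.get(name, 0) > limit]

def revalidate_path(facts):
    counterfactuals = find_counterfactuals(facts)
    if counterfactuals:
        # The situation is actually uncommon: discard the familiar path.
        return "go around: uncommon situation explained by " + ", ".join(counterfactuals)
    return "facts support the path: continue"

print(revalidate_path({"tailwind_kts": 18, "airspeed_excess_kts": 12}))  # go around
print(revalidate_path({"tailwind_kts": 4, "airspeed_excess_kts": 3}))    # continue
```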

Applications for Training

Judgment is one of the fuzziest subjects in aviation. We demand that every pilot have it. Prospective Captains are not approved for command until they can demonstrate it. Still, most organizations do not effectively train, describe, or model good judgment. Following are some specific suggestions for training.

Pilots always exercise good judgment. The first goal is to preserve and promote the precept that all experienced pilots possess good judgment. Any attempt to convince pilots that their judgment is fallible is doomed to fail. Every pilot knows he or she has good judgment. We will never convince them otherwise. It will always be the “other guy” who had bad judgment. Our efforts are better spent focusing their good judgment toward an effective decision-making process.

Describe how to apply good judgment. Make it very clear that good judgment is not solely the ability to assess or recognize the familiarity of a situation. Good judgment is the ability to assemble the facts and parameters around a situation and select the safest and most efficient path toward the desired goal. Demonstrate examples where pilots were initially sidetracked by an incomplete knowledge of circumstances and made a poor choice, but detected the error, selected a new choice, and avoided the Recognition Trap. By training the Masterful decision-making process, pilots can practice a verification skill that works 100% of the time, not just more than 99% of the time.

Teach Recognition Trap awareness. Most pilots are unaware of the dire consequences of Recognition Trap Errors. Pilots are rarely exposed to highly consequential errors. Increase their awareness so they can react appropriately to situations that compromise their safety margin. Pilots can consciously shift their desired plan away from the Failure Line. This may result in more Conservative Errors, but this is better than having more accidents and incidents.

Promote decision path assessment using information from multiple sources. An incorrect assessment is almost always derived from a single information source. If the single source is a visual assessment, the pilot is basing everything on this judgment of familiarity. The better course is to start with a judgment source (visual assessment) and immediately validate it against an automated source (the flight computer's optimum profile path). This way, pilots learn to constantly look for verification that their path is valid.
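
A sketch of this cross-check is given below. The 3-degree profile arithmetic (roughly 318 ft per nautical mile) is standard, but the function names, tolerances, and the simplified "flight computer" interface are assumptions made for illustration.

```python
# Hypothetical cross-check of two information sources: the pilot's visual
# judgment is accepted only if an independent, computed source agrees.
# Tolerances and the simplified geometry are illustrative assumptions.

def on_profile_visually(estimated_glidepath_deg):
    # The pilot's eyeball estimate of the glide path.
    return 2.5 <= estimated_glidepath_deg <= 3.5

def on_profile_computed(altitude_ft, distance_nm):
    # A 3-degree path descends roughly 318 ft per nautical mile.
    ideal_altitude_ft = 318 * distance_nm
    return abs(altitude_ft - ideal_altitude_ft) <= 200

def path_validated(estimated_glidepath_deg, altitude_ft, distance_nm):
    # Both the judgment source and the automated source must agree.
    return (on_profile_visually(estimated_glidepath_deg)
            and on_profile_computed(altitude_ft, distance_nm))

print(path_validated(3.0, altitude_ft=1600, distance_nm=5))  # sources agree: True
print(path_validated(3.0, altitude_ft=2400, distance_nm=5))  # visual looks fine, computer disagrees: False
```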

Teach detachment between choices and judgment. Many mishap pilots waste invaluable time questioning their errors of judgment. Instead, they should spend that time assessing the core question: is the chosen path working? If it isn’t working, abort the path and start over. Pilots must preserve a sense of detachment between their judgment and their choices. Klein calls this decentering (Klein, 1998, p. 156).

Model Fact-Based validation of the chosen path. Judgment follows facts. Just because the approach SHOULD work out doesn’t mean it IS working out. If the facts support abandoning the chosen path, then accept this and abort the path.

Complexity is a sign of a failing path. Educate pilots that an increasing level of complexity in a chosen path is a red flag of failure. All pilots are sensitive to complexity. If a situation that is normally simple is becoming complex, suspect the Recognition Trap and verify with facts. If necessary, abort the path.
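
One hedged way to make this concrete in training is a simple complexity monitor; the class below and its threshold of two corrections are invented for illustration, the point being only that more patches than a routine approach normally needs should trigger fact-based verification.

```python
# Hypothetical complexity monitor: a normally simple task that keeps needing
# corrections is a red flag for the Recognition Trap. The threshold is invented.

class ComplexityMonitor:
    def __init__(self, normal_correction_count=2):
        self.normal = normal_correction_count
        self.corrections = 0

    def record_correction(self):
        self.corrections += 1

    def red_flag(self):
        # More patches than a routine approach normally needs.
        return self.corrections > self.normal

monitor = ComplexityMonitor()
for _ in range(4):           # thrust, drag, more drag, slip: still not stabilizing
    monitor.record_correction()
print(monitor.red_flag())    # True: verify with facts and consider aborting the path
```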

Build CRM protections against single-pilot failure. Often, the mishap pilot cannot step back and recognize his or her error. It is incumbent on the second and/or third pilots to recognize the signs of failure and intercede. If a situation is failing, the monitoring pilots should assume that the flying pilot has missed critical information. Verbalize the critical parameter and take steps to break the error chain.

Provide absolute limits to preserve the safety margin. There is always a chance that pilots will err in predicting the location of the Failure Line. When they detect error, they might be tempted to assess whether the task is “manageable” or “safe”. These are not acceptable criteria for common maneuvers such as approach and landing. To preserve the safety margin, organizations should provide clear limits on aircraft parameters. Operations beyond these prescribed parameters are not open to individual pilot judgment. Pilots must clearly know that when they exceed operational parameters, they must abort the maneuver.
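
Such limits lend themselves to a simple gate check, sketched below. The parameter names and limit values are invented for illustration and are not any carrier's actual stabilized-approach criteria; the point is that an exceedance returns a mandatory go-around rather than a judgment call.

```python
# Hypothetical hard-limit gate for an approach: exceeding any published limit
# mandates a go-around, removing the "manageable"/"safe" judgment call.
# The numbers below are invented for illustration, not any carrier's SOP.

STABILIZED_APPROACH_LIMITS = {
    "airspeed_excess_kts": 10,      # above target approach speed
    "sink_rate_fpm": 1000,
    "glidepath_deviation_dots": 1.0,
    "bank_angle_deg": 15,
}

def gate_check(parameters):
    exceedances = [name for name, limit in STABILIZED_APPROACH_LIMITS.items()
                   if abs(parameters.get(name, 0)) > limit]
    if exceedances:
        return "mandatory go-around: " + ", ".join(exceedances) + " beyond limits"
    return "stabilized: continue"

print(gate_check({"airspeed_excess_kts": 17, "sink_rate_fpm": 1150}))
print(gate_check({"airspeed_excess_kts": 4, "sink_rate_fpm": 700}))
```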

Reference

Klein, G. (1998). Sources of Power: How People Make Decisions. Cambridge, MA: MIT Press.