
CHAPTER 11

Task Analysis

Jan Maarten Schraagen

Introduction

Analyses of tasks may be undertaken for a wide variety of purposes, including the design of computer systems to support human work, the development of training, the allocation of tasks to humans or machines, or the development of tests to certify job competence. Task analysis is, therefore, primarily an applied activity within such diverse fields as human factors, human–computer interaction, instructional design, team design, and cognitive systems engineering. Among its many applications is the study of the work of expert domain practitioners.

“Task analysis” may be defined as what a person is required to do, in terms of actions and/or cognitive processes, to achieve a system goal (cf. Kirwan & Ainsworth, 1992, p. 1). A more recent definition, which at first sight has the merit of being short and crisp, is offered by Diaper (2004, p. 15): “Task analysis is the study of how work is achieved by tasks.” Both definitions are deceptively simple. They do, however, raise further issues, such as what a “system” is, or a “goal,” or “work,” or “task.” Complicating matters further, notions and assumptions have changed over time and have varied across nations. It is not my intention in this chapter to provide a complete historical overview of the various definitions that have been given for task analysis. The reader is referred to Diaper and Stanton (2004), Hollnagel (2003), Kirwan and Ainsworth (1992), Militello and Hoffman (2006), Nemeth (2004), Schraagen, Chipman, and Shalin (2000), and Shepherd (2001).

It is important, however, in order to grasp the subtle differences in task-analytic approaches that exist, to have some historical background, at least in terms of the broad intellectual streams of thought. Given the focus of this handbook, this historical overview will be slightly biased toward task analysis focused on professional practitioners, or experts. After the historical overview, the reader should be in a better position to grasp the complexities of the seemingly simple definitions provided above. Next, I will focus on some case studies of task analysis with experts. This should give the reader an understanding of how particular methods were applied, why they were applied, and what their strengths and weaknesses were. As the field is evolving constantly, I will end with a discussion of some open avenues for further work.

Historical Overview

Task analysis is an activity that has always been carried out more by applied researchers than by academic researchers. Academic psychology often involves research in which the experimenters create the tasks. Conversely, applied researchers look into their world to investigate the tasks that people perform in their jobs. Indeed, task analysis originated in the work of the very first industrial psychologists, including Wundt’s student Hugo Munsterberg (see Hoffman & Deffenbacher, 1992). For instance, early research conducted by the so-called “psychotechnicians” (Munsterberg, 1914) involved studies of the tasks of railway motormen, and for that research, one of the very first simulators was created.

The applied focus and origins may be because the ultimate goal of task analysis is to improve something – be it selection, training, or organizational design. Given the applied nature of task analysis, one may hypothesize that there is a close connection between the focus of task analysis and current technological, economic, political, and cultural developments. One fairly common characterization of the past 100 years is the following breakdown into three periods (Freeman & Louca, 2001; Perez, 2002):

1. The age of steel, electricity, and heavy engineering. Leading branches of the economy are electrical equipment, heavy engineering, heavy chemicals, and steel products. Railways, ships, and the telephone constitute the transport and communication infrastructure. Machines are manually controlled. This period, during which industrial psychology emerged (e.g., Viteles, 1932), lasted from approximately 1895 to 1940.

2. The age of oil, automobiles, and mass production. Oil and gas allow massive motorization of transport, civil economy, and war. Leading branches of the economy are automobiles, aircraft, refineries, trucks, and tanks. Radio, motorways, airports, and airlines constitute the transport and communication infrastructure. A new mode of control emerged: supervisory control, characterized by monitoring displays that show the status of the machine being controlled. The “upswing” in this period lasted from 1941 until 1973 (Oil Crisis). The “downswing” of this era is still continuing.

3. The age of information and telecommunications. Computers, software, telecommunication equipment, and biotechnology are the leading branches of the economy. The internet has become the major communication infrastructure. Equipment is “cognitively” controlled, in the sense that users need to draw on extensive knowledge of the environment and the equipment. Automation gradually takes on the form of intelligent cooperation. This period started around 1970 with the emergence of “cognitive engineering,” and still continues.

Each of these periods has witnessed its typical task-analysis methods, geared toward the technology that was dominant during that period. In the historical overview that follows, I will use the breakdown into three periods discussed above.

The Age of Steel

Around 1900, Frederick Winslow Taylor observed that many industrial organizations were less profitable than they could be because of a persistent phenomenon that he termed “soldiering,” that is, deliberately working slowly (Taylor, 1911/1998). Workers in those days were not rewarded for working faster. Therefore, there was no reason to do one’s best, as Taylor noted. Workers also developed their own ways of working, largely by observing their fellow workers. This resulted in a large variety of informal, rule-of-thumb-like methods for carrying out their work.

Taylor argued that it was the managers’ task to codify this informal knowledge, select the most efficient method from among the many held by the workers, and train workers in this method. Managers should specify in detail not only what workers should be doing but how their work should be done and the exact time allowed for doing their work. This is why Taylor called his analysis “time study.” Workers following these instructions in detail should be rewarded with 30 to 100 percent wage increases, according to Taylor (1911/1998, p. 17). In this way, Taylor was certain he would eliminate the phenomenon of working slowly. Another approach, pioneered by Frank Gilbreth, was called “motion study” and consisted of studying every movement involved in a task in detail. Gilbreth proposed to eliminate all unnecessary movements and to substitute fast for slow motions.

Taylor’s approach has the modern ring to it of what we now call “knowledge management.” One should recognize, however, that the tasks he and others such as Gilbreth considered consisted primarily of repetitive manual operations, such as shoveling, pig iron loading, bricklaying, and manufacturing/assembly tasks. “Cognitive tasks” involving planning, maintaining situation awareness, and decision making were not directly addressed by this approach. Taylor was, sometimes unjustly, criticized because of his deterministic account of work, his view of humans as machines, his notion that humans are motivated only by monetary rewards, and the utter lack of discretion granted to workers.

Taylor’s lasting influence on task analysis has been his analytical approach to decomposing complex tasks into subtasks, and the use of quantitative methods in optimizing task performance. By asserting that management should develop an ideal method of working, independent of workers’ intuitions (or their “rule-of-thumb” methods, as Taylor called them), he foreshadowed contemporary discussions on the value of using experts as sources of information. Indeed, to understand various manufacturing jobs, Taylor would first find people who were very good (“experts”) and then bring them into a laboratory that simulated their workplace so that their activity might be studied. Taylor’s time study continued to exert an influence on determining optimal work layout for at least half a century (Annett, 2000), and it still is a major approach to job design (Medsker & Campion, 1997).

Although World War I stimulated the development of more sophisticated equipment, particularly in the area of avionics, there was little attention to controls and displays. Rather, the main focus was on pilot selection and training (Meister, 1999). This line of research resulted in the development of the method of job analysis in the 1930s by the U.S. Department of Labor (Drury et al., 1987). Job analysis was devised to establish a factual and consistent basis for identifying personnel qualification requirements. A job consists of a position or a group of similar positions, and each position consists of one or more tasks (Luczak, 1997). Therefore, there is a logical distinction between job analysis and task analysis: the techniques employed in job analysis address a higher level of aggregation than the techniques employed in task analysis.

For instance, in a typical job analysis an analyst would rate, on a scale, whether a particular job element, such as “decision making and reasoning,” would be used very often or very infrequently, and whether its importance is very minor or extreme. In a typical task analysis, on the other hand, an analyst would decompose decision making into its constituent elements, for instance, “plausible goals,” “relevant cues,” “expectancies,” and “actions” (Klein, 1993). Furthermore, the goals and cues would be spelled out in detail, as would be the typical difficulties associated with particular cues (e.g., Militello & Hutton, 1998). Similarly, when analyzing the interaction between a human and a machine, job analysis would rate the extent and importance of this interaction, whereas task analysis would specify in detail how the human interacts with the machine, perhaps even down to the level of individual keystrokes (e.g., Card, Moran, & Newell, 1983). Job analysis and task analysis may use the same methods, for instance, interviews, work observation, and critical incidents. However, as mentioned above, these methods address different levels of aggregation.

The Age of Oil

It was not until after World War II that task analysis and human factors (the preferred term in North America) or ergonomics (the preferred term in Europe) began to take on a decidedly more “cognitive” form. This was initiated by the development of information-processing systems and computing devices, from the stage of manual control to the stage of supervisory control (Hollnagel & Cacciabue, 1999). Although Tayloristic approaches to task analysis were still sufficient in most of the work conducted in the first half of the twentieth century (when machines were manually controlled), the development of instrumented cockpits, radar displays, and remote process control forced the human into a supervisory role in which knowledge and cognition were more important than manual labor, and conditional branchings of action sequences were more important than strictly linear sequences of actions. Experience in World War II had shown that systems with well-trained operators were not always working. Airplanes with no apparent mechanical failures flew into the ground, and highly motivated radar operators missed enemy contacts. Apparently, the emphasis on testing and training had reached its limits, as had Taylor’s implicit philosophy of designing the human to fit the machine. Now, experimental psychologists were asked to design the machine to fit the human.

Miller: Task Description and Task Analysis

In 1953, Robert B. Miller developed a method for task analysis that went beyond merely observable behavior (Miller, 1953; 1962). Miller proposed that each task be decomposed into the following categories: cues initiating action, controls used, response, feedback, criterion of acceptable performance, and typical errors. The method was of general applicability, but was specifically designed for use in planning for training and training equipment. Miller adopted a systems approach to task analysis, viewing the human as part of the system’s linkages from input to output functions.

In his task-analysis phase, Miller included cognitive concepts such as “goal orientation and set,” “decisions,” “memory storage,” “coordinations,” and “anticipations.” These “factors in task structure,” as he called the concepts, are, to different degrees, inevitable parts of every task. The task analyst needs to translate the set of task requirements listed in the task description into task-structure terms. The next step would be to translate the task-structure terms into selection procedures, training procedures, and human engineering. Take, for instance, the task of troubleshooting. Miller provided some “classical suggestions” on how to train the problem-solving part of troubleshooting. One suggestion was to “indoctrinate by concept and practice to differentiate the function from the mechanism that performs the function” (Miller, 1962, p. 224). Although too general to be useful as a concrete training suggestion, this example predates later concepts such as the “abstraction hierarchy” introduced by Jens Rasmussen in 1979 (see Vicente, 2001).
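Purely as an illustration of the bookkeeping such a task description involves, the short Python sketch below records one step of a hypothetical radar-watchkeeping task under the six categories Miller proposed. The field names, the example step, and its values are invented for illustration; they are not Miller’s own notation.

from dataclasses import dataclass

@dataclass
class TaskElement:
    """One entry in a Miller-style task description (field names are illustrative)."""
    cue: str             # cue initiating action
    controls: str        # controls used
    response: str        # the action taken
    feedback: str        # indication that the action had an effect
    criterion: str       # criterion of acceptable performance
    typical_errors: str  # errors commonly observed

# Hypothetical step of a radar-watchkeeping task.
acknowledge_contact = TaskElement(
    cue="new contact symbol appears on the display",
    controls="trackball and acknowledge key",
    response="hook the contact and press acknowledge",
    feedback="contact symbol stops blinking",
    criterion="acknowledged within five seconds of appearance",
    typical_errors="hooking an adjacent contact",
)
print(acknowledge_contact.cue)

A full Miller-style description would list such an entry for every task element, which the analyst then translates into task-structure terms as described above.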

Flanagan: Critical Incident Technique

The applied area of human-factors engineering was less reluctant to adopt cognitive terminology than mainstream North American academic psychology, which at that time was still impacted by behaviorism. We have already seen how Miller’s (1953) approach to task analysis included cognitive concepts. In 1954, Flanagan published his “critical incident technique” (Flanagan, 1954). This is a method for collecting and analyzing observed incidents having special significance. Although the modern-day reader may associate incidents with severe disasters, this was not Flanagan’s primary definition.

During World War II, he and his coworkers studied reasons for failure in learning to fly, disorientation while flying, failures of bombing missions, and incidents of effective or ineffective combat leadership. After the war, the method was also applied to nonmilitary jobs, such as dentistry, bookkeeping, life insurance, and industry. These incidents were collected by interviewing hundreds of participants, resulting in thousands of incident records. Alternative methods of data collection were group interviews, questionnaires, and written records of incidents as they happened. These incidents were then used to generate critical job requirements, which in turn were used for training purposes, job design, equipment design, measures of proficiency, and to develop selection tests. Flanagan (1954) did not provide much detail on the reliability and validity of his technique, although he emphasized the importance of the reporting of facts regarding behavior rather than resting solely on subjective impressions. His technique demonstrates the importance of using domain experts as informants about any behavior that makes a significant contribution to the work that is carried out.

Hierarchical Task Analysis

Although R. B. Miller had used cognitive concepts in his method for task analysis, his task descriptions were still tied very much to actual human–machine interaction. His task descriptions would therefore basically be lists of physical activities. His concept of user goals had more to do with the criteria of system performance that the user had to meet than with a nested set of internal goals that drives user performance. A method for task analysis that began by identifying the goals of the task was developed in the 1960s by Annett and Duncan under the name of Hierarchical Task Analysis (HTA) (Annett & Duncan, 1967). In accordance with the dominant industries during this period (the Age of Oil), HTA was originally developed for training process-control tasks in the steel and petrochemical industries. These process-control tasks involved significant cognitive activity such as planning, diagnosis, and decision making.

In the 1950s and 1960s, manual-control tasks had been taken over by automation. Operators became supervisors who were supposed to step in when things went wrong. The interesting and crucial parts of supervisory-control tasks do not lie with the observable behavior, but rather with unobservable cognitive activities such as state recognition, fault finding, and scheduling of tasks during start-up and shutdown sequences. Training for these tasks therefore needed to be based on a thorough examination of this cognitive activity. Annett and Duncan felt the existing methods for task analysis (such as time and motion study and Miller’s method) were inadequate to address these issues. They were also clearer about the need for task descriptions to involve hierarchies (i.e., conditional branchings versus linear sequences) – hence hierarchical task analysis. Complex systems are designed with goals in mind, and the same goals may be pursued by different routes. Hence, a direct listing of activities may be misleading (although it may be sufficient for routine repetitive tasks). The analyst therefore needs to focus on the goals.

Goals may be successively unpacked to reveal a nested hierarchy of goals and subgoals. For example, thirst may be the condition that activates the goal of having a cup of tea, and subgoals are likely to include obtaining boiling water, a teapot with tea, and so on. We may answer the question why we need boiling water by referring to the top-level goal of having a cup of tea. The analyst needs to ask next how to obtain boiling water. Whether the analyst needs to answer this question depends on the purpose of the analysis. If the purpose is to train someone who has never before made a cup of tea, then the subgoal of obtaining boiling water itself needs to be unpacked further, for instance: pour water in container, heat water, look for bubbles.

Since a general purpose of HTA is to identify sources of actual or potential performance failure, Annett and Duncan (1967) formulated the following stop rule: stop the analysis when the product of the probability of failure (p) and the cost of failure (c) is judged acceptable.
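Before returning to the example, a minimal Python sketch may help fix the mechanics of this stop rule: a goal is unpacked further only while the judged p × c product remains unacceptable. The goal tree, probabilities, costs, and threshold below are invented for the cup-of-tea example; in a real HTA these are the analyst’s judgments, not computed quantities.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    p_fail: float                      # judged probability of failure
    cost: float                        # judged cost of failure (arbitrary units)
    subgoals: List["Goal"] = field(default_factory=list)

def analyse(goal: Goal, threshold: float, depth: int = 0) -> None:
    """Unpack a goal until p * c is judged acceptable (the stop rule) or no subgoals are listed."""
    risk = goal.p_fail * goal.cost
    print("  " * depth + f"{goal.name}: p*c = {risk:.2f}")
    if risk <= threshold or not goal.subgoals:
        return                         # stop decomposing here
    for sub in goal.subgoals:
        analyse(sub, threshold, depth + 1)

# Invented judgments for training a child to make a cup of tea.
make_tea = Goal("make a cup of tea", 0.6, 1.0, [
    Goal("obtain boiling water", 0.5, 0.8, [
        Goal("pour water in container", 0.1, 0.2),
        Goal("heat water", 0.5, 0.9),  # high p and c: would be decomposed further in practice
        Goal("look for bubbles", 0.2, 0.1),
    ]),
    Goal("put tea in the teapot", 0.1, 0.2),
])
analyse(make_tea, threshold=0.05)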


In the example above, if we needed to train a child in making a cup of tea, we might judge the product of p and c to be acceptable for the subgoals of pouring water in the container and looking for bubbles. However, we may have some doubts about the subgoal of heating the water: a child may not know how to operate the various devices used for boiling water (probability of failure is high); moreover, the cost of failure may be high as well (burning fingers and worse). The analyst will therefore decide to further decompose this subgoal, but not the other subgoals. By successively decomposing goals and applying the p · c criterion at each step, the analyst can discover possible sources of performance failure, and solutions can be hypothesized. For instance, one may discover that heating water with an electrical boiler in fact requires fairly extensive knowledge about electricity and the hazards associated with the combination of water and electricity. Based on current literature on training, and in particular training children, the analyst may finally suggest some ways of educating children in the dangers of using electrical boilers when making a cup of tea.

To take a more complex example than that of making a cup of tea, and illustrating the output of HTA in a graphical format, consider part of the HTA in Figure 11.1 for operating a continuous-process chemical plant (after Shepherd, 2001).

[Figure 11.1. Hierarchical task analysis for continuous-process plant. The top-level goal “Operate continuous-process plant” is decomposed into five subgoals: 1. Start up from cold; 2. Start up after intermediate shutdown; 3. Run plant; 4. Carry out emergency crashdown; 5. Shut down for maintenance. Lower-level subgoals shown in the figure include: monitor alarms, instruments, and equipment; deal with off-spec. conditions; collect samples and deal with lab. reports; adjust plant throughput; ensure plant and services available; line up system; bring system pressure to set-point; warm up system; hold pressure at 72.5 and temp. at 150.]


This example is deliberately simplified in that it does not show the order in which subgoals are pursued. A typical HTA would include a plan that does specify that order.

HTA may best be described as a generic problem-solving process. It is now one of the most familiar methods employed by ergonomics specialists in the United Kingdom (Annett, 2004). However, evaluation studies have shown that HTA can be very time-intensive compared to other methods such as observation and interview. HTA is certainly far from simple and takes both expertise and practice to administer effectively (Annett, 2003). There is also a good deal of variability in the application of HTA. The reader may have had different thoughts than the writer of this chapter when reading about the particular decomposition of the subgoal of obtaining boiling water: why not describe a particular procedure for a particular way of boiling (e.g., pour water in pan, put pan on stove, turn on stove, wait until water boils)? One obvious reply would be that this description is less general than the one offered above, because that description talks about “containers” in general. Furthermore, the actions are less precise (does one need to set the stove to a particular setpoint?), and the conditions indicating goal attainment are vague (how does one see that the water boils?). If there can be disagreement with such a simple example, imagine what problems an analyst can run into when dealing with a complex process-control task, such as the example above of the chemical plant.

One of the pitfalls in applying HTA is the fact that one may lose sight of the problem-solving nature of the task analysis itself. This is not a critique of HTA as such, but rather a cautionary note that analysts need to keep the purpose of the study in sight throughout the analysis.

The Age of Information Processing

In the early 1970s, the word “cognitive” became more acceptable in American academic psychology, though the basic idea had been established at least a decade earlier by George Miller and Jerome Bruner (see Gardner, 1985; Hoffman & Deffenbacher, 1992; Newell & Simon, 1972, for historical overviews). Neisser’s Cognitive psychology had appeared in 1967, and the scientific journal by the same name first appeared in 1970. It took one more decade for this approach to receive broader methodological justification and practical application. In 1984, Ericsson and Simon (1984) published Protocol analysis: Verbal reports as data. This book reintroduced the use of think-aloud problem-solving tasks, which had been relegated to the historical dustbin by behaviorism even though the method had seen some decades of successful use in psychology laboratories in Germany and elsewhere in Europe up through about 1925. In 1983, Card, Moran, and Newell published The psychology of human–computer interaction. This book helped lay the foundation for the field of cognitive science and presented the GOMS model (Goals, Operators, Methods, and Selection rules), a family of analysis techniques and a form of task analysis that describes the procedural, how-to-do-it knowledge involved in a task (see later section and Kieras, 2004, for a recent overview).
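A toy GOMS-style fragment may help fix the idea. The task below (deleting a word in a text editor), the operator times, and the selection rule are hypothetical; they merely show how goals, operators, methods, and selection rules fit together, in the spirit of, but not copied from, Card, Moran, and Newell.

# A minimal GOMS-style sketch in Python (hypothetical editing task, rough operator times in seconds).
OPERATOR_TIME = {"keystroke": 0.2, "point": 1.1, "home_hands": 0.4, "mental_prep": 1.35}

METHODS = {
    # Goal: delete a word. Two alternative methods, each a sequence of primitive operators.
    "delete_word_by_mouse": ["mental_prep", "home_hands", "point", "point", "keystroke"],
    "delete_word_by_keys":  ["mental_prep"] + ["keystroke"] * 6,
}

def method_time(method: str) -> float:
    """Predicted execution time: the sum of the operator times in the method."""
    return sum(OPERATOR_TIME[op] for op in METHODS[method])

def select_method(hands_on_keyboard: bool) -> str:
    """Selection rule: prefer the keyboard method when the hands are already on the keyboard."""
    return "delete_word_by_keys" if hands_on_keyboard else "delete_word_by_mouse"

chosen = select_method(hands_on_keyboard=True)
print(chosen, round(method_time(chosen), 2), "s")

The analytic value lies in comparing predicted times and method choices across interface designs, which is how GOMS-style analyses are typically used.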

Task analysis profited a lot from the developments in artificial intelligence, particularly in the early 1980s when expert systems became commercially interesting (Hayes-Roth, Waterman, & Lenat, 1983). Since these systems required a great deal of expert knowledge, acquiring or “eliciting” this knowledge became an important topic (see Hoffman & Lintern, Chapter 12). Because of their reliance on unstructured interviews, system developers soon viewed “knowledge elicitation” as the bottleneck in expert-system development, and they turned to psychology for techniques that helped elicit that knowledge (Hoffman, 1987). As a result, a host of individual techniques was identified (see Cooke, 1994, for a review of 70 techniques), but no single overall method for task analysis that would guide the practitioner in selecting the right technique for a given problem resulted from this effort. However, the interest in the knowledge structures underlying expertise proved to be one of the approaches to what is now known as cognitive task analysis (Hoffman & Woods, 2000; see Hoffman & Lintern, Chapter 12; Schraagen, Chipman, & Shalin, 2000).

With artificial intelligence coming to be a widely used term in the 1970s, the first ideas arose about applying artificial intelligence to cockpit automation. As early as 1974, the concepts of adaptive aiding and dynamic function allocation emerged (Rouse, 1988). Researchers realized that as machines became more intelligent, they should be viewed as “equals” to humans. Instead of Taylor’s “designing the human to fit the machine,” or human factors engineering’s “designing the machine to fit the human,” the maxim now became to design the joint human–machine system, or, more aptly phrased, the joint cognitive system (Hollnagel, 2003). Not only are cognitive tasks everywhere, but humans have lost their monopoly on conducting cognitive tasks, as noted by Hollnagel (2003, p. 6).

Again, as in the past, changes in technological developments were followed by changes in task-analysis methods. In order to address the large role of cognition in modern work, new tools and techniques were required “to yield information about the knowledge, thought processes, and goal structures that underlie observable task performance” (Chipman, Schraagen, & Shalin, 2000, p. 3).

Cognitive task analysis is not a single method or even a family of methods, as are Hierarchical Task Analysis and the Critical Incident Technique. Rather, the term denotes a large number of different techniques that may be grouped by, for instance, the type of knowledge they elicit (Seamster, Redding, & Kaempf, 1997) or the process of elicitation (Cooke, 1994; Hoffman, 1987). Typical techniques are observations, interviews, verbal reports, and conceptual techniques that focus on concepts and their relations. Apart from the expert-systems thread, with its emphasis on knowledge elicitation, cognitive task analysis has also been influenced by the need to understand expert decision making in naturalistic, or field, settings.

A widely cited technique is the Critical Decision Method (CDM) developed by Klein and colleagues (Klein, Calderwood, & Macgregor, 1989; see Hoffman, Crandall, & Shadbolt, 1998, for a review, and see Hoffman & Lintern, Chapter 12, and Ross et al., Chapter 23). The Critical Decision Method is a descendant of the Critical Incident Technique developed by Flanagan (1954). In the CDM procedure, domain experts are asked to recall an incident in detail by constructing a time line, assisted by the analyst. Next, the analyst asks a set of specific questions (so-called cognitive probes) about goals, cues, expectancies, and so forth. The resulting information may be used for training or system design, for instance, by training novices in recognizing critical perceptual cues.
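For readers who want a concrete picture of what a CDM session produces, the Python sketch below organizes a fictitious incident account into a time line plus probe answers. The probe categories and the incident details are illustrative placeholders, not the published probe set of Klein and colleagues.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TimelineEvent:
    time: str          # rough time marker supplied by the expert
    description: str

@dataclass
class CDMRecord:
    """One Critical Decision Method interview: an incident time line plus probe answers."""
    incident: str
    timeline: List[TimelineEvent] = field(default_factory=list)
    probes: Dict[str, str] = field(default_factory=dict)  # keyed by probe category (illustrative)

record = CDMRecord(
    incident="near-collision during harbor approach",  # hypothetical incident
    timeline=[
        TimelineEvent("t+0 min", "outbound vessel reported at buoy 7"),
        TimelineEvent("t+3 min", "decided to reduce speed and stay east of the channel"),
    ],
    probes={
        "goals": "keep own ship clear of the outbound traffic lane",
        "cues": "drift angle and closing speed of the other vessel",
        "expectancies": "expected the other vessel to give way earlier than it did",
    },
)
print(len(record.timeline), "timeline events;", len(record.probes), "probe answers")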

Despite, and perhaps because of, its rich and complex history, cognitive task analysis is still a relatively novel enterprise, and a number of major issues remain to be resolved. One is the usability of the products of cognitive task analysis, an issue that applies not only to cognitive task analysis, but to task analysis in general. Diaper, for instance, has argued since the beginning of the 1990s that a gulf exists between task analysis and traditional software-engineering approaches (Diaper, 2001). When designing systems, software engineers rarely use the task-analysis techniques advocated by psychologists. Conversely, as Lesgold (2000, p. 456) rightfully noted, “psychologists may have ignored the merits of object-based formalisms at least as often as analysts on the software engineering side have ignored human learning and performance constraints.” Both groups can learn a lot from each other. Several attempts have been made to bridge the gulf (Diaper and Stanton’s 2004 handbook lists a number of these), but none has been widely applied yet, possibly because of differences in background and training between software engineers and cognitive psychologists.

Another major challenge for cognitive task analysis is to deal with novel systems. For the most part, the existing practice of cognitive task analysis is based on the premise that one has existing jobs with experts and existing systems with experienced users to be analyzed. However, new systems for which there are no experts are being developed with greater frequency and urgency.

These issues have been taken up by the cognitive systems engineering approach. At its core, cognitive systems engineering “seeks to understand how to model work in ways directly useful for design of interactive systems” (Eggleston, 2002, p. 15). Eggleston’s useful overview of the field distinguishes three phases in the development of cognitive systems engineering: (1) a conceptual foundations period that occurred largely in the 1980s, (2) an engineering practice period that dominated the 1990s, and (3) an active deployment period that started around 2000. Cognitive task analysis figures prominently in the engineering practice period of cognitive systems engineering. However, whereas “traditional” cognitive task analysis focuses primarily on understanding the way people operate in their current world, cognitive systems engineering focuses also on understanding the way the world works and the way in which new “envisioned worlds” might work (Potter, Roth, Woods, & Elm, 2000).

With the discussion of cognitive task analysis and cognitive systems engineering, we have reached the present-day status of task analysis. The next section will describe a number of case studies that exemplify the use of task-analysis methods.

Case Studies

In this section, I will describe various case studies on task analysis, with the aim, first, to provide the reader with some ideas on how to carry out a task analysis, and second, to note some of the difficulties one encounters when carrying out a task analysis in complex domains.

Improving the Training of Troubleshooting

The first case study is in the domain of troubleshooting. Schaafstal (1993), in her studies of expert and novice operators in a paper mill, found evidence for a structured approach to troubleshooting by experts. She presented experts and novices with realistic alarms on paper and asked them to think aloud. Consider the following protocol by a novice when confronted with the alarm “conveyor belt of pulper 1 broke down”:

I would . . . I would stop the pulper to start with and then I would halt the whole cycle afterwards and then try to repair the conveyor belt . . . but you have to halt the whole installation, because otherwise they don’t have any stock anymore.

An expert confronted with the same problem reacted as follows:

OK. Conveyor belt of pulper 1 broke down . . . conveyor belt of pulper 1 . . . if that one breaks down . . . yeah . . . see how long that takes to repair . . . not postponing the decision for very long, to ensure we don’t have to halt the installation.

The novice starts repairs that are not necessary at all given the situation, whereas the expert first judges the seriousness of the problem. These and similar statements led to the inclusion of the category “judging the seriousness of the problem” in the expert’s task structure of the diagnostic task. As novices rarely showed this deliberation, this category did not appear in their task structure.

The complete task structure is as follows (see Figure 11.2).

[Figure 11.2. Task structure of the diagnostic strategy applied by expert operators (Schaafstal, 1991). The flow runs from a symptom through the following steps: Judgment: serious problem?; Possible faults; Testing: is this the fault?; Determination of repairs; Consequences of repairs; Ordering of repairs; Application of repair (local or global); Evaluation: problem solved?; EXIT. A serious problem leads directly to application of a global repair; a not very serious problem enters the fault-testing loop; “no” branches return to earlier steps.]

Experts in a paper mill first start by making a judgment about the seriousness of the problem. If the problem is judged to be serious, the operator will immediately continue with the application of a global repair, followed by an evaluation whether the problem has been solved. This process may be followed by a more thorough diagnosis in order to determine the correct local repair, ensuring a solution “once and for all.” If the problem is not a very serious one, the expert will consider possible faults one by one and test them, until a likely one is found. This is then followed by a determination of repairs, their consequences, an ordering of repairs (if necessary), application of repairs, and an evaluation whether the problem has been solved. If the problem has not been solved, the expert might do two things: either try another repair, or back up higher in the tree – he may realize that he has not yet spotted the actual fault, and therefore the problem has not been solved. In case no possible faults are left, or the operator cannot think of any other faults than the ones he already tested, he will be inclined to use a global repair to alleviate the problem.
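The control flow of Figure 11.2 can be paraphrased roughly in Python as follows. Every argument is a placeholder callable or collection standing in for the operator’s domain knowledge, so this is a reading aid for the figure rather than a model of expert diagnosis.

def diagnose(symptom, is_serious, candidate_faults, test_fault, plan_repairs, apply_repair, solved):
    """Rough paraphrase of the expert diagnostic strategy in Figure 11.2."""
    if is_serious(symptom):
        apply_repair("global")                 # serious problem: buy time with a global repair first
        if solved():
            return "solved (a more thorough local diagnosis may still follow)"
    for fault in candidate_faults(symptom):    # consider possible faults one by one
        if not test_fault(fault):
            continue                           # not the fault: try the next candidate
        for repair in plan_repairs(fault):     # determine, weigh consequences of, and order repairs
            apply_repair(repair)
            if solved():
                return "solved"
    apply_repair("global")                     # no candidate fault left: fall back to a global repair
    return "alleviated, not solved"

# Dummy run with one candidate fault whose repair works.
outcome = diagnose(
    symptom="conveyor belt of pulper 1 broke down",
    is_serious=lambda s: False,
    candidate_faults=lambda s: ["belt drive failure"],
    test_fault=lambda f: True,
    plan_repairs=lambda f: ["replace drive part"],
    apply_repair=lambda r: None,
    solved=lambda: True,
)
print(outcome)  # -> solved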

Inexperienced operators show a far simpler diagnostic strategy. They don’t judge the seriousness of the problem, they don’t consider the consequences of repairs, and they don’t evaluate whether the problem has been solved. Also, novices jump much more quickly to repairs, without checking whether a certain repair is actually right for a certain situation.

We applied this expert task structure to another area of troubleshooting (Schaafstal, Schraagen, & Van Berlo, 2000). Around 1990, complaints started to emerge from the Dutch fleet concerning the speed and accuracy of weapon engineers, who carry out both preventive and corrective maintenance onboard naval vessels. There were a number of reasons for the suboptimal performance of the troubleshooters. First, expertise was not maintained very well. Engineers shifted positions frequently, left military service for more lucrative jobs in the civilian world, or were less interested in a technical career in the first place. Second, a new generation of highly integrated systems was introduced, and this level of integration made troubleshooting more demanding. Third, the training the troubleshooters received seemed inadequate for the demands they encountered onboard ships.

We conducted a field study with real faults in real systems that showed that military technical personnel who had just completed a course and passed their exam diagnosed only 40% of the malfunctions correctly. We also obtained scores on a knowledge test, and found that the junior technicians scored only 55% correct on this test. Of even more importance was the low correlation (0.27) between the scores on the knowledge test and the actual troubleshooting performance. This cast doubt on the heavy emphasis placed on theory in the training courses.

Our suspicions about the value of theory in the training courses were further raised after having conducted a number of observational studies (see Schraagen & Schaafstal, 1996: Experiment 1). In these studies, we used both experts and novices (trainees who had just finished a course) in order to uncover differences in the knowledge and strategies employed. Our task-analysis method was to have technicians think aloud while troubleshooting two malfunctions in a radar system. The resulting verbal data were analyzed by protocol analysis, that is, by isolating and categorizing individual propositions in the verbal protocol.

The categories we used for classifying the propositions were derived from the expert task structure as shown in Figure 11.2. The radar study showed that a theory instructor who was one of our participants had difficulties troubleshooting this radar system. This turned our attention to a gap between theoretical instruction and practice. We also observed virtually no transfer of knowledge from one radar system to the other, as witnessed by the unsuccessful troubleshooting attempts of two participants who were experienced in one radar system but not in the radar system we studied. This turned our attention to the content of the training courses, which were component oriented instead of functionally oriented. Finally, the verbal protocols showed the typical unsystematic approach to troubleshooting by the novice participant in our study.

These studies provided a glimpse of what was wrong with the courses in troubleshooting. They were highly theoretical, component oriented, with little practice in actual troubleshooting. On the basis of our observations and experiments, we decided to change the courses. Basically, we wanted to teach the students two things: (1) a systematic approach to troubleshooting, and (2) a functional understanding of the equipment they have to maintain. In our previous study (Schraagen & Schaafstal, 1996), we had found that the systematic approach to troubleshooting could not be taught independently of a particular context. In order to be able to search selectively in the enormous problem space of possible causes, it is essential that the representation of the system be highly structured.

One candidate for such a structuring is a functional hierarchical representation, much like Rasmussen’s (1986) abstraction hierarchy (see Hoffman & Lintern, Chapter 12). For a course on a computer system, we decomposed the system into four levels, from the top-level decomposition of a computer system into power supply, central processor, memory, and peripheral equipment, down to the level of electrical schemata. We stopped at the level of individual replaceable units (e.g., a printed circuit board). In this way, substantial theoretical background that was previously taught could be eliminated. We replaced this theory with more practice in troubleshooting itself. Students were instructed to use a troubleshooting form as a job aid. This form consisted simply of a sheet of paper with four different steps to be taken in troubleshooting (problem description, generate causes, test causes, repair and evaluate). These four steps were a high-level abstraction of the diagnostic task structure previously identified in the paper-mill study by Schaafstal (1993). In this way, the systematic approach to troubleshooting was instilled in the practice lessons, while at the same time a functional understanding of the system was instilled in the theory sessions. Theory and practice sessions were interspersed such that the new theoretical concepts, once mastered, could then be readily applied to troubleshooting the real system.

We demonstrated to the Navy the success of this approach in a one-week add-on course: the percentage of problems solved went up from 40% to 86%. Subsequently, we were asked to completely modify the computer course according to our philosophy. Again, we evaluated this new course empirically by having students think aloud while solving four problems, and rating their systematicity of reasoning and level of functional understanding. Results were highly favorable for our modified course: 95% of malfunctions were correctly solved (previously 40%), and experts rated the students’ systematicity and level of functional understanding each at 4.8 on a 1–5 scale (whereas these numbers were 2.6 and 2.9, respectively, for the old course). These results were most satisfying, especially considering the fact that the new course lasted four weeks instead of six weeks.

The Naval Weapon Engineering School, convinced by the empirical results, subsequently decided to use this method as the basis for the design of all its function courses. We helped them with the first few courses, and subsequently wrote a manual specifying how to develop new courses based on our philosophy, but gradually the naval instructors have been able to modify courses on their own. Course length for more than 50 courses has on average been reduced by about 30%.

As far as task analysis is concerned, the reader may have noted that little mention has been made of any extensive task decomposition. Yet, this project could not have been as successful as it was without a cognitive task analysis of troubleshooting on the part of highly proficient domain practitioners. Troubleshooting is first and foremost a cognitive task. Little can be observed from the outside, just by looking at behavior. Observations and inferences are all knowledge based. We therefore almost always used think-aloud problem solving followed by protocol analysis as the data-analysis method.

Shore-based Pilotage

In 1992, Rotterdam Municipal Port Management, with cooperation by the Rotterdam Pilots Corporation, ordered an investigation into the possibilities of extending shore-based pilotage (Schraagen, 1993). Normally, a pilot boards a ship and “conns” it from the sea to the harbor entrance. The expertise of a pilot lies in his or her extensive knowledge of the particular conditions of a specific harbor or port. The expertise of the ship’s captain lies primarily in his or her extensive knowledge of a specific ship. Because of rough seas, situations sometimes arise where it is too dangerous for the pilot to board the ship himself. In those situations, either the pilot is brought on board by a helicopter, or the ship is piloted by a pilot ashore. The latter is called “shore-based pilotage.” Extending shore-based pilotage would reduce costs for shipping companies since shore-based pilotage, at least at the time of the study, was cheaper than being served by a helicopter or waiting in a harbor for better weather. However, cost reduction should be weighed against decreased levels of safety as a result of the pilot not being on the bridge himself. In particular, in bad weather conditions, the captain has no overview of the traffic image in relation to the environment. Sometimes large drift angles occur; the captain may not accept these angles and may not follow up the advice given by the pilot, because he may not be familiar with the local situation.

One important element in the study was to specify the extra information required by pilots if shore-based pilotage were extended to ships exceeding 170 m in length. Ideally, a simulator study would be required in which one could systematically vary the information available to the pilot, variables such as ship length and height, traffic density, wind, current, and visibility conditions, and the quality of interaction between pilot and captain. However, this kind of study exceeded the project’s budget, so we actually undertook a literature study, a comparison with air traffic control, a study of vessel-based systems, and a task analysis. The purpose of the task analysis was to find out what information pilots used onboard the ships.

The selection of expert pilots that participated in the task analysis was largely determined by the Pilots Corporation, based on a few constraints that we put forward. First, we needed pilots with at least ten years of practical experience. Second, we needed pilots who were proficient communicators, so they could explain what they were doing.

The task analysis consisted of two parts (see Schraagen, 1994, for more details): (1) observation and recording of pilot activities on eleven trips on ships, and (2) paper-and-pencil tasks given to seven pilots who had also cooperated during the trips. During the trips onboard ships, an extensive array of measurements was taken:

(a) Pilots were instructed to talk aloud; their verbalizations were tape-recorded.

(b) Pilot decisions and communication were timed with a stopwatch.

(c) Pilots were asked about their plans before entering the ship and were interviewed afterwards about the trip made.

(d) The ships’ movements and positions were recorded.

(e) Recordings were made of the movements of ship traffic (via radar).

(f) Photographs of the view ahead were taken from the vantage point of the helm: photographs were taken every five minutes in order to obtain a running record of the pilot’s perceptual input.

After the trips had been made, seven pilots participated in follow-up sessions. The primary aim was to obtain more detailed information on pilot information usage than had been possible during the trips (detailed interviews were impossible, of course, since pilots were doing their jobs). A secondary benefit was that data could be compared, in contrast to the trip data that differed in terms of ship, weather, and traffic conditions. In the follow-up session, pilots were asked to carry out the following tasks:

(a) Reconstruct the exact rudder advice given during a trip, using fragments of video recordings as input (fragments of curved situations lasting four to ten minutes were presented on a TV monitor).

(b) Indicate on a map of the trajectory they were familiar with at what points course and speed changes were made.

(c) Draw on a map the course over ground, together with acceptable margins under various wind and current conditions, for the entrance into Hook of Holland.

These tasks represent what Hoffman (1987) has called “limited-information tasks.” Limited-information tasks are similar to the task the expert is familiar with, but the amount or kind of information that is available to the expert is somehow restricted. For example, the video recordings were taken from a fixed point in the middle of the bridge, whereas pilots would normally stand at the righthand side of the bridge looking at the starboard side of the river. Although experts may initially feel uncomfortable with limited-information tasks, the tasks can be informative in revealing practitioner reasoning (Hoffman, Shadbolt, Burton, & Klein, 1995).

The task analysis yielded a wealth of information on the kinds of information pilots actually use when navigating. The most important references used were pile moorings, buoys, and leading lines. Several unexpected results emerged. First, pilots develop highly individualized references both to initiate course and speed changes and to check against the ship’s movements. Although all pilots rely on pile moorings, buoys, and leading lines, which of these they use differs greatly among them. This is perhaps due to their individualized way of training. Second, one might hypothesize that the decision points mentioned by pilots on paper constitute only a fraction of, or are different in nature from, the decision points used during actual pilotage onboard a ship. This latter possibility turned out not to be the case. Decision points used in actual practice were all covered by the decision points mentioned on paper. This implies that this kind of knowledge is not “tacit” or difficult to verbalize.

More interesting than the precise results of the task analysis, at least for this chapter’s purposes, are the lessons learned. First, this was a politically very sensitive project. It turned out that the sponsor, the Rotterdam Port Authorities, had a different political agenda than the Rotterdam pilots. The Port Authorities wanted to reduce shipping costs in order to increase the total amount of cargo handling. The pilots, on the other hand, publicly said they were afraid of jeopardizing safety in case shore-based pilotage was extended. They therefore offered their full assistance by allowing us to follow them on their trips, so that they could convince us of the added value of having experts on board.

In another project that was to have started a year later, their knowledge of how to conn a ship into the harbor was required for “proficiency testing,” that is, training captains of ships to conn their own ships into the harbor without the assistance of a pilot. Pilot participation in this project was withdrawn after a few months and the entire project was cancelled. In the end, this project may have been used by the Port Authorities to pose a threat to the pilots: if you don’t lower your rates for helicopter assistance (the helicopter was leased by the pilots), we will extend shore-based pilotage. It seems that this threat worked. Shore-based pilotage was not extended, hence the results of this study were not implemented.

By spending so much time with the pilots, the researchers could easily develop loyalties to them and their organization, rather than to the Port Authorities, who remained at a distance. In general, the task analyst who is studying expertise in context needs to be aware of these agenda issues.

A second lesson learned is that obtaining too much data can be counterproductive. In this project, for instance, the video recordings that were made of the synthetic radar image in the Harbor Coordination Center were never analyzed afterwards, although this seemed potentially valuable at the time we started. Similarly, the timing of the pilot decisions with a stopwatch, although laudable from a quantitative point of view, was really unnecessary given the focus of the project on the qualitative use of categories of information. Hindsight is always 20/20, but the general lesson for task analysts is to think twice before recording massive amounts of information just because the gathering of certain data types might be possible. Information gathering should be driven by the goals of the research.

Finally, the paper-and-pencil tasks were received with downright hostility by the pilots. They had been forced to spend their spare time on this exercise, and when they noted certain inadequacies in the information provided to them on paper, they became uncooperative and very critical. This required some subtle people-handling skills on the part of the task analyst. In retrospect, it would have been better to first talk through the materials with an independent pilot in order to remove the inadequacies. This confirms a conclusion already drawn in 1987 by Hoffman that “experts do not like it when you limit the information that is available to them ( . . . ). It is important when instructing the expert to drive home the point that the limited-information task is not a challenge of their ego or of their expertise” (Hoffman, 1987, p. 56).

In another cognitive task analysis, geared toward discovering the search strategies employed by forensic analysts, we also used limited-information tasks, this time without encountering resistance (Schraagen & Leijenhorst, 2001). This may have been due to the fact that the cases presented to the experts were developed in close cooperation with a forensic analyst who was not part of the study participants. Also, their familiar task, by definition, involves working with limited information.

Conclusions and Future Work

Where do we stand? Although it may be too early to tell, we may have shifted from the age of information to the age of knowledge sharing or innovation. Task analysis now focuses on understanding expert knowledge, reasoning, and performance, and on leveraging that understanding into methods for training and decision support, to amplify and extend human abilities to know, perceive, and collaborate. To do this, we have an overarching theory – macrocognition (Klein et al., 2003) – and a rich palette of methods, with ideas about the methods’ strengths and limitations and about how methods can be combined. Task analysis, and cognitive task analysis in particular, are both useful and necessary in any investigation of expertise “in the wild.”

Despite this generally positive outlook, there are several lingering issues that deserve future work. First, the issue of bridging the gulf between task analysis and systems design is still a critical one. Recently, interesting work has been carried out on integrating task analysis with standard software-engineering methods such as the Unified Modeling Language (UML) (see Diaper & Stanton, 2004, Part IV).

A second issue regarding the gulf between task analysis and design concerns the development of systems that do not yet exist. Task analyses generally work well when experts can be interviewed who are experienced with current systems. However, with novel systems, there are no experts. If introducing new technology changes tasks, the analysis of a current task may be of limited use in the design of new sociotechnical systems (Woods & Dekker, 2000). Therefore, a somewhat different set of techniques is required for exploring the envisioned world, including storyboard walkthroughs, participatory design, and high-fidelity simulations using future scenarios.

References

Annett, J. (2000). Theoretical and pragmatic influences on task analysis methods. In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 25–37). Mahwah, NJ: Lawrence Erlbaum Associates.

Annett, J. (2003). Hierarchical task analysis. In E. Hollnagel (Ed.), Handbook of cognitive task design (pp. 17–35). Mahwah, NJ: Lawrence Erlbaum Associates.

Annett, J. (2004). Hierarchical task analysis. In D. Diaper & N. Stanton (Eds.), The handbook of task analysis for human–computer interaction (pp. 67–82). Mahwah, NJ: Lawrence Erlbaum Associates.

Annett, J., & Duncan, K. D. (1967). Task analysis and training design. Occupational Psychology, 41, 211–221.

Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human–computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

Chipman, S. F., Schraagen, J. M., & Shalin, V. L. (2000). Introduction to cognitive task analysis. In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 3–23). Mahwah, NJ: Lawrence Erlbaum Associates.

Cooke, N. J. (1994). Varieties of knowledge elicitation techniques. International Journal of Human–Computer Studies, 41, 801–849.

Diaper, D. (2001). Task Analysis for Knowledge Descriptions (TAKD): A requiem for a method. Behavior and Information Technology, 20, 199–212.

Diaper, D. (2004). Understanding task analysis for human–computer interaction. In D. Diaper & N. Stanton (Eds.), The handbook of task analysis for human–computer interaction (pp. 5–47). Mahwah, NJ: Lawrence Erlbaum Associates.

Diaper, D., & Stanton, N. (2004). Wishing on a sTAr: The future of task analysis. In D. Diaper & N. Stanton (Eds.), The handbook of task analysis for human–computer interaction (pp. 603–619). Mahwah, NJ: Lawrence Erlbaum Associates.

Drury, C. G., Paramore, B., Van Cott, H. P., Grey, S. M., & Corlett, E. N. (1987). Task analysis. In G. Salvendy (Ed.), Handbook of human factors (pp. 371–401). New York: John Wiley & Sons.

Eggleston, R. G. (2002). Cognitive systems engineering at 20-something: Where do we stand? In M. D. McNeese & M. A. Vidulich (Eds.), Cognitive systems engineering in military aviation environments: Avoiding cogminutia fragmentosa! (pp. 15–78). Wright-Patterson Air Force Base, OH: Human Systems Information Analysis Center.

Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.

Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327–358.

Freeman, C., & Louca, F. (2001). As time goes by: From industrial revolutions to the information revolution. Oxford: Oxford University Press.

Gardner, H. (1985). The mind's new science: A history of the cognitive revolution. New York: Basic Books.

Hayes-Roth, F., Waterman, D. A., & Lenat, D. B. (Eds.). (1983). Building expert systems. Reading, MA: Addison-Wesley Publishing Company.

Hoffman, R. R. (1987, Summer). The problem of extracting the knowledge of experts from the perspective of experimental psychology. AI Magazine, 8, 53–67.

Hoffman, R. R., & Deffenbacher, K. (1992). A brief history of applied cognitive psychology. Applied Cognitive Psychology, 6, 1–48.

Hoffman, R. R., Shadbolt, N. R., Burton, A. M., & Klein, G. (1995). Eliciting knowledge from experts: A methodological analysis. Organizational Behavior and Human Decision Processes, 62, 129–158.

Hoffman, R. R., Crandall, B. W., & Shadbolt, N. R. (1998). A case study in cognitive task analysis methodology: The critical decision method for elicitation of expert knowledge. Human Factors, 40, 254–276.

Hoffman, R. R., & Woods, D. D. (2000). Studying cognitive systems in context: Preface to the special section. Human Factors, 42, 1–7 (Special section on cognitive task analysis).

Hollnagel, E., & Cacciabue, P. C. (1999). Cognition, technology & work: An introduction. Cognition, Technology & Work, 1(1), 1–6.

Hollnagel, E. (2003). Prolegomenon to cognitive task design. In E. Hollnagel (Ed.), Handbook of cognitive task design (pp. 3–15). Mahwah, NJ: Lawrence Erlbaum Associates.

Kieras, D. (2004). GOMS models for task analysis. In D. Diaper & N. A. Stanton (Eds.), The handbook of task analysis for human–computer interaction (pp. 83–116). Mahwah, NJ: Lawrence Erlbaum Associates.

Kirwan, B., & Ainsworth, L. K. (Eds.). (1992). A guide to task analysis. London: Taylor & Francis.

Klein, G. (1993). A recognition-primed decision (RPD) model of rapid decision making. In G. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 138–147). Norwood, NJ: Ablex.

Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19, 462–472.

Klein, G., Ross, K. G., Moon, B. M., Klein, D. E., Hoffman, R. R., & Hollnagel, E. (2003, May/June). Macrocognition. IEEE Intelligent Systems, pp. 81–85.

Lesgold, A. (2000). On the future of cognitive task analysis. In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 451–465). Mahwah, NJ: Lawrence Erlbaum Associates.

Luczak, H. (1997). Task analysis. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (2nd ed.) (pp. 340–416). New York: John Wiley & Sons.

Medsker, G. J., & Campion, M. A. (1997). Job and team design. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (2nd ed.) (pp. 450–489). New York: John Wiley & Sons.

Meister, D. (1999). The history of human factors and ergonomics. Mahwah, NJ: Lawrence Erlbaum Associates.

Militello, L. G., & Hoffman, R. R. (2006). Perspectives on cognitive task analysis. Cambridge, MA: MIT Press.

Militello, L. G., & Hutton, R. J. B. (1998). Applied cognitive task analysis (ACTA): A practitioner's toolkit for understanding cognitive task demands. Ergonomics, 41, 1618–1641.

Miller, R. B. (1953). A method for man–machine task analysis. Dayton, OH: Wright Air Development Center (Technical Report 53–137).

Miller, R. B. (1962). Task description and analysis. In R. M. Gagne (Ed.), Psychological principles in system development (pp. 187–228). New York: Holt, Rinehart and Winston.

Munsterberg, H. (1914). Psychotechnik. Leipzig: J. A. Barth.

Nemeth, C. P. (2004). Human factors methods for design: Making systems human-centered. Boca Raton: CRC Press.

Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Perez, C. (2002). Technological revolutions and financial capital: The dynamics of bubbles and golden ages. Cheltenham: Edward Elgar.

Potter, S. S., Roth, E. M., Woods, D. D., & Elm, W. C. (2000). Bootstrapping multiple converging cognitive task analysis techniques for system design. In J. M. Schraagen, S. F. Chipman, & V. L. Shalin (Eds.), Cognitive task analysis (pp. 317–340). Mahwah, NJ: Lawrence Erlbaum Associates.

Rasmussen, J. (1986). Information processing and human–machine interaction: An approach to cognitive engineering. Amsterdam: Elsevier.

Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431–443.

Schaafstal, A. M. (1991). Diagnostic skill in process operation: A comparison between experts and novices. Unpublished dissertation, University of Groningen, The Netherlands.

Schaafstal, A. M. (1993). Knowledge and strategies in diagnostic skill. Ergonomics, 36, 1305–1316.

Schaafstal, A. M., Schraagen, J. M., & van Berlo, M. (2000). Cognitive task analysis and innovation of training: The case of structured troubleshooting. Human Factors, 42, 75–86.

Schraagen, J. M. C. (1993). What information do river pilots use? In Proceedings of the International Conference on Marine Simulation and Ship Manoeuvrability MARSIM '93 (Vol. II, pp. 509–517). St. John's, Newfoundland: Fisheries and Marine Institute of Memorial University.

Schraagen, J. M. C. (1994). What information do river pilots use? (Report TM 1994 C-10). Soesterberg: TNO Institute for Human Factors.

Schraagen, J. M. C., & Leijenhorst, H. (2001). Searching for evidence: Knowledge and search strategies used by forensic scientists. In E. Salas & G. Klein (Eds.), Linking expertise and naturalistic decision making (pp. 263–274). Mahwah, NJ: Lawrence Erlbaum Associates.

Schraagen, J. M. C., & Schaafstal, A. M. (1996). Training of systematic diagnosis: A case study in electronics troubleshooting. Le Travail Humain, 59, 5–21.

Schraagen, J. M., Chipman, S. F., & Shalin, V. L. (Eds.). (2000). Cognitive task analysis. Mahwah, NJ: Lawrence Erlbaum Associates.

Seamster, T. L., Redding, R. E., & Kaempf, G. L. (1997). Applied cognitive task analysis in aviation. London: Ashgate.

Shepherd, A. (2001). Hierarchical task analysis. London: Taylor & Francis.

Taylor, F. W. (1998). The principles of scientific management (unabridged republication of the volume published by Harper & Brothers, New York and London, in 1911). Mineola, NY: Dover Publications.

Vicente, K. J. (2001). Cognitive engineering research at Risø from 1962–1979. In E. Salas (Ed.), Advances in human performance and cognitive engineering research (Vol. 1, pp. 1–57). New York: Elsevier.

Viteles, M. S. (1932). Industrial psychology. New York: W. W. Norton.

Woods, D. D., & Dekker, S. (2000). Anticipating the effects of technological change: A new era of dynamics for human factors. Theoretical Issues in Ergonomics Science, 1, 272–282.