
Transcript of Korn et al. (2009) article on enhanced/synthetic vision displays


Combining Enhanced and Synthetic Vision for Autonomous All-Weather

Approach and Landing

Bernd Korn, Sven Schmerwitz, Bernd Lorenz, and Hans-Ullrich Döhler

German Aerospace Center—DLR

This article is about the design and evaluation of the human–machine interface of an enhanced and synthetic vision system. Several tunnel-in-the-sky and pathway-in-the-sky concepts including scene-linking elements have been evaluated. Pathway-in-the-sky displays enable pilots to accurately fly difficult trajectories. However, these displays may drive pilots' attention to the aircraft guidance task at the expense of other tasks, particularly when the pathway display is located head-down. A pathway head-up display (HUD) might be a viable solution to overcome this disadvantage. Moreover, the pathway might mitigate the perceptual segregation between the static near domain and the dynamic far domain and hence might improve attention switching between both sources. To more comprehensively overcome the perceptual near-to-far domain disconnect, alphanumeric symbols could be attached to the pathway, leading to an HUD design concept called scene linking. A scene-linked pathway-predictor concept was implemented on a monocular retinal scanning head-mounted device (HMD) in combination with an optical head tracker. The evaluation comprises low-fidelity part-task simulations, high-fidelity simulator runs, and flight trials. Where laboratory experiments found evidence in favor of scene-linked pathway HUDs or HMDs, the real flight tests could not fully support this display concept. Even so, in all studies evidence has been found that the head-up pathway concept could be superior to current head-up solutions.

THE INTERNATIONAL JOURNAL OF AVIATION PSYCHOLOGY, 19(1), 49–72. Copyright © 2009 Taylor & Francis Group, LLC. ISSN: 1050-8414 print / 1532-7108 online. DOI: 10.1080/10508410802597408

Correspondence should be sent to Bernd Korn, German Aerospace Center—DLR, Institute of Flight Guidance, Lilienthalplatz 7, D-38108 Braunschweig, Germany. E-mail: [email protected]

Adverse weather conditions affect flight safety as well as the efficiency of airport operations. The problem is obvious in critical flight phases such as approach, landing, takeoff, and taxiing, in which the reduced visual range affects pilots' situation awareness (SA) and increases the separation distance between approaching aircraft for safety reasons. Consequently, runway capacity decreases and delays increase. Thus, even at well-equipped airports (with Instrument Landing System Category III, or ILS CAT III systems), runway capacity is dramatically reduced under low-visibility conditions. There is a high demand to develop systems and procedures that allow equivalent visual operations. Enhanced vision systems (EVS) and synthetic vision systems (SVS) are currently being developed to bridge this gap between good and low-visibility conditions. The basic idea is to provide the crew with an image illustrating how the real world outside the cockpit really looks. This would allow visual meteorological conditions (VMC) operations under instrument meteorological conditions (IMC). EVS relies on weather-penetrating, forward-looking sensors that augment the naturally existing visual cues and provide a real-time image of prominent topographical objects that can be clearly distinguished and identified by the pilot. Operational benefit is already provided to operators using EVS. Under the currently existing (Federal Aviation Administration [FAA]) or proposed (Joint Aviation Authorities/European Aviation Safety Agency [JAA/EASA]) rules for the use of EVS in combination with operational benefits (FAA, 2004, or the respective JAA/EASA proposal published in NPA OPS 41 Subpart E All Weather Operations; EASA, 2006), pilots are allowed to continue their descent below the decision height (DH) or minimum descent altitude (MDA) if the required features of the runway are unambiguously visible in the EVS image.

It can easily be seen that the performance of the EVS is strongly dependent on the selection of imaging sensors. At the DH (or MDA) of the flown approach procedure, the pilot has to use the EVS as primary input to continue the approach down to 100 ft, after which visual contact with the runway has to be established. An important topic for integrating new visual sensors into existing cockpit environments concerns the question of how to display the acquired images, visual cues, or both. An obvious method for showing this information is a simple overlay onto the head-up display (HUD). Due to its simplicity, this method has been applied in several projects in the past. It is then the task of the pilot to analyze the sensor data and derive the correct guidance cues. However, this could become a rather demanding task, especially if millimeter-wave (MMW) radar sensors are used, as they offer far better weather penetration than the infrared (IR) sensors that are currently used in commercial systems.

The EVS concept has to be distinguished from the synthetic vision concept, which likewise aims at improving the visual SA of pilots. SVS images of the external scene topography are generated using terrain databases only, which are transformed according to the aircraft's own state vector and are then displayed to the flight crew. SVS images can be further enhanced by including additional guidance symbology. Although the synthetic images are crisp and clearly understandable for the pilot, they suffer from a lack of reliability. Databases might not be up to date, and obstacles usually are not modeled and therefore not displayed. Another drawback is the complete dependence of their accuracy on the navigation solution. Even with advanced satellite navigation solutions (i.e., correct information on the location of the aircraft), 100% integrity of the navigation cannot be guaranteed.

A combination of both concepts would probably be the best solution. In the Rockwell-Collins SE-Vision program, which was finished with several demonstration flights on a Boeing 727 in 2005, a transparent inset method was investigated. The German Aerospace Center (DLR) participated in this project, with partners such as the FAA, Rockwell-Collins, and Max-Viz. The objective of the DLR contribution was to demonstrate a more integrated way of overlaying the information from two different IR cameras, to reduce cluttering of the HUD, and to provide a much better "look through," so that pilots can clearly recognize the outside world shortly before the final touchdown. This approach is based on an automatic analysis of the sensor images to extract the relevant information out of the images and present the results to the pilots on a higher level. It also paves the way to combine EVS technology with well-known guidance techniques like tunnel-in-the-sky or further synthetic vision elements.

Within its research project ADVISE-PRO (Advanced Visual System for Situation Awareness Enhancement—Prototype, completed in 2006), DLR has combined elements of enhanced vision and synthetic vision into one integrated system to allow low-visibility operations independently of the infrastructure on the ground. The core element of this system is the effective fusion of all information that is available on board (see Korn, 2007, for a complete overview). The synthetic vision part is supported by the analysis of weather-penetrating imaging sensors. The necessary verification of navigation and airport database (integrity monitoring) is obtained by the additional use of an MMW radar sensor (Pirkl & Tospann, 1997) and a long-wave IR camera. Significant structures, like the runway itself, are extracted from the sensor data (automatically by means of "machine vision") and checked as to whether they match with both the navigation data and database information (Döhler & Korn, 2003, 2006; Korn, 2005; Korn, Döhler, & Hecker, 2000; Korn & Hecker, 2002a, 2002b). Furthermore, the sensor images are analyzed to detect obstacles on the runway (Korn, Döhler, & Hecker, 2001). The developed radar-based navigation functions are very robust and accurate in terms of lateral guidance. Already 10 nm before the threshold, a two-dimensional (2D) position accuracy of up to 10 m (position of the aircraft relative to the runway threshold) is achieved. In more than 100 approaches to different runways in Germany, it has been demonstrated that this accuracy improves to 2 m during the last 1,000 m of the approach (Korn et al., 2001). For a reliable calculation of the vertical component of the aircraft's position relative to the runway, the result of the radar-image-based navigation has to be fused with the barometric altitude and the radar altimeter (Korn et al., 2000).
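The article describes this vertical fusion only at a high level (the full algorithm is in Korn et al., 2000). Purely as an illustration, a simple inverse-variance weighting of the three height sources might look like the following Python sketch; the weights, the function name, and the assumption that all three estimates are expressed as height above the runway threshold are ours, not the authors'.

```python
# Illustrative only: fuse the vertical component from radar-image-based navigation
# with barometric altitude and the radar altimeter by inverse-variance weighting.
# The variances/weights are assumed, not taken from Korn et al. (2000).

def fuse_height(radar_nav_height_ft, baro_height_ft, radio_alt_ft,
                var_radar_nav=400.0, var_baro=900.0, var_radio=25.0):
    estimates = [radar_nav_height_ft, baro_height_ft, radio_alt_ft]
    weights = [1.0 / var_radar_nav, 1.0 / var_baro, 1.0 / var_radio]
    # Weighted average: more trustworthy sources (smaller variance) count more.
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Example: three height-above-threshold estimates in feet during short final.
print(f"fused height = {fuse_height(205.0, 230.0, 198.0):.1f} ft")
```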

Once the resulting navigation data are verified by the various subsystems, reliable and precise guidance can be delivered to the human–machine interface (HMI), which can then use SVS elements for displaying intuitive guidance cues for the final approach segments without displaying noisy radar images. This can be realized on an HUD, a helmet-mounted display (HMD), or a head-down display (HDD). Even without having a display with radar or IR images overlaid, the pilot can be assured that what is displayed to him or her has been cross-checked by sensors sensing the real world.

Within ADVISE-PRO, special emphasis was placed on the design and validation of the HMI. The automatic analysis of the sensor data and the combination of extracted information with synthetic vision allow for a completely sensor-independent design of the HMI. It can be purely synthetic, but has the advantage of the displayed information being cross-checked with the real world by enhanced vision technologies. The main focus of this article is the HMI design and its validation in part-task simulations and in flight trials.

THE HUMAN–MACHINE INTERFACE OF THE ADVISE-PRO SYSTEM

As described earlier, the sensor data analysis processes and the fusion of their results with database information and onboard navigation data lead to a consistent and accurate description of the outside world. Thus, there is no need to display raw sensor images to pilots for them to analyze to get sufficient SA to accomplish their task. All advantages of SVSs can now be used to show those guidance symbols that can best be utilized to land the aircraft in low-visibility conditions.

Synthetic vision driven by database information has been under development in various concepts for quite some time and in most cases is presented head-down. From those concepts, the pathway predictor guidance has been found to be a promising display concept for the alleviation of these problems (Mulder, 2003; Prinzel, Arthur, Kramer, & Bailey, 2004; Williams, 2002). Pathway displays increase the flight path awareness of pilots, enabling them to fly difficult (e.g., curved) trajectories with high accuracy (Grünwald, 1996; Haskell & Wickens, 1993; Wickens et al., 2004). However, these displays may drive the pilot's attention head-down at the expense of monitoring the outside scene (Flemisch & Onken, 2000; Wickens & Alexander, in press). This performance deficit is referred to as attention fixation or attention capture (Wickens, 2005).

One way to overcome the disadvantage of increased head-down times associated with head-down pathway displays is to present pathway guidance symbology on an HUD or HMD (Kramer, Bailey, & Prinzel, in press). Both types of displays present flight guidance imagery in the pilot's forward field of view using collimating optics either attached to the airframe in the case of an HUD or mounted to the pilot's head in the case of an HMD (see Arthur et al., in press). HUDs and HMDs help to considerably reduce the visual scanning load and the need to accommodate the eyes to different focal depths when switching attention between instrument information and the outside view. As with head-down pathway guidance, attention fixation problems with the use of HUD technology were observed consistently in a number of studies that performed simulated approach and landing trials and assessed the pilots' response to an unexpected obstacle located on the runway on which they were cleared to land (Fadden, Wickens, & Ververs, 2000; Lorenz, Többen, & Schmerwitz, 2005; Martin-Emerson & Wickens, 1997; Wickens & Long, 1995). Pilots using an HUD either more often failed to initiate a missed approach or took significantly longer in comparison to using traditional HDDs. McCann, Foyle, and Johnston (1993) identified differential motion between the HUD imagery (static domain) and structures of the outside environment (dynamic domain) as the crucial element that promotes both information sources being visually segregated as separate object domains during processing. Their study used a cue–target visual search experimental paradigm and revealed that attention switching between cue and target was more efficient when both stimuli belonged to the same rather than to a different domain. This leads to the central claim that domain segregation induced by differential motion is the source of attention fixation in the use of HUDs (Prinzel, 2004).

The concept of scene linking is based on this claim. Central to this concept is the attempt to mitigate HUD-induced attention fixation by the creation of common motion between objects of the far domain and the near-domain HUD imagery. In fact, Sheldon, Foyle, and McCann (1997) demonstrated a performance advantage of scene linking in a low-fidelity flight task. Fadden et al. (2000) confirmed this benefit in support of low-fidelity airport surface movements by demonstrating that a conformal overlay of a pathway over the true desired ground track presented on an HUD along with scene-linked digital instrumentation (i.e., symbology that exhibits common motion with the outside scene) improved unexpected event detection as compared to the traditional static superimposition of this information. Fadden et al. (2000) also implemented scene-linked HUD symbology in support of in-flight guidance. To achieve this, they attached altitude and airspeed readings to the moving symbology elements of the pathway. Although the pathway has no physical counterpart in the real world on which it can be overlaid, it can be argued that the pathway itself represents a virtual referent to the outside scene, providing a suitable means to create common motion. However, Fadden et al. (2000) did not specifically examine the effectiveness of such a scene-linked pathway. Rather, they studied the effect of display location, comparing a head-down with an HUD location of this display. They found a weak and insignificant superiority in unexpected event detection (runway incursion during landing) favoring the head-down location.

A pathway guidance display concept using scene linking has been developed by our research team for a monocular HMD (Schmerwitz, Lorenz, & Többen, 2006). Its feasibility was tested during a series of high-fidelity simulated approach and landing scenarios in which 18 pilots completed four curved approaches also involving unexpected runway obstacles to be detected on landing (Lorenz et al., 2005). The implementation of scene linking was done in a similar way to that of Fadden et al. (2000), and a significantly delayed runway incursion detection using the HMD was observed when compared to using standard ILS guidance located head-down. However, that study is not conclusive enough to reject the idea of using scene linking in airborne pathway guidance. During the high-fidelity experiments we did not directly assess whether the scene-linked symbology dominates compared to symbology having a fixed location on the HUD screen.

To make this assessment, a low-fidelity PC-based simulation task was developed (Schmerwitz et al., 2006). The participating pilots completed a series of simulated low-altitude flights through mountainous terrain supported by pathway guidance and were instructed to detect hostile surface-to-air missile (SAM) stations hidden in the outside terrain. Simultaneously, the first real flight test began, focusing mainly on the usability of the HMD onboard a Do228. The tests consisted of two different sets. One was a local pattern flight scenario using the HMD versus terrestrial navigation; the other was an area instrumental flight using the HMD versus VOR/NDB/ILS navigation (Schmerwitz et al., 2006). With the findings of the prior experiments the display was modified and tested in a second real flight test, further exploring the feasibility of the monocular HMD. The first flight test revealed difficulties of the pilots at segment transitions. Therefore, implementation changes of the pathway symbology were combined with predictor–director guidance. The enhanced display used less of the scene-linking concept, but was intended to improve segment transitions, track reintercepting, and readability of primary flight information (PFI). The scenario was a closed virtual departure, cruise, and approach task. It was designed to generate a high workload with several different climbs, descents, and curved segments. The navigational aids were either RNAV-GPS or the pathway predictor–director HMD (Schmerwitz, Többen, Lorenz, & Korn, 2007). These four briefly described experiments range in fidelity from laboratory study through flight simulation to aircraft testing. Within these, two main issues were investigated: comparing an HMD pathway display with standard head-down navigation and comparing the scene-linked symbology sets with more traditional symbology. Table 1 provides an overview of the four experiments that are the focus of the research described in this article. In the following sections, the results of the experiments are presented, beginning with a description of the HMD display concept.

PATHWAY PREDICTOR GUIDANCE CONCEPT

Pathway

The pathway-in-the-sky symbol consists of two different representations, as shown in Figure 1. For better printability, all images in this article are inverted. Therefore, white regions in the figures are transparent and the gray levels represent red levels on the NOMAD HMD display. For the area in the vicinity of the aircraft, the pathway consists of horizontal bars, like cross-ties, that are bent up at each end, providing good horizontal guidance but less precise vertical guidance. The aim of the design was to achieve a good balance between precision requirements and demands on visual attention resources rather than to provide the highest possible precision cue.

TABLE 1
Experimental Overview

          | Fixed-Base Simulation                                | Laboratory Study                 | First Flight Test                          | Second Flight Test
Type      | High-fidelity simulation                             | Low-fidelity laptop experiment   | Real flight test                           | Real flight test
Task      | Standard approach, unexpected scene event detection  | Synthetic part task              | Standard visual and instrumental pattern   | Instrumental high-workload pattern
Task load | Normal                                               | Balanced                         | Normal                                     | High
Guidance  | HMD pathway predictor vs. VOR/ILS HDD                | Fixed PFI vs. scene-linked PFI   | HMD pathway predictor vs. NDB/VOR/ILS HDD  | HMD director predictor vs. GPS RNAV
Focus     | Conceptual scene-linking approach                    | Division of attention            | Proof of concept                           | Segment transition, guidance cues

Note. HMD = head-mounted display; VOR/ILS = Very High Frequency Omnidirectional Radio Range/Instrument Landing System; HDD = head-down display; PFI = primary flight information; NDB/VOR/ILS = Non-Directional Beacon/Very High Frequency Omnidirectional Radio Range/Instrument Landing System; GPS = global positioning system; RNAV = required navigation performance.

FIGURE 1 Pathway display with simple bars in the vicinity and arrows at larger distances.

The cross-ties span 267 ft horizontally and have an average spacing of 1,500 ft. At distances larger than 2.5 nm the trajectory is represented by tunnel symbols. When these pathway symbols are viewed from the side they appear similar to an arrow pointing in the flight direction. This is done to provide early cues for the pilot about upcoming curves. At distances larger than 5 nm the pathway symbols are darkened to reduce clutter. The trajectories represented by the pathway were generated using the 4D flight management system (Czerlitzki, 1994).
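As a rough illustration of the distance-dependent symbol selection just described, the following Python sketch chooses a symbol style and brightness per pathway point. Only the distance thresholds come from the text; the style names, data structure, and the 40% dimming factor are assumptions.

```python
# Illustrative sketch only: the article does not give the pathway-generation code.
# Thresholds from the text: cross-ties up to 2.5 nm, tunnel/arrow symbols from
# 2.5 to 5 nm, dimmed symbols beyond 5 nm.

NM_TO_FT = 6076.12

def pathway_symbols(trajectory_points, dist_along_track_ft):
    """Choose a symbol style for each pathway point based on its distance ahead.

    trajectory_points: list of (x, y, z) gate centers along the reference track.
    dist_along_track_ft: matching list of distances ahead of the aircraft, in feet.
    """
    symbols = []
    for point, dist in zip(trajectory_points, dist_along_track_ft):
        if dist < 0:
            continue  # behind the aircraft, nothing is drawn
        if dist <= 2.5 * NM_TO_FT:
            style, brightness = "cross_tie", 1.0      # bent-up horizontal bar, 267 ft wide
        elif dist <= 5.0 * NM_TO_FT:
            style, brightness = "tunnel_arrow", 1.0   # tunnel symbol, arrow-like from the side
        else:
            style, brightness = "tunnel_arrow", 0.4   # darkened to reduce clutter
        symbols.append({"position": point, "style": style, "brightness": brightness})
    return symbols
```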

Predictor

A basic element of the display is the predictor. A stylized aircraft symbol presents the expected position of the aircraft 10 sec into the future (see also Theunissen, Roefs, & Etherington, this issue). The calculation of this prediction is based on a simple and known formula for horizontal curves, and the present climb angle and the wind vector are included. The predicted flight trajectory is represented by a thin line between the aircraft nose and the predictor. This line allows easier detection of the predictor symbol and visualizes the current aircraft's velocity vector and turn rate. When flying a curve, the appearance of the predictor smoothly changes into a three-dimensional (3D) symbol (see Figure 2).
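The article states only that the 10-sec prediction uses a simple, well-known formula for horizontal curves plus the current climb angle and wind vector. A generic kinematic reconstruction under those assumptions could look like the sketch below (coordinated-turn rate from bank angle, circular-arc displacement, wind drift added linearly); it is not the authors' implementation.

```python
import math

G = 9.81  # m/s^2

def predict_position(v_tas, bank_rad, climb_angle_rad, heading_rad, wind_ne, t=10.0):
    """Predicted displacement t seconds ahead under a constant-bank, constant-speed turn.

    Assumed reconstruction, not the authors' formula. Returns (north, east, up)
    displacement in meters relative to the current position; wind_ne is the
    (north, east) wind vector in m/s.
    """
    omega = G * math.tan(bank_rad) / v_tas   # coordinated-turn rate (rad/s)
    if abs(omega) < 1e-6:                    # wings level: straight-line prediction
        along, cross = v_tas * t, 0.0
    else:                                    # circular-arc prediction
        radius = v_tas / omega
        along = radius * math.sin(omega * t)
        cross = radius * (1.0 - math.cos(omega * t))
    # Rotate the track-frame arc into north/east coordinates and add wind drift.
    north = along * math.cos(heading_rad) - cross * math.sin(heading_rad) + wind_ne[0] * t
    east = along * math.sin(heading_rad) + cross * math.cos(heading_rad) + wind_ne[1] * t
    up = v_tas * math.sin(climb_angle_rad) * t
    return north, east, up
```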

Primary Flight Information

In an attempt to apply the technique of scene linking developed by Foyle, McCann, and Sheldon (1995), the presentation of PFI was modified. Attaching numerical values of the aircraft's speed and altitude to a pathway winglet made this information appear as virtual elements of the outside environment. Figure 2 shows the pathway with attached airspeed and barometric height values. Small arrows inside the tunnel represent selected speed and height, and the arrows outside the tunnel present the actual values. Thin bars outside the tunnel provide information about acceleration and climb rate. As soon as the aircraft passes through a pathway symbol with the attached PFI, this information is presented anew at a larger distance, typically at a position three pathway symbols ahead, depending on the distance to the aircraft. The new pathway symbol is faded in softly over a period of 1 sec, during which two PFI-linked pathway symbols are visible. The pilots needed some training to get used to this dynamically changing display, but after a short training session they were able to synchronize their scanning pattern with the display. Figure 2 shows the on-track situation with the pilot's head directed bore-sight. When the head was directed off-bore-sight and the pathway got out of the HMD's field of view (e.g., when the pilot looked out of the left or right cockpit window), round-dial PFI imagery was presented, as shown in Figure 3.

FIGURE 2 Pathway display with predictor and airspeed and barometric height attached to a gate.
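The re-attachment and fade-in behavior described above can be sketched as follows; the class structure and frame-update interface are assumed for illustration, and only the 1-sec fade and the "three symbols ahead" rule come from the text.

```python
# Assumed logic, not the authors' implementation: when the aircraft passes the
# gate that currently carries the airspeed/altitude readings, the readings jump
# to a gate a few symbols ahead and are cross-faded in over one second.

FADE_TIME_S = 1.0
GATES_AHEAD = 3

class SceneLinkedPFI:
    def __init__(self):
        self.current_gate = 0             # index of the gate carrying the PFI readout
        self.fade_elapsed = FADE_TIME_S   # start fully faded in

    def update(self, passed_gate_index, dt):
        """Call every display frame with the index of the last gate passed."""
        if passed_gate_index >= self.current_gate:
            # Aircraft flew through the PFI gate: move the readout ahead, restart the fade.
            self.current_gate = passed_gate_index + GATES_AHEAD
            self.fade_elapsed = 0.0
        self.fade_elapsed = min(self.fade_elapsed + dt, FADE_TIME_S)
        # Opacity of the newly attached readout, 0.0 .. 1.0 over one second.
        return self.fade_elapsed / FADE_TIME_S
```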

Short Final

During the short final, the perspective view of the runway provides essential visual guidance cues for the pilot to keep the aircraft on the glide path in VMC. The conformal overlay with a synthetic runway symbology based on sensor data like Forward Looking Infrared (FLIR) or Millimeter Wave Radar (MMWR) provides this kind of information when the true runway might not yet be visible. This facilitates the change from instrument to visual flight rules and improves the detection of the real runway under adverse weather conditions. To provide a good view of the runway, enabling the pilot to detect the runway itself or a possible obstacle on the runway as early as possible, symbols change in closer runway proximity. On short final, the presented information is reduced in the center of the display and shown to the side.

FIGURE 3 Round-dial primary flight information when looking away from the track.

Figure 4 shows the display's content at a distance of 1.3 nm to the runway. The runway border lines become thinner as the aircraft approaches the airport. The shape of the pathway is reduced to four small corner symbols. These gates shrink from 200 ft × 267 ft at 1.0 nm down to 164 ft × 33 ft above the threshold. The flare maneuver was supported by aiming point marks outside the virtual runway border lines, and a desired touchdown point is displayed as a better cue for the flare maneuver. Wind speed and direction were represented by a symbol set in the lower left corner. Flaps and gear settings were indicated by symbols in the lower right corner of the display, along with setting indicators like thrust, bore-sight symbol, horizon, heading and bank indicator, and others.

FIGURE 4 Pathway display during final approach.

FIXED-BASE EXPERIMENTAL STUDY

Method

This experiment, designed to compare the HMD conformal pathway with conventional head-down guidance, was conducted in DLR's generic cockpit simulator. The cockpit is an A320 mock-up equipped with a 180° by 40° collimated vision system. The display was evaluated with 18 participants. None of the candidates had used an HMD or an HUD before. After a 3-hr training session they had to perform four curved approaches from the west and four curved approaches from the east to Zurich airport Runway 16 under adverse weather conditions with moderate fog and moderate wind. Two approaches from each side were flown with a classic head-down navigation display and two with the HMD. The crew management tasks were reduced to a minimum so that the participants were able to concentrate on flight path tracking. A runway incursion by another aircraft was induced both for head-down and for head-mounted navigation. The incursion occurred when the aircraft passed 1.4 nm toward the runway. A Boeing 747 was taxiing onto the runway on which the candidate was cleared to land. The 1.4 nm distance was also the distance at which the intruding aircraft could first be noticed through the fog. The participants were split into two different groups to counterbalance the unexpected scene event. One group experienced the incursion first while using the HMD and then while navigating head-down; the other group experienced it in the opposite order. After each trial, the NASA Task Load Index (NASA–TLX; Hart & Staveland, 1988) was used to collect subjective data about workload, and at the end of the trials the Situational Awareness Rating Technique (SART; Taylor, 1990) was used to collect subjective data on SA. A more detailed description of the scenario and the results can be found in Lorenz et al. (2005) and Többen, Lorenz, and Schmerwitz (2005).

Results

The measured slope distance (DME) at which the go-around was initiated (full thrust on both engines) was submitted to a 2 (type of flight guidance) × 2 (sequence group) split-plot analysis of variance (ANOVA). The between-subjects factor sequence group accounts for differences in event detection caused by whether pilots encountered this event for the first or for the second time in the respective condition of flight guidance. Group 1 encountered the event first with pathway HMD guidance and then with ILS guidance; Group 2 encountered them in the opposite order. The ANOVA revealed a significant main effect of type of flight guidance, F(1, 16) = 6.82, p = .019. The event was detected sooner with ILS guidance (average DME of 0.39 nm) than with pathway HMD guidance (average DME of 0.30 nm). Figure 5 shows the measured DME distances. Graphs connected by a dashed or dotted line represent the first or second time, respectively, that the event was encountered. Note that high values indicate superior performance, because the runway incursion was detected earlier and therefore at a larger distance to the threshold. The second incursion was detected earlier by all test pilots in all cases, as the pilots were apparently better prepared. The effect of the different displays was thus mixed with a sequence effect. Nevertheless, the pilots using the standard ILS display detected the event approximately 2 sec earlier than the pilots with the head-mounted pathway display.
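For readers who want to reproduce this kind of analysis, a 2 × 2 split-plot (mixed) ANOVA can be run with common Python tooling such as the pingouin package; the sketch below uses invented placeholder values, not the study's data.

```python
# Hedged illustration of a split-plot ANOVA with one within-subject factor
# (guidance) and one between-subject factor (sequence group). Placeholder data.
import pandas as pd
import pingouin as pg

# Long format: one row per pilot and guidance condition (placeholder DME values in nm).
df = pd.DataFrame({
    "pilot":    [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":    ["HMD_first"] * 6 + ["ILS_first"] * 6,
    "guidance": ["HMD", "ILS"] * 6,
    "dme_nm":   [0.28, 0.37, 0.31, 0.40, 0.33, 0.41,
                 0.29, 0.38, 0.27, 0.36, 0.32, 0.42],
})

aov = pg.mixed_anova(data=df, dv="dme_nm", within="guidance",
                     subject="pilot", between="group")
print(aov)
```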

A pilot subjective rating was calculated for both conditions of flight guidance by averaging the ratings derived from the four respective scenarios. For the NASA–TLX mental workload rating this was done for the untransformed ratings averaged across all six subscales. For the SART this was done for the global SA score derived from the three SART subscales (SART-Demand, SART-Supply, and SART-Understanding) according to the simple formula SA global score = Understanding − (Demand − Supply) (EUROCONTROL, 2003). The data were analyzed by paired-sample t tests. This analysis revealed that there was no difference in subjective workload between the two types of flight guidance, t(17) = −1.31, p = .21. Subjective SA, however, was rated significantly higher when ILS guidance was provided, t(17) = 2.12, p = .05. Inspection of the source of this effect revealed that lower SA ratings were obtained in the scenario involving the unexpected runway incursion event. A t test contrasting the type of guidance only for nominal scenarios was not significant, t(17) = −1.56, p = .14. Contrasting the two runway incursion trials, however, revealed a significant SA advantage of the ILS guidance, t(17) = −3.01, p < .01.
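The SART combination rule and the paired-sample t tests reported above are straightforward to compute; the following sketch illustrates both with placeholder ratings (not the study's data), using SciPy's paired t test.

```python
# Placeholder illustration of the analysis described above; the numbers are
# invented, not the study's data. Requires numpy and scipy.
import numpy as np
from scipy import stats

def sart_global(understanding, demand, supply):
    # SA global score = Understanding - (Demand - Supply) (EUROCONTROL, 2003)
    return understanding - (demand - supply)

# One SART score per pilot and guidance condition (hypothetical values).
sa_ils = sart_global(np.array([5.0, 6.0, 5.5]), np.array([4.0, 3.5, 4.5]), np.array([5.0, 5.5, 4.0]))
sa_hmd = sart_global(np.array([4.5, 5.0, 5.0]), np.array([4.5, 4.0, 5.0]), np.array([4.5, 5.0, 3.5]))

# Paired-sample t test across the same pilots in both conditions.
t_stat, p_value = stats.ttest_rel(sa_ils, sa_hmd)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```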

Figure 6 shows the breakthrough points (i.e., error distributions) at some segment transitions of all tracks from all pilots. The diagram in the upper left corner shows the situation shortly (0.5 nm) after the beginning of each trial (intermediate segment). The upper right diagram displays the beginning of the curved segment to intercept the localizer, and the lower two diagrams show the glide slope intercept. Circles show the segment transitions using the HDD and crosses mark segment transitions using the HMD, relative to the optimum location, perpendicular to and along the track. The deviation using the HDD is in some cases out of the limits. This might be explained by the fact that the participants were young pilots who had just received their pilot licenses and lacked extensive experience (an average of less than 30 multi-cockpit hours and no fly-by-wire hours).

FIGURE 5 Average measured slope distance values on initiating a go-around to avoid a runway incursion.

FIGURE 6 Distribution of piercing points of the flight tracks at flight segment borders.

Not surprisingly, the overall performance regarding flight path tracking was much better with the HMD than with the HDDs. The horizontal accuracy was up to 10 times better with the HMD than with the head-down instruments. The vertical precision was 3 times better, and keeping the aircraft at the right speed was accomplished 30% better than with the HDD. The lower performance with the HDD led to two unprovoked missed approaches; no unprovoked missed approach occurred with the HMD.

LABORATORY EXPERIMENT

Method

Design and participants. To further study the effectiveness of the concept of scene linking, a fully crossed 2 (fixed-location vs. scene-linked) × 2 (manual vs. automated pathway following) within-participants design was used to investigate the effects of symbology linking on three tasks:

1. Lateral and vertical pathway following.
2. Detection and control of commanded airspeed changes (display event).
3. Detection and discrimination of hostile from friendly SAM stations (scene event).

A series of four trials was repeated for two different flight path trajectories, resulting in a total of eight trials completed by each participant. In some of the analyses flight path trajectory was treated as an additional independent factor. In automated pathway following an autopilot flew the aircraft through the tunnel and the participants completed only the two event detection tasks. Fourteen pilots from different civil aviation companies participated in the experiment. Thirteen participants were male and one was female. Their age averaged 39 years and they had an average flight time experience of 4,540 hr. Some of the pilots were familiar with head-up guidance.

Apparatus and symbology. The experimental task was presented on a laptop PC. The pilots controlled the pitch and roll axes of the tunnel following task with a joystick. Both axes were decoupled; thus, unlike a real aircraft, roll input had no impact on pitch, and pitch input while banked did not increase turn rate. The dynamics of both axes were of a simple first-order rate control without any external disturbance input. To still simulate appropriate aircraft behavior, the maximum angles of both axes were limited to ±30° roll and ±10° pitch. Speed was set constant and could not be changed by the pilots. The background (far domain) was represented by a simple perspective checkerboard terrain, allowing depth interpretation by changes in size, angles, and color hue of the checkerboard. The terrain was mountainous to introduce more realism. The color of the terrain was light and dark green, and the PFI and pathway were orange. The outside scene event was triggered by the appearance of a pyramid in the terrain (event marker), naturally increasing in size while the aircraft approached the pyramid location (see Figure 7a and 7b). Green pyramids were targets (hostile SAMs) and gray pyramids were distractors (friendly SAMs). Participants were instructed to press the joystick trigger as soon as they detected a hostile target. The frequency of appearance was controlled by four different look-up tables, each providing 10 events with five targets and five distractors. The display event was introduced by changes in the commanded airspeed represented by a digital reading. These changes were quasi-random in amplitude and frequency and occurred on average every 8 sec in a reproducible way. Left of the commanded airspeed reading the actual reading was displayed. Participants were instructed to move the slider at the left-backward side of the joystick as fast and as accurately as possible to match commanded with actual speed readings. Participants were informed that the changes in these parameters did not actually change the speed of the aircraft.
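A minimal sketch of the simplified control dynamics described above (decoupled first-order rate control with ±30° roll and ±10° pitch limits) is given below; the rate gains and the 50 Hz update rate are assumptions, as the article does not state them.

```python
DT = 0.02                 # assumed 50 Hz update rate
ROLL_RATE_GAIN = 30.0     # assumed deg/s of roll rate per full stick deflection
PITCH_RATE_GAIN = 10.0    # assumed deg/s of pitch rate per full stick deflection

def step_attitude(roll_deg, pitch_deg, stick_x, stick_y):
    """One simulation step of the decoupled first-order rate control.

    stick_x, stick_y are joystick deflections in [-1, 1]; roll input has no
    effect on pitch and vice versa, unlike a real aircraft.
    """
    roll_deg += ROLL_RATE_GAIN * stick_x * DT
    pitch_deg += PITCH_RATE_GAIN * stick_y * DT
    # Attitude limits quoted in the text.
    roll_deg = max(-30.0, min(30.0, roll_deg))
    pitch_deg = max(-10.0, min(10.0, pitch_deg))
    return roll_deg, pitch_deg
```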

No further PFI (altitude, bore-sight symbol, predictor, or follow-me aircraft) was presented. Thus, the pilots had to use the tunnel as the only information to guide the aircraft along the desired flight path. Pathway symbology was implemented by spherical arcs to amplify the 3D impression of the display (see Figure 7). The concept of minimizing the guidance cues, especially excluding the bore-sight symbol, might have introduced a systematic difference in the guidance that unfortunately remained unnoticed until the evaluation. Steering the aircraft along the tunnel without any reference symbol leads to the effect that the screen frame becomes the reference. This effect is further described in the Results section. The PFI readings needed for the display event task were either presented with a fixed location and size (fixed display condition; see Figure 7a and 7c) or were delivered via a scene-linked gate attached to the pathway, as shown in Figure 7b and 7d. This gate is world-referenced; the aircraft moves toward the gate, and when passing through it a new gate reappears at some distance ahead along the pathway.

Simulation scenario and procedure. The experimental session began with a detailed briefing followed by approximately 20 min of training to familiarize the participants with the controls, symbology, control dynamics, and overall tasks.

FIGURE 7 Primary flight information, fixed (left) and scene linked (right) with scene-event marker (upper).

Table 2 shows the variation of the scenarios as well as the tasks to be fulfilled. Four trials were flown manually (triple task) and the other four with the autopilot (dual task). Each trial lasted about 5 min, separated by 3-min breaks. The order of the eight trials was counterbalanced across participants. On completion of the experimental task, subjective data on perceived mental workload and situation awareness were collected using the NASA–TLX (Hart & Staveland, 1988) and a reduced SART (Taylor, 1990). Finally, pilots' comments regarding the display concepts and experimental setup were collected by means of a debriefing questionnaire.

Results

Repeated-measures ANOVAs were used to analyze most of the performance data of the experiment.

Display event detection and adjustment. The task was treated as a zero-order step tracking task, and performance was scored by calculating average root mean square error (RMSE) values. Performance in the autopilot condition was better than that in the manual tracking condition, F(1, 13) = 53.47, p < .001. There was no main effect of symbology linking, but the significant interaction between symbology and workload, F(1, 13) = 5.18, p = .04, revealed that symbology had no effect in the autopilot condition, but that in the manual condition scene linking produced a 7.2% cost in RMSE for the scene-linked symbology (cost = scene-linked / fixed − 1 [%]).

Scene event detection. Average detection times, misses, and false alarms in response to the scene events were derived for each trial. The analysis revealed no effect of workload but a significant 24.2% average benefit to reaction time (RT) of scene-linked (RT = 1.197 sec) over fixed (RT = 1.580 sec) symbology (benefit = −(scene-linked / fixed − 1) [%]). The false and missed alarms were analyzed but did not show significant effects.

Pathway following. Average RMSE values were computed for the lateral and vertical deviations in following the pathway. These data were derived from manual trials only. The ANOVA revealed a reliable cost for scene-linked over fixed symbology for both lateral and vertical tracking error: vertical, F(1, 13) = 8.351, p = .013, 151% cost; lateral, F(1, 13) = 6.866, p = .021, 87.6% cost (cost = scene-linked / fixed − 1 [%]).
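For clarity, the scoring used in these analyses (RMSE plus the relative cost/benefit metric) can be written out as a short sketch; the deviation traces below are placeholders, not the experiment's data.

```python
import numpy as np

def rmse(errors):
    # Root mean square error of a deviation trace (tracking error samples).
    return float(np.sqrt(np.mean(np.square(errors))))

def relative_cost(scene_linked_score, fixed_score):
    # cost = scene-linked / fixed - 1, expressed in percent; a negative cost is a benefit.
    return (scene_linked_score / fixed_score - 1.0) * 100.0

# Placeholder deviation traces for one pilot (not the study's data).
fixed_trace = np.array([12.0, -8.0, 5.0, -3.0, 9.0])
linked_trace = np.array([20.0, -15.0, 11.0, -6.0, 14.0])

print(f"cost = {relative_cost(rmse(linked_trace), rmse(fixed_trace)):+.1f}%")
```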

TABLE 2
Tasks and Variation of the Scenarios

Variation              | Object                              | Task
Manual vs. automatic   | Automation                          | Pathway following
Route A vs. Route B    | Trajectory/vicinity                 | Scene event detection
Fixed vs. scene-linked | Display/primary flight information  | Display event detection and adjustment

Questionnaires. Both questionnaires (SART as well as NASA–TLX) showed no significant effects between the display conditions. In the debriefing questionnaire, 9 of the 14 participants preferred the fixed PFI over the scene-linked PFI.

Discussion

The laboratory study was motivated by evidence for both a pathway-induced attention fixation problem due to the 3D compellingness of the pathway and an HUD-induced attention fixation problem due to a perceptual disconnect between near-domain and far-domain objects triggered by the differential motion cue. These findings raise the question of whether a combined pathway HUD amplifies attention fixation problems, with the effect of potentially offsetting the independently and likewise conclusively established benefits of both display concepts.

Scene linking is a rather new concept for the presentation of HUD symbology that offers a theoretically sound and practically promising means to mitigate HUD-induced attention fixation. The aim of the laboratory experiment, therefore, was to test the hypothesis that scene-linked HUD imagery reduces the division of attention between near-domain PFI information and far-domain outside scene monitoring. Such an advantage has been substantiated for ground operations (Fadden, Ververs, & Wickens, 2001). The problem of a lacking physical counterpart for a scene-linked overlay of PFI symbology while airborne could be overcome by using the pathway-in-the-sky symbology for this purpose. This reasoning assumes that the moving pathway elements provide an overall virtual-conformal referent to the outside world with which digital instrumentation could be linked to remove differential motion.

The experimental task was a multiple-task scenario that involved three subtasks among which the participants had to divide their attention. The results allow some straightforward interpretations. At low workload, there is a clear benefit of scene linking to event detection. The benefit is observed in noticing the far-domain events, and there is no cost to noticing near-domain events. At a higher workload (manual path following), scene linking again benefits far-domain event detection, with a 28.7% reduction in RT, but imposes a small (7.2%) cost to near-domain detection and response. This combination of effects suggests that scene linking at high workload draws attention outward toward the far domain, but at some lesser cost to near-domain monitoring. At the same time, scene linking appears to substantially disrupt flight path tracking, either (a) because attention is shifted outward away from the display necessary to support that tracking, or (b) because of a systematic implementation error. The latter might be founded on the missing bore-sight symbol. Lacking further guidance symbology such as a bore-sight symbol, a predictor, or a follow-me aircraft, the display reading was presumably used as a bore-sight cue when it had a fixed location. With scene linking there was no aircraft-referenced visual cue other than the center of the screen, so the screen frame preserved the reference that could support the assessment of aircraft position in relation to the pathway. The introduction of a bore-sight symbol in a modified laboratory task would help to clarify this issue. Until a follow-up experiment is conducted, we favor the second hypothesis and therefore regard the observed benefit of scene linking in the division of attention between the two event-detection tasks as supporting evidence for this display concept.

FIRST FLIGHT TEST

The experiment consisted of two different sets. The first was a local pattern flight using either visual terrestrial navigation or HMD guidance. The second compared the HMD with VOR/NDB/ILS guidance in an area instrumental flight. Five pilots from DLR's flight operations department, who are very familiar with and trained in the area's navigational procedures, participated in a total of eight trials.

The HMD used was a red monochrome monocular retinal scanning HMD. A Dornier 228 was modified for HMD testing, combining the HMD with an optical head tracker. The main navigational cue was the predictor pathway guidance already presented, but the onboard experimental system delivered a different data set. Therefore the symbology consisted only of true altitude, climb rate, pitch, bank, heading, and an artificial horizon with bearing information, as well as a bore-sight symbol and the predictor pathway guidance. Unfortunately, neither torque nor indicated airspeed could be presented; however, ground speed was presented as a supplement. Minimal but adequate configuration change information, together with position and transition information, was presented in discrete (alpha-)numeric callouts. The pathway presented in all HMD trials was ensured to lead along exactly the same track. However, altitudes flown after a true track (HMD) are harder to compare with altitudes from a barometric track (without HMD).

All pilots were first trained in using the HMD and were briefed on the experimental task. At the beginning there was a presentation of the HMD symbology, followed by a relatively short session (about 20 min) in the fixed-base simulator to get used to the HMD technology and the display symbology. After the experimental session, pilots were given a debriefing questionnaire. They were asked to rate and comment on aspects such as handling, usability, and workload, and they also participated in an interactive debriefing session with the experimenters.

Experimental data were analyzed in three categories: flight path accuracy, subjective questionnaire, and usability evaluation. In the visual trials, the precision analyses did not show as significant a difference between flight path following with and without the HMD as the fixed-base study findings would suggest. For the instrumental trials, the evaluation showed the expected significant differences in favor of HMD navigation. Compared with prior results from the fixed-base simulation (Lorenz et al., 2005), the advantage of HMD navigation in real flights was smaller. One possible reason might be that in the simulation the display had more functionality. In the fixed-base simulation, indicated airspeed and power setting could be presented, most likely leading to superior usability and somewhat negating the major HMD advantage of reducing head-down scanning during the flight test. This is consistent with the pilots' reports of difficulties following the HMD path due to the need for much more frequent configuration changes.

In addition to the precision analyses of straight segments, the handling qualities during the trials were investigated. The pilot's behavior in altitude tracking as well as his or her timing pattern of turning into and out of curves was investigated. Although most pilots steered adequately along the pathway, some missed the beginning of curves, climbs, or descents and left the pathway boundaries. It became obvious during the trials that the symbology did not provide proper guidance cues at certain segment transitions. Even when the pilots stayed on the pathway, their commands into curves, for example, were delayed or too early, and with an aggressive gain at times.

Pilots' comments regarding this showed that the unfamiliarity with the HMD and the effort needed to follow the pathway interfered with long-term planning and decision-making tasks at times. Questionnaires and comments also revealed that pilots had rather consistent feelings of discomfort with wearing the HMD and a concern regarding the smoothness of the HMD image during head movements. Thus, pilots found latency in the imagery in the case of head movement to be a major drawback.

Discussion

The flight tests were conducted to obtain findings specific to usability issues of the pathway HMD. Improved flight path performance caused by pathway predictor guidance was by and large preserved, which agrees with the prior finding with this HMD made in the fixed-base simulator (Többen et al., 2005). However, several system-related difficulties were found with a large impact on usability. Regarding the hardware, the following issues were raised: head tracker latency, readability problems because of insufficient luminance and contrast in bright ambient light, difficulties adjusting to different ambient light, wearing discomfort, disturbed peripheral vision due to the framing of the combiner, and difficult alignment of the combiner in front of the eye. The disruptive impact of image latencies is particularly high for world-referenced and hence moving HMD symbology. This is because image latencies generate frequent misalignments of conformal symbology with their referents. This problem poses a particular challenge for the implementation of the scene-linking concept. Therefore, the usability of the pathway-HMD symbology concept in general, and with regard to the scene-linking concept in particular, was difficult to evaluate facing the overwhelming impact of the observed hardware difficulties. Nevertheless, the evaluation provides evidence that pilots were not able to time segment transitions as demanded using the scene-linked PFI due to a lack of precise guiding cues at those points. To what degree this depends on the scene-linking concept used or was related to hardware cannot be ascertained.

Pathway Predictor–Director Guidance

The results of the prior experiments led to a redesign of the pathway HMD. In a second flight test the focus was on improving segment transitions as well as introducing a trajectory reintercept ability. A director was implemented that could also be a solution for better guidance during segment transitions. Integrating predictor–director guidance made it impossible to use world-referenced gates with PFI without generating further clutter problems. The scene-linking concept was reduced to displaying the PFI fixed to the location of the director. The director symbol provided horizontal and vertical guidance. The original scene-linked gates had a fixed, world-referenced position and provided adequate virtual depth perception. For the director, a lack of virtual depth perception was noticed; therefore the pathway-in-the-sky was changed to tunnel-in-the-sky symbology. Figure 8 shows the improved display for a curved and a straight segment.

During an on-track situation, the director appears as a gate aligned with the reference track. To steer along the track, the predictor needs to be kept inbound of the director, which is displayed at the same distance. If an off-track situation occurs where maximum bank or pitch would be needed, the director leaves the reference track, giving guidance back to the track with the maximum allowed bank or pitch. To ease the pilot's effort to realign his or her view with the director or the predictor after an off-bore-sight scan, two symbols at the edge of the display indicated to the pilot the direction to turn his or her head, one to the director and one to the predictor.
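The article does not give the director algorithm, so the following sketch is an assumed reconstruction of the behavior described above: on track, the director follows the reference trajectory; off track, it commands a recapture at the maximum allowed bank or pitch. The limit values and the tunnel-based on-track test are assumptions.

```python
# Assumed reconstruction of the predictor-director behavior, not the authors'
# implementation. On track, the director sits on the reference trajectory
# (commanding that segment's bank and pitch); off track, it commands a recapture
# at the maximum allowed bank or pitch, signed back toward the track.

MAX_BANK_DEG = 25.0      # assumed limit
MAX_PITCH_DEG = 10.0     # assumed limit

def director_command(cross_track_m, vertical_dev_m,
                     half_width_m, half_height_m,
                     ref_bank_deg, ref_pitch_deg):
    if abs(cross_track_m) <= half_width_m:
        bank_cmd = ref_bank_deg                      # stay with the reference segment
    else:
        bank_cmd = -MAX_BANK_DEG if cross_track_m > 0 else MAX_BANK_DEG
    if abs(vertical_dev_m) <= half_height_m:
        pitch_cmd = ref_pitch_deg
    else:
        pitch_cmd = -MAX_PITCH_DEG if vertical_dev_m > 0 else MAX_PITCH_DEG
    return bank_cmd, pitch_cmd
```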

The dimensions of the tunnel were chosen to match the 95% Required Navigation Performance (RNAV-RNP) proposal (see Table 3; Korn et al., 2005). The dimensions of the director were chosen similarly, with 100% of the tunnel height and 33% of the tunnel width. During the approach the tunnel dimension is reduced down to the chosen necessary precision (CAT II for this experiment).

FIGURE 8 Pathway predictor–director head-mounted device.

SECOND FLIGHT TEST

Method

Figure 9 shows the layout of the high-workload task. It was planned to be flyable using the global positioning system (GPS) navigation system onboard the Do-228 or using the HMD. All segments are shorter than 30 sec. The bank angles of the turns range from 10° up to 35°. Climb rates were planned from 900 fpm up to 1,500 fpm, and descents from 750 fpm up to 1,500 fpm. Several speed and configuration changes occur in this task. To get used to the improved display and the complex scenario, 4 participants were trained to fly this task via the HMD in a simulator session. During the flight test two rounds of the task were flown with the HMD and two rounds with GPS. Unfortunately, a malfunction of the head tracker occurred during two HMD trials, reducing the collected data. Therefore eight GPS trials and six HMD trials could be evaluated. The trajectory was planned for zero wind, but the wind during the experiments changed heavily. The participants needed significant effort to adapt to the wind situation, because the preplanned bank angles for some turns needed to be adapted during the flight to keep on track. During and after the flight trials, questionnaires were given. After the trials of each navigational method a NASA–TLX was administered, and an interactive debriefing questionnaire was collected.

FIGURE 9 Virtual departure and approach flight task.

Results

First, the small number of participants did not provide enough data to yield significant findings. The questionnaire could not deliver meaningful results for the same reason. The hardware-related problems of the first flight test were expected to reappear and indeed showed similar results. Even though the task was very hard to follow, there was only one noticeable off-track situation while using the HMD.

TABLE 3
Dimension of Tunnel-in-the-Sky

Category | Horizontal       | Vertical
RNAV     | 0.03 nm (± 56 m) | 50 ft (± 15 m)
CAT I    | 0.02 nm (± 37 m) | 40 ft (± 12 m)
CAT II   | 0.01 nm (± 19 m) | 15 ft (± 5 m)

Note. RNAV = required navigation performance.

Using the HMD, the course could be followed well at the expense of high workload. In the case of the GPS navigation system, it was not as easy to follow the reference track. However, for both methods, the pilot flying had to rely on the safety pilot for certain assistance.

For HMD flight, the pilot not flying had to set the throttle and keep the reference speed. For GPS navigation, the pilot not flying watched and controlled the GPS system and was needed to command the pilot flying to start each segment transition by counting down from three. The weather was a hazard as well. Flying partially in IMC, the pilot using the HMD had to adapt to constantly changing ambient lighting. The track was partially underneath overcast (low light, moderate contrast), partially in clouds (moderate light, no contrast), and partially above clouds (bright light, high contrast). To be able to read the display at all times, participants needed to cover the head mount with dark foil and reduce the transparency by wearing sunglasses with the HMD underneath.

The analysis of the cross-track error showed the expected guidance superiority of the pathway HMD. Table 4 shows the performance of each participant and the mean RMSE. Except for Pilot A, the RMSE for all pilots is below the RNP standard. The overall pathway following performance is 20 times better than with GPS navigation.

TABLE 4
Cross-Track Root Mean Square Error

        | GPS     | HMD  | HMD/GPS
Pilot A | 566 m   | 59 m | 10.5%
Pilot B | 973 m   | 47 m | 4.8%
Pilot C | 1,037 m | 25 m | 2.4%
M       | 859 m   | 44 m | 5.1%

Note. GPS = global positioning system; HMD = head-mounted display.
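As an illustration of how cross-track RMSE values such as those in Table 4 can be obtained from a recorded track, the following sketch compares flown positions against a densely sampled reference polyline; the nearest-point projection and the sample data are simplifications, not the authors' analysis.

```python
import numpy as np

def cross_track_rmse(track_xy_m, ref_xy_m):
    """RMSE of the lateral distance from each recorded point to the reference track.

    Simplified illustration: each recorded point is compared against the nearest
    point of a densely sampled reference polyline.
    """
    errors = []
    for p in track_xy_m:
        d = np.min(np.linalg.norm(ref_xy_m - p, axis=1))  # distance to nearest reference sample
        errors.append(d)
    return float(np.sqrt(np.mean(np.square(errors))))

# Placeholder tracks (meters): a straight reference leg and a slightly offset flown track.
reference = np.column_stack([np.linspace(0, 5000, 501), np.zeros(501)])
flown = np.column_stack([np.linspace(0, 5000, 101), 40.0 + 10.0 * np.sin(np.linspace(0, 6, 101))])
print(f"cross-track RMSE = {cross_track_rmse(flown, reference):.1f} m")
```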

During the first real flight test, pilots did not have problems timing the segment transitions using standard navigation aids, but they did have problems in the second test (see Figure 10). This experiment shows that the task was much more complex and very hard to follow with GPS navigation. Several times the beginning of a turn was missed; the necessary navigational performance to follow the trajectory was beyond the performance limit of the GPS system. In contrast, the pathway HMD did allow track following at all times. In comparison with the prior results, where the track was lost several times using the HMD, the tested predictor–director concept performed as intended. Accordingly, the NASA–TLX showed a higher subjective workload during GPS trials than during HMD trials for 2 of 3 participants.

FIGURE 10 Flight track overview of all trials.

The initial pathway HMD concept revealed problems at segment transitions. Pilots' inputs at transitions were more frequent and oscillated in both roll and pitch, sometimes with high gain. The second flight test did not show as many overcorrective maneuvers: oscillating corrections appeared less frequently and were, most of the time, similar to those measured during the GPS trials, although corrections still occurred more often than during the GPS trials. Pilots' inputs differed considerably while using the HMD. Pilot A showed oscillating behavior in both pitch and roll and much smoother control during the GPS trials. Pilot B, on the other hand, showed few oscillating inputs and tended to control similarly during the GPS trials. The reason might be insufficient training of Pilot A with the HMD or an installation problem (and therefore a readability issue).

TABLE 4
Cross-Track Root Mean Square Error

Pilot     GPS        HMD     HMD/GPS

Pilot A   566 m      59 m    10.5%
Pilot B   973 m      47 m    4.8%
Pilot C   1,037 m    25 m    2.4%
M         859 m      44 m    5.1%

Note. GPS = global positioning system; HMD = head-mounted display.

Further experiments should carefully investigate this topic. The introduced predictor–director guidance also needs to be examined for any benefit related to the scene-linking concept.

CONCLUSION

Combining enhanced and synthetic vision by automatic analysis of sensor data allows for a purely synthetic HMI with the advantage that the displayed information is cross-checked against the real world by enhanced vision technologies. The focus of this contribution is the HMI design and its validation in task simulations and flight trials. Here we have focused on further exploring the usability of a pathway HMD for civil fixed-wing aviation. The intention was to reduce attention fixation caused by the 3D compellingness of pathway HDDs and to mitigate the perceptual segregation between the display's near domain and the far domain outside the window through the use of HUDs. The benefit of scene linking was investigated for airborne tasks. Where laboratory experiments found evidence in favor of scene-linked pathway HUDs or HMDs, the real flight tests could not fully support this display concept. The flight tests were conducted to obtain findings particularly on usability issues of the pathway HMD. Several system-related difficulties were found that led to a latency that in turn generated frequent misalignments of conformal symbology with their referents.

FIGURE 10 Flight track overview of all trials.

This problem poses a particular challenge for the implementation of the scene-linking concept. At the same time, it was difficult to ensure readability of the display's content due to the changing ambient lighting. In spite of these challenges, the performance benefits regarding pathway following were demonstrated once more. How much this performance benefit depends on the display concept, or whether it is merely traded against a higher workload, could not be determined. Further investigations are needed to evaluate this approach.
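The effect of latency on conformal symbology can be approximated to first order as head angular rate multiplied by end-to-end system latency. The back-of-the-envelope sketch below uses an assumed head rate, latency, and viewing distance that are purely illustrative, not measured values from the trials.

```python
import math

def misalignment_deg(head_rate_dps: float, latency_s: float) -> float:
    """First-order estimate: the symbology lags the scene by head rate x latency."""
    return head_rate_dps * latency_s

def offset_at_distance_m(misalign_deg: float, distance_m: float) -> float:
    """Linear offset between a scene-linked symbol and its referent at a given distance."""
    return distance_m * math.tan(math.radians(misalign_deg))

# Illustrative values (not measurements from the flight trials):
err = misalignment_deg(60.0, 0.100)                # 60 deg/s head motion, 100 ms latency
print(err)                                         # 6.0 degrees of apparent angular offset
print(round(offset_at_distance_m(err, 1000.0)))    # ~105 m offset for a referent 1 km away
```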

The reported disruption of long-term planning abilities and the difficulties with traffic awareness and self-separation reduce the usability of the tested display concept. Even so, in all studies evidence has been found that the head-up pathway concept could be superior to present head-up solutions. To fully unlock the potential of this concept, it should be combined with a planning display coupled to an Experimental Flight Management System (EFMS), providing long-term planning abilities. Furthermore, the experimental platform needs to be revised to mitigate the latency problems. Finally, the HMD hardware is still not sophisticated enough to meet the needs of everyday use.

REFERENCES

Arthur, J. J., Prinzel, L. J., Shelton, K. J., Kramer, L. J., Williams, S. P., Bailey, R. E., & Norman, R. M. (in press). Synthetic vision enhanced surface operations with head-worn display for commercial aircraft. International Journal of Aviation Psychology, 20(2).

Czerlitzki, B. (1994). The experimental flight management system: Advanced functionality to comply with ATC constraints. Air Traffic Control Quarterly, 2, 159–188.

Döhler, H.-U., & Korn, B. (2003, October). Robust position estimation using images from an uncalibrated camera. Paper presented at the 22nd Digital Avionics Systems Conference, Indianapolis, IN.

Döhler, H.-U., & Korn, B. (2006). EVS based approach procedures: IR-image analysis and image fusion to support pilots in low visibility. In I. Grant (Ed.), Proceedings of the 25th Congress of the International Council of the Aeronautical Sciences (ICAS) [CD-ROM]. Edinburgh, UK: Optimage Ltd.

EUROCONTROL. (2003). The development of situation awareness measures in ATM systems (Tech. Rep. No. HRS/SHSP-005-REP-01). Brussels, Belgium: Author.

European Aviation Safety Agency. (2006). NPA-OPS 41 (JAR-OPS 1): Subpart E—All weather operations.

Federal Aviation Administration. (2004). Enhanced flight vision systems—Final rule (14 CFR Parts 1, 91, 121, 125, 153). Federal Register, 69(6), 1619–1641.

Fadden, S., Ververs, P. M., & Wickens, C. D. (2001). Pathway HUDs: Are they viable? Human Factors, 43, 173–193.

Fadden, S., Wickens, C. D., & Ververs, P. (2000). Costs and benefits of head up displays: An attention perspective and a meta analysis. Warrendale, PA: Society of Automotive Engineers.

Flemisch, F. O., & Onken, R. (2000, April). Detecting usability problems with eye tracking in airborne battle management support. Paper presented at the RTO HFM Symposium on Usability of Information in Battle Management Operations, Oslo, Norway.

Foyle, D. C., McCann, R. S., & Sheldon, S. G. (1995). Attentional issues with superimposed symbology: Formats for scene-linked displays. In R. S. Jensen & L. A. Rakovan (Eds.), Proceedings of the Eighth International Symposium on Aviation Psychology (pp. 98–103). Columbus: Ohio State University.

Grünwald, A. J. (1996). Improved tunnel display for curved trajectory following: Experimental evaluation. Journal of Guidance, Control, and Dynamics, 19, 378–384.

Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 139–183). Amsterdam: North Holland.

Haskell, I. D., & Wickens, C. D. (1993). Two- and three-dimensional displays for aviation: A theoretical and empirical comparison. International Journal of Aviation Psychology, 3, 87–109.

Korn, B. (2005, March). Autonomous sensor based landing systems: Fusion of vague and incomplete information by application of fuzzy clustering techniques. Paper presented at the 29th Annual Conference of the German Classification Society, Magdeburg, Germany.

Korn, B. (2007). Enhanced and synthetic vision system for autonomous all weather approach and landing. In J. G. Verly (Ed.), Enhanced and synthetic vision 2007 [CD-ROM]. Orlando, FL: SPIE Vol. 6559.

Korn, B., Döhler, H.-U., & Hecker, P. (2000). MMW radar based navigation: Solutions of the "vertical position problem." In Enhanced and synthetic vision 2000 (pp. 29–37). Orlando, FL: SPIE.

Korn, B., Döhler, H.-U., & Hecker, P. (2001). Navigation integrity monitoring and obstacle detection for enhanced vision systems. In J. G. Verly (Ed.), Enhanced and synthetic vision (pp. 51–57). Orlando, FL: SPIE Vol. 4363.

Korn, B., & Hecker, P. (2002a). Enhanced and synthetic vision: Increasing pilot's situation awareness under adverse weather conditions. In Air traffic management for commercial and military systems: Proceedings of the 21st Digital Avionics Systems Conference [CD-ROM]. Irvine, CA: IEEE.

Korn, B., & Hecker, P. (2002b). Pilot assistance systems: Enhanced and synthetic vision for automatic situation assessment. In S. Chatty, J. Hansman, & G. Boy (Eds.), Proceedings of the International Conference on Human-Computer Interaction in Aeronautics (pp. 193–198). Boston, MA: AAAI Press.

Korn, B., et al. (2005). OPTIMAL: Deliverable D1.1. State-of-the-art. European FP6 project OPTIMAL. Retrieved from www.optimal.isdefe.es

Kramer, L. J., Bailey, R. E., & Prinzel, L. J. (in press). Commercial flight crew decision-making during low-visibility approach operations using fused synthetic/enhanced vision systems. International Journal of Aviation Psychology, 20(2).

Lorenz, B., Többen, H., & Schmerwitz, S. (2005). Human performance evaluation of a pathway HMD. In J. G. Verly (Ed.), Enhanced and synthetic vision 2005 (pp. 166–176). Orlando, FL: SPIE Vol. 5802.

Martin-Emerson, R., & Wickens, C. D. (1997). Superimposition, symbology, visual attention, and the head-up display. Human Factors, 39, 581–601.

McCann, R. S., Foyle, D. C., & Johnston, J. C. (1993). Attention limitations with head-up displays. In R. S. Jensen (Ed.), Proceedings of the Seventh International Symposium on Aviation Psychology (pp. 70–75). Columbus: Ohio State University.

Mulder, M. (2003). An information-centered analysis of the tunnel-in-the-sky display: Part one. Straight tunnel trajectories. International Journal of Aviation Psychology, 13, 49–72.

Pirkl, M., & Tospann, F.-J. (1997). The HiVision MM-wave radar for enhanced vision systems in civil and military transport aircraft. In J. G. Verly (Ed.), Enhanced and synthetic vision 1997 (pp. 8–18). Orlando, FL: SPIE Vol. 3088.

Prinzel, L. J. (2004). Head-up displays and attention capture (Tech. Rep. No. NASA/TM-2004-213000). Hampton, VA: NASA Langley Research Center.

Prinzel, L. J., Arthur, J. J., Kramer, L. J., & Bailey, R. E. (2004). Pathway concepts experiment for head-down synthetic vision displays. In J. G. Verly (Ed.), Enhanced and synthetic vision 2004: Proceedings of SPIE (pp. 11–22). Bellingham, WA: International Society for Optical Engineering.

Schmerwitz, S., Lorenz, B., & Többen, H. (2006). Investigating the benefits of scene-linking for a pathway HMD: From laboratory flight experiments to flight tests. In J. G. Verly (Ed.), Enhanced and synthetic vision 2006 [CD-ROM]. Orlando, FL: SPIE Vol. 6226.

Schmerwitz, S., Többen, H., Lorenz, B., & Korn, B. (2007). Head-mounted display—Evaluation in simulation and flight trials. In RTO-MP-HFM-141: Human factors and medical aspects of day/night all weather operations: Current issues and future challenges [CD-ROM]. Heraklion, Greece: NATO RTO.

Sheldon, S. G., Foyle, D. C., & McCann, R. S. (1997). Effects of scene-linked symbology on flight performance. In Proceedings of the 41st Annual Meeting of the Human Factors and Ergonomics Society (pp. 294–298). Santa Monica, CA: HFES.

Taylor, R. M. (1990, October). Situational Awareness Rating Technique (SART): The development of a tool for aircrew systems design. Paper presented at the AGARD Aerospace Medical Panel Symposium on Situational Awareness in Aerospace Operations, Copenhagen, Denmark.

Többen, H., Lorenz, B., & Schmerwitz, S. (2005). Design of a pathway display for a retinal scanning HMD. In J. G. Verly (Ed.), Enhanced and synthetic vision 2005 (pp. 102–111). Orlando, FL: SPIE Vol. 5802.

Wickens, C. D. (2005). Attentional tunnelling and task management. In Proceedings of the 13th International Symposium on Aviation Psychology (pp. 620–625). Oklahoma City, OK.

Wickens, C. D., Alexander, A. L., Thomas, L. C., Horrey, W. J., Nunes, A., Hardey, T. J., et al. (2004). Traffic and flight guidance depiction on a synthetic vision system display: The effects of clutter on performance and visual attention allocation (Tech. Rep. No. AHFD-04-10/NASA(HPM)-04-1). Urbana: University of Illinois, Institute of Aviation, Aviation Human Factors Division.

Wickens, C. D., & Long, J. (1995). Object- vs. space-based models of visual attention: Implications for the design of head-up displays. Journal of Experimental Psychology: Applied, 1, 179–193.

Wickens, C. D., & Alexander, A. L. (in press). Attentional tunneling and task management in synthetic vision displays. International Journal of Aviation Psychology, 20(2).

Williams, K. W. (2002). Impact of highway-in-the-sky displays on pilot situation awareness. Human Factors, 44, 18–27.

Manuscript first received: April 2008
