
TerraMax™: Team Oshkosh Urban Robot

Yi-Liang Chen, Venkataraman Sundareswaran, and Craig Anderson
Teledyne Scientific & Imaging
Thousand Oaks, California 91360
e-mail: [email protected], [email protected], [email protected]

Alberto Broggi, Paolo Grisleri, Pier Paolo Porta, and Paolo Zani
VisLab, University of Parma
Parma 43100, Italy
e-mail: [email protected], [email protected], [email protected], [email protected]

John Beck
Oshkosh Corporation
Oshkosh, Wisconsin 54903
e-mail: [email protected]

Received 22 February 2008; accepted 28 August 2008

Team Oshkosh, composed of Oshkosh Corporation, Teledyne Scientific and Imaging Company, VisLab of the University of Parma, Ibeo Automotive Sensor GmbH, and Auburn University, participated in the DARPA Urban Challenge and was one of the 11 teams selected to compete in the final event. Through development, testing, and participation in the official events, we experimented with and demonstrated autonomous truck operations in (controlled) urban streets of California, Wisconsin, and Michigan under various climate and traffic conditions. In these experiments TerraMax™, a modified medium tactical vehicle replacement (MTVR) truck by Oshkosh Corporation, negotiated urban roads, intersections, and parking lots and interacted with manned and unmanned traffic while observing traffic rules. We accumulated valuable experience and lessons on autonomous truck operations in urban environments, particularly in the aspects of vehicle control, perception, mission planning, and autonomous behaviors, which will have an impact on the further development of large-footprint autonomous ground vehicles for the military.

Journal of Field Robotics 25(10), 841–860 (2008) © 2008 Wiley Periodicals, Inc. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/rob.20267


In this article, we describe the vehicle, the overall system architecture, the sensors and sensor processing, the mission planning system, and the autonomous behavioral controls implemented on TerraMax. We discuss the performance of some notable autonomous behaviors of TerraMax and our experience in implementing these behaviors and present results of the Urban Challenge National Qualification Event tests and the Urban Challenge Final Event. We conclude with a discussion of lessons learned from all of the above experience in working with a large robotic truck. © 2008 Wiley Periodicals, Inc.

1. INTRODUCTION

Team Oshkosh entered the DARPA Urban Challenge with a large-footprint robotic vehicle, TerraMax™, a modified medium tactical vehicle replacement (MTVR) truck. By leveraging our past experience and success in previous DARPA Challenges, the combined multifaceted expertise of the team members, and the support of a DARPA Track A program award, we demonstrated various autonomous vehicle behaviors in urban environments with excellent performance, passed many official tests at the National Qualification Event (NQE), and qualified for the Urban Challenge Final Event (UCFE). TerraMax completed the first four sub-missions of mission 1 of the UCFE before being stopped after a failure in the parking lot due to a software bug. We brought TerraMax back to the UCFE test site in Victorville in December 2007, where TerraMax successfully completed three missions totaling more than 78 miles in 7 h and 41 min.

Team Oshkosh is composed of Oshkosh Corporation, Teledyne Scientific and Imaging Company, VisLab of the University of Parma, Ibeo Automotive Sensor GmbH, and Auburn University. Oshkosh provided the vehicle, program management, and overall design direction for the hardware, software, and control systems. Oshkosh integrated all the electrical and mechanical components and developed the low- and midlevel vehicle control algorithms and software. Teledyne Scientific and Imaging Company developed the system architecture, mission and trajectory planning, and autonomous behavior generation and supervision. The University of Parma's VisLab developed various vision capabilities. Ibeo Automotive Sensor GmbH provided software integration of the LIDAR system. Auburn University provided evaluation of the global positioning system/inertial measurement unit (GPS/IMU) package.

Although substantial hurdles must be overcome in working with large vehicles such as TerraMax, we feel that large autonomous vehicles are critical for enabling autonomy in military logistics operations. Team Oshkosh utilized a vehicle based on the U.S. Marine Corps MTVR, which provides the majority of the logistics support for the Marine Corps. The intention is to optimize the autonomous system design such that the autonomy capability can be supplied in kit form. All design and program decisions were made considering not only the Urban Challenge requirements but eventual fielding objectives as well.

Our vehicle was modified to optimize the control-by-wire systems, providing superior low-level control performance based on lessons learned from the 2005 DARPA Grand Challenge (Braid, Broggi, & Schmiedel, 2006; Sundareswaran, Johnson, & Braid, 2006). Supported by a suite of carefully selected and military-practical sensors and perception processing algorithms, our hierarchical state-based behavior engine provided a simple yet effective approach to generating the autonomous behaviors for urban operations. Through development, testing, and participation in official events, we experimented with and demonstrated autonomous truck operations in (controlled) urban streets of California, Wisconsin, and Michigan under various climate conditions. In these experiments, TerraMax negotiated urban roads, intersections, and parking lots and interacted with manned and unmanned traffic while observing traffic rules.

In this article, we present our experience and lessons learned from autonomous truck operations in urban environments. In Section 2 we summarize the vehicle and hardware implementation. In Section 3 we present the overall system architecture and its modules. In Section 4 we describe TerraMax's sensor and perception processing. In Section 5 we present TerraMax's autonomous behavior generation and supervision approach. In Section 6 we discuss TerraMax's field performance and experience in the NQE and the UCFE. We comment on lessons learned in Section 7.


Figure 1. TerraMax: the vehicle.

2. TERRAMAX: THE VEHICLE AND HARDWARE SYSTEMS

2.1. Vehicle Overview

The TerraMax vehicle (see Figure 1) is a modified version of a standard Oshkosh MTVR Navy tractor,¹ which comes with a rear steering system as standard equipment. The MTVR platform was designed for and combat tested by the U.S. Marine Corps. We converted the vehicle to a 4 × 4 version by removing the third axle and by shortening the frame rail and rear cargo bed. The TAK-4™ independent suspension allowed rear axle steering angles to be further enhanced to deliver curb-to-curb turning diameters of 42 ft, equivalent to the turning diameter of a sport utility vehicle. In addition to the enhancement of turning performance, Oshkosh developed and installed low-level controllers and actuators for "by-wire" braking, steering, and powertrain control. Commercial-off-the-shelf (COTS) computer hardware was selected and installed for the vision system and autonomous vehicle behavior functions.

¹ Oshkosh MTVR. http://www.oshkoshdefense.com/pdf/OshkoshMTVR brochure 07.pdf

2.2. Computing Hardware

We opted for ruggedized COTS computing platforms to address the computing needs of TerraMax. Two A-Plus Mobile A20-MC computers with Intel Core Duo processors running Windows XP Pro were used for autonomous vehicle behavior generation and control. The four vision personal computers (PCs) use SmallPC Core Duo computers running Linux Fedora. One PC is dedicated to each vision camera system (i.e., trinocular, close-range stereo, rearview, and lateral). The low-level vehicle controller and body controller modules are customized Oshkosh Command Zone® embedded controllers and use the 68332 and HC12X processors, respectively. To meet our objectives of eventual fielding, all the computing hardware was housed in the storage space beneath the passenger seat.

2.3. Sensor Hardware

2.3.1. LIDAR Hardware

TerraMax incorporated a LIDAR system from Ibeo Automobile Sensor GmbH that provides a 360-deg field of view with safety overlaps (see Figure 2).


Figure 2. LIDAR coverage of TerraMax (truck facing right).

Figure 3. Vision coverage of TerraMax (truck facing right). Systems displayed: trinocular (orange), looking forward from 7 to 40 m; stereo front and stereo back (purple), monitoring a 10 × 10 m area in front of the truck and 7 × 5 m in the back; rearview (blue), monitoring up to 50 m behind the truck; and lateral (green), looking up to 130 m.


Two ALASCA XT laser scanners are positioned on the front corners of the vehicle, and one ALASCA XT laser scanner is positioned in the center of the rear. Each laser scanner scans a 220-deg horizontal field. Outputs of the front scanners are fused at the low level; the rear system remained a separate system. The LIDAR system's native software was modified to operate with our system architecture messaging schema.

2.3.2. Vision Hardware

Four vision systems are onboard: trinocular, stereo, rearview, and lateral. Figure 3 depicts the coverage of these vision systems. Table I summarizes the functions and components of these vision systems.

Each vision system is formed by a computer connected to a number of cameras and laser scanners, depending on the application. Each computer is connected through an 800-Mbps FireWire B link to a subset of the 11 cameras mounted on the truck, depending on the system purpose [9 Point Grey Flea 2 cameras, sensor: charge-coupled device (CCD), 1/3″, Bayer pattern, 1,024 × 768 pixels (XGA); and 2 Allied Vision Technologies Pike 2 cameras, sensor: 1″, Bayer pattern, 1,920 × 1,080 pixels, high-definition TV (HDTV)].

2.3.3. GPS/INS

Using a Novatel GPS receiver with Omnistar HP corrections (which provides 10-cm accuracy in 95% of cases) as a truth measurement in extensive tests under normal and GPS-denied conditions, we selected the Smiths Aerospace Inertial Reference Unit (IRU) as our GPS/inertial navigation system (INS) solution, based on its more robust initialization performance and greater accuracy in GPS-denied conditions.

3. SYSTEM ARCHITECTURE

On the basis of a layered architecture design pattern (Team Oshkosh, 2007), we designed the software modules as services that provide specific functions to the overall system. These services interact with each other through a set of well-defined asynchronous messages. Figure 4 illustrates these software modules and the relations among them. As illustrated, there are two main types of services: autonomous services, whose modules provide functionalities for autonomous vehicle behaviors, and system services, whose modules support the reliable operation of the vehicle and the mission. We summarize the main functionalities of these software modules in the following description.

Table I. Vision system components.

Trinocular
  Cameras: 3x PtGrey Flea 2 (XGA)
  Camera position: upper part of the windshield, inside the cab
  Linked laser scanner: front
  Algorithms: lane detection, stereo obstacle detection
  Range: 7 to 40 m
  Notes: 3 stereo systems with baselines 1.156, 0.572, and 1.728 m

Stereo
  Cameras: 4x PtGrey Flea 2 (XGA)
  Camera position: 2 on the front camera bar, 2 on the back of the truck, all looking downward
  Linked laser scanner: front, back
  Algorithms: lane detection, stop line detection, curb detection, short-range stereo obstacle detection
  Range: 0 to 10 m
  Notes: 2 stereo systems (front and rear)

Lateral
  Cameras: 2x Allied Vision Technologies Pike 2 (HDTV)
  Camera position: on the sides of the front camera bar
  Linked laser scanner: not used
  Algorithms: monocular obstacle detection
  Range: 10 to 130 m
  Notes: enabled when the truck stops at crossings

Rearview
  Cameras: 2x PtGrey Flea 2 (XGA)
  Camera position: external, on top of the cab, looking backward and downward, rotated by 90 deg
  Linked laser scanner: back
  Algorithms: monocular obstacle detection
  Range: −4 to −50 m
  Notes: overtaking vehicle detection


Figure 4. Software deployment architecture.

3.1. Autonomous Services

Autonomous vehicle manager. The autonomous vehicle manager (AVM) manages the high-level autonomous operation of the vehicle. It is primarily responsible for performing route planning, trajectory planning, and behavior management. The AVM receives perception updates from the world perception server (WPS) and uses this information to track the current vehicle state and determine the current behavior mode. The AVM continuously monitors perceived obstacle and lane boundary information and issues revised trajectory plans to the autonomous vehicle driver (AVD) through drive commands.

Autonomous vehicle driver. The AVD provides vehicle-level autonomy, such as waypoint following and lateral, longitudinal, and stability control, by accepting messages from the AVM and commanding the lower level control-by-wire actuations.

World perception server. The WPS publishes perception updates containing the most recently observed vehicle telemetry, obstacle, and lane/road boundary information. The WPS subscribes to sensory data from the LIDAR and the vision system (VISION). The WPS combines the sensory data with the vehicle telemetry data received from the navigation service (NAV). Obstacles detected by the LIDAR and VISION are further fused in order to provide a more accurate depiction of the sensed surroundings. The AVM consumes the perception updates published by the WPS and uses this information to determine the next course of action for the vehicle.

Vision system. VISION publishes processed sensory data and metadata from different groups of cameras. The metadata may contain information such as detected driving lanes/paths, lane boundary and curb markings, and/or obstacles. These sensory data and metadata are sent to the WPS for distribution.

Other autonomous services include the vehicle state server (VSS), which monitors and manages low-level control for transitions from manual to autonomous operations, detects any low-level faults, and attempts to recover the system into a fail-safe mode if needed; the LIDAR system (LIDAR), which fuses and publishes obstacle information provided by the native obstacle detection and tracking functionalities of the different laser scanners; and the NAV service, which manages communications to the GPS/INS.

3.2. System Services

Course visualizer. The course visualizer is the prime interface that allows human operators/developers to observe the internal operations and status of the autonomous systems during a run. During an autonomous run, it provides real-time, two-dimensional visualization of the course data [i.e., the road network definition file (RNDF)], vehicle telemetry data, metadata from sensors (e.g., lane updates and obstacles), and status/results of autonomous behavior generation (e.g., the internal logic of a particular autonomous mode, or the results of a particular behavior algorithm). It can also serve as the main playback platform to enable postoperation analysis of the data log. Incorporated with a simplistic vehicle model, it also serves as a rough simulation platform to allow early testing and verification when developing or adjusting behavioral algorithms.

Other system services include the mission manager, which provides the user interface for configuring autonomous services, loading mission files, and starting the autonomous services/modes; the health monitor, which monitors service beacons from other services and alerts the service manager if an anomaly occurs; and the service manager, which manages the initialization, startup, restart, and shutdown of all autonomous services.

4. SENSOR PROCESSING AND PERCEPTION

In previous Grand Challenge efforts we used a trinocular vision system developed by VisLab at the University of Parma for both obstacle and path detection, coupled with laser scanning systems for obstacle detection. We used several SICK laser scanners and one Ibeo laser scanner for obstacle detection. The Grand Challenge generally involved only static obstacles, so sensing capabilities focused on the front of the vehicle. Urban driving introduces a new dynamic: obstacles move (i.e., other moving vehicles), and the vehicle must respond to these moving obstacles, resulting in a much greater need for sensing behind and to the sides of the vehicle. The AVM needs more information about the obstacles, requiring their velocity as well as their location, and it also needs great precision in detecting stop lines and lane boundaries. To meet these goals, we enhanced capabilities in both laser scanning and vision.

Ibeo provided three of their advanced ALASCA XT laser scanners and fusion algorithms for an integrated 360-deg view. The new TerraMax vehicle utilizes multiple vision systems, with perception capabilities in all the critical regions.

4.1. LIDAR

The Ibeo laser scanners have two roles on TerraMax. First, they provide processed object data (Kaempchen, Buhler, & Dietmayer, 2005; Wender, Weiss, Dietmayer, & Fuerstenberg, 2006) to the WPS. Second, they provide scan-level data used by the vision system to improve its results. The onboard external control units (ECUs) fuse the data from the two front LIDARs,² acting as a single virtual sensor in the front.

4.2. Vision System

4.2.1. Software Approach

All the vision computers run the same software framework (Bertozzi et al., 2008), and the various applications are implemented as separate plug-ins. This architecture allows hardware abstraction while making a common processing library available to the applications, thus making algorithm development independent of the underlying system. Each vision system controls its cameras using a selective autoexposure feature. Analysis of each image is focused on a specific region of interest.

All the vision systems are fault tolerant with respect to one or more temporary or permanent sensor failure events. The software is able to cope with FireWire bus resets or laser scanner communication problems and to reconfigure itself to manage the remaining sensors.

² Ibeo laser scanner fusion system. http://www.ibeo-as.com/english/technology d fusion.asp

4.2.2. The Trinocular System

Driving in urban traffic requires detailed perception of the environment surrounding the vehicle: for this, we installed in the truck cabin a trinocular vision system capable of performing both obstacle and lane detection up to distances of 40 m, derived from Caraffi, Cattani, and Grisleri (2007). The stereo approach was chosen because it allows an accurate three-dimensional (3D) reconstruction without requiring strong a priori knowledge of the scene in front of the vehicle, only correct calibration values, which are estimated at run time.

Figure 5. (a) A frame captured from the right camera; (b) the corresponding V-disparity map, where the current pitch is shown in yellow text and the detected ground slope is represented by the slanted part of the red line; and (c) the disparity map (green points are closer; orange ones are farther away).

Figure 6. Results of the lane detection algorithm projected on the original image. From right to left, the red line represents the right boundary, green the left boundary, and yellow the far left boundary. In this image, the right line is detected although it is not a complete line.


The three cameras form three possible baselines (the baseline is the distance between two stereo cameras), and the system automatically switches between them depending on the current vehicle speed; at lower speeds it is thus more convenient to use the shorter (and more accurate) baseline, whereas at higher speeds the large baseline permits the detection of obstacles when they are far from the vehicle.
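The switching policy lends itself to a compact illustration. The following Python sketch (ours, not the team's code) selects among the three baselines listed in Table I according to the current speed; the speed thresholds are illustrative assumptions.

```python
# Baseline switching for a trinocular system: a minimal sketch.
# The three baselines (meters) come from Table I; the speed
# thresholds below are illustrative assumptions only.

BASELINES_M = [0.572, 1.156, 1.728]  # short, medium, wide (Table I)

def select_baseline(speed_mps: float) -> float:
    """Pick a stereo baseline from the current vehicle speed.

    Shorter baselines give better accuracy at close range (low speed);
    the wide baseline extends detection range for higher speeds.
    """
    if speed_mps < 5.0:       # assumed low-speed threshold
        return BASELINES_M[0]
    elif speed_mps < 10.0:    # assumed mid-speed threshold
        return BASELINES_M[1]
    return BASELINES_M[2]
```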

Images are rectified so that the corresponding epipolar lines become horizontal, thus correcting any hardware misalignment of the cameras and allowing for more precise measurements. The V-disparity map (Labayrade, Aubert, & Tarel, 2002) is exploited to extract the ground slope and current vehicle pitch, in order to compensate for the oscillations that occur while driving [Figures 5(a) and 5(b)].

The next step is to build a disparity map from the pair of stereo images: this operation is accomplished using a highly optimized incremental algorithm, which takes into account the previously computed ground slope in order to produce more accurate results and to reduce the processing time [Figure 5(c)].

The disparity map, along with the corresponding 3D world coordinates, is used to perform obstacle detection. After a multistep filtering phase aimed at isolating the obstacles present in front of the truck, the remaining points are merged with those from the front LIDAR and serve as initial values for a flood-fill expansion step, governed by each pixel's disparity value, in order to extract the complete shape of each obstacle. This data fusion step ensures good performance in poorly textured areas, while ensuring robustness of the vision-based obstacle detection algorithm against LIDAR sensor failures. Previously identified obstacles are removed from the full-resolution image used by the lane detection algorithm, lowering the possibility of false positives, such as those introduced by poles or vehicle parts. Figure 6 shows a typical scene and the lanes detected by the algorithm.
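To make the flood-fill step concrete, here is a minimal Python sketch of disparity-guided region growing of the kind described: seed points (filtered stereo points merged with front-LIDAR returns) are expanded into neighboring pixels with similar disparity. The 8-connectivity and the disparity tolerance are our assumptions, not parameters from the paper.

```python
# Flood-fill obstacle expansion on a disparity map (illustrative sketch).
from collections import deque
import numpy as np

def flood_fill_obstacles(disparity: np.ndarray,
                         seeds: list[tuple[int, int]],
                         tol: float = 1.0) -> np.ndarray:
    """Grow connected obstacle regions from seed pixels; return labels."""
    h, w = disparity.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0
    for seed in seeds:
        if labels[seed] != 0:
            continue                      # already absorbed by a region
        next_label += 1
        ref = disparity[seed]             # seed's disparity as reference
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr in (-1, 0, 1):         # 8-connected neighborhood
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < h and 0 <= cc < w
                            and labels[rr, cc] == 0
                            and abs(disparity[rr, cc] - ref) <= tol):
                        labels[rr, cc] = next_label
                        queue.append((rr, cc))
    return labels
```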

4.2.3. Stereo System

Navigation in an urban environment requires precise maneuvers. The trinocular system described in the preceding section can be used for driving only at medium to high speeds, because it covers the far range (7–40 m). TerraMax includes two stereo systems [one in the front and one in the back, derived from Broggi, Medici, and Porta (2007)], which provide precise sensing at closer range. Using wide-angle (fish-eye, about 160 deg) lenses, these sensors gather information over an extended area of about 10 × 10 m; the stereo systems are designed to detect obstacles and lane markings with high confidence in the detection and position accuracy.

Obstacle detection is performed in two steps: first, the two images, acquired simultaneously, are preprocessed in order to remove the high lens distortion and perspective effect, and a thresholded difference image is generated and labeled (Bertozzi, Broggi, Medici, Porta, & Sjogren, 2006); then a polar histogram-based approach is used to isolate the labels corresponding to obstacles (Bertozzi & Broggi, 1998; Lee & Lee, 2004). Data from the LIDARs are clustered so that laser reflections in a particular area can boost the score associated with the corresponding image regions, thus enhancing the detection of obstacles.

The chosen stereo approach avoids explicit computation of camera intrinsic and extrinsic parameters, which would have been impractical given the choice of fish-eye lenses to cover a wide area in front of the truck. The use of a lookup table (generated using a calibration tarp) to remap the distorted input images to a bird's-eye view of the scene thus results in improved performance and reduced calibration time.

Short-range line detection is performed using a single camera to detect lane markings (even along a sharp curve), stop lines, and curbs. As the camera approaches the detected obstacles and lane markings, the position accuracy increases, yet the markings remain in the camera field of view thanks to the fish-eye lens. Precise localization of lane markings enables the following: lane keeping despite the large width of the vehicle, stopping of the vehicle in close proximity to the stop line at intersections, accurate turning maneuvers at intersections, and precise planning of obstacle avoidance maneuvers. Figure 7 shows a frame with typical obstacle and lane detection.

Figure 7. A sample frame showing typical obstacle and lane detection.

4.2.4. Lateral System

We employ lateral perception to detect oncoming traffic at intersections (Figure 8). During a traffic merge maneuver, the vehicle is allowed to pull into traffic only when a gap of at least 10 s is available. For this, the vehicle needs to perceive the presence of oncoming traffic and estimate vehicle speeds at range. The intersecting road might be viewed at an angle other than 90 deg; therefore the lateral system must be designed to detect traffic coming from different angles. We installed two high-resolution AVT Pike cameras (1,920 × 1,080 pixels) on TerraMax, one on each side, for the lateral view, together with 8-mm Kowa lenses. With this configuration each camera can cover a 90-deg angle and is able to see objects at high resolution up to distances greater than 130 m.

Figure 8. Lateral system algorithm results (detected vehicles are marked red).

The lateral camera image is processed using a robust, ad hoc background subtraction-based algorithm within a selected region of interest, with the system being triggered by the AVM when the vehicle stops at an intersection, yielding to oncoming traffic. This approach allows us to handle the high-resolution imagery with a simple, computationally effective approach by leveraging the semantic context of vehicular motion.
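As an illustration of this style of detector, the following Python sketch maintains a running-average background over the region of interest and flags large intensity changes as oncoming traffic. The learning rate and threshold are illustrative assumptions; the actual TerraMax algorithm is more elaborate.

```python
# Background subtraction over a region of interest: a minimal sketch
# of the general approach described above (not TerraMax's actual code).
import numpy as np

class LateralMotionDetector:
    def __init__(self, alpha: float = 0.05, thresh: float = 25.0):
        self.alpha = alpha          # background learning rate (assumed)
        self.thresh = thresh        # foreground threshold (assumed)
        self.background = None      # running-average background model

    def update(self, roi_gray: np.ndarray) -> np.ndarray:
        """Feed one grayscale ROI frame; return a boolean foreground mask."""
        frame = roi_gray.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.background) > self.thresh
        # Blend the new frame into the background slowly, so that
        # oncoming vehicles are not absorbed into the model.
        self.background = (1 - self.alpha) * self.background \
                          + self.alpha * frame
        return mask
```

Triggering the detector only while the truck is stopped, as the AVM does, sidesteps ego-motion, which is what makes so simple a model workable on high-resolution imagery.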

4.2.5. Rearview System

When driving along a road, in both urban and rural environments, lane changes and passing may occur. The rearview system is aimed at detecting passing vehicles (Figure 9). This solution has proven to be very robust while keeping processing requirements low; the onboard camera setup (with cameras placed on top of the cab, looking backward) ensures good visibility, because oncoming traffic is seen from a favorable viewing angle. The rearview system processing is based on color clustering and optical flow. The first stage of processing performs data reduction in the image: a category is assigned to each pixel depending on its color, resulting in blobs that represent objects or portions of objects of uniform color. In the second (optic flow) stage, blobs of uniform color are analyzed and tracked to estimate their shape and movement.

Figure 9. Rearview system algorithm results (detected vehicles are marked in red).

Obstacles found using optical flow are then compared with those detected by the LIDAR: because the latter has higher precision, the position of obstacles estimated by vision is refined using the LIDAR data, if available. The fusion algorithm thus performs only position refinement and does not create or delete obstacles, in order to isolate the detection performance of the vision system from that of the LIDAR.

4.3. Obstacle Fusion and Tracking

We adopted a high-level approach for fusing (both dynamic and static) obstacle information. The high-level fusion approach was favored for its modularity and rapid implementation. It was also well suited to our geographically dispersed development team.

In this approach, obstacles are detected locally by the LIDAR and vision systems. Detected obstacles are expressed as objects that contain relevant information such as outline points, ID, velocity, height (vision only), and color (vision only). The obstacles from LIDAR and vision are fused in the WPS based on their overlap and proximity.
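A minimal sketch of such overlap/proximity matching follows, with obstacles reduced to axis-aligned bounding boxes (the real system carries outline points) and illustrative thresholds:

```python
# High-level obstacle fusion: match LIDAR and vision obstacles by
# overlap (IoU) or center proximity. Thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Box:
    x0: float
    y0: float
    x1: float
    y1: float   # axis-aligned bounding box, meters

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    ix = max(0.0, min(a.x1, b.x1) - max(a.x0, b.x0))
    iy = max(0.0, min(a.y1, b.y1) - max(a.y0, b.y0))
    inter = ix * iy
    union = ((a.x1 - a.x0) * (a.y1 - a.y0)
             + (b.x1 - b.x0) * (b.y1 - b.y0) - inter)
    return inter / union if inter > 0 else 0.0

def center_dist(a: Box, b: Box) -> float:
    ax, ay = (a.x0 + a.x1) / 2, (a.y0 + a.y1) / 2
    bx, by = (b.x0 + b.x1) / 2, (b.y0 + b.y1) / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def fuse(lidar: list[Box], vision: list[Box],
         min_iou: float = 0.3, max_dist: float = 1.0):
    """Pair obstacles that overlap enough or whose centers are close."""
    return [(lo, vo) for lo in lidar for vo in vision
            if iou(lo, vo) >= min_iou or center_dist(lo, vo) <= max_dist]
```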

Similarly, we relied on the native functions of the LIDAR (Wender et al., 2006) and vision systems for obstacle tracking (through the object IDs generated by these systems). This low-level-only tracking approach proved to be effective in most situations. However, it was inadequate in more complex situations in which a vehicle is temporarily occluded by another (see discussions in Sections 6.2 and 7).

To predict the future position of a moving vehicle, the WPS applies a nonholonomic vehicle kinematic model (Pin & Vasseur, 1990) and the context of the vehicle. For example, if the vehicle is in a driving lane, the WPS assumes that it will stay in the lane. If the vehicle is not in a lane, the WPS assumes that it will maintain its current heading.
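A minimal sketch of this context-dependent prediction, assuming a fixed time step and a lane centerline parameterized by arc length (both our assumptions):

```python
# Context-dependent motion prediction in the spirit of the WPS policy.
import math

def predict(x, y, heading, speed, horizon_s,
            lane_centerline=None, dt=0.1):
    """Return predicted (x, y) samples over the horizon.

    lane_centerline, if given, maps arc length traveled from the
    current position to an (x, y) point on the lane's centerline.
    """
    out, t, s = [], 0.0, 0.0
    while t < horizon_s:
        t += dt
        s += speed * dt
        if lane_centerline is not None:
            x, y = lane_centerline(s)             # in a lane: follow it
        else:
            x += speed * dt * math.cos(heading)   # else: hold heading
            y += speed * dt * math.sin(heading)
        out.append((x, y))
    return out
```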

5. PLANNING AND VEHICLE BEHAVIORS

In this section, we describe our approach to vehicle behavior generation and route/trajectory planning.

5.1. Overview of Vehicle Behaviors

We adopted a goal-driven/intentional approach to mission planning and the generation of vehicle behaviors. The main goal for mission and behavior generation is to navigate sequentially through a set of checkpoints as prescribed in the DARPA-supplied mission definition files (MDF). Functionality related to autonomous vehicle behaviors is implemented in the AVM.

The main components of the AVM (as depicted in Figure 10) include the mission/behavior supervisor (MBS), which manages the pursuit of the mission goal (and subgoals) and selects and supervises the appropriate behavior mode for execution; the mission/route planner, which generates (and regenerates as needed) high-level route plans based on the road segments and zones defined in the RNDF; the behavior modes & logic, which contains a set of behavior modes, the transitional relationships among them, and the execution logic within each behavior mode; the event generators, which monitor the vehicle and environment state estimation from the WPS and generate appropriate events for the behavioral modes when a prescribed circumstance arises (e.g., an obstacle in the driving lane ahead); and the behavior functions/utilities, which provide common services (e.g., trajectory generation) for the different behavior modes.

Figure 10. Major function blocks in the AVM.

5.2. Behavioral Modes and Supervision

We adopted a finite state machine (FSM)-based discrete-event supervisory control scheme as our primary approach to generating and executing autonomous vehicle behaviors. The FSM-based scheme provides us with a simple, yet structured, approach to effectively model the race rules/constraints and behaviors/tactics, as opposed to conventional rule-based approaches or behavior-based approaches (Arkin, 1998). This scheme allows us to leverage existing supervisory control theories and techniques (Cassandras & Lafortune, 1999; Chen & Lin, 2001b; Chung, Lafortune, & Lin, 1992; Ramadge & Wonham, 1987) to generate safe and optimized behaviors for the vehicle.

Figure 11. Components for behavior generation and execution.

We model autonomous behaviors as different behavior modes. These behavior modes categorize potential race situations and enable optimized logic and tactics to be developed for these situations. We implemented 17 behavior modes, shown in Figure 11, to cover all the basic and advanced behaviors prescribed in the Urban Challenge. Examples of the behavior modes include lane driving, in which the vehicle follows a designated lane based on the sensed lane or road boundaries, and inspecting intersection, in which the vehicle observes the intersection protocol and precedence rules in crossing intersections and merging with existing traffic.


For each behavior mode, a set of customized logic is modeled as an extended FSM [e.g., an FSM with parameters (FSMwP) (Chen & Lin, 2000)], which describes the potential behavior steps, conditions, and actions to be taken. The behavior logic may employ different utility functions (e.g., trajectory planners/driving algorithms and the stop-line procedure) during execution.

Transitions among behavior modes are modeled explicitly as an FSMwP (Chen & Lin, 2000), named the mode transition machine (MTM), in which guard conditions and potential actions/consequences for the transitions are expressed. The MTM is used by the behavior supervisor in the MBS to determine the appropriate behavior mode to transition to during execution.
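The following Python sketch illustrates an MTM in this spirit: transitions carry guard predicates over state parameters, and the supervisor fires the first enabled transition. It is a toy fragment; beyond the mode names mentioned in the text, the guards and state keys are invented for illustration.

```python
# Mode transition machine with guarded transitions (illustrative sketch,
# not the actual TerraMax MTM).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transition:
    source: str
    target: str
    guard: Callable[[dict], bool]   # predicate over state parameters

MTM = [
    Transition("LaneDriving", "InspectingIntersection",
               lambda s: s.get("approaching_stop_line", False)),
    Transition("InspectingIntersection", "LaneDriving",
               lambda s: s.get("intersection_clear", False)),
    Transition("LaneDriving", "RoadBlockRecovery",
               lambda s: s.get("road_blocked", False)),
]

def step_mode(mode: str, state: dict) -> str:
    """Return the next behavior mode for the current mode and state."""
    for t in MTM:
        if t.source == mode and t.guard(state):
            return t.target
    return mode   # no enabled transition: stay in the current mode

# Example: the supervisor leaves LaneDriving when a stop line nears.
assert step_mode("LaneDriving",
                 {"approaching_stop_line": True}) == "InspectingIntersection"
```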

The generation and control of the vehicle behavior may be formulated as a supervisory control problem. We adopted the concepts of safety control (Chen & Lin, 2001a) and optimal effective control (Chen & Lin, 2001b) for FSMwP, in which the traffic rules and protocols are formulated as safety constraints and the current mission subgoal (e.g., checkpoint) as the effective measure to achieve. However, to improve real-time performance during execution, we manually implemented a simplified supervisor that does not require explicit expansion of supervisor states (Chung et al., 1992) by exploiting the structure of the MTM.

Our FSM-based behavior generation scheme is intuitive and efficient. However, it may suffer from several potential drawbacks, among them reduced robustness in handling unexpected situations and a lack of "creative" solutions/behaviors. To mitigate the potential concern in handling unexpected situations, we included an unexpected behavior mode (RoadBlockRecovery mode) and instituted exception-handling logic to try to bring the vehicle to a known state (e.g., on a known road segment or zone). Through our field testing and participation in official events, we found this unexpected behavior mode to be generally effective in ensuring robust autonomous operation of the vehicle. A more robust unexpected behavior mode based on "nonscripted" techniques, such as behavior-based approaches (Arkin, 1998), may be introduced in the future to handle unexpected situations. This hybrid approach would strike a balance between simplicity/consistency and flexibility/robustness of behaviors.

5.3. Route Planning

The objective of the route planning component is to generate an ordered list of road segments, among the ones defined in the RNDF, that enables the vehicle to visit the given set of checkpoints, in sequence, at the least perceived cost in the currently sensed environment. We implemented the route planner as a derivative of the well-known Dijkstra's algorithm (Cormen, Leiserson, & Rivest, 1990), a greedy search approach to solving the single-source shortest-path problem.

We used the estimated travel time as the base cost for each road segment, instead of the length of the road segment. This modification allowed us to effectively take into account the speed limits of the road segments (both MDF-specified and self-imposed due to the large vehicle's constraints) and the traffic conditions the vehicle may have experienced on the same road segment previously traveled (during the same mission run). Meanwhile, any perceived road information, such as road blockages and (static) obstacles, is also factored into the cost of the road.
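A minimal sketch of Dijkstra's algorithm with such travel-time costs follows; the per-segment penalty factor is an illustrative stand-in for the perceived-cost adjustments described above.

```python
# Route planning with travel-time edge costs (illustrative sketch).
import heapq

def plan_route(graph, start, goal):
    """graph[u] = list of (v, length_m, speed_mps, penalty) tuples;
    penalty = 1.0 for a clear segment, > 1.0 where congestion or
    obstacles were observed, very large for a blocked segment."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, length_m, speed_mps, penalty in graph.get(u, []):
            nd = d + (length_m / speed_mps) * penalty   # travel time (s)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal != start and goal not in prev:
        return []                         # goal unreachable
    path, node = [goal], goal
    while node != start:                  # walk predecessors backward
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```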

5.4. Trajectory Planning

The trajectory planner generates a sequence of dense and drivable waypoints, with their corresponding target speeds, for the AVD to execute, given a pair of start/end waypoints and, possibly, a set of intermediate waypoints. Alternatively, the trajectory planner may also prescribe a series of driving commands (which include steering direction and travel distance). Instead of using a general-purpose trajectory/motion planner for all behavior modes, we implemented the trajectory planning capabilities as a collection of trajectory planner utility functions that may be called upon by different behavior modes depending on the current vehicle and mission situation. Our approach exploited the specific driving conditions of different behavior modes for efficient and consistent trajectory planning.

We implemented four different types of trajectory planners:

• The lane-following trajectory planner utilizes detected/estimated lane and road boundaries to generate waypoints that follow the progression of the lane/road. The target speed for each waypoint is determined by considering the (projected) gap between TerraMax and the vehicle in front, if any, the dynamics of TerraMax (e.g., current speed, limits of acceleration/deceleration), and the speed limit of the road segment (a minimal speed-selection sketch follows this list). Curvatures among waypoints are further checked, adjusted, and smoothed using a spline algorithm (Schoenberg, 1969) to ensure that the waypoints are drivable within TerraMax's dynamic constraints.

• Template-based trajectory planners are a set of trajectory planners that can quickly generate trajectory waypoints by instantiating templates (Horst & Barbera, 2006) with current vehicle and environment state estimates for common maneuvers such as lane changing, passing, swerving, and turning at intersections. A template-based trajectory planner first determines the target speed for (each segment of) the maneuver, using a method similar to that of the lane-following trajectory planner, and applies the target speed to the parameterized trajectory template in generating the waypoints.

• Rule-based trajectory planners utilize a set of simple driving and steering heuristic rules (Hwang, Meirans, & Drotning, 1993) that mimic the decision process of human drivers in guiding the vehicle into a certain prescribed position and/or orientation, such as for U-turns or parking. Because the rule-based trajectory planners are usually invoked for precision maneuvers, we configured the target speed to a low value (e.g., 3 mph) for maximum maneuverability.

• The open-space trajectory planner provides general trajectory generation and obstacle avoidance in a prescribed open space where obstacles may be present. We adopted a two-level hierarchical trajectory planning approach in which a lattice/A*-based high-level planner (Cormen et al., 1990) provides a coarse set of guiding waypoints to guide the trajectory generation of the low-level real-time path planner (RTPP), which is based on a greedy, breadth-first search algorithm augmented with a set of heuristic rules. As with the rule-based trajectory planners, we configured the target speed of the open-space trajectory planner to a low value (e.g., 5 mph) for maximum maneuverability.
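As referenced in the first bullet, here is a minimal sketch of waypoint target-speed selection combining the lead-vehicle gap, the vehicle's dynamic limits, and the segment speed limit; the acceleration/braking limits and standoff distance are illustrative assumptions, not TerraMax's tuned values.

```python
# Waypoint target-speed selection for lane following (illustrative).
def target_speed(v_now: float, v_limit: float, gap_m: float,
                 a_max: float = 1.5, d_max: float = 2.0,
                 standoff_m: float = 8.0, dt: float = 0.1) -> float:
    """Return the next waypoint's target speed (m/s)."""
    # Highest speed from which we can still brake to a stop while
    # leaving a standoff distance to the vehicle ahead: v^2 = 2*d*s.
    usable_gap = max(0.0, gap_m - standoff_m)
    v_gap = (2.0 * d_max * usable_gap) ** 0.5
    # Acceleration-limited change from the current speed.
    v_accel = v_now + a_max * dt
    return max(0.0, min(v_limit, v_gap, v_accel))
```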

6. FIELD PERFORMANCE AND ANALYSIS

In this section, we discuss TerraMax's performance during testing and participation in the Urban Challenge and related events.

6.1. Basic and Advanced Behaviors

TerraMax successfully demonstrated all the basic and advanced behavior requirements set forth by DARPA. In the following, we comment on our experience in implementing some key autonomous behaviors.

Passing: Initially, the trajectory generation of the passing behavior was handled by a template-based passing trajectory planner, which guided the vehicle to an adjacent lane to pass the detected obstacle in its current lane and returned the vehicle to its original lane after passing. We added a swerve planner to negotiate small obstacles (instead of passing them). This modification resulted in smooth and robust passing performance.

U-turn and parking: Through testing, we found that a rule-based approach outperformed a template-based approach for U-turns and parking. Therefore we employed a rule-based parking maneuver, which performed flawlessly in the NQE and UCFE.

Merge and left turn: We adopted a simple approach to inspect traffic in the lanes of interest and determine whether there is a safe gap for executing merge or left-turn behaviors. We employ a simple vehicle kinematic model (Pin & Vasseur, 1990) to predict the possible spatial extent that a moving vehicle in the lane may occupy in the near future (e.g., in the next 10 s) and apply an efficient geometry-based algorithm to check whether that spatial extent intersects TerraMax's intended path. To reduce false (both positive and negative) detections of traffic in a lane, we further fine-tuned the LIDAR sensitivity, employed a multisample voting scheme to determine whether a vehicle is present based on multiple updates of the obstacles reported by the LIDAR, and verified our modifications through an extended series of controlled live-traffic tests. These enhancements resulted in near-perfect merging and left-turn behaviors.
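A minimal sketch of such a gap-acceptance test, using constant-heading propagation and a point-wise clearance check against the intended path; the safety radius and sampling steps are illustrative assumptions.

```python
# Merge gap acceptance: propagate each observed vehicle over the gap
# horizon and reject the merge if any predicted position comes within
# a safety radius of TerraMax's intended path (illustrative sketch).
import math

def gap_is_safe(vehicles, path_points,
                horizon_s=10.0, radius_m=3.0, dt=0.5) -> bool:
    """vehicles: (x, y, heading_rad, speed_mps) tuples.
    path_points: sampled (x, y) points of TerraMax's intended path."""
    r2 = radius_m ** 2
    steps = int(horizon_s / dt)
    for (x, y, heading, speed) in vehicles:
        px, py = x, y
        for _ in range(steps):
            # Constant-heading propagation of the observed vehicle.
            px += speed * dt * math.cos(heading)
            py += speed * dt * math.sin(heading)
            for (qx, qy) in path_points:
                if (px - qx) ** 2 + (py - qy) ** 2 <= r2:
                    return False   # predicted conflict: wait
    return True                    # a full 10 s gap is available: go
```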

6.2. Performance at the NQE

TerraMax participated in multiple runs in Test Areas A, B, and C during the NQE. During these runs, TerraMax successfully completed all the key autonomous maneuvers prescribed for each Test Area. Specifically, in Test Area A, TerraMax demonstrated merging into live traffic and left-turn maneuvers; in Test Area B, TerraMax completed leaving the start chute, traffic circle, zone navigation and driving, parking, passing, and congested road segment maneuvers; in Test Area C, TerraMax demonstrated intersection precedence, queuing, roadblock detection, U-turn, and replanning behaviors.

In Table II, we summarize the performance issues we experienced during the early runs of the NQE and the actions we took to resolve or mitigate these issues for the subsequent runs of the NQE and the UCFE. We next discuss each of the performance items in detail.

Obstacles protruding from the edge of a narrow road could interfere with safety spacing constraints and cause path deviation: During our first run of Test Area A, TerraMax did not stay in the travel lane but drove on the centerline of the road segment. The K-rails on the road closely hugged the lane boundary, appearing to be inside the lane. This prompted obstacle avoidance, and due to our earlier safety-driven decision never to approach an obstacle closer than 0.5 m, the AVM decided to cross the centerline to pass.

Table II. Performance issues and resolutions in NQE.

1. Performance issue: Minor obstacles (K-rails) causing planned path deviation.
   Cause: Safety parameter setting.
   Impact on performance: Vehicle rode centerline.
   Mitigating action(s): Reduce clearance to obstacle; widen path by moving K-rails.
   Lesson learned: Narrow roads are harder to navigate for a large truck.

2. Performance issue: Path updates incorrectly processed.
   Cause: False positives for road edges.
   Impact on performance: Vehicle got stuck in driveway.
   Mitigating action(s): Corrected path updates and readjusted path determination method.
   Lesson learned: Road edge detection and processing can be complex and unreliable.

3. Performance issue: Parking twice in the same spot.
   Cause: Dust clouds behind vehicle perceived as obstacles.
   Impact on performance: Total time taken increased.
   Mitigating action(s): None required.
   Lesson learned: Temporary dust clouds need to be treated differently from other obstacles; high-level obstacle fusion desirable.

4. Performance issue: Traveling too close to parked cars.
   Cause: Real-time path planner not activated in time.
   Impact on performance: Brushing a parked car.
   Mitigating action(s): Modified trajectory planning transition logic.
   Lesson learned: Navigating congested narrow road segments needs sophisticated supervisory logic.

5. Performance issue: Entering intersection prematurely.
   Cause: Timer bug.
   Impact on performance: Incorrect precedence at intersection.
   Mitigating action(s): Fixed timer bug.
   Lesson learned: Simple bugs could cause large-scale performance impact.

6. Performance issue: Incorrect tracking at intersections.
   Cause: Vehicle hidden by another.
   Impact on performance: Incorrect precedence at intersection.
   Mitigating action(s): Implemented obstacle caching; better sensor fusion and tracking.
   Lesson learned: Simple sensor processing is inadequate in complex settings.

7. Performance issue: Queuing too close/erratic behaviors.
   Cause: LIDAR malfunction.
   Impact on performance: Nearly ran into the vehicle in front.
   Mitigating action(s): Fixed bug in sensor health monitoring.
   Lesson learned: Critical system services have to be constantly monitored.


Unreliable path/lane updates could cause incorrect travel lane determination: During the first run of Test Area B, TerraMax pulled into the driveway of a house and, after a series of maneuvers, successfully pulled out of that driveway but led itself right into the driveway next door, where it stalled and had to be manually repositioned. This time-consuming excursion was primarily triggered by an incorrect path update and a subsequent incorrect travel lane selection. We resolved the faulty path update and readjusted the relative importance of the different information sources used to determine the travel lane.

Dust clouds and other false-positive "transient" obstacles could result in unexpected behaviors: Unexpected behaviors observed in the dirt lots of Test Area B can be attributed to dust clouds having been sensed as obstacles. These behaviors included parking twice in both runs, a momentary stop, and "wandering around" in the dirt lot. On all of these occasions, the system recovered well and suffered no failures other than the superfluous excursions.

More sophisticated free-space trajectory planners (other than simple template-based ones) are needed to negotiate highly congested areas: While navigating a segment of curvy road where many obstacles had been set up during the first run of Test Area B, TerraMax slowly knocked down several traffic cones and brushed a parked vehicle with the left corner of its front bumper. Our postrun study revealed that the RTPP was triggered much less often than desired. To prevent TerraMax from hitting obstacles, especially in congested road segments such as we experienced, we revised the AVM's transition logic and processes between the normal driving modes and the RTPP. With the revisions, the supervisor invoked the RTPP in more situations.

Persistent vehicle tracking is needed for complex situations at intersections: During the second run of Test Area C, TerraMax entered the intersection prematurely on one occasion. This was due to the lack of proper tracking of obstacles/vehicles in our overall system. When the first vehicle entered the intersection, it momentarily occluded the second vehicle. The AVM could not recognize that the newly observed obstacle was in fact the vehicle that had been there before. We mitigated this problem by refining the obstacle caching and comparison mechanism.
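The caching mechanism can be sketched as follows: obstacles that vanish from the sensor stream are retained for a grace period, and a reappearing detection near a cached entry inherits its identity, preserving intersection precedence across a brief occlusion. The grace period and matching distance below are illustrative assumptions.

```python
# Obstacle caching across brief occlusions (illustrative sketch,
# not the actual AVM code).
import time

class ObstacleCache:
    def __init__(self, grace_s: float = 3.0, match_m: float = 2.0):
        self.grace_s = grace_s      # how long an unseen obstacle survives
        self.match_m = match_m      # max distance for identity matching
        self._next_id = 1
        self.cache = {}             # id -> (x, y, last_seen)

    def update(self, detections, now=None):
        """detections: list of (x, y); returns one stable id per detection."""
        now = time.time() if now is None else now
        ids = []
        for (x, y) in detections:
            best, best_d = None, self.match_m
            for oid, (cx, cy, _) in self.cache.items():
                d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                if d <= best_d:
                    best, best_d = oid, d
            if best is None:        # genuinely new obstacle
                best = self._next_id
                self._next_id += 1
            self.cache[best] = (x, y, now)
            ids.append(best)
        # Expire entries not re-observed within the grace period.
        self.cache = {oid: v for oid, v in self.cache.items()
                      if now - v[2] <= self.grace_s}
        return ids
```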

A proper response should be designed to prepare for subsystem malfunction: In the second run of Test Area C, TerraMax started to behave erratically after one-third of the mission was completed. It drove very close to the obstacles (and K-rails) at the right side of the lane, and it almost hit a vehicle queued in front of it at the intersection. In analysis, we discovered that the front-right LIDAR had malfunctioned and did not provide any obstacle information. This, combined with the fact that stereo obstacle detection had not been enabled, meant that TerraMax could not detect obstacles at the front right at close range. Fortunately, there was some overlapping coverage from the front-left LIDAR, which was functioning correctly. This overlapping coverage picked up the vehicle queued in front of TerraMax at the intersection, so the supervisor stopped TerraMax just in time to avoid hitting it. Stereo obstacle detection had not been turned on because the team did not have time to tune the thresholds before the start of this run.

6.3. Performance at the UCFE

TerraMax completed the first four sub-missions of Mission 1 of the UCFE with impressive performance. However, TerraMax was stopped after a failure in the parking lot (zone 61). Table III summarizes TerraMax's performance statistics prior to the parking lot event.

The arrival into the parking lot was normal. No obstacles were detected in the path of TerraMax, and therefore an s-shaped (farmer) turn was issued that brought TerraMax into alignment with the target parking space [Figure 12(a)]. TerraMax continued the parking maneuver successfully and pulled out of the parking space without error [Figure 12(c)]. TerraMax backed out of the parking space to the right so that it would face the exit of the zone when finished exiting the parking spot.

There were two separate problems in the parking lot. The first was that TerraMax stalled in the parking lot for a very long time (approximately 17 min). The second was that when TerraMax eventually continued, it no longer responded to commands from the AVM and eventually had to be stopped.

Table III. Performance statistics for UCFE prior to the parking lot.

Mission 1 (first four sub-missions):
  Total time: 0:40
  Top speed: 21 mph
  Oncoming vehicles encountered: 47
  Oncoming autonomous vehicles encountered: 7

Behavior (Pass / Fail):
  Int. precedence: 9 / 0
  Vehicle following: 0 / 0
  Stop queue: 1 / 0
  Passes: 1 / 0
  Road blocks/replan: 0 / 0
  Zone: 1 / 0
  Park: 1 / 0
  Merge(a): 1 / 1

(a) Failed cases indicate that traffic flow was impeded by TerraMax during the merge.


Figure 12. UCFE parking lot: Successful parking maneuvers.

Stall condition. The RTPP produced paths that contained duplicate waypoints while driving in a zone, resulting in a stall. The bug had been introduced into the RTPP during prerace modifications. It was quickly corrected after the UCFE and verified when we retested at Victorville in December 2007.

Unresponsive vehicle. TerraMax recovered after stalling for more than 17 min. GPS drift "moved" the position of TerraMax to a point where the RTPP returned four points (two pairs of duplicate waypoints). The open-space trajectory planner then commanded TerraMax to drive at 5 mph in a straight path toward the parking lot boundary. However, the order of the commanded duplicate waypoints in the drive command caused the AVD service to fault at the point where the vehicle had already accelerated to approximately 1 mph. At this point, TerraMax became unresponsive to subsequent commands and continued to drive in a straight line toward a building until an E-stop pause was issued by DARPA officials.
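Both failures trace back to duplicate waypoints reaching the drive pipeline. A defensive guard of the following kind (illustrative only, not the actual fix) would collapse duplicates before any drive command is issued:

```python
# Defensive waypoint validation (illustrative sketch): collapse
# consecutive near-duplicate waypoints before issuing a drive command.
def dedup_waypoints(waypoints: list[tuple[float, float]],
                    eps_m: float = 0.01) -> list[tuple[float, float]]:
    out = []
    for (x, y) in waypoints:
        if out:
            px, py = out[-1]
            if ((x - px) ** 2 + (y - py) ** 2) ** 0.5 < eps_m:
                continue          # drop the duplicate point
        out.append((x, y))
    return out
```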

6.4. Return to Victorville

We were unable to acquire full performance metrics during the UCFE due to the premature finish. Therefore we brought TerraMax back to Victorville on December 13, 2007, for a full test. Although we were not able to use the entire UCFE course, we used the UCFE RNDF, limited the missions to the housing area, and added a parking spot in Zone 65. The MDF created for these missions used the speed limits from the MDF for the first mission of the UCFE. TerraMax ran with the software version used in the UCFE; no revisions were made. The team ran three missions totaling more than 78 miles in 7 h and 41 min, for an average speed of 10.17 mph. Six test vehicles driven by team members acted as other traffic. One safety vehicle followed TerraMax with a remote e-stop transmitter at all times during autonomous operation. We list performance statistics for the three missions in Table IV.

Table IV. Performance statistics for Victorville test missions.

                                Mission 1    Mission 2    Mission 3
Total distance                  24.9 miles   19.5 miles   33.8 miles
Total time                      2:28         1:55         3:18
Average speed                   10.09 mph    10.17 mph    10.24 mph
Oncoming vehicles encountered
  in opposite lane              42           92           142
Intersections                   84           108          193

(Pass / Fail)
Int. precedence(a)              17 / 3       8 / 0        20 / 2
Vehicle following               3 / 0        1 / 0        2 / 0
Stop queue                      2 / 0        1 / 0        1 / 0
Passes                          5 / 0        3 / 0        1 / 0
Road blocks/replan              0 / 0        1 / 0        2 / 0
Zone                            5 / 0        4 / 0        7 / 0
Park                            3 / 0        2 / 0        2 / 0
Merge(b)                        7 / 1        5 / 0        10 / 1

(a) In all cases, failed intersection precedence was due to the test vehicle(s) being more than 0.5 m behind the stop line.
(b) Merge results include left turns across the opposing lane and left and right turns into traffic from a stop with other vehicles present. Failed cases indicate that traffic flow was impeded by TerraMax during the merge.


7. CONCLUDING REMARKS

Team Oshkosh entered the DARPA Urban Challenge with the intention of finishing the event as a top contender. Despite not completing the final event, the team believes that TerraMax performed well and safely up to the point at which the vehicle was stopped. TerraMax proved to be a very capable contender and is arguably the only vehicle platform in the competition that is relevant for military logistics missions.

Through the development, testing, and official events, we experimented with and demonstrated autonomous truck operations on (controlled) urban streets of California, Wisconsin, and Michigan under various climate conditions. In these experiments, TerraMax exhibited all the autonomous behaviors prescribed by the DARPA Urban Challenge rules, negotiating urban roads, intersections, and parking lots and interacting with manned and unmanned traffic while observing traffic rules, with impressive performance. Throughout this endeavor, we gained valuable experience and learned lessons, which we summarize in the following.

Course visualizer/simulator efforts truly paid off. Learning from our experience in the 2005 DARPA Grand Challenge, we invested time and effort up front to develop a graphic tool with a two-dimensional capability for visualizing the RNDF and MDF. We later expanded the functionality of the tool to include mission data log playback, sensor information display, a built-in debugging capability that displays the autonomous mode status, logic, and calculations of the AVM, and simple simulation of TerraMax operations.

This tool served as a true "force multiplier" by allowing team members in widely dispersed geographic locations to verify different functionality and designs, experiment with different ideas and behavioral modes, pretest the software implementation prior to testing onboard TerraMax, and perform postrun analysis to resolve issues. The tool not only sped up our development and testing efforts but also enabled us to quickly identify the causes of issues encountered during NQE runs and to promptly develop solutions to address them.
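As an illustration only (the team's tool was far richer, with playback, debugging, and simulation layers), the core of such a two-dimensional viewer can be bootstrapped in a few lines. The sketch below assumes the RNDF has already been parsed into a list of (lat, lon) waypoints and that matplotlib is available; both are our assumptions, not the team's design.

```python
import matplotlib.pyplot as plt

def plot_waypoints(waypoints, title="RNDF overview"):
    """Render a minimal 2-D top-down view of parsed RNDF waypoints.

    `waypoints` is a list of (lat, lon) tuples. A real tool would also
    draw lane connectivity, stop lines, checkpoints, zone perimeters,
    and live or logged vehicle state on top of this base layer.
    """
    lats = [p[0] for p in waypoints]
    lons = [p[1] for p in waypoints]
    plt.figure(figsize=(8, 8))
    plt.plot(lons, lats, ".", markersize=2)
    plt.xlabel("longitude")
    plt.ylabel("latitude")
    plt.title(title)
    plt.gca().set_aspect("equal")
    plt.show()
```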

Simplicity worked well in U-turns and parking. Instead of using a full-fledged trajectory planner/generator for maneuvers such as U-turns and parking, we opted to search for simple solutions for such situations. Our experiments with U-turns prior to the site visit clearly indicated the superior performance, simplicity, elegance, and agility of a rule-based approach over a template-based one. The rule-based approach, which mimics a human driver's actions and decision process in performing these maneuvers, was our main approach for both U-turn and parking maneuvers and performed flawlessly in all of our site visit, NQE, and UCFE runs.
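To make the idea concrete, a rule-based three-point turn can be expressed as a small state machine that applies the same rules a human driver would. The states, thresholds, and command interface below are illustrative assumptions for a sketch, not the team's implementation.

```python
from enum import Enum, auto

class UTurnState(Enum):
    FORWARD_LEFT = auto()   # pull toward the far road edge at full left lock
    REVERSE_RIGHT = auto()  # back up with opposite lock
    FORWARD_OUT = auto()    # straighten out into the opposing lane
    DONE = auto()

def uturn_step(state, dist_to_edge_m, heading_err_deg):
    """One decision tick of a rule-based three-point turn.

    Returns (steer_cmd, speed_cmd_mph, next_state), where steer_cmd is
    normalized to [-1, 1] (positive = left). Creep forward at full lock,
    reverse when the road edge gets close, and exit once roughly aligned
    with the opposite direction of travel. All thresholds are hypothetical.
    """
    if state is UTurnState.FORWARD_LEFT:
        if dist_to_edge_m < 1.0:                 # nearly at the far edge
            return -1.0, -2.0, UTurnState.REVERSE_RIGHT
        return 1.0, 2.0, state                   # full left lock, creep forward
    if state is UTurnState.REVERSE_RIGHT:
        if abs(heading_err_deg) < 45.0:          # mostly turned around
            return 1.0, 2.0, UTurnState.FORWARD_OUT
        return -1.0, -2.0, state                 # full right lock, reverse
    if state is UTurnState.FORWARD_OUT:
        if abs(heading_err_deg) < 5.0:           # aligned with the new lane
            return 0.0, 0.0, UTurnState.DONE
        return 0.5, 2.0, state
    return 0.0, 0.0, UTurnState.DONE
```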

Better persistent object tracking capability is needed. In the original design, object tracking was to be performed at a high level. Due to time constraints, however, the object tracking responsibility was delegated to the processing modules of the individual sensor elements. As demonstrated in our first two runs of Test Area C, a persistent object tracking capability is required to handle situations in which objects may be temporarily obstructed from observation. For efficiency and practicality, however, this persistent tracking mechanism should focus only on objects that are both relevant and important. Though we implemented a less capable alternative that maintained persistent tracking of vehicles at the intersection and yielded satisfactory results, a systematic approach to addressing this shortfall is needed.
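A common way to realize the kind of interim scheme described above is to let a track "coast" on its last estimate for a bounded time when no sensor returns associate with it, rather than deleting it at the first missed detection. The sketch below, with invented field names and timeout, illustrates the idea.

```python
import time

class CoastingTrack:
    """Keep a vehicle track alive through short occlusions.

    When an object (e.g., a car queued at an intersection) is briefly
    masked by another vehicle, hold and propagate its last state instead
    of deleting the track, and drop it only after a timeout. The 3 s
    timeout and constant-velocity model are illustrative assumptions.
    """
    COAST_TIMEOUT_S = 3.0

    def __init__(self, track_id, position, velocity):
        self.track_id = track_id
        self.position = position      # (x, y) in meters
        self.velocity = velocity      # (vx, vy) in m/s
        self.last_seen = time.monotonic()

    def update(self, position, velocity):
        """Refresh the track from a new associated detection."""
        self.position, self.velocity = position, velocity
        self.last_seen = time.monotonic()

    def coast(self, dt):
        """Propagate the last estimate forward while unobserved."""
        self.position = (self.position[0] + self.velocity[0] * dt,
                         self.position[1] + self.velocity[1] * dt)

    def expired(self):
        return time.monotonic() - self.last_seen > self.COAST_TIMEOUT_S
```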

Our sensor technology proved capable, but additional work is required to meet all the challenges. The use of passive sensors is one of the primary goals of our vehicle design. LIDAR sensors produce highly accurate range measurements; however, vision allows cost-effective sensing of the environment without the use of active signals (Bertozzi, Broggi, & Fascioli, 2006; Bertozzi et al., 2002; Broggi, Bertozzi, Fascioli, & Conte, 1999) and involves no moving parts, which are less desirable in operational environments. The calibration of the vision systems needed special care because many of them were based on stereo technology whose large baseline precluded attaching the two cameras to the same rig. Nevertheless, the calibration of many of them turned out to be absolutely robust, and there was no need to repeat the calibration procedure in Victorville after it had been performed a few months earlier. The calibration of the trinocular system, which is installed inside the cabin, survived a few months of testing and many miles of autonomous driving. The front stereo system was uninstalled a few times for maintenance purposes, and a recalibration was then necessary, including at Victorville. The back stereo system did not need any recalibration, whereas the lateral and rearview systems, being monocular, relied on only a weak calibration, which was simply checked before the race. This is a clear step forward in the usability and durability of vision systems for real-world unmanned systems applications.


Performing low-level fusion between vision and LIDAR data brought considerable improvements in distance measurement for the vision systems, especially at greater distances. Robustness and persistence of results are additional improvements realized by means of this technique. Despite these improvements, LIDAR was used as the primary sensor for obstacle detection, and vision as the primary sensor for road boundary detection, during the final event.
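The distance refinement can be pictured as replacing a stereo range estimate with a corroborating LIDAR return that falls inside the vision detection's bearing window, since stereo range error grows roughly with the square of distance while LIDAR error stays nearly constant. The sketch below is our own simplification, with invented data structures and window size, not the team's fusion pipeline.

```python
def refine_detection_range(vision_bearing_rad, vision_range_m,
                           lidar_points, bearing_window_rad=0.02):
    """Snap a vision detection's range onto corroborating LIDAR returns.

    `lidar_points` is an iterable of (bearing_rad, range_m) returns in
    the same vehicle frame as the vision detection. If any return lies
    within the detection's bearing window, prefer the nearest such
    return; otherwise fall back to the stereo range. The window size
    and fallback policy are illustrative assumptions.
    """
    candidates = [r for (b, r) in lidar_points
                  if abs(b - vision_bearing_rad) < bearing_window_rad]
    if not candidates:
        return vision_range_m        # no corroboration: keep the stereo range
    return min(candidates)           # nearest LIDAR return along that bearing
```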

Lane detection provided consistent data and was able to localize most of the lane markings. Some problems were encountered when lane markings were too worn and in situations in which a red curb was misinterpreted as a yellow line. The specific algorithm used to identify yellow markings had not been tested against red curbs, which exhibit the same invariant features that were selected for yellow lines.
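The failure mode is easy to reproduce: a classifier keyed to chromaticity features that separate yellow paint from white paint and gray asphalt can also accept red curbs, because red and yellow lie close together in those features. The toy HSV test below, with invented thresholds, is only meant to show how a slightly generous hue window lets a red curb through; it is not the algorithm used on TerraMax.

```python
def looks_like_yellow_marking(h, s, v):
    """Toy HSV test for yellow lane paint (h in [0, 360); s, v in [0, 1]).

    The saturation and brightness gates reject white markings and gray
    asphalt, but the hue window is the only feature separating yellow
    from red; a sun-lit red curb (h around 10-20 deg) sits right at its
    lower edge, so a generous window misclassifies the curb as a yellow
    line. All thresholds are illustrative.
    """
    return 20.0 <= h <= 70.0 and s > 0.35 and v > 0.30
```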

Improved road boundary interpretation is needed. Although the sensor system detected lanes and curbs in most situations, problems were encountered in situations in which sensed data differed significantly from the expected road model obtained by interpreting the RNDF data. As a result, TerraMax would cross the centerline, cut corners, or drive off the road in order to reach the next RNDF waypoint.
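One possible remedy is an explicit consistency gate: follow the sensed boundary when it deviates from the RNDF-derived model by more than plausible survey error and perception is confident, and otherwise lean on the model. This sketch, with invented tolerances and a simple confidence-weighted blend, is one way such a gate could look; it is not the approach the team fielded.

```python
def choose_lane_offset(sensed_offset_m, rndf_offset_m,
                       sensed_confidence, max_survey_error_m=1.0):
    """Arbitrate between sensed and RNDF-derived lateral lane offsets.

    If the two disagree by more than the assumed RNDF survey error and
    perception is confident, follow the sensed boundary rather than
    cutting toward the next waypoint; otherwise blend the two, weighted
    by perception confidence. All parameters are illustrative.
    """
    disagreement = abs(sensed_offset_m - rndf_offset_m)
    if disagreement > max_survey_error_m and sensed_confidence > 0.7:
        return sensed_offset_m                   # the world wins over the map
    w = sensed_confidence
    return w * sensed_offset_m + (1.0 - w) * rndf_offset_m
```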

Test for perfection. The team had tested TerraMax extensively in a variety of environments and scenarios; however, the NQE and UCFE differed sufficiently from our testing situations that we were required to make last-minute revisions to the system that were not extensively tested. Unfortunately, these last-minute revisions for the NQE adversely impacted performance in the UCFE.

The DARPA Urban Challenge, like all the DARPA Grand Challenges, proved to be an excellent framework for Team Oshkosh's development of unmanned vehicles. We experimented with and developed many elegant solutions for practical military large-footprint autonomous ground vehicle operations in urban environments. The system developed by Team Oshkosh for the Urban Challenge has been shown to be robust and extensible, and its modular architecture makes it an excellent base to which additional capabilities can be added.

REFERENCES

Arkin, R. C. (1998). Behavior-based robotics. Cambridge, MA: MIT Press.

Bertozzi, M., Bombini, L., Broggi, A., Cerri, P., Grisleri, P., & Zani, P. (2008). GOLD: A complete framework for developing artificial vision applications for intelligent vehicles. IEEE Intelligent Systems, 23(1), 69–71.

Bertozzi, M., & Broggi, A. (1998). GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Transactions on Image Processing, 7(1), 62–81.

Bertozzi, M., Broggi, A., Cellario, M., Fascioli, A., Lombardi, P., & Porta, M. (2002). Artificial vision in road vehicles. Proceedings of the IEEE—Special Issue on Technology and Tools for Visual Perception, 90(7), 1258–1271.

Bertozzi, M., Broggi, A., & Fascioli, A. (2006). VisLab and the evolution of vision-based UGVs. IEEE Computer, 39(12), 31–38.

Bertozzi, M., Broggi, A., Medici, P., Porta, P. P., & Sjogren, A. (2006). Stereo vision-based start-inhibit for heavy goods vehicles. In Proceedings of the IEEE Intelligent Vehicles Symposium 2006, Tokyo, Japan (pp. 350–355). Piscataway, NJ: IEEE.

Braid, D., Broggi, A., & Schmiedel, G. (2006). The TerraMax autonomous vehicle. Journal of Field Robotics, 23(9), 693–708.

Broggi, A., Bertozzi, M., Fascioli, A., & Conte, G. (1999). Automatic vehicle guidance: The experience of the ARGO vehicle. Singapore: World Scientific.

Broggi, A., Medici, P., & Porta, P. P. (2007). StereoBox: A robust and efficient solution for automotive short-range obstacle detection. EURASIP Journal on Embedded Systems—Special Issue on Embedded Systems for Intelligent Vehicles, Vol. 2007.

Caraffi, C., Cattani, S., & Grisleri, P. (2007). Off-road path and obstacle detection using decision networks and stereo vision. IEEE Transactions on Intelligent Transportation Systems, 8(4), 607–618.

Cassandras, C., & Lafortune, S. (1999). Introduction to discrete event systems, 2nd ed. Boston: Kluwer.

Chen, Y.-L., & Lin, F. (2000). Modeling of discrete event systems using finite state machines with parameters. In Proceedings of the 9th IEEE International Conference on Control Applications, Anchorage, AK (pp. 941–946). Piscataway, NJ: IEEE.

Chen, Y.-L., & Lin, F. (2001a). Safety control of discrete event systems using finite state machines with parameters. In Proceedings of the 2001 American Control Conference (ACC), Arlington, VA (pp. 975–980). Piscataway, NJ: IEEE.

Chen, Y.-L., & Lin, F. (2001b). An optimal effective controller for discrete event systems. In Proceedings of the 40th IEEE Conference on Decision and Control (CDC), Orlando, FL (pp. 4092–4097). Piscataway, NJ: IEEE.

Chung, S.-L., Lafortune, S., & Lin, F. (1992). Limited lookahead policies in supervisory control of discrete event systems. IEEE Transactions on Automatic Control, 37(12), 1921–1935.

Cormen, T., Leiserson, C., & Rivest, R. (1990). Introduction to algorithms. Cambridge, MA: MIT Press.

Horst, J., & Barbera, A. (2006). Trajectory generation for an on-road autonomous vehicle. In Proceedings of SPIE: Unmanned Systems Technology VIII, Vol. 6230.

Hwang, Y. K., Meirans, L., & Drotning, W. D. (1993). Motion planning for robotic spray cleaning with environmentally safe solvents. In Proceedings of the IEEE International Workshop on Advanced Robotics, Tsukuba, Japan (pp. 49–54). Piscataway, NJ: IEEE.

Kaempchen, N., Buhler, M., & Dietmayer, K. (2005). Feature-level fusion for free-form object tracking using laserscanner and video. In Proceedings of the 2005 IEEE Intelligent Vehicles Symposium, Las Vegas, NV (pp. 453–458). Piscataway, NJ: IEEE.

Labayrade, R., Aubert, D., & Tarel, J. P. (2002). Real time obstacle detection in stereo vision on non-flat road geometry through V-disparity representation. In Proceedings of the IEEE Intelligent Vehicles Symposium, Versailles, France (Vol. II, pp. 646–651). Piscataway, NJ: IEEE.

Lee, K., & Lee, J. (2004). Generic obstacle detection on roads by dynamic programming for remapped stereo images to an overhead view. In Proceedings of ICNSC 2004, Taipei, Taiwan (Vol. 2, pp. 897–902). Piscataway, NJ: IEEE.

Pin, F. G., & Vasseur, H. A. (1990). Autonomous trajectory generation for mobile robots with non-holonomic and steering angle constraints. In Proceedings of the IEEE International Workshop on Intelligent Motion Control, Istanbul, Turkey (pp. 295–299). Piscataway, NJ: IEEE.

Ramadge, P. J., & Wonham, W. M. (1987). Supervisory control of a class of discrete event processes. SIAM Journal on Control and Optimization, 25(1), 206–230.

Schoenberg, I. J. (1969). Cardinal interpolation and spline functions. Journal of Approximation Theory, 2, 167–206.

Sundareswaran, V., Johnson, C., & Braid, D. (2006). Implications of lessons learned from experience with large truck autonomous ground vehicles. In Proceedings of AUVSI 2006, Orlando, FL. Arlington, VA: AUVSI.

Team Oshkosh. (2007). DARPA Urban Challenge technical report. Oshkosh Corp. www.darpa.mil/grandchallenge/TechPapers/Team Oshkosh.pdf.

Wender, S., Weiss, T., Dietmayer, K., & Fuerstenberg, K. (2006). Object classification exploiting high level maps of intersections. In Proceedings of the 10th International Conference on Advanced Microsystems for Automotive Applications (AMAA 2006), Berlin, Germany. New York: Springer.
