
Overtaking & Receding Vehicle Detection for Driver Assistance and Naturalistic Driving Studies

Ravi Kumar Satzoda and Mohan M. Trivedi

Abstract— Although on-road vehicle detection is a well-researched area, overtaking and receding vehicle detection with respect to (w.r.t) the ego-vehicle is less addressed. In this paper, we present a novel appearance-based method for detecting both overtaking and receding vehicles w.r.t the ego-vehicle. The proposed method is based on Haar-like features that are classified using Adaboost-cascaded classifiers, which result in detection windows that are tracked in two directions temporally to detect overtaking and receding vehicles. A detailed and novel evaluation method is presented with 51 overtaking and receding events occurring in 27000 video frames. Additionally, an analysis of the detected events is presented, specifically for naturalistic driving studies (NDS), to characterize the overtaking and receding events during a drive. To the best knowledge of the authors, this automated analysis of overtaking/receding events for NDS is the first of its kind in the literature.

I. MOTIVATION & SCOPE

According to the National Highway Traffic Safety Administration (NHTSA), multi-vehicle crashes involving the ego-vehicle and surrounding vehicles result in over 68% of fatalities [1]. Vehicle detection [2], [3], [4], [5], lane detection [6], [7], [8], pedestrian detection [9] and higher order tasks involving lanes and vehicles such as trajectory analysis [10], [11], [12], using different sensing modalities, are therefore active areas of research for automotive active safety systems, and also in offline data analysis for naturalistic driving studies (NDS) [13], [14], [15].

A. Related Work

Among the different sensor modalities, monocular camera-based vehicle detection is being extensively researched because of the increasing trend towards understanding the surround scene of the ego-vehicle using multiple cameras [16]. While on-road vehicle detection techniques such as [2], [3], [4] detect vehicles that are visible partially or fully with varying levels of accuracy and efficiency, techniques such as [17], [18], [19] deal particularly with overtaking vehicles, which are relatively less explored [17]. Fig. 1 defines the terms overtaking and receding vehicles with respect to the ego-vehicle.

Most related studies such as [20], [19], [17], [21] have primarily focused on detecting overtaking vehicles only. Vehicles that are receding w.r.t the ego-vehicle are not addressed in any of these works. Another observation is that overtaking vehicle detection methods can be classified into two broad categories depending on the type of overtaking event that is being detected. Methods such as [20], [19], [17] detect the partially visible overtaking vehicles that are passing by the side of the ego-vehicle as seen from the front view of the ego-vehicle. The second category has methods such as [21], [17] that detect overtaking events as seen from the rear-view of the ego-vehicle.

Ravi Kumar Satzoda and Mohan M. Trivedi are with the University of California San Diego, La Jolla, CA, USA. [email protected], [email protected]

Fig. 1. Definition of overtaking vehicle $V_O$ and receding vehicle $V_R$ with respect to ego-vehicle $V_E$.

Motion cues have been used extensively in different ways to detect overtaking vehicles. This is understandable considering that the optical flow of the overtaking vehicles is in the opposite direction compared to the background. However, this also limits the use of the same methods for detecting receding vehicles, because the optical flow of the receding vehicles will be in the same direction as the background. Furthermore, a study of recent works shows that there are no standard methodologies or datasets available for evaluating overtaking/receding vehicle detection techniques. Each study reports its evaluation in a different way, using different numbers of sequences or frames, and different metrics. This aspect of evaluation should also be addressed for benchmarking purposes.

B. Problem Statement & Scope of Study

In this paper, we focus on monocular vision based techniques for detecting both overtaking and receding vehicles. More specifically, this work is related to detecting vehicles that are partially visible in the front-view of the ego-vehicle as they either emerge from the blind-spot and pass the ego-vehicle, or recede into the blind-spot as the ego-vehicle passes them (Fig. 1). A vehicle is considered to be overtaking from the moment it first appears in the side view from the ego-vehicle until more than 10% of the rear of the overtaking vehicle is seen from the ego-vehicle. The receding vehicle is defined similarly. In the forthcoming sections, we present an appearance based method that is used to temporally track the vehicles that are either overtaking or receding with respect to the ego-vehicle. The Haar-Adaboost feature classification, coupled with a bi-directional temporal tracking of the output appearance windows, enables the detection of both overtaking and receding vehicles. A comprehensive evaluation of the proposed technique is presented using real-world driving datasets. Additionally, the proposed method is extended further to characterize the overtaking and receding events for naturalistic driving studies (NDS) [13]. Based on the available literature on NDS, this characterization is the first study of its kind.

II. OVERTAKING & RECEDING VEHICLE DETECTION METHOD

When a vehicle is overtaking the ego-vehicle, it is partially visible as shown in Fig. 2. Similarly, when a vehicle is receding w.r.t the ego-vehicle, the vehicle appears partially until it disappears from the view of the ego-vehicle. In this section, we first explain the feature extraction and classifier training that are used to learn the classifiers, which will be used to detect the partially visible vehicles. Then, we explain the proposed detection process for overtaking vehicles, followed by receding vehicles.

Fig. 2. (a) Regions $I^L$ and $I^R$ in the image where the overtaking vehicle $V_O$ and receding vehicle $V_R$ will appear with respect to the ego-vehicle $V_E$; (b) feature extraction windows $I^L_S$ and $I^L_P$ that are used to train two classifiers.

A. Learning Stage: Feature Extraction & Classifier Training

The proposed method uses Haar-like features and two Adaboost classifiers to detect partially visible overtaking and receding vehicles. Given an input image $I$ of size $M \times N$ (columns by rows), we define the two regions $I^L$ and $I^R$ as shown in Fig. 2(a). In order to generate the ground truth annotations for training, we have considered 2500 training images in which the side view of the vehicle is visible within $I^L$ only. Annotation windows $I^L_S$ that encapsulate the entire side view of the vehicle in $I^L$ are marked as shown by the red window in Fig. 2(b). A second window, denoted by $I^L_P$, is generated from $I^L_S$ by considering the front half of $I^L_S$, as shown in Fig. 2(b). It is to be noted that the annotation windows $I^L_S$ and $I^L_P$ correspond to the full side view and the front-part side view, respectively, of the overtaking vehicle to the left of the ego-vehicle, i.e. in $I^L$. The full side view sample $I^L_S$ is flipped to generate the training samples for overtaking vehicles in the right region $I^R$. Therefore, we get $I^R_S$ and $I^R_P$ using the following:

$$I^R_S(x, y) = I^L_S(M_S - x + 1, y), \quad I^R_P(x, y) = I^L_P(M_P - x + 1, y) \tag{1}$$

where $M_S$ and $M_P$ are the number of columns in $I^L_S$ and $I^L_P$ respectively. Given $I^L_S$ and $I^L_P$, and $I^R_S$ and $I^R_P$, Haar-like features are extracted from each training sample, which are then used to learn two Adaboost cascaded classifiers for each region, resulting in the four classifiers $C^L_S$, $C^L_P$, $C^R_S$ and $C^R_P$ for $I^L_S$, $I^L_P$, $I^R_S$ and $I^R_P$ respectively. The proposed method is aimed at detecting overtaking vehicles in $I^L$ and $I^R$ both adjacent to the ego-vehicle and in lanes that are laterally farther away from the ego-vehicle. Therefore, the training samples also include vehicles that are overtaking the ego-vehicle in the farther lanes (like the white sedan in $I^L$ in Fig. 2(a)). Such a selection also ensures that the classifiers are trained with vehicles of multiple sizes.
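As an illustration of Eq. (1), the following minimal Python sketch mirrors a left-region annotation patch to synthesize the corresponding right-region training sample. The patch size and contents are placeholders, and the actual cascade training (e.g. with OpenCV's opencv_traincascade tool) is outside this sketch.

    import numpy as np

    def mirror_training_sample(patch_left: np.ndarray) -> np.ndarray:
        """Synthesize a right-region sample from a left-region one per Eq. (1):
        I_S^R(x, y) = I_S^L(M_S - x + 1, y). For a (rows, cols) image patch
        this amounts to a horizontal flip."""
        return patch_left[:, ::-1].copy()

    # Example with a stand-in 48x96 grayscale patch (hypothetical size).
    sample_left = np.random.randint(0, 256, (48, 96), dtype=np.uint8)
    sample_right = mirror_training_sample(sample_left)
    assert sample_right.shape == sample_left.shape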

B. Overtaking Vehicle Detection Method

In this section, we discuss the proposed overtaking vehicle detection method. Given an input image frame at time $t$, i.e. $I^{(t)}$, the two regions $I^{L(t)}$ and $I^{R(t)}$ are marked first. The superscript $(t)$ indicates the temporal reference of the frame $I$. Classifiers $C^L_S$ and $C^L_P$ are applied to $I^{L(t)}$, and classifiers $C^R_S$ and $C^R_P$ are applied to $I^{R(t)}$, resulting in possible full or part side view candidates in $I^{L(t)}$ and $I^{R(t)}$. We describe the remainder of the proposed method using the left region $I^{L(t)}$; the same can be applied on $I^{R(t)}$.
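A hedged sketch of this classification step, using OpenCV cascade classifiers as stand-ins for $C^L_S$, $C^L_P$, $C^R_S$ and $C^R_P$; the cascade file names and the region width are assumptions, since the paper does not specify them.

    import cv2

    # Hypothetical cascade files standing in for the four trained classifiers.
    cS_L = cv2.CascadeClassifier("full_side_left.xml")
    cP_L = cv2.CascadeClassifier("part_side_left.xml")
    cS_R = cv2.CascadeClassifier("full_side_right.xml")
    cP_R = cv2.CascadeClassifier("part_side_right.xml")

    def detect_candidates(frame_gray, region_cols=320):
        """Apply the region-specific cascades to I_L(t) and I_R(t), returning
        the candidate sets (W_S, W_P) per side as lists of (x, y, w, h).
        region_cols, the width of each side region, is an assumed value."""
        I_L = frame_gray[:, :region_cols]
        I_R = frame_gray[:, frame_gray.shape[1] - region_cols:]
        return {
            "L": (list(cS_L.detectMultiScale(I_L)), list(cP_L.detectMultiScale(I_L))),
            "R": (list(cS_R.detectMultiScale(I_R)), list(cP_R.detectMultiScale(I_R))),
        }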

Let us consider the case where there is no overtaking vehicle $V_O$ until, in $I^{(t)}$, $V_O$ appears. Let $W_P^{(t)}$ and $W_S^{(t)}$ be the sets of candidate detection windows given by $C^L_P$ and $C^L_S$ respectively. Ideally, $W_S^{(t)}$ will not have any candidate windows because $V_O$ appears only partially in $I^{(t)}$. $W_P^{(t)}$ is defined as a set of windows $W_{Pk}^{(t)}$ such that:

$$W_P^{(t)} = \{W_{P1}^{(t)}, W_{P2}^{(t)}, \ldots, W_{Pk}^{(t)}, \ldots, W_{PN_P}^{(t)}\} \tag{2}$$

where each $k$-th window $W_{Pk}^{(t)}$ denotes the bounding window for the part side-view shown earlier in Fig. 2(b). $W_{Pk}^{(t)}$ is a set of four standard variables, $W_{Pk}^{(t)} = [x\ y\ w\ h]^T$: $x$ and $y$ are the positions of the top left corner of the window (the origin is the top left corner of the image, $x$ is along the horizontal axis, and $y$ is along the vertical axis), while $w$ and $h$ indicate the width (along the $x$ axis) and height (along the $y$ axis) of the bounding window respectively.
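A small helper pair makes this window representation concrete; representing each window as a 4-vector also yields the distance $\|W_a - W_b\|$ used later in the matching criterion of Eq. (5). Treating that norm as Euclidean over $[x\ y\ w\ h]^T$ is an assumption consistent with the notation.

    import numpy as np

    def window_vec(x, y, w, h):
        """A detection window W = [x y w h]^T as in Eq. (2)."""
        return np.array([x, y, w, h], dtype=float)

    def window_distance(Wa, Wb):
        """||Wa - Wb|| over position and size; this is the quantity gated
        by delta_e in Eq. (5)."""
        return float(np.linalg.norm(Wa - Wb))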

Given a detection window for the part side view, say $W_{Pk}^{(t)}$, we first initialize the overtaking event if a part side view window has not occurred in the past $N_O$ frames, i.e.

$$W_P^{(t-1)} \cup \cdots \cup W_P^{(t_i)} \cup \cdots \cup W_P^{(t-N_O)} = \phi \tag{3}$$

where $t - N_O \le t_i < t$. As part of the initialization, given a set of windows $W_P^{(t)}$ in the current frame $I^{(t)}$ that satisfies (3), the window $W_{Pk}^{(t)} \in W_P^{(t)}$ that is closest to the left edge of the image frame is considered as the initialization window, i.e. $W_{Pk}^{(t)}$ satisfies the following:

$$k = \arg\min_j x_j \tag{4}$$

This is considered as the initialization condition because it is assumed that the overtaking vehicle in $I^L$ would first enter from the left edge of the image w.r.t the ego-vehicle.
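A minimal sketch of this initialization rule, assuming the part side-view candidates of the last $N_O$ frames are kept in a simple history list; the names and data layout are illustrative, not from the paper.

    def initialize_overtaking(history, current_windows):
        """Initialize an overtaking event per Eqs. (3)-(4).
        history: candidate-window lists of the past N_O frames (Eq. (3)
        requires them all to be empty).
        current_windows: (x, y, w, h) tuples detected in frame I(t).
        Returns the initialization window, or None."""
        if any(len(wins) > 0 for wins in history):
            return None  # Eq. (3) violated: a part side-view was seen recently
        if not current_windows:
            return None
        return min(current_windows, key=lambda w: w[0])  # Eq. (4): argmin over x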


After getting the initialization window of the part side-view $W_{Pk}^{(t)}$ in the current frame $I^{(t)}$, the overtaking event detection is initialized. However, the overtaking event is not considered detected until the $(t + N_T)$-th frame. In every frame $I^{(t_i)}$ such that $t < t_i \le t + N_T$, we determine the presence of the nearest part side-view windows between adjacent frames. In order to do this, we start with the $t$-th frame, when we initialized the overtaking event. In the $(t+1)$-th frame, we look for a window $W_{Pk}^{(t+1)} \in W_P^{(t+1)}$ such that the $k$-th window satisfies the following minimum distance criterion:

$$k = \arg\min_j \|W_{Pk}^{(t)} - W_{Pj}^{(t+1)}\| \quad \text{such that} \quad \|W_{Pk}^{(t)} - W_{Pj}^{(t+1)}\| < \delta_e \tag{5}$$

where $\delta_e$ caters to the distance and the change in scale, if any, due to the movement of the vehicle. If a window is detected in the $(t+1)$-th frame, the $k$-th window selected using the above condition is used to detect the window in the $(t+2)$-th frame. This process is repeated till the $(t + N_T)$-th frame. An overtaking vehicle is considered to be detected if we find $n_d$ windows within $N_T$ frames from the $t$-th frame. This tracking aids in eliminating any false positive windows that may be detected by the classifiers sporadically.
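The frame-to-frame association of Eq. (5) can be sketched as below; the numerical gating threshold is an assumption, since $\delta_e$ is not specified in the paper.

    import numpy as np

    DELTA_E = 30.0  # assumed value for the gating threshold delta_e (pixels)

    def match_next_frame(W_prev, candidates):
        """Eq. (5): among the candidate windows of the next frame, pick the
        one minimizing ||W(t) - W(t+1)|| and accept it only if that distance
        is below delta_e. Windows are (x, y, w, h) tuples."""
        if not candidates:
            return None
        dists = [float(np.linalg.norm(np.subtract(W_prev, c))) for c in candidates]
        k = int(np.argmin(dists))
        return candidates[k] if dists[k] < DELTA_E else None

Repeating this match over $N_T$ frames and counting $n_d$ successes reproduces the detection decision described above.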

After $n_d$ windows are detected in $N_T$ frames, as the overtaking vehicle moves, the front part side-view of the vehicle becomes less trackable. But the vehicle is still overtaking the ego-vehicle. That is when the second classifier $C^L_S$, i.e. the full side-view classifier, is expected to detect the full side-view of the overtaking vehicle. $C^L_S$ gives the set of windows $W_S^{(t)}$. The conditions for a window $W_{Sk}^{(t)} \in W_S^{(t)}$ to satisfy are:
1) $W_{Pk}$ is already detected, and the overtaking vehicle has been initialized and detected, before $W_{Sk}^{(t)}$ is detected.
2) $W_{Sk}^{(t)}$ meets the minimum $x$-position criterion similar to (4).
3) $W_{Pk}^{(t)}$, if found, must lie within the full side-view window, i.e. $(W_{Pk}^{(t)} \cap W_{Sk}^{(t)}) > p_{overlap}(w_P h_P)$, where $p_{overlap}$ is the percentage overlap between $W_{Pk}^{(t)}$ and $W_{Sk}^{(t)}$.
The first condition eliminates false positive windows from $C^L_S$, because a full side-view is not considered unless the part side-view is already detected. The second condition eliminates false windows that do not appear to be in the side view from the ego-vehicle. The third condition is an additional step to further narrow down the windows from $C^L_S$ in the presence of the part side-view windows detected by $C^L_P$ in the same frame.

Therefore, the proposed method continues to report an overtaking event as long as the following conditions are met (a check of these conditions is sketched below):
1) Presence of the part side-view window $W_{Pk}^{(t)}$ that is constrained by (5) above.
2) Presence of the full side-view window $W_{Sk}^{(t)}$ that occurs after $W_{Pk}^{(t)}$ is detected.
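A minimal sketch of the full side-view acceptance test, assuming axis-aligned (x, y, w, h) windows; the value of $p_{overlap}$ is an assumption since the paper does not give one.

    def accept_full_side_view(W_S, W_P, part_already_tracked, p_overlap=0.5):
        """Check the three acceptance conditions for a full side-view window.
        Condition 2 (minimum x) is assumed to be enforced by the caller when
        selecting W_S among candidates, as in Eq. (4)."""
        if not part_already_tracked:          # condition 1
            return False
        if W_P is not None:                   # condition 3
            xs, ys, ws, hs = W_S
            xp, yp, wp, hp = W_P
            ix = max(0, min(xs + ws, xp + wp) - max(xs, xp))
            iy = max(0, min(ys + hs, yp + hp) - max(ys, yp))
            if ix * iy <= p_overlap * wp * hp:
                return False
        return True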

Fig. 3. (a) Sequence showing an overtaking vehicle in the far left lane with time stamps (frame numbers); the detections of the part side-view (in green) and full side-view (in blue) are shown in the images. (b) Plot showing the results from the part and full side-view classifiers and the overtaking detection decision. A classifier detection of a part or full side-view is indicated by 1. The overtaking detection estimated by the proposed algorithm is set to 2 when the overtaking event is initialized, followed by 3 when the overtaking event is finally detected. The ground truth is also shown.

Fig. 3(a) shows a sample sequence of an overtaking vehicle with time stamps from the beginning of the overtaking event till its end (when 10% of the rear of the overtaking vehicle is visible in the input image frame). The classifier outputs using the part and full side-view classifiers are shown in Fig. 3(a). Fig. 3(b) shows the binarized responses of the classifiers, along with the overtaking detection decision estimated by the proposed method. When the overtaking event initialization takes place, the detection value is set to 2 in the plot in Fig. 3(b). The overtaking event is detected after two frames ($N_T = 3$), when the value is set to 3. The ground truth is also plotted in Fig. 3. It can be seen that the proposed method initializes the overtaking detection at $t = 3$, i.e. 3 time units after the actual overtaking starts (from the ground truth). The overtaking event is further confirmed at $t = 6$ and continues till $t = 24$, as long as the full side-view classifier continues to detect the overtaking vehicle.

Fig. 4. Block diagram for detecting overtaking and receding vehicles. The flow in black is used to detect the overtaking vehicles in the $I^L$ and $I^R$ regions, whereas the flow in red is used to detect the receding vehicles.

C. Receding Vehicle Detection

The proposed algorithm for detecting overtaking vehicles can be extended to detect receding vehicles using the flow diagram shown in Fig. 4. As discussed in the previous sub-section, the classification windows from the part side-view classifier are tracked first, followed by the full side-view classification windows. The flow shown in black arrows from left to right in Fig. 4 enables the detection of overtaking vehicles. If the same rules discussed in Section II-B are applied from the opposite direction, as shown by the red arrows, the same algorithm can be used to detect receding vehicles. In order to detect the receding vehicles, we use the classification outputs from the full side-view classifier $C^L_S$ to initialize the tracking of the receding vehicle at the $t$-th time instance or frame. If $n_d$ full side-view windows are detected within $N_T$ frames from $I^{(t)}$, then we consider a receding vehicle to be detected by the side of the ego-vehicle. As the vehicle recedes further, the detection windows from the part side-view classifier $C^L_P$ are used to continue detecting the receding vehicle till it moves into the blind-spot of the ego-vehicle. The same conditions that are applied on the part and full side-view windows for the overtaking vehicle are interchanged in the case of receding vehicle detection.
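The role swap can be expressed compactly. The sketch below, with an illustrative (not from the paper) EventTracker class, shows how the same two-stage logic detects both event types simply by interchanging which classifier initializes and which confirms.

    from dataclasses import dataclass

    @dataclass
    class EventTracker:
        """Minimal sketch of the bi-directional idea: one classifier's
        windows initialize the event, the other's confirm it. Swapping
        the roles turns the overtaking tracker into a receding tracker."""
        init_name: str
        confirm_name: str
        n_init: int = 0
        detected: bool = False

        def update(self, windows_by_classifier, n_d=3):
            if windows_by_classifier.get(self.init_name):
                self.n_init += 1
            if self.n_init >= n_d and windows_by_classifier.get(self.confirm_name):
                self.detected = True

    # Overtaking: part side-view first, full side-view confirms.
    overtaking = EventTracker(init_name="part", confirm_name="full")
    # Receding: the same rules applied from the opposite direction.
    receding = EventTracker(init_name="full", confirm_name="part")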

D. Relative Velocity Computation

In this sub-section, we use the output of the proposed overtaking and receding vehicle detectors to determine the relative velocity between the ego-vehicle and the neighboring vehicle with which the overtaking/receding event is occurring. This aids in understanding how the overtaking or receding occurred, and is particularly useful for the naturalistic driving studies described in Section IV. We explain the method for the case of overtaking vehicles; similar equations can be derived for receding vehicles.

In order to compute the relative velocity, the image $I^{(t)}$ at a given time instance $t$ is converted to its inverse perspective mapping (IPM) domain [22]. The conversion is performed using the homography matrix [22], which can be computed offline using the camera calibration for a given setup of the camera in the ego-vehicle. The homography matrix is denoted by $H$. Therefore, $H$ is used to compute the IPM view of $I^{(t)}$, which is denoted by $I_W^{(t)}$. Fig. 5 illustrates the images $I^{(t)}$ and $I^{(t+1)}$ at $t$ and $t+1$, and their corresponding IPM views $I_W^{(t)}$ and $I_W^{(t+1)}$.

The proposed overtaking vehicle detection method gives the position of the overtaking vehicle, as shown by the green rectangular windows in $I^{(t)}$ and $I^{(t+1)}$ in Fig. 5. The bottom right positions of these windows, denoted by $P^{(t)}(x+w, y+h)$ and $P^{(t+1)}(x+w, y+h)$, are used along with the homography matrix $H$ to compute the position of the vehicle along the road, i.e. the $y_W$ positions in Fig. 5. The following transformation equation is applied on $P^{(t)}$ and $P^{(t+1)}$:

$$k_W [x_W\ y_W\ 1]^T = H\, [x\ y\ 1]^T \tag{6}$$

which results in $y_W^{(t)}$ and $y_W^{(t+1)}$ respectively. The relative velocity between the ego-vehicle and the overtaking vehicle at time instance $t+1$, for the time period $\Delta = (t+1) - t$, is given by

$$v_r^{(t+1)} = \psi\, \frac{y_W^{(t+1)} - y_W^{(t)}}{\Delta} \tag{7}$$

where $\psi$ is the calibration constant that converts the IPM image coordinates into real-world coordinates (in meters), and $v_r^{(t+1)}$ is the relative velocity in m/s at time $t+1$. The velocities of the two vehicles are assumed to be constant over the short time interval $\Delta$ in this study. However, more complete vehicle dynamics with lateral and longitudinal accelerations can be applied.

Fig. 5. Relative velocity estimation between the ego-vehicle and the neighboring overtaking vehicle.
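Eqs. (6) and (7) translate into a few lines of code; this sketch assumes $H$, $\psi$ and the frame interval are supplied from the offline calibration described above.

    import numpy as np

    def ipm_coord(H, point):
        """Eq. (6): k_W [x_W y_W 1]^T = H [x y 1]^T. Returns (x_W, y_W)
        after dividing out the scale k_W."""
        x, y = point
        p = H @ np.array([x, y, 1.0])
        return p[0] / p[2], p[1] / p[2]

    def relative_velocity(H, P_t, P_t1, psi, dt):
        """Eq. (7): v_r = psi * (y_W(t+1) - y_W(t)) / delta, with P_t and
        P_t1 the bottom-right window corners (x+w, y+h) in adjacent frames
        (dt = 1/30 s for 30 fps video)."""
        _, yW_t = ipm_coord(H, P_t)
        _, yW_t1 = ipm_coord(H, P_t1)
        return psi * (yW_t1 - yW_t) / dt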

It can be seen from Fig. 5 that as the overtaking vehicle overtakes the ego-vehicle, the distance between the two increases. The relative velocity is computed between every two adjacent frames from the beginning of the overtaking event till the end, and the average relative velocity can be computed for the entire overtaking event. In Section IV, we use (7) for the analysis of naturalistic driving data to infer the driving characteristics of the ego-vehicle during overtaking/receding events.

Fig. 6. Analysis of overtaking events: each blue-red rectangle pair refers to one overtaking event. The x-axis (frame numbers) shows the duration of each event. The duration in the ground truth is shown by the blue rectangle; the corresponding red rectangle indicates the overtaking event detection result from the proposed method.

III. RESULTS & DISCUSSION

In this section, we first evaluate the performance of the proposed overtaking and receding vehicle detection method. Existing evaluations for overtaking vehicles report the number of overtaking events that were detected as compared to the ground truth. However, such an evaluation presents the accuracy of the techniques only partially. In this paper, we present a more detailed evaluation of the overtaking and receding vehicles. In order to do this, we considered two video sequences that are captured at 30 frames per second, with 27000 frames in total. The total number of overtaking events and their properties are listed in Table I. A total of 51 events (34 overtaking and 17 receding events) were annotated. The events occurred either on the lanes immediately adjacent to the ego-lane or on neighbor lanes beyond the adjacent lanes, i.e. the performance is evaluated for overtaking and receding vehicles in multiple lanes.

TABLE I
PROPERTIES OF DATASET FOR EVALUATION

Dataset name                         | Duration     | Remarks
New LISA overtaking vehicle dataset  | 1410 frames  | Highway scenario, dense traffic
LISA-Toyota CSRC [15]                | 27000 frames | Highway, urban scenario, highly dense traffic

Overtaking & Receding Vehicle Events Properties

           | Left  | Right | Adjacent lanes | Neighbor lanes
Overtaking | 26/34 | 8/34  | 34             | 0
Receding   | 2/17  | 15/17 | 8              | 7
Total      | 28/51 | 23/51 | 42             | 7

In our experiments, all 51 overtaking and receding vehicles were detected, resulting in 100% detection accuracy. However, this needs to be further explained in terms of how early each event was detected. This is illustrated through the plots in Figs. 6 and 7 for overtaking and receding vehicles respectively. Referring to Fig. 6, the x-axis indicates the frame numbers. Each blue and red rectangle pair corresponds to one overtaking event with respect to the ego-vehicle. The blue rectangle shows the duration of the overtaking event as annotated in the ground truth, whereas the red rectangle shows the duration as estimated by the proposed method. The overlap between the red and blue rectangles indicates how much of the overtaking event is detected by the proposed method when compared to the ground truth. Similarly, the receding events are listed in Fig. 7.

It can be seen from Fig. 6 that it takes a few frames from the start of the overtaking event (as annotated in the ground truth) for the proposed method to detect the overtaking vehicles. This is primarily because a few frames pass before enough of the part side-view of the overtaking vehicle appears in the regions $I^L$ and $I^R$ for the classifiers to detect it accurately. Similarly, in Fig. 7, the proposed algorithm stops detecting the receding vehicle earlier than the ground truth, because the ground truth annotation continues until the receding vehicle disappears from the image frame.

Considering 30 frames per second, the proposed method is able to detect the overtaking events within 0.7s ± 0.5s from the start of the overtaking event. Similarly, in the case of receding vehicle detection, the proposed method stops detecting the receding vehicle 0.4s ± 0.4s before the actual receding event ends in the image frame.

Fig. 7. Analysis of receding events: each blue-red rectangle pair refers to one receding event. The x-axis (frame numbers) shows the duration of each event. The duration in the ground truth is shown by the blue rectangle; the corresponding red rectangle indicates the receding event detection result from the proposed method.

IV. ANALYSIS FOR NATURALISTIC DRIVING STUDIES (NDS)

In this section, we briefly discuss how the proposed method and the analysis presented in the previous sections can be extended for data analysis in naturalistic driving studies (NDS). Fig. 8 shows three events related to overtaking and receding vehicle detection which are identified and extracted manually by data reductionists in NDS [23]. While the analysis presented in Section III evaluates the detection of such events, we now present ways to extract information from the detected events.

Fig. 8. Overtaking/receding events that are identified by data reductionists in NDS [23]: (a) neighboring vehicle $V_O$ is overtaking ego-vehicle $V_E$; (b) ego-vehicle is overtaking neighboring vehicle $V_R$; (c) neighboring vehicle $V_O$ is overtaking the ego-vehicle and cutting into the ego-lane.

In addition to the number of detections listed in Table I, we extract the following information about the detected events. The duration of the events indicates the relative velocities between the ego-vehicle and the overtaking/receding vehicles. Table II is generated from 45 events in one of the drives that was used to generate the results in the previous section. Table II shows the mean, standard deviation, minimum and maximum durations in seconds for the overtaking and receding events with respect to the ego-vehicle. Additionally, identifying overtaking events with durations less than the mean by a considerable margin could indicate specific maneuvers such as cutting in front of the ego-vehicle, as sketched below.
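A sketch of how such duration metrics can be computed from the detector output; the event representation as (start_frame, end_frame) pairs is an assumption.

    import numpy as np

    FPS = 30.0  # the videos are captured at 30 frames per second

    def duration_stats(events):
        """Summarize detected events as in Table II. `events` is a list of
        (start_frame, end_frame) pairs produced by the detector."""
        d = np.array([(end - start) / FPS for start, end in events])
        return {"mean_s": d.mean(), "std_s": d.std(),
                "min_s": d.min(), "max_s": d.max()}

    # Events with durations well below the mean can be flagged as potential
    # cut-in maneuvers for review by data reductionists.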


TABLE II
NDS METRICS FOR OVERTAKING/RECEDING EVENTS

Duration Metrics     | Overtaking      | Receding
Mean ± deviation (s) | 1.9176 ± 1.1461 | 2.2451 ± 1.7748
Minimum/Maximum (s)  | 0.266/6.33      | 0.3/5.96

Absolute Relative Velocities

                 | Overtaking | Receding
Left/Right (m/s) | 6.9/9.2    | 12.4/5.6

It should be noted that, with respect to the neighboring vehicle, a receding event indicates an overtaking maneuver by the ego-vehicle. If a receding event is detected on the left of the ego-vehicle, it implies that the ego-vehicle is moving faster than the vehicles in the faster lanes (the left lanes are considered fast lanes in the USA), i.e. the ego-vehicle was overtaking vehicles in the faster lanes. Similarly, an overtaking event on the right indicates that other vehicles are moving faster than the ego-vehicle, and they can pose a driving hazard for the ego-vehicle. It can be seen from Table I that the proposed method is able to determine that there were 2 receding events on the left of the ego-vehicle and 8 overtaking events on the right of the ego-vehicle. These specific events could be flagged for further analysis by data reductionists in NDS.

The relative velocity estimation method presented in Section II-D was used to determine the average relative velocities for the overtaking and receding events. Table II lists the relative velocities for different overtaking and receding conditions. From Table II, the following can be inferred about the drive from which the overtaking and receding vehicles were considered. The relative velocity was higher for right overtaking events, i.e. the ego-vehicle was moving slower than the vehicles in its right lane during the drive. Next, the relative velocity is 12.4 m/s during left receding events, which implies that the ego-vehicle accelerated more than the vehicles in the fast lane (left lane) during the receding events in the drive.

The study presented in this section is an initial work on the analysis of overtaking and receding events for naturalistic driving studies. More metrics and analysis items will be explored in future work.

V. CONCLUSIONS

In this paper, an appearance based technique to detect both overtaking and receding vehicles w.r.t the ego-vehicle is presented. The classification windows that are obtained from Haar features and Adaboost cascaded classifiers are tracked in a bi-directional manner, which enables the detection of events flowing in optically opposite directions. The detailed evaluations show that the proposed method detects/terminates the events within an average of 1 second compared to the ground truth. The extension of the proposed work to extract information about the drive for NDS is reported for the first time in the context of receding and overtaking vehicles. Future work will focus on improving the accuracy of the proposed event detection technique, and on extracting more analysis metrics for NDS.

REFERENCES

[1] "Traffic Safety Facts 2011," National Highway Traffic Safety Administration, U.S. Department of Transportation, Tech. Rep., 2011.

[2] S. Sivaraman and M. M. Trivedi, "A General Active-Learning Framework for On-Road Vehicle Recognition and Tracking," IEEE Trans. on Intell. Transp. Sys., vol. 11, no. 2, pp. 267–276, June 2010.

[3] R. K. Satzoda and M. M. Trivedi, "Efficient Lane and Vehicle Detection with Integrated Synergies (ELVIS)," in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops on Embedded Vision, 2014, to be published.

[4] S. Sivaraman and M. M. Trivedi, "Integrated Lane and Vehicle Detection, Localization, and Tracking: A Synergistic Approach," IEEE Trans. on Intell. Transp. Sys., pp. 1–12, 2013.

[5] E. Ohn-Bar, A. Tawari, S. Martin, and M. M. Trivedi, "Vision on Wheels: Looking at Driver, Vehicle, and Surround for On-Road Maneuver Analysis," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2014.

[6] R. K. Satzoda and M. M. Trivedi, "Selective Salient Feature based Lane Analysis," in 2013 IEEE Intell. Transp. Sys. Conference, 2013, pp. 1906–1911.

[7] ——, "Vision-based Lane Analysis: Exploration of Issues and Approaches for Embedded Realization," in 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops on Embedded Vision, 2013, pp. 604–609.

[8] ——, "On Performance Evaluation Metrics for Lane Estimation," in International Conference on Pattern Recognition, 2014, to appear.

[9] A. Prioletti, A. Møgelmose, P. Grisleri, M. M. Trivedi, A. Broggi, and T. B. Moeslund, "Part-Based Pedestrian Detection and Feature-Based Tracking for Driver Assistance: Real-Time, Robust Algorithms, and Evaluation," IEEE Trans. on Intell. Transp. Sys., vol. 14, no. 3, pp. 1346–1359, 2013.

[10] B. Morris and M. M. Trivedi, "Trajectory Learning for Activity Understanding: Unsupervised, Multilevel, and Long-Term Adaptive Approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 2287–2301, Nov. 2011.

[11] S. Sivaraman and M. M. Trivedi, "Dynamic Probabilistic Drivability Maps for Lane Change and Merge Driver Assistance," IEEE Trans. on Intell. Transp. Sys., to appear, 2014.

[12] B. Morris and M. Trivedi, "Robust classification and tracking of vehicles in traffic video streams," 2006 IEEE Intell. Transp. Sys. Conference, pp. 1078–1083, 2006.

[13] R. K. Satzoda and M. M. Trivedi, "Drive Analysis using Vehicle Dynamics and Vision-based Lane Semantics," IEEE Transactions on Intelligent Transportation Systems, to appear, 2014.

[14] L. N. Boyle, S. Hallmark, J. D. Lee, D. V. McGehee, D. M. Neyens, and N. J. Ward, "Integration of Analysis Methods and Development of Analysis Plan - SHRP2 Safety Research," Transportation Research Board of the National Academies, Tech. Rep., 2012.

[15] R. K. Satzoda, P. Gunaratne, and M. M. Trivedi, "Drive Analysis using Lane Semantics for Data Reduction in Naturalistic Driving Studies," in IEEE Intell. Veh. Symp., 2014, pp. 293–298.

[16] S. Sivaraman and M. M. Trivedi, "Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis," IEEE Trans. on Intell. Transp. Sys., vol. 14, no. 4, pp. 1773–1795, Dec. 2013.

[17] A. Ramirez, E. Ohn-Bar, and M. Trivedi, "Integrating Motion and Appearance for Overtaking Vehicle Detection," IEEE Intell. Veh. Symp., pp. 96–101, 2014.

[18] X. Zhang, P. Jiang, and F. Wang, "Overtaking Vehicle Detection Using A Spatio-temporal CRF," in IEEE Intell. Veh. Symp., 2014, pp. 338–342.

[19] D. Hultqvist, J. Roll, F. Svensson, J. Dahlin, and T. B. Schön, "Detecting and positioning overtaking vehicles using 1D optical flow," in IEEE Intell. Veh. Symp., 2014, pp. 861–866.

[20] F. Garcia, P. Cerri, and A. Broggi, "Data Fusion for Overtaking Vehicle Detection based on Radar and Optical Flow," in IEEE Intell. Veh. Symp., 2012, pp. 494–499.

[21] P. Guzmán, J. Díaz, J. Ralli, R. Agís, and E. Ros, "Low-cost sensor to detect overtaking based on optical flow," Machine Vision and Applications, vol. 25, no. 3, pp. 699–711, Dec. 2011.

[22] A. Broggi, "Parallel and local feature extraction: a real-time approach to road boundary detection," IEEE Trans. on Image Proc., vol. 4, no. 2, pp. 217–223, Jan. 1995.

[23] "Researcher Dictionary for Video Reduction Data," VTTI, Tech. Rep., 2012.
