Advanced formation and delivery of traffic information in intelligent transportation systems

Hsu-Yung Cheng a, Victor Gau b, Chih-Wei Huang c, Jenq-Neng Hwang d

a Department of Computer Science and Information Engineering, National Central University, Taiwan
b Microsoft, Redmond, WA, United States
c Department of Communication Engineering, National Central University, Taiwan
d Department of Electrical Engineering, University of Washington, Seattle, WA, United States

This work is under the support of the Ministry of Economic Affairs, Taiwan.

Keywords:
Intelligent transportation systems
Traffic parameters
Event detection
WiMAX
WiFi

Abstract

To meet the safety requirements of today's increasing traffic densities, there is a growing demand for advanced systems that provide drivers with essential traffic and travel information to improve road safety and traffic efficiency. In this paper, we combine video analysis and multimedia networking technologies to present a highly integrated intelligent system that achieves these goals. For traffic information, the system presented in this paper collects traffic parameters and detects relevant events by analyzing traffic surveillance videos. Through robust tracking algorithms and reasoning logic, important traffic parameters and events are extracted from the surveillance videos accurately. Afterwards, summarized real-time traffic conditions and important events, along with the corresponding live traffic videos, are formed into layers and multicasted through an integration of a WiMAX infrastructure and vehicular ad hoc networks (VANETs). With the support of adaptive modulation and coding in WiMAX, radio resources can be optimally allocated during multicast so as to dynamically adjust the number of data layers received by users. In addition to the multicast supported by WiMAX, we also design a knowledge propagation and information relay scheme over the VANET. Through this relaying technique, about 80% of the mobile stations that were unable to subscribe to additional layers of data due to insufficient WiMAX downlink bandwidth could regain more than 90% of the data in the additional layers within a tolerable buffering time.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Intelligent transportation systems (ITS) integrate advanced electronics, communications, sensing, and information technologies to design management strategies for streets, vehicles, and people. Intelligent vehicles can be equipped with many kinds of sensors, powerful computing capabilities, sophisticated software systems, and large communication capacities (Chen, 2011; Wang, 2006). Connecting intelligent vehicles with one another and with their driving environment is a growing trend in ITS (Li, Song, Wang, Niehsen, & Zheng, 2005). With increasing traffic density, a traffic and travel information multicasting system that provides real-time information to drivers can substantially improve road safety and traffic efficiency. In this paper, we combine video analysis and multimedia networking technologies to present a highly integrated intelligent system that can achieve the above goal.

The system architecture of the proposed advanced traffic and travel information multicasting system is illustrated in Fig. 1. Roadside cameras capture traffic surveillance videos of major intersections or highway sections and send the videos back to the local traffic service center, which is responsible for analyzing the received surveillance videos to obtain the desired traffic parameters and events. To accomplish this task, the system segments out the foreground objects from the surveillance videos and performs tracking on the objects. Traffic parameters are computed from the statistics of the tracking results, and events are detected either by establishing rule-based reasoning logic or by training appropriate recognition models using training data. After the traffic parameters and the events are extracted, the information is multicasted through the wide-range wireless infrastructure. In the work proposed by Liu, Yoo, Jang, Choi, and Hwang (2005), scalable geo-referenced videos and geographic information are transmitted to GPS-guided vehicles. The feasibility of multicasting real-time traffic data to mobile stations is inspired by that work (Liu et al., 2005).


In addition to the traffic conditions, drivers can also choose to view the real-time streamed traffic surveillance video of a certain location. Contents of local information services such as places of interest, hotels, restaurants, gas stations or parking lots can also be multicasted through the infrastructure. Under such circumstances, the multicast contents can include both pure text and multimedia data. In order to use the channel efficiently, the multicast contents are arranged into layers, and utility-based resource allocation is applied at the WiMAX base station (BS). Users receive a certain number of layers of information according to their needs and real-time channel conditions. Furthermore, a complementary knowledge propagation and information relay mechanism is designed to support users that need more information but have unfavorable channel conditions.

We will review related works on video processing techniques for traffic applications, as well as works on vehicle-to-infrastructure and vehicle-to-vehicle communications, in the next section. In the literature review, we also discuss the advantages of the proposed schemes over other existing works. Then, we elaborate the proposed traffic parameter extraction and event detection scheme via video analysis in Section 3. In Section 4, we discuss the information multicasting and relay through heterogeneous networks. The experimental results are reported and discussed in Section 5. Finally, conclusions are drawn in Section 6.

2. Related works

Extracting traffic parameters and reporting relevant events in real time has been an important topic in ITS. Traditionally, traffic parameters are obtained via inductive loop or magnetic detectors. However, video-based systems have emerged in recent years (Fernández-Caballero, Gómez, & López-López, 2008; Hu, Tan, Wang, & Maybank, 2004; Kastrinaki, Zervakis, & Kalaitzakis, 2003; Vallejo, Albusac, Jimenez, Gonzalez, & Moreno, 2009). Video-based systems have many advantages over traditional sensing systems. First of all, video-based systems are able to capture a larger variety of information. They are also relatively inexpensive and easier to install, operate, and maintain. With the massive deployment of surveillance cameras nowadays, there is a growing demand for intelligent systems that can replace human operators and analyze both unidirectional traffic on highways (Beymer, McLauchlan, Coifman, & Malik, 1997) and multi-directional traffic at intersections (Atev, Arumugam, Masoud, Janardan, & Papanikolopoulos, 2005; Kamijo, Matsushita, Ikeuchi, & Sakauchi, 2000; Veeraraghavan, Masoud, & Papanikolopoulos, 2003). Therefore, video processing techniques for traffic applications have received increasing attention from researchers. Video-based event or accident detection (Atev et al., 2005; Fu, Hu, & Tan, 2005; Hu et al., 2004; Kamijo et al., 2000) and traffic parameter extraction schemes (Fathy & Siyal, 1998; Hsu, Liao, Jeng, & Fan, 2004) have been designed on the foundation of robust video tracking. Various tracking methods for video objects have been investigated. Classical filters such as Kalman filters (KF) (Beymer et al., 1997; Melo, Naftel, Bernardino, & Santos-Victor, 2006; Veeraraghavan et al., 2003), extended Kalman filters (EKF) (Foresti, 1998; Foresti, 1999), and particle filters (PF) (Maggio & Cavallaro, 2005; Qu, Schonfeld, & Mohamed, 2007; Zhou, Chellappa, & Moghaddam, 2004) have been widely employed for tracking. Kalman filters assume linear models and Gaussian noise to obtain an optimized closed-form formulation for prediction and update. Although extended Kalman filters and particle filters relax one or both of these assumptions for more flexible models, the main problem of video tracking remains handling segmentation errors and occlusion. No matter what kind of filter is used, segmentation error and occlusion handling schemes still need to be explicitly designed in order to achieve robust tracking. In this paper, we utilize a measurement candidate selection procedure and a modified probabilistic data association (PDA) approach (Cheng & Hwang, 2007; Cheng & Hwang, 2009) combined with Kalman filtering to perform robust tracking and thus obtain traffic parameters and relevant events accurately.

Traffic conditions, events and travel information can be multicasted through the infrastructure of either a full-coverage wireless technology such as 3G or WiMAX (Fig. 2(a)) or Dedicated Short Range Communications (DSRC) (Fig. 2(b)). Information can also be exchanged directly among vehicles via vehicular ad hoc networks (VANETs). Guo, Ammar, and Zegura (2005) proposed a framework to provide a live video streaming service to drivers through vehicle-to-vehicle (V2V) networks. They used a store-carry-and-forward approach to transmit video data in a partitioned network environment and incorporated a signaling mechanism to continuously trigger video sources to send video data to receivers. Killat and Hartenstein (2007) discussed the design of an accident prevention application (APA) based on vehicular ad hoc networks and addressed how an APA could be designed and formalized with the help of Markov reward processes. It would be extremely costly and unnecessary to deploy DSRC base stations to cover all the roads in a large area. The VANET can alleviate the extensive DSRC base station deployment and maintenance cost (Fig. 2(b)).

Fig. 1. System architecture.


However, it is very challenging for vehicular ad hoc networks to transmit real-time streaming data, because the vehicles are mobile and dynamic routes need to be constructed at all times. Wide-range wireless techniques such as WiMAX (Fig. 2(a)) can cover a larger area. However, because different vehicles have different channel conditions, scalable coding is necessary when performing multicasting in order to improve efficiency and channel utilization. The multicast content should be separated into several layers according to its importance. The most important part is encoded as the base layer and should be sent out with a modulation scheme that can be received by everyone. More detailed information is encoded as enhancement layers and can be sent out with modulation schemes that can be received only by vehicles with better channel conditions.

The system presented in this paper takes advantage of both the wide-range WiMAX infrastructure and short-range VANET technologies. The traffic and live video information is partitioned into layers and multicasted utilizing the wide-range WiMAX infrastructure. Every vehicle (mobile station, MS) in the WiMAX coverage can receive the most critical information encoded in the base layer, while only those MSs with good channel conditions are allowed to request the higher-layer information. For MSs that have unfavorable channel conditions with WiMAX but desire more detailed information, a relay scheme is designed so that they can request the additional layers of information from other vehicles with better channel conditions, as illustrated in Fig. 2(a). The relay-through-VANET scheme is complementary to the multicasting mechanism supported by the WiMAX infrastructure. There are many existing wireless ad hoc multicasting technologies; however, few of them address the relay mechanism in heterogeneous networks. Hauge and Kure (2006) proposed a heterogeneous cellular and ad hoc network architecture, which assumes a 3G infrastructure instead of WiMAX, and there is no discussion of layered or scalable data in their work either. We also notice that the use of beacon messages in our relaying mechanism is similar to AMRIS (Wu & Tay, 1999). However, there are still several key differences, which will be discussed along with the advantages of our proposed scheme in Section 4.2.

3. Traffic parameter extraction and event detection

Fig. 3 illustrates the procedure of the traffic parameter extraction and event detection scheme. First of all, video object segmentation (Fig. 3(a)) is performed based on the Gaussian mixture method described in Stauffer and Grimson (1999). The segmentation process separates the moving vehicles from the background scene. Vehicles newly entering the scene are initialized and put into the tracking list (Fig. 3(b)). Then, tracking is performed on each individual vehicle in the tracking list based on our previous work (Cheng & Hwang, 2007). In the prediction stage (Fig. 3(c)), the positions of the vehicles at the next time instance are predicted using Kalman filters (KF). Measurements are then required to update the KF (Fig. 3(c)) so as to continuously track the vehicles. The system state x_k and measurement state y_k in the KF are defined from $[u_k\ v_k\ \dot{u}_k\ \dot{v}_k]^T$, where u_k and v_k are the coordinates of the representation point of a vehicle in the image plane at time instance k, and $\dot{u}_k$ and $\dot{v}_k$ are the displacements in the u and v directions, respectively. The measurements are obtained by referring to the segmentation result of each image frame. When there is no occlusion or segmentation error, the centroid of the segmented object serves as the measurement position for the KF update. However, when vehicles occlude each other due to the viewing angle of the camera, different vehicles become connected with one another in the segmentation mask; in this situation, one connected component in the segmentation mask may consist of several different vehicles. On the other hand, when vehicles are over-segmented because they have colors similar to the background or are occluded by background objects, one vehicle can be separated into more than one connected component in the segmentation mask. To solve the problems of occlusion and segmentation errors, the measurement candidate list construction procedure (Fig. 3(d)) is employed to select reasonable candidates to serve as the measurements of a given tracking target.
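To make the prediction and update steps concrete, the following is a minimal sketch of a constant-velocity Kalman filter over the state [u, v, u̇, v̇]^T, assuming one frame between updates; the noise covariance values are purely illustrative and are not the authors' tuning.

```python
import numpy as np

# State x = [u, v, u_dot, v_dot]^T; measurement y = [u, v]^T (object centroid).
dt = 1.0                       # one frame between updates (assumed)
F = np.array([[1, 0, dt, 0],   # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # measurement model: observe position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2           # process noise (illustrative)
R = np.eye(2) * 1.0            # measurement noise (illustrative)

def predict(x, P):
    """Predict the next state and covariance."""
    return F @ x, F @ P @ F.T + Q

def update(x_pred, P_pred, y):
    """Update with a measured centroid y = [u, v]."""
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new, S
```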

For each target vehicle, a validation gate is formed. A measurement y_k is defined to be in the validation gate of x_k if the following criterion is satisfied:

$$ (\mathbf{y}_k - H_k \hat{\mathbf{x}}_{k|k-1})^T S_k^{-1} (\mathbf{y}_k - H_k \hat{\mathbf{x}}_{k|k-1}) \le c^2, $$

where H_k is the measurement model of the KF, S_k is the innovation covariance, and c^2 is a threshold. Occlusion or segmentation error cases are detected when one of the following conditions holds: (1) no measurement can be found in the validation gate, (2) the size of the measurement is very different from that of the target vehicle, or (3) the predicted states show that two or more tracked vehicles start to merge together. When an occlusion or segmentation error case is detected, measurement candidates are searched within a neighborhood of the predicted position of the tracking target.

Fig. 2. VANETs and infrastructure: (a) wide-range infrastructure, (b) DSRC base station.


The measurement candidates are selected by inspecting two criteria. First, the overlapping areas between the measurements and the segmentation mask are checked. Then, the similarities between the appearance of the measurements in the current frame and the recorded appearance of the target object are computed. The similarity metric is the cross correlation coefficient given in Eq. (1). The RGB values of the measurement and the target are arranged in two n × 1 vectors A_n and B_n, respectively, and Ā and B̄ are the mean values of A_n and B_n:

$$ CR = \frac{\sum_n (A_n - \bar{A})(B_n - \bar{B})}{\sqrt{\sum_n (A_n - \bar{A})^2 \sum_n (B_n - \bar{B})^2}}. \qquad (1) $$

If either the overlapping area or the similarity is higher than a threshold at a particular position in the searching range, the position is added to the measurement candidate list. Through the measurement candidate list construction procedure, reasonable measurements can be provided for the filter update.
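The similarity test of Eq. (1) can be sketched as follows, together with a hypothetical acceptance rule that combines it with the overlap criterion; the threshold values and patch representation are assumptions made for illustration.

```python
import numpy as np

def cross_correlation(patch_a, patch_b):
    """Cross correlation coefficient CR of Eq. (1) between two same-sized RGB patches."""
    A = np.asarray(patch_a, dtype=float).ravel()   # the n x 1 vector A_n
    B = np.asarray(patch_b, dtype=float).ravel()   # the n x 1 vector B_n
    A_c, B_c = A - A.mean(), B - B.mean()
    denom = np.sqrt((A_c ** 2).sum() * (B_c ** 2).sum())
    return float((A_c * B_c).sum() / denom) if denom > 0 else 0.0

def is_candidate(overlap_ratio, patch_a, patch_b,
                 overlap_thresh=0.3, similarity_thresh=0.7):
    """Accept a searched position as a measurement candidate if either the
    overlap with the segmentation mask or the appearance similarity is high
    enough (threshold values are illustrative, not the paper's)."""
    return (overlap_ratio > overlap_thresh or
            cross_correlation(patch_a, patch_b) > similarity_thresh)
```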

The modified probabilistic data association (PDA) approach (Fig. 3(e)) is used to associate the measurements with each tracking target in a multi-target tracking environment. The original PDA considers each tracking target independently and computes a probability β_j for each measurement in the validation gate of a given target. The probability β_j is the posterior probability that the jth measurement y_kj is the true measurement at time instance k. Suppose that there are m candidate measurements in the validation gate for a given target at time k. The probabilities β_j (j = 1, ..., m) are used to form the combined innovation for the state of the target:

$$ \tilde{\mathbf{y}}_k = \sum_{j=1}^{m} \beta_j \, \tilde{\mathbf{y}}_{kj}, $$

where $\tilde{\mathbf{y}}_{kj} = \mathbf{y}_{kj} - H_k \hat{\mathbf{x}}_{k|k-1}$ denotes the innovation corresponding to measurement y_kj. The posterior association probabilities β_j can be expressed by Eq. (2):

$$ \beta_j = P\{\chi_j \mid Y^k\} = P\{\chi_j \mid \tilde{\mathbf{y}}_1, \ldots, \tilde{\mathbf{y}}_m, m, Y^{k-1}\}, \qquad (2) $$

where χ_j denotes the event that the jth measurement is the true measurement for the target, and Y^k denotes the set of all innovations up to the present time k, i.e. $Y^k = \{\tilde{\mathbf{y}}_1, \ldots, \tilde{\mathbf{y}}_m\} \cup Y^{k-1}$. The posterior probabilities β_j can be re-written as Eq. (3) using the Bayes rule and the assumption that the event χ_j does not depend on Y^{k-1}:

$$ \beta_j = \frac{p(\tilde{\mathbf{y}}_1, \ldots, \tilde{\mathbf{y}}_m \mid \chi_j, m, Y^{k-1})\, P\{\chi_j \mid m\}}{p(\tilde{\mathbf{y}}_1, \ldots, \tilde{\mathbf{y}}_m \mid m, Y^{k-1})}. \qquad (3) $$

In Eq. (3), the second factor P{χ_j | m} is the prior probability of the event χ_j conditioned on m, whose detailed expression is given in Eq. (4). We list the expression of P{χ_j | m} only in order to explain the modification we made to the classical PDA; the mathematical expressions and the detailed explanations of the other terms of Eq. (3) can be found in Fortmann, Bar-Shalom, and Scheffe (1983) and are therefore not elaborated here:

$$ P\{\chi_j \mid m\} =
\begin{cases}
P\{\chi_j \mid m_F = m-1, m\}\, P\{m_F = m-1 \mid m\} = \dfrac{1}{m}\, P\{m_F = m-1 \mid m\}, & j = 1, \ldots, m, \\
P\{m_F = m \mid m\}, & j = 0.
\end{cases} \qquad (4) $$

In Eq. (4), m_F denotes the number of false measurements. Note that m_F equals m − 1 if the target is successfully detected, or m if all measurements are false. In Fortmann et al. (1983) it was assumed that P{χ_j | m_F = m − 1, m} = 1/m is equal for every measurement. Such an assumption is acceptable for sonar or radar applications because each target is only a dot. In video object tracking applications, however, it is better to relax this assumption and utilize the extra information available from the videos to compute the priors. We can observe that if a measurement has a higher similarity or a larger overlapping area with the target, it is more likely to be the true measurement. Therefore, the similarity and the area of the overlapping region between the measurement and the tracking target are incorporated into the computation of the association probability:

$$ B_j =
\begin{cases}
d_j\, P\{m_F = m-1 \mid m\}, & j = 1, \ldots, m, \\
P\{m_F = m \mid m\}, & j = 0,
\end{cases} \qquad (5) $$

where

$$ d_j = \alpha \, \frac{\mathrm{Similarity}_j}{\sum_{i=1}^{m} \mathrm{Similarity}_i} + (1-\alpha)\, \frac{\mathrm{Overlap\_Area}_j}{\sum_{i=1}^{m} \mathrm{Overlap\_Area}_i}, \qquad (6) $$

and α is an adjusting factor between 0 and 1. By incorporating the factor d_j, which is more relevant to the relation between the targets and the measurements, we obtain the probability for the modified PDA; the other terms remain the same as in the classical PDA. Afterwards, the parameters of the filters and the appearance of the tracking targets are updated (Fig. 3(f)).

Fig. 3. Traffic parameter extraction and event detection.


The update phase of the covariance matrix and the state estimate also remains intact. The details of the filter updating can be found in Fortmann et al. (1983).
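The weighting of Eq. (6) and the combined innovation can be sketched as follows; the function assumes the per-candidate similarity and overlap values have already been computed, and the resulting weights would still be combined with the prior P{m_F = m − 1 | m} and normalized as in Eqs. (3) to (5).

```python
import numpy as np

def association_weights(similarities, overlaps, alpha=0.5):
    """d_j of Eq. (6): convex combination of normalized similarity and overlap.
    alpha is the adjusting factor in [0, 1]; 0.5 here is only an example value."""
    sim = np.asarray(similarities, dtype=float)
    ovl = np.asarray(overlaps, dtype=float)
    return alpha * sim / sim.sum() + (1.0 - alpha) * ovl / ovl.sum()

def combined_innovation(measurements, x_pred, H, beta):
    """Combined innovation sum_j beta_j (y_kj - H x_pred) used for the KF update,
    assuming beta has already been normalized over the m candidates."""
    return sum(b * (np.asarray(y) - H @ x_pred) for b, y in zip(beta, measurements))
```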

Initial occlusion occurs when two or more objects enter the scene close together and thus occlude one another. Initial occlusion remains an open issue, and existing tracking systems in the literature do not have effective ways to resolve this problem. The purpose of the initial occlusion handling process (Fig. 3(g)) is to alleviate the problem and enhance the performance of the system. If we detect that one object splits into two or more objects and does not merge back into one object within a few frames, we decide that most likely there was more than one vehicle originally occluded with one another that separated afterwards. In this case, we perform backtracking to revise the tracking results. The frame sequence of the initial occlusion part is reversed, and the existing object tracking modules are reused to perform backtracking. The results of the backtracking are used to correct the trajectories of the initially occluded vehicles.

Events or regulation violations, such as dropped items in the lane or driving on the road shoulder, can be determined by reasoning logic (Fig. 3(i)). Traffic parameters are computed according to the statistics obtained from the tracking results (Fig. 3(h)). Important and commonly used traffic parameters, including flow, turning flow, turning percentage, occupancy, average speed, queue formation and queue length (Ceder & Eldar, 2002; Galatioto & Zito, 2007), are listed in Table 1.

The traffic flow Q is calculated using the following equation:

$$ Q = \frac{N_{Total}}{T_{Duration}}. \qquad (7) $$

Similarly, the left-turning flow Q_Left and the right-turning flow Q_Right are calculated by

$$ Q_{Left} = \frac{N_{Left}}{T_{Duration}} \quad \text{and} \quad Q_{Right} = \frac{N_{Right}}{T_{Duration}}. \qquad (8) $$

N_Total is the total number of vehicles that pass through a detection zone during the time duration T_Duration, and N_Left and N_Right are the total numbers of left-turning and right-turning vehicles. The left-turning and right-turning percentages P_Left and P_Right are calculated by

$$ P_{Left} = \frac{N_{Left}}{N_{Total}} \quad \text{and} \quad P_{Right} = \frac{N_{Right}}{N_{Total}}. \qquad (9) $$

Occupancy is the percentage of time that vehicles occupy a detection zone, which can be calculated by

$$ Occup_{Avg} = \frac{\sum_{veh \in V} T_{vehPass}}{T_{Duration}}. \qquad (10) $$

In Eq. (10), T_vehPass is the time duration that a certain vehicle veh stays in the detection zone, and V denotes all the vehicles that appear during the time duration T_Duration, which can be obtained using

$$ T_{Duration} = \frac{\text{Frame Count}}{\text{Frame Rate}}, \qquad (11) $$

and T_vehPass can be calculated similarly. If the real-world distance D_Zone between the entering border and the exiting border of a detection zone and the average length of vehicles L_Avg are known, the average speed can also be calculated (Gartner, Messer, & Rathi, 1992). Note that the unit of the frame rate is frames per second, and the units of T_vehPass and T_Duration are seconds; therefore, when we want to report the flow parameters in vehicles per minute and the average speed in kilometers per hour, additional unit conversions need to be taken care of. To detect queue formation for a certain detection zone, we inspect the average speed and occupancy of the detection zone: when a queue is formed, the average speed becomes low and the occupancy becomes high. Queue length is calculated by counting the number of successive vehicles whose speed is lower than a certain threshold.
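As a concrete illustration of Eqs. (7) to (11), the sketch below computes flow, turning flows, turning percentages, and occupancy from per-vehicle tracking records; the input record structure ('turn', 'frames_in_zone') is an assumption made for the example, not the system's actual data format.

```python
def traffic_parameters(vehicle_records, frame_count, frame_rate):
    """Flow (Eq. 7), turning flows (Eq. 8), turning percentages (Eq. 9)
    and occupancy (Eq. 10) for one detection zone."""
    duration_min = frame_count / frame_rate / 60.0           # Eq. (11), in minutes
    n_total = len(vehicle_records)
    n_left = sum(1 for v in vehicle_records if v['turn'] == 'left')
    n_right = sum(1 for v in vehicle_records if v['turn'] == 'right')
    flow = n_total / duration_min                             # vehicles per minute
    q_left, q_right = n_left / duration_min, n_right / duration_min
    p_left = n_left / n_total if n_total else 0.0
    p_right = n_right / n_total if n_total else 0.0
    occupied_sec = sum(v['frames_in_zone'] for v in vehicle_records) / frame_rate
    occupancy = occupied_sec / (duration_min * 60.0)          # fraction of time occupied
    return flow, q_left, q_right, p_left, p_right, occupancy
```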

In order to compute the left-turning flow and the right-turning flow, we analyze the trajectories of the vehicles to determine their turning trends. The entire trajectory of each vehicle from the entrance to the exit of the detection zone is recorded. Each trajectory is segmented into K segments of length ℓ, where K equals

$$ K = \frac{\text{Total length of the trajectory}}{\ell}. \qquad (12) $$

In order to determine whether a trajectory is going straight, turning left or turning right, a sequence of angles θ_k is extracted from each trajectory. Suppose the two end points of segment k are p_k and q_k. As illustrated in Fig. 4, θ_k is the angle between the line segments p_k q_k and p_{k+1} q_{k+1}. The angles of one trajectory are summed together to determine its turning trend. If the summation of the angles exceeds a positive threshold, the vehicle is turning right; if the summation falls below a negative threshold, the vehicle is turning left; otherwise, the vehicle is going straight.
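A simplified sketch of the turning-trend test follows: here the trajectory is split by point count rather than by arc length ℓ, and θ_k is taken as the heading difference between consecutive segments; the sign convention (positive sum = right turn) follows the text, although in practice it depends on the image coordinate system.

```python
import math

def turning_trend(trajectory, points_per_segment, angle_threshold):
    """Classify a trajectory (list of (x, y) points) as 'left', 'right', or
    'straight' by summing the angles between successive segments (Fig. 4)."""
    step = points_per_segment
    segments = [trajectory[i:i + step + 1]
                for i in range(0, len(trajectory) - step, step)]
    headings = [math.atan2(s[-1][1] - s[0][1], s[-1][0] - s[0][0]) for s in segments]
    total = sum(b - a for a, b in zip(headings, headings[1:]))
    if total > angle_threshold:
        return 'right'
    if total < -angle_threshold:
        return 'left'
    return 'straight'
```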

4. Information multicasting and relay

This section describes a framework whose objective is to effectively deliver layered traffic information and videos to users through multicast with information relay support. The framework is built on top of the emerging IEEE 802.16e for WiMAX and the IEEE 802.11 family of standards. In Liu et al. (2005), the scalable geographic information data is divided into layers of roads, buildings, rivers, and administrative districts. In our application, we divide the multicast contents into the layers listed in Table 2. Users can choose between two types of information. One type contains the traffic conditions and local information attached to the map. The other type contains the real-time streamed live traffic videos of certain locations. Through the optimized resource allocation described in Section 4.1, the system ensures that every mobile station (MS) can receive the base layer of the map and the summarized traffic information. The other layers can be received only by a subset of users that satisfy a certain channel quality requirement.

Table 1
Definitions of some commonly used traffic parameters.

Parameter | Definition | Unit
Flow Q | Number of vehicles that pass through a detection zone within a time unit | vehicle/minute
Turning flow Q_Left, Q_Right | Number of vehicles that make left or right turns after passing through a certain point within a time unit | vehicle/minute
Turning percentage P_Left, P_Right | Percentage of vehicles making turns toward a certain direction after passing through a certain point | percentage
Occupancy Occup_Avg | Percentage of time that a detection zone is occupied by vehicles during a certain time duration | percentage
Average speed Speed_Avg | Average velocity of vehicles passing through a detection zone | km/h
Queue formation | Whether a traffic queue is formed in a certain detection zone | binary
Queue length | Number of vehicles in a traffic queue when a queue is formed | vehicle


If a user cannot receive a certain layer due to an unfavorable channel condition but desires the information, we design a relay mechanism, elaborated in Section 4.2, to relay the information to the user. In the example shown in Fig. 5, when a user receives the map and summarized traffic conditions (see Fig. 5(a)), he or she can choose to receive the live surveillance video of a certain location, as illustrated in Fig. 5(b). For the relay mechanism, there is no synchronization problem for type I information. However, for type II information, the base layer and the enhancement layer of the live video must be synchronized. If the enhancement layer is obtained through the relay mechanism, a delay is incurred. Such a delay is normally tolerable for this application, because when we choose to see the traffic video of a location a few blocks or even further away, a small amount of delay does not matter. Therefore we can use buffering to synchronize the layers of a video. We will also show that the delay is small in our simulations.

4.1. Intelligent multicasting

The IEEE 802.16 standards, together with the associated WiMAX Forum, are expected to provide a promising last-mile wireless broadband technology. The high bandwidth and broad coverage range, combined with multicast support, fit nicely with the role of disseminating layered geographic traffic data and live traffic video. In a wireless environment, however, the quality of the signal received by a mobile station (MS, also referred to as a subscriber station (SS) in some references) can vary over time in such different ranges that the modulation and coding schemes have to be adaptive. Furthermore, the channel is shared with other services, so the overall amount of bandwidth manageable by the service is limited. An intelligent approach to managing the wireless infrastructure multicast resources is the first key step of successful delivery.

A utility-based resource allocation scheme is adopted for efficient multicasting (Huang, Wu, Lin, & Hwang, 2009). Given N MSs {MS_n, n = 1, ..., N}, the goal is to dynamically adjust the burst profile for each multicast layer in order to efficiently deliver the base layers (TB and VB) to all subscribers as well as maximize the total utility U = Σ_n U_n for the enhancement layers (TE and VE), subject to the total resource B pre-allocated to the multicasting service at the BS. Each multicast data layer is assigned a utility u_{c,l} for the cth channel and the lth layer. Therefore U_n equals Σ_{c(n),l(n)} u_{c,l}, which is the summation of the utilities of all enhancement layers received at MS_n. An example of utility assignment is shown in Table 2, which gives the intuition that lower layers are more important to users. The process first selects proper burst profiles serving the base layers to all MSs subscribing to the channel. For the enhancement layers, we assign their modulation and coding schemes to maximize the marginal utility ΔU/ΔB_c, which is the ratio of the total utility gain to the extra resource consumed when a layer is scheduled. Depending on the number of MSs in good or bad channel conditions, ΔU can vary, and the layer with the highest marginal utility among all layers will be picked and served until B is fully used up. The total enhancement layer utility is therefore maximized by making the most utility out of every single resource unit. This resource allocation can be run every frame for the best channel usage. After the process, the resources allocated to the traffic multicast service are optimized at the WiMAX BS, as illustrated in the downlink sub-frame example in Fig. 6. Note that a burst is a block of data transmitted from the BS to the MSs; the parameters used are the same for all the data in the same burst. Details of the PHY and MAC headers in Fig. 6, including the preamble, FCH, DL-MAP, UL-MAP, and MBS-MAP, can be found in the IEEE standard (2009). The multicast and broadcast service (MBS) region at the right is allocated for the multicast contents. Base layers (TB and VB) usually use more robust modulations and cost more resources; enhancement layers (TE1, TE2, and VE) are intended for the part of the users under faster modulations. Therefore, at the client side, all MSs receive at least the base layer of the channels they subscribe to, while some with good channel quality can receive more and are capable of relaying to others using the mechanism detailed in the next section.
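The marginal-utility selection can be sketched as a greedy loop; the candidate list and its 'utility_gain' and 'cost' fields are assumptions standing in for the per-layer utility gain and burst-profile resource cost that the BS would derive from the reported channel conditions and Table 2 utilities.

```python
def allocate_enhancement_layers(candidates, budget):
    """Greedy marginal-utility scheduling sketch for enhancement layers.

    candidates: list of dicts with 'name', 'utility_gain' (total utility over
    all MSs if the layer is scheduled) and 'cost' (resource units of the
    chosen burst profile); field names are illustrative.
    """
    remaining = budget
    pool = list(candidates)
    scheduled = []
    while pool:
        # Pick the layer with the highest marginal utility dU/dB.
        best = max(pool, key=lambda c: c['utility_gain'] / c['cost'])
        pool.remove(best)
        if best['cost'] <= remaining:        # serve it if it still fits within B
            scheduled.append(best['name'])
            remaining -= best['cost']
    return scheduled, budget - remaining

# Example call (illustrative numbers):
# allocate_enhancement_layers([{'name': 'TE', 'utility_gain': 30, 'cost': 12},
#                              {'name': 'VE', 'utility_gain': 20, 'cost': 10}], 20)
```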

4.2. Knowledge propagation and information relay mechanism

Due to unfavorable channel qualities, some MSs might be able to receive only the base layer. For MSs that need more information but cannot receive the enhancement layers, a relay mechanism is designed to enable the desired information to flow from those who already have it to those who cannot receive it. We use the beacon messages sent in the VANET to propagate the knowledge of who owns the information. Beacon messages are sent regularly by a set of MSs in a VANET so that the MSs can learn the neighboring relationships of one another. It is also legitimate to require all the MSs in the VANET to send out beacon messages regularly. In the proposed scheme, we put some information in the frame body of the beacon messages: each MS indicates how many layers it is able to receive in the beacon messages it sends out. An MS that can only receive the base layer can check the beacon messages it receives from its neighbors; if it finds that one of its neighboring MSs has the enhancement layers it desires, it can request that neighbor to broadcast the enhancement layer.
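A sketch of what an MS might carry in its beacon and neighbor table is given below; the field names (layers_received, relay_count, children_count) mirror the quantities exchanged in the text, but the exact frame layout is not specified here and the class structure is only illustrative.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Beacon:
    """Fields an MS advertises in its periodic beacon (names are illustrative)."""
    ms_id: int
    layers_received: int      # how many layers this MS currently receives
    relay_count: int          # hops from the WiMAX-served source of the layer
    children_count: int       # number of MSs already relaying from this node

class NeighborTable:
    """Each MS keeps the most recent beacon heard from every neighbor."""
    def __init__(self) -> None:
        self.neighbors: Dict[int, Beacon] = {}

    def update(self, beacon: Beacon) -> None:
        self.neighbors[beacon.ms_id] = beacon

    def neighbors_with_layer(self, layer_index: int) -> List[Beacon]:
        """Neighbors that already receive the desired enhancement layer."""
        return [b for b in self.neighbors.values()
                if b.layers_received > layer_index]
```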

Each MS keeps a list of its neighbors, its children, and a parent. The number of children an MS has, the number of enhancement layers it is currently subscribing to, and its relay count to the source are exchanged among the mobile stations through beacons. Once an MS that is not receiving the enhancement layers (see MS 1 in Fig. 2(a)) finds out that its neighbor, MS 2, has the enhancement layer, it sends a REQ_RELAY message to MS 2. Once MS 2 receives REQ_RELAY, an ACK_RELAY message is returned to MS 1.

Fig. 4. Angle feature θ_k for vehicle trajectory analysis.

Table 2
Multicast information.

Type | Layers | Contents | Utility
Type I: Map and traffic information | Base layer (TB) | Coarse map with summarized traffic conditions at specific locations | 1
 | Enhancement layer 1 (TE) | Detailed map, buildings, and detailed traffic parameters | 0.75
 | Enhancement layer 2 (TE2) | Local information service (hotels, restaurants, etc.) | 0.5
Type II: Surveillance videos | Base layer (VB) | Layer-encoded videos | 1
 | Enhancement layer (VE) | | 0.5


MS 1 then broadcasts an UPDATE message to spread its most recent information among its neighbors; this compensates for the slower update rate of the beacon messages. The REQ_RELAY, ACK_RELAY, and UPDATE messages are shown in Fig. 7.

We design a relay check procedure and a relay improvement procedure for the VANET. The relay check procedure is for the mobile stations without enhancement layers to find a suitable neighbor to serve as their parent node, and the relay improvement procedure is for the mobile stations already getting enhancement layers from others to improve the route. The flow chart in Fig. 8 shows how the relay check and relay improvement procedures are performed.

There are two favorable criteria for choosing a neighbor as a parent node to relay the enhancement layer data: (1) choosing the one with the smallest relay count, which denotes the number of times that the enhancement layer is relayed from the source to the mobile station, and (2) choosing the one with the most children. The latter criterion minimizes the amount of information broadcasted among the mobile stations. The relay check procedure chooses the parent for an MS in the following steps. If there is only one neighbor with the enhancement layer, the MS simply sends REQ_RELAY to this neighbor. If more than one neighbor has the enhancement layer, the MS chooses the one with the smallest relay count.

Fig. 5. Example of multicast contents: numbered locations labeled Normal, Congested, or Accident in panels (a) and (b).

Fig. 6. A WiMAX downlink sub-frame at left and detailed MBS region at right. The MBS region is further divided into burst profiles for each layer (TB, TE, TE2, VB, VE).


If more than one neighbor has the same smallest relay count, the MS compares the children counts of these neighbors and chooses the one with the largest children count. If more than one neighbor has the same smallest relay count and largest children count, the MS chooses the one with the best signal strength. For example, in Fig. 9, MS C is within the transmission range of A and B, and in both cases (a) and (b), C will choose A as its parent and send out REQ_RELAY to A. Sending the REQ_RELAY message actually registers the information of the node into the children list of the parent node, so that other MSs can learn the most up-to-date routing information and perform parent choosing. The relay improvement procedure is for an MS to determine whether there are neighbors more suitable than its current parent by comparing the relay count and children count. If an MS finds a potentially better parent, it sends a message to cancel the current relay and sends REQ_RELAY to this potentially better parent.
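The parent-selection rule of the relay check procedure reduces to a lexicographic comparison, sketched below; the candidate entries are assumed to expose 'relay_count', 'children_count', and 'signal_strength' values learned from the beacons.

```python
def choose_parent(candidates):
    """Relay-check parent selection (Section 4.2, Fig. 8(a)) sketch.

    candidates: neighbors that already have the desired enhancement layer.
    Preference: smallest relay count, then largest children count, then best
    signal strength, as described in the text.
    """
    if not candidates:
        return None
    return min(candidates,
               key=lambda n: (n['relay_count'],        # fewer hops first
                              -n['children_count'],    # then more children
                              -n['signal_strength']))  # then stronger signal
```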

The two pictures in Fig. 10 show conceptually how the relay routes are built. The black dots with an outer square represent the MSs receiving both the base layer and the enhancement layer of the data, while the plain black dots represent the MSs receiving only the base layer. Fig. 10(b) shows the routes after performing the relay check and relay improvement.

In AMRIS (Wu & Tay, 1999), beacon messages are also used to design a relay mechanism in ad hoc networks. However, there are several key differences between their work and the proposed scheme. In our proposal, there is no particular source broadcasting to all MSs; the MSs gather information through three means: beacon messages, UPDATE messages, and the broadcasted data packets.

Fig. 7. REQ_RELAY, ACK_RELAY, and UPDATE.

Fig. 8. Relay check and relay improvement procedures: (a) relay check, (b) relay improvement.


The knowledge of which MS should become a source is learned automatically. In Wu and Tay (1999), there is no request mechanism, and all the MSs in the ad hoc network have to deliver the relayed data, which is more suitable in a pure ad hoc network because it lowers the chance that the network becomes segmented. Our relay mechanism is more suitable for heterogeneous networks that consist of both a VANET and the wide-range WiMAX infrastructure, because only MSs with inferior channel qualities need to request some of the information through relay. When relaying live traffic videos in the proposed way, the chance of collisions due to hidden nodes during broadcasting is also smaller, since unnecessary broadcasts by some of the MSs are prevented.

5. Experimental results

In this section we present several experimental results to demonstrate the feasibility and performance of the proposed system. Experimental results for the traffic parameter and event extraction algorithm are shown in Section 5.1. Simulation results for the knowledge propagation and information relay scheme utilizing beacon messages are discussed in Section 5.2.

5.1. Video-based traffic parameter and event extraction

Videos from both intersections and highways are considered for traffic parameter and event extraction. The experimental videos and the corresponding detection zones are shown in Fig. 11. Selected tracking results for these two surveillance videos are displayed in Figs. 12 and 13. To compute the traffic parameters, we define different detection zones around the intersection, as shown in Fig. 11(a). The flows Q of Zone 1, Zone 2 and Zone 3 are calculated. The turning flows Q_Left and Q_Right of the vehicles that pass through each zone are also computed. Because the street in Zone 3 is a one-way street, we can observe that the Q_Left that passes through Zone 2 and the Q_Right that passes through Zone 3 both have zero values. Table 3 lists the ground truth statistics and the estimated traffic parameters of the intersection surveillance video. Queues are formed in Zone 1, Zone 2 and Zone 3 due to the traffic signals at the intersection, and therefore the values of the occupancy Occup_Avg are high in these zones. But when we calculate the occupancy after the vehicles pass through the traffic lights, in Zones 1A, 2A and 3A, the Occup_Avg values are normal. Two detection zones are defined on different lanes for the highway surveillance video. Table 4 lists the ground truth statistics and the reported traffic parameters of the highway video.

Fig. 9. Example of parent choosing for a mobile station.

    Fig. 10. (a) Before relay check and relay improvement processes. (b) After relay check and relay improvement processes.


There is no queue formed in this video, and Occup_Avg is accordingly estimated to be low.

From the ground truth statistics and the traffic parameters estimated by the system, we can see that our video-based system can extract traffic parameters accurately based on robust tracking. Note that the ground truth statistics of the flows are obtained by manually counting the number of vehicles in the detection zones; the ground truth values of the occupancy are not available. Other events, such as regulation violations, can be discovered using rule-based logic. An example of a detected vehicle driving on the road shoulder is shown in Fig. 14.

These statistics can be sent to traffic control centers for determining traffic management policies and adjusting traffic signals at intersections. Also, with the support of online navigation software, recommended routes for drivers can be calculated based on the extracted traffic statistics. When the deployment of surveillance cameras is not dense, the turning percentage and turning flow information is especially helpful for predicting the traffic conditions of nearby streets or intersections. In addition, more statistics can be derived from these basic parameters. For example, if we assume the average length of the vehicles L_avg is 5 m and the length d of detection Zone 2 in Fig. 11(b) is 50 m, we obtain the average speed Speed_Avg = 108.5 km/h in this detection zone using Eq. (13):

$$ Speed_{Avg} = \frac{Q\,(L_{avg} + d)}{Occupancy}. \qquad (13) $$
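A quick numeric check of Eq. (13) with the Zone 2 values from Table 4, interpreting the reported occupancy 0.59 as a fraction (this unit interpretation is an assumption made for the check):

```python
# Zone 2 of the highway video (Table 4): Q = 19.4 veh/min, occupancy = 0.59.
Q = 19.4          # vehicles per minute
L_avg = 5.0       # assumed average vehicle length, metres
d = 50.0          # length of detection Zone 2, metres
occupancy = 0.59  # interpreted as the fraction of time the zone is occupied

speed_m_per_min = Q * (L_avg + d) / occupancy
speed_kmh = speed_m_per_min * 60.0 / 1000.0
print(round(speed_kmh, 1))   # -> 108.5, matching the value quoted in the text
```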

5.2. Knowledge propagation and information relay

In the simulations, we assume the streaming live traffic video contains one base layer and one enhancement layer. The enhancement layer is about 40 Kbps, i.e., 10 frames per second with 512 bytes of data in each packet. This experiment simulates live traffic video transmission over a 10 × 10 km geographic area. MSs within the central circular area of 2.5 km radius can receive both the base layer and the enhancement layer; MSs outside this central circular area receive only the base layer of the video. We developed a simulator in C# to simulate the relay of the enhancement layer. Two thousand MSs are uniformly distributed in the 10 × 10 km area.

    Fig. 11. Experimental videos: (a) intersection, (b) highway.

    Fig. 12. Tracking results for the intersection video.


The transmission radius of each MS is set to 300 m, and the transmission rate is set to 1 Mbps.
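The simulation geometry of this subsection can be sketched as follows; the random seed and helper names are illustrative, and the number of MSs falling inside the 2.5 km circle differs from run to run (the paper reports 520 MSs receiving the enhancement layer directly in its own run).

```python
import random

# Sketch of the Section 5.2 geometry: 2000 MSs uniformly placed over a
# 10 km x 10 km area; only MSs within 2.5 km of the centre receive the
# enhancement layer directly from WiMAX.
AREA_KM, RADIUS_KM, N_MS = 10.0, 2.5, 2000
random.seed(0)  # fixed seed only so the sketch is reproducible

stations = [(random.uniform(0, AREA_KM), random.uniform(0, AREA_KM))
            for _ in range(N_MS)]
cx, cy = AREA_KM / 2, AREA_KM / 2

def enhancement_from_wimax(x, y):
    """True if the MS lies inside the central circular WiMAX-enhancement area."""
    return (x - cx) ** 2 + (y - cy) ** 2 <= RADIUS_KM ** 2

direct = sum(enhancement_from_wimax(x, y) for x, y in stations)
print(direct, N_MS - direct)   # MSs served directly vs. MSs that must rely on relay
```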

The histograms of the relay count and the children count in our simulation are shown in Fig. 15. The relay count histogram shows that 520 MSs receive the enhancement layer directly through the WiMAX broadcasting, while 1480 MSs rely on relaying for the enhancement layer (all MSs receive the base layer from WiMAX). The children count histogram shows that each enhancement layer packet is rebroadcasted 615 times along the relay path.

The histogram of the number of received packets is shown in Fig. 16. Table 5 highlights the statistics of the MSs receiving more than 90% of the packets. Out of the 1480 MSs expecting the relayed enhancement layer of the live traffic video, 1187 MSs (about 80% of those that could not receive the enhancement layer from the WiMAX multicasting) receive more than 90% of the enhancement layer through the relay mechanism.

The relationship between the distance and the initial relay route build time is shown in Fig. 17. The initial relay route build time increases with the distance. Some of the route building times take up to 0.15 s; this is because of collisions of relay request packets. Some MSs can still receive the enhancement layer before the relay route is built, because what the relay request does is register the node information into the parent's children list and update the information to the neighboring mobile stations. The initial relay route build time determines how long the potential playback delay would be.

    Fig. 13. Tracking results for the highway video.

Table 3
Traffic parameters extracted from the intersection video.

Zone | Traffic parameter | Ground truth | System estimation
Zone 1 | Q (veh/min) | 9.2 | 9.2
Zone 1 | Q_Left (veh/min) after passing | 2.6 | 2.6
Zone 1 | Q_Right (veh/min) after passing | 0 | 0
Zone 1 | P_Left (%) after passing | 28 | 28
Zone 1 | P_Right (%) after passing | 0 | 0
Zone 2 | Q (veh/min) | 6.6 | 6.6
Zone 2 | Q_Left (veh/min) after passing | 0 | 0
Zone 2 | Q_Right (veh/min) after passing | 0.6 | 0.6
Zone 2 | P_Left (%) after passing | 0 | 0
Zone 2 | P_Right (%) after passing | 5.6 | 5.6
Zone 3 | Q (veh/min) | 10.3 | 10.7
Zone 3 | Q_Left (veh/min) after passing | 0.7 | 0.7
Zone 3 | Q_Right (veh/min) after passing | 0 | 0
Zone 3 | P_Left (%) after passing | 7.1 | 6.8
Zone 3 | P_Right (%) after passing | 0 | 0

Table 4
Traffic parameters extracted from the highway video.

Zone | Traffic parameter | Ground truth | System estimation
Zone 1 | Q (veh/min) | 9.2 | 9.2
Zone 1 | Occup_Avg (%) | N/A | 0.29
Zone 1 | Queue formation | 0 | 0
Zone 2 | Q (veh/min) | 19.1 | 19.4
Zone 2 | Occup_Avg (%) | N/A | 0.59
Zone 2 | Queue formation | 0 | 0

    Fig. 14. Event of a vehicle driving on the road shoulder.


The relationship between the distance and the average relay delay is shown in Fig. 18. The average relay delay is defined as the average delay between the receiving times of the enhancement layer and the base layer. As shown in Fig. 18, the average relay delay increases with the distance. The maximum average relay delay is below 0.1 s, which means that the MSs do not need to tolerate a long waiting time before playback of the high quality streaming video.

In our simulator, we mainly implement the IEEE 802.11 CSMA/CA mechanism to validate the relay scheme in this work. We introduce a packet loss rate to represent the possible packet loss due to channel fading or shadowing. Also, in real-world scenarios the MSs are moving, so mobility also needs to be considered. In order to observe the effects of MS mobility and packet loss, several speeds and packet loss rates are considered in our simulation. Although higher packet loss rates can be derived from a fading model when the MSs are moving at higher speeds, in this experiment we vary the settings of the packet loss rates and the speeds independently to see their effects on the packet delivery ratio. Table 6 shows the packet delivery ratio of the proposed relay scheme under different MS speeds and packet loss rates.

Fig. 15. Histograms of relay count and children count (number of mobile stations vs. relay count, and number of mobile stations vs. children count).

Fig. 16. Histogram of the received packets (number of mobile stations vs. number of received packets).

Table 5
Number of mobile stations receiving more than 90% of the enhancement layer packets.

Number of received packets | 91 | 92 | 93 | 94 | 95 | 96 | 97 | 98 | 99 | 100
Number of mobile stations | 17 | 34 | 11 | 29 | 20 | 46 | 113 | 157 | 760 | 520

Fig. 17. Distance vs. route build time (route build time in seconds vs. distance between mobile station and base station).

Fig. 18. Distance vs. average relay delay (transmission delay in seconds vs. distance between mobile station and base station).


Note that (x%, y%) in Table 6 denotes the packet loss rates for the WiMAX and WiFi channels, respectively. As expected, the packet delivery ratio drops as the speed of the MSs or the packet loss rates increase. However, the packet delivery ratio can still be maintained at around 80% when the packet loss rates of both the WiMAX and WiFi channels are 15%.

6. Conclusions

In this paper, we combine video analysis and multimedia networking technologies to present a highly integrated system that can provide drivers with essential traffic and travel information to improve road safety and traffic efficiency. The proposed video-based traffic parameter and event extraction system utilizes a measurement candidate selection procedure and a modified probabilistic data association (PDA) approach combined with Kalman filtering to perform robust tracking. Based on the tracking results, traffic parameters are extracted and traffic conditions are summarized. Events or regulation violations, such as dropped items in the lane or driving on the road shoulder, can be determined by rule-based reasoning logic. From the ground truth statistics and the traffic parameters reported by the system in the experiments, we can see that our video-based system extracts traffic parameters accurately.

In addition to the traffic information extraction technology, we also propose a scheme to deliver layered information through wireless heterogeneous networks. The resource allocation of WiMAX can be optimized, and the mobile stations that fail to receive all layers can learn the layer information from their neighbors and regain the information through relaying using VANETs. The simulation results show that about 80% of the mobile stations failing to receive the additional layers can regain 90% of the additional layer information through our proposed relaying mechanism. We believe that realizing the proposed system will greatly facilitate a more informative and safer traffic environment.

References

Atev, S., Arumugam, H., Masoud, O., Janardan, R., & Papanikolopoulos, N. P. (2005). A vision-based approach to collision prediction at traffic intersections. IEEE Transactions on Intelligent Transportation Systems, 6(4), 416–423.

Beymer, D., McLauchlan, P., Coifman, B., & Malik, J. (1997). A real-time computer vision system for measuring traffic parameters. In Proceedings of the IEEE workshop on computer vision and pattern recognition (pp. 495–501).

Ceder, A., & Eldar, K. (2002). Optimal distance between two branches of uncontrolled split intersection. Transportation Research, Part A: Policy and Practice, 36(8), 699–724.

Chen, C. H. (2011). Intelligent transportation control system design using wavelet neural network and PID-type learning algorithms. Expert Systems with Applications, 38(6), 6926–6939.

Cheng, H. Y., & Hwang, J. N. (2007). Multiple-target tracking for crossroad traffic utilizing modified probabilistic data association. In IEEE international conference on acoustics, speech and signal processing (ICASSP) (pp. I-921–I-924).

Cheng, H. Y., & Hwang, J. N. (2009). Adaptive particle sampling and adaptive appearance for multiple video object tracking. Signal Processing, 89(9), 1844–1849.

Fathy, M., & Siyal, M. Y. (1998). A window-based image processing technique for quantitative and qualitative analysis of road traffic parameters. IEEE Transactions on Vehicular Technology, 47(4), 1342–1349.

Fernández-Caballero, A., Gómez, F. J., & López-López, J. (2008). Road-traffic monitoring by knowledge-driven static and dynamic image analysis. Expert Systems with Applications, 35(3), 701–719.

Foresti, G. L. (1998). A real-time system for video surveillance of unattended outdoor environments. IEEE Transactions on Circuits and Systems for Video Technology, 8(6), 697–704.

Foresti, G. L. (1999). Object recognition and tracking for remote video surveillance. IEEE Transactions on Circuits and Systems for Video Technology, 9(7), 1045–1062.

Fortmann, T. E., Bar-Shalom, Y., & Scheffe, M. (1983). Sonar tracking of multiple targets using joint probabilistic data association. IEEE Journal of Oceanic Engineering, 8(3), 173–184.

Fu, Z., Hu, W., & Tan, T. (2005). Similarity based vehicle trajectory clustering and anomaly detection. In IEEE international conference on image processing (Vol. 2, pp. 602–605).

Galatioto, F., & Zito, P. (2007). Traffic parameters estimation to predict road side pollutant concentrations using neural networks. Environmental Modeling and Assessment.

Gartner, N. H., Messer, C. J., & Rathi, A. (1992). Traffic flow theory: A state of the art report. Transportation Research Board.

Guo, M., Ammar, M. H., & Zegura, E. W. (2005). V3: A vehicle-to-vehicle live video streaming architecture. In IEEE international conference on pervasive computing and communications (pp. 171–180).

Hauge, M., & Kure, O. (2006). A heterogeneous cellular and ad hoc network architecture for multipoint streaming: A detailed performance analysis. In International conference on mobile computing and ubiquitous networking (ICMU) (pp. 174–182).

Hsu, W. L., Liao, H. Y. M., Jeng, B. S., & Fan, K. C. (2004). Real-time traffic parameter extraction using entropy. IEE Proceedings Vision, Image and Signal Processing, 151(3), 194–202.

Hu, W., Tan, T., Wang, L., & Maybank, S. (2004). A survey on visual surveillance of object motion and behaviors. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 34(3), 334–352.

Huang, C. W., Wu, P. H., Lin, S. J., & Hwang, J. N. (2009). Layered video resource allocation in mobile WiMAX using opportunistic multicasting. In IEEE wireless communications and networking conference.

IEEE (2009). IEEE standard for local and metropolitan area networks. Part 16: Air interface for broadband wireless access systems. IEEE Std 802.16-2009.

Kamijo, S., Matsushita, Y., Ikeuchi, K., & Sakauchi, M. (2000). Traffic monitoring and accident detection at intersections. IEEE Transactions on Intelligent Transportation Systems, 1(2), 108–118.

Kastrinaki, V., Zervakis, M., & Kalaitzakis, K. (2003). A survey of video processing techniques for traffic applications. Image and Vision Computing, 21(4), 359–381.

Killat, M., & Hartenstein, H. (2007). Vehicular ad hoc networks: How to show the impact on traffic safety. In IEEE 65th vehicular technology conference (pp. 659–663).

Li, L., Song, J., Wang, F. Y., Niehsen, W., & Zheng, N. N. (2005). IVS 05: New developments and research trends for intelligent vehicles. IEEE Intelligent Systems, 20(4), 10–14.

Liu, Q., Yoo, J., Jang, B. T., Choi, K., & Hwang, J. N. (2005). A scalable VideoGIS system for GPS-guided vehicles. Signal Processing: Image Communication, 20(3), 205–208.

Maggio, E., & Cavallaro, A. (2005). Hybrid particle filter and mean shift tracker with adaptive transition model. In IEEE international conference on acoustics, speech, and signal processing (ICASSP) (pp. 221–224).

Melo, J., Naftel, A., Bernardino, A., & Santos-Victor, J. (2006). Detection and classification of highway lanes using vehicle motion trajectories. IEEE Transactions on Intelligent Transportation Systems, 7(2), 188–200.

Qu, W., Schonfeld, D., & Mohamed, M. (2007). Real-time distributed multi-object tracking using multiple interactive trackers and a magnetic-inertia potential model. IEEE Transactions on Multimedia, 9(3), 511–519.

Stauffer, C., & Grimson, W. E. L. (1999). Adaptive background mixture models for real-time tracking. In IEEE conference on computer vision and pattern recognition (Vol. 2, pp. 246–252).

Vallejo, D., Albusac, J., Jimenez, L., Gonzalez, C., & Moreno, J. (2009). A cognitive surveillance system for detecting incorrect traffic behaviors. Expert Systems with Applications, 36(7), 10503–10511.

Veeraraghavan, H., Masoud, O., & Papanikolopoulos, N. P. (2003). Computer vision algorithms for intersection monitoring. IEEE Transactions on Intelligent Transportation Systems, 4(2), 78–89.

Wang, F. Y. (2006). Driving into the future with ITS. IEEE Intelligent Systems, 21(3), 94–95.

Wu, C. W., & Tay, Y. C. (1999). AMRIS: A multicast protocol for ad hoc wireless networks. In Proceedings of the IEEE military communications conference (pp. 25–29).

Zhou, S. K., Chellappa, R., & Moghaddam, B. (2004). Visual tracking and recognition using appearance-adaptive models in particle filters. IEEE Transactions on Image Processing, 13(11), 1491–1506.

Table 6
Packet delivery ratio under different speeds and packet loss rates (WiMAX loss rate, WiFi loss rate).

Speed | (0%, 0%) | (5%, 5%) | (10%, 10%) | (15%, 15%)
0 km/h | 95.7% | 91.7% | 86.2% | 83.2%
36 km/h | 95.5% | 91.4% | 85.5% | 79.1%
72 km/h | 90.6% | 92.3% | 86.5% | 78.6%
108 km/h | 89.7% | 87.5% | 82.8% | 77.9%
