Robust Bayesian Partition for Extended Target Gaussian Inverse Wishart PHD Filter



Published in IET Signal Processing. Received on 11th April 2013. Revised on 19th August 2013. Accepted on 22nd August 2013. doi: 10.1049/iet-spr.2013.0150


ISSN 1751-9675

Yongquan Zhang, Hongbing Ji

School of Electronic Engineering, Xidian University, Xi’an 710071, People’s Republic of China

E-mail: [email protected]

Abstract: The extended target Gaussian inverse Wishart PHD filter is a promising filter. However, when two or more different sized extended targets are spatially close, the simulation results of Granström et al. show that the cardinality estimate is much smaller than the true value for the separating tracks. In this study, the present authors refer to this phenomenon as the cardinality underestimation problem, which can be solved via a novel robust clustering algorithm, called Bayesian partition, derived by combining the fuzzy adaptive resonance theory with Bayesian theorem. In Bayesian partition, alternative partitions of the measurement set are generated by different vigilance parameters. Simulation results show that the proposed partitioning method has better tracking performance than that presented by Granström et al., implying good application prospects.

1 Introduction

Extended target tracking (ETT) has drawn considerable attention in recent years because of the development of high-resolution sensors [1]. In classical target tracking, it is assumed that each target produces at most one measurement per time step. However, in real-world target tracking, when targets lie in the near field of a high-resolution sensor, the sensor is able to receive more than one measurement per time step from different corner reflectors of a single target. In this case, the target is no longer categorised as a point target; it is denoted an extended target. ETT is valuable for many practical applications, including vehicle tracking using automotive radar, person tracking using laser range sensors, and tracking of sufficiently close airplanes or ships with ground or marine radar stations [2].

Some approaches [3–5] have been proposed for tracking extended targets. In recent work, Mahler developed the probability hypothesis density (PHD) filter for extended targets [6]. By approximating the extended target PHD (ET-PHD) with a Gaussian mixture, a practical implementation of the ET-PHD filter is obtained, called the extended target Gaussian mixture PHD (ET-GM-PHD) filter, which was introduced by Granström et al. [7]. Orguner et al. proposed a cardinalised PHD (CPHD) filter for extended targets and presented a Gaussian mixture implementation of the filter [8]. Lundquist et al. presented a gamma Gaussian inverse Wishart implementation of an extended target CPHD filter [9]. Lian et al. proposed unified cardinalised probability hypothesis density filters for extended targets and unresolved targets [10]. In [7], however, estimation of the targets' extensions is omitted, which leads to some drawbacks. For this reason, an improved version of the ET-GM-PHD filter was proposed, called the extended target Gaussian inverse Wishart PHD (ET-GIW-PHD) filter [11], where the target kinematical states are modelled using a Gaussian distribution, while the target extension is modelled using an inverse Wishart distribution.

An integral part of the ET-PHD filter is the partitioning of the set of measurements [6]. For the partitioning of the ET-GM-PHD filter, we have developed a novel fast partitioning algorithm [12] which significantly reduces the computational burden without losing tracking performance compared with the original partitioning method. In the ET-GM-PHD and ET-GIW-PHD filters, Distance partition, Distance partition with sub-partition, Prediction partition and expectation maximisation (EM) partition are employed. However, for the separating tracks [11] (a scenario in which two different sized and spatially close extended targets first move in parallel and then separate after about half the scenario, see Fig. 1), these partitioning methods do not work, and the cardinality underestimation problem occurs. In this work, we focus on the separating tracks introduced by Granström et al.

The primary contribution of this paper is the development of a novel robust Bayesian partition for the ET-GIW-PHD filter. This partitioning method is inspired by the fuzzy adaptive resonance theory (ART) [13], which is a neural network architecture. Compared with other clustering algorithms, the fuzzy ART has the distinct merit of rapid stable learning. Bayesian partition is obtained by combining the fuzzy ART with Bayesian theorem. Since it takes cluster centres and sizes into account via the mean vector and covariance matrix (distribution shape), it has a potential advantage for different sized and spatially close extended targets. The centre and distribution shape of a category are iteratively updated by the learning process of Bayesian partition. Note that the concept of 'category' comes from the fuzzy ART; in ETT, however, it is equivalent to the 'cell' defined in Section 2.1.



Fig. 1 Separating tracks


Finally, one true target's shape is depicted by one category. Unlike Prediction partition and EM partition (whose partitions of the measurement set are generated from the predicted GIW components), in Bayesian partition the partitions are generated by different vigilance parameters. In other words, the vigilance parameters of Bayesian partition are not fixed values, which differs from the fuzzy ART. The purpose of doing so is to let all possible partitions be efficiently approximated using a subset of partitions. Similar to the fuzzy ART, the function of the Bayesian partition's vigilance parameter is to calibrate the minimum confidence of a formed category.

The remainder of the paper is organised as follows. Section 2 describes the partitioning problem of the ET-GIW-PHD filter, as well as Prediction partition and EM partition. The details of our algorithm, that is, Bayesian partition, are given in Section 3. The simulation results are presented in Section 4. Section 5 contains the conclusions.

2 Problem formulation

2.1 Partitioning problem

An integral part of the ET-PHD filter is the partitioning of the set of measurements [6]. To illustrate the importance of partitioning in the ET-PHD filter, consider the partitioning of a measurement set containing three individual measurements, $Z_k = \{\mathbf{z}_k^{(1)}, \mathbf{z}_k^{(2)}, \mathbf{z}_k^{(3)}\}$. This set can be partitioned as follows [6]

$$\begin{aligned}
p_1 &: \; W_1^1 = \{\mathbf{z}_k^{(1)}, \mathbf{z}_k^{(2)}, \mathbf{z}_k^{(3)}\} \\
p_2 &: \; W_1^2 = \{\mathbf{z}_k^{(1)}, \mathbf{z}_k^{(2)}\}, \quad W_2^2 = \{\mathbf{z}_k^{(3)}\} \\
p_3 &: \; W_1^3 = \{\mathbf{z}_k^{(1)}, \mathbf{z}_k^{(3)}\}, \quad W_2^3 = \{\mathbf{z}_k^{(2)}\} \\
p_4 &: \; W_1^4 = \{\mathbf{z}_k^{(2)}, \mathbf{z}_k^{(3)}\}, \quad W_2^4 = \{\mathbf{z}_k^{(1)}\} \\
p_5 &: \; W_1^5 = \{\mathbf{z}_k^{(1)}\}, \quad W_2^5 = \{\mathbf{z}_k^{(2)}\}, \quad W_3^5 = \{\mathbf{z}_k^{(3)}\}
\end{aligned}$$

Here, $p_i$ is the $i$th partition and $W_j^i$ is the $j$th cell of partition $i$. Note that a cell should consist of similar measurements, namely, the measurements from an extended target or the clutter measurements.


Obviously, the number of possible partitions grows very large as the size of the measurement set increases. Note that the ET-PHD filter requires all possible partitions of the current measurement set for its update, which makes the filter computationally intractable even in simple cases. In order to obtain a computationally tractable solution to the ET-PHD filter, only a subset of all possible partitions can be considered. Moreover, in order to achieve excellent ETT results, this subset of partitions must efficiently approximate all possible partitions. Granström et al. adopted Distance partition, Distance partition with sub-partition, Prediction partition and EM partition in [2] and [11] to solve this problem. In the following Section 2.2, we describe Prediction partition and EM partition.
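To make this combinatorial growth concrete, the following minimal Python sketch (not from the paper; the function and variable names are ours) enumerates every partition of a small measurement set. The count it prints is the Bell number, which grows super-exponentially with the number of measurements.

```python
# Minimal sketch: enumerate all partitions of a measurement set to show
# why considering every partition quickly becomes intractable.
def all_partitions(measurements):
    """Yield every partition of `measurements` as a list of cells."""
    if not measurements:
        yield []
        return
    first, rest = measurements[0], measurements[1:]
    for partial in all_partitions(rest):
        # put `first` into each existing cell in turn
        for i in range(len(partial)):
            yield partial[:i] + [[first] + partial[i]] + partial[i + 1:]
        # or open a new cell containing only `first`
        yield [[first]] + partial

Zk = ["z1", "z2", "z3"]
parts = list(all_partitions(Zk))
print(len(parts))   # 5 partitions, i.e. the Bell number B_3
for p in parts:
    print(p)
# B_10 is already 115975 and B_20 exceeds 5e13, hence the need for a
# small, well-chosen subset of partitions.
```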

2.2 Prediction partition and EM partition

In this paper, since Distance partition and Distance partition with sub-partition [2] are not applicable to the different sized and spatially close extended targets, only Prediction partition and EM partition are described for comparison. Suppose that the jth predicted GIW component is defined by

$$\xi_{k+1|k}^{(j)} = \left( m_{k+1|k}^{(j)},\; P_{k+1|k}^{(j)},\; v_{k+1|k}^{(j)},\; V_{k+1|k}^{(j)} \right) \qquad (1)$$

where $m_{k+1|k}^{(j)}$ and $P_{k+1|k}^{(j)}$ are the predicted Gaussian mean and covariance of the jth component, while $v_{k+1|k}^{(j)}$ and $V_{k+1|k}^{(j)}$ are the predicted inverse Wishart degrees of freedom and inverse scale matrix of the jth component. For Prediction partition [11], a partition is obtained by iterating over the predicted GIW components, which are selected in order of decreasing weight; only components whose weight satisfies $w_{k+1|k}^{(j)} > 0.5$ are used. A corresponding position mean is obtained by taking the first $d$ components of $m_{k+1|k}^{(j)}$, denoted $m_{k+1|k}^{(j),d}$, where $d$ is the dimension of the measurement vectors. In one partition, all measurements $\mathbf{z}_k^{(i)}$ that fulfil

$$\left( \mathbf{z}_k^{(i)} - m_{k+1|k}^{(j),d} \right)^{\mathrm{T}} \left( \hat{X}_{k+1|k}^{(j)} \right)^{-1} \left( \mathbf{z}_k^{(i)} - m_{k+1|k}^{(j),d} \right) < \Delta_d(p) \qquad (2)$$

are put into the same cell. Here, a d-dimensional extension estimate $\hat{X}_{k+1|k}^{(j)}$ is computed as in [3]

$$\hat{X}_{k+1|k}^{(j)} = \frac{V_{k|k-1}^{(j)}}{v_{k|k-1}^{(j)} - 2d - 2} \qquad (3)$$

and $\Delta_d(p)$ is computed using the inverse cumulative $\chi^2$ distribution with $d$ degrees of freedom for probability $p = 0.99$. If a measurement falls within two or more extension estimates, it is only put into the cell formed by the component with the highest weight. If a measurement does not satisfy (2) for any GIW component, it is put into a cell containing only that measurement.

For EM partition [11], a partition is also obtained from the predicted GIW components. For components $j$ with weight $w_{k+1|k}^{(j)} > 0.5$, the initial values of the Gaussian mixture parameters are set as means $\mu_l = m_{k+1|k}^{(j),d}$, covariances $\Sigma_l = \hat{X}_{k+1|k}^{(j)}$ and mixing coefficients $\pi_l \propto \gamma(\xi_{k+1|k}^{(j)})$. The mixing coefficients $\pi_l$ are normalised to satisfy $\sum_l \pi_l = 1$ before the first E-step. The details of the EM algorithm for Gaussian mixtures can be found in, e.g., [14].
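As a rough illustration of how the gating rule (2)–(3) could be realised, the sketch below assumes the predicted GIW components are available as plain dictionaries; the data structures and names are our assumptions, not the authors' implementation.

```python
# Sketch of the Prediction partition gating rule (2)-(3).
import numpy as np
from scipy.stats import chi2

def prediction_partition(Z, components, d=2, p=0.99):
    """Z: (N, d) array of measurements. components: list of dicts with
    keys 'w' (weight), 'm' (predicted mean, first d entries = position),
    'v' and 'V' (inverse Wishart dof and inverse scale matrix).
    Returns one partition as a list of cells (lists of indices)."""
    gate = chi2.ppf(p, df=d)                      # Delta_d(p) in (2)
    cells, assigned = [], np.full(len(Z), -1)
    # iterate over components in order of decreasing weight, w > 0.5 only
    for j in sorted(range(len(components)), key=lambda j: -components[j]['w']):
        c = components[j]
        if c['w'] <= 0.5:
            continue
        m = c['m'][:d]                            # position part of the mean
        X_hat = c['V'] / (c['v'] - 2 * d - 2)     # extension estimate (3)
        X_inv = np.linalg.inv(X_hat)
        cell = []
        for i, z in enumerate(Z):
            if assigned[i] >= 0:                  # already claimed by a
                continue                          # higher-weight component
            r = z - m
            if r @ X_inv @ r < gate:              # Mahalanobis gate (2)
                cell.append(i)
                assigned[i] = j
        if cell:
            cells.append(cell)
    # measurements outside every gate each form a one-element cell
    cells += [[i] for i in range(len(Z)) if assigned[i] < 0]
    return cells
```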


Note that, for a given set of initial predicted GIW components, the EM algorithm converges to the closest local optimum of the likelihood function; that is, there is no guarantee that EM converges to the global optimum. This implies that EM partition obtains a correct partition only if the predicted GIW components correctly represent the likelihood function; otherwise EM partition is likely to produce a wrong partition. The same problem exists in Prediction partition. Thus, Prediction partition and EM partition are sensitive to the predicted GIW components. In particular, for the different sized and spatially close extended targets, if the predicted GIW components are wrong to begin with, the correct partition cannot be obtained. This means that measurements from more than one measurement source are included in the same cell in all partitions, and the ET-GIW-PHD filter subsequently interprets measurements from multiple targets as having originated from a single target, leading to the cardinality underestimation problem. For the separating tracks of Fig. 1, since the correct predicted GIW components cannot be obtained at the outset, Prediction partition and EM partition do not work.

3 Robust Bayesian partition

3.1 Fuzzy ART

The fuzzy ART [13] (see Fig. 2) is a neural network architecture that includes an input field F0 storing the current input vector, a choice field F2 containing the active categories, and a matching field F1 receiving bottom-up input from F0 and top-down input from F2. All input patterns are complement-coded by the field F0 in order to avoid category proliferation. Each input a is represented as a 2M-dimensional vector A = (a, a^c). The weight associated with each F2 category node j (j = 1, …, N) is denoted w_j = (u_j, v_j^c), which subsumes both the bottom-up and top-down weight vectors of fuzzy ART. Initially, all weights are set to one and each category is said to be uncommitted. When a category is first coded, it becomes committed. Each long-term memory trace w_i (i = 1, …, 2M) is monotonically non-increasing with time and hence converges to a limit. Categorisation with the fuzzy ART is performed by category choice, resonance or reset, and learning. The details of the fuzzy ART model can be found in [13].

Category choice: The category choice function T_j for each input A is defined by

$$T_j = \frac{|A \wedge w_j|}{\alpha + |w_j|} \qquad (4)$$

Fig. 2 Fuzzy ART architecture


where $(\mathbf{p} \wedge \mathbf{q})_i = \min(p_i, q_i)$ and $|\mathbf{p}| = \sum_{i=1}^{M} |p_i|$. The winner node J in F2 is selected by

$$T_J = \max\{T_j : j = 1, \ldots, N\} \qquad (5)$$

In particular, nodes become committed in the order j = 1, 2, 3, ….

Resonance or reset: When the category J is chosen, a hypothesis test is performed to measure the degree to which A is a fuzzy subset of w_J. Resonance occurs if the match function of the chosen category meets the vigilance criterion ρ ∈ [0, 1]

$$\frac{|A \wedge w_J|}{|A|} \ge \rho \qquad (6)$$

The chosen category is then said to win (match) and learning is performed. Otherwise, the value of T_J is set to zero for the rest of this pattern presentation, to prevent the persistent selection of the same category during search. A new category is then chosen by (5), and the search process continues until the chosen category satisfies (6).

Learning: Once the search is finished, the weight vector w_J is updated according to

$$w_J^{(\text{new})} = \beta \left( A \wedge w_J^{(\text{old})} \right) + (1 - \beta)\, w_J^{(\text{old})} \qquad (7)$$

where β ∈ [0, 1] is the learning rate parameter. Fast learning corresponds to β = 1. By (7), the fuzzy ART can learn new data without forgetting past data.

Compared with other clustering algorithms, the fuzzy ART has the distinct merit of rapid stable learning. Moreover, it is a global clustering algorithm that is not constrained by initial values. Of course, k-means clustering [14], its improved version k-means++ [15] and EM clustering [14] can also be used to partition the measurement set. However, Granström et al. [11] showed that k-means clustering and k-means++ do not apply to the different sized and spatially close extended targets. Additionally, from the description in Section 2.2, EM clustering is limited by its initial values, so there is no guarantee that it converges to the global optimum. Therefore, for the ET-GIW-PHD filter, choosing the fuzzy ART as the architecture of the partitioning algorithm has a potential advantage.
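For concreteness, the following sketch implements one presentation cycle of the fuzzy ART steps (4)–(7). The parameter values and names are illustrative assumptions, not the settings used in [13] or in this paper.

```python
# Sketch of one fuzzy ART presentation cycle, following (4)-(7).
import numpy as np

def present(a, W, alpha=0.001, beta=1.0, rho=0.75):
    """a: input in [0, 1]^M; W: list of 2M-dim category weight vectors.
    Returns the updated weight list and the index of the chosen category."""
    A = np.concatenate([a, 1.0 - a])              # complement coding
    blocked = set()
    while True:
        if len(blocked) == len(W):                # every category was reset:
            W.append(A.copy())                    # commit a new category
            return W, len(W) - 1
        # category choice (4): T_j = |A ^ w_j| / (alpha + |w_j|)
        T = [-np.inf if j in blocked else
             np.minimum(A, w).sum() / (alpha + w.sum())
             for j, w in enumerate(W)]
        J = int(np.argmax(T))                     # winner node (5)
        # vigilance test (6): |A ^ w_J| / |A| >= rho
        if np.minimum(A, W[J]).sum() / A.sum() >= rho:
            # learning (7): w_J <- beta (A ^ w_J) + (1 - beta) w_J
            W[J] = beta * np.minimum(A, W[J]) + (1 - beta) * W[J]
            return W, J
        blocked.add(J)                            # mismatch reset, keep searching
```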

3.2 Bayesian partition

In order to solve the cardinality underestimation problem, a novel robust clustering algorithm, Bayesian partition, is proposed in this section. It is based on the fuzzy ART [13] and Bayesian theorem. In Bayesian partition, we adopt a neural network architecture (including category choice, resonance or reset, and learning) similar to the fuzzy ART to partition the measurement set. However, in Bayesian partition, a Bayesian posterior probability is used as the choice function in the category choice stage, which ensures that an input measurement is put into the correct category. We therefore call the proposed algorithm Bayesian partition. Like the fuzzy ART, Bayesian partition is a neural network architecture that includes an input field F0 storing the current input vector, a choice field F2 containing the active categories, and a matching field F1 receiving bottom-up input from F0 and top-down input from F2.


The mean vector and covariance matrix associated with each F2 category node j (j = 1, …, N_cat) are denoted μ_j and Σ_j, respectively, where N_cat is the number of categories. Initially, the mean vector of a category is set to the first measurement vector that chooses that category, its covariance matrix is set large enough to satisfy the vigilance criterion given below, and each category is said to be uncommitted. When a category is first learned, it becomes committed. In other words, once the initial covariance matrix of a category has been learned, the category becomes committed (namely, learned); otherwise, it is uncommitted. Bayesian partition is performed by category choice, resonance or reset (vigilance test) and learning for every measurement vector.

Category choice: In this stage, all existing categories compete to represent an input measurement. The choice function is the Bayesian posterior probability of the jth category given the input measurement vector z, which is expressed as

$$P(c_j \mid \mathbf{z}) = \frac{p(\mathbf{z} \mid c_j)\, \hat{P}(c_j)}{\sum_{l=1}^{N_{\text{cat}}} p(\mathbf{z} \mid c_l)\, \hat{P}(c_l)} \qquad (8)$$

where c_j is the jth category, $\hat{P}(c_j)$ is the estimated prior probability of the jth category, and $p(\mathbf{z} \mid c_j)$ is a Gaussian likelihood function used to measure the similarity between z and c_j, defined as

$$p(\mathbf{z} \mid c_j) = \frac{1}{(2\pi)^{d/2} |\Sigma_j|^{1/2}} \exp\left( -\frac{1}{2} (\mathbf{z} - \mu_j)^{\mathrm{T}} \Sigma_j^{-1} (\mathbf{z} - \mu_j) \right) \qquad (9)$$

where μ_j and Σ_j are the mean vector and covariance matrix of the jth category. Note that if z is the first input vector of all measurement vectors, N_cat = 1; if z is the first input vector of a category, $\hat{P}(c_j) = 1$. The winning category J is selected by

$$J = \arg\max_j P(c_j \mid \mathbf{z}) \qquad (10)$$

Resonance or reset (vigilance test): The purpose of this stage is to determine whether the input measurement z matches the shape of the chosen category J's distribution, which is characterised by a match function. In this paper, we use the normalised similarity between z and c_J to define the match function, namely

$$M(\mathbf{z}, J) = (2\pi)^{d/2} |\Sigma_J|^{1/2}\, p(\mathbf{z} \mid c_J) \qquad (11)$$

Resonance occurs if the match function of c_J meets the vigilance criterion

$$M(\mathbf{z}, J) \ge \rho \qquad (12)$$

where ρ ∈ [0, 1] is a vigilance parameter. Learning then ensues, as defined below. Mismatch reset occurs if

$$M(\mathbf{z}, J) < \rho \qquad (13)$$

The category J is then removed from the competition for the measurement z, and Bayesian partition searches for another category via (8) and (10) until one satisfying (12) is found. If all existing categories fail the vigilance test, a new category characterised by a mean vector z and an initial


covariance matrix Σ_ini is formed, then

$$N_{\text{cat}} = N_{\text{cat}} + 1 \qquad (14)$$

Note that the initial distribution shape represented by Σ_ini must be large enough to satisfy (12).

Learning: When the chosen category satisfies the vigilance criterion (12), the category parameters are updated by learning. Learning in Bayesian partition involves the adjustment of the centre (i.e., mean vector), distribution shape (i.e., covariance matrix), count and estimated prior probability of the winning category. In the learning stage, in order to retain the fuzzy ART's merit of learning new data without forgetting past data, we adopt the following measures. Firstly, the centre of the chosen category J is updated by weighting the current measurement z and the mean vector μ_J. The weights are obtained by considering the respective contributions of z and μ_J to the category centre. Let N_J be the number of measurements that have been clustered by the Jth category. When the current measurement z is included, the share contributed by the original mean vector μ_J should be N_J/(N_J + 1); similarly, the share contributed by z is 1/(N_J + 1). Thus, the centre of the chosen category J is updated by

$$\mu_J = \frac{N_J}{N_J + 1} \mu_J + \frac{1}{N_J + 1} \mathbf{z} \qquad (15)$$

Secondly, and analogously, the distribution shape of the chosen category is updated by weighting the original covariance matrix Σ_J and the temporary covariance matrix derived from z and the updated μ_J. From basic probability theory, this temporary covariance matrix is simply computed as $(\mathbf{z} - \mu_J)(\mathbf{z} - \mu_J)^{\mathrm{T}}$. Therefore, the distribution shape of the chosen category J is updated using

$$\Sigma_J = \frac{N_J}{N_J + 1} \Sigma_J + \frac{1}{N_J + 1} (\mathbf{z} - \mu_J)(\mathbf{z} - \mu_J)^{\mathrm{T}} \qquad (16)$$

Finally, N_J and $\hat{P}(c_J)$ are updated using

$$N_J = N_J + 1 \qquad (17)$$

$$\hat{P}(c_J) = \frac{N_J}{\sum_{j=1}^{N_{\text{cat}}} N_j} \qquad (18)$$

Note that (17) and (18) are commonly used in clustering. Bayesian partition is also illustrated in Fig. 3. When Bayesian partition is applied to ETT, one partition is generated by one preset vigilance parameter; a partition contains a number of categories, that is, cells. In the next section, we give the detailed process of generating partitions.
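Before moving on, a minimal sketch of Bayesian partition for a single vigilance parameter, following (8)–(18), is given below. Σini, the data structures and the treatment of a freshly created category are our assumptions, not the authors' code.

```python
# Sketch of Bayesian partition for one vigilance parameter rho.
import numpy as np

def bayesian_partition(Z, rho, sigma_ini=2000.0):
    d = Z.shape[1]
    mus, Sigmas, Ns = [], [], []                  # per-category parameters

    def match_and_like(z, j):
        """Return (match function (11), Gaussian likelihood (9))."""
        diff = z - mus[j]
        m = np.exp(-0.5 * diff @ np.linalg.inv(Sigmas[j]) @ diff)
        norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigmas[j]))
        return m, m / norm

    labels = np.empty(len(Z), dtype=int)
    for n, z in enumerate(Z):
        total, blocked, J = sum(Ns), set(), -1
        while len(blocked) < len(mus):
            # category choice (8)/(10): the denominator of (8) is common to
            # all categories, so the argmax only needs p(z|c_j) * P(c_j)
            post = [-1.0 if j in blocked else
                    match_and_like(z, j)[1] * Ns[j] / total
                    for j in range(len(mus))]
            j_star = int(np.argmax(post))
            if match_and_like(z, j_star)[0] >= rho:   # vigilance test (12)
                J = j_star
                break
            blocked.add(j_star)                       # mismatch reset (13)
        if J < 0:                                     # all failed: new category (14)
            mus.append(z.astype(float).copy())        # centre = this measurement
            Sigmas.append(sigma_ini * np.eye(d))      # shape = Sigma_ini
            Ns.append(1)
            labels[n] = len(mus) - 1
            continue
        # learning (15)-(18): weighted centre and shape updates
        N = Ns[J]
        mus[J] = N / (N + 1) * mus[J] + z / (N + 1)
        diff = z - mus[J]
        Sigmas[J] = N / (N + 1) * Sigmas[J] + np.outer(diff, diff) / (N + 1)
        Ns[J] = N + 1                                 # prior (18) is Ns / sum(Ns)
        labels[n] = J
    return labels                                     # cell index per measurement
```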

3.3 Generating alternative partitions

For Bayesian partition, given a vigilance parameter, a partition is obtained by iterating over all current measurement vectors. Therefore, N_V alternative partitions of the measurement set can be generated by selecting N_V different vigilance parameters, that is

$$\{\rho_l\}_{l=1}^{N_V}, \qquad \rho_{l+1} = \rho_l + \Delta, \quad \text{for } l = 1, \ldots, N_V - 1 \qquad (19)$$

where ρ_l ∈ [0, 1] and Δ is a step length, which is an empirical value. A larger step length generates fewer alternative partitions and requires less computation time, but sacrifices a good deal of performance.


Fig. 3 Bayesian partition


The alternative partitions contain more categories as ρ_l increases, and the categories typically contain fewer measurements. If all vigilance parameters satisfying ρ_l ∈ [0, 1] are used to form alternative partitions, N_V = round(1/Δ) + 1 partitions are obtained. Note that adjacent vigilance parameters may generate the same partition. Thus, some partitions might be identical and must be discarded so that each partition is unique at the end. Finally, we obtain N′_V different alternative partitions by deleting the identical partitions, where N′_V ≤ N_V. If Δ is small, N′_V is quite large. In order to reduce the computational burden, we only choose vigilance parameters in a confidence region. This region is defined by the vigilance thresholds δ_VL and δ_VU for the lower and upper vigilance parameters. Thus, only the vigilance parameters that satisfy the condition

$$\delta_{VL} \le \rho_l \le \delta_{VU} \qquad (20)$$

can form the alternative partitions. Similar to Δ, δ_VL and δ_VU are also empirical values. Note that the subscript 'V' in N_V, N′_V, δ_VL and δ_VU denotes 'Vigilance'.
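Building on the sketch above, the alternative partitions of this section could be generated as follows. The threshold and step values are the empirical ones quoted in Section 4, and the deduplication of identical partitions is one possible realisation.

```python
# Sketch of Section 3.3: sweep the vigilance parameter over
# [delta_VL, delta_VU] with step Delta and keep only unique partitions.
import numpy as np

def alternative_partitions(Z, delta_VL=0.05, delta_VU=0.2, step=0.05,
                           sigma_ini=2000.0):
    partitions, seen = [], set()
    for rho in np.arange(delta_VL, delta_VU + 1e-9, step):    # (19)-(20)
        labels = bayesian_partition(Z, rho, sigma_ini)
        # canonical form: each cell as a sorted tuple of measurement indices
        cells = tuple(sorted(
            tuple(np.flatnonzero(labels == c)) for c in np.unique(labels)))
        if cells not in seen:                     # drop identical partitions
            seen.add(cells)
            partitions.append([list(cell) for cell in cells])
    return partitions                             # the N'_V unique partitions
```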

4 Simulation results

In this section, we illustrate our robust Bayesian partition on four different ETT scenarios, namely crossing tracks, parallel tracks, separating tracks and turning tracks. To demonstrate the tracking performance of the proposed method, the joint partitioning strategy presented in [11] is used as a comparison method; it includes Distance partition with sub-partition and Prediction partition. For notational clarity, we call this partitioning strategy Joint partition. Note that, in order to track the different sized and spatially close extended targets, only Prediction partition is adopted, because both Prediction partition and EM partition handle this type of true target scenario [11].

The true target extensions, the model for the expected number of measurements, the motion model parameters and the scenario parameters are very similar to those presented in [11]. The true tracks for the four test cases are shown in Figs. 1 and 4. The true target extensions are defined by

$$X_k^{(i)} = R_k^{(i)} \operatorname{diag}\left(\left[ A_i^2 \;\; a_i^2 \right]\right) \left( R_k^{(i)} \right)^{\mathrm{T}} \qquad (21)$$


where $R_k^{(i)}$ is a rotation matrix that aligns the ith extension's major axis with the ith target's direction of motion at time step k, and $A_i$ and $a_i$ are the lengths of the major and minor axes, respectively. In the separating tracks, parallel tracks and turning tracks scenarios, the major and minor axes are set to (A_1, a_1) = (25, 6.5) and (A_2, a_2) = (15, 4) for the two targets, respectively. In the crossing tracks scenario, (A_1, a_1) = (25, 6.5), (A_2, a_2) = (15, 4) and (A_3, a_3) = (10, 2.5) for the three targets, respectively. Suppose that the expected number of measurements generated by a target is a function of the extended target volume $V_k^{(i)} = \pi \sqrt{|X_k^{(i)}|} = \pi A_i a_i$. We adopt the following simple model for the expected number of measurements generated by the extended targets

$$\gamma_k^{(i)} = \left\lfloor \sqrt{\frac{4}{\pi} V_k^{(i)}} + 0.5 \right\rfloor = \left\lfloor 2\sqrt{A_i a_i} + 0.5 \right\rfloor \qquad (22)$$

where ⌊·⌋ is the floor function. This model is equivalent to assuming a uniform expected number of measurements per square root of surveillance area.
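As an illustrative check of (22), the axis lengths listed above give

$$\gamma^{(1)} = \left\lfloor 2\sqrt{25 \times 6.5} + 0.5 \right\rfloor = \lfloor 25.99 \rfloor = 25, \qquad \gamma^{(2)} = \left\lfloor 2\sqrt{15 \times 4} + 0.5 \right\rfloor = \lfloor 15.99 \rfloor = 15, \qquad \gamma^{(3)} = \left\lfloor 2\sqrt{10 \times 2.5} + 0.5 \right\rfloor = \lfloor 10.5 \rfloor = 10$$

so the largest target returns roughly 25 target-originated measurements per scan, and the two smaller targets roughly 15 and 10, respectively.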

The target dynamic motion model is represented as [3]

$$\mathbf{x}_{k+1}^{(i)} = \left( F_{k+1|k} \otimes I_d \right) \mathbf{x}_k^{(i)} + \mathbf{w}_{k+1}^{(i)} \qquad (23)$$

where $\mathbf{w}_{k+1}^{(i)}$ is zero-mean Gaussian process noise with covariance $Q_{k+1|k} \otimes X_{k+1}^{(i)}$, d is the dimension of the target extent, $X_k^{(i)}$ is a d × d symmetric positive-definite matrix, $I_d$ is a d × d identity matrix, $A \otimes B$ denotes the Kronecker product, and $F_{k+1|k}$ and $Q_{k+1|k}$ are defined by Koch [3]

$$F_{k+1|k} = \begin{bmatrix} 1 & T_s & 0.5 T_s^2 \\ 0 & 1 & T_s \\ 0 & 0 & \mathrm{e}^{-T_s/\theta} \end{bmatrix} \qquad (24)$$

$$Q_{k+1|k} = \Sigma^2 \left( 1 - \mathrm{e}^{-2 T_s/\theta} \right) \operatorname{diag}([0 \;\; 0 \;\; 1]) \qquad (25)$$

where $T_s$ is the sampling time, Σ is the scalar acceleration standard deviation and θ is the manoeuvre correlation time. The measurement model is represented as [3]

$$\mathbf{z}_k^{(j)} = \left( H_k \otimes I_d \right) \mathbf{x}_k^{(j)} + \mathbf{e}_k^{(j)} \qquad (26)$$

where $\mathbf{e}_k^{(j)}$ is white Gaussian noise with covariance $X_k^{(j)}$ and $H_k = [1 \;\; 0 \;\; 0]$. Each target generates a Poisson-distributed number of measurements, where the Poisson rate $\gamma_k^{(j)}$ is defined by (22). In the four simulation scenarios, the model parameters are set to $T_s = 1$ s, θ = 1 s, Σ = 0.1 m/s² and τ = 5 s. The parameters of the $J_{b,k}$ = 2 and 3 birth extended targets are set to $w_{b,k}^{(j)} = 0.1$, $m_{b,k}^{(j)} = [(\mathbf{x}_0^{(j)})^{\mathrm{T}} \; \mathbf{0}_4^{\mathrm{T}}]^{\mathrm{T}}$, $P_{b,k}^{(j)} = \operatorname{diag}([100^2 \;\; 25^2 \;\; 25^2])$, $v_{b,k}^{(j)} = 7$ and $V_{b,k}^{(j)} = \operatorname{diag}([1 \;\; 1])$.

In Fig. 1, the two extended targets are born at 1 s and die at 100 s. Similarly, in Figs. 4b and c, the two extended targets are also born at 1 s and die at 100 s, whereas in Fig. 4a extended targets 1 and 2 are born at 1 s and die at 100 s, and extended target 3 is born at 20 s and dies at 92 s. A total of 500 Monte Carlo simulations are performed for each of the four tracking scenarios (see Figs. 1 and 4), with a clutter rate of ten clutter measurements per scan, and clutter and measurements generated independently. The probabilities of survival and detection are set to 0.99 and 0.98, respectively.
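To show how the models (22)–(26) fit together, the following sketch simulates one time step for a single target. The variable names are ours, clutter is omitted, the rotation in (21) is taken as the identity, and the process noise uses the current extent; it is a sketch, not the simulation code used for the results below.

```python
# One simulation step under (21)-(26): propagate the state with
# F_{k+1|k} (x) I_d and draw a Poisson number of measurements whose
# scatter covariance equals the target extent X.
import numpy as np

Ts, theta, Sigma_acc = 1.0, 1.0, 0.1             # model parameters from the text
d = 2                                            # planar extent

F = np.array([[1.0, Ts, 0.5 * Ts ** 2],
              [0.0, 1.0, Ts],
              [0.0, 0.0, np.exp(-Ts / theta)]])  # (24)
Q = Sigma_acc ** 2 * (1 - np.exp(-2 * Ts / theta)) * np.diag([0.0, 0.0, 1.0])  # (25)
H = np.array([[1.0, 0.0, 0.0]])                  # measurement matrix in (26)

def step(x, X, A, a, rng):
    """x: 3d-dim state (position, velocity, acceleration per axis);
    X: d x d extent. Returns the propagated state and one scan of
    target-originated measurements."""
    x_next = np.kron(F, np.eye(d)) @ x + \
        rng.multivariate_normal(np.zeros(3 * d), np.kron(Q, X))        # (23)
    gamma = int(np.floor(2.0 * np.sqrt(A * a) + 0.5))                  # (22)
    n_meas = rng.poisson(gamma)                                        # Poisson count
    centre = np.kron(H, np.eye(d)) @ x_next                            # (26)
    Z = rng.multivariate_normal(centre, X, size=n_meas)                # noise cov = X
    return x_next, Z

rng = np.random.default_rng(0)
x0 = np.zeros(3 * d)                             # target at the origin, at rest
X0 = np.diag([25.0 ** 2, 6.5 ** 2])              # extent of the larger target (21)
x1, Z1 = step(x0, X0, 25.0, 6.5, rng)
print(Z1.shape)                                  # roughly 25 two-dimensional returns
```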


Fig. 4 True target tracks used in simulations
a Crossing tracks
b Parallel tracks
c Turning tracks

Fig. 5 Results of separating tracks
a Average cardinality estimates
b Average OSPA distances
c Average computation time for Joint partition and Bayesian partition


According to empirical simulations with good target tracking results, the initial covariance matrix is set to Σ_ini = diag([2000 2000]), the vigilance thresholds are set to δ_VL = 0.05 and δ_VU = 0.2, and the step length is set to Δ = 0.05. Therefore, the alternative partitions approximating all possible partitions are generated by setting the vigilance parameters to 0.05, 0.1, 0.15 and 0.2, respectively. In Bayesian partition, the larger the vigilance parameter, the smaller the ellipse areas of the formed categories, but the greater the number of ellipses. Through a number of simulations, we found that the different sized and spatially close extended targets cannot be split if δ_VL < 0.05, and that the cardinality overestimation problem occurs if δ_VL > 0.2.


Fig. 6 Results of crossing tracks
a Average cardinality estimates
b Average OSPA distances
c Average computation time for Joint partition and Bayesian partition

Fig. 7 Results of parallel tracks
a Average cardinality estimates
b Average OSPA distances
c Average computation time for Joint partition and Bayesian partition


In Joint partition, for Prediction partition, Δ_d(p) in (2) is computed using the inverse cumulative χ² distribution with d = 2 degrees of freedom for probability p = 0.99 [11]; for Distance partition with sub-partition, the distance threshold d_l satisfies the condition $d_{P_L} < d_l < d_{P_U}$ for the lower and upper probabilities $P_L \le 0.3$ and $P_U \ge 0.8$, where the definitions of $d_{P_L}$ and $d_{P_U}$ can be found in [2].

The results are shown in Figs. 5–8, which give the corresponding average cardinality estimates, optimal subpattern assignment (OSPA) distances [16] and computation times, respectively.


Fig. 8 Results of turning tracks
a Average cardinality estimates
b Average OSPA distances
c Average computation time for Joint partition and Bayesian partition


Compared with previous miss-distances used in multi-target tracking, the OSPA distance overcomes limitations such as inconsistent behaviour and the lack of a meaningful physical interpretation when the cardinalities of the two finite sets under consideration differ. It allows a natural physical interpretation even if the two sets' cardinalities are not the same, and does not exhibit the arbitrariness inherent in ad hoc assignment approaches [16]. It has two adjustable parameters, p ∈ [1, ∞] and c > 0, with meaningful interpretations, namely outlier sensitivity and cardinality penalty, respectively. In this paper, the parameters of the OSPA metric are set to p = 2 and c = 60.

As seen from Fig. 5a, for the separating tracks scenario, the average cardinality estimated by Joint partition is far smaller than the true value when the two different sized and spatially close extended targets move in parallel, which is caused by the cardinality underestimation problem discussed in Section 2. In comparison, the cardinality estimate of Bayesian partition is closer to the true value. This is because Bayesian partition takes the distribution's shape information into account: as measurements arrive, the true shape of a category's distribution is iteratively captured by the update (16). This is verified by Fig. 5b, which shows that the average OSPA distance of Bayesian partition is far smaller than that of Joint partition while the targets move in parallel. However, Bayesian partition requires slightly more computation time, as shown in Fig. 5c, which is acceptable for a target tracking system. On average, the ET-GIW-PHD filter using Bayesian partition requires 7.1798 s for one Monte Carlo run, while the ET-GIW-PHD filter using Joint partition requires 6.1872 s. Note that the computational requirements of both algorithms are compared via CPU processing time; the simulations are implemented in MATLAB on an Intel Core i5-2400 3.10 GHz processor with 4 GB RAM. As seen from Figs. 7 and 8, for the parallel tracks and turning tracks, Bayesian partition also obtains good tracking performance, apart from requiring slightly more computation time. However, for the crossing tracks, the performance of the two partitioning algorithms is comparable. This is because both Bayesian partition and Joint partition can split spatially well-separated extended targets, so for separated extended targets both partitioning algorithms achieve good tracking performance. Note that both Bayesian partition and Joint partition are sensitive to manoeuvres that are modelled poorly by the motion model, but the results of Bayesian partition are better than those of Joint partition, as shown in Fig. 8.
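For reference, a minimal sketch of the OSPA metric of [16] with the settings p = 2 and c = 60 used here might look as follows; it evaluates positions only, and SciPy's assignment solver stands in for the optimal sub-pattern assignment.

```python
# Minimal sketch of the OSPA distance between two finite position sets.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=60.0, p=2.0):
    """X: (m, dim) estimated positions, Y: (n, dim) true positions."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c                                  # pure cardinality error
    if m > n:                                     # convention: m <= n
        X, Y, m, n = Y, X, n, m
    # cut-off distance min(c, ||x - y||), raised to the power p
    D = np.minimum(c, np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)) ** p
    row, col = linear_sum_assignment(D)           # best sub-pattern assignment
    cost = D[row, col].sum() + (c ** p) * (n - m) # localisation + cardinality terms
    return (cost / n) ** (1.0 / p)

# e.g. one missed target contributes the full cut-off c to the distance:
est = np.array([[0.0, 0.0]])
true = np.array([[0.0, 0.0], [500.0, 0.0]])
print(ospa(est, true))                            # approx. 42.4 = 60 / sqrt(2)
```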

5 Conclusions

In order to solve the cardinality underestimation problem caused by the separating tracks, a novel robust clustering algorithm based on Bayesian theorem, called Bayesian partition, is proposed in this paper. The partitioning method generates alternative partitions of the measurement set via different vigilance parameters. The simulation results show that the proposed algorithm effectively solves the cardinality underestimation problem and has better tracking performance than Joint partition for two or more different sized and spatially close extended targets. However, for separated extended targets, the tracking performance of the proposed algorithm and Joint partition is comparable.

6 Acknowledgments

This work was supported by the National Natural Science Foundation of China (grant no. 61372003). The authors acknowledge the anonymous reviewers and the associate editor for their valuable suggestions and comments.

7 References

1 Gilholm, K., Godsill, S., Maskell, S., Salmond, D.: 'Poisson models for extended target and group tracking'. Proc. SPIE Signal Data Process. Small Targets, San Diego, CA, August 2005, vol. 5913, pp. 230–241

2 Granström, K., Lundquist, C., Orguner, U.: 'Extended target tracking using a Gaussian-mixture PHD filter', IEEE Trans. Aerosp. Electron. Syst., 2012, 48, (4), pp. 3268–3286

3 Koch, J.W.: 'Bayesian approach to extended object and cluster tracking using random matrices', IEEE Trans. Aerosp. Electron. Syst., 2008, 44, (3), pp. 1042–1059

4 Angelova, D., Mihaylova, L.: 'Extended object tracking using Monte Carlo methods', IEEE Trans. Signal Process., 2008, 56, (2), pp. 825–832

5 Feldmann, M., Franken, D.: 'Tracking of extended objects and group targets using random matrices – a new approach'. Proc. Int. Conf. Information Fusion, Cologne, Germany, July 2008, pp. 1–8

6 Mahler, R.: 'PHD filters for nonstandard targets, I: extended targets'. Proc. Int. Conf. Information Fusion, Seattle, WA, July 2009, pp. 915–921

7 Granström, K., Lundquist, C., Orguner, U.: 'A Gaussian mixture PHD filter for extended target tracking'. Proc. Int. Conf. Information Fusion, Edinburgh, Scotland, July 2010, pp. 1–8


8 Orguner, U., Lundquist, C., Granström, K.: 'Extended target tracking with a cardinalized probability hypothesis density filter'. Proc. Int. Conf. Information Fusion, Chicago, IL, USA, 5–8 July 2011, pp. 1–8

9 Lundquist, C., Granström, K., Orguner, U.: 'An extended target CPHD filter and a gamma Gaussian inverse Wishart implementation', IEEE J. Sel. Topics Signal Process., 2013, 7, (3), pp. 472–483

10 Lian, F., Han, C., Liu, W., Liu, J., Sun, J.: 'Unified cardinalized probability hypothesis density filters for extended targets and unresolved targets', Signal Process., 2012, 92, (7), pp. 1729–1744

11 Granström, K., Orguner, U.: 'A PHD filter for tracking multiple extended targets using random matrices', IEEE Trans. Signal Process., 2012, 60, (11), pp. 5657–5671

12 Zhang, Y.Q., Ji, H.B.: 'A novel fast partitioning algorithm for extended target tracking using a Gaussian mixture PHD filter', Signal Process., 2013, 93, (11), pp. 2975–2985

13 Carpenter, G.A., Grossberg, S., Rosen, D.B.: 'Fuzzy ART: fast stable learning and categorization of analog patterns by an adaptive resonance system', Neural Netw., 1991, 4, (6), pp. 759–771

14 Bishop, C.M.: 'Pattern recognition and machine learning' (Springer, New York, 2006)

15 Arthur, D., Vassilvitskii, S.: 'k-means++: the advantages of careful seeding'. Proc. ACM-SIAM Symp. Discrete Algorithms, Philadelphia, PA, USA, January 2007, pp. 1027–1035

16 Schuhmacher, D., Vo, B.T., Vo, B.N.: 'A consistent metric for performance evaluation of multi-object filters', IEEE Trans. Signal Process., 2008, 56, (8), pp. 3447–3457
