IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 11, NOVEMBER 2006 4091
The Gaussian Mixture Probability Hypothesis Density Filter
Ba-Ngu Vo and Wing-Kin Ma, Member, IEEE
Abstract—A new recursive algorithm is proposed for jointly estimating the time-varying number of targets and their states from a sequence of observation sets in the presence of data association uncertainty, detection uncertainty, noise, and false alarms. The approach involves modelling the respective collections of targets and measurements as random finite sets and applying the probability hypothesis density (PHD) recursion to propagate the posterior intensity, which is a first-order statistic of the random finite set of targets, in time. At present, there is no closed-form solution to the PHD recursion. This paper shows that under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture. More importantly, closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived. The proposed algorithm combines these recursions with a strategy for managing the number of Gaussian components to increase efficiency. This algorithm is extended to accommodate mildly nonlinear target dynamics using approximation strategies from the extended and unscented Kalman filters.
Index Terms—Intensity function, multiple-target tracking, optimal filtering, point processes, random sets.
I. INTRODUCTION
In a multiple-target environment, not only do the states of the targets vary with time, but the number of targets also changes due to targets appearing and disappearing. Often, not all of the existing targets are detected by the sensor. Moreover, the sensor also receives a set of spurious measurements (clutter) not originating from any target. As a result, the observation set at each time step is a collection of indistinguishable partial observations, only some of which are generated by targets. The objective of multiple-target tracking is to jointly estimate, at each time step, the number of targets and their states from a sequence of noisy and cluttered observation sets. Multiple-target tracking is an established area of study; for details on its techniques and applications, readers are referred to [1] and [2]. Up-to-date overviews are also available in more recent works such as [3]-[5].
Manuscript received August 18, 2005; revised January 23, 2006. This work was supported in part by the Australian Research Council under Discovery Grant DP0345215. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Carlos H. Muravchik.
B.-N. Vo is with the Department of Electrical and Electronic Engineering, The University of Melbourne, Parkville, Victoria 3010, Australia (e-mail: [email protected]).
W.-K. Ma is with the Institute of Communications Engineering and Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan 30013, R.O.C. (e-mail: [email protected]).
Digital Object Identifier 10.1109/TSP.2006.881190
An intrinsic problem in multiple-target tracking is the unknown association of measurements with appropriate targets [1], [2], [6], [7]. Due to its combinatorial nature, the data-association problem makes up the bulk of the computational load in multiple-target tracking algorithms. Most traditional multiple-target tracking formulations involve explicit associations between measurements and targets. Multiple hypothesis tracking (MHT) and its variations concern the propagation of association hypotheses in time [2], [6], [7]. The joint probabilistic data association filter (JPDAF) [1], [8], the probabilistic MHT (PMHT) [9], and the multiple-target particle filter [3], [4] use observations weighted by their association probabilities. Alternative formulations that avoid explicit associations between measurements and targets include symmetric measurement equations [10] and random finite sets (RFS) [5], [11]-[14].
The RFS approach to multiple-target tracking is an emerging and promising alternative to the traditional association-based methods [5], [11], [15]. A comparison of the RFS approach and traditional multiple-target tracking methods has been given in [11]. In the RFS formulation, the collection of individual targets is treated as a set-valued state, and the collection of individual observations is treated as a set-valued observation. Modelling set-valued states and set-valued observations as RFSs allows the problem of dynamically estimating multiple targets in the presence of clutter and association uncertainty to be cast in a Bayesian filtering framework [5], [11], [15]-[17]. This theoretically optimal approach to multiple-target tracking is an elegant generalization of the single-target Bayes filter. Indeed, novel RFS-based filters such as the multiple-target Bayes filter, the probability hypothesis density (PHD) filter [5], [11], [18], and their implementations [16], [17], [19]-[23] have generated substantial interest.
The focus of this paper is the PHD filter, a recursion that propagates the first-order statistical moment, or intensity, of the RFS of states in time [5]. This approximation was developed to alleviate the computational intractability of the multiple-target Bayes filter, which stems from the combinatorial nature of the multiple-target densities and the multiple integrations on the (infinite-dimensional) multiple-target state space. The PHD filter operates on the single-target state space and avoids the combinatorial problem that arises from data association. These salient features render the PHD filter extremely attractive. However, the PHD recursion involves multiple integrals that have no closed-form solutions in general. A generic sequential Monte Carlo technique [16], [17], accompanied by various performance guarantees [17], [24], [25], has been proposed to propagate the posterior intensity in time. In this approach, state estimates are extracted from the particles representing the
1053-587X/$20.00 © 2006 IEEE

posterior intensity using clustering techniques such as k-means or expectation-maximization. Special cases of this so-called particle-PHD filter have also been independently implemented in [21] and [22]. Due to its ability to handle a time-varying number of targets with nonlinear dynamics at relatively low complexity, innovative extensions and applications of the particle-PHD filter soon followed [26]-[31]. The main drawbacks of this approach are the large number of particles and the unreliability of clustering techniques for extracting state estimates. (The latter will be further discussed in Section III-C.)
In this paper, we propose an analytic solution to the PHD recursion for linear Gaussian target dynamics and a Gaussian birth model. This solution is analogous to the Kalman filter as a solution to the single-target Bayes filter. It is shown that when the initial prior intensity is a Gaussian mixture, the posterior intensity at any subsequent time step is also a Gaussian mixture. Moreover, closed-form recursions for the weights, means, and covariances of the constituent Gaussian components are derived. The resulting filter propagates the Gaussian mixture posterior intensity in time as measurements arrive, in the same spirit as the Gaussian sum filter of [32], [33]. The fundamental difference is that the Gaussian sum filter propagates a probability density using the Bayes recursion, whereas the Gaussian mixture PHD filter propagates an intensity using the PHD recursion. An added advantage of the Gaussian mixture representation is that it allows state estimates to be extracted from the posterior intensity in a much more efficient and reliable manner than clustering in the particle-based approach. In general, the number of Gaussian components in the posterior intensity increases with time. However, this problem can be effectively mitigated by keeping only the dominant Gaussian components at each instance. Two extensions to nonlinear target dynamics models are also proposed. The first is based on linearizing the model, while the second is based on the unscented transform. Simulation results are presented to demonstrate the capability of the proposed approach.
Preliminary results on the closed-form solution to the PHD recursion have been presented in a conference paper [34]. This paper is a more complete version.
The structure of this paper is as follows. Section II presents the random finite set formulation of multiple-target filtering and the PHD filter. Section III presents the main result of this paper, namely, the analytical solution to the PHD recursion under linear Gaussian assumptions. An implementation of the PHD filter and simulation results are also presented. Section IV extends the proposed approach to nonlinear models using ideas from the extended and unscented Kalman filters. Demonstrations with tracking nonlinear targets are also given. Finally, concluding remarks and possible future research directions are given in Section V.
II. PROBLEM FORMULATION
This section presents a formulation of multiple-target filtering in the random finite set (or point process) framework. We begin with a review of single-target Bayesian filtering in Section II-A. Using random finite set models, the multiple-target tracking problem is then formulated as a Bayesian filtering problem in Section II-B. This provides sufficient background leading to Section II-C, which describes the PHD filter.
A. SingleTarget Filtering
In many dynamic state estimation problems, the state is assumed to follow a Markov process on the state space 𝒳 ⊆ ℝ^{n_x} with transition density f_{k|k-1}(· | ·); i.e., given a state x_{k-1} at time k−1, the probability density of a transition to the state x_k at time k is¹

    f_{k|k-1}(x_k | x_{k-1}).    (1)
This Markov process is partially observed in the observation space 𝒵 ⊆ ℝ^{n_z}, as modelled by the likelihood function g_k(· | ·); i.e., given a state x_k at time k, the probability density of receiving the observation z_k ∈ 𝒵 is

    g_k(z_k | x_k).    (2)
The probability density of the state x_k at time k given all observations z_{1:k} = (z_1, …, z_k) up to time k, denoted by

    p_k(x_k | z_{1:k})    (3)
is called the posterior density (or filtering density) at time k. From an initial density p_0, the posterior density at time k can be computed using the Bayes recursion

    p_{k|k-1}(x_k | z_{1:k-1}) = ∫ f_{k|k-1}(x_k | x) p_{k-1}(x | z_{1:k-1}) dx    (4)

    p_k(x_k | z_{1:k}) = g_k(z_k | x_k) p_{k|k-1}(x_k | z_{1:k-1}) / ∫ g_k(z_k | x) p_{k|k-1}(x | z_{1:k-1}) dx    (5)
All information about the state at time k is encapsulated in the posterior density p_k(· | z_{1:k}), and estimates of the state at time k can be obtained using either the minimum mean squared error (MMSE) criterion or the maximum a posteriori (MAP) criterion.²
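For linear Gaussian models, the Bayes recursion (4)-(5) admits the Kalman filter as its closed-form solution. The following is a minimal sketch (not taken from the paper) of one prediction-update cycle, using standard Kalman notation: F is the state transition matrix, Q the process noise covariance, H the observation matrix, and R the measurement noise covariance.

```python
import numpy as np

def kalman_predict(m, P, F, Q):
    # time update: the predicted density is N(x; F m, F P F^T + Q)
    return F @ m, F @ P @ F.T + Q

def kalman_update(m, P, z, H, R):
    # measurement update, i.e., Bayes' rule (5) in closed form
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return m + K @ (z - H @ m), (np.eye(len(m)) - K @ H) @ P
```

The update always contracts the covariance relative to the prediction, reflecting the information gained from the measurement; the numeric values used to exercise the functions are illustrative only.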
B. Random Finite Set Formulation of MultiTarget Filtering
Now consider a multiple-target scenario. Let M(k) be the number of targets at time k, and suppose that, at time k−1, the target states are x_{k-1,1}, …, x_{k-1,M(k-1)}. At the next time step, some of these targets may die, the surviving targets evolve to their new states, and new targets may appear. This results in M(k) new states x_{k,1}, …, x_{k,M(k)}. Note that the order in which the states are listed has no significance in the RFS multiple-target model formulation. At the sensor, N(k) measurements z_{k,1}, …, z_{k,N(k)} are received at time k. The origins of the measurements are not known, and thus the order in which they appear bears no significance. Only some of these measurements are actually generated by targets. Moreover, they are indistinguishable from the false measurements. The objective of multiple-target tracking is to jointly estimate the number of targets and their states from measurements with uncertain origins.
¹For notational simplicity, random variables and their realizations are not distinguished.
²These criteria are not necessarily applicable to the multiple-target case.

Even in the ideal case where the sensor observes all targets and receives no clutter, single-target filtering methods are not applicable, since there is no information about which target generated which observation.
Since there is no ordering on the respective collections of target states and measurements at time k, they can be naturally represented as finite sets, i.e.,

    X_k = {x_{k,1}, …, x_{k,M(k)}} ∈ ℱ(𝒳)    (6)

    Z_k = {z_{k,1}, …, z_{k,N(k)}} ∈ ℱ(𝒵)    (7)

where ℱ(𝒳) and ℱ(𝒵) are the respective collections of all finite subsets of 𝒳 and 𝒵. The key in the random finite set formulation is to treat the target set X_k and the measurement set Z_k as the multiple-target state and multiple-target observation, respectively. The multiple-target tracking problem can then be posed as a filtering problem with (multiple-target) state space ℱ(𝒳) and observation space ℱ(𝒵).
In a single-target system, uncertainty is characterized by modelling the state and measurement as random vectors. Analogously, uncertainty in a multiple-target system is characterized by modelling the multiple-target state and multiple-target measurement as random finite sets. An RFS X is simply a finite-set-valued random variable, which can be described by a discrete probability distribution and a family of joint probability densities [11], [35], [36]. The discrete distribution characterizes the cardinality of X, while, for a given cardinality, an appropriate density characterizes the joint distribution of the elements of X.
In the following, we describe an RFS model for the time evolution of the multiple-target state, which incorporates target motion, birth, and death. For a given multiple-target state X_{k-1} at time k−1, each ζ ∈ X_{k-1} either continues to exist at time k with probability³ p_{S,k}(ζ) or dies with probability 1 − p_{S,k}(ζ). Conditional on existence at time k, the probability density of a transition from state ζ to x_k is given by (1), i.e., f_{k|k-1}(x_k | ζ). Consequently, for a given state ζ ∈ X_{k-1} at time k−1, its behavior at the next time step is modelled as the RFS

    S_{k|k-1}(ζ)    (8)
that can take on either {x_k} when the target survives or ∅ when the target dies. A new target at time k can arise either by spontaneous birth (i.e., independent of any existing target) or by spawning from a target at time k−1. Given a multiple-target state X_{k-1} at time k−1, the multiple-target state X_k at time k is given by the union of the surviving targets, the spawned targets, and the spontaneous births

    X_k = [ ∪_{ζ ∈ X_{k-1}} S_{k|k-1}(ζ) ] ∪ [ ∪_{ζ ∈ X_{k-1}} B_{k|k-1}(ζ) ] ∪ Γ_k    (9)
where

    Γ_k             RFS of spontaneous births at time k;
    B_{k|k-1}(ζ)    RFS of targets spawned at time k from a target with previous state ζ.

It is assumed that the RFSs constituting the union in (9) are independent of each other. The actual forms of Γ_k and B_{k|k-1} are problem dependent; some examples are given in Section III-D.

³Note that p_{S,k}(x) is a probability parameterized by x.
The RFS measurement model, which accounts for detection uncertainty and clutter, is described as follows. A given target x_k ∈ X_k is either detected with probability⁴ p_{D,k}(x_k) or missed with probability 1 − p_{D,k}(x_k). Conditional on detection, the probability density of obtaining an observation z_k from x_k is given by (2), i.e., g_k(z_k | x_k). Consequently, at time k, each state x_k ∈ X_k generates an RFS

    Θ_k(x_k)    (10)
that can take on either {z_k} when the target is detected or ∅ when the target is not detected. In addition to the target-originated measurements, the sensor also receives a set of false measurements, or clutter. Thus, given a multiple-target state X_k at time k, the multiple-target measurement Z_k received at the sensor is formed by the union of target-generated measurements and clutter, i.e.,

    Z_k = K_k ∪ [ ∪_{x ∈ X_k} Θ_k(x) ]    (11)
It is assumed that the RFSs constituting the union in (11) are independent of each other. The actual form of the clutter RFS K_k is problem dependent; some examples will be illustrated in Section III-D.
In a similar vein to the single-target dynamical model in (1) and (2), the randomness in the multiple-target evolution and observation described by (9) and (11) is, respectively, captured in the multiple-target transition density f_{k|k-1}(X_k | X_{k-1}) and the multiple-target likelihood⁵ g_k(Z_k | X_k) [5], [17]. Explicit expressions for f_{k|k-1}(· | ·) and g_k(· | ·) can be derived from the underlying physical models of targets and sensors using finite set statistics (FISST)⁶ [5], [11], [15], although these are not needed for this paper.
Let p_k(· | Z_{1:k}) denote the multiple-target posterior density. Then, the optimal multiple-target Bayes filter propagates the multiple-target posterior in time via the recursion

    p_{k|k-1}(X_k | Z_{1:k-1}) = ∫ f_{k|k-1}(X_k | X) p_{k-1}(X | Z_{1:k-1}) μ_s(dX)    (12)

    p_k(X_k | Z_{1:k}) = g_k(Z_k | X_k) p_{k|k-1}(X_k | Z_{1:k-1}) / ∫ g_k(Z_k | X) p_{k|k-1}(X | Z_{1:k-1}) μ_s(dX)    (13)
where μ_s is an appropriate reference measure on ℱ(𝒳) [17], [37]. We remark that although various applications of point process theory to multiple-target tracking have been reported in
⁴Note that p_{D,k}(x) is a probability parameterized by x.
⁵The same notation is used for multiple-target and single-target densities. There is no danger of confusion, since in the single-target case the arguments are vectors, whereas in the multiple-target case the arguments are finite sets.
⁶Strictly speaking, FISST yields the set derivative of the belief mass functional, but this is in essence a probability density [17].

the literature (e.g., [38]-[40]), FISST [5], [11], [15] is the first systematic approach to multiple-target filtering that uses RFSs in the Bayesian framework presented above.
The recursion (12) and (13) involves multiple integrals on the space ℱ(𝒳), which are computationally intractable. Sequential Monte Carlo implementations can be found in [16], [17], [19], and [20]. However, these methods are still computationally intensive due to the combinatorial nature of the densities, especially when the number of targets is large [16], [17]. Nonetheless, the optimal multiple-target Bayes filter has been successfully applied to applications where the number of targets is small [19].
C. The Probability Hypothesis Density (PHD) Filter
The PHD filter is an approximation developed to alleviate the computational intractability in the multiple-target Bayes filter. Instead of propagating the multiple-target posterior density in time, the PHD filter propagates the posterior intensity, a first-order statistical moment of the posterior multiple-target state [5]. This strategy is reminiscent of the constant-gain Kalman filter, which propagates the first moment (the mean) of the single-target state.
For an RFS X on 𝒳 with probability distribution P, its first-order moment is a nonnegative function v on 𝒳, called the intensity, such that for each region S ⊆ 𝒳 [35], [36]

    ∫ |X ∩ S| P(dX) = ∫_S v(x) dx.    (14)

In other words, the integral of v over any region S gives the expected number of elements of X that are in S. Hence, the total mass N̂ = ∫ v(x) dx gives the expected number of elements of X. The local maxima of the intensity v are points in 𝒳 with the highest local concentration of the expected number of elements and hence can be used to generate estimates for the elements of X. The simplest approach is to round N̂ and choose the resulting number of highest peaks from the intensity. The intensity is also known in the tracking literature as the probability hypothesis density [18], [41].
An important class of RFSs, the Poisson RFSs, are those completely characterized by their intensities. An RFS X is Poisson if the cardinality distribution of X, Pr(|X| = n), is Poisson with mean N̂ = ∫ v(x) dx and, for any finite cardinality, the elements x of X are independently and identically distributed according to the probability density v(·)/N̂ [35], [36]. For the multiple-target problem described in Section II-B, it is common to model the clutter RFS [K_k in (11)] and the birth RFSs [Γ_k and B_{k|k-1}(ζ) in (9)] as Poisson RFSs.
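This characterization translates directly into a sampling procedure: draw the cardinality from a Poisson distribution with mean equal to the total mass of the intensity, then draw that many i.i.d. points from the normalized intensity. A minimal sketch for a single-Gaussian intensity follows; the construction is our own illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_rfs(weight, mean, cov):
    """One realization of a Poisson RFS with intensity weight * N(x; mean, cov)."""
    n = rng.poisson(weight)   # |X| ~ Poisson(total mass of the intensity)
    return [rng.multivariate_normal(mean, cov) for _ in range(n)]
```

Averaged over many draws, the realized cardinality matches the total mass of the intensity, consistent with (14) applied to the whole space.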
To present the PHD filter, recall the multiple-target evolution and observation models from Section II-B with

    γ_k(·)            intensity of the birth RFS Γ_k at time k;
    β_{k|k-1}(· | ζ)  intensity of the RFS B_{k|k-1}(ζ) spawned at time k by a target with previous state ζ;
    p_{S,k}(ζ)        probability that a target still exists at time k given that its previous state is ζ;
    p_{D,k}(x)        probability of detection given a state x at time k;
    κ_k(·)            intensity of the clutter RFS K_k at time k;

and consider the following assumptions.

A.1: Each target evolves and generates observations independently of one another.
A.2: Clutter is Poisson and independent of target-originated measurements.
A.3: The predicted multiple-target RFS governed by p_{k|k-1} is Poisson.

Remark 1: Assumptions A.1 and A.2 are standard in most tracking applications (see, for example, [1] and [2]) and have already been alluded to in Section II-B. The additional assumption A.3 is a reasonable approximation in applications where interactions between targets are negligible [5]. In fact, it can be shown that A.3 is completely satisfied when there is no spawning and the prior RFS X_{k-1} and the birth RFS Γ_k are Poisson.
Let v_k and v_{k|k-1} denote the respective intensities associated with the multiple-target posterior density and the multiple-target predicted density in the recursion (12)-(13). Under assumptions A.1-A.3, it can be shown (using FISST [5] or classical probabilistic tools [37]) that the posterior intensity can be propagated in time via the PHD recursion

    v_{k|k-1}(x) = ∫ p_{S,k}(ζ) f_{k|k-1}(x | ζ) v_{k-1}(ζ) dζ + ∫ β_{k|k-1}(x | ζ) v_{k-1}(ζ) dζ + γ_k(x)    (15)

    v_k(x) = [1 − p_{D,k}(x)] v_{k|k-1}(x) + Σ_{z ∈ Z_k}  p_{D,k}(x) g_k(z | x) v_{k|k-1}(x) / [ κ_k(z) + ∫ p_{D,k}(ξ) g_k(z | ξ) v_{k|k-1}(ξ) dξ ]    (16)
It is clear from (15) and (16) that the PHD filter completely avoids the combinatorial computations arising from the unknown association of measurements with appropriate targets. Furthermore, since the posterior intensity is a function on the single-target state space 𝒳, the PHD recursion requires much less computational power than the multiple-target recursion (12) and (13), which operates on ℱ(𝒳). However, as mentioned in the Introduction, the PHD recursion does not admit closed-form solutions in general, and numerical integration suffers from the curse of dimensionality.
III. THE PHD RECURSION FOR LINEAR GAUSSIAN MODELS
This section shows that for a certain class of multiple-target models, herein referred to as linear Gaussian multiple-target models, the PHD recursion (15) and (16) admits a closed-form solution. This result is then used to develop an efficient multiple-target tracking algorithm. The linear Gaussian multiple-target models are specified in Section III-A, while the solution to the PHD recursion is presented in Section III-B. Implementation issues are addressed in Section III-C. Numerical results are presented in Section III-D, and some generalizations are discussed in Section III-E.

A. Linear Gaussian MultipleTarget Model
Our closed-form solution to the PHD recursion requires, in addition to assumptions A.1-A.3, a linear Gaussian multiple-target model. Along with the standard linear Gaussian model for individual targets, the linear Gaussian multiple-target model includes certain assumptions on the birth, death, and detection of targets. These are summarized below.
A.4: Each target follows a linear Gaussian dynamical model and the sensor has a linear Gaussian measurement model, i.e.,

    f_{k|k-1}(x | ζ) = N(x; F_{k-1} ζ, Q_{k-1})    (17)

    g_k(z | x) = N(z; H_k x, R_k)    (18)

where N(·; m, P) denotes a Gaussian density with mean m and covariance P, F_{k-1} is the state transition matrix, Q_{k-1} is the process noise covariance, H_k is the observation matrix, and R_k is the observation noise covariance.
A.5: The survival and detection probabilities are state independent, i.e.,

    p_{S,k}(x) = p_{S,k}    (19)

    p_{D,k}(x) = p_{D,k}    (20)
A.6: The intensities of the birth and spawn RFSs are Gaussian mixtures of the form

    γ_k(x) = Σ_{i=1}^{J_{γ,k}} w_{γ,k}^{(i)} N(x; m_{γ,k}^{(i)}, P_{γ,k}^{(i)})    (21)

    β_{k|k-1}(x | ζ) = Σ_{j=1}^{J_{β,k}} w_{β,k}^{(j)} N(x; F_{β,k-1}^{(j)} ζ + d_{β,k-1}^{(j)}, Q_{β,k-1}^{(j)})    (22)

where J_{γ,k}, w_{γ,k}^{(i)}, m_{γ,k}^{(i)}, P_{γ,k}^{(i)}, i = 1, …, J_{γ,k}, are given model parameters that determine the shape of the birth intensity; similarly, J_{β,k}, w_{β,k}^{(j)}, F_{β,k-1}^{(j)}, d_{β,k-1}^{(j)}, and Q_{β,k-1}^{(j)}, j = 1, …, J_{β,k}, determine the shape of the spawning intensity of a target with previous state ζ.
Some remarks regarding the above assumptions are in order.
Remark 2: Assumptions A.4 and A.5 are commonly used in many tracking algorithms [1], [2]. For clarity in the presentation, we focus only on state-independent p_{S,k} and p_{D,k}, although closed-form PHD recursions can be derived for more general cases (see Section III-E).
Remark 3: In assumption A.6, m_{γ,k}^{(i)}, i = 1, …, J_{γ,k}, are the peaks of the spontaneous birth intensity in (21). These points have the highest local concentrations of the expected number of spontaneous births and represent, for example, air bases or airports where targets are most likely to appear. The covariance matrix P_{γ,k}^{(i)} determines the spread of the birth intensity in the vicinity of the peak m_{γ,k}^{(i)}. The weight w_{γ,k}^{(i)} gives the expected number of new targets originating from m_{γ,k}^{(i)}. A similar interpretation applies to (22), the spawning intensity of a target with previous state ζ, except that the jth peak F_{β,k-1}^{(j)} ζ + d_{β,k-1}^{(j)} is an affine function of ζ. Usually, a spawned target is modelled to be in the proximity of its parent. For example, ζ could correspond to the state of an aircraft carrier at time k−1, while F_{β,k-1}^{(j)} ζ + d_{β,k-1}^{(j)} is the expected state of fighter planes spawned at time k. Note that other forms of birth and spawning intensities can be approximated, to any desired accuracy, using Gaussian mixtures [42].
B. The Gaussian Mixture PHD Recursion
For the linear Gaussian multiple-target model, the following two propositions present a closed-form solution to the PHD recursion (15), (16). More concisely, these propositions show how the Gaussian components of the posterior intensity are analytically propagated to the next time step.
Proposition 1: Suppose that Assumptions A.4-A.6 hold and that the posterior intensity at time k−1 is a Gaussian mixture of the form

    v_{k-1}(x) = Σ_{i=1}^{J_{k-1}} w_{k-1}^{(i)} N(x; m_{k-1}^{(i)}, P_{k-1}^{(i)}).    (23)

Then, the predicted intensity for time k is also a Gaussian mixture and is given by

    v_{k|k-1}(x) = v_{S,k|k-1}(x) + v_{β,k|k-1}(x) + γ_k(x)    (24)

where γ_k(x) is given in (21),

    v_{S,k|k-1}(x) = p_{S,k} Σ_{j=1}^{J_{k-1}} w_{k-1}^{(j)} N(x; m_{S,k|k-1}^{(j)}, P_{S,k|k-1}^{(j)})    (25)

    m_{S,k|k-1}^{(j)} = F_{k-1} m_{k-1}^{(j)}    (26)

    P_{S,k|k-1}^{(j)} = Q_{k-1} + F_{k-1} P_{k-1}^{(j)} F_{k-1}^T    (27)

    v_{β,k|k-1}(x) = Σ_{j=1}^{J_{k-1}} Σ_{ℓ=1}^{J_{β,k}} w_{k-1}^{(j)} w_{β,k}^{(ℓ)} N(x; m_{β,k|k-1}^{(j,ℓ)}, P_{β,k|k-1}^{(j,ℓ)})    (28)

    m_{β,k|k-1}^{(j,ℓ)} = F_{β,k-1}^{(ℓ)} m_{k-1}^{(j)} + d_{β,k-1}^{(ℓ)}    (29)

    P_{β,k|k-1}^{(j,ℓ)} = Q_{β,k-1}^{(ℓ)} + F_{β,k-1}^{(ℓ)} P_{k-1}^{(j)} (F_{β,k-1}^{(ℓ)})^T    (30)
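In code, Proposition 1 amounts to pushing each posterior component through a Kalman prediction, scaling its weight by the survival probability, and appending the birth components. The following is a hedged sketch (spawning is omitted for brevity, and the list-of-triples mixture representation is our own convention, not the paper's pseudocode):

```python
import numpy as np

def gmphd_predict(mixture, F, Q, p_survival, birth_mixture):
    """GM-PHD prediction: mixture and birth_mixture are lists of (w, m, P) triples."""
    predicted = [(w_b, m_b.copy(), P_b.copy()) for (w_b, m_b, P_b) in birth_mixture]
    for (w, m, P) in mixture:
        predicted.append((p_survival * w,      # weight scaled by the survival probability
                          F @ m,               # Kalman-predicted mean
                          Q + F @ P @ F.T))    # Kalman-predicted covariance
    return predicted
```

Summing the predicted weights recovers the expected number of targets: the surviving mass p_S · Σw plus the birth mass, in line with the counting interpretation of the intensity.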
Proposition 2: Suppose that Assumptions A.4-A.6 hold and that the predicted intensity for time k is a Gaussian mixture of the form

    v_{k|k-1}(x) = Σ_{i=1}^{J_{k|k-1}} w_{k|k-1}^{(i)} N(x; m_{k|k-1}^{(i)}, P_{k|k-1}^{(i)}).    (31)

Then, the posterior intensity at time k is also a Gaussian mixture and is given by

    v_k(x) = (1 − p_{D,k}) v_{k|k-1}(x) + Σ_{z ∈ Z_k} v_{D,k}(x; z)    (32)

where

    v_{D,k}(x; z) = Σ_{j=1}^{J_{k|k-1}} w_k^{(j)}(z) N(x; m_{k|k}^{(j)}(z), P_{k|k}^{(j)})    (33)

    w_k^{(j)}(z) = p_{D,k} w_{k|k-1}^{(j)} q_k^{(j)}(z) / [ κ_k(z) + p_{D,k} Σ_{ℓ=1}^{J_{k|k-1}} w_{k|k-1}^{(ℓ)} q_k^{(ℓ)}(z) ]    (34)

    q_k^{(j)}(z) = N(z; H_k m_{k|k-1}^{(j)}, R_k + H_k P_{k|k-1}^{(j)} H_k^T)    (35)

    m_{k|k}^{(j)}(z) = m_{k|k-1}^{(j)} + K_k^{(j)} (z − H_k m_{k|k-1}^{(j)})    (36)

    P_{k|k}^{(j)} = [I − K_k^{(j)} H_k] P_{k|k-1}^{(j)}    (37)

with Kalman gain K_k^{(j)} = P_{k|k-1}^{(j)} H_k^T (H_k P_{k|k-1}^{(j)} H_k^T + R_k)^{−1}.
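Proposition 2 likewise maps directly to code: each measurement spawns one Kalman-updated component per predicted component, alongside a misdetection copy of the predicted mixture. A hedged sketch follows, representing a mixture as a list of (weight, mean, covariance) triples and assuming a constant clutter intensity κ_k(z); these conventions are ours, not the paper's pseudocode.

```python
import numpy as np

def gauss_density(z, mean, cov):
    """Multivariate Gaussian density N(z; mean, cov) evaluated at z."""
    d = z - mean
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(
        np.linalg.det(2 * np.pi * cov))

def gmphd_update(predicted, measurements, H, R, p_detect, clutter):
    # misdetection terms: predicted components with weights scaled by (1 - p_D)
    updated = [((1 - p_detect) * w, m, P) for (w, m, P) in predicted]
    for z in measurements:
        comps, numerators = [], []
        for (w, m, P) in predicted:
            S = H @ P @ H.T + R                          # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
            q = gauss_density(z, H @ m, S)               # predictive measurement likelihood
            comps.append((p_detect * w * q,              # unnormalized detection weight
                          m + K @ (z - H @ m),           # Kalman-updated mean
                          (np.eye(len(m)) - K @ H) @ P)) # Kalman-updated covariance
            numerators.append(p_detect * w * q)
        denom = clutter + sum(numerators)                # per-measurement normalization
        updated += [(wn / denom, mn, Pn) for (wn, mn, Pn) in comps]
    return updated
```

With no clutter and unit detection probability, each measurement contributes a total weight of one, so the posterior mass equals the number of measurements, as expected from the counting interpretation of the intensity.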
Propositions 1 and 2 can be established by applying the following standard results for Gaussian functions.
Lemma 1: Given F, d, Q, m, and P of appropriate dimensions, and that Q and P are positive definite,

    ∫ N(x; Fζ + d, Q) N(ζ; m, P) dζ = N(x; Fm + d, Q + F P F^T).    (38)

Lemma 2: Given H, R, m, and P of appropriate dimensions, and that R and P are positive definite,

    N(z; Hx, R) N(x; m, P) = q(z) N(x; m̃, P̃)    (39)

where

    q(z) = N(z; Hm, R + H P H^T)    (40)

    m̃ = m + K(z − Hm)    (41)

    P̃ = (I − KH) P    (42)

    K = P H^T (H P H^T + R)^{−1}.    (43)

Note that Lemma 1 can be derived from Lemma 2, which in turn can be found in [43] or [44, Section 3.8], though in a slightly different form.
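Lemma 2 is easy to verify numerically: evaluate both sides of the identity at an arbitrary point and confirm they agree. The sketch below is our own check, not from the paper; q, m̃, P̃, and K follow the definitions in Lemma 2.

```python
import numpy as np

def gauss(x, mean, cov):
    """Multivariate Gaussian density, accepting 1-D or 2-D inputs."""
    x, mean = np.atleast_1d(x), np.atleast_1d(mean)
    cov = np.atleast_2d(cov)
    d = x - mean
    return float(np.exp(-0.5 * d @ np.linalg.solve(cov, d))
                 / np.sqrt(np.linalg.det(2 * np.pi * cov)))

def lemma2(z, H, R, m, P):
    """Return q(z), m~, P~ from Lemma 2."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    q = gauss(z, H @ m, S)                 # normalizing factor q(z)
    m_t = m + K @ (z - H @ m)              # conditional mean m~
    P_t = (np.eye(len(m)) - K @ H) @ P     # conditional covariance P~
    return q, m_t, P_t
```

Checking the product identity at a single point x shows that N(z; Hx, R) N(x; m, P) and q(z) N(x; m̃, P̃) agree to machine precision.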
Proposition 1 is established by substituting (17), (19), and (21)-(23) into the PHD prediction (15) and replacing integrals of the form (38) by appropriate Gaussians, as given by Lemma 1. Similarly, Proposition 2 is established by substituting (18), (20), and (31) into the PHD update (16) and then replacing integrals of the form (38) and products of Gaussians of the form (39) by appropriate Gaussians, as given by Lemmas 1 and 2, respectively.
It follows by induction from Propositions 1 and 2 that if the initial prior intensity v_0 is a Gaussian mixture (including the case where v_0 = 0), then all subsequent predicted intensities v_{k|k-1} and posterior intensities v_k are also Gaussian mixtures. Proposition 1 provides closed-form expressions for computing the means, covariances, and weights of v_{k|k-1} from those of v_{k-1}. Proposition 2 then provides closed-form expressions for computing the means, covariances, and weights of v_k from those of v_{k|k-1} when a new set of measurements arrives. Propositions 1 and 2 are, respectively, the prediction and update steps of the PHD recursion for a linear Gaussian multiple-target model, herein referred to as the Gaussian mixture PHD recursion. For completeness, we summarize the key steps of the Gaussian mixture PHD filter in Table I.
Remark 4: The predicted intensity v_{k|k-1} in Proposition 1 consists of three terms, v_{S,k|k-1}, v_{β,k|k-1}, and γ_k, due, respectively, to the existing targets, the spawned targets, and the spontaneous births. Similarly, the updated posterior intensity v_k in Proposition 2 consists of a misdetection term (1 − p_{D,k}) v_{k|k-1} and |Z_k| detection terms v_{D,k}(·; z), one for each measurement z ∈ Z_k. As it turns out, the recursions for the means and covariances of v_{S,k|k-1} and v_{β,k|k-1} are Kalman predictions, and the recursions for the means and covariances of v_{D,k}(·; z) are Kalman updates.

TABLE I: PSEUDOCODE FOR THE GAUSSIAN MIXTURE PHD FILTER

Given the Gaussian mixture intensities v_{k|k-1} and v_k, the corresponding expected numbers of targets N̂_{k|k-1} and N̂_k can be obtained by summing up the appropriate weights. Propositions 1 and 2 lead to the following closed-form recursions for N̂_{k|k-1} and N̂_k.
Corollary 1: Under the premises of Proposition 1, the mean of the predicted number of targets is

    N̂_{k|k-1} = N̂_{k-1} ( p_{S,k} + Σ_{j=1}^{J_{β,k}} w_{β,k}^{(j)} ) + Σ_{j=1}^{J_{γ,k}} w_{γ,k}^{(j)}.    (44)

Corollary 2: Under the premises of Proposition 2, the mean of the updated number of targets is

    N̂_k = N̂_{k|k-1} (1 − p_{D,k}) + Σ_{z ∈ Z_k} Σ_{j=1}^{J_{k|k-1}} w_k^{(j)}(z).    (45)
In Corollary 1, the mean of the predicted number of targets is obtained by adding the mean number of surviving targets, the mean number of spawnings, and the mean number of births. A similar interpretation can be drawn from Corollary 2. When there is no clutter, the mean of the updated number of targets is the number of measurements plus the mean number of targets that are not detected.
C. Implementation Issues
The Gaussian mixture PHD filter is similar to the Gaussian sum filter of [32] and [33] in the sense that they both propagate Gaussian mixtures in time. Like the Gaussian sum filter, the Gaussian mixture PHD filter also suffers from computation problems associated with the increasing number of Gaussian components as time progresses. Indeed, at time k, the Gaussian mixture PHD filter requires

    ( J_{k-1} (1 + J_{β,k}) + J_{γ,k} ) (1 + |Z_k|)

Gaussian components to represent v_k, where J_{k-1} is the number of components of v_{k-1}. This implies that the number of components in the posterior intensities increases without bound.
A simple pruning procedure can be used to reduce the number of Gaussian components propagated to the next time step. A good approximation to the Gaussian mixture posterior intensity v_k can be obtained by truncating components that have weak weights w_k^{(i)}. This can be done by discarding those with weights below some preset threshold or by keeping only a certain number of components with the strongest weights. Moreover, some of the Gaussian components are so close together that they can be accurately approximated by a single Gaussian; hence, in practice, these components can be merged into one. These ideas lead to the simple heuristic pruning algorithm shown in Table II.
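The truncation-and-merging heuristic just described can be sketched as follows. This is a simplified rendering of the idea, not a transcription of Table II; T, U, and J_max are our own labels for the truncation threshold, the (squared Mahalanobis) merging threshold, and the component cap.

```python
import numpy as np

def prune(mixture, T=1e-5, U=4.0, J_max=100):
    """Truncate weak components, merge close ones, and cap the component count."""
    survivors = [c for c in mixture if c[0] > T]        # discard weights below T
    merged = []
    while survivors:
        # take the strongest remaining component as the merge center
        j = max(range(len(survivors)), key=lambda i: survivors[i][0])
        w_j, m_j, P_j = survivors.pop(j)
        close, far = [(w_j, m_j, P_j)], []
        for (w, m, P) in survivors:
            d = m - m_j
            if d @ np.linalg.solve(P, d) <= U:          # squared Mahalanobis distance test
                close.append((w, m, P))
            else:
                far.append((w, m, P))
        W = sum(w for w, _, _ in close)
        m_bar = sum(w * m for w, m, _ in close) / W     # moment-matched mean
        P_bar = sum(w * (P + np.outer(m_bar - m, m_bar - m))
                    for w, m, P in close) / W           # moment-matched covariance
        merged.append((W, m_bar, P_bar))
        survivors = far
    merged.sort(key=lambda c: -c[0])
    return merged[:J_max]                               # keep at most J_max components
```

Merging is moment-matched, so the total weight (and hence the expected number of targets) of the merged components is preserved.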
Having computed the posterior intensity v_k, the next task is to extract multiple-target state estimates. In general, such a task may not be simple. For example, in the particle-PHD filter [17], the estimated number of targets N̂_k is given by the total mass of the particles representing v_k. The estimated states are then obtained by partitioning these particles into N̂_k clusters, using standard clustering algorithms. This works well when the posterior intensity v_k naturally has N̂_k clusters. Conversely, when N̂_k differs from the number of clusters, the state estimates become unreliable.

TABLE II: PRUNING FOR THE GAUSSIAN MIXTURE PHD FILTER

TABLE III: MULTIPLE-TARGET STATE EXTRACTION
In the Gaussian mixture representation of the posterior intensity v_k, extraction of multiple-target state estimates is straightforward, since the means of the constituent Gaussian components are indeed the local maxima of v_k, provided that they are reasonably well separated. Note that after pruning (see Table II), closely spaced Gaussian components would have been merged. Since the height of each peak depends on both the weight and the covariance, selecting the N̂_k highest peaks of v_k may result in state estimates that correspond to Gaussians with weak weights. This is not desirable, because the expected number of targets due to these peaks is small, even though the magnitudes of the peaks are large. A better alternative is to select the means of the Gaussians that have weights greater than some threshold, e.g., 0.5. This state estimation procedure for the Gaussian mixture PHD filter is summarized in Table III.
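The thresholding rule just described is essentially a one-liner. The sketch below mirrors the idea (cf. Table III), with 0.5 as the default threshold; the mixture representation as (weight, mean, covariance) triples is our own convention.

```python
import numpy as np

def extract_states(mixture, threshold=0.5):
    """Multiple-target state estimates: means of components whose weight exceeds the threshold."""
    return [m for (w, m, _) in mixture if w > threshold]
```

Components with weak weights contribute little expected target mass and are therefore ignored, even if their peaks are tall due to small covariances.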

D. Simulation Results
Two simulation examples are used to test the proposed Gaussian mixture PHD filter. An additional example can be found in [34].
Example 1: For illustration purposes, consider a two-dimensional scenario with an unknown and time-varying number of targets observed in clutter over the surveillance region [−1000, 1000] × [−1000, 1000] (in m). The state x_k = [p_{x,k}, ṗ_{x,k}, p_{y,k}, ṗ_{y,k}]^T of each target consists of position (p_{x,k}, p_{y,k}) and velocity (ṗ_{x,k}, ṗ_{y,k}), while the measurement is a noisy version of the position.
Each target has survival probability p_{S,k} = 0.99 and follows the linear Gaussian dynamics (17) with

    F_{k-1} = [ I_2  ΔI_2 ; 0_2  I_2 ],    Q_{k-1} = σ_ν² [ (Δ⁴/4)I_2  (Δ³/2)I_2 ; (Δ³/2)I_2  Δ²I_2 ]

where I_2 and 0_2 denote, respectively, the 2 × 2 identity and zero matrices, Δ = 1 s is the sampling period, and σ_ν = 5 m/s² is the standard deviation of the process noise. Targets can appear from two possible locations as well as be spawned from other targets. Specifically, a Poisson RFS Γ_k with intensity

    γ_k(x) = 0.1 N(x; m_γ^{(1)}, P_γ) + 0.1 N(x; m_γ^{(2)}, P_γ)

is used to model spontaneous births in the vicinity of the two locations m_γ^{(1)} and m_γ^{(2)}. Additionally, the RFS B_{k|k-1}(ζ) of targets spawned from a target with previous state ζ is Poisson with intensity

    β_{k|k-1}(x | ζ) = 0.05 N(x; ζ, Q_β)

for a covariance Q_β that places spawned targets in the proximity of the parent.
Each target is detected with probability , and themeasurement follows the observation model (18) with
, , where m is the standard deviation of the measurement noise. The detected measurements areimmersed in clutter that can be modelled as a Poisson RFSwith intensity
(46)
where is the uniform density over the surveillance region,m is the volume of the surveillance region, and
m is the average number of clutter returnsper unit volume (i.e., 50 clutter returns over the surveillanceregion).
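The model quantities of this example can be assembled directly. The sketch below is our own (NumPy assumed), with the sampling period and noise levels as we read them from the description, and the clutter density implied by 50 returns over the 4 × 10⁶ m² region:

```python
import numpy as np

# Sketch of the Example 1 model (our reading of the setup; the specific
# noise levels are assumptions taken from the example's description).
Delta = 1.0        # sampling period (s)
sigma_v = 5.0      # process noise standard deviation (m/s^2), assumed
sigma_eps = 10.0   # measurement noise standard deviation (m), assumed

I2, Z2 = np.eye(2), np.zeros((2, 2))
F = np.block([[I2, Delta * I2], [Z2, I2]])                        # dynamics (17)
Q = sigma_v**2 * np.block([[Delta**4 / 4 * I2, Delta**3 / 2 * I2],
                           [Delta**3 / 2 * I2, Delta**2 * I2]])
H = np.block([I2, Z2])                                            # observation (18)
R = sigma_eps**2 * I2

# Uniform Poisson clutter (46): kappa(z) = lambda_c * V * u(z).
V = 4.0e6                 # volume of the 2000 m x 2000 m region (m^2)
lambda_c = 50.0 / V       # 50 clutter returns on average => 12.5e-6 m^-2
print(F.shape, lambda_c)  # (4, 4) 1.25e-05
```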
Fig. 1 shows the true target trajectories, while Fig. 2 plots these trajectories with cluttered measurements against time. Targets 1 and 2 are born at the same time but at two different locations. They travel along straight lines (their tracks cross partway through the scenario), and target 1 later spawns target 3. The Gaussian mixture PHD filter, with parameters T = 10⁻⁵, U = 4, and J_max = 100 (see Table II for the meanings of these parameters), is applied. From the position estimates shown in Fig. 3, it can be seen that the Gaussian mixture PHD filter provides accurate tracking performance. The filter not only successfully detects and tracks targets 1 and 2 but also manages to detect and track the spawned target 3. The filter does generate
Fig. 1. Target trajectories, with markers indicating the locations at which targets are born and die.
Fig. 2. Measurements and true target positions.
anomalous estimates occasionally, but these false estimates die out very quickly.
Example 2: In this example, we evaluate the performance of the Gaussian mixture PHD filter by benchmarking it against the JPDA filter [1], [8] via Monte Carlo simulations. The JPDA filter is a classical filter for tracking a known and fixed number of targets in clutter. In a scenario where the number of targets is constant, the JPDA filter (given the correct number of targets) is expected to outperform the PHD filter, since the latter has neither knowledge of the number of targets, nor even knowledge that the number of targets is constant. For these reasons, the JPDA filter serves as a good benchmark.
The experiment settings are the same as those of Example 1, but without spawning, as the JPDA filter requires a known and fixed number of targets. The true tracks in this example are those of targets 1 and 2 in Fig. 1. Target trajectories are fixed for all simulation trials, while observation noise and clutter are independently generated at each trial.

Fig. 3. Position estimates of the Gaussian mixture PHD filter.
We study track-loss performance by using the following circular position error probability (CPEP) (see [45] for an example):

CPEP_k(r) = (1/|X_k|) Σ_{x ∈ X_k} Prob{ ||H x̂ − H x||_2 > r for all x̂ ∈ X̂_k }

for some position error radius r, where X_k is the true multiple-target state, X̂_k is the set of state estimates, H = [ I_2  0_2 ] extracts the position, and ||·||_2 is the 2-norm. In addition, we measure the expected absolute error on the number of targets, E[ | |X̂_k| − |X_k| | ], for the Gaussian mixture PHD filter.
Note that standard performance measures, such as the mean square distance error, are not applicable to multiple-target filters that jointly estimate the number of targets and their states (such as the PHD filter).
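As we read the CPEP definition, it can be computed per trial and time step as the fraction of true targets that every estimate misses by more than the radius. A minimal sketch (ours, with hypothetical data):

```python
import numpy as np

def cpep(truth_positions, est_positions, r):
    """Circular position error probability for one trial at one time step:
    the fraction of true target positions for which *every* estimate lies
    farther than r in the 2-norm."""
    missed = 0
    for x in truth_positions:
        if all(np.linalg.norm(xhat - x) > r for xhat in est_positions):
            missed += 1
    return missed / len(truth_positions)

truth = [np.array([0.0, 0.0]), np.array([100.0, 100.0])]
ests = [np.array([5.0, 5.0])]                 # only target 1 is covered
print(cpep(truth, ests, r=20.0))              # 0.5

# Expected absolute error on the number of targets, averaged over trials.
n_true, n_est = 2, np.array([2, 1, 2, 3])     # hypothetical per-trial counts
print(np.mean(np.abs(n_est - n_true)))        # 0.5
```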
Fig. 4 shows the tracking performance of the two filters for various clutter rates λ_c [see (46)] with the CPEP radius fixed at r = 20 m. Observe that the CPEPs of the two filters are quite close for a wide range of clutter rates. This is rather surprising, considering that the JPDA filter has exact knowledge of the number of targets. Fig. 4(a) suggests that the occasional overestimation/underestimation of the number of targets is not significant in the Gaussian mixture PHD filter.
Fig. 5 shows the tracking performance for various values of detection probability p_D with the clutter rate fixed at λ_c = 12.5 × 10⁻⁶ m⁻². Observe that the performance gap between the two filters increases as p_D decreases. This is because the
Fig. 4. Tracking performance versus clutter rate. The detection probability is fixed at p_D = 0.98. The CPEP radius is r = 20 m.
Fig. 5. Tracking performance versus detection probability. The clutter rate is fixed at λ_c = 12.5 × 10⁻⁶ m⁻². The CPEP radius is r = 20 m.
PHD filter has to resolve higher detection uncertainty on top of uncertainty in the number of targets. When detection uncertainty increases (p_D decreases), uncertainty about the number of targets also increases. In contrast, the JPDA filter's exact knowledge of the number of targets is not affected by the increase in detection uncertainty.
E. Generalizations to Exponential Mixture Probabilities of Survival and Detection
As remarked in Section III-A, closed-form solutions to the PHD recursion can still be obtained for a certain class of state-dependent probabilities of survival and detection.

Indeed, Propositions 1 and 2 can be easily generalized to handle probabilities of survival and detection of the exponential mixture forms

(47)

(48)

where the constituent weights, means, and covariances in (47) and (48) are given model parameters, chosen such that the survival and detection probabilities lie between zero and one for all states.
The closed-form predicted intensity can be obtained by applying Lemma 2 to convert the product of the survival probability and the prior intensity into a Gaussian mixture, which is then integrated with the transition density using Lemma 1. The closed-form updated intensity can be obtained by applying Lemma 2 once to the product of the detection probability and the predicted intensity, and twice to the corresponding product inside the likelihood term, to convert these products into Gaussian mixtures. For completeness, the Gaussian mixture expressions for the predicted and updated intensities are given in the following propositions, although their implementations will not be pursued.
Proposition 3: Under the premises of Proposition 1, with the probability of survival given by (47) instead of (19), the predicted intensity is given by (24) but with
(49)
Proposition 4: Under the premises of Proposition 2, with the probability of detection given by (48) instead of (20),
(50)
where the constituent weights, means, and covariances follow from applying Lemma 2, analogously to Proposition 2.
Conceptually, the Gaussian mixture PHD filter implementation can be easily extended to accommodate an exponential mixture probability of survival. However, for an exponential mixture probability of detection, the updated intensity contains Gaussians with negative as well as positive weights, even though the updated intensity itself (and hence the sum of the weights) is nonnegative. Although these Gaussians can be propagated using Propositions 3 and 4, care must be taken in the implementation to ensure nonnegativity of the intensity function after merging and pruning.
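To make the nonnegativity caveat concrete, an implementation might evaluate the signed mixture at candidate points (e.g., the component means) after merging and pruning. The sketch below is ours, not from the paper, and the example mixture is hypothetical:

```python
import numpy as np

def mixture_intensity(x, weights, means, covs):
    """Evaluate a (possibly signed) Gaussian mixture intensity at x.
    With an exponential-mixture p_D, some weights can be negative, so the
    value should be checked for nonnegativity wherever states are read off."""
    val = 0.0
    for w, m, P in zip(weights, means, covs):
        d = x - m
        val += w * np.exp(-0.5 * d @ np.linalg.solve(P, d)) / np.sqrt(
            np.linalg.det(2 * np.pi * P))
    return val

# Hypothetical signed mixture: one positive and one negative component.
ws = [1.0, -0.3]
ms = [np.zeros(1), np.zeros(1)]
Ps = [np.eye(1), 4.0 * np.eye(1)]
print(mixture_intensity(np.zeros(1), ws, ms, Ps) > 0.0)  # True
```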
IV. EXTENSION TO NONLINEAR GAUSSIAN MODELS
This section considers extensions of the Gaussian mixture PHD filter to nonlinear target models. Specifically, the modelling assumptions A.5 and A.6 are still required, but the state and observation processes can be relaxed to the nonlinear model
x_k = φ_k(x_{k−1}, ν_{k−1})   (51)

z_k = h_k(x_k, ε_k)   (52)

where φ_k and h_k are known nonlinear functions, and ν_{k−1} and ε_k are zero-mean Gaussian process noise and measurement noise with covariances Q_{k−1} and R_k, respectively. Due to the nonlinearity of φ_k and h_k, the posterior intensity can no longer be represented as a Gaussian mixture. Nonetheless, the proposed Gaussian mixture PHD filter can be adapted to accommodate nonlinear Gaussian models.
In single-target filtering, analytic approximations of the nonlinear Bayes filter include the extended Kalman (EK) filter [46], [47] and the unscented Kalman (UK) filter [48], [49]. The EK filter approximates the posterior density by a Gaussian, which is propagated in time by applying the Kalman recursions to local linearizations of the (nonlinear) mappings. The UK filter also approximates the posterior density by a Gaussian, but instead of using the linearized model, it computes the Gaussian approximation of the posterior density at the next time step using the unscented transform. Details for the EK and UK filters are given in [46]-[49].
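For reference, the basic (non-scaled) unscented transform that underlies the UK filter can be sketched as follows. This is a generic textbook form, not code from the paper:

```python
import numpy as np

def unscented_transform(m, P, f, kappa=0.0):
    """Propagate a Gaussian N(m, P) through a nonlinearity f using sigma
    points: 2n+1 deterministically chosen points are mapped through f, and
    a weighted sample mean/covariance forms the Gaussian approximation."""
    n = len(m)
    S = np.linalg.cholesky((n + kappa) * P)   # columns are sigma-point offsets
    sigmas = [m] + [m + S[:, i] for i in range(n)] + [m - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigmas])
    my = ys.T @ w
    Py = sum(wi * np.outer(y - my, y - my) for wi, y in zip(w, ys))
    return my, Py

# For a linear map the transform is exact: mean A m, covariance A P A^T.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
m, P = np.zeros(2), np.eye(2)
my, Py = unscented_transform(m, P, lambda x: A @ x, kappa=1.0)
print(np.allclose(my, A @ m), np.allclose(Py, A @ P @ A.T))  # True True
```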
Following the development in Section III-B, it can be shown that the posterior intensity of the multiple-target state propagated by the PHD recursions (15) and (16) is a weighted sum of various functions, many of which are non-Gaussian. In the same vein as the EK and UK filters, we can approximate each of these non-Gaussian constituent functions by a Gaussian. Adopting the philosophy of the EK filter, an approximation of the posterior intensity at the next time step can then be obtained by applying the Gaussian mixture PHD recursions to a locally linearized target model. Alternatively, in a similar manner to the UK filter, the unscented transform can be used to compute the components of the Gaussian mixture approximation of the posterior intensity at the next time step. In both cases, the weights of these components are also approximations.

TABLE IV PSEUDOCODE FOR THE EK-PHD FILTER
Based on the above observations, we propose two nonlinear Gaussian mixture PHD filter implementations, namely, the extended Kalman PHD (EK-PHD) filter and the unscented Kalman PHD (UK-PHD) filter. Given that details for the EK and UK filters have been well documented in the literature (see, e.g., [44] and [46]-[49]), the developments for the EK-PHD and UK-PHD filters are conceptually straightforward, though notationally cumbersome, and will be omitted. However, for completeness, the key steps in these two filters are summarized as pseudocode in Tables IV and V, respectively.
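Since the pseudocode tables are not reproduced here, the flavor of one EK-PHD step can be sketched: the Kalman update of a single predicted Gaussian component against one measurement, with the observation map linearized at the predicted mean. Everything below (the function names, the p_D default, the unnormalized weight convention) is our illustrative reading, not the paper's Table IV:

```python
import numpy as np

def ek_update_component(m, P, w, z, h, H_jac, R, p_d=0.98):
    """EK update of one predicted Gaussian component (w, m, P) against one
    measurement z.  The Kalman gain uses the Jacobian of h at the predicted
    mean; the returned weight is the *unnormalized* p_D * w * N(z; h(m), S).
    A full filter would normalize over clutter and all components."""
    H = H_jac(m)
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    nu = z - h(m)                             # innovation
    m_upd = m + K @ nu
    P_upd = (np.eye(len(m)) - K @ H) @ P
    q = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / np.sqrt(
        np.linalg.det(2 * np.pi * S))         # Gaussian likelihood N(z; h(m), S)
    return p_d * w * q, m_upd, P_upd

# With a linear h the step reduces to the ordinary Kalman update.
h = lambda x: x[:2]
H_jac = lambda x: np.hstack([np.eye(2), np.zeros((2, 2))])
w, m, P = 0.9, np.zeros(4), np.eye(4)
z = np.array([1.0, -1.0])
w_u, m_u, P_u = ek_update_component(m, P, w, z, h, H_jac, np.eye(2))
```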
Remark 5: Similar to its single-target counterpart, the EK-PHD filter is only applicable to differentiable nonlinear models. Moreover, calculating the Jacobian matrices may be tedious and error-prone. The UK-PHD filter, on the other hand, does not suffer from these restrictions and can even be applied to models with discontinuities.

TABLE V PSEUDOCODE FOR THE UK-PHD FILTER

Fig. 6. Target trajectories, with markers indicating the locations at which targets are born and die.
Remark 6: Unlike the particle-PHD filter, where the particle approximation converges (in a certain sense) to the posterior intensity as the number of particles tends to infinity [17], [24], this type of guarantee has not been established for the EK-PHD or UK-PHD filter. Nonetheless, for mildly nonlinear problems, the EK-PHD and UK-PHD filters provide good approximations and are computationally cheaper than the particle-PHD filter, which requires a large number of particles and the additional cost of clustering to extract multiple-target state estimates.
A. Simulation Results for a Nonlinear Gaussian Example
In this example, each target has a survival probability p_{S,k} = 0.99 and follows a nonlinear nearly constant turn model [50], in which the target state takes the form x_k = [p_{x,k}, v_{x,k}, p_{y,k}, v_{y,k}, ω_k]^T, where (p_{x,k}, p_{y,k}) is the position, (v_{x,k}, v_{y,k}) is the velocity, and ω_k is the turn rate. The state dynamics follow the standard coordinated turn equations with sampling period Δ = 1 s, zero-mean Gaussian noise of standard deviation σ_w = 15 m/s² on the velocity components, and zero-mean Gaussian noise of standard deviation σ_u = (π/180) rad/s on the turn rate. We assume no spawning, and that the spontaneous birth RFS is Poisson with an intensity given by a two-component Gaussian mixture whose means place new targets in the vicinity of the two birth locations visible in Fig. 6.
Each target has a probability of detection p_{D,k} = 0.98. An observation consists of bearing and range measurements
Fig. 7. Position estimates of the EK-PHD filter.
Fig. 8. Position estimates of the UK-PHD filter.
where the measurement noise is zero-mean Gaussian with bearing and range standard deviations σ_θ (rad) and σ_r (m), respectively. The clutter RFS follows the uniform Poisson model in (46) over the surveillance region [−π/2, π/2] rad × [0, 2000] m, with λ_c = 3.2 × 10⁻³ rad⁻¹ m⁻¹ (i.e., an average of 20 clutter returns on the surveillance region).
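The two nonlinearities of this example, the constant turn dynamics and the bearing/range observation, can be sketched as follows. The state ordering and the bearing convention (angle from the y-axis, matching a bearing interval of [−π/2, π/2]) are our assumptions:

```python
import numpy as np

def ct_transition(x, delta=1.0):
    """Noiseless part of the nearly constant turn model [50]: state
    x = [px, vx, py, vy, omega] with turn rate omega (rad/s); the filter
    adds Gaussian process noise on top of this map."""
    px, vx, py, vy, w = x
    if abs(w) < 1e-9:                      # straight-line limit as omega -> 0
        return np.array([px + delta * vx, vx, py + delta * vy, vy, w])
    s, c = np.sin(w * delta), np.cos(w * delta)
    return np.array([
        px + (s / w) * vx - ((1 - c) / w) * vy,
        c * vx - s * vy,
        py + ((1 - c) / w) * vx + (s / w) * vy,
        s * vx + c * vy,
        w,
    ])

def bearing_range(x):
    """Noiseless bearing-and-range observation of the position
    (bearing measured from the y-axis; an assumed convention)."""
    return np.array([np.arctan2(x[0], x[2]), np.hypot(x[0], x[2])])
```

A sanity check on the design: the coordinated turn map preserves speed, which is what distinguishes it from a generic linear model.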
The true target trajectories are plotted in Fig. 6. Targets 1 and 2 appear from two different locations, 5 s apart. They both travel in straight lines before making turns; their tracks almost cross mid-scenario, after which the targets resume straight trajectories. The pruning parameters for the UK-PHD and EK-PHD filters are T = 10⁻⁵, U = 4, and J_max = 100. The results, shown in Figs. 7 and 8, indicate that both the UK-PHD and EK-PHD filters exhibit good tracking performance.
In many nonlinear Bayes filtering applications, the UK filter has shown better performance than the EK filter [49]. The same is expected in nonlinear PHD filtering. However, this example only has a mild nonlinearity, and the performance gap between the EK-PHD and UK-PHD filters may not be noticeable.
V. CONCLUSIONS
Closed-form solutions to the PHD recursion are important analytical and computational tools in multiple-target filtering. Under linear Gaussian assumptions, we have shown that when the initial prior intensity of the random finite set of targets is a Gaussian mixture, the posterior intensity at any time step is also a Gaussian mixture. More importantly, we have derived closed-form recursions for the weights, means, and covariances of the constituent Gaussian components of the posterior intensity. An implementation of the PHD filter has been proposed by combining the closed-form recursions with a simple pruning procedure to manage the growing number of components. Two extensions to nonlinear models using approximation strategies from the extended Kalman filter and the unscented Kalman filter have also been proposed. Simulations have demonstrated the capabilities of these filters to track an unknown and time-varying number of targets under detection uncertainty and false alarms.
There are a number of possible future research directions. Closed-form solutions to the PHD recursion for jump Markov linear models are being investigated. In highly nonlinear, non-Gaussian models, where particle implementations are required, the EK-PHD and UK-PHD filters are obvious candidates for efficient proposal functions that can improve performance. This also opens up the question of optimal importance functions and their approximations. The efficiency and simplicity in implementation of the Gaussian mixture PHD recursion also suggest possible applications to tracking in sensor networks.
REFERENCES
[1] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association. San Diego, CA: Academic, 1988.
[2] S. Blackman, Multiple Target Tracking with Radar Applications. Norwood, MA: Artech House, 1986.
[3] C. Hue, J.-P. Le Cadre, and P. Perez, "Sequential Monte Carlo methods for multiple target tracking and data fusion," IEEE Trans. Signal Process., vol. 50, no. 2, pp. 309-325, 2002.
[4] C. Hue, J.-P. Le Cadre, and P. Perez, "Tracking multiple objects with particle filtering," IEEE Trans. Aerosp. Electron. Syst., vol. 38, no. 3, pp. 791-812, 2002.
[5] R. Mahler, "Multitarget Bayes filtering via first-order multitarget moments," IEEE Trans. Aerosp. Electron. Syst., vol. 39, no. 4, pp. 1152-1178, 2003.
[6] D. Reid, "An algorithm for tracking multiple targets," IEEE Trans. Autom. Control, vol. AC-24, no. 6, pp. 843-854, 1979.
[7] S. Blackman, "Multiple hypothesis tracking for multiple target tracking," IEEE Aerosp. Electron. Syst. Mag., vol. 19, no. 1, pt. 2, pp. 5-18, 2004.
[8] T. E. Fortmann, Y. Bar-Shalom, and M. Scheffe, "Sonar tracking of multiple targets using joint probabilistic data association," IEEE J. Ocean. Eng., vol. OE-8, pp. 173-184, Jul. 1983.
[9] R. L. Streit and T. E. Luginbuhl, "Maximum likelihood method for probabilistic multihypothesis tracking," in Proc. SPIE, 1994, vol. 2235, pp. 5-7.
[10] E. W. Kamen, "Multiple target tracking based on symmetric measurement equations," IEEE Trans. Autom. Control, vol. AC-37, pp. 371-374, 1992.
[11] I. Goodman, R. Mahler, and H. Nguyen, Mathematics of Data Fusion. Norwell, MA: Kluwer Academic, 1997.
[12] R. Mahler, "Random sets in information fusion: An overview," in Random Sets: Theory and Applications, J. Goutsias, Ed. Berlin, Germany: Springer-Verlag, 1997, pp. 129-164.
[13] R. Mahler, "Multitarget moments and their application to multitarget tracking," in Proc. Workshop Estimation, Tracking, Fusion: A Tribute to Yaakov Bar-Shalom, Monterey, CA, 2001, pp. 134-166.
[14] R. Mahler, "Random set theory for target tracking and identification," in Data Fusion Handbook, D. Hall and J. Llinas, Eds. Boca Raton, FL: CRC Press, 2001, pp. 14/1-14/33.
[15] R. Mahler, "An introduction to multisource-multitarget statistics and applications," Lockheed Martin Technical Monograph, 2000.
[16] B. Vo, S. Singh, and A. Doucet, "Sequential Monte Carlo implementation of the PHD filter for multi-target tracking," in Proc. Int. Conf. Information Fusion, Cairns, Australia, 2003, pp. 792-799.
[17] B. Vo, S. Singh, and A. Doucet, "Sequential Monte Carlo methods for multi-target filtering with random finite sets," IEEE Trans. Aerosp. Electron. Syst., vol. 41, no. 4, pp. 1224-1245, 2005.
[18] R. Mahler, "A theoretical foundation for the Stein-Winter probability hypothesis density (PHD) multitarget tracking approach," in Proc. MSS Nat. Symp. Sensor Data Fusion, San Antonio, TX, 2000, vol. 1.
[19] W.-K. Ma, B. Vo, S. Singh, and A. Baddeley, "Tracking an unknown time-varying number of speakers using TDOA measurements: A random finite set approach," IEEE Trans. Signal Process., vol. 54, no. 9, pp. 3291-3304, Sep. 2006.
[20] H. Sidenbladh and S. Wirkander, "Tracking random sets of vehicles in terrain," in Proc. IEEE Workshop Multi-Object Tracking, Madison, WI, 2003.
[21] H. Sidenbladh, "Multi-target particle filtering for the probability hypothesis density," in Proc. Int. Conf. Information Fusion, Cairns, Australia, 2003, pp. 800-806.
[22] T. Zajic and R. Mahler, "A particle-systems implementation of the PHD multitarget tracking filter," in Proc. SPIE Signal Processing, Sensor Fusion, Target Recognition XII, 2003, vol. 5096, pp. 291-299.
[23] M. Vihola, "Random sets for multitarget tracking and data association," Licentiate thesis, Dept. Inf. Tech. Inst. Math., Tampere Univ. Technology, Tampere, Finland, 2004.
[24] A. Johansen, S. Singh, A. Doucet, and B. Vo, "Convergence of the sequential Monte Carlo implementation of the PHD filter," Methodol. Comput. Appl. Probab., vol. 8, no. 2, pp. 265-291, 2006.
[25] D. Clark and J. Bell, "Convergence results for the particle PHD filter," IEEE Trans. Signal Process., vol. 54, no. 7, pp. 2652-2661, Jul. 2006.
[26] K. Panta, B. Vo, S. Singh, and A. Doucet, "Probability hypothesis density filter versus multiple hypothesis tracking," in Proc. SPIE Signal Processing, Sensor Fusion, Target Recognition XIII, I. Kadar, Ed., 2004, vol. 5429, pp. 284-295.
[27] L. Lin, Y. Bar-Shalom, and T. Kirubarajan, "Data association combined with the probability hypothesis density filter for multitarget tracking," in Proc. SPIE Signal Data Processing Small Targets, O. E. Drummond, Ed., 2004, vol. 5428, pp. 464-475.
[28] K. Punithakumar, T. Kirubarajan, and A. Sinha, "A multiple model probability hypothesis density filter for tracking maneuvering targets," in Proc. SPIE Signal Data Processing Small Targets, O. E. Drummond, Ed., 2004, vol. 5428, pp. 113-121.
[29] P. Shoenfeld, "A particle filter algorithm for the multi-target probability hypothesis density," in Proc. SPIE Signal Processing, Sensor Fusion, Target Recognition XIII, I. Kadar, Ed., 2004, vol. 5429, pp. 315-325.
[30] M. Tobias and A. D. Lanterman, "Probability hypothesis density-based multitarget tracking with bistatic range and Doppler observations," Proc. Inst. Elect. Eng. Radar Sonar Navig., vol. 152, no. 3, pp. 195-205, 2005.
[31] D. Clark and J. Bell, "Bayesian multiple target tracking in forward scan sonar images using the PHD filter," Proc. Inst. Elect. Eng. Radar Sonar Navig., vol. 152, no. 5, pp. 327-334, 2005.
[32] H. W. Sorenson and D. L. Alspach, "Recursive Bayesian estimation using Gaussian sums," Automatica, vol. 7, pp. 465-479, 1971.
[33] D. L. Alspach and H. W. Sorenson, "Nonlinear Bayesian estimation using Gaussian sum approximations," IEEE Trans. Autom. Control, vol. AC-17, no. 4, pp. 439-448, 1972.
[34] B. Vo and W.-K. Ma, "A closed-form solution to the probability hypothesis density filter," in Proc. Int. Conf. Information Fusion, Philadelphia, PA, 2005.
[35] D. Daley and D. Vere-Jones, An Introduction to the Theory of Point Processes. Berlin, Germany: Springer-Verlag, 1988.
[36] D. Stoyan, D. Kendall, and J. Mecke, Stochastic Geometry and Its Applications. New York: Wiley, 1995.
[37] B. Vo and S. Singh, "Technical aspects of the probability hypothesis density recursion," EEE Dept., Univ. of Melbourne, Australia, Tech. Rep. TR05-006, 2005.
[38] S. Mori, C. Chong, E. Tse, and R. Wishner, "Tracking and identifying multiple targets without a priori identifications," IEEE Trans. Autom. Control, vol. AC-21, pp. 401-409, 1986.
[39] R. Washburn, "A random point process approach to multiobject tracking," in Proc. Amer. Control Conf., 1987, vol. 3, pp. 1846-1852.
[40] N. Portenko, H. Salehi, and A. Skorokhod, "On optimal filtering of multitarget systems based on point process observations," Random Oper. Stochastic Equations, vol. 5, no. 1, pp. 1-34, 1997.
[41] M. C. Stein and C. L. Winter, "An adaptive theory of probabilistic evidence accrual," Los Alamos National Laboratories Rep. LA-UR-93-3336, 1993.
[42] J. T.-H. Lo, "Finite-dimensional sensor orbits and optimal nonlinear filtering," IEEE Trans. Inf. Theory, vol. IT-18, no. 5, pp. 583-588, 1972.
[43] Y. C. Ho and R. C. K. Lee, "A Bayesian approach to problems in stochastic estimation and control," IEEE Trans. Autom. Control, vol. AC-9, pp. 333-339, 1964.
[44] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications. Norwood, MA: Artech House, 2004.
[45] Y. Ruan and P. Willett, "The turbo PMHT," IEEE Trans. Aerosp. Electron. Syst., vol. 40, no. 4, pp. 1388-1398, 2004.
[46] A. H. Jazwinski, Stochastic Processes and Filtering Theory. New York: Academic, 1970.
[47] B. D. Anderson and J. B. Moore, Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.
[48] S. J. Julier and J. K. Uhlmann, "A new extension of the Kalman filter to nonlinear systems," in Proc. AeroSense: 11th Int. Symp. Aerospace/Defence Sensing, Simulation and Controls, Orlando, FL, 1997.
[49] S. J. Julier and J. K. Uhlmann, "Unscented filtering and nonlinear estimation," Proc. IEEE, vol. 92, pp. 401-422, 2004.
[50] X.-R. Li and V. Jilkov, "Survey of maneuvering target tracking," IEEE Trans. Aerosp. Electron. Syst., vol. 39, no. 4, pp. 1333-1364, 2003.
Ba-Ngu Vo was born in Saigon, Vietnam, in 1970. He received the B.Sc./B.E. degree (with first-class honors) from the University of Western Australia in 1994 and the Ph.D. degree from Curtin University of Technology in 1997.
He is currently a Senior Lecturer in the Department of Electrical and Electronic Engineering, University of Melbourne. His research interests are stochastic geometry, random sets, multiple-target tracking, optimization, and signal processing.
Wing-Kin Ma (M'01) received the B.Eng. degree in electrical and electronic engineering (with first-class honors) from the University of Portsmouth, Portsmouth, U.K., in 1995 and the M.Phil. and Ph.D. degrees in electronic engineering from the Chinese University of Hong Kong (CUHK), Hong Kong, in 1997 and 2001, respectively.
Since August 2005, he has been an Assistant Professor in the Department of Electrical Engineering and the Institute of Communications Engineering, National Tsing Hua University, Taiwan, R.O.C. Previously, he held research positions at McMaster University, Canada; CUHK; and the University of Melbourne, Australia. His research interests are in signal processing for communications and statistical signal processing.
Dr. Ma's Ph.D. dissertation was commended as being "of very high quality and well deserved of honorary mentioning" by the Faculty of Engineering, CUHK, in 2001.