Nationaal Lucht- en Ruimtevaartlaboratorium National Aerospace Laboratory NLR

NLR-TP-2001-625

Probabilistic Data Association Avoiding Track Coalescence H.A.P. Blom and E.A. Bloem

This paper has been published in IEEE Transactions on Automatic Control, February 2000. The contents of this report may be cited on condition that full credit is given to NLR and the authors.

Division: Air Transport
Issued: December 2001
Classification of title: Unclassified


Summary

For the problem of tracking multiple targets the Joint Probabilistic Data Association (JPDA) approach has shown to be very effective in handling clutter and missed detections. The JPDA, however, tends to coalesce neighbouring tracks and ignores the coupling between those tracks. Fitzgerald has shown that hypothesis pruning may be an effective way to prevent track coalescence. Unfortunately, this process leads to an undesired sensitivity to clutter and missed detections, and it does not support any coupling. To improve this situation, the paper follows a novel approach to combine the advantages of JPDA, coupling and hypothesis pruning into new algorithms. First, the problem of multiple target tracking is embedded into one of filtering for a linear descriptor system with stochastic coefficients. Next, for this descriptor system the exact Bayesian and new JPDA filters are derived. Finally, through Monte Carlo simulations, it is shown that these new PDA filters are able to handle coupling and are insensitive to track coalescence, to clutter and to missed detections.


Abbreviations

ARTAS    Advanced Surveillance Tracker and Server
CPDA     Coupled Probabilistic Data Association
CPDA*    Track-coalescence-avoiding Coupled Probabilistic Data Association
EKF      Extended Kalman Filter
ENNPDA   Exact Nearest Neighbour Probabilistic Data Association
IMM      Interacting Multiple Model
JPDA     Joint Probabilistic Data Association
JPDA*    Track-coalescence-avoiding Joint Probabilistic Data Association
JPDAC    Joint Probabilistic Data Association - Coupled
MHT      Multiple Hypothesis Tracking
MLE      Maximum Likelihood Estimation
MMSE     Minimum Mean Square Error
MR       Mixture Reduction
MTA      Measurement to Track Association
MTMR     Multi Target Mixture Reduction
NLR      Nationaal Lucht- en Ruimtevaartlaboratorium
NN       Nearest Neighbour
PDA      Probabilistic Data Association
SME      Symmetric Measurement Equation


Contents

I.   Introduction
II.  The Stochastic Model
III. Embedding into a Descriptor System with Stochastic Coefficients
IV.  Development of the Coupled PDA (CPDA) filter
V.   Track-Coalescence-Avoidance through Hypothesis Pruning
VI.  Monte Carlo Simulations
VII. Concluding Remarks
Appendix A
Appendix B
References


Probabilistic Data Association Avoiding Track Coalescence

Henk A.P. Blom and Edwin A. Bloem

Abstract: For the problem of tracking multiple targets the Joint Probabilistic Data Association (JPDA) approach has shown to be very effective in handling clutter and missed detections. The JPDA, however, tends to coalesce neighbouring tracks and ignores the coupling between those tracks. Fitzgerald has shown that hypothesis pruning may be an effective way to prevent track coalescence. Unfortunately, this process leads to an undesired sensitivity to clutter and missed detections, and it does not support any coupling. To improve this situation, the paper follows a novel approach to combine the advantages of JPDA, coupling and hypothesis pruning into new algorithms. First, the problem of multiple target tracking is embedded into one of filtering for a linear descriptor system with stochastic coefficients. Next, for this descriptor system the exact Bayesian and new JPDA filters are derived. Finally, through Monte Carlo simulations, it is shown that these new PDA filters are able to handle coupling and are insensitive to track coalescence, to clutter and to missed detections.

Keywords: Bayesian Filtering, Descriptor system, Joint Probabilistic Data Association, Multi-target tracking

I. Introduction

For the problem of tracking multiple targets, the Joint Probabilistic Data Association (JPDA) filter [1] has shown to be very effective in handling clutter and missed detections. The JPDA, however, tends to coalesce neighbouring tracks. The aim of this paper is to develop probabilistic filters that both avoid JPDA's sensitivity to track coalescence and preserve JPDA's resistance to clutter and missed detections. This development forms a further elaboration of the new approach and filters presented in [2] and [3].

The Joint Probabilistic Data Association (JPDA) filter [1], [4] is the best known example of the Bayesian data association paradigm. From this point of view, JPDA seems to have a fundamental advantage over classical Measurement to Track Association (MTA) approaches, such as a single scan based Nearest Neighbour (NN) approach, or the multi-scan based Multiple Hypothesis Tracking (MHT) approach. Because of its appealing paradigm, JPDA has stimulated further developments, many of which have been directed to improving the stability or complexity of the numerical evaluation of the JPDA equations. This research has led to the development of several sub-optimal JPDA weight evaluation schemes [5], [6], [7], [8], [9] and to the Exact Nearest Neighbour version of the JPDA (ENNPDA) of [6]. The latter uses the JPDA weight evaluation and subsequently prunes all Gaussians from the conditional density, except the joint association hypothesis that has the highest weight. The resulting ENNPDA appeared to be remarkably insensitive to track coalescence in case of no clutter and no missed detections. The dramatic pruning

used for ENNPDA, however, leads to an undesired sensitivity to clutter and missed detections [10]. From this point of view, ENNPDA and JPDA seem to have complementary qualities, both of which one would like to combine into a new algorithm.

More fundamental studies have been directed toward the development of new approaches in approximating the conditional density. One direction is the approximation of the conditional density for each target's state by a reduced mixture of Gaussian densities, rather than by JPDA's single Gaussian. Through the introduction of appropriate distance measures the single-target Mixture Reduction (MR) scheme of [11] has been extended to a Multi Target Mixture Reduction (MTMR) version [12]. Another fundamental extension over JPDA is the estimation of the joint targets' state. The underlying motivation is that the probabilistic sharing of measurements by closely spaced targets results in a correlation between the individual tracks [13]. Presently, this idea has been elaborated along three different directions, as follows.

- JPDA Coupled (JPDAC) filter approach. Following the JPDA derivation framework, a coupled filter has been given for the joint state (e.g. [14], pp. 328-329). During each filter cycle a single Gaussian replaces the Gaussian sum. The effectiveness of the JPDAC approach in combination with two other Bayesian approaches (imaging sensor filter and IMM, respectively) has been demonstrated for closely spaced target situations [15], [16]. Note, however, that a direct comparison between JPDAC and JPDA has not been made in these papers.

- Symmetric Measurement Equation (SME) approach. Similar as JPDAC, SME yields a coupled filter for the joint state. The novel idea is to transform the joint state observation equation such that the data association uncertainty disappears completely. The result of such transformation is a nonlinear filtering problem, which can be approached by e.g. an Extended Kalman Filter (EKF) for the joint target state. Effective SME transformations have been developed and some initial comparisons with JPDA have shown the effectiveness of the approach [17], [18], [19], [20], [21], [22]. Due to the nonlinear filtering for the joint targets' state, the SME approach evaluates the correlation between close target tracks. SME is numerically less complex than JPDA.

- Maximum Likelihood Estimation (MLE) approach. Here the aim is to recursively evaluate the MLE of the targets' joint state. By making use of a (single) Gaussian approximation around the maximum likelihood estimate of the joint targets' state, recursive methods for MLE based tracking have been developed [23], [24]. The most general


recursive MLE filter is based on a mean field approach towards the evaluation of the maximum likelihood estimate of the targets' joint state [24]. The effectiveness of this recursive MLE approach relative to JPDA has been shown in [25], and two types of improvements over JPDA have been reported: (1) improved resistance to track coalescence, and (2) improved resistance to clutter measurements.

The JPDAC, the SME and the MLE developments all point into the direction that a study towards combining the complementary qualities of JPDA and ENNPDA into a new algorithm deserves a broader setup, in order to also try to combine the coupling between target tracks. To do so, this paper explores two complementary directions, the first of which provides a theoretical basis for incorporating coupling with JPDA, and the second exploits the ENNPDA advantages. The first direction starts by embedding the problem of multi-target tracking, given unassociated measurements from clutter and randomly detected targets, into one of filtering for a well-defined system of stochastic difference equations. The resulting embedding is a linear descriptor system with stochastic i.i.d. coefficients [2]. This representation forms the key to the development of the exact Bayesian filter equations for the conditional density of the joint state of the multiple targets, and of the Gaussian approximation-based filter algorithm. The latter filter algorithm appears to differ significantly from the JPDAC filter in case of missed detections, and is referred to as the Coupled Probabilistic Data Association (CPDA) filter. The second, complementary direction shows how the hypothesis pruning approach of ENNPDA should be incorporated within the CPDA and JPDA filters. To do so, we first develop an ENNPDA-inspired track-coalescence-avoiding hypothesis pruning strategy. Applying this to CPDA and JPDA results into new algorithms called CPDA* and JPDA*, respectively, in which the * is short for "track-coalescence-avoiding". In order to compare the newly developed CPDA, CPDA* and JPDA* filters versus each other and versus JPDAC, JPDA and ENNPDA, we subsequently run Monte Carlo simulations for a basic example in which track coalescence plays a distinctive role in JPDA. On the basis of these results, it appeared possible to characterize the differences between the various filters, to show the distinct advantage of the CPDA* and JPDA* algorithms over the other ones, and to identify the cause of track coalescence.

The paper is organized as follows. In section II, we introduce the multitarget tracking problem considered. In section III, we embed this problem into one of filtering given measurements from a linear descriptor system with stochastic coefficients. In section IV, we develop the CPDA filter algorithm and explain its difference with JPDAC and JPDA. In section V, we develop the track-coalescence-avoiding filter algorithms CPDA* and JPDA*. In section VI, we evaluate all new developments through Monte Carlo simulations. Finally, in section VII, we summarize the results and draw more general conclusions.

II. The Stochastic Model

In this section we will describe the target model and the measurement model.

A. The target model

We consider M targets and we assume that the state of the i-th target is modeled as follows:

    x^i_{t+1} = a^i x^i_t + b^i w^i_t,   i = 1, ..., M                          (1)

where x^i_t is the n-vectorial state of the i-th target, a^i and b^i are (n × n)-matrices and {w^i_t} is a sequence of i.i.d. standard Gaussian variables of dimension n, with w^i_t, w^j_t independent for all i ≠ j and w^i_t, x^i_0, x^j_0 independent for all i ≠ j. Let x_t ≜ Col{x^1_t, ..., x^M_t}, A ≜ Diag{a^1, ..., a^M}, B ≜ Diag{b^1, ..., b^M}, and w_t ≜ Col{w^1_t, ..., w^M_t}. Then we can model the state of our M targets as follows:

    x_{t+1} = A x_t + B w_t                                                     (2)

B. The measurement model

A set of measurements consists of two types of measurements, namely measurements originating from targets and measurements originating from clutter. First we will treat the two types of measurements separately. Subsequently we treat the random insertion of clutter measurements between the target measurements.

B.1 Measurements originating from targets

We assume that a potential measurement associated with state x^i_t (which we will denote by z^i_t) is modeled as follows:

    z^i_t = h^i x^i_t + g^i v^i_t,   i = 1, ..., M                              (3)

where z^i_t is an m-vector, h^i is an (m × n)-matrix, g^i is an (m × m)-matrix, and {v^i_t} is a sequence of i.i.d. standard Gaussian variables of dimension m with v^i_t and v^j_t independent for all i ≠ j. Moreover v^i_t is independent of x^j_0 and w^j_t for all i, j. With z_t ≜ Col{z^1_t, ..., z^M_t}, H ≜ Diag{h^1, ..., h^M}, G ≜ Diag{g^1, ..., g^M}, and v_t ≜ Col{v^1_t, ..., v^M_t}, we obtain:

    z_t = H x_t + G v_t                                                         (4)
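For concreteness, the following Python sketch simulates the joint model (1)-(4); the specific matrices, dimensions and the use of a single acceleration-noise input per target are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)

# Assumed example: M = 2 targets, n = 2 states (position, velocity), m = 1 measurement.
M, Ts = 2, 10.0
a = [np.array([[1.0, Ts], [0.0, 1.0]]) for _ in range(M)]   # a^i
b = [np.array([[Ts**2 / 2.0], [Ts]]) for _ in range(M)]     # b^i (scalar noise input assumed)
h = [np.array([[1.0, 0.0]]) for _ in range(M)]              # h^i
g = [np.array([[30.0]]) for _ in range(M)]                  # g^i

# Joint matrices of equations (2) and (4): block-diagonal stacking of the per-target models.
A, B = block_diag(*a), block_diag(*b)
H, G = block_diag(*h), block_diag(*g)

def simulate(x, steps):
    """Iterate x_{t+1} = A x_t + B w_t and z_t = H x_t + G v_t for a few scans."""
    out = []
    for _ in range(steps):
        z = H @ x + G @ rng.standard_normal(G.shape[1])     # potential measurements z_t
        out.append((x.copy(), z))
        x = A @ x + B @ rng.standard_normal(B.shape[1])
    return out

trajectory = simulate(np.array([0.0, 1.0, 300.0, -1.0]), steps=5)
```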


We next introduce a model that takes into account that not all targets have to be detected at moment t, which implies that not all potential measurements z^i_t have to be available as true measurements at moment t. To this end, we define the following variables: Let P^i_d be the detection probability of target i and let χ_{i,t} ∈ {0,1} be the detection indicator for target i, which assumes the value one with probability P^i_d > 0, independently of χ_{j,t}, j ≠ i. This yields the following detection indicator vector χ_t:

    χ_t ≜ Col{χ_{1,t}, ..., χ_{M,t}}.

Thus, the number of detected targets is D_t ≜ Σ_{i=1}^M χ_{i,t}. Furthermore, we assume that {χ_t} is a sequence of i.i.d. vectors.

In order to link the detection indicator vector with the measurement model, we introduce the following operator Φ: for an arbitrary (0,1)-valued M'-vector χ' we define D(χ') ≜ Σ_{i=1}^{M'} χ'_i and the operator Φ producing Φ(χ') as a (0,1)-valued matrix of size D(χ') × M' of which the i-th row equals the i-th non-zero row of Diag{χ'}. Hence by defining, for D_t > 0,

    z̃_t ≜ Φ_t(χ_t) z_t,   where Φ_t(χ_t) ≜ Φ(χ_t) ⊗ I_m,

with I_m a unit-matrix of size m, and with the tensor product ⊗ defined by

    [a b; c d] ⊗ I_m = [a I_m  b I_m; c I_m  d I_m],

we get the vector that contains all measurements originating from targets at moment t in a fixed order. In reality, however, we do not know the order of the targets. Hence, we introduce the stochastic D_t × D_t permutation matrix φ_t, which is conditionally independent of {χ_t}. We also assume that {φ_t} is a sequence of independent matrices. Hence, for D_t > 0,

    z̃*_t ≜ φ̃_t z̃_t,   where φ̃_t ≜ φ_t ⊗ I_m,

is a vector that contains all measurements originating from targets at moment t in a random order.

B.2 Measurements originating from clutter

Let the random variable F_t be the number of false measurements at moment t. We assume that F_t has a Poisson distribution:

    p_{F_t}(F) = exp{-λV} (λV)^F / F!,   F = 0, 1, 2, ...
               = 0,   else

where λ is the spatial density of false measurements (i.e. the average number per unit volume) and V is the volume of the validation region. Thus, λV is the expected number of false measurements in the validation gate. We assume that the false measurements are uniformly distributed in the validation region, which means that a column-vector v*_t of F_t i.i.d. false measurements is assumed to have the following density:

    p_{v*_t|F_t}(v* | F) = V^{-F}

where V is the volume of the validation region. Furthermore we assume that the process {v*_t} is a sequence of independent vectors, which are independent of {x_t}, {w_t}, {v_t} and {χ_t}.

B.3 Random insertion of clutter measurements

Let the random variable L_t be the total number of measurements at moment t. Thus,

    L_t = D_t + F_t

With ỹ_t ≜ Col{z̃*_t, v*_t}, it follows with the above defined variables that

    ỹ_t = Col{φ̃_t Φ_t(χ_t) z_t, v*_t},   if L_t > D_t > 0                       (5)

whereas the upper and lower subvector parts disappear for D_t = 0 and L_t = D_t, respectively. With this equation, the measurements originating from clutter still have to be randomly inserted between the measurements originating from the detected targets. To do so, we first introduce the target indicator and clutter indicator processes, denoted by {ψ_t} and {ψ^c_t}, respectively: Let the random variable ψ_{i,t} ∈ {0,1} be a target indicator at moment t for measurement i, which assumes the value one if measurement i belongs to a detected target and zero if measurement i comes from clutter. This yields the following target indicator vector ψ_t of size L_t:

    ψ_t ≜ Col{ψ_{1,t}, ..., ψ_{L_t,t}}.

Let the random variable ψ^c_{i,t} ∈ {0,1} be a clutter indicator at moment t for measurement i, which assumes the value one if measurement i comes from clutter and zero if measurement i belongs to an aircraft (thus ψ^c_{i,t} = 1 - ψ_{i,t}). This yields the following clutter indicator vector ψ^c_t of size L_t:

    ψ^c_t ≜ Col{ψ^c_{1,t}, ..., ψ^c_{L_t,t}}.

In order to link the target and clutter indicator vectors with the measurement model, we make use of the operator Φ introduced before. With this the measurement vector with clutter inserted reads as follows:

    y_t = [Φ̃(ψ_t)^T ⋮ Φ̃(ψ^c_t)^T] ỹ_t,   if L_t > D_t > 0                      (6)

where Φ̃(·) ≜ Φ(·) ⊗ I_m. Substituting (5) into (6) yields the following model for the observation vector y_t at moment t:

    y_t = [Φ̃(ψ_t)^T ⋮ Φ̃(ψ^c_t)^T] Col{φ̃_t Φ_t(χ_t) z_t, v*_t},   if L_t > D_t > 0     (7)

This, together with equation (2), forms a complete characterization of our tracking problem in terms of stochastic difference equations.
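A minimal sketch of how one scan of unassociated measurements could be generated under the model above (equations (5)-(7)); the clutter region, its volume V, and all names and numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_scan(z_list, Pd, lam, V, lo, hi):
    """Assemble one scan y_t from potential target measurements z_list (one m-vector per
    target), detection probability Pd, clutter density lam and a clutter region [lo, hi]
    of volume V: detect, permute the detections, then randomly insert Poisson clutter."""
    m = len(z_list[0])
    # Detection indicators chi_{i,t}; detected measurements in fixed target order.
    chi = rng.random(len(z_list)) < Pd
    detected = [z for z, c in zip(z_list, chi) if c]
    rng.shuffle(detected)                       # random permutation phi_t of the detections
    # Poisson number of false measurements, uniform in the clutter region.
    F = rng.poisson(lam * V)
    clutter = [rng.uniform(lo, hi, size=m) for _ in range(F)]
    # Random insertion of clutter between target measurements (target indicators psi).
    scan = detected + clutter
    order = rng.permutation(len(scan))
    y = [scan[k] for k in order]
    psi = [1 if order[k] < len(detected) else 0 for k in range(len(scan))]
    return y, psi, chi

y, psi, chi = make_scan([np.array([100.0]), np.array([130.0])],
                        Pd=0.9, lam=0.001, V=1000.0, lo=0.0, hi=1000.0)
```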


III. Embedding into a Descriptor System with Stochastic Coefficients

Because [Φ̃(ψ_t)^T ⋮ Φ̃(ψ^c_t)^T] is a permutation matrix for L_t > D_t > 0, its inverse satisfies

    [Φ̃(ψ_t)^T ⋮ Φ̃(ψ^c_t)^T]^{-1} = Col{Φ̃(ψ_t), Φ̃(ψ^c_t)}                       (8)

Premultiplying (7) by such inverse yields

    Col{Φ̃(ψ_t) y_t, Φ̃(ψ^c_t) y_t} = Col{φ̃_t Φ_t(χ_t) z_t, v*_t},   if L_t > D_t > 0     (9)

From (9), it follows that

    Φ̃(ψ_t) y_t = φ̃_t Φ_t(χ_t) z_t,   if D_t > 0                                 (10)

Substitution of (4) into (10) yields:

    Φ̃(ψ_t) y_t = φ̃_t Φ_t(χ_t) H x_t + φ̃_t Φ_t(χ_t) G v_t,   if D_t > 0          (11)

Notice that (11) is a linear Gaussian descriptor system [26] with stochastic i.i.d. coefficients Φ̃(ψ_t) and φ̃_t Φ_t(χ_t). Because φ̃_t has an inverse, (11) can be transformed into

    φ̃_t^T Φ̃(ψ_t) y_t = Φ_t(χ_t) H x_t + Φ_t(χ_t) G v_t,   if D_t > 0            (12)

Next we introduce an auxiliary indicator process χ̃_t as follows:

    χ̃_t ≜ φ_t^T Φ(ψ_t),   if D_t > 0.

With this we get a simplified version of (12):

    (χ̃_t ⊗ I_m) y_t = Φ_t(χ_t) H x_t + Φ_t(χ_t) G v_t,   if D_t > 0             (13)

Remark 1: For the development of the JPDA, [1, p. 224] makes use of an L_t × (M + 1) dimensional measurement-target association matrix Ω_t = [ω_{ij,t}], with ω_{ij,t} = 1 if the i-th measurement belongs to target j (with target 0 meaning clutter) and ω_{ij,t} = 0 otherwise. In our setup Ω_t satisfies, for L_t > 0:

    Ω_t = [ψ^c_t ⋮ Φ^T(ψ_t) φ_t Φ(χ_t)],   if D_t > 0                            (14)
    Ω_t = [ψ^c_t ⋮ 0_{L_t × M}],   if D_t = 0

where 0_{L_t × M} denotes an L_t × M-dimensional zero-matrix.

IV. Development of the Coupled PDA (CPDA) Filter

Let Y_t denote the σ-algebra of measurements y_t up to and including moment t. In this section we develop Bayesian characterizations of the conditional density p_{x_t|Y_t}(x). From (13), it follows that all relevant associations and permutations can be covered by (χ_t, χ̃_t)-hypotheses. Hence, through defining the weights

    β_t(χ, χ̃) ≜ Prob{χ_t = χ, χ̃_t = χ̃ | Y_t},                                   (15)

the law of total probability yields:

    p_{x_t|Y_t}(x) = β_t(0, ·) p_{x_t|Y_{t-1}}(x) + Σ_{χ̃} Σ_{χ≠0} β_t(χ, χ̃) p_{x_t|χ_t,χ̃_t,Y_t}(x | χ, χ̃)

And thus, our problem is to characterize the terms in the last summation. This problem is solved in two steps, the first of which is the following proposition.

Proposition 1: For any χ ∈ {0,1}^M, such that D(χ) = Σ_{i=1}^M χ_i ≤ L_t, and any matrix realization χ̃ of size D(χ) × L_t, the following holds true:

    p_{x_t|χ_t,χ̃_t,Y_t}(x | χ, χ̃) = p_{z̃_t|x_t,χ_t}((χ̃ ⊗ I_m) y_t | x, χ) p_{x_t|Y_{t-1}}(x) / F_t(χ, χ̃)     (16)

    β_t(χ, χ̃) = F_t(χ, χ̃) λ^{(L_t - D(χ))} ∏_{i=1}^M [(1 - P^i_d)^{(1-χ_i)} (P^i_d)^{χ_i}] / c_t     (17)

where F_t(χ, χ̃) and c_t are such that they normalize p_{x_t|χ_t,χ̃_t,Y_t}(x | χ, χ̃) and β_t(χ, χ̃), respectively.

Proof: see Appendix A. The specialty of this proof is the derivation of Bayesian equations for the descriptor system (13).

Our next step is given by the following Theorem.

Theorem 1: Let p_{x_t|Y_{t-1}}(x) be Gaussian with mean x̄_t and covariance P̄_t and let F_t(χ, χ̃) be defined by Proposition 1. Then F_t(0, χ̃) = 1, whereas for χ ≠ 0:

    F_t(χ, χ̃) = exp{-(1/2) μ_t^T(χ, χ̃) Q_t(χ)^{-1} μ_t(χ, χ̃)} · [(2π)^{m D(χ)} Det{Q_t(χ)}]^{-1/2}     (18)

where

    μ_t(χ, χ̃) ≜ (χ̃ ⊗ I_m) y_t - Φ_t(χ) H x̄_t
    Q_t(χ) ≜ Φ_t(χ) (H P̄_t H^T + G G^T) Φ_t(χ)^T

Moreover, p_{x_t|Y_t}(x) is a Gaussian mixture, whereas its overall mean x̂_t and its overall covariance P̂_t satisfy

    x̂_t = x̄_t + Σ_{χ≠0} K_t(χ) [Σ_{χ̃} β_t(χ, χ̃) μ_t(χ, χ̃)]                       (19)

    P̂_t = P̄_t - Σ_{χ≠0} K_t(χ) Φ_t(χ) H P̄_t [Σ_{χ̃} β_t(χ, χ̃)]
          + Σ_{χ≠0} K_t(χ) [Σ_{χ̃} β_t(χ, χ̃) μ_t(χ, χ̃) μ_t^T(χ, χ̃)] K_t^T(χ)
          - {Σ_{χ≠0} K_t(χ) [Σ_{χ̃} β_t(χ, χ̃) μ_t(χ, χ̃)]} {Σ_{χ'≠0} K_t(χ') [Σ_{χ̃'} β_t(χ', χ̃') μ_t(χ', χ̃')]}^T     (20.a)

with:

    K_t(χ) ≜ P̄_t H^T Φ_t(χ)^T Q_t(χ)^{-1},   if χ ≠ 0,   and   K_t(0) ≜ 0       (20.b)
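The mixture moment equations (19)-(20.a) amount to a weighted combination of hypothesis-conditional updates. The sketch below illustrates that combination; the hypothesis list, its weights and the gains are assumed to be precomputed elsewhere, and the χ = 0 hypothesis is represented by a zero gain.

```python
import numpy as np

def mixture_moments(x_bar, P_bar, hypotheses):
    """Overall mean and covariance of the hypothesis mixture, in the spirit of (19)-(20.a).

    hypotheses: list of (beta, K, Phi_H, mu) with weight beta, gain K = K_t(chi),
    Phi_H = Phi_t(chi) H, and innovation mu = mu_t(chi, chi_tilde)."""
    shift = np.zeros_like(x_bar)                # accumulates sum of K beta mu terms
    P = P_bar.copy()
    for beta, K, Phi_H, mu in hypotheses:
        Kmu = K @ mu
        shift += beta * Kmu
        P -= beta * (K @ Phi_H @ P_bar)         # second term of (20.a)
        P += beta * np.outer(Kmu, Kmu)          # third term of (20.a)
    x_hat = x_bar + shift
    P -= np.outer(shift, shift)                 # spread-of-the-means correction
    return x_hat, P
```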


Proof: (Outline) If p_{x_t|Y_{t-1}}(x) is Gaussian with mean x̄_t and covariance P̄_t, then the density p_{x_t|χ_t,χ̃_t,Y_t}(x | χ, χ̃) is Gaussian with mean x̂_t(χ, χ̃) and covariance P̂_t(χ) satisfying

    x̂_t(χ, χ̃) = x̄_t + K_t(χ) [(χ̃ ⊗ I_m) y_t - Φ_t(χ) H x̄_t],   if χ ≠ 0
               = x̄_t,   if χ = 0
    P̂_t(χ) = P̄_t - K_t(χ) Φ_t(χ) H P̄_t,   if χ ≠ 0
            = P̄_t,   if χ = 0

Hence, p_{x_t|Y_t}(·) is a Gaussian mixture, and all equations follow from a lengthy but straightforward evaluation of this mixture.

Theorem 1 implies that we get a recursive algorithm if the conditional density p_{x_t|Y_{t-1}}(x) is approximated by a Gaussian shape. We refer to this recursive algorithm as the CPDA filter. It consists of evaluating the following four subsequent steps:

CPDA Step 1 - Prediction:

    x̄_t = A x̂_{t-1}                                                             (21)
    P̄_t = A P̂_{t-1} A^T + B B^T                                                 (22)

CPDA Step 2 - Gating:

For each prediction (x̄^i_t, P̄^i_t), define a gate G^i_t ⊂ IR^m as follows:

    G^i_t ≜ {y' ∈ IR^m ; (y' - h^i x̄^i_t)^T (h^i P̄^i_t h^{iT} + g^i g^{iT})^{-1} (y' - h^i x̄^i_t) ≤ γ}

with γ the gate size. If the j-th measurement y^j_t falls outside gate G^i_t, i.e. y^j_t ∉ G^i_t, then the i-th component of the j-th column of χ̃_t is assumed to equal zero. This reduces the set of possible detection/permutation hypotheses to be evaluated at moment t to, say, X̃_t.

CPDA Step 3 - Evaluation of the Detection/Permutation Hypotheses, by Using (16) as Approximation:

    β_t(χ, χ̃) = F_t(χ, χ̃) λ^{(L_t - D(χ))} [∏_{i=1}^M (1 - P^i_d)^{(1-χ_i)} (P^i_d)^{χ_i}] / c_t,   for χ̃ ∈ X̃_t
    β_t(χ, χ̃) = 0,   else                                                       (23)

with F_t(χ, χ̃) satisfying equation (18) and c_t a normalizing constant.

CPDA Step 4 - Measurement-Based Update Equations, Using (19) and (20) as Approximations:

    x̂_t = x̄_t + Σ_{χ≠0} K_t(χ) [Σ_{χ̃} β_t(χ, χ̃) μ_t(χ, χ̃)]                       (24)

    P̂_t = P̄_t - Σ_{χ≠0} K_t(χ) Φ_t(χ) H P̄_t [Σ_{χ̃} β_t(χ, χ̃)]
          + Σ_{χ≠0} K_t(χ) [Σ_{χ̃} β_t(χ, χ̃) μ_t(χ, χ̃) μ_t^T(χ, χ̃)] K_t^T(χ)
          - {Σ_{χ≠0} K_t(χ) [Σ_{χ̃} β_t(χ, χ̃) μ_t(χ, χ̃)]} {Σ_{χ'≠0} K_t(χ') [Σ_{χ̃'} β_t(χ', χ̃') μ_t(χ', χ̃')]}^T     (25)

with K_t(χ) defined in (20.b).

Remark 2: Note that the CPDA algorithm has similarities with the JPDA Coupled (JPDAC) filter of [14, ch. 6.2, pp. 328-329]: Steps 1 and 3 are equivalent, Step 2 differs slightly only. For the evaluation of Step 4, however, JPDAC uses the additional approximations

    K_t(χ) = P̄_t H^T (H P̄_t H^T + G G^T)^{-1} Φ_t(χ)^T                           (26.a)

in (24) and in the 2nd and 3rd term of (25), and

    K_t(χ) Φ_t(χ) = P̄_t H^T (H P̄_t H^T + G G^T)^{-1}                             (26.b)

in the first term of (25). Strict equality of the latter approximation will hold true in exceptional cases only, e.g., when P̄_t is block-diagonal (i.e., targets far apart), or when Φ(χ) = I (i.e., detection probability is unity).

Remark 3: The well-known JPDA filter equations [1] can be obtained from the CPDA equations under the following additional approximate assumption:

    P̂^{ij}_t = 0   for all i ≠ j

In addition, CPDA's gating Step 2 simplifies for JPDA, the effect of which is that JPDA's gating approach may eliminate some (χ_t, χ̃_t)-hypotheses that are considered by CPDA.

Remark 4: For M > 1 the CPDA algorithm obviously is more complex than the JPDA algorithm. If λ = 0, P^i_d = 1 for all i, and m ≤ n, then the number of scalar computations for one filter cycle is of the order of O(M²n³ + M³n²m + M!M²m²) for the CPDA and of the order O(Mn³ + Mn²m + M!M + M²m²) for the JPDA.

Remark 5: The ENNPDA equations [6] can be obtained from the CPDA equations through pruning all, except the most likely (χ, χ̃)-hypothesis before executing CPDA step 4, i.e., the weights after pruning become

    β'_t(χ, χ̃) = 1,   if (χ, χ̃) = Argmax_{χ', χ̃'} β_t(χ', χ̃')
               = 0,   else

The effect of this dramatic pruning is that the summations in (24) and (25) actually select a single value of (χ, χ̃) only. As a result of this, the cross-covariance terms of P̂_t stay zero. The diagonal terms of P̂_t equal the measurement-based update equations of ENNPDA.
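As an illustration of CPDA Step 2, the sketch below implements the ellipsoidal gate test and collects, per target, which measurements survive it; variable names are ours, and the inputs are assumed to be the per-target predictions of Step 1.

```python
import numpy as np

def in_gate(y, x_bar_i, P_bar_i, h_i, g_i, gamma):
    """Gating test of CPDA Step 2: accept measurement y for target i when the squared
    Mahalanobis distance to the predicted measurement is at most gamma."""
    S = h_i @ P_bar_i @ h_i.T + g_i @ g_i.T        # innovation covariance Q^i_t
    nu = y - h_i @ x_bar_i                         # innovation
    d2 = float(nu.T @ np.linalg.solve(S, nu))
    return d2 <= gamma

def gate_matrix(scan, x_bars, P_bars, h, g, gamma):
    """0/1 matrix whose (i, j) entry tells whether measurement j may be associated with
    target i; hypotheses violating it are discarded before Step 3."""
    return np.array([[1 if in_gate(y, x_bars[i], P_bars[i], h[i], g[i], gamma) else 0
                      for y in scan] for i in range(len(x_bars))])
```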


V. Track-Coalescence-Avoidance through Hypothesis Pruning

With its ENNPDA approach, [6] has shown that hypothesis pruning can provide an effective approach toward track-coalescence avoidance (see Remark 5 of Section IV). The ENNPDA filter equations can be obtained from the CPDA algorithm by pruning all less likely (χ_t, χ̃_t)-hypotheses before measurement updating (CPDA step 4). Obviously, ENNPDA's resistance to track coalescence is caused by this pruning, and its sensitivity to missed detections and clutter is also caused by this pruning. Obviously the latter does not occur if λ = 0 and P^i_d = 1 for all i, because in that case L_t = D_t = M and Φ(ψ_t) = Φ(χ_t) = I_M. Hence, to reduce the CPDA in that case to the ENNPDA, it is sufficient to prune all less likely φ_t-hypotheses only. Thus, from ENNPDA we know that track coalescence can be avoided by pruning φ_t-hypotheses. From PDA we know that sensitivity to missed detections and clutter can be avoided by not pruning any χ_t or ψ_t hypothesis. Combining these two findings leads to the following new hypothesis pruning strategy: evaluate all (χ_t, ψ_t) hypotheses and prune per (χ_t, ψ_t)-hypothesis all less-likely φ_t-hypotheses. To do this, we define for every χ and ψ satisfying D(ψ) = D(χ) ≤ Min{M, L_t} a mapping φ̂_t(χ, ψ):

    φ̂_t(χ, ψ) ≜ Argmax_φ β_t(χ, φ^T Φ(ψ))

where the maximization is over all permutation matrices φ of size D(χ) × D(χ). The pruning strategy of evaluating all (χ, ψ)-hypotheses and only one φ-hypothesis per (χ, ψ)-hypothesis implies that we get the following approximated weights β'_t(χ, χ̃) ≈ β_t(χ, χ̃):

    β'_t(χ, φ^T Φ(ψ)) = β_t(χ, φ^T Φ(ψ)) / ĉ_t,   if φ = φ̂_t(χ, ψ)
                      = 0,   else

with ĉ_t a normalization constant for β'_t, i.e.,

    Σ_{χ, ψ, φ} β'_t(χ, φ^T Φ(ψ)) = 1

Obviously, this allows for a shorter notation, and we define:

    β̂_t(χ, ψ) ≜ β_t(χ, φ̂_t(χ, ψ)^T Φ(ψ)) / ĉ_t

for all (χ, ψ) satisfying D(χ) = D(ψ) ≤ Min{M, L_t}. This approach yields a track-coalescence-avoiding Coupled PDA filter, which we refer to as CPDA*.

CPDA* Step 1 - Prediction:
Equivalent to CPDA step 1: (21) and (22)

CPDA* Step 2 - Gating:
Equivalent to CPDA step 2

CPDA* Step 3 - Evaluation of the Detection/Permutation Hypotheses:
Equivalent to CPDA step 3: equations (23) and (18)

CPDA* Step 4 - Track-Coalescence Hypothesis Pruning:
First, evaluate for every (χ, ψ) such that D(ψ) = D(χ) ≤ Min{M, L_t}:

    φ̂_t(χ, ψ) ≜ Argmax_φ β_t(χ, φ^T Φ(ψ))

Next, evaluate all φ̂_t(χ, ψ) hypothesis weights

    β̂_t(χ, ψ) = β_t(χ, φ̂_t^T(χ, ψ) Φ(ψ)) / ĉ_t

where ĉ_t is a normalizing constant satisfying

    ĉ_t = Σ_{χ, ψ} β_t(χ, φ̂_t^T(χ, ψ) Φ(ψ))

CPDA* Step 5 - Measurement-Based Update Equations:

    x̂_t = x̄_t + Σ_{χ≠0} K_t(χ) [Σ_{ψ: D(ψ)=D(χ)} β̂_t(χ, ψ) μ_t(χ, φ̂_t^T(χ, ψ) Φ(ψ))]     (27)

    P̂_t = P̄_t - Σ_{χ≠0} K_t(χ) Φ_t(χ) H P̄_t [Σ_{ψ: D(ψ)=D(χ)} β̂_t(χ, ψ)]
          + Σ_{χ≠0} K_t(χ) [Σ_{ψ: D(ψ)=D(χ)} β̂_t(χ, ψ) μ_t(χ, φ̂_t^T(χ, ψ) Φ(ψ)) μ_t^T(χ, φ̂_t^T(χ, ψ) Φ(ψ))] K_t^T(χ)
          - {Σ_{χ≠0} K_t(χ) [Σ_{ψ: D(ψ)=D(χ)} β̂_t(χ, ψ) μ_t(χ, φ̂_t^T(χ, ψ) Φ(ψ))]} ·
            {Σ_{χ'≠0} K_t(χ') [Σ_{ψ': D(ψ')=D(χ')} β̂_t(χ', ψ') μ_t(χ', φ̂_t^T(χ', ψ') Φ(ψ'))]}^T     (28)

with μ_t(·) and K_t(·) satisfying equations (18) and (20.b) of CPDA.

The computational complexity of CPDA* is similar to that of CPDA.
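The pruning rule of CPDA* Step 4 can be sketched as follows; the dictionary encoding of the (χ, ψ, φ) hypotheses and their weights is our own illustrative choice.

```python
from collections import defaultdict

def prune_permutations(weights):
    """Track-coalescence-avoiding pruning of Section V (sketch).

    weights: dict mapping hypothesis keys (chi, psi, phi) to beta_t(chi, phi^T Phi(psi)),
    where chi, psi, phi are hashable encodings of the detection vector, the target
    indicator vector and the permutation. Per (chi, psi) only the most likely
    permutation phi is kept; the surviving weights are renormalized."""
    best = {}
    for (chi, psi, phi), beta in weights.items():
        key = (chi, psi)
        if key not in best or beta > best[key][1]:
            best[key] = (phi, beta)
    total = sum(beta for _, beta in best.values())
    return {(chi, psi, phi): beta / total
            for (chi, psi), (phi, beta) in best.items()}
```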


Hence, our next step is to develop a JPDA* filter from the CPDA* filter in a similar way as the JPDA filter follows from the CPDA filter (see Remark 3). To do so, we first prove the following Theorem:

Theorem 2: Let p_{x_t|Y_{t-1}}(x) be Gaussian with mean x̄_t = Col{x̄^1_t, ..., x̄^M_t} and covariance P̄_t = Diag{P̄^1_t, ..., P̄^M_t}, then β_t(χ, χ̃) of Proposition 1 satisfies:

    β_t(χ, χ̃) = λ^{(L_t - D(χ))} [∏_{i=1}^M f^i_t(χ, χ̃) (1 - P^i_d)^{(1-χ_i)} (P^i_d)^{χ_i}] / c_t     (29)

with:

    f^i_t(χ, χ̃) = exp{-(1/2) Σ_{k=1}^{L_t} [Φ(χ)]^T_i χ̃_k ν_t^{ikT} [Q^i_t]^{-1} ν^{ik}_t} · [(2π)^m Det{Q^i_t}]^{-χ_i/2}     (30.a)

where:

    ν^{ik}_t ≜ y^k_t - h^i x̄^i_t                                                 (30.b)
    Q^i_t ≜ h^i P̄^i_t h^{iT} + g^i g^{iT}                                        (30.c)

whereas [Φ(χ)]_i and χ̃_k are the i-th column of Φ(χ) and the k-th column of χ̃, respectively. Moreover, p_{x^i_t|Y_t}(x^i), i ∈ {1, ..., M}, is a Gaussian mixture, while its overall mean x̂^i_t and its overall covariance P̂^i_t satisfy:

    x̂^i_t = x̄^i_t + W^i_t (Σ_{k=1}^{L_t} β^{ik}_t ν^{ik}_t)                        (31.a)

    P̂^i_t = P̄^i_t - W^i_t h^i P̄^i_t (Σ_{k=1}^{L_t} β^{ik}_t)
            + W^i_t (Σ_{k=1}^{L_t} β^{ik}_t ν^{ik}_t ν^{ikT}_t) W^{iT}_t
            - W^i_t (Σ_{k=1}^{L_t} β^{ik}_t ν^{ik}_t) (Σ_{k'=1}^{L_t} β^{ik'}_t ν^{ik'}_t)^T W^{iT}_t     (31.b)

with:

    W^i_t ≜ P̄^i_t h^{iT} [Q^i_t]^{-1}
    β^{ik}_t ≜ Prob{[Φ(χ_t)]^T_i χ̃_{k,t} = 1 | Y_t} = Σ_{χ≠0, χ̃} β_t(χ, χ̃) [Φ(χ)]^T_i χ̃_k

with [·]_k the k-th column of [·].

Proof: see Appendix B. The novel part of this proof consists of non-trivial matrix manipulations. Following this, the remaining part is similar to the JPDA derivations and therefore omitted.

By combining Theorem 2 with the CPDA* steps, we arrive at the JPDA* filter algorithm.

JPDA* Step 1 - Prediction for all i ∈ {1, ..., M}:

    x̄^i_t = a^i x̂^i_{t-1}                                                        (32.a)
    P̄^i_t = a^i P̂^i_{t-1} a^{iT} + b^i b^{iT}                                     (32.b)

JPDA* Step 2 - Gating:
Equivalent to CPDA step 2

JPDA* Step 3 - Evaluation of the Detection/Permutation Hypotheses, by Adopting (29) as Approximation:

    β_t(χ, χ̃) = λ^{(L_t - D(χ))} [∏_{i=1}^M f^i_t(χ, χ̃) (1 - P^i_d)^{(1-χ_i)} (P^i_d)^{χ_i}] / c_t     (33)

with f^i_t(χ, χ̃), ν^{ik}_t and Q^i_t satisfying (30.a,b,c).

JPDA* Step 4 - Track-Coalescence Hypothesis Pruning:
Equivalent to CPDA* step 4.

JPDA* Step 5 - Measurement-Based Update Equations for all i ∈ {1, ..., M}:

    x̂^i_t = x̄^i_t + W^i_t (Σ_{k=1}^{L_t} β̂^{ik}_t ν^{ik}_t)                        (34.a)

    P̂^i_t = P̄^i_t - W^i_t h^i P̄^i_t (Σ_{k=1}^{L_t} β̂^{ik}_t)
            + W^i_t (Σ_{k=1}^{L_t} β̂^{ik}_t ν^{ik}_t ν^{ikT}_t) W^{iT}_t
            - W^i_t (Σ_{k=1}^{L_t} β̂^{ik}_t ν^{ik}_t) (Σ_{k'=1}^{L_t} β̂^{ik'}_t ν^{ik'}_t)^T W^{iT}_t     (34.b)

with:

    W^i_t = P̄^i_t h^{iT} [Q^i_t]^{-1}                                             (34.c)
    β̂^{ik}_t = Σ_{χ≠0, ψ} β̂_t(χ, ψ) [Φ(χ)]^T_i [φ̂_t^T(χ, ψ) Φ(ψ)]_k                (34.d)

Note that the JPDA* filter simplifies to the well-known JPDA by omitting Step 4 and slightly simplifying the gating mechanism of Step 2. This also implies that the numerical complexity of JPDA* is similar to that of JPDA.

VI. Monte Carlo Simulations

In this section, the new filters (CPDA, JPDA* and CPDA*) are evaluated and compared with the existing filters (ENNPDA, JPDA and JPDAC). In case of a single target, all except the ENNPDA are equal to the ordinary PDA. Obviously, in case of a single target (M = 1) and no clutter (i.e., λ = 0), the ENNPDA is equivalent to the PDA filter as well. In case of multiple targets (i.e., M > 1), all six filters differ, unless λ = 0 or P^i_d = 1 for all i. If P^i_d = 1, then JPDAC equals CPDA if the same gating is being used, and if both P^i_d = 1 and λ = 0, then JPDA* equals ENNPDA if the same gating is being used.

In order to compare the performance of all filters in multiple target situations, Monte Carlo simulations have been performed. In order to simplify the comparisons for all filters, the same gating procedure is used (CPDA Step 2). The simulations we used are based on a simple crossing target scenario, similar to the one used by [6]: two targets modeled as constant velocity objects that move towards each other with a given relative velocity V_rel, cross at a certain moment in time and then move away from each other with the same relative velocity. Each simulation that starts with perfect estimates is run for 50 scans, with the crossover point at scan 10.


For each target, the underlying model of the potential target measurements is given by (1) and (3):

    x^i_{t+1} = a^i x^i_t + b^i w^i_t                                            (1)
    z^i_t = h^i x^i_t + g^i v^i_t                                                (3)

Furthermore, for i = 1, 2:

    a^i = [1  T_s; 0  1],   b^i = σ^i_a [T_s²/2; T_s],   h^i = [1  0],   g^i = σ^i_m

where σ^i_a represents the standard deviation of acceleration noise and σ^i_m represents the standard deviation of the measurement error. For simplicity we consider the situation of similar targets only; i.e., σ^i_a = σ_a, σ^i_m = σ_m, P^i_d = P_d. Following [6] and [27], we define the tracking index Λ ≜ T_s² σ_a / σ_m and the normalized relative velocity V^norm_rel = V_rel T_s / σ_m. With this, the scenario parameters are V^norm_rel, Λ, T_s, σ_m, λ, P_d and the gate size γ. Table I gives the scenario parameter values that are being used.

TABLE I
Scenario parameter values.

Scenario   P_d   λ       σ_m   Λ       γ    T_s   V^norm_rel
1          1     0       30    3 1/3   25   10    variable
2          0.9   0       30    3 1/3   25   10    variable
3          1     0.001   30    3 1/3   25   10    variable
4          0.9   0.001   30    3 1/3   25   10    variable
5          0.9   0.001   30    1       25   10    variable
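The scenario quantities above translate into code as follows; the default arguments simply restate Table I values and are otherwise assumptions made for illustration.

```python
import numpy as np

def scenario(Ts=10.0, sigma_m=30.0, Lambda=10.0 / 3.0, V_rel_norm=1.0):
    """Scenario quantities as defined above.

    The tracking index Lambda = Ts^2 * sigma_a / sigma_m fixes the acceleration-noise
    standard deviation sigma_a; V_rel_norm = V_rel * Ts / sigma_m fixes V_rel."""
    sigma_a = Lambda * sigma_m / Ts**2
    V_rel = V_rel_norm * sigma_m / Ts
    a = np.array([[1.0, Ts], [0.0, 1.0]])            # a^i
    b = sigma_a * np.array([[Ts**2 / 2.0], [Ts]])    # b^i
    h = np.array([[1.0, 0.0]])                       # h^i
    g = np.array([[sigma_m]])                        # g^i
    return a, b, h, g, V_rel
```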

During our simulations we counted track i "O.K." if

    |h^i x̂^i_T - h^i x^i_T| ≤ 9 σ_m

and we counted track i ≠ j "Swapped" if

    |h^i x̂^i_T - h^j x^j_T| ≤ 9 σ_m

Furthermore, two tracks i ≠ j are counted "Coalescing" at scan t, if

    |h^i x̂^i_t - h^j x̂^j_t| ≤ σ_m   and   |h^i x^i_t - h^j x^j_t| > σ_m

For each of the scenarios in Table I, Monte Carlo simulations containing 100 runs of the crossing trajectories have been performed for each of the tracking filters. To make the comparisons more meaningful, for all tracking mechanisms the same random number streams were used. To rule out possible CPDA covariance matrix singularities, such as reported by [24], the simulations were performed in double precision and it was verified that the ratio between largest and smallest eigenvalues of the covariance matrices stayed low enough.

For scenario 1, results similar to [6] were obtained. Results of the Monte Carlo simulations for the other scenarios are depicted as function of the normalized relative velocities in three types of figures, showing respectively:

- The percentage of Both tracks "O.K." (figures 1a, 2a, 3a, 4a).
- The percentage of Both tracks "O.K." or "Swapped" (figures 1b, 2b, 3b, 4b).
- The average number of "Coalescing" scans (figures 5a, 5b).

Fig. 1. Simulation results for scenario 2, with P_d = 0.9, λ = 0 and Λ = 3 1/3. (1a: Both tracks "O.K." percentage; 1b: Both tracks "O.K." or "Swapped" percentage.)

Fig. 2. Simulation results for scenario 3, with P_d = 1, λ = 0.001 and Λ = 3 1/3. (2a: Both tracks "O.K." percentage; 2b: Both tracks "O.K." or "Swapped" percentage.)

Fig. 3. Simulation results for scenario 4, with P_d = 0.9, λ = 0.001 and Λ = 3 1/3. (3a: Both tracks "O.K." percentage; 3b: Both tracks "O.K." or "Swapped" percentage.)

For the scenarios considered, the simulation results show a superior performance of the JPDA* and CPDA* filters above the other filters. They also show that CPDA performs less well than JPDAC, which in its turn may be competitive with JPDA. Scenario 4 results show that there are also cases in which none of the algorithms considered perform really satisfactorily, although JPDA* and CPDA* perform best. In addition to this, more detailed observations have been made for the following comparisons:

- JPDA outperforms CPDA,
- JPDAC outperforms CPDA if P_d < 1,
- JPDA* outperforms JPDA and JPDAC,
- JPDA* outperforms ENNPDA,
- CPDA* may perform marginally better than JPDA*.

JPDA outperforms CPDA

CPDA appears to be a case where the optimal Gaussian approximation of the exact Bayesian filter equations for the conditional density leads to worse performance than a non-optimal Gaussian approximation of JPDA. The explanation for this phenomenon is that the conditional density for the joint state of slowly crossing targets has a multimodality of a particular form: apart from the target identity the joint state is almost known. For slowly crossing targets this may result into a strong coupling between the two tracks through the cross-covariance terms, which supports CPDA's strong preference for keeping both tracks in between competing measurements. Because of JPDA's negligence of these cross-covariance terms, the latter effect is less strong for JPDA. This explains why CPDA degrades in performance at significantly higher relative velocities than JPDA.

JPDAC outperforms CPDA if P_d < 1

If P_d = 1, JPDAC and CPDA perform equally. For P_d = 0.9, however, JPDAC outperforms CPDA. The explanation is that because of the approximation adopted, JPDAC's filter gain (26) varies significantly less than CPDA's filter gain (20.b) if P_d < 1. As a consequence, JPDAC tends to neglect cross-covariance terms during the measurement-based track update, in case of missed detections. As such JPDAC becomes competitive with JPDA if P_d < 1.

JPDA* outperforms JPDA and JPDAC

For the examples considered, JPDA* clearly retains JPDA's insensitivity to clutter and missed detections for the full range of relative velocities. At the same time, JPDA* clearly outperforms JPDA and JPDAC when the relative velocities become small enough.


Fig. 4. Simulation results for scenario 5, with P_d = 0.9, λ = 0.001 and Λ = 1. (4a: Both tracks "O.K." percentage; 4b: Both tracks "O.K." or "Swapped" percentage.)

Fig. 5. Typical results in terms of average number of "Coalescing" scans. (5a: Scenario 2, P_d = 0.9, λ = 0 and Λ = 3 1/3; 5b: Scenario 4, P_d = 0.9, λ = 0.001 and Λ = 3 1/3.)

This indeed appears to be caused by JPDA*'s avoidance of track coalescence. Instead of coalescing tracks, JPDA* either performs O.K. or may swap tracks. For a better understanding of this difference, consider the situation λ = 0 and P_d = 1 (scenario 1). Then at each scan only two hypotheses exist with non-zero probability. JPDA and JPDAC use both hypotheses for track updating, while JPDA* uses the most likely hypothesis only for track updating. If the probabilities of the two hypotheses become almost the same and the measurements are clearly separated (which is very likely at low relative velocities), then JPDA and JPDAC tend to update both tracks somewhere in between the two measurements, and JPDA* updates the tracks at separate positions as indicated by the two separated measurements. In this simple example, JPDA and JPDAC tend to coalesce both tracks, and JPDA* has a probability of about 50% of both tracks "O.K.", about 50% to swap both tracks, and no track coalescence or track loss. Similar findings also apply to scenarios 2-5.

JPDA* outperforms ENNPDA

If P_d = 1 and λ = 0, then JPDA* and ENNPDA perform equally well. For λ = 0.001, however, JPDA* performs significantly better than ENNPDA. Track coalescence is not observed. If the detection probability P_d < 1 or λ > 0 then JPDA* clearly outperforms ENNPDA. The differences also appear when the relative velocities are large. This simply illustrates that JPDA* indeed avoids ENNPDA's sensitivity to clutter and missed detections.

CPDA* may perform marginally better than JPDA*

CPDA* and JPDA* hardly show difference in performance. For small relative velocities only, CPDA* may perform only marginally better than JPDA*. Thus in this case, memorizing cross-covariance between crossing tracks does neither lead to a worse performance nor to a really improved performance. It means that because of the track-coalescence-avoiding hypothesis pruning method of Section V, the practical use of memorizing cross-covariance between crossing tracks seems to disappear.
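For reference, the counting rules used in this section ("O.K.", "Swapped", "Coalescing") can be sketched as follows for the two-target case; array shapes and names are assumptions.

```python
import numpy as np

def track_status(x_hat, x_true, h, sigma_m):
    """Classify two tracks at the final scan using the criteria of Section VI:
    'O.K.' and 'Swapped' compare estimated and true positions within 9*sigma_m."""
    pos_hat = [float(h @ x_hat[i]) for i in range(2)]
    pos_true = [float(h @ x_true[i]) for i in range(2)]
    ok = [abs(pos_hat[i] - pos_true[i]) <= 9 * sigma_m for i in range(2)]
    swapped = [abs(pos_hat[i] - pos_true[1 - i]) <= 9 * sigma_m for i in range(2)]
    return ok, swapped

def coalescing(x_hat, x_true, h, sigma_m):
    """Two tracks are counted 'Coalescing' at a scan when the estimates are within
    sigma_m of each other while the true positions are more than sigma_m apart."""
    d_hat = abs(float(h @ x_hat[0]) - float(h @ x_hat[1]))
    d_true = abs(float(h @ x_true[0]) - float(h @ x_true[1]))
    return d_hat <= sigma_m and d_true > sigma_m
```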


VII. Concluding Remarks

In this paper, new directions in PDA development have been explored. First, in sections II and III, the multitarget tracking problem has been embedded into a problem of filtering for a linear descriptor system with stochastic i.i.d. coefficients. Subsequently, in section IV, the exact Bayesian and Gaussian approximated filter equations have been developed. The resulting filter algorithm has been named CPDA, and appeared to differ significantly from the JPDA filter and, if detection probability is not unity, also from the more recent JPDAC filter. Then, in section V, a hypothesis pruning strategy has been developed that allows both to avoid JPDA's sensitivity to track coalescence, and to preserve JPDA's insensitivity to clutter and missed detections. Application of this new pruning strategy to JPDA and CPDA resulted into two new algorithms that have been named JPDA* and CPDA*, respectively. Finally, in section VI, the new developments have been evaluated through Monte Carlo simulations for some characteristic multitarget tracking scenarios. Both JPDA and JPDAC appeared to outperform CPDA. In their turn, however, JPDA* and CPDA* appeared to outperform them all, simply due to their ENNPDA-inherited insensitivity to track coalescence. Moreover, JPDA* and CPDA* appeared to perform similarly.

On the basis of the results obtained, it is also possible to draw some more general conclusions. First, the commonly established approach of approximating exact Bayesian filter equations, by assuming a centred Gaussian approximation for the conditional density, appears to lead to a filter (CPDA) that performs less well than those based on other Gaussian approximations (i.e. JPDA, JPDAC, JPDA* and CPDA*). In order to identify a logical explanation for this phenomenon, we notice that the conditional density of the targets' joint state has a particular multi-modality: in addition to the local optimum for the non-swapped tracks, often other local optima exist for track swap possibilities. The approach of centering a Gaussian optimally (in MMSE sense) between these local optima implies a preference for track coalescence over track swap. For tracking applications, however, it is better to accurately know the target locations while being uncertain about the track identities because of possible track swap, than to know the identities and be uncertain about the target locations because of possible track coalescence; thus, track swap is preferred over track coalescence. This preference implies that a straightforward application of an MMSE sense optimality criterion is not so practical when it comes to tracking closely spaced targets. Obviously, ENNPDA, JPDA* and CPDA* avoid track coalescence through centering a Gaussian density around one of the local optima (which need not be the global optimum), which appears to be so effective that the need for remembering the coupling between tracks practically disappears. The optimal centering of JPDA, JPDAC and CPDA in MMSE sense simply prefers track coalescence over track swap, which is a less good choice from a practical tracking point of view.

Having identified the elementary practical problem that comes with adopting an MMSE optimality criterion, it is interesting to see what this practically means for Kastella's MLE, for Kamen's SME, and for Pao's MTMR:

- For the SME approach of Kamen, it is clear that due to the one-to-one SME transformation of the measurement equation [19]-[21], the data association problem seems totally avoided. Since this SME transformation is one-to-one and it does not change the targets' joint state space, it has no influence at all on the exact conditional density. In addition to its transformation, the SME approach uses the EKF approach towards approximately evaluating the conditional density. Since an EKF is based on optimality in MMSE sense, one may expect that Kamen's SME filter will prefer track coalescence above track swap, similarly to JPDA. The latter agrees well with the findings in [20] that an SME filter performs similarly to a JPDA filter, thus preferring track coalescence over track swap.

- Kastella's MLE approach tries to center a Gaussian density around the global optimum. As such, we should expect that MLE will neither swap nor coalesce tracks as long as the local optima, representing track swap possibilities, stay sufficiently apart. During the period that those local optima do not stay sufficiently apart, however, the MLE approach also prefers track coalescence over track swap. The memorization during such a period of the coupling (non-zero cross-covariance terms) between individual target tracks, however, allows Kastella's MLE approach to react very effectively as soon as targets split. These considerations correspond with the practically sound results reported for Kastella's MLE approach [25].

- With the MTMR approach [12], the conditional density is approximated by a sum of Gaussian densities, the covariances of which are not coupled, however. From a theoretical point of view, the latter seems an omission. On the basis of our experience that CPDA* performs almost equally well as JPDA*, however, we may expect that as long as MTMR retains a Gaussian for each individual target and each relevant track swap possibility, then MTMR's performance should not suffer from neglecting the coupling between individual Gaussians. When, however, the targets stay sufficiently long and close enough to each other, then the latter condition is not satisfied and MTMR might tend to centre for each target a single Gaussian in between the local optima, which implies track coalescence. Thus, when targets cross at low relative velocities, Pao's MTMR is expected to perform similarly to JPDA.

Finally, the newly developed embedding of multitarget tracking into filtering for a linear descriptor system with stochastic coefficients has shown to be of value in the derivation of exact Bayesian and Gaussian approximate multitarget tracking equations. This embedding for example enabled us to avoid the heuristic reasoning such as necessary at the time of JPDAC development. As such, this new embedding formulates multitarget tracking problems within the modeling framework of nonlinear filtering theory, which eventually may lead to the exploration of further


developments in PDA, e.g., incorporating the crucial plot resolution problem [28], [29]. Similarly, the new representation might support developments in effectively combining multi-target probabilistic data association approaches with other Bayesian solutions, e.g. with IMM [16], [30], [31], or with imaging sensor models [15], [32]. Presently, at NLR good progress is made in effectively incorporating IMM, a plot resolution model, and imaging sensor models, the results of which have already been added to those of [31] during the realization of Eurocontrol's multitarget multisensor tracking system ARTAS.

APPENDIX A
Proof of Proposition 1

If χ = 0 we get p_{x_t|χ_t,χ̃_t,Y_t}(x | 0, χ̃) = p_{x_t|Y_{t-1}}(x). Else

    p_{x_t|χ_t,χ̃_t,Y_t}(x | χ, χ̃) =
    = p_{x_t|χ_t,χ̃_t,y_t,L_t,Y_{t-1}}(x | χ, χ̃, y_t, L_t) =
    = p_{x_t|χ_t,χ̃_t,y_t,L_t,z̃_t,Y_{t-1}}(x | χ, χ̃, y_t, L_t, (χ̃ ⊗ I_m) y_t) =
    = p_{x_t|χ_t,z̃_t,Y_{t-1}}(x | χ, (χ̃ ⊗ I_m) y_t) =
    = p_{z̃_t|x_t,χ_t}((χ̃ ⊗ I_m) y_t | x, χ) p_{x_t|Y_{t-1}}(x) / F_t(χ, χ̃)

with F_t(χ, χ̃) ≜ p_{z̃_t|χ_t,Y_{t-1}}((χ̃ ⊗ I_m) y_t | χ). Subsequently

    β_t(χ, χ̃) ≜ Prob{χ_t = χ, χ̃_t = χ̃ | Y_t} =
    = p_{χ_t,χ̃_t|Y_t}(χ, χ̃) =
    = p_{χ_t,χ̃_t|y_t,L_t,Y_{t-1}}(χ, χ̃ | y_t, L_t) =
    = p_{y_t,χ̃_t|χ_t,L_t,Y_{t-1}}(y_t, χ̃ | χ, L_t) p_{χ_t|L_t,Y_{t-1}}(χ | L_t) / c'_t

If D_t > 0 we have

    χ̃_t^T χ̃_t = Φ^T(ψ_t) φ_t φ_t^T Φ(ψ_t) = Φ^T(ψ_t) Φ(ψ_t) = Diag{ψ_t}
    χ̃_t Φ(ψ_t)^T = φ_t^T Φ(ψ_t) Φ(ψ_t)^T = φ_t^T

which means that the transformation from (ψ_t, φ_t) into χ̃_t has an inverse, which implies

    p_{y_t,χ̃_t|χ_t,L_t,Y_{t-1}}(y_t, φ^T Φ(ψ) | χ, L_t) = p_{y_t,ψ_t,φ_t|χ_t,L_t,Y_{t-1}}(y_t, ψ, φ | χ, L_t)

Furthermore, because the transformation from (y_t, ψ_t, φ_t) into (z̃_t, v*_t, ψ_t, φ_t) is a permutation, we get for L_t > D(χ) > 0

    p_{y_t,ψ_t,φ_t|χ_t,L_t,Y_{t-1}}(y_t, ψ, φ | χ, L_t) =
    = p_{z̃_t,v*_t,ψ_t,φ_t|χ_t,L_t,Y_{t-1}}(φ̃^T Φ̃(ψ) y_t, Φ̃(ψ^c) y_t, ψ, φ | χ, L_t)

Hence, for L_t > D(χ) > 0, β_t satisfies:

    β_t(χ, φ^T Φ(ψ)) = F_t(χ, φ^T Φ(ψ)) p_{v*_t|χ_t,L_t}(Φ̃(ψ^c) y_t | χ, L_t) p_{ψ_t|χ_t,L_t}(ψ | χ, L_t) p_{φ_t|χ_t}(φ | χ) p_{L_t|χ_t}(L_t | χ) p_{χ_t}(χ) / c''_t

Subsequently using the JPDA derivation [1] yields:

    β_t(χ, φ^T Φ(ψ)) = F_t(χ, φ^T Φ(ψ)) λ^{(L_t - D(χ))} [∏_{i=1}^M (P^i_d)^{χ_i} (1 - P^i_d)^{(1-χ_i)}] / c_t

with c_t a normalizing constant. It can be easily verified that the last equation also holds true if L_t = D(χ) or D(χ) = 0.

APPENDIX B
Proof of Theorem 2

Because P̄_t is block-diagonal, (H P̄_t H^T + G G^T) and Φ_t(χ)(H P̄_t H^T + G G^T)Φ_t(χ)^T are block-diagonal too. Hence, it can be shown that

    Φ_t(χ)^T [Φ_t(χ)(H P̄_t H^T + G G^T)Φ_t(χ)^T]^{-1} = Φ_t(χ)^T Φ_t(χ)(H P̄_t H^T + G G^T)^{-1} Φ_t(χ)^T

Because of the form of Φ(·), we know Φ(χ)^T Φ(χ) = Diag{χ_1, ..., χ_M}. Hence, the former simplifies to

    Φ_t(χ)^T [Φ_t(χ)(H P̄_t H^T + G G^T)Φ_t(χ)^T]^{-1} = (H P̄_t H^T + G G^T)^{-1} Φ_t(χ)^T

Because Q_t(χ) = Φ_t(χ)(H P̄_t H^T + G G^T)Φ_t(χ)^T is block-diagonal, we get

    Det{Q_t(χ)} = Det{Φ_t(χ)(H P̄_t H^T + G G^T)Φ_t(χ)^T} = ∏_{i=1}^M Det{h^i P̄^i_t h^{iT} + g^i g^{iT}}^{χ_i} = ∏_{i=1}^M Det{Q^i_t}^{χ_i}     (B.1)
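A quick numerical check of the determinant identity (B.1), using small assumed dimensions and random stand-ins for the per-target innovation covariances:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)
M, m = 3, 2
chi = np.array([1, 0, 1])                                 # example detection vector

# Per-target covariances Q^i_t (random SPD stand-ins for h^i Pbar^i h^iT + g^i g^iT).
Q = [np.cov(rng.standard_normal((m, 10))) + np.eye(m) for i in range(M)]
Q_joint = block_diag(*Q)                                  # block-diagonal H Pbar H^T + G G^T stand-in

# Phi_t(chi) = Phi(chi) kron I_m selects the detected targets' measurement blocks.
Phi = np.eye(M)[chi.astype(bool)]
Phi_t = np.kron(Phi, np.eye(m))
Q_chi = Phi_t @ Q_joint @ Phi_t.T

lhs = np.linalg.det(Q_chi)
rhs = np.prod([np.linalg.det(Q[i]) ** chi[i] for i in range(M)])
assert np.isclose(lhs, rhs)                               # identity (B.1)
```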


Furthermore,

    μ_t(χ, χ̃)^T Q_t(χ)^{-1} μ_t(χ, χ̃) =
    = μ_t(χ, χ̃)^T [Φ_t(χ)(H P̄_t H^T + G G^T)Φ_t(χ)^T]^{-1} μ_t(χ, χ̃) =
    = Σ_{i=1}^M [Φ_t(χ)^T μ_t(χ, χ̃)]^{iT} [h^i P̄^i_t h^{iT} + g^i g^{iT}]^{-1} [Φ_t(χ)^T μ_t(χ, χ̃)]^i =
    = Σ_{i=1}^M Σ_{k=1}^{L_t} Σ_{k'=1}^{L_t} [Φ(χ)]^T_i χ̃_k [Φ(χ)]^T_i χ̃_{k'} ν_t^{ikT} [Q^i_t]^{-1} ν^{ik'}_t =
    = Σ_{i=1}^M Σ_{k=1}^{L_t} [Φ(χ)]^T_i χ̃_k ν_t^{ikT} [Q^i_t]^{-1} ν^{ik}_t     (B.2)

where for the last but one equality use is made of

    [Φ_t(χ)^T μ_t(χ, χ̃)]^i = [Φ_t(χ)^T (χ̃ ⊗ I_m) y_t - Φ_t(χ)^T Φ_t(χ) H x̄_t]^i =
    = Σ_{k=1}^{L_t} [Φ(χ)]^T_i χ̃_k y^k_t - Σ_{k=1}^{L_t} [Φ(χ)]^T_i χ̃_k h^i x̄^i_t =
    = Σ_{k=1}^{L_t} [Φ(χ)]^T_i χ̃_k (y^k_t - h^i x̄^i_t) =
    = Σ_{k=1}^{L_t} [Φ(χ)]^T_i χ̃_k ν^{ik}_t

Substituting (B.1) and (B.2) into (18) yields

    F_t(χ, χ̃) = ∏_{i=1}^M f^i_t(χ, χ̃)                                            (B.3)

with f^i_t(χ, χ̃) as given by (30). Substituting (B.3) into (17) yields (29), and (31.a) and (31.b) follow as for JPDA [1].

Acknowledgments

The authors would like to thank the anonymous reviewers for their stimulating comments, which significantly helped in improving the paper.

References

[1] Y. Bar-Shalom and T.E. Fortmann, Tracking and Data Association, Academic Press, 1988.
[2] H.A.P. Blom and E.A. Bloem, Joint Probabilistic Data Association avoiding track coalescence, IEE Colloquium on Algorithms for Target Tracking, Savoy Place, London, 16 May 1995, IEE Digest No. 1995/104, pp. 1/1-1/3.
[3] E.A. Bloem and H.A.P. Blom, Joint Probabilistic Data Association methods avoiding track coalescence, Proc. 34th IEEE Conference on Decision and Control, 1995, pp. 2752-2757.
[4] T.E. Fortmann, Y. Bar-Shalom and M. Scheffe, Sonar tracking of multiple targets using Joint Probabilistic Data Association, IEEE J. of Oceanic Engineering, Vol. 8 (1983), pp. 173-183.
[5] V. Nagarajan, M.R. Chidambara and R.N. Sharma, Combinatorial problems in multitarget tracking data association problems, IEEE Tr. on Aerospace and Electronic Systems, Vol. 23 (1987), pp. 260-263.
[6] R.J. Fitzgerald, Development of practical PDA logic for multitarget tracking by microprocessor, Ed: Y. Bar-Shalom, Multitarget-multisensor tracking: advanced applications, Artech House, 1990, pp. 1-23.
[7] B. Zhou and N.K. Bose, Multitarget tracking in clutter: fast algorithms for data association, IEEE Tr. on Aerospace and Electronic Systems, Vol. 29 (1993), pp. 352-363.
[8] J.A. Roecker and G.L. Phillis, Suboptimal Joint Probabilistic Data Association, IEEE Tr. on Aerospace and Electronic Systems, Vol. 29 (1993), pp. 510-517.
[9] J.A. Roecker, A class of near optimal JPDA algorithms, IEEE Tr. on Aerospace and Electronic Systems, Vol. 30 (1994), pp. 504-510.
[10] X.R. Li and Y. Bar-Shalom, Tracking in clutter with Nearest Neighbour filters: analysis and performance, IEEE Tr. on Aerospace and Electronic Systems, Vol. 32 (1996), pp. 995-1009.
[11] D.J. Salmond, Mixture reduction algorithms for target tracking in clutter, SPIE Proc. Signal and Data Processing of Small Targets, 1990, Vol. 1305, pp. 434-445.
[12] L.Y. Pao, Multisensor Multitarget Mixture Reduction algorithms for tracking, J. of Guidance, Control and Dynamics, Vol. 17 (1994), pp. 1205-1211.
[13] S. Blake and S.C. Watts, A Multitarget Track-While-Scan Filter, Proc. IEE Radar 87 Conf., London, October 1987.
[14] Y. Bar-Shalom and X.R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, 1995 (ISSN 0895-9110).
[15] H.M. Shertukde and Y. Bar-Shalom, Tracking crossing targets with imaging sensors, IEEE Tr. on Aerospace and Electronic Systems, Vol. 27 (1991), pp. 582-592.
[16] Y. Bar-Shalom, K.C. Chang and H.A.P. Blom, Tracking splitting targets in clutter by using an Interacting Multiple Model Joint Probabilistic Data Association filter, Ed: Y. Bar-Shalom, Multitarget-multisensor tracking: applications and advances, Vol. II, Artech House, 1992, pp. 93-110.
[17] E.W. Kamen, Multiple target tracking based on Symmetric Measurement Equations, Proc. 1989 American Control Conf., pp. 263-268.
[18] E.W. Kamen, Multiple target tracking based on Symmetric Measurement Equations, IEEE Tr. on Automatic Control, Vol. 37 (1992), pp. 371-374.
[19] E.W. Kamen and C.R. Sastry, Multiple target tracking using products of position measurements, IEEE Tr. on Aerospace and Electronic Systems, Vol. 29 (1993), pp. 476-493.
[20] Y.J. Lee and E.W. Kamen, The SME filter approach to multiple target tracking with false and missing measurements, SPIE Proc. Signal and Data Processing of Small Targets, 1993, Vol. 1305, pp. 574-585.
[21] E.W. Kamen, Y.J. Lee and C.R. Sastry, A parallel SME filter for tracking multiple targets in three dimensions, SPIE Proc. Signal and Data Processing of Small Targets, 1994, pp. 417-428.
[22] F. Daum, A Cramer-Rao bound for multiple target tracking, SPIE Proc. Signal and Data Processing of Small Targets, Vol. 1481, 1991, pp. 341-344.
[23] K. Kastella, A maximum likelihood estimator for report-to-track association, SPIE Proc. Signal and Data Processing of Small Targets, Vol. 1954, 1993, pp. 386-393.
[24] K. Kastella, Event-Averaged Maximum Likelihood Estimation and Mean-Field Theory in Multitarget Tracking, IEEE Tr. on Automatic Control, Vol. 40 (1995), pp. 1070-1074.
[25] K. Kastella and C. Lutes, Comparison of mean-field tracker and joint probabilistic data association tracker in high-clutter environments, SPIE Proc. Signal and Data Processing of Small Targets, 1995, SPIE Vol. 2561, pp. 489-495.
[26] L. Dai, Singular control systems, Lecture Notes in Control and Information Sciences, Vol. 118, Springer, 1989.
[27] P.R. Kalata, The Tracking Index: A Generalized Parameter for α-β and α-β-γ Target Trackers, IEEE Tr. on Aerospace and Electronic Systems, Vol. 20 (1984), pp. 174-182.
[28] F.E. Daum, A system approach to multiple target tracking, Ed: Y. Bar-Shalom, Multitarget-multisensor tracking: applications and advances, Vol. II, Artech House, 1992, pp. 149-181.
[29] F.E. Daum and R.J. Fitzgerald, The importance of resolution in multiple target tracking, SPIE Proc. Signal and Data Processing of Small Targets, 1994, pp. 329-338.


[30] H.A.P. Blom and Y. Bar-Shalom, The Interacting Multiple Model algorithm for systems with Markovian switching coefficients, IEEE Tr. on Automatic Control, Vol. 33 (1988), pp. 780-783.
[31] H.A.P. Blom, R.A. Hogendoorn and B.A. van Doorn, Design of a multisensor tracking system for advanced Air Traffic Control, Ed: Y. Bar-Shalom, Multitarget-multisensor tracking: applications and advances, Vol. II, Artech House, 1992, pp. 31-63.
[32] D.D. Sworder, J.E. Boyd and G.A. Clapp, Image fusion for tracking manoeuvering targets, Int. J. of Systems Science, Vol. 28 (1997), pp. 1-14.