IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 41, NO. 5, MAY 1996


New Square-Root Smoothing Algorithms

PooGyeon Park and Thomas Kailath

Abstract: This paper presents new square-root smoothing algorithms for the three best-known smoothing formulas: 1) the Rauch-Tung-Striebel (RTS) formulas; 2) the Desai-Weinert-Yusypchuk (DWY) formulas, also called the backward RTS formulas; and 3) the Mayne-Fraser (MF) formulas, also called the two-filter formulas. The main feature of the new algorithms is that they use unitary rotations to replace all matrix inversion and backsubstitution steps common in earlier algorithms; this enables more efficient systolic-array and parallel implementations and leads to algorithms with better numerical stability and conditioning properties.

I. INTRODUCTION

Square-root (or factorized, as they are sometimes called) algorithms for state-space estimation have been found to have several advantages over the conventional equation-based algorithms in terms of numerical stability, conditioning, and amenability to parallel and systolic implementation. While such algorithms for prediction and filtering have by now been studied quite extensively (see, e.g., [1]-[8]), the picture is not quite as complete for smoothing.

In the literature, there are two classes of square-root smoothing algorithms, both based on using quantities propagated by the square-root information filter (SRIF) algorithm presented by Dyer and McReynolds in 1969 [4]. In 1971, Kaminski [9] proposed the square-root information smoother (SRIS), of which Bierman in 1983 [10] gave a so-called UD (free of arithmetic square roots) version. The SRIF and SRIS propagate the square root of the inverse of the filtering and smoothing error covariances, respectively, hence the name "information" form. In 1974, Bierman [11] proposed propagating the smoothing error covariance itself, using certain outputs from the SRIF to provide the coefficients of certain smoothing error covariance recursions. He called this the DMCS (Dyer-McReynolds Covariance Smoothing)-SRIF algorithm. A UD version of the DMCS-SRIF was given by Watanabe and Tzafestas [12]; see also McReynolds [13]. Watanabe [14] also gave a square-root form of the smoothing formulas of Desai-Weinert-Yusypchuk (DWY) [15], while Dobbins [16] derived a square-root version of the Mayne-Fraser (MF) (or two-filter) formulas. These square-root algorithms have various advantages and disadvantages. However, all of them require certain matrix inversion and/or backsubstitution steps and, thus, none of them is particularly well-suited for parallel implementation.

Recently, we presented in [17] a new square-root smoothing algorithm for the Bryson-Frazier (BF) formulas [18] (1963) that employs unitary rotations instead of matrix inversion and backsubstitution steps, thus simultaneously improving numerical stability and conditioning and also making parallel and systolic implementation easier; see, e.g., the discussion of these issues in [19] and [20]. There are essentially three more best-known smoothing formulas: those of Rauch-Tung-Striebel (RTS) [21] (1965), DWY [15] (1983), and Mayne [22] (1966) and Fraser [23] (1967). In this paper, we shall provide their square-root versions.

Manuscript received May 20, 1994; revised March 17, 1995 and November 3, 1995. This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and was monitored by the Air Force Office of Scientific Research under Grant F49620-93-1-0085.
The authors are with the Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA 94305 USA.
Publisher Item Identifier S 0018-9286(96)02824-3.
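Before the formal development, the array idea that runs through the paper can be made concrete with a small numerical sketch (ours, not from the paper; all matrices below are hypothetical stand-ins). A single orthogonal triangularization propagates a Cholesky factor of the covariance through the time update P_{i+1} = F_i P_i F_i^* + G_i Q_i G_i^*, with no inversion and without ever forming P_{i+1} by explicit products:

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 4, 2
F = rng.standard_normal((n, n))
G = rng.standard_normal((n, q))
P, Q = np.eye(n), np.eye(q)

# Pre-array [F P^{1/2}  G Q^{1/2}]: one orthogonal rotation (obtained here
# from a QR factorization of the transpose) lower-triangularizes it, and the
# resulting triangle is a square-root factor of P_{i+1}.
S, Sq = np.linalg.cholesky(P), np.linalg.cholesky(Q)
pre = np.hstack([F @ S, G @ Sq])            # n x (n+q) pre-array
_, r = np.linalg.qr(pre.T)                  # pre.T = (orthogonal) @ r
S_next = r.T                                # = pre @ Theta, lower triangular

# Conventional covariance-level recursion, for comparison only.
P_next = F @ P @ F.T + G @ Q @ G.T
assert np.allclose(S_next @ S_next.T, P_next)
```

Working with the factor roughly halves the exponent of the condition number (for any S with P = SS^*, κ(S) = κ(P)^{1/2} in the 2-norm), which is the conditioning advantage referred to above.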

An interesting conclusion from our results is that the apparently most computationally intensive traditional algorithm, the two-filter solution of Mayne (1966) and Fraser (1967), has the conceptually least complex square-root form among the four square-root smoothing algorithms.

State-Space Model: x_{i+1} = F_i x_i + G_i u_i and y_i = H_i x_i + v_i for i >= 0, where {x_0, u_i, v_i} are zero-mean white Gaussian random variables uncorrelated with each other and E(u_i u_i^*) = Q_i > 0, E(v_i v_i^*) = R_i > 0. We define

x̂_{i|j} ≜ the linear least-squares estimate of x_i given {y_0, ..., y_j}
P_{i|j} ≜ E[x_i - x̂_{i|j}][x_i - x̂_{i|j}]^*

the error covariance of the estimate x̂_{i|j}. When j = i - 1, x̂_{i|i-1} is called the one-step predicted estimate, while when j = i, x̂_{i|i} is called the filtered estimate. When j > i, we have smoothed estimates. A corresponding terminology will be used for the error-covariance matrices. For compactness we shall write x̂_{i|i-1} ≜ x̂_i and P_{i|i-1} ≜ P_i unless amplification is necessary for emphasis or comparison.

One-Step Predicted Estimates: These obey

x̂_{i+1} = F_{p,i} x̂_i + K_{p,i} y_i,  x̂_0 = 0
P_{i+1} = F_i P_i F_i^* + G_i Q_i G_i^* - K_{p,i} R_{e,i} K_{p,i}^*,  P_0 = Π_0

where

F_{p,i} ≜ F_i - K_{p,i} H_i,  K_{p,i} ≜ F_i K_i,  K_i ≜ P_i H_i^* R_{e,i}^{-1},  R_{e,i} ≜ R_i + H_i P_i H_i^*.

Over the years, several square-root algorithms for this formula have been obtained [24], [25]. In [26], we showed that various previous results on covariance-form and information-form square-root algorithms could be usefully written in "combined" forms. For easy understanding, we refer to [26].

The fixed-interval smoothing problem is to find {x̂_{i|N}}_{i=0:N} given the data {y_i}_{i=0:N}. In the following sections, we shall provide new square-root versions of the RTS, DWY, and MF smoothing formulas, which will be obtained by appropriate modification of the arrays in [26, Algorithm III.5]. For convenience, we shall use the following notation:

Q̄_i ≜ Q_i - Q_i G_i^* P_{i+1}^{-1} G_i Q_i
K̄_{b,i} ≜ P_{i+1}^{-1} G_i Q_i Q̄_i^{-*/2}
F_{s,i} ≜ P_i F_{p,i}^* P_{i+1}^{-1} = F_i^{-1}(I - G_i Q_i G_i^* P_{i+1}^{-1}) = F_i^{-1}(I - G_i Q̄_i^{1/2} K̄_{b,i}^*).

II. (FORWARDS) RTS SMOOTHING FORMULAS

The RTS smoothing formulas introduced by Rauch, Tung, and Striebel [21] in 1965 generate x̂_{i|N} from a backward recursion using the x̂_i. There are two cases.

Case 1 (singular F_i):

x̂_{i|N} = x̂_i + P_i H_i^* R_{e,i}^{-1} e_i + F_{s,i}(x̂_{i+1|N} - x̂_{i+1})    (1)
P_{i|N} = P_i - P_i H_i^* R_{e,i}^{-1} H_i P_i - F_{s,i}(P_{i+1} - P_{i+1|N}) F_{s,i}^*.    (2)

Case 2 (nonsingular F_i):

x̂_{i|N} = F_{s,i} x̂_{i+1|N} + F_i^{-1} G_i Q_i G_i^* P_{i+1}^{-1} x̂_{i+1}    (3)
P_{i|N} = F_{s,i} P_{i+1|N} F_{s,i}^* + F_i^{-1} G_i Q̄_i G_i^* F_i^{-*}.

Here, x̂_{N+1|N} = x̂_{N+1} and P_{N+1|N} = P_{N+1}.

In Case 1, the square-root version of the RTS smoothing formulas is not very different from that for the BF smoothing formulas in [17, Proposition III.1]. Multiplying the left side of (1) by P_i^{-1/2} yields the following backward recursion for α_i ≜ P_i^{-1/2} x̂_{i|N}:

α_i = P_i^{-1/2} x̂_i + (P_i^{*/2} H_i^* R_{e,i}^{-*/2})(R_{e,i}^{-1/2} e_i) + (P_i^{*/2} F_{p,i}^* P_{i+1}^{-*/2})(α_{i+1} - P_{i+1}^{-1/2} x̂_{i+1})    (4)

and multiplying both sides of (2) by P_i^{-1/2} and (P_i^{-1/2})^* yields the following backward recursion for Λ_i ≜ P_i^{-1/2} P_{i|N} P_i^{-*/2}:

Λ_i = I - (P_i^{*/2} H_i^* R_{e,i}^{-*/2})(P_i^{*/2} H_i^* R_{e,i}^{-*/2})^* - (P_i^{*/2} F_{p,i}^* P_{i+1}^{-*/2})(I - Λ_{i+1})(P_i^{*/2} F_{p,i}^* P_{i+1}^{-*/2})^*.    (5)

Recall Step 1 in [17, Proposition III.1]: with (P_0^{-1/2} x̂_0) = 0 and (P_0^{1/2}) = Π_0^{1/2}, propagate {P_i^{1/2}} and {P_i^{-1/2} x̂_i} using the forward recursion

[ R_i^{1/2}           H_i P_i^{1/2}        0             ]         [ R_{e,i}^{1/2}          0                          0   ]
[ 0                   F_i P_i^{1/2}        G_i Q_i^{1/2} ] Θ_i  =  [ K_{p,i} R_{e,i}^{1/2}  P_{i+1}^{1/2}              0   ]
[ -y_i^* R_i^{-*/2}   x̂_i^* P_i^{-*/2}     0             ]         [ -e_i^* R_{e,i}^{-*/2}  x̂_{i+1}^* P_{i+1}^{-*/2}   (*) ]

where Θ_i is any unitary operator that lower-triangularizes the first and second (block) rows of the pre-array and "(*)" indicates the redundant entries. Using the fact that this Step 1 provides (P_i^{*/2} H_i^* R_{e,i}^{-*/2}), (P_i^{*/2} F_{p,i}^* P_{i+1}^{-*/2}), (R_{e,i}^{-1/2} e_i), (P_i^{1/2}), and (P_i^{-1/2} x̂_i), we can obtain a corresponding square-root version.

Algorithm II.1 (Square-Root RTS, Case 1):
Step 1: Same as Step 1 in [17, Proposition III.1].
Step 2: With α_{N+1} = (P_{N+1}^{-1/2} x̂_{N+1}) and Λ_{N+1} = I, propagate (P_i^{-1/2} x̂_{i|N}) and (P_i^{-1/2} P_{i|N} P_i^{-*/2}) using the backward recursions (4) and (5).
Step 3: Calculate the smoothed estimates and their error covariances via

x̂_{i|N} = (P_i^{1/2}) α_i  and  P_{i|N} = (P_i^{1/2}) Λ_i (P_i^{*/2}).

This algorithm has characteristics similar to those of [17, Proposition III.1]: the error covariance P_{i|N} might not be positive semidefinite because of round-off errors, and, during the procedure, we must save (P_i^{*/2} H_i^* R_{e,i}^{-*/2}), (P_i^{*/2} F_{p,i}^* P_{i+1}^{-*/2}), (R_{e,i}^{-1/2} e_i), (P_i^{1/2}), and (P_i^{-1/2} x̂_i), which require memory on the order of (N+1)np, (N+1)n^2, (N+1)p, (N+1)n(n+1)/2, and (N+1)n, respectively. The major costs arise from the (P_i^{*/2} F_{p,i}^* P_{i+1}^{-*/2}) and (P_i^{1/2}).
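The following numpy sketch (ours; real arithmetic, Cholesky factors as square roots, random stand-in data) illustrates Algorithm II.1 end to end. Where the paper reads quantities such as (P_i^{*/2} H_i^* R_{e,i}^{-*/2}) and (P_i^{*/2} F_{p,i}^* P_{i+1}^{-*/2}) directly off its arrays, the sketch forms them with triangular solves for brevity; the helper `rotate` and all dimensions are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 3, 2, 2                          # state, output, and input dimensions
F = 0.5 * rng.standard_normal((n, n))
G = rng.standard_normal((n, m))
H = rng.standard_normal((p, n))
Q, R, Pi0 = np.eye(m), np.eye(p), np.eye(n)
N = 15
y = rng.standard_normal((N + 1, p))        # stand-in measurements

def rotate(pre, t):
    # Orthogonal Theta that lower-triangularizes pre[:t]; returns the first t
    # columns of pre @ Theta (the remaining columns are the redundant "(*)").
    qf, r = np.linalg.qr(pre[:t].T)
    d = np.sign(np.diag(r)); d[d == 0] = 1.0
    return pre @ (qf * d)

# Step 1: forward pass with the combined array; save normalized quantities.
S, xn = np.linalg.cholesky(Pi0), np.zeros(n)   # P_i^{1/2}, P_i^{-1/2} x_i
Rh, Qh = np.linalg.cholesky(R), np.linalg.cholesky(Q)
Ss, xns, ens, Phis, Psis = [], [], [], [], []
for i in range(N + 1):
    pre = np.zeros((p + n + 1, p + n + m))
    pre[:p, :p], pre[:p, p:p + n] = Rh, H @ S
    pre[p:p + n, p:p + n], pre[p:p + n, p + n:] = F @ S, G @ Qh
    pre[-1, :p] = -np.linalg.solve(Rh, y[i])       # -y_i^* R_i^{-*/2}
    pre[-1, p:p + n] = xn                          # x_i^* P_i^{-*/2}
    post = rotate(pre, p + n)
    Re_h = post[:p, :p]                            # R_{e,i}^{1/2}
    S1 = post[p:p + n, p:p + n]                    # P_{i+1}^{1/2}
    Kp = np.linalg.solve(Re_h.T, post[p:p + n, :p].T).T      # K_{p,i}
    Phis.append(np.linalg.solve(Re_h, H @ S).T)    # P^{*/2} H^* R_e^{-*/2}
    Psis.append(np.linalg.solve(S1, (F - Kp @ H) @ S).T)     # P^{*/2} F_p^* P_{i+1}^{-*/2}
    ens.append(-post[-1, :p])                      # R_{e,i}^{-1/2} e_i
    Ss.append(S); xns.append(xn)
    S, xn = S1, post[-1, p:p + n]                  # advance to time i+1
Ss.append(S); xns.append(xn)

# Step 2: backward recursions (4) and (5) for alpha_i and Lambda_i.
alpha, Lam = xns[N + 1], np.eye(n)
for i in range(N, -1, -1):
    alpha = xns[i] + Phis[i] @ ens[i] + Psis[i] @ (alpha - xns[i + 1])
    Lam = np.eye(n) - Phis[i] @ Phis[i].T - Psis[i] @ (np.eye(n) - Lam) @ Psis[i].T
    if i == N:  # sanity check: Lambda_N reproduces the filtered covariance
        Pn = Ss[N] @ Ss[N].T
        Pf = Pn - Pn @ H.T @ np.linalg.solve(R + H @ Pn @ H.T, H @ Pn)
        assert np.allclose(Ss[N] @ Lam @ Ss[N].T, Pf)

# Step 3: smoothed estimate and error covariance at i = 0.
x_sm, P_sm = Ss[0] @ alpha, Ss[0] @ Lam @ Ss[0].T
print(x_sm, np.linalg.eigvalsh(P_sm))
```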

However, if the F_i are nonsingular, we can significantly reduce the amount of memory and computation, as we did for the BF smoothing formulas. Thus for (3), we shall use F_{s,i} as formed in [17, Proposition III.3] and then explain how to reduce the number of operations required to compute x̂_{i|N}.

From the entries of the time-update post-arrays in [17, Proposition III.3], we can find the terms (K̄_{b,i}^* x̂_{i+1}) and (Q̄_i^{1/2}). This suggests that instead of computing x̂_i itself, we use the entries of the post-array to yield F_i^{-1} G_i Q_i G_i^* P_{i+1}^{-1} x̂_{i+1}. One more attractive feature is our ability to use the square-root algorithm to update the smoothing error covariances as well as the smoothed estimates. Consider the following array:

[ F_{s,i} P_{i+1|N}^{1/2}          F_i^{-1} G_i (Q̄_i^{1/2}) ]         [ X     0   ]
[ x̂_{i+1|N}^* P_{i+1|N}^{-*/2}     x̂_{i+1}^* K̄_{b,i}       ] Θ_i  =  [ α^*   (*) ]    (6)

where Θ_i is any unitary operator that zeros out the (1, 2) block of the pre-array, and where the (1, 1) block of the pre-array is F_{s,i} P_{i+1|N}^{1/2} = F_i^{-1}(I - G_i{(Q̄_i^{1/2})(K̄_{b,i}^*)})(P_{i+1|N}^{1/2}). From inner and cross products of the array rows, we obtain

P_{i|N} = F_{s,i} P_{i+1|N} F_{s,i}^* + F_i^{-1} G_i Q̄_i G_i^* F_i^{-*} = XX^*    (7)
x̂_{i|N} = (F_{s,i} P_{i+1|N}^{1/2})(P_{i+1|N}^{-1/2} x̂_{i+1|N}) + (F_i^{-1} G_i (Q̄_i^{1/2}))(K̄_{b,i}^* x̂_{i+1}) = Xα.    (8)

Therefore, we can identify that X = (P_{i|N}^{1/2}) and α = (P_{i|N}^{-1/2} x̂_{i|N}).

In summary, we have the following algorithm.

Algorithm II.2 (Square-Root RTS, Case 2): Assume that the F_i are nonsingular.
Step 1: (Same as Step 1 in [17, Proposition III.3].) Save (K̄_{b,i}), (Q̄_i^{1/2}), and (K̄_{b,i}^* x̂_{i+1}).
Step 2: With (P_{N+1|N}^{-1/2} x̂_{N+1|N}) = (P_{N+1}^{-1/2} x̂_{N+1}), propagate (P_{i|N}^{-1/2} x̂_{i|N}) using the backward recursions (6). Calculate the smoothed estimates and their error covariances using (7) and (8).

This algorithm has several other advantages over the previous algorithms. The first is that P_{i|N} is guaranteed to be positive definite. The second is that this algorithm requires relatively little memory, on the order of (N+1)np, (N+1)p^2, and (N+1)p, corresponding to (K̄_{b,i}), (Q̄_i^{1/2}), and (K̄_{b,i}^* x̂_{i+1}). Finally, there is less computation because we do not need to form x̂_i itself (by multiplying (P_i^{1/2}) and (P_i^{-1/2} x̂_i)).

Remark 1: Bierman derived a square-root version of the RTS smoothing formulas using the so-called UD covariance matrix factorization in [10]. The main goal was to express F_{s,i} as a product of elementary rank-1 type matrices. However, this procedure still required matrix inversion and backsubstitution steps.

Remark 2: McReynolds [13] modified Bierman's UD version of the RTS smoothing formulas by using a square-root array for constructing F_{s,i}, where the rotation is any unitary operator that zeros out the (1, 2) entry of the pre-array. This procedure requires inversion of (P_{i+1}^{1/2}).

Remark 3: Watanabe and Tzafestas [12] suggested a square-root version of the RTS smoothing formulas using a rank-2 UD covariance matrix factorization. Their approach was based on information-form counterparts of the above equations in which, without direct construction of F_{s,i}, they propagated (P_{i|N}^{-1}), (P_{i|N}^{-1} x̂_{i|N}), and (F_i^{-*} P_{i|N}^{-1} x̂_{i|N}) by using a rank-two UD information matrix factorization for the forward information filtering. Even though their method was not the most compact (see the modified version of Algorithm II.2 in the following remark), they successfully provided an inversion-free algorithm, except in a step connecting the forward information filtering to the backward-pass smoothing, which required the inverse of (P_{N+1|N}^{-1}).

Remark 4: If we allow this inversion step in the procedure, we can obtain a more compact form of the first step of Algorithm II.2 by using the SRIF blocks in the combined measurement- and time-update square-root filtering arrays in [26, Section IV]:

Step 1: Set P_0^{-1/2} x̂_0 = 0 and P_0^{-1/2} = Π_0^{-1/2}. Then propagate (P_i^{-1/2} x̂_i) and (P_i^{-1/2}) using a) the SRIF measurement-update array, where Θ_{i,1} is any unitary rotation that zeros out the (1, 1) entry of the pre-array, and b) the SRIF time-update array, where Θ_{i,2} is any unitary rotation that zeros out the (3, 1) entry of the pre-array.

To find the initial value of P_{N+1|N}^{1/2} in Step 2, we have to invert (P_{N+1}^{-1/2}). This modified algorithm in Remark 4 uses the SRIF form for both the measurement- and time-update steps, while [17, Proposition III.3] and Algorithm II.2 use the SRIF form and the square-root covariance filtering (SRCF) form for the measurement- and time-update steps, respectively. Therefore, the size of the arrays for the measurement updates in this algorithm is smaller than in [17, Proposition III.3] and Algorithm II.2, and the modified RTS algorithm then needs less computation than [17, Proposition III.3] and Algorithm II.2. However, it requires inversion of (P_{N+1}^{-1/2}).

Remark 5: The modified SRCF algorithm (Algorithm III.2 in [26]) was also found by Gaston and Irwin in 1989 [30].
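To make Step 2 of Algorithm II.2 concrete, here is a single-step numpy sketch (ours; real arithmetic, random symmetric positive-definite stand-ins for the filter outputs, hypothetical dimensions). The paper reads (K̄_{b,i}), (Q̄_i^{1/2}), and (K̄_{b,i}^* x̂_{i+1}) off the time-update post-arrays of [17, Proposition III.3]; the sketch forms them directly, performs the rotation (6), and checks the result against the explicit Case-2 formulas (3):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
F = np.eye(n) + 0.3 * rng.standard_normal((n, n))   # nonsingular F_i (Case 2)
G = 0.5 * rng.standard_normal((n, m))
Q = np.eye(m)

def spd(k):   # random symmetric positive-definite stand-in
    A = rng.standard_normal((k, k)); return A @ A.T + 5 * k * np.eye(k)

P1 = spd(n)                      # P_{i+1}    (from the forward filter)
P1N = spd(n)                     # P_{i+1|N}  (from the previous smoother step)
x1 = rng.standard_normal(n)      # x_{i+1}
x1N = rng.standard_normal(n)     # x_{i+1|N}

# Quantities the paper reads off its post-arrays; formed directly here.
Qbar = Q - Q @ G.T @ np.linalg.solve(P1, G @ Q)      # Q - Q G^* P_{i+1}^{-1} G Q
Qbh = np.linalg.cholesky(Qbar)                       # Qbar^{1/2}
Kb = np.linalg.solve(P1, G @ Q) @ np.linalg.inv(Qbh).T   # P_{i+1}^{-1} G Q Qbar^{-*/2}
Fs = np.linalg.solve(F, np.eye(n) - G @ Qbh @ Kb.T)      # F^{-1}(I - G Qbar^{1/2} Kbar^*)
S1N = np.linalg.cholesky(P1N)

# Rotation (6): one orthogonal triangularization yields P_{i|N}^{1/2} and
# alpha = P_{i|N}^{-1/2} x_{i|N}, with no inversion or backsubstitution.
pre = np.zeros((n + 1, n + m))
pre[:n, :n] = Fs @ S1N
pre[:n, n:] = np.linalg.solve(F, G @ Qbh)
pre[n, :n] = np.linalg.solve(S1N, x1N)               # x_{i+1|N}^* P_{i+1|N}^{-*/2}
pre[n, n:] = Kb.T @ x1                               # x_{i+1}^* Kbar_{b,i}
qf, _ = np.linalg.qr(pre[:n].T)                      # zeros the (1, 2) block
post = pre @ qf
SN, alpha = post[:n], post[n]                        # P_{i|N}^{1/2}, alpha_i

# Check against the explicit Case-2 formulas (3).
P_direct = Fs @ P1N @ Fs.T + np.linalg.solve(F, G @ Qbar @ G.T) @ np.linalg.inv(F).T
x_direct = Fs @ x1N + np.linalg.solve(F, G @ Q @ (G.T @ np.linalg.solve(P1, x1)))
assert np.allclose(SN @ SN.T, P_direct)
assert np.allclose(SN @ alpha, x_direct)
```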

III. DWY (OR BACKWARDS RTS) SMOOTHING FORMULAS

The smoothing formulas of Desai, Weinert, and Yusypchuk [15] separate out the dependence on Π_0 by using (Fisher-type) backward Kalman filtering formulas (with infinite "initial" covariance). The equations are

x̂_{i+1|N} = (I + G_i Q_i G_i^* L_{i+1}^b)^{-1}(F_i x̂_{i|N} + G_i Q_i G_i^* z_{i+1}^b)    (9)
P_{i+1|N} = (I + G_i Q_i G_i^* L_{i+1}^b)^{-1}(F_i P_{i|N} F_i^* + G_i Q_i(Q_i^{-1} + G_i^* L_{i+1}^b G_i) Q_i G_i^*)(I + L_{i+1}^b G_i Q_i G_i^*)^{-1}

with initial conditions

x̂_{0|N} = (I + Π_0 L_0^b)^{-1} Π_0 z_0^b,  P_{0|N} = (I + Π_0 L_0^b)^{-1} Π_0

where, for L_{N+1}^b = 0 and z_{N+1}^b = 0,

z_i^b = F_i^*(I + L_{i+1}^b G_i Q_i G_i^*)^{-1} z_{i+1}^b + H_i^* R_i^{-1} y_i    (10)
L_i^b = F_i^*(I + L_{i+1}^b G_i Q_i G_i^*)^{-1} L_{i+1}^b F_i + H_i^* R_i^{-1} H_i.

A square-root algorithm for the DWY smoothing formulas is essentially based on the SRIF formulas. For (10), consider the following array:

[ Q_i^{-*/2}   G_i^* L_{i+1}^{b/2}            0                ]             [ X     0     0   ]
[ 0            F_i^* L_{i+1}^{b/2}            H_i^* R_i^{-*/2} ] Θ_{i+1}  =  [ Y     Z     0   ]    (11)
[ 0            (L_{i+1}^{-b/2} z_{i+1}^b)^*   y_i^* R_i^{-*/2} ]             [ α^*   β^*   (*) ]

where Θ_{i+1} is any unitary operator that lower-triangularizes the first and second (block) rows of the pre-array. From appropriate inner or cross products of the array rows, we find

XX^* = Q_i^{-1} + G_i^* L_{i+1}^b G_i ≜ (R̄_{b,i}^{1/2})(R̄_{b,i}^{*/2})
YX^* = F_i^* L_{i+1}^b G_i ≜ K̄_{b,p,i} X^*
ZZ^* = F_i^* L_{i+1}^b F_i + H_i^* R_i^{-1} H_i - YY^* ≜ (L_i^{b/2})(L_i^{b*/2})
Xα = G_i^* z_{i+1}^b = (R̄_{b,i}^{1/2})(R̄_{b,i}^{-1/2} G_i^* z_{i+1}^b)
Zβ = F_i^* z_{i+1}^b + H_i^* R_i^{-1} y_i - Yα ≜ (L_i^{b/2})(L_i^{-b/2} z_i^b).

We see that the array (11) propagates z_i^b and L_i^b. Next, since (9) can be written as

x̂_{i+1|N} = (I + G_i Q_i G_i^* L_{i+1}^b)^{-1} F_i x̂_{i|N} + (I + G_i Q_i G_i^* L_{i+1}^b)^{-1} G_i Q_i G_i^* z_{i+1}^b
          = (I - G_i R̄_{b,i}^{-1} G_i^* L_{i+1}^b) F_i x̂_{i|N} + G_i R̄_{b,i}^{-1} G_i^* z_{i+1}^b
          = (F_i - G_i R̄_{b,i}^{-*/2} K̄_{b,p,i}^*) x̂_{i|N} + G_i R̄_{b,i}^{-*/2}(R̄_{b,i}^{-1/2} G_i^* z_{i+1}^b)

we will be done if we can construct the component (R̄_{b,i}^{-*/2}) in the array. For this, we augment the array as follows:

[ Q_i^{-*/2}   G_i^* L_{i+1}^{b/2}            0                ]             [ X                  0     0   ]
[ 0            F_i^* L_{i+1}^{b/2}            H_i^* R_i^{-*/2} ] Θ_{i+1}  =  [ Y                  Z     0   ]    (12)
[ 0            (L_{i+1}^{-b/2} z_{i+1}^b)^*   y_i^* R_i^{-*/2} ]             [ α^*                β^*   (*) ]
[ Q_i^{1/2}    0                              0                ]             [ (R̄_{b,i}^{-*/2})   (*)   (*) ]

where we can easily identify the (4, 1) block as R̄_{b,i}^{-*/2} from the cross product of the fourth and first rows. Moreover, as in Algorithm II.2, the error covariance P_{i|N} can also be found in square-root form. Therefore, without further explanation, we shall now present the corresponding square-root version.

Algorithm III.1 (Square-Root DWY):
Step 1: With L_{N+1}^{b/2} = 0 and z_{N+1}^b = 0, propagate z_i^b and L_i^{b/2} via the backward recursions (12). Save the variables (R̄_{b,i}^{-*/2}), (R̄_{b,i}^{-1/2} G_i^* z_{i+1}^b), and [(K̄_{b,p,i})(R̄_{b,i}^{*/2})] for Steps 2 and 3.
Step 2: Using (Π_0^{1/2}), (L_0^{b/2}), and (L_0^{-b/2} z_0^b), construct (P_{0|N}^{-1/2} x̂_{0|N}) and (P_{0|N}^{1/2}) via

[ I            Π_0^{*/2} L_0^{b/2}  ]         [ X                         0   ]
[ Π_0^{1/2}    0                    ] Θ_0  =  [ (P_{0|N}^{1/2})           (*) ]
[ 0            (L_0^{-b/2} z_0^b)^* ]         [ (P_{0|N}^{*/2} z_0^b)^*   (*) ]

where Θ_0 is any unitary operator that zeros out the (1, 2) block of the pre-array. As a result of this rotation, P_{0|N}^{-1/2} x̂_{0|N} = P_{0|N}^{*/2} z_0^b and, therefore, x̂_{0|N} = (P_{0|N}^{1/2})(P_{0|N}^{*/2} z_0^b).
Step 3: Using (P_{0|N}^{-1/2} x̂_{0|N}) and (P_{0|N}^{1/2}), propagate (P_{i|N}^{-1/2} x̂_{i|N}) and (P_{i|N}^{1/2}) forward through the rewritten form of (9), using the quantities saved in Step 1.

This square-root DWY algorithm behaves like the forward square-root RTS algorithm (Algorithm II.2) in terms of flop count, array size, and storage, but there is no constraint on the F_i, whereas in Algorithm II.2 the F_i must be nonsingular. A possible disadvantage of this algorithm, however, is that we cannot begin processing the measurements before the last measurement y_N is available; this means extra delay in the smoother. On the other hand, the BF and RTS smoothing formulas can be carried out in parallel with collecting measurements because they are based on forward Kalman filtering.

Remark 6: Watanabe derived a square-root version of the DWY smoothing formulas using the UD information matrix factorization in [14]. He successfully provided an inversion-free algorithm, except in a step connecting the backward Kalman filtering to the forward-pass smoothing; this inversion step corresponds to Step 2 in Algorithm III.1. In fact, Watanabe's algorithm can be considered a special implementation of Algorithm III.1 (except for Step 2) designed to eliminate arithmetic square-root operations (see [27] for a comprehensive discussion of how to avoid arithmetic square roots and division operations in the triangularization procedure).
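A single backward step of the augmented array (12) can be sketched as follows (our numpy code; real arithmetic, Cholesky factors as square roots, and random stand-ins for L_{i+1}^b and z_{i+1}^b at an interior step, since the terminal step with L_{N+1}^b = 0 is handled by convention). Note that F_i is never inverted, consistent with the absence of any constraint on F_i; the rotated array is checked against the explicit recursions (10):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, m = 3, 2, 2
F = rng.standard_normal((n, n))            # F_i may even be singular here
G = rng.standard_normal((n, m))
H = rng.standard_normal((p, n))
Q, R = np.eye(m), np.eye(p)
y = rng.standard_normal(p)                 # y_i

A = rng.standard_normal((n, n))
Lb1 = A @ A.T + np.eye(n)                  # L_{i+1}^b (stand-in, prior step)
z1 = rng.standard_normal(n)                # z_{i+1}^b

Lbh1 = np.linalg.cholesky(Lb1)             # L_{i+1}^{b/2}
Qh, Rh = np.linalg.cholesky(Q), np.linalg.cholesky(R)

# Augmented pre-array (12): rows 1-2 produce Rbar^{1/2} and L_i^{b/2}; row 3
# carries the estimate quantities; the trailing Q^{1/2} block row delivers
# Rbar^{-*/2} without a separate inversion step.
pre = np.zeros((m + n + 1 + m, m + n + p))
pre[:m, :m] = np.linalg.inv(Qh).T                     # Q^{-*/2}
pre[:m, m:m + n] = G.T @ Lbh1
pre[m:m + n, m:m + n] = F.T @ Lbh1
pre[m:m + n, m + n:] = H.T @ np.linalg.inv(Rh).T      # H^* R^{-*/2}
pre[m + n, m:m + n] = np.linalg.solve(Lbh1, z1)       # (L^{-b/2} z^b)^*
pre[m + n, m + n:] = np.linalg.solve(Rh, y)           # y^* R^{-*/2}
pre[m + n + 1:, :m] = Qh                              # Q^{1/2}

qf, _ = np.linalg.qr(pre[:m + n].T)                   # triangularize rows 1-2
post = pre @ qf
Lbh = post[m:m + n, m:m + n]                          # L_i^{b/2}
beta = post[m + n, m:m + n]                           # L_i^{-b/2} z_i^b
Rbih = post[m + n + 1:, :m]                           # Rbar_{b,i}^{-*/2}

# Check against the explicit recursions (10).
W = np.linalg.inv(np.eye(n) + Lb1 @ G @ Q @ G.T)
z_direct = F.T @ W @ z1 + H.T @ np.linalg.solve(R, y)
L_direct = F.T @ W @ Lb1 @ F + H.T @ np.linalg.solve(R, H)
assert np.allclose(Lbh @ Lbh.T, L_direct)
assert np.allclose(Lbh @ beta, z_direct)
assert np.allclose(Rbih @ post[:m, :m].T, np.eye(m))  # inverse factor recovered
```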

IV. MAYNE-FRASER (OR TWO-FILTER) SMOOTHING FORMULAS

By introducing a method of combining the outputs of forwards Kalman filtering and (Fisher-type) backwards Kalman filtering, Mayne [22] and Fraser [23] suggested the so-called "two-filter" smoothing formulas

x̂_{i|N} = P_{i|N}(P_i^{-1} x̂_i + z_i^b),  P_{i|N}^{-1} = P_i^{-1} + L_i^b.    (13)

Due to their nature, the MF smoothing formulas perform well in terms of speed in real-time processing¹ and flexibility in handling variation of Π_0; nevertheless, the MF smoothing formulas are considered to have a great computational disadvantage because of the several matrix inversion and backsubstitution steps. We shall now show how to overcome this handicap by introducing an appropriate square-root algorithm.

We have already shown how to construct {(P_i^{1/2}), (P_i^{-1/2} x̂_i)} and {(L_i^{b/2}), (L_i^{-b/2} z_i^b)} in [26, Proposition III.5] and Algorithm III.1, respectively. Since we cannot find a direct connection between these quantities and the components in (13), we need to construct certain intermediate square-root arrays. By judicious use of the above quantities, we shall now introduce the following array, where {X_i, Y_i, α_i, β_i} are to be determined:

[ I                   P_i^{*/2} L_i^{b/2}  ]         [ X_i     0   ]
[ P_i^{1/2}           0                    ] Θ_i  =  [ Y_i     (*) ]    (14)
[ x̂_i^* P_i^{-*/2}    0                    ]         [ α_i^*   (*) ]
[ 0                   z_i^{b*} L_i^{-b/2}  ]         [ β_i^*   (*) ]

where Θ_i is any unitary operator that (block) lower-triangularizes the pre-array. Applying inner and cross products of the array rows yields

X_i X_i^* = P_i^{*/2} L_i^b P_i^{1/2} + I = (P_i^{*/2} P_{i|N}^{-1} P_i^{1/2}) = (P_i^{*/2} P_{i|N}^{-*/2})(P_{i|N}^{-1/2} P_i^{1/2})
Y_i X_i^* = P_i^{1/2} = (P_{i|N}^{1/2})(P_{i|N}^{-1/2} P_i^{1/2})
X_i α_i = (P_i^{-1/2} x̂_i) = (P_i^{*/2} P_{i|N}^{-*/2})(P_{i|N}^{*/2} P_i^{-1} x̂_i)
X_i β_i = (P_i^{*/2} L_i^{b/2})(L_i^{-b/2} z_i^b) = (P_i^{*/2} P_{i|N}^{-*/2})(P_{i|N}^{*/2} z_i^b).

Therefore, we can identify

X_i = (P_i^{*/2} P_{i|N}^{-*/2}),  Y_i = (P_{i|N}^{1/2}),  α_i = (P_{i|N}^{*/2} P_i^{-1} x̂_i),  β_i = (P_{i|N}^{*/2} z_i^b)

and thus verify that

x̂_{i|N} = P_{i|N}(P_i^{-1} x̂_i + z_i^b) = Y_i(α_i + β_i)    (15)
P_{i|N} = Y_i Y_i^*.    (16)

This algorithm is summarized in the following.

Algorithm IV.1 (Square-Root MF, or Two-Filter):
Step 1 (Forward Estimate): With (P_0^{1/2}) = Π_0^{1/2} and (P_0^{-1/2} x̂_0) = 0, propagate and save (P_i^{1/2}) and (P_i^{-1/2} x̂_i) using the forward recursion

[ R_i^{1/2}           H_i P_i^{1/2}        0             ]           [ R_{e,i}^{1/2}          0                          0   ]
[ 0                   F_i P_i^{1/2}        G_i Q_i^{1/2} ] Θ_i^f  =  [ K_{p,i} R_{e,i}^{1/2}  P_{i+1}^{1/2}              0   ]
[ -y_i^* R_i^{-*/2}   x̂_i^* P_i^{-*/2}     0             ]           [ -e_i^* R_{e,i}^{-*/2}  x̂_{i+1}^* P_{i+1}^{-*/2}   (*) ]

where Θ_i^f is any unitary operator that lower-triangularizes the first and second (block) rows of the pre-array.
Step 2 (Backward Estimate): With L_{N+1}^{b/2} = 0 and L_{N+1}^{-b/2} z_{N+1}^b = 0, propagate and save (L_i^{b/2}) and (L_i^{-b/2} z_i^b) using the backward recursions (11).
Step 3 (Smoothed Estimate): Using the quantities in Steps 1 and 2, construct {Y_i, α_i, β_i} in (14) and compute the smoothed estimates and error covariances via (15) and (16).

In batch processing, where all the measurements are in hand before running the smoothing formulas, the MF smoothing formulas are the fastest of all the smoothing formulas. The calculations require memory on the order of (N+1)n^2 and (N+1)n for saving either {(P_i^{1/2}), (P_i^{-1/2} x̂_i)} or {(L_i^{b/2}), (L_i^{-b/2} z_i^b)}. Therefore, if we have to calculate error covariances, Algorithm IV.1 demands less memory than [17, Proposition III.1] and Algorithm II.1 but more memory than Algorithm III.1. There are certain cases where we do not need to compute error covariances (see, e.g., adaptive filtering [28], [29] in communications). Even in such cases, however, the MF smoothing formulas still need to compute the error covariances to obtain the smoothed estimates, which is not the case for [17, Proposition III.3] and Algorithm II.2, corresponding to the BF and RTS smoothing formulas. Therefore, Algorithm IV.1 requires more memory than [17, Proposition III.3] and Algorithm II.2.

Remark 7: The MF (or two-filter) smoothing formulas that Dobbins used in [16] employed recursions of the form (13). In his square-root version, the first inversion steps appeared when x̂_i was constructed; others arose in the subsequent procedures used to compute x̂_{i|N} from x̂_i and z_i^b, each employing a unitary operator that (block) lower-triangularizes a pre-array.

¹ "Real-time processing" for fixed-interval smoothing is a series of actions or operations estimating each input frame from each output frame, while one collects outputs frame by frame. This concept differs from that of real-time processing for filtering or fixed-lag smoothing, in which one executes the algorithm while collecting the outputs symbol by symbol.
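The combination step (14)-(16) can be sketched in a few lines of numpy (ours; random symmetric positive-definite stand-ins for the two filters' outputs P_i, x̂_i, L_i^b, z_i^b, which Algorithm IV.1 would already hold in factored form). The only inversions below are triangular solves used to build the pre-array entries from our unfactored stand-ins; the combination P_{i|N}^{-1} = P_i^{-1} + L_i^b itself is never inverted explicitly, and the result is checked against the explicit formulas (13):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

def spd(k):   # random symmetric positive-definite stand-in
    A = rng.standard_normal((k, k)); return A @ A.T + k * np.eye(k)

P, x = spd(n), rng.standard_normal(n)     # forward filter:  P_i, x_i
Lb, zb = spd(n), rng.standard_normal(n)   # backward filter: L_i^b, z_i^b
S = np.linalg.cholesky(P)                 # P_i^{1/2}
Lbh = np.linalg.cholesky(Lb)              # L_i^{b/2}

# Array (14): {X_i, Y_i, alpha_i, beta_i} appear in the first n columns of
# the rotated pre-array; the remaining columns are the redundant "(*)".
pre = np.zeros((2 * n + 2, 2 * n))
pre[:n, :n] = np.eye(n)
pre[:n, n:] = S.T @ Lbh                   # P^{*/2} L^{b/2}
pre[n:2 * n, :n] = S                      # P^{1/2}
pre[2 * n, :n] = np.linalg.solve(S, x)    # x^* P^{-*/2}
pre[2 * n + 1, n:] = np.linalg.solve(Lbh, zb)   # z^{b*} L^{-b/2}

qf, _ = np.linalg.qr(pre[:n].T)           # zero the (1, 2) block of row 1
post = pre @ qf
Y = post[n:2 * n]                         # Y_i = P_{i|N}^{1/2}
alpha = post[2 * n]                       # alpha_i = P_{i|N}^{*/2} P_i^{-1} x_i
beta = post[2 * n + 1]                    # beta_i  = P_{i|N}^{*/2} z_i^b

x_sm = Y @ (alpha + beta)                 # (15)
P_sm = Y @ Y.T                            # (16)

# Check against the explicit two-filter combination (13).
P_direct = np.linalg.inv(np.linalg.inv(P) + Lb)
x_direct = P_direct @ (np.linalg.solve(P, x) + zb)
assert np.allclose(P_sm, P_direct)
assert np.allclose(x_sm, x_direct)
```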

V. CONCLUDING REMARKS

Table I compares the various square-root smoothing algorithms in terms of "running memory size," which means the amount of storage needed for the temporary quantities required for the propagation of x̂_{i|N} and P_{i|N}. If the F_i are singular or if we need to obtain error covariances, we cannot use either [17, Proposition III.3] or Algorithm RTS (Case 2). In this case, the square-root DWY algorithm requires the smallest amount of memory. If we do not have to construct error covariances and the F_i are nonsingular, the square-root BF algorithm in [17], the RTS algorithm, and the DWY algorithm require almost the same small amount of memory.

TABLE I
COMPARISONS AMONG VARIOUS SQUARE-ROOT SMOOTHING ALGORITHMS

Square-Root Algorithm | Constraint on F_i | P_{i|N}       | Running Memory
BF (Case 1)           | none              | Available     | O((3/2)Nn^2)
BF (Case 2)           | F_i nonsingular   | Not Available | O(Nnp)
RTS (Case 1)          | none              | Available     | O((3/2)Nn^2)
RTS (Case 2)          | F_i nonsingular   | Available*    | O(Nnp)
DWY                   | none              | Available*    | O(Nnp)
MF                    | none              | Available*    | O(Nn^2)

*Guaranteed to be positive definite.

The square-root algorithms for the BF and the RTS smoothing formulas, both of which run backward recursions for the smoothed estimates by using the outputs of a forwards Kalman filter, have an advantage over the others in terms of the amount of computation needed for real-time processing. However, when the F_i are singular, they require more memory, O((3/2)Nn^2), than does the square-root DWY algorithm, O(Nnp). Therefore, when the matrices F_i are nonsingular and the smoothed estimates must be formed as soon as possible after all outputs are collected, the BF and RTS algorithms are recommended (in fact, the latter is more efficient than the former because the smoothed estimates are directly constructed using the outputs of the forwards Kalman filter).

The square-root DWY algorithm, which runs a forward recursion for the smoothed estimates by using the outputs of (Fisher-type) backwards Kalman filtering, is very flexible in handling change of Π_0, has no constraint on the F_i, and requires a relatively small amount of storage. When the F_i are singular or only a small amount of memory is available, this algorithm is recommended. However, there is a time delay because the backward Kalman filtering cannot start before the last measurement is available.

The MF smoothing formulas simultaneously run forwards Kalman filtering and Fisher-type backwards Kalman filtering algorithms. This feature enables the MF smoothing formulas to pick up the main merits of both of the above methods: fast computation in both real-time and batch processing and flexibility in handling change of Π_0. The major drawback, however, is that they require a somewhat larger amount of memory, O(Nn^2).

In conclusion, we note that the main features of our new square-root algorithms are that they use square-root arrays formed from the state estimates and their error covariances and that they avoid matrix inversion and backsubstitution steps in forming the state estimates. These features provide many advantages over conventional algorithms with respect to systolic array and parallel implementations as well as with respect to numerical stability and conditioning.

REFERENCES
[1] J. E. Potter and R. G. Stern, "Statistical filtering of space navigation measurements," in Proc. 1963 AIAA Guidance Contr. Conf., 1963, p. 5.
[2] S. F. Schmidt, "Computational techniques in Kalman filtering," in Theory and Applications of Kalman Filtering, C. T. Leondes, Ed., NATO Advisory Group for Aerospace Research and Development, AGARDograph 139, Feb. 1970.
[3] G. H. Golub, "Numerical methods for solving linear least squares problems," Numer. Math., vol. 7, pp. 206-216, 1965.
[4] P. Dyer and S. McReynolds, "Extension of square-root filtering to include process noise," J. Optim. Theory Appl., vol. 3, pp. 444-459, 1969.
[5] P. G. Kaminski, A. E. Bryson, and S. F. Schmidt, "Discrete square-root filtering: A survey of current techniques," IEEE Trans. Automat. Contr., vol. AC-16, pp. 727-736, 1971.
[6] M. Morf and T. Kailath, "Square root algorithms for least squares estimation," IEEE Trans. Automat. Contr., vol. AC-20, pp. 487-497, Aug. 1975.
[7] B. D. O. Anderson and J. B. Moore, Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.
[8] T. Kailath, Lectures on Wiener and Kalman Filtering. New York: Springer-Verlag, 1981.
[9] P. G. Kaminski, "Square-root filtering and smoothing for discrete processes," Ph.D. dissertation, Dept. of Aeronautics and Astronautics, Stanford Univ., Stanford, CA, 1971.
[10] G. J. Bierman, "A new computationally efficient fixed-interval, discrete-time smoother," Automatica, vol. 19, p. 503, 1983.
[11] G. J. Bierman, "Sequential square-root filtering and smoothing of discrete linear systems," Automatica, vol. 10, pp. 147-158, 1974.
[12] K. Watanabe and S. G. Tzafestas, "New computationally efficient formula for backwards-pass fixed-interval smoother and its UD factorization algorithm," IEE Proc. D, vol. 136, no. 2, pp. 73-78, 1989.
[13] S. R. McReynolds, "Covariance factorization algorithms for fixed-interval smoothing of linear discrete dynamic systems," IEEE Trans. Automat. Contr., vol. 35, pp. 1181-1183, Oct. 1990.
[14] K. Watanabe, "A new forward-pass fixed-interval smoother using the UD information matrix factorization," Automatica, vol. 22, pp. 465-475, 1986.
[15] U. Desai, H. Weinert, and G. Yusypchuk, "Discrete-time complementary models and smoothing algorithms," IEEE Trans. Automat. Contr., vol. AC-28, pp. 536-539, Apr. 1983.
[16] J. R. Dobbins, "Covariance factorization techniques for least squares estimation," Ph.D. dissertation, Dept. of Elec. Eng., Stanford Univ., Stanford, CA, Jan. 1979.
[17] P. Park and T. Kailath, "Square-root Bryson-Frazier smoothing algorithms," IEEE Trans. Automat. Contr., vol. 40, pp. 761-766, Apr. 1995.
[18] A. E. Bryson and M. Frazier, "Smoothing for linear and nonlinear dynamic systems," Aero. Syst. Div., Wright-Patterson Air Force Base, OH, Tech. Rep. TDR 63-119, pp. 353-364, Feb. 1963.
[19] S. Y. Kung, VLSI Array Processors. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[20] N. Petkov, Systolic Parallel Processing. New York: North-Holland, 1993.
[21] H. E. Rauch, F. Tung, and C. T. Striebel, "Maximum likelihood estimates of linear dynamic systems," AIAA J., vol. 3, pp. 1445-1450, Aug. 1965.
[22] D. Q. Mayne, "A solution of the smoothing problem for linear dynamic systems," Automatica, vol. 4, pp. 73-92, 1966.
[23] D. C. Fraser, "A new technique for the optimal smoothing of data," Ph.D. dissertation, Massachusetts Inst. of Technol., Cambridge, MA, 1967.
[24] F. M. F. Gaston and G. W. Irwin, "Systolic Kalman filtering: An overview," IEE Proc. D, vol. 137, no. 4, pp. 235-244, 1990.
[25] M. Moonen, "Implementing the square-root information Kalman filter on a Jacobi-type systolic array," J. VLSI Signal Processing, vol. 8, pp. 283-292, Dec. 1994.
[26] P. Park and T. Kailath, "New square-root algorithms for Kalman filtering," IEEE Trans. Automat. Contr., vol. 40, pp. 895-899, May 1995.
[27] S. F. Hsieh, K. J. R. Liu, and K. Yao, "A unified square-root-free approach for QRD-based recursive least squares estimation," IEEE Trans. Signal Processing, vol. 41, pp. 1405-1409, Mar. 1993.
[28] S. Haykin, Adaptive Filter Theory, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[29] K. Åström, G. Goodwin, and P. Kumar, Adaptive Control, Filtering, and Signal Processing. New York: Springer-Verlag, 1995.
[30] F. M. F. Gaston and G. W. Irwin, "VLSI architectures for square root covariance Kalman filtering," in Proc. SPIE Int. Soc. Opt. Eng., Aug. 1989, pp. 44-55.