
EFFICIENT NLMS AND RLS ALGORITHMS FOR PERFECT PERIODIC SEQUENCES

Alberto Carini
STI - University of Urbino “Carlo Bo”
Piazza della Repubblica, 13 - 61029 Urbino, Italy
E-mail: [email protected]

ABSTRACT

The paper discusses computationally efficient NLMS and RLS algorithms for perfect periodic excitation sequences. The most interesting aspect of these algorithms is that they are exact NLMS and RLS algorithms, suitable for identification and tracking of any linear system, and they require a real-time computational effort of just a multiplication, an addition and a subtraction per sample time. Moreover, the algorithms have convergence and tracking properties that can be comparable to or even better than those of the NLMS algorithm with a white noise input. The transient and steady-state behavior of the algorithms is also studied in the paper.

Index Terms— Adaptive filters, adaptive signal processing

1. INTRODUCTION

One of the most common approaches for identification and tracking of linear systems comes from adaptive filter theory. A time-varying linear system can be identified and tracked by using an adaptive FIR filter of sufficient memory length with the same input as the unknown system, as in Fig. 1. The coefficients of the FIR filter are adapted with an iterative adaptation algorithm in order to minimize the error between the two systems' outputs according to some minimization criterion. For this purpose, the most successful adaptive algorithms are the Least Mean Square (LMS) algorithm and its normalized version, the NLMS algorithm.

The excitation signal of the unknown system is often determined by the specific application. When the designer is allowed to choose an excitation signal, the choice almost always falls on white random noise. Indeed, within the class of random signals, a white random noise excitation guarantees the fastest convergence speed of the NLMS algorithm. The authors of [1] and [2] have argued that the excitation signal that optimizes the convergence speed of the NLMS algorithm is a deterministic Perfect Periodic Sequence (PPSEQ) with period equal to the memory length of the adaptive FIR filter.
Periodic sequences have been widely used for the identification of linear systems. Periodic pulse sequences, maximal length sequences, PPSEQs and even generic periodic sequences are often employed for this purpose. Compared with the other sequences, PPSEQs are particularly suitable for identification because, by definition, they have a perfect periodic autocorrelation function. It has been proved in [1–4] that, in the absence of output noise, an NLMS algorithm excited by a PPSEQ of period N is capable of identifying a linear system within N samples. In a different context, this result had already been proved in a 1971 paper on the Kaczmarz iterative method for solving systems of linear equations. In noiseless conditions, [5] also proves the convergence of the NLMS algorithm for every periodic input signal and provides an expression for the asymptotic solution. It was shown in [1, 2] that when the unknown system has memory length

978-1-4244-4296-6/10/$25.00 ©2010 IEEE

Fig. 1. An adaptive FIR filter. (The input x(n) drives both the unknown system and the FIR filter; the unknown system output, corrupted by the noise ν(n), gives the desired signal d(n); y(n) is the filter output and e(n) = d(n) − y(n) the error.)

smaller than or equal to N, the adaptive filter converges to the unknown system impulse response w̃; when the unknown system has memory length longer than N, the adaptive filter converges to a time-aliased version of w̃ with period N. In the presence of output noise, the convergence performance of the NLMS algorithm with PPSEQ input is always better than or comparable to that of the NLMS algorithm with a white random noise input that provides the same steady-state system distance [3]. The approach of [1, 2] was extended in [3, 4] to the identification of multichannel linear systems.

The works of [1–4] have inspired the results presented in this paper. Here we first derive computationally efficient NLMS and RLS algorithms for identification and tracking of linear systems with a PPSEQ excitation. The proposed algorithms are exact NLMS and RLS algorithms that require a real-time computational effort of just a multiplication, an addition and a subtraction per sample time. Then, the transient and steady-state behavior of the algorithms is analyzed.

The rest of the paper is organized as follows. In Section 2, PPSEQs are reviewed. In Section 3, the NLMS and RLS algorithms for PPSEQs are discussed. The transient and steady-state behavior of the algorithms is discussed in Section 4. Simulation results are given in Section 5. Finally, some concluding remarks are provided in Section 6.

For simplicity, we consider real signals in this paper. The systems we want to identify are assumed to be LTI or linear time-varying. Similarly, we present the single channel case here, but the results can be easily extended to the multichannel case by following the arguments of [3, 4].
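The defining property of a PPSEQ, a perfect periodic autocorrelation, can be checked numerically. The sketch below (pure Python; all function names are ours, not from the paper) builds one period as the IDFT of a constant-amplitude, conjugate-symmetric, random-phase spectrum, the random-phase-multisine construction reviewed in Section 2, and verifies that every nonzero circular lag of the autocorrelation vanishes.

```python
# Numerical check of the perfect periodic autocorrelation property of a
# PPSEQ generated as the IDFT of a unit-amplitude, conjugate-symmetric,
# random-phase spectrum (a random phase multisine).
import cmath
import random

def make_ppseq(N, seed=0):
    """Return one period of a real PPSEQ of period N."""
    rng = random.Random(seed)
    X = [0j] * N
    X[0] = 1.0 + 0j                      # DC bin: real, unit amplitude
    for k in range(1, N // 2 + 1):
        phi = rng.uniform(0.0, 2.0 * cmath.pi)
        X[k] = cmath.exp(1j * phi)
        X[-k] = X[k].conjugate()         # conjugate symmetry -> real p(n)
    if N % 2 == 0:
        X[N // 2] = 1.0 + 0j             # Nyquist bin must be real
    # inverse DFT (direct O(N^2) sum, fine for a small demo)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

def periodic_autocorr(p, lag):
    """Circular autocorrelation phi_pp(lag) over one period."""
    N = len(p)
    return sum(p[i] * p[(i + lag) % N] for i in range(N))

N = 16
p = make_ppseq(N)
E = sum(v * v for v in p)                # sequence energy (= 1 by Parseval)
print(abs(E - 1.0) < 1e-9)                                    # True
print(all(abs(periodic_autocorr(p, m)) < 1e-9
          for m in range(1, N)))                              # True
```

Since every DFT bin has unit amplitude, the energy is E = 1 and the autocorrelation is E at lag 0 and zero at all other lags, which is exactly property (1).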
Throughout the paper, lowercase boldface letters are used to denote vectors, uppercase boldface letters are used to denote matrices, ⊛_N denotes the circular convolution of order N, E[·] denotes the mathematical expectation, DFT{·} and IDFT{·} denote the Discrete Fourier Transform (DFT) and the Inverse DFT (IDFT) of the argument, respectively, A* denotes the complex conjugate of A, ‖·‖ denotes the Euclidean norm, and ⌊·⌋ is the largest integer smaller than or equal to the argument.

2. PERFECT PERIODIC SEQUENCES

A PPSEQ is a periodic sequence p(n) of period N with a perfect periodic autocorrelation function,

ICASSP 2010

φ_pp(n) = Σ_{i=0}^{N−1} p(i) p(n + i) = E if n mod N = 0, and 0 otherwise,  (1)

where E is the energy of the sequence, E = Σ_{i=0}^{N−1} p²(i). Since DFT{φ_pp(n)} = DFT{p(n)} · DFT{p(n)}*, the DFT of a PPSEQ has constant amplitude equal to √E. Therefore, the simplest way to generate a PPSEQ of period N is to take the periodic repetition of the IDFT of a conjugate symmetric, constant amplitude, complex sequence of length N. In particular, a random phase sequence can be considered. These sequences are called random phase multisine sequences (RPMS) by some authors [6].

For many applications the PPSEQ should have a good energy efficiency

η = ( (1/N) Σ_{i=0}^{N−1} p²(i) ) / max_i {p²(i)},

or, equivalently, a low crest factor 1/η. A binary sequence would have η = 1, but no binary PPSEQ is known for N > 4. For this reason, ternary PPSEQs, which assume the values {0, a, −a}, have been proposed in the literature [7, 8]. The sequences of [8] are particularly interesting. They are almost binary sequences, with just a leading zero in the fundamental period, and they exist for every period N = q^u + 1 with q > 2 prime and u ∈ ℕ. Moreover, the sequences of [8] are odd PPSEQs, i.e., the fundamental period is periodically repeated with sign inversion, and the periodic autocorrelation function is φ_pp(n) = E if n mod 2N = 0, −E if n mod 2N = N, and 0 otherwise. Nevertheless, we must point out that a high energy efficiency η is useful only when we need to exploit all the available measurement dynamic range; there are many applications where this is not necessary. There exist procedures for deriving RPMS with high energy efficiency, by considering RPMS with a quadratic phase response [9] or by applying an iterative optimization procedure [6]. The theory we will present in the next sections is also applicable to odd PPSEQs, but for simplicity we will consider only PPSEQs with the periodic autocorrelation given by (1).

3. EFFICIENT NLMS AND RLS ALGORITHMS FOR PPSEQS

3.1. Preliminary considerations

The derivation of efficient NLMS and RLS algorithms for PPSEQ inputs exploits the fact that, after an initial transitory period, the sequence of input vectors x(n) = [x(n), x(n − 1), . . . , x(n − N + 1)]^T is periodic with period N, i.e., there are only N different vectors x(n), which will be indicated with x_0, . . . , x_{N−1}. According to the definition in (1), these vectors, and any scaled versions of them, form an orthogonal basis for ℝ^N. In what follows we find adaptation algorithms for the N coefficients c_0(n), . . . , c_{N−1}(n) of the FIR filter with input-output relation

y(n) = Σ_{i=0}^{N−1} c_i(n) w_i^T x(n),  (2)

with w_i = x_i / E and E = x_i^T x_i for all i. When the input signal is a PPSEQ, x(n) = x_i with i = n mod N, and from (1), for all i, j,

x_i^T w_j = 1 when i = j, and 0 otherwise,  (3)

and

y(n) = c_i(n) with i = n mod N.  (4)

It should be noted that the coefficients c_i(n) characterize the filter in (2) as well as its impulse response. Thus, they can be used for system identification and tracking. Moreover, we can efficiently compute the impulse response vector w(n) from the c_i(n). Indeed,

w(n) = Σ_{i=0}^{N−1} c_i(n) w_i = W c(n),  (5)


where W is the N × N matrix with (i + 1)-th column equal to w_i, and c(n) = [c_0(n), . . . , c_{N−1}(n)]^T. Since W is a circulant matrix, the product can be efficiently computed with the help of the DFT [10],

w(n) = IDFT{ DFT{w_0} · DFT{c(n)} }.  (6)

From (3) we have W^T X = I, where X is the N × N matrix with (i + 1)-th column equal to x_i, so that W^T = X^{−1}. According to (1), the columns of X and of W form orthogonal bases. Therefore, X and W are perfectly conditioned.

3.2. Efficient NLMS algorithm for PPSEQ

In the LMS algorithm, we want to find the coefficients c_i(n) that minimize the mean-square cost function

J(n) = E[(d(n) − y(n))²],  (7)

with y(n) given in (2). By using the gradient method,

c_i(n + 1) = c_i(n) − (μ/2) ∂J(n)/∂c_i(n).  (8)

By approximating J(n) with (d(n) − y(n))² and taking into account (4), it can be verified that

∂J(n)/∂c_i(n) = −2(d(n) − c_i(n)) when i = n mod N, and 0 otherwise.  (9)

Thus,

c_i(n + 1) = c_i(n) + μ (d(n) − c_i(n)) when i = n mod N, and c_i(n + 1) = c_i(n) otherwise.  (10)

This adaptation equation can also be written in vector form,

c(n + 1) = c(n) + μ (d(n) − c^T(n) e_i(n)) e_i(n),  (11)

with i = n mod N and e_i(n) the (i + 1)-th column of the N × N identity matrix. For μ = 1, the adaptive filter in (10) and (11) requires only a multiplication, an addition and a subtraction per sample time. The adaptive filter also includes (2), and still there is only one multiplication. When μ is an integer power of two, this is a multiplication-free adaptive filter. The algorithm is an NLMS algorithm because e_i(n) has unit norm. The algorithm is also an Affine Projection algorithm of order N. Indeed, for μ = 1 it provides the minimum coefficient variation that sets to zero the last N a posteriori estimation errors, d(n − k) − c_i(n + 1) = 0 with i = (n − k) mod N and 0 ≤ k < N.
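The update (10) can be illustrated with a short numerical sketch (pure Python; the variable names and the chosen impulse response are ours, for illustration only). With a PPSEQ input and μ = 1, each sample writes one coefficient c_i, and after one period the impulse response recovered via w = Σ_i c_i x_i / E matches the noiseless unknown system.

```python
# Sketch of the efficient NLMS update (10): with a PPSEQ input, each
# sample updates a single coefficient c_i, i = n mod N, at the cost of
# one multiplication, one addition and one subtraction.
import cmath
import random

N = 8
rng = random.Random(1)

# One period of a PPSEQ: IDFT of a unit-amplitude, conjugate-symmetric,
# random-phase spectrum (DC and Nyquist bins left real at 1).
X = [1.0 + 0j] * N
for k in range(1, N // 2):
    X[k] = cmath.exp(1j * rng.uniform(0.0, 2.0 * cmath.pi))
    X[N - k] = X[k].conjugate()
p = [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
     for n in range(N)]
E = sum(v * v for v in p)                 # sequence energy

# Noiseless unknown FIR system of memory N (coefficients chosen freely).
w_true = [0.5, -0.3, 0.1, 0.0, 0.2, -0.1, 0.05, 0.4]

mu = 1.0
c = [0.0] * N
for n in range(2 * N):
    # periodic regressor x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T
    x = [p[(n - j) % N] for j in range(N)]
    d = sum(w_true[j] * x[j] for j in range(N))   # desired signal d(n)
    i = n % N
    c[i] = c[i] + mu * (d - c[i])         # the whole NLMS update (10)

# Recover the impulse response: w = sum_i c_i * x_i / E.
w_hat = [0.0] * N
for i in range(N):
    x_i = [p[(i - j) % N] for j in range(N)]
    for j in range(N):
        w_hat[j] += c[i] * x_i[j] / E
print(max(abs(a - b) for a, b in zip(w_hat, w_true)) < 1e-9)   # True
```

Because the vectors x_i form an orthogonal basis with x_i^T x_i = E, the expansion Σ_i (w^T x_i) x_i / E reproduces w exactly, so identification completes within N samples, as stated in Section 1.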