ISA Transactions (2017), article in press


Research article

New methods of Laguerre pole optimization for the ARX model expansion on Laguerre bases

Tawfik Najeh, Abdelkader Mbarek*, Kais Bouzrara, Lotfi Nabli, Hassani Messaoud

Research Laboratory of Automatic Signal and Image Processing, National School of Engineers of Monastir, University of Monastir, 5019, Tunisia

Article history: Received 15 January 2014; Received in revised form 14 April 2017; Accepted 21 May 2017

Keywords: ARX-Laguerre model; Pole optimisation; Newton-Raphson method; Genetic algorithms

Abstract

The ARX-Laguerre model is an important reduced-complexity representation of linear systems. However, a significant reduction of this model depends on an optimal choice of both Laguerre poles. We therefore propose in this paper two new methods to estimate, from input/output measurements, the optimal values of the Laguerre poles of the ARX-Laguerre model. The first method is based on the Newton-Raphson iterative technique, for which we prove that the gradient and the Hessian can be expressed analytically. The second method is based on Genetic Algorithms. Both proposed algorithms are tested on a numerical example and on a heating benchmark.

© 2017 ISA. Published by Elsevier Ltd. All rights reserved.

1. Introduction

Recently, an upsurge of research on Laguerre filters for system modelling and reduced-order models has appeared [1-7]. The Laguerre functions have the property of being completely characterized by one parameter, called the Laguerre pole, whose optimal identification results in a significant reduction of the number of model parameters. Many works have dealt with Laguerre modeling; among them we recall the works of Fu and Dumont (1993) [8], who established an optimality condition on the Laguerre pole using the system impulse response, and those of Tanguy et al. (2000) [9], where the optimal pole is expressed in terms of the Laguerre model coefficients. Some researchers suggested an off-line method to estimate the position of the stationary points of the squared-error curve of a truncated Laguerre series or Laguerre filter [10]. Other orthonormal function bases (OFB) have been proposed in the literature, such as the generalized Laguerre functions and Meixner functions [11-15], the Kautz orthonormal basis [16] and the generalized orthonormal basis (GOB) [17]. We note that these bases are characterized by a set of poles, the choice of which strongly influences the parsimony of the expansion. The problem of the optimal choice of the free parameter of some orthogonal functions (Laguerre, Kautz, Meixner) for conventional system modeling has already been addressed by many researchers [5-7,11-15,18,19]. Some authors were interested in the optimization of the GOB poles: Den Brinker et al. [20] developed a method to recursively determine the GOB poles, while Malti et al. [21] and Oliveira e Silva [22] used the gradient algorithm to minimize the output quadratic error.

Expanding the ARX coefficients on two Laguerre bases yields a compact representation with good approximation capability for complex linear systems. The expansion of the ARX model on Laguerre bases was first suggested by Bouzrara et al. [1]; the resulting model is entitled the ARX-Laguerre model. The parsimony of the expansion is strongly linked to the choice of the poles defining both Laguerre bases. Bouzrara et al. [3] proposed an iterative algorithm to optimise the Laguerre poles based on an analytical solution that depends on the coefficients defining the ARX-Laguerre model.

In this paper, we focus on the Laguerre pole optimization of the ARX-Laguerre model by proposing two different methods. The first one uses an iterative algorithm based on the Newton-Raphson approximation technique and deals with the black-box context, where only the response of the system to a persistently exciting input signal is known. This method needs initial values of the Laguerre poles for the iteration procedure, which leads to a local minimum; we suggest a procedure, in the black-box context, that yields the optimal values of the Laguerre poles and the least-squares estimates of the ARX-Laguerre model coefficients. The second proposed method uses Genetic Algorithms (GA), which have been widely adopted in recent years, are very powerful for stochastic system modelling, and place no special requirement on the form of the objective function.

* Corresponding author. E-mail addresses: najehtawfi[email protected] (T. Najeh), [email protected] (A. Mbarek), [email protected] (K. Bouzrara), lotfi[email protected] (L. Nabli), [email protected] (H. Messaoud).

http://dx.doi.org/10.1016/j.isatra.2017.05.015. 0019-0578/© 2017 ISA. Published by Elsevier Ltd. All rights reserved.

Please cite this article as: Najeh T, et al. New methods of Laguerre pole optimization for the ARX model expansion on Laguerre bases. ISA Transactions (2017), http://dx.doi.org/10.1016/j.isatra.2017.05.015


The processing of these algorithms can be achieved in many different ways by using stochastic rules, and they can converge to the global optimum with high probability. Genetic Algorithms [23,24] are stochastic optimization algorithms that can express complex structured problems through their hierarchy and can determine the feasible solution space automatically without requiring any superstructure; the optimum can then be sought automatically through the genetic evolutionary process.

The paper is organized as follows. In Section 2, we present the ARX-Laguerre model, obtained by developing the coefficients of the classical ARX model on two independent Laguerre bases; in this context, we write the coefficients of the ARX-Laguerre model as linear combinations of Laguerre basis functions, and we present the network of input and output filters of the ARX-Laguerre model as well as its recursive representation. Section 3 presents, in its first part, an analytical Laguerre pole optimization in which the optimal poles are expressed in terms of the ARX-Laguerre model coefficients; in its second part, we propose two new methods for Laguerre pole optimization. Finally, Section 4 evaluates, through simulation examples, the performances of the proposed Laguerre pole algorithms for the ARX-Laguerre model, compared in terms of approximation quality. Moreover, we validate the proposed algorithms on a benchmark system manufactured by Feedback, known as the Process Trainer PT326 [25].

2. ARX-Laguerre model

2.1. Principle

A discrete-time system can be described by an ARX model as [1]:

$$y(k) = \sum_{j=1}^{n_a} h_a(j)\, y(k-j) + \sum_{j=1}^{n_b} h_b(j)\, u(k-j) \qquad (1)$$

where $u(k)$ and $y(k)$ are the system input and output respectively, $n_a$ and $n_b$ are the ARX model orders associated with the output and the input respectively, and $h_a(j)$ and $h_b(j)$ are the ARX model parameters. These coefficients can be decomposed on two independent Laguerre bases as follows:

$$h_a(j) = \sum_{n=0}^{\infty} g_{n,a}\, \ell_n^a(j, \xi_a) \quad \text{and} \quad h_b(j) = \sum_{n=0}^{\infty} g_{n,b}\, \ell_n^b(j, \xi_b) \qquad (2)$$

where $\ell_n^i(j, \xi_i)$, $g_{n,i}$ and $\xi_i$, $(i = a, b)$, are respectively the orthonormal functions of the Laguerre bases, the Fourier coefficients and the pole defining the orthonormal basis. By the Bounded Input Bounded Output (BIBO) stability condition of the ARX model, the coefficients $h_i(j)$ $(i = a, b)$ are absolutely summable:

$$\sum_{j=0}^{\infty} |h_i(j)| < \infty, \qquad i = a, b \qquad (3)$$

Taking into account relation (3) and substituting the linear combinations (2) into the ARX model (1), the resulting model, entitled the ARX-Laguerre model, is written as [1]:

$$y(k) = \sum_{n=0}^{\infty} g_{n,a}\, x_{n,a}(k, \xi_a) + \sum_{n=0}^{\infty} g_{n,b}\, x_{n,b}(k, \xi_b) \qquad (4)$$

where $x_{n,a}(k, \xi_a)$ and $x_{n,b}(k, \xi_b)$ are, respectively, the output and the input filtered by the Laguerre functions:

$$x_{n,a}(k, \xi_a) = \sum_{j=1}^{\infty} \ell_n^a(j, \xi_a)\, y(k-j) = \ell_n^a(k, \xi_a) * y(k) \qquad (5)$$

$$x_{n,b}(k, \xi_b) = \sum_{j=1}^{\infty} \ell_n^b(j, \xi_b)\, u(k-j) = \ell_n^b(k, \xi_b) * u(k) \qquad (6)$$

where $*$ is the convolution product.

2.2. Recursive representation of the ARX-Laguerre model

The orthonormal functions defining both independent Laguerre bases are given by their Z-transform [1]:

$$L_n^i(z) = \frac{\sqrt{1-\xi_i^2}}{z-\xi_i} \left( \frac{1-\xi_i z}{z-\xi_i} \right)^n, \quad (n = 0, 1, \ldots, N_i - 1), \; (i = a, b) \qquad (7)$$

where $\xi_a$ ($|\xi_a| < 1$) and $\xi_b$ ($|\xi_b| < 1$) are the Laguerre poles. These functions satisfy the following recurrence:

$$L_0^i(z) = \frac{\sqrt{1-\xi_i^2}}{z-\xi_i}; \qquad L_n^i(z) = \frac{1-\xi_i z}{z-\xi_i}\, L_{n-1}^i(z, \xi_i), \quad n = 1, \ldots, N_i - 1, \quad i = a, b \qquad (8)$$

The Z-transform of the ARX-Laguerre model (4) gives:

$$Y(z) = \sum_{n=0}^{\infty} g_{n,a}\, X_{n,a}(z, \xi_a) + \sum_{n=0}^{\infty} g_{n,b}\, X_{n,b}(z, \xi_b) \qquad (9)$$

where $X_{n,a}(z, \xi_a)$ and $X_{n,b}(z, \xi_b)$ are the Z-transforms of $x_{n,a}(k, \xi_a)$ and $x_{n,b}(k, \xi_b)$ given by (5) and (6), respectively, and $Y(z)$ is the Z-transform of $y(k)$. The ARX-Laguerre model (9) can be truncated to finite orders $N_a$ and $N_b$ as follows:

$$Y(z) = \sum_{n=0}^{N_a-1} g_{n,a}\, X_{n,a}(z, \xi_a) + \sum_{n=0}^{N_b-1} g_{n,b}\, X_{n,b}(z, \xi_b) + E(z, \xi) \qquad (10)$$

where

$$\xi = [\xi_a, \xi_b], \qquad |\xi_i| < 1 \quad (i = a, b) \qquad (11)$$

and $E(z, \xi)$ is the truncation error defined by:

$$E(z, \xi) = Y(z) - \left( \sum_{n=0}^{N_a-1} g_{n,a}\, X_{n,a}(z, \xi_a) + \sum_{n=0}^{N_b-1} g_{n,b}\, X_{n,b}(z, \xi_b) \right) \qquad (12)$$

From (12), we can define the following quadratic criterion:

$$J_T(\xi) = \| E(z, \xi) \|^2 \qquad (13)$$

The minimisation of the quadratic criterion $J_T(\xi)$ with respect to the coefficients $g_{n,a}$ for $(n = 0, \ldots, N_a - 1)$ and $g_{n,b}$ for $(n = 0, \ldots, N_b - 1)$ yields:

$$\frac{\partial J_T(\xi)}{\partial g_{n,a}} = \left\langle \frac{\partial E(z,\xi)}{\partial g_{n,a}},\, E(z,\xi) \right\rangle = 0, \quad n = 0, \ldots, N_a - 1; \qquad \frac{\partial J_T(\xi)}{\partial g_{n,b}} = \left\langle \frac{\partial E(z,\xi)}{\partial g_{n,b}},\, E(z,\xi) \right\rangle = 0, \quad n = 0, \ldots, N_b - 1 \qquad (14)$$

By using relation (12), the gradient of the truncation error $E(z, \xi)$ with respect to the coefficients $g_{n,a}$ for $(n = 0, \ldots, N_a - 1)$ and $g_{n,b}$ for $(n = 0, \ldots, N_b - 1)$ can be written as:

$$\frac{\partial E(z, \xi)}{\partial g_{n,a}} = - X_{n,a}(z, \xi_a) \qquad (15)$$

with

$$\frac{\partial E(z, \xi)}{\partial g_{n,b}} = - X_{n,b}(z, \xi_b) \qquad (16)$$
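The filtering operations (5)-(8) lend themselves to a direct time-domain implementation. The following sketch (an illustrative Python translation, not code from the paper; the function name and signature are ours) generates the Laguerre-filtered signals $x_n(k, \xi)$ by running the recurrence (8) sample by sample:

```python
import numpy as np

def laguerre_filter_bank(signal, xi, N):
    """Laguerre-filtered signals x_n(k), n = 0..N-1, obtained from the
    recurrence (8): X_0(z) = sqrt(1-xi^2)/(z-xi) S(z) and
    X_n(z) = (1-xi*z)/(z-xi) X_{n-1}(z)."""
    H = len(signal)
    x = np.zeros((N, H))
    for k in range(H - 1):
        # x_0(k+1) = xi*x_0(k) + sqrt(1-xi^2)*s(k)
        x[0, k+1] = xi * x[0, k] + np.sqrt(1 - xi**2) * signal[k]
        # x_n(k+1) = xi*x_n(k) + x_{n-1}(k) - xi*x_{n-1}(k+1)
        for n in range(1, N):
            x[n, k+1] = xi * x[n, k] + x[n-1, k] - xi * x[n-1, k+1]
    return x
```

With these filtered signals, the truncated model output (10) is simply $\hat{y}(k) = \sum_n g_{n,a} x_{n,a}(k) + \sum_n g_{n,b} x_{n,b}(k)$. Feeding an impulse through the bank recovers the Laguerre functions themselves, which are orthonormal.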

From relations (14), (15) and (16) we can formulate the following property:

Property 2.1. When the coefficients $g_{n,a}$ and $g_{n,b}$ of the ARX-Laguerre model are optimal, the inner products between the truncation error $E(z, \xi)$ and the filters $X_{n,a}(z, \xi_a)$ for $(n = 0, \ldots, N_a - 1)$ and $X_{n,b}(z, \xi_b)$ for $(n = 0, \ldots, N_b - 1)$ are zero:

$$\langle X_{n,a}(z, \xi_a),\, E(z, \xi) \rangle = 0, \qquad \langle X_{n,b}(z, \xi_b),\, E(z, \xi) \rangle = 0 \qquad (17)$$ ■

From (5) and (6) we can write:

$$X_{n,a}(z, \xi_a) = L_n^a(z)\, Y(z) \quad \text{and} \quad X_{n,b}(z, \xi_b) = L_n^b(z)\, U(z) \qquad (18)$$

By combining the recurrent form (8) of the Laguerre orthonormal functions with relation (18), we can formulate the recurrence relations for the filters $X_{n,a}(z)$ and $X_{n,b}(z)$, $(n = 1, 2, \ldots)$:

$$X_{0,a}(z, \xi_a) = \frac{\sqrt{1-\xi_a^2}}{z-\xi_a}\, Y(z); \qquad X_{n,a}(z, \xi_a) = \frac{1-\xi_a z}{z-\xi_a}\, X_{n-1,a}(z, \xi_a) \qquad (19)$$

$$X_{0,b}(z, \xi_b) = \frac{\sqrt{1-\xi_b^2}}{z-\xi_b}\, U(z); \qquad X_{n,b}(z, \xi_b) = \frac{1-\xi_b z}{z-\xi_b}\, X_{n-1,b}(z, \xi_b) \qquad (20)$$

From (10), (19) and (20), we represent in Fig. 1 the filter network of the ARX-Laguerre model. From Fig. 1, we obtain the following recursive representation of the ARX-Laguerre model [1]:

$$\begin{cases} X_{N_a}(k+1) = A_a\, X_{N_a}(k) + b_a\, y(k) \\ X_{N_b}(k+1) = A_b\, X_{N_b}(k) + b_b\, u(k) \\ y(k) = C^T X(k) + e(k) \end{cases} \qquad (21)$$

where

$$C = \left[ g_{0,a}, \ldots, g_{N_a-1,a},\, g_{0,b}, \ldots, g_{N_b-1,b} \right]^T \in \mathbb{R}^{N_a+N_b} \qquad (22)$$

and $X(k)$ is the $(N_a + N_b)$-dimensional vector including the filtered outputs and the filtered inputs of both Laguerre bases:

$$X(k) = \left[ X_{N_a}(k)^T \;\; X_{N_b}(k)^T \right]^T \qquad (23)$$

where $X_{N_a}(k)$ and $X_{N_b}(k)$ are defined as follows:

$$X_{N_a}(k) = \left[ x_{0,a}(k), \ldots, x_{N_a-1,a}(k) \right]^T, \qquad X_{N_b}(k) = \left[ x_{0,b}(k), \ldots, x_{N_b-1,b}(k) \right]^T \qquad (24)$$

$A_i$ for $(i = a, b)$ are two square matrices of dimension $N_i$:

$$A_i = \begin{bmatrix} a_{11}^i & 0 & \cdots & 0 \\ a_{21}^i & a_{22}^i & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{(N_i)1}^i & a_{(N_i)2}^i & \cdots & a_{(N_i)(N_i)}^i \end{bmatrix} \qquad (25)$$

with:

$$a_{sm}^i = \begin{cases} \xi_i & \text{if } s = m \\ (-\xi_i)^{s-m-1} (1 - \xi_i^2) & \text{if } s > m \\ 0 & \text{if } s < m \end{cases} \qquad (26)$$

and $b_i$ for $(i = a, b)$ are two vectors of dimension $N_i$:

$$b_i = \left[ b_1^i \;\; b_2^i \;\; \cdots \;\; b_{N_i}^i \right]^T \qquad (27)$$

with:

$$b_m^i = (-\xi_i)^{m-1} \sqrt{1 - \xi_i^2} \quad \text{for } (m = 1, \ldots, N_i) \qquad (28)$$
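As a sketch (illustrative Python assuming NumPy; the function names are ours), the matrices $A_i$ of (25)-(26) and the vectors $b_i$ of (27)-(28) can be built directly, and the recursion (21) then propagates the filtered signals:

```python
import numpy as np

def laguerre_state_matrices(xi, N):
    """Lower-triangular A of (25)-(26) and vector b of (27)-(28)."""
    A = np.zeros((N, N))
    for s in range(1, N + 1):
        for m in range(1, s + 1):
            A[s-1, m-1] = xi if s == m else (-xi)**(s - m - 1) * (1 - xi**2)
    b = np.array([(-xi)**(m - 1) * np.sqrt(1 - xi**2) for m in range(1, N + 1)])
    return A, b

def arx_laguerre_output(u, y, xi_a, xi_b, C, Na, Nb):
    """One-step-ahead model output of the recursive representation (21)."""
    Aa, ba = laguerre_state_matrices(xi_a, Na)
    Ab, bb = laguerre_state_matrices(xi_b, Nb)
    Xa, Xb = np.zeros(Na), np.zeros(Nb)
    y_hat = np.zeros(len(u))
    for k in range(len(u)):
        y_hat[k] = C @ np.concatenate([Xa, Xb])   # y(k) = C^T X(k)
        Xa = Aa @ Xa + ba * y[k]                  # X_Na(k+1)
        Xb = Ab @ Xb + bb * u[k]                  # X_Nb(k+1)
    return y_hat
```

Driving the output branch with an impulse and reading the first state reproduces the first Laguerre function $\sqrt{1-\xi^2}\,\xi^{k-1}$, which is a quick consistency check of the matrices.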

Fig. 1. Discrete-time ARX-Laguerre filter network.

3. Laguerre pole optimisation

3.1. Method of Bouzrara et al.

In this method, a separate optimisation of the two poles defining the Laguerre bases relative to the parameters $h_a$ and $h_b$ of the ARX model is considered. The cost functions are defined as [3]:

$$J_a = \frac{1}{\|h_a\|^2} \sum_{n=0}^{\infty} n\, g_{n,a}^2 \quad \text{and} \quad J_b = \frac{1}{\|h_b\|^2} \sum_{n=0}^{\infty} n\, g_{n,b}^2 \qquad (29)$$

with

$$\|h_a\|^2 = \sum_{j=0}^{\infty} h_a^2(j) \quad \text{and} \quad \|h_b\|^2 = \sum_{j=0}^{\infty} h_b^2(j) \qquad (30)$$

However, we note that the quantities $\|h_a\|^2$ and $\|h_b\|^2$ can be expressed as functions of the squares of the Fourier coefficients $g_{n,a}$ and $g_{n,b}$:

$$\|h_a\|^2 = \sum_{n=0}^{\infty} g_{n,a}^2 \quad \text{and} \quad \|h_b\|^2 = \sum_{n=0}^{\infty} g_{n,b}^2 \qquad (31)$$

Thus, the two optimal Laguerre poles $\xi_{opt,a}$ and $\xi_{opt,b}$, in the sense of the minimization of the criteria $J_a$ and $J_b$ respectively, are given by [3]:

$$\xi_{opt,i} = \begin{cases} \rho_i - \sqrt{\rho_i^2 - 1} & \text{if } \rho_i > 1 \\ \rho_i + \sqrt{\rho_i^2 - 1} & \text{if } \rho_i < -1 \end{cases} \qquad i = a, b \qquad (32)$$
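A sketch of this analytical update (illustrative Python; the helper name is ours, and the Fourier coefficients $g_n$ are assumed already identified) applies relation (32), with the quantities $\rho_i$, $T_{1,i}$ and $T_{2,i}$ defined by (33)-(34):

```python
import numpy as np

def pole_update(g, xi):
    """One analytical pole update: T1, T2 from (34), rho from (33),
    new pole from (32). g holds the Fourier coefficients g_0..g_{N-1}
    of one basis, xi is the current pole of that basis."""
    n = np.arange(len(g))
    T1 = np.sum((2 * n + 1) * g**2)
    T2 = 2.0 * np.sum(n[1:] * g[1:] * g[:-1])
    rho = ((1 + xi**2) * T1 + 2 * xi * T2) / ((1 + xi**2) * T2 + 2 * xi * T1)
    if rho > 1:
        return rho - np.sqrt(rho**2 - 1)
    if rho < -1:
        return rho + np.sqrt(rho**2 - 1)
    raise ValueError("relation (32) assumes |rho| > 1")
```

In the iterative scheme of [3], such an update alternates with re-estimation of the coefficients $g_{n,i}$ until the pole converges.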


The quantity $\rho_i$ in (32) is given by:

$$\rho_i = \frac{(1 + \xi_i^2)\, T_{1,i} + 2 \xi_i\, T_{2,i}}{(1 + \xi_i^2)\, T_{2,i} + 2 \xi_i\, T_{1,i}}, \qquad i = a, b \qquad (33)$$

where

$$T_{1,i} = \sum_{n=0}^{\infty} (2n+1)\, g_{n,i}^2, \qquad T_{2,i} = 2 \sum_{n=0}^{\infty} n\, g_{n,i}\, g_{n-1,i}, \qquad i = a, b \qquad (34)$$

Consequently, by applying relation (32), the optimal Laguerre poles $\xi_{opt,a}$ and $\xi_{opt,b}$ are obtained by minimizing the cost functions $J_a$ and $J_b$ respectively.

3.2. New methods of Laguerre pole optimization

3.2.1. Method based on the Newton-Raphson algorithm

The Laguerre poles $\xi = [\xi_a, \xi_b]$ are optimized in the sense of minimizing the quadratic criterion:

$$J(\xi) = \frac{1}{2} \sum_{k=1}^{H} \Bigl( \underbrace{y(k) - \Bigl( \sum_{n=0}^{N_a-1} g_{n,a}\, x_{n,a}(k, \xi_a) + \sum_{n=0}^{N_b-1} g_{n,b}\, x_{n,b}(k, \xi_b) \Bigr)}_{e(k)} \Bigr)^2 \qquad (35)$$

where $H$ is the observation number, $y(k)$ is the system output, and $x_{n,a}(k, \xi_a)$ and $x_{n,b}(k, \xi_b)$ are the filtered output and the filtered input, respectively. The criterion $J(\xi)$ can be written in the following matrix form:

$$J(\xi) = \frac{1}{2} \bigl( Y - \Phi \hat{C} \bigr)^T \bigl( Y - \Phi \hat{C} \bigr) = \frac{1}{2} \bigl\| Y - \Phi \hat{C} \bigr\|^2 \qquad (36)$$

with $\Phi$ a matrix that includes the state vectors $X_{N_i}(k)$ for $(i = a, b)$ given by (24) for $(k = 1, \ldots, H)$:

$$\Phi = \left[ \Phi_{N_a} \;\; \Phi_{N_b} \right] \qquad (37)$$

where

$$\Phi_{N_i} = \left[ X_{N_i}^T(1) \;\; X_{N_i}^T(2) \;\; \cdots \;\; X_{N_i}^T(H) \right]^T, \qquad i = a, b \qquad (38)$$

and $Y$ and $\mathcal{E}$ include the output and the truncation error, respectively, for $(k = 1, \ldots, H)$:

$$Y = \left[ y(1) \;\; \cdots \;\; y(H) \right]^T, \qquad \mathcal{E} = \left[ e(1) \;\; \cdots \;\; e(H) \right]^T \qquad (39)$$

The optimal vector $\hat{C}$ of the Fourier coefficients is identified by the Least Squares (LS) method as follows:

$$\hat{C} = \bigl( \Phi^T \Phi \bigr)^{-1} \Phi^T Y \qquad (40)$$

Criterion (35) is nonlinear with respect to the poles $\xi_i$ $(i = a, b)$; the minimum can be found by an iterative method: we start with an initial estimate of $\xi_i$, which is refined at each step until the poles converge. Note that, for most nonlinear optimization methods, the minimum obtained is a local minimum located in the neighborhood of the initial estimate, so the latter should be carefully determined. The identification of the initial poles is made by minimizing the quadratic criterion $J_2(\xi)$:

$$J_2(\xi) = 10 \log_{10} \frac{\sum_{k=1}^{H} \bigl[ y(k) - \hat{y}(k, C, \xi) \bigr]^2}{\sum_{k=1}^{H} \bigl[ y(k) \bigr]^2} \qquad (41)$$

Once the initial poles $\xi_i$ $(i = a, b)$ are determined, we proceed to their refinement using the Newton-Raphson method, which determines the poles vector $\xi$ at iteration $(m+1)$, denoted $\xi^{m+1}$, from the poles vector at iteration $m$, $\xi^m$, as follows:

$$\xi^{m+1} = \xi^m - \mu \left( \frac{\partial^2 J(\xi)}{\partial \xi^2} \right)^{-1}_{\xi = \xi^m} \left( \frac{\partial J(\xi)}{\partial \xi} \right)_{\xi = \xi^m} \qquad (42)$$

We note that $\mu = [\mu_a, \mu_b]$ is selected so that:

$$J(\xi^{m+1}) < J(\xi^m) \qquad (43)$$

The gradient of $J(\xi)$ with respect to $\xi$ is obtained by differentiating equation (36):

$$\frac{\partial J(\xi)}{\partial \xi} = \mathcal{E}^T \left( - \frac{\partial \Phi}{\partial \xi} \hat{C} - \Phi \frac{\partial \hat{C}}{\partial \xi} \right)$$

From Property 2.1 (the residual is orthogonal to the columns of $\Phi$, so $\mathcal{E}^T \Phi = 0$), the gradient of $J(\xi)$ with respect to $\xi$ simplifies to:

$$\frac{\partial J(\xi)}{\partial \xi} = - \mathcal{E}^T \frac{\partial \Phi}{\partial \xi} \hat{C} \qquad (44)$$

and then, from (44), the Hessian of $J(\xi)$ is given by:

$$\frac{\partial^2 J(\xi)}{\partial \xi^2} = \left( \frac{\partial \Phi}{\partial \xi} \hat{C} + \Phi \frac{\partial \hat{C}}{\partial \xi} \right)^T \left( \frac{\partial \Phi}{\partial \xi} \hat{C} + \Phi \frac{\partial \hat{C}}{\partial \xi} \right) - \mathcal{E}^T \left( \frac{\partial^2 \Phi}{\partial \xi^2} \hat{C} + 2\, \frac{\partial \Phi}{\partial \xi} \frac{\partial \hat{C}}{\partial \xi} \right) \qquad (45)$$

where the gradient of $\hat{C}$ with respect to $\xi$ is determined from relation (40) as:

$$\frac{\partial \hat{C}}{\partial \xi} = \bigl( \Phi^T \Phi \bigr)^{-1} \left[ \frac{\partial \Phi^T}{\partial \xi} Y - \left( \frac{\partial \Phi^T}{\partial \xi} \Phi + \Phi^T \frac{\partial \Phi}{\partial \xi} \right) \hat{C} \right] \qquad (46)$$

and the derivative of $\Phi$, given by (37), with respect to $\xi$ is the following matrix:

$$\frac{\partial \Phi}{\partial \xi} = \begin{bmatrix} \dfrac{\partial \Phi_{N_a}}{\partial \xi_a} & \dfrac{\partial \Phi_{N_a}}{\partial \xi_b} \\[2mm] \dfrac{\partial \Phi_{N_b}}{\partial \xi_a} & \dfrac{\partial \Phi_{N_b}}{\partial \xi_b} \end{bmatrix} = \begin{bmatrix} \dfrac{\partial \Phi_{N_a}}{\partial \xi_a} & 0 \\[2mm] 0 & \dfrac{\partial \Phi_{N_b}}{\partial \xi_b} \end{bmatrix} \qquad (47)$$

then

$$\frac{\partial^2 \Phi}{\partial \xi^2} = \begin{bmatrix} \dfrac{\partial^2 \Phi_{N_a}}{\partial \xi_a^2} & 0 \\[2mm] 0 & \dfrac{\partial^2 \Phi_{N_b}}{\partial \xi_b^2} \end{bmatrix} \qquad (48)$$

The new poles vector $\xi^{m+1}$ given by (42) can thus be computed from the gradient and the Hessian of the criterion $J(\xi)$, given by (44) and (45) respectively. These quantities depend on the first and second derivatives of $\Phi$ with respect to $\xi = [\xi_a, \xi_b]$, given by (47) and (48) respectively.

Lemma 3.1. The first and second derivatives of the matrix $\Phi$ containing the ARX-Laguerre state vectors $X_{N_i}(k)$ for $(i = a, b)$ with respect to $\xi = [\xi_a, \xi_b]$ can be formulated, as proved in Appendix A, by the following relations. The first derivative of $\Phi_{N_i}$ with respect to $\xi_i$, $(i = a, b)$, is:

$$\frac{\partial \Phi_{N_i}}{\partial \xi_i} = K_i\, \Phi_{N_i+1} \Psi_{N_i+1}^T \qquad (49)$$

The second derivative of $\Phi_{N_i}$ with respect to $\xi_i$, $(i = a, b)$, is:

$$\frac{\partial^2 \Phi_{N_i}}{\partial \xi_i^2} = K_i^2 \left[ 2 \xi_i\, \Phi_{N_i+1} \Psi_{N_i+1}^T + \Phi_{N_i+2} \Psi_{N_i+2}^T \Psi_{N_i+1}^T \right] \qquad (50)$$

with, for $(i = a, b)$:

$$K_i = \frac{1}{1 - \xi_i^2}; \qquad \Psi_{N_i+1} = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & \cdots & 0 & 0 & 0 \\ 0 & -2 & 0 & 3 & \cdots & 0 & 0 & 0 \\ 0 & 0 & -3 & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -(N_i - 1) & 0 & N_i \end{bmatrix} \qquad (51)$$ ■

Remark: The calculation of the second derivative requires voluminous computations, which encourages abandoning the Newton-Raphson method in favor of the Gauss-Newton method, where the approximated Hessian is obtained by eliminating the second term of the right-hand side of relation (45). The approximated Hessian then requires only the first-order sensitivity functions:

$$\frac{\partial^2 J(\xi)}{\partial \xi^2} \approx \left( \frac{\partial \Phi}{\partial \xi} \hat{C} + \Phi \frac{\partial \hat{C}}{\partial \xi} \right)^T \left( \frac{\partial \Phi}{\partial \xi} \hat{C} + \Phi \frac{\partial \hat{C}}{\partial \xi} \right) \qquad (52)$$
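As an illustrative sketch of the overall refinement loop (Python; not the paper's implementation: the analytic derivatives (44)-(52) are replaced here by finite differences with a diagonal curvature approximation, and all names are ours), one can evaluate $J(\xi)$ via (35)-(40) and damp the step as in (43):

```python
import numpy as np

def filter_bank(s, xi, N):
    # Laguerre-filtered signal: time-domain form of the recurrence (8)
    x = np.zeros((N, len(s)))
    for k in range(len(s) - 1):
        x[0, k+1] = xi * x[0, k] + np.sqrt(1 - xi**2) * s[k]
        for n in range(1, N):
            x[n, k+1] = xi * x[n, k] + x[n-1, k] - xi * x[n-1, k+1]
    return x

def criterion(xi_pair, u, y, Na, Nb):
    """J(xi) of (35): build Phi per (37)-(38), estimate C by LS (40)."""
    Phi = np.vstack([filter_bank(y, xi_pair[0], Na),
                     filter_bank(u, xi_pair[1], Nb)]).T      # H x (Na+Nb)
    C, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    e = y - Phi @ C
    return 0.5 * e @ e

def refine_poles(xi0, u, y, Na, Nb, steps=30, h=1e-4):
    """Damped Newton-type refinement of xi = [xi_a, xi_b] with
    finite-difference gradient/curvature and the step-size rule (43)."""
    xi = np.array(xi0, dtype=float)
    J0 = criterion(xi, u, y, Na, Nb)
    for _ in range(steps):
        g, Hd = np.zeros(2), np.zeros(2)
        for i in range(2):
            d = np.zeros(2); d[i] = h
            Jp = criterion(xi + d, u, y, Na, Nb)
            Jm = criterion(xi - d, u, y, Na, Nb)
            g[i] = (Jp - Jm) / (2 * h)
            Hd[i] = max((Jp - 2 * J0 + Jm) / h**2, 1e-8)  # keep curvature positive
        mu, accepted = 1.0, False
        while mu > 1e-6:                 # shrink mu until J decreases, cf. (43)
            cand = np.clip(xi - mu * g / Hd, -0.99, 0.99)  # enforce |xi_i| < 1
            Jc = criterion(cand, u, y, Na, Nb)
            if Jc < J0:
                xi, J0, accepted = cand, Jc, True
                break
            mu *= 0.5
        if not accepted:
            break                        # no descent achievable: local minimum
    return xi
```

The paper's analytic expressions (49)-(52) avoid the finite differencing entirely; this sketch only mirrors the structure of the iteration (initial poles, damped step, stop at convergence).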

Algorithm 1.
1. Choose two initial poles $\xi^0 = [\xi_a^0, \xi_b^0]$.
2. Fix $\mu = 1$ and $m = 0$.
3. Compute the optimal poles as follows:
 (a) Form the matrices $\Phi_{N_i}$, $(i = a, b)$, using relation (38) and the matrix $\Phi$ given by (37).
 (b) Compute the first and second derivatives of $\Phi_{N_i}$ with respect to $\xi_i^m$ from relations (49) and (50), respectively.
 (c) Determine the first and second derivatives of $\Phi$ from relations (47) and (48).
 (d) Estimate the optimal vector $\hat{C}$ of the Fourier coefficients from relation (40).
 (e) Calculate the gradient of the vector $\hat{C}$ with respect to $\xi^m$ from relation (46).
 (f) Determine the gradient and the Hessian of $J(\xi)$ from relations (44) and (45), respectively.
 (g) Evaluate $\xi^{m+1}$ from relation (42).
 (h) If $(|\xi_a^{m+1}| > 1)$ or $(|\xi_b^{m+1}| > 1)$:
  i. Adjust $\mu_a$ or $\mu_b$, with $(\mu_i < 1)$.
  ii. Go to step 3.
 (i) Else:
  i. If $|\xi_a^{m+1} - \xi_a^m| > \varepsilon$ or $|\xi_b^{m+1} - \xi_b^m| > \varepsilon$:
   A. Increment $m$, $(m = m + 1)$.
   B. Go to step (a).
  ii. Else:
   A. $\xi_{opt,a} = \xi_a^{m+1}$
   B. $\xi_{opt,b} = \xi_b^{m+1}$

3.2.2. Method based on genetic algorithms

The Laguerre poles appear in the criterion $J(\xi)$, given by (35), in a nonlinear way, and the two previous Laguerre pole optimization methods are adequate for deterministic systems. In this section we propose to compute the optimum values of the Laguerre poles by using Genetic Algorithms, classified as evolutionary optimization methods [26,27]. They search for an optimum of a function defined on a data space, and are applied to a wide variety of problems [28] owing to their simplicity and efficiency [29]. After fixing the expression of the objective function to be optimized, probabilistic steps create an initial population of individuals [30]. The population then undergoes genetic operations (evaluation, mutation, crossover, ...) to converge to the optimal solution of the problem. The optimization with GA is summarized by the following steps:

Initialization: It is usually random, and it is often advantageous to include the maximum knowledge about the problem.

Evaluation: This step computes the quality of the individuals by allocating a positive value, called "ability" or "fitness", to each one. The highest value is assigned to the individual that minimizes (or maximizes) the objective function.

Selection: This step selects a definite number of individuals of the current population. The selection is probabilistic and characterized by a parameter called the selection pressure (SP), which fixes the number of selected individuals.

Crossover: The genetic crossover operator creates new individuals. From two randomly selected parents, crossover produces two descendants. This step affects only a limited number of individuals, established by the crossover rate (CR).

Mutation: The mutation provides a small disruption to a number (MR) of individuals. The effect of this operator is to counteract the attraction exerted by the best individuals, which allows other areas of the search space to be explored.

The optimum values of the Laguerre poles are determined by using the criterion $J(\xi)$ given by (35) as the objective (fitness) function; minimizing it with a very small threshold expands the explored solution space of the poles, which supports reaching better solutions. After determining the objective function to be minimized, we randomly generate a population of individuals $\xi = [\xi_a, \xi_b]$. The abilities of the individuals $\xi$ are evaluated by the criterion $J(\xi)$, and the individuals with the highest abilities are selected to undergo the different genetic operations (crossover, mutation and selection). After a set of generations, the genetic algorithm converges to the optimal values. The new genetic algorithm is constructed following the strategy outlined in Algorithm 2.

Fig. 2. Input signal.


Algorithm 2.
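A minimal real-coded sketch of this strategy (illustrative Python; the operators and their tuning are simplified stand-ins, and the demonstration objective below replaces the criterion $J(\xi)$ of (35)):

```python
import numpy as np

def ga_minimize(J, pop_size=80, n_gen=60, cr=0.8, mr=0.1, seed=0):
    """Real-coded GA over xi = [xi_a, xi_b] in (-1, 1)^2: tournament
    selection, arithmetic crossover (rate cr), Gaussian mutation (rate mr)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-0.99, 0.99, size=(pop_size, 2))
    best, best_J = pop[0].copy(), np.inf
    for _ in range(n_gen):
        fit = np.array([J(ind) for ind in pop])
        k = int(np.argmin(fit))
        if fit[k] < best_J:
            best, best_J = pop[k].copy(), fit[k]
        # selection: the better of two randomly drawn individuals survives
        i, j = rng.integers(pop_size, size=(2, pop_size))
        pop = pop[np.where(fit[i] < fit[j], i, j)]
        # arithmetic crossover of consecutive pairs
        for p in range(0, pop_size - 1, 2):
            if rng.random() < cr:
                a = rng.random()
                pop[p], pop[p+1] = (a * pop[p] + (1 - a) * pop[p+1],
                                    a * pop[p+1] + (1 - a) * pop[p])
        # Gaussian mutation of a fraction mr of individuals
        mask = rng.random(pop_size) < mr
        pop[mask] += rng.normal(0.0, 0.05, size=(int(mask.sum()), 2))
        pop = np.clip(pop, -0.99, 0.99)
        pop[0] = best                      # elitism: keep the best so far
    return best, best_J

# demonstration with a stand-in objective whose minimum is at (0.5, -0.3)
xi, J_best = ga_minimize(lambda x: (x[0] - 0.5)**2 + (x[1] + 0.3)**2)
```

In the paper's setting, `J` would be the least-squares criterion (35) evaluated on the identification data, so that the population converges to the optimal poles.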

4. Simulation examples

4.1. Example 1

This example illustrates the identification of a fourth-order Butterworth filter [31] given by the following transfer function:

$$H(z) = H_1(z)\, H_2(z) \qquad (53)$$

with

$$H_1(z) = \frac{0.078 + 0.1559 z^{-1} + 0.078 z^{-2}}{1 - 1.3209 z^{-1} + 0.6327 z^{-2}} \qquad (54)$$

$$H_2(z) = \frac{0.0619 + 0.1238 z^{-1} + 0.0619 z^{-2}}{1 - 1.0486 z^{-1} + 0.2961 z^{-2}} \qquad (55)$$

This system contains 9 parameters. The system input, drawn in Fig. 2, is a pseudo-random sequence with variable amplitude. The output, shown in Fig. 3, is disturbed by an additive white Gaussian noise, independent of the input signal, with a Signal-to-Noise Ratio (SNR) equal to 10 dB. For the different pole optimization methods presented in this paper, we select the optimum poles ($\xi_{opt,a}$ and $\xi_{opt,b}$) for a set of $H = 1000$ observations and for different values of the truncation orders ($N_a$ and $N_b$), where the Fourier coefficients are identified by the Least Squares (LS) method. In this simulation we applied the Monte Carlo method with 60 independent runs. For each truncation order, we evaluate the performance of the pole optimisation algorithms for the ARX-Laguerre model in terms of the Parameter Reduction Ratio (PRR) and the Mean Square Error (MSE), given by:

$$PRR = \left( 1 - \frac{\text{parameter number of the ARX-Laguerre model}}{\text{parameter number of the ARX model}} \right) \times 100 \qquad (56)$$

$$MSE_{dB} = 10 \log_{10} \left( \frac{1}{H} \sum_{k=1}^{H} \bigl( y(k) - \hat{y}(k) \bigr)^2 \right) \qquad (57)$$

where $y$ is the measured output and $H$ is the number of observations.

To evaluate the effect of the population size, several runs were made with different population sizes; the convergence of the Laguerre poles $\xi_a$ and $\xi_b$ is faster for a larger population size. Figs. 4 and 5 illustrate the behavior of the estimated Laguerre poles over the generations of the GA method. From these figures we can conclude that the algorithm converges in relatively few generations when we fix a population of 80 individuals. Moreover, to evaluate the efficiency of the proposed pole optimisation algorithms as well as that proposed in [3], we study the $MSE_{dB}$ variation and the global computing time (given for the 1000 iterations) for different couples of truncation orders $N_a$ and $N_b$. The simulation results are presented in Table 1 and Figs. 6 and 7.

The goal of using the ARX-Laguerre model is to reduce the parameter number of the ARX model for a lower MSE. From Table 1 and Fig. 6 we can conclude that the Genetic Algorithm is the most efficient in terms of MSE. For example, with the Genetic Algorithm, to obtain $MSE_{dB} \approx -54$ dB the parameter number does not exceed $(N_a = N_b = 2)$, hence a parameter reduction ratio of $(PRR = 55\%)$. With the Newton-Raphson algorithm, we obtain the same MSE for $(N_a = 3, N_b = 2)$, hence a parameter reduction ratio $(PRR \le 44.44\%)$. With the Bouzrara et al. algorithm, this $MSE_{dB}$ is obtained for $(N_a = N_b = 3)$, hence a parameter reduction ratio $(PRR = 33.33\%)$. To validate the three algorithms, we draw in Figs. 8 to 10 the variations of the outputs of the ARX-Laguerre model and of the ARX model by using, for pole optimization, the Bouzrara et al. algorithm, the Newton-Raphson algorithm and the Genetic Algorithm, respectively, for $(N_a = N_b = 2)$. We note that with the truncation orders $(N_a = N_b = 2)$, the performance of the ARX-Laguerre model in terms of $MSE_{dB}$ is significant when the Laguerre poles are optimized by the Genetic Algorithm.

Fig. 3. Output signal.
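The two performance measures (56)-(57) are straightforward to script (illustrative Python helpers; the names are ours):

```python
import numpy as np

def prr(n_arx_laguerre, n_arx):
    """Parameter Reduction Ratio (56), in percent."""
    return (1 - n_arx_laguerre / n_arx) * 100

def mse_db(y, y_hat):
    """Mean Square Error (57), in dB."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 10 * np.log10(np.mean((y - y_hat)**2))
```

For Example 1, an ARX-Laguerre model with $N_a = N_b = 2$ against the 9-parameter ARX model gives `prr(4, 9)` of about 55.6%, the figure quoted in the text.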

Fig. 4. Estimation of pole $\xi_a$ for several population sizes.

Fig. 5. Estimation of pole $\xi_b$ for several population sizes.

Fig. 6. Variation of the $MSE_{dB}$.

Fig. 7. Computing time.

Table 1. Performances of the optimisation methods of the Laguerre poles.

| Na | Nb | PRR (%) | Bouzrara et al. MSEdB | Newton-Raphson MSEdB | Genetic algorithm MSEdB |
|----|----|---------|-----------------------|----------------------|-------------------------|
| 1  | 1  | 77.77   | -15.5                 | -28.12               | -32.09                  |
| 1  | 2  | 66.66   | -33.46                | -39.71               | -45.30                  |
| 2  | 2  | 55.55   | -45.38                | -48.80               | -54.95                  |
| 3  | 2  | 44.44   | -50.81                | -53.68               | -59.08                  |
| 3  | 3  | 33.33   | -55.51                | -57.15               | -59.75                  |
| 4  | 3  | 22.22   | -60.76                | -59.83               | -60.85                  |
| 4  | 4  | 11.11   | -61.11                | -61.32               | -61.14                  |
| 4  | 5  | 0       | -61.75                | -61.94               | -62.04                  |


4.2. Example 2

This application consists in identifying a bench-scale hot airflow device, the PT326 Process Trainer (Fig. 11), from Feedback Ltd. This thermal process exhibits many linear ranges, output drift and inherent process noise, and it has been used many times to illustrate the performance of other identification methods, as in [25]. Fig. 12 shows a schematic description of this process. Air is pulled by a fan into a 30 cm long tube through a valve and heated by a mesh of resistor wires at the inlet; the air temperature is measured by a thermocouple at the outlet. The input u(t) is the voltage over the resistor wires that heat the incoming air; the output y(t) is the outlet air temperature measurement. A process perturbation can be realized by adding ambient air, the quantity of which is tuned by the angle α. Figs. 13 and 14 plot the input voltage u and the output temperature y, respectively, at a sampling time of 0.08 s. The PT326 process can be described by the ARX model (1) with $(n_a = 6)$ and $(n_b = 6)$; it therefore contains 12 parameters, which are estimated using the least-squares method and summarized in Table 2.
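The least-squares step used here can be sketched as follows (illustrative Python, not the paper's code; synthetic data stand in for the PT326 measurements):

```python
import numpy as np

def estimate_arx(u, y, na, nb):
    """LS estimate of the ARX parameters h_a(j), h_b(j) of model (1)."""
    d = max(na, nb)
    # regressor row for sample k: [y(k-1)..y(k-na), u(k-1)..u(k-nb)]
    Phi = np.array([np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]])
                    for k in range(d, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[d:], rcond=None)
    return theta[:na], theta[na:]   # h_a(1..na), h_b(1..nb)
```

On PT326 data one would call `estimate_arx(u, y, 6, 6)` to obtain the twelve parameters reported in Table 2.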

Fig. 8. Validation of pole optimization using the Bouzrara et al. algorithm.

Fig. 9. Validation of pole optimization using the Newton-Raphson algorithm.

Fig. 10. Validation of pole optimization using the Genetic algorithm.

Fig. 11. Process Trainer PT326.

Fig. 12. Schematic description of the heat transfer process.

The performances of the pole optimisation algorithms for several combinations of truncation orders $(N_a, N_b)$ are evaluated by the $MSE_{dB}$ summarized in Table 3 and presented in Fig. 15. We plot in Fig. 16 the ARX model output and the PT326 output for $(n_a = n_b = 2)$. In Figs. 17 to 19, we draw the output of the PT326 process and the ARX-Laguerre model output for the different pole optimization algorithms, fixing $N_a = N_b = 2$. In the case of the GA algorithm, the objective function to be optimized is given by equation (35), with several combinations of truncation orders $(N_a, N_b)$. A real-valued representation is used and the initial population is formed of 80 individuals, chosen after several tests with various population sizes. The algorithm stops when the value of the mean square error is lower than the threshold $\varepsilon_t$. The values of the genetic operations are given in Table 4. The performance in terms of MSE is more significant when the Laguerre poles are estimated by the Newton-Raphson algorithm and the Genetic algorithm. This can be justified by the choice of the cost


function chosen in both algorithms. In fact, for the newly proposed algorithms we minimize the mean square error between the system output and the ARX-Laguerre model output, whereas in the Bouzrara et al. algorithm the Laguerre poles are estimated by optimizing two cost functions that depend on the Fourier coefficients.
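The comparison metric used throughout, the normalized mean square error in dB of criterion (41), can be computed as in this short sketch (the function name is our own assumption):

```python
import numpy as np

def mse_db(y, y_model):
    """Normalized mean square error in dB, as in criterion (41):
    10*log10( sum (y - y_model)^2 / sum y^2 )."""
    y = np.asarray(y, dtype=float)
    y_model = np.asarray(y_model, dtype=float)
    return 10.0 * np.log10(np.sum((y - y_model) ** 2) / np.sum(y ** 2))
```

As defined, a better fit gives a more negative value; the tables in the paper report the magnitude of this quantity.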

5. Conclusion

In this article, new algorithms for ARX-Laguerre model pole optimization are proposed. It is clear that the Genetic Algorithm is more efficient than the Bouzrara et al. algorithm and


the Newton-Raphson algorithm in terms of parameter number reduction. This reveals that the Genetic Algorithm copes very well with offline identification of the ARX-Laguerre model.

Table 3
Performances of pole optimisation algorithms for the PT326 process.

Na = Nb | PRR | Bouzrara et al. MSEdB | Newton-Raphson MSEdB | Genetic Algorithm MSEdB
1 | 83.33 | 15.5 | 26.43 | 32.05
2 | 66.66 | 36.53 | 39.14 | 44.57
3 | 50 | 44.24 | 45.82 | 49.59
4 | 33.33 | 50.53 | 51.20 | 52.27
5 | 16.66 | 52.62 | 53.15 | 53.68
6 | 0 | 53.61 | 53.77 | 54.05

Fig. 13. Input signal of the PT326 process (u in V versus time steps).

Fig. 15. Variation of the MSEdB for the PT326 process (MSE in dB versus truncation order N = Na + Nb, for the Bouzrara et al., Newton-Raphson and Genetic algorithms).

Fig. 14. Output signal of the PT326 process (temperature T in °C versus time steps).

Fig. 16. Validation of the ARX model (ARX model output and PT326 process output versus the number of iterations).

Table 2
ARX model for the PT326 process (na = nb = 6).

ARX model parameters | Estimated values
ha(j) (j = 1:na) | [−1.023, 0.08599, 0.03051, 0.1101, 0.05723, 0.002241]
hb(j) (j = 1:nb) | [0.003386, 0.00213, 0.002173, 0.0662, 0.05763, 0.01514]
MSEdB | 49.17




Table 4
Parameters of the genetic algorithm.

Parameters | Values
Selection pressure (SP) | 1.9
Crossover rate (CR) | 80%
Mutation rate (MR) | 3.3%

However, the Newton-Raphson algorithm, as well as the Bouzrara et al. algorithm, allows online identification of the ARX-Laguerre model, and so they are convenient for adaptive control.


Fig. 17. Validation of the ARX-Laguerre model using Bouzrara et al. algorithm.

Fig. 18. Validation of the ARX-Laguerre model using Newton-Raphson algorithm.

Fig. 19. Validation of the ARX-Laguerre model using Genetic Algorithm.

Appendix A. Proof of Lemma 3.1

The gradient of \mathbb{X}_{N_i} with respect to \xi_i is given from relation (38) by:

\frac{\partial \mathbb{X}_{N_i}}{\partial \xi_i} = \left[ \frac{\partial X_{N_i}^T(1)}{\partial \xi_i} \;\; \frac{\partial X_{N_i}^T(2)}{\partial \xi_i} \;\; \cdots \;\; \frac{\partial X_{N_i}^T(H)}{\partial \xi_i} \right]^T, \quad i = a, b \qquad (A.1)

where

\frac{\partial X_{N_i}(k)}{\partial \xi_i} = \left[ \frac{\partial x_{0,i}(k)}{\partial \xi_i} \;\; \frac{\partial x_{1,i}(k)}{\partial \xi_i} \;\; \cdots \;\; \frac{\partial x_{N_i-1,i}(k)}{\partial \xi_i} \right]^T, \quad i = a, b \qquad (A.2)

It remains to find the analytical expressions of the first and second derivatives of x_{n,i}(k) with respect to \xi_i for (n = 0, \ldots, N_i - 1) and (i = a, b). We have:

\begin{cases} x_{n,a}(k) = Z^{-1}\{X_{n,a}(z)\}, \quad X_{n,a}(z) = L_n^a(z, \xi_a) Y(z), & (n = 0, \ldots, N_a - 1) \\ x_{n,b}(k) = Z^{-1}\{X_{n,b}(z)\}, \quad X_{n,b}(z) = L_n^b(z, \xi_b) U(z), & (n = 0, \ldots, N_b - 1) \end{cases} \qquad (A.3)

From relations (A.3) we obtain:

\begin{cases} \dfrac{\partial X_{n,a}(z)}{\partial \xi_a} = \dfrac{\partial L_n^a(z, \xi_a)}{\partial \xi_a} Y(z), & (n = 0, \ldots, N_a - 1) \\ \dfrac{\partial X_{n,b}(z)}{\partial \xi_b} = \dfrac{\partial L_n^b(z, \xi_b)}{\partial \xi_b} U(z), & (n = 0, \ldots, N_b - 1) \end{cases} \qquad (A.4)

These relations link the derivatives of the ARX-Laguerre filters with respect to the poles to the Laguerre filters themselves. From the Z-transform of the nth Laguerre function given by (8), the derivative of the Laguerre function with respect to \xi_i (i = a, b) is:

\begin{cases} \dfrac{\partial L_0^i(z, \xi_i)}{\partial \xi_i} = \dfrac{L_1^i(z, \xi_i)}{1 - \xi_i^2} \\ \dfrac{\partial L_n^i(z, \xi_i)}{\partial \xi_i} = \dfrac{L_n^i(z, \xi_i)}{1 - \xi_i^2} \left[ \dfrac{-n(z - \xi_i)^2 + (n + 1)(1 - \xi_i z)^2}{(z - \xi_i)(1 - \xi_i z)} \right], \quad n = 1, \ldots, N_i - 1 \end{cases} \qquad (A.5)

From the recurrence relation linking the Laguerre filters given by (8), the derivative of the nth Laguerre function given by (A.5) can be written as:



\begin{cases} \dfrac{\partial L_0^i(z, \xi_i)}{\partial \xi_i} = K_i L_1^i(z, \xi_i) \\ \dfrac{\partial L_n^i(z, \xi_i)}{\partial \xi_i} = K_i \left[ (n + 1) L_{n+1}^i(z, \xi_i) - n L_{n-1}^i(z, \xi_i) \right], \quad n = 1, \ldots, N_i - 1 \end{cases} \qquad (A.6)

with K_i = \dfrac{1}{1 - \xi_i^2} for i = a, b.

The substitution of (A.6) in (A.4) and the inverse Z-transform allow us to write, for (i = a, b):

\begin{cases} \dfrac{\partial x_{0,i}(k)}{\partial \xi_i} = K_i x_{1,i}(k) \\ \dfrac{\partial x_{n,i}(k)}{\partial \xi_i} = K_i \left[ (n + 1) x_{n+1,i}(k) - n x_{n-1,i}(k) \right], \quad n = 1, \ldots, N_i - 1 \end{cases} \qquad (A.7)
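Relation (A.7) reduces the pole sensitivity of each filtered state to a linear combination of its neighbouring states. A minimal numerical sketch (the function name and array layout are our own assumptions) is:

```python
import numpy as np

def state_derivatives(x, xi):
    """Derivatives of the filtered states w.r.t. the Laguerre pole xi,
    via recurrence (A.7).  x has shape (N+1, H): states x_0..x_N are
    needed to form the derivatives of x_0..x_{N-1}."""
    K = 1.0 / (1.0 - xi ** 2)
    N = x.shape[0] - 1
    dx = np.empty((N, x.shape[1]))
    dx[0] = K * x[1]                                   # dx_0 = K x_1
    for n in range(1, N):
        dx[n] = K * ((n + 1) * x[n + 1] - n * x[n - 1])
    return dx
```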

Using the notations introduced in (A.1) and (A.2), we can deduce the expression of the first derivative of X_{N_i}(k) with respect to \xi_i:

\frac{\partial X_{N_i}(k)}{\partial \xi_i} = K_i \Psi_{N_i+1} X_{N_i+1}(k) \qquad (A.8)

where X_{N_i+1}(k) is an (N_i + 1)-dimensional vector defined as:

X_{N_i+1}(k) = \left[ x_{0,i}(k) \;\; x_{1,i}(k) \;\; x_{2,i}(k) \;\; \cdots \;\; x_{N_i-1,i}(k) \;\; x_{N_i,i}(k) \right]^T \qquad (A.9)

and \Psi_{N_i+1} is an (N_i \times (N_i + 1))-dimensional matrix:

\Psi_{N_i+1} = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & \cdots & 0 & 0 & 0 \\ 0 & -2 & 0 & 3 & \cdots & 0 & 0 & 0 \\ 0 & 0 & -3 & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -(N_i - 1) & 0 & N_i \end{bmatrix} \qquad (A.10)
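The banded matrix \Psi_{N_i+1}, which carries the coefficients of recurrence (A.7) so that the stacked derivative equals K \Psi X as in (A.8), can be assembled programmatically; a sketch (the helper name is ours) is:

```python
import numpy as np

def psi_matrix(N):
    """Build the N x (N+1) matrix Psi of relation (A.10):
    row n carries -n in column n-1 and (n+1) in column n+1."""
    Psi = np.zeros((N, N + 1))
    for n in range(N):
        if n >= 1:
            Psi[n, n - 1] = -n
        Psi[n, n + 1] = n + 1
    return Psi
```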

From relation (A.8) we can deduce the first derivative of \mathbb{X}_{N_i} with respect to \xi_i:

\frac{\partial \mathbb{X}_{N_i}}{\partial \xi_i} = K_i \mathbb{X}_{N_i+1} \Psi_{N_i+1}^T \qquad (A.11)

where \mathbb{X}_{N_i+1} is the matrix defined according to the vectors X_{N_i+1}(k), for (k = 1, \ldots, H), as:

\mathbb{X}_{N_i+1} = \left[ X_{N_i+1}^T(1) \;\; X_{N_i+1}^T(2) \;\; \cdots \;\; X_{N_i+1}^T(H) \right]^T \qquad (A.12)

The second derivative of \mathbb{X}_{N_i} with respect to \xi_i can be formulated by differentiating (A.11) as follows:

\frac{\partial^2 \mathbb{X}_{N_i}}{\partial \xi_i^2} = K_i^2 \left[ 2 \xi_i \mathbb{X}_{N_i+1} \Psi_{N_i+1}^T + \mathbb{X}_{N_i+2} \Psi_{N_i+2}^T \Psi_{N_i+1}^T \right] \qquad (A.13)

References

[1] Bouzrara K, Garna T, Ragot J, Messaoud H. Decomposition of an ARX model on Laguerre orthonormal bases. ISA Trans 2012;51(6):848–60.
[2] Gnaba S, Garna T, Bouzrara K, Ragot J, Messaoud H. Online identification of the bilinear model expansion on Laguerre orthonormal bases. Int J Control 2014;87(3):441–63.
[3] Bouzrara K, Garna T, Ragot J, Messaoud H. Online identification of the ARX model expansion on Laguerre orthonormal bases with filters on model input and output. Int J Control 2013;86(3):369–85.
[4] Garna T, Bouzrara K, Ragot J, Messaoud H. Nonlinear system modeling based on bilinear Laguerre orthonormal bases. ISA Trans 2013;52(3):301–17.
[5] Dankers AG, Westwick DT. On the relationship between the enforced convergence criterion and the asymptotically optimal Laguerre pole. IEEE Trans Autom Control 2012;57(5):1102–9.
[6] Malti R, Ekongolo SB, Ragot J. Dynamic SISO and MISO system approximations based on optimal Laguerre models. IEEE Trans Autom Control 1998;43(9):1318–23.
[7] Ngia LSH. Separable nonlinear least-squares methods for efficient off-line and on-line modeling of systems using Kautz and Laguerre filters. IEEE Trans Circuits Syst II: Analog Digit Signal Process 2001;48(6):562–79.
[8] Fu Y, Dumont GA. An optimum time scale for discrete Laguerre network. IEEE Trans Autom Control 1993;38(6):934–8.
[9] Tanguy N, Morvan R, Vilbé P, Calvez LC. Online optimization of the time scale in adaptive Laguerre-based filters. IEEE Trans Signal Process 2000;48(4):1184–7.
[10] Oliveira e Silva T. On the determination of the optimal pole position of Laguerre filters. IEEE Trans Signal Process 1995;43(9):2079–87.
[11] Belt HJW, Den Brinker AC. Optimality condition for truncated generalized Laguerre networks. Int J Circuit Theory Appl 1995;23(3):227–35.
[12] Prokhorov SA, Kulikovskikh IM. Unique condition for generalized Laguerre functions to solve pole position problem. Signal Process 2015;108:25–9.
[13] Prokhorov SA, Kulikovskikh IM. Pole position problem for Meixner filters. Signal Process 2016;120:8–12.
[14] Krifa A, Bouzrara K. Parametric complexity reduction of discrete-time linear systems having a slow initial onset or delay. Math Model Anal 2016;21(5):668–84.
[15] Maraoui S, Krifa A, Bouzrara K. System approximations based on Meixner-like models. IET Signal Process 2016;10(5):457–64.
[16] Wahlberg B. System identification using Kautz models. IEEE Trans Autom Control 1994;39(6):1276–81.
[17] Ninness B, Gustafsson F. A unifying construction of orthonormal bases for system identification. IEEE Trans Autom Control 1997;42(4):515–21.
[18] El Anes A, Maraoui S, Bouzrara K. Order reduction of MIMO ARX systems using Laguerre bases. Int J Eng Syst Model Simul 2016;8(4):307–15.
[19] El Anes A, Maraoui S, Bouzrara K. Optimal expansions of multivariable ARX processes on Laguerre bases via the Newton-Raphson method. Int J Adapt Control Signal Process 2016;30(4):578–98.
[20] Den Brinker AC, Belt HJW. Model reduction by orthogonalized exponential sequences. In: ProRISC/IEEE Workshop on Circuits, Systems and Signal Processing; 1996. p. 77–82.
[21] Malti R, Maquin D, Ragot J. Optimality conditions for the truncated network of the generalized discrete orthonormal basis having real poles. In: Proceedings of the 37th IEEE Conference on Decision and Control; 1998. p. 2189–94.
[22] Oliveira e Silva T. Pole selection in GOBF models. In: Heuberger PSC, Van den Hof PMJ, Wahlberg B, editors. Modelling and identification with rational orthogonal basis functions; 2005. p. 297–336.
[23] Koza JR. Genetic programming: on the programming of computers by means of natural selection. Prentice-Hall; 1992.
[24] Koza JR. Genetic programming II: automatic discovery of reusable programs. Cambridge: The MIT Press; 1994.
[25] Ljung L. System identification: theory for the user. Prentice-Hall; 1999.
[26] Zhang J, Chung H, Lo W. Clustering-based adaptive crossover and mutation probabilities for genetic algorithms. IEEE Trans Evolut Comput 2007;11(3):326–35.
[27] Akbari R, Ziarati K. A multilevel evolutionary algorithm for optimizing numerical functions. Int J Ind Eng Comput 2011;2(2):419–30.
[28] Ting CK. On the mean convergence time of multi-parent genetic algorithms without selection. In: European Conference on Artificial Life; 2005. p. 403–12.
[29] Bies RR, Muldoon MF, Pollock BG, Manuck S, Smith G, Sale ME. A genetic algorithm-based, hybrid machine learning approach to model selection. J Pharmacokinet Pharmacodyn 2006;33(2):195–221.
[30] Schmitt LM. Theory of genetic algorithms II: models for genetic operators over the string-tensor representation of populations and convergence to global optima for arbitrary fitness function under scaling. Theor Comput Sci 2004;310(1–3):181–231.
[31] Lee J, Mathews VJ. A fast recursive least squares adaptive second-order Volterra filter and its performance analysis. IEEE Trans Signal Process 1993;41(3):1087–102.



Corresponding author. E-mail addresses: najehtawﬁ[email protected] (T. Najeh), [email protected] (A. Mbarek), [email protected] (K. Bouzrara), lotﬁ[email protected] (L. Nabli), [email protected] (H. Messaoud).

15,18,19]. Some authors were interested in the optimization of the GOB poles: Den Brinker et al. [20] developed a method to recursively determine GOB poles, while Malti et al. [21] and Oliveira e Silva [22] used the gradient algorithm to minimize the output quadratic error. Expanding the ARX coefficients on two Laguerre bases yields an easy representation and a good approximation capability for complex linear systems. The expansion of the ARX model on Laguerre bases was first suggested by Bouzrara et al. [1]; the resulting model is entitled the ARX-Laguerre model. The parsimony of the expansion is strongly linked to the choice of the poles defining both Laguerre bases. Bouzrara et al. [3] proposed an iterative algorithm to optimise the Laguerre poles based on an analytical solution which depends on the coefficients defining the ARX-Laguerre model. In this paper, we focus on Laguerre pole optimization of the ARX-Laguerre model by proposing two different methods. The first one uses an iterative algorithm based on the Newton-Raphson approximation technique, which deals with the black-box context where only the response of the system to a persistently exciting input signal is known. This method needs initial values of the Laguerre poles in the iteration procedure, which results in a local minimum. We suggest a procedure, in the black-box context, that yields the optimal values of the Laguerre poles and the least-squares estimates of the ARX-Laguerre model coefficients. The second proposed method uses Genetic Algorithms (GA), which have been widely adopted in recent years, are very powerful in stochastic system modelling, and have no special requirement on the form of the objective function. The processing of these algorithms can be achieved in many different ways by using some

http://dx.doi.org/10.1016/j.isatra.2017.05.015 0019-0578/& 2017 ISA. Published by Elsevier Ltd. All rights reserved.




stochastic rules, and they can converge to the global optimum with high probability. Genetic Algorithms [23,24] are stochastic optimization algorithms that can express complex structured problems through their hierarchy and can determine the feasible solution space automatically without any given superstructure; the optimum can then be sought automatically in the genetic evolutionary process. The paper is organized as follows: in Section 2, we present the ARX-Laguerre model by developing the coefficients of the classical ARX model on two independent Laguerre bases. In this context, we write the coefficients of the ARX-Laguerre model as a linear combination of Laguerre basis functions. Then, we present the network of input and output filters of the ARX-Laguerre model as well as its recursive representation. Section 3 presents, in a first part, an analytical Laguerre pole optimization in which the optimal poles are expressed in terms of the ARX-Laguerre model coefficients; in a second part, we propose two new methods for Laguerre pole optimization. Finally, Section 4 evaluates, through simulation examples, the performances of the proposed Laguerre pole algorithms for the ARX-Laguerre model, compared in terms of approximation quality. Moreover, we validate the proposed algorithms on a benchmark system manufactured by Feedback, known as the Process Trainer PT326 [25].

2. ARX-Laguerre model

2.1. Principle

A discrete-time system can be described by an ARX model as [1]:

y(k) = \sum_{j=1}^{n_a} h_a(j) y(k - j) + \sum_{j=1}^{n_b} h_b(j) u(k - j) \qquad (1)

where u(k) and y(k) are the system input and output respectively, n_a and n_b are the ARX model orders associated with the output and the input respectively, and h_a(j) and h_b(j) are the ARX model parameters. These coefficients can be decomposed on two independent Laguerre bases as follows:

h_a(j) = \sum_{n=0}^{\infty} g_{n,a} \, \ell_n^a(j, \xi_a) \quad \text{and} \quad h_b(j) = \sum_{n=0}^{\infty} g_{n,b} \, \ell_n^b(j, \xi_b) \qquad (2)

where \ell_n^i(j, \xi_i), g_{n,i} and \xi_i, (i = a, b), are the orthonormal functions of the Laguerre bases, the Fourier coefficients and the pole defining the orthonormal basis, respectively. By the Bounded Input Bounded Output (BIBO) stability condition of the ARX model, the coefficients h_i(j) (i = a, b) are absolutely summable:

\sum_{j=0}^{\infty} |h_i(j)| < \infty, \quad i = a, b \qquad (3)

Taking relation (3) into account and substituting the linear combinations of relation (2) into the ARX model (1), the resulting model, entitled the ARX-Laguerre model, is written as [1]:

y(k) = \sum_{n=0}^{\infty} g_{n,a} \, x_{n,a}(k, \xi_a) + \sum_{n=0}^{\infty} g_{n,b} \, x_{n,b}(k, \xi_b) \qquad (4)

where x_{n,a}(k, \xi_a) and x_{n,b}(k, \xi_b) are the output and the input filtered by the Laguerre functions, respectively:

x_{n,a}(k, \xi_a) = \sum_{j=1}^{\infty} \ell_n^a(j, \xi_a) y(k - j) = \ell_n^a(k, \xi_a) * y(k) \qquad (5)

x_{n,b}(k, \xi_b) = \sum_{j=1}^{\infty} \ell_n^b(j, \xi_b) u(k - j) = \ell_n^b(k, \xi_b) * u(k) \qquad (6)

where * is the convolution product.

2.2. Recursive representation of the ARX-Laguerre model

The orthonormal functions defining both independent Laguerre bases are given by their Z-transform [1]:

L_n^i(z) = \frac{\sqrt{1 - \xi_i^2}}{z - \xi_i} \left( \frac{1 - \xi_i z}{z - \xi_i} \right)^n, \quad (n = 0, 1, \ldots, N_i - 1), \; (i = a, b) \qquad (7)

where \xi_a (|\xi_a| < 1) and \xi_b (|\xi_b| < 1) are the Laguerre poles. These functions satisfy the following recurrence:

L_0^i(z) = \frac{\sqrt{1 - \xi_i^2}}{z - \xi_i}; \quad L_n^i(z) = \frac{1 - \xi_i z}{z - \xi_i} L_{n-1}^i(z, \xi_i), \; n = 1, \ldots, N_i - 1; \quad i = a, b \qquad (8)

The Z-transform of the ARX-Laguerre model given by relation (4) gives:

Y(z) = \sum_{n=0}^{\infty} g_{n,a} X_{n,a}(z, \xi_a) + \sum_{n=0}^{\infty} g_{n,b} X_{n,b}(z, \xi_b) \qquad (9)

where X_{n,a}(z, \xi_a) and X_{n,b}(z, \xi_b) are the Z-transforms of x_{n,a}(k, \xi_a) and x_{n,b}(k, \xi_b) given by (5) and (6), respectively, and Y(z) is the Z-transform of y(k). The ARX-Laguerre model (9) can be truncated to finite orders N_a and N_b as follows:

Y(z) = \sum_{n=0}^{N_a-1} g_{n,a} X_{n,a}(z, \xi_a) + \sum_{n=0}^{N_b-1} g_{n,b} X_{n,b}(z, \xi_b) + E(z, \xi) \qquad (10)

where

\xi = [\xi_a, \xi_b], \quad |\xi_i| < 1 \; (i = a, b) \qquad (11)

and E(z, \xi) is the truncation error defined by:

E(z, \xi) = Y(z) - \left( \sum_{n=0}^{N_a-1} g_{n,a} X_{n,a}(z, \xi_a) + \sum_{n=0}^{N_b-1} g_{n,b} X_{n,b}(z, \xi_b) \right) \qquad (12)

From (12), we can define the following quadratic criterion:

J_T(\xi) = \| E(z, \xi) \|^2 \qquad (13)

The minimisation of the quadratic criterion J_T(\xi) with respect to the coefficients g_{n,a} for (n = 0, \ldots, N_a - 1) and g_{n,b} for (n = 0, \ldots, N_b - 1) gives:

\frac{\partial J_T(\xi)}{\partial g_{n,a}} = \left\langle \frac{\partial E(z, \xi)}{\partial g_{n,a}}, E(z, \xi) \right\rangle = 0, \quad n = 0, \ldots, N_a - 1
\frac{\partial J_T(\xi)}{\partial g_{n,b}} = \left\langle \frac{\partial E(z, \xi)}{\partial g_{n,b}}, E(z, \xi) \right\rangle = 0, \quad n = 0, \ldots, N_b - 1 \qquad (14)

By using relation (12), the gradient of the truncation error E(z, \xi) with respect to the coefficients g_{n,a} for (n = 0, \ldots, N_a - 1) and g_{n,b} for (n = 0, \ldots, N_b - 1) can be written as:

\frac{\partial E(z, \xi)}{\partial g_{n,a}} = -X_{n,a}(z, \xi_a) \qquad (15)



with

\frac{\partial E(z, \xi)}{\partial g_{n,b}} = -X_{n,b}(z, \xi_b) \qquad (16)

From relations (14), (15) and (16) we can formulate the following property:

Property 2.1. When the coefficients g_{n,a} and g_{n,b} of the ARX-Laguerre model are optimal, the inner products between the truncation error E(z, \xi) and the filters X_{n,a}(z, \xi_a) for (n = 0, \ldots, N_a - 1) and X_{n,b}(z, \xi_b) for (n = 0, \ldots, N_b - 1) are zero:

\begin{cases} \langle X_{n,a}(z, \xi_a), E(z, \xi) \rangle = 0 \\ \langle X_{n,b}(z, \xi_b), E(z, \xi) \rangle = 0 \end{cases} \qquad (17)
■

From (5) and (6) we can write:

X_{n,a}(z, \xi_a) = L_n^a(z) Y(z) \quad \text{and} \quad X_{n,b}(z, \xi_b) = L_n^b(z) U(z) \qquad (18)

By combining the recurrent form (8) of the Laguerre orthonormal functions with relation (18), we can formulate the recurrence relations for the filters X_{n,a}(z) and X_{n,b}(z), (n = 1, 2, \ldots):

\begin{cases} X_{0,a}(z, \xi_a) = \dfrac{\sqrt{1 - \xi_a^2}}{z - \xi_a} Y(z) \\ X_{n,a}(z, \xi_a) = \dfrac{1 - \xi_a z}{z - \xi_a} X_{n-1,a}(z, \xi_a) \end{cases} \qquad (19)

\begin{cases} X_{0,b}(z, \xi_b) = \dfrac{\sqrt{1 - \xi_b^2}}{z - \xi_b} U(z) \\ X_{n,b}(z, \xi_b) = \dfrac{1 - \xi_b z}{z - \xi_b} X_{n-1,b}(z, \xi_b) \end{cases} \qquad (20)

From (10), (19) and (20), we represent in Fig. 1 the filter network of the ARX-Laguerre model.

Fig. 1. Discrete-time ARX-Laguerre filter network.

From Fig. 1, we obtain the following recursive representation of the ARX-Laguerre model [1]:

\begin{cases} X_{N_a}(k + 1) = A_a X_{N_a}(k) + b_a y(k) \\ X_{N_b}(k + 1) = A_b X_{N_b}(k) + b_b u(k) \\ y(k) = C^T X(k) + e(k) \end{cases} \qquad (21)

where

C = \left[ g_{0,a}, \ldots, g_{N_a-1,a}, g_{0,b}, \ldots, g_{N_b-1,b} \right]^T \in \mathbb{R}^{N_a + N_b} \qquad (22)

and X(k) is the (N_a + N_b)-dimensional vector including the filtered outputs and the filtered inputs of both Laguerre bases:

X(k) = \left[ X_{N_a}(k)^T \; X_{N_b}(k)^T \right]^T \qquad (23)

where X_{N_a}(k) and X_{N_b}(k) are defined as follows:

X_{N_a}(k) = \left[ x_{0,a}(k), \ldots, x_{N_a-1,a}(k) \right]^T, \quad X_{N_b}(k) = \left[ x_{0,b}(k), \ldots, x_{N_b-1,b}(k) \right]^T \qquad (24)

A_i for (i = a, b) are two square matrices of dimension N_i given by:

A_i = \begin{bmatrix} a_{11}^i & 0 & \cdots & 0 \\ a_{21}^i & a_{22}^i & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{(N_i)1}^i & a_{(N_i)2}^i & \cdots & a_{(N_i)(N_i)}^i \end{bmatrix} \qquad (25)

with:

a_{sm}^i = \begin{cases} \xi_i & \text{if } s = m \\ (-\xi_i)^{(s - m - 1)} (1 - \xi_i^2) & \text{if } s > m \\ 0 & \text{if } s < m \end{cases} \qquad (26)

and b_i for (i = a, b) are two vectors of dimension N_i:

b_i = \left[ b_1^i \; b_2^i \; \cdots \; b_{N_i}^i \right]^T \qquad (27)

with:

b_m^i = (-\xi_i)^{m-1} \sqrt{1 - \xi_i^2} \quad \text{for } (m = 1, \ldots, N_i) \qquad (28)

3. Laguerre pole optimisation

3.1. Method of Bouzrara et al.

In this method, a separate optimisation of both poles defining the Laguerre bases, relative to the parameters h_a and h_b of the ARX model, is considered. The cost functions are defined as [3]:

J_a = \frac{1}{\|h_a\|^2} \sum_{n=0}^{\infty} n \, g_{n,a}^2 \quad \text{and} \quad J_b = \frac{1}{\|h_b\|^2} \sum_{n=0}^{\infty} n \, g_{n,b}^2 \qquad (29)

with

\|h_a\|^2 = \sum_{j=0}^{\infty} h_a^2(j) \quad \text{and} \quad \|h_b\|^2 = \sum_{j=0}^{\infty} h_b^2(j) \qquad (30)

However, we note that the quantities \|h_a\|^2 and \|h_b\|^2 can be expressed as functions of the squares of the Fourier coefficients g_{n,a} and g_{n,b} as:

\|h_a\|^2 = \sum_{n=0}^{\infty} g_{n,a}^2 \quad \text{and} \quad \|h_b\|^2 = \sum_{n=0}^{\infty} g_{n,b}^2 \qquad (31)

Thus, the two optimal Laguerre poles \xi_{opt,a} and \xi_{opt,b}, in the sense of the minimization of the criteria J_a and J_b respectively, are given by [3]:

\xi_{opt,i} = \begin{cases} \rho_i - \sqrt{\rho_i^2 - 1} & \text{if } \rho_i > 1 \\ \rho_i + \sqrt{\rho_i^2 - 1} & \text{if } \rho_i < -1 \end{cases}, \quad i = a, b \qquad (32)




with

\rho_i = \frac{(1 + \xi_i^2) T_{1,i} + 2 \xi_i T_{2,i}}{(1 + \xi_i^2) T_{2,i} + 2 \xi_i T_{1,i}}, \quad i = a, b \qquad (33)

where

T_{1,i} = \sum_{n=0}^{\infty} (2n + 1) g_{n,i}^2, \quad T_{2,i} = 2 \sum_{n=0}^{\infty} n \, g_{n,i} \, g_{n-1,i}, \quad i = a, b \qquad (34)

Consequently, by applying relation (32), the optimal Laguerre poles \xi_{opt,a} and \xi_{opt,b} are obtained by minimizing the cost functions J_a and J_b respectively.

3.2. New methods of Laguerre pole optimization

3.2.1. Method based on the Newton-Raphson algorithm

The Laguerre poles \xi = [\xi_a, \xi_b] are optimized in terms of minimizing the quadratic criterion given by:

J(\xi) = \frac{1}{2} \sum_{k=1}^{H} \left( y(k) - \left[ \sum_{n=0}^{N_a-1} g_{n,a} \, x_{n,a}(k, \xi_a) + \sum_{n=0}^{N_b-1} g_{n,b} \, x_{n,b}(k, \xi_b) \right] \right)^2 = \frac{1}{2} \sum_{k=1}^{H} e^2(k) \qquad (35)

where H is the observation number, y(k) is the output of the system, and x_{n,a}(k, \xi_a) and x_{n,b}(k, \xi_b) are the filtered output and the filtered input, respectively. The criterion J(\xi) can be written in the following matrix form:

J(\xi) = \frac{1}{2} (Y - \mathbb{X} \hat{C})^T (Y - \mathbb{X} \hat{C}) \qquad (36)

with \mathbb{X} a matrix that includes the state vectors X_{N_i}(k) for (i = a, b), given by (24), for (k = 1, \ldots, H):

\mathbb{X} = \left[ \mathbb{X}_{N_a} \; \mathbb{X}_{N_b} \right] \qquad (37)

where

\mathbb{X}_{N_i} = \left[ X_{N_i}^T(1) \; X_{N_i}^T(2) \; \cdots \; X_{N_i}^T(H) \right]^T, \quad i = a, b \qquad (38)

and Y and \mathbb{E} include the output and the truncation error, respectively, for (k = 1, \ldots, H):

Y = \left[ y(1) \; \cdots \; y(H) \right]^T, \quad \mathbb{E} = \left[ e(1) \; \cdots \; e(H) \right]^T \qquad (39)

The optimal vector \hat{C} of the Fourier coefficients is identified by the Least Squares (LS) method as follows:

\hat{C} = (\mathbb{X}^T \mathbb{X})^{-1} \mathbb{X}^T Y \qquad (40)

Criterion (35) is nonlinear with respect to the poles \xi_i (i = a, b); the minimum can be found by an iterative method: we start with an initial estimate of \xi_i and refine it at each step until the poles converge. Note that, for most nonlinear optimization methods, the minimum obtained is a local minimum located in the neighborhood of the initial estimate, so the latter should be carefully determined. The identification of the initial poles is made by minimizing the quadratic criterion J_2(\xi):

J_2(\xi) = 10 \log_{10} \frac{\sum_{k=1}^{H} \left[ y(k) - \hat{y}(k, C, \xi) \right]^2}{\sum_{k=1}^{H} \left[ y(k) \right]^2} \qquad (41)

Once the initial poles \xi_i (i = a, b) are determined, we proceed to their refinement using the Newton-Raphson method, which determines the poles vector \xi^{m+1} at iteration (m + 1) from the poles vector \xi^m at iteration m as follows:

\xi^{m+1} = \xi^m - \mu \left( \frac{\partial^2 J(\xi)}{\partial \xi^2} \right)^{-1}_{\xi = \xi^m} \left( \frac{\partial J(\xi)}{\partial \xi} \right)_{\xi = \xi^m} \qquad (42)

We note that \mu = [\mu_a, \mu_b] is selected so that J(\xi^{m+1}) < J(\xi^m). The gradient of J(\xi) with respect to \xi is obtained by differentiating equation (36):

\frac{\partial J(\xi)}{\partial \xi} = \mathbb{E}^T \left( -\frac{\partial \mathbb{X}}{\partial \xi} \hat{C} - \mathbb{X} \frac{\partial \hat{C}}{\partial \xi} \right) \qquad (43)

From Property 2.1, the gradient of J(\xi) with respect to \xi is simplified to:

\frac{\partial J(\xi)}{\partial \xi} = -\mathbb{E}^T \frac{\partial \mathbb{X}}{\partial \xi} \hat{C} \qquad (44)

and then, from (44), the Hessian of J(\xi) is given by:

\frac{\partial^2 J(\xi)}{\partial \xi^2} = \left( \frac{\partial \hat{C}^T}{\partial \xi} \mathbb{X}^T + \hat{C}^T \frac{\partial \mathbb{X}^T}{\partial \xi} \right) \frac{\partial \mathbb{X}}{\partial \xi} \hat{C} - \mathbb{E}^T \left( \frac{\partial^2 \mathbb{X}}{\partial \xi^2} \hat{C} + \frac{\partial \mathbb{X}}{\partial \xi} \frac{\partial \hat{C}}{\partial \xi} \right) \qquad (45)

where the gradient of \hat{C} with respect to \xi is determined from relation (40) as:

\frac{\partial \hat{C}}{\partial \xi} = (\mathbb{X}^T \mathbb{X})^{-1} \left[ \frac{\partial \mathbb{X}^T}{\partial \xi} (Y - \mathbb{X} \hat{C}) - \mathbb{X}^T \frac{\partial \mathbb{X}}{\partial \xi} \hat{C} \right] \qquad (46)

and the derivative of \mathbb{X}, given by (37), with respect to \xi is the following block-diagonal matrix:

\frac{\partial \mathbb{X}}{\partial \xi} = \begin{bmatrix} \dfrac{\partial \mathbb{X}_{N_a}}{\partial \xi_a} & 0 \\ 0 & \dfrac{\partial \mathbb{X}_{N_b}}{\partial \xi_b} \end{bmatrix} \qquad (47)

then

\frac{\partial^2 \mathbb{X}}{\partial \xi^2} = \begin{bmatrix} \dfrac{\partial^2 \mathbb{X}_{N_a}}{\partial \xi_a^2} & 0 \\ 0 & \dfrac{\partial^2 \mathbb{X}_{N_b}}{\partial \xi_b^2} \end{bmatrix} \qquad (48)

The new poles vector \xi^{m+1} given by (42) can thus be computed using the gradient and the Hessian of the criterion J(\xi), given by (44) and (45) respectively. These quantities depend on the first and the second derivatives of \mathbb{X} with respect to \xi = [\xi_a, \xi_b], given by (47) and (48) respectively.

Lemma 3.1. The first and the second derivatives of the matrix \mathbb{X} containing the ARX-Laguerre state vectors X_{N_i}(k) for (i = a, b) with respect to \xi = [\xi_a, \xi_b] can be formulated, as proved in Appendix A, by the following relations.

The first derivative of \mathbb{X}_{N_i} with respect to \xi_i, (i = a, b), is:

\frac{\partial \mathbb{X}_{N_i}}{\partial \xi_i} = K_i \mathbb{X}_{N_i+1} \Psi_{N_i+1}^T \qquad (49)

The second derivative of \mathbb{X}_{N_i} with respect to \xi_i, (i = a, b), is:

\frac{\partial^2 \mathbb{X}_{N_i}}{\partial \xi_i^2} = K_i^2 \left[ 2 \xi_i \mathbb{X}_{N_i+1} \Psi_{N_i+1}^T + \mathbb{X}_{N_i+2} \Psi_{N_i+2}^T \Psi_{N_i+1}^T \right] \qquad (50)


with, for (i = a, b):

K_i = \frac{1}{1 - \xi_i^2};

\Psi_{N_i+1} = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & \cdots & 0 & 0 & 0 \\ 0 & -2 & 0 & 3 & \cdots & 0 & 0 & 0 \\ 0 & 0 & -3 & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -(N_i - 1) & 0 & N_i \end{bmatrix} \qquad (51)

■ Remark: The calculation of the second derivative requires heavy computations, which encourages abandoning the Newton-Raphson method in favor of the Gauss-Newton method, where the approximated Hessian is obtained by eliminating the second term of the right-hand side of relation (45). The approximated Hessian therefore requires only the knowledge of the first-order sensitivity functions:

\frac{\partial^2 J(\xi)}{\partial \xi^2} \approx \left( \frac{\partial \hat{C}^T}{\partial \xi} \mathbb{X}^T + \hat{C}^T \frac{\partial \mathbb{X}^T}{\partial \xi} \right) \frac{\partial \mathbb{X}}{\partial \xi} \hat{C} \qquad (52)
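A damped Newton-Raphson update of the form (42), with the step-size μ reduced whenever the candidate poles leave the unit interval or fail to decrease the cost, can be sketched as below. This is a generic illustration under our own naming assumptions, not the authors' implementation; `grad` and `hess` stand for the quantities of relations (44) and (45) evaluated at the current poles.

```python
import numpy as np

def newton_pole_step(xi, grad, hess, cost, max_halvings=20):
    """One damped Newton-Raphson update xi <- xi - mu * H^-1 g,
    halving mu until the candidate stays in (-1, 1) and decreases
    the cost.  `cost` maps a pole vector to J(xi)."""
    step = np.linalg.solve(hess, grad)
    mu = 1.0
    J0 = cost(xi)
    for _ in range(max_halvings):
        candidate = xi - mu * step
        if np.all(np.abs(candidate) < 1.0) and cost(candidate) < J0:
            return candidate
        mu *= 0.5          # adjust the step, as in Algorithm 1
    return xi              # no acceptable step found; keep current poles
```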

an optimum of a function defined on a data space. These algorithms have been applied to a wide variety of problems [28] owing to their simplicity and efficiency [29]. After fixing the expression of the objective function to be optimized, probabilistic steps are involved to create an initial population of individuals [30]. The initial population then undergoes genetic operations (such as evaluation, mutation, crossover, …) to converge to the optimal solution of the problem. Optimization with GA is summarized by the following steps:

Initialization: It is usually random, and it is often advantageous to include as much knowledge about the problem as possible.

Evaluation: This step computes the quality of individuals by

Algorithm 1.
1. Choose two initial poles ξ⁰ = [ξ_a⁰, ξ_b⁰].
2. Fix μ = 1 and m = 0.
3. Calculate the optimal poles as follows:
 (a) Form the matrices 𝕏_{N_i}, (i = a, b), using relation (38), and the matrix 𝕏 given by (37).
 (b) Compute the first and the second derivatives of 𝕏_{N_i} with respect to ξ_i^m from relations (49) and (50), respectively.
 (c) Determine the first and the second derivatives of 𝕏 from relations (47) and (48).
 (d) Estimate the optimal vector Ĉ of the Fourier coefficients from relation (40).
 (e) Calculate the gradient of the vector Ĉ with respect to ξ^m from relation (46).
 (f) Determine the gradient and the Hessian of J(ξ) from relations (44) and (45), respectively.
 (g) Evaluate ξ^{m+1} from relation (42).
 (h) if (|ξ_a^{m+1}| > 1) or (|ξ_b^{m+1}| > 1):
  i. Adjust

the allocation of a positive value, called "ability" or "fitness", to each one. The highest value is assigned to the individual that minimizes (or maximizes) the objective function.

The selection: This step selects a definite number of individuals of the current population. The selection is probabilistic and characterized by a parameter called the selection pressure (SP), which fixes the number of selected individuals.

The crossover: The genetic crossover operator creates new individuals. From two randomly selected parents, crossover produces two descendants. This step affects only a limited number of individuals, established by the crossover rate (CR).

The mutation: The mutation provides a small disruption to a number (MR) of individuals. The effect of this operator is to counteract the attraction exerted by the best individuals, which allows other areas of the search space to be explored.
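The steps above can be sketched as a minimal real-valued GA for a two-pole vector ξ = [ξ_a, ξ_b] in (−1, 1). This is our own illustrative sketch (tournament selection, arithmetic crossover and Gaussian mutation are common concrete choices; the paper does not specify its operators at this level of detail):

```python
import numpy as np

def ga_minimize(fitness, pop_size=80, cr=0.8, mr=0.033, generations=200, seed=0):
    """Minimal real-valued GA minimizing `fitness` over xi in (-1, 1)^2."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-0.99, 0.99, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        new_pop = [pop[np.argmin(scores)].copy()]          # elitism
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            i, j = rng.integers(pop_size, size=2)
            p1 = pop[i] if scores[i] < scores[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if scores[i] < scores[j] else pop[j]
            child = p1.copy()
            if rng.random() < cr:                          # arithmetic crossover
                w = rng.random()
                child = w * p1 + (1 - w) * p2
            if rng.random() < mr:                          # Gaussian mutation
                child = np.clip(child + rng.normal(0, 0.1, 2), -0.99, 0.99)
            new_pop.append(child)
        pop = np.array(new_pop)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]
```

In the pole-optimization setting, the fitness would be the criterion J(ξ) of relation (35) evaluated on the identification data.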

The optimum values of Laguerre poles are determined by using the criterion J ( ξ ) given by (35) as objective function known as ﬁtness. The minimization of which can expand the solution space of the poles that supports the achievement of better solutions if we choose a very small threshold. After determining the objective function to be minimized, we generate, randomly, a population of individuals ξ = [ξa, ξb]. Abilities of individuals ξ are evaluated by the criterion J ( ξ ). Individuals with the highest skills are selected to undergo different genetic operations (crossover, mutation and selection). After a set of generations, the genetic algorithm converges to the optimal values. The new genetic curve algorithm is constructed following the strategy outlined as:

μa or μb, with (μi < 1)

ii. Go to step 3 (i) else i. if

5

− ξam > ε or ξbm + 1 − ξbm > ε

0.8

)

A. Increment m, (m = m + 1) B. Go to step (a) ii. else B. ξopt , b =

u(k)

A. ξopt , a = ξam + 1

0.6

0.4

ξbm + 1 0.2

3.2.2. Method based on genetic algorithms The Laguerre poles are expressed in the criterion J ( ξ ), given by (35), with nonlinear way. We note that the two previous methods of the Laguerre pole optimization are adequate for determinist systems. In this section we propose to compute the optimum values of Laguerre poles by using Genetic Algorithms classiﬁed as evolutionary optimization methods [26,27]. They allow to deﬁne

0 Identification phase

−0.2 0

100

200

300

400 500 600 Number of iterations

Validation phase

700

800

900

1000

Fig. 2. Input signal.
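The damped Newton-Raphson iteration of Algorithm 1 can be sketched numerically as follows. This is a minimal illustration: a toy quadratic criterion stands in for J(ξ) of relation (35), and the gradient and Hessian are obtained by finite differences rather than the analytical relations (44) and (45).

```python
import numpy as np

def newton_poles(J, xi0, mu=1.0, eps=1e-8, max_iter=50, h=1e-5):
    """Damped Newton-Raphson search for two Laguerre poles (sketch of Algorithm 1).
    The gradient and Hessian of J are approximated by finite differences,
    standing in for the analytical relations (44) and (45)."""
    xi = np.asarray(xi0, dtype=float)
    for _ in range(max_iter):
        g = np.zeros(2)
        Hs = np.zeros((2, 2))
        for i in range(2):
            e = np.zeros(2); e[i] = h
            g[i] = (J(xi + e) - J(xi - e)) / (2 * h)
            for j in range(2):
                f = np.zeros(2); f[j] = h
                Hs[i, j] = (J(xi + e + f) - J(xi + e - f)
                            - J(xi - e + f) + J(xi - e - f)) / (4 * h * h)
        step = np.linalg.solve(Hs, g)
        xi_new = xi - mu * step                   # Newton update, cf. relation (42)
        while np.any(np.abs(xi_new) >= 1.0):      # step 3(h): poles must stay in (-1, 1)
            mu *= 0.5                             # adjust the damping factor
            xi_new = xi - mu * step
        if np.all(np.abs(xi_new - xi) <= eps):    # convergence test on the pole change
            return xi_new
        xi = xi_new
    return xi

# toy criterion standing in for J(xi), with its minimum at (0.4, 0.7)
J = lambda x: (x[0] - 0.4) ** 2 + (x[1] - 0.7) ** 2
xi_opt = newton_poles(J, [0.1, 0.1])
```

For this quadratic stand-in the Hessian is constant, so the iteration converges in essentially one step; with the true criterion (35) the damping factor μ is what keeps the poles inside the unit circle.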

Please cite this article as: Najeh T, et al. New methods of Laguerre pole optimization for the ARX model expansion on Laguerre bases. ISA Transactions (2017), http://dx.doi.org/10.1016/j.isatra.2017.05.015i

T. Najeh et al. / ISA Transactions ∎ (∎∎∎∎) ∎∎∎–∎∎∎


Algorithm 2.
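The boxed listing of Algorithm 2 is not reproduced in the text; the GA-based pole search it summarizes can be sketched as follows. This is a minimal real-valued GA in which the binary-tournament selection, the Gaussian mutation and the toy fitness are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_poles(J, pop_size=80, generations=60, cr=0.8, mr=0.033):
    """Minimal real-valued GA searching two Laguerre poles in (-1, 1);
    J plays the role of the criterion (35), cr and mr are the
    crossover and mutation rates."""
    pop = rng.uniform(-1, 1, size=(pop_size, 2))        # random initialization
    best, best_f = pop[0].copy(), J(pop[0])
    for _ in range(generations):
        fit = np.array([J(p) for p in pop])             # evaluation (fitness)
        i = int(np.argmin(fit))
        if fit[i] < best_f:                             # keep the best individual seen
            best, best_f = pop[i].copy(), fit[i]
        # selection: binary tournaments
        a = rng.integers(0, pop_size, pop_size)
        b = rng.integers(0, pop_size, pop_size)
        winners = np.where((fit[a] < fit[b])[:, None], pop[a], pop[b])
        children = winners.copy()
        for k in range(0, pop_size - 1, 2):             # arithmetic crossover, rate cr
            if rng.random() < cr:
                w = rng.random()
                children[k] = w * winners[k] + (1 - w) * winners[k + 1]
                children[k + 1] = w * winners[k + 1] + (1 - w) * winners[k]
        m = rng.random(pop_size) < mr                   # mutation: small disruption
        children[m] += rng.normal(0.0, 0.1, (int(m.sum()), 2))
        pop = np.clip(children, -0.999, 0.999)          # poles stay inside (-1, 1)
    return best

# toy fitness standing in for J(xi), with its optimum at (0.5, -0.3)
J = lambda x: (x[0] - 0.5) ** 2 + (x[1] + 0.3) ** 2
best = ga_poles(J)
```

Tracking the best individual ever evaluated makes the returned solution monotonically non-worsening over the generations, which mirrors the convergence behavior reported in Figs. 4 and 5.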

4. Simulation examples

4.1. Example 1

This example illustrates the identification of a fourth-order Butterworth filter [31] given by the following transfer function:

H(z) = H1(z) H2(z)      (53)

with

H1(z) = (0.078 + 0.1559 z⁻¹ + 0.078 z⁻²) / (1 − 1.3209 z⁻¹ + 0.6327 z⁻²)      (54)

H2(z) = (0.0619 + 0.1238 z⁻¹ + 0.0619 z⁻²) / (1 − 1.0486 z⁻¹ + 0.2961 z⁻²)      (55)

This system contains 9 parameters. The input signal, drawn in Fig. 2, is a pseudo-random sequence with variable amplitude. The output, shown in Fig. 3, is disturbed by an additive white Gaussian noise, independent of the input signal, with a Signal to Noise Ratio (SNR) equal to 10 dB. For the different pole optimization methods presented in this paper, we select the optimum poles (ξopt,a and ξopt,b) for a set of H = 1000 observations and for different values of the truncation orders (Na and Nb), where the Fourier coefficients are identified by the Least Squares (LS) method. In this simulation we applied the Monte Carlo method over 60 independent runs. For each truncation order, we evaluate the performance of the pole optimisation algorithms for the ARX-Laguerre model in terms of the Parameter Reduction Ratio (PRR) and the Mean Square Error (MSE) given by:

PRR = (1 − parameter number of the ARX-Laguerre model / parameter number of the ARX model) × 100      (56)
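The cascade (53)–(55) can be simulated directly from the difference equations of its two second-order sections; the uniform pseudo-random excitation below is an illustrative stand-in for the input of Fig. 2.

```python
import numpy as np

def sos_filter(b, a, u):
    """Apply one second-order section
    y(k) = b0 u(k) + b1 u(k-1) + b2 u(k-2) - a1 y(k-1) - a2 y(k-2),
    with a = [1, a1, a2] and zero initial conditions."""
    y = np.zeros_like(u)
    for k in range(len(u)):
        y[k] = (b[0] * u[k]
                + (b[1] * u[k - 1] if k >= 1 else 0.0)
                + (b[2] * u[k - 2] if k >= 2 else 0.0)
                - (a[1] * y[k - 1] if k >= 1 else 0.0)
                - (a[2] * y[k - 2] if k >= 2 else 0.0))
    return y

# cascade H(z) = H1(z) H2(z) of relations (53)-(55)
b1, a1 = [0.078, 0.1559, 0.078], [1.0, -1.3209, 0.6327]
b2, a2 = [0.0619, 0.1238, 0.0619], [1.0, -1.0486, 0.2961]

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 1000)          # pseudo-random input, H = 1000 samples
y = sos_filter(b2, a2, sos_filter(b1, a1, u))
```

A quick sanity check of the reconstructed coefficients: both sections have a DC gain of essentially one (H1(1) ≈ H2(1) ≈ 1), as expected for a unity-gain Butterworth low-pass cascade.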

MSEdB = 10 log10( (1/H) Σ_{k=1}^{H} (y(k) − ŷ(k))² )      (57)

where y is the measured output and H is the number of observations. To evaluate the effect of the population size, several runs are done with different population sizes. The convergence speed of the Laguerre poles ξa and ξb is faster for a larger population size. Figs. 4 and 5 illustrate the behavior of the estimated Laguerre poles over the generations of the GA method. From these figures we can conclude that the algorithm converges in a relatively small number of generations when the population size is fixed at 80 individuals. Moreover, to evaluate the efficiency of the proposed pole optimisation algorithms as well as of the one proposed in [3], we study the MSEdB variation and the global computing time (given for the 1000 iterations) for different couples of truncation orders Na and Nb. The simulation results are presented in Table 1 and Figs. 6 and 7. The goal of using the ARX-Laguerre model is to reduce the parameter number of the ARX model for a lower MSE. From Table 1 and Fig. 6 we can conclude that the Genetic Algorithm is the most efficient in terms of MSE. For example, with the Genetic Algorithm, obtaining (MSEdB ≈ −54 dB) requires truncation orders no larger than (Na = Nb = 2), hence a parameter reduction ratio of (PRR = 55%). With the Newton-Raphson algorithm, the same MSE is obtained for (Na = 3, Nb = 2), hence (PRR ≤ 44.44%). With the Bouzrara et al. algorithm, this MSEdB is obtained for (Na = Nb = 3), hence (PRR = 33.33%). To validate the three algorithms, we draw in Figs. 8 to 10 the variations of the outputs of the ARX-Laguerre model and of the ARX model, using for pole optimization the Bouzrara et al. algorithm, the Newton-Raphson algorithm and the Genetic Algorithm, respectively, for (Na = Nb = 2). We note that with the truncation orders (Na = Nb = 2), the performance of the ARX-Laguerre model in terms of MSEdB is significant when the Laguerre poles are optimized by the Genetic Algorithm.

Fig. 3. Output signal.
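The two performance measures (56) and (57) translate directly into code; a minimal sketch, in which the parameter count of the ARX-Laguerre model at (Na = Nb = 2) is taken as Na + Nb = 4, consistent with the PRR values of Table 1:

```python
import numpy as np

def prr(n_red, n_arx):
    """Parameter Reduction Ratio (56), in percent."""
    return (1.0 - n_red / n_arx) * 100.0

def mse_db(y, y_hat):
    """Mean Square Error (57) in dB over the H observations."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return 10.0 * np.log10(np.mean((y - y_hat) ** 2))

# Example 1: the ARX model has 9 parameters; with Na = Nb = 2 the
# ARX-Laguerre model keeps 4 parameters (cf. Table 1, PRR = 55.55%)
ratio = prr(4, 9)
```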


Fig. 4. Estimation of pole ξa for several population sizes.

Fig. 5. Estimation of pole ξb for several population sizes.

Fig. 6. Variation of the MSEdB.

Fig. 7. Computing time.

Table 1
Performances of optimisation methods of Laguerre poles.

TO (Na, Nb) | PRR (%) | Bouzrara et al. MSEdB | Newton-Raphson MSEdB | Genetic algorithm MSEdB
(1, 1)      | 77.77   | −15.5                 | −28.12               | −32.09
(1, 2)      | 66.66   | −33.46                | −39.71               | −45.30
(2, 2)      | 55.55   | −45.38                | −48.80               | −53.68
(3, 2)      | 44.44   | −50.81                | −54.95               | −57.15
(3, 3)      | 33.33   | −55.51                | −59.08               | −59.83
(4, 3)      | 22.22   | −60.76                | −59.75               | −61.14
(4, 4)      | 11.11   | −61.11                | −60.85               | −61.94
(4, 5)      | 0       | −61.75                | −61.32               | −62.04


Fig. 8. Validation of pole optimization using Bouzrara et al. algorithm.

Fig. 9. Validation of pole optimization using Newton-Raphson algorithm.

Fig. 10. Validation of pole optimization using Genetic Algorithm.

4.2. Example 2

This application consists in identifying a bench-scale hot airflow device, the PT326 Process Trainer from Feedback Ltd, shown in Fig. 11. This thermal process exhibits several linear ranges, output drift and inherent process noise. It has been used many times to illustrate the performance of other identification methods, as in [25]. Fig. 12 shows a schematic description of this process. Air is pulled by a fan into a 30 cm long tube through a valve and heated by a mesh of resistor wires at the inlet. The air temperature is measured by a thermocouple at the outlet. The input u(t) is the voltage applied to the resistor wires that heat the incoming air; the output y(t) is the outlet air temperature measurement. A process perturbation can be introduced by adding ambient air, the quantity of which is tuned by the angle α. Figs. 13 and 14 plot the input voltage u and the output temperature y, respectively, at a sampling time of 0.08 s. The PT326 process can be described by the ARX model (1) with (na = 6) and (nb = 6); it therefore contains 12 parameters, which are estimated by the least-squares method and summarized in Table 2.

Fig. 11. Process Trainer PT326.

Fig. 12. Schematic description of the heat transfer process.
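The least-squares estimation of the ARX parameters mentioned above can be sketched as follows; the first-order synthetic data generated here are only a stand-in for the real PT326 records, which are not reproduced.

```python
import numpy as np

def arx_ls(u, y, na, nb):
    """Least-squares estimate of the ARX parameters in
    y(k) = -sum_j ha(j) y(k-j) + sum_j hb(j) u(k-j) + e(k)."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        row = [-y[k - j] for j in range(1, na + 1)] \
            + [u[k - j] for j in range(1, nb + 1)]
        rows.append(row)
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]            # (ha, hb)

# synthetic first-order process standing in for the PT326 records
rng = np.random.default_rng(2)
u = rng.uniform(0, 1, 500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.normal()

ha, hb = arx_ls(u, y, na=6, nb=6)            # same orders as in Table 2
```

Since the true process here is first order, the estimate concentrates on ha(1) ≈ −0.8 and hb(1) ≈ 0.5 while the higher-order coefficients stay near zero, which is the expected behavior of an overparameterized ARX fit.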

The performances of the pole optimisation algorithms for several combinations of truncation orders (Na, Nb) are evaluated by the MSEdB summarized in Table 3 and presented in Fig. 15. We plot in Fig. 16 the ARX model output and the PT326 output for (na = nb = 2). In Figs. 17 to 19, we draw the output of the PT326 process and the ARX-Laguerre model output for the different pole optimization algorithms, fixing Na = Nb = 2. In the case of the GA, the objective function to be optimized is given by equation (35) for several combinations of truncation orders (Na, Nb). A real-valued representation is used and the initial population is formed of 80 individuals, chosen after several tests with various population sizes. The algorithm stops when the value of the mean square error falls below the threshold εt. The values of the genetic operations are given in Table 4. The performance in terms of MSE is more significant when the Laguerre poles are estimated by the Newton-Raphson algorithm and the Genetic algorithm. This can be justified by the choice of the cost


function adopted in each algorithm. In fact, the newly proposed algorithms minimize the mean square error between the system output and the ARX-Laguerre model output, whereas in the Bouzrara et al. algorithm the Laguerre poles are estimated through the optimization of two cost functions that depend on the Fourier coefficients.

5. Conclusion

In this article, new algorithms for the pole optimization of the ARX-Laguerre model are proposed. The Genetic Algorithm is more efficient than the Bouzrara et al. algorithm and the Newton-Raphson algorithm in terms of parameter number reduction, and it copes very well with offline identification of the ARX-Laguerre model.

Table 3
Performances of pole optimisation algorithms for the PT326 process.

Na = Nb | PRR (%) | Bouzrara et al. MSEdB | Newton-Raphson MSEdB | Genetic Algorithm MSEdB
1       | 83.33   | −15.5                 | −26.43               | −32.05
2       | 66.66   | −36.53                | −39.14               | −44.57
3       | 50      | −44.24                | −45.82               | −49.59
4       | 33.33   | −50.53                | −51.20               | −52.27
5       | 16.66   | −52.62                | −53.15               | −53.68
6       | 0       | −53.61                | −53.77               | −54.05

Fig. 13. Input signal of PT 326 process.

Fig. 15. Variation of the MSEdB for the PT326 process.

Fig. 14. Output signal of PT 326 process.

Fig. 16. Validation of the ARX model.

Table 2
ARX model for PT326 process (na = nb = 6).

ARX model parameters | Estimated values
ha(j) (j = 1:na)     | [−1.023, 0.08599, 0.03051, 0.1101, 0.05723, 0.002241]
hb(j) (j = 1:nb)     | [0.003386, 0.00213, 0.002173, 0.0662, 0.05763, 0.01514]
MSEdB                | −49.17
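With the coefficients of Table 2, the one-step-ahead ARX predictor reads ŷ(k) = −Σj ha(j) y(k−j) + Σj hb(j) u(k−j). A minimal sketch follows; the constant input/output sequences are placeholders, not the real PT326 records.

```python
import numpy as np

# Table 2 coefficients of the PT326 ARX model (na = nb = 6)
ha = np.array([-1.023, 0.08599, 0.03051, 0.1101, 0.05723, 0.002241])
hb = np.array([0.003386, 0.00213, 0.002173, 0.0662, 0.05763, 0.01514])

def arx_predict(ha, hb, u, y):
    """One-step-ahead prediction
    y_hat(k) = -sum_j ha(j) y(k-j) + sum_j hb(j) u(k-j)."""
    n = len(ha)
    y_hat = np.zeros(len(y))
    for k in range(n, len(y)):
        # y[k-1::-1][:n] = [y(k-1), y(k-2), ..., y(k-n)]
        y_hat[k] = -ha @ y[k - 1::-1][:n] + hb @ u[k - 1::-1][:n]
    return y_hat

# placeholder constant signals (illustrative, not the real measurements)
u = np.full(100, 5.0)      # ~5 V heater voltage
y = np.full(100, 50.0)     # ~50 degC outlet temperature
y_hat = arx_predict(ha, hb, u, y)
```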


Table 4
Parameters of the genetic algorithm.

Parameters              | Values
Selection pressure (SP) | 1.9
Crossover rate (CR)     | 80%
Mutation rate (MR)      | 3.3%

However, the Newton-Raphson algorithm, like the Bouzrara et al. algorithm, allows the online identification of the ARX-Laguerre model, and both are therefore convenient for adaptive control.

Fig. 17. Validation of the ARX-Laguerre model using Bouzrara et al. algorithm.

Fig. 18. Validation of the ARX-Laguerre model using Newton-Raphson algorithm.

Fig. 19. Validation of the ARX-Laguerre model using Genetic Algorithm.

Appendix A. Proof of Lemma 3.1

The gradient of 𝒩i with respect to ξi is given from relation (38) by:

∂𝒩i/∂ξi = [ ∂X_Ni^T(1)/∂ξi   ∂X_Ni^T(2)/∂ξi   ⋯   ∂X_Ni^T(H)/∂ξi ]^T,   i = a, b      (A.1)

where

∂X_Ni(k)/∂ξi = [ ∂x_0,i(k)/∂ξi   ∂x_1,i(k)/∂ξi   ⋯   ∂x_{Ni−1,i}(k)/∂ξi ]^T,   i = a, b      (A.2)

It remains to find the analytical expressions of the first and second derivatives of x_n,i(k) with respect to ξi for (n = 0, …, Ni − 1) and (i = a, b), where:

x_n,a(k) = Z⁻¹{X_n,a(z)},   X_n,a(z) = L_n^a(z, ξa) Y(z),   (n = 0, …, Na − 1)
x_n,b(k) = Z⁻¹{X_n,b(z)},   X_n,b(z) = L_n^b(z, ξb) U(z),   (n = 0, …, Nb − 1)      (A.3)

From relations (A.3) we obtain:

∂X_n,a(z)/∂ξa = [∂L_n^a(z, ξa)/∂ξa] Y(z),   (n = 0, …, Na − 1)
∂X_n,b(z)/∂ξb = [∂L_n^b(z, ξb)/∂ξb] U(z),   (n = 0, …, Nb − 1)      (A.4)

We now establish the relation binding the derivatives of the ARX-Laguerre filters with respect to the poles to the Laguerre filters. From the Z-transform of the nth Laguerre function given by (8), the derivative of the Laguerre function with respect to ξi (i = a, b) is:

∂L_0^i(z, ξi)/∂ξi = L_1^i(z, ξi) / (1 − ξi²)
∂L_n^i(z, ξi)/∂ξi = [L_n^i(z, ξi)/(1 − ξi²)] · [−n(z − ξi)² + (n + 1)(1 − ξi z)²] / [(z − ξi)(1 − ξi z)],   n = 1, …, Ni − 1      (A.5)

From the recurrence relation linking the Laguerre filters given by (8), the derivative of the nth Laguerre function given by (A.5) can be written as:


∂L_0^i(z, ξi)/∂ξi = Ki L_1^i(z, ξi)
∂L_n^i(z, ξi)/∂ξi = Ki [(n + 1) L_{n+1}^i(z, ξi) − n L_{n−1}^i(z, ξi)],   n = 1, …, Ni − 1      (A.6)

with Ki = 1/(1 − ξi²) for i = a, b.

The substitution of (A.6) into (A.4) and the inverse Z-transform allow us to write, for (i = a, b):

∂x_0,i(k)/∂ξi = Ki x_1,i(k)
∂x_n,i(k)/∂ξi = Ki [(n + 1) x_{n+1,i}(k) − n x_{n−1,i}(k)],   n = 1, …, Ni − 1      (A.7)

Using the notations introduced previously in (A.1) and (A.2), we can deduce the expression of the first derivative of X_Ni(k) with respect to ξi:

∂X_Ni(k)/∂ξi = Ki Ψ_{Ni+1} X_{Ni+1}(k)      (A.8)

where X_{Ni+1}(k) is an (Ni + 1)-dimensional vector defined as:

X_{Ni+1}(k) = [ x_0,i(k)   x_1,i(k)   x_2,i(k)   …   x_{Ni−1,i}(k)   x_Ni,i(k) ]^T      (A.9)

and Ψ_{Ni+1} is an (Ni × (Ni + 1))-dimensional matrix:

Ψ_{Ni+1} =
⎡  0    1    0    0   ⋯    0          0    0  ⎤
⎢ −1    0    2    0   ⋯    0          0    0  ⎥
⎢  0   −2    0    3   ⋯    0          0    0  ⎥
⎢  0    0   −3    0   ⋯    0          0    0  ⎥
⎢  ⋮    ⋮    ⋮    ⋮         ⋮          ⋮    ⋮  ⎥
⎣  0    0    0    0   ⋯  −(Ni − 1)    0    Ni ⎦      (A.10)

From relation (A.8) we can deduce the first derivative of 𝒩i with respect to ξi:

∂𝒩i/∂ξi = Ki 𝒩_{Ni+1} Ψ_{Ni+1}^T      (A.11)

where 𝒩_{Ni+1} is an (H × (Ni + 1))-dimensional matrix defined from X_{Ni+1}(k) as:

𝒩_{Ni+1} = [ X_{Ni+1}^T(1)   X_{Ni+1}^T(2)   …   X_{Ni+1}^T(H) ]^T      (A.12)

The second derivative of 𝒩i with respect to ξi can be obtained by differentiating (A.11):

∂²𝒩i/∂ξi² = Ki² [ 2 ξi 𝒩_{Ni+1} Ψ_{Ni+1}^T + 𝒩_{Ni+2} Ψ_{Ni+2}^T Ψ_{Ni+1}^T ]      (A.13)
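Relation (A.7) can be checked numerically by comparing its right-hand side with a finite-difference derivative of the filter outputs with respect to the pole. The Laguerre convention used below (a delay-free first section L_0(z) = √(1 − ξ²)/(1 − ξ z⁻¹) followed by all-pass sections) is an assumption about the basis defined in (8); the identity (A.6)/(A.7) holds for this convention.

```python
import numpy as np

def laguerre_outputs(u, xi, N):
    """Outputs x_n(k), n = 0..N, of discrete Laguerre filters driven by u,
    assuming L_0(z) = sqrt(1 - xi^2)/(1 - xi z^-1) and
    L_n(z) = L_{n-1}(z) (z^-1 - xi)/(1 - xi z^-1)."""
    H = len(u)
    x = np.zeros((N + 1, H))
    g = np.sqrt(1.0 - xi**2)
    for k in range(H):                       # first-order low-pass section for L_0
        x[0, k] = (xi * x[0, k - 1] if k else 0.0) + g * u[k]
    for n in range(1, N + 1):                # all-pass sections:
        for k in range(H):                   # x_n(k) = xi x_n(k-1) + x_{n-1}(k-1) - xi x_{n-1}(k)
            x[n, k] = ((xi * x[n, k - 1] + x[n - 1, k - 1]) if k else 0.0) \
                      - xi * x[n - 1, k]
    return x

rng = np.random.default_rng(3)
u = rng.normal(size=200)
xi, n, h = 0.5, 2, 1e-6
x = laguerre_outputs(u, xi, n + 1)
fd = (laguerre_outputs(u, xi + h, n + 1)
      - laguerre_outputs(u, xi - h, n + 1)) / (2 * h)    # finite-difference derivative
K = 1.0 / (1.0 - xi**2)
analytic = K * ((n + 1) * x[n + 1] - n * x[n - 1])       # right-hand side of (A.7)
```

The finite-difference curve and the analytical expression agree to the accuracy of the central difference, confirming the recursion used to build (A.8)–(A.13).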

References

[1] Bouzrara K, Garna T, Ragot J, Messaoud H. Decomposition of an ARX model on Laguerre orthonormal bases. ISA Trans 2012;51(6):848–60.
[2] Gnaba S, Garna T, Bouzrara K, Ragot J, Messaoud H. Online identification of the bilinear model expansion on Laguerre orthonormal bases. Int J Control 2014;87(3):441–63.
[3] Bouzrara K, Garna T, Ragot J, Messaoud H. Online identification of the ARX model expansion on Laguerre orthonormal bases with filters on model input and output. Int J Control 2013;86(3):369–85.
[4] Garna T, Bouzrara K, Ragot J, Messaoud H. Nonlinear system modeling based on bilinear Laguerre orthonormal bases. ISA Trans 2013;52(3):301–17.
[5] Dankers AG, Westwick DT. On the relationship between the enforced convergence criterion and the asymptotically optimal Laguerre pole. IEEE Trans Autom Control 2012;57(5):1102–9.
[6] Malti R, Ekongolo SB, Ragot J. Dynamic SISO and MISO system approximations based on optimal Laguerre models. IEEE Trans Autom Control 1998;43(9):1318–23.
[7] Ngia LSH. Separable nonlinear least-squares methods for efficient off-line and on-line modeling of systems using Kautz and Laguerre filters. IEEE Trans Circuits Syst II: Analog Digit Signal Process 2001;48(6):562–79.
[8] Fu Y, Dumont GA. An optimum time scale for discrete Laguerre network. IEEE Trans Autom Control 1993;38(6):934–8.
[9] Tanguy N, Morvan R, Vilbé P, Calvez LC. Online optimization of the time scale in adaptive Laguerre-based filters. IEEE Trans Signal Process 2000;48(4):1184–7.
[10] Oliveira e Silva T. On the determination of the optimal pole position of Laguerre filters. IEEE Trans Signal Process 1995;43(9):2079–87.
[11] Belt HJW, den Brinker AC. Optimality condition for truncated generalized Laguerre networks. Int J Circuit Theory Appl 1995;23(3):227–35.
[12] Prokhorov SA, Kulikovskikh IM. Unique condition for generalized Laguerre functions to solve pole position problem. Signal Process 2015;108:25–9.
[13] Prokhorov SA, Kulikovskikh IM. Pole position problem for Meixner filters. Signal Process 2016;120:8–12.
[14] Krifa A, Bouzrara K. Parametric complexity reduction of discrete-time linear systems having a slow initial onset or delay. Math Model Anal 2016;21(5):668–84.
[15] Maraoui S, Krifa A, Bouzrara K. System approximations based on Meixner-like models. IET Signal Process 2016;10(5):457–64.
[16] Wahlberg B. System identification using Kautz models. IEEE Trans Autom Control 1994;39(6):1276–81.
[17] Ninness B, Gustafsson F. A unifying construction of orthonormal bases for system identification. IEEE Trans Autom Control 1997;42(4):515–21.
[18] El Anes A, Maraoui S, Bouzrara K. Order reduction of MIMO ARX systems using Laguerre bases. Int J Eng Syst Model Simul 2016;8(4):307–15.
[19] El Anes A, Maraoui S, Bouzrara K. Optimal expansions of multivariable ARX processes on Laguerre bases via the Newton-Raphson method. Int J Adapt Control Signal Process 2016;30(4):578–98.
[20] den Brinker AC, Belt HJW. Model reduction by orthogonalized exponential sequences. In: ProRISC/IEEE Workshop on Circuits, Systems and Signal Processing; 1996. p. 77–82.
[21] Malti R, Maquin D, Ragot J. Optimality conditions for the truncated network of the generalized discrete orthonormal basis having real poles. In: Proceedings of the 37th IEEE Conference on Decision and Control; 1998. p. 2189–94.
[22] Oliveira e Silva T. Pole selection in GOBF models. In: Heuberger PSC, Van den Hof PMJ, Wahlberg B, editors. Modelling and identification with rational orthogonal basis functions; 2005. p. 297–336.
[23] Koza JR. Genetic programming: on the programming of computers by means of natural selection. Cambridge: The MIT Press; 1992.
[24] Koza JR. Genetic programming II: automatic discovery of reusable programs. Cambridge: The MIT Press; 1994.
[25] Ljung L. System identification: theory for the user. Prentice-Hall; 1999.
[26] Zhang J, Chung H, Lo W. Clustering-based adaptive crossover and mutation probabilities for genetic algorithms. IEEE Trans Evolut Comput 2007;11(3):326–35.
[27] Akbari R, Ziarati K. A multilevel evolutionary algorithm for optimizing numerical functions. Int J Ind Eng Comput 2011;2(2):419–30.
[28] Ting CK. On the mean convergence time of multi-parent genetic algorithms without selection. In: European Conference on Artificial Life; 2005. p. 403–12.
[29] Bies RR, Muldoon MF, Pollock BG, Manuck S, Smith G, Sale ME. A genetic algorithm-based, hybrid machine learning approach to model selection. J Pharmacokinet Pharmacodyn 2006;33(2):195–221.
[30] Schmitt LM. Theory of genetic algorithms II: models for genetic operators over the string-tensor representation of populations and convergence to global optima for arbitrary fitness function under scaling. Theor Comput Sci 2004;310(1–3):181–231.
[31] Lee J, Mathews VJ. A fast recursive least squares adaptive second-order Volterra filter and its performance analysis. IEEE Trans Signal Process 1993;41(3):1087–102.
