LEVY'S IDENTIFICATION METHOD EXTENDED TO COMMENSURATE FRACTIONAL ORDER TRANSFER FUNCTIONS

Duarte Valério¹
Technical University of Lisbon, Inst. Superior Técnico
Department of Mechanical Engineering, GCAR
Av. Rovisco Pais, 1049-001 Lisboa, Portugal
Phone: +351 21 8417187, Fax: +351 21 8498097
Email: [email protected]

Abstract

In this paper Levy's identification method is extended to handle fractional order systems, that is to say, transfer functions including non-integer powers of the Laplace operator s. Iterative improvements of Levy's method suggested by Sanathanan and Koerner and by Lawrence and Rogers are handled; the possibility of using weights, such as those suggested by Vinagre, is addressed. Some numerical considerations are also given. The extended methods and their integer-order counterparts for identifying models from frequency data seem to share the same good characteristics and to suffer from the same drawbacks (such as needing a model structure chosen beforehand, and being sensitive to some levels of noise).

Keywords: fractional (non-integer) order systems, Levy's identification method, iterative identification, identification

1. Introduction

Levy's method is a well-established method for finding the coefficients of a transfer function that models a plant with a known frequency behaviour [Levy, 1959]. In this paper the method is expanded to deal with the situation when a fractional order plant is at stake, that is to say, when non-integer powers of the Laplace operator s are expected to appear in the model. There are several physical systems that may be modelled using such fractional order transfer functions [Oustaloup, 1991; Podlubny, 1999]. Furthermore, fractional order controllers are being

José Sá da Costa
Technical University of Lisbon, Inst. Superior Técnico
Department of Mechanical Engineering, GCAR
Av. Rovisco Pais, 1049-001 Lisboa, Portugal
Phone: +351 21 8417187, Fax: +351 21 8498097
Email: [email protected]

increasingly used, and some methods for devising them require identifying their transfer function from a previously obtained frequency behaviour. This extension of Levy's identification method should thus prove useful.

The paper is organised as follows. Section 2 presents the extension. Section 3 addresses how some improvements on Levy's method may be extended as well. Some results obtained with a Matlab implementation are given in section 4. Some final considerations are drawn in section 5.

2. Levy's method extended for fractional orders

Let us suppose we have a plant G with a known frequency behaviour, and that we want to model it using a transfer function

Ĝ(s) = (b₀ + b₁s^Q + b₂s^{2Q} + … + b_m s^{mQ})/(1 + a₁s^Q + a₂s^{2Q} + … + a_n s^{nQ}) =
     = Σ_{k=0}^{m} b_k s^{kQ} / (1 + Σ_{k=1}^{n} a_k s^{kQ})                              (1)

If Q is 1 (or any other integer), transfer function (1) becomes a usual integer-order transfer function. If Q ∈ ℝ⁺\ℕ, (1) is said to be a fractional order transfer function. The name non-integer order system is also in use, but fractional order system is the most widespread nomenclature, even though Q may be irrational.

Remark 1. It is of course possible to conceive non-integer order transfer functions where the powers of s are not all multiples of some real Q. Taking these

1 Partially supported by programme POCTI, FCT, Ministério da Ciência e Tecnologia, Portugal, grant number SFRH/BD/2875/2000, and ESF through the III Quadro Comunitário de Apoio.

into account would complicate the identification problem. Since transfer functions similar to (1), known as commensurate order ones, are those normally found in practice, we will restrict our attention to them (as is done for instance in some parts of [Miller et al., 1993], or in [Hartley et al., 2003], which also deals with the identification of non-integer models, but only makes use of simpler forms of Ĝ, namely forcing the numerator to be 1). Actually this is all that is needed for engineering purposes: numerical values are known with limited precision only, and commensurate order transfer functions provide good approximations of non-commensurate order ones.

Remark 2. Levy's method requires setting the orders m and n in advance. With this extension Q is also needed in advance. Some comments concerning this will be found in section 4.
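To make structure (1) concrete, the sketch below (an illustration assuming numpy, not part of the paper's Matlab toolbox; the function name is invented) evaluates the frequency response of a commensurate model through complex powers of jω:

```python
import numpy as np

def freq_response(b, a, Q, w):
    """Frequency response of commensurate model (1):
    G_hat(jw) = sum_{k=0}^{m} b[k] (jw)^(kQ) / (1 + sum_{k=1}^{n} a[k-1] (jw)^(kQ))."""
    s = 1j * np.asarray(w, dtype=float)
    # numerator and denominator evaluated with the principal branch of s^(kQ)
    num = sum(bk * s ** (k * Q) for k, bk in enumerate(b))
    den = 1 + sum(ak * s ** (k * Q) for k, ak in enumerate(a, start=1))
    return num / den

# half-order lag 1/(1 + s^0.5) at w = 1 rad/s: (j)^0.5 = exp(j*pi/4)
print(freq_response([1.0], [1.0], 0.5, [1.0]))  # ≈ 0.5 - 0.2071j
```

Setting Q = 1 recovers the usual integer-order frequency response, which is an easy sanity check of the model structure.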

The frequency response of (1) is given by

Ĝ(jω) = Σ_{k=0}^{m} b_k (jω)^{kQ} / (1 + Σ_{k=1}^{n} a_k (jω)^{kQ}) = N(ω)/D(ω) = [α(ω) + jβ(ω)]/[σ(ω) + jτ(ω)]   (2)

where N and D are complex-valued and α, β, σ and τ, the real and imaginary parts thereof, are real-valued. The error between model and plant, for a given frequency ω, will be

ε(ω) = G(jω) − N(ω)/D(ω)   (3)

It might be possible to adjust the parameters of (1) by minimising the norm (or the square of the norm) of this error. Levy's method, however, tries to minimise the square of the norm of

ε(ω)D(ω) = G(jω)D(ω) − N(ω)   (4)

instead. Let us call this new variable E and drop the dependency on ω to alleviate the notation; we will have

E = GD − N = [Re(G) + j Im(G)](σ + jτ) − (α + jβ) = [Re(G)σ − Im(G)τ − α] + j[Re(G)τ + Im(G)σ − β]   (5)

The square of the norm of E is

|E|² = [Re(G)σ − Im(G)τ − α]² + [Re(G)τ + Im(G)σ − β]²   (6)

From (2) we see that

α(ω) = Σ_{k=0}^{m} b_k Re[(jω)^{kQ}]   (7)

β(ω) = Σ_{k=0}^{m} b_k Im[(jω)^{kQ}]   (8)

σ(ω) = 1 + Σ_{k=1}^{n} a_k Re[(jω)^{kQ}]   (9)

τ(ω) = Σ_{k=1}^{n} a_k Im[(jω)^{kQ}]   (10)

Thus, if we differentiate |E|² with respect to one of the coefficients b_k and equal the derivative to zero, we shall have

∂|E|²/∂b_k = −2[Re(G)σ − Im(G)τ − α] Re[(jω)^{kQ}] − 2[Re(G)τ + Im(G)σ − β] Im[(jω)^{kQ}]   (11)

∂|E|²/∂b_k = 0 ⇔ [Re(G)σ − Im(G)τ − α] Re[(jω)^{kQ}] + [Re(G)τ + Im(G)σ − β] Im[(jω)^{kQ}] = 0   (12)

And if we differentiate |E|² with respect to one of the coefficients a_k and equal the derivative to zero, we shall have

∂|E|²/∂a_k = 2[Re(G)σ − Im(G)τ − α] Re(G) Re[(jω)^{kQ}] + 2[Re(G)τ + Im(G)σ − β] Im(G) Re[(jω)^{kQ}] − 2[Re(G)σ − Im(G)τ − α] Im(G) Im[(jω)^{kQ}] + 2[Re(G)τ + Im(G)σ − β] Re(G) Im[(jω)^{kQ}]   (13)

∂|E|²/∂a_k = 0 ⇔
⇔ σ[Re(G)]² Re[(jω)^{kQ}] − τ Im(G) Re(G) Re[(jω)^{kQ}] − α Re(G) Re[(jω)^{kQ}] + τ Im(G) Re(G) Re[(jω)^{kQ}] + σ[Im(G)]² Re[(jω)^{kQ}] − β Im(G) Re[(jω)^{kQ}] − σ Im(G) Re(G) Im[(jω)^{kQ}] + τ[Im(G)]² Im[(jω)^{kQ}] + α Im(G) Im[(jω)^{kQ}] + τ[Re(G)]² Im[(jω)^{kQ}] + σ Im(G) Re(G) Im[(jω)^{kQ}] − β Re(G) Im[(jω)^{kQ}] = 0 ⇔   (14)

⇔ σ{[Im(G)]² + [Re(G)]²} Re[(jω)^{kQ}] + τ{[Im(G)]² + [Re(G)]²} Im[(jω)^{kQ}] + α{Im(G) Im[(jω)^{kQ}] − Re(G) Re[(jω)^{kQ}]} + β{−Im(G) Re[(jω)^{kQ}] − Re(G) Im[(jω)^{kQ}]} = 0   (15)

The m+1 equations given by (12) and the n equations given by (15) make up a linear system that may be solved so as to find the coefficients of (1). Usually the frequency behaviour of the plant is known at more than one frequency (otherwise the model is likely to be rather poor). Let us suppose that it is known at f frequencies. Then the system to solve, given by equations (12) and (15) written explicitly in coefficients a and b, is

⎡A B⎤ ⎡b⎤   ⎡e⎤
⎢   ⎥ ⎢ ⎥ = ⎢ ⎥   (16)
⎣C D⎦ ⎣a⎦   ⎣g⎦

where

A_{l,c} = Σ_{p=1}^{f} {−Re[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] − Im[(jω_p)^{lQ}] Im[(jω_p)^{cQ}]},  l = 0…m ∧ c = 0…m   (17)

B_{l,c} = Σ_{p=1}^{f} {Re[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] Re[G(jω_p)] + Im[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] Im[G(jω_p)] − Re[(jω_p)^{lQ}] Im[(jω_p)^{cQ}] Im[G(jω_p)] + Im[(jω_p)^{lQ}] Im[(jω_p)^{cQ}] Re[G(jω_p)]},  l = 0…m ∧ c = 1…n   (18)

C_{l,c} = Σ_{p=1}^{f} {−Re[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] Re[G(jω_p)] + Im[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] Im[G(jω_p)] − Re[(jω_p)^{lQ}] Im[(jω_p)^{cQ}] Im[G(jω_p)] − Im[(jω_p)^{lQ}] Im[(jω_p)^{cQ}] Re[G(jω_p)]},  l = 1…n ∧ c = 0…m   (19)

D_{l,c} = Σ_{p=1}^{f} ({Re[G(jω_p)]}² + {Im[G(jω_p)]}²) {Re[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] + Im[(jω_p)^{lQ}] Im[(jω_p)^{cQ}]},  l = 1…n ∧ c = 1…n   (20)

b = [b₀ … b_m]ᵀ   (21)

a = [a₁ … a_n]ᵀ   (22)

e_{l,1} = Σ_{p=1}^{f} {−Re[(jω_p)^{lQ}] Re[G(jω_p)] − Im[(jω_p)^{lQ}] Im[G(jω_p)]},  l = 0…m   (23)

g_{l,1} = Σ_{p=1}^{f} −Re[(jω_p)^{lQ}] ({Re[G(jω_p)]}² + {Im[G(jω_p)]}²),  l = 1…n   (24)
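In practice the system (16) need not be assembled entry by entry: the condition E = GD − N ≈ 0 at all sampling frequencies is linear in the coefficients, and solving it as a real least-squares problem has exactly the normal equations (16). The sketch below (an illustration assuming numpy; not the paper's Matlab implementation, and the function name is invented) does that:

```python
import numpy as np

def levy_fractional(freqs, G, Q, n, m):
    """Extended Levy fit: find b_0..b_m and a_1..a_n of model (1) from the
    frequency response G sampled at freqs (rad/s), by solving
    sum_k b_k (jw)^(kQ) - G(jw) sum_k a_k (jw)^(kQ) = G(jw)
    in the least-squares sense over real and imaginary parts."""
    s = 1j * np.asarray(freqs, dtype=float)
    G = np.asarray(G, dtype=complex)
    # one column per unknown: s^(kQ) for b_k, -G s^(kQ) for a_k
    cols = [s ** (k * Q) for k in range(m + 1)] + \
           [-G * s ** (k * Q) for k in range(1, n + 1)]
    M = np.column_stack(cols)
    A = np.vstack([M.real, M.imag])      # stack real and imaginary rows
    y = np.concatenate([G.real, G.imag])
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta[:m + 1], theta[m + 1:]  # coefficient vectors b and a

w = np.logspace(-2, 2, 20)
G = 1.0 / (1.0 + (1j * w) ** 0.5)        # plant 1/(1 + s^0.5)
b, a = levy_fractional(w, G, Q=0.5, n=1, m=0)
print(b, a)                              # b ≈ [1.], a ≈ [1.]
```

With exact data from a model inside the chosen structure the coefficients are recovered almost exactly; the ill-conditioning discussed in section 4 shows up here too when m and n are too high or Q too low.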

If Q is 1, the powers (jω)^k (k being a natural number) reduce to either ±ω^k or ±jω^k, so their real and imaginary parts are ±ω^k or zero, and matrices A, B, C and D and vectors e and g assume the usual structures of Levy's identification method [Levy, 1959].

3. Improvements on this method

Levy's method's drawbacks are well known, one of them being that low frequency data has little influence on (16), so the resulting fit is poor at such frequencies.

3.1 Vinagre's weights

Using well-chosen weights is a means of dealing with this. [Vinagre, 2001] notes that if g(t) is the step response of our system, then

∫₀^{+∞} |g(t) − ĝ(t)|² dt = ∫₀^{+∞} |L⁻¹[G(s)/s] − L⁻¹[Ĝ(s)/s]|² dt   (25)

and that Parseval's theorem turns this into

∫_{−∞}^{+∞} |G(jω)/(jω) − Ĝ(jω)/(jω)|² dω ≈ Σ_{p=1}^{f−1} (1/2)(ω_{p+1} − ω_p)[|ε(ω_{p+1})|²/ω²_{p+1} + |ε(ω_p)|²/ω²_p] = Σ_{p=1}^{f} |ε(ω_p)|² φ_p/ω²_p   (26)

where

φ_p = (ω₂ − ω₁)/2,         if p = 1
φ_p = (ω_{p+1} − ω_{p−1})/2,  if 1 < p < f   (27)
φ_p = (ω_f − ω_{f−1})/2,      if p = f

are the coefficients of the trapezoidal integration rule used in the approximation of (26). Just as Levy's method minimises Σ_{p=1}^{f} |E(ω_p)|² instead of Σ_{p=1}^{f} |ε(ω_p)|², so this time, instead of the last member of (26), the quantity

Σ_{p=1}^{f} |E(ω_p)|² φ_p/ω²_p   (28)

will be minimised. The fraction multiplying the square of the norm is the weight²,

w_p = φ_p/ω²_p   (29)

which clearly increases the influence of low frequencies. Since the weight does not depend on coefficients a and b, it will not change the values of derivatives (11) and (13). The only difference in the method is that the matrices and vectors in (16) will now be given by

A_{l,c} = Σ_{p=1}^{f} {−Re[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] − Im[(jω_p)^{lQ}] Im[(jω_p)^{cQ}]} w_p,  l = 0…m ∧ c = 0…m   (30)

B_{l,c} = Σ_{p=1}^{f} {Re[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] Re[G(jω_p)] + Im[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] Im[G(jω_p)] − Re[(jω_p)^{lQ}] Im[(jω_p)^{cQ}] Im[G(jω_p)] + Im[(jω_p)^{lQ}] Im[(jω_p)^{cQ}] Re[G(jω_p)]} w_p,  l = 0…m ∧ c = 1…n   (31)

C_{l,c} = Σ_{p=1}^{f} {−Re[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] Re[G(jω_p)] + Im[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] Im[G(jω_p)] − Re[(jω_p)^{lQ}] Im[(jω_p)^{cQ}] Im[G(jω_p)] − Im[(jω_p)^{lQ}] Im[(jω_p)^{cQ}] Re[G(jω_p)]} w_p,  l = 1…n ∧ c = 0…m   (32)

D_{l,c} = Σ_{p=1}^{f} ({Re[G(jω_p)]}² + {Im[G(jω_p)]}²) {Re[(jω_p)^{lQ}] Re[(jω_p)^{cQ}] + Im[(jω_p)^{lQ}] Im[(jω_p)^{cQ}]} w_p,  l = 1…n ∧ c = 1…n   (33)

e_{l,1} = Σ_{p=1}^{f} {−Re[(jω_p)^{lQ}] Re[G(jω_p)] − Im[(jω_p)^{lQ}] Im[G(jω_p)]} w_p,  l = 0…m   (34)

g_{l,1} = Σ_{p=1}^{f} −Re[(jω_p)^{lQ}] ({Re[G(jω_p)]}² + {Im[G(jω_p)]}²) w_p,  l = 1…n   (35)
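The weights (27)-(29) are straightforward to compute for a given frequency grid; a minimal sketch (assuming numpy, function name invented):

```python
import numpy as np

def vinagre_weights(w):
    """Weights w_p = phi_p / w_p^2 of (29), with phi_p the trapezoidal-rule
    coefficients (27) for the frequency grid w (rad/s, increasing)."""
    w = np.asarray(w, dtype=float)
    phi = np.empty_like(w)
    phi[0] = (w[1] - w[0]) / 2          # p = 1
    phi[-1] = (w[-1] - w[-2]) / 2       # p = f
    phi[1:-1] = (w[2:] - w[:-2]) / 2    # 1 < p < f
    return phi / w ** 2

print(vinagre_weights([0.1, 1.0, 10.0]))  # ≈ [45., 4.95, 0.045]
```

For the grid 0.1, 1, 10 rad/s the lowest frequency gets a weight three orders of magnitude above the highest, which is precisely the intended low-frequency emphasis.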

² Based upon energetic considerations, Vinagre (2001) adds yet another term to this weight, one that depends neither on p nor on coefficients a or b, and thus may be neglected by the minimisation.

3.2 Sanathanan and Koerner's iterative method

Another way of improving Levy's method was proposed by [Sanathanan et al., 1963]; it results from performing several iterations in which variable E is replaced by

E_L = (GD − N)/D_{L−1}   (36)

where L is the iteration number and D_{L−1} is the denominator found in the previous iteration. In the first iteration this is assumed to be 1, and the result is that of Levy's method. If convergence exists, subsequent iterations will see E_L converge to ε. This time the variable minimised is

Σ_{p=1}^{f} |E(ω_p)|²/|D_{L−1}(ω_p)|²   (37)

and the fraction is the weight:

w_p = 1/|D_{L−1}(ω_p)|²   (38)

It depends on coefficients known from the last iteration, not the current one, and so derivatives (11) and (13) are again not affected. Thus (30) to (35) remain valid (save that w_p is given by a different expression), and these are the values with which (16) is to be solved in each iteration. The resulting values of a will be used to find the new weights for the next iteration. The process may be stopped when no significant change in the parameters is achieved, or after some pre-set number of iterations (which is sometimes advisable, because too many iterations may cause numerical errors to accumulate, making the result diverge).

3.3 Lawrence and Rogers's iterative method

All possibilities addressed thus far involve solving a linear set of equations; if data from new frequencies appear, the system will have to be solved again. And, as will be seen in the next section, the equation systems that show up may cause numerical problems. [Lawrence et al., 1979] developed an iterative method that avoids solving the whole system again when new data is obtained; this method deals with one frequency at a time. It stems from writing (5) in the following form:

E = GD − N = G(1 + aᵀs) − bᵀt = G + aᵀGs − bᵀt   (39)

where

s = [(jω)^Q (jω)^{2Q} … (jω)^{nQ}]ᵀ   (40)

t = [1 (jω)^Q … (jω)^{mQ}]ᵀ   (41)

If we let

v = [bᵀ aᵀ]ᵀ   (42)

u = [tᵀ −(Gs)ᵀ]ᵀ   (43)

then (39) becomes

E = G − vᵀu   (44)

Instead of (6) we may alternatively write

|E|² = (G − vᵀu)(Ḡ − vᵀū) = GḠ − Ḡuᵀv − Gūᵀv + vᵀuūᵀv   (45)

where it has been taken into account that G is a scalar (whereas u and v are vectors) and that v is real-valued (whereas G and u are complex-valued). Differentiating (45) in order to v gives

∂|E|²/∂v = −Ḡu − Gū + uūᵀv + ūuᵀv   (46)

and equalling (46) to zero gives

(uūᵀ + ūuᵀ)v = Ḡu + Gū   (47)

It should be noticed that both the matrix on the left-hand side multiplying v and the vector on the right-hand side are real-valued. And since we usually deal not with only one but with f frequencies, this becomes

Σ_{k=1}^{f} (u_k ū_kᵀ + ū_k u_kᵀ)v = Σ_{k=1}^{f} (Ḡ_k u_k + G_k ū_k)   (48)

And, if weights are included, we shall want to minimise |E|²w²; since

∂(|E|²w²)/∂v = w²(−Ḡu − Gū + uūᵀv + ūuᵀv)   (49)

(48) becomes

Σ_{k=1}^{f} w_k²(u_k ū_kᵀ + ū_k u_kᵀ)v = Σ_{k=1}^{f} w_k²(Ḡ_k u_k + G_k ū_k)   (50)

Until now this is solely putting (16) in an equivalent, more compact form (the resulting system of equations is, of course, equivalent; the dimension of the matrix and the size of the vectors in (50) are the same as those in (16)). Yet (50) allows for the developments that follow. Let

H_f⁻¹ = Σ_{k=1}^{f} w_k²(u_k ū_kᵀ + ū_k u_kᵀ) = H_{f−1}⁻¹ + w_f²(u_f ū_fᵀ + ū_f u_fᵀ)   (51)

Then (50) becomes

H_f⁻¹ v_f = Σ_{k=1}^{f} w_k²(Ḡ_k u_k + G_k ū_k)   (52)

where the subscript on v has been added to show that the solution is obtained from data concerning f frequencies. Additionally,

H_f⁻¹ v_f = w_f²(Ḡ_f u_f + G_f ū_f) + Σ_{k=1}^{f−1} w_k²(Ḡ_k u_k + G_k ū_k) = w_f²(Ḡ_f u_f + G_f ū_f) + H_{f−1}⁻¹ v_{f−1} = w_f²(Ḡ_f u_f + G_f ū_f) + [H_f⁻¹ − w_f²(u_f ū_fᵀ + ū_f u_fᵀ)] v_{f−1}   (53)

Hence

v_f = v_{f−1} − H_f w_f²(u_f ū_fᵀ + ū_f u_fᵀ)v_{f−1} + H_f w_f²(Ḡ_f u_f + G_f ū_f) = v_{f−1} + H_f w_f²[u_f(Ḡ_f − ū_fᵀ v_{f−1}) + ū_f(G_f − u_fᵀ v_{f−1})]   (54)

This last equality means that once a vector v with parameters for the model is obtained from data concerning f−1 frequencies, it is possible to improve it taking into account data from another frequency. It is even possible to find an expression for H_f that does not require inverting H_f⁻¹, developing (51) as follows:

H_f⁻¹ = H_{f−1}⁻¹ + w_f²(Re[u_f] + j Im[u_f])(Re[u_fᵀ] − j Im[u_fᵀ]) + w_f²(Re[u_f] − j Im[u_f])(Re[u_fᵀ] + j Im[u_fᵀ]) = H_{f−1}⁻¹ + 2w_f²(Re[u_f] Re[u_fᵀ] + Im[u_f] Im[u_fᵀ])   (55)

Let

Z_f⁻¹ = H_{f−1}⁻¹ + 2w_f² Re[u_f] Re[u_fᵀ]   (56)

Multiplying this by Z_f, by H_{f−1} and by Re[u_f],

I = Z_f H_{f−1}⁻¹ + 2w_f² Z_f Re[u_f] Re[u_fᵀ] ⇔ H_{f−1} = Z_f + 2w_f² Z_f Re[u_f] Re[u_fᵀ] H_{f−1} ⇔ H_{f−1} Re[u_f] = Z_f Re[u_f](1 + 2w_f² Re[u_fᵀ] H_{f−1} Re[u_f])   (57)

It should be noticed that the term within parenthesis is scalar. Rearranging and then multiplying by Re[u_fᵀ]H_{f−1},

Z_f Re[u_f] = H_{f−1} Re[u_f](1 + 2w_f² Re[u_fᵀ] H_{f−1} Re[u_f])⁻¹ ⇒ Z_f Re[u_f] Re[u_fᵀ] H_{f−1} = H_{f−1} Re[u_f] Re[u_fᵀ] H_{f−1}/(1 + 2w_f² Re[u_fᵀ] H_{f−1} Re[u_f])   (58)

Recall that the denominator is a scalar. Now the second equality in (57) shows that

H_{f−1} = Z_f + 2w_f² Z_f Re[u_f] Re[u_fᵀ] H_{f−1} ⇔ Z_f Re[u_f] Re[u_fᵀ] H_{f−1} = (H_{f−1} − Z_f)/(2w_f²)   (59)

From (58) and (59)

(H_{f−1} − Z_f)/(2w_f²) = H_{f−1} Re[u_f] Re[u_fᵀ] H_{f−1}/(1 + 2w_f² Re[u_fᵀ] H_{f−1} Re[u_f]) ⇔ Z_f = H_{f−1} − H_{f−1} Re[u_f] Re[u_fᵀ] H_{f−1}/(1/(2w_f²) + Re[u_fᵀ] H_{f−1} Re[u_f]) = H_{f−1}(I − Re[u_f] Re[u_fᵀ] H_{f−1}/(1/(2w_f²) + Re[u_fᵀ] H_{f−1} Re[u_f]))   (60)

The identity matrix above has the same size as H_{f−1}, which is also the size of matrix Re[u_f]Re[u_fᵀ]. The steps that follow are close parallels of those from (57) to (60). From (55) and (56) we know that

H_f⁻¹ = Z_f⁻¹ + 2w_f² Im[u_f] Im[u_fᵀ]   (61)

Multiplying this by H_f, by Z_f and by Im[u_f],

I = H_f Z_f⁻¹ + 2w_f² H_f Im[u_f] Im[u_fᵀ] ⇔ Z_f = H_f + 2w_f² H_f Im[u_f] Im[u_fᵀ] Z_f ⇔ Z_f Im[u_f] = H_f Im[u_f](1 + 2w_f² Im[u_fᵀ] Z_f Im[u_f])   (62)

Rearranging and then multiplying by Im[u_fᵀ]Z_f,

H_f Im[u_f] = Z_f Im[u_f](1 + 2w_f² Im[u_fᵀ] Z_f Im[u_f])⁻¹ ⇒ H_f Im[u_f] Im[u_fᵀ] Z_f = Z_f Im[u_f] Im[u_fᵀ] Z_f/(1 + 2w_f² Im[u_fᵀ] Z_f Im[u_f])   (63)

Now the second equality in (62) shows that

Z_f = H_f + 2w_f² H_f Im[u_f] Im[u_fᵀ] Z_f ⇔ H_f Im[u_f] Im[u_fᵀ] Z_f = (Z_f − H_f)/(2w_f²)   (64)

From (63) and (64)

(Z_f − H_f)/(2w_f²) = Z_f Im[u_f] Im[u_fᵀ] Z_f/(1 + 2w_f² Im[u_fᵀ] Z_f Im[u_f]) ⇔ H_f = Z_f − Z_f Im[u_f] Im[u_fᵀ] Z_f/(1/(2w_f²) + Im[u_fᵀ] Z_f Im[u_f]) = Z_f(I − Im[u_f] Im[u_fᵀ] Z_f/(1/(2w_f²) + Im[u_fᵀ] Z_f Im[u_f]))   (65)

The identity matrix above has the same size as Z_f, which is also the size of matrix Im[u_f]Im[u_fᵀ]. The best way to use this method is to begin with some values for H and v (which is made up of parameters a and b), obtained applying (51) and (52) with a few frequencies. Data from each of the further frequencies is then taken into account using (60) and (65), with which it is possible to obtain a value for H_f from the value of H_{f−1} inverting only a scalar. Then (54) is used to update the vector of parameters v. Actually it is possible to begin with no estimate at all, making

v₀ = 0, H₀⁻¹ = 0 ⇒ H₀ = I × ∞   (66)

Since infinity is not an available numerical value, some positive real number x is used instead [Lawrence et al., 1979] and

v₀ = 0, H₀ = I × x   (67)

However, it is hard to tell in advance which number to use: large real numbers, close to the floating-point limit, are good approximations of infinity, but are likely to cause overflow errors; furthermore, there are cases when a moderate choice performs better than a very large one. The specificity of the fractional case consists solely in the definitions of s and t, in (40) and (41).

4. Numerical results

The identification methods above were implemented as part of the toolbox Ninteger for Matlab [Valério et al., 2004]. The results that follow were obtained with that implementation.

4.1 Exact data

The exact frequency responses of some transfer functions were reckoned, and the methods of the previous sections were used to reconstruct the transfer functions. As Table 1 shows, this is usually possible, provided that a compatible structure is offered. Column J of that table presents an index showing how close the frequency response of the identified model Ĝ(jω) is to the data G(jω) from which the model was obtained. It is given by

J = (1/n_ω) Σ_{i=1}^{n_ω} |G(jω_i) − Ĝ(jω_i)|²   (68)

where n_ω is the number of sampling frequencies. Insignificant values of J appear when only slight numerical discrepancies exist; higher values reflect a poorer quality of the identified model. Results of the iterative method of Lawrence and Rogers are not shown because, if the initial conditions in (67) are assumed, data from many frequencies is needed to get any acceptable results. Actually the best way of using that iterative method is to combine it with the weights of Sanathanan and Koerner's method. (For this last method the tables also show the number of iterations n needed to find the result given.)
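For reference, the iteration of section 3.2 can be sketched on top of the same linearised least-squares formulation that underlies Levy's method (an illustrative numpy reimplementation, not the Ninteger code; function name invented, and with exact data the first pass already recovers the plant of the example below):

```python
import numpy as np

def sk_fractional(wfreq, G, Q, n, m, iters=10):
    """Sanathanan-Koerner iteration: repeated weighted linear least squares
    with weight 1/|D_{L-1}(jw)|^2, starting from D_0 = 1 (Levy's method)."""
    s = 1j * np.asarray(wfreq, dtype=float)
    G = np.asarray(G, dtype=complex)
    M = np.column_stack([s ** (k * Q) for k in range(m + 1)] +
                        [-G * s ** (k * Q) for k in range(1, n + 1)])
    D = np.ones_like(s)
    for _ in range(iters):
        wt = 1.0 / np.abs(D)   # row scaling; squared it gives weight (38)
        A = np.vstack([(M * wt[:, None]).real, (M * wt[:, None]).imag])
        y = np.concatenate([(G * wt).real, (G * wt).imag])
        theta, *_ = np.linalg.lstsq(A, y, rcond=None)
        a = theta[m + 1:]
        D = 1 + sum(ak * s ** (k * Q) for k, ak in enumerate(a, start=1))
    return theta[:m + 1], a

w = np.logspace(-2, 2, 20)
G = (4 + 5 * (1j * w) ** 0.5 + 6 * (1j * w)) / \
    (1 + 2 * (1j * w) ** 0.5 + 3 * (1j * w))
b, a = sk_fractional(w, G, Q=0.5, n=2, m=2)
print(np.round(b, 4), np.round(a, 4))   # b ≈ [4, 5, 6], a ≈ [2, 3]
```

Scaling the stacked rows by 1/|D_{L−1}| makes the squared residual carry exactly the weight 1/|D_{L−1}|² of (38).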

Table 1. Identification results with exact data. For each plant and structure (Q, n, m), the transfer functions identified by Levy's method, by Levy's method with Vinagre's weights, and by Sanathanan and Koerner's method are given, together with index J and, for the latter method, the number of iterations n.

Plant 1/(1 + s):
Frequencies 0.1, 1, 10; Q = 1, n = m = 1: all three methods identify 1.0000/(1 + 1.0000s) (2 iterations).
Frequencies 0.1, 1, 10; Q = 1, n = m = 2: Levy's method identifies (1.0000 − 0.0042s)/(1 + 0.9958s − 0.0042s²); the other two methods identify 1.0000/(1 + 1.0000s) (3 iterations).
Frequencies 0.1, 1, 10; Q = 0.5, n = m = 2: all three methods identify 1.0000/(1 + 1.0000s) (3 iterations).
Frequencies 0.01, 0.1, 1, 10, 100; Q = 0.25, n = m = 4: all three methods identify 1.0000/(1 + 1.0000s) (3 iterations). In these four rows J ranges from 6.4198 × 10⁻³⁴ to 1.1257 × 10⁻²⁴.
Frequencies 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100; Q = 0.25, n = m = 5: Levy: (1.000 + 0.1606s^0.25)/(1 + 0.1606s^0.25 + 1.000s + 0.1606s^1.25), J = 8.0421 × 10⁻²⁴; Vinagre's weights: (1.000 − 42.6087s^0.25)/(1 − 42.6087s^0.25 + 1.000s − 42.6087s^1.25), J = 1.3533 × 10⁻²⁶; Sanathanan and Koerner: (1.000 + 0.3824s^0.25)/(1 + 0.3824s^0.25 + 1.000s + 0.3824s^1.25), J = 2.0285 × 10⁻²⁶ (100 iterations).

Plant 1/(1 + s^0.5):
Frequencies 0.1, 1, 10; Q = 0.5, n = m = 1: all three methods identify 1.0000/(1 + 1.0000s^0.5); J = 1.3225 × 10⁻³¹, 5.5731 × 10⁻³⁰ and 5.6494 × 10⁻³³ respectively (2 iterations).
Frequencies 0.1, 1, 10; Q = 0.5, n = m = 2: all three methods identify 1.0000/(1 + 1.0000s^0.5); J = 7.7515 × 10⁻²⁹, 3.6635 × 10⁻²⁸ and 9.9147 × 10⁻³⁰ respectively (2 iterations).
Frequencies 0.1, 1, 10; Q = 0.25, n = m = 3: Levy: (1.0000 + 0.0104s^0.25)/(1 + 0.0104s^0.25 + 1.0000s^0.5 + 0.0104s^0.75), J = 4.7447 × 10⁻³⁰; Vinagre's weights: (1.0000 + 1.1260s^0.25)/(1 + 1.1260s^0.25 + 1.0000s^0.5 + 1.1260s^0.75), J = 8.5356 × 10⁻³⁰; Sanathanan and Koerner: (1.0000 − 2.5156s^0.25)/(1 − 2.5156s^0.25 + 1.0000s^0.5 − 2.5156s^0.75) (100 iterations).

Plant (4 + 5s^0.5 + 6s)/(1 + 2s^0.5 + 3s):
Frequencies 0.1, 1, 10; Q = 0.5, n = m = 2: all three methods identify (4.0000 + 5.0000s^0.5 + 6.0000s)/(1 + 2.0000s^0.5 + 3.0000s), with J of the order of 10⁻²⁸ to 10⁻²⁵ (2 iterations).
50 logarithmically spaced frequencies between 10⁻³ and 10³; Q = 0.5, n = m = 3: Levy: (4.0000 + 5.0060s^0.5 + 6.0075s + 0.0090s^1.5)/(1 + 2.0015s^0.5 + 3.0030s + 0.0045s^1.5); Vinagre's weights: (4.0000 + 5.1137s^0.5 + 6.1421s + 0.1705s^1.5)/(1 + 2.0284s^0.5 + 3.0568s + 0.0852s^1.5), J = 1.5720 × 10⁻²⁰; Sanathanan and Koerner: (4.0000 + 4.5038s^0.5 + 5.3798s − 0.7443s^1.5)/(1 + 1.8760s^0.5 + 2.7519s − 0.3721s^1.5) (100 iterations).

Table 2. Identification results when noise is present.

Plant 1/(1 + s), frequencies 0.1, 1, 10; Q = 1, n = m = 1:
Levy: (0.9683 + 0.0025s)/(1 + 1.0457s), J = 1.1557 × 10⁻³
Vinagre's weights: (0.9553 + 0.0188s)/(1 + 0.9682s), J = 7.9640 × 10⁻⁴
Sanathanan and Koerner: (0.9636 − 0.0071s)/(1 + 0.9535s), J = 6.2058 × 10⁻⁴ (4 iterations)

Plant 1/(1 + s), frequencies 0.1, 1, 10; Q = 0.5, n = m = 2:
Levy: (0.9731 − 0.0956s^0.5 + 0.0227s)/(1 − 0.0519s^0.5 + 0.8812s), J = 4.1194 × 10⁻⁴
Vinagre's weights: (1.0119 − 0.3437s^0.5 + 0.0609s)/(1 − 0.0539s^0.5 + 0.6127s), J = 9.9986 × 10⁻⁴
Sanathanan and Koerner: (0.9877 − 0.1884s^0.5 + 0.0370s)/(1 − 0.0522s^0.5 + 0.7805s), J = 2.9884 × 10⁻⁴ (7 iterations)

Plant 1/(1 + s^0.5), frequencies 0.1, 1, 10; Q = 0.5, n = m = 1:
Levy: (0.9940 + 0.0091s^0.5)/(1 + 0.9384s^0.5), J = 4.2694 × 10⁻⁴
Vinagre's weights: (0.9949 + 0.0355s^0.5)/(1 + 0.9656s^0.5), J = 9.4836 × 10⁻⁴
Sanathanan and Koerner: (0.9951 + 0.0082s^0.5)/(1 + 0.9726s^0.5), J = 3.4494 × 10⁻⁴ (4 iterations)

Plant 1/(1 + s^0.5), 50 logarithmically spaced frequencies between 10⁻³ and 10³; Q = 0.25, n = m = 2:
Levy: (0.9294 − 0.0517s^0.25 + 0.0070s^0.5)/(1 − 0.1255s^0.25 + 0.8591s^0.5), J = 2.5614 × 10⁻³
Vinagre's weights: (1.0096 − 2.8410s^0.25 + 0.7753s^0.5)/(1 − 2.2890s^0.25 − 0.5489s^0.5), J = 2.5935 × 10⁻²
Sanathanan and Koerner: (0.9547 − 0.3434s^0.25 + 0.0686s^0.5)/(1 − 0.2973s^0.25 + 0.6253s^0.5), J = 2.8507 × 10⁻³ (8 iterations)

Plant 1/(1 + s^0.5), 50 logarithmically spaced frequencies between 10⁻³ and 10³; Q = 0.25, n = m = 3:
Levy: (0.9255 − 0.3572s^0.25 + 0.0223s^0.5 − 0.0022s^0.75)/(1 − 0.4995s^0.25 + 0.9439s^0.5 − 0.2930s^0.75), J = 2.4983 × 10⁻³
Vinagre's weights: (0.9458 − 3.3266s^0.25 + 0.7870s^0.5 − 0.1249s^0.75)/(1 − 3.4428s^0.25 + 1.3446s^0.5 − 2.2052s^0.75), J = 2.5448 × 10⁻³
Sanathanan and Koerner: (0.9537 − 1.1239s^0.25 + 0.3482s^0.5 − 0.0560s^0.75)/(1 − 1.1235s^0.25 + 0.8792s^0.5 − 0.5235s^0.75), J = 2.8170 × 10⁻³ (11 iterations)

4.2 Corrupted data

Noise is usually present in experimental data. To check how it may affect this identification method, Gaussian, zero-mean noise, with a variance of 1 dB in gain and 1 degree in phase, was added to the frequency responses. As Table 2 shows, this does not necessarily prevent a reasonable approximation of the original transfer function from being found, but the structure offered needs to be closer to the correct one.

4.3 Comments

In what concerns numerical problems, it should be noticed that all poor results in Table 1 were returned by Matlab together with a warning about a bad condition number of the matrix in (16). Choosing structures that avoid such condition numbers may prove to be a good option. The number of sampling frequencies also affects the conditioning of the matrix. Increasing the number of sampling frequencies usually helps improve the results, though not always: there are a few exceptions in which results even deteriorate. It is also worth noticing that index J is not always a sufficient indicator of the goodness of an approximation: sometimes poorer models achieve a lower value of this index. Above all, the improvements on Levy's method do sometimes lead to better models, but, again, this is not always so. It seems that the best policy is to use all methods (Levy's method, Levy's method with weights such as those devised by Vinagre, and Sanathanan and Koerner's iterative method) and then check which one gives a more reasonable result, using Lawrence and Rogers's iteration to assist in the reckonings, avoiding inverting ill-conditioned matrices, when good estimates for H and v are already available.

5. Conclusions

This extension of Levy's method appears to enjoy the same merits and suffer from the same drawbacks as the original integer-order version. Namely, it requires providing in advance the orders n and m of the denominator and of the numerator; and this extension also requires Q, of which all the powers of the Laplace variable appearing in the model are integer multiples. Of course, numerical problems usually arise when excessively high values for n and m, or excessively low values for Q, are provided. There are two possible solutions for dealing with this requirement: a visual inspection of the frequency data may suggest the appropriate orders; or several possible combinations of values may be tried, and the best retained. This last option is feasible because the algorithm runs fast enough in modern computers. The same comments apply to the improvements on Levy's method also extended in this paper. These improvements may lead to better results, but this is not always the case. It is better to check the results provided by all methods for each case, and then choose the most reasonable one. Future work should include adapting other identification methods known to be less sensitive to noise, and of stochastic nature, such as, for instance, those proposed by [Schoukens et al., 1988] or those proposed by [Helmicki et al., 1991].

References

Hartley, T. and Lorenzo, C. (2003). Fractional-order system identification based on continuous order-distributions. Signal Processing, 83, 2287-2300.
Helmicki, A., Jacobson, C. and Nett, C. (1991). Control oriented system identification: a worst-case/deterministic approach in H∞. IEEE Transactions on Automatic Control, 36, 1163-1176.
Lawrence, P. J. and Rogers, G. J. (1979). Sequential transfer-function synthesis from measured data. Proceedings of the IEE, 126:1, 104-106.
Levy, E. (1959). Complex curve fitting. IRE Transactions on Automatic Control, 4, 37-43.
Miller, K. and Ross, B. (1993). An introduction to the fractional calculus and fractional differential equations. John Wiley and Sons, New York.
Oustaloup, A. (1991). La commande CRONE: commande robuste d'ordre non entier. Hermès, Paris.
Podlubny, I. (1999). Fractional differential equations. Academic Press, San Diego.
Sanathanan, C. K. and Koerner, J. (1963). Transfer function synthesis as a ratio of two complex polynomials. IEEE Transactions on Automatic Control, 8, 56-58.
Schoukens, J., Pintelon, R. and Renneboog, J. (1988). A maximum likelihood estimator for linear and nonlinear systems - a practical application of estimation techniques in measurement problems. IEEE Transactions on Instrumentation and Measurement, 37, 10-17.
Valério, D. and Sá da Costa, J. (2004). Ninteger: a non-integer control toolbox for Matlab. In: Fractional Derivatives and Applications, Bordeaux.
Vinagre, B. (2001). Modelado y control de sistemas dinámicos caracterizados por ecuaciones íntegro-diferenciales de orden fraccional. PhD thesis, UNED, Madrid.