Modeling the Nonlinear Power System Components Using Symbolically Assisted Computational Procedure

Eugene Solodovnik, George Cokkinides, Athan Meliopoulos*, and Roger Dougal
Department of Electrical and Computer Engineering, University of South Carolina, Columbia, SC 29208
*School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332

Abstract

This paper presents an application of a symbolically assisted computational procedure. Specifically, an Automatic Differentiation Technique (ADT) is used for building Algebraic Companion Form (ACF) models of nonlinear power system components in a time domain simulation algorithm. The ACF is a generalization of the resistive companion form that is widely used in time-domain simulators. The method has been implemented within the VTB simulation environment. The Algebraic Companion Form results from proper implicit integration of the differential equations of any system component. The process requires the computation of the Jacobian (sensitivity) matrices of the component. Since modern power system components are often nonlinear, efficient computation of the derivatives greatly affects the overall computational effectiveness. This paper presents an effective application of ADT, which yields large savings in the computation of the required derivatives compared to other commonly employed numerical differentiation techniques, and which minimizes human intervention and therefore the possibility of errors. The algorithm for efficient generation of ACF models, which utilizes the automatic differentiation method and enables VTB users to introduce their own models, is described. To demonstrate its capabilities, ACF models of nonlinear power system components are built employing ADT and used in a practical power system circuit. Specifically, the combined effects of surge arrester application in a power system that contains a nonlinear inductor in its load are analyzed.

1. Introduction

Nonlinear devices are widely used in power system applications. In some cases, nonlinear devices are employed to enhance the system; for example, a surge arrester limits voltage surges caused, e.g., by a lightning strike. In other applications, the nonlinearities of power system components are often considered parasitic. For example, magnetic flux saturation in the core of inductors and transformers can lead to undesirable distortion of the waveforms and hence to poorer quality of the electrical power.

The nonlinear dynamics of power systems are usually studied using time domain simulation software, since analytical methods for the solution of most nonlinear problems in power systems are very complex and of limited practical use. The Virtual Test Bed (VTB) employs a time domain simulator for virtual prototyping of power electronics systems. The VTB is an experimental environment whose goals and architecture are described in some detail in reference [1]. One of the unique capabilities of the VTB is highly interactive user involvement with the simulation. This capability allows a user to change the system configuration, component values, or control algorithms during a simulation and to immediately see the results of those changes reflected in a waveform display much like that of a traditional laboratory oscilloscope. The combination of these features allows one to rapidly explore the detailed waveform behavior of a power electronic system.

One of the challenges in implementing such an interactive user environment is to achieve high computational efficiency, so that the user is unhindered by the response of the computing system. Such performance is achieved in the VTB by a variety of techniques, including distribution of the computing load to multiple computers, but especially by the design of an efficient computing engine. The basic network solver of the VTB is based on the Algebraic Companion approach [2, 3], which is itself an extension of the Resistive Companion approach [2]. While the VTB provides a powerful capability for importing models that were created in other simulation environments, peak computational speed is achieved by the use of efficient native-form models. For these models, computational speed comes at the cost of a relatively large intellectual effort to create the models in the first place.

Creation of native-form models is a tedious process. Even though the basic dynamics of a system may be described by relatively few equations, the native-form model may be substantially more complex. The Algebraic Companion method belongs to a large class of implicit numerical methods, which usually have the important advantage of improved numerical stability [4] over a wide range of integration time step sizes. Many implicit numerical methods used for power system simulations require the computation of the Jacobian, or sensitivity, matrices. This computation may take a significant proportion of the CPU time unless the Jacobian matrices have been obtained analytically. The VTB software environment uses several types of models, which can be classified as internal native models, models translated from other simulation environments, and user-defined models. The internal native models are usually implemented in a computer language (Visual C++) by first developing the analytic expressions for the Jacobian matrix, integrating the model equations, and casting the results in the Algebraic Companion format. The translated models are obtained by means of translators; Jacobian matrices for these models are computed via numerical differentiation. Finally, there is a third category of user-defined models. These models are generally described as a set of nonlinear integro-differential equations, and their Algebraic Companion Form (ACF) is obtained using the symbolically assisted computational procedure based on the Automatic Differentiation Technique (ADT). In general, the facility of having user-defined models results in an increase in the CPU time for sensitivity matrix computation. Indeed, the functions associated with these additional models are not known in advance, they may not be analytical, and hence their derivatives are not easily obtained. The commonly used numerical differentiation techniques (divided-difference approximations) are known to be computationally intensive and CPU-time consuming, especially for large and complex systems. Thus the ADT, being computationally more effective, is a convenient alternative to numerical differentiation.

The symbolically assisted computational procedure (SACP) described in this paper facilitates the creation of native-form models by symbolic manipulation of the basic dynamic equations. It enables users to introduce their own models relatively easily, so they can focus on model development rather than on implementing the model in a computer language. The SACP uses a new method for rapid calculation of the sensitivity matrices, known as the automatic differentiation technique [5, 6], whose basis was developed in the late 1980s. The method yields exact derivative values. It is very efficient, since the ratio of the cost of evaluating the derivatives to the cost of evaluating the function does not depend on the number of variables. In comparison with numerical differentiation, ADT leads to a saving of at least 80% of the time required for Jacobian matrix computation [5]. Because there is very little overhead in the symbolic manipulation of the model, the efficiency of the method is comparable to the analytic approach of developing a native model. Our work focuses on the application of automatic differentiation for the purpose of generating an implementation of Algebraic Companion models in the VTB environment.

2. Algebraic Companion Model Form

An Algebraic Companion device normally has one or more terminals through which it can be connected to other devices. Each terminal is associated with one across and one through variable (see fig. 1). The electrical analogy for an across variable is voltage, and for a through variable, electric current. A set of terminals connected together forms a node.

[Figure 1. AC Device: a device with terminal through variables I1, I2, I3 and across variables V1, V2, V3.]

Terminals of any physical type are supported, provided that they satisfy energy conservation equations in the form of across and through variables. The ACF requires the device equations to be expressed in a specific form, which allows for handling of the device-to-device interconnections. This form is known as the Algebraic Companion model form.

2.1. General Model of the Device

A general device model is described with a set of algebraic-integral-differential equations. It is always possible to cast these equations in the following general form:

\begin{bmatrix} \mathbf{i} \\ \mathbf{0} \end{bmatrix} =
\begin{bmatrix}
\mathbf{f}_1(\dot{\mathbf{v}}, \dot{\mathbf{y}}, \ldots, \textstyle\int \mathbf{v}, \int \mathbf{y}, \ldots, \mathbf{v}, \mathbf{y}, \mathbf{u}, t) \\
\mathbf{f}_2(\dot{\mathbf{v}}, \dot{\mathbf{y}}, \ldots, \textstyle\int \mathbf{v}, \int \mathbf{y}, \ldots, \mathbf{v}, \mathbf{y}, \mathbf{u}, t)
\end{bmatrix}
\quad (1)

where f_1, f_2 are arbitrary vector functions (so the model can be adapted to a wide variety of devices), i is the vector of terminal through variables (terminal currents), v is the vector of terminal across variables (terminal voltages), y is the vector of device internal state variables, and u is a vector of independent controls.


Note that this form includes two sets of equations, named the external equations and the internal equations respectively. The terminal currents appear only in the external equations. Similarly, the device states consist of two sets: external states (the terminal across variables v(t)) and internal states (y(t)). The set of equations (1) is consistent in the sense that the number of external states and the number of internal states equal the number of external and internal equations respectively.

2.2. Time Discretization

Equations (1) are integrated using a suitable numerical integration method. Assuming an integration time step h, the result of the integration is approximated with an equation of the form:

\begin{bmatrix} \mathbf{i}(t) \\ \mathbf{0} \end{bmatrix} =
G(\cdot) \begin{bmatrix} \mathbf{v}(t) \\ \mathbf{y}(t) \end{bmatrix}
+ \frac{1}{2} \begin{bmatrix} \mathbf{q}_1(\cdot) \\ \mathbf{q}_2(\cdot) \end{bmatrix}
- \begin{bmatrix} \mathbf{b}_1(t-h) \\ \mathbf{b}_2(t-h) \end{bmatrix}
+ \begin{bmatrix} \boldsymbol{\delta}_1 \\ \boldsymbol{\delta}_2 \end{bmatrix}
\quad (2)

where (\cdot) stands for the argument list (v(t), v(t−h), i(t), i(t−h), y(t), y(t−h), t), and G is the Jacobian matrix (here and below the arguments are omitted for the sake of simplicity):

G = \begin{bmatrix}
\dfrac{\partial \mathbf{f}_1}{\partial \mathbf{v}} & \dfrac{\partial \mathbf{f}_1}{\partial \mathbf{y}} \\
\dfrac{\partial \mathbf{f}_2}{\partial \mathbf{v}} & \dfrac{\partial \mathbf{f}_2}{\partial \mathbf{y}}
\end{bmatrix}
\quad (3)

The vectors b_1(t−h), b_2(t−h) depend only on past-history values of the through, across, or internal states, and δ_1, δ_2 denote cubic and other higher-order terms. The vectors q_1, q_2 represent the quadratic term, which is given by expression (4):

\mathbf{q} = \begin{bmatrix} \mathbf{q}_1 \\ \mathbf{q}_2 \end{bmatrix} =
\begin{bmatrix}
[\mathbf{x}(t) - \mathbf{x}(t-h)]^T Q_1 [\mathbf{x}(t) - \mathbf{x}(t-h)] \\
[\mathbf{x}(t) - \mathbf{x}(t-h)]^T Q_2 [\mathbf{x}(t) - \mathbf{x}(t-h)] \\
\vdots \\
[\mathbf{x}(t) - \mathbf{x}(t-h)]^T Q_n [\mathbf{x}(t) - \mathbf{x}(t-h)]
\end{bmatrix}
\quad (4)


where the matrices Q_i have the following form:

Q_i = \frac{\partial}{\partial \mathbf{x}} \left( \frac{\partial f_i}{\partial \mathbf{x}} \right)^T, \quad i = 1, 2, \ldots, n
\quad (5)

In (4) and (5) the vectors f and x are given by the following equations:

\mathbf{f} = \begin{bmatrix} \mathbf{f}_1^T & \vdots & \mathbf{f}_2^T \end{bmatrix}^T, \qquad
\mathbf{x} = \begin{bmatrix} \mathbf{v}^T & \vdots & \mathbf{y}^T \end{bmatrix}^T.
\quad (6)

Form (2) is general in the sense that it represents many different implicit methods. For example, the trapezoidal method can be cast into this form by neglecting the quadratic terms q_1, q_2 and other higher-order terms, and performing the integration with the trapezoidal rule. The Algebraic Companion Form method can also be cast in the form (2) (q_1, q_2 in this case are nonzero). Use of the quadratic terms improves the convergence rate and makes the numerical process more robust and stable [3]. Note that if the device equations (1) are linear, the Jacobian matrix G is constant and the higher-order terms are equal to zero. However, if the system contains one or more nonlinear devices, the simultaneous solution of the system equations can only be found through iterative techniques such as Newton's method. That is why it is imperative to compute all of the necessary derivatives as efficiently as possible.
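As a concrete illustration of the linear special case of form (2), the following sketch (not taken from the paper; all names and the sign convention of the past-history term are illustrative) discretizes a linear inductor L di/dt = v with the trapezoidal rule, for which the quadratic and higher-order terms vanish and the model reduces to a scalar companion conductance plus a past-history term:

```python
# Illustrative sketch: trapezoidal-rule companion form of a linear
# inductor, L di/dt = v.  The trapezoidal update
#   i(t) = i(t-h) + (h/2L) * (v(t) + v(t-h))
# is rearranged as i(t) = G*v(t) + b(t-h), with G = h/(2L) and b
# depending only on past-history values, as in equation (2).

def inductor_companion(L, h):
    """Return the companion conductance G and a past-history function."""
    G = h / (2.0 * L)

    def past_history(v_prev, i_prev):
        # b(t-h): collects only values from the previous time step
        return i_prev + G * v_prev

    return G, past_history

# One step with L = 1 mH, h = 10 us, constant 1 V applied from rest:
G, hist = inductor_companion(L=1e-3, h=10e-6)
i_prev, v_prev = 0.0, 1.0
i_now = G * 1.0 + hist(v_prev, i_prev)   # i(t) = G v(t) + b(t-h)
```

Since G is constant here, the network solver can factor the system matrix once and reuse it at every step, which is exactly the property the paper exploits for linear devices.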

3. Network Solution

The network solution is obtained by application of energy conservation laws at each node of the system. This procedure results in the set of equations (7), to which the internal device equations are appended:

\sum_k A^k \mathbf{i}^k(t) = \mathbf{I}_{inj}
\quad (7)

where I_inj is a vector of nodal current injections and A^k is the component incidence matrix with:


A^k_{ij} = \begin{cases} 1, & \text{if terminal } j \text{ of component } k \text{ is connected to node } i \\ 0, & \text{otherwise} \end{cases}

and i^k(t) are the terminal currents of component k. The terminal across variables v^k(t) of component k are related to the nodal vector of across variables v(t) by:

\mathbf{v}^k(t) = (A^k)^T \mathbf{v}(t).
\quad (8)

Upon substitution of the device equations (2), the set of equations (7) becomes a set of quadratic equations. These equations are solved using Newton's method. Specifically, the solution is given by the following expression:

\begin{bmatrix} \mathbf{v}(t) \\ \mathbf{y}(t) \end{bmatrix} =
\begin{bmatrix} \mathbf{v}^0(t) \\ \mathbf{y}^0(t) \end{bmatrix} -
\left(
\begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}
+ \frac{1}{2} \frac{\partial \mathbf{q}}{\partial \mathbf{x}}
\right)^{-1}
\begin{bmatrix} \mathbf{m}_1^0 \\ \mathbf{m}_2^0 \end{bmatrix}
\quad (9)

where v^0(t), y^0(t) are the values of the state variables at the previous iteration, and m_1^0, m_2^0 represent the mismatch of the system equations at the previous iteration.

Note that at each time step, the quadratic device model approximates the nonlinear device equations. For this reason, the above procedure utilizes an iterative algorithm, which is applied at each time step.

The next section describes an algorithm for the generation of ACF models and the automatic differentiation technique used for efficient computation of the derivatives.
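To make the per-time-step iteration concrete, the following sketch (not the paper's code) shows the Newton update of equation (9) reduced to a single nodal unknown, with a hypothetical nonlinear conductance i = v + 0.1 v^3 driven by a 1 A nodal current injection; all values are illustrative.

```python
# Scalar sketch of the Newton iteration applied at each time step:
# x <- x - J(x)^{-1} m(x), cf. equation (9), where m is the mismatch
# of the system equations and J is the Jacobian.

def newton_solve(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Iterate the scalar Newton update until the mismatch is small."""
    x = x0
    for _ in range(max_iter):
        m = residual(x)              # mismatch at the current iterate
        if abs(m) < tol:
            break
        x = x - m / jacobian(x)      # Newton update
    return x

g  = lambda v: v + 0.1 * v**3        # hypothetical device equation i = g(v)
dg = lambda v: 1.0 + 0.3 * v**2      # its Jacobian dI/dV
I_inj = 1.0                          # nodal current injection

v = newton_solve(lambda v: g(v) - I_inj, dg, x0=0.0)
```

In the full solver the scalars become the block matrices of equation (9), and the Jacobian entries are exactly what the automatic differentiation technique of section 4 supplies.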


4. Algorithm for Generation of ACF Equations

The users of simulation tools often wish to create their own models, and the VTB software environment, among its other unique capabilities, was intended to provide such a facility. In a number of software packages, new models can be built by means of block diagrams from a catalogue of elementary blocks such as gains, integrators, time constants, dead bands, relays, etc. This approach can be inconvenient, since a relatively simple model may consist of a significant number of blocks, which wastes space in a schematic view. Since a user may wish to have several user-defined models in one schematic, the overall circuit may become hard to manage and understand, and creating such models becomes a tedious, time-consuming process. Other simulation packages offer input of models in the form of differential equations only. This approach is also inconvenient, since real-life physical systems are often very complex, of high order, and described by a large number of coupled nonlinear differential equations; in this case, developing and entering the equations of the overall system is an abstract, time-consuming, and error-prone process.

The VTB simulation environment uses a mixed approach for model input. The commonly used internal VTB models are represented in the software as blocks: for example, a PWM switch, PID controller, induction motor, or generator is represented by a corresponding block or icon, and for each internal model the user can adjust the necessary parameters. The user-defined models are also represented as blocks, so that the user can connect them to other models (internal or translated). However, in contrast to fixed-structure blocks (such as gains, relays, integrators, etc.), the VTB user-defined model blocks have a changeable structure: the user enters the equations of the device into the block in the form given by equation (1). This approach gives the user more flexibility and is the most convenient way to rapidly develop native, efficient VTB models.


4.1. Methods for Calculating Derivatives of a Function

In section 2 of this paper, it was mentioned that efficient computation of the Jacobian matrices is necessary for the development of the symbolically assisted computational procedure for generation of the ACF models. There are several techniques available for obtaining the derivatives:

• Manual differentiation (differentiation by hand),
• Symbolic differentiation,
• Numerical differentiation,
• Automatic differentiation.

Manual differentiation was used for the creation of the internal VTB models. With this technique, the most efficient and most sophisticated models can be obtained. However, this method requires a large intellectual effort and hence is very time consuming and error prone. Moreover, the model developer must know the implementation language.

Symbolic differentiation is commonly associated with the use of symbolic manipulation software such as Maple, Macsyma, Reduce, etc. This method can be useful for the development of very large and complex models, and it can reduce the time required for model development and the errors associated with mathematical manipulations. However, this method still requires further human effort, and knowledge of a computer language, for model implementation.

Numerical differentiation was used for the creation of translated models. This technique approximates the derivatives of a function either by a one-sided, first-order accurate difference:

\frac{\partial F}{\partial x_i}\bigg|_{\mathbf{x}=\mathbf{x}_0} \approx \frac{F(\mathbf{x}_0 \pm h\mathbf{e}_i) - F(\mathbf{x}_0)}{\pm h},
\quad (10)

or by a central, second-order accurate difference:

\frac{\partial F}{\partial x_i}\bigg|_{\mathbf{x}=\mathbf{x}_0} \approx \frac{F(\mathbf{x}_0 + h\mathbf{e}_i) - F(\mathbf{x}_0 - h\mathbf{e}_i)}{2h}.
\quad (11)

In (10) and (11), h denotes the perturbation step size and e_i is the i-th unit basis vector. It is well known that numerical differentiation is computationally very intensive, and its accuracy is hard to assess since the method is step-size dependent.
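The difference in truncation error between (10) and (11) is easy to observe numerically. The sketch below (illustrative, not from the paper) compares both formulas for F(x) = exp(x) at x_0 = 0, where the exact derivative is 1:

```python
# Compare the one-sided difference (10), with O(h) error, against the
# central difference (11), with O(h^2) error, on F(x) = exp(x) at x0 = 0.

import math

def one_sided(F, x0, h):
    return (F(x0 + h) - F(x0)) / h          # first-order accurate

def central(F, x0, h):
    return (F(x0 + h) - F(x0 - h)) / (2*h)  # second-order accurate

F, x0, h = math.exp, 0.0, 1e-4
err_one     = abs(one_sided(F, x0, h) - 1.0)   # roughly h/2
err_central = abs(central(F, x0, h) - 1.0)     # roughly h^2/6
```

Even with a favorable step size the one-sided error is orders of magnitude larger, and choosing h poorly (too large, or so small that round-off dominates) degrades both, which is the step-size dependence noted above.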

Automatic differentiation was used for the creation of user-defined models. This technique is based on the fact that exact derivatives of a function can be obtained by applying the chain rule, well known from differential calculus. The accuracy of this method is independent of the step size, in contrast with numerical differentiation. ADT is very efficient, and knowledge of a computer language is not required for model implementation.

4.2. Automatic Differentiation Technique

All of these features of the automatic differentiation technique make the method very attractive for use in the VTB environment for the generation of the ACF models. It is important to point out that ADT relies on the fact that any function, no matter how complicated, can be considered as a composition of elementary functions and elementary operations (fig. 2). This implies that application of the chain rule will ensure that the derivatives of the function can be readily determined.

[Figure 2. Example of the decomposition of f(x) = (x_1 + sin(x_2))/(x_3 − √x_4) − 2x_1 into elementary functions and elementary operations.]

The automatic differentiation algorithm consists of two phases: forward and reverse. The forward phase involves the calculation of the derivative values of the elementary functions. The reverse phase concludes the computation of the derivatives of the given function using data obtained at the forward phase.

4.2.1. Forward Phase

The forward phase consists of introducing intermediate variables and obtaining the derivatives of the elementary functions.

Let us consider a function f : ℜ^n → ℜ which depends on n variables x_1, x_2, ..., x_n, called independent variables. The value of the function is obtained through the use of intermediate variables x_{n+1}, x_{n+2}, ..., x_N, so that f(x) = x_N. Each variable is calculated from an elementary function g_i (see fig. 2). For example, with the simple instruction x_i = x_{i−1} + x_{i−5} we associate the function g_i(x_{i−1}, x_{i−5}) = x_{i−1} + x_{i−5}. These functions depend only on independent and intermediate variables already processed. Let P_i denote the subset of such variables, i.e. P_i ⊆ {x_j : j ∈ [1, i−1]}.

Consider the function of fig. 2 as an example:

f(\mathbf{x}) = \frac{x_1 + \sin(x_2)}{x_3 - \sqrt{x_4}} - 2x_1
\quad (12)

It is possible to construct seven intermediate variables for function (12), as shown in Table 1.


Table 1. Intermediate variables, elementary functions, and their derivatives

Intermediate variable | Elementary function     | Derivatives of elementary function
x_5 = sin(x_2)        | x_5 = g_5(x_2)          | g_{5,2}(x_2) = cos(x_2)
x_6 = √x_4            | x_6 = g_6(x_4)          | g_{6,4}(x_4) = 1/(2√x_4)
x_7 = 2x_1            | x_7 = g_7(x_1)          | g_{7,1}(x_1) = 2
x_8 = x_1 + x_5       | x_8 = g_8(x_1, x_5)     | g_{8,1} = 1, g_{8,5} = 1
x_9 = x_3 − x_6       | x_9 = g_9(x_3, x_6)     | g_{9,3} = 1, g_{9,6} = −1
x_10 = x_8 / x_9      | x_10 = g_10(x_8, x_9)   | g_{10,8}(x_9) = 1/x_9, g_{10,9}(x_8, x_9) = −x_8 x_9^{−2}
x_11 = x_10 − x_7     | x_11 = g_11(x_10, x_7)  | g_{11,10} = 1, g_{11,7} = −1
f(x) = x_11           |                         |

During this process, the derivatives of each of the elementary functions can be determined and stored in an appropriate array. The third column of Table 1 contains the derivatives of the elementary functions, which are denoted g_{i,j}(x_{P_i}):

g_{i,j}(\mathbf{x}_{P_i}) = \frac{\partial g_i}{\partial x_j}, \quad i \in [n+1, N], \quad x_j \in P_i.

The forward phase algorithm is presented in fig. 3.

[Figure 3. Forward phase algorithm: beginning from f(x_1, ..., x_n), construct the intermediate variables x_i = g_i(x_{P_i}), with P_i a subset of [1, i−1]; compute the derivatives g_{i,j}(x_{P_i}) = ∂g_i/∂x_j for x_j ∈ P_i; increment i until i > N, then proceed to the reverse phase.]

As soon as all of the intermediate variables are constructed and all of the derivatives of the elementary functions are obtained, the reverse phase of the algorithm follows.
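The forward phase for the example function can be sketched as follows (an illustrative Python transcription of Table 1, not the paper's implementation): each intermediate value x_i is computed and the local derivatives g_{i,j} are recorded in a table keyed by (i, j).

```python
# Forward phase sketch for f(x) = (x1 + sin(x2))/(x3 - sqrt(x4)) - 2*x1,
# following the variable numbering of Table 1.  The trace stores every
# intermediate value x_i and every local derivative d[i, j] = g_{i,j}.

import math

def forward_phase(x1, x2, x3, x4):
    x = {1: x1, 2: x2, 3: x3, 4: x4}
    d = {}                                    # d[i, j] = g_{i,j}
    x[5] = math.sin(x[2]);   d[5, 2] = math.cos(x[2])
    x[6] = math.sqrt(x[4]);  d[6, 4] = 1.0 / (2.0 * math.sqrt(x[4]))
    x[7] = 2.0 * x[1];       d[7, 1] = 2.0
    x[8] = x[1] + x[5];      d[8, 1] = 1.0;  d[8, 5] = 1.0
    x[9] = x[3] - x[6];      d[9, 3] = 1.0;  d[9, 6] = -1.0
    x[10] = x[8] / x[9];     d[10, 8] = 1.0 / x[9]
    d[10, 9] = -x[8] / x[9] ** 2
    x[11] = x[10] - x[7];    d[11, 10] = 1.0; d[11, 7] = -1.0
    return x, d                               # x[11] is f(x)
```

At the point (x_1, x_2, x_3, x_4) = (1, 0, 4, 1) this trace gives f(x) = x_11 = 1/3 − 2, and the stored d[i, j] values are exactly the third column of Table 1 evaluated at that point.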


4.2.2. Reverse Phase

At this point, it is assumed that the elementary functions are derived and that the corresponding derivatives g_{i,j} are found. Let us now visualize the construction of the intermediate variables of the section 4.2.1 example as a graph (fig. 4). Each node of the graph represents a variable (independent or intermediate), and an arc of the graph connects node x_j to node x_i (j < i) if the elementary function g_i depends on the variable x_j.

[Figure 4. Graph of the independent variables (1-4) and intermediate variables (5-11) for the example function.]

It can be shown [5] that the derivative of the function f(x) with respect to the variable x_k is obtained by adding, over all paths C from x_k to x_N, the products of the g_{i,j} along each path, i.e.

\frac{\partial f(\mathbf{x})}{\partial x_k} = \sum_{\substack{\text{path } C \\ \text{from } x_k \text{ to } x_N}} \;\; \prod_{\text{arc}(i,j) \in C} g_{i,j}.

To save time on repetitive calculations, the additional variables

p_k = \sum_{\substack{\text{path } C \\ \text{from } x_k \text{ to } x_N}} \;\; \prod_{\text{arc}(i,j) \in C} g_{i,j}, \quad 1 \le k \le N-1, \qquad p_N = 1

are introduced, and the derivatives are then computed using the following equations:

p_N = 1,
\qquad
p_k = \sum_{j=k+1}^{N} g_{j,k}\, p_j, \quad n+1 \le k \le N-1,
\qquad
\frac{\partial f}{\partial x_k} = \sum_{j=n+1}^{N} g_{j,k}\, p_j, \quad 1 \le k \le n.

The reverse phase algorithm is shown in fig. 5. For the example above (section 4.2.1), the reverse phase algorithm is demonstrated in Table 2.

[Figure 5. Reverse phase algorithm: initialize p_i = 0 for all i ∈ [1, N−1] and p_N = 1; for k = N down to n+1, compute p_j = p_j + g_{k,j} p_k for every j ∈ P_k; finally, set ∂f(x_1, ..., x_n)/∂x_i = p_i for all i ∈ [1, n].]

It can be seen that the algorithm introduces a number of redundant operations, such as multiplication by 1 and summation with zero. Since the sensitivity matrices of the nonlinear devices are intended to be used many times during the simulation, it was imperative for the derivative computations to be as efficient as possible. A substantial improvement to the presented technique was therefore further optimization of the derivatives, which consists of searching for such redundancies and eliminating them.

Table 2. Reverse phase for the example function

Accumulation step            | Result
p_11 = 1                     | p_11 = 1
p_10 = p_10 + g_{11,10} p_11 | p_10 = 1
p_7 = p_7 + g_{11,7} p_11    | p_7 = −1
p_9 = p_9 + g_{10,9} p_10    | p_9 = −x_8 x_9^{−2}
p_8 = p_8 + g_{10,8} p_10    | p_8 = 1/x_9
p_6 = p_6 + g_{9,6} p_9      | p_6 = x_8 x_9^{−2}
p_3 = p_3 + g_{9,3} p_9      | p_3 = −x_8 x_9^{−2}
p_5 = p_5 + g_{8,5} p_8      | p_5 = 1/x_9
p_1 = p_1 + g_{8,1} p_8      | p_1 = 1/x_9
p_1 = p_1 + g_{7,1} p_7      | p_1 = 1/x_9 − 2
p_4 = p_4 + g_{6,4} p_6      | p_4 = x_8 x_9^{−2} / (2√x_4)
p_2 = p_2 + g_{5,2} p_5      | p_2 = cos(x_2)/x_9

4.3. Algorithm for Generation of the ACF Models

To make simulation most efficient, the Algebraic Companion models are organized in a way that eliminates unnecessary repetitive computations. One part of the ACF model (the initialization block) is executed just once per simulation. During initialization, the internal parameters are computed, and if they contain any error, the user is notified and the simulation is stopped. The other part of the ACF model (the time-stepping block) is executed at every time step for as long as the simulation runs. The time-stepping block contains highly optimized equations of the form (2), which are obtained during initialization. Execution of the time-stepping block produces the solution of the system and concludes the simulation.

The symbolically assisted computational procedure for generation of the Algebraic Companion Form models builds the ACF model during initialization, i.e. once per simulation. Once the ACF model is built, it is executed in the time-stepping block. Such an organization allows the model to run most efficiently during the simulation, since during time stepping the Network Solver does not need to rebuild the model. The block diagram of the algorithm for generation of the ACF models is presented in fig. 6.

Note that there is a choice of device model: either the quadratic model or the conventional first-order model can be used. If the user chooses the first-order model, the vectors q_1, q_2 in equation (2) are set to zero. This option gives control over the computational process. The quadratic device model is more accurate, and therefore more precise simulation results can be obtained; thus, if the user is interested in detailed responses or high-frequency effects in the simulated system or circuit, the quadratic model can be used. Sometimes the device equations may contain "severe" nonlinearities; in that case, use of the quadratic device model may significantly improve both numerical stability and convergence rate [3]. In other cases, the first-order device model is perfectly suitable, and its use may reduce the computational work and increase the speed of the calculations. However, this statement cannot be generalized, since first-order models may require more Newton iterations to converge than quadratic models. Therefore, the choice of the most suitable model depends on the problem to be solved and on the goal of the particular simulation task.

[Figure 6. Algorithm for generation of the ACF models. Initialization block: compute the device Jacobian matrix using the automatic differentiation method, optimize, optionally compute second derivatives (if the quadratic device model is chosen), discretize the equations, and compute the past-history vector. Time-stepping block: compute the past-history vector and execute the Algebraic Companion model.]

5. Use in the VTB Environment

From the VTB user's point of view, use of the ACF model generator is quite convenient. The user enters equations of the form (1) in the editable box of the ACF model generator and specifies the number of terminals, the type of the model (linear or nonlinear), and the kind of the model (switching or continuous). If the model is linear, the Jacobian matrix is constant and the quadratic terms q_1, q_2 of equation (2) are zero, so the VTB Network Solver can use an appropriate algorithm during time stepping that does not involve Newton iterations. This leads to optimal use of the computational resources. There is also a class of devices whose models can be considered switching models. A switching model is one whose parameters or equations change depending on some logical condition. For example, the simplest model of a diode is a switching model, since the diode has high conductivity when forward biased and low conductivity when reverse biased. The symbolically assisted computational procedure (SACP) for generation of the ACF models also enables the use of switching models. This feature widens the class of models that can be simulated using the SACP.
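A minimal sketch of such a switching model, in the spirit of the diode example above (the conductance values are purely illustrative and not taken from the paper), is:

```python
# Hypothetical two-state switching model of a diode: the device equation
# i = G*v uses a conductance G selected by a logical condition on the
# bias.  The values below are illustrative placeholders.

G_ON, G_OFF = 1.0, 1e-9      # siemens: forward-biased / reverse-biased

def diode_current(v):
    """Piecewise-linear switching model: the equation changes with state."""
    return (G_ON if v > 0.0 else G_OFF) * v
```

Within each state the model is linear, so the solver can avoid Newton iterations until the logical condition flips and the companion equations are swapped.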

5.1. Numerical Example

Voltage surge arresters are widely used in power system applications to protect the load from voltage surges caused, for example, by a lightning strike. The typical topology of a power system consisting of a power supply (V1), transmission lines (XL1, XL2), nonlinear load (R1, R2, L1), and solid-state surge arrester (L2, L3, R3, R4, C1, C2, SA1, SA2) is shown in fig. 7. The lightning strike is initiated at a specified time instant and is represented by an impulse current source (I1), which is modeled by the following equation:

i(t) = \begin{cases} i_0 \left( e^{-\alpha (t - t_0)} - e^{-\beta (t - t_0)} \right), & \text{if } t - t_0 > 0 \\ 0, & \text{otherwise} \end{cases}
\quad (13)

In equation (13), the parameters i_0, α, β, t_0 denote the current amplitude, rise time, fall time, and strike initiation time respectively.
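Equation (13) is straightforward to implement; the sketch below uses parameter values chosen only for illustration (roughly an 8/20 microsecond class surge), not values from the paper:

```python
# Double-exponential lightning impulse of equation (13).  All parameter
# defaults are illustrative assumptions, not the paper's values.

import math

def impulse_current(t, i0=10e3, alpha=2e4, beta=8e5, t0=1e-3):
    """i(t) = i0*(exp(-alpha*(t-t0)) - exp(-beta*(t-t0))) for t > t0."""
    if t - t0 <= 0.0:
        return 0.0
    dt = t - t0
    return i0 * (math.exp(-alpha * dt) - math.exp(-beta * dt))
```

The current is zero before the strike at t_0, rises on the fast time scale 1/β, and decays on the slow time scale 1/α, giving the characteristic sharp-front, long-tail surge waveform.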


Figure 7. Power system (power supply, transmission lines, nonlinear load, and surge arrester)

Typical loads used in power system applications are resistive and inductive, for example, transformers, induction motors, etc. Often, these devices may exhibit nonlinear behavior due to saturation of the magnetic core materials. This effect is taken into account through the use of the nonlinear inductor (L1) in the model of the load.

The model of the solid-state surge arrester consists of two highly nonlinear elements (SA1, SA2), and parasitic inductances (L2, L3), resistances (R3, R4) and capacitances (C1, C2). The mathematical


descriptions for the nonlinear elements SA1 and SA2 as well as for the nonlinear inductor L1 are developed in corresponding sections below.

5.1.1. Time Domain Model of Nonlinear Inductor

The inductor L1 (see fig. 7) is nonlinear, i.e. the flux linkage in the inductor core saturates under the influence of a relatively large current flowing through the inductor winding. The current-flux relationship of the nonlinear inductor can be obtained empirically by performing the necessary measurements, or it can be computed using a suitable field solver [7]. The nonlinear current-flux dependence is then approximated by the polynomial function:

i(t) = f_1(\lambda(t)) = b_0 + b_1 \lambda(t) + \ldots + b_n \lambda(t)^n,
\quad (14)

where \lambda(t) = \int_0^t v(\tau)\, d\tau denotes the magnetic flux linkage. The Algebraic Companion Form for the nonlinear component can now be obtained using equation (14) in conjunction with the symbolically assisted computational procedure based on ADT. Specifically, the ACF has a quadratic form that is obtained as follows. First, we introduce the following variables:

x_1(t) = x_2(t)\, \lambda(t)
x_2(t) = x_3(t)\, \lambda(t)
\ldots
x_n(t) = \lambda(t)
\quad (15)

With the new variables, equation (14) becomes


i(t) = b_0 + b_1 x_n(t) + b_2 x_{n-1}(t) + b_3 x_{n-2}(t) + \ldots + b_n x_1(t)
0 = v(t) - \frac{d x_n(t)}{dt}
0 = x_1(t) - x_2(t)\, \lambda(t)
0 = x_2(t) - x_3(t)\, \lambda(t)
\ldots
0 = x_n(t) - \lambda(t)
\quad (16)

Integration of the above equations yields the ACF for this component in quadratic form. Note that the through variable of this device is i(t), the across variable is v(t), and the internal state variables are λ(t), x_1(t), x_2(t), ..., x_n(t).

5.1.2. Time Domain Model for Nonlinear Element of Surge Arrester

The time domain model for the nonlinear elements SA1 and SA2 can be developed by a procedure similar to that of section 5.1.1. The nonlinear element of the surge arrester basically exhibits the behavior of a highly nonlinear resistor. The voltage-current relationship can be approximated by the following equation:

i(t) = f_2(v(t)) = b_0 + b_1 v(t) + \ldots + b_n v(t)^n,
\quad (17)

where i(t) is the current flowing through the nonlinear element, v(t) denotes the voltage across the nonlinear element, and b_0, b_1, ..., b_n are parameters. The use of equation (17) along with the symbolically assisted computational procedure produces the desired ACF model of the nonlinear element. The derivation of the ACF for this model follows a procedure similar to that for the nonlinear inductor. Specifically, we introduce the following variables:

y_1(t) = y_2(t)\, v(t)
y_2(t) = y_3(t)\, v(t)
\ldots
y_n(t) = v(t)
\quad (18)


Then, equation (17) can be represented in terms of the new variables as follows:

i(t) = b_0 + b_1 y_n(t) + b_2 y_{n-1}(t) + b_3 y_{n-2}(t) + ... + b_n y_1(t)
0 = y_1(t) - y_2(t) v(t)
0 = y_2(t) - y_3(t) v(t)
...
0 = y_n(t) - v(t)    (19)

Integration of the above equations yields the ACF in quadratic form.
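The central claim of the paper, that automatic differentiation delivers the Jacobian of such quadratic residual systems exactly, can be illustrated with a minimal forward-mode sketch. The dual-number class and helper names below are a toy illustration under stated assumptions, not the VTB implementation:

```python
class Dual:
    """Forward-mode AD number: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._lift(o)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def arrester_residuals(z, b):
    """Residuals of system (19); z = [i, v, y_1, ..., y_n] as Duals."""
    i, v, y = z[0], z[1], z[2:]
    n = len(y)
    expr = b[0]
    for k in range(1, n + 1):
        expr = expr + b[k] * y[n - k]    # b_k multiplies y_{n-k+1}
    res = [i - expr]                     # 0 = i - f_2(v)
    for k in range(n - 1):
        res.append(y[k] - y[k + 1] * v)  # quadratic coupling equations
    res.append(y[n - 1] - v)             # 0 = y_n - v
    return res

def jacobian(fun, z, b):
    """One forward sweep per variable: seed dz_j = 1, read off column j."""
    m = len(z)
    cols = []
    for j in range(m):
        seeded = [Dual(z[k], 1.0 if k == j else 0.0) for k in range(m)]
        cols.append([r.dot for r in fun(seeded, b)])
    return [list(row) for row in zip(*cols)]  # rows correspond to residuals
```

One sweep per variable yields the full Jacobian without choosing a finite-difference step size; in the ADT used by the paper the derivative code is generated symbolically once, which is the source of the reported CPU-time savings.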

5.1.3. Numerical Results

The power system shown in fig. 7 has been simulated using the VTB simulation environment. To demonstrate the effectiveness of the surge arrester in protecting the load from extreme voltage surges, the simulation of the system has been performed twice. First, the voltage surge arrester was disconnected from the system. A lightning stroke occurs at a specified time instant at the sending end of the transmission line XL1. Results of the simulation for this case are presented in fig. 8. The top

Figure 8. Power system performance without surge arrester


waveform shows the voltage across the load (L1, R1, R2). As can be seen, the load experiences a large voltage surge for a short period of time, which can damage the load if it is unprotected. The waveform in the middle presents the voltage at the sending end of the transmission line XL1, and the bottom waveform depicts the voltage at the receiving end of the line XL1.

Figure 9 shows the results of the simulation for the system whose load is protected by the surge arrester.

Figure 9. Power system performance with surge arrester

The top waveform shows the voltage across the load. It can be seen that in this case the voltage across the load did not reach levels that are extremely high and dangerous for the load. The high-frequency effects immediately following the lightning strike are due to signal reflections inside the transmission line. The waveforms in the middle and at the bottom of fig. 9 show the voltages at the sending and receiving ends of the transmission line XL1, respectively. The top and bottom waveforms of fig. 8 and fig. 9 are also distorted due to the nonlinear nature of the load.


6. Conclusions

An effective symbolically assisted computational procedure (SACP), based on the Automatic Differentiation Technique, for the generation of native Algebraic Companion Form models was presented.

This approach to ACF model generation performs very well in terms of CPU time.

Specifically, it requires considerably less time for the computation of Jacobian matrices compared to numerical differentiation techniques. The developed SACP allows users to define their own models in the VTB simulation environment without programming in a computer language (Visual C++). The generated models are optimal in the sense of the number of elementary computer instructions to be executed. To demonstrate the use of the SACP, time domain models for the nonlinear inductor and the surge arrester were developed. The simulation of a practical power system demonstrates the effectiveness of employing surge arresters in power system applications for load protection. It is shown that the effects of nonlinearities of the power system components can significantly affect the quality of the electrical power.

7. Acknowledgments The work reported in this paper has been supported by the ONR Grant No. N00014-96-1-0926. This support is gratefully acknowledged.

References [1]

Charles W. Brice, Levent U. Gökdere, Roger A. Dougal, "The Virtual Test Bed: An Environment for Virtual Prototyping," Proceedings of the International Conference on Electric Ship, pp. 27-31, Istanbul, Turkey, September.


[2]

A. P. Sakis Meliopoulos, G. J. Cokkinides, "A Time Domain Model for Flicker Analysis," Proceedings of the International Conference on Power System Transients, pp. 365-368, Seattle, WA, Jun. 22-26, 1997.

[3]

E. V. Solodovnik, A. P. Sakis Meliopoulos, G. J. Cokkinides, “On Stability of Implicit Numerical Methods in Nonlinear Dynamical Systems Simulation, ” Proceedings of the Thirtieth IEEE Southeastern Symposium on System Theory, pp. 27-31, Morgantown, WV, Mar. 8-10, 1998.

[4]

C. William Gear, Numerical Initial Value Problems in Ordinary Differential Equations, Englewood Cliffs, NJ, Prentice-Hall, 1971.

[5]

M. Jerosolimski, L. Levacher, “A New Method For Fast Calculation of Jacobian Matrices: Automatic Differentiation For Power System Simulation,” IEEE Transactions on Power Systems, Vol. 9, No. 2, pp. 700-706, May 1994.

[6]

B. M. Averick, J. J. Moré, C. H. Bischof, A. Carle, A. Griewank, "Computing Large Sparse Jacobian Matrices Using Automatic Differentiation," Argonne National Laboratory, Mathematics and Computer Science Division, Preprint MCS-P10-1088, January 1993.

[7]

G. Cokkinides and B. Beker, “Modeling the Effects of Non- Linear Materials in De-Coupling Components of Digital Systems”, Proc. 8th Topical Meeting on Electrical Performance of Electronic Packaging, Oct. 1999.
