
Computers & Structures Vol. 8, pp. 357-363. © Pergamon Press Ltd., 1978. Printed in Great Britain


AN APPROXIMATE METHOD FOR STRUCTURAL OPTIMISATION

A. S. L. CHAN and E. TURLEA

Imperial College of Science and Technology, London, England

(Received November 1977; received for publication 4 January 1978)

Abstract-The nonlinear structural optimisation problem is solved by successive approximations in which the objective function and the constraint functions are approximated as single term posynomials about a given design point at each step, and transformed into a linear programming problem in the logarithmic space. Each feasible basic solution is constructed with the aid of pseudo constraint limits, which are obtained either from specified move limits around the design point, or from the activity level of the constraints at the previous approximated solution, making some of the inactive constraints artificially active. The method is demonstrated by some examples.

1. INTRODUCTION

The problem of finding an optimum structural design has been approached in many different ways. Professor Argyris employed a stress ratioing technique to obtain the optimum design of a Concorde type aircraft over 10 years ago [1]. The optimality criteria method [2], utilising simple iterative procedures for sizing the structural members, is very efficient when the nature of the constraints is convenient for this form of treatment. When this cannot be done simply, the method of mathematical programming offers the most promising alternative. As structural optimisation problems are essentially nonlinear, various types of technique, such as the method of feasible directions, steepest descent, penalty functions, sequence of linear programming (SLP) approximation, and geometric programming, have all been employed [3-5]. These methods are in general satisfactory for small problems, although some might encounter convergence difficulties near the optimum, but are usually very demanding on computational effort for large problems. Some recent researchers therefore aim at finding more economical ways of computing improved redesigns. For example, Ref. [6] uses a combination of the optimality criteria method and the unconstrained minimisation technique iteratively for seeking redesigns of lower weight when both the structural member sizes and the geometric configuration parameters are the design variables. And Ref. [7] (see also [8]) took advantage of a geometric programming dual formulation to generate a lower bound for the cost function (structural weight) as well as to obtain the activity levels of the constraints and hence the values of the design variables at the optimum. Since, in the geometric programming formulation, the constraints can only be approximated, an iterative application of the procedure is usually necessary.

The present method is a development of the method of Ref. [7], but the geometric programming formulation is transformed and solved as a linear programming problem, thus achieving a more systematic and even less expensive solution computationally. To be specific, the merit function and the constraints are expanded approximately around a given design (or a so-called "operating point" in the design space) as single term positive polynomials (or "posynomials"), changing the minimisation problem into a special class of geometric programming problem. This primal problem and its geometric programming dual can both be transformed into similarly related linear programming problems by a change of variables. The solution of this problem in the transformed space gives a set of "optimum" design variables together with a bound. However, remembering that this is only the optimum solution of the approximation problem associated with the initial design, the whole process has to be iterated for a true (local) optimum to be found. The method is therefore reduced to a sequence of linear programming procedures, differing from the orthodox SLP method only in the nature of the constraints. Whereas, in the latter, the constraints are approximated by the linear terms of the Taylor series expansion, they remain essentially nonlinear in the present case. Only in the transformed space does the problem become linear. Nevertheless, because the optimum solution will correspond to a vertex in the transformed space, it will also be at a vertex of the approximated constraints, and the difficulties associated with the SLP method will also occur here. They can be avoided, however, by methods similar to those employed for the SLP technique, namely, by imposing move limits and restricting the step sizes. These pseudo limits in effect ensure the supply of a sufficient number of active constraints, real or artificial, to create an extreme point solution for the approximate linear program. Each step then becomes a sub-problem within the original feasible space, and the (local) optimum of the exact problem, whether it corresponds to a vertex or not, can be approached successively by this artificial device.

The structure to be optimised is represented by a finite element idealisation.
For preliminary development of the method it is assumed that each element is governed by one design variable only, and that the stiffness matrix of the element is linearly proportional to this variable, making the re-analysis of an updated design particularly simple. For the time being, therefore, attention is confined to problems of optimum sizing of members for structures of fixed geometry, or optimum geometry for framework structures of fixed member sizes. The constraints considered so far have been limited to constraints on stresses, minimum member sizes and displacements. However, the formulation is completely general and may be applied to more complicated problems: e.g. each element may be governed by more than one variable, and other types of constraints may also be considered, although the calculations will naturally be less straightforward. The computer program solves the dual linear programming sub-problems by the simplex method, giving the values of the design variables of the primal problem at the same time.

2. FORMULATION OF THE PROBLEM

The structural optimisation problem may be stated as follows:

minimise f₀(x)    (1a)

subject to fᵢ(x) ≤ 1,  i = 1, ..., m    (1b)

x = {x₁, ..., xₙ} ≥ 0    (2)

where f₀ is a nonlinear cost function of the design variables x, typically the total weight of the idealised structural members, and fᵢ is the function of x of the ith constraint, generally also nonlinear. These usually consist of stress constraints

−σₗ ≤ σ ≤ σᵤ    (3a)

displacement constraints

u ≤ c    (3b)

and minimum size constraints

x ≥ x̲    (3c)

They can all be written into the form of (1b). Other types of constraints can be similarly represented. Equality constraints rarely occur in structural optimisation problems and may be handled as active inequality constraints if present (i.e. when a constraint in (1b) takes an equal sign).

It is well established that the dual problem of (1) is the maximisation of the associated Lagrangian function, and can be stated as

maximise L(x, λ) = f₀(x) + Σᵢ₌₁ᵐ λᵢ(fᵢ(x) − 1)    (4a)

subject to λᵢ ≥ 0,  ∂L/∂xⱼ = 0,  i = 1, ..., m;  j = 1, ..., n    (4b)

Then if x* solves the primal problem, the solution of the dual problem gives the Lagrange multipliers λ* such that

f₀(x*) = L(x*, λ*).    (5)

Thus at the optimum, either λᵢ* = 0 or fᵢ(x*) = 1, respectively signifying that the ith constraint is inactive or active in the primal problem. For any other feasible sets of x and λ (satisfying the constraints in (1b) and (4b) respectively),

f₀(x) ≥ f₀(x*) = L(x*, λ*) ≥ L(x(λ), λ).    (6)

These are true if the function f₀ is convex and the functions fᵢ are concave.

The present method starts from a given feasible design x̄ and approximates each of the functions fᵢ by a single term posynomial of the form

gᵢ(x) = Fᵢ ∏ⱼ₌₁ⁿ (xⱼ/x̄ⱼ)^aᵢⱼ,  i = 0, ..., m    (7a)

where

aᵢⱼ = (x̄ⱼ/Fᵢ)(∂fᵢ/∂xⱼ)|ₓ₌x̄,  Fᵢ = fᵢ(x̄).    (7b)

It can be seen that the approximate function gᵢ and its gradients (∂gᵢ/∂xⱼ) retain the values of fᵢ and (∂fᵢ/∂xⱼ) at the operating point x̄. Note that if fᵢ is already a single term posynomial, then gᵢ = fᵢ. This approximation is the result of expanding ln fᵢ as a power series in terms of the variables ln(xⱼ/x̄ⱼ) and neglecting all but the linear terms. It is tentatively suggested here that this ought to be somewhat better than the two term Taylor series expansion around the operating point usually adopted for the SLP approximation, since the range of validity over positive values of x should be wider, as the expansion is now linear in ln xⱼ rather than xⱼ.

The original problem stated in (1) is now approximated by

minimise g₀(x)    (8a)

subject to gᵢ(x) ≤ 1,  i = 1, ..., m    (8b)

with gᵢ given in the form of eqn (7). This is a typical geometric programming problem [5, 7-9]; its solution is usually obtained by solving the dual problem, generated as in (4). However, the approximate problem in (8) can be transformed into a linear programming problem by considering the logarithm of the functions in (7a) and changing the variables from xⱼ to zⱼ = ln xⱼ. Then, instead of (8), the minimisation problem may be written in the form

minimise G₀(z) = ln g₀ = c₀ + Σⱼ₌₁ⁿ cⱼzⱼ    (9a)

subject to Gᵢ(z) = bᵢ + Σⱼ₌₁ⁿ aᵢⱼzⱼ ≤ 0,  i = 1, ..., m    (9b)

where

z = {z₁, ..., zₙ} = {ln x₁, ..., ln xₙ}.    (10)

Or, in matrix form,

minimise G₀(z) = c₀ + cᵀz

subject to Az + b ≤ 0.    (11)

A is an m × n matrix of the coefficients aᵢⱼ defined in eqn (7). The coefficients in b are calculated by taking logarithms of the constants in (7):

b = −Az̄ + ln F    (12a)

where

z̄ = {ln x̄₁, ..., ln x̄ₙ},  ln F = {ln F₁, ..., ln Fₘ}    (12b)

and

cᵀ = [a₀₁, ..., a₀ₙ],  c₀ = ln F₀ − cᵀz̄.    (13)

Usually, the number of constraints m is greater than the number of variables n if stress constraints are considered. The dual linear programming problem of (11), formulated in accordance with (4), is

maximise bᵀy + c₀

subject to Aᵀy + c = 0,  y ≥ 0.    (14)
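As an illustration (not part of the original paper), the condensation of eqn (7) is easy to sketch in Python for a hypothetical constraint function f; the code builds g from f and the operating point x̄ and confirms that g matches f in value and gradient there:

```python
import math

def condense(f, grad_f, xbar):
    # Single term posynomial of eqn (7): g(x) = F * prod_j (x_j/xbar_j)**a_j,
    # with F = f(xbar) and a_j = xbar_j * (df/dx_j)(xbar) / F.
    F = f(xbar)
    g_bar = grad_f(xbar)
    a = [xbar[j] * g_bar[j] / F for j in range(len(xbar))]
    def g(x):
        val = F
        for j in range(len(x)):
            val *= (x[j] / xbar[j]) ** a[j]
        return val
    return g, a

# hypothetical nonlinear constraint function (illustration only)
f = lambda x: (x[0]**2 + x[1]) / 10.0
grad_f = lambda x: [x[0] / 5.0, 0.1]

xbar = [2.0, 3.0]
g, a = condense(f, grad_f, xbar)

print(g(xbar), f(xbar))           # identical at the operating point
h = 1e-6                          # central-difference check of one gradient
dg1 = (g([xbar[0] + h, xbar[1]]) - g([xbar[0] - h, xbar[1]])) / (2 * h)
print(abs(dg1 - grad_f(xbar)[0]) < 1e-6)
```

Away from x̄ the two functions differ; g is exact only to first order about the operating point, which is all the linearised subproblem requires.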

The dual variables y in fact are the Lagrange multipliers of (4). It has been shown that this is also the transformation of the dual geometric programming problem of (8) (Appendix D, Ref. [9]). Solution of the optimisation problem in either form (11) or (14) produces a new design x* with an improved cost function (typically a lower total weight) according to (6), provided that the initial design x̄ is feasible. Indeed, any result contradicting (6) can only be obtained from an unfeasible original design x̄.

3. CONSIDERATIONS ON THE METHOD OF SOLUTION

The optimum solution of the problem stated in (8), in the form of either (11) or (14), will of course be only an approximation of that of the exact problem (1). On the face of it, the improved design thus given may be taken as a new starting point and a new approximate problem created and solved. Provided each successive problem gives a feasible solution, not only of the approximate problem but also in the original feasible space, the process should converge to an optimum since the merit function is reduced at every step.


Several difficulties immediately arise from such a simplistic view. One is that even if a feasible solution can be obtained for the approximate problem, it is by no means certain that it will be a feasible one in the original design space. This could be resolved by a simple upscaling of the member sizes. It is also possible to accept an unfeasible operating point for the next approximation; the calculation may still arrive back at a sensible result, although a larger number of iterations may be envisaged.

A more serious obstacle to this procedure is that the linear programming (and the associated particular form of geometric programming) solution is always at an extreme point of the (in this case, approximate) feasible region. That is, the solution for n variables requires n constraints to be active, whereas for the actual nonlinear problem this may not be the case at all. It is therefore entirely possible, in solving the dual problem (14), that a basis cannot be found which would give a feasible set of dual variables (Lagrange multipliers) y. This difficulty can be overcome by changing the constraint limits artificially to ensure that there will be a sufficient number of active constraints to define a feasible solution point. Provided that the artificial limits are such that the reduced feasible sub-space is always within the original feasible space, each successive solution will again be an improvement on the previous one.

It is in fact possible to implement this scheme in two different ways. Assuming that it is possible to obtain a basic feasible solution for (14) as a first approximation to the actual problem (1), the activity level of the non-active constraints at this point (identified either by the magnitude itself or by the associated non-positive yᵢ at the next attempted solution) may be used as the constraint limits to make them active artificially for a second approximate solution. And so on until the optimum is reached.
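The simple upscaling mentioned above can be sketched as follows (an illustration, not the paper's program). It relies on each constraint ratio fᵢ varying as 1/s when every member size is multiplied by s, which holds when the element stiffness is linear in the sizes, as assumed in the Introduction. For concreteness, the stress ratios of the three bar truss example of Section 4 are used:

```python
import math

P, sig_max = 88.96, 13.79  # load (kN) and stress limit (kN/cm^2), Fig. 1 data

# constraint ratios f_i(x) <= 1 for the reduced two-variable truss problem
f1 = lambda x: P * (math.sqrt(2)*x[0] + x[1]) / ((math.sqrt(2)*x[0]**2 + 2*x[0]*x[1]) * sig_max)
f2 = lambda x: P / ((x[0] + math.sqrt(2)*x[1]) * sig_max)

def upscale(x, constraints):
    # multiply all member sizes by the worst constraint ratio (if > 1);
    # by homogeneity this makes the worst constraint exactly active
    s = max(1.0, max(f(x) for f in constraints))
    return [s * xj for xj in x]

x = upscale([4.0, 2.0], [f1, f2])   # (4, 2) violates the first constraint
print(round(max(f1(x), f2(x)), 9))  # → 1.0
```

The rescaled design is the cheapest feasible point on the ray through the unfeasible one, a natural restart for the next approximation.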
Thus an extreme point solution is artificially created for the nonlinear exact problem. For each update, only the right hand side of the primal problem needs changing. The basis and the dual variables remain unchanged as long as the operating point remains the same. They change only if the operating point has to move to a better position.

The second method is to calculate the activity level of all constrained functions of a given design and impose limits on their changes within the feasible region, thus effectively creating artificially a feasible subspace for each step. The linear programming routine seeks out the best solution point in this sub-region, which is then used as the new operating point for the next approximation. In this way, the (local) optimum is approached in a series of cautious steps, as the change limits will have to be made progressively smaller if the exact optimum point is not to be missed by the constraints artificially made active. This procedure has much in common with the move limit method often used with the SLP technique of optimisation, and is in fact the one adopted in the computer program for the following calculations. However, the basis changes with every iteration by this method.

The computer program solves the problem in the dual form (14), since the solution involves a basis of the order n instead of m (> n) for the primal. However, from the optimum basis, the primal variables at the optimum can also be found. Indeed, if the simplex or revised simplex method is used, their values are directly obtainable from the final tableau. Furthermore, not all the constraints need to be taken into account in the linear programming search. Only those (actual or artificial) which are active or nearly active have to be considered. For instance, if

A. S. L. CHAN and E. TURLEA

360

the stress level of an element is at or near one of the (actual or artificial) limits in (3a), it is obvious that the other constraint of the pair can be ignored and the size of the problem reduced.

It should be remembered that, by virtue of the one-to-one correspondence between the actual space and the transformed logarithmic space, the solution of (11) is also the solution of (8) and vice versa. Working with the linear form is of course much more convenient, making the nature of the solution also much clearer.
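As a concrete check of the transformation (11)-(14) before the examples, the linear programming data can be assembled directly from condensed values. The numbers below are taken (with slight re-rounding) from the three bar truss example of Section 4, condensed at x̄₁ = x̄₂ = 6.45, and reproduce the constants of its dual objective:

```python
import math

# Condensed data of eqn (7) for a two-variable problem: objective
# F0 * prod (xj/xbarj)**cj, constraint values F_i = f_i(xbar), exponent rows a_ij.
xbar = [6.45, 6.45]
F0, c = 1.736, [0.74, 0.26]
F = [0.7073, 0.414]
A = [[-0.828, -0.172], [-0.414, -0.586]]

zbar = [math.log(v) for v in xbar]

# eqn (12a): b = -A zbar + ln F
b = [math.log(F[i]) - sum(A[i][j] * zbar[j] for j in range(2)) for i in range(2)]
# eqn (13): c0 = ln F0 - c' zbar
c0 = math.log(F0) - sum(c[j] * zbar[j] for j in range(2))

print([round(v, 2) for v in b], round(c0, 2))
```

The values round to b ≈ (1.52, 0.98) and c₀ ≈ −1.31, matching the dual objective of the example, eqn (16a), up to the paper's rounding.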

4. EXAMPLES

The examples presented here are taken from the literature so that the results can be readily compared. When change limits are appropriate in the calculations, an initial value is set in the data and is multiplied by a reduction factor k at the beginning of every iteration. Its value therefore reduces in a geometric progression.

A. Three bar truss (Fig. 1)

[Fig. 1. Three bar truss: bar spacing 25.4 cm; E = 6.895 MN/cm² (10⁷ lb/in²); ρ = 2.768 g/cm³ (0.1 lb/in³); P = 88.96 kN (20,000 lb); σ̄ = 13.79 kN/cm² (20,000 lb/in²); x₁, x₂, x₃ ≥ 0.645 cm² (0.1 in²).]

This simple problem, which can be solved by using a slide rule, is given in detail to clarify ideas. The problem is to find the areas for a minimum weight design subject to stress constraints:

minimise w = Σᵢ ρᵢlᵢxᵢ,  i = 1, ..., 3 bars

subject to −σ̄ ≤ σᵢⱼ(x) ≤ σ̄,  j = 1, 2 loading cases.

A simple analysis of the redundant structure, in which obviously x₃ = x₁, gives

σ₁₁ = σ₃₂ = P(√2 x₁ + x₂)/(√2 x₁² + 2x₁x₂)

σ₂₁ = σ₂₂ = P/(x₁ + √2 x₂)

σ₃₁ = σ₁₂ = −P x₂/(√2 x₁² + 2x₁x₂).

As long as x₂ ≤ √2 x₁, the constraints on σ₃₁ and σ₁₂ are obviously satisfied if the constraints on σ₁₁ and σ₂₁ are satisfied. Hence the problem can be reduced to having two constraints on two variables, and its solution can be illustrated graphically in Fig. 2.

[Fig. 2. Three bar truss: design space (x₁, x₂ in cm²) and successive approximate solutions.]

It is evident that only the constraint σ₁₁ will be active at the optimum point O. This example is therefore used to demonstrate how this point can be obtained by using an artificial limit on the inactive constraint. Starting from an initial design represented by point A, where x₁ = x₂ = 6.45, and approximating the merit function

w = 0.1988 x₁ + 0.0703 x₂

and the constraints by single term posynomials around the operating point, the exact problem is approximated by

minimise 1.736 (x₁/6.45)^0.74 (x₂/6.45)^0.26    (15a)

subject to 4.562 x₁^(−0.828) x₂^(−0.172) ≤ 1

           2.672 x₁^(−0.414) x₂^(−0.586) ≤ 1.    (15b)

The logarithmic form of this problem is linear, and its dual is

maximise −1.31 + 1.52 y₁ + 0.983 y₂    (16a)

subject to 0.74 − 0.828 y₁ − 0.414 y₂ = 0

           0.26 − 0.172 y₁ − 0.586 y₂ = 0.    (16b)

The solution of (16b) gives y₁ = 0.7875 and y₂ = 0.2125, which is feasible (yᵢ > 0), and the corresponding primal variables are z₁ = 1.7431 = ln x₁, z₂ = 0.446 = ln x₂, giving x₁ = 5.715, x₂ = 1.562. This is in fact the intersection point B of the approximated constraint curves σ'₁₁ and σ'₂₁ of eqn (15b) shown in Fig. 2. If this point is now taken to be the new operating point and the constraints approximated by the curves σ''₁₁ and σ''₂₁, the solution point C in Fig. 2 is associated with y₁ = 1.179 and y₂ = −0.179, which is unfeasible and indicates that y₂ should be zero at the optimum and the associated constraint inactive. However, the linear programming procedure insists on having the optimum at the intersection of the two (approximate) constraints. This requirement can be satisfied by letting the stress limit on σ₂₁ be equal to the stress level at point B, thus artificially making the constraint active. This effectively replaces the constraint curve σ₂₁ in Fig. 2 by σ*₂₁, with a pseudo stress limit = 88.96/(5.715 + 1.562√2) = 11.227 kN/cm².

If the solution point B is regarded as a first approximation to the true optimum, used to discover the stress level that makes the constraint on σ₂₁ artificially active, then another solution can be obtained with the single term posynomial approximation of the curve σ₂₁ around point A, which only needs changing the coefficient of the second constraint in (15b) to 3.282, and the coefficient of y₂ to 1.189 in (16a). The new approximated solution at point B' gives x₁ = 5.23 and x₂ = 2.357 (w = 1.205). A repetition of this calculation with the stress level σ₂₁ = 10.39 kN/cm² at B' as the pseudo limit, making the coefficient of y₂ in (16a) 1.265, gives the result x₁ = 5.075 and x₂ = 2.745 (w = 1.198), almost exactly at the optimal point O (x₁ = 5.058 and x₂ = 2.723, w = 1.197).

Notice that in this sequence of successive approximations, the basis for the solution of the dual problem (and hence also the dual variables) never changes. Only the coefficient of y₂ in the approximate function and the primal variables z₁ and z₂ are updated. The method takes advantage of the linear programming (and the associated particular form of geometric programming) procedure, which produces the optimum solution at the intersection of the (approximate) constraints. When it discovers that one of the constraints should in fact not be active at the optimum, as indicated by an unfeasible dual variable y, it regards the solution as an approximation, and utilises the (approximate) stress level at that point as a pseudo limit for the corresponding constraint to obtain a better extreme point approximation. This approach seems so promising that it is really worth a more thorough investigation for the solution of large problems, although the automation of the procedure might require some effort.

However, recognising that the optimum solution may be constructed as a pseudo extreme point solution of a linear programming problem by the imposition of artificial constraint limits on non-active constraints, it is also possible to obtain the optimum solution in a different way. Starting from a feasible initial design, this method simply specifies a

[Table: Change limit Δσ; Reduction factor k]
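The pseudo-limit sequence of the three bar truss example can be reproduced numerically. The sketch below (not from the paper) condenses the two retained stress constraints at the fixed operating point A, solves each two-constraint vertex system by Cramer's rule, and applies two pseudo-limit updates to σ₂₁. Because it carries full precision rather than the paper's rounded coefficients, the iterates land at x ≈ (5.06, 2.75) and w ≈ 1.200, close to the paper's (5.075, 2.745) and w = 1.198:

```python
import math

P, sig_max = 88.96, 13.79
rt2 = math.sqrt(2)

# exact stresses of the reduced two-constraint problem (x3 = x1)
sig1 = lambda x1, x2: P * (rt2*x1 + x2) / (rt2*x1**2 + 2*x1*x2)
sig2 = lambda x1, x2: P / (x1 + rt2*x2)

def exponents(sig, x1, x2, h=1e-6):
    # posynomial exponents a_j = xbar_j * d(ln sig)/dx_j of eqn (7),
    # here obtained by central differences
    s = sig(x1, x2)
    a1 = x1 * (sig(x1 + h, x2) - sig(x1 - h, x2)) / (2*h) / s
    a2 = x2 * (sig(x1, x2 + h) - sig(x1, x2 - h)) / (2*h) / s
    return a1, a2

def solve_vertex(xb1, xb2, lim1, lim2):
    # intersection of the two condensed constraints at their limits:
    # a_i . u = ln(lim_i / sig_i(xbar)), with u_j = ln(x_j / xbar_j)
    a11, a12 = exponents(sig1, xb1, xb2)
    a21, a22 = exponents(sig2, xb1, xb2)
    r1 = math.log(lim1 / sig1(xb1, xb2))
    r2 = math.log(lim2 / sig2(xb1, xb2))
    det = a11*a22 - a12*a21
    u1 = (r1*a22 - a12*r2) / det
    u2 = (a11*r2 - r1*a21) / det
    return xb1 * math.exp(u1), xb2 * math.exp(u2)

xb = (6.45, 6.45)                          # operating point A
x = solve_vertex(*xb, sig_max, sig_max)    # first approximation: point B
for _ in range(2):                         # two pseudo-limit updates on sigma_21
    x = solve_vertex(*xb, sig_max, sig2(*x))

w = 0.1988*x[0] + 0.0703*x[1]
print(x, w)
```

Note that the basis never changes here, exactly as the example observes: only the right hand side (the pseudo limit on σ₂₁) is updated between solves, while the operating point stays at A.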