Rank-Based Evolutionary Algorithm For Structural Optimization M.S. Voss and C.M. Foley Department of Civil & Environmental Engineering, Marquette University, Haggerty Engineering Hall, P.O. Box 1881, Milwaukee, WI, USA

Submitted For Publication January 1999 To

Computers & Structures An International Journal

Editors K.J. Bathe & B.H.V. Topping

Address for correspondence: Professor Barry H.V. Topping Department of Mechanical Engineering Heriot-Watt University Edinburgh, EH14 4AS, UK Email: [email protected]


Abstract An evolutionary algorithm that utilizes a-priori problem-specific information and allows intuitive representation of the problem design variables is proposed. A technique for conditioning the components of the fitness statement using ranking and a graphical method for monitoring components of the rank-based fitness function are presented. By utilizing generationally dependent non-linear rank-based selection along with translocation crossover and intelligent mutation to maintain genetic diversity, the proposed algorithm is able to operate directly on a heuristic tree representation of the design variables. Performance and control of the evolutionary algorithm are demonstrated and discussed via a cantilever column example problem.

Keywords:

Structural Optimization, Evolutionary Algorithm, Rank Based, Genetic Algorithm, Heuristic Tree Representation, Diversity.

1. Introduction

Optimal structural design using genetic algorithms has received much research attention in recent years. The attraction of the genetic algorithm for structural optimization applications stems from its ability to easily handle discrete variable optimization problems [1-4, 10, 12, 13, 16, 23-26, 32-34, 36, 39, 40]. In the past these problems have been solved using simulated annealing techniques [6, 7], dual methods [35] and branch-and-bound methods using sequential quadratic programming [21, 22, 34]. The genetic algorithm is a rather recent means with which to solve optimal design problems [5, 15, 17, 18, 30, 31] and it is based on Darwin's Theory of Evolution. In the classical genetic algorithm formulation, possible design configurations are termed individuals and their characteristics are defined using genetic coding (usually binary strings representing design variables). Each individual in a population (a pool of prospective designs) is assigned a fitness for reproduction. Fit individuals are mated to produce offspring (future candidate designs) through crossover between genetic strings. This crossover of the two individuals is commonly referred to as reproduction. Crossover-based reproduction is the means by which the algorithm searches the solution space for promising individuals. Although the solution found by the genetic algorithm is not guaranteed to be a global optimum, it is usually a very good solution. An additional aspect of the evolutionary process used in a genetic algorithm is mutation. Mutation is the method through which the algorithm recovers information that may not have been present in the initial population or that may have been lost during crossover. The genetic algorithm can be classified as a stochastic procedure and its success depends on the algorithm's ability to effectively search the solution space while exploiting good solutions through genetic reproduction. As a result, researchers have attempted to control the genetic algorithm and develop improved GA operators [14, 29].

The generality of the classical genetic algorithm is one of its main assets since it can be applied to a large realm of problems without any a-priori problem-specific information being required. However, this generality can cause the algorithm to spend time searching regions of the solution space that are known to be unprofitable. A question then arises: given two designs for the same structure, what are the structure's building blocks [14, 18-20] and in what meaningful ways could they be exchanged in the crossover operations typically found in genetic algorithms? Traditional multistory buildings tend to have their heaviest members near the base and their lightest members near the top. They also tend to gradually change the weight of their members from floor to floor as one travels up from the base to the roof. In this sense, the genetic building blocks could be seen as the building's beams, columns and entire floors. This suggests that all corresponding building components (beams, columns and floors) and nearby building components (beams and columns from nearby floors above/below or an entire nearby floor above/below) could participate in meaningful crossover operations. This a-priori knowledge of the building blocks contained within the problem was the motivation for implementation of a heuristic tree representation for an individual and its design variables (to be subsequently discussed). Representation of design variables in a hierarchical structure (rather than a binary string) suggests reproduction with crossover of genetic hierarchies between mating individuals where crossover occurs at corresponding or nearby locations [9]. "The easiest way to accomplish this is to introduce an exceptional crossover operator, the translocation operator, which produces crossing-over between randomly chosen non-homologous pairs" [18]. Furthermore, the extension of crossover to higher-order representations (referred to here as macro crossover operations on a heuristic tree representation) was anticipated by Holland [18] as a means with which to increase the efficiency of the genetic algorithm. The implementation of translocation and macro crossover leads to some interesting issues with respect to traditional genetic algorithms [14], which are a main motivation for this study and the proposed evolutionary algorithm. Furthermore, the ability to balance exploration of the solution space while exploiting good solutions is an extremely important attribute for robust GA optimization architectures. The present paper seeks to illustrate a general evolutionary algorithm whereupon the solution space is effectively explored through probabilistic reproduction and fair participation of fitness function components throughout the evolutionary process. Furthermore, exploitation of good solutions is performed through selection pressure applied at flexible stages during the evolution of the optimum solution. Lastly, the paper seeks to present a general, flexible GA architecture that can be applied to a wide range of problems outside the simple cantilever column example provided.


2. Formulation of Optimization Problem

The three-dimensional, ten-segment, rectangular cantilever column shown in Figure 1 was used to study the effectiveness of the evolutionary algorithm proposed. The optimization statement contains the following: (a) an objective (minimize the cantilever's volume); and (b) constraints (deflections, stiffness and shape in two directions). Two design variables (hx and hy) are possible for each of the ten segments and the values for these variables are assumed to take on discrete quantities. Therefore, the optimization problem has 20 discrete design variables. Three concentrated loads are applied to the top of the column. The horizontal loads are assumed to be applied orthogonally to one another in the positive x- and y-directions. The deflection and buckling loads for the three-dimensional column in both planes are assumed to be uncoupled. Since this model was constructed explicitly for the purposes of testing the performance of an evolutionary algorithm, this simplification was felt to be acceptable. The following discussion is limited to a single plane of deformation and it is assumed that the equations described can be generalized to the other perpendicular direction of deformation and buckling. In lieu of specification equations driving the optimization algorithm, the critical load of the member will be used to define expressions for the tangent stiffness and magnified deflection (due to the interaction of axial load with horizontal deformation). Therefore, an expression for this quantity must be derived for the stepped column shown in Figure 1. The elastic buckling load of the ten-segment column was determined using an energy method described in [37]. The buckled shape of the member is assumed to be of the form,

y = \delta \left[ 1 - \cos\left( \frac{\pi z}{2L} \right) \right]   (1)

where: δ is the amplitude of the deflection at the column tip; L is the cantilever length; and z is the distance along the cantilever length (assumed to be zero at the base). The strain energy due to bending can be written as follows,

\Delta U = \frac{1}{2E} \sum_{i=1}^{10} \int_{L_{i-1}}^{L_i} \frac{\left[ P \left( \delta - y \right) \right]^2}{I_i} \, dz   (2)

where: E is the modulus of elasticity; P is the applied axial load in segment i; and Ii is the moment of inertia of segment i. The work done by the applied compression force, P, can be expressed as,

\Delta T = \frac{P}{2} \int_{0}^{L} \left( \frac{dy}{dz} \right)^2 dz   (3)
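For readers without a symbolic algebra package, the same energy-method calculation can be carried out numerically. The sketch below is an illustration only (it is not the authors' Mathematica derivation); it takes the bending moment as P(δ − y), consistent with equation (2), and the segment inertias in the example call are hypothetical.

```python
import numpy as np

def critical_load(E, seg_lengths, seg_I, delta=1.0, n_pts=200):
    """Energy-method estimate of Pcr for a stepped cantilever column.

    Equates the bending strain energy of eq. (2) to the work of the axial
    load in eq. (3) for the assumed buckled shape
    y = delta * (1 - cos(pi*z/(2L))) and solves for P.
    """
    L = float(sum(seg_lengths))
    num = 0.0   # integral of (dy/dz)^2 over the full length
    den = 0.0   # sum over segments of the integral of (delta - y)^2 / I_i
    z0 = 0.0
    for Li, Ii in zip(seg_lengths, seg_I):
        z = np.linspace(z0, z0 + Li, n_pts)
        y = delta * (1.0 - np.cos(np.pi * z / (2.0 * L)))
        dydz = delta * (np.pi / (2.0 * L)) * np.sin(np.pi * z / (2.0 * L))
        num += np.trapz(dydz ** 2, z)
        den += np.trapz((delta - y) ** 2, z) / Ii
        z0 += Li
    # (P/2)*num = (P^2 / (2E))*den  ->  Pcr = E * num / den
    return E * num / den

# Hypothetical ten-segment column: 50 in segments, inertias tapering to the top.
print(critical_load(E=29000.0, seg_lengths=[50.0] * 10,
                    seg_I=[520000.0 - 40000.0 * i for i in range(10)]))
```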

A symbolic expression for the critical load of the stepped cantilever column was found using Mathematica [38] by setting equation (3) equal to equation (2) and solving for the critical load, Pcr. The resulting expression is not presented here. Horizontal deflection limits in both directions were also used as constraints in the optimization procedure. Using the method of virtual work, the first-order deflection of the cantilever column can be computed using,

\delta_1 = \frac{1}{E} \sum_{i=1}^{10} \int_{L_{i-1}}^{L_i} \frac{H \left( L - z \right)^2}{I_i} \, dz   (4)

where: H is the horizontal load at the top of the cantilever. It is well known that second-order (P-Δ) effects cause non-linearity in the load-deformation response. The second-order deflection (the magnified first-order deflection) of the stepped cantilever column can then be expressed approximately using the following,

\delta_2 = \max\left[ 1, \left( \frac{P}{P_{cr}} \right)^2 \right] \cdot \frac{\delta_1}{1 - \min\left[ 0.99, \frac{P}{P_{cr}} \right]}   (5)

where: Pcr is the elastic critical load. Since an analytically correct solution for the magnified deflection approaches infinity as P approaches Pcr, it was necessary to modify the typical amplified sway equation (through the addition of the max and min functions) to return fictitious, increasing, positive deflection values in the post-critical region so that penalties could be assigned. The tangent stiffness of the structure is given by,

K_T = \frac{dP}{d\delta_2} = \mathrm{sign}\left( P_{cr} - P \right) \cdot \frac{P_{cr}}{\delta_1} \cdot \left( 1 - \frac{P}{P_{cr}} \right)^2   (6)
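A minimal sketch of how equations (5) and (6) might be evaluated for one candidate design is given below; the function names are illustrative, and the example call simply uses values of the same order as those reported later in Table 2.

```python
import math

def magnified_deflection(delta1, P, Pcr):
    """Second-order deflection per eq. (5), clamped so that a finite,
    growing value is returned in the post-critical region."""
    return max(1.0, (P / Pcr) ** 2) * delta1 / (1.0 - min(0.99, P / Pcr))

def tangent_stiffness(delta1, P, Pcr):
    """Tangent stiffness dP/d(delta2) per eq. (6); the sign term makes the
    stiffness negative once the axial load exceeds the critical load."""
    return math.copysign(1.0, Pcr - P) * (Pcr / delta1) * (1.0 - P / Pcr) ** 2

# Example: x-direction values of the same order as Table 2.
print(magnified_deflection(delta1=0.204, P=80000.0, Pcr=89109.0))  # ~2.0 in
print(tangent_stiffness(delta1=0.204, P=80000.0, Pcr=89109.0))     # ~4560 kip/in
```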

In the current study, the weight of the structure is considered as the characteristic quantity to be optimized. It follows that the optimization problem may be expressed as,

Minimize:

V = \sum_{i=1}^{10} \left( L_i - L_{i-1} \right) \cdot h_{x_i} \cdot h_{y_i}   (7)

Subject to:

K_{T_x}^{i} \ge K_{T_x}^{goal} \; \text{(x-stiffness)} \qquad K_{T_y}^{i} \ge K_{T_y}^{goal} \; \text{(y-stiffness)}
\delta_{2_x}^{i} \le \delta_{2_x}^{goal} \; \text{(x-deflection)} \qquad \delta_{2_y}^{i} \le \delta_{2_y}^{goal} \; \text{(y-deflection)}
S_{x}^{i} \le S_{x}^{goal} \; \text{(x-shape)} \qquad S_{y}^{i} \le S_{y}^{goal} \; \text{(y-shape)}
h_{x}^{L} \le h_{x_i} \le h_{x}^{U} \; \text{(x-dimension)} \qquad h_{y}^{L} \le h_{y_i} \le h_{y}^{U} \; \text{(y-dimension)}   (8)

where: V is the volume of the cantilever; and Si are the shape constraints, which will be defined later. In order to employ a genetic algorithm, the above problem statement needs to be reformulated as an unconstrained optimization problem. This necessitates the development of penalty functions which are functions of the constraint violations. Therefore, the constraint functions will be defined first. The development of the rank-based penalty functions will be presented later. The constraint functions for a particular individual are given below:


\Delta_{K_x}^{i} = K_{T_x}^{goal} - K_{T_x}^{i} \; \text{(x-stiffness)} \qquad \Delta_{K_y}^{i} = K_{T_y}^{goal} - K_{T_y}^{i} \; \text{(y-stiffness)}
\Delta_{\delta_{2x}}^{i} = \delta_{2_x}^{i} - \delta_{2_x}^{goal} \; \text{(x-deflection)} \qquad \Delta_{\delta_{2y}}^{i} = \delta_{2_y}^{i} - \delta_{2_y}^{goal} \; \text{(y-deflection)}
\Delta_{S_x}^{i} = S_{x}^{i} - S_{x}^{goal} \; \text{(x-shape)} \qquad \Delta_{S_y}^{i} = S_{y}^{i} - S_{y}^{goal} \; \text{(y-shape)}   (9)
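As a concrete illustration, the objective of equation (7) and the six constraint functions of equation (9) could be collected for a single candidate as sketched below. This is an illustrative sketch only: the response quantities are assumed to have been computed beforehand (e.g., with the deflection and stiffness expressions above), and positive constraint values indicate violation, consistent with the reconstruction of equation (9).

```python
def volume(seg_lengths, hx, hy):
    """Objective of eq. (7): total volume of the stepped cantilever."""
    return sum(L * bx * by for L, bx, by in zip(seg_lengths, hx, hy))

def constraint_values(resp, goal):
    """Constraint functions of eq. (9); positive entries indicate violation.
    `resp` and `goal` are dicts of computed responses and goal values."""
    return {
        "K_Tx": goal["K_Tx"] - resp["K_Tx"],   # x-stiffness
        "K_Ty": goal["K_Ty"] - resp["K_Ty"],   # y-stiffness
        "d2_x": resp["d2_x"] - goal["d2_x"],   # x-deflection
        "d2_y": resp["d2_y"] - goal["d2_y"],   # y-deflection
        "S_x":  resp["S_x"]  - goal["S_x"],    # x-shape
        "S_y":  resp["S_y"]  - goal["S_y"],    # y-shape
    }
```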

4. Potential Pitfalls in the Traditional Genetic Algorithm Approach

The constraint values from equation (9) are usually conditioned and then added or multiplied together to form an individual's fitness. These fitness values are the usual starting point for implementation of a genetic algorithm in an optimization problem such as that described here. In order for the individual fitness to properly filter information and differentiate individuals, the penalty components need to be scaled/modified so that they can participate in a meaningful way in the selection process used to determine the next generation. This is typically accomplished through normalization of the constraint violations, which are then multiplied by a scalar which can remain constant [11] or dynamically change [2]. Depending on the method of constraint modification and the form of the penalty functions, the magnitudes of the components in the fitness equation can vary widely relative to one another. For example, some components may have very large magnitudes and vary in a highly non-linear fashion, while others may have small magnitudes and vary in a nearly linear fashion. If one scales the stiffness and deflection constraint components using a single multiplier so that only a certain range of values can participate in a meaningful way in the optimization statement, it is equivalent to unrealistically giving certain individuals in the population the death penalty. In other words, when individuals might have a stiffness problem, but are fit with respect to the other constraints, they should still be allowed to participate since there may be building blocks within these individuals that others might find beneficial (and therefore could attain through reproduction). Preventing over-discrimination becomes more important as one increases the number of terms (objectives and penalties) present in the fitness statement. An over-discriminating fitness statement can remove diversity too fast from a population, which could adversely impact the success of the genetic search.

Furthermore, traditional genetic algorithms assign probabilities for selection that are proportional to an individual's fitness value. The proposed algorithm needed to be able to differentiate and select individuals for crossover, carryover, and mutation based on a fitness function that was made up from seven components. When assembling fitness functions with this many components, it is necessary to condition the components so that they participate in a meaningful way in the selection process [11]. It was felt that proportional fitness did not allow enough control over convergence in the present problem since the conditioning of the components makes the value of the fitness dependent on the method of conditioning employed. In light of these concerns [14], a rank-based penalty function was developed along with a graphical technique for monitoring the contribution of the components throughout the evolutionary process. It should be noted that ranking is nothing more than a functional that serves to linearize the constraint functions, mapping the constraints from 1 to the number of individuals in violation. This may eliminate the need for piecewise-linear shifting and scaling [11].

Traditional genetic algorithms [14, 18] are based on the Schemata Theorem, which says that there is an exponential generational convergence of schemata. This theorem essentially provides a mathematical basis for convergence of the traditional genetic algorithm. In order for the Schemata Theorem to be applicable, crossover on the genetic chromosome between individuals should be one-to-one. In addition, the Schemata Theorem advocates low-order genetic representations (low cardinality) for maximum effectiveness [14]. The presence of the Schemata Theorem has historically been the driving force behind representation of individuals as binary strings. The traditional genetic algorithm works well when one is optimizing multi-variate functions when there is no a-priori knowledge of the interrelationships between the solution variables of the objective function. For example, when one knows that the function f(a, b, c, d) is minimized when a ≤ b ≤ c ≤ d, the traditional genetic algorithm is committed to searching unprofitable regions of the solution universe. One method to remedy this situation is to modify the penalty functions residing in the fitness statement. Thus, a constraint function of the following form could be used,

M_{order} = \max\left[ (a - b), 0 \right] + \max\left[ (b - c), 0 \right] + \max\left[ (c - d), 0 \right]   (10)

to move the population in the direction of promising regions of the search space. However, a-priori knowledge of the design variable interrelation also suggests an ordering of the classical genetic algorithm's binary string representation, such that,

a ≤ b ≤ c ≤ d  →  goal
| 0 0 0 1 | 0 1 0 1 | 1 0 0 0 | 1 0 0 1 |

This provides a basis for hybrid macro crossover operations between individuals whereby two individuals selected for crossover would exchange the values of whole nearby variables (translocation crossover). Although translocation crossover operations make intuitive sense in this case, they are not encompassed by the current Schemata Theorem and therefore do not fall under its proof of convergence. The low cardinality (i.e. 0 and 1) requirement for maximum solution universe exploration is necessitated by the restrictive one-to-one crossover found in the traditional genetic algorithm. One-to-one crossover makes sense when no a-priori knowledge of the interrelationships between the design variables of the objective function exists. Translocation crossover increases diversity over generations by allowing emigration of variables from one gene location to another. This emigration increases the diversity of potential values that a gene can express with a given genetic representation. If this diversity is maintained, the low cardinality requirement of the Schemata Theorem may be relaxed. The present study proposes a problem-specific heuristic tree representation as shown in Figure 2. The tree representation facilitates the recognition of building blocks used in exceptional crossover operations involving homologous and nonhomologous pairs and could also be thought of as a genetic program with a static representation [8, 27, 28]. Nodes a, b, and c represent locations for x-dimension, y-dimension and whole-level crossover, respectively. The nodes are labeled at the second level, but should be considered as generalized locations for crossover at a given hierarchy level.
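For concreteness, the heuristic tree of Figure 2 could be realized with a structure along the following lines. This is an illustrative sketch only: the class name is assumed, and the 10-to-80 inch range with 0.35 inch increments is taken from the example problem described in Section 8.

```python
import random

class CantileverTree:
    """Heuristic tree genome: ten levels, each holding an (hx, hy) pair.

    Node "c" of Figure 2 corresponds to a whole level; nodes "a" and "b"
    correspond to the hx and hy entries of that level, respectively.
    """
    H_MIN, H_MAX, H_STEP = 10.0, 80.0, 0.35   # discrete dimension limits (in)

    def __init__(self, levels=10):
        choices = int(round((self.H_MAX - self.H_MIN) / self.H_STEP)) + 1
        rand_h = lambda: self.H_MIN + self.H_STEP * random.randrange(choices)
        self.levels = [[rand_h(), rand_h()] for _ in range(levels)]

    def hx(self, i):   # "a" node of level i (0-based)
        return self.levels[i][0]

    def hy(self, i):   # "b" node of level i
        return self.levels[i][1]
```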

5. Objectives

There are several objectives to the present paper. First of all, the paper aims to present the implementation of an evolutionary algorithm on the optimization problem discussed in the previous section. Secondly, this problem exhibits many characteristics to be found later in a current research project underway at Marquette University examining implementation of genetic algorithms for the optimization of structural steel frames using advanced (nonlinear inelastic) analysis. The stepped cantilever column exhibits, qualitatively, much of the same behavior expected in later problems. The paper also aims to develop a flexible/tunable evolutionary algorithm architecture with applicability to a wide variety of structural optimization problems. Finally, the paper aims to investigate several hypotheses related to macro crossover and potential violation of the Schemata Theorem. These hypotheses are stated as follows: (a) intelligent macro crossover and mutation can improve convergence while maintaining diversity; (b) when a-priori knowledge of the inter-relationships between the solution variables of the objective function is known, intelligent macro/translocation crossover and mutation operations can be designed in such a way that the low cardinality requirements of the Schemata Theorem can be relaxed; and (c) with the low cardinality requirement relaxed due to translocation crossover and mutation, it is possible to generalize the traditional string representation to a heuristic tree (refer to Figure 2) which does not require the additional binary mapping layer.

6. Rank-Based Optimization Statement

There is a subjective component to optimization in that a designer might be willing to live with a few less-than-optimal components to get a desired result. Therefore, a highly desirable algorithm would contain options allowing the user to impose the importance of individual components. This, in turn, would give control to a seemingly random optimization procedure. The above considerations motivated the development of a rank-based optimization statement. The rank-based optimization statement proposed is easily implemented and somewhat semi-automatic. The designer must assign a scalar multiple and exponent to each optimization statement component based on both objective knowledge and subjective preference. The weight and all constraint components of each individual's fitness are ranked, with all components less than zero given a rank of zero, the smallest non-negative component(s) given a rank of one, the next largest given a rank of two, etc. Individuals that have the same magnitude are assigned the same rank. Once the ranks have been assigned they are multiplied by a scaling multiplier and added to a constant. The result is then raised to a Generationally Dependent Penalty Exponent (GDPE) to tune the relative weight of each component in the objective function. For example, given the following numerical values of the displacement constraint violation for a population of seven individuals:

\Delta_\delta = \delta^{i} - \delta^{goal} = \{ 13, -52, 25, 2, -1000, 342, 13 \}

the ranks of the displacement constraint violations for this population are then defined as,

R(\Delta_\delta) = \{ 2, 0, 3, 1, 0, 4, 2 \}

The rank for the sixth individual in the population is denoted,

R_6(\Delta_\delta) = 4

The component penalty for the ith individual in the population is defined as,

f_i^{\delta} = \left[ 1 + \xi_\delta \, R_i(\Delta_\delta) \right]^{n_\delta}

where: ξδ is the multiplier for the deflection constraint. Similar values are defined for volume, stiffness, and shape. In the present formulation, all values of ξ are 1.0 as indicated in Table 1. More specifically, the component penalty for the sixth individual in the population is,

f_6^{\delta} = \left[ 1 + \xi_\delta \cdot 4 \right]^{n_\delta}

The rank-based fitness for an individual ten-segment cantilever column can then be written as follows:

F_i = f_i^{W} + f_i^{K_{Tx}} + f_i^{K_{Ty}} + f_i^{\delta_x} + f_i^{\delta_y} + f_i^{S_x} + f_i^{S_y}   (11)
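A minimal sketch of this ranking and penalty assembly, reproducing the worked deflection example above, is given below; it assumes ξ = 1.0 (as in Table 1) and leaves the penalty exponent n as a plain argument.

```python
def rank_components(values):
    """Rank one constraint component across the population: negative
    (satisfied) entries get rank 0; equal magnitudes share a rank."""
    positives = sorted(set(v for v in values if v >= 0))
    rank_of = {v: r + 1 for r, v in enumerate(positives)}
    return [0 if v < 0 else rank_of[v] for v in values]

def component_penalty(rank, xi=1.0, n=1.0):
    """Component penalty f_i = (1 + xi * R_i)^n."""
    return (1.0 + xi * rank) ** n

# Worked example from the text: deflection violations for seven individuals.
viol = [13, -52, 25, 2, -1000, 342, 13]
ranks = rank_components(viol)            # -> [2, 0, 3, 1, 0, 4, 2]
f6 = component_penalty(ranks[5], n=1.0)  # penalty of the sixth individual

# Eq. (11): an individual's fitness is the sum of its seven component
# penalties (weight, two stiffnesses, two deflections, two shapes).
def fitness(component_penalties):
    return sum(component_penalties)
```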

In the present study, the scaling multipliers are assigned constant values which reflect the importance of each component. The penalty exponents are functions of the generation number allowing for targeted dynamic convergence control.

7. Evolutionary Algorithm Components

Generationally Dependent Non-linear Rank-Based Selection (GDNLRBS) [5, 30] combined with GDPE(s) was employed to allow dynamic control of algorithm convergence. By ranking the rank-based fitness values themselves (each individual given a fitness rank from 1 to the population size) and then using GDNLRBS, it was possible to dynamically control how fast the selection pressure was increased, from somewhat egalitarian (low) pressure during early generations to high pressure (focused on the exploitation of fit individuals) during later generations. It should be noted that GDNLRBS is a global selection tuning mechanism whereas the GDPE(s) are local selection tuning mechanisms. In effect, the penalty exponents turn up the selection pressure locally for a particular constraint component. The interrelation of these tuning mechanisms is integral to the efficient operation of the proposed algorithm. The GDPE(s) are defined by:

n_i = \alpha + \beta \cdot \frac{G_{curr}}{G_{max}}   (12)

where: α and β are user-selected constants; Gcurr is an integer representing the current generation; and Gmax is an integer defining the maximum number of generations to be carried out in the genetic algorithm. By choosing the values of α and β carefully it is possible to adjust the relative contribution of a particular penalty based on the generational time-line. In this way it is possible to adjust the point in the evolutionary process at which a particular penalty function should become more or less important in relation to the other competing penalty functions. This technique can be used to focus the evolutionary search on different penalties at different points during the evolution of the optimum design. Separate functions are used for GDNLRBS with respect to crossover, carry-over, and mutation. The general form of the selection probability for crossover, carry-over, and mutation is given by:

p(r) = \frac{q \left( 1 - q \right)^{r-1}}{1 - \left( 1 - q \right)^{m}}   (13)

where: r is the rank of the individual; and m is the population size. The parameter q ∈ (0, 1) does not depend on the population size. Larger values of q imply stronger selective pressure of the algorithm [30]. Probability-based reproduction techniques have been found to be useful in various engineering problems [29]. A generationally dependent selection pressure parameter, q, is defined as,

q = \gamma + \zeta \cdot \min\left[ 1, \left( \frac{G_{curr} + 5}{G_{max}} \right)^{\lambda} \right]   (14)

where: γ, λ and ζ are user-defined constants. All multipliers, constants and exponents used in the proposed algorithm are given in Table 1. A cumulative probability density function is constructed using the nonlinear probability function, p(r). An individual can then be chosen by selecting a random real number r ∈ (0, 1) and mapping it onto the cumulative probability density function to determine the rank of the individual to be selected. This function is graphically depicted in Figure 3.

Elitism was also employed in the proposed evolutionary algorithm. The top two individuals of every generation are carried into the next generation. Since GDNLRBS is used, the algorithm has less of a tendency to exploit super-fit individuals than proportional fitness selection does. This allows the benefits of elitism to be employed without premature convergence on super-fit individuals during early generations. As previously mentioned, separate GDNLRBS functions were used for selecting individuals for participation in crossover, carryover and intelligent mutation. It was felt that creating separate GDNLRBS distributions for crossover, carryover, and mutation did not overly complicate the algorithm and allows for maximum flexibility with respect to tuning. Figure 4 illustrates the flow diagram for the proposed hybrid genetic algorithm. A discussion of the GDNLRBS algorithm components is given below (also refer to the values contained in Table 1):

a.) GDNLRBS for Crossover (ζ1, γ1, λ1): GDNLRBS can be used to select the parents for the next generation with a bias toward selecting fit individuals that is dynamically increased as the population ages. This allows the population to maintain diversity during early generations while exploiting fit individuals in later generations. If used by itself, convergence on a relatively good answer is likely, albeit with no implied guarantee that the answer represents a global optimum.

b.) GDNLRBS for Carry-Over (ζ2, γ2, λ2): Diversity is prolonged by using the same nonlinear rank-based selection technique as for parental selection to select individuals from the current generation that will be carried over without change to the next generation. The bias toward selecting fit individuals is not as strong as that used for parental selection during early generations, but increases rapidly during later generations. This allows an exploitation of fit individuals in later generations without prematurely removing population diversity or adding significant computation time. If used alone, the population will converge to a population consisting entirely of one of the better individuals from the initial population.

c.) GDNLRBS for Intelligent Mutation (ζ3, γ3, λ3): A portion of the current population is also selected for intelligent mutation using nonlinear rank-based selection. A strong bias toward selecting fit individuals for intelligent mutation is maintained. Intelligent mutation maintains diversity by introducing genetic material that may have been prematurely removed during early generations. This allows for local exploration around promising areas of the search space. If used by itself, convergence to a quasi-static population of individuals that were created by local exploration around one of the best individuals in the initial generation is likely.
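Equations (13) and (14), the three selection pools above, and the reproduction percentages of Figure 4 can be combined into a single generation step roughly as sketched below. This is an illustrative sketch, not the authors' implementation: the helper names, the two-offspring `crossover` callable and the `mutate` callable are assumptions, and the Table 1 constants are hard-coded for brevity.

```python
import random

def p(r, q, m):
    """Eq. (13): probability of selecting the individual of fitness rank r
    (rank 1 taken here as the fittest) from a population of size m."""
    return q * (1.0 - q) ** (r - 1) / (1.0 - (1.0 - q) ** m)

def select_rank(q, m):
    """Map a uniform random number onto the cumulative distribution of p(r)."""
    u, cum = random.random(), 0.0
    for r in range(1, m + 1):
        cum += p(r, q, m)
        if u <= cum:
            return r
    return m

def next_generation(pop, gen, g_max, fitness, crossover, mutate):
    """One generation: ~2.5% elitism, ~45% carryover, ~47.5% crossover
    offspring and ~5% intelligent mutation (percentages from Figure 4)."""
    m = len(pop)
    ranked = sorted(pop, key=fitness)              # rank 1 = lowest (best) fitness
    pick = lambda q: ranked[select_rank(q, m) - 1]

    # Eq. (14) with the Table 1 constants (zeta = 0.19, lambda = 3,
    # gamma = 0.03 for crossover/mutation and 0.003 for carryover).
    q_of = lambda gamma: gamma + 0.19 * min(1.0, ((gen + 5.0) / g_max) ** 3)
    q_cross, q_carry, q_mut = q_of(0.03), q_of(0.003), q_of(0.03)

    nxt = list(ranked[:2])                                   # elitism
    nxt += [pick(q_carry) for _ in range(int(0.45 * m))]     # carryover
    while len(nxt) < m - int(0.05 * m):                      # crossover offspring
        nxt.extend(crossover(pick(q_cross), pick(q_cross)))
    while len(nxt) < m:                                      # intelligent mutation
        nxt.append(mutate(pick(q_mut)))
    return nxt[:m]
```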

The three components above are used in conjunction with one another to tune the evolutionary algorithm. Percentages for reproduction using NLRBS are given in Figure 4. In traditional crossover operations it is possible to define the amount of material swapped during reproduction, since there is generally only one type of crossover used (uniform, single/double point, etc.). The present study implemented six exceptional crossover operations during reproduction. Due to the complexity of these crossover mechanisms cascading over one another, it is not possible to directly set the amount of genetic material that is swapped during reproduction. It was therefore necessary to run a simulation to determine the percentage of genetic material swapped by each type of crossover. The individuals used in the simulation are shown in Figure 5. Initially, the variable sums for individuals 1 and 2 were 20 and 0, respectively. The histograms contained in the figure show the frequency of the variable sums for individuals 1 and 2 after reproduction. The frequencies were calculated as the average of three simulation runs, each consisting of 1000 matings. The crossover operations (operating on generalized nodal locations a, b, and c) are listed and discussed as follows (a sketch of the translocation variants follows the list):


a.) Homologous Partial Segment Crossover: corresponding "a" or "b" locations on two unique individuals are crossed over.

b.) Non-homologous (Translocation) Partial Segment Crossover: an "a" or "b" location is crossed over with an "a" or "b" location offset up to four levels away on two unique individuals.

c.) Non-homologous (Translocation) Self Partial Segment Crossover: an "a" or "b" location is crossed over with an "a" or "b" location offset up to four levels away on the same individual.

d.) Homologous Segment Crossover: corresponding "c" locations on two unique individuals are crossed over.

e.) Non-homologous (Translocation) Segment Crossover: a "c" location is crossed over with another "c" location offset up to four levels away on two unique individuals.

f.) Non-homologous (Translocation) Self Segment Crossover: a "c" location is crossed over with another "c" location offset up to four levels away on the same individual.
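The translocation idea behind operations (b), (c), (e) and (f) — exchanging an "a"/"b" entry or a whole "c" level with a location offset by up to four levels — might be sketched as follows. This is an illustration only; it assumes the `levels` list of the earlier tree sketch, and the helper names are not from the paper.

```python
import random

def translocation_partial(ind1, ind2, max_offset=4):
    """Swap a single hx or hy entry between two trees at levels that may
    differ by up to max_offset (operation (b); pass ind1 twice for (c))."""
    n = len(ind1.levels)
    i = random.randrange(n)
    j = min(n - 1, max(0, i + random.randint(-max_offset, max_offset)))
    k = random.randrange(2)                       # 0 = hx ("a"), 1 = hy ("b")
    ind1.levels[i][k], ind2.levels[j][k] = ind2.levels[j][k], ind1.levels[i][k]

def translocation_segment(ind1, ind2, max_offset=4):
    """Swap whole levels ("c" nodes) at offset locations (operation (e);
    pass ind1 twice for the self variant (f))."""
    n = len(ind1.levels)
    i = random.randrange(n)
    j = min(n - 1, max(0, i + random.randint(-max_offset, max_offset)))
    ind1.levels[i], ind2.levels[j] = ind2.levels[j], ind1.levels[i]
```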

Figure 4 illustrates the percentages of each crossover operation in the proposed algorithm. The average amount of genetic material swapped between individuals and swapped internally during reproduction was found to be 41.5% and 6%, respectively. This results in a total average reproductive genetic modification due to crossover of 47.5%. The proposed algorithm does not employ a criterion for termination. Instead, the algorithm uses the GDNLRBS to force convergence after a given number of generations. By simply tuning the convergence parameters, it was possible to orchestrate a soft landing close to the global optimum.

8. Results and Discussion

The cantilever that was examined was given a fixed length of 500 inches divided into 10 equal segments of 50 inches. The orientation of the loads and degrees of freedom are shown graphically in Figure 1. Each of the segments could vary between 10 and 80 inches in both the x- and y-directions, adhering to discrete increments of 0.35 inches. An axial load of Pz = 80,000 kips combined with horizontal loads of Hx = Hy = 50 kips was applied at the top of the cantilever. The proposed evolutionary algorithm was run for 50 generations with a fixed population size of 80. It should be noted that the evolutionary procedure employed here will not give identical results each time the algorithm is run. Two example runs are provided for overall algorithm discussion. It is recognized that the most correct procedure is to report averages of many runs [29]. Furthermore, Table 1 should be referenced for information regarding the evolutionary algorithm parameters used for all runs.

The design loads were chosen to illustrate one of the complexities of multiple constraint optimization. Zero stiffness as shown in Figure 6(a) can rarely be achieved, because this stiffness exists in a region of the solution space where small increases in the applied loads cause large (non-proportional) increases in deflection. As a result, it is likely that the deflection goal shown in Figure 6(b) won't be achieved at the same time the small stiffness is attained. For the present study, deflection constraints of 2 and 20 inches are imposed in the x- and y-directions, respectively. Both of these are at least an order of magnitude larger than the corresponding first-order deflections experienced at the optimum configuration. This requires the algorithm to search in a volatile region of the solution space. The 2-inch constraint required the cantilever to become larger in the direction parallel to that constraint, allowing the perpendicular dimensions to configure themselves to minimize the volume of the cantilever. For comparison purposes, an exhaustive search optimization study was performed. It is assumed that the cantilever (for the exhaustive search study only) is allowed to vary (dimensionally) in a linear fashion from the base to the top. The results of this procedure are contained in Table 2.

The proposed algorithm may seem excessively complicated due to the number of parameters that can be modified. It will be demonstrated that this concern is based on the black box genetic algorithm paradigm that is prevalent in the literature. By overly concentrating on algorithms that do not need any human intervention, a high price is being paid in terms of what can be accomplished via human/GA interaction. Fitness function component graphs allow the user to monitor the progress of the evolutionary process. Algorithm parameters can be defined such that if one observes that some components are dominating the process (at a time when it would be better for the population to be focusing on other components) one can easily adjust their values to invoke the desired effect. Figures 7, 8 and 9 show the component ranking of fitness function components for individuals in the population throughout the evolutionary process. The graphs are constructed by plotting the sorted values of the rank-based component penalty functions. It should be noted that the independent values do not necessarily correspond to the same individual for all component plots. The plots illustrate the relative contribution a particular component plays in establishing the fitness of individuals throughout the population during the evolution. It is easy to monitor these graphs to determine if any one component is dominating the selection process. The user can easily modify the parameter values associated with a dominating component to reduce its contribution throughout the population. The effects of any modifications are then observed through the use of these plots.

Figure 7 illustrates the component plots for the algorithm when no shape penalties are applied. The shape components are observed at a value of one on each plot. Since the addition of a constant does not change the value of the rank-based fitness, the shape component does not participate in the selection process. It is observed that the initial penalty exponent of one for the deflection and stiffness components results in a linear component plot (Figure 7(a)). The fact that all of the component plots (with the exception of shape) are on essentially the same line for the first generation implies that only a few individuals in the initial population are without both deflection and stiffness violations in both directions. Since the value of a component rank for an individual without a constraint violation with respect to that component is zero, the progress of the algorithm is apparent in Figure 7(b). In this figure, it can be seen that only 30 individuals (of the population - 80 total) have constraint violations with respect to individual components. As the algorithm progresses (Figures 7(c), (d) and (e)), the data points move farther to the right, indicating that more individuals become penalty-free with respect to all components. Also, flat regions begin appearing in the component plots corresponding to individuals with the same constraint component values. This indicates increased exploitation of certain individuals. Figure 7(e) illustrates that even though the algorithm has converged from a practical point of view, diversity is still being generated through intelligent mutation and translocation crossover, as evidenced by the data points at the far right of the plot.

Similar behavior can be observed in Figures 8 and 9 where there are moderate and large generationally dependent shape penalty exponents being applied, respectively. Contrasting Figures 7(c) and 8(c) with Figure 9(c), it is evident that the large shape penalty exponent is delaying the convergence of the other penalty components. The shape component data lies along the horizontal axis in Figure 7(c), indicating that the shape component is not participating in the selection process. In Figure 8(c), the shape penalties are more severe, but not so large as to dominate the selection process. This can be observed by contrasting the progress of the other penalty components in Figures 7(c) and 8(c). When large shape penalties are applied as in Figure 9(c), the shape components are dominating the selection process. This can degrade the performance of the algorithm since the selection process is overly biased toward one penalty component. This tends to prematurely eliminate diversity with respect to the others. It can be inferred by contrasting Figures 7(c) and 8(c) with 9(c) that by focusing too heavily on the shape component during early generations, the algorithm was unable to concentrate on the deflection and stiffness constraints. By allowing the shape penalty to dominate the selection process, intermediate algorithmic moves with respect to the other components of the fitness statement may have been missed. These choices could have been used to maintain diversity and improve overall convergence of the algorithm. This behavior can be observed in Figure 10(a), where runs with large generationally dependent shape penalties were not as successful in achieving the smallest cantilever volume.

The objective of the optimization procedure was to find the minimum-volume cantilever meeting all constraints. Since the cantilever was assumed to be made of steel with homogeneous properties, minimizing the volume is analogous to minimizing the weight. Therefore, the graphs (Figure 10) are given as plots of the generation number versus the volume (in cubic inches) of the best individual of that generation. One needs to be careful when interpreting the convergence plots. The most meaningful information can be inferred from the trajectory of the convergence plot. If very small penalties were applied at the beginning of a run, the volume would be dominant in the selection process. In this case, the convergence plots would approach the global minimum value from below. As the component penalties begin to participate to a larger extent, heavier individuals would become fitter than the lighter individuals with high penalty components. This scenario was observed experimentally, but is presented here only as a discussion to aid in interpretation of the convergence plots. The average volume of the randomly generated initial population also affects the convergence trajectory. A heavy initial population would tend to approach the global minimum from the top whereas a light initial population would tend to approach from the bottom. With this said, the plots presented in Figure 10(a) are meant to illustrate the relatively good convergence characteristics with respect to the global minimum (found via an exhaustive search procedure). Referring to Table 2, it can be seen that the present algorithm (assuming a step-tapered cantilever) achieved a minimum volume that was approximately 2.8% larger than the exhaustive search procedure (assuming a linearly tapering cantilever). From Figure 10(a), it can be seen that all runs (with the exception of a large shape penalty run) achieve similar (favorable) results.

Figures 10(b) and 10(c) illustrate the importance of translocation crossover and intelligent mutation. It is observed that without translocation crossover, the algorithm reaches a minimum volume at about 25 generations. Since the population has exhausted its diversity around generation 25, the only mechanism that the algorithm has left to combat the increasing penalty exponents is intelligent mutation. The increasing volumes after generation 25 are therefore attributable to intelligent mutation. Any mutation that tends to decrease the shape penalty is accepted. This is further illustrated in Figure 10(c) where it is observed that without translocation crossover and intelligent mutation, the algorithm runs out of diversity at around generation 20. Figures 10(b) and 10(c) together demonstrate the ability of translocation crossover combined with intelligent mutation to maintain diversity during the evolutionary process.

The effectiveness of the algorithm can also be inferred from the displacement and stiffness plots shown in Figure 11. All of the runs were able to produce feasible designs. Figure 11(b) demonstrates the volatility of the solution space in the vicinity of the global optimum. This volatility is apparent because the 20-inch deflection constraint in the y-direction is two orders of magnitude larger than the first-order deflection. This also demonstrates that the algorithm is pushing against the constraints, which is required for any effective optimization algorithm. Figures 10(b) and 10(c) should also be compared with their corresponding displacement and stiffness plots shown in Figure 12. It can be seen that the algorithm is still able to find admissible solutions without translocation crossover and intelligent mutation. Figure 12(c) illustrates an increasing tangent stiffness as intelligent mutation evolves more acceptable solutions with respect to the shape constraint.

Figures 13, 14 and 15 graphically illustrate the evolutionary process for the cantilever column. Figure 13 illustrates the rather random shape that is obtained using the algorithm without a shape penalty. Figure 14 shows the aesthetically pleasing effect of the moderate generationally dependent shape penalty on the final result. Although the runs with large generationally dependent shape penalties produced satisfactory designs as shown in Figure 15, they were not as effective with respect to minimizing the weight, as evidenced in Figure 10(a). This exemplifies the notion that more is not always necessarily better in terms of component penalties.


9. Concluding Remarks

As mentioned previously, the proposed evolutionary algorithm was designed to be utilized in a larger research project underway at Marquette University focused on the implementation of hybrid genetic algorithms for the optimization of structural steel frames using advanced design analysis. The proposed evolutionary algorithm meets all of the original design requirements with respect to the theoretical test problem studied here. Given an initial population of 80 randomly generated 10-segment cantilevers, where each segment's x and y dimensions could take on 200 discrete values, the algorithm was easy to set up so that it could consistently come within 3% of the solution found by an exhaustive search in 50 generations. It should be emphasized that only 42 of the 80 individuals need to be evaluated each generation due to carryover and elitism. The results indicate that the proposed evolutionary algorithm will scale well and allows a great deal of flexibility to deal with the complexities of multi-constraint optimization. A graphical method for interactive algorithm tuning was also developed which allows the user's intuition to be readily incorporated into the selection process. The rate of convergence was graphically demonstrated and easily controlled by modifying components of the algorithm. Therefore, using the proposed algorithm, the rate of convergence could be easily controlled while increasing the selective pressure. This allows the algorithm to focus on problem areas without degrading the population diversity, which could have a detrimental effect with a high cardinality genetic representation such as the heuristic tree used here.

The proposed evolutionary algorithm combines much of the state of the art in genetic and evolutionary algorithm design: (a) incorporation of problem-specific information via translocation crossover and shape constraints; (b) intuitive heuristic tree based genetic representation; (c) relaxation of the low cardinality representation advocated by the Schemata Theorem, justified by generational diversity created through translocation crossover and intelligent mutation; (d) implementation of six exceptional crossover operations; (e) low-overhead, adjustable diversity maintenance through carryover; (f) diversity maintenance and local exploration through intelligent mutation; (g) semi-automatic constraint conditioning via intuitive rank-based penalty functions with generationally dependent penalty exponents; and (h) implementation of generationally dependent non-linear rank-based selection. The proposed algorithm has much in common with simulated annealing [6, 7] and was designed such that the final population could be reheated through increased intelligent mutation and/or migration from multiple populations. This, combined with generationally dependent non-linear rank-based selection and generationally dependent penalty exponents, allows for the implementation of iterative improvement and multiple population evolutionary algorithms. Iterative improvement combined with multiple population evolution could be used to solve optimization problems where the number of constraints is excessive for a single population to filter. A comprehensive numerical parameter study of the evolutionary algorithm was beyond the scope of the current investigation, but will be the focus of a future paper which utilizes a meta-GA [5] and parallel processing to tune the proposed parameters for specific problems. It should be emphasized that a traditional binary representation is still possible, but may not be necessary depending on the problem and the type of translocation crossover and mutation applied.

10. Acknowledgments

The authors would like to acknowledge the support of the National Science Foundation (USA) - Grant Number CMS 9813216 - under the direction of Dr. Priscilla P. Nelson. The research summarized in this paper forms the basis for a larger effort studying the optimal design of structural steel frames using advanced analysis and genetic algorithms carried out at Marquette University. The views expressed in the paper are those of the authors and not necessarily those of the sponsor.

11. References

[1] Adeli, H., Cheng, N.-T. Integrated Genetic Algorithm for Optimization of Space Structures. Journal of Aerospace Engineering 1993; 6:315-28.
[2] Adeli, H., Cheng, N.-T. Augmented Lagrangian Genetic Algorithm for Structural Optimization. Journal of Aerospace Engineering 1994; 7:104-18.
[3] Adeli, H., Cheng, N.-T. Concurrent Genetic Algorithms for Optimization of Large Structures. Journal of Aerospace Engineering 1994; 7:276-96.
[4] Adeli, H., Kumar, S. Distributed Genetic Algorithm for Structural Optimization. Journal of Aerospace Engineering 1995; 8:156-63.
[5] Bäck, T., Evolutionary Algorithms in Theory and Practice. New York: Oxford University Press, 1996.
[6] Balling, R. J. Optimal Steel Frame Design by Simulated Annealing. Journal of Structural Engineering 1991; 117:1780-95.
[7] Balling, R. J. Stochastic Search, Simulated Annealing Algorithm. In: Arora, J. S. (ed.). Guide to Structural Optimization. New York, NY: American Society of Civil Engineers, 1997:347-80.
[8] Banzhaf, Nordin, Keller, Francone, Genetic Programming - An Introduction. San Francisco: Morgan Kaufmann Publishers, Inc., 1998.
[9] Berg, P., Singer, M., Dealing With Genes - The Language of Heredity. Mill Valley: University Science Books, 1992.
[10] Camp, C., Pezeshk, S., Cao, G. Optimized Steel Frame Design Using a Genetic Algorithm. In: Proceedings of the 15th Structures Congress. 1997, Portland, OR, vol. 2. ASCE, 1997:803-7.
[11] Camp, C., Pezeshk, S., Cao, G. Optimized Design of Two-Dimensional Structures Using a Genetic Algorithm. Journal of Structural Engineering 1998; 551-9.


[12] Chen, S.-Y., Situ, J., Mobasher, B., Rajan, S. D. Use of Genetic Algorithms for the Automated Design of Residential Steel Roof Trusses. In: Proceedings of the U.S.-Japan Joint Seminar on Structural Optimization. 1997, Chicago, IL, ASCE, 1997:43-54.
[13] Furuta, H., Masahiro, D., Teteishi, K. Aesthetic Design of Arched Bridges Using Genetic Algorithms. In: Proceedings of the Building to Last: Structures Congress XV. April 13-16 1997, Portland, OR, vol. 2. ASCE, 1997:808-12.
[14] Goldberg, D. E., Genetic Algorithms in Search, Optimization, and Machine Learning. New York: Addison-Wesley, 1989.
[15] Goldberg, D. E., Samtani, M. P. Engineering Optimization via Genetic Algorithm. In: Proceedings of the Ninth Conference on Electronic Computation. 1986, ASCE, 1986:471-82.
[16] Grierson, D. E., Pak, W. H. Optimal Sizing, Geometrical and Topological Design Using a Genetic Algorithm. Structural Optimization 1993; 6:151-9.
[17] Haupt, R. L., Haupt, S. E., Practical Genetic Algorithms. New York: John Wiley & Sons, Inc., 1998.
[18] Holland, J. H., Adaptation in Natural and Artificial Systems (An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence). Cambridge, London: MIT Press, 1975.
[19] Holland, J. H., Hidden Order: How Adaptation Builds Complexity. New York: Addison-Wesley, 1996.
[20] Holland, J. H., Emergence: From Chaos to Order. Reading, Mass.: Addison-Wesley, 1998.
[21] Haug, E. J., Arora, J. S., Applied Optimal Design. New York: John Wiley & Sons, 1979.
[22] Haug, E. J., Arora, J. S. Linear and Quadratic Programming. In: Applied Optimal Design. New York: John Wiley & Sons, 1979:109-55.
[23] Huang, M.-W., Arora, J. S. Optimal Design with Discrete Variables: Some Numerical Experiments. International Journal for Numerical Methods in Engineering 1997; 40:165-88.

[24] Huang, M.-W., Arora, J. S. Performance of a Genetic Algorithm for Structural Design Using Available Sections. In: Proceedings of the 15th Structures Congress. 1997, Portland, OR, vol. 2. ASCE, 1997:793-7.
[25] Jenkins, W. M. Plane Frame Optimum Design Environment Based on Genetic Algorithm. Journal of Structural Engineering 1992; 118:3103-12.
[26] Koumousis, V. K., Georgiou, P. G. Genetic Algorithms in Discrete Optimization of Roof Trusses. Journal of Computing in Civil Engineering 1994; 8:309-25.
[27] Koza, J. R., Genetic Programming II: Automatic Discovery of Reusable Programs. Cambridge: MIT Press, 1994.
[28] Koza, J. R., Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge: MIT Press, 1996.
[29] Leite, J. P. B., Topping, B. H. V. Improved Genetic Operators for Structural Engineering Optimization. Advances in Engineering Software 1998; 29:529-62.
[30] Michalewicz, Z., Genetic algorithms + data structures = evolution programs. New York: Springer-Verlag, 1992.
[31] Mitchell, M., An Introduction to Genetic Algorithms. Cambridge: MIT Press, 1996.
[32] Nha, C. D., Xie, Y. M., Steven, G. P. An Evolutionary Structural Optimization Method for Sizing Problems with Discrete Design Variables. Computers & Structures 1998; 68:419-31.
[33] Pezeshk, S., Camp, C. V., Chen, D. Optimal Design of 2-D Frames Using a Genetic Algorithm. In: Proceedings of the International Workshop on Optimal Performance of Civil Infrastructure Systems. April 12 1997, Portland, OR, ASCE, 1997:155-68.
[34] Rajeev, S., Krishnamoorthy, C. S. Discrete Optimization of Structures Using Genetic Algorithms. Journal of Structural Engineering 1992; 118:1233-50.


[35] Schmit, L. A., Fleury, C. Discrete-Continuous Variable Structural Synthesis Using Dual Methods. AIAA Journal 1980; 18:1515-24.
[36] Sugimoto, H., Bianli, L. Fully-Stressed Design of Framed Structures with Discrete Variables and Application of Genetic Algorithms. In: Proceedings of the U.S.-Japan Joint Seminar on Structural Optimization. 1997, Chicago, IL, ASCE, 1997:180-91.
[37] Timoshenko, S. P., Gere, J. M., Theory of Elastic Stability. Singapore: McGraw-Hill Book Co., 1961.
[38] Wolfram, S., Mathematica, v. 3.0. Champaign, IL: Wolfram Research, Inc., 1998.
[39] Xie, Y. M., Steven, G. P. A Simple Evolutionary Procedure for Structural Optimization. Computers & Structures 1993; 49:885-96.
[40] Xie, Y. M., Steven, G. P. Evolutionary Structural Optimization for Dynamic Problems. Computers & Structures 1996; 58:1067-73.


Table 1: User Defined Constants for the Evolutionary Algorithm.

| Constant   | No Shape Penalty | Medium Shape Penalty | Large Shape Penalty |
|------------|------------------|----------------------|---------------------|
| α          | 0.000            | 0.500                | 1.000               |
| β          | 0.000            | 2.000                | 2.000               |
| ξ          | 1.000            | 1.000                | 1.000               |
| γ1, γ3     | 0.030            | 0.030                | 0.030               |
| γ2         | 0.003            | 0.003                | 0.003               |
| ζ1, ζ2, ζ3 | 0.190            | 0.190                | 0.190               |
| λ1, λ2, λ3 | 3.000            | 3.000                | 3.000               |

Table 2: Data From Exhaustive Search and Medium Shape Penalty Run.

| Quantity       | Exhaustive Search, X | Exhaustive Search, Y | EA (Medium Shape Penalty), X | EA (Medium Shape Penalty), Y |
|----------------|----------------------|----------------------|------------------------------|------------------------------|
| P              | 80,000 kip           | 80,000 kip           | 80,000 kip                   | 80,000 kip                   |
| H              | 50 kip               | 50 kip               | 50 kip                       | 50 kip                       |
| h_top          | 29.9 in              | 27.0 in              | NA                           | NA                           |
| h_base         | 49.1 in              | 54.9 in              | NA                           | NA                           |
| Pcr            | 89,109 kip           | 80,884 kip           | 89,484 kip                   | 85,950 kip                   |
| K_T            | 4564 kip/in          | 42.1 kip/in          | 4335 kip/in                  | 1905 kip/in                  |
| δ1             | 0.204 in             | 0.229 in             | 0.207 in                     | 0.204 in                     |
| δ2             | 1.996 in             | 18.666 in            | 1.956 in                     | 3.001 in                     |
| E              | 29,000 ksi           | 29,000 ksi           | 29,000 ksi                   | 29,000 ksi                   |
| Volume (total) | 831,083 in³          | —                    | 854,178 in³                  | —                            |


[Figure: cantilever column sketch showing the cantilever length L, the segment cross-section dimensions hx and hy, the axial load P, the horizontal loads Hx and Hy applied at the tip, and the x, y, z axes.]

Figure 1: Segmental Cantilever Column Used to Formulate the Optimization Problem in the Present Study.


[Figure: heuristic tree with ten levels (1st through 10th); at each level a node "c" addresses the whole level while child nodes "a" and "b" address that level's hx and hy values; the leaves are hx1, hy1 through hx10, hy10.]

Figure 2: Tree Hierarchy Used for Genetic Representation in Lieu of Traditional Binary String.


Figure 3: Generationally Dependent Nonlinear Rank Based Selection Pressure.


[Figure: flow diagram — for each population: compute the generationally dependent penalty exponents; rank the components of the fitness function (S, δ, K, W); form the individual fitness; rank the individual fitness values; and perform mating by non-linear rank-based selection with carryover (45%), crossover (47.5%), mutation (5%) and elitism (2.5%). The crossover fraction is composed of 23.4% partial-segment homologous, 2.6% partial-segment translocation, 2.0% partial-segment translocation-self, 13.6% full-segment homologous, 1.8% full-segment translocation and 4.0% full-segment translocation-self operations. The elite individuals are then added to the new population.]

Figure 4: Flow Diagram for the Hybrid Genetic Algorithm Employing Translocation Crossover and Non-Linear Rank-Based Selection for Crossover, Carry-Over and Mutation.


[Figure: the initial heuristic trees of the two mated individuals (individual 1 with all variables equal to 1, variable sum 20; individual 2 with all variables equal to 0, variable sum 0) and histograms of the frequency of the post-mating variable sums for each individual (mean variable sums of 11.60 and 8.40, respectively).]

Figure 5: Histograms Depicting the Amount of Changed Genetic Material for Two Individuals Following the Proposed Mating Procedure.


Figure 6: Description of Penalty Application for Deflection and Stiffness (parts (a) and (b)).
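Figure 6 describes how violations of the deflection and stiffness criteria are converted into penalties. The Python listing below sketches only the one-sided character of such penalties, using the 2 in. and 20 in. deflection criteria and the zero tangent-stiffness criterion that appear in Figures 11 and 12; the quadratic form and the scale factors are assumptions and do not reproduce the penalty expressions of the present study.

    def deflection_penalty(deflection, limit, scale=1.0):
        """Zero while the deflection satisfies its limit; grows quadratically beyond it."""
        violation = max(0.0, deflection - limit)
        return scale * violation ** 2

    def stiffness_penalty(tangent_stiffness, scale=1.0):
        """Zero while the tangent stiffness remains positive; penalizes a negative
        (unstable) tangent stiffness quadratically."""
        violation = max(0.0, -tangent_stiffness)
        return scale * violation ** 2

    if __name__ == "__main__":
        print(deflection_penalty(1.5, limit=2.0))    # within the 2 in. criterion -> 0.0
        print(deflection_penalty(3.0, limit=2.0))    # 1 in. beyond the limit -> 1.0
        print(stiffness_penalty(-50.0))              # negative tangent stiffness -> 2500.0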


Figure 7: Ranking of Fitness Function Components Throughout the Evolutionary Process with No Shape Penalty Exponent: α = 0 and β = 0. Panels (a) Generation 0 (the first), (b) Generation 10, (c) Generation 20, (d) Generation 30 and (e) Generation 50 plot the ranked component values fR(Φi) for W, δx, δy, Kx, Ky, Sx and Sy against the sorted individuals.
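Figures 7 through 9 plot each component of the fitness statement after it has been ranked across the population, which is the rank-based conditioning step central to the proposed algorithm. The Python listing below illustrates the general idea: each component (W, δx, δy, Kx, Ky, Sx, Sy) is replaced by its rank among the individuals so that quantities with very different units become directly comparable. Combining the ranks through a plain weighted sum is an assumption adopted only to complete the example.

    def rank_component(values, ascending=True):
        """Rank (1 = best) of each population value for a single fitness component.

        Use ascending=True for quantities to be minimized (e.g. the weight W)
        and ascending=False for quantities to be maximized."""
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=not ascending)
        ranks = [0] * len(values)
        for rank, index in enumerate(order, start=1):
            ranks[index] = rank
        return ranks

    def rank_conditioned_fitness(components, weights):
        """Combine per-component ranks into one fitness value per individual.

        `components` maps a component name to its raw values over the population;
        `weights` maps the same names to weighting factors. Lower is better."""
        n = len(next(iter(components.values())))
        ranked = {name: rank_component(values) for name, values in components.items()}
        return [sum(weights[name] * ranked[name][i] for name in components)
                for i in range(n)]

    if __name__ == "__main__":
        comps = {"W": [900.0, 850.0, 950.0],    # volumes, smaller is better
                 "dx": [1.8, 2.4, 1.2],         # x-deflections, smaller is better
                 "dy": [15.0, 22.0, 9.0]}       # y-deflections, smaller is better
        print(rank_conditioned_fitness(comps, {"W": 1.0, "dx": 1.0, "dy": 1.0}))  # -> [6.0, 7.0, 5.0]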


Figure 8: Ranking of Fitness Function Components Throughout the Evolutionary Process with Medium Shape Penalty Exponent: α = 0.5 and β = 2.0. Panels (a) Generation 0 (the first), (b) Generation 10, (c) Generation 20, (d) Generation 30 and (e) Generation 50 plot the ranked component values fR(Φi) for W, δx, δy, Kx, Ky, Sx and Sy against the sorted individuals.


Figure 9: Ranking of Fitness Function Components Throughout the Evolutionary Process with Large Shape Penalty Exponent: α = 1.0 and β = 2.0. Panels (a) Generation 0 (the first), (b) Generation 10, (c) Generation 20, (d) Generation 30 and (e) Generation 50 plot the ranked component values fR(Φi) for W, δx, δy, Kx, Ky, Sx and Sy against the sorted individuals.


Figure 10: Genetic Algorithm Convergence (volume x 1000 versus generation): (a) variation in the shape penalty exponent, comparing Runs #1 and #2 for α = 0.0/β = 0.0, α = 0.5/β = 2.0 and α = 1.0/β = 2.0 against the exhaustive search; (b) Run #1 with α = 0.5 and β = 2.0, comparing translocation and mutation, no translocation, and no translocation with no mutation against the exhaustive search; (c) Run #2 with α = 0.5 and β = 2.0, comparing the same operator combinations.
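Figure 10 monitors convergence by plotting the best volume obtained so far against the generation number and against the exhaustive-search result. The Python listing below sketches this bookkeeping, assuming a list of per-generation best volumes produced by the algorithm and using the exhaustive-search volume reported in Table 2 as the reference; the sample values are hypothetical.

    EXHAUSTIVE_SEARCH_VOLUME = 831083.0    # in^3, exhaustive-search optimum from Table 2

    def convergence_history(best_volume_per_generation):
        """Running best volume and its ratio to the exhaustive-search optimum."""
        history, best = [], float("inf")
        for generation, volume in enumerate(best_volume_per_generation):
            best = min(best, volume)
            history.append((generation, best, best / EXHAUSTIVE_SEARCH_VOLUME))
        return history

    if __name__ == "__main__":
        sample = [1450000.0, 1100000.0, 905000.0, 860000.0]   # hypothetical per-generation bests
        for generation, best, ratio in convergence_history(sample):
            print("generation %d: best volume = %.0f in^3 (%.3f x exhaustive)"
                  % (generation, best, ratio))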


Figure 11: Generational Variation of Components with All Evolutionary Algorithm Operators: (a) Horizontal Deflection, x-direction (criterion: 2 inches); (b) Horizontal Deflection, y-direction (criterion: 20 inches); (c) Tangent Stiffness, x-axis (criterion: 0.0); (d) Tangent Stiffness, y-axis (criterion: 0.0). Each panel compares the no, medium and large shape penalty cases against generation number.


Figure 12: Generational Variation of Components with Limited Evolutionary Algorithm Operators: (a) Horizontal Deflection, x-direction (criterion: 2 inches); (b) Horizontal Deflection, y-direction (criterion: 20 inches); (c) Tangent Stiffness, x-axis (criterion: 0); (d) Tangent Stiffness, y-axis (criterion: 0). Each panel compares translocation with mutation, no translocation with mutation, and no translocation with no mutation against generation number.


Figure 13: Evolution of the Cantilever Column Design for No Shape Penalty Exponent: α = 0 and β = 0. Panels show generations 1 through 10, 11 through 20, 21 through 30, 31 through 40 and 41 through 50.


Figure 14: Evolution of the Cantilever Column Design for Medium Shape Penalty Exponent: α = 0.5 and β = 2.0. Panels show generations 1 through 10, 11 through 20, 21 through 30, 31 through 40 and 41 through 50.


Figure 15: Evolution of the Cantilever Column Design for Large Shape Penalty Exponent: α = 1.0 and β = 2.0. Panels show generations 1 through 10, 11 through 20, 21 through 30, 31 through 40 and 41 through 50.
