Forum

Struct Multidisc Optim 28, 375–387 (2004) DOI 10.1007/s00158-004-0415-y

Structural optimization complexity: what has Moore’s law done for us?

S. Venkataraman and R.T. Haftka

Abstract Rapid increases in computer processing power, memory and storage space have not eliminated computational cost and time constraints on the use of structural optimization for design. This is due to the constant increase in the required fidelity (and hence complexity) of analysis models. Anecdotal evidence seems to indicate that analysis models of acceptable accuracy have required at least six to eight hours of computer time (an overnight run) throughout the last thirty years. This poses a severe challenge for global optimization or reliability-based design. In this paper, we review how increases in computer power were utilized in structural optimization. We resolve problem complexity into components relating to complexity of analysis model, analysis procedure and optimization methodology. We explore the structural optimization problems that we can solve at present and conclude that we can solve problems with the highest possible complexity in only two of the three components of model, analysis procedure or optimization. We use examples of optimum design of composite structures to guide the discussion due to our familiarity with such problems. However, these are supplemented with other structural optimization examples to illustrate the universality of the message.

Key words: complexity, Moore’s law, optimization

Received: 5 August 2002 / Revised manuscript received: 11 February 2004 / Published online: 30 September 2004 © Springer-Verlag 2004

S. Venkataraman¹ and R.T. Haftka²
¹ Department of Aerospace Engineering and Engineering Mechanics, San Diego State University, San Diego, CA 92182-1308, USA; e-mail: [email protected]
² Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611-6250, USA; e-mail: [email protected]

1 Introduction

In a 1965 article, Gordon Moore – who later co-founded Intel – predicted that the number of devices packed on a silicon chip would double every 18 months. This observation, now known as Moore’s Law (see Schaller 1997), presaged the development of faster and cheaper computing equipment that has revolutionized every aspect of our lives. While debate (Bondhophadhyay 1998) continues on whether the prediction propelled the development or the development followed the prediction, engineers, as consumers of computing devices, have benefited greatly from this rapid growth. Niordson’s (2002) historical perspective on the development of computing equipment during his career, from 1959 to 2001, details the kinds of mechanics problems that became possible to solve with each new advance.

During the thirty-some years that one of us (Haftka) has worked in structural optimization, computing speed and storage capacity have increased a million-fold, at a rate of roughly a 100-fold increase per decade. In 1971, when structural optimization was still struggling to deal with any “real-life” problems, a one-million-factor increase in computational speed would have seemed to solve all problems. After all, engineers were already analyzing complex structures with detailed finite element (FE) models, and the ability to perform one million analyses is sufficient for most optimization problems. Yet in 2004 structural optimization practitioners hear the same refrain from structural analysts that was heard in 1971: a single structural analysis takes several hours or even several days to run on my computer; how can I even afford to think about optimization? This curious state of affairs is even more astounding in view of the remarkable progress in analysis and optimization algorithms. Here we know of no simple trend lines; however, several developments have substantially reduced the cost of structural analysis and optimization.
Improved finite element modeling and improved error estimates for adaptive mesh refinement mean that we can use fewer degrees of freedom (DOF) for a given accuracy. Improved solvers mean that we can solve the resulting systems of equations faster. Based on anecdotal evidence, we estimate that the cost of structural analysis improved by one or two orders of magnitude between 1971 and 2004. Similarly, improved optimization algorithms, approximations and derivative calculations may allow a comparable improvement in the number of analyses required for optimization. For gradient-based optimization, developments in approximation methods have reduced the number of function evaluations required from a quadratic dependence to a linear dependence. Recent developments in approximation methods (e.g. response surface approximations) have further increased algorithmic efficiency. For discrete and global optimization, the improvements due to algorithmic innovations may be higher¹. Combining hardware and algorithmic improvements indicates that it may be at least one billion times cheaper to perform structural optimization today than it was in 1971. Recent developments in parallel computing hardware, and programs that take advantage of such parallel architectures, can further increase computing efficiency by a factor of ten to one hundred.

The paradox of the continuing difficulty of applying optimization to real-life problems is, of course, due to the increasing complexity of the structural models that analysts consider “adequate” for predicting structural response. A few years ago, one of us talked to an engineer from Boeing who was transitioning at the time from linear to nonlinear structural analysis. The engineer expressed wonder that his company had used linear analysis to design its airplanes for so many years, because the results of the linear and nonlinear analyses were so different. Of course, the excellent safety record of Boeing transports is an indication that somehow the linear models did their job.
Detailed information on the problems that it has been possible to solve over time can be traced through the review papers published in the last thirty years [e.g. Venkayya (1978), Vanderplaats (1982, 1993), Sobieszczanski-Sobieski (1986), Levy and Lev (1987), Grandhi (1993), Olhoff (1996), Sobieszczanski-Sobieski and Haftka (1997)]. We use two structural optimization examples to provide a historical perspective of how things have changed in the field. Figure 1 (van Bloemen Waanders et al. 2001) shows that for the design of electronic packages, the number of degrees of freedom (DOF) used to model the structure has increased by a factor of 4000 over 15–20 years. A similar example is found in the design of aircraft structures. At a 1986 NATO Advanced Study Institute², Lecina and Petiau (1987) from Avions Marcel Dassault, France, and Wellen

¹ e.g., Furuya and Haftka (1995), Nagendra et al. (1996)
² Proceedings of the NATO/NASA/NSF/USAF Advanced Study Institute on Computer Aided Optimal Design: Structural and Mechanical Systems, held in Troia, Portugal, June 29–July 11, 1986

Fig. 1 Evolution in analysis model complexity of electronic packaging design problems over the last 20 years. The number of degrees of freedom in the current model, used for determining the forced response by a transient FE analysis (30 seconds), has increased by a factor of 4000 from that used 15–20 years ago. Source: Sandia electronic package optimization with multi-level parallelism (van Bloemen Waanders et al. 2001); transient analysis (300 times), ∼1400 analyses, 2560 processors

and Bartholomew (1987) of the Royal Aircraft Establishment, United Kingdom, presented linear FE analysis models used in gradient-based structural optimization of real aircraft structures. Lecina and Petiau (1987) required 3500 DOF to model a composite wing, using 476 design variables and 1000 constraints (strength failure, buckling and aeroelastic flutter). The wing box optimization reported by Wellen and Bartholomew (1987) used an FE model with 4500 DOF to optimize 294 design variables for strength constraints, for one load case, with linear analysis. By 2001, the FE model used by Jegley et al. (2001) to simulate the ultimate failure test of a full-scale composite wing had 428 000 DOF. The model had to capture the response of the wing after it was subjected to impact damage and several discrete-source damages in the form of a cut (7-inch crack) through the cover panel. In presenting the results of the verification at the 2001 SDM meeting³, the authors apologized that the model prediction of the ultimate failure load was off by 3% from the experimentally observed value. This brings to mind another law from the middle of the last century, Parkinson’s Law, which states that work expands to fill the time available (see Parkinson 1959). The computerized Parkinson’s law (Thimbleby 1993) manifests itself in software applications growing to fill increased computer memory, processing capability and storage space. In terms of structural analysis, it appears that the computer time required for an “adequate” structural analysis has been

³ Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, April 16–19, 2001, Seattle, Washington

fixed at several hours. Essentially, if a single analysis cannot be completed overnight on the available computer, progress in debugging models and improving structures becomes intolerably slow. The requirement that a working model be analyzable in a few hours thus appears to be the main restraint on the structural analyst’s desire for high-fidelity structural analysis. In their paper on the optimal design of a sandwich composite fuselage section, Ley and Peltier (2001) discuss how they had to limit the analysis model size in order to finish a local optimization overnight.

In spite of the increasing appetite of structural analysts for more complex models and analyses, there is no question that structural optimization has benefited from faster computers and more efficient analysis and optimization algorithms. The turning point was in the mid 1980s. In 1981, Holt Ashley still lamented that optimization was used by researchers but not by industry. In 1985, James Bennett and Mark Botkin organized a shape optimization conference at the General Motors Research Laboratory that signaled the growing interest of the automotive industry in structural optimization. Since then, the increasing number of optimization papers reflecting industrial applications attests to the acceptance of structural optimization as a design tool. In addition, in the 1970s most engineering computations were performed on expensive (millions of dollars) mainframes or supercomputers. Today this is uncommon, except for very large-scale simulations [e.g. the electronic packaging design problem of van Bloemen Waanders et al. (2001) at Sandia National Labs]. That is, one benefit of faster computers is that engineering computations and challenging optimizations are now routinely performed on much cheaper desktop computers.
The objectives of this paper are to review the increasing complexity of the problems that structural optimization can deal with and to check how increases in computer power have been utilized. For this purpose we need to identify some factors that contribute to the complexity of a structural optimization problem. This will enable us to (i) classify complexity in a systematic way and (ii) provide a framework that allows designers to trade off the different aspects of complexity.

2 The three axes of complexity

The terms difficulty and complexity are not easy to quantify for a structural analysis problem. However, since the subject of this article is the effect of computing power, we will use computational cost as a surrogate for both. This means that we will ignore other aspects of complexity and difficulty. For some structural optimization problems, complex geometry requires many hours of human involvement in modeling, or difficult design criteria require many hours of human involvement in formulating the optimization problem. For example, a substantial part of the cost of designing and developing new automotive vehicles goes into generating the required discretized FE models. In a parallel to their manufacturing assembly lines, most auto-makers today employ thousands of engineers who man the FE assembly line⁴. These engineers continually create and refine the FE models used for analysis, design and verification. Because these costs are difficult to quantify and are highly dependent on the application, they will not be addressed here⁵.

The three aspects of complexity considered in our discussion relate to the difficulty or complexity of the analysis model in terms of geometry, loading and material (model complexity), the complexity or cost of the analysis procedure (analysis complexity), and the cost or difficulty of the optimization procedure used (optimization complexity). In the next few sections we discuss each complexity aspect and provide some examples to aid the presentation. Figure 2 shows the three-dimensional complexity space that we have identified and will use to classify structural optimization examples. Moving from the origin along any one of the axes indicates increasing complexity in the model, analysis or optimization method used. State-of-the-art optimization problems will have to form a surface in the complexity space. In this article we present examples from the structural optimization literature that fall on this surface, and discuss how the various complexities keep us from reaching the point diagonally opposite the origin, where all three levels of complexity are at their maximum.

Fig. 2 Three complexity axes marked with some examples. Moving from point O along one of the axes towards points A, B or C indicates increasing complexity in the model, analysis or optimization method used. The desire of the structural optimization specialist is to reach point P, where all three levels of complexity are maximal. This appears a difficult if not impossible target, as disciplinary specialists constantly extend the limits of points A, B and C. Therefore state-of-the-art optimization problems will have to form a surface in the box-shaped region, on which moving further along any one direction will require reduction in the other two directions

⁴ One of the reviewers asked whether all that detail is indeed necessary, or whether the mesh refinements we use today are overkill. We contend that the analysts who choose model fidelity and mesh refinement perhaps overdo it to be safe. An extra finely refined mesh requires more computational effort; however, it eliminates some of the time and effort spent on mesh convergence studies

⁵ Shape optimization, and the optimization of problems that undergo large nonlinear deformations (e.g. metal forming and crash simulation), require remeshing or recreating the finite element model as the initial shape changes significantly. Even the best auto-remeshing algorithms require human intervention during the optimization process. Recent developments in meshless analysis methods (Belytschko et al. 1996) will help us overcome the remeshing issue in shape optimization problems. Kim et al. (2001, 2002) demonstrated the use of meshless Galerkin analysis for shape optimization of a connecting rod previously presented by Botkin et al. (1985). The optimization using the meshless technique required no human intervention, whereas the finite element implementation required eight manual mesh creations at intermediate stages because automatic re-meshing could not handle the large changes in shape that were allowed. This, though, comes at a significant computational expense: the meshless technique used by Kim et al. (2001) was about ten times more expensive per analysis than finite element analysis, largely because of the approximate way in which meshless methods implement essential boundary conditions

3 Model complexity

In terms of computer cost, it is convenient to separate the causes of increased cost into two categories: modeling and analysis. The number of DOF in the FE model of a structure is often used as a first indication of the computer cost associated with modeling, but the topology of the structure has an important effect too. The topology determines the bandwidth of the stiffness matrix, which in turn determines the computational cost of solving the system of equations in an FE model. Usually a slender beam-type structure is faster to analyze than a compact three-dimensional structure with the same number of DOF. However, since most applications fall somewhere between these two extremes, we will follow common usage and report the number of DOF in the discretized model and the bandwidth of the stiffness matrix as the primary measures of modeling complexity. Figure 3 shows examples of composite structures with increasing model complexity and a small sample of papers that have performed optimization of such structures.

Fig. 3 Examples of increasing model complexity encountered in the optimal design of laminated composite structures, with references to some example papers. The simplest is a uniform plate under uniform in-plane loading, leading to a point optimization of the laminate stacking sequence for stiffness, strength and stability requirements. Adding stiffeners to increase flexural stiffness increases the model complexity. However, many identical stiffeners and simple loadings allow the use of simple methods with repeating elements. Thin-walled structures often require openings and cutouts, requiring mesh refinement to capture accurately the stress concentration at the cutouts. Typical composite laminates have straight fibers, but with tow placement fibers can curve and model complexity increases. Complexity also increases when micromechanical models are required for accurate failure prediction. Stiffened shell structures such as aircraft fuselage or launch vehicle tanks require more complex models to capture accurately load paths and the non-uniform stress state. Finally, the most detailed models are needed when the composite structure cannot be approximated as a thin plate or shell and requires 3-D FE analysis

379

Fig. 4 Examples of problems at various levels of analysis complexity in design of laminated structures, with sample publications. The simplest analysis is linear static analysis. Eigenvalue problems such as bifurcation buckling and normal mode analysis require more effort for the same model size (DOF) than static analysis. Transient and nonlinear analyses require repeated evaluation of the incremental response and the cost depends on the time or load step and maximum time up to which the response is desired

4 Analysis complexity

In terms of analysis complexity, linear elastic analysis is the simplest, and it is convenient to measure other types of analysis by their cost relative to linear analysis. This is particularly meaningful because more complex analyses, such as nonlinear elastic analysis, linear bifurcation analysis or dynamic response, often use linearization algorithms that perform a linear analysis as a repeated step. History-dependent nonlinear analysis and nonlinear dynamic analysis, such as that required for metal forming and crash simulation, as well as nonlinear crack propagation, are currently the extremes of analysis complexity; they typically require hundreds or thousands of linear-analysis equivalents. Analysis complexity can thus be roughly measured by the number of linear solves (matrix factorizations) performed. Figure 4 shows examples of increasing complexity in analysis methods and provides some examples from composite structure optimization.

5 Optimization complexity

The cost of optimization is typically measured by the number of analyses required. However, this is an imperfect measure for several reasons. First, when gradient-based optimization is used, the cost of calculating the gradients can be the dominating factor, and the cost of a derivative may be much lower than the cost of an analysis (e.g. when an adjoint method is used). Second, a large number of analyses may reflect an inefficient algorithm rather than a difficult problem. Third, when the number of design variables is comparable to the number of degrees of freedom, the cost of the optimization algorithm itself may dominate the cost of the analyses. Finally, there are simultaneous analysis and design (SAND) formulations in which no analyses need to be performed to obtain optimal designs. Because of these shortcomings of the number of analyses as a measure of optimization cost, we will

add two other indicators that correlate with cost: the number of design variables and the type of optimization. The simplest type of optimization is local, gradient-based optimization. Local optimization without gradients (e.g. based on pattern search methods) is more difficult. Global optimization with continuous variables, and discrete and combinatorial optimization, are the most difficult, typically leading to NP-hard problems (that is, the solution time may be expected to rise exponentially with the number of variables).

Another factor in optimization complexity is the effect of uncertainty. Structural design in the presence of uncertainty typically requires calculating the probability of failure, which can itself be formulated as an optimization problem of finding the most probable failure point. Structural optimization under uncertainty therefore becomes the equivalent of a two-level optimization, with greatly increased cost compared to deterministic optimization (for this reason we include uncertainty as a determinant of optimization complexity rather than of analysis complexity). Similarly, multi-objective optimization adds another order of magnitude to the required number of analyses: instead of finding a single optimum, we now need to find all the designs on the Pareto surface. Figure 5 presents examples of optimization complexity encountered in the design optimization of composite structures.

On the three axes of complexity shown schematically in Fig. 2, the most expensive structural design problems involve maximum complexity on all three fronts. For example, this could be the global optimization of a structure with millions of degrees of freedom, requiring nonlinear history-dependent response analysis, to minimize cost, weight and the probability of failure. This describes the design of a realistic model of a car for a high probability of its occupants surviving a crash.
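To make the exponential growth of combinatorial design spaces concrete, here is a minimal sketch; the ply counts and the four-angle orientation set are illustrative, not taken from a specific design problem:

```python
def design_space_size(n_plies: int, k_angles: int = 4) -> int:
    """Number of candidate stacking sequences when each of n_plies
    plies is chosen from k_angles allowed orientations
    (e.g. 0, +45, -45, 90 degrees)."""
    return k_angles ** n_plies

print(design_space_size(10))  # 4^10 = 1048576
print(design_space_size(25))  # about 1.1e15; exhaustive search is hopeless
```

The exponential dependence on the number of plies is why stacking-sequence design is typically attacked with heuristics such as genetic algorithms rather than enumeration.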
At the present time, solving a structural optimization problem with all three measures of complexity at their maxima appears to be a target for the distant future. The reason is that structural analysts constantly improve model fidelity and analysis accuracy as computer resources become increasingly available, so the most complex structural analyses may still require several hours of computer time on the fastest available computers not only in 2004, but also in 2034. The designer will therefore always have to make a judicious compromise between model, analysis and optimization complexity. In the following sections we provide examples of the compromises seen in structural optimization of laminated composite structures. A laminated plate model, a stiffened plate model and a launch vehicle tank model are chosen (Fig. 6) as examples of low, intermediate and high complexity. For each model, the types of optimization possible for a required analysis are discussed.

Fig. 5 Various levels of optimization complexity used in the optimum design of laminated composite structures, with sample publications. The easiest is point optimization of a laminate stacking sequence using lamination parameter graphs (lamination parameters are extensional and flexural stiffness terms that are integral quantities of lamina or laminate properties). Continuous local optimizations are next in complexity. When there are multiple objectives or criteria to be optimized, the designer needs to perform a sequence of optimizations to develop the Pareto surface and identify the design that provides the best compromise between the different objectives. This is even more complicated when the laminates are made of discrete-thickness plies: laminate stacking sequence design with discrete-thickness plies becomes a combinatorial optimization. The most difficult optimization is design under uncertainty, for example reliability-based optimization. The calculation of reliability can itself be formulated as an optimization problem, so that optimization for reliability can be viewed as a two-level optimization problem

Fig. 6 Trade-offs in model, analysis and optimization complexity in the optimization of laminated composite structures. The 2-D graphs under each model show qualitatively the compromise between analysis and optimization complexity. The points on the graphs represent examples of specific previously published problems. The bounding curves of the points in each graph qualitatively represent the most complex problems permitted by the current state of the art

6 Structural optimization with the simplest of models (laminate optimization or point design)

The lowest level of modeling and analysis complexity occurs when a closed-form solution is available. The design of composite laminates is a prime example: design for strength and buckling of simply supported composite plates is amenable to closed-form solution. In designing a laminate, the number of plies, ply angles and ply thicknesses can be used as design variables. Furthermore, because ply orientations are often limited to a small set of angles (e.g. 0◦, ±45◦ and 90◦), the design of the stacking sequence becomes a combinatorial optimization problem with many local optima. For this reason laminate optimizations are widely used as examples in the structural optimization literature. Genetic algorithms have been popular for solving this problem (e.g. Callahan and Weeks 1992). These algorithms require thousands to millions of analyses, but specialized genetic operators tailored to the needs of stacking sequence optimization can be developed (e.g. Le Riche and Haftka 1995). However, genetic algorithms are stochastic, so the result of a single optimization run is somewhat random. This creates a problem in comparing the effectiveness of two competing genetic algorithms: good results can be a matter of chance. A standard way around this problem is to use the reliability of the algorithm after a given number of runs as a measure of effectiveness. For example, if one algorithm finds the optimum of a known problem 40% of the time, and a second finds it 60% of the time, it is tempting to conclude that the second algorithm is better. However, if these percentages (reliabilities) are estimated by running each algorithm only ten times, it is easy to check (Gürdal et al. 1998, p. 211) that the standard deviation of the reliability estimate is about 15.5%, and the difference between 40% and 60% may not be significant.
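The 15.5% figure quoted above follows from the binomial sampling error of a success-rate estimate and is easy to reproduce (a minimal sketch; the function name is ours):

```python
import math

def reliability_sd(p: float, n: int) -> float:
    """Standard deviation of a reliability (success-rate) estimate
    obtained from n independent optimization runs, each of which
    succeeds with probability p (binomial sampling error)."""
    return math.sqrt(p * (1.0 - p) / n)

print(round(reliability_sd(0.4, 10), 3))   # 0.155 -> 40% vs 60% is inconclusive
print(round(reliability_sd(0.4, 100), 3))  # 0.049 -> the difference now stands out
```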
This is another way of saying that when one algorithm succeeds four out of ten times and the other six out of ten times, the difference may still be a matter of chance. When the estimate is based on 100 runs, the standard deviation of the reliability drops to 4.9%, so that the difference between 40% and 60% becomes more meaningful. Developing specialized genetic algorithms for the design of composite structures may therefore require performing genetic optimizations in batches of 100. Our experience has been that this development process can require altogether tens of thousands of genetic optimizations, and hence many millions of analyses. This is currently possible only with the simplest of structural models.

Inexpensive laminate analyses have also been used extensively to explore the effect of uncertainty on the design of composite structures. Kogiso et al. (1996) explored buckling reliability maximization of laminated plates with continuous ply orientation angles. Le Riche and Gaudin (1998) designed laminates for dimensional stability under thermal and hygral expansion, using Monte Carlo simulation of the variability in material properties. Similar design optimization of laminates for reliability was performed by Qu et al. (2001) and Kim and Goo (1993), using continuous ply angles and thicknesses as variables with displacement and strength constraints. Kim and Goo (1993) used fuzzy sets to model the uncertain variables, while Qu et al. (2003) used probability distributions. Lombardi and Haftka (1998) used a worst-case formulation that involves an optimization within an optimization, with stress and displacement constraints. Venter and Haftka (2000) developed a two-species genetic algorithm for this problem: one species, akin to a parasite, finds the worst loading cases, while the other, akin to a host, finds the best designs to carry these loadings. The two-species approach reduced the number of linear analyses required to design a laminate with about 50 plies from four million to between 50 and 300,000.

More recently, the residual strength of laminates has been used as an objective function in laminate design. This is important so that the laminate does not fail catastrophically when the design load is exceeded. The availability of residual strength allows structural design with multiple load paths, such that a local failure of a laminate does not affect the integrity or functionality of the larger structure (e.g. a whole wing or aircraft fuselage). Reserve strength optimizations have been performed for laminates with respect to stress-induced failure and also for postbuckling response. Capturing the damage growth and postbuckling response of a laminate requires history-dependent nonlinear analysis, and this increase in analysis complexity limits the optimizations that can be performed. Shin et al. (1991) presented optimum design of laminated composites for postbuckling response, and Hammer (2000) presented the optimum design of composite laminates undergoing progressive damage.
Multi-objective optimizations are often performed to understand trade-offs or to design for conflicting requirements. Adali et al. (1995) presented the maximization of prebuckling stiffness, buckling load limit and postbuckling performance of composite laminates. Grosset et al. (2001) presented multi-objective optimization for minimum mass and cost with constraints on natural frequency for bi-material composite laminates. An important emerging area of optimization is design for reliability. Multi-objective optimizations of laminated structures for minimum mass and cost and maximum reliability have been presented by Yu et al. (1991) with buckling constraints, and by Qu et al. (2003) with strength constraints. Qu et al. (2003) presented reliability-based optimum design of laminates used for cryogenic tanks; a nonlinear laminate analysis was used to incorporate the effect of temperature-dependent material properties. Calculating a probability of failure of 10⁻⁶ with 1% accuracy required 10⁸ nonlinear laminate analyses. To reduce the analysis cost, the probability of failure was approximated by a response surface model, which was then used in the optimization for reliability maximization.
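A back-of-the-envelope sketch of why so many samples are needed: for crude Monte Carlo, the coefficient of variation of a failure-probability estimate is √((1−p)/(np)), so the required sample count grows as 1/p. (With this formula, 10⁸ samples at p = 10⁻⁶ give a coefficient of variation of about 10%; the exact count depends on the accuracy criterion used.)

```python
import math

def mc_samples(p: float, cov: float) -> int:
    """Crude Monte Carlo sample count needed to estimate a failure
    probability p with a target coefficient of variation cov,
    using the binomial error formula cov = sqrt((1 - p) / (n * p))."""
    return math.ceil((1.0 - p) / (p * cov ** 2))

print(mc_samples(1e-6, 0.10))  # on the order of 1e8 samples for p = 1e-6
```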

7 Structural optimization with models of intermediate complexity (composite stiffened shell or plate optimization)

From the standpoint of structural optimization, the most interesting applications may be those with models of intermediate complexity: unlike the most complex models, these still allow us to perform complex and relatively expensive optimization. Stiffened shell and plate structures are used in a variety of applications, such as pressure vessels, aircraft and space vehicle structures, and marine structures. Stiffeners increase the bending stiffness of thin-walled members (plates and shells) and make them more resistant to buckling. Stiffened shells often exhibit complex nonlinear behavior under compressive loads and must be designed to prevent buckling. The buckling modes can be global in nature or highly localized to a small region of the panel. Curved stiffened panels are sensitive to initial imperfections, which are often stochastic and a signature of the manufacturing process used. In composite shells, the laminate stacking sequence influences the initial imperfections. The analysis must predict the nonlinear prebuckling state of the shell with imperfections for accurate estimation of the stresses and buckling loads. Typical design variables for stiffened panels are the laminate stacking sequences of the different segments (such as the shell wall and the stiffener flanges and web), the cross-sectional shape of the stiffeners, and the stiffener size and spacing. The optimization therefore involves a mixture of discrete variables (e.g. ply thickness and/or orientation, and the number of stiffeners) and continuous variables (stiffener cross-sectional sizes). The simplest optimization is for linear static analysis, where the laminates are fixed and only the stiffener spacing and dimensions are varied; this is often used in the preliminary design phase of most aerospace shell structures.
Since only a small number of load cases are used in preliminary design, stacking sequence design is not performed. The next level of complexity is buckling load maximization, or weight minimization with stress and buckling constraints, which is more expensive. Stiffened-panel FE analysis models used for optimization in the 1970s typically had on the order of 1000 DOF and about 10 design variables (DV), optimized with gradient-based methods for linear static and buckling problems. Examples of stiffened panel optimization work from the late 1970s and early 1980s are those presented by Stroud and Agranoff (1976, 1977) and Bushnell (1987). These designs optimized 10 to 20 variables with gradient search techniques, using simple analysis models such as finite strip and shell-of-revolution models. The panels were designed for a small number of load cases subject to constraints on buckling (linear bifurcation) and stress.

The computational effort for the FE analysis is proportional to the bandwidth of the stiffness matrix. However, for most practical cases it is reasonable to assume that the computational effort for analysis of a model is proportional to DOF·√DOF. (This is based on the assumption that in a 2-D shell/plate type structure having a mesh with n × n nodes, the number of DOF is proportional to n² and the bandwidth is proportional to n.) If the optimization problem is convex and well behaved, the gradient search requires on the order of DV line searches. It is reasonable to assume these are performed using finite difference sensitivities, so the total number of analyses per optimization is proportional to DV². An FE model of acceptable accuracy for stiffened panel analysis today is of the order of 40 000 DOF, a factor of 40 higher than in the 1970s. This represents a factor of 250 ≈ 40√40 increase in computational effort due to the increase in model complexity. However, the analyses today are mostly nonlinear, requiring multiple linear analyses (Newton–Raphson iterations); we estimate a factor of 100 increase in effort due to the greater analysis complexity. If by Moore’s law computers today are at least a million times faster than those used in the 1970s, the number of analyses possible using current-day models and nonlinear analysis is about 40 [= 1 000 000/(250 × 100)] times larger than then (more if we consider algorithmic improvements). In terms of computing, this implies we can afford optimization of stiffened panels that requires 4000 nonlinear analyses or 400 000 linear analyses. This improvement allows a substantial increase in the number of design variables for gradient-based optimization, or some global or discrete optimization, but not global multi-objective reliability-based optimization. Of the large collection of papers on stiffened panel optimization, only a small number report on nonlinear optimization of stiffened shell structures.
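The accounting in the preceding paragraph can be restated as explicit arithmetic; the factors below are the paper's rough estimates, not measurements:

```python
# Rough factors from the text, restated as arithmetic.
dof_growth = 40_000 / 1_000            # model size: 1970s -> 2004
model_factor = dof_growth ** 1.5       # analysis cost ~ DOF * sqrt(DOF)
analysis_factor = 100                  # nonlinear (Newton-Raphson) vs linear
moore_factor = 1_000_000               # ~30 years of Moore's-law speedup

# Net gain in affordable analyses relative to the 1970s.
analyses_gain = moore_factor / (model_factor * analysis_factor)

print(round(model_factor))    # ~253, the "factor of 250"
print(round(analyses_gain))   # ~40 times more analyses than in the 1970s
```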
For example, Antonio (1999) presented the optimization of a composite stiffened spherical shell with nonlinear analysis. The optimization determined optimal ply orientations and thicknesses and stiffener dimensions (18 design variables) using a gradient-based optimizer with a model of about O(10³) DOF. Optimal design of unstiffened shell structures using nonlinear FE analyses has been reported by others, e.g. Lee and Hinton (2000) and Moita et al. (2000). With linear buckling and stress analysis, it is possible to perform more complex optimizations. For example, Nagendra et al. (1996) used a genetic algorithm (GA) for the design of a blade-stiffened panel for strength and buckling. The design variables controlled the stacking sequence of the blade and the skin and the height of the blade. They used the PASCO finite strip analysis code (Stroud and Anderson 1981), performing about 3000 analyses per optimization, which at the time (1994) required about 24 hours of run time on an IBM RISC machine. The reliability of the algorithm was poor, so that at least ten runs were required for good results. These ten days of computer time in 1994 can probably be compressed to a few hours in 2004. Similar GA-based optimization of grid-stiffened panels with strength and buckling constraints was published by Jaunky et al. (1998). Crossley and Laananen (1996) presented genetic optimization of stiffened panels in which they maximized the energy absorption under crash loads. The examples reported do not appear to reflect what is possible with the available computer power, because the problems reported in the literature are solved on personal computers and low-end workstations. In fact, commercial software for design and analysis of stiffened shell structures, such as COSTADE (Mabson et al. 1994), VICONOPT (Butler and Williams 1992) and PANDA2 (Bushnell 1987), still relies on less expensive analysis methods, including closed-form solutions, Rayleigh–Ritz approximations, models specialized to shells of revolution, and finite strip analysis. For stiffened composite panels, methods such as shell-of-revolution analysis and finite strip methods produce models that may be equivalent to FE models with several dozen to a few hundred degrees of freedom. These models can then be used for expensive optimization involving many thousands of linear analyses or linear bifurcation analyses. Alternatively, less expensive optimization involving hundreds of nonlinear analyses is also possible. The present authors (Lamberti et al. 2000, 2003) used the PANDA2 program, which employs a collection of simple models based on analytical Rayleigh–Ritz and shell-of-revolution solutions, to perform a design trade study to choose the best stiffener design. The nonlinear prebuckling deformations and the ensuing load redistribution due to the presence of initial imperfections were included in the analysis. The global optimization of the different stiffened panel designs required a minimum of 85 000 analyses.
Using an FE model that could capture all relevant buckling modes (local and global) and accurately predict stresses would not have permitted us to perform global optimization. An example of optimization problems requiring expensive analysis is found in sheet metal forming. Kim et al. (2001) optimized the shape of the die and punch for a metal forming problem to minimize the springback after deep drawing of the plate and the thickness variations of the stamped part. The analysis was performed using a meshless analysis model with 186 particles (reproducing kernel particle method). The authors developed and implemented design sensitivity calculations for this elasto-plastic deformation problem. The loading rate was kept very small so that the analysis could be considered quasi-static. A Newton–Raphson method was used for the nonlinear analysis with 300 time steps. The CPU time needed for each analysis was 2843 seconds, and calculating the sensitivities (18 variables) required 2350 seconds on a single-processor Hewlett-Packard S-class workstation. The gradient-based optimization required 8 iterations. It is now possible to solve this optimization problem overnight on a high-end personal computer. This is made possible by the availability of analytical gradients and the use of a meshless analysis method that does not suffer from the mesh distortion problems that FE-based optimization would have to deal with.
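Each quasi-static load step in such a simulation wraps a Newton–Raphson iteration on the nonlinear equilibrium equations. A minimal scalar sketch follows (our generic illustration, not the meshless formulation of the study above):

```python
def newton_raphson(residual, tangent, u0, tol=1e-10, max_iter=50):
    """Solve residual(u) = 0 by Newton-Raphson, here for a scalar u.
    In nonlinear FE each iteration instead solves K_t * du = -r for
    the displacement update du."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            return u
        u -= r / tangent(u)
    raise RuntimeError("Newton-Raphson did not converge")

# Toy hardening "spring": solve k*u + c*u**3 = f for the displacement u.
k, c, f = 10.0, 4.0, 30.0
u = newton_raphson(lambda x: k*x + c*x**3 - f,
                   lambda x: k + 3*c*x**2, u0=1.0)
```

A full analysis repeats this solve at every one of the (here, 300) load/time steps, which is why a nonlinear analysis multiplies the cost of a linear one.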

8 Structural optimization with highly complex models (composite fuselage, aircraft wing and launch vehicle design)

In this section, we review optimization of structural models that have a large number of DOF. Typical examples from the aerospace discipline include wing design, detailed fuselage design and launch vehicle design. In structural optimization of composite wings, the number of DOF in the FE models and the number of design variables have gradually increased. Aerospace companies do not readily divulge information on their design models and capabilities, but papers from universities and government laboratories provide some information. Schmit and Miura (1976) reported strength optimizations of swept and delta wings with 120 and 103 DOF, respectively. The largest optimization problems had 32 design variables for the swept wing and 40 design variables for the delta wing. Haftka and Starnes (1976) optimized a composite wing for minimum weight subject to stress and displacement constraints, with about 200 DOF, 298 elements and 146 design variables, using 1708 CPU seconds of a CDC 6000 computer. A few years later, Starnes and Haftka (1979) presented a minimum wing weight optimization for buckling, strength and displacement constraints using 413 elements. The optimization required a total of 45 analyses (with analytical gradients) to optimize 33 design variables, and up to 1478 s of CPU time on a CDC CYBER 175 computer. At the 1986 NATO Advanced Study Institute meeting, Lecina and Petiau (1987) presented a wing box design problem with 476 design variables using an FE model with 3500 DOF and a total of 1000 constraints (stress failure, buckling and aeroelastic flutter). Vanderplaats (1993) presented a wing box optimization problem with 11 400 DOF, 125 design variables and 12 000 design constraints.
More recently, Vanderplaats (2002) performed a wing optimization with 1251 design variables using a model with almost 50 000 DOF, subject to a total of 4800 constraints arising from eight load cases. The optimization using the modified method of feasible directions implemented in DOT required 2870 CPU seconds, while a SUMT (penalty function) implementation required only 345 seconds on a low-end desktop SGI workstation. Multidisciplinary design of aircraft structures increases the computational complexity of the analyses. Walsh (2001) presented the optimization of an HSCT aircraft that required structural, aerodynamic and flutter analyses. The structural model used for the design had 40 000 DOF. The optimization was limited by the analysis cost, since a single iteration between the aerodynamic and structural analyses required seven hours (wall time) for load convergence. Parallel implementation of the analysis reduced this to two to three hours. However, at two to three hours per analysis it was still impossible to optimize a large number of variables. In the automotive industry, models with a few hundred thousand DOF are routinely used in optimizing for noise, vibration and harshness (NVH) and crash performance. Sobieszczanski-Sobieski et al. (2001) used a 390 000 DOF model for optimization of NVH and crash performance. NVH performance evaluations required normal mode (eigenvalue) analysis and static analysis of a model with 68 000 shell elements. Crash analysis was performed by explicit FE methods using 12 000 shell elements. A linear response surface approximation was created for the crash response using 21 crash simulations (for 20 design variables). The CPU time (and approximately the wall time) for a single analysis on a single processor of an SGI Origin 200 computer was 454 hours. Using 12 processors reduced the wall time for a single analysis to 24–38 hours, depending on the type of data storage option chosen. The analyses for fitting the response surface approximation required 24 hours using 252 processors (12 processors for each of the 21 crash analyses). A similar study on crash optimization by Craig et al. (2002) demonstrated the usefulness of response surface approximations in performing expensive optimizations. In his 1970 book on optimization methods, Fox remarked on the choice of a method: “For very large, ill-conditioned problems (200 variables or more) with no gradients available, one is likely to need divine assistance, . . . ”.
However, developments in optimization algorithms, particularly in global optimization and discrete (or integer) variable optimization, coupled with the availability of faster computers with larger memory and storage, have made it possible to solve such large problems, for which gradients are difficult to obtain. With analytically derived gradients it is possible to perform optimization with a very large number (100 000 or more) of design variables (e.g. topology optimization) on models that have as many as 1 000 000 DOF. It has become common in keynote addresses for engineering managers to talk of a new paradigm in computational design. James Renton from the Boeing Company (Renton 2001), commenting on the future of aerospace structures, said, “So what does the future look like in 10 to 20 years? Simulation/analysis is done over an array of networked machines. 3D simulation is the standard way of doing it. Fluids/structures/and material behavior are simulated all at once from first principles. Computing speed allows us to look at more and more detail.” This is an indication that analysis models will continue to gain in complexity for years to come. Optimization with such models will therefore remain a moving target. At present, even maximizing two components of complexity is difficult. The crash design we discussed is one example of optimization problems with complexity in two components (Point 1, Fig. 2). With the use of approximations and analytical sensitivities it is possible to perform such optimizations. However, the authors are not aware of problems that represent Points 2 and 3 in Fig. 2, which combine maximal model and analysis complexity, respectively, with optimization complexity. Computing cost has limited the use of global and local design optimization and of multidisciplinary design optimization for large industrial applications.
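The response surface device used in the crash study above generalizes readily: with n design variables, a linear surrogate needs only n + 1 simulations (21 runs for 20 variables). A minimal sketch with hypothetical data (our illustration, not the cited implementation):

```python
def fit_linear_rs(X, y):
    """Fit the linear response surface y = b0 + sum(b_i * x_i) through
    n+1 sample points (the minimum number of analyses for n design
    variables), by Gaussian elimination with partial pivoting."""
    m = len(X[0]) + 1                                    # n + 1 coefficients
    M = [[1.0] + list(x) + [yi] for x, yi in zip(X, y)]  # augmented system
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            fac = M[r][col] / M[col][col]
            for c in range(col, m + 1):
                M[r][c] -= fac * M[col][c]
    b = [0.0] * m
    for r in range(m - 1, -1, -1):
        b[r] = (M[r][m] - sum(M[r][c] * b[c] for c in range(r + 1, m))) / M[r][r]
    return b

# Two design variables, three "simulations" of y = 5 + 2*x1 - x2.
b0, b1, b2 = fit_linear_rs([(0, 0), (1, 0), (0, 1)], [5.0, 7.0, 4.0])
```

The resulting surrogate is cheap enough to call thousands of times inside the optimizer, in place of a crash simulation costing hundreds of CPU hours.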

9 Concluding remarks

We have presented a system for classifying complexity in optimization problems based on model size, analysis procedure, and optimization size and methodology. The developments in computer technology predicted by Moore’s law have given us a one-million-fold increase in computer speed and memory over the last thirty years. A review of optimization problems being solved at present indicates that we have benefited from this increase in computer power. Improvements in analysis methods and optimization algorithms, particularly approximation methods, have also played a significant role in extending the limits of what we can optimize. However, we still cannot solve optimization problems that have maximal complexity in more than two of the three components. Solving optimization problems with maximal complexity in all components will remain a distant and almost impossible target as long as the maximal complexity of the components continues to increase.

Acknowledgements This work was supported by grants from NASA Langley Research Center (NAG-1-02042 and NAG-1-02067) and the Air Force Office of Scientific Research (AFOSR F49620-02-1-0070).

References

Adali, S.; Richter, A.; Verijenko, V.E. 1995: Multiobjective Design of Laminated Cylindrical Shells for Maximum Pressure and Buckling Load. Microcomputers in Civil Engineering 10, 269–279
Antonio, C.A.C. 1999: Optimisation of geometrically nonlinear composite structures based on load-displacement control. Composite Struct 46, 345–356
Ashley, H. 1982: On Making Things the Best – Aeronautical Uses of Optimization. J Aircr 19(1), 5–28
Backlund, J.; Ishby, R. 1988: Shape Optimization of Holes in Composite Shear Panels. In: Rozvany, G.I.N.; Karihaloo, B.L. (eds.), Structural Optimization. Kluwer, 9–16
Belytschko, T.; Krongauz, Y.; Organ, D.; Fleming, M.; Krysl, P. 1996: Meshless methods: An overview and recent developments. Comput Methods Appl Mech Eng 139, 3–47

Bennett, J.A.; Botkin, M.E. 1985: Structural Shape Optimization with Geometric Description and Adaptive Mesh Refinement. AIAA J 23(3), 458–464
Bennett, J.A.; Botkin, M.E. 1986: The Optimum Shape: Automated Structural Design. General Motors Research Laboratories Symposia Series, New York: Plenum Press
Bondyopadhyay, P.K. 1998: Moore’s Law governs the silicon revolution. Proc IEEE 86(1), 78–81
Botkin, M.E.; Yang, R.J.; Bennett, J.A. 1986: Shape Optimization of Three Dimensional Stamped and Solid Automotive Components. New York: Plenum Press
Bushnell, D. 1987: PANDA2 – program for minimum weight design of stiffened, composite, locally buckled panels. Comput Struct 25(4), 469–605
Butler, R.; Williams, F.W. 1992: Optimum Design Using VICONOPT, a Buckling and Strength Constraint Program for Prismatic Assemblies of Anisotropic Plates. Comput Struct 43(4), 699–708
Callahan, K.J.; Weeks, G.E. 1992: Optimum design of composite laminates using genetic algorithms. Composites Eng 2(3), 149–160
Chamis, C.C.; Murthy, P.L.N.; Gotsis, P.K.; Mital, S.K. 2000: Telescoping composite mechanics for composite behavior simulation. Comput Methods Appl Mech Eng 185(2–4), 399–411

Craig, K.J.; Stander, N.; Doodge, D.A.; Varadappa, S. 2002: Multidisciplinary Design Optimization of Automotive Crashworthiness and NVH Using Response Surface Methods. AIAA Paper No. 2002-5607, Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, 4–6 September 2002, Atlanta, Georgia
Crossley, W.A.; Laananen, D.H. 1996: Genetic Algorithm Based Optimal Design of Stiffened Panels for Energy Absorption. Proceedings of the American Helicopter Society 52nd Annual Forum, Washington, D.C., June 4–6, pp. 1367–1376
Falzon, B.G.; Steven, G.P.; Xie, Y.M. 1996: Shape optimization of interior cutouts in composite panels. Struct Optim 11, 43–49
Fu, W.; Biggers, S.B.; Latour, R.A. 1998: Design optimization of a laminated composite femoral component for hip joint. J Thermoplastic Composite Mater 11(2), 99–112
Furuya, H.; Haftka, R.T. 1995: Placing Actuators on Space Structures by Genetic Algorithms and Effectiveness Indices. Struct Optim 9, 69–75
Graesser, D.L.; Zabinsky, Z.B.; Tuttle, M.; Kim, G.I. 1993: Optimal design of a composite structure. Composite Struct 24, 273–281
Grandhi, R. 1993: Structural Optimization with Frequency Constraints – A Review. AIAA J 31(12), 2296–2303
Grindeanu, I.; Kim, N.H.; Choi, K.K.; Chen, J.S. 2002: CAD-based shape optimization using a meshfree method. Concurrent Eng: Res Appl 10(1), 55–66
Groenwold, A.A.; Snyman, J.A.; Stander, N. 1996: Modified Trajectory Method for Practical Global Optimization Problems. AIAA J 34(10), 2126–2131
Grosset, L.; Venkataraman, S.; Haftka, R.T.; Rastogi, N. 2001: Genetic optimization of two-material composite laminates. Proceedings of the American Society for Composites 16th Technical Conference, September 9–12, Blacksburg, Virginia, USA
Gurdal, Z.; Haftka, R.T. 1988: Automated Design of Composite Plates for Improved Damage Tolerance. In: Whitcomb, J.D. (ed.), Composite Materials: Testing and Design (Eighth Conference), ASTM STP 972, American Society for Testing and Materials, Philadelphia, pp. 5–22
Gurdal, Z.; Haftka, R.T.; Hajela, P. 1998: Design and Optimization of Laminated Composite Structures. New York: Wiley
Ha, S.K.; Jeong, J.Y. 1996: Design Optimization of Hip Prosthesis of Thick Laminated Composites by Developing Finite Element Method and Sensitivity Analysis. KSME J 10(1), 1–11
Haftka, R.T.; Starnes, J.H. Jr. 1976: Applications of a quadratic extended interior penalty function for structural optimization. AIAA J 14(6), 718–724
Haftka, R.T.; Starnes, J.H. Jr. 1988: Stiffness Tailoring for Improved Compressive Strength of Composite Plates with Holes. AIAA J 26(1), 72–77
Hajela, P.; Shih, C.-J. 1989: Optimal Design of Laminated Composites Using a Modified Mixed Integer and Discrete Programming Algorithm. Comput Struct 32(1), 213–221
Hammer, V.B. 2000: Optimization of fibrous laminates undergoing progressive damage. Int J Numer Methods Eng 48, 1265–1284
Hu, H.T.; Wang, S.S. 1992: Optimization for buckling resistance of fiber composite laminate shells with and without cutouts. Composite Struct 22, 3–13
Hyer, M.W.; Charette, R.F. 1991: Use of curvilinear fiber format in composite structural design. AIAA J 29, 1011–1015
Jaunky, N.; Knight, N.F. Jr.; Ambur, D.R. 1998: Optimal design of general stiffened composite circular cylinders for global buckling with strength constraints. Composite Struct 41, 243–252
Jegley, D.C.; Bush, H.G.; Lovejoy, A.E. 2001: Structural Response and Failure of a Full Scale Stitched Graphite-Epoxy Wing. AIAA Paper No. 2001-1334, Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, April 16–19, 2001, Seattle, Washington
Kassapoglou, C. 1997: Simultaneous cost and weight minimization of composite-stiffened panels under compression and shear. Composites – Part A 28A, 419–435
Kassapoglou, C.; Dobyns, A.L. 2001: Simultaneous cost and weight minimization of postbuckled composite panels under combined compression and shear. Struct Multidisc Optim 21, 372–382
Katz, Y.; Haftka, R.T.; Altus, E. 1989: Optimization of fiber directions for increasing the failure load of a plate with a hole. Proceedings of the ASC 4th Technical Conference on Composite Materials, October 3–5, Blacksburg, Virginia, pp. 62–71
Khot, N.S.; Venkayya, V.B.; Berke, L. 1976: Optimum Design of Composite Structures with Stress and Displacement Constraints. AIAA J 14(2), 131–132
Kim, N.H.; Choi, K.K.; Botkin, M.E. 2003: Numerical method for shape optimization using meshfree method. Struct Multidisc Optim 24(6), 418–429
Kim, N.H.; Choi, K.K.; Chen, J.S. 2001: Die Shape Optimization of Sheet Metal Stamping Process Using Meshfree Method. Int J Numer Methods Eng 51, 1385–1405
Kim, S.J.; Goo, N.S. 1993: Optimal design of laminated composite plates in a fuzzy environment. AIAA J 31(3), 578–583
Kogiso, N.; Shao, S.; Murotsu, Y. 1997: Reliability-based optimum design of symmetric laminated plate subject to buckling. Struct Optim 14, 184–192
Lamberti, L.; Venkataraman, S.; Haftka, R.T.; Johnson, T.F. 2000: Comparison of Preliminary Designs of Stiffened Panels Optimized Using PANDA2 for Reusable Launch Vehicle Propellant Tanks. AIAA Paper No. 2000-1657, Proceedings of the 41st AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Atlanta, Georgia, USA
Lamberti, L.; Venkataraman, S.; Haftka, R.T.; Johnson, T.F. 2003: Preliminary Design Optimization of Stiffened Panels Using Approximate Analysis Models. Int J Numer Methods Eng 57, 1351–1380
Lecina, A.; Petiau, C. 1987: Advances in optimal design with composite materials. In: Mota Soares, C.A. (ed.), Computer Aided Optimal Design: Structural and Mechanical Systems, NATO ASI Series, Vol. F27, Berlin Heidelberg New York: Springer, pp. 943–953
Lee, S.J.; Hinton, E. 2000: Dangers inherited in shells optimized with linear assumptions. Comput Struct 78, 473–486

LeRiche, R.; Haftka, R.T. 1995: Improved Genetic Algorithm for Minimum Thickness Composite Laminate Design. Composites Eng 5(2), 143–161
LeRiche, R.; Gaudin, J. 1998: Design of dimensionally stable composites by evolutionary optimization. Composite Struct 41, 97–111
Levy, R.; Lev, O.E. 1987: Recent Developments in Structural Optimization. J Struct Eng 113(9), 1939–1962
Ley, R.; Peltier, A. 2001: Optimal sizing of a composite sandwich fuselage component. Vehicle Des 25(1–2), 89–114
Lombardi, M.; Haftka, R.T. 1998: Anti-optimization technique for structural design under load uncertainties. Comput Methods Appl Mech Eng 157(1–2), 19–31
Mabson, G.E.; Flynn, B.W.; Ilcewicz, L.B.; Graesser, D.L. 1994: The Use of COSTADE in developing commercial aircraft fuselage structures. AIAA-94-1492, Proceedings of the 35th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, April 18–20, Hilton Head, South Carolina, 1384–1393
Mesquita, L.; Kamat, M.P. 1987: Optimization of Stiffened Laminated Composite Plates with Frequency Constraints. Eng Optim 11, 77–88
Miki, M.; Sugiyama, Y. 1993: Optimum Design of Laminated Composite Plates Using Lamination Parameters. AIAA J 31(5), 921–922
Miki, M.; Murotsu, Y.; Murayama, N.; Tanaka, T. 1993: Application of Lamination Parameters to Reliability-Based Stiffness Design of Composites. AIAA J 31(10), 1938–1945
Moita, J.S.; Barbosa, J.I.; Mota Soares, C.M.; Mota Soares, C.A. 2000: Sensitivity analysis and optimal design of geometrically non-linear laminated plates and shells. Comput Struct 76, 407–420
Moore, G.E. 1965: Cramming more components onto integrated circuits. Electronics 38(8), 114–117
Nagendra, S.; Jestin, D.; Gurdal, Z.; Haftka, R.T.; Watson, L.T. 1996: Improved Genetic Algorithm for the Design of Stiffened Composite Panels. Comput Struct 58(3), 543–555
Niordson, F.I. 2001: Early numerical computations in engineering. Appl Mech Rev 54(6), R17–R19
Olhoff, N. 1996: On Optimum Design of Structures and Materials. Meccanica 31, 143–161
Parkinson, C.N. 1959: Parkinson’s Law or the Pursuit of Progress. London: John Murray
Perry, C.A.; Gurdal, Z.; Starnes, J.H. 1997: Minimum Weight Design of Compressively Loaded Stiffened Panels for Postbuckling Response. Eng Optim 28, 175–197
Qu, X.; Venkataraman, S.; Haftka, R.T.; Johnson, T.F. 2003: Deterministic and reliability based optimization of composite laminates for cryogenic environments. AIAA J 41(10), 2029–2036
Renton, W.J. 2001: Aerospace and structures, where are we headed? Int J Solids Struct 38, 3309–3319
Saravanos, D.A.; Chamis, C.C. 1992: Multiobjective shape and material optimization of composite structures including damping. AIAA J 30(3), 805–813
Sargent, P.M.; Ige, D.O.; Ball, N.R. 1995: Design of Laminate Composite Layups Using Genetic Algorithms. Eng Comput 11, 59–69

Schaller, R.J. 1997: Moore’s Law: past, present and future. IEEE Spectrum, June 1997, 53–59
Schmit, L.A.; Farshi, B. 1977: Optimum design of laminated fibre composite plates. Int J Numer Methods Eng 11, 623–640
Schmit, L.A.; Miura, H. 1976: A New Structural Analysis/Synthesis Capability – ACCESS 1. AIAA J 14(5), 661–671

Shin, D.K.; Gurdal, Z.; Griffin, O.H. Jr. 1991: Minimum weight design of laminated composite plates for postbuckling performance. Appl Mech Rev 44(11), part 2, 219–231
Sobieszczanski-Sobieski, J. 1986: Structural optimization: challenges and opportunities. Vehicle Des 7(3–4), 242–263
Sobieszczanski-Sobieski, J.; Haftka, R.T. 1997: Multidisciplinary aerospace design optimization: Survey of recent developments. Struct Optim 14, 1–23
Sobieszczanski-Sobieski, J.; Kodiyalam, S.; Yang, R.J. 2001: Optimization of car body under constraints of noise, vibration and harshness (NVH) and crash. Struct Multidisc Optim 22, 295–306
Starnes, J.H. Jr.; Haftka, R.T. 1979: Preliminary Design of Composite Wings for Buckling, Strength, and Displacement Constraints. J Aircr 16(8), 564–570
Stroud, W.J.; Agranoff, N. 1976: Minimum Mass Design of Filamentary Composite Panels Under Combined Loads: Design Procedure Based on Simplified Equations. NASA-TN-D-8257, Washington, D.C.
Stroud, W.J.; Agranoff, N. 1977: Minimum Mass Design of Filamentary Composite Panels Under Combined Loads: Design Procedure Based on a Rigorous Buckling Analysis. NASA-TN-D-8417, Washington, D.C.
Stroud, W.J.; Anderson, M.S. 1981: PASCO: Structural Panel Analysis and Sizing Code, Capability and Analytical Foundations. NASA-TM-80181, Washington, D.C.
Thimbleby, H. 1993: Computerized Parkinson’s Law. Comput Control Eng 4(5), 197–198
van Bloemen Waanders, B.; Eldred, M.; Giunta, A.; Reese, G.; Bhardwaj, M.; Fulcher, C. 2001: AIAA-2001-1625, Proceedings of the 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, April 16–19, 2001, Seattle, Washington

Vanderplaats, G.N. 1982: Structural Optimization – Past, Present and Future. AIAA J 20(7), 992–1000
Vanderplaats, G.N. 1993: Thirty years of modern structural optimization. Adv Eng Softw 16, 81–88
Vanderplaats, G.N. 2002: Very Large Scale Optimization. NASA/CR-2002-211768, Washington, D.C.
Venkayya, V.B. 1978: Structural Optimization: A Review and Some Recommendations. Int J Numer Methods Eng 13(2), 203–228
Venter, G.; Haftka, R.T. 2000: Two-Species Genetic Algorithm for Design under Worst Case Conditions. Evol Optim (Internet J) 2(1), 1–19
Walsh, J.L.; Weston, R.P.; Samareh, J.A.; Mason, B.H.; Green, L.L.; Biedron, R.T. 2000: Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles, Part 2: Preliminary Results. AIAA-2000-0419, Proceedings of the 38th Aerospace Sciences Meeting and Exhibit, 10–13 January 2000, Reno, Nevada, USA
Walsh, J.L.; Weston, R.P.; Samareh, J.A.; Mason, B.H.; Green, L.L.; Biedron, R.T. 2001: Cryogenic tank structure sizing with structural optimization method. AIAA-2001-1599, Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, April 16–19, 2001, Seattle, Washington
Wellen, H.; Bartholomew, P. 1987: Structural Optimization in Aircraft Construction. In: Mota Soares, C.A. (ed.), Computer Aided Optimal Design: Structural and Mechanical Systems, NATO ASI Series, Vol. F27, Berlin Heidelberg New York: Springer, pp. 955–970
Yang, L.; Ma, Z.K. 1990: Optimum Design Based on Reliability for a Composite Structural System. Comput Struct 36(5), 785–790
Yu, M.; Pochtman, L.; Derevyanko, L.V.; Mormul, N.F. 1990: Multicriterial Optimization of Hybrid Composite Cylindrical Shells Under a Stochastic Combined Loading. Mech Composite Mater 26(6), 801–806