DYNACOMP 2012 1st International Conference on Composite Dynamics

May 22-24 2012, Arcachon, France

Comparison of different techniques for surrogate modeling of stability constraints for composite structures

S. GRIHON (a), S. ALESTRA (b), D. BETTEBGHOR (c), E. BURNAEV (d,e,f), P. PRIKHODKO (d,e,f)

a. ESAZO, Airbus Operations SAS, 316, Route de Bayonne, 31060 Toulouse CEDEX, France, [email protected]
b. Applied Mathematics Group, EADS IW, 18 rue Marius Terce, BP 13050, 31025 Toulouse CEDEX 03, France, [email protected]
c. Aeroelasticity and Structural Dynamics Department, ONERA, 29 avenue de la Division Leclerc, France, [email protected]
d. Intelligent Data Analysis Group, DATADVANCE, Pokrovsky blvd. 3 building 1B, 109028, Moscow, Russia, [email protected], [email protected]
e. Data Analysis and Modeling Lab, Institute for Information Transmission Problems, Bolshoy Karetny per. 19, Moscow, 127994, Russia
f. PreMoLab, Moscow Institute of Physics and Technology, 141700, 9, Institutskii per., Dolgoprudny, Moscow Region, Russia

Abstract: When aircraft composite structures (fuselage, wing panels, etc.) are designed, stability analysis is performed for different geometries, loadings and composites. For laminated composites, the feasibility rules and the discreteness of orientations lead to a tremendous number of combinations that should be analysed in order to prevent instability and fulfil strength criteria. Population-based optimization algorithms are often used to automate this design process. Unlike gradient-based methods, such algorithms require many function evaluations, making the design process intractable due to the significant computational cost of stability analysis. The computational burden can be decreased by using surrogate models. Since many stability criteria are defined as the minimum of several modes, the response function has discontinuities. Therefore surrogate models should adapt to such peculiarities in order to accurately approximate stability criteria depending on composite material properties and loadings.
Two original methods to tackle this behaviour are presented and extensively compared on the problem of approximating Airbus in-house stability tools. These methods use the idea of ensembles of approximators. Both methods provide significantly higher accuracy than conventional methods, and one of them turned out to be significantly more accurate than the other. Therefore, the proposed methods can be used for the construction of accurate surrogate models to replace the computationally expensive stability analyses.

Key Words: Buckling approximation, Mixture of Experts, HDA

(E. Burnaev and P. Prikhodko were partially supported by the Laboratory for Structural Methods of Data Analysis in Predictive Modeling, MIPT, RF government grant, ag. 11.G34.31.0073.)

1 Introduction

Structural optimization of composite thin-walled structures often exhibits large computational times due to repetitive stability analysis. For a composite structure made of thin plates or thin shallow shells (such as an aircraft fuselage), buckling computation is of primary importance, since it is one of the critical sizing constraints when minimizing the weight. This computational burden becomes even more critical when we address laminated composites and when the stacking sequence is optimized [1]. The prescribed orientations and the feasibility rules for stacking sequences make this kind of problem NP-complete. To solve such problems, one often resorts to population-based heuristics


(evolutionary algorithms, particle swarm optimization, etc.) to address the discreteness of the design variables. Such methods require many evaluations to ensure that a reasonably good optimum has been found. Even though more and more computational resources are available today, these resources do not increase in such a way that the optimization of large composite structures including realistic constraints such as blending (ply orientation continuity) can be easily performed. If we think of the maximization of the critical buckling factor for one sole reasonably thick laminated plate (2 × 30 plies for instance), the complexity of the problem is still not treated in a satisfactory way, since in general heuristics never guarantee that the global optimum is found. If we think of an assembly of many plates reinforced by laminated composite stringers, the complexity of the problem grows even faster than exponentially. A realistic fuselage optimization including, say, 10,000 super-stiffeners, 100 load cases, a few super-stiffener design variables (geometric dimensions, stacking sequence) and a few different stability and strength criteria will require more than one million buckling evaluations per optimization iteration (without any sensitivity analysis); it will also require finite element analyses to redistribute internal loads over the overall structure (which also considerably complicates the sensitivity analysis). If a standard buckling analysis lasts a few seconds, say one second, one sole iteration would take more than one week. In case such an optimization is based on gradient methods and requires a few dozen iterations to converge, this amount of computation could still be handled. However, we still cannot guarantee that this optimum is global.
To get to the optimal design, we would need a true mixed-integer nonlinear programming technique (branch-and-bound, Gomory's cuts) to include combinatorial constraints such as blending, resulting in tremendous numbers of evaluations: several billion buckling evaluations even for small structures. The motivation for approximation models to replace standard buckling analysis (performed through Rayleigh-Ritz methods or detailed finite element methods) is therefore not to replace one analysis considered in isolation, but to cope with this huge number of repetitive analyses.

Figure 1. Stiffened panel with Omega-shaped stringers

As we will see in the next section, stability constraints and more specifically buckling computations exhibit derivative discontinuities with respect to the buckling analysis parameters (lamination parameters w.r.t. changes of the stacking sequence, forces w.r.t. changes in the loading conditions, geometric dimensions w.r.t. a fixed profile). These discontinuities are physics-based and rely on the very computation of the critical buckling load factor (or Reserve Factor), formally defined as the smallest positive eigenvalue of a partial differential equation. Moreover, if we think of a more general composite structural optimization where, for instance, the shape of the stringer can change and take different categorical values (T-shaped, I-shaped, ...), the switch from one type of domain to another also leads to discontinuities. The underlying physics leads to derivative discontinuities, but there are also practical reasons that can make not the critical buckling load factor itself but quantities derived from it truly discontinuous. Think for instance of margin policies, rules of thumb based on the art of the engineer that favor one buckling critical


factor over another one for some prescribed values of the parameters. These discontinuities can also derive from programming reasons: a common practice for in-house tools to reduce the number of outputs to check is to consider the minimum of several different buckling modes. To make this clearer, one of the Airbus skill tools, which analyses the stability of a super-stiffener (two halves of thin shallow shells of possibly different curvature radii, thicknesses and stacking sequences, reinforced by a stiffener and simply supported on the frames), gives as an output for stiffener buckling the minimum buckling reserve factor over the different parts of the stiffener (web, flange, foot). These types of buckling essentially depend on the geometric dimensions of the stiffener and its stacking sequence, but also on the in-plane properties of the panels (proportions of orientations) and the forces (loads per unit length) that the two half-panels endure. However, for some specific shapes of stringer, e.g. omega-shaped stringers (see figure 1), this stringer buckling reserve factor also features a reserve factor associated to the buckling of the small panel between the two webs of the stringer: this is typical skin buckling and it essentially depends on the stacking sequences of the panels. The typical inputs that we have for such stability analyses can be divided into three general types:

1. Geometric input. These are the dimensions of the super-stiffener and possibly of the frame around it: panel thicknesses, curvature radii, width of the flange of the stiffener, dimensions of the foot and possibly an extra cap added on top of some subcomponents of the stringer.
2. Loading input. Buckling analysis usually assumes that a pre-stress is applied on the structure to analyze. These loadings usually come from a linear static analysis performed at the whole-structure level.
Panels usually endure in-plane loads (tension, compression and shear) and out-of-plane loads (pressure); stringers usually endure axial force and possibly shear.

3. Stacking sequence input. When designing composite structures, one important source of weight saving over traditional homogeneous isotropic materials such as metals comes from the layout of the material, which can be tailored to meet specific strength or buckling requirements. These extra degrees of freedom usually concern the orientation of plies whenever we address laminates. But there exist equivalent representations: lamination parameters, polar invariants, A and D tensors.

We see that many parameters can be taken as input of our approximation, and their influence might be considerably different. In our application, we consider that these three types of input vary, resulting in a high-dimensional approximation problem (up to 20 input variables). We already saw two essential characteristics of the functions that we want to approximate. First, the output to approximate shows discontinuities when switching from one buckling type to another. Second, the variables explaining the function behavior can change. So a clever strategy to accurately approximate these outputs should not only detect discontinuities but also locally weight the predictive variables, allowing some variables to be more important in some areas and switching to other predictive variables elsewhere.

2 Pitfalls of buckling response approximation

The buckling phenomenon for thin plates and shells has been extensively studied since the beginning of the 20th century. Buckling corresponds to a bifurcation of the system response path. This means that the structure can take several configurations, i.e. there exist several equilibria, and for thin-walled structures such as plates or shells this leads to a sudden large out-of-plane deflection (see figure 2).


Figure 2. Panel shear buckling

The first rigorous equations for large deflections of thin plates were written down by von Karman in 1910. In their first derivation for isotropic homogeneous structures, these equations were a system of two nonlinear partial differential equations of order four. In many cases, these equations are linearized, resulting in a large generalized eigenvalue problem, like the ones classically encountered in structural dynamics (vibrations, etc.). The buckling eigenvalue (or Reserve Factor) is defined as the minimum positive eigenvalue $\lambda$ of the eigenvalue problem $Kw - \lambda Lw = 0$. A classical numerical method to approximate the buckling eigenvalue is the Rayleigh-Ritz method, but standard finite element methods can be used as well. It is worth noting that linearized buckling can only predict the onset of buckling; the behavior of the structure after the bifurcation point has to be computed with nonlinear analysis methods (arc-length methods, Newton methods). This is the subject of postbuckling analysis. The stability tools described in the former section perform many different buckling computations. They not only analyse the stability of the overall super-stiffener assembly but also the stability of subcomponents (flange, web, ...), and they also provide a postbuckling analysis based on von Karman and Koiter theory. As noted in many references, the buckling eigenvalue exhibits discontinuities of its derivative. The main reason is that the critical mode can change depending on the value of the parameters. This phenomenon is known in dynamics as mode switching. Formally, the critical buckling factor is defined as a minimum over several functions, and depending on the parameter values this can lead to a change of the critical factor.
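The formal definition above (smallest positive eigenvalue of $Kw - \lambda Lw = 0$) can be sketched in a few lines, assuming the stiffness matrix K and the geometric stiffness matrix L have already been assembled by a Rayleigh-Ritz or finite element discretization; this is a minimal illustration, not the in-house implementation, and it assumes L is symmetric positive definite so that a symmetric solver applies:

```python
import numpy as np
from scipy.linalg import eigh

def critical_buckling_factor(K, L):
    """Smallest positive eigenvalue lambda of the generalized
    eigenvalue problem K w - lambda L w = 0."""
    eigvals = eigh(K, L, eigvals_only=True)  # sorted ascending
    positive = eigvals[eigvals > 0]
    return positive.min() if positive.size else np.inf

# toy 2x2 example: with L = I the eigenvalues of K are 2 and 5,
# so the Reserve Factor is the smallest positive one, 2.0
K = np.diag([2.0, 5.0])
L = np.eye(2)
rf = critical_buckling_factor(K, L)  # -> 2.0
```

The minimum over several such eigenvalues (one per buckling mode family) is exactly what creates the derivative discontinuities discussed next.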
This phenomenon complicates the approximation, since it is not easy to ensure that we are dealing with the same mode; this is the difficult problem of mode tracking, and criteria such as the Modal Assurance Criterion (MAC) can be used to solve it. However, as the number of parameters grows, this mode tracking simply becomes too difficult to be used. Furthermore, another difficult phenomenon might appear, known as mode veering: as opposed to simple mode switching, in case of veering the associated modes start to interact and abruptly diverge or 'veer' from each other while exchanging modal information [2]. Historically, mode veering was first assumed to be a numerical aberration, but recent experiments exhibit mode veering on real structures. We depicted the difference between mode veering and mode crossing in figure 3 (a) and (b). When assigning a label to the associated modes, one encounters in the veering zone a sudden but continuous change of the MAC criterion. Roughly speaking, the modes switch continuously by combination of the two veering modes. This makes veering quite hard to detect. Besides, there is another difficulty associated with the approximation of modes in our case. Indeed, we assume that we only know the critical mode at some sampled places of the input space. We also assume that we do not know the associated displacement (eigenvector). If the sample points are not dense enough in the zone where the mode changes, we may be unable to tell whether it is a veering or a crossing, or even worse, we may not detect that the critical mode changed at all. In case extra information is known, we can think of different strategies, but in the present paper we assume that


we only have the critical mode with no deeper insight on the response. The difference between the two situations is illustrated on figure 3 (c) and figure 3 (d).

Figure 3. Multimodal response: (a) mode crossing, (b) mode veering, (c) sampled data, (d) available data

3 Description of approximation strategy based on the mixture of experts

We briefly describe here the IMAGE strategy (Improved Metamodeling Approximation through Gaussian mixture of Experts) [3]. We call $x$ the input variable, which is in our experiments multivariate (up to dimension 20), and $y$ the output to approximate. We assume that we have a set $S = \{(x_i, y_i)\}_{i=1}^N$ of $N$ observed data points from a random variable $Z = (X, Y)$, where $X$ is a vector-valued random variable lying in $\mathbb{R}^{d_{in}}$ and $Y$ a random variable lying in $\mathbb{R}$. Instead of solely modeling the marginal law of $X$, we can consider $Z$ as a vector-valued random variable lying in $\mathbb{R}^{d_{in}+1}$ and model its density as a mixture of different probability laws. Once this density has been estimated, we can easily derive the marginal law of $X$. To illustrate our point, we take the example of a Gaussian mixture of $K$ components. We estimate through the EM algorithm the parameters of the $K$ multivariate Gaussians in $\mathbb{R}^{d_{in}+1}$ such that

$(X, Y) \sim \sum_{k=1}^{K} \alpha_k \, \mathcal{N}(\mu_k, \Sigma_k),$

where $(\alpha_k)_{k=1}^K$ are the mixture weights, i.e. $\alpha_k \in [0, 1]$ for all $k = 1, \dots, K$ and $\sum_{k=1}^K \alpha_k = 1$, and $\mathcal{N}(\mu_k, \Sigma_k)$ denotes a multivariate Gaussian distribution with mean $\mu_k \in \mathbb{R}^{d_{in}+1}$ and covariance matrix $\Sigma_k \in \mathbb{R}^{(d_{in}+1) \times (d_{in}+1)}$. Denote also by $\mu_k^X \in \mathbb{R}^{d_{in}}$ and $\Sigma_k^X \in \mathbb{R}^{d_{in} \times d_{in}}$ the $X$-coordinates of the mean vector $\mu_k$ and the covariance matrix of $X$ for Gaussian component $k$, correspondingly. At this point, the Gaussian mixture parameter estimates allow us to perform a clustering of the sample $S = \{(x_i, y_i)\}_{i=1}^N$ by classical application of the standard Bayes rule. A point $(x, y)$ is assigned to the component $k$ that gives the highest posterior probability $P(\kappa = k \mid (X, Y) = (x, y))$, where $\kappa \in \{1, \dots, K\}$ is a random variable associating to each point its corresponding cluster; by computing the posterior probability over all $k$, we assign a region to each point $(x, y)$ by taking the $k$ that maximizes this probability (Maximum A Posteriori approach). We then have $K$ disconnected regions where the function is expected to behave more nicely. Over each region, we build an approximation, which can be a linear or polynomial regression, a radial basis function interpolation or an artificial neural network. Denote by $f_i$ the approximator associated to region $i$. Based on the Gaussian mixture parameter estimates (weights,


mean, covariance matrices), we are able to derive the probability $g_i(x) = P(\kappa = i \mid X = x)$ that a new input point $x$ lies in region $i$. Such a function $g_i(x)$ associates the weight to approximation $i$ in the global recombination; it is called a gating network. In the case of a Gaussian mixture distribution, this gating network can be computed exactly. Denote by

$h(x; \mu, \Sigma) = (2\pi)^{-d_{in}/2} \det(\Sigma)^{-1/2} \exp\!\left(-\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right)$

the probability density function of a multivariate Gaussian; then we have

$g_i(x) = \dfrac{\alpha_i \, h(x; \mu_i^X, \Sigma_i^X)}{\sum_{k=1}^{K} \alpha_k \, h(x; \mu_k^X, \Sigma_k^X)}.$

We see that $\sum_{i=1}^{K} g_i(x) = 1$, i.e. the gating networks form a partition of unity. The final approximation model $\hat{f}(x)$ is $\hat{f}(x) = \sum_{i=1}^{K} g_i(x) f_i(x)$. We depicted the sketch of the algorithm in figure 4.
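As a loose illustration of this pipeline (EM clustering of the joint sample, one local expert per cluster, exact Gaussian gating on the $X$-marginals), here is a minimal sketch on a toy discontinuous response; it uses scikit-learn rather than the IMAGE implementation, plain linear regressions as local experts, and all names are illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.linear_model import LinearRegression
from sklearn.mixture import GaussianMixture

# toy response with a jump at x = 0 (two "buckling modes")
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(400, 1))
y = np.where(x[:, 0] < 0, x[:, 0], 2.0 - x[:, 0])

K = 2
xy = np.hstack([x, y[:, None]])                      # joint (X, Y) sample
gmm = GaussianMixture(n_components=K, random_state=0).fit(xy)
labels = gmm.predict(xy)                             # MAP clustering

# one local expert per cluster
experts = [LinearRegression().fit(x[labels == k], y[labels == k])
           for k in range(K)]

def gating(xq):
    """g_i(x): posterior component weights from the X-marginal densities."""
    dens = np.array([gmm.weights_[k] * multivariate_normal.pdf(
        xq, mean=gmm.means_[k, :1], cov=gmm.covariances_[k, :1, :1])
        for k in range(K)])
    return dens / dens.sum(axis=0)

def predict(xq):
    g = gating(xq)                                   # (K, n)
    p = np.array([e.predict(xq) for e in experts])   # (K, n)
    return (g * p).sum(axis=0)                       # sum_i g_i(x) f_i(x)
```

Because the gating weights come from the fitted mixture rather than from the (unknown) true regime boundary, the recombination can be kept smooth or made nearly discontinuous depending on the component overlap, which is the behaviour sketched in figure 4 (d).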

Figure 4. Sketch of the IMAGE algorithm: (a) original function to approximate and sample, (b) EM clustering and associated law, (c) local experts (approximators) and associated gating network, (d) two different recombinations based on our mixture of experts strategy, smooth and discontinuous, compared with an artificial neural network and the true function

4 Description of HDA

The HDA method (High Dimensional Approximation method) is a part of the so-called Generic Tool for Approximation, a component of the MACROS software toolkit for surrogate modelling and optimization developed by DATADVANCE [4]. The Generic Tool for Approximation allows automatic construction of fast-running data-based surrogate models using best-in-class predictive modeling techniques. The tool includes built-in robustness and accuracy assessment, control of surrogate model smoothness, etc. Adaptive and automatic selection of the best method for a given problem, on the basis of specially designed decision trees, opens up elaborated methods for use by people who interact with problems on the engineering rather than the mathematical level.

4.1 Structure of HDA

The HDA approximator $\hat{f}(x)$ (see also [5] for details) for the considered function $f(x)$ consists of several basic approximators $g_i(x)$, $i = 1, 2, \dots$, which are iteratively constructed and integrated into $\hat{f}(x)$ using a specially elaborated boosting algorithm until the accuracy of the HDA approximator stops increasing. In fact, $\hat{f}(x) = \sum_{k=1}^{B} g_k(x) / B$, where the number $B$ of basic approximators $g_k(x)$, $k = 1, 2, \dots, B$, is also estimated by the boosting algorithm. Each $g_k(x)$, $k = 1, 2, \dots, B$, is trained on a modified sample $(X, \tilde{Y}_k)$, where $X = \{x_i\}_{i=1}^N$, $\hat{Y} = \{y_i - \hat{f}(x_i)\}_{i=1}^N$ and $\tilde{Y}_k$ is some function of $\hat{Y}$ and $\{g_j(x_i)\}_{i=1}^N$, $j = 1, 2, \dots, k-1$, see [6] for details. In turn, the basic approximators are represented as averages $g_k(x) = \sum_{l=1}^{M_k} h_{k,l}(x) / M_k$, $k = 1, 2, \dots, B$, of elementary approximators $h_{k,l}(x)$, $l = 1, 2, \dots, M_k$, obtained using multistart on their parameters. The value of $M_k$ is estimated by the training algorithm of HDA. The elementary approximator model is described in the next section.
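The boosting-plus-multistart structure can be loosely sketched as follows. This is not the HDA algorithm itself: the actual modification $\tilde{Y}_k$ of the training sample, the stopping rule and the weighting are more elaborate (see [6]); this toy version simply trains each basic approximator on the current residuals and averages a few randomly initialized networks per stage:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (300, 1))
y = np.sin(3.0 * x[:, 0]) + 0.05 * rng.standard_normal(300)

def basic_approximator(x, r, n_starts=3):
    """Average of several elementary approximators fitted from
    different random initializations (multistart)."""
    nets = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000,
                         random_state=s).fit(x, r) for s in range(n_starts)]
    return lambda q: np.mean([n.predict(q) for n in nets], axis=0)

# additive residual boosting: each basic approximator is trained on the
# residuals of the current ensemble (a simplification of HDA's scheme,
# where the combination is an average over B stages)
B, ensemble = 3, []
resid = y.copy()
for _ in range(B):
    g = basic_approximator(x, resid)
    ensemble.append(g)
    resid = resid - g(x)

predict = lambda q: sum(g(q) for g in ensemble)
```

The multistart average damps the variance of individual network fits, while the boosting loop lets later stages correct what earlier stages missed.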

4.2 Elementary Approximator Model

A linear expansion in parametric functions from a dictionary is used as the elementary approximator model in HDA, i.e. the elementary approximator has the form $h(x) = \sum_{j=1}^{p} \alpha_j \psi_j(x)$, where $\psi_j(x)$, $j = 1, 2, \dots, p$, are parametric functions. Three main types of parametric functions are used, namely:

1. Sigmoid basis functions $\psi_j(x) = \sigma\bigl(\gamma_j \bigl|\sum_{i=1}^{d_{in}} \theta_{j,i} x^i\bigr|\bigr) \, \mathrm{sign}\bigl(\sum_{i=1}^{d_{in}} \theta_{j,i} x^i\bigr)$, where the parameter $\gamma_j$ is adjusted independently of the parameters $\theta_j = (\theta_{j,1}, \dots, \theta_{j,d_{in}})$, $\sigma(z) = (e^z - 1)/(e^z + 1)$ and $x = (x^1, x^2, \dots, x^{d_{in}})$. For big negative $\gamma_j$ the function $\psi_j(x)$ behaves like a step function.
2. Gaussian functions $\psi_j(x) = \exp\bigl(-\sum_{i=1}^{d_{in}} \gamma_{j,i}^2 (x^i - d_{j,i})^2\bigr)$.
3. Linear functions $\psi_j(x) = x^j$, $j = 1, 2, \dots, d_{in}$.

Thus the index set $J = \{1, \dots, p\}$ can be decomposed into three parts $J = J_{lin} \cup J_{sigmoid} \cup J_{gf}$, where $J_{lin} = \{1, 2, \dots, d_{in}\}$ corresponds to the linear part (linear functions), and $J_{sigmoid}$ and $J_{gf}$ correspond to the sigmoid and Gaussian functions, respectively. Therefore, in order to fit the elementary approximator model to the data, we should choose the number of functions $p$, their types, and estimate their parameters by minimizing the mean-square error on the training set.
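To make the three families concrete, here is a small sketch that evaluates such a dictionary at a batch of points. It is illustrative only: the sign-decomposed sigmoid is simplified to a plain $\sigma$ applied to a linear form, and all shapes and names are invented for the example:

```python
import numpy as np

def sigma(z):
    """sigma(z) = (e^z - 1) / (e^z + 1), which equals tanh(z / 2)."""
    return np.tanh(z / 2.0)

def dictionary(x, theta, gamma_sig, gamma_gf, d_gf):
    """Evaluate linear, sigmoid and Gaussian basis functions at x (N, din).
    theta: (p_sig, din) directions; gamma_sig: (p_sig,) steepness;
    gamma_gf, d_gf: (p_gf, din) widths and centers."""
    sig = sigma(gamma_sig[None, :] * (x @ theta.T))            # (N, p_sig)
    gf = np.exp(-((((x[:, None, :] - d_gf[None, :, :])
                    * gamma_gf[None, :, :]) ** 2).sum(-1)))    # (N, p_gf)
    return np.hstack([x, sig, gf])  # columns ordered J_lin | J_sigmoid | J_gf

# h(x) = sum_j alpha_j psi_j(x) is then a dot product with the weights alpha
x = np.zeros((5, 2))
F = dictionary(x, np.ones((3, 2)), np.ones(3), np.ones((4, 2)), np.zeros((4, 2)))
# at x = 0: linear columns are 0, sigmoids sigma(0) = 0, Gaussians exp(0) = 1
```

Fitting the model then reduces to choosing the column counts and minimizing the mean-square error of `F @ alpha` against the training outputs.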

4.3 Training of Elementary Approximator Model

Training of the elementary approximator model consists of the following steps:

1. Parameters of the functions from the parametric dictionary are initialized, see the description of the algorithm in [7]. The initial number of functions is selected with redundancy.
2. Model selection is done, i.e. the values of $p$, $\mathrm{card}(J_{sigmoid})$ and $\mathrm{card}(J_{gf})$ are estimated ($\mathrm{card}(A)$ denotes the cardinality of the set $A$), and redundant functions are removed from the expansion, see the description of the algorithms in [8], [9], [10].
3. Parameters of the approximator are tuned using a Hybrid Algorithm based on regression analysis and gradient optimization methods. On each step of this algorithm:
   - the parameters $(\alpha_j)_{j=1}^p$ of the linear decomposition are estimated using ridge regression with adaptive selection of the regularization parameter, while the parameters of the functions in the decomposition are kept fixed;
   - the parameters of the functions in the decomposition are tuned using a specially elaborated gradient method, while the parameters $(\alpha_j)_{j=1}^p$ of the linear decomposition are kept fixed.

Specific details of the algorithm can be found in [5], [8], [11].
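Step 3, the hybrid alternation, can be illustrated with a toy two-phase loop. This is not the HDA hybrid algorithm (whose initialization, model selection and gradient step are the ones referenced above); it merely alternates off-the-shelf ridge regression with an L-BFGS optimizer on an invented one-dimensional dictionary:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, (200, 1))
y = np.tanh(2.0 * x[:, 0]) + 0.1 * x[:, 0]

def features(x, theta):
    """Toy dictionary: sigmoids tanh(<theta_j, x>) plus the linear term."""
    return np.hstack([np.tanh(x @ theta.T), x])

theta = rng.standard_normal((3, 1))     # nonlinear dictionary parameters
ridge = Ridge(alpha=1e-3)
for _ in range(5):
    # phase 1: linear weights alpha_j by ridge regression, theta fixed
    ridge.fit(features(x, theta), y)
    w, b = ridge.coef_, ridge.intercept_
    # phase 2: nonlinear parameters by a gradient optimizer, alpha_j fixed
    loss = lambda t: np.mean(
        (features(x, t.reshape(theta.shape)) @ w + b - y) ** 2)
    theta = minimize(loss, theta.ravel(),
                     method="L-BFGS-B").x.reshape(theta.shape)

ridge.fit(features(x, theta), y)        # final refit of the linear part
mse = np.mean((ridge.predict(features(x, theta)) - y) ** 2)
```

The alternation works because each phase is an easy subproblem: with the nonlinear parameters fixed the weights have a closed-form ridge solution, and with the weights fixed the remaining problem is smooth and low-dimensional.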

5 Comparison of approximation algorithms

Comparison of IMAGE and HDA with conventional methods on the problem of approximating Airbus in-house stability tools showed significantly higher accuracy of these methods over conventional ones (see [3] for IMAGE and [5], [13] for HDA). Therefore, the main goal of this section is to compare the accuracies of IMAGE and HDA.

5.1 Setup of experiments

The setup of experiments can be described as follows:

- Input dimension $d_{in} = 20$; the input vector defines geometries, material properties, etc.
- The typical learning sample size used for construction of the approximation is $N \sim 200{,}000$.
- The output characteristics consist of various reserve factors (RF for stringer local buckling, RF for skin local buckling, etc.) and strain values at different locations, also considering bending. The number of output characteristics to approximate is equal to 25.

A Reserve Factor indicates whether the structure is feasible with respect to a given failure mode. If the RF is greater than one, the structure is feasible; if the RF is less than one, it is not. Therefore, when modeling the dependency of an RF on a vector of design variables $x$, the highest possible accuracy should be provided for values of RF close to one. Thus for each output characteristic two domains were identified, namely $A_1 = \{(x, y): y \in [0.5, 1.5]\}$ and $A_2 = \{(x, y): y \in [0, 3]\}$, and for each of these domains an approximator was constructed using the corresponding training sample. Let us denote these approximators as Appr1 and Appr2, correspondingly. As Accuracy Indicators we used $E_{mean}$ (mean absolute error), $E_{q95}$ (95% quantile of the absolute error) and $E_{q99}$ (99% quantile of the absolute error). Accuracy Indicators were estimated using separate test samples for the sets $\{(x, y): y \in [0.8, 1.2]\}$ and $\{(x, y): y \in [1.2, 1.5]\}$ in the case of Appr1, and for the sets $\{(x, y): y \in [0, 1.5]\}$ and $\{(x, y): y \in [1.5, 3]\}$ in the case of Appr2.

5.2 Obtained results

Due to lack of space, detailed results are given only for two RFs: the RF for stringer local buckling (RF STR, see figure 5) and the RF for the first skin buckling mode (RF PND GEN, see figure 6). We can see that HDA provides significantly higher accuracy. Let us consider the fraction of cases (out of 50) for which the considered approximation method has the smallest error (of the considered type), see the results in figure 7. We can see that in most cases HDA achieves the smallest approximation errors. Let us quantify to what extent, on average, the accuracy of HDA is higher than the accuracy of IMAGE. We denote by $E(M)$ an approximation error of some particular type, obtained by a method $M$ on some test problem, and set $\Delta(M_1, M_2) = \frac{E(M_1) - E(M_2)}{E(M_1)} \cdot 100\%$ for two methods $M_1$ and $M_2$. Then the average gain (AvgGain) of using $M_2$ instead of $M_1$ is equal to the average of the positive values of $\Delta(M_1, M_2)$ over all considered problems. Analogously, the average loss (AvgLoss) of using $M_2$ instead of $M_1$ is equal to the average of the negative values of $\Delta(M_1, M_2)$


over all considered problems. The obtained results for IMAGE ($M_1$) and HDA ($M_2$) are given in figure 8. We can see that HDA is significantly more accurate.
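Under this definition, AvgGain and AvgLoss can be computed directly from two error vectors over the same set of test problems; a minimal sketch (names illustrative):

```python
import numpy as np

def avg_gain_loss(E1, E2):
    """AvgGain / AvgLoss of using method M2 instead of M1, given arrays
    of errors E1 (method M1) and E2 (method M2) on the same problems."""
    delta = (np.asarray(E1) - np.asarray(E2)) / np.asarray(E1) * 100.0
    gain = delta[delta > 0]   # problems where M2 beat M1
    loss = delta[delta < 0]   # problems where M2 was worse
    return (gain.mean() if gain.size else 0.0,
            loss.mean() if loss.size else 0.0)
```

For example, errors E1 = [1.0, 2.0, 4.0] and E2 = [0.5, 2.2, 4.0] give relative differences of +50%, -10% and 0%, hence AvgGain = 50% and AvgLoss = -10%.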

Figure 5. Accuracy Indicators for RF STR

Figure 6. Accuracy Indicators for RF PND GEN

Figure 7. Fraction of cases

Figure 8. Average gain and loss

6 Conclusions

Two approximation methods, IMAGE (Improved Metamodeling Approximation through Gaussian mixture of Experts) and HDA (High Dimensional Approximation), are presented. Both methods provide better accuracy than conventional methods; in particular, the proposed methods can adapt to discontinuous behaviour of the response function. An extensive comparison on the problem of approximating Airbus in-house stability tools showed that HDA is significantly more accurate than IMAGE.


Therefore, the proposed methods can be used for the construction of accurate surrogate models to replace the computationally expensive stability analyses.

7 Bibliography

[1] M. Miki, Y. Sugiyama. Optimum design of laminated composite plates using lamination parameters. AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Baltimore, USA, 1991, Technical Papers, Pt. 1, Vol. 8, No. 10.
[2] A. Gallina, L. Pichler, T. Uhl. Enhanced meta-modelling technique for analysis of mode crossing, mode veering and mode coalescence in structural dynamics. Mechanical Systems and Signal Processing, Elsevier, 2011.
[3] D. Bettebghor, N. Bartoli, S. Grihon, J. Morlier, M. Samuelides. Surrogate modeling approximation using a mixture of experts based on EM joint estimation. Structural and Multidisciplinary Optimization (2011), pp. 1-17, Springer.
[4] DATADVANCE, www.datadvance.net
[5] E. Burnaev, M. Belyaev, P. Prihodko. About hybrid algorithm for tuning of parameters in approximation based on linear expansions in parametric functions. Proceedings of the 8th International Conference "Intellectualization of Data Processing", Cyprus, 2011, pp. 200-205.
[6] E. Burnaev, P. Prikhodko. Theoretical properties of procedure for construction of regression ensemble based on bagging and boosting. Proceedings of the conference "Information Technology and Systems", Gelendzhik, Russia, 2011, pp. 438-443.
[7] M. Belyaev, E. Burnaev, P. Yerofeyev, P. Prikhodko. Comparative performance of nonlinear regression initialization methods. Proceedings of the conference "Information Technology and Systems", Gelendzhik, Russia, 2011, pp. 315-320.
[8] E. Burnaev, M. Belyaev, A. Lyubin. Construction of approximation based on linear expansions in heterogeneous nonlinear functions. Proceedings of the conference "Information Technology and Systems", Gelendzhik, Russia, 2011, pp. 344-348.
[9] E. Burnaev, P. Prikhodko, I. Panin. About criterion for selection of regression model. Proceedings of the conference "Information Technology and Systems", Gelendzhik, Russia, 2011, pp. 340-343.
[10] M. Belyaev, E. Burnaev, A. Lyubin. Parametric dictionary selection for approximation of multidimensional dependency. Proceedings of the All-Russian conference "Mathematical Methods of Pattern Recognition", Petrozavodsk, Russia, 2011, pp. 146-150.
[11] M. Belyaev, A. Lyubin. Some features of optimization problem arising in construction of multidimensional approximation. Proceedings of the conference "Information Technology and Systems", Gelendzhik, Russia, 2011, pp. 415-422.
[12] E. Burnaev, S. Grihon. Construction of the metamodels in support of stiffened panel optimization. Proceedings of the International Conference on Mathematical Methods in Reliability, Moscow, Russia, 2009, pp. 124-128.
[13] E.V. Burnaev, S. Grihon. Construction of the metamodels in support of stiffened panel optimization. Proceedings of the International Conference on Mathematical Methods in Reliability, Moscow, Russia, 2009, pp. 124-128.
