About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

Multiploid Genetic Algorithms for Multi-Objective Turbine Blade Aerodynamic Optimization

Oksuz, Ozhan 01 December 2007
To decrease the computational cost of genetic algorithm optimization, surrogate models are used during the search. However, updating the surrogates online and repeatedly exchanging them with the exact model turns a static optimization problem into a dynamic one, and genetic algorithms fail to converge to the global optimum on dynamic problems. To address this, a multiploid genetic algorithm is proposed: each fidelity level of the multi-fidelity surrogate models is assigned its own fitness value, which keeps the optimization problem static. The low-fidelity fitness values reduce the computational cost, while the exact (highest-fidelity) fitness value drives convergence to the global optimum. The algorithm is applied to single- and multi-objective turbine blade aerodynamic optimization problems. The design objectives are maximizing adiabatic efficiency and torque, so as to reduce the weight, size, and cost of the gas turbine engine. A 3-D steady Reynolds-averaged Navier-Stokes solver, validated on two well-known test cases, is coupled with an automated unstructured grid generation tool. The blade geometry is modelled by 37 design variables, with fine- and coarse-grid solutions serving as the high- and low-fidelity models, respectively. One of the test cases is selected as the baseline and modified during the design process. The effects of the input parameters on the performance of the multiploid genetic algorithm are studied, and it is demonstrated that the proposed algorithm accelerates the optimization cycle while still converging to the global optimum on both single- and multi-objective problems.
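
The two-level fitness idea is easiest to see in code. The sketch below is a minimal, illustrative rendering of the general pattern, not the dissertation's implementation: `cheap_f` and `exact_f` are toy stand-ins for the coarse- and fine-grid CFD evaluations, and all GA settings are assumptions.

```python
# Sketch of a two-fidelity GA: the whole population is ranked with a cheap
# low-fidelity fitness; only the current front-runners receive exact
# (expensive) evaluations, which alone decide the elite individual.
import numpy as np

rng = np.random.default_rng(0)

def exact_f(x):    # stand-in for the expensive fine-grid RANS evaluation
    return np.sum((x - 0.5) ** 2)

def cheap_f(x):    # stand-in for the cheap but biased coarse-grid evaluation
    return exact_f(x) + 0.05 * np.sum(np.sin(10.0 * x))

def multiploid_ga(pop_size=40, n_var=37, n_gen=50, n_elite=4, sigma=0.05):
    # n_var=37 mirrors the abstract's 37 blade design variables
    pop = rng.random((pop_size, n_var))
    best_x, best_f = pop[0].copy(), np.inf
    for _ in range(n_gen):
        # rank the whole population with the low-fidelity fitness only
        pop = pop[np.argsort([cheap_f(x) for x in pop])]
        # spend the few exact evaluations on the current front-runners
        for x in pop[:n_elite]:
            fx = exact_f(x)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        # tournament on the cheap ranking (lower index = better rank)
        pick = lambda: pop[rng.integers(0, pop_size, (pop_size, 2)).min(axis=1)]
        w = rng.random((pop_size, 1))
        children = w * pick() + (1 - w) * pick()                  # blend crossover
        children += sigma * rng.standard_normal(children.shape)  # mutation
        children[0] = best_x          # elitism keyed to the exact fitness
        pop = np.clip(children, 0.0, 1.0)
    return best_x, best_f

x_best, f_best = multiploid_ga()
print(f"best exact fitness: {f_best:.4f}")
```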
22

Développement d'une méthodologie pour l'optimisation multicritère de scénarios d'évolution du parc nucléaire / Methodology implementation for multiobjective optimisation for nuclear fleet evolution scenarios

Freynet, David 30 September 2016
The evolution of the French nuclear fleet can be studied through nuclear transition scenarios. Such studies play an important role in the decision-making process, given the stakes involved: the scale of the investments, the time spans, and the complexity of the systems concerned. They are performed with the COSI code (developed at CEA/DEN), which computes the inventories and fluxes of materials across the fuel cycle (nuclear reactors and associated facilities), notably through coupling with the CESAR depletion code. Current COSI studies require the scenario input parameters to be defined so as to satisfy criteria such as minimizing natural uranium consumption or waste production. These parameters cover, in particular, the quantities and scheduling of spent fuel sent to reprocessing, as well as the number, type, and commissioning dates of the reactors to deploy. This work aims to develop, validate, and apply an optimization methodology coupled with COSI to search for optimal nuclear transition scenarios in a multi-objective setting. The methodology first reduces the time needed to evaluate a scenario, so that optimization methods can run in a reasonable time frame; to this end, artificial neural network irradiation surrogate models are built with the URANIE platform (developed at CEA/DEN) and implemented within COSI. The work then uses, adapts, and compares optimization methods, such as the genetic algorithm and particle swarm available in URANIE, to define a methodology suited to this specific type of study. The methodology is built incrementally, successively adding criteria, constraints, and decision variables to the optimization problem. The added variables, which describe the reactor deployment schedule and the spent fuel reprocessing strategy, are chosen according to their influence on the defined criteria. This approach eases the interpretation of optimal scenarios, helps detect difficulties in the optimization process, and ultimately yields recommendations on how to use the methodology depending on the nature of the problem. The optimization studies are based on a scenario deploying fast reactors with plutonium recycling, inspired by the studies carried out under the 2006 French act on the management of radioactive materials and waste. The methodology's capabilities are illustrated on this scenario, demonstrating in particular the optimality of the scenario from those studies with respect to limiting the storage of fissile materials. This result underlines the importance of dynamic plutonium management, via MOX fuel, during the progressive deployment of fast reactors.
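
The surrogate-in-the-loop pattern the abstract describes (train a fast approximation of the expensive depletion calculation once, then let the optimizer query only the approximation) can be sketched as follows. Everything here is illustrative: `expensive_depletion` is a toy stand-in for a CESAR irradiation calculation, and a random-feature ridge regression replaces the thesis's neural-network surrogates.

```python
# Sketch: replace an expensive physics call with a cheap fitted surrogate,
# then run an evolutionary search entirely on the surrogate.
import numpy as np

rng = np.random.default_rng(1)

def expensive_depletion(x):   # toy stand-in for a CESAR irradiation calculation
    return np.exp(-x[0]) * np.cos(3 * x[1]) + 0.5 * x[1] ** 2

# one-off design of experiments + random Fourier feature regression
X = rng.random((200, 2))
y = np.array([expensive_depletion(x) for x in X])
W = rng.standard_normal((2, 64))
b = rng.uniform(0, 2 * np.pi, 64)
phi = lambda X: np.cos(X @ W + b)                  # random Fourier features
coef = np.linalg.lstsq(phi(X), y, rcond=None)[0]
surrogate = lambda x: float(phi(x[None, :]) @ coef)

# crude (mu + lambda) evolutionary search driven only by the surrogate
pop = rng.random((30, 2))
for _ in range(100):
    kids = np.clip(pop + 0.05 * rng.standard_normal(pop.shape), 0, 1)
    both = np.vstack([pop, kids])
    pop = both[np.argsort([surrogate(x) for x in both])[:30]]
best = pop[0]
print("surrogate optimum:", best, "exact value:", expensive_depletion(best))
```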
23

Algoritmo genético com regressão: busca direcionada através de aprendizado de máquina / Genetic algorithm with regression: directed search through machine learning

Fonseca, Tales Lima 31 August 2017
Funded by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior). Optimization problems are common in many areas. In engineering, optimization problems were often modeled disregarding certain characteristics of the studied phenomenon in order to simplify the simulations performed during the search. Over time, however, advances in computing hardware have allowed optimization problems to be modeled with more information, bringing the models as close to reality as possible. As a result, a significant portion of these problems demands a high computational cost to evaluate candidate solutions, making many of them difficult to analyze and simulate. The objective of this work is to couple machine learning methods to an optimization algorithm in order to direct the search process of a genetic algorithm, inserting promising candidate solutions into the population at each generation so as to reduce the high computational cost of finding optimal solutions. In addition, a comparative study is carried out to determine which machine learning methods perform well within the proposed technique. The experiments are conducted on computationally expensive optimization problems commonly found in the literature.
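
A minimal sketch of the proposed mechanism (fit a regressor on every solution evaluated so far, let it nominate one promising candidate per generation, and inject that candidate into the GA population) is given below. The quadratic-feature regressor, the Rastrigin test objective, and all parameter choices are illustrative assumptions, not the dissertation's exact setup.

```python
# Sketch of a regression-guided GA: each generation a regressor trained on the
# archive of evaluated points screens many random samples and injects the
# predicted-best one into the population.
import numpy as np

rng = np.random.default_rng(2)

def f(x):   # stand-in for an expensive objective (Rastrigin)
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def quad_features(X):
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

pop = rng.uniform(-5, 5, (20, 4))
archive_X, archive_y = [], []
for gen in range(40):
    fit = np.array([f(x) for x in pop])
    archive_X.extend(pop)
    archive_y.extend(fit)
    # fit the regressor on everything evaluated so far
    A = quad_features(np.array(archive_X))
    coef = np.linalg.lstsq(A, np.array(archive_y), rcond=None)[0]
    # surrogate-screen many random samples; keep the predicted best
    cand = rng.uniform(-5, 5, (500, 4))
    guided = cand[np.argmin(quad_features(cand) @ coef)]
    # standard GA step: truncation selection + mutation, then inject the guide
    pop = pop[np.argsort(fit)][:10]
    pop = np.vstack([pop, pop + 0.3 * rng.standard_normal(pop.shape)])
    pop[-1] = guided
print("best found:", min(archive_y))
```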
24

Quantification de radionucléides par approche stochastique globale / Global stochastic approach for radionuclides quantification

Clément, Aloïs 19 December 2017
Among nondestructive nuclear measurement techniques, gamma spectrometry is a widely used method for identifying and quantifying the radionuclides present in complex nuclear objects such as radioactive waste packages, waste drums, or glove boxes. The varied and non-reproducible physical and nuclear characteristics of these objects (composition, material distribution, density, geometry, and the number and shape of the emitting source terms) make traditional calibration methods, which determine an absolute detection efficiency against a standard, unsuitable for obtaining the activity of a given nuclear material. This thesis proposes a method for quantifying multi-emitter radionuclides that limits, or even eliminates, the use of a priori information drawn from expert judgment or past experience. The method combines metamodels of the equivalent gamma detection efficiency of the measurement scene, an inverse-problem formulation solved by Markov Chain Monte Carlo (MCMC) algorithms, and a Bayesian probabilistic framework, in order to estimate the probability density functions of the variables of interest, such as a radionuclide mass. An experimental validation protocol verifies the robustness of the method for estimating a mass of 239Pu inside objects similar to those routinely handled by the laboratory. Future prospects for the method include reducing computation times, cutting financial and human costs by limiting expert-driven approaches, and reducing the associated uncertainties.
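
The inverse-problem core of the method (a surrogate forward model inside an MCMC sampler, yielding a posterior distribution for the mass of interest) can be illustrated in a few lines. The sketch below uses a toy exponential efficiency law, a single measured count rate, and uniform priors; all numerical values and the simple Metropolis-Hastings sampler are assumptions for illustration.

```python
# Sketch: Metropolis-Hastings over (mass, matrix density) with a cheap
# surrogate standing in for the full gamma detection-efficiency model.
import numpy as np

rng = np.random.default_rng(3)

def efficiency(density):      # toy surrogate for the attenuation/efficiency model
    return 0.02 * np.exp(-0.8 * density)

def forward(mass, density):   # expected count rate for a given scene
    return mass * efficiency(density)

observed, noise = 0.35, 0.02  # measured rate and its standard deviation (toy)

def log_post(mass, density):
    if not (0 < mass < 100 and 0.1 < density < 3.0):   # uniform priors
        return -np.inf
    return -0.5 * ((forward(mass, density) - observed) / noise) ** 2

theta = np.array([20.0, 1.0])   # initial (mass, density)
lp = log_post(*theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [1.0, 0.05])   # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:     # accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta[0])
m = np.array(samples[5000:])                    # discard burn-in
print(f"mass posterior: {m.mean():.1f} +/- {m.std():.1f}")
```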
25

Surrogate Modeling for Uncertainty Quantification in Systems Characterized by Expensive and High-Dimensional Numerical Simulators

Tripathy, Rohit 24 April 2020
Physical phenomena in nature are typically represented by complex systems of ordinary differential equations (ODEs) or partial differential equations (PDEs), modeling a wide range of spatio-temporal scales and multiple physics. The field of computational science has achieved indisputable success in advancing our understanding of the natural world, made possible through a combination of increasingly sophisticated mathematical models, numerical techniques, and hardware resources. Furthermore, there has been a recent revolution in the data-driven sciences, spurred on by advances in the deep learning and stochastic optimization communities and the democratization of machine learning (ML) software.

With the ubiquitous use of computational models for the analysis and prediction of physical systems, a need has arisen to rigorously characterize the effects of unknown variables in a system. Unfortunately, uncertainty quantification (UQ) tasks such as model calibration, uncertainty propagation, and optimization under uncertainty typically require several thousand evaluations of the underlying physical models. To deal with the high cost of the forward model, one typically resorts to the surrogate idea: replacing the true response surface with an approximation that is both accurate and computationally cheap. However, state-of-the-art numerical systems are often characterized by a very large number of stochastic parameters, on the order of hundreds or thousands. The high cost of individual forward-model evaluations, coupled with the limited real-world computational budget one is constrained to work with, means one must construct a surrogate model for a system with high input dimensionality from small datasets. In other words, one faces the curse of dimensionality.

In this dissertation, we propose multiple ways of overcoming the curse of dimensionality when constructing surrogate models for high-dimensional numerical simulators. The core idea binding all of our proposed approaches is simple: we try to discover special structure in the stochastic parameters that captures most of the variance of the output quantity of interest. Our strategies first identify such a low-rank structure, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of this low-dimensional structure is small enough, learning the map from the reduced input space to the output is a much easier task than the original surrogate modeling problem.
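
A compact illustration of the "find low-rank structure, project, then regress" strategy is the gradient-based active-subspace construction sketched below, applied to a toy 100-dimensional function with a hidden one-dimensional ridge. The toy simulator, the availability of gradients, and the polynomial link function are all assumptions; the dissertation explores several such structure-discovery approaches.

```python
# Sketch: recover a dominant input direction from gradient samples, project
# the high-dimensional inputs onto it, and fit a cheap 1-D surrogate.
import numpy as np

rng = np.random.default_rng(4)
d, n = 100, 300
a = rng.standard_normal(d)
a /= np.linalg.norm(a)

def f(x):        # toy 100-D simulator with a hidden one-dimensional ridge
    return np.sin(x @ a)

def grad_f(x):   # gradients, e.g. from adjoints or automatic differentiation
    return np.cos(x @ a) * a

X = rng.standard_normal((n, d))
y = np.array([f(x) for x in X])
G = np.vstack([grad_f(x) for x in X])
C = G.T @ G / n                       # uncentered covariance of the gradients
eigval, eigvec = np.linalg.eigh(C)
w = eigvec[:, -1]                     # dominant "active" direction
z = X @ w                             # project inputs onto the active subspace
coef = np.polyfit(z, y, 7)            # cheap 1-D surrogate on the projection
x_test = rng.standard_normal(d)
print("true:", f(x_test), "surrogate:", np.polyval(coef, x_test @ w))
```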
26

Méthodes avancées d'optimisation par méta-modèles – Application à la performance des voiliers de compétition / Advanced surrogate-based optimization methods - Application to racing yachts performance

Sacher, Matthieu 10 September 2018
Sailing yacht performance optimization is a difficult problem, due to the high complexity of the mechanical system (aero-elastic and hydrodynamic coupling) and the large number of parameters to optimize (sails, rig, etc.). Although sailboat optimization remains largely empirical today, numerical approaches are now becoming viable thanks to the latest advances in physical models and computing power. These numerical optimizations remain very expensive, however, as each evaluation usually requires solving a nonlinear fluid-structure interaction problem. The central objective of this thesis is therefore to propose and develop original methods that minimize the numerical cost of sailing yacht performance optimization. Efficient Global Optimization (EGO) with Gaussian-process surrogates is applied to solve various optimization problems. The original EGO method is extended to optimization under constraints, including possibly non-computable points, using a classification-based approach, and the use of multi-fidelity surrogates is also adapted to the EGO method. The applications concern original optimization problems in which the performance is modeled experimentally and/or numerically. These applications validate the methodological developments on real, complex cases, including fluid-structure interaction phenomena.
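
The EGO loop at the heart of the thesis (fit a Gaussian-process surrogate, pick the next expensive evaluation by maximizing expected improvement, refit, repeat) can be sketched compactly. The 1-D toy objective, the fixed kernel length-scale, and the random candidate search are simplifying assumptions; the constrained, classification-assisted, and multi-fidelity extensions are not reproduced here.

```python
# Sketch of Efficient Global Optimization: a GP surrogate is refitted after
# every expensive evaluation; the next sample maximizes expected improvement.
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(5)

def expensive(x):   # stand-in for a costly fluid-structure performance solve
    return np.sin(3 * x) + 0.5 * x ** 2

def gp_fit_predict(X, y, Xs, ell=0.3, noise=1e-6):
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    L = np.linalg.cholesky(k(X, X) + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    sd = np.sqrt(np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None))
    return mu, sd

def expected_improvement(mu, sd, best):
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))   # normal CDF
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)           # normal PDF
    return (best - mu) * Phi + sd * phi

X = rng.uniform(-2, 2, 5)                  # small initial design
y = np.array([expensive(x) for x in X])
for _ in range(15):                        # EGO iterations
    cand = rng.uniform(-2, 2, 1000)
    mu, sd = gp_fit_predict(X, y, cand)
    x_new = cand[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.append(X, x_new)
    y = np.append(y, expensive(x_new))
print(f"best of {len(X)} evaluations: x = {X[np.argmin(y)]:.3f}, f = {y.min():.3f}")
```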
27

A Systematic Process for Adaptive Concept Exploration

Nixon, Janel Nicole 29 November 2006
This thesis presents a method for streamlining the process of obtaining and interpreting quantitative data in order to create a low-fidelity modeling and simulation environment. By providing a more efficient means of obtaining such information, it makes quantitative analyses far more practical for decision-making in the very early stages of design, where they are traditionally viewed as too expensive and cumbersome for concept evaluation. The method developed to address this need is the Systematic Process for Adaptive Concept Exploration (SPACE). In SPACE, design-space exploration occurs sequentially: as data are acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data are used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the method identifies the analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The results show that a tailored data set and an informed model structure work together to provide a meaningful quantitative representation of the system while relying on only a small amount of resources. Compared to more traditional modeling and simulation approaches, the SPACE method provides a more accurate representation of the system using fewer resources. It thus acts as an enabler for decision making in the very early design stages, where the desire is to base design decisions on quantitative information without wasting valuable resources obtaining unnecessarily high-fidelity information about all the candidate solutions. The approach enables concept selection to be based on parametric, quantitative data so that informed, unbiased decisions can be made.
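
The adaptive-sampling loop that SPACE embodies (sample, fit, then place the next sample where the current model is least trustworthy) can be sketched as below. The disagreement-between-two-fits criterion, the polynomial models, and the toy response are illustrative assumptions rather than the thesis's actual sampling scheme.

```python
# Sketch of adaptive sequential sampling: fit two models on random halves of
# the data and take the next sample where they disagree the most.
import numpy as np

rng = np.random.default_rng(6)

def response(x):   # stand-in for a design analysis
    return np.tanh(5 * (x - 0.6)) + 0.1 * x

def fit_poly(X, y, deg=5):
    return np.polyfit(X, y, min(deg, len(X) - 1))

X = list(np.linspace(0, 1, 4))   # small initial design
y = [response(x) for x in X]
grid = np.linspace(0, 1, 200)
for _ in range(12):
    idx = rng.permutation(len(X))
    half = len(X) // 2
    p1 = fit_poly(np.array(X)[idx[:half]], np.array(y)[idx[:half]])
    p2 = fit_poly(np.array(X)[idx[half:]], np.array(y)[idx[half:]])
    disagreement = np.abs(np.polyval(p1, grid) - np.polyval(p2, grid))
    x_new = grid[np.argmax(disagreement)]   # sample the most uncertain region
    X.append(x_new)
    y.append(response(x_new))
final = np.polyfit(X, y, 5)
err = np.max(np.abs(np.polyval(final, grid) - response(grid)))
print(f"max model error after {len(X)} adaptive samples: {err:.3f}")
```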
28

A Methodology for Capability-Based Technology Evaluation for Systems-of-Systems

Biltgen, Patrick Thomas 26 March 2007
Post-Cold War military conflicts have highlighted the need for a flexible, agile joint force responsive to emerging crises around the globe. The 2005 Joint Capabilities Integration and Development System (JCIDS) acquisition policy document mandates a shift away from stove-piped, threat-based acquisition toward a capability-based model focused on the multiple ways and means of achieving an effect. This shift requires a greater emphasis on scenarios, tactics, and operational concepts during the conceptual phase of design, yet structured processes for technology evaluation to support the transition are lacking. In this work, a methodology for quantitative technology evaluation of systems-of-systems is defined. Physics-based models of an aircraft system are exercised within a hierarchical, object-oriented constructive simulation to quantify technology potential in the context of a relevant scenario. A major technical challenge to this approach is the lack of resources to support real-time human-in-the-loop tactical decision making and technology analysis; intelligent agents are therefore used to create a "Meta-General" capable of forecasting strategic and tactical decisions based on technology inputs. To demonstrate the synergy between new technologies and tactics, surrogate models provide intelligence to individual agents within the framework and develop a set of tactics that appropriately exploits new technologies. To address the long run-times associated with constructive military simulations, neural network surrogate models are implemented around the forecasting environment to enable rapid trade studies. Probabilistic techniques quantify uncertainty and richly populate the design space with technology-infused alternatives. Since the analysis of systems-of-systems produces a large amount of data, dynamic, interactive visualization techniques enable "what-if" games on assumptions, systems, technologies, tactics, and evolving threats. The methodology developed in this dissertation is applied to a notional Long Range Strike air vehicle and system architecture in the context of quantitative technology evaluation for the United States Air Force.
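
One of the enabling patterns in the abstract (replace the long-running constructive simulation with a cheap surrogate, then run large probabilistic trade studies through it) can be sketched as follows. The notional three-lever "campaign" model, the interaction-term regression, and every number here are assumptions for illustration only.

```python
# Sketch: a few expensive campaign simulations train a regression surrogate,
# after which many technology combinations are swept through the cheap model.
import numpy as np

rng = np.random.default_rng(7)

def campaign_sim(tech):   # notional constructive simulation, 3 technology levers
    stealth, range_, sensors = tech
    return 0.9 * stealth + 0.6 * range_ + 0.4 * sensors - 0.5 * stealth * range_

X = rng.random((60, 3))   # affordable simulation budget
y = np.array([campaign_sim(x) for x in X]) + 0.01 * rng.standard_normal(60)

def features(X):          # linear + pairwise interaction terms
    inter = np.stack([X[:, 0] * X[:, 1], X[:, 0] * X[:, 2], X[:, 1] * X[:, 2]], 1)
    return np.hstack([np.ones((len(X), 1)), X, inter])

coef = np.linalg.lstsq(features(X), y, rcond=None)[0]

# probabilistic "what-if": sweep 100k technology mixes through the surrogate
mix = rng.random((100_000, 3))
effect = features(mix) @ coef
best = mix[np.argmax(effect)]
print("predicted best technology mix:", np.round(best, 2))
```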
29

Surrogate-Assisted Evolutionary Algorithms

Loshchilov, Ilya 08 January 2013
Evolutionary Algorithms (EAs) have been studied extensively for their ability to solve complex optimization problems using variation operators tailored to specific problems. A search driven by a population of solutions offers good robustness to moderate noise and to the multi-modality of the optimized function, in contrast to classical optimization methods such as quasi-Newton methods. The main limitation of EAs, the large number of objective function evaluations required, nevertheless penalizes their use for functions that are expensive to evaluate. This thesis focuses on one evolutionary algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), known as a powerful algorithm for continuous black-box optimization. We review the state of the art of CMA-ES-derived algorithms for single- and multi-objective optimization in the black-box setting. A first contribution, targeting the optimization of expensive functions, concerns scalar approximation of the objective function. The learned surrogate respects the ordering of solutions induced by their objective values, and is therefore invariant under monotone transformations of the objective function. The resulting algorithm, saACM-ES, tightly couples the optimization performed by CMA-ES with the statistical learning of adaptive surrogates; in particular, the surrogates rely on the covariance matrix adapted by CMA-ES. saACM-ES thus preserves the two key invariance properties of CMA-ES: invariance (i) with respect to monotone transformations of the objective function, and (ii) with respect to orthogonal transformations of the search space. The approach is extended to multi-objective optimization by proposing two types of (scalar) surrogates. The first relies on characterizing the current Pareto front (using a mixed variant of One-Class Support Vector Machines (SVM) for the dominated points and regression SVM for the non-dominated points). The second relies on learning the ordering (Pareto rank) of solutions. Both approaches are integrated into the multi-objective CMA-ES (MO-CMA-ES), and we discuss some aspects of exploiting surrogates in the multi-objective setting. A second contribution concerns the design of new algorithms for single-objective, multi-objective, and multi-modal optimization, developed to understand, explore, and extend the frontiers of evolutionary algorithms, and of CMA-ES in particular. Specifically, the coordinate-system adaptation of CMA-ES is coupled with an adaptive coordinate-descent method; an adaptive restart strategy for CMA-ES is proposed for multi-modal optimization; and selection strategies adapted to multi-objective optimization, remedying difficulties encountered by MO-CMA-ES, are proposed.
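
The pre-screening pattern used by surrogate-assisted evolution strategies (rank a large offspring batch with a cheap surrogate and spend exact evaluations only on the most promising few) is sketched below. For brevity this uses a plain isotropic Gaussian ES and an ordinary quadratic regression rather than full CMA-ES with a rank-preserving surrogate, so it illustrates the evaluation-saving pattern only, not the invariance properties discussed above.

```python
# Sketch of surrogate pre-screening in an evolution strategy: a large offspring
# batch is filtered by a cheap regression; only the survivors are evaluated
# exactly, and those exact values drive recombination.
import numpy as np

rng = np.random.default_rng(8)

def f(x):   # expensive black-box objective (Rosenbrock)
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

def quad_feats(X):
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

dim, sigma = 5, 0.3
mean = rng.standard_normal(dim)
hist_X, hist_y = [], []
for gen in range(150):
    batch = mean + sigma * rng.standard_normal((40, dim))   # many offspring
    if len(hist_X) >= 20:
        coef = np.linalg.lstsq(quad_feats(np.array(hist_X)),
                               np.array(hist_y), rcond=None)[0]
        pred = quad_feats(batch) @ coef
        batch = batch[np.argsort(pred)][:8]   # surrogate keeps the top 8
    fy = np.array([f(x) for x in batch])      # exact calls only on survivors
    hist_X.extend(batch)
    hist_y.extend(fy)
    elite = batch[np.argsort(fy)][:3]
    mean = elite.mean(axis=0)                 # recombination
    sigma *= 0.995                            # slow step-size decay
print(f"best f: {min(hist_y):.4f} after {len(hist_y)} exact evaluations")
```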
30

Progressive Validity Metamodel Trust Region Optimization

Thomson, Quinn Parker 26 February 2009
The goal of this work was to develop metamodels for the MDO framework piMDO and to contribute new research in metamodeling strategies. The theory of existing metamodels is presented and implementation details are given. A new trust region scheme, metamodel trust region optimization (MTRO), was developed. This method uses a progressively increasing level of minimum validity in order to reduce the number of sample points required for the optimization process. Higher levels of validity require denser point distributions, but the shrinking size of the region during the optimization process mitigates the increase in the number of points required. New metamodeling strategies include inherited optimal Latin hypercube sampling, hybrid Latin hypercube sampling, and kriging with BFGS. MTRO performs better than traditional trust region methods on single-discipline problems and is competitive with other MDO architectures when used with a CSSO algorithm. Advanced metamodeling methods proved to be inefficient within trust region methods.
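
The basic trust-region-with-metamodel loop that MTRO refines can be sketched as follows: fit a local quadratic metamodel inside the current region, step to its minimizer if the exact function agrees, and grow or shrink the region accordingly. The progressive-validity levels and piMDO integration are not reproduced; the toy objective, the fixed sample count per region, and the random search over the metamodel are assumptions.

```python
# Sketch of metamodel trust region optimization: a local quadratic surrogate
# proposes a step; the region expands on success and shrinks on failure.
import numpy as np

rng = np.random.default_rng(9)

def f(x):   # expensive objective (toy Styblinski-Tang)
    return np.sum(x ** 4 - 16 * x ** 2 + 5 * x) / 2

center, radius = np.array([0.0, 0.0]), 2.0
f_center = f(center)
for it in range(25):
    # sample inside the trust region and fit a local quadratic metamodel
    S = center + radius * rng.uniform(-1, 1, (15, 2))
    y = np.array([f(x) for x in S])
    A = np.hstack([np.ones((15, 1)), S, S ** 2, (S[:, 0] * S[:, 1])[:, None]])
    c = np.linalg.lstsq(A, y, rcond=None)[0]
    # minimize the metamodel over a dense random cloud inside the region
    g = center + radius * rng.uniform(-1, 1, (4000, 2))
    Ag = np.hstack([np.ones((4000, 1)), g, g ** 2, (g[:, 0] * g[:, 1])[:, None]])
    cand = g[np.argmin(Ag @ c)]
    f_cand = f(cand)
    if f_cand < f_center:                  # exact model confirms improvement
        center, f_center = cand, f_cand
        radius = min(radius * 1.5, 3.0)    # accept and expand
    else:
        radius *= 0.5                      # reject and shrink
print("optimum estimate:", center, "f =", f_center)
```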
