  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
21

Construction automatique de modèles multi-corps de substitution aux simulations de crashtests / Automatized multi-body surrogate models creation to replace crashtests simulations

Loreau, Tanguy 18 December 2019 (has links)
At Renault, the teams in charge of crashworthiness use very simple models to pre-size the vehicle during upstream studies. Today, these models are built from the behavior of only one or a few reference vehicles. They work and allow the project to be sized, but the company now wishes to build its upstream models from all of its vehicles. In other words, it wants an automatic method for analyzing crashtest simulations, so that their results can be capitalized in a database of simplified models. To meet this goal, we rely on multi-body model theory and develop a method, CrashScan, that analyzes crashtest simulations to extract the data required to build a surrogate multi-body model. The analysis process implemented in CrashScan can be split into three major steps. The first identifies the weakly deformed zones in a crashtest simulation, from which the topological graph of the future surrogate model is built. The second step analyzes the relative kinematics between the weakly deformed zones: principal directions and deformation modes (e.g. crushing or bending) are identified from the relative motions. The last step analyzes the forces and moments between the weakly deformed zones, expressed in the frames associated with the principal deformation directions, as functions of the deformations. This allows us to identify equivalent Bouc-Wen hysteretic models. These models have three parameters of interest in our case: a stiffness, a threshold force before yielding, and a hardening slope. These parameters can be used directly by upstream-study experts. Finally, we build surrogate multi-body models for three different use cases and compare them to their references on the criteria used upstream: the models generated by CrashScan appear to provide the precision and fidelity required for use in the upstream phases of automotive development. To carry this research work through to an industrial solution, some obstacles remain; the main ones are the decomposition of an arbitrary motion into six elementary motions and multi-body synthesis on elements other than beams.
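The Bouc-Wen element named above can be made concrete with a small numerical sketch. This is an illustrative toy, not the CrashScan implementation: a simplified Bouc-Wen law integrated with explicit Euler over an imposed displacement cycle, exposing the three parameters the abstract mentions (stiffness, yield threshold, hardening slope ratio); all parameter values are invented.

```python
import numpy as np

def bouc_wen_force(x, k=1.0, fy=0.5, alpha=0.1, beta=0.5, gamma=0.5, n=1.0):
    """Force response of a simplified Bouc-Wen element along a displacement
    history x. k: initial stiffness, fy: yield threshold force,
    alpha: hardening ratio (post-yield slope / initial stiffness)."""
    zy = fy / k                          # yield displacement
    z, x_prev = 0.0, x[0]                # hysteretic internal variable
    force = np.empty_like(x)
    for i, xi in enumerate(x):
        dx = xi - x_prev
        # evolution of the hysteretic variable (explicit Euler step)
        z += dx * (1.0 - (gamma + beta * np.sign(dx * z)) * abs(z / zy) ** n)
        force[i] = alpha * k * xi + (1.0 - alpha) * k * z
        x_prev = xi
    return force

# one displacement cycle: the force-displacement loop encloses a positive
# area, i.e. the element dissipates energy like a crushed structural zone
t = np.linspace(0.0, 2.0 * np.pi, 400)
x = 2.0 * np.sin(t)
f = bouc_wen_force(x)
dissipated = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
```

The positive enclosed loop area is the energy dissipated by the element, which is what makes Bouc-Wen laws plausible surrogates for crushed zones between the weakly deformed portions.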
22

Metody evoluční optimalizace založené na modelech / Model-based evolutionary optimization methods

Bajer, Lukáš January 2018 (has links)
Model-based black-box optimization is a topic that has been intensively studied in both academia and industry. Real-world optimization tasks in particular are often characterized by expensive or time-demanding objective functions, for which statistical models can save resources or speed up the optimization. Each of the three parts of the thesis concerns one such model: first, copulas are used in place of a graphical model in estimation-of-distribution algorithms; second, RBF networks serve as surrogate models in mixed-variable genetic algorithms; and third, Gaussian processes are employed in Bayesian optimization algorithms as a sampling model and in the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as a surrogate model. The last combination, described in the core part of the thesis, resulted in the Doubly Trained Surrogate CMA-ES (DTS-CMA-ES). This algorithm uses the uncertainty prediction of a Gaussian process to select only part of the CMA-ES population for evaluation with the expensive objective function, while the mean prediction is used for the rest. The DTS-CMA-ES improves upon state-of-the-art surrogate-assisted continuous optimizers in several benchmark tests.
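The doubly trained evaluation step described above can be sketched as follows. This is a hedged toy, not the thesis code: a minimal Gaussian process (squared-exponential kernel, fixed hand-picked hyperparameters) ranks a candidate population by predictive uncertainty, the few most uncertain points are evaluated with the expensive objective, and the GP mean stands in for the rest. The sphere objective and all sizes are placeholders.

```python
import numpy as np

def gp_fit_predict(X, y, Xs, ell=1.0, sf2=1.0, noise=1e-8):
    """Minimal GP regression with a squared-exponential kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf2 * np.exp(-0.5 * d2 / ell ** 2)
    L = np.linalg.cholesky(k(X, X) + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)
    v = np.linalg.solve(L, Ks.T)
    sd = np.sqrt(np.clip(sf2 - np.sum(v ** 2, axis=0), 0.0, None))
    return Ks @ alpha, sd                            # posterior mean and std

def doubly_trained_step(f, X_arch, y_arch, population, n_true):
    """Evaluate only the n_true most uncertain candidates with the
    expensive objective f; use the GP mean prediction for the rest."""
    mu, sd = gp_fit_predict(X_arch, y_arch, population)
    idx = np.argsort(-sd)[:n_true]                   # most uncertain first
    y_hat = mu.copy()
    y_hat[idx] = [f(x) for x in population[idx]]     # true evaluations
    return y_hat, idx

f = lambda x: float(np.sum(x ** 2))                  # toy sphere objective
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(12, 2))
y = np.array([f(x) for x in X])
pop = rng.uniform(-2, 2, size=(8, 2))
y_hat, evaluated = doubly_trained_step(f, X, y, pop, n_true=3)
```

In the real algorithm the GP would then be retrained on the new evaluations (hence "doubly trained") before ranking the population for CMA-ES; this sketch shows only the uncertainty-driven selection.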
23

Bayesian Optimization for Design Parameters of Autoinjectors

Heliben Naimeshkum Parikh (15340111) 24 April 2023 (has links)
This thesis describes a computational framework for optimizing spring-driven autoinjectors, using Bayesian optimization for efficient and cost-effective autoinjector design.
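The abstract gives no implementation detail, but the Bayesian optimization loop it refers to can be sketched generically. Everything below is an invented stand-in, not the thesis's framework: a toy one-dimensional "cost vs. design parameter" function, a minimal Gaussian process, and expected improvement as the acquisition function.

```python
import numpy as np
from math import erf, sqrt, pi

def gp(X, y, Xs, ell=0.3, noise=1e-6):
    """Tiny 1-D GP with a squared-exponential kernel."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell ** 2)
    L = np.linalg.cholesky(k(X, X) + noise * np.eye(len(X)))
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)
    v = np.linalg.solve(L, Ks.T)
    sd = np.sqrt(np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None))
    return Ks @ a, sd

def expected_improvement(mu, sd, best):
    """EI for minimization under a Gaussian posterior."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.array([erf(zi / sqrt(2.0)) for zi in z]))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (best - mu) * Phi + sd * phi

# invented stand-in for an injection cost vs. one design parameter on [0, 1]
f = lambda x: (x - 0.65) ** 2 + 0.05 * np.sin(8.0 * x)

X = np.array([0.1, 0.5, 0.9]); y = f(X)
grid = np.linspace(0.0, 1.0, 200)
for _ in range(10):                  # BO loop: fit GP, maximize EI, evaluate
    mu, sd = gp(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.append(X, x_next); y = np.append(y, f(x_next))
best_x = float(X[np.argmin(y)])
```

Ten acquisitions concentrate the evaluations near the minimum while spending only thirteen calls to the (stand-in) expensive function.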
24

Vibration and Buckling Analysis of Unitized Structure Using Meshfree Method and Kriging Model

Yeilaghi Tamijani, Ali 07 June 2011 (has links)
The Element-Free Galerkin (EFG) method, based on the Moving Least Squares (MLS) approximation, is developed here for the vibration, buckling, and static analysis of homogeneous and functionally graded (FGM) plates with curvilinear stiffeners. Numerical results for different stiffener configurations and boundary conditions are presented. All results are verified against the commercial finite element software ANSYS® and other results available in the literature. In addition, the vibration analysis of plates with curvilinear stiffeners is carried out using the Ritz method. A 24 by 28 in. curvilinearly stiffened panel was machined from 2219-T851 aluminum for experimental validation of the Ritz and meshfree vibration mode shape predictions; results were obtained for this panel mounted vertically on a steel clamping bracket, using acoustic excitation and a laser vibrometer. The experimental results correlate well with the meshfree and Ritz results. In reality, many engineering structures are subjected to random pressure loads and cannot be treated as deterministic. Typical examples, including buildings, towers, offshore structures, vehicles, and ships, experience random pressure loading; vibrations induced by gust loads, engine noise, and auxiliary electrical systems can also produce noise inside aircraft. Consequently, all flight vehicles operate in a random vibration environment. Such random loads can be modeled through their statistical properties. Because the dynamic responses of structures under random excitation are very complicated, the meshfree method is extended here to the random vibration analysis of curvilinearly stiffened plates.
Extensive efforts have been devoted to the buckling and vibration analysis of stiffened panels in order to maximize their natural frequencies and critical buckling loads, and these structures often carry in-plane loads that should be considered in the vibration analysis. Natural frequencies calculated while neglecting in-plane compression are usually over-predicted; for more accurate results it may be necessary to account for the in-plane load, since it can change the natural frequency of the plate considerably. To give a better view of the free vibration behavior of plates with curvilinear stiffeners under axial, biaxial, or shear stresses, several numerical examples are studied. The FEM analysis of a curvilinearly stiffened plate is computationally expensive, and the meshfree method is a suitable substitute to reduce CPU time; however, many simulations may still be required. Because of the number of simulations an engineering optimization problem may demand, many researchers have sought optimization approaches and techniques that reduce the number of function evaluations. In such problems, surrogate models for analysis and optimization can be very efficient: the basic idea of a surrogate model is to reduce computational cost while giving a better understanding of the influence of the design variables on the different objectives and constraints. To exploit both the meshfree method and surrogate modeling in reducing CPU time, the meshfree method is used to generate the sample points, and a combination of Kriging (a surrogate model) and genetic algorithms is used for the design of curvilinearly stiffened plates. The meshfree and Kriging results and CPU times were compared with those obtained using EBF3PanelOpt. / Ph. D.
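The Moving Least Squares approximation underlying the EFG method can be illustrated in one dimension. A hedged sketch, not the thesis code: an MLS fit with a linear basis and Gaussian weights, demonstrating the consistency property that linear fields are reproduced exactly regardless of the weight function; node counts and the support radius are arbitrary.

```python
import numpy as np

def mls_approx(nodes, u, x_eval, radius=0.3):
    """1-D Moving Least Squares approximation with a linear basis [1, x]
    and a Gaussian weight, as used to build EFG shape functions."""
    out = []
    P = np.vstack([np.ones_like(nodes), nodes]).T   # basis at the nodes
    for x in x_eval:
        w = np.exp(-((x - nodes) / radius) ** 2)    # weight of each node at x
        A = P.T @ (w[:, None] * P)                  # moment matrix
        b = P.T @ (w * u)
        a = np.linalg.solve(A, b)                   # local coefficients
        out.append(a[0] + a[1] * x)
    return np.array(out)

nodes = np.linspace(0.0, 1.0, 11)
u = 2.0 + 3.0 * nodes                               # a linear nodal field
x_eval = np.array([0.05, 0.33, 0.71])
uh = mls_approx(nodes, u, x_eval)
# MLS with a linear basis reproduces linear fields exactly (consistency)
```

This exact linear reproduction is the property that lets EFG pass patch tests without a mesh connecting the nodes.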
25

Fiabilité résiduelle des ouvrages en béton dégradés par réaction alcali-granulat : application au barrage hydroélectrique de Song Loulou / Residual reliability of alkali-aggregate reaction affected concrete structures : application to the Song Loulou hydroelectric dam

Ftatsi Mbetmi, Guy-De-Patience 31 August 2018 (has links)
This work proposes a multi-scale methodology, based on surrogate models that are functions of random variables, to evaluate the residual reliability of concrete structures suffering from alkali-aggregate reaction (AAR), with better maintenance in view. The surrogate models, based on polynomial chaos expansions of the parameters of a shape function (a sigmoid in the cases studied), were built at several scales, in particular to reduce the computation time of the underlying physical models. At the microscopic scale, the AAR model employed is the one developed by Multon, Sellier and Cyr in 2009, initially comprising some twenty potential random variables. After a Morris sensitivity analysis, the surrogate model reproduces the expansion curve over time of the representative elementary volume as a function of nine random variables. Using this surrogate model to predict the mechanical effects of AAR expansion on a concrete core required accounting for the anisotropy of these effects, by improving the weight functions proposed by Saouma and Perotti in 2006. With the core scale validated against the experimental data of Multon's thesis work, an application at the scale of the Song Loulou dam was undertaken. The thermo-chemo-mechanical behavior of a spillway pier was computed, and the resulting displacements were compared with the monitoring data provided by the company AES-SONEL (now ENEO). Surrogate models were then built at the structural scale to obtain the displacements at the points of interest, related to the operating limit states of the spillways, and thus to estimate the residual reliability of the dam. The sensitivity analyses and the construction of the surrogate models were implemented in Fortran, Java and OpenTURNS; the computations on the concrete cores and the dam pier were performed with Cast3M.
26

Prise en compte des incertitudes des problèmes en vibro-acoustiques (ou interaction fluide-structure) / Taking into account the uncertainties of vibro-acoustic problems (or fluid-structure interaction)

Dammak, Khalil 27 November 2018 (has links)
This PhD thesis deals with the robust analysis and reliability-based optimization of vibro-acoustic (or fluid-structure interaction) problems, taking into account uncertainties in the input parameters. In the design and sizing phase, it is worthwhile to model vibro-acoustic systems together with their variability, which stems mainly from geometric imperfections and material properties. It is therefore important, if not essential, to account for the dispersion of these uncertain parameters in order to ensure a robust design. The purpose is thus to determine the capabilities and limitations, in terms of accuracy and computational cost, of methods based on polynomial chaos expansions, compared with the reference Monte Carlo technique, for studying the mechanical behavior of vibro-acoustic problems with uncertain parameters. Studying the propagation of these uncertainties allows their integration into the design phase. The goal of Reliability-Based Design Optimization (RBDO) is to find a compromise between minimum cost and a target reliability; several methods, such as the hybrid method (HM) and the Optimum Safety Factor (OSF) method, have been developed to achieve it. To cope with the complexity of vibro-acoustic systems with uncertain parameters, we developed problem-specific methodologies, via meta-modeling methods, which allowed us to build a vibro-acoustic surrogate model that satisfies both efficiency and accuracy requirements. The objective of this thesis is to determine the best methodology for the reliability-based optimization of vibro-acoustic systems with uncertain parameters.
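The comparison drawn above between polynomial chaos and Monte Carlo can be sketched on a scalar toy. This is a hedged illustration, not the vibro-acoustic model: a degree-2 Hermite chaos expansion is fitted by least-squares regression, its mean and variance are read directly off the coefficients, and both are checked against crude Monte Carlo; the quadratic test function is invented.

```python
import numpy as np

# invented scalar test model with input X ~ N(0, 1)
g = lambda x: x ** 2 + 2.0 * x + 1.0

rng = np.random.default_rng(0)
x_fit = rng.standard_normal(200)
# probabilists' Hermite basis: He0 = 1, He1 = x, He2 = x^2 - 1
Psi = np.vstack([np.ones_like(x_fit), x_fit, x_fit ** 2 - 1.0]).T
c, *_ = np.linalg.lstsq(Psi, g(x_fit), rcond=None)   # chaos coefficients

pce_mean = c[0]                          # E[g] is the 0th chaos coefficient
pce_var = c[1] ** 2 * 1 + c[2] ** 2 * 2  # Var[g] = sum_k c_k^2 * k!

x_mc = rng.standard_normal(200_000)      # crude Monte Carlo for comparison
mc_mean, mc_var = g(x_mc).mean(), g(x_mc).var()
```

Here 200 model evaluations give the chaos expansion exact moments, while Monte Carlo needs orders of magnitude more runs for comparable accuracy; this cost gap is what motivates chaos-based propagation for expensive vibro-acoustic solvers.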
27

Application of Design-of-Experiment Methods and Surrogate Models in Electromagnetic Nondestructive Evaluation / Application des méthodes de plans d’expérience numérique et de modèles de substitution pour le contrôle nondestructif électromagnétique

Bilicz, Sandor 30 May 2011 (has links)
Electromagnetic Nondestructive Evaluation (ENDE) is applied in various industrial domains for the exploration of hidden in-material defects of structural components. The principal task of ENDE can be formalized as follows: an unknown defect affects a given host structure, interacting with a known electromagnetic field, and the response (derived from the electromagnetic field distorted by the defect) is measured using one or more receivers at known positions. This response contains information on the electromagnetic constitutive parameters and the geometry of the defect to be retrieved; ENDE aims at extracting this information for the characterization of the defect, i.e., at solving the arising "inverse problem". To this end, one must be able to determine the electromagnetic field distorted by a defect with known parameters in a given host structure, i.e., to solve the "forward problem". In practice, this is performed via mathematical modeling (based on Maxwell's equations) and numerical simulation of the studied ENDE configuration. Such simulators can provide fine precision, but at the price of computational cost. The solution of an inverse problem, however, often requires several runs of these expensive-to-evaluate simulators, making the inversion procedure firmly demanding in terms of runtime and computational resources. To overcome this challenge, surrogate modeling offers an interesting alternative. A surrogate model imitates the true model but is, as a rule, much less complex than the latter. One way to construct such surrogates is to perform a number of simulations and then approximate the model based on the obtained data; the choice of the "prototype" simulations is usually controlled by a sophisticated strategy drawn from the tools of design-of-experiments. The goal of the research presented in this dissertation is the improvement of ENDE methods using surrogate modeling and design-of-experiments techniques. Three self-sufficient approaches are discussed in detail: an inversion algorithm based on the optimization of an objective function, and two methods for the generation of generic surrogate models, both involving a sequential sampling strategy. All approaches presented in this dissertation are illustrated by examples drawn from eddy-current nondestructive testing.
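A sequential sampling strategy of the kind described above can be sketched with a purely geometric criterion. This toy is not one of the dissertation's output-adaptive algorithms: it greedily places each new "prototype" simulation where the current design is emptiest, a maximin space-filling rule; pool sizes and seeds are arbitrary.

```python
import numpy as np

def sequential_maximin(n_init, n_add, seed=2):
    """Greedy sequential design: each new sample goes where the current
    design is emptiest, i.e. at the candidate point whose distance to
    the nearest existing sample is largest."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_init, 2))       # initial design
    cand = rng.uniform(0.0, 1.0, size=(2000, 2))      # candidate pool
    fill = []
    for _ in range(n_add):
        d = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2).min(axis=1)
        fill.append(float(d.max()))                   # current fill distance
        X = np.vstack([X, cand[np.argmax(d)]])        # sample the emptiest spot
    return X, fill

X, fill = sequential_maximin(n_init=5, n_add=20)
```

The fill distance shrinks monotonically as prototypes are added; an adaptive strategy like those in the dissertation would additionally weight this geometric criterion by the surrogate's estimated output error.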
28

An Automated Method for Optimizing Compressor Blade Tuning

Hinkle, Kurt Berlin 01 March 2016 (has links)
Because blades in jet engine compressors are subject to dynamic loads that depend on the engine's speed, the blades must be properly "tuned" to avoid resonance at those frequencies and ensure safe operation of the engine. The tuning process can be time-consuming for designers because many parameters control the geometry of the blade and, therefore, its resonance frequencies. Humans cannot easily optimize design spaces with many variables, but optimization algorithms can handle a design space with any number of design variables. Automated blade tuning can reduce design time while increasing the fidelity and robustness of the design. Using surrogate modeling techniques and gradient-free optimization algorithms, this thesis presents a method for automating the tuning process of an airfoil. Surrogate models are generated to relate airfoil geometry to the modal frequencies of the airfoil; these surrogates enable rapid exploration of the entire design space. The optimization algorithm uses a novel objective function that accounts for the contribution of every mode's value at a specific operating speed on a Campbell diagram. When the optimization converges on a solution, the new blade parameters are output to the designer for review. The optimization guarantees a feasible tuned-blade solution. With 21 geometric parameters controlling the shape of the blade, the geometry of an optimally tuned blade can be determined within 20 minutes.
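The Campbell-diagram objective mentioned above can be sketched as a penalty on near-resonances. A hedged toy, not the thesis's actual objective function: each mode contributes according to its distance from every engine-order line at the operating speed; the order set, tolerance width, and frequencies are all invented.

```python
def tuning_objective(modal_freqs, speed_rpm, orders=(1, 2, 3, 4), width=5.0):
    """Penalty accumulating every mode's proximity to each engine-order
    line (order x shaft speed) at the operating speed, Campbell-diagram
    style. Modes further than `width` Hz from every line contribute 0."""
    speed_hz = speed_rpm / 60.0
    penalty = 0.0
    for f in modal_freqs:
        for n in orders:
            gap = abs(f - n * speed_hz)              # distance to the order line
            penalty += max(0.0, 1.0 - gap / width)   # only near-resonances count
    return penalty

# a blade whose first mode sits right on the 2nd engine order at 6000 rpm
resonant = tuning_objective([200.0, 310.0], 6000.0)  # 2 x 100 Hz = 200 Hz
detuned = tuning_objective([215.0, 310.0], 6000.0)   # all modes clear the lines
```

A gradient-free optimizer driving such a penalty to zero over the surrogate-predicted modal frequencies is one plausible reading of the automated tuning loop the abstract describes.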
29

Reliability-based structural design: a case of aircraft floor grid layout optimization

Chen, Qing 07 January 2011 (has links)
In this thesis, several Reliability-Based Design Optimization (RBDO) methods and algorithms for aircraft floor grid layout optimization are proposed. A general RBDO process is proposed and validated on an example. Copulas are introduced as a mathematical tool to model correlations between random variables and to produce correlated samples for Monte Carlo simulation. Based on the Hasofer-Lind (HL) method, a correlated HL method is proposed to evaluate the reliability index under correlation. As an alternative, the reliability index is interpreted as an optimization problem, and two nonlinear programming algorithms are introduced to evaluate it. To evaluate the reliability index by Monte Carlo simulation in a time-efficient way, a Kriging-based surrogate model is proposed and compared to the original model in terms of computing time. Since the reliability constraint obtained by MCS has no analytical form in RBDO optimization models, a Kriging-based response surface is built. Kriging-based response surfaces are usually piecewise functions without a uniform expression over the design space, whereas most optimization algorithms require such an expression for constraints; to solve this problem, a heuristic gradient-based direct search algorithm is proposed. These methods and algorithms, together with the general RBDO process, are applied to the layout optimization of the aircraft floor grid structural design.
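Interpreting the reliability index as an optimization problem, as above, can be illustrated with the classic HL-RF iteration: find the point on the limit state g(u) = 0 closest to the origin in standard normal space; beta is its distance. A minimal sketch (not the thesis's correlated variant) with an invented linear limit state, for which the exact answer is |b| / ||a||:

```python
import numpy as np

def hlrf_beta(g, grad_g, n_dim, tol=1e-10, max_iter=100):
    """Hasofer-Lind reliability index via the HL-RF fixed-point iteration:
    beta = min ||u|| subject to g(u) = 0, in standard normal space."""
    u = np.zeros(n_dim)
    for _ in range(max_iter):
        gr = grad_g(u)
        # project the current point onto the linearized limit state
        u_new = ((gr @ u - g(u)) / (gr @ gr)) * gr
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return float(np.linalg.norm(u)), u

# invented linear limit state: failure when u1 + u2 > 3
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])
beta, u_star = hlrf_beta(g, grad_g, 2)    # exact answer: 3 / sqrt(2)
```

For a linear limit state the iteration converges in one step; for nonlinear ones it repeats the linearize-and-project step, which is what makes it attractive compared with general nonlinear programming.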
30

An efficient approach for high-fidelity modeling incorporating contour-based sampling and uncertainty

Crowley, Daniel R. 13 January 2014 (has links)
During the design process for an aerospace vehicle, decision-makers must have an accurate understanding of how each choice will affect the vehicle and its performance. This understanding is based on experiments and, increasingly often, computer models. In general, as a computer model captures a greater number of phenomena, its results become more accurate for a broader range of problems. This improved accuracy typically comes at the cost of significantly increased computational expense per analysis. Although rapid analysis tools have been developed that are sufficient for many design efforts, those tools may not be accurate enough for revolutionary concepts subject to grueling flight conditions such as transonic or supersonic flight and extreme angles of attack. At such conditions, the simplifying assumptions of the rapid tools no longer hold. Accurate analysis of such concepts would require models that do not make those simplifying assumptions, with the corresponding increases in computational effort per analysis. As computational costs rise, exploration of the design space can become exceedingly expensive. If this expense cannot be reduced, decision-makers would be forced to choose between a thorough exploration of the design space using inaccurate models, or the analysis of a sparse set of options using accurate models. This problem is exacerbated as the number of free parameters increases, limiting the number of trades that can be investigated in a given time. In the face of limited resources, it can become critically important that only the most useful experiments be performed, which raises multiple questions: how can the most useful experiments be identified, and how can experimental results be used in the most effective manner? This research effort focuses on identifying and applying techniques which could address these questions. 
The demonstration problem for this effort was the modeling of a reusable booster vehicle, which would be subject to a wide range of flight conditions while returning to its launch site after staging. Contour-based sampling, an adaptive sampling technique, seeks cases that will improve the prediction accuracy of surrogate models for particular ranges of the responses of interest. In the case of the reusable booster, contour-based sampling was used to emphasize configurations with small pitching moments; the broad design space included many configurations which produced uncontrollable aerodynamic moments for at least one flight condition. By emphasizing designs that were likely to trim over the entire trajectory, contour-based sampling improves the predictive accuracy of surrogate models for such designs while minimizing the number of analyses required. The simplified models mentioned above, although less accurate for extreme flight conditions, can still be useful for analyzing performance at more common flight conditions. The simplified models may also offer insight into trends in the response behavior. Data from these simplified models can be combined with more accurate results to produce useful surrogate models with better accuracy than the simplified models but at less cost than if only expensive analyses were used. Of the data fusion techniques evaluated, Ghoreyshi cokriging was found to be the most effective for the problem at hand. Lastly, uncertainty present in the data was found to negatively affect predictive accuracy of surrogate models. Most surrogate modeling techniques neglect uncertainty in the data and treat all cases as deterministic. This is plausible, especially for data produced by computer analyses which are assumed to be perfectly repeatable and thus truly deterministic. However, a number of sources of uncertainty, such as solver iteration or surrogate model prediction accuracy, can introduce noise to the data. 
If these sources of uncertainty could be captured and incorporated when surrogate models are trained, the resulting surrogate models would be less susceptible to that noise and correspondingly have better predictive accuracy. This was accomplished in the present effort by capturing the uncertainty information via nuggets added to the Kriging model. By combining these techniques, surrogate models could be created which exhibited better predictive accuracy while selecting the most informative experiments possible. This significantly reduced the computational effort expended compared to a more standard approach using space-filling samples and data from a single source. The relative contributions of each technique were identified, and observations were made pertaining to the most effective way to apply the separate and combined methods.
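The nugget idea above can be sketched with a minimal Kriging predictor. A hedged toy, not the dissertation's model: with a near-zero nugget the surrogate interpolates the noisy training values exactly (chasing the noise), while a nugget lets it smooth through them. Kernel, length scale, and noise level are invented.

```python
import numpy as np

def krige(X, y, Xs, ell=0.1, nugget=0.0):
    """Minimal Kriging (GP mean) predictor; a nugget term on the kernel
    diagonal absorbs noise in the training responses."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell ** 2)
    w = np.linalg.solve(k(X, X) + nugget * np.eye(len(X)), y)
    return k(Xs, X) @ w

rng = np.random.default_rng(3)
X = np.linspace(0.0, 1.0, 10)
y = np.sin(2.0 * np.pi * X) + 0.3 * rng.standard_normal(10)  # noisy "solver" output

# residuals at the training points themselves:
r_interp = float(np.mean((krige(X, y, X, nugget=1e-10) - y) ** 2))
r_nugget = float(np.mean((krige(X, y, X, nugget=0.2) - y) ** 2))
# near-zero nugget: the surrogate reproduces the noisy values exactly;
# with a nugget it no longer interpolates, trading fit for noise robustness
```

The nonzero training residual under the nugget is precisely the mechanism by which captured uncertainty keeps the surrogate from fitting solver noise.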
