21

Gaussian process regression of two nested computer codes / Métamodélisation par processus gaussien de deux codes couplés

Marque-Pucheu, Sophie 10 October 2018 (has links)
This thesis deals with the Gaussian process surrogate modeling (or emulation) of two nested computer codes. Here, "two nested codes" refers to a system of two chained codes: the output of the first code is one of the inputs of the second code. Both codes are computationally expensive. In order to perform a sensitivity analysis of the output of the nested code, we aim to build a surrogate model of this output from a small number of observations. Three types of observations of the system exist: those of the chained code, those of the first code only, and those of the second code only. The surrogate model has to be accurate on the most likely regions of the input domain of the nested code. In this work, the surrogate models are constructed using the universal kriging framework with a Bayesian approach. First, the case where there is no information about the intermediary variable (the output of the first code) is addressed. An innovative parametrization of the mean function of the Gaussian process modeling the nested code, based on the coupling of two polynomials, is proposed. Then, the case with intermediary observations is addressed. A stochastic predictor based on the coupling of the predictors associated with the two codes is proposed, together with methods for quickly computing the mean and variance of this predictor. Finally, the methods obtained for codes with scalar outputs are extended to codes with high-dimensional vectorial outputs. We propose an efficient dimension reduction method for the high-dimensional vectorial input of the second code in order to facilitate the Gaussian process regression of this code. All the proposed methods are applied to numerical examples.
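The coupling described above can be sketched in a few lines: fit one Gaussian process per code, then propagate the predictive distribution of the first code's surrogate through the second. The sketch below is a minimal illustration using scikit-learn rather than the thesis's universal-kriging formulation; the toy functions f1 and f2, the designs of experiments, and all numerical settings are assumptions, and the Monte Carlo loop is the brute-force baseline that the thesis's fast mean/variance computations aim to avoid.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

rng = np.random.default_rng(0)

# Toy stand-ins for the two expensive codes (assumptions, illustration only):
# code 1 maps x -> z, code 2 maps (x, z) -> y, so the nested code is f2(x, f1(x)).
f1 = lambda x: np.sin(3.0 * x[:, 0])
f2 = lambda x, z: z**2 + 0.5 * x[:, 0]

# Separate, small designs of experiments for each code.
x1 = rng.uniform(0.0, 1.0, size=(12, 1))
x2 = rng.uniform(0.0, 1.0, size=(12, 1))
z2 = rng.uniform(-1.0, 1.0, size=12)

kernel = C(1.0) * RBF(length_scale=0.2)
gp1 = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x1, f1(x1))
gp2 = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    np.column_stack([x2, z2]), f2(x2, z2))

def nested_predict(x_new, n_samples=2000):
    """Brute-force Monte Carlo estimate of the mean and variance of the
    coupled predictor at the points x_new."""
    m1, s1 = gp1.predict(x_new, return_std=True)
    means, variances = [], []
    for _ in range(n_samples):
        # Sample the intermediate variable from the first predictor and push
        # the sample through the second predictor.
        z = rng.normal(m1, s1)
        m2, s2 = gp2.predict(np.column_stack([x_new, z]), return_std=True)
        means.append(m2)
        variances.append(s2**2)
    means = np.asarray(means)
    # Law of total variance: spread of the means plus the average inner variance.
    return means.mean(axis=0), means.var(axis=0) + np.mean(variances, axis=0)

mean, var = nested_predict(np.array([[0.3], [0.7]]))
print("nested mean:", mean, "nested variance:", var)
```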
22

Enhancement of CFD Surrogate Approaches for Thermo-Structural Response Prediction in High-Speed Flows

Brouwer, Kirk Rowse January 2018 (has links)
No description available.
23

An Exploration and Demonstration of System Modeling for Profitable Urban Air Mobility Operations Using Simulation and Optimization

Brandon E Sells (16807035) 09 August 2023 (has links)
This research effort addressed important gaps in the modeling needed to simulate Urban Air Mobility (UAM) operations and to couple optimization analyses for vehicle design, fleet allocation, and operational choices for next-generation urban travel. Urban Air Mobility is expected to be a $1 trillion industry by 2040, but operators and designers have limited models and tools to estimate fleet performance, cost metrics, emissions, and profit for a given concept under future concepts of operations. A review of the literature reveals 14 modeling gaps related to infrastructure, operations, airspace, vehicles, and customers. In addition, the UAM industry requires a better understanding of how operational choices may impact vehicle design and fleet allocation in a market with significant economic barriers and infrastructure needs. To address those needs, this effort proposed modeling alternatives and developed studies to evaluate UAM vehicle concepts and concepts of operations in ways not previously possible, using the enhanced modeling tools. The research findings revealed that modeling coupled design/fleet and operational choices can affect daily profitability potential by a factor of 2 to 4 for piloted and autonomous operations and can change the required fleet size from 12 to 50 vehicles across small, medium, and large metropolitan areas. The modeling capability provided by the improved UAM operations simulations and access to vehicle and fleet metrics enables future studies to address UAM in a holistic manner. The increased capability could benefit the UAM community and inform future operations and concepts of operations in preparation for ubiquitous operations.
24

Surrogate Modeling for Optimizing the Wing Design of a Hawk Moth Inspired Flapping-Wing Micro Air Vehicle

Huang, Wei 27 January 2023 (has links)
No description available.
25

A Distributed Surrogate Methodology for Inverse Most Probable Point Searches in Reliability Based Design Optimization

Davidson, James 28 August 2015 (has links)
No description available.
26

Multidisciplinary Design Under Uncertainty Framework of a Spacecraft and Trajectory for an Interplanetary Mission

Siddhesh Ajay Naidu (18437880) 28 April 2024 (has links)
Design under uncertainty (DUU) for spacecraft is crucial in ensuring mission success, especially given the criticality of their failure. To obtain a more realistic understanding of space systems, it is beneficial to holistically couple the modeling of the spacecraft and its trajectory as a multidisciplinary analysis (MDA). In this work, an MDA model is developed for an Earth-Mars mission by employing the general mission analysis tool (GMAT) to model the mission trajectory and rocket propulsion analysis (RPA) to design the engines. Using this direct MDA model, the deterministic optimization (DO) of the system is performed first and yields a design that completes the mission in 307 days while requiring 475 kg of fuel. The direct MDA model is also integrated into a Monte Carlo simulation (MCS) to investigate the uncertainty quantification (UQ) of the spacecraft and trajectory system. When considering the combined uncertainty in the launch date over a 20-day window and in the specific impulses, the time of flight ranges from 275 to 330 days and the total fuel consumption ranges from 475 to 950 kg. The spacecraft velocity exhibits deviations ranging from 2 to 4 km/s at any given instance in the Earth inertial frame. The amount of fuel consumed during the trajectory correction maneuver (TCM) ranges from 1 to 250 kg, while the amount consumed during the Mars orbit insertion (MOI) ranges from 350 to 810 kg. Using the direct MDA model for optimization and uncertainty quantification of the system can be computationally prohibitive for DUU. To address this challenge, the effectiveness of utilizing surrogate-based approaches for performing UQ is demonstrated, resulting in significantly lower computational costs. Gaussian process (GP) models trained on data from the MDA model were implemented into the UQ framework and their results were compared to those of the direct MDA method. When considering the combined uncertainty from both sources, the surrogate-based method had a mean error of 1.67% and required only 29% of the computational time. Compared to the direct MDA, the time-of-flight range matched well, while the TCM and MOI fuel consumption ranges were smaller by 5 kg. These GP models were integrated into the DUU framework to feasibly perform reliability-based design optimization (RBDO) for the spacecraft and trajectory system. For the combined uncertainty, the DO design yielded a poor reliability of 54%, underscoring the necessity of performing RBDO. The DUU framework obtained a design with a significantly improved reliability of 99%, which required an additional 39.19 kg of fuel and reduced the time of flight by 0.55 days.
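A minimal sketch of the surrogate-based UQ step described above: Gaussian process models are trained on a handful of runs of the expensive MDA and then queried inside a Monte Carlo loop. The mda_model stand-in below is a made-up analytic function, not GMAT/RPA, and the uncertainty ranges, kernel choices, and fuel budget are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

# Made-up analytic stand-in for the expensive GMAT/RPA MDA model (assumption):
# inputs are the launch day within a 20-day window and the specific impulse [s];
# outputs are time of flight [days] and total fuel consumption [kg].
def mda_model(launch_day, isp):
    tof = 300.0 + 1.2 * launch_day - 0.02 * (isp - 320.0)
    fuel = 700.0 + 8.0 * np.abs(launch_day - 10.0) - 1.5 * (isp - 320.0)
    return tof, fuel

# Small design of experiments on which the "expensive" model is run once.
X = np.column_stack([rng.uniform(0, 20, 40), rng.uniform(300, 340, 40)])
tof, fuel = mda_model(X[:, 0], X[:, 1])

kernel = Matern(length_scale=[5.0, 10.0], nu=2.5)
gp_tof = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, tof)
gp_fuel = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, fuel)

# Monte Carlo UQ through the cheap surrogates instead of the direct MDA.
n = 100_000
samples = np.column_stack([rng.uniform(0, 20, n), rng.normal(320.0, 5.0, n)])
tof_s = gp_tof.predict(samples)
fuel_s = gp_fuel.predict(samples)

print(f"time of flight: {tof_s.min():.1f} to {tof_s.max():.1f} days")
print(f"fuel consumption: {fuel_s.min():.1f} to {fuel_s.max():.1f} kg")
# Reliability with respect to a hypothetical 950 kg fuel budget (assumption).
print(f"P(fuel <= 950 kg) = {np.mean(fuel_s <= 950.0):.3f}")
```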
27

Décompositions tensorielles et factorisations de calculs intensifs appliquées à l'identification de modèles de comportement non linéaire / Tensor decompositions and factorizations of intensive computing applied to the calibration of nonlinear constitutive material laws

Olivier, Clément 14 December 2017 (has links)
This thesis presents a novel non-intrusive methodology for constructing surrogate models of parametric physical models. The proposed methodology makes it possible to approximate in real time, over the entire parameter space, multiple heterogeneous quantities of interest derived from physical models. The surrogate models are based on tensor-train representations built during an intensive offline computational stage. The fundamental idea of the learning stage is to construct all tensor approximations simultaneously, based on a reduced number of solutions of the physical model obtained on the fly. The parsimonious exploration of the parameter space, coupled with the compact tensor-train representation, alleviates the curse of dimensionality. The approach is particularly well suited to models involving many parameters defined over large domains. Numerical results on nonlinear elasto-viscoplastic laws show that surrogate models that are compact in memory and accurately predict multiple time-dependent mechanical variables can be obtained at low computational cost. The real-time response provided by the surrogate model for any parameter value allows the implementation of decision-making tools that are particularly useful to experts in the context of parametric studies and that aim at improving the calibration procedure for material laws.
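As an illustration of the tensor-train format mentioned above, the sketch below compresses a full tensor with the classical TT-SVD and evaluates single entries from the cores. It is only a didactic example: the thesis builds the train-of-tensors approximation from a limited number of on-the-fly runs of the physical model, without ever assembling the full tensor, which this sketch does not reproduce.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Compress a full tensor into tensor-train (TT) cores by sequential
    truncated SVDs of its unfoldings."""
    dims = tensor.shape
    cores, rank_prev = [], 1
    unfolding = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        rank = min(max_rank, len(S))
        cores.append(U[:, :rank].reshape(rank_prev, dims[k], rank))
        unfolding = (S[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
        rank_prev = rank
    cores.append(unfolding.reshape(rank_prev, dims[-1], 1))
    return cores

def tt_eval(cores, idx):
    """Evaluate one entry of the TT representation for a multi-index."""
    v = np.ones((1, 1))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]
    return v[0, 0]

# Demo on a small separable tensor (illustrative assumption): storage drops from
# 10^3 full entries to three small cores, and entries are recovered accurately.
grid = np.linspace(0.0, 1.0, 10)
full = np.exp(-np.add.outer(np.add.outer(grid, grid), grid))
cores = tt_svd(full, max_rank=3)
print(full[2, 5, 7], tt_eval(cores, (2, 5, 7)))
```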
28

A methodology for ballistic missile defense systems analysis using nested neural networks

Weaver, Brian Lee 10 July 2008 (has links)
The high costs and political tensions associated with Ballistic Missile Defense Systems (BMDS) have driven much of the testing and evaluation of BMDS to be performed through high-fidelity Modeling and Simulation (M&S). In response, the M&S environments have become highly complex, extremely computationally intensive, and far too slow to be of use to systems engineers and high-level decision makers. Regression models can be used to map the system characteristics to the metrics of interest, bringing about large quantities of data and allowing for real-time interaction with high-fidelity M&S environments; however, the abundance of discontinuities and non-unique solutions makes the application of regression techniques hazardous. Due to these ambiguities, the transfer function from the characteristics to the metrics appears to have multiple solutions for a given set of inputs, which, combined with multiple inputs yielding the same set of outputs, causes trouble in creating a mapping. Because of the abundance of discontinuities, the existence of a neural network mapping from the system attributes to the performance metrics is not guaranteed, and if the mapping does exist, it requires a large amount of data to create a regression model, making regression techniques less suitable for BMDS analysis. By employing Nested Neural Networks (NNNs), intermediate data can be associated with an ambiguous output, which can allow a regression model to be built. The addition of intermediate data incorporates more knowledge of the design space into the analysis. Nested neural networks divide the design space to form a piecewise continuous function, which allows the user to incorporate system knowledge into the surrogate modeling process while reducing the size of the data set required to form the regression model. This thesis defines nested neural networks along with methods and techniques for using NNNs to relieve the effects of discontinuities and non-unique solutions. To show the benefit of the approach, these techniques are applied to a BMDS simulation. Case studies are performed to optimize the system configurations and assess robustness, which could not be done without the regression models.
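A minimal sketch of the nesting idea, assuming a toy piecewise-continuous simulation: one network regresses the intermediate data from the system characteristics, and a second network maps the characteristics together with the intermediate quantity to the metric. The thesis's NNN formulation, which partitions the design space, is not reproduced here; the toy data and network settings are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Hypothetical simulation output (assumption): an intermediate quantity z
# (e.g. an engagement mode) makes the final metric y only piecewise
# continuous in the system characteristics x.
x = rng.uniform(-1.0, 1.0, size=(400, 2))
z = (x[:, 0] + x[:, 1] > 0.0).astype(float)                 # intermediate data
y = np.where(z > 0.5, np.sin(3.0 * x[:, 0]), x[:, 1] ** 2)  # final metric

# Nested (two-stage) regression: characteristics -> intermediate -> metric.
inner = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
inner.fit(x, z)
outer = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
outer.fit(np.column_stack([x, z]), y)

def nested_predict(x_new):
    """Chain the networks: the predicted intermediate feeds the outer network."""
    z_hat = inner.predict(x_new)
    return outer.predict(np.column_stack([x_new, z_hat]))

x_test = rng.uniform(-1.0, 1.0, size=(5, 2))
print(nested_predict(x_test))
```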
29

Métamodèles adaptatifs pour l'optimisation fiable multi-prestations de la masse de véhicules / Adaptive surrogate models for the reliable lightweight design of automotive body structures

Moustapha, Maliki 27 January 2016 (has links)
One of the most challenging tasks in modern engineering is keeping the cost of manufactured goods small. With the advent of computational design, prototyping, for instance, a major source of expense, is reduced to its bare essentials. In fact, through the use of high-fidelity models, engineers can predict the behavior of the systems they design quite faithfully. To be fully realistic, such models must embed uncertainties that may affect the physical properties or operating conditions of the system. This PhD thesis deals with the constrained optimization of structures under uncertainties in the context of automotive design. The constraints are assessed through expensive finite element models. For practical purposes, such models are conveniently substituted by so-called surrogate models, which stand as cheap and easy-to-evaluate proxies. In this PhD thesis, Gaussian process modeling and support vector machines are considered. After reviewing state-of-the-art techniques for optimization under uncertainties, we propose a novel formulation for reliability-based design optimization which relies on quantiles. The formal equivalence of this formulation with the traditional ones is proved. This approach is then coupled to surrogate modeling. Kriging is considered thanks to its built-in error estimate, which makes it well suited to adaptive sampling strategies. Such an approach allows us to reduce the computational budget by running the true model only in regions that are of interest to the optimization. We therefore propose a two-stage enrichment scheme. The first stage is aimed at globally reducing the Kriging epistemic uncertainty in the vicinity of the limit-state surface. The second one is performed within iterations of the optimization so as to locally improve the quantile accuracy. The efficiency of this approach is demonstrated through comparison with benchmark results. An industrial application featuring a car under frontal impact is considered. The crash behavior of a car is indeed particularly affected by uncertainties. The proposed approach therefore allows us to find a reliable solution within a reduced number of calls to the true finite element model. For the extreme case where uncertainties trigger various crash scenarios of the car, it is proposed to rely on support vector machines for classification so as to predict the possible scenarios before metamodeling each of them separately.
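The quantile-based measure of conservatism described above can be sketched as follows, assuming a made-up limit-state function in place of the crash finite element model: a Kriging surrogate trained jointly over design and uncertain variables is sampled by Monte Carlo, and a design is deemed reliable when the alpha-quantile of the predicted response satisfies the specification. The adaptive two-stage enrichment of the Kriging model is not reproduced; all names and values below are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)

# Made-up limit-state function standing in for the crash FE model (assumption):
# g <= 0 means the crash specification is met; thickness is the design variable,
# scatter lumps the physical and numerical dispersions.
def limit_state(thickness, scatter):
    return 1.0 - thickness + 0.1 * scatter

# Kriging model trained on a small design of experiments over the joint
# (design variable, uncertain variable) space.
X = np.column_stack([rng.uniform(0.5, 1.5, 60), rng.normal(0.0, 1.0, 60)])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(
    X, limit_state(X[:, 0], X[:, 1]))

def quantile_constraint(thickness, alpha=0.95, n=20_000):
    """Conservative constraint: the alpha-quantile of g under the input
    uncertainty, estimated through the surrogate, must be <= 0."""
    scatter = rng.normal(0.0, 1.0, n)
    g_hat = gp.predict(np.column_stack([np.full(n, thickness), scatter]))
    return np.quantile(g_hat, alpha)

for t in (0.8, 1.0, 1.2):
    print(f"thickness {t}: 95%-quantile of g = {quantile_constraint(t):+.3f}")
```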
30

From Horns to Helmets: Multi-Objective Design Optimization Considerations to Protect the Brain

Johnson, Kyle Leslie 12 August 2016 (has links)
This dissertation presents an investigation and design optimization of energy-absorbent protective systems that protect the brain. Specifically, the energy absorption characteristics of the bighorn sheep skull-horn system were quantified and used to inform a topology optimization performed on a football helmet facemask, leading to reduced values of brain injury indicators. The horn keratin of a bighorn sheep was experimentally characterized in different stress states, strain rates, and moisture contents. Horn keratin demonstrated a clear strain rate dependence in both tension and compression: as the strain rate increased, the flow stress increased. Also, increased moisture content decreased the strength and increased ductility. The hydrated horn keratin energy absorption increased at high strain rates when compared to quasi-static data. The keratin experimental data were then used to inform constitutive models employed in the simulation of bighorn sheep head impacts at 5.5 m/s. Acceleration values as high as 607 G were observed in finite element simulations of rams butting their heads, which is an order of magnitude higher than predicted brain injury threshold values. In the most extreme case, the maximum tensile pressure and maximum shear strain in the ram brain were 245 kPa and 0.28, respectively. These values could serve as true injury metrics for human head impacts. Finally, a helmeted human head finite element (FE) model is created, validated, and used to recreate impacts from a linear impactor. The results from these simulations are used to train a surrogate model, which is in turn utilized in multi-objective design optimization. Brain injury indicators were significantly reduced by performing multi-objective design optimization on a football helmet facemask. In particular, the tensile pressure and maximum shear strain in the brain decreased by 7.5% and 39.5%, respectively, when comparing the optimal designs to the baseline design. While the maximum tensile pressure and maximum shear strain values in the brain for helmeted head impacts (30.2 kPa and 0.011) were far lower than those for the ram impacts (245 kPa and 0.28), helmet impacts at speeds up to 12.3 m/s have been recorded and could easily surpass these thresholds.
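A minimal sketch of the surrogate-based multi-objective step described above, with made-up design variables and objective functions standing in for the facemask parameters and the FE-predicted brain tensile pressure and shear strain: one Gaussian process per objective is trained on a small design of experiments, a dense candidate set is evaluated on the surrogates, and the non-dominated designs are extracted with a simple Pareto filter (a real study would typically use an evolutionary multi-objective optimizer on top of the surrogates).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

# Made-up facemask design variables and objectives (assumptions): the two
# outputs stand in for the FE-predicted peak brain tensile pressure [kPa]
# and maximum shear strain, both to be minimized.
def fe_objectives(x):
    pressure = 30.0 - 4.0 * x[:, 0] + 2.0 * x[:, 1] ** 2
    strain = 0.011 + 0.002 * x[:, 0] ** 2 - 0.003 * x[:, 1]
    return np.column_stack([pressure, strain])

# One surrogate per objective, trained on a small "FE" design of experiments.
X_doe = rng.uniform(0.0, 1.0, size=(30, 2))
Y_doe = fe_objectives(X_doe)
surrogates = [
    GaussianProcessRegressor(kernel=RBF(0.3), normalize_y=True).fit(X_doe, Y_doe[:, k])
    for k in range(2)
]

# Dense candidate set evaluated on the cheap surrogates, then Pareto filtering.
cand = rng.uniform(0.0, 1.0, size=(2000, 2))
pred = np.column_stack([s.predict(cand) for s in surrogates])

def pareto_mask(costs):
    """Keep the non-dominated points (all objectives minimized)."""
    mask = np.ones(len(costs), dtype=bool)
    for i, c in enumerate(costs):
        dominates_c = np.all(costs <= c, axis=1) & np.any(costs < c, axis=1)
        mask[i] = not dominates_c.any()
    return mask

front = cand[pareto_mask(pred)]
print(f"{len(front)} non-dominated facemask designs out of {len(cand)} candidates")
```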
