11. Optimisation of liquid fuel injection in gas turbine engines. Comer, Adam Landon. January 2013.
No description available.
12. An efficient approach for high-fidelity modeling incorporating contour-based sampling and uncertainty. Crowley, Daniel R. 13 January 2014.
During the design process for an aerospace vehicle, decision-makers must have an accurate understanding of how each choice will affect the vehicle and its performance. This understanding is based on experiments and, increasingly often, computer models. In general, as a computer model captures a greater number of phenomena, its results become more accurate for a broader range of problems. This improved accuracy typically comes at the cost of significantly increased computational expense per analysis.
Although rapid analysis tools have been developed that are sufficient for many design efforts, those tools may not be accurate enough for revolutionary concepts subject to demanding flight conditions such as transonic or supersonic flight and extreme angles of attack. At such conditions, the simplifying assumptions of the rapid tools no longer hold. Accurate analysis of such concepts would require models that do not make those simplifying assumptions, with the corresponding increase in computational effort per analysis. As computational costs rise, exploration of the design space can become exceedingly expensive. If this expense cannot be reduced, decision-makers are forced to choose between a thorough exploration of the design space using inaccurate models and the analysis of a sparse set of options using accurate models. This problem is exacerbated as the number of free parameters increases, limiting the number of trades that can be investigated in a given time. In the face of limited resources, it becomes critically important that only the most useful experiments be performed, which raises two questions: how can the most useful experiments be identified, and how can experimental results be used most effectively?
This research effort focuses on identifying and applying techniques that address these questions. The demonstration problem for this effort was the modeling of a reusable booster vehicle, which would be subject to a wide range of flight conditions while returning to its launch site after staging. Contour-based sampling, an adaptive sampling technique, seeks cases that will improve the prediction accuracy of surrogate models for particular ranges of the responses of interest. In the case of the reusable booster, contour-based sampling was used to emphasize configurations with small pitching moments; the broad design space included many configurations that produced uncontrollable aerodynamic moments for at least one flight condition. By emphasizing designs that were likely to trim over the entire trajectory, contour-based sampling improved the predictive accuracy of surrogate models for such designs while minimizing the number of analyses required.
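To make the contour-based idea concrete, the sketch below scores candidate cases by how likely the surrogate believes they are to fall in the response band of interest (a small pitching-moment band here), weighted by predictive uncertainty. It uses scikit-learn's Gaussian process regressor; the band width, candidate pool, and scoring rule are illustrative assumptions, not the thesis's exact criterion.

```python
# Sketch of contour-based sampling: pick the next case where the surrogate is both
# uncertain and likely to lie in the response range of interest (small pitching
# moments here). Band width, candidate pool, and scoring rule are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(20, 2))          # designs analyzed so far
y_train = X_train[:, 0] ** 2 - 0.5 * X_train[:, 1]      # stand-in pitching-moment data

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

candidates = rng.uniform(-1.0, 1.0, size=(500, 2))      # cheap pool of candidate cases
mu, sigma = gp.predict(candidates, return_std=True)

# Probability that the response lies in the target band |Cm| < 0.05, weighted by the
# predictive uncertainty so poorly resolved regions near the contour are preferred.
band = 0.05
p_band = norm.cdf((band - mu) / sigma) - norm.cdf((-band - mu) / sigma)
score = p_band * sigma
print("next case to analyze:", candidates[np.argmax(score)])
```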
The simplified models mentioned above, although less accurate for extreme flight conditions, can still be useful for analyzing performance at more common flight conditions. The simplified models may also offer insight into trends in the response behavior. Data from these simplified models can be combined with more accurate results to produce useful surrogate models with better accuracy than the simplified models but at less cost than if only expensive analyses were used. Of the data fusion techniques evaluated, Ghoreyshi cokriging was found to be the most effective for the problem at hand.
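A minimal, generic version of this kind of fusion (not Ghoreyshi's specific cokriging formulation) trains one Gaussian process on the plentiful low-fidelity data and a second on the discrepancy observed at the few high-fidelity points; the analysis functions below are illustrative stand-ins.

```python
# Minimal correction-based fusion sketch (not Ghoreyshi's exact formulation): a GP
# trained on plentiful low-fidelity data supplies the trend, and a second GP models
# the discrepancy observed at the few high-fidelity points.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_lo(x):   # cheap, biased analysis (illustrative stand-in)
    return np.sin(8.0 * x).ravel() + 0.3 * x.ravel()

def f_hi(x):   # expensive analysis (illustrative stand-in)
    return np.sin(8.0 * x).ravel()

X_lo = np.linspace(0.0, 1.0, 40).reshape(-1, 1)   # many cheap runs
X_hi = np.linspace(0.0, 1.0, 6).reshape(-1, 1)    # few expensive runs

gp_lo = GaussianProcessRegressor(RBF(0.1), normalize_y=True).fit(X_lo, f_lo(X_lo))
delta = f_hi(X_hi) - gp_lo.predict(X_hi)          # discrepancy at the expensive points
gp_delta = GaussianProcessRegressor(RBF(0.2), normalize_y=True).fit(X_hi, delta)

def predict_fused(x):
    return gp_lo.predict(x) + gp_delta.predict(x)

print(predict_fused(np.linspace(0.0, 1.0, 5).reshape(-1, 1)))
```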
Lastly, uncertainty present in the data was found to degrade the predictive accuracy of surrogate models. Most surrogate modeling techniques neglect uncertainty in the data and treat all cases as deterministic. This assumption is plausible, especially for data produced by computer analyses, which are assumed to be perfectly repeatable and thus truly deterministic. However, a number of sources of uncertainty, such as solver iteration or surrogate model prediction accuracy, can introduce noise into the data. If these sources of uncertainty are captured and incorporated when surrogate models are trained, the resulting surrogate models are less susceptible to that noise and correspondingly have better predictive accuracy. This was accomplished in the present effort by capturing the uncertainty information via nuggets added to the Kriging model.
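In Kriging terms, this corresponds to adding a nugget so the fit no longer interpolates noisy cases exactly; in scikit-learn a per-case nugget can be supplied through the alpha argument, as in the sketch below (the data and noise levels are made up for illustration).

```python
# Kriging with a nugget: per-case noise variances (e.g., from solver iteration scatter)
# are passed through `alpha`, so the fit no longer interpolates noisy points exactly.
# The data and noise levels below are made up for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 15).reshape(-1, 1)
noise_var = np.full(15, 1e-2)                           # estimated variance per case
y = np.sin(6.0 * X).ravel() + rng.normal(0.0, np.sqrt(noise_var))

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=noise_var, normalize_y=True)
gp.fit(X, y)
mu, sigma = gp.predict(np.array([[0.5]]), return_std=True)
print(mu, sigma)
```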
By combining these techniques, surrogate models could be created which exhibited better predictive accuracy while selecting the most informative experiments possible. This significantly reduced the computational effort expended compared to a more standard approach using space-filling samples and data from a single source. The relative contributions of each technique were identified, and observations were made pertaining to the most effective way to apply the separate and combined methods.
13. A multi-fidelity framework for physics based rotor blade simulation and optimization. Collins, Kyle Brian. 17 November 2008.
New helicopter rotor designs are desired that offer increased efficiency, reduced vibration, and reduced noise. This problem is multidisciplinary, requiring knowledge of structural dynamics, aerodynamics, and aeroacoustics. Rotor optimization requires achieving multiple, often conflicting objectives: there is no longer a single optimum but rather an optimal trade-off space, the Pareto frontier. Rotor designers in industry need methods that allow the most accurate simulation tools available to search for Pareto designs. Computer simulation and optimization of rotors have been advanced by the development of "comprehensive" rotorcraft analysis tools. These tools perform aeroelastic analysis using Computational Structural Dynamics (CSD). Though useful in optimization, these tools lack built-in high-fidelity aerodynamic models. The most accurate rotor simulations utilize Computational Fluid Dynamics (CFD) coupled to the CSD of a comprehensive code, but are generally considered too time-consuming for applications that require numerous simulations, such as rotor optimization. An approach is needed in which high-fidelity CFD/CSD simulation can be routinely used in design optimization.
This thesis documents the development of physics-based rotor simulation frameworks. A low-fidelity framework uses a comprehensive code with simplified aerodynamics. A high-fidelity framework uses a parallel-processor-capable CFD/CSD methodology. Both frameworks include an aeroacoustic simulation for prediction of noise. A synergistic process is developed that uses both frameworks together to build approximate models of important high-fidelity metrics as functions of certain design variables. To test this process, a four-bladed hingeless rotor model is used as a baseline. The design variables investigated include tip geometry and spanwise twist. Approximation models are built for high-fidelity metrics related to rotor efficiency and vibration. Optimization using the approximation models found the designs having maximum rotor efficiency and minimum vibration. Various Pareto generation methods are used to find frontier designs between these two anchor designs. The Pareto anchors are tested in the high-fidelity simulation and shown to be good designs, providing evidence that the process has merit. Ultimately, this process can be utilized by industry rotor designers with their existing tools to bring high-fidelity analysis into the preliminary design stage of rotors.
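As one illustration of generating frontier designs between two anchors, the sketch below sweeps a weighted sum of two surrogate objectives (a stand-in efficiency metric to maximize and a vibration metric to minimize). The surrogate functions, bounds, and weighted-sum scheme are assumptions for illustration only; weighted sums are just one of the several Pareto generation methods the thesis refers to.

```python
# Illustrative sweep between the two anchor designs: with cheap approximation models of
# rotor efficiency and vibration in hand, a weighted-sum scan traces candidate Pareto
# designs between the single-objective optima. The surrogates and bounds are stand-ins.
import numpy as np
from scipy.optimize import minimize

def efficiency(x):   # surrogate of a rotor-efficiency metric to maximize (illustrative)
    return 1.0 - (x[0] - 0.6) ** 2 - 0.5 * (x[1] - 0.3) ** 2

def vibration(x):    # surrogate of a vibration metric to minimize (illustrative)
    return (x[0] - 0.2) ** 2 + (x[1] - 0.7) ** 2

bounds = [(0.0, 1.0), (0.0, 1.0)]   # normalized tip-geometry and twist variables
for w in np.linspace(0.0, 1.0, 11):
    obj = lambda x, w=w: -w * efficiency(x) + (1.0 - w) * vibration(x)
    res = minimize(obj, x0=[0.5, 0.5], bounds=bounds)
    print(f"w={w:.1f}  design={res.x.round(3)}  "
          f"efficiency={efficiency(res.x):.3f}  vibration={vibration(res.x):.3f}")
```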
14. Effective formulations of optimization under uncertainty for aerospace design. Cook, Laurence William. January 2018.
Formulations of optimization under uncertainty (OUU) commonly used in aerospace design, namely those that treat statistical moments of the quantity of interest (QOI) as separate objectives, can result in stochastically dominated designs. A stochastically dominated design is undesirable because it is less likely than another design to achieve a QOI at least as good as a given value, for any given value. As a remedy to this limitation of the multi-objective moment formulation, a novel OUU formulation is proposed: dominance optimization. This formulation seeks a set of solutions and makes use of global optimizers, so it is useful for early stages of the design process, when exploration of the design space is important. Similarly, to address this limitation for the single-objective moment formulation (combining moments via a weighted sum), a second novel formulation is proposed: horsetail matching. This formulation can make use of gradient-based local optimizers, so it is useful for later stages of the design process, when exploitation of a region of the design space is important. Additionally, horsetail matching extends straightforwardly to different representations of uncertainty and is flexible enough to emulate several existing OUU formulations. Existing multi-fidelity methods for OUU are not compatible with these novel formulations, so one such method, information reuse, is generalized to be compatible with these and other formulations. The proposed formulations, along with generalized information reuse, are compared to their closest equivalents in the current state of the art on practical design problems: transonic aerofoil design, coupled aero-structural wing design, high-fidelity 3D wing design, and acoustic horn shape design. Finally, the two novel formulations are combined in a two-step design process, which is used to obtain a robust design in a challenging version of the acoustic horn design problem. Dominance optimization is given half the computational budget for exploration; horsetail matching is then given the other half for exploitation. Using exactly the same computational budget as a moment-based approach, the design obtained using the novel formulations is 95% more likely to achieve a better QOI than the best value achievable by the moment-based design.
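The dominance concept can be checked empirically: design A first-order stochastically dominates design B if A's empirical CDF of the QOI is at least as favorable as B's at every threshold. The sketch below performs this generic check on synthetic Monte Carlo samples; it illustrates the concept only, not the dissertation's dominance-optimization algorithm.

```python
# Empirical check of first-order stochastic dominance: with lower QOI better, design A
# dominates design B if, for every threshold q, P(QOI_A <= q) >= P(QOI_B <= q) with a
# strict inequality somewhere. The samples are synthetic stand-ins for Monte Carlo runs.
import numpy as np

rng = np.random.default_rng(2)
qoi_a = rng.normal(loc=1.00, scale=0.10, size=2000)   # candidate design A
qoi_b = rng.normal(loc=1.10, scale=0.10, size=2000)   # candidate design B

def dominates(samples_a, samples_b):
    """True if A first-order stochastically dominates B (lower QOI is better)."""
    grid = np.linspace(min(samples_a.min(), samples_b.min()),
                       max(samples_a.max(), samples_b.max()), 200)
    cdf_a = np.searchsorted(np.sort(samples_a), grid, side="right") / samples_a.size
    cdf_b = np.searchsorted(np.sort(samples_b), grid, side="right") / samples_b.size
    return bool(np.all(cdf_a >= cdf_b) and np.any(cdf_a > cdf_b))

print("A dominates B:", dominates(qoi_a, qoi_b))
```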
15. Sequential design of numerical experiments in multi-fidelity: Application to a fire simulator (Planification d'expériences numériques en multi-fidélité : Application à un simulateur d'incendies). Stroh, Rémi. 26 June 2018.
This work focuses on the study of multi-fidelity numerical models, deterministic or stochastic. More precisely, the models considered have a parameter that controls the quality of the simulation, such as a mesh size in a finite-difference model or a number of samples in a Monte Carlo model. In that case, the simulator can run low-fidelity simulations, fast but coarse, or high-fidelity simulations, accurate but expensive. A multi-fidelity approach aims to combine results from different levels of fidelity in order to save computational time. The method considered is based on a Bayesian approach. The simulator is described by a state-of-the-art multilevel Gaussian process model, which we adapt to stochastic cases in a fully Bayesian framework. This meta-model of the simulator allows estimating quantities of interest together with a measure of the associated uncertainty. The goal is then to choose new experiments to run in order to improve the estimates. In particular, the design must select the level of fidelity offering the best trade-off between observation cost and information gain. To do this, we propose a sequential strategy suited to the case of variable observation costs, called Maximum Rate of Uncertainty Reduction (MRUR), which consists of choosing the observation point that maximizes the ratio between uncertainty reduction and cost. The methodology is illustrated in fire safety science, where we estimate probabilities of failure of a fire protection system.
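Schematically, the MRUR rule picks the (input, fidelity) pair that maximizes estimated uncertainty reduction divided by observation cost. In the sketch below the reduction estimate is a crude placeholder rather than the multilevel Gaussian-process computation used in the thesis, and the costs and candidates are illustrative.

```python
# Schematic version of the MRUR rule: for each candidate input and fidelity level,
# estimate the drop in uncertainty on the quantity of interest and divide by that
# level's cost; run the (input, level) pair with the largest ratio. The reduction
# estimate below is a crude placeholder for the multilevel-GP computation.
import numpy as np

rng = np.random.default_rng(3)
candidates = rng.uniform(0.0, 1.0, size=(200, 3))      # candidate simulator inputs
costs = {0: 1.0, 1: 20.0}                               # low- / high-fidelity run cost

def expected_uncertainty_reduction(x, level):
    """Placeholder: reduction grows with local predictive variance and fidelity."""
    predictive_var = 0.2 + 0.1 * np.sin(10.0 * x[0])    # stand-in surrogate variance
    fidelity_gain = {0: 0.4, 1: 1.0}[level]             # high-fidelity runs teach more
    return fidelity_gain * predictive_var

best = max(((x, level) for x in candidates for level in costs),
           key=lambda pair: expected_uncertainty_reduction(*pair) / costs[pair[1]])
print("next run: fidelity level", best[1], "at input", best[0].round(3))
```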
16. Multi-fidelity, Multidisciplinary Design Analysis and Optimization of the Efficient Supersonic Air Vehicle. Lickenbrock, Madeline Clare. January 2020.
No description available.
17. Multi-fidelity Modeling and Multi-objective Bayesian Optimization Supported by Compositions of Gaussian Processes. Homero Santiago Valladares Guerra. 01 May 2023.
Practical design problems in engineering and science involve the evaluation of expensive black-box functions, the optimization of multiple, often conflicting, targets, and the integration of data generated by multiple sources of information, e.g., numerical models with different levels of fidelity. If not properly handled, the complexity of these design problems can lead to lengthy and costly development cycles. In recent years, Bayesian optimization has emerged as a powerful alternative for solving optimization problems that involve the evaluation of expensive black-box functions. Bayesian optimization has two main components: a probabilistic surrogate model of the black-box function and an acquisition function that drives the optimization. Its ability to find high-performance designs within a limited number of function evaluations has attracted the attention of many fields, including the engineering design community. The practical relevance of strategies that can fuse information emerging from different sources, together with the need to optimize multiple targets, has motivated the development of multi-fidelity modeling techniques and multi-objective Bayesian optimization methods. A key component in the vast majority of these methods is the Gaussian process (GP), owing to its flexibility and mathematical properties.
The objective of this dissertation is to develop new approaches in the areas of multi-fidelity modeling and multi-objective Bayesian optimization. To achieve this goal, this study explores the use of linear and non-linear compositions of GPs to build probabilistic models for Bayesian optimization. Additionally, motivated by the rationale behind well-established multi-objective methods, this study presents a novel acquisition function to solve multi-objective optimization problems in a Bayesian framework. This dissertation presents four contributions. First, the auto-regressive model, one of the most prominent multi-fidelity models in engineering design, is extended to include informative mean functions that capture prior knowledge about the global trend of the sources. This additional information enhances the predictive capabilities of the surrogate. Second, the non-linear auto-regressive Gaussian process (NARGP) model, a non-linear multi-fidelity model, is integrated into a multi-objective Bayesian optimization framework. The NARGP model offers the possibility of leveraging sources that present non-linear cross-correlations to enhance the performance of the optimization process. Third, GP classifiers, which employ non-linear compositions of GPs, are combined with conditional probabilities to solve multi-objective problems. Finally, a new multi-objective acquisition function is presented. This function employs two terms: a distance-based metric, the expected Pareto distance change, that captures the optimality of a given design, and a diversity index that prevents the evaluation of non-informative designs. The proposed acquisition function generates informative landscapes that produce Pareto front approximations that are both broad and diverse.
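A minimal sketch of the two-level auto-regressive structure, f_hi(x) = rho * f_lo(x) + delta(x), with a simple informative (linear) mean handled by detrending, is given below. Here rho is estimated by least squares, and everything shown is an illustrative simplification rather than the dissertation's fully Bayesian treatment.

```python
# Sketch of the two-level auto-regressive structure f_hi(x) = rho * f_lo(x) + delta(x),
# with a simple informative (linear) mean handled by detrending the low-fidelity data.
# rho is estimated by least squares; all of this is an illustrative simplification.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def lo(x):   # cheap source (illustrative)
    return 0.8 * np.sin(6.0 * x).ravel() + 0.4

def hi(x):   # expensive source (illustrative)
    return np.sin(6.0 * x).ravel() + 0.1 * x.ravel()

X_lo = np.linspace(0.0, 1.0, 30).reshape(-1, 1)
X_hi = np.linspace(0.0, 1.0, 6).reshape(-1, 1)

# Informative mean for the low-fidelity GP: a linear trend fit to the data.
trend = np.polynomial.Polynomial.fit(X_lo.ravel(), lo(X_lo), deg=1)
gp_lo = GaussianProcessRegressor(RBF(0.15)).fit(X_lo, lo(X_lo) - trend(X_lo.ravel()))

def mean_lo(x):
    return gp_lo.predict(x) + trend(x.ravel())

# Scaling factor rho and discrepancy GP from the few high-fidelity runs.
rho = np.linalg.lstsq(mean_lo(X_hi).reshape(-1, 1), hi(X_hi), rcond=None)[0][0]
gp_delta = GaussianProcessRegressor(RBF(0.3)).fit(X_hi, hi(X_hi) - rho * mean_lo(X_hi))

def predict_hi(x):
    return rho * mean_lo(x) + gp_delta.predict(x)

print(predict_hi(np.array([[0.25], [0.75]])))
```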
18. Uncertainty Quantification Using Simulation-based and Simulation-free methods with Active Learning Approaches. Zhang, Chi. January 2022.
No description available.
19. Multi-level Deep Operator Learning with Applications to Distributional Shift, Uncertainty Quantification and Multi-fidelity Learning. Rohan Moreshwar Dekate. 07 May 2024.
Neural operator learning is emerging as a prominent technique in scientific machine learning for modeling complex nonlinear systems with multi-physics and multi-scale applications. A common drawback of such operators is that they are data-hungry, and the results are highly dependent on the quality and quantity of the training data provided to the models. Moreover, obtaining high-quality data in sufficient quantity can be computationally prohibitive. Faster surrogate models that can be learned from datasets of variable fidelity, and that also quantify uncertainty, are required to overcome this drawback. In this work, we propose a Multi-Level Stacked Deep Operator Network (MLSDON), which can learn from datasets of different fidelity and is not dependent on the input function. Through various experiments, we demonstrate that the MLSDON can approximate the high-fidelity solution operator with better accuracy compared to a vanilla DeepONet when sufficient high-fidelity data is unavailable. We also extend MLSDON to build robust confidence intervals by making conformalized predictions. This technique guarantees trajectory coverage of the predictions irrespective of the input distribution. Various numerical experiments are conducted to demonstrate the applicability of MLSDON to multi-fidelity, multi-scale, and multi-physics problems.
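One common way to produce conformalized predictions is split conformal calibration: compute residual nonconformity scores on held-out data and use their quantile as the interval half-width at the requested coverage. The sketch below uses a trivial stand-in surrogate and is not claimed to be the MLSDON procedure itself.

```python
# Split-conformal sketch of "conformalized predictions": residual scores on held-out
# calibration data give a quantile used as the interval half-width at the requested
# coverage, whatever the underlying surrogate. The model here is a trivial stand-in.
import numpy as np

rng = np.random.default_rng(4)

def surrogate(x):                                    # stand-in for a trained operator surrogate
    return np.sin(x)

x_cal = rng.uniform(0.0, 6.0, 500)
y_cal = np.sin(x_cal) + rng.normal(0.0, 0.1, 500)    # noisy reference solutions

alpha = 0.1                                          # target 90% coverage
scores = np.abs(y_cal - surrogate(x_cal))            # nonconformity scores
k = int(np.ceil((1.0 - alpha) * (len(scores) + 1)))  # conservative rank
q = np.sort(scores)[min(k, len(scores)) - 1]

x_new = np.array([1.0, 2.5, 4.0])
pred = surrogate(x_new)
print("intervals:", list(zip(pred - q, pred + q)))
```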
20. Bayesian adaptive sampling for discrete design alternatives in conceptual design. Valenzuela-Del Rio, Jose Eugenio. 13 January 2014.
The number of technology alternatives has grown in recent years to satisfy the increasingly demanding goals of modern engineering. These technology alternatives are handled in the design process as either concepts or categorical design inputs. Additionally, designers wish to bring increasingly accurate, but also computationally burdensome, simulation tools into early design to obtain better-performing initial designs that are more valuable in subsequent design stages. This constrains the computational budget available to optimize over the design space. These two factors reveal the need for a conceptual design methodology that uses sophisticated tools more efficiently for engineering problems with several concept solutions and categorical design choices. Enhanced initial designs and discrete alternative selection are pursued.
Advances in computational speed and the development of Bayesian adaptive sampling techniques have enabled industry to move from look-up tables and simplified models to complex physics-based tools in conceptual design. These techniques focus computational resources on promising design areas. Nevertheless, the vast majority of this work has been done on problems with continuous spaces, with concepts and categories treated independently. However, observations show that engineering objectives exhibit similar topographical trends across many engineering alternatives.
In order to address these challenges, two meta-models are developed. The first borrows the Hamming distance and function-space norms from machine learning and functional analysis, respectively. These distances allow categorical metrics to be defined and used to build a single probabilistic surrogate whose domain includes not only continuous and integer variables but also categorical ones. The second meta-model is based on a multi-fidelity approach that enhances a concept prediction with observations of previous concepts. These methodologies leverage the similar trends seen in observations and make better use of sample points, increasing the quality of the output in discrete alternative selection and initial design for a given analysis budget. An extension of stochastic mixed-integer optimization techniques to include the categorical dimension is developed by adding appropriate generation, mutation, and crossover operators. The resulting stochastic algorithm is employed to adaptively sample mixed-integer-categorical design spaces.
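In the spirit of the first meta-model, the sketch below multiplies a Gaussian kernel on the continuous variables by an exponential of the Hamming distance on the categorical choices, so a single surrogate can span discrete alternatives; the hyperparameters and example categories are fixed, illustrative assumptions.

```python
# Minimal mixed-variable kernel in the spirit described above: a Gaussian kernel on the
# continuous variables multiplied by an exponential of the Hamming distance on the
# categorical choices, so one surrogate can span discrete alternatives. Hyperparameters
# and the example categories are fixed, illustrative assumptions.
import numpy as np

def mixed_kernel(x1, c1, x2, c2, length_scale=0.3, theta=1.5):
    cont = np.exp(-0.5 * np.sum((x1 - x2) ** 2) / length_scale ** 2)
    hamming = np.mean([a != b for a, b in zip(c1, c2)])   # fraction of differing categories
    return cont * np.exp(-theta * hamming)

# Two designs: continuous variables plus categorical choices (e.g., airfoil family, tip shape).
xa, ca = np.array([0.20, 0.70]), ("family_A", "swept_tip")
xb, cb = np.array([0.25, 0.65]), ("family_B", "swept_tip")
print(mixed_kernel(xa, ca, xb, cb))
```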
The proposed surrogates are compared against traditional independent methods on a set of canonical problems and a physics-based rotorcraft model over a screened design space. Next, adaptive sampling algorithms built on the developed surrogates are applied to the same problems. These tests provide evidence of the merit of the proposed methodologies. Finally, a multi-objective rotorcraft design application is performed over a large design space.
This thesis provides several novel academic contributions. The first contribution is the development of new, efficient surrogates for systems with categorical design choices. Second, an adaptive sampling algorithm is proposed for systems with mixed-integer-categorical design spaces. Finally, previously sampled concepts can be brought in to construct efficient surrogates of novel concepts. With engineering judgment, the design community could apply these contributions to discrete alternative selection and initial design assessment when similar topographical trends are observed across different categories and/or concepts. These contributions could also be crucial in overcoming the current cost of carrying a set of concepts, and wider design spaces in the categorical dimension, forward into preliminary design.