101

Optimization Under Uncertainty and Total Predictive Uncertainty for a Tractor-Trailer Base-Drag Reduction Device

Freeman, Jacob Andrew 07 September 2012 (has links)
One key outcome of this research is the design for a 3-D tractor-trailer base-drag reduction device that is predicted to achieve a 41% reduction in wind-averaged drag coefficient at 57 mph (92 km/h) and that is relatively insensitive to uncertain wind speed and direction and uncertain deflection angles due to mounting accuracy and static aeroelastic loading; the best commercial device of non-optimized design achieves a 12% reduction at 65 mph. Another important outcome is the process by which the optimized design is obtained. That process includes verification and validation of the flow solver, a less complex but much broader 2-D pathfinder study, and the culminating 3-D aerodynamic shape optimization under uncertainty (OUU) study. To gain confidence in the accuracy and precision of a computational fluid dynamics (CFD) flow solver and its Reynolds-averaged Navier-Stokes (RANS) turbulence models, it is necessary to conduct code verification, solution verification, and model validation. These activities are accomplished using two commercial CFD solvers, Cobalt and RavenCFD, with four turbulence models: Spalart-Allmaras (S-A), S-A with rotation and curvature, Menter shear-stress transport (SST), and Wilcox 1998 k-ω. Model performance is evaluated for three low subsonic 2-D applications: turbulent flat plate, planar jet, and NACA 0012 airfoil at α = 0°. The S-A turbulence model is selected for the 2-D OUU study. In the 2-D study, a tractor-trailer base flap model is developed that includes six design variables with generous constraints; 400 design candidates are evaluated. The design optimization loop includes the effect of uncertain wind speed and direction, and post-processing addresses several other uncertain effects on drag prediction. The study compares the efficiency and accuracy of two optimization algorithms, an evolutionary algorithm (EA) and dividing rectangles (DIRECT), twelve surrogate models, six sampling methods, and surrogate-based global optimization (SBGO) methods. The DAKOTA optimization and uncertainty quantification framework is used to interface the RANS flow solver, grid generator, and optimization algorithm. The EA is determined to be more efficient in obtaining a design with significantly reduced drag (as opposed to more efficient in finding the true drag minimum), and total predictive uncertainty is estimated as ±11%. While the SBGO methods are more efficient than a traditional optimization algorithm, they are computationally inefficient due to their serial nature, as implemented in DAKOTA. Because the S-A model does well in 2-D but not in 3-D under these conditions, the SST turbulence model is selected for the 3-D OUU study that includes five design variables and evaluates a total of 130 design candidates. Again using the EA, the study propagates aleatory (wind speed and direction) and epistemic (perturbations in flap deflection angle) uncertainty within the optimization loop and post-processes several other uncertain effects. For the best 3-D design, total predictive uncertainty is +15/-42%, due largely to using a relatively coarse (six million cell) grid. That is, the estimated drag coefficient of the best design lies within +15% and -42% of the true value; however, its improvement relative to the no-flaps baseline is accurate to within 3-9% uncertainty. / Ph. D.
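To illustrate how a wind-averaged drag coefficient responds to uncertain wind, the sketch below propagates an assumed wind speed/direction distribution through a drag-versus-yaw surrogate by Monte Carlo sampling. The `cd_of_yaw` surrogate, the Weibull wind model, the yaw/dynamic-pressure weighting and all numerical values are illustrative assumptions, not results or methods from the thesis.

```python
import numpy as np

def wind_averaged_drag(cd_of_yaw, v_vehicle=25.5, n_samples=2000, rng=None):
    """Monte Carlo estimate of a wind-averaged drag coefficient (illustrative sketch).

    cd_of_yaw : callable mapping effective yaw angle (deg) to drag coefficient,
                e.g. a CFD-based surrogate (hypothetical here).
    """
    rng = np.random.default_rng(rng)
    # Uncertain wind: speed (m/s) and direction (deg) drawn from assumed distributions.
    wind_speed = rng.weibull(2.0, n_samples) * 3.0        # assumed Weibull wind model
    wind_dir = rng.uniform(0.0, 360.0, n_samples)         # uniformly distributed heading
    # Resolve the wind into components relative to the direction of travel.
    cross = wind_speed * np.sin(np.radians(wind_dir))
    along = wind_speed * np.cos(np.radians(wind_dir))
    yaw = np.degrees(np.arctan2(cross, v_vehicle + along))            # effective yaw angle
    q_ratio = (cross**2 + (v_vehicle + along)**2) / v_vehicle**2      # dynamic-pressure ratio
    return np.mean(cd_of_yaw(np.abs(yaw)) * q_ratio)

# Toy surrogate: drag rises roughly quadratically with yaw (illustrative only);
# 25.5 m/s corresponds to the 57 mph road speed quoted in the abstract.
cd_wa = wind_averaged_drag(lambda psi: 0.6 + 2e-4 * psi**2, v_vehicle=25.5)
print("wind-averaged Cd:", round(cd_wa, 3))
```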
102

Advanced Sampling Methods for Solving Large-Scale Inverse Problems

Attia, Ahmed Mohamed Mohamed 19 September 2016 (has links)
Ensemble and variational techniques have gained wide popularity as the two main approaches for solving data assimilation and inverse problems. The majority of the methods in these two approaches are derived (at least implicitly) under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. This work develops a family of fully non-Gaussian data assimilation algorithms that work by directly sampling the posterior distribution. The sampling strategy is based on a Hybrid/Hamiltonian Monte Carlo (HMC) approach that can handle non-normal probability distributions. The first algorithm proposed in this work is the "HMC sampling filter", an ensemble-based data assimilation algorithm for solving the sequential filtering problem. Unlike traditional ensemble-based filters, such as the ensemble Kalman filter and the maximum likelihood ensemble filter, the proposed sampling filter naturally accommodates non-Gaussian errors and nonlinear model dynamics, as well as nonlinear observations. To test the capabilities of the HMC sampling filter, numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The filter is also tested with a shallow-water model on the sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge. Next, the HMC sampling approach is extended to the four-dimensional case, where several observations are assimilated simultaneously, resulting in the second member of the proposed family of algorithms. The new algorithm, named "HMC sampling smoother", is an ensemble-based smoother for four-dimensional data assimilation that works by sampling from the posterior probability density of the solution at the initial time. The sampling smoother naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators, and provides a full description of the posterior distribution. Numerical experiments for this algorithm are carried out using a shallow water model on the sphere with observation operators of different levels of nonlinearity. The numerical results demonstrate the advantages of the proposed method compared to the traditional variational and ensemble-based smoothing methods. The HMC sampling smoother, in its original formulation, is computationally expensive due to the innate requirement of running the forward and adjoint models repeatedly. The proposed family of algorithms therefore continues with computationally efficient versions of the HMC sampling smoother, based on reduced-order approximations of the underlying model dynamics. The reduced-order HMC sampling smoothers, developed as extensions to the original HMC smoother, are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation. In the presence of nonlinear model dynamics, a nonlinear observation operator, or non-Gaussian errors, the prior distribution in the sequential data assimilation framework is not analytically tractable.
In the original formulation of the HMC sampling filter, the prior distribution is approximated by a Gaussian distribution whose parameters are inferred from the ensemble of forecasts. Here, the Gaussian prior assumption in the original HMC filter is relaxed. Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. The base filter developed following this strategy is named the cluster HMC sampling filter (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. These methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm. To provide a unified platform for data assimilation research, a flexible and highly extensible testing suite, named DATeS, is developed and described in this work. The core of DATeS is implemented in Python to enable object-oriented capabilities. The main components, such as the models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines, are independent of each other, so as to offer maximum flexibility to configure data assimilation studies. / Ph. D.
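As a companion to the description above, here is a minimal, generic Hamiltonian Monte Carlo sampler of the kind that underlies the sampling filter and smoother. It assumes the log-posterior and its gradient are available as callables; it is a bare-bones sketch, not the DATeS or ClHMC implementation.

```python
import numpy as np

def hmc_sample(log_post, grad_log_post, x0, n_samples=500, eps=0.05, n_leap=20, rng=None):
    """Minimal Hamiltonian Monte Carlo sampler (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)                 # auxiliary momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new += 0.5 * eps * grad_log_post(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_post(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_post(x_new)
        # Metropolis accept/reject based on the total-energy change.
        dH = (log_post(x_new) - 0.5 * p_new @ p_new) - (log_post(x) - 0.5 * p @ p)
        if np.log(rng.uniform()) < dH:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Usage: sample a correlated 2-D Gaussian as a stand-in for a filtering posterior.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
logp = lambda x: -0.5 * x @ cov_inv @ x
grad = lambda x: -cov_inv @ x
draws = hmc_sample(logp, grad, x0=[3.0, -3.0])
```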
103

Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning

Wu, Jinlong 25 September 2018 (has links)
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress needs closure models and the existing models have large model-form uncertainties. Therefore, the RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative to develop the model of Reynolds stress for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of RANS modeled Reynolds stress by leveraging online sparse measurement data with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models have better prediction of Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence for the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled. / Ph. D.
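The machine-learning-assisted idea can be pictured as a regression from mean-flow features to a Reynolds-stress correction, trained offline on high-fidelity data. The sketch below uses a random-forest regressor on synthetic data purely for illustration; the feature set, target definition and regressor choice are assumptions, not the dissertation's actual framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_train, n_feat = 2000, 5
# Placeholder features, e.g. strain-rate/rotation invariants from a RANS mean flow.
X_train = rng.normal(size=(n_train, n_feat))
# Synthetic "discrepancy" target standing in for high-fidelity-minus-RANS stress data.
y_train = 0.3 * X_train[:, 0] ** 2 - 0.1 * X_train[:, 1] + rng.normal(0, 0.02, n_train)

# Train the regressor offline on the database, then apply it to a new RANS prediction.
model = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
model.fit(X_train, y_train)

X_new = rng.normal(size=(10, n_feat))          # features extracted from a new RANS flow
tau_correction = model.predict(X_new)          # predicted Reynolds-stress correction
print(tau_correction[:3])
```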
104

Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes

Macatula, Romcholo Yulo 21 July 2020 (has links)
We consider uncertainty quantification using surrogate Gaussian processes. We take a previous sampling algorithm and provide a closed-form expression of the resulting posterior distribution. We extend the method to weighted least squares and a Bayesian approach, both with closed-form expressions of the resulting posterior distributions. We test the methods on 1D deconvolution and 2D tomography problems. Our new methods improve on the previous algorithm; however, they fall short of a typical Bayesian inference method in some respects. / Master of Science / Parameter uncertainty quantification seeks to determine both estimates of model parameters and the uncertainty in those estimates. Examples of model parameters include physical properties such as density, growth rates, or even deblurred images. Previous work has shown that replacing the data with a surrogate model can provide promising estimates with low uncertainty. We extend the previous methods to the specific setting of linear models. Theoretical results are tested on simulated computed tomography problems.
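For a linear forward model with Gaussian noise and a Gaussian prior, the posterior mentioned above is available in closed form. The sketch below writes out the standard conjugate expressions and applies them to a toy 1D deconvolution problem; it does not reproduce the thesis's surrogate-Gaussian-process construction.

```python
import numpy as np

def linear_gaussian_posterior(A, y, noise_cov, prior_mean, prior_cov):
    """Closed-form posterior for y = A x + e, e ~ N(0, noise_cov), x ~ N(prior_mean, prior_cov)."""
    noise_inv = np.linalg.inv(noise_cov)
    prior_inv = np.linalg.inv(prior_cov)
    Sigma = np.linalg.inv(A.T @ noise_inv @ A + prior_inv)        # posterior covariance
    mu = Sigma @ (A.T @ noise_inv @ y + prior_inv @ prior_mean)   # posterior mean
    return mu, Sigma

# Toy 1D deconvolution: blur a signal with a Gaussian kernel and infer it back.
n = 50
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 2.0) ** 2)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 0.05 * np.random.default_rng(1).standard_normal(n)
mu, Sigma = linear_gaussian_posterior(A, y, 0.05**2 * np.eye(n), np.zeros(n), np.eye(n))
print(np.round(mu[:5], 3))    # posterior mean estimate of the first few signal values
```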
105

Multiscale Methods and Uncertainty Quantification

Elfverson, Daniel January 2015 (has links)
In this thesis we consider two great challenges in computer simulations of partial differential equations: multiscale data, varying over multiple scales in space and time, and data uncertainty, due to missing or inexact measurements. We develop a multiscale method based on a coarse scale correction, using localized fine scale computations. We prove that the error in the solution produced by the multiscale method decays independently of the fine scale variation in the data or the computational domain. We consider the following aspects of multiscale methods: continuous and discontinuous underlying numerical methods, adaptivity, convection-diffusion problems, Petrov-Galerkin formulation, and complex geometries. For uncertainty quantification problems we consider the estimation of p-quantiles and failure probability. We use spatial a posteriori error estimates to develop and improve variance reduction techniques for Monte Carlo methods. We improve standard Monte Carlo methods for computing p-quantiles and multilevel Monte Carlo methods for computing failure probability.
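The multilevel Monte Carlo idea referred to above can be summarised as a telescoping sum of level-wise corrections. The sketch below shows the plain estimator on a synthetic quantity of interest; in practice the fine/coarse pair at each level is computed from the same random input so the correction has small variance, and the level sample sizes are chosen from estimated variances and costs.

```python
import numpy as np

def mlmc_estimate(sample_level, n_per_level):
    """Minimal multilevel Monte Carlo estimator (illustrative sketch).

    sample_level(l, n) returns n samples of (Q_l, Q_{l-1}), the quantity of interest
    on meshes of level l and l-1 (with Q_{-1} := 0 on the coarsest level).
    """
    estimate = 0.0
    for level, n in enumerate(n_per_level):
        q_fine, q_coarse = sample_level(level, n)
        estimate += np.mean(q_fine - q_coarse)    # telescoping sum of level corrections
    return estimate

# Toy stand-in: Q_l is a noisy observation whose bias decays like 2^-l with the level.
rng = np.random.default_rng(0)
def toy_sampler(level, n):
    bias = 2.0 ** (-level)
    q_fine = 1.0 + bias + rng.normal(0.0, 0.5, n)
    q_coarse = 1.0 + 2.0 * bias + rng.normal(0.0, 0.5, n) if level > 0 else np.zeros(n)
    return q_fine, q_coarse

print(mlmc_estimate(toy_sampler, n_per_level=[4000, 1000, 250]))   # approaches 1 + 2^-2
```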
106

Quantification of uncertainty in the magnetic characteristic of steel and permanent magnets and their effect on the performance of permanent magnet synchronous machine

Abhijit Sahu (5930828) 15 August 2019 (has links)
The numerical calculation of the electromagnetic fields within electric machines is sensitive to the magnetic characteristic of steel. However, the magnetic characteristic of steel is uncertain due to fluctuations in alloy composition, possible contamination, and other manufacturing process variations, including punching. Previous attempts to quantify magnetic uncertainty due to punching are based on parametric analytical models of B-H curves, where the uncertainty is reflected by model parameters. In this work, we set forth a data-driven approach for quantifying the uncertainty due to punching in B-H curves. In addition to the magnetic characteristics of steel laminations, the remanent flux density (B_r) exhibited by the permanent magnets in a permanent magnet synchronous machine (PMSM) is also uncertain due to unpredictable variations in the manufacturing process. Previous studies consider the impact of uncertainties in B-H curves and B_r of the permanent magnets on the average torque, cogging torque, torque ripple and losses of a PMSM. However, studies pertaining to the impact of these uncertainties on the combined machine/drive system of a PMSM are scarce in the literature. Hence, the objective of this work is to study the effect of B-H and B_r uncertainties on the performance of a PMSM machine/drive system using a validated finite element simulator.

Our approach is as follows. First, we use principal component analysis to build a reduced-order stochastic model of B-H curves from a synthetic dataset containing B-H curves affected by punching. Second, we model the uncertainty in B_r and other uncertainties in the B-H characteristics, e.g. those due to the unknown state of the material composition and the unavailability of accurate data in the deep saturation region. Third, to overcome the computational limitations of the finite element simulator, we replace it with surrogate models based on Gaussian process regression. Fourth, we perform propagation studies to assess the effect of B-H and B_r uncertainties on the average torque, torque ripple and the PMSM machine/drive system using the constructed surrogate models.
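The first step, a PCA-based reduced-order stochastic model of B-H curves, can be sketched as follows. The synthetic curve dataset, the number of retained modes and the way new curves are sampled are illustrative assumptions, not the data or settings used in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
H = np.linspace(10.0, 1e4, 200)                       # field strength grid (A/m)
base = 1.8 * H / (H + 500.0)                          # nominal saturating B-H shape (T)
# Synthetic stand-in for punching-affected curves: random degradation of the knee region.
curves = base + rng.normal(0, 0.03, (300, 1)) * H / (H + 2000.0)

pca = PCA(n_components=3)                             # keep the leading modes
scores = pca.fit_transform(curves)                    # per-curve coefficients
print("variance explained:", np.round(pca.explained_variance_ratio_, 4))

# Sample a new plausible B-H curve from the reduced-order stochastic model.
xi = rng.normal(0, scores.std(axis=0))                # perturb the retained coefficients
b_sample = pca.mean_ + xi @ pca.components_
```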
107

Sensitivity analysis for nonlinear hyperbolic equations of conservation laws

Fiorini, Camilla 11 July 2018 (has links)
Sensitivity analysis (SA) concerns the quantification of changes in the solution of partial differential equations (PDEs) due to perturbations in the model input. Standard SA techniques for PDEs, such as the continuous sensitivity equation method, rely on the differentiation of the state variable. However, if the governing equations are hyperbolic PDEs, the state can exhibit discontinuities, yielding Dirac delta functions in the sensitivity. We aim at modifying the sensitivity equations to obtain a solution without delta functions. This is motivated by several reasons: firstly, a Dirac delta function cannot be captured numerically, leading to an incorrect solution for the sensitivity in the neighbourhood of the state discontinuity; secondly, the spikes appearing in the numerical solution of the original sensitivity equations make such sensitivities unusable for some applications. Therefore, we add a correction term to the sensitivity equations. We do this for a hierarchy of models of increasing complexity: from the inviscid Burgers equation to the quasi-1D Euler system. We show the influence of this correction term on an optimization algorithm and on an uncertainty quantification problem.
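For smooth solutions, the continuous sensitivity equation for the inviscid Burgers equation follows by differentiating the state equation with respect to a parameter a; across a shock this formal derivation breaks down and Dirac terms appear, which is what motivates the correction term described above. A sketch of the smooth-solution derivation (the correction term itself is not reproduced here):

```latex
% State and sensitivity equations for inviscid Burgers (smooth solutions),
% with u = u(x,t;a) and sensitivity s = \partial u / \partial a.
\partial_t u + \partial_x \!\left( \tfrac{1}{2} u^2 \right) = 0
\quad\xrightarrow{\;\partial/\partial a\;}\quad
\partial_t s + \partial_x \!\left( u\, s \right) = 0 .
```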
108

Uncertainty Quantification for Scale-Bridging Modeling of Multiphase Reactive Flows

Iavarone, Salvatore 24 April 2019 (has links) (PDF)
The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of novel and cost-effective combustion technologies and the minimization of environmental concerns at industrial scale. CFD simulations facilitate scaling-up procedures that otherwise would be complicated by strong interactions between reaction kinetics, turbulence and heat transfer. CFD calculations can be applied directly at the industrial scale of interest, thus avoiding scaling-up from lab-scale experiments. However, this advantage can only be obtained if CFD tools are quantitatively predictive and trusted as such. Despite the improvements in computational capability, the implementation of detailed physical and chemical models in CFD simulations can still be prohibitive for real combustors, which require large computational grids and therefore significant computational effort. Advanced simulation approaches like Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) guarantee higher fidelity in computational modeling of combustion at, unfortunately, increased computational cost. However, with adequate, reduced, and cost-effective modeling of physical phenomena, such as chemical kinetics and turbulence-chemistry interactions, and state-of-the-art computing, LES will be the tool of choice to describe combustion processes at industrial scale accurately. Therefore, the development of reduced physics and chemistry models with quantified model-form uncertainty is needed to overcome the challenges of performing LES of industrial systems. Reduced-order models must reproduce the main features of the corresponding detailed models. They become predictive and capable of bridging scales when validated against a broad range of experiments and targeted by Validation and Uncertainty Quantification (V/UQ) procedures. In this work, V/UQ approaches are applied to reduced-order modeling of pulverized coal devolatilization and subsequent char oxidation, and furthermore to modeling NOx emissions in combustion systems.

For coal devolatilization, a benchmark of the Single First-Order Reaction (SFOR) model was performed concerning the accuracy of the prediction of the volatile yield. Different SFOR models were implemented and validated against experimental data coming from tests performed in an entrained flow reactor at oxy-conditions, to shed light on their drawbacks and benefits. SFOR models were chosen because of their simplicity: they can be easily included in CFD codes and are very appealing in the perspective of LES of pulverized coal combustion burners. The calibration of kinetic parameters was required to allow the investigated SFOR model to be predictive and reliable for different heating rates, hold temperatures and coal types. A comparison of several calibration approaches was performed to determine whether one-step models can be adaptive and able to bridge scales without losing accuracy, and to select the calibration method to employ for wider ranges of coal rank and operating conditions. The analysis pointed out that the main drawback of the SFOR models is the assumption of a constant ultimate volatile yield, equal to the value from the coal proximate analysis. To overcome this drawback, a yield model, i.e. a simple functional form that relates the ultimate volatile yield to the particle temperature, was proposed. The model depends on two parameters that have a certain degree of uncertainty.

The performance of the yield model was assessed using a combination of experiments and simulations of a pilot-scale entrained flow reactor. A consistency analysis, based on the Bound-to-Bound Data Collaboration (B2B-DC) approach, and a Bayesian method, based on Gaussian Process Regression (GPR), were employed for the investigation of experiments and simulations. In Bound-to-Bound Data Collaboration the model output, evaluated at specified values of the model parameters, is compared with the experimental data: if the prediction of the model falls within the experimental uncertainty, the corresponding parameter values are included in the so-called feasible set. The existence of a non-empty feasible set signifies consistency between the experiments and the simulations, i.e. model-data agreement. Consistency was indeed found when a relative error of 19% was applied to all the experimental data. Hence, a feasible set of the two SFOR model parameters was provided. A posterior state of knowledge, indicating potential model forms that could be explored in yield modeling, was obtained by Gaussian Process Regression. The model form evaluated through the consistency analysis is included within the posterior derived from GPR, indicating that it can satisfactorily match the experimental data and provide reliable estimates in almost every range of temperatures. CFD simulations were carried out using the proposed yield model with first-order kinetics, as in the SFOR model. Results showed promising agreement between predicted and experimental conversion for all the investigated cases.

Regarding char combustion modeling, the consistency analysis was applied to validate a reduced-order model and quantify the uncertainty in the prediction of char conversion. The model's capability to address heterogeneous reactions between char carbon and the O2, CO2 and H2O reagents, mass transport of species in the particle boundary layer, pore diffusion, and internal surface area changes was assessed by comparison with a large number of experiments performed in air and oxy-coal conditions. Different model forms were considered, with an increasing degree of complexity, until consistency between model outputs and experimental results was reached. Rather than performing forward propagation of the model-form uncertainty on the predictions, the reduction of the parameter uncertainty of a selected model form was pursued and eventually achieved. The resulting 11-dimensional feasible set of model parameters allows the model to predict the experimental data to within approximately ±10% uncertainty. Due to the high dimensionality of the problem, the employed surrogate models resulted in considerable fitting errors, which led to a spoiled UQ inverse problem. Different strategies were taken to reduce the discrepancy between the surrogate outputs and the corresponding predictions of the simulation model, in the frameworks of constrained optimization and Bayesian inference. Both strategies succeeded in reducing the fitting errors and also resulted in a least-squares estimate for the simulation model. The variety of experimental gas environments ensured the validity of the consistent reduced model for both conventional and oxy-conditions, overcoming the differences in mass transport and kinetics observed in several experimental campaigns.

The V/UQ-aided modeling of coal devolatilization and char combustion was done in the framework of the Predictive Science Academic Alliance Program II (PSAAP-II) funded by the US Department of Energy. One of the final goals of PSAAP-II is to develop high-fidelity simulation tools that ensure 5% uncertainty in the incident heat flux predictions inside a 1.2 GW Ultra-Super-Critical (USC) coal-fired boiler. The 5% target refers to the expected predictivity of the full-scale simulation without considering the uncertainty in the scenario parameters. The data-driven approaches used in this thesis helped to improve the predictivity of the investigated models and made them suitable for LES of the 1.2 GW USC coal-fired boiler. Moreover, they are suitable for scale-bridging modeling of similar multi-phase processes involved in the conversion of solid renewable sources, such as biomass.

In the final part of the thesis, the sensitivity of the predicted NO emissions to finite-rate chemistry combustion models and kinetic mechanisms was assessed. Moreover, the forward propagation of the uncertainty in the kinetics of the NNH route (included in the NOx chemistry) onto the predictions of NO was investigated to reveal the current state of the art of kinetic modeling of NOx formation. The analysis was carried out on a case where NOx formation comes from various formation routes, both conventional (thermal and prompt) and unconventional. To this end, a lab-scale combustion system working in Moderate and Intense Low-oxygen Dilution (MILD) conditions was selected. The results showed considerable sensitivity of the NO emissions to the uncertain kinetic parameters of the rate-limiting reactions of the NNH pathway when a detailed kinetic mechanism is used. The analysis also pointed out that the use of one-step global rate schemes for the NO formation pathways, necessary when a skeletal kinetic mechanism is employed, lacks the required chemical accuracy and obscures the importance of the NNH pathway in this combustion regime. An engineering modification of the finite-rate combustion model was proposed to account for the different chemical time scales of the fuel-oxidizer reactions and the NOx formation pathways. It showed an impact on the NO emissions equivalent to that of the uncertainty in the kinetics of the NNH route. At the cost of introducing a small mass imbalance (of the order of ppm), the adjustment led to improved predictions of NO. The investigation established a possibility for the engineering modeling of NO formation in MILD combustion with a finite-rate chemistry combustion model that can incorporate a detailed mechanism at affordable computational cost. / Doctorat en Sciences de l'ingénieur et technologie
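The feasible-set idea behind Bound-to-Bound Data Collaboration can be illustrated with a brute-force check: sample candidate parameters, evaluate a surrogate of each experiment, and keep only the samples whose predictions fall within the experimental bounds. Actual B2B-DC relies on polynomial surrogates and constrained optimization rather than random sampling, and the toy model, bounds and parameter ranges below are assumptions for illustration only.

```python
import numpy as np

def feasible_set(surrogate, bounds_lo, bounds_hi, param_lo, param_hi, n=20000, rng=None):
    """Brute-force sketch of a B2B-DC consistency check.

    surrogate(theta) returns the model prediction of each experiment for parameters
    theta; a sample is feasible if every prediction lies within its experimental bounds.
    """
    rng = np.random.default_rng(rng)
    theta = rng.uniform(param_lo, param_hi, size=(n, len(param_lo)))
    preds = np.array([surrogate(t) for t in theta])
    ok = np.all((preds >= bounds_lo) & (preds <= bounds_hi), axis=1)
    return theta[ok]                       # empty array => dataset inconsistency

# Toy two-parameter "yield model" checked against three experiments with ±19% bounds.
data = np.array([0.45, 0.55, 0.65])
toy_model = lambda t: t[0] * (1.0 - np.exp(-t[1] * np.array([1.0, 1.5, 2.5])))
feasible = feasible_set(toy_model, 0.81 * data, 1.19 * data,
                        param_lo=[0.3, 0.1], param_hi=[1.0, 3.0])
print(len(feasible), "feasible parameter samples found")
```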
109

Predictive numerical simulations for rebuilding freestream conditions in atmospheric entry flows

Cortesi, Andrea Francesco 16 February 2018 (has links)
Accurate prediction of hypersonic high-enthalpy flows is of main relevance for atmospheric entry missions. However, uncertainties are inevitable on freestream conditions and other parameters of the physico-chemical models. For this reason, a rigorous quantification of the effect of uncertainties is mandatory to assess the robustness and predictivity of numerical simulations. Furthermore, a proper reconstruction of uncertain parameters from in-flight measurements can help reduce the level of uncertainty of the outputs. In this work, we use a statistical framework for the direct propagation of uncertainties and the inverse reconstruction of freestream conditions applied to atmospheric entry flows. We assess the possibility of exploiting forebody heat flux measurements for the reconstruction of freestream variables and uncertain model parameters for hypersonic entry flows. This reconstruction is performed in a Bayesian framework, allowing sources of uncertainty and measurement errors to be accounted for. Different techniques are introduced to enhance the capabilities of the statistical framework for the quantification of uncertainties. First, an improved surrogate modeling technique is proposed, based on Kriging and Sparse Polynomial Dimensional Decomposition. Then a method is proposed to adaptively add new training points to an existing experimental design to improve the accuracy of the trained surrogate model. A way to exploit active subspaces in Markov Chain Monte Carlo algorithms for Bayesian inverse problems is also proposed.
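The active-subspace ingredient mentioned above can be estimated from samples of the gradient of the quantity of interest (or log-likelihood): the dominant eigenvectors of the averaged outer product of gradients span the directions along which MCMC proposals are most useful. A minimal sketch on a synthetic function, not the thesis's implementation:

```python
import numpy as np

def active_subspace(grad_samples, k=1):
    """Estimate an active subspace from gradient samples (illustrative sketch)."""
    C = grad_samples.T @ grad_samples / len(grad_samples)   # empirical C = E[g g^T]
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]                       # sort eigenpairs, largest first
    return eigvecs[:, order[:k]], eigvals[order]

# Toy example: a 10-parameter function that really varies along one direction only.
rng = np.random.default_rng(0)
w_true = rng.normal(size=10)
w_true /= np.linalg.norm(w_true)
X = rng.normal(size=(500, 10))
grads = (2.0 * (X @ w_true))[:, None] * w_true              # gradients of f(x) = (w.x)^2
W, lam = active_subspace(grads, k=1)
print("alignment with true direction:", abs(W[:, 0] @ w_true))
```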
110

Quantification of aleatory and epistemic uncertainties in the prediction of aeroelastic instabilities

Nitschke, Christian Thomas 01 February 2018 (has links)
The critical flutter velocity is an essential factor in aeronautic design because it characterises the flight envelope outside of which the aircraft risks structural failure. The goal of this thesis is to study the impact of uncertainties of aleatory and epistemic origin on the linear stability limit of idealised aeroelastic configurations. First, a direct propagation problem of aleatory uncertainties related to manufacturing parameters of a rectangular plate wing made of a laminated composite material was considered. The representation of the material through the polar method alleviates the high dimensionality of the initial stochastic problem, which allows the use of polynomial chaos. However, the correlation introduced by this parametrisation requires an adaptation of the polynomial basis. Finally, a machine learning algorithm is employed for the treatment of discontinuities in the modal behaviour of the aeroelastic instabilities. The second part of the thesis concerns the quantification of modelling uncertainties of epistemic nature which are introduced in the aerodynamic operator. This work, conducted within a Bayesian formalism, allows not only the establishment of model probabilities but also the calibration of the model coefficients in a stochastic context, in order to obtain robust predictions for the critical velocity. Finally, a combined study of the two types of uncertainty improves the calibration process.
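The polynomial chaos step described above can be illustrated for a single standardised uncertain input: project the response onto probabilists' Hermite polynomials by Gauss-Hermite quadrature and read the mean and variance off the coefficients. The toy response, degree and quadrature order below are assumptions for illustration; the thesis's polar-method parametrisation and the associated basis adaptation are not reproduced.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_1d(f, degree=4, n_quad=12):
    """Non-intrusive polynomial chaos expansion of f(xi), xi ~ N(0,1) (sketch).

    Coefficients by Gauss-Hermite projection: c_n = E[f(xi) He_n(xi)] / n!.
    """
    x, w = hermegauss(n_quad)                # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)             # normalise weights to the standard normal pdf
    fx = f(x)
    coeffs = []
    for n in range(degree + 1):
        basis = hermeval(x, np.eye(degree + 1)[n])   # He_n evaluated at the nodes
        coeffs.append(np.sum(w * fx * basis) / factorial(n))
    return np.array(coeffs)

# Toy "critical velocity" response to one standardised uncertain material parameter.
response = lambda xi: 220.0 + 12.0 * xi + 3.0 * xi ** 2
c = pce_1d(response)
mean = c[0]                                                    # PCE mean
var = sum(factorial(n) * c[n] ** 2 for n in range(1, len(c)))  # PCE variance
print(mean, var)   # expect about 223.0 and 162.0
```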
