111 |
Computational Methods for Random Differential Equations: Theory and Applications. Navarro Quiles, Ana, 01 March 2018.
Ever since the early contributions of Isaac Newton, Gottfried Wilhelm Leibniz, and Jacob and Johann Bernoulli in the seventeenth century, difference and differential equations have demonstrated their capability to successfully model complex problems of great interest in Engineering, Physics, Chemistry, Epidemiology, Economics, and other fields. From a practical standpoint, however, applying difference or differential equations requires setting their inputs (coefficients, source term, initial and boundary conditions) from sampled data, which carry uncertainty stemming from measurement errors. In addition, random external factors may affect the system under study. It is therefore more advisable to treat the input data as random variables or stochastic processes rather than as deterministic constants or functions, respectively. Under this consideration, random difference and random differential equations appear.
This thesis solves, from a probabilistic point of view, different types of random difference and differential equations, relying mainly on the Random Variable Transformation method. This technique is a useful tool for obtaining the probability density function of a random vector that results from mapping another random vector whose probability density function is known. The goal of this dissertation is the computation of the first probability density function of the solution stochastic process in a variety of problems based on random difference or differential equations. The interest in determining the first probability density function is justified because this deterministic function characterizes the one-dimensional probabilistic information, such as the mean, variance, skewness and kurtosis, of the solution of the corresponding random difference or differential equation. It also allows the probability of any event of interest involving the solution to be determined. In addition, in some cases the theoretical study is complemented by its application to modelling problems with real data, where the estimation of parametric statistical distributions of the inputs is addressed in the context of random difference and differential equations.
Navarro Quiles, A. (2018). Computational Methods for Random Differential Equations: Theory and Applications [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/98703
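As an illustration of the Random Variable Transformation technique described above, the sketch below computes the first probability density function of the solution of a toy random initial value problem. The decay rate, the Gamma density of the initial condition and all numerical values are assumptions made for the example and are not taken from the thesis.

    # Sketch of the Random Variable Transformation (RVT) technique on a toy
    # random initial value problem x'(t) = -a*x(t), x(0) = X0, with X0 ~ Gamma.
    # The solution is X(t) = X0*exp(-a*t); since the map X0 -> X(t) is monotone,
    # the first probability density function follows from the RVT formula
    #   f_{X(t)}(x) = f_{X0}( x*exp(a*t) ) * exp(a*t).
    import numpy as np
    from scipy import stats

    a = 0.7                                    # deterministic decay rate (assumed)
    x0_dist = stats.gamma(a=2.0, scale=1.0)    # assumed density of the initial condition

    def first_pdf(x, t):
        """First PDF of the solution process at time t via RVT."""
        jac = np.exp(a * t)                    # |d x0 / d x| for the inverse map x0 = x*exp(a*t)
        return x0_dist.pdf(x * jac) * jac

    # One-dimensional statistics (mean, variance, ...) then follow by quadrature:
    t = 1.5
    xs = np.linspace(0.0, 10.0, 2001)
    pdf = first_pdf(xs, t)
    mean = np.trapz(xs * pdf, xs)
    var = np.trapz((xs - mean) ** 2 * pdf, xs)
    print(f"E[X({t})] ~ {mean:.4f}, Var[X({t})] ~ {var:.4f}")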
|
112 |
Multiscale Methods and Uncertainty Quantification. Elfverson, Daniel, January 2015.
In this thesis we consider two major challenges in computer simulations of partial differential equations: multiscale data, varying over multiple scales in space and time, and data uncertainty, due to missing or inexact measurements. We develop a multiscale method based on a coarse scale correction, using localized fine scale computations. We prove that the error in the solution produced by the multiscale method decays independently of the fine scale variation in the data or the computational domain. We consider the following aspects of multiscale methods: continuous and discontinuous underlying numerical methods, adaptivity, convection-diffusion problems, Petrov-Galerkin formulation, and complex geometries. For uncertainty quantification problems we consider the estimation of p-quantiles and failure probabilities. We use spatial a posteriori error estimates to develop and improve variance reduction techniques for Monte Carlo methods. We improve standard Monte Carlo methods for computing p-quantiles and multilevel Monte Carlo methods for computing failure probabilities.
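The quantities of interest named above, p-quantiles and failure probabilities, can be illustrated with plain Monte Carlo estimators before any variance reduction is applied. The limit-state function g below is a hypothetical stand-in for a PDE-based output; this is a sketch only, not the estimators developed in the thesis.

    # Plain Monte Carlo estimators for a p-quantile and a failure probability.
    import numpy as np

    rng = np.random.default_rng(0)

    def g(xi):
        # hypothetical scalar output of a simulation with random input xi
        return 3.0 - xi[..., 0] ** 2 - 0.5 * np.sin(5.0 * xi[..., 1])

    N = 100_000
    xi = rng.standard_normal((N, 2))
    y = g(xi)

    p = 0.95
    quantile = np.quantile(y, p)          # standard MC p-quantile estimate
    threshold = 0.0
    p_fail = np.mean(y < threshold)       # failure probability P(g < 0)
    # crude standard error indicator for the failure probability estimate
    std_err = np.sqrt(p_fail * (1.0 - p_fail) / N)
    print(quantile, p_fail, std_err)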
|
113 |
Quantification of uncertainty in the magnetic characteristic of steel and permanent magnets and their effect on the performance of permanent magnet synchronous machine. Abhijit Sahu (5930828), 15 August 2019.
The numerical calculation of the electromagnetic fields within electric machines is sensitive to the magnetic characteristic of steel. However, the magnetic characteristic of steel is uncertain due to fluctuations in alloy composition, possible contamination, and other manufacturing process variations including punching. Previous attempts to quantify magnetic uncertainty due to punching are based on parametric analytical models of B-H curves, where the uncertainty is reflected by model parameters. In this work, we set forth a data-driven approach for quantifying the uncertainty due to punching in B-H curves. In addition to the magnetic characteristics of the steel laminations, the remanent flux density (Br) exhibited by the permanent magnets in a permanent magnet synchronous machine (PMSM) is also uncertain due to unpredictable variations in the manufacturing process. Previous studies consider the impact of uncertainties in B-H curves and Br of the permanent magnets on the average torque, cogging torque, torque ripple and losses of a PMSM. However, studies on the impact of these uncertainties on the combined machine/drive system of a PMSM are scarce in the literature. Hence, the objective of this work is to study the effect of B-H and Br uncertainties on the performance of a PMSM machine/drive system using a validated finite element simulator.
Our approach is as follows. First, we use principal component analysis to build a reduced-order stochastic model of B-H curves from a synthetic dataset containing B-H curves affected by punching. Second, we model the uncertainty in Br and other uncertainties in the B-H characteristics, e.g., due to the unknown state of the material composition and the unavailability of accurate data in the deep saturation region. Third, to overcome the computational limitations of the finite element simulator, we replace it with surrogate models based on Gaussian process regression. Fourth, we perform propagation studies to assess the effect of B-H and Br uncertainties on the average torque, torque ripple and the PMSM machine/drive system using the constructed surrogate models.
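A minimal sketch of the two data-driven building blocks mentioned above: a principal-component (reduced-order) model of an ensemble of B-H curves and a Gaussian process regression surrogate mapping the reduced coordinates to a machine output such as average torque. The synthetic B-H ensemble, the tanh curve shape and the torque values are placeholders for what a finite element simulator would provide.

    # PCA reduced-order model of B-H curves + Gaussian process surrogate (sketch).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)
    H = np.linspace(10.0, 5e3, 200)                      # field strength samples [A/m]
    # synthetic ensemble of B-H curves with random saturation level and knee location
    Bsat = rng.normal(1.8, 0.05, size=50)
    k = rng.normal(3e-3, 3e-4, size=50)
    curves = Bsat[:, None] * np.tanh(k[:, None] * H[None, :])

    pca = PCA(n_components=3).fit(curves)                # reduced-order B-H model
    scores = pca.transform(curves)                       # low-dimensional coordinates

    # hypothetical torque values that a finite element solver would return
    torque = 10.0 + 2.0 * scores[:, 0] + rng.normal(0.0, 0.05, size=50)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(scores, torque)                               # surrogate of the FE simulator

    # propagate new random B-H realisations through the cheap surrogate
    new_scores = rng.multivariate_normal(scores.mean(0), np.cov(scores.T), size=1000)
    t_mean, t_std = gp.predict(new_scores, return_std=True)
    print(float(t_mean.mean()), float(t_mean.std()), float(t_std.mean()))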
|
114 |
Sensitivity analysis for nonlinear hyperbolic equations of conservation laws. Fiorini, Camilla, 11 July 2018.
Sensitivity analysis (SA) concerns the quantification of changes in the solution of partial differential equations (PDEs) due to perturbations in the model input. Standard SA techniques for PDEs, such as the continuous sensitivity equation method, rely on differentiating the state variable. However, if the governing equations are hyperbolic PDEs, the state can exhibit discontinuities, which yield Dirac delta functions in the sensitivity. We aim at modifying the sensitivity equations to obtain a solution without delta functions. This is motivated by several reasons: firstly, a Dirac delta function cannot be captured numerically, leading to an incorrect sensitivity in the neighbourhood of the state discontinuity; secondly, the spikes appearing in the numerical solution of the original sensitivity equations make such sensitivities unusable for some applications. Therefore, we add a correction term to the sensitivity equations. We do this for a hierarchy of models of increasing complexity, from the inviscid Burgers' equation to the quasi-1D Euler system. We show the influence of this correction term on an optimization algorithm and on an uncertainty quantification problem.
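For the inviscid Burgers' equation, the continuous sensitivity equation approach described above can be summarised as follows; the precise amplitude of the correction term used in the thesis is not reproduced here, so the source term in this sketch is only indicative.

    % State equation and formally differentiated sensitivity equation for
    % inviscid Burgers, with s = \partial u / \partial a the sensitivity with
    % respect to a parameter a; the uncorrected equation (zero right-hand side)
    % generates Dirac deltas in s at state shocks located at x_{s,k}(t).
    \begin{aligned}
      &\partial_t u + \partial_x\!\left(\tfrac{1}{2}u^2\right) = 0,\\
      &\partial_t s + \partial_x\!\left(u\,s\right) = \sum_k \rho_k\,\delta\!\left(x - x_{s,k}(t)\right),
    \end{aligned}
    % where each amplitude \rho_k is built from the Rankine-Hugoniot relations
    % at the k-th shock so that the corrected sensitivity remains bounded.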
|
115 |
Uncertainty Quantification for Scale-Bridging Modeling of Multiphase Reactive Flows. Iavarone, Salvatore, 24 April 2019.
The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of novel and cost-effective combustion technologies and the minimization of environmental concerns at industrial scale. CFD simulations facilitate scaling-up procedures that otherwise would be complicated by strong interactions between reaction kinetics, turbulence and heat transfer. CFD calculations can be applied directly at the industrial scale of interest, thus avoiding scaling-up from lab-scale experiments. However, this advantage can only be obtained if CFD tools are quantitatively predictive and trusted as such. Despite improvements in computational capability, the implementation of detailed physical and chemical models in CFD simulations can still be prohibitive for real combustors, which require large computational grids and therefore significant computational effort. Advanced simulation approaches like Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) guarantee higher fidelity in computational modeling of combustion at, unfortunately, increased computational cost. However, with adequate, reduced, and cost-effective modeling of physical phenomena, such as chemical kinetics and turbulence-chemistry interactions, and state-of-the-art computing, LES will be the tool of choice to describe combustion processes at industrial scale accurately. Therefore, the development of reduced physics and chemistry models with quantified model-form uncertainty is needed to overcome the challenges of performing LES of industrial systems. Reduced-order models must reproduce the main features of the corresponding detailed models. They become predictive and capable of bridging scales when validated against a broad range of experiments and subjected to Validation and Uncertainty Quantification (V/UQ) procedures. In this work, V/UQ approaches are applied for reduced-order modeling of pulverized coal devolatilization and subsequent char oxidation, and furthermore for modeling NOx emissions in combustion systems.
For coal devolatilization, a benchmark of the Single First-Order Reaction (SFOR) model was performed concerning the accuracy of the prediction of volatile yield. Different SFOR models were implemented and validated against experimental data coming from tests performed in an entrained flow reactor at oxy-conditions, to shed light on their drawbacks and benefits. SFOR models were chosen because of their simplicity: they can be easily included in CFD codes and are very appealing with a view to LES of pulverized coal combustion burners. The calibration of kinetic parameters was required to allow the investigated SFOR model to be predictive and reliable for different heating rates, hold temperatures and coal types. A comparison of several calibration approaches was performed to determine whether one-step models can be adaptive and able to bridge scales without losing accuracy, and to select the calibration method to employ for wider ranges of coal rank and operating conditions. The analysis pointed out that the main drawback of the SFOR models is the assumption of a constant ultimate volatile yield, equal to the value from the coal proximate analysis. To overcome this drawback, a yield model, i.e. a simple functional form that relates the ultimate volatile yield to the particle temperature, was proposed. The model depends on two parameters that have a certain degree of uncertainty.
The performance of the yield model was assessed using a combination of experiments and simulations of a pilot-scale entrained flow reactor. A consistency analysis, based on the Bound-to-Bound Data Collaboration (B2B-DC) approach, and a Bayesian method, based on Gaussian Process Regression (GPR), were employed for the investigation of the experiments and simulations. In Bound-to-Bound Data Collaboration the model output, evaluated at specified values of the model parameters, is compared with the experimental data: if the prediction of the model falls within the experimental uncertainty, the corresponding parameter values are included in the so-called feasible set. The existence of a non-empty feasible set signifies consistency between the experiments and the simulations, i.e. model-data agreement. Consistency was indeed found when a relative error of 19% for all the experimental data was applied. Hence, a feasible set of the two SFOR model parameters was provided. A posterior state of knowledge, indicating potential model forms that could be explored in yield modeling, was obtained by Gaussian Process Regression. The model form evaluated through the consistency analysis is included within the posterior derived from GPR, indicating that it can satisfactorily match the experimental data and provide reliable estimates over almost the entire temperature range. CFD simulations were carried out using the proposed yield model with first-order kinetics, as in the SFOR model. Results showed promising agreement between predicted and experimental conversion for all the investigated cases.
Regarding char combustion modeling, the consistency analysis was applied to validate a reduced-order model and quantify the uncertainty in the prediction of char conversion. The model's capability to describe heterogeneous reactions between char carbon and the O2, CO2 and H2O reagents, mass transport of species in the particle boundary layer, pore diffusion, and internal surface area changes was assessed by comparison with a large number of experiments performed in air and oxy-coal conditions. Different model forms were considered, with an increasing degree of complexity, until consistency between model outputs and experimental results was reached. Rather than performing forward propagation of the model-form uncertainty on the predictions, the reduction of the parameter uncertainty of a selected model form was pursued and eventually achieved. The resulting 11-dimensional feasible set of model parameters allows the model to predict the experimental data within roughly ±10% uncertainty. Due to the high dimensionality of the problem, the employed surrogate models resulted in considerable fitting errors, which led to a spoiled UQ inverse problem. Different strategies were taken to reduce the discrepancy between the surrogate outputs and the corresponding predictions of the simulation model, in the frameworks of constrained optimization and Bayesian inference. Both strategies succeeded in reducing the fitting errors and also resulted in a least-squares estimate for the simulation model. The variety of experimental gas environments ensured the validity of the consistent reduced model for both conventional and oxy-conditions, overcoming the differences in mass transport and kinetics observed in several experimental campaigns.
The V/UQ-aided modeling of coal devolatilization and char combustion was done in the framework of the Predictive Science Academic Alliance Program II (PSAAP-II) funded by the US Department of Energy.
One of the final goals of PSAAP-II is to develop high-fidelity simulation tools that ensure 5% uncertainty in the incident heat flux predictions inside a 1.2 GW Ultra-Super-Critical (USC) coal-fired boiler. The 5% target refers to the expected predictivity of the full-scale simulation without considering the uncertainty in the scenario parameters. The data-driven approaches used in this Thesis helped to improve the predictivity of the investigated models and made them suitable for LES of the 1.2 GW USC coal-fired boiler. Moreover, they are suitable for scale-bridging modeling of similar multi-phase processes involved in the conversion of solid renewable sources, such as biomass.
In the final part of the Thesis, the sensitivity of NO emission predictions to finite-rate chemistry combustion models and kinetic mechanisms was assessed. Moreover, the forward propagation of the uncertainty in the kinetics of the NNH route (included in the NOx chemistry) onto the predictions of NO was investigated to reveal the current state of the art of kinetic modeling of NOx formation. The analysis was carried out on a case where NOx formation arises from various formation routes, both conventional (thermal and prompt) and unconventional. To this end, a lab-scale combustion system working in Moderate and Intense Low-oxygen Dilution (MILD) conditions was selected. The results showed considerable sensitivity of the NO emissions to the uncertain kinetic parameters of the rate-limiting reactions of the NNH pathway when a detailed kinetic mechanism is used. The analysis also pointed out that the use of one-step global rate schemes for the NO formation pathways, necessary when a skeletal kinetic mechanism is employed, lacks the required chemical accuracy and dims the importance of the NNH pathway in this combustion regime. An engineering modification of the finite-rate combustion model was proposed to account for the different chemical time scales of the fuel-oxidizer reactions and the NOx formation pathways. Its impact on the predicted NO emissions was comparable to that of the uncertainty in the kinetics of the NNH route. At the cost of introducing a small mass imbalance (of the order of ppm), the adjustment led to improved predictions of NO. The investigation established a possibility for the engineering modeling of NO formation in MILD combustion with a finite-rate chemistry combustion model that can incorporate a detailed mechanism at affordable computational costs. / Doctorat en Sciences de l'ingénieur et technologie
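The consistency (Bound-to-Bound Data Collaboration) step described above can be sketched as a simple feasible-set search: a pair of yield-model parameters is retained only if the predicted ultimate volatile yield falls within the experimental error bars at every hold temperature. The sigmoidal yield-model form, the parameter ranges and the data points below are hypothetical, not the ones calibrated in the thesis.

    # Minimal feasible-set (consistency) search in the spirit of B2B-DC.
    import numpy as np

    def yield_model(T, v_max, T_c):
        # assumed sigmoidal relation between ultimate yield and particle temperature
        return v_max * (1.0 - np.exp(-T / T_c))

    # hypothetical experiments: hold temperature [K], measured yield, 19% relative error
    T_exp = np.array([1200.0, 1400.0, 1600.0])
    y_exp = np.array([0.45, 0.52, 0.56])
    rel_err = 0.19

    v_max_grid = np.linspace(0.4, 0.8, 200)
    T_c_grid = np.linspace(500.0, 3000.0, 200)
    feasible = []
    for v_max in v_max_grid:
        for T_c in T_c_grid:
            pred = yield_model(T_exp, v_max, T_c)
            if np.all(np.abs(pred - y_exp) <= rel_err * y_exp):
                feasible.append((v_max, T_c))

    # a non-empty feasible set signals consistency between model and data
    print(len(feasible), "consistent parameter pairs out of", 200 * 200)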
|
116 |
Predictive numerical simulations for rebuilding freestream conditions in atmospheric entry flows. Cortesi, Andrea Francesco, 16 February 2018.
Accurate prediction of hypersonic high-enthalpy flows is of major relevance for atmospheric entry missions. However, uncertainties are inevitable in the freestream conditions and in other parameters of the physico-chemical models. For this reason, a rigorous quantification of the effect of uncertainties is mandatory to assess the robustness and predictivity of numerical simulations. Furthermore, a proper reconstruction of uncertain parameters from in-flight measurements can help reduce the level of uncertainty in the outputs. In this work, we use a statistical framework for direct propagation of uncertainties and inverse freestream reconstruction applied to atmospheric entry flows. We assess the possibility of exploiting forebody heat flux measurements for the reconstruction of freestream variables and uncertain model parameters for hypersonic entry flows. This reconstruction is performed in a Bayesian framework, allowing the different sources of uncertainty and the measurement errors to be accounted for. Different techniques are introduced to enhance the capabilities of the statistical framework for the quantification of uncertainties. First, an improved surrogate modeling technique is proposed, based on Kriging and Sparse Polynomial Dimensional Decomposition. Then a method is proposed to adaptively add new training points to an existing experimental design in order to improve the accuracy of the trained surrogate model. A way to exploit active subspaces in Markov Chain Monte Carlo algorithms for Bayesian inverse problems is also proposed.
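A minimal sketch of the Bayesian freestream reconstruction described above: a random-walk Metropolis sampler infers freestream density and velocity from a noisy stagnation-point heat-flux measurement, with a Sutton-Graves-type correlation standing in for the (surrogate) forward model. The priors, proposal scales and all numerical values are illustrative assumptions, not the thesis's setup.

    # Bayesian reconstruction of freestream (rho, V) from a heat-flux measurement.
    import numpy as np

    rng = np.random.default_rng(2)
    k_sg, R_n = 1.7415e-4, 0.5            # correlation constant and nose radius (assumed)

    def heat_flux(rho, V):
        # stagnation-point heat flux from the correlation (surrogate forward model)
        return k_sg * np.sqrt(rho / R_n) * V ** 3

    q_meas, sigma_q = 9.7e5, 5.0e4        # "measured" heat flux and 1-sigma error (illustrative)

    def log_post(theta):
        rho, V = theta
        if rho <= 0.0 or V <= 0.0:
            return -np.inf
        # broad Gaussian priors on freestream density and velocity (assumed)
        lp = -0.5 * ((rho - 1.0e-3) / 5.0e-4) ** 2 - 0.5 * ((V - 5000.0) / 1500.0) ** 2
        return lp - 0.5 * ((heat_flux(rho, V) - q_meas) / sigma_q) ** 2

    theta = np.array([1.0e-3, 5000.0])
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, [5.0e-5, 100.0])   # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        chain.append(theta)
    chain = np.array(chain[5000:])        # discard burn-in
    print(chain.mean(axis=0), chain.std(axis=0))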
|
117 |
Quantification of aleatory and epistemic uncertainties in the prediction of aeroelastic instabilities. Nitschke, Christian Thomas, 01 February 2018.
The critical flutter velocity is an essential factor in aeronautical design because it characterises the flight envelope beyond which the aircraft risks destruction. The goal of this thesis is to study the impact of uncertainties of aleatory and epistemic origin on the linear stability limit of idealised aeroelastic configurations. First, a direct propagation problem of aleatory uncertainties related to manufacturing parameters of a rectangular plate wing made of a laminated composite material is considered. The representation of the material through the polar method alleviates the high dimensionality of the initial stochastic problem, which allows the use of polynomial chaos. However, the correlation introduced by this parametrisation requires an adaptation of the polynomial basis. Finally, a machine learning algorithm is employed to treat discontinuities in the modal behaviour of the aeroelastic instabilities. The second part of the thesis concerns the quantification of modelling uncertainties of epistemic nature which are introduced in the aerodynamic operator. This work, conducted within a Bayesian formalism, makes it possible not only to establish model probabilities, but also to calibrate the model coefficients in a stochastic context in order to obtain robust predictions of the critical velocity. Finally, a combined study of the two types of uncertainty allows the calibration process to be improved.
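A sketch of the non-intrusive polynomial chaos propagation underlying the first part of the thesis: the critical flutter velocity is expanded in probabilists' Hermite polynomials of one standardised Gaussian parameter, with coefficients obtained by Gauss-Hermite quadrature. The flutter_speed function is a hypothetical stand-in for an aeroelastic stability solver, and the one-dimensional Gaussian setting is an assumption that ignores the correlation issue discussed above.

    # Non-intrusive polynomial chaos with probabilists' Hermite polynomials.
    import numpy as np
    from numpy.polynomial import hermite_e as He
    from math import factorial, sqrt, pi

    def flutter_speed(xi):
        # placeholder response: nonlinear in the uncertain stiffness parameter
        return 250.0 + 12.0 * xi - 3.0 * xi ** 2

    order = 4
    nodes, weights = He.hermegauss(30)        # quadrature for weight exp(-x^2/2)
    weights = weights / sqrt(2.0 * pi)        # normalize to the standard normal pdf

    coeffs = np.zeros(order + 1)
    for n in range(order + 1):
        psi_n = He.hermeval(nodes, [0.0] * n + [1.0])       # He_n evaluated at the nodes
        coeffs[n] = np.sum(weights * flutter_speed(nodes) * psi_n) / factorial(n)

    mean = coeffs[0]
    variance = sum(coeffs[n] ** 2 * factorial(n) for n in range(1, order + 1))
    print(mean, variance)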
|
118 |
Uncertainty Quantification and Numerical Methods for Conservation Laws. Pettersson, Per, January 2013.
Conservation laws with uncertain initial and boundary conditions are approximated using a generalized polynomial chaos expansion approach where the solution is represented as a generalized Fourier series of stochastic basis functions, e.g. orthogonal polynomials or wavelets. The stochastic Galerkin method is used to project the governing partial differential equation onto the stochastic basis functions to obtain an extended deterministic system.
The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain viscosity. We investigate well-posedness, monotonicity and stability for the stochastic Galerkin system. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability. We investigate the impact of the total spatial operator on the convergence to steady state.
Next we apply the stochastic Galerkin method to Burgers' equation with uncertain boundary conditions. An analysis of the truncated polynomial chaos system gives a qualitative description of the development of the solution over time. An analytical solution is derived and the true polynomial chaos coefficients are shown to be smooth, while the corresponding coefficients of the truncated stochastic Galerkin formulation are shown to be discontinuous. We discuss the problematic implications of the lack of known boundary data and possible ways of imposing stable and accurate boundary conditions.
We present a new fully intrusive method for the Euler equations subject to uncertainty, based on a Roe variable transformation. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, it is more robust and can handle cases of supersonic flow for which the conservative variable formulation fails to produce a bounded solution. A multiwavelet basis that can handle discontinuities in a robust way is used.
Finally, we investigate a two-phase flow problem. Based on regularity analysis of the generalized polynomial chaos coefficients, we present a hybrid method where solution regions of varying smoothness are coupled weakly through interfaces. In this way, we couple smooth solutions solved with high-order finite difference methods and non-smooth solutions computed with shock-capturing methods.
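The stochastic Galerkin projection mentioned above can be sketched for a scalar advection equation u_t + a(xi) u_x = 0 with a uniformly distributed speed: projecting onto normalised Legendre polynomials replaces the scalar speed by the matrix A below, yielding the extended deterministic system U_t + A U_x = 0 for the vector of chaos coefficients. The uncertain-speed model and the chaos order are assumptions for the illustration, not the thesis's advection-diffusion setup.

    # Assembly of the stochastic Galerkin matrix for an uncertain advection speed.
    import numpy as np
    from numpy.polynomial import legendre as L

    P = 4                                        # chaos order
    nodes, weights = L.leggauss(20)              # quadrature on [-1, 1]
    weights = weights / 2.0                      # uniform density on [-1, 1]

    def psi(n, x):
        # Legendre polynomial normalized so that E[psi_n^2] = 1
        return L.legval(x, [0.0] * n + [1.0]) * np.sqrt(2.0 * n + 1.0)

    a = lambda xi: 1.0 + 0.3 * xi                # uncertain advection speed (assumed)

    A = np.zeros((P + 1, P + 1))
    for i in range(P + 1):
        for j in range(P + 1):
            A[i, j] = np.sum(weights * a(nodes) * psi(i, nodes) * psi(j, nodes))

    # A is symmetric, so the extended system U_t + A U_x = 0 stays hyperbolic and
    # can be solved with the same deterministic machinery as the original equation;
    # the eigenvalues of A play the role of characteristic speeds.
    print(np.linalg.eigvalsh(A))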
|
119 |
Some contributions to Latin hypercube design, irregular region smoothing and uncertainty quantification. Xie, Huizhi, 21 May 2012.
In the first part of the thesis, we propose a new class of designs called multi-layer sliced Latin hypercube designs (MLSLHD) for running computer experiments. A general recursive strategy for constructing MLSLHD has been developed. Ordinary Latin hypercube designs and sliced Latin hypercube designs (SLHD) are special cases of MLSLHD with zero and one layer, respectively. A special case of MLSLHD with two layers, the doubly sliced Latin hypercube design (DSLHD), is studied in detail. The doubly sliced structure of DSLHD allows more flexible batch sizes than SLHD for collective evaluation of different computer models or batch sequential evaluation of a single computer model. Both finite-sample and asymptotic sampling properties of DSLHD are examined. Numerical experiments are provided to show the advantage of DSLHD over SLHD both for sequential evaluation of a single computer model and for collective evaluation of different computer models. Other applications of DSLHD include design for Gaussian process modeling with quantitative and qualitative factors, cross-validation, etc. Moreover, we show that the sliced structure, possibly combined with other criteria such as distance-based criteria, can be utilized to sequentially sample from a large spatial data set when we cannot include all the data points for modeling. A data center example is presented to illustrate the idea. The enhanced stochastic evolutionary algorithm is deployed to search for optimal designs.
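A sketch of the one-layer special case (an ordinary sliced Latin hypercube design): each of the t slices is an m-run LHD on the coarse grid and their union is an (m*t)-run LHD. The block construction below is a standard one chosen for illustration; it is not the recursive multi-layer strategy developed in the thesis.

    # Construction of one sliced Latin hypercube design (sketch).
    import numpy as np

    def sliced_lhd(m, t, d, rng):
        n = m * t
        X = np.empty((n, d))
        for col in range(d):
            levels = np.arange(1, n + 1).reshape(m, t)   # coarse bin i holds levels (i-1)t+1..it
            for i in range(m):
                rng.shuffle(levels[i])                   # assign fine levels to slices at random
            for k in range(t):
                slice_levels = rng.permutation(levels[:, k])   # one level per coarse bin
                u = rng.uniform(size=m)
                X[k * m:(k + 1) * m, col] = (slice_levels - u) / n   # jitter within cells
        return X

    rng = np.random.default_rng(3)
    X = sliced_lhd(m=5, t=3, d=2, rng=rng)   # 15 runs, 3 slices of 5 runs each, 2 factors
    print(X.round(3))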
In the second part of the thesis, we propose a new smoothing technique called completely-data-driven smoothing, intended for smoothing over irregular regions. The idea is to replace the penalty term in smoothing splines by its estimate based on a local least squares technique. A closed-form solution for our approach is derived. The implementation is very easy and computationally efficient. With some regularity assumptions on the input region and analytical assumptions on the true function, it can be shown that our estimator achieves the optimal convergence rate of general nonparametric regression. The algorithmic parameter that governs the trade-off between the fidelity to the data and the smoothness of the estimated function is chosen by generalized cross-validation (GCV). The asymptotic optimality of GCV for choosing the algorithmic parameter in our estimator is proved. Numerical experiments show that our method works well for both regular and irregular region smoothing.
The third part of the thesis deals with uncertainty quantification in building energy assessment. In current practice, building simulation is routinely performed with best guesses of input parameters whose true values cannot be known exactly. These guesses affect the accuracy and reliability of the outcomes. There is an increasing need to perform uncertainty analysis of those input parameters that are known to have a significant impact on the final outcome. In this part of the thesis, we focus on uncertainty quantification of two microclimate parameters: the local wind speed and the wind pressure coefficient. The idea is to compare the outcome of the standard model with that of a higher-fidelity model. Statistical analysis is then conducted to build a connection between the two. The explicit form of the statistical models can facilitate the improvement of the corresponding modules in the standard model.
|
120 |
History matching and uncertainty quantification using sampling methods. Ma, Xianlin, 15 May 2009.
Uncertainty quantification involves sampling the reservoir parameters correctly from a posterior probability function that is conditioned to both static and dynamic data. Rigorous sampling methods like Markov Chain Monte Carlo (MCMC) are known to sample from the distribution but can be computationally prohibitive for high-resolution reservoir models. Approximate sampling methods are more efficient but less rigorous for nonlinear inverse problems. There is a need for an efficient and rigorous approach to uncertainty quantification for nonlinear inverse problems.
First, we propose a two-stage MCMC approach using sensitivities for quantifying uncertainty in history matching geological models. In the first stage, we compute the acceptance probability for a proposed change in reservoir parameters based on a linearized approximation to flow simulation in a small neighborhood of the previously computed dynamic data. In the second stage, proposals that pass the first-stage criterion are assessed by running full flow simulations to ensure rigor.
Second, we propose a two-stage MCMC approach using response surface models for quantifying uncertainty. The formulation allows us to history match three-phase flow simultaneously. The built response surface exists independently of the expensive flow simulation and provides efficient samples for the reservoir simulation and the MCMC in the second stage.
Third, we propose a two-stage MCMC approach using upscaling and non-parametric regression for quantifying uncertainty. A coarse grid model acts as a surrogate for the fine grid model through flow-based upscaling. The response of the coarse-scale model is corrected by error modeling via non-parametric regression to approximate the response of the computationally expensive fine-scale model.
Our proposed two-stage sampling approaches are computationally efficient and rigorous, with a significantly higher acceptance rate compared to traditional MCMC algorithms.
Finally, we developed a coarsening algorithm to determine an optimal reservoir simulation grid by grouping fine-scale layers in such a way that the heterogeneity measure of a defined static property is minimized within the layers. The optimal number of layers is then selected based on a statistical analysis.
The power and utility of our approaches have been demonstrated using both synthetic and field examples.
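A sketch of the two-stage MCMC idea common to the approaches above: a cheap approximation screens each proposal, and only proposals that survive the first stage are evaluated with the expensive simulation, using a second-stage acceptance ratio that corrects for the screening. Both models below are inexpensive stand-ins; in the thesis the fine model is a reservoir flow simulator and the coarse model a linearized, response-surface or upscaled approximation of it.

    # Two-stage (delayed-acceptance) MCMC with a coarse screening model.
    import numpy as np

    rng = np.random.default_rng(4)
    d_obs, sigma = 1.2, 0.1                      # observed dynamic data and its error

    def fine_model(m):                           # expensive forward model (stand-in)
        return np.sin(m) + 0.5 * m

    def coarse_model(m):                         # cheap approximation of fine_model
        return 0.9 * m

    def log_like(d_pred):
        return -0.5 * ((d_pred - d_obs) / sigma) ** 2

    m = 0.5
    ll_fine, ll_coarse = log_like(fine_model(m)), log_like(coarse_model(m))
    chain, fine_runs = [], 0
    for _ in range(5000):
        prop = m + rng.normal(0.0, 0.2)
        llc_prop = log_like(coarse_model(prop))
        # stage 1: screen the proposal with the coarse model
        if np.log(rng.uniform()) < llc_prop - ll_coarse:
            # stage 2: full simulation only for proposals that survive stage 1
            fine_runs += 1
            llf_prop = log_like(fine_model(prop))
            # second-stage acceptance ratio corrects for the first-stage filter
            if np.log(rng.uniform()) < (llf_prop - ll_fine) - (llc_prop - ll_coarse):
                m, ll_fine, ll_coarse = prop, llf_prop, llc_prop
        chain.append(m)
    print(np.mean(chain), "posterior mean;", fine_runs, "fine-model runs out of 5000 proposals")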
|