101

Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning

Wu, Jinlong 25 September 2018 (has links)
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress requires closure models, and the existing models carry large model-form uncertainties. RANS simulations are therefore known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative for developing Reynolds stress models for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS-modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of the RANS-modeled Reynolds stress by leveraging online sparse measurement data together with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models give better predictions of the Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence of the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations include unresolved physics to be modeled. / Ph. D.
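A minimal sketch of the machine-learning-assisted idea described above, with all data synthetic: a regressor is trained offline to map mean-flow features to the discrepancy between high-fidelity and RANS-modeled Reynolds stresses, then used to correct a baseline prediction. The feature set, the random-forest choice, and every variable name below are illustrative assumptions, not the dissertation's implementation.

```python
# Illustrative sketch (not the author's code): learn a mapping from
# mean-flow features to the Reynolds stress discrepancy using an
# offline high-fidelity database, then predict it for a new flow.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: rows are mesh cells of training flows.
# X holds mean-flow features (e.g., strain rate, pressure gradient);
# y holds the discrepancy between DNS and RANS Reynolds stresses.
X_train = rng.normal(size=(5000, 4))
y_train = np.tanh(X_train[:, 0]) + 0.1 * rng.normal(size=5000)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predict the discrepancy on a new (test) flow and correct the
# baseline RANS Reynolds stress with it.
X_test = rng.normal(size=(100, 4))
tau_rans = rng.normal(size=100)            # baseline RANS stresses
tau_corrected = tau_rans + model.predict(X_test)
```

In the dissertation's framework the features would come from converged RANS fields and the targets from high-fidelity databases; the point here is only the structure of the offline-training, online-correction loop.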
102

Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes

Macatula, Romcholo Yulo 21 July 2020 (has links)
We consider uncertainty quantification using surrogate Gaussian processes. We take a previous sampling algorithm and provide a closed-form expression of the resulting posterior distribution. We extend the method to weighted least squares and to a Bayesian approach, in both cases with closed-form expressions of the resulting posterior distributions. We test the methods on 1D deconvolution and 2D tomography. Our new methods improve on the previous algorithm, but fall short of a typical Bayesian inference method in some respects. / Master of Science / Parameter uncertainty quantification seeks to determine both estimates of model parameters and the uncertainty associated with those estimates. Examples of model parameters include physical properties such as densities, growth rates, or even deblurred images. Previous work has shown that replacing data with a surrogate model can provide promising estimates with low uncertainty. We extend the previous methods in the specific setting of linear models. Theoretical results are tested on simulated computed tomography problems.
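For the linear Gaussian setting mentioned above, the posterior is available in closed form. The sketch below works through a toy 1D deconvolution under an assumed Gaussian prior and Gaussian noise; the blurring operator, sizes, and hyperparameters are hypothetical, and this is not the thesis' specific algorithm.

```python
# Minimal sketch of a closed-form Bayesian posterior for a linear
# model y = A x + noise, the setting of 1D deconvolution (assumed
# Gaussian prior and noise; not the thesis' exact method).
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Hypothetical blurring operator A (banded moving average) and true signal.
A = np.tril(np.triu(np.ones((n, n)), -2), 2) / 5.0
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
sigma_noise, sigma_prior = 0.05, 1.0
y = A @ x_true + sigma_noise * rng.normal(size=n)

# Conjugate Gaussian algebra gives the posterior mean and covariance.
prec = A.T @ A / sigma_noise**2 + np.eye(n) / sigma_prior**2
cov_post = np.linalg.inv(prec)
mean_post = cov_post @ A.T @ y / sigma_noise**2
std_post = np.sqrt(np.diag(cov_post))   # pointwise uncertainty estimate
```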
103

Computational Reconstruction and Quantification of Aerospace Materials

Long, Matthew Thomas 14 May 2024 (has links)
Microstructure reconstruction is a necessary tool for use in multi-scale modeling, as it allows for the analysis of the microstructure of a material without the cost of measuring all of the required data for the analysis. For microstructure reconstruction to be effective, the synthetic microstructure needs to predict what a small sample of measured data would look like on a larger domain. The Markov Random Field (MRF) algorithm is a method of generating statistically similar microstructures for this process. In this work, two key factors of the MRF algorithm are analyzed. The first factor explored is how the base features of the microstructure related to orientation and grain/phase topology information influence the selection of the MRF parameters to perform the reconstruction. The second focus is on the analysis of the numerical uncertainty (epistemic uncertainty) that arises from the use of the MRF algorithm. This is done by first removing the material uncertainty (aleatoric uncertainty), which is the noise that is inherent in the original image representing the experimental data. The epistemic uncertainty that arises from the MRF algorithm is analyzed through the study of the percentage of isolated pixels and the difference in average grain sizes between the initial image and the reconstructed image. This research mainly focuses on two different microstructures, B4C-TiB2 and Ti-7Al, which are a ceramic composite and a metallic alloy, respectively. Both of them are candidate materials for many aerospace systems owing to their desirable mechanical performance under large thermo-mechanical stresses. / Master of Science / Microstructure reconstruction is a necessary tool for use in multi-scale modeling, as it allows for the analysis of the microstructure of a material without the cost of measuring all of the required data for the analysis. For microstructure reconstruction to be effective, the synthetic microstructure needs to predict what a small sample of measured data would look like on a larger domain. The Markov Random Field (MRF) algorithm is a method of generating statistically similar microstructures for this process. In this work, two key factors of the MRF algorithm are analyzed. The first factor explored is how the base features of the microstructures related to orientation and grain/phase topology information influence the selection of the MRF parameters to perform the reconstruction. The second focus is on the analysis of the numerical uncertainty that arises from the use of the MRF algorithm. This is done by first removing the material uncertainty, which is the noise that is inherent in the original image representing the experimental data. This research mainly focuses on two different microstructures, B4C-TiB2 and Ti-7Al, which are a ceramic composite and a metallic alloy, respectively. Both of them are candidate materials for many aerospace systems owing to their desirable mechanical performance under large thermo-mechanical stresses.
104

Physics-informed Machine Learning with Uncertainty Quantification

Daw, Arka 12 February 2024 (has links)
Physics-Informed Machine Learning (PIML) has emerged at the forefront of research in scientific machine learning, with the key motivation of systematically coupling machine learning (ML) methods with prior domain knowledge, often available in the form of physics supervision. Uncertainty quantification (UQ) is an important goal in many scientific use cases, where obtaining reliable ML model predictions and assessing the potential risks associated with them are crucial. In this thesis, we propose novel methodologies in three key areas for improving uncertainty quantification for PIML. First, we propose to explicitly infuse the physics prior, in the form of monotonicity constraints, through architectural modifications to neural networks for quantifying uncertainty. Second, we demonstrate a more general framework for quantifying uncertainty with PIML that is compatible with generic forms of physics supervision such as PDEs and closed-form equations. Lastly, we study the limitations of physics-based losses in the context of Physics-Informed Neural Networks (PINNs) and develop an efficient sampling strategy to mitigate the failure modes. / Doctor of Philosophy / Owing to the success of deep learning in computer vision and natural language processing, there is growing interest in using deep learning in scientific applications. In scientific applications, knowledge is available in the form of closed-form equations, partial differential equations, etc., along with labeled data. My work focuses on developing deep learning methods that integrate these forms of supervision, especially methods that can quantify uncertainty in deep learning models, which is an important goal for high-stakes applications.
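To make the notion of physics supervision concrete, here is a minimal PINN-style training loop in which the loss penalizes the residual of an assumed toy PDE, u''(x) = -sin(x), at randomly drawn collocation points. The network size, optimizer, and PDE are illustrative assumptions; the thesis' sampling-strategy contribution targets exactly the collocation-sampling step shown here.

```python
# Minimal PINN-style sketch (assumed toy problem, not the thesis code):
# a physics-based loss penalizes the PDE residual at collocation points.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

bc_x = torch.tensor([[0.0], [1.0]])      # boundary points
bc_u = torch.sin(bc_x)                   # exact solution is u = sin(x)

for step in range(1000):
    x = torch.rand(128, 1, requires_grad=True)   # uniform collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.sin(x)                # residual of u'' = -sin(x)
    loss = (residual**2).mean() + ((net(bc_x) - bc_u)**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```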
105

Study of the flow field through the wall of a Diesel particulate filter using Lattice Boltzmann Methods

García Galache, José Pedro 03 November 2017 (has links)
Air pollution is becoming an important problem in large metropolitan areas, and a large portion of the pollutants is emitted by the vehicle fleet. At the European level, as in other economic areas, regulation is becoming more and more restrictive; the Euro standards are the best example of this tendency. Especially important are the emissions of nitrogen oxides (NOx) and Particulate Matter (PM). Two different strategies exist to reduce the emission of pollutants. The first is to avoid their creation, typically by modifying the combustion process through different fuel-injection laws or by controlling the charge renewal. The second set of strategies focuses on eliminating the pollutants once formed: NOx is reduced by means of catalysis and/or a reducing atmosphere, usually created by injection of urea, while particulate matter is removed using filters. This thesis focuses on the latter. Most strategies to reduce emissions penalise fuel consumption, and the particle filter is no exception: installed in the exhaust duct, it restricts the flow of gas and raises the pressure along the whole exhaust line upstream of the filter, reducing engine performance. Optimising the filter is therefore an important task: its efficiency has to be good enough to comply with the emission standards, while the pressure drop has to be as low as possible to preserve fuel consumption and performance. The objective of the thesis is to find the relation between the micro-structure and the macroscopic properties of the filter; with this knowledge, optimisation of the micro-structure becomes possible. The micro-structure studied mimics acicular mullite. It is created by procedural generation using random parameters, and the relations between the micro-structure and macroscopic properties such as porosity and permeability are studied in detail. The flow field is solved using LabMoTer, software developed during this thesis whose formulation is based on Lattice Boltzmann Methods, a relatively recent approach to simulating fluid dynamics. In addition, the Walberla framework, developed by Friedrich Alexander University of Erlangen Nürnberg, is also used to solve the flow field. The second part of the thesis focuses on the particles immersed in the fluid. Particle properties are commonly given as a function of the aerodynamic diameter, which is sufficient for macroscopic approximations; here, however, the discretization of the porous medium has the same order of magnitude as the particle size, so realistic particle geometry is necessary. Diesel particles are aggregates of spheres, and a simulation tool is developed to create these aggregates by ballistic collision. The results are analysed in detail. The second step is to characterise the aerodynamic properties of the aggregates. Because the particle size is of the same order of magnitude as the separation between air molecules, the fluid cannot be approximated as a continuous medium, and a new approach is needed: Direct Simulation Monte Carlo (DSMC) is the appropriate tool, and a solver based on this formulation is developed. Unfortunately, complex geometries could not be implemented in time. The thesis has been fruitful in several aspects. A new model based on procedural generation has been developed to create micro-structures that mimic acicular mullite. A new CFD solver based on Lattice Boltzmann Methods, LabMoTer, has been implemented and validated, and a technique to optimise the simulation setup is proposed. The ballistic agglomeration process is studied in detail thanks to a new simulator developed ad hoc for this task; the results are analysed statistically to find correlations between aggregate properties and their evolution in time, with Uncertainty Quantification used to include data dispersion in the models. Finally, a new Direct Simulation Monte Carlo solver has been developed and validated to compute rarefied flows. / García Galache, JP. (2017). Study of the flow field through the wall of a Diesel particulate filter using Lattice Boltzmann Methods [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90413
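The ballistic sphere-aggregation step described above can be sketched compactly: spheres are fired along random straight lines at the growing cluster and stick at first contact. Everything below (radii, counts, the radius-of-gyration summary) is an illustrative stand-in for the thesis' purpose-built simulator.

```python
# Sketch of ballistic particle-cluster aggregation of equal spheres,
# the mechanism used above to build Diesel-soot-like aggregates.
import numpy as np

rng = np.random.default_rng(3)
R = 1.0                            # sphere radius; contact distance is 2R
centers = np.zeros((1, 3))         # seed sphere at the origin

def random_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

while len(centers) < 100:
    d = random_unit()                            # incoming flight direction
    # orthonormal basis spanning the plane perpendicular to d
    a = np.cross(d, [1.0, 0.0, 0.0])
    if np.linalg.norm(a) < 1e-8:
        a = np.cross(d, [0.0, 1.0, 0.0])
    a /= np.linalg.norm(a)
    b = np.cross(d, a)
    # launch point: random impact parameter, far upstream of the cluster
    r_max = np.max(np.linalg.norm(centers, axis=1)) + 2 * R
    rho, phi = r_max * np.sqrt(rng.random()), 2 * np.pi * rng.random()
    p0 = rho * (np.cos(phi) * a + np.sin(phi) * b) - 10 * r_max * d
    # earliest contact: solve |p0 + t d - c|^2 = (2R)^2 for each sphere
    q = p0 - centers
    tb = -(q @ d)
    disc = tb**2 - (np.sum(q * q, axis=1) - (2 * R)**2)
    hits = disc >= 0
    if not hits.any():
        continue                                 # missed the cluster; retry
    t_min = (tb[hits] - np.sqrt(disc[hits])).min()
    if t_min <= 0:
        continue
    centers = np.vstack([centers, p0 + t_min * d])

# radius of gyration, a standard summary of aggregate morphology
rg = np.sqrt(np.mean(np.sum((centers - centers.mean(0))**2, axis=1)))
print(f"aggregate of {len(centers)} spheres, Rg = {rg:.2f}")
```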
106

Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models

Wu, Sichao 29 August 2017 (has links)
When capturing a real-world, networked system in a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V and V) of such models is an inherent and fundamental challenge. Central to V and V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA), and design of experiments (DOE). In addition, network-based computer simulation models, as compared with models based on ordinary and partial differential equations (ODEs and PDEs), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging, since it requires a broad set of skills ranging from domain expertise to in-depth knowledge of modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of the challenges above, to the best of our knowledge, none of them accomplishes all this in a model-independent and scientifically reproducible manner. In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data flow management system with digital library functions helping to ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models, with a broad range of analyses such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and are studied theoretically in this dissertation, including a broad range of stability and sensitivity analyses offering insights into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory was a motivator for the design and implementation of GENEUS. GENEUS, rooted in the framework of GDSs, provides modelers, experimentalists, and research groups access to a variety of UQ/SA/DOE methods with robust and tested implementations, without requiring them to have detailed expertise in statistics, data management, and computing. Even for research teams having all these skills, GENEUS can significantly increase research productivity. / Ph. D. / Uncertainties are ubiquitous in computer simulation models, especially for network-based models where the underlying mechanisms are difficult to characterize explicitly by mathematical formalizations. Quantifying uncertainties is challenging because of either a lack of knowledge or their inherently indeterminate nature. Models cannot include every detail of real systems, so their verification and validation will remain a fundamental task in modeling. Many tools have been developed to support uncertainty quantification, sensitivity analysis, and experimental design. However, few of them are domain-independent or support the data management and complex simulation workflows of network-based simulation models. In this dissertation, we present a computational framework called GENEUS, which incorporates a multitude of functions including uncertain parameter specification, experimental design, model execution management, data access and registration, sensitivity analysis, surrogate modeling, and model calibration. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models, with a broad range of analyses for various scenarios. GENEUS provides researchers access to uncertainty quantification, sensitivity analysis, and experimental design methods with robust and tested implementations, without requiring detailed expertise in modeling, statistics, or computing. Even for groups having all these skills, GENEUS can help save time, guard against mistakes, and improve productivity.
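A hedged sketch of the kind of model-independent UQ/SA/DOE loop such a framework automates: generate a space-filling design over uncertain parameters, run a black-box simulation at each design point, and screen input-output sensitivities. GENEUS' actual API is not shown here; the model function, parameter ranges, and the correlation-based screening are stand-in assumptions.

```python
# Illustrative UQ/SA/DOE workflow over a black-box model (stand-in
# for a network-based simulation; not the GENEUS API).
import numpy as np
from scipy.stats import qmc, pearsonr

def simulation(params):            # hypothetical black-box model
    x1, x2, x3 = params
    return np.sin(x1) + 0.5 * x2**2 + 0.05 * x3

# Latin hypercube design of experiments over three uncertain parameters.
sampler = qmc.LatinHypercube(d=3, seed=0)
design = qmc.scale(sampler.random(n=200), [0, 0, 0], [1, 2, 3])

outputs = np.array([simulation(p) for p in design])

# Crude sensitivity screening: correlation of each input with the output.
for i in range(3):
    r, _ = pearsonr(design[:, i], outputs)
    print(f"parameter {i}: correlation {r:+.2f}")
```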
107

Multiscale Methods and Uncertainty Quantification

Elfverson, Daniel January 2015 (has links)
In this thesis we consider two major challenges in computer simulations of partial differential equations: multiscale data, varying over multiple scales in space and time, and data uncertainty, due to missing or inexact measurements. We develop a multiscale method based on a coarse-scale correction using localized fine-scale computations, and we prove that the error in the solution produced by the multiscale method decays independently of the fine-scale variation in the data or the computational domain. We consider the following aspects of multiscale methods: continuous and discontinuous underlying numerical methods, adaptivity, convection-diffusion problems, the Petrov-Galerkin formulation, and complex geometries. For uncertainty quantification problems we consider the estimation of p-quantiles and failure probabilities. We use spatial a posteriori error estimates to develop and improve variance reduction techniques for Monte Carlo methods: we improve standard Monte Carlo methods for computing p-quantiles and multilevel Monte Carlo methods for computing failure probabilities.
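The multilevel Monte Carlo idea mentioned above telescopes the expectation across approximation levels, spending many samples on cheap coarse levels and few on expensive fine ones. The sketch below uses an assumed toy quantity of interest in place of a PDE solve; the coupling of consecutive levels through a shared random input is the essential ingredient.

```python
# Minimal multilevel Monte Carlo sketch (illustrative toy problem):
# estimate E[Q] via E[Q_0] + sum_l E[Q_l - Q_{l-1}].
import numpy as np

rng = np.random.default_rng(4)

def Q_level(omega, level):
    """Toy level-l approximation of a QoI; bias decays as 2^-level."""
    return np.exp(omega * (1.0 - 2.0 ** (-level)))

n_per_level = [4096, 2048, 1024, 512, 256]   # fewer samples on finer levels
est = Q_level(rng.normal(size=n_per_level[0]), 0).mean()
for l in range(1, len(n_per_level)):
    omega = rng.normal(size=n_per_level[l])  # shared input couples the levels
    est += (Q_level(omega, l) - Q_level(omega, l - 1)).mean()
print("MLMC estimate of E[Q]:", est)         # exact value is exp(0.5)
```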
108

Quantification of uncertainty in the magnetic characteristic of steel and permanent magnets and their effect on the performance of permanent magnet synchronous machine

Abhijit Sahu (5930828) 15 August 2019 (has links)
The numerical calculation of the electromagnetic fields within electric machines is sensitive to the magnetic characteristic of steel. However, the magnetic characteristic of steel is uncertain due to fluctuations in alloy composition, possible contamination, and other manufacturing process variations, including punching. Previous attempts to quantify magnetic uncertainty due to punching are based on parametric analytical models of B-H curves, where the uncertainty is reflected in the model parameters. In this work, we set forth a data-driven approach for quantifying the uncertainty due to punching in B-H curves. In addition to the magnetic characteristics of the steel laminations, the remanent flux density (Br) exhibited by the permanent magnets in a permanent magnet synchronous machine (PMSM) is also uncertain due to unpredictable variations in the manufacturing process. Previous studies consider the impact of uncertainties in B-H curves and in the Br of the permanent magnets on the average torque, cogging torque, torque ripple, and losses of a PMSM; however, studies of the impact of these uncertainties on the combined machine/drive system of a PMSM are scarce in the literature. Hence, the objective of this work is to study the effect of B-H and Br uncertainties on the performance of a PMSM machine/drive system using a validated finite element simulator. Our approach is as follows. First, we use principal component analysis to build a reduced-order stochastic model of B-H curves from a synthetic dataset containing B-H curves affected by punching. Second, we model the uncertainty in Br and other uncertainties in the B-H characteristics, e.g., those due to the unknown state of the material composition and the unavailability of accurate data in the deep saturation region. Third, to overcome the computational limitations of the finite element simulator, we replace it with surrogate models based on Gaussian process regression. Fourth, we perform propagation studies to assess the effect of B-H and Br uncertainties on the average torque, torque ripple, and the PMSM machine/drive system using the constructed surrogate models.
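A condensed sketch of the pipeline described above, with synthetic stand-ins for the punched-lamination B-H dataset and the machine response: PCA yields a reduced-order stochastic model of the curves, and a Gaussian-process surrogate maps the reduced coefficients to an output such as average torque. Curve shapes, sample counts, and the torque relation are assumptions for illustration only.

```python
# Illustrative PCA + GP-surrogate pipeline (synthetic stand-in data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
H = np.linspace(1, 5000, 100)                  # field strength grid [A/m]
# hypothetical ensemble of punching-affected B-H curves
curves = np.array([1.8 * np.tanh(H / rng.uniform(600, 1000))
                   for _ in range(200)])

pca = PCA(n_components=3).fit(curves)          # reduced-order B-H model
scores = pca.transform(curves)

# synthetic "average torque" responding mostly to the first PCA mode
torque = 10 + 0.5 * scores[:, 0] + 0.05 * rng.normal(size=200)

# GP surrogate replaces the expensive finite element simulator
gp = GaussianProcessRegressor(normalize_y=True).fit(scores, torque)
mean, std = gp.predict(scores[:5], return_std=True)  # prediction + uncertainty
```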
109

Sensitivity analysis for nonlinear hyperbolic equations of conservation laws

Fiorini, Camilla 11 July 2018 (has links)
Sensitivity analysis (SA) concerns the quantification of changes in the solution of Partial Differential Equations (PDEs) due to perturbations in the model input. Standard SA techniques for PDEs, such as the continuous sensitivity equation method, rely on the differentiation of the state variable. However, if the governing equations are hyperbolic PDEs, the state can exhibit discontinuities, yielding Dirac delta functions in the sensitivity. We aim at modifying the sensitivity equations to obtain a solution without delta functions. This is motivated by several reasons: firstly, a Dirac delta function cannot be captured numerically, leading to an incorrect solution for the sensitivity in the neighbourhood of the state discontinuity; secondly, the spikes appearing in the numerical solution of the original sensitivity equations make such sensitivities unusable for some applications. Therefore, we add a correction term to the sensitivity equations. We do this for a hierarchy of models of increasing complexity, from the inviscid Burgers' equation to the quasi-1D Euler system. We show the influence of such a correction term on an optimization algorithm and on an uncertainty quantification problem.
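The failure mode motivating the correction term can be reproduced in a few lines. The sketch below solves the inviscid Burgers equation for Riemann data parameterized by the left state a, together with the uncorrected continuous sensitivity equation s_t + (u s)_x = 0 for s = du/da, using an assumed Rusanov-type scheme. The sensitivity develops a spike at the shock that sharpens under mesh refinement, which is what the corrected equations are designed to remove.

```python
# Uncorrected continuous sensitivity equation for inviscid Burgers:
# u_t + (u^2/2)_x = 0 and s_t + (u s)_x = 0, s = du/da of the left state.
import numpy as np

nx, L, T = 400, 4.0, 1.0
dx = L / nx
x = np.linspace(-L / 2, L / 2, nx)
a = 1.0
u = np.where(x < 0, a, 0.0)        # Riemann data: shock moving right
s = np.where(x < 0, 1.0, 0.0)      # s = du/da of the initial condition

t = 0.0
while t < T:
    dt = min(0.4 * dx / max(np.abs(u).max(), 1e-12), T - t)
    lam = np.maximum(np.abs(u[:-1]), np.abs(u[1:]))
    # Rusanov fluxes for the state and for the linearized flux u*s
    fu = 0.25 * (u[:-1]**2 + u[1:]**2) - 0.5 * lam * (u[1:] - u[:-1])
    fs = 0.5 * (u[:-1] * s[:-1] + u[1:] * s[1:]) - 0.5 * lam * (s[1:] - s[:-1])
    u[1:-1] -= dt / dx * (fu[1:] - fu[:-1])
    s[1:-1] -= dt / dx * (fs[1:] - fs[:-1])
    t += dt

print("max |s| near the shock:", np.abs(s).max())  # spike at the shock
```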
110

Uncertainty Quantification for Scale-Bridging Modeling of Multiphase Reactive Flows

Iavarone, Salvatore 24 April 2019 (has links) (PDF)
The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of novel and cost-effective combustion technologies and for the minimization of environmental concerns at industrial scale. CFD simulations facilitate scaling-up procedures that would otherwise be complicated by strong interactions between reaction kinetics, turbulence, and heat transfer. CFD calculations can be applied directly at the industrial scale of interest, thus avoiding scaling up from lab-scale experiments. However, this advantage can only be obtained if CFD tools are quantitatively predictive and trusted as such. Despite improvements in computational capability, the implementation of detailed physical and chemical models in CFD simulations can still be prohibitive for real combustors, which require large computational grids and therefore significant computational effort. Advanced simulation approaches like Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) guarantee higher fidelity in computational modeling of combustion at, unfortunately, increased computational cost. However, with adequate, reduced, and cost-effective modeling of physical phenomena such as chemical kinetics and turbulence-chemistry interactions, and with state-of-the-art computing, LES will be the tool of choice to describe combustion processes at industrial scale accurately. Therefore, the development of reduced physics and chemistry models with quantified model-form uncertainty is needed to overcome the challenges of performing LES of industrial systems. Reduced-order models must reproduce the main features of the corresponding detailed models; they become predictive and capable of bridging scales when validated against a broad range of experiments and targeted by Validation and Uncertainty Quantification (V/UQ) procedures. In this work, V/UQ approaches are applied to reduced-order modeling of pulverized coal devolatilization and subsequent char oxidation, and furthermore to modeling NOx emissions in combustion systems.

For coal devolatilization, the Single First-Order Reaction (SFOR) model was benchmarked with respect to the accuracy of its prediction of volatile yield. Different SFOR models were implemented and validated against experimental data from tests performed in an entrained flow reactor at oxy-conditions, to shed light on their drawbacks and benefits. SFOR models were chosen because of their simplicity: they can easily be included in CFD codes and are very appealing in the perspective of LES of pulverized coal combustion burners. Calibration of the kinetic parameters was required for the investigated SFOR model to be predictive and reliable across heating rates, hold temperatures, and coal types. Several calibration approaches were compared to determine whether one-step models can be adaptive and able to bridge scales without losing accuracy, and to select the calibration method to employ for wider ranges of coal rank and operating conditions. The analysis pointed out that the main drawback of SFOR models is the assumption of a constant ultimate volatile yield, equal to the value from the coal proximate analysis. To overcome this drawback, a yield model, i.e., a simple functional form relating the ultimate volatile yield to the particle temperature, was proposed. The model depends on two parameters that carry a certain degree of uncertainty.

The performance of the yield model was assessed using a combination of experiments and simulations of a pilot-scale entrained flow reactor. A consistency analysis based on the Bound-to-Bound Data Collaboration (B2B-DC) approach and a Bayesian method based on Gaussian Process Regression (GPR) were employed for the investigation of experiments and simulations. In Bound-to-Bound Data Collaboration, the model output, evaluated at specified values of the model parameters, is compared with the experimental data: if the prediction of the model falls within the experimental uncertainty, the corresponding parameter values are included in the so-called feasible set. The existence of a non-empty feasible set signifies consistency between the experiments and the simulations, i.e., model-data agreement. Consistency was indeed found when a relative error of 19% on all the experimental data was applied, and a feasible set of the two SFOR model parameters was provided. A posterior state of knowledge, indicating potential model forms that could be explored in yield modeling, was obtained by Gaussian Process Regression. The model form evaluated through the consistency analysis is included within the posterior derived from GPR, indicating that it can satisfactorily match the experimental data and provide reliable estimates in almost every range of temperatures. CFD simulations were carried out using the proposed yield model with first-order kinetics, as in the SFOR model. Results showed promising agreement between predicted and experimental conversion for all the investigated cases.

Regarding char combustion modeling, the consistency analysis was applied to validate a reduced-order model and quantify the uncertainty in the prediction of char conversion. The model's capability to address heterogeneous reactions between char carbon and O2, CO2, and H2O reagents, mass transport of species in the particle boundary layer, pore diffusion, and internal surface area changes was assessed by comparison with a large number of experiments performed in air and oxy-coal conditions. Model forms of increasing complexity were considered until consistency between model outputs and experimental results was reached. Rather than performing forward propagation of the model-form uncertainty onto the predictions, a reduction of the parameter uncertainty of a selected model form was pursued and eventually achieved. The resulting 11-dimensional feasible set of model parameters allows the model to predict the experimental data within roughly ±10% uncertainty. Due to the high dimensionality of the problem, the employed surrogate models showed considerable fitting errors, which spoiled the UQ inverse problem. Different strategies were adopted to reduce the discrepancy between the surrogate outputs and the corresponding predictions of the simulation model, within the frameworks of constrained optimization and Bayesian inference. Both strategies succeeded in reducing the fitting errors and also yielded a least-squares estimate for the simulation model. The variety of experimental gas environments ensured the validity of the consistent reduced model for both conventional and oxy-conditions, overcoming the differences in mass transport and kinetics observed in several experimental campaigns.

The V/UQ-aided modeling of coal devolatilization and char combustion was done in the framework of the Predictive Science Academic Alliance Program II (PSAAP-II), funded by the US Department of Energy. One of the final goals of PSAAP-II is to develop high-fidelity simulation tools that ensure 5% uncertainty in the incident heat flux predictions inside a 1.2 GW Ultra-Super-Critical (USC) coal-fired boiler. The 5% target refers to the expected predictivity of the full-scale simulation without considering the uncertainty in the scenario parameters. The data-driven approaches used in this Thesis helped improve the predictivity of the investigated models and made them suitable for LES of the 1.2 GW USC coal-fired boiler. Moreover, they are suitable for scale-bridging modeling of similar multi-phase processes involved in the conversion of solid renewable sources, such as biomass.

In the final part of the Thesis, the sensitivity of NO emission predictions to finite-rate chemistry combustion models and kinetic mechanisms was assessed. Moreover, the forward propagation of the uncertainty in the kinetics of the NNH route (included in the NOx chemistry) onto the predictions of NO was investigated to reveal the current state of the art of kinetic modeling of NOx formation. The analysis was carried out on a case where NOx formation arises from various formation routes, both conventional (thermal and prompt) and unconventional. To this end, a lab-scale combustion system working in Moderate or Intense Low-oxygen Dilution (MILD) conditions was selected. The results showed considerable sensitivity of the NO emissions to the uncertain kinetic parameters of the rate-limiting reactions of the NNH pathway when a detailed kinetic mechanism is used. The analysis also pointed out that the use of one-step global rate schemes for the NO formation pathways, necessary when a skeletal kinetic mechanism is employed, lacks the required chemical accuracy and dims the importance of the NNH pathway in this combustion regime. An engineering modification of the finite-rate combustion model was proposed to account for the different chemical time scales of the fuel-oxidizer reactions and the NOx formation pathways; its impact on the predicted NO emissions was comparable to that of the uncertainty in the kinetics of the NNH route. At the cost of introducing a small mass imbalance (of the order of ppm), the adjustment led to improved predictions of NO. The investigation established a possibility for engineering modeling of NO formation in MILD combustion with a finite-rate chemistry combustion model that can incorporate a detailed mechanism at affordable computational cost. / Doctorat en Sciences de l'ingénieur et technologie
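The Bound-to-Bound Data Collaboration step described above admits a simple rejection-sampling illustration: sample candidate parameters, evaluate a (surrogate) model at the experimental conditions, and retain the candidates whose predictions fall within the quoted 19% relative experimental error. The two-parameter yield model, parameter ranges, and data below are synthetic stand-ins, not the thesis' surrogates.

```python
# Simplified illustration of the B2B-DC feasible-set idea with a
# hypothetical two-parameter ultimate-yield model (synthetic data).
import numpy as np

rng = np.random.default_rng(6)

T_exp = np.array([1200.0, 1400.0, 1600.0])    # hold temperatures [K]
yield_exp = np.array([0.45, 0.55, 0.62])      # measured volatile yields
rel_err = 0.19                                # relative experimental error

def yield_model(theta1, theta2, T):
    """Hypothetical functional form v(T) for the ultimate volatile yield."""
    return theta1 * (1.0 - np.exp(-T / theta2))

samples = rng.uniform([0.4, 500.0], [1.0, 3000.0], size=(20000, 2))
pred = yield_model(samples[:, :1], samples[:, 1:2], T_exp)   # shape (N, 3)
ok = np.all(np.abs(pred - yield_exp) <= rel_err * yield_exp, axis=1)
feasible = samples[ok]
print(f"feasible set: {len(feasible)} of {len(samples)} samples")
# a non-empty feasible set signifies model-data consistency in B2B-DC
```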
