101

Advanced Sampling Methods for Solving Large-Scale Inverse Problems

Attia, Ahmed Mohamed Mohamed 19 September 2016 (has links)
Ensemble and variational techniques have gained wide popularity as the two main approaches for solving data assimilation and inverse problems. The majority of the methods in these two approaches are derived (at least implicitly) under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. This work develops a family of fully non-Gaussian data assimilation algorithms that work by directly sampling the posterior distribution. The sampling strategy is based on a Hybrid/Hamiltonian Monte Carlo (HMC) approach that can handle non-normal probability distributions.

The first algorithm proposed in this work is the "HMC sampling filter", an ensemble-based data assimilation algorithm for solving the sequential filtering problem. Unlike traditional ensemble-based filters, such as the ensemble Kalman filter and the maximum likelihood ensemble filter, the proposed sampling filter naturally accommodates non-Gaussian errors and nonlinear model dynamics, as well as nonlinear observations. To test the capabilities of the HMC sampling filter, numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The filter is also tested with a shallow water model on the sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge.

Next, the HMC sampling approach is extended to the four-dimensional case, where several observations are assimilated simultaneously, resulting in the second member of the proposed family of algorithms. The new algorithm, named the "HMC sampling smoother", is an ensemble-based smoother for four-dimensional data assimilation that works by sampling from the posterior probability density of the solution at the initial time. The sampling smoother naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators, and provides a full description of the posterior distribution. Numerical experiments for this algorithm are carried out using a shallow water model on the sphere with observation operators of different levels of nonlinearity. The numerical results demonstrate the advantages of the proposed method compared to the traditional variational and ensemble-based smoothing methods.

The HMC sampling smoother, in its original formulation, is computationally expensive due to the innate requirement of running the forward and adjoint models repeatedly. The proposed family of algorithms therefore includes computationally efficient versions of the HMC sampling smoother based on reduced-order approximations of the underlying model dynamics. The reduced-order HMC sampling smoothers, developed as extensions to the original HMC smoother, are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation.

In the presence of nonlinear model dynamics, a nonlinear observation operator, or non-Gaussian errors, the prior distribution in the sequential data assimilation framework is not analytically tractable. In the original formulation of the HMC sampling filter, the prior distribution is approximated by a Gaussian distribution whose parameters are inferred from the ensemble of forecasts. This Gaussian prior assumption is relaxed here. Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. The base filter developed following this strategy is named the cluster HMC sampling filter (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are drawn from the vicinities of all probability modes of the formulated posterior. These methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm.

To provide a unified platform for data assimilation research, a flexible and highly extensible testing suite, named DATeS, is developed and described in this work. The core of DATeS is implemented in Python to take advantage of object-oriented capabilities. The main components, such as the models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines, are independent of each other so as to offer maximum flexibility in configuring data assimilation studies. / Ph. D.
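As an illustration of the sampling kernel underlying this family of algorithms, the sketch below shows a single Hamiltonian Monte Carlo step for a generic target density. The functions `neg_log_post` and `grad_neg_log_post` are placeholders for the negative log-posterior and its gradient (which, in the data assimilation setting, involve forward and adjoint model runs); this is a minimal reference sketch, not the DATeS implementation.

```python
import numpy as np

# Minimal HMC step for a generic target; neg_log_post and grad_neg_log_post
# are assumed user-supplied callables (placeholders, not the DA posterior).
def hmc_step(x, neg_log_post, grad_neg_log_post,
             step_size=0.05, n_leapfrog=20, rng=np.random.default_rng()):
    p = rng.standard_normal(x.shape)                   # auxiliary momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog (symplectic) integration of the Hamiltonian dynamics.
    p_new -= 0.5 * step_size * grad_neg_log_post(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new -= step_size * grad_neg_log_post(x_new)
    x_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_neg_log_post(x_new)
    # Metropolis accept/reject step keeps the exact target distribution.
    h_old = neg_log_post(x) + 0.5 * p @ p
    h_new = neg_log_post(x_new) + 0.5 * p_new @ p_new
    return x_new if np.log(rng.uniform()) < h_old - h_new else x
```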
102

Multiscale Modeling and Uncertainty Quantification of Multiphase Flow and Mass Transfer Processes

Donato, Adam Armido 10 January 2015 (has links)
Most engineering systems have some degree of uncertainty in their input and operating parameters. The interaction of these parameters leads to the uncertain nature of the system performance and outputs. In order to quantify this uncertainty in a computational model, it is necessary to include the full range of uncertainty in the model. Currently, there are two major technical barriers to achieving this: (1) in many situations, particularly those involving multiscale phenomena, the stochastic nature of input parameters is not well defined and is usually approximated by limited experimental data or heuristics; (2) incorporating the full range of uncertainty across all uncertain input and operating parameters via conventional techniques often results in an inordinate number of computational scenarios to be performed, thereby limiting uncertainty analysis to simple or approximate computational models.

The first barrier is addressed by combining molecular and macroscale modeling, where the molecular modeling is used to quantify the stochastic distribution of parameters that are typically approximated. Specifically, an adsorption separation process is used to demonstrate this computational technique. In this demonstration, stochastic molecular modeling results are validated against a diverse range of experimental data sets. The stochastic molecular-level results are then shown to have a significant effect on the macro-scale performance of adsorption systems.

The second portion of this research is focused on reducing the computational burden of performing an uncertainty analysis on practical engineering systems. The state of the art for uncertainty analysis relies on the construction of a meta-model (also known as a surrogate model or reduced-order model) which can then be sampled stochastically at relatively minimal computational cost. Unfortunately, these meta-models can be very computationally expensive to construct, and the complexity of construction can scale exponentially with the number of relevant uncertain input parameters. In an effort to dramatically reduce this effort, a novel methodology, QUICKER (Quantifying Uncertainty In Computational Knowledge Engineering Rapidly), has been developed. Instead of building a meta-model, QUICKER focuses exclusively on the output distributions, which are always one-dimensional. By focusing on one-dimensional distributions instead of the multiple dimensions analyzed via meta-models, QUICKER is able to handle systems with far more uncertain inputs. / Ph. D.
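The QUICKER methodology itself is only summarized above, so the sketch below is not QUICKER; it simply illustrates the general point that a one-dimensional output distribution can be characterized directly from a modest number of model evaluations, without building a meta-model over the high-dimensional input space. The model, input dimension, and sample size are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def expensive_model(x):
    # Placeholder for a costly simulation with a 10-dimensional uncertain input.
    return np.sin(x[0]) + 0.1 * np.sum(x[1:] ** 2)

inputs = rng.normal(size=(200, 10))                  # sampled uncertain inputs
outputs = np.array([expensive_model(x) for x in inputs])

# The output is one-dimensional, so quantiles and a smooth density estimate
# follow directly from the limited set of evaluations.
q05, q50, q95 = np.percentile(outputs, [5, 50, 95])
density = stats.gaussian_kde(outputs)
```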
103

Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning

Wu, Jinlong 25 September 2018 (has links)
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress requires closure models, and the existing models have large model-form uncertainties. Therefore, RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative for developing Reynolds stress models for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS-modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of the RANS-modeled Reynolds stress by leveraging online sparse measurement data together with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models provide better predictions of the Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence of the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled. / Ph. D.
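A minimal sketch of the machine-learning-assisted idea, under the assumption of a generic feature set and discrepancy representation: a regressor trained on offline high-fidelity data maps mean-flow features to a Reynolds stress correction that is then added to the baseline RANS prediction. The random arrays stand in for real training databases, and the random forest is one possible regressor, not necessarily the one used in the dissertation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Placeholder offline training data: mean-flow features from training flows and
# the corresponding high-fidelity (e.g., DNS) minus RANS Reynolds stress discrepancy.
features_train = rng.normal(size=(5000, 10))
discrepancy_train = rng.normal(size=(5000, 6))       # 6 independent stress components

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features_train, discrepancy_train)

# Apply the learned correction to a new (test) flow.
features_test = rng.normal(size=(100, 10))
tau_rans = rng.normal(size=(100, 6))                 # baseline RANS Reynolds stresses
tau_corrected = tau_rans + model.predict(features_test)
```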
104

Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes

Macatula, Romcholo Yulo 21 July 2020 (has links)
We consider uncertainty quantification using surrogate Gaussian processes. We take a previous sampling algorithm and provide a closed-form expression of the resulting posterior distribution. We extend the method to weighted least squares and to a Bayesian approach, both with closed-form expressions of the resulting posterior distributions. We test the methods on 1D deconvolution and 2D tomography. Our new methods improve on the previous algorithm; however, they fall short of a typical Bayesian inference method in some aspects. / Master of Science / Parameter uncertainty quantification seeks to determine both estimates of model parameters and the uncertainty associated with those estimates. Examples of model parameters include physical properties such as density, growth rates, or even deblurred images. Previous work has shown that replacing data with a surrogate model can provide promising estimates with low uncertainty. We extend the previous methods in the specific setting of linear models. Theoretical results are tested on simulated computed tomography problems.
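The closed-form posteriors referred to above are, in the linear-Gaussian setting, instances of the standard conjugate result sketched below; the surrogate-based formulation in the thesis differs in its details, so treat this as a generic reference in which the forward matrix `A`, the noise covariance, and the prior are user-supplied assumptions.

```python
import numpy as np

# Closed-form Gaussian posterior for the linear model y = A x + noise with a
# Gaussian prior on x; a generic reference, not the thesis's exact expressions.
def gaussian_posterior(A, y, noise_cov, prior_mean, prior_cov):
    noise_prec = np.linalg.inv(noise_cov)
    prior_prec = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(A.T @ noise_prec @ A + prior_prec)
    post_mean = post_cov @ (A.T @ noise_prec @ y + prior_prec @ prior_mean)
    return post_mean, post_cov
```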
105

Computational Reconstruction and Quantification of Aerospace Materials

Long, Matthew Thomas 14 May 2024 (has links)
Microstructure reconstruction is a necessary tool for use in multi-scale modeling, as it allows for the analysis of the microstructure of a material without the cost of measuring all of the required data for the analysis. For microstructure reconstruction to be effective, the synthetic microstructure needs to predict what a small sample of measured data would look like on a larger domain. The Markov Random Field (MRF) algorithm is a method of generating statistically similar microstructures for this process. In this work, two key factors of the MRF algorithm are analyzed. The first is how the base features of the microstructure, namely its orientation and grain/phase topology information, influence the selection of the MRF parameters used to perform the reconstruction. The second focus is on the analysis of the numerical uncertainty (epistemic uncertainty) that arises from the use of the MRF algorithm. This is done by first removing the material uncertainty (aleatoric uncertainty), which is the noise that is inherent in the original image representing the experimental data. The epistemic uncertainty arising from the MRF algorithm is analyzed through the study of the percentage of isolated pixels and the difference in average grain sizes between the initial image and the reconstructed image. This research mainly focuses on two different microstructures, B4C-TiB2 and Ti-7Al, which are a ceramic composite and a metallic alloy, respectively. Both are candidate materials for many aerospace systems owing to their desirable mechanical performance under large thermo-mechanical stresses. / Master of Science / Microstructure reconstruction is a necessary tool for use in multi-scale modeling, as it allows for the analysis of the microstructure of a material without the cost of measuring all of the required data for the analysis. For microstructure reconstruction to be effective, the synthetic microstructure needs to predict what a small sample of measured data would look like on a larger domain. The Markov Random Field (MRF) algorithm is a method of generating statistically similar microstructures for this process. In this work, two key factors of the MRF algorithm are analyzed. The first is how the base features of the microstructure, namely its orientation and grain/phase topology information, influence the selection of the MRF parameters used to perform the reconstruction. The second focus is on the analysis of the numerical uncertainty that arises from the use of the MRF algorithm. This is done by first removing the material uncertainty, which is the noise that is inherent in the original image representing the experimental data. This research mainly focuses on two different microstructures, B4C-TiB2 and Ti-7Al, which are a ceramic composite and a metallic alloy, respectively. Both are candidate materials for many aerospace systems owing to their desirable mechanical performance under large thermo-mechanical stresses.
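The two epistemic-uncertainty indicators mentioned above (the percentage of isolated pixels and the average grain size) can be computed along the following lines on a labeled 2D phase/grain image; this is an illustrative implementation, and the thesis's exact neighborhood and grain-size conventions may differ.

```python
import numpy as np
from scipy import ndimage

def isolated_pixel_fraction(img):
    # A pixel counts as isolated if none of its 4 neighbors shares its label.
    padded = np.pad(img, 1, mode="edge")
    same = np.zeros(img.shape, dtype=int)
    same += padded[:-2, 1:-1] == img   # up
    same += padded[2:, 1:-1] == img    # down
    same += padded[1:-1, :-2] == img   # left
    same += padded[1:-1, 2:] == img    # right
    return np.mean(same == 0)

def average_grain_size(img):
    # Average connected-component (grain) size in pixels, taken per label value.
    sizes = []
    for value in np.unique(img):
        mask = img == value
        _, n = ndimage.label(mask)
        if n > 0:
            sizes.append(mask.sum() / n)
    return np.mean(sizes)

# Comparing these metrics between the measured image and a reconstruction gives
# a simple picture of the numerical uncertainty introduced by the algorithm.
```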
106

Physics-informed Machine Learning with Uncertainty Quantification

Daw, Arka 12 February 2024 (has links)
Physics Informed Machine Learning (PIML) has emerged as the forefront of research in scientific machine learning, with the key motivation of systematically coupling machine learning (ML) methods with prior domain knowledge, often available in the form of physics supervision. Uncertainty quantification (UQ) is an important goal in many scientific use-cases, where obtaining reliable ML model predictions and assessing the potential risks associated with them is crucial. In this thesis, we propose novel methodologies in three key areas for improving uncertainty quantification for PIML. First, we propose to explicitly infuse the physics prior in the form of monotonicity constraints through architectural modifications in neural networks for quantifying uncertainty. Second, we demonstrate a more general framework for quantifying uncertainty with PIML that is compatible with generic forms of physics supervision such as PDEs and closed-form equations. Lastly, we study the limitations of physics-based loss in the context of Physics-informed Neural Networks (PINNs), and develop an efficient sampling strategy to mitigate the failure modes. / Doctor of Philosophy / Owing to the success of deep learning in computer vision and natural language processing, there is growing interest in using deep learning in scientific applications. In scientific applications, knowledge is available in the form of closed-form equations, partial differential equations, etc., along with labeled data. My work focuses on developing deep learning methods that integrate these forms of supervision. In particular, my work focuses on building methods that can quantify uncertainty in deep learning models, which is an important goal for high-stakes applications.
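For the PINN part, the physics-based loss is the mean squared PDE residual evaluated at sampled collocation points; the minimal PyTorch sketch below (for a viscous Burgers' equation, with boundary and initial-condition loss terms omitted for brevity) shows where a sampling strategy enters. The network size, PDE, and uniform resampling are illustrative assumptions; the failure modes and the efficient sampler studied in the thesis are not reproduced here.

```python
import torch

# Small network u(x, t); inputs are (x, t) pairs.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(xt, nu=0.01):
    # Residual of u_t + u * u_x - nu * u_xx for viscous Burgers' equation.
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    return u_t + u * u_x - nu * u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    # Uniform resampling of collocation points each step; adaptive schemes would
    # instead bias sampling toward regions of large residual.
    xt = torch.rand(256, 2) * torch.tensor([2.0, 1.0]) - torch.tensor([1.0, 0.0])
    loss = (pde_residual(xt) ** 2).mean()   # physics-based loss only (IC/BC omitted)
    opt.zero_grad(); loss.backward(); opt.step()
```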
107

Multiscale Methods and Uncertainty Quantification

Elfverson, Daniel January 2015 (has links)
In this thesis we consider two great challenges in computer simulations of partial differential equations: multiscale data, varying over multiple scales in space and time, and data uncertainty, due to lack of or inexact measurements. We develop a multiscale method based on a coarse scale correction, using localized fine scale computations. We prove that the error in the solution produced by the multiscale method decays independently of the fine scale variation in the data or the computational domain. We consider the following aspects of multiscale methods: continuous and discontinuous underlying numerical methods, adaptivity, convection-diffusion problems, Petrov-Galerkin formulation, and complex geometries. For uncertainty quantification problems we consider the estimation of p-quantiles and failure probability. We use spatial a posteriori error estimates to develop and improve variance reduction techniques for Monte Carlo methods. We improve standard Monte Carlo methods for computing p-quantiles and multilevel Monte Carlo methods for computing failure probability.
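For the failure-probability and quantile estimators discussed above, the multilevel Monte Carlo idea is to telescope the expectation over a hierarchy of discretization levels so that most samples are taken on cheap coarse levels. A minimal sketch of the plain MLMC mean estimator is given below, with the level-coupled sampler `sample_pair` left as a hypothetical user-supplied function; the thesis's selective-refinement and quantile/failure-probability variants are more involved.

```python
import numpy as np

# Minimal MLMC estimator for E[Q]. `sample_pair(level, n)` is assumed to return
# arrays (q_fine, q_coarse) of n samples of the quantity of interest computed on
# levels `level` and `level - 1` with the SAME random inputs (q_coarse is ignored
# at level 0), so that the level corrections have small variance.
def mlmc_estimate(sample_pair, n_per_level):
    total = 0.0
    for level, n in enumerate(n_per_level):
        q_fine, q_coarse = sample_pair(level, n)
        y = q_fine if level == 0 else q_fine - q_coarse
        total += np.mean(y)    # E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]
    return total
```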
108

Quantification of uncertainty in the magnetic characteristic of steel and permanent magnets and their effect on the performance of permanent magnet synchronous machine

Abhijit Sahu (5930828) 15 August 2019 (has links)
The numerical calculation of the electromagnetic fields within electric machines is sensitive to the magnetic characteristic of steel. However, the magnetic characteristic of steel is uncertain due to fluctuations in alloy composition, possible contamination, and other manufacturing process variations, including punching. Previous attempts to quantify the magnetic uncertainty due to punching are based on parametric analytical models of B-H curves, where the uncertainty is reflected by the model parameters. In this work, we set forth a data-driven approach for quantifying the uncertainty due to punching in B-H curves. In addition to the magnetic characteristics of the steel laminations, the remanent flux density (Br) exhibited by the permanent magnets in a permanent magnet synchronous machine (PMSM) is also uncertain due to unpredictable variations in the manufacturing process. Previous studies consider the impact of uncertainties in B-H curves and in the Br of the permanent magnets on the average torque, cogging torque, torque ripple, and losses of a PMSM. However, studies pertaining to the impact of these uncertainties on the combined machine/drive system of a PMSM are scarce in the literature. Hence, the objective of this work is to study the effect of B-H and Br uncertainties on the performance of a PMSM machine/drive system using a validated finite element simulator.

Our approach is as follows. First, we use principal component analysis to build a reduced-order stochastic model of B-H curves from a synthetic dataset containing B-H curves affected by punching. Second, we model the uncertainty in Br and other uncertainties in the B-H characteristics, e.g., those due to the unknown state of the material composition and the unavailability of accurate data in the deep saturation region. Third, to overcome the computational limitations of the finite element simulator, we replace it with surrogate models based on Gaussian process regression. Fourth, we perform propagation studies to assess the effect of B-H and Br uncertainties on the average torque, the torque ripple, and the PMSM machine/drive system using the constructed surrogate models.
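A compact illustration of the PCA-based reduced-order stochastic model of B-H curves is sketched below on a synthetic stand-in dataset; the actual dataset, the number of retained modes, and the distribution fitted to the reduced coordinates in this work may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset of B-H curves on a common H grid:
# saturating curves with randomly perturbed saturation level and steepness.
H = np.linspace(0.0, 10000.0, 100)                 # field strength grid (A/m)
sat = rng.normal(1.8, 0.05, size=(50, 1))          # saturation flux density (T)
steep = rng.normal(5e-4, 5e-5, size=(50, 1))
curves = sat * np.tanh(steep * H)                  # shape (n_curves, n_points)

# PCA via SVD of the centered curves.
mean_curve = curves.mean(axis=0)
U, S, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
k = 3                                              # retain a few dominant modes
scores = U[:, :k] * S[:k]                          # reduced coordinates per curve
cov = np.cov(scores, rowvar=False)                 # Gaussian model of the coordinates

# Draw a new synthetic B-H curve by sampling reduced coordinates.
z = rng.multivariate_normal(scores.mean(axis=0), cov)
new_curve = mean_curve + z @ Vt[:k]
```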
109

Analyse de sensibilité pour systèmes hyperboliques non linéaires / Sensitivity analysis for nonlinear hyperbolic equations of conservation laws

Fiorini, Camilla 11 July 2018 (has links)
Sensitivity analysis (SA) concerns the quantification of changes in the solution of Partial Differential Equations (PDEs) due to perturbations in the model input. Standard SA techniques for PDEs, such as the continuous sensitivity equation method, rely on the differentiation of the state variable. However, if the governing equations are hyperbolic PDEs, the state can exhibit discontinuities, yielding Dirac delta functions in the sensitivity. We aim at modifying the sensitivity equations to obtain a solution without delta functions. This is motivated by several reasons: firstly, a Dirac delta function cannot be captured numerically, leading to an incorrect solution for the sensitivity in the neighbourhood of the state discontinuity; secondly, the spikes appearing in the numerical solution of the original sensitivity equations make such sensitivities unusable for some applications. Therefore, we add a correction term to the sensitivity equations. We do this for a hierarchy of models of increasing complexity, starting from the inviscid Burgers' equation and going up to the quasi-1D Euler system. We show the influence of such a correction term on an optimization algorithm and on an uncertainty quantification problem.
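As a worked illustration of the issue described above, and under the assumption of smooth solutions (before any shock forms and before the correction term is needed), formally differentiating the inviscid Burgers' equation with respect to a scalar parameter a of the data gives the standard sensitivity equation:

```latex
\partial_t u + \partial_x\!\left(\frac{u^2}{2}\right) = 0, \qquad
s := \frac{\partial u}{\partial a}, \qquad
\partial_t s + \partial_x\,(u\,s) = 0 .
```

Once u develops a discontinuity whose position depends on a, s formally acquires a Dirac delta travelling with the shock, which is what the added correction (source) term in the modified sensitivity system is designed to remove.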
110

Uncertainty Quantification for Scale-Bridging Modeling of Multiphase Reactive Flows

Iavarone, Salvatore 24 April 2019 (has links) (PDF)
The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of novel and cost-effective combustion technologies and the minimization of environmental concerns at industrial scale. CFD simulations facilitate scaling-up procedures that otherwise would be complicated by strong interactions between reaction kinetics, turbulence and heat transfer. CFD calculations can be applied directly at the industrial scale of interest, thus avoiding scaling-up from lab-scale experiments. However, this advantage can only be obtained if CFD tools are quantitatively predictive and trusted as such. Despite the improvements in computational capability, the implementation of detailed physical and chemical models in CFD simulations can still be prohibitive for real combustors, which require large computational grids and therefore significant computational effort. Advanced simulation approaches like Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) guarantee higher fidelity in computational modeling of combustion at, unfortunately, increased computational cost. However, with adequate, reduced, and cost-effective modeling of physical phenomena, such as chemical kinetics and turbulence-chemistry interactions, and state-of-the-art computing, LES will be the tool of choice to describe combustion processes at industrial scale accurately. Therefore, the development of reduced physics and chemistry models with quantified model-form uncertainty is needed to overcome the challenges of performing LES of industrial systems. Reduced-order models must reproduce the main features of the corresponding detailed models. They become predictive and capable of bridging scales when validated against a broad range of experiments and targeted by Validation and Uncertainty Quantification (V/UQ) procedures. In this work, V/UQ approaches are applied to reduced-order modeling of pulverized coal devolatilization and subsequent char oxidation, and furthermore to modeling NOx emissions in combustion systems.

For coal devolatilization, a benchmark of the Single First-Order Reaction (SFOR) model was performed concerning the accuracy of the prediction of the volatile yield. Different SFOR models were implemented and validated against experimental data coming from tests performed in an entrained flow reactor at oxy-conditions, to shed light on their drawbacks and benefits. SFOR models were chosen because of their simplicity: they can be easily included in CFD codes and are very appealing in the perspective of LES of pulverized coal combustion burners. The calibration of kinetic parameters was required to allow the investigated SFOR model to be predictive and reliable for different heating rates, hold temperatures and coal types. A comparison of several calibration approaches was performed to determine whether one-step models can be adaptive and able to bridge scales without losing accuracy, and to select the calibration method to employ for wider ranges of coal rank and operating conditions. The analysis pointed out that the main drawback of the SFOR models is the assumption of a constant ultimate volatile yield, equal to the value from the coal proximate analysis. To overcome this drawback, a yield model, i.e. a simple functional form that relates the ultimate volatile yield to the particle temperature, was proposed. The model depends on two parameters that have a certain degree of uncertainty.
The performance of the yield model was assessed using a combination of experiments and simulations of a pilot-scale entrained flow reactor. A consistency analysis, based on the Bound-to-Bound Data Collaboration (B2B-DC) approach, and a Bayesian method, based on Gaussian Process Regression (GPR), were employed for the investigation of experiments and simulations. In Bound-to-Bound Data Collaboration, the model output, evaluated at specified values of the model parameters, is compared with the experimental data: if the prediction of the model falls within the experimental uncertainty, the corresponding parameter values are included in the so-called feasible set. The existence of a non-empty feasible set signifies consistency between the experiments and the simulations, i.e. model-data agreement. Consistency was indeed found when a relative error of 19% for all the experimental data was applied. Hence, a feasible set of the two SFOR model parameters was provided. A posterior state of knowledge, indicating potential model forms that could be explored in yield modeling, was obtained by Gaussian Process Regression. The model form evaluated through the consistency analysis is included within the posterior derived from GPR, indicating that it can satisfactorily match the experimental data and provide reliable estimates in almost every range of temperatures. CFD simulations were carried out using the proposed yield model with first-order kinetics, as in the SFOR model. Results showed promising agreement between predicted and experimental conversion for all the investigated cases.

Regarding char combustion modeling, the consistency analysis was applied to validate a reduced-order model and quantify the uncertainty in the prediction of char conversion. The model's capability to address heterogeneous reactions between char carbon and the O2, CO2 and H2O reagents, mass transport of species in the particle boundary layer, pore diffusion, and internal surface area changes was assessed by comparison with a large number of experiments performed in air and oxy-coal conditions. Different model forms were considered, with an increasing degree of complexity, until consistency between model outputs and experimental results was reached. Rather than performing forward propagation of the model-form uncertainty onto the predictions, the reduction of the parameter uncertainty of a selected model form was pursued and eventually achieved. The resulting 11-dimensional feasible set of model parameters allows the model to predict the experimental data within approximately ±10% uncertainty. Due to the high dimensionality of the problem, the employed surrogate models resulted in considerable fitting errors, which led to a spoiled UQ inverse problem. Different strategies were taken to reduce the discrepancy between the surrogate outputs and the corresponding predictions of the simulation model, in the frameworks of constrained optimization and Bayesian inference. Both strategies succeeded in reducing the fitting errors and also resulted in a least-squares estimate for the simulation model. The variety of experimental gas environments ensured the validity of the consistent reduced model for both conventional and oxy-conditions, overcoming the differences in mass transport and kinetics observed in several experimental campaigns.

The V/UQ-aided modeling of coal devolatilization and char combustion was done in the framework of the Predictive Science Academic Alliance Program II (PSAAP-II) funded by the US Department of Energy.
One of the final goals of PSAAP-II is to develop high-fidelity simulation tools that ensure 5% uncertainty in the incident heat flux predictions inside a 1.2 GW Ultra-Super-Critical (USC) coal-fired boiler. The 5% target refers to the expected predictivity of the full-scale simulation without considering the uncertainty in the scenario parameters. The data-driven approaches used in this Thesis helped to improve the predictivity of the investigated models and made them suitable for LES of the 1.2 GW USC coal-fired boiler. Moreover, they are suitable for scale-bridging modeling of similar multi-phase processes involved in the conversion of solid renewable sources, such as biomass.

In the final part of the Thesis, the sensitivity of the predicted NO emissions to finite-rate chemistry combustion models and kinetic mechanisms was assessed. Moreover, the forward propagation of the uncertainty in the kinetics of the NNH route (included in the NOx chemistry) onto the predictions of NO was investigated to reveal the current state of the art of kinetic modeling of NOx formation. The analysis was carried out on a case where NOx formation comes from various formation routes, both conventional (thermal and prompt) and unconventional ones. To this end, a lab-scale combustion system working in Moderate and Intense Low-oxygen Dilution (MILD) conditions was selected. The results showed considerable sensitivity of the NO emissions to the uncertain kinetic parameters of the rate-limiting reactions of the NNH pathway when a detailed kinetic mechanism is used. The analysis also pointed out that the use of one-step global rate schemes for the NO formation pathways, necessary when a skeletal kinetic mechanism is employed, lacks the required chemical accuracy and obscures the importance of the NNH pathway in this combustion regime. An engineering modification of the finite-rate combustion model was proposed to account for the different chemical time scales of the fuel-oxidizer reactions and the NOx formation pathways. It showed an impact on the emissions of NO equivalent to that of the uncertainty in the kinetics of the NNH route. At the cost of introducing a small mass imbalance (of the order of ppm), the adjustment led to improved predictions of NO. The investigation established a possibility for the engineering modeling of NO formation in MILD combustion with a finite-rate chemistry combustion model that can incorporate a detailed mechanism at affordable computational costs. / Doctorat en Sciences de l'ingénieur et technologie
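As a schematic of the consistency analysis described above, the snippet below screens parameter samples of a toy two-parameter surrogate against interval-valued experimental data and retains the feasible set; the surrogate, bounds, and sampling ranges are invented for illustration and are unrelated to the actual SFOR or char-oxidation models.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate(theta):
    # Hypothetical 2-parameter model output for 3 experiments.
    A, E = theta
    return np.array([A * np.exp(-E / 1000.0),
                     A * np.exp(-E / 1200.0),
                     A * np.exp(-E / 1500.0)])

lower = np.array([0.2, 0.3, 0.4])   # experimental value minus its uncertainty
upper = np.array([0.4, 0.5, 0.6])   # experimental value plus its uncertainty

samples = rng.uniform([0.1, 100.0], [2.0, 2000.0], size=(10000, 2))
feasible = [t for t in samples
            if np.all((surrogate(t) >= lower) & (surrogate(t) <= upper))]

# A non-empty `feasible` list signals dataset/model consistency; an empty one
# signals inconsistency and motivates revising the model form or the error bounds.
```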
