About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Towards multifidelity uncertainty quantification for multiobjective structural design

Lebon, Jérémy 12 December 2013 (has links) (PDF)
This thesis addresses multi-objective optimization under uncertainty in structural design. We investigate Polynomial Chaos Expansion (PCE) surrogates, which require extensive training sets. We then face two issues: the high computational cost of an individual Finite Element simulation, and its limited precision. From a numerical point of view, in order to limit the computational expense of the PCE construction, we focus in particular on sparse PCE schemes. We also develop a custom Latin Hypercube Sampling scheme that takes into account the finite precision of the simulation. From the modeling point of view, we propose a multifidelity approach involving a hierarchy of models ranging from full-scale simulations through reduced-order physics up to response surfaces. Finally, we investigate multiobjective optimization of structures under uncertainty, extending the PCE model of the design objectives to take the design variables into account. We illustrate our work with examples in sheet metal forming and in the optimal design of truss structures.
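The thesis develops a custom Latin Hypercube Sampling scheme adapted to finite simulation precision. As a generic illustration of the plain LHS idea it builds on (not the thesis's variant), a minimal NumPy sketch:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Plain Latin Hypercube Sampling on the unit hypercube.

    Each coordinate axis is stratified into n_samples equal bins;
    one point is drawn per bin, and the bin order is shuffled
    independently for every dimension.
    """
    rng = np.random.default_rng(seed)
    # One uniform draw inside each stratum, per dimension
    u = rng.random((n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # Decouple the dimensions by shuffling each column independently
    for d in range(n_dims):
        rng.shuffle(strata[:, d])
    return strata

samples = latin_hypercube(8, 2, seed=0)
# Each column hits each of the 8 strata exactly once (the LHS property)
```

Compared with plain Monte Carlo, every one-dimensional projection of the design is evenly stratified, which is what makes LHS attractive for building PCE training sets.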
62

Multiscale Simulation and Uncertainty Quantification Techniques for Richards' Equation in Heterogeneous Media

Kang, Seul Ki 2012 August 1900 (has links)
In this dissertation, we develop multiscale finite element methods and uncertainty quantification techniques for Richards' equation, a mathematical model of fluid flow in unsaturated porous media. Both coarse-level and fine-level numerical techniques are presented. To develop an accurate coarse-scale numerical method, we need to construct an effective multiscale map that captures the multiscale features of the large-scale solution without resolving the small-scale details. With a careful choice of the coarse spaces for multiscale finite element methods, we can significantly reduce errors. We introduce several methods to construct coarse spaces for multiscale finite element methods, including a coarse space based on local spectral problems. The construction of coarse spaces begins with an initial choice of multiscale basis functions supported in coarse regions. These basis functions are complemented with weighted local spectral eigenfunctions. The resulting basis functions can capture the small-scale features of the solution within a coarse-grid block and yield an accurate coarse-scale solution. However, for a nonlinear equation it is expensive to compute the local basis functions for each parameter value. To overcome this difficulty, a local reduced basis method is discussed, which provides smaller-dimensional spaces in which to compute the basis functions. Robust solution techniques for Richards' equation at the fine scale are also discussed. We construct iterative solvers for Richards' equation whose number of iterations is independent of the contrast. We employ two-level domain decomposition preconditioners to solve the linear systems arising in the approximation of problems with high contrast, and show that, by using the local spectral coarse space in the preconditioners, the number of iterations is independent of the physical properties of the media.
Several numerical experiments are given to support the theoretical results. Last, we present numerical methods for uncertainty quantification applied to Richards' equation. Numerical methods combined with stochastic solution techniques are proposed to sample conductivities of porous media given integrated data. Our algorithm is based on upscaling techniques and the Markov chain Monte Carlo method. Sampling results are presented to demonstrate the efficiency and accuracy of the algorithm.
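The sampling strategy combines upscaling with Markov chain Monte Carlo. A minimal random-walk Metropolis sketch for a single "conductivity-like" scalar parameter (toy posterior, not the thesis's model):

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a scalar parameter."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        # Accept with probability min(1, post(prop)/post(x))
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy posterior for a positive "conductivity": lognormal-type prior
# times a Gaussian data misfit centered at 2.0 (all numbers made up)
def log_post(k):
    if k <= 0:
        return -np.inf
    return -0.5 * np.log(k) ** 2 - 0.5 * ((k - 2.0) / 0.5) ** 2

chain = metropolis(log_post, x0=1.0, n_steps=5000)
```

In the dissertation's setting each likelihood evaluation would be a flow solve, which is why the upscaled (coarse) model is used to pre-screen proposals.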
63

Statistical methods for post-processing ensemble weather forecasts

Williams, Robin Mark January 2016 (has links)
Until recently, weather forecasts were deterministic in nature. For example, a forecast might state "The temperature tomorrow will be 20°C." More recently, however, increasing attention has been paid to the uncertainty associated with such predictions. By quantifying the uncertainty of a forecast, for example with a probability distribution, users can make risk-based decisions. The uncertainty in weather forecasts is typically based on 'ensemble forecasts'. Rather than issuing a single forecast from a numerical weather prediction (NWP) model, ensemble forecasts comprise multiple model runs that differ in either the model physics or the initial conditions. Ideally, ensemble forecasts would provide a representative sample of the possible outcomes of the verifying observations. However, due to model biases and inadequate specification of initial conditions, ensemble forecasts are often biased and underdispersed. As a result, estimates of the most likely values of the verifying observations, and of the associated forecast uncertainty, are often inaccurate. It is therefore necessary to correct, or post-process, ensemble forecasts using statistical models known as 'ensemble post-processing methods'. To this end, this thesis is concerned with the application of statistical methodology to probabilistic weather forecasting, and in particular to ensemble post-processing. Using various datasets, we extend existing work and propose novel uses of statistical methodology to tackle several aspects of ensemble post-processing. Our novel contributions to the field are the following. In chapter 3 we present a comparison study of several post-processing methods, with a focus on probabilistic forecasts of extreme events. We find that the benefits of ensemble post-processing are larger for forecasts of extreme events than for forecasts of common events.
We show that allowing flexible corrections to the biases in ensemble location is important for forecasting extreme events. In chapter 4 we tackle the complicated problem of post-processing ensemble forecasts without making distributional assumptions, to produce recalibrated ensemble forecasts without the intermediate step of specifying a probability forecast distribution. We propose a latent variable model, making a novel application of measurement error models. We show in three case studies that our distribution-free method is competitive with a popular alternative that makes distributional assumptions, and suggest that it could serve as a useful baseline on which forecasters should seek to improve. In chapter 5 we address parameter uncertainty in ensemble post-processing. As in all parametric statistical models, the parameter estimates are subject to uncertainty. We approximate the distribution of the model parameters by bootstrap resampling, and demonstrate improvements in forecast skill by incorporating this additional source of uncertainty into out-of-sample probability forecasts. In chapter 6 we use model diagnostic tools to determine how specific post-processing models may be improved, and subsequently introduce bias correction schemes that move beyond the standard linear schemes employed in the literature and in practice, particularly for correcting ensemble underdispersion. Finally, we illustrate the complicated problem of assessing the skill of ensemble forecasts whose members are dependent, or correlated. We show that dependent ensemble members can lead to surprising conclusions when employing standard measures of forecast skill.
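The chapter-5 idea of bootstrapping post-processing parameters can be sketched generically: fit a standard linear bias correction of observations against the ensemble mean, then bootstrap the training pairs to approximate parameter uncertainty. All data below are synthetic, and the model is the generic linear correction, not the thesis's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training data: a biased, noisy ensemble-mean forecast
n = 200
obs = rng.normal(15.0, 5.0, n)                        # verifying observations
ens_mean = 0.8 * obs + 2.0 + rng.normal(0.0, 1.0, n)  # biased forecast

def fit_linear(x, y):
    """Least-squares fit of the standard linear bias correction y ~ a + b*x."""
    b, a = np.polyfit(x, y, 1)   # polyfit returns highest degree first
    return a, b

a_hat, b_hat = fit_linear(ens_mean, obs)

# Bootstrap the training pairs to approximate parameter uncertainty
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)   # resample with replacement
    boot.append(fit_linear(ens_mean[idx], obs[idx]))
a_sd, b_sd = np.array(boot).std(axis=0)
```

Drawing (a, b) from the bootstrap distribution, rather than plugging in the point estimates, is one way to fold parameter uncertainty into out-of-sample probability forecasts.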
64

Stochastic analysis, simulation and identification of hyperelastic constitutive equations / Analyse stochastique, simulation et identification de lois de comportement hyperélastiques

Staber, Brian 29 June 2018 (has links)
This work is concerned with the construction, generation and identification of stochastic continuum models for heterogeneous materials exhibiting nonlinear behavior. The main application domain is biomechanics, through the development of multiscale and stochastic modeling tools to quantify the large uncertainties exhibited by soft tissues. Two aspects are particularly emphasized. The first is the treatment of uncertainties in nonlinear mechanics and their impact on predictions of the quantities of interest. The second concerns the construction, the generation (in high dimension) and the multiscale identification of such continuum representations from limited experimental data.
65

Cross entropy-based analysis of spacecraft control systems

Mujumdar, Anusha Pradeep January 2016 (has links)
Space missions increasingly require sophisticated guidance, navigation and control algorithms, whose development relies on verification and validation (V&V) techniques to ensure mission safety and success. A crucial element of V&V is the assessment of control system robust performance in the presence of uncertainty. In addition to estimating average performance under uncertainty, it is critical to determine the worst-case performance. Industrial V&V approaches typically employ mu-analysis in the early control design stages, and Monte Carlo simulations on high-fidelity full engineering simulators at advanced stages of the design cycle. While highly capable, such techniques present a critical gap between the pessimistic worst-case estimates found using analytical methods and the optimistic outlook often presented by Monte Carlo runs. Conservative worst-case estimates are problematic because they can demand a controller redesign that is not justified if the poor performance is unlikely to occur. Gaining insight into the probability associated with the worst-case performance is valuable in bridging this gap. Due to the complexity of industrial-scale systems, V&V techniques must be capable of efficiently analysing nonlinear models in the presence of significant uncertainty, and must be computationally tractable. It is also desirable that such techniques demand little engineering effort before each analysis, so that they can be applied widely to industrial systems. Motivated by these factors, this thesis proposes and develops an efficient algorithm based on the cross-entropy simulation method. The proposed algorithm estimates the probabilities associated with performance levels ranging from nominal to degraded, resulting in a curve of probability versus performance.
Such a curve is termed the probability profile of performance (PPoP), and is introduced as a tool that offers insight into a control system's performance, principally the probability associated with the worst case performance. The cross entropy-based robust performance analysis is implemented here on various industrial systems in European Space Agency-funded research projects. The implementation on autonomous rendezvous and docking models for the Mars Sample Return mission constitutes the core of the thesis. The proposed technique is implemented on high-fidelity models of the Vega launcher, as well as on a generic long coasting launcher upper stage. In summary, this thesis (a) develops an algorithm based on the cross entropy simulation method to estimate the probability associated with the worst case, (b) proposes the cross entropy-based PPoP tool to gain insight into system performance, (c) presents results of the robust performance analysis of three space industry systems using the proposed technique in conjunction with existing methods, and (d) proposes an integrated template for conducting robust performance analysis of linearised aerospace systems.
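A hedged sketch of the cross-entropy simulation idea underlying the PPoP, in its standard textbook form (Gaussian sampling family, mean tilting only — not the thesis's implementation):

```python
import numpy as np

def ce_rare_event(score, gamma, n=2000, rho=0.1, iters=20, seed=1):
    """Cross-entropy estimate of p = P(score(X) > gamma) for X ~ N(0, 1).

    The sampling density N(mu, 1) is tilted iteratively toward the
    rare region using the elite (top rho) samples, then the final
    estimate is corrected with importance weights.
    """
    rng = np.random.default_rng(seed)
    mu = 0.0
    for _ in range(iters):
        x = rng.normal(mu, 1.0, n)
        s = score(x)
        # Elite threshold: the (1 - rho) sample quantile, capped at gamma
        level = min(np.quantile(s, 1 - rho), gamma)
        mu = x[s >= level].mean()
        if level >= gamma:
            break
    # Importance sampling under the tilted density N(mu, 1)
    x = rng.normal(mu, 1.0, n)
    w = np.exp(-mu * x + 0.5 * mu**2)   # density ratio N(0,1) / N(mu,1)
    return np.mean(w * (score(x) > gamma))

# Toy worst-case exceedance: exact p = P(X > 3) ~ 1.35e-3
p_hat = ce_rare_event(lambda x: x, gamma=3.0)
```

Sweeping `gamma` over a grid of performance levels and collecting the resulting probabilities is, in spirit, how a probability-versus-performance curve is assembled.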
66

Nonlinear Dynamics of Uncertain Multi-Joint Structures

January 2016 (has links)
abstract: The present investigation is part of a long-term effort focused on the development of a methodology for the computationally efficient prediction of the dynamic response of structures with multiple joints. The first part of this thesis reports on the dynamic response of nominally identical beams with a single lap joint (“Brake-Reuss” beam). The observed impact responses at different levels clearly demonstrate the occurrence of both micro- and macro-slip, which are reflected by increased damping and a lowering of natural frequencies. Significant beam-to-beam variability of impact responses is also observed. Based on these experimental results, a deterministic 4-parameter Iwan model of the joint was developed. These parameters were randomized following a previous investigation. The randomness in the impact response predicted from this uncertain model was assessed in a Monte Carlo format through a series of time integrations of the response and found to be consistent with the experimental results. The availability of an uncertain computational model for the Brake-Reuss beam provides a starting point to analyze and model the response of multi-joint structures in the presence of uncertainty/variability. To this end, a 4-beam frame was designed that is composed of three identical Brake-Reuss beams and a fourth, stretched one. The response of that structure to impact was computed and several cases were identified. The presence of uncertainty implies that an exact prediction of the response of a particular frame cannot be achieved. Rather, the response can only be predicted to lie within a band reflecting the level of uncertainty. In this perspective, the computational model adopted for the frame is only required to provide a good estimate of this uncertainty band. Equivalently, a relaxation of the model complexity, i.e., the introduction of epistemic uncertainty, can be performed as long as it does not affect significantly the uncertainty band of the predictions. 
Such an approach, which holds significant promise for the efficient computation of the response of structures with many uncertain joints, is assessed here by replacing some joints with linear spring elements. It is found that this simplification of the model is often acceptable at lower excitation/response levels. / Dissertation/Thesis / Masters Thesis Mechanical Engineering 2016
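The "uncertainty band" prediction can be sketched with a toy Monte Carlo: randomize a joint-dominated stiffness and propagate it through a one-degree-of-freedom natural-frequency calculation. All numbers are hypothetical and unrelated to the Brake-Reuss beam:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 1-DOF stand-in for a jointed-beam mode: f_n = sqrt(k/m) / (2*pi).
# Mass and nominal stiffness are made-up numbers.
m = 2.0          # modal mass [kg]
k_nom = 8.0e5    # nominal joint-dominated stiffness [N/m]

# Unit-to-unit variability: lognormal stiffness with ~10% scatter
k = k_nom * rng.lognormal(mean=0.0, sigma=0.10, size=10_000)
f_n = np.sqrt(k / m) / (2.0 * np.pi)   # natural frequency [Hz]

# The model's output is an uncertainty band, not a point prediction
band = np.percentile(f_n, [2.5, 97.5])
f_nominal = np.sqrt(k_nom / m) / (2.0 * np.pi)
```

Under the relaxed-model view described above, a simplified joint model is acceptable precisely when it reproduces this band, not the response of any individual specimen.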
67

Quantification des incertitudes et analyse de sensibilité pour codes de calcul à entrées fonctionnelles et dépendantes / Stochastic methods for uncertainty treatment of functional variables in computer codes : application to safety studies

Nanty, Simon 15 October 2015 (has links)
This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to safety studies of nuclear reactors. These two applications share several features. First, the inputs of the studied code are scalar and functional variables, the functional inputs being mutually dependent. Second, the probability distribution of the functional variables is known only through a sample of their realizations. Third, in one of the two applications, the high computational cost of the code limits the number of possible simulations. The main objective of this work was to propose a complete uncertainty treatment methodology for both cases. First, we proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology models the dependency between the functional variables and also accounts for the link between these variables and another variable, called a covariate, which can be, for instance, the output of the considered code.
Alongside this methodology, we developed an adaptation of a functional-data visualization tool that displays simultaneously the uncertainties and the features of several dependent functional variables. Second, a methodology for the global sensitivity analysis of the simulators of the two case studies was proposed. For a computationally expensive code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model, or metamodel: a fast-running approximation of the expensive code. An optimized uniform sampling method for scalar and functional variables was developed to build the learning basis of the metamodel. Finally, a new approach for approximating expensive codes with functional inputs was explored. In this approach, the code is viewed as a stochastic code whose randomness comes from the functional variables, assumed to be uncontrollable. Under these assumptions, several metamodels were developed and compared. All the methods proposed in this work were applied to the two considered nuclear safety applications.
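The surrogate-model step can be sketched generically: replace an "expensive" code with a cheap least-squares polynomial metamodel trained on a small design, then validate on fresh points. The response function is made up, and plain uniform sampling stands in for the optimized design of the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_code(x1, x2):
    """Stand-in for a costly simulator (hypothetical smooth response)."""
    return np.sin(x1) + 0.5 * x2**2

# Small learning basis (50 "simulator runs")
n_train = 50
X = rng.uniform(-1.0, 1.0, (n_train, 2))
y = expensive_code(X[:, 0], X[:, 1])

def features(X):
    """Low-degree polynomial basis for the least-squares metamodel."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x2**2, x1 * x2, x1**3])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Validate on fresh points: the metamodel replaces the code inside
# the (otherwise intractable) sensitivity-analysis loop
X_val = rng.uniform(-1.0, 1.0, (1000, 2))
err = np.abs(features(X_val) @ coef
             - expensive_code(X_val[:, 0], X_val[:, 1]))
```

Once validated, the thousands of evaluations a quantitative global sensitivity analysis requires are run on the metamodel at negligible cost.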
68

Quantificação de incertezas aplicada à geomecânica de reservatórios

PEREIRA, Leonardo Cabral 08 July 2015 (has links)
Reservoir geomechanics encompasses aspects of rock mechanics, structural geology and petroleum engineering, and must be understood in order to explain critical aspects of the exploration and production phases of petroleum reservoirs, such as pore pressure prediction, the seal potential of geological faults, well trajectory design, fracture pressure calculation, fault reactivation, reservoir compaction and CO2 injection, among others. An adequate representation of uncertainty is an essential part of any project. Specifically, an analysis intended to provide information about the behavior of a system should include an assessment of the uncertainty associated with the results. Without such an estimate, perspectives drawn from the analysis, and decisions made based on the results, are questionable. Uncertainty quantification for large-scale multiphysics models, such as reservoir geomechanics models, requires special attention, mainly because one commonly faces scenarios in which available data are scarce or absent. This thesis set out to evaluate and integrate these two themes: uncertainty quantification and reservoir geomechanics.
To this end, an extensive literature review was carried out on the key problems of reservoir geomechanics, such as injection above the fracture pressure, reactivation of geological faults, reservoir compaction and CO2 injection. This review included the derivation and implementation of analytical solutions available in the literature for the phenomena described above; the first contribution of this thesis was thus to gather the different analytical solutions related to reservoir geomechanics in a single document. The uncertainty quantification process was discussed at length, from the definition of the types of uncertainty (aleatory or epistemic) to the presentation of different uncertainty quantification methodologies. Evidence theory, also known as Dempster-Shafer theory, was detailed and presented as a generalization of probability theory.
Although widely used in several fields of engineering, evidence theory was applied here to reservoir engineering for the first time, which constitutes a fundamental contribution of this thesis. The concept of decision under uncertainty was introduced and drove the integration of these two highly relevant themes in reservoir engineering. Different decision-making scenarios were described and discussed, among them: the absence of available input data; the situation in which the input parameters are known; the inference of new data over the course of the project; and, finally, a hybrid modeling approach. As a result of this integration, three articles were submitted to peer-reviewed journals. Finally, the flow equation in deformable porous media was derived and an explicit methodology was proposed for incorporating geomechanical effects into traditional reservoir simulation. This methodology proved quite effective compared with the fully coupled and iterative methods in the literature.
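The belief/plausibility bounds of evidence theory mentioned above can be sketched in a few lines. The frame of discernment and the mass assignments below are illustrative, not from the thesis:

```python
# Toy fault-seal question: some evidence mass is assigned to the *set*
# {"seal", "leak"}, i.e. to ignorance, which a single probability
# distribution cannot express. Masses here are made-up numbers.
m = {
    frozenset({"seal"}): 0.5,
    frozenset({"leak"}): 0.2,
    frozenset({"seal", "leak"}): 0.3,  # uncommitted (epistemic) mass
}

def belief(A):
    """Sum of masses of all focal sets contained in A."""
    return sum(v for s, v in m.items() if s <= A)

def plausibility(A):
    """Sum of masses of all focal sets intersecting A."""
    return sum(v for s, v in m.items() if s & A)

A = frozenset({"seal"})
bel, pl = belief(A), plausibility(A)
# The interval [bel, pl] = [0.5, 0.8] brackets the unknown probability
# of "seal"; the 0.3 gap is exactly the mass assigned to ignorance
```

When the uncommitted mass is zero, belief and plausibility coincide and the construction reduces to ordinary probability — the sense in which evidence theory generalizes probability theory.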
69

Analyse de sensibilité pour la simulation numérique des écoulements compressibles en aérodynamique externe / Sensitivity analysis for numerical simulation of compressible flows in external aerodynamics

Resmini, Andrea 11 December 2015 (has links)
Sensitivity analysis for the numerical simulation of compressible flows in external aerodynamics, with respect to mesh discretization and to uncertainty in model input parameters, has been addressed (1) through adjoint-based gradient computation techniques and (2) through non-intrusive stochastic approximation methods based on sparse grids.
(1) An enhanced goal-oriented mesh adaptation method is introduced, based on the total derivatives of the aerodynamic functions of interest with respect to mesh coordinates, in a finite-volume RANS framework for mono-block and non-matching multi-block structured grids. Applications to 2D RANS flow about an airfoil in transonic and detached subsonic conditions, for drag coefficient estimation, are presented, and the benefit of the proposed method is verified. (2) The generalized Polynomial Chaos in its sparse pseudospectral form and stochastic collocation methods built on isotropic and anisotropic sparse grids are examined; the anisotropic grids are obtained through an improved dimension-adaptivity method driven by global sensitivity analysis. The efficiency of these stochastic approximations is assessed on multivariate test functions and on viscous aerodynamic flow around an airfoil in the presence of geometrical and operational uncertainties. Integrating achievements (1) and (2) into a coupled approach in future work would allow a well-balanced goal-oriented control of the deterministic and stochastic errors.
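Stochastic collocation in its simplest one-dimensional form evaluates the simulator only at quadrature nodes of the input distribution. A sketch with a made-up scalar quantity of interest and a single standard-normal uncertain parameter:

```python
import numpy as np

def qoi(xi):
    """Made-up scalar response of one standard-normal uncertain input."""
    return np.exp(0.3 * xi)

# Probabilists' Gauss-Hermite nodes/weights; normalizing by sqrt(2*pi)
# turns the quadrature rule into an expectation under N(0, 1)
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
weights = weights / np.sqrt(2.0 * np.pi)

# Five "simulator" runs suffice for a near-exact mean here
mean_qoi = np.sum(weights * qoi(nodes))
# Exact value for comparison: E[exp(0.3 Z)] = exp(0.3**2 / 2)
```

Sparse grids extend this idea to many uncertain parameters by combining such one-dimensional rules while avoiding the full tensor-product explosion, and anisotropic variants spend more nodes on the influential directions.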
70

A Comprehensive Coal Conversion Model Extended to Oxy-Coal Conditions

Holland, Troy Michael 01 July 2017 (has links)
CFD simulations are valuable tools for evaluating and deploying oxy-fuel and other carbon capture technologies, either as retrofits or in new construction. However, accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. This work extends a comprehensive char conversion code (the Carbon Conversion Kinetics, or CCK, model), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. The CCK model was thoroughly investigated with a global sensitivity analysis, which highlighted several submodels in the CCK code; these were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include a greatly extended annealing model, the swelling model, the mode-of-burning parameter, and the kinetic model, along with the addition of the Chemical Percolation Devolatilization (CPD) model. Predictions of the resulting Carbon Conversion Kinetics for oxy-coal combustion (CCK/oxy) model were compared to oxy-coal data, and further compared to parallel data sets obtained at near-conventional conditions.
