61

Statistical methods for post-processing ensemble weather forecasts

Williams, Robin Mark January 2016
Until recently, weather forecasts were deterministic in nature. For example, a forecast might state "The temperature tomorrow will be 20°C." More recently, however, increasing attention has been paid to the uncertainty associated with such predictions. By quantifying the uncertainty of a forecast, for example with a probability distribution, users can make risk-based decisions. The uncertainty in weather forecasts is typically based on 'ensemble forecasts'. Rather than issuing a single forecast from a numerical weather prediction (NWP) model, ensemble forecasts comprise multiple model runs that differ in either the model physics or the initial conditions. Ideally, an ensemble forecast would provide a representative sample of the possible outcomes of the verifying observations. However, owing to model biases and inadequate specification of initial conditions, ensemble forecasts are often biased and underdispersed. As a result, estimates of the most likely values of the verifying observations, and of the associated forecast uncertainty, are often inaccurate. It is therefore necessary to correct, or post-process, ensemble forecasts using statistical models known as 'ensemble post-processing methods'. To this end, this thesis is concerned with the application of statistical methodology to probabilistic weather forecasting, and in particular to ensemble post-processing. Using various datasets, we extend existing work and propose novel uses of statistical methodology to tackle several aspects of ensemble post-processing. Our novel contributions to the field are the following. In Chapter 3 we present a comparison study of several post-processing methods, with a focus on probabilistic forecasts of extreme events. We find that the benefits of ensemble post-processing are larger for forecasts of extreme events than for forecasts of common events, and we show that allowing flexible corrections to the biases in ensemble location is important when forecasting extreme events. In Chapter 4 we tackle the complicated problem of post-processing ensemble forecasts without making distributional assumptions, producing recalibrated ensemble forecasts without the intermediate step of specifying a probability forecast distribution. We propose a latent variable model and make a novel application of measurement error models. We show in three case studies that our distribution-free method is competitive with a popular alternative that makes distributional assumptions, and we suggest that it could serve as a useful baseline on which forecasters should seek to improve. In Chapter 5 we address parameter uncertainty in ensemble post-processing. As in all parametric statistical models, the parameter estimates are subject to uncertainty. We approximate the distribution of the model parameters by bootstrap resampling, and demonstrate improvements in forecast skill from incorporating this additional source of uncertainty into out-of-sample probability forecasts. In Chapter 6 we use model diagnostic tools to determine how specific post-processing models may be improved, and we introduce bias correction schemes that move beyond the standard linear schemes employed in the literature and in practice, particularly for correcting ensemble underdispersion. Finally, we illustrate the complicated problem of assessing the skill of ensemble forecasts whose members are dependent, or correlated, and we show that dependent ensemble members can lead to surprising conclusions when standard measures of forecast skill are employed.
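The flavor of post-processing studied here can be illustrated with a minimal nonhomogeneous Gaussian regression, a standard method from this literature (not necessarily the exact models of the thesis), fitted to synthetic data: the forecast mean is a linear correction of the ensemble mean, and the forecast variance inflates the ensemble variance.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic training data: a biased, underdispersed 10-member ensemble.
# All members share a common error, so they cluster too tightly around
# a value that is systematically off.
n_days, n_members = 500, 10
truth = rng.normal(15.0, 4.0, size=n_days)
common = rng.normal(0.0, 2.0, size=(n_days, 1))
ens = truth[:, None] + 1.5 + common + rng.normal(0.0, 0.5, size=(n_days, n_members))

ens_mean, ens_var = ens.mean(axis=1), ens.var(axis=1, ddof=1)

def neg_log_lik(p):
    """Ignorance score of the forecast N(a + b*m, c^2 + d^2*s^2)."""
    a, b, c, d = p
    mu = a + b * ens_mean
    sigma2 = c**2 + d**2 * ens_var   # squares keep the variance positive
    return np.sum(0.5 * np.log(2 * np.pi * sigma2) + (truth - mu)**2 / (2 * sigma2))

fit = minimize(neg_log_lik, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
a, b, c, d = fit.x
print(f"location correction: mu = {a:.2f} + {b:.2f} * ens_mean")

# Recalibrated 90% interval for a new (made-up) forecast case.
new_ens = np.array([18.2, 18.9, 18.4, 18.7, 19.1, 18.3, 18.8, 18.5, 19.0, 18.6])
mu = a + b * new_ens.mean()
sigma = np.sqrt(c**2 + d**2 * new_ens.var(ddof=1))
print("90% interval:", norm.ppf([0.05, 0.95], loc=mu, scale=sigma))
```

The chapter-specific methods differ in detail (flexible location corrections, distribution-free recalibration, bootstrapped parameters), but they all act on the same raw material: ensemble mean and spread paired with verifying observations.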
62

Stochastic analysis, simulation and identification of hyperelastic constitutive equations

Staber, Brian 29 June 2018
This work concerns the construction, generation and identification of stochastic continuum models for heterogeneous media exhibiting nonlinear behaviour. The main target domain of application is biomechanics, through the development of multiscale and stochastic modeling tools intended to quantify the large variabilities exhibited by soft tissues. Two aspects are particularly highlighted. The first is the treatment of uncertainties in nonlinear mechanics and their impact on predictions of the quantities of interest. The second concerns the construction, the (high-dimensional) generation and the multiscale identification of continuum representations from limited experimental data.
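As a toy illustration of what a stochastic hyperelastic constitutive equation entails, consider an incompressible neo-Hookean model with a randomized shear modulus; the distribution and constants below are illustrative assumptions, not the constructions of the thesis. Parameter uncertainty is propagated by Monte Carlo to a quantity of interest, the uniaxial stress at a given stretch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random shear modulus mu (kPa): a gamma law is a common positivity-
# preserving choice for soft-tissue stiffness; values are illustrative.
mu_samples = rng.gamma(shape=20.0, scale=0.5, size=10_000)  # mean 10 kPa

def neo_hookean_uniaxial_stress(mu, lam):
    """Cauchy stress (kPa) of an incompressible neo-Hookean solid in
    uniaxial tension at stretch lam: sigma = mu * (lam**2 - 1/lam)."""
    return mu * (lam**2 - 1.0 / lam)

lam = 1.3  # 30% stretch
sigma = neo_hookean_uniaxial_stress(mu_samples, lam)
print(f"stress at stretch {lam}: mean {sigma.mean():.2f} kPa, "
      f"95% band [{np.percentile(sigma, 2.5):.2f}, {np.percentile(sigma, 97.5):.2f}] kPa")
```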
63

Cross entropy-based analysis of spacecraft control systems

Mujumdar, Anusha Pradeep January 2016
Space missions increasingly require sophisticated guidance, navigation and control algorithms, whose development relies on verification and validation (V&V) techniques to ensure mission safety and success. A crucial element of V&V is the assessment of control system robust performance in the presence of uncertainty. In addition to estimating average performance under uncertainty, it is critical to determine the worst-case performance. Industrial V&V approaches typically employ mu-analysis in the early control design stages, and Monte Carlo simulations on high-fidelity full engineering simulators at advanced stages of the design cycle. While highly capable, such techniques present a critical gap between the pessimistic worst-case estimates found using analytical methods and the optimistic outlook often presented by Monte Carlo runs. Conservative worst-case estimates are problematic because they can demand a controller redesign that is not justified if the poor performance is unlikely to occur. Gaining insight into the probability associated with the worst-case performance is valuable in bridging this gap. Due to the complexity of industrial-scale systems, V&V techniques must be capable of efficiently analysing nonlinear models in the presence of significant uncertainty, and they must be computationally tractable. It is also desirable that such techniques demand little engineering effort before each analysis, so that they can be applied widely to industrial systems. Motivated by these factors, this thesis proposes and develops an efficient algorithm based on the cross entropy simulation method. The proposed algorithm efficiently estimates the probabilities associated with various performance levels, from nominal performance up to degraded performance values, resulting in a curve of probabilities associated with the various performance values. Such a curve is termed the probability profile of performance (PPoP), and is introduced as a tool that offers insight into a control system's performance, principally the probability associated with the worst-case performance. The cross entropy-based robust performance analysis is implemented here on various industrial systems in European Space Agency-funded research projects. The implementation on autonomous rendezvous and docking models for the Mars Sample Return mission constitutes the core of the thesis. The proposed technique is also implemented on high-fidelity models of the Vega launcher, as well as on a generic long-coasting launcher upper stage. In summary, this thesis (a) develops an algorithm based on the cross entropy simulation method to estimate the probability associated with the worst case, (b) proposes the cross entropy-based PPoP tool to gain insight into system performance, (c) presents results of the robust performance analysis of three space industry systems using the proposed technique in conjunction with existing methods, and (d) proposes an integrated template for conducting robust performance analysis of linearised aerospace systems.
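The cross entropy simulation method at the heart of the thesis can be sketched on a toy rare-event problem: the sampling density is iteratively tilted toward the region where a performance index exceeds a threshold, and the exceedance probability is then recovered with importance weights. The performance function, threshold and tuning constants below are illustrative stand-ins, not taken from the ESA studies.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def performance(x):
    # Toy degradation index: in a real study this would be one run of the
    # nonlinear simulator; here it is just the sum of two uncertain inputs.
    return x.sum(axis=1)

d, N, rho, gamma_target = 2, 2000, 0.1, 8.0
mu, sigma = np.zeros(d), np.ones(d)   # start from the nominal density N(0, I)

for _ in range(50):
    x = rng.normal(mu, sigma, size=(N, d))
    s = performance(x)
    level = min(np.quantile(s, 1 - rho), gamma_target)   # adaptive level
    elite = x[s >= level]
    # Likelihood ratio of the nominal density to the current sampler.
    w = np.exp(norm.logpdf(elite).sum(axis=1)
               - norm.logpdf(elite, mu, sigma).sum(axis=1))
    mu = (w[:, None] * elite).sum(axis=0) / w.sum()
    sigma = np.sqrt((w[:, None] * (elite - mu) ** 2).sum(axis=0) / w.sum())
    if level >= gamma_target:
        break

# Final importance-sampling estimate of P(performance >= gamma_target).
x = rng.normal(mu, sigma, size=(100_000, d))
s = performance(x)
w = np.exp(norm.logpdf(x).sum(axis=1) - norm.logpdf(x, mu, sigma).sum(axis=1))
print(f"estimated exceedance probability: {np.mean(w * (s >= gamma_target)):.3e}")
# Analytically, P(x1 + x2 >= 8) = 1 - Phi(8 / sqrt(2)) ~ 7.7e-9.
```

Sweeping gamma_target over a range of performance levels and recording each estimate traces out exactly the kind of probability-versus-performance curve (PPoP) described above.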
64

Nonlinear Dynamics of Uncertain Multi-Joint Structures

January 2016
The present investigation is part of a long-term effort focused on the development of a methodology for the computationally efficient prediction of the dynamic response of structures with multiple joints. The first part of this thesis reports on the dynamic response of nominally identical beams with a single lap joint ("Brake-Reuss" beam). The observed impact responses at different levels clearly demonstrate the occurrence of both micro- and macro-slip, which are reflected by increased damping and a lowering of natural frequencies. Significant beam-to-beam variability of impact responses is also observed. Based on these experimental results, a deterministic 4-parameter Iwan model of the joint was developed, and its parameters were randomized following a previous investigation. The randomness in the impact response predicted from this uncertain model was assessed in a Monte Carlo format through a series of time integrations of the response, and was found to be consistent with the experimental results. The availability of an uncertain computational model for the Brake-Reuss beam provides a starting point for analyzing and modeling the response of multi-joint structures in the presence of uncertainty/variability. To this end, a 4-beam frame was designed that is composed of three identical Brake-Reuss beams and a fourth, stretched one. The response of that structure to impact was computed and several cases were identified. The presence of uncertainty implies that an exact prediction of the response of a particular frame cannot be achieved; rather, the response can only be predicted to lie within a band reflecting the level of uncertainty. In this perspective, the computational model adopted for the frame is only required to provide a good estimate of this uncertainty band. Equivalently, a relaxation of the model complexity, i.e., the introduction of epistemic uncertainty, can be performed as long as it does not significantly affect the uncertainty band of the predictions. Such an approach, which holds significant promise for the efficient computation of the response of structures with many uncertain joints, is assessed here by replacing some joints with linear spring elements. It is found that this simplification of the model is often acceptable at lower excitation/response levels. / Masters Thesis, Mechanical Engineering, 2016
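In the spirit of the Monte Carlo assessment described above, randomized joint properties can be propagated to an uncertainty band on modal quantities. The study itself uses a 4-parameter Iwan model and time integration; the single-degree-of-freedom surrogate and parameter ranges below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative stand-in for an uncertain lap joint: the joint contributes a
# random stiffness k_j and loss factor eta to a 1-DOF modal model.
n_samples = 5000
m = 1.0                                                # modal mass, kg
k0 = 4.0e6                                             # structure stiffness, N/m
k_j = rng.lognormal(np.log(1.0e6), 0.15, n_samples)    # joint stiffness, N/m
eta = rng.uniform(0.002, 0.02, n_samples)              # joint loss factor

f_n = np.sqrt((k0 + k_j) / m) / (2 * np.pi)   # natural frequency, Hz
zeta = eta / 2                                 # equivalent damping ratio

for name, v in [("natural frequency (Hz)", f_n), ("damping ratio", zeta)]:
    lo, hi = np.percentile(v, [2.5, 97.5])
    print(f"{name}: 95% uncertainty band [{lo:.4g}, {hi:.4g}]")
```

A simplified (here, linearized) joint model is acceptable in exactly the sense the abstract describes: as long as the band it predicts matches the band of the full model.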
65

Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

Nanty, Simon 15 October 2015
This work falls within the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications share several features. First, the computer code inputs are both functional and scalar variables, with the functional inputs being mutually dependent. Second, the probability distribution of the functional variables is known only through a sample of their realizations. Third, in one of the two applications, the high computational cost of the code limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators in the two considered cases. First, we propose a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependence between the functional variables and to take into account their link to another variable, called a covariate, which may be, for instance, the output of the code under study. In association with this methodology, we have developed an adaptation of a visualization tool for functional data, which enables the uncertainties and features of several dependent functional variables to be visualized simultaneously. Second, a method for performing the global sensitivity analysis of the computer codes used in the two case studies is proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model, or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables was developed to build the learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional inputs was explored. In this approach, the code is seen as a stochastic code whose randomness is due to the functional variables, which are assumed to be uncontrollable. In this framework, several metamodels were developed and compared. All the methods proposed in this work were applied to the two nuclear safety applications.
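One building block of such a methodology can be sketched under simplifying assumptions: a principal component decomposition of a sample of discretized functional inputs (functional PCA via the SVD), which yields a low-dimensional set of coefficients in which the dependence between curves can then be modeled. The synthetic curves below stand in for the code inputs; nothing here reproduces the thesis's own models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic sample of 200 dependent functional inputs on a 100-point grid:
# amplitude and phase shift are correlated, mimicking dependent inputs.
t = np.linspace(0.0, 1.0, 100)
amp = rng.normal(1.0, 0.2, size=(200, 1))
shift = 0.5 * amp + rng.normal(0.0, 0.05, size=(200, 1))
curves = amp * np.sin(2 * np.pi * (t + shift)) + rng.normal(0, 0.02, (200, 100))

# Functional PCA via SVD of the centered sample.
mean_curve = curves.mean(axis=0)
u, s, vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
scores = u * s                         # per-curve coefficients on each mode
explained = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
print(f"{k} modes capture 99% of the variance")

# Reconstruct (and hence resample or visualize) curves from k coefficients;
# dependence between curves now lives in the joint law of these scores.
recon = mean_curve + scores[:, :k] @ vt[:k]
print("max reconstruction error:", np.abs(recon - curves).max())
```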
66

Uncertainty quantification applied to reservoir geomechanics

PEREIRA, Leonardo Cabral 08 July 2015
Reservoir geomechanics encompasses aspects related not only to rock mechanics but also to structural geology and petroleum engineering, and it must be understood in order to better explain critical aspects of the exploration and production phases of petroleum reservoirs, such as pore pressure prediction, the seal potential of geological faults, well trajectory design, fracture pressure calculation, fault reactivation, reservoir compaction, and CO2 injection. An adequate representation of uncertainty is an essential part of any project. Specifically, an analysis intended to provide information about the behavior of a system should include an assessment of the uncertainty associated with the results; without such an estimate, the perspectives drawn from the analysis and the decisions made based on its results are questionable. The process of uncertainty quantification for large-scale multiphysics models, such as reservoir geomechanics models, requires special attention because scenarios in which data availability is scarce or nonexistent are commonly encountered. This thesis set out to evaluate and integrate these two themes: uncertainty quantification and reservoir geomechanics. To this end, an extensive literature review was carried out on the key problems related to reservoir geomechanics, such as injection above the fracture pressure, reactivation of geological faults, reservoir compaction, and CO2 injection. This review included the derivation and implementation of analytical solutions available in the literature for the phenomena described above; thus, the first contribution of this thesis was to gather different analytical solutions related to reservoir geomechanics in a single document. The process of uncertainty quantification is discussed at length, from the definition of the types of uncertainty (aleatory or epistemic) to the presentation of different methodologies for quantifying uncertainty. Evidence theory, also known as Dempster-Shafer theory, is detailed and presented as a generalization of probability theory. Although widely used in several fields of engineering, evidence theory is applied here to reservoir engineering for the first time, which makes this a fundamental contribution of the thesis. The concept of decision-making under uncertainty is introduced and drives the integration of these two highly relevant themes in reservoir engineering. Different decision-making scenarios are described and discussed, among them: the absence of available input data, the situation in which the input parameters are known, the inference of new data over the course of the project, and, finally, a hybrid modeling approach. As a result of this integration, three articles were submitted to peer-reviewed journals. Finally, the flow equation in deformable porous media is derived, and an explicit methodology is proposed for incorporating geomechanical effects into traditional reservoir simulation. This methodology produced quite effective results when compared with the fully coupled and iterative methods in the literature.
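Dempster's rule of combination, the core operation of the evidence theory applied in the thesis, can be sketched for two sources of evidence over a small frame of discernment. The fault-seal framing and the mass values below are made-up illustrations, not data from the thesis.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset focal elements
    (Dempster's rule: conjunctive combination with conflict renormalization)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical experts assess whether a fault is Sealing (S) or
# Leaking (L); mass on the full frame {S, L} encodes their ignorance.
S, L = frozenset("S"), frozenset("L")
SL = S | L
expert1 = {S: 0.6, SL: 0.4}
expert2 = {S: 0.3, L: 0.2, SL: 0.5}
m = dempster_combine(expert1, expert2)
belief_S = sum(v for k, v in m.items() if k <= S)   # Bel(S): certain support
plaus_S = sum(v for k, v in m.items() if k & S)     # Pl(S): possible support
print({''.join(sorted(k)): round(v, 3) for k, v in m.items()})
print(f"Bel(S) = {belief_S:.3f}, Pl(S) = {plaus_S:.3f}")
```

The gap between belief and plausibility is what distinguishes this framework from ordinary probability: it quantifies epistemic ignorance separately from randomness, which is precisely why it suits the data-scarce scenarios the abstract describes.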
67

Sensitivity analysis for numerical simulation of compressible flows in external aerodynamics

Resmini, Andrea 11 December 2015
Sensitivity analysis for the numerical simulation of compressible flows in external aerodynamics, with respect to the mesh discretization and to uncertainty in the model input parameters, is addressed (1) through adjoint-based gradient computation techniques and (2) through non-intrusive stochastic approximation methods based on sparse grids. (1) An enhanced goal-oriented mesh adaptation method is introduced, based on the total derivatives of the aerodynamic functionals of interest with respect to the mesh coordinates, in a finite-volume RANS framework for mono-block and non-matching multi-block structured grids. Applications to 2D RANS flow about an airfoil in transonic and detached subsonic conditions, for estimation of the drag coefficient, are presented, and the benefit of the proposed method is demonstrated. (2) The generalized polynomial chaos in its sparse pseudospectral form and stochastic collocation methods built on both isotropic and dimension-adapted sparse grids, the latter obtained through an improved dimension-adaptivity method driven by global sensitivity analysis, are considered. The efficiency of these stochastic approximations is assessed on multivariate test functions and on viscous aerodynamic simulations about an airfoil in the presence of geometrical and operational uncertainties. Integrating achievements (1) and (2) into a coupled approach in future work would enable a well-balanced, goal-oriented control of the deterministic and stochastic errors.
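A minimal instance of the non-intrusive stochastic approximation considered in part (2) is a one-dimensional pseudospectral polynomial chaos with Gauss-Hermite quadrature; sparse grids generalize this to several inputs. The "aerodynamic model" below is a cheap stand-in function, and all values are illustrative.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def model(mach):
    # Stand-in for an expensive CFD drag evaluation at an uncertain Mach number.
    return 0.02 + 0.05 * (mach - 0.73)**2 + 0.01 * np.sin(25 * mach)

# Uncertain input: Mach ~ N(0.73, 0.01^2), written as Mach = 0.73 + 0.01*Z.
mu_M, sd_M, order = 0.73, 0.01, 6
nodes, weights = He.hermegauss(order + 1)   # Gauss-Hermite rule, weight e^{-x^2/2}

# Project the model output onto probabilists' Hermite polynomials He_n:
# c_n = E[f(Z) He_n(Z)] / n!, with the expectation done by quadrature.
y = model(mu_M + sd_M * nodes)
coeffs = [
    np.sum(weights * y * He.hermeval(nodes, [0] * n + [1])) / (factorial(n) * sqrt(2 * pi))
    for n in range(order + 1)
]
mean = coeffs[0]
var = sum(c**2 * factorial(n) for n, c in enumerate(coeffs[1:], start=1))
print(f"PCE mean drag = {mean:.5f}, std = {np.sqrt(var):.2e}")

# Cross-check against brute-force Monte Carlo on the same model.
z = np.random.default_rng(5).normal(size=200_000)
print(f"MC  mean drag = {model(mu_M + sd_M * z).mean():.5f}")
```

The appeal is that seven model runs replace hundreds of thousands; the dimension-adaptive sparse grids of the thesis address how to keep that economy when there are many uncertain inputs.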
68

Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models

Wu, Sichao 29 August 2017
When capturing a real-world, networked system in a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V&V) of such models is an inherent and fundamental challenge. Central to V&V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA) and design of experiments (DOE). In addition, network-based computer simulation models, compared with models based on ordinary and partial differential equations (ODEs and PDEs), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging, since it requires a broad set of skills ranging from domain expertise to in-depth knowledge of modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of these challenges, to the best of our knowledge none of them accomplishes all of this in a model-independent and scientifically reproducible manner. In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data flow management system with digital library functions helping to ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models, with a broad range of analyses such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and are studied theoretically in this dissertation, including a broad range of stability and sensitivity analyses offering insights into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory was a motivator for the design and implementation of GENEUS. GENEUS, rooted in the framework of GDSs, provides modelers, experimentalists, and research groups access to a variety of UQ/SA/DOE methods with robust and tested implementations, without requiring them to have detailed expertise in statistics, data management and computing. Even for research teams that have all of these skills, GENEUS can significantly increase research productivity. / Ph.D.
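A miniature example of the kind of object this dissertation targets: a synchronous threshold graph dynamical system, with a small made-up network and a full-factorial parameter sweep standing in for a designed experiment. Nothing here uses or reproduces GENEUS itself.

```python
import itertools
import random

# A small undirected contact network, given as an adjacency structure.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
nbrs = {v: set() for v in range(6)}
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

def run_gds(threshold, seeds, steps=10):
    """Synchronous threshold GDS: a vertex turns (and stays) active once
    at least `threshold` of its neighbors are active."""
    state = {v: v in seeds for v in nbrs}
    for _ in range(steps):
        # Dict comprehension reads the old state throughout: synchronous update.
        state = {v: state[v] or sum(state[u] for u in nbrs[v]) >= threshold
                 for v in nbrs}
    return sum(state.values())

# Full-factorial design over threshold x seed count, replicated to average
# over the random placement of the initially active vertices.
rng = random.Random(6)
for threshold, n_seeds in itertools.product([1, 2, 3], [1, 2]):
    sizes = [run_gds(threshold, rng.sample(range(6), n_seeds)) for _ in range(200)]
    print(f"threshold={threshold} seeds={n_seeds}: "
          f"mean final active = {sum(sizes) / len(sizes):.2f}")
```

Even this toy exhibits the structure-to-function sensitivity the dissertation studies theoretically: shifting one vertex's threshold can flip the system between full cascades and frozen states.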
69

Study of the flow field through the wall of a Diesel particulate filter using Lattice Boltzmann Methods

García Galache, José Pedro 03 November 2017
Pollution is becoming an important problem in large metropolitan areas, and a large portion of the pollutants is emitted by the vehicle fleet. At the European level, as in other economic areas, regulation is becoming more and more restrictive, the Euro standards being the clearest example of this tendency. Especially important are the emissions of nitrogen oxides (NOx) and particulate matter (PM). Two different strategies exist to reduce the emission of pollutants. The first is to avoid their creation, typically by modifying the combustion process through different fuel injection laws or by controlling the charge renewal. The second set of strategies focuses on eliminating the pollutants: NOx is reduced by means of catalysis and/or a reducing atmosphere, usually created by injection of urea, while particulate matter is removed using filters. This thesis focuses on the latter. Most strategies for reducing emissions penalise fuel consumption, and the particulate filter is no exception. Installed in the exhaust duct, it restricts the flow of air and increases the pressure along the whole exhaust line upstream of the filter, reducing engine performance. Optimising the filter is therefore an important task: its efficiency has to be high enough to comply with the emissions regulations, while the pressure drop has to be as low as possible to preserve fuel consumption and performance. The objective of the thesis is to find the relation between the micro-structure and the macroscopic properties, knowledge with which the micro-structure can be optimised. The micro-structure of the filter mimics acicular mullite and is created by procedural generation using random parameters. The relation between the micro-structure and macroscopic properties such as porosity and permeability is studied in detail. The flow field is solved using LabMoTer, a software tool developed during this thesis whose formulation is based on Lattice Boltzmann Methods, a relatively new approach to simulating fluid dynamics. In addition, the Walberla framework, developed by Friedrich Alexander University of Erlangen-Nürnberg, is used to solve the flow field as well. The second part of the thesis focuses on the particles immersed in the fluid. The properties of the particles are usually given as a function of the aerodynamic diameter, which is sufficient for macroscopic approximations. Here, however, the discretization of the porous medium has the same order of magnitude as the particle size, so realistic particle geometry is necessary. Diesel particles are aggregates of spheres, and a simulation tool was developed to create these aggregates by ballistic collision. The results are analysed in detail, and models reproducing the particle population and its evolution in time are proposed, with Uncertainty Quantification techniques used to account for the dispersion of the data. The second step is to characterise the aerodynamic properties of the aggregates. Because the size of the precursor particles is of the same order of magnitude as the separation between air molecules, the fluid cannot be approximated as a continuous medium, and a new approach is needed: Direct Simulation Monte Carlo (DSMC). A solver based on this formulation was developed and validated for rarefied flows, although boundary conditions for complex geometries could not be implemented in time. The thesis has thus been fruitful in several respects: a procedural-generation model capable of creating micro-structures that approximate acicular mullite; a new CFD solver based on Lattice Boltzmann Methods, LabMoTer, implemented and validated, together with a proposed technique for optimising the simulation setup; a detailed study of the ballistic agglomeration process using a new simulator developed ad hoc for this task; and a new, validated Direct Simulation Monte Carlo solver for rarefied flow. / García Galache, J. P. (2017). Study of the flow field through the wall of a Diesel particulate filter using Lattice Boltzmann Methods [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90413
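The ballistic agglomeration of spheres described above can be sketched as follows: a simplified particle-cluster variant in which each new sphere travels along a random straight line and sticks to the cluster at first contact. This illustrates the general technique under stated simplifications (equal spheres, uniform impact parameter), not the thesis's own simulator.

```python
import numpy as np

rng = np.random.default_rng(7)

def grow_ballistic_cluster(n_spheres, radius=1.0):
    """Ballistic particle-cluster aggregation: spheres fired one at a time
    along random straight lines stick to the cluster at first contact."""
    centers = [np.zeros(3)]
    while len(centers) < n_spheres:
        # Random flight direction, plus a random lateral offset (impact
        # parameter) drawn uniformly over a window wider than the cluster.
        d = rng.normal(size=3); d /= np.linalg.norm(d)
        perp = np.cross(d, rng.normal(size=3)); perp /= np.linalg.norm(perp)
        b = rng.uniform(0.0, 2.0 * radius * np.sqrt(len(centers)))
        start = b * perp - 100.0 * radius * d   # start well outside the cluster
        # First intersection of the ray start + t*d with any existing sphere
        # inflated to 2*radius (the center-to-center contact distance).
        t_hit, hit = np.inf, False
        for c in centers:
            rel = start - c
            bq = rel @ d
            disc = bq**2 - (rel @ rel - (2.0 * radius)**2)
            if disc >= 0.0:
                t = -bq - np.sqrt(disc)
                if 0.0 < t < t_hit:
                    t_hit, hit = t, True
        if hit:   # a miss simply retries with a new random trajectory
            centers.append(start + t_hit * d)
    return np.array(centers)

cluster = grow_ballistic_cluster(100)
r_g = np.sqrt(((cluster - cluster.mean(axis=0))**2).sum(axis=1).mean())
print(f"100-sphere aggregate, radius of gyration = {r_g:.2f} sphere radii")
```

Statistics such as the radius of gyration versus sphere count are the kind of population descriptors that the thesis's models reproduce and feed into the aerodynamic characterisation.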
70

A Comprehensive Coal Conversion Model Extended to Oxy-Coal Conditions

Holland, Troy Michael 01 July 2017
CFD simulations are valuable tools in evaluating and deploying oxy-fuel and other carbon capture technologies either as retrofit technologies or for new construction. However, accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. This work extends a comprehensive char conversion code (the Carbon Conversion Kinetics or CCK model), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. In this work, the CCK model was thoroughly investigated with a global sensitivity analysis. The sensitivity analysis highlighted several submodels in the CCK code, which were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include a greatly extended annealing model, the swelling model, the mode of burning parameter, and the kinetic model, as well as the addition of the Chemical Percolation Devolatilization (CPD) model. The resultant Carbon Conversion Kinetics for oxy-coal combustion (CCK/oxy) model predictions were compared to oxy-coal data, and further compared to parallel data sets obtained at near conventional conditions.
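The competition between surface kinetics and boundary-layer diffusion that char conversion models such as CCK resolve can be illustrated with a classic two-resistance estimate. The first-order global kinetics and every constant below are illustrative stand-ins, not CCK parameters.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def char_burning_rate(T_p, p_o2, d_p):
    """Apparent O2 consumption flux (kg O2 / m^2 s) at the particle surface,
    from two resistances in series: Arrhenius surface kinetics and
    boundary-layer (film) diffusion. All constants are illustrative."""
    k_chem = 0.05 * np.exp(-80_000.0 / (R * T_p))     # kinetic rate, kg/(m^2 s Pa)
    T_m = 0.5 * (T_p + 1500.0)                        # film temperature, K
    D_o2 = 3.49e-4 * (T_m / 1600.0) ** 1.75           # O2 diffusivity, m^2/s
    k_diff = 2.0 * D_o2 * 0.032 / (R * T_m * d_p)     # film rate (Sh = 2), kg/(m^2 s Pa)
    return p_o2 / (1.0 / k_chem + 1.0 / k_diff)       # resistances add in series

for T_p in (1200.0, 1600.0, 2000.0):
    q = char_burning_rate(T_p, p_o2=10_000.0, d_p=100e-6)
    print(f"T_p = {T_p:.0f} K: burning rate = {q:.3e} kg O2/(m^2 s)")
```

The crossover this sketch exhibits, kinetically limited burning at low temperature and diffusion-limited burning at high temperature, is one reason submodels such as the mode-of-burning parameter and annealing kinetics matter for extrapolating to oxy-coal conditions.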
