91. Dry Static Friction in Metals: Experiments and Micro-Asperity Based Modeling
Sista, Sri Narasimha Bhargava, January 2014
No description available.
92. New Methods of Variable Selection and Inference on High Dimensional Data
Ren, Sheng, January 2017
No description available.
93. Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design
Riley, Matthew E., 7 September 2011
No description available.
94. Probabilistic Flood Forecast Using Bayesian Methods
Han, Shasha, January 2019
The number of flood events and the estimated costs of floods have increased dramatically over the past few decades. To reduce the negative impacts of flooding, reliable flood forecasting is essential for early warning and decision making. Although various flood forecasting models and techniques have been developed, the assessment and reduction of uncertainties associated with the forecast remain a challenging task. Therefore, this thesis focuses on the investigation of Bayesian methods for producing probabilistic flood forecasts to accurately quantify predictive uncertainty and enhance the forecast performance and reliability.
In the thesis, hydrologic uncertainty was quantified by a Bayesian post-processor, the Hydrologic Uncertainty Processor (HUP), and the predictive ability of HUP with different hydrologic models under different flow conditions was investigated. This was followed by an extension of HUP into an ensemble prediction framework, which constitutes the Bayesian Ensemble Uncertainty Processor (BEUP). The BEUP was then tested with bias-corrected ensemble weather inputs to improve predictive performance. In addition, the effects of input and model type on BEUP were investigated through different combinations of BEUP with deterministic/ensemble weather predictions and lumped/semi-distributed hydrologic models.
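The abstract leaves the HUP mechanics implicit; as a rough illustration only (the actual processor works in normal-quantile-transformed space, which is omitted here), a linear-Gaussian Bayesian post-processor can be sketched as below. The helper names `fit_linear_gaussian` and `posterior` are hypothetical.

```python
import numpy as np

def fit_linear_gaussian(sim, obs):
    """Fit s = a*h + b + eps from historical (forecast, observation) pairs.
    A stand-in for a HUP-style likelihood model."""
    a, b = np.polyfit(obs, sim, 1)          # regress forecasts on observed flow
    resid = sim - (a * obs + b)
    return a, b, resid.std()

def posterior(s_new, a, b, sig_e, mu_h, sig_h):
    """Gaussian prior on actual flow h times Gaussian likelihood of the
    deterministic forecast s_new gives a Gaussian posterior on h."""
    prec = 1.0 / sig_h**2 + a**2 / sig_e**2
    mean = (mu_h / sig_h**2 + a * (s_new - b) / sig_e**2) / prec
    return mean, np.sqrt(1.0 / prec)
```

Applying such an update per ensemble member and combining the resulting posteriors is, in spirit, what the BEUP extension does for ensemble inputs.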
Results indicate that the Bayesian method is robust for probabilistic flood forecasting with uncertainty assessment. HUP is able to improve on the deterministic forecast from the hydrologic model and produces more accurate probabilistic forecasts. Under high-flow conditions, a better-performing hydrologic model yields a better probabilistic forecast after applying HUP. BEUP can significantly improve the accuracy and reliability of short-range flood forecasts, but the improvement becomes less obvious as lead time increases. The best results for short-range forecasts are obtained by applying both bias correction and BEUP. Results also show that bias-correcting each ensemble member of the weather inputs generates better flood forecasts than bias-correcting only the ensemble mean. The improvement in BEUP brought by the hydrologic model type is more significant than that brought by the input data type. BEUP with a semi-distributed model is recommended for short-range flood forecasts. / Dissertation / Doctor of Philosophy (PhD) / Flooding is one of the top weather-related hazards and causes serious property damage and loss of life every year worldwide. If the timing and magnitude of a flood event could be accurately predicted in advance, there would be time to prepare, reducing the flood's negative impacts. This research focuses on improving flood forecasts through advanced Bayesian techniques. The main objectives are: (1) enhancing the reliability and accuracy of the flood forecasting system; and (2) improving the assessment of the predictive uncertainty associated with the flood forecasts. The key contributions include: (1) application of Bayesian forecasting methods in a semi-urban watershed to advance predictive uncertainty quantification; and (2) investigation of the Bayesian forecasting methods with different inputs and models, combined with a bias correction technique, to further improve forecast performance. It is expected that the findings from this research will benefit flood impact mitigation, watershed management and water resources planning.
95. Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations
Cheng, Haiyan, 3 August 2009
Modeling and simulations of large-scale systems are used extensively not only to better understand natural phenomena, but also to predict future events. Accurate model results are critical for design optimization and policy making. They can be used effectively to reduce the impact of a natural disaster or even prevent it from happening. In reality, model predictions are often affected by uncertainties in input data and model parameters, and by incomplete knowledge of the underlying physics. A deterministic simulation assumes one set of input conditions, and generates one result without considering uncertainties. It is therefore of great interest to include uncertainty information in the simulation. By "Uncertainty Quantification," we denote the ensemble of techniques used to model probabilistically the uncertainty in model inputs, to propagate it through the system, and to represent the resulting uncertainty in the model result. This added information provides a confidence level about the model forecast. For example, in environmental modeling, the model forecast, together with the quantified uncertainty information, can assist the policy makers in interpreting the simulation results and in making decisions accordingly. Another important goal in modeling and simulation is to improve the model accuracy and to increase the model prediction power. By merging real observation data into the dynamic system through the data assimilation (DA) technique, the overall uncertainty in the model is reduced. With the expansion of human knowledge and the development of modeling tools, simulation size and complexity are growing rapidly. This poses great challenges to uncertainty analysis techniques. Many conventional uncertainty quantification algorithms, such as the straightforward Monte Carlo method, become impractical for large-scale simulations. New algorithms need to be developed in order to quantify and reduce uncertainties in large-scale simulations.
This research explores novel uncertainty quantification and reduction techniques that are suitable for large-scale simulations. In the uncertainty quantification part, the non-sampling polynomial chaos (PC) method is investigated. An efficient implementation is proposed to reduce the high computational cost of the linear algebra involved in the PC Galerkin approach applied to stiff systems. A collocation least-squares method is proposed to compute the PC coefficients more efficiently. A novel uncertainty apportionment strategy is proposed to attribute the uncertainty in model results to different uncertainty sources. The apportionment results provide guidance for uncertainty reduction efforts. The uncertainty quantification and source apportionment techniques are implemented in the 3-D Sulfur Transport Eulerian Model (STEM-III), predicting pollutant concentrations in the northeast region of the United States. Numerical results confirm the efficacy of the proposed techniques for large-scale systems and their potential impact on environmental protection policy making.
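As an illustration of the collocation least-squares idea named above, a minimal sketch follows, assuming a 1-D probabilists' Hermite chaos and a toy model (nothing STEM-III-specific): sample the standard-normal input, evaluate the model at the collocation points, and solve for the PC coefficients in a least-squares sense.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def pce_least_squares(model, order=4, n_pts=50, seed=0):
    """Fit coefficients c_k of u(xi) ~ sum_k c_k He_k(xi), xi ~ N(0,1),
    by least squares over random collocation points."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_pts)                     # collocation points
    # design matrix: column k holds He_k evaluated at the collocation points
    A = np.stack([hermeval(xi, np.eye(order + 1)[k]) for k in range(order + 1)], axis=1)
    u = model(xi)                                       # model evaluations
    c, *_ = np.linalg.lstsq(A, u, rcond=None)
    return c                                            # c[0] is the mean of u

# toy model with one uncertain parameter
coeffs = pce_least_squares(lambda xi: np.exp(0.3 * xi))
print("PCE mean estimate:", coeffs[0])                  # ~ exp(0.045) analytically
```

Because He_0 = 1 and the higher Hermite polynomials have zero mean under the Gaussian measure, the zeroth coefficient directly estimates the model-output mean.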
"Uncertainty Reduction" describes the range of systematic techniques used to fuse information from multiple sources in order to increase the confidence one has in model results. Two DA techniques are widely used in current practice: the ensemble Kalman filter (EnKF) and the four-dimensional variational (4D-Var) approach. Each method has its advantages and disadvantages. By exploring the error reduction directions generated in the 4D-Var optimization process, we propose a hybrid approach to construct the error covariance matrix and to improve the static background error covariance matrix used in current 4D-Var practice. The updated covariance matrix between assimilation windows effectively reduces the root mean square error (RMSE) in the solution. The success of the hybrid covariance updates motivates the hybridization of EnKF and 4D-Var to further reduce uncertainties in the simulation results. Numerical tests show that the hybrid method improves the model accuracy and increases the model prediction quality. / Ph. D.
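For reference, the stochastic ("perturbed observations") EnKF analysis step mentioned above has the textbook form below; this is a generic sketch, not the thesis's hybrid variant.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF update. X: (n, m) forecast ensemble of m members,
    y: (p,) observation, H: (p, n) observation operator, R: (p, p) obs error."""
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (m - 1)                            # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Y - H @ X)                       # analysis ensemble
```

The hybrid idea described in the abstract amounts to feeding flow-dependent covariance information of this kind into the otherwise static background covariance used by 4D-Var between assimilation windows.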
96. Exploring the Stochastic Performance of Metallic Microstructures With Multi-Scale Models
Senthilnathan, Arulmurugan, 1 June 2023
Titanium-7wt%-Aluminum (Ti-7Al) has been of interest to the aerospace industry owing to its good structural and thermal properties. However, extensive research is still needed to study the structural behavior and determine the material properties of Ti-7Al. The homogenized macro-scale material properties are directly related to the crystallographic structure at the micro-scale. Furthermore, microstructural uncertainties arising from experiments and computational methods propagate to the material properties used for designing aircraft components. Therefore, multi-scale modeling is employed to characterize the microstructural features of Ti-7Al and computationally predict the macro-scale material properties, such as Young's modulus and yield strength, using machine learning techniques. Investigation of microstructural features across large domains through experiments requires rigorous and tedious sample preparation procedures that often lead to material waste. Therefore, computational microstructure reconstruction methods that predict the large-scale evolution of microstructural topology given small-scale experimental information are developed to minimize experimental cost and time. However, it is important to verify the synthetic microstructures with respect to the experimental data by characterizing microstructural features such as grain size and grain shape. While the relationship between homogenized material properties and grain sizes of microstructures is well studied through the Hall-Petch effect, the influences of grain shapes, especially in complex additively manufactured microstructure topologies, are yet to be explored. Therefore, this work addresses the gap in the mathematical quantification of microstructural topology by developing measures for the computational characterization of microstructures. Moreover, the synthesized microstructures are modeled through crystal plasticity simulations to determine the material properties. However, such crystal plasticity simulations require significant computing time. In addition, the inherent uncertainty of experimental data is propagated to the material properties through the synthetic microstructure representations. Therefore, the aforementioned problems are addressed in this work by explicitly quantifying the microstructural topology and predicting the material properties and their variations through the development of surrogate models. Next, this work extends the proposed multi-scale models of microstructure-property relationships to magnetic materials to investigate the ferromagnetic-paramagnetic phase transition. Here, the same Ising model-based multi-scale approach used for microstructure reconstruction is implemented for investigating the ferromagnetic-paramagnetic phase transition of magnetic materials. Previous research on the magnetic phase transition problem neglects the effects of the long-range interactions between magnetic spins and external magnetic fields. Therefore, this study aims to build a multi-scale modeling environment that can quantify the large-scale interactions between magnetic spins and external fields.
This work investigates the multi-scale mechanical behavior of Ti-7Al by computationally characterizing the micro-scale material features, such as crystallographic texture and grain topology. The small-scale experimental data of Ti-7Al are used to predict the large-scale spatial evolution of the microstructures, while the texture and grain topology are modeled using shape moment invariants. Moreover, the effects of the uncertainties, which may arise from measurement errors and algorithmic randomness, on the microstructural features are quantified through statistical parameters developed based on the shape moment invariants. A data-driven surrogate model is built to predict the homogenized mechanical properties and the associated uncertainty as a function of the microstructural texture and topology. Furthermore, the presented multi-scale modeling technique is applied to explore the ferromagnetic-paramagnetic phase transition of magnetic materials, which causes permanent failure of magneto-mechanical components used in aerospace systems. Accordingly, a computational solution is developed based on an Ising model that considers the long-range spin interactions in the presence of external magnetic fields.
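The shape moment invariants themselves are not spelled out in the abstract; one plausible minimal version, assumed here for illustration, computes scale-normalized second central moments of a binary grain mask and derives an elongation measure from their eigenvalues.

```python
import numpy as np

def second_order_invariants(mask):
    """Scale-normalized second central moments of a binary grain mask,
    plus an elongation invariant from the eigenvalues of the moment matrix."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    m00 = mask.sum()                              # grain area in pixels
    mu20 = ((xs - x0) ** 2).sum() / m00**2        # eta_pq = mu_pq / m00^(1+(p+q)/2)
    mu02 = ((ys - y0) ** 2).sum() / m00**2
    mu11 = ((xs - x0) * (ys - y0)).sum() / m00**2
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    lam = np.linalg.eigvalsh(cov)                 # principal shape axes (ascending)
    return mu20 + mu02, lam[1] / lam[0]           # Hu-like I1, elongation >= 1
```

Descriptors of this kind are rotation- and scale-insensitive, which is what makes them usable as statistical parameters for comparing synthetic grains against experimental ones.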
97. Contributions to Efficient Statistical Modeling of Complex Data with Temporal Structures
Hu, Zhihao, 3 March 2022
This dissertation focuses on three research projects: neighborhood vector autoregression in multivariate time series, uncertainty quantification for agent-based modeling of networked anagram games, and a scalable algorithm for multi-class classification. The first project studies the modeling of multivariate time series, with applications in the environmental sciences and other areas. In this work, a so-called neighborhood vector autoregression (NVAR) model is proposed to efficiently analyze large-dimensional multivariate time series. The time series are assumed to have underlying distances among them based on the inherent setting of the problem. When this distance matrix is available or can be obtained, the proposed NVAR method is demonstrated to provide a computationally efficient and theoretically sound estimation of the model parameters. The performance of the proposed method is compared with other existing approaches in both simulation studies and a real application to a stream nitrogen study. The second project focuses on the study of group anagram games. In a group anagram game, players are provided letters to form as many words as possible. In this work, enhanced agent behavior models for networked group anagram games are built, exercised, and evaluated under an uncertainty quantification framework. Specifically, the game data for players are clustered based on their skill levels (forming words, requesting letters, and replying to requests), multinomial logistic regressions for transition probabilities are performed, and the uncertainty is quantified within each cluster. The result of this process is a model where players are assigned different numbers of neighbors and different skill levels in the game. Simulations of ego agents with neighbors are conducted to demonstrate the efficacy of the proposed methods. The third project aims to develop efficient and scalable algorithms for multi-class classification, which achieve a balance between prediction accuracy and computing efficiency, especially in high-dimensional settings. Traditional multinomial logistic regression becomes slow in high-dimensional settings where the number of classes (M) and the number of features (p) are large. Our algorithms are computationally efficient and scalable to data of even higher dimensions. The simulation and case study results demonstrate that our algorithms have a substantial advantage over traditional multinomial logistic regressions while maintaining comparable prediction performance. / Doctor of Philosophy / In many data-centric applications, data often have complex structures involving temporal dependence and high dimensionality. Modeling of complex data with temporal structures has attracted great attention in many applications such as environmental sciences, network science, data mining, neuroscience, and economics. However, modeling such data is quite challenging due to their large uncertainty and dimensionality. This dissertation focuses on the modeling and prediction of complex data with temporal structures. Three different types of complex data are modeled: the nitrogen levels of multiple streams are modeled jointly, human actions in networked group anagram games are modeled with quantified uncertainty, and data with multiple labels are classified. Different models are proposed, and they are demonstrated to be efficient through simulation and case studies.
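A generic sketch of the transition-probability step named in the second project, fitting a multinomial (softmax) logistic regression over player actions, is shown below; the features and action labels are hypothetical placeholders, not the dissertation's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# hypothetical per-step features: letters in hand, words formed so far, neighbor count
X = rng.normal(size=(500, 3))
# next action: 0 = form word, 1 = request letter, 2 = reply to request, 3 = idle
y = rng.integers(0, 4, size=500)

# lbfgs-based LogisticRegression fits a multinomial (softmax) model for >2 classes
clf = LogisticRegression(max_iter=1000).fit(X, y)
probs = clf.predict_proba(X[:5])      # each row is a transition distribution, sums to 1
print(probs.round(3))
```

Fitting one such model per skill cluster, as the abstract describes, yields cluster-specific transition distributions that the agent-based simulation can sample from.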
98. Computational Reconstruction and Quantification of Aerospace Materials
Long, Matthew Thomas, 14 May 2024
Microstructure reconstruction is a necessary tool in multi-scale modeling, as it allows for the analysis of the microstructure of a material without the cost of measuring all of the data required for the analysis. For microstructure reconstruction to be effective, the synthetic microstructure needs to predict, from a small sample of measured data, what the microstructure looks like over a larger domain. The Markov Random Field (MRF) algorithm is a method of generating statistically similar microstructures for this process. In this work, two key factors of the MRF algorithm are analyzed. The first factor explored is how the base features of the microstructure, related to orientation and grain/phase topology, influence the selection of the MRF parameters used to perform the reconstruction. The second focus is the analysis of the numerical uncertainty (epistemic uncertainty) that arises from the use of the MRF algorithm. This is done by first removing the material uncertainty (aleatoric uncertainty), which is the noise inherent in the original image representing the experimental data. The epistemic uncertainty arising from the MRF algorithm is analyzed through the percentage of isolated pixels and the difference in average grain size between the initial image and the reconstructed image. This research mainly focuses on two microstructures, B4C-TiB2 and Ti-7Al, a ceramic composite and a metallic alloy, respectively. Both are candidate materials for many aerospace systems owing to their desirable mechanical performance under large thermo-mechanical stresses. / Master of Science / Microstructure reconstruction is a necessary tool in multi-scale modeling, as it allows for the analysis of the microstructure of a material without the cost of measuring all of the data required for the analysis. For microstructure reconstruction to be effective, the synthetic microstructure needs to predict, from a small sample of measured data, what the microstructure looks like over a larger domain. The Markov Random Field (MRF) algorithm is a method of generating statistically similar microstructures for this process. In this work, two key factors of the MRF algorithm are analyzed. The first factor explored is how the base features of the microstructures, related to orientation and grain/phase topology, influence the selection of the MRF parameters used to perform the reconstruction. The second focus is the analysis of the numerical uncertainty that arises from the use of the MRF algorithm. This is done by first removing the material uncertainty, which is the noise inherent in the original image representing the experimental data. This research mainly focuses on two microstructures, B4C-TiB2 and Ti-7Al, a ceramic composite and a metallic alloy, respectively. Both are candidate materials for many aerospace systems owing to their desirable mechanical performance under large thermo-mechanical stresses.
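Of the two epistemic-uncertainty measures named above, the percentage of isolated pixels is simple to state; the sketch below assumes one plausible definition (an interior pixel whose four neighbors all carry a different grain label), though the thesis's exact definition may differ.

```python
import numpy as np

def isolated_pixel_fraction(labels):
    """Fraction of interior pixels whose 4-neighborhood contains no pixel
    with the same grain/phase label -- an indicator of reconstruction noise."""
    c = labels[1:-1, 1:-1]
    same = ((labels[:-2, 1:-1] == c) | (labels[2:, 1:-1] == c) |
            (labels[1:-1, :-2] == c) | (labels[1:-1, 2:] == c))
    return float(np.mean(~same))
```

Comparing this fraction (and the average grain size) between the measured image and the MRF output gives a scalar handle on how much numerical noise the reconstruction introduces.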
99. Physics-informed Machine Learning with Uncertainty Quantification
Daw, Arka, 12 February 2024
Physics-Informed Machine Learning (PIML) has emerged at the forefront of research in scientific machine learning, with the key motivation of systematically coupling machine learning (ML) methods with prior domain knowledge, often available in the form of physics supervision. Uncertainty quantification (UQ) is an important goal in many scientific use-cases, where obtaining reliable ML model predictions and assessing the potential risks associated with them is crucial. In this thesis, we propose novel methodologies in three key areas for improving uncertainty quantification for PIML. First, we propose to explicitly infuse the physics prior in the form of monotonicity constraints through architectural modifications in neural networks for quantifying uncertainty. Second, we demonstrate a more general framework for quantifying uncertainty with PIML that is compatible with generic forms of physics supervision such as PDEs and closed-form equations. Lastly, we study the limitations of the physics-based loss in the context of Physics-informed Neural Networks (PINNs), and develop an efficient sampling strategy to mitigate the failure modes. / Doctor of Philosophy / Owing to the success of deep learning in computer vision and natural language processing, there is growing interest in using deep learning in scientific applications. In scientific applications, knowledge is available in the form of closed-form equations, partial differential equations, etc., along with labeled data. My work focuses on developing deep learning methods that integrate these forms of supervision. In particular, it focuses on building methods that can quantify uncertainty in deep learning models, which is an important goal for high-stakes applications.
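The efficient sampling strategy is not detailed in the abstract; a common residual-guided scheme (retain high-residual collocation points, resample the rest uniformly) can be sketched as below, with `pde_residual` assumed to be a callable supplied by the trained PINN.

```python
import numpy as np

def resample_collocation(points, pde_residual, rng, lo=0.0, hi=1.0):
    """Keep collocation points whose |PDE residual| exceeds the current mean;
    replace the rest with fresh uniform samples of the domain."""
    r = np.abs(pde_residual(points))        # residual of the current network
    keep = points[r > r.mean()]
    fresh = rng.uniform(lo, hi, size=(len(points) - len(keep), points.shape[1]))
    return np.vstack([keep, fresh])
```

Re-running this between training epochs concentrates the physics loss where the network violates the PDE most, which is the general mechanism such sampling strategies use to avoid PINN failure modes.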
100. Study of the flow field through the wall of a Diesel particulate filter using Lattice Boltzmann Methods
García Galache, José Pedro, 3 November 2017
Air pollution is becoming an important problem in large metropolitan areas. A large portion of the contaminants is emitted by the vehicle fleet. At the European level, as in other economic areas, regulation is becoming more and more restrictive. The Euro standards are the best example of this tendency.
Especially important are the emissions of nitrogen oxides (NOx) and particulate matter (PM). Two different strategies exist to reduce the emission of pollutants. The first is to avoid their creation: modifying the combustion process by means of different fuel injection laws or controlling the charge renewal are the typical methods. The second is focused on contaminant elimination. NOx is reduced by means of catalysis and/or a reducing atmosphere, usually created by injection of urea. Particulate matter is eliminated using filters. This thesis focuses on the latter.
Most strategies to reduce the emission of contaminants penalise fuel consumption, and the particulate filter is no exception. Installed in the exhaust duct, it restricts the passage of the exhaust gas. This increases the pressure along the whole exhaust line upstream of the filter, reducing engine performance. Optimising the filter is therefore an important task. The filtration efficiency has to be good enough to comply with the emission regulations; at the same time, the pressure drop has to be as low as possible to preserve fuel consumption and performance. The objective of the thesis is to find the relation between the micro-structure and the macroscopic properties. With this knowledge, optimisation of the micro-structure becomes possible.
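The macroscopic quantity connecting pressure drop and micro-structure is the wall permeability; under Darcy's law for slow flow through the porous wall, it follows directly from quantities the simulation provides. A minimal sketch with illustrative numbers only:

```python
def darcy_permeability(q, mu, length, dp):
    """Darcy's law: q = (k/mu) * dp/L  =>  k = q * mu * L / dp.
    q: superficial velocity [m/s], mu: dynamic viscosity [Pa s],
    length: wall thickness [m], dp: pressure drop [Pa]."""
    return q * mu * length / dp

# a DPF-wall-like case (illustrative numbers, not thesis data)
k = darcy_permeability(q=0.02, mu=1.8e-5, length=4e-4, dp=600.0)
print(f"permeability ~ {k:.3e} m^2")   # ~2.4e-13 m^2
```

Evaluating this for each procedurally generated micro-structure is what links the random geometric parameters to the macroscopic figure of merit.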
The micro-structure of the filter mimics acicular mullite. It is created by procedural generation using random parameters. The relation between the micro-structure and macroscopic properties such as porosity and permeability is studied in detail. The flow field is solved using LabMoTer, a software tool developed during this thesis. The formulation is based on Lattice Boltzmann Methods, a relatively recent approach to simulating fluid dynamics. In addition, the waLBerla framework, developed by Friedrich-Alexander University Erlangen-Nürnberg, is used to solve the flow field as well.
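LabMoTer's internals are not given here, but the core of any single-relaxation-time (BGK) Lattice Boltzmann solver is a stream-and-collide loop. A generic textbook D2Q9 sketch on a periodic domain follows (a porous wall would additionally apply bounce-back on solid nodes):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """BGK equilibrium distribution for each of the 9 directions."""
    cu = np.einsum('qd,dxy->qxy', c, u)                 # c_q . u
    usq = (u ** 2).sum(axis=0)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One collide-and-stream step on a fully periodic domain."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->dxy', c, f) / rho            # macroscopic velocity
    f += -(f - equilibrium(rho, u)) / tau               # BGK collision
    for q in range(9):                                  # streaming
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f

# uniform start: rho = 1, u = 0 on a 64x64 grid
f = equilibrium(np.ones((64, 64)), np.zeros((2, 64, 64)))
for _ in range(100):
    f = lbm_step(f, tau=0.8)
```

The locality of both steps is what makes the method attractive for resolved porous-media flow: every node only talks to its lattice neighbours, so the update parallelises trivially.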
The second part of the thesis focuses on the particles immersed in the fluid. The properties of the particles are usually given as a function of the aerodynamic diameter, which is sufficient for macroscopic approximations. However, the discretization of the porous medium has the same order of magnitude as the particle size, so realistic particle geometry is necessary. Diesel particles are aggregates of spheres. A simulation tool is developed to create these aggregates using ballistic collision, and the results are analysed in detail.
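The ballistic-collision process can be illustrated with a simplified variant, assumed here for the sketch, in which each new sphere travels along a random straight line through the cluster seed and sticks at first contact; the thesis's simulator is more general.

```python
import numpy as np

def ballistic_aggregate(n=200, r=1.0, seed=0):
    """Grow a cluster of equal spheres: each new sphere travels along a random
    straight line through the origin and sticks at first contact (center
    distance 2r). Every trajectory hits the seed sphere, so growth never stalls."""
    rng = np.random.default_rng(seed)
    centers = [np.zeros(3)]
    while len(centers) < n:
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                  # random unit direction
        o = -1e3 * d                            # start far away, aimed at the origin
        t_hit = np.inf
        for cc in centers:                      # line-sphere first intersection
            oc = o - cc
            b = np.dot(oc, d)
            disc = b * b - (np.dot(oc, oc) - (2 * r) ** 2)
            if disc >= 0:
                t = -b - np.sqrt(disc)          # first contact along the line
                if 0 < t < t_hit:
                    t_hit = t
        centers.append(o + t_hit * d)
    return np.array(centers)

cluster = ballistic_aggregate(50)
rg = np.sqrt(((cluster - cluster.mean(0)) ** 2).sum(1).mean())
print("radius of gyration:", rg)
```

Statistics such as the radius of gyration versus the number of primary spheres are the kind of population descriptors the abstract says are analysed in detail.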
The second step is to characterise the aerodynamic properties of the aggregates. Because the size of the particles is of the same order of magnitude as the separation between air molecules, the fluid cannot be approximated as a continuous medium; a different approach is needed. Direct Simulation Monte Carlo (DSMC) is the appropriate tool, and a solver based on this formulation is developed. Unfortunately, complex geometries could not be implemented in time.
The thesis has been fruitful in several aspects. A new model based on procedural generation has been developed to create a micro-structure which mimics acicular mullite. A new CFD solver based on Lattice Boltzmann Methods, LabMoTer, has been implemented and validated, and a technique to optimise its setup is proposed. The ballistic agglomeration process is studied in detail thanks to a new simulator developed ad hoc for this task, and the results are analysed to find correlations between particle properties and their evolution in time. Uncertainty quantification is used to include uncertainty in the models. Finally, a new Direct Simulation Monte Carlo solver has been developed and validated to calculate rarefied flows. / Ph. D.
García Galache, J. P. (2017). Study of the flow field through the wall of a Diesel particulate filter using Lattice Boltzmann Methods [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90413