71

Computational Challenges in Sampling and Representation of Uncertain Reaction Kinetics in Large Dimensions

Almohammadi, Saja M. 29 November 2021 (has links)
This work focuses on the construction of functional representations in high-dimensional spaces. Attention is focused on the modeling of ignition phenomena using detailed kinetics, and on the ignition delay time as the primary quantity of interest (QoI). An iso-octane/air mixture is first considered, using a detailed chemical mechanism with 3,811 elementary reactions. Uncertainty in all reaction rates is directly accounted for using associated uncertainty factors, assuming independent log-uniform priors. A Latin hypercube sample (LHS) of the ignition delay times is first generated, and the resulting database is then exploited to assess the possibility of constructing polynomial chaos (PC) representations in terms of the canonical random variables parametrizing the uncertain rates. Two avenues are explored, namely sparse regression (SR) using LASSO and a coordinate transform (CT) approach. Preconditioned variants of both approaches are also considered, namely using the logarithm of the ignition delay time as QoI. A global sensitivity analysis is performed using the representations constructed by SR and CT. Next, the tangent linear approximation (TLA) is developed to estimate the sensitivity of the ignition delay time with respect to individual rate parameters in a detailed chemical mechanism. Attention is focused on a gas mixture reacting under adiabatic, constant-volume conditions. The approach is based on integrating the linearized system of equations governing the evolution of the partial derivatives of the state vector with respect to individual random variables, and a linearized approximation is developed to relate the ignition delay sensitivity to the scaled partial derivatives of temperature. In particular, the computations indicate that for detailed reaction mechanisms the TLA leads to robust local sensitivity predictions at a computational cost that is an order of magnitude smaller than that incurred by finite-difference approaches based on one-at-a-time rate-parameter perturbations. In the last part, we explore the potential of utilizing TLA-based sensitivities to identify an active subspace and to construct suitable representations. Performance is assessed by contrasting experiences with the CT-based machinery developed earlier.
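A minimal sketch of the preconditioned sparse-regression step described above, assuming a toy three-variable problem in place of the 3,811-reaction mechanism: the log of the ignition delay is regressed on a total-degree-2 Legendre PC basis with LASSO. The `ignition_delay` stand-in and all sample sizes are hypothetical, not the thesis's setup.

```python
# Sketch: preconditioned sparse PC regression of log(ignition delay).
import itertools
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
d, n = 3, 200                        # toy input count and LHS-like sample size
xi = rng.uniform(-1.0, 1.0, (n, d))  # canonical variables for log-uniform rates

def ignition_delay(x):               # hypothetical stand-in for the ODE solve
    return 1e-3 * np.exp(1.5 * x[:, 0] - 0.7 * x[:, 1] * x[:, 2])

# Total-degree <= 2 multi-indices and the corresponding Legendre design matrix.
alphas = [a for a in itertools.product(range(3), repeat=d) if sum(a) <= 2]

def psi(a, x):                       # product of 1-D Legendre polynomials
    cols = [legval(x[:, j], np.eye(k + 1)[k]) for j, k in enumerate(a)]
    return np.prod(cols, axis=0)

Psi = np.column_stack([psi(a, xi) for a in alphas])
y = np.log(ignition_delay(xi))       # preconditioning: regress the log of the QoI
model = LassoCV(cv=5).fit(Psi, y)    # sparse regression selects the active terms
print("nonzero PC coefficients:", np.count_nonzero(model.coef_))
```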
72

The Role of Constitutive Model in Traumatic Brain Injury Prediction

Kacker, Shubhra 28 October 2019 (has links)
No description available.
73

Physics-Informed, Data-Driven Framework for Model-Form Uncertainty Estimation and Reduction in RANS Simulations

Wang, Jianxun 05 April 2017 (has links)
Computational fluid dynamics (CFD) has been widely used to simulate turbulent flows. Although the increased availability of computational resources has enabled high-fidelity simulations (e.g., large eddy simulation and direct numerical simulation) of turbulent flows, models based on the Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tools for industrial applications. However, the predictive capability of RANS models is limited by potential inaccuracies driven by hypotheses in the Reynolds stress closure. With the ever-increasing use of RANS simulations in mission-critical applications, the estimation and reduction of model-form uncertainties in RANS models have attracted attention in the turbulence modeling community. In this work, I focus on estimating uncertainties stemming from the RANS turbulence closure and calibrating discrepancies in the modeled Reynolds stresses to improve the predictive capability of RANS models. Both on-line and off-line data are utilized to achieve this goal. The main contributions of this dissertation can be summarized as follows. First, a physics-based, data-driven Bayesian framework is developed for estimating and reducing model-form uncertainties in RANS simulations. An iterative ensemble Kalman method is employed to assimilate sparse on-line measurement data and empirical prior knowledge for a full-field inversion. The merits of incorporating prior knowledge and physical constraints in calibrating RANS model discrepancies are demonstrated and discussed. Second, a random matrix theoretic framework is proposed for estimating model-form uncertainties in RANS simulations. The maximum entropy principle is employed to identify the probability distribution that satisfies the given constraints without introducing artificial information. Objective prior perturbations of RANS-predicted Reynolds stresses in physical projections are provided based on comparisons between the physics-based and random matrix theoretic approaches. Finally, a physics-informed, machine learning framework towards predictive RANS turbulence modeling is proposed. The functional forms of model discrepancies with respect to mean flow features are extracted from an off-line database of closely related flows using machine learning algorithms. The RANS-modeled Reynolds stresses of prediction flows can be significantly improved by the trained discrepancy function, which is an important step towards predictive turbulence modeling. / Ph. D.
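As an illustration of the first contribution, here is a minimal sketch of one iterative ensemble Kalman update of the kind used for full-field inversion, assuming a linear toy `forward` map standing in for the RANS solve; sizes and noise levels are illustrative, not the dissertation's implementation.

```python
# Sketch: iterative ensemble Kalman inversion against sparse observations.
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_state, n_obs = 50, 20, 5

def forward(theta):                     # maps a discrepancy field to observables
    H = np.linspace(0, 1, n_obs * n_state).reshape(n_obs, n_state)
    return H @ theta

theta = rng.normal(0.0, 1.0, (n_state, n_ens))   # prior ensemble
y_obs = rng.normal(0.0, 1.0, n_obs)              # sparse on-line measurements
R = 0.01 * np.eye(n_obs)                         # observation-noise covariance

for _ in range(10):                              # iterative inversion loop
    G = np.column_stack([forward(theta[:, j]) for j in range(n_ens)])
    dth = theta - theta.mean(axis=1, keepdims=True)
    dg = G - G.mean(axis=1, keepdims=True)
    C_tg = dth @ dg.T / (n_ens - 1)              # state-observable covariance
    C_gg = dg @ dg.T / (n_ens - 1)               # observable covariance
    K = C_tg @ np.linalg.inv(C_gg + R)           # Kalman gain
    y_pert = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    theta = theta + K @ (y_pert - G)             # ensemble update
```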
74

Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning

Wu, Jinlong 25 September 2018 (has links)
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress needs closure models and the existing models have large model-form uncertainties. Therefore, the RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative to develop the model of Reynolds stress for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of RANS modeled Reynolds stress by leveraging online sparse measurement data with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models have better prediction of Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence for the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled. / Ph. D.
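A minimal sketch of the second, machine-learning-assisted framework under strong simplifications: a regression model learns the Reynolds-stress discrepancy from mean-flow features in an offline database, then corrects a new flow. The features, the synthetic discrepancy, and the random-forest choice are illustrative assumptions, not the dissertation's exact setup.

```python
# Sketch: offline discrepancy learning for RANS Reynolds stresses.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Training flow: mean-flow features (e.g. strain rate, pressure gradient) and
# a synthetic high-fidelity-minus-RANS Reynolds-stress discrepancy per cell.
X_train = rng.normal(size=(5000, 4))
dtau_train = (0.3 * X_train[:, 0] - 0.1 * X_train[:, 1] ** 2
              + 0.05 * rng.normal(size=5000))

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, dtau_train)

# Prediction flow: correct baseline RANS stresses with the learned discrepancy.
X_new = rng.normal(size=(1000, 4))
tau_rans = rng.normal(size=1000)
tau_corrected = tau_rans + model.predict(X_new)
```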
75

Quantification of the parametric uncertainty in the specific absorption rate calculation of a mobile phone

Cheng, Xi 15 December 2015 (has links)
This thesis focuses on parameter uncertainty quantification (UQ) in specific absorption rate (SAR) calculation using a computer-aided design (CAD) mobile phone model. The impact of uncertainty, e.g., lack of detailed knowledge about material electrical properties, system geometrical features, etc., in SAR calculation is quantified by three computationally efficient non-intrusive UQ methods: unscented transformation (UT), stochastic collocation (SC), and non-intrusive polynomial chaos (NIPC). They are called non-intrusive methods because the simulation process is simply considered as a black box, without changing the code of the simulation solver. Their performances for the cases of one and two random variables are analysed. In contrast to the traditional uncertainty analysis method, the Monte Carlo method, the computation time becomes acceptable. To simplify the UQ procedure for the case of multiple uncertain inputs, it is demonstrated that uncertainties can be combined to evaluate the parameter uncertainty of the output. Combining uncertainties is an approach generally used in the field of measurement; in this thesis, it is applied to SAR calculation in the complex situation. One of the necessary steps in the framework of uncertainty analysis is sensitivity analysis (SA), which aims at quantifying the relative importance of each uncertain input parameter with respect to the uncertainty of the output. A polynomial chaos (PC) based Sobol' indices method, whose SA indices are evaluated by PC expansion instead of the Monte Carlo method, is used in SAR calculation. The results of the investigations are presented and discussed. In order to make the reading easier, elementary notions of SAR, modelling, uncertainty in modelling, and probability theory are given in the introduction (chapter 1). The main content of the thesis is presented in chapters 2 and 3. In chapter 4, another approach to using PC expansion is given, applied within a finite-difference time-domain (FDTD) code. Since the FDTD code in the simulation solver must be changed, this is a so-called intrusive PC expansion, which has already been investigated in detail elsewhere. In chapter 5, conclusions and future work are given.
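To illustrate the PC-based Sobol' indices mentioned above: once a PC expansion of the SAR output is fitted, first-order and total sensitivity indices follow directly from sums of squared coefficients, with no Monte Carlo loop over the solver. The two-variable expansion and its coefficients below are made up for illustration.

```python
# Sketch: Sobol' indices computed from PC expansion coefficients.
import numpy as np

# Hypothetical PC expansion in d = 2 uncertain inputs (e.g. tissue
# permittivity and conductivity): multi-indices and fitted coefficients.
alphas = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
coeffs = np.array([1.20, 0.40, 0.25, 0.10, 0.05, 0.08])  # orthonormal basis

var_total = np.sum(coeffs[1:] ** 2)          # output variance from PC terms
d = 2
for i in range(d):
    first = sum(c**2 for a, c in zip(alphas, coeffs)
                if a[i] > 0 and all(a[j] == 0 for j in range(d) if j != i))
    total = sum(c**2 for a, c in zip(alphas, coeffs) if a[i] > 0)
    print(f"S_{i+1} = {first/var_total:.3f}, S_T{i+1} = {total/var_total:.3f}")
```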
76

Efficient Uncertainty Characterization Framework in Neutronics Core Simulation with Application to Thermal-Spectrum Reactor Systems

Huang, Dongli 16 April 2020 (has links)
This dissertation is devoted to developing a first-of-a-kind uncertainty characterization framework (UCF) providing comprehensive, efficient, and scientifically defensible methodologies for uncertainty characterization (UC) in best-estimate (BE) reactor physics simulations. The UCF is designed with primary application to CANDU neutronics calculations, but could also be applied to other thermal-spectrum reactor systems. The overarching goal of the UCF is to propagate and prioritize all sources of uncertainties, including those originating from nuclear data uncertainties, modeling assumptions, and other approximations, in order to reliably use the results of BE simulations in the various aspects of reactor design, operation, and safety. The scope of this UCF is to propagate nuclear data uncertainties from the multi-group format, representing the input to lattice physics calculations, to the few-group format, representing the input to nodal diffusion-based core simulators, and to quantify the uncertainties in reactor core attributes.

The main contribution of this dissertation addresses two major challenges in current uncertainty analysis approaches. The first is the feasibility of the UCF, given the complex nature of nuclear reactor simulation and the computational burden of conventional uncertainty quantification (UQ) methods. The second is to assess the impact of other sources of uncertainties that are typically ignored in the course of propagating nuclear data uncertainties, such as various modeling assumptions and approximations.

To deal with the first challenge, this thesis work proposes an integrated UC process employing a number of approaches and algorithms, including the physics-guided coverage mapping (PCM) method in support of model validation, and reduced order modeling (ROM) techniques as well as sensitivity analysis (SA) on uncertainty sources, to reduce the dimensionality of the uncertainty space at each interface of the neutronics calculations. In addition to these efficient techniques to reduce the computational cost, the UCF aims to accomplish four primary functions in uncertainty analysis of neutronics simulations. The first function is to identify all sources of uncertainties, including nuclear data uncertainties, modeling assumptions, numerical approximations, and technological parameter uncertainties. Second, the proposed UC process propagates the identified uncertainties to the responses of interest in core simulation and provides uncertainty quantification (UQ) analysis for these core attributes. Third, the propagated uncertainties are mapped to a wide range of reactor core operating conditions. Finally, the fourth function is to prioritize the identified uncertainty sources, i.e., to generate a priority identification and ranking table (PIRT) that sorts the major sources of uncertainties according to their impact on the core attributes' uncertainties. In the proposed implementation, the nuclear data uncertainties are first propagated from the multi-group level through lattice physics calculations to generate few-group parameter uncertainties, described using a vector of mean values and a covariance matrix. Employing an ROM-based compression of the covariance matrix, the few-group uncertainties are then propagated through the downstream core simulation in a computationally efficient manner.

To explore the impact of uncertainty sources other than nuclear data uncertainties on the UC process, a number of approximations and assumptions are investigated in this thesis, e.g., modeling assumptions such as resonance treatment and energy group structure, and assumptions associated with the uncertainty analysis itself, e.g., the linearity assumption and the level of ROM reduction and associated number of degrees of freedom employed. These approximations and assumptions have been employed in the literature of neutronics uncertainty analysis, yet without formal verification. The major argument here is that these assumptions may introduce another source of uncertainty whose magnitude needs to be quantified in tandem with nuclear data uncertainties. In order to assess whether modeling uncertainties have an impact on parameter uncertainties, this dissertation proposes a process to evaluate the influence of various modeling assumptions and approximations and to investigate the interactions between the two major uncertainty sources. To this end, the impact of a number of modeling assumptions on core attribute uncertainties is quantified.

The proposed UC process was first applied to a BWR application, in order to test the uncertainty propagation and prioritization process with the ROM implementation over a wide range of core conditions. Finally, a comprehensive uncertainty library for CANDU uncertainty analysis with NESTLE-C as the core simulator is generated using compressed uncertainty sources from the proposed UCF. The modeling uncertainties, as well as their impact on the parameter uncertainty propagation process, are investigated for the CANDU application with the uncertainty library.
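A minimal sketch of the ROM-style compression idea, assuming the few-group covariance can be truncated at its dominant eigen-directions; the matrix sizes and the 99% variance criterion are illustrative, not the thesis's settings.

```python
# Sketch: low-rank compression of a few-group covariance matrix, then
# sampling perturbations in the retained subspace for the core simulator.
import numpy as np

rng = np.random.default_rng(3)
n = 500                                  # few-group parameters (toy size)
A = rng.normal(size=(n, 20))
cov = A @ A.T / 20                       # rank-deficient covariance, as typical

w, V = np.linalg.eigh(cov)
idx = np.argsort(w)[::-1]                # sort eigenpairs by decreasing variance
w, V = w[idx], V[:, idx]
r = int(np.argmax(np.cumsum(w) / np.sum(w) >= 0.99)) + 1   # 99% variance rank
print(f"retained {r} of {n} degrees of freedom")

# Sample few-group perturbations using only the r retained directions.
mu = np.zeros(n)                         # mean few-group parameters (toy)
z = rng.normal(size=(r, 1000))
samples = mu[:, None] + V[:, :r] @ (np.sqrt(w[:r])[:, None] * z)
```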
77

Polynomial Chaos Expansion in Bio- and Structural Mechanics

Szepietowska, Katarzyna 12 October 2018 (has links)
This thesis presents a probabilistic approach to modelling the mechanics of materials and structures where the modelled performance is influenced by uncertainty in the input parameters. The work is interdisciplinary, and the methods described are applied to problems in biomechanics and civil engineering. The motivation for this work was the necessity of mechanics-based approaches in the modelling and simulation of implants used in the repair of ventral hernias. Many uncertainties appear in the modelling of the implant-abdominal wall system. The probabilistic approach proposed in this thesis enables these uncertainties to be propagated to the output of the model and the investigation of their respective influences. The regression-based polynomial chaos expansion method is used here. However, the accuracy of such non-intrusive methods depends on the number and location of sampling points. Finding a universal method to achieve a good balance between accuracy and computational cost is still an open question, so different approaches are investigated in this thesis in order to choose a method that is both efficient and suited to the case under study (see the sketch after this abstract). Global sensitivity analysis is used to investigate the respective influences of input uncertainties on the variation of the outputs of different models. The uncertainties are propagated to the implant-abdominal wall models in order to draw conclusions important for surgical practice and further research. Using the expertise acquired from the biomechanical models, the developed methodology is applied to the modelling of historic timber joints and the simulation of their mechanical behaviour. Such an investigation is important owing to the need for efficient planning of repairs and renovation of buildings of historical value.
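One way to picture the accuracy/cost balance discussed above is to track the leave-one-out (LOO) error of a regression-based PC fit as the number of sampling points grows; the LOO error comes cheaply from the hat matrix of the least-squares fit. The one-dimensional model below is made up for illustration and is not the thesis's procedure.

```python
# Sketch: regression-based PC expansion with LOO error vs. design size.
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(4)
model = lambda x: np.exp(0.5 * x) * np.sin(3 * x)   # stand-in for the FE model
degree = 8

for n in (12, 20, 40, 80):                # candidate experimental designs
    x = rng.uniform(-1, 1, n)
    y = model(x)
    Psi = legvander(x, degree)            # Legendre design matrix
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    H = Psi @ np.linalg.pinv(Psi)         # hat matrix of the regression
    resid = y - Psi @ coef
    e_loo = np.mean((resid / (1 - np.diag(H))) ** 2) / np.var(y)
    print(f"n = {n:3d}  relative LOO error = {e_loo:.2e}")
```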
78

Efficient Uncertainty Quantification with High Dimensionality

Yin, Jianhua 25 April 2022 (has links)
Uncertainty exists everywhere in scientific and engineering applications. To avoid potential risk, it is critical to understand the impact of uncertainty on a system by performing uncertainty quantification (UQ) and reliability analysis (RA). However, the computational cost may be unaffordable using current UQ methods with high-dimensional input. Moreover, current UQ methods are not applicable when numerical data and image data coexist.

To decrease the computational cost to an affordable level and to enable UQ with special high-dimensional data (e.g., images), this dissertation develops three UQ methodologies for high-dimensional input spaces. The first two methods focus on high-dimensional numerical input. The core strategy of Methodology 1 is fixing the unimportant variables at their first-step most probable point (MPP) so that the dimensionality is reduced. An accurate RA method is used in the reduced space. The final reliability is obtained by accounting for the contributions of both important and unimportant variables. Methodology 2 addresses the issue that the dimensionality cannot be reduced when most of the variables are important or when the variables contribute almost equally to the system. Methodology 2 develops an efficient surrogate modeling method for high-dimensional UQ using generalized sliced inverse regression (GSIR), Gaussian process (GP)-based active learning, and importance sampling. A cost-efficient GP model is built in the latent space after dimension reduction by GSIR, and the failure boundary is identified through active learning that adds optimal training points iteratively. In Methodology 3, a convolutional neural network (CNN) based surrogate model (CNN-GP) is constructed for dealing with mixed numerical and image data. The numerical data are first converted into images, and the converted images are then merged with the existing image data. The merged images are fed to the CNN for training. Then, the latent variables of the CNN model are used to integrate the CNN with a GP to quantify the model error using epistemic uncertainty. Both epistemic and aleatory uncertainty are considered in uncertainty propagation.

The simulation results indicate that the first two methodologies not only improve the efficiency but also maintain adequate accuracy for problems with high-dimensional numerical input. GSIR with active learning can handle situations in which the dimensionality cannot be reduced because most of the variables are important or the variables contribute almost equally. The two methodologies can be combined as a two-stage dimension reduction for high-dimensional numerical input. The third method, CNN-GP, is capable of dealing with special high-dimensional input, i.e., mixed numerical and image data, with satisfactory regression accuracy, and it provides an estimate of the model error. Uncertainty propagation considering both epistemic and aleatory uncertainty provides better accuracy. The proposed methods could potentially be applied to engineering design and decision making.
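A minimal sketch of the GP-based active-learning ingredient of Methodology 2, using the classic U learning function to pick the next training point near the failure boundary. The limit-state function, pool size, and kernel are illustrative assumptions; the GSIR dimension reduction and importance sampling steps are omitted.

```python
# Sketch: GP active learning for a failure boundary g(x) < 0.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
g = lambda X: X[:, 0] ** 2 + X[:, 1] - 3.0       # toy limit-state function

X_train = rng.uniform(-2, 2, (8, 2))             # small initial design
y_train = g(X_train)
X_pool = rng.uniform(-2, 2, (2000, 2))           # Monte Carlo candidate pool

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8)
for _ in range(20):                              # active-learning iterations
    gp.fit(X_train, y_train)
    mu, sd = gp.predict(X_pool, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)       # low U = uncertain sign
    k = np.argmin(U)                             # most ambiguous pool point
    X_train = np.vstack([X_train, X_pool[k]])
    y_train = np.append(y_train, g(X_pool[k:k + 1]))

gp.fit(X_train, y_train)                         # final fit on all points
pf = np.mean(gp.predict(X_pool) < 0.0)           # crude failure probability
print(f"estimated P(fail) ≈ {pf:.3f}")
```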
79

Optimization Under Uncertainty and Total Predictive Uncertainty for a Tractor-Trailer Base-Drag Reduction Device

Freeman, Jacob Andrew 07 September 2012 (has links)
One key outcome of this research is the design for a 3-D tractor-trailer base-drag reduction device that predicts a 41% reduction in wind-averaged drag coefficient at 57 mph (92 km/h) and that is relatively insensitive to uncertain wind speed and direction and to uncertain deflection angles due to mounting accuracy and static aeroelastic loading; the best commercial device of non-optimized design achieves a 12% reduction at 65 mph. Another important outcome is the process by which the optimized design is obtained. That process includes verification and validation of the flow solver, a less complex but much broader 2-D pathfinder study, and the culminating 3-D aerodynamic shape optimization under uncertainty (OUU) study. To gain confidence in the accuracy and precision of a computational fluid dynamics (CFD) flow solver and its Reynolds-averaged Navier-Stokes (RANS) turbulence models, it is necessary to conduct code verification, solution verification, and model validation. These activities are accomplished using two commercial CFD solvers, Cobalt and RavenCFD, with four turbulence models: Spalart-Allmaras (S-A), S-A with rotation and curvature, Menter shear-stress transport (SST), and Wilcox 1998 k-ω. Model performance is evaluated for three low-subsonic 2-D applications: turbulent flat plate, planar jet, and NACA 0012 airfoil at α = 0°. The S-A turbulence model is selected for the 2-D OUU study. In the 2-D study, a tractor-trailer base flap model is developed that includes six design variables with generous constraints; 400 design candidates are evaluated. The design optimization loop includes the effect of uncertain wind speed and direction, and post-processing addresses several other uncertain effects on drag prediction. The study compares the efficiency and accuracy of two optimization algorithms, an evolutionary algorithm (EA) and dividing rectangles (DIRECT), twelve surrogate models, six sampling methods, and surrogate-based global optimization (SBGO) methods. The DAKOTA optimization and uncertainty quantification framework is used to interface the RANS flow solver, grid generator, and optimization algorithm. The EA is determined to be more efficient in obtaining a design with significantly reduced drag (as opposed to more efficient in finding the true drag minimum), and total predictive uncertainty is estimated as ±11%. While the SBGO methods are more efficient than a traditional optimization algorithm, they are computationally inefficient due to their serial nature, as implemented in DAKOTA. Because the S-A model does well in 2-D but not in 3-D under these conditions, the SST turbulence model is selected for the 3-D OUU study, which includes five design variables and evaluates a total of 130 design candidates. Again using the EA, the study propagates aleatory (wind speed and direction) and epistemic (perturbations in flap deflection angle) uncertainty within the optimization loop and post-processes several other uncertain effects. For the best 3-D design, total predictive uncertainty is +15/-42%, due largely to using a relatively coarse (six-million-cell) grid. That is, the best-design drag coefficient estimate is within +15/-42% of the true value; however, its improvement relative to the no-flaps baseline is accurate to within 3-9% uncertainty. / Ph. D.
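A minimal sketch of the OUU loop's structure, assuming an analytic `drag` stand-in for the RANS solve: each candidate flap design is scored by its drag averaged over sampled wind speed and direction, and an evolutionary optimizer (here SciPy's differential evolution, not the dissertation's EA) searches the design space.

```python
# Sketch: optimization under uncertainty with a wind-averaged drag objective.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)
winds = np.column_stack([rng.normal(11.0, 2.0, 64),          # speed, m/s
                         rng.uniform(0.0, 2 * np.pi, 64)])   # direction, rad

def drag(design, wind):            # hypothetical stand-in for a CFD evaluation
    angle, length = design
    v, phi = wind
    cd0 = 0.6 + 0.02 * np.cos(phi) * v / 11.0
    return cd0 - 0.25 * np.exp(-((angle - 12.0) / 6.0) ** 2) * length

def wind_averaged_drag(design):    # objective: mean drag over the wind samples
    return np.mean([drag(design, w) for w in winds])

res = differential_evolution(wind_averaged_drag,
                             bounds=[(0.0, 30.0), (0.2, 1.0)], seed=0)
print("best flap (deflection deg, length frac):", res.x)
```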
80

Multiscale Modeling and Uncertainty Quantification of Multiphase Flow and Mass Transfer Processes

Donato, Adam Armido 10 January 2015 (has links)
Most engineering systems have some degree of uncertainty in their input and operating parameters. The interaction of these parameters leads to the uncertain nature of the system performance and outputs. In order to quantify this uncertainty in a computational model, it is necessary to include the full range of uncertainty in the model. Currently, there are two major technical barriers to achieving this: (1) in many situations, particularly those involving multiscale phenomena, the stochastic nature of input parameters is not well defined and is usually approximated by limited experimental data or heuristics; (2) incorporating the full range of uncertainty across all uncertain input and operating parameters via conventional techniques often results in an inordinate number of computational scenarios to be performed, thereby limiting uncertainty analysis to simple or approximate computational models. The first objective is addressed by combining molecular and macroscale modeling, where the molecular modeling is used to quantify the stochastic distribution of parameters that are typically approximated. Specifically, an adsorption separation process is used to demonstrate this computational technique. In this demonstration, stochastic molecular modeling results are validated against a diverse range of experimental data sets. The stochastic molecular-level results are then shown to have a significant role in the macroscale performance of adsorption systems. The second portion of this research is focused on reducing the computational burden of performing an uncertainty analysis on practical engineering systems. The state of the art for uncertainty analysis relies on the construction of a meta-model (also known as a surrogate model or reduced-order model), which can then be sampled stochastically at relatively minimal computational cost. Unfortunately, these meta-models can be very computationally expensive to construct, and the complexity of construction can scale exponentially with the number of relevant uncertain input parameters. In an effort to dramatically reduce this effort, a novel methodology, QUICKER (Quantifying Uncertainty In Computational Knowledge Engineering Rapidly), has been developed. Instead of building a meta-model, QUICKER focuses exclusively on the output distributions, which are always one-dimensional. By focusing on one-dimensional distributions instead of the multiple dimensions analyzed via meta-models, QUICKER is able to handle systems with far more uncertain inputs. / Ph. D.
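For contrast with QUICKER (which is not reproduced here), this is a minimal sketch of the conventional two-step workflow the passage describes: fit a meta-model to a handful of expensive runs, then sample it stochastically to approximate the output distribution. The toy model and design sizes are illustrative assumptions.

```python
# Sketch: conventional meta-model UQ -- surrogate fit, then cheap sampling.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(7)
expensive_model = lambda X: np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

X_design = rng.uniform(-1, 1, (30, 2))       # 30 "expensive" evaluations
y_design = expensive_model(X_design)
surrogate = GaussianProcessRegressor().fit(X_design, y_design)

X_mc = rng.uniform(-1, 1, (100_000, 2))      # cheap stochastic sampling
y_mc = surrogate.predict(X_mc)               # approximate output distribution
print(f"mean = {y_mc.mean():.3f}, std = {y_mc.std():.3f}")
```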
