51. Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties
Sawlan, Zaid A (10 November 2018)
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical principles and are expressed through regression-based models or partial differential equations (PDEs) with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify the uncertainty in the unknown parameters using observations. To this end, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inferential approaches considered here.
Two problems are studied in this thesis. The first is the prediction of the fatigue life of metallic specimens; the second concerns inverse problems in linear PDEs. Both problems require the inference of unknown parameters from measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques.
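The two-stage workflow described above, maximum likelihood first and then an asymptotic (Laplace-type) Bayesian approximation, can be sketched on a toy Gaussian observation model. The data, parameter values, and flat prior below are illustrative assumptions, not the thesis's actual models:

```python
import numpy as np

# Synthetic measurements from an assumed Gaussian observation model.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=200)

# Step 1: maximum likelihood estimates (closed form for a Gaussian model:
# the sample mean and the biased sample standard deviation).
mu_hat = data.mean()
sigma_hat = data.std()

# Step 2: Laplace (asymptotic Gaussian) approximation of the posterior of mu
# under a flat prior: Gaussian centred at the MLE with variance sigma^2 / n.
post_var_mu = sigma_hat**2 / data.size
ci = (mu_hat - 1.96 * np.sqrt(post_var_mu),
      mu_hat + 1.96 * np.sqrt(post_var_mu))
```

For models without closed-form MLEs the same two steps apply, with a numerical optimizer replacing the sample statistics and the inverse Hessian of the negative log-likelihood supplying the asymptotic covariance.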
For fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated against data from uniaxial fatigue experiments. To generate accurate fatigue life predictions, competing S-N models are ranked according to several classical information-based measures. A different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model that generalizes S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process whose intensity function combines the S-N curves with an averaged effective stress computed from the solution of the linear elasticity equations.
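A minimal sketch of the spatial Poisson-process idea: the probability of at least one crack initiation over a loading history follows from integrating an intensity that grows with the local effective stress. The power-law intensity and all parameter values below are hypothetical stand-ins, not the calibrated S-N-based intensity of the thesis:

```python
import numpy as np

# 1-D surface coordinate [m] and an assumed effective stress field [MPa].
x = np.linspace(0.0, 1.0, 201)
stress = 300.0 + 50.0 * np.sin(np.pi * x)

# Hypothetical power-law intensity: initiations per metre per cycle.
c0, sigma0, m = 1e-7, 100.0, 4.0
intensity = c0 * (stress / sigma0) ** m

# For a Poisson process, P(at least one initiation in n cycles)
# = 1 - exp(-n * integral of the intensity over the surface).
n_cycles = 1e5
integral = np.mean(intensity) * (x[-1] - x[0]) * n_cycles
p_init = 1.0 - np.exp(-integral)
```

In the thesis's setting the intensity would instead be built from the calibrated S-N curves and an averaged effective stress from a linear elasticity solve; the Poisson bookkeeping is unchanged.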
52. Uncertainty Quantification and Assimilation for Efficient Coastal Ocean Forecasting
Siripatana, Adil (21 April 2019)
Bayesian inference is commonly used to quantify and reduce modeling uncertainties in coastal ocean models by computing the posterior probability density function (pdf) of some uncertain quantities of interest conditioned on available observations. The posterior can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation (DA) approach. The advantage of data assimilation schemes over MCMC-type methods is their ability to accommodate a large number of uncertain quantities algorithmically, without a significant increase in computational requirements. However, DA generally provides only approximate estimates, owing to its restrictive Gaussian prior and noise assumptions.
This thesis develops, implements, and tests novel efficient Bayesian inference techniques to quantify and reduce modeling and parameter uncertainties in coastal ocean models. Both state and parameter estimation are addressed within the framework of a state-of-the-art coastal ocean model, the Advanced Circulation (ADCIRC) model. The first part of the thesis proposes efficient Bayesian inference techniques for uncertainty quantification (UQ) and state-parameter estimation. Based on a realistic framework of observing system simulation experiments (OSSEs), an ensemble Kalman filter (EnKF) is first evaluated against a polynomial chaos (PC)-surrogate MCMC method under identical scenarios. After demonstrating the relevance of the EnKF for parameter estimation, an iterative EnKF is introduced and validated for the estimation of a spatially varying Manning's n coefficient field. A Karhunen-Loève (KL) expansion is also tested for dimensionality reduction and conditioning of the parameter search space. To further enhance the performance of PC-MCMC for estimating spatially varying parameters, a coordinate transformation of a Gaussian process with a parameterized prior covariance function is then incorporated into the Bayesian inference framework to account for uncertainty in the covariance model hyperparameters. The second part of the thesis focuses on the use of UQ and DA with adaptive mesh models. We develop new approaches combining the EnKF with multiresolution analysis, and demonstrate a significant reduction in the cost of data assimilation compared to a traditional EnKF implemented on a non-adaptive mesh.
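The EnKF analysis step for joint state-parameter estimation can be sketched with a toy one-dimensional problem. The linear state-parameter relation, the stand-in "Manning-like" parameter, and all numbers below are illustrative assumptions, not the ADCIRC setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy augmented ensemble: one observed state and one uncertain parameter
# (a stand-in for, e.g., a Manning's n coefficient).  The state is made to
# depend on the parameter so the update can correct both.
Ne = 100
param = rng.normal(0.03, 0.01, Ne)
state = 20.0 * param + rng.normal(0.0, 0.1, Ne)
ens = np.vstack([state, param])                  # shape (2, Ne)

H = np.array([[1.0, 0.0]])                       # observe the state only
R = np.array([[0.05 ** 2]])                      # observation-error covariance
y = np.array([0.8])                              # the observation

# Stochastic EnKF analysis step with perturbed observations.
X = ens - ens.mean(axis=1, keepdims=True)
P = X @ X.T / (Ne - 1)                           # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
perturbed = y[:, None] + rng.normal(0.0, 0.05, (1, Ne))
ens_post = ens + K @ (perturbed - H @ ens)
```

Because the unobserved parameter co-varies with the observed state across the ensemble, the gain updates both rows: this cross-covariance mechanism is what makes EnKF-based parameter estimation work.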
53. A new dynamic model for non-viral multi-treatment gene delivery systems for bone regeneration: parameter extraction, estimation, and sensitivity
Muhammad, Ruqiah (1 August 2019)
In this thesis we develop new mathematical models, using dynamical systems, to represent localized gene delivery of bone morphogenetic protein 2 into bone marrow-derived mesenchymal stem cells and rat calvarial defects. We examine two approaches, using pDNA or cmRNA treatments, respectively, toward the production of calcium deposition and bone regeneration in in vitro and in vivo experiments. We first review the relevant scientific literature and survey existing mathematical representations of similar treatment approaches. We then motivate and develop our new models and determine model parameters from the literature, heuristic approaches, and estimation using sparse data. We next conduct a qualitative analysis using dynamical systems theory. Because the parameters are estimated from sparse data, we also perform local and global sensitivity analyses of model outputs with respect to changes in model inputs. Finally, we compare results from different treatment protocols. Our model suggests that cmRNA treatments may perform better than pDNA treatments toward bone fracture healing. This work is intended to be a foundation for predictive models of non-viral local gene delivery systems.
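The kind of dynamical-systems treatment model described above can be sketched with a toy three-compartment ODE (transfected cells produce protein, which drives calcium deposition), together with a finite-difference local sensitivity of the output to one rate constant. The equations and all parameter values are hypothetical, not those of the thesis:

```python
import numpy as np

def simulate(k_prod, t_end=10.0, dt=1e-3):
    # Toy model: transfected cells T decay at rate d and drive production of
    # protein P, which degrades at rate g and drives calcium deposition C.
    # Integrated with explicit Euler steps.
    d, g, a = 0.3, 0.5, 0.2          # hypothetical rate constants
    T, P, C = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dT = -d * T
        dP = k_prod * T - g * P
        dC = a * P
        T, P, C = T + dt * dT, P + dt * dP, C + dt * dC
    return C

C_base = simulate(k_prod=2.0)

# Local (finite-difference) sensitivity of final deposition to k_prod,
# normalized to an elasticity: (dC / C) / (dk / k).
h = 0.01
sens = (simulate(2.0 * (1 + h)) - simulate(2.0 * (1 - h))) / (2 * h * C_base)
```

In this linear toy model the deposition is proportional to the production rate, so the elasticity comes out as 1; in a nonlinear treatment model the same finite-difference recipe gives a nontrivial local sensitivity, complemented by global (e.g., Sobol-type) analyses.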
54. Mathematical programming techniques for solving stochastic optimization problems with certainty equivalent measures of risk
Vinel, Alexander (1 May 2015)
The problem of risk-averse decision making under uncertainty is studied from both modeling and computational perspectives. First, we consider a framework for constructing coherent and convex measures of risk inspired by the infimal convolution operator, and prove that the proposed approach constitutes a new general representation of these classes. We then discuss how this scheme may be effectively employed to obtain a class of certainty equivalent measures of risk that can directly incorporate a decision maker's preferences as expressed by utility functions. This approach is subsequently used to introduce a new family of measures, the log-exponential convex measures of risk. Numerical experiments show that this family can be a useful tool for modeling risk-averse decision preferences under heavy-tailed distributions of uncertainties. Next, numerical methods for solving the resulting optimization problems are developed. Special attention is devoted to the class of p-order cone programming problems and mixed-integer models. Proposed solution approaches include approximation schemes for p-order cone and more general nonlinear programming problems, lifted conic and nonlinear valid inequalities, mixed-integer rounding conic cuts, and new linear disjunctive cuts.
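The certainty-equivalent idea can be illustrated with the classical entropic risk measure, the certainty equivalent of an exponential utility, shown here as a simple relative of the log-exponential family rather than the thesis's actual construction. The Gaussian loss model and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
losses = rng.normal(1.0, 0.5, size=50000)   # sampled uncertain losses

def entropic_risk(x, t):
    # Certainty equivalent of an exponential utility:
    # rho_t(X) = (1/t) log E[exp(t X)],
    # computed with a logsumexp-style shift for numerical stability.
    m = (t * x).max()
    return (m + np.log(np.mean(np.exp(t * x - m)))) / t

rho = entropic_risk(losses, t=1.0)
```

For Gaussian losses this measure equals mu + t sigma^2 / 2, so by Jensen's inequality it always sits above the expected loss, with the risk-aversion parameter t controlling the size of the premium.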
55. An Approach for the Adaptive Solution of Optimization Problems Governed by Partial Differential Equations with Uncertain Coefficients
Kouri, Drew (5 September 2012)
Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive, owing to the large number of PDE solves required at each iteration. In this thesis, I present an adaptive stochastic collocation framework for the discretization and numerical solution of these PDE-constrained optimization problems. The approach is based on dimension-adaptive sparse grid interpolation and employs trust regions to manage the adapted stochastic collocation models. Furthermore, I prove the convergence of sparse grid collocation methods applied to these optimization problems, as well as the global convergence of the retrospective trust region algorithm under weakened assumptions on gradient inexactness. In fact, if one can bound the error between actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. Moreover, I describe a high-performance implementation of my adaptive collocation and trust region framework in the C++ programming language with the Message Passing Interface (MPI). Many PDE solves are required to accurately quantify the uncertainty in such optimization problems; it is therefore essential to choose inexpensive approximate models and large-scale nonlinear programming techniques appropriately throughout the optimization routine. Numerical results for the adaptive solution of these optimization problems are presented.
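The basic stochastic collocation ingredient, approximating an expected objective by a quadrature rule in the uncertain variable so that a deterministic optimizer can act on it, can be sketched as follows. This uses an isotropic Gauss-Hermite rule on a scalar toy problem, not the dimension-adaptive sparse grid or trust-region machinery of the thesis:

```python
import numpy as np

# Probabilists' Gauss-Hermite collocation nodes and weights for xi ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
weights = weights / weights.sum()          # normalise to a probability rule

def objective(z):
    # E_xi[(z - xi)^2] approximated by the 5-node collocation rule.
    # (Analytically this is z^2 + 1; the rule is exact for this polynomial.)
    return np.sum(weights * (z - nodes) ** 2)

# A deterministic outer loop (here brute-force search) now sees a cheap,
# smooth surrogate of the stochastic objective.  True minimiser: z = 0.
zs = np.linspace(-2.0, 2.0, 401)
z_star = zs[np.argmin([objective(z) for z in zs])]
```

In the PDE-constrained setting, each node evaluation would be a PDE solve, which is exactly why adaptive sparse rules and trust-region model management pay off.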
56. Bayesian learning methods for potential energy parameter inference in coarse-grained models of atomistic systems
Wright, Eric Thomas (27 August 2015)
The present work addresses issues related to the derivation of reduced models of atomistic systems, their statistical calibration, and their relation to atomistic models of materials. The reduced model, known in the chemical physics community as a coarse-grained (CG) model, is calibrated within a Bayesian framework. Particular attention is given to developing likelihood functions, assigning priors on coarse-grained model parameters, and using data from molecular dynamics representations of atomistic systems to calibrate coarse-grained models such that certain physically relevant atomistic observables are accurately reproduced. The developed Bayesian framework is then applied in three case studies of increasing complexity and practical application. A freely jointed chain model is considered first for illustrative purposes. The next example entails the construction of a coarse-grained model for a liquid heptane system, with the explicit design goal of accurately predicting a vapor-liquid transfer free energy. Finally, a coarse-grained model is developed for an alkylthiophene polymer that has been shown to have practical use in certain types of photovoltaic cells. The development therein employs Bayesian decision theory to select an optimal CG potential energy function. Subsequently, this model is subjected to validation tests in a prediction scenario relevant to the performance of a polyalkylthiophene-based solar cell.
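The calibration loop described above can be sketched in miniature: a random-walk Metropolis sampler infers a single coarse-grained spring constant from noisy "atomistic" observations of a bond observable. The observation model, likelihood, and all numbers are illustrative assumptions, not the thesis's heptane or polymer models:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Atomistic" data: noisy observations of a target observable, here the
# mean-squared extension of a 1-D harmonic bond, <x^2> = kT / k.
kT, k_true = 1.0, 4.0
data = kT / k_true + rng.normal(0.0, 0.02, 50)

def log_post(log_k):
    # Gaussian likelihood for the observable plus a flat prior on log k.
    pred = kT / np.exp(log_k)
    return -0.5 * np.sum((data - pred) ** 2) / 0.02**2

# Random-walk Metropolis over log k (symmetric proposal, so the standard
# accept/reject ratio applies).
chain, cur = [], np.log(2.0)
lp = log_post(cur)
for _ in range(5000):
    prop = cur + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        cur, lp = prop, lp_prop
    chain.append(np.exp(cur))
k_post = np.mean(chain[1000:])   # posterior mean after burn-in
```

The real calibrations replace the closed-form observable with molecular dynamics averages under the CG potential, which makes each likelihood evaluation expensive but leaves the inferential structure the same.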
57. Toward a predictive model of tumor growth
Hawkins-Daarud, Andrea Jeanine (16 June 2011)
In this work, an attempt is made to lay out a framework in which models of tumor growth can be built, calibrated, validated, and differentiated in their level of goodness, such that all the uncertainties associated with each step of the modeling process can be accounted for in the final model prediction.
The study can be divided into four basic parts. The first involves the development of a general family of mathematical models of interacting species representing the various constituents of living tissue, which generalizes those previously available in the literature. In this theory, surface effects are introduced by incorporating gradients of the volume fractions of the interacting species into the Helmholtz free energy, thus providing a generalization of the Cahn-Hilliard theory of phase change in binary media and leading to fourth-order, coupled systems of nonlinear evolution equations. A subset of these governing equations is selected as the primary class of models of tumor growth considered in this work.
The second component of this study focuses on the emerging and fundamentally important issue of predictive modeling: the study of model calibration, validation, and quantification of uncertainty in predictions of target outputs of models. The Bayesian framework suggested by Babuska, Nobile, and Tempone is employed to embed the calibration and validation processes within the framework of statistical inverse theory. Extensions of the theory are developed that are necessary for the application of these methods to models of tumor growth.
The third part of the study focuses on the numerical approximation of the diffuse-interface models of tumor growth and on the numerical implementation of the statistical inverse methods at the core of the validation process. A class of mixed finite element models is developed for the considered mass-conservation models of tumor growth. A family of time-marching schemes is developed and applied to representative problems of tumor evolution.
Finally, in the fourth component of this investigation, a collection of synthetic examples, mostly in two dimensions, is considered to provide a proof of concept of the theory and methods developed in this work.
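The Cahn-Hilliard structure mentioned above can be illustrated with a minimal one-dimensional solver: a standard semi-implicit Fourier scheme for u_t = (u^3 - u)_xx - eps^2 u_xxxx on a periodic domain. This is a textbook discretization with arbitrary parameter values, not the mixed finite element formulation developed in the thesis:

```python
import numpy as np

# Semi-implicit Fourier solver for 1-D Cahn-Hilliard:
#   u_t = (u^3 - u)_xx - eps^2 u_xxxx,  periodic on [0, 2 pi).
# The stiff fourth-order term is treated implicitly; the nonlinearity
# explicitly.  Parameter values here are arbitrary demonstration choices.
N, L, eps, dt = 128, 2 * np.pi, 0.1, 1e-3
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # wavenumbers

rng = np.random.default_rng(4)
u = 0.05 * rng.standard_normal(N)              # small perturbation of u = 0
mass0 = u.mean()                               # conserved quantity

for _ in range(2000):
    fh = np.fft.fft(u**3 - u)
    uh = (np.fft.fft(u) - dt * k**2 * fh) / (1.0 + dt * eps**2 * k**4)
    u = np.fft.ifft(uh).real
```

Starting from noise, the solution separates into phases near +1 and -1 (spinodal decomposition) while conserving total mass exactly at the k = 0 mode, the qualitative behavior the tumor-growth models inherit from the Cahn-Hilliard structure.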
58. Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification
Winokur, Justin Gregory (January 2015)
Polynomial chaos expansions provide an efficient and robust framework to analyze and quantify uncertainty in computational models. This dissertation explores the use of adaptive sparse grids to reduce the computational cost of determining a polynomial model surrogate, while examining and implementing new adaptive techniques.
Determination of chaos coefficients using traditional tensor-product quadrature suffers from the so-called curse of dimensionality, where the number of model evaluations scales exponentially with dimension. Previous work used sparse Smolyak quadrature to temper this dimensional scaling, and was applied successfully to an expensive ocean general circulation model, HYCOM, during the September 2004 passage of Hurricane Ivan through the Gulf of Mexico. Results from this investigation suggested that adaptivity could yield great gains in efficiency. However, efforts at adaptivity are hampered by quadrature accuracy requirements.
We explore the implementation of a novel adaptive strategy to design sparse ensembles of oceanic simulations suitable for constructing polynomial chaos surrogates. We use a recently developed adaptive pseudo-spectral projection (aPSP) algorithm that is based on a direct application of Smolyak's sparse grid formula and that allows for the use of arbitrary admissible sparse grids. Such a construction ameliorates the severe restrictions posed by insufficient quadrature accuracy. The adaptive algorithm is tested using an existing simulation database of the HYCOM model during Hurricane Ivan. The a priori tests demonstrate that sparse and adaptive pseudo-spectral constructions lead to substantial savings over isotropic sparse sampling.
To provide a finer degree of resolution control along two distinct subsets of model parameters, we investigate two methods for building polynomial approximations. Both approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids. Control of the error along different subsets of parameters may be needed when a model depends on uncertain parameters and deterministic design variables. We first consider a nested approach, in which an independent adaptive sparse grid pseudo-spectral projection is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of aPSP in the space of all parameters, and introduce directional refinement criteria to provide tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that, whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, adaptive PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve a similar projection error.
To increase efficiency further, a subsampling technique is developed to allow for local adaptivity within the aPSP algorithm. The local refinement is achieved by exploiting the hierarchical nature of nested quadrature grids to determine regions of estimated convergence. To achieve global representations with local refinement, synthesized model data from a lower-order projection are used for the final projection. The final subsampled grid was also tested with two more robust sparse projection techniques: compressed sensing and hybrid least-angle regression. These methods are evaluated on two sample test functions and then in an a priori analysis of the HYCOM simulations and the shock-tube ignition model investigated earlier. Small but non-trivial efficiency gains were found in some cases; in others, a large reduction in model evaluations with only a small loss of model fidelity was realized. Further extensions and capabilities are recommended for future investigations.
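Although the dissertation works with adaptive pseudo-spectral projection, the core object, a polynomial chaos surrogate whose coefficients directly yield statistics, can be sketched with a simple least-squares regression onto a total-degree Legendre basis. The test function, degree, and sample sizes below are arbitrary illustrative choices:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(5)

def model(xi):
    # Stand-in for an expensive simulation; xi has rows in [-1, 1]^2.
    return np.exp(0.5 * xi[:, 0]) * (1.0 + 0.3 * xi[:, 1] ** 2)

# Total-degree-4 Legendre basis in two dimensions.
degs = [(i, j) for i in range(5) for j in range(5) if i + j <= 4]

def basis(pts):
    return np.column_stack(
        [legendre.legval(pts[:, 0], np.eye(5)[i])
         * legendre.legval(pts[:, 1], np.eye(5)[j]) for i, j in degs])

# Fit chaos coefficients by least squares on random samples.
xi = rng.uniform(-1, 1, (200, 2))
coef, *_ = np.linalg.lstsq(basis(xi), model(xi), rcond=None)

def surrogate(pts):
    return basis(pts) @ coef

# The constant-basis coefficient is the surrogate's estimate of the mean.
pce_mean = coef[0]
xi_test = rng.uniform(-1, 1, (100, 2))
max_err = np.max(np.abs(surrogate(xi_test) - model(xi_test)))
```

Pseudo-spectral projection replaces the regression with sparse-grid quadrature, which is what makes accuracy control and adaptivity tractable in higher dimensions.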
59. Uncertainty in the Bifurcation Diagram of a Model of Heart Rhythm Dynamics
Ring, Caroline (January 2014)
To understand the underlying mechanisms of cardiac arrhythmias, computational models are used to study heart rhythm dynamics. The parameters of these models carry inherent uncertainty. Therefore, to interpret the results of these models, uncertainty quantification (UQ) and sensitivity analysis (SA) are important. Polynomial chaos (PC) is a computationally efficient method for UQ and SA in which a model output Y, dependent on some independent uncertain parameters represented by a random vector ξ, is approximated as a spectral expansion in multidimensional orthogonal polynomials in ξ. The expansion can then be used to characterize the uncertainty in Y.
PC methods were applied to UQ and SA of the dynamics of a two-dimensional return-map model of cardiac action potential duration (APD) restitution in a paced single cell. Uncertainty was considered in four parameters of the model: three time constants and the pacing stimulus strength. The basic cycle length (BCL) (the period between stimuli) was treated as the control parameter. Model dynamics were characterized with bifurcation analysis, which determines the APD and stability of fixed points of the model over a range of BCLs, and the BCLs at which bifurcations occur. These quantities can be plotted in a bifurcation diagram, which summarizes the dynamics of the model. PC UQ and SA were performed for these quantities. UQ results were summarized in a novel probabilistic bifurcation diagram that visualizes the APD and stability of fixed points as uncertain quantities.
Classical PC methods assume that model outputs exist and are reasonably smooth over the full domain of ξ. Because models of heart rhythm often exhibit bifurcations and discontinuities, their outputs may not obey the existence and smoothness assumptions on the full domain, but only on some subdomains, which may be irregularly shaped. On these subdomains, the random variables representing the parameters may no longer be independent. PC methods therefore must be modified for analysis of these discontinuous quantities. The Rosenblatt transformation maps the variables on the subdomain onto a rectangular domain; the transformed variables are independent and uniformly distributed. A new numerical estimation of the Rosenblatt transformation was developed that improves accuracy and computational efficiency compared to existing kernel density estimation methods. PC representations of the outputs in the transformed variables were then constructed. Coefficients of the PC expansions were estimated using Bayesian inference methods. For discontinuous model outputs, SA was performed using a sampling-based variance-reduction method, with the PC estimation used as an efficient proxy for the full model.
To evaluate the accuracy of the PC methods, PC UQ and SA results were compared to large-sample Monte Carlo (MC) UQ and SA results. PC UQ and SA of the fixed-point APDs, and of the probability that a stable fixed point existed at each BCL, were very close to MC results for those quantities. However, PC UQ and SA of the bifurcation BCLs were less accurate compared to MC results.
The computational time required for PC and Monte Carlo methods was also compared. PC analysis (including the Rosenblatt transformation and Bayesian inference) required less than 10 total hours of computational time, of which approximately 30 minutes was devoted to model evaluations, compared to approximately 65 hours required for Monte Carlo sampling of the model outputs at 1 × 10^6 ξ points.
PC methods provide a useful framework for efficient UQ and SA of the bifurcation diagram of a model of cardiac APD dynamics. Model outputs with bifurcations and discontinuities can be analyzed using modified PC methods. The methods applied and developed in this study may be extended to other models of heart rhythm dynamics. These methods have potential for use in uncertainty and sensitivity analysis in many applications of these models, including simulation studies of heart rate variability, cardiac pathologies, and interventions.
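The Rosenblatt transformation at the heart of the modified PC approach can be illustrated on the simplest irregular subdomain, a triangle, where the map (u1, u2) = (F1(x1), F(x2 | x1)) renders dependent variables independent and uniform. This is a textbook closed-form example, not the numerical estimation procedure developed in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(6)

# (x1, x2) uniform on the triangle 0 < x2 < x1 < 1: marginal f1(x1) = 2 x1,
# conditional x2 | x1 uniform on (0, x1).  The two variables are dependent.
n = 20000
x1 = np.sqrt(rng.uniform(size=n))         # inverse-CDF sample of f1
x2 = rng.uniform(size=n) * x1

# Rosenblatt transformation: u1 = F1(x1) = x1^2, u2 = F(x2 | x1) = x2 / x1.
u1 = x1 ** 2
u2 = x2 / x1

corr_before = np.corrcoef(x1, x2)[0, 1]   # strongly positive
corr_after = np.corrcoef(u1, u2)[0, 1]    # near zero: independent uniforms
```

On irregular subdomains arising from bifurcations, the marginal and conditional CDFs are not available in closed form, which is why the dissertation develops a numerical estimate of the transformation.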
60. Closing the building energy performance gap by improving our predictions
Sun, Yuming (27 August 2014)
A growing number of studies indicate that the predicted energy performance of buildings deviates significantly from actual measured energy use. This so-called "performance gap" may undermine confidence in energy-efficient buildings, and thereby the role of building energy efficiency in the national carbon reduction plan. Closing the performance gap has become a daunting challenge for the professions involved, stimulating them to reflect on how to investigate and better understand the size, origins, and extent of the gap. The energy performance gap underlines the lack of prediction capability of current building energy models. Specifically, existing predictions are predominantly deterministic, providing point estimates of the future quantity or event of interest. They thus largely ignore the error and noise inherent in an uncertain future of building energy consumption. To overcome this, the thesis turns to a thriving area in engineering statistics that focuses on computation-based uncertainty quantification. The work provides theories and models that enable probabilistic prediction of future energy consumption, forming the basis of risk assessment in decision-making. Uncertainties that affect the wide variety of interacting systems in buildings are organized into five scales (meteorology - urban - building - systems - occupants). At each level, both model-form and input-parameter uncertainty are characterized probabilistically, involving statistical modeling and parameter distributional analysis. The quantification of uncertainty at the different system scales is accomplished using the network of collaborators established through an NSF-funded research project. This bottom-up uncertainty quantification approach, which deals with meta-uncertainty, is fundamental for the generic application of uncertainty analysis across different types of buildings, under different urban climate conditions, and in different usage scenarios.
Probabilistic predictions are evaluated by two criteria: coverage and sharpness. The goal of probabilistic prediction is to maximize the sharpness of the predictive distributions subject to coverage of the realized values. The method is evaluated on a set of buildings on the Georgia Tech campus, where the energy consumption of each building is monitored, in most cases through a collection of hourly sub-metered consumption data. This research shows that a good match between probabilistic predictions and real building energy consumption in operation is achievable. Results from the six case buildings show that using the best point estimates of the probabilistic predictions reduces the mean absolute error (MAE) from 44% to 15% and the root mean squared error (RMSE) from 49% to 18% in total annual cooling energy consumption. For monthly cooling energy consumption, the MAE decreases from 44% to 21% and the RMSE from 53% to 28%. More importantly, the entire probability distributions are statistically verified at the annual level of building energy predictions. Based on uncertainty and sensitivity analysis applied to these buildings, the thesis concludes that the proposed method significantly reduces the magnitude of the building energy performance gap and effectively infers its origins.
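The two evaluation criteria can be computed directly from predictive distributions. A sketch with Gaussian monthly predictions follows; all numbers are synthetic stand-ins, not the Georgia Tech data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy monthly-energy example: probabilistic predictions given as Gaussian
# (mean, sd) pairs, evaluated against realised consumption values.
actual = rng.normal(100.0, 10.0, size=12)            # realised values
mean_pred = actual + rng.normal(0.0, 5.0, size=12)   # imperfect forecasts
sd_pred = np.full(12, 12.0)                          # predictive spread

z = 1.645                                            # 90% central interval
lo, hi = mean_pred - z * sd_pred, mean_pred + z * sd_pred

coverage = np.mean((actual >= lo) & (actual <= hi))  # fraction covered
sharpness = np.mean(hi - lo)                         # mean interval width
```

Tightening sd_pred shrinks the sharpness value (narrower intervals) at the risk of dropping coverage below the nominal 90%, which is exactly the trade-off the maximize-sharpness-subject-to-coverage criterion formalizes.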