1 
Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem. Hetzler, Adam C., 03 October 2013 (has links)
This research presents new physics-based methods to estimate predictive uncertainty stemming from uncertainty in the material opacities in radiative transfer computations of key quantities of interest (QOIs). New methods are needed because it is infeasible to apply standard uncertainty-propagation techniques to the O(10^5) uncertain opacities in a realistic simulation. The new approach toward uncertainty quantification applies the uncertainty analysis to the physical parameters in the underlying model used to calculate the opacities. This set of uncertain parameters is much smaller (O(10^2)) than the number of opacities. To further reduce the dimension of the parameter set to be rigorously explored, we apply additional screening at two levels of the calculational hierarchy: first, physics-based screening eliminates a priori the physical parameters that the underlying physics models show to be unimportant; then, sensitivity analysis in simplified versions of the complex problem of interest screens out parameters that are not important to the QOIs. We employ a Bayesian Multivariate Adaptive Regression Spline (BMARS) emulator for this sensitivity analysis. The high dimension of the input space and the large number of samples test the efficacy of these methods on larger problems. Ultimately, we want to perform uncertainty quantification on the large, complex problem with the reduced set of parameters. Results of this research demonstrate that the QOIs for the target problems agree across different parameter screening criteria and varying sample sizes. Since the QOIs agree, we have gained confidence in our results using the multiple screening criteria and sample sizes.
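The screening step described above can be illustrated with a deliberately simple stand-in: instead of the BMARS emulator used in the work, the sketch below fits a cheap linear surrogate to model samples and keeps only the parameters with large standardized coefficients. All dimensions and names are hypothetical.

```python
import numpy as np

# Illustrative parameter screening via a linear surrogate (not BMARS):
# sample the inputs, fit a least-squares surrogate, rank parameters by
# standardized coefficient magnitude, and retain only the top few.
rng = np.random.default_rng(0)
n_samples, n_params = 500, 100          # hypothetical dimensions

X = rng.standard_normal((n_samples, n_params))
true_weights = np.zeros(n_params)
true_weights[[3, 17, 42]] = [2.0, -1.5, 1.0]   # only three parameters matter
y = X @ true_weights + 0.1 * rng.standard_normal(n_samples)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
importance = np.abs(coef) * X.std(axis=0)       # standardized sensitivity
keep = np.argsort(importance)[::-1][:10]        # reduced parameter set
```

A real study would replace the linear fit with a nonlinear emulator and verify, as the abstract does, that the QOIs are insensitive to the screening threshold.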

2 
Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty. Whiting, Nolan Wagner, 19 July 2019 (has links)
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the model form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM to account for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed against the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches was assessed for its ability to tightly encapsulate the true value in nature both at locations where experimental results are provided and at prediction locations where no experimental data are available. Generally, the MAVM performed best in cases with sparse data and/or large extrapolations, while Bayesian calibration outperformed the others when an extensive amount of experimental data covers the application domain.
/ Master of Science / Uncertainties often exist when conducting physical experiments; whether the uncertainty arises from the inputs, from the environmental conditions in which the experiment takes place, or from numerical error in the model, it can be difficult to validate and compare the results of a model with those of an experiment. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be aleatory (randomness to which a probability distribution for the likelihood of drawing values can be assigned) or epistemic (lack of knowledge, with inputs drawn from within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM to account for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics (CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed against the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles.
Each of these validation/calibration approaches was assessed for its ability to tightly encapsulate the true value in nature both at locations where experimental results are provided and at prediction locations where no experimental data are available. Also of interest was how well each method could predict the uncertainties about the simulation outside of the region in which experimental observations were made and model form uncertainties could be observed.
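The first of the four approaches above is directly computable: the AVM is the area between the empirical CDFs of the simulation outcomes and the experimental outcomes. A minimal sketch, omitting the confidence-interval machinery of the MAVM, with synthetic samples standing in for real outcomes:

```python
import numpy as np

# Area validation metric: area between two empirical CDFs.
def ecdf(samples, x):
    # fraction of samples <= x, evaluated on a grid
    return np.searchsorted(np.sort(samples), x, side="right") / samples.size

def area_validation_metric(sim, expt, n_grid=4000):
    lo = min(sim.min(), expt.min())
    hi = max(sim.max(), expt.max())
    xs = np.linspace(lo, hi, n_grid)
    dx = xs[1] - xs[0]
    return np.sum(np.abs(ecdf(sim, xs) - ecdf(expt, xs))) * dx

sim = np.random.default_rng(1).normal(0.0, 1.0, 200)  # hypothetical model outcomes
expt = sim + 0.5                                      # "experiments" shifted by 0.5
d = area_validation_metric(sim, expt)                 # approximately 0.5
```

For a pure shift between two otherwise identical distributions, the metric recovers the shift, which is what makes it a natural estimate of model form uncertainty.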

3 
Numerical simulation of backward erosion piping in heterogeneous fields. Liang, Yue; Yeh, Tian-Chyi Jim; Wang, Yu-Li; Liu, Mingwei; Wang, Junjie; Hao, Yonghong, 04 1900 (has links)
Backward erosion piping (BEP) is one of the major causes of seepage failures in levees. Seepage fields dictate BEP behavior and are influenced by the heterogeneity of soil properties. To investigate the effects of heterogeneity on seepage failures, we develop a numerical algorithm and conduct simulations to study BEP progression in geologic media with spatially stochastic parameters. Specifically, the void ratio e, the hydraulic conductivity k, and the ratio of particle contents r of the media are represented as stochastic variables. They are characterized by their means and variances, spatial correlation structures, and the cross correlation between variables. Results of the simulations reveal that heterogeneity accelerates the development of preferential flow paths, which profoundly increase the likelihood of seepage failures. To account for unknown heterogeneity, we define the probability of seepage instability (PI) to evaluate the failure potential of a given site. Using Monte Carlo simulation (MCS), we demonstrate that the PI value is significantly influenced by the mean and variance of ln k and its spatial correlation scales, whereas the other parameters, such as the means and variances of e and r and their cross correlation, have minor impacts. Based on the PI analyses, we introduce a risk rating system that classifies the field into regions of different risk levels. This rating system is useful for seepage failure prevention and assists decision making when BEP occurs.
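A drastically simplified Monte Carlo sketch of such a PI estimate: heterogeneity enters only through a lognormal conductivity k, "failure" is declared when a nominal exit gradient q/k exceeds a critical gradient, and the spatial correlation structure that the study emphasizes is not modeled. All numerical values are hypothetical.

```python
import math
import numpy as np

# Monte Carlo estimate of a probability of seepage instability (PI)
rng = np.random.default_rng(2)
mu_lnk, sd_lnk = -4.0, 1.0       # hypothetical mean/std of ln k
q, i_crit = 0.005, 1.0           # hypothetical seepage flux and critical gradient

k = rng.lognormal(mean=mu_lnk, sigma=sd_lnk, size=200_000)
pi_mc = np.mean(q / k > i_crit)  # fraction of realizations that "fail"

# Analytic check for this toy model: P(q/k > i_crit) = Phi((ln(q/i_crit) - mu)/sd)
z = (math.log(q / i_crit) - mu_lnk) / sd_lnk
pi_exact = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Even in this toy setting the PI depends strongly on the mean and variance of ln k, echoing the sensitivity result reported in the abstract.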

4 
Continuous reservoir simulation incorporating uncertainty quantification and real-time data. Holmes, Jay Cuthbert, 15 May 2009 (has links)
A significant body of work has demonstrated both the promise and difficulty of
quantifying uncertainty in reservoir simulation forecasts. It is generally accepted that
accurate and complete quantification of uncertainty should lead to better decision
making and greater profitability. Many of the techniques presented in past work attempt
to quantify uncertainty without sampling the full parameter space, saving on the number
of simulation runs, but inherently limiting and biasing the uncertainty quantification in
the resulting forecasts. In addition, past work generally has looked at uncertainty in
synthetic models and does not address the practical issues of quantifying uncertainty in
an actual field. Both of these issues must be addressed in order to rigorously quantify
uncertainty in practice.
In this study a new approach to reservoir simulation is taken whereby the traditional one-time simulation study is replaced with a continuous process potentially spanning the life of the reservoir. In this process, reservoir models are generated and run 24 hours a day, seven days a week, allowing many more runs than previously possible and yielding a more thorough exploration of possible reservoir descriptions. In turn, more runs enable better estimates of uncertainty in the resulting forecasts. A new technology that allows this process to run continuously with little human interaction is real-time production and pressure data, which can be automatically integrated into runs.
Two tests of this continuous simulation process were conducted. The first test
was conducted on the Production with Uncertainty Quantification (PUNQ) synthetic
reservoir. Comparison of our results with previous studies shows that the continuous
approach gives consistent and reasonable estimates of uncertainty. The second study was
conducted in real time on a live field. This study demonstrates the continuous simulation
process and shows that it is feasible and practical for real-world applications.

5 
The Method of Manufactured Universes for Testing Uncertainty Quantification Methods. Stripling, Hayes Franklin, December 2010 (has links)
The Method of Manufactured Universes is presented as a validation framework for
uncertainty quantification (UQ) methodologies and as a tool for exploring the effects
of statistical and modeling assumptions embedded in these methods. The framework
calls for a manufactured reality from which "experimental" data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which
simulation results are created (possibly with numerical error), the application of a
system for quantifying uncertainties in model predictions, and an assessment of how
accurately those uncertainties are quantified. The application presented for this research manufactures a particle-transport "universe," models it using diffusion theory
with uncertain material parameters, and applies both Gaussian process and Bayesian
MARS algorithms to make quantitative predictions about new "experiments" within
the manufactured reality. To test further the responses of these UQ methods, we
conduct exercises with "experimental" replicates, "measurement" error, and choices
of physical inputs that reduce the accuracy of the diffusion model's approximation
of our manufactured laws.
Our first application of MMU was rich in areas for exploration and highly informative. In the case of the Gaussian process code, we found that the fundamental
statistical formulation was not appropriate for our functional data, but that the code
allows a knowledgeable user to vary parameters within this formulation to tailor its
behavior for a specific problem. The Bayesian MARS formulation was a more natural emulator given our manufactured laws, and we used the MMU framework to further develop a calibration method and to characterize the diffusion model discrepancy.
Overall, we conclude that an MMU exercise with a properly designed universe (that
is, one that is an adequate representation of some real-world problem) will provide
the modeler with an added understanding of the interaction between a given UQ
method and his/her more complex problem of interest. The modeler can then apply
this added understanding and make more informed predictive statements.
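The MMU loop above can be caricatured in a few lines: a manufactured "truth", noisy "experiments" drawn from it, an imperfect parameterized model calibrated to the data, and an assessment of the model's prediction at a new point. The attenuation laws below are made up for illustration; in a real MMU exercise the model form deliberately differs from the manufactured law so that model discrepancy can be studied.

```python
import numpy as np

# Toy manufactured universe: an exponential "transport" law
def truth(x):
    return np.exp(-1.3 * x)

# Imperfect model with one uncertain parameter sigma
def model(x, sigma):
    return np.exp(-sigma * x)

rng = np.random.default_rng(3)
x_exp = np.linspace(0.1, 2.0, 25)
data = truth(x_exp) * (1.0 + 0.02 * rng.standard_normal(x_exp.size))  # 2% "measurement" error

# Calibrate sigma on the log scale, then assess the prediction at a new "experiment"
slope, _ = np.polyfit(x_exp, np.log(data), 1)
sigma_hat = -slope
rel_err = abs(model(2.5, sigma_hat) - truth(2.5)) / truth(2.5)
```

Because we built the universe, we can score the calibrated model against the exact answer, which is precisely the leverage the framework provides.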

6 
Error analysis for radiation transport. Tencer, John Thomas, 18 February 2014 (has links)
All relevant sources of error in the numerical solution of the radiative transport equation are considered. Common spatial discretization methods are discussed for completeness. The application of these methods to the radiative transport equation is not substantially different from that for any other partial differential equation. Several of the most prevalent angular approximations within the heat transfer community are implemented and compared. Three model problems are proposed. The relative accuracy of each of the angular approximations is assessed for a range of optical thicknesses and scattering albedos. The model problems represent a range of application spaces. The quantified comparison of these approximations on the basis of accuracy over such a wide parameter space is one of the contributions of this work.
The major original contribution of this work involves the treatment of errors associated with the energy dependence of intensity. The full-spectrum correlated-k distribution (FSK) method has received recent attention as a good compromise between computational expense and accuracy. Two approaches are taken toward quantifying the error associated with the FSK method. The Multi-Source Full-Spectrum k-Distribution (MSFSK) method makes use of the convenient property that the FSK method is exact for homogeneous media. It involves a line-by-line solution on a coarse grid and a number of k-distribution solutions on subdomains to effectively increase the grid resolution. This yields highly accurate solutions on fine grids and a known rate of convergence as the number of subdomains increases.
The stochastic full-spectrum k-distribution (SFSK) method is a more general approach to estimating the error in k-distribution solutions. The FSK method relies on a spectral reordering and scaling which greatly simplify the spectral dependence of the absorption coefficient. This reordering is not necessarily consistent across the entire domain, which results in errors. The SFSK method treats the absorption-line blackbody distribution function not as deterministic but as a stochastic process. The mean, covariance, and correlation structure are all fit empirically to data from a high-resolution spectral database. The standard deviation of the heat-flux prediction is found to be a good error estimator for the k-distribution method.
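The reordering at the heart of all k-distribution methods is easy to demonstrate: a rapidly varying absorption coefficient kappa(eta) is replaced by its sorted values k(g), a smooth monotone function of the cumulative fraction g. The spectrum below is synthetic, not drawn from a real spectral database.

```python
import numpy as np

eta = np.linspace(0.0, 100.0, 20_001)            # wavenumber grid
kappa = 1.0 + 0.9 * np.sin(0.5 * eta) ** 2       # stand-in absorption spectrum

k_of_g = np.sort(kappa)                          # reordered coefficient k(g)
g = np.linspace(0.0, 1.0, k_of_g.size)

# Band-averaged quantities are preserved by the reordering: for a homogeneous
# path of length L, the line-by-line and k-distribution transmissivities agree.
L = 0.7
tau_lbl = np.mean(np.exp(-kappa * L))            # "line-by-line" band average
tau_kd = np.mean(np.exp(-k_of_g * L))            # k-distribution band average
```

The exact agreement for homogeneous media is the property the MSFSK method exploits; the errors studied here arise only when the reordering is applied across inhomogeneous subdomains.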

7 
On goal-oriented error estimation and adaptivity for nonlinear systems with uncertain data and application to flow problems. Bryant, Corey Michael, 09 February 2015 (has links)
The objective of this work is to develop a posteriori error estimates and adaptive strategies for the numerical solution of nonlinear systems of partial differential equations with uncertain data. Areas of application cover problems in fluid mechanics, including a Bayesian model selection study of turbulence comparing different uncertainty models. Accounting for uncertainties in model parameters may significantly increase the computational time when simulating complex problems. The premise is that using error estimates and adaptively refining the solution process can reduce the cost of such simulations while preserving their accuracy within some tolerance. New insights for goal-oriented error estimation for deterministic nonlinear problems are first presented. Linearization of the adjoint problems and quantities of interest introduces higher-order terms in the error representation that are generally neglected. Their effects on goal-oriented adaptive strategies are investigated in detail here. Contributions on that subject include extensions of well-known theoretical results for linear problems to the nonlinear setting, computational studies in support of these results, and an extensive comparative study of goal-oriented adaptive schemes that do, and do not, include the higher-order terms. Approaches for goal-oriented error estimation for PDEs with uncertain coefficients have already been presented, but lack the capability of distinguishing between the different sources of error. A novel approach is proposed here that decomposes the error estimate into contributions from the physical discretization and from the uncertainty approximation. Theoretical bounds are proven and numerical examples are presented to verify that the approach identifies the predominant source of error in a surrogate model. Adaptive strategies that use this error decomposition to refine the approximation space accordingly are designed and tested.
All methodologies are demonstrated on benchmark flow problems: the Stokes lid-driven cavity, the 1D Burgers equation, and 2D incompressible flows at low Reynolds numbers. The procedure is also applied to an uncertainty quantification study of RANS turbulence models in channel flows. Adaptive surrogate models are constructed to make parameter uncertainty propagation more efficient. Using surrogate models and adaptivity in a Bayesian model selection procedure, it is shown that significant computational savings can be gained over the full RANS model while maintaining similar accuracy in the predictions.
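The core identity behind goal-oriented (adjoint-weighted residual) error estimation can be sketched for a linear discrete problem A u = f with goal functional J(u) = g^T u: with the adjoint solution z of A^T z = g, the residual of an approximate solution u_h converts exactly into the QoI error, J(u) - J(u_h) = z^T (f - A u_h). The higher-order terms discussed in the abstract appear only once the problem or the QoI is nonlinear and must be linearized. The system below is a random stand-in, not a discretized PDE.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
A = 5.0 * np.eye(n) + rng.standard_normal((n, n))   # well-conditioned test matrix
f = rng.standard_normal(n)
g = rng.standard_normal(n)                          # defines the goal J(u) = g^T u

u = np.linalg.solve(A, f)                           # reference solution
u_h = u + 0.01 * rng.standard_normal(n)             # perturbed "numerical" solution
z = np.linalg.solve(A.T, g)                         # adjoint solution

eta_est = z @ (f - A @ u_h)                         # adjoint-weighted residual
err_true = g @ (u - u_h)                            # true QoI error
```

For linear problems and linear QoIs the estimate is exact up to roundoff, which is the baseline the nonlinear extensions in this work build on.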

8 
Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations. Stripling, Hayes Franklin, 16 December 2013 (has links)
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error.
We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for the solution of two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable but also the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
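The scaling claim above, that the cost does not grow rapidly with the number of uncertain inputs, can be sketched for a generic parameterized linear system A(p) u = f with goal J = g^T u: a single adjoint solve A^T z = g yields the sensitivity to every parameter via dJ/dp_i = -z^T (dA/dp_i) u, instead of one forward solve per parameter. The matrices below are random stand-ins, not depletion operators.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_par = 6, 50
A = 5.0 * np.eye(n) + rng.standard_normal((n, n))
dA = 0.01 * rng.standard_normal((n_par, n, n))   # dA/dp_i for 50 hypothetical parameters
f = rng.standard_normal(n)
g = rng.standard_normal(n)

u = np.linalg.solve(A, f)                        # one forward solve
z = np.linalg.solve(A.T, g)                      # one adjoint solve
sens = -np.einsum('i,kij,j->k', z, dA, u)        # all 50 sensitivities at once

# Finite-difference check for a single parameter
eps = 1e-6
u7 = np.linalg.solve(A + eps * dA[7], f)
fd7 = (g @ u7 - g @ u) / eps
```

Two solves replace fifty-one, and the gap widens as the parameter count grows, which is the economics driving the adjoint approach for depletion problems.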

9 
Aerodynamic Uncertainty Quantification and Estimation of Uncertainty Quantified Performance of Unmanned Aircraft Using Non-Deterministic Simulations. Hale II, Lawrence Edmond, 24 January 2017 (has links)
This dissertation addresses model form uncertainty quantification, non-deterministic simulations, and sensitivity analysis of the results of these simulations, with a focus on application to the analysis of unmanned aircraft systems. The model form uncertainty quantification utilizes equation error to estimate the error between an identified model and flight test results. The errors are then related to aircraft states, and prediction intervals are calculated. This method for model form uncertainty quantification results in uncertainty bounds that vary with the aircraft state: narrower where consistent information has been collected and wider where data are not available. Non-deterministic simulations can then be performed to provide uncertainty-quantified estimates of the system performance. The model form uncertainties could be time varying, so multiple sampling methods were considered: a fixed uncertainty level and a rate-bounded variation in the uncertainty level. For analysis using a fixed uncertainty level, the corner points of the model form uncertainty were sampled, reducing computational time. The rate-bounded model better represents the uncertainty but requires significantly more simulations to sample it. The uncertainty-quantified performance estimates are compared to estimates based on flight tests to check the accuracy of the results.
Sensitivity analysis is performed on the uncertainty-quantified performance estimates to provide information on which of the model form uncertainties contribute most to the uncertainty in the performance estimates. The proposed method uses the results from the fixed-uncertainty-level analysis that utilizes the corner points of the model form uncertainties. The sensitivity of each parameter is estimated based on the corner values of all the other uncertain parameters. This results in a range of possible sensitivities for each parameter, dependent on the true values of the other parameters. / Ph. D.
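The state-dependent uncertainty bounds described above can be sketched with ordinary regression machinery: a linear model is identified from scattered "flight-test" data, and the 95% prediction-interval half-width grows away from the states where data were collected. All numbers are made up for illustration, and the equation-error formulation of the dissertation is replaced here by a plain least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = rng.uniform(0.0, 5.0, 300)               # densely sampled flight states
cl = 0.1 * alpha + 0.05 * rng.standard_normal(alpha.size)  # noisy "measurements"

X = np.column_stack([np.ones_like(alpha), alpha])
beta, *_ = np.linalg.lstsq(X, cl, rcond=None)    # identified linear model
s2 = np.sum((cl - X @ beta) ** 2) / (alpha.size - 2)
C = np.linalg.inv(X.T @ X)

def pi_halfwidth(a):
    """95% prediction-interval half-width at state a (normal approximation)."""
    x = np.array([1.0, a])
    return 1.96 * np.sqrt(s2 * (1.0 + x @ C @ x))

# Narrow inside the data (a = 2.5), wider when extrapolating (a = 12)
narrow, wide = pi_halfwidth(2.5), pi_halfwidth(12.0)
```

The leverage term x^T (X^T X)^{-1} x is what widens the bound away from the data, mirroring the abstract's "narrower where consistent information has been collected, wider where data are not available."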

10 
Validation and Uncertainty Quantification of Doublet Lattice Flight Loads using Flight Test Data. Olson, Nicholai Kenneth Keeney, 19 July 2018 (has links)
This paper presents a framework for tuning, validating, and quantifying uncertainties for flight loads. The flight loads are computed using a Nastran doublet-lattice model and are validated using measured data from a flight loads survey for a Cessna Model 525B business jet equipped with Tamarack® Aerospace Group's active winglet modification, ATLAS® (Active Technology Load Alleviation System). ATLAS® allows significant aerodynamic improvements to be realized by reducing loads below the values of the original, unmodified airplane. Flight loads are measured using calibrated strain gages and are used to tune and validate a Nastran doublet-lattice flight loads model. Methods used to tune and validate the model include uncertainty quantification of the Nastran model form and lead to an uncertainty-quantified model which can be used to estimate flight loads at any given flight condition within the operating envelope of the airplane. The methods presented herein improve the efficiency of the loads process and reduce conservatism in design loads through improved prediction techniques. Regression techniques and uncertainty quantification methods are presented to more accurately assess the complexities in comparing models to flight test results. / Master of Science
