  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem

Hetzler, Adam C 03 October 2013 (has links)
This research presents new physics-based methods to estimate predictive uncertainty stemming from uncertainty in the material opacities in radiative transfer computations of key quantities of interest (QOIs). New methods are needed because it is infeasible to apply standard uncertainty-propagation techniques to the O(10^5) uncertain opacities in a realistic simulation. The new approach toward uncertainty quantification applies the uncertainty analysis to the physical parameters in the underlying model used to calculate the opacities. This set of uncertain parameters is much smaller (O(10^2)) than the number of opacities. To further reduce the dimension of the set of parameters to be rigorously explored, we use additional screening applied at two different levels of the calculational hierarchy: first, physics-based screening eliminates a priori the physical parameters that the underlying physics models show to be unimportant; then, sensitivity analysis in simplified versions of the complex problem of interest screens out parameters that are not important to the QOIs. We employ a Bayesian Multivariate Adaptive Regression Spline (BMARS) emulator for this sensitivity analysis. The high dimension of the input space and the large number of samples test the efficacy of these methods on larger problems. Ultimately, we want to perform uncertainty quantification on the large, complex problem with the reduced set of parameters. Results of this research demonstrate that the QOIs for target problems agree for different parameter screening criteria and varying sample sizes. Since the QOIs agree, we have gained confidence in our results using the multiple screening criteria and sample sizes.
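The screening step described above can be sketched at toy scale. This is an illustration under simplified assumptions, not the dissertation's BMARS machinery: a hypothetical QoI depends strongly on only two of four stand-in physical parameters, and a crude binned main-effect measure identifies which inputs can be screened out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reduced model: a QoI driven by 4 stand-in physical
# parameters (the real problem has O(10^2)); only the first two matter.
def qoi_model(theta):
    return 3.0 * theta[:, 0] + 2.0 * theta[:, 1] ** 2 + 0.01 * theta[:, 2]

n = 5000
theta = rng.uniform(-1.0, 1.0, size=(n, 4))
y = qoi_model(theta)

# Crude main-effect screening: variance of the conditional mean of y,
# binned on one input at a time, as a fraction of total variance.
def main_effect(theta_i, y, bins=20):
    edges = np.quantile(theta_i, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, theta_i, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

effects = [main_effect(theta[:, i], y) for i in range(4)]
# Inputs 0 and 1 dominate; inputs 2 and 3 would be screened out.
```

A BMARS emulator plays the same role in the dissertation but also supplies an emulator for subsequent prediction; the binned estimator here is only the cheapest possible stand-in.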
2

Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty

Whiting, Nolan Wagner 19 July 2019 (has links)
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the model form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory. This simplified model was then assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. Generally, the MAVM performed best in cases with sparse data and/or large extrapolations, while Bayesian calibration outperformed the others when extensive experimental data covered the application domain.
/ Master of Science / Uncertainties often exist when conducting physical experiments, and whether this uncertainty arises from input uncertainty, uncertainty in the environmental conditions in which the experiment takes place, or numerical uncertainty in the model, it can be difficult to validate and compare the results of a model with those of an experiment. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be aleatory (randomness, described by a probability distribution over possible values) or epistemic (lack of knowledge, with inputs known only to lie within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics (CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory. This simplified model was then assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles.
Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. Also of interest was how well each method could predict the uncertainty in the simulation outside the region where experimental observations were made and model-form uncertainty could be observed.
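The area validation metric discussed above has a compact form for sample data: it is the area between the empirical CDF of the model outcomes and that of the experimental outcomes. A minimal sketch follows (the AVM idea only, not the thesis code, and without the MAVM's confidence-interval modification):

```python
import numpy as np

def area_validation_metric(model_samples, exp_samples):
    """Area between the empirical CDFs of model and experiment.

    Integrates |F_model(x) - F_exp(x)| exactly: both CDFs are step
    functions, so the integral is a sum over intervals between
    consecutive pooled sample points.
    """
    xs = np.sort(np.concatenate([model_samples, exp_samples]))
    # CDF values just to the right of each pooled point
    f_model = np.searchsorted(np.sort(model_samples), xs, side="right") / len(model_samples)
    f_exp = np.searchsorted(np.sort(exp_samples), xs, side="right") / len(exp_samples)
    return float(np.sum(np.abs(f_model - f_exp)[:-1] * np.diff(xs)))

a = np.array([0.0, 1.0, 2.0])
avm_same = area_validation_metric(a, a)         # 0.0 for identical samples
avm_shift = area_validation_metric(a, a + 0.5)  # a pure shift of 0.5 gives area 0.5
```

The metric has the units of the quantity being validated, which is what lets it be interpreted directly as a model-form uncertainty band.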
3

Numerical simulation of backward erosion piping in heterogeneous fields

Liang, Yue, Yeh, Tian-Chyi Jim, Wang, Yu-Li, Liu, Mingwei, Wang, Junjie, Hao, Yonghong 04 1900 (has links)
Backward erosion piping (BEP) is one of the major causes of seepage failures in levees. Seepage fields dictate the BEP behaviors and are influenced by the heterogeneity of soil properties. To investigate the effects of the heterogeneity on the seepage failures, we develop a numerical algorithm and conduct simulations to study BEP progressions in geologic media with spatially stochastic parameters. Specifically, the void ratio e, the hydraulic conductivity k, and the ratio of the particle contents r of the media are represented as stochastic variables. They are characterized by means and variances, spatial correlation structures, and the cross correlation between variables. Results of the simulations reveal that the heterogeneity accelerates the development of preferential flow paths, which profoundly increase the likelihood of seepage failures. To account for unknown heterogeneity, we define the probability of seepage instability (PI) to evaluate the failure potential of a given site. Using Monte Carlo simulation (MCS), we demonstrate that the PI value is significantly influenced by the mean and the variance of ln k and its spatial correlation scales. The other parameters, such as the means and variances of e and r and their cross correlations, have only minor impacts. Based on PI analyses, we introduce a risk rating system to classify the field into different regions according to risk levels. This rating system is useful for seepage failure prevention and assists decision-making when BEP occurs.
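The probability-of-instability idea can be illustrated with a deliberately simplified one-dimensional sketch. Everything here is hypothetical (the paper's 2D seepage solver is replaced by series flow through a line of soil cells), but it shows the mechanism: heterogeneity concentrates head loss in low-conductivity cells, and PI is the failure fraction over Monte Carlo realizations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed statistics and geometry (all numbers hypothetical).
n_real, n_cells = 2000, 50
mean_lnk, std_lnk = -10.0, 1.0   # ln-conductivity mean and std dev
head_drop = 5.0                  # total head difference across the domain
i_crit = 0.5                     # assumed critical hydraulic gradient

ln_k = rng.normal(mean_lnk, std_lnk, size=(n_real, n_cells))
k = np.exp(ln_k)

# Series flow through unit-length cells: flux q = head_drop / sum(1/k_i),
# and the local gradient in cell i is q / k_i.
q = head_drop / np.sum(1.0 / k, axis=1)
grad = q[:, None] / k

# A realization "fails" if any cell's gradient exceeds the critical value.
# A homogeneous field would have gradient head_drop / n_cells = 0.1
# everywhere (stable); heterogeneity pushes low-k cells past i_crit.
pi_estimate = np.mean(np.any(grad > i_crit, axis=1))
```

Raising the variance of ln k raises PI in this sketch, mirroring the paper's finding that the mean and variance of ln k dominate the failure potential.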
4

Continuous reservoir simulation incorporating uncertainty quantification and real-time data

Holmes, Jay Cuthbert 15 May 2009 (has links)
A significant body of work has demonstrated both the promise and difficulty of quantifying uncertainty in reservoir simulation forecasts. It is generally accepted that accurate and complete quantification of uncertainty should lead to better decision making and greater profitability. Many of the techniques presented in past work attempt to quantify uncertainty without sampling the full parameter space, saving on the number of simulation runs but inherently limiting and biasing the uncertainty quantification in the resulting forecasts. In addition, past work has generally looked at uncertainty in synthetic models and does not address the practical issues of quantifying uncertainty in an actual field. Both of these issues must be addressed in order to rigorously quantify uncertainty in practice. In this study a new approach to reservoir simulation is taken whereby the traditional one-time simulation study is replaced with a continuous process potentially spanning the life of the reservoir. In this process, reservoir models are generated and run 24 hours a day, seven days a week, allowing many more runs than previously possible and yielding a more thorough exploration of possible reservoir descriptions. In turn, more runs enable better estimates of uncertainty in the resulting forecasts. A key technology that allows this process to run continuously with little human interaction is real-time production and pressure data, which can be integrated into runs automatically. Two tests of this continuous simulation process were conducted. The first test was conducted on the Production with Uncertainty Quantification (PUNQ) synthetic reservoir. Comparison of our results with previous studies shows that the continuous approach gives consistent and reasonable estimates of uncertainty. The second study was conducted in real time on a live field. This study demonstrates the continuous simulation process and shows that it is feasible and practical for real world applications.
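The continuous loop can be caricatured with a toy decline-curve "reservoir" in place of a simulator (all model forms and numbers here are hypothetical, not the dissertation's workflow): candidate models are generated indefinitely, filtered against the production data observed so far, and the surviving ensemble supplies the forecast uncertainty.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical "truth": exponential decline with noisy monthly rate data.
t_obs = np.arange(1.0, 13.0)                 # months with observed rates
true_qi, true_d = 100.0, 0.08                # hidden true parameters
q_obs = true_qi * np.exp(-true_d * t_obs) + rng.normal(0.0, 1.0, t_obs.size)

accepted = []
for _ in range(20000):                       # stand-in for 24/7 model runs
    qi = rng.uniform(50.0, 150.0)            # uncertain initial rate
    d = rng.uniform(0.01, 0.2)               # uncertain decline constant
    q_sim = qi * np.exp(-d * t_obs)
    # Keep only models consistent with the data observed so far.
    if np.sqrt(np.mean((q_sim - q_obs) ** 2)) < 2.0:
        accepted.append(qi * np.exp(-d * 24.0))   # two-year rate forecast

forecast = np.array(accepted)
p10, p50, p90 = np.percentile(forecast, [10, 50, 90])
```

As new months of data arrive, the filter tightens and the P10/P90 band narrows; running the loop continuously is what turns a one-time study into a living forecast.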
5

The Method of Manufactured Universes for Testing Uncertainty Quantification Methods

Stripling, Hayes Franklin December 2010 (has links)
The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which "experimental" data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented for this research manufactures a particle-transport "universe," models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new "experiments" within the manufactured reality. To test further the responses of these UQ methods, we conduct exercises with "experimental" replicates, "measurement" error, and choices of physical inputs that reduce the accuracy of the diffusion model's approximation of our manufactured laws. Our first application of MMU was rich in areas for exploration and highly informative. In the case of the Gaussian process code, we found that the fundamental statistical formulation was not appropriate for our functional data, but that the code allows a knowledgeable user to vary parameters within this formulation to tailor its behavior for a specific problem. The Bayesian MARS formulation was a more natural emulator given our manufactured laws, and we used the MMU framework to develop further a calibration method and to characterize the diffusion model discrepancy.
Overall, we conclude that an MMU exercise with a properly designed universe (that is, one that is an adequate representation of some real-world problem) will provide the modeler with an added understanding of the interaction between a given UQ method and his/her more complex problem of interest. The modeler can then apply this added understanding and make more informed predictive statements.
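The MMU pattern above is easy to instantiate at toy scale. In this sketch (illustrative only, not the dissertation's transport universe) the manufactured law is quadratic, "experiments" sample it with noise, the imperfect model is a straight line, and the assessment asks whether the model's 2-sigma band actually covers the manufactured truth:

```python
import numpy as np

rng = np.random.default_rng(1)

# The manufactured "reality" (hypothetical law, known only to the framework).
def manufactured_law(x):
    return 1.0 + 0.5 * x + 0.3 * x**2

x_exp = np.linspace(0.0, 2.0, 10)
noise = 0.03                                  # assumed measurement error
y_exp = manufactured_law(x_exp) + rng.normal(0.0, noise, x_exp.size)

# Imperfect model: a least-squares straight line, with a residual-based
# predictive standard deviation standing in for a full UQ method.
A = np.vstack([np.ones_like(x_exp), x_exp]).T
coef, *_ = np.linalg.lstsq(A, y_exp, rcond=None)
resid_sd = np.std(y_exp - A @ coef, ddof=2)

# Assessment step of MMU: does the 2-sigma band cover the manufactured
# truth at new "experiment" locations? Model discrepancy inflates
# resid_sd well above the pure measurement noise.
x_new = np.linspace(0.0, 2.0, 50)
pred = coef[0] + coef[1] * x_new
coverage = np.mean(np.abs(manufactured_law(x_new) - pred) < 2.0 * resid_sd)
```

Because the universe is manufactured, `coverage` can be computed exactly rather than argued for, which is precisely the leverage the framework offers.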
6

Error analysis for radiation transport

Tencer, John Thomas 18 February 2014 (has links)
All relevant sources of error in the numerical solution of the radiative transport equation are considered. Common spatial discretization methods are discussed for completeness. The application of these methods to the radiative transport equation is not substantially different from that for any other partial differential equation. Several of the most prevalent angular approximations within the heat transfer community are implemented and compared. Three model problems are proposed. The relative accuracy of each of the angular approximations is assessed for a range of optical thickness and scattering albedo. The model problems represent a range of application spaces. The quantified comparison of these approximations on the basis of accuracy over such a wide parameter space is one of the contributions of this work. The major original contribution of this work involves the treatment of errors associated with the energy dependence of intensity. The full spectrum correlated-k distribution (FSK) method has received recent attention as a good compromise between computational expense and accuracy. Two approaches are taken toward quantifying the error associated with the FSK method. The Multi-Source Full-Spectrum k-Distribution (MSFSK) method makes use of the convenient property that the FSK method is exact for homogeneous media. It involves a line-by-line solution on a coarse grid and a number of k-distribution solutions on subdomains to effectively increase the grid resolution. This yields highly accurate solutions on fine grids and a known rate of convergence as the number of subdomains increases. The stochastic full spectrum k-distribution (SFSK) method is a more general approach to estimating the error in k-distribution solutions. The FSK method relies on a spectral reordering and scaling which greatly simplify the spectral dependence of the absorption coefficient. This reordering is not necessarily consistent across the entire domain, which results in errors.
The SFSK method involves treating the absorption line blackbody distribution function not as deterministic but rather as a stochastic process. The mean, covariance, and correlation structure are all fit empirically to data from a high resolution spectral database. The standard deviation of the heat flux prediction is found to be a good error estimator for the k-distribution method.
7

On goal-oriented error estimation and adaptivity for nonlinear systems with uncertain data and application to flow problems

Bryant, Corey Michael 09 February 2015 (has links)
The objective of this work is to develop a posteriori error estimates and adaptive strategies for the numerical solution of nonlinear systems of partial differential equations with uncertain data. Areas of application cover problems in fluid mechanics, including a Bayesian model selection study of turbulence comparing different uncertainty models. Accounting for uncertainties in model parameters may significantly increase the computational time when simulating complex problems. The premise is that using error estimates and adaptively refining the solution process can reduce the cost of such simulations while preserving their accuracy within some tolerance. New insights for goal-oriented error estimation for deterministic nonlinear problems are first presented. Linearization of the adjoint problems and quantities of interest introduces higher-order terms in the error representation that are generally neglected. Their effects on goal-oriented adaptive strategies are investigated in detail here. Contributions on that subject include extensions of well-known theoretical results for linear problems to the nonlinear setting, computational studies in support of these results, and an extensive comparative study of goal-oriented adaptive schemes that do, and do not, include the higher-order terms. Approaches for goal-oriented error estimation for PDEs with uncertain coefficients have already been presented, but lack the capability of distinguishing between the different sources of error. A novel approach is proposed here that decomposes the error estimate into contributions from the physical discretization and the uncertainty approximation. Theoretical bounds are proven and numerical examples are presented to verify that the approach identifies the predominant source of the error in a surrogate model. Adaptive strategies that use this error decomposition to refine the approximation space accordingly are designed and tested.
All methodologies are demonstrated on benchmark flow problems: the Stokes lid-driven cavity, the 1D Burgers equation, and 2D incompressible flows at low Reynolds numbers. The procedure is also applied to an uncertainty quantification study of RANS turbulence models in channel flows. Adaptive surrogate models are constructed to make parameter uncertainty propagation more efficient. Using surrogate models and adaptivity in a Bayesian model selection procedure, it is shown that significant computational savings can be gained over the full RANS model while maintaining similar accuracy in the predictions.
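The adjoint-weighted-residual idea behind goal-oriented estimation can be shown on a small linear-algebra stand-in (not the flow solvers above): for A u = f with quantity of interest q = gᵀu, one adjoint solve converts the residual of an inexact solution into an estimate of the QoI error. In the linear case the estimate is exact; the higher-order terms discussed above are exactly what this identity misses once the problem or the QoI is nonlinear.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear system and linear QoI (all data random for illustration).
n = 20
A = 2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
f = rng.standard_normal(n)
g = rng.standard_normal(n)

u = np.linalg.solve(A, f)                      # reference solution
u_h = u + 1e-3 * rng.standard_normal(n)        # inexact "discrete" solution

residual = f - A @ u_h
lam = np.linalg.solve(A.T, g)                  # one adjoint solve for the QoI
error_estimate = lam @ residual                # estimates q(u) - q(u_h)

true_error = g @ (u - u_h)
# Here lam @ residual = lam @ A @ (u - u_h) = g @ (u - u_h), so the
# adjoint-weighted residual recovers the QoI error up to roundoff.
```

In a PDE setting the residual is localized element by element, turning this global identity into the refinement indicators used by the adaptive strategies.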
8

Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations

Stripling, Hayes Franklin 16 December 2013 (has links)
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable but also the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
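The cost argument for the adjoint approach (one adjoint solve yields sensitivities to every uncertain input) can be demonstrated on a toy parameterized linear system. This is a hypothetical stand-in, not the PDT implementation or the depletion equations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy system A(p) u = f with A(p) = A0 + sum_k p_k B_k and QoI q = g^T u.
n, n_params = 15, 6
A0 = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = 0.05 * rng.standard_normal((n_params, n, n))
f = rng.standard_normal(n)
g = rng.standard_normal(n)
p = 0.1 * rng.standard_normal(n_params)

def solve_u(p):
    return np.linalg.solve(A0 + np.tensordot(p, B, axes=1), f)

u = solve_u(p)
lam = np.linalg.solve((A0 + np.tensordot(p, B, axes=1)).T, g)

# Differentiating A(p) u = f at fixed f gives dq/dp_k = -lam^T B_k u:
# all n_params sensitivities from ONE adjoint solve.
adjoint_grad = -np.einsum("i,kij,j->k", lam, B, u)

# Finite differences need one extra forward solve per parameter.
eps = 1e-6
fd_grad = np.array([
    (g @ solve_u(p + eps * np.eye(n_params)[k]) - g @ u) / eps
    for k in range(n_params)
])
```

With thousands of uncertain cross sections instead of six toy parameters, the forward-solve count is what makes the finite-difference route intractable and the adjoint route attractive.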
9

Quantification of Uncertainty in the Modeling of Creep in RF MEMS Devices

Kolis, Peter 29 July 2020 (has links)
Permanent deformation in the form of creep is added to a one-dimensional model of a radio-frequency micro-electro-mechanical system (RF-MEMS). Due to uncertainty in the material property values, calibration under uncertainty is carried out through comparison to experiments in order to determine appropriate boundary conditions and material property values. Further uncertainty in the input parameters, in the form of probability distribution functions of geometric device properties, is included in simulations and propagated to the device performance as a function of time. The effect of realistic power-law grain size distributions on the creep response of thin RF-MEMS films is examined through the use of a finite volume software suite designed for the computational modelling of MEMS. Using a realistic height-dependent power-law distribution of grain sizes in the film, in place of a uniform grain size, increases both the simulated creep rate and the uncertainty in its value. This effect results primarily from the difference between the homogeneous and non-homogeneous grain-size models; realistic variations in the grain-size distribution for a given film have a smaller effect. Finally, in order to incorporate variations in thickness in manufactured devices, variation in the thickness of the membrane across the length and width is considered in a 3D finite element model, and variation of thickness along the length is added to the earlier one-dimensional RF-MEMS model. Estimated uncertainty in the film profile is propagated to selected device performance metrics. The effect of film thickness variation along the length of the film is seen to be greater than the effect of variation across the width.
10

Framework for Estimating Performance and Associated Uncertainty of Modified Aircraft Configurations

Denham, Casey Leigh-Anne 22 June 2022 (has links)
Flight testing has been the historical standard for determining aircraft airworthiness; however, increases in the cost of flight testing and the accuracy of inexpensive CFD promote certification by analysis to reduce or replace flight testing. A framework is introduced to predict the performance in the special case of a modification to an existing, previously certified aircraft. This framework uses a combination of existing flight test or high-fidelity data of the original aircraft as well as lower-fidelity data of the original and modified configurations. Two methods are presented which estimate the model form uncertainty of the modified configuration, which is then used to conduct non-deterministic simulations. The framework is applied to an example aircraft system with simulated flight test data to demonstrate the ability to predict the performance and associated uncertainty of modified aircraft configurations. However, it is important that the models and methods used are applicable and accurate throughout the intended use domain. The factors and limitations of the framework are explored to determine its range of applicability. The effects of these factors on the performance and uncertainty results are demonstrated using the example aircraft system. The framework is then applied to NASA's X-57 Maxwell and each of its modifications. The estimated performance and associated uncertainties are then compared to the airworthiness criteria to evaluate the potential of the framework as a component of the certification-by-analysis process. / Doctor of Philosophy / Aircraft are required to undergo an airworthiness certification process to demonstrate the capability for safe and controlled flight. This has historically been satisfied by flight testing, but there is a desire to use computational analysis and simulations to reduce the cost and time required.
For aircraft which are based on an aircraft which has already been certified, but contain minor changes, computational tools have the potential to provide a large benefit. This research proposes a framework to estimate the flight performance of these modified aircraft using inexpensive computational or ground based methods and without requiring expensive flight testing. The framework is then evaluated to ensure that it provides accurate results and is suitable for use as a supplement to the airworthiness certification process.
