1

Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty

Whiting, Nolan Wagner 19 July 2019
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the model form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for applying the MAVM to account for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed against the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil at varying angles of attack and flap deflection angles. Each of these validation/calibration approaches was assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. In general, the MAVM performed best in cases with sparse data and/or large extrapolations, while Bayesian calibration outperformed the other approaches where extensive experimental data covered the application domain. / Master of Science / Uncertainty often exists when conducting physical experiments. Whether it arises from input uncertainty, uncertainty in the environmental conditions in which the experiment takes place, or numerical uncertainty in the model, it can be difficult to validate and compare the results of a model with those of an experiment. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be aleatory (randomness to which a probability distribution for the likelihood of drawing values can be applied) or epistemic (lack of knowledge, with inputs known only to lie within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for applying the MAVM to account for small experimental sample sizes.
To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics (CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed against the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil at varying angles of attack and flap deflection angles. Each of these validation/calibration approaches was assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. Also of interest was how well each method could predict the uncertainty in the simulation outcomes outside of the region in which experimental observations were made and model form uncertainty could be observed.
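As context for the first approach above: the area validation metric measures the disagreement between simulation and experiment as the area between their empirical cumulative distribution functions. The following is a minimal Python sketch of that idea, not the code used in the thesis; the sample data are made up, with a deliberately small experimental sample like the cases the MAVM targets.

```python
import numpy as np

def area_validation_metric(sim_samples, exp_samples):
    """Area between the empirical CDFs of simulation and experimental outcomes."""
    sim = np.sort(np.asarray(sim_samples, dtype=float))
    exp = np.sort(np.asarray(exp_samples, dtype=float))
    grid = np.sort(np.concatenate([sim, exp]))
    # Right-continuous empirical CDFs evaluated on the merged grid of sample points.
    F_sim = np.searchsorted(sim, grid, side="right") / sim.size
    F_exp = np.searchsorted(exp, grid, side="right") / exp.size
    # Both CDFs are piecewise constant, so the area is an exact step-wise sum.
    return float(np.sum(np.abs(F_sim[:-1] - F_exp[:-1]) * np.diff(grid)))

rng = np.random.default_rng(0)
d = area_validation_metric(rng.normal(0.0, 1.0, 2000),   # many simulation outcomes
                           rng.normal(0.3, 1.2, 10))     # few experimental replicates
print(f"area validation metric: {d:.3f}")
```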
2

Numerical simulation of backward erosion piping in heterogeneous fields

Liang, Yue, Yeh, Tian-Chyi Jim, Wang, Yu-Li, Liu, Mingwei, Wang, Junjie, Hao, Yonghong 04 1900
Backward erosion piping (BEP) is one of the major causes of seepage failures in levees. Seepage fields dictate BEP behavior and are influenced by the heterogeneity of soil properties. To investigate the effects of heterogeneity on seepage failures, we develop a numerical algorithm and conduct simulations to study BEP progression in geologic media with spatially stochastic parameters. Specifically, the void ratio e, the hydraulic conductivity k, and the particle-content ratio r of the media are represented as stochastic variables, characterized by their means and variances, spatial correlation structures, and the cross correlations between variables. Results of the simulations reveal that heterogeneity accelerates the development of preferential flow paths, which profoundly increases the likelihood of seepage failure. To account for unknown heterogeneity, we define the probability of seepage instability (PI) to evaluate the failure potential of a given site. Using Monte-Carlo simulation (MCS), we demonstrate that the PI value is significantly influenced by the mean and variance of ln k and by its spatial correlation scales, whereas the other parameters, such as the means and variances of e and r and their cross correlations, have minor impacts. Based on PI analyses, we introduce a risk rating system that classifies the field into regions of different risk levels. This rating system is useful for preventing seepage failures and assists decision making when BEP occurs.
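To illustrate the Monte-Carlo estimation of PI described above, the sketch below replaces the paper's BEP simulator with a drastically simplified 1D steady Darcy column: each realization draws a spatially correlated ln k field, and failure is declared when the exit gradient exceeds a critical value. All parameter values and the failure criterion are illustrative assumptions, not the paper's.

```python
import numpy as np

def probability_of_instability(n_mc=2000, n=50, dx=0.2, head_drop=2.0,
                               mean_lnk=-4.0, var_lnk=1.0, corr_len=2.0,
                               i_crit=0.9, seed=0):
    """Monte-Carlo PI estimate for a 1D Darcy column with a random ln k field."""
    rng = np.random.default_rng(seed)
    x = np.arange(n) * dx
    # Exponential covariance for ln k; factor once, reuse for every realization.
    cov = var_lnk * np.exp(-np.abs(np.subtract.outer(x, x)) / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    failures = 0
    for _ in range(n_mc):
        k = np.exp(mean_lnk + L @ rng.standard_normal(n))
        q = head_drop / np.sum(dx / k)      # steady 1D Darcy flux (series resistance)
        if q / k[-1] > i_crit:              # exit gradient at the downstream face
            failures += 1
    return failures / n_mc

print(f"PI = {probability_of_instability():.3f}")
```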
3

Continuous reservoir simulation incorporating uncertainty quantification and real-time data

Holmes, Jay Cuthbert 15 May 2009
A significant body of work has demonstrated both the promise and the difficulty of quantifying uncertainty in reservoir simulation forecasts. It is generally accepted that accurate and complete quantification of uncertainty should lead to better decision making and greater profitability. Many of the techniques presented in past work attempt to quantify uncertainty without sampling the full parameter space, saving on the number of simulation runs but inherently limiting and biasing the uncertainty quantification in the resulting forecasts. In addition, past work has generally examined uncertainty in synthetic models and does not address the practical issues of quantifying uncertainty in an actual field. Both of these issues must be addressed in order to rigorously quantify uncertainty in practice. In this study a new approach to reservoir simulation is taken, whereby the traditional one-time simulation study is replaced with a continuous process potentially spanning the life of the reservoir. In this process, reservoir models are generated and run 24 hours a day, seven days a week, allowing many more runs than previously possible and yielding a more thorough exploration of possible reservoir descriptions. In turn, more runs enable better estimates of uncertainty in the resulting forecasts. A key technology that allows this process to run continuously with little human interaction is real-time production and pressure data, which can be integrated into runs automatically. Two tests of the continuous simulation process were conducted. The first was conducted on the Production with Uncertainty Quantification (PUNQ) synthetic reservoir; comparison of our results with previous studies shows that the continuous approach gives consistent and reasonable estimates of uncertainty. The second was conducted in real time on a live field, demonstrating the continuous simulation process and showing that it is feasible and practical for real-world applications.
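A minimal sketch of such a continuous simulation loop is given below, with a hyperbolic decline curve standing in for the reservoir simulator and a synthetic feed standing in for the real-time data stream. Both stand-ins are hypothetical; the study itself ran full reservoir models.

```python
import numpy as np

def decline_model(params, t):
    """Stand-in 'simulator': hyperbolic decline q(t) = qi / (1 + b*di*t)^(1/b)."""
    qi, di, b = params
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def fetch_observations(t_now, rng):
    """Stand-in for the real-time data feed: noisy rates from a hidden truth."""
    t = np.arange(t_now + 1, dtype=float)
    return t, decline_model((1000.0, 0.08, 0.7), t) * rng.lognormal(0.0, 0.05, t.size)

rng = np.random.default_rng(0)
ensemble = []                                   # best (misfit, params) pairs so far
for cycle in range(20000):                      # in the field this loop never stops
    t_obs, q_obs = fetch_observations(24 + cycle // 5000, rng)  # data window grows
    params = (rng.uniform(300, 3000), rng.uniform(0.01, 0.3), rng.uniform(0.1, 1.5))
    misfit = np.mean((np.log(decline_model(params, t_obs)) - np.log(q_obs)) ** 2)
    ensemble = sorted(ensemble + [(misfit, params)])[:200]      # keep best matches

t_fc = np.arange(0.0, 120.0)                    # forecast from the retained ensemble
p10, p50, p90 = np.percentile(
    [decline_model(p, t_fc) for _, p in ensemble], [10, 50, 90], axis=0)
print(f"rate at month 120: P10 {p10[-1]:.0f}, P50 {p50[-1]:.0f}, P90 {p90[-1]:.0f}")
```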
4

Error analysis for radiation transport

Tencer, John Thomas 18 February 2014
All relevant sources of error in the numerical solution of the radiative transport equation are considered. Common spatial discretization methods are discussed for completeness; the application of these methods to the radiative transport equation is not substantially different from their application to any other partial differential equation. Several of the most prevalent angular approximations within the heat transfer community are implemented and compared. Three model problems are proposed, representing a range of application spaces, and the relative accuracy of each angular approximation is assessed over a range of optical thickness and scattering albedo. The quantified comparison of these approximations on the basis of accuracy over such a wide parameter space is one of the contributions of this work. The major original contribution of this work involves the treatment of errors associated with the energy dependence of intensity. The full spectrum correlated-k distribution (FSK) method has received recent attention as a good compromise between computational expense and accuracy. Two approaches are taken towards quantifying the error associated with the FSK method. The Multi-Source Full Spectrum k-Distribution (MSFSK) method makes use of the convenient property that the FSK method is exact for homogeneous media. It involves a line-by-line solution on a coarse grid and a number of k-distribution solutions on subdomains to effectively increase the grid resolution. This yields highly accurate solutions on fine grids and a known rate of convergence as the number of subdomains increases. The stochastic full spectrum k-distribution (SFSK) method is a more general approach to estimating the error in k-distribution solutions. The FSK method relies on a spectral reordering and scaling which greatly simplify the spectral dependence of the absorption coefficient; this reordering is not necessarily consistent across the entire domain, which results in errors. The SFSK method treats the absorption-line blackbody distribution function not as deterministic but as a stochastic process, whose mean, covariance, and correlation structure are fit empirically to data from a high-resolution spectral database. The standard deviation of the heat flux prediction is found to be a good error estimator for the k-distribution method.
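The k-distribution reordering at the heart of the FSK method, and the exactness-for-homogeneous-media property that the MSFSK method exploits, can be demonstrated with a toy spectrum. The absorption coefficient below is made up for illustration, not taken from a spectral database.

```python
import numpy as np

eta = np.linspace(0.0, 1.0, 20001)                   # normalized spectral coordinate
kappa = 1.0 + 0.9 * np.sin(40.0 * np.pi * eta) ** 2  # toy, rapidly varying spectrum

# Reorder the erratic spectrum into a smooth, monotonic k(g) distribution.
k = np.sort(kappa)

# Spectrally integrated transmissivity of a homogeneous column of length X.
# Reordering merely permutes the integrand samples, so the integral is unchanged:
# this is the exactness property for homogeneous media cited above.
X = 2.0
tau_lbl = np.mean(np.exp(-kappa * X))    # "line-by-line" integral over eta
tau_kd = np.mean(np.exp(-k * X))         # k-distribution integral over g
print(tau_lbl, tau_kd)                   # agree to machine precision
```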
5

On goal-oriented error estimation and adaptivity for nonlinear systems with uncertain data and application to flow problems

Bryant, Corey Michael 09 February 2015
The objective of this work is to develop a posteriori error estimates and adaptive strategies for the numerical solution of nonlinear systems of partial differential equations with uncertain data. Areas of application cover problems in fluid mechanics, including a Bayesian model selection study of turbulence models comparing different uncertainty models. Accounting for uncertainties in model parameters may significantly increase the computational time when simulating complex problems. The premise is that using error estimates and adaptively refining the solution process can reduce the cost of such simulations while preserving their accuracy within some tolerance. New insights for goal-oriented error estimation for deterministic nonlinear problems are presented first. Linearization of the adjoint problems and quantities of interest introduces higher-order terms in the error representation that are generally neglected. Their effects on goal-oriented adaptive strategies are investigated in detail here. Contributions on that subject include extensions of well-known theoretical results for linear problems to the nonlinear setting, computational studies in support of these results, and an extensive comparative study of goal-oriented adaptive schemes that do, and do not, include the higher-order terms. Approaches for goal-oriented error estimation for PDEs with uncertain coefficients have already been presented, but they lack the capability of distinguishing between the different sources of error. A novel approach is proposed here that decomposes the error estimate into contributions from the physical discretization and from the uncertainty approximation. Theoretical bounds are proven and numerical examples are presented to verify that the approach identifies the predominant source of the error in a surrogate model. Adaptive strategies that use this error decomposition and refine the approximation space accordingly are designed and tested. All methodologies are demonstrated on benchmark flow problems: the Stokes lid-driven cavity, the 1D Burgers' equation, and 2D incompressible flows at low Reynolds numbers. The procedure is also applied to an uncertainty quantification study of RANS turbulence models in channel flows. Adaptive surrogate models are constructed to make parameter uncertainty propagation more efficient. Using surrogate models and adaptivity in a Bayesian model selection procedure, it is shown that significant computational savings can be gained over the full RANS model while maintaining similar accuracy in the predictions.
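The linear-problem baseline that this work extends can be seen in a few lines: for a linear system and a linear quantity of interest, the adjoint-weighted residual recovers the QoI error exactly, so the neglected higher-order terms arise only in the nonlinear setting. The 1D Poisson setup below is an assumed illustration, not taken from the dissertation.

```python
import numpy as np

def poisson_matrix(n):
    """1D Poisson stiffness matrix (Dirichlet BCs) on n interior nodes, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return A, h

# Fine "truth" discretization of -u'' = f with QoI q(u) ~ integral of u
nf = 255
Af, hf = poisson_matrix(nf)
xf = np.linspace(hf, 1.0 - hf, nf)
f = np.sin(np.pi * xf)
g = np.full(nf, hf)                       # discrete integration functional

uf = np.linalg.solve(Af, f)               # fine ("exact") solution
nc = 31                                   # coarse solution, injected to the fine grid
Ac, hc = poisson_matrix(nc)
xc = np.linspace(hc, 1.0 - hc, nc)
uc = np.interp(xf, xc, np.linalg.solve(Ac, np.sin(np.pi * xc)))

# Dual-weighted residual: solve one adjoint problem, weight the coarse residual by it.
z = np.linalg.solve(Af.T, g)
eta = (f - Af @ uc) @ z
print(f"true QoI error {g @ (uf - uc):+.3e}   DWR estimate {eta:+.3e}")
```

Because the operator and the QoI are both linear here, the two printed numbers agree to roundoff; the dissertation's contribution concerns what happens when this identity holds only up to linearization error.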
6

Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations

Stripling, Hayes Franklin 16 December 2013
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and the neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and the myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable but also the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
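The forward/adjoint cost structure described above can be illustrated on a toy depletion chain: one forward sweep plus one adjoint sweep yields the sensitivity of a final-time quantity of interest to every uncertain parameter at once. The three-nuclide chain and backward-Euler discretization below are illustrative assumptions, not the PDT implementation.

```python
import numpy as np

# Toy 3-nuclide decay chain du/dt = A(p) u with A(p) = p[0]*B1 + p[1]*B2,
# where p holds the uncertain decay constants (all values illustrative).
B1 = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])  # 1 -> 2
B2 = np.array([[0.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 1.0, 0.0]])  # 2 -> 3
p = np.array([0.3, 0.1])
u0 = np.array([1.0, 0.0, 0.0])
g = np.array([0.0, 0.0, 1.0])          # QoI: final inventory of nuclide 3
dt, nsteps = 0.1, 200

def forward(p):
    """Backward-Euler forward solve; returns all states and the step matrix."""
    M = np.linalg.inv(np.eye(3) - dt * (p[0] * B1 + p[1] * B2))
    us = [u0]
    for _ in range(nsteps):
        us.append(M @ us[-1])
    return us, M

us, M = forward(p)

# One backward (adjoint) sweep yields dJ/dp for all parameters simultaneously:
# w_N = g, w_k = M^T w_{k+1}, and dJ/dp_m = dt * sum_k w_k . (B_m u_{k+1}).
w = g.copy()
grad = np.zeros(2)
for k in range(nsteps - 1, -1, -1):
    w = M.T @ w
    grad += dt * np.array([w @ (B1 @ us[k + 1]), w @ (B2 @ us[k + 1])])

# Finite-difference check of the adjoint gradient
eps = 1e-6
fd = np.array([(g @ forward(p + eps * e)[0][-1] - g @ us[-1]) / eps
               for e in np.eye(2)])
print("adjoint gradient:", grad, "  finite difference:", fd)
```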
7

Quantification of Uncertainty in the Modeling of Creep in RF MEMS Devices

Peter Kolis 29 July 2020
Permanent deformation in the form of creep is added to a one-dimensional model of a radio-frequency micro-electro-mechanical system (RF-MEMS). Because of uncertainty in the material property values, calibration under uncertainty is carried out through comparison to experiments in order to determine appropriate boundary conditions and material property values. Further uncertainty in the input parameters, in the form of probability distribution functions of geometric device properties, is included in simulations and propagated to the device performance as a function of time. The effect of realistic power-law grain size distributions on the creep response of thin RF-MEMS films is examined using a finite volume software suite designed for the computational modeling of MEMS. Using a realistic height-dependent power-law distribution of grain sizes in the film in place of a uniform grain size increases both the simulated creep rate and the uncertainty in its value. This effect stems from the difference between the homogeneous and non-homogeneous grain-size models; realistic variations in the grain size distribution for a given film have a smaller effect. Finally, in order to incorporate thickness variations in manufactured devices, variation of the membrane thickness across the length and width is considered in a 3D finite element model, and variation of thickness along the length is added to the earlier one-dimensional RF-MEMS model. Estimated uncertainty in the film profile is propagated to selected device performance metrics. The effect of film thickness variation along the length of the film is seen to be greater than the effect of variation across the width.
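As an illustration of propagating geometric input PDFs to a device performance metric, the sketch below pushes assumed thickness, width, and gap distributions through a textbook lumped model of the pull-in voltage of a fixed-fixed beam. All distributions and dimensions are hypothetical, and the lumped model is far simpler than the finite-volume and finite-element models used in the thesis.

```python
import numpy as np

eps0 = 8.854e-12                           # vacuum permittivity [F/m]
rng = np.random.default_rng(42)
n = 100_000

# Hypothetical input PDFs for a fixed-fixed RF-MEMS bridge (values illustrative only)
t = rng.normal(1.0e-6, 0.05e-6, n)         # film thickness [m], ~5% scatter
w = rng.normal(100e-6, 2e-6, n)            # beam width [m]
g0 = rng.normal(3.0e-6, 0.1e-6, n)         # initial gap [m]
L, E = 300e-6, 70e9                        # beam length [m], Young's modulus [Pa]

# Lumped spring constant of a fixed-fixed beam loaded at midspan: k = 192*E*I/L^3
k = 192.0 * E * (w * t**3 / 12.0) / L**3
# Textbook parallel-plate pull-in voltage: V_pi = sqrt(8*k*g0^3 / (27*eps0*A))
V_pi = np.sqrt(8.0 * k * g0**3 / (27.0 * eps0 * (w * L)))
print(f"pull-in voltage: mean {V_pi.mean():.2f} V, std {V_pi.std():.2f} V")
```

Note how the cubic dependence of stiffness on thickness makes the thickness scatter dominate the output spread, echoing the thesis finding that thickness variation along the length matters most.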
8

CFD Analyses of Air-Ingress Accident for VHTRs

Ham, Tae Kyu 30 December 2014
No description available.
9

A Small-Perturbation Automatic-Differentiation (SPAD) Method for Evaluating Uncertainty in Computational Electromagnetics

Gilbert, Michael Stephen 20 December 2012
No description available.
10

Inference of Constitutive Relations and Uncertainty Quantification in Electrochemistry

Krishnaswamy Sethurajan, Athinthra 04 1900
This study has two parts. In the first part we develop a computational approach to the solution of an inverse modelling problem concerning the material properties of electrolytes used in Lithium-ion batteries. The dependence of the diffusion coefficient and the transference number on the concentration of Lithium ions is reconstructed based on concentration data obtained from an in-situ NMR imaging experiment. This experiment is modelled by a system of 1D time-dependent partial differential equations (PDEs) describing the evolution of the concentration of Lithium ions with prescribed initial concentration and fluxes at the boundary. The material properties that appear in this model are reconstructed by solving a variational optimization problem in which the least-squares error between the experimental and simulated concentration values is minimized. The uncertainty of the reconstruction is characterized by assuming that the material properties are random variables; their probability distributions are estimated using a novel combination of a Monte-Carlo approach and Bayesian statistics. In the second part of this study, we carefully analyze a number of secondary effects, such as ion pairing and dendrite growth, that may influence the estimation of the material properties, and we develop mathematical models to include these effects. We then use reconstructions of material properties based on inverse modelling, along with their uncertainty estimates, as a framework to validate or invalidate the models. The significance of certain secondary effects is assessed based on the influence they have on the reconstructed material properties. / Thesis / Doctor of Philosophy (PhD)
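The first part of the study is an instance of PDE-constrained least squares. The sketch below mimics it at a much smaller scale: a 1D diffusion model with a hypothetical concentration-dependent diffusivity D(c) = d0 + d1*c is fit to a synthetic noisy concentration profile with scipy.optimize.least_squares. The parameterization, discretization, and boundary conditions are all simplifying assumptions, not those of the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, c0, dx, dt, nsteps):
    """Explicit finite-volume solve of c_t = (D(c) c_x)_x with no-flux boundaries."""
    d0, d1 = theta
    c = c0.copy()
    for _ in range(nsteps):
        D = d0 + d1 * c                          # assumed linear form D(c) = d0 + d1*c
        Dm = 0.5 * (D[1:] + D[:-1])              # diffusivity at interior cell faces
        flux = -Dm * np.diff(c) / dx
        c[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
        c[0] -= dt / dx * flux[0]                # zero flux through the end walls
        c[-1] += dt / dx * flux[-1]
    return c

nx = 50
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
c_init = 1.0 + 0.5 * np.tanh((0.5 - x) / 0.1)    # step-like initial Li concentration
dt, nsteps = dx**2, 200                          # stable for the D range fitted below

rng = np.random.default_rng(1)
c_obs = simulate((0.05, 0.02), c_init, dx, dt, nsteps) \
        + 0.005 * rng.standard_normal(nx)        # synthetic noisy "NMR" profile

fit = least_squares(lambda th: simulate(th, c_init, dx, dt, nsteps) - c_obs,
                    x0=[0.1, 0.0], bounds=([1e-3, -0.05], [0.2, 0.05]))
print("recovered (d0, d1):", fit.x)              # should land near (0.05, 0.02)
```

The thesis goes further by treating the fitted properties as random variables and estimating their posterior distributions; repeating a fit like this over resampled noise realizations is one crude way to see that spread.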
