81

Impact of Uncertainties in Reaction Rates and Thermodynamic Properties on Ignition Delay Time

Hantouche, Mireille 04 1900 (has links)
Ignition delay time, τ_ign, is a key quantity of interest that is used to assess the predictability of a chemical kinetic model. This dissertation explores the sensitivity of τ_ign to uncertainties in (1) rate-rule kinetic rate parameters and (2) enthalpies and entropies of the fuel and fuel radicals, using global and local sensitivity approaches. We begin by considering the variability of τ_ign due to uncertainty in rate parameters. We consider a 30-dimensional stochastic germ in which each random variable is associated with one reaction class, and build a surrogate model for τ_ign using polynomial chaos expansions. The adaptive pseudo-spectral projection technique is used for this purpose. First-order and total-order sensitivity indices characterizing the dependence of τ_ign on the uncertain inputs are estimated. Results indicate that τ_ign is mostly sensitive to variations in four dominant reaction classes. Next, we develop a thermodynamic class approach to study the variability of τ_ign of n-butanol due to uncertainty in the thermodynamic properties of species of interest, and to define associated uncertainty ranges. A global sensitivity analysis is performed, again using surrogates constructed with an adaptive pseudo-spectral method. Results indicate that the variability of τ_ign is dominated by uncertainties in the classes associated with peroxy and hydroperoxide radicals. We also perform a combined sensitivity analysis of uncertainty in kinetic rates and thermodynamic properties, which reveals that uncertainties in thermodynamic properties can induce variabilities in ignition delay time that are as large as those associated with kinetic rate uncertainties. In the last part, we develop a tangent linear approximation (TLA) to estimate the sensitivity of τ_ign with respect to individual rate parameters and thermodynamic properties in detailed chemical mechanisms. Attention is focused on a gas mixture reacting under adiabatic, constant-volume conditions. The proposed approach is based on integrating the linearized system of equations governing the evolution of the partial derivatives of the state vector with respect to individual random variables, and a linearized approximation is developed to relate ignition delay sensitivity to scaled partial derivatives of temperature. The computations indicate that the TLA leads to robust local sensitivity predictions at a computational cost that is an order of magnitude smaller than that incurred by finite-difference approaches.
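As an illustration of how sensitivity indices fall out of such a surrogate, the following is a minimal sketch (not taken from the dissertation) of computing first-order and total-order Sobol indices from the coefficients of an orthonormal polynomial chaos expansion; the multi-indices and coefficient values are placeholder assumptions.

```python
import numpy as np

# Minimal sketch: Sobol indices from an orthonormal polynomial chaos expansion
# f(xi) = sum_alpha c_alpha * Psi_alpha(xi).  The multi-indices and coefficients
# below are placeholders, not values from the dissertation.
# For an orthonormal basis, Var[f] = sum_{alpha != 0} c_alpha^2, and the variance
# attributable to each input is found by grouping the multi-indices.

pce = {  # multi-index over 3 uncertain inputs -> coefficient
    (0, 0, 0): 1.20,   # mean term (excluded from the variance)
    (1, 0, 0): 0.80,
    (2, 0, 0): 0.15,
    (0, 1, 0): 0.40,
    (0, 0, 1): 0.10,
    (1, 1, 0): 0.25,   # interaction between inputs 0 and 1
}

dim = 3
total_var = sum(c**2 for alpha, c in pce.items() if any(alpha))

first_order = np.zeros(dim)
total_order = np.zeros(dim)
for alpha, c in pce.items():
    if not any(alpha):
        continue                       # skip the mean term
    active = [i for i, k in enumerate(alpha) if k > 0]
    if len(active) == 1:               # only input i active -> first-order contribution
        first_order[active[0]] += c**2
    for i in active:                   # input i active at all -> total-order contribution
        total_order[i] += c**2

print("first-order indices:", first_order / total_var)
print("total-order indices:", total_order / total_var)
```

The same bookkeeping extends directly to a higher-dimensional stochastic germ such as the 30-dimensional one described above: each squared coefficient is assigned to whichever inputs appear in its multi-index.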
82

Development, Calibration, and Validation of a Finite Element Model of the THOR Crash Test Dummy for Aerospace and Spaceflight Crash Safety Analysis

Putnam, Jacob Breece 17 September 2014 (has links)
Anthropometric test devices (ATDs), commonly referred to as crash test dummies, are tools used to conduct aerospace and spaceflight safety evaluations. Finite element (FE) analysis provides an effective complement to these evaluations. In this work a FE model of the Test Device for Human Occupant Restraint (THOR) dummy was developed, calibrated, and validated for use in aerospace and spaceflight impact analysis. A previously developed THOR FE model was first evaluated under spinal loading. The FE model was then updated to reflect recent updates made to the THOR dummy. A novel calibration methodology was developed to improve both the kinematic and kinetic responses of the updated model in various THOR dummy certification tests. The updated THOR FE model was then calibrated and validated under spaceflight loading conditions and used to assess THOR dummy biofidelity. Results demonstrate that the FE model performs well under spinal loading and predicts injury criteria values close to those recorded in testing. Material parameter optimization of the updated model was shown to greatly improve its response. The validated THOR FE model indicated good dummy biofidelity relative to human volunteer data under spinal loading, but limited biofidelity under frontal loading. The calibration methodology developed in this work proves to be an effective tool for improving dummy model response. The performance of the dummy model developed in this study supports its use in future aerospace and spaceflight impact simulations. In addition, the biofidelity analysis suggests future improvements to the THOR dummy for spaceflight and aerospace analysis. / Master of Science
83

Efficient Time Stepping Methods and Sensitivity Analysis for Large Scale Systems of Differential Equations

Zhang, Hong 09 September 2014 (has links)
Many fields in science and engineering require large-scale numerical simulations of complex systems described by differential equations. These systems are typically multiphysics (they are driven by multiple interacting physical processes) and multiscale (the dynamics takes place on vastly different spatial and temporal scales). Numerical solution of such systems is highly challenging due to the dimension of the resulting discrete problem, and to the complexity that comes from incorporating multiple interacting components with different characteristics. The main contributions of this dissertation are the creation of new families of time integration methods for multiscale and multiphysics simulations, and the development of industrial-strength tools for sensitivity analysis. This work develops novel implicit-explicit (IMEX) general linear time integration methods for multiphysics and multiscale simulations typically involving both stiff and non-stiff components. In an IMEX approach, one uses an implicit scheme for the stiff components and an explicit scheme for the non-stiff components, such that the combined method has the desired stability and accuracy properties. Practical schemes with favorable properties, such as maximized stability, high efficiency, and no order reduction, are constructed and applied in extensive numerical experiments to validate the theoretical findings and to demonstrate their advantages. The approximate matrix factorization (AMF) technique exploits the structure of the Jacobian of the implicit parts, which may lead to further efficiency improvements of IMEX schemes. We have explored the application of AMF within some high-order IMEX Runge-Kutta schemes in order to achieve high efficiency. Sensitivity analysis gives quantitative information about the changes in a dynamical model's outputs caused by small changes in the model inputs. This information is crucial for data assimilation, model-constrained optimization, inverse problems, and uncertainty quantification. We develop a high performance software package for sensitivity analysis in the context of stiff and nonstiff ordinary differential equations. Efficiency is demonstrated by direct comparisons against existing state-of-the-art software on a variety of test problems. / Ph. D.
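To make the IMEX splitting concrete, here is a minimal sketch (first-order IMEX Euler on a made-up stiff/non-stiff split, not one of the general linear methods developed in the dissertation) showing the implicit treatment of the stiff term and the explicit treatment of the non-stiff term.

```python
import numpy as np

# Toy illustration of the IMEX idea on y' = A y + g(y): treat the stiff linear
# term A y implicitly and the non-stiff term g(y) explicitly.  This is only
# first-order IMEX Euler; the schemes in the dissertation are higher order, so
# this is a sketch of the splitting, not of those methods.

A = np.array([[-1000.0, 0.0],
              [0.0,     -0.5]])        # stiff linear operator

def g(y):                              # non-stiff coupling term (made up for the demo)
    return np.array([np.sin(y[1]), 0.1 * y[0]])

def imex_euler(y0, h, n_steps):
    I = np.eye(len(y0))
    y = y0.copy()
    for _ in range(n_steps):
        # (I - h A) y_new = y + h g(y): implicit in A, explicit in g
        y = np.linalg.solve(I - h * A, y + h * g(y))
    return y

y0 = np.array([1.0, 1.0])
print(imex_euler(y0, h=0.01, n_steps=100))   # stable despite h being much larger than 1/1000
```

Treating only the stiff part implicitly keeps the nonlinear term out of the algebraic solve, which is the efficiency argument behind the IMEX approach.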
84

Stochastic Computer Model Calibration and Uncertainty Quantification

Fadikar, Arindam 24 July 2019 (has links)
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn based on observations from the corresponding computer simulation model. These computer models are calibrated based on limited ground truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in the sense that repeated execution of a stochastic simulation results in different outcomes. This additional uncertainty in the simulation model must be handled accordingly in any calibration setup. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two different approaches addressing the non-Gaussian behavior of an emulator, by (1) incorporating quantile regression in GP for multivariate output, and (2) approximating the output distribution using a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa. The third approach employs a sequential scheme which periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation. / Doctor of Philosophy / Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific to the scientific model alone. More often than not, the optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as the inverse problem, i.e. inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model which simulates contagion of an infectious disease through human-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City in the USA.
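As a toy illustration of approach (2), the sketch below fits a finite mixture of Gaussians to replicated outputs of a stand-in stochastic simulator; the simulator, its bimodal output, and the use of scikit-learn's GaussianMixture are illustrative assumptions, not elements of the dissertation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def stochastic_simulator(x, n_reps):
    """Stand-in for an expensive stochastic simulation: at a fixed input x the
    replicated outputs are deliberately non-Gaussian (bimodal)."""
    mode = rng.random(n_reps) < 0.3 + 0.4 * x
    return np.where(mode, rng.normal(10.0, 1.0, n_reps), rng.normal(25.0, 3.0, n_reps))

# Replicate the simulator at one input setting and fit a finite mixture of
# Gaussians to the output distribution, rather than only a mean and variance.
y = stochastic_simulator(x=0.5, n_reps=500).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(y)

print("weights:", gmm.weights_)
print("means:  ", gmm.means_.ravel())
print("stddevs:", np.sqrt(gmm.covariances_).ravel())
```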
85

MATLODE: A MATLAB ODE Solver and Sensitivity Analysis Toolbox

D'Augustine, Anthony Frank 04 May 2018 (has links)
Sensitivity analysis quantifies the effect that perturbations of the model inputs have on the model's outputs. Some of the key insights gained using sensitivity analysis are to understand the robustness of the model with respect to perturbations, and to select the most important parameters for the model. MATLODE is a tool for sensitivity analysis of models described by ordinary differential equations (ODEs). MATLODE implements two distinct approaches for sensitivity analysis: direct (via the tangent linear model) and adjoint. Within each approach, four families of numerical methods are implemented, namely explicit Runge-Kutta, implicit Runge-Kutta, Rosenbrock, and singly diagonally implicit Runge-Kutta. Each approach and family has its own strengths and weaknesses when applied to real world problems. MATLODE has a multitude of options that allow users to find the best approach for a wide range of initial value problems. In spite of the great importance of sensitivity analysis for models governed by differential equations, until this work there was no MATLAB ordinary differential equation sensitivity analysis toolbox publicly available. The two most popular sensitivity analysis packages, CVODES [8] and FATODE [10], are geared toward the high performance modeling space; however, no native MATLAB toolbox was available. MATLODE fills this need and offers sensitivity analysis capabilities in MATLAB, one of the most popular programming languages within scientific communities such as chemistry, biology, ecology, and oceanography. We expect that MATLODE will prove to be a useful tool for these communities, helping to facilitate their research and fill the gap between theory and practice. / Master of Science
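MATLODE itself is a MATLAB toolbox, so its interface is not reproduced here; the following is a hedged Python/SciPy sketch of the direct (tangent linear model) approach it implements, in which the ODE is augmented with the sensitivity equation dS/dt = (∂f/∂y)S + ∂f/∂p and the result is checked against the analytic sensitivity of a toy decay model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the "direct" (tangent linear model) approach to sensitivity analysis,
# written with SciPy rather than MATLODE's MATLAB interface.
# Model: dy/dt = -p*y, y(0) = y0.  The sensitivity S = dy/dp obeys
#   dS/dt = (df/dy) S + df/dp = -p*S - y,  S(0) = 0.

p, y0 = 1.5, 2.0

def augmented_rhs(t, z):
    y, S = z
    return [-p * y, -p * S - y]

sol = solve_ivp(augmented_rhs, (0.0, 3.0), [y0, 0.0], rtol=1e-8, atol=1e-10)

t_end = sol.t[-1]
S_numeric = sol.y[1, -1]
S_exact = -t_end * y0 * np.exp(-p * t_end)   # analytic dy/dp for this toy model
print(f"tangent-linear sensitivity: {S_numeric:.6f}, exact: {S_exact:.6f}")
```

The adjoint approach solves a related problem backwards in time and is preferable when many parameters feed into few outputs; the direct approach above scales with the number of parameters instead.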
86

Sensitivity analysis of pluvial flood modelling tools for dense urban areas : A case study in Lundby-Lindholmen, Gothenburg

Eriksson, Johanna January 2020 (has links)
As a result of global climate change, extreme precipitation is occurring more frequently, which increases the risk of flooding, especially in urban areas. Urbanisation is widely discussed in relation to urban flooding, since an increase of impervious surfaces limits infiltration and increases surface runoff. Flooding events in urban areas are increasing around the world and can cause extensive damage to infrastructure and buildings, which makes cities vulnerable. Urban flood models are an important tool for analysing the capacity of drainage systems, predicting the extent of flood events, and finding optimal locations for measures to prevent flood damage. In this project, a sensitivity analysis is presented of MIKE FLOOD, a coupled 1D-2D flood model developed by DHI in which sewer and surface systems are integrated. The aim of this project is to investigate how the result of a coupled flood model varies in relation to changes in input parameters. The sensitivity analysis is performed to evaluate how different parameters impact the model output in terms of water depth and variations in the cost of flooded buildings, roads, railways, and tramways. The analysis is applied in a case study in Lundby-Lindholmen, Gothenburg, Sweden. The results show that modelling without infiltration influenced the model output the most, with the largest increase both in terms of cost and water depth over the investigated area. Here the correlation between the initial water saturation and the location of the applied pre-rain was highlighted. The model outputs were less sensitive to changes in surface roughness (expressed as the Manning value) than to the removal of infiltration; roughness changes led to measurable changes in surface water depth and distribution, while the flood damage cost did not change substantially. Additionally, the coupled flood model was evaluated in terms of handling changes in the magnitude of rain events. The data indicate that the shorter the return period, the smaller the flood propagation, and that the flood damage cost likewise decreases with shorter return periods. The data evaluated support the use of this coupled model approach for shorter return periods in terms of flood propagation.
87

Uncertainty Analysis for Land Surface Model Predictions: Application to the Simple Biosphere 3 and Noah Models at Tropical and Semiarid Locations

Roundy, Joshua K. 01 May 2009 (has links)
Uncertainty in model predictions is associated with data, parameters, and model structure. The estimation of these contributions to uncertainty is a critical issue in hydrology. Using a variety of single- and multiple-criterion methods for sensitivity analysis and inverse modeling, the behaviors of two state-of-the-art land surface models, the Simple Biosphere Model 3 and the Noah model, are analyzed. The different algorithms used for sensitivity and inverse modeling are analyzed and compared along with the performance of the land surface models. Generalized sensitivity and variance methods are used for the sensitivity analysis, including the Multi-Objective Generalized Sensitivity Analysis, the Extended Fourier Amplitude Sensitivity Test, and the method of Sobol. The methods used for the parameter uncertainty estimation are based on Markov Chain Monte Carlo simulations with Metropolis-type algorithms and include A Multi-algorithm Genetically Adaptive Multi-objective algorithm, Differential Evolution Adaptive Metropolis, the Shuffled Complex Evolution Metropolis, and the Multi-objective Shuffled Complex Evolution Metropolis algorithms. The analysis focuses on the behavior of land surface model predictions for sensible heat, latent heat, and carbon fluxes at the surface. This is done using data from hydrometeorological towers collected at several locations within the Large-Scale Biosphere Atmosphere Experiment in Amazonia domain (Amazon tropical forest) and at locations in Arizona (semiarid grass and shrubland). The influence that the specific location exerts upon the model simulation is also analyzed. In addition, the Santarém kilometer 67 site located in the Large-Scale Biosphere Atmosphere Experiment in Amazonia domain is further analyzed by using datasets with different levels of quality control and evaluating the resulting effects on the performance of the individual models. The method of Sobol was shown to give the best estimates of sensitivity among the variance-based algorithms and tended to be conservative in terms of assigning parameter sensitivity, while the multi-objective generalized sensitivity algorithm identified a more liberal number of sensitive parameters. For the optimization, the Multi-algorithm Genetically Adaptive Multi-objective algorithm consistently resulted in the smallest overall error; however, all other algorithms gave similar results. Furthermore, the Simple Biosphere Model 3 provided better estimates of the latent heat and the Noah model gave better estimates of the sensible heat.
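For readers unfamiliar with the Metropolis-type machinery underlying the parameter-uncertainty algorithms listed above, the sketch below shows a plain random-walk Metropolis sampler calibrating a single parameter of a made-up flux model against synthetic observations; it is a bare-bones illustration, not an implementation of the adaptive multi-algorithm or shuffled complex evolution methods named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a land surface model: predicted flux as a function of one parameter.
def model(theta, forcing):
    return theta * np.sqrt(forcing)

forcing = np.linspace(1.0, 10.0, 50)
theta_true, sigma = 2.0, 0.5
obs = model(theta_true, forcing) + rng.normal(0.0, sigma, forcing.size)

def log_posterior(theta):
    if theta <= 0.0:                       # flat prior on theta > 0
        return -np.inf
    resid = obs - model(theta, forcing)
    return -0.5 * np.sum(resid**2) / sigma**2

# Plain Metropolis random walk: propose, accept with probability min(1, ratio).
n_iter, step = 5000, 0.05
chain = np.empty(n_iter)
theta = 1.0
logp = log_posterior(theta)
for i in range(n_iter):
    prop = theta + step * rng.normal()
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    chain[i] = theta

burned = chain[1000:]                      # discard burn-in
print(f"posterior mean {burned.mean():.3f}, 95% interval "
      f"({np.quantile(burned, 0.025):.3f}, {np.quantile(burned, 0.975):.3f})")
```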
88

Uncertainty Quantification and Sensitivity Analysis of Multiphysics Environments for Application in Pressurized Water Reactor Design

Blakely, Cole David 01 August 2018 (has links)
The most common design among U.S. nuclear power plants is the pressurized water reactor (PWR). The three primary design disciplines of these plants are system analysis (which includes thermal hydraulics), neutronics, and fuel performance. The nuclear industry has developed a variety of codes over the course of forty years, each with an emphasis within a specific discipline. Perhaps the greatest difficulty in mathematically modeling a nuclear reactor is choosing which specific phenomena need to be modeled, and to what detail. A multiphysics computational environment provides a means of advancing simulations of nuclear plants. Put simply, users are able to combine various physical models which have commonly been treated as separate in the past. The focus of this work is a specific multiphysics environment currently under development at Idaho National Laboratory (INL) known as the LOCA Toolkit for US light water reactors (LOTUS). The ability of LOTUS to use uncertainty quantification (UQ) and sensitivity analysis (SA) tools within a multiphysics environment allows for a number of unique analyses which, to the best of our knowledge, have yet to be performed. These include the first known integration of the neutronics and thermal hydraulic code VERA-CS, currently under development by CASL, with the well-established fuel performance code FRAPCON developed by PNNL. The integration was used to model a fuel depletion case. The outputs of interest for this integration were the minimum departure from nucleate boiling ratio (MDNBR), a thermal hydraulic parameter indicating how close a heat flux is to causing a dangerous form of boiling in which an insulating layer of coolant vapour is formed; the maximum fuel centerline temperature (MFCT) of the uranium rod; and the gap conductance at peak power (GCPP). GCPP refers to the thermal conductance of the gas-filled gap between fuel and cladding at the axial location with the highest local power generation. UQ and SA were performed on MDNBR, MFCT, and GCPP at a variety of times throughout the fuel depletion. Results showed the MDNBR to behave linearly and consistently throughout the depletion, with the most impactful input uncertainties being coolant outlet pressure and inlet temperature as well as core power. MFCT also behaves linearly, but with a shift in SA measures. Initially MFCT is sensitive to fuel thermal conductivity and gap dimensions. However, later in the fuel cycle, nearly all uncertainty stems from fuel thermal conductivity, with minor contributions coming from core power and initial fuel density. GCPP uncertainty exhibits nonlinear, time-dependent behaviour, which requires higher order SA measures to properly analyze. GCPP begins with a dependence on gap dimensions, but in later states shifts to a dependence on the biases of a variety of specific calculations such as fuel swelling and cladding creep and oxidation. LOTUS was also used to perform the first higher order SA of an integration of VERA-CS and the BISON fuel performance code currently under development at INL. The same problem and outputs were studied as in the VERA-CS and FRAPCON integration. Results for MDNBR and MFCT were relatively consistent. GCPP results contained notable differences, specifically a large dependence on fuel and clad surface roughness in later states. However, this difference is due to the surface roughness not being perturbed in the first integration. SA of later states also showed an increased sensitivity to fission gas release coefficients.
Lastly, a loss of coolant accident (LOCA) was investigated with an integration of FRAPCON with the INL neutronics code PHISICS and the system analysis code RELAP5-3D. The outputs of interest were the ratios of the peak cladding temperature (PCT, the highest temperature encountered by the cladding during the LOCA) and the equivalent cladding reacted (ECR, the percentage of cladding oxidized) to their cladding hydrogen content-based limits. This work contains the first known UQ of these ratios within the aforementioned integration. Results showed the PCT ratio to be relatively well behaved. The ECR ratio behaves as a threshold variable, which is to say it abruptly shifts to radically higher values under specific conditions. This threshold behaviour establishes the importance of performing UQ so as to see the full spectrum of possible values for an output of interest. The SA capabilities of LOTUS provide a path forward for developers to increase code fidelity for specific outputs. Performing UQ within a multiphysics environment may provide improved estimates of safety metrics in nuclear reactors. These improved estimates may allow plants to operate at higher power, thereby increasing profits. Finally, LOTUS will be of particular use in the development of newly proposed nuclear fuel designs.
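The threshold behaviour noted for the ECR ratio is the kind of response for which sampling-based UQ is essential; the sketch below illustrates the point on a made-up kinked response function, which is purely illustrative: the nominal input looks benign, but propagating the input uncertainty exposes a tail of abruptly larger outputs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative threshold-type response: small and smooth below x = 1, then an
# abrupt jump.  Neither the function nor the input distribution comes from LOTUS.
def response(x):
    return np.where(x < 1.0, 0.1 * x, 5.0)

nominal = 0.9
samples = rng.normal(nominal, 0.1, 100_000)   # input uncertainty around the nominal value
y = response(samples)

print("nominal response:        ", response(np.array([nominal]))[0])
print("mean response:           ", y.mean())
print("99.9th percentile:       ", np.quantile(y, 0.999))
print("fraction past threshold: ", np.mean(samples >= 1.0))
```

A single nominal run would miss the jump entirely, which is the argument made above for sampling the full spectrum of possible output values.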
89

Variational data assimilation for the shallow water equations with applications to tsunami wave prediction

Khan, Ramsha January 2020 (has links)
Accurate prediction of tsunami waves requires complete boundary and initial condition data, coupled with the appropriate mathematical model. However, the necessary data is often missing or inaccurate, and may not have sufficient resolution to capture the dynamics of such nonlinear waves accurately. In this thesis we demonstrate that variational data assimilation for the continuous shallow water equations (SWE) is a feasible approach for recovering both initial conditions and bathymetry data from sparse observations. Using a Sadourny finite-difference finite volume discretisation for our numerical implementation, we show that convergence to the true initial conditions can be achieved for sparse observations arranged in multiple configurations, for both isotropic and anisotropic initial conditions, and with realistic bathymetry data in two dimensions. We demonstrate that for the 1-D SWE, convergence to the exact bathymetry is improved by including in the data assimilation algorithm a low-pass filter designed to remove small-scale noise, and by using a larger number of observations. A necessary condition for a relative L2 error less than 10% in bathymetry reconstruction is that the amplitude of the initial conditions be less than 1% of the bathymetry height. We perform Second Order Adjoint Sensitivity Analysis and Global Sensitivity Analysis to comprehensively assess the sensitivity of the surface wave to errors in the bathymetry and perturbations in the observations. By demonstrating low sensitivity of the surface wave to the reconstruction error, we found that reconstructing the bathymetry with a relative error of about 10% is sufficiently accurate for surface wave modelling in most cases. These idealised results with simplified 2-D and 1-D geometry are intended to be a first step towards more physically realistic settings, and can be used in tsunami modelling to (i) maximise the accuracy of tsunami prediction through sufficiently accurate reconstruction of the necessary data, (ii) attain a priori knowledge of how different bathymetry and initial conditions can affect the surface wave error, and (iii) provide insight on how these errors can be mitigated through optimal configuration of the observations. / Thesis / Candidate in Philosophy
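As a schematic of the variational idea, the sketch below recovers an unknown initial condition of a scalar decay model from sparse noisy observations by minimizing the model-observation misfit; the decay model and the use of SciPy's quasi-Newton optimizer are stand-ins for the shallow water equations and the adjoint-based gradients used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch of variational data assimilation: recover an unknown initial
# condition u0 by minimizing the misfit between model forecasts and sparse
# observations.  A scalar decay model stands in for the shallow water equations.

rng = np.random.default_rng(2)
decay = 0.3
obs_times = np.array([0.5, 1.2, 2.0, 3.5])        # sparse observation times

def forward(u0, t):
    return u0 * np.exp(-decay * t)                # forward model

u0_true = 4.0
observations = forward(u0_true, obs_times) + rng.normal(0.0, 0.05, obs_times.size)

def cost(u0):
    misfit = forward(u0[0], obs_times) - observations
    return 0.5 * np.sum(misfit**2)                # least-squares misfit functional

result = minimize(cost, x0=[1.0], method="BFGS")
print(f"recovered u0 = {result.x[0]:.3f} (true value {u0_true})")
```

In the full problem the control vector also contains the bathymetry, and the gradient of the misfit functional is obtained from the adjoint of the discretised SWE rather than by the numerical differencing used here.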
90

Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit

Doty, Austin January 2012 (has links)
No description available.
