  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Sensitivity analysis of pluvial flood modelling tools for dense urban areas : A case study in Lundby-Lindholmen, Gothenburg

Eriksson, Johanna January 2020
As a result of global climate change, extreme precipitation is occurring more frequently, which increases the risk of flooding, especially in urban areas. Urbanisation is widely discussed in relation to urban flooding, since an increase in impervious surfaces limits infiltration and increases surface runoff. Flooding events in urban areas are becoming more common around the world and can cause extensive damage to infrastructure and buildings, leaving cities vulnerable. Urban flood models are an important tool for analysing the capacity of drainage systems, predicting the extent of flood events, and finding optimal locations for measures that prevent flood damage. This project presents a sensitivity analysis in MIKE FLOOD, a coupled 1D-2D flood model developed by DHI in which sewer and surface systems are integrated. The aim of the project is to investigate how the output of a coupled flood model varies in relation to changes in input parameters. The sensitivity analysis evaluates how different parameters affect the model output in terms of water depth and variations in the cost of flooded buildings, roads, railways, and tramways. The analysis is applied in a case study in Lundby-Lindholmen in the city of Gothenburg, Sweden. The results show that modelling without infiltration influenced the model output the most, with the largest increase in both cost and water depth over the investigated area; here, the correlation between the initial water saturation and the location of the applied pre-rain was highlighted. The model outputs were less sensitive to changes in surface roughness (expressed as the Manning value) than to the removal of infiltration: roughness changes led to measurable differences in surface water depth and distribution, while the flood damage cost did not change substantially. Additionally, the coupled flood model was evaluated with respect to changes in the magnitude of rain events.
The data indicate that both the flood propagation and the flood damage cost decrease with shorter return periods, and they support the use of this coupled model approach for shorter return periods in terms of flood propagation.
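The roughness sensitivity described in this abstract can be illustrated with a one-at-a-time sweep over the Manning coefficient in Manning's equation; the channel geometry and roughness values below are hypothetical, chosen only to show the mechanics, and are unrelated to the MIKE FLOOD setup in the thesis:

```python
import numpy as np

def manning_discharge(n, area, hyd_radius, slope):
    """Discharge from Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * area * hyd_radius ** (2.0 / 3.0) * np.sqrt(slope)

# Baseline channel geometry (hypothetical values, illustration only).
area, hyd_radius, slope = 12.0, 0.8, 0.002
baseline_n = 0.015  # e.g. a smooth concrete surface

q_base = manning_discharge(baseline_n, area, hyd_radius, slope)

# One-at-a-time sweep over surface roughness.
for n in (0.013, 0.015, 0.030, 0.060):
    q = manning_discharge(n, area, hyd_radius, slope)
    print(f"n = {n:.3f}: Q = {q:6.2f} m^3/s ({100 * (q / q_base - 1):+6.1f}% vs baseline)")
```

Because Q scales as 1/n, doubling the roughness halves the conveyed discharge, which is why roughness perturbations shift water depth and distribution even when damage costs barely move.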
82

Uncertainty Analysis for Land Surface Model Predictions: Application to the Simple Biosphere 3 and Noah Models at Tropical and Semiarid Locations

Roundy, Joshua K. 01 May 2009
Uncertainty in model predictions is associated with data, parameters, and model structure. The estimation of these contributions to uncertainty is a critical issue in hydrology. Using a variety of single and multiple criterion methods for sensitivity analysis and inverse modeling, the behaviors of two state-of-the-art land surface models, the Simple Biosphere Model 3 and Noah model, are analyzed. The different algorithms used for sensitivity and inverse modeling are analyzed and compared along with the performance of the land surface models. Generalized sensitivity and variance methods are used for the sensitivity analysis, including the Multi-Objective Generalized Sensitivity Analysis, the Extended Fourier Amplitude Sensitivity Test, and the method of Sobol. The methods used for the parameter uncertainty estimation are based on Markov Chain Monte Carlo simulations with Metropolis type algorithms and include A Multi-algorithm Genetically Adaptive Multi-objective algorithm, Differential Evolution Adaptive Metropolis, the Shuffled Complex Evolution Metropolis, and the Multi-objective Shuffled Complex Evolution Metropolis algorithms. The analysis focuses on the behavior of land surface model predictions for sensible heat, latent heat, and carbon fluxes at the surface. This is done using data from hydrometeorological towers collected at several locations within the Large-Scale Biosphere Atmosphere Experiment in Amazonia domain (Amazon tropical forest) and at locations in Arizona (semiarid grass and shrub-land). The influence that the specific location exerts upon the model simulation is also analyzed. In addition, the Santarém kilometer 67 site located in the Large-Scale Biosphere Atmosphere Experiment in Amazonia domain is further analyzed by using datasets with different levels of quality control for evaluating the resulting effects on the performance of the individual models. 
The method of Sobol was shown to give the best estimates of sensitivity among the variance-based algorithms and tended to be conservative in assigning parameter sensitivity, while the multi-objective generalized sensitivity algorithm identified a more liberal number of sensitive parameters. For the optimization, the Multi-algorithm Genetically Adaptive Multi-objective algorithm consistently resulted in the smallest overall error; however, all of the other algorithms gave similar results. Furthermore, the Simple Biosphere Model 3 provided better estimates of the latent heat, and the Noah model gave better estimates of the sensible heat.
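The variance-based indices the method of Sobol produces can be sketched with a standard pick-freeze (Saltelli-style) Monte Carlo estimator on the Ishigami benchmark function; this is a generic illustration of first-order Sobol indices, not the land surface model analysis itself:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami function, a standard sensitivity-analysis benchmark."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3

# Two independent input samples over [-pi, pi]^3 for the pick-freeze scheme.
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA, fB = ishigami(A), ishigami(B)
variance = np.var(np.concatenate([fA, fB]))

# First-order index S_i: replace column i of A with column i of B
# (Saltelli's estimator for V_i / V).
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (ishigami(ABi) - fA)) / variance)
    print(f"S_{i + 1} = {S[-1]:.3f}")  # analytical values: 0.314, 0.442, 0.000
```

The third input has a first-order index of zero despite mattering strongly through its interaction with the first, which is exactly the kind of behaviour that distinguishes first-order from total-order indices.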
83

Uncertainty Quantification and Sensitivity Analysis of Multiphysics Environments for Application in Pressurized Water Reactor Design

Blakely, Cole David 01 August 2018
The most common design among U.S. nuclear power plants is the pressurized water reactor (PWR). The three primary design disciplines of these plants are system analysis (which includes thermal hydraulics), neutronics, and fuel performance. The nuclear industry has developed a variety of codes over the course of forty years, each with an emphasis within a specific discipline. Perhaps the greatest difficulty in mathematically modeling a nuclear reactor is choosing which specific phenomena need to be modeled, and in what detail. A multiphysics computational environment provides a means of advancing simulations of nuclear plants. Put simply, users are able to combine various physical models which have commonly been treated as separate in the past. The focus of this work is a specific multiphysics environment currently under development at Idaho National Laboratory, known as the LOCA Toolkit for US light water reactors (LOTUS). The ability of LOTUS to use uncertainty quantification (UQ) and sensitivity analysis (SA) tools within a multiphysics environment allows for a number of unique analyses which, to the best of our knowledge, have yet to be performed. These include the first known integration of VERA-CS, the neutronics and thermal hydraulics code currently under development by CASL, with the well-established fuel performance code FRAPCON by PNNL. The integration was used to model a fuel depletion case. The outputs of interest for this integration were the minimum departure from nucleate boiling ratio (MDNBR) (a thermal hydraulic parameter indicating how close a heat flux is to causing a dangerous form of boiling in which an insulating layer of coolant vapour is formed), the maximum fuel centerline temperature (MFCT) of the uranium rod, and the gap conductance at peak power (GCPP). GCPP refers to the thermal conductance of the gas-filled gap between fuel and cladding at the axial location with the highest local power generation.
UQ and SA were performed on MDNBR, MFCT, and GCPP at a variety of times throughout the fuel depletion. Results showed the MDNBR to behave linearly and consistently throughout the depletion, with the most impactful input uncertainties being coolant outlet pressure and inlet temperature as well as core power. MFCT also behaves linearly, but with a shift in SA measures. Initially MFCT is sensitive to fuel thermal conductivity and gap dimensions. However, later in the fuel cycle, nearly all uncertainty stems from fuel thermal conductivity, with minor contributions coming from core power and initial fuel density. GCPP uncertainty exhibits nonlinear, time-dependent behaviour which requires higher-order SA measures to properly analyze. GCPP begins with a dependence on gap dimensions but, in later states, shifts to a dependence on the biases of a variety of specific calculations such as fuel swelling and cladding creep and oxidation. LOTUS was also used to perform the first higher-order SA of an integration of VERA-CS and the BISON fuel performance code currently under development at INL. The same problem and outputs were studied as in the VERA-CS and FRAPCON integration. Results for MDNBR and MFCT were relatively consistent. GCPP results contained notable differences, specifically a large dependence on fuel and clad surface roughness in later states. However, this difference is due to the surface roughness not being perturbed in the first integration. SA of later states also showed an increased sensitivity to fission gas release coefficients. Lastly, a loss of coolant accident (LOCA) was investigated with an integration of FRAPCON with the INL neutronics code PHISICS and the system analysis code RELAP5-3D. The outputs of interest were the ratios of the peak cladding temperature (PCT, the highest temperature encountered by the cladding during the LOCA) and the equivalent cladding reacted (ECR, the percentage of cladding oxidized) to their cladding hydrogen content-based limits.
This work contains the first known UQ of these ratios within the aforementioned integration. Results showed the PCT ratio to be relatively well behaved. The ECR ratio behaves as a threshold variable, which is to say it abruptly shifts to radically higher values under specific conditions. This threshold behaviour establishes the importance of performing UQ so as to see the full spectrum of possible values for an output of interest. The SA capabilities of LOTUS provide a path forward for developers to increase code fidelity for specific outputs. Performing UQ within a multiphysics environment may provide improved estimates of safety metrics in nuclear reactors. These improved estimates may allow plants to operate at higher power, thereby increasing profits. Lastly, LOTUS will be of particular use in the development of newly proposed nuclear fuel designs.
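The value of Monte Carlo UQ for a threshold output like the ECR ratio can be sketched with a toy response that jumps abruptly past a limit: a nominal-input calculation sees only the benign branch, while sampling the input distribution exposes the tail. All numbers below are invented for illustration and have no connection to the LOTUS analyses:

```python
import numpy as np

def clad_response(power):
    """Toy output with threshold behaviour: linear below a limit, then a jump
    (loosely mimicking the abrupt shift described for the ECR ratio)."""
    return np.where(power < 1.10, 0.3 * power, 0.3 * power + 5.0)

rng = np.random.default_rng(42)
# Hypothetical uncertain input: nominal power 1.0 with 5% standard deviation.
power = rng.normal(1.0, 0.05, size=200_000)

out = clad_response(power)
nominal = clad_response(np.array([1.0]))[0]

print(f"nominal-input output : {nominal:.3f}")
print(f"mean output (MC)     : {out.mean():.3f}")
print(f"99.9th percentile    : {np.percentile(out, 99.9):.3f}")
print(f"P(threshold exceeded): {(power >= 1.10).mean():.4f}")
```

A point calculation at the nominal input would report a small, well-behaved value; only the sampled distribution reveals the roughly two-percent chance of landing on the far side of the threshold.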
84

Variational data assimilation for the shallow water equations with applications to tsunami wave prediction

Khan, Ramsha January 2020
Accurate prediction of tsunami waves requires complete boundary and initial condition data, coupled with the appropriate mathematical model. However, the necessary data are often missing or inaccurate, and may not have sufficient resolution to capture the dynamics of such nonlinear waves accurately. In this thesis we demonstrate that variational data assimilation for the continuous shallow water equations (SWE) is a feasible approach for recovering both initial conditions and bathymetry data from sparse observations. Using a Sadourny finite-difference finite-volume discretisation for our numerical implementation, we show that convergence to the true initial conditions can be achieved for sparse observations arranged in multiple configurations, for both isotropic and anisotropic initial conditions, and with realistic bathymetry data in two dimensions. We demonstrate that for the 1-D SWE, convergence to the exact bathymetry is improved by including in the data assimilation algorithm a low-pass filter designed to remove small-scale noise, and by using a larger number of observations. A necessary condition for a relative L2 error of less than 10% in the bathymetry reconstruction is that the amplitude of the initial conditions be less than 1% of the bathymetry height. We perform second-order adjoint sensitivity analysis and global sensitivity analysis to comprehensively assess the sensitivity of the surface wave to errors in the bathymetry and perturbations in the observations. By demonstrating the low sensitivity of the surface wave to the reconstruction error, we found that reconstructing the bathymetry with a relative error of about 10% is sufficiently accurate for surface wave modelling in most cases.
These idealised results with simplified 2-D and 1-D geometry are intended to be a first step towards more physically realistic settings, and can be used in tsunami modelling to (i) maximise accuracy of tsunami prediction through sufficiently accurate reconstruction of the necessary data, (ii) attain a priori knowledge of how different bathymetry and initial conditions can affect the surface wave error, and (iii) provide insight on how these can be mitigated through optimal configuration of the observations. / Thesis / Candidate in Philosophy
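The variational approach can be sketched in a heavily simplified setting: a linear periodic "advection" model whose adjoint is known exactly, with the initial condition recovered by gradient descent on the observation misfit. This is a toy stand-in for the SWE assimilation, assuming a perfect model and noise-free sparse observations:

```python
import numpy as np

n, shift = 64, 5

def model(u0):
    """Toy linear wave model: periodic advection by a fixed number of cells."""
    return np.roll(u0, shift)

def model_adjoint(w):
    """Adjoint of the periodic shift is the shift in the opposite direction."""
    return np.roll(w, -shift)

# Synthetic truth and sparse observations of the final state.
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u_true = np.exp(-8 * (x - np.pi) ** 2)          # isolated initial wave
obs_idx = np.arange(0, n, 2)                    # observe every other cell
y = model(u_true)[obs_idx]

# Gradient descent on J(u0) = 0.5 * || H M u0 - y ||^2,
# with gradient M^T H^T (H M u0 - y) supplied by the adjoint.
u0 = np.zeros(n)
for _ in range(500):
    misfit = model(u0)[obs_idx] - y
    grad_obs = np.zeros(n)
    grad_obs[obs_idx] = misfit                  # H^T applied to the misfit
    u0 -= 0.5 * model_adjoint(grad_obs)

err = np.linalg.norm(u0[obs_idx - shift] - u_true[obs_idx - shift])
print(f"relative error at observation-influenced cells: {err / np.linalg.norm(u_true):.2e}")
```

Cells that no observation constrains stay at their first guess, a miniature of the ill-posedness that makes observation placement matter in the thesis.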
85

Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit

Doty, Austin January 2012
No description available.
86

Modeling of High-Pressure Entrained-Flow Char Oxidation

Gundersen, Daniel 15 November 2022
Coal plays a significant role in electricity production worldwide and will continue to do so for the foreseeable future. Technologies that improve efficiency and lower emissions are becoming more popular, and high-pressure reactors and oxyfuel combustion can offer these benefits. Designing new reactors effectively requires accurate single-particle modeling. This work models a high-pressure, high-temperature, high-heating-rate, entrained-flow char oxidation data set to generate kinetic parameters. Different modeling methods were explored, and a sensitivity analysis on char burnout was performed by varying parameters such as total pressure, O2 partial pressure, O2 and CO2 mole fractions, gas temperature, particle diameter, and the pre-exponential factor. Pressure effects on char burnout modeling were found to depend on the set of kinetic parameters chosen. Using kinetic parameters from Hurt-Calo (2001) rather than values obtained from Niksa-Hurt (2003) reproduced a trend seen in real data sets: the reaction order changes with temperature. Varying the O2 mole fraction and partial pressure produced the most significant changes in char burnout. Varying the particle diameter, total pressure, pre-exponential factor, CO2 environment, and gas temperature also changed the extent of char burnout, with the effect decreasing in the order listed. Increasing any of these parameters increased char burnout, except for particle diameter and CO2 mole fraction, which led to a decrease. Char formation pressure affects reactivity, and this work shows a peak in reactivity at the 6 atm condition.
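A minimal sketch of such a burnout sensitivity sweep, using a global first-order-in-char rate with an Arrhenius rate constant; the kinetic parameters and conditions below are invented for illustration and are not the fitted values from this work:

```python
import numpy as np

R = 8.314  # J/(mol K)

def burnout(t, A, E, T, p_o2, order=0.5):
    """Char conversion X(t) for a rate first order in remaining char:
    k = A * exp(-E / (R T)) * p_O2^n, so X(t) = 1 - exp(-k t)."""
    k = A * np.exp(-E / (R * T)) * p_o2 ** order
    return 1.0 - np.exp(-k * t)

# Hypothetical baseline parameters (illustrative, not fitted values).
t, A, E, T, p_o2 = 0.05, 2.0e3, 80_000.0, 1800.0, 0.5

base = burnout(t, A, E, T, p_o2)
print(f"baseline burnout: {base:.3f}")

# One-at-a-time perturbations, echoing the sweep described in the abstract.
for name, changes in [("A x2", dict(A=2 * A)),
                      ("p_O2 x2", dict(p_o2=2 * p_o2)),
                      ("T +100 K", dict(T=T + 100))]:
    params = dict(A=A, E=E, T=T, p_o2=p_o2)
    params.update(changes)
    print(f"{name:9s}: burnout = {burnout(t, **params):.3f}")
```

Each perturbation enters through the rate constant, so the relative impact ranking falls out directly from how strongly each parameter moves k.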
87

Sensitivity Analysis of Convex Relaxations for Nonsmooth Global Optimization

Yuan, Yingwei January 2020
Nonsmoothness appears in various applications in chemical engineering, including multi-stream heat exchangers, nonsmooth flash calculations, and process integration. In terms of numerical approaches, convex/concave relaxations of static and dynamic systems may also exhibit nonsmoothness. These relaxations are used in deterministic methods for global optimization. This thesis presents several new theoretical results for nonsmooth sensitivity analysis, with an emphasis on convex relaxations. Firstly, the "compass difference" and established ODE results by Pang and Stewart are used to describe a correct subgradient for a nonsmooth dynamic system with two parameters. This sensitivity information can be computed using standard ODE solvers. Next, this thesis also uses the compass difference to obtain a subgradient for the Tsoukalas-Mitsos convex relaxations of composite functions of two variables. Lastly, this thesis develops a new general subgradient result for Tsoukalas-Mitsos convex relaxations of composite functions. This result places no limit on the dimension of the input variables, and it yields the whole subdifferential of the Tsoukalas-Mitsos convex relaxations. Compared to Tsoukalas and Mitsos' previous subdifferential results, it also does not require solving an additional dual optimization problem. The new subgradient results are extended to obtain directional derivatives for Tsoukalas-Mitsos convex relaxations. The new subgradient and directional derivative results are computationally practical: the subgradients can be calculated by both the vector forward and reverse modes of automatic differentiation (AD). A proof-of-concept implementation in Matlab is discussed. / Thesis / Master of Applied Science (MASc)
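As a simpler cousin of the Tsoukalas-Mitsos relaxations discussed above, the classical McCormick convex relaxation of a bilinear term also yields a subgradient from whichever supporting plane is active at the evaluation point. A minimal sketch (this is the textbook McCormick construction, not the thesis's result):

```python
def mccormick_cv(x, y, xL, xU, yL, yU):
    """Convex McCormick relaxation of the bilinear term x*y on a box,
    with a subgradient taken from an active supporting plane."""
    p1 = yL * x + xL * y - xL * yL
    p2 = yU * x + xU * y - xU * yU
    if p1 >= p2:
        return p1, (yL, xL)   # affine piece 1 is active
    return p2, (yU, xU)       # affine piece 2 is active

# Relaxation of x*y on the box [-1, 2] x [0, 3].
xL, xU, yL, yU = -1.0, 2.0, 0.0, 3.0
for x, y in [(0.0, 1.0), (1.5, 2.5), (0.5, 1.5)]:
    cv, sub = mccormick_cv(x, y, xL, xU, yL, yU)
    print(f"(x, y) = ({x}, {y}): x*y = {x * y:5.2f}, cv = {cv:5.2f}, subgradient = {sub}")
```

Where the two affine pieces tie, either plane's gradient (or any convex combination) is a valid subgradient, which is precisely the nonsmoothness that the thesis's sensitivity results are designed to handle.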
88

Error Estimation and Grid Adaptation for Functional Outputs using Discrete-Adjoint Sensitivity Analysis

Balasubramanian, Ravishankar 13 December 2002
Within the design process, computational fluid dynamics is typically used to compute specific quantities that assess the performance of the apparatus under investigation. These quantities are usually integral output functions such as force and moment coefficients. However, to accurately model the configuration, the geometric features and the resulting physical phenomena must be adequately resolved. Due to limited computational resources, a compromise must be made between the fidelity of the solution obtained and the available resources, which creates a degree of uncertainty about the error in the computed output functions. To this end, the current study attempts to address this problem for two-dimensional inviscid, incompressible flows on unstructured grids. The objective is to develop an error estimation and grid adaptation strategy for improving the accuracy of output functions from computational fluid dynamics codes. The present study employs a discrete adjoint formulation to arrive at the error estimates, in which the global error in the output function is related to the local residual errors of the flow solution via adjoint variables as weighting functions. This procedure requires prolongation of the flow and adjoint solutions from coarse to finer grids, and thus different prolongation operators are studied to evaluate their influence on the accuracy of the error correction terms. Using this error correction procedure, two different adaptive strategies may be employed to enhance the accuracy of the chosen output to a prescribed tolerance. While both strategies strive to improve the accuracy of the computed output, the means by which the adaptation parameters are formed differ. The first strategy improves the computable error estimates by forming adaptation parameters based on the level of error in the computable error estimates. A grid adaptive scheme is then implemented that takes into account the error in both the primal and dual solutions.
The second strategy uses the computable error estimates as indicators in an iterative grid adaptive scheme to generate grids that produce accurate estimates of the chosen output. Several test cases are provided to demonstrate the effectiveness and robustness of the error correction procedure and the grid adaptive methods.
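The adjoint-weighted residual idea behind this estimate is easiest to see on a small linear-algebra analogue: for a linear problem A u = f and a linear output J = g^T u, the output error of any approximate solution equals the adjoint-weighted residual exactly. A sketch with invented matrices standing in for a discretized flow problem:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# A well-conditioned linear "flow" problem A u = f and an output J(u) = g^T u.
A = np.eye(n) * 4.0 + rng.normal(0, 0.1, (n, n))
f = rng.normal(size=n)
g = rng.normal(size=n)

u_exact = np.linalg.solve(A, f)

# An inexpensive approximate solution (a few Jacobi sweeps), standing in
# for a coarse-grid flow solution prolongated to the fine problem.
u_approx = np.zeros(n)
D = np.diag(A)
for _ in range(5):
    u_approx = (f - (A @ u_approx - D * u_approx)) / D

# Adjoint-weighted residual: solve A^T psi = g, then delta J ~ psi^T r.
psi = np.linalg.solve(A.T, g)
residual = f - A @ u_approx

true_err = g @ (u_exact - u_approx)
est_err = psi @ residual
print(f"true error in J  : {true_err:+.6e}")
print(f"adjoint estimate : {est_err:+.6e}")
```

For nonlinear flow equations the identity becomes an approximation, which is why the thesis studies prolongation operators and remaining-error indicators rather than relying on the correction alone.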
89

Adjoint-Based Error Estimation and Grid Adaptation for Functional Outputs from CFD Simulations

Balasubramanian, Ravishankar 10 December 2005
This study seeks to reduce the degree of uncertainty that often arises in computational fluid dynamics simulations about the computed accuracy of functional outputs. An error estimation methodology based on discrete adjoint sensitivity analysis is developed to provide a quantitative measure of the error in computed outputs. The developed procedure relates the local residual errors to the global error in the output function via adjoint variables as weight functions. The three major steps in the error estimation methodology are: (1) development of adjoint sensitivity analysis capabilities; (2) development of an efficient error estimation procedure; (3) implementation of an output-based grid adaptive scheme. Each of these steps is investigated. For the first step, parallel discrete adjoint capabilities are developed for the variable Mach version of the U2NCLE flow solver. To compare against and validate the implementation of the adjoint solver, this study also develops direct sensitivity capabilities. A modification is proposed to the commonly used unstructured flux limiters, specifically those of Barth-Jespersen and Venkatakrishnan, to make them piecewise continuous and suitable for sensitivity analysis. A distributed-memory message-passing model is employed for the parallelization of the sensitivity analysis solver, and the consistency of the linearization is demonstrated in sequential and parallel environments. In the second step, to compute the error estimates, the flow and adjoint solutions are prolongated from a coarse mesh to a fine mesh using the meshless Moving Least Squares (MLS) approximation. These error estimates are used as a correction to obtain highly accurate functional outputs and as adaptive indicators in an iterative grid adaptive scheme to enhance the accuracy of the chosen output to a prescribed tolerance. For the third step, an output-based adaptive strategy that takes into account the error in both the primal (flow) and dual (adjoint) solutions is implemented.
A second adaptive strategy based on physics-based feature detection is implemented to compare and demonstrate the robustness and effectiveness of the output-based adaptive approach. As part of the study, a general-element unstructured mesh adaptor employing h-refinement is developed using Python and C++. Error estimation and grid adaptation results are presented for inviscid, laminar and turbulent flows.
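The MLS prolongation step mentioned above can be sketched in one dimension: at each fine-grid point, fit a weighted linear polynomial to nearby coarse-grid data using a Gaussian weight kernel. The kernel width and test function below are arbitrary illustrative choices, not the solver's settings:

```python
import numpy as np

def mls_prolong(x_coarse, u_coarse, x_fine, h=0.12):
    """Moving least squares prolongation in 1-D: at each fine-grid point,
    fit a weighted linear polynomial to coarse-grid data (Gaussian weights)."""
    u_fine = np.empty_like(x_fine)
    for i, xf in enumerate(x_fine):
        w = np.exp(-((x_coarse - xf) / h) ** 2)        # Gaussian weight kernel
        P = np.vstack([np.ones_like(x_coarse), x_coarse - xf]).T
        # Solve the weighted normal equations (P^T W P) a = P^T W u.
        PtW = P.T * w
        coeff = np.linalg.solve(PtW @ P, PtW @ u_coarse)
        u_fine[i] = coeff[0]                           # fitted value at xf
    return u_fine

x_c = np.linspace(0.0, 1.0, 11)                        # coarse grid
x_f = np.linspace(0.0, 1.0, 41)                        # fine grid
u_f = mls_prolong(x_c, np.sin(2 * np.pi * x_c), x_f)
err_max = np.abs(u_f - np.sin(2 * np.pi * x_f)).max()
print(f"max prolongation error: {err_max:.3e}")
```

Unlike nodal interpolation, the fit needs no mesh connectivity, which is what makes the meshless MLS approach convenient between unstructured grids.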
90

Categorization of soil suitability to crop switchgrass at Mississippi, US using geographic information system, multicriteria analysis and sensitivity analysis

Arias, Eduardo Fernando 03 May 2008
Switchgrass (Panicum virgatum) has been widely investigated because of its notable properties as an alternative pasture grass and as an important biofuel source. The goal of this study was to determine soil suitability for switchgrass in Mississippi. A linear weighted additive model was developed to predict site suitability, and multicriteria analysis and sensitivity analysis were used to optimize the model. The model was fit using seven years of field data associated with soil characteristics collected from NRCS-USDA. The best model was selected by correlating estimated biomass yield with each model's soils-based output for switchgrass suitability, with Pearson's r (correlation coefficient) as the criterion used to establish the 'best' soil suitability model. The coefficients associated with the best model were implemented within a Geographic Information System (GIS) to create a map of relative soil suitability for switchgrass in Mississippi. A geodatabase of the associated soil parameters was constructed and is available for future GIS use.
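A minimal sketch of a linear weighted additive suitability model scored by Pearson's r; the criteria, weights, and "observed" yields below are synthetic, purely to show the mechanics of the model-selection step:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical soil criteria for 8 sites, each rescaled to [0, 1]
# (e.g. pH suitability, drainage, water capacity) -- synthetic data.
criteria = rng.uniform(0.0, 1.0, size=(8, 3))

# Synthetic "observed" biomass yields, generated from known weights plus noise.
true_w = np.array([0.5, 0.3, 0.2])
observed_yield = criteria @ true_w + rng.normal(0.0, 0.05, size=8)

def suitability(weights):
    """Linear weighted additive model: score = sum_i w_i * criterion_i."""
    return criteria @ weights

# Rank candidate weight sets by Pearson's r against observed yield.
rs = []
for w in (np.full(3, 1.0 / 3.0), true_w):
    r = np.corrcoef(suitability(w), observed_yield)[0, 1]
    rs.append(r)
    print(f"weights {np.round(w, 2)}: Pearson r = {r:.3f}")
```

The weight set with the highest r would be carried into the GIS layer, mirroring the study's selection of the best soils-based model.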
