81

A Computational Framework for Dam Safety Risk Assessment with Uncertainty Analysis

Srivastava, Anurag 01 May 2013 (has links)
The growing application of risk analysis in dam safety, especially for the owners of large numbers of dams (e.g., U.S. Army Corps of Engineers), has motivated the development of a new tool (DAMRAE) for event tree based dam safety risk analysis. Various theoretical challenges were overcome in formulating the computational framework of DAMRAE and several new computational concepts were introduced. The concepts of Connectivity and Pedigree matrices are proposed to quantify the user-drawn event tree structures with proper accounting of interdependencies among the event tree branches. A generic calculation of Common-Cause Adjustment for the non-mutually exclusive failure modes is implemented along with introducing the new concepts of system response probability and consequence freezing. New output presentation formats such as cumulative risk estimate vs. initiating variable plots to analyze the increase of an incremental (annualized) risk estimate as a function of initiating variable are introduced. An additional consideration is given to the non-breach risk estimates in the risk modeling and new output formats such as non-breach F-N and F-$ charts are included as risk analysis outputs. DAMRAE, a Visual Basic.NET based framework, provides a convenient platform to structure the risk assessment of a dam in its existing state and for alternatives or various stages of implementing a risk reduction plan. The second chapter of the dissertation presents the architectural framework of DAMRAE and describes the underlying theoretical and computational logic employed in the software. An example risk assessment is presented in the third chapter to demonstrate the DAMRAE functionalities. In the fourth chapter, the DAMRAE framework is extended into DAMRAE-U to incorporate uncertainty analysis functionality. 
This chapter describes the aspects and requirements reviewed for uncertainty analysis in the context of dam safety risk assessment, as well as the theoretical challenges overcome in developing the computational framework for DAMRAE-U. The capabilities of DAMRAE-U are illustrated in the fifth chapter, which contains an example dam safety risk assessment with uncertainty analysis. The dissertation concludes with a summary of DAMRAE features and recommendations for further work in the sixth chapter.
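The common-cause adjustment for non-mutually-exclusive failure modes mentioned above can be sketched in a few lines. This is a minimal illustration of the standard complement-rule (De Morgan) combination plus a proportional apportionment back to the individual modes, assuming independent failure modes; it is not DAMRAE's actual implementation, and the function names are hypothetical.

```python
def combined_failure_probability(mode_probs):
    """De Morgan combination: P(any failure) = 1 - prod(1 - p_i),
    assuming the failure modes are independent but not mutually exclusive."""
    p_none = 1.0
    for p in mode_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def common_cause_adjusted(mode_probs):
    """Scale each mode's naive probability so the modes sum to the
    combined (non-mutually-exclusive) total rather than double-counting."""
    total_naive = sum(mode_probs)
    if total_naive == 0.0:
        return [0.0] * len(mode_probs)
    combined = combined_failure_probability(mode_probs)
    return [p * combined / total_naive for p in mode_probs]
```

Summing the adjusted mode probabilities then reproduces the combined system failure probability instead of overcounting overlapping failure events.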
82

MAPPING AND UNCERTAINTY ANALYSIS OF URBAN VEGETATION CARBON DENSITY BY COMBINING SPATIAL MODELING, DE-SHADOW & SPECTRAL UNMIXING ANALYSIS

Qie, Guangping 01 May 2019 (has links) (PDF)
83

Modeling the Dissolution of Immiscible Contaminants in Groundwater for Decision Support

Prieto Estrada, Andres Eduardo 27 June 2023 (has links)
Predicting the dissolution rates of immiscible contaminants in groundwater is crucial for developing environmental remediation strategies, but quantitative modeling efforts are inherently subject to multiple uncertainties. These include unknown residual amounts of non-aqueous phase liquids (NAPL) and source zone dimensions, inconsistent historical monitoring of contaminant mass discharge, and the mathematical simulation of field-scale mass transfer processes. Effective methods for simulating NAPL dissolution must therefore be able to assimilate a variety of data through physical and scalable mass transfer parameters to quantify and reduce site-specific uncertainties. This investigation coupled upscaled and numerical mass transfer modeling with uncertainty analyses to understand and develop data-assimilation and parameter-scaling methods for characterizing NAPL source zones and predicting depletion timeframes. Parameters of key interest regulating kinetic NAPL persistence and contaminant fluxes are residual mass and saturation, but neither can be measured directly at field sites. However, monitoring and characterization measurements can constrain source zone dimensions, where NAPL mass is distributed. This work evaluated the worth of source zone delineation and dissolution monitoring for estimating NAPL mass and mass transfer coefficients at multiple scales of spatial resolution. Mass transfer processes in controlled laboratory and field experiments were analyzed by simulating monitored dissolved-phase concentrations through the parameterization of explicit and lumped system properties in volume-averaged (VA) and numerical models of NAPL dissolution, respectively. Both methods were coupled with uncertainty analysis tools to investigate the relationship between data availability and model design for accurately constraining system parameters and predictions. 
The modeling approaches were also combined for reproducing experimental bulk effluent rates in discretized domains, explicitly parameterizing mass transfer coefficients at multiple grid scales. Research findings linked dissolved-phase monitoring signatures to model estimates of NAPL persistence, supported by source zone delineation data. The accurate characterization of source zone properties and kinetic dissolution rates, governing NAPL longevity, was achieved by adjusting model parameterization complexity to data availability. While multistage effluent rates accurately constrained explicit-process parameters in VA models, spatially-varying lumped-process parameters estimated from late dissolution stages also constrained unbiased predictions of NAPL depletion. Advantages of the numerical method included the simultaneous assimilation of bulk and high-resolution monitoring data for characterizing the distribution of residual NAPL mass and dissolution rates, whereas the VA method predicted source dissipation timeframes from delineation data alone. Additionally, comparative modeling analyses resulted in a methodology for scaling VA mass transfer coefficients to simulate NAPL dissolution and longevity at multiple grid resolutions. This research suggests the feasibility of empirically constraining lumped-process parameters by applying VA concepts to numerical mass transfer and transport models, enabling the assimilation of monitoring and source delineation data to reduce site-specific uncertainties. / Doctor of Philosophy / Predicting the dissolution rates of immiscible contaminants in groundwater is crucial for developing environmental restoration strategies, but quantitative modeling efforts are inherently subject to multiple uncertainties. These include unknown mass and dimensions of contaminant source zones, inconsistent groundwater monitoring, and the mathematical simulation of physical processes controlling dissolution rates at field scales.
Effective simulation methods must therefore be able to leverage a variety of data through rate-limiting parameters suitable for quantifying and reducing uncertainties at contaminated sites. This investigation integrated mathematical modeling with uncertainty analyses to understand and develop data-driven approaches for characterizing contaminant source zones and predicting dissolution rates at multiple measurement scales. Parameters of key interest regulating the lifespan of source zones are the distribution and amount of residual contaminant mass, which cannot be measured directly at field sites. However, monitoring and site characterization measurements can constrain source zone dimensions, where contaminant mass is distributed. This work evaluated the worth of source zone delineation and groundwater monitoring for estimating contaminant mass and dissolution rates at multiple measurement scales. Rate-limiting processes in controlled laboratory and field experiments were analyzed by simulating monitored groundwater concentrations through the explicit and lumped representation of system properties in volume-averaged (VA) and numerical models of contaminant dissolution, respectively. Both methods were coupled with uncertainty analysis tools to investigate the relationship between data availability and model design for accurately constraining system parameters and predictions. The approaches were also combined for predicting average contaminant concentrations at multiple scales of spatial resolution. Research findings linked groundwater monitoring profiles to model estimates of contaminant persistence, supported by source zone delineation data. The accurate characterization of source zone properties and contaminant dissolution rates was achieved by adjusting model complexity to data availability. 
While monitoring profiles indicating multi-rate contaminant dissolution accurately constrained explicit-process parameters in VA models, spatially-varying lumped parameters estimated from late dissolution stages also constrained unbiased predictions of source mass depletion. Advantages of the numerical method included the simultaneous utilization of average and spatially-detailed monitoring data for characterizing the distribution of contaminant mass and dissolution rates, whereas the VA method predicted source longevity timeframes from delineation data alone. Additionally, comparative modeling analyses resulted in a methodology for scaling estimable VA parameters to predict contaminant dissolution rates at multiple scales of spatial resolution. This research suggests the feasibility of empirically constraining lumped parameters by applying VA concepts to numerical models, enabling a comprehensive data-driven methodology to quantify environmental risk and support groundwater cleanup designs.
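A screening-level sketch of kinetic source depletion helps make these ideas concrete. The power-law source depletion model below (effluent concentration falling as a power of remaining source mass) is a common simplification in the NAPL literature, not the dissertation's volume-averaged or numerical models; all parameter names and values are illustrative.

```python
def source_depletion(m0, c0, q, gamma, dt, t_end):
    """Power-law source depletion model (a common screening approximation):
    effluent concentration C = C0 * (M/M0)**gamma, with the mass balance
    dM/dt = -Q*C integrated by forward Euler.
    Returns parallel lists (times, masses, concentrations)."""
    t, m = 0.0, m0
    times, masses, concs = [], [], []
    while t <= t_end and m > 0.0:
        c = c0 * (m / m0) ** gamma
        times.append(t)
        masses.append(m)
        concs.append(c)
        m = max(m - q * c * dt, 0.0)  # mass removed by dissolution over dt
        t += dt
    return times, masses, concs
```

For gamma = 1 the model reduces to simple exponential decay of source mass, which provides a convenient analytical check of the numerical integration.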
84

Fuzzy-Analysis in a Generic Polymorphic Uncertainty Quantification Framework

Richter, Bertram 30 November 2022 (has links)
In this thesis, a framework for generic uncertainty analysis is developed. The two basic uncertainty characteristics, aleatoric and epistemic uncertainty, are differentiated, and polymorphic uncertainty as the combination of these two characteristics is discussed. The main focus is on epistemic uncertainty, with fuzziness as an uncertainty model. Properties and classes of fuzzy quantities are discussed, along with some information reduction measures, which reduce a fuzzy quantity to a characteristic value. Analysis approaches for aleatoric, epistemic, and polymorphic uncertainty are discussed. For fuzzy analysis, α-level-based and α-level-free methods are described. As a hybridization of both methods, non-flat α-level-optimization is proposed. For numerical uncertainty analysis, the framework PUQpy, which stands for “Polymorphic Uncertainty Quantification in Python”, is introduced. The conception, structure, data structures, modules, and design principles of PUQpy are documented. Sequential Weighted Sampling (SWS) is presented as an optimization algorithm for general-purpose optimization as well as for fuzzy analysis. Slice Sampling as a component of SWS is shown. Routines to update Pareto fronts, which are required for optimization, are benchmarked. Finally, PUQpy is used to analyze example problems as a proof of concept. In these problems, analytical functions with uncertain parameters, characterized by fuzzy and polymorphic uncertainty, are examined.
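The α-level-based fuzzy analysis mentioned above can be sketched briefly: discretize membership into α-cuts and bound the response function over each α-cut box. The grid search below is a brute-force stand-in for the optimization step (not PUQpy's implementation), assuming triangular fuzzy inputs; the function names are hypothetical.

```python
import numpy as np

def alpha_cut_triangular(a, m, b, alpha):
    """alpha-cut interval of a triangular fuzzy number (a, m, b)."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def propagate_alpha_levels(f, tri_params, alphas, n_grid=51):
    """alpha-level-based fuzzy analysis: for each alpha, bound f over the
    Cartesian product of the input alpha-cut intervals by grid search
    (a brute-force stand-in for the optimization usually used here).
    Returns a list of (alpha, lower_bound, upper_bound) triples."""
    results = []
    for alpha in alphas:
        intervals = [alpha_cut_triangular(a, m, b, alpha) for a, m, b in tri_params]
        grids = np.meshgrid(*[np.linspace(lo, hi, n_grid) for lo, hi in intervals])
        vals = f(*grids)
        results.append((alpha, float(vals.min()), float(vals.max())))
    return results
```

The resulting interval bounds per α-level reconstruct the fuzzy membership function of the output quantity; in practice the grid search is replaced by a proper optimizer, since grid cost grows exponentially with the number of inputs.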
85

Predicting the Effects of Dimensional and Material Property Variations in Micro Compliant Mechanisms

Wittwer, Jonathan W. 25 July 2001 (has links) (PDF)
Surface micromachining of micro-electro-mechanical systems (MEMS), like all other fabrication processes, has inherent variation that leads to uncertain material and dimensional parameters. To obtain accurate and reliable predictions of mechanism behavior, the effects of these variations need to be analyzed. This thesis expands already existing tolerance and uncertainty analysis methods to apply to micro compliant mechanisms. For simple compliant members, explicit equations can be used in uncertainty analysis. However, for a nonlinear implicit system of equations, the direct linearization method may be used to obtain sensitivities of output parameters to small changes in known variables. This is done by including static equilibrium equations and pseudo-rigid-body model relationships with the kinematic vector loop equations. Examples are used to show a comparison of this method to other deterministic and probabilistic methods and finite element analysis.
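The direct linearization idea above — first-order propagation of input standard deviations through output sensitivities — can be sketched generically. The finite-difference version below is a minimal illustration, not the thesis's formulation with kinematic vector loop and equilibrium equations; the cantilever-stiffness example and its parameter values are hypothetical.

```python
def linearized_uncertainty(f, x, sigmas, h=1e-6):
    """First-order (direct linearization) uncertainty propagation:
    sigma_y**2 = sum_i (df/dx_i)**2 * sigma_i**2,
    with sensitivities from central finite differences using a
    relative step to handle micro-scale parameter magnitudes."""
    var = 0.0
    for i, sigma in enumerate(sigmas):
        step = h * abs(x[i]) if x[i] != 0.0 else h
        xp, xm = list(x), list(x)
        xp[i] += step
        xm[i] -= step
        dfdx = (f(*xp) - f(*xm)) / (2.0 * step)
        var += (dfdx * sigma) ** 2
    return var ** 0.5

def stiffness(E, w, t, L):
    """Closed-form cantilever bending stiffness k = E*w*t^3 / (4*L^3)
    (illustrative only; micro compliant members use pseudo-rigid-body
    relationships on top of such expressions)."""
    return E * w * t**3 / (4.0 * L**3)
```

For example, propagating tolerances on Young's modulus, beam width, thickness, and length through `stiffness` shows the cubic thickness dependence dominating the stiffness uncertainty, which matches the thesis's motivation for analyzing dimensional variation in surface micromachining.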
86

Computational Fluid Dynamics Uncertainty Analysis For Payload Fairing Spacecraft Environmental Control Systems

Groves, Curtis 01 January 2014 (has links)
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional “validation by test only” mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STAR-CCM+, and OpenFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in “Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations”.
This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example, the reattachment length of a backward-facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
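The three-grid procedure referenced above follows the familiar Richardson-extrapolation / Grid Convergence Index pattern. The sketch below shows that pattern in its textbook form, assuming a constant grid refinement ratio and monotone convergence; it is a simplified stand-in for the full verification methodology cited in the dissertation.

```python
import math

def grid_convergence_index(f1, f2, f3, r=2.0, fs=1.25):
    """Three-grid discretization-uncertainty estimate.
    f1, f2, f3: solutions on fine, medium, and coarse grids, with a
    constant refinement ratio r between successive grids; fs is the
    safety factor applied to the error band.
    Returns (observed order p, Richardson-extrapolated value, GCI on
    the fine grid as a relative uncertainty)."""
    # Observed order of accuracy from the three solutions
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)
    # Richardson extrapolation to an estimated grid-independent value
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)
    # Grid Convergence Index: banded relative error on the fine grid
    gci_fine = fs * abs((f2 - f1) / f1) / (r**p - 1.0)
    return p, f_exact, gci_fine
```

With a second-order scheme and halved grid spacing, the recovered order p should be close to 2, and the GCI gives the error bar to attach to the fine-grid prediction.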
87

Fuzzy evidence theory and Bayesian networks for process systems risk analysis

Yazdi, M., Kabir, Sohag 21 October 2019 (has links)
Quantitative risk assessment (QRA) approaches systematically evaluate the likelihood, impacts, and risk of adverse events. QRA using fault tree analysis (FTA) is based on the assumptions that failure events have crisp probabilities and that they are statistically independent. The crisp probabilities of the events are often absent, which leads to data uncertainty, while the independence assumption leads to model uncertainty. Experts’ knowledge can be utilized to obtain unknown failure data; however, this process is itself subject to different issues such as imprecision, incompleteness, and lack of consensus. For this reason, to minimize the overall uncertainty in QRA, in addition to addressing the uncertainties in the knowledge, it is equally important to combine the opinions of multiple experts and update prior beliefs based on new evidence. In this article, a novel methodology is proposed for QRA by combining fuzzy set theory and evidence theory with Bayesian networks to describe the uncertainties, aggregate experts’ opinions, and update prior probabilities when new evidence becomes available. Additionally, sensitivity analysis is performed to identify the most critical events in the FTA. The effectiveness of the proposed approach has been demonstrated via application to a practical system. / The research of Sohag Kabir was partly funded by the DEIS project (Grant Agreement 732242).
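The aggregation and gate operations at the core of such a methodology can be sketched with triangular fuzzy probabilities. This is a minimal, generic illustration — weighted expert aggregation plus endpoint-wise AND/OR gates — not the article's exact formulation; the function names are hypothetical.

```python
def aggregate_experts(opinions, weights):
    """Weighted aggregation of triangular fuzzy failure probabilities
    (a, m, b) elicited from multiple experts."""
    total = sum(weights)
    return tuple(sum(w * o[k] for o, w in zip(opinions, weights)) / total
                 for k in range(3))

def fuzzy_and(p, q):
    """AND-gate: endpoint-wise product of triangular fuzzy probabilities
    (valid because the product is monotone in each argument on [0, 1])."""
    return tuple(a * b for a, b in zip(p, q))

def fuzzy_or(p, q):
    """OR-gate: endpoint-wise 1 - (1-p)(1-q), again monotone so the
    triangular endpoints map directly."""
    return tuple(1.0 - (1.0 - a) * (1.0 - b) for a, b in zip(p, q))
```

Composing these gates bottom-up over a fault tree yields a fuzzy top-event probability, which can then be defuzzified or fed into a Bayesian network as a prior.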
88

Machine Learning Integrated Analytics of Electrode Microstructures

Chance Norris (13872521) 17 October 2022 (has links)
In the pursuit to develop safe and reliable lithium-ion batteries, it is imperative to understand all the variabilities that revolve around electrodes. Current cutting-edge physics-based simulations employ an image-based technique, which uses images of electrodes to extract effective properties for those simulations or runs the simulation on the imaged structure itself. These electrode images, however, have spatial variability, varied particle morphologies, and aberrations that need to be accounted for. This work seeks to quantify these variabilities and pinpoint uncertainties that arise in image-based simulations by using machine learning and other data analytic techniques. First, we looked at eighteen graphite electrodes with various particle morphologies to gain a better understanding of how heterogeneity and anisotropy interplay with each other. Moreover, we wanted to see if more anisotropic particles led to greater heterogeneity and a higher propensity for changes in effective properties. Multiple image-based algorithms were used to extract tortuosity and conductivity and to elucidate particle shape without the need for segmentation of individual particles. We found that highly anisotropic particles induce greater heterogeneity in the electrode images, but also that tightly packed isotropic particles can do the same. These results arise from porous pathways becoming bottlenecked, resulting in a greater likelihood of changes in effective properties with minimal changes in particle arrangement. Next, a model was deployed to see how these anisotropies and heterogeneities impact electrochemical performance. The question was whether particle morphology and directional dependencies would impact plating energy and heat generation, leading to poor electrochemical performance. Using a pseudo-2D model, we elucidated that the larger the tortuosity, the greater the propensity to plate and generate heat.
Throughout these studies, it became clear that the segmentation of the greyscale images was the origin of subjectivity in these studies. We sought to quantify this through machine learning techniques, employing a Bayesian convolutional neural network. In doing so, we aimed to see whether image quality impacts uncertainties in our effective properties, and whether we might be able to predict this from image characteristics. Predicting effective-property uncertainty from image quality did not prove possible, but predicting physical properties from geometric features did. With the most uncertain particles occurring at the phase boundaries, morphologies with a large specific surface area presented the highest structural uncertainty. Lastly, we wanted to see how uncertainty in the carbon binder domain morphology impacts our effective properties. Using a set of sixteen NMC electrodes with specified carbon binder domain weight percentages, we examined how uncertainties in morphology, segmentation, spatial variability, and manufacturing variability impact effective properties. We expected an interplay among these uncertainties and the various effective properties, with manufacturing variability potentially playing a large role. Using surrogate models and statistical methods, we show that there is an ebb and flow in the uncertainties, and that the effective properties depend on which uncertainty is being changed.
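The segmentation-subjectivity question above can be illustrated with a toy experiment: vary the segmentation threshold over a plausible range and observe the spread in a derived property such as porosity. The synthetic image and threshold range below are hypothetical stand-ins for real electrode micrographs, not the study's Bayesian CNN approach.

```python
import numpy as np

def porosity_uncertainty(image, thresholds):
    """Propagate segmentation-threshold uncertainty into porosity:
    segment a greyscale microstructure image at several plausible
    thresholds (pixels below the threshold count as pore space) and
    report the mean and standard deviation of the resulting porosity."""
    porosities = [float((image < t).mean()) for t in thresholds]
    return float(np.mean(porosities)), float(np.std(porosities))

# Synthetic stand-in for a greyscale electrode image
rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.15, size=(128, 128))
mean_phi, sd_phi = porosity_uncertainty(img, np.linspace(0.35, 0.45, 11))
```

A nonzero spread `sd_phi` even on a clean synthetic image shows why threshold choice alone injects uncertainty into every downstream effective-property calculation.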
89

Evaluating Data Averaging Techniques for High Gradient Flow Fields through Uncertainty Analysis

Heng, Boon Liang 04 August 2001 (has links)
Experimental data from two cold airflow turbine tests were evaluated. The two tests had different, relatively high gradient flow fields at the turbine exit. The objective of the research was to evaluate data requirements, including the averaging techniques, the number of measurements, and the types of measurements needed, for high gradient flow fields. Guidelines could then be established for future tests that could allow reduction in test time and costs. An enormous amount of data was collected for both tests. These test data were then manipulated in various ways to study the effects of the averaging techniques, the number of measurements, and the types of measurements on the turbine efficiency. The effects were evaluated relative to maintaining a specific accuracy (1%) for the turbine efficiency. Mass and area averaging were applied to each case. A detailed uncertainty analysis of each case was done to evaluate the uncertainty of the efficiency calculations. A new uncertainty analysis technique was developed to include conceptual bias estimates for the spatially averaged values required in the efficiency equations. Conceptual bias estimates were made for each test case, and these estimates can be used as guidelines for similar turbine tests in the future. The evaluations proved that mass averaging and taking measurements around the full 360 degrees were crucial for obtaining accurate efficiency calculations in high gradient flow fields. In addition, circumferential averaging of wall-static pressure measurements could be used rather than measuring static pressures across the annulus of the high gradient flow field while still maintaining highly accurate efficiency calculations. This is an important finding in that considerable time and cost savings may be realized due to the decreased test time, probe measurements, and calibration requirements.
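The distinction between area and mass averaging is easy to sketch. The functions below are a generic illustration, assuming discrete probe measurements with associated annulus areas and local mass fluxes; they are not the study's actual data-reduction code.

```python
import numpy as np

def area_average(values, areas):
    """Area-weighted average over a traverse of probe measurements."""
    return float(np.sum(values * areas) / np.sum(areas))

def mass_average(values, mass_fluxes, areas):
    """Mass-weighted average: weight each measurement by the local
    mass flux (rho * V) times its associated area, so regions carrying
    more flow count more toward the average."""
    w = mass_fluxes * areas
    return float(np.sum(values * w) / np.sum(w))
```

In a uniform flow the two averages coincide; in a high gradient flow field they diverge, which is why the mass average is the physically meaningful one for efficiency calculations.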
90

Thermoelectric Energy Conversion: Advanced Thermoelectric Analysis and Materials Development

Mackey, Jon A. 26 May 2015 (has links)
No description available.
