71 |
Fuzzy-Analysis in a Generic Polymorphic Uncertainty Quantification Framework
Richter, Bertram, 30 November 2022
In this thesis, a framework for generic uncertainty analysis is developed. The two basic uncertainty characteristics, aleatoric and epistemic uncertainty, are differentiated, and polymorphic uncertainty, the combination of the two, is discussed. The main focus is on epistemic uncertainty, with fuzziness as the uncertainty model. Properties and classes of fuzzy quantities are discussed, and information reduction measures that condense a fuzzy quantity to a characteristic value are briefly reviewed. Analysis approaches for aleatoric, epistemic, and polymorphic uncertainty are discussed. For fuzzy analysis, α-level-based and α-level-free methods are described. As a hybridization of both, non-flat α-level optimization is proposed.
For numerical uncertainty analysis, the framework PUQpy, which stands for “Polymorphic Uncertainty Quantification in Python”, is introduced. The conception, structure, data structures, modules, and design principles of PUQpy are documented. Sequential Weighted Sampling (SWS) is presented as an algorithm for general-purpose optimization as well as for fuzzy analysis, and Slice Sampling is shown as a component of SWS. Routines to update Pareto fronts, which the optimization requires, are benchmarked.
Finally, PUQpy is used to analyze example problems as a proof of concept. These problems examine analytical functions whose uncertain parameters are characterized by fuzzy and polymorphic uncertainty.
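The α-level-based fuzzy analysis mentioned in this abstract can be illustrated with a minimal sketch: a triangular fuzzy input is discretized into α-cut intervals, and the output interval at each level is found by optimization. This is only a toy version, assuming a triangular membership function and brute-force grid search in place of the thesis's SWS algorithm; the quadratic model is illustrative.

```python
# Sketch of alpha-level-based fuzzy analysis (alpha-cut optimization).
# Assumptions: triangular fuzzy input, brute-force grid search as the
# interval optimizer; not the SWS algorithm from the thesis.

def triangular_alpha_cut(left, peak, right, alpha):
    """Interval [lo, hi] of a triangular fuzzy number at membership level alpha."""
    lo = left + alpha * (peak - left)
    hi = right - alpha * (right - peak)
    return lo, hi

def fuzzy_eval(model, left, peak, right, n_levels=5, n_grid=101):
    """Propagate a triangular fuzzy input through `model` via alpha-cuts.

    For each alpha level, the output interval is found by grid search
    (min and max of the model) over the input interval.
    """
    result = []
    for i in range(n_levels + 1):
        alpha = i / n_levels
        lo, hi = triangular_alpha_cut(left, peak, right, alpha)
        ys = [model(lo + k * (hi - lo) / (n_grid - 1)) for k in range(n_grid)]
        result.append((alpha, min(ys), max(ys)))
    return result

# Example: quadratic response with the fuzzy input "about 2" = (1, 2, 4)
cuts = fuzzy_eval(lambda x: x * x, 1.0, 2.0, 4.0)
```

Stacking the resulting output intervals over the α levels reconstructs the membership function of the fuzzy output quantity.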
|
72 |
Predicting the Effects of Dimensional and Material Property Variations in Micro Compliant Mechanisms
Wittwer, Jonathan W., 25 July 2001
Surface micromachining of micro-electro-mechanical systems (MEMS), like all other fabrication processes, has inherent variation that leads to uncertain material and dimensional parameters. To obtain accurate and reliable predictions of mechanism behavior, the effects of these variations must be analyzed. This thesis extends existing tolerance and uncertainty analysis methods to micro compliant mechanisms. For simple compliant members, explicit equations can be used in uncertainty analysis. For a nonlinear implicit system of equations, however, the direct linearization method may be used to obtain sensitivities of output parameters to small changes in known variables. This is done by including static equilibrium equations and pseudo-rigid-body model relationships with the kinematic vector loop equations. Examples compare this method to other deterministic and probabilistic methods and to finite element analysis.
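The linearized propagation idea behind this abstract can be sketched briefly: approximate sensitivities of an output to each uncertain input, then combine them in root-sum-square fashion. This is a hedged sketch, not the thesis's direct linearization of implicit vector-loop equations; it uses central finite differences, assumes independent inputs, and the cantilever-stiffness model and tolerance values are illustrative.

```python
# First-order (linearized) uncertainty propagation sketch.
# Sensitivities via central finite differences; RSS combination
# assumes independent inputs. Model and tolerances are illustrative.
import math

def propagate(model, x, u, h=1e-6):
    """Return (y, u_y): model output and its first-order standard uncertainty.

    x : nominal input values; u : standard uncertainties of the inputs.
    """
    y = model(x)
    var = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dydx = (model(xp) - model(xm)) / (2 * h)  # central difference
        var += (dydx * u[i]) ** 2
    return y, math.sqrt(var)

# Illustrative cantilever stiffness k = E*w*t^3 / (4*L^3) with
# dimensional and material tolerances typical of surface micromachining
beam = lambda p: p[0] * p[1] * p[2] ** 3 / (4 * p[3] ** 3)
k, u_k = propagate(beam,
                   [165e9, 20e-6, 2e-6, 100e-6],   # E, w, t, L
                   [5e9, 0.2e-6, 0.1e-6, 0.5e-6])  # standard uncertainties
```

The dominant term here is the thickness sensitivity (cubic in t), which is why small film-thickness variations matter so much for compliant MEMS.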
|
73 |
Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems
Groves, Curtis, 1 January 2014
Spacecraft thermal protection systems are at risk of being damaged by airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product, and the method could provide an alternative to the traditional “validation by test only” mentality. It could be extended to other disciplines and has the potential to provide uncertainty estimates for any numerical simulation, lowering the cost of performing these verifications while increasing confidence in the predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research comprises three objectives. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary Environmental Control System and spacecraft configurations. Several commercially available and open-source solvers can model the turbulent, highly three-dimensional, incompressible flow regime; the proposed method uses FLUENT, STAR-CCM+, and OpenFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in “Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations”.
This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions, and it accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example, the re-attachment length of a backward-facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and via numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
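The three-grid step described above can be sketched with the standard Richardson-extrapolation calculation: estimate the observed order of accuracy from coarse, medium, and fine solutions, then convert the fine-grid change into an error band. This is a minimal sketch of that generic procedure, not the full methodology cited in the abstract; the grid values and refinement ratio are illustrative, and monotonic convergence is assumed.

```python
# Three-grid Richardson extrapolation / grid-convergence sketch.
# Assumptions: constant refinement ratio r, monotonic convergence;
# the solution values below are illustrative airflow speeds.
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from three systematically refined grids."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def gci_fine(f_medium, f_fine, r, p, fs=1.25):
    """Grid Convergence Index (relative error band) for the fine grid.

    fs is a safety factor commonly used with three-grid studies.
    """
    return fs * abs((f_medium - f_fine) / f_fine) / (r ** p - 1)

# Airflow-speed solutions (m/s) on coarse, medium, fine grids, ratio r = 2
f3, f2, f1 = 6.0, 5.2, 5.0
p = observed_order(f3, f2, f1, 2.0)
band = gci_fine(f2, f1, 2.0, p)   # relative numerical-uncertainty band
```

The resulting band is the numerical contribution to the error bars; input-parameter contributions are added separately in the full methodology.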
|
74 |
Machine Learning Integrated Analytics of Electrode Microstructures
Norris, Chance, 17 October 2022
<p>In the pursuit of safe and reliable lithium-ion batteries, it is imperative to understand the variabilities that surround electrodes. Current cutting-edge physics-based simulations employ an image-based technique: images of electrodes are used to extract effective properties for those simulations, or the simulation is run on the imaged structure itself. However, electrode images carry spatial variability, varied particle morphologies, and aberrations that must be accounted for. This work seeks to quantify these variabilities and pinpoint the uncertainties that arise in image-based simulations using machine learning and other data-analytic techniques. First, we examined eighteen graphite electrodes with various particle morphologies to better understand how heterogeneity and anisotropy interact, and whether more anisotropic particles lead to greater heterogeneity and a higher propensity for changes in effective properties. Multiple image-based algorithms were used to extract tortuosity and conductivity and to elucidate particle shape without segmenting individual particles. We found that highly anisotropic particles induce greater heterogeneity in the electrode images, but tightly packed isotropic particles can do the same. These results arise from porous pathways becoming bottlenecked, which makes effective properties more likely to change with minimal changes in particle arrangement. Next, a model was deployed to see how these anisotropies and heterogeneities affect electrochemical performance, asking whether particle morphology and directional dependencies influence plating energy and heat generation and thereby degrade performance. Using a pseudo-2D model, we showed that the larger the tortuosity, the greater the propensity to plate and to generate heat.
Throughout these studies, it became clear that segmentation of the greyscale images was the origin of subjectivity. We sought to quantify this with machine learning, employing a Bayesian convolutional neural network to see whether image quality drives uncertainty in our effective properties, and whether that uncertainty can be predicted from image characteristics. Predicting effective-property uncertainty from image quality proved infeasible, but predicting physical properties from geometric features was possible. With the most uncertain particles occurring at phase boundaries, morphologies with a large specific surface area presented the highest structural uncertainty. Lastly, we examined how uncertainty in carbon binder domain morphology affects effective properties. Using a set of sixteen NMC electrodes with specified carbon binder domain weight percentages, we assessed how uncertainties in morphology, segmentation, spatial variability, and manufacturing variability impact effective properties, expecting an interplay among them and a possibly dominant role for manufacturing variability. Using surrogate models and statistical methods, we show that there is an ebb and flow among the uncertainties: which effective properties are affected depends on which uncertainty is varied.</p>
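The segmentation subjectivity this abstract identifies can be made concrete with a toy experiment: sweep the greyscale threshold over a plausible band and record the spread of a derived property such as porosity. This is a hedged illustration of the general idea only; the 4x4 "image" patch and threshold band are invented for the sketch, not data from the thesis.

```python
# Sketch: how segmentation-threshold choice propagates into an
# effective property. Sweeping the threshold over a plausible band
# yields a spread in porosity, a simple proxy for segmentation
# uncertainty. The patch and band below are illustrative.

def porosity(image, threshold):
    """Fraction of pixels classified as pore (greyscale below threshold)."""
    flat = [v for row in image for v in row]
    return sum(v < threshold for v in flat) / len(flat)

# 4x4 greyscale patch (0 = dark pore, 255 = bright solid phase)
patch = [[ 30,  40, 200, 210],
         [ 35, 120, 130, 220],
         [ 45,  50, 140, 230],
         [ 25, 110, 150, 240]]

# A band of plausible thresholds instead of one "subjective" choice
estimates = [porosity(patch, t) for t in range(80, 161, 20)]
spread = max(estimates) - min(estimates)
```

Pixels with greyscale values near the phase boundary flip class as the threshold moves, which is exactly why boundary-heavy, high-specific-surface-area morphologies show the largest structural uncertainty.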
|
75 |
Evaluating Data Averaging Techniques for High Gradient Flow Fields through Uncertainty Analysis
Heng, Boon Liang, 4 August 2001
Experimental data from two cold airflow turbine tests were evaluated. The two tests had different, relatively high gradient flow fields at the turbine exit. The objective of the research was to evaluate data requirements, including the averaging techniques, the number of measurements, and the types of measurements needed, for high gradient flow fields. Guidelines could then be established for future tests that could allow reductions in test time and costs. An enormous amount of data was collected for both tests. These test data were then manipulated in various ways to study the effects of the averaging techniques, the number of measurements, and the types of measurements on the turbine efficiency. The effects were evaluated relative to maintaining a specific accuracy (1%) for the turbine efficiency. Mass and area averaging were applied to each case, and a detailed uncertainty analysis of each case was performed to evaluate the uncertainty of the efficiency calculations. A new uncertainty analysis technique was developed to include conceptual bias estimates for the spatially averaged values required in the efficiency equations. Conceptual bias estimates were made for each test case, and these estimates can be used as guidelines for similar turbine tests in the future. The evaluations showed that mass averaging and taking measurements around the full 360 degrees were crucial for obtaining accurate efficiency calculations in high gradient flow fields. In addition, circumferential averaging of wall-static pressure measurements could be used, rather than measuring static pressures across the annulus of the high gradient flow field, while still maintaining highly accurate efficiency calculations. These are important findings, in that considerable time and cost savings may be realized through decreased test time, probe measurements, and calibration requirements.
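The mass-versus-area averaging comparison at the heart of this work can be sketched in a few lines: area averaging weights each traverse station by its ring area, while mass averaging weights it by the local mass flow through that ring. The annular-ring geometry and the traverse values below are illustrative, not data from the tests.

```python
# Sketch of mass averaging vs. area averaging over a probe traverse.
# Station values, ring areas, and mass fluxes are illustrative.

def area_average(values, areas):
    """Area-weighted average over traverse stations."""
    return sum(v * a for v, a in zip(values, areas)) / sum(areas)

def mass_average(values, mass_fluxes, areas):
    """Mass-weighted average: each station weighted by its local mass flow."""
    mdots = [rho_u * a for rho_u, a in zip(mass_fluxes, areas)]
    return sum(v * m for v, m in zip(values, mdots)) / sum(mdots)

# Total-temperature traverse across an annulus with a high-gradient core
T = [300.0, 320.0, 340.0]       # K at three radial stations
A = [0.010, 0.012, 0.010]       # m^2 annular ring areas
rho_u = [50.0, 80.0, 40.0]      # kg/(s*m^2) local mass flux

T_area = area_average(T, A)
T_mass = mass_average(T, rho_u, A)
```

In a high gradient flow field the two averages diverge, because the mass flux is itself non-uniform; that divergence is what drives the efficiency errors the thesis quantifies.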
|
76 |
Thermoelectric Energy Conversion: Advanced Thermoelectric Analysis and Materials Development
Mackey, Jon A., 26 May 2015
No description available.
|
77 |
Assessment of Uncertainty in Core Body Temperature due to Variability in Tissue Parameters
Kalathil, Robins T., January 2016
No description available.
|
78 |
Analysis of Transient Overpower Scenarios in Sodium Fast Reactors
Grabaskas, David, 20 August 2010
No description available.
|
79 |
Uncertainty Analysis in Lattice Reactor Physics Calculations
Ball, Matthew R., 04 1900
<p>Comprehensive sensitivity and uncertainty analysis has been performed for light-water-reactor and heavy-water-reactor lattices using three techniques: adjoint-based sensitivity analysis, Monte Carlo sampling, and direct numerical perturbation. The adjoint analysis was performed using a widely accepted, commercially available code, whereas the Monte Carlo sampling and direct numerical perturbation were performed using new codes developed as part of this work. Uncertainties associated with fundamental nuclear data accompany evaluated nuclear data libraries in the form of covariance matrices. Because nuclear data are important parameters in reactor physics calculations, any associated uncertainty reduces confidence in the calculation results, and quantifying output uncertainties is necessary to adequately establish the safety margins of nuclear facilities. In this work, the propagation of uncertainties associated with both physics parameters (e.g., microscopic cross-sections) and lattice model parameters (e.g., material temperature) has been investigated, and the uncertainty of all relevant lattice calculation outputs, including the neutron multiplication constant and few-group homogenized cross-sections, has been quantified. Sensitivity and uncertainty effects arising from the resonance self-shielding of microscopic cross-sections were addressed using a novel set of resonance integral corrections derived from perturbations in their infinite-dilution counterparts. The covariance of the U-238 radiative capture cross-section was found to be the dominant contributor to the uncertainties of lattice properties. The uncertainty associated with predicting isotope concentrations during burnup is also significant, even when uncertainties of fission yields and decay rates are neglected.
Such burnup-related uncertainties result solely from the uncertainty of fission and radiative capture rates arising from physics-parameter covariance. The quantified uncertainties of lattice calculation outputs described in this work are suitable for use as input uncertainties to subsequent reactor physics calculations, including reactor core analysis employing neutron diffusion theory.</p> / Doctor of Philosophy (PhD)
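The adjoint-based route this abstract describes ultimately reduces to the familiar "sandwich rule": the relative variance of an output equals S^T C S, where S is the vector of relative sensitivities and C the nuclear-data relative covariance matrix. The sketch below shows only that final quadratic form; the 2x2 sensitivity and covariance values are invented for illustration, loosely standing in for two cross-section channels.

```python
# Sandwich-rule sketch for sensitivity/uncertainty analysis:
# output relative variance = S^T C S. The numbers are illustrative,
# not evaluated nuclear data.

def sandwich_variance(S, C):
    """Quadratic form S^T C S for sensitivities S and covariance matrix C."""
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

# Relative sensitivities of k-eff to two cross-sections
S = [-0.25, 0.40]
# Relative covariance matrix (variances on the diagonal,
# a negative cross-channel correlation off the diagonal)
C = [[4.0e-4, -1.0e-4],
     [-1.0e-4, 1.0e-4]]

rel_var = sandwich_variance(S, C)
rel_std = rel_var ** 0.5   # relative standard deviation of the output
```

The off-diagonal covariance terms can either inflate or cancel the diagonal contributions, which is why full covariance matrices, not just variances, are needed for credible safety margins.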
|
80 |
Covariance in Multigroup and Few Group Reactor Physics Uncertainty Calculations
McEwan, Curtis E., 10 1900
<p>Simulation plays a key role in nuclear reactor safety analysis, and being able to assess the accuracy of simulation results increases their credibility. This thesis examines the propagation of nuclear data uncertainties through lattice-level physics calculations. These input uncertainties take the form of covariance matrices, which dictate the variance of specified nuclear data and their covariance with one another. Covariances are available within certain nuclear data libraries; however, they are generally available only at infinite dilution for a fixed temperature. The overall goal of this research is to examine the importance of various applications of covariance and their associated nuclear data libraries and, most importantly, to examine the effects of dilution and self-shielding on the results. One source of nuclear data and covariances is the TENDL libraries, which are based on a reference ENDF data library and are in continuous energy. Each TENDL library was created by randomly perturbing the reference nuclear data at its most fundamental level according to its covariance. These perturbed nuclear data libraries in TENDL format were obtained, and NJOY was used to produce cross-sections in 69 groups, for which the covariance was calculated at multiple temperatures and dilutions. Temperature was found to have little effect, but covariances evaluated at different dilutions differed significantly. Comparisons of the covariances calculated from TENDL with those in SCALE and ENDF/B-VII also revealed significant differences. The multigroup covariance library produced at this stage was then used in subsequent analyses, along with multigroup covariance libraries available elsewhere, to see the differences that arise from the choice of covariance library source. Monte Carlo analysis of a PWR pin cell was performed using the newly created covariance library, a specified reference set of nuclear data, and the lattice physics transport solver DRAGON.
The Monte Carlo analysis was then repeated by systematically changing the input covariance matrix (for example, using an alternative matrix such as that included with the TSUNAMI package) or the input reference nuclear data. The uncertainty in k-infinity and in the homogenized two-group cross-sections was assessed for each set of covariance data. The source of covariance data, as well as the dilution, was found to have a significant effect on the predicted uncertainty in the homogenized cell properties, but the dilution did not significantly affect the predicted uncertainty in k-infinity.</p> / Master of Applied Science (MASc)
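The Monte Carlo propagation described in this abstract can be sketched in miniature: draw correlated perturbations of the nuclear data from a covariance matrix (via its Cholesky factor) and observe the spread in an output. This toy uses a one-group k-infinity ratio and an invented 2x2 covariance; the real analysis perturbs 69-group data through a lattice transport solver.

```python
# Monte Carlo uncertainty propagation sketch: sample correlated
# cross-section perturbations from a covariance matrix (Cholesky
# factorization) and observe the spread of a toy k-infinity.
# The two-parameter model and covariance values are illustrative.
import math
import random
import statistics

def cholesky2(C):
    """Lower-triangular Cholesky factor L of a 2x2 covariance matrix."""
    l00 = math.sqrt(C[0][0])
    l10 = C[1][0] / l00
    l11 = math.sqrt(C[1][1] - l10 * l10)
    return [[l00, 0.0], [l10, l11]]

def sample_perturbed(nominal, L, rng):
    """One correlated sample: nominal + L @ z, with z ~ N(0, I)."""
    z = [rng.gauss(0, 1), rng.gauss(0, 1)]
    return [nominal[i] + L[i][0] * z[0] + L[i][1] * z[1] for i in range(2)]

def k_inf(nu_sigma_f, sigma_a):
    """Toy one-group k-infinity = production / absorption."""
    return nu_sigma_f / sigma_a

rng = random.Random(1)
nominal = [0.0110, 0.0105]                    # [nu*Sigma_f, Sigma_a] in 1/cm
C = [[1.0e-8, 4.0e-9], [4.0e-9, 9.0e-9]]      # covariance of the two parameters
L = cholesky2(C)
ks = [k_inf(*sample_perturbed(nominal, L, rng)) for _ in range(5000)]
k_mean, k_std = statistics.mean(ks), statistics.stdev(ks)
```

Because a positive correlation between production and absorption partially cancels in the ratio, the output spread here is smaller than either input spread alone, one reason the choice of covariance source matters so much.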
|