91

Assessment of Uncertainty in Core Body Temperature due to Variability in Tissue Parameters

Kalathil, Robins T. January 2016 (has links)
No description available.
92

Analysis of Transient Overpower Scenarios in Sodium Fast Reactors

Grabaskas, David 20 August 2010 (has links)
No description available.
93

Uncertainty Analysis In Lattice Reactor Physics Calculations

Ball, Matthew R. 04 1900 (has links)
Comprehensive sensitivity and uncertainty analysis has been performed for light-water reactor and heavy-water reactor lattices using three techniques: adjoint-based sensitivity analysis, Monte Carlo sampling, and direct numerical perturbation. The adjoint analysis was performed using a widely accepted, commercially available code, whereas the Monte Carlo sampling and direct numerical perturbation were performed using new codes that were developed as part of this work. Uncertainties associated with fundamental nuclear data accompany evaluated nuclear data libraries in the form of covariance matrices. As nuclear data are important parameters in reactor physics calculations, any associated uncertainty causes a loss of confidence in the calculation results. The quantification of output uncertainties is necessary to adequately establish safety margins of nuclear facilities. In this work, the propagation of uncertainties associated with both physics parameters (e.g. microscopic cross-sections) and lattice model parameters (e.g. material temperature) has been investigated, and the uncertainty of all relevant lattice calculation outputs, including the neutron multiplication constant and few-group homogenized cross-sections, has been quantified. Sensitivity and uncertainty effects arising from the resonance self-shielding of microscopic cross-sections were addressed using a novel set of resonance integral corrections that are derived from perturbations in their infinite-dilution counterparts. It was found that the covariance of the U-238 radiative capture cross-section was the dominant contributor to the uncertainties of lattice properties. Also, the uncertainty associated with the prediction of isotope concentrations during burnup is significant, even when uncertainties of fission yields and decay rates were neglected. Such burnup-related uncertainties arise solely from the uncertainty of fission and radiative capture rates caused by physics parameter covariance. The quantified uncertainties of lattice calculation outputs that are described in this work are suitable for use as input uncertainties to subsequent reactor physics calculations, including reactor core analysis employing neutron diffusion theory. / Doctor of Philosophy (PhD)
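
As a purely illustrative sketch of the two propagation approaches named in this abstract (not the thesis's code, data, or lattice model), the fragment below pushes an assumed covariance matrix for a handful of two-group cross-sections through a simple infinite-medium k-infinity formula, first by Monte Carlo sampling and then by first-order direct numerical perturbation; every numerical value is hypothetical.

```python
import numpy as np

# Hypothetical two-group data (macroscopic, 1/cm): [nuSigf1, nuSigf2, Siga1, Siga2, Sigs1->2]
mean = np.array([0.008, 0.135, 0.010, 0.100, 0.016])

# Assumed relative covariance: 1% standard deviations with a mild correlation
# between the two absorption cross-sections (stand-in for an evaluated library).
rel_cov = np.diag([0.01] * 5) ** 2
rel_cov[2, 3] = rel_cov[3, 2] = 0.3 * 0.01 * 0.01
cov = rel_cov * np.outer(mean, mean)  # absolute covariance

def k_inf(x):
    """Two-group infinite-medium multiplication factor (fission source in group 1 only)."""
    nusf1, nusf2, sa1, sa2, ss12 = x
    removal1 = sa1 + ss12
    return nusf1 / removal1 + (nusf2 / sa2) * (ss12 / removal1)

# Monte Carlo sampling of the correlated cross-sections.
rng = np.random.default_rng(42)
samples = rng.multivariate_normal(mean, cov, size=20_000)
k = np.apply_along_axis(k_inf, 1, samples)
print(f"k-inf = {k.mean():.5f} +/- {k.std(ddof=1):.5f} (1 sigma, sampled)")

# Direct numerical perturbation: central-difference sensitivities, then the
# first-order "sandwich rule" sigma_k^2 = S C S^T as a cross-check.
S = np.empty(mean.size)
for i in range(mean.size):
    h = 1e-4 * mean[i]
    xp, xm = mean.copy(), mean.copy()
    xp[i] += h
    xm[i] -= h
    S[i] = (k_inf(xp) - k_inf(xm)) / (2 * h)
print(f"sigma_k (sandwich rule) = {np.sqrt(S @ cov @ S):.5f}")
```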
94

Covariance in Multigroup and Few Group Reactor Physics Uncertainty Calculations

McEwan, Curtis E. 10 1900 (has links)
Simulation plays a key role in nuclear reactor safety analysis, and being able to assess the accuracy of results obtained by simulation increases their credibility. This thesis examines the propagation of nuclear data uncertainties through lattice-level physics calculations. These input uncertainties are in the form of covariance matrices, which specify the variances of individual nuclear data and their covariances with one another. These covariances are available within certain nuclear data libraries; however, they are generally only available at infinite dilution for a fixed temperature. The overall goal of this research is to examine the importance of various applications of covariance and their associated nuclear data libraries, and most importantly to examine the effects of dilution and self-shielding on the results. One source of nuclear data and covariances is the TENDL libraries, which are based on a reference ENDF data library and are in continuous energy. Each TENDL library was created by randomly perturbing the reference nuclear data at its most fundamental level according to its covariance. These perturbed nuclear data libraries in TENDL format were obtained, and NJOY was used to produce cross sections in 69 groups, for which the covariance was calculated at multiple temperatures and dilutions. Temperature was found to have little effect, but covariances evaluated at various dilutions did differ significantly. Comparisons of the covariances calculated from TENDL with those in SCALE and ENDF/B-VII also revealed significant differences. The multigroup covariance library produced at this stage was then used in subsequent analyses, along with multigroup covariance libraries available elsewhere, in order to see the differences that arise from covariance library sources. Monte Carlo analysis of a PWR pin cell was performed using the newly created covariance library, a specified reference set of nuclear data, and the lattice physics transport solver DRAGON. The Monte Carlo analysis was then repeated by systematically changing the input covariance matrix (for example, using an alternative matrix like that included with the TSUNAMI package) or the input reference nuclear data. The uncertainty in k-infinity and the homogenized two-group cross sections was assessed for each set of covariance data. It was found that the source of covariance data as well as dilution had a significant effect on the predicted uncertainty in the homogenized cell properties, but the dilution did not significantly affect the predicted uncertainty in k-infinity. / Master of Applied Science (MASc)
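
The covariance-from-random-libraries step described above can be illustrated with a small self-contained sketch: given an ensemble of perturbed multigroup cross-section sets (synthesised below in place of NJOY-processed TENDL files), the sample covariance, relative covariance, and correlation matrices follow directly. Everything here is hypothetical stand-in data, not the thesis's processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an ensemble of randomly perturbed multigroup cross-sections, e.g. one
# reaction of one nuclide processed into 69 groups from N random TENDL files with NJOY.
# Here the ensemble is synthesised; in practice each row would come from one library.
n_files, n_groups = 300, 69
reference = 1.0 + 0.5 * np.sin(np.linspace(0.0, 3.0, n_groups))

# Correlated ~2% perturbations so that neighbouring energy groups move together.
lags = np.subtract.outer(np.arange(n_groups), np.arange(n_groups))
kernel = np.exp(-0.5 * (lags / 4.0) ** 2) + 1e-9 * np.eye(n_groups)
perturb = rng.normal(size=(n_files, n_groups)) @ np.linalg.cholesky(kernel).T
ensemble = reference * (1.0 + 0.02 * perturb)

# Sample statistics over the ensemble: the multigroup covariance described in the abstract.
mean = ensemble.mean(axis=0)
cov = np.cov(ensemble, rowvar=False)          # absolute covariance, groups x groups
rel_cov = cov / np.outer(mean, mean)          # relative covariance
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)                 # correlation matrix

print("largest relative standard deviation:", f"{np.sqrt(np.diag(rel_cov)).max():.3%}")
print("correlation of groups 10 and 11:", f"{corr[10, 11]:.3f}")
```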
95

Fuzzy temporal fault tree analysis of dynamic systems

Kabir, Sohag, Walker, M., Papadopoulos, Y., Rüde, E., Securius, P. 18 October 2019 (has links)
Yes / Fault tree analysis (FTA) is a powerful technique that is widely used for evaluating system safety and reliability. It can be used to assess the effects of combinations of failures on system behaviour but is unable to capture sequence-dependent dynamic behaviour. A number of extensions to fault trees have been proposed to overcome this limitation. Pandora, one such extension, introduces temporal gates and temporal laws to allow dynamic analysis of temporal fault trees (TFTs). It can be easily integrated into model-based design and analysis techniques. The quantitative evaluation of failure probability in Pandora TFTs is performed using exact probabilistic data about component failures. However, exact data can often be difficult to obtain. In this paper, we propose a method that combines expert elicitation and fuzzy set theory with Pandora TFTs to enable dynamic analysis of complex systems with limited or absent exact quantitative data. This gives Pandora the ability to perform quantitative analysis under uncertainty, which further increases its potential utility in the emerging field of model-based design and dependability analysis. The method has been demonstrated by applying it to a fault-tolerant fuel distribution system of a ship, and the results are compared with the results obtained by other existing techniques.
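
As a minimal sketch of the fuzzy quantification idea, not code from the paper, the following propagates triangular fuzzy failure probabilities through ordinary static AND/OR gates; Pandora's temporal gates (PAND, POR), the expert-elicitation procedure, and defuzzification are omitted, and all probabilities are invented.

```python
from dataclasses import dataclass
from math import prod

@dataclass
class TFN:
    """Triangular fuzzy number (lower, modal, upper bound) used as a fuzzy failure probability."""
    lo: float
    mid: float
    hi: float

def fuzzy_and(*events):
    """Static AND gate: independent basic events -> component-wise product of the TFN
    bounds (the usual approximation for positive triangular fuzzy numbers)."""
    return TFN(prod(e.lo for e in events),
               prod(e.mid for e in events),
               prod(e.hi for e in events))

def fuzzy_or(*events):
    """Static OR gate: 1 - prod(1 - p), applied component-wise; monotonicity keeps
    lower/upper bounds mapped to lower/upper bounds."""
    return TFN(1.0 - prod(1.0 - e.lo for e in events),
               1.0 - prod(1.0 - e.mid for e in events),
               1.0 - prod(1.0 - e.hi for e in events))

# Hypothetical expert-elicited fuzzy failure probabilities for three basic events.
pump_a = TFN(1e-3, 2e-3, 4e-3)
pump_b = TFN(2e-3, 3e-3, 5e-3)
valve = TFN(5e-4, 1e-3, 2e-3)

# Top event: both pumps fail, or the valve fails.
top = fuzzy_or(fuzzy_and(pump_a, pump_b), valve)
print(f"top event fuzzy probability: ({top.lo:.2e}, {top.mid:.2e}, {top.hi:.2e})")
```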
96

Uncertainty Quantification and Accuracy Improvement of the Double-Sensor Conductivity Probe for Two-Phase Flow Measurement

Wang, Dewei 29 October 2019 (has links)
The double-sensor conductivity probe is one of the most commonly used techniques for obtaining local time-averaged parameters in two-phase flows. The uncertainty of this measurement technique has not been well understood in the past as it involves many different steps and influential factors in a typical measurement. This dissertation aims to address this gap by performing a systematic and comprehensive study on the measurement uncertainty of the probe. Three types of uncertainties are analyzed: that of measurands, of the model input parameters, and of the mathematical models. A Monte Carlo uncertainty evaluation framework closely simulating the actual measuring process is developed to link various uncertainty sources to the time-averaged two-phase flow quantities outputted by the probe. Based on the Monte Carlo uncertainty evaluation framework, an iteration method is developed to infer the true values of the quantities that are being measured. A better understanding of the uncertainty of the double-sensor conductivity probe is obtained. Multiple advanced techniques, such as high-speed optical imaging and fast X-ray densitometry, have recently become mature and easily accessible. To further improve the accuracy of local two-phase flow measurement, a method is developed to integrate these techniques with the double-sensor conductivity probe by considering the measuring principles and unique advantages of each technique. It has been demonstrated that after processing and synergizing the data from different techniques using the current integration method, the final results show improved accuracy for void fraction, gas velocity and superficial gas velocity, compared to the original probe measurements. High-resolution two-phase flow data is essential for the further development of various two-phase flow models and validation of two-phase CFD codes. Therefore, a comprehensive high-accuracy database of two-phase flows is acquired. The gas-phase information is obtained by the integration method developed in this dissertation, and the recently developed Particle Image Velocimetry and Planar Laser Induced Fluorescence (PIV-PLIF) technique is utilized to measure liquid-phase velocity and turbulence characteristics. Flow characteristics of bubbly flow, slug flow and churn-turbulent flow are investigated. The 1-D drift-flux model is re-evaluated by the newly obtained dataset. The distribution parameter model has been optimized based on a new void-profile classification method proposed in this study. The optimized drift-flux model has significant improvements in predicting both gas velocity and void fraction. / Doctor of Philosophy / The double-sensor conductivity probe is one widely used technique for measuring local time-averaged parameters in two-phase flows. Although a number of studies have been carried out in the past, a good understanding of the uncertainty of this technique is still lacking. This dissertation aims to address this gap by performing a systematic and comprehensive study on the measurement uncertainty of the probe. Three types of uncertainties are analyzed: that of measurands, of the model input parameters, and of the mathematical models. A better understanding of the uncertainty of the double-sensor conductivity probe has been obtained. Considering the unique measuring principles and advantages of multiple advanced techniques, a method is developed to integrate these techniques with the double-sensor conductivity probe to further improve the accuracy of local two-phase flow measurement. It has been demonstrated that the integration method significantly improves the accuracy of probe measurements. Recognizing the need for high-resolution two-phase flow data for the further development of various two-phase flow models and the validation of two-phase CFD codes, a comprehensive database of two-phase flows is acquired. The gas-phase and liquid-phase information are acquired by the new integration method and the recently developed Particle Image Velocimetry and Planar Laser Induced Fluorescence (PIV-PLIF) technique, respectively. The classical 1-D drift-flux model is re-evaluated by the newly obtained dataset. The distribution parameter model has been optimized, resulting in significant improvements in predicting both gas velocity and void fraction.
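
A small hedged illustration of the one-dimensional drift-flux relation revisited in this work: the sketch below computes void fraction and mean gas velocity from superficial velocities via <v_g> = C0<j> + V_gj and then propagates assumed measurement uncertainties by Monte Carlo sampling. The distribution parameter and drift velocity are generic round-tube bubbly-flow values, not the optimized ones developed in the dissertation.

```python
import numpy as np

def drift_flux(j_g, j_f, C0=1.2, V_gj=0.23):
    """
    One-dimensional drift-flux model: <v_g> = C0*<j> + V_gj, hence
    <alpha> = <j_g> / (C0*<j> + V_gj). C0 (distribution parameter) and
    V_gj (drift velocity, m/s) are generic assumed values.
    """
    j = j_g + j_f                      # total superficial velocity, m/s
    v_g = C0 * j + V_gj                # area-averaged gas velocity, m/s
    alpha = j_g / v_g                  # area-averaged void fraction
    return alpha, v_g

# Nominal point typical of bubbly flow (values assumed for illustration).
alpha, v_g = drift_flux(j_g=0.1, j_f=1.0)
print(f"void fraction = {alpha:.3f}, gas velocity = {v_g:.3f} m/s")

# Monte Carlo propagation of assumed measurement uncertainty in j_g and j_f.
rng = np.random.default_rng(1)
jg = rng.normal(0.10, 0.005, 50_000)   # assumed 5% (1 sigma)
jf = rng.normal(1.00, 0.020, 50_000)   # assumed 2% (1 sigma)
alpha_s, _ = drift_flux(jg, jf)
print(f"void fraction = {alpha_s.mean():.3f} +/- {alpha_s.std(ddof=1):.3f} (1 sigma)")
```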
97

Quantifying Coordinate Uncertainty Fields in Coupled Spatial Measurement Systems

Calkins, Joseph Matthew 06 August 2002 (has links)
Spatial coordinate measurement systems play an important role in manufacturing and certification processes. There are many types of coordinate measurement systems, including electronic theodolite networks, total station systems, video photogrammetry systems, laser tracking systems, laser scanning systems, and coordinate measuring machines. Each of these systems produces coordinate measurements containing some degree of uncertainty. Often, the results from several different types of measurement systems must be combined in order to provide useful measurement results. When these measurements are combined, the resulting coordinate data set contains uncertainties that are a function of the base data sets and of complex interactions between the measurement sets. ISO standards, ANSI standards, and others require that estimates of uncertainty accompany all measurement data. This research presents methods for quantifying the uncertainty fields associated with coupled spatial measurement systems. The significant new developments and refinements presented in this dissertation are summarized as follows: 1) A geometrical representation of coordinate uncertainty fields. 2) An experimental method for characterizing instrument component uncertainty. 3) Coordinate uncertainty field computation for individual measurement systems. 4) Measurement system combination methods based on the relative uncertainty of each measurement's individual components. 5) Combined uncertainty field computation resulting from the interdependence of the measurements in coupled measurement systems. 6) Uncertainty statements for measurement analyses such as best-fit geometrical shapes and hidden-point measurement. 7) The implementation of these methods into commercial measurement software. 8) Case studies demonstrating the practical applications of this research. The specific focus of this research is portable measurement systems. It is with these systems that uncertainty field combination issues are most prevalent. The results of this research are, however, general and therefore applicable to any instrument capable of measuring spatial coordinates. / Ph. D.
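
One ingredient mentioned above, combining measurements according to the relative uncertainty of their components, can be sketched with standard inverse-covariance weighting; this toy version ignores the inter-instrument coupling that the dissertation's combined uncertainty fields capture, and all coordinates and covariances are invented.

```python
import numpy as np

def fuse_points(x1, cov1, x2, cov2):
    """Combine two measurements of the same point by inverse-covariance weighting
    (standard linear fusion for statistically independent instruments)."""
    w1, w2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    cov = np.linalg.inv(w1 + w2)                 # combined covariance
    x = cov @ (w1 @ x1 + w2 @ x2)                # weighted coordinate estimate
    return x, cov

# Hypothetical laser-tracker measurement: tight in-plane, looser in elevation (mm, mm^2).
x_tracker = np.array([1000.00, 250.00, 120.00])
cov_tracker = np.diag([0.02, 0.02, 0.08]) ** 2

# Hypothetical theodolite-network measurement of the same point.
x_theo = np.array([1000.10, 249.90, 120.20])
cov_theo = np.diag([0.10, 0.10, 0.05]) ** 2

x_fused, cov_fused = fuse_points(x_tracker, cov_tracker, x_theo, cov_theo)
print("fused point (mm):  ", np.round(x_fused, 4))
print("fused 1-sigma (mm):", np.round(np.sqrt(np.diag(cov_fused)), 4))
```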
98

Assessment of SWAT to Enable Development of Watershed Management Plans for Agricultural Dominated Systems under Data-Poor Conditions

Osorio Leyton, Javier Mauricio 06 June 2012 (has links)
Modeling is an important tool in watershed management. In much of the world, data needed for modeling, both for model inputs and for model evaluation, are very limited or non-existent. The overall objective of this research was to enable development of watershed management plans for agricultural dominated systems under situations where data are scarce. First, uncertainty of the SWAT model's outputs due to input parameters, specifically soils and high-resolution digital elevation models, which are likely to be lacking in data-poor environments, was quantified using Monte Carlo simulation. Two sources of soil parameter values (SSURGO and STATSGO) were investigated, as well as three levels of DEM resolution (10, 30, and 90 m). Uncertainty increased as the input data became coarser for individual soil parameters. The combination of SSURGO and the 30 m DEM proved to adequately balance the level of uncertainty and the quality of input datasets. Second, methods were developed to generate appropriate soils information and DEM resolution for data-poor environments. The soils map was generated based on lithology and slope class, while the soil attributes were generated by linking surface soil texture to soils characterized in the SWAT soils database. A 30 m resolution DEM was generated by resampling a 90 m DEM, the resolution that is readily available around the world, by direct projection using a cubic convolution method. The effect of the generated DEM and soils data on model predictions was evaluated in a data-rich environment. When all soil parameters were varied at the same time, predictions based on the derived soil map were comparable to the predictions based on the SSURGO map. Finally, the methodology was tested in a data-poor watershed in Bolivia. The proposed methodologies for generating input data showed how available knowledge can be employed to generate data for modeling purposes and provide the opportunity to incorporate uncertainty in the decision-making process in data-poor environments. / Ph. D.
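
The 90 m-to-30 m DEM resampling step can be sketched in a few lines; scipy's cubic spline interpolation is used below as a stand-in for the cubic convolution method named in the abstract (GIS tools such as GDAL expose cubic convolution directly), and the input DEM is synthetic.

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a 90 m DEM tile (elevations in metres); in practice this
# would be read from a raster file, e.g. with rasterio or GDAL.
rng = np.random.default_rng(7)
dem_90m = 500.0 + ndimage.gaussian_filter(rng.normal(0.0, 40.0, size=(40, 40)), sigma=3)

# Resample from 90 m to 30 m cells (factor of 3). order=3 selects cubic spline
# interpolation, used here in place of cubic convolution.
dem_30m = ndimage.zoom(dem_90m, zoom=3, order=3)

print("90 m grid:", dem_90m.shape, "-> 30 m grid:", dem_30m.shape)
print(f"elevation range: {dem_90m.min():.1f}-{dem_90m.max():.1f} m (90 m) vs "
      f"{dem_30m.min():.1f}-{dem_30m.max():.1f} m (30 m)")
```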
99

Terrestrial Laser Scanning for Quantifying Uncertainty in Fluvial Applications

Resop, Jonathan P. 20 July 2010 (has links)
Stream morphology is an important aspect of many hydrological and ecological applications such as stream restoration design (SRD) and estimating sediment loads for total maximum daily load (TMDL) development. Surveying of stream morphology traditionally involves point measurement tools, such as total stations, or remote sensing technologies, such as aerial laser scanning (ALS), which have limitations in spatial resolution. Terrestrial laser scanning (TLS) can potentially offer improvements over other surveying methods by providing greater resolution and accuracy. The first two objectives were to quantify the measurement and interpolation errors from total station surveying using TLS as a reference dataset for two fluvial applications: 1) measuring streambank retreat (SBR) for sediment load calculations; and 2) measuring topography for habitat complexity quantification. The third objective was to apply knowledge uncertainties and stochastic variability to the application of SRD. A streambank on Stroubles Creek in Blacksburg, VA was surveyed six times over two years to measure SBR. Both total station surveying and erosion pins overestimated total volumetric retreat compared to TLS, by 32% and 17%, respectively. The error in SBR using traditional methods would be significant when extrapolating to reach-scale estimates of sediment load. TLS allowed for collecting topographic data over the entire streambank surface and provides small-scale measurements of the spatial variability of SBR. The topography of a reach on the Staunton River in Shenandoah National Park, VA was measured to quantify habitat complexity. Total station surveying underestimated the volume of in-stream rocks by 55% compared to TLS. An algorithm was developed for delineating in-stream rocks from the TLS dataset. Complexity metrics, such as percent in-stream rock cover and cross-sectional heterogeneity, were derived and compared between both methods. TLS quantified habitat complexity in an automated, unbiased manner at a high spatial resolution. Finally, a two-phase uncertainty analysis was performed with Monte Carlo Simulation (MCS) on a two-stage channel SRD for Stroubles Creek. Both knowledge errors (Manning's n and the Shields number) and natural stochasticity (bankfull discharge and grain size) were incorporated into the analysis. The uncertainty-based design solutions for possible channel dimensions varied over a range of one to four times the magnitude of the deterministic solution. The uncertainty inherent in SRD should be quantified and used to provide a range of design options and to quantify the level of risk in selected design outcomes. / Ph. D.
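
A hypothetical sketch of the volumetric streambank-retreat calculation that repeat TLS surveys enable: difference two gridded bank-face surfaces, zero out change below a detection threshold so scanner noise is not booked as erosion, and multiply by cell area. Grid size, cell size, and threshold below are invented, not values from the study.

```python
import numpy as np

def volumetric_retreat(face_t0, face_t1, cell_size=0.05, threshold=0.01):
    """
    Volumetric streambank retreat between two gridded TLS surfaces of the bank
    face (positions in metres, measured toward the channel). Retreat = sum of
    per-cell surface change times cell area, counting only change beyond a
    detection threshold (m).
    """
    diff = face_t0 - face_t1                 # positive where material was lost
    eroded = np.where(diff > threshold, diff, 0.0)
    return eroded.sum() * cell_size ** 2     # m^3

# Synthetic example: a 2 m x 1 m bank face gridded at 5 cm, with a localised
# erosion patch of ~3 cm depth appearing between the two surveys.
rng = np.random.default_rng(3)
t0 = rng.normal(0.0, 0.002, size=(40, 20))   # initial face plus scanner noise
t1 = t0.copy()
t1[10:30, 5:15] -= 0.03                      # simulated retreat patch

print(f"volumetric retreat = {volumetric_retreat(t0, t1):.4f} m^3")
```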
100

Robust Control Design and Analysis for Small Fixed-Wing Unmanned Aircraft Systems Using Integral Quadratic Constraints

Palframan, Mark C. 29 July 2016 (has links)
The main contributions of this work are applications of robust control and analysis methods to complex engineering systems, namely, small fixed-wing unmanned aircraft systems (UAS). Multiple path-following controllers for a small fixed-wing Telemaster UAS are presented, including a linear parameter-varying (LPV) controller scheduled over path curvature. The controllers are synthesized based on a lumped path-following and UAS dynamic system, effectively combining the six degree-of-freedom aircraft dynamics with established parallel transport frame virtual vehicle dynamics. The robustness and performance of these controllers are tested in a rigorous MATLAB simulation environment that includes steady winds, turbulence, measurement noise, and delays. After being synthesized off-line, the controllers allow the aircraft to follow prescribed geometrically defined paths bounded by a maximum curvature. The controllers presented within are found to be robust to the disturbances and uncertainties in the simulation environment. A robust analysis framework for mathematical validation of flight control systems is also presented. The framework is specifically developed for the complete uncertainty characterization, quantification, and analysis of small fixed-wing UAS. The analytical approach presented within is based on integral quadratic constraint (IQC) analysis methods and uses linear fractional transformations (LFTs) on uncertainties to represent system models. The IQC approach can handle a wide range of uncertainties, including static and dynamic, linear time-invariant and linear time-varying perturbations. While IQC-based uncertainty analysis has a sound theoretical foundation, it has thus far mostly been applied to academic examples, and there are major challenges when it comes to applying this approach to complex engineering systems, such as UAS. The difficulty mainly lies in appropriately characterizing and quantifying the uncertainties such that the resulting uncertain model is representative of the physical system without being overly conservative, and the associated computational problem is tractable. These challenges are addressed by applying IQC-based analysis tools to analyze the robustness of the Telemaster UAS flight control system. Specifically, uncertainties are characterized and quantified based on mathematical models and flight test data obtained in house for the Telemaster platform and custom autopilot. IQC-based analysis is performed on several time-invariant H∞ controllers along with various sets of uncertainties aimed at providing valuable information for use in controller analysis, controller synthesis, and comparison of multiple controllers. The proposed framework is also transferable to other fixed-wing UAS platforms, effectively taking IQC-based analysis beyond academic examples to practical application in UAS control design and airworthiness certification. IQC-based analysis problems are traditionally solved using convex optimization techniques, which can be slow and memory intensive for large problems. An oracle for discrete-time IQC analysis problems is presented to facilitate the use of a cutting plane algorithm in lieu of convex optimization in order to solve large uncertainty analysis problems relatively quickly, and with reasonable computational effort. The oracle is reformulated to a skew-Hamiltonian/Hamiltonian eigenvalue problem in order to improve the robustness of eigenvalue calculations by eliminating unnecessary matrix multiplications and inverses. 
Furthermore, fast, structure-exploiting eigensolvers can be employed with the skew-Hamiltonian/Hamiltonian oracle to accurately determine critical frequencies when solving IQC problems. Applicable solution algorithms utilizing the IQC oracle are briefly presented, and an example shows that these algorithms can solve large problems significantly faster than convex optimization techniques. Finally, a large complex engineering system is analyzed using the oracle and a cutting-plane algorithm. Analysis of the same system using the same computer hardware failed when employing convex optimization techniques. / Ph. D.
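
IQC analysis itself requires LMI or cutting-plane machinery well beyond a short snippet, so the sketch below shows only its simplest special case: a frequency-gridded small-gain test on an M-Delta interconnection with an assumed norm bound on the uncertainty. The state-space data are illustrative and unrelated to the Telemaster UAS or the dissertation's tools.

```python
import numpy as np

# State-space data for the nominal closed loop M seen by the uncertainty (illustrative only).
A = np.array([[-2.0, 1.0],
              [ 0.0, -5.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def sigma_max(w):
    """Largest singular value of M(jw) = C (jwI - A)^-1 B + D."""
    M = C @ np.linalg.solve(1j * w * np.eye(A.shape[0]) - A, B) + D
    return np.linalg.svd(M, compute_uv=False).max()

freqs = np.logspace(-2, 3, 400)                   # rad/s grid
peak = max(sigma_max(w) for w in freqs)           # gridded estimate of ||M||_inf

delta_bound = 0.5                                 # assumed norm bound on Delta
print(f"||M||_inf (gridded) ~ {peak:.3f}")
print("robustly stable by small gain:", peak * delta_bound < 1.0)
```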
