371

Steam Generator Liquid Mass as a Control Input for the Movement of the Feed Control Valve in a Pressurized Water Reactor

Sakabe, Akira 26 November 2001 (has links)
The steam generator in a nuclear power plant plays an important role in cooling the reactor and producing steam for the turbine-generators. As a result, control of the water inventory in the steam generator is crucial. The water mass in the steam generator cannot be measured directly, so it is generally inferred from the downcomer differential pressure as a measure of the downcomer water level. The water level in the downcomer is a good indication of the water mass inventory at or near steady-state conditions. Conventional PI controllers are used to maintain the water level in the downcomer between relatively narrow limits to prevent excessive moisture carryover into the turbine or uncovering of the tube bundle. Complications arise in level control with respect to mass inventory due to the short-term inverse response of downcomer level, also known as shrink and swell. Because of these complications, one would like to control the mass inventory in the steam generator directly. Currently, the mass inventory is not a measurable quantity, but it can be calculated through computer simulation. Design and analysis of the new controller will be performed by simulation. The focus of this research was to design, test, and implement a liquid mass inventory controller that would allow for safe automatic operation during normal and accident scenarios. In designing the new controller, it is assumed that the normal plant safety functions are not impacted by the mass controller. Optimal settings for the new mass controller are sought such that the mass control program will respond rapidly and avoid reactor trips under automatic control, provided the downcomer level protection setpoints do not induce a trip for the same transient. For future analysis, it is proposed that neural networks be used in the water mass observer instead of calculated simulation results.
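As a point of reference for the control scheme discussed above, a minimal discrete-time PI controller acting on an inventory error is sketched below. This is an illustrative sketch only; the gains, setpoint, and the idea of feeding it an estimated liquid mass are assumptions, not parameters from the thesis.

```python
# Minimal sketch of a discrete-time PI controller driving a feed control valve
# from a mass-inventory error signal. Gains, setpoint, and units are illustrative only.

def make_pi_controller(kp, ki, dt, out_min=0.0, out_max=1.0):
    """Return a PI controller closure; output is a valve position in [out_min, out_max]."""
    integral = 0.0

    def step(setpoint, measurement):
        nonlocal integral
        error = setpoint - measurement             # e.g. desired minus estimated liquid mass
        integral += error * dt
        output = kp * error + ki * integral
        return min(max(output, out_min), out_max)  # clamp to valve travel limits

    return step

# Hypothetical usage: regulate an estimated steam generator liquid mass (kg).
controller = make_pi_controller(kp=1e-4, ki=2e-5, dt=0.1)
valve_position = controller(setpoint=45000.0, measurement=44200.0)
print(f"commanded valve position: {valve_position:.3f}")
```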
372

The Solubility and Diffusivity of Helium in Mercury with Respect to Applications at the Spallation Neutron Source

Francis, Matthew W. 01 May 2008 (has links)
Models for the solubility of noble gases in liquid metals are reviewed in detail and evaluated for the combination of mercury and helium for applications at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Gas solubility in mercury is acknowledged to be very low; therefore, mercury has been used in ASTM standard methods as a blocking medium for gas solubility studies in organic fluids and water. Models from physical chemistry predict a Henry coefficient for helium in mercury near 3.9×10^15 Pa·molHg/molHe, but the models have large uncertainties and are not verified with data. An experiment is designed that bounds the solubility of helium in mercury to values below 1.0×10^-8 molHe/molHg at 101.3 kPa, which is below values previously measurable. The engineering application that motivated this study was the desire to inject 10 to 15 micron-radius helium bubbles into the mercury target of the SNS to reduce the pressure spikes that accompany the beam energy deposition. While the experiment bounds the solubility to values low enough to support system engineering for the SNS application, it does not allow confirmation of the theoretical solubility with low uncertainty. However, methods to measure the solubility value may be derived from the techniques employed in this study.
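For orientation, the Henry coefficient quoted above relates the helium partial pressure to its dissolved mole fraction (x ≈ p/K_H). The short check below uses the abstract's model value, which itself carries large uncertainty; the assumed partial pressure is a placeholder.

```python
# Rough Henry's-law estimate of dissolved helium in mercury.
# x_He ≈ p_He / K_H, with K_H taken from the model value quoted in the abstract.

K_H = 3.9e15      # Pa * mol_Hg / mol_He (theoretical estimate, large uncertainty)
p_He = 101.3e3    # Pa; helium partial pressure assumed for illustration

x_He = p_He / K_H  # dissolved mole fraction, mol_He per mol_Hg
print(f"predicted mole fraction: {x_He:.2e} mol_He/mol_Hg")
# ≈ 2.6e-11, far below the experimental upper bound of 1.0e-8 quoted above
```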
373

Forecasting Dose and Dose Rate from Solar Particle Events Using Locally Weighted Regression Techniques

Nichols, Theodore Franklin 01 August 2009 (has links)
Continued human exploration of the solar system requires mitigating radiation effects from the Sun. Doses from Solar Particle Events (SPEs) pose a serious threat to the health of astronauts. A method for forecasting the rate and total severity of such events would give astronauts time to take action to mitigate the effects of an SPE. The danger posed by an SPE depends on the dose received and the temporal profile of the event, which describes how quickly the dose arrives (the dose rate). Previously deployed methods used neural networks to predict the total dose from an event; later work added the ability to predict the temporal profiles using the neural network approach. Locally weighted regression (LWR) techniques were then investigated for forecasting the total dose from an SPE, and that work showed that LWR methods could forecast the total dose from an event. This previous research did not calculate the uncertainty in a forecast. The present research expands the LWR model to forecast the dose and temporal profile of an SPE along with the uncertainty in these forecasts. The LWR method can produce forecasts early in an event with results that can be beneficial to operators and crews; the forecasts in this work are all made at or before five hours after the start of the SPE. For 58 percent of the events tested, the dose-rate profile is within the uncertainty bounds. Restricting the data set to events below 145 cGy, 86 percent of the events are within the uncertainty bounds. The uncertainty in the forecasts is large; however, the forecasts are made early enough into an SPE that very little of the dose will have reached the crew. Increasing the number of SPEs in the data set increases the accuracy of the forecasts and reduces their uncertainty.
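To make the LWR technique concrete, a minimal locally weighted (Gaussian-kernel) linear regression at a single query point is sketched below. The training data, bandwidth, and feature choice are invented placeholders, not the SPE database or model configuration used in the thesis.

```python
import numpy as np

def lwr_predict(x_query, X, y, bandwidth=1.0):
    """Locally weighted linear regression at a single query point.

    X: (n, d) feature matrix, y: (n,) targets. A Gaussian kernel weights
    training points by their distance to the query point.
    """
    n = X.shape[0]
    Xb = np.hstack([np.ones((n, 1)), X])           # add bias column
    xq = np.hstack([1.0, np.atleast_1d(x_query)])
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))       # kernel weights
    W = np.diag(w)
    # Weighted least squares: beta = (X^T W X)^-1 X^T W y
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return xq @ beta

# Hypothetical usage: early-time dose observations -> total-dose forecast.
X_train = np.array([[0.5], [1.0], [2.0], [3.0], [5.0]])   # hours into event
y_train = np.array([2.0, 5.0, 11.0, 18.0, 30.0])          # cumulative dose (cGy), made up
print(lwr_predict(np.array([4.0]), X_train, y_train, bandwidth=1.5))
```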
374

An Integrated Fuzzy Inference Based Monitoring, Diagnostic, and Prognostic System

Garvey, Dustin R 01 May 2007 (has links)
To date, the majority of the research related to the development and application of monitoring, diagnostic, and prognostic systems has been exclusive in the sense that only one of the three areas is the focus of the work. While previous research advances each of the respective fields, the end result is a variable "grab bag" of techniques that address each problem independently. Also, the new field of prognostics is lacking in the sense that few methods have been proposed that produce estimates of the remaining useful life (RUL) of a device or can be realistically applied to real-world systems. This work addresses both problems by developing the nonparametric fuzzy inference system (NFIS), which is adapted for monitoring, diagnosis, and prognosis, and then proposing the path classification and estimation (PACE) model, which can be used to predict the RUL of a device that does or does not have a well-defined failure threshold. To test and evaluate the proposed methods, they were applied to detect, diagnose, and prognose faults and failures in the hydraulic steering system of a deep oil exploration drill. The monitoring system implementing an NFIS predictor and sequential probability ratio test (SPRT) detector produced detection rates comparable to a monitoring system implementing an autoassociative kernel regression (AAKR) predictor and SPRT detector, specifically 80% vs. 85% for the NFIS and AAKR monitors, respectively. It was also found that the NFIS monitor produced fewer false alarms. Next, the monitoring system outputs were used to generate symptom patterns for k-nearest neighbor (kNN) and NFIS classifiers that were trained to diagnose different fault classes. The NFIS diagnoser was shown to significantly outperform the kNN diagnoser, with overall accuracies of 96% vs. 89%, respectively. Finally, the PACE model implementing the NFIS was used to predict the RUL for different failure modes. The errors of the RUL estimates produced by the PACE-NFIS prognosers ranged from 1.2 to 11.4 hours with 95% confidence intervals (CI) from 0.67 to 32.02 hours, which are significantly better than the population-based prognoser estimates with errors of ~45 hours and 95% CIs of ~162 hours.
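As an illustration of one building block named above, a bare-bones sequential probability ratio test on prediction residuals is sketched below, assuming Gaussian residuals and a hypothesized mean shift; this is a common textbook formulation and not necessarily the exact variant used in the dissertation.

```python
import math

def sprt_detector(residuals, sigma, mean_shift, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on model residuals.

    H0: residual mean = 0 (healthy); H1: residual mean = mean_shift (faulted).
    Returns 'fault', 'healthy', or 'continue' after consuming the residuals.
    """
    upper = math.log((1 - beta) / alpha)    # decide H1 (fault) above this
    lower = math.log(beta / (1 - alpha))    # decide H0 (healthy) below this
    llr = 0.0
    for r in residuals:
        # log-likelihood ratio increment for a Gaussian mean-shift test
        llr += (mean_shift / sigma**2) * (r - mean_shift / 2.0)
        if llr >= upper:
            return "fault"
        if llr <= lower:
            return "healthy"
    return "continue"

# Hypothetical usage on residuals = measured - predicted signal values.
print(sprt_detector([0.1, 0.4, 0.6, 0.7, 0.9, 1.1], sigma=0.3, mean_shift=0.6))
```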
375

A Generic Prognostic Framework for Remaining Useful Life Prediction of Complex Engineering Systems

Usynin, Alexander V. 01 December 2007 (has links)
Prognostics and Health Management (PHM) is a general term that encompasses methods used to evaluate system health, predict the onset of failure, and mitigate the risks associated with degraded behavior. A multitude of health monitoring techniques facilitating the detection and classification of the onset of failure have been developed for commercial and military applications. PHM system designers are currently focused on developing prognostic techniques and integrating diagnostic/prognostic approaches at the system level. This dissertation introduces a prognostic framework which integrates several methodologies that are necessary for the general application of PHM to a variety of systems. A method is developed to represent the multidimensional system health status in the form of a scalar quantity called a health indicator. This method also indicates the effectiveness of the health indicator in terms of how well or how poorly it can distinguish healthy and faulty system exemplars. A usefulness criterion was developed which allows the practitioner to evaluate the practicability of using a particular prognostic model along with observed degradation evidence data. The criterion of usefulness is based on comparing the model uncertainty imposed primarily by imperfections in the degradation evidence data against the uncertainty associated with a time-to-failure prediction based on the average reliability characteristics of the system. This dissertation identifies the major contributors to prognostic uncertainty and analyzes their effects. Further study of two important contributors resulted in the development of uncertainty management techniques to improve PHM performance. An analysis of uncertainty effects attributed to the random nature of the critical degradation threshold was performed. An analysis of uncertainty effects attributed to the presence of unobservable failure mechanisms affecting the system degradation process along with observable failure mechanisms was performed. A method was developed to reduce the effects of uncertainty on a prognostic model. This dissertation also provides a method to incorporate prognostic information into optimization techniques aimed at finding an optimal control policy for equipment performing in an uncertain environment.
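To ground the remaining-useful-life idea, the toy calculation below fits a linear trend to an observed degradation path and extrapolates it to a failure threshold. This is a generic textbook scheme on assumed data, not the dissertation's specific prognostic model.

```python
import numpy as np

def estimate_rul(times, health_indicator, threshold):
    """Fit a linear trend to a degradation path and extrapolate to the failure threshold.

    Returns the estimated remaining useful life (same units as `times`), or None
    if the fitted trend never reaches the threshold.
    """
    slope, intercept = np.polyfit(times, health_indicator, 1)
    if slope <= 0:
        return None                      # no degradation trend detected
    t_fail = (threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

# Hypothetical usage: a health indicator drifting toward a threshold of 1.0.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])       # hours
hi = np.array([0.10, 0.22, 0.35, 0.44, 0.58])     # made-up indicator values
print(f"estimated RUL: {estimate_rul(t, hi, threshold=1.0):.1f} hours")
```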
376

Dose Modeling and Statistical Assessment of Hot Spots for Decommissioning Applications

Abelquist, Eric Warner 01 August 2008 (has links)
A primary goal of this research was to develop a technically defensible approach for modeling the receptor dose due to smaller "hot spots" of residual radioactivity. Nearly 700 combinations of environmental pathways, radionuclides, and hot spot sizes were evaluated in this work. The hot spot sizes studied ranged from 0.01 m² to 10 m² and included both building and land area exposure pathways. The dose modeling codes RESRAD, RESRAD-BUILD, and MicroShield were used to assess hot spot doses and develop pathway-specific area factors for eleven radionuclides. These area factors are proposed for use within the existing Multiagency Radiation Survey and Site Investigation Manual (MARSSIM) context of final status survey design and implementation. The research identified pathways that are particularly "hot spot sensitive," i.e., particularly sensitive to changes in the areal size of the contaminated area. The external radiation pathway was the most hot spot sensitive for eight of the eleven radionuclides studied. When the receptor was located directly on the soil hot spot, the area factors ranged from 6.6 to 11.4 for a 1 m² hot spot; they ranged from 650 to 785 when the receptor was located 6 m from the 1 m² hot spot. The external radiation pathway was also the most sensitive of the building occupancy pathways. For the smallest building hot spot studied (100 cm²), the area factors were approximately 1100 for each of the radionuclides. A Bayesian statistical approach for assessing the acceptability of hot spots is proposed. A posterior distribution generated from the final status survey data provides an estimate of the 99th percentile of the contaminant distribution. Hot spot compliance is demonstrated by comparing the upper tolerance limit, defined as the 95% upper confidence level on the 99th percentile of the contaminant distribution in the survey unit, with the DCGL99th value. The DCGL99th is the hot spot dose limit developed using the dose modeling research described above to establish area factors. The proposed approach considers hot spots that may be present but not found. Examples are provided to illustrate this approach.
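As a rough, non-Bayesian stand-in for the compliance test described above, the sketch below bootstraps a 95% upper confidence limit on the 99th percentile of survey-unit measurements and compares it with an illustrative limit; the data, limit value, and bootstrap approach are assumptions for demonstration only.

```python
import numpy as np

def upper_tolerance_limit(measurements, percentile=99.0, confidence=0.95,
                          n_boot=5000, seed=0):
    """Bootstrap an upper confidence limit on a high percentile of survey data."""
    rng = np.random.default_rng(seed)
    data = np.asarray(measurements, dtype=float)
    boot_pcts = [
        np.percentile(rng.choice(data, size=data.size, replace=True), percentile)
        for _ in range(n_boot)
    ]
    return float(np.quantile(boot_pcts, confidence))

# Hypothetical usage: compare the UTL against a hot-spot dose limit (DCGL-style value).
survey = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.5, size=40)  # made-up concentrations
utl = upper_tolerance_limit(survey)
dcgl_99 = 4.0  # illustrative limit, not a real DCGL value
print(f"UTL = {utl:.2f}; compliant: {utl <= dcgl_99}")
```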
377

Radiation Effects on Metastable States of Superheated Water

Alvord, Charles William 01 December 2008 (has links)
Radiation Effects on Metastable States of Superheated Water covers theory, application, and experimentation into the behavior of water at temperatures above the boiling point. The backgrounds of Positron Emission Tomography target design, bubble chambers, and superheat measurements are presented. The quantitative theory of metastable liquids and their characteristic waiting time is discussed. Energetics of bubble formation from two different perspectives are included. Finally, the design of an apparatus for measuring liquid superheats in the presence of radiation is covered in some detail, including several design iterations, first measurements made on the apparatus, and techniques for data reduction.
378

An Improved Knockout-Ablation-Coalescence Model for Prediction of Secondary Neutron and Light-ion Production in Cosmic Ray Interactions

Sriprisan, Sirikul 01 August 2008 (has links)
An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron and light-ion spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout, or abrasion, stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model was extended to incorporate important coalescence effects into the formalism. Recently, the coalescence model was reformulated in UBERNSPEC and alpha coalescence was incorporated. In addition, the ability to predict light-ion spectra with the coalescence model was added. Earlier versions of UBERNSPEC were limited to nuclei with mass numbers less than 68. In this work, the UBERNSPEC code has been extended to include heavy charged particles with mass numbers as large as 238. Representative predictions from the code are compared with published measurements of neutron energy and angular production spectra and light-ion energy spectra for a variety of collision pairs.
379

An Automated Diagnostic Tool for Predicting Anatomical Response to Radiation Therapy

Harris, Carley Elizabeth 01 December 2009 (has links)
A "Clinical Decision Support System" (CDSS) is a concept which has advanced rapidly in health care over the last few decades, and it is defined as "an interactive computer program that is designed to assist physicians and other health professionals with decision-making tasks." Radiation therapy oncologists are required to make decisions that do not involve making a disease diagnosis. Therefore this work focuses on developing a modified CDSS which can be used to aid oncology staff in identifying cancer patients who will require adaptive radiation therapy (ART). An image-guided radiation therapy (IGRT) tool was developed that consists of both diagnostic and prognostic processes. Patients who will require ART are those whose crosssectional neck measurements change by more than half of a centimeter over the entire course of treatment. First, the tool allows one to “diagnose” or identify which patients would benefit from adaptive therapy and then make a “prognosis,” or identify when ART is required. Thirty head and neck (H&N) patients were used in this study, and 15 required ART. Each diagnosis was made by predicting if the threshold of 0.5 cm would be crossed for each of the four cross-sectional measurements, and each prediction was made by determining when the threshold would cross. The diagnosis results show that half (61/120) of the measurements predicted that patients would need ART given the first 15 observations and 28 of 120 predicted needing ART within 20 observations. Therefore, 74% of patients' measurements accurately diagnosed that ART would be required given just the first 20 observations. The prediction results indicate that an average of 11 observations is needed to make adequate time predictions with a v reliability of at least 0.5. However, more accurate time predictions with higher reliability values (0.6 and 0.7) could be made given an average of 16 and 18 observations, respectively. These predictions, while requiring more observations, provided additional lead time in knowing when ART is required.
380

Artificial Neural Network for Spectrum Unfolding of Bonner Sphere Data

Hou, Jia 01 December 2007 (has links)
The use of a Bonner Sphere Spectrometer (BSS) is a well-established method of measuring the energy distribution of neutron emission sources. The purpose of this research is to apply the Generalized Regression Neural Network (GRNN), a kind of Artificial Neural Network (ANN), to predict the neutron spectrum from the count rate data of a BSS. The BSS system was simulated with the MCNP5 Monte Carlo code to calculate the response to neutrons of different energies for each combination of thermal neutron detector and polyethylene sphere. One hundred sixty-three different types of neutron spectra were then investigated. GRNN training and testing were carried out in the MATLAB environment. In the GRNN testing, eighty-one predicted spectra were obtained as outputs of the GRNN. Comparison with standard spectra shows that 97.5% of the prediction errors were below 1%, indicating that the ANN could be used as a highly accurate alternative among neutron spectrum unfolding methodologies. Advantages and further improvements of this technique are also discussed.
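A GRNN reduces to a kernel-weighted (Nadaraya-Watson) average of training spectra, which the short sketch below illustrates; the data shapes, smoothing parameter, and random inputs are invented for demonstration and do not reflect the thesis's MCNP5-derived response data.

```python
import numpy as np

def grnn_predict(x_query, X_train, Y_train, sigma=0.1):
    """Generalized regression neural network (Nadaraya-Watson) prediction.

    X_train: (n, d) normalized detector count rates; Y_train: (n, m) reference
    spectra. Returns the kernel-weighted average spectrum for `x_query`.
    """
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w = w / np.sum(w)                      # normalize pattern-layer weights
    return w @ Y_train                     # summation layer: weighted spectrum

# Hypothetical usage: 5 sphere responses -> 10-bin spectrum, all values invented.
rng = np.random.default_rng(0)
X = rng.random((20, 5))                    # training count-rate patterns
Y = rng.random((20, 10))                   # corresponding reference spectra
x_new = rng.random(5)
print(grnn_predict(x_new, X, Y, sigma=0.2))
```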
