61

A Compton Camera for Spectroscopic Imaging from 100keV to 1MeV

Earnhart, Jonathan Raby 21 June 1999 (has links)
Earnhart, Jonathan Raby Dewitt. A Compton Camera for Spectroscopic Imaging from 100 keV to 1 MeV (under the direction of Robin Gardner and Thomas Prettyman).

Compton cameras are a particularly interesting gamma-ray imaging technology because they have a large field of view and rely on electronic rather than mechanical (lead) collimation. These systems produce two-dimensional, spectroscopic images using data collected from spatially separated detector arrays. A single acquisition contains the data to produce image signatures for each radionuclide in the field of view. Application of Compton cameras in the field of astrophysics has proven the system's capability for imaging in the 1 to 30 MeV range. Other potential applications, in the 100 keV to 1 MeV range, include nuclear material safeguards and nuclear medicine imaging. A particularly attractive feature for these applications is that the technology to produce a portable camera is now available owing to improvements in solid-state room-temperature detectors.

The objective of this work is to investigate Compton camera technology for spectroscopic imaging of gamma rays in the 100 keV to 1 MeV range. To this end, accurate and efficient camera simulation capability allows a variety of design issues to be explored before a full camera system is built. An efficient, specific-purpose Monte Carlo code was developed to investigate the image formation process in Compton cameras. The code is based on a pathway sampling technique with extensive use of variance reduction; in particular, the technique of forcing is used to make each history result in a partial success. The code includes detailed Compton scattering physics, including incoherent scattering functions, Doppler broadening, and multiple scattering. Detector response functions are also included in the simulations.

A prototype camera was built to provide code benchmarks and investigate implementation issues. The prototype is based on a two-detector system, which sacrifices detection efficiency for simplicity and versatility. One of the detectors is mounted on a computer-controlled stage capable of two-dimensional motion (14 cm × 14 cm full range with ±0.1 mm precision). This produces a temporally encoded image via motion of the detector.

Experiments were performed with two different camera configurations for a scene containing a 75Se source and a 137Cs source. These sources provided a challenging test of the spectroscopic imaging capability of the Compton camera concept. The first camera was based on a fixed silicon detector in the front plane and a CdZnTe detector mounted on the stage. The second camera configuration was based on two CdZnTe detectors. Both systems were able to reconstruct images of 75Se, using the 265 keV line, and 137Cs, using the 662 keV line. Only the silicon-CdZnTe camera was able to resolve the low-intensity 400 keV line of 75Se. Neither camera was able to reconstruct the 75Se source location using the 136 keV line; the camera has a low-energy limit imposed by the noise level on the front-plane detector's timing signal. The timing performance of the coplanar-grid CdZnTe detector design was improved, reducing the full width at half maximum of the coincidence timing peak between two detectors from 800 ns to 30 ns.

The energy resolution of the silicon-CdZnTe camera system was 4% at 662 keV. This camera reproduced the location of the 137Cs source by event-circle image reconstruction with angular resolutions of 10° for a source on the camera axis and 14° for a source 30° off axis. The source-to-camera distance was approximately 1 m. Typical detector pair efficiencies were measured as 3×10⁻¹¹ at 662 keV.

The dual CdZnTe camera had an energy resolution of 3.2% at 662 keV. This camera reproduced the location of the 137Cs source by event-circle image reconstruction with angular resolutions of 8° for a source on the camera axis and 12° for a source 20° off axis. The source-to-camera distance was 1.7 m. Typical detector pair efficiencies were measured as 7×10⁻¹¹ at 662 keV.

Of the two prototype camera configurations tested, the silicon-CdZnTe configuration had superior imaging characteristics. This configuration is less sensitive to effects caused by source decay cascades and random coincident events. An implementation of the expectation maximization-maximum likelihood reconstruction technique improved the angular resolution to 6° and reduced the background in all the images.

The measured counting rates were a factor of two lower than simulation for the silicon-CdZnTe camera and up to a factor of four higher for the dual CdZnTe camera; these differences are greater than the error bars. The primary reasons for these discrepancies are experimental conditions imposed by source decay cascades and the occurrence of random coincidences, which are not modeled by the code.
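As a rough illustration of the event-circle reconstruction described above, the sketch below computes the Compton cone half-angle from the energies deposited in the two detector planes using the standard Compton scattering relation. It is a minimal sketch under stated assumptions, not the author's code; the function name and example energies are illustrative.

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Half-angle (degrees) of the event cone for a photon that Compton
    scatters in the front detector (depositing e_scatter_kev) and is then
    fully absorbed in the rear detector (depositing e_absorb_kev)."""
    e_total = e_scatter_kev + e_absorb_kev            # incident photon energy
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb_kev - 1.0 / e_total)
    cos_theta = max(-1.0, min(1.0, cos_theta))        # guard against resolution noise
    return math.degrees(math.acos(cos_theta))

# Example: a 662 keV Cs-137 photon depositing 200 keV in the front plane
print(compton_cone_angle(200.0, 462.0))  # ~48 degrees
```

Each valid coincidence yields one such cone (an "event circle" on the image plane); the source location emerges where many cones overlap.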
62

Particulate Generation During Disruption Simulation on the SIRENS High Heat Flux Facility

Sharpe, John Phillip 04 April 2000 (has links)
Successful implementation of advanced electrical power generation technology into the global marketplace requires at least two fundamental ideals: cost effectiveness and the guarantee of public safety. These requirements can be met by thorough design and development of technologies in which safety is emphasized and demonstrated. A detailed understanding of the many physical processes and their synergistic effects in a complicated fusion energy system is necessary for a defensible safety analysis. One general area of concern for fusion devices is the production of particulate, often referred to as dust or aerosol, from material exposed to high-energy-density fusion plasma. This dust may be radiologically activated and/or chemically toxic and, if released to the environment, could become a hazard to the public.

The goal of this investigation was to provide insight into the production and transport of particulate generated during extreme heat loads to surfaces directly exposed to high-energy-density plasma. A step towards achieving this goal was an experiment campaign carried out with the Surface InteRaction Experiment at North Carolina State (SIRENS), a facility used for high heat flux experiments. These experiments involved exposing various materials, including copper, stainless steel 316, tungsten, aluminum, graphite (carbon), and mixtures of carbon and metals, to the high-energy-density plasma of the SIRENS source section. Material mobilized as a result of this exposure was collected from a controlled expansion chamber and analyzed to determine physical characteristics important to safety analyses (e.g., particulate shape, size, chemical composition, and total mobilized mass). Key results from the metal-only experiments were: the particles were generally spherical and solid, with some agglomeration; greater numbers of particles were collected at increasing distances from the source section; the count median diameters of the measured particle size distributions were similar at different positions in the expansion chamber, although the standard deviation increased with increasing distance from the source section; and the average count median diameter was 0.75 micron for the different metals. Important results from the carbon and carbon/metal tests were: particle size distributions for graphite tests were bimodal (i.e., two distributions were present in the particle population); particles were generally smaller than those from the metal-only tests (average of 0.3 micron); and the individual particles were found to contain both carbon and metal material.

An associated step towards the goal involved development of an integrated mechanistic model to understand the role of different particulate phenomena in the overall behavior observed in the experiment. This required a detailed examination of plasma/fluid behavior in the plasma source section, fluid behavior in the expansion chamber, and mechanisms responsible for particulate generation and growth. The model developed in this work integrates these phenomena for the first time and was used to simulate mobilization experiments in SIRENS. Comparison of simulation results with experimental observations provides an understanding of the physical mechanisms forming the particulate and indicates whether mechanisms other than those in the model were present in the experiment. Key results from this comparison were: the predicted amount of mass mobilized from the source section was generally much lower than that measured; the calculated and measured particle count median diameters were similar at various locations in the expansion chamber; and the measured standard deviations were larger than those predicted by the model. These results indicate that mechanisms in addition to ablation (e.g., mobilization of melted material) were responsible for mass removal in the source section; that a large number of the measured particles were formed by the modeled mechanisms of nucleation and growth; and, as suggested by the large measured standard deviations, that the larger particles found in the measurements came from an aerosol source not included in the model. From this model, a detailed understanding of the production of primary particles from the interaction of a high-energy-density plasma and a solid material surface has been achieved. Enhancements to the existing model and improved/extended experimental tests will yield a more sophisticated mechanistic model for particulate production in a fusion reactor.
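As a hedged sketch of the particle-size statistics quoted above, the snippet below computes a count median diameter and geometric standard deviation from a set of measured particle diameters, assuming an approximately lognormal distribution; the sample data are invented for illustration and are not SIRENS measurements.

```python
import numpy as np

def lognormal_stats(diameters_um):
    """Count median diameter (geometric mean) and geometric standard
    deviation of a particle population, assuming a lognormal distribution."""
    log_d = np.log(diameters_um)
    cmd = np.exp(log_d.mean())          # count median diameter, microns
    gsd = np.exp(log_d.std(ddof=1))     # geometric standard deviation (dimensionless)
    return cmd, gsd

# Illustrative synthetic sample only (not SIRENS data)
sample = np.random.lognormal(mean=np.log(0.75), sigma=np.log(1.6), size=500)
print(lognormal_stats(sample))  # roughly (0.75, 1.6)
```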
63

Boundary Layer Energy Transport in Plasma Devices

Orton, Nigel Paul 01 May 2000 (has links)
The purpose of this research was to develop a model of boundary-layer energy transport in electric launchers and perform a numerical simulation to investigate the influence of turbulence, thermal radiation, and ablation on energy flux to plasma-facing surfaces. The model combines boundary-layer conservation equations with a k-omega turbulence model and multi-group radiation transport, and uses plasma models for fluid properties such as viscosity, thermal conductivity, and specific heat capacity. The resulting TURBFIRE computer code is the most comprehensive simulation to date of boundary-layer turbulence and radiation transport in electric launcher plasmas.

TURBFIRE was run for cases with and without ablation. Temperature and velocity profiles are presented for all code runs, as are values of heat flux to the wall. The results indicate that both radiation transport and turbulence are important mechanisms of energy transport in the boundary layer, and therefore that both should be modeled in future simulations. Additionally, heat flux to the wall via both conduction and radiation was found to be significant for all cases run. Other authors have theorized that conduction could be neglected, but the current results show that this is not the case near the wall.

This research is also novel for its advances in computational fluid dynamics (CFD). The energy equation was written in terms of internal energy and discretized in a manner more implicit than in typical CFD codes. These changes were necessary to enable the code to accurately calculate heat capacity, which changes greatly with temperature for even weakly ionized plasmas. Additionally, zero-gradient boundary conditions were used at the free stream for the turbulent kinetic energy and its dissipation rate (k and omega). Experimentally determined freestream values of k and omega are typically used in CFD codes, but these data are not available for most plasma devices.
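The internal-energy formulation mentioned above can be illustrated with a toy update step: advance the specific internal energy of a cell and recover temperature by inverting a tabulated e(T) curve, rather than dividing by a heat capacity that varies strongly with temperature. This is a schematic sketch with made-up property data, not the TURBFIRE discretization.

```python
import numpy as np

# Hypothetical tabulated equation of state e(T) for a weakly ionized plasma
T_table = np.linspace(3.0e3, 3.0e4, 200)                       # K
e_table = 2.0e3 * T_table + 5.0e6 * np.tanh(T_table / 1.5e4)   # J/kg (illustrative)

def advance_cell(e_old, net_heat_flux, rho, dt):
    """Advance internal energy of one cell and recover temperature by
    interpolating the monotonic e(T) table instead of using e = cp*T."""
    e_new = e_old + dt * net_heat_flux / rho
    T_new = np.interp(e_new, e_table, T_table)  # invert e(T)
    return e_new, T_new

print(advance_cell(e_old=1.2e7, net_heat_flux=5.0e8, rho=0.05, dt=1.0e-6))
```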
64

A New Monte Carlo Assisted Approach to Detector Response Functions

Sood, Avneet 01 May 2000 (has links)
The physical mechanisms that describe the components of NaI, Ge, and Si(Li) detector response have been investigated using Monte Carlo simulation. The mechanisms described focus on the shape of the Compton edge, the magnitude of the flat continuum, and the shape of the exponential tail features. These features are not accurately predicted by previous Monte Carlo simulations. Probable interaction mechanisms for each detector response component are given based on this Monte Carlo simulation. Precollision momentum of the electron is considered when simulating incoherent scattering of the photon. The description of the Doppler-broadened photon energy spectrum corrects the shape of the Compton edge. Special attention is given to partial energy loss mechanisms in the frontal region of the detector, such as the escape of photoelectric and Auger electrons or low-energy X-rays from the detector surface. The results include a possible physical mechanism describing the exponential tail feature, which is generated by a separate Monte Carlo simulation. Also included is a description of a convolution effect that accounts for the difference in magnitude of the flat continuum between the Monte Carlo simulation and experimental spectra. The convolution describes an enhanced electron loss. Results of these applications are discussed.
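A minimal sketch of the kind of convolution step mentioned in the abstract: an ideal deposited-energy histogram is smeared with a fixed-width Gaussian resolution function to approximate a pulse-height spectrum. The resolution value and bin structure here are assumptions for illustration, not the thesis's actual response function.

```python
import numpy as np

def gaussian_broaden(energies_kev, counts, fwhm_kev):
    """Convolve an ideal deposited-energy histogram with a fixed-width
    Gaussian to approximate finite detector energy resolution."""
    sigma = fwhm_kev / 2.3548
    broadened = np.zeros_like(counts, dtype=float)
    for e0, c in zip(energies_kev, counts):
        broadened += c * np.exp(-0.5 * ((energies_kev - e0) / sigma) ** 2)
    return broadened / (sigma * np.sqrt(2.0 * np.pi))

# Ideal spectrum: a 662 keV full-energy peak plus a crude flat Compton continuum
e = np.arange(0.0, 700.0, 1.0)
ideal = np.where(e < 478.0, 1.0, 0.0)   # continuum up to the Compton edge
ideal[662] = 200.0                      # full-energy peak
measured = gaussian_broaden(e, ideal, fwhm_kev=12.0)
```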
65

Bayesian Analysis for the Site-Specific Dose Modeling in Nuclear Power Plant Decommissioning

Ling, Xianbing 30 January 2001 (has links)
Decommissioning is the process of closing down a facility. In nuclear power plant decommissioning, it must be determined that any remaining radioactivity at a decommissioned site will not pose unacceptable risk to any member of the public after the release of the site. This is demonstrated by the use of predictive computer models for dose assessment. The objective of this thesis is to demonstrate methodologies for site-specific dose assessment using Bayesian analysis for nuclear power plant decommissioning. An actual decommissioning plant site is used as a test case for the analyses. A residential farmer scenario was used in the analysis with two of the most common computer codes for dose assessment, DandD and RESRAD. After identifying key radionuclides and parameters of importance in dose assessment for the site conceptual model, available data on these parameters were identified (as prior information) from the existing default input data of the computer codes or from the national database. The site-specific data were developed using the results of field investigations at the site, historical records for the site, regional databases, and relevant information from the literature. These new data were compared to the prior information with respect to their impact on both deterministic and probabilistic dose assessment. The two sets of information were then combined using the conjugate-pair method of Bayesian updating. Value of information (VOI) analysis was also performed based on the results of dose assessment for different radionuclides and parameters. The results of the VOI analysis indicated that the value of site-specific information was very low with regard to the decision on site release. This observation held for both of the computer codes used. Although the value of new information was very low with regard to decisions on site release, it was also found that the use of site-specific information is very important for reducing the predicted dose. This would be particularly true with the DandD code.
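The conjugate-pair updating referred to above can be sketched with the normal-normal pair: a normal prior on a parameter mean is combined with site measurements of known variance to give a closed-form posterior. The numbers below are placeholders for illustration, not values from the decommissioning case.

```python
def normal_update(prior_mean, prior_var, data, data_var):
    """Conjugate normal-normal Bayesian update for a parameter mean,
    assuming the measurement variance data_var is known."""
    n = len(data)
    xbar = sum(data) / n
    post_precision = 1.0 / prior_var + n / data_var
    post_mean = (prior_mean / prior_var + n * xbar / data_var) / post_precision
    return post_mean, 1.0 / post_precision

# Placeholder example: generic prior vs. three hypothetical site samples
print(normal_update(prior_mean=50.0, prior_var=400.0, data=[35.0, 42.0, 38.0], data_var=100.0))
```

The posterior mean is a precision-weighted average of the prior and the site data, which is why a few informative site measurements can shift the predicted dose even when the decision on site release is unchanged.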
66

Steam Generator Liquid Mass as a Control Input for the Movement of the Feed Control Valve in a Pressurized Water Reactor

Sakabe, Akira 26 November 2001 (has links)
The steam generator in a nuclear power plant plays an important role in cooling the reactor and producing steam for the turbine-generators. As a result, control of the water inventory in the steam generator is crucial. The water mass in the steam generator cannot be measured directly, so it is generally inferred from the downcomer differential pressure as a measure of the downcomer water level. The water level in the downcomer is a good indication of the water mass inventory at or near steady-state conditions. Conventional PI controllers are used to maintain the water level in the downcomer between relatively narrow limits to prevent excessive moisture carryover into the turbine or uncovering of the tube bundle. Complications arise in level control with respect to mass inventory due to the short-term inverse response of the downcomer level, also known as shrink and swell. Because of these complications, it is desirable to control the mass inventory in the steam generator directly. Currently, the mass inventory is not a measurable quantity, but it can be calculated through computer simulation. Design and analysis of the new controller were therefore performed by simulation. The focus of this research was to design, develop, test, and implement a liquid mass inventory controller that would allow safe automatic operation during normal and accident scenarios. In designing the new controller, it is assumed that the normal plant safety functions are not impacted by the mass controller. Optimal settings for the new mass controller are sought such that the mass control program will have rapid response and avoid reactor trips under automatic control, provided the downcomer level protection setpoints do not induce a trip for the same transient. For future analysis, it is proposed that neural networks be used in the water mass observer instead of calculated simulation results.
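A minimal sketch of the kind of mass-based control loop discussed above: a PI controller drives the feedwater valve demand from the error between the calculated liquid mass and its setpoint. The gains, units, and clamping are illustrative assumptions, not the controller designed in the thesis.

```python
class MassPIController:
    """Simple PI controller: valve demand from steam generator liquid mass error."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, mass_setpoint_kg, mass_estimate_kg):
        error = mass_setpoint_kg - mass_estimate_kg
        self.integral += error * self.dt
        demand = self.kp * error + self.ki * self.integral
        return min(max(demand, 0.0), 1.0)   # clamp demand to 0..1 (closed..open)

# Hypothetical use with a simulated mass estimate updated every 0.1 s
ctrl = MassPIController(kp=2.0e-5, ki=5.0e-6, dt=0.1)
valve = ctrl.update(mass_setpoint_kg=45000.0, mass_estimate_kg=44200.0)
```

Because the controlled variable is the calculated mass rather than the downcomer level, a loop of this form is not fooled by the short-term shrink-and-swell inverse response described above.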
67

The Solubility and Diffusivity of Helium in Mercury with Respect to Applications at the Spallation Neutron Source

Francis, Matthew W. 01 May 2008 (has links)
Models for the solubility of noble gases in liquid metals are reviewed in detail and evaluated for the combination of mercury and helium, for applications at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Gas solubility in mercury is acknowledged to be very low; therefore, mercury has been used in ASTM standard methods as a blocking medium for gas solubility studies in organic fluids and water. Models from physical chemistry predict a Henry coefficient for helium in mercury near 3.9×10¹⁵ Pa·mol Hg/mol He, but the models have large uncertainties and are not verified with data. An experiment is designed that bounds the solubility of helium in mercury to values below 1.0×10⁻⁸ mol He/mol Hg at 101.3 kPa, which is below previously measurable values. The engineering application that motivated this study was the desire to inject 10 to 15 micron-radius helium bubbles into the mercury target of the SNS to reduce the pressure spikes that accompany the beam energy deposition. While the experiment bounds the solubility to values low enough to support system engineering for the SNS application, it does not allow confirmation of the theoretical solubility with low uncertainty. However, methods to measure the solubility value may be derived from the techniques employed in this study.
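As a back-of-the-envelope check on the figures quoted above, the sketch below converts the model Henry coefficient into an equilibrium helium mole fraction at the stated pressure and compares it with the experimental upper bound; the calculation is illustrative and uses only the numbers given in the abstract.

```python
# Henry's law: x_He = P / K_H, with K_H in Pa * (mol Hg / mol He)
K_H = 3.9e15          # model Henry coefficient, Pa*mol Hg/mol He
P = 101.3e3           # helium partial pressure, Pa

x_he_model = P / K_H                 # predicted equilibrium mole fraction
experimental_bound = 1.0e-8          # measured upper bound, mol He/mol Hg

print(f"model solubility  : {x_he_model:.1e} mol He / mol Hg")   # ~2.6e-11
print(f"experimental bound: {experimental_bound:.1e} (consistent with, but cannot resolve, the model)")
```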
68

Forecasting Dose and Dose Rate from Solar Particle Events Using Locally Weighted Regression Techniques

Nichols, Theodore Franklin 01 August 2009 (has links)
Continued human exploration of the solar system requires mitigating radiation effects from the Sun. Doses from solar particle events (SPEs) pose a serious threat to the health of astronauts. A method for forecasting the rate and total severity of such events would give astronauts time to take actions to mitigate the effects of an SPE. The danger posed by an SPE depends on the dose received and the temporal profile of the event; the temporal profile describes how quickly the dose will arrive (the dose rate). Previous methods used neural networks to predict the total dose from an event, and later work added the ability to predict temporal profiles using the neural network approach. Locally weighted regression (LWR) techniques were then investigated for forecasting the total dose from an SPE, and that work showed that LWR methods could forecast the total dose from an event but did not calculate the uncertainty in a forecast. The present research expands the LWR model to forecast both the dose and the temporal profile of an SPE, along with the uncertainty in these forecasts. The LWR method is able to make forecasts early enough in an event that the results can be beneficial to operators and crews; the forecasts in this work are all made at or before five hours after the start of the SPE. For 58 percent of the events tested, the dose-rate profile is within the uncertainty bounds. Restricting the data set to events below 145 cGy, 86 percent of the events are within the uncertainty bounds. The uncertainty in the forecasts is large; however, the forecasts are made early enough into an SPE that very little of the dose will have reached the crew. Increasing the number of SPEs in the data set increases the accuracy of the forecasts and reduces their uncertainty.
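A minimal sketch of locally weighted regression as described above: a Gaussian kernel weights historical (time, dose) points near the query time, and a weighted linear fit yields the local forecast. The kernel width and data are illustrative assumptions, not the thesis model or its SPE database.

```python
import numpy as np

def lwr_predict(x_train, y_train, x_query, bandwidth):
    """Locally weighted linear regression evaluated at a single query point."""
    w = np.exp(-0.5 * ((x_train - x_query) / bandwidth) ** 2)   # Gaussian kernel weights
    A = np.column_stack([np.ones_like(x_train), x_train])       # design matrix [1, x]
    WA = A * w[:, None]
    beta, *_ = np.linalg.lstsq(A.T @ WA, A.T @ (w * y_train), rcond=None)
    return beta[0] + beta[1] * x_query

# Illustrative dose history (hours after event onset vs. cumulative dose, cGy)
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
dose = np.array([2.0, 5.0, 11.0, 18.0, 24.0, 29.0])
print(lwr_predict(t, dose, x_query=6.0, bandwidth=2.0))   # extrapolated dose estimate
```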
69

An Integrated Fuzzy Inference Based Monitoring, Diagnostic, and Prognostic System

Garvey, Dustin R 01 May 2007 (has links)
To date, the majority of the research related to the development and application of monitoring, diagnostic, and prognostic systems has been exclusive in the sense that only one of the three areas is the focus of the work. While previous research has advanced each of the respective fields, the end result is a variable "grab bag" of techniques that address each problem independently. Also, the new field of prognostics is lacking in the sense that few methods have been proposed that produce estimates of the remaining useful life (RUL) of a device or that can be realistically applied to real-world systems. This work addresses both problems by developing the nonparametric fuzzy inference system (NFIS), which is adapted for monitoring, diagnosis, and prognosis, and then proposing the path classification and estimation (PACE) model, which can be used to predict the RUL of a device that does or does not have a well-defined failure threshold. To test and evaluate the proposed methods, they were applied to detect, diagnose, and prognose faults and failures in the hydraulic steering system of a deep oil exploration drill. The monitoring system implementing an NFIS predictor and sequential probability ratio test (SPRT) detector produced detection rates comparable to a monitoring system implementing an autoassociative kernel regression (AAKR) predictor and SPRT detector, specifically 80% vs. 85% for the NFIS and AAKR monitors, respectively. It was also found that the NFIS monitor produced fewer false alarms. Next, the monitoring system outputs were used to generate symptom patterns for k-nearest neighbor (kNN) and NFIS classifiers that were trained to diagnose different fault classes. The NFIS diagnoser was shown to significantly outperform the kNN diagnoser, with overall accuracies of 96% vs. 89%, respectively. Finally, the PACE model implementing the NFIS was used to predict the RUL for different failure modes. The errors of the RUL estimates produced by the PACE-NFIS prognosers ranged from 1.2 to 11.4 hours with 95% confidence intervals (CIs) of 0.67 to 32.02 hours, which are significantly better than the population-based prognoser estimates, with errors of ~45 hours and 95% CIs of ~162 hours.
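For readers unfamiliar with the SPRT detector mentioned above, the sketch below accumulates a log-likelihood ratio over model residuals to decide between a "healthy" zero-mean hypothesis and a shifted-mean fault hypothesis. The thresholds and parameters are generic textbook values, not those used in the thesis.

```python
import math

def sprt(residuals, mu_fault, sigma, alpha=0.01, beta=0.10):
    """Wald sequential probability ratio test on Gaussian residuals:
    H0 mean 0 vs. H1 mean mu_fault, both with standard deviation sigma."""
    upper = math.log((1.0 - beta) / alpha)   # decide fault (H1) above this
    lower = math.log(beta / (1.0 - alpha))   # decide healthy (H0) below this
    llr = 0.0
    for i, r in enumerate(residuals):
        llr += (r * mu_fault - 0.5 * mu_fault ** 2) / sigma ** 2
        if llr >= upper:
            return "fault", i
        if llr <= lower:
            llr = 0.0   # typical monitoring practice: reset after an H0 decision and continue
    return "no decision", len(residuals)

print(sprt([0.1, 0.4, 0.9, 1.2, 1.1, 1.3], mu_fault=1.0, sigma=0.5))
```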
70

A Generic Prognostic Framework for Remaining Useful Life Prediction of Complex Engineering Systems

Usynin, Alexander V. 01 December 2007 (has links)
Prognostics and Health Management (PHM) is a general term that encompasses methods used to evaluate system health, predict the onset of failure, and mitigate the risks associated with degraded behavior. A multitude of health monitoring techniques facilitating the detection and classification of the onset of failure have been developed for commercial and military applications. PHM system designers are currently focused on developing prognostic techniques and integrating diagnostic and prognostic approaches at the system level. This dissertation introduces a prognostic framework that integrates several methodologies necessary for the general application of PHM to a variety of systems. A method is developed to represent the multidimensional system health status in the form of a scalar quantity called a health indicator. This method indicates the effectiveness of the health indicator in terms of how well or how poorly it can distinguish healthy and faulty system exemplars. A usefulness criterion was developed that allows the practitioner to evaluate the practicability of using a particular prognostic model along with observed degradation evidence data. The criterion is based on comparing the model uncertainty, imposed primarily by imperfections in the degradation evidence data, against the uncertainty associated with a time-to-failure prediction based on average reliability characteristics of the system. This dissertation identifies the major contributors to prognostic uncertainty and analyzes their effects. Further study of two important contributors resulted in the development of uncertainty management techniques to improve PHM performance: an analysis of uncertainty effects attributed to the random nature of the critical degradation threshold, and an analysis of uncertainty effects attributed to the presence of unobservable failure mechanisms affecting the system degradation process alongside observable failure mechanisms. A method was developed to reduce the effects of uncertainty on a prognostic model. This dissertation also provides a method to incorporate prognostic information into optimization techniques aimed at finding an optimal control policy for equipment operating in an uncertain environment.
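A simplified illustration of the health-indicator-to-RUL step described above: a scalar health indicator trajectory is fit with a linear degradation trend and extrapolated to a failure threshold to estimate remaining useful life. The indicator values, trend model, and threshold are assumptions for illustration only, not the dissertation's framework.

```python
import numpy as np

def estimate_rul(times_h, health_indicator, failure_threshold):
    """Fit a linear degradation trend to a scalar health indicator and
    extrapolate to the failure threshold to estimate remaining useful life."""
    slope, intercept = np.polyfit(times_h, health_indicator, 1)
    if slope <= 0:
        return float("inf")                      # no degradation trend detected
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - times_h[-1], 0.0)        # hours until threshold crossing

# Illustrative degradation history (health indicator rising toward threshold 1.0)
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
hi = np.array([0.05, 0.18, 0.33, 0.46, 0.61])
print(estimate_rul(t, hi, failure_threshold=1.0))   # roughly 28 hours remaining
```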
