361

Alternatives to the Americium-Beryllium Neutron Source for the Compensated Neutron Porosity Log

Peeples, Cody Ryan 07 December 2007 (has links)
Monte Carlo simulations of neutron porosity logs were performed to examine the possibility of replacing the standard Americium-Beryllium neutron source. The candidate replacement sources were the Californium-252 radioisotope and the Deuterium-Tritium fusion reaction based particle accelerator neutron source. It was found that the differences in the energy spectra of neutrons emitted by the sources made an impact on the observed response. Both candidates were found to have potential as sources for the log.
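The spectral difference the abstract refers to can be made concrete with a small Monte Carlo sketch: Cf-252 emits a broad Watt fission spectrum with a mean a little over 2 MeV, while a D-T generator is essentially monoenergetic at 14.1 MeV. This is an illustration, not the thesis model; the Watt parameters a and b are commonly quoted values for Cf-252 spontaneous fission and are assumptions here.

```python
import math
import random

random.seed(0)

def sample_watt(n, a=1.025, b=2.926, emax=20.0):
    """Rejection-sample the Watt spectrum p(E) ~ exp(-E/a)*sinh(sqrt(b*E)).
    a (MeV) and b (1/MeV) are commonly quoted Cf-252 values -- assumptions."""
    pdf = lambda e: math.exp(-e / a) * math.sinh(math.sqrt(b * e))
    # crude envelope: scan a grid for the unnormalized pdf maximum
    pmax = max(pdf(0.01 * i) for i in range(1, int(emax * 100)))
    samples = []
    while len(samples) < n:
        e = random.uniform(1e-6, emax)
        if random.uniform(0.0, pmax) < pdf(e):
            samples.append(e)
    return samples

cf252 = sample_watt(20000)
mean_cf = sum(cf252) / len(cf252)
print(f"Cf-252 sample mean ~ {mean_cf:.2f} MeV vs. D-T at 14.1 MeV")
```

The order-of-magnitude gap in mean emission energy is one reason the two candidate sources produce different tool responses.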
362

Atmospheric Plasma Characterization and Mechanisms of Substrate Surface Modification

Cornelius, Carrie Elizabeth Ms. 08 December 2006 (has links)
The purpose of this research has been to characterize the parameters of an Atmospheric Plasma Device used for surface modification and functionalization of textile materials. Device parameters were determined in the absence and presence of a substrate to quantify the optimal operational conditions. Neutral gas temperature profiles were determined for a variety of gas mixtures, including 100% helium and helium with 1 or 2% reactive gases such as oxygen and carbon tetrafluoride. A plasma model was developed to solve for other plasma parameters, including the electron-neutral collision frequency and the electron number density. Wool substrates were treated with various gas mixtures for a range of exposure durations, and the effects of plasma treatment on weight, surface functionality, and strength were assessed. Assessment methods included percent weight change calculations, energy dispersive X-ray spectroscopy (EDS), and tensile testing. In addition, cellulosic paper was exposed to 1% oxygen plasma to determine the feasibility of permanently grafting the anti-microbial agent HTCC (quaternized ammonium chitosan). The success of the bond was tested using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), colorimetry, and percent weight change, and the permanency of the bond was tested through Soxhlet extraction.
363

Design and Optimization of Thermosyphon Batch Targets for Production of F-18

Peeples, Johanna Louise 06 December 2006 (has links)
F-18 is a short-lived radioisotope commonly used in Positron Emission Tomography (PET). This radionuclide is typically produced through the O-18(p,n)F-18 reaction by proton bombardment of O-18-enriched water. Thermosyphon batch targets have been proposed as a means to increase F-18 production due to their enhanced heat rejection capabilities. These boiling targets have been operated with up to 3.2 kW of beam power with manageable O-18 enriched water volumes. The purpose of this research project has been to develop computational methods which can be used to design new targets with enhanced production capabilities. The computational methods developed in this work were used to design a low power thermosyphon production target for the Duke Medical Center cyclotron. This design was modeled to be range thick, and operate within the desired margins for beam powers in excess of 1 kW, the operating limit of the Duke cyclotron. A sensitivity analysis of the computational methods was performed which indicated the model is most sensitive to the boiling and condensing heat transfer coefficients. Even with a high uncertainty in these coefficients, the target should still operate well within the desired margins.
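The case for higher beam power over longer runs follows from the standard saturation-activity relation for a short-lived product such as F-18 (half-life about 109.8 min). The sketch below uses only that textbook relation; the irradiation times are generic illustrations, not figures from the thesis.

```python
import math

HALF_LIFE_MIN = 109.77  # F-18 half-life in minutes

def saturation_fraction(t_min, half_life=HALF_LIFE_MIN):
    """Fraction of the saturation activity reached after irradiating for t_min,
    A(t)/A_sat = 1 - exp(-lambda * t)."""
    lam = math.log(2) / half_life
    return 1.0 - math.exp(-lam * t_min)

# Diminishing returns: one half-life of bombardment yields 50% of saturation,
# two half-lives only 75% -- so raising beam power (as thermosyphon targets
# allow) pays off faster than extending the run.
for t in (30, 60, 110, 220):
    print(f"{t:4d} min -> {saturation_fraction(t):.1%} of saturation yield")
```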
364

A NEW METHOD FOR RADIOACTIVE PARTICLE TRACKING

Shehata, Ashraf Hassan 08 December 2005 (has links)
A system based on the concept of three-detector radioactive particle tracking, used to track a particle non-invasively in three dimensions, is presented. It consists of a set of three well-collimated detectors mounted on a platform that can be moved to track the radioactive particle vertically through one collimated detector with a horizontal slot opening. The other two collimated detectors, with vertical slot openings, can be rotated angularly to track the radioactive particle in the planar domain and deduce its polar coordinates. A complete description of the actual system developed is outlined, including the hardware, the automation and control software, and the data acquisition aspects. A critique of conventional tomographic radioactive particle tracking was established in comparison to the new three-detector system we developed, and a number of valuable advantages of the new method were pointed out. The results presented here illustrate the performance of the system through a series of benchmark experiments. Results of real trajectories of a single radioactive particle moving in air, and in a bed filled with a mass of granular spherical attenuating medium, are also presented. Through benchmark experiments that include a variety of real-time trajectories, the success of the tracking system is demonstrated.
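The planar localization step described above amounts to intersecting two sight lines measured by the rotating, vertically slotted detectors. A minimal geometric sketch, with a hypothetical detector layout (not the thesis geometry):

```python
import math

def locate_xy(p1, theta1, p2, theta2):
    """Intersect two detector sight lines in the plane.
    p1, p2: (x, y) detector positions; theta1, theta2: measured azimuths (rad).
    Returns the particle (x, y), or None if the rays are (near-)parallel."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None
    # solve p1 + t*d1 = p2 + s*d2 for the parameter t along ray 1
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# hypothetical geometry: detectors on the x-axis, particle actually at (1, 2)
pos = locate_xy((0, 0), math.atan2(2, 1), (4, 0), math.atan2(2, -3))
print(pos)
```

The polar coordinates then follow directly from the recovered Cartesian point; the vertical coordinate comes from the third, horizontally slotted detector.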
365

Coaxial Atmospheric Pressure Plasma Discharge for Treatment of Filaments and Yarns

Lee, Kyoung Ook 08 January 2008 (has links)
Characteristics of non-thermal atmospheric-pressure plasma generated in a coaxial cylindrical Dielectric-Barrier Discharge (DBD) were investigated for application in the treatment of polymer and 100% un-mercerized cotton yarns. The discharge characteristics were investigated by measuring the electrical parameters and using developed plasma circuit models to obtain the plasma electron temperature, number density, and electron-neutral collision frequency. The experiments were conducted in helium and oxygenated helium plasma in the absence and presence of yarns. The discharge is capacitively coupled and is induced by an audio-frequency, 4.5 kHz, oscillating voltage. Based on the electrical voltage-current (V-I) characteristics, the oxygen flow rate that optimized the oxygen-to-helium flow rate ratio for plasma processing was found to be about 40 sccm. Optical emission spectroscopy (OES) was used to determine the plasma composition and to evaluate plasma temperature and number density. The plasma electron number density decreased from 2.2 x 10^16 to 1.4 x 10^16 per cubic meter when the oxygen flow rate was increased to 100 sccm in a 10,000 sccm helium flow, while the electron temperature increased from 0.15 to 0.4 eV for the same increase in oxygen flow rate. It was also found that the plasma develops some streamers and that the streamers' electron temperature has a wide range, between 0.5 and 2 eV. The optimized oxygen flow rate for polymer yarn processing was found to be 40 sccm in a 10,000 sccm helium flow.
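As a rough consistency check on plasma parameters of the kind quoted above (n_e ~ 2.2 x 10^16 m^-3 at T_e ~ 0.15 eV), the electron Debye length and plasma frequency follow from textbook formulas. This sketch simply plugs the abstract's values into those standard expressions; it is not part of the thesis's circuit model.

```python
import math

EPS0 = 8.854e-12    # vacuum permittivity, F/m
E_CHG = 1.602e-19   # elementary charge, C
M_E = 9.109e-31     # electron mass, kg

def debye_length(n_e, t_e_ev):
    """Electron Debye length (m): sqrt(eps0 * kT_e / (n_e * e^2)),
    with kT_e supplied directly in eV."""
    return math.sqrt(EPS0 * t_e_ev / (n_e * E_CHG))

def plasma_freq(n_e):
    """Electron plasma angular frequency (rad/s): sqrt(n_e e^2 / (eps0 m_e))."""
    return math.sqrt(n_e * E_CHG**2 / (EPS0 * M_E))

# values quoted in the abstract
n_e, t_e = 2.2e16, 0.15
print(f"Debye length ~ {debye_length(n_e, t_e) * 1e6:.0f} um")
print(f"plasma frequency ~ {plasma_freq(n_e) / (2 * math.pi) / 1e9:.2f} GHz")
```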
366

A Compton Camera for Spectroscopic Imaging from 100keV to 1MeV

Earnhart, Jonathan Raby 21 June 1999 (has links)
Under the direction of Robin Gardner and Thomas Prettyman.

Compton cameras are a particularly interesting gamma-ray imaging technology because they have a large field of view and rely on electronic rather than mechanical (lead) collimation. These systems produce two-dimensional, spectroscopic images using data collected from spatially separated detector arrays. A single acquisition contains the data to produce image signatures for each radionuclide in the field of view. Application of Compton cameras in the field of astrophysics has proven the system's capability for imaging in the 1 to 30 MeV range. Other potential applications, in the 100 keV to 1 MeV range, include nuclear material safeguards and nuclear medicine imaging. A particularly attractive feature for these applications is that the technology to produce a portable camera is now available due to improvements in solid-state room-temperature detectors.

The objective of this work is to investigate Compton camera technology for spectroscopic imaging of gamma rays in the 100 keV to 1 MeV range. To this end, accurate and efficient camera simulation capability allows a variety of design issues to be explored before a full camera system is built. An efficient, special-purpose Monte Carlo code was developed to investigate the image formation process in Compton cameras. The code is based on a pathway sampling technique with extensive use of variance reduction techniques. In particular, the technique of forcing is used to make each history result in a partial success. The code includes detailed Compton scattering physics, including incoherent scattering functions, Doppler broadening, and multiple scattering. Detector response functions are also included in the simulations.

A prototype camera was built to provide code benchmarks and investigate implementation issues. The prototype is based on a two-detector system, which sacrifices detection efficiency for simplicity and versatility. One of the detectors is mounted on a computer-controlled stage capable of two-dimensional motion (14 x 14 cm full range with ±0.1 mm precision). This produces a temporally encoded image via motion of the detector.

Experiments were performed with two different camera configurations for a scene containing a Se-75 source and a Cs-137 source. These sources provided a challenging test of the spectroscopic imaging capability of the Compton camera concept. The first camera was based on a fixed silicon detector in the front plane and a CdZnTe detector mounted on the stage. The second camera configuration was based on two CdZnTe detectors. Both systems were able to reconstruct images of Se-75, using the 265 keV line, and Cs-137, using the 662 keV line. Only the silicon-CdZnTe camera was able to resolve the low-intensity 400 keV line of Se-75. Neither camera was able to reconstruct the Se-75 source location using the 136 keV line. The camera has a low-energy limit imposed by the noise level on the front plane detector's timing signal. The timing performance of the coplanar-grid CdZnTe detector design was improved, reducing the full width at half maximum of the coincidence timing peak between the two detectors from 800 ns to 30 ns.

The energy resolution of the silicon-CdZnTe camera system was 4% at 662 keV. This camera reproduced the location of the Cs-137 source by event circle image reconstruction with angular resolutions of 10° for a source on the camera axis and 14° for a source 30° off axis. The source-to-camera distance was approximately 1 m. Typical detector pair efficiencies were measured as 3 x 10^-11 at 662 keV.

The dual CdZnTe camera had an energy resolution of 3.2% at 662 keV. This camera reproduced the location of the Cs-137 source by event circle image reconstruction with angular resolutions of 8° for a source on the camera axis and 12° for a source 20° off axis. The source-to-camera distance was 1.7 m. Typical detector pair efficiencies were measured as 7 x 10^-11 at 662 keV.

Of the two prototype camera configurations tested, the silicon-CdZnTe configuration had superior imaging characteristics. This configuration is less sensitive to effects caused by source decay cascades and random coincident events. An implementation of the expectation maximization-maximum likelihood reconstruction technique improved the angular resolution to 6° and reduced the background in all the images.

The measured counting rates were a factor of two low for the silicon-CdZnTe camera, and up to a factor of four high for the dual CdZnTe camera, compared to simulation. These differences are greater than the error bars. The primary reasons for these discrepancies are related to experimental conditions imposed by source decay cascades and the occurrence of random coincidences, which are not modeled by the code.
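Event circle reconstruction rests on Compton kinematics: the energy deposited in the front detector fixes the scattering angle, and hence a cone of possible source directions whose projection is the event circle. A minimal sketch of that relation (the 200 keV deposit is an illustrative number, not a measured one):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_dep, e_total):
    """Opening angle (deg) of the event circle for a photon of total energy
    e_total (keV) that deposits e_dep in the front detector.
    Compton formula: cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0), E' = E0 - e_dep."""
    e_scat = e_total - e_dep
    cos_t = 1.0 - M_E_C2 * (1.0 / e_scat - 1.0 / e_total)
    if not -1.0 <= cos_t <= 1.0:
        return None  # kinematically forbidden -- reject the event
    return math.degrees(math.acos(cos_t))

# a Cs-137 photon (662 keV) depositing 200 keV in the front plane
angle = compton_cone_angle(200.0, 662.0)
print(f"cone half-angle ~ {angle:.1f} deg")
```

The kinematic-rejection branch also illustrates why energy resolution matters: deposits smeared outside the allowed range are discarded, and deposits smeared within it shift the cone, degrading angular resolution.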
367

Particulate Generation During Disruption Simulation on the SIRENS High Heat Flux Facility

Sharpe, John Phillip 04 April 2000 (has links)
Successful implementation of advanced electrical power generation technology into the global marketplace requires at least two fundamental ideals: cost effectiveness and the guarantee of public safety. These requirements can be met by thorough design and development of technologies in which safety is emphasized and demonstrated. A detailed understanding of the many physical processes and their synergistic effects in a complicated fusion energy system is necessary for a defensible safety analysis. One general area of concern for fusion devices is the production of particulate, often referred to as dust or aerosol, from material exposed to high energy density fusion plasma. This dust may be radiologically activated and/or chemically toxic, and, if released to the environment, could become a hazard to the public. The goal of this investigation was to provide insight into the production and transport of particulate generated during the event of extreme heat loads to surfaces directly exposed to high energy density plasma. A step towards achieving this goal was an experiment campaign carried out with the Surface InteRaction Experiment at North Carolina State (SIRENS), a facility used for high heat flux experiments. These experiments involved exposing various materials, including copper, stainless steel 316, tungsten, aluminum, graphite (carbon), and mixtures of carbon and metals, to the high energy density plasma of the SIRENS source section. Material mobilized as a result of this exposure was collected from a controlled expansion chamber and analyzed to determine physical characteristics important to safety analyses (e.g., particulate shape, size, chemical composition, and total mobilized mass).
Key results from the metal-only experiments were: the particles were generally spherical and solid, with some agglomeration; greater numbers of particles were collected at increasing distances from the source section; the count median diameters of the measured particle size distributions were of similar value at different positions in the expansion chamber, although the standard deviation was found to increase with increasing distance from the source section; and the average count median diameter was 0.75 micron across the different metals. Important results from the carbon and carbon/metal tests were: particle size distributions for graphite tests were bi-modal (i.e., two distributions were present in the particle population); particles were generally smaller than those from the metal-only tests (average of 0.3 micron); and the individual particles were found to contain both carbon and metal material. An associated step towards the goal involved development of an integrated mechanistic model to understand the role of different particulate phenomena in the overall behavior observed in the experiment. This required a detailed examination of plasma/fluid behavior in the plasma source section, fluid behavior in the expansion chamber, and the mechanisms responsible for particulate generation and growth. The model developed in this work represents the first integration of these phenomena and was used to simulate mobilization experiments in SIRENS. Comparison of simulation results with experimental observations provides an understanding of the physical mechanisms forming the particulate and indicates whether mechanisms other than those in the model were present in the experiment.
Key results from this comparison were: the predicted amount of mass mobilized from the source section was generally much lower than that measured; the calculated and measured particle count median diameters were similar at various locations in the expansion chamber; and the measured standard deviations were larger than those predicted by the model. These results imply that mechanisms other than ablation (e.g., mobilization of melted material) were responsible for mass removal in the source section, that a large number of the measured particles were formed by the modeled mechanisms of nucleation and growth, and, as indicated by the large measured standard deviations, that the larger particles found in the measurements came from an aerosol source not included in the model. From this model, a detailed understanding of the production of primary particles from the interaction of a high energy density plasma and a solid material surface has been achieved. Enhancements to the existing model and improved/extended experimental tests will yield a more sophisticated mechanistic model for particulate production in a fusion reactor.
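The count median diameter and geometric standard deviation quoted above are the standard lognormal summary statistics for an aerosol sample: the CMD is the geometric mean of the diameters and the GSD is the exponential of the standard deviation of their logarithms. A short sketch using the reported 0.75 micron CMD with an assumed (illustrative) GSD of 1.6:

```python
import math
import random

random.seed(1)

def cmd_and_gsd(diams):
    """Count median diameter (geometric mean) and geometric standard deviation
    of a diameter sample, assuming an approximately lognormal population."""
    logs = [math.log(d) for d in diams]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)
    return math.exp(mu), math.exp(math.sqrt(var))

# synthetic sample: CMD 0.75 um (from the abstract), GSD 1.6 (assumed)
sample = [random.lognormvariate(math.log(0.75), math.log(1.6)) for _ in range(50000)]
cmd, gsd = cmd_and_gsd(sample)
print(f"CMD ~ {cmd:.2f} um, GSD ~ {gsd:.2f}")
```

Summarizing collected particles this way is what makes the "similar CMD, growing standard deviation with distance" observation a compact, position-by-position comparison.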
368

Boundary Layer Energy Transport in Plasma Devices

Orton, Nigel Paul 01 May 2000 (has links)
The purpose of this research was to develop a model of boundary-layer energy transport in electric launchers, and to perform a numerical simulation to investigate the influence of turbulence, thermal radiation, and ablation on energy flux to plasma-facing surfaces. The model combines boundary-layer conservation equations with a k-omega turbulence model and multi-group radiation transport, and uses plasma models for fluid properties such as viscosity, thermal conductivity, and specific heat capacity. The resulting TURBFIRE computer code is the most comprehensive simulation to date of boundary-layer turbulence and radiation transport in electric launcher plasmas.

TURBFIRE was run for cases with and without ablation. Temperature and velocity profiles are presented for all code runs, as are values of heat flux to the wall. The results indicate that both radiation transport and turbulence are important mechanisms of energy transport in the boundary layer, and therefore that both should be modeled in future simulations. Additionally, heat flux to the wall via both conduction and radiation was found to be significant for all cases run. Other authors have theorized that conduction could be neglected, but the current results show that this is not the case near the wall.

This research is also novel for its advances in computational fluid dynamics (CFD). The energy equation was written in terms of internal energy and discretized in a manner more implicit than in typical CFD codes. These changes were necessary to enable the code to accurately calculate heat capacity, which changes greatly with temperature for even weakly-ionized plasmas. Additionally, zero-gradient boundary conditions were used at the free stream for the turbulent kinetic energy and its dissipation rate (k and omega). Experimentally determined freestream values of k and omega are typically used in CFD codes, but these data are not available for most plasma devices.
369

A New Monte Carlo Assisted Approach to Detector Response Functions

Sood, Avneet 01 May 2000 (has links)
The physical mechanisms that describe the components of NaI, Ge, and Si(Li) detector response have been investigated using Monte Carlo simulation. The mechanisms described focus on the shape of the Compton edge, the magnitude of the flat continuum, and the shape of the exponential tail features. These features are not accurately predicted by previous Monte Carlo simulation. Probable interaction mechanisms for each detector response component are given based on this Monte Carlo simulation. The precollision momentum of the electron is considered when simulating incoherent scattering of the photon. The description of the Doppler-broadened photon energy spectrum corrects the shape of the Compton edge. Special attention is given to partial energy loss mechanisms in the frontal region of the detector, such as the escape of photoelectric and Auger electrons or low-energy X-rays from the detector surface. The results include a possible physical mechanism describing the exponential tail feature, generated by a separate Monte Carlo simulation. Also included is a description of a convolution effect that accounts for the difference in magnitude of the flat continuum between the Monte Carlo simulation and experimental spectra; the convolution describes an enhanced electron loss. Results of these applications are discussed.
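Detector response components of this kind are typically applied to an ideal spectrum by convolving it with an energy-dependent Gaussian resolution function. The sketch below is a simplified illustration of that broadening step only; the flat 50 keV FWHM is an assumption standing in for a fitted, energy-dependent NaI resolution model.

```python
import math

def gaussian_broaden(energies, counts, fwhm_at):
    """Broaden an ideal line spectrum with a Gaussian detector resolution.
    fwhm_at(E) returns the FWHM (keV) at energy E; sigma = FWHM / 2.3548."""
    out = [0.0] * len(counts)
    for e0, c in zip(energies, counts):
        if c == 0.0:
            continue
        sigma = fwhm_at(e0) / 2.3548
        for j, e in enumerate(energies):
            out[j] += c * math.exp(-0.5 * ((e - e0) / sigma) ** 2)
    # rescale so total counts are preserved by the broadening
    scale = sum(counts) / sum(out)
    return [x * scale for x in out]

# ideal 662 keV line on a 1 keV grid, with an assumed flat 50 keV FWHM
grid = list(range(500, 801))
ideal = [1.0 if e == 662 else 0.0 for e in grid]
broadened = gaussian_broaden(grid, ideal, lambda e: 50.0)
print(f"peak remains at {grid[broadened.index(max(broadened))]} keV")
```

Features like the exponential tail and flat continuum discussed in the abstract are additional response components layered on top of this basic Gaussian smearing.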
370

BAYESIAN ANALYSIS FOR THE SITE-SPECIFIC DOSE MODELING IN NUCLEAR POWER PLANT DECOMMISSIONING

LING, XIANBING 30 January 2001 (has links)
Decommissioning is the process of closing down a facility. In nuclear power plant decommissioning, it must be determined that any remaining radioactivity at a decommissioned site will not pose unacceptable risk to any member of the public after the release of the site. This is demonstrated by the use of predictive computer models for dose assessment. The objective of this thesis is to demonstrate methodologies for site-specific dose assessment using Bayesian analysis for nuclear power plant decommissioning. An actual decommissioning plant site is used as a test case for the analyses. A residential farmer scenario was used in the analysis with two of the most common computer codes for dose assessment, i.e., DandD and RESRAD. After identifying the key radionuclides and parameters of importance in dose assessment for the site conceptual model, available data on these parameters were identified (as prior information) from the existing default input data of the computer codes or from national databases. Site-specific data were developed using the results of field investigations at the site, historical records of the site, regional databases, and relevant information from the literature. These new data were compared to the prior information with respect to their impacts on both deterministic and probabilistic dose assessment. Then, the two sets of information were combined using the method of conjugate pairs for Bayesian updating. Value of information (VOI) analysis was also performed based on the results of dose assessment for different radionuclides and parameters. The results of the VOI analysis indicated that the value of site-specific information was very low with regard to the decision on site release. This observation held for both of the computer codes used.
Although the value of new information was very low with regard to decisions on site release, it was also found that the use of site-specific information is very important for reducing the predicted dose. This would be particularly true with the DandD code.
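The conjugate-pair update mentioned above can be sketched with the normal-normal case, where a generic default (the prior) is combined with site measurements in closed form. All numbers below are hypothetical illustrations, not values from the thesis.

```python
def normal_update(prior_mean, prior_var, data_mean, data_var, n):
    """Conjugate normal-normal Bayesian update: combine a normal prior on a
    parameter's mean with n measurements of known sampling variance data_var.
    Posterior precision is the sum of prior and data precisions."""
    precision = 1.0 / prior_var + n / data_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + n * data_mean / data_var)
    return post_mean, post_var

# hypothetical numbers: a code-default parameter (prior) vs. 8 site samples
post_mean, post_var = normal_update(prior_mean=10.0, prior_var=25.0,
                                    data_mean=6.0, data_var=4.0, n=8)
print(post_mean, post_var)
```

The posterior mean is pulled toward the site data and the posterior variance shrinks below both inputs, which is exactly why site-specific information reduces the predicted dose even when it does not change the release decision.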
