1

A rapid, reliable methodology for radionuclide characterization of wet or dry stored used nuclear fuel via the application of algorithm-enhanced scintillator survey spectra

Paul, Jessica Nicole 21 September 2015 (has links)
The growing concern regarding regulation and accountability of plutonium and special nuclear material (SNM) produced in commercial and research nuclear reactor fuel has driven the need for new spent nuclear fuel characterization methods that enable quantification and qualification of the radioisotopes contained in used fuel in a reliable, quick, and inexpensive manner, with little to no impact on normal reactor operating procedures. This research aims to meet these objectives by employing advanced computational radiation transport methods, incorporated into an algorithm, to post-process scintillator detector data gathered from used nuclear fuel in a spent fuel pool or in air. An existing, novel post-processing algorithm, SmartID, has been updated to extract and identify the unique photopeaks present in the underwater environment for pool-cooled used fuel. The resulting spectral data are post-processed using the updated SmartID algorithm folded with deterministic adjoint results to render both qualitative and quantitative estimates of fuel content and irradiation. This work is significant to the nuclear power, safeguards, and forensics communities, since it yields this information at room temperature and at relatively low cost.
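As an illustration of the kind of spectral post-processing described above, the sketch below flags candidate photopeaks in a synthetic scintillator spectrum using a simple local-maximum test. The function, the prominence threshold, and the spectrum are hypothetical illustrations, not the SmartID algorithm itself.

```python
import numpy as np

def find_photopeaks(counts, min_prominence=50.0):
    """Flag channels that are local maxima rising well above their
    immediate neighbourhood -- a crude stand-in for photopeak extraction."""
    peaks = []
    for i in range(2, len(counts) - 2):
        window = counts[i - 2:i + 3]
        if counts[i] == max(window) and counts[i] - min(window) >= min_prominence:
            peaks.append(i)
    return peaks

# Synthetic spectrum: flat background plus one Gaussian photopeak at channel 662.
channels = np.arange(1024)
spectrum = 20.0 + 500.0 * np.exp(-0.5 * ((channels - 662) / 3.0) ** 2)
print(find_photopeaks(spectrum))  # expected: [662]
```

A real implementation would also handle overlapping peaks and a sloped Compton continuum, which this toy local-maximum test ignores.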
2

Characterizing the Entry Resistance of Smoke Detectors

Ierardi, James Arthur 11 May 2005 (has links)
Entry resistance in smoke detectors was investigated using experimental and analytical approaches. The experimental work consisted of measuring velocity inside the sensing chamber of smoke detectors with a two-component Laser Doppler Velocimeter and exposing addressable smoke detectors to four different aerosol sources. The velocity measurements and exposure tests were performed in NIST's Fire Emulator / Detector Evaluator under steady-state flow conditions in the range of 0.08 to 0.52 m/s. The addressable detectors were a photoelectric and an ionization detector. A specially constructed rectangular detector model was also used for the interior velocity measurements in order to have geometry compatible with numerical approaches, such as computational fluid dynamics modeling or a two-dimensional analytical solution. The experimental data was used to investigate the fluid mechanics and mass transport processes in the entry resistance problem. An inlet velocity boundary condition was developed for the smoke detectors evaluated in this study by relating the external velocity and detector geometry to the internal velocity by way of a resistance factor. Data from the exposure tests was then used to characterize the nature of aerosol entry lag and sensor response. The time to alarm for specific alarm points was determined, in addition to performing an exponential curve fit to obtain a characteristic response time. A mass transport model for smoke detector response was developed and solved numerically. The mass transport model was used to simulate the response time data collected in the experimental portion of this study and was found, in general, to underestimate the measured response time by up to 20 seconds. However, in the context of a wastebasket fire scenario, the amount of underprediction in the model is 5 seconds or less, which is within the typical polling interval of 5 to 10 seconds for an addressable system.
Therefore, the mass transport model results developed using this proposed engineering framework show promise and are within the expected uncertainty of practical fire protection engineering design situations.
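The interplay of external velocity, resistance factor, and chamber fill time can be illustrated with a first-order lag model. This is a generic sketch under invented parameter values, not the boundary condition or mass transport model actually developed in the thesis.

```python
def chamber_response(u_ext, resistance, length, y_ext=1.0, dt=0.01, t_end=30.0):
    """Integrate a first-order entry-lag model: the in-chamber smoke
    fraction y relaxes toward the outside value y_ext with a time
    constant tau set by the internal velocity."""
    u_int = u_ext / resistance       # internal velocity via a resistance factor
    tau = length / u_int             # characteristic fill time of the chamber
    y, t, history = 0.0, 0.0, []
    while t < t_end:
        y += dt * (y_ext - y) / tau  # explicit Euler step of dy/dt = (y_ext - y)/tau
        t += dt
        history.append((t, y))
    return tau, history

# 0.2 m/s outside flow, resistance factor 4, 5 cm chamber -> tau = 1 s;
# analytically y(t) = 1 - exp(-t/tau), so the chamber is ~95% full by t = 3*tau.
tau, hist = chamber_response(u_ext=0.2, resistance=4.0, length=0.05)
```

Fitting an exponential of this form to measured sensor output is one way to extract the characteristic response time mentioned above.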
3

Development of a Novel Detector Response Formulation and Algorithm in RAPID and its Benchmarking

Wang, Meng Jen 24 October 2019 (has links)
Solving radiation shielding problems, i.e., deep penetration problems, is a challenging task in terms of both computation time and resources in the field of nuclear engineering. This is mainly because of the complexity of the governing equation for neutral-particle transport, the Linear Boltzmann Equation (LBE). The LBE involves seven independent variables and contains both integral and differential operators. Moreover, the low success rate of Monte Carlo histories reaching the detector in radiation shielding problems adds to the difficulty. In this dissertation, the Detector Response Function (DRF) methodology is proposed and developed for real-time, accurate radiation shielding calculation. Real-time capability in radiation shielding calculation is very important for: (1) safety and monitoring of nuclear systems; (2) nuclear non-proliferation; and (3) sensitivity studies and uncertainty quantification. Traditionally, the difficulties in solving radiation shielding problems are: (1) very long computation times with the Monte Carlo method; (2) extremely large memory requirements for deterministic methods; and (3) re-calculations required by hybrid methods. Among these, the hybrid method, typically Monte Carlo + deterministic, can solve radiation shielding problems more efficiently than either Monte Carlo or deterministic methods alone. However, none of the aforementioned methods is capable of performing "real-time" radiation shielding calculation. A literature survey reveals a number of investigations into improving or developing efficient methods for radiation shielding calculation. These methods can be categorized as: (1) using variance reduction techniques to improve the success rate of the Monte Carlo method; and (2) developing numerical techniques to improve the convergence rate and avoid unphysical behavior in deterministic methods. These methods are clever and useful contributions to the radiation transport community.
However, real-time radiation shielding calculation capability is still missing, even though the aforementioned advanced methods accelerate the calculations significantly. In addition, very few methods are "physics-based." For example, the mean free path of a neutron is typically orders of magnitude smaller than a nuclear system, e.g., a nuclear reactor. Each individual neutron does not travel far before its history is terminated. This is called the "loosely coupled" nature of nuclear systems. In principle, a radiation shielding problem can therefore be decomposed into pieces and solved more efficiently. In the DRF methodology, the DRF coefficients are pre-calculated as functions of several parameters. These coefficients can be directly coupled with a radiation source calculated by another code system, i.e., the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system. With this arrangement, detector/dosimeter response can be calculated on the fly. Thus far, the DRF methodology has been incorporated into the RAPID code system and applied to four different benchmark problems: (1) the GBC-32 Spent Nuclear Fuel (SNF) cask flooded with water, with a ³He detector placed on the cask surface; (2) the VENUS-3 experimental Reactor Pressure Vessel (RPV) neutron fluence calculation benchmark problem; (3) RPV dosimetry for the Three-Mile Island Unit-1 (TMI-1) commercial reactor; and (4) a dry-storage SNF cask external dosimetry problem. The results show that dosimeter/detector response and dose values calculated with the DRF methodology all lie within the 2σ relative statistical uncertainties of standard MCNP5 + CADIS (Consistent Adjoint Driven Importance Sampling) fixed-source calculations. The DRF methodology requires only on the order of seconds for the dosimeter/detector response or dose calculations on a single processor, provided the DRF coefficients are appropriately prepared.
The DRF coefficients can be reused without re-calculation when the model configuration changes. In contrast, the standard MCNP5 calculations typically require more than an hour on 8 processors, even with the CADIS methodology. The DRF methodology has enabled real-time radiation shielding calculation. The radiation transport community can benefit greatly from the development of the DRF methodology. Users can easily apply it to perform parametric studies, sensitivity studies, and uncertainty quantification. The DRF methodology can be applied to various radiation shielding problems, such as nuclear system monitoring and medical radiation facilities. The appropriate procedure for the DRF methodology and the necessary parameters in the DRF coefficient dependencies are discussed in detail in this dissertation. / Doctor of Philosophy / Since the beginning of the nuclear era, an enormous number of radiation applications have been proposed, developed, and applied in our daily lives. Radiation is useful and beneficial when it is under control. However, these applications also produce some "unwanted radiation," which has to be shielded. Radiation shielding has therefore become a very important task. To shield unwanted radiation effectively, studying the thickness and design of the shields is important. Instead of performing experiments directly, computation is a more affordable and safer approach. Radiation shielding computation is typically an extremely difficult task due to the very limited "communication" between the radiation within the shield and the detector outside it. In general, it is impractical to simulate radiation shielding problems directly because of the extremely expensive computational resources required: most radiation interactions occur within the shield, while we are interested only in the few particles that penetrate through it.
This is typically called a "deep penetration" problem in the radiation transport community.
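The core idea of the DRF approach, pre-computed coefficients that turn any source distribution into a detector response with a single dot product, can be sketched as follows. The coefficient values and mesh size here are invented stand-ins; in RAPID the coefficients come from transport calculations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-computed DRF coefficients: detector response per unit
# source strength in each source mesh cell.
n_cells = 1000
drf = rng.uniform(1e-8, 1e-6, n_cells)

def detector_response(source):
    """On-the-fly response: one dot product of stored coefficients
    with the current source distribution -- no transport re-run."""
    return float(np.dot(drf, source))

# Two loading configurations reuse the same coefficients.
source_a = rng.uniform(0.0, 1e9, n_cells)
source_b = 0.5 * source_a   # e.g. fuel at half the source strength
resp_a = detector_response(source_a)
resp_b = detector_response(source_b)
```

Because the response is linear in the source, any change to the source term (but not to the shield geometry the coefficients were prepared for) costs only this dot product, which is why the method can run in seconds on one processor.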
4

Investigation of Enhanced Soot Deposition on Smoke Alarm Horns

Phelan, Patrick 07 January 2005 (has links)
Post-fire reconstruction often includes the analysis of smoke alarms. The determination of whether or not an alarm sounded during a fire event is of great interest. Until recently, analysis of smoke alarms involved in fires has been limited to electrical diagnostics, which, at best, determined whether or not a smoke alarm was capable of alarming during the fire event. It has subsequently been proposed that evaluation of the soot deposition around a smoke alarm horn can be used to conclude whether a smoke alarm sounded during a fire event. In order to evaluate the effectiveness of using enhanced soot deposition patterns as an indication of smoke alarms sounding within a fire event, four test series were undertaken. First, a population of smoke alarms representative of the available market variety of horn configurations was selected. This population was subjected to four test series. Test Series 1 consisted of UL/EN style experiments with fuel sources that included flaming polyurethane, smoldering polyurethane, flaming wood crib, and flaming turpentine pool. In Test Series 2, alarms were exposed to "nuisance" products from frying bacon, frying tortillas, burnt toast, frying breading, and airborne dust. Test Series 3 exposed the alarms to the following fire sources: smoldering cable, flaming cable, flaming boxes with paper, and flaming boxes with plastic cups. Test Series 4 included new, used, and pre-exposed smoke alarms that were exposed to two larger scale fires: a smoldering-transitioning-to-flaming cabinet/wall assembly fire and a flaming couch section. The results from all four series were used to generate a heuristic for use in evaluating alarms from fire events. These criteria were blindly tested against the population of alarms to develop a correlation between the criteria and the previously tested smoke alarms.
The results support the evaluation of soot deposition on smoke alarms exposed to a fire event as a viable method to determine whether or not an alarm sounded, without false positive or negative identifications.
5

Quantitative detection in gas chromatography

Gough, T. A. January 1967 (has links)
The difficulties encountered in quantitative analysis by gas chromatography are discussed, with particular reference to detection systems. The properties of an ideal detector for quantitative analysis are listed. A description is given of the mode of operation of detectors for gas chromatography, and the extent to which they are suitable for quantitative work is assessed. It was concluded that no one detector possessed all the properties required of an ideal detector. In particular, a qualitative knowledge of the sample for analysis was required by all detectors, and calibration was required by the majority of detectors. The extent to which the Brunel mass detector overcomes these limitations was assessed. It is shown that the response of the mass detector depends solely on weight changes caused by adsorption of materials eluted from the chromatographic column, thus completely eliminating the need for calibration and qualitative information. The response of the detector is integral, so that the problems associated with peak area measurement do not arise. The sensitivity of the detector is of a similar order to conventional hot wire detectors. The detector gave a quantitative response to all materials analysed, covering a wide boiling range: the upper limit was determined by the maximum column operating temperature, and the lower limit by the extent to which the detector was cooled. The detector responded quantitatively to water. At room temperature the detector responded on a qualitative basis to organic and inorganic gases. The detector was used for the calibration of other detectors, and was operated in conjunction with the Martin gas density balance to determine the molecular weights of eluted materials.
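The point about an integral response can be made concrete with a small numerical sketch: if the detector reads the running integral of the adsorbed mass flow, its output is a step whose height is the component mass directly, and no peak-area measurement is needed. The elution profile and all numbers below are invented for illustration.

```python
import math

def eluted_mass_flow(t, center=60.0, width=4.0, mass=2.5e-6):
    """Gaussian elution profile (kg/s) carrying `mass` kg in total."""
    norm = mass / (width * math.sqrt(2.0 * math.pi))
    return norm * math.exp(-0.5 * ((t - center) / width) ** 2)

# Accumulate the detector reading: the running integral of the mass flow.
dt = 0.01
reading, trace, t = 0.0, [], 0.0
while t < 120.0:
    reading += eluted_mass_flow(t) * dt
    trace.append(reading)
    t += dt

# The step height between the pre- and post-peak baselines recovers the
# injected component mass without ever measuring a peak area.
step_height = trace[-1] - trace[0]
```

A differential detector would instead require integrating the peak after the fact, with all the baseline-drift problems that entails.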
6

A Monte Carlo Simulation and Deconvolution Study of Detector Response Function for Small Field Measurements

Feng, Yuntao January 2006 (has links)
No description available.
7

Development of a dedicated hybrid K-edge densitometer for pyroprocessing safeguards measurements using Monte Carlo simulation models

Mickum, George S. 07 January 2016 (has links)
Pyroprocessing is an electrochemical method for recovering actinides from used nuclear fuel and recycling them into fresh nuclear fuel. It is posited herein that proposed safeguards approaches for pyroprocessing nuclear material control and accountability face several challenges, owing to the unproven plutonium-curium inseparability argument and the limitations of neutron counters. Thus, the Hybrid K-Edge Densitometer (HKED) is currently being investigated as an assay tool for the measurement of pyroprocessing materials in order to perform effective safeguards. This work details the development of a computational model created using the Monte Carlo N-Particle code to reproduce HKED assay of samples expected from the pyroprocesses. The model incorporates detailed geometrical dimensions of the Oak Ridge National Laboratory HKED system, realistic detector pulse height spectral responses, optimum computational efficiency, and optimization capabilities. The model has been validated on experimental data representative of samples from traditional reprocessing solutions and then extended to the sample matrices and actinide concentrations of pyroprocessing. Data analysis algorithms were created in order to account for unsimulated spectral characteristics and correct inaccuracies in the simulated results. The realistic assay results obtained with the model have provided insight into the extension of the HKED technique to pyroprocessing safeguards and reduced the calibration and validation efforts in support of that design study. Application of the model has allowed for a detailed determination of the volume of the sample being actively irradiated, as well as provided a basis for determining the matrix effects from the pyroprocessing salts on the HKED assay spectra.
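The principle behind the K-edge part of the HKED technique can be sketched in a few lines: transmission is measured just below and just above the actinide K edge (about 115.6 keV for uranium), and the jump in attenuation across the edge isolates the heavy-element concentration, since the smoothly varying matrix attenuation largely cancels in the ratio. The coefficient and geometry values below are illustrative assumptions, not the ORNL HKED calibration.

```python
import math

# Illustrative mass attenuation coefficients bracketing the K edge.
MU_BELOW = 1.5    # cm^2/g just below the edge (assumed value)
MU_ABOVE = 4.0    # cm^2/g just above the edge (assumed value)
PATH_CM = 2.0     # sample cell path length, cm (assumed)

def concentration_from_transmission(t_below, t_above):
    """Invert T = exp(-mu * rho * d) at the two edge energies for the
    heavy-element concentration rho (g/cm^3)."""
    return (math.log(t_below) - math.log(t_above)) / ((MU_ABOVE - MU_BELOW) * PATH_CM)
```

Round-tripping a known concentration through the Beer-Lambert law recovers it exactly in this idealised setting; the thesis's Monte Carlo model exists precisely because real spectra include scatter, pulse pile-up, and matrix effects that this sketch ignores.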
8

Dosimetry Studies of Different Radiotherapy Applications using Monte Carlo Radiation Transport Calculations

Abbasinejad Enger, Shirin January 2008 (has links)
Developing radiation delivery systems for optimisation of the absorbed dose to the target without normal tissue toxicity requires advanced calculations of radiation transport. In this thesis, absorbed dose and fluence in different radiotherapy applications were calculated using Monte Carlo (MC) simulations.

In papers I-III, external neutron activation of gadolinium (Gd) for intravascular brachytherapy (GdNCB) and tumour therapy (GdNCT) was investigated. The MC codes MCNP and GEANT4 were compared, and MCNP was chosen for the neutron capture reaction calculations. The Gd neutron capture reaction produces both very short-range (Auger electrons) and long-range (IC electrons and gamma) products. In GdNCB the high-energy gamma gives an almost flat absorbed dose delivery pattern up to 4 mm around the stent. The dose distribution at the edges and inside the stent may prevent stent-edge and in-stent restenosis. For GdNCT, the absorbed dose from prompt gamma will dominate over the dose from IC and Auger electrons in an in vivo situation. The absorbed dose from IC electrons will enhance the total absorbed dose in the tumours and contribute to cell killing.

In paper IV, a model was developed for calculating the inter-cluster cross-fire radiation dose from β-emitting radionuclides in a breast cancer model. GEANT4 was used to obtain the absorbed dose. The dose internal to cells binding the isotope (self-dose) increased with decreasing β-energy, except for radionuclides with substantial amounts of conversion electrons and Auger electrons. An effective therapy approach may be a combination of radionuclides, where the high self-dose from nuclides with low β-energy is combined with the inter-cell-cluster cross-fire dose from high-energy β-particles.

In paper V, MC simulations using correlated sampling together with importance sampling were used to calculate spectral perturbations in detector volumes caused by the detector silicon chip and its encapsulation.
Penelope and EGSnrc were used and yielded similar results. The low-energy part of the electron spectrum increased, but to a lesser extent if the silicon detector was encapsulated in low-Z materials.
9

High-sensitivity Radioactive Xenon Monitoring and High-accuracy Neutron-proton Scattering Measurements

Johansson, Cecilia January 2004 (has links)
Two aspects of applied nuclear physics have been studied in this thesis: Monte Carlo simulations for high-sensitivity monitoring of radioactive xenon, and high-accuracy neutron-proton scattering measurements for neutron physics applications and fundamental physics.

The Monte Carlo simulations were performed for two systems for the detection of radioactive xenon, using the MCNP code. These systems, designed to monitor violations of the Comprehensive Nuclear-Test-Ban Treaty, are based on coincident detection of the electrons and gamma rays emitted in the beta decay of xenon nuclides produced in nuclear weapons explosions. In general, the simulations describe test data well, and the deviations from experimental data are understood.

The neutron-proton scattering measurements were performed by measuring the differential np scattering cross section at 96 MeV in the angular range θ_c.m. = 20°–76°. Together with an earlier data set at the same energy, covering the angles θ_c.m. = 74°–180°, a new data set has been formed in the angular range θ_c.m. = 20°–180°. This extended data set has been normalised to the experimental total np cross section, resulting in a renormalisation of the earlier data by 0.7%, which is well within the stated normalisation uncertainty of that experiment. The results on forward np scattering are in reasonable agreement with theory models and partial wave analyses, and have been compared with data from the literature.
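The normalisation step, scaling a measured differential cross section so its angular integral reproduces a trusted total cross section, can be sketched numerically. The midpoint quadrature and the isotropic test data below are illustrative, not the analysis actually performed in the thesis.

```python
import math

def renormalise(angles_deg, dsigma, sigma_total):
    """Scale dsigma (per steradian) so that
    2*pi * integral of dsigma(theta) * sin(theta) dtheta = sigma_total."""
    integral = 0.0
    for i in range(len(angles_deg) - 1):
        th0 = math.radians(angles_deg[i])
        th1 = math.radians(angles_deg[i + 1])
        mid_val = 0.5 * (dsigma[i] + dsigma[i + 1])
        integral += 2.0 * math.pi * mid_val * math.sin(0.5 * (th0 + th1)) * (th1 - th0)
    factor = sigma_total / integral
    return factor, [factor * d for d in dsigma]

# Sanity check with an isotropic cross section: its angular integral is
# 4*pi, so normalising to a "total" of 4*pi should give a factor near 1.
angles = list(range(0, 181))
factor, scaled = renormalise(angles, [1.0] * 181, 4.0 * math.pi)
```

The 0.7% renormalisation quoted in the abstract corresponds to a factor of this kind differing from unity by 0.007.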
10

Corrections for improved quantitative accuracy in SPECT and planar scintigraphic imaging

Larsson, Anne January 2005 (has links)
A quantitative evaluation of single photon emission computed tomography (SPECT) and planar scintigraphic imaging may be valuable for both diagnostic and therapeutic purposes. For accurate quantification it is usually necessary to correct for attenuation and scatter, and in some cases also for septal penetration. For planar imaging, a background correction for the contribution from over- and underlying tissues is needed. In this work a few correction methods have been evaluated and further developed. Much of the work relies on the Monte Carlo method as a tool for evaluation and optimisation. A method for quantifying the activity of I-125 labelled antibodies in a tumour inoculated in the flank of a mouse, based on planar scintigraphic imaging with a pin-hole collimator, has been developed, and two different methods for background subtraction have been compared. The activity estimates of the tumours were compared with measurements in vitro. The major part of this work is devoted to SPECT. A method for attenuation and scatter correction of brain SPECT based on computed tomography (CT) images of the same patient has been developed, using an attenuation map calculated from the CT image volume. The attenuation map is utilised not only for attenuation correction, but also for scatter correction with transmission dependent convolution subtraction (TDCS). A registration method based on fiducial markers, placed on three chosen points during the SPECT examination, was evaluated. The scatter correction method, TDCS, was then optimised for regional cerebral blood flow (rCBF) SPECT with Tc-99m, and was also compared with a related method, convolution scatter subtraction (CSS). TDCS has been claimed to be an iterative technique; this, however, requires some modifications of the method, which have been demonstrated and evaluated for a simulation with a point source.
When the Monte Carlo method is used for evaluation of corrections for septal penetration, it is important that interactions in the collimator are taken into account. A new version of the Monte Carlo program SIMIND with this capability has been evaluated by comparing measured and simulated images and energy spectra. This code was later used for the evaluation of a few different methods for correction of scatter and septal penetration of I-123 brain SPECT. The methods were CSS, TDCS and a method where correction for scatter and septal penetration are included in the iterative reconstruction. This study shows that quantitative accuracy in I-123 brain SPECT benefits from separate modelling of scatter and septal penetration.
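Deriving an attenuation map from a co-registered CT volume typically means converting each voxel's CT number to a linear attenuation coefficient at the SPECT photon energy. The piecewise-linear conversion and the coefficient values in this sketch are common approximations chosen for illustration, not the calibration used in the thesis.

```python
# Approximate linear attenuation coefficients at 140 keV (Tc-99m), 1/cm.
MU_WATER = 0.154   # water (assumed illustrative value)
MU_BONE = 0.28     # dense bone (assumed illustrative value)

def hu_to_mu(hu):
    """Map a Hounsfield unit to a linear attenuation coefficient at 140 keV."""
    if hu <= 0:
        # air (-1000 HU) to water (0 HU): scale linearly down toward zero
        return max(0.0, MU_WATER * (hu + 1000.0) / 1000.0)
    # above water: interpolate toward bone, taken here as 1000 HU
    return MU_WATER + (MU_BONE - MU_WATER) * hu / 1000.0

# A 2x2 toy CT slice converted voxel-by-voxel into an attenuation map.
attenuation_map = [[hu_to_mu(hu) for hu in row] for row in [[-1000, 0], [500, 1000]]]
```

Because CT is acquired at a much lower effective energy than the 140 keV emission, the two-segment (soft tissue vs. bone) scaling matters: a single linear scaling would overestimate the attenuation of bone at the emission energy.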
