131

Effect of helium injection on diffusion dominated air ingress accidents in pebble bed reactors

Yurko, Joseph Paul January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 73-74). / The primary objective of this thesis was to validate the sustained counter air diffusion (SCAD) method at preventing natural circulation onset in diffusion dominated air ingress accidents. The analysis presented in this thesis starts with a vertically oriented rupture of a coaxial pipe. Air enters the reactor cavity at a rate dictated by diffusion, until the buoyancy force is strong enough to initiate natural convective flow through the reactor. The SCAD method, developed by Yan et al., reduces the buoyancy force in a high temperature gas reactor (HTGR) during the lengthy diffusion phase by injecting minute amounts of helium into the top of the reactor to set up a counter helium-air diffusion circuit. By delaying the onset of natural circulation, air enters the reactor only at diffusion transport rates, instead of the much higher natural convection transport rates. Thus, the air ingress rate is reduced by several orders of magnitude. Without the continuous convection-driven supply of "fresh" air, the threat of oxidizing graphite components is significantly reduced. To validate SCAD, a small scale simulated Pebble Bed Reactor (PBR) was constructed and a series of air ingress experiments with and without helium injection were conducted. In addition, Computational Fluid Dynamics (CFD) simulations were performed using FLUENT to model the experiment and gain further insight into the behavior of the flow field leading up to the onset of natural circulation. In order to have the CFD-predicted natural circulation onset time better match the experimentally determined onset time, the initial helium fraction in the numerical model had to be reduced by 15%. This reduction is within the uncertainty of the experimental set-up. 
This change highlighted an important feature of the behavior of air ingress accidents. With the initial helium fraction in the simulated reactor at 100%, the first half of the transient is a very slow, completely diffusion dominated transport phase. The second half of the transient had an air transport rate with an increasing natural convective contribution, leading up to the onset of natural circulation and complete natural convective transport. Reducing the initial helium fraction by only 15% caused that initial very slow, pure diffusion transport phase to be bypassed, and the natural circulation onset time was dictated by the combined effects of free convection and diffusion transport, not simply diffusion. A full scale PBR experiencing a similar accident will have the core entirely filled with helium. Thus, for a vertically oriented double ended guillotine (DEG) large-break loss of coolant accident (LB-LOCA), the subsequent air ingress rate will be dictated by the slow diffusion of air into the reactor cavity for most of the transient. For the helium injection tests, even at the lowest tested injection rate, both the experiment and the CFD simulation showed that natural circulation was prevented over a time period twice as long as the time to onset. The tests showed that without helium injection, natural circulation started after about 117 minutes on average. With helium injection, natural circulation had not started after 240 minutes, when the experiment was terminated. Additional injection tests were run in which the helium injection was terminated after 240 minutes but data continued to be taken. In these tests, natural circulation was initiated approximately 120 minutes after termination of helium injection, confirming that the helium injection flow was preventing natural circulation from starting. 
The lowest tested helium injection rate corresponded to 0.01% of the test assembly's total volume per minute, demonstrating how small a flow rate is needed for the SCAD method to work. Minimal helium injection is not intended to be an emergency core cooling system but rather a system to prevent or delay natural circulation, which would otherwise result in a large amount of air ingress. The system response was formulated non-dimensionally to quantify the impact SCAD has on the driving parameters that govern the onset of natural circulation, namely the buoyancy force, mass flow rate, and density ratio between the hot and cold leg. The results showed that SCAD suppresses the buoyancy force and forces a mass flow (transport) rate that causes any changes in the hot leg density to be counteracted by density changes in the cold leg. The transport rate that is established is orders of magnitude less than the natural circulation transport rate. Using the driving non-dimensional parameters, a methodology was also developed to formulate a correlation for estimating the minimum injection rate (MIR) of helium to prevent the onset of natural circulation. To properly derive a correlation for the MIR, further experiments and/or simulations are required over different geometrical configurations. The non-dimensional analysis showed that Yan's MIR estimate was conservative for the experimental configuration, and would be conservative for a full scale PBR. Therefore, Yan's MIR calculation was used to provide an order of magnitude estimate for the helium injection rate in a full scale PBR. The resulting MIR of helium for a full scale PBR was 5.36 g/hr, which corresponds to storing only 11.6 kg of helium on-site to prevent the onset of natural circulation for three full months. The experiment and CFD simulations were performed using an inverted U-tube, which simulates a vertically oriented pipe configuration. 
If the pipe break occurs in a horizontal configuration, the air ingress phenomena could be substantially different depending on the break size and orientation. Thus, this thesis concludes that the method is capable of preventing natural circulation onset as long as air ingress occurs at transport rates comparable to diffusion after the break occurs. / by Joseph Paul Yurko. / S.M.
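The quoted on-site helium inventory follows directly from the minimum injection rate; a quick arithmetic check, taking "three full months" as roughly 90 days:

```python
# Helium needed to sustain the MIR of 5.36 g/hr for ~90 days.
mir_g_per_hr = 5.36
hours = 90 * 24
total_kg = mir_g_per_hr * hours / 1000
print(round(total_kg, 1))  # → 11.6, matching the quoted 11.6 kg
```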
132

Development of optimized core design and analysis methods for high power density BWRs

Shirvan, Koroush January 2013 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 263-268). / Increasing the economic competitiveness of nuclear energy is vital to its future. Improving the economics of BWRs is the main goal of this work, focusing on designing cores with higher power density to reduce the BWR capital cost. Generally, the core power density in BWRs is limited by the thermal critical power of its assemblies, below which heat removal can be accomplished with low fuel and cladding temperatures. The present study investigates both increases in the heat transfer area between the fuel and coolant and changes in operating parameters to achieve higher power levels while meeting the appropriate thermal as well as materials and neutronic constraints. A scoping study is conducted under the constraints of using fuel with cylindrical geometry, traditional materials, and enrichments below 5% to enhance its licensability. The reactor vessel diameter is limited to the largest proposed thus far. The BWR with High Power Density (BWR-HD) is found to have a power level of 5000 MWth, equivalent to a 26% uprated ABWR, resulting in 20% cheaper O&M and capital costs. This is achieved by utilizing the same number of assemblies, but with wider 16x16 assemblies and 50% shorter active fuel than that of the ABWR. The fuel rod diameter and pitch are reduced to just over 45% of the ABWR values. Traditional cruciform control rods are used, which restricts the assembly span to less than 1.2 times the current GE14 design due to limitation on shutdown margin. Thus, it is possible to increase the power density and specific power by 65%, while maintaining the nominal ABWR Minimum Critical Power Ratio (MCPR) margin. The optimum core pressure is the same as the current 7.1 MPa. The core exit quality is increased to 19% from the ABWR nominal exit quality of 15%. 
The pin linear heat generation rate is 20% lower, and the core pressure drop and mass of uranium are 30% lower. The BWR-HD's fuel, modelled with FRAPCON 3.4, showed similar performance to the ABWR pin design. The fuel cycle is only 12 months long, but on a per-kWhr basis the new design operates with 14% lower fuel cycle front-end costs and a total fuel cycle cost similar to the 18 month ABWR fuel cycle. The plant systems outside the vessel are assumed to be the same as the ABWR-II design, utilizing a combination of active and passive safety systems. Safety analyses applied a void reactivity coefficient calculated by SIMULATE-3 for an equilibrium cycle core, which showed a 15% less negative coefficient for the BWR-HD compared to the ABWR. The feedwater temperature was kept the same for the BWR-HD and ABWR, which resulted in a 4 K cooler core inlet temperature for the BWR-HD given that its feedwater makes up a larger fraction of total core flow. The stability analysis using the STAB and S3K codes showed satisfactory results for the hot channel, coupled regional out-of-phase, and coupled core-wide in-phase modes. A RELAP5 model of the ABWR system was constructed and applied to six transients for the BWR-HD and ABWR. The ΔMCPRs during all the transients were found to be equal or less for the new design, and the core remained covered for both. The lower void coefficient along with the smaller core volume proved to be advantageous for the simulated transients. Helical Cruciform Fuel (HCF) rods were proposed in prior MIT studies to enhance the fuel surface to volume ratio. In this work, higher fidelity models (e.g. CFD instead of subchannel methods for the hydraulic behaviour) are used to investigate the resolution needed for accurate assessment of the HCF design. For neutronics, conserving the fuel area of cylindrical rods results in a different reactivity level with a lower void coefficient for the HCF design. 
In single-phase flow, for which experimental results existed, the friction factor is found to be sensitive to HCF geometry and cannot be calculated using current empirical models. A new approach for analysis of flow crisis conditions for HCF rods in the context of Departure from Nucleate Boiling (DNB) and dryout using the two-phase interface tracking method was proposed, and initial results are presented. It is shown that the twist of the HCF rods promotes detachment of a vapour bubble along the elbows, which indicates no possibility of an early DNB for the HCF rods and in fact a potential for a higher DNB heat flux. Under annular flow conditions, it was found that the twist suppressed the liquid film thickness on the HCF rods at the locations of the highest heat flux, which increases the possibility of reaching early dryout. It was also shown that modeling the 3D heat and stress distribution in the HCF rods is necessary for accurate steady state and transient analyses. The safety analysis of the 20% uprated HCF design in the context of a BWR/4 RPV showed satisfactory ΔMCHFR performance only if CR is estimated by the EPRI-1 correlation. / by Koroush Shirvan. / Ph.D.
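As a rough consistency check on the quoted power level (assuming the commonly cited ABWR rated thermal power of about 3926 MWth, a figure not stated in the abstract):

```python
# A 26% uprate of an assumed 3926 MWth ABWR:
abwr_mwth = 3926
uprated = round(abwr_mwth * 1.26)
print(uprated)  # → 4947, consistent with the quoted ~5000 MWth
```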
133

Coherent control of electron spins in diamond for quantum information science and quantum sensing

Cooper-Roy, Alexandre January 2016 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2016. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 115-122). / This thesis introduces and experimentally demonstrates coherent control techniques to exploit electron spins in diamond for applications in quantum information processing and quantum sensing. Specifically, optically-detected magnetic resonance measurements are performed on quantum states of single and multiple electronic spins associated with nitrogen-vacancy centers and other paramagnetic centers in synthetic diamond crystals. We first introduce and experimentally demonstrate the Walsh reconstruction method as a general framework to estimate the parameters of deterministic and stochastic fields with a quantum probe. Our method generalizes sampling techniques based on dynamical decoupling sequences and enables measuring the temporal profile of time-varying magnetic fields in the presence of dephasing noise. We then introduce and experimentally demonstrate coherent control techniques to identify, integrate, and exploit unknown quantum systems located in the environment of a quantum probe. We first locate and identify two hybrid electron-nuclear spin systems associated with unknown paramagnetic centers in the environment of a single nitrogen-vacancy center in diamond. We then prepare, manipulate, and measure their quantum states using cross-polarization sequences, coherent feedback techniques, and quantum measurements. We finally create and detect entangled states of up to three electron spins to perform environment-assisted quantum metrology of time-varying magnetic fields. 
These results demonstrate a scalable approach to create entangled states of many particles with quantum resources extracted from the environment of a quantum probe. Applications of these techniques range from real-time functional imaging of neural activity at the level of single neurons to magnetic resonance spectroscopy and imaging of biological complexes in living cells and characterization of the structure and dynamics of magnetic materials. / by Alexandre Cooper-Roy. / Ph. D.
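The Walsh reconstruction idea lends itself to a compact numerical illustration: a time-varying field sampled over N time bins can be expanded in Walsh (Hadamard) functions, and the N projection coefficients suffice to reconstruct the profile. This is a toy sketch with a made-up field, not the thesis's experimental protocol:

```python
# Sylvester construction of an N x N Hadamard matrix, whose rows act as
# (unnormalized) Walsh functions over N time bins.
def hadamard(n):  # n must be a power of 2
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

N = 8
H = hadamard(N)
field = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]  # made-up field profile

# "Measured" Walsh coefficients: projections of the field on each function.
coeffs = [sum(H[k][t] * field[t] for t in range(N)) / N for k in range(N)]

# Reconstruction: the weighted sum of Walsh functions recovers the profile.
recon = [sum(coeffs[k] * H[k][t] for k in range(N)) for t in range(N)]
print(all(abs(r - f) < 1e-9 for r, f in zip(recon, field)))  # → True
```

In the experiment, roughly speaking, each coefficient comes from the phase a qubit probe accumulates under a dynamical-decoupling sequence whose modulation matches the corresponding Walsh function, rather than from a direct sum.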
134

Uncertainty and sensitivity analysis of a fire-induced accident scenario involving binary variables and mechanistic codes

Minton, Mark A. (Mark Aaron) January 2010 (has links)
Thesis (Nucl. E. and S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2010. / "September 2010." Cataloged from PDF version of thesis. / Includes bibliographical references (p. 72-74). / In response to the transition by the United States Nuclear Regulatory Commission (NRC) to a risk-informed, performance-based fire protection rulemaking standard, Fire Probabilistic Risk Assessment (PRA) methods have been improved, particularly in the areas of advanced fire modeling and computational methods. As the methods for the quantification of fire risk are improved, the methods for the quantification of the uncertainties must also be improved. In order to gain a more meaningful insight into the methods currently in practice, it was decided that a scenario incorporating the various elements of uncertainty specific to a fire PRA would be analyzed. The NRC has validated and verified five fire models to simulate the effects of fire growth and propagation in nuclear power plants. Although these models cover a wide range of sophistication, epistemic uncertainties resulting from the assumptions and approximations used within the model are always present. The uncertainty of a model prediction is not only dependent on the uncertainties of the model itself, but also on how the uncertainties in input parameters are propagated throughout the model. Inputs to deterministic fire models are often not precise values, but instead follow statistical distributions. The fundamental motivation for assessing model and parameter uncertainties is to combine the results in an effort to calculate a cumulative probability of exceeding a given threshold. This threshold can be for equipment damage, time to alarm, habitability of spaces, etc. Fire growth and propagation is not the only source of uncertainty present in a fire-induced accident scenario. 
Statistical models are necessary to develop estimates of fire ignition frequency and the probability that a fire will be suppressed. Human Reliability Analysis (HRA) is performed to determine the probability that operators will correctly perform manual actions even with the additional complications of a fire present. Fire-induced Main Control Room (MCR) abandonment scenarios are a significant contributor to the total Core Damage Frequency (CDF) estimate of many operating nuclear power plants. Many of the resources spent on fire PRA are devoted to quantification of the probability that a fire will force operators to abandon the MCR and take actions from a remote location. However, many current PRA practitioners feel that the effects of MCR fires have been overstated. This report details the simultaneous application of state-of-the-art model and parameter uncertainty techniques to develop a defensible distribution of the probability of a forced MCR abandonment caused by a fire within a MCR benchboard. These results are combined with the other elements of uncertainty present in a fire-induced MCR abandonment scenario to develop a CDF distribution that takes into account the interdependencies between the factors. In addition, the input factors having the strongest influence on the final results are identified so that operators, regulators, and researchers can focus their efforts to mitigate the effects of this class of fire-induced accident scenario. / by Mark A. Minton. / Nucl. E. and S.M.
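Parameter uncertainty propagation of the kind described, where distributed inputs are pushed through a deterministic model to estimate a cumulative probability of exceeding a threshold, can be sketched with simple Monte Carlo sampling. The stand-in "fire model" and input distributions below are invented for illustration; the thesis uses NRC-validated fire models:

```python
import random

# Stand-in correlation: temperature rises with heat release rate and falls
# with ventilation area. Illustrative only, not a validated fire model.
random.seed(42)

def hot_gas_layer_temp_c(hrr_kw, vent_area_m2):
    return 25.0 + 12.0 * hrr_kw**0.5 / vent_area_m2**0.5

threshold_c = 330.0  # hypothetical damage/habitability threshold
n = 10_000
exceed = 0
for _ in range(n):
    hrr = max(random.gauss(500.0, 100.0), 0.0)  # uncertain heat release rate
    vent = random.uniform(0.5, 2.0)             # uncertain vent area
    if hot_gas_layer_temp_c(hrr, vent) > threshold_c:
        exceed += 1
print(exceed / n)  # estimated probability of exceeding the threshold
```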
135

Nuclear warhead monitoring : a study of photon emissions from fission neutron interactions with high explosives as a tool in arms control verification

Snowden, Mareena Robinson January 2017 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2017. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references. / Since the signing of the Nuclear Nonproliferation Treaty, the technical community has been working to develop verification options that provide confidence in the reduction or elimination of nuclear warheads, while respecting countries' requirement of limited access to national secrets. This dissertation used a simplified open-source warhead model as a vehicle to investigate the use of secondary gammas, generated passively by neutron interactions inside high explosive (HE), as a signature for the presence of a warhead-like object. Analytical calculations were done to estimate the detectability of radiative capture and inelastic scatter emissions generated within the warhead model. Results showed the emission of gammas from nitrogen, between 5-7 MeV, to be detectable above background with dwell-times exceeding 90 minutes. These calculations motivated the systematic study of the signal experimentally using surrogate materials to represent the warhead's weapons-grade plutonium and HE. The experiment did not show the expected signals. This motivated a simulation of the mock-up experiment using the radiation transport code MCNP6 to help understand the observed results. The experimental and simulation data suggest that correlated backgrounds from neutron interactions with environmental materials dominate the signal. This finding helped provide a basis for understanding the feasibility and challenges to detecting this neutron-induced gamma signal. 
Three sets of pulse-height spectra have been analyzed: experimental spectra that looked at the effect of the HE surrogate on the overall detected counts; simulated spectra that helped to understand the underlying contributors to the observed experimental result; and a data-MCNP6 comparison that assessed the accuracy of the simulated results. Each set contributed to the quantification of detectability for the emissions of interest. The findings suggest the passive detection of the expected high-energy gamma signal is not feasible, unless backgrounds can be better controlled. The difficulty is attributed to low solid-angle coverage of the neutron source by the melamine explosive surrogate, and competing backgrounds produced by neutron-source interactions with surrounding materials. This thesis also examined the benefits and tradeoffs of this particular verification approach by investigating the non-technical context of the verification, such as the preferences of negotiators. The tradeoffs between confidence and intrusiveness highlight the need for technical verification solutions that span the diversity of options. Factors limiting the development of warhead verification systems, from the bias of researchers to issues of classification and sensitive geometries, were discussed. / by Mareena Robinson Snowden. / Ph. D.
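The dependence of detectability on dwell time follows from Poisson counting statistics: the significance of a signal above background grows as the square root of the measurement time. A hedged sketch with made-up signal and background rates (not the thesis's values):

```python
import math

# With Poisson statistics, the significance of a signal rate s (counts/min)
# over a background rate b grows as sqrt(t):
#   significance = s*t / sqrt(b*t) = (s / sqrt(b)) * sqrt(t)
def significance(s, b, t_min):
    return s * t_min / math.sqrt(b * t_min)

s, b = 1.0, 10.0                 # illustrative rates only
t_needed = 9 * b / s**2          # solve significance(s, b, t) = 3 for t
print(round(t_needed))           # → 90 (minutes to reach 3 sigma)
print(round(significance(s, b, t_needed), 3))  # → 3.0
```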
136

Effects of chromium and silicon on corrosion of iron alloys in lead-bismuth eutectic

Lim, Jeongyoun January 2006 (has links)
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2006. / Includes bibliographical references. / The high power densities and temperatures expected for next generation nuclear applications, including power generation and transmutation systems, will require new types of heat transport systems to be economic. Present interest in heavy liquid metal coolants, especially in lead and lead-bismuth eutectic, originates from such requirements as increased heat removal capacity and enhanced safety features. However, corrosion of structural metals represents a major limiting factor in developing advanced liquid Pb-alloy coolant technology. In fact, the development of advanced structural and cladding alloys that are resistant to corrosion over a wide range of oxygen potentials in this environment would represent the enabling technology for these systems. The goal of this research was to develop a class of Fe-Cr-Si alloys that are resistant to corrosion in Pb and Pb alloys at temperatures of 600°C or higher. As a necessary part of this development effort, an additional goal was to further develop the fundamental understanding of the mechanisms by which corrosion protection is achieved. A series of alloys based on the Fe-Cr-Si system were proposed as potential candidates for this application. These alloys were then produced and evaluated. The results of this evaluation verified the hypothesis that Fe alloys with suitable levels of Cr (>12 wt%) and Si (>2.5 wt%) will be protected by either a tenacious oxide film (over a wide range of oxygen potentials above the formation potential for Cr and Si oxides) or by a low solubility surface region (at low oxygen potentials). Experimental results obtained from model alloys after lead-bismuth eutectic exposure at 600°C demonstrated the film formation process. / (cont.) 
The hypothesis that Si addition would promote the formation of a diffusion barrier was confirmed by the actual reduction of oxide thickness over time. The Si effect was magnified by the addition of Cr to the system. Based on a kinetic data assessment of the experimental results for Fe-Si and Fe-Cr-Si alloys, the synergistic alloying effect of Cr and Si was revealed. An improved understanding of the kinetic process and its dependence on the alloying elements has been achieved. / by Jeongyoun Lim. / Sc.D.
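Oxide growth kinetics of the kind assessed here are commonly described by a parabolic rate law, x(t) = sqrt(kp·t), where a diffusion-barrier-forming addition such as Si lowers kp. A sketch with invented rate constants (not the thesis's data):

```python
import math

# Parabolic oxidation law x(t) = sqrt(kp * t); kp values below are invented
# to illustrate how a diffusion barrier (smaller kp) thins the oxide.
def oxide_thickness_um(kp_um2_per_hr, t_hr):
    return math.sqrt(kp_um2_per_hr * t_hr)

no_barrier = oxide_thickness_um(0.04, 1000.0)      # hypothetical Fe-Cr alloy
with_barrier = oxide_thickness_um(0.0004, 1000.0)  # hypothetical Fe-Cr-Si alloy
print(round(no_barrier, 2), round(with_barrier, 2))  # → 6.32 0.63
```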
137

High-energy photon transport modeling for oil-well logging

Johnson, Erik D., Ph. D. Massachusetts Institute of Technology January 2009 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 121-122). / Nuclear oil well logging tools utilizing radioisotope sources of photons are used ubiquitously in oilfields throughout the world. Because of safety and security concerns, there is renewed interest in shifting to electronically-switchable accelerator sources. Investigation of accelerator sources opens up the opportunity to study higher-energy sources. In this thesis, sources with a 10 MeV endpoint are examined, a several-fold increase over traditional techniques. The properties of high-energy photon transport are investigated for potential new or improved well logging measurements. Two obvious processes available with a high-energy photon source are pair production and photoneutron emission. A new measurement of formation density is proposed based on the annihilation radiation produced after the pair production of high-energy source photons in the rock formation. With a detector spacing of 55 cm, this measurement exhibits a sensitivity to density with a dynamic range of 10 across a typical range of formation density (2.0-3.0 g/cc), the same as traditional measurements. Increases in depth of investigation for these measurements can substantially improve the sampling of the formation and thus the quality and relevance of the measurement. Being distributed in angle and space throughout the formation, a measurement based on annihilation photons exhibits a greater depth of investigation than traditional methods. For a detector spacing of 39 cm (equivalent to a typical spacing for one detector in traditional approaches), this measurement has a depth of investigation of 8.0 cm while the traditional measurement has a depth of investigation of 3.6 cm. / (cont.) For the 55 cm spacing, this depth is increased to 9.4 cm. 
Concerns remain over how to implement an accelerator source in which energy spectroscopy, essential for identifying an annihilation peak, is possible. Because pair production also depends on formation lithology, the effects of chemical composition on annihilation photon flux are small (<20%) for the studied geometry. Additionally, lithology measurements based on attenuation at high energies show too small an effect to be likely to produce a useful measurement. Photoneutron production cross sections at this energy are too small to obtain a measurement based on this process. / by Erik D. Johnson. / Ph.D.
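For context on the quoted "dynamic range of 10": in gamma-gamma density logging, the detected count rate falls roughly exponentially with formation density, so a factor-of-10 change in count rate across the 2.0-3.0 g/cc range fixes the effective sensitivity coefficient. This is a generic illustration, not the thesis's model:

```python
import math

# Count rate C ∝ exp(-k·ρ). A dynamic range of 10 across ρ = 2.0-3.0 g/cc
# implies an effective sensitivity coefficient k = ln(10) per (g/cc).
k = math.log(10)
dynamic_range = math.exp(-k * 2.0) / math.exp(-k * 3.0)
print(round(dynamic_range, 3))  # → 10.0
```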
138

Computational aspects of treatment planning for neutron capture therapy

Albritton, James Raymond, 1977- January 2010 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, February 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references. / Boron Neutron Capture Therapy (BNCT) is a biochemically targeted form of binary radiation therapy that has the potential to deliver radiation to cancers with cellular dose selectivity. Accurate and efficient treatment planning calculations are essential to maximizing the efficacy of BNCT and ensuring patient safety. This thesis investigates computational aspects of BNCT treatment planning with the aim of improving both the accuracy and efficiency of the planning process as well as developing a better understanding of differences in computational dosimetry that exist between the different BNCT clinical sites around the world. A suite of computational dosimetry reference problems was developed as a basis for comprehensively testing, comparing, and analyzing current and future BNCT treatment planning systems (TPSs) under conditions relevant to both patient planning and planning system calibration. Using these reference problems, four of the TPSs that have been used in clinical BNCT (MacNCTPlan, NCTPlan, BNCTRtpe, and SERA) were compared to reference calculations performed with the well-benchmarked Monte Carlo radiation transport code MCNP5. The comparison of multidimensional dose data in the form of dose profiles, isodose contours, dose difference distributions, and dose-volume histograms yielded many clinically significant differences. Additional calculations were performed to further investigate and explain significant deviations from the reference calculations. / (cont.) A combined 81 brain tumor patients have been treated in dose escalation trials of Neutron Capture Therapy (NCT) in the USA at Harvard-Massachusetts Institute of Technology (MIT) and Brookhaven National Laboratory (BNL). 
Pooling the clinical data from these and other trials will allow the evaluation of the safety and efficacy of NCT with more statistical rigor. However, differences in physical and computational dosimetry between the institutions, which make a direct comparison of the clinical dosimetry difficult, must first be addressed before clinical data can be compared. This study involves normalizing the BNL clinical dosimetry to that of Harvard-MIT for combined NCT dose response analysis, using analysis of MIT measurements and calculations with the BNL treatment planning system (TPS), BNCTRtpe, for two different phantoms. The BNL TPS was calibrated to dose measurements made by MIT at the Brookhaven Medical Research Reactor (BMRR) in the BNL calibration phantom, a Lucite cube, and then validated by MIT dose measurements at the BMRR in an ellipsoidal water phantom. Using the newly determined TPS calibration, treatment plans for all BNL patients were recomputed, yielding reductions in reported mean brain doses of 10% on average in the initial 15 patients treated with the 8 cm collimator and 27% in the latter 38 patients treated with a 12 cm collimator. These reductions in reported doses have clinically significant implications for those relying on reported BNL doses as a basis for initial dose selection in clinical studies and reaffirm the importance of collaborative dosimetric comparisons within the NCT community. / (cont.) The dosimetric adjustments allowed the BNL clinical data to be legitimately combined with the Harvard-MIT clinical data for a combined dose response analysis of the incidence of radiation-induced somnolence syndrome. Probit analysis of the composite data set for the incidence of somnolence yielded ED50 values of 5.76 Gy-w and 14.4 Gy-w for mean and maximum brain dose, respectively. The applicability and optimization of variance reduction techniques for BNCT Monte Carlo treatment planning calculations were investigated using MCNP5. 
The preexisting variance reduction scheme in the Monte Carlo model of the fission converter beam (FCB) at MIT was optimized, resulting in improved energy-dependent neutron and photon weight windows. Using these weight windows, a more precise surface source representation of the FCB was produced downstream at the patient position with improved statistical properties that increased the mean efficiency of in-phantom dose calculations by a factor of 9. The variance reduction techniques available in MCNP were also explored as a means of increasing the efficiency of dose calculations in the patient model. By disabling implicit neutron capture and using fast neutron source biasing and photon production biasing techniques, the mean efficiency of dose calculations was improved by a factor of 2.2. Constructing an accurate description of a neutron beam is critical to achieving accurate calculations of dose in NCT treatment planning. / (cont.) This study compares two methods of neutron beam source definition commonly used in BNCT treatment planning calculations, the phase space file (MCNP surface source file) and source variable probability distributions (MCNP SDef). To facilitate the comparison, a novel software tool was developed to analyze MCNP surface source files and construct MCNP SDef representations. This tool was applied to the MIT FCB, which has a well-validated Monte Carlo model. Each source type (surface source file and SDef) was used to simulate transport of the beam through voxel models of the modified Snyder head phantom, where doses were calculated. Compared to the surface source file, the initial dose calculations with the SDef produced significant errors of ~15%. 
Using a patched version of MCNP that allowed the observed radial dependence of the relative azimuthal angle to be modeled in the SDef, errors in all dose components in the head phantom at Dmax were reduced to acceptably small levels with none being statistically significant except for the induced photon error of 0.5%. Errors in the calculated doses introduced by sampling the azimuthal component of particle direction uniformly in the SDef vary spatially, are phantom-dependent, and thus cannot be accurately corrected by a simple scaling of doses. / by James Raymond Albritton. / Ph.D.
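The probit dose-response analysis mentioned in the abstract above can be illustrated with a small sketch. The coefficients below are hypothetical (not the thesis's fitted values); they are chosen only so that the model's ED50 lands at the reported mean-brain-dose value of 5.76 Gyw.

```python
from math import erf, sqrt

def probit_response(dose, intercept, slope):
    """Probit dose-response model: P(d) = Phi(intercept + slope * d),
    where Phi is the standard normal CDF."""
    z = intercept + slope * dose
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ed50(intercept, slope):
    """Dose at which the response probability is 50%, i.e. where the
    probit argument crosses zero."""
    return -intercept / slope

# Hypothetical coefficients, placed so ED50 = 5.76 Gyw as reported
# for mean brain dose; the slope is illustrative only.
a, b = -2.88, 0.5
print(ed50(a, b))                   # 5.76
print(probit_response(5.76, a, b))  # 0.5 by construction
```

At the ED50 the probit argument is zero, so the response is exactly 50% regardless of the slope; steeper slopes only sharpen the transition around that dose.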
139

The Greedy Exhaustive Dual Binary Swap methodology for fuel loading optimization in PWR reactors using the poropy reactor optimization tool

Haugen, Carl C. (Carl Christopher) January 2014 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2014. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 151-153). / This thesis presents the development and analysis of a deterministic optimization scheme, termed Greedy Exhaustive Dual Binary Swap, for the optimization of nuclear reactor core loading patterns. The goal of this scheme is to emulate the approach taken by an engineer when manually optimizing a reactor core loading pattern, and to determine whether this approach can locate high quality patterns that, due to their location in the core loading solution space, are consistently missed by standard stochastic optimization methods such as those in the genetic algorithm class or the simulated annealing class. The optimization study is carried out using the poropy tool to handle the reactor physics model. Initially, optimizations are carried out using the beginning-of-cycle eigenvalue as a surrogate for core excess reactivity and thus cycle length. The deterministic Dual Binary Swap is found to locate acceptable patterns less reliably than stochastic methods, but those that are located are of higher quality. Optimizations of the full depletion problem result in the deterministic Dual Binary Swap optimizer locating patterns of higher quality than those found by stochastic simulated annealing, with comparable frequency. The Dual Binary Swap optimizer is, however, found to be very dependent on the starting core configuration, and cannot reliably find a high quality pattern from any given starting configuration. / by Carl C. Haugen. / S.M.
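The swap strategy described in this abstract, exhaustively scoring every pairwise exchange and greedily applying the best one until no exchange improves the pattern, can be sketched generically. The toy objective below is purely illustrative and stands in for poropy's reactor physics evaluation; the real method scores loading patterns with a core simulator.

```python
from itertools import combinations

def greedy_exhaustive_swap(pattern, objective):
    """Greedy exhaustive pairwise-swap search: on each pass, evaluate
    every binary swap of two positions, apply the single best improving
    swap, and repeat until no swap improves the objective (a local
    optimum).  Illustrative stand-in for the Greedy Exhaustive Dual
    Binary Swap idea; 'objective' is maximized."""
    pattern = list(pattern)
    best = objective(pattern)
    improved = True
    while improved:
        improved = False
        best_swap = None
        for i, j in combinations(range(len(pattern)), 2):
            pattern[i], pattern[j] = pattern[j], pattern[i]  # try swap
            score = objective(pattern)
            pattern[i], pattern[j] = pattern[j], pattern[i]  # undo swap
            if score > best:
                best, best_swap = score, (i, j)
        if best_swap is not None:
            i, j = best_swap
            pattern[i], pattern[j] = pattern[j], pattern[i]
            improved = True
    return pattern, best

# Toy example: "assemblies" 0..3 want to sit at their own index;
# the objective penalizes total displacement.
toy = lambda p: -sum(abs(v - i) for i, v in enumerate(p))
result, score = greedy_exhaustive_swap([3, 1, 2, 0], toy)
print(result, score)  # [0, 1, 2, 3] 0
```

Like the thesis's deterministic scheme, this loop always reaches the same local optimum from a given start, which is why the quality of the result depends strongly on the starting configuration.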
140

Exploration of a superposition and reconciliation based approach to cell-centered Lagrangian hydrodynamic methods

Gilman, Lindsey Anne January 2012 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2012. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 84-90). / Applications and experiments involving the hypervelocity deformation of solids are difficult to devise and implement, and the deformation occurs on microsecond time scales. As a result, simulations play a large role in the study of hypervelocity deformation. This study explored a superposition and reconciliation based approach using cell-centered Lagrangian hydro methods. The reconciliation forces that are not explicitly calculated for mesh movement were analyzed on an existing hydrocode by Pierre-Henri Maire (PHM) and on a truncated form of the Runnels-Gilman method (a new hydro method implemented without using the reconciliation forces as additional forces). Results from both the 1D Piston and Saltzman test problems illustrate that the unaccounted reconciliation forces act on the mesh both at the shock front and behind the shock wave in PHM's method, while in the truncated Runnels-Gilman method, reconciliation forces act only on the vertices at the shock front. In test problems using PHM's method, the reconciliation forces may be capturing the additional forces that account for the more stable density and internal energy solutions during shock wave propagation as compared to the truncated Runnels-Gilman method. / by Lindsey Anne Gilman. / S.M.
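In cell-centered Lagrangian schemes of the kind this abstract discusses, the mesh vertices are moved at a velocity solved from the states of the adjacent cells. The sketch below shows a standard 1D acoustic (Godunov-type) vertex velocity of the sort used in Maire-style schemes; it is a generic illustration of the mesh-movement step, not the thesis's Runnels-Gilman reconciliation formulation.

```python
def vertex_velocity(rho_l, c_l, u_l, p_l, rho_r, c_r, u_r, p_r):
    """Acoustic Riemann solver for the velocity of a vertex between a
    left and right cell: an impedance-weighted average of the two cell
    velocities plus a pressure-jump correction.  Standard in 1D
    cell-centered Lagrangian hydrodynamics."""
    z_l, z_r = rho_l * c_l, rho_r * c_r  # acoustic impedances
    return (z_l * u_l + z_r * u_r + (p_l - p_r)) / (z_l + z_r)

def move_vertices(x, u_star, dt):
    """Lagrangian mesh motion: each vertex is advected at its solved
    interface velocity over one time step."""
    return [xi + dt * ui for xi, ui in zip(x, u_star)]

# Consistency check: in a uniform flow (identical states on both
# sides) the vertex simply moves with the fluid velocity.
u = vertex_velocity(1.0, 1.0, 2.0, 1.0, 1.0, 1.0, 2.0, 1.0)
print(u)  # 2.0
```

A left-right pressure jump drives the vertex toward the low-pressure side even when both cells are at rest, which is how the shock front sets the mesh in motion in such schemes.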
