211

Identification of performance indicators for nuclear power plants

Sui, Yu, 1973- January 2001 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2001. / Includes bibliographical references (p. 160-162). / Performance indicators have been assuming an increasingly important role in the nuclear industry. An integrated methodology is proposed in this research for the identification and validation of performance indicators for assessing and predicting nuclear power plant overall performance (i.e., both economic and safety performance) in a systematic and quantitative way. The methodology consists of four steps: the selection of target sites/plants, the identification and refinement of candidate indicators, the collection of historical operating records of selected indicators, and the identification and evaluation of correlations between selected indicators and plant performance through data analysis. The methodology is centered upon individual plants, using plant-specific operation records to identify and validate plant-specific correlations. It can also be applied to multiple plants, and the results from different plants can be compared to identify and analyze commonalities and differences in plant operations across plants. Case studies of the proposed methodology were performed at three target plants. A list of candidate performance indicators was identified through a sensitivity analysis on a quantitative model of nuclear power plant operation. The list was validated and supplemented through interviews with plant personnel and a refined, plant-specific list was obtained for each target plant. Historical operating records of candidate indicators in the lists were collected from target plants. Data analyses, including correlational analysis, multivariate regression analysis, and lead/lag time analysis, were performed using the historical data collected. / (cont.) The methodology was originally intended for the identification of leading indicators, which can provide advance warnings of deterioration of performance before the direct outcome indicators are affected. A regression-based lead/lag time analysis method was proposed and applied in the case studies to evaluate lead/lag relationships between candidate indicators and plant performance. However, the method did not produce stable and reliable results using the data currently available at the target plants and was not able to identify leading indicators with certainty. As a result, we shifted the focus of our data analysis to identifying correlations between candidate indicators and plant performance through correlational analysis and multivariate regression analysis. Several findings are noteworthy: (1) Data analysis results were sensitive to the indicators and data points used, mainly due to the small number of data points (30-60) available for use in the analyses; (2) Data analysis results generally agreed with our knowledge and expectation, with a few exceptions; (3) Correlations showed large variations from plant to plant; (4) Correlations varied from time to time at most target plants; (5) The outcome indicators with smoother patterns (e.g., the INPO performance index) tended to correlate better with candidate indicators than the outcome indicators that measured relatively rare events and had sharp changes in their patterns (e.g., unplanned capability loss factor); (6) Work order backlogs stood out as important indicators for all three target plants; ... / by Yu Sui. / Ph.D.
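
The lead/lag analysis described in the abstract above can be illustrated with a minimal sketch (not the author's code; the two-month lag, the series names, and the synthetic data are assumptions made for the example): correlate a candidate indicator against an outcome indicator at a range of forward shifts and look for the lag with the largest-magnitude correlation.

import numpy as np

def lagged_correlations(indicator, outcome, max_lag=6):
    # Pearson correlation when the indicator is shifted ahead of the outcome
    # by 0..max_lag periods; the lag with the largest-magnitude correlation
    # suggests how far the indicator leads the outcome.
    corr = {}
    for lag in range(max_lag + 1):
        x = indicator[: len(indicator) - lag] if lag else indicator
        y = outcome[lag:]
        corr[lag] = float(np.corrcoef(x, y)[0, 1])
    return corr

# Hypothetical monthly data (~60 points, as in the abstract): a work-order
# backlog that depresses a performance index two months later.
rng = np.random.default_rng(1)
backlog = rng.normal(size=62)
performance = -0.6 * backlog[:60] + 0.4 * rng.normal(size=60)
print(lagged_correlations(backlog[2:], performance, max_lag=4))  # strongest (negative) value at lag 2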
212

Precise control of quantum information

Pravia, Marco Antonio (Pravia Hernandez), 1975- January 2002 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2002. / Includes bibliographical references (p. 93-101). / Theoretical discoveries in the nascent field of quantum information processing hold great promise, suggesting the means for increased computational power and unconditionally secure communications. To achieve these advances in practice, however, quantum information must be stored and manipulated with high fidelity. Here, we describe how quantum information stored in a nuclear spin system can be controlled accurately. We describe a method for creating strongly modulating single-spin gates that faithfully produce the desired unitary transformations. The simulated fidelity of the best gate (under ideal conditions) reaches close to 0.99999, a value close to estimates of the fault-tolerant threshold. In addition, we show how knowledge of experimental errors can be used to correct or compensate the gates. The experimental demonstration of these methods yields estimated single-spin and coupling gate fidelities close to 0.99. The methods are applicable to a variety of experimental studies in quantum information processing. We used the gates to implement strategies for combating decoherence, including the realization of a noiseless subsystem and the concatenation of quantum error correction with dynamical decoupling. The gates were also used to demonstrate the quantum Fourier transform, the disentanglement eraser, and an entanglement swap. Finally, we describe a nuclear magnetic resonance (NMR) implementation of a quantum lattice gas (QLG) algorithm. Recently, it has been suggested that an array of small quantum information processors sharing classical information can be used to solve selected computational problems. The concrete implementation demonstrated here solves the diffusion equation, and it provides a test example from which to / (cont.) probe the strengths and limitations of this new computation paradigm. The NMR experiment consists of encoding a mass density onto an array of 16 two-qubit quantum information processors and then following the computation through 7 time steps of the algorithm. The results show good agreement with the analytic solution for diffusive dynamics. / by Marco Antonio Pravia. / Ph.D.
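
As a point of reference for the fidelity figures quoted above, one common figure of merit is the average gate fidelity between the target unitary and the implemented one. The sketch below is illustrative only; the 0.5% over-rotation is a made-up error model, and the thesis may define its simulated fidelity differently.

import numpy as np

def average_gate_fidelity(u_target, u_actual):
    # Average fidelity between two unitaries on a d-dimensional space:
    # F = (d + |Tr(U^dagger V)|^2) / (d * (d + 1)).
    d = u_target.shape[0]
    overlap = np.trace(u_target.conj().T @ u_actual)
    return (d + abs(overlap) ** 2) / (d * (d + 1))

def rx(theta):
    # Single-spin rotation about x by angle theta.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sx

# Ideal pi/2 rotation versus one with a hypothetical 0.5% over-rotation (d = 2).
print(average_gate_fidelity(rx(np.pi / 2), rx(1.005 * np.pi / 2)))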
213

Fabrication and characterization of multilayers for focusing hard x-ray optics

Ivan, Adrian, 1964- January 2002 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2002. / Includes bibliographical references. / This study investigated the growth of multilayers with depth-graded thicknesses and their performance as x-ray mirrors for the next generation of focusing telescopes. Different material combinations were studied for the first time employing a consistent methodology for the DC magnetron sputtering growth of the multilayers and their characterization by x-ray reflectivity (XRR) measurements. The width of the interfaces was identified as the most important factor that must be controlled and minimized to achieve high reflectivity. The optimization of the process parameters led to interface widths ([sigma]) of 3.0-4.5 [angstroms] for both W/Si and Pt/C, the most interesting materials for this application. The other material combinations studied were: W/C, Mo/Si, Ni/C, Ni-V/C, WSi₂/Si, and Nb/Si. The interface widths for W/C were about 4 [angstroms], but wider ([sigma] [greater than or equal to] 7 [angstroms]) for Mo/Si and Ni/C. The analysis indicated that the large Mo/Si interface width was due to the formation of a silicide compound. In the case of Ni/C, TEM and XRR measurements concurred in indicating an island growth mode of nickel for thicknesses less than 17 [angstroms], resulting in a high roughness at the C-on-Ni interface. A trial with WSi₂/Si multilayers showed values of sigma comparable to the simpler W/Si system. For Nb/Si the interface widths were larger than 9 [angstroms]. Depth-graded W/Si, Pt/C, and W/C multilayers were deposited on flat substrates in process conditions optimized for low [sigma] (3-5 [angstroms]). Their x-ray reflectivity was measured at high energy using synchrotron radiation sources and confirmed the designs for the structure of the multilayers and the reflection bandpass. / (cont.) The reflectivity was between 20 and 40% for grazing incidence angles used in hard x-ray focusing telescopes. One of the deposition chambers built for this study has the unique capability to coat with multilayers the inside of integral conical shells that compose the telescope mirror system. We have accomplished this type of coating for the first time and present results from its x-ray measurements. / by Adrian Ivan. / Ph.D.
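
The sensitivity of hard x-ray multilayer mirrors to interface widths of a few angstroms can be illustrated with a small sketch. The 40 keV photon energy and 0.2 degree grazing angle are assumed operating points, not values taken from the thesis; the sketch estimates the bilayer period implied by the Bragg condition and the Debye-Waller-type damping of specular reflectivity for several values of [sigma].

import numpy as np

def wavelength_angstrom(energy_kev):
    # lambda [A] ~= 12.398 / E [keV]
    return 12.398 / energy_kev

def bragg_period(energy_kev, grazing_deg, order=1):
    # Multilayer period d satisfying m*lambda = 2*d*sin(theta),
    # neglecting refraction corrections.
    lam = wavelength_angstrom(energy_kev)
    return order * lam / (2 * np.sin(np.radians(grazing_deg)))

def roughness_attenuation(energy_kev, grazing_deg, sigma_angstrom):
    # Debye-Waller-type damping by an interface of width sigma:
    # R_rough / R_ideal ~ exp(-(q_z * sigma)^2), with q_z = 4*pi*sin(theta)/lambda.
    lam = wavelength_angstrom(energy_kev)
    qz = 4 * np.pi * np.sin(np.radians(grazing_deg)) / lam
    return np.exp(-(qz * sigma_angstrom) ** 2)

print(bragg_period(40.0, 0.2))                       # bilayer period of roughly 44 A
for sigma in (3.0, 4.5, 7.0):
    print(sigma, roughness_attenuation(40.0, 0.2, sigma))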
214

Synthesis and evaluation of actinide imprinted resins

Noyes, Karen Lynn, 1977- January 2003 (has links)
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2003. / Includes bibliographical references (p. 143-146). / Organic resins have previously shown good results with application to actinide separations. Large portions of recent research have been dedicated to the synthesis and evaluation of resins with phenolic-type functional groups. Other recent chemical research with lighter metals has developed a technique known as ion imprinting, which can provide greater selectivity for the target metal ion. Initial work with ion imprinting and phenolic-type resins has shown these two areas to be largely incompatible. Identifying the ion imprinting technique as potentially the more valuable of the two, further work was undertaken with resins that incorporate a carboxylic acid-type functionality. These new resins are synthesized via a radical polymerization method, which proved to be very compatible with both actinides and the ion imprinting procedure. Polymer-based resins were synthesized without a metal template as well as ion imprinted, or templated, with U(VI), Th(IV), Np(V), and a resin for use with Am(III). Each of these resins was individually characterized and evaluated for use with its respective target metal. Characterization provides a means of comparing theoretical binding capacities of various resins, while the evaluations define the binding characteristics of interest (capacity, selectivity, kinetics, etc.). Based on the initial results for the selectivity of the U(VI) and Th(IV) ions, a new type of resin was developed in an effort to further increase the selectivity of the resin for the target metal ion. This new resin, known as a "capped" resin, seeks to remove the binding capability of any potential binding sites not involved in the ion imprinting process. / (cont.) Results show that the ion imprinting technique can be applied successfully in the synthesis of resins for actinide separations. The resins created through this process also show an affinity for their target metals over competing ions, including ions of similar ionic charge and radius. The removal of so-called random binding sites is also possible, with the addition of a few synthetic steps. / by Karen Lynn Noyes. / Sc.D.
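
Resin evaluations of this kind are commonly summarized with a distribution coefficient and a selectivity coefficient from batch uptake experiments. A minimal sketch follows; the concentrations, solution volume, and resin mass are invented, and the thesis may report different figures of merit.

def distribution_coefficient(c_initial, c_equilibrium, volume_ml, mass_g):
    # Kd [mL/g] = ((C0 - Ce) / Ce) * (V / m) for a batch uptake experiment.
    return (c_initial - c_equilibrium) / c_equilibrium * (volume_ml / mass_g)

def selectivity(kd_target, kd_competitor):
    # Selectivity of the resin for the target ion over a competitor ion.
    return kd_target / kd_competitor

# Hypothetical batch data: 10 mL of solution contacted with 0.05 g of resin.
kd_u  = distribution_coefficient(1.00, 0.20, 10.0, 0.05)  # target ion on an imprinted resin
kd_th = distribution_coefficient(1.00, 0.70, 10.0, 0.05)  # competing ion on the same resin
print(kd_u, kd_th, selectivity(kd_u, kd_th))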
215

Model combination by decomposition and aggregation

Xu, Mingyang, 1974- January 2004 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2004. / Includes bibliographical references (p. 265-282). / This thesis focuses on a general problem in statistical modeling, namely model combination. It proposes a novel feature-based model combination method to improve model accuracy and reduce model uncertainty. In this method, a set of candidate models are first decomposed into a group of components or features and then components are selected and aggregated into a composite model based on data. However, in implementing this new method, some central challenges have to be addressed, which include candidate model choice, component selection, data noise modeling, model uncertainty reduction and model locality. In order to solve these problems, some new methods are put forward. In choosing candidate models, several criteria are proposed, including accuracy, diversity, independence, and completeness; corresponding quantitative measures are designed to quantify these criteria, and finally an overall preference score is generated for each model in the pool. Principal component analysis (PCA) and independent component analysis (ICA) are applied to decompose candidate models into components and multiple linear regression is employed to aggregate components into a composite model. / (cont.) In order to reduce model structure uncertainty, a new concept of fuzzy variable selection is introduced to carry out component selection, which is able to combine the interpretability of classical variable selection and the stability of shrinkage estimators. In dealing with parameter estimation uncertainty, the exponential power distribution is proposed to model unknown non-Gaussian noise and a parametric weighted least-squares method is devised to estimate parameters in the context of non-Gaussian noise. These two methods work together to reduce model uncertainty, including both model structure uncertainty and parameter uncertainty. To handle model locality, i.e., the fact that candidate models do not work equally well over different regions, the adaptive fuzzy mixture of local ICA models is developed. Basically, it splits the entire input space into sub-regions, builds a local ICA model within each sub-region, and then combines them into a mixture model. Many different experiments are carried out to demonstrate the performance of this novel method. Our simulation study and comparison show that this new method meets our goals and outperforms existing methods in most situations. / by Mingyang Xu. / Ph.D.
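
A minimal sketch of the decompose-then-aggregate idea is given below, using PCA (via SVD) for decomposition and ordinary least squares for aggregation. The thesis additionally uses ICA, fuzzy variable selection, and non-Gaussian noise models, none of which are shown, and the three candidate models here are invented for illustration.

import numpy as np

def combine_models(model_preds, y, n_components=2):
    # model_preds: (n_samples, n_models) matrix of candidate-model predictions.
    # Decompose the candidate models into principal components, keep the leading
    # ones, and aggregate them into a composite model by linear regression on y.
    center = model_preds.mean(axis=0)
    X = model_preds - center
    _, _, vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    scores = X @ vt[:n_components].T                   # component scores
    A = np.column_stack([np.ones(len(y)), scores])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    def composite(new_preds):
        Z = (new_preds - center) @ vt[:n_components].T
        return np.column_stack([np.ones(len(Z)), Z]) @ coef
    return composite

# Hypothetical example: three imperfect candidate models of the same quantity.
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 50)
preds = np.column_stack([truth + 0.1 * rng.normal(size=50),
                         0.8 * truth + 0.05,
                         truth ** 2])
model = combine_models(preds, truth + 0.02 * rng.normal(size=50))
print(np.abs(model(preds) - truth).mean())             # composite error on the data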
216

Feasibility of risk-informed regulation for Generation-IV reactors

Matos, Craig H January 2005 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2005. / Includes bibliographical references (p. 155-156). / With the advent of new and innovative Generation-IV reactor designs, new regulations must be developed to assure the safety of these plants. In the past a purely deterministic way of developing design basis accidents was prevalent; however, this approach is not considered satisfactory, since it leaves insufficient safety restrictions in certain areas while being overly restrictive in others, and it cannot optimize where the safety constraints are truly needed. Currently the USNRC is investigating how one might go about such a risk-informed approach, but no method has been finalized. In this paper, a methodology for creating risk-informed design basis accidents is developed. This not only incorporates the Surrogate Risk Guidelines developed by the USNRC for each overall accident initiator the plant may experience, but also helps to select which sequences are the most important to that initiator, and from these develops a set of risk-informed assumptions that form the basis of the design basis accident. This method was applied to the test case of the Massachusetts Institute of Technology's Gas-Fast Reactor (GFR) design, as considerable risk-informed design work has been carried out on various initiators for this design (including Turbine Trip, Loss-of-Coolant Accident, and Loss-of-Offsite Power). / (cont.) The turbine trip was chosen for extensive investigation. It was found that the CDF of this event for the GFR (7.098E-6 / RY) did not pass the overall NRC Surrogate Risk Guideline (1E-6 / RY). The method identified the dominant sequence, which was itself dominated by the failure of the passive shutdown cooling system (SCS) for the GFR design. It was determined that the designers could in fact develop a risk-informed DBA by developing a set of assumptions to ensure success of the passive SCS. This process showed how risk-informed DBAs could be developed for various new reactor designs. / by Craig H. Matos. / S.M.
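
The screening step against the Surrogate Risk Guideline amounts to summing the sequence frequencies for an initiator, comparing the total with the guideline, and flagging the dominant sequence. A minimal sketch follows; only the total CDF of 7.098E-6/RY and the 1E-6/RY guideline come from the abstract, while the split across sequences is invented for illustration.

def check_surrogate_guideline(sequence_frequencies, guideline_per_ry=1e-6):
    # sequence_frequencies: {sequence name: core damage frequency per reactor-year}.
    # Returns the total CDF for the initiator, whether it meets the guideline,
    # and the dominant sequence (the natural target for risk-informed assumptions).
    total = sum(sequence_frequencies.values())
    dominant = max(sequence_frequencies, key=sequence_frequencies.get)
    return total, total <= guideline_per_ry, dominant

# Hypothetical turbine-trip sequence breakdown summing to the quoted total.
sequences = {"loss of passive SCS": 6.5e-6, "other sequences": 0.598e-6}
print(check_surrogate_guideline(sequences))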
217

A functional MRI study of the distributed neural circuitry of learning and reward / fMRI study of the distributed neural circuitry of learning and reward

Awai, Alexandra F January 2005 (has links)
Thesis (S.M. and S.B.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2005. / Includes bibliographical references. / The aim of this research project was to study the neural substrates involved in processing rewarding stimuli. Evaluation of the magnitudes of reward is one of the fundamental aspects of goal-directed behavior, and studies have shown that this process involves the midbrain dopamine system. Work by C. R. Gallistel has shown that the reward magnitude of electrical stimulation to structures within this system increases with increasing current and frequency. In this study, operant conditioning with intracranial self-stimulation of the medial forebrain bundle (MFB) was used to correlate the rewarding quality of a stimulus with variations of its current amplitude (Part 1) or electrical pulse frequency (Part 2). For Part 2, a saturation frequency, which is the point at which increasing stimulus frequency does not elicit a more vigorous operant response, was established for each of the responsive subjects. Functional magnetic resonance imaging (fMRI) with blood oxygenation level dependent (BOLD) contrast was then used to evaluate the brain activation in response to behaviorally characterized electrical stimuli. High resolution anatomical images revealed that subjects with electrode tips positioned within 1 mm of the midline of the MFB tended to demonstrate reward-seeking behavior. Timecourses were plotted for imaging voxels in areas exhibiting BOLD responses in Part 1 and Part 2. / (cont.) In Part 1, the BOLD timecourse in the striatum/orbital cortex region - which has been implicated in reward processing - had different time-evolution characteristics from those of the central sinus, which is thought to reflect general hemodynamic responses to stimuli. Additionally, the activated regions were qualitatively similar for varying currents, but lower current amplitude led to a smaller percentage of active voxels. In Part 2, responses in the somatosensory/motor cortex and striatum with adjacent ventral forebrain, which are both thought to comprise important reward-processing circuitry, showed similar BOLD responses at saturation and above-saturation frequencies, but lower responses at below-saturation frequencies. These results show that BOLD imaging can be utilized to isolate regions that code for the rewarding quality of MFB stimulation, rather than its sensory aspects. / by Alexandra F. Awai. / S.M. and S.B.
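
A simplified version of the voxel-level analysis described above, correlating each voxel timecourse with a stimulation regressor and reporting the fraction of voxels above a threshold, might look like the sketch below. The block timing, threshold, and data are assumptions; a real analysis would use a GLM with a hemodynamic response model and proper statistical thresholding.

import numpy as np

def percent_active_voxels(timecourses, stimulus, threshold=0.4):
    # timecourses: (n_voxels, n_timepoints) BOLD signals.
    # stimulus:    (n_timepoints,) regressor, e.g. a boxcar of stimulation blocks.
    # A voxel counts as "active" if its correlation with the regressor exceeds
    # the (hypothetical) threshold.
    tc = timecourses - timecourses.mean(axis=1, keepdims=True)
    st = stimulus - stimulus.mean()
    r = (tc @ st) / (np.linalg.norm(tc, axis=1) * np.linalg.norm(st))
    return 100.0 * np.mean(r > threshold)

# Hypothetical block design: 10 s on / 20 s off, repeated four times.
stim = np.tile(np.r_[np.ones(10), np.zeros(20)], 4)
rng = np.random.default_rng(0)
voxels = rng.normal(size=(500, stim.size))
voxels[:50] += 1.5 * stim          # 10% of voxels respond to the stimulation
print(percent_active_voxels(voxels, stim))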
218

Design of near-field coded aperture cameras for high-resolution medical and industrial gamma-ray imaging

Accorsi, Roberto, 1971- January 2001 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2001. / Includes bibliographical references (p. [251]-255). / Coded Aperture Imaging is a technique originally developed for X-ray astronomy, where typical imaging problems are characterized by far-field geometry and an object made of point sources distributed over a mainly dark background. These conditions provide, respectively, the basis of artifact-free and high Signal-to-Noise Ratio (SNR) imaging. When the coded apertures successful in far-field problems are used in near-field geometry, images are affected by extensive artifacts. The classic remedy is to move away from the object until a far-field geometry is restored, but this is at the expense of counting efficiency and, thus, of the SNR of the images. It is shown in this thesis that the application to near-field of a technique originally developed to mitigate the effects of non-uniform background in far-field applications results in a considerable reduction of near-field artifacts. This result opens the way to the exploitation in near-field problems of the favorable SNR characteristics of coded apertures: images comparable to those provided by state-of-the-art imagers can be obtained in a shorter time or while administering a lower dose to patients. Further developments follow when the SNR increase is traded for better resolution at constant time and dose. / (cont.) The main focus of this work is on a coded aperture camera specifically designed for high-resolution single-photon planar imaging with a pre-existing gamma (Anger) camera. Original theoretical findings and the results of computer simulations led to an optimal coded aperture that was tested experimentally in phantom as well as in-vivo studies. Results include, but are not limited to, 1.66-mm-resolution images of 99mTc-labeled blood and bone agents in a mouse. The theoretical bases for extension to sub-millimeter resolution and higher-energy isotopes are also laid, and a candidate aperture capable of 0.96-mm resolution is proposed. Potential applications are in small-animal imaging, pediatric nuclear medicine and breast imaging, where increased resolution can result in earlier diagnosis of disease. The last chapter of the thesis extends the ideas developed to the design of a coded aperture suitable for CAFNA (Coded Aperture Fast Neutron Analysis), a contraband detection technique that has been under development at MIT for a number of years. / by Roberto Accorsi. / Ph.D.
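
The correlation-based encoding and decoding at the heart of coded aperture imaging can be shown in a toy 1-D example with a URA-like pattern built from quadratic residues. Near-field geometry, 2-D apertures, and noise, which are the thesis's real concerns, are not modeled here; the object and prime length are invented for the example.

import numpy as np

def circular_correlation(x, y):
    # c[k] = sum_i x[i] * y[(i + k) mod n]
    n = len(x)
    return np.array([np.dot(x, np.roll(y, -k)) for k in range(n)])

# 1-D aperture from the quadratic residues of a prime p = 3 (mod 4);
# roughly half the elements are open, as in a URA-style pattern.
p = 19
aperture = np.zeros(p)
aperture[list({(i * i) % p for i in range(1, p)})] = 1   # open elements
decoder = 2 * aperture - 1                                # balanced decoding array

# Hypothetical object: two point sources on a dark background.
obj = np.zeros(p)
obj[4], obj[11] = 1.0, 0.5

detector = circular_correlation(obj, aperture)            # idealized, noiseless encoding
raw = circular_correlation(detector, decoder)             # decoding by correlation
obj_sum = detector.sum() / aperture.sum()                 # total flux, recovered from the detector
recovered = (raw + obj_sum) * 2.0 / (p + 1)               # exact up to the known offset and scale
print(np.allclose(recovered, obj))                        # True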
219

Ion collection by probing objects in flowing magnetized plasmas

Chung, Kyu-Sun January 1989 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 1989. / Includes bibliographical references. / Kyu-Sun Chung. / Ph.D.
220

Measurements of injected impurity transport in TEXT using multiply filtered soft x-ray detectors

Wenzel, Kevin W January 1990 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 1990. / Includes bibliographical references (leaves 213-234). / by Kevin Wayne Wenzel. / Ph.D.
