501. Analysis of Zincblende-Phase GaN, Cubic-Phase SiC, and GaAs MESFETs Including a Full-Band Monte Carlo Simulator. Weber, Michael Thomas, 06 October 2005.
The objective of this research has been the study of device properties for emerging wide-bandgap cubic-phase semiconductors. Though the wide-bandgap semiconductors have great potential for high-power microwave devices, many gaps remain in the knowledge of their properties. The simulations in this work are designed to give insight into the performance of high-power microwave devices constructed from the materials in question. The simulations are performed using a Monte Carlo simulator designed from the ground up to include accurate numerical band structures derived from an empirical pseudo-potential model. Improvements made to the simulator include generalized device-structure simulation, a fully numerical final-state selector, and the inclusion of overlap integrals in final-state selection. The first comparison made among the materials is direct-current (DC) breakdown. The DC voltage at which breakdown occurs is a good indication of how much power a transistor can provide. It is found that GaAs has the lowest DC breakdown voltage, with those of 3C-SiC and ZB-GaN being over three times higher. This follows expectations and is discussed in detail in the work. The second comparison is the radio-frequency breakdown of the transistors. When devices are used in high-frequency applications, it is possible to operate them beyond DC breakdown levels. This phenomenon is caused by the finite reaction time of the carriers in the device. It is important to understand this effect when these materials are used in high-frequency applications, since it can change a material's ability to produce high-power devices. MESFETs made from these materials are compared and the results are discussed in detail.
502. Propagation of Imprecise Probabilities through Black Box Models. Bruns, Morgan Chase, 12 April 2006.
From the decision-based design perspective, decision making is the critical element of the design process. All practical decision making occurs under some degree of uncertainty. Subjective expected utility theory is a well-established method for decision making under uncertainty; however, it assumes that the decision maker (DM) can express his or her beliefs as precise probability distributions. For many reasons, both practical and theoretical, it can be beneficial to relax this assumption of precision. One possible means of avoiding this assumption is the use of imprecise probabilities. Imprecise probabilities are more expressive of uncertainty than precise probabilities, but they are also more computationally cumbersome. Probability Bounds Analysis (PBA) is a compromise between the expressiveness of imprecise probabilities and the computational ease of modeling beliefs with precise probabilities. In order for PBA to be implemented in engineering design, it is necessary to develop appropriate computational methods for propagating probability boxes (p-boxes) through black box engineering models. This thesis examines the range of applicability of current methods for p-box propagation and proposes three alternative methods. These methods are applied to three numerical examples of increasing complexity.
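As a concrete illustration of what p-box propagation involves, the sketch below pushes a p-box, represented by two bounding CDFs, through a black-box model with a simple sampling scheme. The model, the normal CDF bounds, and all names are illustrative assumptions, not methods from the thesis; the envelope construction is exact only for monotone models.

```python
import numpy as np
from scipy.stats import norm

def black_box(x):
    # Hypothetical engineering model (illustrative only)
    return x**2 + 3.0 * x

def propagate_pbox(inv_cdf_a, inv_cdf_b, model, n=5000, seed=0):
    """Push a p-box, given by two bounding inverse CDFs, through a model
    by evaluating both bounds at the same uniform quantiles. The min/max
    envelope bounds the output CDF exactly only for monotone models;
    otherwise it is a heuristic."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)              # shared quantiles
    y_a = model(inv_cdf_a(u))
    y_b = model(inv_cdf_b(u))
    lo = np.sort(np.minimum(y_a, y_b))   # lower envelope sample
    hi = np.sort(np.maximum(y_a, y_b))   # upper envelope sample
    return lo, hi

# Input known only as "normal with mean somewhere in [9, 11], sd 1":
inv_a = lambda u: norm.ppf(u, loc=9.0, scale=1.0)
inv_b = lambda u: norm.ppf(u, loc=11.0, scale=1.0)
y_lo, y_hi = propagate_pbox(inv_a, inv_b, black_box)
```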
503. Monte Carlo Modeling of Carrier Dynamics in Photoconductive Terahertz Sources. Kim, Dae Sin, 23 June 2006.
Carrier dynamics in GaAs-based photoconductive terahertz (THz) sources are investigated using Monte Carlo techniques to optimize the emitted THz transients. A self-consistent Monte Carlo-Poisson solver is developed to capture the spatio-temporal carrier transport properties. The screening contributions to the THz radiation associated with the Coulomb and radiation fields are obtained self-consistently by incorporating the three-dimensional Maxwell equations into the solver. In addition, the enhancement of THz emission by a large trap-enhanced field (TEF) near the anode in semi-insulating (SI) photoconductors is investigated.
The transport properties of the photoexcited carriers in photoconductive THz sources depend markedly on the initial spatial distribution of those carriers. Thus, considerable control of the emitted THz spectrum can be attained by judiciously choosing the shape of the optical excitation spot on the photoconductor, since the carrier dynamics that provide the source of the THz radiation are strongly affected by the ensuing screening. The screening contributions due to the Coulomb and radiation parts of the electromagnetic field acting back on the carrier dynamics are distinguished. The dominant component of the screening field crosses over at an excitation aperture size with a full width at half maximum (FWHM) of ~100 μm for a range of reasonable excitation levels. In addition, the key mechanisms responsible for the TEF near the anode of SI photoconductors are elucidated in detail. For a given optical excitation power, an enhancement of THz radiation power can be obtained using a maximally broadened excitation aperture elongated along the anode in the TEF area, due to the reduction in the Coulomb and radiation screening of the TEF.
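To make the self-consistent Monte Carlo-Poisson idea concrete, here is a deliberately minimal one-dimensional sketch of the loop: free flight in the local field, stochastic scattering, charge deposition, Poisson solve. The constant scattering rate, boundary conditions, and all parameters are illustrative assumptions; the thesis's solver is three-dimensional and coupled to the full Maxwell equations.

```python
import numpy as np

# Illustrative constants for a 1D toy model (not the thesis's 3D solver)
Q = 1.602e-19                 # elementary charge [C]
M = 0.067 * 9.109e-31         # GaAs Gamma-valley effective mass [kg]
EPS = 12.9 * 8.854e-12        # GaAs static permittivity [F/m]
DT, L, NX = 1e-15, 1e-6, 100  # time step [s], length [m], grid cells
DX = L / NX

def solve_poisson(rho):
    """Finite-difference Poisson solve with V = 0 at both contacts."""
    A = (np.diag(-2.0 * np.ones(NX)) + np.diag(np.ones(NX - 1), 1)
         + np.diag(np.ones(NX - 1), -1))
    V = np.linalg.solve(A, -rho * DX**2 / EPS)
    return -np.gradient(V, DX)           # electric field [V/m]

def mc_step(x, v, E, rng, scat_rate=1e13):
    """Free flight in the local field plus crude constant-rate scattering."""
    ix = np.clip((x / DX).astype(int), 0, NX - 1)
    v = v - Q * E[ix] / M * DT           # electrons accelerate against E
    x = np.clip(x + v * DT, 0.0, L)
    hit = rng.uniform(size=x.size) < scat_rate * DT
    v[hit] = rng.normal(0.0, 1e5, hit.sum())  # randomize velocity on scattering
    return x, v

rng = np.random.default_rng(0)
x = rng.normal(0.2 * L, 0.02 * L, 2000).clip(0, L)  # photoexcited spot
v = rng.normal(0.0, 1e5, 2000)
for _ in range(100):                     # self-consistent loop
    counts, _ = np.histogram(x, bins=NX, range=(0.0, L))
    rho = -Q * counts / DX               # electron charge density (holes omitted)
    E = solve_poisson(rho)
    x, v = mc_step(x, v, E, rng)
```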
504. Stochastically Generated Multigroup Diffusion Coefficients. Pounders, Justin M., 20 November 2006.
The generation of multigroup neutron cross sections is usually the first step in the solution of reactor physics problems. This typically includes generating condensed cross section sets, collapsing the scattering kernel, and, within the context of diffusion theory, computing diffusion coefficients that capture transport effects as accurately as possible. Although the calculation of multigroup parameters has historically been done via deterministic methods, it is natural to consider the Monte Carlo method because of its geometric flexibility and robust computational capabilities, such as continuous-energy transport.
For this reason, a stochastic cross section generation method has been implemented in the Monte Carlo code MCNP5 (Brown et al., 2003) that is capable of computing macroscopic material cross sections (including angular expansions of the scattering kernel) for transport or diffusion applications. This methodology includes the capability of tallying arbitrary-order Legendre expansions of the scattering kernel. Furthermore, several approximations of the diffusion coefficient have been developed and implemented. The accuracy of these stochastic diffusion coefficients within the multigroup framework is investigated by examining a series of simple reactor problems.
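One common choice among such approximations is the out-scatter transport correction, in which the tallied P1 moment of the scattering kernel corrects the total cross section. The sketch below shows that textbook form with made-up two-group numbers; it is one standard approximation, not necessarily the thesis's preferred one.

```python
def diffusion_coefficient(sigma_t, sigma_s1):
    """Out-scatter transport-corrected diffusion coefficient:
    D_g = 1 / (3 * (sigma_t_g - sigma_s1_g)),
    where sigma_s1 is the P1 (first Legendre) moment of the
    scattering kernel, as tallied by the Monte Carlo run."""
    sigma_tr = sigma_t - sigma_s1        # transport cross section
    return 1.0 / (3.0 * sigma_tr)

# Illustrative two-group values [1/cm] (not from the thesis):
sigma_t  = [0.650, 1.200]
sigma_s1 = [0.150, 0.050]
D = [diffusion_coefficient(t, s1) for t, s1 in zip(sigma_t, sigma_s1)]
```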
505. Characterization and source apportionment of PM2.5 in the Southeastern United States. Lee, Sangil, 07 November 2006.
Fine particulate matter (PM2.5) affects the environment in a variety of ways, including effects on human health, visibility impairment, acid deposition, and climate change. As of March 2006, 47 counties in the southeastern United States were designated as PM2.5 non-attainment areas. State agencies with PM2.5 non-attainment counties must develop plans that demonstrate how they will achieve attainment status. State agencies also have to address the emission sources responsible for visibility impairment and develop strategies to improve visibility. It is essential to understand PM2.5 composition and sources in order to develop effective control strategies to reduce PM2.5. In this thesis, actual prescribed burning emissions were characterized for better estimation of their impacts on air quality. Chemical mass balance (CMB) modeling, a receptor-oriented source apportionment technique, was applied to understand regional characteristics of PM2.5 source impacts in the Southeast. Uncertainty issues in the CMB source apportionment results, due to both poor spatial representativeness and measurement errors, were addressed for better understanding and estimation of the uncertainties. Possible future research is recommended based on the findings in this thesis.
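At its core, CMB expresses each measured species concentration as a linear combination of source profiles and solves for the source contributions. The sketch below uses plain least squares and invented numbers; the operational CMB model uses effective-variance weighting that folds measurement uncertainties into the fit.

```python
import numpy as np

# Chemical mass balance: c = F @ s, where
#   c (n_species,)           measured ambient concentrations
#   F (n_species, n_sources) source profiles (mass fraction per species)
#   s (n_sources,)           unknown source contributions
F = np.array([
    [0.40, 0.02],   # organic carbon: biomass burning vs. soil dust
    [0.05, 0.01],   # elemental carbon
    [0.10, 0.60],   # crustal elements
])                   # illustrative profiles, not measured values
c = np.array([0.95, 0.12, 0.55])  # illustrative ambient concentrations

s, residuals, rank, _ = np.linalg.lstsq(F, c, rcond=None)
print("source contributions:", s)
```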
506. A Particle Filtering-based Framework for On-line Fault Diagnosis and Failure Prognosis. Orchard, Marcos Eduardo, 08 November 2007.
This thesis presents an on-line particle-filtering-based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the definition of a set of fault indicators appropriate for monitoring purposes, the availability of real-time process measurements, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions.
The incorporation of particle-filtering (PF) techniques in the proposed scheme not only allows for the implementation of real-time algorithms, but also provides a solid theoretical framework for handling fault detection and isolation (FDI), fault identification, and failure prognosis. Founded on the concept of sequential importance sampling (SIS) and Bayesian theory, PF approximates the conditional state probability distribution by a swarm of points, called particles, and a set of weights representing discrete probability masses. Particles can be easily generated and recursively updated in real time, given a nonlinear process dynamic model and a measurement model that relates the states of the system to the observed fault indicators.
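The generate-weight-resample cycle this describes can be written compactly. Below is a minimal bootstrap (SIS with resampling) step with Gaussian noise models and a toy scalar degradation state; all models, noise levels, and names are illustrative assumptions, not the thesis's problem-specific models.

```python
import numpy as np

def particle_filter_step(particles, weights, y_obs, f, h, q_std, r_std,
                         rng=np.random.default_rng()):
    """One bootstrap SIS step: propagate, re-weight, resample if degenerate.
    f: state transition x_k = f(x_{k-1}) + process noise
    h: measurement model y_k = h(x_k) + measurement noise"""
    # 1. Propagate each particle through the process model.
    particles = f(particles) + rng.normal(0.0, q_std, particles.shape)
    # 2. Re-weight by the Gaussian likelihood of the new observation.
    innov = y_obs - h(particles)
    weights = weights * np.exp(-0.5 * (innov / r_std) ** 2)
    weights = weights / weights.sum()
    # 3. Resample when the effective sample size degenerates.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy usage: scalar fault-growth state observed through noise.
f = lambda x: 1.01 * x          # slow exponential degradation
h = lambda x: x                 # direct (noisy) measurement
particles = np.random.default_rng(1).normal(1.0, 0.1, 500)
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights, 1.05,
                                          f, h, q_std=0.02, r_std=0.05)
```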
Two autonomous modules have been considered in this research. On one hand, the fault diagnosis module uses a hybrid state-space model of the plant and a particle-filtering algorithm to (1) calculate the probability of any given fault condition in real time, (2) estimate the probability density function (pdf) of the continuous-valued states in the monitored system, and (3) provide information about type I and type II detection errors, as well as other critical statistics. Among the advantages offered by this diagnosis approach is the fact that the pdf state estimate may be used as the initial condition in prognostic modules after a particular fault mode is isolated, hence allowing swift transitions between FDI and prognostic routines.
The failure prognosis module, on the other hand, computes in real time the pdf of the remaining useful life (RUL) of the faulty subsystem using a particle-filtering-based algorithm. This algorithm sequentially updates the current state estimate for a nonlinear state-space model (with unknown time-varying parameters) and predicts the evolution in time of the fault indicator pdf. The outcome of the prognosis module provides information about the precision and accuracy of long-term predictions, RUL expectations, 95% confidence intervals, and other hypothesis tests for the failure condition under study. Finally, inner and outer correction loops (learning schemes) are used to periodically improve the parameters that characterize the performance of the FDI and/or prognosis algorithms. Illustrative theoretical examples and data from a seeded fault test for a UH-60 planetary carrier plate are used to validate all proposed approaches.
Contributions of this research include: (1) the establishment of a general methodology for real-time FDI and failure prognosis in nonlinear processes with unknown model parameters, (2) the definition of appropriate procedures to generate dependable statistics about fault conditions, and (3) a description of specific ways to utilize information from real-time measurements to improve the precision and accuracy of the predictions for the state probability density function (pdf).
507. Simulation research on capital adequacy for banks: a study on market risk. Chai, Hui-Wen, 25 August 2003.
No abstract available.
508. Monte Carlo modeling of an x-ray fluorescence detection system by the MCNP code. Liu, Fang, 17 March 2009.
An x-ray fluorescence detection system has been designed by our research group for quantifying the amount of gold nanoparticles present within phantoms and animals during gold nanoparticle-aided cancer detection and therapy procedures. The primary components of the system are a microfocus x-ray source, a Pb beam collimator, and a CdTe photodiode detector. In order to optimize and facilitate future experimental tasks, a Monte Carlo model of the detection system has been created using the MCNP5 code. Specifically, the model includes an x-ray source, a Pb collimator, a CdTe detector, and an acrylic plastic phantom with four cylindrical columns into which various materials, such as gold nanoparticles and aluminum, can be inserted during the experiments. In this model, 110 kVp x-rays emitted into a 60° cone from the focal spot of the x-ray source were collimated to a circular beam with a diameter of 5 mm. The collimated beam was then delivered to the plastic phantom with and without a gold nanoparticle-containing column. The fluence of scattered and gold fluorescence x-rays from the phantom was scored within the detector's sensitive volume, resulting in various photon spectra that were compared with spectra acquired experimentally under the same geometry. The results show that the current Monte Carlo model can produce results comparable to those from actual experiments, and therefore it can serve as a useful tool to optimize and troubleshoot experimental tasks necessary for the development of gold nanoparticle-aided cancer detection and therapy procedures.
509. Entropy-based diagnostics of criticality Monte Carlo simulation and higher eigenmode acceleration methodology. Shi, Bo, 10 June 2010.
Because of its accuracy and ease of implementation, Monte Carlo methodology is widely used in the analysis of nuclear systems. The resulting estimate of the multiplication factor (keff) or flux distribution is statistical by nature. In criticality simulations of a nuclear system, which are based on the power iteration method, the initial guessed source distribution is generally far from the converged fundamental one. It is therefore necessary to ensure that convergence is achieved before data are accumulated. Discarding a larger number of initial histories reduces the risk of contaminating the results with non-converged data but increases the computational expense. This issue is amplified for large, loosely coupled nuclear systems with low convergence rates. Since keff is a generation-based global value, frequently no explicit convergence criterion is applied to keff directly. As an alternative, the flux-based entropy check available in MCNP5 works well in many cases. However, when applied to a difficult fuel storage pool benchmark problem, it could not always detect the non-convergence of the flux distribution. Preliminary evaluation indicates that this is due to collapsing local information into a single number. This thesis addresses the problem with two new developments. First, it aims to find a more reliable way to assess convergence by analyzing local flux changes. Second, it introduces an approach to simultaneously compute both the first and second eigenmodes. By computing these eigenmodes, this approach can also increase the convergence rate. Improvement in these two areas could have a significant impact on the practicality of Monte Carlo criticality simulations.
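The entropy check in question bins the fission source sites spatially and tracks the Shannon entropy of the binned distribution across generations; convergence is suggested when the entropy stabilizes. A minimal sketch, with invented generation data:

```python
import numpy as np

def source_entropy(counts):
    """Shannon entropy of the binned fission source:
    H = -sum_i p_i * log2(p_i), with p_i the fraction of source
    sites in spatial bin i. H stabilizes as the source converges."""
    p = counts / counts.sum()
    p = p[p > 0]                       # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

# Toy check: entropy per generation for a source spreading toward uniform.
rng = np.random.default_rng(0)
n_bins, n_sites = 64, 10_000
for gen in range(5):
    width = 0.05 + 0.2 * gen           # early generations concentrated
    sites = np.clip(rng.normal(0.5, width, n_sites), 0.0, 1.0)
    counts, _ = np.histogram(sites, bins=n_bins, range=(0.0, 1.0))
    print(gen, round(source_entropy(counts), 3))
```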
510. The structure-property relation in nanocrystalline materials: a computational study on nanocrystalline copper by Monte Carlo and molecular dynamics simulations. Xu, Tao, 10 November 2009.
Nanocrystalline materials have been under extensive study in the past two decades. The reduction in grain size induces many anomalous behaviors in the properties of nanocrystalline materials, which have been investigated systematically and quantitatively. As one of the most fundamental relations in materials science, the structure-property relation should still apply to materials with nano-scale grain sizes. The characterization of grain boundaries (GBs) and related entities remains a major obstacle to understanding the structure-property relation in nanocrystalline materials. It is experimentally challenging to determine the topological properties of polycrystalline materials because of the complex and disordered grain boundary network present in nanocrystalline materials. Constantly improving computing power enables us to study the structure-property relation in nanocrystalline materials via Monte Carlo and molecular dynamics simulations.
In this study, we first propose a geometrical construction method based on inverse Monte Carlo simulation to generate digital microstructures with desired topological properties, such as grain size, interface area, and triple junction length, as well as their statistical distributions. The influences of different topological properties on grain shapes are studied. Two empirical geometrical laws are examined: the Lewis rule and the Aboav-Weaire law (see the forms quoted below). Secondly, defect-free nanocrystalline copper (nc-Cu) samples are generated by filling atoms into the Voronoi structure and then relaxed by molecular dynamics simulations. Atoms in the relaxed nc-Cu samples are then characterized as grain atoms, GB interface atoms, GB triple junction atoms, and vertex atoms using a newly proposed method. Atoms in each GB entity can also be identified. Next, the topological properties of nc-Cu samples before and after relaxation are calculated and compared, indicating that there exists a physical lower limit on the number of atoms needed to form a stable grain boundary interface and triple junction in nanocrystalline materials. In addition, we are able to obtain the statistical averages of geometrical and thermal properties of atoms across each GB interface, the so-called GB profiles, and study the effects of grain size, misorientation, and temperature on the microstructures of nanocrystalline materials. Finally, nc-Cu samples with different topological properties are deformed under simple shear using MD simulation in an attempt to study the structure-property relation in nanocrystalline materials.
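For reference, commonly quoted forms of the two empirical laws follow; the constants A_0, n_0, and a are system-dependent fit parameters, and these are textbook statements rather than the thesis's own formulations.

```latex
% Lewis rule: the mean area of an n-sided grain grows linearly with n
\bar{A}_n = A_0 \,(n - n_0)

% Aboav-Weaire law: m(n) is the mean number of sides of the neighbors
% of an n-sided grain; \mu_2 is the second moment of the side-number
% distribution and a is an empirical constant of order one
m(n) = 6 - a + \frac{6a + \mu_2}{n}
```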