391

The conceptual design and evaluation of research reactors utilizing a Monte Carlo and diffusion based computational modeling tool

Govender, Nicolin 06 August 2012
M.Sc. / Due to the demand for medical isotopes, new Materials Testing Reactors (MTRs) are being considered and built globally. Countries have varying design requirements, resulting in a plethora of different designs. South Africa is also considering a new MTR for dedicated medical radioisotope production. A neutronic analysis of these various designs is used to evaluate the viability of each, since most safety and utilization parameters can be calculated from the neutron flux. The code systems used to perform such analyses are either stochastic or deterministic in nature. In performing such an analysis, tracking the depletion of isotopes is essential to ensure that the modeled macroscopic cross-sections are as close as possible to those of the actual reactor. Stochastic methods are currently too slow for depletion analysis, but are very accurate and flexible. Deterministic methods, on the other hand, are much faster, but are generally not as accurate or flexible owing to the approximations made in solving the Boltzmann transport equation. The aim of this work is therefore to synergistically use a deterministic (diffusion) code to obtain an equilibrium material distribution for a given design and a stochastic (Monte Carlo) code to evaluate the neutronics of the resulting core model - a hybrid approach to conceptual core design. A comparison between the hybrid approach and the diffusion code demonstrates the limitations and strengths of the diffusion-based calculational path for various core designs. To facilitate this process and implement it consistently, a computational tool termed COREGEN has been developed. It creates neutronics models of conceptual reactor cores for both the Monte Carlo and diffusion codes in order to implement the described hybrid approach. The system uses the Monte Carlo-based MCNP code system, developed at Los Alamos National Laboratory, as the stochastic solver, and the nodal-diffusion-based OSCAR-4 code system, developed at Necsa, as the deterministic solver. Given basic input for a core design, COREGEN generates detailed OSCAR-4 and MCNP input models. An equilibrium core obtained by running OSCAR-4 is then used in the MCNP model. COREGEN analyzes the most important core parameters with both codes and provides comparisons. In this work, various MTR designs are evaluated against the primary requirement of isotope production. A heavy-water-reflected core with 20 isotope production rigs was found to be the most promising candidate. Based on the comparison of parameters between Monte Carlo and diffusion for the various cores, we found that the diffusion-based OSCAR-4 system compares well with Monte Carlo in the neutronic analysis of cores with in-core irradiation positions (average error 4.5% in assembly power). However, for the heavy-water-reflected cores with ex-core rigs, the diffusion method differs significantly from the Monte Carlo solution in the rig positions (average error 17.0% in assembly power), and parameters obtained from OSCAR-4 must be used with caution in these ex-core regions. In in-core regions, the deterministic solution agreed with the stochastic one to within 7% (in assembly-averaged power) for all core designs.
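
As a rough illustration of the comparison metric quoted above (average percentage error in assembly power), the following minimal Python sketch compares two normalized assembly-power maps. The 3x3 layout and all numbers are hypothetical, not output from MCNP or OSCAR-4.

```python
import numpy as np

def avg_assembly_power_error(p_mc, p_diff):
    """Average relative error (%) of diffusion assembly powers
    against a Monte Carlo reference."""
    p_mc = np.asarray(p_mc, dtype=float)
    p_diff = np.asarray(p_diff, dtype=float)
    # Normalize each map so both describe the same total core power.
    p_mc /= p_mc.sum()
    p_diff /= p_diff.sum()
    return 100.0 * np.mean(np.abs(p_diff - p_mc) / p_mc)

# Hypothetical 3x3 assembly power maps (not data from the thesis).
mc = [[1.02, 1.10, 0.95], [1.08, 1.25, 1.04], [0.93, 1.07, 0.98]]
diff = [[1.00, 1.13, 0.97], [1.05, 1.31, 1.01], [0.95, 1.04, 0.99]]
print(f"average assembly power error: {avg_assembly_power_error(mc, diff):.1f}%")
```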
392

Numerical methods for the valuation of financial derivatives

Ntwiga, Davis Bundi January 2005
Magister Scientiae - MSc / Numerical methods form an important part of the pricing of financial derivatives, especially where no closed-form analytical formula exists. We begin with an introduction to the mathematical tools needed in the pricing of financial derivatives. We then discuss the assumption of log-normal returns on stock prices and the associated stochastic differential equations. These lay the foundation for the derivation of the Black-Scholes differential equation, from which various Black-Scholes formulas are obtained. The model is then modified to cater for dividend-paying stock and for the pricing of options on futures. The multi-period binomial model is very flexible, even for the valuation of options that have no closed-form analytical formula. We consider the pricing of vanilla options on both non-dividend- and dividend-paying stocks, and show that the model converges to the Black-Scholes value as the number of steps increases. We discuss finite difference methods quite extensively, with a focus on the implicit and Crank-Nicolson methods, and apply these numerical techniques to the pricing of vanilla options. Finally, we compare the convergence of the multi-period binomial model and the implicit and Crank-Nicolson methods to the analytical Black-Scholes price of the option. We conclude with the pricing of exotic options, with special emphasis on path-dependent options. The Monte Carlo simulation technique is applied here, as it is very versatile in cases where no closed-form analytical formula exists. The method is slow and time-consuming but very flexible, even for multi-dimensional problems. / South Africa
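
The convergence claim above is easy to demonstrate. The self-contained Python sketch below prices a European call with the Cox-Ross-Rubinstein multi-period binomial model and compares it with the closed-form Black-Scholes value as the number of steps grows; the parameter values are hypothetical, chosen only for illustration.

```python
import math

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def crr_binomial_call(S, K, T, r, sigma, n):
    """Cox-Ross-Rubinstein multi-period binomial price of the same call."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1.0 / u                            # down factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs, then backward induction through the tree.
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j]) for j in range(step)]
    return values[0]

# Hypothetical parameters; the binomial price approaches Black-Scholes as n grows.
S, K, T, r, sigma = 100.0, 105.0, 1.0, 0.05, 0.2
print(f"Black-Scholes: {black_scholes_call(S, K, T, r, sigma):.4f}")
for n in (10, 100, 1000):
    print(f"binomial n={n:4d}: {crr_binomial_call(S, K, T, r, sigma, n):.4f}")
```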
393

Determination of the photopeak detection efficiency of a HPGe detector, for volume sources, via Monte Carlo simulations

Damon, Raphael Wesley January 2005
Magister Scientiae - MSc / The Environmental Radioactivity Laboratory (ERL) at iThemba LABS undertakes experimental work using a high-purity germanium (HPGe) detector for laboratory measurements. This study used the Monte Carlo transport code MCNPX, a general-purpose Monte Carlo N-Particle code developed at Los Alamos National Laboratory in New Mexico that extends the capabilities of the MCNP code. The study considers how various parameters, namely (1) coincidence summing, (2) volume, (3) atomic number (Z) and (4) density, affect the absolute photopeak efficiency of the ERL's HPGe detector in a close geometry (Marinelli beaker) for soil, sand, KCl and liquid samples. The results from these simulations are presented here, together with an intercomparison exercise of two MC codes (MCNPX and a C++ program developed for this study) that determine the energy deposition of a point source in germanium spheres of radii 1 cm and 5 cm. A sensitivity analysis of the effect of the detector dimensions (dead layer and core of the detector crystal) on the photopeak detection efficiency for a liquid sample, and of the effect of moisture content on the photopeak detection efficiency for sand and soil samples, was also carried out. This study has shown evidence that the dead layer of the ERL HPGe detector may be larger than stated by the manufacturer, possibly due to warming up of the detector crystal. This would result in a decrease in the photopeak efficiency of up to 8% if the dead layer of the crystal were doubled from its original size of 0.05 cm. The study shows the need for coincidence summing correction factors for the gamma lines (911.1 keV and 968.1 keV) in the 232Th series when determining accurate activity concentrations in environmental samples. For the liquid source, the 121.8 keV, 244.7 keV, 444.1 keV and 1085.5 keV gamma lines of 152Eu, together with the 1173.2 keV and 1332.5 keV gamma lines of 60Co, are particularly prone to coincidence summing. In the investigation into the effects of density and volume on the photopeak efficiency for the KCl samples, the simulated results were found to be in good agreement with experimental data. For the range of sample densities dealt with by the ERL, the drop in photopeak efficiency is less than 5%. The uncertainty of the KCl sample activity measurement due to different filling volumes in a Marinelli beaker is estimated at about 0.6% per mm and is not expected to vary appreciably with photon energy. For the effect of filling height on the efficiency for the soil sample, a large discrepancy was found between the trends of the simulated and experimental curves. This discrepancy could result from the use of only one sand sample in this study; the homogeneity of the sample therefore has to be investigated. The effect of atomic number was found to be negligible for the soil and sand compositions at energies above 400 keV; however, if the composition of the heavy elements is not properly considered when simulating soil and sand samples, the effect of atomic number on the absolute photopeak efficiency in the low-energy (< 400 keV) region can make a 14% difference. / South Africa
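
The germanium-sphere intercomparison described above reduces, in its simplest form, to sampling photon free paths. The toy Python sketch below estimates the fraction of photons from a central point source that interact before escaping a sphere of radius R, and checks it against the analytic value 1 - exp(-mu*R). The attenuation coefficient is an assumed round number, not a value from the thesis, and real photopeak (full-energy) scoring is far more involved than this.

```python
import math, random

def interaction_fraction(mu, radius, n=100_000, seed=1):
    """Toy Monte Carlo: fraction of photons emitted at the centre of a
    germanium sphere that interact before escaping. For a central source
    only the sampled free path matters, so direction sampling is skipped."""
    random.seed(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(1.0 - random.random()) / mu < radius)
    return hits / n

mu = 0.3  # assumed total attenuation coefficient of Ge (1/cm); illustrative only
for r_cm in (1.0, 5.0):
    est = interaction_fraction(mu, r_cm)
    print(f"R = {r_cm} cm: MC {est:.4f} vs analytic {1 - math.exp(-mu * r_cm):.4f}")
```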
394

Efficient Monte Carlo methods for pricing of electricity derivatives

Nobaza, Linda January 2012
Magister Scientiae - MSc / We discuss efficient Monte Carlo methods for the pricing of electricity derivatives. Electricity derivatives are risk management tools used in deregulated electricity markets. In the past, research in electricity derivatives was dedicated to modelling the behaviour of electricity spot prices. Some researchers have used geometric Brownian motion and the Black-Scholes formula to offer a closed-form solution. Electricity spot prices, however, have unique characteristics such as mean reversion, non-storability and spikes that render the use of geometric Brownian motion inadequate: geometric Brownian motion assumes that changes in the underlying asset are continuous, and electricity spikes are far from continuous. Recently there has been greater consensus on the use of a Mean-Reverting Jump-Diffusion (MRJD) process to describe the evolution of electricity spot prices, and we adopt it in this thesis. Since there is no closed-form technique to price these derivatives when the underlying electricity spot price is assumed to follow an MRJD process, we use Monte Carlo methods to value electricity forward contracts. We present variance reduction techniques that improve the accuracy of the Monte Carlo method for pricing electricity derivatives.
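
As a sketch of the approach described above, the Python fragment below simulates a mean-reverting jump-diffusion on the log spot price with an Euler scheme and estimates a forward price E[S_T], using antithetic variates as one simple variance reduction technique. The discretisation (a Poisson count times a single normal draw approximates the compound-Poisson jump term over a small step) and all parameter values are illustrative assumptions, not the thesis's calibration.

```python
import numpy as np

rng = np.random.default_rng(42)

def mrjd_forward(s0, alpha, mu, sigma, lam, jump_mu, jump_sd, T, n_steps, n_paths):
    """Euler scheme for a mean-reverting jump-diffusion on the log spot price,
        dX = alpha*(mu - X) dt + sigma dW + J dN,  S = exp(X),
    with antithetic variates (the diffusion shock is flipped on the paired
    path). Returns a Monte Carlo estimate of the forward price F(0,T) = E[S_T]."""
    dt = T / n_steps
    x = np.full(n_paths, np.log(s0))
    x_anti = x.copy()
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        # Approximate compound-Poisson jumps over a small step.
        jumps = rng.poisson(lam * dt, n_paths) * rng.normal(jump_mu, jump_sd, n_paths)
        x += alpha * (mu - x) * dt + sigma * np.sqrt(dt) * z + jumps
        x_anti += alpha * (mu - x_anti) * dt - sigma * np.sqrt(dt) * z + jumps
    return 0.5 * (np.exp(x).mean() + np.exp(x_anti).mean())

# Hypothetical parameter values, chosen only to exercise the sketch.
print(mrjd_forward(s0=40.0, alpha=5.0, mu=np.log(40.0), sigma=1.0,
                   lam=8.0, jump_mu=0.3, jump_sd=0.2, T=0.25,
                   n_steps=250, n_paths=20_000))
```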
395

Performance analysis of P2MP hybrid FSO/RF network

Ansari, Yaseen Akbar 20 December 2017
Free space optics (FSO) technology is proving to be an exceptionally beneficial supplement to conventional fiber optic and radio frequency (RF) links. Both FSO and RF links are strongly affected by atmospheric conditions, and hybrid FSO/RF systems have emerged as a promising solution for high-data-rate wireless communication. FSO technology can be used effectively in multi-user scenarios to support Point-to-Multi-Point (P2MP) networks. In this work we present and analyse a P2MP hybrid FSO/RF network that uses a number of FSO links for data transmission from the central node to the remote nodes of the network. A common backup RF link is used by the central node to transmit data to any remote node whose FSO link fails. Each remote node is assigned a transmit buffer at the central node for the downlink transmission. We deploy a non-equal-priority protocol and a p-persistent strategy for nodes accessing the RF link, and consider a backup RF link with a lower frame transmission rate than the FSO links. Under different atmospheric conditions we study various performance metrics of the network: the throughput from the central node to each remote node, the average transmit buffer size, the frame queuing delay in the transmit buffers, the efficiency of the queuing systems and the frame loss probability. / Graduate
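
A minimal slotted-time caricature of the shared backup RF channel may help fix ideas: in each slot, a node whose FSO link is down attempts transmission with probability p (p-persistent access), and the slot succeeds only if exactly one node attempts. The Python sketch below assumes i.i.d. per-slot FSO outages and omits the thesis's non-equal-priority mechanism and buffer dynamics; all numbers are assumptions.

```python
import random

def p_persistent_rf(n_nodes, p_fso_down, p_tx, n_slots=100_000, seed=7):
    """Toy slotted model of the shared backup RF channel.
    Exactly one attempt in a slot = success; more than one = collision.
    Returns per-slot RF success and collision rates."""
    random.seed(seed)
    success = collision = 0
    for _ in range(n_slots):
        contenders = sum(1 for _ in range(n_nodes)
                         if random.random() < p_fso_down   # node's FSO link is down
                         and random.random() < p_tx)        # node chooses to transmit
        if contenders == 1:
            success += 1
        elif contenders > 1:
            collision += 1
    return success / n_slots, collision / n_slots

s, c = p_persistent_rf(n_nodes=8, p_fso_down=0.1, p_tx=0.5)
print(f"RF slot success rate: {s:.3f}, collision rate: {c:.3f}")
```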
396

Monte Carlo simulation and aspects of the magnetostatic design of the TRIUMF second arm spectrometer

Duncan, Fraser Andrew January 1988
The optical design of the TRIUMF Second Arm Spectrometer (SASP) has been completed and the engineering design started. The effects of the dipole shape and field clamps on the aperture fringe fields were studied. It was determined that a field clamp would be necessary to achieve the field specifications over the desired range of dipole excitations, and a specification of the dipole pole edges and field clamps for the SASP is made. A Monte Carlo simulator for the SASP was written. During the design, this was used to study the profiles of rays passing through the SASP; these profiles informed the positioning of the dipole vacuum boxes and the SASP detector arrays. The simulator is intended to assess experimental arrangements of the SASP. / Science, Faculty of / Physics and Astronomy, Department of / Graduate
397

Monte Carlo study of fatigue crack growth under random loading

Harris, Richard Francis. January 1975
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1975 / Includes bibliographical references. / by Richard F. Harris.
398

Optimization of construction projects budget minimizing risks using the Monte Carlo method

Garcia, Sergio, Pisfil, Jose Michael, Rodriguez, Sandra, Luna, Roger 30 September 2020
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / Currently, it is common for the risks in construction projects to generate significant budgetary deviations because they are identified and quantified insufficiently or not at all. With a view to improving the competitiveness of construction companies in developing and complying with their budgets, it is essential to have an accurate methodology for estimating the contingency associated with risks from an early stage. This helps ensure the contingency amount is not exceeded, resulting in better reliability and adjustment of the budget assigned to the project, and therefore guaranteeing the expected profitability. This objective can be achieved using the Monte Carlo method: through probabilistic simulations, it is possible to establish precisely the value of the contingency associated with the risks of the project under study. It is recommended to carry out these evaluations and analyses before the project starts. This research therefore focuses on establishing a sequential methodology that serves as an application tool for any type of construction project, ensuring the optimization of the budget by minimizing the risks associated with the project.
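
A common way to realise this with the Monte Carlo method is to sample each cost item from a three-point (triangular) distribution and read the contingency off a percentile of the simulated total-cost distribution. The Python sketch below does that at an assumed 80% confidence level; the cost items and distribution choice are hypothetical, and the paper's own sequential methodology is not reproduced here.

```python
import random

def contingency(items, confidence=0.80, n_sims=50_000, seed=3):
    """Monte Carlo contingency estimate: sample each cost item from a
    triangular (optimistic, most likely, pessimistic) distribution, build
    the total-cost distribution, and set the contingency as the chosen
    percentile minus the base (most-likely) budget."""
    random.seed(seed)
    base = sum(ml for _, ml, _ in items)
    totals = sorted(sum(random.triangular(lo, hi, ml) for lo, ml, hi in items)
                    for _ in range(n_sims))
    p = totals[int(confidence * n_sims)]
    return base, p, p - base

# Hypothetical cost items (optimistic, most likely, pessimistic), in $k.
items = [(90, 100, 130), (45, 50, 70), (180, 200, 260), (28, 30, 45)]
base, p80, cont = contingency(items)
print(f"base budget {base}, P80 cost {p80:.0f}, contingency {cont:.0f}")
```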
399

Modernizing Markov Chains Monte Carlo for Scientific and Bayesian Modeling

Margossian, Charles Christopher January 2022
The advent of probabilistic programming languages has galvanized scientists to write increasingly diverse models to analyze data. Probabilistic models use a joint distribution over observed and latent variables to describe at once elaborate scientific theories, non-trivial measurement procedures, information from previous studies, and more. To effectively deploy these models in a data analysis, we need inference procedures which are reliable, flexible, and fast. In a Bayesian analysis, inference boils down to estimating the expectation values and quantiles of the unnormalized posterior distribution. This estimation problem also arises in the study of non-Bayesian probabilistic models, a prominent example being the Ising model of Statistical Physics. Markov chains Monte Carlo (MCMC) algorithms provide a general-purpose sampling method which can be used to construct sample estimators of moments and quantiles. Despite MCMC's compelling theory and empirical success, many models continue to frustrate MCMC, as well as other inference strategies, effectively limiting our ability to use these models in a data analysis. These challenges motivate new developments in MCMC. The term “modernize” in the title refers to the deployment of methods which have revolutionized Computational Statistics and Machine Learning in the past decade, including: (i) hardware accelerators to support massive parallelization, (ii) approximate inference based on tractable densities, (iii) high-performance automatic differentiation and (iv) continuous relaxations of discrete systems. The growing availability of hardware accelerators such as GPUs has in recent years motivated a general MCMC strategy whereby we run many chains in parallel with a short sampling phase, rather than a few chains with a long sampling phase. Unfortunately, existing convergence diagnostics are not designed for the “many short chains” regime. This is notably the case for the popular R-hat statistic, which claims convergence only if the effective sample size per chain is large. We present the nested R-hat, denoted nR-hat, a generalization of R-hat which does not conflate short chains and poor mixing, and offers a useful diagnostic provided we run enough chains and meet certain initialization conditions. Combined with nR-hat, the short-chain regime presents us with the opportunity to identify optimal lengths for the warmup and sampling phases, as well as the optimal number of chains; tuning parameters of MCMC that are otherwise chosen using heuristics or trial-and-error. We next focus on semi-specialized algorithms for latent Gaussian models, arguably the most widely used class of hierarchical models. It is well understood that MCMC often struggles with the geometry of the posterior distribution generated by these models. Using a Laplace approximation, we marginalize out the latent Gaussian variables and then integrate the remaining parameters with Hamiltonian Monte Carlo (HMC), a gradient-based MCMC algorithm. This approach combines MCMC and a distributional approximation, and offers a useful alternative to pure MCMC or pure approximation methods such as Variational Inference. We compare the three paradigms across a range of general linear models, which admit sophisticated priors, e.g. a Gaussian process and a horseshoe prior. To implement our scheme efficiently, we derive a novel automatic differentiation method called the adjoint-differentiated Laplace approximation.
This differentiation algorithm propagates the minimal information needed to construct the gradient of the approximate marginal likelihood, and yields a scalable differentiation method that is orders of magnitude faster than state-of-the-art differentiation for high-dimensional hyperparameters. We next discuss the application of our algorithm to models with an unconventional likelihood, going beyond the classical setting of general linear models. This necessitates a non-trivial generalization of the adjoint-differentiated Laplace approximation, which we implement using higher-order adjoint methods. The resulting scheme works out to be both more general and more efficient. We apply it to an unconventional latent Gaussian model, identifying promising features and highlighting persistent challenges. The final chapter of this dissertation focuses on a specific but rich problem: the Ising model of Statistical Physics, and its generalization as the Potts and Spin Glass models. These models are challenging because they are discrete, precluding the immediate use of gradient-based algorithms, and because they exhibit multiple modes, notably at cold temperatures. We propose a new class of MCMC algorithms to draw samples from Potts models by augmenting the target space with a carefully constructed auxiliary Gaussian variable. In contrast to existing methods of a similar flavor, our algorithm can take advantage of the low-rank structure of the coupling matrix and scales linearly with the number of states in a Potts model. The method is applied to a broad range of coupling and temperature regimes and compared to several sampling methods, allowing us to paint a nuanced algorithmic landscape.
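
To illustrate the "many short chains" regime discussed above, the Python sketch below runs many short random-walk Metropolis chains on a standard normal target and evaluates the classic R-hat diagnostic across them. This is the standard between/within-chain R-hat, not the dissertation's nested nR-hat, whose construction is more involved; chain count, chain length and step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rhat(chains):
    """Classic R-hat over many chains: compares between-chain and
    within-chain variance; values near 1 suggest the chains agree."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)          # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    return np.sqrt(((n - 1) / n * w + b / n) / w)

def run_chain(n_iter, step=0.5):
    """Random-walk Metropolis targeting N(0, 1)."""
    x, out = rng.normal(), np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.normal()
        # log acceptance ratio for the standard normal target
        if np.log(rng.random()) < 0.5 * (x**2 - prop**2):
            x = prop
        out[i] = x
    return out

chains = np.stack([run_chain(200) for _ in range(64)])  # 64 chains, 200 draws each
print(f"R-hat across 64 short chains: {rhat(chains):.3f}")
```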
400

Monte-Carlo simulation of wave propagation in polycrystalline solids

Biswas, B.K. (Bikash Kumar). January 1983
No description available.
