322 |
Incorporating substation and switching station related outages in composite system reliability evaluation
Nighot, Rajesh U, 06 October 2003 (has links)
This thesis presents the development of a new method for incorporating station-related outages in composite or bulk system reliability analysis. Station-related failures can cause multiple component outages that can propagate to other parts of the network, resulting in severe damage. In order to minimize the effects of station-related outages on composite system performance, it is necessary for the designer to assess their effects. This task can be achieved by including station-related outages in the composite system evaluation.
Monte Carlo simulation is used in this research to assess composite system reliability. The new method described in this thesis is used to include station-related outages in the reliability evaluation of two composite test systems. The method is relatively simple and can be used to consider multiple component outages due to station-related failures in composite system reliability evaluation. In this approach, the effects of station-related outages are combined with the connected terminal failure parameters.
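A minimal sketch of this combination step, in Python and with invented outage rates and durations (the thesis's actual derivation is not reproduced here), might look as follows:

```python
# Minimal sketch, not the thesis's derivation: station-originated outage
# events that take a line's terminal out of service are folded into the
# line's own two-state outage parameters.  All rates (failures/yr) and
# durations (hours) below are invented.

def equivalent_outage(line_rate, line_dur, station_events):
    """Combine a line's outage data with station-originated events
    affecting the same terminal; returns (rate, duration, annual outage)."""
    rate = line_rate + sum(lam for lam, _ in station_events)
    annual = line_rate * line_dur + sum(lam * d for lam, d in station_events)
    return rate, annual / rate, annual

# line: 0.5 f/yr, 10 h repair; station events: breaker fault, bus fault
rate, dur, annual = equivalent_outage(0.5, 10.0, [(0.02, 2.0), (0.01, 8.0)])
print(f"equivalent rate {rate:.3f} f/yr, mean duration {dur:.2f} h, "
      f"outage time {annual:.2f} h/yr")
```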
Reliability studies conducted on the two composite test systems demonstrate that station failures significantly affect system performance. System reliability can be improved by selecting appropriate station configurations, as illustrated by application to the two composite test systems.
|
323 |
A short-time dynamics study of Heisenberg non-collinear magnets
Zelli, Mirsaeed, 14 September 2007 (has links)
A generalized model describing a family of antiferromagnetic Heisenberg magnets on a three-dimensional stacked triangular lattice is introduced. The model contains a constraint parameter that changes the details of the interactions but not the symmetry of the model. We investigate whether a first- or second-order phase transition occurs in these systems using a short-time dynamics method. This method does not suffer from the critical slowing down that occurs in the usual equilibrium Monte Carlo simulations. The effective critical exponents are determined as a function of the constraint parameter. Our results provide strong evidence that the phase transition is first order. In addition, for a particular value of the constraint parameter, the model corresponds to an antiferromagnet on a stacked Kagome lattice. In this case, our results are not inconsistent with the existence of a finite-temperature first-order phase transition. / October 2007
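For readers unfamiliar with the technique, the exponent-extraction step of a short-time dynamics analysis can be sketched as follows; the data and exponent below are synthetic stand-ins, not results from the thesis:

```python
import numpy as np

# Sketch of the exponent-extraction step only: in short-time dynamics an
# observable Q(t) behaves as a power law in the early Monte Carlo time, and
# the effective exponent is the local slope of log Q versus log t.  The data
# here are synthetic (power law plus noise) and the exponent is hypothetical.

rng = np.random.default_rng(0)
t = np.arange(10, 200)
theta = 0.19
Q = t**theta * (1.0 + 0.01 * rng.standard_normal(t.size))

logt, logQ = np.log(t), np.log(Q)
slope, _ = np.polyfit(logt, logQ, 1)
print(f"global effective exponent: {slope:.3f}")

# windowed local slopes: systematic drift with t can signal corrections to
# scaling, or help discriminate first- from second-order behaviour
for i in range(0, t.size - 50, 50):
    s, _ = np.polyfit(logt[i:i + 50], logQ[i:i + 50], 1)
    print(f"t in [{t[i]}, {t[i + 49]}]: local exponent {s:.3f}")
```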
|
324 |
Reactivity Analysis of Nuclear Fuel Storages : The Effect of 238U Nuclear Data Uncertainties
Östangård, Louise, January 2013 (has links)
The aim of this master thesis work was to investigate how the uncertainties in nuclear data for 238U affect the uncertainty of keff in criticality simulations for nuclear fuel storages. This was performed using the Total Monte Carlo (TMC) method, which allows propagation of nuclear data uncertainties from basic nuclear physics to reactor parameters such as keff. The TMC approach relies on simulations with hundreds of calculations of keff, each with a different random nuclear data library for 238U. The result is a probability distribution for keff whose standard deviation represents the spread in keff due to statistical and nuclear data uncertainties. Simulations were performed with MCNP for a nuclear fuel storage representing two different cases: Normal Case and Worst Case. The Normal Case represents a scenario under normal conditions and the Worst Case represents accident conditions in which optimal moderation occurs. In order to validate the MCNP calculations and the libraries produced with TMC, criticality benchmarks were used. The calculated mean value of keff for the criticality benchmark simulations with random libraries produced with TMC showed good agreement with the experimental keff for the benchmarks, indicating that the libraries used in this work were of good quality. The TMC method's drawback is the long calculation time; therefore a new method, fast TMC, was tested. Both fast TMC and original TMC were applied to the Normal Case. The two methods obtained similar results, indicating that fast TMC is a good option for reducing the computational time. Fast TMC was found to be significantly faster than original TMC in this work. The 238U nuclear data uncertainty was found to be 209 pcm for the Normal Case, both for original and fast TMC. For the Worst Case simulation, the 238U nuclear data uncertainty was found to be 672 pcm with fast TMC. These results show the importance of handling uncertainties in nuclear data in order to improve knowledge of the uncertainties in criticality calculations of keff. / Nuclear data libraries contain all the information needed to simulate, for example, a reactor or a storage pool for spent nuclear fuel. These libraries are central to the calculations of reactor parameters required for safe nuclear power production. An important reactor parameter is the multiplication factor (keff), which expresses the reactivity of a system. A critical system (keff = 1) means that a chain reaction of fissions can be sustained; this state is required in a reactor to enable electricity production. In a storage pool where spent nuclear fuel is kept, it is important that the system is subcritical (keff < 1). Various reactor codes are used to perform these calculations of keff, whose results are used when designing safe storage facilities for nuclear fuel. Today's nuclear data libraries contain uncertainties, which in turn stem from uncertainties in the model parameters used to produce the libraries. Often these nuclear data uncertainties are unknown, which gives rise to unknown uncertainties in the calculated keff. Vattenfall Nuclear Fuel AB is currently investigating the possibility of increasing the enrichment of the fuel in order to reduce the number of fuel assemblies needed for a given amount of energy. Each fuel assembly then becomes more reactive, which reduces the margin to criticality in the storage pool.
The nuclear data uncertainties are therefore important when calculating the maximum allowed enrichment of the fuel. To investigate how large these uncertainties are, a relatively new method, TMC (Total Monte Carlo), was used; it propagates uncertainties in nuclear data to various reactor parameters (e.g. keff) in a single simulation process. The TMC method was used to investigate how the nuclear data uncertainties for 238U affect calculations of keff for a storage pool with spent nuclear fuel. Calculations were performed for a storage pool under normal operating conditions and for an accident scenario in which optimal moderation occurs. The results showed that the standard deviation due to nuclear data for 238U was 209 pcm under normal operating conditions and 672 pcm for the case with optimal moderation. The original TMC method is time-consuming, and a faster variant of TMC has recently been developed. This new method was also applied to the storage pool under normal operating conditions and the results were compared. Both methods computed the same nuclear data uncertainty for 238U, and using the fast TMC method reduced the computation time considerably compared with the original TMC method.
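A minimal sketch of the TMC variance bookkeeping, with synthetic keff values standing in for the MCNP runs (the 209 pcm figure is taken from the abstract; the statistical spread is invented):

```python
import numpy as np

# TMC bookkeeping sketch.  In the real workflow each sample is a full MCNP
# run with a different random 238U library; here keff values are synthetic
# stand-ins.  The observed spread combines the nuclear data spread with the
# per-run statistical uncertainty, so the nuclear data component follows by
# subtracting variances: sigma_ND^2 = sigma_obs^2 - sigma_stat^2.

rng = np.random.default_rng(1)
n_runs = 300
sigma_nd_true = 209e-5        # 209 pcm, the Normal Case figure above
sigma_stat = 50e-5            # hypothetical per-run MC standard deviation

keff = (0.95 + rng.normal(0.0, sigma_nd_true, n_runs)
             + rng.normal(0.0, sigma_stat, n_runs))

sigma_obs = keff.std(ddof=1)
sigma_nd = np.sqrt(max(sigma_obs**2 - sigma_stat**2, 0.0))
print(f"observed spread:   {sigma_obs * 1e5:6.1f} pcm")
print(f"nuclear data part: {sigma_nd * 1e5:6.1f} pcm")
```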
|
325 |
Development and applications of a computer code for Monte Carlo simulation of electron-photon showers
Sempau Roma, Josep, 29 February 1996 (has links)
This thesis presents the work carried out on the PENELOPE subroutine package. The code performs Monte Carlo simulation of photon and electron transport in matter with complex geometries. The aspects considered are: (a) improvement of the scattering algorithm for the primary radiation and of the algorithms that account for secondary particles; (b) simplification of the mixed scattering algorithm for electrons used previously; (c) incorporation of differential cross sections; (d) development of a package of geometry subroutines, pengeom, which supports combinatorial geometry with quadric surfaces; (e) presentation of a theoretical framework for applying variance-reduction techniques; and (f) comparison with experimental results and presentation of four real applications that use pengeom and variance reduction. In its current state, PENELOPE enables non-specialist external users to tackle problems in radiation engineering, medical physics, and related fields.
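As a flavor of quadric-based combinatorial geometry of the kind pengeom provides, a toy sketch follows; it conveys the idea only and is not pengeom's actual data format or API:

```python
import numpy as np

# Toy illustration of quadric-based combinatorial geometry: a body is the
# intersection of half-spaces F(x) <= 0, each F a general quadric
# x.A.x + b.x + c.  This capped cylinder is a hypothetical example, not
# pengeom's interface.

def quadric(A, b, c):
    return lambda p: p @ A @ p + b @ p + c

surfaces = [
    quadric(np.diag([1.0, 1.0, 0.0]), np.zeros(3), -1.0),         # x^2+y^2 <= 1
    quadric(np.zeros((3, 3)), np.array([0.0, 0.0, 1.0]), -1.0),   # z <= 1
    quadric(np.zeros((3, 3)), np.array([0.0, 0.0, -1.0]), -1.0),  # z >= -1
]

def inside(p):
    return all(f(p) <= 0.0 for f in surfaces)

print(inside(np.array([0.0, 0.0, 0.0])))   # True: centre of the cylinder
print(inside(np.array([1.5, 0.0, 0.0])))   # False: outside the radius
```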
|
326 |
Efficient Modelling and Performance Analysis of Wideband Communication Receivers
Eriksson, Andreas, January 2011 (has links)
This thesis deals with Symbol Error Rate (SER) simulation of wireless communications and its application to throughput analysis of Ultra-Wideband (UWB) systems. The SERs are simulated in C++ using the Monte Carlo method; once some values have been calculated, the rest are estimated using a novel extrapolation method. These SER values are very accurate and in this thesis reach values as low as 1.0e-14. Reaching such low values would otherwise be impossible with the traditional Monte Carlo method because of the very large computation time required; the novel extrapolation method, however, can produce an SER curve in less than 30 seconds. The noise is assumed to belong to the generalized Gaussian distribution family; among these, noise from the normal distribution (Gaussian noise) gives the best result. Gaussian noise is also the most commonly used in digital communication simulations. Although the program is used here for throughput analysis of UWB, it could easily be adapted to other signals. In this thesis, throughput analysis means a plot of symbol rate versus distance. For any given symbol set, the user can, with a desired minimum SER, generate an extrapolated SER curve and see what symbol rate the system can achieve while obeying the power constraints on signals imposed by international regulations. The program is validated against published theoretical results for the QAM and PSK cases, and can easily be extended to UWB systems.
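A minimal sketch of the general simulate-then-extrapolate idea, using BPSK in Gaussian noise and a naive polynomial extrapolation rather than the thesis's own method:

```python
import numpy as np

# Simulate-then-extrapolate sketch (not the thesis's extrapolation method):
# Monte Carlo SER for BPSK in Gaussian noise at cheap SNRs, then a naive
# quadratic fit of log10(SER) in dB extrapolated to expensive SNRs.

rng = np.random.default_rng(2)

def ser_bpsk(snr_db, n_sym=200_000):
    snr = 10.0**(snr_db / 10.0)
    bits = rng.integers(0, 2, n_sym)
    x = 2.0 * bits - 1.0                         # map {0,1} -> {-1,+1}
    y = x + rng.standard_normal(n_sym) / np.sqrt(2.0 * snr)
    return np.mean(((y > 0).astype(int)) != bits)

snrs = np.arange(0, 9)                           # simulate the cheap region
sers = np.array([ser_bpsk(s) for s in snrs])

coef = np.polyfit(snrs, np.log10(sers), 2)
for s in (10, 12, 14):
    print(f"SNR {s:2d} dB: extrapolated log10(SER) ~ {np.polyval(coef, s):.1f}")
```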
|
327 |
Pricing American Options using Simulation
Larsson, Karl, January 2007 (has links)
American options are financial contracts that allow exercise at any time until expiration. While the pricing of standard American option contracts has been well researched, with a few exceptions no analytical solutions exist. Valuation of more involved American option contracts, which include multiple underlying assets or path-dependent payoffs, is still to a high degree an uncharted area. Most numerical methods work badly for such options because their time complexity scales exponentially with the number of dimensions. In this Master's thesis we study valuation methods based on Monte Carlo simulations. Monte Carlo methods do not suffer from exponential time complexity, but have been known to be difficult to use for American option pricing due to the forward nature of simulations and the backward nature of American option valuation. The methods studied are: parametrization of the exercise rule, random tree, stochastic mesh, and a regression-based method with a dual approach. These methods are evaluated and compared for the standard American put option and for the American maximum call option. Where applicable, the values are compared with those from deterministic reference methods. The strengths and weaknesses of each method are discussed. The regression-based method essentially reduces the problem to one of selecting suitable basis functions. This choice is empirically evaluated for the following American option contracts: standard put, maximum call, basket call, Asian call, and Asian call on a basket. The set of basis functions considered includes polynomials in the underlying assets, the payoff, the price of the corresponding European contract, and certain analytic approximations of the latter. Results from the empirical studies show that the regression-based method is the best choice when pricing exotic American options. Furthermore, using available analytical approximations of the corresponding European option values as basis functions seems to improve the performance of the method in most cases.
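A minimal sketch of the regression-based (least-squares Monte Carlo) approach for the standard American put, with illustrative parameters and the dual step omitted:

```python
import numpy as np

# Minimal least-squares Monte Carlo (regression-based) sketch for a standard
# American put, basis = quadratic polynomial in the asset price.  Parameters
# are illustrative; the dual (upper bound) step of the thesis is omitted.

rng = np.random.default_rng(3)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths, n_steps = 50_000, 50
dt = T / n_steps
disc = np.exp(-r * dt)

z = rng.standard_normal((n_paths, n_steps))      # GBM paths, column j = t_(j+1)
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))

cash = np.maximum(K - S[:, -1], 0.0)             # cashflow if held to expiry
for t in range(n_steps - 2, -1, -1):
    cash *= disc                                 # discount one step back
    itm = K - S[:, t] > 0.0                      # regress on in-the-money paths
    if itm.sum() > 10:
        c = np.polyfit(S[itm, t], cash[itm], 2)
        continuation = np.polyval(c, S[itm, t])
        exercise = K - S[itm, t]
        ex_now = exercise > continuation
        cash[np.where(itm)[0][ex_now]] = exercise[ex_now]

print(f"LSM American put estimate: {disc * cash.mean():.3f}")
```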
|
328 |
Monte Carlo modeling of the sensitivity of x-ray photoconductors
Yunus, Mohammad, 13 May 2005 (has links)
The sensitivity reduction, or ghosting, mechanism of x-ray photoconductors is studied using Monte Carlo simulation techniques. We have calculated the sensitivity reduction for different detector operating conditions (applied electric field, x-ray spectrum, and photoconductor thickness) and for different levels of carrier trapping. We have analyzed the effect of photoconductor biasing (positive or negative) on ghosting. The following effects are taken into account in modeling the ghosting phenomena: (i) recombination between trapped and oppositely charged drifting carriers, (ii) trap filling, (iii) nonuniform electric field, (iv) detrapping of trapped holes, and (v) x-ray-induced trap generation.
Our calculations show that ghosting in photoconductor-based x-ray image detectors is caused not only by recombination between trapped and oppositely charged drifting carriers but also by x-ray-induced trap generation. Moreover, not all trapped carriers take part in recombination; only a fraction are involved. The electric field also plays an important role in ghosting calculations via the electron-hole pair generation mechanism. Trap filling also has nontrivial effects on ghosting.
The simulation results show that the amount of ghosting strongly depends on the applied electric field: ghosting increases with decreasing applied electric field and vice versa. Ghosting is higher at high carrier trapping levels than at low trapping levels. Ghosting is also more pronounced in chest radiographic detectors than in mammographic detectors: in chest radiographic detectors, carrier trapping is high because of the greater thickness, so recombination and electric field effects are prominent. The dependence of ghosting on biasing is governed by the carrier mobility-lifetime products. For positively biased detectors, ghosting is smaller if the mobility-lifetime product of holes is higher than that of electrons, and vice versa for negatively biased detectors. It also appears that using recombination alone to calculate ghosting, as some of the literature assumes it to be the primary source, leads to significant error in the calculation of ghosting.
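As background to the field dependence reported above, a sketch using the standard Hecht collection-efficiency relation (not the thesis's full Monte Carlo model; mobility-lifetime products and fields are hypothetical):

```python
import numpy as np

# The ghosting model above is a full Monte Carlo transport simulation and is
# not reproduced here.  The textbook Hecht relation below only conveys why
# charge collection, and hence sensitivity, falls as the field decreases.
# Mobility-lifetime products and fields are hypothetical values.

def hecht_uniform(mu_tau, F, L):
    """Mean single-carrier collection efficiency for generation spread
    uniformly through a layer of thickness L under uniform field F."""
    s = mu_tau * F / L                      # schubweg / thickness
    return s * (1.0 - s * (1.0 - np.exp(-1.0 / s)))

L = 1000e-6                                 # 1 mm layer (chest radiography)
for F in (2e6, 5e6, 10e6):                  # applied field, V/m
    eta_h = hecht_uniform(1e-10, F, L)      # holes, hypothetical mu*tau
    eta_e = hecht_uniform(3e-11, F, L)      # electrons, hypothetical mu*tau
    print(f"F = {F:.0e} V/m: eta_h = {eta_h:.3f}, eta_e = {eta_e:.3f}, "
          f"total = {eta_h + eta_e:.3f}")
```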
|
329 |
Uncertainty Analysis of the NONROAD Emissions Model for the State of Georgia
Chi, Tien-Ru Rosa, 23 August 2004 (has links)
Understanding uncertainty in emissions inventories is critical for evaluating both air quality modeling results and the impacts of emission reduction strategies. This study focused on quantifying the uncertainty in nonroad emissions for the state of Georgia using the EPA NONROAD emissions model.
Nonroad engines contribute significantly to anthropogenic emissions inventories, with national estimates for various criteria pollutants ranging from 14% to 22%. The NONROAD model is designed to estimate emissions for any area in the United States based on population, activity, and emissions data. Information used in the model comes from a variety of sources collected over many years.
A sensitivity analysis of the model identified the input variables that have significant effects on emissions. Results showed that model-estimated emissions are significantly sensitive to increases in equipment population, activity, load factor, and emission factor. Increases in ambient temperature, fuel RVP, fuel sulfur content (except for SO2), and average useful life have smaller effects.
Emissions and activity data used in the NONROAD model were analyzed using statistical techniques to quantify uncertainty in the input parameters. Expert elicitation was also used to estimate uncertainties in emission factors, equipment population, activity, load factors, and the geographic allocation of emissions to the county level. A Monte Carlo approach using the derived parameter uncertainties and different input probability distributions was used to estimate the overall uncertainty of emissions from the NONROAD model for the state of Georgia. The uncertainties resulting from this analysis were significant, with 95% confidence intervals about the mean ranging from ?? to +61% for THC, -46% to +68% for NOx, -43% to +75% for CO, and ?? to +75% for PM.
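A schematic of the Monte Carlo propagation step for a single equipment category, with invented distributions in place of the study's elicited ones:

```python
import numpy as np

# Monte Carlo propagation sketch for one equipment category.  NONROAD's
# estimate is, schematically, population x annual hours x load factor x
# emission factor; the distributions and spreads below are invented
# placeholders, not the study's elicited values.

rng = np.random.default_rng(4)
n = 100_000

pop  = rng.lognormal(np.log(10_000), 0.25, n)        # equipment population
hrs  = rng.lognormal(np.log(400), 0.30, n)           # activity, hr/yr
load = np.clip(rng.normal(0.4, 0.08, n), 0.05, 1.0)  # load factor
ef   = rng.lognormal(np.log(5.0), 0.35, n)           # effective emission factor

em = pop * hrs * load * ef * 1e-6                    # illustrative tonnes/yr
lo, hi = np.percentile(em, [2.5, 97.5])
mean = em.mean()
print(f"mean {mean:.0f} t/yr, 95% CI {100 * (lo / mean - 1):+.0f}% to "
      f"{100 * (hi / mean - 1):+.0f}% about the mean")
```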
The sensitivity of ozone and CO in different regions of Georgia to NONROAD emissions was also estimated. The analysis suggests that the uncertainties in ozone and CO simulations due to NONROAD emissions uncertainties, averaged over the regions of interest, are not large, with maximum coefficients of variation of 1% and 10%, respectively.
|
330 |
Pricing Path-Dependent Derivative Securities Using Monte Carlo Simulation and Intra-Market Statistical Trading Model
Lee, Sungjoo, 09 December 2004 (has links)
This thesis is composed of two parts. The first part deals with a technique for pricing American-style contingent claims. The second part details a statistical arbitrage model using statistical process control approaches.
We propose a novel simulation approach for pricing American-style contingent claims. We develop an adaptive policy search algorithm for obtaining the optimal policy in exercising an American-style option. The option price is first obtained by estimating the optimal option-exercising policy and then evaluating the option with the estimated policy through simulation. Both high-biased and low-biased estimators of the option price are obtained. We show that the proposed algorithm leads to convergence to the true optimal policy with probability one. This policy search algorithm requires little knowledge about the structure of the optimal policy and can be naturally implemented using parallel computing methods. As illustrative examples, computational results on pricing regular American options and American-Asian options are reported, and they indicate that our algorithm is faster than certain alternative American option pricing algorithms reported in the literature.
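A toy version of the parametrized-policy idea, with a constant exercise boundary (the thesis's adaptive search algorithm and its convergence guarantees are not reproduced):

```python
import numpy as np

# Toy parametrized-exercise-policy pricer for an American put: exercise the
# first time S falls below a constant boundary b.  Optimizing b on one path
# set and re-evaluating on fresh paths separates the in-sample (optimistic)
# estimate from the out-of-sample one; since a constant boundary is
# suboptimal, the latter is a lower-biased estimate.  All parameters are
# illustrative.

rng = np.random.default_rng(5)
S0, K, r, sigma, T, n_steps = 100.0, 100.0, 0.05, 0.2, 1.0, 50
dt = T / n_steps

def paths(n):
    z = rng.standard_normal((n, n_steps))
    return S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                 + sigma * np.sqrt(dt) * z, axis=1))

def policy_value(S, b):
    """Discounted payoff of the rule 'exercise when S < b' (else at expiry)."""
    n, m = S.shape
    below = S < b
    first = np.where(below.any(axis=1), below.argmax(axis=1), m - 1)
    t_ex = (first + 1) * dt
    pay = np.maximum(K - S[np.arange(n), first], 0.0)
    return np.mean(np.exp(-r * t_ex) * pay)

train, test = paths(20_000), paths(20_000)
grid = np.arange(70.0, 100.0, 0.5)
b_star = grid[np.argmax([policy_value(train, b) for b in grid])]
print(f"boundary {b_star:.1f}")
print(f"in-sample value  {policy_value(train, b_star):.3f}")
print(f"out-of-sample    {policy_value(test, b_star):.3f}")
```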
Secondly, we investigate arbitrage opportunities arising from continuous monitoring of the price difference of highly correlated assets. By taking the difference of two asset prices, we can separate the common macroeconomic factors that influence price movements from an idiosyncratic component that can be monitored closely on its own. Since price movements broadly track macroeconomic conditions such as interest rates and economic cycles, departures from normal behavior in the price difference are easy to detect. We apply a statistical process control approach for monitoring time series with serially correlated data, and use various variance estimators to establish trading strategy thresholds.
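A minimal sketch of the monitoring idea, with a synthetic spread and an EWMA control chart in place of the thesis's variance estimators:

```python
import numpy as np

# Monitoring sketch: the difference of two highly correlated assets removes
# the shared macro factor, leaving a serially correlated spread watched with
# an EWMA control chart.  The spread is a synthetic AR(1); the thesis's
# variance estimators for serially correlated data are not reproduced.

rng = np.random.default_rng(6)
n = 500
spread = np.zeros(n)
for t in range(1, n):
    spread[t] = 0.9 * spread[t - 1] + rng.normal(0.0, 0.2)

lam, width = 0.1, 3.0                    # EWMA weight, control-limit width
mu = spread[:100].mean()                 # calibrate on a burn-in window
sig = spread[:100].std(ddof=1)
limit = width * sig * np.sqrt(lam / (2.0 - lam))

ewma, signals = mu, []
for t in range(100, n):
    ewma = lam * spread[t] + (1.0 - lam) * ewma
    if abs(ewma - mu) > limit:           # out of control: candidate trade
        signals.append((t, "short spread" if ewma > mu else "long spread"))

print(f"{len(signals)} signals; first few: {signals[:5]}")
```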
|