261

Characterization of the 60Co therapy unit Siemens Gammatron 1 using BEAMnrc Monte Carlo simulations

De Luelmo, Sandro Carlos, January 2006
The aim of this work is to characterize the beam of the 60Co therapy unit "Siemens Gammatron 1", used at the Swedish Radiation Protection Authority (SSI) to calibrate therapy-level ionization chambers. SSI wants to know the photon spectra at the laboratory's reference points, and to have a verified virtual model of the 60Co unit so that current and future experiments can be compared with Monte Carlo simulations. EGSnrc is a code for performing Monte Carlo simulations; BEAMnrc, an additional package that simplifies the construction of geometries in the EGS code, was used to define the whole Gammatron at SSI virtually. In this work, virtual models of two experimental setups were built: the Gammatron irradiating in air, to simulate the air-kerma calibration geometry, and the Gammatron irradiating a water phantom similar to that used for absorbed-dose-to-water calibrations. The simulations are divided into two substeps, one for the fixed part of the Gammatron and one for the variable part, in order to study different quantities and to shorten simulation times. The virtual geometries were verified by comparing Monte Carlo results with measurements. Once verified, they were used to generate the Gammatron photon spectra in air and in water for different field sizes and at different depths. The contributions to the photon spectra from different regions of the Gammatron were also collected; this is easy to achieve with Monte Carlo calculations but difficult to obtain with ordinary detectors in real measurements. The results of this work give SSI knowledge of the photon spectra at its reference points for calibrations in air and in a water phantom. The first step of the virtual model (the fixed part of the Gammatron) can be reused for future experimental setups at SSI.
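The sampling step at the heart of photon-transport codes like the one described above can be sketched in a few lines. This is a toy illustration, not BEAMnrc: it only samples the depth of first interaction of 60Co photons in water from the exponential attenuation law, with an assumed attenuation coefficient.

```python
import numpy as np

# Toy Monte Carlo photon-transport sketch (not BEAMnrc itself): sample the
# depth at which ~1.25 MeV 60Co photons first interact in water.
# MU_WATER is an assumed linear attenuation coefficient for illustration.
rng = np.random.default_rng(seed=1)

MU_WATER = 0.0632          # 1/cm, assumed value for ~1.25 MeV photons in water
N_PHOTONS = 100_000

# Inverse-transform sampling of the free path length: d = -ln(U) / mu
depths = -np.log(rng.random(N_PHOTONS)) / MU_WATER

mean_free_path = depths.mean()
frac_past_10cm = (depths > 10.0).mean()

print(f"mean free path ~ {mean_free_path:.1f} cm (analytic: {1/MU_WATER:.1f} cm)")
print(f"fraction reaching 10 cm uncollided ~ {frac_past_10cm:.3f} "
      f"(analytic: {np.exp(-MU_WATER*10):.3f})")
```

A full code such as EGSnrc repeats this kind of sampling for every interaction type and secondary particle; the sketch shows only the innermost ingredient.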
262

On Monte Carlo simulation and analysis of electricity markets

Amelin, Mikael, January 2004
This dissertation is about how Monte Carlo simulation can be used to analyse electricity markets. Simulation has a wide range of applications; for example, players in the electricity market can use simulation to decide whether or not an investment can be expected to be profitable, and authorities can by means of simulation find out which consequences a certain market design can be expected to have on electricity prices, environmental impact, etc. The first part of the dissertation focuses on which electricity market models are suitable for Monte Carlo simulation. The starting point is a definition of an ideal electricity market, which is both practical from a mathematical point of view (simple to formulate and not requiring overly complex calculations) and a representation of the best possible resource utilisation. The definition of the ideal electricity market is followed by an analysis of how reality differs from the ideal model, what consequences the differences have on the rules of the electricity market and the strategies of the players, and how non-ideal properties can be included in a mathematical model. In particular, questions about environmental impact, forecast uncertainty and grid costs are studied. The second part of the dissertation treats the Monte Carlo technique itself. To reduce the number of samples necessary to obtain accurate results, variance reduction techniques can be used. Here, six different variance reduction techniques are studied and possible applications are pointed out. The conclusions of these studies are turned into a method for efficient simulation of basic electricity markets. The method is applied to some test systems, and the results show that the chosen variance reduction techniques can produce equal or better results using 99% fewer samples than when the same system is simulated without any variance reduction technique.
More complex electricity market models cannot be directly simulated using the same method. However, the dissertation shows that there are parallels, and that the results from simulation of basic electricity markets can form a foundation for future simulation methods. Keywords: electricity market, Monte Carlo simulation, variance reduction techniques, operation cost, reliability.
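The variance reduction idea discussed in this abstract can be illustrated with one of the simplest such techniques, antithetic variates, on a toy integral rather than an actual market model (the integrand and sample sizes here are invented for illustration):

```python
import numpy as np

# Illustrative sketch of one variance reduction technique (antithetic
# variates) on a toy problem, not an electricity market model:
# estimate E[g(U)] for g(u) = exp(u), U ~ Uniform(0, 1).
rng = np.random.default_rng(seed=7)
N = 50_000

# Crude Monte Carlo: N independent samples.
u = rng.random(N)
crude = np.exp(u)

# Antithetic variates: pair each sample u with 1 - u. Because g is
# monotone, g(u) and g(1-u) are negatively correlated, so the pair
# average has much lower variance -- using half the samples.
u2 = rng.random(N // 2)
anti = 0.5 * (np.exp(u2) + np.exp(1.0 - u2))

exact = np.e - 1.0
print(f"exact           : {exact:.4f}")
print(f"crude estimate  : {crude.mean():.4f} (sample var {crude.var():.4f})")
print(f"antithetic est. : {anti.mean():.4f} (sample var {anti.var():.4f})")
```

The dissertation's 99% sample reduction comes from combining several such techniques tuned to the structure of the market model; the sketch only shows why pairing negatively correlated samples helps.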
263

Evaluation of single and three factor CAPM based on Monte Carlo Simulation

Iordanova, Tzveta, January 2007
The aim of this master thesis was to examine whether the observed effect of Black Monday (October 1987) on stock market volatility also influenced the predictive power of the single factor CAPM and of the Fama-French three factor CAPM, in order to conclude whether the models are less effective after the stock market crash. I used OLS regression analysis and a Monte Carlo simulation technique, applied to 12 industry portfolios with US data, to determine whether the predictability of the single and three factor models changed after October 1987. My research confirms that the single factor CAPM performs better before October 1987, and it also finds evidence supporting the same hypothesis of a Black Monday effect on the predictive power of the Fama-French three factor model.
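The OLS step of a single-factor CAPM study like this one can be sketched on simulated data (the thesis uses real industry portfolios; the alpha/beta values below are invented for illustration):

```python
import numpy as np

# Minimal single-factor CAPM estimation sketch on simulated data.
# Excess portfolio return: r_p = alpha + beta * r_m + eps.
rng = np.random.default_rng(seed=42)

TRUE_ALPHA, TRUE_BETA = 0.001, 1.2     # invented "true" values
n_months = 240

r_m = rng.normal(0.005, 0.04, n_months)      # market excess returns
eps = rng.normal(0.0, 0.02, n_months)        # idiosyncratic noise
r_p = TRUE_ALPHA + TRUE_BETA * r_m + eps     # portfolio excess returns

# OLS fit via least squares (numpy only, no statsmodels needed).
X = np.column_stack([np.ones(n_months), r_m])
alpha_hat, beta_hat = np.linalg.lstsq(X, r_p, rcond=None)[0]

print(f"estimated alpha = {alpha_hat:.4f}, beta = {beta_hat:.3f}")
```

In the thesis the same regression is run on pre- and post-October-1987 windows and combined with Monte Carlo resampling to compare predictive power across the break.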
264

Knotting statistics after a local strand passage in unknotted self-avoiding polygons in Z^3

Szafron, Michael Lorne 15 April 2009
We study here a model for a strand passage in a ring polymer about a randomly chosen location at which two strands of the polymer have been brought "close" together. The model is based on Θ-SAPs, which are unknotted self-avoiding polygons in Z^3 that contain a fixed structure Θ that forces two segments of the polygon to be close together. To study this model, the Composite Markov Chain Monte Carlo (CMCMC) algorithm, referred to as the CMC Θ-BFACF algorithm, which I developed and proved to be ergodic for unknotted Θ-SAPs in my M.Sc. thesis, is used. Ten simulations (each consisting of 9.6×10^10 time steps) of the CMC Θ-BFACF algorithm are performed, and the results from a statistical analysis of the simulated data are presented. To this end, a new maximum likelihood method, based on previous work of Berretti and Sokal, is developed for obtaining maximum likelihood estimates of the growth constants and critical exponents associated respectively with the numbers of unknotted (2n)-edge Θ-SAPs, unknotted (2n)-edge successful-strand-passage Θ-SAPs, unknotted (2n)-edge failed-strand-passage Θ-SAPs, and (2n)-edge after-strand-passage-knot-type-K unknotted successful-strand-passage Θ-SAPs. The maximum likelihood estimates are consistent with the result (proved here) that the growth constants are all equal, and provide evidence that the associated critical exponents are all equal.

We then investigate the question "Given that a successful local strand passage occurs at a random location in a (2n)-edge knot-type K Θ-SAP, with what probability will the Θ-SAP have knot-type K′ after the strand passage?". To this end, the CMCMC data is used to obtain estimates for the probability of knotting given a (2n)-edge successful-strand-passage Θ-SAP, and for the probability of an after-strand-passage polygon having knot-type K given a (2n)-edge successful-strand-passage Θ-SAP.

The computed estimates numerically support the unproven conjecture that these probabilities, in the n→∞ limit, go to a value lying strictly between 0 and 1. We further prove here that the rate of approach to each of these limits (should the limits exist) is less than exponential.

We conclude with a study of whether or not there is a difference in the "size" of an unknotted successful-strand-passage Θ-SAP whose after-strand-passage knot-type is K when compared to the "size" of a Θ-SAP whose knot-type does not change after strand passage. The two measures of "size" used are the expected lengths of, and the expected mean-square radii of gyration of, subsets of Θ-SAPs. How these two measures of "size" behave as a function of a polygon's length and its after-strand-passage knot-type is investigated.
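The "composite Markov chain" idea used above — running several coupled chains and exchanging states between them to improve mixing — can be sketched on a toy bimodal target instead of a polygon model. Everything below (the target density, temperatures, step sizes) is invented for illustration; it is not the Θ-BFACF algorithm itself:

```python
import math
import random

# Sketch of the composite-Markov-chain idea: several Metropolis chains run
# at different parameter values (here, inverse temperatures) and
# occasionally swap states. The target is a toy bimodal density.
random.seed(0)

def log_target(x, beta):
    # bimodal density (modes near -3 and +3) raised to inverse temperature beta
    return beta * math.log(math.exp(-(x - 3) ** 2) + math.exp(-(x + 3) ** 2))

betas = [1.0, 0.3, 0.1]          # one chain per temperature; 1.0 is the "cold" chain
xs = [0.0, 0.0, 0.0]

cold_samples = []
for step in range(20000):
    # within-chain Metropolis updates
    for i, b in enumerate(betas):
        prop = xs[i] + random.uniform(-1, 1)
        if math.log(random.random()) < log_target(prop, b) - log_target(xs[i], b):
            xs[i] = prop
    # swap move between a random pair of adjacent chains
    i = random.randrange(len(betas) - 1)
    log_r = (log_target(xs[i], betas[i + 1]) + log_target(xs[i + 1], betas[i])
             - log_target(xs[i], betas[i]) - log_target(xs[i + 1], betas[i + 1]))
    if math.log(random.random()) < log_r:
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
    cold_samples.append(xs[0])

# Thanks to the swaps, the cold chain visits both modes.
tail = cold_samples[5000:]
frac_right = sum(1 for x in tail if x > 0) / len(tail)
print(f"fraction of cold-chain samples in the right mode: {frac_right:.2f}")
```

A single cold Metropolis chain would tend to stay trapped in one mode; the hot chains cross the barrier easily and feed mode changes down through the swap moves, which is the same mixing benefit the composite chain provides for Θ-SAP sampling.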
265

Lattice Simulations of the SU(2)-Multi-Higgs Phase Transition

Wurtz, Mark Bryan 29 July 2009
The Higgs boson has an important role in the theoretical formulation of the standard model of fundamental interactions. Symmetry breaking of the vacuum via the Higgs field allows the gauge bosons of the weak interaction and all fermions to acquire mass in a way that preserves gauge-invariance, and thus renormalizability. The Standard Model can accommodate an arbitrary number of Higgs fields with appropriate charge assignments. To explore the effects of multiple Higgs particles, the SU(2)-multi-Higgs model is studied using lattice simulations, a non-perturbative technique in which the fields are placed on a discrete space-time lattice. The formalism and methods of lattice field theory are discussed in detail. Standard results for the SU(2)-Higgs model are reproduced via Monte Carlo simulations, in particular the single-Higgs phase structure, which has a region of analytic connection between the symmetric and Higgs phases. The phase structure of the SU(2)-multi-Higgs model is explored for the case of N >= 2 identical Higgs fields. There is no remaining region of analytic connection between the phases, at least when interactions between different Higgs flavours are omitted. An explanation of this result in terms of enhancement from overlapping phase transitions is explored for N = 2 by introducing an asymmetry in the hopping parameters of the Higgs fields.
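The lattice Monte Carlo machinery described above — local updates, equilibration, then measuring an order parameter to map out a phase — can be illustrated on a far simpler system. The sketch below uses the 2D Ising model as a stand-in (it is not the SU(2)-Higgs action; lattice size, temperature, and sweep counts are chosen only for a quick demonstration):

```python
import numpy as np

# Metropolis simulation of the 2D Ising model: a minimal illustration of
# lattice Monte Carlo (local updates, equilibration, order parameter),
# standing in for the much more involved SU(2)-multi-Higgs action.
rng = np.random.default_rng(seed=3)

L = 16                       # lattice size
BETA = 0.6                   # inverse temperature, above beta_c ~ 0.4407 (ordered phase)
spins = np.ones((L, L), dtype=int)   # cold (ordered) start so the short run equilibrates fast

def sweep(s):
    """One Metropolis sweep: L*L attempted single-spin flips."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # energy change from flipping spin (i, j), periodic boundaries
        nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-BETA * dE):
            s[i, j] = -s[i, j]

for _ in range(200):         # equilibration sweeps
    sweep(spins)

mags = []
for _ in range(200):         # measurement sweeps
    sweep(spins)
    mags.append(abs(spins.mean()))

print(f"|magnetization| per site at beta={BETA}: {np.mean(mags):.2f}")
```

Scanning BETA through the critical value and watching the order parameter rise from ~0 is the same procedure, in miniature, used to map the SU(2)-Higgs phase structure on the lattice.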
266

Bulk electric system reliability simulation and application

Wangdee, Wijarn 19 December 2005
Bulk electric system reliability analysis is an important activity in both vertically integrated and unbundled electric power utilities. Competition and uncertainty in the new deregulated electric utility industry are serious concerns. New planning criteria with broader engineering consideration of transmission access and consistent risk assessment must be explicitly addressed. Modern developments in high speed computation facilities now permit the realistic utilization of the sequential Monte Carlo simulation technique in practical bulk electric system reliability assessment, resulting in a more complete understanding of bulk electric system risks and associated uncertainties. Two significant advantages of sequential simulation are the ability to obtain accurate frequency and duration indices, and the opportunity to synthesize reliability index probability distributions that describe the annual index variability.

This research work introduces the concept of applying reliability index probability distributions to assess bulk electric system risk. Bulk electric system reliability performance index probability distributions are used as integral elements in a performance based regulation (PBR) mechanism. An appreciation of the annual variability of the reliability performance indices can assist power engineers and risk managers to manage and control future potential risks under a PBR reward/penalty structure. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the system well-being of bulk electric systems and to evaluate the likelihood not only of entering a complete failure state, but also of being very close to trouble.

The system well-being concept presented in this thesis is a probabilistic framework that incorporates the accepted deterministic N-1 security criterion and provides valuable information on the degree of system vulnerability under a particular system condition, using a quantitative interpretation of the degree of system security and insecurity. An overall reliability analysis framework considering both adequacy and security perspectives is proposed using system well-being analysis and traditional adequacy assessment. The system planning process using combined adequacy and security considerations offers an additional reliability-based dimension. Sequential Monte Carlo simulation is also ideally suited to the analysis of intermittent generating resources such as wind energy conversion systems (WECS), as its framework can incorporate the chronological characteristics of wind. The reliability impacts of wind power in a bulk electric system are examined in this thesis. Transmission reinforcement planning associated with large-scale WECS and the utilization of reliability cost/worth analysis in the examination of reinforcement alternatives are also illustrated.
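The core of sequential (chronological) Monte Carlo reliability simulation can be sketched on a toy two-unit generating system serving a constant load. The MTTF/MTTR values, capacities, and load below are invented for illustration; real bulk-system studies model networks, dispatch, and load curves on top of this skeleton:

```python
import random

# Sketch of sequential Monte Carlo reliability simulation for a toy
# 2-unit system serving a constant load. Each unit alternates between
# up and down states with exponentially distributed durations.
random.seed(11)

MTTF, MTTR = 1000.0, 50.0       # hours, assumed per unit
CAPACITY = [60.0, 60.0]         # MW per unit
LOAD = 70.0                     # MW, constant for simplicity (needs both units)
YEARS = 200
HOURS_PER_YEAR = 8760.0

loss_of_load_hours = 0.0
for _ in range(YEARS):
    t = 0.0
    state = [True, True]                                  # both units up
    next_event = [random.expovariate(1 / MTTF) for _ in CAPACITY]
    while t < HOURS_PER_YEAR:
        # advance to the next state change (or the end of the year)
        dt = min(min(next_event) - t, HOURS_PER_YEAR - t)
        available = sum(c for c, up in zip(CAPACITY, state) if up)
        if available < LOAD:
            loss_of_load_hours += dt                      # accumulate outage time
        t += dt
        for k in range(len(state)):
            if next_event[k] <= t:
                state[k] = not state[k]                   # fail or repair
                next_event[k] = t + random.expovariate(
                    1 / (MTTR if not state[k] else MTTF))

lole = loss_of_load_hours / YEARS
print(f"estimated LOLE ~ {lole:.1f} hours/year")
```

Because the simulation is chronological, frequency and duration indices, and the year-to-year distribution of the index (the thesis's key ingredient for PBR analysis), fall out of the same sample paths for free.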
267

Martingale Property and Pricing for Time-homogeneous Diffusion Models in Finance

Cui, Zhenyu, 30 July 2013
The thesis studies martingale properties, probabilistic methods and efficient unbiased Monte Carlo simulation methods for various time-homogeneous diffusion models commonly used in mathematical finance. Some of the popular stochastic volatility models, such as the Heston model, the Hull-White model and the 3/2 model, are special cases. The thesis consists of three parts.

Part I, Martingale properties in time-homogeneous diffusion models: this part studies martingale properties of stock prices in stochastic volatility models driven by time-homogeneous diffusions. We find necessary and sufficient conditions for the martingale properties; the conditions are based on the local integrability of certain deterministic test functions.

Part II, Analytical pricing methods in time-homogeneous diffusion models: this part studies probabilistic methods for determining the Laplace transform of the first hitting time of an integral functional of a time-homogeneous diffusion, and for pricing an arithmetic Asian option when the stock price is modeled by a time-homogeneous diffusion. We also consider the pricing of discrete variance swaps and discrete gamma swaps in stochastic volatility models based on time-homogeneous diffusions.

Part III, Nearly unbiased Monte Carlo simulation: this part studies the unbiased Monte Carlo simulation of option prices when the characteristic function of the stock price is known but its density function is unknown or complicated.
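As a point of reference for the Asian-option pricing problem of Part II, here is a plain Monte Carlo pricer for an arithmetic Asian call. To keep the sketch short, the stock follows geometric Brownian motion rather than a general time-homogeneous diffusion, and all parameters are invented:

```python
import numpy as np

# Monte Carlo pricing of an arithmetic-average Asian call under GBM
# (a simplified stand-in for the general time-homogeneous diffusions
# studied in the thesis; parameters are illustrative).
rng = np.random.default_rng(seed=5)

S0, K, R, SIGMA, T = 100.0, 100.0, 0.03, 0.2, 1.0
N_STEPS, N_PATHS = 50, 20_000
dt = T / N_STEPS

# Simulate GBM paths with the exact log-Euler step.
z = rng.standard_normal((N_PATHS, N_STEPS))
log_steps = (R - 0.5 * SIGMA**2) * dt + SIGMA * np.sqrt(dt) * z
paths = S0 * np.exp(np.cumsum(log_steps, axis=1))

# Discounted arithmetic-average payoff.
avg = paths.mean(axis=1)
payoff = np.maximum(avg - K, 0.0)
price = np.exp(-R * T) * payoff.mean()
stderr = np.exp(-R * T) * payoff.std(ddof=1) / np.sqrt(N_PATHS)

print(f"arithmetic Asian call ~ {price:.2f} +/- {stderr:.2f}")
```

This discretized estimator carries both statistical error and time-stepping bias; removing the latter is precisely what the unbiased simulation methods of Part III are after.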
268

Understanding the Nature of Blazars High Energy Emission with Time Dependent Multi-zone Modeling

Chen, Xuhui, 06 September 2012
In this thesis we present a time-dependent multi-zone radiative transfer code and its applications to the study of the multiwavelength emission of blazars. The multiwavelength variability of blazars is widely believed to be a direct manifestation of the formation and propagation of relativistic jets, and hence of the related physics of the black hole - accretion disk - jet system. However, understanding this variability demands highly sophisticated theoretical analysis and numerical simulations. In particular, the inclusion of light travel time effects (LTTEs) in these calculations has long been recognized as important, but is very difficult. The code we use couples Fokker-Planck and Monte Carlo methods in a 2-dimensional (cylindrical) geometry. For the first time, all the LTTEs are fully considered, along with a proper, full, self-consistent treatment of Compton cooling, which depends on the LTTEs. Using this code, we studied a set of physical processes relevant to the variability of blazars, including electron injection and escape, radiative cooling, and stochastic particle acceleration. Our comparison of the observational data with the simulation results revealed that a combination of all those processes is needed to reproduce the observed behavior of the emission of blue blazars. The simulations favor a scenario in which the high energy emission at quiet and flaring stages comes from the same location. We have further modeled the red blazar PKS 1510-089. External radiation, which comes from the broad line region (BLR) or the infrared torus, is included in the model. The results confirm that the external Compton model can adequately describe the emission from red blazars. The BLR emission is favored as the source of inverse Compton seed photons, compared to synchrotron and IR torus radiation.
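The light travel time effects emphasized above can be illustrated with a deliberately simple toy, far from the thesis's coupled Fokker-Planck/Monte Carlo treatment: zones spread along a jet all flare at once, but the observer sees each zone's photons delayed by its extra path length, smearing the summed light curve. All numbers (geometry, flare width) are invented:

```python
import numpy as np

# Toy illustration of light travel time effects (LTTEs) in multi-zone
# modeling: 20 zones spread over 1 light-day flare simultaneously, but
# the observed (summed) light curve is broadened by per-zone delays.
C_LIGHT_DAYS = 1.0                       # speed of light in light-days/day

zones = np.linspace(0.0, 1.0, 20)        # zone positions along the jet axis, light-days
t = np.linspace(-2.0, 4.0, 1200)         # observer time, days

intrinsic_width = 0.1                    # each zone's flare duration (Gaussian sigma), days
light_curve = np.zeros_like(t)
for z in zones:
    delay = (zones.max() - z) / C_LIGHT_DAYS   # farther zones arrive later
    light_curve += np.exp(-0.5 * ((t - delay) / intrinsic_width) ** 2)

# Compare intrinsic and observed flare durations (full width at half maximum).
above = t[light_curve > 0.5 * light_curve.max()]
observed_width = above.max() - above.min()
print(f"intrinsic flare FWHM ~ {2.355 * intrinsic_width:.2f} d, "
      f"observed ~ {observed_width:.2f} d")
```

Even this crude sum shows why ignoring LTTEs misestimates variability timescales; the full code must additionally make the Compton cooling in each zone depend on the (retarded) radiation arriving from the others.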
269

Development of Cosmic Ray Simulation Program -- Earth Cosmic Ray Shower (ECRS)

Hakmana Witharana, Sampath S, 04 May 2007
ECRS is a program for the detailed simulation of extensive air showers initiated by high energy cosmic ray particles. In this dissertation work, a Geant4-based ECRS simulation was designed and developed to study secondary cosmic ray particle showers in the full range of the Earth's atmosphere. A proper atmospheric air density profile and the geomagnetic field are implemented in order to correctly simulate the interactions of charged particles in the Earth's atmosphere. The initial simulation was done for the Atlanta (33.46° N, 84.25° W) region. Four different primary proton energies (10^9, 10^10, 10^11 and 10^12 eV) were considered to determine the secondary particle distribution at the Earth's surface. The geomagnetic field and the atmospheric air density have considerable effects on the muon distribution at the Earth's surface. The muon charge ratio at the Earth's surface was studied with the ECRS simulation for two different geomagnetic locations: Atlanta, Georgia, USA and Lynn Lake, Manitoba, Canada. The simulation results are in excellent agreement with the data from the NMSU-WIZARD/CAPRICE and BESS experiments at Lynn Lake. At low momentum, the simulated ground level muon charge ratios show latitude-dependent geomagnetic effects for both Atlanta and Lynn Lake. The simulated charge ratio is 1.20 ± 0.05 (without geomagnetic field) and 1.12 ± 0.05 (with geomagnetic field) for Atlanta, and 1.22 ± 0.04 (with geomagnetic field) for Lynn Lake. Such studies are very important for analyzing the secondary cosmic ray muon flux distribution at the Earth's surface and can be used to study atmospheric neutrino oscillations.
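One small ingredient of a shower simulation like ECRS is drawing primary energies from a power-law flux. A sketch with inverse-transform sampling (the spectral index 2.7 is the approximate value for cosmic rays in this range; the energy window matches the 10^9 - 10^12 eV primaries above, but the code is illustrative, not part of ECRS):

```python
import numpy as np

# Sample cosmic-ray primary energies from a truncated power law
# dN/dE ~ E^-gamma on [E_MIN, E_MAX] via inverse-transform sampling.
rng = np.random.default_rng(seed=9)

GAMMA = 2.7                    # assumed spectral index
E_MIN, E_MAX = 1e9, 1e12       # eV, matching the energy range in the abstract

u = rng.random(100_000)
a = 1.0 - GAMMA                # exponent of the integrated spectrum (= -1.7)
# invert the CDF of the truncated power law
energies = (E_MIN**a + u * (E_MAX**a - E_MIN**a)) ** (1.0 / a)

frac_below_1e10 = (energies < 1e10).mean()
cdf = (1e10**a - E_MIN**a) / (E_MAX**a - E_MIN**a)   # analytic check
print(f"fraction of primaries below 1e10 eV: {frac_below_1e10:.3f} "
      f"(analytic {cdf:.3f})")
```

The steep spectrum is why low-energy primaries dominate the sampled showers: roughly 98% of draws land in the lowest decade of the window.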
270

Sensitivity and uncertainty analysis of parameters and input data in the stormwater and recipient model StormTac

Stenvall, Brita January 2004 (has links)
Three methods of sensitivity and uncertainty analysis have been applied to the operative stormwater and recipient model StormTac. The study area is the watershed of Lake Flaten in the municipality of Salem. StormTac's submodels for stormwater, pollutant transport and the recipient are considered. In the sensitivity assessment, the model parameters and inputs were varied one at a time by a constant percentage according to the one-at-a-time (OAT) method, and the response of the outputs was calculated. The stormwater and base flows were found to be most sensitive to perturbations in the precipitation, while the nitrogen, phosphorus and copper loads on the recipient were most sensitive to each pollutant's stormwater concentration from developed areas. Uncertainty analysis using Monte Carlo simulation was performed in two ways. (1) All model parameters and inputs were assigned uncertainties, represented by uniform distributions bounded by StormTac's minimum and maximum values for the parameters and by ±50% of the original values for the inputs, and the resulting uncertainty of each target variable was quantified. To estimate each parameter's and input's contribution to the cumulative uncertainty of the target variable, its uncertainty was then omitted one at a time and the change in the difference between the 90th and 10th percentiles of the output was studied. The most important uncertainties for the stormwater flow were those of the runoff coefficient for forest land and the precipitation: omitting them reduced the 90th-10th percentile spread of the flow by 44% and 33%, respectively. (2) To identify optimal parameter intervals, the probability of an acceptable value of the target variable was plotted against each parameter's value range. For some of the parameters in StormTac, the results suggest that the intervals should be changed.
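The two analyses applied to StormTac can be sketched on a toy rational-method runoff model Q = φ·P·A (runoff coefficient, precipitation, area). The model, parameter values, and uncertainty ranges below are invented for illustration; they are not StormTac's:

```python
import numpy as np

# Sketch of (1) one-at-a-time sensitivity analysis and (2) Monte Carlo
# uncertainty analysis with omitted-factor attribution, on a toy runoff
# model Q = phi * P * A (illustrative values, not StormTac's).
rng = np.random.default_rng(seed=2)

def runoff(phi, precip_mm, area_ha):
    # annual runoff volume in m^3 (1 mm over 1 ha = 10 m^3)
    return phi * precip_mm * area_ha * 10.0

base = dict(phi=0.3, precip_mm=600.0, area_ha=50.0)
q0 = runoff(**base)

# 1) OAT sensitivity: perturb one factor at a time by +10%.
for name in base:
    p = dict(base)
    p[name] *= 1.10
    print(f"+10% {name:9s} -> Q changes by {100 * (runoff(**p) / q0 - 1):.1f} %")

# 2) Monte Carlo uncertainty: uniform distributions, +/-30% around base,
#    summarized by the 90th-minus-10th percentile spread of Q.
n = 20_000
samples = {k: rng.uniform(0.7 * v, 1.3 * v, n) for k, v in base.items()}
q = runoff(samples["phi"], samples["precip_mm"], samples["area_ha"])
spread = np.percentile(q, 90) - np.percentile(q, 10)

# Omit one factor's uncertainty at a time to attribute the spread.
for name in base:
    fixed = {k: (np.full(n, base[k]) if k == name else samples[k]) for k in base}
    q_fixed = runoff(fixed["phi"], fixed["precip_mm"], fixed["area_ha"])
    reduced = np.percentile(q_fixed, 90) - np.percentile(q_fixed, 10)
    print(f"fixing {name:9s} shrinks the 90-10 spread by "
          f"{100 * (1 - reduced / spread):.0f} %")
```

The shrink-on-omission numbers play the same role as the 44%/33% figures reported for the runoff coefficient and precipitation in the abstract: they attribute the output spread to individual inputs.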
