211

Net pay evaluation: a comparison of methods to estimate net pay and net-to-gross ratio using surrogate variables

Bouffin, Nicolas 02 June 2009 (has links)
Net pay (NP) and net-to-gross ratio (NGR) are often crucial quantities to characterize a reservoir and assess the amount of hydrocarbons in place. Numerous methods have been developed in the industry to evaluate NP and NGR, depending on the intended purpose. These methods usually involve the use of cut-off values of one or more surrogate variables to discriminate non-reservoir from reservoir rocks. This study investigates statistical issues related to the selection of such cut-off values by considering the specific case of using porosity (φ) as the surrogate. Four methods are applied to permeability-porosity datasets to estimate porosity cut-off values. All the methods assume that a permeability cut-off value has been previously determined, and each method is based on minimizing the prediction error when particular assumptions are satisfied. The results show that delineating NP and evaluating NGR require different porosity cut-off values. In the case where porosity and the logarithm of permeability are jointly normally distributed, NP delineation requires the use of the Y-on-X regression line to estimate the optimal porosity cut-off, while the reduced major axis (RMA) line provides the optimal porosity cut-off value to evaluate NGR. Alternatives to RMA and regression lines are also investigated, such as discriminant analysis and a data-oriented method using a probabilistic analysis of the porosity-permeability crossplots. Joint normal datasets are generated to test the ability of the methods to accurately predict the optimal porosity cut-off value for sampled sub-datasets. These different methods have been compared to one another on the basis of the bias, standard error and robustness of the estimates. A set of field data from the Travis Peak formation has been used to test the performance of the methods. The conclusions of the study were confirmed when applied to the field data: as long as the initial assumptions concerning the distribution of the data are verified, it is recommended to use the Y-on-X regression line to delineate NP, while either the RMA line or discriminant analysis should be used for evaluating NGR. In the case where the assumptions on the data distribution are not verified, the quadrant method should be used.
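The cut-off logic described above lends itself to a short numerical illustration. The sketch below is a minimal Python version assuming porosity and log10-permeability are jointly normal: given a permeability cut-off, it derives one porosity cut-off from the Y-on-X regression line and another from the RMA line. The function name, synthetic data and parameter values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def porosity_cutoffs(phi, k, k_cutoff):
    """Estimate porosity cut-offs from a permeability cut-off using the
    Y-on-X regression line (net-pay delineation) and the reduced major
    axis (RMA) line (net-to-gross evaluation), assuming porosity and
    log10-permeability are jointly normal."""
    x = np.asarray(phi, dtype=float)
    y = np.log10(np.asarray(k, dtype=float))
    y_c = np.log10(k_cutoff)

    mx, my = x.mean(), y.mean()
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    r = np.corrcoef(x, y)[0, 1]

    b_ols = r * sy / sx                    # Y-on-X least-squares slope
    phi_ols = mx + (y_c - my) / b_ols      # invert the regression line at y = y_c

    b_rma = np.sign(r) * sy / sx           # reduced major axis slope
    phi_rma = mx + (y_c - my) / b_rma

    return phi_ols, phi_rma

# Toy joint-normal data (illustrative values only)
rng = np.random.default_rng(0)
phi = rng.normal(0.12, 0.04, 500)                       # porosity, fraction
logk = -1.0 + 20.0 * phi + rng.normal(0.0, 0.5, 500)    # log10 permeability
print(porosity_cutoffs(phi, 10.0 ** logk, k_cutoff=10.0))
```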
212

Signal decompositions using trans-dimensional Bayesian methods.

Roodaki, Alireza 14 May 2012 (has links) (PDF)
This thesis addresses the challenges encountered when dealing with signal decomposition problems with an unknown number of components in a Bayesian framework. In particular, we focus on the issue of summarizing the variable-dimensional posterior distributions that typically arise in such problems. Such posterior distributions are defined over a union of subspaces of differing dimensionality, and can be sampled from using modern Monte Carlo techniques, for instance the increasingly popular Reversible-Jump MCMC (RJ-MCMC) sampler. No generic approach is available, however, to summarize the resulting variable-dimensional samples and extract from them component-specific parameters. One of the main challenges that needs to be addressed to this end is the label-switching issue, which is caused by the invariance of the posterior distribution to permutations of the components. We propose a novel approach to this problem, which consists in approximating the complex posterior of interest by a "simple" but still variable-dimensional parametric distribution. We develop stochastic EM-type algorithms, driven by the RJ-MCMC sampler, to estimate the parameters of the model through the minimization of a divergence measure between the two distributions. Two signal decomposition problems are considered to show the capability of the proposed approach both for relabeling and for summarizing variable-dimensional posterior distributions: the classical problem of detecting and estimating sinusoids in white Gaussian noise on the one hand, and a particle counting problem motivated by the Pierre Auger project in astrophysics on the other hand.
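As a concrete illustration of the label-switching issue mentioned above, the sketch below relabels the components of each fixed-dimension posterior sample by matching them to a reference ordering with the Hungarian algorithm. This is a deliberately simplified stand-in, with assumed array shapes, for the divergence-minimizing stochastic EM approach summarized in the abstract, not the thesis algorithm itself.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(samples, reference):
    """Permute the component parameters of each MCMC sample so they best
    match a reference ordering (squared distance between component vectors).
    Assumes each sample is a (k, d) array with k at most len(reference)."""
    relabeled = []
    for theta in samples:
        k = theta.shape[0]
        cost = ((theta[:, None, :] - reference[None, :k, :]) ** 2).sum(-1)
        rows, cols = linear_sum_assignment(cost)    # optimal component-to-slot matching
        perm = np.empty(k, dtype=int)
        perm[cols] = rows
        relabeled.append(theta[perm])
    return relabeled

# Example: three samples with 2, 3 and 2 components of dimension 1
samples = [np.array([[1.9], [0.2]]),
           np.array([[0.1], [2.1], [5.0]]),
           np.array([[5.1], [0.0]])]
reference = np.array([[0.0], [2.0], [5.0]])
print(relabel(samples, reference))
```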
213

Development of tissue-equivalent CVD-diamond radiation detectors with small interface effects

Górka, Bartosz January 2008 (has links)
Due to its close tissue-equivalence, high radiation sensitivity, and dose and dose-rate linearity, diamond is a very promising detector for radiation therapy applications. The present thesis focuses on the development of a chemical vapour deposited (CVD) diamond detector, with special attention to arranging the electrodes and encapsulation so that they have minimal influence on the measured signal. Several prototype detectors were designed using CVD-diamond substrates with attached silver electrodes. Interface effects in the electrode-diamond-electrode structure are investigated using the Monte Carlo (MC) code PENELOPE. The studies cover a wide range of electrode and diamond thicknesses, electrode materials and photon beam energies. An appreciable enhancement of the absorbed dose to diamond was found for high-Z electrodes. The influence of the electrodes diminishes with decreasing atomic number difference and layer thickness, so that from this point of view thin graphite electrodes would be ideal. The effect of encapsulation, cable and electrical connections on the detector response is also addressed employing MC techniques. For Co-60, 6 MV and 18 MV photon beam qualities, it is shown that the prototypes exhibit an energy and directional dependence of about 3% and 2%, respectively. By modifying the geometry and using graphite electrodes, these dependencies are reduced to 1%. Although experimental studies disclose some limitations of the prototypes (high leakage current, priming effect and slow signal stabilisation), diamonds of higher quality, suitable for dosimetry, can be produced with a better-controlled CVD process. With good crystals and a well-designed encapsulation, the CVD-diamond detector could become competitive for routine dosimetry. For correct dose determination it is then important to use a collision stopping power for diamond that incorporates a proper mean excitation energy and density-effect corrections. A new mean excitation energy of 88 eV has been calculated.
214

Characterization of the 60Co therapy unit Siemens Gammatron 1 using BEAMnrc Monte Carlo simulations

De Luelmo, Sandro Carlos January 2006 (has links)
The aim of this work is to characterize the beam of the 60Co therapy unit “Siemens Gammatron 1”, used at the Swedish Radiation Protection Authority (SSI) to calibrate therapy-level ionization chambers. SSI wants to know the spectra at the laboratory’s reference points and to have a verified, virtual model of the 60Co unit in order to compare current and future experiments to Monte Carlo simulations. EGSnrc is a code for performing Monte Carlo simulations. By using BEAMnrc, an additional package that simplifies building geometries in the EGS code, the whole Gammatron at SSI was defined virtually. In this work, virtual models of two experimental setups were built: the Gammatron irradiating in air, to simulate the air-kerma calibration geometry, and the Gammatron irradiating a water phantom similar to that used for the absorbed-dose-to-water calibrations. The simulations are divided into two substeps, one for the fixed part of the Gammatron and one for the variable part, in order to study different entities and to shorten simulation times. The virtual geometries are verified by comparing Monte Carlo results with measurements. Once it was verified that the virtual geometries could be trusted, they were used to generate the Gammatron photon spectra in air and water for different field sizes and at different depths. The contributions to the photon spectra from different regions in the Gammatron were also collected. This is easy to achieve with Monte Carlo calculations, but difficult to obtain with ordinary detectors in real-life measurements. The results from this work give SSI knowledge of the photon spectra at its reference points for calibrations in air and in a water phantom. The first step of the virtual model (the fixed part of the Gammatron) can be used for future experimental setups at SSI.
215

On Monte Carlo simulation and analysis of electricity markets

Amelin, Mikael January 2004 (has links)
This dissertation is about how Monte Carlo simulation can be used to analyse electricity markets. There is a wide range of applications for simulation; for example, players in the electricity market can use simulation to decide whether or not an investment can be expected to be profitable, and authorities can by means of simulation find out which consequences a certain market design can be expected to have on electricity prices, environmental impact, etc. In the first part of the dissertation, the focus is on which electricity market models are suitable for Monte Carlo simulation. The starting point is a definition of an ideal electricity market. Such an electricity market is both practical from a mathematical point of view (it is simple to formulate and does not require overly complex calculations) and a representation of the best possible resource utilisation. The definition of the ideal electricity market is followed by an analysis of how reality differs from the ideal model, what consequences the differences have on the rules of the electricity market and the strategies of the players, and how non-ideal properties can be included in a mathematical model. In particular, questions about environmental impact, forecast uncertainty and grid costs are studied. The second part of the dissertation treats the Monte Carlo technique itself. To reduce the number of samples necessary to obtain accurate results, variance reduction techniques can be used. Here, six different variance reduction techniques are studied and possible applications are pointed out. The conclusions of these studies are turned into a method for efficient simulation of basic electricity markets. The method is applied to some test systems and the results show that the chosen variance reduction techniques can produce equal or better results using 99% fewer samples compared to when the same system is simulated without any variance reduction technique. More complex electricity market models cannot directly be simulated using the same method. However, the dissertation shows that there are parallels and that the results from simulation of basic electricity markets can form a foundation for future simulation methods.
Keywords: electricity market, Monte Carlo simulation, variance reduction techniques, operation cost, reliability.
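To make the variance-reduction idea concrete, here is a toy sketch of one standard technique, a control variate, applied to estimating the expected operation cost of a two-generator system with random load. The dispatch model and all numbers are invented for illustration and are not one of the dissertation's test systems.

```python
import numpy as np

rng = np.random.default_rng(1)

def operation_cost(load, cheap_cap=900.0, cheap_price=10.0, peak_price=40.0):
    """Toy dispatch: cheap generation up to cheap_cap MW, the remainder
    served by an expensive peak unit (illustrative numbers only)."""
    base = np.minimum(load, cheap_cap)
    peak = np.maximum(load - cheap_cap, 0.0)
    return cheap_price * base + peak_price * peak

n = 100_000
mu_load, sd_load = 800.0, 150.0
load = rng.normal(mu_load, sd_load, n)
cost = operation_cost(load)

crude = cost.mean()                                   # crude Monte Carlo estimate

# Control variate: the load has a known mean, so subtract its centred
# contribution scaled by the optimal coefficient beta.
beta = np.cov(cost, load)[0, 1] / load.var(ddof=1)
adjusted = cost - beta * (load - mu_load)

print(f"crude: {crude:.1f}, control variate: {adjusted.mean():.1f}")
print(f"variance ratio: {adjusted.var() / cost.var():.3f}")
```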
216

Evaluation of single and three factor CAPM based on Monte Carlo Simulation

Iordanova, Tzveta January 2007 (has links)
The aim of this master thesis was to examine whether the observed effect of Black Monday (October 1987) on stock market volatility also influenced the predictive power of the single factor CAPM and the Fama French three factor CAPM, in order to conclude whether the models are less effective after the stock market crash. I have used OLS regression analysis and a Monte Carlo simulation technique. I have applied these techniques to 12 industry portfolios with US data to draw a conclusion on whether the predictability of the single and three factor models changed after October 1987. My research confirms that the single factor CAPM performs better before October 1987, and I also found evidence supporting the same hypothesis of a Black Monday effect on the predictive power of the Fama French three factor model.
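The regression side of this methodology can be sketched briefly: OLS estimates of alpha and the factor loadings for a single-factor CAPM and a three-factor specification. The simulated factors and returns below are placeholders, not the 12 US industry portfolios used in the thesis.

```python
import numpy as np

def factor_regression(excess_returns, factors):
    """OLS estimate of alpha and factor loadings: regress portfolio excess
    returns on one factor (CAPM) or three factors (Fama-French)."""
    X = np.column_stack([np.ones(len(factors)), factors])
    coef, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    return coef[0], coef[1:]            # alpha, betas

# Toy monthly data (illustrative only)
rng = np.random.default_rng(2)
T = 240
mkt = rng.normal(0.006, 0.045, T)       # market excess return
smb = rng.normal(0.002, 0.030, T)       # size factor
hml = rng.normal(0.003, 0.030, T)       # value factor
r = 0.001 + 1.1 * mkt + 0.3 * smb - 0.2 * hml + rng.normal(0, 0.02, T)

print(factor_regression(r, mkt[:, None]))                       # single-factor CAPM
print(factor_regression(r, np.column_stack([mkt, smb, hml])))   # three-factor model
```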
217

Knotting statistics after a local strand passage in unknotted self-avoiding polygons in Z^3

Szafron, Michael Lorne 15 April 2009
We study here a model for a strand passage in a ring polymer about a randomly chosen location at which two strands of the polymer have been brought "close" together. The model is based on Θ-SAPs, which are unknotted self-avoiding polygons in Z^3 that contain a fixed structure Θ that forces two segments of the polygon to be close together. To study this model, the Composite Markov Chain Monte Carlo (CMCMC) algorithm, referred to as the CMC Θ-BFACF algorithm, that I developed and proved to be ergodic for unknotted Θ-SAPs in my M.Sc. thesis, is used. Ten simulations (each consisting of 9.6×10^10 time steps) of the CMC Θ-BFACF algorithm are performed and the results from a statistical analysis of the simulated data are presented. To this end, a new maximum likelihood method, based on previous work of Berretti and Sokal, is developed for obtaining maximum likelihood estimates of the growth constants and critical exponents associated respectively with the numbers of unknotted (2n)-edge Θ-SAPs, unknotted (2n)-edge successful-strand-passage Θ-SAPs, unknotted (2n)-edge failed-strand-passage Θ-SAPs, and (2n)-edge after-strand-passage-knot-type-K unknotted successful-strand-passage Θ-SAPs. The maximum likelihood estimates are consistent with the result (proved here) that the growth constants are all equal, and provide evidence that the associated critical exponents are all equal.

We then investigate the question "Given that a successful local strand passage occurs at a random location in a (2n)-edge knot-type K Θ-SAP, with what probability will the Θ-SAP have knot-type K' after the strand passage?". To this end, the CMCMC data is used to obtain estimates for the probability of knotting given a (2n)-edge successful-strand-passage Θ-SAP and the probability of an after-strand-passage polygon having knot-type K given a (2n)-edge successful-strand-passage Θ-SAP. The computed estimates numerically support the unproven conjecture that these probabilities, in the n→∞ limit, go to a value lying strictly between 0 and 1. We further prove here that the rate of approach to each of these limits (should the limits exist) is less than exponential.

We conclude with a study of whether or not there is a difference in the "size" of an unknotted successful-strand-passage Θ-SAP whose after-strand-passage knot-type is K when compared to the "size" of a Θ-SAP whose knot-type does not change after strand passage. The two measures of "size" used are the expected lengths of, and the expected mean-square radius of gyration of, subsets of Θ-SAPs. How these two measures of "size" behave as a function of a polygon's length and its after-strand-passage knot-type is investigated.
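The growth-constant estimation can be sketched with a much simpler fit than the one used in the thesis. The snippet below does a plain least-squares fit of log counts to an assumed asymptotic form log A + n log mu + (gamma - 1) log n; both the functional form and the variable names are assumptions made for illustration, and this is only a stand-in for the Berretti-Sokal-style maximum likelihood method described above.

```python
import numpy as np

def fit_growth_constant(n_edges, counts):
    """Least-squares fit of log(counts) = log A + n*log(mu) + (gamma-1)*log(n).
    A simplified illustration, not the thesis's maximum likelihood estimator."""
    n = np.asarray(n_edges, dtype=float)
    y = np.log(np.asarray(counts, dtype=float))
    X = np.column_stack([np.ones_like(n), n, np.log(n)])
    (log_A, log_mu, gamma_minus_1), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.exp(log_mu), gamma_minus_1 + 1.0

# Toy counts generated from the assumed form itself (mu = 4.68, gamma = 1.3)
n = np.arange(10, 200, 2)
counts = 0.5 * 4.68 ** n * n ** 0.3
print(fit_growth_constant(n, counts))
```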
218

Bulk electric system reliability simulation and application

Wangdee, Wijarn 19 December 2005
Bulk electric system reliability analysis is an important activity in both vertically integrated and unbundled electric power utilities. Competition and uncertainty in the new deregulated electric utility industry are serious concerns. New planning criteria with broader engineering consideration of transmission access and consistent risk assessment must be explicitly addressed. Modern developments in high-speed computation facilities now permit the realistic utilization of the sequential Monte Carlo simulation technique in practical bulk electric system reliability assessment, resulting in a more complete understanding of bulk electric system risks and associated uncertainties. Two significant advantages of sequential simulation are the ability to obtain accurate frequency and duration indices, and the opportunity to synthesize reliability index probability distributions which describe the annual index variability.

This research work introduces the concept of applying reliability index probability distributions to assess bulk electric system risk. Bulk electric system reliability performance index probability distributions are used as integral elements in a performance based regulation (PBR) mechanism. An appreciation of the annual variability of the reliability performance indices can assist power engineers and risk managers to manage and control future potential risks under a PBR reward/penalty structure. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the system well-being of bulk electric systems and to evaluate not only the likelihood of entering a complete failure state, but also the likelihood of being very close to trouble. The system well-being concept presented in this thesis is a probabilistic framework that incorporates the accepted deterministic N-1 security criterion and provides valuable information on the degree of system vulnerability under a particular system condition, using a quantitative interpretation of the degree of system security and insecurity. An overall reliability analysis framework considering both adequacy and security perspectives is proposed using system well-being analysis and traditional adequacy assessment. The system planning process using combined adequacy and security considerations offers an additional reliability-based dimension. Sequential Monte Carlo simulation is also ideally suited to the analysis of intermittent generating resources such as wind energy conversion systems (WECS), as its framework can incorporate the chronological characteristics of wind. The reliability impacts of wind power in a bulk electric system are examined in this thesis. Transmission reinforcement planning associated with large-scale WECS and the utilization of reliability cost/worth analysis in the examination of reinforcement alternatives are also illustrated.
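A minimal sketch of the sequential (chronological) simulation idea follows: each generating unit alternates between up and down states with exponentially distributed durations, and one sampled year yields duration- and frequency-type indices by comparing available capacity with the load. The unit data and the flat load profile are illustrative assumptions, not the systems studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def sequential_mc_year(cap, mttf, mttr, load, hours=8760):
    """One sampled year of a sequential Monte Carlo reliability study with
    two-state generating units (exponential up/down durations)."""
    avail = np.zeros((len(cap), hours))
    for i, (c, up, down) in enumerate(zip(cap, mttf, mttr)):
        t, in_service = 0.0, 1                  # every unit starts in service
        while t < hours:
            dur = rng.exponential(up if in_service else down)
            s, e = int(t), min(int(t + dur), hours)
            avail[i, s:e] = c * in_service
            t += dur
            in_service = 1 - in_service
    deficit = avail.sum(axis=0) < load
    lol_hours = int(deficit.sum())                                       # duration-type index
    lol_freq = int(np.count_nonzero(np.diff(deficit.astype(int)) == 1))  # frequency-type index
    return lol_hours, lol_freq

cap  = [400, 400, 300, 200, 100]            # MW
mttf = [1500, 1500, 1200, 1000, 800]        # mean time to failure (h)
mttr = [60, 60, 50, 40, 30]                 # mean time to repair (h)
print(sequential_mc_year(cap, mttf, mttr, load=np.full(8760, 1000.0)))
```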
219

Martingale Property and Pricing for Time-homogeneous Diffusion Models in Finance

Cui, Zhenyu 30 July 2013 (has links)
The thesis studies the martingale properties, probabilistic methods and efficient unbiased Monte Carlo simulation methods for various time-homogeneous diffusion models commonly used in mathematical finance. Some of the popular stochastic volatility models, such as the Heston model, the Hull-White model and the 3/2 model, are special cases. The thesis consists of the following three parts:

Part I: Martingale properties in time-homogeneous diffusion models. Part I of the thesis studies martingale properties of stock prices in stochastic volatility models driven by time-homogeneous diffusions. We find necessary and sufficient conditions for the martingale properties. The conditions are based on the local integrability of certain deterministic test functions.

Part II: Analytical pricing methods in time-homogeneous diffusion models. Part II of the thesis studies probabilistic methods for determining the Laplace transform of the first hitting time of an integral functional of a time-homogeneous diffusion, and pricing an arithmetic Asian option when the stock price is modeled by a time-homogeneous diffusion. We also consider the pricing of discrete variance swaps and discrete gamma swaps in stochastic volatility models based on time-homogeneous diffusions.

Part III: Nearly Unbiased Monte Carlo Simulation. Part III of the thesis studies the unbiased Monte Carlo simulation of option prices when the characteristic function of the stock price is known but its density function is unknown or complicated.
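For context on the pricing problem in Part III, the sketch below prices a European call under the Heston model with a plain full-truncation Euler scheme. This discretization carries a bias, which is precisely the limitation that motivates unbiased simulation methods; it is not the method developed in the thesis, and every parameter value is illustrative.

```python
import numpy as np

def heston_euler_call(s0=100.0, strike=100.0, t=1.0, r=0.02, v0=0.04,
                      kappa=1.5, theta=0.04, xi=0.5, rho=-0.7,
                      n_steps=250, n_paths=100_000, seed=4):
    """Biased (full-truncation Euler) Monte Carlo price of a European call
    under the Heston stochastic volatility model. Illustrative baseline only."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                          # full truncation of the variance
        s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return np.exp(-r * t) * np.maximum(s - strike, 0.0).mean()

print(heston_euler_call())
```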
220

Understanding the Nature of Blazars' High Energy Emission with Time Dependent Multi-zone Modeling

Chen, Xuhui 06 September 2012 (has links)
In this thesis we present a time-dependent multi-zone radiative transfer code and its applications to the study of the multiwavelength emission of blazars. The multiwavelength variability of blazars is widely believed to be a direct manifestation of the formation and propagation of relativistic jets, and hence of the related physics of the black hole - accretion disk - jet system. However, understanding this variability demands highly sophisticated theoretical analysis and numerical simulations. In particular, the inclusion of light travel time effects (LTTEs) in these calculations has long been recognized as important, but very difficult. The code we use couples Fokker-Planck and Monte Carlo methods in a 2-dimensional (cylindrical) geometry. For the first time all the LTTEs are fully considered, along with a proper, full, self-consistent treatment of Compton cooling, which depends on the LTTEs. Using this code, we studied a set of physical processes that are relevant to the variability of blazars, including electron injection and escape, radiative cooling, and stochastic particle acceleration. Our comparison of the observational data and the simulation results revealed that a combination of all those processes is needed to reproduce the observed behavior of the emission of blue blazars. The simulations favor a scenario in which the high energy emission at quiet and flaring stages comes from the same location. We have further modeled the red blazar PKS 1510-089. External radiation, which comes from the broad line region (BLR) or the infrared torus, is included in the model. The results confirm that the external Compton model can adequately describe the emission from red blazars. Emission from the BLR is favored as the source of inverse Compton seed photons, compared to synchrotron and IR torus radiation.
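The kinetic ingredients listed above (injection, escape, radiative cooling) can be illustrated with a minimal one-zone update of the electron distribution. The sketch below evolves N(gamma) under power-law injection, synchrotron-like cooling and catastrophic escape with a simple upwind finite-difference scheme; it omits stochastic acceleration, the 2D geometry, the Monte Carlo radiation transport and all light travel time effects, and every parameter value is an assumption for illustration only.

```python
import numpy as np

def evolve_electrons(n_steps=2000, dt=10.0, t_esc=1e4, q0=1.0, p=2.2,
                     cool=1e-8, n_bins=200):
    """Evolve an electron energy distribution N(gamma) with power-law
    injection, cooling rate gamma_dot = -cool*gamma^2 and escape time t_esc,
    using an explicit upwind scheme on a logarithmic gamma grid."""
    gamma = np.logspace(1, 5, n_bins)
    dg = np.diff(gamma)
    N = np.zeros(n_bins)
    inject = q0 * gamma ** (-p)
    inject[gamma > 1e4] = 0.0                       # inject only up to gamma ~ 1e4
    for _ in range(n_steps):
        flux = cool * gamma ** 2 * N                # cooling flux in energy space
        dN = np.zeros_like(N)
        dN[:-1] = (flux[1:] - flux[:-1]) / dg       # particles drift to lower gamma
        N += dt * (inject - N / t_esc + dN)
    return gamma, N

gamma, N = evolve_electrons()
print(f"peak of N(gamma): {N.max():.3g} at gamma = {gamma[N.argmax()]:.3g}")
```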
