1. Pricing Options with Monte Carlo and Binomial Tree Methods. Sun, Xihao, 03 May 2011.
This report describes our work in pricing options using computational methods. First, I collected historical asset prices for assets in four economic sectors in order to estimate model parameters, such as asset returns and covariances. I then used these parameters to model the asset prices as a multivariate geometric Brownian motion and to simulate new asset prices. Using the simulated prices, I priced call options with Monte Carlo methods and control variates. Next, I priced put options with the binomial tree model, which I was introduced to in the course Math 571: Financial Mathematics I. Using the estimated put and call option prices together with some stocks, I formed a portfolio in an Interactive Brokers paper account. This project was done as part of the masters capstone course Math 573: Computational Methods of Financial Mathematics.
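The two pricing approaches described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the author's code: a single asset, Black-Scholes dynamics, and all parameter values are assumptions made for the example. The Monte Carlo estimator uses the discounted terminal price, whose expectation is known to equal S0, as a control variate; the put is priced on a Cox-Ross-Rubinstein binomial tree.

```python
import numpy as np

def mc_call_price(S0, K, r, sigma, T, n_paths=100_000, seed=0):
    """Monte Carlo European call price under geometric Brownian motion,
    variance-reduced with the discounted terminal price as control variate."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
    control = np.exp(-r * T) * ST           # known mean: E[control] = S0
    c = np.cov(payoff, control)
    beta = c[0, 1] / c[1, 1]                # optimal control-variate coefficient
    adjusted = payoff - beta * (control - S0)
    return adjusted.mean(), adjusted.std(ddof=1) / np.sqrt(n_paths)

def crr_put(S0, K, r, sigma, T, n=500):
    """European put on a Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    ST = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    v = np.maximum(K - ST, 0.0)
    for _ in range(n):                      # backward induction to time 0
        v = np.exp(-r * dt) * (q * v[:-1] + (1 - q) * v[1:])
    return v[0]

call, se = mc_call_price(S0=100, K=105, r=0.02, sigma=0.25, T=1.0)
print(f"call: {call:.4f} +/- {se:.4f}; put: {crr_put(100, 105, 0.02, 0.25, 1.0):.4f}")
```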
2. High accuracy correlated wavefunctions. Harrison, R. J., January 1984.
No description available.
3. Efficient simulation techniques for biochemical reaction networks. Lester, Christopher, January 2017.
Discrete-state, continuous-time Markov models are becoming commonplace in the modelling of biochemical processes. The mathematical formulations that such models lead to are opaque, and, due to their complexity, are often considered analytically intractable. As such, a variety of Monte Carlo simulation algorithms have been developed to explore model dynamics empirically. Whilst well-known methods, such as the Gillespie Algorithm, can be implemented to investigate a given model, the computational demands of traditional simulation techniques remain a significant barrier to modern research. In order to further develop and explore biologically relevant stochastic models, new and efficient computational methods are required. In this thesis, high-performance simulation algorithms are developed to estimate summary statistics that characterise a chosen reaction network. The algorithms make use of variance reduction techniques, which exploit statistical properties of the model dynamics, so that the statistics can be computed efficiently. The multi-level method is an example of a variance reduction technique. The method estimates summary statistics of well-mixed, spatially homogeneous models by using estimates from multiple ensembles of sample paths of different accuracies. In this thesis, the multi-level method is developed in three directions: firstly, a nuanced implementation framework is described; secondly, a reformulated method is applied to stiff reaction systems; and, finally, different approaches to variance reduction are implemented and compared. The variance reduction methods that underpin the multi-level method are then re-purposed to understand how the dynamics of a spatially-extended Markov model are affected by changes in its input parameters. By exploiting the inherent dynamics of spatially-extended models, an efficient finite difference scheme is used to estimate parametric sensitivities robustly. The new simulation methods are tested for functionality and efficiency with a range of illustrative examples. The thesis concludes with a discussion of our findings, and a number of future research directions are proposed.
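For context, the Gillespie direct method mentioned above simulates such models exactly: it draws an exponential waiting time from the current total propensity and then fires one reaction. A minimal sketch for the single decay reaction X -> 0 (an illustrative example, not code from the thesis; the rate constant and initial copy number are arbitrary):

```python
import numpy as np

def gillespie_decay(k=0.1, x0=100, t_max=50.0, seed=0):
    """One exact sample path of the decay reaction X -> 0 with propensity k*X."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while x > 0:
        t += rng.exponential(1.0 / (k * x))  # waiting time ~ Exp(total propensity)
        if t > t_max:
            break
        x -= 1                               # fire the (only) reaction channel
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_decay()
print(f"{len(times) - 1} reactions fired; final copy number {states[-1]}")
```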
4. Econometric analysis of limited dependent time series. Manrique Garcia, Aurora, January 1997.
No description available.
5. Design and evaluation of a Monte Carlo model of a low-cost kilovoltage x-ray arc therapy system. Breitkreutz, Dylan Yamabe, 28 June 2019.
There is a growing global need for proper access to radiation therapy. This need is most acute in low- and middle-income countries, but it exists in some high-income countries as well. The solution to this problem is complex and requires changes in government policy, education and technology. The objective of the work contained in this dissertation is the development of a novel external beam radiation therapy system capable of treating a variety of cancers. The intent is to provide a cost-effective radiation therapy system that can primarily be utilized in low- and middle-income countries. This new system uses kilovoltage rather than megavoltage x-rays and is therefore much more cost-effective. The ultimate purpose of this kilovoltage radiation therapy system is to improve access to radiation therapy worldwide by supplementing current radiation therapy technology.
As a first step, the kilovoltage x-ray arc therapy (KVAT) system was modeled using the EGSnrc BEAMnrc and DOSXYZnrc Monte Carlo software tools. For this initial study, 200 kV arc therapy was simulated on cylindrical water phantoms of two sizes, each of which contained a variety of planning target volume (PTV) sizes and locations. Additionally, prone and supine partial breast irradiation treatment plans were generated using KVAT. The objective of this work was to determine whether skin-sparing could be achieved with the KVAT system while delivering a clinically relevant dose rate to the PTV. The results of the study indicated that skin-sparing is indeed achievable and that the quality of KVAT treatment plans improves for full 360-degree arcs and smaller PTV sizes.
The second step of this project involved the Monte Carlo simulation of KVAT treatment plans for breast, lung and prostate cancer. Spherical PTVs of 3 cm diameter were used for the breast and lung treatment plans, while a 4 cm diameter PTV was used for the prostate. Additionally, inverse optimization was utilized to make full use of the non-conformal irradiation geometry of KVAT. As a means of comparison, megavoltage treatment plans that could be delivered by a clinical linear accelerator were generated for each patient as well. In order to evaluate the safety of the KVAT treatment plans, dose constraints were taken from published Radiation Therapy Oncology Group (RTOG) reports. The results of this study indicated that the 200 kV breast and 225 kV lung KVAT treatment plans were within dose constraints and could be delivered in a reasonable length of time. The 225 kV prostate treatment plan, while technically within dose constraints, delivered a large dose to non-critical healthy tissues due to the limited number of beam angles that did not pass through bony anatomy. It was concluded that plans such as the prostate, with large volumes of bone present, might not be feasible for KVAT treatment.
The third step expanded upon the previous work by simulating more realistic KVAT treatment plans using PTVs contoured by radiation oncologists. Additionally, this study used a completely redesigned KVAT geometry, which employed a stationary reflection anode and a new collimator design. The design modeled in this study was based upon the specifications of the prototype system under construction by PrecisionRT, a commercial partner. Three stereotactic ablative radiotherapy (SABR) lung patients who had received treatment at the Vancouver Island Cancer Centre were selected. In order to fully cover the PTV of each patient, spherical sub-volumes were placed within the clinically contoured PTV. Dose constraints for organs at risk were taken from an RTOG report on stereotactic body radiation therapy and were used to inversely optimize the 200 kV KVAT treatment plans. The calculated KVAT plans were compared with the clinical 6 MV SABR plans delivered to each patient. The results of this study indicated that the KVAT lung plans were within dose constraints for all three patients, with the exception of the ribs in the second patient, who had a tumor directly adjacent to the rib cage.
The fourth and last step of this project was the experimental validation of a simple, proof-of-principle KVAT system. Simple geometric methods were used to design a collimator consisting of two slabs of brass separated by ~6 cm, each with five apertures, which together create an array of five converging beamlets. The collimator was used with a tabletop x-ray tube system. A rectangular solid water phantom and a cylindrical TIVAR 1000 phantom were placed on a rotation stage and irradiated using 360-degree arcs. Gafchromic EBT3 film was placed in each phantom to measure two-dimensional dose distributions. The film dose distributions were analyzed and compared to Monte Carlo-generated dose distributions. Both the rectangular solid water phantom and the cylindrical TIVAR phantom showed skin-sparing effects in their dose distributions. The highest degree of skin-sparing was achieved in the larger, 20 cm diameter cylindrical phantom. Furthermore, the measured film data and calculated metrics of the rectangular phantom were within 10% of the Monte Carlo-calculated values for two out of three films. The discrepancy in the third film can be explained by errors in the experimental setup.
In conclusion, the work contained in this dissertation has established, by means of Monte Carlo simulations and experimental dosimetry, the feasibility of a cost-effective kilovoltage arc-therapy system designed to treat deep-seated lesions. The studies performed so far suggest that KVAT is most suitable for smaller lesions at sites that do not involve large amounts of bony anatomy. Perhaps most importantly, an experimental study has demonstrated the skin-sparing ability of a simple KVAT prototype.
6. On large deviations and design of efficient importance sampling algorithms. Nyquist, Pierre, January 2014.
This thesis consists of four papers, presented in Chapters 2-5, on the topics of large deviations and stochastic simulation, particularly importance sampling. The four papers make theoretical contributions to the development of a new approach for analyzing the efficiency of importance sampling algorithms by means of large deviation theory, and to the design of efficient algorithms using the subsolution approach developed by Dupuis and Wang (2007). In the first two papers of the thesis, the random output of an importance sampling algorithm is viewed as a sequence of weighted empirical measures and weighted empirical processes, respectively. The main theoretical results are a Laplace principle for the weighted empirical measures (Paper 1) and a moderate deviation result for the weighted empirical processes (Paper 2). The Laplace principle for weighted empirical measures is used to propose an alternative measure of efficiency based on the associated rate function. The moderate deviation result for weighted empirical processes is an extension of what can be seen as the empirical process version of Sanov's theorem. Together with a delta method for large deviations, established by Gao and Zhao (2011), we show moderate deviation results for importance sampling estimators of the risk measures Value-at-Risk and Expected Shortfall. The final two papers of the thesis are concerned with the design of efficient importance sampling algorithms using subsolutions of partial differential equations of Hamilton-Jacobi type (the subsolution approach). In Paper 3 we show a min-max representation of viscosity solutions of Hamilton-Jacobi equations. In particular, the representation suggests a general approach for constructing subsolutions to equations associated with terminal value problems and exit problems. Since the design of efficient importance sampling algorithms is connected to such subsolutions, the min-max representation facilitates the construction of efficient algorithms. In Paper 4 we consider the problem of constructing efficient importance sampling algorithms for a certain type of Markovian intensity model for credit risk. The min-max representation of Paper 3 is used to construct subsolutions to the associated Hamilton-Jacobi equation, and the corresponding importance sampling algorithms are investigated both theoretically and numerically. The thesis begins with an informal discussion of stochastic simulation, followed by brief mathematical introductions to large deviations and importance sampling.
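To make the importance sampling idea concrete, here is a minimal sketch, not taken from the thesis: the Gaussian tail example, the choice of tilted proposal, and all parameters are assumptions for illustration. Large deviation theory suggests centering the proposal on the rare set, and the estimator reweights each sample by the likelihood ratio.

```python
import math
import numpy as np

def is_gaussian_tail(a, n=100_000, seed=0):
    """Importance sampling estimate of P(Z > a) for Z ~ N(0,1), using the
    exponentially tilted proposal N(a, 1) and likelihood-ratio weights."""
    rng = np.random.default_rng(seed)
    y = rng.normal(a, 1.0, n)                 # sample from the tilted law
    w = np.exp(-a * y + 0.5 * a**2)           # likelihood ratio dN(0,1)/dN(a,1) at y
    est = np.where(y > a, w, 0.0)             # weighted indicator of the rare event
    return est.mean(), est.std(ddof=1) / np.sqrt(n)

p, se = is_gaussian_tail(4.0)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))
print(f"IS: {p:.3e} +/- {se:.1e} (exact {exact:.3e})")
# A crude Monte Carlo estimate with the same sample size would see only a
# handful of exceedances of a = 4 and have far larger relative error.
```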
7. Applying MCMC methods to multi-level models. Browne, William J., January 1998.
No description available.
8. Projector Quantum Monte Carlo methods for linear and non-linear wavefunction ansatzes. Schwarz, Lauretta Rebecca, January 2017.
This thesis is concerned with the development of a Projector Quantum Monte Carlo method for non-linear wavefunction ansatzes and its application to strongly correlated materials. This new approach is partially inspired by a prior application of the Full Configuration Interaction Quantum Monte Carlo (FCIQMC) method to the three-band (p-d) Hubbard model. Through repeated stochastic application of a projector, FCIQMC projects out a stochastic description of the Full Configuration Interaction (FCI) ground state wavefunction, a linear combination of Slater determinants spanning the full Hilbert space. The study of the p-d Hubbard model demonstrates that the nature of this FCI expansion is profoundly affected by the choice of single-particle basis. In a counterintuitive manner, the effectiveness of a one-particle basis in producing a sparse, compact and rapidly converging FCI expansion is not necessarily paralleled by its ability to describe the physics of the system within a single determinant. The results suggest that, with an appropriate basis, single-reference quantum chemical approaches may be able to describe many-body wavefunctions of strongly correlated materials. Furthermore, this thesis presents a reformulation of the projected imaginary-time evolution of FCIQMC as a Lagrangian minimisation. This naturally allows for the optimisation of polynomially complex wavefunction ansatzes with a polynomial rather than exponential scaling with system size. The proposed approach blurs the line between traditional Variational and Projector Quantum Monte Carlo approaches, whilst incorporating developments from the field of deep-learning neural networks which can be expressed as a modification of the projector. The ability of the developed approach to sample and optimise arbitrary non-linear wavefunctions is demonstrated with several classes of Tensor Network States, all of which involve controlled approximations but still retain systematic improvability towards exactness. Thus, by applying the method to strongly correlated Hubbard models, as well as ab initio systems, including a fully periodic ab initio graphene sheet, many-body wavefunctions and their one- and two-body static properties are obtained. The proposed approach can handle and simultaneously optimise large numbers of variational parameters, greatly exceeding those of alternative Variational Monte Carlo approaches.
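The projection step at the heart of such methods can be illustrated deterministically: repeatedly applying 1 - dtau*(H - S) to a trial vector damps excited states and converges to the ground state. This toy dense-matrix sketch is not the stochastic FCIQMC algorithm itself; the Hamiltonian and parameters are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
H = rng.standard_normal((n, n))
H = (H + H.T) / 2.0                     # toy symmetric "Hamiltonian"

dtau, shift = 0.01, 0.0                 # time step small enough for stability
psi = rng.standard_normal(n)            # arbitrary trial wavefunction
for _ in range(5000):
    psi = psi - dtau * (H @ psi - shift * psi)   # one projection step
    psi /= np.linalg.norm(psi)

energy = psi @ H @ psi
print(f"projected energy {energy:.6f} vs exact {np.linalg.eigvalsh(H)[0]:.6f}")
```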
9. Techniques to handle missing values in a factor analysis. Turville, Christopher, University of Western Sydney, Faculty of Informatics, Science and Technology, January 2000.
A factor analysis typically involves a large collection of data, and it is common for some of the data to be unrecorded. This study investigates the ability of several techniques to handle missing values in a factor analysis, including complete cases only, all available cases, imputing means, an iterative component method, singular value decomposition and the EM algorithm. A data set representative of those used for factor analysis is simulated. Some of these data are then randomly removed to represent missing values, and the performance of the techniques is investigated over a wide range of conditions. Several criteria are used to assess the abilities of the techniques to handle missing values in a factor analysis. Overall, no single technique performs best under all of the conditions studied. The EM algorithm is generally the most effective technique, except when ill-conditioned matrices are present or when computing time is of concern. Some theoretical concerns are introduced regarding the effects that changes in the correlation matrix have on the loadings of a factor analysis. A complicated expression is derived showing that the change in factor loadings resulting from a change in the elements of a correlation matrix involves components of the eigenvectors and eigenvalues.
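As a sketch of the most effective technique compared above, the following implements EM for a multivariate normal model with values missing at random and then extracts first-factor loadings from the resulting correlation matrix. This is an illustrative reconstruction, not the author's code; the model, simulated data and parameters are assumptions.

```python
import numpy as np

def em_mvn(X, n_iter=100):
    """EM estimates of the mean and covariance of multivariate normal data X
    (n x p, NaN marks a missing entry): the E-step imputes conditional means
    and accumulates conditional covariances; the M-step re-estimates."""
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        Xhat = np.where(np.isnan(X), mu, X)
        C = np.zeros((p, p))
        for i in range(n):
            m = np.isnan(X[i])
            if not m.any():
                continue
            o = ~m
            B = sigma[np.ix_(m, o)] @ np.linalg.inv(sigma[np.ix_(o, o)])
            Xhat[i, m] = mu[m] + B @ (X[i, o] - mu[o])          # conditional mean
            C[np.ix_(m, m)] += sigma[np.ix_(m, m)] - B @ sigma[np.ix_(o, m)]
        mu = Xhat.mean(axis=0)
        d = Xhat - mu
        sigma = (d.T @ d + C) / n
    return mu, sigma

rng = np.random.default_rng(0)
true_corr = np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.5], [0.3, 0.5, 1.0]])
X = rng.multivariate_normal(np.zeros(3), true_corr, size=200)
X[rng.random(X.shape) < 0.15] = np.nan      # 15% missing completely at random
mu, sigma = em_mvn(X)
corr = sigma / np.sqrt(np.outer(np.diag(sigma), np.diag(sigma)))
w, V = np.linalg.eigh(corr)
loadings = V[:, -1] * np.sqrt(w[-1])        # first-factor (principal-axis) loadings
print(np.round(loadings, 3))
```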
10. Complexity and Error Analysis of Numerical Methods for Wireless Channels, SDE, Random Variables and Quantum Mechanics. Hoel, Håkon, January 2012.
This thesis consists of four papers that consider different aspects of stochastic process modeling, error analysis, and minimization of computational cost. In Paper I, we construct a Multipath Fading Channel (MFC) model for wireless channels with noise introduced through scatterers flipping on and off. By coarse graining the MFC model, a Gaussian process channel model is developed. Complexity and accuracy comparisons of the models are conducted. In Paper II, we generalize a multilevel Forward Euler Monte Carlo method introduced by Mike Giles for the approximation of expected values depending on solutions of Itô stochastic differential equations. Giles' work proposed and analyzed a Forward Euler Multilevel Monte Carlo (MLMC) method based on realizations on a hierarchy of uniform time discretizations and a coarse-graining-based control variates idea to reduce the computational cost required by a standard single-level Forward Euler Monte Carlo method. This work extends Giles' MLMC method from uniform to adaptive time grids. It has the same improvement in computational cost and is applicable to a larger set of problems. In Paper III, we consider the problem of estimating the mean of a random variable by a sequential stopping rule Monte Carlo method. The performance of a typical second-moment-based sequential stopping rule is shown to be unreliable, both by numerical examples and by analytical arguments. Based on analysis and approximation of error bounds, we construct a higher-moment-based stopping rule which performs more reliably. In Paper IV, Born-Oppenheimer dynamics is shown to provide an accurate approximation of time-independent Schrödinger observables for a molecular system with an electron spectral gap, in the limit of a large ratio of nuclear and electron masses, without assuming that the nuclei are localized to vanishing domains. The derivation, based on a Hamiltonian system interpretation of the Schrödinger equation and stability of the corresponding hitting time Hamilton-Jacobi equation for non-ergodic dynamics, bypasses the usual separation of nuclei and electron wave functions, includes caustic states, and gives a different perspective on the Born-Oppenheimer approximation, Schrödinger Hamiltonian systems and numerical simulation in molecular dynamics modeling at constant energy.
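The MLMC idea of Paper II can be sketched on a uniform grid (the adaptive time grids are the paper's contribution and are not attempted here). The following toy estimates E[X_T] for geometric Brownian motion by telescoping coarse/fine Euler-Maruyama corrections built from shared Brownian increments; the SDE and all parameters are assumptions for the example.

```python
import numpy as np

def mlmc_level(level, n_samples, rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2, M=2):
    """Mean of the level-'level' MLMC correction for E[X_T] of the GBM
    dX = mu*X dt + sigma*X dW: a fine Euler path with M**level steps coupled
    to a coarse path with M**(level-1) steps via shared Brownian increments."""
    nf = M ** level
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((n_samples, nf))
    xf = np.full(n_samples, x0)
    for k in range(nf):                      # fine Euler-Maruyama path
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
    if level == 0:
        return xf.mean()
    xc = np.full(n_samples, x0)
    dWc = dW.reshape(n_samples, nf // M, M).sum(axis=2)  # coarse increments
    for k in range(nf // M):                 # coupled coarse path
        xc = xc + mu * xc * (M * dt) + sigma * xc * dWc[:, k]
    return (xf - xc).mean()                  # correction E[P_l - P_{l-1}]

# Telescoping sum: E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}]
rng = np.random.default_rng(0)
estimate = sum(mlmc_level(l, 20_000, rng) for l in range(6))
print(f"MLMC estimate {estimate:.4f}; exact E[X_T] = exp(mu*T) = {np.exp(0.05):.4f}")
```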