11

Pricing of European options using empirical characteristic functions

Binkowski, Karol Patryk. January 2008 (has links)
Thesis (PhD)--Macquarie University, Division of Economic and Financial Studies, Dept. of Statistics, 2008. / Bibliography: p. 73-77.
12

Performance of alternative option pricing models during spikes in the FTSE 100 volatility index : Empirical evidence from FTSE100 index options

Rehnby, Nicklas January 2017 (has links)
Derivatives play a large and significant role in today's financial markets, and the popularity of options has increased. This has raised the demand for a suitable option-pricing model, since the ground-breaking model developed by Black & Scholes (1973) has poor pricing performance. Practitioners and academics have over the years developed different models under the assumption of non-constant volatility, without reaching any conclusion about which model is most suitable to use. This thesis examines four models. The first is the Practitioners Black & Scholes model proposed by Christoffersen & Jacobs (2004b). The second is Heston's (1993) continuous-time stochastic volatility model; a modification of this model, the Strike Vector Computation suggested by Kilin (2011), is also included. The last model is the Heston & Nandi (2000) Generalized Autoregressive Conditional Heteroscedasticity (GARCH) type discrete model. The models are evaluated from a practical point of view, with the goal of finding the model with the best pricing performance and the most practical usage. The models' robustness is also tested, to see how they perform out-of-sample in high and low implied-volatility markets respectively. All the models are affected in the robustness test: out-of-sample performance deteriorates in a high implied-volatility market. The results show that both stochastic volatility models are superior in the in-sample and out-of-sample analyses, while the GARCH-type discrete model performs surprisingly poorly in both. The results indicate that option data, rather than historical return data, should be used to estimate the models' parameters. This thesis also provides insight into why overnight-index-swap (OIS) rates should be used instead of LIBOR rates as a proxy for the risk-free rate.
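For reference, the constant-volatility Black & Scholes (1973) benchmark that the abstract measures the other models against can be sketched in a few lines (a minimal illustration with hypothetical inputs, not the thesis's calibration):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black & Scholes (1973) European call price under constant volatility.
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical at-the-money one-year call.
price = bs_call(S=100.0, K=100.0, T=1.0, r=0.02, sigma=0.2)
```

The stochastic-volatility and GARCH models studied in the thesis all relax the constant-sigma assumption made here.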
13

Essays on Fine Structure of Asset Returns, Jumps, and Stochastic Volatility

Yu, Jung-Suk 22 May 2006 (has links)
There has been an ongoing debate about the choice of the most suitable model amongst a variety of model specifications and parameterizations. The first dissertation essay investigates whether asymmetric leptokurtic return distributions, such as Hansen's (1994) skewed t-distribution combined with GARCH specifications, can outperform mixed GARCH-jump models such as Maheu and McCurdy's (2004) GARJI model, which incorporates an autoregressive conditional jump-intensity parameterization in the discrete-time framework. I find that the more parsimonious GJR-HT model is superior to mixed GARCH-jump models. Likelihood-ratio (LR) tests, information criteria such as AIC, SC, and HQ, and Value-at-Risk (VaR) analysis confirm that GJR-HT is one of the most suitable model specifications, giving both a better fit to the data and parsimony of parameterization. The benefits of estimating GARCH models using asymmetric leptokurtic distributions are more substantial for highly volatile series, such as emerging stock markets, which have a higher degree of non-normality. Furthermore, Hansen's skewed t-distribution also provides an excellent risk-management tool, as evidenced by the VaR analysis. The second dissertation essay provides a variety of empirical evidence that stochastic volatility is redundant for S&P 500 index returns when it is combined with infinite-activity pure Lévy jump models, and that stochastic volatility is important for reducing pricing errors for S&P 500 index options regardless of the jump specification. This finding is important because recent studies have shown that stochastic volatility in a continuous-time framework provides an excellent fit for financial asset returns when combined with finite-activity compound Poisson jump-diffusion models of Merton's type.
The second essay also shows that the stochastic volatility with jumps (SVJ) and extended variance-gamma with stochastic volatility (EVGSV) models perform almost equally well for option pricing, which strongly implies that the type of Lévy jump specification is not an important factor in model performance once stochastic volatility is incorporated. In the second essay, I compute option prices via an improved Fast Fourier Transform (FFT) algorithm using characteristic functions, matching arbitrary equally spaced log-strike grids to the moneyness and maturity of actual market option prices.
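The characteristic-function pricing route mentioned in the second essay can be illustrated with the Carr & Madan (1999) damped-transform representation of the call price. The sketch below uses plain trapezoidal quadrature instead of the FFT, and a Black-Scholes characteristic function as a stand-in for the Lévy / stochastic-volatility models of the essay; all parameter values are hypothetical:

```python
import numpy as np

def bs_charfn(u, S0, r, sigma, T):
    # Characteristic function of log S_T under Black-Scholes, used here as a
    # stand-in for any Levy / stochastic-volatility characteristic function.
    mu = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

def call_via_charfn(K, S0, r, sigma, T, alpha=1.5, vmax=200.0, n=8001):
    # Carr & Madan (1999) damped transform of the call price, evaluated by
    # trapezoidal quadrature for clarity rather than by the FFT.
    v = np.linspace(0.0, vmax, n)
    k = np.log(K)
    phi = bs_charfn(v - 1j * (alpha + 1), S0, r, sigma, T)
    psi = np.exp(-r * T) * phi / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
    integrand = np.real(np.exp(-1j * v * k) * psi)
    integral = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(v))
    return np.exp(-alpha * k) / np.pi * integral

# Hypothetical at-the-money example; the value should match Black-Scholes.
price = call_via_charfn(K=100.0, S0=100.0, r=0.02, sigma=0.2, T=1.0)
```

Replacing the quadrature with an FFT over a log-strike grid recovers the scheme the essay refines.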
14

Gender differences in child sexual abuse characteristics and long-term outcomes of mental illness, suicide, and fatal overdose : a prospective investigation

Spataro, Josie, 1973- January 2002 (has links)
Abstract not available
15

Pricing of European options using empirical characteristic functions

Binkowski, Karol Patryk January 2008 (has links)
Thesis (PhD)--Macquarie University, Division of Economic and Financial Studies, Dept. of Statistics, 2008. / Bibliography: p. 73-77. / Introduction -- Lévy processes used in option pricing -- Option pricing for Lévy processes -- Option pricing based on empirical characteristic functions -- Performance of the five models on historical data -- Conclusions -- References -- Appendix A. Proofs -- Appendix B. Supplements -- Appendix C. Matlab programs. / Pricing problems for financial derivatives are among the most important in Quantitative Finance. Since 1973, when a Nobel-prize-winning model was introduced by Black, Merton and Scholes, the Brownian Motion (BM) process has gained huge attention from professionals. It is now known, however, that stock-market log-returns do not follow the very popular BM process. Derivative-pricing models based on more general Lévy processes tend to perform better. -- Carr & Madan (1999) and Lewis (2001) (CML) developed a method for vanilla-option valuation based on a characteristic function of asset log-returns, assuming that they follow a Lévy process. Assuming that at least part of the problem lies in adequate modelling of the distribution of log-returns of the underlying price process, we instead take a nonparametric approach in the CML formula and replace the unknown characteristic function with its empirical version, the Empirical Characteristic Function (ECF). We consider four modifications of this model based on the ECF. The first modification requires only historical log-returns of the underlying price process. The other three modifications also need a calibration based on historical option prices. We compare their performance on historical data of the DAX index and on ODAX options written on the index between 1 June 2006 and 17 May 2007.
The resulting pricing errors show that one of our models performs, at least in the cases considered in the project, better than the Carr & Madan (1999) model based on calibration of a parametric Lévy model, the VG model. -- Our study seems to confirm the necessity of using implied parameters, in addition to adequate modelling of the probability distribution of the asset log-returns. It indicates that to reproduce the behaviour of real option prices precisely, other factors such as stochastic volatility need to be included in the option-pricing model. Fortunately, the discrepancies between our model and real option prices are reduced by introducing the implied parameters, which seem to be easily modelled and forecast using a mixture of regression and time-series models. Such an approach is computationally less expensive than explicit modelling of the stochastic volatility as in the Heston (1993) model and its modifications. / Mode of access: World Wide Web. / x, 111 p. ill., charts
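The nonparametric ingredient of the thesis, the empirical characteristic function, is simply the sample average of exp(iuX) over the observed log-returns. A minimal sketch on simulated Gaussian data (hypothetical, not the DAX series used in the thesis):

```python
import numpy as np

def ecf(u, log_returns):
    # Empirical characteristic function: for each point of u, the sample
    # average of exp(i * u * X_j) over the observed log-returns X_j.
    u = np.atleast_1d(u)
    return np.exp(1j * np.outer(u, log_returns)).mean(axis=1)

# Illustration on simulated Gaussian log-returns (hypothetical data, not DAX).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.01, size=5000)
u = np.array([0.0, 50.0])
phi_hat = ecf(u, x)
# phi_hat[0] is exactly 1; for Gaussian data phi_hat[1] should be close to
# the true value exp(-0.5 * (0.01 * 50)**2) = exp(-0.125).
```

In the thesis this estimator is substituted for the unknown characteristic function inside the CML pricing formula.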
16

Algorithmes semi-implicites pour des problèmes d’interaction fluide structure : approches procédures partagées et monolithiques / Semi-implicit algorithms for fluid structure interaction problems : shared and monolithic procedures approaches

Sy, Soyibou 23 October 2009 (has links)
Dans cette thèse on a développé des algorithmes semi-implicites procédures partagées et monolithiques pour l'interaction entre un fluide gouverné par le modèle de Navier Stokes et une structure. Dans le premier chapitre, on présente un algorithme semi-implicite procédures partagées pour l'interaction entre un fluide et une structure gouvernée soit par les équations d'élasticité linéaire ou soit par le modèle de Saint-Venant Kirchhoff non linéaire. Dans le second chapitre, on propose un algorithme semi-implicite procédures partagées pour l'interaction entre un fluide et une structure de modèle linéaire et on montre un résultat de stabilité inconditionnelle en temps de l'algorithme. Un problème d'optimisation est résolu dans les deux algorithmes précédents, afin de satisfaire les conditions de continuité des vitesses et d'égalité des contraintes à l'interface. Durant les itérations de BFGS pour résoudre le problème d'optimisation, le maillage fluide reste fixe et la matrice fluide n'est factorisée qu'une seule fois, ce qui réduit l'effort de calcul. Dans le troisième chapitre, un algorithme semi-implicite monolithique pour l'interaction entre un fluide et une structure de modèle linéaire est proposé. L'algorithme utilise un maillage global pour le domaine fluide structure. La condition de continuité des vitesses à l'interface est automatiquement satisfaite et celle de l'égalité des contraintes n'apparaît pas explicitement dans la formulation faible. A chaque pas de temps on résout un système monolithique d'inconnues vitesse et pression définies sur le domaine global. Le temps CPU est réduit quand l'approche monolithique est utilisée à la place des procédures partagées. / Our aim was to develop some partitioned procedures and monolithic semi-implicit algorithms for solving the interaction between a fluid governed by Navier Stokes equations and a structure. 
In the first chapter, we propose a partitioned-procedures semi-implicit algorithm for solving fluid-structure interaction problems, with a structure governed either by the linear elasticity equations or by the non-linear Saint-Venant Kirchhoff model. In the second chapter, we present a partitioned-procedures semi-implicit algorithm for solving the fluid-structure interaction problem with a linear model for the structure, and we prove an unconditional stability result for the algorithm. In both algorithms, an optimization problem must be solved in order to obtain the continuity of the velocity as well as the continuity of the stress at the interface. During the BFGS iterations for solving the optimization problem, the fluid mesh does not move and the fluid matrix is factorized only once, which reduces the computational effort. In the third chapter, we present a monolithic semi-implicit algorithm for solving the fluid-structure interaction problem with a linear model for the structure. The algorithm uses one global mesh for the fluid-structure domain. The continuity of velocity at the interface is automatically satisfied, and the continuity of stress does not appear explicitly in the global weak form, owing to the action-reaction principle. At each time step, we solve a monolithic system in the velocity and pressure unknowns defined on the global fluid-structure domain. When the monolithic approach is used, the CPU time is reduced compared to the partitioned-procedures strategy.
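The optimization-based interface matching in the partitioned approach can be caricatured in one dimension: treat the two solvers as black boxes mapping an interface value to a stress, and let BFGS drive the stress mismatch to zero. The linear "solvers" below are toy stand-ins, not the thesis's discretizations:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for the fluid and structure solvers: each maps an interface
# velocity guess to the stress it induces (simple linear responses here).
def fluid_stress(v):
    return 2.0 * v + 1.0

def structure_stress(v):
    return -0.5 * v + 4.0

def interface_mismatch(v):
    # Cost that vanishes when the two stresses agree at the interface --
    # the continuity condition the thesis enforces by optimization.
    return (fluid_stress(v[0]) - structure_stress(v[0]))**2

res = minimize(interface_mismatch, x0=np.array([0.0]), method="BFGS")
# At the optimum the stresses coincide: 2v + 1 = -0.5v + 4, i.e. v = 1.2.
```

In the real algorithm each "stress" evaluation is a PDE solve, which is why keeping the fluid matrix factorized once across BFGS iterations matters.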
17

Numerical analysis and multi-precision computational methods applied to the extant problems of Asian option pricing and simulating stable distributions and unit root densities

Cao, Liang January 2014 (has links)
This thesis considers new methods that exploit recent developments in computer technology to address three extant problems in the area of Finance and Econometrics. The problem of Asian option pricing has endured for the last two decades in spite of many attempts to find a robust solution across all parameter values. All recently proposed methods are shown to fail when computations are conducted using standard machine precision because as more and more accuracy is forced upon the problem, round-off error begins to propagate. Using recent methods from numerical analysis based on multi-precision arithmetic, we show using the Mathematica platform that all extant methods have efficacy when computations use sufficient arithmetic precision. This creates the proper framework to compare and contrast the methods based on criteria such as computational speed for a given accuracy. Numerical methods based on a deformation of the Bromwich contour in the Geman-Yor Laplace transform are found to perform best provided the normalized strike price is above a given threshold; otherwise methods based on Euler approximation are preferred. The same methods are applied in two other contexts: the simulation of stable distributions and the computation of unit root densities in Econometrics. The stable densities are all nested in a general function called a Fox H function. The same computational difficulties as above apply when using only double-precision arithmetic but are again solved using higher arithmetic precision. We also consider simulating the densities of infinitely divisible distributions associated with hyperbolic functions. Finally, our methods are applied to unit root densities. Focusing on the two fundamental densities, we show our methods perform favorably against the extant methods of Monte Carlo simulation, the Imhof algorithm and some analytical expressions derived principally by Abadir. 
Using Mathematica, the main two-dimensional Laplace transform in this context is reduced to a one-dimensional problem.
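The multi-precision remedy can be illustrated in Python with mpmath, whose `invertlaplace` routine implements (among others) Talbot's deformation of the Bromwich contour. The sketch validates the approach on a known transform pair rather than the Geman-Yor transform itself:

```python
from mpmath import mp, invertlaplace, exp

# Work at 50 significant digits so round-off does not contaminate the
# contour quadrature -- the thesis's central point about extant methods
# failing at standard machine precision.
mp.dps = 50

# Known pair for validation: F(s) = 1/(s + 1) inverts to f(t) = exp(-t).
F = lambda s: 1 / (s + 1)
f_at_1 = invertlaplace(F, 1.0, method='talbot')
# f_at_1 should agree with exp(-1) to many digits.
```

With double precision only, the same contour quadrature loses digits as the node count grows, which is the propagation of round-off the abstract describes.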
18

Beiträge zur expliziten Fehlerabschätzung im zentralen Grenzwertsatz

Paditz, Ludwig 04 June 2013 (has links) (PDF)
In der Arbeit wird das asymptotische Verhalten von geeignet normierten und zentrierten Summen von Zufallsgrößen untersucht, die entweder unabhängig sind oder im Falle der Abhängigkeit als Martingaldifferenzfolge oder stark multiplikatives System auftreten. Neben der klassischen Summationstheorie werden die Limitierungsverfahren mit einer unendlichen Summationsmatrix oder einer angepaßten Folge von Gewichtsfunktionen betrachtet. Es werden die Methode der charakteristischen Funktionen und besonders die direkte Methode der konjugierten Verteilungsfunktionen weiterentwickelt, um quantitative Aussagen über gleichmäßige und ungleichmäßige Restgliedabschätzungen im zentralen Grenzwertsatz zu beweisen. Die Untersuchungen werden dabei in der Lp-Metrik, 1 < p < ∞ oder p = 1 bzw. p = ∞, durchgeführt, wobei der Fall p = ∞ der üblichen sup-Norm entspricht. Darüber hinaus wird im Fall unabhängiger Zufallsgrößen der lokale Grenzwertsatz für Dichten betrachtet. Mittels der elektronischen Datenverarbeitung werden neue numerische Resultate erhalten. Die Arbeit wird abgerundet durch verschiedene Hinweise auf praktische Anwendungen. / In this work the asymptotic behavior of suitably centered and normalized sums of random variables is investigated; the variables are either independent or, in the dependent case, form a martingale-difference sequence or a strongly multiplicative system. In addition to classical summation theory, limiting processes with an infinite summation matrix or an adapted sequence of weight functions are considered. The method of characteristic functions, and especially the direct method of conjugate distribution functions, is developed further in order to prove quantitative statements about uniform and non-uniform estimates of the remainder term in the central limit theorem. The investigations are carried out in the Lp metric, 1 < p < ∞ as well as p = 1 and p = ∞, where the case p = ∞ corresponds to the usual sup-norm.
In addition, in the case of independent random variables the local limit theorem for densities is considered. By means of electronic data processing, new numerical results are obtained. The work is rounded off by various references to practical applications.
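The sup-norm (p = ∞) distance appearing in such remainder estimates can be approximated numerically. A sketch using simulated standardized sums of uniform variables (a hypothetical example, not a computation from the thesis):

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def sup_norm_error(sample_sums):
    # Kolmogorov (sup-norm, p = infinity) distance between the empirical CDF
    # of the standardized sums and the standard normal CDF, checked at both
    # sides of each jump of the empirical CDF.
    s = np.sort(sample_sums)
    n = len(s)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    phi = np.array([normal_cdf(v) for v in s])
    return max(np.abs(ecdf_hi - phi).max(), np.abs(ecdf_lo - phi).max())

# Standardized sums of n i.i.d. uniform(-1, 1) variables (variance 1/3 each).
rng = np.random.default_rng(1)
n, reps = 30, 20000
sums = rng.uniform(-1, 1, size=(reps, n)).sum(axis=1) / np.sqrt(n / 3)
err = sup_norm_error(sums)
# err should already be small for n = 30, consistent with the central limit
# theorem and the kind of remainder bounds the thesis quantifies.
```

Quantitative bounds of the Berry-Esseen type control exactly this distance as a function of n and the moments of the summands.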
19

Beiträge zur expliziten Fehlerabschätzung im zentralen Grenzwertsatz

Paditz, Ludwig 27 April 1989 (has links)
In der Arbeit wird das asymptotische Verhalten von geeignet normierten und zentrierten Summen von Zufallsgrößen untersucht, die entweder unabhängig sind oder im Falle der Abhängigkeit als Martingaldifferenzfolge oder stark multiplikatives System auftreten. Neben der klassischen Summationstheorie werden die Limitierungsverfahren mit einer unendlichen Summationsmatrix oder einer angepaßten Folge von Gewichtsfunktionen betrachtet. Es werden die Methode der charakteristischen Funktionen und besonders die direkte Methode der konjugierten Verteilungsfunktionen weiterentwickelt, um quantitative Aussagen über gleichmäßige und ungleichmäßige Restgliedabschätzungen im zentralen Grenzwertsatz zu beweisen. Die Untersuchungen werden dabei in der Lp-Metrik, 1 < p < ∞ oder p = 1 bzw. p = ∞, durchgeführt, wobei der Fall p = ∞ der üblichen sup-Norm entspricht. Darüber hinaus wird im Fall unabhängiger Zufallsgrößen der lokale Grenzwertsatz für Dichten betrachtet. Mittels der elektronischen Datenverarbeitung werden neue numerische Resultate erhalten. Die Arbeit wird abgerundet durch verschiedene Hinweise auf praktische Anwendungen. / In this work the asymptotic behavior of suitably centered and normalized sums of random variables is investigated; the variables are either independent or, in the dependent case, form a martingale-difference sequence or a strongly multiplicative system. In addition to classical summation theory, limiting processes with an infinite summation matrix or an adapted sequence of weight functions are considered. The method of characteristic functions, and especially the direct method of conjugate distribution functions, is developed further in order to prove quantitative statements about uniform and non-uniform estimates of the remainder term in the central limit theorem. The investigations are carried out in the Lp metric, 1 < p < ∞ as well as p = 1 and p = ∞, where the case p = ∞ corresponds to the usual sup-norm.
In addition, in the case of independent random variables the local limit theorem for densities is considered. By means of electronic data processing, new numerical results are obtained. The work is rounded off by various references to practical applications.
