1

Efficient Risk Simulations for Linear Asset Portfolios

Sak, Halis, Hörmann, Wolfgang, Leydold, Josef January 2008 (has links) (PDF)
We consider the problem of calculating tail probabilities of the returns of linear asset portfolios. As a flexible and accurate model for the logarithmic returns we use the $t$-copula dependence structure with marginals following the generalized hyperbolic distribution. Exact calculation of the tail-loss probabilities is not possible, and even simulation leads to challenging numerical problems. Applying a new numerical inversion method for the generation of the marginals and importance sampling with a carefully selected mean shift, we develop an efficient simulation algorithm. Numerical results for a variety of realistic portfolio examples show an impressive performance gain. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
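The mean-shift importance-sampling idea in this abstract can be sketched in a few lines. This is a simplified stand-in, not the paper's algorithm: a standard normal loss model replaces the t-copula/generalized-hyperbolic model, and the sampling mean is shifted to the loss threshold itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_prob_is(threshold, mu_shift, n=100_000):
    """Estimate P(X < threshold) for X ~ N(0, 1) by importance sampling:
    draw from the shifted density N(mu_shift, 1) and reweight each draw
    by the likelihood ratio phi(x) / phi(x - mu_shift)."""
    x = rng.normal(mu_shift, 1.0, n)
    w = np.exp(-mu_shift * x + 0.5 * mu_shift**2)  # likelihood ratio
    return np.mean((x < threshold) * w)

# Deep left tail: crude Monte Carlo would see ~3 hits per 100k draws,
# while the shifted sampler places about half its draws in the tail.
est = tail_prob_is(-4.0, -4.0)
```

With the shift placed at the threshold, the relative error of the estimate drops by orders of magnitude compared with unshifted sampling of the same size.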
2

Efficient Numerical Inversion for Financial Simulations

Derflinger, Gerhard, Hörmann, Wolfgang, Leydold, Josef, Sak, Halis January 2009 (has links) (PDF)
Generating samples from generalized hyperbolic distributions and non-central chi-square distributions by inversion has become an important task for the simulation of recent models in finance in the framework of (quasi-) Monte Carlo. However, their distribution functions are quite expensive to evaluate and thus numerical methods like root finding algorithms are extremely slow. In this paper we demonstrate how our new method based on Newton interpolation and Gauss-Lobatto quadrature can be utilized for financial applications. Its fast marginal generation times make it competitive, even for situations where the parameters are not always constant. / Series: Research Report Series / Department of Statistics and Mathematics
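The precompute-once, invert-fast idea behind the paper's method can be illustrated with a much cruder device: a linear-interpolation table of the quantile function (the paper itself uses Newton interpolation at Gauss-Lobatto nodes, which is far more accurate per node). The standard normal stands in for the generalized hyperbolic distribution, and the grid size and cutoffs are arbitrary choices.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()

# Tabulate the quantile function once (the expensive evaluations) ...
u_grid = np.linspace(0.001, 0.999, 2049)
q_grid = np.array([nd.inv_cdf(u) for u in u_grid])

def fast_inverse(u):
    """... then generate variates cheaply by table lookup with
    linear interpolation between precomputed quantiles."""
    return np.interp(u, u_grid, q_grid)

x = fast_inverse(np.array([0.025, 0.5, 0.975]))
```

Feeding uniform (or quasi-random) numbers through `fast_inverse` yields approximately distributed samples without any root finding at generation time.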
3

On Clustering: Mixture Model Averaging with the Generalized Hyperbolic Distribution

Ricciuti, Sarah 11 1900 (has links)
Cluster analysis is commonly described as the classification of unlabeled observations into groups such that they are more similar to one another than to observations in other groups. Model-based clustering assumes that the data arise from a statistical (mixture) model and typically a group of many models are fit to the data, from which the `best' model is selected by a model selection criterion (often the BIC in mixture model applications). This chosen model is then the only model that is used for making inferences on the data. Although this is common practice, proceeding in this way ignores a large component of model selection uncertainty, especially for situations where the difference between the model selection criterion for two competing models is relatively insignificant. For this reason, recent interest has been placed on selecting a subset of models that are close to the selected best model and using a weighted averaging approach to incorporate information from multiple models in this set. Model averaging is not a novel approach, yet its presence in a clustering framework is minimal. Here, we use Occam's window to select a subset of models eligible for two types of averaging techniques: averaging a posteriori probabilities, and direct averaging of model parameters. The efficacy of these model-based averaging approaches is demonstrated for a family of generalized hyperbolic mixture models using real and simulated data. / Thesis / Master of Science (MSc)
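The weighting step in such averaging schemes can be sketched with BIC-based weights restricted to an Occam's window. This is a hypothetical sketch, not the thesis's procedure: it assumes a lower-is-better BIC convention and an arbitrary window width of 6.

```python
import numpy as np

def occam_window_weights(bic, c=6.0):
    """Weight models by exp(-0.5 * BIC difference from the best model),
    keeping only models inside an Occam's window of width c."""
    bic = np.asarray(bic, dtype=float)
    delta = bic - bic.min()                     # differences from best BIC
    w = np.where(delta < c, np.exp(-0.5 * delta), 0.0)
    return w / w.sum()

# Third model's BIC is more than c above the best, so it is excluded.
w = occam_window_weights([1000.2, 1001.0, 1012.5])
```

Posterior probabilities (or parameters, in the direct-averaging variant) would then be combined as a weighted sum over the retained models.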
4

Cornish-Fisher Expansion and Value-at-Risk method in application to risk management of large portfolios

Sjöstrand, Maria, Aktaş, Özlem January 2011 (has links)
One of the major problems faced by banks is how to manage the risk exposure of large portfolios. Under the Basel II regulation, banks have to measure risk using Value-at-Risk at the 99% confidence level. However, the regulation does not specify how Value-at-Risk is to be calculated. The easiest way is to assume that portfolio returns are normally distributed. Although this is the most common approach, other methods exist, and the previous crisis showed that the standard methods are unfortunately not always enough to prevent bankruptcy. This paper compares the classical methods of estimating risk with alternatives such as the Cornish-Fisher expansion (CFVaR) and the assumption of a generalized hyperbolic distribution. To carry out this study, we estimate the risk of a large portfolio consisting of ten stocks. These stocks are chosen from the NASDAQ 100 list in order to have highly liquid stocks (blue chips), and from different sectors to make the portfolio well diversified. To investigate the impact of dependence between the stocks in the portfolio, we also remove the two most correlated stocks and consider the resulting eight-stock portfolio. In both portfolios we give equal weight to the included stocks. The results show that for a well-diversified large portfolio none of the risk measures are violated. However, for a portfolio consisting of only one highly volatile stock we show a violation under the classical methods but not under the modern methods mentioned above.
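The Cornish-Fisher expansion adjusts the Gaussian quantile for skewness and excess kurtosis before computing VaR. A sketch of the standard fourth-order formula follows; the parameter values are illustrative, not the paper's estimates.

```python
from statistics import NormalDist

def cornish_fisher_var(mu, sigma, skew, exkurt, alpha=0.01):
    """VaR at level alpha via the fourth-order Cornish-Fisher quantile:
    the Gaussian quantile z is corrected for skewness (skew) and
    excess kurtosis (exkurt) of the return distribution."""
    z = NormalDist().inv_cdf(alpha)
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * exkurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return -(mu + sigma * z_cf)

# With zero skew and excess kurtosis, CFVaR reduces to Gaussian VaR;
# negative skew and fat tails push the VaR estimate higher.
var_normal = cornish_fisher_var(0.0, 0.02, 0.0, 0.0)
var_fat = cornish_fisher_var(0.0, 0.02, -0.5, 3.0)
```

The comparison shows why a normality assumption understates risk for the heavy-tailed returns the paper studies.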
5

An Analysis of Markov Regime-Switching Models for Weather Derivative Pricing

Gerdin Börjesson, Fredrik January 2021 (has links)
The valuation of weather derivatives depends greatly on accurate modeling and forecasting of the underlying temperature indices. The complexity and uncertainty of such modeling have led to several temperature processes being developed for the Monte Carlo simulation of daily average temperatures. In this report, we compare the results of two recently developed models by Gyamerah et al. (2018) and Evarest, Berntsson, Singull, and Yang (2018). The paper gives a thorough introduction to option theory, Lévy and Wiener processes, and the generalized hyperbolic distributions frequently used in temperature modeling. Implementations of maximum likelihood estimation and of the expectation-maximization algorithm with Kim's smoothed transition probabilities are used to fit the Lévy process distributions and both models' parameters, respectively. Both models are then considered for the pricing of European HDD and CDD options by Monte Carlo simulation. Across three data sets, the evaluation shows a tendency toward the shifted temperature regime over the base regime, in contrast to the two articles. Simulation is successfully demonstrated for the model of Evarest; however, Gyamerah's model could not be replicated. We conclude that this is because the two articles contain several incorrect derivations, which leaves the thesis question unanswered and calls the articles' conclusions into question. We end by proposing further validation of the two models and summarize the alterations required for a correct implementation.
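The regime layer in such models is a Markov chain switching between a base and a shifted temperature regime. A minimal sketch of sampling a two-regime path follows; the transition probabilities here are illustrative, not fitted values from either article.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_regimes(P, steps, start=0):
    """Sample a path of a discrete Markov chain whose transition matrix P
    has row i giving the probabilities of moving from state i."""
    states = np.empty(steps, dtype=int)
    s = start
    for t in range(steps):
        states[t] = s
        s = rng.choice(len(P), p=P[s])
    return states

# Persistent regimes: base regime 0, shifted regime 1 (illustrative values)
P = np.array([[0.98, 0.02],
              [0.05, 0.95]])
path = simulate_regimes(P, 10_000)
frac_shifted = path.mean()   # long-run fraction spent in the shifted regime
```

In a full pricing run, each simulated day's temperature increment would be drawn from the Lévy distribution of whichever regime the chain occupies.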
6

A new approach to pricing real options on swaps : a new solution technique and extension to the non-a.s. finite stopping realm

Chu, Uran 07 June 2012 (has links)
This thesis consists of extensions of results on a perpetual American swaption problem. Companies routinely plan to swap uncertain benefits for uncertain costs in the future. Our work explores the timing policies associated with the swap in the form of an optimal stopping problem. In this thesis, we show that the condition given by Hu and Øksendal (1998) to guarantee that the optimal stopping time is a.s. finite is in fact both necessary and sufficient. We extend the solution from the region of the parameter space where optimal stopping times are a.s. finite to the region where they are not, and calculate the probability of never stopping in this latter region. We identify the joint distribution of stopping times and stopping locations in both the a.s. and non-a.s. finite stopping cases. We also derive an integral formula for the inner product of a generalized hyperbolic distribution with the Cauchy distribution. Finally, we apply our results to a back-end forestry harvesting model in which stochastic costs are assumed to grow exponentially, without bound, through time. / Graduation date: 2013
7

Pricing Basket of Credit Default Swaps and Collateralised Debt Obligation by Lévy Linearly Correlated, Stochastically Correlated, and Randomly Loaded Factor Copula Models and Evaluated by the Fast and Very Fast Fourier Transform

Fadel, Sayed M. January 2010 (has links)
In the last decade, the credit risk derivatives market has grown considerably in volume, and this growth has been followed by the current financial market turbulence. Together, these two periods have shown how significant the credit derivatives market and its products are. On the modelling side, this growth has been paralleled by increasingly complex and composite credit derivative products, such as mth-to-default Credit Default Swaps (CDS), m-out-of-n CDS, and collateralised debt obligations (CDO). In this thesis, Lévy processes are proposed to generalise the standard pricing model for credit risk derivatives, the Gaussian Factor Copula Model, and to overcome its limitations. One of its most important drawbacks is its lack of tail dependence; in other words, it needs a more skewed correlation structure. With the Lévy Factor Copula Model, the microscopic approach to exploring factor copula models is developed and standardised to incorporate the endless family of distributions that admit a Lévy process. Since Lévy processes span a variety of structural assumptions, from pure jumps to continuous stochastic motion, the distributions that admit them can represent asymmetry and fat tails just as well as they can symmetry and normal tails, and can consequently capture the probabilities of both high and low events. Subsequently, further techniques that enhance the skewness of the correlation are proposed and incorporated within the Lévy Factor Copula Model: the 'Stochastic Correlated Lévy Factor Copula Model' and the 'Lévy Random Factor Loading Copula Model'. The Lévy process is then applied to pricing basket CDS and CDO by the Lévy Factor Copula and its skewed versions, and evaluated by V-FFT for limiting and mixture cases of the Lévy skew alpha-stable distribution and the generalized hyperbolic distribution.
Numerically, the characteristic functions of the number of defaults of mth-to-default and (n/m)th-to-default CDS, of the CDO's cumulative loss, and of the loss given default are evaluated by semi-explicit techniques, i.e. via the fast form of the DFT (FFT) and the proposed very fast form (VFFT). Through its fast and very fast forms, this technique reduces the computational complexity from O(N^2) to O(N log N) and O(N), respectively.
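The FFT step referred to above recovers a discrete loss distribution from its characteristic function in O(N log N) operations. A toy sketch follows, with a Poisson default count standing in for the basket's default distribution (the thesis's actual characteristic functions come from the copula models named above).

```python
import numpy as np

N = 64        # lattice size; must exceed the support carrying the mass
lam = 3.0     # stand-in: Poisson(3) number of defaults
u = 2 * np.pi * np.arange(N) / N

# Characteristic function E[exp(i * u * K)] of a Poisson(lam) count
phi = np.exp(lam * (np.exp(1j * u) - 1))

# One DFT over the lattice recovers the probability mass function P(K = k)
pmf = np.fft.fft(phi).real / N
```

The same inversion evaluated point by point would cost O(N^2), which is the gap between the direct and FFT-based forms that the abstract quantifies.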
9

Seguro contra risco de downside de uma carteira: uma proposta híbrida frequentista-Bayesiana com uso de derivativos

Pérgola, Gabriel Campos 23 January 2013 (has links)
Portfolio insurance allows a manager to limit downside risk while allowing participation in upside markets. The purpose of this dissertation is to introduce a framework for portfolio insurance optimization from a hybrid frequentist-Bayesian approach. We obtain the joint distribution of regular returns from a frequentist statistical method, once the outliers have been identified and removed from the data sample. The joint distribution of extreme returns, in its turn, is modelled by a Bayesian network, whose topology reflects the events that can significantly impact the portfolio performance.
Once we link the regular and extreme distributions of returns, we simulate future scenarios for the portfolio value. The insurance subportfolio is then optimized by the Differential Evolution algorithm. We show the framework in a step-by-step example for a long portfolio of stocks in the Bovespa Index (Ibovespa), using market data from 2008 to 2012.
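Differential Evolution, the optimiser named above, is a population-based scheme: mutate candidates with scaled difference vectors, cross over with the parent, and keep whichever of trial and parent scores better. A minimal rand/1/bin sketch on a toy objective follows; the dissertation's insurance-cost objective is not specified in the abstract, so a shifted sphere stands in.

```python
import numpy as np

rng = np.random.default_rng(42)

def differential_evolution(f, bounds, pop=30, gens=200, F=0.8, CR=0.9):
    """Minimal rand/1/bin Differential Evolution minimiser."""
    lo, hi = np.array(bounds, dtype=float).T
    d = len(bounds)
    x = rng.uniform(lo, hi, size=(pop, d))
    fx = np.array([f(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            # Mutation: base vector plus a scaled difference of two others
            a, b, c = x[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutant coordinate
            cross = rng.random(d) < CR
            if not cross.any():
                cross[rng.integers(d)] = True
            trial = np.where(cross, mutant, x[i])
            # Greedy selection
            ft = f(trial)
            if ft <= fx[i]:
                x[i], fx[i] = trial, ft
    return x[fx.argmin()], fx.min()

# Toy stand-in objective with known minimum at (1, 1, 1)
best_x, best_f = differential_evolution(lambda v: ((v - 1.0) ** 2).sum(),
                                        [(-5, 5)] * 3)
```

Because DE needs only objective evaluations, not gradients, it suits simulation-based costs like the scenario-driven insurance objective described here.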
10

Highway Development Decision-Making Under Uncertainty: Analysis, Critique and Advancement

El-Khatib, Mayar January 2010 (has links)
While decision-making under uncertainty is a major universal problem, its implications in the field of transportation systems are especially significant: where the benefits of right decisions are tremendous, the consequences of wrong ones are potentially disastrous. In the realm of highway systems, decisions related to the highway configuration (number of lanes, right of way, etc.) need to incorporate both traffic demand and land price uncertainties. In the literature, these uncertainties have generally been modeled using the Geometric Brownian Motion (GBM) process, which has been used extensively in modeling many other real-life phenomena. But few scholars, including those who used the GBM in highway configuration decisions, have offered any rigorous justification for this model. This thesis offers a detailed analysis of various aspects of transportation systems in relation to decision-making. It reveals some general insights as well as a new concept that extends the notion of opportunity cost to situations where wrong decisions could be made. Arguing that the GBM model is deficient, it also introduces a new formulation that utilizes a large and flexible parametric family of jump models (i.e., Lévy processes). To validate this claim, data on traffic demand and land prices were collected and analyzed, revealing distributions that are heavy-tailed and asymmetric and thus do not match the GBM model well. As a remedy, this research used the Merton, Kou, and negative inverse Gaussian Lévy processes as possible alternatives. Though the models lead to the same final decisions, mathematically they improve the precision of the uncertainty models and of the decision-making process. This furthers the quest for optimality in highway projects and beyond.
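The GBM baseline that this thesis argues against can be simulated exactly with lognormal steps. A short sketch follows; the drift, volatility, and horizon are illustrative values, not estimates from the thesis's data.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_gbm(s0, mu, sigma, dt, steps, n_paths):
    """Exact GBM simulation via the lognormal step
    S_{t+dt} = S_t * exp((mu - sigma^2 / 2) * dt + sigma * sqrt(dt) * Z)."""
    z = rng.standard_normal((n_paths, steps))
    log_increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

# One year of daily steps for, e.g., a land-price index starting at 100
paths = simulate_gbm(100.0, 0.05, 0.2, 1 / 252, 252, 5000)
terminal = paths[:, -1]
```

GBM's log-increments are Gaussian by construction, so its simulated returns can never show the heavy tails and asymmetry the thesis finds in the traffic and land-price data; the Lévy alternatives replace the Gaussian increment with a jump-bearing one.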
