151

Modeling Extreme Values

Shykhmanter, Dmytro January 2013
Modeling of extreme events is a challenging statistical task. Firstly, there is always a limited number of observations, and secondly, there is therefore no experience with which to back-test the result. One way of estimating higher quantiles is to fit a theoretical distribution to the data and extrapolate into the tail. The shortcoming of this approach is that the estimate of the tail is based on the observations in the centre of the distribution. An alternative approach is to split the data into two sub-populations and model the body of the distribution separately from the tail. This methodology is applied to non-life insurance losses, where extremes are particularly important for risk management. Nevertheless, even this approach is not a conclusive solution to heavy-tail modeling. In either case, the estimated 99.5% percentiles have such high standard errors that their reliability is very low. On the other hand, the approach is theoretically valid and deserves to be considered as one of the possible methods of extreme value analysis.
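The body/tail split described above can be sketched as follows. This is a minimal illustration, not the thesis's actual model: the synthetic Pareto losses, the 90% threshold and the moment-based generalized Pareto (GPD) fit are all assumptions made for the example.

```python
import math
import random

def fit_gpd_moments(excesses):
    # Moment estimators for the generalized Pareto distribution:
    # xi = (1 - m^2/s^2) / 2, sigma = m * (m^2/s^2 + 1) / 2
    n = len(excesses)
    m = sum(excesses) / n
    s2 = sum((x - m) ** 2 for x in excesses) / (n - 1)
    xi = 0.5 * (1.0 - m * m / s2)          # shape
    sigma = 0.5 * m * (m * m / s2 + 1.0)   # scale
    return xi, sigma

def spliced_quantile(data, p, body_q=0.90):
    # Body: empirical distribution below the threshold u.
    # Tail: GPD fitted to the exceedances above u.
    xs = sorted(data)
    u = xs[int(body_q * len(xs))]
    excesses = [x - u for x in xs if x > u]
    xi, sigma = fit_gpd_moments(excesses)
    n, n_u = len(xs), len(excesses)
    # Standard POT quantile formula, valid for p beyond the threshold
    return u + sigma / xi * ((n / n_u * (1.0 - p)) ** (-xi) - 1.0)

random.seed(1)
losses = [random.paretovariate(3.0) for _ in range(5000)]  # synthetic heavy-tailed losses
q995 = spliced_quantile(losses, 0.995)  # estimated 99.5% percentile
```

Note that the estimate of the 99.5% percentile rests on only the few hundred exceedances above the threshold, which is exactly why the abstract warns about its high standard error.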
152

Statistical Post-Processing Methods And Their Implementation On The Ensemble Prediction Systems For Forecasting Temperature In The Use Of The French Electric Consumption

Gogonel, Adriana Geanina 27 November 2012
The objective of this thesis is to study new statistical methods for correcting temperature predictions that may be implemented on the ensemble prediction system (EPS) of Météo France, so as to improve its use for electric system management at EDF France. The EPS of Météo France we work on contains 51 members (forecasts by time step) and gives temperature predictions for 14 days. The thesis contains three parts. In the first, we present the EPS, whose principle is to run several scenarios of the same model with slightly different input data in order to simulate uncertainty; we then implement two statistical methods (the best-member method and a Bayesian method) for improving the accuracy or the spread of the EPS, and we introduce criteria for comparing results. In the second part we introduce extreme value theory and the mixture models we use to combine the model built in the first part with models fitted to the distribution tails. In the third part we introduce quantile regression as another way of studying the tails of the distribution.
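Quantile regression, mentioned in the third part, fits a model by minimizing the pinball (check) loss rather than squared error. A toy sketch with invented data, showing that a prediction near the 0.9-quantile beats the median under this loss:

```python
def pinball_loss(y_true, pred, tau):
    # Check (pinball) loss: asymmetric absolute error whose minimizer,
    # over constant predictions, is the tau-quantile of y_true.
    total = 0.0
    for y in y_true:
        diff = y - pred
        total += tau * diff if diff >= 0 else (tau - 1.0) * diff
    return total / len(y_true)

ys = list(range(100))                   # toy sample: 0, 1, ..., 99
loss_q90 = pinball_loss(ys, 90, 0.9)    # prediction near the 90th percentile
loss_med = pinball_loss(ys, 50, 0.9)    # the median is a poor 0.9-quantile guess
```

Because the loss penalizes under-prediction 9 times more heavily than over-prediction at tau = 0.9, the minimizing constant sits high in the distribution, which is what makes the loss suitable for studying tails.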
153

Modelling flood heights of the Limpopo River at Beitbridge Border Post using extreme value distributions

Kajambeu, Robert January 2016
MSc (Statistics) / Department of Statistics / Haulage trucks and cross-border traders cross through Beitbridge border post from landlocked countries such as Zimbabwe and Zambia for the sake of trading. Because of global warming, South Africa has lately been experiencing extreme weather patterns in the form of very high temperatures and heavy rainfall. Evidently, in 2013 traffic could not cross the Limpopo River because water was flowing above the bridge. For planning, it is important to predict the likelihood of such events occurring in future. Extreme value models offer one way in which this can be achieved. This study identifies suitable distributions to model the annual maximum heights of the Limpopo River at Beitbridge border post. The maximum likelihood method and the Bayesian approach are used for parameter estimation. The r-largest order statistics method was also used in this dissertation. For goodness of fit, probability and quantile-quantile plots are used. Finally, return levels are calculated from these distributions. The dissertation reveals that the 100-year return level is 6.759 metres using the maximum likelihood and Bayesian approaches to estimate parameters. Empirical results show that the Fréchet class of distributions fits the flood heights data at Beitbridge border post well. The dissertation contributes positively by informing stakeholders about the socio-economic impacts that are brought by extreme flood heights for the Limpopo River at Beitbridge border post.
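Return levels of the kind quoted above come from a fitted GEV distribution. As a hedged sketch only: this uses the simpler Gumbel member of the GEV family with moment estimates (not the Fréchet fit or the maximum likelihood and Bayesian estimation used in the dissertation) and made-up annual maxima.

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def fit_gumbel_moments(maxima):
    # Method-of-moments fit for the Gumbel (GEV with shape 0) distribution
    n = len(maxima)
    m = sum(maxima) / n
    s = math.sqrt(sum((x - m) ** 2 for x in maxima) / (n - 1))
    beta = s * math.sqrt(6.0) / math.pi   # scale
    mu = m - EULER_GAMMA * beta           # location
    return mu, beta

def return_level(mu, beta, T):
    # Level exceeded on average once every T years:
    # solve F(x) = 1 - 1/T for the Gumbel CDF exp(-exp(-(x-mu)/beta))
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

random.seed(7)
# hypothetical annual flood maxima in metres (illustrative data only)
maxima = [random.gauss(3.0, 0.8) + abs(random.gauss(0.0, 0.4)) for _ in range(60)]
mu, beta = fit_gumbel_moments(maxima)
level100 = return_level(mu, beta, 100)  # 100-year return level
```

The 100-year level necessarily exceeds shorter-horizon levels, since the return-level function is increasing in T.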
154

Cyber risk modelling using copulas

Spišiak, Michal January 2020
Cyber risk, or data breach risk, can be estimated similarly to other types of operational risk. First, we identify problems of cyber risk models in the existing literature. A large dataset consisting of 5,713 loss events enables us to apply extreme value theory. We adopt goodness-of-fit tests adjusted for distribution functions with estimated parameters. These tests are often overlooked in the literature even though they are essential for correct results. We model aggregate losses in three different industries separately and then combine them using a copula. A t-test reveals that potential one-year global losses due to data breach risk are larger than the GDP of the Czech Republic. Moreover, one-year global cyber risk measured with a 99% CVaR amounts to 2.5% of global GDP. Unlike other studies, we compare risk measures with other quantities, which allows a wider audience to understand the magnitude of the cyber risk. An estimate of global data breach risk is a useful indicator not only for insurers, but also for any organization processing sensitive data.
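The goodness-of-fit tests "adjusted for distribution functions with estimated parameters" that the abstract highlights can be illustrated with a parametric-bootstrap Kolmogorov-Smirnov test: because the null CDF uses an estimated parameter, standard KS tables would be too lenient, so the null distribution of the statistic is simulated instead. The exponential severity model here is an arbitrary stand-in, not the distribution fitted in the thesis.

```python
import math
import random

def ks_stat(data, cdf):
    # Two-sided Kolmogorov-Smirnov distance between the empirical CDF and cdf
    xs = sorted(data)
    n = len(xs)
    return max(max(abs((i + 1) / n - cdf(x)), abs(cdf(x) - i / n))
               for i, x in enumerate(xs))

def ks_test_exponential(data, n_boot=200, seed=0):
    # Parametric bootstrap: re-estimate the rate on each simulated sample,
    # mirroring the estimation step of the original test.
    rng = random.Random(seed)
    rate = 1.0 / (sum(data) / len(data))
    d_obs = ks_stat(data, lambda x: 1.0 - math.exp(-rate * x))
    count = 0
    for _ in range(n_boot):
        sim = [rng.expovariate(rate) for _ in data]
        r2 = 1.0 / (sum(sim) / len(sim))
        if ks_stat(sim, lambda x: 1.0 - math.exp(-r2 * x)) >= d_obs:
            count += 1
    return d_obs, count / n_boot  # statistic and bootstrap p-value

random.seed(3)
sample = [random.expovariate(0.5) for _ in range(300)]
d, p = ks_test_exponential(sample)
```

Since the sample really is exponential here, the test should usually not reject; with real loss data the same machinery gives an honest p-value despite the estimated parameter.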
155

Generating Extreme Value Distributions in Finance using Generative Adversarial Networks

Nord-Nilsson, William January 2023
This thesis aims to develop a new model for stress-testing financial portfolios using Extreme Value Theory (EVT) and Generative Adversarial Networks (GANs). The current practice of risk management relies on mathematical or historical models, such as Value-at-Risk and expected shortfall. The problem with historical models is that the data available for very extreme events is limited, and therefore we need a method to interpolate and extrapolate beyond the available range. EVT is a statistical framework that analyzes extreme events in a distribution and allows such interpolation and extrapolation, and GANs are machine-learning techniques that generate synthetic data. The combination of these two areas can generate more realistic stress-testing scenarios to help financial institutions manage potential risks better. The goal of this thesis is to develop a new model that can handle complex dependencies and high-dimensional inputs with different kinds of assets, such as stocks, indices, currencies, and commodities, and that can be used in parallel with traditional risk measurements. The evtGAN algorithm shows promising results: it is able to mimic actual distributions and to extrapolate beyond the available data range.
156

Pricing and Modeling Heavy Tailed Reinsurance Treaties - A Pricing Application to Risk XL Contracts

Abdullah Mohamad, Ormia, Westin, Anna January 2023
Estimating the risk of a loss occurring for insurance takers is a difficult task in the insurance industry. It is an even more difficult task to price the risk for reinsurance companies, which insure the primary insurers. Insurance that is bought by an insurance company, the cedent, from another insurance company, the reinsurer, is called treaty reinsurance. This type of reinsurance is the main focus of this thesis. A very common risk to insure is the risk of fire in municipal and commercial properties, which is the risk priced in this thesis. The thesis evaluates Länsförsäkringar AB's current pricing model, which calculates the risk premium for Risk XL contracts; the goal is to find areas of improvement for tail risk pricing. The risk premium is commonly calculated using one of three types of pricing models: experience rating, exposure rating, and frequency-severity rating. This thesis focuses on frequency-severity rating, a model that assumes independence between the frequency and the severity of losses and therefore splits the two into separate models. It is a very common model when pricing Risk XL contracts. The risk premium is calculated with the help of loss data from two insurance companies, one Norwegian and one Finnish. The main focus of the thesis is to price the risk with the help of extreme value theory, mainly using the method of moments to model the frequency of losses and the peaks-over-threshold model to model the severity of losses. To model the estimated frequency of losses with the method of moments, two distributions are compared: the Poisson and the negative binomial distribution. Several distributions can be used to model the severity of losses; to evaluate which is optimal, two goodness-of-fit tests are applied, the Kolmogorov-Smirnov and the Anderson-Darling test. The peaks-over-threshold model can be used with the Pareto distribution. With the help of the Hill estimator we calculate a threshold $u$, which regulates the tail of the Pareto curve. To estimate the remaining parameters of the generalized Pareto distribution, maximum likelihood and the least squares method are used. Lastly, the bootstrap method is used to estimate the uncertainty in the price calculated from the estimated parameters. From this, empirical percentiles are calculated and set as guidelines between which the risk premium should lie in order for both data sets to be considered fairly priced.
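The frequency-severity idea described above — independent claim counts and claim sizes, combined into an aggregate annual loss — can be sketched with a Poisson frequency and Pareto severities. All parameters are invented for illustration and are not the thesis's fitted values.

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's multiplication algorithm for Poisson sampling (small lambda)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def annual_loss(rng, freq_mean=3.0, alpha=2.5):
    # Frequency-severity model: the number of claims and their sizes
    # are drawn independently, as the rating model assumes.
    n_claims = poisson_draw(rng, freq_mean)
    return sum(rng.paretovariate(alpha) for _ in range(n_claims))

rng = random.Random(42)
years = [annual_loss(rng) for _ in range(2000)]
mean_loss = sum(years) / len(years)
# theory: E[loss] = freq_mean * alpha/(alpha-1) = 3.0 * 2.5/1.5 = 5.0
```

A risk premium would then be read off this simulated distribution (e.g. its mean plus a loading, or a high percentile), which is where the thesis's bootstrap uncertainty bands come in.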
157

Applying Peaks-Over-Threshold for Increasing the Speed of Convergence of a Monte Carlo Simulation

Jakobsson, Eric, Åhlgren, Thor January 2022
This thesis investigates applying the semiparametric Peaks-Over-Threshold method to data generated from a Monte Carlo simulation when estimating the financial risk measures Value-at-Risk and Expected Shortfall. The goal is to achieve a faster convergence than a Monte Carlo simulation when assessing extreme events that symbolise the worst outcomes of a financial portfolio. Achieving a faster convergence will enable a reduction of iterations in the Monte Carlo simulation, thus enabling a more efficient way of estimating risk measures for the portfolio manager. The financial portfolio consists of US life insurance policies offered on the secondary market, gathered by our partner RessCapital. The method is evaluated on three different portfolios with different defining characteristics. In Part I, an analysis of selecting an optimal threshold is made. The accuracy and precision of Peaks-Over-Threshold is compared to the Monte Carlo simulation with 10,000 iterations, using a simulation of 100,000 iterations as the reference value. Depending on the risk measure and the percentile of interest, different optimal thresholds are selected. Part II presents the results with the optimal thresholds from Part I. One can conclude that Peaks-Over-Threshold performed significantly better than a Monte Carlo simulation with 10,000 iterations for Value-at-Risk. The results for Expected Shortfall did not show a clear improvement in terms of precision, but did show improvement in terms of accuracy. Value-at-Risk and Expected Shortfall at the 99.5th percentile achieved a greater error reduction than at the 99th. The results therefore align well with theory: the rarer the event considered, the better the Peaks-Over-Threshold method performed. In conclusion, applying Peaks-Over-Threshold can prove useful when looking to reduce the number of iterations, since it does increase the convergence of a Monte Carlo simulation. The result is, however, dependent on the rarity of the event of interest and the level of precision and accuracy required.
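The two risk measures being estimated can be read directly off any simulated loss sample; this is the plain empirical baseline that a POT refinement is compared against. A minimal sketch, where standard normal losses are an assumption for the example, not the RessCapital portfolio model:

```python
import random

def var_es(losses, p=0.995):
    # Empirical Value-at-Risk: the p-quantile of the simulated losses.
    # Empirical Expected Shortfall: the mean loss beyond that quantile.
    xs = sorted(losses)
    k = int(p * len(xs))
    tail = xs[k:]
    return xs[k], sum(tail) / len(tail)

random.seed(5)
sim = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # stand-in Monte Carlo output
var995, es995 = var_es(sim)
# for N(0,1) the true values are VaR ~ 2.576 and ES ~ 2.89
```

With only 10,000 iterations the tail beyond the 99.5th percentile contains just 50 points, which is why the thesis replaces this empirical tail with a fitted POT tail to speed up convergence.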
158

The Performance of Market Risk Models for Value at Risk and Expected Shortfall Backtesting : In the Light of the Fundamental Review of the Trading Book

Dalne, Katja January 2017
The global financial crisis that took off in 2007 gave rise to several adjustments of the risk regulation for banks. An extensive adjustment, to be implemented in 2019, is the Fundamental Review of the Trading Book (FRTB). It proposes to use Expected Shortfall (ES) as the risk measure instead of the currently used Value at Risk (VaR), as well as applying varying liquidity horizons based on the various risk levels of the assets involved. A major difficulty of implementing the FRTB lies within the backtesting of ES. Righi and Ceretta propose a robust ES backtest based on Monte Carlo simulation. It is flexible since it does not assume any probability distribution and can be performed without waiting for an entire backtesting period. Implementing some commonly used VaR backtests as well as the ES backtest by Righi and Ceretta yields a perception of which risk models are the most accurate from both a VaR and an ES backtesting perspective. It can be concluded that a model that is satisfactory from a VaR backtesting perspective does not necessarily remain so from an ES backtesting perspective, and vice versa. Overall, the models that are satisfactory from a VaR backtesting perspective turn out to be probably too conservative from an ES backtesting perspective. Considering the confidence levels proposed by the FRTB, from a VaR backtesting perspective, a risk measure model with a normal copula and a hybrid distribution with the generalized Pareto distribution in the tails and the empirical distribution in the center, along with GARCH filtration, is the most accurate one, while from an ES backtesting perspective a risk measure model with a univariate Student's t distribution with ν ≈ 7 together with GARCH filtration is the most accurate one for implementation. Thus, when implementing the FRTB, a bank will need to compromise between obtaining a good VaR model, potentially resulting in conservative ES estimates, and obtaining a less satisfactory VaR model, possibly resulting in more accurate ES estimates. The thesis was performed at SAS Institute, an American IT company that develops software for, among other things, risk management. Targeted customers are banks and other financial institutions. Investigating the FRTB acts as a potential advantage for the company when approaching customers that are to implement the regulation framework in the near future. / Keywords: risk management, financial time series, Value at Risk, Expected Shortfall, Monte Carlo simulation, GARCH modelling, copulas, hybrid distributions, generalized Pareto distribution, extreme value theory, backtesting, liquidity horizons, Basel regulations
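Among the commonly used VaR backtests referred to above is Kupiec's proportion-of-failures test; the abstract does not list the exact battery of tests used, so the following is an illustrative example rather than the thesis's implementation.

```python
import math

def kupiec_pof(n, exceptions, p):
    # Likelihood-ratio test of H0: the VaR exception probability equals p.
    # Compare the statistic to the chi-square(1) critical value (3.841 at 5%).
    x = exceptions
    pi = x / n

    def loglik(q):
        out = (n - x) * math.log(1.0 - q)
        if x > 0:
            out += x * math.log(q)
        return out

    return -2.0 * (loglik(p) - loglik(pi))

# 250 trading days, 5 exceptions against a 99% VaR (2.5 expected)
lr = kupiec_pof(250, 5, 0.01)
```

Here the statistic stays below 3.841, so five exceptions in a year do not reject the 99% VaR model at the 5% level, despite being double the expected count.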
159

Dynamic portfolio construction and portfolio risk measurement

Mazibas, Murat January 2011
The research presented in this thesis addresses different aspects of dynamic portfolio construction and portfolio risk measurement. It brings together research on dynamic portfolio optimization, replicating portfolio construction, dynamic portfolio risk measurement and volatility forecasting. The overall aim of this research is threefold. First, it examines the portfolio construction and risk measurement performance of a broad set of volatility forecast and portfolio optimization models. Second, in an effort to improve their forecast accuracy and portfolio construction performance, it proposes new models or new formulations of the available models. Third, in order to enhance the replication performance of hedge fund returns, it introduces a replication approach that has the potential to be used in numerous applications in investment management. In order to achieve these aims, Chapter 2 addresses risk measurement in dynamic portfolio construction. In this chapter, further evidence on the use of multivariate conditional volatility models in hedge fund risk measurement and portfolio allocation is provided by using monthly returns of hedge fund strategy indices for the period 1990 to 2009. Building on Giamouridis and Vrontos (2007), a broad set of multivariate GARCH models, as well as the simpler exponentially weighted moving average (EWMA) estimator of RiskMetrics (1996), are considered. It is found that, while multivariate GARCH models provide some improvements in portfolio performance over static models, they are generally dominated by the EWMA model. In particular, in addition to providing a better risk-adjusted performance, the EWMA model leads to dynamic allocation strategies that have a substantially lower turnover and could therefore be expected to involve lower transaction costs. Moreover, it is shown that these results are robust across the low-volatility and high-volatility sub-periods.
Chapter 3 addresses optimization in dynamic portfolio construction. In this chapter, the advantages of introducing alternative optimization frameworks over the mean-variance framework in constructing hedge fund portfolios for a fund of funds are examined. Using monthly return data of hedge fund strategy indices for the period 1990 to 2011, the standard mean-variance approach is compared with approaches based on CVaR, CDaR and Omega, for both conservative and aggressive hedge fund investors. In order to estimate portfolio CVaR, CDaR and Omega, a semi-parametric approach is proposed, in which first the marginal density of each hedge fund index is modelled using extreme value theory and the joint density of hedge fund index returns is constructed using a copula-based approach. Hedge fund returns from this joint density are then simulated in order to compute CVaR, CDaR and Omega. The semi-parametric approach is compared with the standard, non-parametric approach, in which the quantiles of the marginal density of portfolio returns are estimated empirically and used to compute CVaR, CDaR and Omega. Two main findings are reported. The first is that CVaR-, CDaR- and Omega-based optimization offers a significant improvement in terms of risk-adjusted portfolio performance over mean-variance optimization. The second is that, for all three risk measures, semi-parametric estimation of the optimal portfolio offers a very significant improvement over non-parametric estimation. The results are robust to the choice of target return and the estimation period.
Chapter 4 searches for improvements in portfolio risk measurement by addressing volatility forecasting. In this chapter, two new univariate Markov regime switching models based on intraday range are introduced. A regime switching conditional volatility model is combined with a robust measure of volatility based on intraday range, in a framework for volatility forecasting. This chapter proposes a one-factor and a two-factor model that combine useful properties of range, regime switching, nonlinear filtration and GARCH frameworks. Incremental improvement in the performance of volatility forecasting is sought by employing regime switching in a conditional volatility setting with enhanced information content on true volatility. Weekly S&P500 index data for 1982-2010 are used. Models are evaluated by using a number of volatility proxies, which approximate true integrated volatility. Forecast performance of the proposed models is compared to renowned return-based and range-based models, namely the EWMA of RiskMetrics, the hybrid EWMA of Harris and Yilmaz (2009), the GARCH of Bollerslev (1988), the CARR of Chou (2005), the FIGARCH of Baillie et al. (1996) and the MRSGARCH of Klaassen (2002). It is found that the proposed models produce more accurate out-of-sample forecasts, contain more information about true volatility and exhibit similar or better performance when used for value at risk comparison.
Chapter 5 searches for improvements in risk measurement for better dynamic portfolio construction. This chapter proposes multivariate versions of the one- and two-factor MRSACR models introduced in the fourth chapter. In these models, useful properties of regime switching models, nonlinear filtration and the range-based estimator are combined in a multivariate setting, based on static and dynamic correlation estimates. In comparing the out-of-sample forecast performance of these models, eminent return- and range-based volatility models are employed as benchmark models. A hedge fund portfolio construction is conducted in order to investigate the out-of-sample portfolio performance of the proposed models. The out-of-sample performance of each model is also tested by using a number of statistical tests. In particular, a broad range of statistical tests and loss functions are utilized in evaluating the forecast performance of the variance-covariance matrix of each portfolio. It is found that, in terms of statistical test results, the proposed models offer significant improvements in forecasting the true volatility process, and, in terms of the risk and return criteria employed, the proposed models perform better than the benchmark models. The proposed models construct hedge fund portfolios with higher risk-adjusted returns and lower tail risks, and offer superior risk-return tradeoffs and better active management ratios. However, in most cases these improvements come at the expense of higher portfolio turnover and rebalancing expenses.
Chapter 6 addresses dynamic portfolio construction for better hedge fund return replication and proposes a new approach. In this chapter, a method for hedge fund replication is proposed that uses a factor-based model supplemented with a series of risk and return constraints that implicitly target all the moments of the hedge fund return distribution. The approach is used to replicate the monthly returns of ten broad hedge fund strategy indices, using long-only positions in ten equity, bond, foreign exchange and commodity indices, all of which can be traded using liquid, investible instruments such as futures, options and exchange traded funds. In out-of-sample tests, the proposed approach provides an improvement over the pure factor-based model, offering a closer match to both the return performance and risk characteristics of the hedge fund strategy indices.
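The RiskMetrics EWMA estimator that serves as a benchmark throughout this abstract is a one-line recursion over squared returns. A minimal sketch with the standard λ = 0.94 decay (the seeding with the first squared return is a common convention, assumed here):

```python
def ewma_variance(returns, lam=0.94):
    # RiskMetrics-style recursion: var_t = lam * var_{t-1} + (1 - lam) * r_t^2
    var = returns[0] ** 2          # seed with the first squared return
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r * r
    return var

# Constant returns are a fixed point of the recursion: variance stays r^2.
v = ewma_variance([0.01] * 100)
```

The low turnover the chapter attributes to EWMA allocations follows from this smoothing: a single large return only shifts the variance by a factor of (1 - λ) of its squared size.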
160

Risks in Commodity and Currency Markets

Bozovic, Milos 17 April 2009
This thesis analyzes market risk factors in commodity and currency markets. It focuses on the impact of extreme events on the prices of financial products traded in these markets, and on the overall market risk faced by investors. The first chapter develops a simple two-factor jump-diffusion model for valuation of contingent claims on commodities in order to investigate the pricing implications of shocks that are exogenous to this market. The second chapter analyzes the nature and pricing implications of abrupt changes in exchange rates, as well as the ability of these changes to explain the shapes of option-implied volatility "smiles". Finally, the third chapter employs the notion that key results of univariate extreme value theory can be applied separately to the principal components of ARMA-GARCH residuals of a multivariate return series. The proposed approach yields more precise Value at Risk forecasts than conventional multivariate methods, while maintaining the same efficiency.
