241

Modelling flood heights of the Limpopo River at Beitbridge Border Post using extreme value distributions

Kajambeu, Robert January 2016 (has links)
MSc (Statistics) / Department of Statistics / Haulage trucks and cross-border traders pass through Beitbridge border post from landlocked countries such as Zimbabwe and Zambia for the sake of trading. Because of global warming, South Africa has lately been experiencing extreme weather patterns in the form of very high temperatures and heavy rainfall. Notably, in 2013 traffic could not cross the Limpopo River because water was flowing above the bridge. For planning, it is important to predict the likelihood of such events occurring in the future. Extreme value models offer one way in which this can be achieved. This study identifies suitable distributions to model the annual maximum heights of the Limpopo River at Beitbridge border post. The maximum likelihood method and the Bayesian approach are used for parameter estimation. The r-largest order statistics model is also used in this dissertation. For goodness of fit, probability and quantile-quantile plots are used. Finally, return levels are calculated from these distributions. The dissertation finds that the 100-year return level is 6.759 metres using both the maximum likelihood and Bayesian approaches to estimate parameters. Empirical results show that the Fréchet class of distributions fits the flood-height data at Beitbridge border post well. The dissertation contributes positively by informing stakeholders about the socio-economic impacts brought by extreme flood heights of the Limpopo River at Beitbridge border post.
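The block-maxima workflow summarised in this abstract can be sketched with SciPy; the snippet below fits a GEV distribution to synthetic annual maxima (a stand-in for the Beitbridge flood-height series, so all numbers are purely illustrative) and reads off return levels as high quantiles of the fitted distribution.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic stand-in for annual maximum flood heights (metres); the dissertation
# uses the measured Limpopo River series at Beitbridge instead.
rng = np.random.default_rng(42)
annual_maxima = genextreme.rvs(c=-0.2, loc=3.5, scale=1.0, size=60, random_state=rng)

# Fit the GEV by maximum likelihood. SciPy's shape parameter c equals -xi in the
# usual GEV notation, so c < 0 corresponds to a heavy-tailed (Frechet-type) fit.
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)

# The T-year return level is the (1 - 1/T) quantile of the fitted distribution.
for T in (10, 50, 100):
    level = genextreme.ppf(1 - 1 / T, c_hat, loc=loc_hat, scale=scale_hat)
    print(f"{T:3d}-year return level: {level:.3f} m")
```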
242

Modelování kybernetického rizika pomocí kopula funkcí / Cyber risk modelling using copulas

Spišiak, Michal January 2020 (has links)
Cyber risk or data breach risk can be estimated similarly to other types of operational risk. First, we identify problems with cyber risk models in the existing literature. A large dataset consisting of 5,713 loss events enables us to apply extreme value theory. We adopt goodness-of-fit tests adjusted for distribution functions with estimated parameters; these tests are often overlooked in the literature even though they are essential for correct results. We model aggregate losses in three different industries separately and then combine them using a copula. A t-test reveals that potential one-year global losses due to data breach risk are larger than the GDP of the Czech Republic. Moreover, one-year global cyber risk measured with a 99% CVaR amounts to 2.5% of global GDP. Unlike other studies, we compare risk measures with other quantities, which allows a wider audience to grasp the magnitude of cyber risk. An estimate of global data breach risk is a useful indicator not only for insurers, but also for any organization processing sensitive data.
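A minimal sketch of the aggregate-loss idea, assuming a compound Poisson frequency with heavy-tailed (GPD) severities for a single industry; the copula step that joins the three industries is omitted and every parameter is an assumption, so the output is illustrative only.

```python
import numpy as np
from scipy.stats import genpareto

# Annual data-breach losses modelled as a Poisson number of events with
# generalized-Pareto severities; parameters are assumptions, not estimates
# from the 5,713 recorded loss events used in the thesis.
rng = np.random.default_rng(7)
n_years = 20_000         # simulated years
lam = 40                 # assumed mean number of breaches per year
xi, sigma = 0.6, 1.0     # assumed GPD shape and scale of a single loss

counts = rng.poisson(lam, size=n_years)
annual_loss = np.array([
    genpareto.rvs(c=xi, scale=sigma, size=k, random_state=rng).sum()
    for k in counts
])

var99 = np.quantile(annual_loss, 0.99)
cvar99 = annual_loss[annual_loss >= var99].mean()
print(f"99% VaR  = {var99:,.1f}  (arbitrary units)")
print(f"99% CVaR = {cvar99:,.1f}")
```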
243

Modelling the Resilience of Offshore Renewable Energy System Using Non-constant Failure Rates

Beyene, Mussie Abraham January 2021 (has links)
Offshore renewable energy systems, such as wave energy converters and offshore wind turbines, must be designed to withstand extremes of the weather environment. For this, it is crucial to have a good understanding both of the wave and wind climate at the intended offshore site and of the system's reaction and possible failures under different weather scenarios. Based on these considerations, the first objective of this thesis was to model and identify the extreme wind speed and significant wave height at an offshore site, based on measured wave and wind data. The extreme wind speeds and wave heights were characterized as return values after 10, 25, 50, and 100 years, using the Generalized Extreme Value method. Based on a literature review, fragility curves for wave and wind energy systems were identified as functions of significant wave height and wind speed. For a wave energy system, a varying failure rate as a function of wave height was obtained from the fragility curves and used to model the resilience of a wave energy farm as a function of the wave climate. The cases of non-constant and constant failure rates were compared, and it was found that the non-constant failure rate had a high impact on the wave energy farm's resilience: when a non-constant failure rate as a function of wave height was applied to the wave energy farm, the number of wave energy converters available in the farm and the energy absorbed by the farm were nearly zero. The non-constant failure rate was also compared with a constant rate obtained by averaging the instantaneous, wave-height-dependent failure rate, and it was found that the farm shows better resilience under the averaged constant rate. Based on these findings, it is recommended to identify and characterize extreme offshore weather climates, to maintain a high repair rate, and to use a repair vessel with a high operating threshold able to withstand the harsh offshore weather environment.
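A toy discrete-time sketch of the constant-versus-varying failure rate comparison; the fragility curve, failure and repair rates, and the synthetic wave climate are all assumptions made for illustration, not values from the thesis.

```python
import numpy as np

# Hourly availability of a small wave energy farm under (i) a failure rate that
# depends on significant wave height Hs through an assumed fragility-style curve
# and (ii) the time-averaged constant equivalent of that rate.
rng = np.random.default_rng(1)
hours = 8760
hs = rng.gamma(shape=2.0, scale=1.2, size=hours)            # synthetic Hs series (m)

def failure_rate(h):
    # Assumed fragility curve: failures become much more likely in high seas.
    return 1e-4 + 5e-3 / (1.0 + np.exp(-2.0 * (h - 4.0)))   # per WEC per hour

def mean_availability(rate, n_wec=20, repair_prob=1 / 336):  # ~2-week mean repair
    up = np.ones(n_wec, dtype=bool)
    availability = np.empty(hours)
    for t in range(hours):
        r = rate[t] if np.ndim(rate) else rate
        fails = rng.random(n_wec) < r
        repairs = rng.random(n_wec) < repair_prob
        up = (up & ~fails) | (~up & repairs)                 # fail if up, repair if down
        availability[t] = up.mean()
    return availability.mean()

lam_t = failure_rate(hs)
print("availability, wave-dependent rate:", round(mean_availability(lam_t), 3))
print("availability, averaged constant  :", round(mean_availability(lam_t.mean()), 3))
```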
244

Generating Extreme Value Distributions in Finance using Generative Adversarial Networks / Generering av Extremvärdesfördelningar inom Finans med hjälp av Generativa Motstridande Nätverk

Nord-Nilsson, William January 2023 (has links)
This thesis aims to develop a new model for stress-testing financial portfolios using Extreme Value Theory (EVT) and Generative Adversarial Networks (GANs). The current practice of risk management relies on mathematical or historical models, such as Value-at-Risk and Expected Shortfall. The problem with historical models is that the data available for very extreme events are limited, and therefore we need a method to interpolate and extrapolate beyond the available range. EVT is a statistical framework that analyzes extreme events in a distribution and allows such interpolation and extrapolation, while GANs are machine-learning techniques that generate synthetic data. The combination of these two areas can generate more realistic stress-testing scenarios and help financial institutions manage potential risks better. The goal of this thesis is to develop a new model that can handle complex dependencies and high-dimensional inputs with different kinds of assets, such as stocks, indices, currencies, and commodities, and that can be used in parallel with traditional risk measures. The evtGAN algorithm shows promising results: it is able to mimic the actual distributions and to extrapolate beyond the available data range.
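One core ingredient of evtGAN-style pipelines is modelling each margin with EVT and letting the generative network operate on a uniform scale; a minimal sketch of that marginal transform and its inverse is shown below. The GAN itself is omitted, and the data, threshold choice, and helper names (fit_tail, to_uniform, from_uniform) are assumptions for illustration.

```python
import numpy as np
from scipy.stats import genpareto

# Semi-parametric margin: empirical CDF in the body, GPD above a high threshold.
def fit_tail(x, q=0.95):
    u = np.quantile(x, q)
    xi, _, beta = genpareto.fit(x[x > u] - u, floc=0)
    return u, xi, beta, 1 - q

def to_uniform(x, u, xi, beta, p_tail):
    # Map observations to (0, 1): ranks in the body, GPD tail beyond u.
    body = np.searchsorted(np.sort(x), x, side="right") / (len(x) + 1)
    tail = 1 - p_tail * (1 - genpareto.cdf(x - u, c=xi, scale=beta))
    return np.where(x <= u, body, tail)

def from_uniform(v, x, u, xi, beta, p_tail):
    # Inverse transform: empirical quantiles in the body, GPD quantiles in the tail.
    body = np.quantile(x, np.clip(v, 0, 1))
    tail = u + genpareto.ppf(np.clip((v - (1 - p_tail)) / p_tail, 0, 1), c=xi, scale=beta)
    return np.where(v <= 1 - p_tail, body, tail)

rng = np.random.default_rng(3)
losses = rng.standard_t(df=3, size=5000)           # synthetic heavy-tailed returns
u, xi, beta, p_tail = fit_tail(losses)
v = to_uniform(losses, u, xi, beta, p_tail)        # this is what a GAN would model
x_back = from_uniform(v, losses, u, xi, beta, p_tail)
```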
245

Pricing and Modeling Heavy Tailed Reinsurance Treaties - A Pricing Application to Risk XL Contracts / Prissättning och modellering av långsvansade återförsäkringsavtal - En prissättningstillämpning på Risk XL kontrakt

Abdullah Mohamad, Ormia, Westin, Anna January 2023 (has links)
To estimate the risk of a loss occurring for insurance takers is a difficult task in the insurance industry. It is an even more difficult task to price that risk for reinsurance companies, which insure the primary insurers. Insurance that is bought by an insurance company, the cedent, from another insurance company, the reinsurer, is called treaty reinsurance; this type of reinsurance is the main focus of this thesis. A very common risk to insure is the risk of fire in municipal and commercial properties, and this is the risk priced in this thesis. The thesis evaluates Länsförsäkringar AB's current pricing model, which calculates the risk premium for Risk XL contracts, with the goal of finding areas of improvement for tail-risk pricing. The risk premium is commonly calculated using one of three types of pricing model: experience rating, exposure rating and frequency-severity rating. This thesis focuses on frequency-severity rating, a model that assumes independence between the frequency and the severity of losses and therefore splits the two into separate models; it is a very common model for pricing Risk XL contracts. The risk premium is calculated with the help of loss data from two insurance companies, one Norwegian and one Finnish. The main focus of the thesis is to price the risk with the help of extreme value theory, using the method of moments to model the frequency of losses and a peaks-over-threshold model for their severity. To model the expected frequency of losses with the method of moments, two distributions are compared: the Poisson and the negative binomial distribution. Several distributions can be used to model the severity of losses; to decide which is the most suitable, two goodness-of-fit tests are applied, the Kolmogorov-Smirnov and the Anderson-Darling test. The peaks-over-threshold model is used together with the generalized Pareto distribution. With the help of the Hill estimator a threshold $u$ is chosen, which governs the tail of the Pareto curve. The remaining parameters of the generalized Pareto distribution are estimated with maximum likelihood and the least squares method. Lastly, the bootstrap method is used to estimate the uncertainty in the premium calculated from the estimated parameters. From this, empirical percentiles are computed and set as guidelines between which the risk premium should lie for both data sets to be considered fairly priced.
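The threshold selection and severity fit described above can be sketched as follows, assuming synthetic claim data in place of the Norwegian and Finnish fire losses: the Hill estimate is scanned over the number of top order statistics, and a GPD is then fitted to the excesses over the implied threshold by maximum likelihood.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic heavy-tailed claim amounts, sorted in descending order.
rng = np.random.default_rng(11)
losses = np.sort(rng.pareto(a=1.8, size=3000) * 100_000)[::-1]

def hill_estimator(ordered_losses, k):
    # Hill tail-index estimate based on the k largest observations.
    top = ordered_losses[: k + 1]
    return np.mean(np.log(top[:-1]) - np.log(top[-1]))

# Inspect the estimate over a range of k; a flat region suggests a usable threshold.
for k in (50, 100, 200, 400):
    print(f"k = {k:4d}   Hill xi ~ {hill_estimator(losses, k):.3f}")

u = losses[200]                                   # threshold implied by k = 200
excesses = losses[losses > u] - u
xi_hat, _, beta_hat = genpareto.fit(excesses, floc=0)
print(f"threshold u = {u:,.0f}, GPD xi = {xi_hat:.3f}, beta = {beta_hat:,.0f}")
```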
246

Applying Peaks-Over-Threshold for Increasing the Speed of Convergence of a Monte Carlo Simulation / Peaks-Over-Threshold tillämpat på en Monte Carlo simulering för ökad konvergenshastighet

Jakobsson, Eric, Åhlgren, Thor January 2022 (has links)
This thesis investigates applying the semiparametric peaks-over-threshold method to data generated from a Monte Carlo simulation when estimating the financial risk measures Value-at-Risk and Expected Shortfall. The goal is to achieve a faster convergence than a plain Monte Carlo simulation when assessing extreme events that represent the worst outcomes of a financial portfolio. Achieving a faster convergence enables a reduction of the number of iterations in the Monte Carlo simulation, and thus a more efficient way of estimating risk measures for the portfolio manager.  The financial portfolio consists of US life insurance policies offered on the secondary market, gathered by our partner RessCapital. The method is evaluated on three portfolios with different defining characteristics.  In Part I an analysis of selecting an optimal threshold is made. The accuracy and precision of Peaks-Over-Threshold are compared to the Monte Carlo simulation with 10,000 iterations, using a simulation with 100,000 iterations as the reference value. Depending on the risk measure and the percentile of interest, different optimal thresholds are selected.  Part II presents the results with the optimal thresholds from Part I. One can conclude that Peaks-Over-Threshold performed significantly better than a Monte Carlo simulation with 10,000 iterations for Value-at-Risk. The results for Expected Shortfall did not show a clear improvement in terms of precision, but they did show an improvement in terms of accuracy.  Value-at-Risk and Expected Shortfall at the 99.5th percentile achieved a greater error reduction than at the 99th. The results therefore align well with theory: the rarer the event considered, the better the Peaks-Over-Threshold method performed.  In conclusion, applying Peaks-Over-Threshold can prove useful when looking to reduce the number of iterations, since it does increase the speed of convergence of a Monte Carlo simulation. The result is, however, dependent on the rarity of the event of interest and on the level of precision and accuracy required.
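A minimal sketch of the estimator being compared: fit a GPD to the worst simulated outcomes and read 99.5% VaR and ES from the standard POT formulas rather than from raw sample quantiles. The simulated loss distribution and threshold choice are assumptions, not the RessCapital portfolios.

```python
import numpy as np
from scipy.stats import genpareto

# Simulated portfolio losses from an assumed heavy-tailed distribution.
rng = np.random.default_rng(5)
losses = rng.standard_t(df=4, size=10_000) * 1e6

u = np.quantile(losses, 0.95)                              # POT threshold
excesses = losses[losses > u] - u
xi, _, beta = genpareto.fit(excesses, floc=0)
p_u = (losses > u).mean()                                  # prob. of exceeding u

def var_pot(alpha):
    # Standard POT quantile: VaR_a = u + (beta/xi) * [((1-a)/p_u)^(-xi) - 1]
    return u + beta / xi * (((1 - alpha) / p_u) ** (-xi) - 1)

def es_pot(alpha):
    # ES implied by the GPD tail (valid for xi < 1).
    return var_pot(alpha) / (1 - xi) + (beta - xi * u) / (1 - xi)

alpha = 0.995
print(f"POT VaR 99.5%      : {var_pot(alpha):,.0f}")
print(f"POT ES  99.5%      : {es_pot(alpha):,.0f}")
print(f"Empirical VaR 99.5%: {np.quantile(losses, alpha):,.0f}")
```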
247

The Performance of Market Risk Models for Value at Risk and Expected Shortfall Backtesting : In the Light of the Fundamental Review of the Trading Book / Bakåttest av VaR och ES i marknadsriskmodeller

Dalne, Katja January 2017 (has links)
The global financial crisis that took off in 2007 gave rise to several adjustments of the risk regulation for banks. An extensive adjustment, to be implemented in 2019, is the Fundamental Review of the Trading Book (FRTB). It proposes to use Expected Shortfall (ES) as the risk measure instead of the currently used Value at Risk (VaR), as well as applying varying liquidity horizons based on the various risk levels of the assets involved. A major difficulty of implementing the FRTB lies in the backtesting of ES. Righi and Ceretta propose a robust ES backtest based on Monte Carlo simulation. It is flexible since it does not assume any probability distribution and can be performed without waiting for an entire backtesting period. Implementing some commonly used VaR backtests as well as the ES backtest by Righi and Ceretta gives an indication of which risk models are the most accurate from both a VaR and an ES backtesting perspective. It can be concluded that a model that is satisfactory from a VaR backtesting perspective does not necessarily remain so from an ES backtesting perspective, and vice versa. Overall, the models that are satisfactory from a VaR backtesting perspective turn out to be probably too conservative from an ES backtesting perspective. Considering the confidence levels proposed by the FRTB, from a VaR backtesting perspective a risk measure model with a normal copula and a hybrid distribution, with the generalized Pareto distribution in the tails and the empirical distribution in the centre, together with GARCH filtration is the most accurate one, while from an ES backtesting perspective a risk measure model with a univariate Student's t distribution with ν ≈ 7 together with GARCH filtration is the most accurate one for implementation. Thus, when implementing the FRTB, a bank will need to compromise between obtaining a good VaR model, potentially resulting in conservative ES estimates, and obtaining a less satisfactory VaR model, possibly resulting in more accurate ES estimates. The thesis was performed at SAS Institute, an American IT company that develops software for, among other things, risk management. Targeted customers are banks and other financial institutions. Investigating the FRTB acts as a potential advantage for the company when approaching customers that are to implement the regulatory framework in the near future. / Keywords: risk management, financial time series, Value at Risk, Expected Shortfall, Monte Carlo simulation, GARCH modelling, copulas, hybrid distributions, generalized Pareto distribution, extreme value theory, backtesting, liquidity horizons, the Basel framework
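Among the "commonly used VaR backtests" mentioned here is Kupiec's proportion-of-failures test; a minimal sketch, assuming aligned arrays of realised losses and same-day VaR forecasts (both synthetic), is shown below.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(losses, var_forecasts, alpha=0.99):
    """Kupiec proportion-of-failures test for VaR coverage.

    losses and var_forecasts are aligned arrays of realised losses and the
    VaR forecast for the same day. Returns the likelihood-ratio statistic
    and its p-value under a chi-squared distribution with 1 degree of freedom.
    """
    breaches = losses > var_forecasts
    n, x = len(breaches), int(breaches.sum())
    p = 1 - alpha                          # expected breach probability
    phat = x / n                           # observed breach frequency
    ll_null = (n - x) * np.log(1 - p) + x * np.log(p)
    ll_alt = (n - x) * np.log(1 - phat) + x * np.log(phat) if 0 < x < n else 0.0
    lr = -2 * (ll_null - ll_alt)
    return lr, chi2.sf(lr, df=1)

# Toy usage with hypothetical data: 500 days, constant 99% VaR forecast.
rng = np.random.default_rng(2)
losses = rng.standard_normal(500)
var99 = np.quantile(rng.standard_normal(100_000), 0.99) * np.ones(500)
lr, pval = kupiec_pof(losses, var99, alpha=0.99)
print(f"LR = {lr:.2f}, p-value = {pval:.3f}")
```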
248

違約風險下四種新奇選擇權的評價 / Pricing four kinds of the vulnerable exotic options

林殿一, Lin, Tien-Yi Unknown Date (has links)
This paper derives analytic pricing formulas and hedging ratios for four kinds of exotic options with correlated credit risk: digital options, quanto options, exchange options and extreme-value options. It also compares the pricing models with and without default risk, showing that if default risk vanishes, each model derived under default risk reduces to the corresponding default-free model, and the same holds for the hedging ratios. Numerical examples confirm that the value of a vulnerable option is lower than that of the corresponding ordinary option. Deriving pricing models and hedging ratios for these four exotic options under default risk has not been done in the existing literature, and this is the chief contribution of the paper. Keywords: exotic options, credit risk, digital options, quanto options, exchange options, extreme-value options, default risk.
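As a rough illustration of what "vulnerable" pricing involves (this is not the thesis's closed-form model), the sketch below Monte Carlo-prices a cash-or-nothing digital call whose payoff is lost whenever a correlated counterparty-value process ends below a default barrier; every parameter is a hypothetical choice and recovery on default is assumed to be zero.

```python
import numpy as np

# Vulnerable cash-or-nothing digital call: the holder receives 1 if S_T > K,
# but only when the counterparty's asset value V_T ends above a barrier D.
rng = np.random.default_rng(8)
S0, K, V0, D = 100.0, 105.0, 80.0, 60.0
r, sigma_s, sigma_v, rho, T = 0.02, 0.25, 0.30, 0.5, 1.0
n_paths = 500_000

# Correlated terminal values under two geometric Brownian motions.
z1 = rng.standard_normal(n_paths)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
S_T = S0 * np.exp((r - 0.5 * sigma_s**2) * T + sigma_s * np.sqrt(T) * z1)
V_T = V0 * np.exp((r - 0.5 * sigma_v**2) * T + sigma_v * np.sqrt(T) * z2)

payoff_plain = (S_T > K).astype(float)
payoff_vuln = payoff_plain * (V_T > D)          # payoff wiped out on default

disc = np.exp(-r * T)
print("plain digital call     :", round(disc * payoff_plain.mean(), 4))
print("vulnerable digital call:", round(disc * payoff_vuln.mean(), 4))
```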
249

Dynamic portfolio construction and portfolio risk measurement

Mazibas, Murat January 2011 (has links)
The research presented in this thesis addresses different aspects of dynamic portfolio construction and portfolio risk measurement. It brings together research on dynamic portfolio optimization, replicating portfolio construction, dynamic portfolio risk measurement and volatility forecasting. The overall aim of this research is threefold. First, the aim is to examine the portfolio construction and risk measurement performance of a broad set of volatility forecast and portfolio optimization models. Second, in an effort to improve their forecast accuracy and portfolio construction performance, the aim is to propose new models or new formulations of the available models. Third, in order to enhance the replication performance of hedge fund returns, the aim is to introduce a replication approach that has the potential to be used in numerous applications in investment management. In order to achieve these aims, Chapter 2 addresses risk measurement in dynamic portfolio construction. In this chapter, further evidence on the use of multivariate conditional volatility models in hedge fund risk measurement and portfolio allocation is provided by using monthly returns of hedge fund strategy indices for the period 1990 to 2009. Building on Giamouridis and Vrontos (2007), a broad set of multivariate GARCH models, as well as the simpler exponentially weighted moving average (EWMA) estimator of RiskMetrics (1996), are considered. It is found that, while multivariate GARCH models provide some improvements in portfolio performance over static models, they are generally dominated by the EWMA model. In particular, in addition to providing a better risk-adjusted performance, the EWMA model leads to dynamic allocation strategies that have a substantially lower turnover and could therefore be expected to involve lower transaction costs. Moreover, it is shown that these results are robust across the low-volatility and high-volatility sub-periods. Chapter 3 addresses optimization in dynamic portfolio construction. In this chapter, the advantages of introducing alternative optimization frameworks over the mean-variance framework in constructing hedge fund portfolios for a fund of funds are examined. Using monthly return data of hedge fund strategy indices for the period 1990 to 2011, the standard mean-variance approach is compared with approaches based on CVaR, CDaR and Omega, for both conservative and aggressive hedge fund investors. In order to estimate portfolio CVaR, CDaR and Omega, a semi-parametric approach is proposed, in which first the marginal density of each hedge fund index is modelled using extreme value theory and the joint density of hedge fund index returns is constructed using a copula-based approach. Then hedge fund returns from this joint density are simulated in order to compute CVaR, CDaR and Omega. The semi-parametric approach is compared with the standard, non-parametric approach, in which the quantiles of the marginal density of portfolio returns are estimated empirically and used to compute CVaR, CDaR and Omega. Two main findings are reported. The first is that CVaR-, CDaR- and Omega-based optimization offers a significant improvement in terms of risk-adjusted portfolio performance over mean-variance optimization. The second is that, for all three risk measures, semi-parametric estimation of the optimal portfolio offers a very significant improvement over non-parametric estimation. The results are robust to the choice of target return and the estimation period.
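The RiskMetrics-style EWMA estimator referred to above follows the recursion Sigma_t = lambda * Sigma_{t-1} + (1 - lambda) * r_{t-1} r_{t-1}'. A minimal sketch with synthetic returns is shown below; the decay factor 0.94 is the classic RiskMetrics value for daily data and is an assumption here, as is the initialisation window.

```python
import numpy as np

# EWMA covariance recursion applied to a panel of returns; the returns below
# are synthetic placeholders for monthly hedge fund strategy index returns.
def ewma_covariances(returns, lam=0.94):
    t, n = returns.shape
    sigma = np.cov(returns[:24].T)            # initialise on an early window
    out = np.empty((t, n, n))
    for i in range(t):
        out[i] = sigma                         # estimate available at time i
        r = returns[i][:, None]
        sigma = lam * sigma + (1 - lam) * (r @ r.T)
    return out

rng = np.random.default_rng(4)
rets = rng.multivariate_normal(np.zeros(3), 0.001 * np.eye(3), size=240)
covs = ewma_covariances(rets)
print("last estimated covariance matrix:\n", covs[-1].round(5))
```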
Chapter 4 searches for improvements in portfolio risk measurement by addressing volatility forecasting. In this chapter, two new univariate Markov regime switching models based on intraday range are introduced. A regime switching conditional volatility model is combined with a robust measure of volatility based on intraday range, in a framework for volatility forecasting. This chapter proposes a one-factor and a two-factor model that combine useful properties of range, regime switching, nonlinear filtration, and GARCH frameworks. Incremental improvements in volatility forecasting performance are sought by employing regime switching in a conditional volatility setting with enhanced information content on true volatility. Weekly S&P500 index data for 1982-2010 are used. Models are evaluated by using a number of volatility proxies, which approximate true integrated volatility. Forecast performance of the proposed models is compared to renowned return-based and range-based models, namely the EWMA of RiskMetrics, the hybrid EWMA of Harris and Yilmaz (2009), the GARCH of Bollerslev (1988), the CARR of Chou (2005), the FIGARCH of Baillie et al. (1996) and the MRSGARCH of Klaassen (2002). It is found that the proposed models produce more accurate out-of-sample forecasts, contain more information about true volatility and exhibit similar or better performance when used for Value at Risk comparisons. Chapter 5 searches for improvements in risk measurement for better dynamic portfolio construction. This chapter proposes multivariate versions of the one- and two-factor MRSACR models introduced in the fourth chapter. In these models, useful properties of regime switching models, nonlinear filtration and range-based estimators are combined in a multivariate setting, based on static and dynamic correlation estimates. In comparing the out-of-sample forecast performance of these models, prominent return-based and range-based volatility models are employed as benchmark models. A hedge fund portfolio construction is conducted in order to investigate the out-of-sample portfolio performance of the proposed models. Also, the out-of-sample performance of each model is tested by using a number of statistical tests. In particular, a broad range of statistical tests and loss functions are utilized in evaluating the forecast performance of the variance-covariance matrix of each portfolio. It is found that, in terms of statistical test results, the proposed models offer significant improvements in forecasting the true volatility process, and, in terms of the risk and return criteria employed, the proposed models perform better than the benchmark models. The proposed models construct hedge fund portfolios with higher risk-adjusted returns and lower tail risks, and offer superior risk-return tradeoffs and better active management ratios. However, in most cases these improvements come at the expense of higher portfolio turnover and rebalancing expenses. Chapter 6 addresses dynamic portfolio construction for better hedge fund return replication and proposes a new approach. In this chapter, a method for hedge fund replication is proposed that uses a factor-based model supplemented with a series of risk and return constraints that implicitly target all the moments of the hedge fund return distribution.
The approach is used to replicate the monthly returns of ten broad hedge fund strategy indices, using long-only positions in ten equity, bond, foreign exchange, and commodity indices, all of which can be traded using liquid, investible instruments such as futures, options and exchange traded funds. In out-of-sample tests, the proposed approach provides an improvement over the pure factor-based model, offering a closer match to both the return performance and the risk characteristics of the hedge fund strategy indices.
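A stripped-down sketch of factor-based replication with long-only, fully invested weights, using synthetic data in place of the hedge fund and index returns; the moment-targeting risk and return constraints described in the chapter are omitted, so this shows only the basic tracking step.

```python
import numpy as np
from scipy.optimize import minimize

# Choose index weights that track a hedge fund return series in-sample.
rng = np.random.default_rng(6)
T, n_factors = 120, 10
factors = rng.normal(0.004, 0.03, size=(T, n_factors))            # investible indices
hedge_fund = factors @ rng.dirichlet(np.ones(n_factors)) + rng.normal(0, 0.01, T)

def tracking_error(w):
    resid = hedge_fund - factors @ w
    return resid @ resid

res = minimize(
    tracking_error,
    x0=np.full(n_factors, 1.0 / n_factors),
    bounds=[(0.0, 1.0)] * n_factors,                               # long-only
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # fully invested
    method="SLSQP",
)
print("replicating weights:", res.x.round(3))
```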
250

Outliers detection in mixtures of dissymmetric distributions for data sets with spatial constraints / Détection de valeurs aberrantes dans des mélanges de distributions dissymétriques pour des ensembles de données avec contraintes spatiales

Planchon, Viviane 29 May 2007 (has links)
In the case of soil chemical analyses, frequency distributions for some elements show a dissymmetrical aspect, with a very marked spread to the right or to the left. A high frequency of extreme values is also observed, and a possible mixture of several distributions, due to the presence of various soil types within a single geographical unit, may be encountered. Therefore, for outlier detection and the establishment of detection limits, an original outlier detection procedure has been developed; it estimates the extreme quantiles above and below which observations are considered outliers. The estimation of these detection limits is based on the right and left tails of the distribution. A first estimation is carried out for each elementary geographical unit to determine an appropriate truncation level. A spatial classification then creates adjoining homogeneous groups of geographical units, so that robust limit values can be estimated from an optimal number of observations.
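A minimal sketch of tail-based detection limits, assuming synthetic right-skewed data and arbitrary threshold and quantile choices: a GPD is fitted to each tail, and observations beyond an extreme quantile of the fitted tail are flagged as outliers.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic skewed "soil chemistry" variable standing in for real measurements.
rng = np.random.default_rng(9)
x = rng.lognormal(mean=1.0, sigma=0.8, size=5000)

def tail_quantile(data, tail_frac=0.05, p_extreme=1e-3):
    # Fit a GPD to the excesses over a high threshold, then return the quantile
    # exceeded with probability p_extreme: P(X > q | X > u) = p_extreme / tail_frac.
    u = np.quantile(data, 1 - tail_frac)
    xi, _, beta = genpareto.fit(data[data > u] - u, floc=0)
    return u + genpareto.ppf(1 - p_extreme / tail_frac, c=xi, scale=beta)

upper_limit = tail_quantile(x)                   # upper detection limit
lower_limit = -tail_quantile(-x)                 # lower limit via reflection
outliers = (x > upper_limit) | (x < lower_limit)
print(f"limits: [{lower_limit:.2f}, {upper_limit:.2f}], flagged: {outliers.sum()}")
```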
