31.
Robust portfolio optimization with Expected Shortfall / Robust portföljoptimering med ES. Isaksson, Daniel. January 2016 (has links)
This thesis project studies robust portfolio optimization with Expected Shortfall applied to a reference portfolio consisting of Swedish linear assets with stocks and a bond index. Specifically, the classical robust optimization definition, focusing on uncertainties in parameters, is extended to also include uncertainties in log-return distribution. My contribution to the robust optimization community is to study portfolio optimization with Expected Shortfall with log-returns modeled by either elliptical distributions or by a normal copula with asymmetric marginal distributions. The robust optimization problem is solved with worst-case parameters from box and ellipsoidal uncertainty sets constructed from historical data and may be used when an investor has a more conservative view on the market than history suggests. With elliptically distributed log-returns, the optimization problem is equivalent to Markowitz mean-variance optimization, connected through the risk aversion coefficient. The results show that the optimal holding vector is almost independent of the elliptical distribution used to model log-returns, while Expected Shortfall is strongly dependent on the elliptical distribution, with higher Expected Shortfall resulting from fatter distribution tails. To model the tails of the log-returns asymmetrically, generalized Pareto distributions are used together with a normal copula to capture multivariate dependence. In this case, the optimization problem is not equivalent to Markowitz mean-variance optimization and the advantages of using Expected Shortfall as a risk measure are utilized. With the asymmetric log-return model there is a noticeable difference in the optimal holding vector compared to the elliptically distributed model. Furthermore, the Expected Shortfall increases, which follows from better modeled distribution tails. The general conclusion of this thesis project is that portfolio optimization with Expected Shortfall is an important problem that is advantageous over the Markowitz mean-variance optimization problem when log-returns are modeled with asymmetric distributions. The major drawback of portfolio optimization with Expected Shortfall is that it is a simulation-based optimization problem introducing statistical uncertainty, and if the log-returns are drawn from a copula the simulation process involves more steps, which can potentially make the program slower than drawing from an elliptical distribution. Thus, portfolio optimization with Expected Shortfall is appropriate to employ when trades are made on a daily basis. / Examensarbetet behandlar robust portföljoptimering med Expected Shortfall tillämpad på en referensportfölj bestående av svenska linjära tillgångar med aktier och ett obligationsindex. Specifikt så utvidgas den klassiska definitionen av robust optimering som fokuserar på parameterosäkerhet till att även inkludera osäkerhet i log-avkastningsfördelning. Mitt bidrag till den robusta optimeringslitteraturen är att studera portföljoptimering med Expected Shortfall med log-avkastningar modellerade med antingen elliptiska fördelningar eller med en normal-copula med asymmetriska marginalfördelningar. Det robusta optimeringsproblemet löses med värsta tänkbara scenario parametrar från box och ellipsoid osäkerhetsset konstruerade från historiska data och kan användas när investeraren har en mer konservativ syn på marknaden än vad den historiska datan föreslår.
Med elliptiskt fördelade log-avkastningar är optimeringsproblemet ekvivalent med Markowitz väntevärde-varians optimering, kopplade med riskaversionskoefficienten. Resultaten visar att den optimala viktvektorn är nästan oberoende av vilken elliptisk fördelning som används för att modellera log-avkastningar, medan Expected Shortfall är starkt beroende av elliptisk fördelning med högre Expected Shortfall som resultat av fetare fördelningssvansar. För att modellera svansarna till log-avkastningsfördelningen asymmetriskt används generaliserade Paretofördelningar tillsammans med en normal-copula för att fånga det multivariata beroendet. I det här fallet är optimeringsproblemet inte ekvivalent till Markowitz väntevärde-varians optimering och fördelarna med att använda Expected Shortfall som riskmått används. Med asymmetrisk log-avkastningsmodell uppstår märkbara skillnader i optimala viktvektorn jämfört med elliptiska fördelningsmodeller. Därutöver ökar Expected Shortfall, vilket följer av bättre modellerade fördelningssvansar. De generella slutsatserna i examensarbetet är att portföljoptimering med Expected Shortfall är ett viktigt problem som är fördelaktigt över Markowitz väntevärde-varians optimering när log-avkastningar är modellerade med asymmetriska fördelningar. Den största nackdelen med portföljoptimering med Expected Shortfall är att det är ett simuleringsbaserat optimeringsproblem som introducerar statistisk osäkerhet, och om log-avkastningar dras från en copula så involverar simuleringsprocessen flera steg som potentiellt kan göra programmet långsammare än att dra från en elliptisk fördelning. Därför är portföljoptimering med Expected Shortfall lämpligt att använda när handel sker på daglig basis.
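As an illustration of the kind of computation this abstract describes, the sketch below estimates Expected Shortfall from simulated elliptical (Student-t) log-returns and minimizes it over the holding vector. The asset parameters, the 97.5% level and the long-only constraint are illustrative assumptions, not the thesis's reference portfolio or its worst-case uncertainty sets.

```python
# Minimal sketch (not the thesis's implementation): simulation-based Expected
# Shortfall minimization over portfolio weights, with log-returns drawn from an
# elliptical (multivariate Student-t) distribution. All parameters are assumed.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_t

rng = np.random.default_rng(0)
mu = np.array([0.0004, 0.0003, 0.0001])      # assumed daily mean log-returns
cov = np.array([[1.0e-4, 2.0e-5, 5.0e-6],
                [2.0e-5, 8.0e-5, 4.0e-6],
                [5.0e-6, 4.0e-6, 1.0e-5]])   # assumed dispersion matrix
scenarios = multivariate_t(loc=mu, shape=cov, df=5).rvs(size=100_000, random_state=rng)

def expected_shortfall(weights, returns, alpha=0.975):
    """Average loss beyond the alpha-quantile of the simulated portfolio losses."""
    losses = -returns @ weights
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

n_assets = scenarios.shape[1]
result = minimize(
    expected_shortfall,
    x0=np.full(n_assets, 1.0 / n_assets),
    args=(scenarios,),
    bounds=[(0.0, 1.0)] * n_assets,                                # long-only, for simplicity
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # fully invested
)
print("ES-optimal weights:", result.x.round(3), " ES:", round(result.fun, 6))
```

In practice the Rockafellar–Uryasev linear-programming reformulation is often preferred over direct minimization of the sample ES, but the objective above is enough to show the simulation-based nature of the problem that the abstract lists as the method's main drawback.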
32.
Imputation and Generation of Multidimensional Market Data. Wall, Tobias; Titus, Jacob. January 2021 (has links)
Market risk is one of the most prevailing risks to which financial institutions are exposed. The most popular approach in quantifying market risk is through Value at Risk. Organisations and regulators often require a long historical horizon of the affecting financial variables to estimate the risk exposures. A long horizon stresses the completeness of the available data, something risk applications need to handle. The goal of this thesis is to evaluate and propose methods to impute financial time series. The performance of the methods will be measured with respect to both price and risk metric replication. Two different use cases are evaluated: missing values randomly placed in the time series, and consecutively missing values at the end-point of a time series. In total, five models are applied to each use case. For the first use case, the results show that all models perform better than the naive approach. The Lasso model lowered the price replication error by 35% compared to the naive model. The result from use case two is ambiguous. Still, we can conclude that all models performed better than the naive model concerning risk metric replication. In general, all models systematically underestimated the downstream risk metrics, implying that they failed to replicate the fat-tailed property of the price movements.
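For the first use case (randomly missing values), a hedged sketch of the Lasso idea is shown below: the target series is regressed on contemporaneous returns of related series and the fit is compared against a naive zero-fill. The synthetic data, column names and penalty are assumptions, not the thesis's data or tuning.

```python
# Sketch (assumed data and hyperparameters): Lasso imputation of randomly
# missing return observations, benchmarked against a naive zero-return fill.
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_obs, n_assets = 1_000, 5
# synthetic correlated daily returns standing in for the real market data
chol = np.linalg.cholesky(0.6 * np.ones((n_assets, n_assets)) + 0.4 * np.eye(n_assets))
returns = pd.DataFrame(rng.normal(0.0, 0.01, (n_obs, n_assets)) @ chol.T,
                       columns=[f"asset_{i}" for i in range(n_assets)])

target = "asset_0"
missing = rng.random(n_obs) < 0.05                 # 5% of the target missing at random
train = returns.loc[~missing]
model = Lasso(alpha=1e-5, max_iter=50_000).fit(train.drop(columns=target), train[target])

imputed = returns[target].copy()
imputed[missing] = model.predict(returns.loc[missing].drop(columns=target))
naive = returns[target].where(~missing, 0.0)       # naive fill: zero return

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

print("Lasso RMSE:", rmse(imputed[missing], returns[target][missing]))
print("Naive RMSE:", rmse(naive[missing], returns[target][missing]))
```

The second use case (consecutive gaps at the end of a series) and the generative models evaluated in the thesis are not covered by this sketch.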
33.
Multi-factor approximation: An analysis and comparison of Michael Pykhtin's paper “Multifactor adjustment”. Zanetti, Michael; Güzel, Philip. January 2023 (has links)
The need to account for potential losses in rare events is of utmost importance for corporations operating in the financial sector. Common measurements for potential losses are Value at Risk and Expected Shortfall. These are measures whose computation typically requires immense Monte Carlo simulations. Another measurement is the Advanced Internal Ratings-Based model, which estimates the capital requirement but solely accounts for a single risk factor. As an alternative to the commonly used time-consuming credit risk methods and measurements, Michael Pykhtin presents methods to approximate the Value at Risk and Expected Shortfall in his paper Multi-factor adjustment from 2004. The thesis’ main focus is an elucidation and investigation of the approximation methods that Pykhtin presents. Pykhtin’s approximations are thereafter implemented along with the Monte Carlo methods that are used as a benchmark. A recreation of the results Pykhtin presents is completed with satisfactory, strongly matching results, which convincingly verifies that the methods have been implemented in correspondence with the article. The methods are also applied to a small and a large synthetic Nordea data set to test them on alternative data. Due to its size, the large data set cannot be processed in its original form. Thus, a clustering algorithm is used to eliminate this limitation while still keeping the characteristics of the original data set. When the methods are executed on the synthetic Nordea data sets, there is a larger discrepancy between the approximated and the Monte Carlo simulated Value at Risk and Expected Shortfall results. The noted differences are probably due to increased borrower exposures and portfolio structures not being compatible with Pykhtin’s approximation. The purpose of clustering the small data set is to test the effect on the accuracy and understand the clustering algorithm’s impact before implementing it on the large data set. Clustering the small data set caused deviating results compared to the original small data set, which is expected. The clustered large data set’s approximation results had a lower discrepancy from the benchmark Monte Carlo simulated results in comparison to the small data set. The increased portfolio size creates a granularity that decreases the outcome's variance for both the Monte Carlo methods and the approximation methods, hence the low discrepancy. Overall, the accuracy and execution time of Pykhtin’s approximations are relatively good in the experiments. It is, however, very challenging for the approximation methods to handle large portfolios, considering the issues that the portfolio runs into at just a couple of thousand borrowers. Lastly, a comparison between the Advanced Internal Ratings-Based model and modified Value at Risks and Expected Shortfalls is made. When the capital requirement for the Advanced Internal Ratings-Based model is calculated, the absence of complex concentration risk consideration is clearly illustrated by the significantly lower results compared to either of the other methods. In addition, an increasing difference can be identified between the capital requirements obtained from Pykhtin’s approximation and the Monte Carlo method. This emphasizes the importance of utilizing complex methods to fully grasp the inherent portfolio risks. / Behovet av att ta hänsyn till potentiella förluster av sällsynta händelser är av yttersta vikt för företag verksamma inom den finansiella sektorn.
Vanliga mått på potentiella förluster är Value at Risk och Expected Shortfall. Dessa är mått där beräkningen vanligtvis kräver enorma Monte Carlo-simuleringar. Ett annat mått är Advanced Internal Ratings-Based-modellen som uppskattar ett kapitalkrav, men som enbart tar hänsyn till en riskfaktor. Som ett alternativ till dessa ofta förekommande och tidskrävande kreditriskmetoderna och mätningarna, presenterar Michael Pykhtin metoder för att approximera Value at Risk och Expected Shortfall i sin uppsats Multi-factor adjustment från 2004. Avhandlingens huvudfokus är en undersökning av de approximativa metoder som Pykhtin presenterar. Pykhtins approximationer implementeras och jämförs mot Monte Carlo-metoder, vars resultat används som referensvärden. Ett återskapande av resultaten Pykhtin presenterar i sin artikel har gjorts med tillfredsställande starkt matchande resultat, vilket är en säker verifiering av att metoderna har implementerats i samstämmighet med artikeln. Metoderna tillämpas även på ett litet och ett stort syntetiskt dataset erhållet av Nordea för att testa metoderna på alternativa data. På grund av komplexiteten hos det stora datasetet kan det inte beräknas i sin ursprungliga form. Således används en klustringsalgoritm för att eliminera denna begränsning samtidigt som egenskaperna hos den ursprungliga datamängden fortfarande bibehålls. Vid appliceringen av metoderna på de syntetiska Nordea-dataseten identifierades en större diskrepans hos Value at Risk och Expected Shortfall-resultaten mellan de approximerade och Monte Carlo-simulerade resultaten. De noterade skillnaderna beror sannolikt på ökade exponeringar hos låntagarna och att portföljstrukturerna inte är förenliga med Pykhtins approximation. Syftet med klustringen av det lilla datasetet är att testa effekten på noggrannheten och förstå klustringsalgoritmens inverkan innan den implementeras på det stora datasetet. Att gruppera det lilla datasetet orsakade avvikande resultat jämfört med det ursprungliga lilla datasetet, vilket är förväntat. Det modifierade stora datasetets approximativa resultat hade en lägre avvikelse mot de Monte Carlo simulerade benchmark resultaten i jämförelse med det lilla datasetet. Den ökade portföljstorleken skapar en finkornighet som minskar resultatets varians för både MC-metoderna och approximationerna, därav den låga diskrepansen. Sammantaget är Pykhtins approximationers noggrannhet och utförandetid relativt bra för experimenten. Det är dock väldigt utmanande för de approximativa metoderna att hantera stora portföljer, baserat på de problem som portföljen möter redan vid ett par tusen låntagare. Slutligen görs en jämförelse mellan Advanced Internal Ratings-Based-modellen och modifierade Value at Risks och Expected Shortfalls. När man beräknar kapitalkravet för Advanced Internal Ratings-Based-modellen, illustreras avsaknaden av komplexa koncentrationsrisköverväganden tydligt av de betydligt lägre resultaten jämfört med någon av de andra metoderna. Dessutom kan en ökad skillnad identifieras mellan kapitalkraven som erhålls från Pykhtins approximation och Monte Carlo-metoden. Detta understryker vikten av att använda komplexa metoder för att fullt ut förstå de inneboende portföljriskerna.
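To make the gap concrete that Pykhtin's adjustment targets, the sketch below compares an asymptotic single-risk-factor (Vasicek-type) loss quantile with the Monte Carlo 99.9% VaR of a multi-factor Gaussian copula portfolio. All portfolio parameters are assumptions; this is neither Pykhtin's approximation itself nor the Nordea data.

```python
# Sketch (assumed portfolio): single-factor asymptotic loss quantile versus the
# Monte Carlo loss quantile of a multi-factor Gaussian copula credit portfolio.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_obligors, n_factors, q = 100, 3, 0.999
pd_default = np.full(n_obligors, 0.01)                # assumed default probabilities
ead = rng.uniform(0.5, 5.0, n_obligors)               # assumed exposures, LGD = 100%
loadings = rng.dirichlet(np.ones(n_factors), n_obligors) * 0.6  # assumed factor loadings
rho = (loadings ** 2).sum(axis=1)                     # systematic variance per obligor
threshold = norm.ppf(pd_default)

# Asymptotic single-risk-factor (Vasicek-type) quantile, which ignores the
# multi-factor concentration structure
asrf_loss = ead @ norm.cdf((threshold + np.sqrt(rho) * norm.ppf(q)) / np.sqrt(1.0 - rho))

# Multi-factor Monte Carlo loss quantile used as the benchmark
n_sim = 100_000
z = rng.standard_normal((n_sim, n_factors))           # systematic factors
eps = rng.standard_normal((n_sim, n_obligors))        # idiosyncratic shocks
asset_values = z @ loadings.T + np.sqrt(1.0 - rho) * eps
losses = (asset_values < threshold) @ ead
mc_var = np.quantile(losses, q)

print(f"ASRF 99.9% loss: {asrf_loss:.2f}   multi-factor MC 99.9% VaR: {mc_var:.2f}")
```

Roughly speaking, Pykhtin's method adds an analytic correction on top of a single-factor term of this kind so that the multi-factor quantile can be approximated without running the Monte Carlo benchmark.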
34.
Risk Management and Sustainability - A Study of Risk and Return in Portfolios With Different Levels of Sustainability / Finansiell riskhantering och hållbarhet - En studie om risk och avkastning i portföljer med olika nivåer av hållbarhet. Borg, Magnus; Ternqvist, Lucas. January 2023 (has links)
This thesis examines the risk profile of Exchange Traded Funds (ETFs) and the dependence of risk on the ESG rating. 527 ETFs with global exposure were analyzed. The risk measures considered were Value-at-Risk and Expected Shortfall, while some other metrics of risk were used, such as volatility, maximum drawdown, tail dependence, and copulas. Stress tests were conducted in order to test the resilience against market downturns. The ETFs were grouped by their ESG rating as well as by their carbon intensity. The results show that the lowest risk can be found for ETFs with either the lowest ESG rating or the highest. Generally, a higher ESG rating implies a lower risk, but without statistical significance in many cases. Further, ETFs with a higher ESG rating showed, on average, a lower maximum drawdown, a higher tail dependence, and more resilience in market downturns. Regarding volatility, the average was shown to be lower for ETFs with a higher ESG rating, but no statistical significance could be found. Interestingly, the results show that investing sustainably returns a better financial performance at a lower risk, thus going against the Capital Asset Pricing Model. / Denna studie undersöker riskprofilen för börshandlade fonder (ETF:er) och sambandet mellan risk och hållbarhetsbetyg. 527 ETF:er med global exponering analyserades. De riskmått som användes var Value-at-Risk och Expected Shortfall, och några andra mått för risk användes, däribland volatilitet, största intradagsnedgång, samband i svansfördelning, och copulas. Stresstest utfördes för att testa motståndskraften i marknadsnedgångar. ETF:erna grupperades med hjälp av deras hållbarhetsbetyg och deras koldioxidintensitet. Resultatet visar att lägst risk finns i ETF:er med högst respektive lägst hållbarhetsbetyg. Generellt har ETF:er med högre hållbarhetsbetyg en lägre risk, med endast viss statistisk signifikans. Därtill har ETF:er med högre hållbarhetsbetyg, i genomsnitt, en lägre största intradagsnedgång, högre samband i fördelningssvansarna och är mer motståndskraftiga i marknadsnedgångar. Volatiliteten är i genomsnitt lägre ju högre hållbarhetsbetyget är, men detta resultat saknar statistisk signifikans. Ett intressant resultat är att om man investerar hållbart kan man få en högre avkastning med en lägre risk, vilket går emot Capital Asset Pricing Model.
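A sketch of the per-group risk metrics described above, on synthetic data: historical VaR, Expected Shortfall and maximum drawdown for funds bucketed by an assumed ESG rating column. The rating labels, the 95% level and the return data are illustrative only, not the 527 funds studied in the thesis.

```python
# Sketch (synthetic data, assumed rating labels): historical VaR, ES and maximum
# drawdown per fund, then averaged within each ESG rating bucket.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n_days, n_funds, alpha = 1_000, 40, 0.95
returns = pd.DataFrame(rng.standard_t(df=4, size=(n_days, n_funds)) * 0.01,
                       columns=[f"etf_{i}" for i in range(n_funds)])
esg = pd.Series(rng.choice(["AAA", "AA", "A", "BBB", "BB"], n_funds),
                index=returns.columns, name="esg_rating")

def hist_var(r, a=alpha):                 # historical Value-at-Risk of the loss
    return float(np.quantile(-r, a))

def hist_es(r, a=alpha):                  # average loss beyond the VaR
    losses = -r
    return float(losses[losses >= np.quantile(losses, a)].mean())

def max_drawdown(r):                      # worst peak-to-trough decline of the NAV
    nav = (1.0 + r).cumprod()
    return float((nav / nav.cummax() - 1.0).min())

metrics = pd.DataFrame({
    "VaR_95": returns.apply(hist_var),
    "ES_95": returns.apply(hist_es),
    "max_drawdown": returns.apply(max_drawdown),
}).join(esg)
print(metrics.groupby("esg_rating").mean().round(4))
```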
35.
Applying Peaks-Over-Threshold for Increasing the Speed of Convergence of a Monte Carlo Simulation / Peaks-Over-Threshold tillämpat på en Monte Carlo simulering för ökad konvergenshastighet. Jakobsson, Eric; Åhlgren, Thor. January 2022 (has links)
This thesis investigates applying the semiparametric method Peaks-Over-Threshold to data generated from a Monte Carlo simulation when estimating the financial risk measures Value-at-Risk and Expected Shortfall. The goal is to achieve a faster convergence than a Monte Carlo simulation when assessing extreme events that symbolise the worst outcomes of a financial portfolio. Achieving a faster convergence will enable a reduction of iterations in the Monte Carlo simulation, thus providing a more efficient way of estimating risk measures for the portfolio manager. The financial portfolio consists of US life insurance policies offered on the secondary market, gathered by our partner RessCapital. The method is evaluated on three different portfolios with different defining characteristics. In Part I, an analysis of selecting an optimal threshold is made. The accuracy and precision of Peaks-Over-Threshold are compared to those of the Monte Carlo simulation with 10,000 iterations, using a simulation of 100,000 iterations as the reference value. Depending on the risk measure and the percentile of interest, different optimal thresholds are selected. Part II presents the results with the optimal thresholds from Part I. One can conclude that Peaks-Over-Threshold performed significantly better than a Monte Carlo simulation with 10,000 iterations for Value-at-Risk. The results for Expected Shortfall did not achieve a clear improvement in terms of precision, but they did show improvement in terms of accuracy. Value-at-Risk and Expected Shortfall at the 99.5th percentile achieved a greater error reduction than at the 99th. The results therefore aligned well with theory, as the rarer the event considered, the better the Peaks-Over-Threshold method performed. In conclusion, applying Peaks-Over-Threshold can prove useful when looking to reduce the number of iterations, since it does increase the speed of convergence of a Monte Carlo simulation. The result is, however, dependent on the rarity of the event of interest and the level of precision/accuracy required. / Det här examensarbetet tillämpar metoden Peaks-Over-Threshold på data genererat från en Monte Carlo simulering för att estimera de finansiella riskmåtten Value-at-Risk och Expected Shortfall. Målet med arbetet är att uppnå en snabbare konvergens jämfört med en Monte Carlo simulering när intresset är s.k. extrema händelser som symboliserar de värsta utfallen för en finansiell portfölj. Uppnås en snabbare konvergens kan antalet iterationer i simuleringen minskas, vilket möjliggör ett mer effektivt sätt att estimera riskmåtten för portföljförvaltaren. Den finansiella portföljen består av amerikanska livförsäkringskontrakt som har erbjudits på andrahandsmarknaden, insamlat av vår partner RessCapital. Metoden utvärderas på tre olika portföljer med olika karaktär. I Del I så utförs en analys för att välja en optimal tröskel för Peaks-Over-Threshold. Noggrannheten och precisionen för Peaks-Over-Threshold jämförs med en Monte Carlo simulering med 10,000 iterationer, där en Monte Carlo simulering med 100,000 iterationer används som referensvärde. Beroende på riskmått samt vilken percentil som är av intresse så väljs olika trösklar. I Del II presenteras resultaten med de "optimalt" valda trösklarna från Del I. Peaks-Over-Threshold påvisade signifikant bättre resultat för Value-at-Risk jämfört med Monte Carlo simuleringen med 10,000 iterationer.
Resultaten för Expected Shortfall påvisade inte en signifikant förbättring sett till precision, men visade förbättring sett till noggrannhet. För både Value-at-Risk och Expected Shortfall uppnådde Peaks-Over-Threshold en större felminskning vid 99.5:e percentilen jämfört med den 99:e. Resultaten var därför i linje med de teoretiska förväntningarna då en högre percentil motsvarar ett extremare event. Sammanfattningsvis så kan metoden Peaks-Over-Threshold vara användbar när det kommer till att minska antalet iterationer i en Monte Carlo simulering då resultatet visade att Peaks-Over-Threshold-appliceringen accelererar Monte Carlo-simuleringens konvergens. Resultatet är dock starkt beroende av det undersökta eventets sannolikhet, samt precisions- och noggrannhetskravet.
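The core Peaks-Over-Threshold step can be sketched as follows: fit a generalized Pareto distribution to the exceedances of simulated losses over a threshold and read off VaR and Expected Shortfall from the standard POT formulas. The loss model and the fixed 95% threshold are assumptions; the thesis selects thresholds per portfolio, risk measure and percentile.

```python
# Sketch (assumed loss model and threshold): GPD fit to Monte Carlo exceedances,
# with VaR and ES from the Peaks-Over-Threshold formulas, versus plain empirical
# estimates from the same sample.
import numpy as np
from scipy.stats import genpareto, lognorm

rng = np.random.default_rng(4)
losses = lognorm(s=0.8).rvs(10_000, random_state=rng)   # stand-in MC loss sample
p, u = 0.995, np.quantile(losses, 0.95)                  # target level, threshold

excess = losses[losses > u] - u
xi, _, beta = genpareto.fit(excess, floc=0.0)            # GPD shape and scale
zeta = excess.size / losses.size                         # exceedance probability

var_pot = u + beta / xi * (((1.0 - p) / zeta) ** (-xi) - 1.0)
es_pot = (var_pot + beta - xi * u) / (1.0 - xi)          # valid for xi < 1

var_mc = np.quantile(losses, p)
es_mc = losses[losses >= var_mc].mean()
print(f"POT  VaR {var_pot:.2f}  ES {es_pot:.2f}")
print(f"MC   VaR {var_mc:.2f}  ES {es_mc:.2f}")
```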
36.
Utilização de cópulas com dinâmica semiparamétrica para estimação de medidas de risco de mercado. Silveira Neto, Paulo Corrêa da. January 2015 (has links)
A análise de risco de mercado, o risco associado a perdas financeiras resultantes de flutuações de preços de mercado, é fundamental para instituições financeiras e gestores de carteiras. A alocação dos ativos nas carteiras envolve decisões risco/retorno eficientes, frequentemente limitadas por uma política de risco. Muitos modelos tradicionais simplificam a estimação do risco de mercado impondo muitas suposições, como distribuições simétricas, correlações lineares, normalidade, entre outras. A utilização de cópulas flexibiliza a estimação da estrutura de dependência dessas séries de tempo, possibilitando a modelagem de séries de tempo multivariadas em dois passos: estimações marginais e da dependência entre as séries. Neste trabalho, utilizou-se um modelo de cópulas com dinâmica semiparamétrica para medição de risco de mercado. A estrutura dinâmica das cópulas conta com um parâmetro de dependência que varia ao longo do tempo, em que a proposta semiparamétrica possibilita a modelagem de qualquer tipo de forma funcional que a estrutura dinâmica venha a apresentar. O modelo proposto por Hafner e Reznikova (2010), de dinâmica semiparamétrica, é comparado com o modelo sugerido por Patton (2006), que apresenta dinâmica paramétrica. Todas as cópulas no trabalho são bivariadas. Os dados consistem em quatro séries de tempo do mercado brasileiro de ações. Para cada um desses pares, utilizou-se modelos ARMA-GARCH para a estimação das marginais, enquanto a dependência entre as séries foi estimada utilizando os dois modelos de cópulas dinâmicas mencionados. Para comparar as metodologias estimaram-se duas medidas de risco de mercado: Valor em Risco e Expected Shortfall. Testes de hipóteses foram implementados para verificar a qualidade das estimativas de risco. / Market risk management, i.e. managing the risk associated with financial loss resulting from market price fluctuations, is fundamental to financial institutions and portfolio managers. Allocations involve efficient risk/return decisions, often restricted by an investment policy statement. Many traditional models simplify risk estimation imposing several assumptions, like symmetrical distributions, the existence of only linear correlations, normality, among others. The modelling of the dependence structure of these time series can be flexibly achieved by using copulas. This approach can model a complex multivariate time series structure by analyzing the problem in two blocks: marginal distributions estimation and dependence estimation. The dynamic structure of these copulas can account for a dependence parameter that changes over time, whereas the semiparametric option makes it possible to model any kind of functional form in the dynamic structure. We compare the model suggested by Hafner and Reznikova (2010), which is a dynamic semiparametric one, with the model suggested by Patton (2006), which is also dynamic but fully parametric. The copulas in this work are all bivariate. The data consists of four Brazilian stock market time series. For each of these pairs, ARMA-GARCH models have been used to model the marginals, while the dependences between the series are modeled by using the two methods mentioned above. For the comparison between these methodologies, we estimate Value at Risk and Expected Shortfall of the portfolios built for each pair of assets. Hypothesis tests are implemented to verify the quality of the risk estimates.
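A simplified, static sketch of the two-step idea: fit the marginals, transform to uniforms, fit a Gaussian copula, then simulate to obtain VaR and Expected Shortfall for a bivariate portfolio. The thesis uses ARMA-GARCH marginals and a copula whose dependence parameter varies over time; here the marginals are Student-t, the copula is static, and all data are synthetic assumptions.

```python
# Sketch (synthetic data, static copula): two-step estimation - marginals first,
# then a Gaussian copula - followed by simulation-based VaR and ES.
import numpy as np
from scipy.stats import t as student_t, norm

rng = np.random.default_rng(5)
# synthetic "observed" returns for two assets with heavy tails and dependence
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=2_000)
data = student_t(df=4).ppf(norm.cdf(z)) * 0.01

# Step 1: fit the marginal distributions
params = [student_t.fit(data[:, j]) for j in range(2)]

# Step 2: transform to uniforms and fit the Gaussian copula correlation
u = np.column_stack([student_t(*params[j]).cdf(data[:, j]) for j in range(2)])
corr = np.corrcoef(norm.ppf(u), rowvar=False)

# Simulate from the copula, map back through the marginals, compute VaR / ES
sims = norm.cdf(rng.multivariate_normal([0, 0], corr, size=100_000))
returns = np.column_stack([student_t(*params[j]).ppf(sims[:, j]) for j in range(2)])
losses = -(returns @ np.array([0.5, 0.5]))       # equally weighted pair
var99 = np.quantile(losses, 0.99)
print(f"VaR99 {var99:.4f}  ES99 {losses[losses >= var99].mean():.4f}")
```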
37.
Efficient Simulations in Finance. Sak, Halis. January 2008 (has links) (PDF)
Measuring the risk of a credit portfolio is a challenge for financial institutions because of the regulations brought by the Basel Committee. In recent years, many models and state-of-the-art methods that utilize Monte Carlo simulation have been proposed to solve this problem. In most of the models, factors are used to account for the correlations between obligors. We concentrate on the normal copula model, which assumes multivariate normality of the factors. Computation of value at risk (VaR) and expected shortfall (ES) for realistic credit portfolio models is subtle, since (i) there is dependency throughout the portfolio, and (ii) an efficient method is required to compute tail loss probabilities and conditional expectations at multiple points simultaneously. This is why Monte Carlo simulation must be improved by variance reduction techniques such as importance sampling (IS). Thus a new method is developed for simulating tail loss probabilities and conditional expectations for a standard credit risk portfolio. The new method is an integration of IS with inner replications using geometric shortcut for dependent obligors in a normal copula framework. Numerical results show that the new method is better than naive simulation for computing tail loss probabilities and conditional expectations at a single x and VaR value. Finally, it is shown that, compared to the standard t statistic, the skewness-correction method of Peter Hall is a simple and more accurate alternative for constructing confidence intervals. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
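A simplified sketch of the importance sampling idea in a one-factor normal copula credit model: shift the systematic factor so that large losses become frequent and reweight each sample by the likelihood ratio. This illustrates only the outer factor shift, not the abstract's inner replications with the geometric shortcut; all portfolio parameters are assumptions.

```python
# Sketch (assumed portfolio): tail loss probability P(L > x) in a one-factor
# normal copula model, by naive Monte Carlo (mu_shift = 0) and by importance
# sampling with a shifted systematic factor and likelihood-ratio weights.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n_obligors, pd_, rho, x = 100, 0.01, 0.2, 20.0      # loss threshold x is assumed
ead = np.full(n_obligors, 1.0)
thr = norm.ppf(pd_)

def tail_prob(n_sim, mu_shift=0.0):
    """Estimate P(L > x) and its standard error."""
    z = rng.standard_normal(n_sim) + mu_shift        # (shifted) systematic factor
    cond_pd = norm.cdf((thr - np.sqrt(rho) * z[:, None]) / np.sqrt(1.0 - rho))
    defaults = rng.random((n_sim, n_obligors)) < cond_pd
    losses = defaults @ ead
    weights = np.exp(-mu_shift * z + 0.5 * mu_shift ** 2)   # likelihood ratio
    est = weights * (losses > x)
    return est.mean(), est.std(ddof=1) / np.sqrt(n_sim)

for mu in (0.0, -3.0):                               # defaults occur for low z
    p, se = tail_prob(50_000, mu)
    print(f"mu_shift={mu:+.1f}  P(L>x) ~ {p:.5f}  (std err {se:.5f})")
```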
39.
Stress, uncertainty and multimodality of risk measures / Stress, incertitude et multimodalité des mesures de risque. Li, Kehan. 06 June 2017 (has links)
Dans cette thèse, nous discutons du stress, de l'incertitude et de la multimodalité des mesures de risque en accordant une attention particulière à deux parties. Les résultats ont une influence directe sur le calcul du capital économique et réglementaire des banques. Tout d'abord, nous fournissons une nouvelle mesure de risque - la VaR du stress du spectre (SSVaR) - pour quantifier et intégrer l'incertitude de la valeur à risque. C'est un modèle de mise en œuvre de la VaR stressée proposée par Bâle III. La SSVaR est basée sur l'intervalle de confiance de la VaR. Nous étudions la distribution asymptotique de la statistique de l'ordre, qui est un estimateur non paramétrique de la VaR, afin de construire l'intervalle de confiance. Deux intervalles de confiance sont obtenus soit par le résultat gaussien asymptotique, soit par l'approche saddlepoint. Nous les comparons avec l'intervalle de confiance en bootstrapping par des simulations, montrant que l'intervalle de confiance construit à partir de l'approche saddlepoint est robuste pour différentes tailles d'échantillons, distributions sous-jacentes et niveaux de confiance. Les applications de test de stress utilisant SSVaR sont effectuées avec des rendements historiques de l'indice boursier lors d'une crise financière, pour identifier les violations potentielles de la VaR pendant les périodes de turbulences sur les marchés financiers. Deuxièmement, nous étudions l'impact de la multimodalité des distributions sur les calculs de la VaR et de l'ES. Les distributions de probabilité unimodales ont été largement utilisées pour le calcul paramétrique de la VaR par les investisseurs, les gestionnaires de risques et les régulateurs. Cependant, les données financières peuvent être caractérisées par des distributions ayant plus d'un mode. Avec ces données nous montrons que les distributions multimodales peuvent surpasser la distribution unimodale au sens de la qualité de l'ajustement. Deux catégories de distributions multimodales sont considérées: la famille de Cobb et la famille Distortion. Nous développons un algorithme d'échantillonnage de rejet adapté, permettant de générer efficacement des échantillons aléatoires à partir de la fonction de densité de probabilité de la famille de Cobb. Pour une étude empirique, deux ensembles de données sont considérés: un ensemble de données quotidiennes concernant le risque opérationnel et un scénario de trois mois de rendement du portefeuille de marché construit avec cinq minutes de données intraday. Avec un éventail complet de niveaux de confiance, la VaR et l'ES à la fois des distributions unimodales et des distributions multimodales sont calculés. Nous analysons les résultats pour voir l'intérêt d'utiliser la distribution multimodale au lieu de la distribution unimodale en pratique. / In this thesis, we focus on discussing the stress, uncertainty and multimodality of risk measures with special attention on two parts. The results have direct influence on the computation of bank economic and regulatory capital. First, we provide a novel risk measure - the Spectrum Stress VaR (SSVaR) - to quantify and integrate the uncertainty of the Value-at-Risk. It is an implementation model of stressed VaR proposed in Basel III. The SSVaR is based on the confidence interval of the VaR. We investigate the asymptotic distribution of the order statistic, which is a nonparametric estimator of the VaR, in order to build the confidence interval. 
Two confidence intervals are derived from either the asymptotic Gaussian result or the saddlepoint approach. We compare them with the bootstrapping confidence interval by simulations, showing that the confidence interval built from the saddlepoint approach is robust for different sample sizes, underlying distributions and confidence levels. Stress testing applications using SSVaR are performed with historical stock index returns during the financial crisis, to identify potential violations of the VaR during turmoil periods on financial markets. Second, we investigate the impact of multimodality of distributions on VaR and ES calculations. Unimodal probability distributions have been widely used for parametric VaR computation by investors, risk managers and regulators. However, financial data may be characterized by distributions having more than one mode. For these data, we show that multimodal distributions may outperform unimodal distributions in the sense of goodness-of-fit. Two classes of multimodal distributions are considered: Cobb's family and the Distortion family. We develop an adapted rejection sampling algorithm, permitting random samples to be generated efficiently from the probability density function of Cobb's family. For the empirical study, two data sets are considered: a daily data set concerning operational risk and a three-month scenario of market portfolio returns built with five-minute intraday data. With a complete spectrum of confidence levels, the VaR and the ES from both unimodal distributions and multimodal distributions are calculated. We analyze the results to see the interest of using multimodal distributions instead of unimodal distributions in practice.
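A sketch of the ingredient behind the uncertainty quantification: the nonparametric VaR estimator is an order statistic whose asymptotic variance is p(1 - p)/(n f(q_p)^2), with the density estimated here by a kernel smoother, and the result is compared against a bootstrap interval. This reproduces only the asymptotic Gaussian interval on a synthetic loss sample, not the saddlepoint or SSVaR construction.

```python
# Sketch (synthetic losses): asymptotic Gaussian confidence interval for the
# sample-quantile VaR estimator, compared with a bootstrap interval.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(7)
losses = rng.standard_t(df=4, size=5_000)            # stand-in daily loss sample
p, level = 0.99, 0.95                                # VaR level, CI level

var_hat = np.quantile(losses, p)                     # order-statistic VaR estimate
f_hat = gaussian_kde(losses)(var_hat)[0]             # density at the quantile
se = np.sqrt(p * (1.0 - p) / losses.size) / f_hat
z = norm.ppf(0.5 + level / 2.0)
print(f"VaR_{p:.0%} = {var_hat:.3f}, "
      f"{level:.0%} CI = [{var_hat - z * se:.3f}, {var_hat + z * se:.3f}]")

# Bootstrap comparison
boot = np.quantile(rng.choice(losses, size=(2_000, losses.size), replace=True),
                   p, axis=1)
lo, hi = np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])
print(f"Bootstrap {level:.0%} CI = [{lo:.3f}, {hi:.3f}]")
```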