  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Analysis of Estimation and Specification of Various Econometric Models Used to Assess Financial Risk / Análisis de la estimación y la especificación de diversos modelos econométricos utilizados para evaluar el riesgo financiero

Acereda Serrano, Beatriz 25 July 2024 (has links)
This thesis aims to analyze some of the available methods that aid in risk estimation based on econometric models, as well as to propose some new ones. Among the questions to be answered are: which distribution to choose to obtain better risk estimates for series with abnormal behaviour, how to determine whether the distribution in parametric conditional models is a Student's t, and how to assess whether one asset's risk helps predict the risk of another.

In Chapter 1, we estimate several cryptocurrencies' Expected Shortfall using different error distributions and GARCH-type models for the conditional variance. Our goal is to examine which distributions perform better and which component of the specification plays the more crucial role in estimating Expected Shortfall. The estimations are evaluated with a backtesting technique using a rolling-window approach. Results show that, in the case of Bitcoin, it is important to use a distribution with at least two parameters controlling its shape and an extension of the GARCH model, whether the NGARCH or the CGARCH model. Other, smaller cryptocurrencies yield good enough risk predictions with the Student's t distribution and a GARCH model. The fact that the main measures of financial risk focus on the tail of the return distribution highlights the importance of choosing an appropriate distribution model.

Chapter 2 develops a procedure for consistently testing the specification of a Student's t distribution for the innovations of a dynamic model. This contributes to the existing literature by providing a test for Student's t distributions in conditional mean and variance models with a parameter-free test statistic and, thus, a known asymptotic distribution, avoiding more computationally costly resampling techniques such as bootstrapping. 
The specific expressions needed to compute the test statistic are obtained by adapting the generic test of Bai (2003), which is based on the Khmaladze (1988) transformation of the model residuals.

Finally, in Chapter 3, the concept of Granger causality in Expected Shortfall (ES) is introduced, along with a testing procedure to detect this type of predictive relationship between return series. Granger causality in Expected Shortfall is defined here as the predictive ability, on average, of tail values of one series over future tail values of another. This definition may help in analyzing whether past extreme-risk values of one asset affect future extreme-risk values of another. The main contribution of this chapter is a test for detecting this type of causality, based on the test for Granger causality in VaR by Hong et al. (2009). An empirical application to financial institutions from different industries (banking, insurance, and diversified financials) is presented to analyze risk spillovers in the US financial market.

The contribution of this thesis to the field of financial econometrics focuses on the market risk of financial assets, both in its modeling through the Expected Shortfall metric suggested in the Basel III Accords and in its utility beyond capital requirements. The results highlight the importance of a good specification of the chosen distribution model for risk estimation, especially in high-risk assets such as cryptocurrencies, and a test is proposed to verify whether the conditional distribution in parametric models used for risk prediction is a Student's t. Finally, a Granger causality test in Expected Shortfall is proposed, which allows studying risk propagation in the tails of return distributions. The proposed test can be used to investigate interconnections within and between markets as a complement when evaluating systemic risk. 
Other potential applications include improving Expected Shortfall forecasts by including the causal variables as regressors, studying the inclusion of certain asset pairs in the same portfolio based on how they interact in the riskiest situations, or constructing networks of extreme-risk propagation. / This doctoral thesis was funded by an FPU grant from the Ministerio de Educación, Cultura y Deporte (FPU17/06227).
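As a rough illustration of the Chapter 1 pipeline (not code from the thesis), the sketch below simulates returns from a GARCH(1,1) with normal innovations and then estimates Value-at-Risk and Expected Shortfall historically from the simulated scenarios. All parameter values (omega, alpha, beta, the 97.5% level) are arbitrary assumptions for the example.

```python
import math
import random
import statistics

def simulate_garch11(n, omega=0.05, alpha=0.10, beta=0.85, seed=7):
    """Simulate n returns from a GARCH(1,1) with standard normal innovations."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    returns = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        r = math.sqrt(var) * z
        returns.append(r)
        var = omega + alpha * r * r + beta * var   # GARCH(1,1) recursion
    return returns

def historical_var_es(returns, level=0.975):
    """Historical VaR and ES (as positive loss numbers) at the given level."""
    losses = sorted(-r for r in returns)          # losses, ascending
    k = int(math.ceil(level * len(losses)))       # index of the VaR quantile
    var = losses[k - 1]
    es = statistics.fmean(losses[k - 1:])         # mean loss at or beyond VaR
    return var, es

returns = simulate_garch11(5000)
var, es = historical_var_es(returns)
print(f"VaR 97.5%: {var:.3f}  ES 97.5%: {es:.3f}")
```

By construction ES is never below VaR, since it averages the losses at and beyond the VaR quantile.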
42

Estimação de medidas de risco utilizando modelos CAViaR e CARE / Risk measures estimation using CAViaR and CARE models.

Silva, Francyelle de Lima e 06 August 2010 (has links)
Neste trabalho são definidos, discutidos e estimados o Valor em Risco e o Expected Shortfall. Estas são medidas de Risco Financeiro de Mercado muito utilizadas por empresas e investidores para o gerenciamento do risco, aos quais podem estar expostos. O objetivo foi apresentar e utilizar vários métodos e modelos para a estimação dessas medidas e estabelecer qual o modelo mais adequado dentro de determinados cenários. / In this work, Value at Risk and Expected Shortfall are defined, discussed and estimated. These market risk measures are heavily used by companies and investors to manage the risk to which they may be exposed. The aim is to present and use several methods and models for estimating those measures and to establish which model is most appropriate in certain scenarios.
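The CAViaR family named in the title models the VaR quantile directly with an autoregressive recursion rather than through a full distributional model. Below is a minimal sketch of the symmetric-absolute-value CAViaR recursion; the coefficients and starting value are illustrative assumptions, not estimates from the thesis.

```python
import random

def sav_caviar_path(returns, beta0=0.05, beta1=0.85, beta2=0.15, q0=1.0):
    """Symmetric-absolute-value CAViaR: the VaR forecast q_t reacts to the
    magnitude of the last return, with no distributional assumption."""
    q = q0
    path = [q]
    for r in returns:
        q = beta0 + beta1 * q + beta2 * abs(r)   # q_t = b0 + b1*q_{t-1} + b2*|r_{t-1}|
        path.append(q)
    return path

rng = random.Random(1)
rets = [rng.gauss(0.0, 1.0) for _ in range(250)]   # one year of fake returns
var_path = sav_caviar_path(rets)
print(f"last VaR forecast: {var_path[-1]:.3f}")
```

In the actual CAViaR methodology the coefficients are estimated by minimizing the quantile (pinball) loss; the recursion above only shows how a fitted model produces the forecast path.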
44

Robust portfolio optimization with Expected Shortfall / Robust portföljoptimering med ES

Isaksson, Daniel January 2016 (has links)
This thesis project studies robust portfolio optimization with Expected Shortfall applied to a reference portfolio consisting of Swedish linear assets with stocks and a bond index. Specifically, the classical robust optimization definition, focusing on uncertainties in parameters, is extended to also include uncertainty in the log-return distribution. My contribution to the robust optimization community is to study portfolio optimization with Expected Shortfall with log-returns modeled either by elliptical distributions or by a normal copula with asymmetric marginal distributions. The robust optimization problem is solved with worst-case parameters from box and ellipsoidal uncertainty sets constructed from historical data and may be used when an investor has a more conservative view on the market than history suggests. With elliptically distributed log-returns, the optimization problem is equivalent to Markowitz mean-variance optimization, connected through the risk aversion coefficient. The results show that the optimal holding vector is almost independent of the elliptical distribution used to model log-returns, while Expected Shortfall depends strongly on the elliptical distribution, with higher Expected Shortfall resulting from fatter distribution tails. To model the tails of the log-returns asymmetrically, generalized Pareto distributions are used together with a normal copula to capture multivariate dependence. In this case, the optimization problem is not equivalent to Markowitz mean-variance optimization, and the advantages of using Expected Shortfall as the risk measure are utilized. With the asymmetric log-return model there is a noticeable difference in the optimal holding vector compared to the elliptical model. Furthermore, the Expected Shortfall increases, which follows from the better-modeled distribution tails. 
The general conclusion of this thesis project is that portfolio optimization with Expected Shortfall is an important problem, being advantageous over the Markowitz mean-variance optimization problem when log-returns are modeled with asymmetric distributions. The major drawback of portfolio optimization with Expected Shortfall is that it is a simulation-based optimization problem, introducing statistical uncertainty; and if the log-returns are drawn from a copula, the simulation process involves more steps, which can potentially make the program slower than drawing from an elliptical distribution. Thus, portfolio optimization with Expected Shortfall is appropriate to employ when trades are made on a daily basis. / Examensarbetet behandlar robust portföljoptimering med Expected Shortfall tillämpad på en referensportfölj bestående av svenska linjära tillgångar med aktier och ett obligationsindex. Specifikt så utvidgas den klassiska definitionen av robust optimering som fokuserar på parameterosäkerhet till att även inkludera osäkerhet i log-avkastningsfördelning. Mitt bidrag till den robusta optimeringslitteraturen är att studera portföljoptimering med Expected Shortfall med log-avkastningar modellerade med antingen elliptiska fördelningar eller med en normal-copula med asymmetriska marginalfördelningar. Det robusta optimeringsproblemet löses med värsta tänkbara scenario parametrar från box och ellipsoid osäkerhetsset konstruerade från historiska data och kan användas när investeraren har en mer konservativ syn på marknaden än vad den historiska datan föreslår. Med elliptiskt fördelade log-avkastningar är optimeringsproblemet ekvivalent med Markowitz väntevärde-varians optimering, kopplade med riskaversionskoefficienten. 
Resultaten visar att den optimala viktvektorn är nästan oberoende av vilken elliptisk fördelning som används för att modellera log-avkastningar, medan Expected Shortfall är starkt beroende av elliptisk fördelning med högre Expected Shortfall som resultat av fetare fördelningssvansar. För att modellera svansarna till log-avkastningsfördelningen asymmetriskt används generaliserade Paretofördelningar tillsammans med en normal-copula för att fånga det multivariata beroendet. I det här fallet är optimeringsproblemet inte ekvivalent till Markowitz väntevärde-varians optimering och fördelarna med att använda Expected Shortfall som riskmått används. Med asymmetrisk log-avkastningsmodell uppstår märkbara skillnader i optimala viktvektorn jämfört med elliptiska fördelningsmodeller. Därutöver ökar Expected Shortfall, vilket följer av bättre modellerade fördelningssvansar. De generella slutsatserna i examensarbetet är att portföljoptimering med Expected Shortfall är ett viktigt problem som är fördelaktigt över Markowitz väntevärde-varians optimering när log-avkastningar är modellerade med asymmetriska fördelningar. Den största nackdelen med portföljoptimering med Expected Shortfall är att det är ett simuleringsbaserat optimeringsproblem som introducerar statistisk osäkerhet, och om log-avkastningar dras från en copula så involverar simuleringsprocessen flera steg som potentiellt kan göra programmet långsammare än att dra från en elliptisk fördelning. Därför är portföljoptimering med Expected Shortfall lämpligt att använda när handel sker på daglig basis.
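A minimal sketch of the simulation-based portfolio Expected Shortfall that such an optimization has to evaluate for each candidate holding vector. The bivariate-normal log-return model and every parameter value below are assumptions for illustration only, not figures from the thesis.

```python
import math
import random
import statistics

def portfolio_es(weights, mu, sigma, rho, level=0.975, n=20000, seed=3):
    """Monte Carlo Expected Shortfall of a two-asset portfolio whose
    log-returns are bivariate normal with correlation rho."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        # Cholesky step for a 2x2 correlation matrix
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        r1 = mu[0] + sigma[0] * z1
        r2 = mu[1] + sigma[1] * z2
        losses.append(-(weights[0] * r1 + weights[1] * r2))
    losses.sort()
    k = int(math.ceil(level * n))
    return statistics.fmean(losses[k - 1:])   # average loss beyond the VaR quantile

es_5050 = portfolio_es([0.5, 0.5], mu=[0.0004, 0.0002],
                       sigma=[0.02, 0.01], rho=0.3)
print(f"ES 97.5%: {es_5050:.4f}")
```

An optimizer would call this function (or a copula-based variant with more simulation steps) repeatedly, which is exactly the statistical-uncertainty and speed drawback the conclusion describes.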
45

Imputation and Generation of Multidimensional Market Data

Wall, Tobias, Titus, Jacob January 2021 (has links)
Market risk is one of the most prevailing risks to which financial institutions are exposed. The most popular approach to quantifying market risk is Value at Risk. Organisations and regulators often require a long historical horizon of the affecting financial variables to estimate the risk exposures. A long horizon stresses the completeness of the available data, something risk applications need to handle.  The goal of this thesis is to evaluate and propose methods to impute financial time series. The performance of the methods is measured with respect to both price and risk metric replication. Two use cases are evaluated: missing values randomly placed in the time series, and consecutively missing values at the end-point of a time series. Five models are applied to each use case.  For the first use case, the results show that all models perform better than the naive approach; the Lasso model lowered the price replication error by 35% compared to the naive model. The result from use case two is ambiguous. Still, we can conclude that all models performed better than the naive model concerning risk metric replication. In general, all models systematically underestimated the downstream risk metrics, implying that they failed to replicate the fat-tailed property of the price movements.
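To make the first imputation use case concrete, here is a hedged sketch comparing a naive last-observation-carried-forward fill with linear interpolation on a synthetic price series, scoring both by mean absolute price replication error. The series, the 10% missing rate, and the seed are invented for the example; they stand in for the thesis's real data and models.

```python
import random

def locf(series):
    """Last observation carried forward: fill None with the previous value."""
    out, last = [], None
    for v in series:
        last = v if v is not None else last
        out.append(last)
    return out

def linear_interp(series):
    """Fill interior None runs by linear interpolation between known points."""
    out = list(series)
    known = [i for i, v in enumerate(series) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            out[i] = series[a] + t * (series[b] - series[a])
    return out

rng = random.Random(5)
true = [100 + 0.3 * i + rng.gauss(0, 0.5) for i in range(200)]
# mask ~10% of interior points; endpoints stay observed
masked = [None if rng.random() < 0.1 and 0 < i < 199 else v
          for i, v in enumerate(true)]
for name, fill in (("LOCF", locf(masked)), ("linear", linear_interp(masked))):
    err = sum(abs(f - t) for f, t in zip(fill, true)) / len(true)
    print(f"{name}: mean abs error {err:.3f}")
```

The same scoring idea extends to risk metric replication: recompute VaR or ES on the imputed series and compare against the metric on the complete series.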
46

Multi-factor approximation : An analysis and comparison of Michael Pykhtin's paper “Multifactor adjustment”

Zanetti, Michael, Güzel, Philip January 2023 (has links)
The need to account for potential losses in rare events is of utmost importance for corporations operating in the financial sector. Common measurements of potential losses are Value at Risk and Expected Shortfall, measures whose computation typically requires immense Monte Carlo simulations. Another measurement is the Advanced Internal Ratings-Based model, which estimates the capital requirement but accounts for only a single risk factor. As an alternative to the commonly used, time-consuming credit risk methods and measurements, Michael Pykhtin presents methods to approximate Value at Risk and Expected Shortfall in his paper Multi-factor adjustment from 2004. The thesis' main focus is an elucidation and investigation of the approximation methods that Pykhtin presents. Pykhtin's approximations are thereafter implemented along with the Monte Carlo methods that are used as a benchmark. A recreation of the results Pykhtin presents is completed with satisfactory, strongly matching results, a confident verification that the methods have been implemented in correspondence with the article. The methods are also applied to small and large synthetic Nordea data sets to test them on alternative data. Due to the size of the large data set, it cannot be computed in its original form; thus, a clustering algorithm is used to eliminate this limitation while still keeping the characteristics of the original data set. Executing the methods on the synthetic Nordea data sets, the Value at Risk and Expected Shortfall results show a larger discrepancy between the approximated and the Monte Carlo simulated results. The noted differences are probably due to increased borrower exposures and portfolio structures not being compatible with Pykhtin's approximation. The purpose of clustering the small data set is to test the effect on accuracy and understand the clustering algorithm's impact before applying it to the large data set. 
Clustering the small data set caused deviant results compared to the original small data set, which is expected. The clustered large data set's approximation results had a lower discrepancy from the benchmark Monte Carlo simulated results in comparison to the small data set. The increased portfolio size creates a granularity that decreases the outcome's variance for both the Monte Carlo methods and the approximation methods, hence the low discrepancy. Overall, the accuracy and execution time of Pykhtin's approximations are relatively good in the experiments. It is, however, very challenging for the approximate methods to handle large portfolios, considering the issues that the portfolio runs into at just a couple of thousand borrowers. Lastly, a comparison between the Advanced Internal Ratings-Based model and modified Value at Risk and Expected Shortfall measures is made. When calculating the capital requirement for the Advanced Internal Ratings-Based model, the absence of complex concentration risk considerations is clearly illustrated by the significantly lower results compared to either of the other methods. In addition, an increasing difference can be identified between the capital requirements obtained from Pykhtin's approximation and the Monte Carlo method. This emphasizes the importance of utilizing complex methods to fully grasp the inherent portfolio risks. / Behovet av att ta hänsyn till potentiella förluster av sällsynta händelser är av yttersta vikt för företag verksamma inom den finansiella sektorn. Vanliga mått på potentiella förluster är Value at Risk och Expected Shortfall. Dessa är mått där beräkningen vanligtvis kräver enorma Monte Carlo-simuleringar. Ett annat mått är Advanced Internal Ratings-Based-modellen som uppskattar ett kapitalkrav, men som enbart tar hänsyn till en riskfaktor. 
Som ett alternativ till dessa ofta förekommande och tidskrävande kreditriskmetoderna och mätningarna, presenterar Michael Pykhtin metoder för att approximera Value at Risk och Expected Shortfall i sin uppsats Multi-factor adjustment från 2004. Avhandlingens huvudfokus är en undersökning av de approximativa metoder som Pykhtin presenterar. Pykhtins approximationer implementeras och jämförs mot Monte Carlo-metoder, vars resultat används som referensvärden. Ett återskapande av resultaten Pykhtin presenterar i sin artikel har gjorts med tillfredsställande starkt matchande resultat, vilket är en säker verifiering av att metoderna har implementerats i samstämmighet med artikeln. Metoderna tillämpas även på ett litet och ett stor syntetiskt dataset erhållet av Nordea för att testa metoderna på alternativa data. På grund av komplexiteten hos det stora datasetet kan det inte beräknas i sin ursprungliga form. Således används en klustringsalgoritm för att eliminera denna begränsning samtidigt som egenskaperna hos den ursprungliga datamängden fortfarande bibehålls. Vid appliceringen av metoderna på de syntetiska Nordea-dataseten, identifierades en större diskrepans hos Value at Risk och Expected Shortfall-resultaten mellan de approximerade och Monte Carlo-simulerade resultaten. De noterade skillnaderna beror sannolikt på ökade exponeringar hos låntagarna och att portföljstrukturerna inte är förenliga med Pykhtins approximation. Syftet med klustringen av den lilla datasetet är att testa effekten av noggrannheten och förstå klustringsalgoritmens inverkan innan den implementeras på det stora datasetet. Att gruppera det lilla datasetet orsakade avvikande resultat jämfört med det ursprungliga lilla datasetet, vilket är förväntat. De modifierade stora datasetets approximativa resultat hade en lägre avvikelse mot de Monte Carlo simulerade benchmark resultaten i jämförelse med det lilla datasetet. 
Den ökade portföljstorleken skapar en finkornighet som minskar resultatets varians för både MC-metoderna och approximationerna, därav den låga diskrepansen. Sammantaget är Pykhtins approximationers noggrannhet och utförandetid relativt bra för experimenten. Det är dock väldigt utmanande för de approximativa metoderna att hantera stora portföljer, baserat på de problem som portföljen möter redan vid ett par tusen låntagare. Slutligen görs en jämförelse mellan Advanced Internal Ratings-Based-modellen, och modifierade Value at Risks och Expected shortfalls. När man beräknar kapitalkravet för Advanced Internal Ratings-Based-modellen, illustreras saknaden av komplexa koncentrationsrisköverväganden tydligt av de betydligt lägre resultaten jämfört med någon av de andra metoderna. Dessutom kan en ökad skillnad identifieras mellan kapitalkraven som erhålls från Pykhtins approximation och Monte Carlo-metoden. Detta understryker vikten av att använda komplexa metoder för att fullt ut förstå de inneboende portföljriskerna.
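The single-risk-factor idea behind the Advanced Internal Ratings-Based model, which Pykhtin's multi-factor adjustment then corrects, can be sketched with the Vasicek/ASRF conditional default probability. The PD, LGD, and asset-correlation inputs below are illustrative assumptions, not regulatory values, and the sketch omits the maturity adjustment of the full IRB formula.

```python
import math
from statistics import NormalDist

N = NormalDist()

def vasicek_quantile(pd, rho, q=0.999):
    """Conditional default probability at the q-th percentile of the single
    systematic factor (the ASRF idea underlying the IRB formula)."""
    num = N.inv_cdf(pd) + math.sqrt(rho) * N.inv_cdf(q)
    return N.cdf(num / math.sqrt(1.0 - rho))

def irb_style_capital(pd, lgd, rho, q=0.999):
    """Unexpected-loss capital: stressed expected loss minus expected loss."""
    return lgd * (vasicek_quantile(pd, rho, q) - pd)

k = irb_style_capital(pd=0.01, lgd=0.45, rho=0.12, q=0.999)
print(f"capital per unit exposure: {k:.4f}")
```

Because the loss quantile depends on one systematic factor only, the formula ignores sector and name concentration; that is exactly the gap the multi-factor adjustment and the Monte Carlo benchmark in the thesis address.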
47

Risk Management and Sustainability - A Study of Risk and Return in Portfolios With Different Levels of Sustainability / Finansiell riskhantering och hållbarhet - En studie om risk och avkastning i portföljer med olika nivåer av hållbarhet

Borg, Magnus, Ternqvist, Lucas January 2023 (has links)
This thesis examines the risk profile of Exchange Traded Funds and the dependence of risk on the ESG rating. 527 ETFs with global exposure were analyzed. The risk measures considered were Value-at-Risk and Expected Shortfall, alongside other metrics of risk such as volatility, maximum drawdown, tail dependence, and copulas. Stress tests were conducted to test resilience against market downturns. The ETFs were grouped by their ESG rating as well as by their carbon intensity. The results show that the lowest risk is found in ETFs with either the lowest or the highest ESG rating. Generally, a higher ESG rating implies a lower risk, but without statistical significance in many cases. Further, ETFs with a higher ESG rating showed, on average, a lower maximum drawdown, a higher tail dependence, and more resilience in market downturns. Average volatility was lower for ETFs with a higher ESG rating, but no statistical significance could be found. Interestingly, the results show that investing sustainably yields better financial performance at lower risk, thus going against the Capital Asset Pricing Model. / Denna studie undersöker riskprofilen för börshandlade fonder och sambandet mellan risk och hållbarhetsbetyg. 527 ETF:er med global exponering analyserades. De riskmått som användes var Value-at-Risk och Expected Shortfall, och några andra mått för risk användes, däribland volatilitet, största intradagsnedgång, samband i svansfördelning, och copulas. Stresstest utfördes för att testa motståndskraften i marknadsnedgångar. ETF:erna grupperades med hjälp av deras hållbarhetsbetyg och deras koldioxidintensitet. Resultatet visar att lägst risk finns i ETF:er med högst respektive lägst hållbarhetsbetyg. Generellt har ETF:er med högre hållbarhetsbetyg en lägre risk, med endast viss statistisk signifikans. 
Därtill har ETF:er med högre hållbarhetsbetyg, i genomsnitt, en lägre största intradagsnedgång, högre samband i fördelningssvansarna och är mer motståndskraftiga i marknadsnedgångar. Volatiliteten är i genomsnitt lägre desto högre hållbarhetsbetyget är, men detta resultat saknar statistisk signifikans. Ett intressant resultat är att om man investerar hållbart kan man få en högre avkastning med en lägre risk, vilket går emot Capital Asset Pricing Model.
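One of the auxiliary risk metrics used in the study, maximum drawdown, has a simple running-peak computation. The short price path below is invented purely for illustration.

```python
def max_drawdown(prices):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)                # update the running peak
        mdd = max(mdd, (peak - p) / peak)  # drawdown relative to that peak
    return mdd

prices = [100, 104, 99, 107, 95, 102, 110, 90, 96]
print(f"max drawdown: {max_drawdown(prices):.1%}")  # → 18.2% (fall from 110 to 90)
```

Unlike VaR and ES, this metric is path-dependent: it measures the worst realized loss an investor who bought at any past peak would have experienced.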
48

Applying Peaks-Over-Threshold for Increasing the Speed of Convergence of a Monte Carlo Simulation / Peaks-Over-Threshold tillämpat på en Monte Carlo simulering för ökad konvergenshastighet

Jakobsson, Eric, Åhlgren, Thor January 2022 (has links)
This thesis investigates applying the semiparametric method Peaks-Over-Threshold to data generated from a Monte Carlo simulation when estimating the financial risk measures Value-at-Risk and Expected Shortfall. The goal is to achieve faster convergence than a plain Monte Carlo simulation when assessing the extreme events that represent the worst outcomes of a financial portfolio. Faster convergence enables a reduction of iterations in the Monte Carlo simulation, and thus a more efficient way for the portfolio manager to estimate the risk measures.  The financial portfolio consists of US life insurance policies offered on the secondary market, gathered by our partner RessCapital. The method is evaluated on three portfolios with different defining characteristics.  In Part I, an analysis of selecting an optimal threshold is made. The accuracy and precision of Peaks-Over-Threshold are compared to a Monte Carlo simulation with 10,000 iterations, using a simulation with 100,000 iterations as the reference value. Depending on the risk measure and the percentile of interest, different optimal thresholds are selected.  Part II presents the results with the optimal thresholds from Part I. One can conclude that Peaks-Over-Threshold performed significantly better than a Monte Carlo simulation with 10,000 iterations for Value-at-Risk. The results for Expected Shortfall did not show a clear improvement in precision, but did show improvement in accuracy.  Value-at-Risk and Expected Shortfall at the 99.5th percentile achieved a greater error reduction than at the 99th. The results therefore align well with theory: the rarer the event considered, the better the Peaks-Over-Threshold method performed.  In conclusion, applying Peaks-Over-Threshold can prove useful when looking to reduce the number of iterations, since it does increase the convergence of a Monte Carlo simulation. 
The result is however dependent on the rarity of the event of interest, and the level of precision/accuracy required. / Det här examensarbetet tillämpar metoden Peaks-Over-Threshold på data genererat från en Monte Carlo simulering för att estimera de finansiella riskmåtten Value-at-Risk och Expected Shortfall. Målet med arbetet är att uppnå en snabbare konvergens jämfört med en Monte Carlo simulering när intresset är s.k. extrema händelser som symboliserar de värsta utfallen för en finansiell portfölj. Uppnås en snabbare konvergens kan antalet iterationer i simuleringen minskas, vilket möjliggör ett mer effektivt sätt att estimera riskmåtten för portföljförvaltaren.  Den finansiella portföljen består av amerikanska livförsäkringskontrakt som har erbjudits på andrahandsmarknaden, insamlat av vår partner RessCapital. Metoden utvärderas på tre olika portföljer med olika karaktär.  I Del I så utförs en analys för att välja en optimal tröskel för Peaks-Over-Threshold. Noggrannheten och precisionen för Peaks-Over-Threshold jämförs med en Monte Carlo simulering med 10,000 iterationer, där en Monte Carlo simulering med 100,000 iterationer används som referensvärde. Beroende på riskmått samt vilken percentil som är av intresse så väljs olika trösklar.  I Del II presenteras resultaten med de "optimalt" valda trösklarna från Del I. Peaks-over-Threshold påvisade signifikant bättre resultat för Value-at-Risk jämfört med Monte Carlo simuleringen med 10,000 iterationer. Resultaten för Expected Shortfall påvisade inte en signifikant förbättring sett till precision, men visade förbättring sett till noggrannhet.  För både Value-at-Risk och Expected Shortfall uppnådde Peaks-Over-Threshold en större felminskning vid 99.5:e percentilen jämfört med den 99:e. Resultaten var därför i linje med de teoretiska förväntningarna då en högre percentil motsvarar ett extremare event.  
Sammanfattningsvis så kan metoden Peaks-Over-Threshold vara användbar när det kommer till att minska antalet iterationer i en Monte Carlo-simulering, då resultatet visade att Peaks-Over-Threshold-appliceringen accelererar Monte Carlo-simuleringens konvergens. Resultatet är dock starkt beroende av det undersökta eventets sannolikhet, samt precisions- och noggrannhetskravet.
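A hedged sketch of the Peaks-Over-Threshold step itself: exceedances over a high threshold are fitted with a generalized Pareto distribution (here by the method of moments, one of several possible estimators) and the fitted tail then yields the VaR estimate. The loss-generating process and the 90th-percentile threshold are assumptions for the example, not the thesis's portfolio data or threshold choice.

```python
import math
import random
import statistics

def gpd_fit_moments(exceedances):
    """Method-of-moments fit of a generalized Pareto distribution
    (shape xi, scale beta) to the exceedances over a threshold."""
    m = statistics.fmean(exceedances)
    s2 = statistics.variance(exceedances)
    xi = 0.5 * (1.0 - m * m / s2)
    beta = 0.5 * m * (1.0 + m * m / s2)
    return xi, beta

def pot_var(losses, threshold, level=0.99):
    """POT quantile estimate: the empirical tail above the threshold is
    replaced by the fitted GPD tail."""
    exc = [x - threshold for x in losses if x > threshold]
    xi, beta = gpd_fit_moments(exc)
    n, nu = len(losses), len(exc)
    return threshold + (beta / xi) * ((n / nu * (1.0 - level)) ** (-xi) - 1.0)

rng = random.Random(11)
# illustrative losses with a stretched normal tail (stand-in for simulated outcomes)
losses = [abs(rng.gauss(0, 1)) * (1.0 + 0.5 * rng.random()) for _ in range(5000)]
u = sorted(losses)[int(0.9 * len(losses))]      # 90th-percentile threshold
print(f"POT VaR 99%: {pot_var(losses, u, 0.99):.3f}")
```

The convergence gain comes from using all exceedances to shape the tail instead of relying on the handful of simulated points beyond the target percentile.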
49

Utilização de cópulas com dinâmica semiparamétrica para estimação de medidas de risco de mercado

Silveira Neto, Paulo Corrêa da January 2015 (has links)
A análise de risco de mercado, o risco associado a perdas financeiras resultantes de flutuações de preços de mercado, é fundamental para instituições financeiras e gestores de carteiras. A alocação dos ativos nas carteiras envolve decisões risco/retorno eficientes, frequentemente limitadas por uma política de risco. Muitos modelos tradicionais simplificam a estimação do risco de mercado impondo muitas suposições, como distribuições simétricas, correlações lineares, normalidade, entre outras. A utilização de cópulas flexibiliza a estimação da estrutura de dependência dessas séries de tempo, possibilitando a modelagem de séries de tempo multivariadas em dois passos: estimações marginais e da dependência entre as séries. Neste trabalho, utilizou-se um modelo de cópulas com dinâmica semiparamétrica para medição de risco de mercado. A estrutura dinâmica das cópulas conta com um parâmetro de dependência que varia ao longo do tempo, em que a proposta semiparamétrica possibilita a modelagem de qualquer tipo de forma funcional que a estrutura dinâmica venha a apresentar. O modelo proposto por Hafner e Reznikova (2010), de dinâmica semiparamétrica, é comparado com o modelo sugerido por Patton (2006), que apresenta dinâmica paramétrica. Todas as cópulas no trabalho são bivariadas. Os dados consistem em quatro séries de tempo do mercado brasileiro de ações. Para cada um desses pares, utilizou-se modelos ARMA-GARCH para a estimação das marginais, enquanto a dependência entre as séries foi estimada utilizando os dois modelos de cópulas dinâmicas mencionados. Para comparar as metodologias estimaram-se duas medidas de risco de mercado: Valor em Risco e Expected Shortfall. Testes de hipóteses foram implementados para verificar a qualidade das estimativas de risco. / Market risk management, i.e. managing the risk associated with financial loss resulting from market price fluctuations, is fundamental to financial institutions and portfolio managers. 
Allocations involve efficient risk/return decisions, often restricted by an investment policy statement. Many traditional models simplify risk estimation imposing several assumptions, like symmetrical distributions, the existence of only linear correlations, normality, among others. The modelling of the dependence structure of these time series can be flexibly achieved by using copulas. This approach can model a complex multivariate time series structure by analyzing the problem in two blocks: marginal distributions estimation and dependence estimation. The dynamic structure of these copulas can account for a dependence parameter that changes over time, whereas the semiparametric option makes it possible to model any kind of functional form in the dynamic structure. We compare the model suggested by Hafner and Reznikova (2010), which is a dynamic semiparametric one, with the model suggested by Patton (2006), which is also dynamic but fully parametric. The copulas in this work are all bivariate. The data consists of four Brazilian stock market time series. For each of these pairs, ARMA-GARCH models have been used to model the marginals, while the dependences between the series are modeled by using the two methods mentioned above. For the comparison between these methodologies, we estimate Value at Risk and Expected Shortfall of the portfolios built for each pair of assets. Hypothesis tests are implemented to verify the quality of the risk estimates.
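The two-step copula idea (marginals first, dependence second) can be sketched by sampling dependent uniforms from a Gaussian copula and then applying arbitrary marginal inverse CDFs. Note that the thesis uses dynamic semiparametric copulas with time-varying dependence; the static Gaussian copula below is only a simplified stand-in, and the correlation and marginals are assumptions for the example.

```python
import math
import random
from statistics import NormalDist

N = NormalDist()

def gaussian_copula_pairs(rho, n, seed=2):
    """Draw n dependent uniform pairs from a Gaussian copula with
    correlation rho; marginals can then be applied separately."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((N.cdf(z1), N.cdf(z2)))   # probability-integral transform
    return pairs

# step 1: dependence from the copula; step 2: marginals chosen freely,
# e.g. exponential marginals via the inverse-CDF transform
pairs = gaussian_copula_pairs(rho=0.7, n=2000)
samples = [(-math.log(1 - u), -math.log(1 - v)) for u, v in pairs]
print(samples[0])
```

In the dynamic versions compared in the work, rho is replaced by a dependence parameter that evolves over time, estimated parametrically (Patton) or semiparametrically (Hafner and Reznikova).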
50

Asset Allocation Based on Shortfall Risk

Čumova, Denisa 23 July 2005 (has links) (PDF)
In der Dissertation wurde ein innovatives Portfoliomodell entwickelt, welches den Präferenzen einer großen Gruppe von Investoren entspricht, die mit der traditionellen Portfolio Selektion auf Basis von Mittelwertrendite und Varianz nicht zufrieden sind. Vor allem bezieht sich die Unzufriedenheit auf eine sehr spezifische Definition der Risiko- und Wertmaße, die angenommene Nutzenfunktion, die Risikodiversifizierung sowie die Beschränkung des Assetuniversums. Dies erschwert vor allem die Optimierung der modernen Finanzprodukte. Das im Modell verwendete Risikomaß-Ausfallrisiko drückt die Präferenzen der Investoren im Bereich unterhalb der Renditebenchmark aus. Die Renditenabweichung von der Benchmark nach oben werden nicht, wie im Falle des Mittelwertrendite-Varianz-Portfoliomodells, minimiert oder als risikoneutral, wie bei dem Mittelwertrendite-Ausfallrisiko-Portfoliomodell, betrachtet. Stattdessen wird ein Wertmaß, das Chance-Potenzial (Upper Partial Moment), verwendet, mit welchem verschiedene Investorenwünsche in diesem Bereich darstellbar sind. Die Eliminierung der Annahme der normalverteilten Renditen in diesem Chance-Potenzial-Ausfallrisiko-Portfoliomodell erlaubt eine korrekte Asset Allokation auch im Falle der nicht normalverteilten Renditen, die z. B. Finanzderivate, Aktien, Renten und Immobilien zu finden sind. Bei diesen tendiert das traditionelle Mittelwertrendite-Varianz-Portfoliomodell zu suboptimalen Entscheidungen. Die praktische Anwendung des Chance-Potenzial-Ausfallrisiko-Portfoliomodells wurde am Assetuniversum von Covered Calls, Protective Puts und Aktien gezeigt. 
/ This thesis presents an innovative portfolio model appropriate for a large group of investors who are not content with asset allocation based on the traditional mean return-variance portfolio model, above all in terms of its rather specific definition of the risk and value decision parameters, risk diversification, related utility function and its restrictions imposed on the asset universe. Its modifiable risk measure – shortfall risk – expresses variable risk preferences below the return benchmark. The upside return deviations from the benchmark are not minimized, as in the mean return-variance portfolio model, or considered risk neutral, as in the mean return-shortfall risk portfolio model; instead, variable degrees of chance potential (upper partial moments) are employed in order to provide investors with a broader range of utility choices and so reflect arbitrary preferences. The elimination of the assumption of normally distributed returns in the chance potential-shortfall risk model allows correct allocation of assets with non-normally distributed returns, e.g. financial derivatives, equities, real estate, fixed-return assets and commodities, where the mean-variance portfolio model tends toward inferior asset allocation decisions. The computational issues of the optimization algorithm developed for the mean-variance, mean-shortfall risk and chance potential-shortfall risk portfolio selection are described to ease their practical application. Additionally, the application of the chance potential-shortfall risk model is shown on an asset universe containing stocks, covered calls and protective puts.
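The model's two decision parameters, shortfall risk (a lower partial moment below the benchmark) and chance potential (an upper partial moment above it), can be sketched directly from their definitions. The return series, benchmark, and moment orders below are illustrative assumptions, not inputs from the dissertation.

```python
def lower_partial_moment(returns, benchmark=0.0, order=2):
    """Shortfall risk: average of (benchmark - r)^order over returns
    below the benchmark, normalized by the full sample size."""
    devs = [(benchmark - r) ** order for r in returns if r < benchmark]
    return sum(devs) / len(returns)

def upper_partial_moment(returns, benchmark=0.0, order=1):
    """Chance potential: average of (r - benchmark)^order over returns
    above the benchmark, normalized by the full sample size."""
    devs = [(r - benchmark) ** order for r in returns if r > benchmark]
    return sum(devs) / len(returns)

rets = [0.03, -0.02, 0.01, -0.05, 0.04, 0.00, -0.01, 0.02]
print(lower_partial_moment(rets), upper_partial_moment(rets))
```

Varying the two orders changes the investor's attitude toward losses and gains independently, which is the broader range of utility choices the abstract describes; the mean-variance model corresponds to treating both sides of the benchmark symmetrically.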
