61 |
How useful are intraday data in Risk Management? : An application of high frequency stock returns of three Nordic Banks to the VaR and ES calculation
Somnicki, Emil; Ostrowski, Krzysztof (January 2010)
The work is focused on the Value at Risk and Expected Shortfall calculation. We assume the returns to be based on two pillars: the white noise and the stochastic volatility. We assume that the white noise follows the NIG distribution and that the volatility is modeled using the nGARCH, NIG-GARCH, tGARCH and the non-parametric method. We apply the models to the stocks of three banks in the Nordic market. We consider daily and intraday returns at frequencies of 5, 10, 20 and 30 minutes. We calculate the one-step-ahead VaR and ES for the daily and the intraday data, and use the Kupiec test and the Markov test to assess the correctness of the models. We also propose a new way of improving the daily VaR calculation by using high-frequency returns. The results show that intraday data can be used for the one-step-ahead VaR and ES calculation. Comparing the VaR for the end of the following trading day calculated from daily returns with the VaR computed from high-frequency returns shows that using intraday data can improve the VaR outcomes.
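As an illustration of the one-step-ahead calculation and the Kupiec coverage test mentioned above, the following Python sketch computes VaR and ES for a return r = mu + sigma·z with standardized NIG innovations, and evaluates a series of VaR violations. It is a minimal sketch, not the authors' code: the NIG shape parameters, the volatility forecast and the violation count are illustrative stand-ins, and scipy's norminvgauss parametrization is assumed.

```python
# Hypothetical sketch of one-step-ahead VaR/ES with standardized NIG innovations
# and a given volatility forecast, plus the Kupiec proportion-of-failures test.
import numpy as np
from scipy import stats

def var_es_one_step(mu, sigma, alpha=0.01, nig_a=2.0, nig_b=-0.3, n_grid=20000):
    """VaR/ES (reported as positive losses) for r = mu + sigma * z."""
    z = stats.norminvgauss(nig_a, nig_b)          # illustrative shape parameters
    zm, zv = z.stats(moments="mv")                # standardize to zero mean, unit variance
    q_std = (z.ppf(alpha) - zm) / np.sqrt(zv)
    var = -(mu + sigma * q_std)
    u = np.linspace(alpha / n_grid, alpha, n_grid)
    tail_std = (z.ppf(u) - zm) / np.sqrt(zv)      # ES as the mean of tail quantiles
    es = -(mu + sigma * tail_std.mean())
    return var, es

def kupiec_pof(violations, n, alpha=0.01):
    """Kupiec unconditional-coverage LR statistic and its chi-square(1) p-value."""
    x, p = violations, alpha
    if x == 0:
        lr = -2.0 * n * np.log(1.0 - p)
    else:
        ph = x / n
        lr = -2.0 * ((n - x) * np.log(1 - p) + x * np.log(p)
                     - (n - x) * np.log(1 - ph) - x * np.log(ph))
    return lr, 1.0 - stats.chi2.cdf(lr, df=1)

# Illustrative usage: sigma would come from an nGARCH/NIG-GARCH forecast.
print(var_es_one_step(mu=0.0, sigma=0.018, alpha=0.01))
print(kupiec_pof(violations=6, n=500, alpha=0.01))
```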
|
63 |
Analyzing value at risk and expected shortfall methods: the use of parametric, non-parametric, and semi-parametric models
Huang, Xinxin (25 August 2014)
Value at Risk (VaR) and Expected Shortfall (ES) are methods often used to measure market risk. Inaccurate and unreliable Value at Risk and Expected Shortfall models can lead to underestimation of the market risk that a firm or financial institution is exposed to, and may therefore jeopardize the well-being or survival of the firm or financial institution during adverse market conditions. The objective of this study is therefore to examine various Value at Risk and Expected Shortfall models, including fatter-tailed models, in order to analyze their accuracy and reliability.
Thirteen VaR and ES models under three main approaches (parametric, non-parametric and semi-parametric) are examined in this study. The results show that the proposed model (ARMA(1,1)-GJR-GARCH(1,1)-SGED) gives the most balanced Value at Risk results, while the semi-parametric model (Extreme Value Theory, EVT) is the most accurate Value at Risk model in this study for the S&P 500.
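The following sketch contrasts the parametric and non-parametric families discussed above by computing VaR and ES from a fitted normal distribution and from the empirical (historical) quantiles. It is illustrative only; the simulated Student-t returns stand in for the S&P 500 series used in the study, and none of the thirteen models is reproduced.

```python
# Hypothetical sketch comparing a parametric (normal) and a non-parametric
# (historical-simulation) estimate of VaR and ES at the same confidence level.
import numpy as np
from scipy import stats

def normal_var_es(returns, alpha=0.01):
    mu, sd = returns.mean(), returns.std(ddof=1)
    q = stats.norm.ppf(alpha)
    var = -(mu + sd * q)
    es = -(mu - sd * stats.norm.pdf(q) / alpha)   # closed-form normal expected shortfall
    return var, es

def historical_var_es(returns, alpha=0.01):
    q = np.quantile(returns, alpha)               # empirical alpha-quantile
    return -q, -returns[returns <= q].mean()

rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=2500) * 0.01        # heavy-tailed stand-in for index returns
print("normal    :", normal_var_es(r))
print("historical:", historical_var_es(r))
```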
|
64 |
[en] ESSAYS IN FINANCIAL RISK MANAGEMENT OF EMERGING COUNTRIES / [pt] ENSAIOS EM GERENCIAMENTO DE RISCOS FINANCEIROS DE PAÍSES EMERGENTES
Alex Sandro Monteiro de Moraes (14 April 2016)
[en] In this thesis we develop three essays on risk management in some emerging countries. In the first one, using models of the GARCH family, we verified that increasing the relative weights assigned to earlier observations as the forecast horizon grows results in better volatility estimates. Using seven volatility forecasting models and return series of Brazilian financial market assets (shares of Petrobras and Vale, the Bovespa index, the Real/Dollar exchange rate, and the 1-year and 3-year interest rates of Brazilian government bonds issued in Reais), the in-sample estimates were compared with the out-of-sample observations. Based on this comparison, it was found that the best volatility forecasts were obtained predominantly by two models that allow their parameters to vary with the forecast horizon: the modified EGARCH (exponential generalized autoregressive conditional heteroskedastic) model and the ARLS model proposed by Ederington and Guan (2005). We conclude that the use of traditional volatility forecasting models, which keep the relative weights assigned to old and recent observations unchanged regardless of the forecast horizon, is inappropriate.
In the second essay we compared the performance of long-memory models (FIGARCH) with short-memory models (GARCH) in forecasting value-at-risk (VaR) and expected shortfall (ES) multiple periods ahead for six emerging-market stock indices. We used daily data from 1999 to 2014 and an adaptation of the Monte Carlo simulation to estimate VaR and ES forecasts for multiple steps ahead (1, 10 and 20 days), using FIGARCH and GARCH models with four error distributions. The results suggest that, in general, the FIGARCH models improve the accuracy of forecasts for longer horizons; that the error distribution used may influence the choice of the best model; and that only for the FIGARCH models does underestimation of the true VaR become less frequent as the forecast horizon increases.
Regarding the third essay, risk management is a subject that has long been part of the day-to-day activities of financial and non-financial institutions, yet the use of risk metrics is not common among public agencies. Considering this gap, and the importance of the issue for the proper management of public resources, especially in emerging countries, the purpose of this third essay is to estimate, in a single value, the liquidity risk of a public agency, in this case the Brazilian Navy, and to identify the sources of that risk. To do this, the exposure-based Cash-Flow-at-Risk (CFaR) model is applied, which, in addition to summarizing the liquidity risk estimate in a single value, helps in managing that risk by providing additional information about the exposure of the organization's cash flow to various risk factors. Using quarterly data for the period between the first quarter of 1999 and the fourth quarter of 2013, the Real/Dollar and Dollar/Pound exchange rates, the SELIC rate, the Public Sector Borrowing Requirement and the US inflation rate were identified as the macroeconomic and market risk factors that impact the Navy's cash flow, and the CFaR was calculated at a 95 percent confidence level for a horizon of one quarter ahead.
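A sketch of the multi-step Monte Carlo idea described in the second essay follows. For brevity it uses a plain GARCH(1,1) with Gaussian innovations rather than FIGARCH or the four error distributions of the essay, and the parameter values are illustrative rather than fitted.

```python
# Hypothetical sketch: h-day-ahead VaR and ES by Monte Carlo under a GARCH(1,1)
# with Gaussian innovations; illustrative parameters, not fitted values.
import numpy as np

def garch_mc_var_es(omega, a1, b1, sigma2_0, horizon=10,
                    n_paths=100_000, level=0.01, seed=1):
    rng = np.random.default_rng(seed)
    sigma2 = np.full(n_paths, sigma2_0)
    cum_ret = np.zeros(n_paths)
    for _ in range(horizon):
        z = rng.standard_normal(n_paths)          # swap for Student-t or skewed draws
        r = np.sqrt(sigma2) * z
        cum_ret += r                              # aggregate log-return over the horizon
        sigma2 = omega + a1 * r ** 2 + b1 * sigma2
    q = np.quantile(cum_ret, level)
    return -q, -cum_ret[cum_ret <= q].mean()      # (VaR, ES) as positive losses

print(garch_mc_var_es(omega=2e-6, a1=0.08, b1=0.90, sigma2_0=1.5e-4, horizon=10))
```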
|
65 |
Risk Measurement and Performance Attribution for IRS Portfolios Using a Generalized Optimization Method for Term Structure Estimation
Gerdin Börjesson, Fredrik; Eduards, Christoffer (January 2021)
With the substantial size of the interest rate markets, the importance of accurate pricing, risk measurement and performance attribution cannot be overstated. However, the models used in these markets often have underlying issues with capturing the market's fundamental behavior. With this thesis, we aim to improve the pricing, risk measurement, and performance attribution of interest rate swap portfolios. The paper is divided into six main parts, by subject, to aid in achieving these goals. To begin with, we validate all cash flows with SEB to increase the validity of the results. Next, we implement an optimization-based model developed by Jörgen Blomvall to estimate multiple yield curves. By considering innovations of the daily in-sample curves, risk factors are computed with principal component analysis. These risk factors are then used to simulate one-day- and ten-day-ahead scenarios for the multiple yield curves using a Monte Carlo method. Given these simulated scenarios, risk measures are computed. When backtested, these risk measures give an indication of the overall accuracy of the methodology, including the estimated curves, the derived risk factors, and the simulation methodology. Along with the simulation, monetary performance attribution is performed for the portfolios on each out-of-sample day. The performance attribution indicates what drives the value change in the portfolio and can be used to evaluate the estimated yield curves and derived risk factors. The risk measurement and performance attribution are done for three different portfolios of interest rate swaps on the EUR, USD, and SEK markets; however, the risk factors are estimated only on EUR data and used for all portfolios. The main difference to previous work in this area is that, for all implementations, a multiple yield curve environment is studied. Different PCA algorithms are evaluated to increase the precision and speed of the risk factor calculation. Mean-reverting risk factors are developed in the simulation framework, along with a Latin hypercube sampling method that accounts for dependence in the random variables to reduce variance. We also study the EUR and SEK markets, while the focus in previous literature is on the USD market. Lastly, we calculate and backtest the risk measures value-at-risk and expected shortfall for one-day and ten-day horizons. Four different PCA methods are implemented: a bidiagonal divide-and-conquer SVD algorithm, a randomized SVD method, an Arnoldi method, and an optimization-based PCA algorithm. We opt to use the first one due to its high accuracy and its ability to calculate all eigenpairs, but recommend using the Arnoldi method in future implementations and studying the optimization-based method further. The Latin hypercube sampling with dependence method is able to produce random variables with the same correlation as the input variables. In the simulation, we are able to produce results that pass all backtests for the risk measures for the USD portfolio, while for the EUR and SEK portfolios the risk measures are shown to be too conservative. The results of the mean reversion method indicate that it produces slightly less conservative estimates for the ten-day horizon. In the performance attribution, we show that we are able to produce results with small error terms, indicating accurately estimated term structures, risk factors, and pricing.
We conclude that we are partly able to fulfill the stated purpose of this thesis, having produced accurate pricing and satisfactory performance attribution results for all portfolios, and stable risk measures for the USD portfolio. However, it is not possible to state with certainty that improved risk measurements have been achieved for the EUR and SEK portfolios, and we present several alternative approaches to remedy this in future implementations.
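The following sketch illustrates the PCA-based risk factor and scenario machinery described above: principal components are extracted from daily curve innovations via an SVD, scenarios are simulated from the leading factors, and portfolio VaR/ES is read off the revalued scenarios. The `revalue` pricer, the Gaussian factor shocks and the factor count are illustrative assumptions; the thesis prices swaps from multiple estimated curves and also considers mean-reverting factors and Latin hypercube sampling.

```python
# Hypothetical sketch: PCA risk factors from curve innovations and Monte Carlo
# scenario generation for portfolio VaR/ES; `revalue` is a stand-in pricer.
import numpy as np

def pca_factors(innovations, n_factors=6):
    """innovations: (days, tenors) matrix of daily changes in the estimated curve."""
    X = innovations - innovations.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # LAPACK divide-and-conquer SVD
    loadings = Vt[:n_factors].T                        # (tenors, n_factors)
    factor_std = s[:n_factors] / np.sqrt(len(X) - 1)   # std. dev. of each factor score
    return loadings, factor_std

def simulate_curves(curve_today, loadings, factor_std, n_scen=10_000, seed=7):
    rng = np.random.default_rng(seed)
    shocks = rng.standard_normal((n_scen, loadings.shape[1])) * factor_std
    return curve_today + shocks @ loadings.T           # (n_scen, tenors) curve scenarios

def scenario_var_es(curve_today, scenarios, revalue, level=0.01):
    pnl = np.array([revalue(c) for c in scenarios]) - revalue(curve_today)
    q = np.quantile(pnl, level)
    return -q, -pnl[pnl <= q].mean()
```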
|
66 |
Portfolio Risk Modelling in Venture Debt / Kreditriskmodellering inom Venture Debt
Eriksson, John; Holmberg, Jacob (January 2023)
This thesis project is an experimental study of how to approach quantitative portfolio credit risk modelling in Venture Debt portfolios. Facing a lack of applicable default data from ArK and from publicly available sets, and seeking to capture companies that fail to service debt obligations before defaulting per se, we present an approach to risk modelling based on trends in revenue. The main framework revolves around driving a Monte Carlo simulation with copulas to predict future revenue scenarios across a portfolio of early-stage technology companies. Three models (a random Gaussian walk, a linear dynamic system and an autoregressive integrated moving average (ARIMA) time series) are implemented and evaluated in terms of their influence on portfolio Value-at-Risk. The model performance confirms that modelling portfolio risk in Venture Debt is challenging, especially due to the lack of sufficient data and the resulting heavy reliance on assumptions. However, the empirical results for Value-at-Risk and Expected Shortfall are in line with expectations. The evaluated portfolio is still in an early stage, with a majority of assets not yet in their repayment period, and consequently the spread of potential losses within one year is very tight. It should further be recognized that the scope, in terms of explanatory variables for sales and model complexity, has been narrowed and simplified for computational benefits, transparency and communicability.
The main conclusion drawn is that alternative approaches to modelling Venture Debt risk are fully possible and should improve in reliability and accuracy as more data feeds the model. For future research it is recommended to incorporate macroeconomic variables as well as similar-company analysis to better capture macro, funding and sector conditions. Furthermore, it is suggested to extend the set of financial and operational explanatory variables for sales through machine learning or neural networks.
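A minimal sketch of the copula-driven revenue simulation described above: correlated uniforms are produced with a Gaussian copula and pushed through assumed marginal distributions for each company's revenue growth. The correlation matrix and lognormal marginals are illustrative stand-ins, not the thesis's fitted inputs, and the revenue-to-loss mapping is omitted.

```python
# Hypothetical sketch: Gaussian-copula Monte Carlo of correlated revenue-growth
# scenarios across portfolio companies; correlations and marginals are stand-ins.
import numpy as np
from scipy import stats

def copula_revenue_scenarios(corr, marginals, n_scen=50_000, seed=3):
    """corr: (k, k) correlation matrix; marginals: k frozen scipy distributions
    for each company's revenue-growth factor."""
    L = np.linalg.cholesky(corr)
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_scen, corr.shape[0])) @ L.T   # correlated normals
    u = stats.norm.cdf(z)                                    # copula uniforms
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

corr = np.array([[1.0, 0.4],
                 [0.4, 1.0]])
marginals = [stats.lognorm(s=0.5, scale=1.1), stats.lognorm(s=0.7, scale=1.2)]
growth = copula_revenue_scenarios(corr, marginals)           # (n_scen, 2) growth factors
```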
|
67 |
The Performance of Market Risk Models for Value at Risk and Expected Shortfall Backtesting : In the Light of the Fundamental Review of the Trading Book / Bakåttest av VaR och ES i marknadsriskmodeller
Dalne, Katja (January 2017)
The global financial crisis that took off in 2007 gave rise to several adjustments of the risk regulation for banks. An extensive adjustment, to be implemented in 2019, is the Fundamental Review of the Trading Book (FRTB). It proposes to use Expected Shortfall (ES) as the risk measure instead of the currently used Value at Risk (VaR), as well as applying varying liquidity horizons based on the risk levels of the assets involved. A major difficulty of implementing the FRTB lies in the backtesting of ES. Righi and Ceretta propose a robust ES backtest based on Monte Carlo simulation. It is flexible since it does not assume any probability distribution and can be performed without waiting for an entire backtesting period. Implementing some commonly used VaR backtests as well as the ES backtest by Righi and Ceretta gives an indication of which risk models are the most accurate from both a VaR and an ES backtesting perspective. It can be concluded that a model that is satisfactory from a VaR backtesting perspective does not necessarily remain so from an ES backtesting perspective, and vice versa. Overall, the models that are satisfactory from a VaR backtesting perspective turn out to be probably too conservative from an ES backtesting perspective. Considering the confidence levels proposed by the FRTB, from a VaR backtesting perspective a risk measure model with a normal copula and a hybrid distribution, with the generalized Pareto distribution in the tails and the empirical distribution in the center, together with GARCH filtration is the most accurate one, while from an ES backtesting perspective a risk measure model with a univariate Student's t distribution with ν ≈ 7 together with GARCH filtration is the most accurate one for implementation. Thus, when implementing the FRTB, the bank will need to compromise between obtaining a good VaR model, potentially resulting in conservative ES estimates, and obtaining a less satisfactory VaR model, possibly resulting in more accurate ES estimates. The thesis was performed at SAS Institute, an American IT company that develops software for, among other things, risk management. Targeted customers are banks and other financial institutions. Investigating the FRTB is a potential advantage for the company when approaching customers that are to implement the regulatory framework in the near future.
Keywords: risk management, financial time series, Value at Risk, Expected Shortfall, Monte Carlo simulation, GARCH modelling, copulas, hybrid distributions, generalized Pareto distribution, extreme value theory, backtesting, liquidity horizons, the Basel framework
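To make the hybrid "GPD tails, empirical centre" idea concrete, the sketch below fits a generalized Pareto distribution to the loss tail of GARCH-filtered residuals and applies the standard EVT formulas for VaR and ES. The threshold choice and confidence level are illustrative, and the copula and backtesting layers of the thesis are not reproduced.

```python
# Hypothetical sketch: generalized Pareto tail of GARCH-filtered residuals with
# the standard EVT formulas for VaR and ES; threshold choice is illustrative.
import numpy as np
from scipy import stats

def evt_var_es(std_resid, alpha=0.025, tail_frac=0.10):
    """std_resid: standardized residuals after GARCH filtration.
    Returns (VaR, ES) of the residual loss distribution at tail probability alpha."""
    losses = -np.asarray(std_resid)                 # left tail of returns = loss tail
    u = np.quantile(losses, 1.0 - tail_frac)        # threshold above which GPD is fitted
    exceed = losses[losses > u] - u
    xi, _, beta = stats.genpareto.fit(exceed, floc=0.0)
    n, n_u = losses.size, exceed.size
    var = u + beta / xi * ((alpha * n / n_u) ** (-xi) - 1.0)
    es = var / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)
    return var, es

rng = np.random.default_rng(5)
resid = rng.standard_t(df=6, size=4000) / np.sqrt(6 / 4)   # unit-variance t residuals
print(evt_var_es(resid, alpha=0.025))
```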
|
68 |
Risks in Commodity and Currency Markets
Bozovic, Milos (17 April 2009)
This thesis analyzes market risk factors in commodity and currency markets. It focuses on the impact of extreme events on the prices of financial products traded in these markets, and on the overall market risk faced by investors. The first chapter develops a simple two-factor jump-diffusion model for the valuation of contingent claims on commodities, in order to investigate the pricing implications of shocks that are exogenous to this market. The second chapter analyzes the nature and pricing implications of abrupt changes in exchange rates, as well as the ability of these changes to explain the shapes of option-implied volatility "smiles". Finally, the third chapter employs the notion that key results of univariate extreme value theory can be applied separately to the principal components of ARMA-GARCH residuals of a multivariate return series. The proposed approach yields more precise Value at Risk forecasts than conventional multivariate methods, while maintaining the same efficiency.
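The sketch below simulates commodity price paths under a one-factor jump-diffusion and reads off a terminal-price VaR, as a simplified stand-in for the two-factor model developed in the first chapter; all drift, volatility and jump parameters are illustrative.

```python
# Hypothetical sketch: Monte Carlo of a one-factor jump-diffusion commodity price
# and a terminal-price VaR; all drift, volatility and jump inputs are illustrative.
import numpy as np

def jump_diffusion_paths(s0, mu, sigma, lam, jump_mu, jump_sigma,
                         horizon=1.0, steps=252, n_paths=10_000, seed=11):
    rng = np.random.default_rng(seed)
    dt = horizon / steps
    log_s = np.full(n_paths, np.log(s0))
    for _ in range(steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        k = rng.poisson(lam * dt, n_paths)                   # number of jumps in the step
        jump = rng.normal(jump_mu * k, jump_sigma * np.sqrt(np.maximum(k, 1)))
        jump = np.where(k > 0, jump, 0.0)                    # compound Poisson increment
        log_s += (mu - 0.5 * sigma ** 2) * dt + sigma * dw + jump
    return np.exp(log_s)

paths = jump_diffusion_paths(s0=70.0, mu=0.03, sigma=0.35,
                             lam=1.5, jump_mu=-0.05, jump_sigma=0.10)
var99 = 70.0 - np.quantile(paths, 0.01)   # 99% VaR of the one-year price change
print(var99)
```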
|