1

[en] EXTREME VALUE THEORY: VALUE AT RISK FOR VARIABLE-INCOME ASSETS / [pt] TEORIA DOS VALORES EXTREMOS: VALOR EM RISCO PARA ATIVOS DE RENDA VARIÁVEL

GUSTAVO LOURENÇO GOMES PIRES 26 June 2008 (has links)
[en] Since the 1990s, the Value at Risk (VaR) methodology has spread worldwide, among both financial and non-financial institutions, as a good risk-measurement practice. One of the most pronounced stylized facts about financial return distributions is the presence of heavy tails. This makes traditional parametric VaR models, which rest on a normality assumption for the return distributions, unsuitable for estimating VaR at low probability levels. The goal of this work is therefore to investigate the performance of models based on Extreme Value Theory for computing VaR. The results indicate that models based on Extreme Value Theory are well suited to modeling the tails, and consequently to estimating Value at Risk when the probability levels of interest are low.
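The peaks-over-threshold approach behind abstracts like this one can be sketched in a few lines. This is a generic illustration, not the thesis's actual model: it fits a Generalized Pareto Distribution to losses above a high threshold by the method of moments (such work more commonly uses maximum likelihood) and extrapolates a low-probability VaR; the data are synthetic heavy-tailed losses.

```python
import random
from statistics import mean, pvariance

def evt_var(losses, q=0.99, threshold_q=0.95):
    """Peaks-over-threshold VaR: fit a GPD to exceedances above a high
    threshold u, then extrapolate the q-quantile of the loss distribution."""
    n = len(losses)
    srt = sorted(losses)
    u = srt[int(threshold_q * n)]              # threshold: 95% empirical quantile
    exc = [x - u for x in losses if x > u]     # exceedances over u
    m, s2 = mean(exc), pvariance(exc)
    xi = 0.5 * (1.0 - m * m / s2)              # method-of-moments GPD shape
    beta = m * (1.0 - xi)                      # method-of-moments GPD scale
    nu = len(exc)
    # GPD tail quantile: VaR_q = u + (beta/xi) * [(n/nu * (1-q))^(-xi) - 1]
    return u + (beta / xi) * ((n / nu * (1.0 - q)) ** (-xi) - 1.0)

random.seed(42)
losses = [random.paretovariate(3.0) for _ in range(5000)]  # synthetic heavy tails
print(round(evt_var(losses), 3))
```

The point of the extrapolation is that the 99.9% VaR is estimated from the fitted tail, not from the handful of sample points beyond that quantile.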
2

Uma comparação dos modelos de Value at Risk aplicados em carteiras de renda fixa / A comparison of Value at Risk models applied to fixed-income portfolios

Caselato, Lucimeire 22 April 2009 (has links)
Financial institutions are vulnerable to several risks, one of the most important being market risk. Exposure to market risk can be defined as the probability of financial losses, given the institution's exposure to a certain risk factor and the changes in asset prices caused by market volatility. To measure these exposures, financial institutions use a methodology named Value at Risk (VaR). There are, basically, three methodologies for measuring financial risk: parametric (or delta-normal), historical simulation, and Monte Carlo simulation. The objective of this research is to compare the performance of the historical simulation and parametric (delta-normal) methodologies, applied to three fixed-rate portfolios. After measuring VaR with the two methodologies, a backtest is applied to verify which of them measured market risk most effectively.
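The two methodologies compared above can be sketched side by side. A minimal illustration with synthetic normally distributed returns, not the dissertation's actual portfolios: historical simulation takes an empirical quantile of losses, while the delta-normal method assumes normality and scales the standard deviation.

```python
import random
from statistics import NormalDist, mean, stdev

def historical_var(returns, alpha=0.99):
    """Historical-simulation VaR: empirical alpha-quantile of losses."""
    losses = sorted(-r for r in returns)
    return losses[int(alpha * len(losses))]

def delta_normal_var(returns, alpha=0.99):
    """Parametric (delta-normal) VaR under a normality assumption."""
    z = NormalDist().inv_cdf(alpha)
    return -mean(returns) + z * stdev(returns)

random.seed(7)
rets = [random.gauss(0.0005, 0.012) for _ in range(1000)]  # synthetic daily returns
print(round(historical_var(rets), 4), round(delta_normal_var(rets), 4))
```

On normal data the two estimates agree closely; on real fat-tailed returns the delta-normal figure tends to understate high-confidence VaR, which is exactly what a backtest of violation counts is meant to detect.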
3

Testování fundamentálních a technických indikátorů v dlouhém období na americkém akciovém trhu / Long term testing of fundamental and technical indicators on american stock market

Švajcr, Milan January 2017 (has links)
In its opening chapter, this master thesis describes some basic characteristics of the New York Stock Exchange. The following chapters explain the indicators and tools of fundamental and technical analysis of stocks. The practical part of the thesis evaluates fundamental and technical analysis on historical data from the American stock market. For this purpose, data on the constituent companies of the Dow Jones Industrial Average index are used, covering as much market history as possible.
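One common technical-analysis rule of the kind evaluated in such studies is a moving-average crossover. A hypothetical sketch with toy prices; the window lengths are illustrative and not taken from the thesis:

```python
def sma(prices, w):
    """Simple moving average over the trailing w prices (None until enough data)."""
    return [None if i + 1 < w else sum(prices[i + 1 - w:i + 1]) / w
            for i in range(len(prices))]

def crossover_signals(prices, fast=5, slow=20):
    """Long (1) when the fast SMA is above the slow SMA, else flat (0)."""
    f, s = sma(prices, fast), sma(prices, slow)
    return [1 if s[i] is not None and f[i] > s[i] else 0
            for i in range(len(prices))]

prices = [100 + i + (3 if i % 7 == 0 else 0) for i in range(60)]  # toy uptrend
print(sum(crossover_signals(prices)))
```

In a steady uptrend the fast average stays above the slow one, so the rule is long for most of the sample after the warm-up period.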
4

Data-Snooping Biases in Backtesting

Krpálek, Jan January 2016 (has links)
In this paper, we utilize White's Reality Check (White, 2000) and Hansen's SPA test (Hansen, 2004) to evaluate technical trading rules while quantifying the data-snooping bias. We then discuss the results within the Probability of Backtest Overfitting framework introduced by Bailey et al. (2015). The study thus presents a comprehensive test of momentum trading across the US futures markets from 2004 to 2016. The evidence indicates that technical trading rules have not been profitable in the US futures markets after correcting for the data-snooping bias.
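The Reality Check logic can be sketched as a bootstrap of the best rule's performance. This is a simplified illustration: it uses an i.i.d. bootstrap instead of the stationary bootstrap of White (2000), and synthetic zero-mean "rule returns" stand in for real trading results.

```python
import random
from statistics import fmean

def reality_check_pvalue(perf, n_boot=200, seed=0):
    """Simplified Reality Check. perf maps rule name -> list of daily excess
    returns. The statistic is sqrt(n) times the best rule's mean; the bootstrap
    recenters each rule at its sample mean to approximate the null."""
    rng = random.Random(seed)
    n = len(next(iter(perf.values())))
    means = {k: fmean(v) for k, v in perf.items()}
    v_obs = max(means.values()) * n ** 0.5
    exceed = 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # i.i.d. bootstrap indices
        v_b = max(
            (fmean(series[i] for i in idx) - means[k]) * n ** 0.5
            for k, series in perf.items()
        )
        exceed += v_b >= v_obs
    return exceed / n_boot

rng = random.Random(1)
# 20 rules with zero true mean: the best one looks profitable in sample,
# but the Reality Check p-value accounts for having searched over all 20.
perf = {f"rule{k}": [rng.gauss(0, 0.01) for _ in range(250)] for k in range(20)}
print(reality_check_pvalue(perf))
```

A genuinely profitable rule added to the universe drives the p-value toward zero, because its observed statistic then dominates the recentered bootstrap distribution.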
5

Empirical Analysis of Joint Quantile and Expected Shortfall Regression Backtests

Ågren, Viktor January 2023 (has links)
In this work, we look into the practical applicability of three joint quantile and expected shortfall regression backtests. The strict, auxiliary, and intercept ESR backtests are applied to the historical log returns of the OMX Stockholm 30 market-weighted price index. We estimate the conditional variance using GARCH models for various rolling window lengths and refitting frequencies. We are particularly interested in the rejection rates of the one-sided intercept ESR backtest, as it is comparable to the current standard of backtests. The one-sided test is found to perform well when the conditional variance is estimated by the GARCH(1,1), GJR-GARCH(1,1), or EGARCH(1,1) model coupled with Student's t innovation residuals and a rolling window size of 1000 days.
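The GARCH(1,1) conditional-variance recursion underlying such backtests is simple to state. A sketch of the one-step filter with fixed, illustrative parameters; the thesis estimates parameters over rolling windows, which is not shown here.

```python
import random
from statistics import pvariance

def garch11_variances(returns, omega, alpha, beta):
    """One-step GARCH(1,1) conditional-variance filter:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    s2 = [pvariance(returns)]          # initialise at the sample variance
    for r in returns[:-1]:
        s2.append(omega + alpha * r * r + beta * s2[-1])
    return s2

random.seed(11)
rets = [random.gauss(0, 0.01) for _ in range(1000)]   # placeholder returns
sig2 = garch11_variances(rets, omega=1e-6, alpha=0.08, beta=0.90)
print(len(sig2), min(sig2) > 0)
```

The filtered variances feed the quantile and expected shortfall forecasts that the ESR backtests then evaluate against realized returns.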
6

En undersökning av VaR-modeller med Kupiecs Backtest / An evaluation of VaR models using Kupiec's backtest

Runer, Carl-Johan, Linzander, Martin January 2009 (has links)
ABSTRACT: The performance of Historical Simulation, Delta-Normal, and RiskMetrics is evaluated using Kupiec's backtest. Value at Risk (VaR) is computed at three confidence levels from the Affärsvärlden General Index and the HSBC copper index. Based on violations against realized outcomes, we examine which VaR model best estimates market risk. The models' performance is compared, and the analysis investigates how the confidence level and the assets' characteristics affect it. The results show that Historical Simulation outperforms Delta-Normal and RiskMetrics at the highest confidence level, most likely because RiskMetrics and Delta-Normal assume normality. RiskMetrics and Delta-Normal, however, outperform Historical Simulation at the lowest confidence level, probably because Historical Simulation adapts more slowly to changes in volatility. The study also suggests that the decay factor used by RiskMetrics has less effect at higher confidence levels, which is why the difference between Delta-Normal and RiskMetrics is marginal at those levels.
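Kupiec's backtest itself is a likelihood-ratio test on the violation count. A minimal sketch of the proportion-of-failures statistic, which is asymptotically chi-squared with one degree of freedom (critical value about 3.84 at the 5% level):

```python
from math import log

def kupiec_pof(n, x, p):
    """Kupiec proportion-of-failures LR statistic for n observations,
    x VaR violations, and coverage probability p (e.g. 0.01 for 99% VaR)."""
    if x == 0:
        return -2 * n * log(1 - p)
    phat = x / n
    ll_null = (n - x) * log(1 - p) + x * log(p)       # log-likelihood under H0
    ll_alt = (n - x) * log(1 - phat) + x * log(phat)  # log-likelihood at phat
    return -2 * (ll_null - ll_alt)

# 250 trading days of 99% VaR: 2.5 violations expected.
print(kupiec_pof(250, 3, 0.01))   # violation count near expectation
print(kupiec_pof(250, 10, 0.01))  # far too many violations
```

The test is two-sided in spirit: both too many violations (VaR understated) and too few (VaR overstated) inflate the statistic.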
8

Modelagem de superfícies de volatilidades para opções pouco líquidas de ações / Modeling volatility surfaces for illiquid stock options

Vargas, Eric 11 February 2010 (has links)
This paper proposes a volatility-surface model for illiquid options on exchange-listed stocks. The model is a mathematical function that transforms the volatility surface of a liquid stock option into the volatility surface of an illiquid one. The model is highly sensitive to the parameters of the function employed, so quite distinct results may be obtained depending on whether, for example, the Ibovespa or Vale option surface is used as a parameter. A methodology is therefore also proposed for choosing the most appropriate volatility parameter, based on a backtest of the differences observed between realized and proposed volatility. The main hypothesis of the model is that the difference between an option's implied volatility and the historical volatility of its underlying is a measure that is proportional across underlyings; thus, if this measure and the historical volatility of a given underlying are known, the "implied" volatility of an option on that underlying can be computed. Besides the uncertainty in this hypothesis, there are further uncertainties related to the model's parameter selection and to the illiquidity of the option and its underlying, and their impact is reflected in the option's financial result. A conservative financial-management model is therefore proposed, based on a reserve of financial results as a provision against these uncertainties. Tests are presented for some options, showing the provisions hypothetically required in terms of option vegas for each underlying.
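The stated hypothesis, that the implied-minus-historical volatility spread carries over between underlyings, reduces to a one-line transfer formula. A sketch with hypothetical numbers; the proportionality factor k and all the inputs are illustrative, not taken from the dissertation.

```python
def illiquid_implied_vol(hist_vol_target, hist_vol_proxy, implied_vol_proxy, k=1.0):
    """Transfer the proxy's implied-minus-historical volatility spread to an
    illiquid underlying, scaled by a hypothetical proportionality factor k."""
    spread = implied_vol_proxy - hist_vol_proxy
    return hist_vol_target + k * spread

# Hypothetical numbers: a liquid proxy (say, an index option) trades at 28%
# implied vs 25% historical; the illiquid stock's historical vol is 32%.
print(illiquid_implied_vol(0.32, 0.25, 0.28))
```

The backtest described in the abstract would then compare the volatilities produced this way against subsequently realized volatility to pick the proxy and the factor.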
9

Online Non-linear Prediction of Financial Time Series Patterns

da Costa, Joel 11 September 2020 (has links)
We consider a mechanistic non-linear machine learning approach to learning signals in financial time-series data. A modularised and decoupled algorithmic framework is established and demonstrated on daily-sampled closing time-series data for JSE equity markets. The input patterns are based on data windows preprocessed into sequences of daily, weekly, and monthly or quarterly sampled feature-measurement changes (log feature fluctuations). The data processing is split into a batch step, in which features are learnt using a Stacked AutoEncoder (SAE) via unsupervised learning, followed by both batch and online supervised learning on Feedforward Neural Networks (FNNs) using these features. The FNN output is a point prediction of measured time-series feature fluctuations (log-differenced data) in the future (ex post). Network weights are initialised with restricted Boltzmann machine pretraining and variance-based initialisations. The validity of the FNN backtest results is assessed under a rigorous treatment of backtest overfitting using both Combinatorially Symmetric Cross-Validation and Probabilistic and Deflated Sharpe Ratios. The results are further used to develop a view on the phenomenology of financial markets and the value of complex historical data under unstable dynamics.
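The Probabilistic Sharpe Ratio mentioned above can be computed directly from the moments of a returns sample. A sketch following Bailey and López de Prado's formula, run on synthetic returns; the benchmark Sharpe ratio and the data are illustrative.

```python
import random
from statistics import NormalDist, mean, pstdev

def probabilistic_sharpe(returns, sr_benchmark=0.0):
    """Probabilistic Sharpe Ratio: probability that the true Sharpe ratio
    exceeds sr_benchmark, adjusting the sample estimate for sample length,
    skewness, and (non-excess) kurtosis."""
    n = len(returns)
    mu, sd = mean(returns), pstdev(returns)
    sr = mu / sd
    skew = mean(((r - mu) / sd) ** 3 for r in returns)
    kurt = mean(((r - mu) / sd) ** 4 for r in returns)
    denom = (1 - skew * sr + (kurt - 1) / 4 * sr ** 2) ** 0.5
    return NormalDist().cdf((sr - sr_benchmark) * (n - 1) ** 0.5 / denom)

random.seed(3)
rets = [random.gauss(0.001, 0.01) for _ in range(500)]  # synthetic daily P&L
print(round(probabilistic_sharpe(rets), 3))
```

The Deflated Sharpe Ratio used in the thesis raises the benchmark to account for the number of strategy variants tried, which is the multiple-testing correction this sketch omits.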
