51

[pt] ENSAIOS SOBRE A PRECIFICAÇÃO EMPÍRICA DE ATIVOS, POLÍTICA MONETÁRIA E SUAS INTER-RELAÇÕES / [en] ESSAYS ON EMPIRICAL ASSET PRICING, MONETARY POLICY AND THEIR INTER-RELATIONS

Val, Flávio de Freitas 20 September 2016 (has links)
This thesis deals with the estimation of risk and the pricing of financial assets, with measures that seek to gauge how market participants are assessing monetary policy, and with the inter-relation between the stock market and monetary policy, represented here by the estimated reaction of the stock market to changes in monetary policy. The first essay implements two recent volatility models that use high-frequency data: the Heterogeneous Autoregressive (HAR) model and the Component (2-Comp) model are estimated and the results are compared with those obtained from the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) family of models. Over the period analyzed, the models using intraday data produced better forecasts for the returns of the assets evaluated, both in-sample and out-of-sample, confirming that these models carry information that is valuable to a range of economic agents. The second essay estimates the credibility of the monetary policy implemented by the Central Bank of Brazil (BCB) over the last ten years. Credibility was estimated by applying the Kalman filter to measures derived from consumer-survey inflation expectations, from the BCB's Focus survey, and from government bond yield curves. The results provide evidence of three movements in inflationary credibility, as estimated by the implicit measure and by the Focus measure, over the period analyzed: (i) a sharp fall in mid-2008, at the most critical moment of the subprime crisis; (ii) relative stability between early 2009 and mid-2010 (mid-2013 for the Focus measure); and (iii) a downward trend thereafter, when the real interest rate fell below the minimum compatible with the inflation target. The credibility measure estimated from the consumer survey behaved more erratically than the others, falling more sharply from early 2013 and remaining close to zero since then. At the same time, the results indicate that changes in inflation are important for forecasting the credibility estimated from the consumer survey, validating its backward-looking character and its construction from adaptive consumer expectations. The adopted methodology makes it possible to produce real-time estimates of the degree of credibility and a quantitative assessment of the consistency of monetary policy under an inflation-targeting regime. It contributes to the existing literature by implementing Svensson's (1993) credibility test and extending it within a state-space econometric framework, allowing a probabilistic estimate of the degree of credibility of the monetary policy implemented by the Brazilian monetary authority over the period analyzed. Finally, the third essay is an empirical study of the relationship between the monetary policy implemented by the BCB and the Brazilian stock market. Using the Event Study methodology, it analyzes the effect of the expected and unexpected components of monetary policy decisions on the returns of the Bovespa index and of thirty-five individual stocks. The results provide evidence that monetary policy has a significant effect on the stock market, and that a reversal in the direction of monetary policy tends to amplify the market's response. The sector-level analysis indicates that the cyclical consumption sector is the most affected by this policy, while the public utilities and the oil, gas and biofuels sectors are not significantly affected. Individual assets respond quite heterogeneously to monetary policy; when abnormal returns are used, however, both the intensity of the impact and the number of companies affected fall sharply. In addition, the monetary surprise is explained by unexpected changes in the unemployment rate, the industrial production index and the IPCA consumer price index, and is Granger-caused by unexpected changes in the industrial production index, indicating the importance of this variable for forecasting monetary policy.
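
As a concrete illustration of the HAR specification used in the first essay, the sketch below regresses daily realized volatility on its lagged daily, weekly and monthly averages. It is only a sketch: the data are simulated and every variable name is a placeholder assumption, not the thesis's code or data.

```python
# Minimal HAR regression sketch for daily realized volatility (RV).
# Simulated placeholder data; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Stand-in for a realized-volatility series built from intraday returns.
rv = pd.Series(np.exp(rng.normal(-4.5, 0.5, 1500)))

rv_d = rv.shift(1)                     # previous day
rv_w = rv.shift(1).rolling(5).mean()   # previous week (5 trading days)
rv_m = rv.shift(1).rolling(22).mean()  # previous month (22 trading days)

data = pd.DataFrame({"rv": rv, "rv_d": rv_d, "rv_w": rv_w, "rv_m": rv_m}).dropna()
X = sm.add_constant(data[["rv_d", "rv_w", "rv_m"]])
har = sm.OLS(data["rv"], X).fit()
print(har.params)             # intercept and daily/weekly/monthly coefficients
print(har.predict(X).tail())  # in-sample fitted values
```

An out-of-sample comparison against a GARCH-family fit would apply the same train/forecast split to both models.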
52

Modeling the Relation Between Implied and Realized Volatility / Modellering av relationen mellan implicit och realiserad volatilitet

Brodd, Tobias January 2020 (has links)
Options are an important part of today's financial market. It is therefore important to be able to identify when options are overvalued or undervalued in order to get a lead on the market. To determine this, the relation between the volatility of the underlying asset, called realized volatility, and the market's expected volatility, called implied volatility, can be analyzed. In this thesis five models were investigated for modeling the relation between implied and realized volatility: one Ornstein–Uhlenbeck model, two autoregressive models and two artificial neural networks. To analyze the performance of the models, different accuracy measures were calculated for out-of-sample forecasts. Signals from the models were also computed and used in a simulated options-trading environment to get a better understanding of how well they perform in trading applications. The results suggest that artificial neural networks are able to model the relation more accurately than the more traditional time-series models. It was also shown that a trading strategy based on forecasting the relation was able to generate significant profits, and that profits could be increased further by combining a forecasting model with a signal-classification model.
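
The Ornstein–Uhlenbeck model mentioned above is, in discrete time, an AR(1) process. The sketch below fits an AR(1) by least squares to a simulated log implied/realized volatility ratio and derives a naive over/undervaluation signal; the data, the 0.05 threshold and all names are assumptions, not the thesis's implementation.

```python
# AR(1) (discrete-time Ornstein-Uhlenbeck) sketch for the log IV/RV ratio.
# Simulated placeholder data; illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
iv = np.exp(rng.normal(-4.0, 0.3, 1000))  # placeholder implied volatility
rv = np.exp(rng.normal(-4.1, 0.3, 1000))  # placeholder realized volatility
spread = np.log(iv / rv)                  # relation assumed to mean-revert

# Fit spread_t = c + phi * spread_{t-1} + noise by OLS.
y, y_lag = spread[1:], spread[:-1]
ar1 = sm.OLS(y, sm.add_constant(y_lag)).fit()
c, phi = ar1.params
long_run_mean = c / (1.0 - phi)
next_spread = c + phi * spread[-1]        # one-step-ahead forecast

# Naive signal: a forecast spread well above its long-run mean suggests
# options look expensive relative to realized volatility, and vice versa.
if next_spread > long_run_mean + 0.05:
    signal = "sell volatility"
elif next_spread < long_run_mean - 0.05:
    signal = "buy volatility"
else:
    signal = "hold"
print(phi, long_run_mean, next_spread, signal)
```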
53

Essays on Volatility Risk, Asset Returns and Consumption-Based Asset Pricing

Kim, Young Il 25 June 2008 (has links)
No description available.
54

Development of a Novel Social Media Sentiment Risk Model for Financial Assets / Utveckling av ett finansiellt riskmått med hänsyn till sentimentalitet från sociala medier

Rudert, Emelie January 2023 (has links)
This thesis investigates the potential effects on Value at Risk (VaR) measurements of including social media sentiment from Reddit and Twitter. The stocks investigated are Apple, Alphabet and Tesla. The VaR measurements are computed from volatility forecasts combined with assumptions about the return distribution. Volatility is forecast with two models, the Heterogeneous Autoregressive (HAR) model and the Heterogeneous Autoregressive Neural Network (HAR-NN) model, each estimated both with and without social media sentiment, giving four volatility forecasts per stock. The assumed return distributions are a log-logistic distribution and a log-normal distribution. The VaR measurements are then computed and evaluated through the number of breaches for each volatility forecast and for both distributional assumptions. The results show an improvement in the volatility forecasts for Apple and Alphabet, as well as fewer VaR breaches under both log-return distribution assumptions. For Tesla, however, the volatility forecasts were better when social media sentiment was excluded. A possible reason is Twitter posts by influential people such as Elon Musk, which may affect volatility more than the average sentiment score over the day; another possible explanation is multicollinearity, if the Tesla sentiment scores are strongly correlated with volatility. Overall, the results showed that the log-logistic assumption was more suitable than the log-normal return distribution for all three stocks.
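
The sketch below shows how a one-day VaR follows from a volatility forecast under the two distributional assumptions described above (log-returns normal versus logistic) and how breaches are counted. Returns and volatility forecasts are simulated placeholders, not the thesis's data or results.

```python
# One-day VaR from a volatility forecast under normal vs. logistic log-returns,
# with breach counting. Simulated placeholder data; illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
log_returns = rng.standard_t(5, 500) * 0.02  # placeholder daily log-returns
sigma_forecast = np.full(500, 0.02)          # placeholder volatility forecasts

alpha = 0.05
# Normal log-returns: VaR scales with the standard normal quantile.
var_normal = -stats.norm.ppf(alpha) * sigma_forecast
# Logistic log-returns with matching standard deviation: sd = scale * pi / sqrt(3).
scale = sigma_forecast * np.sqrt(3) / np.pi
var_logistic = -stats.logistic.ppf(alpha, loc=0.0, scale=scale)

# A breach occurs when the realized log-return falls below -VaR.
print("normal breaches:  ", int(np.sum(log_returns < -var_normal)))
print("logistic breaches:", int(np.sum(log_returns < -var_logistic)))
print("expected at 5%:   ", int(alpha * len(log_returns)))
```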
55

Estimation of State Space Models and Stochastic Volatility

Miller Lira, Shirley 09 1900 (has links)
My thesis consists of three chapters related to the estimation of state space models and stochastic volatility models. In the first chapter we develop a computationally efficient procedure for state smoothing in Gaussian linear state space models. We show how to exploit the special structure of state space models to draw latent states efficiently. We analyze the computational efficiency of Kalman-filter-based methods, the Cholesky Factor Algorithm and our new method, using operation counts and computational experiments, and show that for many important cases our method is the most efficient. Gains are particularly large when the dimension of the observed variables is large or when one makes repeated draws of the states for the same parameter values. We apply our method to a multivariate Poisson model with time-varying intensities, which we use to analyze financial market transaction count data. In the second chapter, we propose a new technique for the analysis of multivariate stochastic volatility models, based on efficient draws of volatility from its conditional posterior distribution. It applies to models with several kinds of cross-sectional dependence. Full VAR coefficient and covariance matrices give cross-sectional volatility dependence. A mean factor structure allows conditional correlations, given states, to vary over time. The conditional return distribution features Student's t marginals, with asset-specific degrees of freedom, and copulas describing cross-sectional dependence. We draw volatility as a block in the time dimension and one at a time in the cross-section. Following McCausland (2012), we use close approximations of the conditional posterior distributions of volatility blocks as Metropolis-Hastings proposal distributions. We illustrate using daily return data for ten currencies, and report results for univariate stochastic volatility models and two multivariate models. In the third chapter, we evaluate the information contributed by (variations of) realized volatility to the estimation and forecasting of volatility when prices are measured with and without error, using a stochastic volatility model. We consider the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We use Bayesian Markov chain Monte Carlo (MCMC) methods to estimate the models, which allow the formulation of the posterior densities of in-sample volatilities as well as the predictive densities of future volatilities. We then compare the volatility forecasts and hit rates from predictions that use and do not use the information contained in realized volatility. This approach contrasts with most of the empirical realized volatility literature, which most often documents the ability of realized volatility to forecast itself. Our empirical applications use daily index returns and foreign exchange rates during the 2008-2009 financial crisis.
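
For reference, the Kalman-filter-based state smoothing that the first chapter uses as a benchmark can be sketched for the simplest case, a local-level model. This is a generic textbook recursion on simulated data, not the thesis's precision-based (Cholesky Factor) sampler; the variances and names are assumptions.

```python
# Kalman filter + Rauch-Tung-Striebel smoother for a local-level model:
#   y_t = x_t + eps_t,  x_t = x_{t-1} + eta_t.
# Generic benchmark recursion on simulated data; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n, q, r = 300, 0.1, 1.0                  # length, state variance, obs. variance
x_true = np.cumsum(rng.normal(0, np.sqrt(q), n))
y = x_true + rng.normal(0, np.sqrt(r), n)

xf, pf = np.zeros(n), np.zeros(n)        # filtered mean / variance
xp, pp = np.zeros(n), np.zeros(n)        # one-step-ahead predictions
x, p = 0.0, 10.0                         # vague prior on the initial state
for t in range(n):
    xp[t], pp[t] = x, p + q              # predict
    k = pp[t] / (pp[t] + r)              # Kalman gain
    x = xp[t] + k * (y[t] - xp[t])       # update
    p = (1 - k) * pp[t]
    xf[t], pf[t] = x, p

xs, ps = xf.copy(), pf.copy()            # backward smoothing pass
for t in range(n - 2, -1, -1):
    j = pf[t] / pp[t + 1]
    xs[t] = xf[t] + j * (xs[t + 1] - xp[t + 1])
    ps[t] = pf[t] + j**2 * (ps[t + 1] - pp[t + 1])

print("smoothed MSE vs. true state:", np.mean((xs - x_true) ** 2))
```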
56

預測S&P500指數實現波動度與VIX- 探討VIX、VIX選擇權與VVIX之資訊內涵 / The S&P 500 Index Realized Volatility and VIX Forecasting - The Information Content of VIX, VIX Options and VVIX

黃之澔 Unknown Date (has links)
Volatility matters greatly to financial markets: it is a key parameter in the pricing of financial assets and a gauge of market stability, and during financial crises sharp rises in volatility indexes reflect swings in asset prices. This thesis attempts to capture the future dynamics of S&P 500 realized volatility and of the VIX change rate, incorporating the VIX, VIX options and the VVIX into the forecasting models and examining their information content. Forecasting S&P 500 realized volatility makes it possible to anticipate future index volatility and returns and to price options more accurately, while forecasting the VIX provides a basis for hedging or investing through VIX options and VIX futures. The data cover the S&P 500, the VIX, VIX options and the VVIX from 2006 to 2011. For realized-volatility forecasting, the model extends the Heterogeneous Autoregressive model of Realized Volatility, Implied Volatility and Skewness (HAR-RV-IV-SK) from the earlier literature by adding the VIX change rate and the risk-neutral skewness of VIX options as factors. For the VIX change rate, a regime-switching model is used to distinguish the forecasting model under high- and low-volatility regimes, with past VIX changes, the risk-neutral moments of VIX options and the VVIX as variables. The results show that the VIX change rate carries additional information for forecasting S&P 500 realized volatility and improves the model's accuracy, and that under the regime-switching model the VVIX and the risk-neutral moments of VIX options contain substantial information for VIX forecasting. These models can be used for hedging or investment purposes.
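
The regime-switching setup described above can be sketched as a two-regime Markov-switching regression of the VIX change rate on placeholder regressors standing in for the VVIX and the risk-neutral moments of VIX options. The regressors, data and the choice of statsmodels are illustrative assumptions, not the thesis's actual model or data.

```python
# Two-regime Markov-switching regression sketch for the VIX change rate.
# Simulated placeholder regressors; illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 600
vvix = rng.normal(90, 10, n)        # placeholder VVIX level
rn_skew = rng.normal(0.5, 0.2, n)   # placeholder risk-neutral skewness of VIX options
vix_chg = 0.002 * vvix - 0.1 * rn_skew + rng.normal(0, 0.5, n)  # placeholder VIX change rate

exog = np.column_stack([vvix, rn_skew])
model = sm.tsa.MarkovRegression(vix_chg, k_regimes=2, exog=exog,
                                switching_variance=True)
res = model.fit()
print(res.summary())
# Smoothed probability of regime 1 (regime labels are arbitrary).
print(np.asarray(res.smoothed_marginal_probabilities)[:5, 1])
```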
