61. GARCH effect in the residential property market.

January 2002 (has links)
Tam Chun Yu. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 141-147). Abstracts in English and Chinese.
Contents: Abstract (p.I); Acknowledgements (p.III); Table of Contents (p.IV); List of Tables (p.V); List of Figures (p.VI); Chapter 1. Introduction (p.1); Chapter 2. Literature Review (p.5): 2.1 Real Estate Literature (p.5), 2.2 Financial Literature (p.6), 2.3 Impulse Response (p.10); Chapter 3. Methodology (p.12): 3.1 Augmented Dickey-Fuller Test (p.12), 3.2 GARCH Model (p.14), 3.3 VAR Model (p.16); Chapter 4. Data Description (p.18); Chapter 5. Empirical Results (p.20): 5.1 Overview of the Data Set (p.21), 5.2 ADF Test (p.22), 5.3 GARCH Model (p.22), 5.4 VAR Model (p.24), 5.5 Impulse Response (IR) (p.34); Chapter 6. Conclusion (p.38); Appendix 1. Variable Definitions (p.41); Appendix 2. Tables (p.44); Appendix 3. Figures (p.61); Appendix 4. Comparison of IR for different models, full sample (p.93); Appendix 5. Comparison of IR for different models, first sub-period (p.109); Appendix 6. Comparison of IR for different models, second sub-period (p.125); Bibliography (p.141).
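
The contents list an Augmented Dickey-Fuller pretest (Chapter 3.1) ahead of the GARCH and VAR modelling. A minimal sketch of that step, assuming statsmodels and a placeholder return series rather than the thesis's data or code, might look as follows.

```python
# Hypothetical illustration of the Augmented Dickey-Fuller pretest (Chapter 3.1).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# Placeholder series standing in for residential property-price returns.
returns = pd.Series(rng.normal(0, 0.02, 240))

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(returns, autolag="AIC")
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A p-value below 0.05 rejects the unit-root null, so the series can be
# treated as stationary before fitting GARCH and VAR models.
```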

62. Analýza volatility akciových indexů na evropských burzách / Analysis of stock index volatility on European stock exchanges

Švehla, Pavel January 2011 (has links)
This thesis focuses on the analysis and comparison of volatility on selected European stock markets. First, the paper briefly introduces the reader to the specific features of financial econometrics and the importance of analysing asset-return volatility. Subsequent chapters cover in detail the construction of linear and nonlinear conditional heteroscedasticity models as an appropriate tool for describing volatility in financial data. The empirical part of the thesis analyses four stock exchange indices from various European regions and seeks appropriate models to express volatility behaviour in the period before the 2008 financial crisis and during the crisis phase. Based on the selected models, the paper compares volatility across the two periods within each stock market index and between the different regions. The last section examines asymmetric effects in the volatility of the stock indices using graphical representations.
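
As a hedged illustration of the kind of conditional-heteroscedasticity model the abstract describes (not the thesis's actual code), a GARCH(1,1) can be fitted to daily index returns with the Python arch package; the return series below is a placeholder.

```python
# Illustrative GARCH(1,1) fit of the type described in the abstract.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
# Placeholder daily returns (in percent) standing in for a European index.
returns = rng.standard_t(df=5, size=2000) * 0.8

model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.summary())
# result.conditional_volatility holds the fitted sigma_t path, which can be
# compared across pre-crisis and crisis subsamples.
```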

63. Detecting influential observations in spatial models using Bregman divergence / Detecção de observações influentes em modelos espaciais usando divergência de Bregman

Danilevicz, Ian Meneghel 26 February 2018 (has links)
How can one evaluate whether a spatial model is well adjusted to a problem? How can one know whether it is the best model within the class of conditional autoregressive (CAR) and simultaneous autoregressive (SAR) models, including homoscedastic and heteroscedastic cases? To answer these questions within the Bayesian framework, we propose new ways to apply the Bregman divergence, as well as recent information criteria such as the widely applicable information criterion (WAIC) and leave-one-out cross-validation (LOO). The functional Bregman divergence is a generalized form of the well-known Kullback-Leibler (KL) divergence, and many of its special cases can be used to identify influential points. All posterior distributions presented in this text were estimated by Hamiltonian Monte Carlo (HMC), an optimized version of the Metropolis-Hastings algorithm. All ideas presented here were evaluated by both simulation and real data.
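
A brief sketch, not from the dissertation, of the divergence it builds on: the Bregman divergence B_F(p, q) = F(p) - F(q) - <grad F(q), p - q>, which reduces to the KL divergence mentioned above when F is the negative Shannon entropy.

```python
# Hypothetical sketch: Bregman divergence for discrete distributions,
# with negative entropy as the generating function F.
import numpy as np

def bregman(p, q, F, gradF):
    """B_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return F(p) - F(q) - gradF(q) @ (p - q)

neg_entropy = lambda x: np.sum(x * np.log(x))   # F(x) = sum_i x_i log x_i
grad_neg_entropy = lambda x: np.log(x) + 1.0    # grad F(x)_i = log x_i + 1

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.3, 0.4, 0.3])
kl = np.sum(p * np.log(p / q))                  # KL(p || q) for reference
print(bregman(p, q, neg_entropy, grad_neg_entropy), kl)  # both ~ 0.0305
```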

64. Some Contributions to Filtering, Modeling and Forecasting of Heteroscedastic Time Series

Stockhammar, Pär January 2010 (has links)
Heteroscedasticity (time-dependent volatility) in economic and financial time series has been recognized for decades. Still, it is surprisingly often neglected by practitioners and researchers, which may lead to inefficient procedures. Much of the work in this thesis is about finding more effective ways to deal with heteroscedasticity in economic and financial data. Paper I suggests a filter that, unlike the Box-Cox transformation, does not assume that the heteroscedasticity is a power of the expected level of the series. This is achieved by dividing the time series by a moving average of its standard deviations smoothed by a Hodrick-Prescott filter. It is shown that the filter does not colour white noise. An appropriate removal of heteroscedasticity allows more effective analyses of heteroscedastic time series, as the examples in Papers II, III and IV show. Removing the heteroscedasticity with the proposed filter enables efficient estimation of the underlying probability distribution of economic growth. It is shown that the mixed Normal - Asymmetric Laplace (NAL) distribution fits better than the alternatives. This distribution represents a Schumpeterian model of growth whose driving mechanism is Poisson-distributed innovations (Aghion and Howitt, 1992); it is flexible and has not been used before in this context. Another way of circumventing the strong heteroscedasticity in the Dow Jones stock index is to divide the data into volatility groups using the procedure described in Paper III. For each such group, the most accurate probability distribution is sought and used in density forecasting; interestingly, the NAL distribution fits best here as well. This could hint at a new analogy between the financial sphere and the real economy, further investigated in the comovement study of Paper IV. The series involved are typically heteroscedastic, making standard detrending procedures, such as Hodrick-Prescott or Baxter-King, inadequate. Prior to the comovement study, the univariate and bivariate frequency-domain results from these filters are therefore compared with those of the filter proposed in Paper I, so that the effect of the often neglected heteroscedasticity can be studied.
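
A hedged sketch of the Paper I idea as summarized above: divide the series by a moving standard deviation smoothed with a Hodrick-Prescott filter. The window length and smoothing parameter below are illustrative assumptions, not the thesis's values.

```python
# Illustrative version of the heteroscedasticity filter described in Paper I.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(2)
# Placeholder heteroscedastic series: noise with slowly varying volatility.
t = np.arange(600)
x = pd.Series(rng.normal(size=600) * (1 + 0.5 * np.sin(t / 60)))

rolling_sd = x.rolling(window=24, center=True).std()   # assumed window length
rolling_sd = rolling_sd.bfill().ffill()                # pad the edge NaNs
cycle, smooth_sd = hpfilter(rolling_sd, lamb=1600)     # assumed lambda
filtered = x / smooth_sd                               # roughly constant scale
print(filtered.std(), x.std())
```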

65. Les tests de causalité en variance entre deux séries chronologiques multivariées / Tests of causality in variance between two multivariate time series

Nkwimi-Tchahou, Herbert 12 1900 (has links)
Time series models with conditionally heteroskedastic variances have become almost unavoidable for modelling financial time series. In many applications, verifying the existence of a relationship between two time series is an important issue. In this Master's thesis, we generalize in several directions, and in a multivariate framework, the procedure developed by Cheung and Ng (1996) for examining causality in variance between two univariate series. Building on the work of El Himdi and Roy (1997) and Duchesne (2004), we propose a test based on the cross-correlation matrices of the squared standardized residuals and of the cross-products of these residuals. Under the null hypothesis of no causality in variance, we establish that the test statistics converge in distribution to chi-square random variables. In a second approach, we define, as in Ling and Li (1997), a transformation of the residuals for each vector residual series, and the test statistics are built from the cross-correlations of these transformed residuals. In both approaches, test statistics for individual lags are proposed, as well as portmanteau-type tests. The methodology is also used to determine the direction of causality in variance. Simulation results show that the proposed tests have satisfactory empirical properties, and an application with real data illustrates the methods.
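
A minimal univariate sketch, not the thesis's multivariate procedure, of the Cheung and Ng (1996) idea it generalizes: fit marginal GARCH(1,1) models (an assumed specification), cross-correlate the squared standardized residuals, and form a portmanteau statistic that is asymptotically chi-square under the null of no causality in variance.

```python
# Hypothetical sketch of a Cheung-Ng style causality-in-variance test
# between two univariate series (the thesis treats the multivariate case).
import numpy as np
from arch import arch_model
from scipy import stats

rng = np.random.default_rng(3)
x = rng.standard_normal(1500)   # placeholder return series 1
y = rng.standard_normal(1500)   # placeholder return series 2

def std_resid(series):
    res = arch_model(series, vol="GARCH", p=1, q=1).fit(disp="off")
    return res.std_resid        # epsilon_t / sigma_t

u, v = std_resid(x) ** 2, std_resid(y) ** 2
u, v = u - u.mean(), v - v.mean()
T, M = len(u), 10               # sample size, number of lags tested

# r(k): sample correlation between u_t and v_{t-k};
# S = T * sum_k r(k)^2 is asymptotically chi2(M) under the null.
r = [np.sum(u[k:] * v[:T - k]) / (T * u.std() * v.std()) for k in range(1, M + 1)]
S = T * np.sum(np.square(r))
print("S =", S, "p-value =", 1 - stats.chi2.cdf(S, df=M))
```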

66. Cálculo do Value at Risk (VaR) para o Ibovespa, pós crise de 2008, por meio dos modelos de heterocedasticidade condicional (GARCH) e de volatilidade estocástica (Local Scale Model - LSM) / Calculating Value at Risk (VaR) for the Ibovespa after the 2008 crisis using conditional heteroscedasticity (GARCH) and stochastic volatility (Local Scale Model - LSM) models

Santos, Julio Cesar Grimalt dos 10 February 2015 (has links)
This study proposes the implementation of a statistical model for volatility estimation that is not widespread in the Brazilian literature, the local scale model (LSM), presenting its advantages and disadvantages relative to the models commonly used for risk measurement. The parameters are estimated on daily Ibovespa quotations from January 2009 to December 2014, and the empirical accuracy of the models is assessed with out-of-sample tests comparing the VaR estimates obtained for January through December 2014. Explanatory variables were introduced in an attempt to improve the models; the Dow Jones index, the American counterpart of the Ibovespa, was chosen because it exhibits properties such as high correlation, causality in the Granger sense, and a significant log-likelihood ratio. One innovation of the local scale model is that it does not use the variance directly but its reciprocal, called the 'precision' of the series, which follows a kind of multiplicative random walk. The LSM captured all the stylized facts of the financial series, and the results favoured its use; the model is therefore an efficient and parsimonious specification for estimating and forecasting volatility, since it has only one parameter to estimate, which represents a paradigm shift relative to conditional heteroscedasticity models.
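
As a hedged sketch of the GARCH side of the comparison (the local scale model is less standard and is omitted here), a one-day-ahead parametric VaR can be computed from a fitted GARCH(1,1) variance forecast; the data and the 99% confidence level are illustrative assumptions.

```python
# Illustrative one-day-ahead parametric VaR from a GARCH(1,1) fit,
# sketching the conditional-heteroscedasticity benchmark in the study.
import numpy as np
from arch import arch_model
from scipy.stats import norm

rng = np.random.default_rng(4)
returns = rng.standard_t(df=6, size=1500) * 1.2   # placeholder Ibovespa % returns

res = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
fcast = res.forecast(horizon=1)
mu = res.params["mu"]
sigma = np.sqrt(fcast.variance.values[-1, 0])     # one-step-ahead sigma forecast

var_99 = -(mu + sigma * norm.ppf(0.01))           # loss threshold, 99% VaR
print(f"1-day 99% VaR: {var_99:.2f}%")
```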

68. Gestão de risco das principais tesourarias de fundos de investimento em ações no Brasil / Risk management of major treasuries of funds investing in shares in Brazil

Antonio Glênio Moura Ferreira 10 February 2014 (has links)
nÃo hà / O presente trabalho busca analisar, empiricamente, o comportamento do modelo de mensuraÃÃo de risco de mercado Value-at-Risk â VaR em sua interpretaÃÃo paramÃtrica gaussiana incondicional e extensÃes que regulam as violaÃÃes sobre a nÃo normalidade e a heterocedasticidade dos retornos diÃrios dos fundos de investimentos em AÃÃes, das treze maiores instituiÃÃes financeiras residentes no Brasil, durante o perÃodo de janeiro/06 a dezembro/12. Para uma melhor avaliaÃÃo dos dados, buscou-se, inicialmente, modelar a evoluÃÃo condicional do risco e ajustar a idiossincrasia estatÃstica das sÃries temporais das treze tesourarias, utilizando distribuiÃÃes de probabilidade que mais se adaptassem à anÃlise dos modelos. Os resultados obtidos com esses modelos sÃo analisados à luz do teste para proporÃÃo de falhas proposto por Kupiec (1995) e Chisttoffersen (1998). A pesquisa ainda apresenta, com exemplos grÃficos, uma anÃlise de desempenho Risco â Retorno dos treze bancos utilizando a metodologia proposta por Balzer. / This study aims to examine empirically the behavior of the model for measuring market risk Value at Risk - VaR in its parametric interpretation unconditional Gaussian and extensions that regulate violations on heteroscedasticity and non-normality of daily returns of investment funds Actions, of the thirteen largest financial institutions resident in Brazil, during the January/06 dezembro/12. For a better evaluation of the data, we sought to initially model the conditional evolution of risk and adjust the statistic al idiosyncrasy of temporal series of thirteen treasuries, using probability distributions that best adapt to the analysis of the models. The results obtained with the semodels are analyzed by the test failure rate proposed by Kupiec (1995) and Chisttoffersen (1998). The survey also shows, with graphic examples, a performance Risk - Return of the thirteen banks using the methodology proposed by Balzer.

69. A heteroscedastic volatility model with Fama and French risk factors for portfolio returns in Japan / En heteroskedastisk volatilitetsmodell med Fama och French-riskfaktorer för portföljavkastning i Japan

Wallin, Edvin, Chapman, Timothy January 2021 (has links)
This thesis uses the Fama and French five-factor model (FF5M) and proposes an alternative, named the Fama and French five-factor heteroscedastic student's model (FF5HSM). The model combines an ARMA model for the returns, with the FF5M factors incorporated, and a GARCH(1,1) model for the volatility. The FF5HSM uses return data from the FF5M portfolio construction for the Japanese stock market together with the five risk factors; the portfolios capture different levels of market capitalisation, and the factors capture market risk. The ARMA modelling addresses the autocorrelation present in the data, while the GARCH(1,1) model deals with the heteroscedasticity in daily stock returns; this order of GARCH model is considered reasonable in the academic literature for this type of data. Another finding in earlier research is that asset returns do not satisfy the normality assumption of an ordinary regression model, so the skewed student's t-distribution is assumed for the error terms. The results indicate that the FF5HSM has a better in-sample fit than the FF5M, addressing the heteroscedasticity and autocorrelation in the data and reducing them, depending on the portfolio. Regarding forecasting, both the FF5HSM and the FF5M are accurate models, depending on the portfolio to which they are applied.
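
A hedged sketch of a model in the FF5HSM spirit, assuming the five factors sit in a NumPy array X: the arch package's ARX mean with exogenous regressors, a GARCH(1,1) variance, and skewed Student's t errors. arch's mean equation supports AR but not MA terms, so this is an AR approximation of the thesis's ARMA specification, not its actual code.

```python
# Illustrative factor-augmented AR + GARCH(1,1) fit with skew-t errors,
# approximating the FF5HSM described above (AR only; arch has no MA terms).
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)
T = 1500
X = rng.standard_normal((T, 5))               # placeholder FF5 factor returns
beta = np.array([0.9, 0.3, 0.2, 0.1, 0.1])    # assumed factor loadings
y = X @ beta + rng.standard_t(df=6, size=T)   # placeholder portfolio returns

model = arch_model(
    y, x=X, mean="ARX", lags=1,               # AR(1) mean with factor regressors
    vol="GARCH", p=1, q=1,
    dist="skewt",                             # skewed Student's t errors
)
res = model.fit(disp="off")
print(res.summary())
```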

70. Business analytics tools for data collection and analysis of COVID-19

Widing, Härje January 2021 (has links)
The pandemic that struck the entire world in 2020, caused by the SARS-CoV-2 (COVID-19) virus, will attract enormous interest in statistical and economic analysis for a long time. While it is not the first pandemic to strike the entire world, it is the first in history for which data were gathered to this extent. Most countries collected and shared their numbers of cases, tests and deaths related to the COVID-19 virus, using different storage methods and different data types. Obtaining quality data on the pandemic was a problem for most countries, since the data change constantly: not only the current figures but also past values, which are revised as additional information surfaces. The importance of having the latest data available for government officials to make informed decisions makes Business Intelligence tools and techniques for data gathering and aggregation one way of solving the problem. One of the most widely used Business Intelligence tools is Microsoft's Power BI, designed as a powerful visualisation and analysis tool that can gather all data related to the COVID-19 pandemic into one application. The pandemic caused not only millions of deaths but also one of the largest drops in the stock market since the Great Recession of 2007. To determine whether the deaths or other factors directly caused the drop, the study modelled the volatility of index funds using generalized autoregressive conditional heteroscedasticity (GARCH). One question often asked about the COVID-19 virus is how deadly it is; analysing the pandemic's effect on the mortality rate is one way of assessing both that effect and the deadliness of the virus. The mortality-rate analysis was performed using a seasonal artificial neural network, which was also used to forecast deaths from the COVID-19 daily-deaths data.
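
A minimal sketch, under assumed lag choices, of a seasonal neural-network forecaster of the kind the abstract describes: a small MLP fed with a one-day lag and a weekly seasonal lag of the daily-deaths series. scikit-learn's MLPRegressor and the lag structure are stand-in assumptions, not the thesis's actual architecture.

```python
# Hypothetical seasonal ANN sketch: forecast daily deaths from lagged values,
# including a weekly seasonal lag, with a small multilayer perceptron.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
t = np.arange(400)
# Placeholder daily-deaths series with a weekly pattern and a slow trend.
deaths = 50 + 0.1 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 3, 400)

season = 7                                   # assumed weekly seasonality
X = np.column_stack([deaths[season - 1:-1],  # lag 1
                     deaths[:-season]])      # seasonal lag 7
y = deaths[season:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:-30], y[:-30])                  # hold out the last 30 days
rmse = np.sqrt(np.mean((model.predict(X[-30:]) - y[-30:]) ** 2))
print("held-out RMSE:", rmse)
```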
