1 |
High Quantile Estimation for some Stochastic Volatility Models. Luo, Ling, 05 October 2011 (has links)
In this thesis we consider estimation of the tail index for heavy-tailed stochastic volatility models with long memory. We prove a central limit theorem for a Hill estimator. In particular, it is shown that neither the rate of convergence nor the asymptotic variance is affected by long memory. The theoretical findings are verified by simulation studies.
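As a rough illustration of the Hill estimator this abstract studies, the sketch below estimates the tail index of a simulated heavy-tailed stochastic volatility sample. The model and all parameters are illustrative assumptions, not those of the thesis (in particular, the simulated volatility here is i.i.d., with no long memory):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of the tail index alpha from the k largest order statistics."""
    xs = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]   # descending order statistics
    gamma = np.mean(np.log(xs[:k]) - np.log(xs[k]))          # Hill estimate of 1/alpha
    return 1.0 / gamma

# Illustrative heavy-tailed SV sample: log-normal volatility times Pareto noise.
# Multiplying by a light-tailed volatility leaves the tail index of the noise intact.
rng = np.random.default_rng(0)
n = 100_000
sigma = np.exp(0.5 * rng.standard_normal(n))   # i.i.d. log-normal volatility
eps = rng.pareto(3.0, n) + 1.0                 # Pareto noise with tail index 3
alpha_hat = hill_estimator(sigma * eps, k=1000)
print(alpha_hat)                               # typically close to 3 at this sample size
```

The choice k = 1000 (the top 1% of observations) is an arbitrary bandwidth for the sketch; in practice the estimate is examined across a range of k.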
|
5 |
Essays on trading strategies and long memory. Rambaccussing, Dooruj, January 2012 (has links)
Present value based asset pricing models are explored empirically in this thesis. Three contributions are made. First, it is shown that a market timing strategy may be implemented in an excessively volatile market such as the S&P 500. The main premise of the strategy is that asset prices may revert to the present value over time. The present value is computed in real time, where the present value variables (future dividends, dividend growth and the discount factor) are forecast from simple models. The strategy works well for monthly data and when dividends are forecast from autoregressive models. The performance of the strategy relies on how discount rates are empirically defined. When discount rates are defined by the rolling and recursive historic average of realized returns, the strategy performs well. The discount rate and dividend growth can also be derived using a structural approach. Using the Campbell and Shiller log-linearized present value equation, and assuming that expected and realized dividend growth are unit related, a state space model is constructed linking the price-dividend ratio to expected returns and expected dividend growth. The model parameters are estimated from the data and are used to derive the filtered expected returns and expected dividend growth series. The present value is computed using the filtered series. The trading rule tends to perform worse in this case. Discount rates are again found to be the major determinant of its success. Although the structural approach offers a time series of discount rates that is less volatile, it is on average higher than that of the historical mean model. The filtered expected returns series is a potential predictor of realized returns. The predictive performance of expected returns is compared to that of the price-dividend ratio. It is found that expected returns are not superior to the price-dividend ratio in forecasting returns, both in-sample and out-of-sample.
The predictive regressions include both simple Ordinary Least Squares and Vector Autoregressions. The second contribution of this thesis is the modeling of expected returns using autoregressive fractionally integrated processes. According to the work of Granger and Joyeux (1980), aggregating series whose autoregressive parameters follow a Beta distribution, as may arise from utility maximization problems, produces long memory; in the time series literature, this implies that the series may have a fractional order of integration (I(d)). Autoregressive fractionally integrated models may have better appeal than models which explicitly posit unit roots or no unit roots. Two models are presented. The first model, which incorporates an ARFIMA(p,d,q) within the present value through the state equations, is found to be highly unstable. Small sample size may be a reason for this finding. The second model involves predicting dividend growth from simple OLS models, and sequentially netting expected returns from the present value model. Based on the previous finding that expected returns may be a long memory process, the third contribution of this thesis derives a test of long memory based on the asymptotic properties of the variance of aggregated series, in the context of the Geweke and Porter-Hudak (1983) semiparametric estimator. The test makes use of the fact that a pure long memory process will have the same autocorrelation structure when observations are drawn at repeated intervals to form a new series. The test is implemented using the Sieve-AR bootstrap, which accommodates long range dependence in stochastic processes. The test is relatively powerful against both linear and nonlinear specifications in large samples.
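The Geweke and Porter-Hudak semiparametric estimator that the proposed test builds on can be sketched as a log-periodogram regression. This is a generic textbook version, not the thesis's aggregation-based test or its Sieve-AR bootstrap, and the simulated series, truncation length and bandwidth below are illustrative choices:

```python
import numpy as np

def gph_estimate(x, m):
    """Log-periodogram (GPH) estimate of the memory parameter d,
    using the first m Fourier frequencies."""
    n = len(x)
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    log_I = np.log(np.abs(dft) ** 2 / (2.0 * np.pi * n))   # log periodogram
    reg = -np.log(4.0 * np.sin(freqs / 2.0) ** 2)          # GPH regressor; OLS slope = d
    X = np.column_stack([np.ones(m), reg])
    beta, *_ = np.linalg.lstsq(X, log_I, rcond=None)
    return beta[1]

# Illustrative check on simulated fractional noise with d = 0.4,
# built from a truncated MA(infinity) representation of (1-B)^(-d).
d_true, J, n = 0.4, 2000, 8192
j = np.arange(1, J)
psi = np.concatenate([[1.0], np.cumprod((j - 1 + d_true) / j)])  # MA weights
eps = np.random.default_rng(1).standard_normal(n + J - 1)
x = np.convolve(eps, psi, mode="valid")
print(gph_estimate(x, m=350))   # roughly 0.4
```

The bandwidth m controls the usual bias-variance trade-off: too many frequencies contaminate the regression with short-run dynamics, too few inflate the variance of the slope.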
|
6 |
Dynamics of demographic changes and economic development. Mishra, Tapas K., 20 October 2006 (has links)
Demographic changes and economic growth are inextricably linked. However, the complex role of the demographic system, and specifically its temporal features, has not been treated with rigor until recently. This dissertation undertakes such an attempt to explain cross-country growth variations and focuses on long-term growth projections by explicitly treating the demographic system in a stochastic shocks framework. We exploit the temporal characteristics of the demographic system to shed light on its evolution, study its complex interaction with the economic system, and analyze the long-run effect on economic growth and development. The dissertation contains six chapters. After outlining the motivation of the thesis and an overview of the chapter scheme in the first chapter, we investigate in Chapter 2 how the effects of demographic components, viz. age-specific population, have changed over the decades. Following the standard practice of assuming `stationary' features of population growth, we first evaluate and extend the popular empirical economic growth models. We find that decadal changes have brought forth variations in the economic growth of developed and developing economies. We argue that accounting for temporal features of the demographic and economic growth system would provide clearer insights into persistent growth fluctuations. In Chapter 3 we develop a new mechanism to characterize the stochastic nature of demographic shocks, in which population series with a large temporal dimension are assumed to be governed by a certain degree of stochastic shocks. By doing so, the conventional `stationary' assumption underlying current theoretical and empirical exploration is relaxed and more dynamic information about the persistence of shocks is accommodated in the economic growth models. To this end, we first provide an analytical framework to show that long-memory shocks in demographic age structure or population might induce long memory in economic growth. An empirical illustration for both developed and developing countries is carried out to demonstrate that the population age structure in these countries is characterized by long memory. The causality between stochastic demographic shocks and economic growth (and the converse) is also examined. Following the theoretical development and empirical illustration in Chapter 3, in Chapter 4 we propose to forecast total and age-structured population employing the fractionally integrated ARMA (in short, ARFIMA) technique. The conventional methods of population forecasting are discussed in this chapter, evaluating the advantages and potential weaknesses of these methods. Our approach to population forecasting can be considered a shift from the conventional `low, medium, and high' variants and the recently used ARMA projections (which assume stationarity or first difference stationarity of aggregate population), and a departure from the stochastic population forecasts based on the Leslie matrix used in the extant population forecasting literature. In Chapter 5 we incorporate the memory properties of the demographic age distribution to forecast the Gross Domestic Product (or national income) of some developed and developing countries. We relax the stationary age-structure and population growth assumption in the model while performing long-term income projections. We argue that the growth of total and age-structured population need not be stationary and that any degree of stochastic shocks in these series can affect forecasting performance. Given that a long-memory panel method is yet to be comprehensively built for forecasting, we perform forecasts of demography-based income in a univariate context, assuming a stochastic long-memory process for age-structured population growth.
Finally, Chapter 6 summarizes the main findings of the thesis and outlines some possible directions for further research.
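The pure fractionally integrated forecasting idea that Chapters 4 and 5 rely on can be sketched through the AR(infinity) representation of an ARFIMA(0,d,0). This is a minimal zero-mean illustration on hypothetical data, not the authors' full ARFIMA(p,d,q) setup for population series:

```python
import numpy as np

def arfima_0d0_forecast(x, d, horizon):
    """h-step-ahead plug-in forecasts from a pure ARFIMA(0,d,0) model,
    assuming a zero-mean series (demean in practice)."""
    n = len(x)
    b = np.empty(n + horizon)          # coefficients of the expansion of (1 - B)^d
    b[0] = 1.0
    for j in range(1, n + horizon):
        b[j] = b[j - 1] * (j - 1 - d) / j
    a = -b[1:]                         # AR(infinity) weights: x_t = sum_j a_j * x_{t-j} + eps_t
    hist = [float(v) for v in x]
    for _ in range(horizon):
        past = np.array(hist[::-1])    # most recent observation first
        hist.append(float(a[:len(past)] @ past))
    return np.array(hist[n:])

# With d = 0 the weights vanish and forecasts revert immediately to the mean (zero);
# with 0 < d < 0.5 forecasts decay toward the mean only hyperbolically slowly.
print(arfima_0d0_forecast(np.ones(200), 0.45, 3))
```

The slow decay of the AR weights is exactly what lets an ARFIMA forecast retain information from the distant past, in contrast to the geometric forgetting of a stationary ARMA model.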
|
7 |
A Reexamination for Fisher effect. Lin, Albert, 23 July 2002 (has links)
none
|
8 |
Long memory conditional volatility and dynamic asset allocation. Nguyen, Anh Thi Hoang, January 2011 (has links)
The thesis evaluates the benefit of allowing for long memory volatility dynamics in forecasts of the variance-covariance matrix for asset allocation. First, I compare the forecast performance of multivariate long memory conditional volatility models (the long memory EWMA, long memory EWMA-DCC, FIGARCH-DCC and Component GARCH-DCC models) with that of short memory conditional volatility models (the short memory EWMA and GARCH-DCC models), using the asset allocation framework of Engle and Colacito (2006). The research reports two main findings. First, for longer horizon forecasts, long memory volatility models generally produce forecasts of the covariance matrix that are statistically more accurate and informative, and economically more useful than those produced by short memory volatility models. Second, the two parsimonious long memory EWMA models outperform the other models – both short memory and long memory – in a majority of cases across all forecast horizons. These results apply to both low and high dimensional covariance matrices with both low and high correlation assets, and are robust to the choice of estimation window. The research then evaluates the application of multivariate long memory conditional volatility models in dynamic asset allocation, applying the volatility timing procedure of Fleming et al. (2001). The research consistently identifies the economic gains from incorporating long memory volatility dynamics in investment decisions. Investors are willing to pay to switch from the static to the dynamic strategies, and especially from the short memory volatility timing to the long memory volatility timing strategies across both short and long investment horizons. Among the long memory conditional volatility models, the two parsimonious long memory EWMA models, again, generally produce the most superior portfolios. 
When transaction costs are taken into account, the gains from the daily rebalanced dynamic portfolios deteriorate; however, it is still worth implementing the dynamic strategies at lower rebalancing frequencies. The results are robust to estimation error in expected returns, the choice of risk aversion coefficients and the use of a long-only constraint. To control for estimation error in forecasts of the long memory high dimensional covariance matrix, the research develops a dynamic long memory factor (Orthogonal Factor Long Memory, or OFLM) model by embedding the univariate long memory EWMA model of Zumbach (2006) into an orthogonal factor structure. The factor-structured OFLM model is evaluated against the six multivariate conditional volatility models above in terms of forecast performance and economic benefits. The results suggest that the OFLM model generally produces impressive forecasts over both short and long forecast horizons. In the volatility timing framework, portfolios constructed with the OFLM model consistently dominate the static and other dynamic volatility timing portfolios at all rebalancing frequencies. In particular, the outperformance of the factor-structured OFLM model over the fully estimated LM-EWMA model confirms the advantage of the factor structure in reducing estimation error. The factor structure also significantly reduces transaction costs, making the dynamic strategies more feasible in practice. The dynamic factor long memory volatility model also consistently produces superior portfolios compared with those produced by the traditional unconditional factor and the dynamic factor short memory volatility models.
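A long-memory-style EWMA covariance forecast in the spirit described here can be built as a weighted mixture of EWMA covariance matrices with geometrically spaced decay horizons. The component count, decay times and weights below are illustrative assumptions, not Zumbach's (2006) calibration or any of the thesis's estimated models:

```python
import numpy as np

def lm_ewma_cov(returns, n_components=5, tau1=4.0, rho=2.0):
    """Mixture-of-EWMAs covariance forecast: slower components receive larger
    weights, mimicking the slow (long memory) decay of volatility shocks."""
    T, N = returns.shape
    taus = tau1 * rho ** np.arange(n_components)   # decay times 4, 8, 16, ...
    weights = taus / taus.sum()                    # illustrative weighting only
    lams = 1.0 - 1.0 / taus                        # per-component decay factors
    covs = np.zeros((n_components, N, N))
    for t in range(T):
        r = returns[t][:, None]
        for k in range(n_components):
            covs[k] = lams[k] * covs[k] + (1.0 - lams[k]) * (r @ r.T)
    return np.einsum("k,kij->ij", weights, covs)   # weighted mixture forecast

rng = np.random.default_rng(0)
S = lm_ewma_cov(rng.standard_normal((2000, 3)))
print(np.round(S, 2))   # near the identity for i.i.d. standard normal returns
```

Because each component is a convex combination of rank-one updates, the mixture stays symmetric and positive semi-definite, which matters when the forecast feeds a portfolio optimizer.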
|
9 |
Modelling and forecasting stochastic volatility. Lopes Moreira de Veiga, Maria Helena, 19 April 2004 (has links)
The purpose of my thesis is to model and forecast the volatility of financial return series using both continuous and discrete time stochastic volatility models. In the first chapter I try to fit the main characteristics of financial return series, such as volatility persistence, volatility clustering and the fat tails of the return distribution. The estimated logarithmic stochastic volatility models are direct extensions of Gallant and Tauchen's (2001), including a feedback feature. This feature is of extreme importance because it allows the model to capture the low variability of the volatility factor when the factor itself is low (volatility clustering), and it also captures the increase in volatility persistence that occurs when there is an apparent change in the pattern of volatility at the very end of the sample. In this chapter, as in the whole thesis, I use the Efficient Method of Moments of Gallant and Tauchen (1996) as the estimation method. From the estimation step, two models emerge: the logarithmic model with one volatility factor and feedback (L1F) and the logarithmic model with two volatility factors (L2). Since it is not possible to choose between them based on the diagnostics computed at the estimation step, I use the reprojection step to obtain more tools for comparing models. The L1F model is able to reproject volatility quite well, without even missing the crash of 1987. In the second chapter I fit the continuous time model with two volatility factors of Gallant and Tauchen (2001) to the returns of a Microsoft share.
The aim of this chapter is to evaluate the volatility forecasting performance of the continuous time stochastic volatility model relative to that of the traditional GARCH and ARFIMA models. To this end, I estimate, using the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996), a continuous time stochastic volatility model for the logarithm of the asset price, and I filter the underlying volatility using the reprojection technique of Gallant and Tauchen (1998). Under the assumption that the model is correctly specified, I obtain a consistent estimator of the integrated volatility by fitting a continuous time stochastic volatility model to the data. The forecasting evaluation for the three estimated models is done with the help of the R2 of individual regressions of realized volatility on the volatility forecasts obtained from the estimated models. The empirical results indicate the better out-of-sample performance of the continuous time model compared to the traditional GARCH and ARFIMA models. Further, these two models show difficulties in tracking the growth pattern of the realized volatility, probably due to the change in the pattern of volatility in the last part of the sample. Finally, in the third chapter I return to model specification and extend the long memory stochastic volatility model of Harvey (1993) by introducing a short run volatility factor. This extra factor increases kurtosis and helps the model capture volatility persistence (which is captured by a fractionally integrated process, as in Harvey (1993)). Furthermore, under some parameter restrictions it is possible to fit the empirical fact of small first order autocorrelation of squared returns. All these results are proved theoretically and the model is implemented empirically using the S&P 500 composite index returns. The empirical results show the superiority of the model in fitting the main empirical facts of financial return series.
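The R2 evaluation used in the second chapter (regressing realized volatility on a constant and each model's forecast) can be sketched generically. The simulated "realized" and "forecast" series below are stand-ins for illustration, not the thesis's data:

```python
import numpy as np

def mz_r2(realized, forecast):
    """R^2 of the Mincer-Zarnowitz-style regression:
    realized = a + b * forecast + error."""
    X = np.column_stack([np.ones(len(forecast)), forecast])
    beta, *_ = np.linalg.lstsq(X, realized, rcond=None)
    resid = realized - X @ beta
    return 1.0 - resid.var() / realized.var()

# A forecast that tracks realized volatility scores high; pure noise scores near zero.
rng = np.random.default_rng(0)
realized = np.exp(rng.standard_normal(1000))          # stand-in realized volatility
good = realized + 0.1 * rng.standard_normal(1000)     # informative forecast
noise = rng.standard_normal(1000)                     # uninformative forecast
print(mz_r2(realized, good), mz_r2(realized, noise))
```

Because the regression includes an intercept and a free slope, the R2 rewards forecasts that track the variation of realized volatility even if they are biased in level.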
|
10 |
Forecast the USA Stock Indices with GARCH-type Models. Cai, Xinhua, January 2012 (has links)
No description available.
|