11

Option Pricing with Long Memory Stochastic Volatility Models

Tong, Zhigang 06 November 2012 (has links)
In this thesis, we propose two continuous time stochastic volatility models with long memory that generalize two existing models. More importantly, we provide analytical formulae that allow us to study option prices numerically, rather than by means of simulation; we are not aware of any existing analytical results for the continuous time long memory case. In both models, we allow for non-zero correlation between the stochastic volatility and stock price processes. We numerically study the effects of long memory on option prices and show that the fractional integration parameter has the opposite effect to that of the volatility-of-volatility parameter in short memory models. We also find that long memory models can accommodate short-term options and the decay of the volatility skew better than the corresponding short memory stochastic volatility models.
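For intuition only, the sketch below simulates a discretized long memory stochastic volatility model by driving the log-variance with fractionally integrated (ARFIMA(0,d,0)) noise correlated with the return shocks, and prices a European call by Monte Carlo. This is not the analytical approach developed in the thesis, which deliberately avoids simulation, and every parameter value (spot, strike, d, rho, sigma_v, theta) is hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustrative only, not taken from the thesis)
S0, K, r, T = 100.0, 100.0, 0.02, 0.5     # spot, strike, rate, maturity
n_steps, n_paths = 126, 10_000
dt = T / n_steps
d = 0.3            # fractional integration parameter of the log-variance
rho = -0.5         # correlation between volatility and return shocks
sigma_v = 0.5      # volatility of volatility
theta = np.log(0.04)                      # long-run log-variance level

# MA(inf) weights of an ARFIMA(0,d,0) process, truncated at n_steps lags
psi = np.ones(n_steps)
for j in range(1, n_steps):
    psi[j] = psi[j - 1] * (j - 1 + d) / j

# Correlated Gaussian shocks for the log-variance and the price
z_v = rng.standard_normal((n_paths, n_steps))
z_s = rho * z_v + np.sqrt(1 - rho**2) * rng.standard_normal((n_paths, n_steps))

# Long-memory log-variance: fractionally integrated noise around theta
fi_noise = np.array([np.convolve(z, psi)[:n_steps] for z in z_v])
var = np.exp(theta + sigma_v * np.sqrt(dt) * fi_noise)

# Risk-neutral log-price paths and the Monte Carlo call price
log_S = np.log(S0) + np.cumsum((r - 0.5 * var) * dt
                               + np.sqrt(var * dt) * z_s, axis=1)
payoff = np.maximum(np.exp(log_S[:, -1]) - K, 0.0)
print("Monte Carlo call price:", np.exp(-r * T) * payoff.mean())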
12

A Reexamination of the Fisher Effect

Lin, Albert 23 July 2002 (has links)
none
14

Long memory conditional volatility and dynamic asset allocation

Nguyen, Anh Thi Hoang January 2011 (has links)
The thesis evaluates the benefit of allowing for long memory volatility dynamics in forecasts of the variance-covariance matrix for asset allocation. First, I compare the forecast performance of multivariate long memory conditional volatility models (the long memory EWMA, long memory EWMA-DCC, FIGARCH-DCC and Component GARCH-DCC models) with that of short memory conditional volatility models (the short memory EWMA and GARCH-DCC models), using the asset allocation framework of Engle and Colacito (2006). This part of the research reports two main findings. First, for longer horizon forecasts, long memory volatility models generally produce forecasts of the covariance matrix that are statistically more accurate and informative, and economically more useful, than those produced by short memory volatility models. Second, the two parsimonious long memory EWMA models outperform the other models, both short memory and long memory, in a majority of cases across all forecast horizons. These results apply to both low and high dimensional covariance matrices with both low and high correlation assets, and are robust to the choice of estimation window.

The research then evaluates the application of multivariate long memory conditional volatility models to dynamic asset allocation, applying the volatility timing procedure of Fleming et al. (2001), and consistently identifies economic gains from incorporating long memory volatility dynamics in investment decisions. Investors are willing to pay to switch from the static to the dynamic strategies, and especially from the short memory to the long memory volatility timing strategies, across both short and long investment horizons. Among the long memory conditional volatility models, the two parsimonious long memory EWMA models again generally produce the best-performing portfolios. When transaction costs are taken into account, the gains from the daily rebalanced dynamic portfolios deteriorate; however, it is still worth implementing the dynamic strategies at lower rebalancing frequencies. The results are robust to estimation error in expected returns, the choice of risk aversion coefficients and the use of a long-only constraint.

To control for estimation error in forecasts of the long memory high dimensional covariance matrix, the research develops a dynamic long memory factor model (the Orthogonal Factor Long Memory, or OFLM, model) by embedding the univariate long memory EWMA model of Zumbach (2006) into an orthogonal factor structure. The factor-structured OFLM model is evaluated against the six multivariate conditional volatility models above in terms of forecast performance and economic benefits. The results suggest that the OFLM model generally produces impressive forecasts over both short and long forecast horizons. In the volatility timing framework, portfolios constructed with the OFLM model consistently dominate the static and the other dynamic volatility timing portfolios at all rebalancing frequencies. In particular, the outperformance of the factor-structured OFLM model over the fully estimated LM-EWMA model confirms the advantage of the factor structure in reducing estimation error. The factor structure also significantly reduces transaction costs, making the dynamic strategies more feasible in practice. The dynamic factor long memory volatility model also consistently produces better portfolios than the traditional unconditional factor and dynamic factor short memory volatility models.
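As a rough illustration of the long memory EWMA idea referenced above, the sketch below forms a covariance forecast as a weighted combination of ordinary EWMA covariance matrices computed at geometrically spaced decay times, in the spirit of Zumbach (2006). The number of components, the decay times and the slowly decaying weights are assumptions of this sketch, not the calibration used in the thesis or in Zumbach's paper.

import numpy as np

def lm_ewma_cov(returns, n_components=5, tau1=4.0, rho=np.sqrt(2.0)):
    """Illustrative long-memory EWMA covariance: a weighted combination of
    ordinary EWMA covariance matrices at geometrically spaced decay times.
    The slowly decaying weights are an assumption of this sketch."""
    T, N = returns.shape
    taus = tau1 * rho ** np.arange(n_components)   # decay times in days
    weights = 1.0 / np.log(2.0 * taus)             # slow, log-type decay
    weights /= weights.sum()

    cov = np.zeros((N, N))
    for tau, w in zip(taus, weights):
        lam = np.exp(-1.0 / tau)                   # per-step EWMA decay
        ewma = np.cov(returns[:50].T)              # warm-up estimate
        for t in range(T):
            x = returns[t][:, None]
            ewma = lam * ewma + (1.0 - lam) * (x @ x.T)
        cov += w * ewma
    return cov

# Usage with simulated daily returns for three assets
rng = np.random.default_rng(1)
rets = rng.standard_normal((1000, 3)) * 0.01
print(lm_ewma_cov(rets))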
15

Modelling and forecasting stochastic volatility

Lopes Moreira de Veiga, Maria Helena 19 April 2004 (has links)
The purpose of my thesis is to model and forecast the volatility of financial return series using both continuous and discrete time stochastic volatility models.

In the first chapter I try to fit the main characteristics of financial return series, such as volatility persistence, volatility clustering and fat tails of the return distribution. The estimated logarithmic stochastic volatility models are direct extensions of those of Gallant and Tauchen (2001), augmented with a feedback feature. This feature is of extreme importance because it allows the model to capture the low variability of the volatility factor when the factor itself is low (volatility clustering), and it also captures the increase in volatility persistence that occurs when there is an apparent change in the pattern of volatility at the very end of the sample. In this chapter, as in the rest of the thesis, I use the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996) for estimation. The estimation step yields two candidate models: the logarithmic model with one volatility factor and feedback (L1F) and the logarithmic model with two volatility factors (L2). Since it is not possible to choose between them based on the diagnostics computed at the estimation step, I use the reprojection step to obtain additional tools for comparing models. The L1F model reprojects volatility quite well and does not miss the crash of 1987.

In the second chapter I fit the continuous time two-factor volatility model of Gallant and Tauchen (2001) to the returns of a Microsoft share. The aim of this chapter is to evaluate the volatility forecasting performance of the continuous time stochastic volatility model relative to that of the traditional GARCH and ARFIMA models. To this end, I estimate a continuous time stochastic volatility model for the logarithm of the asset price using the EMM of Gallant and Tauchen (1996) and filter the underlying volatility using the reprojection technique of Gallant and Tauchen (1998). Under the assumption that the model is correctly specified, fitting a continuous time stochastic volatility model to the data yields a consistent estimator of the integrated volatility. Forecast evaluation for the three estimated models is carried out using the R2 of individual regressions of realized volatility on the volatility forecasts obtained from the estimated models. The empirical results indicate that the continuous time model performs better out of sample than the traditional GARCH and ARFIMA models; moreover, these last two models have difficulty tracking the growth pattern of realized volatility, probably because of the change in the pattern of volatility in the last part of the sample.

Finally, in the third chapter I return to model specification and extend the long memory stochastic volatility model of Harvey (1993) by introducing a short-run volatility factor. This extra factor increases kurtosis and helps the model capture volatility persistence (which is modelled by a fractionally integrated process, as in Harvey (1993)). Furthermore, under some restrictions on the parameters it is possible to fit the empirically small first-order autocorrelation of squared returns. All these results are proved theoretically and the model is implemented empirically using S&P 500 composite index returns. The empirical results show the superiority of the model in fitting the main empirical facts of financial return series.
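The forecast-evaluation regression used in the second chapter above, realized volatility regressed on a constant and the model forecasts with the fit summarized by R2, is simple to sketch. The series below are simulated stand-ins, not the Microsoft data or the thesis's estimated forecasts, and the model names are only labels.

import numpy as np

def forecast_r2(realized_vol, forecast_vol):
    """R2 from regressing realized volatility on a constant and the
    volatility forecasts (the evaluation regression described above)."""
    X = np.column_stack([np.ones_like(forecast_vol), forecast_vol])
    beta, *_ = np.linalg.lstsq(X, realized_vol, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((realized_vol - fitted) ** 2)
    ss_tot = np.sum((realized_vol - realized_vol.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Simulated stand-ins: one year of "realized volatility" and forecasts
# from three hypothetical competing models
rng = np.random.default_rng(2)
rv = np.abs(rng.standard_normal(250)) * 0.01
for name, noise in (("SV", 0.002), ("GARCH", 0.004), ("ARFIMA", 0.004)):
    fc = rv + rng.standard_normal(250) * noise
    print(name, round(forecast_r2(rv, fc), 3))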
16

Forecast the USA Stock Indices with GARCH-type Models

Cai, Xinhua January 2012 (has links)
No description available.
17

Statistical Control Charts of I(d) processes

Wang, Chi-Ling 10 July 2002 (has links)
Long range dependent processes occur in many fields, and it is important to monitor them so that shifts can be detected early. This paper considers the problem of detecting changes in an I(d) process, or an ARFIMA(p,d,q) process, using statistical control charts. Control limits for EWMA and EWRMS control charts of I(d) processes are established and analytic forms are derived. The average run lengths of these control charts are estimated by Monte Carlo simulation. In addition, we illustrate the performance of the control charts with empirical examples of I(d) and ARFIMA(1,d,1) processes.
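A minimal simulation sketch of the setup described above: it generates I(d) series from the truncated MA(inf) representation of an ARFIMA(0,d,0) process, monitors them with an EWMA statistic, and estimates an average run length by Monte Carlo. The control limit here is a crude empirical one rather than the analytic limits derived in the paper, and the values of d, lam and the width L are illustrative.

import numpy as np

rng = np.random.default_rng(3)

def arfima_0d0(n, d, rng):
    """Simulate an I(d) (ARFIMA(0,d,0)) series via its truncated MA(inf) form."""
    psi = np.ones(n)
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    return np.convolve(rng.standard_normal(n), psi)[:n]

def ewma_stat(x, lam=0.1):
    """EWMA monitoring statistic z_t = lam*x_t + (1-lam)*z_{t-1}, z_0 = 0."""
    z, z_prev = np.empty_like(x), 0.0
    for t, xt in enumerate(x):
        z_prev = lam * xt + (1.0 - lam) * z_prev
        z[t] = z_prev
    return z

# Crude empirical control limit from in-control simulations
d, lam, L, n = 0.3, 0.1, 3.0, 500
in_control = np.array([ewma_stat(arfima_0d0(n, d, rng), lam) for _ in range(500)])
limit = L * in_control.std()

# Average run length after a shift in the mean of the I(d) process
shift, run_lengths = 1.0, []
for _ in range(500):
    z = ewma_stat(arfima_0d0(n, d, rng) + shift, lam)
    hits = np.where(np.abs(z) > limit)[0]
    run_lengths.append(hits[0] + 1 if hits.size else n)
print("Estimated ARL after the shift:", np.mean(run_lengths))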
18

The Application of Atheoretical Regression Trees to Problems in Time Series Analysis

Rea, William Stanley January 2008 (has links)
This thesis applies Atheoretical Regression Trees (ART) to the problem of locating changes in mean in a time series when the number and location of those changes are unknown. We undertook an extensive simulation study of ART's performance on a range of time series. We found ART to be a useful addition to currently established structural break methodologies such as the CUSUM and the procedure of Bai and Perron, and to be particularly useful in the analysis of long time series which are not practical to analyze with the optimal procedure of Bai and Perron. ART was then applied to a long-standing problem in the analysis of long memory time series. We propose two new methods based on ART for distinguishing between true long memory and spurious long memory due to structural breaks. These methods are fundamentally different from current tests and procedures intended to discriminate between the two sets of competing models. The methods were subjected to a simulation study and shown to be effective in discriminating between simple regime switching models and fractionally integrated processes. We applied the new methods to 16 realized volatility series and concluded that they were not fractionally integrated series; all 16 series had mean shifts, some of which could be identified with historical events. We also applied the new methods to a range of geophysical time series and concluded that they were not fractional Gaussian noises; all of the series examined had mean shifts, some of which could be identified with known climatic changes. We conclude that our new methods are a significant advance in model discrimination for long memory series.
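The core ART idea is to regress the series on its time index with a regression tree, so that the tree's split points become candidate break dates. The sketch below illustrates this with an off-the-shelf scikit-learn tree on a simulated series with two mean shifts; the use of scikit-learn and the hand-picked choice of three leaf nodes (regimes) are assumptions of this illustration, not the thesis's procedure for selecting the number of breaks.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Simulated series with two mean shifts (after observations 300 and 700)
rng = np.random.default_rng(4)
y = np.concatenate([rng.standard_normal(300),
                    rng.standard_normal(400) + 2.0,
                    rng.standard_normal(300) - 1.0])
t = np.arange(len(y)).reshape(-1, 1)   # the time index is the only regressor

# Regression tree on the time index: each internal split is a candidate
# break date. Fixing three leaf nodes is a simplification of this sketch.
tree = DecisionTreeRegressor(max_leaf_nodes=3).fit(t, y)
splits = sorted(th for f, th in zip(tree.tree_.feature, tree.tree_.threshold)
                if f != -2)            # feature == -2 marks leaf nodes
print("Estimated break dates:", [int(round(s)) for s in splits])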
19

On testing and forecasting in fractionally integrated time series models

Andersson, Michael K. January 1998 (has links)
This volume contains five essays in the field of time series econometrics. All five discuss properties of fractionally integrated processes and models. The first essay, entitled Do Long-Memory Models have Long Memory?, demonstrates that fractional integration can enhance the memory of ARMA processes enormously. This is, however, not true for all combinations of differencing, autoregressive and moving average parameters. The second essay, with the title On the Effects of Imposing or Ignoring Long-Memory when Forecasting, investigates how the choice between modelling stationary time series as ARMA or ARFIMA processes affects the accuracy of forecasts. The results suggest that ignoring long memory is worse than imposing it and that the maximum likelihood estimator for the ARFIMA model is to be preferred. The third essay, Power and Bias of Likelihood Based Inference in the Cointegration Model under Fractional Cointegration, investigates the performance of the usual cointegration approach when the processes are fractionally cointegrated. Under these circumstances, it is shown that the maximum likelihood estimates of the long-run relationship are severely biased. The fourth and fifth essays, entitled respectively Bootstrap Testing for Fractional Integration and Robust Testing for Fractional Integration using the Bootstrap, propose and investigate the performance of some bootstrap testing procedures for fractional integration. The results suggest that the empirical size of a bootstrap test is (almost) always close to the nominal size, and that a well-designed bootstrap test is quite robust to deviations from standard assumptions. / Diss. Stockholm : Handelshögsk. [7] pp., pp. x-xiv, pp. 1-26: summary, pp. 27-111, [4] pp.: 5 essays
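To make the "long memory" property discussed in these essays concrete, the short sketch below compares the MA(inf) weights of a fractionally integrated process, which decay hyperbolically, with the geometric decay of a persistent AR(1). The specific values d = 0.4 and phi = 0.9 are illustrative, not parameters taken from the thesis.

import numpy as np

def arfima_ma_weights(d, n):
    """MA(inf) weights of (1-L)^(-d): psi_0 = 1, psi_j = psi_{j-1}*(j-1+d)/j."""
    psi = np.ones(n)
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    return psi

# Hyperbolic decay of the fractional weights versus the geometric decay
# of a persistent AR(1): at lag 500 the AR(1) weight is essentially zero
# while the fractional weight is still visibly non-zero.
lags = np.array([1, 10, 50, 100, 500])
print("ARFIMA, d = 0.4: ", np.round(arfima_ma_weights(0.4, 501)[lags], 4))
print("AR(1), phi = 0.9:", np.round(0.9 ** lags, 4))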
20

GARCH Models with Long Memory and Nonparametric Specifications

Conrad, Christian January 2006 (has links)
Dissertation, University of Mannheim, 2006.
