GARCH models based on Brownian Inverse Gaussian innovation processes / Gideon Griebenow

In classic GARCH models for financial returns the innovations are usually assumed to be normally
distributed. However, it is generally accepted that a non-normal innovation distribution is needed
in order to account for the heavier tails often encountered in financial returns. Since the structure
of the normal inverse Gaussian (NIG) distribution makes it an attractive alternative innovation
distribution for this purpose, we extend the normal GARCH model by assuming that the
innovations are NIG-distributed. We use the normal variance mixture interpretation of the NIG
distribution to show that a NIG innovation may be interpreted as a normal innovation coupled with
a multiplicative random impact factor adjustment of the ordinary GARCH volatility. We relate this
new volatility estimate to realised volatility and suggest that the random impact factors are due to a
news noise process influencing the underlying returns process. This GARCH model with NIG-distributed
innovations leads to more accurate parameter estimates than the normal GARCH
model. In order to obtain even more accurate parameter estimates, and since we expect an
information gain if we use more data, we further extend the model to cater for high, low and close
data, as well as full intraday data, instead of only daily returns. This is achieved by introducing the
Brownian inverse Gaussian (BIG) process, which follows naturally from the unit inverse Gaussian
distribution and standard Brownian motion. Fitting these models to empirical data, we find that the
accuracy of the model fit increases as we move from the models assuming normally distributed
innovations and allowing for only daily data to those assuming underlying BIG processes and
allowing for full intraday data.
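The normal variance mixture construction described above can be sketched in a few lines: each GARCH innovation is a standard normal scaled by the square root of a unit-mean inverse-Gaussian "random impact factor". The sketch below is illustrative only, not the thesis's actual model — it covers the symmetric (zero-skewness) NIG case, and the parameter values and function names are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_nig_garch(n, omega=1e-6, alpha=0.05, beta=0.90, ig_shape=2.0):
    """Simulate a GARCH(1,1) path with NIG-type innovations via the
    normal variance mixture representation: a normal innovation whose
    GARCH volatility is adjusted by a multiplicative unit-mean
    inverse-Gaussian random impact factor (symmetric case only)."""
    r = np.empty(n)
    h = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    for t in range(n):
        v = rng.wald(1.0, ig_shape)       # random impact factor, E[v] = 1
        r[t] = np.sqrt(h * v) * rng.standard_normal()
        h = omega + alpha * r[t] ** 2 + beta * h
    return r
```

Because the impact factor has unit mean, the unconditional variance matches that of the corresponding normal GARCH model, while the mixing over `v` fattens the tails of the returns.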
However, we encounter one problematic result: there is empirical evidence of time dependence in the random impact factors. This means that the news noise processes, which we assumed to be independent over time, are in fact time dependent, as might indeed be expected. In
order to cater for this time dependence, we extend the model still further by allowing for
autocorrelation in the random impact factors. The increased complexity that this extension
introduces means that we can no longer rely on standard Maximum Likelihood methods, but have
to turn to Simulated Maximum Likelihood methods, in conjunction with Efficient Importance
Sampling and the Control Variate variance reduction technique, in order to obtain an approximation
to the likelihood function and the parameter estimates. We find that this time dependent model
assuming an underlying BIG process and catering for full intraday data fits generated data and
empirical data very well, provided that enough intraday data are available.

Thesis (Ph.D. (Risk Analysis))--North-West University, Potchefstroom Campus, 2006.
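The need for simulation-based likelihood evaluation arises because the random impact factors are latent: each observation's density is an expectation over the mixing variable, which must be approximated by Monte Carlo. The toy sketch below illustrates only the control variate idea for one such density — it is a minimal stand-in under assumed parameter values, not the Efficient Importance Sampling scheme of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def mixture_density_mc(r, h, ig_shape=2.0, n_draws=5000):
    """Monte Carlo estimate of a return density under a normal
    variance mixture, p(r | h) = E_V[ N(r; 0, h*V) ], where V is a
    unit-mean inverse-Gaussian mixing variable.  V itself, with known
    mean 1, serves as a control variate to reduce the variance of the
    simulated likelihood contribution."""
    v = rng.wald(1.0, ig_shape, size=n_draws)
    f = np.exp(-r ** 2 / (2.0 * h * v)) / np.sqrt(2.0 * np.pi * h * v)
    c = v - 1.0                                   # control variate, E[c] = 0
    b = np.cov(f, c)[0, 1] / np.var(c, ddof=1)    # optimal coefficient
    return np.mean(f - b * c)
```

Multiplying (or summing the logs of) such per-observation estimates over the sample gives a simulated likelihood that can be maximised numerically; the control variate adjustment leaves the estimate unbiased while shrinking its Monte Carlo variance.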

Identifier: oai:union.ndltd.org:netd.ac.za/oai:union.ndltd.org:nwu/oai:dspace.nwu.ac.za:10394/1019
Date: January 2006
Creators: Griebenow, Gideon
Publisher: North-West University
Source Sets: South African National ETD Portal
Detected Language: English
Type: Thesis