91.
Markovian Approaches to Joint-life Mortality with Applications in Risk Management. Ji, Min, 28 July 2011.
The combined survival status of insured lives is a critical consideration when pricing and reserving insurance products written on more than one life. Our preliminary experience examination of bivariate annuity data from a large Canadian insurance company shows that an individual's relative risk of mortality increases after the loss of his or her spouse, and that the increase is especially dramatic shortly after bereavement. This preliminary result is supported by empirical studies over the past 50 years, which suggest dependence between the lifetimes of a husband and wife.

The dependence between a married couple can be significant in the risk management of joint-life policies. This dissertation progressively explores Markovian models for the pricing and risk management of joint-life policies, illuminating their advantages in modeling dependent joint time-until-death (or other exit-time) random variables. It argues that Markovian models for joint-life dependence are flexible, transparent, and easily extended.

Multiple state models have been widely used in historical data analysis, particularly in modeling failures that exhibit event-related dependence. This dissertation introduces a "common shock" factor into a standard Markov joint-life mortality model, and then extends it to a semi-Markov model to capture the decaying effect of the "broken heart" factor. The proposed models transparently and intuitively measure the extent of three types of dependence: instantaneous dependence, the short-term impact of bereavement, and the long-term association between lifetimes. Some copula-based dependence measures, such as upper tail dependence, can also be derived from the Markovian approaches.
Very often, death is not the only mode of decrement. Entry into long-term care and voluntary prepayment, for instance, can affect reverse mortgage terminations. The semi-Markov joint-life model is extended to incorporate more exit modes, to model joint-life reverse mortgage termination speed. The event-triggered dependence between a husband and wife is modeled. For example, one spouse's death increases the survivor's inclination to move close to kin. We apply the proposed model specifically to develop the valuation formulas for roll-up mortgages in the UK and Home Equity Conversion Mortgages in the US. We test the significance of each termination mode and then use the model to investigate the mortgage insurance premiums levied on Home Equity Conversion Mortgage borrowers.
Finally, this thesis extends the semi-Markov joint-life mortality model to allow stochastic transition intensities, in order to model joint-life longevity risk in last-survivor annuities. We propose a natural extension of Gompertz's law with correlated stochastic dynamics for its two parameters, and incorporate it into the semi-Markov joint-life mortality model. Based on this preliminary joint-life longevity model, we examine the impact of mortality improvement on the cost of a last-survivor annuity, and investigate the market prices of longevity risk in last-survivor annuities using risk-neutral pricing theory.
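As a rough illustration of the common-shock structure described above, the sketch below builds a four-state generator (both alive, one survivor, both dead) with a hypothetical common-shock intensity and a hypothetical "broken heart" multiplier, and computes joint survival probabilities by matrix exponentiation. All intensities are constant placeholders chosen for illustration; the dissertation's models use age-dependent and duration-dependent (semi-Markov) intensities.

```python
# A minimal sketch of a common-shock Markov joint-life model.
# States: 0 = both alive, 1 = only wife alive, 2 = only husband alive, 3 = both dead.
# All intensities are hypothetical constants, not the thesis calibration.
import numpy as np
from scipy.linalg import expm

mu_h, mu_w = 0.02, 0.015   # husband / wife baseline mortality (hypothetical)
bereaved_mult = 2.0        # "broken heart" multiplier after spousal death (hypothetical)
lam_shock = 0.001          # common-shock intensity: both die together (hypothetical)

Q = np.zeros((4, 4))
Q[0, 1] = mu_h                       # husband dies first
Q[0, 2] = mu_w                       # wife dies first
Q[0, 3] = lam_shock                  # common shock kills both
Q[1, 3] = bereaved_mult * mu_w       # widowed wife, elevated mortality
Q[2, 3] = bereaved_mult * mu_h       # widower, elevated mortality
np.fill_diagonal(Q, -Q.sum(axis=1))  # rows of a generator sum to zero

t = 10.0
P = expm(Q * t)                      # transition probabilities over t years
print("P(both alive at t):   ", P[0, 0])
print("P(at least one alive):", 1 - P[0, 3])
```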
92.
Lognormal Mixture Model for Option Pricing with Applications to Exotic Options. Fang, Mingyu, January 2012.
The Black-Scholes option pricing model has several well-recognized deficiencies, one of which is its assumption of a constant, time-homogeneous stock return volatility. The implied volatility smile has been studied by subsequent researchers, and various models have been developed in an attempt to reproduce this phenomenon from within the models. However, few of these models yield closed-form pricing formulas that are easy to implement in practice. In this thesis, we study a mixture lognormal (MLN) model for European option pricing, which assumes that future stock prices are conditionally described by a mixture of lognormal distributions. The ability of mixture models to generate volatility smiles, and to deliver pricing improvements over the traditional Black-Scholes framework, has been much researched under multi-component mixtures for many derivatives and for high-volatility individual stock options. In this thesis, we investigate the performance of the model under the simplest two-component mixture, in a market characterized by relative tranquillity, over a relatively stable period for broad-based index options. A careful interpretation is given to the model and to the results obtained in the thesis; this differentiates our study from many previous studies on the subject.

Throughout the thesis, we establish the unique advantage of the MLN model: closed-form option pricing formulas equal to the weighted mixture of Black-Scholes option prices. We also propose a robust calibration methodology to fit the model to market data. Extreme market states, in particular the so-called crash-o-phobia effect, are shown to be well captured by the calibrated model, albeit with small pricing improvements over a relatively stable period of the index option market. As a major contribution of this thesis, we extend the MLN model to price exotic options, including binary, Asian, and barrier options. Closed-form formulas are derived for binary and continuously monitored barrier options, and simulation-based pricing techniques are proposed for Asian and discretely monitored barrier options. Lastly, comparative results are analysed for various strike-maturity combinations, which provides insight into the formulation of hedging and risk management strategies.
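A minimal sketch of the closed form referred to above: under a two-component mixture in which both components share the risk-neutral drift (one convenient identifying restriction that keeps the mixture arbitrage-free), the MLN call price is simply the probability-weighted average of Black-Scholes prices, one per component volatility. All parameter values below are hypothetical.

```python
# Two-component mixture-lognormal (MLN) call price as a weighted average of
# Black-Scholes prices. Parameters are hypothetical illustration values.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    """Standard Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def mln_call(S, K, r, T, weights, sigmas):
    """MLN price: each component shares the risk-neutral drift, so the
    mixture price is the weighted sum of component Black-Scholes prices."""
    return sum(w * bs_call(S, K, r, sig, T) for w, sig in zip(weights, sigmas))

price = mln_call(S=100, K=100, r=0.03, T=0.5,
                 weights=[0.7, 0.3], sigmas=[0.15, 0.35])
print(price)
```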
93.
Aspects of generalized additive models and their application in actuarial science. Amod, Farhaad, 16 September 2015.
M.Sc. / Please refer to the full text to view the abstract.
94.
Estimating the risks in defined benefit pension funds under the constraints of PF117. Mahmood, Ra'ees, January 2017.
With the issuing of Pension Funds Circular PF117 in South Africa in 2004, regulation required valuation assumptions for defined benefit pension funds to be on a best-estimate basis. Allowance for prudence was to be made through explicit contingency reserves, in order to increase reporting transparency. These reserves for prudence, however, were not permitted to put the fund into deficit (the no-deficit clause). Analysis is conducted to understand the risk that PF117 poses to pension fund sponsors and members under two key measures: contribution rate risk and solvency risk. A stochastic model of a typical South African defined benefit fund is constructed, with simulations run to determine the impact of the PF117 requirements. Findings show that a best-estimate funding basis, coupled with the no-deficit clause, results in significant risk under both the contribution rate and solvency risk measures, particularly in the short term. To mitigate these risks, alternative ways of introducing conservatism into the funding basis are required; possible options include incorporating margins into investment-return assumptions or removing the no-deficit clause.
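The flavour of such an analysis can be sketched with a simple stochastic projection: simulate asset returns, roll the funding level forward against growing liabilities, and estimate the probability of deficit. Every figure below (return, volatility, liability growth, contribution rate) is a hypothetical placeholder rather than the calibration used in the thesis.

```python
# A minimal sketch of a stochastic funding-level projection used to
# quantify solvency risk. All assumptions are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_years = 10_000, 10
mu, sigma = 0.08, 0.15       # asset return assumptions (hypothetical)
liab_growth = 0.07           # liability roll-forward rate (hypothetical)
contrib = 0.02               # contributions as a fraction of liabilities

funding = np.full(n_sims, 1.00)   # funding level = assets / liabilities
for _ in range(n_years):
    ret = rng.normal(mu, sigma, n_sims)
    funding = funding * (1 + ret) / (1 + liab_growth) + contrib

print("P(deficit after 10 years):   ", np.mean(funding < 1.0))
print("5th percentile funding level:", np.quantile(funding, 0.05))
```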
95.
Bayesian approaches of Markov models embedded in unbalanced panel data. Muller, Christoffel Joseph Brand, December 2012.
Thesis (PhD)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Multi-state models are used in this dissertation to model panel data, also known as longitudinal or cross-sectional time-series data. These are data sets in which units are observed at two or more points in time. Such models have been used extensively in medical studies where the disease states of patients are recorded over time.

A theoretical overview of current multi-state Markov models applied to panel data is presented, and based on this theory a simulation procedure is developed to generate panel data sets for given Markov models. Using this procedure, a simulation study is undertaken to investigate the properties of the standard likelihood approach when fitting Markov models and to assess its shortcomings. One of the main shortcomings highlighted by the simulation study is the unstable estimates obtained by the standard likelihood models, especially when fitted to small data sets.

A Bayesian approach is introduced to develop multi-state models that can overcome these unstable estimates by incorporating prior knowledge into the modelling process. Two Bayesian techniques are developed and presented, and their properties are assessed through extensive simulation studies. Firstly, Bayesian multi-state models are developed by specifying prior distributions for the transition rates, constructing a likelihood using standard Markov theory, and then obtaining the posterior distributions of the transition rates. A selected few priors are used in these models. Secondly, Bayesian multi-state imputation techniques are presented that make use of suitable prior information to impute missing observations in the panel data sets. Once imputed, standard likelihood-based Markov models are fitted to the imputed data sets to estimate the transition rates. Two different Bayesian imputation techniques are presented. The first approach makes use of the Dirichlet distribution and imputes the unknown states at all time points with missing observations. The second approach uses a Dirichlet process to estimate the time at which a transition occurred between two known observations, and a state is then imputed at that estimated transition time. The simulation studies show that these Bayesian methods result in more stable estimates, even when only small samples are available.
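The panel-data simulation step described in the abstract can be sketched as follows, for a hypothetical three-state illness-death model: full continuous-time Markov paths are generated, but only the states at fixed panel inspection times are recorded, mimicking panel data in which the transition times themselves are unobserved. The generator and inspection times are illustrative assumptions.

```python
# Simulate panel observations from a continuous-time Markov chain:
# jump times are simulated exactly, but only states at fixed inspection
# times are kept. Rates are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
# 3-state illness-death generator: 0 = healthy, 1 = ill, 2 = dead (absorbing)
Q = np.array([[-0.15,  0.10, 0.05],
              [ 0.05, -0.25, 0.20],
              [ 0.00,  0.00, 0.00]])
panel_times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

def simulate_panel(Q, times):
    state, t, obs = 0, 0.0, []
    for obs_time in times:
        while True:
            rate = -Q[state, state]
            if rate == 0:                  # absorbing state reached
                break
            wait = rng.exponential(1 / rate)
            if t + wait > obs_time:        # no jump before this inspection
                break
            t += wait
            probs = np.clip(Q[state], 0, None) / rate   # jump distribution
            state = rng.choice(len(Q), p=probs)
        t = obs_time   # memorylessness lets us restart the clock here
        obs.append(state)
    return obs

panel = [simulate_panel(Q, panel_times) for _ in range(5)]
for row in panel:
    print(row)   # one unit's observed states at the panel times
```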
96.
Exploratory and inferential multivariate statistical techniques for multidimensional count and binary data with applications in R. Ntushelo, Nombasa Sheroline, December 2011.
Thesis (MComm)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: The analysis of multidimensional (multivariate) data sets is a very important area of research in applied statistics, and over the decades many techniques have been developed to deal with such data sets. The multivariate techniques that have been developed include inferential analysis, regression analysis, discriminant analysis, cluster analysis and many more exploratory methods. Most of these methods deal with cases where the data contain numerical variables. However, there are powerful methods in the literature that also deal with multidimensional binary and count data.

The primary purpose of this thesis is to discuss the exploratory and inferential techniques that can be used for binary and count data. Chapter 2 details correspondence analysis and canonical correspondence analysis, methods used to analyze data in contingency tables. Chapter 3 is devoted to cluster analysis; it explains four well-known clustering methods and discusses the distance (dissimilarity) measures available in the literature for binary and count data. Chapter 4 explains metric and non-metric multidimensional scaling, which can be used to represent binary or count data in a lower-dimensional Euclidean space. Chapter 5 presents a method for inferential analysis called the analysis of distance. This method uses reasoning similar to that of the analysis of variance, but bases inference on a pseudo F-statistic whose p-value is obtained by permuting the data. Chapter 6 applies the above methods to two real-world data sets, the Biolog data and the Barents Fish data.

The secondary purpose of the thesis is to demonstrate how these techniques can be performed in the software package R. Several R packages and functions are discussed throughout the thesis, and their usage is demonstrated with appropriate examples. Attention is also given to the interpretation of the output and graphics. The thesis ends with general conclusions and ideas for further research.
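The analysis-of-distance idea lends itself to a short sketch: between- and within-group sums of squares are computed directly from a matrix of squared distances (via Gower's identity), combined into a pseudo F-statistic, and referred to a permutation distribution. The thesis works in R; the illustration below uses Python with toy binary data and a simple mismatch distance, both assumptions for demonstration only.

```python
# Analysis of distance: pseudo-F from a squared-distance matrix, with a
# permutation p-value. Toy binary data stand in for real observations.
import numpy as np

def pseudo_f(D, labels):
    """Pseudo-F from squared distances D (Gower's identity gives the
    sums of squares directly from pairwise distances)."""
    n = len(labels)
    total_ss = D.sum() / (2 * n)
    within_ss = 0.0
    groups = np.unique(labels)
    for g in groups:
        idx = np.where(labels == g)[0]
        within_ss += D[np.ix_(idx, idx)].sum() / (2 * len(idx))
    between_ss = total_ss - within_ss
    df_b, df_w = len(groups) - 1, n - len(groups)
    return (between_ss / df_b) / (within_ss / df_w)

def permutation_test(D, labels, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    observed = pseudo_f(D, labels)
    count = sum(pseudo_f(D, rng.permutation(labels)) >= observed
                for _ in range(n_perm))
    return observed, (count + 1) / (n_perm + 1)   # permutation p-value

# Toy binary profiles; squared mismatch proportion as the distance.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(20, 6))
labels = np.repeat([0, 1], 10)
D = ((X[:, None, :] != X[None, :, :]).mean(axis=2)) ** 2
print(permutation_test(D, labels))
```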
97.
Aspects of copulas and goodness-of-fit. Kpanzou, Tchilabalo Abozou, December 2008.
Thesis (MComm (Statistics and Actuarial Science))--Stellenbosch University, 2008. / The goodness-of-fit of a statistical model describes how well it fits a set of observations. Measures of goodness-of-fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, for example to test for normality, to test whether two samples are drawn from identical distributions, or to test whether outcome frequencies follow a specified distribution. Goodness-of-fit for copulas is a special case of the more general problem of testing multivariate models, but is complicated by the difficulty of specifying marginal distributions.

In this thesis, goodness-of-fit test statistics for general distributions and tests for copulas are investigated, but prior to that an understanding of copulas and their properties is developed. Copulas are useful tools for understanding relationships among multivariate variables and for describing the dependence structure between random variables. Several univariate, bivariate and multivariate test statistics are investigated, the emphasis being on tests for normality. Among goodness-of-fit tests for copulas, tests based on the probability integral transform and Rosenblatt's transformation, as well as some dimension-reduction techniques, are considered. Bootstrap procedures are also described. Simulation studies are conducted, first to compare the power of four different test statistics in rejecting the null hypothesis of the Clayton copula under the alternative of the Gumbel-Hougaard copula, and then to compare the power in rejecting the null hypothesis of the Gumbel-Hougaard copula under the alternative of the Clayton copula. The described techniques are applied to a practical data set.
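As a small illustration of the simulation machinery such power studies rely on, the sketch below draws from a Clayton copula by the Marshall-Olkin (gamma frailty) algorithm and checks the empirical Kendall's tau against its theoretical value, the kind of moment comparison underlying simple goodness-of-fit diagnostics. The parameter theta is a hypothetical choice.

```python
# Marshall-Olkin sampling from a Clayton copula, with the moment check
# tau = theta / (theta + 2). theta is a hypothetical choice.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)
theta, n = 2.0, 5000

# Gamma frailty W with shape 1/theta; generator inverse psi(t) = (1+t)^(-1/theta)
W = rng.gamma(shape=1 / theta, scale=1.0, size=n)
E = rng.exponential(size=(n, 2))
U = (1 + E / W[:, None]) ** (-1 / theta)   # Clayton-dependent pairs on [0,1]^2

tau_hat, _ = kendalltau(U[:, 0], U[:, 1])
print("empirical tau:", tau_hat, " theoretical:", theta / (theta + 2))
```

The same frailty construction works for any Archimedean copula whose generator is the Laplace transform of a positive random variable, which is why it is a standard building block in copula simulation studies.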
98.
Calculation aspects of the European Rebalanced Basket Option using Monte Carlo methods. Van der Merwe, Carel Johannes, December 2010.
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: Life insurance and pension funds offer a wide range of products that are invested in a mix of assets. These portfolios (Π), underlying the products, are regularly rebalanced back to predetermined fixed proportions. This is done by selling the better-performing assets and buying the worse-performing assets. Life insurance or pension fund contracts can offer the client a minimum payout guarantee on the contract by charging an extra premium (α). This problem can be recast as the pricing of a put option with underlying Π. The option forms a liability for the insurance firm and therefore also needs to be managed in terms of its risks, which can be done by studying the option's sensitivities. In this thesis the premium and sensitivities of this put option are calculated using different Monte Carlo methods, in order to find the most efficient method.

Using general Monte Carlo methods, a simplistic pricing method is found, which is then refined by applying mathematical techniques so that the computational time is reduced significantly. After considering antithetic variables, control variates and Latin hypercube sampling as variance-reduction techniques, option prices used as control variates prove to reduce the error of the refined method most efficiently. This is improved further by considering different quasi-Monte Carlo techniques, namely Halton, Faure, standard Sobol' and other randomised Sobol' sequences; Owen and Faure-Tezuka type randomised Sobol' sequences improve the convergence of the estimator most efficiently. Furthermore, the better of pathwise derivative estimates and finite difference approximations for estimating the sensitivities of this option is identified.

Therefore, by using the refined pricing method with option prices as control variates, together with Owen and Faure-Tezuka type randomised Sobol' sequences as a quasi-Monte Carlo method, more efficient methods to price this option (compared to simplistic Monte Carlo methods) are obtained. In addition, more efficient sensitivity estimators are obtained to help manage risks.
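A stripped-down sketch of the control-variate idea applied to a rebalanced-portfolio put: two lognormal assets are rebalanced monthly to fixed weights, and a vanilla Black-Scholes put on one asset, whose price is known in closed form, serves as the control variate. The market parameters, weights, and the independence of the two assets are all simplifying assumptions, not the thesis's setup.

```python
# Control-variate Monte Carlo for a put on a monthly-rebalanced two-asset
# portfolio, using a vanilla Black-Scholes put as the control variate.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
S0, K, r, T, n_steps, n_paths = 100.0, 100.0, 0.04, 1.0, 12, 50_000
sig = np.array([0.20, 0.30])   # hypothetical volatilities
w = np.array([0.5, 0.5])       # constant-mix rebalancing weights
dt = T / n_steps

def bs_put(S, K, r, sigma, T):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

# Risk-neutral gross returns per month (independent assets for simplicity)
Z = rng.standard_normal((n_paths, n_steps, 2))
growth = np.exp((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * Z)

S1 = S0 * growth[:, :, 0].prod(axis=1)   # terminal price of asset 1
port = np.full(n_paths, S0)
for t in range(n_steps):
    port *= growth[:, t, :] @ w          # constant-mix rebalance each month

disc = np.exp(-r * T)
Y = disc * np.maximum(K - port, 0)       # target payoff (portfolio put)
X = disc * np.maximum(K - S1, 0)         # control payoff with known mean
C = np.cov(Y, X)
b = C[0, 1] / C[1, 1]                    # optimal control-variate coefficient
cv_est = Y.mean() - b * (X.mean() - bs_put(S0, K, r, sig[0], T))
print("plain MC:", Y.mean(), " control-variate:", cv_est)
```

Replacing the pseudo-random draws with randomised Sobol' points (for example via scipy.stats.qmc.Sobol) gives the quasi-Monte Carlo variant compared in the thesis.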
99.
Non-parametric regression modelling of in situ fCO2 in the Southern Ocean. Pretorius, Wesley Byron, December 2012.
Thesis (MComm)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: The Southern Ocean is a complex system in which the relationship between CO2 concentrations and their drivers varies intra- and inter-annually. Due to the lack of readily available in situ data in the Southern Ocean, a model approach was required that could predict the CO2 concentration proxy variable, fCO2. This must be done using predictor variables available via remote measurements to ensure the usefulness of the model in the future. These predictor variables were sea surface temperature, log-transformed chlorophyll-a concentration, mixed-layer depth and, at a later stage, altimetry. Initial exploratory analysis indicated that a non-parametric approach to the model should be taken. A parametric multiple linear regression model was developed for comparison with previous studies in the North Atlantic Ocean, as well as with the results of the non-parametric approach. A non-parametric kernel regression model was then used to predict fCO2, and finally a combination of the parametric and non-parametric regression models, referred to as the mixed regression model, was developed. The results indicated, as expected from the exploratory analyses, that the non-parametric approach produced more accurate estimates on an independent test data set. These more accurate estimates, however, were coupled with zero estimates, caused by the curse of dimensionality. It was also found that the inclusion of salinity (not available remotely) improved the model, and altimetry was therefore chosen in an attempt to capture this effect in the model. The mixed model displayed reduced errors and removed the zero estimates, thereby reducing the variance of the error rates. The results indicated that the mixed model is the best approach for predicting fCO2 in the Southern Ocean and that the inclusion of altimetry did improve the prediction accuracy.
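The non-parametric building block can be sketched as a Nadaraya-Watson kernel regression with a Gaussian product kernel, here on synthetic stand-ins for the predictors (sea surface temperature, log-chlorophyll, mixed-layer depth). When no training point lies near a query point, the kernel weights vanish and the estimator returns the zero estimates mentioned above, one symptom of the curse of dimensionality.

```python
# Nadaraya-Watson kernel regression with a Gaussian product kernel.
# Synthetic data stand in for the oceanographic predictors.
import numpy as np

def nw_predict(X_train, y_train, X_query, bandwidth):
    """Kernel-weighted local average; falls back to 0 when all weights
    underflow (the 'zero estimate' phenomenon)."""
    preds = np.empty(len(X_query))
    for i, x in enumerate(X_query):
        d2 = ((X_train - x) / bandwidth) ** 2
        w = np.exp(-0.5 * d2.sum(axis=1))
        s = w.sum()
        preds[i] = (w @ y_train) / s if s > 0 else 0.0
    return preds

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(500, 3))   # three predictor variables
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)
Xq = rng.uniform(-1, 1, size=(5, 3))
print(nw_predict(X, y, Xq, bandwidth=0.3))
```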
100.
The effect of liquidity on stock returns on the JSE. Reisinger, Astrid Kim, December 2012.
Thesis (MComm)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: This thesis examines the effect of liquidity on excess stock returns on the Johannesburg Stock Exchange (JSE) over the period 2003 to 2011. It builds on the findings of previous studies, which found size, value and momentum effects to be significant in explaining market anomalies, by adding a further explanatory factor, namely liquidity. A standard CAPM as well as a momentum-augmented Fama-French (1993: 3) model are employed in regression analyses to examine the effect of the four variables on excess stock returns. Results suggested that the log of the stock's market value best captured the size effect, the earnings yield best captured the value effect, and the previous three months' returns best captured the momentum effect. Five liquidity proxies are used: the bid-ask spread first proposed by Amihud (1986: 223), turnover, the price-impact measure of Amihud (2002: 31), and two zero-return measures proposed by Lesmond et al. (1999: 1113). Although prior studies found liquidity to be an influential factor, this thesis finds the opposite, and the finding remains robust irrespective of the liquidity measure used. While size, value and momentum are found to be significant to a certain extent in explaining excess stock returns over the period, liquidity is not. This is a surprising result, given that the JSE is seen as an emerging market, and emerging markets are generally regarded as illiquid. The JSE is, moreover, a highly concentrated and therefore skewed market dominated by only a handful of shares, so liquidity would be expected to be of utmost importance. The finding that liquidity is nevertheless not a priced factor on this market is therefore important and requires further analysis to determine why this is the case. In addition, significant non-zero intercepts remained, indicating that risk factors are still missing from the model.
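A sketch of the kind of factor regression described above, estimated by ordinary least squares on synthetic data in place of JSE returns; the liquidity factor is given a true coefficient of zero to echo the thesis's finding. The factor construction here is only schematic; real inputs would be size, value, momentum, and a liquidity proxy such as turnover or the Amihud measure.

```python
# Momentum-augmented Fama-French regression with a liquidity factor,
# estimated by OLS on synthetic data (liquidity's true coefficient is zero).
import numpy as np

rng = np.random.default_rng(5)
n = 480   # months of data
size, value, mom, liq = rng.standard_normal((4, n))
excess_ret = 0.4 * size + 0.3 * value + 0.2 * mom + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), size, value, mom, liq])
beta, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
resid = excess_ret - X @ beta
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=X.shape[1]))
print(dict(zip(["alpha", "size", "value", "mom", "liq"], beta.round(3))))
print("t-stats:", (beta / se).round(2))   # liq should be insignificant
```

A significant non-zero alpha in such a fit is the "missing risk factor" symptom the abstract refers to.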