21 |
Inflation modelling for long-term liability driven investments
De Kock, Justin, January 2014 (has links)
Includes bibliographical references. / A regime-switching model allows a process to switch randomly between different regimes, each with its own parameter estimates. This study investigates the use of a two-regime switching model for inflation in South Africa as a means of determining a hedging strategy for the inflation-linked liabilities of a financial institution. Each regime is modelled using an autoregressive process with its own parameters, and the change in regimes is governed by a two-state Markov chain. Once the parameters have been estimated, the predictive validity of the regime-switching process as a model for inflation in South Africa is tested and a hedging strategy is outlined for a set of inflation-linked cash flows. The hedging strategy is to invest in inflation-linked bonds, the number of which is determined through a Rand-per-point methodology applied to the inflation-linked cash flows and inflation-linked bonds. Over the period from January 2008 to June 2013 this hedging strategy was shown to be profitable.
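As a rough illustration of the model structure described above, the sketch below simulates a two-regime autoregressive inflation process whose switches are driven by a two-state Markov chain. All parameter values (transition probabilities, AR coefficients, regime means and volatilities) are illustrative placeholders, not the estimates obtained in the dissertation.

```python
import numpy as np

# Minimal sketch of a two-regime AR(1) inflation model with regime switches
# governed by a two-state Markov chain. All values are illustrative.
rng = np.random.default_rng(0)

P     = np.array([[0.95, 0.05],      # regime transition probabilities
                  [0.10, 0.90]])
phi   = np.array([0.85, 0.60])       # AR(1) coefficient in each regime
mu    = np.array([0.05, 0.10])       # long-run inflation level per regime
sigma = np.array([0.005, 0.020])     # innovation volatility per regime

def simulate_inflation(n_months, x0=0.06, s0=0):
    """Simulate a monthly inflation path under the regime-switching AR(1)."""
    x = np.empty(n_months)
    s = np.empty(n_months, dtype=int)
    x_prev, s_prev = x0, s0
    for t in range(n_months):
        s_t = rng.choice(2, p=P[s_prev])                  # draw the next regime
        x_t = (mu[s_t] + phi[s_t] * (x_prev - mu[s_t])    # mean-reverting AR(1) step
               + sigma[s_t] * rng.standard_normal())
        x[t], s[t] = x_t, s_t
        x_prev, s_prev = x_t, s_t
    return x, s

inflation_path, regimes = simulate_inflation(12 * 10)    # ten years of monthly inflation
print(inflation_path[:6].round(4), regimes[:6])
```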
|
22 |
Stock Option Valuations and Constraint Enforcement Using Neural Networks
Nutt, Frans Ignatius, 12 April 2023 (has links) (PDF)
Stock option valuations have long been studied, options being inherently non-linear financial derivatives. These instruments have a ubiquitous presence in institutional investment practice and offer favourable and unique benefits to an investment portfolio. Neural Networks, on the other hand, have become a more familiar concept in recent times. They are designed to handle complex, non-linear classification and prediction tasks. Using Neural Networks to predict stock option prices has been studied at length by various authors over the last 30 years. These studies have considered their performance relative to closed-form pricing solutions such as the well-known Black-Scholes-Merton model, as well as in real-world settings. The collective conclusion drawn from past literature presents a clear case for their use in finance, albeit with some notable pitfalls, such as the lack of interpretability and the inability to explicitly enforce certain constraints. Constraints that a stock option's value should satisfy, such as upper and lower price bounds and Put-Call parity, have not been considered in many prior studies. This dissertation sets out to study stock option valuations using Neural Networks with techniques to enforce such constraints. First, a functional and appropriately performing Neural Network configuration is derived that outputs European call and put option prices under one model. Thereafter, enforcement of the lower, upper and relative bounds (Put-Call parity) is incorporated into the model. Finally, the Neural Network application is extended to the real-world setting. The performance of the Neural Network model is assessed by means of mean error, as well as percentiles.
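The following sketch shows one way the bound and Put-Call parity constraints mentioned above could be enforced softly, as a penalty added to a network's training loss. The network itself is omitted; `call_hat` and `put_hat` stand in for its outputs, and the functional form and weighting of the penalty are assumptions for illustration rather than the dissertation's actual formulation.

```python
import numpy as np

# Sketch of a soft-constraint penalty that could be added to a neural network's
# training loss so that predicted call and put prices respect no-arbitrage
# bounds and Put-Call parity. The network is omitted: `call_hat` and `put_hat`
# stand in for its outputs, and the weighting `w` is an illustrative choice.
def constraint_penalty(call_hat, put_hat, S, K, r, q, tau, w=1.0):
    """Penalise violations of European option price bounds and Put-Call parity."""
    disc_K = K * np.exp(-r * tau)                 # discounted strike
    disc_S = S * np.exp(-q * tau)                 # dividend-discounted spot

    lower_call = np.maximum(disc_S - disc_K, 0.0) # lower bound for a call
    lower_put  = np.maximum(disc_K - disc_S, 0.0) # lower bound for a put

    violation = (
        np.maximum(lower_call - call_hat, 0.0)    # call below its lower bound
        + np.maximum(call_hat - disc_S, 0.0)      # call above its upper bound
        + np.maximum(lower_put - put_hat, 0.0)    # put below its lower bound
        + np.maximum(put_hat - disc_K, 0.0)       # put above its upper bound
        + np.abs((call_hat - put_hat) - (disc_S - disc_K))  # Put-Call parity gap
    )
    return w * np.mean(violation ** 2)

# Example: total training loss = pricing error + constraint penalty.
call_hat, put_hat = np.array([12.1, 4.9]), np.array([2.3, 8.6])
S, K, r, q, tau = 100.0, np.array([90.0, 105.0]), 0.07, 0.02, 0.5
print(constraint_penalty(call_hat, put_hat, S, K, r, q, tau))
```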
|
23 |
Estimating adult mortality in South Africa using information on the year-of-death of parents from the 2016 Community Survey
Mambwe, Chibwe, 17 November 2022 (links) (PDF)
In developing countries, systems that collect vital statistics are usually inadequate to facilitate the direct estimation of adult mortality. This has necessitated the development of indirect methods such as the orphanhood method. These methods are, however, limited: the single-survey approach produces out-of-date estimates of mortality, and the two-survey approach is affected by the differential reporting of orphanhood between the two surveys. To avoid these limitations, this research considers an extension of the orphanhood approach pioneered by Chackiel and Orellana (1985) to estimate adult mortality using year-of-death data rather than the conventional form of the orphanhood data. Year-of-death data can be used to produce accurate time locations to which estimates of mortality apply and, more importantly, allow a synthetic survey to be created from a single survey, and hence more recent and accurate estimates of mortality to be obtained. The single-survey orphanhood method is applied to survey data to obtain estimates of adult mortality and time location. A variation of the two-survey orphanhood method (Timæus, 1991b) is also applied to the survey data and to the synthetic survey created from year-of-death data in order to derive estimates of adult mortality. In addition, the age range of respondents is extended down to age 0 to include year-of-death data from younger respondents, on the assumption that the underestimation of orphanhood due to the adoption effect is minimal. This is done to investigate whether the estimates derived from the two-survey method can be improved. Further, a cohort survival method that involves the calculation of a survival ratio for each age group at the first survey and the equivalent older age groups at the second survey is applied to investigate the possibility of producing useful estimates of adult mortality based on cohort survival. The level and trend in mortality estimates calculated from the single-survey, two-survey and cohort survival approaches are discussed and compared to the estimates from the Rapid Mortality Surveillance (RMS), which are used as a benchmark for the trend and level of adult mortality in South Africa. The estimates produced using the single-survey method appear too low, while those from the two-survey method appear to be reasonable for the conventional form of the orphanhood data. Extending the two-survey method to include younger respondents produces estimates that are too low, indicating that both the conventional form of the orphanhood data and the year-of-death data suffer from the adoption effect. The cohort survival approach produces reasonable estimates that are consistent with the RMS benchmark for both the conventional form of the orphanhood data and the year-of-death data.
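As a toy illustration of the cohort survival idea described above, the sketch below compares the proportion of respondents with a surviving parent in an age group at a first survey with the same birth cohort, five years older, at a second survey; the ratio gives a crude survival measure for the parents over the interval. The proportions used are invented purely for illustration.

```python
import numpy as np

# Toy sketch of the cohort survival ratio: the same birth cohort is observed in
# two surveys five years apart, and the change in the proportion reporting a
# surviving parent gives a crude parental survival ratio. All numbers invented.
age_groups    = ["15-19", "20-24", "25-29", "30-34"]
prop_alive_s1 = np.array([0.95, 0.91, 0.85, 0.77])   # survey 1, respondents aged x
prop_alive_s2 = np.array([0.90, 0.84, 0.74, 0.62])   # survey 2, respondents aged x+5

# Cohort aged x at survey 1 is aged x+5 at survey 2, so compare shifted entries.
survival_ratios = prop_alive_s2[1:] / prop_alive_s1[:-1]
for grp, ratio in zip(age_groups[:-1], survival_ratios):
    print(f"cohort aged {grp} at survey 1: parental survival ratio {ratio:.3f}")
```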
|
24 |
Expenses of UK life insurers with special reference to 1980-86 data provided by the Association of British Insurers
Kaye, Geraldine Della, January 1991 (has links)
No description available.
|
25 |
Optimal Reinsurance Retentions under Ruin-Related Optimization Criteria
Li, Zhi, 19 November 2008 (has links)
Quota-share and stop-loss/excess-of-loss reinsurances are two
important reinsurance strategies. An important question, both in
theory and in application, is to determine optimal retentions for
these reinsurances. In this thesis, we study the optimal retentions
of quota-share and stop-loss/excess-of-loss reinsurances under
ruin-related optimization criteria.
We attempt to balance the interests of a ceding company and a
reinsurance company and employ an optimization criterion that
considers the interests of both the cedent and the reinsurer. We also
examine the influence of interest, dividend, commission, expense,
and diffusion on reinsurance retentions.
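The sketch below illustrates, with invented loss data and retention levels, how the two reinsurance forms discussed in the thesis split an aggregate loss between cedent and reinsurer; it shows only the retention mechanics, not the thesis's optimisation procedure.

```python
import numpy as np

# Sketch of the retention mechanics only: how quota-share and stop-loss
# reinsurance split an aggregate loss between cedent and reinsurer.
# Loss distribution and retention levels are invented for illustration.
def quota_share(x, a):
    """Cedent retains the proportion a of every loss; the reinsurer takes the rest."""
    retained = a * x
    return retained, x - retained

def stop_loss(x, d):
    """Cedent retains losses up to the retention d; the reinsurer pays the excess."""
    retained = np.minimum(x, d)
    return retained, x - retained

rng = np.random.default_rng(1)
losses = rng.gamma(shape=2.0, scale=50.0, size=100_000)   # toy aggregate losses

for name, (retained, ceded) in {
    "quota-share, a = 0.8": quota_share(losses, 0.8),
    "stop-loss, d = 150":   stop_loss(losses, 150.0),
}.items():
    print(f"{name}: retained mean {retained.mean():.1f}, ceded mean {ceded.mean():.1f}")
```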
|
27 |
HIERARCHICAL BAYESIAN MODELLING FOR THE ANALYSIS OF THE LACTATION OF DAIRY ANIMALS
Lombaard, Carolina Susanna, 03 November 2006
This thesis was written with the aim of modelling the lactation process in dairy cows and
goats by applying a hierarchical Bayesian approach. Information on cofactors that could
possibly affect lactation is included in the model through a novel approach using covariates.
Posterior distributions of quantities of interest are obtained by means of the Markov chain
Monte Carlo methods. Prediction of future lactation cycle(s) is also performed.
In chapter one lactation is defined, its characteristics considered, the factors that could
possibly influence lactation mentioned, and the reasons for modelling lactation explained.
Chapter two provides a historical perspective to lactation models, considers typical lactation
curve shapes and curves fitted to the lactation composition traits fat and protein of milk.
Attention is also paid to persistency of lactation.
Chapter three considers alternative methods of obtaining total yield and producing Standard
Lactation Curves (SLACs). Attention is paid to methods used in fitting lactation curves and
the assumptions about the errors.
In chapter four the generalised Bayesian model approach used to simultaneously model more
than one lactation trait, while also incorporating information on cofactors that could possibly
influence lactation, is developed. Special attention is paid not only to the model for complete
data, but also how modelling is adjusted to make provision for cases where not all lactation
cycles have been observed for all animals, also referred to as incomplete data. The use of the
Gibbs sampler and the Metropolis-Hastings algorithm in determining marginal posterior
distributions of model parameters and of quantities that are functions of such parameters is also
discussed. Prediction of future lactation cycles using the model is also considered.
In chapter five the Bayesian approach together with the Wood model, applied to 4564
lactation cycles of 1141 Jersey cows, is used to illustrate the approach to modelling and
prediction of milk yield, percentage of fat and percentage of protein in milk composition in
the case of complete data. The incorporation of cofactor information through the use of the
covariate matrix is also considered in greater detail. The results from the Gibbs sampler are
evaluated and convergence thereof investigated. Attention is also paid to the expected
lactation curve characteristics as defined by Wood, as well as obtaining the expected lactation
curve of one of the levels of a cofactor when the influence of the other cofactors on the
lactation curve has been eliminated.
Chapter six considers the use of the Bayesian approach together with the general exponential
and 4-parameter Morant model, as well as an adaptation of a model suggested by Wilmink, in
modelling and predicting milk yield, fat content and protein content of milk for the Jersey
data.
In chapter seven a diagnostic comparison by means of Bayes factors of the results from the
four models in the preceding two chapters, when used together with the Bayesian approach, is
performed. As a result the adapted form of the Wilmink model fared best of the models
considered!
Chapter eight illustrates the use of the Bayesian approach, together with the four lactation
models considered in this study, to predict the lactation traits for animals similar to, but not
contained in the data used to develop the respective models.
In chapter nine the Bayesian approach together with the Wood model, applied to 755 lactation
cycles of 493 Saanen does collected during either or both of two consecutive years, is used to
illustrate the approach to modelling and predicting milk yield, percentage of fat and
percentage of protein in milk in the case of incomplete data.
Chapter ten provides a summary of the results and a perspective of the contribution of this
research to lactation modelling.
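For reference, a minimal sketch of the Wood lactation curve used in chapters five and nine, y(t) = a * t^b * exp(-c * t), and the curve characteristics it implies (day of peak yield b/c, peak yield, cumulative yield) is given below. The parameter values are invented for illustration and are not the hierarchical estimates obtained in the thesis.

```python
import numpy as np

# Sketch of the Wood lactation curve y(t) = a * t**b * exp(-c * t) and the
# curve characteristics it implies. Parameter values are invented; in the
# thesis they are estimated within the hierarchical Bayesian model.
def wood_curve(t, a, b, c):
    """Expected daily yield at day-in-milk t under the Wood model."""
    return a * t**b * np.exp(-c * t)

a, b, c = 13.0, 0.20, 0.004           # toy parameters for a single animal
t = np.arange(1, 306)                 # a 305-day lactation on a daily grid

daily_yield = wood_curve(t, a, b, c)
peak_day    = b / c                   # day of peak yield under the Wood model
peak_yield  = wood_curve(peak_day, a, b, c)
total_yield = daily_yield.sum()       # crude 305-day total on the daily grid

print(f"peak at day {peak_day:.0f}, peak yield {peak_yield:.1f}, 305-day total {total_yield:.0f}")
```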
|
28 |
ON THE USE OF EXTREME VALUE THEORY IN ENERGY MARKETS
Micali, V, 16 November 2007
The intent of the thesis is to provide a set of statistical methodologies in the field of Extreme Value Theory (EVT), with a particular application to energy losses, in Gigawatt-hours (GWh), experienced by electrical generating units (GUs). Due to the complexity of the energy market, the thesis focuses on the volume loss only and does not expand into the price, cost or mixes thereof (although the strong relationship between volume and price is acknowledged; some initial work on the energy price [SMP] is provided in Appendix B). Hence, occurrences of excessive unexpected energy losses incurred by these GUs formulate the problem. Exploratory Data Analysis (EDA) structures the data and attempts to give an indication of the categorisation of the excessive losses. The size of the GU failure is also investigated from an aggregated perspective to relate it to the Generation System; here the effect of concomitant variables (such as the Load Factor imposed by the market) is emphasised. Cluster Analysis (2-Way Joining) provided an initial categorising technique. EDA highlights the lack of a scientific approach to answering the question of when a large loss is sufficiently large that it affects the System. The usage of EVT shows that the GWh losses tend to behave as a variable in the Fréchet domain of attraction. The Block Maxima (BM) and Peak-Over-Threshold (POT) methods, the latter in both semi-parametric and full-parametric form, are investigated. The POT methodologies are both applicable. Of particular interest are the Q-Q plot results for the semi-parametric POT method, which fit the data satisfactorily (pp 55-56). The Generalised Pareto Distribution (GPD) models the tail of the GWh losses above a threshold well under the full-parametric POT method. Different methodologies were explored in determining the parameters of the GPD. The method of 3-LM (linear combinations of Probability Weighted Moments) is used to arrive at initial estimates of the GPD parameters. A GPD is finally parameterised for the GWh losses above 766 GWh. The Bayesian philosophy is also utilised in this thesis, as it provides a predictive distribution of high quantiles of the large GWh losses. Results are found in this part of the thesis insofar as it utilises the ratio of the Mean Excess Function (the expectation of a loss above a certain threshold) to the probability of exceeding the threshold as an indicator, and establishes the minimum of this ratio. The technique was developed for the GPD by using the Fisher Information Matrix (FIM) and the Delta Method. Prediction of high quantiles was done by using Markov Chain Monte Carlo (MCMC) and eliciting the GPD Maximal Data Information (MDI) prior. The last EVT methodology investigated in the thesis is the one that uses the Dirichlet process and the method of Negative Differential Entropy (NDE). The thesis also opened new areas of pertinent research.
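A minimal sketch of the peaks-over-threshold step described above follows: it fits a Generalised Pareto Distribution to exceedances over a threshold and reads off a high quantile. It uses synthetic losses and scipy's maximum-likelihood fit rather than the 3-LM (Probability Weighted Moments) initial estimates and the Bayesian machinery of the thesis, so it only illustrates the GPD tail-fitting idea.

```python
import numpy as np
from scipy.stats import genpareto

# Sketch of the peaks-over-threshold step: fit a Generalised Pareto Distribution
# to exceedances of (synthetic) GWh losses over a threshold and read off a high
# quantile. The data, threshold choice and the use of scipy's maximum-likelihood
# fit are assumptions for illustration; the thesis uses 3-LM initial estimates
# and a threshold of 766 GWh on real data.
rng = np.random.default_rng(2)
losses = rng.pareto(a=2.5, size=5_000) * 300.0        # toy heavy-tailed losses (GWh)

u = np.quantile(losses, 0.95)                         # threshold: a high empirical quantile
excesses = losses[losses > u] - u                     # exceedances above the threshold

shape, _, scale = genpareto.fit(excesses, floc=0.0)   # fit the GPD to the excesses

# Return level: the loss exceeded with probability p, combining the GPD tail
# with the empirical probability of exceeding the threshold.
p_u = (losses > u).mean()
p = 0.001
return_level = u + genpareto.ppf(1.0 - p / p_u, shape, loc=0.0, scale=scale)
print(f"threshold {u:.0f} GWh, GPD shape {shape:.2f}, return level {return_level:.0f} GWh")
```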
|
29 |
BAYESIAN INFERENCE FOR LINEAR AND NONLINEAR FUNCTIONS OF POISSON AND BINOMIAL RATES
Raubenheimer, Lizanne, 16 August 2012
This thesis focuses on objective Bayesian statistics, by evaluating a number of noninformative priors.
Choosing the prior distribution is the key to Bayesian inference. The probability matching prior for
the product of different powers of k binomial parameters is derived in Chapter 2. In the case of two
and three independently distributed binomial variables, the Jeffreys, uniform and probability matching
priors for the product of the parameters are compared. This research is an extension of the work by
Kim (2006), who derived the probability matching prior for the product of k independent Poisson
rates. In Chapter 3 we derive the probability matching prior for a linear combination of binomial
parameters. The construction of Bayesian credible intervals for the difference of two independent
binomial parameters is discussed.
The probability matching prior for the product of different powers of k Poisson rates is derived in
Chapter 4. This is achieved by using the differential equation procedure of Datta & Ghosh (1995). The
reference prior for the ratio of two Poisson rates is also obtained. Simulation studies are done to compare
different methods for constructing Bayesian credible intervals. It seems that if one is interested
in making Bayesian inference on the product of different powers of k Poisson rates, the probability
matching prior is the best. On the other hand, if we want to obtain point estimates, credibility intervals
or do hypothesis testing for the ratio of two Poisson rates, the uniform prior should be used.
In Chapter 5 the probability matching prior for a linear contrast of Poisson parameters is derived,
this prior is extended in such a way that it is also the probability matching prior for the average of
Poisson parameters. This research is an extension of the work done by Stamey & Hamilton (2006). A
comparison is made between the confidence intervals obtained by Stamey & Hamilton (2006) and the
intervals derived by us when using the Jeffreys and probability matching priors. A weighted Monte
Carlo method is used for the computation of the Bayesian credible intervals, in the case of the probability
matching prior. In the last section of this chapter hypothesis testing for two means is considered.
The power and size of the test, using Bayesian methods, are compared to tests used by Krishnamoorthy
& Thomson (2004). For the Bayesian methods the Jeffreys prior, probability matching prior and two
other priors are used.
Bayesian estimation for binomial rates from pooled samples is considered in Chapter 6, where
the Jeffreys prior is used. Bayesian credibility intervals for a single proportion and the difference of
two binomial proportions estimated from pooled samples are considered. The results are compared
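As a small illustration of the kind of simulation-based interval construction discussed above, the sketch below computes a Bayesian credible interval for the ratio of two Poisson rates by sampling from conjugate gamma posteriors. The counts, exposures and the choice of a uniform prior are assumptions made for the example, and the conjugate sampling is not necessarily the weighted Monte Carlo scheme used in the thesis.

```python
import numpy as np

# Sketch of a simulation-based credible interval for the ratio of two Poisson
# rates. With y events over exposure t and a Gamma(a, b) prior on the rate, the
# posterior is Gamma(a + y, b + t); a = 1, b = 0 gives the uniform prior and
# a = 0.5, b = 0 the Jeffreys prior. Counts and exposures below are invented.
rng = np.random.default_rng(3)

y1, t1 = 28, 100.0          # events and exposure in group 1
y2, t2 = 17, 100.0          # events and exposure in group 2
a = 1.0                     # uniform prior on each rate

lam1 = rng.gamma(a + y1, 1.0 / t1, size=100_000)   # posterior draws of rate 1
lam2 = rng.gamma(a + y2, 1.0 / t2, size=100_000)   # posterior draws of rate 2
ratio = lam1 / lam2

lower, upper = np.quantile(ratio, [0.025, 0.975])
print(f"posterior median {np.median(ratio):.2f}, 95% credible interval ({lower:.2f}, {upper:.2f})")
```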
|
30 |
REGULARISED ITERATIVE MULTIPLE CORRESPONDENCE ANALYSIS IN MULTIPLE IMPUTATION
Nienkemper, Johané, 07 August 2014
Non-responses in survey data are a prevalent problem. Various techniques for the handling of missing data have been studied and published. The application of a regularised iterative multiple correspondence analysis (RIMCA) algorithm in single imputation (SI) has been suggested for the handling of missing data in survey analysis.
Multiple correspondence analysis (MCA) as an imputation procedure is appropriate for survey data, since MCA is concerned with the relationships among the variables in the data. Therefore, missing data can be imputed by exploiting the relationship between observed and missing data.
The RIMCA algorithm expresses MCA as a weighted principal component analysis (PCA) of a data triplet ( ), which represents a weighted data matrix, a metric and a diagonal matrix containing row masses, respectively. Performing PCA on a triplet involves the generalised singular value decomposition of the weighted data matrix . Here, standard singular value decomposition (SVD) will not suffice, since constraints are imposed on the rows and columns because of the weighting.
The success of this algorithm lies in the fact that all eigenvalues are shrunk and the last components are omitted; thus a 'double shrinkage' occurs, which reduces variance and stabilises predictions. RIMCA seems to overcome overfitting and underfitting problems with regard to categorical missing data in surveys.
The idea of applying the RIMCA algorithm in MI was appealing, since MI has advantages over SI, such as an increase in the accuracy of estimates and the attainment of valid inferences when combining multiple datasets.
The aim of this study was to establish the performance of RIMCA in MI. This was achieved by two objectives: to determine whether RIMCA in MI outperforms RIMCA in SI and to determine the accuracy of predictions made from RIMCA in MI as an imputation model.
Real and simulated data were used. A simulation protocol was followed, creating data drawn from multivariate Normal distributions with both high and low correlation structures. Varying percentages of missing values and different missingness mechanisms (missing completely at random (MCAR) and missing at random (MAR)) were created in the data, as is done by Josse et al. (2012).
The first objective was achieved by applying RIMCA in both SI and MI to real data and simulated data. The performance of RIMCA in SI and MI were compared with regard to the obtained mean estimates and confidence intervals. In the case of the real data, the estimates were compared to the mean estimates of the incomplete data, whereas for the simulated data the true mean values and confidence intervals could be compared to the estimates obtained from the imputation procedures.
The second objective was achieved by calculating the apparent error rates of predictions made by the RIMCA algorithm in SI and MI in simulated datasets. Along with the apparent error rates, approximate overall success rates were calculated in order to establish the accuracy of imputations made by the SI and MI.
The results of this study show that the confidence intervals provided by MI are wider in most of the cases, which confirmed the incorporation of additional variance. It was found that for some of the variables the SI procedures were statistically different from the true confidence intervals, which shows that SI was not suitable in these instances for imputation. Overall the mean estimates provided by MI were closer to the true values, with respect to the simulated and real data. A summary of the bias, mean square errors and coverage for the imputation techniques over a thousand simulations were provided, which also confirmed that RIMCA in MI was a better model than RIMCA in SI in the contexts provided by this research.
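To convey the regularised iterative idea behind RIMCA, the sketch below implements its numeric (regularised iterative PCA) analogue: alternate between a truncated, shrunken SVD reconstruction and refilling the missing cells. A faithful RIMCA operates on the indicator (disjunctive) coding of categorical variables with MCA weights; that coding, and all values in the toy matrix, are omitted or invented here for brevity.

```python
import numpy as np

# Numeric analogue of the regularised iterative idea behind RIMCA: alternate
# between a truncated, shrunken SVD reconstruction of the (mean-centred) data
# and refilling the missing cells with their reconstructed values. A faithful
# RIMCA works on the indicator coding of categorical variables with MCA
# weights; this plain-matrix version only illustrates the iteration and the
# shrinkage of the singular values.
def regularised_iterative_impute(X, n_components=1, shrink=0.5, n_iter=200):
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    filled = np.where(missing, np.nanmean(X, axis=0), X)       # start at column means
    for _ in range(n_iter):
        col_means = filled.mean(axis=0)
        U, s, Vt = np.linalg.svd(filled - col_means, full_matrices=False)
        s_shrunk = np.maximum(s[:n_components] - shrink, 0.0)  # shrink and truncate
        recon = col_means + (U[:, :n_components] * s_shrunk) @ Vt[:n_components]
        filled[missing] = recon[missing]                        # update missing cells only
    return filled

# Toy example with two missing cells.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, np.nan, 4.0],
              [3.0, 4.0, np.nan],
              [4.0, 5.0, 6.0]])
print(regularised_iterative_impute(X).round(2))
```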
|