141

Economic Pricing of Mortality-Linked Securities

Zhou, Rui January 2012 (has links)
In previous research on pricing mortality-linked securities, the no-arbitrage approach is often used. However, this method, which takes market prices as given, is difficult to implement in today's embryonic market, where few securities are traded. In particular, with limited market price data, identifying a risk-neutral measure requires strong assumptions. In this thesis, we approach the pricing problem from a different angle by considering economic methods. We propose pricing approaches for both competitive and non-competitive markets. In the competitive market, we model pricing as a Walrasian tâtonnement process, in which prices are determined through a gradual calibration of supply and demand. This framework yields a pair of supply and demand curves, from which we can tell whether there will be any trade between the counterparties and, if so, at what price the mortality-linked security will be traded. The method does not require the market prices of other mortality-linked securities as input, sparing us the problems associated with the lack of market price data. We extend the pricing framework to incorporate population basis risk, which arises when a pension plan relies on standardized instruments to hedge its longevity risk exposure. This extension allows us to obtain the price and trading quantity of mortality-linked securities in the presence of population basis risk, and the resulting supply and demand curves help us understand how population basis risk affects the behavior of agents. We apply the method to a hypothetical longevity bond, using real mortality data from different populations. Our illustrations show that, interestingly, population basis risk can affect the price of a mortality-linked security in different directions, depending on the properties of the populations involved. We also examine the impact of transitory mortality jumps on trading in a competitive market. Mortality dynamics are subject to jumps caused by events such as the 1918 Spanish flu. Such jumps can have a significant impact on the prices of mortality-linked securities and should therefore be taken into account in modeling. Although several single-population mortality models with jump effects have been developed, they are not adequate for trades in which population basis risk exists. We therefore first develop a two-population mortality model with transitory jump effects, and then use it to examine how mortality jumps may affect the supply and demand of mortality-linked securities. Finally, we model the pricing process in a non-competitive market as a bargaining game and apply Nash's bargaining solution to obtain a unique trading contract. With no requirement of a competitive market, this approach is more appropriate for the current mortality-linked security market. We compare this approach with the other proposed pricing method and find that both lead to Pareto-optimal outcomes.
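To illustrate the tâtonnement mechanism described above, the following sketch (not the thesis's actual model) adjusts a hypothetical longevity-bond price in proportion to excess demand until a hedger's demand schedule and an investor's supply schedule clear; the linear schedules and all numbers are invented placeholders.

```python
def tatonnement(demand, supply, price=1.0, step=0.05, tol=1e-8, max_iter=100_000):
    """Walrasian tatonnement: nudge the price in the direction of excess demand."""
    for _ in range(max_iter):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            return price  # approximate market-clearing price
        price += step * excess
    raise RuntimeError("tatonnement did not converge")

# Hypothetical linear schedules for a longevity bond (illustration only):
# the hedger's demand falls with price, the investor's supply rises with it.
demand = lambda p: max(0.0, 10.0 - 4.0 * p)
supply = lambda p: max(0.0, 6.0 * p - 2.0)
print(round(tatonnement(demand, supply), 4))  # clears near p = 1.2, where 10 - 4p = 6p - 2
```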
142

News media, asset prices and capital flows: evidence from a small open economy

Sher, Galen January 2017 (has links)
Objectives: This work investigates the role of the content of print news media in determining asset prices and capital flows in a small open economy (South Africa). Specifically, it examines how much of the daily variation in stock prices, bond prices, trading volume and capital flows can be explained by phrases in the print news media, and links this evidence to the existing theoretical and empirical literature. Methods: This work employs natural language processing techniques to count words and phrases within articles published in national newspapers. Variance decompositions of the resulting word and phrase counts summarise the information extracted from national newspapers in this way. Following previous studies of the United States, least squares regression relates stock returns to single positive or negative 'sentiment' factors. New to this study, support vector regression relates South African stock returns, bond returns and capital flows to the high-dimensional word and phrase counts from national newspapers. Results: I find that domestic asset prices and capital flows between residents and non-residents reflect the content of the domestic print news media. In particular, the contents of national newspapers can predict 9 percent of the variation in daily stock returns one day ahead and 7 percent of the variation in the daily excess return of long-term bonds over short-term bonds three days ahead. This predictability in stocks and bonds coincides with predictability of the content of the domestic print news media for net equity and debt portfolio capital inflows, suggesting that the domestic print news media affects foreign residents' demand for domestic assets. Moreover, the predictability of domestic print news media for near-future stock returns is driven by emotive language, suggesting a role for 'sentiment', while such predictability for stock returns further ahead and for the premium on long-term bonds is driven by non-emotive language, suggesting a role for other media factors in determining asset prices. These results do not seem to reflect a purely historical phenomenon, finite-sample biases, reverse causality, serial correlation, volatility or day-of-the-week effects. The results support models in which foreign agents' short-run beliefs or preferences respond to the content of the domestic print news media heterogeneously from those of domestic agents, while becoming more homogeneous in the medium term.
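As a hedged sketch of the kind of pipeline the Methods section describes (scikit-learn is assumed available; the articles and returns below are invented, not the study's data), newspaper text is turned into word and phrase counts and a linear support vector regression maps those high-dimensional counts to next-day returns.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline

# Placeholder corpus: one string of concatenated article text per trading day.
daily_articles = [
    "rand weakens as mining output disappoints",
    "strong earnings lift banking shares",
    "political uncertainty weighs on bond market",
]
next_day_returns = np.array([-0.012, 0.008, -0.005])  # illustrative only

# Count single words and two-word phrases, then fit a linear SVR on the counts.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    SVR(kernel="linear", C=1.0),
)
model.fit(daily_articles, next_day_returns)
print(model.predict(["mining shares rally on strong earnings"]))
```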
143

Analysing the structure and nature of medical scheme benefit design in South Africa

Kaplan, Josh Tana January 2015 (has links)
Includes bibliographical references / This dissertation sheds light on open-membership medical scheme benefit design in South Africa by analysing the benefit design of 118 benefit options, providing an overview of the structure and nature of the benefit offerings available in the market in 2014. In addition, the affordability of these benefit options was analysed in order to identify whether connections exist between the benefits on offer and the price of cover. This paper argues that, at present, the large number of benefit options available in the market and the lack of standardisation between them, together with the mosaic of confusing terminology employed in scheme brochures, create a highly complex environment that hampers consumer decision making. However, this complexity was found to be necessary owing to the incomplete regulatory environment surrounding medical schemes. The findings of this investigation show that benefit design requires significant attention in order to facilitate equitable access to cover in South Africa.
144

A General Approach to Buhlmann Credibility Theory

Yan, Yujie 08 1900 (has links)
Credibility theory is widely used in insurance. It is included in the examinations of the Society of Actuaries and in the construction and evaluation of actuarial models. In particular, the Buhlmann credibility model has played a fundamental role in both actuarial theory and practice. It provides a mathematically rigorous procedure for deciding how much credibility should be given to the actual experience rating of an individual risk relative to the manual rating common to a particular class of risks. However, for any selected risk, the Buhlmann model assumes that the outcome random variables in both experience periods and future periods are independent and identically distributed. In addition, the Buhlmann method uses sample mean-based estimators to insure the selected risk, which may be poor estimators of future costs if only a few observations of past events (costs) are available. We present an extension of the Buhlmann model and propose a general method based on a linear combination of both robust and efficient estimators in a dependence framework. The performance of the proposed procedure is demonstrated by Monte Carlo simulations.
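To make the credibility weighting concrete, here is a small sketch of the classical Buhlmann estimator (a textbook construction, not the extension proposed in the thesis): the premium is Z·x̄ + (1−Z)·μ with Z = n/(n + k), where k is the ratio of the expected process variance to the variance of the hypothetical means; all figures below are invented.

```python
import numpy as np

def buhlmann_premium(own_losses, collective_mean, expected_process_var, var_hypothetical_means):
    """Classical Buhlmann credibility premium Z*xbar + (1-Z)*mu with Z = n/(n+k)."""
    x = np.asarray(own_losses, dtype=float)
    n = len(x)
    k = expected_process_var / var_hypothetical_means
    z = n / (n + k)                      # credibility factor in [0, 1)
    return z * x.mean() + (1.0 - z) * collective_mean

# Invented figures: 5 years of a risk's own experience against a collective mean of 100.
print(buhlmann_premium([80, 95, 110, 90, 105], collective_mean=100.0,
                       expected_process_var=400.0, var_hypothetical_means=100.0))
```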
145

The valuation of no-negative equity guarantees and equity release mortgages

Dowd, K., Buckner, D., Blake, D., Fry, John 05 January 2020 (has links)
We outline the valuation process for a No-Negative Equity Guarantee in an Equity Release Mortgage loan and for an Equity Release Mortgage that has such a guarantee. Illustrative valuations are provided based on the Black '76 put pricing formula and mortality projections based on the M5, M6 and M7 mortality versions of the Cairns–Blake–Dowd (CBD) family of mortality models. Results indicate that the valuations of No-Negative Equity Guarantees are high relative to loan amounts and subject to considerable model risk, but that the valuations of Equity Release Mortgage loans are robust to the choice of mortality model. The results have significant ramifications for industry practice and prudential regulation.
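For orientation, a minimal sketch of the Black '76 put formula the paper builds on, P = e^(−rT)[K·Φ(−d2) − F·Φ(−d1)]; the inputs below are placeholders rather than the paper's calibration, and a full NNEG valuation would further weight such puts by decrement probabilities from a mortality model such as M5–M7.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black76_put(forward, strike, vol, rate, expiry):
    """Black '76 price of a European put on a forward."""
    d1 = (log(forward / strike) + 0.5 * vol**2 * expiry) / (vol * sqrt(expiry))
    d2 = d1 - vol * sqrt(expiry)
    n = NormalDist().cdf
    return exp(-rate * expiry) * (strike * n(-d2) - forward * n(-d1))

# Placeholder inputs: forward property value 100, loan-linked strike 70,
# 13% volatility, 1.5% risk-free rate, 20-year horizon.
print(round(black76_put(100.0, 70.0, 0.13, 0.015, 20.0), 4))
```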
146

Evaluating the Predictive Power and suitability of Mortality Rate Models : A Comparison of Makeham and Lee-Carter for Life Insurance Applications

Ljunggren, Carl January 2024 (has links)
Life insurance companies rely on mortality rate models to set appropriate premiums for their services. Over the past century, average life expectancy has increased and continues to do so, necessitating more accurate models. Two commonly used models are the Gompertz-Makeham law of mortality and the Lee-Carter model. The Gompertz-Makeham model depends solely on an age variable, while the Lee-Carter model incorporates a time-varying component that accounts for the increase in life expectancy over time. This paper constructs both models using training data acquired from Skandia Mutual Life Insurance Company and compares them against validation data from the same set. The study suggests that the Lee-Carter model may offer some improvement over the Gompertz-Makeham law of mortality in predicting future mortality rates. However, due to a lack of high-quality data, creating a competitive Lee-Carter model through singular value decomposition (SVD) proved problematic. Switching from the current Gompertz-Makeham model to the Lee-Carter model should therefore be explored further when more high-quality data becomes available.
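A rough sketch of the Lee-Carter fitting step mentioned above, on simulated data rather than Skandia's: the model writes log m(x,t) ≈ a_x + b_x·k_t, with a_x the age-specific average of log mortality and b_x, k_t taken from the leading singular vectors of the centred log-mortality matrix.

```python
import numpy as np

def fit_lee_carter(log_m):
    """Rank-1 SVD fit of log m(x,t) ~ a_x + b_x * k_t (ages in rows, years in columns)."""
    a = log_m.mean(axis=1)                           # a_x: average log mortality per age
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0, :]
    b, k = b / b.sum(), k * b.sum()                  # identification: sum(b_x) = 1
    return a, b, k

# Simulated log-mortality surface (illustration only): 40 ages, 30 years,
# a rising age profile and a downward calendar-time trend plus noise.
rng = np.random.default_rng(0)
ages, years = np.arange(40, 80), np.arange(30)
log_m = (-9.0 + 0.09 * (ages - 40))[:, None] - 0.02 * years[None, :] \
        + 0.01 * rng.standard_normal((40, 30))
a, b, k = fit_lee_carter(log_m)
print(round(b.sum(), 6), k[:3])  # b sums to 1; k drifts downwards across the years
```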
147

A framework for estimating risk

Kroon, Rodney Stephen 03 1900 (has links)
Thesis (PhD (Statistics and Actuarial Sciences))--Stellenbosch University, 2008. / We consider the problem of model assessment by risk estimation. Various approaches to risk estimation are considered in a unified framework. This framework is an extension of a decision-theoretic framework proposed by David Haussler. Point and interval estimation based on test samples and training samples is discussed, with interval estimators being classified based on the measure of deviation they attempt to bound. The main contribution of this thesis is in the realm of training sample interval estimators, particularly covering number-based and PAC-Bayesian interval estimators. The thesis discusses a number of approaches to obtaining such estimators. The first type of training sample interval estimator to receive attention is estimators based on classical covering number arguments. A number of these estimators were generalized in various directions. Typical generalizations included: extension of results from misclassification loss to other loss functions; extending results to allow arbitrary ghost sample size; extending results to allow arbitrary scale in the relevant covering numbers; and extending results to allow arbitrary choice of in the use of symmetrization lemmas. These extensions were applied to covering number-based estimators for various measures of deviation, as well as for the special cases of misclassification loss estimators, realizable case estimators, and margin bounds. Extended results were also provided for stratification by (algorithm- and data-dependent) complexity of the decision class. In order to facilitate application of these covering number-based bounds, a discussion of various complexity dimensions and approaches to obtaining bounds on covering numbers is also presented. The second type of training sample interval estimator discussed in the thesis is Rademacher bounds. These bounds use advanced concentration inequalities, so a chapter discussing such inequalities is provided. Our discussion of Rademacher bounds leads to the presentation of an alternative, slightly stronger, form of the core result used for deriving local Rademacher bounds, by avoiding a few unnecessary relaxations. Next, we turn to a discussion of PAC-Bayesian bounds. Using an approach developed by Olivier Catoni, we develop new PAC-Bayesian bounds based on results underlying Hoeffding's inequality. By utilizing Catoni's concept of "exchangeable priors", these results allowed the extension of a covering number-based result to averaging classifiers, as well as its corresponding algorithm- and data-dependent result. The last contribution of the thesis is the development of a more flexible shell decomposition bound: by using Hoeffding's tail inequality rather than Hoeffding's relative entropy inequality, we extended the bound to general loss functions, allowed the use of an arbitrary number of bins, and introduced between-bin and within-bin "priors". Finally, to illustrate the calculation of these bounds, we applied some of them to the UCI spam classification problem, using decision trees and boosted stumps.
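As a point of reference for the test-sample interval estimators mentioned above (this is the standard Hoeffding construction, not one of the thesis's training-sample bounds): with probability at least 1 − δ over the test sample, the true risk of a fixed classifier is at most its empirical test-set risk plus sqrt(ln(1/δ)/(2n)) when losses are bounded in [0, 1].

```python
import numpy as np

def hoeffding_upper_bound(losses, delta=0.05):
    """One-sided Hoeffding bound: true risk <= empirical risk + sqrt(ln(1/delta)/(2n)),
    with probability >= 1 - delta, assuming i.i.d. losses bounded in [0, 1]."""
    losses = np.asarray(losses, dtype=float)
    n = len(losses)
    return losses.mean() + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

# Illustration: 1000 simulated 0/1 misclassification losses with true error 0.1.
rng = np.random.default_rng(0)
losses = rng.binomial(1, 0.1, size=1000)
print(hoeffding_upper_bound(losses))  # empirical error plus a slack term of about 0.039
```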
148

An analysis of income and poverty in South Africa

Malherbe, Jeanine Elizabeth 03 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2007. / The aim of this study is to assess the welfare of South Africa in terms of poverty and inequality. This is done using the Income and Expenditure Survey (IES) of 2000, released by Statistics South Africa, and by reviewing the distribution of income in the country. A brief literature review of similar studies is given along with a broad definition of poverty and inequality. A detailed description of the dataset used is given together with aspects of concern surrounding the dataset. An analysis of poverty and income inequality is made using datasets containing the continuous income variable, as well as a created grouped income variable. Results from these datasets are compared and conclusions made on the use of continuous or grouped income variables. Covariate analysis is also applied in the form of biplots. A brief overview of biplots is given, and they are then used to obtain a graphical description of the data and to identify any patterns. Lastly, the conclusions made in this study are put forward and some future research is mentioned.
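For readers unfamiliar with the welfare measures involved, a brief sketch of two standard quantities such a study works with, computed on simulated incomes rather than the IES 2000 data: the headcount poverty rate and the Gini coefficient.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient from individual incomes (0 = perfect equality)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

def headcount_poverty_rate(incomes, poverty_line):
    """Share of individuals with income below the poverty line."""
    return float(np.mean(np.asarray(incomes, dtype=float) < poverty_line))

# Simulated log-normal incomes (illustration only; not the IES 2000 data).
rng = np.random.default_rng(1)
incomes = rng.lognormal(mean=8.0, sigma=1.0, size=5_000)
print(gini(incomes), headcount_poverty_rate(incomes, poverty_line=1_500))
```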
149

Value at risk and expected shortfall : traditional measures and extreme value theory enhancements with a South African market application

Dicks, Anelda 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: Accurate estimation of Value at Risk (VaR) and Expected Shortfall (ES) is critical in the management of extreme market risks. These risks occur with small probability, but the financial impacts could be large. Traditional models to estimate VaR and ES are investigated. Following usual practice, 99% 10 day VaR and ES measures are calculated. A comprehensive theoretical background is first provided and then the models are applied to the Africa Financials Index from 29/01/1996 to 30/04/2013. The models considered include independent, identically distributed (i.i.d.) models and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) stochastic volatility models. Extreme Value Theory (EVT) models that focus especially on extreme market returns are also investigated. For this, the Peaks Over Threshold (POT) approach to EVT is followed. For the calculation of VaR, various scaling methods from one day to ten days are considered and their performance evaluated. The GARCH models fail to converge during periods of extreme returns. During these periods, EVT forecast results may be used. As a novel approach, this study considers the augmentation of the GARCH models with EVT forecasts. The two-step procedure of pre-filtering with a GARCH model and then applying EVT, as suggested by McNeil (1999), is also investigated. This study identifies some of the practical issues in model fitting. It is shown that no single forecasting model is universally optimal and the choice will depend on the nature of the data. For this data series, the best approach was to augment the GARCH stochastic volatility models with EVT forecasts during periods where the first do not converge. Model performance is judged by the actual number of VaR and ES violations compared to the expected number. The expected number is taken as the number of return observations over the entire sample period, multiplied by 0.01 for 99% VaR and ES calculations. / AFRIKAANSE OPSOMMING: Akkurate beraming van Waarde op Risiko (Value at Risk) en Verwagte Tekort (Expected Shortfall) is krities vir die bestuur van ekstreme mark risiko’s. Hierdie risiko’s kom met klein waarskynlikheid voor, maar die finansiële impakte is potensieel groot. Tradisionele modelle om Waarde op Risiko en Verwagte Tekort te beraam, word ondersoek. In ooreenstemming met die algemene praktyk, word 99% 10 dag maatstawwe bereken. ‘n Omvattende teoretiese agtergrond word eers gegee en daarna word die modelle toegepas op die Africa Financials Index vanaf 29/01/1996 tot 30/04/2013. Die modelle wat oorweeg word sluit onafhanklike, identies verdeelde modelle en Veralgemeende Auto-regressiewe Voorwaardelike Heteroskedastiese (GARCH) stogastiese volatiliteitsmodelle in. Ekstreemwaarde Teorie modelle, wat spesifiek op ekstreme mark opbrengste fokus, word ook ondersoek. In hierdie verband word die Peaks Over Threshold (POT) benadering tot Ekstreemwaarde Teorie gevolg. Vir die berekening van Waarde op Risiko word verskillende skaleringsmetodes van een dag na tien dae oorweeg en die prestasie van elk word ge-evalueer. Die GARCH modelle konvergeer nie gedurende tydperke van ekstreme opbrengste nie. Gedurende hierdie tydperke, kan Ekstreemwaarde Teorie modelle gebruik word. As ‘n nuwe benadering oorweeg hierdie studie die aanvulling van die GARCH modelle met Ekstreemwaarde Teorie vooruitskattings. 
Die sogenaamde twee-stap prosedure wat vooraf filtrering met 'n GARCH model behels, gevolg deur die toepassing van Ekstreemwaarde Teorie (soos voorgestel deur McNeil, 1999), word ook ondersoek. Hierdie studie identifiseer sommige van die praktiese probleme in model passing. Daar word gewys dat geen enkele vooruitskattingsmodel universeel optimaal is nie en die keuse van die model hang af van die aard van die data. Die beste benadering vir die data reeks wat in hierdie studie gebruik word, was om die GARCH stogastiese volatiliteitsmodelle met Ekstreemwaarde Teorie vooruitskattings aan te vul waar die voorafgenoemde nie konvergeer nie. Die prestasie van die modelle word beoordeel deur die werklike aantal Waarde op Risiko en Verwagte Tekort oortredings met die verwagte aantal te vergelyk. Die verwagte aantal word geneem as die aantal opbrengste waargeneem oor die hele steekproefperiode, vermenigvuldig met 0.01 vir die 99% Waarde op Risiko en Verwagte Tekort berekeninge.
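A minimal sketch of the Peaks Over Threshold calculation described in the abstract, using scipy's generalized Pareto fit (assumed available); the simulated losses, threshold choice and parameters are illustrative and not the thesis's Africa Financials Index results.

```python
import numpy as np
from scipy.stats import genpareto

def pot_var_es(losses, threshold, q=0.99):
    """Peaks-over-threshold VaR and ES: fit a GPD to exceedances over the threshold
    and plug the fitted shape/scale into the standard POT quantile formulas."""
    losses = np.asarray(losses, dtype=float)
    exceed = losses[losses > threshold] - threshold
    xi, _, beta = genpareto.fit(exceed, floc=0.0)          # shape xi, scale beta
    n, n_u = len(losses), len(exceed)
    var = threshold + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
    es = var / (1 - xi) + (beta - xi * threshold) / (1 - xi)  # valid for xi < 1
    return var, es

# Simulated heavy-tailed daily losses (illustration only).
rng = np.random.default_rng(42)
losses = rng.standard_t(df=4, size=5_000) * 0.01
print(pot_var_es(losses, threshold=np.quantile(losses, 0.95), q=0.99))
```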
150

The identification and application of common principal components

Pepler, Pieter Theo 12 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: When estimating the covariance matrices of two or more populations, the covariance matrices are often assumed to be either equal or completely unrelated. The common principal components (CPC) model provides an alternative which is situated between these two extreme assumptions: The assumption is made that the population covariance matrices share the same set of eigenvectors, but have different sets of eigenvalues. An important question in the application of the CPC model is to determine whether it is appropriate for the data under consideration. Flury (1988) proposed two methods, based on likelihood estimation, to address this question. However, the assumption of multivariate normality is untenable for many real data sets, making the application of these parametric methods questionable. A number of non-parametric methods, based on bootstrap replications of eigenvectors, is proposed to select an appropriate common eigenvector model for two population covariance matrices. Using simulation experiments, it is shown that the proposed selection methods outperform the existing parametric selection methods. If appropriate, the CPC model can provide covariance matrix estimators that are less biased than when assuming equality of the covariance matrices, and of which the elements have smaller standard errors than the elements of the ordinary unbiased covariance matrix estimators. A regularised covariance matrix estimator under the CPC model is proposed, and Monte Carlo simulation results show that it provides more accurate estimates of the population covariance matrices than the competing covariance matrix estimators. Covariance matrix estimation forms an integral part of many multivariate statistical methods. Applications of the CPC model in discriminant analysis, biplots and regression analysis are investigated. It is shown that, in cases where the CPC model is appropriate, CPC discriminant analysis provides significantly smaller misclassification error rates than both ordinary quadratic discriminant analysis and linear discriminant analysis. A framework for the comparison of different types of biplots for data with distinct groups is developed, and CPC biplots constructed from common eigenvectors are compared to other types of principal component biplots using this framework. A subset of data from the Vermont Oxford Network (VON), of infants admitted to participating neonatal intensive care units in South Africa and Namibia during 2009, is analysed using the CPC model. It is shown that the proposed non-parametric methodology offers an improvement over the known parametric methods in the analysis of this data set which originated from a non-normally distributed multivariate population. CPC regression is compared to principal component regression and partial least squares regression in the fitting of models to predict neonatal mortality and length of stay for infants in the VON data set. The fitted regression models, using readily available day-of-admission data, can be used by medical staff and hospital administrators to counsel parents and improve the allocation of medical care resources. Predicted values from these models can also be used in benchmarking exercises to assess the performance of neonatal intensive care units in the Southern African context, as part of larger quality improvement programmes.
/ AFRIKAANSE OPSOMMING: Wanneer die kovariansiematrikse van twee of meer populasies beraam word, word dikwels aanvaar dat die kovariansiematrikse of gelyk, of heeltemal onverwant is. Die gemeenskaplike hoofkomponente (GHK) model verskaf 'n alternatief wat tussen hierdie twee ekstreme aannames geleë is: Die aanname word gemaak dat die populasie kovariansiematrikse dieselfde versameling eievektore deel, maar verskillende versamelings eiewaardes het. 'n Belangrike vraag in die toepassing van die GHK model is om te bepaal of dit geskik is vir die data wat beskou word. Flury (1988) het twee metodes, gebaseer op aanneemlikheidsberaming, voorgestel om hierdie vraag aan te spreek. Die aanname van meerveranderlike normaliteit is egter ongeldig vir baie werklike datastelle, wat die toepassing van hierdie metodes bevraagteken. 'n Aantal nie-parametriese metodes, gebaseer op skoenlus-herhalings van eievektore, word voorgestel om 'n geskikte gemeenskaplike eievektor model te kies vir twee populasie kovariansiematrikse. Met die gebruik van simulasie eksperimente word aangetoon dat die voorgestelde seleksiemetodes beter vaar as die bestaande parametriese seleksiemetodes. Indien toepaslik, kan die GHK model kovariansiematriks beramers verskaf wat minder sydig is as wanneer aanvaar word dat die kovariansiematrikse gelyk is, en waarvan die elemente kleiner standaardfoute het as die elemente van die gewone onsydige kovariansiematriks beramers. 'n Geregulariseerde kovariansiematriks beramer onder die GHK model word voorgestel, en Monte Carlo simulasie resultate toon dat dit meer akkurate beramings van die populasie kovariansiematrikse verskaf as ander mededingende kovariansiematriks beramers. Kovariansiematriks beraming vorm 'n integrale deel van baie meerveranderlike statistiese metodes. Toepassings van die GHK model in diskriminantanalise, bi-stippings en regressie-analise word ondersoek. Daar word aangetoon dat, in gevalle waar die GHK model toepaslik is, GHK diskriminantanalise betekenisvol kleiner misklassifikasie foutkoerse lewer as beide gewone kwadratiese diskriminantanalise en lineêre diskriminantanalise. 'n Raamwerk vir die vergelyking van verskillende tipes bi-stippings vir data met verskeie groepe word ontwikkel, en word gebruik om GHK bi-stippings gekonstrueer vanaf gemeenskaplike eievektore met ander tipe hoofkomponent bi-stippings te vergelyk. 'n Deelversameling van data vanaf die Vermont Oxford Network (VON), van babas opgeneem in deelnemende neonatale intensiewe sorg eenhede in Suid-Afrika en Namibië gedurende 2009, word met behulp van die GHK model ontleed. Daar word getoon dat die voorgestelde nie-parametriese metodiek 'n verbetering op die bekende parametriese metodes bied in die ontleding van hierdie datastel wat afkomstig is uit 'n nie-normaal verdeelde meerveranderlike populasie. GHK regressie word vergelyk met hoofkomponent regressie en parsiële kleinste kwadrate regressie in die passing van modelle om neonatale mortaliteit en lengte van verblyf te voorspel vir babas in die VON datastel. Die gepasde regressiemodelle, wat maklik bekombare dag-van-toelating data gebruik, kan deur mediese personeel en hospitaaladministrateurs gebruik word om ouers te adviseer en die toewysing van mediese sorg hulpbronne te verbeter. Voorspelde waardes vanaf hierdie modelle kan ook gebruik word in normwaarde oefeninge om die prestasie van neonatale intensiewe sorg eenhede in die Suider-Afrikaanse konteks, as deel van groter gehalteverbeteringprogramme, te evalueer.
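A much-simplified sketch of the CPC idea (not the thesis's estimator, which relies on Flury's likelihood machinery and bootstrap-based model selection): take a common eigenvector basis from the pooled sample covariance matrix and give each population its own eigenvalues by projecting its covariance matrix onto that basis.

```python
import numpy as np

def cpc_naive(samples):
    """Crude CPC-style estimate: common eigenvectors from the pooled covariance matrix,
    population-specific eigenvalues from the diagonal of B' S_i B."""
    covs = [np.cov(x, rowvar=False) for x in samples]
    weights = np.array([len(x) - 1 for x in samples], dtype=float)
    pooled = sum(w * s for w, s in zip(weights / weights.sum(), covs))
    _, B = np.linalg.eigh(pooled)                    # shared (orthonormal) eigenvectors
    eigvals = [np.diag(B.T @ s @ B) for s in covs]   # per-population eigenvalues
    return B, eigvals

# Two simulated populations sharing principal axes but with different spreads.
rng = np.random.default_rng(7)
B_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
pop1 = (rng.standard_normal((200, 3)) * [3.0, 1.0, 0.5]) @ B_true.T
pop2 = (rng.standard_normal((200, 3)) * [1.5, 1.2, 0.3]) @ B_true.T
B, eigvals = cpc_naive([pop1, pop2])
print(np.round(eigvals[0], 2), np.round(eigvals[1], 2))
```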
