161 |
The statistical analyses of a complex survey of banana pests and diseases in Uganda. Ngoya, Japheth N. January 1999 (has links)
No abstract available. / Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 1999.
|
162 |
Statistical analysis of the incidence and mortality of African horse sickness in South Africa. Burne, Rebecca. January 2011 (has links)
No abstract available. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2011.
|
163 |
Multilevel modelling of HIV in Swaziland using frequentist and Bayesian approaches. Vilakati, Sifiso E. January 2012 (has links)
Multilevel models account for different levels of aggregation that may be
present in the data. Researchers are sometimes faced with the task of
analysing data that are collected at different levels such that attributes about
individual cases are provided as well as the attributes of groupings of these
individual cases. Data with a multilevel structure are common in the social
sciences and in other fields such as epidemiology. Ignoring hierarchies in the data
(where they exist) can have damaging consequences for subsequent statistical
inference.
This study applied multilevel models from frequentist and Bayesian perspectives
to the Swaziland Demographic and Health Survey (SDHS) data.
The first model fitted to the data was a Bayesian generalised linear mixed
model (GLMM) using two estimation techniques: the Integrated Nested Laplace
Approximation (INLA) and Markov chain Monte Carlo (MCMC) methods.
The study aimed to identify determinants of HIV in Swaziland as well as
to compare the different statistical models. The outcome variable of
interest in this study is HIV status, which is binary; the logit link was
therefore used in all the models fitted.
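A minimal sketch of the kind of two-level (random-intercept) logistic model described here, with individual i nested in cluster j (the notation is assumed for illustration and is not taken from the thesis):

$$\operatorname{logit}\{\Pr(y_{ij}=1 \mid u_j)\} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j, \qquad u_j \sim N(0,\,\sigma_u^2),$$

where the frequentist multilevel model estimates $\boldsymbol{\beta}$ and $\sigma_u^2$ by (approximate) maximum likelihood, while the Bayesian GLMM places priors on them and approximates the posterior either by INLA or by MCMC sampling.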
The results of the analysis showed that the INLA estimation approach
is superior to the MCMC approach for Bayesian GLMMs in terms of computational speed: INLA produced results within seconds, compared with the many minutes taken by the MCMC methods. There were
minimal differences between the Bayesian multilevel model and
the frequentist multilevel model. A notable difference between the
Bayesian GLMMs and the multilevel models is the differing estimates
of the cluster effects: in the Bayesian GLMM, the estimates of the cluster
effects are larger than those from the multilevel models. The inclusion
of cluster-level variables in the multilevel models reduced the unexplained
group-level variation.
In an attempt to identify key drivers of HIV in Swaziland, this study
found that age, age at first sex, marital status and the number of sexual
partners one had in the last 12 months are associated with HIV serostatus.
Weak between-cluster variation was found among both men and women. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2012.
|
164 |
Estimating the force of infection from prevalence data: infectious disease modelling. Balakrishna, Yusentha. January 2013 (has links)
By knowing the incidence of an infectious disease, we can ascertain the high-risk
factors of the disease as well as the effectiveness of awareness programmes
and treatment strategies. Since the work of Hugo Muench in 1934, many
methods of estimating the force of infection have been developed, each with
their own advantages and disadvantages.
The objective of this thesis is to explore the different compartmental models
of infectious diseases and to establish and interpret the parameters associated
with them. Seven models formulated to estimate the force of infection were
discussed and applied to data obtained from CAPRISA. The data were age-specific
HIV prevalence data based on antenatal clinic attendees from the
Vulindlela district in KwaZulu-Natal.
The link between the survivor function, the prevalence and the force of infection
was demonstrated, and generalized linear model methodology was used
to estimate the force of infection.
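For reference, the standard catalytic-model relationship underlying this step (a general result, not a formula specific to the thesis): if $S(a)$ is the probability of remaining uninfected up to age $a$ and $\pi(a) = 1 - S(a)$ is the prevalence at age $a$, then the force of infection is

$$\lambda(a) = -\frac{d}{da}\log S(a) = \frac{\pi'(a)}{1-\pi(a)},$$

so that an estimated prevalence curve immediately yields an estimated force of infection.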
Parametric and nonparametric force-of-infection models were fitted to the
data from 2009 to 2010. The best-fitting model was determined and thereafter
applied to the data from 2002 to 2010. The resulting trends in HIV incidence
and prevalence were then evaluated. It should be noted that the sample size
for the year 2002 was considerably smaller than that of the following years,
which resulted in slightly inaccurate estimates for that year.
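As an illustration of this model-fitting step, the following minimal sketch fits a parametric (logistic-in-age) prevalence curve with a binomial generalized linear model and converts it into a force-of-infection curve; the data, the functional form and the software choices are placeholder assumptions, not those of the thesis or of the CAPRISA data set.

import numpy as np
import statsmodels.api as sm

# Hypothetical age-specific prevalence data: age, number tested, number HIV-positive.
age = np.array([16.0, 18, 20, 22, 24, 26, 28, 30])
n_tested = np.array([120, 150, 160, 140, 130, 110, 90, 80])
n_pos = np.array([10, 25, 48, 55, 57, 52, 44, 40])

# Binomial GLM with logit link: prevalence pi(a) = expit(b0 + b1 * a).
X = sm.add_constant(age)
fit = sm.GLM(np.column_stack([n_pos, n_tested - n_pos]), X,
             family=sm.families.Binomial()).fit()
b0, b1 = fit.params

# For this curve, pi'(a) = b1 * pi(a) * (1 - pi(a)), so the force of infection
# lambda(a) = pi'(a) / (1 - pi(a)) simplifies to b1 * pi(a).
ages = np.linspace(15, 35, 101)
prevalence = 1.0 / (1.0 + np.exp(-(b0 + b1 * ages)))
force_of_infection = b1 * prevalence
print(np.round(force_of_infection[::20], 4))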
Despite the general increase in HIV prevalence (from 54.07% in 2003 to
61.33% in 2010), the rate of new HIV infections was found to be decreasing.
The results also showed that the age at which the force of infection peaked
for each year increased from 16.5 years in 2003 to 18 years in 2010.
Farrington's two-parameter model for estimating the force of HIV infection
was shown to be the most useful. The results obtained emphasised the importance
of targeting HIV awareness campaigns at the 15-to-19-year-old age group.
The results also suggest that prevalence alone can be a misleading measure
of disease and should rather be used in conjunction
with incidence estimates to determine the success of intervention and control
strategies. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
|
165 |
Toward a unified global regulatory capital framework for life insurers. Sharara, Ishmael 28 February 2011 (has links)
In many regions of the world, the solvency regulation of insurers is becoming more principle-based and market-oriented. However, the exact forms of the solvency standards that are emerging in individual jurisdictions are not entirely consistent. A common risk and capital framework can level the global playing field and possibly reduce the cost of capital for insurers. In this thesis, a conceptual framework for measuring the insolvency risk of life insurance companies is proposed. The two main advantages of the proposed solvency framework are that it addresses the issue of incentives in the calibration of capital requirements and that it provides an associated decomposition of the insurer's insolvency risk by term. The proposed term structure of insolvency risk is an efficient risk summary that should be readily accessible to both regulators and policyholders. Given the inherent complexity of the long-term guarantees and options of typical life insurance policies, the term structure of insolvency risk is able to provide stakeholders with more complete information than that provided by a single number that relates to a specific period. The capital standards for life insurers that currently exist or have been proposed in Canada, the U.S., and the EU are then reviewed within the risk and capital measurement framework of the proposed standard to identify potential shortcomings.
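One plausible way to formalize such a term structure (stated here only as an illustrative assumption, since the thesis's exact construction is not reproduced above): letting $\tau$ denote the random time at which the insurer first becomes insolvent,

$$p(t) = \Pr(\tau \le t), \qquad t = 1, 2, \ldots,$$

so that the curve $t \mapsto p(t)$ summarizes insolvency risk by horizon rather than through a single one-period capital figure.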
|
166 |
Financial Risk Management of Guaranteed Minimum Income Benefits Embedded in Variable Annuities. Marshall, Claymore January 2011 (has links)
A guaranteed minimum income benefit (GMIB) is a long-dated option that can be embedded in a deferred variable annuity. The GMIB is attractive because, for policyholders who plan to annuitize, it offers protection against poor market performance during the accumulation phase, and adverse interest rate experience at annuitization. The GMIB also provides an upside equity guarantee that resembles the benefit provided by a lookback option.
We price the GMIB, and determine the fair fee rate that should be charged. Due to the long-dated nature of the option, conventional hedging methods, such as delta hedging, will only be partially successful. Therefore, we are motivated to find alternative hedging methods which are practicable for long-dated options. First, we measure the effectiveness of static hedging strategies for the GMIB. Static hedging portfolios are constructed based on minimizing the Conditional Tail Expectation of the hedging loss distribution, or minimizing the mean squared hedging loss. Next, we measure the performance of semi-static hedging strategies for the GMIB. We present a practical method for testing semi-static strategies applied to long-term options, which employs nested Monte Carlo simulations and standard optimization methods. The semi-static strategies involve periodically rebalancing the hedging portfolio at certain time intervals during the accumulation phase, such that, at the option maturity date, the hedging portfolio payoff equals or exceeds the option value, subject to an acceptable level of risk. While we focus on the GMIB as a case study, the methods we utilize are extendable to other types of long-dated options with similar features.
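A minimal sketch of the mean-squared-hedging-loss variant of the static hedging idea, under stand-in assumptions (risk-neutral geometric Brownian motion scenarios, a stylised GMIB-like terminal liability, and a small set of hedge instruments); it illustrates the optimization only and is not the market model or payoff used in the thesis.

import numpy as np

rng = np.random.default_rng(0)
n_sims, horizon, r, sigma, s0 = 20_000, 10.0, 0.03, 0.20, 100.0

# Terminal fund value under a simple risk-neutral GBM (a stand-in scenario generator).
z = rng.standard_normal(n_sims)
s_T = s0 * np.exp((r - 0.5 * sigma ** 2) * horizon + sigma * np.sqrt(horizon) * z)

# Stylised GMIB-like liability: shortfall of the fund below a 5% roll-up guarantee.
guarantee = s0 * 1.05 ** horizon
liability = np.maximum(guarantee - s_T, 0.0)

# Hedge instruments (payoffs at the horizon): cash, the fund, and two put options.
put_strikes = [90.0, 120.0]
payoffs = np.column_stack(
    [np.ones(n_sims), s_T] + [np.maximum(k - s_T, 0.0) for k in put_strikes]
)

# Static hedge weights minimizing the mean squared hedging loss (a least-squares fit).
weights, *_ = np.linalg.lstsq(payoffs, liability, rcond=None)
hedging_error = liability - payoffs @ weights
print("weights:", np.round(weights, 3))
print("root mean squared hedging error:", round(float(hedging_error.std()), 3))

A CTE-based version would replace the least-squares step with an optimization over the tail of the hedging-loss distribution, as described above.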
|
167 |
Coherent Distortion Risk Measures in Portfolio Selection. Feng, Ming Bin January 2011 (links)
The theme of this thesis relates to solving the optimal portfolio selection problems using linear programming. There are two key contributions in this thesis. The first contribution is to generalize the well-known linear optimization framework of Conditional Value-at-Risk (CVaR)-based portfolio selection problems (see Rockafellar and Uryasev (2000, 2002)) to more general risk measure portfolio selection problems. In particular, the class of risk measure under consideration is called the Coherent Distortion Risk Measure (CDRM) and is the intersection of two well-known classes of risk measures in the literature: the Coherent Risk Measure (CRM) and the Distortion Risk Measure (DRM). In addition to CVaR, other risk measures which belong to CDRM include the Wang Transform (WT) measure, Proportional Hazard (PH) transform measure, and lookback (LB) distortion measure. Our generalization implies that the portfolio selection problems can be solved very efficiently using the linear programming approach and over a much wider class of risk measures.
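As a concrete instance of the linear programming approach for the CVaR member of the CDRM family, the following minimal sketch solves the Rockafellar–Uryasev formulation on simulated scenario returns; the data, the long-only constraint and the target return are placeholder assumptions, and the thesis's generalization indicates that other CDRMs admit similar linear reformulations.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_assets, n_scen, alpha, target = 4, 500, 0.95, 0.005

# Simulated scenario returns (a stand-in for real market scenarios).
mu = np.array([0.004, 0.006, 0.008, 0.010])
R = mu + 0.02 * rng.standard_normal((n_scen, n_assets))

# Decision variables: weights w (n_assets), VaR level zeta (1), excess losses u (n_scen).
# Rockafellar-Uryasev objective: zeta + sum(u) / ((1 - alpha) * n_scen)  ==  CVaR_alpha.
c = np.concatenate([np.zeros(n_assets), [1.0],
                    np.full(n_scen, 1.0 / ((1.0 - alpha) * n_scen))])

# u_s >= loss_s - zeta with loss_s = -R_s . w   =>   -R_s . w - zeta - u_s <= 0.
A_ub = np.hstack([-R, -np.ones((n_scen, 1)), -np.eye(n_scen)])
b_ub = np.zeros(n_scen)

# Expected-return constraint: mean(R) . w >= target.
A_ub = np.vstack([A_ub, np.concatenate([-R.mean(axis=0), [0.0], np.zeros(n_scen)])])
b_ub = np.append(b_ub, -target)

# Fully invested, long-only portfolio; zeta is free, excess losses are non-negative.
A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("optimal weights:", np.round(res.x[:n_assets], 3))
print("minimal CVaR of loss:", round(float(res.fun), 5))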
The second contribution of the thesis is to establish the equivalences among four formulations of CDRM optimization problems: return maximization subject to a CDRM constraint, CDRM minimization subject to a return constraint, return-CDRM utility maximization, and CDRM-based Sharpe Ratio maximization. Equivalences among these four formulations are established in the sense that they produce the same efficient frontier when varying the parameters in their corresponding problems. We point out that the first three formulations have already been investigated in Krokhmal et al. (2002) with milder assumptions on risk measures (convex functionals of portfolio weights). Here we apply their results to CDRM and establish the fourth equivalence. For each of these formulations, the relationship between its given parameter and the implied parameters for the other three formulations is explored. Such equivalences and relationships can help verify consistencies (or inconsistencies) in risk management with different objectives and constraints. They are also helpful for uncovering the implied information of a decision-making process or of a given investment market.
We conclude the thesis by conducting two case studies to illustrate the methodologies and implementations of our linear optimization approach, to verify the equivalences among the four different problem formulations, and to investigate the properties of different members of CDRM. In addition, the efficiency (or inefficiency) of the so-called 1/n portfolio strategy is examined in terms of the trade-off between portfolio return and portfolio CDRM. The properties of optimal portfolios and their returns with respect to different CDRM minimization problems are compared through their numerical results.
|
168 |
The optimality of a dividend barrier strategy for Lévy insurance risk processes, with a focus on the univariate Erlang mixture. Ali, Javid January 2011 (links)
In insurance risk theory, the surplus of an insurance company is modelled to monitor and quantify its risks. With the outgo of claims and inflow of premiums, the insurer needs to determine what financial portfolio ensures the soundness of the company’s future while satisfying the shareholders’ interests. It is usually assumed that the net profit condition (i.e. the expectation of the process is positive) is satisfied, which then implies that this process would drift towards infinity. To correct this unrealistic behaviour, the surplus process was modified to include the payout of dividends until the time of ruin.
Under this more realistic surplus process, a topic of growing interest is determining which dividend strategy is optimal, where optimality is in the sense of maximizing the expected present value of dividend payments. This problem dates back to the work of Bruno De Finetti (1957) where it was shown that if the surplus process is modelled as a random walk with ± 1 step sizes, the optimal dividend payment strategy is a barrier strategy. Such a strategy pays as dividends any excess of the surplus above some threshold. Since then, other examples where a barrier strategy is optimal include the Brownian motion model (Gerber and Shiu (2004)) and the compound Poisson process model with exponential claims (Gerber and Shiu (2006)).
In this thesis, we focus on the optimality of a barrier strategy in the more general Lévy risk models. The risk process will be formulated as a spectrally negative Lévy process, a continuous-time stochastic process with stationary increments which provides an extension of the classical Cramér-Lundberg model. This includes the Brownian and the compound Poisson risk processes as special cases. In this setting, results are expressed in terms of “scale functions”, a family of functions known only through their Laplace transform. In Loeffen (2008), we can find a sufficient condition on the jump distribution of the process for a barrier strategy to be optimal. This condition was then improved upon by Loeffen and Renaud (2010) while considering a more general control problem.
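For reference, the classical expression for the expected discounted dividends of a barrier strategy at level $b$ in this spectrally negative Lévy setting (when the $q$-scale function $W^{(q)}$ is sufficiently smooth) is

$$v_b(x) \;=\; \frac{W^{(q)}(x)}{W^{(q)\prime}(b)}, \qquad 0 \le x \le b,$$

and, within the class of barrier strategies, the candidate optimal barrier is the point at which $W^{(q)\prime}$ attains its minimum; the conditions of Loeffen (2008) and Loeffen and Renaud (2010) mentioned above guarantee that this barrier strategy is optimal among all admissible strategies.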
The first chapter provides a brief review of theory of spectrally negative Lévy processes and scale functions. In chapter 2, we define the optimal dividends problem and provide existing results in the literature. When the surplus process is given by the Cramér-Lundberg process with a Brownian motion component, we provide a sufficient condition on the parameters of this process for the optimality of a dividend barrier strategy.
Chapter 3 focuses on the case when the claims distribution is given by a univariate mixture of Erlang distributions with a common scale parameter. Analytical results for the Value-at-Risk and Tail-Value-at-Risk, and the Euler risk contribution to the Conditional Tail Expectation are provided. Additionally, we give some results for the scale function and the optimal dividends problem. In the final chapter, we propose an expectation maximization (EM) algorithm similar to that in Lee and Lin (2009) for fitting the univariate distribution to data. This algorithm is implemented and numerical results on the goodness of fit to sample data and on the optimal dividends problem are presented.
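A minimal sketch of an EM iteration for a univariate Erlang mixture with a common scale parameter, in the spirit of (though much simpler than) the Lee and Lin (2009) algorithm referred to above; here the shape parameters are held fixed, and the data and starting values are placeholder assumptions.

import numpy as np
from scipy.stats import gamma

def em_erlang_mixture(x, shapes, n_iter=200):
    """EM for a mixture of Erlang(shape_j, common scale) densities, shapes held fixed."""
    shapes = np.asarray(shapes, dtype=float)
    weights = np.full(len(shapes), 1.0 / len(shapes))
    scale = x.mean() / shapes.mean()          # crude starting value
    for _ in range(n_iter):
        # E-step: posterior probability that each observation belongs to each component.
        dens = np.array([w * gamma.pdf(x, a=r, scale=scale)
                         for w, r in zip(weights, shapes)]).T
        z = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of the weights and the common scale.
        weights = z.mean(axis=0)
        scale = x.sum() / (z @ shapes).sum()
    return weights, scale

# Toy claim severities (placeholder data, not the data used in the thesis).
rng = np.random.default_rng(2)
x = np.concatenate([rng.gamma(2.0, 10.0, 400), rng.gamma(6.0, 10.0, 600)])
w_hat, theta_hat = em_erlang_mixture(x, shapes=[2, 6])
print("weights:", np.round(w_hat, 3), " common scale:", round(float(theta_hat), 2))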
|
169 |
Approches micro-macro des dynamiques de populations hétérogènes structurées par âge. Application aux processus auto-excitants et à la démographie / Micro-macro analysis of heterogeneous age-structured population dynamics. Application to self-exciting processes and demography. Boumezoued, Alexandre 13 April 2016 (has links)
This thesis focuses on population dynamics models and their applications, on the one hand to demography and actuarial science, and on the other hand to Hawkes processes. The work explores, through several viewpoints, how population structures evolve over time, both in terms of ages and of characteristics. In five chapters, we develop a common philosophy which studies the population at the scale of the individual in order to better understand the behaviour of aggregate quantities. The first chapter introduces the motivations and details the main contributions in French. In Chapter 2, based on Bensusan et al. (2010–2015), we survey the modelling of characteristic- and age-structured populations and their dynamics, as well as several examples motivated by demographic and actuarial issues. We detail the mathematical construction of such population processes, as well as their link with well-known deterministic equations in demography. We illustrate the simulation algorithm on an example of a cohort effect, and we also discuss the role of the random environment. The two following chapters emphasize the importance of the age pyramid. Chapter 3 uses a particular form of the general model introduced in Chapter 2 in order to study Hawkes processes with general immigrants. In this theoretical part, based on Boumezoued (2015b), we use the concept of the age pyramid to derive new distribution properties for a class of fertility functions which generalize the popular exponential case. Chapter 4 is based on Arnold et al. (2015) and analyses the impact of cause-of-death mortality changes on the population age pyramid, and in particular on the dependency ratio, which is crucial for measuring population ageing. By including birth patterns, this numerical work based on WHO data gives additional insights compared with the existing literature on causes of death, which focuses only on mortality indicators. The last two chapters focus on population heterogeneity. The aim of Chapter 5, based on Boumezoued et al. (2015), is to measure mortality heterogeneity in French longitudinal data, the Échantillon Démographique Permanent. In this work, inspired by recent advances in the statistical literature, we develop a parametric maximum likelihood method for multi-state models which takes into account both interval censoring and reversible transitions. Finally, Chapter 6, based on Boumezoued (2015a), considers the general model introduced in Chapter 2 in which individuals can give birth, change their characteristics and die. The contribution of this theoretical work is the analysis of the population behaviour when individual characteristics change very often. We establish a large-population limit theorem for the age pyramid process, whose dynamics is then described at the limit by birth and death rates averaged over the stable composition in terms of characteristics.
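A minimal sketch of simulating a self-exciting (Hawkes) process by Ogata-style thinning; the exponential kernel and the parameter values are placeholder choices made for illustration, whereas the chapter described above treats more general fertility functions through the age-pyramid representation.

import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    # Event times of a Hawkes process with conditional intensity
    #   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
    # simulated by thinning: between events the intensity decays, so its
    # current value is a valid upper bound for proposing the next point.
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() <= lam_t / lam_bar:   # accept with probability lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

# Subcritical example (alpha / beta < 1, so the process does not explode).
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0)
print(len(times), "events; first few:", np.round(times[:5], 2))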
|
170 |
Algorithmes de machine learning en assurance : solvabilité, textmining, anonymisation et transparence / Machine learning algorithms in insurance: solvency, text mining, anonymization and transparency. Ly, Antoine 19 November 2019 (has links)
In the summer of 2013, the term "Big Data" appeared and attracted strong interest from companies. This thesis examines the contribution of these methods to actuarial science. It addresses both theoretical and practical issues on high-potential themes such as Optical Character Recognition (OCR), text analysis, data anonymization and model interpretability. Starting with the application of machine learning methods to the calculation of economic capital, we then try to better illustrate the boundary that may exist between machine learning and statistics. Highlighting certain advantages and different techniques, we then study the application of deep neural networks to the optical analysis of documents and of text, once extracted. The use of complex methods and the implementation of the General Data Protection Regulation (GDPR) in 2018 led us to study the potential impacts on pricing models. By applying anonymization methods to pure premium calculation models in non-life insurance, we explored different generalization approaches based on unsupervised learning. Finally, as regulations also impose criteria in terms of model explanation, we conclude with a general study of the methods that now allow a better understanding of complex methods such as neural networks.
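A minimal sketch of one unsupervised "generalization" idea compatible with the description above (purely illustrative, not the procedure used in the thesis): cluster policyholders on their rating variables and replace each individual profile by its cluster centre before fitting a pure premium model, then check how many policies share each generalized profile.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Placeholder rating variables for 5,000 policies: driver age and vehicle power.
X = np.column_stack([rng.uniform(18, 80, 5000), rng.uniform(40, 200, 5000)])

# Unsupervised generalization: keep only cluster-level profiles.
km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X)
X_generalized = km.cluster_centers_[km.labels_]

# k-anonymity-style diagnostic: every generalized profile should be shared widely.
_, counts = np.unique(km.labels_, return_counts=True)
print("smallest generalized group size:", counts.min())

# A pure premium model would then be fitted to X_generalized instead of X.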
|