  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

On the distribution of the time to ruin and related topics

Shi, Tianxiang 19 June 2013 (has links)
Following the introduction of the discounted penalty function by Gerber and Shiu (1998), significant progress has been made on the analysis of various ruin-related quantities in risk theory. As we know, the discounted penalty function not only provides a systematic platform to jointly analyze various quantities of interest, but also offers the convenience to extract key pieces of information from a risk management perspective. For example, by eliminating the penalty function, the Gerber-Shiu function becomes the Laplace-Stieltjes transform of the time to ruin, inversion of which results in a series expansion for the associated density of the time to ruin (see, e.g., Dickson and Willmot (2005)). In this thesis, we propose to analyze the long-standing finite-time ruin problem by incorporating the number of claims until ruin into the Gerber-Shiu analysis. As will be seen in Chapter 2, many nice analytic properties of the original Gerber-Shiu function are preserved by this generalized analytic tool. For instance, the Gerber-Shiu function still satisfies a defective renewal equation and can be generally expressed in terms of some roots of Lundberg's generalized equation in the Sparre Andersen risk model. In this thesis, we propose not only to unify previous methodologies on the study of the density of the time to ruin through the use of Lagrange's expansion theorem, but also to provide insight into the nature of the series expansion by identifying the probabilistic contribution of each term in the expansion through analysis involving the distribution of the number of claims until ruin. In Chapter 3, we study the joint generalized density of the time to ruin and the number of claims until ruin in the classical compound Poisson risk model. We also utilize an alternative approach to obtain the density of the time to ruin based on the Lagrange inversion technique introduced by Dickson and Willmot (2005). 
In Chapter 4, relying on the Lagrange expansion theorem for analytic inversion, the joint density of the time to ruin, the surplus immediately before ruin and the number of claims until ruin is examined in the Sparre Andersen risk model with exponential claim sizes and arbitrary interclaim times. To our knowledge, existing results on the finite-time ruin problem in the Sparre Andersen risk model typically involve an exponential assumption on either the interclaim times or the claim sizes (see, e.g., Borovkov and Dickson (2008)). Among the few exceptions, we mention Dickson and Li (2010, 2012), who analyzed the density of the time to ruin for Erlang-n interclaim times. In Chapter 5, we propose a significant breakthrough by utilizing the multivariate version of Lagrange's expansion theorem to obtain a series expansion for the density of the time to ruin under a more general distribution assumption, namely when interclaim times are distributed as a combination of n exponentials. It is worth emphasizing that this technique can also be applied to other areas of applied probability. For instance, the proposed methodology can be used to obtain the distribution of some first passage times for particular stochastic processes. As an illustration, the duration of a busy period in a queueing risk model will be examined. Interestingly, the proposed technique can also be used to analyze some first passage times for compound Poisson processes with diffusion. In Chapter 6, we propose an extension to Kendall's identity (see, e.g., Kendall (1957)) by further examining the distribution of the number of jumps before the first passage time. We show that the main result is particularly relevant to enhance our understanding of some problems of interest, such as the finite-time ruin probability of a dual compound Poisson risk model with diffusion and pricing barrier options issued on an insurer's stock price. 
Another closely related quantity of interest is the so-called occupation time of the surplus process below zero (also referred to as the duration of negative surplus, see, e.g., Egidio dos Reis (1993)) or in a certain interval (see, e.g., Kolkovska et al. (2005)). Occupation times have been widely used as a contingent characteristic to develop advanced derivatives in financial mathematics. In risk theory, they can be used as an important risk management tool to examine the overall health of an insurer's business. The main subject matter of Chapter 7 is to extend the analysis of occupation times to a class of renewal risk processes. We provide explicit expressions for the duration of negative surplus and the double-barrier occupation time in terms of their Laplace-Stieltjes transforms. In the process, we revisit occupation times in the context of the classical compound Poisson risk model and examine some results proposed by Kolkovska et al. (2005). Finally, some concluding remarks and discussion of future research are made in Chapter 8.
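The finite-time quantities above can be illustrated numerically. The following is a minimal Monte Carlo sketch, not the analytic Lagrange-expansion approach of the thesis, of the joint behaviour of the time to ruin and the number of claims until ruin in the classical compound Poisson model; the parameter values (lam = mu = 1, c = 1.5, u = 0) and the time horizon are arbitrary illustration choices.

```python
import random

def simulate_ruin(u, c, lam, mu, t_max, rng):
    """One path of the classical compound Poisson surplus process
    U(t) = u + c*t - S(t), with Poisson(lam) claim arrivals and Exp(mu)
    claim sizes.  Ruin can only occur at claim instants, so we step from
    claim to claim.  Returns (time_to_ruin, claims_until_ruin), or
    (None, None) if the path survives past t_max."""
    t, total_claims, n = 0.0, 0.0, 0
    while True:
        t += rng.expovariate(lam)             # next interclaim time
        if t > t_max:
            return None, None
        n += 1
        total_claims += rng.expovariate(mu)   # next claim size
        if u + c * t - total_claims < 0:
            return t, n

rng = random.Random(42)
lam, mu, c, u = 1.0, 1.0, 1.5, 0.0
runs = 5000
results = [simulate_ruin(u, c, lam, mu, 200.0, rng) for _ in range(runs)]
ruin_times = [t for t, _ in results if t is not None]
claim_counts = [n for _, n in results if n is not None]

psi_hat = len(ruin_times) / runs
print("estimated ruin probability:", round(psi_hat, 3))
# Exact value for exponential claims and u = 0: psi(0) = lam/(c*mu) = 2/3
print("mean number of claims until ruin (given ruin):",
      round(sum(claim_counts) / len(claim_counts), 2))
```

For exponential claims the ruin probability from zero initial surplus has the closed form lam/(c*mu) = 2/3 here, which the simulated estimate should approximate.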
152

Toward a unified global regulatory capital framework for life insurers

Sharara, Ishmael 28 February 2011 (has links)
In many regions of the world, the solvency regulation of insurers is becoming more principle-based and market oriented. However, the exact forms of the solvency standards that are emerging in individual jurisdictions are not entirely consistent. A common risk and capital framework can level the global playing field and possibly reduce the cost of capital for insurers. In this thesis, a conceptual framework for measuring the insolvency risk of life insurance companies will be proposed. The two main advantages of the proposed solvency framework are that it addresses the issue of incentives in the calibration of the capital requirements and that it provides an associated decomposition of the insurer's insolvency risk by term. The proposed term structure of insolvency risk is an efficient risk summary that should be readily accessible to both regulators and policyholders. Given the inherent complexity of the long-term guarantees and options of typical life insurance policies, the term structure of insolvency risk is able to provide stakeholders with more complete information than a single number that relates to a specific period. The capital standards for life insurers that currently exist or have been proposed in Canada, the U.S., and the EU are then reviewed within the risk and capital measurement framework of the proposed standard to identify potential shortcomings.
153

Financial Risk Management of Guaranteed Minimum Income Benefits Embedded in Variable Annuities

Marshall, Claymore January 2011 (has links)
A guaranteed minimum income benefit (GMIB) is a long-dated option that can be embedded in a deferred variable annuity. The GMIB is attractive because, for policyholders who plan to annuitize, it offers protection against poor market performance during the accumulation phase, and adverse interest rate experience at annuitization. The GMIB also provides an upside equity guarantee that resembles the benefit provided by a lookback option. We price the GMIB, and determine the fair fee rate that should be charged. Due to the long dated nature of the option, conventional hedging methods, such as delta hedging, will only be partially successful. Therefore, we are motivated to find alternative hedging methods which are practicable for long-dated options. First, we measure the effectiveness of static hedging strategies for the GMIB. Static hedging portfolios are constructed based on minimizing the Conditional Tail Expectation of the hedging loss distribution, or minimizing the mean squared hedging loss. Next, we measure the performance of semi-static hedging strategies for the GMIB. We present a practical method for testing semi-static strategies applied to long term options, which employs nested Monte Carlo simulations and standard optimization methods. The semi-static strategies involve periodically rebalancing the hedging portfolio at certain time intervals during the accumulation phase, such that, at the option maturity date, the hedging portfolio payoff is equal to or exceeds the option value, subject to an acceptable level of risk. While we focus on the GMIB as a case study, the methods we utilize are extendable to other types of long-dated options with similar features.
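The mean-squared-hedging-loss criterion mentioned above has a particularly simple special case: choosing a static portfolio of a bond and the underlying stock to replicate a target payoff is an ordinary least-squares regression of the target payoff on the hedge-instrument payoffs across scenarios. The sketch below uses made-up lognormal scenarios and a call-like payoff, not the GMIB model of the thesis.

```python
import random, math

rng = random.Random(7)

# Terminal-stock scenarios (lognormal) and a call-like target payoff;
# the strike and distribution parameters are arbitrary illustration values.
scenarios = [100.0 * math.exp(rng.gauss(0.02, 0.2)) for _ in range(2000)]
strike = 100.0
target = [max(s - strike, 0.0) for s in scenarios]

# Static hedge: a units of bond (payoff 1) and b shares of stock (payoff S),
# chosen to minimize the mean squared hedging loss -> linear regression.
n = len(scenarios)
mean_s = sum(scenarios) / n
mean_y = sum(target) / n
cov_sy = sum((s - mean_s) * (y - mean_y)
             for s, y in zip(scenarios, target)) / n
var_s = sum((s - mean_s) ** 2 for s in scenarios) / n
b = cov_sy / var_s
a = mean_y - b * mean_s

residuals = [y - (a + b * s) for s, y in zip(scenarios, target)]
rmse_hedged = math.sqrt(sum(r * r for r in residuals) / n)
sd_unhedged = math.sqrt(sum((y - mean_y) ** 2 for y in target) / n)
print("unhedged payoff sd:", round(sd_unhedged, 2))
print("hedged RMSE:       ", round(rmse_hedged, 2))
```

The regression slope plays the role of a static "delta"; by construction the hedged residual risk cannot exceed the unhedged payoff risk, which is the sense in which even a crude static hedge is only partially successful for a long-dated option.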
154

Coherent Distortion Risk Measures in Portfolio Selection

Feng, Ming Bin January 2011 (has links)
The theme of this thesis relates to solving optimal portfolio selection problems using linear programming. There are two key contributions in this thesis. The first contribution is to generalize the well-known linear optimization framework of Conditional Value-at-Risk (CVaR)-based portfolio selection problems (see Rockafellar and Uryasev (2000, 2002)) to more general risk measure portfolio selection problems. In particular, the class of risk measures under consideration is called the Coherent Distortion Risk Measure (CDRM) and is the intersection of two well-known classes of risk measures in the literature: the Coherent Risk Measure (CRM) and the Distortion Risk Measure (DRM). In addition to CVaR, other risk measures which belong to CDRM include the Wang Transform (WT) measure, the Proportional Hazard (PH) transform measure, and the lookback (LB) distortion measure. Our generalization implies that portfolio selection problems can be solved very efficiently using the linear programming approach and over a much wider class of risk measures. The second contribution of the thesis is to establish the equivalences among four formulations of CDRM optimization problems: return maximization subject to a CDRM constraint, CDRM minimization subject to a return constraint, return-CDRM utility maximization, and CDRM-based Sharpe Ratio maximization. Equivalences among these four formulations are established in the sense that they produce the same efficient frontier when varying the parameters in their corresponding problems. We point out that the first three formulations have already been investigated in Krokhmal et al. (2002) with milder assumptions on risk measures (convex functionals of portfolio weights). Here we apply their results to CDRM and establish the fourth equivalence. For each of these formulations, the relationship between its given parameter and the implied parameters for the other three formulations is explored. 
Such equivalences and relationships can help verify consistencies (or inconsistencies) of risk management with different objectives and constraints. They are also helpful for uncovering the implied information of a decision-making process or of a given investment market. We conclude the thesis by conducting two case studies to illustrate the methodologies and implementations of our linear optimization approach, to verify the equivalences among the four different problem formulations, and to investigate the properties of different members of CDRM. In addition, we examine the efficiency (or inefficiency) of the so-called 1/n portfolio strategy in terms of the trade-off between portfolio return and portfolio CDRM. The properties of optimal portfolios and their returns with respect to different CDRM minimization problems are compared through numerical results.
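The linear-programming tractability referred to above rests on the Rockafellar-Uryasev representation CVaR_beta(L) = min over alpha of { alpha + E[(L - alpha)+]/(1 - beta) }, whose sample version is linear in auxiliary variables. The sketch below, on illustrative scenario data rather than anything from the thesis, checks that minimizing this function over candidate alpha values reproduces the direct tail-average definition of CVaR.

```python
import random

rng = random.Random(1)
losses = [rng.gauss(0.0, 1.0) for _ in range(100)]   # equally likely scenario losses
beta = 0.95
k = int(round((1 - beta) * len(losses)))             # worst 5 of 100 scenarios

# Direct definition for equiprobable scenarios: mean of the k largest losses.
worst = sorted(losses, reverse=True)[:k]
cvar_direct = sum(worst) / k

def rockafellar_uryasev(alpha):
    """F(alpha) = alpha + E[(L - alpha)+] / (1 - beta); convex and
    piecewise linear in alpha, with minimum value CVaR_beta."""
    excess = sum(max(l - alpha, 0.0) for l in losses) / len(losses)
    return alpha + excess / (1 - beta)

# For a convex piecewise-linear function it suffices to check the kinks,
# i.e. the scenario losses themselves (an optimizer is alpha = VaR).
cvar_ru = min(rockafellar_uryasev(a) for a in losses)
print("CVaR (tail average):        ", round(cvar_direct, 4))
print("CVaR (Rockafellar-Uryasev): ", round(cvar_ru, 4))
```

In a portfolio problem the losses become linear functions of the weights, so the same minimization turns into a linear program with one auxiliary variable per scenario; that is the structure the thesis extends from CVaR to the whole CDRM class.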
155

The optimality of a dividend barrier strategy for Levy insurance risk processes, with a focus on the univariate Erlang mixture

Ali, Javid January 2011 (has links)
In insurance risk theory, the surplus of an insurance company is modelled to monitor and quantify its risks. With the outgo of claims and inflow of premiums, the insurer needs to determine what financial portfolio ensures the soundness of the company’s future while satisfying the shareholders’ interests. It is usually assumed that the net profit condition (i.e. the expectation of the process is positive) is satisfied, which then implies that this process would drift towards infinity. To correct this unrealistic behaviour, the surplus process was modified to include the payout of dividends until the time of ruin. Under this more realistic surplus process, a topic of growing interest is determining which dividend strategy is optimal, where optimality is in the sense of maximizing the expected present value of dividend payments. This problem dates back to the work of Bruno De Finetti (1957) where it was shown that if the surplus process is modelled as a random walk with ± 1 step sizes, the optimal dividend payment strategy is a barrier strategy. Such a strategy pays as dividends any excess of the surplus above some threshold. Since then, other examples where a barrier strategy is optimal include the Brownian motion model (Gerber and Shiu (2004)) and the compound Poisson process model with exponential claims (Gerber and Shiu (2006)). In this thesis, we focus on the optimality of a barrier strategy in the more general Lévy risk models. The risk process will be formulated as a spectrally negative Lévy process, a continuous-time stochastic process with stationary increments which provides an extension of the classical Cramér-Lundberg model. This includes the Brownian and the compound Poisson risk processes as special cases. In this setting, results are expressed in terms of “scale functions”, a family of functions known only through their Laplace transform. 
In Loeffen (2008), we can find a sufficient condition on the jump distribution of the process for a barrier strategy to be optimal. This condition was then improved upon by Loeffen and Renaud (2010) while considering a more general control problem. The first chapter provides a brief review of the theory of spectrally negative Lévy processes and scale functions. In Chapter 2, we define the optimal dividends problem and provide existing results in the literature. When the surplus process is given by the Cramér-Lundberg process with a Brownian motion component, we provide a sufficient condition on the parameters of this process for the optimality of a dividend barrier strategy. Chapter 3 focuses on the case when the claims distribution is given by a univariate mixture of Erlang distributions with a common scale parameter. Analytical results for the Value-at-Risk and Tail-Value-at-Risk, and the Euler risk contribution to the Conditional Tail Expectation are provided. Additionally, we give some results for the scale function and the optimal dividends problem. In the final chapter, we propose an expectation maximization (EM) algorithm similar to that in Lee and Lin (2009) for fitting the univariate Erlang mixture distribution to data. This algorithm is implemented and numerical results on the goodness of fit to sample data and on the optimal dividends problem are presented.
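What makes the Value-at-Risk computations above tractable is that Erlang mixtures with a common scale parameter have closed-form distribution functions. A small sketch of that machinery follows; the weights, shapes and scale below are arbitrary illustration values, not fitted parameters from the thesis.

```python
import math

# Univariate mixture of Erlangs with common scale theta: mixing weights
# w_j and integer shapes k_j (illustrative choices).
weights = [0.5, 0.3, 0.2]
shapes = [1, 3, 7]
theta = 2.0

def mixture_cdf(x):
    """P(X <= x): each Erlang(k, theta) CDF has the closed form
    1 - exp(-x/theta) * sum_{n < k} (x/theta)^n / n!."""
    if x <= 0:
        return 0.0
    z = x / theta
    total = 0.0
    for w, k in zip(weights, shapes):
        tail = math.exp(-z) * sum(z ** n / math.factorial(n) for n in range(k))
        total += w * (1.0 - tail)
    return total

def var_level(p, lo=0.0, hi=1000.0, tol=1e-10):
    """Value-at-Risk at level p by bisection on the (continuous,
    strictly increasing) mixture CDF."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mixture_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

mean = theta * sum(w * k for w, k in zip(weights, shapes))  # theta * sum w_j k_j
v95 = var_level(0.95)
print("mean:", round(mean, 4))
print("VaR at 95%:", round(v95, 4))
```

The same closed forms are what allow the Tail-Value-at-Risk and Euler contributions to be written analytically, and they are also the reason an EM algorithm for this family (as in Lee and Lin (2009)) has tractable update steps.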
156

The role of immune-genetic factors in modelling longitudinally measured HIV bio-markers including the handling of missing data.

Odhiambo, Nancy. 20 December 2013 (has links)
Since the discovery of AIDS among gay men in the United States of America in 1981, it has become a major world pandemic, with over 40 million individuals infected worldwide. According to the Joint United Nations Programme on HIV/AIDS epidemic update in 2012, 28.3 million individuals are living with HIV worldwide, 23.5 million of them in sub-Saharan Africa and 4.8 million in Asia. The report showed that approximately 1.7 million individuals have died from AIDS-related deaths, only about 50% of the estimated 34 million people living with HIV know their HIV status, a total of 2.5 million individuals are newly infected, 14.8 million individuals are eligible for HIV treatment and only 8 million are on HIV treatment (Joint United Nations Programme on HIV/AIDS and health sector progress towards universal access: progress report, 2011). Numerous studies have been carried out to understand the pathogenesis and the dynamics of this deadly disease (AIDS), but its pathogenesis is still poorly understood. More understanding of the disease is needed so as to reduce the rate of its acquisition. Researchers have come up with statistical and mathematical models which help in understanding and predicting the progression of the disease better, so as to find ways in which its acquisition can be prevented and controlled. Previous studies on HIV/AIDS have shown that inter-individual variability plays an important role in susceptibility to HIV-1 infection, its transmission, progression and even response to antiviral therapy. Certain immuno-genetic factors (human leukocyte antigen (HLA), Interleukin-10 (IL-10) and single nucleotide polymorphisms (SNPs)) have been associated with this variability among individuals. In this dissertation we reaffirm, through statistical modelling and analysis, previous findings that immuno-genetic factors could play a role in susceptibility, transmission, progression and even response to antiviral therapy. 
This will be done using the Sinikithemba study data from the HIV Pathogenesis Programme (HPP) at the Nelson Mandela School of Medicine, University of KwaZulu-Natal, consisting of 451 HIV-positive and treatment-naive individuals, to model how the HIV bio-markers (viral load and CD4 count) are associated with the immuno-genetic factors using linear mixed models. We conclude the dissertation by dealing with drop-out, which is a pervasive problem in longitudinal studies, regardless of how well they are designed and executed. We demonstrate the application and performance of multiple imputation (MI) in handling drop-out using longitudinal count data from the Sinikithemba study with log viral load as the response. Our aim is to investigate the influence of drop-out on the evolution of HIV bio-markers in a model including selected genetic factors as covariates, assuming the missing mechanism is missing at random (MAR). We later compare the results obtained from the MI method to those obtained from the incomplete dataset. From the results, we can clearly see that there is a marked difference between the findings obtained from the two analyses. Therefore, drop-out needs to be accounted for, since ignoring it can lead to biased results. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
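The multiple-imputation workflow described above can be sketched on synthetic data. The example below is a deliberately simplified illustration: the declining trend and the MAR drop-out mechanism are invented, missing responses are drawn around a single OLS fit (proper MI would also draw the regression parameters), and pooling is shown only for the point estimate, omitting Rubin's variance formula.

```python
import random, math

rng = random.Random(3)

# Synthetic longitudinal-style data: a response that declines with visit
# time t (loosely, a log viral-load-like trend).  True model:
# y = 5 - 0.3*t + N(0, 0.5).
data = []
for _ in range(200):
    t = rng.randrange(10)
    data.append([t, 5.0 - 0.3 * t + rng.gauss(0.0, 0.5)])

# MAR drop-out: later visits are more likely to be missing, with the
# missingness probability depending only on the observed covariate t.
for row in data:
    if rng.random() < 0.06 * row[0]:
        row[1] = None

def ols(pairs):
    """Simple linear regression y ~ a + b*t; returns (a, b, sigma)."""
    n = len(pairs)
    mt = sum(t for t, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    b = (sum((t - mt) * (y - my) for t, y in pairs)
         / sum((t - mt) ** 2 for t, _ in pairs))
    a = my - b * mt
    sigma = math.sqrt(sum((y - a - b * t) ** 2 for t, y in pairs) / (n - 2))
    return a, b, sigma

observed = [(t, y) for t, y in data if y is not None]
a, b, sigma = ols(observed)

# Multiple imputation: M completed data sets, each filling the missing
# responses with a draw from the fitted regression, then pooling.
M = 10
slopes = []
for _ in range(M):
    completed = [(t, y if y is not None
                  else a + b * t + rng.gauss(0.0, sigma))
                 for t, y in data]
    slopes.append(ols(completed)[1])

pooled_slope = sum(slopes) / M
print("pooled slope estimate:", round(pooled_slope, 3), "(true value -0.3)")
```

Even this stripped-down version shows the mechanics: impute several times, analyze each completed data set with the same model, then combine the estimates across imputations.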
157

Bayesian hierarchical spatial and spatio-temporal modeling and mapping of tuberculosis in Kenya.

Iddrisu, Abdul-Karim. 20 December 2013 (has links)
The global spread of infectious disease threatens human, domestic animal, and wildlife health. A proper understanding of the global distribution of these diseases is an important part of disease management and policy making. However, data are subject to complexities caused by heterogeneity across host classes and space-time epidemic processes [Waller et al., 1997, Hosseini et al., 2006]. The use of frequentist methods in biostatistics and epidemiology is common, and they are therefore extensively utilized in answering varied research questions. In this thesis, we propose a hierarchical Bayesian approach to study the spatial and the spatio-temporal pattern of tuberculosis in Kenya [Knorr-Held et al., 1998, Knorr-Held, 1999, López-Quílez and Muñoz, 2009, Waller et al., 1997, Besag et al., 1991]. The space-time interaction of risk (ψ_ij) is an important factor considered in this thesis. The Markov Chain Monte Carlo (MCMC) method, via the WinBUGS and R packages, was used for simulations [Ntzoufras, 2011, Congdon, 2010, David et al., 1995, Gimenez et al., 2009, Brian, 2003], and the Deviance Information Criterion (DIC), proposed by Spiegelhalter et al. [2002], was used for model comparison and selection. Variation in TB risk is observed among Kenyan counties, with clustering among counties with high TB relative risk (RR). HIV prevalence is identified as the dominant determinant of TB. We found clustering and heterogeneity of risk among high-rate counties, and the overall TB risk is slightly decreasing over 2002-2009. The space-time interaction of TB relative risk is found to be increasing among rural counties that share boundaries with urban counties with high TB risk. This is a result of the ability of the models to borrow strength from neighbouring counties, such that nearby counties have similar risk. 
Although the approaches are less than ideal, we hope that our formulations provide a useful stepping stone in the development of spatial and spatio-temporal methodology for the statistical analysis of risk from TB in Kenya. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
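The DIC used above for model comparison is D-bar + pD, where the effective number of parameters pD is the posterior mean deviance minus the deviance at the posterior mean. A minimal sketch on a one-parameter normal-mean model follows; for clarity the posterior is sampled directly rather than by MCMC in WinBUGS, and all data are simulated.

```python
import random, math

rng = random.Random(11)

# Data: y_i ~ N(theta, 1).  With a flat prior the posterior for theta is
# N(ybar, 1/n), so we can draw posterior samples directly.
n = 50
theta_true = 2.0
y = [theta_true + rng.gauss(0.0, 1.0) for _ in range(n)]
ybar = sum(y) / n
draws = [ybar + rng.gauss(0.0, math.sqrt(1.0 / n)) for _ in range(4000)]

def deviance(theta):
    """D(theta) = -2 log L for the N(theta, 1) model."""
    return sum((yi - theta) ** 2 for yi in y) + n * math.log(2 * math.pi)

d_bar = sum(deviance(th) for th in draws) / len(draws)  # posterior mean deviance
d_hat = deviance(sum(draws) / len(draws))               # deviance at posterior mean
p_d = d_bar - d_hat                                     # effective no. of parameters
dic = d_bar + p_d
print("pD  =", round(p_d, 2), "(close to 1 for this one-parameter model)")
print("DIC =", round(dic, 2))
```

In the spatial and spatio-temporal models of the thesis the same two quantities are accumulated over the MCMC output, and the model with the smaller DIC is preferred.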
158

Longitudinal survey data analysis.

January 2006 (has links)
To investigate the effect of environmental pollution on the health of children in the Durban South Industrial Basin (DSIB), due to its proximity to industrial activities, 233 children from five primary schools were considered. Three of these schools were located in the south of Durban while the other two were in the northern residential areas that were closer to industrial activities. Data collected included the participants' demographic, health, occupational, social and economic characteristics. In addition, environmental information was monitored throughout the study, specifically measurements of the levels of some ambient air pollutants. The objective of this thesis is to investigate which of these factors had an effect on the lung function of the children. In order to achieve this objective, different sample survey data analysis techniques are investigated, including the design-based and model-based approaches. The nature of the survey data finally leads to the longitudinal mixed model approach. The multicollinearity between the pollutant variables leads to the fitting of two separate models: one with the peak counts as the independent pollutant measures and the other with the 8-hour maximum moving averages as the independent pollutant variables. In the selection of the fixed-effects structure, a scatter-plot smoother known as the loess fit is applied to the individual profile plots of the response variable. The random effects and the residual effects are assumed to have different covariance structures. The unstructured (UN) covariance structure is used for the random effects while, using the Akaike information criterion (AIC), the compound symmetric (CS) covariance structure is selected as appropriate for the residual effects. To check the model fit, the profiles of the fitted and observed values of the dependent variables are compared graphically. The data are also characterized by the problem of intermittent missingness. 
The type of missingness is investigated by applying a modified logistic regression test of the missing at random (MAR) assumption. The results indicate that school location, sex and weight are significant factors for the children's respiratory conditions. More specifically, the children in schools located in the northern residential areas are found to have poorer respiratory conditions compared to those in the Durban-South schools. In addition, poor respiratory conditions are also identified for overweight children. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2006.
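AIC-based selection between residual covariance structures, as described above, can be sketched for balanced repeated measures, where the compound-symmetric (CS) matrix has a closed-form inverse and determinant. Everything below, the simulated data, the crude grid search and the parameter values, is an illustration of the idea only, not the thesis's actual UN-versus-CS comparison.

```python
import random, math

rng = random.Random(5)

# Balanced repeated measures: n subjects, m occasions.  A subject random
# effect induces CS correlation; the true intra-subject correlation here
# is 1/(1+1) = 0.5 (illustrative values).
n, m = 150, 4
data = []
for _ in range(n):
    b = rng.gauss(0.0, 1.0)                    # subject effect
    data.append([b + rng.gauss(0.0, 1.0) for _ in range(m)])

grand = sum(sum(row) for row in data) / (n * m)
stats = [(sum((y - grand) ** 2 for y in row), sum(y - grand for y in row))
         for row in data]

def loglik_cs(s2, rho):
    """Gaussian log-likelihood under Sigma = s2 * [(1-rho) I + rho J],
    using the closed-form inverse and determinant of a CS matrix."""
    logdet = (m * math.log(s2) + (m - 1) * math.log(1 - rho)
              + math.log(1 - rho + m * rho))
    c = rho / (1 - rho + m * rho)
    quad = sum(ssq - c * tot * tot for ssq, tot in stats) / (s2 * (1 - rho))
    return -0.5 * (n * (m * math.log(2 * math.pi) + logdet) + quad)

# Crude grid-search fits (real software would optimize these properly).
grid_s2 = [0.5 + 0.05 * i for i in range(61)]       # 0.50 .. 3.50
grid_rho = [0.01 * i for i in range(96)]            # 0.00 .. 0.95
ll_cs = max(loglik_cs(s2, r) for s2 in grid_s2 for r in grid_rho)
ll_ind = max(loglik_cs(s2, 0.0) for s2 in grid_s2)  # independence: rho = 0

aic_cs = -2 * ll_cs + 2 * 3     # parameters: mean, s2, rho
aic_ind = -2 * ll_ind + 2 * 2   # parameters: mean, s2
print("AIC, compound symmetry:", round(aic_cs, 1))
print("AIC, independence:     ", round(aic_ind, 1))
```

Because the data were generated with genuine within-subject correlation, the CS structure should earn its extra parameter and achieve the lower AIC.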
160

Forecasting the monthly electricity consumption of municipalities in KwaZulu-Natal.

Walton, Alison Norma. January 1997 (has links)
Eskom is the major electricity supplier in South Africa, and medium-term forecasting within the company is a critical activity to ensure that enough electricity is generated to support the country's growth, that the networks can supply the electricity, and that the revenue derived from electricity consumption is managed efficiently. This study investigates the most suitable forecasting technique for predicting monthly electricity consumption, one year ahead, for four major municipalities within KwaZulu-Natal. / Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 1997.
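The abstract does not name the technique ultimately chosen, but a standard candidate for monthly consumption series with trend and annual seasonality is additive Holt-Winters exponential smoothing. The sketch below uses synthetic data and fixed, untuned smoothing weights, so it illustrates the mechanics rather than the thesis's results.

```python
import math, random

rng = random.Random(2)

# Synthetic monthly consumption: linear trend + annual seasonality + noise.
m = 12
series = [100.0 + 0.5 * t + 10.0 * math.sin(2 * math.pi * t / m)
          + rng.gauss(0.0, 1.0) for t in range(72)]
train, test = series[:60], series[60:]

# Additive Holt-Winters with fixed smoothing weights (in practice these
# would be tuned, e.g. by minimizing in-sample forecast error).
alpha, beta, gamma = 0.3, 0.05, 0.2
level = sum(train[:m]) / m
trend = (sum(train[m:2 * m]) - sum(train[:m])) / (m * m)
seas = [train[i] - level for i in range(m)]

for t in range(m, len(train)):
    prev_level = level
    level = alpha * (train[t] - seas[t % m]) + (1 - alpha) * (level + trend)
    trend = beta * (level - prev_level) + (1 - beta) * trend
    seas[t % m] = gamma * (train[t] - level) + (1 - gamma) * seas[t % m]

# Forecast the next 12 months: extrapolated level/trend plus seasonal index.
forecast = [level + (h + 1) * trend + seas[(len(train) + h) % m]
            for h in range(12)]
mae = sum(abs(f - a) for f, a in zip(forecast, test)) / 12
print("one-year-ahead MAE:", round(mae, 2))
```

The one-year-ahead horizon matches the forecasting task described in the abstract: fit on the history, then project twelve monthly values from the last fitted level, trend and seasonal indices.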
