
Directional Control of Generating Brownian Path under Quasi Monte Carlo

Liu, Kai January 2012
Quasi-Monte Carlo (QMC) methods are playing an increasingly important role in computational finance, a trend attributable to the increased complexity of derivative securities and the sophistication of financial models. Simple closed-form solutions for these finance applications typically do not exist, so numerical methods are needed to approximate their solutions. The QMC method has been proposed as an alternative to the Monte Carlo (MC) method for this purpose. Unlike MC methods, the efficiency of QMC-based methods is highly dependent on the dimensionality of the problem. In particular, numerous studies have documented, under the Black-Scholes model, the critical role of the generating matrix used to simulate the Brownian paths. Numerical results support the notion that a generating matrix which reduces the effective dimension of the underlying problem increases the efficiency of QMC. Consequently, dimension reduction methods such as principal component analysis, the Brownian bridge, linear transformation and orthogonal transformation have been proposed to further enhance QMC. Motivated by these results, we first propose a new measure to quantify the effective dimension. We then propose a new dimension reduction method which we refer to as the directional control (DC) method. The proposed DC method has the advantage that it depends explicitly on the given function of interest. Furthermore, by appropriately assigning the direction of importance of the given function, the proposed method optimally determines the generating matrix used to simulate the Brownian paths. Because of the flexibility of our proposed method, it can be shown that many of the existing dimension reduction methods are special cases of the proposed DC method. Finally, many numerical examples are provided to support the competitive efficiency of the proposed method.
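The role of the generating matrix can be made concrete with a small sketch (a standard textbook illustration, not code from the thesis): both constructions below map the same vector of independent standard normals to a discretized Brownian path, but the Brownian-bridge ordering lets the first few coordinates determine the path's large-scale shape, which is what reduces the effective dimension.

```python
import math

def standard_path(z, T=1.0):
    """Standard step-by-step construction: W_{k+1} = W_k + sqrt(dt) * z_k."""
    m = len(z)
    dt = T / m
    w, path = 0.0, []
    for zk in z:
        w += math.sqrt(dt) * zk
        path.append(w)
    return path

def brownian_bridge_path(z, T=1.0):
    """Brownian-bridge construction for m = 2^k steps: the first normal fixes
    the terminal value, later normals fill in progressively finer detail."""
    m = len(z)
    assert m & (m - 1) == 0, "m must be a power of two"
    w = [0.0] * (m + 1)                 # w[0] = 0, w[m] = terminal value
    w[m] = math.sqrt(T) * z[0]
    idx, h = 1, m
    while h > 1:
        half = h // 2
        for left in range(0, m, h):
            right, mid = left + h, left + half
            t_l, t_m, t_r = left * T / m, mid * T / m, right * T / m
            # conditional mean and variance of W(t_m) given the endpoints
            mean = ((t_r - t_m) * w[left] + (t_m - t_l) * w[right]) / (t_r - t_l)
            var = (t_m - t_l) * (t_r - t_m) / (t_r - t_l)
            w[mid] = mean + math.sqrt(var) * z[idx]
            idx += 1
        h = half
    return w[1:]
```

With QMC point sets, whose early coordinates are the most evenly distributed, assigning those coordinates to the coarse structure of the path is what typically improves efficiency.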

Economic Pricing of Mortality-Linked Securities

Zhou, Rui January 2012
In previous research on pricing mortality-linked securities, the no-arbitrage approach is often used. However, this method, which takes market prices as given, is difficult to implement in today's embryonic market where there are few traded securities. In particular, with limited market price data, identifying a risk-neutral measure requires strong assumptions. In this thesis, we approach the pricing problem from a different angle by considering economic methods. We propose pricing approaches for both competitive and non-competitive markets. In the competitive market, we treat pricing as a Walrasian tâtonnement process, in which prices are determined through a gradual calibration of supply and demand. Such a pricing framework provides us with a pair of supply and demand curves. From these curves we can tell whether there will be any trade between the counterparties and, if so, at what price the mortality-linked security will be traded. This method does not require the market prices of other mortality-linked securities as input, sparing us the problems associated with the lack of market price data. We extend the pricing framework to incorporate population basis risk, which arises when a pension plan relies on standardized instruments to hedge its longevity risk exposure. This extension allows us to obtain the price and trading quantity of mortality-linked securities in the presence of population basis risk. The resulting supply and demand curves help us understand how population basis risk affects the behaviors of agents. We apply the method to a hypothetical longevity bond, using real mortality data from different populations. Our illustrations show that, interestingly, population basis risk can affect the price of a mortality-linked security in different directions, depending on the properties of the populations involved. We have also examined the impact of transitory mortality jumps on trading in a competitive market. Mortality dynamics are subject to jumps caused by events such as the 1918 Spanish flu. Such jumps can have a significant impact on the prices of mortality-linked securities and should therefore be taken into account in modeling. Although several single-population mortality models with jump effects have been developed, they are not adequate for trades in which population basis risk exists. We first develop a two-population mortality model with transitory jump effects, and then use it to examine how mortality jumps may affect the supply and demand of mortality-linked securities. Finally, we model the pricing process in a non-competitive market as a bargaining game. Nash's bargaining solution is applied to obtain a unique trading contract. Since it does not require a competitive market, this approach is more appropriate for the current mortality-linked security market. We compare this approach with the competitive-market pricing method and find that both lead to Pareto optimal outcomes.
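The tâtonnement idea can be sketched as follows (hypothetical linear supply and demand curves, purely for illustration; in the thesis the curves are derived from the agents' preferences): the price is adjusted in proportion to excess demand until the market clears.

```python
def tatonnement(demand, supply, p0=1.0, step=0.1, tol=1e-8, max_iter=10000):
    """Adjust the price in proportion to excess demand until it clears."""
    p = p0
    for _ in range(max_iter):
        excess = demand(p) - supply(p)
        if abs(excess) < tol:
            return p
        p += step * excess      # raise price when demand exceeds supply
    raise RuntimeError("no convergence")

# Hypothetical linear curves for a longevity bond (illustration only).
demand = lambda p: max(10.0 - 2.0 * p, 0.0)   # hedger's demand falls with price
supply = lambda p: 3.0 * p                     # investor's supply rises with price

p_star = tatonnement(demand, supply)           # clearing price
q_star = supply(p_star)                        # traded quantity
```

If the curves never cross at a positive quantity, no trade occurs, which is exactly the qualitative information the supply-demand framework provides.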

GARCH modelling of volatility in the Johannesburg Stock Exchange index.

Mzamane, Tsepang Patrick. 17 December 2013
Modelling and forecasting stock market volatility is a critical issue in various fields of finance and economics. Forecasting volatility in stock markets finds extensive use in portfolio management, risk management and option pricing. The primary objective of this study was to describe the volatility in the Johannesburg Stock Exchange (JSE) index using univariate and multivariate GARCH models. We used daily log-returns of the JSE index over the period 6 June 1995 to 30 June 2012. In the univariate GARCH modelling, both asymmetric and symmetric GARCH models were employed. We investigated volatility in the market using the simple GARCH, GJR-GARCH, EGARCH and APARCH models under different distributional assumptions on the error terms. The study indicated that volatility in the residuals and the leverage effect were present in the JSE index returns. Secondly, we explored the dynamics of the correlation between the JSE index, the FTSE-100 and the NASDAQ-100 index on the basis of weekly returns over the period 6 June 1995 to 30 June 2012. The DCC-GARCH(1,1) model was employed to study the correlation dynamics. The results suggested that the correlation between the JSE index and the other two indices varied over time. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
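For readers unfamiliar with the model family, the conditional-variance recursion at the heart of the simple GARCH(1,1) model can be sketched in a few lines (an illustrative simulation with made-up parameters, not the study's fitted SAS models):

```python
import random

def garch11_simulate(omega, alpha, beta, n, seed=0):
    """Simulate returns from a GARCH(1,1) model with Gaussian innovations.

    Conditional variance: sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    Requires alpha + beta < 1 for covariance stationarity.
    """
    assert alpha + beta < 1.0
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    returns, variances = [], []
    for _ in range(n):
        r = rng.gauss(0.0, 1.0) * sigma2 ** 0.5
        returns.append(r)
        variances.append(sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return returns, variances
```

With alpha + beta close to one, as is typical for equity-index returns, the simulated variance exhibits the persistent volatility clustering that GARCH models are designed to capture; the asymmetric variants (GJR-GARCH, EGARCH, APARCH) additionally let negative shocks raise variance more than positive ones.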

Flexible statistical modeling of deaths by diarrhoea in South Africa.

Mbona, Sizwe Vincent. 17 December 2013
The purpose of this study is to investigate and understand data which are grouped into categories. Various statistical methods were studied for categorical binary responses to investigate the causes of death from diarrhoea in South Africa. Data collected included death type, sex, marital status, province of birth, province of death, place of death, province of residence, education status, smoking status and pregnancy status. The objective of this thesis is to investigate which of the above explanatory variables are most strongly associated with death from diarrhoea in South Africa. To achieve this objective, different sample survey data analysis techniques are investigated. These include sketching bar graphs and using several statistical methods, namely logistic regression, survey logistic regression, the generalised linear model, the generalised linear mixed model, and the generalised additive model. In the selection of the fixed effects, a bar graph is applied to the response variable individual profile graphs. A logistic regression model is used to identify which of the explanatory variables are most strongly associated with diarrhoea. Statistical analyses are conducted in SAS (Statistical Analysis Software). Hosmer and Lemeshow (2000) propose a statistic that they show, through simulation, is distributed as chi-square when there is no replication in any of the subpopulations. Owing to the similarity with the Hosmer and Lemeshow test for logistic regression, Parzen and Lipsitz (1999) suggest using 10 risk score groups. Nevertheless, based on simulation results, May and Hosmer (2004) show that, for all samples or samples with a large percentage of censored observations, the test rejects the null hypothesis too often. They suggest that the number of groups be chosen such that G=integer of {maximum of 12 and minimum of 10}. Lemeshow et al. (2004) state that the observations are first sorted in increasing order of their estimated event probability. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
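The grouping idea behind the Hosmer-Lemeshow test can be made concrete with a minimal sketch (illustrative only, with G groups of near-equal size formed after sorting by estimated event probability, as the abstract describes): observed and expected event counts are compared group by group with a chi-square-type statistic.

```python
def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic for a fitted logistic model.

    y: 0/1 outcomes; p: fitted event probabilities.
    Sorts observations by fitted probability, forms `groups` near-equal-size
    groups, and sums (O - E)^2 / (E * (1 - E/n_g)) over the groups, where
    E is the sum of fitted probabilities in the group.
    """
    pairs = sorted(zip(p, y))               # sort by estimated probability
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        n_g = len(chunk)
        observed = sum(yi for _, yi in chunk)
        expected = sum(pi for pi, _ in chunk)
        var = expected * (1.0 - expected / n_g)
        if var > 0:
            stat += (observed - expected) ** 2 / var
    return stat  # compared to chi-square with (groups - 2) degrees of freedom
```

A perfectly calibrated model gives a small statistic; gross miscalibration inflates it, which is what the test's chi-square comparison detects.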

Multivariate time series modelling.

Vayej, Suhayl Muhammed. January 2012
This research is based on a detailed description of model building for multivariate time series models. Under the assumption of stationarity, identification, estimation of the parameters and diagnostic checking for the vector autoregressive (VAR(p)), vector moving average (VMA(q)) and vector autoregressive moving average (VARMA(p, q)) models are described in detail. With reference to the non-stationary case, the concept of cointegration is explained. Procedures for testing for cointegration, determining the cointegrating rank and estimating the cointegrated model in the VAR(p) and VARMA(p, q) cases are discussed. The utility of multivariate time series models in the field of economics is discussed and their use is demonstrated by analysing quarterly South African inflation and wage data from April 1996 to December 2008. A review of the literature shows that multivariate time series analysis allows the researcher to: (i) understand phenomena which occur regularly over a period of time; (ii) determine interdependencies between series; (iii) establish causal relationships between series; and (iv) forecast future values of a time series based on current and past values of that variable. The South African wage and inflation data were analysed using SAS version 9.2. Stationary VAR and VARMA models were run. The model with the best fit was the VAR model, as its forecasts were reliable and the small values of the portmanteau statistic indicated a good fit. The VARMA models, by contrast, had large values of the portmanteau statistic as well as unreliable forecasts, and thus were found not to fit the data well. There is therefore good evidence to suggest that wage increases occur independently of inflation, and that while inflation can be predicted from its past values, it is dependent on wages. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2012.
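To make the estimation step concrete, here is a minimal pure-Python sketch (an illustration, not the thesis's SAS analysis) that estimates a bivariate VAR(1), y_t = A y_{t-1} + e_t, by ordinary least squares via the normal equations A = Sxy * Sxx^{-1}:

```python
def fit_var1(series):
    """OLS estimate of A in y_t = A y_{t-1} + e_t for a bivariate series.

    series: list of (y1, y2) pairs (zero-mean, for simplicity).
    Returns the 2x2 coefficient matrix A as nested lists.
    """
    sxx = [[0.0, 0.0], [0.0, 0.0]]   # sum over t of y_{t-1} y_{t-1}'
    sxy = [[0.0, 0.0], [0.0, 0.0]]   # sum over t of y_t     y_{t-1}'
    for prev, curr in zip(series, series[1:]):
        for i in range(2):
            for j in range(2):
                sxx[i][j] += prev[i] * prev[j]
                sxy[i][j] += curr[i] * prev[j]
    # invert the 2x2 matrix Sxx explicitly
    det = sxx[0][0] * sxx[1][1] - sxx[0][1] * sxx[1][0]
    inv = [[sxx[1][1] / det, -sxx[0][1] / det],
           [-sxx[1][0] / det, sxx[0][0] / det]]
    return [[sum(sxy[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

In practice one would also include an intercept, choose the lag order p, and run diagnostic checks such as the portmanteau test on the residuals, as the thesis describes.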

Modeling the factors affecting cereal crop yields in the Amhara National Regional State of Ethiopia.

January 2010
The agriculture sector in the Amhara National Regional State is characterised by cereal crop production, which occupies the largest percentage (84.3%) of the total crop area cultivated in the region. As a result, it is imperative to investigate which factors influence the yields of cereal crops, particularly the five major cereals in the study region, namely barley, maize, sorghum, teff and wheat. Therefore, in this thesis, using data collected by the Central Statistical Agency of Ethiopia, various statistical methods such as multiple regression analysis were applied to investigate the factors which influence the mean yields of the major cereal crops. Moreover, a mixed model analysis was implemented to assess the effects associated with the sampling units (enumeration areas), and a cluster analysis to classify the region into similar groups of zones. The multiple regression results indicate that the mean yields of all the studied cereals are affected by zone, fertilizer type and crop damage effects. In addition, barley is affected by the extension programme; maize by seed type, irrigation and protection against soil erosion; sorghum and teff are additionally affected by crop prevention method, extension programme, protection against soil erosion, and gender of the household head; and wheat by crop prevention methods, extension programme and gender of the household head. The results from the mixed model analysis were entirely different from the regression results due to the observed dependence of the cereal mean yields on the sampling unit. Based on the hierarchical cluster analysis, five groups (clusters) were identified, which seem to be in agreement with the geographical neighbouring positions of the locations and the similarity of the types of crops produced. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2010.
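The hierarchical clustering step can be illustrated with a minimal single-linkage sketch (hypothetical two-dimensional zone profiles purely for illustration; the actual analysis clustered zones on the survey covariates): clusters are merged bottom-up until the desired number of groups remains.

```python
def single_linkage(points, k):
    """Agglomerative clustering: repeatedly merge the two clusters whose
    closest members are nearest (single linkage), until k clusters remain.

    points: list of coordinate tuples; returns a list of k clusters.
    """
    clusters = [[p] for p in points]

    def dist(a, b):
        # single-linkage distance: minimum pairwise Euclidean distance
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters
```

Cutting the merge sequence at five clusters, as in the thesis, amounts to choosing k = 5 here; in practice a dendrogram is inspected to justify the cut.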

Gerber-Shiu analysis in some dependent Sparre Andersen risk models

Woo, Jae-Kyung 03 August 2010
In this thesis, we consider a generalization of the classical Gerber-Shiu function in various risk models. The generalization involves the introduction of two new variables into the original penalty function, which already includes the surplus prior to ruin and the deficit at ruin. These new variables are the minimum surplus level before ruin occurs and the surplus immediately after the second last claim before ruin occurs. Although these quantities cannot be observed until ruin occurs, we can still identify their distributions in advance because they do not functionally depend on the time of ruin, but only on known quantities, including the initial surplus allocated to the business. Therefore, ruin-related quantities obtained by incorporating the four variables into the generalized Gerber-Shiu function can deepen our understanding of the analysis of the random walk and the resulting risk management. In Chapter 2, we demonstrate that the generalized Gerber-Shiu functions satisfy a defective renewal equation expressed in terms of a compound geometric distribution in the ordinary (continuous-time) Sparre Andersen renewal risk models. As a result, forms of the joint and marginal distributions associated with the variables in the generalized penalty function are derived for an arbitrary distribution of the interclaim/interarrival times. Because identification of the compound geometric components is difficult without specific conditions on the interclaim times, in Chapter 3 we consider the special case where the interclaim time distribution belongs to the Coxian class of distributions, as well as the classical compound Poisson models. Note that the analysis of the generalized Gerber-Shiu function involving three variables (the classical two variables and the surplus after the second last claim) is sufficient for the study of the four-variable case. This is shown to remain true in the cases where the interclaim time before the first event is assumed to differ from the subsequent interclaim times (i.e. delayed renewal risk models) in Chapter 4, and where the counting (number of claims) process is defined in discrete time (i.e. discrete renewal risk models) in Chapter 5. In Chapter 6, two-sided bounds for a renewal equation are studied. These results may be applied to the various ruin quantities derived from the generalized Gerber-Shiu function analyzed in the previous chapters. Note that a larger number of iterations in computing the bound produces a result closer to the exact value. However, for the nonexponential bound, the bound involves a convolution with a usually heavy-tailed distribution (e.g. heavy-tailed claims, extreme events), so an alternative method is needed to handle the convolution computation in this case.
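For context, in the classical compound Poisson special case of the Sparre Andersen model with exponential claims of mean mu and premium loading theta, the ruin probability has the closed form psi(u) = (1/(1+theta)) * exp(-theta*u/((1+theta)*mu)). This is a standard textbook result, not the thesis's general analysis, but it makes a convenient sanity check for a crude simulation of the surplus process:

```python
import math
import random

def ruin_probability_mc(u, theta=0.2, mu=1.0, lam=1.0, horizon=100.0,
                        n_paths=4000, seed=1):
    """Monte Carlo estimate of the ruin probability in the classical
    compound Poisson model: claims arrive at Poisson rate lam, claim sizes
    are exponential with mean mu, and the premium rate carries a positive
    loading, c = (1 + theta) * lam * mu.  Ruin can only occur at claim
    instants, so the surplus is checked just after each claim; the finite
    horizon slightly understates the infinite-horizon probability."""
    rng = random.Random(seed)
    c = (1.0 + theta) * lam * mu
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, float(u)
        while t < horizon:
            dt = rng.expovariate(lam)             # time to next claim
            t += dt
            surplus += c * dt - rng.expovariate(1.0 / mu)
            if surplus < 0.0:
                ruined += 1
                break
    return ruined / n_paths

def ruin_probability_exact(u, theta=0.2, mu=1.0):
    """Closed-form infinite-horizon ruin probability for exponential claims."""
    return math.exp(-theta * u / ((1.0 + theta) * mu)) / (1.0 + theta)
```

The generalized Gerber-Shiu function recovers this ruin probability as the special case where the penalty function is identically one.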

Analysis of some risk models involving dependence

Cheung, Eric C.K. January 2010
The seminal paper by Gerber and Shiu (1998) gave a huge boost to the study of risk theory by not only unifying but also generalizing the treatment and the analysis of various risk-related quantities in one single mathematical function - the Gerber-Shiu expected discounted penalty function, or Gerber-Shiu function in short. The Gerber-Shiu function is known to possess many nice properties, at least in the case of the classical compound Poisson risk model. For example, upon the introduction of a dividend barrier strategy, it was shown by Lin et al. (2003) and Gerber et al. (2006) that the Gerber-Shiu function with a barrier can be expressed in terms of the Gerber-Shiu function without a barrier and the expected value of discounted dividend payments. This result is the so-called dividends-penalty identity, and it holds true when the surplus process belongs to a class of Markov processes which are skip-free upwards. However, one stringent assumption of the model considered by the above authors is that all the interclaim times and the claim sizes are independent, which is in general not true in reality. In this thesis, we propose to analyze the Gerber-Shiu functions under various dependent structures. The main focus of the thesis is the risk model where claims follow a Markovian arrival process (MAP) (see, e.g., Latouche and Ramaswami (1999) and Neuts (1979, 1989)) in which the interclaim times and the claim sizes form a chain of dependent variables. The first part of the thesis puts emphasis on certain dividend strategies. In Chapter 2, it is shown that a matrix form of the dividends-penalty identity holds true in a MAP risk model perturbed by diffusion with the use of integro-differential equations and their solutions. Chapter 3 considers the dual MAP risk model which is a reflection of the ordinary MAP model. A threshold dividend strategy is applied to the model and various risk-related quantities are studied. 
Our methodology is based on an existing connection between the MAP risk model and a fluid queue (see, e.g., Asmussen et al. (2002), Badescu et al. (2005), Ramaswami (2006) and references therein). The use of fluid flow techniques to analyze risk processes opens the door for further research as to what types of risk models with dependence structure can be studied via probabilistic arguments. In Chapter 4, we propose to analyze the Gerber-Shiu function and some discounted joint densities in a risk model where each pair of the interclaim time and the resulting claim size is assumed to follow a bivariate phase-type distribution, with the pairs assumed to be independent and identically distributed (i.i.d.). To this end, a novel fluid flow process is constructed to ease the analysis. In the classical Gerber-Shiu function introduced by Gerber and Shiu (1998), the random variables incorporated into the analysis include the time of ruin, the surplus prior to ruin and the deficit at ruin. The latter part of this thesis focuses on generalizing the classical Gerber-Shiu function by incorporating more random variables into the so-called penalty function. These include the surplus level immediately after the second last claim before ruin, the minimum surplus level before ruin and the maximum surplus level before ruin. In Chapter 5, the focus is on the study of the generalized Gerber-Shiu function involving the first two new random variables in the context of a semi-Markovian risk model (see, e.g., Albrecher and Boxma (2005) and Janssen and Reinhard (1985)). It is shown that the generalized Gerber-Shiu function satisfies a matrix defective renewal equation, and some discounted joint densities involving the new variables are derived. Chapter 6 revisits the MAP risk model, in which the generalized Gerber-Shiu function involving the maximum surplus before ruin is examined. In this case, the Gerber-Shiu function no longer satisfies a defective renewal equation.
Instead, the generalized Gerber-Shiu function can be expressed in terms of the classical Gerber-Shiu function and the Laplace transform of a first passage time, both of which are readily obtainable. In a MAP risk model, the interclaim time distribution must be of phase type. This leads us to propose a generalization of the MAP risk model that allows the interclaim time to have an arbitrary distribution, which is the subject matter of Chapter 7. Chapter 8 is concerned with the generalized Sparre Andersen risk model with a surplus-dependent premium rate, and some ordering properties of certain ruin-related quantities are studied. Chapter 9 ends the thesis with some concluding remarks and directions for future research.

Notions of Dependence with Applications in Insurance and Finance

Wei, Wei January 2013
Many insurance and finance activities involve multiple risks. Dependence structures between different risks play an important role in both theoretical models and practical applications, yet stochastic and actuarial models with dependence are very challenging research topics. In most of the literature, only special dependence structures have been considered, although most of these can be integrated into more general contexts. This thesis is motivated by the desire to develop more general dependence structures and to consider their applications. It systematically studies different dependence notions and explores their applications in the fields of insurance and finance, contributing to the current literature in three main respects. First, it introduces some dependence notions to actuarial science and initiates a new approach to studying optimal reinsurance problems. Second, it proposes new notions of dependence and provides a general context for the study of optimal allocation problems in insurance and finance. Third, it builds connections between copulas and the proposed dependence notions, thus enabling the construction of the proposed dependence structures and enhancing their applicability in practice. The results derived in the thesis not only unify and generalize the existing studies of optimization problems in insurance and finance, but also admit promising applications in other fields, such as operations research and risk management.
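As a concrete example of how a copula pins down a dependence structure (a standard illustration; the thesis's constructions are more general), the Clayton copula C(u, v) = (u^(-theta) + v^(-theta) - 1)^(-1/theta), theta > 0, can be sampled by the conditional-inverse method:

```python
import random

def clayton_sample(theta, n, seed=0):
    """Sample n pairs (u, v) from a Clayton copula with parameter theta > 0.

    Uses the conditional-inverse method: draw u and w uniform on (0, 1],
    then invert the conditional distribution C(v | u) at w."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()          # uniform on (0, 1]
        w = 1.0 - rng.random()
        v = (u ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
        out.append((u, v))
    return out
```

The Clayton family exhibits lower-tail dependence, and its Kendall's tau is theta / (theta + 2), so the strength of dependence is controlled by a single parameter while the marginals remain uniform.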

Modelling longitudinal binary disease outcome data including the effect of covariates and extra variability.

Ngcobo, Siyabonga. January 2011
The current work deals with modelling longitudinal or repeated non-Gaussian measurements for a respiratory disease. The analysis of longitudinal non-Gaussian binary disease outcome data can broadly be approached with three different types of model: marginal, random effects and transition models. A marginal model is used if one is interested in estimating population-averaged effects, such as whether a treatment works or not for an average individual. On the other hand, random effects models are important if, apart from measuring population-averaged effects, a researcher is also interested in subject-specific effects; in this case, to obtain marginal effects from the subject-specific model we integrate out the random effects. Transition models are also called conditional models as a general term. Thus all three types of model are important for understanding the effects of covariates, disease progression and the distribution of outcomes in a population. In the current work the three models have been reviewed and fitted to data. The random effects or subject-specific model is further modified to relax the assumption that the random effects must be strictly normal. This leads to the so-called hierarchical generalized linear model (HGLM), based on the h-likelihood formulation suggested by Lee and Nelder (1996). The marginal model was fitted using generalized estimating equations (GEE) via PROC GENMOD in SAS. The random effects model (a generalized linear mixed model) was fitted using PROC GLIMMIX and PROC NLMIXED in SAS; the latter approach was found to be more flexible, except for the need to specify initial parameter values. The transition model was used to capture the dependence between outcomes, in particular the dependence of the current response or outcome on the previous response, and was fitted using PROC GENMOD. The HGLM was fitted using the GENSTAT software.
Longitudinal disease outcome data can provide real and reliable data with which to model disease progression, in the sense that they can be used to estimate important disease parameters such as prevalence, incidence and others, such as the force of infection. Problems associated with longitudinal data include loss of information due to loss to follow-up, such as dropout, and missing data in general. In some cases cross-sectional data can be used to find the required estimates, but longitudinal data are more efficient, though they may require more time, effort and cost to collect. However, the successful estimation of a given parameter or function depends on the availability of the relevant data. It is sometimes impossible to estimate a parameter of interest if the data cannot support its estimation. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2011.
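The marginal versus subject-specific distinction can be illustrated with a small simulation (hypothetical parameters, not the thesis's respiratory data): in a random-intercept logistic model, the population-averaged effect of a covariate is attenuated relative to the subject-specific effect, which is why integrating out the random effects matters when comparing the two model types.

```python
import math
import random

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def marginal_log_odds_ratio(beta0, beta1, sigma, n=50000, seed=0):
    """Monte Carlo marginal (population-averaged) log odds ratio for x = 1
    versus x = 0 in a random-intercept logistic model:

        P(y = 1 | x, b) = logistic(beta0 + beta1 * x + b),  b ~ N(0, sigma^2).

    The subject-specific probabilities are averaged over b before the odds
    ratio is formed, i.e. the random effect is integrated out."""
    rng = random.Random(seed)
    p0 = p1 = 0.0
    for _ in range(n):
        b = rng.gauss(0.0, sigma)
        p0 += logistic(beta0 + b)
        p1 += logistic(beta0 + beta1 + b)
    p0, p1 = p0 / n, p1 / n
    return math.log(p1 / (1.0 - p1)) - math.log(p0 / (1.0 - p0))
```

With a subject-specific slope beta1 = 1 and sigma = 2, the marginal log odds ratio comes out well below 1 (a well-known approximation puts it near beta1 / sqrt(1 + 0.346 * sigma^2)), matching the attenuation a GEE fit would report relative to a GLIMMIX/NLMIXED fit of the same data.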
