81

Robust principal component analysis biplots

Wedlake, Ryan Stuart 03 1900 (has links)
Thesis (MSc (Mathematical Statistics))--University of Stellenbosch, 2008. / In this study several procedures for finding robust principal components (RPCs) for low and high dimensional data sets are investigated in parallel with robust principal component analysis (RPCA) biplots. These RPCA biplots will be used for the simultaneous visualisation of the observations and variables in the subspace spanned by the RPCs. Chapter 1 contains: a brief overview of the difficulties that are encountered when graphically investigating patterns and relationships in multidimensional data and why PCA can be used to circumvent these difficulties; the objectives of this study; a summary of the work done in order to meet these objectives; certain results in matrix algebra that are needed throughout this study. In Chapter 2 the derivation of the classic sample principal components (SPCs) is first discussed in detail since they are the 'building blocks' of classic principal component analysis (CPCA) biplots. Secondly, the traditional CPCA biplot of Gabriel (1971) is reviewed. Thirdly, modifications to this biplot using the new philosophy of Gower & Hand (1996) are given attention. Reasons why this modified biplot has several advantages over the traditional biplot (some of which are aesthetic in nature) are given. Lastly, changes that can be made to the Gower & Hand (1996) PCA biplot to optimally visualise the correlations between the variables are discussed. Because the SPCs determine the position of the observations as well as the orientation of the arrows (traditional biplot) or axes (Gower & Hand biplot) in the PCA biplot subspace, it is useful to give estimates of the standard errors of the SPCs together with the biplot display as an indication of the stability of the biplot. Firstly, a computer-intensive statistical technique called the Bootstrap, used to calculate the standard errors of the SPCs without making underlying distributional assumptions, is discussed. Secondly, the influence of outliers on Bootstrap results is investigated. Lastly, a robust form of the Bootstrap for calculating standard error estimates that remain stable with or without the presence of outliers in the sample is briefly discussed. All the preceding topics are the subject matter of Chapter 3. In Chapter 4, reasons why a PC analysis should be made robust in the presence of outliers are firstly discussed. Secondly, different types of outliers are discussed. Thirdly, a method for identifying influential observations and a method for identifying outlying observations are investigated. Lastly, different methods for constructing robust estimates of location and dispersion for the observations receive attention. These robust estimates are used in numerical procedures that calculate RPCs. In Chapter 5, an overview of some of the procedures that are used to calculate RPCs for lower and higher dimensional data sets is firstly given. Secondly, two numerical procedures that can be used to calculate RPCs for lower dimensional data sets are discussed and compared in detail. Details and examples of robust versions of the Gower & Hand (1996) PCA biplot that can be constructed using these RPCs are also provided. In Chapter 6, five numerical procedures for calculating RPCs for higher dimensional data sets are discussed in detail. Once RPCs have been obtained using these methods, they are used to construct robust versions of the PCA biplot of Gower & Hand (1996). Details and examples of these robust PCA biplots are also provided.
An extensive software library has been developed so that the biplot methodology discussed in this study can be used in practice. The functions in this library are given in an appendix at the end of this study. This software library is used on data sets from various fields so that the merit of the theory developed in this study can be visually appraised.
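The abstract's building blocks are the classic sample principal components and the Bootstrap standard errors that indicate biplot stability. The hedged sketch below (not the thesis' own software library, which is listed in its appendix) illustrates both ideas on invented data: PCA via the singular value decomposition, Gabriel-style biplot coordinates, and bootstrap standard errors for the first-PC loadings.

```python
# A minimal sketch, assuming synthetic data: classic PCA via the SVD,
# Gabriel-style biplot coordinates, and bootstrap standard errors of the
# first-PC loadings. All names and parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 4))   # toy 50 x 4 data matrix
Xc = X - X.mean(axis=0)                                   # centre the columns

# Classic sample principal components from the SVD of the centred data.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]          # observation coordinates in the 2-D subspace
loadings = Vt[:2].T                # variable directions (biplot axes/arrows)

# Non-parametric bootstrap standard errors of the first-PC loadings.
B, boot = 500, []
for _ in range(B):
    idx = rng.integers(0, len(X), len(X))
    Xb = X[idx] - X[idx].mean(axis=0)
    Vb = np.linalg.svd(Xb, full_matrices=False)[2]
    v = Vb[0] * np.sign(Vb[0] @ loadings[:, 0])           # fix the sign ambiguity
    boot.append(v)
se = np.std(boot, axis=0, ddof=1)
print("PC1 loadings:", np.round(loadings[:, 0], 3), "bootstrap SE:", np.round(se, 3))
```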
82

The implementation of noise addition partial least squares

Moller, Jurgen Johann 03 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2009. / When determining the chemical composition of a specimen, traditional laboratory techniques are often both expensive and time-consuming. It is therefore preferable to employ more cost-effective spectroscopic techniques such as near infrared (NIR). Traditionally, the calibration problem has been solved by means of multiple linear regression to specify the model between the spectral measurements (X) and the chemical reference values (Y). Traditional regression techniques, however, quickly fail when using spectroscopic data, as the number of wavelengths can easily be several hundred, often exceeding the number of chemical samples. This scenario, together with the high level of collinearity between wavelengths, will necessarily lead to singularity problems when calculating the regression coefficients. Ways of dealing with the collinearity problem include principal component regression (PCR), ridge regression (RR) and PLS regression. Both PCR and RR require a significant amount of computation when the number of variables is large. PLS overcomes the collinearity problem in a similar way to PCR, by modelling both the chemical and spectral data as functions of common latent variables. The quality of the employed reference method greatly impacts the coefficients of the regression model and therefore the quality of its predictions. With both X and Y subject to random error, the quality of the predictions of Y will be reduced with an increase in the level of noise. Previously conducted research focussed mainly on the effects of noise in X. This paper focuses on a method proposed by Dardenne and Fernández Pierna, called Noise Addition Partial Least Squares (NAPLS), that attempts to deal with the problem of poor reference values. Some aspects of the theory behind PCR, PLS and model selection are discussed. This is then followed by a discussion of the NAPLS algorithm. Both PLS and NAPLS are implemented on various datasets that arise in practice, in order to determine cases where NAPLS will be beneficial over conventional PLS. For each dataset, specific attention is given to the analysis of outliers, influential values and the linearity between X and Y, using graphical techniques. Lastly, the performance of the NAPLS algorithm is evaluated for various
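As a hedged illustration of the core problem (noise in the reference values Y degrading PLS predictions), the sketch below fits ordinary PLS on synthetic "spectra" at several invented noise levels; it is not the NAPLS algorithm of Dardenne and Fernández Pierna, whose noise-addition step differs.

```python
# A minimal sketch (not the NAPLS algorithm itself): fit ordinary PLS on
# synthetic spectra and see how added noise in the reference values Y degrades
# prediction quality. Data, noise levels and component count are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, p = 120, 300                                  # more wavelengths than samples
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -0.5, 0.3])          # "true" chemical reference values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for noise_sd in (0.0, 0.1, 0.5):                 # increasing reference-value noise
    y_noisy = y_tr + noise_sd * rng.normal(size=y_tr.shape)
    pls = PLSRegression(n_components=3).fit(X_tr, y_noisy)
    rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
    print(f"noise sd {noise_sd:.1f}: RMSEP {rmsep:.3f}")
```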
83

Modelling market risk with SAS Risk Dimensions: a step-by-step implementation

Du Toit, Carl 03 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2005. / Financial institutions invest in financial securities like equities, options and government bonds. Two measures, namely return and risk, are associated with each investment position. Return is a measure of the profit or loss of the investment, whilst risk is defined as the uncertainty about return. A financial institution that holds a portfolio of securities is exposed to different types of risk. The most well-known types are market, credit, liquidity, operational and legal risk. An institution needs to quantify, for each type of risk, the extent of its exposure. Currently, standard risk measures that aim to quantify risk only exist for market and credit risk. Extensive calculations are usually required to obtain values for these risk measures. The investment positions that form the portfolio, as well as the market information used in the risk measure calculations, change during each trading day. Hence, the financial institution needs a business tool that can calculate various standard risk measures for dynamic market and position data at the end of each trading day. SAS Risk Dimensions is a software package that provides a solution to this calculation problem. A risk management system is created with this package and is used to calculate all the relevant risk measures on a daily basis. The purpose of this document is to explain and illustrate all the steps that should be followed to create a suitable risk management system with SAS Risk Dimensions.
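The thesis itself is a step-by-step SAS Risk Dimensions implementation, so no SAS code is reproduced here. As a hedged, package-independent illustration of the kind of standard market-risk measure it computes daily, the sketch below estimates a one-day historical-simulation Value-at-Risk for an invented two-asset portfolio.

```python
# A minimal sketch of one standard market-risk measure (historical-simulation
# Value-at-Risk) for a toy two-asset portfolio. This is illustrative only and
# has no relation to the SAS Risk Dimensions API; all figures are assumptions.
import numpy as np

rng = np.random.default_rng(2)
returns = rng.multivariate_normal(mean=[0.0003, 0.0001],
                                  cov=[[0.00010, 0.00004],
                                       [0.00004, 0.00025]],
                                  size=250)              # 250 simulated daily return pairs
positions = np.array([1_000_000.0, 500_000.0])           # market value per asset

pnl = returns @ positions                                 # daily portfolio P&L
var_99 = -np.percentile(pnl, 1)                           # 99% one-day VaR
print(f"99% one-day historical VaR: {var_99:,.0f}")
```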
84

The threshold proportional reinsurance and its influence on the probability and the time of ruin in a non-life insurance portfolio

Castañer Garriga, Anna 16 July 2009 (has links)
In this thesis a new reinsurance strategy, the threshold proportional reinsurance strategy, is proposed and analysed: different levels of retention are applied depending on the level of the reserves. This new reinsurance strategy improves the solvency of a non-life insurance portfolio when compared with a model without reinsurance and a model with proportional reinsurance. The probability of ruin and the time of ruin are presented as measures of solvency, and the optimal combination of retention percentages and threshold level that minimises the probability of ruin is obtained.
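A hedged sketch of the central idea (the retention level switching at a reserve threshold) follows: a Monte Carlo estimate of the finite-time ruin probability for a compound-Poisson surplus process. The net premium rate is approximated as proportional to the risk retained at the start of each inter-claim interval, and all parameter values are assumptions; this is not the thesis' own calculation method.

```python
# A minimal sketch (assumptions throughout): Monte Carlo estimate of the
# finite-time ruin probability for a compound-Poisson surplus process under a
# threshold proportional reinsurance strategy, i.e. retention k1 while reserves
# are below the threshold b and retention k2 above it.
import numpy as np

rng = np.random.default_rng(3)

def ruin_probability(u=10.0, b=20.0, k1=0.7, k2=0.9, lam=1.0, mu=1.0,
                     loading=0.2, horizon=100.0, n_paths=5000):
    """Estimate P(ruin before `horizon`) starting from initial reserves u."""
    ruined = 0
    for _ in range(n_paths):
        reserves, t = u, 0.0
        while True:
            dt = rng.exponential(1.0 / lam)               # time to next claim
            if t + dt > horizon:
                break
            t += dt
            k = k1 if reserves < b else k2                # threshold retention rule
            premium = (1.0 + loading) * lam * mu * k      # net premium rate (simplified)
            reserves += premium * dt                      # premiums earned between claims
            reserves -= k * rng.exponential(mu)           # retained share of the claim
            if reserves < 0:
                ruined += 1
                break
    return ruined / n_paths

print("estimated ruin probability:", ruin_probability())
```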
85

Optimal Interest Rate for a Borrower with Estimated Default and Prepayment Risk

Howard, Scott T. 27 May 2008 (has links)
Today's mortgage industry is constantly changing, with adjustable rate mortgages (ARMs), loans originated to the so-called "subprime" market, and volatile interest rates. Amid the changes and controversy, lenders continue to originate loans because the interest paid over the loan lifetime is profitable. The profitability of those loans, along with the return on investment to the lender, is assessed using the Actuarial Present Value (APV), which incorporates the uncertainty that exists in the mortgage industry today, with many loans defaulting and prepaying. The hazard function, or instantaneous failure rate, is used as a measure of the probability of failure to make a payment. Using a logit model, the default and prepayment risks are estimated as functions of the interest rate. The "optimal" interest rate is then the rate at which profitability to the lender is maximized.
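As a hedged sketch of this idea, the code below uses invented logit coefficients (not the thesis' fitted estimates) for monthly default and prepayment hazards, computes an actuarial present value of interest-only cash flows, and searches a rate grid for the APV-maximising rate.

```python
# A minimal sketch (all coefficients, balances and the interest-only cash-flow
# simplification are assumptions, not the thesis' fitted model): logit hazards
# in the note rate, an actuarial present value of interest income, and a grid
# search for the rate that maximises APV to the lender.
import numpy as np

def logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def apv(rate, balance=200_000.0, months=360, discount=0.04 / 12):
    """Expected discounted interest income for a given annual note rate."""
    p_default = logit(-8.0 + 40.0 * rate)      # monthly default hazard rises with rate
    p_prepay  = logit(-5.0 - 10.0 * rate)      # monthly prepayment hazard falls with rate
    survival, value = 1.0, 0.0
    for m in range(1, months + 1):
        interest = balance * rate / 12.0       # interest-only cash flow (simplification)
        value += survival * interest / (1.0 + discount) ** m
        survival *= (1.0 - p_default) * (1.0 - p_prepay)
    return value

rates = np.arange(0.03, 0.15, 0.0025)
best = max(rates, key=apv)
print(f"APV-maximising rate (sketch): {best:.2%}")
```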
86

Pricing and Hedging the Guaranteed Minimum Withdrawal Benefits in Variable Annuities

Liu, Yan January 2010 (has links)
The Guaranteed Minimum Withdrawal Benefits (GMWBs) are optional riders provided by insurance companies in variable annuities. They guarantee the policyholders' ability to get the initial investment back by making periodic withdrawals regardless of the impact of poor market performance. With GMWBs attached, variable annuities become more attractive. This type of guarantee can be challenging to price and hedge. We employ two approaches to price GMWBs. Under the constant static withdrawal assumption, the first approach is to decompose the GMWB and the variable annuity into an arithmetic average strike Asian call option and an annuity certain. The second approach is to treat the GMWB alone as a put option whose maturity and payoff are random. Hedging helps insurers specify and manage the risks of writing GMWBs, as well as find their fair prices. We propose semi-static hedging strategies that offer several advantages over dynamic hedging. The idea is to construct a portfolio of European options that replicate the conditional expected GMWB liability in a short time period, and update the portfolio after the options expire. This strategy requires fewer portfolio adjustments, and outperforms the dynamic strategy when there are random jumps in the underlying price. We also extend the semi-static hedging strategies to the Heston stochastic volatility model.
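A hedged illustration of the rider being priced: the Monte Carlo sketch below values a GMWB under the constant static withdrawal assumption as the present value of guaranteed withdrawals paid after the account value is exhausted, using geometric Brownian motion with invented parameters. It is not the thesis' Asian-option decomposition or its random-maturity put representation.

```python
# A minimal sketch (GBM dynamics, annual steps and all parameters are
# assumptions): risk-neutral Monte Carlo value of a GMWB under a constant
# static withdrawal, measured as the present value of guaranteed withdrawals
# the insurer must fund once the account is exhausted.
import numpy as np

rng = np.random.default_rng(4)

def gmwb_value(premium=100.0, years=10, r=0.03, sigma=0.25, fee=0.01, n_paths=100_000):
    w = premium / years                         # guaranteed annual withdrawal
    account = np.full(n_paths, premium)
    liability = np.zeros(n_paths)
    for t in range(1, years + 1):
        z = rng.standard_normal(n_paths)
        growth = np.exp((r - fee - 0.5 * sigma**2) + sigma * z)   # one-year GBM step
        account = account * growth - w          # withdraw after the market move
        shortfall = np.maximum(-account, 0.0)   # part of w the account cannot fund
        liability += np.exp(-r * t) * shortfall
        account = np.maximum(account, 0.0)
    return liability.mean()

print(f"GMWB rider value per 100 premium (sketch): {gmwb_value():.3f}")
```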
88

Analysis of Financial Data using a Difference-Poisson Autoregressive Model

Baroud, Hiba January 2011 (has links)
Box and Jenkins methodologies have contributed substantially to the analysis of time series data. However, the assumptions used in these methods impose constraints on the type of the data. As a result, difficulties arise when we apply those tools to a more generalized type of data (e.g. count, categorical or integer-valued data) rather than the classical continuous or, more specifically, Gaussian type. Papers in the literature have proposed alternative methods to model discrete-valued time series data; among these methods is Pegram's operator (1980). We use this operator to build an AR(p) model for integer-valued time series (including both positive and negative integers). The innovations follow the differenced Poisson distribution, or Skellam distribution. While the model includes the usual AR(p) correlation structure, it can be made more general. In fact, the operator can be extended in a way where it is possible to have components which contribute to positive correlation, while at the same time having components which contribute to negative correlation. As an illustration, the process is used to model the change in a stock's price, where three variations are presented: Variation I, Variation II and Variation III. The first model disregards outliers; however, the second and third include large price changes associated with the effect of large-volume trades and market openings. Parameters of the model are estimated using maximum likelihood methods. We use several model selection criteria to select the best order for each variation of the model as well as to determine which is the best variation of the model. The most adequate order for all the variations of the model is AR(3). While the best fit for the data is Variation II, residuals' diagnostic plots suggest that Variation III represents a better correlation structure for the model.
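A hedged sketch of the building blocks named in the abstract: the code simulates a Pegram-type AR(1) (a simplification of the AR(p) model described) with a Skellam marginal obtained as the difference of two Poisson variates, and checks that the lag-1 sample autocorrelation is close to the mixing weight. Parameter values and the series length are assumptions, and no estimation is attempted.

```python
# A minimal sketch (not the estimation code used in the thesis): simulate a
# Pegram-type AR(1) with a Skellam (difference-of-Poissons) marginal, then
# verify that the lag-1 sample autocorrelation is near the mixing weight phi.
import numpy as np

rng = np.random.default_rng(5)

def skellam(mu1, mu2, size):
    """Difference of two independent Poisson variates: integer-valued, can be negative."""
    return rng.poisson(mu1, size) - rng.poisson(mu2, size)

def pegram_ar1(phi=0.6, mu1=2.0, mu2=2.0, n=5000):
    innovations = skellam(mu1, mu2, n)
    x = np.empty(n, dtype=int)
    x[0] = innovations[0]
    keep = rng.random(n) < phi                  # with prob phi keep the previous value
    for t in range(1, n):
        x[t] = x[t - 1] if keep[t] else innovations[t]
    return x

x = pegram_ar1()
xc = x - x.mean()
acf1 = (xc[1:] @ xc[:-1]) / (xc @ xc)
print(f"lag-1 sample autocorrelation (should be near 0.6): {acf1:.3f}")
```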
89

Markovian Approaches to Joint-life Mortality with Applications in Risk Management

Ji, Min 28 July 2011 (has links)
The combined survival status of the insured lives is a critical problem when pricing and reserving insurance products with more than one life. Our preliminary experience examination of bivariate annuity data from a large Canadian insurance company shows that the relative risk of mortality for an individual increases after the loss of his/her spouse, and that the increase is especially dramatic shortly after bereavement. This preliminary result is supported by empirical studies over the past 50 years, which suggest dependence between a husband and wife. The dependence between a married couple may be significant in risk management of joint-life policies. This dissertation progressively explores Markovian models in pricing and risk management of joint-life policies, illuminating their advantages in dependent modeling of joint time-until-death (or other exit time) random variables. This dissertation argues that in the modeling of joint-life dependence, Markovian models are flexible, transparent, and easily extended. Multiple state models have been widely used in historic data analysis, particularly in the modeling of failures that have event-related dependence. This dissertation introduces a "common shock" factor into a standard Markov joint-life mortality model, and then extends it to a semi-Markov model to capture the decaying effect of the "broken heart" factor. The proposed models transparently and intuitively measure the extent of three types of dependence: the instantaneous dependence, the short-term impact of bereavement, and the long-term association between lifetimes. Some copula-based dependence measures, such as upper tail dependence, can also be derived from Markovian approaches. Very often, death is not the only mode of decrement. Entry into long-term care and voluntary prepayment, for instance, can affect reverse mortgage terminations. The semi-Markov joint-life model is extended to incorporate more exit modes, to model joint-life reverse mortgage termination speed. The event-triggered dependence between a husband and wife is modeled; for example, one spouse's death increases the survivor's inclination to move close to kin. We apply the proposed model specifically to develop the valuation formulas for roll-up mortgages in the UK and Home Equity Conversion Mortgages in the US. We test the significance of each termination mode and then use the model to investigate the mortgage insurance premiums levied on Home Equity Conversion Mortgage borrowers. Finally, this thesis extends the semi-Markov joint-life mortality model to allow stochastic transition intensities, for modeling joint-life longevity risk in last-survivor annuities. We propose a natural extension of Gompertz' law with correlated stochastic dynamics for its two parameters, and incorporate it into the semi-Markov joint-life mortality model. Based on this preliminary joint-life longevity model, we examine the impact of mortality improvement on the cost of a last-survivor annuity, and investigate the market prices of longevity risk in last-survivor annuities using risk-neutral pricing theory.
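As a hedged illustration of the common-shock and "broken heart" ideas (constant baseline hazards and invented parameters, not the thesis' semi-Markov model fitted to annuity data), the sketch below simulates two lives whose mortality is multiplied by a factor that decays after bereavement, and estimates the mean time to the last death.

```python
# A minimal sketch with constant baseline hazards, a discrete time step and
# invented parameters (not the thesis' fitted model): two lives share a
# common-shock hazard, and after one spouse dies the survivor's mortality is
# multiplied by a "broken heart" factor that decays with time since bereavement.
import numpy as np

rng = np.random.default_rng(6)

def mean_time_to_last_death(mu_x=0.02, mu_y=0.025, mu_shock=0.002,
                            boost=3.0, decay=0.5, n_paths=5000, dt=0.25):
    times = np.empty(n_paths)
    for i in range(n_paths):
        t, alive_x, alive_y, widowed_at = 0.0, True, True, None
        while alive_x or alive_y:
            t += dt
            # bereavement multiplier, decaying back towards 1 after the loss
            mult = (1.0 + (boost - 1.0) * np.exp(-decay * (t - widowed_at))
                    if widowed_at is not None else 1.0)
            if alive_x and alive_y and rng.random() < mu_shock * dt:
                alive_x = alive_y = False          # common shock removes both lives
            if alive_x and rng.random() < mu_x * mult * dt:
                alive_x = False
                widowed_at = t if alive_y and widowed_at is None else widowed_at
            if alive_y and rng.random() < mu_y * mult * dt:
                alive_y = False
                widowed_at = t if alive_x and widowed_at is None else widowed_at
        times[i] = t
    return times.mean()

print("mean time to last death (years, sketch):", round(mean_time_to_last_death(), 1))
```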
90

Lognormal Mixture Model for Option Pricing with Applications to Exotic Options

Fang, Mingyu January 2012 (has links)
The Black-Scholes option pricing model has several well recognized deficiencies, one of which is its assumption of a constant and time-homogeneous stock return volatility term. The implied volatility smile has been studied by subsequent researchers and various models have been developed in an attempt to reproduce this phenomenon from within the models. However, few of these models yield closed-form pricing formulas that are easy to implement in practice. In this thesis, we study a Mixture Lognormal model (MLN) for European option pricing, which assumes that future stock prices are conditionally described by a mixture of lognormal distributions. The ability of mixture models to generate volatility smiles, as well as to deliver pricing improvements over the traditional Black-Scholes framework, has been much researched under multi-component mixtures for many derivatives and high-volatility individual stock options. In this thesis, we investigate the performance of the model under the simplest two-component mixture in a market characterized by relative tranquillity and over a relatively stable period for broad-based index options. A careful interpretation is given to the model and the results obtained in the thesis. This differentiates our study from many previous studies on this subject. Throughout the thesis, we establish the unique advantage of the MLN model, which is having closed-form option pricing formulas equal to the weighted mixture of Black-Scholes option prices. We also propose a robust calibration methodology to fit the model to market data. Extreme market states, in particular the so-called crash-o-phobia effect, are shown to be well captured by the calibrated model, although only small pricing improvements are made over a relatively stable period of the index option market. As a major contribution of this thesis, we extend the MLN model to price exotic options including binary, Asian, and barrier options. Closed-form formulas are derived for binary and continuously monitored barrier options, and simulation-based pricing techniques are proposed for Asian and discretely monitored barrier options. Lastly, comparative results are analysed for various strike-maturity combinations, which provides insights into the formulation of hedging and risk management strategies.
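A hedged sketch of the pricing formula highlighted in the abstract: under a two-component mixture-lognormal model, a European call price is a weighted mixture of Black-Scholes prices, shown here in the simplest form where every component shares the risk-free drift (the thesis' calibration may impose further constraints on the component means). All market inputs, weights and volatilities below are invented.

```python
# A minimal sketch of the weighted-mixture pricing formula: a two-component
# mixture-lognormal (MLN) European call price as a weighted sum of
# Black-Scholes prices, one per component volatility. All inputs are assumptions.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def mln_call(S, K, T, r, weights, sigmas):
    """Mixture of Black-Scholes call prices, one component per volatility."""
    return sum(w * bs_call(S, K, T, r, s) for w, s in zip(weights, sigmas))

price = mln_call(S=100.0, K=105.0, T=0.5, r=0.02,
                 weights=(0.7, 0.3), sigmas=(0.15, 0.35))
print(f"MLN call price (sketch): {price:.4f}")
```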
