191

The identification and application of common principal components

Pepler, Pieter Theo, December 2014 (has links)
Thesis (PhD)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: When estimating the covariance matrices of two or more populations, the covariance matrices are often assumed to be either equal or completely unrelated. The common principal components (CPC) model provides an alternative which is situated between these two extreme assumptions: the assumption is made that the population covariance matrices share the same set of eigenvectors, but have different sets of eigenvalues. An important question in the application of the CPC model is to determine whether it is appropriate for the data under consideration. Flury (1988) proposed two methods, based on likelihood estimation, to address this question. However, the assumption of multivariate normality is untenable for many real data sets, making the application of these parametric methods questionable. A number of non-parametric methods, based on bootstrap replications of eigenvectors, are proposed to select an appropriate common eigenvector model for two population covariance matrices. Using simulation experiments, it is shown that the proposed selection methods outperform the existing parametric selection methods. If appropriate, the CPC model can provide covariance matrix estimators that are less biased than when assuming equality of the covariance matrices, and of which the elements have smaller standard errors than the elements of the ordinary unbiased covariance matrix estimators. A regularised covariance matrix estimator under the CPC model is proposed, and Monte Carlo simulation results show that it provides more accurate estimates of the population covariance matrices than the competing covariance matrix estimators. Covariance matrix estimation forms an integral part of many multivariate statistical methods. Applications of the CPC model in discriminant analysis, biplots and regression analysis are investigated. It is shown that, in cases where the CPC model is appropriate, CPC discriminant analysis provides significantly smaller misclassification error rates than both ordinary quadratic discriminant analysis and linear discriminant analysis. A framework for the comparison of different types of biplots for data with distinct groups is developed, and CPC biplots constructed from common eigenvectors are compared to other types of principal component biplots using this framework. A subset of data from the Vermont Oxford Network (VON), of infants admitted to participating neonatal intensive care units in South Africa and Namibia during 2009, is analysed using the CPC model. It is shown that the proposed non-parametric methodology offers an improvement over the known parametric methods in the analysis of this data set, which originated from a non-normally distributed multivariate population. CPC regression is compared to principal component regression and partial least squares regression in the fitting of models to predict neonatal mortality and length of stay for infants in the VON data set. The fitted regression models, using readily available day-of-admission data, can be used by medical staff and hospital administrators to counsel parents and improve the allocation of medical care resources. Predicted values from these models can also be used in benchmarking exercises to assess the performance of neonatal intensive care units in the Southern African context, as part of larger quality improvement programmes.
/ AFRIKAANSE OPSOMMING: When the covariance matrices of two or more populations are estimated, it is often assumed that the covariance matrices are either equal or completely unrelated. The common principal components (CPC) model provides an alternative situated between these two extreme assumptions: the assumption is made that the population covariance matrices share the same set of eigenvectors, but have different sets of eigenvalues. An important question in the application of the CPC model is to determine whether it is appropriate for the data under consideration. Flury (1988) proposed two methods, based on likelihood estimation, to address this question. The assumption of multivariate normality is, however, invalid for many real data sets, which calls the application of these methods into question. A number of non-parametric methods, based on bootstrap replications of eigenvectors, are proposed to select an appropriate common eigenvector model for two population covariance matrices. Using simulation experiments, it is shown that the proposed selection methods outperform the existing parametric selection methods. Where appropriate, the CPC model can provide covariance matrix estimators that are less biased than when equality of the covariance matrices is assumed, and of which the elements have smaller standard errors than the elements of the ordinary unbiased covariance matrix estimators. A regularised covariance matrix estimator under the CPC model is proposed, and Monte Carlo simulation results show that it provides more accurate estimates of the population covariance matrices than other competing covariance matrix estimators. Covariance matrix estimation forms an integral part of many multivariate statistical methods. Applications of the CPC model in discriminant analysis, biplots and regression analysis are investigated. It is shown that, in cases where the CPC model is appropriate, CPC discriminant analysis yields significantly smaller misclassification error rates than both ordinary quadratic discriminant analysis and linear discriminant analysis. A framework for the comparison of different types of biplots for data with distinct groups is developed, and is used to compare CPC biplots constructed from common eigenvectors with other types of principal component biplots. A subset of data from the Vermont Oxford Network (VON), of infants admitted to participating neonatal intensive care units in South Africa and Namibia during 2009, is analysed using the CPC model. It is shown that the proposed non-parametric methodology offers an improvement over the known parametric methods in the analysis of this data set, which originates from a non-normally distributed multivariate population. CPC regression is compared with principal component regression and partial least squares regression in fitting models to predict neonatal mortality and length of stay for infants in the VON data set. The fitted regression models, which use readily available day-of-admission data, can be used by medical staff and hospital administrators to counsel parents and to improve the allocation of medical care resources. Predicted values from these models can also be used in benchmarking exercises to assess the performance of neonatal intensive care units in the Southern African context, as part of larger quality improvement programmes.
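The CPC model described in this abstract posits that each population covariance matrix can be written as Sigma_i = B Lambda_i B', with a shared orthogonal eigenvector matrix B and group-specific diagonal eigenvalue matrices Lambda_i. The following is a minimal illustrative sketch of that structure, not the FG algorithm of Flury (1988) or the bootstrap selection methods proposed in the thesis; the dimensions, eigenvalues and sample sizes are invented, and the pooled-covariance eigendecomposition is only a crude stand-in for a proper common-eigenvector estimator.

```python
# A minimal sketch of the common principal components (CPC) structure:
# two populations share eigenvectors B but have different eigenvalues.
import numpy as np

rng = np.random.default_rng(0)

# Construct a shared orthogonal eigenvector matrix B (3 x 3).
B, _ = np.linalg.qr(rng.normal(size=(3, 3)))

# Group-specific eigenvalues -> population covariances B diag(lam) B'.
lam1, lam2 = np.array([6.0, 2.0, 0.5]), np.array([3.0, 1.0, 0.2])
sigma1 = B @ np.diag(lam1) @ B.T
sigma2 = B @ np.diag(lam2) @ B.T

# Sample from each population and compute sample covariance matrices.
x1 = rng.multivariate_normal(np.zeros(3), sigma1, size=500)
x2 = rng.multivariate_normal(np.zeros(3), sigma2, size=500)
s1, s2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)

# Crude common-eigenvector estimate: eigenvectors of the averaged covariance.
# (Flury's maximum likelihood FG algorithm would refine this.)
_, b_hat = np.linalg.eigh((s1 + s2) / 2)

# Under the CPC assumption, b_hat should approximately diagonalise both
# sample covariances; the diagonals estimate the group-specific eigenvalues.
print(np.round(b_hat.T @ s1 @ b_hat, 2))
print(np.round(b_hat.T @ s2 @ b_hat, 2))
```

If the CPC assumption holds, the off-diagonal entries of both printed matrices should be close to zero while the diagonals differ between the two groups.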
192

Redovisningskonsekvenser vid förändringen av pensionsredovisningen / Accounting consequences of the change in pension accounting

Björk, Magnus, Harrå, Stefan January 2013 (has links)
Abstract Authors: Stefan Harrå and Magnus Björk Advisor: Markku Penttinen Title: Accounting consequences of the change in pension accounting Background to the problem: When the revised IAS 19 comes into force on January 1, 2013, two of the three accounting principles for defined benefit pension plans disappear, including the corridor method. The corridor method has made it possible for companies to defer their actuarial gains and losses. Now that the corridor approach is abolished, the unrecognized actuarial gains and losses must immediately be charged against equity, which involves very large amounts for some companies. The amounts have grown so large chiefly because of the discount rate. The discount rate is a controversial parameter, and there is disagreement on how it should be set. Purpose: The purpose of this thesis is to examine the accounting implications this will have for companies that applied the corridor method, and whether some parameters in the actuarial assumptions are more important than others. Methodology: The thesis is mainly based on qualitative research, through qualitative interviews with a small sample of companies affected by this change. Quantitative elements add depth through an examination of the annual reports, discount rates and deferred pension liabilities of the various companies. The approach is exploratory, as it is a qualitative study and there was little knowledge of the subject before the work on it started. Therefore, a study of the literature, regulations and previous research preceded the empirical study. This made it possible to gain a broader understanding of the subject and to shape relevant and essential interview questions. Conclusions: The conclusion shows that the largest accounting consequences for the companies in the study in connection with the change are that the unrecognized actuarial gains and losses will now be charged against equity, and that the expected return on plan assets is based on the discount rate. The study also shows that the discount rate is considered the most important parameter that the companies look at in the actuarial assumptions. The conclusion also provides a shared view of the true and fair picture of the companies after the revised IAS 19. Suggestions for further research: After 2013, study how the actual outcome of this rule change compared with the expected outcome. Examine the problem of determining the discount rate: how will the IASB view it if more and more companies begin to deviate from the standard? Keywords: "IAS 19", "IAS 19 revised", "corridor method", "pension accounting", "pension liabilities", "defined contribution plans", "actuarial assumptions", "actuarial gains and losses" and "discount rate".
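As a rough numerical illustration of the mechanism discussed above, with all figures invented and the corridor rule simplified to its usual textbook form: under the corridor method, only the portion of cumulative unrecognized actuarial gains and losses exceeding 10% of the greater of the defined benefit obligation and plan assets is amortised over the remaining service period, whereas the revised IAS 19 recognizes the full unrecognized amount against equity at once.

```python
# Illustrative sketch (invented figures) of why abolishing the corridor method
# can hit equity: the corridor approach amortises only the excess of cumulative
# unrecognized actuarial losses over 10% of max(DBO, plan assets), while the
# revised IAS 19 charges the full amount against equity (OCI) immediately.
defined_benefit_obligation = 1_000.0   # assumed DBO at the balance sheet date
plan_assets = 800.0                    # assumed fair value of plan assets
unrecognized_losses = 250.0            # assumed cumulative unrecognized losses
remaining_service_years = 10           # assumed average remaining service period

corridor = 0.10 * max(defined_benefit_obligation, plan_assets)
excess = max(unrecognized_losses - corridor, 0.0)

# Corridor method: only the excess is amortised to profit or loss over time.
annual_amortisation = excess / remaining_service_years

# Revised IAS 19: the whole unrecognized amount is charged to equity at once.
one_time_equity_charge = unrecognized_losses

print(f"corridor (10% threshold): {corridor:.0f}")
print(f"annual amortisation under the corridor method: {annual_amortisation:.0f}")
print(f"one-time charge to equity under revised IAS 19: {one_time_equity_charge:.0f}")
```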
193

IAS 19 och aktuariella antaganden i praktiken : En studie ur ett beslutsteoretiskt perspektiv / IAS 19 and actuarial assumptions in practice : A study from a decision theory perspective

Rebensdorff, Henrik, Prom, Nichola January 2013 (has links)
A much debated topic is the area of pensions. Through IAS 19, the IASB has attempted to harmonize accounting in this area; despite this, national as well as international differences arise. The main problem for companies is to produce a fair discount rate when calculating the present value of their pension liability. As a result, most companies, owing to a lack of knowledge on the subject, have chosen to hire this expertise from an actuarial consultant. The purpose of this thesis is to describe, from a decision theory perspective, how the actuarial consultant reasons and acts regarding the choice of a company's actuarial assumptions. The basis for this study is theory and rules derived from IFRS/IAS 19, with selected parts of decision theory in focus. To answer our research question, we chose to conduct six interviews, four with actuarial consultants and two with chief accountants at listed companies. The starting point for the approach is abductive, since an interplay between theory and empirical data was carried out. This thesis shows that the consultants did not see themselves as decision-makers but took a more supportive role. The dialogue and the approach of the actuarial consultant towards the client concerning the necessary data were an important part of the decision process. It also emerged that professional integrity is of vital importance to actuarial consultants in their work and in the decision-making process. / A much debated topic is the area of pensions. The IASB has, through IAS 19, made an attempt to harmonize the accounting, although national as well as international differences still arise. The main problem for companies is to produce a fair discount rate in calculating the present value of a pension liability. This has resulted in most companies, due to a lack of knowledge on the subject, deciding to hire this expertise from an actuarial consultant. The purpose of this paper is to describe, from a decision theory perspective, how the actuarial consultant reasons and acts on the choice of corporate actuarial assumptions. To get an answer to our question, we chose to perform six interviews, four with actuarial consultants and two with the chief accountants of listed companies. The starting point of the approach is abductive, as there is an interaction between theory and empirical work. The basis for this study is theory and rules based on IFRS/IAS 19, while selected parts of decision theory have been in focus. This thesis has shown that the consultants did not see themselves as decision-makers but took a more supportive role. The data was an important part of the decision process; therefore, the dialogue and the approach the actuarial consultants took towards the client played a significant role. This thesis has also shown that professional integrity has a vital influence on actuarial consultants in their work and in the decision-making process.
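The sensitivity that makes the discount rate so contested can be shown with a small sketch; the benefit cash flows and the rates below are invented, not taken from the thesis or from any company.

```python
# A small sketch (assumed cash flows) of why the discount rate matters so much:
# the present value of a long-dated pension obligation is highly sensitive to it.
def present_value(cash_flows, rate):
    """Discount a list of year-end cash flows at a flat annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Assumed benefit payments of 100 per year for 30 years.
benefits = [100.0] * 30

for rate in (0.02, 0.03, 0.04):
    print(f"discount rate {rate:.0%}: PV = {present_value(benefits, rate):,.0f}")
```

Even a one percentage point change in the assumed rate moves the obligation by a double-digit percentage, which is why the choice of discount rate dominates the actuarial assumptions discussed above.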
194

On the distribution of the time to ruin and related topics

Shi, Tianxiang 19 June 2013 (has links)
Following the introduction of the discounted penalty function by Gerber and Shiu (1998), significant progress has been made on the analysis of various ruin-related quantities in risk theory. As we know, the discounted penalty function not only provides a systematic platform to jointly analyze various quantities of interest, but also offers the convenience to extract key pieces of information from a risk management perspective. For example, by eliminating the penalty function, the Gerber-Shiu function becomes the Laplace-Stieltjes transform of the time to ruin, inversion of which results in a series expansion for the associated density of the time to ruin (see, e.g., Dickson and Willmot (2005)). In this thesis, we propose to analyze the long-standing finite-time ruin problem by incorporating the number of claims until ruin into the Gerber-Shiu analysis. As will be seen in Chapter 2, many nice analytic properties of the original Gerber-Shiu function are preserved by this generalized analytic tool. For instance, the Gerber-Shiu function still satisfies a defective renewal equation and can be generally expressed in terms of some roots of Lundberg's generalized equation in the Sparre Andersen risk model. In this thesis, we propose not only to unify previous methodologies on the study of the density of the time to ruin through the use of Lagrange's expansion theorem, but also to provide insight into the nature of the series expansion by identifying the probabilistic contribution of each term in the expansion through analysis involving the distribution of the number of claims until ruin. In Chapter 3, we study the joint generalized density of the time to ruin and the number of claims until ruin in the classical compound Poisson risk model. We also utilize an alternative approach to obtain the density of the time to ruin based on the Lagrange inversion technique introduced by Dickson and Willmot (2005). In Chapter 4, relying on the Lagrange expansion theorem for analytic inversion, the joint density of the time to ruin, the surplus immediately before ruin and the number of claims until ruin is examined in the Sparre Andersen risk model with exponential claim sizes and arbitrary interclaim times. To our knowledge, existing results on the finite-time ruin problem in the Sparre Andersen risk model typically involve an exponential assumption on either the interclaim times or the claim sizes (see, e.g., Borovkov and Dickson (2008)). Among the few exceptions, we mention Dickson and Li (2010, 2012) who analyzed the density of the time to ruin for Erlang-n interclaim times. In Chapter 5, we propose a significant breakthrough by utilizing the multivariate version of Lagrange's expansion theorem to obtain a series expansion for the density of the time to ruin under a more general distribution assumption, namely when interclaim times are distributed as a combination of n exponentials. It is worth emphasizing that this technique can also be applied to other areas of applied probability. For instance, the proposed methodology can be used to obtain the distribution of some first passage times for particular stochastic processes. As an illustration, the duration of a busy period in a queueing risk model will be examined. Interestingly, the proposed technique can also be used to analyze some first passage times for the compound Poisson processes with diffusion. 
In Chapter 6, we propose an extension to Kendall's identity (see, e.g., Kendall (1957)) by further examining the distribution of the number of jumps before the first passage time. We show that the main result is particularly relevant to enhance our understanding of some problems of interest, such as the finite-time ruin probability of a dual compound Poisson risk model with diffusion and the pricing of barrier options issued on an insurer's stock price. Another closely related quantity of interest is the so-called occupation times of the surplus process below zero (also referred to as the duration of negative surplus, see, e.g., Egidio dos Reis (1993)) or in a certain interval (see, e.g., Kolkovska et al. (2005)). Occupation times have been widely used as a contingent characteristic to develop advanced derivatives in financial mathematics. In risk theory, they can be used as an important risk management tool to examine the overall health of an insurer's business. The main subject matter of Chapter 7 is to extend the analysis of occupation times to a class of renewal risk processes. We provide explicit expressions for the duration of negative surplus and the double-barrier occupation time in terms of their Laplace-Stieltjes transforms. In the process, we revisit occupation times in the context of the classical compound Poisson risk model and examine some results proposed by Kolkovska et al. (2005). Finally, some concluding remarks and a discussion of future research are made in Chapter 8.
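As a rough companion to the analytic results summarised above, the following sketch estimates the finite-time ruin probability and the time to ruin by direct simulation of the classical compound Poisson (Cramér-Lundberg) surplus process. The parameters are illustrative assumptions, and the simulation is of course no substitute for the series expansions and Lagrange-inversion arguments developed in the thesis.

```python
# Monte Carlo sketch of the time to ruin in the classical Cramér-Lundberg model:
# surplus U(t) = u + c*t - aggregate claims, ruin can only occur at claim instants.
import numpy as np

rng = np.random.default_rng(1)

u = 10.0          # initial surplus (assumed)
c = 1.2           # premium rate, positive loading over lam * mean_claim
lam = 1.0         # Poisson claim arrival rate
mean_claim = 1.0  # exponential claim size mean
horizon = 100.0   # finite observation horizon
n_paths = 5_000

ruin_times = []
for _ in range(n_paths):
    t, aggregate_claims = 0.0, 0.0
    while True:
        t += rng.exponential(1 / lam)             # time of the next claim
        if t > horizon:
            break
        aggregate_claims += rng.exponential(mean_claim)
        if u + c * t - aggregate_claims < 0:      # ruin at this claim instant
            ruin_times.append(t)
            break

print(f"estimated finite-time ruin probability psi(u, {horizon:g}): "
      f"{len(ruin_times) / n_paths:.4f}")
if ruin_times:
    print(f"mean time to ruin, given ruin before the horizon: {np.mean(ruin_times):.2f}")
```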
195

Toward a unified global regulatory capital framework for life insurers

Sharara, Ishmael 28 February 2011 (has links)
In many regions of the world, the solvency regulation of insurers is becoming more principle-based and market oriented. However, the exact forms of the solvency standards that are emerging in individual jurisdictions are not entirely consistent. A common risk and capital framework can level the global playing field and possibly reduce the cost of capital for insurers. In the thesis, a conceptual framework for measuring the insolvency risk of life insurance companies will be proposed. The two main advantages of the proposed solvency framework are that it addresses the issue of incentives in the calibration of the capital requirements and that it provides an associated decomposition of the insurer's insolvency risk by term. The proposed term structure of insolvency risk is an efficient risk summary that should be readily accessible to both regulators and policyholders. Given the inherent complexity of the long-term guarantees and options of typical life insurance policies, the term structure of insolvency risk is able to provide stakeholders with more complete information than that provided by a single number relating to a specific period. The capital standards for life insurers that currently exist or have been proposed in Canada, the U.S. and the EU are then reviewed within the risk and capital measurement framework of the proposed standard to identify potential shortcomings.
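One way to read the idea of a "term structure of insolvency risk" is as the probability that the insurer first becomes insolvent in each future period. The toy sketch below estimates such a decomposition by simulation under an assumed, deliberately crude normal-increment surplus model; it is not the framework proposed in the thesis.

```python
# Toy sketch of a "term structure of insolvency risk": estimate, by simulation,
# the probability that a simple insurer surplus model first falls below zero in
# each future year. The surplus dynamics are an illustrative assumption only.
import numpy as np

rng = np.random.default_rng(2)

years, n_paths = 10, 50_000
surplus0, drift, volatility = 100.0, 5.0, 25.0   # assumed annual surplus dynamics

first_insolvency = np.full(n_paths, -1)
surplus = np.full(n_paths, surplus0)
alive = np.ones(n_paths, dtype=bool)
for year in range(1, years + 1):
    surplus[alive] += drift + volatility * rng.standard_normal(alive.sum())
    newly_insolvent = alive & (surplus < 0)
    first_insolvency[newly_insolvent] = year
    alive &= ~newly_insolvent

for year in range(1, years + 1):
    p = np.mean(first_insolvency == year)
    print(f"year {year:2d}: P(first insolvency in this year) = {p:.4f}")
```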
196

Financial Risk Management of Guaranteed Minimum Income Benefits Embedded in Variable Annuities

Marshall, Claymore January 2011 (has links)
A guaranteed minimum income benefit (GMIB) is a long-dated option that can be embedded in a deferred variable annuity. The GMIB is attractive because, for policyholders who plan to annuitize, it offers protection against poor market performance during the accumulation phase, and adverse interest rate experience at annuitization. The GMIB also provides an upside equity guarantee that resembles the benefit provided by a lookback option. We price the GMIB, and determine the fair fee rate that should be charged. Due to the long dated nature of the option, conventional hedging methods, such as delta hedging, will only be partially successful. Therefore, we are motivated to find alternative hedging methods which are practicable for long-dated options. First, we measure the effectiveness of static hedging strategies for the GMIB. Static hedging portfolios are constructed based on minimizing the Conditional Tail Expectation of the hedging loss distribution, or minimizing the mean squared hedging loss. Next, we measure the performance of semi-static hedging strategies for the GMIB. We present a practical method for testing semi-static strategies applied to long term options, which employs nested Monte Carlo simulations and standard optimization methods. The semi-static strategies involve periodically rebalancing the hedging portfolio at certain time intervals during the accumulation phase, such that, at the option maturity date, the hedging portfolio payoff is equal to or exceeds the option value, subject to an acceptable level of risk. While we focus on the GMIB as a case study, the methods we utilize are extendable to other types of long-dated options with similar features.
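A much simplified sketch of the static-hedging idea described above: choose fixed holdings in a small set of instruments held to maturity so as to minimise the mean squared hedging loss against the guarantee payoff. The "GMIB-like" liability, the geometric Brownian motion fund, the candidate instruments and all parameters are illustrative assumptions rather than the thesis's model; a real GMIB payoff also depends on annuitisation rates and interest rates at maturity.

```python
# Simplified sketch: static hedge chosen by minimising the mean squared hedging
# loss for a stylised GMIB-like guarantee payoff. All inputs are assumptions.
import numpy as np

rng = np.random.default_rng(3)

n_paths, maturity, r, sigma, fund0 = 100_000, 10.0, 0.03, 0.2, 100.0

# Terminal fund value under risk-neutral geometric Brownian motion.
z = rng.standard_normal(n_paths)
fund_T = fund0 * np.exp((r - 0.5 * sigma**2) * maturity + sigma * np.sqrt(maturity) * z)

# Stylised GMIB-like liability: initial premium rolled up at 5% per year.
guarantee = fund0 * 1.05**maturity
liability = np.maximum(guarantee - fund_T, 0.0)

# Candidate static hedge instruments held to maturity: a zero-coupon bond
# paying 1 and vanilla puts on the fund at a few strikes (assumed available).
strikes = np.array([80.0, 120.0, 160.0])
payoffs = np.column_stack(
    [np.ones(n_paths)] + [np.maximum(k - fund_T, 0.0) for k in strikes]
)

# Least-squares weights minimise E[(liability - hedge payoff)^2].
weights, *_ = np.linalg.lstsq(payoffs, liability, rcond=None)
residual = liability - payoffs @ weights
print("hedge weights (bond, puts):", np.round(weights, 3))
print("root mean squared hedging loss:", round(float(np.sqrt(np.mean(residual**2))), 3))
```

The same scenario matrix could instead be fed to a Conditional Tail Expectation minimisation, which is the other static objective mentioned in the abstract.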
197

Coherent Distortion Risk Measures in Portfolio Selection

Feng, Ming Bin January 2011 (has links)
The theme of this thesis relates to solving the optimal portfolio selection problems using linear programming. There are two key contributions in this thesis. The first contribution is to generalize the well-known linear optimization framework of Conditional Value-at-Risk (CVaR)-based portfolio selection problems (see Rockafellar and Uryasev (2000, 2002)) to more general risk measure portfolio selection problems. In particular, the class of risk measure under consideration is called the Coherent Distortion Risk Measure (CDRM) and is the intersection of two well-known classes of risk measures in the literature: the Coherent Risk Measure (CRM) and the Distortion Risk Measure (DRM). In addition to CVaR, other risk measures which belong to CDRM include the Wang Transform (WT) measure, Proportional Hazard (PH) transform measure, and lookback (LB) distortion measure. Our generalization implies that the portfolio selection problems can be solved very efficiently using the linear programming approach and over a much wider class of risk measures. The second contribution of the thesis is to establish the equivalences among four formulations of CDRM optimization problems: the return maximization subject to a CDRM constraint, the CDRM minimization subject to a return constraint, the return-CDRM utility maximization, and the CDRM-based Sharpe Ratio maximization. Equivalences among these four formulations are established in the sense that they produce the same efficient frontier when varying the parameters in their corresponding problems. We point out that the first three formulations have already been investigated in Krokhmal et al. (2002) with milder assumptions on risk measures (convex functionals of portfolio weights). Here we apply their results to CDRM and establish the fourth equivalence. For every one of these formulations, the relationship between its given parameter and the implied parameters for the other three formulations is explored. Such equivalences and relationships can help verify consistencies (or inconsistencies) for risk management with different objectives and constraints. They are also helpful for uncovering the implied information of a decision making process or of a given investment market. We conclude the thesis by conducting two case studies to illustrate the methodologies and implementations of our linear optimization approach, to verify the equivalences among the four different problem formulations, and to investigate the properties of different members of CDRM. In addition, the efficiency (or inefficiency) of the so-called 1/n portfolio strategy is examined in terms of the trade-off between portfolio return and portfolio CDRM. The properties of optimal portfolios and their returns with respect to different CDRM minimization problems are compared through their numerical results.
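For the special case of CVaR, the linear-programming formulation referred to above (Rockafellar and Uryasev (2000, 2002)) can be written down in a few lines. The sketch below minimises CVaR at the 95% level over long-only, fully invested portfolio weights using randomly generated scenario returns; it is a minimal illustration, not the thesis's general CDRM implementation.

```python
# Minimal sketch of the Rockafellar-Uryasev LP for minimising CVaR over
# portfolio weights, using randomly generated scenario returns.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n_scenarios, n_assets, alpha = 1000, 4, 0.95
returns = rng.normal(0.001, 0.02, size=(n_scenarios, n_assets))  # assumed scenarios

# Variables: [w_1..w_n, zeta (VaR level), z_1..z_S] with z_s >= loss_s - zeta.
# Objective: zeta + (1 / ((1 - alpha) * S)) * sum_s z_s, the CVaR upper bound.
n_vars = n_assets + 1 + n_scenarios
c = np.zeros(n_vars)
c[n_assets] = 1.0
c[n_assets + 1:] = 1.0 / ((1 - alpha) * n_scenarios)

# Loss in scenario s is -returns_s @ w, so: -returns_s @ w - zeta - z_s <= 0.
a_ub = np.zeros((n_scenarios, n_vars))
a_ub[:, :n_assets] = -returns
a_ub[:, n_assets] = -1.0
a_ub[:, n_assets + 1:] = -np.eye(n_scenarios)
b_ub = np.zeros(n_scenarios)

# Fully invested, long-only portfolio: weights are non-negative and sum to one.
a_eq = np.zeros((1, n_vars))
a_eq[0, :n_assets] = 1.0
b_eq = np.array([1.0])
bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scenarios

res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq, bounds=bounds,
              method="highs")
print("optimal weights:", np.round(res.x[:n_assets], 3))
print("minimised CVaR estimate:", round(res.fun, 5))
```

Swapping CVaR for another CDRM member amounts to replacing the equal scenario weights in the objective by the distortion-implied weights, which is the generalisation the thesis develops.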
198

The optimality of a dividend barrier strategy for Lévy insurance risk processes, with a focus on the univariate Erlang mixture

Ali, Javid January 2011 (has links)
In insurance risk theory, the surplus of an insurance company is modelled to monitor and quantify its risks. With the outgo of claims and inflow of premiums, the insurer needs to determine what financial portfolio ensures the soundness of the company’s future while satisfying the shareholders’ interests. It is usually assumed that the net profit condition (i.e. the expectation of the process is positive) is satisfied, which then implies that this process would drift towards infinity. To correct this unrealistic behaviour, the surplus process was modified to include the payout of dividends until the time of ruin. Under this more realistic surplus process, a topic of growing interest is determining which dividend strategy is optimal, where optimality is in the sense of maximizing the expected present value of dividend payments. This problem dates back to the work of Bruno De Finetti (1957) where it was shown that if the surplus process is modelled as a random walk with ± 1 step sizes, the optimal dividend payment strategy is a barrier strategy. Such a strategy pays as dividends any excess of the surplus above some threshold. Since then, other examples where a barrier strategy is optimal include the Brownian motion model (Gerber and Shiu (2004)) and the compound Poisson process model with exponential claims (Gerber and Shiu (2006)). In this thesis, we focus on the optimality of a barrier strategy in the more general Lévy risk models. The risk process will be formulated as a spectrally negative Lévy process, a continuous-time stochastic process with stationary increments which provides an extension of the classical Cramér-Lundberg model. This includes the Brownian and the compound Poisson risk processes as special cases. In this setting, results are expressed in terms of “scale functions”, a family of functions known only through their Laplace transform. In Loeffen (2008), we can find a sufficient condition on the jump distribution of the process for a barrier strategy to be optimal. This condition was then improved upon by Loeffen and Renaud (2010) while considering a more general control problem. The first chapter provides a brief review of theory of spectrally negative Lévy processes and scale functions. In chapter 2, we define the optimal dividends problem and provide existing results in the literature. When the surplus process is given by the Cramér-Lundberg process with a Brownian motion component, we provide a sufficient condition on the parameters of this process for the optimality of a dividend barrier strategy. Chapter 3 focuses on the case when the claims distribution is given by a univariate mixture of Erlang distributions with a common scale parameter. Analytical results for the Value-at-Risk and Tail-Value-at-Risk, and the Euler risk contribution to the Conditional Tail Expectation are provided. Additionally, we give some results for the scale function and the optimal dividends problem. In the final chapter, we propose an expectation maximization (EM) algorithm similar to that in Lee and Lin (2009) for fitting the univariate distribution to data. This algorithm is implemented and numerical results on the goodness of fit to sample data and on the optimal dividends problem are presented.
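A Monte Carlo sketch of the dividend barrier strategy discussed above, in the special case of the Cramér-Lundberg process with exponential claims: all surplus in excess of a barrier b is paid out as dividends, and the expected present value of dividends until ruin is estimated for a few barrier levels. The parameters are invented, and the simulation sidesteps the scale-function machinery used in the thesis.

```python
# Monte Carlo sketch of a dividend barrier strategy for the compound Poisson
# surplus process: surplus above the barrier is paid out as dividends, and we
# estimate the expected present value of dividends paid until ruin.
import numpy as np

rng = np.random.default_rng(5)

c, lam, mean_claim, delta = 1.5, 1.0, 1.0, 0.03  # premium rate, claim rate, mean claim, discount
u, horizon, n_paths = 5.0, 200.0, 2_000          # initial surplus, time cutoff, paths

def discounted_dividends(barrier):
    total = 0.0
    for _ in range(n_paths):
        # Any initial surplus above the barrier is paid out immediately.
        t, x, pv = 0.0, min(u, barrier), max(u - barrier, 0.0)
        while t < horizon:
            w = rng.exponential(1 / lam)         # time until the next claim
            reach = (barrier - x) / c            # time needed to reach the barrier
            if w > reach:
                # Surplus sits at the barrier from t+reach to t+w, paying
                # dividends continuously at rate c; discount that stream.
                pv += (c / delta) * (np.exp(-delta * (t + reach)) - np.exp(-delta * (t + w)))
                x = barrier
            else:
                x += c * w
            t += w
            x -= rng.exponential(mean_claim)     # claim payment at time t
            if x < 0:
                break                            # ruin: dividend payments stop
        total += pv
    return total / n_paths

for b in (2.0, 5.0, 10.0, 20.0):
    print(f"barrier {b:5.1f}: expected discounted dividends = {discounted_dividends(b):.3f}")
```

Scanning the barrier level in this way gives a crude numerical counterpart to the optimality question the thesis answers analytically via scale functions.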
199

Konsekvenser vid slopandet av korridormetoden : Engångseffekten på eget kapital / Consequences of abolishing the corridor method : One time effect on equity

Jarnbo, Cecilia, Ljungberg, Sophie January 2015 (has links)
How companies were affected by the abolition of the corridor method varied from company to company. However, all of the companies examined that applied the corridor method in the 2012 financial year had accumulated deficits. The total average effect on equity was a decrease of 4.49 percent. One sixth of the companies in the study took some measure to prevent a negative effect on equity. The main measure these companies had taken was to gradually move to defined contribution pension plans over recent years and to close the defined benefit pension plans, which prevented a negative effect. A further reason why the effect was not as large as feared in the media may be that some companies chose to apply the revised IAS 19 early, that is, in earlier financial years. / How corporate equity was affected by the abolition of the corridor method varied from company to company. However, all companies investigated in the essay that applied the corridor method in the 2012 fiscal year had an accumulated deficit. The total average effect on equity was a decrease of 4.49 percent. One sixth of the companies reviewed took preventive action to mitigate a negative effect on equity. The foremost such action was to move from defined benefit plans to defined contribution plans over recent years and to close the defined benefit plans. An additional reason why the effect on equity was not as large as the media predicted could be that some companies chose to apply the revised IAS 19 early, that is, in previous fiscal years.
200

The role of immune-genetic factors in modelling longitudinally measured HIV bio-markers including the handling of missing data.

Odhiambo, Nancy. 20 December 2013 (has links)
Since the discovery of AIDS among gay men in the United States of America in 1981, it has become a major world pandemic, with over 40 million individuals infected worldwide. According to the Joint United Nations Programme on HIV/AIDS epidemic update in 2012, 28.3 million individuals are living with HIV worldwide, 23.5 million of them in sub-Saharan Africa and 4.8 million in Asia. The report showed that approximately 1.7 million individuals have died from AIDS-related causes, 34 million ± 50% know their HIV status, a total of 2.5 million individuals are newly infected, 14.8 million individuals are eligible for HIV treatment and only 8 million are on HIV treatment (Joint United Nations Programme on HIV/AIDS and health sector progress towards universal access: progress report, 2011). Numerous studies have been carried out to understand the pathogenesis and the dynamics of this deadly disease (AIDS), but its pathogenesis is still poorly understood. More understanding of the disease is needed so as to reduce the rate of its acquisition. Researchers have developed statistical and mathematical models which help in understanding and predicting the progression of the disease, so as to find ways in which its acquisition can be prevented and controlled. Previous studies on HIV/AIDS have shown that inter-individual variability plays an important role in susceptibility to HIV-1 infection, its transmission, progression and even response to antiviral therapy. Certain immuno-genetic factors (human leukocyte antigen (HLA), Interleukin-10 (IL-10) and single nucleotide polymorphisms (SNPs)) have been associated with this variability among individuals. In this dissertation we reaffirm, through statistical modelling and analysis, previous findings that immuno-genetic factors could play a role in susceptibility, transmission, progression and even response to antiviral therapy. This is done using the Sinikithemba study data from the HIV Pathogenesis Programme (HPP) at the Nelson Mandela Medical School, University of KwaZulu-Natal, consisting of 451 HIV-positive and treatment-naive individuals, to model how the HIV bio-markers (viral load and CD4 count) are associated with the immuno-genetic factors using linear mixed models. We finalise the dissertation by dealing with drop-out, which is a pervasive problem in longitudinal studies, regardless of how well they are designed and executed. We demonstrate the application and performance of multiple imputation (MI) in handling drop-out using longitudinal count data from the Sinikithemba study with log viral load as the response. Our aim is to investigate the influence of drop-out on the evolution of HIV bio-markers in a model including selected genetic factors as covariates, assuming the missing-data mechanism is missing at random (MAR). We then compare the results obtained from the MI method to those obtained from the incomplete data set. From the results, we can clearly see that there is a considerable difference between the findings obtained from the two analyses. Therefore, there is a need to account for drop-out, since it can lead to biased results if not accounted for. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
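A small self-contained sketch of the two modelling steps described above, on simulated data rather than the Sinikithemba cohort: a linear mixed model with a subject-specific random intercept for a longitudinal log viral load outcome, followed by a deliberately crude multiple-imputation loop for dropout. The data-generating model, the MAR dropout mechanism, the imputation model and the variable names (subject, time, genotype, log_vl) are all illustrative assumptions.

```python
# Sketch: random-intercept linear mixed model for a longitudinal biomarker,
# plus a crude multiple imputation (MI) loop for dropout, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

n_subjects, n_visits = 200, 5
subjects = np.repeat(np.arange(n_subjects), n_visits)
time = np.tile(np.arange(n_visits), n_subjects)
genotype = np.repeat(rng.integers(0, 2, n_subjects), n_visits)  # a binary host-genetic factor
intercepts = np.repeat(rng.normal(0.0, 0.5, n_subjects), n_visits)
log_vl = 4.0 + 0.1 * time - 0.4 * genotype + intercepts + rng.normal(0.0, 0.3, len(time))
data = pd.DataFrame({"subject": subjects, "time": time,
                     "genotype": genotype, "log_vl": log_vl})

# Linear mixed model with a random intercept per subject (complete data).
full_fit = smf.mixedlm("log_vl ~ time + genotype", data, groups=data["subject"]).fit()
print(full_fit.params[["time", "genotype"]])

# Dropout whose probability depends on observed covariates (MAR-style).
p_drop = 0.15 + 0.2 * data["genotype"]
data["missing"] = (data["time"] > 1) & (rng.random(len(data)) < p_drop)
observed = data.loc[~data["missing"]].copy()

# Crude MI: draw missing responses from a normal model fitted to observed data,
# refit the mixed model on each completed data set, and average the estimates
# (the point-estimate part of Rubin's rules).
m, pooled = 5, []
pred_fit = smf.ols("log_vl ~ time + genotype", observed).fit()
for _ in range(m):
    completed = data.copy()
    miss = completed["missing"]
    mu = pred_fit.predict(completed.loc[miss])
    completed.loc[miss, "log_vl"] = mu + rng.normal(0.0, np.sqrt(pred_fit.scale), miss.sum())
    fit = smf.mixedlm("log_vl ~ time + genotype", completed,
                      groups=completed["subject"]).fit()
    pooled.append(fit.params[["time", "genotype"]])
print(pd.concat(pooled, axis=1).mean(axis=1))
```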
