51

Calculation aspects of the European Rebalanced Basket Option using Monte Carlo methods

Van der Merwe, Carel Johannes 12 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: Life insurance and pension funds offer a wide range of products that are invested in a mix of assets. These portfolios (Π), underlying the products, are rebalanced back to predetermined fixed proportions on a regular basis. This is done by selling the better performing assets and buying the worse performing assets. Life insurance or pension fund contracts can offer the client a minimum payout guarantee on the contract by charging an extra premium (α). The problem can be recast as the pricing of a put option on the underlying portfolio Π. The option forms a liability for the insurance firm and therefore also needs to be managed in terms of its risks, which can be done by studying the option's sensitivities. In this thesis the premium and sensitivities of this put option are calculated using different Monte Carlo methods, in order to find the most efficient method. Using general Monte Carlo methods, a simplistic pricing method is found, which is then refined by applying mathematical techniques so that the computational time is reduced significantly. After considering Antithetic Variables, Control Variates and Latin Hypercube Sampling as variance reduction techniques, option prices as Control Variates prove to reduce the error of the refined method most efficiently. This is improved further by considering different Quasi-Monte Carlo techniques, namely Halton, Faure, plain Sobol' and other randomised Sobol' sequences. Owen and Faure-Tezuka type randomised Sobol' sequences improve the convergence of the estimator most efficiently. Furthermore, the better choice between Pathwise Derivative Estimates and Finite Difference Approximations for estimating the sensitivities of this option is determined. By using the refined pricing method with option prices as Control Variates, together with Owen and Faure-Tezuka type randomised Sobol' sequences as a Quasi-Monte Carlo method, more efficient methods to price this option (compared to simplistic Monte Carlo methods) are obtained. In addition, more efficient sensitivity estimators are obtained to help manage risks. / AFRIKAANSE OPSOMMING: Lewensversekering en pensioenfondse bied die mark 'n wye reeks produkte wat belê word in 'n mengsel van bates. Hierdie portefeuljes (Π), onderliggend aan die produkte, word op 'n gereelde basis terug herbalanseer volgens voorafbepaalde vaste proporsies. Dit word gedoen deur bates wat beter opbrengste gehad het te verkoop, en bates met swakker opbrengste aan te koop. Lewensversekering- of pensioenfondskontrakte kan 'n kliënt 'n verdere minimum uitbetaling aan die einde van die kontrak waarborg deur 'n ekstra premie (α) op die kontrak te vra. Die probleem kan verander word na die prysing van 'n verkoopopsie met onderliggende bate Π. Hierdie vorm deel van die versekeringsmaatskappy se laste en moet dus ook bestuur word in terme van sy risiko's. Dit kan gedoen word deur die opsie se sensitiwiteite te bestudeer. In hierdie tesis word die premie en sensitiwiteite van die verkoopopsie met behulp van verskillende Monte Carlo metodes bereken, om sodoende die effektiefste metode te vind. Deur die gebruik van algemene Monte Carlo metodes word 'n simplistiese prysingsmetode, wat verfyn is met behulp van wiskundige tegnieke wat die berekeningstyd wesenlik verminder, gevind.
Nadat Antitetiese Veranderlikes, Kontrole Variate en Latynse Hiperkubus Steekproefneming as variansiereduksietegnieke oorweeg is, word gevind dat die verfynde metode se fout die effektiefste verminder met behulp van opsiepryse as Kontrole Variate. Dit word verbeter deur verskillende Quasi-Monte Carlo tegnieke, naamlik Halton, Faure, normale Sobol' en ander verewekansigde Sobol' reekse, te vergelyk. Die Owen en Faure-Tezuka tipe verewekansigde Sobol' reeks verbeter die konvergensie van die beramer die effektiefste. Verder is die beste metode tussen Baanafhanklike Afgeleide Beramers en Eindige Differensie Benaderings om die sensitiwiteit vir die opsie te bepaal, ook gevind. Deur dus die verfynde prysingsmetode met opsiepryse as Kontrole Variate, saam met Owen en Faure-Tezuka tipe verewekansigde Sobol' reekse as 'n Quasi-Monte Carlo metode te gebruik, word meer effektiewe metodes om die opsie te prys, gevind (in vergelyking met simplistiese Monte Carlo metodes). Verder is meer effektiewe sensitiwiteitsberamers as voorheen gevind wat gebruik kan word om risiko's te help bestuur.
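The pricing problem described in this abstract lends itself to a short illustration. The sketch below, which is not from the thesis itself, prices a European put on a periodically rebalanced basket by simulating correlated geometric Brownian motions and rebalancing back to fixed weights at every step; a simple antithetic-variates switch stands in for one of the variance reduction techniques mentioned. All parameter values (weights, volatilities, correlations, strike) are illustrative assumptions.

```python
import numpy as np

def rebalanced_basket_put_mc(s0, weights, sigma, corr, strike,
                             T=1.0, steps=12, n_paths=50_000, r=0.05,
                             antithetic=True, seed=0):
    """Monte Carlo price of a European put on a basket rebalanced to fixed
    weights at every step (illustrative sketch, not the thesis code)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    chol = np.linalg.cholesky(corr)

    z = rng.standard_normal((n_paths, steps, len(s0)))
    if antithetic:                      # pair each path with its mirror image
        z = np.concatenate([z, -z], axis=0)

    port = np.full(z.shape[0], np.sum(weights * s0))   # initial portfolio value
    for t in range(steps):
        eps = z[:, t, :] @ chol.T                       # correlated shocks
        growth = np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * eps)
        # rebalancing to fixed weights means the portfolio's gross return over
        # the step is the weighted average of the individual asset returns
        port = port * (growth @ weights)

    payoff = np.maximum(strike - port, 0.0)
    disc = np.exp(-r * T)
    price = disc * payoff.mean()
    stderr = disc * payoff.std(ddof=1) / np.sqrt(len(payoff))  # naive standard error
    return price, stderr

# Illustrative inputs (assumed, not from the thesis)
s0 = np.array([100.0, 100.0, 100.0])
w = np.array([0.5, 0.3, 0.2])
sigma = np.array([0.20, 0.15, 0.25])
corr = np.array([[1.0, 0.3, 0.2], [0.3, 1.0, 0.4], [0.2, 0.4, 1.0]])
price, se = rebalanced_basket_put_mc(s0, w, sigma, corr, strike=100.0)
print(f"put price ~ {price:.4f} (s.e. {se:.4f})")
```

Control variates, Latin Hypercube Sampling and randomised Sobol' sequences would replace the plain pseudo-random draws in this sketch; the structure of the rebalancing loop stays the same.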
52

Non-parametric regression modelling of in situ fCO2 in the Southern Ocean

Pretorius, Wesley Byron 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: The Southern Ocean is a complex system, where the relationship between CO2 concentrations and its drivers varies intra- and inter-annually. Due to the lack of readily available in situ data in the Southern Ocean, a model approach was required which could predict the CO2 concentration proxy variable, fCO2. This must be done using predictor variables available via remote measurements to ensure the usefulness of the model in the future. These predictor variables were sea surface temperature, log transformed chlorophyll-a concentration, mixed layer depth and, at a later stage, altimetry. Initial exploratory analysis indicated that a non-parametric approach to the model should be taken. A parametric multiple linear regression model was developed to use as a comparison to previous studies in the North Atlantic Ocean as well as to compare with the results of the non-parametric approach. A non-parametric kernel regression model was then used to predict fCO2 and finally a combination of the parametric and non-parametric regression models was developed, referred to as the mixed regression model. The results indicated, as expected from exploratory analyses, that the non-parametric approach produced more accurate estimates based on an independent test data set. These more accurate estimates, however, were coupled with zero estimates, caused by the curse of dimensionality. It was also found that the inclusion of salinity (not available remotely) improved the model, and therefore altimetry was chosen to attempt to capture this effect in the model. The mixed model displayed reduced errors as well as removing the zero estimates and hence reducing the variance of the error rates. The results indicated that the mixed model is the best approach to use to predict fCO2 in the Southern Ocean and that altimetry's inclusion did improve the prediction accuracy. / AFRIKAANSE OPSOMMING: Die Suidelike Oseaan is 'n komplekse sisteem waar die verhouding tussen CO2 konsentrasies en die drywers daarvoor intra- en interjaarliks varieer. 'n Tekort aan maklik verkrygbare in situ data van die Suidelike Oseaan het daartoe gelei dat 'n model benadering nodig was wat die CO2 konsentrasie plaasvervangerveranderlike, fCO2, kon voorspel. Dié moet gedoen word deur gebruik te maak van voorspellende veranderlikes, beskikbaar deur middel van afgeleë metings, om die bruikbaarheid van die model in die toekoms te verseker. Hierdie voorspellende veranderlikes het ingesluit see-oppervlaktetemperatuur, log getransformeerde chlorofil-a konsentrasie, gemengde laag diepte en op 'n latere stadium, hoogtemeting. 'n Aanvanklike, ondersoekende analise het aangedui dat 'n nie-parametriese benadering tot die data geneem moet word. 'n Parametriese meerfoudige lineêre regressie model is ontwikkel om met die vorige studies in die Noord-Atlantiese Oseaan asook met die resultate van die nie-parametriese benadering te vergelyk. 'n Nie-parametriese kern regressie model is toe ingespan om die fCO2 te voorspel en uiteindelik is 'n kombinasie van die parametriese en nie-parametriese regressie modelle ontwikkel vir dieselfde doel, wat na verwys word as die gemengde regressie model. Die resultate het aangetoon, soos verwag uit die ondersoekende analise, dat die nie-parametriese benadering meer akkurate beramings lewer, gebaseer op 'n onafhanklike toets datastel. Dié meer akkurate beramings het egter met "nul"beramings gepaard gegaan wat veroorsaak word deur die vloek van dimensionaliteit.
Daar is ook gevind dat die insluiting van soutgehalte (nie via satelliet beskikbaar nie) die model verbeter en juis daarom is hoogtemeting gekies om te poog om hierdie effek in die model vas te vang. Die gemengde model het kleiner foute getoon asook die "nul"beramings verwyder en sodoende die variasie van die foutkoerse verminder. Die resultate het dus aangetoon dat die gemengde model die beste benadering is om te gebruik om die fCO2 in die Suidelike Oseaan te beraam en dat die insluiting van hoogtemeting die akkuraatheid van hierdie beraming verbeter.
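As a rough illustration of the non-parametric approach described above, the sketch below fits a Nadaraya-Watson kernel regression of fCO2 on sea surface temperature, log chlorophyll-a and mixed layer depth. The variable names, bandwidths and synthetic data are assumptions for illustration only; the thesis's actual data and tuning are not reproduced here.

```python
import numpy as np

def gaussian_kernel_regression(X_train, y_train, X_query, bandwidth):
    """Nadaraya-Watson estimator with a product Gaussian kernel.
    Illustrative sketch; bandwidth selection (e.g. cross-validation) is omitted."""
    Xt = X_train / bandwidth          # scale each predictor by its bandwidth
    Xq = X_query / bandwidth
    preds = np.empty(len(Xq))
    for i, x in enumerate(Xq):
        d2 = np.sum((Xt - x) ** 2, axis=1)   # squared scaled distances
        w = np.exp(-0.5 * d2)                # Gaussian kernel weights
        preds[i] = np.nan if w.sum() == 0 else np.dot(w, y_train) / w.sum()
    return preds

# Synthetic example with assumed predictors: SST, log(chl-a), MLD
rng = np.random.default_rng(1)
n = 500
sst = rng.uniform(-1, 20, n)
log_chl = rng.normal(-1.0, 0.5, n)
mld = rng.uniform(10, 200, n)
X = np.column_stack([sst, log_chl, mld])
fco2 = 360 - 2.0 * sst + 15 * log_chl + 0.05 * mld + rng.normal(0, 5, n)

bw = np.array([2.0, 0.3, 30.0])              # assumed bandwidth per predictor
fitted = gaussian_kernel_regression(X, fco2, X[:5], bw)
print(np.round(fitted, 1))
```

When all kernel weights underflow to zero in sparse regions of predictor space, the estimator returns no usable value; this is the curse-of-dimensionality issue the abstract mentions and is what motivates the mixed parametric/non-parametric model.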
53

A brief introduction to basic multivariate economic statistical process control

Mudavanhu, Precious 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Statistical process control (SPC) plays a very important role in monitoring and improving industrial processes to ensure that products produced or shipped to the customer meet the required specifications. The main tool used in SPC is the statistical control chart. The traditional approach to control chart design assumed that a process is described by a single quality characteristic. However, according to Montgomery and Klatt (1972), industrial processes and products can have more than one quality characteristic, and their joint effect describes product quality. Process monitoring in which several related variables are of interest is referred to as multivariate statistical process control (MSPC). The most vital and commonly used tool in MSPC is, as in SPC, the statistical control chart. The design of a control chart requires the user to select three parameters: the sample size n, the sampling interval h and the control limits k. Several authors have developed control charts based on more than one quality characteristic; among them was Hotelling (1947), who pioneered the use of multivariate process control techniques through the development of the T²-control chart, well known as Hotelling's T²-control chart. Since the introduction of the control chart technique, the most common and widely used design method has been the statistical design. However, according to Montgomery (2005), the design of a control chart also has economic implications. Costs are incurred during the operation of a control chart: the costs of sampling and testing, costs associated with investigating an out-of-control signal and possibly correcting any assignable cause found, costs associated with the production of nonconforming products, and so on. This paper gives an overview of the different methods and techniques that have been employed to develop economic statistical models for MSPC. The first multivariate economic model presented is the economic design of Hotelling's T²-control chart to maintain current control of a process, developed by Montgomery and Klatt (1972). This is followed by the work of Kapur and Chao (1996), in which a specification region for the multiple quality characteristics is created and a multivariate quality loss function is used to minimise the total loss to both the producer and the customer. Another approach, by Chou et al. (2002), is also presented, in which a procedure is developed that simultaneously monitors the process mean and covariance matrix through the use of a quality loss function. The procedure is based on the test statistic −2 ln L, and the cost model builds on the ideas of Montgomery and Klatt (1972) as well as Kapur and Chao (1996). One example of the use of the variable sample size technique in the economic and economic statistical design of the control chart is also presented. Specifically, an economic and economic statistical design of the T²-control chart with two adaptive sample sizes (Faraz et al., 2010) is presented. Faraz et al. (2010) developed a cost model of a variable sample size T²-control chart for the economic and economic statistical design using Lorenzen and Vance's (1986) model.
There are several other approaches to the multivariate economic statistical process control (MESPC) problem, but in this project the focus is on cases based on the phase II stage of the process, where the mean vector and the covariance matrix have been fairly well established and can be taken as known, but both are subject to assignable causes. This latter aspect is often ignored by researchers. Nevertheless, the article by Faraz et al. (2010) is included to give more insight into how more sophisticated approaches may fit in with MESPC, even if only the mean vector may be subject to assignable causes. Keywords: control chart; statistical process control; multivariate statistical process control; multivariate economic statistical process control; multivariate control chart; loss function. / AFRIKAANSE OPSOMMING: Statistiese proses kontrole (SPK) speel 'n baie belangrike rol in die monitering en verbetering van industriële prosesse om te verseker dat produkte wat vervaardig word, of na kliënte versend word, wel aan die vereiste voorwaardes voldoen. Die vernaamste tegniek wat in SPK gebruik word, is die statistiese kontrolekaart. Die tradisionele wyse waarop statistiese kontrolekaarte ontwerp is, aanvaar dat 'n proses deur slegs 'n enkele kwaliteitsveranderlike beskryf word. Montgomery and Klatt (1972) beweer egter dat industriële prosesse en produkte meer as een kwaliteitseienskap kan hê en dat hulle gesamentlik die kwaliteit van 'n produk kan beskryf. Proses monitering waarin verskeie verwante veranderlikes van belang mag wees, staan as meerveranderlike statistiese proses kontrole (MSPK) bekend. Die mees belangrike en algemene tegniek wat in MSPK gebruik word, is ewe eens die statistiese kontrolekaart soos dit die geval is by SPK. Die ontwerp van 'n kontrolekaart vereis van die gebruiker om drie parameters te kies, naamlik die steekproefgrootte n, die tussensteekproefinterval h en die kontrolegrense k. Verskeie skrywers het kontrolekaarte ontwikkel wat op meer as een kwaliteitseienskap gebaseer is, waaronder Hotelling wat die gebruik van meerveranderlike proses kontrole tegnieke ingelei het met die ontwikkeling van die T²-kontrolekaart wat algemeen bekend is as Hotelling se T²-kontrolekaart (Hotelling, 1947). Sedert die ingebruikneming van die kontrolekaart tegniek is die statistiese ontwerp daarvan die mees algemene benadering en is dit ook in daardie formaat gebruik. Nietemin, volgens Montgomery and Klatt (1972) en Montgomery (2005), het die ontwerp van die kontrolekaart ook ekonomiese implikasies. Daar is kostes betrokke by die ontwerp van die kontrolekaart en daar is ook die kostes t.o.v. steekproefneming en toetsing, kostes geassosieer met die ondersoek van 'n buite-kontrole-sein, en moontlike herstel indien enige moontlike korreksie van so 'n buite-kontrole-sein gevind word, kostes geassosieer met die produksie van niekonforme produkte, ens. In die eenveranderlike geval is die hantering van die ekonomiese eienskappe al in diepte ondersoek. Hierdie werkstuk gee 'n oorsig oor sommige van die verskillende metodes of tegnieke wat al daargestel is t.o.v. verskillende ekonomiese statistiese modelle vir MSPK. In die besonder word aandag gegee aan die gevalle waar die vektor van gemiddeldes sowel as die kovariansiematriks onderhewig is aan potensiële verskuiwings, in teenstelling met 'n neiging om slegs na die vektor van gemiddeldes in isolasie te kyk synde onderhewig aan moontlike verskuiwings te wees.
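For readers unfamiliar with the T² chart discussed above, the following sketch computes the Hotelling T² statistic for successive subgroup means against an assumed in-control mean vector and covariance matrix, and flags points beyond a chi-square-based control limit (the known-parameter, phase II setting). The data, parameters and limit are illustrative assumptions; the economic design step of choosing n, h and k by cost minimisation is not implemented here.

```python
import numpy as np
from scipy.stats import chi2

def hotelling_t2(subgroup_means, mu0, sigma0, n):
    """T^2 = n * (xbar - mu0)' Sigma0^{-1} (xbar - mu0) for each subgroup mean.
    Assumes the in-control mu0 and Sigma0 are known (phase II setting)."""
    inv = np.linalg.inv(sigma0)
    diff = subgroup_means - mu0
    return n * np.einsum("ij,jk,ik->i", diff, inv, diff)

# Illustrative in-control parameters (assumed)
mu0 = np.array([10.0, 5.0])
sigma0 = np.array([[1.0, 0.3], [0.3, 0.5]])
n = 5                                    # subgroup size
alpha = 0.005                            # false-alarm rate defining the control limit
ucl = chi2.ppf(1 - alpha, df=len(mu0))   # chi-square limit for known parameters

rng = np.random.default_rng(2)
means = rng.multivariate_normal(mu0, sigma0 / n, size=30)   # simulated subgroup means
means[25:] += np.array([0.8, -0.5])                         # inject a shift (assignable cause)

t2 = hotelling_t2(means, mu0, sigma0, n)
for i, stat in enumerate(t2, 1):
    flag = "OUT OF CONTROL" if stat > ucl else ""
    print(f"subgroup {i:2d}: T2 = {stat:6.2f} {flag}")
```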
54

The effect of liquidity on stock returns on the JSE

Reisinger, Astrid Kim 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: This thesis examines the effect of liquidity on excess stock returns on the Johannesburg Stock Exchange (JSE) over the period 2003 to 2011. It builds on the findings of previous studies that found size, value and momentum effects to be significant in explaining market anomalies by adding a further explanatory factor, namely liquidity. A standard CAPM, as well as a momentum-augmented Fama-French (1993: 3) model, is employed in regression analyses to examine the effect of the four variables on excess stock returns. Results suggest that the log of a stock's market value best captures the size effect, the earnings yield best captures the value effect and the previous three months' returns best capture the momentum effect. Five liquidity proxies are used: the bid-ask spread first proposed by Amihud (1986: 223), turnover, the price impact measure of Amihud (2002: 31) and two zero-return measures proposed by Lesmond et al. (1999: 1113). Although prior studies found liquidity to be an influential factor, this thesis finds the opposite to be true. The finding remains robust, irrespective of the liquidity measure used. While size, value and momentum are found to be significant to a certain extent in explaining excess stock returns over the period, liquidity is not. This is a surprising result, given that the JSE is seen as an emerging market, which is generally regarded as illiquid; it is compounded by the fact that the JSE is a highly concentrated and therefore skewed market dominated by only a handful of shares, so liquidity would be expected to be of utmost importance. The finding that liquidity is nevertheless not a priced factor on this market is therefore important and requires further analysis to determine why this is the case. In addition, significant non-zero intercepts remain, indicating that risk factors are still missing. / AFRIKAANSE OPSOMMING: In hierdie tesis word die effek van likiditeit op oormaat aandeel-opbrengste op die Johannesburg Effektebeurs (JEB) ondersoek gedurende die periode 2003 tot 2011. Dit bou voort op die bevindinge van vorige studies wat toon dat grootte, waarde en momentum beduidend is in die verklaring van mark onreëlmatighede deur 'n addisionele verklarende faktor, likiditeit, toe te voeg. 'n Standaard kapitaalbateprysingsmodel (KBPM) sowel as 'n momentum-aangepaste Fama-French (1993: 3) model word gebruik om deur middel van regressie analise die effek van die vier veranderlikes op oormaat aandeel-opbrengste te ondersoek. Die resultate toon dat die grootte effek die beste verteenwoordig word deur die logaritme van die aandeel se mark kapitalisasie, die verdienste-opbrengs verteenwoordig die waarde effek en die vorige drie-maande opbrengskoerse verteenwoordig die momentum effek die beste. Vyf likiditeitsveranderlikes is gebruik: bod-en-aanbod spreiding voorgestel deur Amihud (1986: 223), omset, die prys-impak maatstaf van Amihud (2002: 31) en twee nul-opbrengskoers maatstawwe voorgestel deur Lesmond et al. (1999: 1113). Afgesien van die feit dat vorige studies die effek van likiditeit beduidend vind, word die teenoorgestelde in hierdie tesis gevind. Hierdie bevinding bly robuus, ongeag van die likiditeitsveranderlike wat gebruik word.
Terwyl bevind is dat grootte, waarde en momentum tot 'n sekere mate beduidend is in die verklaring van oormaat aandeel-opbrengste tydens die periode, is geen aanduiding gevind dat likiditeit 'n addisionele beduidende verklarende faktor is nie. Hierdie bevinding is onverwags, aangesien die JEB beskou word as 'n ontluikende mark, wat normaalweg illikied is. Hierdie feit word vererger deurdat die JEB hoogs gekonsentreerd is en dus 'n skewe mark is wat oorheers word deur slegs 'n handvol aandele. Dus word verwag dat likiditeit 'n baie belangrike faktor behoort te wees. Die bevinding dat likiditeit nie 'n prysingsfaktor op hierdie mark is nie, is dus 'n belangrike bevinding en vereis verdere analise om vas te stel waarom dit die geval is. Addisioneel word beduidende nie-nul afsnitte verkry, wat aandui dat daar steeds risiko faktore is wat nog nie geïdentifiseer is nie.
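A minimal sketch of the kind of factor regression described above is given below, assuming monthly excess returns and pre-built size, value, momentum and liquidity factor series; the column names, synthetic data and the use of statsmodels OLS are assumptions, not the thesis's actual code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic monthly data standing in for JSE factor series (assumed names)
rng = np.random.default_rng(3)
n = 108                                          # 2003-2011, monthly
factors = pd.DataFrame({
    "mkt_excess": rng.normal(0.006, 0.04, n),    # market excess return
    "size":       rng.normal(0.002, 0.03, n),    # size factor (e.g. log market value based)
    "value":      rng.normal(0.003, 0.03, n),    # value factor (e.g. earnings yield based)
    "momentum":   rng.normal(0.004, 0.05, n),    # prior three-month return factor
    "liquidity":  rng.normal(0.000, 0.02, n),    # e.g. bid-ask-spread based proxy
})
stock_excess = (0.9 * factors["mkt_excess"] + 0.3 * factors["size"]
                + 0.2 * factors["value"] + rng.normal(0, 0.02, n))

X = sm.add_constant(factors)          # the intercept tests for remaining missing risk factors
model = sm.OLS(stock_excess, X).fit()
print(model.summary().tables[1])      # a liquidity t-statistic near zero would echo the thesis finding
```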
55

Portfolio Opportunity Distributions (PODs) for the South African market : based on regulation requirements

Nortje, Hester Maria 04 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: In this study Portfolio Opportunity Distributions (PODs) are applied as an alternative performance evaluation method. Traditionally, broad-market indices or peer group comparisons are used for performance evaluation. These methods, however, have various biases and other problems related to their use, including composition bias, classification bias and concentration. R.J. Surz (1994) introduced PODs in order to eliminate some of these problems. Each fund has its own opportunity set based on its style mandate and constraints. The style mandate of the fund is determined by calculating the fund's exposure to the nine Surz Style Indices through Returns-Based Style Analysis (RBSA). The indices are created based on the style classification proposed by R.J. Surz (1994), with some adjustments made to incorporate the unique nature of the South African equity market. The combination of the fund's exposures to the indices best explains the return that the fund generated. In this paper the fund's constraints are based on the regulation requirements imposed on funds in South Africa by the Collective Investment Schemes Control Act No. 45 of 2002 (CISCA). Thousands of random portfolios are then generated based on the fund's opportunity set. The return and risk of the simulated portfolios represent the possible investment outcomes that the manager could have achieved given its opportunity set; together they form a range of possible outcomes against which the performance of the fund is compared. It is also possible to assess the skill of the manager, since a manager who consistently outperforms most of the simulated portfolios shows skill in selecting shares to be included in the portfolio and in assigning the correct weights to these shares. The South African Rand depreciated considerably during the period under evaluation, and funds therefore invested large portions of their assets in foreign investments. These investments mostly yielded very high or very low returns compared to the returns available in the domestic equity market, which affected the application of PODs. Although the PODs methodology shows great potential, it is impossible to conclude with certainty from the current data whether it is superior to the traditional methods. / AFRIKAANSE OPSOMMING: In hierdie studie word Portefeulje Geleentheids Verdelings ("PODs") bekendgestel as 'n alternatiewe manier om die opbrengste van bestuurders te evalueer. Gewoonlik word indekse en die vergelyking van die fonds met soortgelyke fondse gebruik om fondse te evalueer. Die metodes het egter verskeie probleme wat met die gebruik daarvan verband hou. Die probleme sluit onder andere in: die samestelling en klassifikasie van soortgelyke fondse, die konsentrasie in die mark, ens. R.J. Surz (1994) het dus Portefeulje Geleentheids Verdelings ("PODs") bekendgestel in 'n poging om sommige van die probleme te elimineer. Elke fonds het sy eie unieke geleentheids versameling wat gebaseer is op die fonds se styl en enige beperkings wat op die fonds van toepassing is. Die fonds se styl word bepaal deur die fonds se blootstelling aan die nege Surz Styl Indekse te meet met behulp van opbrengs-gebaseerde styl analise ("RBSA"). Die indekse is geskep gebaseer op die metode wat deur R.J. Surz (1994) voorgestel is.
Daar is egter aanpassings gemaak om die unieke aard van die Suid-Afrikaanse aandele mark in ag te neem. Die kombinasie van die fonds se blootstelling aan die indekse verduidelik waar die fonds se opbrengs vandaan kom. In die navorsingstuk is die beperkings wat van toepassing is op die fonds afkomstig uit die regulasie vereistes wat deur die "Collective Investment Schemes Control Act No. 45 of 2002 (CISCA)" in Suid-Afrika op fondse van toepassing is. Duisende ewekansige portefeuljes word dan gegenereer gebaseer op die fonds se unieke groep aandele waarin die fonds kan belê. Die opbrengs en risiko van die gesimuleerde portefeuljes verteenwoordig al die moontlike beleggings uitkomste wat die fonds bestuurder kon gegenereer het gegewe die fonds se unieke groep aandele waarin dit kon belê. Die opbrengs en risiko van al die gesimuleerde portefeuljes skep saam 'n verdeling van moontlike beleggings uitkomste waarteen die opbrengs en risiko van die fonds vergelyk word. Hierdie proses maak dit moontlik om die fonds bestuurder se vermoë om beter as die meeste van die gesimuleerde portefeuljes te presteer te bepaal. Die aanname kan gemaak word dat 'n bestuurder wat konsekwent oor tyd beter as die meeste van die gesimuleerde portefeuljes presteer oor die vermoë beskik om die regte aandele te kies om in die portefeulje in te sluit en ook die regte gewigte aan die aandele toe te ken. Die Suid-Afrikaanse Rand het heelwat gedepresieer tydens die evaluasie periode en daarom het fondse groot porsies van hul beleggings oorsee belê. Die beleggings het dus of heelwat groter of heelwat kleiner opbrengste gehad in vergelyking met die opbrengste beskikbaar in die plaaslike aandelemark en dit het die toepassing van PODs beïnvloed. PODs toon baie potensiaal, maar dit is egter onmoontlik om met die huidige datastel vas te stel of dit 'n beter metode is.
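To make the simulation idea concrete, here is a rough sketch that draws thousands of random long-only portfolios from a fund's opportunity set, subject to an assumed per-share weight cap standing in for a CISCA-style constraint, and reads off the percentile rank of the fund's own return within the simulated distribution. The cap, universe size and return data are illustrative assumptions, not the actual regulatory limits or study data.

```python
import numpy as np

def random_portfolios(asset_returns, n_portfolios=10_000, max_weight=0.15, seed=4):
    """Simulate long-only portfolios with a per-asset weight cap (assumed constraint).
    Returns the simulated portfolio mean returns and volatilities."""
    rng = np.random.default_rng(seed)
    n_obs, n_assets = asset_returns.shape
    sims_ret, sims_vol = [], []
    while len(sims_ret) < n_portfolios:
        w = rng.dirichlet(np.ones(n_assets))     # random weights summing to 1
        if w.max() > max_weight:                 # enforce the cap by rejection sampling
            continue
        port = asset_returns @ w
        sims_ret.append(port.mean())
        sims_vol.append(port.std(ddof=1))
    return np.array(sims_ret), np.array(sims_vol)

# Synthetic monthly returns for an assumed 30-share opportunity set
rng = np.random.default_rng(5)
returns = rng.normal(0.01, 0.05, size=(36, 30))

sim_ret, sim_vol = random_portfolios(returns)
fund_return = 0.013                               # the fund's own mean monthly return (assumed)
percentile = (sim_ret < fund_return).mean() * 100
print(f"fund outperformed {percentile:.1f}% of simulated portfolios")
```

A manager who consistently lands in a high percentile of this simulated distribution, period after period, is the kind of evidence of skill the abstract describes.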
56

The role of immune-genetic factors in modelling longitudinally measured HIV bio-markers including the handling of missing data.

Odhiambo, Nancy. 20 December 2013 (has links)
Since the discovery of AIDS among gay men in the United States of America in 1981, HIV has become a major world pandemic, with over 40 million individuals infected worldwide. According to the Joint United Nations Programme on HIV/AIDS epidemic updates in 2012, 28.3 million individuals are living with HIV worldwide, 23.5 million of them in sub-Saharan Africa and 4.8 million in Asia. The report showed that approximately 1.7 million individuals have died from AIDS-related causes, roughly 50% of the estimated 34 million people living with HIV know their HIV status, a total of 2.5 million individuals are newly infected, 14.8 million individuals are eligible for HIV treatment and only 8 million are on HIV treatment (Joint United Nations Programme on HIV/AIDS and health sector progress towards universal access: progress report, 2011). Numerous studies have been carried out to understand the pathogenesis and dynamics of this deadly disease, but its pathogenesis is still poorly understood. More understanding of the disease is needed in order to reduce the rate of its acquisition. Researchers have developed statistical and mathematical models which help in understanding and predicting the progression of the disease, so as to find ways in which its acquisition can be prevented and controlled. Previous studies on HIV/AIDS have shown that inter-individual variability plays an important role in susceptibility to HIV-1 infection, its transmission, progression and even response to antiviral therapy. Certain immuno-genetic factors (human leukocyte antigen (HLA), interleukin-10 (IL-10) and single nucleotide polymorphisms (SNPs)) have been associated with this variability among individuals. In this dissertation we reaffirm, through statistical modelling and analysis, previous findings that immuno-genetic factors could play a role in susceptibility, transmission, progression and even response to antiviral therapy. This is done using the Sinikithemba study data from the HIV Pathogenesis Programme (HPP) at the Nelson Mandela Medical School, University of KwaZulu-Natal, consisting of 451 HIV-positive, treatment-naive individuals, to model how the HIV biomarkers (viral load and CD4 count) are associated with the immuno-genetic factors using linear mixed models. We conclude the dissertation by dealing with drop-out, a pervasive problem in longitudinal studies regardless of how well they are designed and executed. We demonstrate the application and performance of multiple imputation (MI) in handling drop-out using longitudinal count data from the Sinikithemba study with log viral load as the response. Our aim is to investigate the influence of drop-out on the evolution of the HIV biomarkers in a model including selected genetic factors as covariates, assuming the missingness mechanism is missing at random (MAR). We then compare the results obtained from the MI method to those obtained from the incomplete dataset. From the results we can clearly see that there is a substantial difference between the findings of the two analyses. Drop-out therefore needs to be accounted for, since ignoring it can lead to biased results. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
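A rough illustration of the kind of linear mixed model described above is sketched below using statsmodels, with square-root CD4 count as the response, a random intercept per subject and a hypothetical binary HLA marker as the genetic covariate. The variable names and data are assumptions; the actual Sinikithemba variables and model specification are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic longitudinal data: 100 subjects, 6 visits each (assumed structure)
rng = np.random.default_rng(6)
n_subj, n_visits = 100, 6
ids = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)            # visit number
hla = np.repeat(rng.integers(0, 2, n_subj), n_visits)  # hypothetical protective HLA allele
subj_effect = np.repeat(rng.normal(0, 2, n_subj), n_visits)
sqrt_cd4 = 20 + 1.5 * hla - 0.4 * time + subj_effect + rng.normal(0, 1.5, ids.size)

data = pd.DataFrame({"id": ids, "time": time, "hla": hla, "sqrt_cd4": sqrt_cd4})

# Random-intercept linear mixed model: sqrt(CD4) ~ time + HLA, grouped by subject
model = smf.mixedlm("sqrt_cd4 ~ time + hla", data, groups=data["id"])
result = model.fit()
print(result.summary())
```

In a multiple-imputation analysis of drop-out, this same model would be fitted to each completed dataset and the estimates pooled, rather than being fitted once to the incomplete data.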
57

Bayesian hierarchical spatial and spatio-temporal modeling and mapping of tuberculosis in Kenya.

Iddrisu, Abdul-Karim. 20 December 2013 (has links)
Global spread of infectious disease threatens the well-being of humans, domestic animals and wildlife. A proper understanding of the global distribution of these diseases is an important part of disease management and policy making. However, data are subject to complexities arising from heterogeneity across host classes and space-time epidemic processes [Waller et al., 1997, Hosseini et al., 2006]. The use of frequentist methods in biostatistics and epidemiology is common, and they are therefore extensively utilised in answering varied research questions. In this thesis we propose a hierarchical Bayesian approach to study the spatial and spatio-temporal pattern of tuberculosis in Kenya [Knorr-Held et al., 1998, Knorr-Held, 1999, López-Quílez and Munoz, 2009, Waller et al., 1997, Julian Besag, 1991]. The space-time interaction of risk (ψ_ij) is an important factor considered in this thesis. Markov chain Monte Carlo (MCMC) methods via WinBUGS and R packages were used for simulation [Ntzoufras, 2011, Congdon, 2010, David et al., 1995, Gimenez et al., 2009, Brian, 2003], and the Deviance Information Criterion (DIC), proposed by [Spiegelhalter et al., 2002], was used for model comparison and selection. Variation in TB risk is observed among Kenyan counties, with clustering among counties with high TB relative risk (RR). HIV prevalence is identified as the dominant determinant of TB. We found clustering and heterogeneity of risk among high-rate counties, and the overall TB risk decreased slightly from 2002 to 2009. The interaction of TB relative risk in space and time is found to be increasing among rural counties that share boundaries with urban counties with high TB risk. This results from the ability of the models to borrow strength from neighbouring counties, so that nearby counties have similar risk. Although the approaches are less than ideal, we hope that our formulations provide a useful stepping stone in the development of spatial and spatio-temporal methodology for the statistical analysis of TB risk in Kenya. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
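As a simplified illustration of the hierarchical Bayesian disease-mapping approach described above, the sketch below fits a Poisson log-linear model for county-level TB counts with an HIV-prevalence covariate and exchangeable county random effects in PyMC. A full analysis along the lines of the thesis would replace the exchangeable effects with a spatially structured (e.g. conditional autoregressive) prior and add space-time interaction terms; the data here are synthetic and all names and values are assumptions.

```python
import numpy as np
import pymc as pm
import arviz as az

# Synthetic county-level data (assumed): expected cases act as an offset
rng = np.random.default_rng(7)
n_counties = 47
expected = rng.uniform(50, 500, n_counties)        # expected TB cases per county
hiv_prev = rng.uniform(0.02, 0.25, n_counties)     # county HIV prevalence (covariate)
true_u = rng.normal(0, 0.2, n_counties)
observed = rng.poisson(expected * np.exp(0.1 + 3.0 * hiv_prev + true_u))

with pm.Model() as tb_model:
    beta0 = pm.Normal("beta0", 0, 10)
    beta_hiv = pm.Normal("beta_hiv", 0, 10)
    sigma_u = pm.HalfNormal("sigma_u", 1)
    u = pm.Normal("u", 0, sigma_u, shape=n_counties)   # exchangeable county effects
    log_rr = beta0 + beta_hiv * hiv_prev + u            # log relative risk per county
    pm.Poisson("cases", mu=expected * pm.math.exp(log_rr), observed=observed)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(az.summary(idata, var_names=["beta0", "beta_hiv", "sigma_u"]))
```

Model comparison via DIC, as in the thesis, would be done on competing specifications (with and without spatial structure or space-time interaction) rather than on this single model.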
58

Statistical and mathematical modelling of HIV and AIDS, effect of reverse transcriptase inhibitors and causal inference for HIV mortality.

Ngwenya, Olina. 29 January 2014 (has links)
The HIV and AIDS epidemic has remained one of the leading causes of death in the world and has been especially destructive in Africa, with sub-Saharan Africa remaining the epidemiological locus of the epidemic. HIV and AIDS hinder development by erasing decades of health, economic and social progress, reducing life expectancy by years and deepening poverty [57]. The most urgent public-health problem globally is to devise effective strategies to minimise the destruction caused by the HIV and AIDS epidemic. Because of the problems caused by HIV and AIDS, well-defined endpoints are needed to evaluate treatment benefits, and the surrogate and true endpoints for the disease need to be specified. The purpose of a surrogate endpoint is to draw conclusions about the effect of an intervention on the true endpoint without having to observe the true endpoint, so it is of great importance to understand surrogate validation methods. At present the question remains whether CD4 count and viral load are good surrogate markers for death in HIV or whether there are better surrogate markers. This dissertation was undertaken to obtain some clarity on this question by adopting a mathematical model for HIV at the immune-system level and the impact of treatment in the form of reverse transcriptase inhibitors (RTIs). For an understanding of HIV, the dissertation begins with a description of the human immune system, the HIV virion structure, HIV disease progression and HIV drugs. A review of an existing mathematical model then follows, and analyses and simulations of this model are performed; these give insight into the dynamics of the CD4 count, viral load and HIV therapy. Thereafter, surrogate marker validation methods are considered. Finally, a generalized estimating equations (GEE) approach is used to analyse real data for HIV-positive individuals from the Centre for the AIDS Programme of Research in South Africa (CAPRISA). Numerical simulations for the HIV dynamic model with treatment suggest that the higher the treatment efficacy, the fewer infected cells are left in the body; the infected cells are suppressed to a lower threshold value but do not completely disappear as long as the treatment is not 100% efficacious. Further numerical simulations suggest that it is advantageous to have a low proportion of infectious virions (ω) at an individual level, because the individual would then produce few infectious virions to infect healthy cells. The statistical analysis using GEEs suggests that CD4 count < 200 and viral load are highly associated with death, meaning that they are good surrogate markers for death. An interesting finding from the analysis of this particular CAPRISA dataset was that low CD4 count and high viral load, as surrogates for HIV survival, act independently (additively); the interaction effect was found to be insignificant. Individual characteristics or factors found to be significantly associated with HIV-related death are weight, CD4 count < 200 and viral load. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2010.
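To illustrate the kind of within-host dynamic model with RTI treatment the abstract refers to, here is a small sketch of a standard target-cell/infected-cell/virus ODE system in which the RTI efficacy parameter reduces the infection rate. The equations and parameter values are a generic textbook-style formulation chosen for illustration, not the thesis's specific model or estimates.

```python
import numpy as np
from scipy.integrate import odeint

def hiv_model(y, t, lam, d, beta, delta, p, c, eps_rti):
    """Basic within-host model: target cells T, infected cells I, virus V.
    eps_rti in [0, 1] is the reverse transcriptase inhibitor efficacy."""
    T, I, V = y
    dT = lam - d * T - (1 - eps_rti) * beta * T * V   # infection rate reduced by the RTI
    dI = (1 - eps_rti) * beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

# Illustrative parameter values (assumed)
params = dict(lam=1e4, d=0.01, beta=8e-7, delta=0.7, p=100.0, c=13.0)
t = np.linspace(0, 200, 2001)                 # days
y0 = [1e6, 1e3, 1e5]                          # initial T, I, V

for eps in (0.0, 0.5, 0.9):
    sol = odeint(hiv_model, y0, t, args=(*params.values(), eps))
    print(f"RTI efficacy {eps:.1f}: infected cells at day 200 ~ {sol[-1, 1]:.1f}")
```

Running the model at increasing efficacies reproduces the qualitative behaviour the abstract describes: infected cells are pushed to a lower level but are not eliminated unless the treatment is fully effective.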
59

Longitudinal survey data analysis.

January 2006 (has links)
To investigate the effect of environmental pollution on the health of children in the Durban South Industrial Basin (DSIB), given its proximity to industrial activities, 233 children from five primary schools were considered. Three of these schools were located in the south of Durban, while the other two were in the northern residential areas that were closer to industrial activities. Data collected included the participants' demographic, health, occupational, social and economic characteristics. In addition, environmental information was monitored throughout the study, specifically measurements of the levels of some ambient air pollutants. The objective of this thesis is to investigate which of these factors had an effect on the lung function of the children. In order to achieve this objective, different sample survey data analysis techniques are investigated, including the design-based and model-based approaches. The nature of the survey data finally leads to the longitudinal mixed model approach. The multicollinearity between the pollutant variables leads to the fitting of two separate models: one with the peak counts as the independent pollutant measures and the other with the 8-hour maximum moving average as the independent pollutant variables. In the selection of the fixed-effects structure, a scatter-plot smoother known as the loess fit is applied to the individual profile plots of the response variable. The random effects and the residual effect are assumed to have different covariance structures: the unstructured (UN) covariance structure is used for the random effects, while, using the Akaike information criterion (AIC), the compound symmetric (CS) covariance structure is selected as appropriate for the residual effects. To check the model fit, the profiles of the fitted and observed values of the dependent variables are compared graphically. The data are also characterised by the problem of intermittent missingness; the type of missingness is investigated by applying a modified logistic regression test for missing at random (MAR). The results indicate that school location, sex and weight are the significant factors for the children's respiratory conditions. More specifically, children in schools located in the northern residential areas are found to have poorer respiratory conditions compared to those in the Durban South schools. In addition, poor respiratory conditions are also identified for overweight children. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2006.
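For the loess-based exploration of individual profiles mentioned above, the sketch below applies the lowess smoother from statsmodels to a synthetic lung-function-versus-age series; the variable names, the smoothing fraction and the data are assumptions for illustration only.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Synthetic profile: a lung function measure (e.g. FEV1) against age in months (assumed)
rng = np.random.default_rng(8)
age_months = np.sort(rng.uniform(96, 168, 200))
fev1 = 1.2 + 0.008 * age_months + rng.normal(0, 0.15, age_months.size)

# frac controls the span of the local regressions; 0.3 is an assumed choice
smoothed = lowess(fev1, age_months, frac=0.3, return_sorted=True)

for x, y in smoothed[::40]:
    print(f"age {x:5.1f} months -> smoothed FEV1 {y:4.2f}")
```

In the thesis's workflow, the shape of such smoothed profiles guides the choice of fixed-effects terms (for example, whether a linear or more flexible time trend is needed) before the covariance structures are selected with the AIC.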
60

Application of statistical multivariate techniques to wood quality data.

Negash, Asnake Worku. January 2010 (has links)
Sappi is one of the leading producers and suppliers of Eucalyptus pulp to the world market. It is also a major contributor to the South African economy, providing employment opportunities to rural people through its large plantations and generating export earnings. Pulp mills' production of quality wood pulp is mainly affected by the supply of non-uniform raw material, namely Eucalyptus trees from various plantations. Improvement in the quality of the pulp depends directly on improvement in the quality of the raw materials, so knowing which factors affect pulp quality is important for tree breeders. Thus, the main objective of this research is first to determine which of the anatomical, chemical and pulp properties of wood are significant factors affecting the pulp properties of interest, namely viscosity, brightness and yield. Secondly, the study investigates the effect of differences in plantation location, site quality, tree age and species type on the viscosity, brightness and yield of wood pulp. In order to meet these objectives, data for this research were obtained from Sappi's P186 trial and two other published reports from the Council for Scientific and Industrial Research (CSIR). Principal component analysis, cluster analysis, multiple regression analysis and multivariate linear regression analysis were used. These statistical methods were used to carry out mean comparisons of the pulp quality measurements (viscosity, brightness and yield) across trees of different age, location, site quality and hybrid type, and the results indicate that these four factors, together with some anatomical and chemical measurements (fibre lumen diameter, kappa number, total hemicelluloses and total lignin), have a significant effect on the pulp quality measurements. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2010.
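As an illustration of the multivariate techniques listed above, the following sketch runs a principal component analysis on a small synthetic table of wood-property measurements with scikit-learn; the variable names mirror those mentioned in the abstract, but the data values and the number of retained components are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic wood-quality measurements (assumed values, variables as in the abstract)
rng = np.random.default_rng(9)
n = 120
wood = pd.DataFrame({
    "fibre_lumen_diameter": rng.normal(12.0, 1.5, n),
    "kappa_number":         rng.normal(18.0, 2.0, n),
    "total_hemicelluloses": rng.normal(22.0, 1.8, n),
    "total_lignin":         rng.normal(27.0, 1.2, n),
    "viscosity":            rng.normal(650.0, 40.0, n),
    "brightness":           rng.normal(88.0, 1.0, n),
    "yield":                rng.normal(50.0, 2.5, n),
})

# Standardise the variables, then extract the leading principal components
pca = PCA(n_components=3).fit(StandardScaler().fit_transform(wood))
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("loadings (components x variables):")
print(pd.DataFrame(pca.components_, columns=wood.columns).round(2))
```

The loadings indicate which anatomical and chemical measurements move together with the pulp quality measures, which is the kind of structure the cluster and regression analyses in the study would then explore further.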
