About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Addressing censoring issues in estimating the serial interval for tuberculosis

Ma, Yicheng 13 November 2019
The serial interval (SI), defined as the time between symptom onset in an infector and symptom onset in the infectee, is widely used to better understand the transmission patterns of an infectious disease. Estimating the SI for tuberculosis (TB) is complicated by the slow progression from asymptomatic infection to active, symptomatic disease, and by the fact that there is only a 5-10% lifetime risk of developing active TB disease. Furthermore, the time of symptom onset for infectors and infectees is rarely observed accurately. In this dissertation, we first conduct a systematic literature review to demonstrate the limited methods currently available for estimating the serial interval for TB, as well as the few estimates that have been published. Secondly, under an ideal scenario in which all SIs are observed with precision, we evaluate the effect of prior information on estimating the SI in a Bayesian framework. Thirdly, we apply cure models, proposed by Boag in 1949, to estimate the SI for TB in a Bayesian framework. We show that cure models perform better in the presence of credible prior information on the proportion of the study population that develops active TB disease, and should be chosen over traditional survival models, which assume that all of the study population will eventually experience the event of interest, active TB disease. Next, we modify the method of Reich et al. (2009) by using a Riemann sum to approximate the likelihood function, which involves a double integral. In doing so, we reduce the computing time of the approximation method by around 50% and are able to relax the assumption of uniformity on the censoring intervals. We show that when using weights that are consistent with the underlying skewness of the intervals, the proposed approaches consistently produce more accurate estimates than the existing approaches. Finally, we provide SI estimates for TB using empirical data sets from Brazil and the USA/Canada.
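To make the Riemann-sum idea concrete, the sketch below approximates one doubly interval-censored likelihood contribution on a weighted midpoint grid. It is an illustration under assumed choices, not the dissertation's code: the gamma serial-interval density, the grid size, and all names (EL, ER, SL, SR, n_grid, weights) are placeholders.

```python
# A minimal sketch (not the dissertation's method) of approximating
#   L = \int_SL^SR \int_EL^ER f(s - e) de ds
# with a Riemann sum, where f is a hypothetical gamma serial-interval density.
import numpy as np
from scipy.stats import gamma

def si_likelihood_contribution(EL, ER, SL, SR, shape, scale,
                               n_grid=50, weights=None):
    """Midpoint-rule approximation of one doubly interval-censored term.

    `weights` allows non-uniform weighting of grid cells, mirroring the idea
    of matching the skewness of the censoring intervals; by default the grid
    is uniform.
    """
    de = (ER - EL) / n_grid
    ds = (SR - SL) / n_grid
    e = EL + (np.arange(n_grid) + 0.5) * de      # infector onset midpoints
    s = SL + (np.arange(n_grid) + 0.5) * ds      # infectee onset midpoints
    if weights is None:
        weights = np.ones((n_grid, n_grid))
    weights = weights / weights.sum() * n_grid**2  # preserve total grid mass
    diff = s[:, None] - e[None, :]               # all candidate serial intervals
    dens = gamma.pdf(diff, a=shape, scale=scale)
    dens[diff <= 0] = 0.0                        # infectee onset must follow infector's
    return float((weights * dens).sum() * de * ds)

# Example: onset windows of a few days for infector and infectee
print(si_likelihood_contribution(0, 3, 10, 15, shape=2.0, scale=5.0))
```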
2

Precedence-type test based on the Nelson-Aalen estimator of the cumulative hazard function

Galloway, Katherine Anne Forsyth 03 July 2013
In reliability studies, the goal is to gain knowledge about a product's failure times or life expectancy. Precedence tests do not require large sample sizes and are used in reliability studies to compare the lifetime distributions of two samples. Precedence tests are useful because they provide reliable results early in a life-test, and the surviving units can be used in other tests. Ng and Balakrishnan (2010) proposed a precedence-type test based on the Kaplan-Meier estimator of the cumulative distribution function. In this thesis, a precedence-type test based on the Nelson-Aalen estimator of the cumulative hazard function is proposed. The test is developed for both Type-II right censoring and progressive Type-II right censoring. Numerical results, including illustrative examples, critical values and a power study, are provided, and the results are compared with those from the test based on the Kaplan-Meier estimator.
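For reference, a minimal sketch of the Nelson-Aalen estimator under conventional right censoring is given below; the Type-II and progressive Type-II settings studied in the thesis differ in how the censoring arises, not in the form of the estimator. The function name and data are illustrative.

```python
# A minimal sketch of the Nelson-Aalen estimator of the cumulative hazard
# H(t) = sum over event times t_i <= t of d_i / n_i.
import numpy as np

def nelson_aalen(times, events):
    """Return (failure times, cumulative hazard H at those times).

    times  : observed times (failure or censoring)
    events : 1 if the time is a failure, 0 if right-censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    H, t_out, H_out = 0.0, [], []
    for i, (t, d) in enumerate(zip(times, events)):
        at_risk = n - i                 # units still under observation just before t
        if d == 1:
            # failures are processed one at a time; with tied failures this
            # gives 1/n_i + 1/(n_i - 1) rather than d_i/n_i, a common convention
            H += 1.0 / at_risk
            t_out.append(t)
            H_out.append(H)
    return np.array(t_out), np.array(H_out)

t, H = nelson_aalen([2, 3, 3, 5, 7, 8], [1, 1, 0, 1, 0, 1])
print(np.column_stack([t, H]))          # step values of the cumulative hazard
```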
3

Goodness-of-Fit for Length-Biased Survival Data with Right-Censoring

Younger, Jaime 02 February 2012
Cross-sectional surveys are often used in epidemiological studies to identify subjects with a disease. When estimating the survival function from onset of disease, this sampling mechanism introduces bias, which must be accounted for. If the onset times of the disease are assumed to come from a stationary Poisson process, this bias, caused by the sampling of prevalent rather than incident cases, is termed length-bias. A one-sample Kolmogorov-Smirnov type goodness-of-fit test for right-censored length-biased data is proposed and investigated with Weibull, log-normal and log-logistic models. Algorithms detailing how to efficiently generate right-censored length-biased survival data of these parametric forms are given. Simulation is employed to assess the effects of sample size and censoring on the power of the test. Finally, the test is used to evaluate goodness-of-fit using length-biased survival data of patients with dementia from the Canadian Study of Health and Aging.
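As a rough illustration of what length-biased sampling does to a parametric survival model, the sketch below draws approximately length-biased Weibull samples by weighted resampling, using the fact that the length-biased density is g(t) = t f(t) / E[T], and then imposes simple independent right censoring. This is an assumption-laden stand-in, not the generation algorithm developed in the thesis.

```python
# A minimal sketch: resampling a large Weibull pool with weights
# proportional to t approximates draws from the length-biased density
# g(t) = t f(t) / E[T]. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
shape, scale, pool_size, n = 1.5, 10.0, 200_000, 1_000

pool = scale * rng.weibull(shape, size=pool_size)   # incident durations ~ f
w = pool / pool.sum()                               # weights proportional to t
length_biased = rng.choice(pool, size=n, replace=True, p=w)

# Length bias inflates the mean relative to the incident distribution
print(pool.mean(), length_biased.mean())

# Right censoring can then be imposed, e.g. by an independent censoring time
cens = rng.exponential(20.0, size=n)
observed = np.minimum(length_biased, cens)
delta = (length_biased <= cens).astype(int)         # 1 = event observed
```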
4

Analysis of Additive Risk Model with High Dimensional Covariates Using Partial Least Squares

Zhou, Yue 09 June 2006
In this thesis, we consider the problem of constructing an additive risk model, based on right-censored survival data, to predict the survival times of cancer patients, especially when the dimension of the covariates is much larger than the sample size. For microarray gene expression data, the number of gene expression levels is far greater than the number of samples. Such "small n, large p" problems have attracted researchers to investigate the association between cancer patient survival times and gene expression profiles in recent years. We apply partial least squares to reduce the dimension of the covariates and obtain the corresponding latent variables (components), and these components are used as new regressors to fit the extended additive risk model. We also employ the time-dependent AUC curve (area under the Receiver Operating Characteristic (ROC) curve) to assess how well the model predicts survival time. Finally, the approach is illustrated by re-analysis of the well-known AML data set and a breast cancer data set. The results show that the model fits both data sets very well.
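A minimal sketch of the dimension-reduction step is shown below: partial least squares compresses a "small n, large p" expression matrix into a handful of components that can serve as regressors. Fitting the additive risk model itself is beyond standard libraries, so regressing on observed log-times stands in here; the simulated data and all parameter choices are assumptions.

```python
# A minimal sketch of PLS dimension reduction for p >> n survival regressors;
# the downstream additive risk fit from the thesis is not reproduced here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p, k = 60, 2000, 3                     # small n, large p
X = rng.normal(size=(n, p))               # simulated gene expression matrix
beta = np.zeros(p)
beta[:10] = 0.5                           # a few informative genes
log_time = X @ beta + rng.normal(scale=0.5, size=n)

pls = PLSRegression(n_components=k)
pls.fit(X, log_time)
components = pls.transform(X)             # n x k latent components
print(components.shape)                   # (60, 3): new low-dimensional regressors
```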
5

Analysis of Additive Risk Model with High Dimensional Covariates Using Correlation Principal Component Regression

Wang, Guoshen 22 April 2008
One problem of interest is to relate genes to the survival outcomes of patients, with the goal of building regression models to predict a future patient's survival based on gene expression data. Applying the semiparametric additive risk model of survival analysis, this thesis proposes a new approach to the analysis of gene expression data, with a focus on the model's predictive ability. The method modifies correlation principal component regression to handle the censoring in survival data. We also employ the time-dependent AUC and RMSEP to assess how well the model predicts survival time. Furthermore, the proposed method is able to identify significant genes related to the disease. Finally, the proposed approach is illustrated with a simulated data set, the diffuse large B-cell lymphoma (DLBCL) data set, and a breast cancer data set. The results show that the model fits these data sets very well.
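The sketch below illustrates the uncensored core of correlation principal component regression: screen genes by their correlation with the outcome, extract principal components from the screened block, then regress on the leading components. The censoring modification that is the thesis's contribution is omitted, and the simulated data are purely illustrative.

```python
# A minimal sketch of correlation principal component regression,
# without the censoring adjustment developed in the thesis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p, n_keep, k = 80, 3000, 100, 5
X = rng.normal(size=(n, p))
y = X[:, :15].sum(axis=1) + rng.normal(size=n)   # outcome driven by 15 genes

# 1. Screen: keep the genes most correlated with the outcome
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
keep = np.argsort(corr)[-n_keep:]

# 2. Principal components of the screened genes
Z = PCA(n_components=k).fit_transform(X[:, keep])

# 3. Regression on the leading components
model = LinearRegression().fit(Z, y)
print(model.score(Z, y))                         # in-sample R^2
```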
6

Likelihood Inference for Log-Logistic Distribution under Progressive Type-II Right Censoring

Alzahrani, Alya
Censoring arises quite often in lifetime data, and its presence may be planned or unplanned. In this project, we demonstrate progressive Type-II right censoring when the underlying distribution is log-logistic. The objective is to discuss inferential methods for the unknown parameters of the distribution based on the maximum likelihood estimation method. The Newton-Raphson method is proposed as a numerical technique for solving the pertinent non-linear equations. In addition, confidence intervals for the unknown parameters are constructed based on (i) the asymptotic normality of the maximum likelihood estimates, and (ii) the percentile bootstrap resampling technique. A Monte Carlo simulation study is conducted to evaluate the performance of the methods of inference developed here. Some illustrative examples are also presented.

Master of Science (MSc)
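As a sketch of the likelihood machinery involved, the code below writes the progressively Type-II censored log-logistic log-likelihood, the sum over failures of log f(t_i) + R_i log S(t_i), and maximizes it numerically. A generic quasi-Newton optimizer stands in for the Newton-Raphson iteration used in the project, and the data are invented for illustration.

```python
# A minimal sketch of maximum likelihood for a log-logistic sample under
# progressive Type-II right censoring: at the i-th observed failure t_i,
# R_i surviving units are withdrawn, so (up to a constant)
#   loglik = sum_i [ log f(t_i) + R_i * log S(t_i) ],
# with S(t) = 1 / (1 + (t/alpha)^beta).
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, R):
    alpha, beta = np.exp(params)                  # scale, shape kept positive
    z = (t / alpha) ** beta
    log_f = np.log(beta / alpha) + (beta - 1) * np.log(t / alpha) - 2 * np.log1p(z)
    log_S = -np.log1p(z)
    return -np.sum(log_f + R * log_S)

# Illustrative data: observed failure times and withdrawal counts R_i
t = np.array([0.8, 1.3, 2.1, 3.0, 4.7])
R = np.array([2, 0, 1, 0, 2])                     # units removed at each failure

fit = minimize(neg_loglik, x0=np.log([2.0, 1.0]), args=(t, R), method="BFGS")
alpha_hat, beta_hat = np.exp(fit.x)
print(alpha_hat, beta_hat)
```

Newton-Raphson on the score equations, as in the project, would converge to the same estimates; the optimizer above is simply a convenient stand-in for a self-contained example.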
