1

Goodness-of-Fit for Length-Biased Survival Data with Right-Censoring

Younger, Jaime 02 February 2012 (has links)
Cross-sectional surveys are often used in epidemiological studies to identify subjects with a disease. When estimating the survival function from onset of disease, this sampling mechanism introduces bias, which must be accounted for. If the onset times of the disease are assumed to arise from a stationary Poisson process, this bias, which is caused by the sampling of prevalent rather than incident cases, is termed length bias. A one-sample Kolmogorov-Smirnov type of goodness-of-fit test for right-censored length-biased data is proposed and investigated with Weibull, log-normal and log-logistic models. Algorithms detailing how to efficiently generate right-censored length-biased survival data of these parametric forms are given. Simulation is employed to assess the effects of sample size and censoring on the power of the test. Finally, the test is used to evaluate goodness-of-fit using length-biased survival data of patients with dementia from the Canadian Study of Health and Aging.
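
As an illustration of the sampling mechanism described above, the following sketch simulates right-censored length-biased Weibull survival data under the stationary (uniform onset) assumption and computes a simple Kolmogorov-Smirnov-type distance for the uncensored complete times. The sampler uses the identity that the length-biased Weibull CDF equals the regularized lower incomplete gamma function P(1 + 1/k, (t/lambda)^k); the parameter values, the residual censoring distribution and the use of the plain empirical CDF (rather than a censoring-adjusted estimator) are illustrative assumptions, not the algorithm or test statistic of the thesis.

```python
import numpy as np
from scipy.special import gammainc

rng = np.random.default_rng(2012)
n, k, lam = 500, 1.5, 10.0               # sample size, Weibull shape, Weibull scale (assumed)

# Length-biased total survival times: (T*/lam)^k ~ Gamma(1 + 1/k, 1), so sample exactly.
g = rng.gamma(shape=1.0 + 1.0 / k, scale=1.0, size=n)
t_star = lam * g ** (1.0 / k)

# Under the stationary Poisson onset assumption, the time already survived at
# recruitment is uniform on (0, T*); right censoring acts on the residual time only.
a = rng.uniform(0.0, t_star)             # backward recurrence time
resid = t_star - a                       # residual lifetime after recruitment
c = rng.exponential(scale=15.0, size=n)  # residual censoring times (assumed distribution)
y = a + np.minimum(resid, c)             # observed, possibly censored, total survival time
delta = (resid <= c).astype(int)         # 1 = failure observed, 0 = right-censored

def lb_weibull_cdf(t, k, lam):
    """CDF of the length-biased Weibull distribution."""
    return gammainc(1.0 + 1.0 / k, (t / lam) ** k)

# Kolmogorov-Smirnov-type distance for the complete (uncensored) times only; a test for
# the censored data would use a censoring-adjusted estimator in place of the plain ECDF.
t_sorted = np.sort(t_star)
ecdf = np.arange(1, n + 1) / n
F = lb_weibull_cdf(t_sorted, k, lam)
ks_dist = np.max(np.maximum(ecdf - F, F - (ecdf - 1.0 / n)))
print(f"KS-type distance: {ks_dist:.4f}; censoring proportion: {1 - delta.mean():.2f}")
```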
2

Analysis of Dependently Truncated Sample Using Inverse Probability Weighted Estimator

Liu, Yang 01 August 2011 (has links)
Many statistical methods for truncated data rely on the assumption that the failure and truncation times are independent, which can be unrealistic in applications. Study cohorts obtained from bone marrow transplant (BMT) registry data are commonly recognized as truncated samples, in which the time-to-failure is truncated by the transplant time. There is clinical evidence that a longer transplant waiting time indicates a worse prognosis for survival. Therefore, it is reasonable to assume dependence between the transplant and failure times. To better analyze BMT registry data, we utilize a Cox analysis in which the transplant time is both a truncation variable and a predictor of the time-to-failure. An inverse-probability-weighted (IPW) estimator is proposed to estimate the distribution of transplant time. The usefulness of the IPW approach is demonstrated through a simulation study and a real application.
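
A minimal sketch of the kind of delayed-entry Cox analysis described above, with the transplant (truncation) time appearing both in the risk-set construction and as a predictor of the time-to-failure. The simulated registry-style data, the covariate effects and the generic quasi-Newton optimizer are illustrative assumptions; the proposed inverse-probability-weighted estimator of the transplant-time distribution is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 1000
x_all = rng.normal(size=N)                          # a baseline covariate (assumed)
trunc_all = rng.exponential(1.0, N)                 # transplant waiting times (truncation variable)
# failure times from diagnosis depend on both the covariate and the waiting time
fail_all = 2.0 * rng.weibull(1.3, N) * np.exp(-0.4 * x_all - 0.3 * trunc_all)
keep = fail_all > trunc_all                         # left truncation: must survive until transplant
x, trunc, fail = x_all[keep], trunc_all[keep], fail_all[keep]
cens = trunc + rng.exponential(5.0, keep.sum())     # right censoring some time after entry
time = np.minimum(fail, cens)
event = fail <= cens
Z = np.column_stack([x, trunc])                     # transplant time also enters as a predictor

def neg_log_partial_likelihood(beta):
    """Delayed-entry Cox partial likelihood: the risk set at an observed failure
    time t contains the subjects with entry time < t <= exit time."""
    eta = Z @ beta
    ll = 0.0
    for i in np.flatnonzero(event):
        at_risk = (trunc < time[i]) & (time[i] <= time)
        ll += eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return -ll

fit = minimize(neg_log_partial_likelihood, x0=np.zeros(2), method="BFGS")
print("estimated log hazard ratios (covariate, transplant time):", np.round(fit.x, 3))
```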
3

Semiparametric Regression Under Left-Truncated and Interval-Censored Competing Risks Data and Missing Cause of Failure

Park, Jun 04 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Observational studies and clinical trials with time-to-event data frequently involve multiple event types, known as competing risks. The cumulative incidence function (CIF) is a particularly useful parameter as it explicitly quantifies clinical prognosis. Common issues in competing risks data analysis on the CIF include interval censoring, missing event types, and left truncation. Interval censoring occurs when the event time is not observed but is only known to lie between two observation times, such as clinic visits. Left truncation, also known as delayed entry, occurs when certain participants enter the study after the onset of the disease under study. Individuals with an event prior to their potential study entry time are not included in the analysis, and this can induce selection bias. To address unmet needs in appropriate methods and software for competing risks data analysis, this thesis focuses on the following developments of methods and applications. First, we develop a convenient and flexible tool, the R package intccr, that performs semiparametric regression analysis on the CIF for interval-censored competing risks data. Second, we adopt the augmented inverse probability weighting method to deal with both interval censoring and missing event types. We show that the resulting estimates are consistent and doubly robust. We illustrate this method using data from the East African International Epidemiology Databases to Evaluate AIDS (IeDEA EA), where a significant portion of the event types is missing. Last, we develop an estimation method for semiparametric analysis on the CIF for competing risks data subject to both interval censoring and left truncation. This method is applied to the Indianapolis-Ibadan Dementia Project to identify prognostic factors of dementia in older adults. Overall, the methods developed here are incorporated in the R package intccr. / 2021-05-06
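
To make the cumulative incidence function concrete, here is a hedged sketch of the nonparametric Aalen-Johansen CIF estimator for right-censored competing risks data. It is only a conceptual baseline: the thesis and the intccr package address the substantially harder setting of interval censoring, missing event types and left truncation, none of which this sketch handles, and the toy data are invented.

```python
import numpy as np

def cumulative_incidence(time, status, cause):
    """Aalen-Johansen CIF for one cause. status: 0 = censored, 1, 2, ... = failure causes."""
    time = np.asarray(time, float)
    status = np.asarray(status, int)
    order = np.argsort(time)
    time, status = time[order], status[order]
    surv = 1.0                                   # overall Kaplan-Meier survival just before t
    cif = 0.0
    times_out, cif_out = [], []
    for t in np.unique(time[status > 0]):        # walk through the observed failure times
        at_risk = np.sum(time >= t)
        d_cause = np.sum((time == t) & (status == cause))
        d_all = np.sum((time == t) & (status > 0))
        cif += surv * d_cause / at_risk          # add mass S(t-) * cause-specific hazard
        surv *= 1.0 - d_all / at_risk            # update the all-cause survival
        times_out.append(t)
        cif_out.append(cif)
    return np.array(times_out), np.array(cif_out)

# toy data: cause 1 = event of interest, cause 2 = competing event, 0 = censored
t = [2, 3, 3, 5, 7, 8, 10, 12, 12, 15]
s = [1, 2, 0, 1, 1, 0, 2, 1, 0, 0]
grid, cif1 = cumulative_incidence(t, s, cause=1)
for g, c in zip(grid, cif1):
    print(f"t = {g:4.0f}  CIF_1 = {c:.3f}")
```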
4

Joint Modeling the Relationship between Longitudinal and Survival Data Subject to Left Truncation with Applications to Cystic Fibrosis

VanderWyden Piccorelli, Annalisa January 2010 (has links)
No description available.
5

Modelling children under five mortality in South Africa using copula and frailty survival models

Mulaudzi, Tshilidzi Benedicta January 2022 (has links)
Thesis (Ph.D. (Statistics)) -- University of Limpopo, 2022 / This thesis is based on the application of frailty and copula models to an under-five child mortality data set in South Africa. The main purpose of the study was to apply sample splitting techniques in a survival analysis setting and to compare clustered survival models, accounting for left truncation, on the under-five child mortality data set in South Africa. The major contributions of this thesis are the application of the shared frailty model and a class of Archimedean copulas, in particular the Clayton-Oakes copula with a completely monotone generator, and the introduction of sample splitting techniques in a survival analysis setting. The findings based on the shared frailty model show that the clustering effect was significant for modelling the determinants of time to death of under-five children, and they reveal the importance of accounting for clustering. The conclusion based on the Clayton-Oakes model showed an association between the survival times of children from the same mother. It was found that the parameter estimates for the shared frailty and the Clayton-Oakes models were quite different and that the two models are not directly comparable. Gender, province, year, birth order and whether a child is part of a twin birth were found to be significant factors affecting under-five child mortality in South Africa. / NRF-TDG Flemish Interuniversity Council Institutional University Cooperation (VLIR-IUC) VLIR-IUC Programme of the University of Limpopo
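
A minimal sketch of the connection between the shared frailty and copula models mentioned above: a shared gamma frailty with variance theta induces Clayton-Oakes dependence between the survival times of children from the same mother, with Kendall's tau = theta / (theta + 2). The Weibull baseline hazard and all parameter values below are illustrative assumptions, not estimates from the thesis.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)
theta, k, lam = 1.0, 1.4, 60.0        # frailty variance, Weibull shape, Weibull scale (assumed)
n_clusters = 2000                     # mothers, each contributing two children here

# shared frailty per mother: gamma with mean 1 and variance theta
z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_clusters)

def draw_child_time(z):
    """Conditional on the frailty z, survival is S(t | z) = exp(-z * (t/lam)^k)."""
    e = rng.exponential(1.0, size=z.shape)
    return lam * (e / z) ** (1.0 / k)

t1, t2 = draw_child_time(z), draw_child_time(z)   # two siblings sharing the same frailty

tau_hat, _ = kendalltau(t1, t2)
print(f"empirical Kendall's tau: {tau_hat:.3f}; "
      f"Clayton-Oakes value theta/(theta+2): {theta / (theta + 2):.3f}")
```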
6

Statistical Methods for Life History Analysis Involving Latent Processes

Shen, Hua January 2014 (has links)
Incomplete data often arise in the study of life history processes. Examples include missing responses, missing covariates, and unobservable latent processes, in addition to right censoring. This thesis is on the development of statistical models and methods to address these problems as they arise in oncology and chronic disease. Methods of estimation and inference in parametric, weakly parametric and semiparametric settings are investigated. Studies of chronic diseases routinely sample individuals subject to conditions on an event time of interest. In epidemiology, for example, prevalent cohort studies aiming to evaluate risk factors for survival following onset of dementia require subjects to have survived to the point of screening. In clinical trials designed to assess the effect of experimental cancer treatments on survival, patients are required to survive from the time of cancer diagnosis to recruitment. Such conditions yield samples featuring left-truncated event time distributions. Incomplete covariate data often arise in such settings, but standard methods do not deal with the fact that the covariate distribution is also affected by left truncation. We develop a likelihood and an estimation algorithm for dealing with incomplete covariate data in such settings. An expectation-maximization algorithm deals with the left truncation by using the covariate distribution conditional on the selection criterion. An extension to deal with sub-group analyses in clinical trials is described for the case in which the stratification variable is incompletely observed. In studies of affective disorder, individuals are often observed to experience recurrent exacerbations of symptoms warranting hospitalization. Interest lies in modeling the occurrence of such exacerbations over time and identifying associated risk factors to better understand the disease process. In some patients, recurrent exacerbations are temporally clustered following disease onset, but cease to occur after a period of time. We develop a dynamic mover-stayer model in which a canonical binary variable associated with each event indicates whether the underlying disease has resolved. An individual whose disease process has not resolved will experience events following a standard point process model governed by a latent intensity. If and when the disease process resolves, the complete data intensity becomes zero and no further events will arise. An expectation-maximization algorithm is developed for parametric and semiparametric model fitting based on a discrete time dynamic mover-stayer model and a latent intensity-based model of the underlying point process. The method is applied to a motivating dataset from a cohort of individuals with affective disorder experiencing recurrent hospitalization for their mental health disorder. Interval-censored recurrent event data arise when the event of interest is not readily observed but the cumulative event count can be recorded at periodic assessment times. Extensions of the model-fitting techniques for the dynamic mover-stayer model that incorporate interval censoring are discussed. The likelihood and algorithm for estimation are developed for piecewise constant baseline rate functions and are shown to yield estimators with small empirical bias in simulation studies. Data on the cumulative number of damaged joints in patients with psoriatic arthritis are analysed to provide an illustrative application.
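
As a deliberately simplified analogue of the dynamic mover-stayer model described above, the sketch below fits a static mover-stayer model for event counts (a zero-inflated Poisson mixture) by expectation-maximization: stayers, whose disease has resolved, generate no events, while movers generate Poisson-distributed counts. The dynamic, latent-intensity and interval-censored formulations of the thesis are not reproduced, and the data are simulated for illustration.

```python
import numpy as np

def mover_stayer_em(y, n_iter=200):
    """EM for a zero-inflated Poisson: pi = stayer probability, lam = mover event rate."""
    y = np.asarray(y, float)
    pi, lam = 0.5, max(y.mean(), 1e-8)          # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each zero count comes from a stayer
        w = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: update the mixing probability and the mover event rate
        pi = w.mean()
        lam = np.sum((1 - w) * y) / np.sum(1 - w)
    return pi, lam

# toy counts: roughly 30% stayers (structural zeros), movers with rate 2
rng = np.random.default_rng(3)
stayer = rng.random(1000) < 0.3
counts = np.where(stayer, 0, rng.poisson(2.0, 1000))
pi_hat, lam_hat = mover_stayer_em(counts)
print(f"estimated stayer probability {pi_hat:.2f}, mover rate {lam_hat:.2f}")
```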
7

Likelihood Inference for Left Truncated and Right Censored Lifetime Data

Mitra, Debanjan 04 1900 (has links)
Left truncation arises because, in many situations, the failure of a unit is observed only if it occurs after a certain period. Moreover, the units under study may not be followed until all of them fail; the experimenter may have to stop at a certain time, when some of the units are still working. This introduces right censoring into the data. Some commonly used lifetime distributions are the lognormal, Weibull and gamma, all of which are special cases of the flexible generalized gamma family. Likelihood inference via the Expectation-Maximization (EM) algorithm is used to estimate the model parameters of the lognormal, Weibull, gamma and generalized gamma distributions, based on left-truncated and right-censored data. The asymptotic variance-covariance matrices of the maximum likelihood estimates (MLEs) are derived using the missing information principle. By using the asymptotic variances and the asymptotic normality of the MLEs, asymptotic confidence intervals for the parameters are constructed. For comparison purposes, the Newton-Raphson (NR) method is also used for parameter estimation, and asymptotic confidence intervals corresponding to the NR method and the parametric bootstrap are also obtained. Through Monte Carlo simulations, the performance of all these methods of inference is studied. With regard to prediction analysis, the probability that a right-censored unit will still be working at a future year is estimated, and an asymptotic confidence interval for this probability is derived by the delta method. All the methods of inference developed here are illustrated with some numerical examples. / Doctor of Philosophy (PhD)
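
A hedged sketch of direct maximum likelihood estimation for Weibull lifetimes under left truncation and right censoring, maximizing the log-likelihood sum_i [ delta_i log f(t_i) + (1 - delta_i) log S(t_i) - log S(tau_i) ] with a quasi-Newton optimizer. This mirrors the Newton-Raphson route used for comparison above; the EM algorithm, the missing-information variance calculations and the other distributions considered in the thesis are not reproduced, and all data and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
k_true, lam_true = 1.8, 5.0
N = 5000
tau = rng.uniform(0.0, 3.0, N)                         # left-truncation (entry) ages
t_all = lam_true * rng.weibull(k_true, N)              # latent failure times
keep = t_all > tau                                     # only units still working at entry are observed
tau, t_fail = tau[keep], t_all[keep]
c = tau + 6.0                                          # administrative censoring 6 time units after entry
t_obs = np.minimum(t_fail, c)
d = (t_fail <= c).astype(float)

def neg_loglik(par):
    """Negative LTRC Weibull log-likelihood, parameterized on the log scale for positivity."""
    k, lam = np.exp(par)
    log_f = np.log(k) - k * np.log(lam) + (k - 1) * np.log(t_obs) - (t_obs / lam) ** k
    log_S = -(t_obs / lam) ** k
    log_S_tau = -(tau / lam) ** k
    return -np.sum(d * log_f + (1 - d) * log_S - log_S_tau)

fit = minimize(neg_loglik, x0=np.log([1.0, np.mean(t_obs)]), method="BFGS")
k_hat, lam_hat = np.exp(fit.x)
print(f"shape: true {k_true}, MLE {k_hat:.2f}; scale: true {lam_true}, MLE {lam_hat:.2f}")
```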
