11

Empirical Likelihood Confidence Intervals for the Difference of Two Quantiles with Right Censoring

Yau, Crystal Cho Ying 21 November 2008 (has links)
In this thesis, we study two independent samples under right censoring. Using a smoothed empirical likelihood method, we investigate the difference of quantiles between the two samples and construct pointwise confidence intervals for it. The empirical log-likelihood ratio is proposed and its asymptotic limit is shown to be a chi-squared distribution. In simulation studies, we compare the empirical likelihood and normal approximation methods in terms of coverage accuracy and average length of confidence intervals, and conclude that the empirical likelihood method performs better. Finally, data from a real clinical trial are used for illustration, and numerical examples demonstrating the efficacy of the method are presented.
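As a hedged illustration of the setting above (not the thesis's smoothed empirical likelihood construction), the sketch below estimates the difference of medians between two simulated right-censored samples from Kaplan-Meier fits and attaches a simple bootstrap confidence interval; the data, sample sizes, and censoring scheme are invented for the example.

```python
# Difference of Kaplan-Meier medians for two right-censored samples,
# with a naive bootstrap confidence interval (illustration only).
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)

def simulate_arm(n, scale):
    """Exponential lifetimes with independent exponential right censoring."""
    t = rng.exponential(scale, n)          # true lifetimes
    c = rng.exponential(3.0 * scale, n)    # censoring times (~25% censoring)
    return np.minimum(t, c), (t <= c).astype(int)

def km_median(time, event):
    return KaplanMeierFitter().fit(time, event_observed=event).median_survival_time_

t1, d1 = simulate_arm(100, scale=2.0)
t2, d2 = simulate_arm(100, scale=3.0)
diff_hat = km_median(t2, d2) - km_median(t1, d1)

# Bootstrap the difference of medians, resampling each arm separately.
boots = []
for _ in range(500):
    i = rng.integers(0, len(t1), len(t1))
    j = rng.integers(0, len(t2), len(t2))
    boots.append(km_median(t2[j], d2[j]) - km_median(t1[i], d1[i]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"median difference {diff_hat:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```

The empirical likelihood interval studied in the thesis would instead be obtained by inverting the chi-squared calibration of the empirical log-likelihood ratio rather than by resampling.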
12

Délka doktorského studia na Fakultě informatiky a statistiky / Length of doctoral studies at the Faculty of Informatics and Statistics

Hybšová, Aneta January 2011 (has links)
This thesis describes survival analysis, specifically the Kaplan-Meier estimator. The main part of the thesis deals with the problem of censored data, which is typical of survival analysis. The empirical part describes the length of PhD studies at the Faculty of Informatics and Statistics and students' "survival" in their studies using Kaplan-Meier curves. The uncensored data are analyzed first, followed by the whole data set (censored and uncensored observations).
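A minimal sketch of the Kaplan-Meier estimate described above, using the lifelines package and invented study durations (the thesis's real data are not reproduced); a zero event flag marks a student still enrolled at the cut-off date, i.e. a right-censored observation.

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Hypothetical study durations in years; 0 = still studying (right-censored).
duration = np.array([3.0, 4.5, 5.0, 6.0, 6.5, 7.0, 4.0, 8.0, 5.5, 6.0])
finished = np.array([1,   1,   0,   1,   0,   1,   1,   0,   1,   1])

kmf = KaplanMeierFitter()
kmf.fit(duration, event_observed=finished, label="PhD completion")

print(kmf.survival_function_)       # estimated P(still studying beyond t)
print(kmf.median_survival_time_)    # median time to completion
# kmf.plot_survival_function()      # draws the Kaplan-Meier curve
```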
13

Accelerated Life Model With Various Types Of Censored Data

Pridemore, Kathryn 01 January 2013 (has links)
The Accelerated Life Model is one of the most commonly used tools in the analysis of survival data, which are frequently encountered in medical research and reliability studies. In these types of studies we often deal with complicated data sets that cannot be observed completely in practice due to censoring. Such difficulties are made particularly apparent by the fact that there is little work in the statistical literature on the Accelerated Life Model for complicated types of censored data, such as doubly censored data, interval censored data, and partly interval censored data. In this work, we use the Weighted Empirical Likelihood approach (Ren, 2001) [33] to construct tests, confidence intervals, and goodness-of-fit tests for the Accelerated Life Model in a unified way for various types of censored data. We also provide algorithms for implementation and present relevant simulation results. I began working on this problem with Dr. Jian-Jian Ren. Upon Dr. Ren's departure from the University of Central Florida, I completed this dissertation under the supervision of Dr. Marianna Pensky.
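As a hedged sketch of the accelerated life (AFT) model itself, the example below fits a Weibull AFT model to simulated right-censored data with lifelines; the thesis's Weighted Empirical Likelihood procedures for doubly and partly interval-censored data are not reproduced, and the covariate and censoring scheme are invented.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(1)
n = 200
x = rng.binomial(1, 0.5, n)                     # a single covariate (e.g. treatment)
t = rng.weibull(1.5, n) * np.exp(0.5 * x)       # lifetimes accelerated by x
c = rng.uniform(0.5, 3.0, n)                    # administrative right censoring
df = pd.DataFrame({
    "time": np.minimum(t, c),
    "event": (t <= c).astype(int),
    "x": x,
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="event")
aft.print_summary()   # the x coefficient estimates the log time-ratio
```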
14

Variable selection in the general linear model for censored data

Yu, Lili 08 March 2007 (has links)
No description available.
15

Cumulative Sum Control Charts for Censored Reliability Data

Olteanu, Denisa Anca 28 April 2010 (has links)
Companies routinely perform life tests for their products. Typically, these tests involve running a set of products until the units fail. Most often, the data are censored according to different censoring schemes, depending on the particulars of the test. On occasion, tests are stopped at a predetermined time and the units that are yet to fail are suspended. In other instances, the data are collected through periodic inspection and only upper and lower bounds on the lifetimes are recorded. Reliability professionals use a number of non-normal distributions to model the resulting lifetime data with the Weibull distribution being the most frequently used. If one is interested in monitoring the quality and reliability characteristics of such processes, one needs to account for the challenges imposed by the nature of the data. We propose likelihood ratio based cumulative sum (CUSUM) control charts for censored lifetime data with non-normal distributions. We illustrate the development and implementation of the charts, and we evaluate their properties through simulation studies. We address the problem of interval censoring, and we construct a CUSUM chart for censored ordered categorical data, which we illustrate by a case study at Becton Dickinson (BD). We also address the problem of monitoring both of the parameters of the Weibull distribution for processes with right-censored data. / Ph. D.
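A minimal sketch, under assumed parameter values, of the likelihood-ratio CUSUM idea for right-censored Weibull lifetimes: each unit contributes the log-likelihood ratio of an out-of-control scale against the in-control scale (density ratio for failures, survivor-function ratio for censored units), and the chart signals when the cumulative sum crosses a control limit. The shape, scales, and limit below are illustrative; in practice the limit is chosen, e.g. by simulation, to achieve a target in-control run length.

```python
import numpy as np

def weibull_loglik(t, delta, beta, eta):
    """Log-likelihood of one unit: log density if delta=1, log survivor if delta=0."""
    z = (t / eta) ** beta
    log_f = np.log(beta / eta) + (beta - 1) * np.log(t / eta) - z
    log_S = -z
    return delta * log_f + (1 - delta) * log_S

beta = 2.0              # shape, assumed known
eta0, eta1 = 1.0, 0.7   # in-control vs out-of-control scale (a drop in lifetime)
h = 4.0                 # illustrative control limit

rng = np.random.default_rng(2)
lifetimes = eta0 * rng.weibull(beta, 100)      # first 60 units in control...
lifetimes[60:] = eta1 * rng.weibull(beta, 40)  # ...then the scale decreases
censor = rng.uniform(0.2, 2.0, 100)            # random right censoring
t, delta = np.minimum(lifetimes, censor), (lifetimes <= censor).astype(int)

cusum, signalled = 0.0, None
for i in range(len(t)):
    w = weibull_loglik(t[i], delta[i], beta, eta1) - weibull_loglik(t[i], delta[i], beta, eta0)
    cusum = max(0.0, cusum + w)                # CUSUM recursion
    if cusum > h and signalled is None:
        signalled = i
print("signal at unit:", signalled)
```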
16

A simulation comparison of parametric and nonparametric estimators of quantiles from right censored data

Serasinghe, Shyamalee Kumary January 1900 (has links)
Master of Science / Department of Statistics / Paul I. Nelson / Quantiles are useful in describing distributions of component lifetimes. Data, consisting of the lifetimes of sample units, used to estimate quantiles are often censored. Right censoring, the setting investigated here, occurs, for example, when some test units may still be functioning when the experiment is terminated. This study investigated and compared the performance of parametric and nonparametric estimators of quantiles from right censored data generated from Weibull and Lognormal distributions, models which are commonly used in analyzing lifetime data. Parametric quantile estimators based on these assumed models were compared via simulation to each other and to quantile estimators obtained from the nonparametric Kaplan-Meier estimator of the survival function. Various combinations of quantiles, censoring proportion, sample size, and distributions were considered. Our simulations show that the larger the sample size and the lower the censoring rate, the better the performance of the estimates of the 5th percentile of Weibull data. The lognormal data are very sensitive to the censoring rate, and we observed that for higher censoring rates the incorrect parametric estimates perform the best. If the underlying distribution of the data is unknown, it is risky to use parametric estimates of quantiles close to one. A limitation of the nonparametric estimator of large quantiles is its instability when the censoring rate is high and the largest observations are censored. Key Words: Quantiles, Right Censoring, Kaplan-Meier estimator
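A hedged sketch of the comparison described above: simulate right-censored Weibull lifetimes, estimate a low quantile parametrically by maximizing the censored Weibull likelihood and nonparametrically from the Kaplan-Meier curve, and compare both with the true value. The sample size, censoring scheme, and quantile level are assumptions for the example, not the study's actual settings.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
shape_true, scale_true, n, q = 1.5, 10.0, 200, 0.05
t_true = scale_true * rng.weibull(shape_true, n)
c = rng.uniform(5, 25, n)                        # right-censoring times
t, delta = np.minimum(t_true, c), (t_true <= c).astype(int)

def neg_loglik(params):
    """Censored Weibull log-likelihood: density for failures, survivor for censored."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    return -(np.sum(delta * weibull_min.logpdf(t, k, scale=lam))
             + np.sum((1 - delta) * weibull_min.logsf(t, k, scale=lam)))

k_hat, lam_hat = minimize(neg_loglik, x0=[1.0, np.median(t)], method="Nelder-Mead").x
param_q = weibull_min.ppf(q, k_hat, scale=lam_hat)

kmf = KaplanMeierFitter().fit(t, event_observed=delta)
surv = kmf.survival_function_
km_q = surv.index[surv.iloc[:, 0].values <= 1 - q].min()   # first time with S(t) <= 1-q

print("true 5th percentile:", weibull_min.ppf(q, shape_true, scale=scale_true))
print("parametric estimate:", param_q, " Kaplan-Meier estimate:", km_q)
```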
17

Análise de influência local nos modelos de riscos múltiplos / Influence diagnostics for polyhazard models in the presence of covariates

Fachini, Juliana Betini 06 February 2007 (has links)
In this work, several diagnostic methods for polyhazard models are presented. Polyhazard models are a flexible family for fitting lifetime data. Their main advantage over single-hazard models, such as the Weibull and log-logistic models, is that they accommodate a large class of non-monotone hazard shapes, such as bathtub and multimodal curves. Influence measures, such as the local influence and the total local influence of an individual, are derived, analyzed, and discussed. The computation of the likelihood displacement, as well as of the normal curvature in the local influence method, is discussed. Finally, a real data set is used for illustration, and a residual analysis is performed in order to select an appropriate model.
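As a much-simplified, hedged illustration of the global-influence idea behind these diagnostics, the sketch below computes case-deletion likelihood displacements for an ordinary single-hazard Weibull fit; the thesis's local-influence curvature calculations for polyhazard models are not reproduced, and the data are simulated with one planted aberrant lifetime.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

t = weibull_min.rvs(1.8, scale=5.0, size=60, random_state=42)
t[0] = 40.0   # plant one aberrant lifetime to flag

def neg_loglik(params, data):
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    return -np.sum(weibull_min.logpdf(data, k, scale=lam))

def fit(data):
    res = minimize(neg_loglik, x0=[1.0, np.mean(data)], args=(data,), method="Nelder-Mead")
    return res.x, -res.fun

theta_full, ll_full = fit(t)

# Likelihood displacement LD_i = 2 * [ l(theta_hat) - l(theta_hat_(i)) ], both
# evaluated on the full data, where theta_hat_(i) omits case i in the fit.
ld = []
for i in range(len(t)):
    theta_i, _ = fit(np.delete(t, i))
    ld.append(2.0 * (ll_full - (-neg_loglik(theta_i, t))))
print("most influential case:", int(np.argmax(ld)), "LD =", max(ld))
```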
18

Conception d’un outil simple d'utilisation pour réaliser des analyses statistiques ajustées valorisant les données de cohortes observationnelles de pathologies chroniques : application à la cohorte DIVAT / Conception of an easy to use application allowing to perform adjusted statistical analysis for the valorization of observational data from cohorts of chronic disease : application to the DIVAT cohort

Le Borgne, Florent 06 October 2016 (has links)
In medical research, cohorts help to better understand the evolution of a pathology and to improve patient care. Causal associations between risk factors and outcomes are regularly studied through etiological studies. Cohort analyses also allow the identification of new markers for predicting patient evolution. However, confounding factors are often a source of bias in the interpretation of the results of etiological or prognostic studies. In this manuscript, we present two research works in biostatistics, the common topic being propensity scores. In the first work, we compare the performance of different models for evaluating the causal effect of an exposure on an outcome in the presence of right-censored data. In the second work, we propose an estimator of standardized and weighted time-dependent ROC curves. This estimator provides a measure of the prognostic capacity of a marker while taking potential confounding factors into account. Consistent with our objective of providing suitable statistical tools, we also present in this manuscript an application called Plug-Stat®. Directly linked to the database, it allows statistical analyses adapted to the pathology to be performed, in order to facilitate epidemiological research and improve the valorization of data from observational cohorts.
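A hedged sketch of the propensity-score adjustment mentioned above: the exposure probability is estimated from a confounder by logistic regression, and the exposure groups are then compared on a right-censored outcome with inverse-probability-of-treatment-weighted Kaplan-Meier curves. This only illustrates the weighting idea on simulated data; it is not the thesis's model comparison nor its standardized time-dependent ROC estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(5)
n = 500
age = rng.normal(50, 10, n)                              # confounder
p_exposed = 1 / (1 + np.exp(-(age - 50) / 10))           # exposure depends on age
exposed = rng.binomial(1, p_exposed)
t_event = rng.exponential(np.where(exposed, 4, 5) * (60 / age))  # outcome depends on both
c = rng.uniform(1, 10, n)                                # right censoring
time, event = np.minimum(t_event, c), (t_event <= c).astype(int)

X = age.reshape(-1, 1)
ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]
weights = np.where(exposed == 1, 1 / ps, 1 / (1 - ps))   # IPTW weights

for group in (0, 1):
    m = exposed == group
    kmf = KaplanMeierFitter().fit(time[m], event_observed=event[m], weights=weights[m])
    print(f"exposed={group}: weighted median survival = {kmf.median_survival_time_:.2f}")
```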
19

Survival analysis issues with interval-censored data

Oller Piqué, Ramon 30 June 2006 (has links)
Survival analysis is used in various fields to analyze data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties arising in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be measured exactly but partial information is available. Different circumstances can produce different types of censoring. Interval censoring refers to the situation where the event of interest cannot be directly observed and is only known to have occurred during a random interval of time. This kind of censoring has generated much work in recent years and typically occurs when individuals in a study are inspected or observed intermittently, so that an individual's lifetime is known only to lie between two successive observation times.

This PhD thesis is divided into two parts which handle two important issues of interval-censored data. The first part, comprising Chapters 2 and 3, concerns formal conditions which allow estimation of the lifetime distribution to be based on a well-known simplified likelihood. The second part, comprising Chapters 4 and 5, is devoted to the study of test procedures for the k-sample problem. The present work reproduces material which has already been published or submitted for publication.

In Chapter 1 we give the basic notation used in this PhD thesis. We also describe the nonparametric approach to estimating the distribution function of the lifetime variable. Peto (1973) and Turnbull (1976) were the first authors to propose an estimation method based on a simplified version of the likelihood function. Other authors have studied the uniqueness of the solution given by this method (Gentleman and Geyer, 1994) or have improved it with new proposals (Wellner and Zhan, 1997).

Chapter 2 reproduces the paper of Oller et al. (2004). We prove the equivalence between different characterizations of noninformative censoring that have appeared in the literature, and we define a constant-sum condition analogous to the one derived in the context of right censoring. We also prove that when the noninformative condition or the constant-sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator (NPMLE) of the failure time distribution function. Finally, we characterize the constant-sum property according to different types of censoring. In Chapter 3 we study the relevance of the constant-sum property for the identifiability of the lifetime distribution. We show that the lifetime distribution is not identifiable outside the class of constant-sum models. We also show that the lifetime probabilities assigned to the observable intervals are identifiable inside the class of constant-sum models. We illustrate all these notions with several examples.

Chapter 4 has been partially published in the survey paper of Gómez et al. (2004). It gives a general view of the procedures that have been applied to the nonparametric problem of comparing two or more interval-censored samples. We also develop S-Plus routines implementing the permutational version of the Wilcoxon test, the logrank test and the t-test for interval-censored data (Fay and Shih, 1998). This part of the PhD thesis is completed in Chapter 5 with several proposed extensions of Jonckheere's test. In order to test for an increasing trend in the k-sample problem, Abel (1986) gave one of the few generalizations of Jonckheere's test for interval-censored data. We suggest further Jonckheere-type tests based on the tests presented in Chapter 4, using permutational and Monte Carlo approaches. We provide computer programs for each proposal and perform a simulation study to compare their power under different parametric assumptions and different alternatives. Both chapters are motivated by the analysis of data from a study of the benefits of zidovudine in patients in the early stages of HIV infection (Volberding et al., 1995).

Finally, Chapter 6 summarizes the results and highlights those aspects which remain to be completed.
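A minimal sketch of the simplified-likelihood NPMLE discussed above (the Peto/Turnbull self-consistency EM) for interval-censored data: each observation is an interval (L, R] known to contain the event time, with R = infinity for right censoring; mass is placed on elementary intervals built from the observed endpoints and updated by the self-consistency equations. The toy data below are invented.

```python
import numpy as np

# (L, R] observation intervals; np.inf marks a right-censored subject.
L = np.array([0.0, 2.0, 1.0, 4.0, 3.0, 5.0])
R = np.array([3.0, 4.0, 2.5, np.inf, 6.0, np.inf])

ends = np.unique(np.concatenate([L, R]))
lo, hi = ends[:-1], ends[1:]                    # elementary intervals (lo_j, hi_j]
A = (L[:, None] <= lo[None, :]) & (hi[None, :] <= R[:, None])   # alpha_ij: (lo_j,hi_j] in (L_i,R_i]

p = np.full(len(lo), 1.0 / len(lo))             # initial mass on each elementary interval
for _ in range(1000):
    denom = A @ p                               # estimated P(observed interval) per subject
    p_new = (A * p[None, :] / denom[:, None]).mean(axis=0)      # self-consistency update
    if np.max(np.abs(p_new - p)) < 1e-8:
        p = p_new
        break
    p = p_new

surv = 1.0 - np.cumsum(p)                       # estimated S at each right endpoint
for b, s in zip(hi, surv):
    print(f"S({b:g}) = {s:.3f}")
```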
20

Measurement Error and Misclassification in Interval-Censored Life History Data

White, Bethany Joy Giddings January 2007 (has links)
In practice, data are frequently incomplete in one way or another. It can be a significant challenge to make valid inferences about the parameters of interest in this situation. In this thesis, three problems involving such data are addressed. The first two problems involve interval-censored life history data with mismeasured covariates. Data of this type are incomplete in two ways. First, the exact event times are unknown due to censoring. Second, the true covariate is missing for most, if not all, individuals. This work focuses primarily on the impact of covariate measurement error in progressive multi-state models with data arising from panel (i.e., interval-censored) observation. These types of problems arise frequently in clinical settings (e.g., when disease progression is of interest and patient information is collected during irregularly spaced clinic visits). Two- and three-state models are considered in this thesis. This work is motivated by a research program on psoriatic arthritis (PsA) where the effects of error-prone covariates on rates of disease progression are of interest and patient information is collected at clinic visits (Gladman et al. 1995; Bond et al. 2006). Information regarding the error distributions was available based on results from a separate study conducted to evaluate the reliability of clinical measurements that are used in PsA treatment and follow-up (Gladman et al. 2004). The asymptotic bias of covariate effects obtained by ignoring error in covariates is investigated and shown to be substantial in some settings. In a series of simulation studies, the performance of corrected likelihood methods and methods based on a simulation-extrapolation (SIMEX) algorithm (Cook & Stefanski 1994) was investigated to address covariate measurement error. The methods implemented were shown to result in much smaller empirical biases and in empirical coverage probabilities closer to the nominal levels. The third problem considered involves an extreme case of interval censoring known as current status data. Current status data arise when individuals are observed only at a single point in time and it is then determined whether they have experienced the event of interest. To complicate matters, in the problem considered here, an unknown proportion of the population will never experience the event of interest. Again, this type of data is incomplete in two ways. One assessment is made on each individual to determine whether or not an event has occurred. Therefore, the exact event times are unknown for those who will eventually experience the event. In addition, whether or not the individuals will ever experience the event is unknown for those who have not experienced the event by the assessment time. This problem was motivated by a series of orthopedic trials looking at the effect of blood thinners in hip and knee replacement surgeries. These blood thinners can cause a negative serological response in some patients. This response was the outcome of interest, and the only available information regarding it was the seroconversion time under current status observation. In this thesis, latent class models with parametric, nonparametric and piecewise constant forms of the seroconversion time distribution are described. They account for the fact that only a proportion of the population will experience the event of interest. Estimators based on an EM algorithm were evaluated via simulation, and the orthopedic surgery data were analyzed based on this methodology.
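A hedged, deliberately simplified sketch of the SIMEX idea (Cook & Stefanski, 1994) cited above, using ordinary linear regression with a mismeasured covariate rather than the thesis's progressive multi-state models: extra measurement error is added at several levels lambda, the naive estimator is refit, and the trend is extrapolated back to lambda = -1. The measurement-error variance is assumed known, and all data and parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta, sigma_u = 2000, 1.0, 0.8
x = rng.normal(0, 1, n)                     # true covariate (unobserved)
w = x + rng.normal(0, sigma_u, n)           # error-prone measurement
y = beta * x + rng.normal(0, 0.5, n)        # outcome

def naive_slope(cov):
    """Slope from regressing y on the (possibly noisier) covariate."""
    return np.polyfit(cov, y, 1)[0]

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
slopes = []
for lam in lambdas:
    reps = [naive_slope(w + np.sqrt(lam) * sigma_u * rng.normal(0, 1, n)) for _ in range(B)]
    slopes.append(np.mean(reps))            # average refit over B pseudo data sets

# Quadratic extrapolation of slope(lambda) back to lambda = -1 ("no error").
coef = np.polyfit(lambdas, slopes, 2)
simex_est = np.polyval(coef, -1.0)
print(f"naive slope: {slopes[0]:.3f}   SIMEX slope: {simex_est:.3f}   true beta: {beta}")
```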
