51

The performance of inverse probability of treatment weighting and propensity score matching for estimating marginal hazard ratios

Nåtman, Jonatan January 2019 (has links)
Propensity score methods are increasingly used to reduce the effect of measured confounders in observational research. In medicine, censored time-to-event data are common. Using Monte Carlo simulations, this thesis evaluates the performance of nearest neighbour matching (NNM) and inverse probability of treatment weighting (IPTW), each combined with Cox proportional hazards models, for estimating marginal hazard ratios. The focus is on performance across different sample sizes and censoring rates, aspects that have not been fully investigated in this context before. The results show that, in the absence of censoring, both methods can reduce bias substantially. IPTW consistently outperformed NNM in terms of bias and MSE. For the smallest examined sample size of 60 subjects, IPTW produced estimates with bias below 15%. Since the data were generated using a conditional parametrisation, the estimation of univariate models violates the proportional hazards assumption; as a result, censoring the data led to an increase in bias.
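A minimal sketch of the IPTW-plus-Cox pipeline evaluated above, assuming the Python lifelines library; the confounder strength, effect sizes, censoring rate, and sample size are illustrative values, not those used in the thesis:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2019)
n = 1000  # the thesis examines sizes down to n = 60

# One measured confounder that drives both treatment assignment and hazard
x = rng.normal(size=n)
ps_true = 1.0 / (1.0 + np.exp(-0.5 * x))       # true propensity score
z = rng.binomial(1, ps_true)                   # treatment indicator

# Exponential event times with a conditional log-hazard ratio of -0.5
t_event = rng.exponential(1.0 / np.exp(-0.5 * z + 0.8 * x))
t_cens = rng.exponential(2.0, n)               # independent censoring
df = pd.DataFrame({
    "time": np.minimum(t_event, t_cens),
    "event": (t_event <= t_cens).astype(int),
    "z": z,
    # IPT weights; in practice the propensity score is estimated,
    # e.g. by logistic regression of z on the confounders
    "w": np.where(z == 1, 1.0 / ps_true, 1.0 / (1.0 - ps_true)),
})

# Weighted univariate Cox fit for the marginal hazard ratio;
# robust=True requests a sandwich variance, appropriate with weights
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```

Because the Cox model is non-collapsible, the weighted fit targets the marginal hazard ratio, which generally differs from the conditional coefficient used to generate the data — the distinction around which the thesis's simulations are built.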
52

Adaptation des designs de phase II en cancérologie à un critère de jugement censuré / Adaptation of phase II oncology designs to a time-to-event endpoint

Belin, Lisa 30 May 2016 (has links)
Phase II clinical trials are a key stage in drug development. This screening stage aims to identify the active drugs that deserve further confirmatory evaluation in phase III trials and to abandon those judged inactive. The choice of endpoint and the decision rule are the essential elements of this stage of a clinical trial. In oncology the endpoint is usually binary (response to treatment), but in recent years phase II trials have also considered time-to-event endpoints, censored or not, such as progression-free survival. In this thesis we study two-stage designs with a possible early stop for futility, for both binary and time-to-event endpoints. In practice, phase II trials often deviate from the protocol defined a priori, most commonly because a patient cannot be evaluated (for example, because of an intercurrent event) or because the follow-up is too short. The objective of this thesis was to evaluate the consequences of these deviations and to propose alternative solutions. The work has two main parts: the first deals with a binary endpoint when some patients are unevaluable, the second with a censored endpoint under reduced follow-up.

With a binary endpoint, Simon's two-stage design is the most widely used, but it requires the response of every included patient to be known at the chosen clinical time point. How, then, should the trial be analysed when some patients are unevaluable, and what are the consequences for the operating characteristics of the design? Several strategies were considered (excluding, replacing, or counting unevaluable patients as failures); the simulation study carried out in this work shows that none of them preserves the type I and type II error rates fixed a priori. To remedy this, we propose a "rescue" strategy that reconstructs the Simon design from the probabilities of response under the null and alternative hypotheses conditional on the patient being evaluable. Our simulations show that this strategy minimises the deviations from the nominal type I and type II error rates even though less information is available than planned.

Over the last decade, many designs based on time-to-event endpoints have been developed. The naive approach estimates progression-free survival by a crude rate at a fixed time point and uses a Simon design to establish the decision rule. Case and Morgan (2003) proposed a design comparing progression-free survival rates, computed from Nelson-Aalen estimates, at a pre-specified time point. Using the whole follow-up, Kwak et al. (2013) proposed the one-sample log-rank statistic to compare the observed progression-free survival curve with a theoretical curve; this design exploits the maximum available information and therefore requires fewer patients to test the same hypotheses. It does, however, require every patient to be followed from inclusion to the end of the trial (no loss to follow-up), which is often unrealistic (good-prognosis patients, long trial durations); we therefore propose a modification of Kwak's design for reduced follow-up and compare it with the original design under several censoring scenarios. Both new methods are illustrated with phase II trials planned at the Institut Curie, demonstrating their value in practice.
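The operating characteristics of a Simon two-stage design follow from elementary binomial calculations. The following Python sketch computes the probability of declaring the drug active; the design shown (r1 = 1, n1 = 10, r = 5, n = 29 for p0 = 0.10 vs p1 = 0.30) is a standard published example, not a design from this thesis:

```python
from scipy.stats import binom

def prob_declare_active(p, n1, r1, n, r):
    """P(declare the drug active | true response rate p) for a Simon
    two-stage design: stop for futility if at most r1 of the first n1
    patients respond; otherwise accrue to n patients and declare the
    drug active if the total number of responses exceeds r."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):          # outcomes that pass stage 1
        # need x1 + x2 > r, i.e. x2 > r - x1 among the n - n1 stage-2 patients
        total += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
    return total

p0, p1 = 0.10, 0.30
print("type I error:", prob_declare_active(p0, n1=10, r1=1, n=29, r=5))
print("power       :", prob_declare_active(p1, n1=10, r1=1, n=29, r=5))
```

Excluding or replacing unevaluable patients amounts to silently changing n1 and n in this calculation, which is why the error rates drift; the rescue strategy instead recomputes the boundaries from response probabilities conditional on evaluability.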
53

Estimação e comparação de curvas de sobrevivência sob censura informativa. / Estimation and comparison of survival curves with informative censoring.

Cesar, Raony Cassab Castro 10 July 2013 (has links)
The motivation for this dissertation is a study undertaken at the Cancer Institute of the State of São Paulo (ICESP) involving eight hundred and eight patients with advanced cancer. Each patient was followed from the first cancer-related admission to an intensive care unit (ICU) for a period of at most two years. The main objective of the study is to assess the survival time and quality of life of these patients by means of a quality-adjusted lifetime (QAL). According to Gelber et al. (1989), combining these two pieces of information into the QAL induces an informative censoring scheme; consequently, traditional methods for censored data, such as the Kaplan-Meier estimator (Kaplan and Meier, 1958) and the log-rank test (Peto and Peto, 1972), become inappropriate. To overcome this deficiency, Zhao and Tsiatis (1997, 1999) proposed new estimators of the survival function, and Zhao and Tsiatis (2001) developed a test analogous to the log-rank test for comparing two survival functions; all of these methods account for informative censoring. In this work we critically evaluate these methods and apply them to estimate and test the survival curves associated with the QAL in the ICESP study. Finally, we use an empirical method based on bootstrap resampling to propose a generalisation of the Zhao-Tsiatis test to more than two groups.
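The failure of the Kaplan-Meier estimator under informative censoring, which motivates the Zhao-Tsiatis methodology, is easy to reproduce by simulation. The following Python sketch (illustrative distributions, not the ICESP data or the Zhao-Tsiatis estimators) makes the censoring time depend on the lifetime and compares the Kaplan-Meier estimate with the truth:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000

t = rng.exponential(1.0, n)                 # true lifetimes, hazard 1
c = 0.5 * t + rng.exponential(0.5, n)       # censoring *depends on* t
obs = np.minimum(t, c)
delta = (t <= c).astype(int)

def km_survival(obs, delta, t0):
    """Kaplan-Meier estimate of S(t0); valid only if censoring is
    independent of the lifetime, which fails by construction here."""
    order = np.argsort(obs)
    obs, delta = obs[order], delta[order]
    at_risk = len(obs) - np.arange(len(obs))     # risk-set sizes, sorted times
    factors = np.where((delta == 1) & (obs <= t0), 1 - 1 / at_risk, 1.0)
    return factors.prod()

print("true S(1)    :", np.exp(-1.0))                  # 0.368
print("Kaplan-Meier :", km_survival(obs, delta, 1.0))  # ~0.5 here: biased up
```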
54

Modelos semiparamétricos de fração de cura para dados com censura intervalar / Semiparametric cure rate models for interval censored data

Costa, Julio Cezar Brettas da 18 February 2016 (has links)
Cure rate models form a broad subarea of survival analysis with wide applicability in medical studies. They are appropriate when the researcher recognises that part of the population is not susceptible to the event of interest, so that the event may never occur for those individuals. Although the theory is well established for right-censored data, the cure rate literature lacks studies that address interval censoring, which motivates this work. Three semiparametric cure rate models for interval-censored data are considered here, applied to real data sets and studied through simulations. The first, presented by Liu and Shen (2009), is a promotion time model estimated by a variant of the EM algorithm that uses convex optimisation in the maximisation step. The model proposed by Lam et al. (2013) is a semiparametric Cox model in which the cure fraction of the population is captured by a random effect with a compound Poisson distribution, estimated by maximum likelihood combined with data augmentation. In Xiang et al. (2011), a standard mixture cure model is proposed, with a logistic model for the incidence and a proportional hazards structure for the effects on the event time. The latter two models have extensions for clustered data, which are used in the applications of this work. One of the main motivations for this dissertation is a study conducted by researchers at Fundação Pró-Sangue, in São Paulo - SP, whose interest lies in the time until the occurrence of anaemia in repeat blood donors, detected through periodic evaluations of the haematocrit measured at each visit to the blood centre. The existence of a subset of donors who are not susceptible to the condition makes the models studied here appropriate. The second data set consists of periodic observations of white-tailed deer fitted with radio collars; the aim is to assess the winter migratory behaviour of the animals under given weather and geographic conditions, allowing for the possibility that a deer does not migrate. A comparative simulation study of the proposed models assesses their robustness under different specifications of the scenario and the cure fraction. To the best of our knowledge, no previous work has compared the different cure mechanisms in the presence of interval censoring.
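For intuition, the following Python sketch fits the simplest member of this family: a standard mixture cure model with exponential latency and right (not interval) censoring, estimated by EM. It is a toy version under assumed parametric forms, far simpler than the semiparametric models studied in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p_cure, lam = 5000, 0.3, 1.0

cured = rng.random(n) < p_cure
t = np.where(cured, np.inf, rng.exponential(1 / lam, n))  # cured never fail
c = rng.exponential(2.0, n)                               # right censoring
obs = np.minimum(t, c)
delta = (t <= c).astype(int)                              # cured: always 0

# EM for the mixture cure model S(t) = p + (1 - p) * exp(-lam * t)
p_hat, lam_hat = 0.5, 0.5                                 # crude start
for _ in range(200):
    # E-step: posterior probability that a censored subject is susceptible
    s_lat = np.exp(-lam_hat * obs)
    w = np.where(delta == 1, 1.0,
                 (1 - p_hat) * s_lat / (p_hat + (1 - p_hat) * s_lat))
    # M-step: closed-form updates for cure fraction and event rate
    p_hat = 1.0 - w.mean()
    lam_hat = delta.sum() / (w * obs).sum()

print(f"estimated cure fraction: {p_hat:.3f} (true {p_cure})")
print(f"estimated event rate   : {lam_hat:.3f} (true {lam})")
```

The semiparametric models above replace the exponential latency with unspecified baseline distributions and handle interval-censored inspection times, but the E-step/M-step alternation follows the same logic.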
55

Estimation of wood fibre length distributions from censored mixture data

Svensson, Ingrid January 2007 (has links)
The motivating forestry background for this thesis is the need for fast, non-destructive, and cost-efficient methods to estimate fibre length distributions in standing trees in order to evaluate the effect of silvicultural methods and breeding programs on fibre length. The usage of increment cores is a commonly used non-destructive sampling method in forestry. An increment core is a cylindrical wood sample taken with a special borer, and the methods proposed in this thesis are especially developed for data from increment cores. Nevertheless the methods can be used for data from other sampling frames as well, for example for sticks with the shape of an elongated rectangular box.

This thesis proposes methods to estimate fibre length distributions based on censored mixture data from wood samples. Due to sampling procedures, wood samples contain cut (censored) and uncut observations. Moreover the samples consist not only of the fibres of interest but of other cells (fines) as well. When the cell lengths are determined by an automatic optical fibre-analyser, there is no practical possibility to distinguish between cut and uncut cells or between fines and fibres. Thus the resulting data come from a censored version of a mixture of the fine and fibre length distributions in the tree. The methods proposed in this thesis can handle this lack of information.

Two parametric methods are proposed to estimate the fine and fibre length distributions in a tree. The first method is based on grouped data. The probabilities that the length of a cell from the sample falls into different length classes are derived, the censoring caused by the sampling frame taken into account. These probabilities are functions of the unknown parameters, and ML estimates are found from the corresponding multinomial model.

The second method is a stochastic version of the EM algorithm based on the individual length measurements. The method is developed for the case where the distributions of the true lengths of the cells at least partially appearing in the sample belong to exponential families. The cell length distribution in the sample and the conditional distribution of the true length of a cell at least partially appearing in the sample given the length in the sample are derived. Both these distributions are necessary in order to use the stochastic EM algorithm. Consistency and asymptotic normality of the stochastic EM estimates is proved.

The methods are applied to real data from increment cores taken from Scots pine trees (Pinus sylvestris L.) in Northern Sweden and further evaluated through simulation studies. Both methods work well for sample sizes commonly obtained in practice.
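The sampling frame described above is easy to simulate. The following Python sketch (hypothetical lognormal fibre lengths and window width; fines are omitted, so only the cut-off and length-bias effects appear, not the mixture) shows why raw measurements from an increment core misrepresent the tree's length distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000
core = (100.0, 105.0)        # a window of width 5 mm along the core axis

# True cell lengths: an assumed lognormal fibre population
ell = rng.lognormal(mean=0.5, sigma=0.4, size=n)

# Scatter fibre midpoints along a long axis; a fibre enters the sample
# if it overlaps the window, so long fibres are sampled more often
mid = rng.uniform(0.0, 1000.0, n)
left, right = mid - ell / 2, mid + ell / 2
hit = (right > core[0]) & (left < core[1])

# Observed length is the overlap with the window; fibres crossing a
# window edge are cut, i.e. censored
obs = np.clip(right[hit], *core) - np.clip(left[hit], *core)
cut = (left[hit] < core[0]) | (right[hit] > core[1])

print(f"mean length, population      : {ell.mean():.3f}")
print(f"mean true length, sampled    : {ell[hit].mean():.3f}")  # length bias
print(f"mean observed (cut) length   : {obs.mean():.3f}")
print(f"fraction of sampled cells cut: {cut.mean():.3f}")
```

The thesis's grouped-data ML and stochastic EM methods invert exactly this kind of sampling distortion to recover the population distributions.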
56

Nonparametric statistical inference for dependent censored data

El Ghouch, Anouar 05 October 2007 (has links)
A frequent problem in practical survival data analysis is censoring. A censored observation occurs when observation of the event time (duration or survival time) is prevented by the occurrence of an earlier competing event (the censoring time). Censoring may be due to different causes, for example the loss of subjects under study, the end of the follow-up period, drop-out or termination of the study, and limited sensitivity of a measurement instrument. The literature on censored data focuses on the i.i.d. case, but in many real applications the data are collected sequentially in time or space, so the independence assumption does not hold. Some typical examples of correlated data subject to censoring: in clinical trials, patients from the same hospital often have correlated survival times due to unmeasured variables such as the quality of the hospital equipment. Censored correlated data are also common in environmental and spatial (geographical or ecological) statistics, where, owing to the measurement process, e.g. the analytical equipment, only measurements exceeding certain thresholds, such as the method or instrumental detection limits, can be included in the analysis. Many other examples arise in fields such as econometrics and financial statistics; observations on the duration of unemployment, for instance, may be right-censored and are typically correlated. When the data are dependent and subject to censoring, estimation and inference become more challenging mathematical problems with a wide area of applications. In this context, we propose new and flexible tools based on a nonparametric approach. More precisely, allowing dependence between individuals, our main contributions are the following. First, we develop more suitable confidence intervals for a general class of functionals of a survival distribution via the empirical likelihood method. Secondly, we study conditional mean estimation using the local linear technique. Thirdly, we develop and study a new estimator of the conditional quantile function, also based on the local linear method. For each proposed method, asymptotic results such as consistency and asymptotic normality are derived, and the finite-sample performance is evaluated in a simulation study.
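As a point of reference for the second contribution, the following Python sketch implements a generic local linear estimator of the conditional mean. The observation weights are a placeholder: with censored responses, one standard device is inverse-probability-of-censoring weighting of the uncensored observations, which is an assumption of this sketch rather than the estimator developed in the thesis:

```python
import numpy as np

def local_linear(x0, x, y, w, h):
    """Local linear estimate of E[Y | X = x0] with bandwidth h and
    observation weights w (all ones for fully observed responses)."""
    k = np.exp(-0.5 * ((x - x0) / h) ** 2) * w   # Gaussian kernel x weights
    X = np.column_stack([np.ones_like(x), x - x0])
    sw = np.sqrt(k)
    # Weighted least squares; the intercept is the fitted value at x0
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 500)
fits = [local_linear(g, x, y, np.ones_like(x), h=0.1)
        for g in (0.25, 0.5, 0.75)]
print(np.round(fits, 2))   # approximately sin(2*pi*g): 1, 0, -1
```

The dissertation's asymptotic theory extends this familiar smoother to dependent, censored observations, where both the weighting and the variance calculations become substantially harder.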
57

A Study of the Calibration Regression Model with Censored Lifetime Medical Cost

Lu, Min 03 August 2006 (has links)
Medical costs have received increasing interest recently in biostatistics and public health. Statistical analysis and inference for lifetime medical costs are challenging because survival times are censored for some study subjects, whose subsequent costs are unknown. Huang (2002) proposed the calibration regression model, a semiparametric regression tool for studying medical costs in relation to covariates. In this thesis, an inference procedure based on the empirical likelihood ratio method is investigated. Unadjusted and adjusted empirical likelihood confidence regions are constructed for the regression parameters, and the proposed empirical likelihood methods are compared with a normal-approximation-based method. Simulation results show that the empirical likelihood ratio method outperforms the normal approximation in terms of coverage probability; in particular, the adjusted empirical likelihood performs best and overcomes the undercoverage problem.
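For a flavour of the methodology, the following Python sketch computes Owen's empirical likelihood ratio for a simple mean and checks its coverage by simulation. It is a one-dimensional toy, not Huang's calibration regression model, but it reproduces the undercoverage that the adjusted empirical likelihood is designed to fix:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    """-2 log empirical likelihood ratio for the mean (Owen-style)."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                       # mu outside the convex hull
    # Solve sum z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier,
    # bracketing lam so that all implied weights stay positive
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log(1 + lam * z))

# Coverage check at the nominal 95% level, exponential data, n = 50
rng = np.random.default_rng(3)
cover = np.mean([neg2_log_elr(rng.exponential(1, 50), 1.0) <= chi2.ppf(0.95, 1)
                 for _ in range(2000)])
print("empirical coverage:", cover)   # close to, typically below, 0.95
```

The adjusted empirical likelihood adds a pseudo-observation that enlarges the convex hull, pulling this coverage back up toward the nominal level.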
59

Testing the Hazard Rate, Part I

Liero, Hannelore January 2003 (has links)
We consider a nonparametric survival model with random censoring. To test whether the hazard rate has a given parametric form, the unknown hazard rate is estimated by a kernel estimator. Based on a limit theorem stating the asymptotic normality of the quadratic distance of this estimator from the smoothed hypothesis, an asymptotic α-test is proposed. Since the test statistic depends on the maximum likelihood estimator of the unknown parameter in the hypothetical model, properties of this parameter estimator are investigated. Power considerations complete the approach.
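A kernel hazard estimator of the kind used in this test can be written in a few lines by smoothing the Nelson-Aalen increments (the Ramlau-Hansen estimator). The following Python sketch uses illustrative distributions and bandwidth; the test statistic itself, which compares this estimate with the smoothed parametric hypothesis, is omitted:

```python
import numpy as np

def kernel_hazard(t_grid, obs, delta, b):
    """Kernel estimate of the hazard rate under random censoring,
    obtained by smoothing the Nelson-Aalen increments dN_i / Y(T_i)."""
    order = np.argsort(obs)
    obs, delta = obs[order], delta[order]
    at_risk = len(obs) - np.arange(len(obs))    # Y(T_i) for sorted times
    incr = delta / at_risk                      # Nelson-Aalen jumps
    u = (t_grid[:, None] - obs[None, :]) / b
    K = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)  # Epanechnikov
    return (K * incr).sum(axis=1) / b

# Exponential(1) lifetimes with uniform censoring: true hazard is 1
rng = np.random.default_rng(5)
t = rng.exponential(1.0, 2000)
c = rng.uniform(0, 3, 2000)
obs, delta = np.minimum(t, c), (t <= c).astype(float)
grid = np.linspace(0.3, 1.5, 5)
print(np.round(kernel_hazard(grid, obs, delta, b=0.3), 2))  # roughly 1
```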
60

Tests for homogeneity of survival distributions against non-location alternatives and analysis of the gastric cancer data

Bagdonavičius, Vilijandas B., Levuliene, Ruta, Nikulin, Mikhail S., Zdorova-Cheminade, Olga January 2004 (has links)
Two- and k-sample tests of equality of survival distributions against alternatives including cross-effects of survival functions and proportional and monotone hazard ratios are given for right-censored data. The asymptotic power against approaching alternatives is investigated. The tests are applied to the well-known chemo- and radiotherapy data of the Gastrointestinal Tumor Study Group; the P-values for both proposed tests are much smaller than those obtained with other known tests. Unlike the test of Stablein and Koutrouvelis, the new tests can be applied not only to singly but also to randomly censored data.
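For comparison with the proposed tests, the following Python sketch implements the standard two-sample log-rank test, which is powerful against proportional-hazards (location-type) alternatives but can fail when survival curves cross — precisely the situation the tests above target. Distributions and sample sizes are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def logrank(obs, delta, group):
    """Standard two-sample log-rank test (statistic, p-value)."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(obs[delta == 1]):        # distinct event times
        at_risk = obs >= t
        n_t = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((obs == t) & (delta == 1)).sum()
        d1 = ((obs == t) & (delta == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n_t          # observed minus expected
        if n_t > 1:                             # hypergeometric variance
            var += d * (n1 / n_t) * (1 - n1 / n_t) * (n_t - d) / (n_t - 1)
    stat = o_minus_e ** 2 / var
    return stat, chi2.sf(stat, 1)

rng = np.random.default_rng(9)
g = np.repeat([0, 1], 100)
t = np.where(g == 1, rng.exponential(1.5, 200), rng.exponential(1.0, 200))
c = rng.exponential(3.0, 200)
print(logrank(np.minimum(t, c), (t <= c).astype(int), g))
```

Under crossing hazards the early and late contributions to o_minus_e cancel, driving the statistic toward zero; the cross-effect tests above are constructed to retain power in that case.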
