  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Bootstrap Application in Adjusting Asymptotic Distribution for Interval-Censored Data

Chung, Yun-yuan 20 June 2007 (has links)
The comparison of two or more failure time distributions based on interval-censored data can be tested by extensions of the log-rank test proposed by Sun (1996, 2001, 2004). Chang (2004) further verified that these test statistics are approximately chi-square with p-1 degrees of freedom after a constant-factor adjustment, where the constant is obtained from simulations. In this paper we take a different approach, estimating the adjustment factor for a given interval-censored data set by applying the bootstrap technique to the test statistics. Simulation results indicate that the bootstrap technique performs well on these test statistics, except for the one proposed in 1996. Using a chi-square goodness-of-fit test, we found that Sun's 1996 test is significantly far from any chi-square distribution.
2

Card-Shuffling Analysis with Weighted Rank Distance

Wu, Kung-sheng 24 June 2007 (has links)
In this paper, we use two weighted rank distances (Wilcoxon rank and log rank) to analyze how many times a deck of 52 cards must be shuffled to become sufficiently randomized. Bayer and Diaconis (1992) used the variation distance as a measure of randomness in their card-shuffling analysis. Lin (2006) used the deviation distance to analyze card shuffling without complicated mathematical formulas. We provide two new ideas for measuring distance in card-shuffling analysis.
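One way to make this concrete is to simulate Gilbert-Shannon-Reeds riffle shuffles and track a weighted rank distance from the sorted order. A sketch under stated assumptions: the linearly increasing weight vector below is an illustrative stand-in for the paper's Wilcoxon-rank weighting, applied to a footrule-style displacement distance.

```python
import numpy as np

rng = np.random.default_rng(1)

def riffle(deck):
    """One Gilbert-Shannon-Reeds riffle shuffle: binomial cut, then
    interleave with probability proportional to packet sizes."""
    n = len(deck)
    cut = rng.binomial(n, 0.5)
    left, right = list(deck[:cut]), list(deck[cut:])
    out = []
    while left or right:
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return np.array(out)

def weighted_rank_distance(perm, weights):
    """Weighted footrule-style distance from the sorted order.

    weights[i] is the weight attached to card i; the paper's Wilcoxon
    and log-rank weightings are particular choices.
    """
    n = len(perm)
    pos = np.empty(n, dtype=int)
    pos[perm] = np.arange(n)          # current position of card i
    return float(np.sum(weights * np.abs(pos - np.arange(n))))

n = 52
wilcoxon_w = np.arange(1, n + 1) / n  # illustrative increasing rank weights
deck = np.arange(n)
for _ in range(7):                    # the classical "seven shuffles" mark
    deck = riffle(deck)
d7 = weighted_rank_distance(deck, wilcoxon_w)
```

A sorted deck has distance zero, and the distance of the shuffled deck can be tracked shuffle by shuffle to see where it plateaus.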
3

CUSUM control charts to monitor survival data

Oliveira, Jocelânio Wesley de 18 May 2018 (has links)
In this work we propose CUSUM control charts to monitor survival times. Our proposal is to develop different statistics for the CUSUM chart score in a prospective way. We initially consider a nonparametric approach to monitor homogeneous populations: the CUSUM evaluates the variation in the log-rank test statistic as a way to identify significant changes in the risk of failure over time. Several expressions for this score were considered and, in particular, we propose the ZDiff CUSUM chart, whose score is the increment in the log-rank statistic Z at each inspection point relative to the previous one. Simulation studies found this method to be efficient. Subsequently, we investigated approaches that account for heterogeneity in the population through the Cox model, considering measures based on the likelihood ratio and on martingale and deviance residuals. Through simulations, we verified that the likelihood-ratio method detects a change in the hazard rate quickly when the intensity of the change is known and this value is used in the construction of the test. On the other hand, CUSUM charts based on residuals are simpler and proved effective in identifying increases in survival. These three methods and the ZDiff CUSUM were applied to data from a study conducted at the Heart Institute (InCor) on patients with heart failure. Over time these patients show greater survival, which may be linked to improved treatment and procedures at the hospital. In conclusion, we suggest that CUSUM charts based on Cox model residuals and the nonparametric log-rank method may be practical alternatives for monitoring survival data.
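A minimal sketch of a ZDiff-style chart, assuming the standardized log-rank statistics Z at each inspection point are already computed; the reference value k and decision limit h below are illustrative tuning constants, not the thesis's calibrated values.

```python
def zdiff_cusum(z_stats, k=0.5, h=4.0):
    """Upper CUSUM whose score at inspection t is the increment
    Z_t - Z_{t-1} of the log-rank statistic (with Z_0 = 0).

    Returns the CUSUM path and the first inspection index at which
    the chart exceeds the decision limit h (None if it never does).
    """
    c = 0.0
    path, alarm = [], None
    z_prev = 0.0
    for t, z in enumerate(z_stats, start=1):
        c = max(0.0, c + (z - z_prev) - k)
        path.append(c)
        if alarm is None and c > h:
            alarm = t
        z_prev = z
    return path, alarm

# A steadily drifting Z sequence (illustrative) triggers an alarm:
path, alarm = zdiff_cusum([1.0 * t for t in range(1, 11)])
```

With a stable, in-control Z sequence the chart stays at zero; sustained increments above k accumulate until the limit h is crossed.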
5

Nonparametric tests for interval-censored failure time data via multiple imputation

Huang, Jin-long 26 June 2008 (has links)
Interval-censored failure time data often occur in follow-up studies where subjects can only be observed periodically, so the failure time is only known to lie in an interval. In this paper we consider the problem of comparing two or more interval-censored samples. We propose a multiple imputation method for discrete interval-censored data that imputes exact failure times from the interval-censored observations and then applies existing tests for exact data, such as the log-rank test, to the imputed data. The test statistic and covariance matrix are calculated by our proposed multiple imputation technique. The formula for the covariance matrix estimator is similar to that used by Follmann, Proschan and Leifer (2003) for clustered data. Through simulation studies we find that the performance of the proposed log-rank-type test is comparable to that of the test proposed by Finkelstein (1986), and is better than that of two existing log-rank-type tests proposed by Sun (2001) and Zhao and Sun (2004), owing to differences in the multiple imputation method and the covariance matrix estimation. The proposed method is illustrated with an example involving patients with breast cancer. We also investigate applying our method to other two-sample comparison tests for exact data, such as Mantel's test (1967) and the integrated weighted difference test.
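The imputation idea can be sketched as below: draw an exact failure time uniformly inside each censoring interval, compute an exact-data log-rank Z, and average over imputations. This is a deliberate simplification for illustration — the paper's scheme for discrete data and its Follmann-Proschan-Leifer-style covariance estimator are more involved.

```python
import numpy as np

rng = np.random.default_rng(2)

def logrank_z(times, groups):
    """Two-sample log-rank Z for exact, uncensored failure times."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(times):
        at_risk = times >= t
        n = at_risk.sum()
        n1 = (at_risk & (groups == 1)).sum()
        d = (times == t).sum()
        d1 = ((times == t) & (groups == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

def mi_logrank(left, right, groups, M=20):
    """Average the exact-data log-rank Z over M imputations, drawing a
    failure time uniformly inside each censoring interval (L, R].

    Simplified sketch: the proper MI variance combines between- and
    within-imputation components, which this omits.
    """
    zs = []
    for _ in range(M):
        t = rng.uniform(left, right)
        zs.append(logrank_z(t, groups))
    return float(np.mean(zs))

# Fabricated interval-censored data under the null of equal groups:
left = rng.exponential(1.0, size=30)
right = left + rng.uniform(0.1, 1.0, size=30)
groups = np.array([0] * 15 + [15] * 0 + [1] * 15)
z = mi_logrank(left, right, groups)
```

Under the null the averaged Z stays near zero; a location shift between groups would push it away from zero.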
6

Determining the late effect parameter in the Fleming-Harrington test using asymptotic relative efficiency in cancer immunotherapy clinical trials

Kaneko, Yuichiro 23 January 2024 (has links)
Kyoto University / New doctoral program / Doctor of Medical Science / Degree no. 甲第24998号 (医博第5032号; library call no. 新制||医||1069) / Graduate School of Medicine, Medical Science major / Examining committee: Professors 佐藤 俊哉, 山本 洋介 and 永井 洋士 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / DFAM
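The "late effect parameter" in the title refers to the exponents of the Fleming-Harrington G(ρ, γ) weight function w(t) = S(t-)^ρ (1 - S(t-))^γ for the weighted log-rank test; γ > 0 downweights early events and emphasizes late differences, the setting a delayed immunotherapy benefit calls for. A sketch of the weights, assuming no tied event times:

```python
import numpy as np

def fh_weights(times, events, rho=0.0, gamma=1.0):
    """Fleming-Harrington G(rho, gamma) weights at each failure time.

    Weights are S(t-)^rho * (1 - S(t-))^gamma with S the pooled
    Kaplan-Meier estimate evaluated just before t.
    """
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    s, w = 1.0, []
    for i in range(n):
        s_prev = s
        if events[i]:
            s *= 1 - 1 / (n - i)   # KM step (assumes no tied times)
        w.append(s_prev ** rho * (1 - s_prev) ** gamma)
    return times, np.array(w)

# Ten uncensored failure times: gamma = 1 gives weights that grow
# from 0 (earliest event) toward 1 (latest events).
t_ev, w = fh_weights(np.arange(1.0, 11.0), np.ones(10, dtype=int))
```

Setting ρ = γ = 0 recovers the ordinary log-rank test, which is why the choice of γ is the design decision the thesis tunes via asymptotic relative efficiency.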
7

A Study of the Calibration Regression Model with Censored Lifetime Medical Cost

Lu, Min 03 August 2006 (has links)
Medical cost has received increasing interest recently in biostatistics and public health. Statistical analysis and inference for lifetime medical cost are made challenging by the fact that survival times are censored for some study subjects, whose subsequent costs are therefore unknown. Huang (2002) proposed the calibration regression model, a semiparametric regression tool for studying medical cost in association with covariates. In this thesis, an inference procedure is investigated using the empirical likelihood ratio method. Unadjusted and adjusted empirical likelihood confidence regions are constructed for the regression parameters. We compare the proposed empirical likelihood methods with the normal-approximation-based method. Simulation results show that the proposed empirical likelihood ratio method outperforms the normal-approximation-based method in terms of coverage probability. In particular, the adjusted empirical likelihood performs best, overcoming the undercoverage problem.
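The empirical likelihood machinery can be illustrated in one dimension for a mean (Owen's construction); the thesis applies the analogous ratio to the calibration model's regression parameters. The confidence region is the set of values where -2 log R stays below a chi-square quantile.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean (Owen).

    Solve for the Lagrange multiplier t in sum(z_i / (1 + t z_i)) = 0
    with z_i = x_i - mu, then evaluate 2 * sum(log(1 + t z_i)).
    """
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                  # mu outside the convex hull
    def score(t):
        return np.sum(z / (1 + t * z))
    eps = 1e-10                        # keep all weights positive
    t = brentq(score, -1 / z.max() + eps, -1 / z.min() - eps)
    return 2 * np.sum(np.log1p(t * z))

rng = np.random.default_rng(3)
x = rng.normal(size=50)
r0 = el_log_ratio(x, x.mean())         # ratio is ~0 at the sample mean
```

The adjusted version studied in the thesis modifies this ratio to correct the undercoverage of the plain chi-square calibration in small samples.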
8

Test de type log-rank pour l'évolution de la qualité de vie liée à la santé / A log-rank-type test for the evolution of health-related quality of life

Boisson, Véronique 03 December 2008 (has links) (PDF)
Longitudinal epidemiological studies of quality of life (QoL) have grown in recent years, especially for chronic diseases for which no curative treatment exists. The objective of these studies is health surveillance encompassing both QoL and survival. Such surveillance relies on comparing the longitudinal evolution of QoL between groups of patients. We therefore developed a global log-rank-type test for the longitudinal evolution of QoL with respect to a QoL degradation threshold for two groups of patients.
Typically in these studies, patients complete QoL questionnaires from which their QoL scores are computed. The evolution of QoL is expressed through the concept of QoL degradation. A critical degradation threshold x can be fixed, and patients are considered degraded if their QoL score exceeds x. We extend the partial likelihood score statistic to take into account a previously fixed threshold x and show that the normalized score process converges to a centered Gaussian process. The threshold x is then allowed to vary. Using empirical process theory, we prove convergence in distribution of the normalized score statistic to a Gaussian process. This work yields, when the threshold x varies, a log-rank-type test for comparing the longitudinal evolution of QoL degradation between two groups of patients.
Simulations and an application to a cohort of HIV-infected patients are presented.
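A toy version of the threshold-scanning idea: declare a patient degraded once the QoL score exceeds x, and take a supremum over thresholds of a standardized group difference. The proportion-based statistic below is an illustrative stand-in for the thesis's normalized score process; all data and thresholds are fabricated.

```python
import numpy as np

def time_to_degradation(scores, times, x):
    """First visit time at which the QoL score exceeds threshold x
    (np.inf if the patient never degrades during follow-up)."""
    hit = np.nonzero(np.asarray(scores) > x)[0]
    return times[hit[0]] if hit.size else np.inf

def sup_degradation_stat(group_a, group_b, times, thresholds):
    """Supremum over thresholds x of a standardized difference in
    degradation proportions between two groups (toy statistic)."""
    na, nb = len(group_a), len(group_b)
    stats = []
    for x in thresholds:
        pa = np.mean([np.isfinite(time_to_degradation(s, times, x)) for s in group_a])
        pb = np.mean([np.isfinite(time_to_degradation(s, times, x)) for s in group_b])
        p = (na * pa + nb * pb) / (na + nb)
        se = max(np.sqrt(p * (1 - p) * (1 / na + 1 / nb)), 1e-12)
        stats.append(abs(pa - pb) / se)
    return max(stats)

# Visit times in months and per-patient QoL score trajectories:
times = np.array([0, 6, 12, 18])
group_a = [[40, 45, 50, 55], [30, 35, 40, 42]]
group_b = [[40, 60, 70, 80], [50, 65, 75, 85]]
s_sup = sup_degradation_stat(group_a, group_b, times, thresholds=[50, 60, 70])
```

Because the maximizing threshold is data-driven, the thesis derives the limiting Gaussian process of the score statistic rather than using a single-x reference distribution.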
9

The Use of Net Benefit in Modeling Non-Proportional Hazards

Alharbi, Abdulwahab 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Background: The hazard ratio (HR), the usual quantitative estimate of treatment effect in survival analysis, measures the instantaneous relative difference in failure risk between two groups. The HR is typically assumed to be independent of time; however, this assumption is often violated in practice. If the proportionality assumption holds, the HR can be validly estimated with the popular Cox proportional hazards model. When hazards are not proportional, the Wilcoxon-Gehan test has been proposed to test the hypothesis of no difference. This test has recently been generalized to evaluate differences in survival time greater than a margin Δ (the "net survival benefit"). Method: In this thesis, an attempt is made to illustrate the properties of the generalized Wilcoxon-Gehan tests proposed by Buyse (2009). We use the concept of net survival benefit to re-analyze the Gastrointestinal Tumor Study Group trial (1982) comparing chemotherapy versus combined chemotherapy and radiation in the treatment of locally unresectable gastric cancer; survival times in days were recorded for the 45 patients in each treatment arm. In that trial a delayed treatment effect was observed, so the HR is non-proportional. To provide a flexible assessment of the treatment effect, the net survival benefit was also computed on datasets simulated under typical scenarios of non-proportional hazards, such as a delayed treatment effect. Results: The generalized Wilcoxon statistic U favored not adding radiation to chemotherapy, but only for survival up to 12 months. At Δ = 0, U(0) = 491; in the simulated data sets the confidence interval under the null hypothesis is (-152, 388), and the observed 491 falls outside this interval, indicating a significant treatment difference. At Δ = 12 months, U(12) = 219, which is inside the confidence interval of no treatment effect (-154, 268), indicating the benefit of chemotherapy alone is gone after 12 months.
Conclusions: The net survival benefit measured via Buyse's generalized Wilcoxon statistic is a measure of treatment effect that is meaningful whether or not hazards are proportional. The associated statistical test is more powerful than the standard log-rank test when a delayed treatment effect is anticipated.
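The statistic U(Δ) can be sketched as a generalized Gehan scoring over all (treatment, control) pairs; the censoring rule below is the minimal "demonstrable win" convention, a simplified reading of the generalized test rather than its exact definition.

```python
def net_benefit_u(t_trt, d_trt, t_ctl, d_ctl, delta=0.0):
    """Generalized Gehan-Wilcoxon U at margin delta.

    Scores +1 for a (treatment, control) pair in which the treatment
    patient demonstrably survives the control patient by more than
    delta, -1 for the reverse, and 0 when censoring makes the pair
    indeterminate. d_* are event indicators (1 = death observed).
    """
    u = 0
    for ti, di in zip(t_trt, d_trt):
        for tj, dj in zip(t_ctl, d_ctl):
            if dj == 1 and ti - tj > delta:
                u += 1          # control died; treatment outlived by > delta
            elif di == 1 and tj - ti > delta:
                u -= 1          # treatment died; control outlived by > delta
    return u

# Tiny fabricated example: both treatment patients outlive both controls.
u = net_benefit_u([10, 12], [1, 1], [1, 2], [1, 1], delta=0.0)
```

Raising Δ asks for a larger demonstrated survival advantage per pair, which is how the analysis above distinguishes a benefit at Δ = 0 from one at Δ = 12 months.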
10

Identification of factors affecting the survival lifetime of HIV positive terminal patients in Albert Luthuli municipality of South Africa

Bengura, Pepukai 19 December 2019 (has links)
The objective of the study was to identify factors that affect the survival lifetime of HIV-positive terminal patients in rural district hospitals of Albert Luthuli municipality in the Mpumalanga province of South Africa. A cohort of HIV-positive terminal patients was retrospectively followed from 2010 to 2017 until a patient died, was lost to follow-up, or was still alive at the end of the observation period. Nonparametric and semiparametric survival analysis methods were used to analyse the data. Through Cox proportional hazards regression modelling, it was found that ART adherence (poor, fair, good), age, follow-up mass, baseline sodium, baseline viral load, a follow-up CD4 count by treatment (Regimen 1) interaction, and a follow-up lymphocyte by TB history (yes, no) interaction had significant effects on the survival lifetime of HIV-positive terminal patients (p-values < 0.1). Furthermore, through quantile regression modelling, it was found that short, medium and long survival times of HIV-positive patients, represented by the 0.1, 0.5 and 0.9 quantiles respectively, were not necessarily significantly affected by the same factors. / Statistics / M. Sc. (Statistics)
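The nonparametric starting point of such an analysis is the Kaplan-Meier estimator of the survival function for right-censored follow-up data; a compact sketch:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate for right-censored data.

    times:  follow-up times; events: 1 = death observed, 0 = censored
    (lost to follow-up or alive at the end of observation).
    Returns the distinct event times and S(t) just after each.
    """
    order = np.argsort(times)
    times, events = times[order], events[order]
    uniq = np.unique(times[events == 1])
    s, surv = 1.0, []
    for t in uniq:
        n_risk = (times >= t).sum()            # patients still at risk
        d = ((times == t) & (events == 1)).sum()  # deaths at time t
        s *= 1 - d / n_risk
        surv.append(s)
    return uniq, np.array(surv)

t_ev, surv = kaplan_meier(np.array([1.0, 2.0, 3.0, 4.0]),
                          np.array([1, 1, 1, 1]))
```

Covariate effects such as ART adherence then enter through the semiparametric Cox model, which models the hazard rather than S(t) directly.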