1 |
Nonparametric statistical procedures for therapeutic clinical trials with survival endpoints. Luo, Yingchun (02 August 2007)
This thesis proposes two nonparametric statistical tests, based on the Kolmogorov-Smirnov distance and the L2 Mallows distance.
To implement the proposed tests, the nonparametric bootstrap method is employed to approximate the distributions of the test statistics and to construct the corresponding bootstrap confidence interval procedures. Monte Carlo simulations are performed to investigate the actual type I error of the proposed bootstrap procedures. It is found that the type I error of the bootstrap BC confidence interval procedure is close to the nominal level when censoring is not heavy, and that the bootstrap percentile confidence interval procedure works well when the Kolmogorov-Smirnov distance is used to characterize the equivalence. When the data are heavily censored, the procedures based on the Kolmogorov-Smirnov distance have very conservative type I errors, while the procedures based on the Mallows distance are very liberal. / Thesis (Ph.D., Mathematics & Statistics) -- Queen's University, 2007.
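As a rough illustration of the percentile bootstrap machinery, the sketch below computes a percentile confidence interval for the Kolmogorov-Smirnov distance between two arms. It is a minimal numpy version only: unlike the thesis's procedures it ignores censoring entirely and uses plain empirical survival curves rather than Kaplan-Meier estimates, and the exponential arms are hypothetical.

```python
import numpy as np

def surv_on_grid(x, grid):
    """Empirical survival function S(t) = P(X > t) on a grid of time points."""
    xs = np.sort(x)
    return 1.0 - np.searchsorted(xs, grid, side="right") / xs.size

def ks_distance(x, y, grid):
    """Kolmogorov-Smirnov distance between two empirical survival curves."""
    return np.abs(surv_on_grid(x, grid) - surv_on_grid(y, grid)).max()

def bootstrap_percentile_ci(x, y, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the KS distance between two arms."""
    rng = np.random.default_rng(seed)
    grid = np.sort(np.concatenate([x, y]))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)   # resample each arm
        yb = rng.choice(y, size=y.size, replace=True)
        stats[b] = ks_distance(xb, yb, grid)
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 200)   # arm A event times (hypothetical)
y = rng.exponential(1.5, 200)   # arm B event times (hypothetical)
lo, hi = bootstrap_percentile_ci(x, y)
```

An equivalence test along these lines rejects when the whole interval lies below a prespecified equivalence margin.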
|
2 |
Efficiency of an Unbalanced Design in Collecting Time to Event Data with Interval Censoring. Cheng, Peiyao (10 November 2016)
In longitudinal studies, the exact timing of an event often cannot be observed and is usually detected at a subsequent visit; this is called interval censoring. Spacing of the visits is important when designing a study with interval-censored data. In a typical longitudinal study, the spacing of visits is the same across all subjects (a balanced design). In this dissertation, I propose an unbalanced design: subjects at baseline are divided into a high risk group and a low risk group based on a risk factor, and the subjects in the high risk group are followed more frequently than those in the low risk group. Using a simple setting of a single binary exposure of interest (covariate) and exponentially distributed survival times, I derive an explicit formula for the asymptotic sampling variance of the estimate of the covariate effect. It shows that the asymptotic sampling variance can be reduced simply by increasing the number of examinations in the high risk group. The relative reduction tends to be greater when the baseline hazard rate in the high risk group is much higher than that in the low risk group, and larger when the frequency of assessments in the low risk group is relatively sparse. Numerical simulations are also used to verify the asymptotic results in small samples and to evaluate the efficiency of the unbalanced design in more complicated settings. Beyond comparing the asymptotic sampling variances, I further evaluate the power and empirical Type I error of the unbalanced design against the traditional balanced design. Data from a randomized clinical trial for type 1 diabetes are further used to test the performance of the proposed unbalanced design, and the parametric analyses of these data confirmed the findings from the theoretical and numerical studies.
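The interval-censored exponential likelihood underlying this setup can be sketched directly: each subject contributes P(L < T <= R) = exp(-lambda*L) - exp(-lambda*R), with R = infinity for subjects still event-free at the last visit. The equally spaced visit schedule below is hypothetical and the maximization is a crude grid search, not the design's analytic derivation.

```python
import numpy as np

def interval_loglik(lam, left, right):
    """Exponential(rate=lam) log-likelihood under interval censoring:
    each subject contributes P(left < T <= right); right = inf means the
    event was never observed (right-censored at the last visit)."""
    p = np.exp(-lam * left) - np.where(np.isinf(right), 0.0, np.exp(-lam * right))
    return np.log(np.clip(p, 1e-300, None)).sum()

def mle_rate(left, right):
    grid = np.linspace(0.01, 5.0, 2000)    # crude grid search for the MLE
    ll = [interval_loglik(lam, left, right) for lam in grid]
    return grid[int(np.argmax(ll))]

# True rate 1.0; hypothetical visits every 0.5 time units up to 3.0
rng = np.random.default_rng(0)
t = rng.exponential(1.0, 500)
visits = np.arange(0.5, 3.5, 0.5)
left = np.array([visits[visits < ti].max(initial=0.0) for ti in t])
right = np.array([visits[visits >= ti].min(initial=np.inf) for ti in t])
lam_hat = mle_rate(left, right)
```

Refining the visit schedule for a high-risk subgroup amounts to narrowing the (left, right] brackets for those subjects, which is what drives the variance reduction discussed above.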
|
3 |
A joint model of an internal time-dependent covariate and bivariate time-to-event data with an application to muscular dystrophy surveillance, tracking and research network data. Liu, Ke (01 December 2015)
Joint modeling of a single event time response with a longitudinal covariate dates back to the 1990s. The three basic types of joint modeling formulations are selection models, pattern mixture models and shared parameter models. The shared parameter models are the most widely used. One type of shared parameter model (Joint Model I) utilizes unobserved random effects to jointly model a longitudinal sub-model and a survival sub-model to assess the impact of an internal time-dependent covariate on the time-to-event response.
Motivated by the Muscular Dystrophy Surveillance, Tracking and Research Network (MD STARnet), we constructed a new model (Joint Model II), to jointly analyze correlated bivariate time-to-event responses associated with an internal time-dependent covariate in the Frequentist paradigm. This model exhibits two distinctive features: 1) a correlation between bivariate time-to-event responses and 2) a time-dependent internal covariate in both survival models. Developing a model that sufficiently accommodates both characteristics poses a challenge. To address this challenge, in addition to the random variables that account for the association between the time-to-event responses and the internal time-dependent covariate, a Gamma frailty random variable was used to account for the correlation between the two event time outcomes. To estimate the model parameters, we adopted the Expectation-Maximization (EM) algorithm. We built a complete joint likelihood function with respect to both latent variables and observed responses. The Gauss-Hermite quadrature method was employed to approximate the two-dimensional integrals in the E-step of the EM algorithm, and the maximum profile likelihood type of estimation method was implemented in the M-step. The bootstrap method was then applied to estimate the standard errors of the estimated model parameters. Simulation studies were conducted to examine the finite sample performance of the proposed methodology. Finally, the proposed method was applied to MD STARnet data to assess the impact of shortening fractions and steroid use on the onsets of scoliosis and mental health issues.
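The Gauss-Hermite approximation used in the E-step can be illustrated in one dimension (the two-dimensional integrals above correspond to a tensor product of such rules). This sketch assumes a Gaussian random effect and uses numpy's standard Hermite nodes and weights; it is not the model's actual E-step.

```python
import numpy as np

def gh_expectation(f, mu=0.0, sigma=1.0, n_nodes=20):
    """E[f(B)] for B ~ Normal(mu, sigma^2) via Gauss-Hermite quadrature.
    hermgauss gives nodes/weights for integrals against exp(-x^2); the
    substitution b = mu + sqrt(2)*sigma*x converts that to a Gaussian."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    return float((w * f(mu + np.sqrt(2.0) * sigma * x)).sum() / np.sqrt(np.pi))

# Check against a known moment: E[exp(B)] = exp(1/2) for B ~ N(0, 1)
approx = gh_expectation(np.exp)
```

In a joint model E-step, f would be the conditional likelihood contribution of the latent random effects rather than a simple exponential.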
|
4 |
A Joint Model of Longitudinal Data and Time to Event Data with Cured Fraction. Panneerselvam, Ashok (January 2010)
No description available.
|
5 |
Estimating Companies’ Survival in Financial Crisis: Using the Cox Proportional Hazards Model. Andersson, Niklas (January 2014)
This master's thesis aims to answer the question: What is the contribution of a company's sector to its survival of a financial crisis? with the sub-question: Can survival analysis be applied to financial data to answer this? Survival analysis is thus used to answer the main question, even though it is seldom applied to financial data. This is interesting because it examines how well survival analysis works on financial data while also evaluating whether all companies experience a financial crisis in the same way. The dataset consists of all companies traded on the Swedish stock market during 2008. The results show that the survival method is well suited to the data used. The sector in which a company operated has a significant effect; however, the power is too low to give any indication of specific differences between the sectors. It is also found that the group of smallest companies had much better survival than larger companies.
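For readers unfamiliar with the Cox model, the sketch below shows the partial likelihood it maximizes, fitted by a crude grid search on simulated data with a hypothetical binary "sector" indicator. This is an illustration of the method only, not the thesis's dataset or estimation code.

```python
import numpy as np

def cox_partial_loglik(beta, time, event, z):
    """Cox partial log-likelihood for a single covariate (no tied times):
    each observed event contributes its log relative risk within the set
    of subjects still at risk at that event time."""
    order = np.argsort(time)
    time, event, z = time[order], event[order], z[order]
    ll = 0.0
    for i in range(len(time)):
        if event[i]:
            ll += beta * z[i] - np.log(np.exp(beta * z[i:]).sum())
    return ll

rng = np.random.default_rng(2)
n = 300
z = rng.integers(0, 2, n).astype(float)   # hypothetical sector indicator
t = rng.exponential(np.exp(-0.7 * z))     # true log hazard ratio = 0.7
c = rng.exponential(2.0, n)               # independent censoring times
time, event = np.minimum(t, c), (t <= c).astype(int)

grid = np.linspace(-2.0, 2.0, 401)
beta_hat = grid[int(np.argmax([cox_partial_loglik(b, time, event, z) for b in grid]))]
```

The estimate beta_hat recovers the log hazard ratio without any model for the baseline hazard, which is exactly why the Cox model transfers so easily to non-medical data such as company lifetimes.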
|
6 |
Bivariate Generalization of the Time-to-Event Conditional Reassessment Method with a Novel Adaptive Randomization Method. Yan, Donglin (2018)
Phase I clinical trials in oncology aim to evaluate the toxicity risk of new therapies and to identify a safe but also effective dose for future studies. Traditional Phase I trials of chemotherapies focus on estimating the maximum tolerated dose (MTD). The rationale for finding the MTD is that better therapeutic effects are expected at higher dose levels as long as the risk of severe toxicity is acceptable. With the advent of a new generation of cancer treatments such as molecularly targeted agents (MTAs) and immunotherapies, higher dose levels no longer guarantee increased therapeutic effects, and the focus has shifted to estimating the optimal biological dose (OBD): the dose level with the highest biological activity and acceptable toxicity. The search for the OBD requires joint evaluation of toxicity and efficacy. Although several seamless phase I/II designs have been published in recent years, there is no consensus regarding an optimal design, and further improvement is needed before some designs can be widely used in practice.
In this dissertation, we propose a modification to an existing seamless phase I/II design by Wages and Tait (2015) for locating the OBD based on binary outcomes, and extend it to time-to-event (TITE) endpoints. While the original design showed promising results, we hypothesized that performance could be improved by replacing the original adaptive randomization stage with a different randomization strategy. We propose calculating dose-assignment probabilities by averaging all candidate models that fit the observed data reasonably well, as opposed to the original design, which bases all calculations on one best-fit model. We propose three different strategies for selecting and averaging among candidate models, and use simulations to compare them with the original design. Under most scenarios, one of the proposed strategies allocates more patients to the optimal dose while improving accuracy in selecting the final optimal dose without increasing the overall risk of toxicity.
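A toy numpy sketch of the model-averaging idea, with hypothetical skeletons and counts (not those used in the dissertation): each candidate model is weighted by prior times likelihood, and the dose-response curve is averaged across models rather than taken from the single best-fit model.

```python
import numpy as np

# Hypothetical candidate efficacy skeletons: assumed response probability
# at each of four doses under different orderings (plateau / umbrella shapes)
skeletons = np.array([
    [0.20, 0.40, 0.60, 0.60],   # plateau from dose 3 on
    [0.20, 0.40, 0.60, 0.40],   # peak at dose 3
    [0.40, 0.60, 0.40, 0.20],   # peak at dose 2
])
prior = np.full(len(skeletons), 1.0 / len(skeletons))

n = np.array([6, 6, 6, 3])      # patients treated at each dose (hypothetical)
y = np.array([1, 2, 4, 1])      # responses observed at each dose

# Binomial log-likelihood of the observed data under each candidate model
loglik = (y * np.log(skeletons) + (n - y) * np.log(1.0 - skeletons)).sum(axis=1)
weights = prior * np.exp(loglik - loglik.max())
weights /= weights.sum()

# Average the response curve over models rather than using the best fit only
averaged = weights @ skeletons
best_dose = int(np.argmax(averaged))
```

Randomization probabilities can then be derived from the averaged curve, which smooths over the model-selection uncertainty that a single best-fit skeleton ignores.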
We further extend this design to TITE endpoints to address the potential issue of delayed outcomes. The original design is most appropriate when both toxicity and efficacy outcomes can be observed shortly after treatment, but delayed outcomes are common, especially for efficacy endpoints. The motivating example for this TITE extension is a Phase I/II study evaluating optimal dosing of all-trans retinoic acid (ATRA) in combination with a fixed dose of daratumumab in the treatment of relapsed or refractory multiple myeloma. The toxicity endpoint is observed within one cycle of therapy (i.e., 4 weeks), while the efficacy endpoint is assessed after 8 weeks of treatment. The difference in endpoint observation windows causes logistical challenges in conducting the trial, since it is not acceptable in practice to wait until both outcomes have been observed for each participant before sequentially assigning the dose of a newly eligible participant. The result would be a delay in treatment for patients and an undesirably long trial duration. To address this issue, we generalize the time-to-event continual reassessment method (TITE-CRM) to bivariate outcomes with a potentially non-monotonic dose-efficacy relationship. Simulation studies show that the proposed TITE design maintains a similar probability of selecting the correct OBD compared to the original binary-outcome design, but the number of patients treated at the OBD decreases as the rate of enrollment increases.
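The standard TITE device, Cheung and Chappell's weighted likelihood, can be sketched as follows for a single dose level; the follow-up times and 4-week window below are hypothetical, and the grid-search fit stands in for the CRM's model-based estimation.

```python
import numpy as np

def tite_weights(followup, window):
    """Linear TITE weight: fraction of the toxicity window completed."""
    return np.clip(np.asarray(followup, dtype=float) / window, 0.0, 1.0)

def weighted_loglik(p, tox, w):
    """Cheung-Chappell weighted likelihood for one dose's toxicity
    probability p: an observed toxicity contributes log p (its weight is
    1 by then); a pending patient contributes log(1 - w * p), discounting
    incomplete follow-up."""
    tox = np.asarray(tox)
    return float(np.where(tox == 1, np.log(p), np.log(1.0 - w * p)).sum())

tox = np.array([0, 0, 1, 0])                         # hypothetical outcomes
w = tite_weights([4.0, 2.0, 4.0, 1.0], window=4.0)   # weeks followed so far

grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[int(np.argmax([weighted_loglik(p, tox, w) for p in grid]))]
```

Because pending patients contribute partial information instead of blocking accrual, a new participant can be dosed before earlier participants finish their observation windows, which is exactly the logistical problem described above.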
We also develop an R package for the proposed methods and document the R functions used in this research. The functions in this package assist implementation of the proposed randomization strategy and design. The input and output formats of these functions follow those of existing R packages such as "dfcrm" and "pocrm" to allow direct comparison of results. Input parameters include efficacy skeletons, prior distributions of model parameters, escalation restrictions, the design method, and observed data. Output includes the recommended dose level for the next patient, the MTD, estimated model parameters, and estimated probabilities for each set of skeletons. Simulation functions are included so that the proposed methods can be used to design a trial under given parameters and to assess performance. Scenario parameters include total sample size, the true dose-toxicity relationship, the true dose-efficacy relationship, the patient recruitment rate, and delays in toxicity and efficacy responses.
|
7 |
Marginal Methods for Multivariate Time to Event Data. Wu, Longyang (05 April 2012)
This thesis considers a variety of statistical issues related to the design and analysis of clinical trials involving multiple
lifetime events. The use of composite endpoints, multivariate survival methods with dependent censoring, and
recurrent events with dependent termination are considered. Much of this work is based on problems arising in oncology research.
Composite endpoints are routinely adopted in multi-centre randomized trials designed to evaluate the effect of
experimental interventions in cardiovascular disease, diabetes, and cancer. Despite their widespread use, relatively
little attention has been paid to the statistical properties of estimators of treatment effect based on composite
endpoints. In Chapter 2 we consider this issue in the context of multivariate models for time to event data in which copula
functions link marginal distributions with a proportional hazards structure. We then examine the asymptotic and
empirical properties of the estimator of treatment effect arising from a Cox regression model for the time to the
first event. We point out that even when the treatment effect is the same for the component events, the limiting value
of the estimator based on the composite endpoint is usually inconsistent for this common value. The limiting value
is determined by the degree of association between the events, the stochastic ordering of events, and the censoring
distribution. Within the framework adopted, marginal methods for the analysis of multivariate failure time data
yield consistent estimators of treatment effect and are therefore preferred. We illustrate the methods by application
to a recent asthma study.
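One concrete copula choice for linking exponential margins is the Clayton family; the sketch below is an illustration only (the thesis's framework is more general). For the Clayton copula, Kendall's tau equals theta/(theta + 2), and since tau is invariant to monotone marginal transforms, the simulated event times recover it.

```python
import numpy as np

def clayton_pair(n, theta, rng):
    """Sample (U1, U2) from a Clayton copula by conditional inversion."""
    u1, v = rng.uniform(size=n), rng.uniform(size=n)
    u2 = (u1 ** (-theta) * (v ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u1, u2

def kendall_tau(x, y):
    """Kendall's tau by brute-force pairwise comparison (O(n^2))."""
    s = 0.0
    for i in range(len(x)):
        s += np.sum(np.sign(x[i] - x[i + 1:]) * np.sign(y[i] - y[i + 1:]))
    return 2.0 * s / (len(x) * (len(x) - 1))

rng = np.random.default_rng(3)
theta = 2.0                  # Clayton: Kendall's tau = theta / (theta + 2) = 0.5
u1, u2 = clayton_pair(2000, theta, rng)
t1 = -np.log(u1)             # exponential margin, hazard 1 (first component event)
t2 = -np.log(u2) / 1.5       # exponential margin, hazard 1.5 (second component event)
tau = kendall_tau(t1, t2)
```

Fitting a Cox model to min(t1, t2), the composite endpoint, on data generated this way is exactly the setting in which the chapter shows the limiting estimate depends on the association and the censoring distribution.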
While there is considerable potential for more powerful tests of treatment effect when marginal methods are used,
it is possible that problems related to dependent censoring can arise.
This happens when the occurrence of one type of event increases the risk of withdrawal from a study
and hence alters the probability of observing events of other types.
The purpose of Chapter 3 is to formulate a model which reflects this type of mechanism, to evaluate
the effect on the asymptotic and finite sample properties of marginal estimates, and to examine the
performance of estimators obtained using flexible inverse probability weighted marginal estimating
equations. Data from a motivating study are used for illustration.
Clinical trials are often designed to assess the effect of therapeutic interventions on occurrence of recurrent events in
the presence of a dependent terminal event such as death. Statistical methods based on multistate analysis have considerable appeal in this setting since they can incorporate changes in risk with each event occurrence, a dependence between the recurrent event and
the terminal event and event-dependent censoring. To date, however, there has been limited methodology for the design of
trials involving recurrent and terminal events, and we address this in Chapter 4. Based on the asymptotic distribution of regression coefficients from a multiplicative intensity Markov regression model, we derive sample size formulae to address power requirements for both the recurrent and terminal event processes. Superiority and non-inferiority trial designs are dealt with. Simulation studies confirm that the designs satisfy the nominal power requirements in both settings, and an application to a trial evaluating the effect of a bisphosphonate on skeletal complications is given for illustration.
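For context, the classical Schoenfeld formula for a single time-to-first-event endpoint (not the multistate sample-size formulae derived in the thesis) gives the number of events needed for a log-rank test, and fits in a few lines of standard-library Python.

```python
import math
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld's formula: events needed for a two-sided level-alpha
    log-rank test to detect hazard ratio `hr` with the given power, where
    `alloc` is the proportion of subjects randomized to one arm."""
    za = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    zb = NormalDist().inv_cdf(power)
    return math.ceil((za + zb) ** 2 / (alloc * (1.0 - alloc) * math.log(hr) ** 2))

d = required_events(0.70)   # events needed to detect HR 0.70 at 80% power
```

Designs for recurrent plus terminal event processes, as in Chapter 4, must instead budget power jointly across both intensity processes, which is why a single-event formula like this one is insufficient there.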
|
9 |
Joint modeling of longitudinal and time to event data with application to tuberculosis research. Nigrini, Sharday (January 2021)
Because tuberculosis (TB) is among the ten diseases with the highest mortality rates in Africa, a crucial objective is to find appropriate medication to cure patients and to prevent people from contracting the disease. Since mortality is not improving sufficiently, there is a clear need for new anti-TB drugs. One of the main challenges in developing new and effective drugs for the treatment of TB is identifying combinations of effective drugs when subsequent testing of patients in pivotal clinical trials is performed. During the early weeks of TB treatment, early bactericidal activity trials assess the decline in colony-forming unit (CFU) count of Mycobacterium tuberculosis in the sputum of patients with smear-microscopy-positive pulmonary TB. A previously published dataset containing CFU counts of treated patients over 56 days is used to perform joint modeling of the nonlinear longitudinal data and the patients' time to sputum culture conversion (the time-to-event outcome). The results show a clear association between the longitudinal and time-to-event outcomes. / Mini Dissertation (MSc (Advanced Data Analytics))--University of Pretoria, 2021. / South African Medical Research Council (SAMRC) / Statistics / MSc (Advanced Data Analytics) / Restricted
|
10 |
Applications of Time to Event Analysis in Clinical Data. Xu, Chenjia (12 1900)
Indiana University-Purdue University Indianapolis (IUPUI) / Survival analysis has broad applications in diverse research areas. In this dissertation, we consider an innovative application of the survival analysis approach to phase I dose-finding design and to the modeling of multivariate survival data. In the first part of the dissertation, we apply time to event analysis in an innovative dose-finding design. To account for the unique features of a new class of oncology drugs, T-cell engagers, we propose a phase I dose-finding method incorporating systematic intra-subject dose escalation. We utilize a survival analysis approach to analyze intra-subject dose-escalation data and to identify the maximum tolerated dose. We evaluate the operating characteristics of the proposed design through simulation studies and compare it to existing methodologies. The second part of the dissertation focuses on multivariate survival data with semi-competing risks. Time-to-event data from the same subject are often correlated. In addition, semi-competing risks are sometimes present with correlated events, when a terminal event can censor other non-terminal events but not vice versa. We use a semiparametric frailty model to account for the dependence between correlated survival events and semi-competing risks, and adopt a penalized partial likelihood (PPL) approach for parameter estimation. In addition, we investigate methods for variable selection in semiparametric frailty models and propose a double penalized partial likelihood (DPPL) procedure for variable selection of fixed effects in frailty models. We consider two penalty functions: the least absolute shrinkage and selection operator (LASSO) and the smoothly clipped absolute deviation (SCAD) penalty. The proposed methods are evaluated in simulation studies and illustrated using data from the Indianapolis-Ibadan Dementia Project.
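The two penalty functions named above can be written down directly. The soft-thresholding operator shown alongside is the generic building block of LASSO coordinate descent, included only for illustration; the dissertation's DPPL procedure penalizes the partial likelihood rather than a least-squares objective.

```python
def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty (Fan and Li, 2001): behaves like the LASSO penalty
    near zero, then tapers off so that large effects are not over-shrunk.
    a = 3.7 is the conventional default."""
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2.0 * a * lam * t - t * t - lam * lam) / (2.0 * (a - 1.0))
    return lam * lam * (a + 1.0) / 2.0

def soft_threshold(z, lam):
    """Soft-thresholding operator used in LASSO coordinate descent:
    shrinks z toward zero and sets small values exactly to 0."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```

Setting coefficients exactly to zero is what makes both penalties perform variable selection; SCAD's flat tail beyond a*lam is what removes the bias the LASSO induces on large effects.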
|