  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Bivariate Generalization of the Time-to-Event Conditional Reassessment Method with a Novel Adaptive Randomization Method

Yan, Donglin 01 January 2018 (has links)
Phase I clinical trials in oncology aim to evaluate the toxicity risk of new therapies and identify a dose that is safe but also effective for future studies. Traditional Phase I trials of chemotherapies focus on estimating the maximum tolerated dose (MTD). The rationale for finding the MTD is that better therapeutic effects are expected at higher dose levels as long as the risk of severe toxicity is acceptable. With the advent of a new generation of cancer treatments such as molecularly targeted agents (MTAs) and immunotherapies, higher dose levels no longer guarantee greater therapeutic effects, and the focus has shifted to estimating the optimal biological dose (OBD): the dose level with the highest biologic activity and acceptable toxicity. The search for the OBD requires joint evaluation of toxicity and efficacy. Although several seamless phase I/II designs have been published in recent years, there is no consensus on an optimal design, and further improvement is needed before some designs can be widely used in practice. In this dissertation, we propose a modification to an existing seamless phase I/II design by Wages and Tait (2015) for locating the OBD based on binary outcomes, and extend it to time-to-event (TITE) endpoints. While the original design showed promising results, we hypothesized that performance could be improved by replacing the original adaptive randomization stage with a different randomization strategy. We propose to calculate dose-assignment probabilities by averaging all candidate models that fit the observed data reasonably well, as opposed to the original design, which bases all calculations on the single best-fitting model. We propose three strategies for selecting and averaging among candidate models, and use simulations to compare them to the original design.
Under most scenarios, one of the proposed strategies allocates more patients to the optimal dose and improves accuracy in selecting the final optimal dose without increasing the overall risk of toxicity. We further extend this design to TITE endpoints to address the potential issue of delayed outcomes. The original design is most appropriate when both toxicity and efficacy outcomes can be observed shortly after treatment, but delayed outcomes are common, especially for efficacy endpoints. The motivating example for this TITE extension is a Phase I/II study evaluating optimal dosing of all-trans retinoic acid (ATRA) in combination with a fixed dose of daratumumab in the treatment of relapsed or refractory multiple myeloma. The toxicity endpoint is observed within one cycle of therapy (i.e., 4 weeks), while the efficacy endpoint is assessed after 8 weeks of treatment. The difference in observation windows causes logistical challenges in conducting the trial, since it is not acceptable in practice to wait until both outcomes have been observed for each participant before sequentially assigning the dose of a newly eligible participant; the result would be delayed treatment for patients and an undesirably long trial duration. To address this issue, we generalize the time-to-event continual reassessment method (TITE-CRM) to bivariate outcomes with a potentially non-monotonic dose-efficacy relationship. Simulation studies show that the proposed TITE design maintains a probability of selecting the correct OBD similar to that of the original binary-outcome design, but the number of patients treated at the OBD decreases as the rate of enrollment increases. We also develop an R package for the proposed methods and document the R functions used in this research. The functions in this package support implementation of the proposed randomization strategy and design.
The input and output formats of these functions follow the conventions of existing R packages such as "dfcrm" and "pocrm" to allow direct comparison of results. Input parameters include efficacy skeletons, prior distributions of any model parameters, escalation restrictions, the design method, and observed data. Output includes the recommended dose level for the next patient, the MTD, estimated model parameters, and the estimated probability of each set of skeletons. Simulation functions are included so that the proposed methods can be used to design a trial under given parameters and assess its performance. Scenario parameters include the total sample size, the true dose-toxicity and dose-efficacy relationships, the patient recruitment rate, and the delay in toxicity and efficacy responses.
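The model-averaging idea behind the proposed randomization strategy can be illustrated with a short sketch: several candidate efficacy skeletons are weighted by how well each fits the observed binary outcomes, and the weighted skeletons are averaged into dose-assignment probabilities. This is a simplified stand-in, not the actual package: the function name, skeleton values, and data below are invented, and the published design additionally estimates a model parameter per skeleton and restricts randomization to acceptable doses.

```python
import numpy as np

def model_averaged_probs(skeletons, doses_given, responses):
    """Weight each candidate efficacy skeleton by its likelihood under the
    observed binary responses, then average the dose-level efficacy estimates.
    (Illustrative sketch: skeletons are used as-is, with a flat model prior.)"""
    skeletons = np.asarray(skeletons, dtype=float)
    lik = np.ones(len(skeletons))
    for m, skel in enumerate(skeletons):
        p = skel[doses_given]                        # modeled P(efficacy) per patient
        lik[m] = np.prod(np.where(responses == 1, p, 1.0 - p))
    weights = lik / lik.sum()                        # normalized model weights
    avg_eff = weights @ skeletons                    # model-averaged efficacy curve
    return weights, avg_eff / avg_eff.sum()          # randomization probabilities

# three candidate skeletons with different assumed efficacy peaks (hypothetical)
skels = [[0.2, 0.4, 0.6, 0.5], [0.3, 0.6, 0.4, 0.2], [0.1, 0.3, 0.5, 0.7]]
doses = np.array([0, 1, 1, 2, 2, 3])                 # 0-indexed dose assignments
resp = np.array([0, 1, 0, 1, 1, 0])                  # observed efficacy outcomes
w, rand_p = model_averaged_probs(skels, doses, resp)
```

Averaging over all reasonably well-fitting skeletons, rather than committing to the single best-fit model, is what distinguishes the proposed strategies from the original design.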
12

Approximation du calcul de la taille échantillonnale pour les tests à hypothèses multiples lorsque r parmi m hypothèses doivent être significatives / Approximating the sample size calculation for multiple hypothesis tests when r among m hypotheses must be significant

Delorme, Philippe 12 1900 (has links)
Generally, in multiple-endpoint situations we want to reject either all hypotheses or just one of them. More recently, the need has emerged to answer the question: "Can we reject at least r hypotheses?" However, statistical tools for this question are rare in the literature. We therefore developed general power formulas for the most widely used procedures: Bonferroni's, Hochberg's, and Holm's. We also developed an R package for sample size calculation for multiple endpoints when at least r of the m hypotheses must be significant. We restrict ourselves to the case where all variables are continuous, and we present four different situations depending on the structure of the variance-covariance matrix of the data.
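The "at least r of m" power question can be approximated by simulation. The sketch below estimates Monte Carlo power for a Bonferroni procedure under equicorrelated normal endpoints; it is a simplified stand-in for the dissertation's closed-form formulas (known variance, one-sided tests, a common effect size and correlation across endpoints), and all parameter values are hypothetical.

```python
import numpy as np
from statistics import NormalDist

def power_at_least_r(m, r, delta, rho, n, alpha=0.05, sims=20000, seed=1):
    """Monte Carlo power for rejecting at least r of m one-sided hypotheses
    with a Bonferroni correction, under equicorrelated normal test statistics.
    delta is the standardized effect, n the per-arm sample size."""
    rng = np.random.default_rng(seed)
    zcrit = NormalDist().inv_cdf(1 - alpha / m)       # Bonferroni critical value
    # equicorrelated statistics: shared factor plus independent noise
    shared = rng.standard_normal((sims, 1))
    indep = rng.standard_normal((sims, m))
    z = delta * np.sqrt(n / 2) + np.sqrt(rho) * shared + np.sqrt(1 - rho) * indep
    return float(np.mean((z > zcrit).sum(axis=1) >= r))

# e.g. 4 endpoints, require at least 2 significant, effect 0.5 SD, n = 50 per arm
pw = power_at_least_r(m=4, r=2, delta=0.5, rho=0.3, n=50)
```

In a sample size search, this power function would be evaluated over a grid of n until the target power is reached; requiring more of the m hypotheses to be significant (larger r) always lowers the power for a fixed n.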
13

A PREDICTIVE PROBABILITY INTERIM DESIGN FOR PHASE II CLINICAL TRIALS WITH CONTINUOUS ENDPOINTS

Liu, Meng 01 January 2017 (has links)
Phase II clinical trials aim to screen out ineffective therapies and identify effective ones to move forward to randomized phase III trials. Single-arm studies remain the most utilized design in phase II oncology trials, especially in scenarios where a randomized design is simply not practical. Due to concerns about excessive toxicity or ineffective new treatment strategies, interim analyses are typically incorporated into the trial, and the choice of statistical methods depends mainly on the type of primary endpoint. In phase II oncology trials, the most common primary endpoints are tumor response rate (a binary endpoint) and progression-free survival (a time-to-event endpoint). Interim strategies are well developed for both endpoints in single-arm phase II trials. The advent of molecularly targeted therapies, which often have lower toxicity profiles than traditional cytotoxic treatments, has shifted the drug development paradigm toward establishing evidence of biological activity, target modulation, and pharmacodynamic effects in early phase trials. As such, these trials need to address simultaneous evaluation of safety as well as proof of concept of biological marker activity or changes in continuous tumor size instead of binary response rates. In this dissertation, we extend a predictive probability design for binary outcomes in the single-arm setting and develop two interim designs for continuous endpoints, such as continuous tumor shrinkage or change in a biomarker over time. The two-stage design focuses mainly on futility stopping strategies, although it also allows early stopping for efficacy. Both optimal and minimax versions of the two-stage design are presented. The multi-stage design has the flexibility to stop the trial early for either futility or efficacy.
Due to the intensive computation and search strategy we adopt, only the minimax version of the multi-stage design is presented. The multi-stage design allows up to 40 interim looks, with continuous monitoring possible for large and moderate effect sizes requiring an overall sample size of less than 40. The stopping boundaries for both designs are based on predictive probability with a normal likelihood and its conjugate prior distributions, and the designs satisfy pre-specified type I and type II error rate constraints. Simulation results show that, compared with binary-endpoint designs, both designs preserve statistical properties well across different effect sizes with reduced sample sizes. We also develop an R package, PPSC, detailed in chapter four, so that both designs are freely accessible for use in future phase II clinical trials with the collaborative efforts of biostatisticians. Clinical investigators and biostatisticians can specify the parameters of the hypothesis testing framework, the search ranges of the predictive probability boundaries, the number of interim looks, whether continuous monitoring is preferred, and so on.
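The predictive probability machinery for a continuous endpoint can be sketched for a single interim look. Assuming a known variance and a flat prior on the mean (the limiting case of the conjugate normal prior the design uses), the predictive probability that a one-sided z-test will succeed at the final analysis has a simple closed form; the numbers below are hypothetical and this is not the PPSC package's implementation.

```python
from statistics import NormalDist

def predictive_probability(xbar1, n1, n_total, mu0, sigma, alpha=0.05):
    """Predictive probability that a one-sided z-test of H0: mu <= mu0
    succeeds at the final analysis, given the mean of n1 interim observations.
    (Sketch: known sigma, flat prior on mu.)"""
    nd = NormalDist()
    n2 = n_total - n1
    # the final analysis succeeds if the overall mean exceeds this critical value
    crit = mu0 + nd.inv_cdf(1 - alpha) * sigma / n_total ** 0.5
    # predictive distribution of the final mean, centered at the interim mean
    sd_final = (n2 / n_total) * sigma * (1 / n1 + 1 / n2) ** 0.5
    return 1 - nd.cdf((crit - xbar1) / sd_final)

# e.g. 20 of 40 patients observed, interim mean tumor shrinkage 0.4 SD above mu0
pp = predictive_probability(xbar1=0.4, n1=20, n_total=40, mu0=0.0, sigma=1.0)
# stop for futility if pp falls below a pre-specified boundary, e.g. 0.10
```

In the actual designs, the futility and efficacy boundaries on this predictive probability are searched so that the overall type I and type II error constraints hold across all interim looks.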
14

Alternative Endpoints and Analysis Techniques in Kidney Transplant Trials

Fergusson, Nicholas Anthony January 2017 (has links)
Clinical trials in kidney transplantation suffer from two major issues: 1) infeasibility due to low short-term event rates of hard outcomes, and 2) reliance on a composite outcome consisting of unequal endpoints that may generate misleading results. This thesis explores and applies methods to address these issues and, ultimately, to improve kidney transplantation trials. We present a secondary analysis of the ACE trial in kidney transplantation using composites with alternative graft function surrogate endpoints. Typically, kidney transplant trials, including the ACE trial, use a time-to-event composite of death, end-stage renal disease (ESRD), and doubling of serum creatinine. Instead of doubling of serum creatinine, we investigated the use of percentage declines in estimated glomerular filtration rate (eGFR) within a time-to-event composite with death and ESRD. Additionally, we applied an innovative analysis method, the win ratio approach, to the ACE trial as a way of lessening concerns associated with unequal composite endpoints. Composites of death, ESRD, and either a 40%, 30%, or 20% decline in eGFR did not alter the original ACE trial results, interpretations, or conclusions. The win ratio approach generated results comparable to a standard time-to-event analysis while lessening the impact of unequal composite endpoints and making fewer statistical assumptions. This research provides a novel, trial-level application of alternative endpoints and analysis techniques in a kidney transplant trial setting.
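The win ratio approach compares every treated patient with every control patient down the outcome hierarchy (death first, then ESRD, then eGFR decline) and reports wins divided by losses, so the most serious endpoint always decides a pair when it can. A minimal sketch with invented event times, with censoring and tie handling simplified away:

```python
def win_ratio(treat, control):
    """Unmatched win ratio for a hierarchical composite endpoint.
    Each patient is a dict of event times (larger = later event = better);
    comparisons proceed down the hierarchy and stop at the first level
    that decides the pair. (Illustrative sketch: no censoring handling.)"""
    hierarchy = ["death_time", "esrd_time", "egfr_decline_time"]  # worst first
    wins = losses = 0
    for t in treat:
        for c in control:
            for key in hierarchy:
                if t[key] > c[key]:          # treated patient does better: win
                    wins += 1
                    break
                if t[key] < c[key]:
                    losses += 1
                    break
    return wins / losses if losses else float("inf")

# hypothetical event times for two patients per arm
treat = [{"death_time": 10, "esrd_time": 8, "egfr_decline_time": 6},
         {"death_time": 12, "esrd_time": 9, "egfr_decline_time": 5}]
control = [{"death_time": 9, "esrd_time": 8, "egfr_decline_time": 7},
           {"death_time": 12, "esrd_time": 7, "egfr_decline_time": 4}]
wr = win_ratio(treat, control)
```

Because each pair is resolved by the highest-priority endpoint that differs, a decline in eGFR can never outvote a death, which is exactly the concern with weighting all components of a standard composite equally.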
15

Critères de jugement dans les essais contrôlés randomisés en réanimation / Outcomes in randomized controlled trials in critically ill patients

Gaudry, Stéphane 28 November 2016 (has links)
The choice of relevant primary and secondary endpoints is an essential step in the design of a randomized controlled trial. In our first work, we conducted a systematic review of patient-important outcomes in randomized controlled trials in critically ill patients. We defined patient-important outcomes as, on one hand, mortality at any time and, on the other, quality of life and functional outcomes assessed after ICU discharge. Indeed, clinical decision-making in intensive care now pursues the goal of improving medium- and long-term outcomes in survivors in addition to increasing their chance of survival. We found that only a minority of primary outcomes (27/112, 24%) used in randomized controlled trials published in 2013 were patient-important outcomes, and that mortality accounted for the vast majority of them. Our analysis of more recently published trials (first half of 2016) showed no change: patient-important outcomes were used in the same low proportion (25% of primary outcomes). In an ancillary study of this systematic review, we then addressed how well decisions to withhold or withdraw life-sustaining therapies (W-WLST) were reported in randomized controlled trials in critically ill patients, and how such decisions could affect mortality as an outcome measure. We found that W-WLST decisions, although a daily concern in routine practice, were scarcely reported: only 6 of 65 trials (9%) reported the rate of such decisions during follow-up. We further explored, through a simulation study, the impact that an imbalance in these decisions between the two arms of a trial could have in terms of bias on mortality. The simulation showed that when W-WLST decisions were made later in the experimental arm, the intervention could appear protective even though it had no true effect on survival. Finally, we conducted a randomized controlled trial in critically ill patients (AKIKI, Artificial Kidney Initiation in Kidney Injury) using mortality as the primary outcome and reporting the rate and timing of W-WLST decisions in both arms.
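The bias mechanism studied in the simulation can be reproduced in a toy model: under a null treatment effect, death is recorded on the day of the W-WLST decision, so delaying those decisions past the 28-day horizon in one arm makes that arm's 28-day mortality look lower. All rates, delays, and distributions below are hypothetical, not the dissertation's simulation design.

```python
import numpy as np

def day28_mortality(n, wlst_rate, wlst_day, horizon=28, seed=0):
    """28-day mortality in one arm of a null trial where patients selected
    for a W-WLST decision die on the decision day. (Toy model with a
    hypothetical event-time distribution and W-WLST rate.)"""
    rng = np.random.default_rng(seed)
    death_day = rng.exponential(scale=40, size=n)   # null survival, same in both arms
    wlst = rng.random(n) < wlst_rate                # patients with a W-WLST decision
    death_day[wlst] = wlst_day                      # death recorded on decision day
    return float(np.mean(death_day <= horizon))

# same seed: identical underlying survival; only the W-WLST timing differs
ctrl_m = day28_mortality(n=5000, wlst_rate=0.3, wlst_day=5)   # early decisions
exp_m = day28_mortality(n=5000, wlst_rate=0.3, wlst_day=32)   # delayed past day 28
# exp_m < ctrl_m: the arm with delayed decisions appears protective
```

The survival process is identical in both arms; the apparent benefit comes entirely from the measurement horizon crossing the decision day, which is why reporting the rate and timing of W-WLST decisions in both arms matters.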
16

Some recent advances in multivariate statistics: modality inference and statistical monitoring of clinical trials with multiple co-primary endpoints

Cheng, Yansong 22 January 2016 (has links)
This dissertation focuses on two topics in multivariate statistics. The first part develops an inference procedure and a fast computation tool for the modal clustering method proposed by Li et al. (2007). Modal clustering, based on the kernel density estimate, clusters data by their association with a single mode, with the final number of clusters equaling the number of modes, otherwise known as the modality of the distribution of the data. This method provides a flexible tool for clustering data of low to moderate dimension with arbitrary distributional shapes. We expand on Li and colleagues' method by proposing a procedure that determines the number of clusters in the data. A test statistic and its asymptotic distribution are derived to assess the significance of each mode within the data. The inference procedure is tested on both simulated and real data sets. In addition, an R package (Modalclust) is developed that implements the modal clustering procedure using parallel processing, which dramatically increases computing speed over the previously available method. This package is available on the Comprehensive R Archive Network (CRAN). The second part of this dissertation develops methods for statistical monitoring of clinical trials with multiple co-primary endpoints, where success is defined as meeting all endpoints simultaneously. In practice, a group sequential design is used to stop trials early for promising efficacy, and conditional power is used for futility stopping rules. We show that stopping boundaries for group sequential designs with multiple co-primary endpoints should be the same as those for studies with single endpoints. Lan and Wittes (1988) proposed the B-value tool for calculating the conditional power of single-endpoint trials, and we extend this tool to studies with multiple co-primary endpoints.
We consider the cases of two-arm studies with co-primary normal and binary endpoints and provide several examples of implementation with simulated trials. A fixed-weight sample size re-estimation approach based on conditional power is introduced. Finally, we discuss the possibility of blinded interim analyses for multiple endpoints using the modality inference method introduced in the first part.
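The B-value extension can be sketched as follows: each endpoint's interim Z-statistic is converted to a B-value, the drift is estimated from the current trend, and the per-endpoint conditional powers are combined, since success requires every co-primary endpoint to cross its critical value. This sketch assumes independent endpoint statistics (the dissertation handles the correlated co-primary case), and the numbers are hypothetical.

```python
from statistics import NormalDist

def conditional_power_coprimary(z_interim, t, alpha=0.025):
    """Conditional power under the current trend via the Lan-Wittes B-value,
    B(t) = Z(t)*sqrt(t), with success requiring every endpoint to exceed its
    critical value at information time t = 1. (Sketch: independent endpoints.)"""
    nd = NormalDist()
    zcrit = nd.inv_cdf(1 - alpha)
    cp = 1.0
    for z in z_interim:
        b = z * t ** 0.5                  # observed B-value at information time t
        drift = b / t                     # drift estimated from the current trend
        # B(1) - B(t) ~ N(drift * (1 - t), 1 - t), independent of B(t)
        cp *= 1 - nd.cdf((zcrit - b - drift * (1 - t)) / (1 - t) ** 0.5)
    return cp

# e.g. half the information observed, interim Z of 1.8 on each co-primary endpoint
cp = conditional_power_coprimary([1.8, 1.8], t=0.5)
```

Because the co-primary success criterion multiplies per-endpoint probabilities, conditional power for two endpoints is always below that of either endpoint alone, which drives the futility rules discussed above.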
17

BLINDED EVALUATIONS OF EFFECT SIZES IN CLINICAL TRIALS: COMPARISONS BETWEEN BAYESIAN AND EM ANALYSES

Turkoz, Ibrahim January 2013 (has links)
Clinical trials are major and costly undertakings for researchers. Planning a clinical trial involves careful selection of the primary and secondary efficacy endpoints. The 2010 draft FDA guidance on adaptive designs acknowledges possible study design modifications, such as the selection and/or ordering of secondary endpoints, in addition to sample size re-estimation. It is essential for the integrity of a double-blind clinical trial that the individual treatment allocation of patients remains unknown. Methods have been proposed for re-estimating the sample size of clinical trials, without unblinding treatment arms, for both categorical and continuous outcomes. Procedures that allow blinded estimation of the treatment effect, using knowledge of trial operational characteristics, have been suggested in the literature. Clinical trials are designed to evaluate the effects of one or more treatments on multiple primary and secondary endpoints. The multiplicity issues arising with more than one endpoint require careful consideration to control the Type I error rate, and a wide variety of multiplicity approaches are available to ensure that the probability of making a Type I error stays within acceptable pre-specified bounds. The widely used fixed-sequence gate-keeping procedures require prospective ordering of the null hypotheses for secondary endpoints. This prospective ordering is often based on a number of untested assumptions about expected treatment differences, the assumed population variance, and estimated dropout rates. We propose updating the ordering of the null hypotheses based on estimated standardized treatment effects. We show how to do so while the study is ongoing, without unblinding the treatments, without losing the validity of the testing procedure, and while maintaining the integrity of the trial.
Our simulations show that we can reliably order the standardized treatment effects, also known as signal-to-noise ratios, even though we are unable to estimate the unstandardized treatment effects. To estimate the treatment difference in a blinded setting, we must define a latent variable substituting for the unknown treatment assignment. Approaches that employ the EM algorithm to estimate treatment differences in blinded settings do not provide reliable conclusions about the ordering of the null hypotheses. We developed Bayesian approaches, based on posterior estimation of signal-to-noise ratios, that enable us to order secondary null hypotheses. We demonstrate with simulation studies that our Bayesian algorithms perform better than existing EM algorithm counterparts for ordering effect sizes. Introducing informative priors for the latent variables, in settings where the EM algorithm has been used, typically improves the accuracy of parameter estimation in effect size ordering. We illustrate our method with a secondary analysis of a longitudinal study of depression. / Statistics
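As a sketch of the EM-style baseline being compared against, the code below fits a two-component normal mixture (equal weights, common variance) to pooled blinded data, with the latent component standing in for the unknown treatment assignment; the separation of the fitted means estimates the treatment difference. The data, starting values, and iteration count are invented for illustration, and this is not the dissertation's algorithm.

```python
import numpy as np

def blinded_em(x, iters=200):
    """EM for a two-component normal mixture with equal weights and a common
    variance. The latent component label plays the role of the blinded
    treatment assignment; mu2 - mu1 estimates the treatment difference."""
    x = np.asarray(x, dtype=float)
    mu1, mu2 = x.mean() - x.std(), x.mean() + x.std()   # spread-apart starts
    sigma = x.std()
    for _ in range(iters):
        # E-step: responsibility of component 2 for each observation
        d1 = np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
        d2 = np.exp(-0.5 * ((x - mu2) / sigma) ** 2)
        r = d2 / (d1 + d2)
        # M-step: update the two means and the common standard deviation
        mu1 = np.sum((1 - r) * x) / np.sum(1 - r)
        mu2 = np.sum(r * x) / np.sum(r)
        sigma = np.sqrt(np.mean((1 - r) * (x - mu1) ** 2 + r * (x - mu2) ** 2))
    return mu2 - mu1                                    # estimated difference

# pooled blinded data: two arms of 400 with a true difference of 2.0 SD
rng = np.random.default_rng(7)
pooled = np.concatenate([rng.normal(0.0, 1.0, 400), rng.normal(2.0, 1.0, 400)])
effect = blinded_em(pooled)
```

With well-separated components this recovers the difference without the labels; as the separation shrinks the estimate becomes unstable, which is consistent with the unreliability of EM-based ordering reported above and motivates the Bayesian alternative with informative priors on the latent labels.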
18

A Modern Statistical Approach to Quality Improvement in Health Care using Quantile Regression

Dalton, Jarrod E. 07 March 2013 (has links)
No description available.
19

Assessment of Pulmonary Insufficiency using Energy-Based Endpoints and 4D Phase Contrast MR Imaging

Lee, Namheon January 2013 (has links)
No description available.
20

Effects of hemodynamic stresses on the remodeling parameters in arteriovenous fistula

Rajabi Jaghargh, Ehsan 02 June 2015 (has links)
No description available.
