1.
Methodological issues in randomized trials of pediatric acute diarrhea: evaluating probiotics and the need for standardized definitions and valid outcome measures
Johnston, Bradley C.
BACKGROUND: In a 2006 WHO report, diarrheal diseases ranked second among conditions afflicting children. Pediatric acute diarrhea (PAD), although most often the result of a gastrointestinal infection, can also occur as a result of antibiotic exposure; this is often referred to as antibiotic-associated diarrhea (AAD). Previous research suggests that probiotics may be effective in the treatment or prevention of various types of PAD.
METHODS: The first study was a systematic review and meta-analysis of RCTs of probiotics given as an adjunct to antibiotics to prevent AAD in children. The second study was a systematic review of the definitions and primary outcome measures employed in RCTs of PAD. The third study used a modified Delphi consensus procedure to develop a new instrument for evaluating the severity of PAD; it involved steering committee discussions (phase 1) and two electronic surveys (phases 2 and 3) of leading experts in measurement and clinical gastroenterology.
RESULTS: The per-protocol meta-analysis of ten RCTs significantly favored probiotics for preventing diarrhea (NNT = 10). However, this effect did not withstand intention-to-treat (ITT) analysis, and the included trials were markedly inconsistent in their definitions of the review's primary outcome measure, the incidence of diarrhea. Study two identified 121 RCTs that reported 62 unique definitions of diarrhea, 64 unique definitions of diarrhea resolution and 62 unique primary outcome measures. Thirty-one trials used grading systems to support outcome evaluation, yet none of the trials (or their citations) reported evidence of validation. In study three, experts agreed on the inclusion of five attributes containing 13 items. Attributes proposed for the IPADDS include: Diarrhea Frequency and Duration, Vomiting Frequency and Duration, Fever, Restrictions in Normal Daily Activities, and Dehydration.
CONCLUSION: It is premature to draw a valid conclusion about the efficacy of probiotics for pediatric AAD. Definitions of diarrhea and primary outcome measures in RCTs of PAD are heterogeneous and lack evidence of validity. The third study provides content validity evidence for the IPADDS; a numerical scoring system still needs to be added, and further empirical evidence of reliability and validity is required. / Experimental Medicine
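As a brief illustration of the arithmetic behind an NNT of 10, and of how re-including dropouts under an intention-to-treat analysis can attenuate an apparent effect, consider the following sketch. The event counts and dropout numbers are hypothetical and are not the data from this review.

```python
# Hypothetical per-protocol counts (not the review's data): children completing follow-up.
pp_events_probiotic, pp_n_probiotic = 24, 300
pp_events_control,   pp_n_control   = 54, 300

# NNT = 1 / absolute risk difference between control and probiotic arms.
rd_pp = pp_events_control / pp_n_control - pp_events_probiotic / pp_n_probiotic
print(f"Per-protocol risk difference = {rd_pp:.2f}, NNT = {1 / rd_pp:.0f}")   # 0.10 -> NNT 10

# ITT re-includes dropouts; counting them as having no event shrinks the risk difference,
# which is one way a significant per-protocol result can fail to hold under ITT.
dropouts_per_arm = 100
rd_itt = (pp_events_control / (pp_n_control + dropouts_per_arm)
          - pp_events_probiotic / (pp_n_probiotic + dropouts_per_arm))
print(f"ITT-style risk difference = {rd_itt:.3f}, NNT = {1 / rd_itt:.0f}")
```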
2.
Methodological issues in randomized trials of pediatric acute diarrhea: evaluating probiotics and the need for standardized definitions and valid outcome measures
Johnston, Bradley C. Unknown date
No description available.
3.
Bayesian design and analysis of cluster randomized trials
Xiao, Shan, 07 August 2017
Indiana University-Purdue University Indianapolis (IUPUI) / Cluster randomization is frequently used in clinical trials for convenience of interventional implementation and for reducing the risk of contamination. The operational convenience of cluster randomized trials, however, is gained at the expense of reduced analytical power. Compared to individually randomized studies, cluster randomized trials often have a much-reduced power. In this dissertation, I consider ways of enhancing analytical power with historical trial data. Specifically, I introduce a hierarchical Bayesian model that is designed to incorporate available information from previous trials of the same or similar interventions. Operationally, the amount of information gained from the previous trials is determined by a Kullback-Leibler divergence measure that quantifies the similarity, or lack thereof, between the historical and current trial data. More weight is given to the historical data if they more closely resemble the current trial data. Along this line, I examine the Type I error rates and analytical power associated with the proposed method, in comparison with existing methods that do not utilize the ancillary historical information. Similarly, to design a cluster randomized trial, one could estimate the power by simulating trial data and comparing them with the historical data from published studies. Data-analysis and power-simulation methods are developed for more general situations of cluster randomized trials, with multiple arms and multiple types of data following the exponential family of distributions. An R package is developed for practical use of the methods in data analysis and trial design.
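A minimal sketch of the borrowing idea described in this abstract, using a normal model for cluster-level summaries and a power-prior-style weight derived from the Kullback-Leibler divergence between historical and current data. The weight function, the simulated data, and the flat initial prior are assumptions for illustration only, not the dissertation's actual hierarchical model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cluster-level mean outcomes (one value per cluster); not real trial data.
historical = rng.normal(0.4, 1.0, size=20)   # clusters from a previous, similar trial
current    = rng.normal(0.5, 1.0, size=12)   # clusters from the current trial

def kl_normal(m1, s1, m0, s0):
    """KL divergence KL(N(m1, s1^2) || N(m0, s0^2))."""
    return np.log(s0 / s1) + (s1**2 + (m1 - m0)**2) / (2 * s0**2) - 0.5

# Similarity-based borrowing weight in (0, 1]: identical summaries give weight near 1,
# discordant historical data are discounted (an assumed weight function, for illustration).
kl = kl_normal(historical.mean(), historical.std(ddof=1),
               current.mean(), current.std(ddof=1))
w = np.exp(-kl)

# Power-prior-style posterior mean for the current-trial effect under a normal model with
# a flat initial prior: historical clusters contribute to the precision with weight w.
prec_hist = w * len(historical) / historical.var(ddof=1)
prec_curr = len(current) / current.var(ddof=1)
posterior_mean = (prec_hist * historical.mean() + prec_curr * current.mean()) / (prec_hist + prec_curr)
posterior_sd = (prec_hist + prec_curr) ** -0.5

print(f"KL = {kl:.3f}, borrowing weight = {w:.2f}")
print(f"Posterior mean = {posterior_mean:.3f} (sd {posterior_sd:.3f}) vs current-only {current.mean():.3f}")
```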
4.
Development of an Instrument for Assessing Risk of Bias of Randomized Trials in Systematic Reviews
Wang, Ying, 04 September 2024
Assessment of risk of bias in included randomized controlled trials (RCTs) has become an essential step in systematic reviews: it informs the decision of whether to rate down the certainty of evidence for risk of bias when applying the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach. Many instruments exist for rating risk of bias in RCTs; however, even the most commonly used ones, developed by the Cochrane group, suffer from limitations. In particular, the revised Cochrane instrument, while reflecting methodological advances, sacrificed simplicity and practicability.
The objective of this thesis is to use rigorous methodology to develop a simply structured RCT risk of bias instrument that is easy for systematic review authors to use. The thesis begins with a chapter introducing the background and the structure of the thesis. It then describes a systematic survey of existing RCT risk of bias instruments and their included items, through which we collected candidate items for the new instrument. We next present a summary of the empirical evidence on how potential risk of bias issues influence estimates of intervention effects in RCTs, which guided item selection for the new instrument. The thesis then describes the instrument development process in detail and presents the new instrument, and ends with a chapter summarizing key findings, discussing strengths and limitations, and exploring directions for future research. / Thesis / Doctor of Philosophy (PhD)
5.
Optimal Sample Allocation in Multilevel Experiments
Shen, Zuchao, 11 June 2019
No description available.
6.
Statistical Approaches for Handling Missing Data in Cluster Randomized Trials
Fiero, Mallorie H., January 2016
In cluster randomized trials (CRTs), groups of participants are randomized rather than individual participants. This design is often chosen to minimize treatment arm contamination or to enhance compliance among participants. In CRTs, we cannot assume independence among individuals within the same cluster because of their similarity, which leads to decreased statistical power compared to individually randomized trials. The intracluster correlation coefficient (ICC), which measures the proportion of total variance due to clustering, is crucial in the design and analysis of CRTs. Missing data are a common problem in CRTs and should be accommodated with appropriate statistical techniques, because they can compromise the advantages created by randomization and are a potential source of bias. In three papers, I investigate statistical approaches for handling missing data in CRTs.

In the first paper, I carry out a systematic review evaluating current practice in handling missing data in CRTs. The results show high rates of missing data in the majority of CRTs, yet handling of missing data remains suboptimal. Fourteen (16%) of the 86 reviewed trials reported carrying out a sensitivity analysis for missing data. Despite suggestions to weaken the missing data assumption relative to the primary analysis, only five of the trials weakened the assumption, and none reported using missing not at random (MNAR) models.

Given the low proportion of CRTs reporting an appropriate sensitivity analysis for missing data, the second paper aims to facilitate such analyses by extending the pattern mixture approach for missing clustered data under the MNAR assumption. I implement multilevel multiple imputation (MI) to account for the hierarchical structure found in CRTs, and multiply imputed values by a sensitivity parameter, k, to examine parameters of interest under different missing data assumptions. The simulation results show that estimates of parameters of interest in CRTs can vary widely under different missing data assumptions.

Missing data in CRTs can occur at the cluster level as well as the individual level. In the third paper, I use a simulation study to compare strategies for handling missing cluster-level covariates, including the linear mixed effects model, single imputation, single-level MI ignoring clustering, MI incorporating clusters as fixed effects, and MI at the cluster level using aggregated data. The results show that when the ICC is small (ICC ≤ 0.1) and the proportion of missing data is low (≤ 25%), the mixed model generates unbiased estimates of regression coefficients and the ICC. When the ICC is higher (ICC > 0.1), MI at the cluster level using aggregated data performs well for missing cluster-level covariates, though caution should be taken if the percentage of missing data is high.
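A minimal sketch of the sensitivity-parameter idea from the second paper: missing outcomes are multiply imputed within clusters, the imputed values are multiplied by k, and the analysis is repeated over a grid of k. The data-generating model, the crude within-cluster imputation, and the cluster-level analysis below are simplified assumptions for illustration only, not the multilevel MI procedure developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical two-arm CRT: 20 clusters of 30, continuous outcome, true effect 0.5 (illustration only).
n_clusters, m = 20, 30
arm = np.repeat(np.arange(n_clusters) % 2, m)               # alternate clusters to treatment/control
cluster = np.repeat(np.arange(n_clusters), m)
u = rng.normal(0, 0.5, n_clusters)                          # random cluster intercepts
y = 0.5 * arm + u[cluster] + rng.normal(0, 1, n_clusters * m)
observed = rng.uniform(size=y.size) > 0.2                   # roughly 20% of outcomes missing

def impute_once(k):
    """One imputation: draw missing values from the observed cluster distribution, then scale by k."""
    y_imp = y.copy()
    for c in range(n_clusters):
        idx = (cluster == c) & ~observed
        obs = y[(cluster == c) & observed]
        draws = rng.normal(obs.mean(), obs.std(ddof=1), idx.sum())
        y_imp[idx] = k * draws                              # k = 1 recovers an MAR-style imputation
    return y_imp

def cluster_level_effect(y_full):
    """Difference in mean cluster means between arms (a simple CRT analysis)."""
    means = np.array([y_full[cluster == c].mean() for c in range(n_clusters)])
    arm_c = np.arange(n_clusters) % 2
    return means[arm_c == 1].mean() - means[arm_c == 0].mean()

# Sensitivity analysis: repeat the MI-style analysis over a grid of k and watch the estimate move.
for k in (0.5, 1.0, 1.5):
    estimates = [cluster_level_effect(impute_once(k)) for _ in range(20)]   # 20 imputations
    print(f"k = {k}: pooled effect estimate = {np.mean(estimates):.3f}")
```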
7.
Improved Standard Error Estimation for Maintaining the Validities of Inference in Small-Sample Cluster Randomized Trials and Longitudinal Studies
Tanner, Whitney Ford, 01 January 2018
Data arising from cluster randomized trials (CRTs) and longitudinal studies are correlated, and generalized estimating equations (GEE) are a popular method for analyzing correlated data. Previous research has shown that GEE analyses can result in liberal inference due to the use of the empirical sandwich covariance matrix estimator, which can yield negatively biased standard error estimates when the number of clusters or subjects is not large. Many techniques have been proposed to correct this negative bias; however, use of these corrections can still result in biased standard error estimates, and thus test sizes that are not consistently at their nominal level. There is therefore a need for an improved correction such that nominal type I error rates result consistently.
First, GEEs are becoming a popular choice for the analysis of data arising from CRTs. We study the use of recently developed corrections for empirical standard error estimation and the use of a combination of two popular corrections. In an extensive simulation study, we find that nominal type I error rates can be consistently attained when using an average of two popular corrections developed by Mancl and DeRouen (2001, Biometrics 57, 126-134) and Kauermann and Carroll (2001, Journal of the American Statistical Association 96, 1387-1396) (AVG MD KC). Use of this new correction was found to notably outperform the use of previously recommended corrections.
Second, data arising from longitudinal studies are also commonly analyzed with GEE. We conduct a simulation study and find two methods that attain nominal type I error rates more consistently than other methods in a variety of settings: first, a recently proposed method by Westgate and Burchett (2016, Statistics in Medicine 35, 3733-3744) that specifies both a covariance estimator and degrees of freedom, and second, AVG MD KC with degrees of freedom equal to the number of subjects minus the number of parameters in the marginal model.
Finally, stepped wedge trials are an increasingly popular alternative to traditional parallel cluster randomized trials. Such trials often utilize a small number of clusters and numerous time intervals, and these features must be considered when choosing an analysis method. A generalized linear mixed model containing a random intercept and fixed time and intervention covariates is the most common analysis approach. However, the sole use of a random intercept imposes assumptions that will be violated in practice. Using an extensive simulation study based on a motivating example and a more general design, we show that alternative analysis methods are preferable for maintaining the validity of inference in small-sample stepped wedge trials with binary outcomes. First, we show that the use of generalized estimating equations, with an appropriate bias correction and a degrees of freedom adjustment dependent on the study setting type, will result in nominal type I error rates. Second, we show that a cluster-level summary linear mixed model can also achieve nominal type I error rates for equal cluster size settings.
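A minimal sketch of the averaged bias correction (AVG MD KC) from the first study above, in the simplified setting of a GEE with identity link and independence working correlation, where point estimates equal OLS. The simulated cluster data are hypothetical, and the implementation is illustrative rather than the dissertation's code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical small CRT: 10 clusters of 25 with a continuous outcome (illustration only).
n_clusters, m = 10, 25
arm = np.arange(n_clusters) % 2
X_list, y_list = [], []
for c in range(n_clusters):
    Xc = np.column_stack([np.ones(m), np.full(m, float(arm[c]))])   # intercept + treatment indicator
    yc = 0.3 * arm[c] + rng.normal(0, 0.4) + rng.normal(0, 1, m)    # cluster effect + residual noise
    X_list.append(Xc)
    y_list.append(yc)

X, y = np.vstack(X_list), np.concatenate(y_list)

# GEE with identity link and independence working correlation: point estimates equal OLS.
bread_inv = np.linalg.inv(X.T @ X)
beta = bread_inv @ X.T @ y

def sqrtm_sym(A):
    """Symmetric matrix square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def corrected_sandwich(correction):
    """Cluster-robust (sandwich) covariance of beta with a small-sample residual correction."""
    meat = np.zeros((2, 2))
    for Xc, yc in zip(X_list, y_list):
        e = yc - Xc @ beta
        H = Xc @ bread_inv @ Xc.T                                    # cluster leverage block
        if correction == "MD":                                       # Mancl & DeRouen (2001)
            e = np.linalg.solve(np.eye(m) - H, e)
        elif correction == "KC":                                     # Kauermann & Carroll (2001)
            e = np.linalg.solve(sqrtm_sym(np.eye(m) - H), e)
        g = Xc.T @ e
        meat += np.outer(g, g)
    return bread_inv @ meat @ bread_inv

# AVG MD KC: average the two corrected covariance estimates, then test the treatment
# effect against a t distribution with (clusters - parameters) degrees of freedom.
V_avg = 0.5 * (corrected_sandwich("MD") + corrected_sandwich("KC"))
se = np.sqrt(V_avg[1, 1])
t_stat = beta[1] / se
df = n_clusters - 2
print(f"effect = {beta[1]:.3f}, SE = {se:.3f}, p = {2 * stats.t.sf(abs(t_stat), df):.3f}")
```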
8.
Enhancing Statistician Power: Flexible Covariate-Adjusted Semiparametric Inference for Randomized Studies with Multivariate Outcomes
Stephens, Alisa Jane, 21 June 2014
It is well known that incorporating auxiliary covariates in the analysis of randomized clinical trials (RCTs) can increase efficiency. Questions still remain regarding how to flexibly incorporate baseline covariates while maintaining valid inference. Recent methodological advances that use semiparametric theory to develop covariate-adjusted inference for RCTs have focused on independent outcomes. In biomedical research, however, cluster randomized trials and longitudinal studies, characterized by correlated responses, are commonly used. We develop methods that flexibly incorporate baseline covariates for efficiency improvement in randomized studies with correlated outcomes. In Chapter 1, we show how augmented estimators may be used for cluster randomized trials, in which treatments are assigned to groups of individuals. We demonstrate the potential for imbalance correction and efficiency improvement through consideration of both cluster- and individual-level covariates. To improve small-sample estimation, we consider several variance adjustments. We evaluate this approach for continuous and binary outcomes through simulation and apply it to the Young Citizens study, a cluster randomized trial of a community behavioral intervention for HIV prevention in Tanzania. Chapter 2 builds upon the previous chapter by deriving semiparametric locally efficient estimators of marginal mean treatment effects when outcomes are correlated. Estimating equations are determined by the efficient score under a mean model for marginal effects when data contain baseline covariates and exhibit correlation. Locally efficient estimators are implemented for longitudinal data with continuous outcomes and clustered data with binary outcomes. Methods are illustrated through application to AIDS Clinical Trial Group Study 398, a longitudinal randomized study that compared various protease inhibitors in HIV-positive subjects. In Chapter 3, we empirically evaluate several covariate-adjusted tests of intervention effects when baseline covariates are selected adaptively and the number of randomized units is small. We demonstrate that randomization inference preserves type I error under model selection while tests based on asymptotic theory break down. Additionally, we show that covariate adjustment typically increases power, except at extremely small sample sizes using liberal selection procedures. Properties of covariate-adjusted tests are explored for independent and multivariate outcomes. We revisit Young Citizens to provide further insight into the performance of various methods in small-sample settings.
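A minimal sketch of the augmentation idea from Chapter 1, applied to hypothetical cluster-level summaries with a single baseline covariate and 1:1 randomization. The data and the linear working models are assumptions for illustration, not the estimators developed in the thesis; because treatment is randomized, misspecifying the working models costs efficiency rather than consistency.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical cluster-level summaries: one row per cluster (illustration only).
n = 30
a = rng.permutation(np.repeat([0, 1], n // 2))        # 1:1 cluster randomization
x = rng.normal(0, 1, n)                               # baseline cluster-level covariate
y = 0.4 * a + 0.8 * x + rng.normal(0, 1, n)           # cluster-mean outcome, true effect 0.4
pi = 0.5                                              # known randomization probability

def fit_linear(xs, ys):
    """Working outcome regression within one arm (any model could be substituted)."""
    b1, b0 = np.polyfit(xs, ys, 1)
    return lambda t: b0 + b1 * t

g1 = fit_linear(x[a == 1], y[a == 1])
g0 = fit_linear(x[a == 0], y[a == 0])

# Augmented estimator: inverse-probability-weighted difference in means minus an
# augmentation term that exploits the baseline covariate to reduce variance.
ipw_term = a * y / pi - (1 - a) * y / (1 - pi)
augmentation = (a - pi) * (g1(x) / pi + g0(x) / (1 - pi))
effect_aug = np.mean(ipw_term - augmentation)

effect_unadj = y[a == 1].mean() - y[a == 0].mean()
print(f"unadjusted = {effect_unadj:.3f}, covariate-augmented = {effect_aug:.3f}")
```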
9.
Extending the Principal Stratification Method To Multi-Level Randomized Trials
Guo, Jing, 12 April 2010
The Principal Stratification method estimates a causal intervention effect by taking into account subjects' differences in participation, adherence or compliance. The method has mostly been used in randomized intervention trials with randomization at a single (individual) level, in which subjects are randomly assigned to either the intervention or the control condition. In many scientific fields, however, randomized intervention trials are conducted at the group level rather than the individual level. This is so-called "two-level randomization", in which randomization is conducted at a group (second) level, above the individual level, while outcomes are often observed at the individual level within each group. In a two-level randomized trial, incorrect inferences may result from the causal modeling if one considers compliance only at the individual level while ignoring, or failing to determine, compliance at the group level. The Principal Stratification method thus needs to be further developed to address this issue.
To extend the application of the Principal Stratification method, this research developed a new methodology for causal inference in two-level intervention trials, in which principal strata can be formed from both group-level and individual-level compliance. Built on the original Principal Stratification method, the new method incorporates a range of alternative approaches to assess causal effects on a population when data on exposure at the group level are incomplete or limited, as are data at the individual level. We use the Gatekeeper Training Trial as a motivating example and for illustration. The study focuses on how to examine the causal intervention effect for schools that varied in their level of adoption of the intervention program (early adopter vs. later adopter); in this case, the traditional exclusion restriction assumption of the Principal Stratification method no longer holds. The results show that the intervention had a stronger impact on the later-adopter group than on the early-adopter group across all participating schools, and that impacts were larger for later-trained schools than for earlier-trained schools. The study also shows that the intervention had different impacts on middle schools and high schools.
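A minimal sketch of the single-level building block that the dissertation extends: the standard instrumental-variable estimate of the complier average causal effect, framed here with hypothetical school-level data in which adoption of the program plays the role of compliance. The numbers are simulated, and the sketch relies on monotonicity and the exclusion restriction, the assumption the dissertation ultimately relaxes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical school-level data: randomization to the training program (z), actual
# adoption of the program (d), and a school-level outcome (y). Illustration only.
n = 200
z = rng.integers(0, 2, n)                          # randomized assignment (school level)
complier = rng.uniform(size=n) < 0.6               # latent stratum: would adopt if assigned
d = z * complier                                   # adoption: only assigned compliers adopt
y = 1.0 * d + 0.5 * complier + rng.normal(0, 1, n) # adoption effect 1.0, plus stratum difference

# Intention-to-treat effects of assignment on the outcome and on adoption.
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()        # estimated share of adopter (complier) schools

# Standard IV / principal-stratification estimate of the effect among adopters,
# valid under monotonicity and the exclusion restriction.
cace = itt_y / itt_d
print(f"ITT = {itt_y:.3f}, adoption rate difference = {itt_d:.3f}, CACE = {cace:.3f}")
```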
10.
Faire preuve par le chiffre ? Le cas des expérimentations aléatoires en économie / Evidence by numbers? The case of randomized controlled trials
Jatteau, Arthur, 05 December 2016
With Esther Duflo and her lab (the J-PAL), randomized controlled trials (RCTs) became trendy in economics from the 2000s onward and are presented by their advocates as the most robust method for impact evaluation. Relying on mixed methods, this thesis investigates the social construction of experimental evidence and contributes to a social and historical epistemology of RCTs and to the socio-economy of quantification.

The first part develops a socio-history of this method. The origins of RCTs are multidisciplinary and precede their extensive use in medicine from the 1940s and in economics from the late 1960s onward. This allows us to gain a deeper understanding of the current use of RCTs.

In the second part, we examine the stakeholders of this method, chiefly J-PAL researchers. Our prosopographical analysis, supplemented by a network analysis, demonstrates that their high level of academic capital and the presence of leaders allow for the control and the diffusion of RCTs.

In the last part, we scrutinize the production of experimental evidence. By examining RCTs in operation, we show that both their internal and external validity are in many cases compromised. Finally, we explore the convoluted links between RCTs, policy and politics.