31

Instrumental variable and longitudinal structural equation modelling methods for causal mediation : the PACE trial of treatments for chronic fatigue syndrome

Goldsmith, Kimberley January 2014 (has links)
Background: Understanding complex psychological treatment mechanisms is important in order to refine and improve treatment. Mechanistic theories can be evaluated using mediation analysis methods. The Pacing, Graded Activity, and Cognitive Behaviour Therapy: A Randomised Evaluation (PACE) trial studied complex therapies for the treatment of chronic fatigue syndrome. The aim of the project was to study different mediation analysis methods using PACE trial data, and to make trial design recommendations based upon the findings. Methods: PACE trial data were described using summary statistics and correlation analyses. Mediation estimates were derived using: the product of coefficients approach, instrumental variable (IV) methods with randomisation-by-baseline-variable interactions as IVs, and dual process longitudinal structural equation models (SEM). Monte Carlo simulation studies were done to further explore the behaviour of IV estimators and to examine aspects of the SEM. Results: Cognitive and behavioural measures were mediators of the cognitive behavioural and graded exercise therapies in PACE. Results were robust when accounting for correlated measurement error and different SEM structures. The randomisation-by-baseline interaction IVs were weak, giving imprecise and sometimes extreme estimates, leaving their utility unclear. A flexible version of a latent change SEM with contemporaneous mediation effects and contemporaneous correlated measurement errors was the most appropriate longitudinal model. Conclusions: IV methods using interaction IVs are unlikely to be useful; designs with a randomised IV might be more suitable. Longitudinal SEM for mediation in clinical trials seems a promising approach. Mediation estimates from SEM were generally robust when allowing for correlated measurement error and for different model classes. Mediation analysis in trials should be longitudinal and should consider the number and timing of measures at the design stage. 
Using appropriate methods for studying mediation in trials will help clarify treatment mechanisms of action and allow for their refinement, which would maximize the information gained from trials and benefit patients.
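As a rough illustration of the product-of-coefficients approach named in the abstract above, the sketch below simulates a single continuous mediator under a linear model and multiplies the treatment-to-mediator and mediator-to-outcome coefficients. All data, effect sizes, and variable names are illustrative assumptions, not values from the PACE trial:

```python
import random

def slope(x, y):
    # simple OLS slope of y on x (with intercept)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def ols2(y, x1, x2):
    # OLS of y on x1 and x2 (with intercept) via the 2x2 normal equations
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    def cov(u, mu, v, mv):
        return sum((a - mu) * (b - mv) for a, b in zip(u, v))
    s11, s22 = cov(x1, m1, x1, m1), cov(x2, m2, x2, m2)
    s12 = cov(x1, m1, x2, m2)
    s1y, s2y = cov(x1, m1, y, my), cov(x2, m2, y, my)
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

random.seed(1)
n = 5000
z = [random.randint(0, 1) for _ in range(n)]        # randomised treatment
m = [0.8 * zi + random.gauss(0, 1) for zi in z]     # mediator path a = 0.8
y = [0.5 * mi + 0.3 * zi + random.gauss(0, 1)       # b = 0.5, direct = 0.3
     for mi, zi in zip(m, z)]

a = slope(z, m)              # treatment -> mediator coefficient
b, direct = ols2(y, m, z)    # mediator -> outcome coefficient, direct effect
indirect = a * b             # mediated (indirect) effect, approx. 0.4
```

With randomised treatment and no mediator-outcome confounding, the product a*b recovers the mediated effect; the SEM and IV machinery in the thesis exists precisely for the cases where those assumptions fail.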
32

Evaluating the Use of Ridge Regression and Principal Components in Propensity Score Estimators under Multicollinearity

Gripencrantz, Sarah January 2014 (has links)
Multicollinearity can be present in the propensity score model when estimating average treatment effects (ATEs). In this thesis, logistic ridge regression (LRR) and principal components logistic regression (PCLR) are evaluated as alternatives to ML estimation of the propensity score model. ATE estimators based on weighting (IPW), matching and stratification are assessed in a Monte Carlo simulation study to evaluate LRR and PCLR. Further, an empirical example of using LRR and PCLR on real data under multicollinearity is provided. Results from the simulation study reveal that under multicollinearity and in small samples, the use of LRR reduces bias in the matching estimator compared to ML. In large samples PCLR yields the lowest bias, and typically the lowest MSE, across all estimators. PCLR matched ML in bias under IPW estimation and in some cases had lower bias. The stratification estimator was heavily biased compared to matching and IPW, but both bias and MSE improved when PCLR was applied, and in some cases under LRR. In the empirical example, the PCLR specification was usually the most sensitive to the inclusion of a strongly correlated covariate in the propensity score model.
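A minimal sketch of the pipeline studied above: fit a ridge-penalised logistic propensity score model on two nearly collinear covariates by gradient ascent, then plug the fitted scores into an IPW (Hajek-style) ATE estimator. The penalty strength, sample sizes, and effect sizes are illustrative assumptions, not the thesis's simulation settings:

```python
import math, random

def fit_ridge_logit(X, z, lam=1.0, lr=0.5, iters=600):
    # logistic regression by gradient ascent with an L2 (ridge) penalty
    # on the slopes; the intercept is left unpenalised
    p, n = len(X[0]), len(z)
    w = [0.0] * (p + 1)                      # w[0] is the intercept
    for _ in range(iters):
        g = [0.0] * (p + 1)
        for xi, zi in zip(X, z):
            eta = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            r = zi - 1.0 / (1.0 + math.exp(-eta))   # response residual
            g[0] += r
            for j, xj in enumerate(xi):
                g[j + 1] += r * xj
        for j in range(1, p + 1):
            g[j] -= lam * w[j]               # ridge shrinkage on slopes
        w = [wj + lr * gj / n for wj, gj in zip(w, g)]
    return w

def ipw_ate(X, z, y, w):
    # inverse-probability-weighted (Hajek) estimate of the ATE
    num1 = den1 = num0 = den0 = 0.0
    for xi, zi, yi in zip(X, z, y):
        eta = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
        e = 1.0 / (1.0 + math.exp(-eta))     # estimated propensity score
        if zi:
            num1 += yi / e; den1 += 1 / e
        else:
            num0 += yi / (1 - e); den0 += 1 / (1 - e)
    return num1 / den1 - num0 / den0

random.seed(2)
n = 1000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [a + random.gauss(0, 0.1) for a in x1]          # near-collinear pair
X = list(zip(x1, x2))
pr = [1 / (1 + math.exp(-(a + b) / 2)) for a, b in X]
z = [1 if random.random() < p else 0 for p in pr]
y = [2.0 * zi + a + random.gauss(0, 1)               # true ATE = 2
     for zi, (a, b) in zip(z, X)]

w = fit_ridge_logit(X, z)
ate = ipw_ate(X, z, y, w)
```

Note that ridge shrinkage stabilises the individually ill-determined slopes of the collinear pair while leaving their predictive sum, and hence the fitted propensities, largely intact.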
33

Principal stratification : applications and extensions in clinical trials with intermediate variables

Lou, Yiyue 15 December 2017 (has links)
Randomized clinical trials (RCTs) are considered the "gold standard" for demonstrating a causal relationship between a treatment and an outcome, because complete randomization ensures that the only difference between the two units being compared is the treatment. The intention-to-treat (ITT) comparison has long been regarded as the preferred analytic approach for RCTs. However, if there exists an "intermediate" variable between the treatment and outcome, and the analysis conditions on this intermediate, randomization breaks down, and the ITT approach does not account properly for the intermediate. In this dissertation, we explore the principal stratification approach for dealing with intermediate variables, illustrate its applications in two different clinical trial settings, and extend the existing analytic approaches with respect to specific challenges in these settings. The first part of our work focuses on clinical endpoint bioequivalence (BE) studies with noncompliance and missing data. In clinical endpoint BE studies, the primary analysis for assessing equivalence between a generic and an innovator product is usually based on the observed per-protocol (PP) population (usually completers and compliers). The FDA Missing Data Working Group recently recommended using "causal estimands of primary interest." This PP analysis, however, is not generally causal, because the observed PP population is post-treatment, and conditioning on it may introduce selection bias. To date, no causal estimand has been proposed for equivalence assessment. We propose co-primary causal estimands to test equivalence by applying the principal stratification approach. We discuss and verify by simulation the causal assumptions under which the current PP estimator is unbiased for the primary principal stratum causal estimand, the "Survivor Average Causal Effect" (SACE). 
We also propose tipping point sensitivity analysis methods to assess the robustness of the current PP estimator as an estimator of the SACE when these causal assumptions are not met. Data from a clinical endpoint BE study are used to illustrate the proposed co-primary causal estimands and sensitivity analysis methods. Our work introduces a causal framework for equivalence assessment in clinical endpoint BE studies with noncompliance and missing data. The second part of this dissertation targets the use of principal stratification analysis approaches in a pragmatic randomized clinical trial, the Patient Activation after DXA Result Notification (PAADRN) study. PAADRN is a multi-center, pragmatic randomized clinical trial that was designed to improve bone health. Participants were randomly assigned either to an intervention group, with usual care augmented by a tailored patient-activation Dual-energy X-ray absorptiometry (DXA) results letter accompanied by an educational brochure, or to a control group with usual care only. The primary analyses followed the standard ITT principle, which provided a valid estimate of the effect of intervention assignment. However, the findings might underestimate the effect of the intervention, because PAADRN cannot have an effect if the patient did not read, remember and act on the letter. We apply principal stratification to evaluate the effectiveness of PAADRN for subgroups defined by the patient's recall of having received a DXA result letter, an intermediate outcome that is post-treatment. We perform simulation studies to compare principal score weighting methods with instrumental variable (IV) methods. We examine principal strata causal effects on three outcome measures regarding pharmacological treatment and bone health behaviors. Finally, we conduct sensitivity analyses to assess the effect of potential violations of relevant causal assumptions. Our work is an important addition to the primary findings based on ITT. It provides a deeper understanding of why the PAADRN intervention does (or does not) work for patients with different letter recall statuses, and sheds light on how the intervention might be improved.
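The selection-bias point above can be made concrete with a small simulation: when completion under control depends on an unmeasured frailty, the naive per-protocol contrast drifts away from the Survivor Average Causal Effect, even though the latent always-completer effect is fixed. The stratum probabilities and effect sizes below are illustrative assumptions:

```python
import random

random.seed(3)
n = 50_000
rows = []
for _ in range(n):
    u = random.gauss(0, 1)                    # unmeasured frailty
    c1 = 1 if random.random() < 0.9 else 0    # completes protocol if treated
    c0 = 1 if random.random() < 0.6 + 0.2 * (u > 0) else 0   # ... if control
    y1 = 1.0 + u + random.gauss(0, 1)         # potential outcomes; true
    y0 = 0.0 + u + random.gauss(0, 1)         # always-completer effect = 1.0
    z = random.randint(0, 1)
    rows.append((z, c1 if z else c0, y1 if z else y0, c1 * c0, y1, y0))

# SACE: mean effect in the latent always-completer stratum (knowable here
# only because the strata are simulated, never observable in a real trial)
strat = [(y1, y0) for _, _, _, s, y1, y0 in rows if s]
sace = sum(a - b for a, b in strat) / len(strat)

# observed per-protocol contrast: completers in each arm -- a post-treatment
# selection, biased here because completion under control depends on u
pp1 = [y for z, c, y, _, _, _ in rows if z and c]
pp0 = [y for z, c, y, _, _, _ in rows if not z and c]
pp = sum(pp1) / len(pp1) - sum(pp0) / len(pp0)
```

Control-arm completers are selected toward high frailty u, so the PP contrast sits below the SACE; the dissertation's contribution is to state the assumptions under which the two coincide and to quantify departures when they do not.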
34

Estimating Causal Effects in the Presence of Spatial Interference

Zirkle, Keith W. 01 January 2019 (has links)
Environmental epidemiologists are increasingly interested in establishing causality between exposures and health outcomes. A popular model for causal inference is the Rubin Causal Model (RCM), which typically seeks to estimate the average difference in study units' potential outcomes. If the exposure Z is binary, then we may express this as E[Y(Z=1)-Y(Z=0)]. An important assumption under the RCM is no interference; that is, the potential outcomes of one unit are not affected by the exposure status of other units. If we expect spillover or diffusion of exposure effects based on units' proximity to other units, the no interference assumption is violated and several other causal estimands arise. For example, if we encode the influence of other study units on a unit in an adjacency matrix A, then we may estimate a direct effect, E[Y(Z=1,A)-Y(Z=0,A)], and a spillover effect, E[Y(Z,A)-Y(Z,A')]. This thesis presents novel methods for estimating causal effects under interference. We begin by outlining the potential outcomes framework and introducing the assumptions necessary for causal inference with no interference. We present an association study that assesses the relationship of animal feeding operations (AFOs) to groundwater nitrate in private wells in Iowa, USA. We then place the relationship in a causal framework where we estimate the causal effects of AFO placement on groundwater nitrate using propensity score-based methods. We proceed to causal inference with interference, which we motivate with examples from air pollution epidemiology where upwind events may affect downwind locations. We adapt assumptions for causal inference in social networks to causal inference with spatially structured interference. We then use propensity score-based methods to estimate both direct and spillover causal effects. 
We apply these methods to estimate the causal effects of the Environmental Protection Agency’s nonattainment regulation for particulate matter on lung cancer incidence in California, Georgia, and Kentucky using data from the Surveillance, Epidemiology, and End Results Program. As an alternative causal method, we motivate use of wind speed as an instrumental variable to define principal strata based on which study units are experiencing interference. We apply these methods to estimate the causal effects of air pollution on asthma incidence in the San Diego, California, USA region using data from the 500 Cities Project. All our methods are proposed in a Bayesian setting. We conclude by discussing the contributions of this thesis and the future of causal analysis in environmental epidemiology.
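The direct and spillover estimands above can be illustrated with a toy simulation in which each unit has a single neighbour and the outcome depends on both its own exposure and the neighbour's. The line-graph adjacency and effect sizes are illustrative assumptions, not the thesis's spatial model:

```python
import random

random.seed(4)
n = 10_000
z = [random.randint(0, 1) for _ in range(n)]
# adjacency: each unit's sole neighbour is the next unit on a ring
neighbour = [(i + 1) % n for i in range(n)]

# outcome depends on own exposure (direct) and the neighbour's (spillover)
y = [1.5 * z[i] + 0.5 * z[neighbour[i]] + random.gauss(0, 1)
     for i in range(n)]

def mean(v):
    return sum(v) / len(v)

# direct effect: contrast own exposure, holding the neighbour unexposed
direct = (mean([y[i] for i in range(n) if z[i] == 1 and z[neighbour[i]] == 0])
          - mean([y[i] for i in range(n) if z[i] == 0 and z[neighbour[i]] == 0]))
# spillover effect: contrast the neighbour's exposure among unexposed units
spill = (mean([y[i] for i in range(n) if z[i] == 0 and z[neighbour[i]] == 1])
         - mean([y[i] for i in range(n) if z[i] == 0 and z[neighbour[i]] == 0]))
```

A standard no-interference analysis would fold the 0.5 spillover into the "treatment effect" or into noise; separating the two contrasts is exactly what the interference estimands buy.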
35

Selection of smoothing parameters with application in causal inference

Häggström, Jenny January 2011 (has links)
This thesis is a contribution to the research area concerned with selection of smoothing parameters in the framework of nonparametric and semiparametric regression. Selection of smoothing parameters is one of the most important issues in this framework and the choice can heavily influence subsequent results. A nonparametric or semiparametric approach is often desirable when large datasets are available, since this allows us to make fewer and weaker assumptions than a parametric approach requires. In the first paper we consider smoothing parameter selection in nonparametric regression when the purpose is to accurately predict future or unobserved data. We study the use of accumulated prediction errors and make comparisons to leave-one-out cross-validation, which is widely used by practitioners. In the second paper a general semiparametric additive model is considered and the focus is on selection of smoothing parameters when optimal estimation of some specific parameter is of interest. We introduce a double smoothing estimator of a mean squared error and propose to select smoothing parameters by minimizing this estimator. Our approach is compared with existing methods. The third paper is concerned with the selection of smoothing parameters optimal for estimating average treatment effects defined within the potential outcome framework. For this estimation problem we propose double smoothing methods similar to the method proposed in the second paper. Theoretical properties of the proposed methods are derived and comparisons with existing methods are made by simulations. In the last paper we apply our results from the third paper by using a double smoothing method for selecting smoothing parameters when estimating average treatment effects on the treated. We estimate the effect on BMI of divorcing in middle age. Rich data on socioeconomic conditions, health and lifestyle from Swedish longitudinal registers are used.
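The leave-one-out cross-validation baseline that the first paper compares against can be sketched for a Nadaraya-Watson kernel smoother: score each candidate bandwidth by its leave-one-out prediction error and keep the minimiser. The test function, noise level, and bandwidth grid are illustrative assumptions:

```python
import math, random

def nw_fit(x0, xs, ys, h, skip=None):
    # Nadaraya-Watson estimate at x0 with a Gaussian kernel of bandwidth h,
    # optionally leaving observation `skip` out of the fit
    num = den = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        if i == skip:
            continue
        k = math.exp(-0.5 * ((x0 - xi) / h) ** 2)
        num += k * yi
        den += k
    return num / den

def loocv_score(xs, ys, h):
    # leave-one-out cross-validation: mean squared prediction error
    return sum((ys[i] - nw_fit(xs[i], xs, ys, h, skip=i)) ** 2
               for i in range(len(xs))) / len(xs)

random.seed(5)
xs = [random.uniform(0, 3) for _ in range(200)]
ys = [math.sin(2 * xi) + random.gauss(0, 0.3) for xi in xs]

grid = [0.05, 0.1, 0.2, 0.5, 1.0]
best_h = min(grid, key=lambda h: loocv_score(xs, ys, h))
```

Too small a bandwidth reproduces the noise, too large a bandwidth flattens the sine curve, and the cross-validated score penalises both, which is the trade-off the thesis's double smoothing criteria target for more specific estimands.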
36

The Left Hemisphere Interpreter and Confabulation : a Comparison

Åström, Frida January 2011 (has links)
The left hemisphere interpreter refers to a function in the left hemisphere of the brain that searches for and produces causal explanations for events, behaviours and feelings, even when no apparent pattern exists between them. Confabulation is said to occur when a person presents or acts on obviously false information, despite being aware that it is false. People who confabulate also tend to defend their confabulations even when they are presented with counterevidence. Research in these two areas seems to deal with the same phenomenon, namely the human tendency to infer explanations for events, even if the explanations have no actual basis in reality. Despite this, research on the left hemisphere interpreter has progressed relatively independently from research on the concept of confabulation. This thesis has therefore aimed at reviewing each area and comparing them in a search for common relations. What has been found as a common theme is the emphasis they both place on the potentially underlying function of the interpreter and of confabulation. Many researchers across the two fields stress the adaptive and vital function of keeping the brain free from both contradiction and unpredictability, and propose that this function is served by confabulations and the left hemisphere interpreter. This finding may provide a possible opening for collaboration across the fields, and for the continued understanding of these exciting and perplexing phenomena.
37

Education and Earnings for Poverty Reduction : Short-Term Evidence of Pro-Poor Growth from the Mexican Oportunidades Program

Si, Wei January 2011 (has links)
Education, as an indispensable component of human capital, has been acknowledged to play a critical role in economic growth, as theoretically elaborated by human capital theory and empirically confirmed by evidence from different parts of the world. The educational impact on growth is especially valuable and meaningful when it serves poverty reduction and the pro-poorness of growth. This paper re-explores the link between human capital development and poverty reduction by investigating the causal effect of education accumulation on earnings enhancement for anti-poverty and pro-poor growth. The analysis takes its evidence from a well-known conditional cash transfer (CCT) program, Oportunidades in Mexico. Aiming to alleviate poverty and promote a better future by investing in human capital for children and youth in poverty, this CCT program has been recognized as producing significant outcomes. The study investigates the short-term impact of education on the earnings of economically disadvantaged youth, using data on both the program's treated and untreated youth from urban areas in Mexico from 2002 to 2004. Two econometric techniques, difference-in-differences and difference-in-differences propensity score matching, are applied for estimation. The empirical analysis first establishes that youth under the program's schooling intervention possess an advantage in educational attainment over their non-intervention peers; with this education discrepancy as a prerequisite, further results show that earnings of the education-advantaged youth increased at a rate about 20 percent higher than earnings of their education-disadvantaged peers over the two years. This result confirms that education accumulation for the economically disadvantaged young has a positive impact on their earnings enhancement, implying a contribution to poverty reduction and the pro-poorness of growth.
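The difference-in-differences idea used above can be sketched in a few lines: subtract each group's own baseline, then subtract the untreated group's change from the treated group's, so that fixed group differences and the common time trend cancel. The group sizes, baseline gap, trend, and effect size below are illustrative assumptions, not Oportunidades estimates:

```python
import random

random.seed(6)
n = 4000
# treated (programme) and untreated youth, each observed at two waves
rows = []
for _ in range(n):
    d = random.randint(0, 1)                 # programme participation
    base = random.gauss(5.0 + 0.3 * d, 1.0)  # groups may differ at baseline
    y_pre = base + random.gauss(0, 0.5)
    y_post = base + 0.8 + 0.2 * d + random.gauss(0, 0.5)  # common trend 0.8,
    rows.append((d, y_pre, y_post))                       # true effect 0.2

def mean(v):
    return sum(v) / len(v)

# difference-in-differences: (treated change) minus (untreated change)
did = (mean([post - pre for d, pre, post in rows if d])
       - mean([post - pre for d, pre, post in rows if not d]))
```

The 0.3 baseline gap never enters the estimate; combining this with propensity score matching, as the paper does, additionally guards against the two groups being on different trends.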
38

Capture-recapture Estimation for Conflict Data and Hierarchical Models for Program Impact Evaluation

Mitchell, Shira Arkin 07 June 2014 (has links)
A relatively recent increase in the popularity of evidence-based activism has created a higher demand for statisticians to work on human rights and economic development projects. The statistical challenges of revealing patterns of violence in armed conflict require efficient use of the data, and careful consideration of the implications of modeling decisions on estimates. Impact evaluation of a complex economic development project requires careful consideration of causality and transparency to donors and beneficiaries. In this dissertation, I compare marginal and conditional models for capture-recapture, and develop new hierarchical models that accommodate challenges in data from the armed conflict in Colombia and, more generally, in many other capture-recapture settings. Additionally, I propose a study design for a non-randomized impact evaluation of the Millennium Villages Project (MVP), to be carried out during my postdoctoral fellowship. The design includes small area estimation of baseline variables, propensity score matching, and hierarchical models for causal inference.
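The basic two-list capture-recapture idea underlying the models above can be shown with the classical Chapman (bias-corrected Lincoln-Petersen) estimator: the overlap between two incomplete lists of victims implies a total population size. The population size and capture probabilities below are illustrative assumptions, far simpler than the conflict data's heterogeneous, multi-list setting:

```python
import random

random.seed(9)
N = 2000                       # true (in practice unknown) population size
pop = range(N)
# two independent, incomplete lists with capture probabilities 0.3 and 0.2
list1 = {i for i in pop if random.random() < 0.3}
list2 = {i for i in pop if random.random() < 0.2}

n1, n2, m = len(list1), len(list2), len(list1 & list2)
# Chapman's bias-corrected Lincoln-Petersen estimate of N
chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1
```

Real conflict lists violate the independence and equal-catchability assumptions baked into this estimator, which is precisely why the dissertation moves to hierarchical models over many lists.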
39

Topics in experimental and tournament design

Hennessy, Jonathan Philip 21 October 2014 (has links)
We examine three topics related to experimental design in this dissertation. Two are related to the analysis of experimental data and the other focuses on the design of paired comparison experiments, in this case knockout tournaments. The two analysis topics are motivated by how to estimate and test causal effects when the assignment mechanism fails to create balanced treatment groups. In Chapter 2, we apply conditional randomization tests to experiments where, through random chance, the treatment groups differ in their covariate distributions. In Chapter 4, we apply principal stratification to factorial experiments where the subjects fail to comply with their assigned treatment. The sources of imbalance differ, but, in both cases, ignoring the imbalance can lead to incorrect conclusions. In Chapter 3, we consider designing knockout tournaments to maximize different objectives given a prior distribution on the strengths of the players. These objectives include maximizing the probability the best player wins the tournament. Our emphasis on balance in the other two chapters comes from a desire to create a fair comparison between treatments. However, in this case, the design uses the prior information to intentionally bias the tournament in favor of the better players. / Statistics
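A minimal sketch of the conditional randomization test from Chapter 2: under the sharp null, re-randomise the treatment labels, but build the reference distribution only from re-randomisations at least as covariate-balanced as the observed assignment. The data-generating numbers and the balance criterion below are illustrative assumptions:

```python
import random

def cond_rand_test(y, x, z, n_draw=4000, seed=0):
    # conditional randomisation test (sketch): keep only re-randomisations
    # whose covariate imbalance does not exceed the observed imbalance,
    # then compare mean differences in the outcome under the sharp null
    rng = random.Random(seed)
    def diff(v, assign):
        t = [vi for vi, zi in zip(v, assign) if zi]
        c = [vi for vi, zi in zip(v, assign) if not zi]
        return sum(t) / len(t) - sum(c) / len(c)
    obs_effect = diff(y, z)
    obs_imb = abs(diff(x, z))          # observed covariate imbalance
    extreme = total = 0
    for _ in range(n_draw):
        perm = z[:]
        rng.shuffle(perm)
        if abs(diff(x, perm)) > obs_imb:
            continue                   # condition on comparable balance
        total += 1
        if abs(diff(y, perm)) >= abs(obs_effect):
            extreme += 1
    return extreme / total

random.seed(7)
n = 60
z = [1] * (n // 2) + [0] * (n // 2)
random.shuffle(z)
x = [random.gauss(0, 1) for _ in range(n)]            # baseline covariate
y = [1.5 * zi + 0.5 * xi + random.gauss(0, 1) for zi, xi in zip(z, x)]
p = cond_rand_test(y, x, z)
```

The unconditional test uses all re-randomisations; conditioning restricts the reference set to assignments resembling the one actually drawn, which is the chapter's response to chance covariate imbalance.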
40

Evaluating the Performance of Propensity Scores to Address Selection Bias in a Multilevel Context: A Monte Carlo Simulation Study and Application Using a National Dataset

Lingle, Jeremy Andrew 16 October 2009 (has links)
When researchers are unable to randomly assign students to treatment conditions, selection bias is introduced into the estimates of treatment effects. Random assignment to treatment conditions, which has historically been the scientific benchmark for causal inference, is often impossible or unethical to implement in educational systems. For example, researchers cannot deny services to those who stand to gain from participation in an academic program. Additionally, students select into a particular treatment group through processes that are impossible to control, such as those that result in a child dropping out of high school or attending a resource-starved school. Propensity score methods provide valuable tools for removing the selection bias from quasi-experimental research designs and observational studies by modeling the treatment assignment mechanism. The utility of propensity scores has been validated for the purpose of removing selection bias when the observations are assumed to be independent; however, the ability of propensity scores to remove selection bias in a multilevel context, in which group membership plays a role in the treatment assignment, is relatively unknown. A central purpose of the current study was to begin filling the gaps in knowledge regarding the performance of propensity scores for removing selection bias, as defined by covariate balance, in multilevel settings, using a Monte Carlo simulation study. The performance of propensity scores was also examined using a large-scale national dataset. Results from this study support the conclusion that the multilevel characteristics of a sample have a bearing on the ability of propensity scores to balance covariates between treatment and control groups. 
Findings suggest that propensity score estimation models should take into account the cluster-level effects when working with multilevel data; however, the numbers of treatment and control group individuals within each cluster must be sufficiently large to allow estimation of those effects. Propensity scores that take into account the cluster-level effects can have the added benefit of balancing covariates within each cluster as well as across the sample as a whole.
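Covariate balance, the performance criterion used throughout the study above, is commonly summarised by the standardised mean difference (SMD) before and after propensity score weighting. The single-level, known-propensity setup below is an illustrative assumption; the study's multilevel, estimated-propensity setting is more demanding:

```python
import math, random

def smd(x, z, w=None):
    # standardised mean difference of covariate x between groups,
    # optionally with weights (e.g. inverse-propensity weights)
    if w is None:
        w = [1.0] * len(x)
    def wstats(group):
        ws = [wi for wi, zi in zip(w, z) if zi == group]
        xs = [xi for xi, zi in zip(x, z) if zi == group]
        m = sum(wi * xi for wi, xi in zip(ws, xs)) / sum(ws)
        v = sum(wi * (xi - m) ** 2 for wi, xi in zip(ws, xs)) / sum(ws)
        return m, v
    m1, v1 = wstats(1)
    m0, v0 = wstats(0)
    return (m1 - m0) / math.sqrt((v1 + v0) / 2)

random.seed(8)
n = 3000
x = [random.gauss(0, 1) for _ in range(n)]
# treatment probability rises with x, so the raw groups are imbalanced
e = [1 / (1 + math.exp(-xi)) for xi in x]
z = [1 if random.random() < ei else 0 for ei in e]
# inverse-probability weights from the (here, known) propensity score
w = [1 / ei if zi else 1 / (1 - ei) for ei, zi in zip(e, z)]

raw = abs(smd(x, z))        # substantial imbalance before weighting
weighted = abs(smd(x, z, w))  # near zero after weighting
```

In the multilevel case the same diagnostic can be computed within each cluster as well as across the whole sample, which is the two-level notion of balance the findings above point to.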
