  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Evaluating causal effect in time-to-event observational data with propensity score matching

Zhu, Danqi 07 June 2016 (has links)
No description available.
22

The Validity of Summary Comorbidity Measures

Gilbert, Elizabeth January 2016 (has links)
Prognostic scores, and more specifically comorbidity scores, are important and widely used measures in the health care field and in health services research. A comorbidity is an existing disease an individual has in addition to a primary condition of interest, such as cancer. A comorbidity score is a summary score that can be created from these individual comorbidities for prognostic purposes, as well as for confounding adjustment. Despite their widespread use, the properties of, and conditions under which, comorbidity scores are valid dimension reduction tools in statistical models are largely unknown. This dissertation explores the use of summary comorbidity measures in statistical models. Three particular aspects are examined. First, it is shown that, under standard conditions, the predictive ability of these summary comorbidity measures remains as accurate as that of the individual comorbidities in regression models, which can include factors such as treatment variables and additional covariates. However, these results hold only when no interaction exists between the individual comorbidities and any additional covariate; using summary comorbidity measures in the presence of such interactions leads to biased results. Second, it is shown that these measures are also valid in the causal inference framework, through confounding adjustment in estimating treatment effects. Lastly, we introduce a time-dependent extension of summary comorbidity scores. This time-dependent score can account for changes in patients' health over time and is shown to be a more accurate predictor of patient outcomes. A data example using breast cancer data from the SEER-Medicare Database is used throughout this dissertation to illustrate the application of these results to the health care field. / Statistics
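As an illustration of the abstract's first claim (that a correctly weighted summary score predicts as well as the individual comorbidities when no interactions are present), here is a minimal simulation sketch in Python. The comorbidity weights, prevalences, sample size, and linear outcome model are all invented for the example; this is not the dissertation's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Three binary comorbidity indicators with hypothetical prevalences.
X = rng.binomial(1, [0.3, 0.2, 0.1], size=(n, 3)).astype(float)
w = np.array([1.0, 2.0, 3.0])            # hypothetical prognostic weights
score = X @ w                            # summary comorbidity score
treat = rng.binomial(1, 0.5, n).astype(float)

# Outcome with a treatment effect of 0.5 and NO score-by-covariate interaction.
y = 0.5 * treat + score + rng.normal(0.0, 1.0, n)

def ols(design, resp):
    beta, *_ = np.linalg.lstsq(design, resp, rcond=None)
    return beta

ones = np.ones(n)
b_full = ols(np.column_stack([ones, treat, X]), y)      # individual comorbidities
b_red = ols(np.column_stack([ones, treat, score]), y)   # summary score only

# With no interactions, both models recover essentially the same treatment effect.
print(round(b_full[1], 2), round(b_red[1], 2))
```

Adding an interaction term such as `treat * X[:, 0]` to the data-generating model would break this agreement, which is the bias the abstract warns about.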
23

Causal Inference Using Bayesian Network For Search And Rescue

Belden, Amanda 01 June 2024 (has links) (PDF)
In Search and Rescue (SAR) missions, people who are considered missing have a much higher probability of being found dead than those who are not considered missing. Dementia patients are especially likely to be declared missing; in fact, after removing those with dementia, the probability of a mission being regarded as a missing person case is only about 10%. Additionally, those who go missing are much more likely to be on private land than in protected areas such as forests and parks. These and similar associations can be represented and investigated using a Bayesian network trained on Search and Rescue mission data. By finding associations between factors that affect these missions, SAR teams can find patterns in historical cases and apply them to future cases in order to narrow down their search areas, improve their plans, and hopefully achieve lower search times and fewer deaths and unsolved cases. Causal inference allows causal relationships to be determined, so that SAR teams can base current decisions on these learned relationships with the expectation that their decisions will cause the changes predicted by the Bayesian network.
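A Bayesian network over mission variables is, at bottom, a set of conditional probability tables estimated from data. The toy sketch below estimates two such conditional probabilities from a handful of made-up mission records; the counts are hypothetical and far smaller than any real SAR dataset.

```python
# Hypothetical SAR mission records: (has_dementia, declared_missing)
records = [
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False), (False, False),
    (False, False), (False, True),
]

def cond_prob(data, given, outcome):
    """Estimate P(outcome | given) from raw counts."""
    subset = [r for r in data if given(r)]
    return sum(outcome(r) for r in subset) / len(subset)

# P(declared missing | dementia) vs P(declared missing | no dementia)
p_miss_dem = cond_prob(records, lambda r: r[0], lambda r: r[1])
p_miss_nodem = cond_prob(records, lambda r: not r[0], lambda r: r[1])
print(p_miss_dem, p_miss_nodem)  # 0.75 vs about 0.33
```

A trained Bayesian network stores tables like these for each node given its parents, which is what lets SAR teams query, say, the probability of a fatal outcome given terrain type and subject category.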
24

Instrumental variable and longitudinal structural equation modelling methods for causal mediation : the PACE trial of treatments for chronic fatigue syndrome

Goldsmith, Kimberley January 2014 (has links)
Background: Understanding complex psychological treatment mechanisms is important in order to refine and improve treatment. Mechanistic theories can be evaluated using mediation analysis methods. The Pacing, Graded Activity, and Cognitive Behaviour Therapy: A Randomised Evaluation (PACE) trial studied complex therapies for the treatment of chronic fatigue syndrome. The aim of the project was to study different mediation analysis methods using PACE trial data, and to make trial design recommendations based upon the findings. Methods: PACE trial data were described using summary statistics and correlation analyses. Mediation estimates were derived using: the product of coefficients approach; instrumental variable (IV) methods with randomisation-by-baseline-variable interactions as IVs; and dual process longitudinal structural equation models (SEM). Monte Carlo simulation studies were done to further explore the behaviour of IV estimators and to examine aspects of the SEM. Results: Cognitive and behavioural measures were mediators of the cognitive behavioural and graded exercise therapies in PACE. Results were robust when accounting for correlated measurement error and different SEM structures. The randomisation-by-baseline interaction IVs were weak, giving imprecise and sometimes extreme estimates, leaving their utility unclear. A flexible version of a latent change SEM with contemporaneous mediation effects and contemporaneous correlated measurement errors was the most appropriate longitudinal model. Conclusions: IV methods using interaction IVs are unlikely to be useful; designs with a randomised IV might be more suitable. Longitudinal SEM for mediation in clinical trials seems a promising approach. Mediation estimates from SEM were generally robust when allowing for correlated measurement error and for different model classes. Mediation analysis in trials should be longitudinal and should consider the number and timing of measures at the design stage.
Using appropriate methods for studying mediation in trials will help clarify treatment mechanisms of action and allow for their refinement, which would maximize the information gained from trials and benefit patients.
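The product of coefficients approach named in the Methods can be sketched with two ordinary regressions: one for the treatment-to-mediator path (a) and one for the mediator-to-outcome path (b), with the indirect effect estimated as a*b. The data below are simulated with invented path values; this is a generic sketch, not the PACE analysis, which used interaction IVs and longitudinal SEM.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.binomial(1, 0.5, n).astype(float)        # randomised treatment
m = 0.8 * z + rng.normal(0.0, 1.0, n)            # mediator, a-path = 0.8
y = 0.5 * m + 0.3 * z + rng.normal(0.0, 1.0, n)  # outcome, b-path = 0.5

def ols(design, resp):
    beta, *_ = np.linalg.lstsq(design, resp, rcond=None)
    return beta

ones = np.ones(n)
a = ols(np.column_stack([ones, z]), m)[1]        # treatment -> mediator
b = ols(np.column_stack([ones, z, m]), y)[2]     # mediator -> outcome, given z
indirect = a * b                                 # product-of-coefficients estimate
print(round(indirect, 2))                        # close to 0.8 * 0.5 = 0.4
```

The thesis's point is that this simple estimator rests on strong assumptions (no unmeasured mediator-outcome confounding, no measurement error), which is what motivates the IV and longitudinal SEM alternatives.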
25

Principal stratification : applications and extensions in clinical trials with intermediate variables

Lou, Yiyue 15 December 2017 (has links)
Randomized clinical trials (RCTs) are considered the "gold standard" for demonstrating a causal relationship between a treatment and an outcome, because complete randomization ensures that the only systematic difference between the two groups being compared is the treatment. The intention-to-treat (ITT) comparison has long been regarded as the preferred analytic approach for RCTs. However, if there exists an "intermediate" variable between the treatment and the outcome, and the analysis conditions on this intermediate, randomization will break down, and the ITT approach does not account properly for the intermediate. In this dissertation, we explore the principal stratification approach for dealing with intermediate variables, illustrate its applications in two different clinical trial settings, and extend the existing analytic approaches to address specific challenges in these settings. The first part of our work focuses on clinical endpoint bioequivalence (BE) studies with noncompliance and missing data. In clinical endpoint BE studies, the primary analysis for assessing equivalence between a generic and an innovator product is usually based on the observed per-protocol (PP) population (usually completers and compliers). The FDA Missing Data Working Group recently recommended using "causal estimands of primary interest." This PP analysis, however, is not generally causal, because the observed PP population is post-treatment, and conditioning on it may introduce selection bias. To date, no causal estimand has been proposed for equivalence assessment. We propose co-primary causal estimands to test equivalence by applying the principal stratification approach. We discuss, and verify by simulation, the causal assumptions under which the current PP estimator is unbiased for the primary principal stratum causal estimand – the "Survivor Average Causal Effect" (SACE).
We also propose tipping point sensitivity analysis methods to assess the robustness of the current PP estimator as an estimate of the SACE when these causal assumptions are not met. Data from a clinical endpoint BE study are used to illustrate the proposed co-primary causal estimands and sensitivity analysis methods. Our work introduces a causal framework for equivalence assessment in clinical endpoint BE studies with noncompliance and missing data. The second part of this dissertation targets the use of principal stratification analysis approaches in a pragmatic randomized clinical trial -- the Patient Activation after DXA Result Notification (PAADRN) study. PAADRN is a multi-center, pragmatic randomized clinical trial that was designed to improve bone health. Participants were randomly assigned either to an intervention group, with usual care augmented by a tailored patient-activation Dual-energy X-ray absorptiometry (DXA) results letter accompanied by an educational brochure, or to a control group with usual care only. The primary analyses followed the standard ITT principle, which provided a valid estimate of the effect of intervention assignment. However, the findings might underestimate the effect of the intervention, because the letter cannot have an effect if the patient did not read, remember, and act on it. We apply principal stratification to evaluate the effectiveness of PAADRN for subgroups defined by patients' recall of having received a DXA result letter, a post-treatment intermediate outcome. We perform simulation studies to compare principal score weighting methods with instrumental variable (IV) methods. We examine principal strata causal effects on three outcome measures regarding pharmacological treatment and bone health behaviors. Finally, we conduct sensitivity analyses to assess the effect of potential violations of relevant causal assumptions. Our work is an important addition to the primary findings based on ITT.
It provides a deeper understanding of why the PAADRN intervention does (or does not) work for patients with different letter recall statuses, and sheds light on how the intervention might be improved.
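Principal strata are defined by potential values of the intermediate, e.g. compliers who would take the treatment only if assigned to it. Under the usual IV assumptions (randomisation, monotonicity, exclusion restriction), the complier-stratum effect is identified by the classic Wald ratio. The sketch below simulates this with invented stratum proportions and effect sizes; it illustrates the general mechanics, not the estimands of either study above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
# Latent principal strata: 70% compliers, 30% never-takers (hypothetical).
complier = rng.random(n) < 0.7
z = rng.binomial(1, 0.5, n)                       # randomised assignment
d = (complier & (z == 1)).astype(float)           # treatment actually received
y = 1.0 * d + rng.normal(0.0, 1.0, n)             # true effect 1.0 when treated

# Wald ratio: ITT effect on the outcome divided by ITT effect on uptake.
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()
cace = itt_y / itt_d                              # complier average causal effect
print(round(cace, 1))
```

Note how the ITT effect on the outcome alone (diluted by never-takers) understates the within-stratum effect, which is the underestimation concern the PAADRN abstract raises.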
26

Selection of smoothing parameters with application in causal inference

Häggström, Jenny January 2011 (has links)
This thesis is a contribution to the research area concerned with the selection of smoothing parameters in the framework of nonparametric and semiparametric regression. Selection of smoothing parameters is one of the most important issues in this framework, and the choice can heavily influence subsequent results. A nonparametric or semiparametric approach is often desirable when large datasets are available, since it allows us to make fewer and weaker assumptions than are needed in a parametric approach. In the first paper we consider smoothing parameter selection in nonparametric regression when the purpose is to accurately predict future or unobserved data. We study the use of accumulated prediction errors and make comparisons to leave-one-out cross-validation, which is widely used by practitioners. In the second paper a general semiparametric additive model is considered and the focus is on selection of smoothing parameters when optimal estimation of some specific parameter is of interest. We introduce a double smoothing estimator of a mean squared error and propose to select smoothing parameters by minimizing this estimator. Our approach is compared with existing methods. The third paper is concerned with the selection of smoothing parameters optimal for estimating average treatment effects defined within the potential outcome framework. For this estimation problem we propose double smoothing methods similar to the method proposed in the second paper. Theoretical properties of the proposed methods are derived and comparisons with existing methods are made by simulations. In the last paper we apply our results from the third paper by using a double smoothing method for selecting smoothing parameters when estimating average treatment effects on the treated. We estimate the effect on BMI of divorcing in middle age. Rich data on socioeconomic conditions, health and lifestyle from Swedish longitudinal registers are used.
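For readers unfamiliar with the baseline method the first paper compares against, leave-one-out cross-validation for a smoothing (bandwidth) parameter looks roughly like this. The Nadaraya-Watson kernel estimator, the data-generating curve, and the candidate bandwidths are all invented for the sketch; the thesis's own proposals (accumulated prediction errors, double smoothing) are different criteria applied to the same kind of search.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)

def nw_predict(x_train, y_train, x0, h):
    """Nadaraya-Watson estimate at x0 with Gaussian kernel bandwidth h."""
    w = np.exp(-0.5 * ((x0 - x_train) / h) ** 2)
    return np.sum(w * y_train) / np.sum(w)

def loocv_error(h):
    """Mean squared leave-one-out prediction error for bandwidth h."""
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        pred = nw_predict(x[mask], y[mask], x[i], h)
        errs.append((y[i] - pred) ** 2)
    return float(np.mean(errs))

bandwidths = [0.005, 0.02, 0.05, 0.1, 0.3]
scores = {h: loocv_error(h) for h in bandwidths}
best = min(scores, key=scores.get)
print(best)  # an intermediate bandwidth: tiny h overfits, huge h oversmooths
```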
27

The Left Hemisphere Interpreter and Confabulation : a Comparison

Åström, Frida January 2011 (has links)
The left hemisphere interpreter refers to a function in the left hemisphere of the brain that searches for and produces causal explanations for events, behaviours and feelings, even when no apparent causal pattern exists between them. Confabulation is said to occur when a person presents or acts on obviously false information without being aware that it is false. People who confabulate also tend to defend their confabulations even when they are presented with counterevidence. Research related to these two areas seems to deal with the same phenomenon, namely the human tendency to infer explanations for events, even if the explanations have no actual bearing in reality. Despite this, research on the left hemisphere interpreter has progressed relatively independently from research related to the concept of confabulation. This thesis has therefore aimed at reviewing each area and comparing them in a search for common relations. A common theme that emerges is the emphasis both fields place on the function potentially underlying the interpreter and confabulation. Many researchers across the two fields stress the adaptive and vital function of keeping the brain free from both contradiction and unpredictability, and propose that this function is served by confabulations and the left hemisphere interpreter. This finding may provide a possible opening for collaboration across the fields, and for the continued understanding of these exciting and perplexing phenomena.
28

Education and Earnings for Poverty Reduction : Short-Term Evidence of Pro-Poor Growth from the Mexican Oportunidades Program

Si, Wei January 2011 (has links)
Education, as an indispensable component of human capital, has been acknowledged to play a critical role in economic growth; this role is theoretically elaborated by human capital theory and empirically confirmed by evidence from different parts of the world. The educational impact on growth is especially valuable and meaningful when it serves poverty reduction and the pro-poorness of growth. This paper re-examines the link between human capital development and poverty reduction by investigating the causal effect of education accumulation on earnings enhancement for anti-poverty and pro-poor growth. The analysis takes its evidence from a well-known conditional cash transfer (CCT) program — Oportunidades in Mexico. Aiming at alleviating poverty and promoting a better future by investing in the human capital of children and youth in poverty, this CCT program has been recognized as producing significant outcomes. The study investigates the short-term impact of education on the earnings of economically disadvantaged youth, using data on both the program's treated and untreated youth from urban areas in Mexico from 2002 to 2004. Two econometric techniques, difference-in-differences and difference-in-differences propensity score matching, are applied for estimation. The empirical analysis first identifies that youth under the program's schooling intervention possess an advantage in educational attainment over their non-intervention peers; with this education discrepancy established as a prerequisite, further results show that earnings of the education-advantaged youth increased at a rate about 20 percent higher than earnings of their education-disadvantaged peers over the two years. This result confirms that education accumulation for the economically disadvantaged young has a positive impact on their earnings enhancement, suggesting a contribution to poverty reduction and the pro-poorness of growth.
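The first of the two estimation techniques, difference-in-differences, amounts to subtracting the control group's before-after change from the treated group's. A minimal sketch with invented log-earnings data (not the Oportunidades data; the trend, effect size, and sample size are made up):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
treated = rng.binomial(1, 0.5, n)

# Hypothetical log-earnings: baseline gap 0.3, common time trend 0.05,
# and a treatment effect of 0.20 appearing only in the post period.
y_pre = 1.0 + 0.3 * treated + rng.normal(0.0, 0.5, n)
y_post = y_pre + 0.05 + 0.20 * treated + rng.normal(0.0, 0.5, n)

did = ((y_post[treated == 1].mean() - y_pre[treated == 1].mean())
       - (y_post[treated == 0].mean() - y_pre[treated == 0].mean()))
print(round(did, 2))  # recovers the 0.20 effect despite the baseline gap
```

The subtraction removes both the pre-existing gap between groups and the common trend, which is why the paper pairs it with propensity score matching to strengthen the comparability of the two groups.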
29

Capture-recapture Estimation for Conflict Data and Hierarchical Models for Program Impact Evaluation

Mitchell, Shira Arkin 07 June 2014 (has links)
A relatively recent increase in the popularity of evidence-based activism has created a higher demand for statisticians to work on human rights and economic development projects. The statistical challenges of revealing patterns of violence in armed conflict require efficient use of the data, and careful consideration of the implications of modeling decisions on estimates. Impact evaluation of a complex economic development project requires a careful consideration of causality and transparency to donors and beneficiaries. In this dissertation, I compare marginal and conditional models for capture-recapture, and develop new hierarchical models that accommodate challenges in data from the armed conflict in Colombia and, more generally, in many other capture-recapture settings. Additionally, I propose a study design for a non-randomized impact evaluation of the Millennium Villages Project (MVP), to be carried out during my postdoctoral fellowship. The design includes small area estimation of baseline variables, propensity score matching, and hierarchical models for causal inference.
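The simplest capture-recapture estimator, the two-list Lincoln-Petersen estimator, conveys the core idea behind the dissertation's more elaborate hierarchical models: the overlap between independently compiled lists indicates how many cases went undocumented. The counts below are invented for the sketch.

```python
# Hypothetical two-list capture-recapture (Lincoln-Petersen estimator).
n1 = 400   # victims documented by source A
n2 = 300   # victims documented by source B
m = 120    # victims appearing on both lists

# If the lists were independent samples, the capture rate of source A is
# roughly m / n2, so the estimated total population is:
n_hat = n1 * n2 / m
print(n_hat)  # 1000.0, implying about 420 victims documented by neither list
```

Real conflict data violate the independence and homogeneity assumptions behind this ratio, which is precisely what motivates conditional and hierarchical multi-list models.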
30

Topics in experimental and tournament design

Hennessy, Jonathan Philip 21 October 2014 (has links)
We examine three topics related to experimental design in this dissertation. Two are related to the analysis of experimental data and the other focuses on the design of paired comparison experiments, in this case knockout tournaments. The two analysis topics are motivated by how to estimate and test causal effects when the assignment mechanism fails to create balanced treatment groups. In Chapter 2, we apply conditional randomization tests to experiments where, through random chance, the treatment groups differ in their covariate distributions. In Chapter 4, we apply principal stratification to factorial experiments where the subjects fail to comply with their assigned treatment. The sources of imbalance differ, but, in both cases, ignoring the imbalance can lead to incorrect conclusions. In Chapter 3, we consider designing knockout tournaments to maximize different objectives given a prior distribution on the strengths of the players. These objectives include maximizing the probability the best player wins the tournament. Our emphasis on balance in the other two chapters comes from a desire to create a fair comparison between treatments. However, in this case, the design uses the prior information to intentionally bias the tournament in favor of the better players. / Statistics
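The tournament-design point in Chapter 3 (that a bracket can be intentionally biased toward stronger players) can be illustrated with a four-player knockout under a Bradley-Terry model. The player strengths, bracket layouts, and trial count are invented for the sketch; it is not the dissertation's optimization method.

```python
import random

random.seed(0)
strengths = [4.0, 3.0, 2.0, 1.0]  # hypothetical strengths; player 0 is best

def win_prob(i, j):
    """Bradley-Terry probability that player i beats player j."""
    return strengths[i] / (strengths[i] + strengths[j])

def play(i, j):
    return i if random.random() < win_prob(i, j) else j

def best_wins_rate(bracket, trials=20000):
    """Monte Carlo estimate of P(player 0 wins) for a given bracket order."""
    wins = 0
    for _ in range(trials):
        a = play(bracket[0], bracket[1])   # first semifinal
        b = play(bracket[2], bracket[3])   # second semifinal
        if play(a, b) == 0:                # final
            wins += 1
    return wins / trials

# Seeded bracket keeps the two strongest apart; the adjacent bracket
# forces them to meet in round one.
seeded = best_wins_rate([0, 3, 1, 2])
adjacent = best_wins_rate([0, 1, 2, 3])
print(seeded > adjacent)  # seeding favours the best player
```

Maximizing the probability that the best player wins over all bracket orders, given a prior on strengths, is the kind of objective Chapter 3 formalizes.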
