
Estimating Causal Direct and Indirect Effects in the Presence of Post-Treatment Confounders: A Simulation Study

January 2013
abstract: In investigating mediating processes, researchers usually use randomized experiments together with linear regression or structural equation modeling to determine whether the treatment affects the hypothesized mediator and whether the mediator affects the targeted outcome. However, randomizing the treatment will not yield accurate causal path estimates unless certain assumptions are satisfied. Since randomization of the mediator is not plausible for most studies (i.e., mediator status is not randomly assigned but self-selected by participants), both the direct and indirect effects may be biased by confounding variables. The purpose of this dissertation is (1) to investigate the extent to which traditional mediation methods are affected by confounding variables and (2) to assess the statistical performance of several modern methods that address confounding in mediation analysis. The dissertation first reviews the theoretical foundations of causal inference in statistical mediation analysis and modern statistical methods for causal inference, and then describes different methods for estimating causal direct and indirect effects in the presence of two post-treatment confounders. A large simulation study was designed to evaluate the extent to which ordinary regression and modern causal inference methods obtain correct estimates of the direct and indirect effects when confounding variables present in the population are not included in the analysis. Five methods were compared in terms of bias, relative bias, mean square error, statistical power, Type I error rates, and confidence interval coverage, to test how robust the methods are to violation of the no-unmeasured-confounders assumption across a range of confounder effect sizes. The methods explored were linear regression with adjustment, inverse propensity weighting, inverse propensity weighting with truncated weights, sequential g-estimation, and doubly robust sequential g-estimation.
Results showed that, in estimating the direct and indirect effects, sequential g-estimation generally performed best in terms of bias, Type I error rates, power, and coverage across different confounder effect sizes, direct effect sizes, and sample sizes when all confounders were included in the estimation. When one of the two confounders was omitted from the estimation process, none of the methods achieved acceptable relative bias in the simulation study. Omitting one of the confounders from estimation corresponds to the common case in mediation studies where no measure of a confounder is available but the confounder may nonetheless affect the analysis. Failing to measure potential post-treatment confounder variables in a mediation model leads to biased estimates regardless of the analysis method used, which underscores the importance of sensitivity analysis for causal mediation analysis. / Dissertation/Thesis / Ph.D. Psychology 2013
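The inverse propensity weighting with truncated weights that the abstract compares can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the dissertation's actual design: the data-generating process, the logistic selection model for the self-selected exposure, and the 99th-percentile truncation point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative data: a self-selected binary exposure Z (standing in for
# mediator status), a measured confounder C, and an outcome Y.
C = rng.normal(size=n)
p_z = 1 / (1 + np.exp(-0.8 * C))            # assumed logistic selection model
Z = rng.binomial(1, p_z)
Y = 0.5 * Z + 0.7 * C + rng.normal(size=n)  # true exposure effect = 0.5

# Inverse propensity weights from the (here, known) selection probabilities
w = Z / p_z + (1 - Z) / (1 - p_z)

# Truncation caps extreme weights, trading a little bias for lower variance
w_trunc = np.minimum(w, np.quantile(w, 0.99))

def weighted_diff(weights):
    """Weighted difference in mean outcome between exposure groups."""
    m1 = np.sum(weights * Z * Y) / np.sum(weights * Z)
    m0 = np.sum(weights * (1 - Z) * Y) / np.sum(weights * (1 - Z))
    return m1 - m0

naive = Y[Z == 1].mean() - Y[Z == 0].mean()  # confounded raw comparison
ipw = weighted_diff(w)                       # weighting removes C's confounding
ipw_t = weighted_diff(w_trunc)
```

In this setup the raw group difference is biased upward because C raises both selection and the outcome, while the weighted estimates recover a value near the true 0.5.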

Surveys under informative sampling using the FBST

Azerêdo, Daniel Mendes, 28 May 2013
Pfeffermann, Krieger and Rinott (1998) introduced a framework for modeling sampling processes that can be used to assess whether a sampling process is informative. In this setting, sample selection probabilities are approximated by a polynomial function of the outcome and auxiliary variables. Within this framework, our main purpose is to investigate the application of the Full Bayesian Significance Test (FBST), introduced by Pereira and Stern (1999), as a tool for testing sampling ignorability, that is, for detecting a significant relation between the sample selection probabilities and the outcome variable. The performance of this statistical modelling framework is tested with simulation experiments.
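The FBST evidence value referred to above can be sketched for the simplest case: a sharp null about a scalar parameter whose posterior is normal. The evidence in favor of H0 is one minus the posterior mass of the "tangential set", the points whose posterior density exceeds the supremum of the density over the null. The posterior mean, standard deviation, and null value below are made up for illustration.

```python
import math
import numpy as np

# Assumed posterior for a scalar parameter theta: N(m, s^2)
m, s = 1.2, 0.5
theta0 = 0.0                     # sharp null H0: theta = theta0

def post_pdf(x):
    """Density of the N(m, s^2) posterior."""
    return np.exp(-((x - m) ** 2) / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))

# Supremum of the posterior density over H0 (a single point here)
p_star = post_pdf(theta0)

# Monte Carlo approximation of the posterior mass of the tangential set
rng = np.random.default_rng(0)
draws = rng.normal(m, s, size=200_000)
tangential_mass = np.mean(post_pdf(draws) > p_star)

# FBST evidence value in favor of H0
ev = 1.0 - tangential_mass
```

A small `ev` means the null point sits far out in the posterior's tails, i.e., strong evidence against ignorability in the paper's application; for a symmetric posterior the Monte Carlo value agrees with the closed-form two-tail probability.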

Did the Russian Invasion of Ukraine Strengthen European Identity? Utilizing the Unexpected Event during Survey Design: A Quasi-Experimental Approach

Portolani, Lyon, January 2024
In the wake of the Russian invasion of Ukraine, a compelling question arises: could this event have brought Europeans closer together? Armed conflicts often strengthen in-group identity as individuals seek safety from external threats. This study speculates that the perceived threat of the invasion might have intensified emotional attachments to Europe across the continent, and it additionally explores how the response differed between Western Europe and Central and Eastern European countries. The study bases its conceptualization, hypotheses, and interpretations on social identity theory, alongside a comprehensive review of the literature on armed conflict and identity. It utilizes a quasi-experimental method to investigate the probable causal link, drawing on 12 countries from the 10th round of the European Social Survey to generalize the findings across a diverse European population. The findings reveal that Europeans did not develop a stronger sense of European identity in response to the invasion, suggesting that Europeans do not perceive Europe as a meaningful identity to unite under or seek safety in when military conflicts intensify on the continent. This study contributes to the understanding that the European project, along with its socio-political efforts, has been relatively unsuccessful in establishing itself as a significant unifying point when conflicts intensify.

Combining Strategies for Estimation of Treatment Effects

Rafael de Carvalho Cayres Pinto, 19 January 2018
Estimation of the mean treatment effect is an important tool for evaluating economic policy. The main difficulty in this calculation is that assignment of potential participants to treatment is generally not random, which leads to selection bias when ignored. One solution is to suppose that the econometrician observes a set of covariates that determine participation, up to a strictly random component. Under this assumption, known as Ignorability, semiparametric estimation methods have been developed, including imputation of counterfactual outcomes and sample reweighting. Both are consistent and can asymptotically achieve the semiparametric efficiency bound. However, in sample sizes commonly available, their performance is not always satisfactory. The goal of this dissertation is to study how combining these strategies can lead to better estimation in small samples. We consider two ways of merging these methods, based on the doubly robust inference literature developed by James Robins and his co-authors, analyze their properties, and discuss why they can outperform each of their component techniques. Finally, we compare the proposed estimators to imputation and reweighting in a Monte Carlo exercise. The results show that combined strategies can reduce bias and variance, but this depends on how the combination is implemented. We conclude that the choice of smoothing parameters is decisive for estimation performance in moderate-size samples.
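The idea of combining imputation (an outcome regression) with reweighting can be sketched as the standard augmented inverse propensity weighting (AIPW) estimator from the doubly robust literature the abstract cites. The simulated data and the use of the true propensity score below are illustrative assumptions, not the dissertation's actual estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
X = rng.normal(size=n)
ps = 1 / (1 + np.exp(-X))               # propensity of treatment given X
T = rng.binomial(1, ps)
Y = 2.0 * T + X + rng.normal(size=n)    # true average treatment effect = 2

# Imputation component: a linear outcome model fit in each treatment arm
def fit_ols(x, y):
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda xnew: beta[0] + beta[1] * xnew

mu1 = fit_ols(X[T == 1], Y[T == 1])
mu0 = fit_ols(X[T == 0], Y[T == 0])

# AIPW: regression prediction plus a reweighted residual correction.
# The estimator is consistent if either the outcome model or the
# propensity score is correctly specified (here both are, by construction).
aipw = np.mean(
    mu1(X) - mu0(X)
    + T * (Y - mu1(X)) / ps
    - (1 - T) * (Y - mu0(X)) / (1 - ps)
)
```

The first term is the pure imputation estimate; the residual terms are the reweighting correction, which vanishes in expectation when the outcome model is right and repairs it when it is not.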
