1 |
Essays in the Economics of Aging. Mickey, Ryan, 17 December 2015 (has links)
In this dissertation, I explore how economic decisions diverge for different age groups. Two essays address the location decisions of older households while the third examines why different age cohorts donate to charities.
The first essay estimates how the age distribution of the population across cities will change as the number of older adults rises. I use a residential sorting model to estimate heterogeneity in location preferences between younger and older households, and I then simulate where the two household types will live in 2030. All MSAs end up with a higher proportion of older households in 2030, and only eight of 243 MSAs experience a decline in the number of older households. The results suggest that MSAs in upstate New York and on the West Coast, particularly in California, will have the largest number of older households in 2030. Florida will remain a popular destination for older households, but its relative importance may diminish in the future.
The second essay explores whether the basic motivations for charitable giving differ by age cohort. Using the results of a randomized field experiment, I test whether benefits to self or benefits to others drive the charitable giving decision for each age cohort. I find limited heterogeneity for benefits to self. Individuals between the ages of 50 and 64 increase average donations more than any other age cohort in response to emphasizing warm glow, and this heterogeneity is driven exclusively by larger conditional gifts.
The third essay is preliminary joint work with H. Spencer Banzhaf and Carlianne Patrick. We build a unique data set of local homestead exemptions, which vary in generosity and eligibility requirements, for tax jurisdictions in Georgia. Using school-district-level Census data since 1970 along with the history of such exemptions, we will explore the impact of these exemptions, particularly those targeting older households, on the demographic makeup of each jurisdiction, and we will consider how these laws affect the relative amounts of housing capital consumed by older and younger households.
|
2 |
Treatment Effect Heterogeneity and Statistical Decision-making in the Presence of Interference. Owusu, Julius, January 2023 (has links)
This dissertation consists of three chapters that focus on the design of welfare-maximizing treatment assignment rules in heterogeneous populations with interactions. In the first two chapters, I focus on an important preliminary step in the design of treatment assignment rules: inference for heterogeneous treatment effects in populations with interactions. In the final chapter, my co-authors and I study treatment assignment rules in the presence of social interaction in heterogeneous populations.
In chapter one, I argue that statistical inference of heterogeneous treatment effects (HTEs) across predefined subgroups is complicated when economic units interact, because treatment effects may vary with pretreatment variables, with post-treatment exposure variables (which measure exposure to other units' treatment statuses), or with both. This invalidates the standard hypothesis-testing technique used to infer HTEs. To address the problem, I develop statistical methods (asymptotic and bootstrap) to infer HTEs and to disentangle the drivers of treatment effect heterogeneity in populations where units interact. Specifically, I incorporate clustered interference into the potential outcomes model and propose kernel-based test statistics for the null hypotheses of (a) no HTEs by treatment assignment (or post-treatment exposure variables) for all values of the pretreatment variables; and (b) no HTEs by pretreatment variables for all treatment assignment vectors. To disentangle the source of heterogeneity in treatment effects, I recommend a multiple-testing algorithm. In addition, I prove the asymptotic properties of the proposed test statistics via a modern Poissonization technique.
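As a rough illustration of the flavor of such a procedure (not the chapter's exposure-based statistics, clustered-interference asymptotics, or Poissonization argument), the following Python sketch computes a sup-type kernel statistic for the null that the conditional treatment effect is constant in a single pretreatment covariate, and calibrates it with a recentred cluster bootstrap. All function and variable names are illustrative assumptions.

```python
import numpy as np

def kernel_cate(grid, X, Y, D, h):
    """Nadaraya-Watson estimates of E[Y | X = x, D = 1] - E[Y | X = x, D = 0]."""
    K = np.exp(-0.5 * ((grid[:, None] - X[None, :]) / h) ** 2)
    def nw(mask):
        W = K[:, mask]
        return (W @ Y[mask]) / np.clip(W.sum(axis=1), 1e-12, None)
    return nw(D == 1) - nw(D == 0)

def test_constant_cate(X, Y, D, cluster, h=0.5, B=499, seed=0):
    """Sup-type statistic for H0: the CATE does not vary with X,
    with a recentred cluster bootstrap for the critical value."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(X.min(), X.max(), 50)
    tau = kernel_cate(grid, X, Y, D, h)
    stat = np.max(np.abs(tau - tau.mean()))            # deviation from a flat CATE
    ids = np.unique(cluster)
    boot = np.empty(B)
    for b in range(B):
        draw = rng.choice(ids, size=ids.size, replace=True)   # resample whole clusters
        idx = np.concatenate([np.flatnonzero(cluster == c) for c in draw])
        tb = kernel_cate(grid, X[idx], Y[idx], D[idx], h)
        boot[b] = np.max(np.abs((tb - tb.mean()) - (tau - tau.mean())))  # recentred under H0
    return stat, float(np.mean(boot >= stat))           # statistic and bootstrap p-value
```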
As a robust alternative to the inferential methods proposed in chapter one, in chapter two I design randomization tests of heterogeneous treatment effects (HTEs) when units interact on a single network. My modeling strategy incorporates network interference into the potential outcomes framework through the concept of a network exposure mapping. I consider three null hypotheses that represent different notions of homogeneous treatment effects, but because of nuisance parameters and the multiplicity of potential outcomes, the hypotheses are not sharp. To address the issue of multiple potential outcomes, I propose a conditional randomization inference method that expands on existing methods. Additionally, I consider two techniques that overcome the nuisance-parameter issue. I show that my conditional randomization inference method, combined with either of the proposed techniques for handling nuisance parameters, produces asymptotically valid p-values.
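For intuition only, here is a bare-bones Fisher-style randomization test on a network, written in Python with an adjacency matrix and an exposure mapping equal to the fraction of treated neighbours. It tests the fully sharp null of no effect of either own treatment or exposure, so it deliberately sidesteps the nuisance-parameter and multiple-potential-outcome complications that the chapter's conditional method is designed to handle; it also assumes the original assignment was completely randomized with a fixed number of treated units. Names and the choice of test statistic are illustrative.

```python
import numpy as np

def exposure(A, d):
    """Exposure mapping: fraction of treated neighbours (A is a 0/1 adjacency matrix)."""
    deg = np.clip(A.sum(axis=1), 1, None)
    return (A @ d) / deg

def fisher_network_test(A, Y, D, n_draws=2000, seed=0):
    """Randomization p-value for the sharp null that outcomes do not depend on the
    treatment vector at all (neither own treatment nor neighbourhood exposure)."""
    rng = np.random.default_rng(seed)
    def stat(d):
        e = exposure(A, d)
        return abs(np.mean((e - e.mean()) * (Y - Y.mean())))  # exposure-outcome covariance
    t_obs = stat(D)
    draws = np.array([stat(rng.permutation(D)) for _ in range(n_draws)])
    return t_obs, float(np.mean(draws >= t_obs))              # statistic and p-value
```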
Chapter three is based on a joint paper with Young Ki Shin and Seungjin Han. We study treatment assignment rules in the presence of social interaction in heterogeneous populations. We construct an analytical framework under the anonymous interaction assumption, where the decision problem becomes one of choosing a treatment fraction. We propose a multinomial empirical success (MES) rule that includes the empirical success rule of Manski (2004) as a special case. We investigate non-asymptotic bounds on the expected utility of the MES rule. / Dissertation / Doctor of Philosophy (PhD)
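To convey the flavour of "empirical success over treatment fractions" (a stylized reading, not the paper's exact MES rule or its regret analysis), the Python sketch below picks, from experimental arms that each implemented a different candidate treatment fraction, the fraction with the highest sample mean outcome; the data and names are hypothetical.

```python
import numpy as np

def empirical_success_fraction(outcomes_by_fraction):
    """Stylized rule: choose the candidate treatment fraction whose experimental arm
    achieved the highest sample mean outcome (ties go to the smaller fraction)."""
    means = {f: float(np.mean(y)) for f, y in outcomes_by_fraction.items()}
    best = max(means, key=lambda f: (means[f], -f))
    return best, means

# toy usage: arms that treated 0%, 50%, and 100% of units in a pilot experiment
rng = np.random.default_rng(1)
arms = {0.0: rng.normal(0.2, 1, 500),
        0.5: rng.normal(0.6, 1, 500),
        1.0: rng.normal(0.4, 1, 500)}
choice, arm_means = empirical_success_fraction(arms)
print(choice)   # with these toy draws, the 50% arm is very likely to be selected
```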
|
3 |
On the effectiveness of EU structural funds during the Great Recession: Estimates from a heterogeneous local average treatment effects framework. Bachtrögler, Julia, 08 1900 (has links) (PDF)
This study investigates the heterogeneity of European NUTS-2 regions with regard to their ability to take advantage of European Union (EU) structural funds aimed at convergence. It considers a concept of absorptive capacity based on regional policy design, and additionally accounts for the programming period 2007-2013 in the empirical analysis. A fuzzy regression discontinuity design allowing for heterogeneous treatment effects is applied to evaluate convergence funds in 250 NUTS-2 regions from 2000 (and 1989) to 2013. The main results suggest a positive conditional impact of funds payments on regional GDP per capita growth. However, based on a time-varying treatment effects model, we are able to identify a deterioration in the effectiveness of convergence funds during the programming period 2007-2013. Furthermore, the analysis reveals an inverted U-shaped relationship between the share of committed funds paid out and GDP per capita growth. The latter finding indicates that the marginal benefits from EU convergence funds might be decreasing. (author's abstract) / Series: Department of Economics Working Paper Series
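As a schematic illustration of the estimation idea (a minimal local 2SLS fuzzy regression discontinuity with homogeneous effects, not the paper's heterogeneous or time-varying treatment effects framework), the Python sketch below instruments actual receipt of convergence funds with eligibility, here assumed to follow the standard rule of regional GDP per capita below 75% of the EU average; variable names and the bandwidth are illustrative assumptions.

```python
import numpy as np

def fuzzy_rdd_late(y, d, r, cutoff=75.0, bw=10.0):
    """Local 2SLS around the eligibility cutoff: y is regional GDP per capita growth,
    d a measure of funds treatment, r the running variable (GDP p.c. in % of EU average).
    Eligibility (r < cutoff) instruments d; local linear trends on each side of the cutoff."""
    inside = np.abs(r - cutoff) <= bw
    y, d, r = y[inside], d[inside], r[inside]
    z = (r < cutoff).astype(float)                       # eligibility instrument
    controls = np.column_stack([np.ones(r.size), r - cutoff, (r - cutoff) * z])
    # first stage: regress treatment on the instrument and running-variable controls
    Z = np.column_stack([z, controls])
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    # second stage: regress the outcome on fitted treatment and the same controls
    X2 = np.column_stack([d_hat, controls])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta[0]                                       # LATE at the cutoff
```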
|
4 |
ESSAYS ON INTERGENERATIONAL DEPENDENCY AND WELFARE REFORM. Hartley, Robert Paul, 01 January 2017 (has links)
This dissertation consists of three essays related to the effects of welfare reform on the intergenerational transmission of welfare participation, as well as its effects on labor supply and childcare arrangements. States implemented welfare reform at different times from 1992 to 1996, and these policies notably introduced work requirements and other restrictions intended to limit dependency among needy families. One mechanism the reforms were intended to address was childhood exposure to a "culture" of ongoing welfare receipt. In Essay 1, I estimate the effect of reform on the transmission of welfare participation for 2,961 mother-daughter pairs in the Panel Study of Income Dynamics (PSID) over the period 1968-2013. I find that a mother's welfare participation increased her daughter's odds of participation as an adult by roughly 30 percentage points, but that welfare reform attenuated this transmission by at least 50 percent, or at least 30 percent over the baseline odds of participation. While I find comparably sized transmission patterns in daughters' adult use of the broader safety net and in other outcomes such as educational attainment and income, there is no diminution of transmission after welfare reform for these outcomes. In Essay 2, I estimate behavioral labor supply responses to reform using experimental data from Connecticut's Jobs First welfare waiver program in 1996. Recent studies have used a distributional analysis of Jobs First to suggest that some individuals reduce hours in order to opt into welfare, an example of behavioral-induced participation. However, estimates obtained from a semi-parametric panel quantile estimator that allows women to vary arbitrarily in preferences and welfare participation costs indicate no evidence of behavioral-induced participation. These findings show that a welfare program imposes an estimated cost of up to 10 percent of quarterly earnings, and that these costs can be heterogeneous throughout the conditional earnings distribution. Lastly, in Essay 3, I return to the PSID to examine the relationship between welfare spending on childcare assistance and the care arrangements chosen by low-income families. Experimental evidence has shown that formal child care can produce long-term socioeconomic gains for disadvantaged children, and work requirements after welfare reform have increased the demand for child care among single mothers. I find that an increase of a thousand dollars in state-level childcare assistance per child in poverty increases the probability of formal care among low-earnings single-mother families by about 27 to 30 percentage points. When public assistance makes child care more affordable, families within the target population reveal a stronger preference for formal care relative to informal care, which may be related to perceived quality improvements for child enrichment and development.
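To make the Essay 1 design concrete, here is a toy Python sketch of the kind of interaction specification that could measure how reform attenuates mother-to-daughter transmission: a linear probability model in which the coefficient on the interaction of the mother's participation with a post-reform indicator captures the attenuation. The simulated data, variable names, and specification are hypothetical stand-ins, not the dissertation's actual PSID sample or its full set of controls.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulated stand-in for PSID mother-daughter pairs (hypothetical variable names)
rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({"mother_welfare": rng.binomial(1, 0.3, n),
                   "post_reform": rng.binomial(1, 0.5, n)})
p = 0.10 + 0.30 * df.mother_welfare - 0.15 * df.mother_welfare * df.post_reform
df["daughter_welfare"] = rng.binomial(1, p)

# linear probability model: the interaction term measures how much reform
# attenuates the mother-to-daughter transmission of welfare participation
m = smf.ols("daughter_welfare ~ mother_welfare * post_reform", data=df).fit(cov_type="HC1")
print(m.params[["mother_welfare", "mother_welfare:post_reform"]])
```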
|
5 |
Qini-based learning for the prediction of conditional causal effects (Apprentissage basé sur le Qini pour la prédiction de l'effet causal conditionnel). Belbahri, Mouloud-Beallah, 08 1900
Uplift models deal with cause-and-effect inference for a specific factor, such as a marketing intervention. In practice, these models are built on individual data from randomized experiments. A targeted group contains individuals who are subject to an action; a control group serves for comparison. Uplift modeling is used to order individuals with respect to the value of a causal effect, e.g., positive, neutral, or negative.
First, we propose a new way to perform model selection in uplift regression models. Our methodology is based on the maximization of the Qini coefficient. Because model selection corresponds to variable selection, the task is daunting and intractable if done in a straightforward manner when the number of variables to consider is large. To realistically search for a good model, we conceived a search method based on an efficient exploration of the regression coefficient space combined with a lasso penalization of the log-likelihood. There is no explicit analytical expression for the Qini surface, so unveiling it is not easy. Our idea is to gradually uncover the Qini surface in a manner inspired by response surface designs. The goal is to find a reasonable local maximum of the Qini by exploring the surface near optimal values of the penalized coefficients. We openly share our code through the R package tools4uplift. Though there are some computational methods available for uplift modeling, most of them exclude statistical regression models. Our package intends to fill this gap. It comprises tools for: i) quantization, ii) visualization, iii) variable selection, iv) parameter estimation and v) model validation. The package allows practitioners to use our methods with ease and to refer to the methodological papers for details.
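Since the Qini coefficient is the criterion being maximized, a small self-contained Python sketch of one common way to compute it from predicted uplift scores may help; conventions differ across the literature, and this does not claim to reproduce the tools4uplift implementation.

```python
import numpy as np

def qini_coefficient(uplift_score, y, t):
    """One common Qini convention: sort units by decreasing predicted uplift, build the
    incremental-gains (Qini) curve, and return the normalized gap between that curve
    and the straight line corresponding to random targeting."""
    order = np.argsort(-uplift_score)
    y, t = y[order], t[order]
    n = y.size
    cum_resp_treat = np.cumsum(y * t)                 # responders among treated so far
    cum_resp_ctrl = np.cumsum(y * (1 - t))            # responders among control so far
    n_treat, n_ctrl = np.cumsum(t), np.cumsum(1 - t)
    ratio = np.divide(n_treat, n_ctrl, out=np.zeros(n, dtype=float), where=n_ctrl > 0)
    qini_curve = cum_resp_treat - cum_resp_ctrl * ratio          # incremental gains
    random_line = qini_curve[-1] * np.arange(1, n + 1) / n       # random-targeting benchmark
    return float(np.mean(qini_curve - random_line))              # average vertical gap
```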
Uplift is a particular case of causal inference. Causal inference tries to answer questions such as "What would be the result if we gave this patient treatment A instead of treatment B?" The answer to this question is then used as a prediction for a new patient. In the second part of the thesis, we place more emphasis on prediction. Most existing approaches are adaptations of random forests to the uplift case. Several split criteria have been proposed in the literature, all relying on maximizing heterogeneity. In practice, however, these approaches are prone to overfitting. In this work, we bring a new vision to uplift modeling. We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk. Our solution is developed for a specific twin neural network architecture that jointly optimizes the marginal probabilities of success for treated and control individuals. We show that this model is a generalization of the uplift logistic interaction model. We also modify the stochastic gradient descent algorithm to allow for structured sparse solutions, which helps considerably in fitting our uplift models. We openly share our Python code for practitioners wishing to use our algorithms.
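For reference, the baseline that the twin network generalizes, the uplift logistic interaction model, can be sketched in a few lines of Python with scikit-learn; this shows only that standard baseline, not the thesis's twin architecture or its relative-risk-based loss, and the function names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_uplift_interaction(X, t, y, C=1.0):
    """Uplift logistic interaction model: P(Y=1 | X, T) = sigmoid(a + b'X + T*(c + d'X)).
    The predicted uplift is p(x, T=1) - p(x, T=0)."""
    Z = np.column_stack([X, t, X * t[:, None]])          # main effects + treatment interactions
    model = LogisticRegression(C=C, max_iter=1000).fit(Z, y)

    def predict_uplift(X_new):
        n = X_new.shape[0]
        Z1 = np.column_stack([X_new, np.ones(n), X_new])                  # everyone treated
        Z0 = np.column_stack([X_new, np.zeros(n), np.zeros_like(X_new)])  # nobody treated
        return model.predict_proba(Z1)[:, 1] - model.predict_proba(Z0)[:, 1]

    return model, predict_uplift
```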
We had the rare opportunity to collaborate with industry partners, gaining access to data from large-scale marketing campaigns well suited to the application of our methods. We show empirically that our methods are competitive with the state of the art on real data and across several simulated scenarios.
|