About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Biochemical and behavioural studies on aspects of central noradrenergic function following chronic antidepressant administration

Upton, S. January 1988 (has links)
No description available.
2

Statistical methods in AIDS progression studies with an analysis of the Edinburgh City Hospital Cohort

McNeil, Alexander John January 1993 (has links)
No description available.
3

Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials

Zhao, Siyuan 26 April 2018 (has links)
Personalized learning considers that the causal effects of a studied learning intervention may differ for the individual student (e.g., maybe girls do better with video hints while boys do better with text hints). To evaluate a learning intervention inside ASSISTments, we run a randomized controlled trial (RCT) by randomly assigning students to either a control condition or a treatment condition. Making inferences about the causal effects of the studied interventions is a central problem. Counterfactual inference answers "What if?" questions, such as "Would this particular student benefit more if the student were given the video hint instead of the text hint when the student cannot solve a problem?". Counterfactual prediction provides a way to estimate individual treatment effects and helps us assign students to the learning intervention that leads to better learning. A variant of Michael Jordan's "Residual Transfer Networks" was proposed for counterfactual inference. The model first uses feed-forward neural networks to learn a balancing representation of students by minimizing the distance between the distributions of the control and treated populations, and then adopts a residual block to estimate the individual treatment effect. Students in the RCT have usually solved a number of problems prior to participating in it, so each student has a sequence of actions (a performance sequence). We proposed a pipeline that uses the performance sequence to improve counterfactual inference. Since deep learning has achieved a great deal of success in learning representations from raw logged data, student representations were learned by applying a sequence autoencoder to the performance sequences; these representations were then incorporated into the model for counterfactual inference. Empirical results showed that the representations learned from the sequence autoencoder improved the performance of counterfactual inference.
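To make the balancing-representation-plus-residual idea concrete, here is a minimal sketch, not the thesis code: the layer sizes, the linear-MMD balance penalty, the squared-error loss, and all names are assumptions, and the sequence-autoencoder features would simply be appended to the input covariates.

```python
# Sketch of a balancing-representation network with a residual head for individual
# treatment effect estimation (in the spirit of residual transfer networks adapted
# to counterfactual inference). Architecture and penalty are illustrative assumptions.
import torch
import torch.nn as nn

class CounterfactualNet(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        # Shared representation learned from student covariates
        self.rep = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Outcome head for the control condition
        self.control_head = nn.Linear(hidden, 1)
        # Residual block: treated outcome = control prediction + learned correction,
        # which plays the role of the individual treatment effect
        self.residual = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        phi = self.rep(x)
        y0 = self.control_head(phi)      # predicted outcome under control
        y1 = y0 + self.residual(phi)     # predicted outcome under treatment
        return y0, y1, phi

def balance_penalty(phi, t):
    """Crude linear-MMD proxy: distance between the mean representations of
    treated and control students, standing in for a distributional distance."""
    return (phi[t == 1].mean(0) - phi[t == 0].mean(0)).pow(2).sum()

def training_step(model, optimizer, x, t, y, lam=1.0):
    optimizer.zero_grad()
    y0, y1, phi = model(x)
    # Only the factual outcome is observed, so the loss uses the matching head
    y_pred = torch.where(t.bool().unsqueeze(1), y1, y0)
    loss = nn.functional.mse_loss(y_pred, y) + lam * balance_penalty(phi, t)
    loss.backward()
    optimizer.step()
    return loss.item()
```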
4

Essays in the Economics of Aging

Mickey, Ryan 17 December 2015 (has links)
In this dissertation, I explore how economic decisions diverge for different age groups. Two essays address the location decisions of older households while the third examines why different age cohorts donate to charities. The first essay estimates how the age distribution of the population across cities will change as the number of older adults rises. I use a residential sorting model to estimate the location preference heterogeneity between younger and older households. I then simulate where the two household types will live in 2030. All MSAs end up with a higher proportion of older households in 2030, and only eight of 243 MSAs experience a decline in the number of older households. The results suggest that MSAs in upstate New York and on the west coast, particularly in California, will have the largest number of older households in 2030. Florida will remain a popular place for older households, but its relative importance may diminish in the future. The second essay explores whether the basic motivations for charitable giving differ by age cohort. Using the results from a randomized field experiment, I test whether benefits to self or benefits to others drives the charitable giving decision for each age cohort. I find limited heterogeneity for benefits to self. Individuals between the ages of 50 and 64 increase average donations more than any other age cohort in response to emphasizing warm glow, and this heterogeneity is exclusively driven by larger conditional gifts. The third essay is preliminary joint work with H. Spencer Banzhaf and Carlianne Patrick. We build a unique data set of local homestead exemptions, which vary by generosity and eligibility requirements, for tax jurisdictions in Georgia. Using school-district-level Census data since 1970 along with the history of such exemptions, we will explore the impact of these exemptions, particularly exemptions targeting older households, on the demographic makeup of each jurisdiction and consider the impact of these laws on the relative levels of housing capital consumed by older and younger households.
5

Specification, estimation and testing of treatment effects in multinomial outcome models: accommodating endogeneity and inter-category covariance

Tang, Shichao 18 June 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In this dissertation, a potential outcomes (PO) based framework is developed for causally interpretable treatment effect parameters in the multinomial dependent variable regression setting. The specification of the relevant data generating process (DGP) is also derived. This new framework simultaneously accounts for the potential endogeneity of the treatment and loosens inter-category covariance restrictions on the multinomial outcome model (e.g., the independence from irrelevant alternatives restriction). Corresponding consistent estimators for the “deep parameters” of the DGP and the treatment effect parameters are developed and implemented (in Stata). A novel approach is proposed for assessing the inter-category covariance flexibility afforded by a particular multinomial modeling specification [e.g., multinomial logit (MNL), multinomial probit (MNP), and nested multinomial logit (NMNL)] in the context of our general framework. This assessment technique can serve as a useful tool for model selection. The new modeling/estimation approach developed in this dissertation is quite general. I focus here, however, on the NMNL model because, among the three modeling specifications under consideration (MNL, MNP and NMNL), it is the only one that is both computationally feasible and relatively unrestrictive with regard to inter-category covariance. Moreover, as a logical starting point, I restrict my analyses to the simplest version of the model: the trinomial (three-category) NMNL with an endogenous treatment (ET) variable conditioned on individual-specific covariates only. To identify potential computational issues and to assess the statistical accuracy of my proposed NMNL-ET estimator and its implementation (in Stata), I conducted a thorough simulation analysis. I found that conventional optimization techniques are, in this context, generally fraught with convergence problems. To overcome this, I implement a systematic line search algorithm that successfully resolves this issue. The simulation results suggest that it is important to accommodate both endogeneity and inter-category covariance simultaneously in model design and estimation. As an illustration, and as a basis for comparing alternative parametric specifications with respect to ease of implementation, computational efficiency and statistical performance, the proposed model and estimation method are used to analyze the impact of substance abuse/dependence on employment status using the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) data.
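As a point of reference for the trinomial nested logit at the core of this specification, the sketch below computes the standard nested-logit choice probabilities for two alternatives sharing a nest plus one stand-alone alternative. The utility indices, nesting structure, and names are assumptions; the endogenous-treatment equation and the line-search estimation routine of the dissertation are not reproduced here.

```python
# Trinomial nested-logit choice probabilities: alternatives 1 and 2 share a nest with
# dissimilarity parameter lam; alternative 0 stands alone. Illustrative sketch only.
import numpy as np

def trinomial_nested_logit_probs(v0, v1, v2, lam):
    """v0, v1, v2: utility indices (e.g. x @ beta_j) for each category;
    lam in (0, 1]: inclusive-value (dissimilarity) parameter of the {1, 2} nest."""
    # Inclusive value of the nest
    iv = np.log(np.exp(v1 / lam) + np.exp(v2 / lam))
    # Probability of choosing the nest versus the stand-alone alternative
    p_nest = np.exp(lam * iv) / (np.exp(v0) + np.exp(lam * iv))
    # Conditional probabilities within the nest
    p1_given_nest = np.exp(v1 / lam) / np.exp(iv)
    p2_given_nest = np.exp(v2 / lam) / np.exp(iv)
    return np.array([1.0 - p_nest, p_nest * p1_given_nest, p_nest * p2_given_nest])

# With lam = 1 the nest collapses and the model reduces to the multinomial logit.
print(trinomial_nested_logit_probs(0.2, 0.5, -0.1, lam=0.7))
```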
6

Estimation of causal effects of exposure models and of drug-induced homicide prosecutions on drug overdose deaths

Kung, Kelly C. 23 June 2023 (has links)
Causal inference methods have been applied in various fields where researchers want to establish causal effects between different phenomena. The goal of causal inference is to estimate treatment effects by comparing the outcomes units would have had under treatment with the outcomes they would have had without treatment. We focus on estimating treatment effects in three different projects. We first proposed linear unbiased estimators (LUEs) for general causal effects under the assumption that treatment effects are additive. Under additivity, the set of estimands considered grows, as contrasts in exposures become equivalent. Furthermore, we identified a subset of LUEs that forms an affine basis for LUEs, and we characterized LUEs with minimum integrated variance by defining conditions on the support of the estimator. We also estimated the effect of drug-induced homicide (DIH) prosecutions reported by the media on unintentional drug overdose deaths, an effect that has never been empirically assessed, using various models. Using a difference-in-differences-like logistic generalized additive model (GAM) with smoothed time effects, where we assumed a constant treatment effect, we found that DIH prosecutions reported by the media were associated with a potentially harmful effect (risk ratio: 1.064; 95% CI: (1.012, 1.118)) on drug overdose deaths. There are, however, potential issues with using a constant treatment effect model in a setting where treatment is staggered and treatment effects are heterogeneous. Therefore, we also used a GAM with a linear link function in which treatment effects may depend on the treatment duration. With this second model, we estimated a risk ratio of 0.956 (95% CI: (0.824, 1.110)) for having any DIH prosecutions reported by the media and a risk ratio of 0.986 (95% CI: (0.973, 0.999)) for the effect of being exposed to DIH prosecutions reported by the media for each additional six months. Despite being statistically significant, the effects were not practically significant. The results nonetheless call for further research on the effect of DIH prosecutions on drug overdose deaths. Lastly, we shift our focus to Structural Nested Mean Models (SNMMs). We extended SNMMs to a new class of estimators that estimate treatment effects of different treatment regimes on the risk ratio scale: the Structural Nested Risk Ratio Model (SNRRM). We further generalized previous work on SNMMs by modeling a function of treatment, which we choose to be any function that can be modeled by generalized linear models, as opposed to just a model for treatment initiation. We applied SNRRMs to estimate the effect of DIH prosecutions reported by the media on drug overdose deaths.
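The following is a simplified stand-in for the difference-in-differences-like logistic model with smoothed time effects described above; the dissertation fits a GAM, whereas this sketch approximates the smooth with a B-spline basis inside an ordinary logistic GLM. The column names, spline degrees of freedom, and unit-level binary outcome are assumptions.

```python
# Hedged sketch: logistic regression with spline time effects as a rough proxy for the
# smoothed-time-effect GAM, plus a marginal risk ratio computed by prediction.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_did_like_logit(df: pd.DataFrame):
    """df: one row per person-period with assumed columns
    'overdose_death' (0/1), 'exposed' (0/1 media-reported DIH prosecution),
    'month' (integer time index), and 'county'."""
    model = smf.glm(
        "overdose_death ~ exposed + bs(month, df=5) + C(county)",
        data=df,
        family=sm.families.Binomial(),
    )
    return model.fit()

def marginal_risk_ratio(result, df: pd.DataFrame) -> float:
    """Average predicted risk with exposure switched on vs. off, i.e. a marginal
    risk ratio rather than the odds ratio implied directly by the logit link."""
    risk_exposed = result.predict(df.assign(exposed=1)).mean()
    risk_unexposed = result.predict(df.assign(exposed=0)).mean()
    return risk_exposed / risk_unexposed
```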
7

Modern Econometric Methods for the Analysis of Housing Markets

Kesiz Abnousi, Vartan 26 May 2021 (has links)
The increasing availability of richer, high-dimensional home sales data sets, as well as spatially geocoded data, allows for the use of new econometric and computational methods to explore novel research questions. This dissertation consists of three separate research papers which aim to leverage this trend to answer empirical inferential questions, propose new computational approaches in environmental valuation, and address future challenges. The first research chapter estimates the effect of 10 large-scale urban stream restoration projects on the values of homes situated near the project sites. The study area is the Johnson Creek Watershed in Portland, Oregon. The research design incorporates four matching model approaches that vary by the width of the temporal bands (a narrow and a wider band) and by two spatial zoning buffers (a smaller and a larger one) that account for the affected homes' distances. Estimated effects tend to be positive for six projects when the distance to the restoration projects is smaller and the temporal bands are narrow, while two restoration projects have positive effects on home values across all four modeling approaches. The second research chapter focuses on the underlying statistical and computational properties of matching methods for causal treatment effects. The prevailing notion in the literature is that there is a tradeoff between bias and variance linked to the number of matched control observations for each treatment unit. In addition, in the era of Big Data, there is a paucity of research addressing the tradeoffs between inferential accuracy and computational time across different matching methods. Is it worth employing computationally costly matching methods if the gains in bias reduction and efficiency are negligible? We revisit the notion of a bias-variance tradeoff and address computational time considerations. We conduct a simulation study and evaluate 160 models and 320 estimands. The results suggest that the conventional notion of a bias-variance tradeoff, with bias increasing and variance decreasing with the number of matched controls, does not hold under the bias-corrected matching estimator (BCME) developed by Abadie and Imbens (2011). Specifically, for the BCME, bias decreases as the number of matches per treated unit increases. Moreover, when the quality of the pre-matching balance is already good, choosing only one match results in a significantly larger bias under all methods and estimators. In addition, the genetic search matching algorithm, GenMatch, is superior to the baseline greedy method, achieving a better balance between the observed covariate distributions of the treated and matched control groups. On the downside, GenMatch is 408 times slower than a greedy matching method. However, when we employ the BCME on the matched data, there is a negligible difference in bias reduction between the two matching methods. Traditionally, environmental valuation methods using residential property transactions follow two approaches: hedonic price functions and random utility sorting models. An alternative approach is the Iterated Bidding Algorithm (IBA), introduced by Kuminoff and Jarrah (2010). The third chapter aims to improve the IBA approach to property and environmental valuation relative to its early applications. We implement this approach in an artificially simulated residential housing market, maintaining full control over the data generating mechanism.
We implement the Mesh Adaptive Direct Search Algorithm (MADS) and introduce a convergence criterion that leverages knowledge of individuals' actual pairings to homes. We then estimate the preference parameters of the underlying artificially simulated housing market, and do so with significantly higher precision than the original baseline Nelder-Mead optimization, which relied only on a price-discrepancy convergence criterion as implemented in the IBA's earlier applications. / Doctor of Philosophy / The increasing availability of richer, high-dimensional home sales data sets enables us to employ new methods to explore novel research questions involving housing markets. This dissertation consists of three separate research papers which leverage this trend. The first research paper estimates the effects on home values of 10 large-scale urban stream restoration projects in Portland, Oregon; the affected homes are located near the project sites. The results show that the distance of the homes from the project sites and the duration of the construction lead to different effects on home values. However, two restorations have positive effects regardless of the distance and the duration period. The second research study focuses on the issue of causality. The study demonstrates that a traditional notion concerning causality, known as the "bias-variance tradeoff", is not always valid. In addition, the research shows that sophisticated but time-consuming algorithms yield negligible improvements in the accuracy of estimated causal effects once we account for the required computational time. The third research study improves an environmental valuation method that relies on residential property transactions. The methodology leverages the features of more informative residential data sets in conjunction with a more efficient optimization method, leading to significant improvements. The study concludes that, owing to these improvements, this alternative method can be employed to elicit the true preferences of homeowners over housing and locational characteristics while avoiding the shortcomings of existing techniques.
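For readers unfamiliar with the bias-corrected matching estimator discussed in the second chapter, the rough sketch below illustrates the idea in the spirit of Abadie and Imbens (2011); it is not the chapter's implementation. The Euclidean matching metric, the OLS outcome model used for the correction, and the focus on the effect on the treated are simplifying assumptions.

```python
# Sketch of a bias-corrected nearest-neighbor matching estimator of the average
# treatment effect on the treated: matched-control outcomes are adjusted by an
# outcome regression evaluated at the treated and control covariate values.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

def bias_corrected_att(X, y, t, n_matches=4):
    """X: (n, p) covariates, y: (n,) outcomes, t: (n,) treatment indicator (0/1)."""
    Xc, yc = X[t == 0], y[t == 0]
    Xt, yt = X[t == 1], y[t == 1]

    # Outcome regression in the control group, used only for the bias correction
    mu0 = LinearRegression().fit(Xc, yc)

    # M nearest control matches (Euclidean distance) for every treated unit
    nn = NearestNeighbors(n_neighbors=n_matches).fit(Xc)
    _, idx = nn.kneighbors(Xt)

    effects = []
    for i in range(len(Xt)):
        j = idx[i]
        # Matched-control outcomes, shifted by the estimated regression bias
        y0_hat = yc[j] + mu0.predict(Xt[i:i + 1]) - mu0.predict(Xc[j])
        effects.append(yt[i] - y0_hat.mean())
    return float(np.mean(effects))
```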
8

Estimating the effect of the 2008 financial crisis on GNI in Greece and Iceland: A synthetic control approach

Iasonidou, Sofia January 2016 (has links)
The purpose of this thesis is to conduct a comparative study in order to estimate the impact of the financial crisis on the GNI of Greece and Iceland. By applying synthetic control matching (a relatively new methodology), the study compares the two countries and draws conclusions about the good or bad measures adopted. The results indicate that in both cases the adopted measures were not optimal, since the synthetic counterfactuals appear to perform better than the actual Greece and Iceland. Moreover, it is shown that Iceland reacted better to the shock it was exposed to. However, the different characteristics of the two countries impede the application of the Icelandic actions to the Greek case.
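The core computation behind a synthetic control is a constrained least-squares problem: donor-country weights are non-negative, sum to one, and minimize the distance to the treated country's pre-crisis predictors. The compact sketch below illustrates this under those assumptions; the predictor matrix and names are placeholders, and the predictor-importance (V) matrix of the full method is omitted.

```python
# Minimal synthetic-control weight computation via constrained optimization.
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(x_treated, X_donors):
    """x_treated: (k,) pre-treatment characteristics of the treated country
    (e.g. Greece or Iceland); X_donors: (k, J) the same characteristics for
    J donor countries."""
    J = X_donors.shape[1]

    def loss(w):
        # Squared distance between the treated country and its synthetic version
        return np.sum((x_treated - X_donors @ w) ** 2)

    w0 = np.full(J, 1.0 / J)
    res = minimize(
        loss,
        w0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * J,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

# The counterfactual GNI path is then the weighted average of donor GNI series:
# gni_synthetic = GNI_donors @ synthetic_control_weights(x_treated, X_donors)
```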
9

Firm's value, financing constraints and dividend policy in relation to firm's political connections

Alsaraireh, Ahmad January 2017 (has links)
The relationship between politicians and firms has attracted a considerable amount of research, especially in developing countries, where firms' political links are a widespread phenomenon. However, the existing literature offers contradictory views about this relationship, especially regarding the impact of firms' political connections on their market performance. Furthermore, there is limited evidence on the impact of firms' political connections on some important corporate decisions, including firms' investment and dividend policies. Therefore, this thesis seeks to fill these gaps by offering three empirical essays with Jordan as a case study. The first essay examines the impact of firms' political links on their values while controlling for macroeconomic conditions. In the extended models, by specifying three major events which occurred after 2008, namely the establishment of the Anti-Corruption Commission (ACC), the Global Financial Crisis, and the Arab Uprisings, we also investigate the effects of these events on the relationship between firms' political ties and their value. The findings of this essay indicate that politically connected firms have higher values than their non-connected counterparts in Jordan. Moreover, it is found that firms with stronger political ties have higher values than firms with weaker ties. Furthermore, the positive effect of political connections persists even after controlling for macroeconomic conditions, though the latter are considered more important than political connections for firm valuation because of their impact on the share price. Interestingly, the findings show that the events occurring after 2008 do not seem to have affected the relationship between political connections and firm value, since the significant positive impact of political ties on firm value persists during the post-event period. The second empirical essay studies the role of political connections in mitigating firms' financing constraints. Moreover, it investigates the effect of the strength of political connections in alleviating these constraints. Finally, it looks at the impact of the above-mentioned three events which occurred after 2008, notwithstanding the new banking Corporate Governance Code issued in 2007. The findings of this essay reveal that firms' political connections are important in mitigating their financing constraints. Furthermore, the results show that stronger political connections reduce financing constraints more than weaker connections. Finally, the findings show that the impact of firms' political connections diminished during the post-event period (2008-2014). The third essay examines how a firm's political connections can affect its dividend policy. It also considers the impact of the strength of political connections on dividend policy. Finally, we extend the empirical analysis by investigating any shift in the relationship between political connections and dividends due to the Global Financial Crisis, the Arab Uprisings, and the adoption of the International Financial Reporting Standards (IFRS). The results of this essay reveal that a firm's political connections have a significant positive impact on both the propensity to pay dividends and the dividend payout ratio. Regarding the impact of the strength of political connections on dividends, it is found that firms with weaker political connections pay out more in dividends than firms with stronger connections. In terms of the impact of the events which occurred after 2008 on the relationship between political connections and dividends, the findings show that the impact of these connections on dividends is eliminated.
10

Covariate selection and propensity score specification in causal inference

Waernbaum, Ingeborg January 2008 (has links)
This thesis makes contributions to the statistical research field of causal inference in observational studies. The results obtained are directly applicable in many scientific fields where the effects of treatments are investigated and yet controlled experiments are difficult or impossible to implement.

In the first paper we define a partially specified directed acyclic graph (DAG) describing the independence structure of the variables under study. Using the DAG, we show that, given that unconfoundedness holds, we can use the observed data to select minimal sets of covariates to control for. General covariate selection algorithms are proposed to target the defined minimal subsets.

The results of the first paper are generalized in Paper II to include the presence of unobserved covariates. Moreover, the identification assumptions from the first paper are relaxed.

To implement the covariate selection without parametric assumptions, we propose in the third paper the use of a model-free variable selection method from the framework of sufficient dimension reduction. The performance of the proposed selection methods is investigated by simulation. Additionally, we study the finite sample properties of treatment effect estimators based on the selected covariate sets.

In Paper IV we investigate misspecifications of parametric models of a scalar summary of the covariates, the propensity score. Motivated by common model specification strategies, we describe misspecifications of parametric models for which unbiased estimators of the treatment effect are available. The consequences of the misspecification for the efficiency of treatment effect estimators are also studied.
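As an illustration of the kind of downstream estimator whose behaviour such covariate selection and propensity score specification affects, here is a minimal sketch (not the thesis code) of a propensity score fitted on a selected covariate set and an inverse-probability-weighted treatment effect estimate. The logistic specification, covariate set, and names are assumptions.

```python
# Sketch: propensity score on selected covariates + inverse-probability weighting.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X_selected, t, y):
    """X_selected: (n, p) covariates retained by a selection algorithm;
    t: (n,) treatment indicator (0/1); y: (n,) observed outcome."""
    # Propensity score model (here a simple logistic regression)
    ps = LogisticRegression(max_iter=1000).fit(X_selected, t).predict_proba(X_selected)[:, 1]
    ps = np.clip(ps, 1e-3, 1 - 1e-3)   # guard against extreme weights
    # Horvitz-Thompson style inverse-probability-weighted average treatment effect
    return float(np.mean(t * y / ps - (1 - t) * y / (1 - ps)))
```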
