1

Empirical Likelihood Confidence Intervals for the Population Mean Based on Incomplete Data

Valdovinos Alvarez, Jose Manuel 09 May 2015 (has links)
The use of doubly robust estimators is key to estimating the population mean response in the presence of incomplete data. Cao et al. (2009) proposed an alternative doubly robust estimator that exhibits strong performance compared to existing estimation methods. In this thesis, we apply the jackknife empirical likelihood, the jackknife empirical likelihood with nuisance parameters, the profile empirical likelihood, and an empirical likelihood method based on the influence function to make inferences about the population mean. We use these methods to construct confidence intervals for the population mean, and compare the coverage probabilities and interval lengths using both the “usual” doubly robust estimator and the alternative estimator proposed by Cao et al. (2009). An extensive simulation study is carried out to compare the different methods. Finally, the proposed methods are applied to two real data sets.
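The “usual” doubly robust (AIPW) estimator of a population mean under missing data can be sketched in a small simulation. This is an illustrative sketch, not the thesis's code: the logistic propensity model, the linear outcome model, and all parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)      # population mean of y is 2.0
pi_true = 1 / (1 + np.exp(-(0.5 + x)))      # response probability depends on x
r = rng.binomial(1, pi_true)                # r = 1 if y is observed

X = np.column_stack([np.ones(n), x])

# outcome model m(x): linear regression fit on complete cases only
beta = np.linalg.lstsq(X[r == 1], y[r == 1], rcond=None)[0]
m_hat = X @ beta

# propensity model pi(x): logistic regression fit by Newton-Raphson
gamma = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ gamma))
    gamma += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (r - p))
pi_hat = 1 / (1 + np.exp(-X @ gamma))

# doubly robust (AIPW) estimate: consistent if either model is correctly specified
mu_dr = np.mean(r * y / pi_hat - (r - pi_hat) / pi_hat * m_hat)
print(mu_dr)  # close to the true mean 2.0
```

Deliberately misspecifying one of the two working models (but not both) and re-running is a quick way to see the double robustness the abstract refers to.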
2

Essays in Social Choice and Econometrics:

Zhou, Zhuzhu January 2021 (has links)
Thesis advisor: Uzi Segal / This dissertation studies the property of transitivity in social choice theory. I explain why we should care about transitivity in decision theory, propose two social decision theories, redistribution regret and ranking regret, study their transitivity properties, and discuss whether a best choice exists for the social planner. Additionally, in joint work, we propose a general method to construct a consistent estimator given two parametric models, one of which may be incorrectly specified. In “Why Transitivity”, I note that models such as regret theory and salience theory were developed to explain behaviors that violate transitivity, e.g., preference reversals. However, these models inherently violate transitivity, which may leave the decision maker without a best choice. This paper discusses the consequences and possible extensions for dealing with this. In “Redistribution Regret and Transitivity”, a social planner wants to allocate resources, e.g., a government allocating fiscal revenue or parents distributing toys to children. The social planner cares about individuals' feelings, which depend both on their assigned resources and on the alternatives they might have been assigned, so intransitive cycles can arise. This paper shows that the preference orders are generally non-transitive, with two exceptions: a fixed total resource and one extremely sensitive individual, or only two individuals sharing the same non-linear individual regret function. In “Ranking Regret”, a social planner wants to rank people, e.g., assign airline passengers a boarding order. A natural ranking orders people from most to least sensitive to their rank. But people's feelings can depend both on their assigned rank and on the alternatives they might have been assigned, so there may be no best ranking, due to intransitive cycles. This paper shows how to tell when a best ranking exists and that, when it exists, it is indeed the natural ranking. When no best ranking exists, an alternative second-best group ranking strategy is proposed, which resembles actual airline boarding policies. In “Over-Identified Doubly Robust Identification and Estimation”, joint with Arthur Lewbel and Jinyoung Choi, we consider two parametric models, at least one of which is correctly specified, though we do not know which. Both models include a common vector of parameters. An estimator of this common parameter vector is called Doubly Robust (DR) if it is consistent no matter which model is correct. We provide a general technique for constructing DR estimators, assuming the models are over-identified. Our Over-identified Doubly Robust (ODR) technique is a simple extension of the Generalized Method of Moments. We illustrate our ODR with a variety of models. Our empirical application is instrumental variables estimation, where either one of two instrument vectors might be invalid. / Thesis (PhD) — Boston College, 2021. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
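The intransitive cycles that motivate these essays can be reproduced with a textbook regret-theory comparison. This is a generic illustration, not the dissertation's redistribution or ranking model: the three alternatives, the equally likely states, and the cubic regret-rejoice function are all invented for the example.

```python
from itertools import combinations

# three alternatives, with payoffs in three equally likely states
alternatives = {"A": (1, 2, 3), "B": (3, 1, 2), "C": (2, 3, 1)}

def psi(d):
    # skew-symmetric regret-rejoice function, convex for gains (d > 0)
    return d ** 3

def prefers(x, y):
    # x is strictly preferred to y if the expected net rejoice is positive
    return sum(psi(a - b) for a, b in zip(x, y)) > 0

for name_x, name_y in combinations(alternatives, 2):
    x, y = alternatives[name_x], alternatives[name_y]
    winner = name_x if prefers(x, y) else name_y
    print(f"{name_x} vs {name_y}: {winner} wins")
# B beats A, A beats C, C beats B: a cycle, so no best alternative exists
```

A linear psi would make every pairwise sum zero; it is exactly the non-linearity of the regret function that produces the cycle, echoing the two-individual exception noted in the abstract.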
3

Statistické metody pro regresní modely s chybějícími daty / Statistical Methods for Regression Models With Missing Data

Nekvinda, Matěj January 2018 (has links)
The aim of this thesis is to describe and further develop estimation strategies for data obtained by stratified sampling. Estimation of the mean and of a linear regression model is discussed. The possible inclusion of auxiliary variables in the estimation is examined. The auxiliary variables can be transformed rather than used in their original form. A transformation minimizing the asymptotic variance of the resulting estimator is provided. The estimator using an approach from this thesis is compared to the doubly robust estimator and shown to be asymptotically equivalent.
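The basic stratified estimator of the mean that this line of work builds on can be sketched as follows. This is a minimal simulation: the stratum names, sizes, and distributions are illustrative, and the thesis's variance-minimizing transformation of auxiliary variables is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# known population stratum sizes, and a simple random sample within each stratum
N_h = {"low": 7000, "high": 3000}
samples = {
    "low": rng.normal(10.0, 2.0, size=200),
    "high": rng.normal(20.0, 5.0, size=100),
}

N = sum(N_h.values())
# stratified estimator: weight each stratum's sample mean by its population share
mu_strat = sum(N_h[h] / N * samples[h].mean() for h in N_h)
print(mu_strat)  # close to the true mean 0.7 * 10 + 0.3 * 20 = 13
```

Because each stratum mean is estimated separately, the estimator's variance depends only on within-stratum variability, which is why stratification with informative strata improves on a pooled sample mean.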
4

[en] COMBINING STRATEGIES FOR ESTIMATION OF TREATMENT EFFECTS / [pt] COMBINANDO ESTRATÉGIAS PARA ESTIMAÇÃO DE EFEITOS DE TRATAMENTO

RAFAEL DE CARVALHO CAYRES PINTO 19 January 2018 (has links)
[en] Estimation of mean treatment effects is an important tool for evaluating economic policy. The main difficulty in this calculation is that assignment of potential participants to treatment is generally not random, which leads to selection bias when ignored. A solution to this problem is to suppose that the econometrician observes a set of covariates that determine participation, except for a strictly random component. Under this assumption, known as Ignorability, semiparametric estimation methods were developed, including imputation of counterfactual outcomes and sample reweighting. Both are consistent and can asymptotically achieve the semiparametric efficiency bound. However, in samples of the sizes commonly available, their performance is not always satisfactory. The goal of this dissertation is to study how combining these two strategies can produce estimators with better small-sample properties. We consider two ways of merging the methods, drawing on the doubly robust estimation literature developed by James Robins and his co-authors, analyze their properties, and discuss why they can outperform each of their component techniques. Finally, we compare the proposed estimators to imputation and reweighting in a Monte Carlo exercise. The results show that combining strategies can reduce bias and variance, but this depends on how the combination is implemented. We conclude that the choice of smoothing parameters is decisive for estimation performance in moderately sized samples.
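The two component strategies the dissertation combines, imputation of counterfactual outcomes and inverse-probability reweighting, can each be sketched in a few lines. This is an illustrative simulation, not the dissertation's estimators: the data-generating process is invented, and the true propensity is used in the reweighting step for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))                     # treatment probability depends on x
d = rng.binomial(1, p)                       # non-random assignment
y = 1.0 * d + 2.0 * x + rng.normal(size=n)   # true average treatment effect is 1.0

naive = y[d == 1].mean() - y[d == 0].mean()  # suffers from selection bias

# (1) imputation: fit an outcome regression in each arm, impute counterfactuals
X = np.column_stack([np.ones(n), x])
b1 = np.linalg.lstsq(X[d == 1], y[d == 1], rcond=None)[0]
b0 = np.linalg.lstsq(X[d == 0], y[d == 0], rcond=None)[0]
ate_imputation = (X @ b1 - X @ b0).mean()

# (2) reweighting: inverse probability of treatment weighting
ate_reweighting = np.mean(d * y / p - (1 - d) * y / (1 - p))

print(naive, ate_imputation, ate_reweighting)  # naive is off; the others are near 1.0
```

A doubly robust combination augments the reweighting term with the fitted outcome regressions, so that consistency survives misspecification of either (but not both) working models.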
5

Sur les estimateurs doublement robustes avec sélection de modèles et de variables pour les données administratives / On Doubly Robust Estimators With Model and Variable Selection for Administrative Data

Bahamyirou, Asma 10 1900 (has links)
Randomized clinical trials (RCTs) are the gold standard for establishing causal effects and evaluating drug efficacy. However, RCTs are not always feasible, and the use of administrative data to estimate a causal parameter is an alternative. The main subject of this thesis can be divided into two parts, comprising three articles in total. The first part studies the use of doubly robust estimators in causal inference with administrative data and machine learning. Examples of doubly robust estimators are Targeted Maximum Likelihood Estimation (TMLE; [73]) and Augmented Inverse Probability of Treatment Weighting (AIPTW; [51]). These methods are increasingly used in pharmacoepidemiology [65, 102, 86, 7, 37]. In the second part of this thesis, we develop a doubly robust estimator and extend an existing method [121] to the setting of administrative data with a supplemental probability sample. The first manuscript proposes a diagnostic tool that uses resampling methods to identify instability in doubly robust estimators when data-adaptive methods are used in the presence of near practical positivity violations. It demonstrates the impact of machine learning methods for propensity score estimation when near practical positivity violations are induced, and then describes an analysis of asthma medication during pregnancy. The second manuscript develops a methodology to statistically select effect-modifying variables using a two-stage regularization procedure, in the context of a single time-point exposure, that can be applied in several standard software packages; it also describes an analysis of asthma medication during pregnancy. The third manuscript describes the development of a variable selection procedure using penalization for combining a nonprobability sample and a probability sample from the same population, with common covariates, in order to adjust for selection bias when estimating a population mean. It shows that we can statistically select the right subset of variables when the true propensity score model is sparse, demonstrates the benefit in terms of mean squared error and variance, and presents an application to the impact of COVID-19 on Canadians.
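The “near practical positivity violation” problem targeted by the first manuscript arises because inverse-probability weights explode when estimated propensities approach zero. The toy demonstration below shows only that mechanism, not the manuscript's resampling-based diagnostic; the slopes and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)

for slope in (1.0, 4.0):   # a steeper slope pushes propensities toward 0 and 1
    pi = 1 / (1 + np.exp(-slope * x))
    d = rng.binomial(1, pi)
    w = d / pi             # inverse-probability weights for the treated
    print(f"slope={slope}: min propensity={pi.min():.1e}, "
          f"max weight among treated={w.max():.1f}")
```

Flexible machine-learning fits of the propensity score can drive estimated probabilities even closer to the boundary than the truth, which is why diagnostics for weight instability matter in this setting.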
6

Méthodes de rééchantillonnage en méthodologie d'enquête / Resampling Methods in Survey Methodology

Mashreghi, Zeinab 10 1900 (has links)
The aim of this thesis is to study bootstrap variance estimators for a statistic based on imputed survey data. Applying a bootstrap method designed for complete survey data (full response) in the presence of imputed values, and treating those values as true observations, may lead to underestimation of the variance. In this context, Shao and Sitter (1996) introduced a bootstrap procedure in which the variable under study and the response status are bootstrapped together, and bootstrap non-respondents are imputed using the imputation method applied to the original sample. The resulting bootstrap variance estimator is valid when the sampling fraction is small. In Chapter 1, we review the existing bootstrap methods for (complete and imputed) survey data and, for the first time in the literature, present them in a unified framework. In Chapter 2, we introduce a new bootstrap procedure to estimate the variance under the non-response model approach when a uniform non-response mechanism is assumed. Using only information about the response rate, unlike Shao and Sitter (1996), which requires the individual response status, the bootstrap response status is generated for each selected bootstrap sample, leading to a valid bootstrap variance estimator even for non-negligible sampling fractions. In Chapter 3, we investigate pseudo-population bootstrap approaches and consider a more general class of non-response mechanisms. We develop two pseudo-population bootstrap procedures to estimate the variance of an imputed estimator under both the non-response model approach and the imputation model approach. These procedures are also valid even for non-negligible sampling fractions.
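The Shao and Sitter (1996) idea described above, resampling the study variable together with its response status and re-imputing within each bootstrap sample, can be sketched with simple mean imputation. This is an illustrative simulation: the imputation method, sample size, and response rate are assumptions, and the finite-population sampling design is ignored.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
y = rng.normal(10.0, 2.0, size=n)
r = rng.binomial(1, 0.7, size=n)       # uniform non-response, rate 30%
y_obs = np.where(r == 1, y, np.nan)    # only respondents' values are seen

def impute_mean(y_obs, r):
    # mean imputation: fill non-respondents with the respondent mean
    out = y_obs.copy()
    out[r == 0] = y_obs[r == 1].mean()
    return out

theta_hat = impute_mean(y_obs, r).mean()

# bootstrap the (value, response status) pairs together and re-impute each time
B = 1000
stats = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    stats[b] = impute_mean(y_obs[idx], r[idx]).mean()
var_boot = stats.var(ddof=1)
print(theta_hat, var_boot)
```

Skipping the re-imputation step, i.e. bootstrapping the already-imputed values as if they were real observations, is exactly the practice the thesis warns underestimates the variance.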
