  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Inflation and Asset Prices

Pflueger, Carolin January 2012 (has links)
Do corporate bond spreads reflect fear of debt deflation? Most corporate bonds have fixed nominal face values, so unexpectedly low inflation raises firms' real debt burdens and increases default risk. The first chapter develops a real business cycle model with time-varying inflation risk and optimal, but infrequent, capital structure choice. In this model, more volatile or more procyclical inflation leads to quantitatively important credit spread increases. This is true even with inflation volatility as moderate as that in developed economies since 1970. Intuitively, this result obtains because inflation persistence generates large uncertainty about the price level at long maturities and because firms cannot adjust their capital structure immediately. We find strong empirical support for our model predictions in a panel of six developed economies. Both inflation volatility and the inflation-stock return correlation have varied substantially over time and across countries. They jointly explain as much variation in credit spreads as do equity volatility and the dividend-price ratio. Credit spreads rise by 15 basis points if either inflation volatility or the inflation-stock return correlation increases by one standard deviation. Firms counteract higher debt financing costs by adjusting their capital structure in times of higher inflation uncertainty. The second chapter empirically decomposes excess return predictability in inflation-indexed and nominal government bonds into liquidity, market segmentation, real interest rate risk and inflation risk. This chapter finds evidence for time-varying liquidity premia in Treasury Inflation Protected Securities (TIPS) and for time-varying inflation risk premia in nominal bonds. The third chapter develops a pre-test for weak instruments in linear instrumental variable regression that is robust to heteroskedasticity and autocorrelation. 
Our test statistic is a scaled version of the regular first-stage F statistic. The critical values depend on the long-run variance-covariance matrix of the first stage. We apply our pre-test to the instrumental variable estimation of the Elasticity of Intertemporal Substitution and find that instruments previously considered not to be weak do not exceed our threshold.
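The pre-test described above is, in spirit, a first-stage F statistic computed from a heteroskedasticity-robust covariance matrix. The minimal sketch below uses simulated data and a plain HC0 robust Wald statistic; it is not the authors' implementation, whose critical values additionally depend on the long-run first-stage variance-covariance matrix.

```python
import numpy as np

def robust_first_stage_F(x, Z):
    """Heteroskedasticity-robust (HC0) Wald statistic, scaled to F form,
    for the first-stage regression x = Z @ pi + v.
    Z must include a constant as its first column."""
    n, k = Z.shape
    pi_hat, *_ = np.linalg.lstsq(Z, x, rcond=None)
    v = x - Z @ pi_hat
    bread = np.linalg.inv(Z.T @ Z)
    meat = Z.T @ (Z * (v ** 2)[:, None])       # HC0 "meat" matrix
    V = bread @ meat @ bread                   # robust covariance of pi_hat
    R = np.eye(k)[1:]                          # test all slopes, not the constant
    b = R @ pi_hat
    W = b @ np.linalg.inv(R @ V @ R.T) @ b     # robust Wald statistic
    return W / (k - 1)                         # scaled to an F-like statistic

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                         # a single instrument
Z = np.column_stack([np.ones(n), z])
# First stage with heteroskedastic errors: variance grows with |z|
x = 0.5 * z + rng.normal(size=n) * (1 + 0.5 * np.abs(z))
F = robust_first_stage_F(x, Z)
```

With a strong instrument such as this one, F comfortably clears the conventional rule-of-thumb threshold of 10; the point of a robust pre-test is that the classical homoskedastic F can overstate instrument strength when errors are heteroskedastic.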
32

Essays on Instrumental Variables

Kolesar, Michal 08 October 2013 (has links)
This dissertation addresses issues that arise in the classic linear instrumental variables (IV) model when some of the underlying assumptions are violated. / Economics
33

Peer Effects in the Classroom: Evidence from New Peers

Pivovarova, Margarita 14 January 2014 (has links)
This thesis investigates the role of classmates in the academic achievement of an individual student. I propose a new strategy to identify ability spillovers and combine it with a unique data set to estimate peer effects in education. Using this approach, I quantify the average effect of peers on own academic achievement in middle school and analyze the heterogeneity of students' responses to peers along ability and gender lines. In Chapter 1, I provide a comprehensive empirical analysis of the linear-in-means model of peer interactions and estimate the effect of the average quality of peers on the academic progress of sixth-graders in Ontario public schools. I provide convincing evidence of the validity of my identification strategy and show that the average quality of classmates, measured by their lagged test scores, matters for individual academic achievement. I find positive, large and significant ability spillovers from peers in the same classroom. To reconcile the broad spectrum of peer-effect estimates in the literature, I also investigate the impact of peers in the same school and grade. I show that once a peer group is aggregated to a grade or class level, the effect attenuates towards zero. In Chapter 2, I relax the main assumption of the linear-in-means model and compare alternative models of peer interactions with the empirical results from the first chapter. My findings imply that all students unambiguously benefit from the presence of high-achieving peers. At the same time, the academic progress of high-achievers does not suffer from the presence of low-achieving classmates. This finding has important policy implications for ability grouping of students in schools. With the help of a policy experiment, I demonstrate that spreading high-ability students across classrooms is an efficient strategy to increase the achievement level of every student. 
In the third chapter, I introduce a gender dimension into the analysis of peer effects and investigate the role of class gender composition in individual academic achievement. I employ two different identification strategies and find that a large share of girls in a class facilitates the academic progress of both boys and girls. While the average quality of girls is one of the determinants of own achievement, peer-to-peer interactions and improved classroom discipline when more girls are present also play an important role.
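In a linear-in-means specification, each student's peer regressor is typically the leave-one-out classroom mean of classmates' lagged scores. A minimal sketch, with illustrative numbers rather than the thesis's data:

```python
import numpy as np

def leave_one_out_mean(scores):
    """For each student, the mean lagged score of all *other* students
    in the same classroom -- the peer regressor in a linear-in-means model."""
    scores = np.asarray(scores, dtype=float)
    total, n = scores.sum(), scores.size
    return (total - scores) / (n - 1)

# Lagged test scores for one small classroom (illustrative)
class_scores = [60.0, 70.0, 80.0]
peer_means = leave_one_out_mean(class_scores)
print(peer_means)   # [75. 70. 65.]
```

Excluding the student's own score from the classroom mean avoids the mechanical correlation between own achievement and the peer measure; the thesis's aggregation exercise amounts to computing this mean over progressively coarser groups (class, grade, school).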
35

O efeito das fiscalizações do trabalho para a redução do trabalho infantil no Brasil / The effect of inspections of the work for the reduction of child labor in Brazil

Roselaine Bonfim de Almeida 15 April 2015 (has links)
Child labor has been decreasing since the mid-1990s. It was also during this period that labor inspection in Brazil began to give greater importance to combating child labor. This research therefore analyzes the effect of labor inspection on the reduction of child labor in 2000 and 2010. Initially, the idea was to use the number of inspected companies in a municipality as a measure of inspection activity there. However, this variable may be endogenous, since labor inspections depend not only on planned enforcement actions but also on complaints of legal violations. To address this problem, we assume that carrying out an inspection depends on the availability of labor inspectors and on the distance they must travel to reach the inspection site. Labor inspectors are allocated by state and work in the Regional Labor Superintendencies (SRTs) or the Regional Labor Offices (GRTs). Based on this information, two instrumental variables were created: the first was the distance between each municipality and the nearest SRT or GRT, and the second was the number of labor inspectors in the state. We then used the method of two-stage least squares, with analyses performed by age group. 
The results for 2000 show that a 1% increase in inspection reduced the proportion of working children and adolescents in every age group analyzed: by 0.22% for ages 10 to 17, 0.45% for ages 10 to 14, 0.19% for those aged 15, and approximately 0.09% for ages 16 to 17. In absolute terms, these values represent approximately 8,658 children and adolescents aged 10 to 17, 5,140 aged 10 to 14, 1,233 aged 15, and 1,929 aged 16 to 17. These results were statistically significant at the 1% and 10% levels. For 2010, a 1% increase in inspection again reduced the proportion of working children and adolescents in every group analyzed: by 0.26% for ages 10 to 17, 0.66% for ages 10 to 13, 0.41% for ages 14 to 15, and 0.08% for ages 16 to 17. All of these results were statistically significant at 1%, except for the last age group. In absolute terms, they represent approximately 8,856 children and adolescents aged 10 to 17, 4,689 aged 10 to 13, and 3,642 aged 14 to 15. These results show the importance of labor inspection for reducing or eliminating child labor, especially its worst forms.
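The two-stage least squares procedure described above can be sketched on simulated data. The variable names and the data-generating process below are hypothetical, not the study's: an unobserved municipality factor drives both inspections and child labor, and a distance instrument restores consistency.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: regress the endogenous regressors X on instruments Z,
    then regress y on the fitted values. X and Z include a constant column."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage
    return beta

rng = np.random.default_rng(1)
n = 2000
distance = rng.uniform(0, 300, n)        # km to nearest labor office (hypothetical)
u = rng.normal(size=n)                   # unobserved municipality factors
inspections = 10 - 0.02 * distance + u + rng.normal(size=n)      # endogenous
child_labor = 5 - 0.3 * inspections + 2 * u + rng.normal(size=n)  # true effect -0.3

Z = np.column_stack([np.ones(n), distance])
X = np.column_stack([np.ones(n), inspections])
beta_2sls = two_stage_least_squares(child_labor, X, Z)
beta_ols = np.linalg.lstsq(X, child_labor, rcond=None)[0]
# OLS is biased upward here, because u raises both inspections and
# child labor; 2SLS recovers a slope near the true -0.3.
```

The instrument works for the same reason as in the study's design: distance shifts inspection intensity but plausibly affects child labor only through inspections.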
36

Variáveis instrumentais no modelo canônico de contágio heteroscedástico / Instrumental variables in heteroskedastic canonical model of contagion

Ribeiro, Andre Luiz Prima 15 August 2018 (has links)
Advisor: Luiz Koodi Hotta / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: The understanding of dependence among economies is relevant to policy makers, central banks and investors in their decision making. An important issue is the study of the existence of contagion among economies. This work considers the Canonical Model of Contagion of Pesaran and Pick (2007), which differentiates contagion from interdependence. The ordinary least squares estimator for this model is biased because the model contains endogenous variables, so instrumental variables are used to reduce that bias. The model is extended to the case of heteroskedastic errors, a feature usually found in financial data, and the conditional volatilities of the economies' performance indexes are postulated as instrumental variables under two different definitions of crisis. Monte Carlo simulations are used to analyze the validity of these instruments for identifying contagion, the distributions of the estimators, and the power functions of the proposed tests, with emphasis on tests robust to weak instruments, as well as the quality of the asymptotic approximations for different sample sizes. 
Finally, the canonical model of contagion is applied to return data on the main stock indexes of Argentina, Brazil, Mexico and the USA, as well as of seven Asian countries. / Master's in Statistics
37

Statistical issues in Mendelian randomization : use of genetic instrumental variables for assessing causal associations

Burgess, Stephen January 2012 (has links)
Mendelian randomization is an epidemiological method for using genetic variation to estimate the causal effect of the change in a modifiable phenotype on an outcome from observational data. A genetic variant satisfying the assumptions of an instrumental variable for the phenotype of interest can be used to divide a population into subgroups which differ systematically only in the phenotype. This gives a causal estimate which is asymptotically free of bias from confounding and reverse causation. However, the variance of the causal estimate is large compared to traditional regression methods, requiring large amounts of data and necessitating methods for efficient data synthesis. Additionally, if the association between the genetic variant and the phenotype is not strong, then the causal estimates will be biased due to the “weak instrument” in finite samples in the direction of the observational association. This bias may convince a researcher that an observed association is causal. If the causal parameter estimated is an odds ratio, then the parameter of association will differ depending on whether viewed as a population-averaged causal effect or a personal causal effect conditional on covariates. We introduce a Bayesian framework for instrumental variable analysis, which is less susceptible to weak instrument bias than traditional two-stage methods, has correct coverage with weak instruments, and is able to efficiently combine gene–phenotype–outcome data from multiple heterogeneous sources. Methods for imputing missing genetic data are developed, allowing multiple genetic variants to be used without reduction in sample size. 
We focus on the question of a binary outcome, illustrating how the collapsing of the odds ratio over heterogeneous strata in the population means that the two-stage and the Bayesian methods estimate a population-averaged marginal causal effect similar to that estimated by a randomized trial, but which typically differs from the conditional effect estimated by standard regression methods. We show how these methods can be adjusted to give an estimate closer to the conditional effect. We apply the methods and techniques discussed to data on the causal effect of C-reactive protein on fibrinogen and coronary heart disease, concluding with an overall estimate of causal association based on the totality of available data from 42 studies.
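The simplest instrumental-variable estimator in a Mendelian randomization setting is the Wald ratio: the gene-outcome slope divided by the gene-phenotype slope. The dissertation's Bayesian framework is more elaborate, but the ratio illustrates the core idea; the simulated data and effect sizes below are illustrative only.

```python
import numpy as np

def wald_ratio(g, x, y):
    """Ratio IV estimate: cov(g, y) / cov(g, x), i.e. the gene-outcome
    slope divided by the gene-phenotype slope."""
    return np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]

rng = np.random.default_rng(2)
n = 5000
g = rng.binomial(2, 0.3, n).astype(float)   # variant: 0/1/2 effect alleles
u = rng.normal(size=n)                       # unobserved confounder
x = 0.4 * g + u + rng.normal(size=n)         # phenotype (e.g. a biomarker level)
y = 0.5 * x - u + rng.normal(size=n)         # outcome; true causal effect is 0.5
est = wald_ratio(g, x, y)
# Because g is (by construction) independent of u, the ratio is close to
# the true 0.5, whereas a naive regression of y on x would be confounded.
```

The two weaknesses the abstract highlights both show up in this estimator: its variance is driven by the small gene-phenotype covariance in the denominator, and a weak denominator biases finite-sample estimates toward the confounded observational association.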
38

Investigating effects of diagnosing depression among patients with acute myocardial infarction

Tang, Yuexin 01 July 2014 (has links)
Observational data and alternative estimators with correct interpretations have been used to assess the "right" treatment rates in previous studies. However, no systematic analytical approach has been proposed to examine whether existing diagnosis rates were right in practice. This study used patients with acute myocardial infarction (AMI) as an example to demonstrate the use of observational data to explore the clinical and economic effects of depression diagnosis and the "right" depression diagnosis rates in real-world settings. The objectives of this study were to (1) examine the effects of depression diagnosis on survival, healthcare costs and utilization among elderly patients with AMI; and (2) ascertain bounds on the estimates of those effects based on chart-abstracted data for a subset of patients. Using Medicare claims data, we included a retrospective cohort of all Medicare fee-for-service patients with a first AMI during 2007-2008 and no depression diagnosis in the previous year. A depression diagnosis was identified if a patient was diagnosed within 30 days after AMI admission; we also assessed the effects of diagnosis within 60 and 90 days after admission. Outcomes were survival, healthcare costs (total costs; Part A costs; Part B outpatient, physician fee schedule, and other costs; and Part D costs), and utilization (hospitalizations, emergency department (ED) visits, outpatient visits, physician visits, and prescription claims) within 1 year after AMI admission. Risk adjustment (RA) and instrumental variables (IV) models were used to estimate the effects of depression diagnosis on AMI patient outcomes. Instruments for local area depression diagnosis styles were created based on the area diagnosis ratio (ADR). 
Using chart-abstracted data for a convenience sample, we measured patient physical functional status by difficulties with activities of daily living (ADL), and overall health by the adult comorbidity evaluation-27 (ACE-27), AMI severity, and mental illnesses during the index hospitalization. Among the 155,841 AMI patients in our study sample, 5.9% had a depression diagnosis within 30 days after AMI admission. Our RA estimates showed that depression diagnosis was associated with decreased survival and increased total healthcare costs, Part A costs, Part B outpatient costs, hospitalizations, ED visits, physician visits, and prescription claims in the year after AMI admission for patients diagnosed with depression. The ADR-based instruments were strongly related to depression diagnosis (Chow F values > 10). Our IV estimates showed that higher depression diagnosis rates were associated with increased total healthcare costs, Part A costs, Part B physician fee schedule costs, Part B other costs, Part D costs, and physician visits, but decreased ED visits and prescription claims in the year after AMI admission for patients whose depression diagnosis was affected by the ADR-based instruments. Since patients diagnosed with depression were more likely to be sicker based on measures in the charts, the RA estimates might be biased toward worse health outcomes and higher healthcare costs and utilization. Across patients grouped by local depression diagnosis styles, the chart measures were more evenly distributed across diagnosis groups. However, patients living in areas with stronger preferences for depression diagnosis tended to use more wheelchairs, indicating worse physical function than those living in areas with weaker preferences. 
Furthermore, our instruments based on local physician depression diagnosis styles might be correlated with local area practice styles in general (overall preference for healthcare utilization) and with local physician supply, and could thereby affect healthcare utilization and costs directly. Therefore, the instruments might not be valid, and we could not conclude whether existing depression diagnosis rates need to be changed.
39

Essays in robust estimation and inference in semi- and nonparametric econometrics / Contributions à l'estimation et à l'inférence robuste en économétrie semi- et nonparamétrique

Guyonvarch, Yannick 28 November 2019 (has links)
In the introductory chapter, we compare views on estimation and inference in the econometric and statistical learning disciplines. In the second chapter, our interest lies in a generic class of nonparametric instrumental variable models. We extend the estimation procedure of Otsu (2011) by adding a regularisation term to it, and we prove the consistency of our estimator under Lebesgue's L2 norm. In the third chapter, we show that when observations are jointly exchangeable rather than independent and identically distributed (i.i.d.), a modified version of the empirical process converges weakly towards a Gaussian process under the same conditions as in the i.i.d. case. We obtain a similar result for a modified version of the bootstrapped empirical process. We apply our results to obtain the asymptotic normality of several nonlinear estimators and the validity of bootstrap-based inference, and we revisit the empirical work of Santos Silva and Tenreyro (2006). In the fourth chapter, we address the issue of conducting inference on ratios of expectations. We find that when the denominator tends to zero slowly enough as the number of observations n increases, bootstrap-based inference is asymptotically valid. We also complement an impossibility result of Dufour (1997) by showing that whenever n is finite it is possible, under some conditions on the denominator, to construct confidence intervals which are not pathological. In the fifth chapter, we present a Stata command which implements the estimators proposed by de Chaisemartin and d'Haultfœuille (2018) to measure several types of treatment effects widely studied in practice.
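The fourth chapter's object of study — bootstrap inference on a ratio of expectations — can be sketched with a nonparametric percentile bootstrap. The data below are simulated and the denominator's mean is deliberately kept well away from zero, matching the regime in which the chapter finds the bootstrap asymptotically valid.

```python
import numpy as np

def bootstrap_ratio_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for E[x] / E[y]."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample observations with replacement
        ratios[b] = x[idx].mean() / y[idx].mean()
    lo, hi = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
    return lo, hi

rng = np.random.default_rng(3)
x = rng.normal(2.0, 1.0, 400)
y = rng.normal(4.0, 1.0, 400)   # denominator mean bounded away from zero
lo, hi = bootstrap_ratio_ci(x, y)
print(f"95% CI for E[x]/E[y]: [{lo:.3f}, {hi:.3f}]")   # true ratio is 0.5
```

When the denominator's mean approaches zero the sampling distribution of the ratio becomes heavy-tailed, which is exactly the pathological regime the chapter (and Dufour's impossibility result) is concerned with.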
40

Where There’s Smoke, There’s Fire: An Analysis of the Riksbank’s Interest Setting Policy

Lahlou, Mehdi, Sandstedt, Sebastian January 2017 (has links)
We analyse the Swedish central bank, the Riksbank's, interest setting policy in a Taylor rule framework. In particular, we examine whether or not the Riksbank has reacted to fluctuations in asset prices during the period 1995:Q1 to 2016:Q2. This is done by estimating a forward-looking Taylor rule with interest rate smoothing, augmented with stock prices, house prices and the real exchange rate, using IV GMM. In general, we find that the Riksbank's interest setting policy is well described by a forward-looking Taylor rule with interest rate smoothing, and that using factors derived from a PCA as instruments alleviates the weak-identification problem that tends to plague GMM. Moreover, apart from finding evidence that the Riksbank exhibits a substantial degree of policy rate inertia and has acted to stabilize inflation and the real economy, we also find evidence that the Riksbank has reacted to fluctuations in stock prices, house prices, and the real exchange rate.
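The partial-adjustment Taylor rule with interest-rate smoothing that underlies this kind of estimation can be sketched directly. The coefficient values below are conventional textbook illustrations, not the paper's estimates, and the asset-price terms the authors add are omitted.

```python
def taylor_rule_rate(i_prev, inflation_gap, output_gap, *,
                     rho=0.8, r_star=2.0, pi_target=2.0,
                     phi_pi=1.5, phi_y=0.5):
    """Taylor rule with interest-rate smoothing:
    i_t = rho * i_{t-1} + (1 - rho) * (r* + pi* + phi_pi*(pi_t - pi*) + phi_y*y_t).
    rho is the smoothing parameter; phi_pi > 1 satisfies the Taylor principle.
    Parameter values here are illustrative, not the paper's estimates."""
    target = r_star + pi_target + phi_pi * inflation_gap + phi_y * output_gap
    return rho * i_prev + (1 - rho) * target

# With inflation on target and a closed output gap, the policy rate
# glides gradually (rho = 0.8 per period) toward the neutral nominal
# rate r* + pi* = 4.0 -- the "policy rate inertia" the abstract describes.
i = 0.0
path = []
for _ in range(30):
    i = taylor_rule_rate(i, 0.0, 0.0)
    path.append(i)
```

Estimating rho, phi_pi and phi_y from data is what requires instruments: the forward-looking version replaces the gaps with expected future values, which are correlated with the error term, hence the paper's IV GMM approach.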
