About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Essays on Instrumental Variables

Kolesar, Michal 08 October 2013 (has links)
This dissertation addresses issues that arise in the classic linear instrumental variables (IV) model when some of the underlying assumptions are violated. / Economics
22

Goodness-of-Fit Test Issues in Generalized Linear Mixed Models

Chen, Nai-Wei December 2011 (has links)
Linear mixed models and generalized linear mixed models are random-effects models widely applied to analyze clustered or hierarchical data. Generally, random effects are assumed to be normally distributed in the context of mixed models. However, in the mixed-effects logistic model, violation of the normality assumption for the random effects may result in inconsistent estimates of some fixed effects and of the variance component of the random effects when the variance of the random-effects distribution is large. On the other hand, summary statistics used for assessing goodness of fit in ordinary logistic regression models may not be directly applicable to mixed-effects logistic models. In this dissertation, we present investigations of two independent studies related to goodness-of-fit tests in generalized linear mixed models. First, we consider a semi-nonparametric density representation for the random-effects distribution and provide a formal statistical test of normality of the random-effects distribution in mixed-effects logistic models. We obtain parameter estimates by using a non-likelihood-based estimation procedure. We not only evaluate the type I error rate of the proposed test statistic through asymptotic results, but also carry out a bootstrap hypothesis testing procedure to control inflation of the type I error rate and to study the power of the proposed test statistic. The methodology is illustrated by revisiting a case study in mental health. Second, to improve assessment of model fit in mixed-effects logistic models, we apply nonparametric local polynomial smoothing of residuals over within-cluster continuous covariates to the unweighted sum of squares statistic for assessing the goodness of fit of logistic multilevel models. We perform a simulation study to evaluate the type I error rate and the power for detecting a missing quadratic or interaction term of fixed effects using the kernel-smoothed unweighted sum of squares statistic based on local polynomial smoothed residuals over x-space. We also use a real data set from clinical trials to illustrate this application.
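A rough sketch of the idea behind the second study's statistic (simulated data, an ordinary rather than mixed-effects logistic fit, and an arbitrary bandwidth — all simplifying assumptions for brevity): smooth the residuals over a continuous covariate with a Nadaraya-Watson kernel, then sum the squared smoothed values.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(-2, 2, n)
eta = -0.5 + x + 0.8 * x**2              # true model contains a quadratic term
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Working model omits the quadratic term (deliberately misspecified)
X = sm.add_constant(x)
p_hat = sm.Logit(y, X).fit(disp=0).predict(X)
resid = y - p_hat

# Nadaraya-Watson smoothing of the residuals over x (Gaussian kernel)
h = 0.3
w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
smoothed = (w @ resid) / w.sum(axis=1)

uss = np.sum(smoothed**2)  # kernel-smoothed unweighted sum of squares
print(f"smoothed USS statistic: {uss:.4f}")
```

Comparing the statistic to a bootstrap null distribution then yields the size and power assessments described above.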
23

Essays in monetary economics and applied econometrics

Giordani, Paolo January 2001 (has links)
This dissertation collects five independent essays. The first essay is An Alternative Explanation of the Price Puzzle. The most widely accepted explanation of the price puzzle points to an inadequate performance of the VAR in forecasting inflation. This essay suggests that the finding of a price puzzle is due to a seemingly innocent misspecification in taking the theoretical model to the data: a measure of the output gap is not included in the VAR (output alone being used instead), while this variable is a crucial element in every equation of the theoretical models. When the VAR is correctly specified, the price puzzle disappears. Building on results contained in the first paper, the second, Stronger Evidence of Long-Run Neutrality: A Comment on Bernanke and Mihov, improves the empirical performance of standard models on the prediction that a monetary policy shock should have temporary effects on output. It turns out that the same misspecification causing the price puzzle is also responsible for overestimation of the time needed for the effects on output of a monetary policy shock to die out. The point can be proven in a theoretical economy, and is confirmed on US data. Monetary Policy Without Monetary Aggregates: Some (Surprising) Evidence, joint with Giovanni Favara, is the third essay. It points to what seems to be a falsified prediction of models in the New-Keynesian framework. In this framework monetary aggregates are reserved a pretty boring role, so boring that they can be safely excluded from the final layout of the model. These models predict that a money demand shock should have no effect on output, inflation and the interest rate. However, the prediction seems to be quite wrong. Inflation Forecast Targeting, joint with Paul Söderlind, takes a step outside the representative-agent framework. In RE models, all agents typically have the same information set, and therefore make the same predictions. However, in the real world even professional forecasters show substantial disagreement. This disagreement can have an impact on asset prices and transaction volumes, among other things. However, there is no unique way of aggregating forecasts (or forecast probability density functions) into a measure of disagreement. The paper deals with this problem, surveying some proposed methods. The most appropriate measure of disagreement turns out to depend on the intended use, that is, on the model. Moreover, forecasters underestimate uncertainty. Constitutions and Central-Bank Independence: An Objection to McCallum's Second Fallacy, joint with Giancarlo Spagnolo, is an excursion into the field of Political Economy. The essay provides some foundations for the assumption that renegotiating a delegation contract can be costly by illustrating how political institutions can generate inertia in re-contracting, reduce the gains from it or prevent it altogether. Once the nature of renegotiation costs has been clarified, it is easier to see why certain institutions can mitigate or solve dynamic inconsistencies better than others. / Diss. Stockholm : Handelshögsk., 2001
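A minimal sketch of the specification comparison the first essay describes, assuming a recursively identified three-variable VAR fit with statsmodels; the placeholder white-noise data and column names are illustrative only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder data: substitute actual series for the output gap,
# inflation and the policy rate.
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.standard_normal((200, 3)),
                    columns=["output_gap", "inflation", "fed_funds"])

res = VAR(data).fit(maxlags=4)
irf = res.irf(20)  # orthogonalized IRFs, Cholesky order as listed

# Response of inflation to a policy-rate shock; the essay's claim is that
# replacing output_gap with output alone produces the puzzling positive
# initial inflation response, which vanishes under correct specification.
resp = irf.orth_irfs[:, data.columns.get_loc("inflation"),
                     data.columns.get_loc("fed_funds")]
print(resp[:5])
```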
24

Multilevel Mediation Analysis: Statistical Assumptions and Centering

January 2010 (has links)
Mediation analysis is a statistical approach that examines the effect of a treatment (e.g., a prevention program) on an outcome (e.g., substance use) achieved by targeting and changing one or more intervening variables (e.g., peer drug use norms). The increased use of prevention intervention programs with outcomes measured at multiple time points following the intervention requires multilevel modeling techniques to account for clustering in the data. Estimating multilevel mediation models, in which all the variables are measured at the individual level (Level 1), poses several challenges to researchers. The first challenge is to conceptualize a multilevel mediation model by clarifying the underlying statistical assumptions and the implications of those assumptions for the cluster-level (Level-2) covariance structure. A second challenge is that variables measured at Level 1 potentially contain both between- and within-cluster variation, making interpretation of multilevel analysis difficult. As a result, multilevel mediation analyses may yield coefficient estimates that are composites of coefficient estimates at different levels if proper centering is not used. This dissertation addresses these two challenges. Study 1 discusses the concept of a correctly specified multilevel mediation model by examining the underlying statistical assumptions and their implications for the Level-2 covariance structure. Further, Study 1 presents analytical results showing algebraic relationships between the population parameters in a correctly specified multilevel mediation model. Study 2 extends previous work on centering in multilevel mediation analysis. First, different centering methods in multilevel analysis, including centering within cluster with the cluster mean as a Level-2 predictor of the intercept (CWC2), are discussed. Next, application of the CWC2 strategy to accommodate multilevel mediation models is explained. It is shown that the CWC2 centering strategy separates the between- and within-cluster mediated effects. Study 2 then discusses the assumptions underlying a correctly specified CWC2 multilevel mediation model and defines between- and within-cluster mediated effects. In addition, analytical results for the algebraic relationships between the population parameters in a CWC2 multilevel mediation model are presented. Finally, Study 2 presents results of a simulation study conducted to verify the derived algebraic relationships empirically. / Dissertation/Thesis / Ph.D. Psychology 2010
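A small sketch of the CWC2 strategy on simulated data (column names and effect sizes are hypothetical): the Level-1 variable is split into a within-cluster centered part and a cluster-mean part entered at Level 2, so the within- and between-cluster paths are estimated separately rather than as a composite.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_clusters, n_per = 50, 20
cluster = np.repeat(np.arange(n_clusters), n_per)

# Mediator with both between-cluster and within-cluster variation
m = rng.normal(size=n_clusters)[cluster] + rng.normal(size=n_clusters * n_per)
df = pd.DataFrame({"cluster": cluster, "m": m})

# CWC2: center within cluster, keep the cluster mean as a Level-2 predictor
df["m_between"] = df.groupby("cluster")["m"].transform("mean")
df["m_within"] = df["m"] - df["m_between"]

# Outcome built with different within (0.5) and between (1.2) effects
df["y"] = (0.5 * df["m_within"] + 1.2 * df["m_between"]
           + rng.normal(size=len(df)))

# Random-intercept model; the two coefficients recover the separate effects
fit = smf.mixedlm("y ~ m_within + m_between", df, groups=df["cluster"]).fit()
print(fit.params)
```

Entering the raw (uncentered) mediator instead would return a single coefficient that conflates the two paths — the composite-estimate problem the abstract describes.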
25

Efeitos da especificação incorreta da função de ligação no modelo de regressão beta / The impact of misspecification of the link function in beta regression

Augusto Cesar Giovanetti de Andrade 09 August 2007 (has links)
Fitting beta regression models requires the specification of a link function. Some useful link functions for beta regression are: logit, probit, complementary log-log and log-log. Usually, the logit link is used since it allows easy interpretation of the regression parameters. The main objective of this work is to evaluate the impact of misspecification of the link function in beta regression. Simulation studies are used for this purpose. Samples of the response variable are generated assuming a known (true) link function, and the beta regression is fitted using the true (correct) link and some incorrect link functions. Numerical results are compared to evaluate the effect of misspecification of the link function on inference in beta regression. We also introduce a beta regression model with the Aranda-Ordaz link function, which depends on an unknown parameter that can be estimated from the data.
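A minimal sketch of the simulation design described above, assuming a logit true link and a probit working link. The Aranda-Ordaz inverse link shown follows the asymmetric form mu = 1 - (1 + lambda * exp(eta))^(-1/lambda), which reduces to the logit at lambda = 1 and to the complementary log-log as lambda approaches 0.

```python
import numpy as np
from scipy.stats import beta, norm

rng = np.random.default_rng(3)

def inv_logit(eta):
    return 1 / (1 + np.exp(-eta))

def inv_aranda_ordaz(eta, lam):
    # Asymmetric Aranda-Ordaz inverse link; lam=1 recovers the logit
    return 1 - (1 + lam * np.exp(eta)) ** (-1 / lam)

# Generate beta-distributed responses under the true (logit) link
n, phi = 500, 20.0                 # sample size and precision parameter
x = rng.uniform(-1, 1, n)
mu = inv_logit(0.5 + 1.5 * x)      # true mean via the logit link
y = beta.rvs(mu * phi, (1 - mu) * phi, random_state=rng)

# A misspecified working model would instead assume, e.g., a probit mean:
mu_probit = norm.cdf(0.5 + 1.5 * x)
```

Fitting the model under each working link and comparing estimated means, coverage rates or test sizes then quantifies the cost of link misspecification.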
26

Estimation of DSGE Models: A Monte Carlo Analysis

Motula, Paulo Fernando Nericke 18 June 2013 (has links)
We investigate the small sample properties and robustness of the parameter estimates of DSGE models. Our test ground is the Smets and Wouters (2007) model, and the estimation procedures we evaluate are the Simulated Method of Moments (SMM) and Maximum Likelihood (ML). We look at the empirical distributions of the parameter estimates and their implications for impulse-response and variance decomposition analyses in the cases of correct specification and two types of misspecification. Our results indicate an overall poor performance of SMM and some patterns of bias in impulse-response and variance decomposition for ML under the types of misspecification studied.
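A minimal, generic sketch of Simulated Method of Moments — not the Smets-Wouters model; the AR(1) "model" and the moment choices are illustrative assumptions: pick moments, simulate the model at candidate parameters with fixed shocks, and minimize the weighted distance to the data moments.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def simulate_ar1(rho, sigma, T, shocks):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + sigma * shocks[t]
    return y

def moments(y):
    # Variance and first-order autocovariance as matching moments
    return np.array([np.var(y), np.cov(y[:-1], y[1:])[0, 1]])

# "Observed" data from the true parameters (rho=0.8, sigma=1.0)
T = 500
y_obs = simulate_ar1(0.8, 1.0, T, rng.standard_normal(T))
m_obs = moments(y_obs)

# Fix simulation shocks across evaluations (common random numbers)
sim_shocks = rng.standard_normal(10 * T)

def smm_objective(theta):
    rho, sigma = theta
    y_sim = simulate_ar1(rho, sigma, 10 * T, sim_shocks)
    g = moments(y_sim) - m_obs
    return g @ g  # identity weighting matrix for simplicity

res = minimize(smm_objective, x0=[0.5, 0.5], method="Nelder-Mead")
print(res.x)  # SMM estimates of (rho, sigma)
```

Repeating this over many simulated data sets gives the empirical distribution of the estimates that the thesis examines.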
27

Model Misspecification and the Hedging of Exotic Options

Balshaw, Lloyd Stanley 30 August 2018 (has links)
Asset pricing models are well established and have been used extensively by practitioners, both for pricing options and for hedging them. Though Black-Scholes is the original and most commonly communicated asset pricing model, alternative asset pricing models that incorporate additional features have since been developed. We present three asset pricing models here: the Black-Scholes model, the Heston model and the Merton (1976) model. For each asset pricing model we test the hedge effectiveness of delta hedging, minimum variance hedging and static hedging, where appropriate. The options hedged under the aforementioned techniques and asset pricing models are down-and-out call options, lookback options and cliquet options. The hedges are performed over three strikes, representing at-the-money, out-of-the-money and in-the-money options. Stock prices are simulated under the stochastic-volatility double jump diffusion (SVJJ) model, which incorporates stochastic volatility as well as jumps in the stock and volatility processes. Simulation is performed under two 'worlds'. World 1 is set under normal market conditions, whereas World 2 represents stressed market conditions. Each asset pricing model is calibrated to observed option prices via a least squares optimisation routine. We find that no asset pricing model consistently provides a better hedge in World 1. In World 2, however, the Heston model marginally outperforms the Black-Scholes model overall. This can be explained by the higher volatility in World 2, which the Heston model can describe more accurately given its stochastic volatility component. Calibration difficulties are experienced with the Merton model; these lead to larger errors under minimum variance hedging, and alternative calibration techniques should be considered by future users of the optimiser.
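A minimal sketch of the delta-hedging leg of such an experiment, assuming Black-Scholes dynamics and a vanilla call for brevity (the thesis hedges exotics under an SVJJ simulator): fund the hedge with the option premium, rebalance the stock position daily, and measure the replication error at expiry.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

def bs_delta(S, K, r, sigma, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

rng = np.random.default_rng(5)
S0, K, r, sigma, T, n_steps = 100.0, 100.0, 0.02, 0.2, 1.0, 252
dt = T / n_steps

S, delta_prev = S0, 0.0
cash = bs_call(S0, K, r, sigma, T)   # hedge funded by the option premium
for i in range(n_steps):
    tau = T - i * dt
    delta = bs_delta(S, K, r, sigma, tau)
    cash -= (delta - delta_prev) * S              # rebalance the stock position
    cash *= np.exp(r * dt)                        # accrue risk-free interest
    S *= np.exp((r - 0.5 * sigma**2) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    delta_prev = delta

error = delta_prev * S + cash - max(S - K, 0.0)   # replication error at expiry
print(f"replication error: {error:.4f}")
```

Under model misspecification the hedger computes deltas from the wrong model while the path evolves under another (e.g., SVJJ), and the distribution of this error across paths is the hedge-effectiveness measure.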
28

Statistical Adequacy and Reliability of Inference in Regression-like Models

Romero, Alfredo A. 09 June 2010 (has links)
Using theoretical relations as a source of econometric specifications might lead a researcher to models that do not adequately capture the statistical regularities in the data and do not faithfully represent the phenomenon of interest. In addition, the researcher is unable to disentangle the statistical and substantive sources of error and is thus incapable of using the statistical evidence to assess whether the theory, and not the statistical model, is wrong. The Probabilistic Reduction Approach puts forward a modeling strategy in which theory can confront data without compromising the credibility of either one of them. This approach explicitly derives testable assumptions that, along with the standardized residuals, help the researcher assess the precision and reliability of statistical models via misspecification testing. It is argued that only when the statistical source of error is ruled out can the researcher reconcile the theory and the data and establish the theoretical and/or external validity of econometric models. Through the approach, we are able to derive the properties of Beta regression-like models, appropriate when the researcher deals with rates and proportions or any other random variable with finite support; and of Lognormal models, appropriate when the researcher deals with nonnegative data, and especially important for the estimation of demand elasticities. / Ph. D.
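A minimal sketch of residual-based misspecification testing in the spirit described above, using two standard diagnostics available in statsmodels (the tests chosen here are illustrative, not the full probabilistic-reduction battery):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(6)
n = 200
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
resid = fit.resid

# Normality of the standardized residuals
jb_stat, jb_pval, _, _ = jarque_bera(resid)
# Homoskedasticity of the residuals
bp_stat, bp_pval, _, _ = het_breuschpagan(resid, X)
print(f"Jarque-Bera p={jb_pval:.3f}, Breusch-Pagan p={bp_pval:.3f}")
```

Only after such probabilistic assumptions survive testing can departures of the estimates from theory be attributed to the theory rather than to the statistical model.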
29

Model robust regression: combining parametric, nonparametric, and semiparametric methods

Mays, James Edward January 1995 (has links)
In obtaining a regression fit to a set of data, ordinary least squares regression depends directly on the parametric model formulated by the researcher. If this model is incorrect, a least squares analysis may be misleading. Alternatively, nonparametric regression (kernel or local polynomial regression, for example) has no dependence on an underlying parametric model, but instead depends entirely on the distances between regressor coordinates and the prediction point of interest. This procedure avoids the necessity of a reliable model, but in using no information from the researcher, may fit to irregular patterns in the data. The proper combination of these two regression procedures can overcome their respective problems. Considered is the situation where the researcher has an idea of which model should explain the behavior of the data, but this model is not adequate throughout the entire range of the data. An extension of partial linear regression and two methods of model robust regression are developed and compared in this context. These methods involve parametric fits to the data and nonparametric fits to either the data or residuals. The two fits are then combined in the most efficient proportions via a mixing parameter. Performance is based on bias and variance considerations. / Ph. D.
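A minimal sketch of the combination idea, assuming an OLS parametric fit, a Nadaraya-Watson kernel fit to the residuals, and a fixed mixing parameter lambda (in the dissertation the mixing proportion is chosen data-dependently):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x = np.sort(rng.uniform(0, 1, n))
# True curve: a line plus a local bump the linear model cannot capture
y = 2 + 3 * x + 1.5 * np.exp(-((x - 0.5) / 0.08) ** 2) + rng.normal(0, 0.3, n)

# Parametric fit: ordinary least squares on a straight line
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
y_par = X @ beta_hat

# Nonparametric fit to the residuals (Nadaraya-Watson, Gaussian kernel)
h = 0.05
w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
resid_smooth = (w @ (y - y_par)) / w.sum(axis=1)

# Mixing: lam=0 is purely parametric, lam=1 adds the full correction
lam = 0.7
y_mrr = y_par + lam * resid_smooth
print(f"SSE parametric: {np.sum((y - y_par)**2):.2f}, "
      f"SSE model-robust: {np.sum((y - y_mrr)**2):.2f}")
```

The parametric component supplies the researcher's model where it is adequate, while the residual smooth repairs the regions where it is not — the bias-variance trade-off the abstract describes.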
30

Modelling economic high-frequency time series

Lundbergh, Stefan January 1999 (has links)
Diss. Stockholm : Handelshögsk.
