  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Involuntary unemployment and financial frictions in estimated DSGE models

Devulder, Antoine 19 April 2016 (has links)
Thanks to their internal consistency, DSGE models, built on microfounded agent behavior, have become prevalent for business cycle and policy analysis in institutions. The recent crisis and governments' concern about persistent unemployment advocate for mechanisms capturing imperfect adjustments in credit and labor markets. However, popular models such as that of Smets and Wouters (2003, 2007), although unsophisticated in their representation of these markets, replicate the data as well as usual econometric tools. It is thus necessary to question the benefits of including these frictions in theoretical models for operational use. In this thesis, I address this issue and show that microfounded mechanisms specific to labor and credit markets can significantly alter the conclusions based on an estimated DSGE model, from both a positive and a normative perspective. For this purpose, I build a two-country model of France and the rest of the euro area, with exogenous rest-of-the-world variables, and estimate it with and without these two frictions using Bayesian techniques. By contrast with existing models, I propose two improvements to the representation of labor markets. First, following Pissarides (2009), only wages in new jobs are negotiated by firms and workers, generating stickiness in the average real wage. Second, I develop a set of assumptions to make labor market participation endogenous and unemployment involuntary, in the sense that unemployed workers are worse off than employed ones; including this setup in the estimated model is left for future research. Using the four estimated versions of the model, I undertake a number of analyses to highlight the role of financial and labor market frictions: a historical shock decomposition of fluctuations during the crisis, the evaluation of several monetary policy rules, a counterfactual simulation of the crisis under the assumption of a flexible exchange rate regime between France and the rest of the euro area and, lastly, the simulation of social VAT scenarios.
122

Bayesian Estimation of DSGE Models

Bouda, Milan January 2012 (has links)
This thesis is dedicated to the Bayesian estimation of DSGE models. Firstly, the history of DSGE modeling is outlined, along with the development of this macroeconometric field in the Czech Republic and the rest of the world. Secondly, a comprehensive DSGE framework is described in detail, so that an arbitrary DSGE model can be specified and estimated according to it. The thesis contains two empirical studies. The first describes the derivation of a New Keynesian DSGE model and its estimation using Bayesian techniques. The model is estimated with three different Taylor rules, and the best-performing rule is identified using Bayesian model comparison. The second study develops a small open economy model with a housing sector. It builds on a previous study that specified the model for a closed economy; I extend it with open-economy features and a government sector. The Czech Republic is generally considered a small open economy, and these extensions make the model more applicable to it. The model contains two types of households. The first type can access capital markets and smooth consumption over time by buying or selling financial assets; these households follow the permanent income hypothesis (PIH). The other type follows rule-of-thumb (ROT) consumption, spending all of its income on consumption. Other agents in the economy are specified in the standard way. The outcomes of the study focus mainly on the behavior of house prices: the main outputs, such as Bayesian impulse response functions, Bayesian predictions and the shock decomposition, concentrate on this variable. At the end of the study, a macro-prudential experiment answers the following question: is a higher or lower loan-to-value (LTV) ratio better for the Czech Republic? The experiment is conclusive and shows that the level of the LTV ratio does not affect GDP. House prices, on the other hand, are very sensitive to it. The recommendation for the Czech National Bank can be summarized as follows: to keep house prices less volatile, implement a lower rather than a higher LTV ratio.
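The "Bayesian model comparison" used above to rank the Taylor rules selects the specification with the highest marginal likelihood. As a minimal, self-contained illustration (a toy Gaussian setting, not the thesis's DSGE framework; the sample size, prior scale and data-generating values are made up), the sketch below compares two nested models of the same data through their exact log marginal likelihoods:

```python
import numpy as np

def log_mvn_pdf(y, cov):
    """Log-density of a zero-mean multivariate normal at y."""
    n = len(y)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(cov, y))

rng = np.random.default_rng(1)
y = rng.normal(0.7, 1.0, size=50)   # data drawn with a nonzero mean

n = len(y)
# M0: y_i ~ N(0, 1) with no free parameter, so the marginal
# likelihood is just the likelihood.
log_ml_m0 = log_mvn_pdf(y, np.eye(n))
# M1: mu ~ N(0, 1) and y_i | mu ~ N(mu, 1); integrating mu out gives
# y ~ N(0, I + 11'), again a Gaussian density in closed form.
log_ml_m1 = log_mvn_pdf(y, np.eye(n) + np.ones((n, n)))

log_bayes_factor = log_ml_m1 - log_ml_m0
print(log_bayes_factor)   # positive here: the data favor the free-mean model
```

The same principle carries over to DSGE estimation, where the marginal likelihood has no closed form and is instead approximated, e.g. from MCMC output.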
123

Time Series Data Analysis of Single Subject Experimental Designs Using Bayesian Estimation

Aerts, Xing Qin 08 1900 (has links)
This study presents a set of data analysis approaches for single subject designs (SSDs). The primary purpose is to establish a series of statistical models, estimated in a Bayesian framework, to supplement visual analysis in single subject research. A linear modeling approach has been used to study level and trend changes. I propose an alternative approach that treats the phase change-point between the baseline and intervention conditions as an unknown parameter. Similar to some existing approaches, the models take into account changes in slopes and intercepts in the presence of serial dependency. The Bayesian procedure used to estimate the parameters and analyze the data is described. Researchers use a variety of statistical methods to analyze different single subject research designs. This dissertation presents a series of statistical models for data from various conditions: the baseline phase, A-B design, A-B-A-B design, multiple baseline design, alternating treatments design, and changing criterion design. The change-point evaluation method can provide additional confirmation of the causal effect of the treatment on the target behavior. Software code is provided as supplemental material in the appendices. The applicability of the analyses is demonstrated using five examples from the SSD literature.
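The idea of treating the phase change-point as an unknown parameter can be illustrated with a toy computation. The sketch below is my own minimal example, not the dissertation's model: it puts a uniform prior on the change-point of a simulated A-B series and scores each candidate split with a Gaussian likelihood using plug-in segment means, ignoring the serial dependency that the dissertation's models handle.

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the phase change-point tau under a uniform prior.

    Each candidate split y[:tau] / y[tau:] is scored by a Gaussian
    likelihood with the segment means plugged in -- an empirical-Bayes
    shortcut that keeps the example short.
    """
    n = len(y)
    loglik = np.full(n, -np.inf)
    for tau in range(2, n - 1):
        a, b = y[:tau], y[tau:]
        loglik[tau] = (-0.5 / sigma**2) * (np.sum((a - a.mean()) ** 2)
                                           + np.sum((b - b.mean()) ** 2))
    w = np.exp(loglik - loglik.max())
    return w / w.sum()

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 20),   # baseline phase (A)
                         rng.normal(3.0, 1.0, 20)])  # intervention phase (B)
posterior = changepoint_posterior(series)
print(posterior.argmax())   # most probable change-point; the true split is at 20
```

A posterior concentrated near the observed phase change is the kind of confirmation of a treatment effect that the abstract describes.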
124

Bayesian and classical inference for extensions of the Geometric Exponential distribution, with applications in survival analysis in the presence of covariates and random censoring

Gianfelice, Paulo Roberto de Lima. January 2020 (has links)
Advisor: Fernando Antonio Moala / Abstract: This work presents a study of probabilistic modeling, with applications to survival analysis, based on a probabilistic model called the Exponential Geometric (EG) distribution, which offers great flexibility for the statistical estimation of its parameters from samples of complete and censored lifetime data. The study explores estimators and lifetime data under random censoring for two extensions of the EG model: the Extended Geometric Exponential (EEG) and the Generalized Extreme Geometric Exponential (GE2). Exclusively for the EEG model, it also considers covariates indexed in the rate parameter as a second source of variation, adding even more flexibility to the model; exclusively for the GE2 model, a convergence analysis, hitherto ignored, is proposed for its moments. Statistical inference for these extensions is carried out in order to expose, in the classical context, their maximum likelihood estimators and asymptotic confidence intervals and, in the Bayesian context, their prior and posterior distributions, in both cases to estimate their parameters under random censoring, with covariates in the EEG case. The Bayesian estimators are developed under the assumptions that the priors are vague, follow a Gamma distribution and are independent across the unknown parameters. The results are drawn from a detailed statistical simulation study applied to... (Complete abstract: click electronic access below) / Master's
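As a toy illustration of likelihood-based estimation from randomly censored lifetime data, the sketch below fits a plain exponential model, a far simpler special case than the EG extensions studied in the thesis; the rates and sample size are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
lam_true = 2.0
lifetimes = rng.exponential(1 / lam_true, size=500)  # true failure times
censoring = rng.exponential(1.0, size=500)           # independent censoring times
t = np.minimum(lifetimes, censoring)                 # observed times
delta = lifetimes <= censoring                       # True = failure observed

# Censored likelihood: prod f(t_i)^delta_i * S(t_i)^(1 - delta_i).
# For the exponential model it is maximized in closed form at
# lambda_hat = (number of observed failures) / (total time on test).
lam_hat = delta.sum() / t.sum()
print(lam_hat)   # should land near lam_true = 2.0
```

For the EG family no such closed form exists and the likelihood (or posterior) has to be maximized or sampled numerically, which is where the thesis's classical and Bayesian machinery comes in.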
125

CHANGES IN THE BRAZILIAN YIELD CURVE RESPONSES TO MONETARY SHOCKS

GUSTAVO CURI AMARANTE 30 May 2016 (has links)
Empirical evidence from reduced-form VAR estimates shows that there has been a change in the way the Brazilian yield curve reacts to a monetary policy shock. To better understand the sources of this change, we estimated a linearized DSGE model with a term structure of interest rates over two sample periods, to see which parameters of the economy might have caused it. The linearization method is augmented with a risk-adjustment term in order to generate a positive term spread and a risk-adjusted steady state. We discuss the empirical evidence, compare the solution method with other traditional methods, and estimate a model with Epstein-Zin preferences using Bayesian methods. We find that our structural model is capable of capturing some of the changes in behavior, caused mainly by a smaller inflation coefficient in the interest rate rule and a higher persistence of monetary policy shocks.
126

Essays on Business Cycles Fluctuations and Forecasting Methods

Pacce, Matías José 03 July 2017 (has links)
This doctoral dissertation proposes methodologies which, from a linear or a non-linear approach, accommodate the information flow and can deal with large amounts of data. The empirical application of the proposed methodologies contributes to answering some of the questions that emerged, or were amplified, after the 2008 global crisis. Essential aspects of macroeconomic analysis are studied, such as the identification and forecasting of business cycle turning points, business cycle interactions between countries, and the development of tools able to forecast the evolution of key economic indicators from new data sources, such as those that emerge from search engines.
127

Essays on Inflation: Expectations, Forecasting and Markups

Capolongo, Angela 15 September 2020 (has links) (PDF)
This manuscript is composed of three chapters. In the first chapter, I analyze the impact of key European Central Bank unconventional monetary policy announcements on inflation expectations, measured by euro area five-year inflation-linked swap rates five years ahead, since the aftermath of the crisis. I control for market liquidity and uncertainty measures, oil price shocks and macroeconomic news. The results show that the impact of the European Central Bank's announcements was positive during the period under observation: in line with the expansionary monetary policy measures implemented, agents revised their long-term inflation expectations upwards, which means the unconventional measures were effective. In the second chapter, co-authored with Claudia Pacella, we construct a Bayesian vector autoregressive model with three layers of information: the key drivers of inflation, cross-country dynamic interactions, and country-specific variables. The model provides good forecasting accuracy with respect to the popular benchmarks used in the literature. We perform a step-by-step analysis to shed light on which layer of information is most crucial for accurately forecasting euro area inflation. Our empirical analysis reveals the importance of including the key drivers of inflation and taking into account the multi-country dimension of the euro area. The results show that the complete model performs best overall in forecasting inflation excluding energy and unprocessed food over the medium term. We use the model to establish stylized facts on the euro area and cross-country heterogeneity over the business cycle. In the third chapter, using confidential firm-level data from the National Bank of Belgium, I document the heterogeneous response of firms' markups to the 2008 financial crisis. Overall, markups increased in the aftermath of the crisis, and the effect was larger for highly financially constrained firms. I show that standard heterogeneous-firm models, featuring monopolistic competition and variable markups, are unable to replicate these patterns. I then introduce endogenous demand shifters which respond to firm investment in market share (e.g. quality), and show that the interaction of an increase in the cost of procuring inputs with endogenous quality downgrading can rationalize the observed changes in firm-level markups. / Doctorate in Economics and Management Sciences
128

Correspondence between Gaussian process regression and interpolation splines under linear inequality constraints. Theory and applications

Maatouk, Hassan 01 October 2015 (has links)
This thesis is dedicated to interpolation problems where the numerical function is known to satisfy properties such as positivity, monotonicity or convexity. Two methods of interpolation are studied. The first is deterministic and is based on convex optimization in a reproducing kernel Hilbert space (RKHS). The second is a Bayesian approach based on Gaussian process regression (GPR), or kriging. Using a finite linear functional decomposition, we propose to approximate the original Gaussian process by a finite-dimensional Gaussian process such that conditional simulations satisfy all the inequality constraints. As a consequence, GPR is equivalent to the simulation of a Gaussian vector truncated to a convex set. The mode, or maximum a posteriori, is defined as a Bayesian estimator, and prediction intervals are quantified by simulation. Convergence of the method is proved, and the correspondence between the two approaches is established. This can be seen as an extension of the correspondence established by [Kimeldorf and Wahba, 1971] between Bayesian estimation on stochastic processes and smoothing by splines. Finally, a real application in insurance and finance is given, estimating a term-structure curve and default probabilities.
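A crude way to see the "truncated Gaussian vector" view in action is rejection sampling: draw unconstrained GP posterior paths on a finite grid and keep only those satisfying the constraint. The sketch below does this for monotonicity; it is a naive stand-in for the efficient truncated-Gaussian samplers such work relies on, and the kernel, training data and tolerances are arbitrary choices for the example.

```python
import numpy as np

def rbf(x, y, ell=0.6):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell**2)

def monotone_gp_draws(x_tr, y_tr, x_grid, n_draws, noise=1e-6, seed=0):
    """GP posterior sample paths on x_grid, keeping nondecreasing ones only."""
    rng = np.random.default_rng(seed)
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_grid, x_tr)
    mean = Ks @ np.linalg.solve(K, y_tr)
    cov = rbf(x_grid, x_grid) - Ks @ np.linalg.solve(K, Ks.T)
    vals, vecs = np.linalg.eigh(cov)                 # robust covariance square root
    root = vecs * np.sqrt(np.clip(vals, 0.0, None))
    kept = []
    for _ in range(100_000):                         # cap the attempts
        path = mean + root @ rng.standard_normal(len(x_grid))
        if np.all(np.diff(path) >= 0.0):             # monotonicity constraint
            kept.append(path)
            if len(kept) == n_draws:
                break
    return np.array(kept)

x_tr = np.array([0.0, 0.5, 1.0])
y_tr = np.array([0.0, 0.4, 1.0])                     # monotone training data
x_grid = np.linspace(0.0, 1.0, 10)
draws = monotone_gp_draws(x_tr, y_tr, x_grid, n_draws=5)
print(draws.shape)                                   # (5, 10) when sampling succeeds
```

Rejection sampling only works when the constraint is not too improbable under the unconstrained posterior, which is why dedicated truncated-Gaussian simulation methods matter in practice.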
129

Correction of Pickands function estimators, and a Bayesian estimator

Chalifoux, Kevin 01 1900 (has links)
Estimating a bivariate extreme-value copula is equivalent to estimating its associated Pickands function A. This function A: [0,1] \( \rightarrow \) [0,1] must satisfy the constraints $$\max\{1-t, t\} \leq A(t) \leq 1, \hspace{3mm} t \in [0,1],$$ $$A \text{ is convex.}$$ Many estimators have been proposed for A, but few satisfy these constraints. The main contribution of this thesis is a simple correction technique for Pickands function estimators, so that the corrected estimators respect the required constraints. The proposed correction uses a new property of the bivariate extreme-value random vector together with the convex hull of the obtained estimator to guarantee that the Pickands function constraints hold. The second contribution is a nonparametric Bayesian estimator of the Pickands function based on the form introduced by Capéraà, Fougères and Genest (1997). The estimator uses Dirichlet processes to estimate the cumulative distribution function of a transformation of the bivariate extreme-value vector. Simulations and a comparison with popular estimators, over a set of 18 bivariate extreme-value distributions, measure the performance of the proposed correction and Bayesian estimator. The correction reduces the mean squared error on all distributions, and the Bayesian estimator attains the lowest mean squared error among all the estimators considered.
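The constrained-projection idea can be sketched numerically: clip a raw estimate between the pointwise bounds, then take its greatest convex minorant, which stays within the bounds because the lower bound max(1-t, t) is itself convex. This is a generic construction in the spirit of the correction described above, not the thesis's own algorithm; the grid and the noisy estimate below are made up for illustration.

```python
import numpy as np

def greatest_convex_minorant(t, g):
    """Greatest convex minorant of the points (t_i, g_i), evaluated at t."""
    hull = [(t[0], g[0])]
    for ti, gi in zip(t[1:], g[1:]):
        hull.append((ti, gi))
        while len(hull) >= 3:
            (x0, y0), (x1, y1), (x2, y2) = hull[-3:]
            # drop the middle point whenever the slopes fail to increase
            if (y1 - y0) * (x2 - x1) >= (y2 - y1) * (x1 - x0):
                hull.pop(-2)
            else:
                break
    hx = np.array([p[0] for p in hull])
    hy = np.array([p[1] for p in hull])
    return np.interp(t, hx, hy)

def correct_pickands(t, a_hat):
    """Clip a raw estimate into max(1-t, t) <= A <= 1, then convexify."""
    g = np.clip(a_hat, np.maximum(1.0 - t, t), 1.0)
    return greatest_convex_minorant(t, g)

t = np.linspace(0.0, 1.0, 21)
raw = np.maximum(1.0 - t, t) + 0.08 * np.sin(12.0 * t)   # wiggly, invalid estimate
A = correct_pickands(t, raw)
print(A[0], A[-1])   # endpoint values A(0) = A(1) = 1
```

The corrected curve is convex, stays between max(1-t, t) and 1, and equals 1 at both endpoints, i.e. it is a valid Pickands function on the grid.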
130

Estimating The Drift Diffusion Model of Conflict

Thomas, Noah January 2021 (has links)
No description available.
