21 |
THREE ESSAYS ON CREDIT MARKETS AND THE MACROECONOMY. Bianco, Timothy P. 01 January 2018
Historically, credit market conditions have been shown to impact economic activity, at times severely. For instance, in the late 2000s, the United States experienced a financial crisis that seized domestic and foreign credit markets. The ensuing lack of access to credit brought about a steep decline in output and a sluggish recovery. Accordingly, policymakers commonly take steps to mitigate the effects of adverse credit market conditions and, at times, conduct unconventional monetary policy once traditional policy tools become ineffective. This dissertation is a collection of essays regarding monetary policy, the flow of credit, financial crises, and the macroeconomy. Specifically, I describe monetary policy’s impact on the allocation of credit in the U.S. and analyze the role of upstream and downstream credit conditions and financial crises on international trade in a global supply chain.
The first chapter assesses the impact of monetary policy shocks on credit reallocation and evaluates the importance of theoretical transmission mechanisms. Compustat data covering 1974 through 2017 is used to compute quarterly measures of credit flows. I find that expansionary monetary policy is associated with positive long-term credit creation and credit reallocation. These impacts are larger for long-term credit and for credit of financially constrained firms and firms that are perceived as risky to the lender. This is predicted by the balance sheet channel of monetary policy and mechanisms that reduce lenders’ risk perceptions and increase the tendency to search for yield. Furthermore, I find that, on average, the largest increases in credit creation resulting from monetary expansion are to firms that exhibit relatively low investment efficiency. These estimation results suggest that expansionary monetary policy may have a negative impact on future economic growth.
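For concreteness, quarterly credit creation and destruction measures of this kind are often built from firm-level debt changes using the Davis and Haltiwanger job-flow convention; the sketch below illustrates that construction. It is an assumption-laden illustration, not the dissertation's code: the column names (gvkey, quarter, debt) and the use of pandas are hypothetical.

```python
import pandas as pd

def credit_flows(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative quarterly credit creation/destruction rates.

    Expects firm-quarter rows with columns ['gvkey', 'quarter', 'debt'];
    these names are placeholders, not the dissertation's actual variables.
    """
    df = df.sort_values(["gvkey", "quarter"]).copy()
    df["debt_lag"] = df.groupby("gvkey")["debt"].shift(1)
    df = df.dropna(subset=["debt_lag"])
    df["change"] = df["debt"] - df["debt_lag"]
    df["size"] = 0.5 * (df["debt"] + df["debt_lag"])    # average firm size

    def rates(g: pd.DataFrame) -> pd.Series:
        total = g["size"].sum()
        pos = g.loc[g["change"] > 0, "change"].sum()    # expanding firms
        neg = -g.loc[g["change"] < 0, "change"].sum()   # contracting firms
        return pd.Series({"creation": pos / total,
                          "destruction": neg / total,
                          "reallocation": (pos + neg) / total,
                          "net": (pos - neg) / total})

    return df.groupby("quarter").apply(rates)
```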
The second chapter evaluates the quantitative effects of unconventional monetary policy in the late 2000s and early 2010s, a period when the traditional monetary policy tool (the federal funds rate) was constrained by the zero lower bound. We compute credit flow measures using Compustat data, and we employ a factor augmented vector autoregression to analyze unconventional monetary policy's impact on the allocation of credit during the zero lower bound period. By employing policy counterfactuals, we find that unconventional monetary policy has a positive and simultaneous impact on credit creation and credit destruction, and that these impacts are larger in long-term credit markets. Applying this technique to the flows of financially constrained and non-financially constrained borrowing firms, we find that unconventional monetary policy operates through the easing of collateral constraints, because the effects are larger for small firms and for those with high default probabilities. During the zero lower bound period, we also find that unconventional monetary policy brings about increases in credit creation for firms of relatively high investment efficiency.
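A factor augmented vector autoregression of the kind used here can be sketched in a few lines: extract principal-component factors from a large standardized panel, stack them with a policy indicator, and fit a VAR. The sketch below is a minimal illustration under stated assumptions (a NumPy panel, a shadow-rate style policy proxy, the VAR's default Cholesky ordering); the chapter's identification scheme and policy counterfactuals are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.tsa.api import VAR

def favar_irfs(panel: np.ndarray, policy: np.ndarray, n_factors: int = 3,
               lags: int = 4, horizon: int = 20):
    """Minimal FAVAR sketch: PCA factors plus a policy variable in a VAR.

    panel  : (T, N) array of standardized macro/credit series
    policy : (T,) policy indicator (e.g., a shadow-rate proxy at the ZLB)
    Identification is just the default ordering with the policy variable
    last; this is an assumption made for the illustration.
    """
    factors = PCA(n_components=n_factors).fit_transform(panel)
    data = np.column_stack([factors, policy])
    res = VAR(data).fit(lags)
    return res.irf(horizon)   # impulse responses; policy shock = last variable
```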
The third chapter pertains to the global trade collapse of the late 2000s. This collapse was due, in part, to strained credit markets and the vulnerability of exporters to adverse credit market conditions. The chapter evaluates the impact of upstream and downstream credit conditions and the differential effects of financial crises on bilateral trade. I find that upstream and downstream sectors' needs for external financing are negatively associated with trade flows when the exporting or importing country's cost of credit is high. However, I find that this effect is dampened for downstream sectors. I also find that downstream sectors' value of collateral is positively associated with trade when the cost of credit is high in the importing country. High downstream trade credit dependence coupled with high costs of credit in the importing country also causes declines in imports. There are amplifying effects of credit costs for sectors that are highly dependent on external financing when the importing or exporting country is in financial crisis. Further, the magnitude is larger when the exporting country is in financial crisis. Finally, I find that these effects on trade flows are large when the exporting country is a developed economy, but they are muted for developing economies.
|
22 |
Determinants of Fiscal Multipliers Revisited. Horvath, Roman; Kaszab, Lorant; Marsal, Ales; Rabitsch, Katrin. 09 1900
We generalize a simple New Keynesian model and show that a flattening of the Phillips curve reduces the size of fiscal multipliers at the zero lower bound (ZLB) on the nominal interest rate. The factors behind the flattening are consistent with micro- and macroeconomic empirical evidence: it results not from a higher level of price rigidity but from an increase in the degree of strategic complementarity in price-setting, invoked by the assumption of a firm-specific instead of an economy-wide labour market and of decreasing instead of constant returns to scale. In normal times, the efficacy of fiscal policy and the resulting multipliers tend to be small because negative wealth effects crowd out consumption, and because monetary policy endogenously reacts to fiscally driven increases in inflation and output by raising rates, offsetting part of the stimulus. In times of a binding ZLB and a fixed nominal rate, an increase in (expected) inflation instead lowers the real rate, leading to larger fiscal multipliers. Conditional on being in a ZLB environment, increases in expected inflation are lower under a flatter Phillips curve, so fiscal multipliers at the ZLB tend to be lower. Finally, we also discuss the role of solution methods in determining the size of fiscal multipliers. / Series: Department of Economics Working Paper Series
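The mechanism can be illustrated with a stylized three-equation New Keynesian sketch (a textbook approximation, not the paper's full model with firm-specific labour markets and decreasing returns); the symbols $x_t$, $\pi_t$, $i_t$, $g_t$, $r^n_t$, $\kappa$, $\sigma$, and $\Gamma$ are generic notation introduced here only for illustration.

```latex
% Stylized sketch, not the paper's model
\begin{aligned}
x_t   &= \mathbb{E}_t x_{t+1} - \sigma^{-1}\bigl(i_t - \mathbb{E}_t \pi_{t+1} - r^n_t\bigr) + \Gamma g_t,\\
\pi_t &= \beta \, \mathbb{E}_t \pi_{t+1} + \kappa x_t, \qquad i_t = 0 \ \text{at the ZLB.}
\end{aligned}
% With i_t stuck at zero, a fiscal expansion g_t raises x_t and hence
% \pi_t and E_t \pi_{t+1}; the real rate -E_t \pi_{t+1} falls, which
% amplifies the output response. A flatter Phillips curve (smaller
% \kappa) mutes this inflation feedback, so the ZLB multiplier shrinks.
```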
|
23 |
Détection d'évènements simples à partir de mesures sur courant alternatif / Detection of simple events from alternating current measurements. Amirach, Nabil. 10 June 2015
The need to save energy has been a major focus of recent decades, hence the need to monitor the energy consumption of residential and industrial processes. The research presented in this manuscript falls within the area of electricity consumption monitoring for the purpose of energy saving. The final goal is to have a precise and reliable knowledge of a given electrical network, which requires decomposing the network's overall power consumption in order to provide a detailed analysis of the energy consumed by each use. The objective of this thesis is to develop a non-intrusive approach that carries out the event detection and feature extraction steps, which precede the classification step and the estimation of power consumption by use. The algorithm resulting from the work performed in this thesis detects events occurring on the current signal and associates with each one an information vector containing parameters that characterize the steady state and the transient state. This information vector is then used to recognize all the events linked to the same electrical load.
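As a rough illustration of the non-intrusive event detection and feature extraction steps described above, the sketch below flags abrupt changes in an RMS current envelope and records a small feature vector (pre- and post-event steady-state levels, step size, transient peak and duration). The thresholds, window length, and the function and parameter names are assumptions made for this example, not the thesis's actual algorithm.

```python
import numpy as np

def detect_events(i_rms: np.ndarray, fs: float, win: int = 10,
                  step_thresh: float = 0.05, settle_tol: float = 0.02):
    """Toy event detector on an RMS current envelope (one sample per cycle).

    Returns a list of feature vectors, one per detected event:
    (index, pre-event level, post-event level, step size,
     transient peak, transient duration in seconds).
    """
    events = []
    k = win
    while k < len(i_rms) - win:
        pre = i_rms[k - win:k].mean()
        if abs(i_rms[k] - pre) > step_thresh:              # abrupt change: event
            # walk forward until the signal settles near a new steady level
            j = k
            while j < len(i_rms) - win and np.std(i_rms[j:j + win]) > settle_tol:
                j += 1
            post = i_rms[j:j + win].mean()
            peak = i_rms[k:j + 1].max()
            events.append((k, pre, post, post - pre, peak, (j - k) / fs))
            k = j + win                                    # skip past this event
        else:
            k += 1
    return events
```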
|
24 |
Bounded Eigenvalues of Fully Clamped and Completely Free Rectangular Plates. Mochida, Yusuke. January 2007
An exact solution for the vibration of rectangular plates is available only for plates with two opposite edges simply supported. Otherwise, plates are analysed using approximate methods, such as the Rayleigh-Ritz method, the finite element method, the finite difference method, and the superposition method. The Rayleigh-Ritz method and the finite element method give upper bound results for the natural frequencies of plates; however, a disadvantage of these methods is that the error due to discretisation cannot be calculated easily. It is therefore desirable to find a suitable method that gives lower bound results for the natural frequencies, to complement the results from the Rayleigh-Ritz method. The superposition method is also convenient and efficient, but it gives lower bound solutions only in some cases; whether it gives upper bound or lower bound results for the natural frequencies depends on the boundary conditions. It is also known that the finite difference method always gives lower bound results. This thesis presents bounded eigenvalues, which are the dimensionless form of the natural frequencies, calculated using the superposition method and the finite difference method. All computations were done using the MATLAB software package. The convergence tests show that the superposition method gives a lower bound for the eigenvalues of fully clamped plates and an upper bound for completely free plates. It is also shown that the finite difference method gives a lower bound for the eigenvalues of completely free plates. Finally, upper and lower bounds for the eigenvalues of fully clamped and completely free plates are given.
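The lower-bound behaviour of the finite difference method can be illustrated on a much simpler problem than the clamped or free plate: the Dirichlet eigenvalues of a membrane on the unit square, whose exact values are $\pi^2(m^2+n^2)$ and which the standard five-point scheme underestimates. The sketch below is only this simplified analogue, under that assumption; it is not the biharmonic plate computation carried out in the thesis (which used MATLAB).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def membrane_eigs(n: int = 60, k: int = 4):
    """Five-point FD eigenvalues of the Dirichlet Laplacian on the unit square.

    The discrete eigenvalues lie below the exact values pi^2 (m^2 + n^2),
    illustrating how a finite difference model can bound eigenvalues from below.
    """
    h = 1.0 / (n + 1)
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    T = sp.diags([off, main, off], [-1, 0, 1]) / h**2      # 1-D second difference
    I = sp.identity(n)
    A = sp.kron(I, T) + sp.kron(T, I)                      # 2-D Laplacian, Dirichlet BCs
    vals = eigsh(A.tocsc(), k=k, sigma=0, which="LM")[0]   # smallest eigenvalues
    exact = sorted(np.pi**2 * (m**2 + l**2)
                   for m in range(1, 4) for l in range(1, 4))[:k]
    return np.sort(vals), np.array(exact)

fd, exact = membrane_eigs()
print(np.round(fd, 2), np.round(exact, 2))   # FD values sit slightly below exact
```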
|
25 |
Multiparty Communication Complexity. David, Matei. 06 August 2010
Communication complexity is an area of complexity theory that studies an abstract model of computation called a communication protocol. In a $k$-player communication protocol, an input to a known function is partitioned into $k$ pieces of $n$ bits each, and each piece is assigned to one of the players in the protocol. The goal of the players is to evaluate the function on the distributed input by using as little communication as possible. In a Number-On-Forehead (NOF) protocol, the input piece assigned to each player is metaphorically placed on that player's forehead, so that each player sees everyone else's input but its own. In a Number-In-Hand (NIH) protocol, the piece assigned to each player is seen only by that player. Overall, the study of communication protocols has been used to obtain lower bounds and impossibility results for a wide variety of other models of computation.
Two of the main contributions presented in this thesis are negative results on the NOF model of communication, identifying limitations of NOF protocols. Together, these results constitute stepping stones towards a better fundamental understanding of this model. As the first contribution, we show that randomized NOF protocols are exponentially more powerful than deterministic NOF protocols, as long as $k \le n^c$ for some constant $c$. As the second contribution, we show that nondeterministic NOF protocols are exponentially more powerful than randomized NOF protocols, as long as $k \le \delta \cdot \log n$ for some constant $\delta < 1$.
For the third major contribution, we turn to the NIH model and we present a positive result. Informally, we show that a NIH communication protocol for a function $f$ can simulate a Stack Machine (a Turing Machine augmented with a stack) for a related function $F$, consisting of several instances of $f$ bundled together. Using this simulation and known communication complexity lower bounds, we obtain the first known (space vs. number of passes) trade-off lower bounds for Stack Machines.
|
27 |
Branch and Bound Algorithm for Multiprocessor Scheduling. Rahman, Mostafizur. January 2009
The multiprocessor task graph scheduling problem has been extensively studied as an academic optimization problem that arises when optimizing the execution time of a parallel algorithm on a parallel computer. The problem is known to be NP-hard. Many approaches, using a variety of optimization algorithms, have been proposed to find the optimal solution to this problem with less computational time; one of them is the branch and bound algorithm. In this paper, we propose a branch and bound algorithm for the multiprocessor scheduling problem. We investigate the algorithm by comparing two different lower bounds with respect to their computational costs and the size of the pruned tree. Several experiments are made on a small set of problems, and the results are compared in different sections.
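A minimal version of such a branch and bound search, simplified to independent tasks on identical processors (so the precedence constraints of the task graph are ignored), is sketched below. The lower bound used at each node, the larger of the current makespan and the perfectly balanced remaining load, is one simple choice among the bounds that could be compared; all names and parameters here are illustrative assumptions.

```python
from typing import List

def bnb_schedule(tasks: List[int], m: int) -> int:
    """Branch and bound makespan minimization of independent tasks on m processors."""
    tasks = sorted(tasks, reverse=True)          # branching on long tasks first helps pruning
    best = sum(tasks)                            # trivial upper bound: everything on one processor

    def search(i: int, loads: List[int]) -> None:
        nonlocal best
        remaining = sum(tasks[i:])
        # lower bound: current makespan vs. perfectly balanced remaining work
        lb = max(max(loads), (sum(loads) + remaining) / m)
        if lb >= best:
            return                               # prune this subtree
        if i == len(tasks):
            best = max(loads)                    # complete schedule better than incumbent
            return
        tried = set()
        for p in range(m):
            if loads[p] in tried:                # skip symmetric assignments
                continue
            tried.add(loads[p])
            loads[p] += tasks[i]
            search(i + 1, loads)
            loads[p] -= tasks[i]

    search(0, [0] * m)
    return best

print(bnb_schedule([4, 7, 2, 5, 3, 6], m=2))     # optimal makespan for this toy instance: 14
```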
|
28 |
Parameterized complexity and polynomial-time approximation schemes. Huang, Xiuzhen. 17 February 2005
According to the theory of NP-completeness, many problems that have important real-world applications are NP-hard. This excludes the possibility of solving them in polynomial time unless P=NP. A number of approaches have been proposed for dealing with NP-hard problems, among them approximation algorithms and parameterized algorithms. The study of approximation algorithms tries to find good enough solutions instead of optimal solutions in polynomial time, while parameterized algorithms try to give exact solutions when a natural parameter is small.
In this thesis, we study the structural properties of parameterized computation and approximation algorithms for NP optimization problems. In particular, we investigate the relationship between parameterized complexity and polynomial-time approximation schemes (PTAS) for NP optimization problems.
We give characterizations of two important subclasses of PTAS, the Fully Polynomial Time Approximation Scheme (FPTAS) and the Efficient Polynomial Time Approximation Scheme (EPTAS), using the theory of parameterized complexity. Our characterization of the class FPTAS has advantages over previous characterizations, and our characterization of EPTAS is the first systematic investigation of this new but important approximation class.
We develop new techniques to derive strong computational lower bounds for certain parameterized problems based on the theory of parameterized complexity. For example, we prove that unless an unlikely collapse occurs in parameterized complexity theory, the clique problem cannot be solved in time $O(f(k) \cdot n^{o(k)})$ for any function $f$. This lower bound matches the upper bound of the trivial algorithm that simply enumerates and checks all subsets of $k$ vertices in the given graph of $n$ vertices.
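For concreteness, the trivial algorithm referred to above, which enumerates and checks all $k$-subsets of vertices, can be sketched as follows; its roughly $O(n^k k^2)$ running time is exactly the $n^{\Theta(k)}$ behaviour that, by the lower bound, cannot be improved to $n^{o(k)}$. The edge-set representation is an assumption made for this illustration.

```python
from itertools import combinations

def has_k_clique(n: int, edges: set, k: int) -> bool:
    """Trivial clique check: try every k-subset of vertices.

    edges is a set of frozenset({u, v}) pairs over vertices 0..n-1.
    Runs in roughly O(n^k * k^2) time, i.e. n^{Theta(k)} for fixed k.
    """
    for subset in combinations(range(n), k):
        if all(frozenset((u, v)) in edges for u, v in combinations(subset, 2)):
            return True
    return False

# 4-cycle with chord {0, 2}: contains the triangle {0, 1, 2} but no 4-clique
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]}
print(has_k_clique(4, edges, 3), has_k_clique(4, edges, 4))   # True False
```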
We then extend our techniques to derive computational lower bounds for PTAS and EPTAS algorithms of NP optimization problems. We prove that certain NP optimization problems with known PTAS algorithms have no PTAS algorithms of running time $O(f(1/\epsilon) \cdot n^{o(1/\epsilon)})$ for any function $f$. Therefore, for these NP optimization problems, although theoretically they can be approximated in polynomial time to an arbitrarily small error bound $\epsilon$, they have no practically effective approximation algorithms for small error bound $\epsilon$. To our knowledge, this is the first time such lower bound results have been derived for PTAS algorithms. This seems to open a new direction for the study of computational lower bounds on the approximability of NP optimization problems.
|
29 |
Income Risk and Aggregate Demand over the Business Cycle. Mericle, David. 23 July 2012
This dissertation consists of three essays on income risk and aggregate demand over the business cycle, each addressing an aspect of the Great Recession. The first chapter reframes the standard liquidity trap model to illustrate the costly feedback loop between idiosyncratic risk and aggregate demand. I first show that a liquidity trap can result from excess demand for precautionary savings in times of high uncertainty. Second, I show that the output and welfare costs of the ensuing recession depend crucially on how the drop in demand for output is translated into a reduction in demand for labor. Increased unemployment risk compounds the original rise in idiosyncratic productivity risk and reinforces precautionary motives, deepening the recession. Third, I show that increasing social insurance can raise output and welfare at the zero bound. I decompose these effects to distinguish the component unique to the liquidity trap environment and show that social insurance is most effective at the zero bound when it targets the type of idiosyncratic risk households face, which in turn depends on the labor market adjustment mechanism. The second paper offers a novel model of the connection between the consumer credit and home mortgage markets through an individual's credit history, and with it a new justification for the home mortgage interest deduction. In an economy with both housing assets and consumer credit, the mortgage interest deduction is modeled as a subsidy for the accumulation of collateralizable assets by households who have maintained good credit. As such, the subsidy loosens participation constraints and facilitates risk-sharing. Empirical evidence and a calibration exercise reveal that the subsidy has a sizable impact on the availability of credit. The third paper assesses the role of policy uncertainty in the Great Recession. The Great Recession features substantial geographic variation in employment losses, a fact that is often presented as a challenge to uncertainty-based models of the downturn. In this paper we show that there is a substantial correlation between the distribution of employment losses and increases in local measures of both economic and policy uncertainty. This relationship is robust across a wide range of measures. / Economics
|
30 |
A study of the impacts of quantitative easing on the macroeconomics variables. Valente, João Paulo. 19 June 2013
In this work, we propose a DSGE model that seeks to answer some questions about the Quantitative Easing (QE) policies recently implemented in response to the 2008 crisis. Our framework is a DSGE model with heterogeneous agents and preferred-habitat preferences in purchases of government bonds. It allows the study of the optimal purchase portfolio (in terms of the duration of the bonds) for central banks implementing the policy. Furthermore, the heterogeneous structure allows us to look at the income distribution effects caused by purchases of these securities. Our preliminary results show a distributive effect of QE. However, our expanded model exhibited some stability problems.
|