1 |
Three Essays on Unconventional Monetary Policy at the Zero Lower Bound. Zhang, Yang, 29 November 2013 (has links)
In the first chapter, “Impact of Quantitative Easing at the Zero Lower Bound” (with J. Dorich and R. Mendes), we introduce imperfect asset substitution and segmented asset markets, along the lines of Andres et al. (2004), in an otherwise standard small open-economy model with nominal rigidities. We estimate the model using Canadian data. We use the model to provide a quantitative assessment of the macroeconomic impact of quantitative easing (QE) when the policy rate is at its effective lower bound. In the second chapter, “Impact of Forward Guidance at the Zero Lower Bound”, I consider alternative monetary policy rules under commitment in a calibrated three-equation New Keynesian model and examine the extent to which forward guidance helps to mitigate the negative real impact of the zero lower bound. The simulation results suggest that the conditional statement policy prolongs the zero lower bound duration by an additional four quarters and reverses half of the decline in inflation associated with the lower bound. It even generates a period of overshooting in inflation three quarters after the initial negative demand shock. The effect of price-level targeting as a forward guidance policy at the zero lower bound is, by contrast, slightly different. In the third chapter, “Impact of Quantitative Easing on Household Deleveraging”, I extend the DSGE model of the first chapter with financial frictions to explore the effects of QE on asset prices and household balance sheets. The model gives rise to two effects of QE on aggregate output. First, QE leads to a decline in the term premium, which increases current consumption relative to future consumption. Second, it leads to a lower loan-to-collateral-value ratio and a decline in the external finance premium. These favorable financing conditions encourage further accumulation of household debt at cheaper rates, which in turn leads to an immediately higher household debt-to-income ratio. When the stimulus provided by QE is eventually withdrawn, this would pose greater challenges, as it implies a more intensive household deleveraging process. I provide some sensitivity analysis around key parameters of the model.
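For illustration of the chapter-two exercise, the following is a minimal perfect-foresight sketch of a three-equation New Keynesian model with a zero lower bound and a "hold the rate low for longer" commitment. It is not the thesis's estimated model: the parameter values, the shock path, and the backward-induction solution are illustrative assumptions.

```python
# Minimal perfect-foresight sketch of a three-equation New Keynesian model
# at the zero lower bound (ZLB). All parameters and the natural-rate path
# are illustrative assumptions, not the thesis's estimates.
import numpy as np

beta, sigma, kappa = 0.99, 1.0, 0.05   # discount factor, IS slope, PC slope
phi_pi, phi_x = 1.5, 0.125             # Taylor-rule coefficients
r_star = 0.01                          # steady-state real rate (quarterly)
T = 40

rn = np.full(T, r_star)                # natural-rate path ...
rn[:12] = -0.015                       # ... with a 12-quarter demand shock

def simulate(hold_until=0):
    """Solve backward from the terminal steady state (x_T = pi_T = 0);
    hold_until > 0 commits to keep the policy rate at the bound longer."""
    x, pi, i = np.zeros(T + 1), np.zeros(T + 1), np.zeros(T)
    for t in range(T - 1, -1, -1):
        x1, pi1 = x[t + 1], pi[t + 1]
        # Output gap if the Taylor rule were unconstrained this period.
        denom = 1.0 + (phi_x + phi_pi * kappa) / sigma
        x_u = (x1 + (pi1 + rn[t] - phi_pi * beta * pi1) / sigma) / denom
        i_u = phi_pi * (beta * pi1 + kappa * x_u) + phi_x * x_u
        if t < hold_until or i_u < -r_star:  # guidance holds or ZLB binds
            i[t] = -r_star                   # net nominal rate at zero
            x[t] = x1 - (i[t] - pi1 - rn[t]) / sigma   # IS curve
        else:
            i[t], x[t] = i_u, x_u
        pi[t] = beta * pi1 + kappa * x[t]    # Phillips curve
    return x[:T], pi[:T], i

x_b, pi_b, _ = simulate()               # Taylor rule with ZLB only
x_g, pi_g, _ = simulate(hold_until=16)  # hold the rate low for longer
print("trough inflation, baseline vs. guidance:", pi_b.min(), pi_g.min())
```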
|
2 |
Spillovers from US monetary policy: Evidence from a time-varying parameter global vector autoregressive model. Crespo Cuaresma, Jesus; Doppelhofer, Gernot; Feldkircher, Martin; Huber, Florian, 08 February 2019 (has links) (PDF)
The paper develops a global vector autoregressive model with time-varying parameters and stochastic volatility to analyse whether international spillovers of US monetary policy have changed over time. The proposed model enables us to assess whether coefficients evolve gradually over time or are better characterized by infrequent but large breaks. Our findings point towards pronounced changes in the international transmission of US monetary policy throughout the sample period, especially so for the reaction of international output, equity prices, and exchange rates against the US dollar. In general, the strength of spillovers has weakened in the aftermath of the global financial crisis. Using simple panel regressions, we link the variation in international responses to measures of trade and financial globalization. We find that a broad trade base and a high degree of financial integration with the world economy tend to cushion the risks stemming from a foreign shock such as a US tightening of monetary policy, whereas a reduction in trade barriers and/or a liberalization of the capital account increases these risks.
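As a toy illustration of the time-varying parameter idea (far simpler than the paper's multi-country TVP-GVAR with stochastic volatility), the sketch below filters a single random-walk regression coefficient with the standard Kalman recursions; all noise scales are assumed rather than estimated.

```python
# Toy TVP regression: y_t = x_t * b_t + e_t with b_t = b_{t-1} + u_t,
# filtered by the scalar Kalman recursions. Noise scales are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T = 200
sig_e, sig_u = 0.5, 0.05                 # measurement / state noise s.d.
x = rng.normal(size=T)
b_true = np.cumsum(rng.normal(scale=sig_u, size=T))  # drifting coefficient
y = x * b_true + rng.normal(scale=sig_e, size=T)

b, P = 0.0, 1.0                          # prior mean and variance for b_0
b_filt = np.empty(T)
for t in range(T):
    P += sig_u ** 2                      # predict: the coefficient drifts
    S = x[t] * P * x[t] + sig_e ** 2     # forecast-error variance
    K = P * x[t] / S                     # Kalman gain
    b += K * (y[t] - x[t] * b)           # update with the forecast error
    P *= 1 - K * x[t]
    b_filt[t] = b

print("final filtered vs. true coefficient:", b_filt[-1], b_true[-1])
```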
|
3 |
Data Structuring Problems in the Bit Probe Model. Rahman, Mohammad Ziaur, January 2007 (has links)
We study two data structuring problems under the bit probe model: the dynamic predecessor problem, and integer representation in a manner supporting basic updates in as few bit operations as possible. In the bit probe model, the complexity measure counts only the bitwise accesses to the data structure and ignores the cost of computation. As a result, the bit probe complexity of a data structuring problem can be considered a fundamental measure of the problem. Lower bounds derived in this model are valid as lower bounds for any realistic, sequential model of computation. Furthermore, some problems are particularly suitable for study in this model, as they can be solved using fewer than $w$ bit probes, where $w$ is the size of a computer word.
The predecessor problem is one of the fundamental problems in computer science, with numerous applications, and has been studied for several decades. We study the colored predecessor problem, a variation of the predecessor problem in which each element is associated with a symbol from a finite alphabet, called a color. The problem is to store a subset $S$ of size $n$ from a finite universe $U$ so as to support efficient insertion, deletion, and queries that determine the color of the largest value in $S$ not larger than $x$, for a given $x \in U$. We present a data structure for the problem that requires $O\left(k \sqrt[k]{\frac{\log U}{\log \log U}}\right)$ bit probes for queries and $O\left(k^2 \frac{\log U}{\log \log U}\right)$ bit probes for updates, where $U$ is the universe size and $k$ is a positive constant. We also show that the results on the colored predecessor problem can be used to solve related problems such as existential range queries, dynamic prefix sums, segment representatives, and connectivity problems.
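For intuition about the query interface only (not the bit-probe-efficient structure of the thesis), a naive colored-predecessor dictionary can be sketched as follows; its costs are counted in word operations rather than bit probes.

```python
# Naive colored predecessor: sorted key list plus a color map. Queries
# answer "color of the largest stored value <= x". Illustrative only.
from bisect import bisect_right, insort

class ColoredPredecessor:
    def __init__(self):
        self.keys, self.colors = [], {}

    def insert(self, x, color):
        if x not in self.colors:
            insort(self.keys, x)         # keep keys sorted
        self.colors[x] = color

    def delete(self, x):
        if x in self.colors:
            self.keys.remove(x)
            del self.colors[x]

    def query(self, x):
        """Color of the largest stored value <= x, or None."""
        i = bisect_right(self.keys, x)
        return self.colors[self.keys[i - 1]] if i else None

s = ColoredPredecessor()
s.insert(3, "red"); s.insert(7, "blue")
print(s.query(5))   # "red": the largest element <= 5 is 3
```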
The second structure considered is for integer representation. We examine the problem of representing integers in a nearly minimal number of bits so that increment and decrement (and indeed addition and subtraction) can be performed using few bit inspections and fewer bit changes. In particular, we prove a new lower bound of $\Omega(\sqrt{n})$ for the increment and decrement operations, where $n$ is the minimum number of bits required to represent the number. We present several efficient data structures for representing integers that use a logarithmic number of bit inspections and a constant number of bit changes per operation.
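A classic data point for this inspections-versus-changes trade-off, sketched here for illustration, is the binary reflected Gray code: incrementing a Gray-coded counter flips exactly one bit per operation, although deciding which bit to flip may require inspecting many bits.

```python
# Binary reflected Gray code: increment changes exactly one bit.
def gray(n: int) -> int:
    """Gray code of n."""
    return n ^ (n >> 1)

def gray_increment(g: int, width: int) -> int:
    """Increment a width-bit Gray code by flipping exactly one bit:
    flip bit 0 on even parity, else the bit left of the lowest set bit."""
    if bin(g).count("1") % 2 == 0:
        return g ^ 1
    lowest = g & -g
    nxt = lowest << 1
    return g ^ nxt if nxt < (1 << width) else g ^ lowest  # wrap at the top

g = gray(5)
assert gray_increment(g, 4) == gray(6)   # exactly one bit differs
print(bin(gray(5)), "->", bin(gray(6)))
```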
|
4 |
Minimum Diameter Double-Loop Networks. Gao, Ying-Yao, 21 July 2002
Abstract
Double-loop networks have become one of the most popular architectures in the design of local area networks and distributed-memory multiprocessor systems. This is due to their characteristics of minimal diameter, easy routing, expandability, and regularity. The switching mechanism at each node can easily be implemented using building blocks of the same specification; therefore, double-loop networks have a high degree of reliability and hence very low vulnerability. Let N denote the number of nodes in a double-loop network and d(N) be the best possible diameter with N vertices. Given an N, Bermond et al. [5], Boesch and Wang [7], and Yebra et al. [23] have shown that $d(N) \geq \lceil \sqrt{3N} \rceil - 2$. This is a well-known lower bound for d(N) and is usually denoted lb(N). In this paper, we discuss how to find an optimal topology such that d(N) = lb(N) for any given value of N. We provide a simple formula to find optimal topologies of double-loop networks with N nodes.
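The optimality check can be made concrete with a small brute-force sketch (illustrative only; the thesis provides a formula): compute the diameter of a directed double-loop network G(N; s1, s2) by breadth-first search on Z_N and compare it with lb(N).

```python
# Diameter of a directed double-loop network G(N; s1, s2) via BFS,
# compared against the classical lower bound lb(N) = ceil(sqrt(3N)) - 2.
from collections import deque
from math import isqrt

def diameter(N: int, s1: int, s2: int) -> int:
    dist = [-1] * N
    dist[0] = 0
    q = deque([0])
    while q:
        v = q.popleft()
        for s in (s1, s2):        # directed links i -> i+s1, i -> i+s2
            w = (v + s) % N
            if dist[w] < 0:
                dist[w] = dist[v] + 1
                q.append(w)
    return max(dist)

def lb(N: int) -> int:
    r = isqrt(3 * N)
    return (r if r * r == 3 * N else r + 1) - 2   # ceil(sqrt(3N)) - 2

N = 21
best_d, best_s = min((diameter(N, 1, s), s) for s in range(2, N))
print(f"N={N}: lb={lb(N)}, best G(N;1,s) diameter={best_d} at s={best_s}")
```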
|
5 |
Likelihood-Based Modulation Classification for Multiple-Antenna Receivers. Ramezani-Kebrya, Ali, 21 September 2012 (has links)
Prior to signal demodulation, blind recognition of the modulation scheme of the received signal is an important task for intelligent radios in various commercial and military applications, such as spectrum management, surveillance of broadcasting activities, and adaptive transmission. Antenna arrays provide spatial diversity and increase channel capacity. This thesis focuses on algorithms and performance analysis for blind modulation classification (MC) with a multiple-antenna receiver configuration. For a single-input-multiple-output (SIMO) configuration with unknown channel amplitude, phase, and noise variance, we investigate likelihood-based algorithms for linear digital MC. The existing algorithms are presented and extended to SIMO. Using recently proposed blind estimates of the unknown parameters, a new algorithm is developed. In addition, two upper bounds on the classification performance of MC algorithms are provided. We derive the exact Cramér-Rao lower bounds (CRLBs) of joint estimates of the unknown parameters for one- and two-dimensional amplitude modulations. The asymptotic behaviors of the CRLBs are obtained for the high signal-to-noise ratio (SNR) region. Numerical results demonstrate the accuracy of the CRLB expressions and confirm that the expressions in the literature are special cases of our results. The classification performance of the proposed algorithm is compared with the existing algorithm and the upper bounds; the proposed algorithm is shown to significantly outperform the existing one with reasonable computational complexity. The proposed algorithm can be used in modern intelligent radios equipped with multiple-antenna receivers, and the provided performance analysis, i.e., the CRLB expressions, can be employed in the design of practical systems involving estimation of the unknown parameters, and is not limited to MC. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2012-09-21
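A minimal sketch of the likelihood-based approach follows, with the channel gain and noise variance assumed known and a single antenna; the blind SIMO setting of the thesis replaces these with estimates and averages the likelihood across antennas.

```python
# Average-likelihood modulation classification between BPSK and QPSK
# under AWGN with known noise variance. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
constellations = {
    "BPSK": np.array([1, -1], dtype=complex),
    "QPSK": np.array([1, 1j, -1, -1j]),
}

def log_likelihood(r, symbols, sigma2):
    # Average the Gaussian likelihood over equiprobable symbols, per sample.
    d2 = np.abs(r[:, None] - symbols[None, :]) ** 2
    return np.log(np.exp(-d2 / sigma2).mean(axis=1)).sum()

sigma2 = 10 ** (-5 / 10)                 # 5 dB SNR with unit symbol energy
s = rng.choice(constellations["QPSK"], size=500)
noise = rng.normal(size=500) + 1j * rng.normal(size=500)
r = s + np.sqrt(sigma2 / 2) * noise
scores = {m: log_likelihood(r, c, sigma2) for m, c in constellations.items()}
print(max(scores, key=scores.get))       # expected: QPSK
```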
|
6 |
Efficient Evaluation of Set Expressions. Mirzazadeh, Mehdi, January 2014 (has links)
In this thesis, we study the problem of evaluating set expressions over sorted sets in the comparison model. The problem arises in the context of evaluating search queries in text database systems; most text search engines maintain an inverted index, which stores, for each possible word, the set of documents that contain it. Thus, answering a query reduces to computing the union, the intersection, or a more complex set expression over the sets of documents containing the words in the query.
As a first step, for a given expression over a number of sets with known sizes, we investigate the worst-case complexity of evaluating the expression in terms of the sizes of the sets. We prove lower bounds and provide algorithms whose running time matches up to a constant factor. We then refine the problem further and design an algorithm that computes such expressions according to the degree to which the input sets are interleaved, rather than considering only set sizes. We prove the optimality of our algorithm by presenting a matching lower bound sensitive to the interleaving measure.
The algorithms we present differ in the set operators they allow in input expressions. We provide algorithms that are worst-case optimal for inputs with union, intersection, and symmetric difference operators. One of the algorithms also supports minus and complement operators, and is conjectured to be optimal when an input is allowed to contain these operators as well. We also provide a worst-case optimal algorithm for the form of the problem where the input may contain "threshold" operators, which generalize union and intersection: for a number t, a t-threshold operator selects elements that appear in at least t of the operand sets. Finally, the adaptive algorithm we provide supports union and intersection operators; a sketch of the adaptive idea appears below.
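In the spirit of the adaptive algorithm, the following sketch intersects two sorted lists with galloping (doubling) search, so its cost depends on how the inputs interleave rather than only on their sizes; it illustrates the idea and is not the thesis's multi-operator algorithm.

```python
# Adaptive intersection of two sorted lists via galloping search.
from bisect import bisect_left

def gallop(a, x, lo):
    """Smallest index >= lo with a[index] >= x: double, then binary search."""
    hi, step = lo, 1
    while hi < len(a) and a[hi] < x:
        lo, hi, step = hi + 1, hi + step, step * 2
    return bisect_left(a, x, lo, min(hi, len(a)))

def intersect(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i = gallop(a, b[j], i)    # skip a run of a cheaply
        else:
            j = gallop(b, a[i], j)    # skip a run of b cheaply
    return out

print(intersect([1, 2, 3, 50, 90], list(range(40, 100))))   # [50, 90]
```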
|
7 |
Zero Lower Bound and Uncovered Interest Parity – A Forecasting Perspective. Zhang, Yifei, 30 July 2018 (has links)
No description available.
|
8 |
Abenomics: Towards Brighter Future or More of the Same? Pinta, Ondřej, January 2014
This thesis investigates the impact on the economy of the Abenomics policies, named after Japanese Prime Minister Shinzo Abe. His so-called "three arrows" agenda includes fiscal expansion, quantitative and qualitative monetary easing, and regulatory reforms. This work assesses the fulfillment of the stated goals and compares Abenomics to previous policies. Abe's cabinet succeeded in raising inflation and depreciating the yen; debt growth has almost halted and GDP has mildly recovered. However, the economy is still far from stable. The thesis also explores further issues faced by the Japanese economy, such as the shutdown of nuclear power plants and the effects of the zero lower bound constraint. To assess the real results of Abenomics, this work introduces a synthetic counterfactual: a model of an alternate Japan in which Abe had not assumed office. The results suggest that the impact of Abenomics on GDP per capita is slightly positive or negligible.
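A toy sketch of the synthetic counterfactual idea follows, with random placeholder data rather than the thesis's country panel: choose non-negative donor weights summing to one that best reproduce the treated unit's pre-treatment path, then carry the weighted combination forward as the counterfactual.

```python
# Synthetic counterfactual: fit simplex-constrained weights on the
# pre-treatment window, then compare post-treatment outcomes. The data
# below are random placeholders; the projection step is approximate.
import numpy as np

rng = np.random.default_rng(2)
T0, T1, J = 30, 10, 5                      # pre/post periods, donor units
donors = rng.normal(size=(T0 + T1, J)).cumsum(axis=0)
treated = donors[:, :2].mean(axis=1) + rng.normal(scale=0.1, size=T0 + T1)

w = np.full(J, 1 / J)                      # start from equal weights
X, y = donors[:T0], treated[:T0]
for _ in range(5000):
    w -= 0.001 * (X.T @ (X @ w - y)) / T0  # gradient step on squared error
    w = np.clip(w, 0, None)                # approximate simplex projection:
    w /= max(w.sum(), 1e-12)               # clip to >= 0, renormalize to 1

synthetic = donors @ w
gap = treated[T0:] - synthetic[T0:]        # estimated post-treatment effect
print("weights:", w.round(2), "mean post-treatment gap:", round(gap.mean(), 3))
```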
|