451

Inequality under frictional labour markets

Kostadinov, Emil January 2017 (has links)
This thesis studies the emergence of, and the interaction between, inequality in earnings and inequality in wealth when labour markets are frictional. Chapter 1 investigates the implications of search frictions and human-capital accumulation for the equilibrium distribution of wages when firms invest optimally in match-specific productivity. Optimal investment choice is incorporated in a framework along the lines of Burdett et al. (2011) and equilibrium is characterised. The effect of the rate of human-capital accumulation on the equilibrium dispersion of firm productivities and wages is analysed in a numerically solved version of the model. Chapter 2 studies the empirical relationship between wealth and two labour market outcomes - re-employment wages and unemployment durations. The analysis complements a closely related literature by exploiting new data from the Survey of Income and Program Participation. As in prior studies, a negative relationship between net worth and hazard rates to employment is documented. In disagreement with prior studies, the relationship between re-employment wages and net worth is found to be non-monotonic, and it is argued that prior findings likely result from misspecification. The implications of this relationship serve as a motivation for the third chapter. Chapter 3 (joint with Melvyn Coles) presents a model of the consumption-leisure tradeoff for risk-averse workers when labour markets are frictional. Optimal behaviour is that of a life-cycle consumer - work when young, save for retirement (non-participation) later - planning retirement efficiently. The analysis has highly tractable implications for wealth dynamics which emphasise life-cycle motives, labour-market decisions, persistent differentials in ability and heterogeneity in initial wealth. The model's empirical relevance is assessed; it is demonstrated that it provides an empirically convincing explanation for much of the between-household inequality in wealth.
452

Essays on dynamic macroeconomics

Boostani, Reza January 2011 (has links)
This thesis uses the techniques of macroeconomic theory to answer three questions. It is divided into three chapters, each focusing on one of these questions. The first chapter investigates the appropriate labor market policy response to two fundamental changes in the economy. I introduce unemployment benefits financed by a proportional payroll tax within a model of directed search on the job. I show that there exists a unique positive level of unemployment benefit which maximizes the welfare of individuals. The optimal unemployment benefit level is hump-shaped as a function of the level of idiosyncratic risk. At empirically relevant levels of idiosyncratic risk, a much less generous system than in the economy without uncertainty emerges. Furthermore, the welfare costs of deviating from the optimal level are substantial, and accompanied by high unemployment rates. I also find that while the optimal generosity of the unemployment insurance system declines monotonically with the amount of aggregate risk in the economy, the welfare costs of deviating from the optimal system are rather small. Chapter two develops a small open economy model with both staggered nominal prices and wages. The performance of several alternative simple policy rules is then compared using a welfare-loss criterion. It is shown that, first, domestic inflation targeting and wage inflation targeting both perform better than CPI inflation targeting and a pegged exchange rate. Second, although the performance of simple rules depends on the degree of stickiness in prices and wages, wage inflation targeting performs better than domestic inflation targeting for a wide range of combinations of wage and price stickiness. In chapter three, I develop a model with uninsurable capital-income risk and incomplete markets, and investigate the cyclical properties of the equity premium. Although the model abstracts from some common features of the business cycle model, it can generate a sizable and countercyclical equity premium. Moreover, the model generates relatively more volatile consumption, investment, and equity premium than under complete markets.
453

Fluctuations in the supply of credit and its effects on the capital structure of Japanese firms

Voutsinas, Konstantinos January 2010 (has links)
This study examines how fluctuations in the supply of credit and financial constraints affect capital structure. It is one of the first studies to do so, and its methodology is inspired by the recent studies of Faulkender & Petersen (2006) and Bougheas et al. (2006). It examines the economy of Japan, a natural testing ground for this theory due to the extreme credit supply fluctuations that have occurred over the past decades. Furthermore, under this new perspective on capital structure theory, two more hypotheses are tested. A “horse race” test between the two predominant theories of capital structure, the trade-off and the pecking order hypotheses, is run. The methodology used to perform this test is similar to that derived by Shyam-Sunder & Myers (1999). Finally, the role of trade credit, a factor overlooked by the majority of previous capital structure studies, is investigated using a methodology similar to that of Mateut et al. (2006). The results of this panel data study, applied to a large sample of public and private firms, clearly indicate that fluctuations in the supply of credit affect capital structure and that Japanese firms face financial constraints. The pecking order hypothesis wins the “horse race” test, and trade credit is found to be a significant determinant of capital structure and, more specifically, a substitute for bank credit. These findings should be taken into consideration by future research and may even lead to the creation of a new theory of capital structure.
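As an illustration of the kind of pecking-order test attributed above to Shyam-Sunder & Myers (1999) - a regression of net debt issuance on the financing deficit - here is a minimal Python sketch. It is not the thesis's actual specification; the column names and the pooled OLS setup are assumptions for exposition.

```python
# A minimal sketch of a Shyam-Sunder & Myers (1999) style pecking-order test:
# regress net debt issuance on the financing deficit (both scaled by assets).
# Under a strict pecking order the slope should be close to one.
# Column names ("financing_deficit", "net_debt_issued") are hypothetical.
import pandas as pd
import statsmodels.api as sm

def pecking_order_test(df: pd.DataFrame):
    X = sm.add_constant(df["financing_deficit"])   # deficit = dividends + capex + dWC - cash flow
    y = df["net_debt_issued"]
    return sm.OLS(y, X, missing="drop").fit(cov_type="HC1")

# Example usage with a hypothetical firm-year panel:
# results = pecking_order_test(panel)
# print(results.summary())
```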
454

Innovation & competition in a memory process

Correa, Juan A. January 2011 (has links)
Does innovation increase or decrease with more competition when innovation follows a memory process? This thesis provides a theoretical model which analyzes the relationship between innovation and competition assuming that innovation follows a memory process, i.e. the current probability of innovation success depends on previous periods’ innovation successes. I find that innovation increases with more product market competition, even under the Schumpeterian context where inventions are not completely appropriable. Assuming the probability of innovating increases with past innovations, a follower firm has strong incentives to innovate, even in a highly competitive environment, since the memory obtained after innovating increases its probability of innovating again and becoming a leader. Therefore, industries will most of the time be neck-and-neck, with firms innovating to escape competition. I test this theoretical finding using the same dataset as Aghion et al. (2005). I find ambiguous results for the innovation-competition relationship. I show that the instrumental variables used by Aghion et al. (2005) are not exogenous and that the empirical model is not stable over time. I therefore build a database of 220 U.S. industries to analyze the innovation-competition relationship. As in my theoretical model, I find that innovation increases with more product market competition when innovation follows a memory process. However, when the innovation process is memoryless, I find that more competition decreases the level of innovation when industries already have a high level of competition.
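To make the memory-process idea concrete, here is a toy simulation in which the probability of a successful innovation rises with a discounted stock of past successes. The logistic parameterisation and all parameter values are illustrative assumptions, not the thesis's model.

```python
# Toy simulation of innovation as a memory process: each period's success
# probability depends on a discounted stock of past successes (the "memory").
# The logistic form and parameters are assumptions made for illustration only.
import numpy as np

def simulate_innovation(periods: int = 100, base: float = -1.0,
                        memory_weight: float = 0.3, decay: float = 0.9,
                        seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    memory = 0.0                              # discounted stock of past successes
    outcomes = np.zeros(periods, dtype=int)
    for t in range(periods):
        p = 1.0 / (1.0 + np.exp(-(base + memory_weight * memory)))
        outcomes[t] = rng.random() < p        # 1 if the firm innovates this period
        memory = decay * memory + outcomes[t] # success raises future success odds
    return outcomes

print(simulate_innovation().mean())           # share of periods with an innovation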
455

Applied game theory and optimal mechanism design

Zhang, Qi January 2014 (has links)
This thesis applies game theory to study optimal toehold bidding strategies during takeover competition, the problem of optimal design of voting rules, and the design of a package bidding mechanism that implements core allocations. It documents three different research questions that are all related to auction theory. Chapter 2 develops a two-stage takeover game to explain the toehold puzzle in the context of takeovers. Potential bidders are allowed to acquire target shares in the open market, subject to some limitations; this pre-bid ownership is known as a toehold. Purchasing a toehold prior to making any takeover offer looks like a profitable strategy given substantial takeover premiums. However, actual toehold bidding has decreased since the 1980s and is now uncommon, and its time-series pattern is centred on either zero or a large value. In the first stage of the two-stage game, each bidder simultaneously acquires a toehold. In the second stage, bidders observe the acquired toehold sizes and process this information to update their beliefs about their rival's private valuation. Each bidder then competes to win the target in a sealed-bid second-price auction. Unlike the previous toehold-puzzle literature, which focuses on toehold bidding costs in the form of target managerial entrenchment, this chapter points to another possible toehold bidding cost - the opportunity loss of a profitable resale. Chapter 2 finds that, under some conditions, there exists a partial pooling Bayesian equilibrium in which low-value bidders optimally avoid any toehold, while high-value bidders pool their decisions at one size. The equilibrium toehold acquisition strategies coincide with the bimodal distribution of actual toehold purchasing behaviour. Chapter 3 studies the problem of optimal design of voting rules when each agent faces a binary choice. The designer is allowed to use any type of non-transferable penalty on individuals in order to elicit agents' private valuations, and each agent's private valuation is assumed to be independently distributed. Early work showed that the simple majority rule has good normative properties in the situation of binary choice; however, those results rely on the assumption that agents' preferences have equal intensities. Chapter 3 shows that, under reasonable assumptions, simple majority is the best voting mechanism in terms of utilitarian efficiency, even if voters' preferences are comparable and may have varying intensities. At equilibrium, the mechanism optimally assigns zero penalty to every voter. In other words, the designer does not extract private information from any agent in the society, because the expected penalty cost of eliciting private information to select the better alternative is too high. Chapter 4 presents a package bidding mechanism whose subgame perfect equilibrium outcomes coincide with the core of an underlying strictly convex transferable utility game. It adopts the concept of the core as a competitive standard, which enables the mechanism to avoid the well-known weaknesses of the VCG mechanism. In this mechanism, only core allocations generate subgame perfect equilibrium payoffs, because non-core allocations provide arbitrage opportunities for some players. Under the strict convexity assumption, implementation of the core is achieved in expectation.
456

A new approach to calculate and forecast dynamic conditional correlation : the use of a multivariate heteroskedastic mixture model

Lu, Cheng January 2010 (has links)
Much research in finance has been directed towards forecasting the time-varying volatility of individual macroeconomic variables such as stock indices, exchange rates and interest rates. However, comparatively little is devoted to modelling time-varying correlation. In this research, we extend the current literature on correlation modelling by reviewing existing time-series tools, performing empirical analysis and developing two new conditional heteroscedastic models based on mixture techniques. Specifically, Engle’s standard DCC is augmented with an asymmetric factor and then modified so that disturbances (conditional returns) can be modelled using a multivariate Gaussian mixture distribution and a multivariate t mixture distribution. A key motivation for proposing mixture models is to account for the bi-modality observed in the unconditional distribution of realized correlation. Moreover, the ultimate purpose of incorporating this assumption into a multivariate GARCH is to account for a variety of stylized features frequently present in financial returns, such as volatility clustering, correlation clustering, the leverage effect, fat tails, skewness and leptokurtosis. Since this assumption greatly enhances the models' flexibility, after a thorough comparison we find significant evidence that our models outperform alternative models from a range of perspectives. In addition, in this research we also study a new type of correlation model that uses the multivariate skew-t distribution as the basis for quantifying the density values of conditional returns. Note that the ADCC skew-t and AGDCC skew-t models analyzed in this research are both new to the financial literature.
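For reference, the baseline that the abstract says is being augmented - Engle's standard DCC(1,1) correlation recursion - can be sketched in a few lines of numpy. The parameter values and inputs below are placeholders, not estimates, and the sketch omits the asymmetric and mixture extensions developed in the thesis.

```python
# Minimal sketch of the standard DCC(1,1) recursion of Engle (2002), applied to
# already-standardised residuals z_t:
#   Q_t = (1 - a - b) * Q_bar + a * z_{t-1} z_{t-1}' + b * Q_{t-1},
# with R_t obtained by rescaling Q_t to a correlation matrix.
# Parameters a, b and the input series are placeholders for illustration.
import numpy as np

def dcc_correlations(z: np.ndarray, a: float = 0.05, b: float = 0.93) -> np.ndarray:
    """z: (T, N) standardised residuals; returns (T, N, N) conditional correlations."""
    T, N = z.shape
    Q_bar = np.cov(z, rowvar=False)          # unconditional correlation target
    Q = Q_bar.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)            # rescale Q_t into a correlation matrix
        Q = (1 - a - b) * Q_bar + a * np.outer(z[t], z[t]) + b * Q
    return R

# R = dcc_correlations(standardised_returns)  # hypothetical (T, N) input
```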
457

Essays on monetary policy : macro and firm-level evidence from Malaysia, a small open economy

Abdul Karim, Zulkefly January 2011 (has links)
This dissertation comprises three empirical essays evaluating the effectiveness of monetary policy in a small open economy (Malaysia), using both macro- and micro-level studies. The motivations for these three studies revolve around the role of monetary policy in transmitting to economic activity at the macroeconomic level, and at the microeconomic level through firm-level equity returns and firm-level investment spending. The first essay, in Chapter 2, examines the implementation of monetary policy in a small open economy at the macroeconomic level using an open-economy structural VAR (SVAR). Monetary policy variables (the interest rate and money supply) are measured through a non-recursive identification scheme, which allows the monetary authority to set the interest rate and money supply after observing the current values of foreign variables, domestic output and inflation. Specifically, this chapter tests the effect of foreign shocks upon domestic macroeconomic fluctuations and monetary policy, and examines the effectiveness of domestic monetary policy as a stabilization policy. The results show the important role of foreign shocks in influencing Malaysian monetary policy and macroeconomic variables. There is a real effect of monetary policy: a positive shock to the money supply increases domestic output. In contrast, a positive interest rate shock has a negative effect on domestic output growth and inflation. The effects of money supply and interest rate shocks on the exchange rate and stock prices are also consistent with standard economic theory. In addition, domestic monetary policy is able to mitigate the negative effect of external shocks upon the domestic economy. The second essay (Chapter 3) investigates the effects of domestic monetary policy shocks upon Malaysian firm-level equity returns in a dynamic panel data framework. A domestic monetary policy shock is generated via a recursive SVAR identification scheme, which allows the monetary authority to set the overnight interbank rate after observing the current values of the world oil price, foreign income, foreign monetary policy, domestic output and inflation. An augmented Fama and French (1992, 1996) multifactor model is used to estimate the determinants of firm-level stock returns. The results reveal that firm stock returns respond negatively to monetary policy shocks. Moreover, the effect of domestic monetary policy shocks on stock returns is significant for small firms’ equity, whereas the equity of large firms is not significantly affected. Domestic monetary policy also has differential effects according to the sub-sector of the economy in which a firm operates. The equity returns of financially constrained firms are also significantly more affected by domestic monetary policy than the returns of less constrained firms. The third essay, in Chapter 4, examines the effects of monetary policy on firms’ balance sheets, with a particular focus on the effects upon firms’ fixed-investment spending. The focal point concerns the two main channels of the monetary policy transmission mechanism, namely the interest rate and broad credit channels, in affecting firms’ investment spending. Specifically, the interest rate channel is measured through the firm’s user cost of capital, whereas the broad credit channel is identified through the firm’s liquidity (the cash flow to capital stock ratio).
By estimating the firms’ investment model in a dynamic neoclassical framework using an autoregressive distributed lag (ARDL) specification, the empirical results support the relevance of both the interest rate and broad credit channels in transmitting monetary policy to firm-level investment spending. The results also reveal that the effects of the monetary policy channels on firms’ investment are heterogeneous: small firms that face financial constraints respond more to monetary tightening than large (less constrained) firms. The effect of monetary policy is also heterogeneous across sub-sectors of the economy, as some sectors (for example, consumer products, industrial products and services) are significantly affected by monetary policy, whereas others (for example, property) are not affected.
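As an illustration of the recursive (Cholesky-ordered) VAR identification described in the second essay - where the policy rate responds contemporaneously to foreign variables, domestic output and inflation - here is a stylised Python sketch. The variable names, lag choice and data are placeholders, not the thesis's dataset or exact specification.

```python
# Stylised sketch of a recursively identified VAR: the policy rate is ordered
# last, so it responds contemporaneously to the variables ordered before it.
# Orthogonalised IRFs below use a Cholesky factorisation in that column order.
# Column names, lag length and data are hypothetical placeholders.
import pandas as pd
from statsmodels.tsa.api import VAR

def recursive_var_irfs(data: pd.DataFrame, horizons: int = 24):
    ordering = ["oil_price", "foreign_output", "foreign_rate",
                "output", "inflation", "policy_rate"]
    results = VAR(data[ordering]).fit(maxlags=8, ic="aic")
    return results.irf(horizons)

# irfs = recursive_var_irfs(macro_data)          # hypothetical monthly/quarterly data
# irfs.plot(impulse="policy_rate", orth=True)    # responses to a policy-rate shock
```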
458

Aggregate and disaggregated fluctuations

Mennuni, Alessandro January 2011 (has links)
In the usual version of the neoclassical growth model used to identify neutral shocks (N-shocks) and investment shocks (I-shocks), a linear transformation frontier between consumption and investment goods is assumed. This paper extends the original framework by allowing for curvature in the transformation frontier, and studies how this affects the relative price of investment goods and hence the identification of investment shocks. A concave frontier allows a substantial improvement in the prediction of the saving rate. Furthermore, a concave frontier induces short-run aggregate effects of relative demand shifts, thereby fostering the propagation of the shocks under consideration, which overall account for 86% of aggregate fluctuations. When I identify shocks with curvature, the N-shock appears to be stationary while the I-shock has a unit root. This leads the N-shock to play a major role: 91% of the fluctuations explained are due to the N-shock.
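One illustrative way to introduce curvature into the consumption-investment transformation frontier is a CES-type aggregator; the functional form and curvature parameter below are assumptions for exposition, not the specification used in the thesis.

```latex
% Illustrative CES transformation frontier between consumption C_t and
% investment X_t (an assumed functional form, not the thesis's own):
\[
  \left[\omega\, C_t^{1+\eta} + (1-\omega)\, X_t^{1+\eta}\right]^{\frac{1}{1+\eta}}
  = A_t F(K_t, L_t), \qquad \eta \ge 0 .
\]
% With eta = 0 the frontier is linear (the usual case); eta > 0 makes it
% concave, so the relative price of investment,
% p_{X,t} = ((1-\omega)/\omega)\,(X_t/C_t)^{\eta}, varies with the C/X mix.
```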
459

Reconsidering the role of nominal monetary policy variables : evidence from four major economies

Zhu, Min January 2011 (has links)
This thesis aims to contribute to monetary policy studies by conducting fundamental research and gathering empirical evidence on the effectiveness of monetary policy in the U.S., U.K., Germany and Japan. The financial crisis of 2007/08 highlighted the weakness of using nominal interest rates as the main monetary policy instrument. Before the crisis, the new monetary consensus (Bernanke, 1992, 1995, 1996; Woodford, 2003) that interest rates could be the effective intermediate instrument to influence the economy was widely accepted by central banks; it had developed since the failure of monetarism in the late 1970s. Some central banks (the BOE, ECB and RBNZ) have adopted explicit inflation targeting, which they pursue through the short-term interest rate. However, as the short-term interest rate has approached zero in a number of countries, it has become apparent that new monetary policy instruments are needed. Quantitative variables (including measures of money and credit) have since attracted renewed attention, especially after a policy of “quantitative easing” (QE) was adopted by the Bank of England in 2009. The main goal of this research is to provide empirical evidence on the interaction between financial variables (interest rates, money and credit) and economic variables (nominal GDP). Three main questions are addressed: (1) Are financial variables (interest rates, money and credit) appropriate for targeting nominal GDP? (2) Do quantitative variables (money and credit) have superior predictive abilities to the price variable (the interest rate) in predicting nominal GDP? (3) Do credit variables perform better than money aggregates in explaining nominal GDP? The empirical analysis employs simple regressions, Granger causality tests, the general-to-specific modelling methodology and VAR models. The empirical results suggest that quantitative variables (money aggregates and GDP-circulation credit) have more predictive power for nominal GDP than price variables (interest rates). Meanwhile, GDP-circulation credit displays more accurate properties than money aggregates as a variable for targeting nominal GDP. Overall, the outcomes not only enrich the literature on monetary policy in the U.S., U.K., Germany and Japan but also provide further empirical support for a modified ‘credit view’ of the transmission of monetary policy. They also have implications for the design of a successful monetary policy implementation regime.
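To illustrate one of the tools listed above, here is a minimal Granger causality test of whether a money or credit aggregate helps predict nominal GDP growth. The column names, lag length and data are placeholders, not the thesis's own choices.

```python
# Minimal sketch of a Granger causality test: does a quantitative variable
# (e.g. broad money growth) help predict nominal GDP growth?
# grangercausalitytests expects a two-column array whose FIRST column is the
# variable being predicted. Column names and lag length are hypothetical.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def test_predictive_content(df: pd.DataFrame,
                            predictor: str = "broad_money_growth",
                            target: str = "nominal_gdp_growth",
                            maxlag: int = 4):
    """Null hypothesis: `predictor` does not Granger-cause `target`."""
    data = df[[target, predictor]].dropna()
    return grangercausalitytests(data, maxlag=maxlag)

# results = test_predictive_content(quarterly_data)   # hypothetical dataset
```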
460

Essays on innovation economics

Pineda Bermudez, Carlos January 2015 (has links)
No description available.
