161. An application of the contingent valuation method to an excludable public good: the case of Northampton's parks. Coskeran, Thomas. January 1998.
This thesis discusses an application of the Contingent Valuation Method (CVM), a technique involving the use of questionnaire surveys, to the valuation of parks in the town of Northampton, England. Urban parks are an example of a class of good, excludable public goods, to which the CVM has not been extensively applied. The application therefore breaks new ground in applying the technique to this particular case and in being the first use of the CVM for this type of good in the United Kingdom. The thesis begins with a review of the nature of the CVM. A justification for using the method in the case of parks is then provided. Once the theoretical difficulties surrounding the application have been examined, an account follows of a pilot contingent valuation survey. The results of this survey and the analysis conducted on it are reported. Results from a main survey, which followed the pilot, are then discussed and analysed using both tobit and logit analysis. The implications of these studies are summarized in the concluding chapter. The principal policy conclusions to follow from the work are that: the Council could consider increasing expenditure on parks to reflect fully the preferences of the town's population; any increased spending on parks could come, at least in part, from a reduction in spending on highways in the town; parks could be used as part of a redistributive social policy; and those on higher incomes could be expected to make greater contributions to the maintenance of parks. A rationale for these and other policy recommendations is given in the final chapter, as are suggestions for further research into the application of the CVM to excludable public goods.
162. Experiments on confidence calibration and decision making. Murad, Zahra. January 2015.
This thesis reports on three experiments studying subjects' confidence about performance on a task and how it relates to decision-making under uncertainty. Chapter 1 introduces the thesis, providing an overview of the common themes and methods underlying this research. Chapter 2 reports the first experiment, investigating the relationship between risk attitudes and confidence judgements. We measure confidence in two different ways: with an incentivised elicitation tool and with unincentivised self-reports. Using our incentivised tool we find that, in the absence of controls for risk attitudes, subjects tend to be underconfident about their own performance. When we filter out the effects of risk attitudes we find that underconfidence is reduced, but not eliminated. We also identify an interesting link between self-reported confidence and risk attitudes, in that experimental subjects with less concave utility functions and more elevated probability weighting functions tend to report higher confidence levels. Chapter 3 reports the second experiment, investigating the role of information in experimental market entry games. We look at whether individual over-entry to simple and under-entry to difficult markets disappears when subjects make entry decisions in groups or are given statistical information about the performance of previous subjects. We find that individuals and groups are both susceptible to the same types of bias in entry, and both fail to learn from repetition and feedback. We find that individuals learn to de-bias their entry decisions in the second half of the experiment when given explicit information about the performance of others. Chapter 4 reports an experiment investigating the 'snowballing of confidence' in hierarchical tournaments. We analyse how high and low scorers in one stage of a tournament change their confidence levels in the next stage, when they are re-grouped with other high or low scorers.
We find that all subjects start the tournament assigning an equal chance to being high or low scorers in their groups. As they proceed through the stages, low scorers become more underconfident whereas high scorers become more overconfident about their relative performances. We also identify an interesting difference in the perceptions of the task between high and low scorers that is linked to self-serving causal attribution biases previously found in the psychology literature. Chapter 5 summarizes the findings of this dissertation and concludes.
163. Weak factor model in large dimension. Phan, Quang. January 2015.
This thesis presents some extensions to the current literature on high-dimensional static factor models. When the cross-section dimension (represented by N henceforth) is very large, the standard assumption is that each common factor has a number of non-zero loadings that grows linearly with N. On the other hand, the idiosyncratic error for each component can be correlated with only a finite number of other components in the cross-section. These two assumptions are crucial in standard high-dimensional factor analysis, as they allow us to obtain consistent estimators for the factors, the loadings and the number of factors. However, together they rule out the possibility that some factors have strictly fewer than N, but still a non-negligible number of, non-zero loadings, e.g. of order N^α for some 0 < α < 1. The existence of these weak factors decreases the signal-to-noise ratio, as the gap between the systematic and idiosyncratic eigenvalues becomes narrower. As a consequence, in such a model it is harder to establish the consistency of the factors estimated by sample principal components. Furthermore, the number of factors is even more challenging to identify, because most existing methods rely on a large signal-to-noise ratio. In this thesis, I consider a factor model that allows a general strength for each factor, i.e. both strong and weak factors can exist. Chapter 1 discusses the current literature on this topic and the motivation for my contribution. In Chapter 2, I show that the sample principal components are still consistent estimators for the factors (up to the spanning space), provided that the factors are not too weak. In addition, I derive the lower bound that the strength of the weakest factor needs to achieve in order to be consistently estimated. By strength, I mean the order of the number of non-zero loadings of the factor.
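The narrowing eigenvalue gap described above can be seen in a small simulation (my own sketch, not the thesis's code): plant a single factor whose loading vector has only N^α non-zero entries, then compare the separation between the leading eigenvalue of the sample covariance and the noise eigenvalues for α = 1 versus α = 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 400   # time and cross-section dimensions (illustrative)

def top_eigenvalues(alpha):
    """Two largest eigenvalues of X'X/T for one factor with N**alpha non-zero loadings."""
    m = int(N ** alpha)
    lam = np.zeros(N)
    lam[:m] = 1.0                                       # loadings non-zero on m series only
    f = rng.standard_normal(T)                          # the common factor
    X = np.outer(f, lam) + rng.standard_normal((T, N))  # factor part plus idiosyncratic noise
    eig = np.linalg.eigvalsh(X.T @ X / T)
    return eig[-1], eig[-2]

strong = top_eigenvalues(1.0)   # alpha = 1: leading eigenvalue of order N
weak = top_eigenvalues(0.5)     # alpha = 0.5: leading eigenvalue of order sqrt(N)
print("strong-factor separation:", strong[0] / strong[1])
print("weak-factor separation:  ", weak[0] / weak[1])
```

The strong factor's leading eigenvalue dwarfs the noise edge, while the weak factor's sits much closer to it, which is exactly why consistency and factor-number identification become harder.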
Chapter 3 presents a novel method to determine the number of factors, which is asymptotically consistent even when the factors are weak. I run extensive Monte Carlo simulations to compare the performance of this method to two well-known alternatives: the class of criteria proposed in Bai and Ng (2002) and the eigenvalue ratio method of Ahn and Horenstein (2013). In Chapters 4 and 5, I present some applications based on the work of this thesis. I mainly focus on two issues: selecting factor models in practice, and using factor analysis to estimate a large static covariance matrix.
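As a sketch of the eigenvalue ratio idea of Ahn and Horenstein (2013), one of the benchmark methods cited above rather than the thesis's new estimator, the number of factors can be taken as the position of the largest ratio of adjacent sample eigenvalues; hypothetical simulated data with three strong factors:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, r = 100, 50, 3                       # sample sizes and true number of factors

F = rng.standard_normal((T, r))            # factors
L = rng.standard_normal((N, r))            # loadings (all strong here)
X = F @ L.T + rng.standard_normal((T, N))  # observed panel

eig = np.linalg.eigvalsh(X @ X.T / (N * T))[::-1]  # eigenvalues, descending
ratios = eig[:8] / eig[1:9]                # ratios of adjacent eigenvalues
k_hat = int(np.argmax(ratios)) + 1         # ER estimator: where the ratio peaks
print("estimated number of factors:", k_hat)
```

When factors are weak, the peak at the true number is less pronounced, which motivates the alternative method the chapter develops.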
164. Three essays in applied economics. Deiana, Claudio. January 2016.
This thesis comprises three essays. The first focuses on the effect of a change in labour market conditions, induced by a trade shock, on crime at the US local level. Using US Census data, I provide evidence that increasing exposure to Chinese import competition has indirectly contributed to a change in the propensity to commit crime through a reduction in expected labour market earnings. The second essay, co-authored with Vincenzo Bove and Roberto Nisticò, addresses the reasons why countries decide to transfer weapons only to specific recipients. We present novel empirical models of the arms trade and concentrate on the role of energy dependence, in particular on oil, in explaining the trade of weapons between countries. We find strong empirical support for the hypothesis that oil-dependent economies have incentives to provide security by selling or giving away arms to oil-rich countries, thereby reducing their risk of political instability. Finally, the last essay, joint with Emanuele Ciani, focuses on family economics. We provide evidence that parents who helped their adult children in the past are rewarded with higher chances of receiving informal care later in life. To this purpose we use Italian data containing retrospective information about help with housing received from parents at the time of marriage. We show a positive association with the children's current provision of informal care to their parents, which is robust to controlling for a large set of individual and family characteristics, and is confirmed by an IV regression using house prices as an instrument. The results are in line with theories based on the presence of a third generation of grandchildren, such as those involving a demonstration effect or a family constitution.
165. The ethics of liberal market governance: Adam Smith and the constitution of financial market agency. Clarke, Chris D. January 2012.
In this thesis I provide a historicised account of the work of Adam Smith in order to reveal the essential variety of viable ethico-political commitments in liberal political economy and International Political Economy (IPE). Specifically, I draw on Quentin Skinner's approach to intellectual history in order to engage with Smith's thought. I show how existing readings of Smith in IPE on the whole fail some of Skinner's most basic methodological principles for interpreting past texts, which is problematic for IPE scholars because it reveals the distinctly 'economistic' historiography of Smith that dominates the subject field. I offer a way of escaping the limitations of the prevailing economistic historiography by providing a sustained engagement with Smith's actual texts as read in context. In so doing, I present a novel account of Smith for IPE which emphasises the crucial role of the concept of the 'sympathy procedure' in his work: through this mechanism, people learn how to express fellow-feeling within their market-bound relationships. I argue that this recovery provides a critical lens through which to interrogate the ethics of liberal market governance today, one which animates an alternative to economistic understandings of market-oriented behaviour. Following Skinner, I do not propose a direct 'application' of a Smithian perspective, but instead use it as part of a pragmatically inspired study to reveal the historical contingency of some of the most deeply held views about subjecthood as manifested under liberal market governance today. This enables me, in the empirical parts of the thesis, to reflect on competing discourses of the global financial crisis at the regulatory and everyday levels of global finance via a 'sympathy perspective'. I argue that through such an engagement Smith's sympathy procedure can produce novel ways of subverting the ethics of global finance as currently constituted.
166. The low-beta anomaly and estimation interval. Renfro, Brandon Kyle. 02 March 2017.
The relationship between risk and return is a central theme of finance. The classic theoretical model which addresses this relationship is the Capital Asset Pricing Model (CAPM). The CAPM of Sharpe (1964), Lintner (1965), and Black (1972) posits that a security's return is directly and positively related to its exposure to systematic risk as measured by beta.
Tests of the CAPM's theorized relationship between risk and return predominantly support a direct, linear relationship. A minority of the research on the CAPM purports to show that the relationship between a security's beta and its return is either too flat, or even inverse (Blitz, Falkenstein, & Vliet, 2014). This inverse relationship is known as the 'low-beta anomaly'. Such tests of the CAPM largely focus on a single beta estimation interval: five years of monthly returns.
The purpose of this study was to assess the nature of the relationship between equity beta and post-estimation return. Specifically, this study sought to address the validity and persistence of the low-beta anomaly across multiple beta estimation intervals.
Within the twenty-year sample period from January 1994 to December 2013, this research covered ten different beta estimation intervals in order to determine whether a statistically significant and theoretically consistent relationship existed between equity beta and post-estimation realized return. This was done utilizing a double-pass, or 'Fama-MacBeth', regression (Fama & MacBeth, 1973), whereby historical beta is first estimated by regressing a security's return against the return of the market. The second pass of the regression, which provides the empirical test, is then conducted on the historical beta against post-estimation return. Further, this research appropriately accounted for the conditional nature of the beta-return relationship.
This research provided two basic conclusions. First, the low-beta anomaly is not robust across multiple beta estimation intervals. Second, with any test of the relationship between beta and return, the choice of beta estimation interval matters: different estimation intervals sometimes provide contradictory empirical results for the same period.
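The double-pass procedure described above can be sketched in a few lines on simulated data (all numbers hypothetical; this illustrates the Fama-MacBeth logic, not a replication of the study): the first pass estimates each security's beta from a time-series regression on the market, and the second pass regresses each period's cross-section of returns on those betas, averaging the slopes.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 600, 25                        # periods and securities (hypothetical)
rm = rng.normal(0.01, 0.04, T)        # market excess returns
beta_true = rng.uniform(0.5, 1.5, n)
R = np.outer(rm, beta_true) + rng.normal(0, 0.02, (T, n))  # security returns

# Pass 1: time-series regression of each security on the market -> estimated betas
X = np.column_stack([np.ones(T), rm])
beta_hat = np.linalg.lstsq(X, R, rcond=None)[0][1]

# Pass 2: cross-sectional regression of returns on estimated betas, period by period
Z = np.column_stack([np.ones(n), beta_hat])
gammas = np.linalg.lstsq(Z, R.T, rcond=None)[0][1]  # slope (risk premium) each period
print(f"average premium per unit of beta: {gammas.mean():.4f}")
```

The average second-pass slope is the estimated reward per unit of beta; a flat or negative average over some estimation intervals is what the low-beta anomaly claims.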
167. Effects of monetary policy on macroeconomic performance: the case of Nigeria. Isedu, Mustafa. January 2013.
The objective of this study is to empirically examine the effects of monetary policy on macroeconomic performance in Nigeria. The study uses quarterly data from 1970Q1 to 2011Q4, a sample period of forty-two years. The study further introduces a structural break to investigate the presence of a possible structural change which takes into account the effects of the Structural Adjustment Programme introduced by the Nigerian government in 1987. The data were split into two sub-periods: 1970Q1–1986Q4, the era before the Structural Adjustment Programme, and 1987Q1–2011Q4, the period after it. Three approaches were utilised in the methodology. First, estimation and analysis were based on the coefficients of the variables using a long-run, co-integrating Vector Error Correction Model (VECM). The results confirm my a priori expectations, although many of the variables were not statistically significant. The study also estimates the GDP model over the full period 1970Q1–2011Q4 without a structural break, no break having been confirmed by a structural break test. We therefore accept the null hypothesis, meaning that there was no structural break in real domestic growth during the structural adjustment programme introduced in 1987. However, the structural break tests for the consumers' price index (CPI) and balance of payments (BOP) show that the parameters of the analysed equations were not stable, given that the recursive errors cut across the critical lines for both tests. As a result, we reject the null hypothesis, meaning that there was a structural break for the CPI and BOP models. This means that the structural adjustment programme introduced in 1987 brought about a change in CPI and BOP in the Nigerian economy. In the second approach of the methodology, a macroeconomic model was simulated to demonstrate the effects of monetary policy on macroeconomic performance.
The results obtained from the simulation are impressive and generally satisfactory; they suggest the effectiveness of monetary policy implementation for counter-cyclical income stabilization, BOP stabilization and CPI stabilization in Nigeria. In the third approach, three structural vector autoregressive (SVAR) econometric models were formulated to trace the effects of monetary policy shocks on domestic output, the consumers' price index and the balance of payments position in Nigeria. Structural effects of monetary shocks, or innovations, were captured by the impulse response functions and the forecast error variance decomposition. The empirical impulse-response assessment indicates that an exogenous shock to the short-term interest rate has the most significant positive effect on real domestic output and consumer prices, followed by a transitory effect of broad money. The response of the country's external payments position to a structural shock, as measured by a one-standard-deviation innovation in each of the policy variables, is almost zero. The policy implication is straightforward: monetary policy in Nigeria is effective in maintaining internal balance and ineffective in achieving external balance. Overall, the results suggest that monetary policy affects macroeconomic performance in Nigeria.
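The impulse responses referred to above come from iterating estimated VAR coefficients forward. As a minimal sketch, assuming a hypothetical bivariate reduced-form VAR(1) rather than the thesis's estimated SVAR for Nigeria, the response of each variable to a one-off unit shock is obtained by repeatedly applying the coefficient matrix:

```python
import numpy as np

# Hypothetical coefficient matrix for (interest rate, output); the negative
# off-diagonal term makes a rate rise depress output with a one-period lag.
A = np.array([[0.8, 0.0],
              [-0.3, 0.7]])

def irf(A, shock, horizons):
    """Response at each horizon to a one-off unit shock: iterate y_{h+1} = A y_h."""
    y = np.asarray(shock, dtype=float)
    out = []
    for _ in range(horizons + 1):
        out.append(y.copy())
        y = A @ y
    return np.array(out)

resp = irf(A, shock=[1.0, 0.0], horizons=12)  # unit shock to the interest rate
print(resp[:3])   # impact, then the shock propagating to output and dying out
```

Because the eigenvalues of A lie inside the unit circle, the responses decay to zero, mirroring the transitory effects reported in the abstract.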
168. Three essays in applied econometrics. Chen, Zhihong. January 2005.
Thesis advisor: Arthur Lewbel. This dissertation consists of three self-contained papers in applied econometrics. The first chapter, Testing Multivariate Distributions (joint with Jushan Bai), proposes a new method to test multivariate distributions, with a focus on multivariate normality and the multivariate t distribution, motivated in part by examination of financial market data. Using Khmaladze's martingale transformation to purge the effect of parameter estimation, our test generates a distribution-free statistic and can be easily applied to cases with complicated parameters. Simulations show our test has good size and power. Finally, we apply our test procedure to a real multivariate financial time series. The result is consistent with the well-known fat-tail property of financial data. The second chapter, Measuring the Poverty Line in China: An Equivalence Scale Method, is motivated by the current urban poverty issue in China. The fundamental question is: given the poverty threshold for an individual, how should that threshold vary across households with different demographic characteristics? This paper uses Urban Household Survey (UHS) data from China to estimate equivalence scales for Chinese urban households. The results provide a quantitative reference for calculating comparable poverty lines for households with different demographic compositions. They can also be used to determine appropriate subsidy levels for demographically different households. A useful byproduct of this exercise is the specification of a demand system for China. The third chapter, Dynamics of City Growth: Random or Deterministic? Evidence from China (joint with Shihe Fu), tests the random growth theory and the endogenous growth theory in urban economics using Chinese city size data from 1984 to 2002. We implement unit root and cointegration tests on pooled heterogeneous cities in the country.
Since China is still in a period of rapid urbanization, we can only tentatively conclude that overall Chinese city growth follows neither random growth nor parallel growth. However, we find that a small number of cities with certain common characteristics do grow in parallel. Thesis (PhD), Boston College, Graduate School of Arts and Sciences, 2005. Discipline: Economics.
169. Essays in nonparametric estimation and inference. Taylor, Luke. January 2017.
This thesis consists of three chapters which represent my journey as a researcher during this PhD. The uniting theme is nonparametric estimation and inference in the presence of data problems. The first chapter deals with nonparametric estimation in the presence of a censored dependent variable and endogenous regressors. In Chapters 2 and 3 my attention moves to problems of inference in the presence of mismeasured data. In Chapter 1 we develop a nonparametric estimator for the local average response of a censored dependent variable to endogenous regressors in a nonseparable model where the unobservable error term is not restricted to be scalar and where the nonseparable function need not be monotone in the unobservables. We formalise the identification argument put forward in Altonji, Ichimura and Otsu (2012), construct a nonparametric estimator, characterise its asymptotic properties, and conduct a Monte Carlo investigation of its small-sample properties. We show that the estimator is consistent and asymptotically normally distributed. Chapter 2 considers specification testing for regression models with errors-in-variables. In contrast to the method proposed by Hall and Ma (2007), our test allows general nonlinear regression models. Since our test employs the smoothing approach, it complements the nonsmoothing one of Hall and Ma in terms of local power properties. We establish the asymptotic properties of our test statistic for ordinary and supersmooth measurement error densities and develop a bootstrap method to approximate the critical value. We apply the test to the specification of Engel curves in the US. Finally, simulation results endorse our theoretical findings: our test has advantages in detecting high-frequency alternatives and dominates the existing tests under certain specifications. Chapter 3 develops a nonparametric significance test for regression models with measurement error in the regressors.
To the best of our knowledge, this is the first test of its kind. We use a 'semi-smoothing' approach with nonparametric deconvolution estimators and show that our test is able to overcome the slow rates of convergence associated with such estimators. In particular, our test is able to detect local alternatives at the √n rate. We derive the asymptotic distribution under i.i.d. and weakly dependent data, and provide bootstrap procedures for both data types. We also examine the finite-sample performance of the test through a Monte Carlo study. Finally, we discuss two empirical applications. The first considers the effect of cognitive ability on a range of socio-economic variables. The second uses time series data, and a novel approach to estimating the measurement error without repeated measurements, to investigate whether future inflation expectations are able to stimulate current consumption.
170. Three essays on firms and international trade. Huang, Hanwei. January 2018.
The first chapter of the thesis investigates the resilience of Chinese manufacturing importers to supply chain disruptions by exploiting the 2003 SARS epidemic as a natural experiment. I show both in theory and empirics that geographical diversification is crucial in building a resilient supply chain. I also find that reductions in trade costs induce firms to diversify further. Connectivity to the transportation network facilitates diversification in input sourcing and reduces the negative impact of SARS. Infrastructure is therefore useful not only in improving the efficiency of the economy, but also in increasing its resilience to shocks. The second chapter studies how changes in factor endowments, technologies, and trade costs jointly determine structural adjustments, defined as changes in the distributions of production and exports. Between 1999 and 2007, Chinese manufacturing production became more capital-intensive while exports did not. A structurally estimated Ricardian and Heckscher-Ohlin model with heterogeneous firms reconciles this seemingly puzzling pattern. Counterfactual simulations show that capital deepening made Chinese production more capital-intensive, but technology changes biased toward the labour-intensive sectors and trade liberalizations provided a counterbalancing force. The last chapter examines how firm heterogeneity shapes comparative advantage. Drawing on matched customs and firm-level data from China, we find that export participation, exported product scope and product mix, and firm mix within industries vary systematically with firms' labour intensity. This is rationalized by a model in which firms from industries of comparative disadvantage face tougher competition in the export market. The competitive effect induces reallocation within and across firms and generates endogenous Ricardian comparative advantage, which dampens ex ante comparative advantage.
Using sufficient statistics to measure and decompose comparative advantage, we find that the dampening mechanism is quantitatively important in shaping comparative advantage for a calibrated Chinese economy.