121

An application of the contingent valuation method to an excludable public good : the case of Northampton's parks

Coskeran, Thomas January 1998 (has links)
This thesis discusses an application of the Contingent Valuation Method (CVM), a technique involving the use of questionnaire surveys, to the valuation of parks in the town of Northampton, England. Urban parks are an example of a class of good, excludable public goods, to which the CVM has not been extensively applied. The application therefore breaks new ground in applying the technique to this particular case and in being the first use of the CVM for this type of good in the United Kingdom. The thesis begins with a review of the nature of the CVM. A justification for using the method in the case of parks is then provided. After theoretical difficulties surrounding the application are examined, an account follows of a pilot contingent valuation survey. The results of this survey and the analysis conducted on it are reported. Results from a main survey, which followed the pilot, are then discussed and analysed using both tobit and logit analysis. The implications of these studies are summarized in the concluding chapter. The principal policy conclusions to follow from the work are that: the Council could consider increasing expenditure on parks to reflect fully the preferences of the town's population; any increased spending on parks could come, at least in part, from a reduction in spending on highways in the town; parks could be used as part of a redistributive social policy; and those on higher incomes could be expected to make greater contributions to the maintenance of parks. A rationale for these and other policy recommendations is given in the final chapter, as are suggestions for further research into the application of the CVM to excludable public goods.
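For readers unfamiliar with this kind of analysis, the following sketch is not from the thesis — the data, variable names and coefficients are invented for illustration — but it shows how a logit model of yes/no responses to randomly assigned bids yields a willingness-to-pay (WTP) estimate in dichotomous-choice CVM studies:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Invented dichotomous-choice CVM data: each respondent accepts or rejects
# a randomly assigned annual payment ("bid") towards park maintenance.
bid = rng.uniform(1.0, 20.0, n)        # proposed payment, GBP per year
income = rng.normal(20.0, 5.0, n)      # household income, GBP thousands
latent = 2.0 - 0.3 * bid + 0.05 * income + rng.logistic(size=n)
yes = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([bid, income]))
res = sm.Logit(yes, X).fit(disp=False)
print(res.params)  # expect a negative bid coefficient

# With utility linear in the bid, median WTP solves a + b*WTP + c*income = 0.
a, b, c = res.params
print("median WTP at mean income:", -(a + c * income.mean()) / b)
```

A tobit specification, by contrast, would treat open-ended stated amounts as censored at zero rather than modelling a binary accept/reject decision.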
122

Experiments on confidence calibration and decision making

Murad, Zahra January 2015 (has links)
This thesis reports on three experiments studying subjects' confidence about their performance on a task and how it relates to decision-making under uncertainty. Chapter 1 introduces the thesis, providing an overview of the common themes and methods underlying this research. Chapter 2 reports the first experiment, investigating the relationship between risk attitudes and confidence judgements. We measure confidence in two different ways: with an incentivised elicitation tool and with unincentivised self-reports. Using our incentivised tool we find that, in the absence of controls for risk attitudes, subjects tend to be underconfident about their own performance. When we filter out the effects of risk attitudes we find that underconfidence is reduced, but not eliminated. We also identify an interesting link between self-reported confidence and risk attitudes, in that experimental subjects with less concave utility functions and more elevated probability weighting functions tend to report higher confidence levels. Chapter 3 reports the second experiment, investigating the role of information in experimental market entry games. We look at whether individual over-entry to simple and under-entry to difficult markets disappears when subjects make entry decisions in groups or are given statistical information about the performance of previous subjects. We find that individuals and groups are both susceptible to the same types of biases in entry, and both fail to learn from repetition and feedback. We find that individuals learn to de-bias their entry decisions in the second half of the experiment when given explicit information about the performance of others. Chapter 4 reports an experiment investigating the "snowballing of confidence" in hierarchical tournaments. We analyse how high/low scorers of a group in one stage of the tournament change their confidence levels in the next stage when they are re-grouped with other high/low scorers. We find that all subjects start the tournament assigning an equal chance to being high or low scorers in their groups. As they proceed through the stages, low scorers become more underconfident whereas high scorers become more overconfident about their relative performances. We also identify an interesting difference in the perceptions of the task between high and low scorers that is linked to self-serving causal attribution biases previously found in the psychology literature. Chapter 5 summarizes the findings of this dissertation and concludes.
123

Weak factor model in large dimension

Phan, Quang January 2015 (has links)
This thesis presents some extensions to the current literature on high-dimensional static factor models. When the cross-section dimension (denoted by N henceforth) is very large, the standard assumption for each common factor is that the number of non-zero loadings grows linearly with N. On the other hand, an idiosyncratic error for each component can only be correlated with a finite number of other components in the cross-section. These two assumptions are crucial in standard high-dimensional factor analysis, as they allow us to obtain consistent estimators for the factors, the loadings and the number of factors. However, together they rule out the possibility of factors whose number of non-zero loadings is strictly smaller than N yet still non-negligible, e.g. N^α for some 0 < α < 1. The existence of these weak factors decreases the signal-to-noise ratio, as the gap between the systematic and idiosyncratic eigenvalues becomes narrower. As a consequence, in such a model it is harder to establish the consistency of the factors estimated by sample principal components. Furthermore, the number of factors is even more challenging to identify, because most existing methods rely on a large signal-to-noise ratio. In this thesis, I consider a factor model that allows general strength for each factor, i.e. both strong and weak factors can exist. Chapter 1 discusses the current literature on this topic and the motivation for my contribution. In Chapter 2, I show that the sample principal components are still consistent estimators of the factors (up to the spanning space), provided that the factors are not too weak. In addition, I derive the lower bound on the strength that the weakest factor needs to achieve in order to be consistently estimated; more precisely, by strength I mean the order of the number of non-zero loadings of the factor. Chapter 3 presents a novel method to determine the number of factors, which is asymptotically consistent even when the factors are weak. I run extensive Monte Carlo simulations to compare the performance of this method to two well-known ones: the class of criteria proposed in Bai and Ng (2002) and the eigenvalue ratio method of Ahn and Horenstein (2013). In Chapters 4 and 5, I show some applications based on the work of this thesis. I focus mainly on two issues: selecting factor models in practice and using factor analysis to compute large static covariance matrices.
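To make the factor-number problem concrete, here is a minimal sketch (my own, not the thesis's novel method) of the eigenvalue-ratio estimator of Ahn and Horenstein (2013) cited above. It selects the number of factors maximising the ratio of adjacent eigenvalues of the sample covariance matrix — and its reliance on a large eigenvalue gap is precisely what weak factors undermine:

```python
import numpy as np

def eigenvalue_ratio_k(X, kmax=8):
    """Ahn-Horenstein (2013) eigenvalue-ratio estimator of the number of factors.

    X : (T, N) array of demeaned data.
    Returns the k in {1, ..., kmax} maximising lambda_k / lambda_{k+1}.
    """
    T, N = X.shape
    # eigenvalues of XX'/(NT), which shares its non-zero spectrum with X'X/(NT)
    eig = np.linalg.eigvalsh(X @ X.T / (N * T))[::-1]  # descending order
    ratios = eig[:kmax] / eig[1:kmax + 1]
    return int(np.argmax(ratios)) + 1

# Simulated example: two strong factors plus idiosyncratic noise
rng = np.random.default_rng(1)
T, N, r = 200, 100, 2
F, L = rng.standard_normal((T, r)), rng.standard_normal((N, r))
X = F @ L.T + rng.standard_normal((T, N))
print(eigenvalue_ratio_k(X))  # typically prints 2
```

With weak factors (e.g. loadings non-zero for only N^α series), the ratio at the true k shrinks towards one, which is the failure mode the thesis's Chapter 3 method is designed to avoid.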
124

Three essays in applied economics

Deiana, Claudio January 2016 (has links)
This thesis comprises three essays. The first focuses on the effect of a change in labour market conditions induced by a trade shock on crime at the US local level. Using US Census data, I provide evidence that increasing exposure to Chinese competition has indirectly contributed to the change in the propensity to commit crime through a reduction of expected labour market earnings. The second essay, which is co-authored with Vincenzo Bove and Roberto Nisticò, addresses the reasons why countries decide to transfer weapons only to specific recipients. We present novel empirical models of the arms trade and concentrate on the role of energy dependence, in particular on oil, in explaining the trade of weapons between countries. We find strong empirical support for the hypothesis that oil-dependent economies have incentives to provide security by selling or giving away arms to oil-rich countries, thereby reducing their risk of political instability. Finally, the last essay, joint with Emanuele Ciani, focuses on family economics. We provide evidence that parents who helped their adult children in the past are rewarded with higher chances of receiving informal care later in life. For this purpose we use Italian data containing retrospective information about help with housing received from parents at the time of marriage. We show a positive association between this past help and children's current provision of informal care to their parents, which is robust to controlling for a large set of individual and family characteristics, and is confirmed by an IV regression using house prices as an instrument. The results are in line with theories based on the presence of a third generation of grandchildren, such as those involving a demonstration effect or a family constitution.
125

The ethics of liberal market governance : Adam Smith and the constitution of financial market agency

Clarke, Chris D. January 2012 (has links)
In this thesis I provide a historicised account of the work of Adam Smith in order to reveal the essential variety of viable ethico-political commitments in liberal political economy and International Political Economy (IPE). Specifically, I draw on Quentin Skinner's approach to intellectual history in order to engage with Smith's thought. I show how existing readings of Smith in IPE on the whole tend to fail some of Skinner's most basic methodological principles for interpreting past texts, which is problematic for IPE scholars because it reveals the distinctly 'economistic' historiography of Smith that dominates the subject field. I offer a way of escaping the limitations of the prevailing economistic historiography by providing a sustained engagement with his actual texts as read in context. In so doing, I present a novel account of Smith for IPE which emphasises the crucial role of the concept of the 'sympathy procedure' in his work, a mechanism through which people learn how to express fellow-feeling within their market-bound relationships. I argue that this recovery provides a critical lens through which to interrogate the ethics of liberal market governance today, one which animates an alternative to economistic understandings of market-oriented behaviour. Following Skinner, I do not propose a direct 'application' of a Smithian perspective, but instead use it as part of a pragmatically inspired study to reveal the historical contingency of some of the most deeply held views about subjecthood as manifested under liberal market governance today. This enables me, in the empirical parts of my thesis, to reflect on competing discourses of the global financial crisis at the regulatory and everyday levels of global finance via a 'sympathy perspective'. I argue that through such an engagement Smith's sympathy procedure can produce novel ways of subverting the ethics of global finance as currently constituted.
126

Effects of monetary policy on macro economic performance : the case of Nigeria

Isedu, Mustafa January 2013 (has links)
The objective of this study is to examine empirically the effects of monetary policy on macroeconomic performance in Nigeria. The study uses quarterly data from 1970Q1 to 2011Q4, a sample period of forty-two years. The study further introduces a structural break to investigate the presence of a possible structural change, taking into account the effects of the Structural Adjustment Programme introduced by the Nigerian government in 1987. The data were split into two sub-periods: 1970Q1–1986Q4, the era before the Structural Adjustment Programme, and 1987Q1–2011Q4, the period after it. Three approaches were used in the methodology. First, estimation and analysis were based on the coefficients of the variables in a long-run cointegrating Vector Error Correction Model (VECM). The results confirm my a priori expectations, although many of the variables were not statistically significant. The study also estimates the GDP model over the full period 1970Q1–2011Q4 without a structural break, no break having been confirmed by a structural break test. We therefore accept the null hypothesis, meaning that there was no structural break in real domestic growth during the Structural Adjustment Programme introduced in 1987. However, the structural break tests for the consumer price index (CPI) and balance of payments (BOP) show that the parameters of the analysed equations were not stable, given that recursive errors cut across the critical lines for both tests. As a result, we reject the null hypothesis, meaning that there was a structural break for the CPI and BOP models: the Structural Adjustment Programme introduced in 1987 brought about a change in CPI and BOP in the Nigerian economy. In the second approach, a macroeconomic model was simulated to demonstrate the effects of monetary policy on macroeconomic performance. The results obtained from the simulation are generally satisfactory and suggest the effectiveness of monetary policy implementation for counter-cyclical income stabilisation, BOP stabilisation and CPI stabilisation in Nigeria. In the third approach, three structural vector autoregressive (SVAR) econometric models were formulated to trace the effects of monetary policy shocks on domestic output, the consumer price index and the balance of payments position in Nigeria. Structural effects of monetary shocks or innovations were captured by impulse response functions and the forecast error variance decomposition. The empirical impulse-response assessment indicates that an exogenous shock to the short-term interest rate has the most significant positive effect on real domestic output and consumer prices, followed by a transitory effect of broad money. The response of the country's external payments position to a structural shock, as measured by a one-standard-deviation innovation in each of the policy variables, is almost zero. The policy implication is straightforward: monetary policy in Nigeria is effective in maintaining internal balance and ineffective in achieving external balance. Overall, the results suggest that monetary policy affects macroeconomic performance in Nigeria.
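As a rough illustration of the third approach (a sketch only: the data below are synthetic, and a recursive Cholesky ordering stands in for whatever identification scheme the thesis actually uses), impulse responses and forecast-error variance decompositions can be computed with statsmodels:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T = 168  # 1970Q1-2011Q4 spans 168 quarters

# Synthetic stand-ins for an interest rate, output growth and inflation;
# a real application would use the actual, stationarity-transformed series.
data = rng.standard_normal((T, 3))
for t in range(1, T):
    data[t] += 0.5 * data[t - 1]  # add some persistence

res = VAR(data).fit(maxlags=4, ic="aic")
irf = res.irf(12)    # impulse responses over 12 quarters
fevd = res.fevd(12)  # forecast-error variance decomposition
print(irf.orth_irfs.shape)  # Cholesky-orthogonalised responses
```

Reading off the response of output and prices to a one-standard-deviation innovation in the interest-rate equation mirrors the thesis's impulse-response assessment.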
127

Essays in nonparametric estimation and inference

Taylor, Luke January 2017 (has links)
This thesis consists of three chapters which represent my journey as a researcher during this PhD. The uniting theme is nonparametric estimation and inference in the presence of data problems. The first chapter concerns nonparametric estimation in the presence of a censored dependent variable and endogenous regressors. In Chapters 2 and 3 my attention moves to problems of inference in the presence of mismeasured data. In Chapter 1 we develop a nonparametric estimator for the local average response of a censored dependent variable to endogenous regressors in a nonseparable model where the unobservable error term is not restricted to be scalar and where the nonseparable function need not be monotone in the unobservables. We formalise the identification argument put forward in Altonji, Ichimura and Otsu (2012), construct a nonparametric estimator, characterise its asymptotic properties, and conduct a Monte Carlo investigation to study its small sample properties. We show that the estimator is consistent and asymptotically normally distributed. Chapter 2 considers specification testing for regression models with errors-in-variables. In contrast to the method proposed by Hall and Ma (2007), our test allows general nonlinear regression models. Since our test employs the smoothing approach, it complements the nonsmoothing one of Hall and Ma in terms of local power properties. We establish the asymptotic properties of our test statistic for ordinary and supersmooth measurement error densities and develop a bootstrap method to approximate the critical value. We apply the test to the specification of Engel curves in the US. Finally, simulation results endorse our theoretical findings: our test has advantages in detecting high-frequency alternatives and dominates the existing tests under certain specifications. Chapter 3 develops a nonparametric significance test for regression models with measurement error in the regressors. To the best of our knowledge, this is the first test of its kind. We use a ‘semi-smoothing’ approach with nonparametric deconvolution estimators and show that our test is able to overcome the slow rates of convergence associated with such estimators. In particular, our test is able to detect local alternatives at the √n rate. We derive the asymptotic distribution under i.i.d. and weakly dependent data, and provide bootstrap procedures for both data types. We also highlight the finite sample performance of the test through a Monte Carlo study. Finally, we discuss two empirical applications. The first considers the effect of cognitive ability on a range of socio-economic variables. The second uses time series data, together with a novel approach to estimating the measurement error without repeated measurements, to investigate whether future inflation expectations are able to stimulate current consumption.
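To illustrate the deconvolution machinery that Chapters 2 and 3 build on (a self-contained sketch of a standard deconvolution density estimator under a known Gaussian error, not the thesis's own code), the density of an error-free variable can be recovered from contaminated observations by dividing characteristic functions and inverting:

```python
import numpy as np

def deconvolution_kde(y, x_grid, h, eps_sd):
    """Deconvolution KDE of f_X from Y = X + eps, eps ~ N(0, eps_sd^2).

    Uses the sinc kernel, whose Fourier transform is 1 on [-1/h, 1/h],
    so the inversion integral is truncated at |t| <= 1/h.
    """
    t = np.linspace(-1.0 / h, 1.0 / h, 801)
    ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)  # empirical cf of Y
    phi_eps = np.exp(-0.5 * (eps_sd * t) ** 2)      # cf of the Gaussian error
    integrand = ecf / phi_eps                       # estimates the cf of X
    dt = t[1] - t[0]
    fx = (np.exp(-1j * np.outer(x_grid, t)) @ integrand).real * dt / (2 * np.pi)
    return np.clip(fx, 0.0, None)

rng = np.random.default_rng(3)
x = rng.standard_normal(2000)           # true (unobserved) variable
y = x + rng.normal(0, 0.5, x.size)      # contaminated observation
grid = np.linspace(-4, 4, 81)
fhat = deconvolution_kde(y, grid, h=0.3, eps_sd=0.5)
```

The Gaussian error used here is of the supersmooth type mentioned in the abstract; its rapidly decaying characteristic function in the denominator is what makes such estimators converge slowly, the problem Chapter 3's test circumvents.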
128

Three essays on firms and international trade

Huang, Hanwei January 2018 (has links)
The first chapter of the thesis investigates the resilience of Chinese manufacturing importers to supply chain disruptions by exploiting the 2003 SARS epidemic as a natural experiment. I show both theoretically and empirically that geographical diversification is crucial in building a resilient supply chain. I also find that reductions in trade costs induce firms to diversify further. Connectivity to the transportation network facilitates diversification in input sourcing and reduces the negative impact of SARS. Infrastructure is therefore useful not only in improving the efficiency of the economy, but also in increasing its resilience to shocks. The second chapter studies how changes in factor endowments, technologies, and trade costs jointly determine structural adjustments, defined as changes in the distributions of production and exports. From 1999 to 2007, Chinese manufacturing production became more capital-intensive while exports did not. A structurally estimated Ricardian and Heckscher-Ohlin model with heterogeneous firms reconciles this seemingly puzzling pattern. Counterfactual simulations show that capital deepening made Chinese production more capital-intensive, but technology changes biased toward the labour-intensive sectors and trade liberalisation provided a counterbalancing force. The last chapter examines how firm heterogeneity shapes comparative advantage. Drawing on matched customs and firm-level data from China, we find that export participation, exported product scope and product mix, and firm mix within industries vary systematically with firms' labour intensity. This is rationalised by a model in which firms from industries of comparative disadvantage face tougher competition in the export market. The competitive effect induces reallocation within and across firms and generates endogenous Ricardian comparative advantage, which dampens ex ante comparative advantage. Using sufficient statistics to measure and decompose comparative advantage, we find that the dampening mechanism is quantitatively important in shaping comparative advantage for a calibrated Chinese economy.
129

Essays in microeconometrics

Dong, Hao January 2018 (has links)
This thesis consists of three chapters written during my PhD studies. In the first two chapters, I investigate the estimation of nonparametric and semiparametric econometric models widely used in empirical studies when the data are mismeasured. In the last chapter, my attention moves to the estimation of the effects of social interactions. In Chapter 1, I study the estimation of the nonparametric additive model in the presence of a mismeasured covariate. In such a situation, the conventional method may cause severe bias, and I therefore propose a new estimator. The estimation procedure is divided into two stages. In the first stage, to adapt to the additive structure, I use a series method, and to deal with the ill-posedness brought about by the mismeasurement, I introduce a ridge parameter. The convergence rate is then derived for the first-stage estimator. To obtain the distributional results required for inference, I implement one-step back-fitting with a deconvolution kernel based on the first-stage estimator. Asymptotic normality is derived for the second-stage estimator. Chapter 2 investigates the sharp regression-discontinuity (SRD) design when there is a continuously distributed measurement error in the running variable. In such a situation, the discontinuity at the cut-off disappears completely, and using the conventional SRD method causes severe bias. To overcome this, I develop a new estimator of the average treatment effect at the cut-off. Two separate cases, characterised by the observability of the treatment status, are considered. In the case of observed treatment status, the proposed estimator is the difference between deconvolution local linear estimators based on the treated and control groups. In the case of unobserved treatment status, the observed running variable cannot be used to divide the sample due to the presence of measurement errors, so one-sided kernel functions are implemented and an additional ridge parameter is introduced for regularisation. Asymptotic properties of the proposed estimators are derived for both cases. Chapter 3 develops a new method to estimate spillover effects using the factor structure of the variables generating the spillovers. Specifically, we find that such a factor structure implies constraints on the spillovers, which can be utilised to improve the performance of existing estimators, such as the LASSO, by adding the factor-induced constraints. The L2 error bound is derived for the proposed estimator. Compared with the unconstrained case, the proposed estimator is more accurate in the sense that it has an approximately sharper error bound. Simulation results demonstrate our findings.
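As a rough sketch of the first-stage idea in Chapter 1 — series estimation stabilised by a ridge parameter — the snippet below is my own illustration: the polynomial basis, penalty value and data are invented, and it deliberately ignores the measurement error that the chapter's actual estimator handles.

```python
import numpy as np

def ridge_series_fit(x, y, degree=6, ridge=1e-2):
    """Polynomial-series regression of y on x with a ridge penalty.

    The ridge term regularises the otherwise ill-conditioned inversion,
    in the spirit of the first-stage estimator described above.
    """
    B = np.vander(x, degree + 1, increasing=True)  # series basis
    coef = np.linalg.solve(B.T @ B + ridge * np.eye(degree + 1), B.T @ y)
    return lambda x0: np.vander(x0, degree + 1, increasing=True) @ coef

rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 400)
y = np.sin(np.pi * x) + 0.3 * rng.standard_normal(x.size)
fhat = ridge_series_fit(x, y)
print(fhat(np.array([0.0, 0.5])))  # fitted values near sin(0) and sin(pi/2)
```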
130

Essays on inference in econometric models

Adusumilli, Karun January 2018 (has links)
This thesis contains three essays on inference in econometric models. Chapter 1 considers the question of bootstrap inference for propensity score matching. Propensity score matching, where the propensity scores are estimated in a first step, is widely used for estimating treatment effects. In this context, the naive bootstrap is invalid (Abadie and Imbens, 2008). This chapter proposes a novel bootstrap procedure for this context and demonstrates its consistency. Simulations and real data examples demonstrate the superior performance of the proposed method relative to using the asymptotic distribution for inference, especially when the degree of overlap in propensity scores is poor. General versions of the procedure can also be applied to other causal effect estimators such as inverse probability weighting and propensity score subclassification, potentially leading to higher-order refinements for inference in such contexts. Chapter 2 tackles the question of inference in incomplete econometric models. In many economic and statistical applications, the observed data take the form of sets rather than points. Examples include bracket data in survey analysis, tumor growth and rock grain images in morphology analysis, and noisy measurements on the support function of a convex set in medical imaging and robotic vision. Additionally, nonparametric bounds on treatment effects under imperfect compliance can be expressed by means of random sets. This chapter develops a concept of nonparametric likelihood for random sets and their mean, known as the Aumann expectation, and proposes general inference methods by adapting the theory of empirical likelihood. Chapter 3 considers inference on the cumulative distribution function (CDF) in the classical measurement error model. It proposes both asymptotic and bootstrap-based uniform confidence bands for the estimator of the CDF under measurement error. The proposed techniques can also be used to obtain confidence bands for quantiles, and to perform various CDF-based tests such as goodness-of-fit tests for parametric models of densities, two-sample homogeneity tests, and tests for stochastic dominance, all for the first time under measurement error.
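For context, here is a minimal sketch (mine, not the chapter's procedure) of the matching estimator whose bootstrap inference Chapter 1 studies: a logit first stage for the propensity score, nearest-neighbour matching on the estimated score, and an average of matched outcome differences. As the abstract notes, naively resampling the data to bootstrap this estimator is invalid (Abadie and Imbens, 2008), which is what motivates the chapter's novel procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def psm_att(X, d, y):
    """Nearest-neighbour propensity-score matching estimate of the ATT.

    X: (n, p) covariates, d: (n,) treatment indicator, y: (n,) outcome.
    Note: the naive bootstrap over (X, d, y) is invalid for this estimator
    (Abadie and Imbens, 2008).
    """
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
    # match each treated unit to the control with the closest estimated score
    matches = control[np.abs(ps[treated][:, None] - ps[control]).argmin(axis=1)]
    return float(np.mean(y[treated] - y[matches]))

rng = np.random.default_rng(4)
n = 1000
X = rng.standard_normal((n, 3))
d = (X @ np.array([0.5, -0.3, 0.2]) + rng.logistic(size=n) > 0).astype(int)
y = X @ np.array([1.0, 1.0, 1.0]) + 2.0 * d + rng.standard_normal(n)  # true ATT = 2
print(psm_att(X, d, y))
```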
