1. Essays in mechanism design / Ulku, Levent. January 2008
Thesis (Ph. D.)--Rutgers University, 2008. / "Graduate Program in Economics." Includes bibliographical references (p. 69-70).
2. Essays on financial econometrics / Cai, Lili. January 2010
Thesis (Ph. D.)--Rutgers University, 2010. / "Graduate Program in Economics." Includes bibliographical references (p. 84-88).
3. Essays on forecasting and volatility modelling / Dias, Gustavo Fruet. January 2013
This thesis contributes to four distinct fields in the econometrics literature: forecasting macroeconomic variables using large datasets, volatility modelling, risk premium estimation, and iterative estimators. As a research output, this thesis presents a balance of applied econometrics and econometric theory, with the latter covering the asymptotic theory of iterative estimators under different models and mapping specifications. In Chapter 1 we introduce and motivate the estimation tools for large datasets, volatility modelling, and the use of iterative estimators. In Chapter 2 we address the issue of forecasting macroeconomic variables using medium and large datasets by adopting vector autoregressive moving average (VARMA) models. We overcome the estimation issue that arises with this class of models by implementing the iterative ordinary least squares (IOLS) estimator. We establish consistency and the asymptotic distribution for the ARMA(1,1) case and argue that these results can be extended to the multivariate case. Monte Carlo results show that IOLS is consistent and feasible for large systems, and outperforms the maximum likelihood (MLE) estimator when the sample size is small. Our empirical application shows that VARMA models outperform the AR(1) (autoregressive model of order one) and vector autoregressive (VAR) models across different model dimensions. Chapter 3 proposes a new robust estimator for GARCH-type models: the nonlinear iterative least squares (NL-ILS) estimator. This estimator is especially useful in specifications where the errors exhibit some degree of dependence over time or where the conditional variance is misspecified. We illustrate the NL-ILS estimator by providing algorithms for the GARCH(1,1), weak-GARCH(1,1), GARCH(1,1)-in-mean and RealGARCH(1,1)-in-mean models. I establish the consistency and asymptotic distribution of the NL-ILS estimator for the GARCH(1,1) model under assumptions that are compatible with the quasi-maximum likelihood (QMLE) estimator. The consistency result is extended to the weak-GARCH(1,1) model, and a further extension of the asymptotic results to the GARCH(1,1)-in-mean case is also discussed. A Monte Carlo study provides evidence that the NL-ILS estimator is consistent and outperforms the MLE benchmark in a variety of specifications. Moreover, when the conditional variance is misspecified, the MLE estimator delivers biased estimates of the parameters in the mean equation, whereas the NL-ILS estimator does not. The empirical application investigates the risk premium on the CRSP, S&P500 and S&P100 indices. I document the risk premium parameter to be significant only for the CRSP index when using the robust NL-ILS estimator, and we argue that this stems from the wider composition of the CRSP index, which resembles the market more closely than the S&P500 and S&P100 indices do. This finding holds at daily, weekly and monthly frequencies and is corroborated by a series of robustness checks. Chapter 4 assesses the evolution of the risk premium parameter over time. To this end, we introduce a new class of volatility-in-mean model, the time-varying GARCH-in-mean (TVGARCH-in-mean) model, which allows the risk premium parameter to evolve stochastically as a random walk process. We show that the kernel-based NL-ILS estimator successfully estimates the time-varying risk premium parameter, with good finite-sample performance. In the empirical study, we find evidence that the risk premium parameter is time-varying, oscillating between negative and positive values. Chapter 5 concludes by pointing to the relevance of iterative estimators over the standard MLE framework, as well as to the thesis's contributions to the applied econometrics, financial econometrics and econometric theory literatures.
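To make the iterative scheme concrete, below is a minimal sketch of an IOLS loop for a univariate ARMA(1,1), y_t = phi*y_{t-1} + theta*eps_{t-1} + eps_t; the zero-residual initialisation and the stopping rule are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def iols_arma11(y, tol=1e-8, max_iter=500):
    """Iterative OLS (IOLS) sketch for y_t = phi*y_{t-1} + theta*eps_{t-1} + eps_t."""
    T = len(y)
    eps = np.zeros(T)                  # start from zero residuals (an assumption)
    phi = theta = 0.0
    for _ in range(max_iter):
        # OLS of y_t on y_{t-1} and the current residual estimate eps_{t-1}
        X = np.column_stack([y[:-1], eps[:-1]])
        (phi_new, theta_new), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
        # rebuild the residual series recursively under the new parameters
        eps_new = np.zeros(T)
        for t in range(1, T):
            eps_new[t] = y[t] - phi_new * y[t - 1] - theta_new * eps_new[t - 1]
        converged = max(abs(phi_new - phi), abs(theta_new - theta)) < tol
        phi, theta, eps = phi_new, theta_new, eps_new
        if converged:
            break
    return phi, theta
```

Each pass alternates a cheap OLS step (treating lagged residuals as data) with a residual update, which is what makes the estimator feasible for the large systems the Monte Carlo results refer to.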
4. Three perspectives on oscillating labour : the case of the West Bank / Kadri, Ali. January 1996
No description available.
5. Econometric methods and applications in modelling non-stationary climate data / Pretis, Felix. January 2015
Understanding of climate change and policy responses thereto rely on accurate measurements as well as models of both socio-economic and physical processes. However, the data used to assess impacts and establish historical climate records are non-stationary: distributions shift over time due to shocks, measurement changes and stochastic trends, all of which invalidate standard statistical inference. This thesis establishes econometric methods to model non-stationary climate data in a manner consistent with known physical laws, enabling joint estimation and testing; develops techniques for the automatic detection of structural breaks; and evaluates the socio-economic scenarios used in long-run climate projections. Econometric cointegration analysis can be used to overcome inferential difficulties stemming from stochastic trends in time series; however, cointegration has been criticised in climate research for lacking a physical justification. I show that physical two-component energy balance models of global mean climate can be mapped to a cointegrated system, making them directly testable, and thereby provide a physical justification for econometric methods in climate research. Automatic model selection with more variables than observations is introduced in modelling concentrations of atmospheric CO₂, while controlling for outliers and breaks at any point in the sample using impulse indicator saturation. Without imposing the inclusion of variables a priori, model selection finds that vegetation, temperature and other natural factors alone cannot explain the trend or the variation in CO₂ growth; industrial production components, driven by business cycles and economic shocks, are highly significant contributors. Generalising the principle of indicator saturation, I present a methodology to detect structural breaks at any point in a time series using designed functions. Selecting over these break functions at every point in time using a general-to-specific algorithm yields unbiased estimates of the break date and magnitude. Analytical derivations for the split-sample approach are provided under the null of no breaks and the alternative of one or more breaks. The methodology is demonstrated by detecting volcanic eruptions in a time series of Northern Hemisphere mean temperature derived from a coupled climate simulation spanning close to 1,200 years. All climate models require socio-economic projections to make statements about future climate change; the large span of projected temperature changes then originates predominantly from the wide range of scenarios rather than from uncertainty in the climate models themselves. For the first time, observations over two decades are available against which the first sets of socio-economic scenarios used in the Intergovernmental Panel on Climate Change reports can be assessed. The results show that the growth rate in fossil fuel CO₂ emission intensity (fossil fuel CO₂ emissions per unit of GDP) over the 2000s exceeds all main scenario values, with the discrepancy driven by underprediction of high growth rates in Asia. This underestimation of emission intensity raises concerns about achieving a world of economic prosperity in an environmentally sustainable fashion.
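As a rough illustration of the split-sample saturation idea described above, the sketch below saturates each half of a series with one impulse dummy per observation, retains dummies with large t-ratios and re-estimates on their union; the location-model setting and the critical value are assumptions for exposition, and the thesis's designed break functions generalise the impulses used here.

```python
import numpy as np

def iis_split_half(y, tcrit=2.576):
    """Stylised split-half impulse indicator saturation (IIS) in a location model."""
    T = len(y)
    blocks = [range(T // 2), range(T // 2, T)]
    retained = []
    for block in blocks:
        # constant plus one impulse dummy for every observation in this half
        dummies = [(np.arange(T) == t).astype(float) for t in block]
        X = np.column_stack([np.ones(T)] + dummies)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (T - X.shape[1])
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
        tstats = beta / se
        retained += [t for t, tv in zip(block, tstats[1:]) if abs(tv) > tcrit]
    # terminal model: constant plus the union of retained impulse dummies
    cols = [np.ones(T)] + [(np.arange(T) == t).astype(float) for t in retained]
    beta_final, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return retained, beta_final
```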
6. Essays in time series econometrics / Sakarya, Neslihan. 25 May 2017
No description available.
7. The nonparametric approach to demand analysis : essays in revealed preference theory / Adams, Abigail. January 2013
This thesis comprises three principal essays, each of which contributes to the literature on the nonparametric approach to demand analysis. In each essay, I develop novel techniques in the revealed preference tradition and apply them to a series of questions concerning the mechanisms underlying consumer spending decisions. Each technique is tightly linked to a particular nonparametric theory of choice behaviour and is explicitly designed for use with a finite set of observations. My work draws heavily upon results from finite mathematics, into which I integrate insights from information theory and integer programming. The output of this endeavour is a set of methodologies that are largely free of auxiliary assumptions on the form of the unobserved structural functions of interest. In greater detail, my first essay extends and clarifies the nonparametric approach to forecasting demand behaviour at new budget regimes. Using insights from information theory and integer programming, I construct an operational nonparametric definition of global rationality and develop a methodology that facilitates the recovery of globally rational individual demand predictions. This is the first attempt in the literature to develop a systematic methodology for imposing global rationality on nonparametric demand predictions. The resulting forecasts allow for unrestricted preference heterogeneity in the population, and I demonstrate how these predictions can be used for coherent welfare analysis. In my second and third essays, I prove new revealed preference testability axioms for models that extend the traditional neoclassical choice framework. Specifically, my second essay addresses the intertemporal allocation of spending by collectives, whilst my final essay integrates taste variation into the utility maximisation framework. In both of these essays, I develop the testable results into practical algorithms that allow one to recover salient features of individual preferences. In my second essay, a methodology is developed to recover the minimal intrahousehold heterogeneity in theory-consistent discount rates, whilst my final essay develops a quadratic programming procedure that recovers the minimal interpersonal and intertemporal heterogeneity in tastes required to rationalise observed choice patterns. Applying these techniques to consumption micro-data yields new empirical insights of relevance to the applied literatures on time discounting, family economics and the public policy debate on tobacco control.
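For readers unfamiliar with the revealed preference machinery these essays extend, the following is a compact, textbook-style check of the Generalised Axiom of Revealed Preference (GARP) on a finite set of price-quantity observations; this is the standard construction rather than code from the thesis, and the numerical tolerance is an assumption.

```python
import numpy as np

def satisfies_garp(P, Q, tol=1e-9):
    """GARP test for n observations of prices P (n x k) and quantities Q (n x k)."""
    E = P @ Q.T                        # E[i, j] = cost of bundle j at prices i
    own = np.diag(E).copy()            # own[i]  = actual expenditure at observation i
    R = E <= own[:, None] + tol        # direct revealed preference: q_i R0 q_j
    n = len(own)
    for k in range(n):                 # Warshall's algorithm: transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    # violation: q_i R q_j although q_i was strictly cheaper than q_j at prices p_j
    violations = R & (E.T < own[None, :] - tol)
    return not violations.any()
```

The integer and quadratic programming procedures described above wrap constraints of exactly this form around richer objectives, such as minimal heterogeneity in discount rates or tastes.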
8. Market and professional decision-making under risk and uncertainty / Davidson, Erick. January 2007
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 104-107).
9. Two essays : on the common information in the return volatilities and volumes : on the informational efficiency of municipal bond market / Zhang, Lei. January 2008
Thesis (Ph.D.)--Syracuse University, 2008. / "Publication number: AAT 3323095."
10. Essays in panel data and financial econometrics / Pakel, Cavit. January 2012
This thesis is concerned with volatility estimation using financial panels and with bias reduction in non-linear dynamic panels in the presence of dependence. Traditional GARCH-type volatility models require long time series for accurate estimation, which makes it impossible to analyse some interesting datasets that lack a sufficiently long history of observations. This study contributes to the literature by introducing the GARCH Panel model, which exploits both time-series and cross-section information in order to make up for this lack of time-series variation. It is shown that this approach leads to gains both in- and out-of-sample, but it suffers from the well-known incidental parameter problem and therefore cannot deal with short data either. As a response, a bias-correction approach valid for a general variety of models beyond GARCH is proposed. This extends the analytical bias-reduction literature to cross-section dependence and is a theoretical contribution to the panel data literature. In the final chapter, these two contributions are combined to develop a new approach to volatility estimation in short panels. Simulation analysis reveals that this approach removes a substantial portion of the bias even when only 150-200 observations are available, in stark contrast with standard methods, which require 1,000-1,500 observations for accurate estimation. The approach is used to model monthly hedge fund volatility, a further novel contribution, as it has hitherto been impossible to analyse hedge fund volatility owing to funds' typically short return histories. The analysis reveals that hedge funds exhibit variation in their volatility characteristics both across and within investment strategies. Moreover, the sample distributions of fund volatilities are asymmetric, have large right tails and react to major economic events such as the recent credit crunch episode.
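As a hedged illustration of the pooling idea, the sketch below sums a Gaussian quasi-likelihood for a GARCH(1,1) across funds while sharing the dynamic parameters; the fund-specific intercepts, the variance-targeting start and the optimiser settings are assumptions standing in for the thesis's actual GARCH Panel specification.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Gaussian quasi-likelihood for h_t = w + a*r_{t-1}^2 + b*h_{t-1}."""
    w, a, b = params
    h = np.empty(len(r))
    h[0] = r.var()                     # variance-targeting start (an assumption)
    for t in range(1, len(r)):
        h[t] = w + a * r[t - 1] ** 2 + b * h[t - 1]
    return 0.5 * np.sum(np.log(h) + r ** 2 / h)

def panel_nll(theta, panel):
    """Pooled objective: fund-specific intercepts w_i, common (a, b)."""
    a, b = theta[-2:]
    return sum(garch11_nll((w_i, a, b), r) for w_i, r in zip(theta[:-2], panel))

# illustrative use on five short simulated series (not real hedge fund data)
rng = np.random.default_rng(0)
panel = [0.01 * rng.standard_normal(150) for _ in range(5)]
theta0 = np.r_[np.full(len(panel), 1e-5), 0.05, 0.90]
bounds = [(1e-8, None)] * len(panel) + [(0.0, 1.0), (0.0, 1.0)]
fit = minimize(panel_nll, theta0, args=(panel,), bounds=bounds, method="L-BFGS-B")
```

Sharing (a, b) across the cross-section is what substitutes for the missing time-series variation when each individual series is short.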