1 |
Predictive Accuracy of Linear Models with Ordinal Regressors. Modin Larsson, Jim, January 2016
This paper considers four approaches to ordinal predictors in linear regression and evaluates how they compare with respect to predictive accuracy. The two most common treatments, dummy coding and classical linear regression on assigned level scores, are compared with two refined methods: penalized smoothed coefficients and a generalized additive model with cubic splines. A simulation study is conducted to assess all four on the basis of predictive performance. Our results show that the dummy-based methods outperform the numeric-score methods at small sample sizes, although the differences between methods diminish as the sample size increases. Tendencies of overfitting are identified among the dummy methods. We conclude that the choice of method ought to be not only context driven but made in light of all characteristics of the data.
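As a rough illustration of the contrast between the two typical treatments, the sketch below (my own construction, not the paper's simulation design) fits dummy-coded and numeric-score linear regressions to a small training sample with non-equidistant level effects and compares their out-of-sample mean squared error; level effects, sample sizes, and noise level are all invented.

```python
# Hypothetical sketch: out-of-sample comparison of dummy coding vs. assigned
# level scores for a single ordinal regressor. The data-generating process and
# all parameter values are illustrative choices, not the paper's design.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_levels = 50, 1000, 5
true_means = np.array([0.0, 0.4, 0.5, 1.5, 1.6])   # non-equidistant level effects

def simulate(n):
    x = rng.integers(0, n_levels, size=n)           # ordinal level codes 0..4
    y = true_means[x] + rng.normal(scale=1.0, size=n)
    return x, y

def dummies(x):
    return np.eye(n_levels)[x]                      # one indicator column per level

x_tr, y_tr = simulate(n_train)
x_te, y_te = simulate(n_test)

# Dummy coding: unrestricted level means via least squares
beta_d, *_ = np.linalg.lstsq(dummies(x_tr), y_tr, rcond=None)
mse_dummy = np.mean((y_te - dummies(x_te) @ beta_d) ** 2)

# Assigned level scores: treat the ordinal code as numeric, with an intercept
Z_tr = np.column_stack([np.ones(n_train), x_tr])
beta_s, *_ = np.linalg.lstsq(Z_tr, y_tr, rcond=None)
mse_score = np.mean((y_te - np.column_stack([np.ones(n_test), x_te]) @ beta_s) ** 2)

print(f"test MSE, dummy coding : {mse_dummy:.3f}")
print(f"test MSE, level scores : {mse_score:.3f}")
```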
|
2 |
Efficiency of weights matrix specification in the spatial error model. Kent, Cannon, 10 December 2021
This study investigates and quantifies the effect of different specifications of the spatial weights matrix (W) on estimates and inferences in the context of a regression model using the lattice perspective with polygon-type data. The study also investigates an alternative to the specification of W by estimating a spatial variance-covariance matrix based on known features of the spatial data. Previous literature has addressed the a priori construction of W and selection criteria but assumes point-type data. This study's primary contribution is the setup of a true and known benchmark that allows the comparison of the different specifications of W. This is accomplished by using a disaggregate point-type data generating process which is then aggregated into polygon-type data. Monte Carlo simulations show that current specifications of W used in maximum likelihood estimation for the spatial error model perform poorly. Additionally, the estimated spatial variance-covariance matrix outperforms the traditional specifications of W.
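To make the setting concrete, the following hedged sketch builds a row-standardised rook-contiguity weights matrix W for a regular lattice of polygons and simulates a spatial error process y = Xb + u, u = lambda*W*u + e; the lattice size, coefficients, and lambda are illustrative choices rather than the study's design.

```python
# Hedged sketch: rook-contiguity weights matrix W on a regular lattice and a
# spatial-error data-generating process y = X*beta + u, u = lambda*W*u + eps.
import numpy as np

rng = np.random.default_rng(1)
side = 10                                   # 10 x 10 lattice of polygons
n = side * side

# Binary rook-contiguity matrix, then row-standardisation
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < side and 0 <= jj < side:
                W[k, ii * side + jj] = 1.0
W = W / W.sum(axis=1, keepdims=True)

beta, lam = np.array([1.0, 0.5]), 0.6
X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = rng.normal(size=n)
u = np.linalg.solve(np.eye(n) - lam * W, eps)   # u = (I - lambda*W)^(-1) * eps
y = X @ beta + u
print(y[:5])
```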
|
3 |
Smooth transitions in macroeconomic relationships. Eliasson, Ann-Charlotte, January 1999
The purpose of this thesis is to explore the possibilities and advantages of describing macroeconomic relationships with a certain well-defined class of parametric nonlinear econometric models, called smooth transition regressions (STR). An STR model is a flexible nonlinear specification with a continuum of regimes. It is locally linear; transitions from one extreme regime to another are determined by a function of a continuous variable, the transition variable. The thesis consists of four essays, and the macroeconomic relationships that are considered are consumption, money demand and the Phillips curve. The essays of this dissertation emphasise the importance of allowing for a flexible functional form when dealing with macroeconomic relationships. / Diss. Stockholm : Handelshögsk.
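A minimal sketch of the logistic STR (LSTR1) idea follows: y_t = x_t'phi + (x_t'theta)*G(s_t; gamma, c) + eps_t, with G the logistic transition function; the model is simulated and then re-estimated by nonlinear least squares. Parameter values, the simulated transition variable, and the use of scipy are my own illustrative choices, not the thesis's estimation procedure, and the estimates can depend on starting values.

```python
# Illustrative sketch of a two-regime logistic STR model (LSTR1):
#   y_t = x_t'phi + (x_t'theta) * G(s_t; gamma, c) + eps_t,
# where G is the logistic transition function. All values are made up.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
T = 500
s = rng.normal(size=T)                       # transition variable
x = np.column_stack([np.ones(T), rng.normal(size=T)])

def G(s, gamma, c):
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

phi, theta, gamma, c = np.array([0.5, 1.0]), np.array([-1.0, 2.0]), 4.0, 0.0
y = x @ phi + (x @ theta) * G(s, gamma, c) + 0.3 * rng.normal(size=T)

def residuals(p):
    phi_, theta_, gamma_, c_ = p[:2], p[2:4], p[4], p[5]
    return y - (x @ phi_ + (x @ theta_) * G(s, gamma_, c_))

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 0.0, 0.0, 1.0, 0.0]))
print("estimated (phi, theta, gamma, c):", np.round(fit.x, 2))
```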
|
4 |
Modelling economic high-frequency time series. Lundbergh, Stefan, January 1999
Diss. Stockholm : Handelshögsk.
|
5 |
A Simulation Study On Marginalized Transition Random Effects Models For Multivariate Longitudinal Binary Data. Yalcinoz, Zerrin, 01 May 2008
In this thesis, a simulation study is conducted and a statistical model is fitted to the simulated data. The data are assumed to represent the satisfaction of customers who withdraw their salary from a particular bank. The data are longitudinal with a bivariate binary response, assumed to be collected from 200 individuals at four different time points. In such data sets, two types of dependence, the dependence within subject measurements and the dependence between responses, are important, and both are considered in the model. The model is the Marginalized Transition Random Effects Model, which has three levels. The first level measures the effect of covariates on the responses, the second level accounts for temporal changes, and the third level measures the differences between individuals. Markov Chain Monte Carlo methods are used for the model fit. In the simulation study, the deviations between the estimated values and the true parameters are examined under two conditions: when the model is correctly specified and when it is misspecified. Results suggest that better convergence is obtained with the full model. The third level, which captures the individual changes, is more sensitive to model misspecification than the other levels of the model.
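The following is a simplified simulation sketch of bivariate binary longitudinal data with a subject-level random effect and a first-order transition term, illustrating the two sources of dependence the abstract describes. It is my own construction: the dimensions follow the abstract (200 subjects, four occasions, two responses), but the parameter values are invented and the marginalized formulation and its MCMC fit are not implemented.

```python
# Simplified simulation of bivariate binary longitudinal responses with a
# subject random effect (between-individual dependence) and a first-order
# transition term (within-subject serial dependence). Illustrative values only.
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_time, n_resp = 200, 4, 2          # 200 customers, 4 occasions, 2 responses

beta = np.array([-0.5, 1.0])                # intercept and covariate effect
gamma = 1.2                                 # dependence on the previous response
sigma_b = 0.8                               # between-subject heterogeneity

x = rng.normal(size=(n_subj, n_time))       # a time-varying covariate
b = rng.normal(scale=sigma_b, size=(n_subj, n_resp))   # random effect per response

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

y = np.zeros((n_subj, n_time, n_resp), dtype=int)
for t in range(n_time):
    prev = y[:, t - 1, :] if t > 0 else np.zeros((n_subj, n_resp))
    eta = beta[0] + beta[1] * x[:, t, None] + gamma * prev + b
    y[:, t, :] = rng.binomial(1, expit(eta))

print("mean response per occasion:", y.mean(axis=(0, 2)).round(2))
```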
|
6 |
Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks. Rech, Gianluigi, January 2001
This dissertation consists of three essays. In the first essay, A Simple Variable Selection Technique for Nonlinear Models, written in cooperation with Timo Teräsvirta and Rolf Tschernig, I propose a variable selection method based on a polynomial expansion of the unknown regression function and an appropriate model selection criterion. The hypothesis of linearity is tested by a Lagrange multiplier test based on this polynomial expansion. If it is rejected, a kth-order general polynomial is used as a base for estimating all submodels by ordinary least squares. The combination of regressors leading to the lowest value of the model selection criterion is selected. The second essay, Modelling and Forecasting Economic Time Series with Single Hidden-layer Feedforward Autoregressive Artificial Neural Networks, proposes a unified framework for artificial neural network modelling. Linearity is tested and the selection of regressors performed by the methodology developed in essay I. The number of hidden units is determined by a procedure based on a sequence of Lagrange multiplier (LM) tests. Serial correlation of errors and parameter constancy are checked by LM tests as well. A Monte Carlo study, the two classical series of the lynx and the sunspots, and an application to the monthly S&P 500 index return series are used to demonstrate the performance of the overall procedure. In the third essay, Forecasting with Artificial Neural Network Models (in cooperation with Marcelo Medeiros), the methodology developed in essay II, the most popular methods for artificial neural network estimation, and the linear autoregressive model are compared by forecasting performance on 30 time series from different subject areas. Early stopping, pruning, information criterion pruning, cross-validation pruning, weight decay, and Bayesian regularization are considered. The findings are that 1) the linear models very often outperform the neural network ones and 2) the modelling approach to neural networks developed in this thesis compares well with the other neural network modelling methods considered here. / Diss. Stockholm : Handelshögskolan, 2002. Spikblad missing.
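As a toy illustration of the AR-NN class considered in the second essay, the sketch below fits a single hidden-layer feedforward network to lagged values of a simulated nonlinear autoregressive series and compares it with a linear AR model by out-of-sample MSE. scikit-learn is used purely for convenience; this is not the thesis's specification, testing, or pruning procedure, and the simulated process is an invented example.

```python
# Hedged illustration: single hidden-layer feedforward network on lagged values
# (AR-NN idea) vs. a linear AR model, compared by one-step out-of-sample MSE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
T, p = 600, 2                                  # series length and AR order

# Simulate a nonlinear AR(2) process (illustrative choice)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + 0.8 * np.tanh(y[t - 1]) + rng.normal(scale=0.5)

X = np.column_stack([y[p - 1:-1], y[p - 2:-2]])   # lags 1 and 2
target = y[p:]
split = 500
X_tr, X_te, y_tr, y_te = X[:split], X[split:], target[:split], target[split:]

linear = LinearRegression().fit(X_tr, y_tr)
nn = MLPRegressor(hidden_layer_sizes=(3,), activation="logistic",
                  max_iter=5000, random_state=0).fit(X_tr, y_tr)

for name, model in [("linear AR", linear), ("AR-NN", nn)]:
    mse = np.mean((y_te - model.predict(X_te)) ** 2)
    print(f"{name:10s} out-of-sample MSE: {mse:.3f}")
```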
|
7 |
Four Essays on Building Conditional Correlation GARCH Models. Nakatani, Tomoaki, January 2010
This thesis consists of four research papers. The main focus is on building the multivariate Conditional Correlation (CC-) GARCH models. In particular, emphasis lies on considering an extension of CC-GARCH models that allows for interactions or causality in conditional variances. In the first three chapters, misspecification testing and parameter restrictions in these models are discussed. In the final chapter, a computer package for building major variants of the CC-GARCH models is presented. The first chapter contains a brief introduction to the CC-GARCH models as well as a summary of each research paper. The second chapter proposes a misspecification test for modelling of the conditional variance part of the Extended Constant CC-GARCH model. The test is designed for testing the hypothesis of no interactions in the conditional variances. If the null hypothesis is true, then the conditional variances may be described by the standard CCC-GARCH model. The test is constructed on the Lagrange Multiplier (LM) principle, which only requires the estimation of the null model. Although the test is derived under the assumption of constant conditional correlation, the simulation experiments suggest that the test is also applicable to building CC-GARCH models with changing conditional correlations. There is no asymptotic theory available for these models, which is why simulation of the test statistic in this situation has been necessary. The third chapter provides yet another misspecification test for modelling of the conditional variance component of the CC-GARCH models, whose parameters are often estimated in two steps. The estimator obtained through these two steps is a two-stage quasi-maximum likelihood estimator (2SQMLE). Taking advantage of the asymptotic results for 2SQMLE, the test considered in this chapter is formulated using the LM principle, which requires only the estimation of univariate GARCH models. It is also shown that the test statistic may be computed by using an auxiliary regression. A robust version of the new test is available through another auxiliary regression. All of this amounts to a substantial simplification in computations compared with the test proposed in the second chapter. The simulation experiments show that, under both Gaussian and leptokurtic innovations, as well as under changing conditional correlations, the new test has reasonable size and power properties. When modelling the conditional variance, it is necessary to keep the sequence of conditional covariance matrices positive definite almost surely for any time horizon. In the fourth chapter it is demonstrated that under certain conditions some of the parameters of the model can take negative values while the conditional covariance matrix remains positive definite almost surely. It is also shown that even in the simplest first-order vector GARCH representation, the relevant parameter space can contain negative values for some parameters, which is not possible in the univariate model. This finding makes it possible to incorporate negative volatility spillovers into the CC-GARCH framework. Many new GARCH models and misspecification testing procedures have been proposed in the literature in recent years. When it comes to applying these models or tests, however, there do not seem to exist many options for the users to choose from other than creating their own computer programmes. This is especially the case when one wants to apply a multivariate GARCH model.
The last chapter of the thesis offers a remedy to this situation by providing a workable environment for building CC-GARCH models. The package is open source, freely available on the Internet, and designed for use in the open source statistical environment R. With this package the user can estimate major variants of CC-GARCH models as well as simulate data from CC-GARCH data generating processes with multivariate normal or Student's t innovations. In addition, the package is equipped with the necessary functions for conducting diagnostic tests such as those discussed in the third chapter of this thesis. / Diss. Stockholm : Handelshögskolan, 2010. Summary together with 4 papers.
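As a rough numerical illustration of the two-step idea underlying CCC-GARCH estimation (fit univariate GARCH models first, then estimate the constant conditional correlation from standardised residuals), the sketch below simulates two correlated GARCH(1,1) series and recovers the correlation. This is my own sketch in Python, not the R package described in the final chapter, and all parameter values are invented.

```python
# Minimal two-step CCC-GARCH sketch: (1) univariate GARCH(1,1) quasi-maximum
# likelihood per series, (2) constant correlation of the standardised residuals.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
T, rho_true = 2000, 0.5

# Simulate two GARCH(1,1) series with constant conditional correlation rho_true
L = np.linalg.cholesky(np.array([[1.0, rho_true], [rho_true, 1.0]]))
z = rng.normal(size=(T, 2)) @ L.T
params_true = [(0.05, 0.10, 0.85), (0.10, 0.05, 0.90)]   # (omega, alpha, beta)
r = np.zeros((T, 2))
h = np.zeros((T, 2))
for j, (w, a, b) in enumerate(params_true):
    h[0, j] = w / (1 - a - b)
    r[0, j] = np.sqrt(h[0, j]) * z[0, j]
    for t in range(1, T):
        h[t, j] = w + a * r[t - 1, j] ** 2 + b * h[t - 1, j]
        r[t, j] = np.sqrt(h[t, j]) * z[t, j]

def neg_loglik(p, x):
    w, a, b = p
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return 1e10                                       # crude parameter bounds
    h = np.empty_like(x)
    h[0] = x.var()
    for t in range(1, len(x)):
        h[t] = w + a * x[t - 1] ** 2 + b * h[t - 1]
    return 0.5 * np.sum(np.log(h) + x ** 2 / h)

std_resid = np.empty_like(r)
for j in range(2):
    fit = minimize(neg_loglik, x0=[0.1, 0.1, 0.8], args=(r[:, j],), method="Nelder-Mead")
    w, a, b = fit.x
    h_hat = np.empty(T)
    h_hat[0] = r[:, j].var()
    for t in range(1, T):
        h_hat[t] = w + a * r[t - 1, j] ** 2 + b * h_hat[t - 1]
    std_resid[:, j] = r[:, j] / np.sqrt(h_hat)

rho_hat = np.corrcoef(std_resid.T)[0, 1]
print(f"estimated constant conditional correlation: {rho_hat:.3f} (true {rho_true})")
```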
|
8 |
Four essays on the econometric modelling of volatility and durations. Amado, Cristina, January 2009
The thesis "Four Essays on the Econometric Modelling of Volatility and Durations" consists of four research papers in the area of financial econometrics on topics of the modelling of financial market volatility and the econometrics of ultra-high-frequency data. The aim of the thesis is to develop new econometric methods for modelling and hypothesis testing in these areas. The second chapter introduces a new model, the time-varying GARCH (TV-GARCH) model, in which volatility has a smooth time-varying structure of either additive or multiplicative type. To characterize smooth changes in the (un)conditional variance we assume that the parameters vary smoothly over time according to the logistic transition function. A data-based modelling technique is used for specifying the parametric structure of the TV-GARCH models. This is done by testing a sequence of hypotheses by Lagrange multiplier tests presented in the chapter. Misspecification tests are also provided for evaluating the adequacy of the estimated model. The third chapter addresses the issue of modelling deterministic changes in the unconditional variance over a long return series. The modelling strategy is illustrated with an application to the daily returns of the Dow Jones Industrial Average (DJIA) index from 1920 until 2003. The empirical results sustain the hypothesis that the assumption of constancy of the unconditional variance is not adequate over long return series and indicate that deterministic changes in the unconditional variance may be associated with macroeconomic factors. In the fourth chapter we propose an extension of the univariate multiplicative TV-GARCH model to the multivariate Conditional Correlation GARCH (CC-GARCH) framework. The variance equations are parameterized such that they combine the long-run and the short-run dynamic behaviour of the volatilities. In this framework, the long-run behaviour is described by the individual unconditional variances, and it is allowed to vary smoothly over time according to the logistic transition function. The effects of modelling the nonstationary variance component are examined empirically in several CC-GARCH models using pairs of seven daily stock return series from the S&P 500 index. The results show that the magnitude of such effect varies across different stock series and depends on the structure of the conditional correlation matrix. An important feature of financial durations is the evidence of a strong diurnal variation over the trading day. In the fifth chapter we propose a new parameterization for describing the diurnal pattern of trading activity. The parametric structure of the diurnal component allows the duration process to change smoothly over the time-of-day according to the logistic transition function. The empirical results suggest that the diurnal variation may not always have the inverted U-shaped pattern for the trade durations as documented in earlier studies.
|
9 |
Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks / Rech, Gianluigi, January 1900
Diss. Stockholm : Handelshögskolan, 2002.
|
10 |
The gravity model for international trade: Specification and estimation issues in the prevalence of zero flows. Krisztin, Tamás; Fischer, Manfred M., 14 August 2014
The gravity model for international trade is one of the most successful empirical models in the trade literature. There is a long tradition of log-linearising the multiplicative model and estimating the parameters of interest by least squares, but this practice is inappropriate for several reasons. First, bilateral trade flows are frequently zero, and disregarding countries that do not trade with each other produces biased results. Second, log-linearisation in the presence of heteroscedasticity leads, in general, to inconsistent estimates. In recent years, the Poisson gravity model, along with pseudo maximum likelihood estimation methods, has become popular as a way of dealing with the econometric issues that arise when dealing with origin-destination flows. But the standard Poisson specification is vulnerable to problems of overdispersion and excess zero flows. To overcome these problems, this paper presents zero-inflated extensions of the Poisson and negative binomial specifications as viable alternatives to both the log-linear and the standard Poisson specifications of the gravity model. The performance of the alternative model specifications is assessed on a real-world example, in which more than half of the country-level trade flows are zero. (authors' abstract) / Series: Working Papers in Regional Science
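As a hedged illustration of the estimators discussed above, the sketch below fits a Poisson (PPML) gravity equation and an intercept-only zero-inflated Poisson model to simulated bilateral flows with a large share of zeros. Country sizes, distances, coefficients, and the zero-generating mechanism are invented, and statsmodels is assumed to provide the GLM and ZeroInflatedPoisson estimators used here; this is not the paper's dataset or specification.

```python
# Hedged sketch: PPML and zero-inflated Poisson estimation of a gravity equation
# on simulated bilateral flows with many zeros (illustrative values throughout).
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(7)
n_countries = 30
gdp = np.exp(rng.normal(3.0, 1.0, n_countries))

pairs = [(i, j) for i in range(n_countries) for j in range(n_countries) if i != j]
log_gdp_o = np.array([np.log(gdp[i]) for i, _ in pairs])
log_gdp_d = np.array([np.log(gdp[j]) for _, j in pairs])
log_dist = rng.uniform(0.5, 3.0, len(pairs))

X = sm.add_constant(np.column_stack([log_gdp_o, log_gdp_d, log_dist]))
mu = np.exp(-2.0 + 0.8 * log_gdp_o + 0.8 * log_gdp_d - 1.0 * log_dist)
trade = rng.poisson(mu)
trade[rng.random(len(pairs)) < 0.4] = 0        # add excess (structural) zeros

ppml = sm.GLM(trade, X, family=sm.families.Poisson()).fit()
zip_fit = ZeroInflatedPoisson(trade, X, exog_infl=np.ones((len(pairs), 1)),
                              inflation="logit").fit(method="bfgs",
                                                     maxiter=500, disp=False)

print("share of zero flows:", round(np.mean(trade == 0), 2))
print("PPML coefficients:", np.round(ppml.params, 2))
# parameter vector below includes the zero-inflation intercept
# as well as the count-part coefficients
print("ZIP parameters:", np.round(zip_fit.params, 2))
```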
|