31

Combining structural and reduced-form models for macroeconomic forecasting and policy analysis

Monti, Francesca 08 February 2011 (has links)
Can we fruitfully use the same macroeconomic model to forecast and to perform policy analysis? There is a tension between a model's ability to forecast accurately and its ability to tell a theoretically consistent story. The aim of this dissertation is to propose ways to ease this tension by combining structural and reduced-form models, in order to have models that can effectively do both. / Doctorate in Economic and Management Sciences / info:eu-repo/semantics/nonPublished
32

Profile Monitoring with Fixed and Random Effects using Nonparametric and Semiparametric Methods

Abdel-Salam, Abdel-Salam Gomaa 20 November 2009 (has links)
Profile monitoring is a relatively new approach in quality control, best used where the process data follow a profile (or curve) at each time period. The essential idea is to model the profile via parametric, nonparametric, or semiparametric methods and then monitor the fitted profiles or the estimated random effects over time to determine whether the profiles have changed. The majority of previous studies in profile monitoring focused on parametric modeling of either linear or nonlinear profiles, with both fixed and random effects, under the assumption of correct model specification. Our work considers those cases where the parametric model for the family of profiles is unknown or at least uncertain. Consequently, we consider monitoring profiles via two techniques: a nonparametric technique and a semiparametric procedure that combines parametric and nonparametric profile fits, a procedure we refer to as model robust profile monitoring (MRPM). For mixed effects models, the MMRPM method extends MRPM by incorporating a mixed model approach into both the parametric and nonparametric model fits, accounting for the correlation within profiles and treating the collection of profiles as a random sample from a common population. For each case, we formulated two Hotelling's T² statistics, one based on the estimated random effects and one based on the fitted values, and obtained the corresponding control limits. In addition, we used two different estimators of the variance-covariance matrix: one based on the pooled sample variance-covariance matrix and one based on successive differences. 
A Monte Carlo study was performed to compare the integrated mean square errors (IMSE) and the probability of signal of the parametric, nonparametric, and semiparametric approaches. Both correlated and uncorrelated error structure scenarios were evaluated for varying amounts of model misspecification, numbers of profiles, numbers of observations per profile, shift locations, and in- and out-of-control situations. The semiparametric (MMRPM) method was competitive with, and often clearly superior to, the parametric and nonparametric methods across all levels of misspecification, in both the uncorrelated and correlated scenarios. For a correctly specified model, the IMSE and the simulated probability of signal for the parametric and MMRPM methods were identical (or nearly so). For the severe model misspecification case, the nonparametric and MMRPM methods were identical (or nearly so). For the mild model misspecification case, the MMRPM method was superior to the parametric and nonparametric methods. This simulation therefore supports the claim that the MMRPM method is robust to model misspecification. In addition, the MMRPM method performed better for data sets with a correlated error structure. The performance of the nonparametric and MMRPM methods also improved as the number of observations per profile increased, since more observations over the same range of X generally enable more knots to be used by the penalized spline method, resulting in greater flexibility and improved fits in the nonparametric curves and, consequently, the semiparametric curves. The parametric, nonparametric, and semiparametric approaches were used to fit the relationship between the torque produced by an engine and engine speed in the automotive industry. We then used a Hotelling's T² statistic based on the estimated random effects to conduct Phase I studies to identify outlying profiles. The parametric, nonparametric, and semiparametric methods all showed that the process was stable. 
Although all three methods reach the same conclusion regarding the in-control status of each profile, the nonparametric and MMRPM results provide a better description of the actual behavior of each profile. Thus, the nonparametric and MMRPM methods give the user greater ability to properly interpret the true relationship between engine speed and torque for this type of engine, and an increased likelihood of detecting unusual engines in future production. Finally, we conclude that the nonparametric and semiparametric approaches performed better than the parametric approach when the user's model is misspecified. The case study demonstrates that the proposed nonparametric and semiparametric methods are more efficient, flexible, and robust to model misspecification for Phase I profile monitoring in a practical application. Thus, our methods are robust to the common problem of model misspecification. We also found that both the nonparametric and the semiparametric methods result in charts with good ability to detect changes in Phase I data and with easily calculated control limits. The proposed methods provide greater flexibility and efficiency than current parametric Phase I profile monitoring methods that rely on correct model specification, an unrealistic assumption in many practical industrial applications. / Ph. D.
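As a rough illustration of the Phase I idea described above, the sketch below computes a Hotelling's T² statistic from the fitted values of simulated profiles. A simple moving-average smoother stands in for the penalized spline used in the thesis, and the pooled sample variance-covariance estimator is used; all data and settings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate m profiles, each observed at n points on a common grid.
m, n = 20, 15
x = np.linspace(0.0, 1.0, n)
profiles = np.array([2.0 + 1.5 * x + rng.normal(0.0, 0.2, n) for _ in range(m)])

def smooth(y, w=3):
    """Moving-average smoother (a stand-in for a penalized spline fit)."""
    pad = np.pad(y, w, mode="edge")
    kernel = np.ones(2 * w + 1) / (2 * w + 1)
    return np.convolve(pad, kernel, mode="valid")

fits = np.array([smooth(y) for y in profiles])

# Hotelling's T^2 based on the fitted values, with the pooled sample
# variance-covariance estimator (pinv guards against rank deficiency).
ybar = fits.mean(axis=0)
S = np.cov(fits, rowvar=False)
Sinv = np.linalg.pinv(S)
T2 = np.array([(f - ybar) @ Sinv @ (f - ybar) for f in fits])
print(T2.round(2))  # one statistic per profile; large values flag outliers
```

In a Phase I study, each T2 value would be compared against a control limit to decide which profiles are outlying.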
33

Five contributions to econometric theory and the econometrics of ultra-high-frequency data

Meitz, Mika January 2006 (has links)
No description available.
34

Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks

Rech, Gianluigi January 2001 (has links)
This dissertation consists of three essays. In the first essay, A Simple Variable Selection Technique for Nonlinear Models, written in cooperation with Timo Teräsvirta and Rolf Tschernig, I propose a variable selection method based on a polynomial expansion of the unknown regression function and an appropriate model selection criterion. The hypothesis of linearity is tested by a Lagrange multiplier test based on this polynomial expansion. If linearity is rejected, a kth-order general polynomial is used as a base for estimating all submodels by ordinary least squares, and the combination of regressors leading to the lowest value of the model selection criterion is selected. The second essay, Modelling and Forecasting Economic Time Series with Single Hidden-layer Feedforward Autoregressive Artificial Neural Networks, proposes a unified framework for artificial neural network modelling. Linearity is tested and the selection of regressors performed by the methodology developed in essay I. The number of hidden units is determined by a procedure based on a sequence of Lagrange multiplier (LM) tests; serial correlation of the errors and parameter constancy are checked by LM tests as well. A Monte Carlo study, the two classical series of the lynx and the sunspots, and an application to the monthly S&P 500 index return series are used to demonstrate the performance of the overall procedure. In the third essay, Forecasting with Artificial Neural Network Models (in cooperation with Marcelo Medeiros), the methodology developed in essay II, the most popular methods for artificial neural network estimation, and the linear autoregressive model are compared by forecasting performance on 30 time series from different subject areas. Early stopping, pruning, information criterion pruning, cross-validation pruning, weight decay, and Bayesian regularization are considered. 
The findings are that 1) the linear models very often outperform the neural network ones, and 2) the modelling approach to neural networks developed in this thesis compares well with the other neural network modelling methods considered here. / Diss. Stockholm : Handelshögskolan, 2002. Spikblad saknas
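The variable selection idea of essay I — expand the unknown regression function in a polynomial and keep the regressor combination that minimizes a model selection criterion — can be sketched as follows. The data are simulated, and BIC is used as the criterion purely as an assumption, since the abstract does not name one.

```python
import numpy as np
from itertools import combinations, combinations_with_replacement

rng = np.random.default_rng(1)

# Hypothetical data: y depends nonlinearly on the first regressor only;
# the second regressor is irrelevant noise.
T = 400
X = rng.normal(size=(T, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 0] ** 2 + rng.normal(0.0, 0.3, T)

def poly_design(Z, k=3):
    """All monomials of the columns of Z up to order k, plus a constant."""
    cols = [np.ones(len(Z))]
    for order in range(1, k + 1):
        for idx in combinations_with_replacement(range(Z.shape[1]), order):
            cols.append(np.prod(Z[:, idx], axis=1))
    return np.column_stack(cols)

def bic(subset):
    """BIC of an OLS fit of y on the polynomial expansion of the subset."""
    D = poly_design(X[:, subset])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    rss = np.sum((y - D @ beta) ** 2)
    return T * np.log(rss / T) + D.shape[1] * np.log(T)

subsets = [s for r in (1, 2) for s in combinations(range(2), r)]
best = min(subsets, key=bic)
print(best)  # the criterion should keep only the first regressor
```

The essay precedes this step with an LM test of linearity on the same expansion; here only the selection step is sketched.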
35

Parent Involvement in Children's Schooling: An Investigation of Measurement Equivalence across Ethnic Groups

Scott, Heather Marie 01 January 2011 (has links)
Epstein et al.'s Theory of Overlapping Spheres of Influence focuses on the interaction and communication, or partnerships, among families, schools, and the community to bring the three closer together. The theory works in conjunction with Epstein's typology of parental involvement, which identifies six types of involvement that are instrumental to a child's development and his or her school and educational success. These serve as the framework for the study and support the construct of parents' involvement in their children's schooling. The purpose of the current study was to conduct further validation analyses of an inventory designed to measure this construct by investigating measurement invariance, that is, whether the measurement properties of the inventory varied by race/ethnicity. The study compared the responses of 126 Hispanic parents/guardians with those of 116 White/non-Hispanic parents/guardians to investigate whether these two groups were interpreting the items on the inventory in the same manner. The inventory was administered to a sample of parents/guardians of children in grades 3 through 5 in a local school district. Findings indicated that the measurement model was misspecified for both the White/non-Hispanic group and the Hispanic group, so further measurement invariance testing was not conducted. Exploratory factor analyses were conducted to investigate which models would best fit the data for both groups. Feedback was also obtained from parents/guardians about the clarity of the inventory, which revealed confusion with the response scale and the wording of particular items. In addition, parents/guardians identified aspects of parent involvement that they found important but missing from the inventory. Results from the psychometric analyses and the qualitative feedback indicated that the inventory requires modification and further psychometric investigation. 
Caution should therefore be exercised by anyone considering using the inventory. Results of the study were interpreted in terms of contributions to the parent involvement literature, as well as recommendations for the improvement of the inventory.
36

Four Essays on Building Conditional Correlation GARCH Models.

Nakatani, Tomoaki January 2010 (has links)
This thesis consists of four research papers. The main focus is on building multivariate Conditional Correlation (CC-) GARCH models. In particular, emphasis lies on an extension of CC-GARCH models that allows for interactions, or causality, in the conditional variances. In the first three chapters, misspecification testing and parameter restrictions in these models are discussed. In the final chapter, a computer package for building the major variants of CC-GARCH models is presented. The first chapter contains a brief introduction to CC-GARCH models as well as a summary of each research paper. The second chapter proposes a misspecification test for the conditional variance part of the Extended Constant CC-GARCH model. The test is designed for testing the hypothesis of no interactions in the conditional variances; if the null hypothesis is true, the conditional variances may be described by the standard CCC-GARCH model. The test is constructed on the Lagrange Multiplier (LM) principle, which only requires estimation of the null model. Although the test is derived under the assumption of constant conditional correlations, the simulation experiments suggest that it is also applicable when building CC-GARCH models with changing conditional correlations. No asymptotic theory is available for these models, which is why simulating the test statistic in this situation has been necessary. The third chapter provides another misspecification test for the conditional variance component of CC-GARCH models, whose parameters are often estimated in two steps. The estimator obtained through these two steps is a two-stage quasi-maximum likelihood estimator (2SQMLE). Taking advantage of the asymptotic results for the 2SQMLE, the test considered in this chapter is formulated using the LM principle, which requires only the estimation of univariate GARCH models. 
It is also shown that the test statistic may be computed by means of an auxiliary regression, and a robust version of the new test is available through another auxiliary regression. This amounts to a substantial simplification in computations compared with the test proposed in the second chapter. The simulation experiments show that, under both Gaussian and leptokurtic innovations, as well as under changing conditional correlations, the new test has reasonable size and power properties. When modelling the conditional variance, it is necessary to keep the sequence of conditional covariance matrices positive definite almost surely for any time horizon. In the fourth chapter it is demonstrated that, under certain conditions, some of the parameters of the model can take negative values while the conditional covariance matrix remains positive definite almost surely. It is also shown that even in the simplest first-order vector GARCH representation, the relevant parameter space can contain negative values for some parameters, which is not possible in the univariate model. This finding makes it possible to incorporate negative volatility spillovers into the CC-GARCH framework. Many new GARCH models and misspecification testing procedures have recently been proposed in the literature. When it comes to applying these models or tests, however, users have few options other than writing their own computer programmes. This is especially the case when one wants to apply a multivariate GARCH model. The last chapter of the thesis offers a remedy to this situation by providing a workable environment for building CC-GARCH models. The package is open source, freely available on the Internet, and designed for use in the open source statistical environment R. 
With this package one can estimate the major variants of CC-GARCH models as well as simulate data from CC-GARCH data generating processes with multivariate normal or Student's t innovations. In addition, the package is equipped with the necessary functions for conducting diagnostic tests such as those discussed in the third chapter of this thesis. / Diss. Stockholm : Handelshögskolan, 2010. Sammanfattning jämte 4 uppsatser.
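The spillover idea from the fourth chapter can be illustrated with a small bivariate extended CCC-GARCH simulation: the off-diagonal entries of the ARCH matrix carry volatility spillovers (one of them negative here), and the conditional covariance matrix stays positive definite at every step. All parameter values below are invented for illustration and have not been checked against the chapter's exact conditions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Bivariate extended CCC-GARCH: h_t = omega + A * eps_{t-1}^2 + B * h_{t-1},
# H_t = D_t R D_t with D_t = diag(sqrt(h_t)) and constant correlation R.
omega = np.array([0.05, 0.05])
A = np.array([[0.08, -0.01],     # A[0,1] is a (negative) spillover term
              [0.02,  0.10]])
B = np.diag([0.85, 0.80])
R = np.array([[1.0, 0.4],
              [0.4, 1.0]])

T = 500
h = omega / (1.0 - np.diag(A) - np.diag(B))   # rough starting variances
eps = np.zeros(2)
L = np.linalg.cholesky(R)                     # to draw correlated innovations
returns = np.empty((T, 2))
for t in range(T):
    h = omega + A @ eps**2 + B @ h            # conditional variances
    D = np.diag(np.sqrt(h))
    H = D @ R @ D                             # conditional covariance matrix
    assert np.all(np.linalg.eigvalsh(H) > 0)  # remains positive definite
    eps = D @ (L @ rng.normal(size=2))
    returns[t] = eps
print(returns.std(axis=0))
```

The in-loop assertion makes the chapter's point concrete for this run: a negative ARCH coefficient need not destroy positive definiteness of the conditional covariance sequence.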
37

Four essays on the econometric modelling of volatility and durations

Amado, Cristina January 2009 (has links)
The thesis "Four Essays on the Econometric Modelling of Volatility and Durations" consists of four research papers in the area of financial econometrics on topics of the modelling of financial market volatility and the econometrics of ultra-high-frequency data. The aim of the thesis is to develop new econometric methods for modelling and hypothesis testing in these areas. The second chapter introduces a new model, the time-varying GARCH (TV-GARCH) model, in which volatility has a smooth time-varying structure of either additive or multiplicative type. To characterize smooth changes in the (un)conditional variance we assume that the parameters vary smoothly over time according to the logistic transition function. A data-based modelling technique is used for specifying the parametric structure of the TV-GARCH models. This is done by testing a sequence of hypotheses by Lagrange multiplier tests presented in the chapter. Misspecification tests are also provided for evaluating the adequacy of the estimated model. The third chapter addresses the issue of modelling deterministic changes in the unconditional variance over a long return series. The modelling strategy is illustrated with an application to the daily returns of the Dow Jones Industrial Average (DJIA) index from 1920 until 2003. The empirical results sustain the hypothesis that the assumption of constancy of the unconditional variance is not adequate over long return series and indicate that deterministic changes in the unconditional variance may be associated with macroeconomic factors. In the fourth chapter we propose an extension of the univariate multiplicative TV-GARCH model to the multivariate Conditional Correlation GARCH (CC-GARCH) framework. The variance equations are parameterized such that they combine the long-run and the short-run dynamic behaviour of the volatilities. 
In this framework, the long-run behaviour is described by the individual unconditional variances, which are allowed to vary smoothly over time according to the logistic transition function. The effects of modelling the nonstationary variance component are examined empirically in several CC-GARCH models using pairs of seven daily stock return series from the S&P 500 index. The results show that the magnitude of this effect varies across stock series and depends on the structure of the conditional correlation matrix. An important feature of financial durations is the evidence of strong diurnal variation over the trading day. In the fifth chapter we propose a new parameterization for describing the diurnal pattern of trading activity. The parametric structure of the diurnal component allows the duration process to change smoothly over the time of day according to the logistic transition function. The empirical results suggest that the diurnal variation may not always have the inverted U-shaped pattern for trade durations documented in earlier studies.
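A minimal sketch of the multiplicative structure described above — a stationary GARCH(1,1) component scaled by a deterministic variance component that moves between two levels through a logistic transition — might look like this. All numbers are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def logistic(s, gamma=20.0, c=0.5):
    """Logistic transition function in rescaled time s = t/T."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

T = 1000
s = np.arange(T) / T
g = 1.0 + 2.0 * logistic(s)            # long-run variance rises from ~1 to ~3

omega, alpha, beta = 0.05, 0.10, 0.85  # stationary GARCH(1,1) component
h = np.empty(T)
r = np.empty(T)
h[0] = omega / (1 - alpha - beta)
r[0] = np.sqrt(h[0] * g[0]) * rng.normal()
for t in range(1, T):
    # the GARCH recursion is driven by returns rescaled by the deterministic part
    h[t] = omega + alpha * (r[t - 1] ** 2 / g[t - 1]) + beta * h[t - 1]
    r[t] = np.sqrt(h[t] * g[t]) * rng.normal()

print(r[:500].var(), r[500:].var())  # second half is visibly more volatile
```

The rescaling by g in the recursion keeps the GARCH component stationary while the unconditional variance of the returns drifts with the transition function.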
38

Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks /

Rech, Gianluigi, January 1900 (has links)
Diss. Stockholm : Handelshögskolan, 2002.
39

Revisiting the CAPM and the Fama-French Multi-Factor Models: Modeling Volatility Dynamics in Financial Markets

Michaelides, Michael 25 April 2017 (has links)
The primary objective of this dissertation is to revisit the CAPM and the Fama-French multi-factor models with a view to evaluating the validity of the probabilistic assumptions imposed (directly or indirectly) on the particular data used. Thorough testing of the assumptions underlying these models reveals several departures, and the original linear regression models are respecified. The respecification results in a family of heterogeneous Student's t models, which are shown to account for all the statistical regularities in the data. This family of models provides an appropriate basis for revisiting the empirical adequacy of the CAPM and the Fama-French multi-factor models, as well as other models, such as alternative asset pricing models and risk evaluation models. Along the lines of providing a sound basis for reliable inference, the respecified models can serve as a coherent basis for selecting the relevant factors from the set of possible ones, which contributes to the substantive adequacy of the CAPM and the multi-factor models. / Ph. D.
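As a toy illustration of the kind of assumption checking described above, the sketch below runs a CAPM-style regression on simulated data with heavy-tailed errors and computes the sample kurtosis of the residuals; a value well above 3 is exactly the sort of departure from normality that motivates a Student's t respecification. The data and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated CAPM-style data: excess asset return on excess market return,
# with heavy-tailed (Student's t) errors.
T = 1000
mkt = rng.normal(0.5, 2.0, T)            # excess market return
eps = rng.standard_t(5, T)               # heavy-tailed regression errors
r_excess = 0.1 + 1.2 * mkt + eps         # true beta is 1.2

# OLS fit of the CAPM regression.
X = np.column_stack([np.ones(T), mkt])
beta, *_ = np.linalg.lstsq(X, r_excess, rcond=None)
resid = r_excess - X @ beta

# Sample kurtosis of the residuals; under normality it should be near 3.
kurt = np.mean(resid**4) / np.mean(resid**2) ** 2
print(beta, kurt)  # kurtosis well above 3 flags non-normal tails
```

In the dissertation's terms, such a diagnostic is one symptom of a misspecified probabilistic model; the response is to respecify the distributional assumption rather than to ignore it.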
40

Estimation and misspecification risks in VaR estimation

Telmoudi, Fedya 19 December 2014 (has links)
In this thesis, we study the problem of conditional Value at Risk (VaR) estimation, taking into account estimation risk and model risk. First, we consider a two-step method for VaR estimation. The first step estimates the volatility parameter using a generalized quasi-maximum likelihood estimator (gQMLE) based on an instrumental density h. The second step estimates a quantile of the innovations from the empirical quantile of the residuals obtained in the first step. We give conditions under which the two-step estimator of the VaR is consistent and asymptotically normal. We also compare the efficiencies of the estimators for various instrumental densities h. When the innovations do not have density h, the first step usually gives a biased estimator of the volatility parameter and the second step gives a biased estimator of the quantile of the innovations. However, we show that the two errors counterbalance each other to give a consistent estimate of the VaR. We then focus on VaR estimation within the framework of GARCH models, using the gQMLE based on a class of instrumental densities called double generalized gamma, which contains the Gaussian distribution. Our goal is to compare the performance of the Gaussian QMLE against that of the gQMLE. The choice of the optimal estimator depends essentially on the parameter d that minimizes the asymptotic variance, and we test whether this parameter is equal to 2. When the test is applied to real series of financial returns, the hypothesis stating the optimality of the Gaussian QMLE is generally rejected. 
Finally, we consider non-parametric machine learning methods for VaR estimation. These methods are designed to mitigate model risk because they do not rely on a specific form of the volatility. We use support vector regression (SVR) based on the least squares (LS) loss function. To improve the solution of the LS-SVR model, we use the weighted LS-SVR and fixed-size LS-SVR models. Numerical illustrations highlight the contribution of the proposed models for VaR estimation taking into account specification and estimation risks.
