1 |
Minimax D-optimal designs for regression models with heteroscedastic errors
Yzenbrandt, Kai 20 April 2021 (has links)
Minimax D-optimal designs for regression models with heteroscedastic errors are studied and constructed. These designs are robust against possible misspecification of the error variance in the model. We propose a flexible assumption for the error variance and use a minimax approach to define robust designs. As is often the case, robust designs are hard to find analytically, since the associated design problem is not a convex optimization problem. However, the objective function of the minimax D-optimal design problem is a difference of two convex functions. An effective algorithm is developed to compute minimax D-optimal designs under both the least squares estimator and the generalized least squares estimator. The algorithm can be applied to construct minimax D-optimal designs for any linear or nonlinear regression model with heteroscedastic errors. In addition, several theoretical results are obtained for the minimax D-optimal designs. / Graduate
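To make the D-optimality criterion concrete, here is a minimal sketch (not the thesis algorithm) that evaluates the log-determinant of the information matrix for a simple linear model under an assumed heteroscedastic variance function. The variance function exp(x) and the two candidate designs are illustrative assumptions.

```python
import numpy as np

def d_criterion(points, weights, var_fn):
    """Log-determinant of the information matrix for the simple
    linear model f(x) = (1, x)' with heteroscedastic error variance
    var_fn(x); larger values are better for D-optimality.
    (Illustrative sketch, not the thesis algorithm.)"""
    F = np.column_stack([np.ones_like(points), points])
    lam = weights / var_fn(points)        # design weights scaled by 1/variance
    M = F.T @ (F * lam[:, None])          # Fisher information matrix
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

var_fn = lambda x: np.exp(x)              # assumed variance function

# A design putting equal mass on the interval endpoints is far better
# than one clustering its support points close together.
endpoints = d_criterion(np.array([-1.0, 1.0]), np.array([0.5, 0.5]), var_fn)
clustered = d_criterion(np.array([0.0, 0.1]), np.array([0.5, 0.5]), var_fn)
```

A minimax design would additionally maximize the worst-case value of this criterion over a class of admissible variance functions, which is what makes the problem a difference of convex functions rather than a convex one.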
|
2 |
Feasible Generalized Least Squares: theory and applications
González Coya Sandoval, Emilio 04 June 2024 (has links)
We study the Feasible Generalized Least-Squares (FGLS) estimation of the parameters of a linear regression model in which the errors are allowed to exhibit heteroskedasticity of unknown form and to be serially correlated. The main contribution is twofold: first, we aim to demystify the reasons often advanced for using OLS instead of FGLS by showing that the latter estimator is robust, more efficient, and more precise. Second, we devise consistent FGLS procedures, robust to misspecification, which achieve a lower mean squared error (MSE), often close to that of the correctly specified infeasible GLS.
In the first chapter we restrict our attention to the case of independent heteroskedastic errors. We suggest a Lasso-based procedure to estimate the skedastic function of the residuals. This estimate is then used to construct a FGLS estimator. Using extensive Monte Carlo simulations, we show that this Lasso-based FGLS procedure has better finite-sample properties than OLS and other linear regression-based FGLS estimates. Moreover, the FGLS-Lasso estimate is robust to misspecification of both the functional form and the variables characterizing the skedastic function.
The second chapter generalizes our investigation to the case of serially correlated errors. There are three main contributions: first, we show that GLS is consistent requiring only pre-determined regressors, whereas OLS requires exogenous regressors to be consistent. The second contribution is to show that GLS is much more robust than OLS; even a misspecified GLS correction can achieve a lower MSE than OLS. The third contribution is to devise a FGLS procedure valid whether or not the regressors are exogenous, which achieves an MSE close to that of the correctly specified infeasible GLS. Extensive Monte Carlo experiments are conducted to assess the performance of our FGLS procedure against OLS in finite samples. FGLS achieves important reductions in MSE and variance relative to OLS.
In the third chapter we consider an empirical application: we re-examine the Uncovered Interest Parity (UIP) hypothesis, which states that the expected rate of return to speculation in the forward foreign exchange market is zero. We extend the FGLS procedure to a setting in which lagged dependent variables are included as regressors. We thus provide a consistent and efficient framework to estimate the parameters of a general k-step-ahead linear forecasting equation. Finally, we apply our FGLS procedures to the analysis of the two main specifications used to test the UIP.
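The generic two-step FGLS idea described above can be sketched on simulated data. The abstract's actual procedure uses a Lasso-based estimate of the skedastic function; here, as an assumption for self-containment, plain least squares on a log-linear skedastic form is used instead:

```python
import numpy as np

# Simulated linear model with heteroskedastic errors (illustrative)
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
sigma = np.exp(0.5 * x)                      # assumed skedastic function
y = X @ np.array([1.0, 2.0]) + sigma * rng.standard_normal(n)

# Step 1: OLS residuals
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols

# Step 2: estimate the skedastic function by regressing
# log squared residuals on the regressors (log-linear form assumed)
gamma, *_ = np.linalg.lstsq(X, np.log(resid**2 + 1e-12), rcond=None)
var_hat = np.exp(X @ gamma)                  # fitted error variances

# Step 3: FGLS = weighted least squares with weights 1/var_hat
W = 1.0 / np.sqrt(var_hat)
beta_fgls, *_ = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)
```

Replacing the step-2 regression with a Lasso fit over a richer set of candidate variables gives the flavor of the FGLS-Lasso procedure, which is robust to not knowing which variables drive the variance.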
|
3 |
Modeling financial volatility : A functional approach with applications to Swedish limit order book data
Elezovic, Suad January 2009 (has links)
This thesis is designed to offer an approach to modeling volatility in the Swedish limit order market. Realized quadratic variation is used as an estimator of the integrated variance, which is a measure of the variability of a stochastic process in continuous time. Moreover, a functional time series model for the realized quadratic variation is introduced. A two-step estimation procedure for such a model is then proposed. Some properties of the proposed two-step estimator are discussed and illustrated through an application to high-frequency financial data and simulated experiments. In Paper I, the concept of realized quadratic variation, obtained from the bid and ask curves, is presented. In particular, an application to the Swedish limit order book data is performed using signature plots to determine an optimal sampling frequency for the computations. The paper is the first study that introduces realized quadratic variation in a functional context. Paper II introduces functional time series models and applies them to the modeling of volatility in the Swedish limit order book. More precisely, a functional approach to the estimation of volatility dynamics of the spreads (differences between the bid and ask prices) is presented through a case study. For that purpose, a two-step procedure for the estimation of functional linear models is adapted to the estimation of a functional dynamic time series model. Paper III studies a two-step estimation procedure for the functional models introduced in Paper II.
For that purpose, data is simulated using the Heston stochastic volatility model, thereby obtaining time series of realized quadratic variations as functions of relative quantities of shares. In the first step, a dynamic time series model is fitted to each time series. This results in a set of inefficient raw estimates of the coefficient functions. In the second step, the raw estimates are smoothed. The second step improves on the first step since it yields both smooth and more efficient estimates. In this simulation, the smooth estimates are shown to perform better in terms of mean squared error. Paper IV introduces an alternative to the two-step estimation procedure mentioned above. This is achieved by taking into account the correlation structure of the error terms obtained in the first step. The proposed estimator is based on a seemingly unrelated regression representation. Then, a multivariate generalized least squares estimator is used in a first step and its smooth version in a second step. Some of the asymptotic properties of the resulting two-step procedure are discussed. The new procedure is illustrated with functional high-frequency financial data.
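The realized quadratic variation at the core of the thesis can be sketched in a few lines. The simulated geometric Brownian motion below is an illustrative assumption, not the limit order book data:

```python
import numpy as np

def realized_qv(prices):
    """Realized quadratic variation (sketch): the sum of squared
    intraday log-returns, an estimator of the integrated variance
    of the log-price process."""
    r = np.diff(np.log(prices))
    return float(np.sum(r**2))

# Simulated geometric Brownian motion with 20% volatility over one period
rng = np.random.default_rng(2)
n, sigma = 50_000, 0.2
increments = sigma * np.sqrt(1.0 / n) * rng.standard_normal(n)
prices = 100.0 * np.exp(np.concatenate([[0.0], np.cumsum(increments)]))

rqv = realized_qv(prices)   # close to the integrated variance sigma**2 = 0.04
```

As the sampling frequency grows, the estimator concentrates around the integrated variance; the signature plots mentioned in Paper I are used precisely to pick a frequency where microstructure noise does not yet dominate.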
|
4 |
[en] INSTITUTIONS AND MONETARY POLICY: A CROSS-COUNTRY EMPIRICAL ANALYSIS / [pt] INSTITUIÇÕES E POLÍTICA MONETÁRIA: UMA ANÁLISE EMPÍRICA DE UM CROSS-SECTION DE PAÍSES
GUSTAVO AMORAS SOUZA LIMA 06 March 2018 (has links)
[pt] Esse trabalho busca verificar se há relação entre a política monetária conduzida por um grupo de países e as suas instituições, especialmente aquelas ligadas ao setor público. A partir da estimação de uma regra de
política monetária comum para um grupo de países, regredimos coeficientes de reação das autoridades monetárias a desvios da inflação da meta e do hiato da atividade em métricas de instituições. Encontramos relações significativas entre a condução de política monetária e as instituições dos países, bem como potenciais determinantes das instituições, em vários casos. / [en] This paper seeks to verify if there is a relationship between the monetary policy conducted by a group of countries and their institutions, especially those related to the public sector. From the estimation of a common
monetary policy rule for a group of countries, we regress the reaction coefficients of the monetary authorities to deviations of inflation from its target and to the activity gap on institutional metrics. We find significant relationships between the conduct of monetary policy and countries' institutions, as well as potential determinants of those institutions, in several cases.
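The first stage of this empirical strategy, estimating a monetary policy reaction function, can be sketched with an illustrative Taylor-type rule. The coefficient values and simulated data below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Sketch of a Taylor-type policy rule estimated by OLS:
#   rate_t = a + b * (inflation gap)_t + c * (activity gap)_t + e_t
# The "true" coefficients (2.0, 1.5, 0.5) are illustrative only.
rng = np.random.default_rng(3)
n = 200
infl_gap = rng.standard_normal(n)
act_gap = rng.standard_normal(n)
rate = 2.0 + 1.5 * infl_gap + 0.5 * act_gap + rng.normal(scale=0.2, size=n)

X = np.column_stack([np.ones(n), infl_gap, act_gap])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)   # estimated reaction coefficients
```

In a second stage, reaction coefficients estimated country by country would be regressed on institutional metrics, which is the cross-country comparison the abstract describes.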
|
5 |
Modelo fatorial com cargas funcionais para séries temporais / Factor model with functional loadings for time series
Salazar, Duvan Humberto Cataño 12 March 2018 (has links)
No contexto dos modelos fatoriais existem diferentes metodologias para abordar a modelagem de séries temporais multivariadas que exibem uma estrutura não estacionária de segunda ordem, co-movimentos e transições no tempo. Modelos com mudanças estruturais abruptas e restrições rigorosas (muitas vezes irreais) nas cargas fatoriais, quando elas são funções determinísticas no tempo, foram propostos na literatura para lidar com séries multivariadas que possuem essas características. Neste trabalho, apresentamos um modelo fatorial com cargas variando continuamente no tempo para modelar séries temporais não estacionárias e um procedimento para sua estimação que consiste em dois estágios. No primeiro, os fatores latentes são estimados empregando os componentes principais das séries observadas. Em um segundo estágio, tratamos estes componentes principais como co-variáveis e as cargas funcionais são estimadas através de funções de ondaletas e mínimos quadrados generalizados. Propriedades assintóticas dos estimadores de componentes principais e de mínimos quadrados dos coeficientes de ondaletas são apresentados. O desempenho da metodologia é ilustrado através de estudos de simulação. Uma aplicação do modelo proposto no mercado spot de energia do Nord Pool é apresentado. / In the context of factor models, there are different methodologies for modeling multivariate time series that exhibit a second-order non-stationary structure, co-movements and transitions over time. Models with abrupt structural changes and strict (often unrealistic) restrictions on the factor loadings, when they are deterministic functions of time, have been proposed in the literature to deal with multivariate series that have these characteristics. In this work, we present a factor model with continuously time-varying loadings for modeling non-stationary time series, together with a two-stage procedure for its estimation. First, the latent factors are estimated using the principal components of the observed series. Second, we treat the principal components obtained in the first stage as covariates, and the functional loadings are estimated by wavelet functions and generalized least squares. Asymptotic properties of the principal components estimators and of the least squares estimators of the wavelet coefficients are presented. The performance of the methodology is illustrated by simulation studies. An application of the proposed model to the Nord Pool energy spot market is presented.
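The first stage of the two-stage procedure, estimating latent factors by principal components, can be sketched as follows. This is an illustrative SVD-based implementation on simulated data, not the authors' code:

```python
import numpy as np

def pca_factors(Y, r):
    """Stage one of the two-stage procedure (sketch): estimate r
    latent factors from a T x N panel Y by principal components,
    computed via the SVD of the column-centered data."""
    Yc = Y - Y.mean(axis=0)
    U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
    factors = U[:, :r] * S[:r]          # T x r estimated factors
    loadings = Vt[:r].T                 # N x r estimated loadings
    return factors, loadings

# Simulated low-rank panel: two factors plus small noise (illustrative)
rng = np.random.default_rng(4)
T, N, r = 100, 20, 2
F_true = rng.standard_normal((T, r))
L_true = rng.standard_normal((N, r))
Y = F_true @ L_true.T + 0.01 * rng.standard_normal((T, N))

factors, loadings = pca_factors(Y, r)
```

In the second stage, these estimated factors would serve as covariates in a regression whose coefficient functions are expanded in a wavelet basis and fitted by generalized least squares.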
|
7 |
The Evolution of Life History Traits and Their Thermal Plasticity in Daphnia
Bowman, Larry L., Jr., Post, David M. 06 January 2023 (has links) (PDF)
Few studies have explored the relative strength of ecogeographic versus lineage-specific effects on a global scale, particularly for poikilotherms, organisms whose internal temperature varies with their environment. Here, we compile a global dataset of life history traits in Daphnia, at the species- and population-level, and use those data to parse the relative influences of lineage-specific effects and climate. We also compare the thermal response (plasticity) of life history traits and its dependence on climate, temperature, precipitation, and latitude. We found that the mode of evolution of life history traits varies, but that the thermal response of life history traits most often follows a random-walk model of evolution. We conclude that life history trait evolution in Daphnia is not strongly species-specific but is ecogeographically distinct, suggesting that life history evolution should be understood at the population level for Daphnia and possibly for other poikilotherms.
|
8 |
Unit root, outliers and cointegration analysis with macroeconomic applications
Rodríguez, Gabriel 10 1900 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal. / In this thesis, we deal with three particular issues in the literature on nonstationary time series. The first essay deals with various unit root tests in the context of structural change. The second studies residual-based tests for identifying cointegration. The third analyzes tests for identifying additive outliers in nonstationary time series. The first paper analyzes the hypothesis that some time series can be characterized as stationary with a broken trend. We extend the class of M-tests and the ADF test for a unit root to the case where a change in the trend function is allowed to occur at an unknown time. These tests (MGLS, ADFGLS) adopt the Generalized Least Squares (GLS) detrending approach to eliminate the set of deterministic components present in the model. We consider two models from the structural change literature: the first allows for a change in slope, and the other for a change in both slope and intercept. We derive the asymptotic distribution of the tests as well as that of the feasible point optimal test (PTGLS), which allows us to find the power envelope. The asymptotic critical values of the tests are tabulated, and we compute the non-centrality parameter used for the local GLS detrending that permits the tests to have 50% asymptotic power at that value. Two methods to select the break point are analyzed. The first estimates the break point that yields the minimal value of the statistic; the second selects the break point such that the absolute value of the t-statistic on the change in slope is maximized. We show that the MGLS and PTGLS tests have an asymptotic power function close to the power envelope.
An extensive simulation study analyzes the size and power of the tests in finite samples under various methods to select the truncation lag for the autoregressive spectral density estimator. In an empirical application, we consider two U.S. macroeconomic annual series widely used in the unit root literature: real wages and common stock prices. Our results suggest a rejection of the unit root hypothesis; in other words, these series can be considered trend stationary with a broken trend. Given that the GLS detrending approach yields gains in the power of unit root tests, a natural extension is to apply it to residual-based tests for cointegration. This is the objective of the second paper of the thesis. We propose residual-based tests for cointegration that use local GLS detrending to eliminate separately the deterministic components in the series. We consider two cases: one where only a constant is included, and one where a constant and a time trend are included. The limiting distributions of various residual-based tests are derived for a general quasi-differencing parameter, and critical values are tabulated for c = 0, irrespective of the nature of the deterministic components, as well as for other values proposed in the unit root literature. Simulations show that GLS detrending yields tests with higher power; moreover, using c = -7.0 or c = -13.5 as the quasi-differencing parameter, for the two cases analyzed respectively, is preferable. The third paper extends a recently proposed method to detect outliers that explicitly imposes the null hypothesis of a unit root. It works in an iterative fashion to select multiple outliers in a given series.
We show, via simulation, that under the null hypothesis of no outliers, it has the right size in finite samples to detect a single outlier but when applied in an iterative fashion to select multiple outliers, it exhibits severe size distortions towards finding an excessive number of outliers. We show that this iterative method is incorrect and derive the appropriate limiting distribution of the test at each step of the search. Whether corrected or not, we also show that the outliers need to be very large for the method to have any decent power. We propose an alternative method based on first-differenced data that has considerably more power. The issues are illustrated using two US/Finland real exchange rate series.
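The local GLS detrending step shared by the MGLS and ADFGLS tests can be sketched as follows. This is an Elliott-Rothenberg-Stock-style quasi-differencing on simulated data; the value c = -13.5 and the implementation details are illustrative assumptions:

```python
import numpy as np

def gls_detrend(y, c_bar=-13.5):
    """Local GLS detrending (sketch). Quasi-differences the series
    and the deterministics with alpha = 1 + c_bar/T, fits the trend
    coefficients by OLS on the quasi-differenced data, and removes
    the fitted trend from the original series."""
    T = len(y)
    alpha = 1.0 + c_bar / T
    z = np.column_stack([np.ones(T), np.arange(1, T + 1)])  # constant + trend
    # Quasi-difference; the first observation is kept as-is
    yq = np.concatenate([[y[0]], y[1:] - alpha * y[:-1]])
    zq = np.vstack([z[0], z[1:] - alpha * z[:-1]])
    psi, *_ = np.linalg.lstsq(zq, yq, rcond=None)
    return y - z @ psi

# Illustrative trend-stationary series: linear trend plus small noise
rng = np.random.default_rng(1)
T = 200
yt = 0.5 + 0.02 * np.arange(1, T + 1) + rng.normal(scale=0.1, size=T)
detrended = gls_detrend(yt)
```

A unit root statistic (M-type or ADF-type) would then be computed on the detrended series; the broken-trend versions studied in the first paper extend the deterministic matrix z with slope and intercept shift dummies at the candidate break date.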
|
9 |
Statistical modelling of return on capital employed of individual units
Burombo, Emmanuel Chamunorwa 10 1900 (has links)
Return on Capital Employed (ROCE) is a popular financial instrument and communication tool for the appraisal of companies. Often, companies' management and other practitioners use untested rules and behavioural approaches when investigating the key determinants of ROCE, instead of the scientific statistical paradigm. The aim of this dissertation was to identify and quantify key determinants of ROCE of individual companies listed on the Johannesburg Stock Exchange (JSE), by comparing classical multiple linear regression, principal components regression, generalized least squares regression, and robust maximum likelihood regression approaches in order to improve companies' decision making. The performance indicators used to select the best approach were the coefficient of determination (R²), adjusted R², and Mean Square Residual (MSE). Since the ROCE variable had both positive and negative values, two separate analyses were done.
The classical multiple linear regression models were constructed using stepwise directed search with dependent variable log ROCE for the two data sets. Assumptions were satisfied and the problem of multicollinearity was addressed. For the positive ROCE data set, the classical multiple linear regression model had an R² of 0.928, an adjusted R² of 0.927, and an MSE of 0.013; the lead key determinant was Return on Equity (ROE), with positive elasticity, followed by Debt to Equity (D/E) and Capital Employed (CE), both with negative elasticities. The model showed good validation performance. For the negative ROCE data set, the classical multiple linear regression model had an R² of 0.666, an adjusted R² of 0.652, and an MSE of 0.149; the lead key determinant was Assets per Capital Employed (APCE), with positive effect, followed by Return on Assets (ROA) and Market Capitalization (MC), both with negative effects. The model showed poor validation performance. The results indicated both more and less precision than those found by previous studies. This suggested that the key determinants are also important sources of variability in ROCE of individual companies that management need to work with.
To handle the problem of multicollinearity in the data, principal components were selected using the Kaiser-Guttman criterion. The principal components regression model was constructed using dependent variable log ROCE for the two data sets. Assumptions were satisfied. For the positive ROCE data set, the principal components regression model had an R² of 0.929, an adjusted R² of 0.929, and an MSE of 0.069; the lead key determinant was PC4 (log ROA, log ROE, log Operating Profit Margin (OPM)), followed by PC2 (log Earnings Yield (EY), log Price to Earnings (P/E)), both with positive effects. The model resulted in a satisfactory validation performance. For the negative ROCE data set, the principal components regression model had an R² of 0.544, an adjusted R² of 0.532, and an MSE of 0.167; the lead key determinant was PC3 (ROA, EY, APCE), followed by PC1 (MC, CE), both with negative effects. The model indicated an accurate validation performance. The results showed that the use of principal components as independent variables did not improve classical multiple linear regression model prediction in our data. This implied that the key determinants are less important sources of variability in ROCE of individual companies that management need to work with.
Generalized least squares regression was used to address heteroscedasticity and dependence in the data. It was constructed using stepwise directed search with dependent variable ROCE for the two data sets. For the positive ROCE data set, the weighted generalized least squares regression model had an R² of 0.920, an adjusted R² of 0.919, and an MSE of 0.044; the lead key determinant was ROE with positive effect, followed by D/E with negative effect, Dividend Yield (DY) with positive effect, and lastly CE with negative effect. The model indicated an accurate validation performance. For the negative ROCE data set, the weighted generalized least squares regression model had an R² of 0.559, an adjusted R² of 0.548, and an MSE of 57.125; the lead key determinant was APCE, followed by ROA, both with positive effects. The model showed a weak validation performance. The results suggested that the key determinants are less important sources of variability in ROCE of individual companies that management need to work with. Robust maximum likelihood regression was employed to handle the problem of contamination in the data. It was constructed using stepwise directed search with dependent variable ROCE for the two data sets. For the positive ROCE data set, the robust maximum likelihood regression model had an R² of 0.998, an adjusted R² of 0.997, and an MSE of 6.739; the lead key determinant was ROE with positive effect, followed by DY and lastly D/E, both with negative effects. The model showed a strong validation performance. For the negative ROCE data set, the robust maximum likelihood regression model had an R² of 0.990, an adjusted R² of 0.984, and an MSE of 98.883; the lead key determinant was APCE with positive effect, followed by ROA with negative effect. The model also showed a strong validation performance. The results reflected that the key determinants are major sources of variability in ROCE of individual companies that management need to work with.
Overall, the findings showed that robust maximum likelihood regression provided more precise results than the three competing approaches, because it is more consistent, sufficient and efficient, and has a higher breakdown point with fewer restrictive conditions. Companies' management can establish and control proper marketing strategies using the key determinants, and the results of these strategies can translate into an improvement in ROCE. / Mathematical Sciences / M. Sc. (Statistics)
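As a rough illustration of robust regression in the spirit of the comparison above, here is a Huber-weight IRLS sketch on simulated data with gross outliers. This is a generic robust M-estimator, not the dissertation's exact robust maximum likelihood procedure, and the data are assumptions:

```python
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50):
    """M-estimation by iteratively reweighted least squares with
    Huber weights: a generic robust regression sketch. Observations
    with large standardized residuals are progressively downweighted."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # OLS starting values
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust MAD scale
        u = r / max(s, 1e-12)
        w = np.where(np.abs(u) <= k, 1.0, k / np.abs(u))   # Huber weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, X.T @ (w * y))    # weighted normal eqs
    return beta

# Illustrative data: y = 1 + 2x plus noise, with 10% gross outliers
rng = np.random.default_rng(5)
n = 300
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)
y[:30] += 20.0

beta_robust = huber_irls(X, y)
```

Because the outliers receive small weights, the robust fit stays near the uncontaminated coefficients, whereas OLS is pulled far off; this is the higher-breakdown behaviour the dissertation's comparison highlights.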
|