About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Estimation of operator-valued kernel-based vector autoregressive models: Application to network inference

Lim, Néhémy 02 April 2015 (has links)
In multivariate time series analysis, existing models are most often used for forecasting, i.e. estimating future values of the observed system from a history of previously observed values. Another task is to extract causal relationships among the variables of a dynamical system. We focus on this latter, explanatory problem and develop a series of tools to address it. To this end, we define in this thesis a new family of nonparametric vector autoregressive models built from operator-valued kernels. Assuming a sparse underlying structure, the model's sparsity is controlled by adding sparsity-inducing penalties on the model parameters (here, the vectors that weight a linear combination of kernels) to the loss function. The kernels under study sometimes involve hyperparameters that must be learned depending on the nature of the problem at hand. When working assumptions or expert knowledge allow presetting the kernel parameters, the learning problem reduces to estimating the model parameters alone; to optimize the corresponding loss function, we develop a proximal algorithm. Conversely, when no prior knowledge about the variables is available, the parameters of some kernels cannot be fixed a priori, and the kernel parameters must then be learned jointly with the model parameters. For this we resort to an alternating optimization scheme involving proximal methods. We then propose to extract an estimate of the adjacency matrix encoding the underlying causal network by computing a statistic of the instantaneous Jacobian matrices. In the high-dimensional setting, i.e. when the amount of data is insufficient relative to the number of variables, we employ an ensemble approach that shares features of boosting and random forests. To demonstrate the effectiveness of our models, we apply them to two datasets: data simulated from gene regulatory networks and real climate data.
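
Sparsity-inducing penalties of the kind described above are typically handled with a proximal gradient method. Below is a minimal sketch of such an update for a squared loss with an l1 penalty; it illustrates the general technique only, not the thesis's actual operator-valued kernel algorithm, and all names (K, y, lam, step) are placeholders.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: elementwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(K, y, lam, step, n_iter=500):
    """Minimize 0.5 * ||K @ c - y||^2 + lam * ||c||_1 by proximal gradient (ISTA).

    `step` must be at most 1 / L, where L is the largest eigenvalue of K.T @ K.
    """
    c = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ c - y)                          # gradient of the smooth part
        c = soft_threshold(c - step * grad, step * lam)   # proximal (shrinkage) step
    return c
```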
2

Macroeconomic Forecasting: Statistically Adequate, Temporal Principal Components

Dorazio, Brian Arthur 05 June 2023 (has links)
The main goal of this dissertation is to expand upon the use of Principal Component Analysis (PCA) in macroeconomic forecasting, particularly in cases where traditional principal components fail to account for all of the systematic information making up common macroeconomic and financial indicators. At the outset, PCA is viewed as a statistical model derived from the reparameterization of the Multivariate Normal model in Spanos (1986). To motivate a PCA forecasting framework prioritizing sound model assumptions, it is demonstrated through simulation experiments that model mis-specification erodes the reliability of inferences. The Vector Autoregressive (VAR) model at the center of these simulations allows for the Markov (temporal) dependence inherent in macroeconomic data and serves as the basis for extending conventional PCA. Stemming from the relationship between PCA and the VAR model, an operational out-of-sample forecasting methodology is prescribed incorporating statistically adequate, temporal principal components, i.e. principal components which capture not only Markov dependence but all of the other relevant information in the original series. The macroeconomic forecasts produced from applying this framework to several common macroeconomic indicators are shown to outperform standard benchmarks in terms of predictive accuracy over longer forecasting horizons. / Doctor of Philosophy / The landscape of macroeconomic forecasting and nowcasting has shifted drastically with the advent of big data. Armed with significant growth in computational power and data collection resources, economists have augmented their arsenal of statistical tools to include those which can produce reliable results in big data environments. At the forefront of such tools is Principal Component Analysis (PCA), a method which reduces a large set of predictors to a few factors containing the majority of the variation in the original data series. This dissertation expands upon the use of PCA in the forecasting of key macroeconomic indicators, particularly in instances where traditional principal components fail to account for all of the systematic information comprising the data. Ultimately, a forecasting methodology which incorporates temporal principal components, ones capable of capturing both time dependence and the other relevant information in the original series, is established. In the final analysis, the methodology is applied to several common macroeconomic and financial indicators. The forecasts produced using this framework are shown to outperform standard benchmarks in terms of predictive accuracy over longer forecasting horizons.
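
For reference, the conventional PCA step that this dissertation extends looks roughly like the following sketch, which extracts a few factors from a standardized panel of indicators; the random matrix stands in for actual macro data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))      # placeholder panel: T=200 months, 20 indicators
Z = StandardScaler().fit_transform(X)   # PCA is sensitive to scale, so standardize first
pca = PCA(n_components=3)
factors = pca.fit_transform(Z)          # first three principal components
print(pca.explained_variance_ratio_)    # share of variance captured by each factor
```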
3

The macroeconomic effects of international uncertainty shocks

Crespo Cuaresma, Jesus, Huber, Florian, Onorante, Luca 03 1900 (has links) (PDF)
We propose a large-scale Bayesian VAR model with factor stochastic volatility to investigate the macroeconomic consequences of international uncertainty shocks on the G7 countries. The factor structure enables us to identify an international uncertainty shock by assuming that it is the factor most correlated with forecast errors related to equity markets and permits fast sampling of the model. Our findings suggest that the estimated uncertainty factor is strongly related to global equity price volatility, closely tracking other prominent measures commonly adopted to assess global uncertainty. The dynamic responses of a set of macroeconomic and financial variables show that an international uncertainty shock exerts a powerful effect on all economies and variables under consideration. / Series: Department of Economics Working Paper Series
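
As a loose illustration of the identification idea (picking, among latent factors of the forecast errors, the one most correlated with equity-market errors), here is a sketch using plain PCA in place of the paper's Bayesian factor stochastic volatility machinery; all data and names are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
residuals = rng.standard_normal((400, 15))   # stand-in for the VAR's forecast errors
equity_errors = residuals[:, 0]              # stand-in for equity-market forecast errors
factors = PCA(n_components=5).fit_transform(residuals)
corrs = [abs(np.corrcoef(f, equity_errors)[0, 1]) for f in factors.T]
uncertainty_factor = factors[:, int(np.argmax(corrs))]  # factor most correlated with equity errors
```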
4

Implications of Macroeconomic Volatility in the Euro Area

Hauzenberger, Niko, Böck, Maximilian, Pfarrhofer, Michael, Stelzer, Anna, Zens, Gregor 04 1900 (has links) (PDF)
In this paper, we estimate a Bayesian vector autoregressive (VAR) model with factor stochastic volatility in the error term to assess the effects of an uncertainty shock in the Euro area (EA). This allows us to incorporate uncertainty directly into the econometric framework and treat it as a latent quantity. Only a limited number of papers estimate the impact of uncertainty and its macroeconomic consequences jointly, and most of the literature in this sphere is based on single countries. We analyze the special case of a shock restricted to the Euro area, whose countries are highly related by definition. Among other results, we find a significant decrease in real activity, measured by GDP, in most Euro area countries over a period of roughly a year following an uncertainty shock. / Series: Department of Economics Working Paper Series
5

Spatio-temporal Analyses For Prediction Of Traffic Flow, Speed And Occupancy On I-4

Chilakamarri Venkata, Srinivasa Ravi Chandra 01 January 2009 (has links)
Traffic data prediction is a critical aspect of Advanced Traffic Management Systems (ATMS). The utility of the traffic data lies in providing information on the evolution of the traffic process that can be passed on to various users (commuters, Regional Traffic Management Centers (RTMCs), Departments of Transportation (DOTs), etc.) for user-specific objectives. This information can be extracted from the data collected by various traffic sensors. Loop detectors collect traffic data in the form of flow, occupancy, and speed throughout the nation. Freeway traffic data from I-4 loop detectors has been collected and stored in a data warehouse called the Central Florida Data Warehouse (CFDW™) by the University of Central Florida for the periods 1993-1994 and 2000-2003. This data is raw, in the form of time-stamped 30-second aggregated data collected from about 69 stations over a 36-mile stretch of I-4 from Lake Mary in the east to Disney World in the west. This data has to be processed to extract information that can be disseminated to various users. Most statistical procedures assume that each individual data point in the sample is independent of the other data points. This is not true of traffic data, which are correlated across space and time. The time sequence of observations and the spatial layout of the data collection devices therefore introduce autocorrelations within a single variable and cross-correlations across multiple variables. Significant autocorrelations prove that past values of a variable can be used to predict its future values; significant cross-correlations between variables prove that past values of one variable can be used to predict future values of another. Traditional techniques in traffic prediction use univariate time series models that account for autocorrelations but not cross-correlations; these models neglect the cross-correlations between variables that are present in freeway traffic data because of the way the data are collected. There is a need for statistical techniques that incorporate these multivariate cross-correlations when predicting future values of traffic data. The emphasis in this dissertation is on the multivariate prediction of traffic variables. Unlike traditional statistical techniques that have relied on univariate models, this dissertation explored the cross-correlations between multivariate traffic variables and variables collected across adjoining spatial locations (such as loop detector stations). The analysis in this dissertation proved that there were significant cross-correlations among different traffic variables collected at closely spaced locations at different time scales. The nature of the cross-correlations showed that there was feedback among the variables, so past values can be used to predict future values. Multivariate time series analysis is appropriate for modeling the effect of different variables on each other. Upstream data has been accounted for in past time series analyses, but without accounting for feedback effects. Vector Autoregressive (VAR) models are more appropriate for such data. Although VAR models have been applied to forecast economic time series, they have not been used to model freeway data. VAR models were estimated for speeds and volumes at a sample of two locations, using 5-minute data.
Different specifications were fit: estimation of speeds from surrounding speeds; estimation of volumes from surrounding volumes; estimation of speeds from volumes and occupancies at the same location; and estimation of speeds from volumes at surrounding locations (and vice versa). These specifications were compared to univariate models for the respective variables at three levels of data aggregation (5 minutes, 10 minutes, and 15 minutes). For aggregation levels below 15 minutes, the VAR models outperformed the univariate models; at the 15-minute aggregation level, they did not. Since VAR models were used for all traffic variables reported by the loop detectors, this made the application of VAR a true multivariate procedure for dynamic prediction of the traffic variables: flow, speed, and occupancy. VAR models are generally deemed more complex than univariate models due to the estimation of multiple covariance matrices; however, a VAR model for k variables must be compared against k univariate models, and on that basis VAR models compare well with AutoRegressive Integrated Moving Average (ARIMA) models. The added complexity helps model the effect of upstream and downstream variables on future values of the response variable, which could be useful in ATMS situations where the effect of traffic redistribution and redirection is not known beforehand. The VAR models were tested against more traditional models, and their performances were compared under different traffic conditions. These models significantly enhance the understanding of freeway traffic processes and phenomena and identify knowledge relevant to traffic prediction. Further refinements of the models can yield better forecasts under a wider range of conditions.
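
As a point of reference, fitting a VAR to a handful of detector series is a few lines with standard tooling. The sketch below uses random numbers in place of real 5-minute loop-detector data, and the column names are purely illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(
    rng.standard_normal((500, 4)),  # placeholder for 500 five-minute observations
    columns=["speed_s1", "volume_s1", "speed_s2", "volume_s2"],
)
model = VAR(data)
results = model.fit(maxlags=8, ic="aic")                        # lag order chosen by AIC
ahead = results.forecast(data.values[-results.k_ar:], steps=3)  # 3-step-ahead forecasts
```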
6

Short-term Industrial Production Forecasting For Turkey

Degerli, Ahmet 01 September 2012 (has links) (PDF)
This thesis aims to produce short-term forecasts of economic activity in Turkey. As a proxy for economic activity, the industrial production index is used. Univariate autoregressive distributed lag (ADL) models, vector autoregressive (VAR) models, and a forecast combination method are utilized in a pseudo out-of-sample forecasting framework to obtain one-month-ahead forecasts. To evaluate the models' forecasting performance, the relative root mean square forecast error (RRMSFE) is calculated. Overall, the results indicate that combining the VAR models with four endogenous variables yields the most substantial improvement in forecasting performance relative to the benchmark autoregressive (AR) model.
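
The RRMSFE used here is simply the ratio of root mean square forecast errors; a sketch (with placeholder arrays) follows, where a value below 1 means the candidate model beats the AR benchmark.

```python
import numpy as np

def rmsfe(actual, forecast):
    """Root mean square forecast error."""
    a, f = np.asarray(actual), np.asarray(forecast)
    return np.sqrt(np.mean((a - f) ** 2))

def rrmsfe(actual, candidate_fcst, benchmark_fcst):
    """Candidate RMSFE relative to the benchmark; < 1 favors the candidate."""
    return rmsfe(actual, candidate_fcst) / rmsfe(actual, benchmark_fcst)
```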
7

An Empirical Investigation of Optimum Currency Area Theory, Business Cycle Synchronization, and Intra-Industry Trade

Li, Dan 19 December 2013 (has links)
The dissertation is made up of three empirical essays on Optimum Currency Area theory, business cycle synchronization, and intra-industry trade. The second chapter conducts an empirical test of the theory of Optimum Currency Areas. I investigate the feasibility of creating a currency union in East Asia by examining the dominance and symmetry of macroeconomic shocks. Relying on a series of structural Vector Autoregressive models with long-run and block exogeneity restrictions, I identify a variety of macroeconomic disturbances in eleven East Asian economies. To examine the nature of the disturbances, I look into the forecast error variance decomposition, the correlation of disturbances, the size of shocks, and the speed of adjustment. Based on both statistical analysis and economic comparison, I find that two groups of economies are subject to dominant and symmetrical domestic supply shocks, and that the two groups respond quickly to moderately sized shocks. It is therefore economically feasible for these two groups of economies to form common currency zones. The third chapter investigates the differing effects of intra- and inter-industry trade on business cycle synchronization, controlling for financial market linkages and monetary policy. The chapter is the first attempt to use intra- and inter-industry trade simultaneously in Instrumental Variable estimations. The evidence supports the view that intra-industry trade increases business cycle synchronization, while inter-industry trade brings about a divergence of cycles. The findings imply that country pairs with higher intra-industry trade intensity are more likely to experience synchronized business cycles and are better suited to join a monetary union. My results also show that financial integration and monetary policy coordination provide no explanation for synchronization once industry-level trade is accounted for. The fourth chapter extends the third and explores how the characteristics of the global trade network influence intra-industry trade. Borrowing from social network analysis the concept of structural equivalence (the similarity of two countries' aggregate trade relations with other countries), this study incorporates this network measure into an augmented gravity model of intra-industry trade. I build two fixed-effects models to analyze intra-industry trade in the raw material and final product sectors among 182 countries from 1962 through 2000. Structural equivalence promotes intra-industry trade flows in the final product sector, but it does not influence intra-industry trade in the raw material sector. Moreover, structural equivalence has become increasingly important in boosting intra-industry trade over time.
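
For readers unfamiliar with long-run restrictions, the sketch below shows the standard Blanchard-Quah style computation on a fitted two-variable VAR: the long-run impact matrix is forced to be lower triangular so that only the first shock has permanent effects. It is a generic illustration with random placeholder data, not the chapter's actual eleven-economy specification.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
y = rng.standard_normal((300, 2))       # placeholder, e.g. output growth and unemployment
res = VAR(y).fit(2)

A = np.eye(2) - res.coefs.sum(axis=0)   # I - A_1 - ... - A_p
F = np.linalg.inv(A)                    # long-run multiplier Psi(1)
LR = np.linalg.cholesky(F @ res.sigma_u @ F.T)  # lower-triangular long-run impacts
B0 = A @ LR                             # implied contemporaneous impact matrix
```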
8

Core inflation measures in Brazil and their predictive power for headline inflation

Litvac, Basiliki Theophane Calochorios 05 February 2013 (has links)
This thesis evaluates, for the Brazilian case, one of the most important properties expected of a core inflation measure: being a good predictor of future headline inflation. To this end, two benchmark models built from monthly IPCA data were compared with six VAR models, one for each of the core inflation measures calculated by the Central Bank of Brazil. Forecasting performance was evaluated by comparing mean square errors and by applying the Diebold-Mariano (1995) methodology for model comparison. The evidence indicates that, by the criteria used in this work, the current set of core inflation measures calculated by the Central Bank does not fulfill this desired property.
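
The Diebold-Mariano comparison used above can be sketched in a few lines. This is a simplified version with a squared-error loss and a plain variance estimate for one-step-ahead errors (a HAC variance would be used for longer horizons); the forecast arrays are placeholders.

```python
import numpy as np
from scipy import stats

def diebold_mariano(actual, fcst1, fcst2):
    """DM statistic and two-sided p-value for equal predictive accuracy."""
    e1 = np.asarray(actual) - np.asarray(fcst1)
    e2 = np.asarray(actual) - np.asarray(fcst2)
    d = e1 ** 2 - e2 ** 2                            # loss differential series
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))  # asymptotically N(0, 1)
    p = 2 * (1 - stats.norm.cdf(abs(dm)))
    return dm, p
```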
9

Impact of Securitization on House Price Dynamics in Spain

Hejlová, Hana January 2014 (has links)
The thesis tries to explain the different nature of the dynamics in the upward and downward parts of the last house price cycle in Spain, a market characterized by important rigidities. Covered bonds are introduced as an instrument that may accelerate a house price boom, while they may also serve as a source of correction to overvalued house prices in a downturn. Under serious economic stress, a lack of investment opportunities motivates investors to buy covered bonds because of the strong guarantees they provide, which may in turn help revitalize the credit and housing markets. To address such a regime shift, house price dynamics is modelled within a framework of mutually related house price, credit, and business cycles using a smooth transition vector autoregressive model. Linear behaviour of this system is rejected, indicating the need to model house prices in a nonlinear framework, and the importance of modelling house prices in the context of credit and business cycles is confirmed. Possible causality from the issuance of covered bonds to house price dynamics is identified within this nonlinear structure. Finally, a threat to financial stability resulting from rising asset encumbrance in both the upward and downward parts of the house price cycle is identified, stressing the need to model the impact of covered bonds on house prices in...
10

Contextualizing the Dynamics of Affective Functioning: Conceptual and Statistical Considerations

Adolf, Janne K. 14 September 2018 (has links)
Recent affect research stresses the importance of micro-longitudinal data for understanding daily affective functioning, as such data allow describing affective dynamics and the processes potentially underlying them. Accordingly, dynamic longitudinal models are increasingly promoted. In this dissertation, I address calls for an integration of contextual information into the study of daily affective functioning. Specifically, I modify popular dynamic models so that they incorporate contextual changes. In a first contribution, individuals are characterized as embedded in contexts. The proposed approach of fixed moderated time series analysis accounts for systemic reactions to contextual changes by estimating change in all parameters of a dynamic time series model conditional on contextual changes. It thus treats contextual changes as known and the related parameter changes as deterministic. Consequently, model specification and estimation are facilitated and feasible in smaller samples, but information on which contextual factors matter, and how, is required. Applicable to single individuals, the approach permits an unconstrained exploration of inter-individual differences in contextualized affective dynamics. In a second contribution, individuals are characterized as interacting reciprocally with their contexts. Implementing a process perspective on contextual changes, I model the dynamics of daily events using autoregressive models with Poisson measurement error. Combining Poisson and Gaussian autoregressive models makes it possible to formalize the dynamic interplay between contextual and affective processes, distinguishing not only unique from joint dynamics but also affective reactivity from situation selection, evocation, or anticipation. The models are set up hierarchically to capture inter-individual differences in intra-individual dynamics, and estimation is carried out via simulation-based techniques in the Bayesian framework.
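
A latent Gaussian autoregression observed through Poisson measurement error, as described above, can be simulated in a few lines; the sketch below uses illustrative parameter values and is not the dissertation's hierarchical model.

```python
import numpy as np

rng = np.random.default_rng(3)
T, phi, mu = 300, 0.6, 1.0      # illustrative length, persistence, and baseline rate
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.standard_normal()  # latent Gaussian AR(1) process
y = rng.poisson(np.exp(mu + x))  # observed daily event counts (Poisson measurement)
```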
