141

Rent modelling of Swedish office markets : Forecasting and rent effects / Hyresmodellering av svenska kontorsmarknader : Prognoser och priseffekter

Harrami, Hamza, Paulsson, Oscar January 2017
The Swedish office markets have, over the last decade, been moving towards a higher rental-level equilibrium. The aim of this study is to investigate the fundamental drivers of office rents and to model office rent forecasts in five Swedish office submarkets: Stockholm (2), Gothenburg (2) and Malmö (1). The methodology combines economic theory with econometric analysis, and the product is an econometric model. Using the estimated drivers, office rent forecasts are computed from a vector autoregression (VAR) model. Our results show that office stock and vacancy, in lagged form, are statistically superior in explaining office rent development, and that OMX30 is the largest macro-driver of office rents. The generated forecasts were significant and valid in the CBD submarkets; the forecasts for the Rest of Inner City (RIC) submarkets were less precise. The results also show that the forecasts move more linearly than the actual office rent data, which move more "step-wise".
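As a rough illustration of the forecasting setup described above — a VAR over rent and its lagged drivers — the following sketch fits a VAR and produces a rent forecast. It is a minimal example on hypothetical quarterly data; the file name, column names and lag selection are illustrative assumptions, not the authors' specification.

```python
# Minimal VAR rent-forecasting sketch (hypothetical data and lag order).
import pandas as pd
from statsmodels.tsa.api import VAR

# Quarterly series per submarket: rent, office stock, vacancy, OMX30 index.
df = pd.read_csv("stockholm_cbd.csv", index_col="quarter")  # assumed file
df = df[["rent", "stock", "vacancy", "omx30"]].pct_change().dropna()

model = VAR(df)
results = model.fit(maxlags=4, ic="aic")   # lag length chosen by AIC

# Forecast rent growth eight quarters ahead from the last observed lags.
last_obs = df.values[-results.k_ar:]
forecast = results.forecast(last_obs, steps=8)
print(forecast[:, 0])  # first column: forecast rent-growth path
```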
142

A New Approach to ANOVA Methods for Autocorrelated Data

Liu, Gang January 2016
No description available.
143

Contributions to Efficient Statistical Modeling of Complex Data with Temporal Structures

Hu, Zhihao 03 March 2022
This dissertation focuses on three research projects: neighborhood vector autoregression for multivariate time series, uncertainty quantification for agent-based models of networked anagram games, and a scalable algorithm for multi-class classification. The first project studies the modeling of multivariate time series, with applications in the environmental sciences and other areas. In this work, a so-called neighborhood vector autoregression (NVAR) model is proposed to efficiently analyze large-dimensional multivariate time series. The time series are assumed to have underlying distances among them based on the inherent setting of the problem. When this distance matrix is available or can be obtained, the proposed NVAR method is demonstrated to provide a computationally efficient and theoretically sound estimation of model parameters. The performance of the proposed method is compared with other existing approaches in both simulation studies and a real application to a stream nitrogen study. The second project focuses on the study of group anagram games, in which players are provided letters to form as many words as possible. In this work, enhanced agent behavior models for networked group anagram games are built, exercised, and evaluated under an uncertainty quantification framework. Specifically, the game data for players are clustered based on their skill levels (forming words, requesting letters, and replying to requests), multinomial logistic regressions for transition probabilities are performed, and the uncertainty is quantified within each cluster. The result of this process is a model in which players are assigned different numbers of neighbors and different skill levels in the game. Simulations of ego agents with neighbors are conducted to demonstrate the efficacy of the proposed methods. The third project aims to develop efficient and scalable algorithms for multi-class classification that achieve a balance between prediction accuracy and computing efficiency, especially in high-dimensional settings. Traditional multinomial logistic regression becomes slow in high-dimensional settings where the number of classes (M) and the number of features (p) are large. Our algorithms are computationally efficient and scale to data with even higher dimensions. The simulation and case study results demonstrate that our algorithms have a substantial advantage over traditional multinomial logistic regression while maintaining comparable prediction performance. / Doctor of Philosophy / In many data-centric applications, data often have complex structures involving temporal dependence and high dimensionality. Modeling of complex data with temporal structures has attracted great attention in many applications such as environmental sciences, network science, data mining, neuroscience, and economics. However, modeling such complex data is quite challenging due to the large uncertainty and dimensionality involved. This dissertation focuses on the modeling and prediction of complex data with temporal structures. Three different types of complex data are modeled: the nitrogen levels of multiple streams are modeled jointly, human actions in networked group anagram games are modeled with quantified uncertainty, and data with multiple labels are classified. Different models are proposed, and they are demonstrated to be efficient through simulation and case studies.
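The NVAR idea — letting each series depend only on the lags of series within a given distance — can be sketched as follows. This is a simplified illustration assuming a hard distance threshold and a single lag; it is not the dissertation's estimator.

```python
# Sketch of a neighborhood-restricted VAR(1): series i is regressed only on
# the lag-1 values of itself and of series within a distance threshold.
# Illustrative only — the hard threshold is an assumption, not the NVAR method.
import numpy as np

def neighborhood_var1(Y, D, radius):
    """Y: (T, k) matrix of series; D: (k, k) distance matrix."""
    T, k = Y.shape
    A = np.zeros((k, k))                       # lag-1 coefficient matrix
    X_full, y_full = Y[:-1], Y[1:]
    for i in range(k):
        nbrs = np.where(D[i] <= radius)[0]     # neighborhood of series i
        X = X_full[:, nbrs]
        coef, *_ = np.linalg.lstsq(X, y_full[:, i], rcond=None)
        A[i, nbrs] = coef                      # zeros outside the neighborhood
    return A

rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 10))             # toy data
D = np.abs(np.subtract.outer(np.arange(10), np.arange(10.)))  # line distances
print(neighborhood_var1(Y, D, radius=1))       # tridiagonal sparsity pattern
```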
144

A New State Transition Model for Forecasting-Aided State Estimation for the Grid of the Future

Hassanzadeh, Mohammadtaghi 09 July 2014
The grid of the future will be more decentralized due to the significant increase in distributed generation and microgrids. In addition, due to the proliferation of large-scale intermittent wind power, the randomness in the power system state will increase to unprecedented levels. This dissertation proposes a new state transition model for power system forecasting-aided state estimation, which aims to capture the increasingly stochastic nature of the states of the grid of the future. The proposed state forecasting model is based on time-series modeling of filtered system states, and it takes the spatial correlation among the states into account. Once the states with high spatial correlation are identified, time-series models are developed to capture the dependency of voltages and angles in time and among each other. The temporal correlation in power system states (i.e. voltage angles and magnitudes) is modeled using autoregression, while the spatial correlation among the system states (i.e. voltage angles) is modeled using vector autoregression. Simulation results show significant improvement in power system state forecasting accuracy, especially in the presence of distributed generation and microgrids. / Ph. D.
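A toy version of the proposed split — autoregression for the temporal behavior of individual states, vector autoregression across a group of spatially correlated angle states — might look like the following. The bus grouping, lag orders and simulated data are assumptions for illustration only.

```python
# Sketch of the hybrid state-transition idea: an AR(1) per voltage magnitude
# and a VAR(1) over a group of spatially correlated voltage angles.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
T = 300
mags = pd.DataFrame(1.0 + 0.01 * rng.standard_normal((T, 3)),
                    columns=["V1", "V2", "V3"])
angles = pd.DataFrame(np.cumsum(0.001 * rng.standard_normal((T, 3)), axis=0),
                      columns=["th1", "th2", "th3"])

# Temporal correlation: one AR(1) per magnitude series.
ar_models = {c: AutoReg(mags[c], lags=1).fit() for c in mags}

# Spatial correlation: a VAR(1) over the correlated angle group.
var_model = VAR(angles.diff().dropna()).fit(1)

print(ar_models["V1"].params)
print(var_model.coefs[0])   # (k x k) lag-1 coefficient matrix
```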
145

The relationship between consumer price inflation and consumer confidence : The case of Sweden

Mtawali, Joyce, Taha, Gumush January 2024
With economic uncertainties on the rise, understanding the relationship between consumer price inflation and consumer confidence becomes increasingly vital. This thesis investigates the relationship between Consumer Price Inflation (CPI) and consumer confidence, specifically within the context of Sweden. The relationship is examined through a Vector Autoregression (VAR) model spanning the period 2002 to 2023. Drawing upon existing literature and theoretical frameworks in economics and psychology, this research provides a deeper understanding of the relationship between the two variables. By incorporating data up to 2023, the thesis also examines the effects of the Covid-19 pandemic on the relationship between these key macroeconomic indicators. The results show statistically significant evidence that consumer price inflation predicts consumer confidence. Thus, we conclude that consumer price inflation plays a significant role in the dynamics of consumer confidence, influencing both economic conditions and expectations.
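The predictive claim above is the kind of result a Granger-causality test inside a VAR delivers, as in this sketch; the data file, column names and lag selection are assumed for illustration.

```python
# Sketch of the predictive test behind the stated result: does consumer
# price inflation Granger-cause consumer confidence?
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("sweden_monthly.csv", index_col="month")  # assumed file
df = df[["confidence", "inflation"]].dropna()

results = VAR(df).fit(maxlags=12, ic="bic")
test = results.test_causality("confidence", ["inflation"], kind="f")
print(test.summary())   # small p-value: inflation helps predict confidence
```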
146

Matching DSGE models to data with applications to fiscal and robust monetary policy

Kriwoluzky, Alexander 01 December 2009
This thesis is concerned with three questions: first, how can the effects of macroeconomic policy on the economy be estimated in general? Second, what are the effects of a pre-announced increase in government expenditures? Third, how should monetary policy be conducted if the policymaker faces uncertainty about the economic environment? In the first chapter I suggest estimating the effects of an exogenous disturbance on the economy by considering the parameter distributions of a Vector Autoregression (VAR) model and a Dynamic Stochastic General Equilibrium (DSGE) model jointly. This makes it possible to resolve the two major issues a researcher has to deal with when working with a VAR model and a DSGE model: the identification of the VAR model and the potential misspecification of the DSGE model. The second chapter applies the methodology presented in the preceding chapter to investigate the effects of a pre-announced change in government expenditure on private consumption and real wages. The shock is identified by exploiting its pre-announced nature, i.e. the different signs of the responses of endogenous variables during the announcement period and after the realization of the shock. Private consumption is found to respond negatively during the announcement period and positively after the realization. The reaction of real wages is positive on impact and remains positive for two quarters after the realization.
In the last chapter, ''Optimal Policy Under Model Uncertainty: A Structural-Bayesian Estimation Approach'', I investigate jointly with Christian Stoltenberg how policy should optimally be conducted when the policymaker is faced with uncertainty about the economic environment. The standard procedure is to specify a prior over the parameter space, ignoring the status of particular sub-models. We propose a procedure that ensures that the specified set of sub-models is not discarded too easily. We find that optimal policy based on our procedure leads to welfare gains compared to the standard practice.
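Purely as a stylized illustration of combining VAR and DSGE parameter information — not the estimator developed in the first chapter — one can shrink OLS-estimated VAR coefficients toward DSGE-implied values via a conjugate ridge-type posterior mean:

```python
# Stylized illustration of shrinking VAR coefficients toward DSGE-implied
# values (a ridge/conjugate-prior posterior mean). This is NOT the author's
# estimator — only the flavor of weighting the two parameter distributions.
import numpy as np

def shrink_to_dsge(X, y, b_dsge, lam):
    """Posterior mean under prior b ~ N(b_dsge, (sigma^2/lam) I)."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k),
                           X.T @ y + lam * b_dsge)

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 2))
b_true = np.array([0.8, 0.1])
y = X @ b_true + 0.1 * rng.standard_normal(100)
b_dsge = np.array([0.9, 0.0])          # coefficients implied by a DSGE model

print(shrink_to_dsge(X, y, b_dsge, lam=0.0))    # pure OLS
print(shrink_to_dsge(X, y, b_dsge, lam=50.0))   # pulled toward the DSGE prior
```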
147

Oil Price and the Stock Market: A Structural VAR Model Identified with an External Instrument

Perez, Tomas Rene 28 July 2020
No description available.
148

Debt Portfolio Optimization at the Swedish National Debt Office : A Monte Carlo Simulation Model / Skuldportföljsoptimering på Riksgälden : En Monte Carlo-simuleringsmodell

Greberg, Felix January 2020
It can be difficult for a sovereign debt manager to see the implications of a specific debt management strategy for expected costs and risk; a simulation model can therefore be a valuable tool. This study investigates how future economic data such as yield curves, foreign exchange rates and CPI can be simulated, and how a portfolio optimization model can be used by a sovereign debt office that mainly uses financial derivatives to alter its strategy. The programming language R is used to develop bespoke software for the Swedish National Debt Office; however, the method can be useful for any debt manager. The model performs well when calculating the risk implications of different strategies, but debt managers who use this software to find optimal strategies must understand the model's limitations in calculating expected costs. The part of the code that simulates economic data is developed as a separate module and can thus be used for other studies; key parts of the code are available in the appendix of this paper. Foreign currency exposure is the factor with the largest effect on both expected cost and risk; moreover, the model does not find any cost advantage in issuing inflation-protected debt. The opinions expressed in this thesis are the sole responsibility of the author and should not be interpreted as reflecting the views of the Swedish National Debt Office.
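A minimal sketch of the simulation idea — draw correlated shocks to yields, exchange rates and CPI, then compare the cost distributions of candidate strategies — is given below. All parameters and strategies are hypothetical, and the sketch is in Python even though the thesis implements a much richer model in R.

```python
# Minimal Monte Carlo sketch of comparing debt strategies: simulate
# correlated shocks to a yield, an FX rate and CPI, then inspect the cost
# distribution per strategy. Every number here is a made-up placeholder.
import numpy as np

rng = np.random.default_rng(3)
n_sims, horizon = 10_000, 40            # 40 quarters
corr = np.array([[1.0, 0.3, 0.2],       # yield, FX, CPI shock correlations
                 [0.3, 1.0, 0.1],
                 [0.2, 0.1, 1.0]])
L = np.linalg.cholesky(corr)
shocks = rng.standard_normal((n_sims, horizon, 3)) @ L.T

# Hypothetical strategies: share of debt exposed to foreign currency.
for fx_share in (0.0, 0.2, 0.4):
    yield_cost = 0.02 + 0.002 * shocks[..., 0].cumsum(axis=1)
    fx_cost = fx_share * 0.05 * shocks[..., 1]
    total = (yield_cost + fx_cost).mean(axis=1)    # average annual cost
    print(f"FX share {fx_share:.0%}: mean {total.mean():.4f}, "
          f"95th pct {np.quantile(total, 0.95):.4f}")
```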
149

Multiple Time Series Analysis of Freight Rate Indices / Multipel tidsserieanalys av fraktratsindex

Koller, Simon January 2020
In this master thesis, multiple time series of shipping-industry and financial data are analysed in order to create a model for forecasting freight rate indices. The main series to be predicted are the two freight rate indices, BDI and BDTI, from the Baltic Exchange. The project investigates whether aggregated Vector Autoregression (VAR) models can outperform simple univariate models, in this case an Autoregressive Integrated Moving Average (ARIMA) model with seasonal components. The other part of this thesis models market shocks in the freight rate indices, given impulses in the other time series underlying the VAR models, using the impulse response function. The main results are that the VAR-model forecast outperforms the ARIMA model in forecasting the tanker freight rate index (BDTI), while the bulk freight rate index (BDI) is better predicted by the simple ARIMA model, as measured by forecast mean square error.
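The comparison described above can be sketched as follows: fit a VAR and a seasonal ARIMA on a training window, compare hold-out mean square errors, and inspect impulse responses. Series names, model orders and the horizon are illustrative assumptions, not the thesis's specification.

```python
# Sketch of the VAR vs. seasonal-ARIMA forecast comparison with an IRF call.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("freight.csv", index_col="date")   # assumed file/columns
df = df[["bdti", "oil_price", "fleet_supply"]].dropna()
h = 12
train, test = df.iloc[:-h], df["bdti"].iloc[-h:]

var_res = VAR(train).fit(maxlags=6, ic="aic")
var_fc = var_res.forecast(train.values[-var_res.k_ar:], steps=h)[:, 0]

sar_res = SARIMAX(train["bdti"], order=(1, 1, 1),
                  seasonal_order=(1, 0, 1, 12)).fit(disp=False)
sar_fc = sar_res.forecast(steps=h)

print("VAR MSE:  ", np.mean((var_fc - test.values) ** 2))
print("ARIMA MSE:", np.mean((sar_fc.values - test.values) ** 2))

# Impulse responses of BDTI to shocks in the other series, 10 steps out.
var_res.irf(10).plot(impulse="oil_price", response="bdti")
```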
150

Neural Ordinary Differential Equations for Anomaly Detection / Neurala Ordinära Differentialekvationer för Anomalidetektion

Hlöðver Friðriksson, Jón, Ågren, Erik January 2021
Today, a large amount of time series data is being produced by a variety of devices such as smart speakers, cell phones and vehicles. This data can be used to make inferences and predictions. Neural-network-based methods are among the most popular ways to model time series data. The field of neural networks is constantly expanding, and new methods and model variants are frequently introduced. In 2018, a new family of neural networks was introduced: Neural Ordinary Differential Equations (Neural ODEs). Neural ODEs have shown great potential in modelling the dynamics of temporal data. Here we present an investigation into using Neural Ordinary Differential Equations for anomaly detection. We tested two model variants, LSTM-ODE and latent-ODE. The former utilises a neural ODE to model the continuous-time hidden state between observations of an LSTM model; the latter is a variational autoencoder that uses the LSTM-ODE as encoder and a neural ODE as decoder. Both models are suited to modelling sparsely and irregularly sampled time series data. Here, we test their ability to detect anomalies at various levels of sparsity and irregularity in the data. The models are compared to a Gaussian mixture model, a vanilla LSTM model and an LSTM variational autoencoder. Experimental results using the Human Activity Recognition dataset showed that the Neural ODE-based models obtained a better ability to detect anomalies than their LSTM-based counterparts. However, the computational training cost of the Neural ODE models was considerably higher than for the models that only utilise the LSTM architecture. The Neural ODE-based methods also consumed more memory than their LSTM counterparts.
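To make the continuous-time hidden-state idea concrete, the following minimal sketch uses a small network to define the hidden dynamics and integrates it between irregular observation times, scoring anomalies by prediction error. It assumes the torchdiffeq package and simplifies the models compared in the thesis considerably.

```python
# Minimal Neural ODE sketch in the spirit of the LSTM-ODE idea: a network
# defines dh/dt, and torchdiffeq integrates it between irregular observation
# times. Scoring by prediction error is a simplification, not the thesis model.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumes the torchdiffeq package is installed

class ODEFunc(nn.Module):
    """dh/dt = f(h): learned continuous-time hidden dynamics."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(),
                                 nn.Linear(32, dim))
    def forward(self, t, h):
        return self.net(h)

func = ODEFunc(dim=4)
h0 = torch.zeros(1, 4)                        # initial hidden state
t_obs = torch.tensor([0.0, 0.3, 0.35, 1.2])   # irregular observation times

# Hidden-state trajectory evaluated exactly at the observation times.
h_traj = odeint(func, h0, t_obs)              # shape: (len(t_obs), 1, 4)

readout = nn.Linear(4, 1)                     # maps hidden state to signal
pred = readout(h_traj).squeeze()
x_obs = torch.tensor([0.0, 0.1, 0.12, 0.9])   # toy observations
score = (pred - x_obs) ** 2                   # large error => anomalous point
print(score.detach())
```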
