About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Alternative approaches to trend estimation

Salter, Stephen James January 1996 (has links)
This thesis suggests a general approach for estimating the trend of a univariate time series. It begins by suggesting and defining a set of "desirable" trend properties, namely "Fidelity", "Smoothness", "Invariance" and "Additivity", which are then incorporated into the design of an appropriate non-stationary time series model. The unknown parameters of the model are then estimated using a wide selection of "optimal" procedures, each parameter having at least two such procedures applied to it. Attention is paid to the development of algorithms to implement the procedures in practice. The model is gradually extended from a basic, non-seasonal model consisting of a simple lagged trend to a general, seasonal model incorporating a variable parameter, general autoregressive trend.
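The thesis's exact estimation procedures are not reproduced in this abstract, but the trade-off it names between "Fidelity" and "Smoothness" can be illustrated with a standard penalized trend estimator (the Hodrick-Prescott/Whittaker form); this is a minimal sketch, not the thesis's own method, and the penalty weight `lam` is an arbitrary choice:

```python
import numpy as np

def penalized_trend(y, lam=100.0):
    """Estimate a trend g by minimising
        sum_t (y_t - g_t)^2  +  lam * sum_t (g_{t+1} - 2 g_t + g_{t-1})^2
    i.e. fidelity to the data plus a smoothness penalty on second
    differences. Closed form: g = (I + lam * D'D)^{-1} y, where D is the
    second-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

t = np.arange(100)
rng = np.random.default_rng(0)
y = 0.05 * t + 0.2 * np.sin(t / 5.0) + rng.normal(0, 0.1, 100)
trend = penalized_trend(y, lam=100.0)
```

Note that a linear series has zero second differences, so it is left untouched by the smoothness penalty regardless of `lam` — one form of the "Invariance" property the abstract mentions.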
2

The Effect Of Temporal Aggregation On Univariate Time Series Analysis

Sariaslan, Nazli 01 September 2010 (has links) (PDF)
Most time series are constructed by some form of aggregation; temporal aggregation can be defined as aggregation over consecutive time periods. Temporal aggregation plays an important role in time series analysis, since the choice of time unit clearly influences the type of model and the forecast results: a completely different model may fit the same variable over different time periods. In this thesis, the effect of temporal aggregation on univariate time series models is studied, considering the modelling and forecasting procedure via a simulation study and an application based on a southern oscillation data set. The simulation study shows how the model, the mean square forecast error and the estimated parameters change when temporally aggregated data are used, for different orders of aggregation and sample sizes. The effect of temporal aggregation is also demonstrated on the southern oscillation data set for different orders of aggregation. It is observed that temporal aggregation should be taken into account in data analysis, since it can give rise to misleading results and inferences.
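The effect the abstract describes is easy to see in miniature: aggregating an AR(1) series over blocks of consecutive periods changes its autocorrelation structure, so a model fitted to the aggregated data differs from the model generating the disaggregated data. This sketch (not from the thesis; the process and aggregation orders are illustrative) uses non-overlapping averaging as the aggregation scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(phi, n, sigma=1.0):
    """Simulate y_t = phi * y_{t-1} + e_t, e_t ~ N(0, sigma^2)."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(0, sigma)
    return y

def aggregate(y, m):
    """Temporal aggregation: average over non-overlapping blocks of m periods."""
    n = len(y) // m * m
    return y[:n].reshape(-1, m).mean(axis=1)

def lag1_autocorr(y):
    y = y - y.mean()
    return (y[:-1] * y[1:]).sum() / (y ** 2).sum()

y = simulate_ar1(phi=0.9, n=20000)
for m in (1, 4, 12):
    print(m, round(lag1_autocorr(aggregate(y, m)), 3))
```

As the order of aggregation `m` grows, the lag-1 autocorrelation of the aggregated series falls well below the 0.9 of the underlying process, so an analyst seeing only the aggregate would fit a much less persistent model.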
3

Growth theories and the persistence of output fluctuations. The case of Austria.

Ragacs, Christian, Steinberger, Thomas, Zagler, Martin January 1998 (has links) (PDF)
The paper analyses the degree of output persistence in GDP in order to empirically discriminate between the Solow growth model, the perfect competition endogenous growth model, the imperfect competition endogenous growth model, and the subcase of a multiple equilibria model of endogenous growth for the case of Austria. We find that a temporary shock in the growth rate of output induces a permanent and larger effect on the level of GDP. This leads us to refute the Solow growth model and the perfect competition model. We find strong empirical support for the imperfect competition growth model, but cannot fully rule out the possibility of multiple equilibria growth rates. / Series: Department of Economics Working Paper Series
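The paper's econometric specification is not given in this abstract, but the central quantity — a temporary growth shock having a permanent and larger effect on the level of GDP — corresponds to a long-run multiplier above one. A minimal sketch on simulated data (the AR(1) growth process and all parameter values here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated log-GDP whose growth rate follows an AR(1): growth shocks persist
phi_true = 0.4
g = np.zeros(400)
for t in range(1, 400):
    g[t] = 0.005 + phi_true * g[t - 1] + rng.normal(0, 0.01)
log_gdp = np.cumsum(g)

# OLS regression of growth on its own lag
growth = np.diff(log_gdp)
X = np.column_stack([np.ones(len(growth) - 1), growth[:-1]])
beta = np.linalg.lstsq(X, growth[1:], rcond=None)[0]
phi_hat = beta[1]

# Cumulative long-run effect on the *level* of a unit shock to growth:
# 1 / (1 - phi). A value above 1 means a temporary growth shock has a
# permanent and larger effect on the level of GDP.
long_run = 1.0 / (1.0 - phi_hat)
```

A long-run multiplier above one is the pattern the authors read as evidence against the Solow and perfect-competition models, in which temporary shocks should wash out of the level.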
4

Temporal and Spatial Analysis of Water Quality Time Series

Khalil Arya, Farid January 2015 (has links)
No description available.
5

Unsupervised anomaly detection for aircraft health monitoring system

Dani, Mohamed Cherif 10 March 2017 (has links)
The limits of technical and fundamental knowledge are a daily reality for industry, and keeping that knowledge up to date is essential both for economic competitiveness and for the reliability and maintainability of systems and machines. Thanks to these systems, data are now expanding dramatically in quantity and in frequency of generation: Airbus aircraft, for example, carry hundreds or even thousands of sensors in their latest generations and generate hundreds of megabytes of data per flight. These sensor data are exploited on the ground or in flight to monitor the state and health of the aircraft and to detect failures, incidents and changes. Such failures, incidents and changes are known collectively as anomalies: behaviour that does not conform to the normal behaviour of the data, variously defined as a deviation from a normal model or as a change. Whatever the definition, detecting anomalies is essential to the correct operation of the aircraft. On-board anomaly detection is currently provided by several aeronautical monitoring systems; one of them, the Aircraft Condition Monitoring System (ACMS), continuously records the sensor data and monitors the aircraft in real time through triggers and thresholds programmed by airlines or system designers from prior (physical, mechanical) knowledge of the system. Several constraints limit this approach. The classic problem of limited expert knowledge means that a trigger detects only the anomalies it was designed for, and the triggers do not cover all system conditions: if a new condition appears, for instance after a maintenance action or a part replacement, the trigger cannot adapt and may label the entire new condition as anomalous, generating false alarms. The triggers are also static, unable to adapt their properties to each new condition. Further problems and constraints are discussed progressively in the following chapters.
The main objective of this work is to detect anomalies and changes in the sensor data in order to improve the Aircraft Health Monitoring (AHM) function. The work is based on a two-stage analysis. The first stage is a univariate analysis in an unsupervised setting (no prior knowledge of the system, documentation or labelled classes being available), which focuses on each sensor independently and detects its anomalies and changes. The second stage is a multivariate analysis based on density clustering, whose objective is to filter out some of the anomalies detected in the first stage (false alarms) and to detect suspicious group behaviours. The anomalies detected at both stages can serve as new triggers or be used to update existing ones. We also propose a generic anomaly-detection concept built on the univariate and multivariate stages, and finally a new concept for validating anomalies within Airbus. The method is tested on real and synthetic data; the results and the identification and validation of the anomalies are discussed in this thesis.
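The abstract does not name the univariate detectors it uses, so the following is only a stand-in sketch of the first (univariate, unsupervised) stage: a rolling robust z-score that flags sensor readings deviating sharply from a trailing reference window. Window size and threshold are arbitrary choices, not values from the thesis:

```python
import numpy as np

def rolling_zscore_anomalies(x, window=50, thresh=4.0):
    """Flag points whose deviation from the median of a trailing window
    exceeds `thresh` robust standard deviations (MAD-based). No labels or
    prior knowledge of the sensor are required."""
    flags = np.zeros(len(x), dtype=bool)
    for t in range(window, len(x)):
        ref = x[t - window:t]
        med = np.median(ref)
        mad = np.median(np.abs(ref - med)) + 1e-9   # robust scale estimate
        flags[t] = abs(x[t] - med) / (1.4826 * mad) > thresh
    return flags

rng = np.random.default_rng(0)
signal = rng.normal(0, 1, 500)     # synthetic "healthy" sensor signal
signal[300] += 12.0                # injected spike standing in for an anomaly
flags = rolling_zscore_anomalies(signal)
```

In the thesis's scheme, flags like these from each sensor would then be passed to the multivariate density-clustering stage, which filters isolated false alarms and keeps groups of co-occurring anomalies.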
6

Elastic matching for classification and modelisation of incomplete time series / Appariement élastique pour la classification et la modélisation de séries temporelles incomplètes

Phan, Thi-Thu-Hong 12 October 2018 (has links)
Missing data are a prevalent problem in many domains of pattern recognition and signal processing. Most existing techniques cannot process incomplete datasets, and missing data produce a loss of information, difficulty in correctly interpreting the remaining data, and biased or unreliable results, especially when large sub-sequences are missing. This thesis therefore focuses on imputing large consecutive missing values in univariate and in low- or un-correlated multivariate time series. We first investigate an imputation method for univariate series based on the combination of a shape-feature extraction algorithm with an elastic matching measure (DTW, Dynamic Time Warping): a query window enclosing the gap (before/after) is compared against the rest of the series to retrieve similar values. Elastic matching preserves as far as possible the dynamics and shape of the known data, while the shape-feature extraction step reduces computing time. The approach is released as an R CRAN package, DTWBI, and is then extended to large successive missing values in low/un-correlated multivariate time series (DTWUMI, also released as an R package). Both approaches were compared with classical and recent methods from the literature and shown to respect the shape and dynamics of the signal. We next introduce FSMUMI, a method for the same multivariate setting that manages a high level of uncertainty in the shape and amplitude of the signal, based on a fuzzy combination of classical similarity measures and a set of fuzzy rules. Finally, these approaches are applied to marine and meteorological data in several contexts: supervised classification of phytoplankton cytograms, unsupervised segmentation into environmental states of a set of 19 sensors from the MAREL CARNOT marine station in France (with DTWBI used to complete the MAREL Carnot dataset and detect rare/extreme events in it), and forecasting of various meteorological univariate time series collected in Vietnam.
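The core DTWBI idea — locate, via DTW, the historical window most similar to the window preceding the gap, then copy the values that followed that window into the gap — can be sketched as follows. This is a bare-bones illustration in Python rather than the authors' R package, and it omits the shape-feature pre-filtering step the thesis uses to cut computing time:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def impute_gap(x, gap_start, gap_len, query_len=30):
    """Fill x[gap_start:gap_start+gap_len] by finding, via DTW, the past
    window most similar to the window just before the gap, and copying
    the values that followed that window."""
    query = x[gap_start - query_len:gap_start]
    best, best_pos = np.inf, None
    for s in range(0, gap_start - query_len - gap_len):
        d = dtw_distance(query, x[s:s + query_len])
        if d < best:
            best, best_pos = d, s
    filled = x.copy()
    filled[gap_start:gap_start + gap_len] = x[best_pos + query_len:
                                              best_pos + query_len + gap_len]
    return filled

t = np.arange(300)
x = np.sin(2 * np.pi * t / 50)     # strongly periodic synthetic series
truth = x.copy()
x[200:220] = np.nan                # simulate a large consecutive gap
x_filled = impute_gap(x, 200, 20)
```

On a periodic signal like this one the retrieved window matches an earlier cycle exactly, so the gap is reconstructed with the signal's shape and dynamics intact — the property the thesis emphasises over, say, linear interpolation.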
7

PERFORMANCE EVALUATION OF UNIVARIATE TIME SERIES AND DEEP LEARNING MODELS FOR FOREIGN EXCHANGE MARKET FORECASTING: INTEGRATION WITH UNCERTAINTY MODELING

Wajahat Waheed (11828201) 13 December 2021 (has links)
The foreign exchange market is the largest financial market in the world, so prediction of exchange rate values is of interest to millions of people. In this research, I evaluated the performance of Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Autoregressive Integrated Moving Average (ARIMA) and Moving Average (MA) models on the USD/CAD and USD/AUD exchange pairs for 1-day, 1-week and 2-week predictions. For LSTM and GRU, twelve macroeconomic indicators along with past exchange rate values were used as features, using data from January 2001 to December 2019. Predictions from each model were then integrated with uncertainty modeling to find the chance of a model's prediction being greater or less than a user-defined target value, using the error distribution from the test dataset, Monte-Carlo simulation trials and the ChancCalc Excel add-in. Results showed that ARIMA performs slightly better than LSTM and GRU for 1-day predictions for both exchange pairs. However, when the horizon is increased to 1 week and 2 weeks, LSTM and GRU outperform both ARIMA and moving average for both pairs.
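The uncertainty-modeling step the abstract describes — turning a point forecast plus the test-set error distribution into the chance of beating a target value — can be sketched by Monte-Carlo resampling of the empirical errors. The error scale, forecast and target below are made up for illustration; the thesis's actual error distributions come from its fitted models:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical test-set forecast errors (forecast minus actual); the
# ~0.4% daily scale is an assumption, not a result from the thesis.
test_errors = rng.normal(0.0, 0.004, 250)

def chance_above(point_forecast, target, errors, trials=100_000):
    """Estimate P(actual rate > target) by resampling the empirical error
    distribution around the point forecast (actual = forecast - error)."""
    simulated = point_forecast - rng.choice(errors, size=trials, replace=True)
    return (simulated > target).mean()

p = chance_above(point_forecast=1.3500, target=1.3520, errors=test_errors)
```

The same resampling gives `1 - p` as the chance of the rate landing at or below the target, which is how a user-defined threshold turns a raw forecast into a decision aid.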
