61

Monitoring energy performance in local authority buildings

Stuart, Graeme January 2011 (has links)
Energy management has been an important function of organisations since the oil crisis of the mid-1970s led to hugely increased energy costs. Although the financial cost of energy is still important, the environmental cost of fossil-fuel energy is increasingly recognised as the greater concern. Legislation is also a key driver: the UK has set an ambitious greenhouse gas (GHG) reduction target of 80% below 1990 levels by 2050 in response to a strong international commitment to reduce GHG emissions globally. This work is concerned with the management of energy consumption in buildings through the analysis of energy consumption data. Buildings are a key source of emissions, with a wide range of energy-consuming equipment, such as photocopiers, refrigerators, boilers, air-conditioning plant and lighting, delivering services to the building occupants. Energy wastage can be identified through an understanding of consumption patterns and, in particular, of changes in these patterns over time. Changes in consumption patterns may have any number of causes: a fault in heating controls; a boiler or lighting replacement scheme; or a change in working practice entirely unrelated to energy management. Standard data analysis techniques such as degree-day modelling and CUSUM provide a means to measure and monitor consumption patterns. These techniques were designed for use with monthly billing data, whereas modern energy metering systems automatically generate data at half-hourly or better resolution. Standard techniques are not designed to capture the detailed information contained in this comparatively high-resolution data, and the introduction of automated metering also introduces the need for automated analysis. This work assumes that consumption patterns are generally consistent in the short term but will inevitably change, and understanding these changes is critical to energy management. A novel statistical method is developed which builds automated event detection into a consumption modelling algorithm. Leicester City Council has provided half-hourly data from over 300 buildings covering up to seven years of consumption (a total of nearly 50 million meter readings). Automatic event detection pinpoints and quantifies over 5,000 statistically significant events in the Leicester dataset. It is shown that the total impact of these events is a decrease in overall consumption. Viewing consumption patterns in this way allows for a new, event-oriented approach to energy management, where large datasets are automatically and rapidly analysed to produce summary metadata describing their salient features. These event-oriented metadata can be used to navigate the raw data event by event and are highly complementary to strategic energy management.
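To make the standard techniques mentioned above concrete, the sketch below fits a simple degree-day regression to monthly consumption and tracks the CUSUM of its residuals. It is an illustration only, not the event-detection algorithm developed in the thesis; the monthly figures and the linear model are assumptions for the example.

```python
import numpy as np

# Hypothetical monthly data: heating degree-days and metered consumption (kWh).
degree_days = np.array([320, 280, 250, 180, 90, 40, 20, 30, 80, 170, 260, 310], dtype=float)
consumption = np.array([5200, 4700, 4300, 3300, 1900, 1100, 800, 950, 1800, 3100, 4400, 5100], dtype=float)

# Degree-day model: consumption = base_load + slope * degree_days (ordinary least squares).
slope, base_load = np.polyfit(degree_days, consumption, 1)
expected = base_load + slope * degree_days

# CUSUM of residuals: a sustained drift away from zero suggests a change
# in the consumption pattern (e.g. a control fault or a retrofit).
residuals = consumption - expected
cusum = np.cumsum(residuals)

for month, value in enumerate(cusum, start=1):
    print(f"month {month:2d}: CUSUM = {value:8.1f} kWh")
```

A CUSUM trace that drifts steadily away from zero after some month is the classic sign that the underlying consumption pattern has changed.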
62

Learning and smoothing in switching Markov models with copulas

Zheng, Fei 18 December 2017 (has links)
Switching Markov models, also called Jump Markov Systems (JMS), are widely used in many fields such as target tracking, seismic signal processing and finance, since they can approximate non-Gaussian non-linear systems. A considerable amount of related work studies linear JMS in which data restoration is achieved by Markov Chain Monte Carlo (MCMC) methods. In this dissertation, we seek restoration solutions for JMS that are alternatives to MCMC methods. Our main contribution has two parts. Firstly, an algorithm for unsupervised restoration of a recent linear JMS known as the Conditionally Gaussian Pairwise Markov Switching Model (CGPMSM) is proposed. This algorithm combines a parameter estimation method named Double EM, based on applying the Expectation-Maximization (EM) principle twice in sequence, with an efficient approach for smoothing with the estimated parameters. Secondly, we extend a specific sub-model of CGPMSM, the Conditionally Gaussian Observed Markov Switching Model (CGOMSM), to a more general one, the Generalized Conditionally Observed Markov Switching Model (GCOMSM), by introducing copulas. Compared with CGOMSM, the proposed GCOMSM admits inherently more flexible distributions and non-linear structures, while optimal restoration remains feasible. In addition, an identification method called GICE-LS, based on the Generalized Iterative Conditional Estimation (GICE) and Least-Squares (LS) principles, is proposed so that GCOMSM can approximate non-Gaussian non-linear systems from sample data. All proposed methods are tested by simulation. Moreover, the performance of GCOMSM is assessed by application to other general non-Gaussian non-linear Markov models, for example stochastic volatility models, which are of great importance in finance.
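As background to the copula extension described above, the following sketch shows the basic construction GCOMSM relies on: a Gaussian copula couples two arbitrary margins while preserving their dependence. It is not the GCOMSM model or the GICE-LS estimator; the correlation value and the choice of margins are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Gaussian copula with correlation rho, used to couple two non-Gaussian margins.
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)

# Probability-integral transform: map the Gaussian sample to uniforms...
u = stats.norm.cdf(z)

# ...then to arbitrary margins (here exponential and Student-t as examples).
x = stats.expon.ppf(u[:, 0], scale=2.0)
y = stats.t.ppf(u[:, 1], df=4)

# The rank correlation induced by the copula survives the change of margins.
print("Spearman rho:", stats.spearmanr(x, y)[0])
```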
63

Sieve Bootstrap Inference Based on GMM Estimators of Time Series Data

劉祝安, Liu, Chu-An Unknown Date (has links)
In this paper, we propose two types of sieve bootstrap, a univariate and a multivariate approach, for generalized method of moments estimators of time series data. Compared with the nonparametric block bootstrap, the sieve bootstrap is in essence parametric, which helps fit the data better when researchers have prior information about the time series properties of the variables of interest. Our Monte Carlo experiments show that the performance of these two types of sieve bootstrap is comparable to that of the block bootstrap. Furthermore, unlike the block bootstrap, which is sensitive to the choice of block length, the two types of sieve bootstrap are less sensitive to the choice of lag length.
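For readers unfamiliar with the resampling scheme, the sketch below shows a minimal univariate sieve bootstrap: an autoregression is fitted to the series and pseudo-series are rebuilt from resampled residuals. The GMM estimation step and the multivariate variant studied in the paper are omitted, and the AR order and toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sieve_bootstrap(series, p, n_boot):
    """Univariate sieve bootstrap: fit an AR(p) by least squares, then
    rebuild pseudo-series from resampled, centred residuals."""
    n = len(series)
    y = series[p:]
    X = np.column_stack([series[p - k - 1:n - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(y)), X])        # intercept + p lags
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    resid = resid - resid.mean()

    replicates = []
    for _ in range(n_boot):
        eps = rng.choice(resid, size=n, replace=True)
        boot = list(series[:p])                      # warm start with real values
        for t in range(p, n):
            lags = [boot[t - k - 1] for k in range(p)]
            boot.append(coef[0] + np.dot(coef[1:], lags) + eps[t])
        replicates.append(np.array(boot))
    return replicates

# Toy AR(1) data standing in for the series entering the moment conditions.
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.6 * x[t - 1] + rng.normal()

reps = sieve_bootstrap(x, p=2, n_boot=100)
print("bootstrap mean of series means:", np.mean([r.mean() for r in reps]))
```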
64

Combining the Internet of Things, complex event processing, and time series classification for proactive business process management

Mousheimish, Raef 27 October 2017 (has links)
The Internet of Things is at the core of smart industrial processes thanks to its capacity for event detection from data conveyed by sensors. However, much remains to be done to make the most of this recent technology and make it scale. This thesis aims at filling the gap between the massive data flows collected by sensors and their effective exploitation in business process management. It proposes a global approach which combines stream data processing, supervised learning and/or the use of complex event processing rules to predict (and thereby avoid) undesirable events, and finally business process management extended with these complex rules. The scientific contributions of this thesis lie in several areas: making business processes more intelligent and more dynamic; automating complex event processing by learning the rules; and, last but not least, data mining for multivariate time series through early prediction of risks. The target application of this thesis is the instrumented transportation of artworks.
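The kind of complex-event rule the approach learns and injects into the business process can be pictured with a small, hypothetical example; the sensor names, thresholds and window size below are invented for illustration and are not taken from the thesis.

```python
from collections import deque

# Hypothetical complex-event rule: raise an alert when vibration exceeds a
# threshold at least 3 times within a 10-reading window while the shock
# sensor is also active -- a stand-in for a learned "risk" pattern.
WINDOW, THRESHOLD, MIN_HITS = 10, 0.8, 3

def detect(events):
    window = deque(maxlen=WINDOW)
    for event in events:                 # event = (vibration, shock_active)
        window.append(event)
        hits = sum(1 for v, s in window if v > THRESHOLD and s)
        if hits >= MIN_HITS:
            yield "ALERT: predicted risk, adapt the transport process"

stream = [(0.2, False), (0.9, True), (0.85, True), (0.4, False), (0.95, True)]
for alert in detect(stream):
    print(alert)
```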
65

Multivariate Time Series Data Generation using Generative Adversarial Networks : Generating Realistic Sensor Time Series Data of Vehicles with an Abnormal Behaviour using TimeGAN

Nord, Sofia January 2021 (has links)
Large datasets are a crucial requirement for achieving high performance, accuracy, and generalisation in any machine learning task, such as prediction or anomaly detection. However, it is not uncommon for datasets to be small or imbalanced, since gathering data can be difficult, time-consuming, and expensive. In the task of collecting vehicle sensor time series data, in particular when the vehicle has an abnormal behaviour, these struggles are present and may hinder the automotive industry in its development. Synthetic data generation has become a growing interest among researchers in several fields as a way to handle the difficulties of data gathering. Among the methods explored for generating data, generative adversarial networks (GANs) have become a popular approach due to their wide application domain and successful performance. This thesis focuses on generating multivariate time series data similar to vehicle sensor readings of the air pressures in the brake system of vehicles with an abnormal behaviour, meaning there is a leakage somewhere in the system. A GAN architecture called TimeGAN was trained to generate such data and was then evaluated using both qualitative and quantitative evaluation metrics. Two versions of this model were tested and compared. The results show that both models learnt the distribution and the underlying information within the features of the real data. The goal of the thesis was achieved, and the work can serve as a foundation for future research in this field.
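For orientation, the sketch below shows a heavily simplified GAN training loop over sliding windows of multivariate series, written in PyTorch. It is not TimeGAN, which additionally trains an embedding/recovery network and a stepwise supervised loss; the layer sizes, window length and random stand-in data are assumptions.

```python
import torch
import torch.nn as nn

SEQ_LEN, N_FEATURES, NOISE_DIM, HIDDEN = 24, 4, 16, 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(NOISE_DIM, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, N_FEATURES)
    def forward(self, z):
        h, _ = self.rnn(z)
        return torch.sigmoid(self.out(h))            # values scaled to [0, 1]

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_FEATURES, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, 1)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h[:, -1])                    # one logit per window

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Random stand-in for min-max-scaled sensor windows (batch, time, feature).
real = torch.rand(64, SEQ_LEN, N_FEATURES)

for step in range(200):
    # Discriminator update: real windows vs generated windows.
    z = torch.randn(real.size(0), SEQ_LEN, NOISE_DIM)
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: try to fool the discriminator.
    z = torch.randn(real.size(0), SEQ_LEN, NOISE_DIM)
    loss_g = bce(D(G(z)), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```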
66

Assessing Query Execution Time and Implementational Complexity in Different Databases for Time Series Data

Jama Mohamud, Nuh, Söderström Broström, Mikael January 2024 (has links)
Traditional database management systems are designed for general-purpose data handling and fail to work efficiently with time-series data due to characteristics such as high volume, rapid ingestion rates, and a focus on temporal relationships. However, which solution is best is not a trivial question to answer. Hence, this thesis analyses four different Database Management Systems (DBMS) to determine their suitability for managing time series data, with a specific focus on Internet of Things (IoT) applications. The DBMSs examined are PostgreSQL, TimescaleDB, ClickHouse, and InfluxDB. The thesis evaluates query performance across varying dataset sizes and time ranges, as well as the implementational complexity of each DBMS. The benchmarking results indicate that InfluxDB consistently delivers the best performance, though it involves higher implementational complexity and time consumption. ClickHouse emerges as a strong alternative, with the second-best performance and the simplest implementation. The thesis also identifies potential biases in the benchmarking tools and suggests that TimescaleDB's performance may have been affected by configuration errors. The findings provide significant insights into the performance metrics and implementation challenges of the selected DBMSs. Despite limitations in fully addressing the research questions, the thesis offers a valuable overview of the examined DBMSs in terms of performance and implementational complexity. These results should be considered alongside additional research when selecting a DBMS for time series data.
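A minimal version of the kind of timing harness used for such comparisons might look as follows; the query text and connection handling are hypothetical, and the thesis's actual benchmarking tooling is not reproduced here.

```python
import statistics
import time

def benchmark(run_query, repetitions=30, warmup=5):
    """Time a query callable: a few warm-up runs, then wall-clock timings."""
    for _ in range(warmup):
        run_query()
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(timings),
        "p95_s": sorted(timings)[int(0.95 * len(timings)) - 1],
    }

# Hypothetical usage with any DB-API driver (query and connection are assumptions):
# cur = connection.cursor()
# print(benchmark(lambda: cur.execute(
#     "SELECT avg(value) FROM readings WHERE ts >= now() - interval '1 day'")))

# Self-contained demo with a dummy workload so the sketch runs as-is.
print(benchmark(lambda: sum(range(100_000))))
```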
67

Abandoned by Home and Burden of Host: Evaluating States' Economic Ability and Refugee Acceptance through Panel Data Analysis

Tabassum, Ummey Hanney January 2018 (has links)
No description available.
68

Sign of the Times : Unmasking Deep Learning for Time Series Anomaly Detection

Richards Ravi Arputharaj, Daniel January 2023 (has links)
Time series anomaly detection has been a longstanding area of research with applications across various domains. In recent years, there has been a surge of interest in applying deep learning models to this problem domain. This thesis presents a critical examination of the efficacy of deep learning models in comparison to classical approaches for time series anomaly detection. Contrary to the widespread belief in the superiority of deep learning models, our research findings suggest that their performance may be misleading and the progress illusory. Through rigorous experimentation and evaluation, we reveal that classical models outperform their deep learning counterparts in various scenarios, challenging the prevailing assumptions. In addition to model performance, our study delves into the intricacies of the evaluation metrics commonly employed in time series anomaly detection. We uncover how they inadvertently inflate the performance scores of models, potentially leading to misleading conclusions. By identifying and addressing these issues, our research provides valuable insights for researchers, practitioners, and decision-makers in the field of time series anomaly detection, encouraging a critical re-evaluation of the role of deep learning models and the metrics used to assess their performance.
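The abstract does not name the scoring protocol, but one widely used convention in this literature is "point adjustment", under which a single detected point inside a true anomaly segment counts the entire segment as detected. The sketch below, built on synthetic labels, shows how that convention can lift an almost-random detector to a high F1 score; it illustrates the general phenomenon and does not reproduce the thesis experiments.

```python
import numpy as np

def f1(labels, preds):
    tp = np.sum((preds == 1) & (labels == 1))
    fp = np.sum((preds == 1) & (labels == 0))
    fn = np.sum((preds == 0) & (labels == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def point_adjust(labels, preds):
    """If any point inside a ground-truth anomaly segment is flagged,
    mark the whole segment as detected (a common scoring convention)."""
    adjusted = preds.copy()
    start = None
    for i, lab in enumerate(np.append(labels, 0)):
        if lab and start is None:
            start = i
        elif not lab and start is not None:
            if adjusted[start:i].any():
                adjusted[start:i] = 1
            start = None
    return adjusted

rng = np.random.default_rng(0)
labels = np.zeros(1000, dtype=int)
labels[100:200] = 1                              # one long anomaly segment
preds = (rng.random(1000) < 0.02).astype(int)    # an almost-random detector

print("plain F1:         ", round(f1(labels, preds), 3))
print("point-adjusted F1:", round(f1(labels, point_adjust(labels, preds)), 3))
```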
69

A New Algorithm for Time Synchronization of Event-Based Time Series Data as an Alternative to Cross-Correlation

Schranz, Christoph, Mayr, Sebastian 14 October 2022 (has links)
With the use of sensor data from multiple sources, the need to synchronize the resulting measurement series often arises. A standard method for this is cross-correlation, but it requires matching timestamps and is sensitive to outliers. This paper therefore presents an alternative algorithm for the synchronization of event-based time series data.
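The baseline the paper argues against can be illustrated in a few lines: estimate the offset between two regularly sampled series as the argmax of their cross-correlation. The synthetic series and the 25-sample delay below are assumptions for the example; the event-based algorithm proposed in the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical measurement series on the same regular grid,
# the second one delayed by 25 samples and corrupted with noise.
n, true_lag = 500, 25
a = rng.normal(size=n)
b = np.roll(a, true_lag) + 0.3 * rng.normal(size=n)

# Standard approach: full cross-correlation, with the argmax giving
# the estimated delay of b relative to a.
a0, b0 = a - a.mean(), b - b.mean()
xcorr = np.correlate(b0, a0, mode="full")
lags = np.arange(-(n - 1), n)
estimated_lag = lags[np.argmax(xcorr)]
print(f"estimated lag: {estimated_lag} samples (true lag: {true_lag})")
```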
70

The fertility of registered Indian women in Canada by historic treaty of affiliation

Landry, Maude 03 1900 (has links)
This research documents the fertility of registered Indian women in Canada in relation to their affiliation with historic treaties. The historic treaties are legal agreements between the Government of Canada and certain members of the First Nations which describe, among other things, the lands surrendered and the related compensation. Although the treaties have mainly a legal role, they also group Indigenous peoples sharing similar cultural, linguistic, socioeconomic, territorial and historical characteristics. We used anonymized data extracted from the Indian Register to produce the total fertility rate (TFR) for the population affiliated with each historic treaty for the periods 1994-1998, 1999-2003 and 2004-2008. We wanted to know whether the fertility of registered Indian women differed by treaty membership, whether changes could be observed over time, and whether notable trends could be identified in the regions covered by the treaties. Our analyses show that important differences exist, particularly between the numbered treaties, which cover the Prairie provinces, and the treaty populations of Eastern Canada. Since the Indian Register does not contain information on the social, cultural and economic characteristics of the populations affiliated with the different treaties, it is not possible to offer precise explanations for these differences. However, it is possible to propose an association between period fertility, certain characteristics of the treaty populations, and the geographical and historical dimensions of the treaties.
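The total fertility rate used throughout the study combines age-specific fertility rates in a simple way, sketched below with invented numbers (the Indian Register data are not public): the TFR is the sum over age groups of the age-specific rate multiplied by the width of the group.

```python
# Illustrative TFR computation from age-specific rates; every figure is made up.
births_by_age_group = {          # births over the period, by 5-year age group
    "15-19": 300, "20-24": 600, "25-29": 550,
    "30-34": 350, "35-39": 150, "40-44": 50,
}
woman_years_by_age_group = {     # person-years of exposure in each group
    "15-19": 5000, "20-24": 5000, "25-29": 5000,
    "30-34": 5000, "35-39": 5000, "40-44": 5000,
}

GROUP_WIDTH = 5  # years covered by each age group
tfr = sum(
    GROUP_WIDTH * births_by_age_group[g] / woman_years_by_age_group[g]
    for g in births_by_age_group
)
print(f"TFR = {tfr:.2f} children per woman")
```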
