About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
171

Fault Detection in Mobile Robotics using Autoencoder and Mahalanobis Distance

Mortensen, Christian January 2021 (has links)
Intelligent fault detection systems using machine learning can learn to spot anomalies in signals sampled directly from machinery. As a result, expensive repair costs due to mechanical breakdowns, and potential harm to humans from malfunctioning equipment, can be prevented. In recent years, Autoencoders have been applied to fault detection in areas such as industrial manufacturing, and they have been shown to be well suited to the purpose: such models can learn to recognize healthy signals, which facilitates the detection of anomalies. This thesis investigates the applicability of Autoencoders to fault detection in mobile robotics by assigning anomaly scores to sampled torque signals, based on the Autoencoder reconstruction errors and the Mahalanobis distance to a known distribution of healthy errors. An experiment was carried out by training a model on signals recorded from a four-wheeled mobile robot executing a pre-defined diagnostics routine to stress the motors, and datasets of healthy samples along with three different injected faults were created. The model produced overall greater anomaly scores for one of the fault cases than for the healthy data. However, the two other cases showed no difference in anomaly scores, as the faults did not impact the pattern of the signals. Additionally, the Autoencoder's ability to isolate a fault to a location was studied by examining the reconstruction errors of faulty samples to determine whether the errors of signals originating from the faulty component could be used for this purpose. Although we could not confirm this based on the results, fault isolation with Autoencoders could still be possible given more representative signals.
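The scoring scheme this abstract describes — reconstruction errors compared against a known distribution of healthy errors via the Mahalanobis distance — can be sketched as follows. This is a minimal NumPy sketch, not the thesis's implementation; the function names, the regularization term, and the assumption of per-signal absolute errors are illustrative:

```python
import numpy as np

def fit_healthy_error_model(healthy_errors):
    """Fit a Gaussian to reconstruction errors from healthy samples.

    healthy_errors: (n_samples, n_signals) array of per-signal
    reconstruction errors produced by the trained Autoencoder.
    """
    mu = healthy_errors.mean(axis=0)
    cov = np.cov(healthy_errors, rowvar=False)
    # Small diagonal term keeps the covariance invertible.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return mu, cov_inv

def anomaly_score(errors, mu, cov_inv):
    """Mahalanobis distance of a new error vector to the healthy distribution."""
    d = errors - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

An anomaly threshold could then be chosen, for example, from a quantile of the scores on held-out healthy data.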
172

Unsupervised anomaly detection for aircraft health monitoring system

Dani, Mohamed Cherif 10 March 2017 (has links)
Limits to technical and fundamental knowledge are a daily challenge for industry. Updating this knowledge is essential for economic competitiveness, and also for the reliability and maintainability of systems and machines. Today, thanks to these systems and machines, data is expanding dramatically in both volume and generation frequency. Airbus aircraft, for example, carry hundreds or even thousands of sensors in the newest generations and generate hundreds of megabytes of data per flight. These sensor data are exploited on the ground or in flight to monitor the state and health of the aircraft and to detect failures, incidents, and changes. Such failures, incidents, and changes are collectively known as anomalies: behavior that does not conform to the normal behavior of the data, variously defined as a deviation from a normal model or as a change. Whatever the definition, detecting anomalies is important for the proper functioning of the aircraft. Currently, on-board anomaly detection is provided by several aeronautical monitoring systems. One of them, the Aircraft Condition Monitoring System (ACMS), continuously records the data generated by the sensors and monitors the aircraft in real time using triggers and thresholds (an exceedance approach) programmed by airlines or system designers from a priori knowledge of the system (physical, mechanical, etc.). However, several constraints limit the anomaly detection potential of the ACMS. One is the classic problem of limited expert knowledge: a trigger detects only the anomalies and incidents it was designed for, and the triggers do not cover all system conditions. If a new condition appears after a maintenance action, a part replacement, and so on, the trigger cannot adapt to it and may label the entire new condition as anomalous, generating false alarms. Another constraint is that the triggers (predefined conditions) are static: they cannot adapt their properties to each new condition. Further limitations are discussed in the following chapters. The main objective of this thesis is to detect anomalies and changes in ACMS sensor data, in order to improve the aircraft's health monitoring function, known as Aircraft Health Monitoring (AHM). The work is based on a two-stage analysis. The first stage is univariate anomaly detection in an unsupervised setting, since no prior knowledge of the system, documentation, or labeled classes is available; it focuses on the behavior of each sensor independently and detects the anomalies and changes in each one. The second stage is multivariate anomaly detection based on density clustering; its objective is to filter out anomalies detected in the first stage (false alarms) and to detect suspect groups of behavior. Anomalies detected in both stages can serve as potential new triggers or be used to update existing ones. We also propose a generic anomaly detection concept based on the univariate and multivariate stages, and finally a new concept for validating anomalies within Airbus. The method is tested on real and synthetic data, and the results, identification, and validation of anomalies are discussed in this thesis.
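The two-stage pipeline described here — unsupervised per-sensor detection followed by a multivariate step that filters false alarms — can be sketched roughly as follows. The thesis uses density clustering for the second stage; this sketch substitutes a simpler cross-sensor agreement filter to convey the idea, and all function names and thresholds are illustrative:

```python
import numpy as np

def univariate_alarms(signal, window=50, k=3.0):
    """Stage 1: flag points deviating more than k robust standard
    deviations from the running median of a trailing window."""
    alarms = set()
    for i in range(window, len(signal)):
        w = signal[i - window:i]
        med = np.median(w)
        mad = np.median(np.abs(w - med)) + 1e-9  # robust spread
        if abs(signal[i] - med) > k * 1.4826 * mad:
            alarms.add(i)
    return alarms

def multivariate_filter(per_sensor_alarms, min_sensors=2, tolerance=2):
    """Stage 2: keep alarms corroborated by at least `min_sensors`
    sensors within `tolerance` time steps; isolated single-sensor
    alarms are treated as likely false positives."""
    confirmed = set()
    for alarms in per_sensor_alarms:
        for t in alarms:
            support = sum(
                1 for other in per_sensor_alarms
                if any(abs(t - u) <= tolerance for u in other)
            )
            if support >= min_sensors:
                confirmed.add(t)
    return confirmed
```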
173

Real-time industrial systems anomaly detection with on-edge Tiny Machine Learning

Tiberg, Anton January 2022 (has links)
Embedded systems such as microcontrollers have become more powerful and cheaper over the past few years. This has led to increasing development of on-edge applications, one of which is anomaly detection using machine learning. This thesis investigates the ability to implement, deploy, and run the unsupervised anomaly detection algorithm Isolation Forest, and its modified version, Mondrian Isolation Forest, on a microcontroller. Both algorithms were successfully implemented and deployed. The regular Isolation Forest algorithm was able to function as an anomaly detector on both data sets and streaming data. However, the Mondrian Isolation Forest was too resource-hungry to function as a practical anomaly detection application.
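The core idea of Isolation Forest — anomalies are isolated by random splits in fewer steps than normal points, so shorter average path lengths signal anomalies — can be illustrated with a minimal single-feature sketch (this is not the thesis's embedded implementation; names and parameters are illustrative):

```python
import random

def grow_tree(data, depth=0, max_depth=8):
    """Recursively partition the data with random split values."""
    lo, hi = min(data), max(data)
    if depth >= max_depth or len(data) <= 1 or lo == hi:
        return None  # leaf: point(s) isolated or depth limit reached
    split = random.uniform(lo, hi)
    left = [x for x in data if x < split]
    right = [x for x in data if x >= split]
    if not left or not right:
        return None
    return (split, grow_tree(left, depth + 1, max_depth),
                   grow_tree(right, depth + 1, max_depth))

def path_length(tree, x, depth=0):
    """Number of splits needed to reach x's leaf."""
    if tree is None:
        return depth
    split, left, right = tree
    return path_length(left if x < split else right, x, depth + 1)

def avg_path(forest, x):
    """Anomalies isolate quickly, giving shorter average path lengths."""
    return sum(path_length(t, x) for t in forest) / len(forest)
```

On a microcontroller, the trees would typically be stored as fixed-size arrays rather than heap-allocated structures, which is where the resource constraints the abstract mentions come into play.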
174

Anomaly-Driven Belief Revision by Abductive Metareasoning

Eckroth, Joshua Ryan 09 July 2014 (has links)
No description available.
175

Scalable Validation of Data Streams

Xu, Cheng January 2016 (has links)
In manufacturing industries, sensors are often installed on industrial equipment, generating high volumes of data in real time. To shorten machine downtime and reduce maintenance costs, it is critical to analyse such streams efficiently in order to detect abnormal equipment behaviour. To validate data streams and detect anomalies, a data stream management system called SVALI was developed. Based on requirements from the application domain, different stream window semantics were explored and an extensible set of window-forming functions implemented, where dynamic registration of window aggregations allows incremental evaluation of aggregate functions over windows. To facilitate stream validation at a high level, the system provides two second-order validation functions, model-and-validate and learn-and-validate. Model-and-validate lets the user define mathematical models based on the physical properties of the monitored equipment, while learn-and-validate builds statistical models by sampling the stream in real time as it flows. To validate geographically distributed equipment with short response times, SVALI is a distributed system in which many SVALI instances can be started and run in parallel on board the equipment. Central analyses are made at a monitoring center, where streams of detected anomalies are combined and analysed on a cluster computer. SVALI is an extensible system in which functions can be implemented using external libraries written in C, Java, and Python without any modification of the original code. The system and the developed functionality have been applied to several applications, both industrial and in sports analytics.
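The contrast between the two validation modes can be sketched as follows. This is a hypothetical illustration of the concepts, not SVALI's actual API: model-and-validate checks readings against a user-supplied physical model, while learn-and-validate estimates a statistical model from a sliding window of the stream itself:

```python
import statistics
from collections import deque

def model_and_validate(stream, model, tolerance):
    """Flag readings that deviate from a physical model.
    `model(t)` returns the expected value at time step t."""
    for t, value in enumerate(stream):
        if abs(value - model(t)) > tolerance:
            yield t, value

def learn_and_validate(stream, window=50, k=3.0):
    """Learn mean/stdev from a sliding window of the stream and
    flag readings outside k standard deviations."""
    buf = deque(maxlen=window)
    for t, value in enumerate(stream):
        if len(buf) == window:
            mu = statistics.fmean(buf)
            sd = statistics.pstdev(buf) or 1e-9
            if abs(value - mu) > k * sd:
                yield t, value
                continue  # do not learn from anomalous readings
        buf.append(value)
```

Both are generators, so they consume the stream incrementally, in the spirit of the windowed evaluation the abstract describes.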
176

An Anomaly Behavior Analysis Intrusion Detection System for Wireless Networks

Satam, Pratik January 2015 (has links)
Wireless networks have become ubiquitous: a wide range of mobile devices connect to larger networks such as the Internet via wireless communications. One widely used wireless communication standard is the IEEE 802.11 protocol, popularly called Wi-Fi. Over the years, 802.11 has been upgraded through several versions, but most of these upgrades have focused on improving the protocol's throughput rather than enhancing its security, leaving the protocol vulnerable to attacks. The goal of this research is to develop and implement an intrusion detection system based on anomaly behavior analysis that can accurately detect attacks on Wi-Fi networks and track the location of the attacker. As part of this thesis we present two architectures for an anomaly-based intrusion detection system, one for single-access-point and one for distributed Wi-Fi networks. These architectures can detect attacks on Wi-Fi networks, classify the attacks, and track the location of the attacker once an attack has been detected. The system uses statistical and probabilistic techniques over temporal wireless protocol transitions, which we refer to as Wireless Flows (Wflows). The Wflows are modeled and stored as sequences of n-grams within a given period of analysis. We studied two approaches to tracking the location of the attacker. In the first, we use a clustering approach to generate power maps that can be used to track the location of a user accessing the Wi-Fi network. In the second, we use classification algorithms to track the user's location from a Central Controller Unit. Experimental results show that the attack detection and classification algorithms generate no false positives and no false negatives, even when the Wi-Fi network has high frame-drop rates.
The clustering approach to location tracking was highly accurate in static environments (81% accuracy), but its performance deteriorates rapidly as the environment changes. The classification algorithm tracking the user's location at the Central Controller/RADIUS server performed with somewhat lower accuracy than the clustering approach (76% accuracy), but its ability to track the user's location deteriorated less rapidly with changes in the operating environment.
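The n-gram modeling of protocol transitions mentioned above can be illustrated with a small sketch: count the state-transition n-grams seen in normal sessions, then score a new session by the fraction of its n-grams never seen in training. The state names and scoring rule here are illustrative assumptions, not the thesis's Wflows implementation:

```python
from collections import Counter

def train_ngram_model(flows, n=3):
    """Count state-transition n-grams over normal Wi-Fi sessions.
    Each flow is a sequence of protocol states, e.g.
    ["probe", "auth", "assoc", "data", ...] (names are illustrative)."""
    model = Counter()
    for flow in flows:
        for i in range(len(flow) - n + 1):
            model[tuple(flow[i:i + n])] += 1
    return model

def anomaly_ratio(flow, model, n=3):
    """Fraction of a flow's n-grams never observed during training."""
    grams = [tuple(flow[i:i + n]) for i in range(len(flow) - n + 1)]
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in model)
    return unseen / len(grams)
```

A deauthentication flood, for instance, produces transition n-grams absent from normal traffic and therefore a high ratio.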
177

Development and Evaluation of a MODIS Vegetation Index Compositing Algorithm for Long-term Climate Studies

Solano Barajas, Ramon January 2011 (has links)
The acquisition of remote sensing data with a well-characterized quality level is an important step toward advancing our understanding of the vegetation response to environmental factors. Spaceborne sensors introduce additional challenges that must be addressed to ensure that derived findings reflect real phenomena and are not biased or misguided by instrument features or processing artifacts. As a consequence, updates incorporating new advances and user requirements are regularly made to most cutting-edge systems, including MODIS. The objective of this dissertation was to design a MODIS vegetation index (VI) algorithm for the continuity 16-day 1-km product, based on the new 8-day 500-m MODIS surface reflectance (SR) product scheduled for MODIS Collection 6 (C6), and to characterize it and assess any possible departure from current values. Additionally, the impact of increasing the temporal resolution from 16 to 8 days for the future basic MODIS C6 VI product was also assessed. The performance of the proposed algorithm was evaluated using high-quality reference data and known biophysical relationships at several spatial and temporal scales. First, it was evaluated using data from the ASRVN, FLUXNET-derived ecosystem GPP, and an analysis of the seasonality parameters derived from current C5 and proxy C6 VI collections. The performance of the 8-day VI version was evaluated and contrasted with the current 16-day version using the reported correlation of the EVI with GPP derived from CO2 flux measurements. Second, we performed a spatial analysis on entire images (or "tiles") to assess BRDF effects on the VI product, as these can bias the SR and VIs from scanning radiometers. Last, we evaluated the performance of the proposed algorithm in detecting inter-annual VI anomalies in long-term time series, compared with the current MODIS VI C5. For this, we analyzed the EVI anomalies of a densely vegetated evergreen region for the period July-September (2000-2010).
Results showed high overall similarity between the two algorithms, but also systematic differences, suggesting that the proposed algorithm for C6 may represent an advance in reducing uncertainties in the MODIS VI product.
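For reference, the two MODIS vegetation indices discussed in this abstract are computed from surface reflectance as follows. These are the standard published formulas; the coefficients in `evi` are the usual MODIS EVI constants:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients:
    gain G, aerosol-resistance terms C1/C2 (red/blue), canopy background L."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)
```

Dense green vegetation (high NIR, low red reflectance) drives both indices toward 1, which is why EVI anomalies over an evergreen region are a sensitive continuity check between collections.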
178

RETURN PATTERNS PROXIMAL TO CENTRAL BANK RATE DECISION ANNOUNCEMENTS : OMX 30 excess return and monetary policy announcements

Åkerström, Paul Linus Martin January 2014 (has links)
This study finds that excess returns on the OMX 30 rise in anticipation of monetary policy decisions made by the central banks of Sweden and the United States. The effect was greater in magnitude on the first day prior to the announcements, and statistically significant one day prior to monetary policy decisions from the Federal Open Market Committee. Moreover, excess returns beyond the average rate were substantially higher on the first and third days prior to monetary policy decisions from the Swedish central bank (Riksbanken), albeit not at a statistically significant level. The results were reinforced by findings from similar tests conducted during times of global recession.
179

Detecting anomalies in multivariate time series from automotive systems

Theissler, Andreas January 2013 (has links)
In the automotive industry, test drives are conducted during the development of new vehicle models and as part of quality assurance for series vehicles. During the test drives, data is recorded for use in fault analysis, resulting in millions of data points. Since multiple vehicles are tested in parallel, the amount of data to be analysed is tremendous, and manually analysing each recording is not feasible. Furthermore, the ever-increasing complexity of vehicles drives up both the volume and the complexity of the recordings. Only with effective means of analysing the recordings can one make sure that the effort put into conducting test drives pays off; effective test drive analysis can therefore become a competitive advantage. This thesis researches ways to detect unknown or unmodelled faults in recordings from test drives, with two aims: (1) in a database of recordings, to point the expert to potential errors by reporting anomalies, and (2) to shorten the time required for the manual analysis of one recording. The idea behind the first aim is to learn the normal behaviour from a training set of recordings and then autonomously detect anomalies. The one-class classifier "support vector data description" (SVDD) is identified as most suitable, though it requires parameters to be specified beforehand. One main contribution of this thesis is a new autonomous parameter-tuning approach that makes SVDD applicable to the problem at hand. Another vital contribution is a novel approach that enhances SVDD to work with multivariate time series. The outcome is the classifier "SVDDsubseq", which is directly applicable to test drive data without the need for expert knowledge to configure or tune the classifier. The second aim is achieved by adapting visual data mining techniques to make the manual analysis of test drives more efficient.
The methods of "parallel coordinates" and "scatter plot matrices" are enhanced with sophisticated filter and query operations, combined with a query tool that allows search patterns to be formulated graphically. Combining the autonomous classifier "SVDDsubseq" with user-driven visual data mining techniques yields a novel, data-driven, semi-autonomous approach to detecting unmodelled faults in recordings from test drives, which was successfully validated on recordings from test drives. The methodologies in this thesis can serve as a guideline when setting up an anomaly detection system for one's own vehicle data.
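The idea behind applying one-class classification to time series — slice the multivariate series into subsequences and learn a boundary around the normal ones — can be approximated with a crude baseline: a hypersphere whose center is the training mean and whose radius is a distance quantile. A true SVDD instead solves a quadratic program for a minimal enclosing (possibly kernelized) sphere; this NumPy sketch, with illustrative names and parameters, only conveys the concept:

```python
import numpy as np

def subsequences(series, length, step=1):
    """Slice a multivariate time series (T, d) into flattened subsequences."""
    return np.array([series[i:i + length].ravel()
                     for i in range(0, len(series) - length + 1, step)])

def fit_hypersphere(train, quantile=0.99):
    """Crude stand-in for SVDD: center = mean of the training
    subsequences, radius = the given quantile of training distances."""
    center = train.mean(axis=0)
    dists = np.linalg.norm(train - center, axis=1)
    return center, np.quantile(dists, quantile)

def is_anomalous(subseq, center, radius):
    """A subsequence outside the learned boundary is flagged."""
    return np.linalg.norm(subseq - center) > radius
```

By construction, roughly 1% of the training subsequences fall outside the boundary; a faulty subsequence far from normal behaviour lands well outside it.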
180

Analyse de la composition isotopique de l'ion nitrate dans la basse atmosphère polaire et marine / Isotopic composition of atmospheric nitrate in the marine and polar boundary layer

Morin, Samuel 26 September 2008 (has links)
Atmospheric nitrogen oxides (NOx = NO + NO2) are central to the chemistry of the environment, playing a pivotal role in the cycling of reactive nitrogen and the oxidative capacity of the atmosphere. The stable isotopes of atmospheric nitrate (particulate NO3− and gaseous HNO3), their ultimate sink, provide insight into the chemical budget of NOx: the 15N/14N ratio is an almost conservative tracer of their sources, whereas the triple oxygen isotopic composition, through the isotope anomaly (Δ17O = δ17O − 0.52 × δ18O), reveals their oxidation pathways. The long-standing challenge of measuring all three stable isotope ratios of nitrate (17O/16O, 18O/16O, and 15N/14N) in a single sample of less than a micromole has been resolved. The newly developed method takes advantage of denitrifying bacteria to quantitatively convert nitrate to N2O, whose isotope ratios are measured using an automated gas chromatography / isotope ratio mass spectrometry system. Coupled measurements of δ15N and Δ17O of atmospheric nitrate samples collected in the Arctic, the Antarctic, and the marine boundary layer of the Atlantic Ocean, where the NOx budget is often poorly known, were used to derive the chemical budget of NOx and atmospheric nitrate in these remote regions. The main results from the oxygen isotopes identify seasonal and latitudinal shifts between NOx oxidation pathways, including the major role of reactive halogen compounds during springtime in coastal polar regions.
Nitrogen isotopes provide new constraints on the nitrogen cycle in polar regions, owing to the significant fractionation induced by post-depositional remobilization of nitrate in the snowpack and the resulting re-emission of NOx.
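The oxygen isotope anomaly used in this abstract is a simple linear deviation of δ17O from the mass-dependent fractionation line δ17O ≈ 0.52 · δ18O. As a plain formula helper (a sketch of the arithmetic only, not tied to any analysis code):

```python
def cap_delta_17O(d17O, d18O, slope=0.52):
    """Oxygen isotope anomaly: delta17O - 0.52 * delta18O,
    with delta values expressed in per mil."""
    return d17O - slope * d18O
```

A sample lying exactly on the mass-dependent line has zero anomaly; mass-independent fractionation, as in ozone-influenced nitrate, yields a positive value.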
