171 |
PRAAG Algorithm in Anomaly Detection. Zhang, Dongyang. January 2016 (has links)
Anomaly detection has been one of the most important applications of data mining, widely applied in industries such as finance, medicine, telecommunications, and even manufacturing. In many scenarios, data arrive as large-volume streams, so it is preferable to analyze the data without storing all of it. In other words, the key is to improve the space efficiency of the algorithms, for example by extracting a statistical summary of the data. In this thesis, we study the PRAAG algorithm, a collective anomaly detection algorithm based on quantile features of the data; its space efficiency therefore depends essentially on that of the underlying quantile algorithm. First, the thesis investigates quantile summary algorithms that provide quantile information about a dataset without storing all the data points; we conclude that the GK algorithm meets our requirements. Then, we implement the selected algorithms and run experiments to test their performance. Finally, the report focuses on experiments with PRAAG to understand how its parameters affect performance, and compares it with other anomaly detection algorithms. In conclusion, the GK algorithm provides a more space-efficient way to estimate quantiles than simply storing all data points. Also, PRAAG is effective in terms of True Prediction Rate (TPR) and False Prediction Rate (FPR) compared with a baseline algorithm, CUSUM. In addition, there are many possible improvements to be investigated, such as parallelizing the algorithm.
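For reference, the CUSUM baseline mentioned above can be sketched in a few lines (a generic one-sided CUSUM change detector; the target, slack, and threshold values are illustrative assumptions, not parameters from the thesis):

```python
def cusum_alarms(stream, target, slack, threshold):
    """One-sided CUSUM: accumulate positive deviations from the
    target mean and raise an alarm when the sum exceeds threshold."""
    s = 0.0
    alarms = []
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - target - slack))
        if s > threshold:
            alarms.append(i)
            s = 0.0  # reset after each alarm
    return alarms

# A stream whose mean shifts upward halfway through.
stream = [0.0] * 50 + [2.0] * 50
print(cusum_alarms(stream, target=0.0, slack=0.5, threshold=5.0))
```

The first alarm fires a few samples after the shift at index 50, since the statistic must accumulate 1.5 per sample until it crosses the threshold.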
|
172 |
Edge-based blockchain enabled anomaly detection for insider attack prevention in Internet of Things. Tukur, Yusuf M., Thakker, Dhaval, Awan, Irfan U. 31 March 2022 (has links)
Yes / Internet of Things (IoT) platforms are responsible for the overall data processing in an IoT system, ranging from analytics and big data processing to gathering all sensor data over time to analyze it and produce long-term trends. However, this comes with a prohibitively high demand for resources such as memory, computing power and bandwidth, which the highly resource-constrained IoT devices lack, making it difficult for them to send data to the platforms and operate efficiently. This results in poor availability and a risk of data loss due to a single point of failure should the cloud platforms suffer attacks. The integrity of the data can also be compromised by an insider, such as a malicious system administrator, without leaving traces of their actions. To address these issues, we propose in this work an edge-based, blockchain-enabled anomaly detection technique to prevent insider attacks in IoT. The technique first employs the power of edge computing to reduce latency and bandwidth requirements by taking processing closer to the IoT nodes, hence improving availability and avoiding a single point of failure. It then leverages sequence-based anomaly detection, integrating the distributed edge with a blockchain whose smart contracts perform detection and correction of abnormalities in incoming sensor data. Evaluation of our technique using real IoT system datasets showed that it achieved the intended purpose remarkably well, while ensuring the integrity and availability of the data, which is critical to IoT success. / Petroleum Technology Development Fund (PTDF) Nigeria, Grant/Award Number: PTDF/ED/PHD/TYM/858/16
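The integrity guarantee that the blockchain component provides can be illustrated with a minimal hash-chain sketch (a generic illustration of the idea, not the authors' smart-contract implementation; the block layout and field names are invented): each block of sensor readings commits to the hash of the previous block, so an insider who silently edits a past reading breaks verification.

```python
import hashlib
import json

def make_block(readings, prev_hash):
    """Bundle sensor readings with the previous block's hash."""
    body = json.dumps({"readings": readings, "prev": prev_hash}, sort_keys=True)
    return {"readings": readings, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def build_chain(batches):
    chain, prev = [], "0" * 64  # genesis hash
    for batch in batches:
        block = make_block(batch, prev)
        chain.append(block)
        prev = block["hash"]
    return chain

def verify_chain(chain):
    """Recompute every hash and check the prev-hash linkage."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"readings": block["readings"],
                           "prev": block["prev"]}, sort_keys=True)
        if block["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([[21.5, 21.7], [22.0, 35.9], [22.1, 22.3]])
print(verify_chain(chain))       # True on the untampered chain
chain[1]["readings"][1] = 22.0   # insider silently "fixes" a reading
print(verify_chain(chain))       # now False: the stored hash no longer matches
```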
|
173 |
Detection and localization of link-level network anomalies using end-to-end path monitoring. Salhi, Emna. 13 February 2013 (has links)
The aim of this thesis is to devise cost-efficient, accurate and fast schemes for link-level network anomaly detection and localization. It has been established that to detect all potential link-level anomalies, a set of paths that covers all links of the network must be monitored, whereas to localize all potential link-level anomalies, a set of paths that can distinguish between all links of the network pairwise must be monitored. An end-node of each monitored path must be equipped with a monitoring device. Most existing link-level anomaly detection and localization schemes proceed in two steps. The first step selects a minimal set of monitor locations that can detect/localize any link-level anomaly. The second step selects a minimal set of monitoring paths between the selected monitor locations such that all links of the network are covered/pairwise distinguishable. However, such stepwise schemes do not consider the interplay between the conflicting optimization objectives of the two steps, which results in suboptimal consumption of network resources and biased monitoring measurements. One objective of this thesis is to evaluate and reduce this interplay. To this end, we propose one-step anomaly detection and localization schemes that select the monitor locations and the paths to be monitored jointly. Furthermore, we demonstrate that the previously established condition for anomaly localization is sufficient but not necessary. A necessary and sufficient condition that reduces the localization cost drastically is established. Both problems are shown to be NP-hard, and scalable, near-optimal heuristic algorithms are proposed.
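The two monitoring conditions above can be checked programmatically. A minimal sketch on a hypothetical toy topology (paths are modeled simply as sets of link identifiers): a path set detects all link-level anomalies if every link lies on some monitored path, and localizes them if every link is crossed by a distinct subset of paths.

```python
def detects_all(links, paths):
    """Detection condition: every link is covered by at least one path."""
    return all(any(l in p for p in paths) for l in links)

def localizes_all(links, paths):
    """Localization condition: every link is crossed by a unique subset
    of paths, so a single faulty link yields a distinct symptom."""
    signature = lambda l: frozenset(i for i, p in enumerate(paths) if l in p)
    sigs = [signature(l) for l in links]
    return detects_all(links, paths) and len(set(sigs)) == len(sigs)

links = {"a", "b", "c"}
paths = [{"a", "b"}, {"b", "c"}]
print(detects_all(links, paths))    # True: every link is on some path
print(localizes_all(links, paths))  # True: signatures {0}, {0,1}, {1} differ
```

A single path crossing all three links would still satisfy detection but not localization, since all links would share the same signature.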
|
174 |
Topics in neutrino physics. Zavanin, Eduardo Marcio, 1989-. 17 March 2017 (has links)
Advisor: Marcelo Moraes Guzzo / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Previous issue date: 2017 / Abstract: The objective of this work is to study an alternative mechanism, other than the hypothesized sterile neutrino, to solve the reactor antineutrino anomaly, the Gallium anomaly and the LSND anomaly. We will also understand how to fit this mechanism into the theory of particle physics through non-standard interactions. In addition, we will study neutrinoless double beta decay and set constraints on the effective Majorana neutrino mass. Furthermore, we will understand the limits that the ECHo experiment will provide for direct measurements of the neutrino mass / Doctorate / Physics / Doctor of Sciences / 2013/02518-7 / 1189631/2013 / FAPESP / CAPES
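For context, the effective Majorana mass probed by neutrinoless double beta decay is conventionally defined as the coherent sum over the light mass eigenstates (a standard definition, not a formula quoted from the thesis):

```latex
% Effective Majorana mass probed by 0\nu\beta\beta decay:
% U_{ei} are elements of the PMNS mixing matrix and m_i the light
% neutrino masses; the sum is coherent, so Majorana phases can cancel.
\begin{equation}
  m_{\beta\beta} \;=\; \Bigl|\, \sum_{i=1}^{3} U_{ei}^{2}\, m_i \,\Bigr|
\end{equation}
```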
|
175 |
Fault Detection in Mobile Robotics using Autoencoder and Mahalanobis Distance. Mortensen, Christian. January 2021 (has links)
Intelligent fault detection systems using machine learning can learn to spot anomalies in signals sampled directly from machinery. As a result, expensive repair costs due to mechanical breakdowns and potential harm to humans from malfunctioning equipment can be prevented. In recent years, Autoencoders have been applied for fault detection in areas such as industrial manufacturing. It has been shown that they are well suited for the purpose, as such models can learn to recognize healthy signals, which facilitates the detection of anomalies. This thesis investigates the applicability of Autoencoders for fault detection in mobile robotics by assigning anomaly scores to sampled torque signals based on the Autoencoder reconstruction errors and the Mahalanobis distance to a known distribution of healthy errors. An experiment was carried out by training a model with signals recorded from a four-wheeled mobile robot executing a pre-defined diagnostics routine to stress the motors, and datasets of healthy samples along with three different injected faults were created. The model produced overall greater anomaly scores for one of the fault cases in comparison to the healthy data. However, the two other cases did not yield any difference in anomaly scores, because those faults did not affect the pattern of the signals. Additionally, the Autoencoder's ability to isolate a fault to a location was studied by examining the reconstruction errors of faulty samples, to determine whether the errors of signals originating from the faulty component could be used for this purpose. Although we could not confirm this based on the results, fault isolation with Autoencoders could still be possible given more representative signals.
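The scoring step described above can be sketched generically (toy reconstruction-error data, not the thesis's robot signals): fit a mean and covariance to reconstruction errors from healthy operation, then score new error vectors by their Mahalanobis distance to that distribution.

```python
import numpy as np

def fit_error_distribution(healthy_errors):
    """Estimate the mean and inverse covariance of reconstruction-error
    vectors collected during healthy operation."""
    mu = healthy_errors.mean(axis=0)
    cov = np.cov(healthy_errors, rowvar=False)
    return mu, np.linalg.inv(cov)

def anomaly_score(error, mu, cov_inv):
    """Mahalanobis distance of an error vector to the healthy distribution."""
    d = error - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 0.1, size=(500, 4))  # small errors on 4 signals
mu, cov_inv = fit_error_distribution(healthy)

print(anomaly_score(np.zeros(4), mu, cov_inv))      # near the healthy mean: small
print(anomaly_score(np.full(4, 1.0), mu, cov_inv))  # large errors: large score
```

Thresholding this score separates samples the Autoencoder reconstructs poorly in an unusual direction from ordinary healthy variation.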
|
176 |
Unsupervised anomaly detection for aircraft health monitoring system. Dani, Mohamed Cherif. 10 March 2017 (has links)
The limits of technical and fundamental knowledge are a daily challenge for industry. Updating this knowledge is important both for economic competitiveness and for the reliability and maintainability of systems and machines. Thanks to these machines and systems, data are today expanding dramatically in both quantity and frequency of generation. Within Airbus, for example, and thanks to thousands of sensors, aircraft generate hundreds of megabytes of data per flight. These data are exploited on the ground or in flight to monitor the state and health of the aircraft and to detect failures, incidents and changes. In theory, these failures, incidents and changes are known as anomalies. An anomaly is a deviation from the normal behavior of the data; others define it as behavior that does not conform to the normal behavior. Whatever the definition, detecting such anomalies is important for the good functioning of the aircraft. Currently, anomaly detection on board aircraft is provided by several aeronautical monitoring systems. One of them is the Aircraft Condition Monitoring System (ACMS), which continuously records the data generated by the sensors and also monitors the aircraft in real time using triggers and predefined conditions (an exceedance approach). These predefined conditions are programmed by airlines and system designers according to a priori knowledge of the system (physical, mechanical, etc.). However, several constraints limit the ACMS's anomaly detection potential. We can mention, for example, the limitation of expert knowledge, a classic problem in many domains: a trigger detects only the anomalies and incidents it was designed for, and the triggers do not cover all system conditions. In other words, if a new behavior (a new condition) appears in a sensor after a maintenance action, a part change, etc., the predefined conditions will not detect anything, or may in many cases generate false alarms by labeling the whole new condition as anomalous. Another constraint is that the triggers (predefined conditions) are static: they are unable to adapt to each new condition. Further limitations are discussed in the following chapters. The principal objective of this thesis is to detect anomalies and changes in ACMS sensor data, in order to improve the aircraft health monitoring function of the ACMS. The work is based principally on a two-stage analysis. The first is a univariate anomaly detection stage in an unsupervised setting, where we process each sensor independently, since we have no a priori knowledge of the system and no documentation or labeled classes are available; this stage detects the anomalies and changes of each sensor. The second stage is a multivariate anomaly detection based on density clustering, whose objective is to filter some of the anomalies detected in the first stage (false alarms) and to detect suspicious behaviors (groups of anomalies). The anomalies detected in both the univariate and multivariate stages can serve as potential triggers or be used to update existing triggers. We also propose a generic concept of anomaly detection based on the univariate and multivariate analyses, and finally a new concept for validating anomalies within Airbus. The method is tested on real and synthetic data, and the results, identification and validation of the anomalies are discussed in this thesis.
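A minimal sketch of the two-stage idea (synthetic data and illustrative thresholds; the corroboration rule below is a crude stand-in for the density clustering used in the thesis): a univariate stage flags per-sensor outliers with a robust z-score, and a multivariate stage keeps only the flags that several sensors raise together, filtering isolated false alarms.

```python
import numpy as np

def univariate_flags(series, z=5.0):
    """Stage 1: flag samples deviating more than z robust std-devs
    from the sensor's median (MAD-based scale estimate)."""
    med = np.median(series)
    mad = np.median(np.abs(series - med)) or 1e-9
    return np.abs(series - med) > z * 1.4826 * mad

def multivariate_filter(flag_matrix, min_sensors=2):
    """Stage 2: keep a time step only if at least `min_sensors`
    sensors flagged it together (a crude corroboration criterion)."""
    return flag_matrix.sum(axis=0) >= min_sensors

rng = np.random.default_rng(1)
data = rng.normal(0, 1, size=(3, 200))  # 3 sensors, 200 time steps
data[0, 50] += 10                       # isolated spike on one sensor
data[:, 120] += 10                      # common-mode event on all sensors

flags = np.array([univariate_flags(s) for s in data])
confirmed = multivariate_filter(flags)
print(np.flatnonzero(confirmed))        # only the corroborated event survives
```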
|
177 |
Real-time industrial systems anomaly detection with on-edge Tiny Machine Learning. Tiberg, Anton. January 2022 (has links)
Embedded systems such as microcontrollers have become more powerful and cheaper over the past couple of years. This has led to increasing development of on-edge applications, one of which is anomaly detection using machine learning. This thesis investigates the ability to implement, deploy and run the unsupervised anomaly detection algorithm Isolation Forest, and its modified version, the Mondrian Isolation Forest, on a microcontroller. Both algorithms were successfully implemented and deployed. The regular Isolation Forest algorithm was able to function as an anomaly detector on both stored datasets and streaming data. However, the Mondrian Isolation Forest was too resource-hungry to function as a practical anomaly detection application.
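The isolation principle behind Isolation Forest can be sketched for a single feature (a simplified illustration, not the thesis's on-edge implementation): anomalies sit in sparse regions of the data, so random splits isolate them at shallow depth, while typical points need many splits.

```python
import random

def isolation_depth(x, data, depth=0, max_depth=10):
    """Recursively partition `data` with random value splits and return
    the depth at which the point x ends up alone on its side."""
    if depth >= max_depth or len(data) <= 1:
        return depth
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth
    split = random.uniform(lo, hi)
    # Keep only the points on the same side of the split as x.
    side = [v for v in data if (v < split) == (x < split)]
    return isolation_depth(x, side, depth + 1, max_depth)

def anomaly_score(x, data, trees=200):
    """Average isolation depth over many random trees; anomalies
    isolate quickly, so a LOWER mean depth means more anomalous."""
    return sum(isolation_depth(x, data) for _ in range(trees)) / trees

random.seed(0)
data = [random.gauss(0, 1) for _ in range(256)]
print(anomaly_score(0.0, data))   # typical point: isolates slowly (deep)
print(anomaly_score(12.0, data))  # far outlier: isolated in few splits
```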
|
178 |
Anomaly-Driven Belief Revision by Abductive MetareasoningEckroth, Joshua Ryan 09 July 2014 (has links)
No description available.
|
179 |
Scalable Validation of Data Streams. Xu, Cheng. January 2016 (has links)
In manufacturing industries, sensors are often installed on industrial equipment, generating high volumes of data in real time. To shorten machine downtime and reduce maintenance costs, it is critical to analyze such streams efficiently in order to detect abnormal equipment behavior. To validate data streams and detect anomalies, a data stream management system called SVALI was developed. Based on requirements from the application domain, different stream window semantics are explored and an extensible set of window-forming functions is implemented, where dynamic registration of window aggregations allows incremental evaluation of aggregate functions over windows. To facilitate stream validation at a high level, the system provides two second-order system validation functions, model-and-validate and learn-and-validate. Model-and-validate allows the user to define mathematical models based on physical properties of the monitored equipment, while learn-and-validate builds statistical models by sampling the stream in real time as it flows. To validate geographically distributed equipment with short response times, SVALI is a distributed system where many SVALI instances can be started and run in parallel on board the equipment. Central analyses are made at a monitoring center, where streams of detected anomalies are combined and analyzed on a cluster computer. SVALI is an extensible system where functions can be implemented using external libraries written in C, Java, and Python without any modifications to the original code. The system and the developed functionality have been applied to several applications, both industrial and in sports analytics.
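The model-and-validate function can be sketched generically (illustrative code, not SVALI's actual API; the sensor model and tolerance below are invented): the user supplies a mathematical model of expected behavior, and windowed aggregates of the stream are checked against it as the data flows.

```python
from collections import deque

def model_and_validate(stream, model, tolerance, window=5):
    """Compare each sliding-window average against a user-supplied
    model of expected behavior; emit (index, average) anomalies."""
    buf = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(stream):
        buf.append(x)
        if len(buf) == window:
            avg = sum(buf) / window
            if abs(avg - model(i)) > tolerance:
                anomalies.append((i, avg))
    return anomalies

# Equipment whose sensor should read a constant 20.0 in steady state.
steady_model = lambda i: 20.0
stream = [20.1, 19.9, 20.0, 20.2, 19.8, 30.0, 30.0, 30.0, 30.0, 30.0]
print(model_and_validate(stream, steady_model, tolerance=1.0))
```

Every window that overlaps the level shift starting at index 5 is reported, which is the incremental-windowed behavior the abstract describes.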
|
180 |
An Anomaly Behavior Analysis Intrusion Detection System for Wireless Networks. Satam, Pratik. January 2015 (has links)
Wireless networks have become ubiquitous, with a wide range of mobile devices connected to a larger network like the Internet via wireless communications. One widely used wireless communication standard is the IEEE 802.11 protocol, popularly called Wi-Fi. Over the years, 802.11 has been upgraded to different versions, but most of these upgrades have focused on improving the throughput of the protocol rather than enhancing its security, thus leaving the protocol vulnerable to attacks. The goal of this research is to develop and implement an intrusion detection system based on anomaly behavior analysis that can accurately detect attacks on Wi-Fi networks and track the location of the attacker. As a part of this thesis we present two architectures for an anomaly-based intrusion detection system, for single-access-point and distributed Wi-Fi networks. These architectures can detect attacks on Wi-Fi networks, classify the attacks, and track the location of the attacker once the attack has been detected. The system uses statistical and probabilistic techniques over temporal wireless protocol transitions, which we refer to as Wireless Flows (Wflows). The Wflows are modeled and stored as sequences of n-grams within a given period of analysis. We studied two approaches to track the location of the attacker. In the first approach, we use clustering to generate power maps that can be used to track the location of the user accessing the Wi-Fi network. In the second approach, we use classification algorithms to track the location of the user from a Central Controller Unit. Experimental results show that the attack detection and classification algorithms generate no false positives and no false negatives, even when the Wi-Fi network has high frame drop rates. The clustering approach for location tracking was found to be highly accurate in static environments (81% accuracy), but its performance deteriorates rapidly with changes in the environment. The classification algorithm that tracks the location of the user at the Central Controller/RADIUS server performed with lower accuracy than the clustering approach (76% accuracy), but its ability to track the location of the user deteriorated less rapidly with changes in the operating environment.
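The Wflow n-gram modeling can be illustrated with a small sketch (the state names and flows below are invented for illustration; the thesis's actual feature set is richer): n-grams of protocol state transitions observed in normal traffic form the model, and a new flow is scored by the fraction of its n-grams never seen in training.

```python
from collections import Counter

def ngrams(seq, n=3):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def train(flows, n=3):
    """Count n-grams of protocol state transitions in normal traffic."""
    model = Counter()
    for flow in flows:
        model.update(ngrams(flow, n))
    return model

def anomaly_score(flow, model, n=3):
    """Fraction of the flow's n-grams never seen during training."""
    grams = ngrams(flow, n)
    unseen = sum(1 for g in grams if model[g] == 0)
    return unseen / len(grams)

normal = [["probe", "auth", "assoc", "data", "data", "disassoc"]] * 50
model = train(normal)
print(anomaly_score(["probe", "auth", "assoc", "data", "data", "disassoc"], model))  # 0.0
print(anomaly_score(["probe", "deauth", "probe", "deauth", "probe", "deauth"], model))  # 1.0
```

A deauthentication-flood-like sequence shares no transitions with the trained model, so every one of its n-grams is unseen.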
|