351. Log Frequency Analysis for Anomaly Detection in Cloud Environments. Bendapudi, Prathyusha, January 2024.
Background: Log analysis has proven highly beneficial for monitoring system behaviour, detecting errors and anomalies, and predicting future trends in systems and applications. However, as these systems and applications evolve, the volume of log data they generate is growing rapidly, and so is the manual effort invested in log analysis for error detection and root cause analysis. While research on reducing this manual effort is ongoing, this thesis adds to the current state of automated log analysis a new approach based on the temporal patterns of logs in a particular system environment, which can reduce manual effort to a great extent.

Objectives: The main objective of this research is to identify temporal patterns in logs using clustering algorithms, extract the outlier logs that do not adhere to any time pattern, and further analyse them to check whether these outlier logs help in detecting errors and identifying their root causes.

Methods: Design Science Research was used to fulfil the objectives of the thesis, as the work required intermediary results and an iterative, responsive approach. The first part of the thesis consisted of building an artifact that identifies temporal patterns in logs of different log types using the DBSCAN clustering algorithm. After the patterns were identified and the outlier logs extracted, interviews were conducted in which system experts manually analysed the outlier logs, provided insights, and validated the log frequency analysis.

Results: Running the clustering algorithm on logs of different log types produced clusters representing temporal patterns in most of the files. Some log files have no time patterns, indicating that not all log types contain logs that adhere to a fixed time pattern. The interviews conducted with system experts on the outlier logs yielded promising results, indicating that log frequency analysis is indeed helpful in reducing the manual effort involved in log analysis for error detection and root cause analysis.

Conclusions: The results show that most of the logs in the given cloud environment adhere to time frequency patterns, and analysing these patterns and their outliers leads to easier error detection and root cause analysis in the given cloud environment.
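As a rough illustration of the artifact's core step, the hedged sketch below clusters the timestamps of one log type with DBSCAN (scikit-learn's implementation is an assumption; the abstract does not name a library) and treats noise points as outlier logs. The timestamp values, eps, and min_samples are invented for illustration, not the thesis's settings.

```python
# Hedged sketch: cluster the timestamps of one log type with DBSCAN and
# treat noise points (label -1) as outlier logs that follow no time pattern.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical arrival times (seconds since midnight) for one log type.
timestamps = np.array([3600, 3605, 3610, 3615, 7200, 7204, 7208, 41234.0])

# eps = maximum gap (seconds) inside a pattern; both values are assumptions.
labels = DBSCAN(eps=30, min_samples=3).fit_predict(timestamps.reshape(-1, 1))

patterns = set(labels) - {-1}
outliers = timestamps[labels == -1]
print(f"temporal patterns: {len(patterns)}, outlier logs at: {outliers}")
```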
352. Robust Anomaly Detection in Critical Infrastructure. Abdelaty, Maged Fathy Youssef, 14 September 2022.
Critical Infrastructures (CIs) such as water treatment plants, power grids and telecommunication networks are critical to the daily activities and well-being of our society. Disruption of such CIs would have catastrophic consequences for public safety and the national economy. Hence, these infrastructures have become major targets in the upsurge of cyberattacks. Defending against such attacks often depends on an arsenal of cyber-defence tools, including Machine Learning (ML)-based Anomaly Detection Systems (ADSs). These detection systems use ML models to learn the profile of the normal behaviour of a CI and classify deviations that go well beyond the normality profile as anomalies. However, ML methods are vulnerable to both adversarial and non-adversarial input perturbations. Adversarial perturbations are imperceptible noises added to the input data by an attacker to evade the classification mechanism. Non-adversarial perturbations can be normal behaviour evolving as a result of changes in usage patterns or other characteristics, or noisy data from normally degrading devices, either of which generates a high rate of false positives. We first study the problem of ML-based ADSs being vulnerable to non-adversarial perturbations, which causes a high rate of false alarms. To address this problem, we propose an ADS called DAICS, based on a wide and deep learning model that is both adaptive to evolving normality and robust to noisy data normally emerging from the system. DAICS adapts the pre-trained model to new normality with a small number of data samples and a few gradient updates, based on feedback from the operator on false alarms. DAICS was evaluated on two datasets collected from real-world Industrial Control System (ICS) testbeds. The results show that the adaptation process is fast and that DAICS is more robust than state-of-the-art approaches. We further investigate the problem of false-positive alarms in ADSs. To address this problem, an extension of DAICS, called the SiFA framework, is proposed. SiFA collects a buffer of historical false alarms and suppresses every new alarm that is similar to these false alarms. The proposed framework is evaluated using a dataset collected from a real-world ICS testbed. The evaluation results show that SiFA can decrease the false alarm rate of DAICS by more than 80%.
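The alarm-suppression idea behind SiFA can be sketched as follows; the feature vectors, the Euclidean distance, and the threshold are assumptions for illustration rather than the thesis's actual design.

```python
# Hedged sketch of SiFA-style suppression: buffer operator-confirmed false
# alarms and mute any new alarm whose feature vector is close to one of them.
import numpy as np

false_alarm_buffer = []  # feature vectors of alarms confirmed as false

def is_suppressed(alarm, threshold=0.1):
    """True if the new alarm resembles a buffered false alarm (assumed
    Euclidean distance and threshold; the thesis may use other choices)."""
    return any(np.linalg.norm(alarm - fa) < threshold
               for fa in false_alarm_buffer)

false_alarm_buffer.append(np.array([0.20, 0.90, 0.10]))
print(is_suppressed(np.array([0.21, 0.88, 0.10])))  # True: muted
print(is_suppressed(np.array([0.90, 0.10, 0.70])))  # False: shown to operator
```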
We also investigate the problem of ML-based network ADSs that are vulnerable to adversarial perturbations. In the case of network ADSs, attackers may use their knowledge of anomaly detection logic to generate malicious traffic that remains undetected. One way to solve this issue is to adopt adversarial training, in which the training set is augmented with adversarially perturbed samples. This thesis presents an adversarial training approach called GADoT that leverages a Generative Adversarial Network (GAN) to generate adversarial samples for training. GADoT is validated in the scenario of an ADS detecting Distributed Denial of Service (DDoS) attacks, which have been increasing in volume and complexity. For a practical evaluation, the DDoS network traffic was perturbed to generate two datasets while fully preserving the semantics of the attack. The results show that adversaries can exploit their domain expertise to craft adversarial attacks without requiring knowledge of the underlying detection model. We then demonstrate that adversarial training using GADoT renders ML models more robust to adversarial perturbations. However, the evaluation of adversarial robustness is often susceptible to errors, leading to robustness overestimation. We investigate the problem of robustness overestimation in network ADSs and propose an adversarial attack called UPAS to evaluate the robustness of such ADSs. The UPAS attack perturbs the inter-arrival time between packets by injecting a random time delay before packets from the attacker. The attack is validated by perturbing malicious network traffic in a multi-attack dataset and used to evaluate the robustness of two robust ADSs, which are based on a denoising autoencoder and an adversarially trained ML model. The results demonstrate that the robustness of both ADSs is overestimated and that a standardised evaluation of robustness is needed.
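A minimal sketch of the kind of perturbation UPAS applies, assuming packets are represented as (timestamp, source IP) pairs; the delay range and the representation are illustrative assumptions, not the attack's actual implementation.

```python
# Hedged sketch of a UPAS-style perturbation: inject a random delay before
# each attacker packet, altering inter-arrival times while the traffic's
# payloads are meant to stay intact.
import random

def inject_delays(packets, attacker_ip, max_delay=0.5):
    """packets: chronologically ordered (timestamp, src_ip) pairs."""
    perturbed = [(ts + random.uniform(0.0, max_delay), src)
                 if src == attacker_ip else (ts, src)
                 for ts, src in packets]
    return sorted(perturbed)  # restore chronological order

flow = [(0.00, "10.0.0.9"), (0.01, "10.0.0.9"), (0.02, "192.168.1.5")]
print(inject_delays(flow, attacker_ip="10.0.0.9"))
```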
353. Enhancement of an Ad Reviewal Process through Interpretable Anomaly Detecting Machine Learning Models / Förbättring av en annonsgranskingsprocess genom tolkbara och avvikelsedetekterande maskinsinlärningsmodeller. Dahlgren, Eric, January 2022.
Technological advancements made in recent decades in the fields of artificial intelligence (AI) and machine learning (ML) have led to further automation of tasks previously performed by humans. Manually reviewing and assessing content uploaded to social media and marketplace platforms is one such task that is both tedious and expensive to perform, and it could possibly be automated through ML-based systems. When introducing ML model predictions into a human decision-making process, interpretability and explainability of models have been shown to be important factors for humans to trust individual sample predictions. This thesis project explores the performance of interpretable ML models used together with humans in an ad review process for a rental marketplace platform. Utilizing the XGBoost framework and SHAP for interpretable ML, a system was built that scores an individual ad and explains the prediction with human-readable sentences based on feature importance. The model reached an ROC AUC score of 0.90 and an Average Precision score of 0.64 on a held-out test set. An end-user survey indicated some trust in the model and an appreciation for the local prediction explanations, but low general impact and helpfulness. While most related work focuses on model performance, this thesis contributes a smaller model usability study which can provide grounds for utilizing interpretable ML software in any manual decision-making process.
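A hedged sketch of how such a scoring-plus-explanation step might look with XGBoost and SHAP; the feature names, toy data, and sentence template are invented, and the exact shape of SHAP's output can vary across library versions.

```python
# Hedged sketch: score an ad with XGBoost and turn the largest SHAP
# attribution into a human-readable sentence.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
feature_names = ["price_deviation", "description_length", "image_count"]
X = rng.random((500, 3))
y = (X[:, 0] > 0.8).astype(int)  # toy rule: extreme pricing is anomalous

model = XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)

ad = X[:1]
score = model.predict_proba(ad)[0, 1]
contrib = np.asarray(explainer.shap_values(ad))[0]
top = int(np.argmax(np.abs(contrib)))
verb = "raises" if contrib[top] > 0 else "lowers"
print(f"risk score {score:.2f}: '{feature_names[top]}' {verb} the risk most")
```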
354. Comparison of Machine Learning Algorithms for Anomaly Detection in Train’s Real-Time Ethernet using an Intrusion Detection System. Chaganti, Trayi; Rohith, Tadi, January 2022.
Background: The train communication network is vulnerable to intrusion attacks because of the openness of the Ethernet communication protocol, so an intrusion detection system must be incorporated into the train communication network. Many Machine Learning (ML) algorithms are available for developing an Intrusion Detection System (IDS), and the best is typically chosen based on accuracy and execution time. Performance metrics such as F1 score, precision, recall, and support are compared to see how well each algorithm fits the model during training. This thesis detects anomalies in the Train Control Management System (TCMS) and then compares various algorithms to determine the most accurate one.

Objectives: In this thesis, we research anomaly detection in a train's real-time Ethernet using an IDS. The main objectives include performing Principal Component Analysis (PCA) and feature selection using Random Forest (RF) to simplify the dataset by reducing dimensionality and extracting significant features, followed by choosing the most consistent algorithm for anomaly detection among the selected algorithms by evaluating performance parameters, especially accuracy and execution time, after training the models.

Method: This thesis uses one research methodology, experimentation, to answer our research questions. For RQ1, experimentation helps us gain better insights into the dataset so that we can extract valuable features (using RF) and reduce dimensionality (using PCA). RQ2 also uses experimentation because it provides better accuracy and reliability. After preprocessing, the data is used to train the algorithms, which are then evaluated using various methods.

Results: In this study, we analysed the data using EDA, reduced dimensionality with PCA, and selected features with the RF algorithm. We used five supervised machine learning methods: Support Vector Machine (SVM), Naive Bayes, Decision Tree, K-Nearest Neighbor (KNN), and Random Forest (RF). After testing on the preprocessed "KDDCup 1999" dataset from the University of California Irvine (UCI) ML repository, the Decision Tree model was the best-performing algorithm, with an accuracy of 98.89% in 0.098 seconds, in comparison to the other models.

Conclusions: Five models were trained using the five ML techniques for anomaly detection with an IDS. We conclude that the Decision Tree model has optimal performance, with an accuracy of 98.89% and an execution time of 0.098 seconds.
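A minimal sketch of the evaluation pipeline described above, using synthetic data in place of the preprocessed KDDCup 1999 set; the top-k cutoff and PCA dimensionality are assumptions, not the thesis's values.

```python
# Hedged sketch: RF-based feature selection, PCA, and a timed Decision Tree
# evaluation, mirroring the pipeline in the abstract on synthetic data.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=40, random_state=0)

# Keep the 15 features a Random Forest ranks highest, then reduce to 10 PCs.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top_k = np.argsort(rf.feature_importances_)[-15:]
X_red = PCA(n_components=10).fit_transform(X[:, top_k])

X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, random_state=0)
start = time.time()
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(f"accuracy {tree.score(X_te, y_te):.4f} in {time.time() - start:.3f}s")
```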
355. Unsupervised Online Anomaly Detection in Multivariate Time-Series / Oövervakad online-avvikelsedetektering i flerdimensionella tidsserier. Segerholm, Ludvig, January 2023.
This research aims to identify a method for unsupervised online anomaly detection in multivariate time series, in dynamic systems in general and in the case study of Devwards IoT system in particular. The solution is required to be explainable, to learn online, and to have low computational expense. A comprehensive literature review was conducted, leading to the experimentation and analysis of various anomaly detection approaches. Of the methods evaluated, a single recurrent neural network autoencoder emerged as the most promising, emphasizing a simple model structure that encourages stable performance with consistent outputs regardless of the average output, while other approaches, such as Hierarchical Temporal Memory models and an ensemble strategy of adaptive model pooling, yielded suboptimal results. A modified version of the Residual Explainer method for enhancing explainability in autoencoders for online scenarios showed promising outcomes. The use of the Mahalanobis distance for anomaly detection was explored, as were feature extraction and its implications in the context of the proposed approach. In conclusion, a single, streamlined recurrent neural network appears to be the superior approach for this application, though further investigation into online learning methods is warranted. The research contributes results to the field of unsupervised online anomaly detection in multivariate time series and to the Residual Explainer method for online autoencoders. Additionally, it offers data on the ineffectiveness of the Mahalanobis distance in an online anomaly detection environment.
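One building block mentioned above, Mahalanobis-distance scoring, can be sketched as follows. The residuals here are simulated with Gaussian noise, whereas in the thesis they would come from the recurrent autoencoder's reconstructions; the threshold is an assumption.

```python
# Hedged sketch: score points by the Mahalanobis distance of reconstruction
# residuals, flagging anomalies when the distance exceeds a threshold.
import numpy as np

rng = np.random.default_rng(0)
train_residuals = rng.normal(size=(1000, 4))  # residuals on normal data

mu = train_residuals.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_residuals, rowvar=False))

def mahalanobis(residual):
    d = residual - mu
    return float(np.sqrt(d @ cov_inv @ d))

threshold = 4.0  # would typically be set from a validation quantile
for r in (rng.normal(size=4), np.array([5.0, 5.0, 5.0, 5.0])):
    score = mahalanobis(r)
    print(f"score {score:.2f} -> {'anomaly' if score > threshold else 'normal'}")
```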
356. Construction of a machine learning training pipeline for merging AIS data with external data sources / Utveckling av en ML-pipeline för att kombinera AIS-data med externa datakällor i träningsprocessen. Yahya, Sami Said, January 2022.
Machine learning methods are increasingly being used in the maritime domain to predict traffic anomalies and to mitigate risk, for example avoiding collision and grounding accidents. However, most machine learning systems used for detecting such issues have been trained predominantly on single data sources such as vessel positioning data. Hence, it is desirable to support combining different sources of data in the training phase to allow more complex models to be built. In this thesis, we propose a multi-data pipeline for accumulating, decoding, preprocessing, and merging Automatic Identification System (AIS) data with weather data to train time-series-based deep learning models. The pipeline comprises several REST APIs to connect and listen to the data sources, storing and merging them using Structured Query Language (SQL). Specifically, the training pipeline consists of an AIS NMEA message decoder, a weather data receiver, and a Postgres database for merging and storing the data sources. Moreover, the pipeline was assessed by training a TensorFlow vRNN model. The proposed pipeline approach allows flexibility in including new data sources to effectively build models for the maritime domain as well as other traffic domains that use positioning data.
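As a rough illustration of the merge step: the thesis performs the join in PostgreSQL, but a hedged pandas stand-in below aligns each AIS position report with the nearest preceding weather observation. Column names and the one-hour tolerance are assumptions.

```python
# Hedged sketch: time-align AIS position reports with weather observations
# using an as-of join (pandas stand-in for the thesis's SQL merge).
import pandas as pd

ais = pd.DataFrame({
    "ts": pd.to_datetime(["2022-05-01 10:02", "2022-05-01 10:17"]),
    "mmsi": [219000001, 219000001],
    "lat": [57.70, 57.72],
    "lon": [11.96, 11.99],
}).sort_values("ts")

weather = pd.DataFrame({
    "ts": pd.to_datetime(["2022-05-01 10:00", "2022-05-01 11:00"]),
    "wind_speed": [6.2, 7.8],
}).sort_values("ts")

merged = pd.merge_asof(ais, weather, on="ts",
                       direction="backward", tolerance=pd.Timedelta("1h"))
print(merged)
```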
357. Anomaly Detection in Multi-Seasonal Time Series Data. Williams, Ashton Taylor, 05 June 2023.
No description available.
358. Improving predictive behavior under distributional shift. Ahmed, Faruk.
The fundamental assumption guiding practice in machine learning has been that test-time data is independent and identically distributed with respect to the training distribution. In practical use, training sets are often small enough to encourage reliance upon misleading biases. Additionally, when deployed in the real world, a model is likely to encounter novel or anomalous data. When this happens, we would like our models to communicate reduced predictive confidence. Such situations, arising as a result of different forms of distributional shift, comprise what are currently termed out-of-distribution (OOD) settings. In this thesis-by-article, we discuss aspects of OOD performance with regard to semantic and non-semantic distributional shift; these correspond to instances of OOD detection and OOD generalization problems.
In the first article, we critically appraise the problem of OOD detection, with regard to benchmarking and evaluation. Arguing that OOD detection is too broad to be meaningful, we suggest detecting semantic anomalies instead. We show that classifiers trained with auxiliary self-supervised objectives can improve semanticity in feature representations, as indicated by improved semantic anomaly detection as well as improved generalization.
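A hedged sketch of what an auxiliary self-supervised objective can look like, here rotation prediction in PyTorch; the architecture, loss weight, and choice of auxiliary task are illustrative assumptions, not the article's exact setup.

```python
# Hedged sketch: jointly train a classifier head and an auxiliary head that
# predicts which of four rotations was applied to the input batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
class_head = nn.Linear(256, 10)  # main task: 10 semantic classes
rot_head = nn.Linear(256, 4)     # auxiliary task: 4 rotation angles
opt = torch.optim.Adam([*backbone.parameters(),
                        *class_head.parameters(), *rot_head.parameters()])

x = torch.randn(8, 3, 32, 32)            # a toy image batch
y = torch.randint(0, 10, (8,))           # toy class labels
k = int(torch.randint(0, 4, (1,)))       # rotation label for this batch
x_rot = torch.rot90(x, k, dims=(2, 3))   # rotate by k * 90 degrees

loss = F.cross_entropy(class_head(backbone(x)), y) \
     + 0.5 * F.cross_entropy(rot_head(backbone(x_rot)),
                             torch.full((8,), k, dtype=torch.long))
opt.zero_grad(); loss.backward(); opt.step()
```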
In the second article, we further develop our discussion of the twin goals of robustness to non-semantic distributional shift and sensitivity to semantic shift. Adopting a perspective of compositionality, we decompose non-semantic shift into systematic and non-systematic components, with in-distribution generalization and semantic anomaly detection forming the tasks corresponding to complementary compositions. We show by means of empirical evaluations on synthetic setups that it is possible to improve performance on all these aspects of robustness and uncertainty simultaneously. We also propose a simple method that improves upon existing approaches on our synthetic benchmarks.
In the third and final article, we consider an online, black-box scenario in which both the distribution of input data conditioned on labels changes from training to testing, as well as the marginal distribution of labels. We show that under such practical constraints, simple online probabilistic estimates of label-shift can nevertheless be a promising approach.
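A minimal sketch of an online, EM-style estimate of a shifted label marginal from a black-box model's predicted probabilities; the update rule, learning rate, and simulated stream are assumptions rather than the article's exact method.

```python
# Hedged sketch: maintain a running estimate of the test-time label prior by
# reweighting each prediction by the prior ratio and nudging the estimate.
import numpy as np

def update_prior(probs, train_prior, test_prior, lr=0.05):
    """Reweight a prediction by the prior ratio, renormalize, and move the
    running test-prior estimate toward the adjusted posterior."""
    adjusted = probs * (test_prior / train_prior)
    adjusted /= adjusted.sum()
    return (1 - lr) * test_prior + lr * adjusted

train_prior = np.array([0.9, 0.1])       # label marginal seen in training
test_prior = train_prior.copy()          # running estimate at test time
rng = np.random.default_rng(0)
for _ in range(500):                     # simulated stream of softmax outputs
    probs = rng.dirichlet([1.0, 3.0])    # test stream skews toward class 1
    test_prior = update_prior(probs, train_prior, test_prior)
print(test_prior)  # drifts away from [0.9, 0.1] toward the new marginal
```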
We close with a brief discussion of possible avenues forward.
359. Two-way Multi-input Generative Neural Network for Anomaly Event Detection and Localization. Yang, Mingchen, January 2022.
Anomaly event detection has become increasingly important and is of great significance for real-time monitoring systems. However, developing a reliable anomaly detection and localization model still requires overcoming many challenging problems, given the ambiguity in the definition of an abnormal event and the lack of ground-truth datasets for training. In this thesis, we propose a Two-way Multi-input Generative Neural Network (TMGNN), an unsupervised anomaly event detection and localization method based on the Generative Adversarial Network (GAN). TMGNN is composed of two neural networks, an appearance generation network and a motion generation network, trained on normal frames and their corresponding motion and mosaic frames, respectively. At test time, the trained model cannot properly reconstruct anomalous objects, since the network was trained only on normal frames and has not learned the patterns of anomalous cases. With the help of our new patch-based evaluation method, we utilize the reconstruction error to detect and localize possible anomalous objects. Our experiments show that on the UCSD Pedestrian2 dataset, our approach achieves 96.5% Area Under Curve (AUC) and 94.1% AUC for the frame-level and pixel-level criteria, respectively, the best classification results compared to other traditional and deep learning methods. / Thesis / Master of Applied Science (MASc) / Recently, abnormal event detection has attracted increasing attention in the field of surveillance video. However, it is still a big challenge to build an automatic and reliable abnormal event detection system that can review a surveillance video containing hundreds of frames and mask the frames with abnormal objects or events. In this thesis, we build a model and teach it to memorize the structure of normal frames; the model is then able to tell which frames are normal, and any other frames that appear in the surveillance video are classified as abnormal. Moreover, we design a new method to evaluate the performance of our model and compare it with other models' results.
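The patch-based evaluation idea can be sketched as follows: compare each frame with its reconstruction patch by patch and flag patches whose reconstruction error is large. Patch size and threshold are illustrative assumptions, not the thesis's values.

```python
# Hedged sketch: localize anomalies via per-patch reconstruction error.
import numpy as np

def patch_anomaly_map(frame, recon, patch=16, threshold=0.05):
    h, w = frame.shape
    flags = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            err = np.mean((frame[i:i + patch, j:j + patch]
                           - recon[i:i + patch, j:j + patch]) ** 2)
            flags[i // patch, j // patch] = err > threshold
    return flags  # True cells localize poorly reconstructed regions

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
recon = frame.copy()
recon[16:32, 16:32] += 0.5  # simulate a badly reconstructed (anomalous) area
print(patch_anomaly_map(frame, recon))
```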
360. Machine learning-based performance analytics for high-performance computing systems. Aksar, Burak, 17 January 2024.
High-performance Computing (HPC) systems play pivotal roles in societal and scientific advancements, executing up to quintillions of calculations every second. As we shift towards exascale computing and beyond, modern HPC systems emphasize resource sharing, where various applications share processors, memory, networks, and other components. While this sharing enhances power efficiency, it complicates performance prediction and introduces significant variations in application running times, affecting overall system efficiency and operational costs.
HPC systems utilize monitoring frameworks that gather numerical telemetry data on resource usage to track operational status. Given the massive complexity and volume of this data, manual analysis is often daunting and inefficient. Machine learning (ML) techniques offer automated performance anomaly diagnosis, but the transition from successful research outcomes to production-scale deployment encounters two critical obstacles. First, the scarcity of labeled training data (i.e., identifying healthy and anomalous runs) in telemetry datasets makes it hard to train these ML systems effectively. Second, runtime analysis, required for providing timely detection and diagnosis of performance anomalies, demands seamless integration of ML-based methods with the monitoring frameworks.
This thesis claims that ML-based performance analytics frameworks that leverage a limited amount of labeled data and ensure runtime analysis can achieve sufficient anomaly diagnosis performance for production HPC systems. To support this claim, we undertake ML-based performance analytics on two fronts. First, we design and develop novel frameworks for anomaly diagnosis that leverage semi-supervised or unsupervised learning techniques to reduce the need for extensive labeled data. Second, we design a simple yet adaptable architecture to enable deployment and demonstrate that these frameworks are feasible for runtime analysis.
This thesis makes the following specific contributions: First, we design a semi-supervised anomaly diagnosis framework, Proctor, which operates with hundreds of labeled samples (in contrast to tens of thousands) and a vast number of unlabeled samples. We show that Proctor outperforms the fully supervised baseline by up to 11% in F1-score for diagnosing anomalies when there are approximately 30 labeled samples. We then reframe the problem and introduce ALBADRoss to determine which samples should be labeled by experts to maximize the model performance using active learning. On a production HPC dataset, ALBADRoss achieves a 0.95 F1-score (the same score that a fully-supervised framework achieved) and near-zero false alarm rate using 24x fewer labeled samples. Finally, with Prodigy, we solve the anomaly detection problem but with a focus on deployment. Prodigy is designed for detecting performance anomalies on compute nodes using unsupervised learning. Our framework achieves a 0.95 F1-score in detecting anomalies on a production HPC system telemetry dataset. We also design a simple and adaptable software architecture and deploy it on a 1488-node production HPC system, detecting real-world performance anomalies with 88% accuracy.
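An uncertainty-sampling loop of the kind ALBADRoss's active learning builds on can be sketched as follows; the model, seed size, query size, and margin criterion are assumptions for illustration, not the framework's actual design.

```python
# Hedged sketch: repeatedly ask an "expert" to label the samples the current
# model is least certain about (smallest class-probability margin).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, random_state=0)
labeled = list(range(40))                        # small expert-labeled seed
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):
    clf = RandomForestClassifier(random_state=0).fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])
    margin = np.abs(probs[:, 0] - probs[:, 1])   # small margin = uncertain
    query = [pool[i] for i in np.argsort(margin)[:10]]
    labeled += query                             # "expert" supplies y[query]
    pool = [i for i in pool if i not in query]
print(f"labeled {len(labeled)} samples after 5 query rounds")
```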