  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Improving predictive behavior under distributional shift

Ahmed, Faruk 08 1900
The fundamental assumption guiding practice in machine learning has been that test-time data is \emph{independent and identically distributed} with respect to the training distribution. In practice, training sets are often small enough to encourage reliance upon misleading biases. Additionally, when deployed in the real world, a model is likely to encounter novel or anomalous data. When this happens, we would like our models to communicate reduced predictive confidence. Such situations, arising from different forms of distributional shift, comprise what are currently termed \emph{out-of-distribution} (OOD) settings. In this thesis-by-article, we discuss aspects of OOD performance with regard to semantic and non-semantic distributional shift; these correspond to instances of OOD detection and OOD generalization problems.
In the first article, we critically appraise the problem of OOD detection with regard to benchmarking and evaluation. Arguing that OOD detection is too broad to be meaningful, we suggest detecting semantic anomalies instead. We show that classifiers trained with auxiliary self-supervised objectives can improve semanticity in feature representations, as indicated by improved semantic anomaly detection as well as improved generalization. In the second article, we further develop our discussion of the twin goals of robustness to non-semantic distributional shift and sensitivity to semantic shift. Adopting a perspective of compositionality, we decompose non-semantic shift into systematic and non-systematic components, with in-distribution generalization and semantic anomaly detection forming the complementary tasks. We show, by means of empirical evaluations on synthetic setups, that it is possible to improve performance on all these aspects of robustness and uncertainty simultaneously. We also propose a simple method that improves upon existing approaches on our synthetic benchmarks. In the third and final article, we consider an online, black-box scenario in which both the distribution of input data conditioned on labels and the marginal distribution of labels change from training to testing. We show that under such practical constraints, simple online probabilistic estimates of label shift can nevertheless be a promising approach. We close with a brief discussion of possible avenues forward.
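The online label-shift correction described in the third article might look roughly like the following minimal sketch. This is an illustrative reconstruction, not the thesis method: the moving-average update and the learning rate are assumptions, as are the function names.

```python
# Hypothetical sketch of online label-shift correction. We keep a running
# estimate of the test-time label marginal q(y) from the model's own soft
# predictions, and reweight the classifier posterior p(y|x) by q(y)/p_train(y).

def reweight(posterior, q, p_train):
    """Adjust p(y|x) for an estimated shift in the label marginal, renormalizing."""
    w = [post * qi / pi for post, qi, pi in zip(posterior, q, p_train)]
    s = sum(w)
    return [wi / s for wi in w]

def online_update(q, posterior, lr=0.05):
    """Exponential moving average of the predicted label marginal."""
    return [(1 - lr) * qi + lr * post for qi, post in zip(q, posterior)]
```

In this sketch, each incoming prediction nudges the marginal estimate `q`, and the reweighted posterior shifts probability mass toward labels that appear more frequent at test time than during training.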

Two-way Multi-input Generative Neural Network for Anomaly Event Detection and Localization

Yang, Mingchen January 2022
Anomaly event detection has become increasingly important and is of great significance for real-time monitoring systems. However, developing a reliable anomaly detection and localization model still requires overcoming many challenging problems, given the ambiguity in the definition of an abnormal event and the lack of ground-truth datasets for training. In this thesis, we propose a Two-way Multi-input Generative Neural Network (TMGNN), an unsupervised anomaly event detection and localization method based on the Generative Adversarial Network (GAN). TMGNN is composed of two neural networks, an appearance generation network and a motion generation network, trained on normal frames and their corresponding motion and mosaic frames, respectively. At test time, the trained model cannot properly reconstruct anomalous objects, since the network is trained only on normal frames and has not learned the patterns of anomalous cases. With the help of our new patch-based evaluation method, we use the reconstruction error to detect and localize possible anomalous objects. Our experiments show that on the UCSD Pedestrian 2 (Ped2) dataset, our approach achieves 96.5% Area Under the Curve (AUC) at the frame level and 94.1% AUC at the pixel level, the best classification results compared to other traditional and deep-learning methods. / Thesis / Master of Applied Science (MASc) / Recently, abnormal event detection has attracted increasing attention in the field of surveillance video. However, it is still a big challenge to build an automatic, reliable abnormal-event detection system that can review a surveillance video containing hundreds of frames and mask the frames with abnormal objects or events. In this thesis, we build a model and teach it to memorize the structure of normal frames, so that it can tell which frames are normal; any other frames appearing in the surveillance video will be classified as abnormal. Moreover, we design a new method to evaluate the performance of our model and compare it with other models' results.
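The patch-based scoring idea above can be illustrated with a minimal sketch. This is not the thesis code: the patch size, the threshold, and the mean-squared-error choice are assumptions for the example.

```python
# Illustrative sketch: localize anomalies by scoring reconstruction error
# per patch instead of per frame. Patches with high error are flagged.

def patch_errors(frame, recon, patch=2):
    """Mean squared reconstruction error for each patch x patch block."""
    h, w = len(frame), len(frame[0])
    scores = {}
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            err = 0.0
            for di in range(patch):
                for dj in range(patch):
                    d = frame[i + di][j + dj] - recon[i + di][j + dj]
                    err += d * d
            scores[(i, j)] = err / (patch * patch)
    return scores

def localize(scores, threshold):
    """Patches whose error exceeds the threshold are flagged anomalous."""
    return [pos for pos, s in scores.items() if s > threshold]
```

Scoring per patch rather than per frame is what allows localization: a single poorly reconstructed region stands out even when the frame-level average error is small.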

Machine learning-based performance analytics for high-performance computing systems

Aksar, Burak 17 January 2024
High-performance Computing (HPC) systems play pivotal roles in societal and scientific advancements, executing up to quintillions of calculations every second. As we shift towards exascale computing and beyond, modern HPC systems emphasize resource sharing, where various applications share processors, memory, networks, and other components. While this sharing enhances power efficiency, it complicates performance prediction and introduces significant variations in application running times, affecting overall system efficiency and operational costs. HPC systems utilize monitoring frameworks that gather numerical telemetry data on resource usage to track operational status. Given the massive complexity and volume of this data, manual analysis is often daunting and inefficient. Machine learning (ML) techniques offer automated performance anomaly diagnosis, but the transition from successful research outcomes to production-scale deployment encounters two critical obstacles. First, the scarcity of labeled training data (i.e., identifying healthy and anomalous runs) in telemetry datasets makes it hard to train these ML systems effectively. Second, runtime analysis, required for providing timely detection and diagnosis of performance anomalies, demands seamless integration of ML-based methods with the monitoring frameworks. This thesis claims that ML-based performance analytics frameworks that leverage a limited amount of labeled data and ensure runtime analysis can achieve sufficient anomaly diagnosis performance for production HPC systems. To support this claim, we undertake ML-based performance analytics on two fronts. First, we design and develop novel frameworks for anomaly diagnosis that leverage semi-supervised or unsupervised learning techniques to reduce the need for extensive labeled data. Second, we design a simple yet adaptable architecture to enable deployment and demonstrate that these frameworks are feasible for runtime analysis. 
This thesis makes the following specific contributions: First, we design a semi-supervised anomaly diagnosis framework, Proctor, which operates with hundreds of labeled samples (in contrast to tens of thousands) and a vast number of unlabeled samples. We show that Proctor outperforms the fully supervised baseline by up to 11% in F1-score for diagnosing anomalies when there are approximately 30 labeled samples. We then reframe the problem and introduce ALBADRoss to determine which samples should be labeled by experts to maximize the model performance using active learning. On a production HPC dataset, ALBADRoss achieves a 0.95 F1-score (the same score that a fully-supervised framework achieved) and near-zero false alarm rate using 24x fewer labeled samples. Finally, with Prodigy, we solve the anomaly detection problem but with a focus on deployment. Prodigy is designed for detecting performance anomalies on compute nodes using unsupervised learning. Our framework achieves a 0.95 F1-score in detecting anomalies on a production HPC system telemetry dataset. We also design a simple and adaptable software architecture and deploy it on a 1488-node production HPC system, detecting real-world performance anomalies with 88% accuracy.
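As a rough illustration of unsupervised anomaly detection over numerical telemetry of the kind described above, consider the following statistical-baseline sketch. It stands in for, and is much simpler than, the learned models in these frameworks; the metric names and the sigma cutoff are assumptions.

```python
# Minimal unsupervised sketch: fit per-metric mean and standard deviation on
# historical telemetry, then flag samples whose z-score exceeds a cutoff.
from statistics import mean, stdev

def fit_baseline(history):
    """history: {metric: [values]} -> {metric: (mean, std)}."""
    return {m: (mean(v), stdev(v)) for m, v in history.items()}

def is_anomalous(sample, baseline, cutoff=3.0):
    """Flag the sample if any metric deviates more than `cutoff` sigmas."""
    for metric, value in sample.items():
        mu, sigma = baseline[metric]
        if sigma > 0 and abs(value - mu) / sigma > cutoff:
            return True
    return False
```

The appeal of this family of approaches for production deployment is that fitting requires no labels at all; the trade-off, which the thesis addresses with more capable models, is sensitivity to normal-but-rare workload patterns.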

Some new anomaly detection methods with applications to financial data

Zhao, Zhicong 06 August 2021
Novel clustering methods are presented and applied to financial data. First, a scan-statistics method for detecting price-point clusters in financial transaction data is considered. The method is applied to Electronic Benefit Transfer (EBT) transaction data from the Supplemental Nutrition Assistance Program (SNAP). For a given vendor, transaction amounts are fit via maximum likelihood estimation and then converted to the unit interval via a natural copula transformation. Next, a new Markov-type relation for order statistics on the unit interval is developed. The relation is used to characterize the distribution of the minimum exceedance of all copula-transformed transaction amounts above an observed order statistic. Conditional on observed order statistics, independent and asymptotically identical indicator functions are constructed, and the success probability is specified as a function of the gaps between consecutive order statistics. The success probabilities are shown to be a function of the hazard rate of the transformed transaction distribution. If gaps are smaller than expected, then the corresponding indicator functions are more likely to be one. A scan statistic is then applied to the sequence of indicator functions to detect locations where too many gaps are smaller than expected; these sets of gaps are flagged as anomalous price-point clusters. It is noted that prominent price-point clusters appearing in the data may be a historical vestige of previous versions of the SNAP program involving outdated paper "food stamps". The second part of the project develops a novel clustering method whereby the time series of daily total EBT transaction amounts are clustered by periodicity. The scheme works by normalizing the time series of daily total transaction amounts for two distinct vendors and taking daily differences between the two series. The difference series is then examined for periodicity via a novel F statistic. We find that one may cluster the monthly periodicities of vendors by store type using the F statistic as a proxy for a distance metric. This may indicate that spending preferences of SNAP benefit recipients vary by day of the month; however, it opens further questions about potential forcing mechanisms and apparently changing appetites for spending.
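The scan-statistic step over the indicator sequence can be sketched as follows. The window length and threshold here are illustrative assumptions, not values from the dissertation; in practice the threshold would be calibrated from the null distribution of the window counts.

```python
# Hedged sketch: slide a fixed window over the 0/1 indicator sequence and
# flag start positions where the count of 1s (too-small gaps) is improbably
# high, marking a candidate price-point cluster.

def scan_statistic(indicators, window, threshold):
    """Return start indices of windows containing more than `threshold` ones."""
    flagged = []
    count = sum(indicators[:window])
    if count > threshold:
        flagged.append(0)
    for start in range(1, len(indicators) - window + 1):
        # Update the running count incrementally as the window slides.
        count += indicators[start + window - 1] - indicators[start - 1]
        if count > threshold:
            flagged.append(start)
    return flagged
```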

Identifying the Impact of Noise on Anomaly Detection through Functional Near-Infrared Spectroscopy (fNIRS) and Eye-tracking

Gabbard, Ryan Dwight 11 August 2017
No description available.

Performance of One-class Support Vector Machine (SVM) in Detection of Anomalies in the Bridge Data

Dalvi, Aditi January 2017
No description available.

Anomaly Detection and Microstructure Characterization in Fiber Reinforced Ceramic Matrix Composites

Bricker, Stephen January 2015
No description available.

Approaches to Abnormality Detection with Constraints

Otey, Matthew Eric 12 September 2006
No description available.

Topology-aware Correlated Network Anomaly Detection and Diagnosis

Dhanapalan, Manojprasadh 19 July 2012
No description available.

Software Performance Anomaly Detection Through Analysis Of Test Data By Multivariate Techniques

Salahshour Torshizi, Sara January 2022
This thesis aims to uncover anomalies in data describing the performance behavior of a "robot controller", as measured by software metrics. The purpose of analyzing the data is mainly to identify the changes that have resulted in different performance behaviors, which we refer to as performance anomalies. To address this, two separate pre-processing approaches have been developed: one that adds principal components to the data after the cleaning steps, and another that does not use principal components. Next, Isolation Forest is employed, which uses an ensemble of isolation trees to segregate anomalous data points and generate scores from which anomalies can be identified. In addition, a clustering procedure is used in which points with the largest distances to their matching cluster centroids are flagged as anomalies. These two data-preparation methods, combined with the two anomaly detection algorithms, identified software builds that are very likely to be anomalies. According to an industrial evaluation based on engineers' domain knowledge, around 70% of the software builds flagged as anomalous were confirmed as genuine anomalies, indicating system-variable deviations or software bugs.
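The centroid-distance rule used alongside Isolation Forest can be sketched as follows. This is an illustrative reconstruction, not the thesis implementation: cluster assignments are assumed to come from a prior clustering step and are simply given here.

```python
# Hedged sketch: within each cluster, the points farthest from their
# assigned centroid are reported as the most likely anomalies.
from math import dist

def centroid(points):
    """Coordinate-wise mean of a non-empty list of points."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def farthest_from_centroid(clusters, top_k=1):
    """clusters: list of point lists; return the top_k most distant points."""
    scored = []
    for pts in clusters:
        c = centroid(pts)
        for p in pts:
            scored.append((dist(p, c), p))
    scored.sort(reverse=True)
    return [p for _, p in scored[:top_k]]
```

Ranking by distance rather than applying a hard threshold matches the workflow described above, where the highest-scoring builds are handed to engineers for confirmation.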
