21

Uso de detectores de dimensões variáveis aplicados na detecção de anomalias através de sistemas imunológicos artificiais. / Use of variable-sized detectors for anomaly detection with artificial immune systems.

Daniel dos Santos Morim 15 July 2009 (has links)
This work investigates an anomaly detection method based on artificial immune systems, specifically on a self/non-self recognition technique called the negative selection algorithm (NSA). A representation scheme based on hyperspheres with variable centers and radii is used, together with a model able to generate detectors with this representation efficiently. The model employs a genetic algorithm in which each chromosome gene holds an index into a quasi-random point distribution that serves as a detector center, and a decoder function determines the appropriate radii. Chromosome fitness is given by an estimate of the covered volume, computed by Monte Carlo integration. The algorithm's performance was evaluated in different dimensions and its limitations identified, which focused the subsequent improvements: genetic operators better suited to the chosen representation, techniques for reducing the number of self-set points, and a preprocessing method based on time-series bitmaps. Evaluations on synthetic data and experiments on real data demonstrate the good performance of the proposed algorithm and the reduction in execution time.
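As a rough illustration of the detector scheme described in this abstract, the sketch below places hyperspherical detectors with variable radii outside a toy self set and estimates their coverage by Monte Carlo sampling. It is a minimal Python sketch under simplifying assumptions: detector centers are drawn at random rather than evolved by a genetic algorithm over quasi-random indices, and the names (`make_detectors`, `monte_carlo_coverage`) are illustrative, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_detectors(self_points, n_detectors, dim, self_radius=0.1):
    """Place hypersphere detectors (center, radius) in [0,1]^dim outside the self region."""
    detectors = []
    while len(detectors) < n_detectors:
        center = rng.random(dim)                         # candidate center (quasi-random + GA in the thesis)
        d = np.linalg.norm(self_points - center, axis=1).min()
        if d > self_radius:                              # keep only centers clear of every self point
            detectors.append((center, d - self_radius))  # radius extends to the nearest self hypersphere
    return detectors

def monte_carlo_coverage(detectors, dim, n_samples=20000):
    """Estimate the fraction of [0,1]^dim covered by the detector set."""
    samples = rng.random((n_samples, dim))
    covered = np.zeros(n_samples, dtype=bool)
    for center, radius in detectors:
        covered |= np.linalg.norm(samples - center, axis=1) <= radius
    return covered.mean()

def is_anomalous(x, detectors):
    """A point matched by any detector is flagged as non-self (anomalous)."""
    return any(np.linalg.norm(x - c) <= r for c, r in detectors)

# toy self set clustered near the centre of the unit square
self_points = 0.5 + 0.05 * rng.standard_normal((200, 2))
detectors = make_detectors(self_points, n_detectors=50, dim=2)
print("estimated coverage:", monte_carlo_coverage(detectors, dim=2))
print("anomalous?", is_anomalous(np.array([0.95, 0.95]), detectors))  # expected True: far from the self set
```

In the thesis, the Monte Carlo coverage estimate plays the role of the chromosome fitness that the genetic algorithm maximizes.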
22

Batch and Online Implicit Weighted Gaussian Processes for Robust Novelty Detection

Ramirez, Padron Ruben 01 January 2015 (has links)
This dissertation aims mainly at obtaining robust variants of Gaussian processes (GPs) that do not require using non-Gaussian likelihoods to compensate for outliers in the training data. Bayesian kernel methods, and in particular GPs, have been used to solve a variety of machine learning problems, matching or exceeding the performance of other successful techniques. That is the case for a recently proposed approach to GP-based novelty detection that uses standard GPs (i.e. GPs employing Gaussian likelihoods). However, standard GPs are sensitive to outliers in training data, and this limitation carries over to GP-based novelty detection. This limitation has typically been addressed by using robust non-Gaussian likelihoods. However, non-Gaussian likelihoods lead to analytically intractable inferences, which require approximation techniques that are typically complex and computationally expensive. Inspired by the use of weights in quasi-robust statistics, this work introduces a particular type of weight functions, called here data weighers, in order to obtain robust GPs that do not require approximation techniques and retain the simplicity of standard GPs. This work proposes implicit weighted variants of batch GP, online GP, and sparse online GP (SOGP) that employ weighted Gaussian likelihoods. Mathematical expressions for calculating the posteriors of the implicit weighted GPs are derived in this work. In our experiments, novelty detection based on our weighted batch GPs consistently and significantly outperformed standard batch GP-based novelty detection whenever data was contaminated with outliers. Additionally, our experiments show that novelty detection based on online GPs can perform similarly to batch GP-based novelty detection. Membership scores previously introduced by other authors are also compared in our experiments.
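For context, the sketch below shows the standard GP regression with a Gaussian likelihood that the dissertation builds on, with a simple novelty score taken from the predictive distribution (a squared standardized residual). It is an illustrative baseline only; it does not implement the implicit weighted likelihoods proposed in the work, and the RBF kernel, noise level, and function names are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X_train, y_train, X_test, noise=0.1):
    """Standard GP regression posterior mean and variance (Gaussian likelihood)."""
    K = rbf_kernel(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - (v ** 2).sum(0) + noise ** 2
    return mean, var

def novelty_score(y_obs, mean, var):
    """Squared standardized residual: large values mark observations the GP finds surprising."""
    return (y_obs - mean) ** 2 / var

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
X_new = np.array([[0.5], [2.9]])
mean, var = gp_predict(X, y, X_new)
print(novelty_score(np.array([0.48, 3.0]), mean, var))  # second observation is far from the GP prediction
```

The dissertation's weighted variants instead use weighted Gaussian likelihoods with data weighers so that outlying training points have less influence on this posterior, while retaining the simplicity of standard GPs.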
23

An adaptive ensemble classifier for mining concept drifting data streams

Farid, D.M., Zhang, L., Hossain, A., Rahman, C.M., Strachan, R., Sexton, G., Dahal, Keshav P. January 2013 (has links)
It is challenging to use traditional data mining techniques to deal with real-time data stream classification. Existing mining classifiers need to be updated frequently to adapt to the changes in data streams. To address this issue, in this paper we propose an adaptive ensemble approach for classification and novel class detection in concept-drifting data streams. The proposed approach uses traditional mining classifiers and updates the ensemble model automatically so that it represents the most recent concepts in the data streams. For novel class detection we use the idea that data points belonging to the same class should be close to each other and far apart from data points belonging to other classes. If a data point is well separated from the existing data clusters, it is identified as a novel class instance. We tested the performance of the proposed stream classification model against existing mining algorithms using real benchmark datasets from the UCI (University of California, Irvine) machine learning repository. The experimental results show that our approach exhibits great flexibility and robustness in novel class detection under concept drift and outperforms traditional classification models in challenging real-life data stream applications. (C) 2013 Elsevier Ltd. All rights reserved.
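The separation idea used for novel class detection (points of one class lie close together and far from other classes) can be illustrated with a simple centroid-and-radius rule, sketched below. This is only an illustrative reading of that idea, not the paper's ensemble algorithm; the per-class radius and the slack factor are assumptions.

```python
import numpy as np

def class_stats(X, y):
    """Per-class centroid and radius (mean distance of members to their centroid)."""
    stats = {}
    for label in np.unique(y):
        members = X[y == label]
        centroid = members.mean(axis=0)
        radius = np.linalg.norm(members - centroid, axis=1).mean()
        stats[label] = (centroid, radius)
    return stats

def detect_novel(x, stats, slack=2.0):
    """Flag x as a potential novel-class instance if it is well separated from every known class."""
    for centroid, radius in stats.values():
        if np.linalg.norm(x - centroid) <= slack * radius:
            return False   # close enough to an existing class cluster
    return True

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
stats = class_stats(X, y)
print(detect_novel(np.array([0.1, 0.2]), stats))   # False: belongs to class 0's cluster
print(detect_novel(np.array([6.0, -4.0]), stats))  # True: far from both known classes
```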
24

Vital sign monitoring and data fusion for paediatric triage

Shah, Syed Ahmar January 2012 (has links)
Accurate assessment of a child’s health is critical for appropriate allocation of medical resources and timely delivery of healthcare in both primary care (GP consultations) and secondary care (ED consultations). Serious illnesses such as meningitis and pneumonia account for 20% of deaths in childhood and require early recognition and treatment in order to maximize the chances of survival of affected children. Due to time constraints, poorly defined normal ranges, difficulty in achieving accurate readings and the difficulties faced by clinicians in interpreting combinations of vital signs, vital signs are rarely measured in primary care and their utility is limited in emergency departments. This thesis aims to develop a monitoring and data fusion system, to be used in both primary care and emergency department settings during the initial assessment of children suspected of having a serious infection. The proposed system relies on the photoplethysmogram (PPG) which is routinely recorded in different clinical settings with a pulse oximeter using a small finger probe. The most difficult vital sign to measure accurately is respiratory rate which has been found to be predictive of serious infection. An automated method is developed to estimate the respiratory rate from the PPG waveform using both the amplitude modulation caused by changes in thoracic pressure during the respiratory cycle and the phenomenon of respiratory sinus arrhythmia, the heart rate variability associated with respiration. The performance of such automated methods deteriorates when monitoring children as a result of frequent motion artefact. A method is developed that automatically identifies high-quality PPG segments mitigating the effects of motion on the estimation of respiratory rate. In the final part of the thesis, the four vital signs (heart rate, temperature, oxygen saturation and respiratory rate) are combined using a probabilistic framework to provide a novelty score for ranking various diagnostic groups, and predicting the severity of infection in two independent data sets from two different clinical settings.
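The amplitude-modulation route to respiratory rate mentioned above can be illustrated with a short sketch: the PPG's slow amplitude envelope is extracted and the dominant frequency in a plausible breathing band is converted to breaths per minute. This is a toy illustration on synthetic data, not the thesis's method, which also exploits respiratory sinus arrhythmia and a signal-quality step; the band limits and signal parameters here are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def respiratory_rate_from_ppg(ppg, fs, rr_band=(0.1, 0.8)):
    """Estimate respiratory rate (breaths/min) from the PPG amplitude-modulation envelope."""
    envelope = np.abs(hilbert(ppg - ppg.mean()))           # slow modulation of the pulse amplitude
    envelope -= envelope.mean()
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    band = (freqs >= rr_band[0]) & (freqs <= rr_band[1])   # plausible breathing frequencies
    f_resp = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_resp

# synthetic PPG: 1.5 Hz pulse whose amplitude is modulated by 0.3 Hz (18 breaths/min) respiration
fs = 100.0
t = np.arange(0, 60, 1 / fs)
ppg = (1 + 0.3 * np.sin(2 * np.pi * 0.3 * t)) * np.sin(2 * np.pi * 1.5 * t)
print(respiratory_rate_from_ppg(ppg, fs))   # ~18 breaths per minute
```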
25

Novelty detection with extreme value theory in vital-sign monitoring

Hugueny, Samuel Y. January 2013 (has links)
Every year in the UK, tens of thousands of hospital patients suffer adverse events, such as unplanned transfers to Intensive Therapy Units or unexpected cardiac arrests. Studies have shown that in a large majority of cases, significant physiological abnormalities can be observed within the 24-hour period preceding such events. Such warning signs may go unnoticed if they occur between observations by the nursing staff, or are simply not identified as such. Timely detection of these warning signs and appropriate escalation schemes have been shown to improve both patient outcomes and the use of hospital resources, most notably by reducing patients’ length of stay. Automated real-time early-warning systems appear to be cost-efficient answers to the need for continuous vital-sign monitoring. Traditionally, a limitation of such systems has been their sensitivity to noisy and artefactual measurements, resulting in false-alert rates that made them unusable in practice, or earned them the mistrust of clinical staff. Tarassenko et al. (2005) and Hann (2008) proposed a novelty detection approach to the problem of continuous vital-sign monitoring, which, in a clinical trial, was shown to yield clinically acceptable false-alert rates. In this approach, an observation is compared to a data fusion model, and its “normality” assessed by comparing a chosen statistic to a pre-set threshold. The method, while informed by large amounts of training data, has a number of heuristic aspects. This thesis proposes a principled approach to multivariate novelty detection in stochastic time-series, where novelty scores have a probabilistic interpretation and are explicitly linked to the starting assumptions made. Our approach stems from the observation that novelty detection using complex multivariate, multimodal generative models is generally an ill-defined problem when attempted in the data space. In situations where “novel” is equivalent to “improbable with respect to a probability distribution”, formulating the problem in a univariate probability space allows us to use classical results of univariate statistical theory. Specifically, we propose a multivariate extension to extreme value theory and, more generally, order statistics, suitable for performing novelty detection in time-series generated from a multivariate, possibly multimodal model. All the methods introduced in this thesis are applied to a vital-sign monitoring problem and compared to the existing method of choice. We show that it is possible to outperform the existing method while retaining a probabilistic interpretation. In addition to their application to novelty detection for vital-sign monitoring, the contributions in this thesis to extreme value theory and order statistics are also valid in the broader context of data-modelling, and may be useful for analysing data from other complex systems.
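The basic univariate ingredient of extreme-value-based novelty detection can be sketched as follows: a GEV distribution is fitted to block maxima of a normality statistic computed on normal data, and the tail probability of a test window's worst-case statistic becomes a probabilistically interpretable novelty score. This is only a univariate illustration with assumed choices (a Gaussian data model, an arbitrary block size, scipy's GEV fit); the thesis's contribution is the multivariate, possibly multimodal extension, which is not reproduced here.

```python
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(3)

# "normality statistic": negative log-likelihood of each observation under a fitted Gaussian model
train = rng.normal(0, 1, 5000)
nll = lambda x: -norm.logpdf(x, loc=train.mean(), scale=train.std())

# collect block maxima of the statistic over windows of normal data (the extremes seen under normality)
block = 50
maxima = nll(train).reshape(-1, block).max(axis=1)
shape, loc, scale = genextreme.fit(maxima)                # GEV fit to the block maxima

def novelty_score(window):
    """Probability that the window's worst-case statistic would be this extreme for normal data."""
    m = nll(np.asarray(window)).max()
    return genextreme.sf(m, shape, loc=loc, scale=scale)  # small value => novel

print(novelty_score(rng.normal(0, 1, block)))    # typically not small: a normal window
print(novelty_score(rng.normal(6, 1, block)))    # near zero: an abnormal window
```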
26

Représentations pour la détection d’anomalies : Application aux données vibratoires des moteurs d’avions / Representations for anomaly detection: Application to aircraft engines’ vibration data

Abdel Sayed, Mina 03 July 2018 (has links)
Vibration measurements are among the most relevant data for detecting anomalies in engines. Vibrations are recorded on a test bench during acceleration and deceleration phases to ensure the reliability of every engine at the end of the production line. These temporal signals are converted into spectrograms so that experts can visually analyze the data and detect any unusual signature; vibration sources appear as lines on the spectrograms. In this thesis, we develop a decision-support tool that automatically analyzes these spectrograms and detects any type of unusual signature, which does not necessarily originate from damage to the engine. First, we built a database of spectrograms with annotated zones. It is important to note that the unusual signatures vary in shape, intensity and position and occur in only a small number of recordings. Consequently, to detect them, as in novelty detection, we characterize the normal behavior of the spectrograms by representing spectrogram patches on dictionaries such as curvelets and non-negative matrix factorization (NMF), and by estimating the distribution of each point of the spectrogram from normal data, with or without taking its neighborhood into account. Atypical points are detected by comparing test data with the normality model estimated on normal training data; detecting these points in turn allows the unusual signatures they compose to be detected.
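One of the dictionary representations mentioned above, NMF, can be illustrated with a short sketch in which a dictionary of patch atoms is learned from normal spectrogram patches and the reconstruction error of test patches serves as the atypicality score. This is a schematic example on synthetic data; the patch size, number of atoms, and use of scikit-learn's NMF are assumptions, not the thesis's configuration, and the curvelet and neighborhood-distribution variants are not shown.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)

def extract_patches(spectrogram, size=8):
    """Cut a (freq x time) spectrogram into flattened, non-overlapping size x size patches."""
    F, T = spectrogram.shape
    patches = [spectrogram[i:i + size, j:j + size].ravel()
               for i in range(0, F - size + 1, size)
               for j in range(0, T - size + 1, size)]
    return np.array(patches)

# normal spectrograms: broadband noise; the anomaly will be a strong localized zone
normal_spec = rng.random((64, 256))
train_patches = extract_patches(normal_spec)

nmf = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
nmf.fit(train_patches)                                # learn a non-negative dictionary of patch atoms

def atypicality(patches, model):
    """Reconstruction error of each patch on the learned dictionary (high => unusual signature)."""
    W = model.transform(patches)
    recon = W @ model.components_
    return np.linalg.norm(patches - recon, axis=1)

test_spec = rng.random((64, 256))
test_spec[20:28, 100:108] += 5.0                      # inject an atypical high-energy zone
scores = atypicality(extract_patches(test_spec), nmf)
print(scores.max(), np.median(scores))                # the patches covering the injected zone stand out
```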
27

DEEP LEARNING BASED MODELS FOR NOVELTY ADAPTATION IN AUTONOMOUS MULTI-AGENT SYSTEMS

Marina Wagdy Wadea Haliem (13121685) 20 July 2022 (has links)
Autonomous systems are often deployed in dynamic environments and are challenged with unexpected changes (novelties), receiving novel data that was not seen during training. Given this uncertainty, they should be able to operate without (or with limited) human intervention and are expected to: (1) adapt to such changes while remaining effective and efficient at their multiple tasks, providing continuous availability of their critical functionality; (2) make informed decisions independently of any central authority; (3) be cognitive: learn the new context and its possible actions, and be rich in knowledge discovery through mining and pattern recognition; and (4) be reflexive: react to novel unknown data as well as to security threats without terminating ongoing critical missions. These characteristics combine to form the workflow of the autonomous decision-making process in multi-agent environments, i.e., any action taken by the system must pass through these characteristic models so that an ideal decision is made autonomously for the situation.

In this dissertation, we propose novel learning-based models to enhance the decision-making process in autonomous multi-agent systems, in which agents are able to detect novelties (unexpected changes in the environment) and adapt to them in a timely manner. For this purpose, we explore two complex and highly dynamic domains: (1) transportation networks (e.g., ridesharing), for which we develop AdaPool, a novel distributed diurnal-adaptive decision-making framework for multi-agent autonomous vehicles using model-free deep reinforcement learning and change point detection; and (2) multi-agent games (e.g., Monopoly), for which we propose a hybrid approach that combines deep reinforcement learning (for frequent but complex decisions) with a fixed-policy approach (for infrequent but straightforward decisions) to facilitate decision-making while remaining adaptive to novelties. (3) Further, we present a domain-agnostic approach for decision-making without prior knowledge in dynamic environments using Bootstrapped DQN. (4) To enhance the security of autonomous multi-agent systems, we develop machine learning based resilience testing of address randomization moving target defense. Finally, (5) to further improve the decision-making process, we present a novel framework for multi-agent deep covering option discovery, designed to accelerate exploration (the first step of decision-making for autonomous agents) by identifying potential collaborative agents and encouraging visits to under-represented states in their joint observation space.
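The change point detection used to trigger adaptation in AdaPool is not specified in this summary; as a generic stand-in, the sketch below applies a one-sided CUSUM test to an agent's reward stream and flags the step at which the reward distribution appears to shift, which is one common way such a trigger can be realized. The drift and threshold values are arbitrary assumptions.

```python
import numpy as np

def cusum_change_point(stream, drift=0.05, threshold=5.0):
    """One-sided CUSUM on a reward stream: return the index where a downward shift is flagged."""
    mean_est = stream[0]
    s = 0.0
    for t, x in enumerate(stream):
        mean_est += (x - mean_est) / (t + 1)       # running estimate of the 'normal' reward level
        s = max(0.0, s + (mean_est - x) - drift)   # accumulate evidence that rewards have dropped
        if s > threshold:
            return t                               # signal the agent to adapt (e.g., re-tune its policy)
    return None

rng = np.random.default_rng(5)
rewards = np.concatenate([rng.normal(1.0, 0.2, 300),   # environment behaves as during training
                          rng.normal(0.3, 0.2, 100)])  # novelty: the reward distribution shifts down
print(cusum_change_point(rewards))   # detection shortly after step 300
```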
28

A DEEP LEARNING BASED FRAMEWORK FOR NOVELTY AWARE EXPLAINABLE MULTIMODAL EMOTION RECOGNITION WITH SITUATIONAL KNOWLEDGE

Mijanur Palash (16672533) 03 August 2023 (has links)
Mental health significantly impacts issues like gun violence, school shootings, and suicide, and it is strongly connected to emotional state. By monitoring emotional changes over time, we can identify triggering events, detect early signs of instability, and take preventive measures. This thesis focuses on the development of a generalized and modular system for human emotion recognition and explanation based on visual information. The aim is to address the challenges of effectively utilizing the different cues (modalities) available in the data for a reliable and trustworthy emotion recognition system. The face is one of the most important media through which we express emotion, so we first propose SAFER, a novel facial emotion recognition system with background and place features, together with a detailed evaluation framework demonstrating its high accuracy and generalizability. However, relying solely on facial expressions for emotion recognition can be unreliable, as faces can be covered or deceptive. To enhance the system's reliability, we introduce EMERSK, a multimodal emotion recognition system that integrates various modalities, including facial expressions, posture, gait, and scene background, in a flexible and modular manner. It employs convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and denoising auto-encoders to extract features from facial images, posture, gait, and scene background. In addition to multimodal feature fusion, the system uses situational knowledge derived from place type and adjective-noun pairs (ANP) extracted from the scene, as well as the spatio-temporal average distribution of emotions, to generate comprehensive explanations for the recognition outcomes. Extensive experiments on benchmark datasets demonstrate the superiority of our approach over existing state-of-the-art methods in accurately recognizing and explaining human emotions. Moreover, we investigate the impact of novelty, such as face masks during the Covid-19 pandemic, on emotion recognition. The study critically examines the limitations of mainstream facial expression datasets and proposes a new dataset specifically tailored for facial emotion recognition with masked subjects. Additionally, we propose a continuous learning-based approach that incorporates a novelty detector working in parallel with the classifier to detect and properly handle instances of novelty, ensuring robustness and adaptability in the automatic emotion recognition task even in the presence of novel factors such as face masks. This thesis contributes to the field of automatic emotion recognition by providing a generalized and modular approach that effectively combines multiple modalities, ensures reliable and highly accurate recognition, generates situational knowledge valuable for mission-critical applications, and provides comprehensive explanations of its output. The findings and insights from this research have the potential to enhance the understanding and use of multimodal emotion recognition systems in various real-world applications.
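The "novelty detector working in parallel with the classifier" pattern can be sketched generically: a scoring function calibrated on known data gates each input, and inputs scored as novel are deferred rather than forced into one of the known emotion classes. The components below are toy placeholders (a distance-based detector and a trivial classifier), not the thesis's models; in the thesis the detector operates alongside the deep emotion recognizer.

```python
import numpy as np

class NoveltyAwareClassifier:
    """Classifier plus a parallel novelty detector; novel inputs are deferred instead of classified."""

    def __init__(self, classify, novelty_score, threshold):
        self.classify = classify              # e.g., a trained emotion classifier
        self.novelty_score = novelty_score    # e.g., an auto-encoder's reconstruction error
        self.threshold = threshold            # calibrated on held-out 'known' data

    def predict(self, x):
        if self.novelty_score(x) > self.threshold:
            return "novel"                    # hand off for review / continual learning instead of guessing
        return self.classify(x)

# toy stand-ins: 'known' inputs lie near the origin; the detector is distance from the training mean
train_mean = np.zeros(4)
detector = lambda x: np.linalg.norm(x - train_mean)
classifier = lambda x: "happy" if x[0] > 0 else "sad"

model = NoveltyAwareClassifier(classifier, detector, threshold=3.0)
print(model.predict(np.array([0.5, -0.2, 0.1, 0.0])))   # known regime -> emotion label
print(model.predict(np.array([8.0, 7.5, -9.0, 4.0])))   # novel input (e.g., unseen conditions)
```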
29

Towards Novelty-Resilient AI: Learning in the Open World

Trevor A Bonjour (18423153) 22 April 2024 (has links)
Current artificial intelligence (AI) systems are proficient at tasks in a closed-world setting where the rules are often rigid. However, in real-world applications, the environment is usually open and dynamic. In this work, we investigate the effects of such dynamic environments on AI systems and develop ways to mitigate those effects. Central to our exploration is the concept of novelties. Novelties encompass structural changes, unanticipated events, and environmental shifts that can confound traditional AI systems. We categorize novelties based on their representation, anticipation, and impact on agents, laying the groundwork for systematic detection and adaptation strategies. We explore novelties in the context of stochastic games. Decision-making in stochastic games exercises many aspects of the same reasoning capabilities needed by AI agents acting in the real world. A multi-agent stochastic game allows for infinitely many ways to introduce novelty. We propose an extension of the deep reinforcement learning (DRL) paradigm to develop agents that can detect and adapt to novelties in these environments. To address the sample efficiency challenge in DRL, we introduce a hybrid approach that combines fixed-policy methods with traditional DRL techniques, offering enhanced performance in complex decision-making tasks. We present a novel method for detecting anticipated novelties in multi-agent games, leveraging information theory to discern patterns indicative of collusion among players. Finally, we introduce DABLER, a pioneering deep reinforcement learning architecture that dynamically adapts to changing environmental conditions through broad learning approaches and environment recognition. Our findings underscore the importance of developing AI systems equipped to navigate the uncertainties of the open world, offering promising pathways for advancing AI research and application in real-world settings.
30

Ψηφιακή επεξεργασία και αυτόματη κατηγοριοποίηση περιβαλλοντικών ήχων / Digital processing and automatic classification of environmental sounds

Νταλαμπίρας, Σταύρος 20 September 2010 (has links)
The dissertation is organized as follows. In Chapter 1 we present a general overview of the automatic recognition of generalized sound events, discuss applications of audio signal recognition technology, give a brief description of the state of the art, and summarize the contributions of the thesis. In Chapter 2 we introduce the reader to the area of non-speech audio processing, covering current approaches to feature extraction and pattern recognition. In Chapter 3 we propose a novel sound recognition system designed specifically for urban environmental sound events and describe the design of the corresponding database; a hierarchical probabilistic structure is constructed along with a combined set of acoustic parameters, leading to high recognition accuracy. Chapter 4 has two parts: (a) we explore the use of multiresolution analysis for the speech/music discrimination problem, and (b) the knowledge acquired is used to build a system that combines features from different domains for efficient analysis of online radio signals. In Chapter 5 we study a new application of sound recognition technology, space monitoring based on the acoustic modality: we propose a system that detects atypical situations in a metro station environment to assist authorized personnel in monitoring the space. In Chapter 6 we propose an adaptive framework for acoustic surveillance of potentially hazardous situations in environments with different acoustic properties, and show that the system achieves high performance and can adapt to heterogeneous environments in an unsupervised way. In Chapter 7 we investigate the use of novelty detection for acoustic monitoring of indoor and outdoor spaces; a database of real-world data was recorded and three probabilistic techniques are proposed. In Chapter 8 we present a novel methodology for generalized sound recognition that achieves high recognition accuracy by exploiting temporal feature integration and multi-domain descriptors in combination with a state-of-the-art generative classification technique.
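The three probabilistic techniques of Chapter 7 are not detailed in this abstract; as one standard representative of likelihood-based acoustic novelty detection, the sketch below fits a Gaussian mixture model to background audio features and flags frames whose log-likelihood falls below a threshold calibrated on normal data. The feature dimensionality, number of components, and threshold quantile are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# stand-in acoustic feature vectors (e.g., MFCC frames) for the normal background soundscape
background = rng.normal(0, 1, (2000, 13))
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(background)

# threshold from a low quantile of training log-likelihoods (frames less likely than this are atypical)
threshold = np.quantile(gmm.score_samples(background), 0.01)

def is_atypical(frames):
    """Flag audio frames whose likelihood under the background model falls below the threshold."""
    return gmm.score_samples(frames) < threshold

normal_frames = rng.normal(0, 1, (5, 13))
unusual_frames = rng.normal(4, 1, (5, 13))   # e.g., frames from an abnormal acoustic event
print(is_atypical(normal_frames))
print(is_atypical(unusual_frames))
```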
