  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

Computational Methods for Perceptual Training in Radiology

January 2012
abstract: Medical images constitute a special class of images that are captured to allow diagnosis of disease, and their "correct" interpretation is vitally important. Because they are not "natural" images, radiologists must be trained to visually interpret them. This training process includes implicit perceptual learning that is gradually acquired over an extended period of exposure to medical images. This dissertation proposes novel computational methods for evaluating and facilitating perceptual training in radiologists. Part 1 of this dissertation proposes an eye-tracking-based metric for measuring the training progress of individual radiologists. Six metrics were identified as potentially useful: time to complete task, fixation count, fixation duration, consciously viewed regions, subconsciously viewed regions, and saccadic length. Part 2 of this dissertation proposes an eye-tracking-based entropy metric for tracking the rise and fall in the interest level of radiologists, as they scan chest radiographs. The results showed that entropy was significantly lower when radiologists were fixating on abnormal regions. Part 3 of this dissertation develops a method that allows extraction of Gabor-based feature vectors from corresponding anatomical regions of "normal" chest radiographs, despite anatomical variations across populations. These feature vectors are then used to develop and compare transductive and inductive computational methods for generating overlay maps that show atypical regions within test radiographs. The results show that the transductive methods produced much better maps than the inductive methods for 20 ground-truthed test radiographs. Part 4 of this dissertation uses an Extended Fuzzy C-Means (EFCM) based instance selection method to reduce the computational cost of transductive methods. The results showed that EFCM substantially reduced the computational cost without a substantial drop in performance. 
The dissertation then proposes a novel Variance Based Instance Selection (VBIS) method that also reduces the computational cost, but allows for incremental incorporation of new informative radiographs, as they are encountered. Part 5 of this dissertation develops and demonstrates a novel semi-transductive framework that combines the superior performance of transductive methods with the reduced computational cost of inductive methods. The results showed that the semi-transductive approach provided both an effective and efficient framework for detection of atypical regions in chest radiographs. / Dissertation/Thesis / Ph.D. Computer Science 2012
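The entropy metric of Part 2 is built on eye-tracking data, and its core intuition — that gaze concentrated on few regions yields low entropy — can be illustrated with a minimal, hedged sketch. The function below is a stand-in for illustration only, not the dissertation's actual metric: it computes the Shannon entropy of a radiologist's fixation counts over labelled image regions (the region labels are invented).

```python
import math
from collections import Counter

def fixation_entropy(fixated_regions):
    """Shannon entropy (in bits) of the distribution of fixations over regions.

    Low entropy: gaze concentrated on few regions (focused interest, e.g. on
    an abnormality). High entropy: gaze spread widely (search behaviour).
    """
    counts = Counter(fixated_regions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

With this toy measure, a fixation stream dwelling on one region scores 0 bits, while a stream spread evenly over four regions scores 2 bits, mirroring the reported drop in entropy when radiologists fixate abnormal regions.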
332

Anomaly detection technique for sequential data / Technique de détection d'anomalies utilisant des données séquentielles

Pellissier, Muriel 15 October 2013
Nowadays, huge quantities of data are easily accessible, but these data are not useful if we do not know how to process them efficiently and how to easily extract relevant information from them. Anomaly detection techniques are used in many domains to help process data in an automated way; the appropriate technique depends on the application domain, on the type of data, and on the type of anomaly. For this study we are interested only in sequential data. A sequence is an ordered list of items, also called events. Identifying irregularities in sequential data is essential for many application domains, such as DNA sequences, system calls, user commands, and banking transactions.

This thesis presents a new approach for identifying and analyzing irregularities in sequential data. The anomaly detection technique handles sequential data where the order of the items matters, and it considers not only the order of the events but also their position within the sequences. A sequence is flagged as anomalous if it is quasi-identical to a usual behavior, that is, if it differs only slightly from a frequent (common) sequence. The differences between two sequences are based on the order of the events and on their position in the sequence.

In this thesis we applied the technique to maritime surveillance, but it can be used in any other domain involving sequential data. For maritime surveillance, automated tools are needed to help customs target suspicious containers. Indeed, nowadays 90% of world trade is transported by maritime containers, yet only 1-2% of containers can be physically checked, owing to the high financial cost and the large human resources required to control a container. As the number of containers travelling around the world every day keeps growing, automated tools are necessary to direct customs controls and to counter illegal activities such as fraud, quota evasion, illegal products, hidden activities, and drug or arms smuggling. We identify suspicious containers by comparing the container trips in the data set with itineraries that are known to be normal (common). A container trip, also called an itinerary, is an ordered list of actions performed on a container at specific geographical positions; the possible actions are loading, transshipment, and discharging, and for each action we know the container ID and its geographical position (port ID).

The technique is divided into two parts. The first part detects the common (most frequent) sequences in the data set. The second part identifies the sequences that are slightly different from the common ones, using a distance-based method to classify a given sequence as normal or suspicious. The distance is calculated with a method that combines quantitative and qualitative differences between two sequences.
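The two-part procedure — mine the frequent itineraries, then flag near-misses by distance — can be sketched as follows. This is a hedged illustration, not the thesis's method: it uses plain Levenshtein edit distance (which is sensitive to both the order and the position of events) as a stand-in for the combined qualitative/quantitative distance, and the `min_support` and `max_dist` parameters are invented for the example.

```python
from collections import Counter

def edit_distance(a, b):
    """Classic Levenshtein distance between two sequences: reacts to
    substituted, inserted and deleted events, hence to order and position."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (x != y)  # substitution
                           ))
        prev = cur
    return prev[-1]

def flag_anomalies(trips, min_support=3, max_dist=1):
    """Flag itineraries that are close to, but not identical with, a
    frequent (common) itinerary; parameters are illustrative choices."""
    freq = [t for t, c in Counter(trips).items() if c >= min_support]
    flagged = []
    for t in set(trips):
        if t in freq:
            continue  # identical to a common itinerary: normal
        d = min((edit_distance(t, f) for f in freq), default=None)
        if d is not None and d <= max_dist:
            flagged.append(t)  # a slight deviation from normal behaviour
    return flagged
```

For instance, among five occurrences of the itinerary ('A','B','C'), the near-duplicate ('A','C','B') is flagged as suspicious, while a completely different trip ('X','Y') is too far from any frequent itinerary to be a "slight" deviation.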
333

DETECÇÃO DE ATAQUES DE NEGAÇÃO DE SERVIÇO EM REDES DE COMPUTADORES ATRAVÉS DA TRANSFORMADA WAVELET 2D / A BIDIMENSIONAL WAVELET TRANSFORM BASED ALGORITHM FOR DOS ATTACK DETECTION

Azevedo, Renato Preigschadt de 08 March 2012
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The analysis of network traffic is a key area for the management of fault-tolerant systems, since anomalies in network traffic can affect availability and quality of service (QoS). Intrusion detection systems in computer networks are used to analyze network traffic in order to detect attacks and anomalies; anomaly-based analysis detects attacks from the behavior of the network traffic. This work proposes an intrusion detection tool to quickly and effectively detect anomalies in computer networks generated by denial-of-service (DoS) attacks. The detection algorithm is based on the two-dimensional wavelet transform (2D Wavelet), a method derived from signal analysis. The wavelet transform is a mathematical tool with low computational cost that exploits the information present in the input samples across the different levels of the transformation. The proposed algorithm detects anomalies directly from the wavelet coefficients using threshold techniques, and does not require reconstruction of the original signal. Experiments were performed using two databases: a synthetic one (DARPA) and one collected at the Federal University of Santa Maria (UFSM), allowing analysis of the intrusion detection tool under different scenarios. The wavelets considered in the tests were all from the orthonormal Daubechies family: Haar (Db1), Db2, Db4 and Db8 (with 1, 2, 4 and 8 vanishing moments, respectively). For the DARPA database we obtained a detection rate of up to 100% using the Daubechies Db4 wavelet with normalized wavelet coefficients; for the database collected at UFSM the detection rate was 95%, again with Db4 and normalized coefficients.
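The core idea of thresholding wavelet detail coefficients directly, with no reconstruction of the original signal, can be illustrated with one level of the 2D Haar transform (the Db1 case). The sketch below is a simplified illustration, not the thesis's algorithm: the normalization by 4, the threshold value, and the assumption of even matrix dimensions are choices made for this example.

```python
def haar2d(mat):
    """One level of the 2D Haar transform on a matrix with even dimensions.
    Returns (approximation, details), where details collects the horizontal,
    vertical and diagonal coefficients of each 2x2 block."""
    h, w = len(mat), len(mat[0])
    ll, details = [], []
    for i in range(0, h, 2):
        ll_row, d_row = [], []
        for j in range(0, w, 2):
            a, b = mat[i][j], mat[i][j + 1]
            c, d = mat[i + 1][j], mat[i + 1][j + 1]
            ll_row.append((a + b + c + d) / 4)  # approximation (LL band)
            d_row += [(a - b + c - d) / 4,      # horizontal detail
                      (a + b - c - d) / 4,      # vertical detail
                      (a - b - c + d) / 4]      # diagonal detail
        ll.append(ll_row)
        details.append(d_row)
    return ll, details

def is_anomalous(traffic_matrix, threshold):
    """Flag a traffic window directly from the wavelet detail coefficients,
    without reconstructing the original signal."""
    _, details = haar2d(traffic_matrix)
    return max(abs(c) for row in details for c in row) > threshold
```

A smooth traffic matrix yields near-zero detail coefficients, while a sudden spike, such as a DoS burst concentrated in one cell, produces large detail coefficients that cross the threshold.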
334

Detecção de Cross-Site Scripting em páginas Web

Nunan, Angelo Eduardo 14 May 2012
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Web applications are currently an important environment for access to services available on the Internet, and assuring the security of these resources has become a fundamental task. The structure of dynamic websites, composed of objects such as HTML tags, script functions, hyperlinks and advanced web browser features, enables numerous interactive services, for instance e-commerce, Internet banking, social networking, blogs and forums. On the other hand, these same features have increased the potential security risks and the attacks that result from malicious code injection. In this context, Cross-Site Scripting (XSS) has ranked at the top of lists of the greatest threats to web applications in recent years. This work presents a method based on supervised machine learning techniques to detect XSS in web pages. A set of features extracted from the URL and the web document is employed to discriminate XSS attack patterns and to successfully classify both malicious and non-malicious pages.
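The pipeline of extracting URL/document features and feeding them to a supervised classifier can be sketched minimally as follows. This is a hedged illustration only: the four regex-counted features and the nearest-centroid classifier are simple stand-ins, not the dissertation's feature set or learning algorithm.

```python
import re

def xss_features(url, html):
    """A few illustrative features over the URL and the document content
    (the real feature set used in such work is considerably larger)."""
    text = url + " " + html
    return [
        len(re.findall(r'<script', text, re.I)),          # script tags
        len(re.findall(r'on\w+\s*=', text, re.I)),        # inline event handlers
        len(re.findall(r'javascript:', text, re.I)),      # javascript: URIs
        len(re.findall(r'%3c|%3e|&#x?\d+', text, re.I)),  # encoded brackets/chars
    ]

def nearest_centroid(train, labels, sample):
    """Minimal supervised classifier: assign the label of the closest
    per-class mean feature vector (squared Euclidean distance)."""
    cents = {}
    for lab in set(labels):
        rows = [f for f, l in zip(train, labels) if l == lab]
        cents[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return min(cents, key=lambda lab: dist(cents[lab], sample))
```

Trained on a handful of labelled pages, the classifier separates pages whose URL or body is dense in script tags, event handlers and `javascript:` URIs from ordinary content pages.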
335

Detecção de anomalias em aplicações Web utilizando filtros baseados em coeficiente de correlação parcial / Anomaly detection in web applications using filters based on partial correlation coefficient

Silva, Otto Julio Ahlert Pinno da 31 October 2014
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Finding faults or the causes of performance problems in modern Web computer systems is an arduous task that involves many hours of system metric monitoring and log analysis. To aid administrators in this task, many anomaly detection mechanisms have been proposed that analyze the behavior of the system by collecting a large volume of statistical information on the condition and performance of the computer system. One approach adopted by these mechanisms is monitoring through the strong correlations found in the system. In this approach, collecting such large amounts of data creates drawbacks associated with communication, storage and, especially, the processing of the collected information. Nevertheless, few anomaly detection mechanisms have a strategy for selecting the statistical information to be collected, i.e., for selecting the monitored metrics. This work presents three metric selection filters for anomaly detection mechanisms based on correlation monitoring. The filters build on partial correlation, a technique capable of providing information not observable with common correlation methods. They were validated in a Web application scenario simulated with TPC-W, an e-commerce Web transaction benchmark. The results of our evaluation show that one of our filters allows the construction of a monitoring network with 8% fewer metrics than state-of-the-art filters while achieving fault coverage up to 10% more efficient.
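The partial correlation coefficient at the core of these filters has a simple closed form when a single variable is controlled for. The sketch below is a generic illustration of that formula, not the dissertation's filter code: r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2)), which equals the Pearson correlation between the residuals of x and y after regressing each on z.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

A metric pair whose strong raw correlation vanishes once a third metric is controlled for is redundant for monitoring purposes, which is the kind of information the filters exploit when pruning the set of monitored metrics.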
336

Cadre méthodologique et applicatif pour le développement de réseaux de capteurs fiables / The design of reliable sensor networks : methods and applications

Lalem, Farid 11 September 2017
Wireless sensor networks are emerging as an innovative technology that can revolutionize and improve the way we live, work and interact with the physical environment around us. Nevertheless, the use of such technology raises new challenges in the development of reliable and secure systems. Wireless sensor networks are often characterized by dense, large-scale deployment in resource-constrained environments; the constraints include limited processing, storage and, above all, energy capacity, since the nodes are generally battery powered. The main objective of this thesis is to propose solutions that guarantee a certain level of reliability in a WSN dedicated to sensitive applications. We address three axes:
- the development of methods for detecting failed sensor nodes in a WSN;
- the development of methods for detecting anomalies in the measurements collected by sensor nodes and, subsequently, worn-out sensors (those providing false measurements);
- the development of methods ensuring the integrity and authenticity of the data transmitted over a WSN.
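For the second axis, spotting anomalous measurements and worn-out sensors, a common baseline, which is not necessarily the method developed in this thesis, is to compare each node's reading against a robust aggregate of the nodes sensing the same phenomenon. A minimal sketch, with an invented `tolerance` parameter:

```python
def median(vals):
    """Median of a list of numbers (robust to a few faulty readings)."""
    s = sorted(vals)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def suspect_sensors(readings, tolerance):
    """Compare each node's reading with the median of all nodes sensing the
    same phenomenon; nodes deviating by more than `tolerance` are suspects
    (possibly worn out and providing false measurements)."""
    m = median(list(readings.values()))
    return sorted(node for node, v in readings.items() if abs(v - m) > tolerance)
```

Because the median is robust, a single sensor drifting far from its neighbours stands out without shifting the reference value it is compared against.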
337

Monitoring et détection d'anomalie par apprentissage dans les infrastructures virtualisées / Monitoring and detection of learning abnormalities in virtualized infrastructures

Sauvanaud, Carla 13 December 2016
Nowadays, the development of virtualization technologies and of the Internet has contributed to the rise of the cloud computing model. Cloud computing enables the delivery of configurable computing resources with convenient, on-demand network access; the resources hosted by a provider can be applications, development platforms or infrastructures. Over the past few years, computing systems have been characterized by rapid development, parallelism, and the diversity of tasks to be handled by applications and services. In order to satisfy the Service Level Agreements (SLAs) drawn up with users, cloud providers have to meet stringent dependability demands. Ensuring these demands while delivering various services makes cloud dependability a challenging task, especially because providers need to make their services available on demand, and all the more so because users expect cloud services to be at least as dependable as traditional computing systems.

In this manuscript, we address the problem of anomaly detection in cloud services of the SaaS and PaaS types. The types of anomaly to be detected are errors, preliminary symptoms of SLA violations, and SLA violations. A detection strategy for clouds should satisfy several criteria: (i) it should adapt to workload changes and service reconfigurations; (ii) it should run online, (iii) automatically, (iv) and with minimal configuration effort, ideally using the same technique regardless of the type of service. We propose a new detection strategy based on system monitoring data, collected online either from the host operating system or from the hypervisor(s) hosting the service. The strategy uses machine learning algorithms to classify anomalous behaviors of the service; three techniques are studied, based respectively on supervised learning, on unsupervised learning, and on a hybrid exploiting both types of learning. In addition, a new anomaly detection technique based on online clustering is developed, which can handle changes in the behavior of a system as dynamic as a cloud service. A cloud platform was deployed to evaluate the detection performance of our strategy, and a fault injection tool was developed for two purposes: collecting service observations with anomalies so as to train the detection models, and evaluating the strategy in the presence of anomalies. The evaluation covers two case studies: a database management system (MongoDB) and a virtualized network function. Sensitivity analyses show that very good detection performance can be obtained for all three anomaly types, and the contexts in which these results generalize are discussed.
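The online-clustering idea, letting clusters form and shift as a service's behavior evolves, then treating sparsely populated clusters as anomalous, can be sketched as follows. This is a toy illustration under stated assumptions, not the thesis's algorithm: the fixed `radius`, the running-mean centroid update, and the `min_size` anomaly rule are all invented for the example.

```python
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

class OnlineClusters:
    """Toy incremental clustering: a point joins (and shifts) the nearest
    centroid within `radius`, otherwise it seeds a new cluster. Points in
    small clusters are treated as anomalous behaviours."""

    def __init__(self, radius):
        self.radius = radius
        self.centroids, self.counts = [], []

    def add(self, point):
        """Assign `point` to a cluster online; returns the cluster id."""
        if self.centroids:
            i = min(range(len(self.centroids)),
                    key=lambda k: dist(point, self.centroids[k]))
            if dist(point, self.centroids[i]) <= self.radius:
                n = self.counts[i] + 1
                # running-mean update lets the cluster track behaviour drift
                self.centroids[i] = [c + (p - c) / n
                                     for c, p in zip(self.centroids[i], point)]
                self.counts[i] = n
                return i
        self.centroids.append(list(point))
        self.counts.append(1)
        return len(self.centroids) - 1

    def is_anomaly(self, cluster_id, min_size=3):
        """A behaviour seen only a few times is flagged as anomalous."""
        return self.counts[cluster_id] < min_size
```

Feature vectors of normal service behaviour accumulate into large clusters whose centroids drift with the workload, while an outlying performance-counter vector seeds a small cluster and is flagged.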
338

Tier-scalable reconnaissance: the future in autonomous C4ISR systems has arrived: progress towards an outdoor testbed

Fink, Wolfgang, Brooks, Alexander J.-W., Tarbell, Mark A., Dohm, James M. 18 May 2017
Autonomous reconnaissance missions are called for in extreme environments, as well as in potentially hazardous (e.g., theatres of war, disaster-stricken areas) or inaccessible operational areas (e.g., planetary surfaces, space). Such future missions will require increasing degrees of operational autonomy, especially when following up on transient events. Operational autonomy encompasses: (1) automatic characterization of operational areas from different vantages (i.e., spaceborne, airborne, surface, subsurface); (2) automatic sensor deployment and data gathering; (3) automatic feature extraction, including anomaly detection and region-of-interest identification; (4) automatic target prediction and prioritization; and (5) subsequent automatic (re-)deployment and navigation of robotic agents. This paper reports on progress towards several aspects of autonomous C4ISR systems, including: the Caltech-patented and NASA award-winning multi-tiered mission paradigm; robotic platform development (air-, ground-, and water-based); robotic behavior motifs as the building blocks for autonomous telecommanding; and autonomous decision making based on a Caltech-patented framework comprising sensor data fusion (feature vectors), anomaly detection (clustering and principal component analysis), and target prioritization (hypothetical probing).
339

Anomaly Detection in Electricity Consumption Data

GHORBANI, SONIYA January 2017
Distribution grids play an important role in delivering electricity to end users. Electricity customers would like to have a continuous electricity supply without any disturbance; for customers such as airports and hospitals, electricity interruptions may have devastating consequences. Therefore, many electricity distribution companies are looking for ways to prevent power outages. Sometimes the outages are caused on the grid side, such as a failure in transformers or a breakdown in power cables because of wind, and sometimes they are caused by the customers, for example through overload. In fact, a very high peak in electricity consumption and an irregular load profile may cause these kinds of failures. In this thesis, we used a two-step approach for detecting customers with irregular load profiles. In the first step, we create a dictionary of all common load profile shapes using daily electricity consumption over a one-month period. In the second step, the load profile shapes of customers for a specific week are compared with the load patterns in the dictionary; if the electricity consumption of a customer during that week is not similar to any of the patterns in the dictionary, it is flagged as an anomaly. For this purpose, load profile data are transformed to symbols using Symbolic Aggregate approXimation (SAX) and then clustered using hierarchical clustering. The approach is used to detect anomalies in the weekly load profiles of a data set provided by HEM Nät, a power distribution company located in the south of Sweden.
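The SAX transformation used above follows a standard recipe: z-normalise the series, reduce it with Piecewise Aggregate Approximation (PAA), then map each segment mean to a symbol via Gaussian breakpoints. The sketch below illustrates that recipe for a four-letter alphabet; the segment count and alphabet size are example choices, not taken from the thesis.

```python
import statistics

# Standard SAX breakpoints dividing N(0, 1) into 4 equiprobable regions
BREAKPOINTS = [-0.6745, 0.0, 0.6745]
ALPHABET = "abcd"

def sax(series, n_segments):
    """Symbolic Aggregate approXimation: z-normalise, reduce with PAA,
    then map each segment mean to a symbol.  Assumes the series length is a
    multiple of n_segments and the series is not constant."""
    mu = statistics.mean(series)
    sd = statistics.pstdev(series)
    z = [(v - mu) / sd for v in series]
    seg = len(z) // n_segments
    word = ""
    for i in range(n_segments):
        m = statistics.mean(z[i * seg:(i + 1) * seg])
        k = sum(m > b for b in BREAKPOINTS)  # index of the symbol's region
        word += ALPHABET[k]
    return word
```

A steadily rising daily load profile maps to the word "abcd" and its mirror image to "dcba"; profiles whose SAX words fall far from every cluster of common words would then be the anomaly candidates.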
340

Network security monitoring and anomaly detection in industrial control system networks

Mantere, M. (Matti) 19 May 2015 (has links)
Abstract Industrial control system (ICS) networks used to be isolated environments, typically separated by physical air gaps from the wider area networks. This situation has been changing and the change has brought with it new cybersecurity issues. The process has also exacerbated existing problems that were previously less exposed due to the systems’ relative isolation. This process of increasing connectivity between devices, systems and persons can be seen as part of a paradigm shift called the Internet of Things (IoT). This change is progressing and the industry actors need to take it into account when working to improve the cybersecurity of ICS environments and thus their reliability. Ensuring that proper security processes and mechanisms are being implemented and enforced on the ICS network level is an important part of the general security posture of any given industrial actor. Network security and the detection of intrusions and anomalies in the context of ICS networks are the main high-level research foci of this thesis. These issues are investigated through work on machine learning (ML) based anomaly detection (AD). Potentially suitable features, approaches and algorithms for implementing a network anomaly detection system for use in ICS environments are investigated. After investigating the challenges, different approaches and methods, a proof-ofconcept (PoC) was implemented. The PoC implementation is built on top of the Bro network security monitoring framework (Bro) for testing the selected approach and tools. In the PoC, a Self-Organizing Map (SOM) algorithm is implemented using Bro scripting language to demonstrate the feasibility of using Bro as a base system. The implemented approach also represents a minimal case of event-driven machine learning anomaly detection (EMLAD) concept conceived during the research. 
The contributions of this thesis are as follows: a set of potential features for use in machine learning anomaly detection, proof of the feasibility of the machine learning approach in an ICS network setting, a concept for event-driven machine learning anomaly detection, and a design and initial implementation of a user-configurable and extendable machine learning anomaly detection framework for ICS networks. / Tiivistelmä (Abstract in Finnish): Developed societies use diverse automation systems to operate their industrial plants and infrastructures. The state of information and cyber security in these automation systems varies widely. Plants and the systems they rely on may represent technology from several different eras and contain the weaknesses and vulnerabilities of several eras as well. These systems were previously relatively isolated from networks other than their own communication buses. The erosion of this isolation has created a new set of threats by exposing the systems' communication interfaces to the surrounding world. These network environments are, however, still comparatively isolated, and this property can be exploited in monitoring them. This work presents research results on monitoring the security of such networks, in particular through anomaly detection using machine learning methods. After an initial investigation of the challenges and special characteristics of the domain, the Self-Organizing Map (SOM) algorithm is used to implement an example solution illustrating a new concept: event-driven machine learning anomaly detection (EMLAD).
The contributions of the work, all in the context of industrial automation networks, are: a proposed set of features for use in anomaly detection, a demonstration that machine learning based anomaly detection is feasible in this setting, and an extendable and flexible example implementation of the new EMLAD concept written in the scripting language of the Bro NSM tool.
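The PoC above implements its SOM in the Bro scripting language; as a rough illustration of the underlying idea only (not the thesis implementation — the grid size, training schedule, class name, and synthetic "traffic features" are all assumptions), a toy SOM can be trained on feature vectors from normal traffic and then score new vectors by their quantization error:

```python
import math
import random

class TinySOM:
    """A minimal Self-Organizing Map for anomaly scoring.

    Train on feature vectors from normal traffic; at detection time,
    the distance to the best-matching unit (quantization error)
    serves as the anomaly score.
    """
    def __init__(self, rows, cols, dim, seed=0):
        rng = random.Random(seed)
        self.nodes = [[[rng.random() for _ in range(dim)]
                       for _ in range(cols)] for _ in range(rows)]

    def _bmu(self, x):
        # Find the best-matching unit and its distance to x.
        best, best_d = (0, 0), float("inf")
        for r, row in enumerate(self.nodes):
            for c, w in enumerate(row):
                d = math.dist(w, x)
                if d < best_d:
                    best, best_d = (r, c), d
        return best, best_d

    def train(self, data, epochs=20, lr=0.5, radius=1):
        for e in range(epochs):
            a = lr * (1 - e / epochs)            # decaying learning rate
            for x in data:
                (br, bc), _ = self._bmu(x)
                for r, row in enumerate(self.nodes):
                    for c, w in enumerate(row):
                        # Pull the BMU and its grid neighbours toward x.
                        if abs(r - br) + abs(c - bc) <= radius:
                            for i in range(len(w)):
                                w[i] += a * (x[i] - w[i])

    def score(self, x):
        return self._bmu(x)[1]                   # quantization error

rng = random.Random(1)
normal = [[rng.gauss(0.5, 0.05), rng.gauss(0.5, 0.05)] for _ in range(100)]
som = TinySOM(3, 3, 2)
som.train(normal)
# A vector near the training data scores low; an outlier scores high.
print(som.score([0.5, 0.5]) < som.score([5.0, 5.0]))  # True
```

In an event-driven setting such as EMLAD, the scoring call would be invoked from the monitoring framework's event handlers as connections are observed, with a threshold on the score deciding whether to raise a notice.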
