  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

An Anomaly Behavior Analysis Methodology for Network Centric Systems

Alipour, Hamid Reza January 2013 (has links)
Information systems and their services (referred to as cyberspace) are ubiquitous and touch all aspects of our lives. With the exponential growth in cyberspace activity, the number and complexity of cyber-attacks have increased significantly, driven by the growing number of vulnerable applications and of attackers. Consequently, it becomes critical to develop efficient network Intrusion Detection Systems (IDS) that can mitigate and protect cyberspace resources and services against cyber-attacks. On the other hand, since each network system and application has its own specification as defined in its protocol, it is hard to develop a single IDS that works properly for all network protocols. A sounder approach is to design customized detection engines for each protocol and then aggregate the reports from these engines to determine the final security state of the system. In this dissertation, we developed a general methodology based on data mining, statistical analysis, and protocol semantics to perform anomaly behavior analysis and detection for network-centric systems and their protocols. In our approach, we build runtime models of a protocol's state transitions during a time interval ΔT. We consider any n consecutive messages in a session during the interval ΔT as an n-transition pattern called an n-gram. By applying statistical analysis over these n-gram patterns we can accurately model the normal behavior of any protocol. We then use the amount of deviation from this normal model to quantify the anomaly score of the protocol's activities. If this anomaly score is higher than a well-defined threshold, the system marks that activity as malicious. To validate our methodology, we have applied it to two different protocols: DNS (Domain Name System) at the application layer and IEEE 802.11 (WiFi) at the data link layer, where we achieved good detection rates (>95%) with low detection errors (<0.1%).
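The n-gram modeling step described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the dissertation's actual implementation: the protocol states, the session data, and the unseen-gram scoring rule are all invented for the example (the thesis applies a fuller statistical analysis over the n-gram patterns).

```python
from collections import Counter

def ngrams(messages, n=3):
    """All n-transition patterns (n-grams) in one session."""
    return [tuple(messages[i:i + n]) for i in range(len(messages) - n + 1)]

def train_model(normal_sessions, n=3):
    """Relative frequency of each n-gram over known-normal traffic."""
    counts = Counter()
    for session in normal_sessions:
        counts.update(ngrams(session, n))
    total = sum(counts.values())
    return {gram: c / total for gram, c in counts.items()}

def anomaly_score(session, model, n=3):
    """One simple deviation measure: the fraction of n-grams in the
    session that were never observed in normal traffic."""
    grams = ngrams(session, n)
    if not grams:
        return 0.0
    return sum(1 for g in grams if g not in model) / len(grams)

# Hypothetical protocol states for a DNS-like exchange
normal_traffic = [["QUERY", "RESPONSE", "ACK"] * 5]
model = train_model(normal_traffic)
benign = ["QUERY", "RESPONSE", "ACK", "QUERY", "RESPONSE", "ACK"]
attack = ["QUERY", "QUERY", "QUERY", "QUERY", "RESPONSE"]

THRESHOLD = 0.5  # the "well-defined threshold"; value invented here
print(anomaly_score(benign, model) > THRESHOLD)  # benign stays below threshold
print(anomaly_score(attack, model) > THRESHOLD)  # flagged as malicious
```

A real deployment would model n-gram frequencies per time interval ΔT and per protocol, rather than a single global table.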
242

Automatic Detection of Abnormal Behavior in Computing Systems

Roberts, James Frank 01 January 2013 (has links)
I present RAACD, a software suite that detects misbehaving computers in large computing systems and presents information about those machines to the system administrator. I build this system using preexisting anomaly detection techniques. I evaluate my methods using simple synthesized data, real data containing coerced abnormal behavior, and real data containing naturally occurring abnormal behavior. I find that the system adequately detects abnormal behavior and significantly reduces the amount of uninteresting computer health data presented to a system administrator.
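The abstract says RAACD builds on preexisting anomaly detection techniques without naming them; a fleet-wide z-score filter is one such technique and illustrates the "detect misbehaving computers, suppress uninteresting health data" idea. The host names, metric, and threshold below are invented for the sketch.

```python
import statistics

def flag_outliers(metric_by_host, z_threshold=1.5):
    """Report hosts whose health metric is a z-score outlier against the
    rest of the fleet; everything else is filtered out of the admin view."""
    values = list(metric_by_host.values())
    mean = statistics.fmean(values)
    spread = statistics.pstdev(values)
    if spread == 0:
        return []  # perfectly uniform fleet: nothing to report
    return [host for host, v in metric_by_host.items()
            if abs(v - mean) / spread > z_threshold]

# Hypothetical per-node load averages from a cluster
load_average = {"node01": 0.42, "node02": 0.39, "node03": 0.44,
                "node04": 9.80, "node05": 0.41}
print(flag_outliers(load_average))
```

Only the misbehaving node is surfaced; the four healthy nodes never reach the administrator, which is the data-reduction effect the abstract reports.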
243

Panic Detection in Human Crowds using Sparse Coding

Kumar, Abhishek 21 August 2012 (has links)
Recently, the surveillance of human activities has drawn considerable attention from the research community, and camera-based surveillance is increasingly assisted by computers. Cameras are used extensively for surveilling human activities; however, placing cameras and transmitting visual data is not the end of a surveillance system: surveillance must also detect abnormal or unwanted activities. Such abnormal activities are very infrequent compared to regular activities. At present, surveillance is done manually, with operators watching a set of surveillance video screens to discover abnormal events. This is expensive and prone to error. These limitations can be effectively removed if an automated anomaly detection system is designed. With powerful computers, computer vision is seen as a panacea for surveillance. A computer-vision-aided anomaly detection system enables the selection of only those video frames that contain an anomaly, so that only the selected frames need manual verification. A panic is a type of anomaly in a human crowd that appears when a group of people starts to move faster than usual. Such situations can arise from a frightening event near a crowd, such as a fight, robbery, or riot. A variety of computer-vision-based algorithms have been developed to detect panic in human crowds; however, most of the proposed algorithms are computationally expensive and hence too slow to run in real time. Dictionary learning is a robust tool for modeling a behaviour as a linear combination of dictionary elements. A few panic detection algorithms have shown high accuracy using dictionary learning; however, the dictionary learning approach is computationally expensive. Orthogonal matching pursuit (OMP) is an inexpensive way to model a behaviour using dictionary elements, and in this research OMP is used to design a panic detection algorithm.
The proposed algorithm has been tested on two datasets, and the results are comparable to state-of-the-art algorithms.
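The core idea, scoring a frame by how poorly a dictionary of normal-motion atoms reconstructs it, can be sketched as below. This is a simplified greedy pursuit, not the thesis's algorithm: full OMP re-fits the coefficients of all selected atoms jointly at each step, whereas subtracting one projection at a time is only equivalent when the dictionary is orthonormal, as in this toy example. The flow-histogram features and atoms are invented.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pursuit_residual_norm(feature, dictionary, sparsity=2):
    """Greedily pick the unit-norm atom most correlated with the residual
    and subtract its projection. With an orthonormal dictionary this matches
    OMP; the final residual norm serves as the panic/anomaly score."""
    residual = list(feature)
    for _ in range(sparsity):
        atom = max(dictionary, key=lambda a: abs(dot(residual, a)))
        coeff = dot(residual, atom)  # atoms assumed unit-norm
        residual = [r - coeff * a for r, a in zip(residual, atom)]
    return math.sqrt(dot(residual, residual))

# Hypothetical 4-bin optical-flow-magnitude histograms; the two atoms
# stand in for a dictionary learned from normal walking-speed crowds.
dictionary = [[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]]
calm_frame  = [0.9, 0.1, 0.0, 0.0]   # well explained by the dictionary
panic_frame = [0.0, 0.0, 0.2, 0.9]   # energy concentrated in high-speed bins
print(pursuit_residual_norm(calm_frame, dictionary))
print(pursuit_residual_norm(panic_frame, dictionary))
```

A large residual means the observed motion cannot be expressed as a sparse combination of normal-behaviour atoms, which is exactly when a panic alarm would fire.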
244

Visual analytics for maritime anomaly detection

Riveiro, María José January 2011 (has links)
The surveillance of large sea areas typically involves the analysis of huge quantities of heterogeneous data. In order to support the operator while monitoring maritime traffic, the identification of anomalous behavior or situations that might need further investigation may reduce operators' cognitive load. While existing mining applications support the identification of anomalies, autonomous anomaly detection systems are rarely used for maritime surveillance. Anomaly detection is normally a complex task that can hardly be solved by purely visual or purely computational methods. This thesis suggests and investigates the adoption of visual analytics principles to support the detection of anomalous vessel behavior in maritime traffic data. This adoption involves studying the analytical reasoning process that needs to be supported, using combined automatic and visualization approaches to support that process, and evaluating the integration. The analysis of data gathered during interviews and participant observations at various maritime control centers, together with the inspection of video recordings of real anomalous incidents, led to a characterization of the analytical reasoning process that operators go through when monitoring traffic. These results are complemented with a literature review of anomaly detection techniques applied to sea traffic. A particular statistics-based technique is implemented, tested, and embedded in a proof-of-concept prototype that allows user involvement in the detection process. The quantitative evaluation carried out with the prototype reveals that participants who used the visualization of normal behavioral models outperformed the group without this aid.
The qualitative assessment shows that domain experts are positive towards automatic support and the visualization of normal behavioral models, since these aids may reduce reaction time as well as increase trust in, and comprehensibility of, the system. Based on the lessons learned, this thesis provides recommendations for designers and developers of maritime control and anomaly detection systems, as well as guidelines for carrying out evaluations of visual analytics environments. / Maria Riveiro is also affiliated to Informatics Research Centre, Högskolan i Skövde / Information Fusion Research Program, Högskolan i Skövde
245

An intrusion detection system for supervisory control and data acquisition systems

Hansen, Sinclair D. January 2008 (has links)
Despite increased awareness of threats against Critical Infrastructure (CI), the securing of Supervisory Control and Data Acquisition (SCADA) systems remains incomplete. The majority of research focuses on preventative measures such as improving communication protocols and implementing security policies. New attempts are being made to use commercial Intrusion Detection System (IDS) software to protect SCADA systems. These have limited effectiveness because detecting specific threats requires the context of the SCADA system, where SCADA context is defined as any information that can be used to characterise the current status and function of the SCADA system. In this thesis the standard IDS model is used with varying SCADA data sources to provide SCADA context to a signature and anomaly detection engine. A novel enhancement of the IDS model is to use the SCADA data sources to simulate the remote SCADA site; the data resulting from the simulation is used by the IDS to make behavioural comparisons between the real and simulated SCADA sites. To evaluate the enhanced IDS model, the specific context of a water and wastewater system is used to develop a prototype. Using this context it was found that the inflow between sites has diurnal characteristics similar to network traffic, which introduced the idea of using inflow data to detect abnormal behaviour at a remote wastewater site. Several experiments are proposed to validate the prototype using data from a real SCADA site. Initial results show good promise for detecting abnormal behaviour and specific threats against water and wastewater SCADA systems.
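The behavioural comparison between the real site and the simulated site can be illustrated with the inflow idea the abstract mentions: compare each hour's observed inflow against the simulated diurnal profile and alarm on large relative deviations. The profile values, units, and tolerance are invented; the thesis's actual comparison logic is not specified in the abstract.

```python
def inflow_alarms(observed, simulated, tolerance=0.30):
    """Hours at which real inflow deviates from the simulated site's
    diurnal profile by more than `tolerance` (relative deviation)."""
    return [hour for hour, (obs, sim) in enumerate(zip(observed, simulated))
            if sim > 0 and abs(obs - sim) / sim > tolerance]

# Hypothetical hourly inflow (m^3/h) for a small wastewater site:
# low overnight, morning and evening peaks, like network traffic.
simulated = [20, 18, 15, 14, 15, 22, 35, 50, 55, 48, 42, 40,
             38, 36, 35, 36, 40, 48, 52, 50, 44, 36, 28, 22]
observed = list(simulated)
observed[3] = 45   # e.g. a burst pipe or a spoofed sensor reading at 03:00
print(inflow_alarms(observed, simulated))
```

An inflow spike at 03:00, when the diurnal model expects a trough, is flagged even though the same value would be unremarkable at 08:00; this is why the diurnal shape, not a fixed threshold, carries the detection power.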
246

Finding early signals of emerging trends in text through topic modeling and anomaly detection

Redyuk, Sergey January 2018 (has links)
Trend prediction has become an extremely popular practice in many industrial sectors and in academia. It is beneficial for strategic planning and decision making, and facilitates exploring new research directions that have not yet matured. To anticipate future trends in an academic environment, a researcher needs to analyze an extensive amount of literature and scientific publications and gain expertise in the particular research domain. This approach is time-consuming and extremely complicated due to the abundance and diversity of the data. Modern machine learning tools, on the other hand, are capable of processing tremendous volumes of data, reaching real-time human-level performance in various applications. Achieving high performance in unsupervised prediction of emerging trends in text can indicate promising directions for future research and potentially lead to breakthrough discoveries in any field of science. This thesis addresses the problem of emerging trend prediction in text in several steps: it utilizes an HDP topic model to represent the latent topic space of a given temporal collection of documents and the DBSCAN clustering algorithm to detect groups with high-density regions in the document space that may lead to emerging trends, and it applies KL divergence in order to capture deviating text which might indicate the birth of a new, not-yet-seen phenomenon. In order to empirically evaluate the effectiveness of the proposed framework and estimate its predictive capability, both synthetically generated corpora and real-world text collections are used: papers from arXiv.org, an open-access electronic archive of scientific publications (category: Computer Science), and NIPS publications. For the synthetic data, a text generator is designed which provides ground truth for evaluating the performance of the anomaly detection algorithms. This work contributes to the body of knowledge in the area of emerging trend prediction in several ways.
First of all, the method of incorporating topic modeling and anomaly detection algorithms for emerging trend prediction is a novel approach and opens new perspectives on the subject area. Secondly, a three-level word-document-topic topology of anomalies is formalized in order to detect anomalies in temporal text collections which might lead to emerging trends. Finally, a framework for unsupervised detection of early signals of emerging trends in text is designed. The framework captures new vocabulary, documents with deviating word/topic distributions, and drifts in the latent topic space as the three main indicators of a novel phenomenon, in accordance with the three-level topology of anomalies. The framework is not limited to particular sources of data and can be applied to any temporal text collection in combination with any online method for soft clustering.
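The KL-divergence step, flagging a document whose topic mixture deviates from the collection's, can be shown directly on topic distributions. The mixtures below are invented stand-ins for HDP output, and the smoothing constant is an assumption (some smoothing is needed because KL divergence is undefined when the reference assigns zero probability).

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) with light smoothing so zero-probability topics
    do not produce infinities."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

corpus_mixture = [0.5, 0.3, 0.2]        # average topic mixture of the collection
ordinary_doc   = [0.45, 0.35, 0.20]     # close to the corpus profile
deviating_doc  = [0.02, 0.03, 0.95]     # mass piled on a marginal topic:
                                        # a candidate early trend signal
print(kl_divergence(ordinary_doc, corpus_mixture))
print(kl_divergence(deviating_doc, corpus_mixture))
```

Ranking documents by this divergence and thresholding the tail is one plain way to turn the per-document score into the "deviating text" signal the framework looks for.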
247

Atlantic : a framework for anomaly traffic detection, classification, and mitigation in SDN

Silva, Anderson Santos da January 2015 (has links)
Software-Defined Networking (SDN) aims to alleviate the limitations imposed by traditional IP networks by decoupling the network tasks performed on each device into separate planes. This approach offers several benefits, such as standard communication protocols, centralized network functions, and specific network elements, such as controller devices. Despite these benefits, there is still a lack of adequate support for performing tasks related to traffic classification, because (i) the native flow features available in OpenFlow, such as packet and byte counts, do not convey sufficient information to accurately distinguish between some types of flows; (ii) there is a lack of support for determining the optimal set of flow features to characterize different types of traffic profiles; (iii) there is a need for a flexible way of composing different mechanisms to detect, classify, and mitigate network anomalies using software abstractions; (iv) there is a need for online traffic monitoring using lightweight, low-cost techniques; and (v) there is no framework capable of managing anomaly detection, classification, and mitigation in a coordinated manner while considering all these demands.
Additionally, it is well known that anomaly traffic detection and classification mechanisms need to be flexible and easy to manage in order to detect the ever-growing spectrum of anomalies. Detection and classification are difficult tasks for several reasons, including the need to obtain an accurate and comprehensive view of the network, the ability to detect the occurrence of new attack types, and the need to deal with misclassification. In this dissertation, we argue that SDN forms a propitious environment for the design and implementation of more robust and extensible anomaly classification schemes. Unlike other approaches in the literature, which individually tackle either anomaly detection or classification or mitigation, we present a management framework that performs these tasks jointly. Our proposed framework is called ATLANTIC, and it combines lightweight techniques for traffic monitoring with heavyweight but accurate techniques for classifying traffic flows. As a result, ATLANTIC is a flexible framework capable of categorizing traffic anomalies and using the collected information to handle each traffic profile in a specific manner, e.g., blocking malicious flows.
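One common lightweight statistic for the monitoring side of such a pipeline is the entropy of a flow feature (e.g., destination address): a sharp entropy drop suggests traffic concentrating on one target, as in a DDoS. Whether this matches ATLANTIC's exact monitoring statistic is an assumption of this sketch; the flow data is invented.

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Shannon entropy of a categorical sample, normalized to [0, 1]
    by the entropy of a uniform distribution over the observed symbols."""
    counts = Counter(values)
    total = sum(counts.values())
    if len(counts) <= 1:
        return 0.0
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(len(counts))

# Hypothetical destination IPs sampled from flow-table counters
normal_dsts = ["10.0.0.%d" % (i % 50) for i in range(500)]           # spread out
attack_dsts = ["10.0.0.7"] * 480 + ["10.0.0.%d" % i for i in range(20)]  # one victim

print(normalized_entropy(normal_dsts))   # near 1.0: traffic well spread
print(normalized_entropy(attack_dsts))   # near 0.0: traffic concentrated
```

The cheap entropy check runs continuously; only when it trips would the heavyweight (but accurate) per-flow classifier be invoked, which is the lightweight/heavyweight split the abstract describes.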
248

Modelling of epidemiological surveillance data from wildlife for the detection of unusual health events

Petit, Eva 09 February 2011 (has links)
Recent studies have shown that, amongst emerging infectious disease events in humans, about 40% were zoonoses linked to wildlife. Disease surveillance of wildlife should help to improve the health protection of these animals and also of the domestic animals and humans that are exposed to the same pathogenic agents.
Our aim was to develop tools capable of detecting unusual disease events in free-ranging wildlife by adopting a syndromic approach, as used in human health surveillance, with pathological profiles as early, unspecific health indicators. We used the information registered by SAGIR, a national network that has monitored causes of death in French wildlife since 1986. More than 50,000 cases of mortality in wildlife were recorded up to 2007, representing 244 species of terrestrial mammals and birds and attributed to 220 different causes of death. The network was first evaluated for its capacity to detect unusual events early. Syndromic classes were then defined by a statistical typology of the lesions observed on the carcasses. The syndrome time series were analyzed using two complementary detection methods: a robust detection algorithm developed by Farrington and a generalized linear model with periodic terms. Historical trends in the occurrence of these syndromes and greater-than-expected counts (signals) were identified. Reports of unusual mortality events in the network's bulletin were used to interpret these signals. The study analyses the relevance of applying syndromic surveillance to this type of data and gives elements for future improvements.
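The "greater-than-expected counts" idea can be sketched as a simple exceedance test. Note this is drastically simpler than the real Farrington algorithm, which fits a quasi-Poisson regression to baseline counts from comparable past periods and down-weights past outbreaks; here we only compare against the historical mean plus a multiple of the standard deviation. The counts are invented.

```python
import statistics

def exceeds_threshold(current, historical, k=2.0):
    """Crude exceedance test in the spirit of (but much simpler than)
    Farrington's method: flag the current count if it exceeds the mean of
    historical counts from comparable periods by more than k standard
    deviations."""
    mean = statistics.fmean(historical)
    sd = statistics.stdev(historical) if len(historical) > 1 else 0.0
    return current > mean + k * sd

# Hypothetical counts of one syndrome for the same month in past years
past_counts = [3, 5, 4, 6, 4]
print(exceeds_threshold(5, past_counts))    # ordinary month: no signal
print(exceeds_threshold(12, past_counts))   # excess of cases: signal
```

In a full system this test would run per syndromic class and per time window, and flagged signals would then be interpreted against the network's bulletin, as the abstract describes.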
250

Explanation Methods for Bayesian Networks

Helldin, Tove January 2009 (has links)
The international maritime industry is growing fast due to an increasing number of transportations over sea. In pace with this development, maritime surveillance capacity must be expanded as well, in order to handle the increasing numbers of hazardous cargo transports, attacks, piracy, etc. In order to detect such events, anomaly detection methods and techniques can be used. Moreover, since surveillance systems process huge amounts of sensor data, anomaly detection techniques can be used to filter out or highlight interesting objects or situations for an operator. Making decisions upon large amounts of sensor data can be a challenging and demanding activity for the operator; not only the quantity of the data, but also factors such as time pressure, high stress, and uncertain information aggravate the task. Bayesian networks can be used to detect anomalies in data and have, in contrast to many other opaque machine learning techniques, some important advantages. One of these advantages is that a user can understand and interpret the model, due to its graphical nature. This thesis investigates how the output from a Bayesian network can be explained to a user, first by reviewing and presenting the explanation methods that exist, and second through experiments. The experiments investigate whether two explanation methods can be used to explain the inferences made by a Bayesian network, in order to support the operator's situation awareness and decision-making process when deployed in an anomaly detection problem in the maritime domain.
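A minimal flavor of "explaining a Bayesian network's output" can be given with the simplest possible network, a naive-Bayes structure (one class node with independent observations), where an explanation can rank each piece of evidence by its likelihood ratio. This is a toy sketch: the thesis's explanation methods target general Bayesian networks, and the vessel variables and probabilities below are invented.

```python
def posterior_anomalous(prior, likelihoods, evidence):
    """Posterior P(anomalous | evidence) for a naive-Bayes-structured
    network: one binary 'anomalous' node with conditionally independent
    binary observations hanging off it."""
    p_a, p_n = prior, 1.0 - prior
    for var, value in evidence.items():
        la, ln = likelihoods[var]          # P(var=True | anomalous / normal)
        p_a *= la if value else (1 - la)
        p_n *= ln if value else (1 - ln)
    return p_a / (p_a + p_n)

def explain(likelihoods, evidence):
    """Rank evidence variables by how strongly each one, on its own,
    pushes the verdict toward 'anomalous' (likelihood-ratio style)."""
    impact = {}
    for var, value in evidence.items():
        la, ln = likelihoods[var]
        num = la if value else (1 - la)
        den = ln if value else (1 - ln)
        impact[var] = num / den
    return sorted(impact, key=impact.get, reverse=True)

# Hypothetical vessel observations: P(observation=True | anomalous/normal)
likelihoods = {"speed_high": (0.8, 0.1), "in_shipping_lane": (0.3, 0.9)}
evidence = {"speed_high": True, "in_shipping_lane": False}

print(posterior_anomalous(0.05, likelihoods, evidence))
print(explain(likelihoods, evidence))   # most influential evidence first
```

Presenting the ranked list alongside the posterior tells the operator not just *that* the vessel was flagged, but *which* observations drove the verdict, which is the kind of interpretability the abstract attributes to the graphical model.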
