231 |
Clustering and Anomaly detection using Medical Enterprise system Logs (CAMEL) / Klustring av och anomalidetektering på systemloggar. Ahlinder, Henrik; Kylesten, Tiger. January 2023.
Research on automated anomaly detection in complex systems using log files has been on an upswing with the introduction of new deep-learning natural language processing methods. However, manually identifying and labelling anomalous logs is time-consuming, error-prone, and labour-intensive. This thesis instead takes an existing state-of-the-art method that learns from positive and unlabelled (PU) data as a baseline and evaluates three extensions to it. The first extension examines how the choice of word embeddings affects performance on the downstream task. The second extension applies a re-labelling strategy to reduce problems arising from pseudo-labelling. The final extension removes the need for pseudo-labelling altogether by applying a state-of-the-art loss function from the field of PU learning. The findings show that FastText and GloVe embeddings are both viable options, with FastText providing faster training times but mixed results in terms of performance. Several of the methods studied in this thesis suffer from sporadically poor performance on one of the datasets. Finally, using modified risk functions from the field of PU learning yields new state-of-the-art performance on the datasets considered in this thesis.
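The modified risk functions mentioned above come from the positive-unlabelled learning literature; a representative example is the non-negative PU risk estimator of Kiryo et al. The sketch below is a minimal NumPy illustration of that estimator, not the exact loss used in the thesis; the function names, the sigmoid surrogate loss, and the toy scores are all assumptions.

```python
import numpy as np

def sigmoid_loss(z):
    """Surrogate loss l(z) = 1 / (1 + exp(z)); small when z is a confident positive score."""
    return 1.0 / (1.0 + np.exp(z))

def nnpu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk estimate (Kiryo et al., 2017).

    scores_pos: classifier scores f(x) for labelled-positive samples
    scores_unl: classifier scores f(x) for unlabelled samples
    prior:      assumed class prior pi = P(y = +1)
    """
    # Risk of the positives treated as the positive class.
    r_pos = prior * np.mean(sigmoid_loss(scores_pos))
    # Estimated risk of the negative class hidden inside the unlabelled set.
    r_neg = np.mean(sigmoid_loss(-scores_unl)) - prior * np.mean(sigmoid_loss(-scores_pos))
    # The non-negative correction clips the negative-class term at zero.
    return r_pos + max(0.0, r_neg)

# Toy usage: scores from some anomaly scorer over log-sequence embeddings.
rng = np.random.default_rng(0)
print(nnpu_risk(rng.normal(2, 1, 100), rng.normal(0, 1, 500), prior=0.3))
```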
|
232 |
Leveraging contextual cues for dynamic scene understanding. Bettadapura, Vinay Kumar. 27 May 2016.
Environments with people are complex, with many activities and events that need to be represented and explained. The goal of scene understanding is either to determine what objects and people are doing in such complex and dynamic environments, or to capture the overall happenings, such as the highlights of the scene. The context within which the activities and events unfold provides key insights that cannot be derived by studying the activities and events alone. In this thesis, we show that this rich contextual information can be successfully leveraged, along with the video data, to support dynamic scene understanding. We categorize and study four different types of contextual cues: (1) spatio-temporal context, (2) egocentric context, (3) geographic context, and (4) environmental context, and show that they improve dynamic scene understanding tasks across several different application domains.
We start by presenting data-driven techniques to enrich spatio-temporal context by augmenting Bag-of-Words models with temporal, local and global causality information, and show that this improves activity recognition, anomaly detection and scene assessment from videos. Next, we leverage the egocentric context derived from sensor data captured by first-person point-of-view devices to perform field-of-view localization and thereby understand the user's focus of attention. We demonstrate single- and multi-user field-of-view localization in both indoor and outdoor environments, with applications in augmented reality, event understanding and the study of social interactions. Next, we look at how geographic context can be leveraged to make challenging "in-the-wild" object recognition tasks more tractable, using food recognition in restaurants as a case study. Finally, we study the environmental context obtained from dynamic scenes such as sporting events, which take place in responsive environments such as stadiums and gymnasiums, and show that it can be successfully used to address the challenging task of automatically generating basketball highlights. We perform comprehensive user studies on 25 full-length NCAA games and demonstrate the effectiveness of environmental context in producing highlights that are comparable to those produced by ESPN.
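As a rough illustration of what augmenting a Bag-of-Words model with temporal information can look like, the toy sketch below concatenates a plain word histogram with counts of ordered word pairs that co-occur within a short temporal window. The thesis's actual causality features are richer; this is only an assumed, simplified stand-in, and the vocabulary size and window are illustrative.

```python
from collections import Counter

def augmented_bow(word_sequence, vocab_size, window=5):
    """Plain bag-of-words histogram concatenated with counts of ordered word pairs
    (a, b) where b follows a within `window` steps of the sequence."""
    hist = [0] * vocab_size
    pair_counts = Counter()
    for i, w in enumerate(word_sequence):
        hist[w] += 1
        for v in word_sequence[i + 1 : i + 1 + window]:
            pair_counts[(w, v)] += 1
    # Flatten the pair counts into a fixed-length vector.
    pairs = [pair_counts.get((a, b), 0) for a in range(vocab_size) for b in range(vocab_size)]
    return hist + pairs

# Quantized visual-word IDs for a short clip: histogram followed by pair counts.
print(augmented_bow([0, 2, 1, 2, 0, 1], vocab_size=3, window=2))
```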
|
233 |
Detection and localization of link-level network anomalies using end-to-end path monitoring. Salhi, Emna. 13 February 2013.
The aim of this thesis is to devise cost-efficient, accurate and fast schemes for link-level network anomaly detection and localization. It has been established that to detect all potential link-level anomalies, a set of paths covering all links of the network must be monitored, whereas to localize all potential link-level anomalies, a set of paths that can distinguish between all links of the network pairwise must be monitored. Either end-node of each monitored path must be equipped with a monitoring device. Most existing link-level anomaly detection and localization schemes are two-step. The first step selects a minimal set of monitor locations that can detect/localize any link-level anomaly. The second step selects a minimal set of monitoring paths between the selected monitor locations such that all links of the network are covered/distinguishable pairwise. However, such stepwise schemes do not consider the interplay between the conflicting optimization objectives of the two steps, which results in suboptimal consumption of network resources and biased monitoring measurements. One of the objectives of this thesis is to evaluate and reduce this interplay. To this end, one-step anomaly detection and localization schemes that select monitor locations and monitoring paths jointly are proposed. Furthermore, we demonstrate that the already established condition for anomaly localization is sufficient but not necessary. A necessary and sufficient condition that drastically reduces the localization cost is established. The problems are shown to be NP-hard, and scalable, near-optimal heuristic algorithms are proposed.
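The detection condition above (monitor a set of paths that covers every link) is a covering problem, and a greedy heuristic is the textbook way to approximate it. The sketch below illustrates that idea only; it is not the thesis's joint monitor-location and path-selection algorithm, and the path and link names are invented.

```python
def greedy_path_cover(paths, links):
    """Greedy set-cover heuristic for the detection step: pick monitoring paths
    until every link is traversed by at least one monitored path.
    `paths` maps a candidate path id to the set of links it traverses."""
    uncovered = set(links)
    chosen = []
    while uncovered:
        # Pick the path covering the most still-uncovered links.
        best = max(paths, key=lambda p: len(paths[p] & uncovered))
        if not paths[best] & uncovered:
            raise ValueError("remaining links cannot be covered by the candidate paths")
        chosen.append(best)
        uncovered -= paths[best]
    return chosen

# Toy topology: three candidate end-to-end paths over five links.
paths = {"p1": {"a", "b"}, "p2": {"b", "c", "d"}, "p3": {"d", "e"}}
print(greedy_path_cover(paths, links={"a", "b", "c", "d", "e"}))  # ['p2', 'p1', 'p3']
```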
|
234 |
A basis for intrusion detection in distributed systems using kernel-level data tainting. Hauser, Christophe. 19 June 2013.
Modern organisations rely intensively on information and communication technology infrastructures. Such infrastructures offer a range of services, from simple mail transport agents or blogs to complex e-commerce platforms, banking systems or service hosting, and all of these depend on distributed systems. The security of these systems, with their increasing complexity, is a challenge. Cloud services are replacing traditional infrastructures by providing lower-cost alternatives for storage and computational power, but at the risk of relying on third-party companies. This risk becomes particularly critical when such services are used to host privileged company information and applications, or customers' private information. Even in the case where companies host their own information and applications, the advent of BYOD (Bring Your Own Device) leads to new security-related issues. In response, our research investigated the characterization and detection of malicious activities at the operating system level and in distributed systems composed of multiple hosts and services. We have shown that intrusions in an operating system spawn abnormal information flows, and we developed a model of dynamic information flow tracking, based on taint-marking techniques, in order to detect such abnormal behavior. We track information flows between objects of the operating system (such as files, sockets, shared memory, processes, etc.) and network packets flowing between hosts. This approach follows the anomaly detection paradigm. We specify the legal behavior of the system with respect to an information flow policy, by stating how users and programs from groups of hosts are allowed to access or alter each other's information. Illegal information flows are considered as intrusion symptoms. We have implemented this model in the Linux kernel (the source code is available at http://www.blare-ids.org), as a Linux Security Module (LSM), and we used it as the basis for practical demonstrations. The experimental results validated the feasibility of our new intrusion detection principles.
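A toy illustration of the taint-marking idea described above is sketched below: operating-system objects carry sets of taint labels, labels propagate whenever information flows between objects, and a flow that gives an object labels its policy does not allow is reported as an intrusion symptom. This is a user-space Python caricature of the mechanism, not the kernel-level LSM implementation, and all object names and labels are invented.

```python
class TaintTracker:
    """Toy model of kernel-level taint marking for information flow tracking."""

    def __init__(self, policy):
        self.taints = {}      # object name -> set of taint labels it currently carries
        self.policy = policy  # object name -> set of labels it may hold (unlisted = unrestricted)

    def set_taint(self, obj, labels):
        self.taints[obj] = set(labels)

    def flow(self, src, dst):
        """Propagate labels when information flows from src to dst (e.g. a process
        reads a file, or writes to a socket), then check dst against the policy."""
        merged = self.taints.get(dst, set()) | self.taints.get(src, set())
        self.taints[dst] = merged
        allowed = self.policy.get(dst)
        if allowed is not None and not merged <= allowed:
            print(f"ALERT: illegal information flow into {dst}: {sorted(merged - allowed)}")

tracker = TaintTracker(policy={"socket:web": {"public"}})
tracker.set_taint("/etc/shadow", {"secret"})
tracker.flow("/etc/shadow", "proc:editor")   # process reads the secret file: allowed
tracker.flow("proc:editor", "socket:web")    # then writes to a network socket: ALERT
```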
|
235 |
A basis for intrusion detection in distributed systems using kernel-level data tainting. / Détection d'intrusions dans les systèmes distribués par propagation de teinte au niveau noyau. Hauser, Christophe. 19 June 2013.
Today's information systems, whether enterprise networks, online services or government organisations, very often rely on distributed systems involving a set of machines that provide internal or external services. The security of such information systems is built at several levels (defense in depth). When these systems are set up, access control, authentication and filtering policies (firewalls, etc.) are deployed in order to guarantee the security of the information. However, these systems are very often complex and evolve constantly. It then becomes difficult to maintain a flawless security policy across the whole system (assuming that is even possible) and to withstand the attacks to which these services are exposed daily. Intrusion detection systems have therefore become necessary and are part of the indispensable security toolset of every administrator of systems permanently exposed to potential attacks. Intrusion detection systems fall into two broad families that differ in their analysis method: the scenario-based approach and the behavioral approach. The scenario-based approach is the most common and is used by well-known intrusion detection systems such as Snort, Prelude and others. It consists in recognizing signatures of known attacks in network traffic (for network IDS) and in sequences of system calls (for host IDS), that is, detecting abnormal system behavior linked to the presence of attacks. Although a large number of attacks can be detected in this way, this approach cannot detect new attacks for which no signature is known. Moreover, modern malware often employs so-called binary morphism techniques in order to evade signature-based detection. The behavioral approach, unlike the signature-based approach, relies on modeling the normal operation of the system. It can therefore detect new attacks as well as older ones, without resorting to any knowledge base of existing attacks. Several types of behavioral approaches exist: some models are statistical, others rely on a security policy. This thesis addresses intrusion detection in distributed systems by adopting a behavioral approach based on a security policy, expressed in the form of an information flow policy. Information flows are tracked through a taint-marking technique applied to the objects of the operating system, directly at the kernel level. Such approaches also exist at the language level (for example by instrumenting the Java virtual machine, or by modifying application code) and at the architecture level (by emulating the microprocessor in order to trace information flows between registers, memory pages, etc.), which allows a fine-grained analysis of information flows.
However, we chose to work at the operating system level in order to meet the following objectives:
• Detect intrusions at all levels of the system, not specifically within one or several applications.
• Deploy our system in the presence of native applications, whose source code is not necessarily available (which makes instrumenting them very difficult or even impossible).
• Use standard, off-the-shelf hardware. Physically modifying microprocessors is very difficult, and emulating them has a very significant impact on system performance.
|
236 |
Anomaly Detection in Industrial Networks using a Resource-Constrained Edge Device. Eliasson, Anton. January 2019.
The detection of false data-injection attacks in industrial networks is a growing challenge in the industry because it requires knowledge of application- and protocol-specific behaviors. Profinet is a common communication standard currently used in the industry and is potentially exposed to this type of attack. This motivates an examination of whether a solution based on machine learning, with a focus on anomaly detection, can be implemented and used to detect abnormal data in Profinet packets. Previous work has investigated this topic; however, no solution is yet available on the market. Any solution that aims to be adopted by the industry must detect abnormal data at the application level and run the analytics on a resource-constrained device. This thesis presents an implementation that aims to detect abnormal data in Profinet packets, represented as online data streams generated in real time. The implemented unsupervised learning approach is validated on data from a simulated industrial use-case scenario. The results indicate that the method manages to detect all abnormal behaviors in an industrial network.
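As an illustration of unsupervised anomaly detection on an online data stream, the sketch below learns the running mean and variance of one numeric field with Welford's algorithm and flags values that deviate strongly from it. The thesis's actual model, features and thresholds are not specified here; the class, field, threshold and warm-up length are all illustrative assumptions.

```python
import math

class StreamingFieldMonitor:
    """Minimal online monitor for one numeric field extracted from a packet stream
    (e.g. a cyclic Profinet process value): learn running mean/variance with
    Welford's algorithm and flag values far from the mean."""

    def __init__(self, threshold=4.0, warmup=100):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold, self.warmup = threshold, warmup

    def update(self, x):
        """Return True if x looks anomalous, then fold x into the model."""
        anomalous = False
        if self.n >= self.warmup:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

monitor = StreamingFieldMonitor()
values = [20.0 + 0.1 * (i % 5) for i in range(200)] + [95.0]   # injected outlier at the end
flags = [monitor.update(v) for v in values]
print(flags[-1])   # True: the injected value is flagged
```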
|
237 |
Proposta e implementação de uma Camada de Integração de Serviços de Segurança (CISS) em SoC e multiplataforma. / Proposal and Implementation of a Security Services Integration Layer (ISSL) in SoC and multiplatform. Pereira, Fábio Dacêncio. 09 November 2009.
As redes de computadores são ambientes cada vez mais complexos e dotados de novos serviços, usuários e infra-estruturas. A segurança e a privacidade de informações tornam-se fundamentais para a evolução destes ambientes. O anonimato, a fragilidade e outros fatores muitas vezes estimulam indivíduos mal intencionados a criar ferramentas e técnicas de ataques a informações e a sistemas computacionais. Isto pode gerar desde pequenas inconveniências até prejuízos financeiros e morais. Nesse sentido, a detecção de intrusão aliada a outras ferramentas de segurança pode proteger e evitar ataques maliciosos e anomalias em sistemas computacionais. Porém, considerada a complexidade e robustez de tais sistemas, os serviços de segurança muitas vezes não são capazes de analisar e auditar todo o fluxo de informações, gerando pontos falhos de segurança que podem ser descobertos e explorados. Neste contexto, esta tese de doutorado propõe, projeta, implementa e analisa o desempenho de uma camada de integração de serviços de segurança (CISS). Na CISS foram implementados e integrados serviços de segurança como Firewall, IDS, Antivírus, ferramentas de autenticação, ferramentas proprietárias e serviços de criptografia. Além disso, a CISS possui como característica principal a criação de uma estrutura comum para armazenar informações sobre incidentes ocorridos em um sistema computacional. Estas informações são consideradas como a fonte de conhecimento para que o sistema de detecção de anomalias, inserido na CISS, possa atuar com eficiência na prevenção e proteção de sistemas computacionais detectando e classificando prematuramente situações anômalas. Para isso, foram criados modelos comportamentais com base nos conceitos de Modelo Oculto de Markov (HMM) e modelos de análise de seqüências anômalas. A CISS foi implementada em três versões: (i) System-on-Chip (SoC), (ii) software JCISS em Java e (iii) simulador. Resultados como desempenho temporal, taxas de ocupação, o impacto na detecção de anomalias e detalhes de implementação são apresentados, comparados e analisados nesta tese. A CISS obteve resultados expressivos em relação às taxas de detecção de anomalias utilizando o modelo MHMM, onde se destacam: para ataques conhecidos obteve taxas acima de 96%; para ataques parciais por tempo, taxas acima de 80%; para ataques parciais por seqüência, taxas acima de 96% e para ataques desconhecidos, taxas acima de 54%. As principais contribuições da CISS são a criação de uma estrutura de integração de serviços de segurança e a relação e análise de ocorrências anômalas para a diminuição de falsos positivos, detecção e classificação prematura de anormalidades e prevenção de sistemas computacionais. Contudo, soluções foram criadas para melhorar a detecção como o modelo seqüencial e recursos como o subMHMM, para o aprendizado em tempo real. Por fim, as implementações em SoC e Java permitiram a avaliação e utilização da CISS em ambientes reais. / Computer networks are increasingly complex environments and equipped with new services, users and infrastructure. The information safety and privacy become fundamental to the evolution of these environments. The anonymity, the weakness and other factors often encourage people to create malicious tools and techniques of attacks to information and computer systems. It can generate small inconveniences or even moral and financial damage. Thus, the detection of intrusion combined with other security tools can protect and prevent malicious attacks and anomalies in computer systems. 
Yet, considering the complexity and robustness of these systems, the security services are not always able to examine and audit the entire information flow, creating points of security failure that can be discovered and exploited. Therefore, this PhD thesis proposes, designs, implements and analyzes the performance of an Integrated Security Services Layer (ISSL). Several security services were implemented and integrated into the ISSL, such as a firewall, an IDS, antivirus, authentication tools, proprietary tools and cryptography services. Furthermore, the main feature of the ISSL is the creation of a common structure for storing information about incidents in a computer system. This information is treated as the source of knowledge that allows the anomaly detection system embedded in the ISSL to act effectively in the prevention and protection of computer systems by detecting and classifying anomalous situations early. To this end, behavioral models were created based on the concepts of the Hidden Markov Model (HMM) and on models for the analysis of anomalous sequences. The ISSL was implemented in three versions: (i) System-on-Chip (SoC), (ii) the JCISS software in Java and (iii) a simulator. Results such as time performance, occupancy rates, the impact on anomaly detection and implementation details are presented, compared and analyzed in this thesis. The ISSL achieved significant anomaly detection rates using the MHMM model: above 96% for known attacks, above 80% for partial attacks by time, above 96% for partial attacks by sequence, and above 54% for unknown attacks. The main contributions of the ISSL are the creation of a structure for integrating security services and the correlation and analysis of anomalous occurrences in order to reduce false positives, detect and classify abnormalities early, and protect computer systems. In addition, solutions such as the sequential model and features such as the subMHMM were created to improve detection and support real-time learning. Finally, the SoC and Java implementations allowed the ISSL to be evaluated and used in real environments.
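A generic illustration of how an HMM-based behavioral model can score event sequences is sketched below: the scaled forward algorithm yields a log-likelihood, and sequences whose likelihood falls below a limit learned on normal activity would be flagged. This is not the thesis's MHMM or subMHMM; the parameters, event types and sequences are invented.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

# Two hidden states, three observable event types (toy parameters).
start = np.array([0.6, 0.4])
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
emit  = np.array([[0.7, 0.25, 0.05], [0.2, 0.5, 0.3]])

normal  = [0, 0, 1, 0, 0]
unusual = [2, 2, 2, 2, 2]
print(forward_loglik(normal, start, trans, emit) > forward_loglik(unusual, start, trans, emit))  # True
```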
|
238 |
Atlantic : a framework for anomaly traffic detection, classification, and mitigation in SDN / Atlantic : um framework para detecção, classificação e mitigação de tráfego anômalo em SDN. Silva, Anderson Santos da. January 2015.
Software-Defined Networking (SDN) objetiva aliviar as limitações impostas por redes IP tradicionais dissociando tarefas de rede executadas em cada dispositivo em planos específicos. Esta abordagem oferece vários benefícios, tais como a possibilidade de uso de protocolos de comunicação padrão, funções de rede centralizadas, e elementos de rede mais específicos e modulares, tais como controladores de rede. Apesar destes benefícios, ainda há uma falta de apoio adequado para a realização de tarefas relacionadas à classificação de tráfego, pois (i) as características de fluxo nativas disponíveis no protocolo OpenFlow, tais como contadores de bytes e pacotes, não oferecem informação suficiente para distinguir de forma precisa fluxos específicos; (ii) existe uma falta de suporte para determinar qual é o conjunto ótimo de características de fluxo para caracterizar um dado perfil de tráfego; (iii) existe uma necessidade de estratégias flexíveis para compor diferentes mecanismos relacionados à detecção, classificação e mitigação de anomalias de rede usando abstrações de software; (iv) existe uma necessidade de monitoramento de tráfego em tempo real usando técnicas leves e de baixo custo; (v) não existe um framework capaz de gerenciar detecção, classificação e mitigação de anomalias de uma forma coordenada considerando todas as demandas acima. Adicionalmente, é sabido que mecanismos de detecção e classificação de anomalias de tráfego precisam ser flexíveis e fáceis de administrar, a fim de detectar o crescente espectro de anomalias. Detecção e classificação são tarefas difíceis por causa de várias razões, incluindo a necessidade de obter uma visão precisa e abrangente da rede, a capacidade de detectar a ocorrência de novos tipos de ataque, e a necessidade de lidar com erros de classificação. Nesta dissertação, argumentamos que SDN oferece ambientes propícios para a concepção e implementação de esquemas mais robustos e extensíveis para detecção e classificação de anomalias. Diferentemente de outras abordagens na literatura relacionada, que abordam individualmente detecção ou classificação ou mitigação de anomalias, apresentamos um framework para o gerenciamento e orquestração dessas tarefas em conjunto. O framework proposto é denominado ATLANTIC e combina o uso de técnicas com baixo custo computacional para monitorar tráfego e técnicas mais computacionalmente intensivas, porém precisas, para classificar os fluxos de tráfego. Como resultado, ATLANTIC é um framework flexível capaz de categorizar anomalias de tráfego utilizando informações coletadas da rede para lidar com cada perfil de tráfego de um modo específico, como por exemplo, bloqueando fluxos maliciosos. / Software-Defined Networking (SDN) aims to alleviate the limitations imposed by traditional IP networks by decoupling network tasks performed on each device in particular planes. This approach offers several benefits, such as standard communication protocols, centralized network functions, and specific network elements, such as controller devices. 
Despite these benefits, there is still a lack of adequate support for performing tasks related to traffic classification, because (i) the native flow features available in OpenFlow, such as packet and byte counts, do not convey sufficient information to accurately distinguish between some types of flows; (ii) there is a lack of support for determining the optimal set of flow features to characterize different types of traffic profiles; (iii) there is a need for a flexible way of composing different mechanisms to detect, classify and mitigate network anomalies using software abstractions; (iv) there is a need for online traffic monitoring using lightweight, low-cost techniques; and (v) there is no framework capable of managing anomaly detection, classification and mitigation in a coordinated manner while considering all these demands. Additionally, it is well known that anomaly traffic detection and classification mechanisms need to be flexible and easy to manage in order to detect the ever-growing spectrum of anomalies. Detection and classification are difficult tasks for several reasons, including the need to obtain an accurate and comprehensive view of the network, the ability to detect the occurrence of new attack types, and the need to deal with misclassification. In this dissertation, we argue that Software-Defined Networking (SDN) forms a propitious environment for the design and implementation of more robust and extensible anomaly classification schemes. Unlike other approaches in the literature, which individually tackle either anomaly detection or classification or mitigation, we present a management framework that performs these tasks jointly. Our proposed framework is called ATLANTIC, and it combines lightweight techniques for traffic monitoring with heavyweight, but accurate, techniques to classify traffic flows. As a result, ATLANTIC is a flexible framework capable of categorizing traffic anomalies and using the collected information to handle each traffic profile in a specific manner, e.g., by blocking malicious flows.
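A minimal sketch of the lightweight monitoring side is shown below: the Shannon entropy of per-flow OpenFlow counters is compared between two polling windows, and a large jump raises an alarm that could then trigger the heavier classification stage. The threshold, feature choice and flow counts are illustrative assumptions, not values taken from ATLANTIC.

```python
import math

def normalized_entropy(counts):
    """Shannon entropy of a distribution of per-flow packet counts, normalised to [0, 1]."""
    total = sum(counts)
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(counts))

def entropy_alarm(prev_counts, curr_counts, threshold=0.2):
    """Raise an alarm when the entropy of per-flow counters jumps between two
    consecutive polling windows (e.g. one flow suddenly dominating the traffic)."""
    return abs(normalized_entropy(curr_counts) - normalized_entropy(prev_counts)) > threshold

baseline = [120, 98, 130, 110, 105]      # balanced traffic across flows
suspect  = [2000, 5, 3, 4, 2]            # one flow dominating, e.g. a flood
print(entropy_alarm(baseline, suspect))  # True
```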
|
240 |
Image-based Process Monitoring via Generative Adversarial Autoencoder with Applications to Rolling Defect Detection. January 2019.
Image-based process monitoring has recently attracted increasing attention due to advances in sensing technologies. However, existing process monitoring methods fail to fully utilize the spatial information of images due to their complex characteristics, including high dimensionality and complex spatial structures. Recent advances in unsupervised deep models, such as the generative adversarial network (GAN) and the generative adversarial autoencoder (AAE), have made it possible to learn such complex spatial structures automatically. Inspired by this, we propose an AAE-based framework for unsupervised anomaly detection in images. The AAE combines the power of the GAN with the variational autoencoder, which serves as a nonlinear dimension-reduction technique regularized by the discriminator. Based on this, we propose a monitoring statistic that efficiently captures changes in the image data. The performance of the proposed AAE-based anomaly detection algorithm is validated through a simulation study and a real case study on rolling defect detection. / Dissertation/Thesis / Master's Thesis, Industrial Engineering, 2019.
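The monitoring statistic described above can be illustrated with a much simpler stand-in for the AAE: the sketch below fits a linear (PCA) encoder/decoder on in-control images, uses the per-image reconstruction error as the statistic, and sets a control limit at a high quantile of the training errors. The AAE, its training procedure and the real rolling data are replaced here by assumed placeholders; only the monitoring logic is illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear_ae(X, k=5):
    """Stand-in for the trained autoencoder: a rank-k PCA model fitted on in-control images."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def recon_error(X, mean, comps):
    """Monitoring statistic: squared reconstruction error per (flattened) image."""
    Z = (X - mean) @ comps.T
    Xhat = Z @ comps + mean
    return ((X - Xhat) ** 2).sum(axis=1)

# In-control "images" (flattened), plus one image with a localized defect.
train = rng.normal(0, 1, (200, 64))
clean, defect = rng.normal(0, 1, 64), rng.normal(0, 1, 64)
defect[10:20] += 8.0

mean, comps = fit_linear_ae(train)
limit = np.quantile(recon_error(train, mean, comps), 0.99)   # control limit from training errors
# The defect image exceeds the control limit; the clean one typically does not.
print(recon_error(np.vstack([clean, defect]), mean, comps) > limit)
```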
|