21. Adversarial Anomaly Detection / Radhika Bhargava (7036556). 02 August 2019 (has links)
Considerable attention has been given to the vulnerability of machine learning to adversarial samples. This is particularly critical in anomaly detection; uses such as detecting fraud, intrusion, and malware must assume a malicious adversary. We specifically address poisoning attacks, in which the adversary injects carefully crafted benign samples into the data, causing concept drift that leads the anomaly detector to misclassify the actual attack as benign. Our goal is to estimate the vulnerability of an anomaly detection method to an unknown attack, in particular the expected minimum number of poison samples the adversary would need to succeed. Such an estimate is a necessary step in risk analysis: do we expect the anomaly detection to be sufficiently robust to be useful in the face of attacks? We analyze DBSCAN, LOF, and one-class SVM as anomaly detection methods and derive estimates of their robustness to poisoning attacks. The analytical estimates are validated against the number of poison samples needed for the actual anomalies in standard anomaly detection test datasets. We then develop a defense mechanism, based on the concept drift caused by the poisonous points, to identify that an attack is underway. We show that while it is possible to detect the attacks, doing so degrades the performance of the anomaly detection method. Finally, we investigate whether the adversarial samples generated for one anomaly detection method transfer to another anomaly detection method.
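As a rough illustration of the poisoning scenario this abstract describes, the sketch below (Python, scikit-learn) injects crafted points between the benign data and a target anomaly until a one-class SVM stops flagging it, and reports how many points that took. The dataset, kernel parameters, and the straight-line injection strategy are illustrative assumptions, not the thesis's method or its analytical estimates.

```python
# Hypothetical sketch of a poisoning attack on one-class SVM anomaly detection.
# The injection strategy (points placed along the centroid -> target direction)
# and all parameters are illustrative assumptions, not the thesis method.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # benign training data
target = np.array([6.0, 6.0])                            # anomaly the adversary wants accepted

def is_flagged(train, point):
    """Retrain the detector on `train` and report whether `point` is flagged as an outlier."""
    clf = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(train)
    return clf.predict(point.reshape(1, -1))[0] == -1     # -1 = outlier in scikit-learn

poison, train = [], normal
centroid = normal.mean(axis=0)
while is_flagged(train, target) and len(poison) < 200:
    # place the next poison point a little further along the centroid -> target direction
    step = (len(poison) + 1) / 200.0
    p = centroid + step * (target - centroid) + rng.normal(scale=0.2, size=2)
    poison.append(p)
    train = np.vstack([normal, poison])

print(f"poison samples needed (this toy setup): {len(poison)}")
```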
22. Scalable Techniques for Anomaly Detection / Yadav, Sandeep, 1985-. 14 March 2013 (has links)
Computer networks are constantly attacked by malicious entities for various reasons. Network-based attacks include, but are not limited to, Distributed Denial of Service (DDoS), DNS-based attacks, and Cross-Site Scripting (XSS). Such attacks exploit either network protocol or end-host software vulnerabilities. Current network traffic analysis techniques employed to detect or prevent these anomalies suffer from significant delay or have only limited scalability because of their huge resource requirements. This dissertation proposes more scalable techniques for network anomaly detection.
We propose using DNS analysis to detect a wide variety of network anomalies. The use of DNS is motivated by the fact that DNS traffic comprises only 2-3% of total network traffic, reducing the burden on anomaly detection resources. Our motivation additionally follows from the observation that almost any Internet activity, legitimate or otherwise, is marked by the use of DNS. We propose several techniques for DNS traffic analysis to distinguish anomalous DNS traffic patterns, which in turn identify different categories of network attacks.
First, we present MiND, a system to detect misdirected DNS packets arising from poisoned name server records or from local infections such as those caused by worms like DNSChanger. MiND validates misdirected DNS packets using an externally collected database of authoritative name servers for second- or third-level domains. We deploy this tool at the edge of a university campus network for evaluation.
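MiND's actual validation logic is not reproduced in this abstract; the fragment below is only a hypothetical sketch of the general idea: compare the source address of a DNS response against an externally collected set of authoritative name servers for the queried second-level domain. The database contents and the naive suffix extraction are assumptions.

```python
# Simplified illustration (not the MiND implementation): flag a DNS response as
# potentially misdirected when its source address is not among the externally
# collected authoritative name-server addresses for the queried second-level domain.
AUTH_NS = {                      # hypothetical externally collected database
    "example.com": {"192.0.2.10", "192.0.2.11"},
    "example.org": {"198.51.100.5"},
}

def second_level_domain(qname: str) -> str:
    parts = qname.rstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else qname

def is_misdirected(qname: str, response_src_ip: str) -> bool:
    expected = AUTH_NS.get(second_level_domain(qname))
    return expected is not None and response_src_ip not in expected

print(is_misdirected("www.example.com", "203.0.113.7"))   # True: unexpected source
print(is_misdirected("www.example.com", "192.0.2.10"))    # False: known authoritative server
```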
Second, we focus on domain-fluxing botnet detection by exploiting the high entropy inherent in the set of domains used to locate the Command and Control (C&C) server. We apply three metrics, namely the Kullback-Leibler divergence, the Jaccard index, and the edit distance, to different groups of domain names present in Tier-1 ISP DNS traces obtained from South Asia and South America. Our evaluation successfully detects existing domain-fluxing botnets such as Conficker and also recognizes new botnets. We extend this approach by utilizing DNS failures to reduce the latency of detection. Alternatively, we propose a system that uses temporal and entropy-based correlation between successful and failed DNS queries for fluxing-botnet detection.
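A minimal sketch of the three metrics named above, applied to character distributions of domain-name groups, is given below. The unigram distribution, the smoothing constant, and the toy domain groups are assumptions made for illustration; they are not the thesis's exact formulation or data.

```python
# Sketch of the three metrics applied to domain-name groups; the use of character
# unigram distributions, the smoothing constant, and the test groups are assumptions.
import math
import string
from collections import Counter

ALPHABET = string.ascii_lowercase + string.digits

def char_dist(domains):
    counts = Counter(c for d in domains for c in d if c in ALPHABET)
    total = sum(counts.values())
    return {c: (counts[c] + 1e-6) / (total + 1e-6 * len(ALPHABET)) for c in ALPHABET}

def kl_divergence(p, q):
    return sum(p[c] * math.log(p[c] / q[c]) for c in ALPHABET)

def jaccard(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance, single-row version
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

legit  = ["google", "facebook", "wikipedia", "amazon"]
fluxed = ["xk2qzp9r", "q8vw3njt", "zr7t0ymc", "p4lk9xqa"]   # algorithmically generated look
print(kl_divergence(char_dist(fluxed), char_dist(legit)))   # larger divergence for fluxed names
print(jaccard("google", "xk2qzp9r"), edit_distance("google", "xk2qzp9r"))
```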
We also present an approach that computes the reputation of domains in a bipartite graph of the hosts within a network and the domains they access. The inference technique utilizes belief propagation, an approximation algorithm for marginal probability estimation. The computation of reputation scores is seeded with a small fraction of domains found in black and white lists. An application of this technique to an HTTP-proxy dataset from a large enterprise shows a high detection rate with low false positive rates.
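The sketch below is a much-simplified stand-in for this reputation computation: instead of belief propagation proper, it iteratively averages scores over a tiny host-domain bipartite graph seeded by black and white lists. The graph, the clamped seeds, and the fixed number of sweeps are illustrative assumptions.

```python
# Much-simplified stand-in for the belief-propagation approach: iterative averaging of
# reputation scores over a host-domain bipartite graph, seeded by small black/white lists.
# Graph, seeds, and iteration count are all illustrative assumptions.
edges = [  # (host, domain) access pairs
    ("h1", "good.example"), ("h1", "unknown1.example"),
    ("h2", "bad.example"),  ("h2", "unknown1.example"),
    ("h3", "bad.example"),  ("h3", "unknown2.example"),
]
seeds = {"good.example": 1.0, "bad.example": 0.0}   # whitelist = 1.0, blacklist = 0.0

hosts = {h for h, _ in edges}
domains = {d for _, d in edges}
score = {n: seeds.get(n, 0.5) for n in hosts | domains}   # 0.5 = unknown prior

for _ in range(10):                                       # fixed number of sweeps
    for h in hosts:                                       # host score = mean of its domains
        score[h] = sum(score[d] for x, d in edges if x == h) / sum(1 for x, _ in edges if x == h)
    for d in domains:
        if d in seeds:                                    # keep seed labels clamped
            continue
        score[d] = sum(score[h] for h, x in edges if x == d) / sum(1 for _, x in edges if x == d)

for d in sorted(domains):
    print(d, round(score[d], 3))                          # low scores suggest bad domains
```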
23. A NetFlow Based Internet-worm Detecting System in Large Network / Wang, Kuang-Ming. 04 September 2005 (has links)
Internet worms are a major threat to the security of today's Internet and cause significant worldwide disruption: the overwhelming traffic generated by huge numbers of infected hosts degrades the performance of the Internet, and network managers must mitigate the problem. In this paper we propose an automated, NetFlow-based method for detecting Internet worms in large networks. We also implement a prototype system, FloWorM, which helps network managers monitor suspicious Internet-worm activity and identify worm species in their managed networks. Our evaluation of the prototype on real large-scale and campus networks shows that it achieves a low false positive rate and a good detection rate.
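FloWorM's detection logic is not described in detail in this abstract; the following is only an assumed, simplified heuristic of the kind a NetFlow-based worm detector might use: flag a source that contacts an unusually large number of distinct destinations on one port within an export window. The flow-record format and the threshold are assumptions.

```python
# Illustrative heuristic only (not the FloWorM system): a host that contacts an unusually
# large number of distinct destinations on one port within a window is flagged as a
# possible scanning worm.
from collections import defaultdict

THRESHOLD = 100   # distinct destinations per (source, port) per window; assumed value

def scan_suspects(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples from one NetFlow export window."""
    dests = defaultdict(set)
    for src, dst, port in flows:
        dests[(src, port)].add(dst)
    return [(src, port, len(d)) for (src, port), d in dests.items() if len(d) >= THRESHOLD]

# toy example: one host sweeping port 445 across many addresses
flows = [("10.0.0.5", f"192.168.1.{i}", 445) for i in range(1, 150)]
flows += [("10.0.0.7", "192.168.1.1", 80)]
print(scan_suspects(flows))   # [('10.0.0.5', 445, 149)]
```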
24. Structural Analysis of Large Networks: Observations and Applications / McGlohon, Mary. 01 December 2010 (has links)
Network data (also referred to as relational data, social network data, real graph data) has become ubiquitous, and understanding patterns in this data has become an important research problem. We investigate how interactions in social networks are formed and how these interactions facilitate diffusion, model these behaviors, and apply these findings to real-world problems.
We examined graphs with up to 16 million nodes, across many domains, from academic citation networks to campaign contributions and actor-movie networks. We also performed several case studies in online social networks such as blogs and message board communities.
Our major contributions are the following: (a) we discover several surprising patterns in network topology and interactions, such as the Popularity Decay power law (in-links to a blog post decay following a power law with exponent -1.5) and the oscillating size of connected components; (b) we propose generators, such as the Butterfly generator, that reproduce both established and new properties found in real networks; and (c) we present several case studies, including a proposed method for detecting misstatements in accounting data, where using network effects gave a significant boost in detection accuracy.
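As a small illustration of how a decay exponent such as the -1.5 above can be estimated, the sketch below fits a line to log(in-links) versus log(post age). The synthetic data is generated with exponent -1.5 purely to demonstrate the fit; it is not the thesis dataset or its estimation procedure.

```python
# Sketch of estimating a popularity-decay exponent: fit a line to
# log(in-links) vs. log(age). Synthetic data only, generated with exponent -1.5.
import numpy as np

rng = np.random.default_rng(1)
age_days = np.arange(1, 200)
inlinks = 1000 * age_days ** -1.5 * np.exp(rng.normal(0, 0.2, size=age_days.size))

slope, intercept = np.polyfit(np.log(age_days), np.log(inlinks), 1)
print(f"estimated power-law exponent: {slope:.2f}")   # close to -1.5
```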
25. Configuration and Implementation Issues for a Firewall System Running on a Mobile Handset / Martinsen, Pal-Erik. January 2005 (has links)
Any device connected to the Internet needs to be protected, and using a firewall as a first line of defence is a very common way to provide that protection. A firewall can be set up to protect an entire network or just a single host. As it becomes more and more popular to connect mobile phones and other handheld devices to the Internet, the big question is: how do we protect those devices from the perils of the Internet? This work investigates issues in implementing a firewall system for protecting mobile devices. Firewall administration is error prone: even for a network administrator, setting up a correctly configured firewall in a network setting is difficult. To enable an ordinary mobile phone user to set up a firewall configuration that protects his phone, the system must be easy to understand and must warn the user of possible mistakes. Generic algorithms for firewall rule-set sorting and anomaly discovery are presented; they ensure that the rule-set is error free and safe to use, which is a vital part of any firewall system. The prototype developed can be used to find errors in existing firewall rule-sets, given either in a native firewall configuration format (currently only IPF is supported) or in a generic XML format developed as part of this research project. Finally, a new graphical visualization concept is presented that allows the end user to configure an advanced firewall from a device with a small screen and limited input possibilities.
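The thesis's generic sorting and anomaly-discovery algorithms are not reproduced here; the sketch below only illustrates one classic rule-set anomaly, shadowing, under a deliberately simplified rule model (protocol plus IPv4 prefixes plus action), which is an assumption made for illustration.

```python
# Simplified illustration of one firewall rule anomaly (shadowing): an earlier rule
# matches a superset of a later rule with a different action, so the later rule can
# never fire. The rule model is an assumption, far simpler than the thesis's algorithms.
from ipaddress import ip_network

rules = [  # evaluated top-down, first match wins
    {"proto": "tcp", "src": "0.0.0.0/0",      "dst": "10.0.0.0/8",   "action": "deny"},
    {"proto": "tcp", "src": "192.168.1.0/24", "dst": "10.0.1.0/24",  "action": "allow"},
]

def covers(general, specific):
    return (general["proto"] in (specific["proto"], "any")
            and ip_network(specific["src"]).subnet_of(ip_network(general["src"]))
            and ip_network(specific["dst"]).subnet_of(ip_network(general["dst"])))

def shadowed_rules(rules):
    return [(i, j) for j, later in enumerate(rules)
            for i, earlier in enumerate(rules[:j])
            if covers(earlier, later) and earlier["action"] != later["action"]]

print(shadowed_rules(rules))   # [(0, 1)]: rule 1 is shadowed by rule 0
```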
26. Network event detection with entropy measures / Eimann, Raimund E. A. January 2008 (has links)
Information measures may be used to estimate the amount of information emitted by discrete information sources. Network streams are an example for such discrete information sources. This thesis investigates the use of information measures for the detection of events in network streams. Starting with the fundamental entropy and complexity measures proposed by Shannon and Kolmogorov, it reviews a range of candidate information measures for network event detection, including algorithms from the Lempel-Ziv family and a relative newcomer, the T-entropy. Using network trace data from the University of Auckland, the thesis demonstrates experimentally that these measures are in principle suitable for the detection of a wide range of network events. Several key parameters influence the detectability of network events with information measures. These include the amount of data considered in each traffic sample and the choice of observables. Among others, a study of the entropy behaviour of individual observables in event and non-event scenarios investigates the optimisation of these parameters. The thesis also examines the impact of some of the detected events on different information measures. This motivates a discussion on the sensitivity of various measures. A set of experiments demonstrating multi-dimensional network event classification with multiple observables and multiple information measures concludes the thesis.
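A minimal sketch of the underlying idea, assuming source IP addresses as the observable and a simple count-based estimate: the Shannon entropy of a traffic window shifts when an event changes the mix of observed values, shown here with one address dominating the window. The window contents are illustrative only.

```python
# Minimal sketch: Shannon entropy of an observable (here, source addresses in a
# traffic window) shifts when an event such as a flood changes the address mix.
import math
from collections import Counter

def shannon_entropy(values):
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

normal_window = ["10.0.0.%d" % (i % 50) for i in range(1000)]      # many balanced sources
event_window  = ["203.0.113.9"] * 900 + normal_window[:100]        # one source dominates

print(round(shannon_entropy(normal_window), 2))   # high entropy
print(round(shannon_entropy(event_window), 2))    # sharply lower entropy
```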
30. Outlier detection by network flow / Liu, Ying. January 2007 (has links) (PDF)
Thesis (Ph. D.)--University of Alabama at Birmingham, 2007. Additional advisors: Elliot J. Lefkowitz, Kevin D. Reilly, Robert Thacker, Chengcui Zhang. Description based on contents viewed Feb. 7, 2008; title from title screen. Includes bibliographical references (p. 125-132).