31 |
Exploitation of signal information for mobile speed estimation and anomaly detection. Afgani, Mostafa Z. January 2011.
Although the primary purpose of the signal received by a mobile handset or smartphone is to enable wireless communication, the information extracted from it can be reused to provide a number of additional services. Two such services discussed in this thesis are mobile speed estimation and signal anomaly detection. The proposed algorithms exploit the propagation-environment-specific information that is already imprinted on the received signal and therefore do not incur any additional signalling overhead. Speed estimation is useful for providing navigation and location-based services in areas where devices based on global navigation satellite systems (GNSS) are unusable, while the proposed anomaly detection algorithms can be used to locate signal faults and aid spectrum sensing in cognitive radio systems. The speed estimation algorithms described within this thesis require a receiver with at least two antenna elements and a wideband radio frequency (RF) signal source. The channel transfer functions observed at the antenna elements are compared to yield an estimate of the device speed. The basic algorithm is a one-dimensional, unidirectional, two-antenna solution. The speed of the mobile receiver is estimated from knowledge of the fixed inter-antenna distance and the time it takes for the trailing antenna to sense channel conditions similar to those previously observed at the leading antenna. A by-product of the algorithm is an environment-specific spatial correlation function which may be combined with theoretical models of spatial correlation to extend and improve the accuracy of the algorithm. Results obtained via computer simulations are provided. The anomaly detection algorithms proposed in this thesis highlight unusual signal features while ignoring events that are nominal. When the test signal possesses a periodic frame structure, Kullback-Leibler divergence (KLD) analysis is employed to statistically compare successive signal frames. A method of automatically extracting the required frame-period information from the signal is also provided. When the signal under test lacks a periodic frame structure, information content analysis of signal events can be used instead. Clean training data is required by this algorithm to initialise the reference event probabilities. In addition to the results obtained from extensive computer simulations, an architecture for field-programmable gate array (FPGA) based hardware implementations of the KLD-based algorithm is provided. Results showing the performance of the algorithms against real test signals captured over the air are also presented. Both sets of algorithms are simple, effective and have low computational complexity – implying that real-time implementations on platforms with limited processing power and energy are feasible. This is an important quality since location-based services are expected to be an integral part of next-generation cognitive radio handsets.
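The basic two-antenna estimate lends itself to a short illustration. The sketch below is a minimal reconstruction of the idea, not the thesis's algorithm; the magnitude-only channel traces, function names, and parameters are all assumptions.

```python
import numpy as np

def estimate_speed(h_lead, h_trail, antenna_spacing, snapshot_rate):
    """Hypothetical sketch: cross-correlate the channel traces observed at
    the leading and trailing antennas and convert the best-matching
    positive lag into a speed estimate (spacing / delay).

    h_lead, h_trail: equal-length arrays of channel-magnitude snapshots.
    """
    a = h_lead - h_lead.mean()
    b = h_trail - h_trail.mean()
    xcorr = np.correlate(b, a, mode="full")   # empirical correlation vs. lag
    lags = np.arange(-len(a) + 1, len(a))     # lag in snapshots
    pos = lags > 0                            # trailing antenna lags the leader
    best_lag = lags[pos][np.argmax(xcorr[pos])]
    delay = best_lag / snapshot_rate          # seconds
    return antenna_spacing / delay            # metres per second
```

Here `xcorr` plays the role of the environment-specific spatial correlation function the abstract mentions as a by-product.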
|
32 |
Adversarial Anomaly Detection. Radhika Bhargava. 02 August 2019.
Considerable attention has been given to the vulnerability of machine learning to adversarial samples. This is particularly critical in anomaly detection; uses such as detecting fraud, intrusion, and malware must assume a malicious adversary. We specifically address poisoning attacks, where the adversary injects carefully crafted benign samples into the data, leading to concept drift that causes the anomaly detector to misclassify the actual attack as benign. Our goal is to estimate the vulnerability of an anomaly detection method to an unknown attack, in particular the expected minimum number of poison samples the adversary would need to succeed. Such an estimate is a necessary step in risk analysis: do we expect the anomaly detection to be sufficiently robust to be useful in the face of attacks? We analyze DBSCAN, LOF, and one-class SVM as anomaly detection methods and derive estimates of their robustness to poisoning attacks. The analytical estimates are validated against the number of poison samples needed for the actual anomalies in standard anomaly detection test datasets. We then develop a defense mechanism, based on the concept drift caused by the poisoned points, to identify that an attack is underway. We show that while it is possible to detect the attacks, doing so degrades the performance of the anomaly detection method. Finally, we investigate whether the adversarial samples generated for one anomaly detection method transfer to another anomaly detection method.
|
33 |
Anomaly detection via high-dimensional data analysis on web access data. January 2009.
Suen, Ho Yan. Thesis (M.Phil.), Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 99-104). Abstract also in Chinese.
Contents:
1 Introduction: Motivation; Organization
2 Literature Review: Related Works; Background Study (World Wide Web; Distributed Denial of Service Attack; Tools for Dimension Reduction; Tools for Anomaly Detection; Receiver Operating Characteristics (ROC) Analysis)
3 System Design: Methodology; System Overview; Reference Profile Construction; Real-time Anomaly Detection and Response; Chapter Summary
4 Reference Profile Construction: Web Access Logs Collection; Data Preparation; Feature Extraction and Embedding Engine (FEE Engine) – Sub-Sequence Extraction, Hash Function on Sub-sequences (optional), Feature Vector Construction, Diffusion Wavelets Embedding, Numerical Example of Feature Set Reduction, Reference Profile and Further Use of FEE Engine; Chapter Summary
5 Real-time Anomaly Detection and Response: Session Filtering and Data Preparation; Feature Extraction and Embedding; Distance-based Outlier Scores Calculation; Anomaly Detection and Response – Length-Based Anomaly Detection Modules, Characteristics of Anomaly Detection Modules, Dynamic Threshold Adaptation; Chapter Summary
6 Experimental Results: Experiment Datasets (Normal Web Access Logs; Attack Data Generation); ROC Curve Construction; System Parameters Selection; Performance of Anomaly Detection (Performance Analysis; Performance in Defending DDoS Attacks); Computation Requirement; Chapter Summary
7 Conclusion and Future Work
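Going by the chapter titles, the detection pipeline extracts sub-sequences from each web session, embeds them as feature vectors, and scores sessions by distance against a reference profile. A toy sketch of those two steps follows (all details assumed; the thesis's actual FEE Engine additionally applies hashing and diffusion-wavelet embedding):

```python
from collections import Counter
import math

def subsequences(pages, n=2):
    """Length-n sub-sequences (n-grams) of a session's page path."""
    return [tuple(pages[i:i + n]) for i in range(len(pages) - n + 1)]

def feature_vector(session, vocabulary, n=2):
    """Count each vocabulary sub-sequence in the session."""
    counts = Counter(subsequences(session, n))
    return [counts.get(sub, 0) for sub in vocabulary]

def outlier_score(vec, reference_vectors, k=3):
    """Distance-based outlier score: mean distance to the k nearest
    reference (normal-profile) vectors."""
    dists = sorted(math.dist(vec, r) for r in reference_vectors)
    return sum(dists[:k]) / k

normal_sessions = [["/", "/a", "/b"], ["/", "/a", "/c"], ["/", "/b", "/c"]]
vocab = sorted({s for sess in normal_sessions for s in subsequences(sess)})
refs = [feature_vector(s, vocab) for s in normal_sessions]
# A session unlike any normal one receives a high score.
print(outlier_score(feature_vector(["/x", "/x", "/x"], vocab), refs))
```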
|
34 |
Scalable Techniques for Anomaly Detection. Yadav, Sandeep. 14 March 2013.
Computer networks are constantly attacked by malicious entities for various reasons. Network-based attacks include, but are not limited to, Distributed Denial of Service (DDoS), DNS-based attacks, and Cross-site Scripting (XSS). Such attacks exploit either network-protocol or end-host software vulnerabilities. Current network traffic analysis techniques employed for the detection and/or prevention of these anomalies suffer from significant delay or have only limited scalability because of their huge resource requirements. This dissertation proposes more scalable techniques for network anomaly detection.
We propose using DNS analysis to detect a wide variety of network anomalies. The use of DNS is motivated by the fact that DNS traffic comprises only 2-3% of total network traffic, which reduces the burden on anomaly detection resources. Our motivation additionally follows from the observation that almost any Internet activity (legitimate or otherwise) is marked by the use of DNS. We propose several techniques for DNS traffic analysis that distinguish anomalous DNS traffic patterns, which in turn identify different categories of network attacks.
First, we present MiND, a system to detect misdirected DNS packets arising from poisoned name-server records or from local infections such as those caused by worms like DNSChanger. MiND validates DNS packets against an externally collected database of authoritative name servers for second- and third-level domains. We deploy this tool at the edge of a university campus network for evaluation.
Second, we focus on domain-fluxing botnet detection by exploiting the high entropy inherent in the sets of domains used to locate the Command and Control (C&C) server. We apply three metrics, namely the Kullback-Leibler divergence, the Jaccard index, and the edit distance, to different groups of domain names present in Tier-1 ISP DNS traces obtained from South Asia and South America. Our evaluation successfully detects existing domain-fluxing botnets such as Conficker and also recognizes new botnets. We extend this approach by utilizing DNS failures to improve the latency of detection. Alternatively, we propose a system that uses temporal and entropy-based correlation between successful and failed DNS queries for fluxing-botnet detection.
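For intuition, the divergence test on domain groups can be sketched as follows (a toy example with made-up domain lists; the actual evaluation operates on groups of domains drawn from Tier-1 ISP traces):

```python
from collections import Counter
import math

def char_dist(domains):
    """Empirical character distribution over a group of domain names."""
    counts = Counter(c for d in domains for c in d)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence D(p || q) in bits; eps smooths
    characters that never occur in the baseline q."""
    return sum(pv * math.log2(pv / q.get(c, eps)) for c, pv in p.items())

benign = char_dist(["google", "amazon", "wikipedia", "github"])
suspect = char_dist(["xk2qzv9f", "q8rjw0pz", "zv7kq2xn"])   # flux-like names
print(kl_divergence(suspect, benign))   # high divergence suggests fluxing
```

Algorithmically generated (fluxed) names spread probability mass almost uniformly over the alphabet, so their character distribution diverges sharply from that of human-chosen names.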
We also present an approach that computes the reputation of domains over a bipartite graph of the hosts within a network and the domains they access. The inference technique uses belief propagation, an approximation algorithm for marginal probability estimation. The computation of reputation scores is seeded with a small fraction of domains found in black and white lists. An application of this technique to an HTTP-proxy dataset from a large enterprise shows a high detection rate with low false positive rates.
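A drastically simplified, assumed version of this propagation (real belief propagation exchanges full belief distributions rather than scalar scores, and the graph below is invented) might look like:

```python
# Toy bipartite graph: hosts -> domains they accessed.
edges = {
    "host1": ["good.com", "unknown1.com"],
    "host2": ["evil.net", "unknown1.com"],
    "host3": ["evil.net", "unknown2.com"],
}
# Seed reputations from white/black lists (1.0 benign, 0.0 malicious).
reputation = {"good.com": 1.0, "evil.net": 0.0}
domains = {d for ds in edges.values() for d in ds}
for d in domains:
    reputation.setdefault(d, 0.5)            # unknown domains start neutral

# A few rounds of scalar message passing: a host's score is the mean of its
# domains' scores, and an unseeded domain's score is the mean of its hosts'.
for _ in range(10):
    host_score = {h: sum(reputation[d] for d in ds) / len(ds)
                  for h, ds in edges.items()}
    for d in domains:
        if d in ("good.com", "evil.net"):
            continue                          # keep list-seeded scores fixed
        hosts = [h for h, ds in edges.items() if d in ds]
        reputation[d] = sum(host_score[h] for h in hosts) / len(hosts)

print(reputation)   # unknown2.com inherits a low score via host3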
|
35 |
A NetFlow Based Internet-worm Detecting System in Large Network. Wang, Kuang-Ming. 04 September 2005.
Internet worms are a major threat to the security of today's Internet and cause significant worldwide disruptions: a huge number of infected hosts generating overwhelming traffic will impact the performance of the Internet. Network managers have a duty to mitigate this issue. In this paper we propose an automated method for detecting Internet worms in large networks based on NetFlow. We also implement a prototype system, FloWorM, which can help network managers monitor suspected Internet-worm activities and identify their species in their managed networks. Our evaluation of the prototype system on real large and campus networks validates that it achieves a low false positive rate and a good detection rate.
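A minimal sketch of a NetFlow-based heuristic in this spirit (not FloWorM itself; the record format, thresholds, and scan-size cutoff are assumptions):

```python
from collections import defaultdict

# Hypothetical NetFlow records: (src_ip, dst_ip, dst_port, packets).
flows = [
    ("10.0.0.5", f"192.168.1.{i}", 445, 1) for i in range(1, 120)
] + [
    ("10.0.0.9", "192.168.1.10", 80, 40),   # ordinary web traffic
    ("10.0.0.9", "192.168.1.11", 80, 35),
]

# A worm-like host contacts many distinct destinations on one port with
# tiny flows; count distinct destinations per (source, port) pair.
fanout = defaultdict(set)
for src, dst, port, pkts in flows:
    if pkts <= 2:                           # scan-sized flow
        fanout[(src, port)].add(dst)

THRESHOLD = 100                             # assumed tuning parameter
for (src, port), dsts in fanout.items():
    if len(dsts) >= THRESHOLD:
        print(f"suspect worm scanner: {src} on port {port} "
              f"({len(dsts)} destinations)")
```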
|
36 |
Structural Analysis of Large Networks: Observations and Applications. McGlohon, Mary. 01 December 2010.
Network data (also referred to as relational data, social network data, or real graph data) has become ubiquitous, and understanding patterns in this data has become an important research problem. We investigate how interactions in social networks are formed and how these interactions facilitate diffusion, we model these behaviors, and we apply these findings to real-world problems.
We examined graphs of up to 16 million nodes, across many domains, from academic citation networks to campaign contributions and actor-movie networks. We also performed several case studies in online social networks such as blogs and message board communities.
Our major contributions are the following: (a) we discover several surprising patterns in network topology and interactions, such as the Popularity Decay power law (in-links to a blog post decay with a power law with a -1.5 exponent) and the oscillating size of connected components; (b) we propose generators, such as the Butterfly generator, that reproduce both established and new properties found in real networks; (c) we present several case studies, including a proposed method for detecting misstatements in accounting data, where using network effects gave a significant boost in detection accuracy.
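The Popularity Decay exponent, for instance, can be recovered with a log-log regression. The snippet below fits synthetic data generated to follow the reported t^-1.5 decay (the counts and constants are invented purely for illustration):

```python
import numpy as np

# Synthetic in-link counts to a blog post on days 1..30 after posting,
# following ~ t^-1.5 decay plus multiplicative noise.
days = np.arange(1, 31)
rng = np.random.default_rng(1)
inlinks = 1000 * days**-1.5 * rng.lognormal(0, 0.1, size=days.size)

# Fit the exponent by linear regression in log-log space:
# log y = alpha * log t + c, where the slope alpha is the exponent.
alpha, c = np.polyfit(np.log(days), np.log(inlinks), 1)
print(f"estimated decay exponent: {alpha:.2f}")   # expect roughly -1.5
```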
|
37 |
Anomaly detection in the surveillance domain. Brax, Christoffer. January 2011.
In the post-September 11 era, the demand for security has increased in virtually all parts of society. The need for increased security originates from the emergence of new threats which differ from the traditional ones in such a way that they cannot be easily defined and are sometimes unknown or hidden in the “noise” of daily life. When the threats are known and definable, methods based on situation recognition can be used to find them. However, when the threats are hard or impossible to define, other approaches must be used. One such approach is data-driven anomaly detection, where a model of normalcy is built and used to find anomalies, that is, things that do not fit the normal model. Anomaly detection has been identified as one of many enabling technologies for increasing security in society. In this thesis, the problem of how to detect anomalies in the surveillance domain is studied. This is done through a characterisation of the surveillance domain and a literature review that identifies a number of weaknesses in previous anomaly detection methods used in the surveillance domain. Examples of identified weaknesses include the handling of contextual information, the inclusion of expert knowledge, and the handling of joint attributes. Based on the findings from this study, a new anomaly detection method is proposed. The proposed method is evaluated with respect to detection performance and computational cost on a number of datasets, recorded from real-world sensors, in different application areas of the surveillance domain. Additionally, the method is compared to two other commonly used anomaly detection methods. Finally, the method is evaluated on a dataset with anomalies developed together with maritime subject matter experts. The conclusion of the thesis is that the proposed method has a number of strengths compared to previous methods and is suitable for use in operative maritime command and control systems. / Christoffer Brax also does research at the University of Skövde, Informatics Research Centre.
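The thesis's own method is not reproduced here; as a generic, assumed sketch of the data-driven normalcy modelling it describes, a k-nearest-neighbour distance score with a threshold calibrated on normal data:

```python
import numpy as np

class NormalcyModel:
    """Minimal model of normalcy: score an observation by its mean distance
    to the k nearest training points; flag it anomalous if the score
    exceeds a threshold calibrated on the normal training data."""

    def __init__(self, k=5, quantile=0.99):
        self.k, self.quantile = k, quantile

    def fit(self, X):
        self.X = np.asarray(X)
        scores = [self._score(x, exclude_self=True) for x in self.X]
        self.threshold = np.quantile(scores, self.quantile)
        return self

    def _score(self, x, exclude_self=False):
        d = np.sort(np.linalg.norm(self.X - x, axis=1))
        d = d[1:self.k + 1] if exclude_self else d[:self.k]
        return d.mean()

    def is_anomaly(self, x):
        return self._score(np.asarray(x)) > self.threshold

# e.g. rows could be vessel feature vectors (speed, heading, ...).
model = NormalcyModel().fit(np.random.default_rng(2).normal(0, 1, (300, 2)))
print(model.is_anomaly([5.0, 5.0]))   # True: far from all normal data
```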
|
38 |
Configuration and Implementation Issues for a Firewall System Running on a Mobile Handset. Martinsen, Pal-Erik. January 2005.
Any device connected to the Internet needs to be protected. Using a firewall as a first line of defence is a very common way to provide protection, and a firewall can be set up to protect an entire network or just a single host. As it is becoming more and more popular to connect mobile phones and other hand-held devices to the Internet, the big question is: how can those devices be protected from the perils of the Internet? This work investigates issues with the implementation of a firewall system for protecting mobile devices. Firewall administration is an error-prone and difficult task, and setting up a correctly configured firewall in a network setting is difficult even for a network administrator. To enable an ordinary mobile phone user to set up a firewall configuration to protect his mobile phone, it is important to have a system that is easy to understand and warns the user of possible mistakes. Generic algorithms for firewall rule-set sorting and anomaly discovery are presented. These ensure that the rule-set is error-free and safe to use, which is a vital part of any firewall system. The prototype developed can be used to find errors in existing firewall rule-sets. The rule-set can be in either a native firewall configuration format (currently only IPF is supported) or a generic XML format developed as part of this research project. Further, a new graphical visualization concept is presented that allows the end user to configure an advanced firewall configuration from a device with a small screen and limited input possibilities.
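One classic rule-set anomaly such algorithms must catch is shadowing, where an earlier rule with the opposite action completely covers a later one, so the later rule can never match. A hedged sketch under first-match semantics (the rule tuple format is an assumption for illustration, not the thesis's IPF or XML schema):

```python
from ipaddress import ip_network

# Hypothetical rule representation: (action, src_net, dst_net, port).
rules = [
    ("deny",  ip_network("10.0.0.0/8"),  ip_network("0.0.0.0/0"), 80),
    ("allow", ip_network("10.1.0.0/16"), ip_network("0.0.0.0/0"), 80),  # shadowed
]

def shadows(earlier, later):
    """later is shadowed if an earlier rule with a different action matches
    everything later matches (first-match firewall semantics)."""
    return (earlier[0] != later[0]
            and later[1].subnet_of(earlier[1])
            and later[2].subnet_of(earlier[2])
            and (earlier[3] == later[3] or earlier[3] is None))

for i, later in enumerate(rules):
    for earlier in rules[:i]:
        if shadows(earlier, later):
            print(f"rule {i} is shadowed: {later}")
```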
|
39 |
Network event detection with entropy measures. Eimann, Raimund E. A. January 2008.
Information measures may be used to estimate the amount of information emitted by discrete information sources. Network streams are an example of such discrete information sources. This thesis investigates the use of information measures for the detection of events in network streams. Starting with the fundamental entropy and complexity measures proposed by Shannon and Kolmogorov, it reviews a range of candidate information measures for network event detection, including algorithms from the Lempel-Ziv family and a relative newcomer, the T-entropy. Using network trace data from the University of Auckland, the thesis demonstrates experimentally that these measures are in principle suitable for the detection of a wide range of network events. Several key parameters influence the detectability of network events with information measures. These include the amount of data considered in each traffic sample and the choice of observables. Among other experiments, a study of the entropy behaviour of individual observables in event and non-event scenarios investigates how these parameters may be optimised. The thesis also examines the impact of some of the detected events on different information measures. This motivates a discussion on the sensitivity of various measures. A set of experiments demonstrating multi-dimensional network event classification with multiple observables and multiple information measures concludes the thesis.
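As a rough illustration of the approach (Shannon entropy over fixed-size samples of one observable; the windowing and z-score event test below are assumptions, not the thesis's detector):

```python
from collections import Counter
import math

def shannon_entropy(symbols):
    """Shannon entropy (bits) of the empirical distribution of symbols."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def entropy_events(stream, window=100, z=3.0):
    """Flag windows whose entropy deviates from the mean window entropy by
    more than z standard deviations. `stream` is a sequence of values of
    one observable (e.g. destination ports)."""
    entropies = [shannon_entropy(stream[i:i + window])
                 for i in range(0, len(stream) - window + 1, window)]
    mean = sum(entropies) / len(entropies)
    var = sum((e - mean) ** 2 for e in entropies) / len(entropies)
    std = var ** 0.5 or 1e-9
    return [i for i, e in enumerate(entropies) if abs(e - mean) > z * std]
```

With destination ports as the observable, for example, a scan or DDoS typically drives the window entropy sharply up or down, which is what the deviation test flags.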
|