151

Risk, Privacy, and Security in Computer Networks

Årnes, Andre January 2006 (has links)
With an increasingly digitally connected society come complexity, uncertainty, and risk. Network monitoring, incident management, and digital forensics are of increasing importance with the escalation of cybercrime and other network-supported serious crimes. New laws and regulations governing electronic communications, cybercrime, and data retention are being proposed, continuously requiring new methods and tools.

This thesis introduces a novel approach to real-time network risk assessment based on hidden Markov models, which represent the likelihood of transitions between security states. The method measures risk as a composition of individual hosts, providing a precise, fine-grained model for assessing risk and for supporting decisions during incident response. The approach has been integrated with an existing framework for distributed, large-scale intrusion detection, and the results of the risk assessment are applied to prioritize the alerts produced by the intrusion detection sensors. Using this implementation, the approach is evaluated on both simulated and real-world data.

Network monitoring can encompass large networks and process enormous amounts of data, and its very ubiquity can pose a great threat to the privacy and confidentiality of network users. Existing measures for anonymization and pseudonymization are analyzed with respect to the trade-off between performing meaningful data analysis and protecting the identities of the users. The results demonstrate that most existing solutions for pseudonymization are vulnerable to a range of attacks. As a remedy, several ways of strengthening the schemes are proposed, and a method for unlinkable transaction pseudonyms is considered.

Finally, a novel method for performing digital forensic reconstructions in a virtual security testbed is proposed. Based on a hypothesis of the security incident in question, the testbed is configured with the appropriate operating systems, services, and exploits. Attacks are formulated as event chains and replayed on the testbed, and the effects of each event are analyzed in order to support or refute the hypothesis. The purpose of the approach is to facilitate reconstruction experiments in digital forensics. Two examples demonstrate the approach: an overview example based on the Trojan defense and a detailed example of a multi-step attack. Although a reconstruction can neither prove a hypothesis with absolute certainty nor exclude the correctness of other hypotheses, a standardized environment combined with event reconstruction and testing can lend credibility to an investigation and can be a valuable asset in court.
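The risk-assessment idea in this abstract can be illustrated with a small sketch: each host keeps a belief over hidden security states that is updated with an HMM forward step whenever an alert is observed, and network risk is the sum of the hosts' expected state costs. The states, matrices, costs, and host addresses below are illustrative assumptions, not the thesis's actual parameters.

```python
# A minimal sketch (not the thesis implementation) of HMM-based host risk
# assessment: each host keeps a belief over hidden security states, updated
# with a forward step per observed alert; risk is the expected state cost.
import numpy as np

STATES = ["good", "probed", "compromised"]   # assumed hidden security states
COST = np.array([0.0, 5.0, 50.0])            # assumed cost of each state

# Assumed transition matrix P[i, j] = P(next state j | current state i)
P = np.array([[0.95, 0.04, 0.01],
              [0.10, 0.80, 0.10],
              [0.00, 0.05, 0.95]])

# Assumed observation likelihoods B[i, o] = P(alert class o | state i),
# with alert classes 0 = no alert, 1 = scan alert, 2 = exploit alert.
B = np.array([[0.90, 0.08, 0.02],
              [0.40, 0.50, 0.10],
              [0.20, 0.30, 0.50]])

def forward_step(belief, obs):
    """One HMM forward update: predict with P, weight by likelihood of obs."""
    predicted = belief @ P
    updated = predicted * B[:, obs]
    return updated / updated.sum()

def host_risk(belief):
    """Risk of a single host = expected cost under its state belief."""
    return float(belief @ COST)

# Example: two hosts, one quiet and one receiving escalating alerts.
beliefs = {"10.0.0.5": np.array([1.0, 0.0, 0.0]),
           "10.0.0.9": np.array([1.0, 0.0, 0.0])}
for obs in [1, 1, 2]:                         # alerts observed for 10.0.0.9
    beliefs["10.0.0.9"] = forward_step(beliefs["10.0.0.9"], obs)
beliefs["10.0.0.5"] = forward_step(beliefs["10.0.0.5"], 0)

network_risk = sum(host_risk(b) for b in beliefs.values())
print({h: round(host_risk(b), 2) for h, b in beliefs.items()}, round(network_risk, 2))
```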
152

Enhancing Performance of Vulnerability-based Intrusion Detection Systems

Farroukh, Amer 31 December 2010 (has links)
The accuracy of current intrusion detection systems (IDSes) is hindered by the limited capability of regular expressions (REs) to express the exact vulnerability. Recent advances have proposed vulnerability-based IDSes that parse traffic and retrieve protocol semantics to describe the vulnerability. Such a description of attacks is analogous to subscriptions that specify events of interest in event processing systems. However, the matching engines of state-of-the-art IDSes lack efficient algorithms that can process many signatures simultaneously. In this work, we place event processing at the core of the IDS and propose novel algorithms to efficiently parse and match vulnerability signatures. We are also among the first to detect complex attacks, such as the Conficker worm, that require correlating multiple protocol data units (MPDUs) while maintaining a small memory footprint. Our approach incurs negligible overhead when processing clean traffic, is resilient to attacks, and is faster than existing systems.
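The analogy between vulnerability signatures and event-processing subscriptions can be sketched as follows: each signature is a set of predicates over parsed protocol fields, and a counting-style matcher reports a signature when all of its predicates hold for one parsed message. The field names, predicate values, and the Conficker-like rule are illustrative assumptions rather than the thesis's actual signature set or matching algorithm.

```python
# A hedged sketch of signatures-as-subscriptions over parsed protocol fields:
# each signature is a set of (field, predicate) constraints, and a signature
# is reported when every constraint is satisfied by one parsed message.
SIGNATURES = {
    "smb_overflow":  {"proto": lambda v: v == "SMB",
                      "path_len": lambda v: v > 256},
    "rpc_conficker": {"proto": lambda v: v == "MSRPC",
                      "opnum": lambda v: v == 0x1f,
                      "stub_len": lambda v: v > 400},
}

def match(parsed_msg):
    """Return signatures whose constraints are all satisfied by the message."""
    hits = []
    for name, constraints in SIGNATURES.items():
        satisfied = sum(1 for field, pred in constraints.items()
                        if field in parsed_msg and pred(parsed_msg[field]))
        if satisfied == len(constraints):
            hits.append(name)
    return hits

# Example parsed protocol data unit (fields would come from a protocol parser).
print(match({"proto": "MSRPC", "opnum": 0x1f, "stub_len": 812}))  # ['rpc_conficker']
```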
154

A Covert Channel Using 802.11 LANs

Calhoun, Telvis Eugene 10 April 2009 (has links)
We present a covert side channel that uses the 802.11 MAC rate switching protocol. The covert channel provides a general method to hide communications in an 802.11 LAN. The technique uses a one-time password algorithm to ensure high-entropy randomness of the covert messages. We investigate how the covert side channel affects node throughput in mobile and non-mobile scenarios. We also investigate the covertness of the covert side channel using standardized entropy. The results show that the performance impact is minimal and increases slightly as the covert channel bandwidth increases. We further show that the channel has 100% accuracy with minimal impact on rate switching entropy. Finally, we present two applications for the covert channel: covert authentication and covert WiFi botnets.
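One way such a channel could work, sketched purely as an assumption rather than the thesis's actual scheme, is to let sender and receiver share a secret and a frame counter, derive a one-time keystream with an HMAC, and encode each whitened covert bit as a choice between two plausible data rates; the keystream keeps the observed rate sequence high-entropy.

```python
# A speculative sketch (not the thesis's exact scheme) of hiding bits in
# 802.11 rate choices: an HMAC-derived keystream whitens each covert bit,
# and the whitened bit selects between two adjacent plausible rates.
import hmac, hashlib

SECRET = b"shared-key"            # pre-shared secret (assumption)
RATES = [12, 18, 24, 36, 48, 54]  # Mbps values a sender might switch among

def keystream_bit(counter):
    digest = hmac.new(SECRET, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return digest[0] & 1

def encode(bits):
    """Map each covert bit to a rate; the HMAC bit flips the meaning per frame."""
    frames = []
    for i, b in enumerate(bits):
        symbol = b ^ keystream_bit(i)                          # whitened bit
        frames.append(RATES[2] if symbol == 0 else RATES[3])   # two adjacent rates
    return frames

def decode(frames):
    bits = []
    for i, rate in enumerate(frames):
        symbol = 0 if rate == RATES[2] else 1
        bits.append(symbol ^ keystream_bit(i))
    return bits

msg = [1, 0, 1, 1, 0, 0, 1]
assert decode(encode(msg)) == msg
print(encode(msg))
```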
155

On Optimizing Traffic Distribution for Clusters of Network Intrusion Detection and Prevention Systems

Le, Anh January 2008 (has links)
To address the overload conditions caused by the increasing network traffic volume, recent literature in the network intrusion detection and prevention field has proposed the use of clusters of network intrusion detection and prevention systems (NIDPSs). We observe that simple traffic distribution schemes are usually used for NIDPS clusters. These schemes have two major drawbacks: (1) the loss of correlation information caused by the traffic distribution, because correlated flows are not sent to the same NIDPS, and (2) the unbalanced loads of the NIDPSs. The first drawback severely affects the ability to detect intrusions that require analysis of correlated flows. The second drawback greatly increases the chance of overloading an NIDPS even when the loads of the others are low. In this thesis, we address these two drawbacks. In particular, we propose two novel traffic distribution systems, the Correlation-Based Load Balancer and the Correlation-Based Load Manager, as two different solutions to the NIDPS traffic distribution problem. On the one hand, the Load Balancer and the Load Manager both consider the current loads of the NIDPSs while distributing traffic, providing fine-grained load balancing and dynamic load distribution, respectively. On the other hand, both systems take traffic correlation into account in their distributions, thereby significantly reducing the loss of correlation information during the distribution of traffic. We have implemented prototypes of both systems and evaluated them using extensive simulations and real traffic traces. Overall, the evaluation results show that both systems have low overhead in terms of the delays introduced to packets. More importantly, compared to a naive hash-based distribution, the Load Balancer significantly improves the anomaly-based detection accuracy for DDoS attacks and port scans, the two major attacks that require analysis of correlated flows, while the Load Manager successfully maintains the NIDPSs' anomaly-based detection accuracy for these same attacks.
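A minimal sketch of the correlation-aware idea, under the assumption that flows sharing a correlation key (here, the internal destination IP) must reach the same NIDPS: flows are hashed to a default node, but a whole correlation group falls back to the least-loaded node when the default is overloaded. Node names, the key choice, and the overload rule are illustrative, not the systems' actual algorithms.

```python
# A rough sketch of correlation-aware distribution: flows that share a
# correlation key are pinned to one NIDPS so correlated evidence is not
# split, while an overload check keeps loads roughly balanced.
import hashlib

NODES = ["nidps-0", "nidps-1", "nidps-2"]
load = {n: 0 for n in NODES}       # flows currently assigned to each node
pinned = {}                        # correlation key -> node

def assign(flow):
    key = flow["dst_ip"]           # correlation key (assumption)
    if key not in pinned:
        default = NODES[int(hashlib.md5(key.encode()).hexdigest(), 16) % len(NODES)]
        # Fall back to the least-loaded node if the hashed target is overloaded.
        if load[default] <= 1.2 * min(load.values()) + 5:
            target = default
        else:
            target = min(NODES, key=lambda n: load[n])
        pinned[key] = target
    load[pinned[key]] += 1
    return pinned[key]

flows = [{"dst_ip": "10.0.0.%d" % (i % 4)} for i in range(20)]
print([assign(f) for f in flows], load)
```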
157

A statistical process control approach for network intrusion detection

Park, Yongro 13 January 2005 (has links)
Intrusion detection systems (IDS) have a vital role in protecting computer networks and information systems. In this thesis we applied an SPC monitoring concept to a certain type of traffic data in order to detect a network intrusion. We developed a general SPC intrusion detection approach and described it, along with the source and the preparation of the data used in this thesis. We extracted sample data sets that represent various situations, calculated event intensities for each situation, and stored these sample data sets in the data repository for use in future research. A regular batch mean chart was used to remove the sample data's inherent 60-second cycles. However, this proved too slow in detecting a signal because the regular batch mean chart only monitors the statistic at the end of the batch. To gain faster results, a modified batch mean (MBM) chart was developed that meets this goal. Subsequently, we developed the Modified Batch Mean Shewhart chart, the Modified Batch Mean Cusum chart, and the Modified Batch Mean EWMA chart and analyzed the performance of each on simulated data. The simulation studies showed that the MBM charts perform especially well with large signals, the type of signal typically associated with a DoS intrusion. The MBM charts can be applied in two ways: by using actual control limits or by using robust control limits. The actual control limits must be determined by simulation, but the robust control limits require nothing more than the use of the recommended limits. The robust MBM Shewhart chart was developed by choosing appropriate parameter values based on the batch size. The robust MBM Cusum chart and robust MBM EWMA chart were developed by choosing appropriate values of the charting parameters.
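The flavor of the MBM charts can be shown with a rough sketch: a Shewhart-style rule applied to a moving batch mean that is checked at every observation rather than only at the end of each batch. The in-control parameters, the control limit, and the injected shift are assumptions for illustration only.

```python
# A hedged illustration, not the thesis's MBM charts: a Shewhart-style check
# on a moving batch mean, evaluated at every observation instead of only at
# the end of each batch.
from collections import deque
import random

BATCH = 60                                 # batch size, e.g. one 60-second cycle
MU0, SIGMA0 = 100.0, 10.0                  # assumed in-control mean and std of event intensity
LIMIT = MU0 + 4 * SIGMA0 / BATCH ** 0.5    # assumed upper control limit for a batch mean

random.seed(1)
window = deque(maxlen=BATCH)
for t in range(600):
    x = random.gauss(MU0, SIGMA0)
    if t >= 400:                           # injected DoS-like jump in event intensity
        x += 15
    window.append(x)
    if len(window) == BATCH:
        m = sum(window) / BATCH            # moving batch mean, checked every step
        if m > LIMIT:
            print(f"signal at t={t}, batch mean={m:.1f}")
            break
```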
158

Intrusion Detection and Response Systems for Mobile Ad Hoc Networks

Huang, Yi-an 20 November 2006 (has links)
A mobile ad hoc network (MANET) consists of a group of autonomous mobile nodes with no infrastructure support. In this research, we develop a distributed intrusion detection and response system for MANETs, which we believe presents a second line of defense that cannot be replaced by prevention schemes. We based our detection framework on a study of attack taxonomy. We then propose a set of detection methods suitable for detecting different attack categories. Our approaches are based on protocol specification analysis with categorical and statistical measures. Node-based approaches may be too restrictive in scenarios where attack patterns cannot be observed by any isolated node. Therefore, we have developed cooperative detection approaches for a more effective detection model. One approach is to form IDS clusters by grouping nearby nodes, so that information can be exchanged within clusters. The cluster-based scheme is more efficient in terms of power consumption and resource utilization, and it also proves resilient against common security compromises without changing the decentralized assumption. We further address two response techniques, traceback and filtering. Existing traceback systems are not suitable for MANETs because they rely on incompatible assumptions such as trustworthy routers and a static route topology. Our solution, instead, adapts to dynamic topology with no infrastructure requirement and is resilient in the face of an arbitrary number of collaborating adversaries. We also develop smart filtering schemes to maximize the dropping rate of attack packets while minimizing the dropping rate of normal packets with a real-time guarantee. To validate our research, we present case studies using both ns-2 simulation and the MobiEmu emulation platform with three ad hoc routing protocols: AODV, DSR and OLSR. We implemented various representative attacks based on the attack taxonomy. Our experiments show very promising results using node-based and cluster-based approaches.
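A toy sketch of the clustering step, under assumed node positions, radio range, and a lowest-ID election rule: nodes within range of an existing head join its cluster, and each head would collect the local IDS observations of its members.

```python
# A toy sketch of forming IDS clusters from nearby MANET nodes. Positions,
# radio range, and the lowest-ID head-election rule are assumptions.
import math

RANGE = 30.0
nodes = {1: (0, 0), 2: (10, 5), 3: (50, 50), 4: (55, 48), 5: (90, 10)}

def in_range(a, b):
    (x1, y1), (x2, y2) = nodes[a], nodes[b]
    return math.hypot(x1 - x2, y1 - y2) <= RANGE

clusters = {}
for n in sorted(nodes):
    head = next((h for h in clusters if in_range(n, h)), None)
    if head is None:
        clusters[n] = [n]            # n becomes a cluster head
    else:
        clusters[head].append(n)     # n joins the first reachable head
print(clusters)                      # e.g. {1: [1, 2], 3: [3, 4], 5: [5]}
```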
159

Algorithms for Large-Scale Internet Measurements

Leonard, Derek Anthony December 2010 (has links)
As the Internet has grown in size and importance to society, it has become increasingly difficult to generate global metrics of interest that can be used to verify proposed algorithms or monitor performance. This dissertation tackles the problem by proposing several novel algorithms designed to perform Internet-wide measurements using existing or inexpensive resources. We initially address distance estimation in the Internet, which is used by many distributed applications. We propose a new end-to-end measurement framework called Turbo King (T-King) that uses the existing DNS infrastructure and, when compared to its predecessor King, obtains delay samples without bias in the presence of distant authoritative servers and forwarders, consumes half the bandwidth, and reduces the impact on caches at remote servers by several orders of magnitude. Motivated by recent interest in the literature and our need to find remote DNS nameservers, we next address Internet-wide service discovery by developing IRLscanner, whose main design objectives have been to maximize politeness at remote networks, allow scanning rates that achieve coverage of the Internet in minutes/hours (rather than weeks/months), and significantly reduce administrator complaints. Using IRLscanner and 24-hour scan durations, we perform 20 Internet-wide experiments using 6 different protocols (i.e., DNS, HTTP, SMTP, EPMAP, ICMP and UDP ECHO). We analyze the feedback generated and suggest novel approaches for reducing the amount of blowback during similar studies, which should enable researchers to collect valuable experimental data in the future with significantly fewer hurdles. We finally turn our attention to Intrusion Detection Systems (IDS), which are often tasked with detecting scans and preventing them; however, it is currently unknown how likely an IDS is to detect a given Internet-wide scan pattern and whether there exist sufficiently fast stealth techniques that can remain virtually undetectable at large scale. To address these questions, we propose a novel model for the window-expiration rules of popular IDS tools (i.e., Snort and Bro), derive the probability that existing scan patterns (i.e., uniform and sequential) are detected by each of these tools, and prove the existence of stealth-optimal patterns.
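The detection question studied analytically in the dissertation can be approximated with a small Monte Carlo sketch: an IDS watching a /16 flags a source once it sees T probes within W seconds, and we estimate how likely a uniform random scanner is to trigger that rule. All parameters, and the rule itself, are assumptions for illustration rather than the actual Snort or Bro configurations modeled in the thesis.

```python
# A Monte Carlo sketch (not the thesis's analytical model) of how likely a
# threshold/window IDS rule is to flag a uniform random scanner.
import numpy as np

MONITORED = 2 ** 16           # addresses the IDS can observe (a /16), assumption
INTERNET = 2 ** 32            # space scanned uniformly at random
RATE, DURATION = 5000, 3600   # scanner speed (probes/s) and scan length (s)
T, W = 5, 60                  # alert on T monitored probes within W seconds

def detection_probability(trials=500, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        # Number of probes that land in the monitored prefix, placed uniformly in time.
        n = rng.binomial(RATE * DURATION, MONITORED / INTERNET)
        if n < T:
            continue
        times = np.sort(rng.uniform(0.0, DURATION, n))
        # Flagged if some T consecutive monitored probes fall within W seconds.
        if np.any(times[T - 1:] - times[: n - T + 1] <= W):
            hits += 1
    return hits / trials

print(detection_probability())    # close to 1 for a scanner this fast
```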
160

An Extensible Framework For Automated Network Attack Signature Generation

Kenar, Serkan 01 January 2010 (has links) (PDF)
The effectiveness of misuse-based intrusion detection systems (IDS) is seriously undermined as threats advance in speed and scale. Today, worms, trojans, viruses, and other threats can spread around the globe in less than thirty minutes. In order to detect these emerging threats, signatures must be generated automatically and distributed to intrusion detection systems rapidly. There are studies on automatically generating signatures for worms and attacks. However, these systems either rely on honeypots, which are supposed to receive only suspicious traffic, or use port-scanning outlier detectors. In this study, an open, extensible system based on a network IDS is proposed to identify suspicious traffic using anomaly detection methods and to automatically generate attack signatures from this suspicious traffic. The generated signatures are classified and fed back into the IDS, either locally or in a distributed fashion. The design and a proof-of-concept implementation are described, and the developed system is tested on both synthetic and real network data. The system is designed as a framework to test different methods and to easily evaluate the outcomes of varying configurations. The test results show that, with a properly defined attack detection algorithm, attack signatures can be generated with high accuracy and efficiency. The resulting system could be used to prevent the early damage caused by fast-spreading worms and other threats.
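One simple signature-generation strategy such a framework could plug in, sketched here as an assumption rather than the thesis's algorithm, is to take the payloads the anomaly detector flagged, extract their longest common byte substring, and emit it as a content rule. The sample payloads and the rule syntax are illustrative.

```python
# An illustrative sketch of one pluggable signature-generation strategy:
# find the longest byte substring shared by all flagged payloads and emit
# it as a content rule. Payloads and rule syntax are assumptions.
def longest_common_substring(payloads):
    reference = min(payloads, key=len)
    best = b""
    for i in range(len(reference)):
        # Only consider candidates longer than the best found so far.
        for j in range(len(reference), i + len(best), -1):
            candidate = reference[i:j]
            if all(candidate in p for p in payloads):
                best = candidate
                break
    return best

suspicious = [
    b"GET /index.php?id=1;DROP%20TABLE users HTTP/1.1",
    b"GET /view.php?id=77;DROP%20TABLE accounts HTTP/1.1",
    b"POST /api.php?id=9;DROP%20TABLE logs HTTP/1.1",
]
sig = longest_common_substring(suspicious)
print(sig)  # shared invariant, e.g. b';DROP%20TABLE '
print(f'alert tcp any any -> any 80 (content:"{sig.decode()}"; msg:"auto-sig";)')
```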
