81
Server's anonymity attack and protection of P2P-VoD systems / Server's anonymity attack and protection of peer-to-peer video-on-demand systems. January 2010
Lu, Mengwei. Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. Includes bibliographical references (p. 52-54). Abstracts in English and Chinese.
Table of contents:
Chapter 1  Introduction (p.1)
Chapter 2  Introduction of P2P-VoD Systems (p.5)
  2.1  Major Components of the System (p.5)
  2.2  Peer Join and Content Discovery (p.6)
  2.3  Segment Sizes and Replication Strategy (p.7)
  2.4  Piece Selection (p.8)
  2.5  Transmission Strategy (p.9)
Chapter 3  Detection Methodology (p.10)
  3.1  Capturing Technique (p.11)
  3.2  Analytical Framework (p.15)
  3.3  Results of our Detection Methodology (p.24)
Chapter 4  Protective Architecture (p.25)
  4.1  Architecture Overview (p.25)
  4.2  Content Servers (p.27)
  4.3  Shield Nodes (p.28)
  4.4  Tracker (p.29)
  4.5  A Randomized Assignment Algorithm (p.30)
  4.6  Seeding Algorithm (p.31)
  4.7  Connection Management Algorithm (p.33)
  4.8  Advantages of the Shield Nodes Architecture (p.33)
  4.9  Markov Model for Shield Nodes Architecture Against Single Track Anonymity Attack (p.35)
Chapter 5  Experiment Result (p.40)
  5.1  Shield Node Architecture Against Anonymity Attack (p.40)
    5.1.1  Performance Analysis for Single Track Anonymity Attack (p.41)
    5.1.2  Experiment Result on PlanetLab for Single Track Anonymity Attack (p.42)
    5.1.3  Parallel Anonymity Attack (p.44)
  5.2  Shield Nodes Architecture Against DoS Attack (p.45)
Chapter 6  Related Work (p.48)
Chapter 7  Future Work (p.49)
Chapter 8  Conclusion (p.50)
82
Novel Cryptographic Primitives and Protocols for Censorship Resistance. Dyer, Kevin Patrick. 24 July 2015
Internet users rely on the availability of websites and digital services to engage in political discussions, report on newsworthy events in real-time, watch videos, etc. However, sometimes those who control networks, such as governments, censor certain websites, block specific applications or throttle encrypted traffic. Understandably, when users are faced with egregious censorship, where certain websites or applications are banned, they seek reliable and efficient means to circumvent such blocks. This tension is evident in countries such as Iran and China, where the Internet censorship infrastructure is pervasive and continues to increase in scope and effectiveness.
An arms race is unfolding with two competing threads of research: (1) network operators' ability to classify traffic and subsequently enforce policies, and (2) network users' ability to control how network operators classify their traffic. Our goal is to understand and advance the state of the art on both sides. First, we present novel traffic analysis attacks against encrypted communications. We show that state-of-the-art cryptographic protocols leak private information about users' communications, such as the websites they visit, the applications they use, or the languages used for communications. Then, we investigate means to mitigate these privacy-compromising attacks. To this end, we present a toolkit of cryptographic primitives and protocols that simultaneously (1) achieve traditional notions of cryptographic security, and (2) enable users to conceal information about their communications, such as the protocols used or websites visited. We demonstrate the utility of these primitives and protocols in a variety of real-world settings. As a primary use case, we show that these new primitives and protocols protect network communications and bypass the policies of state-of-the-art hardware-based and software-based network monitoring devices.
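As a rough illustration of the traffic-analysis side of this arms race, the sketch below (synthetic data, not taken from the thesis) classifies an encrypted trace by comparing its packet-length histogram against per-site profiles; packet lengths are exactly the kind of side channel that leaks the private information described above.

```python
# A minimal website-fingerprinting sketch: even with encrypted payloads, the
# distribution of packet lengths can identify which site was visited.
from collections import Counter

def length_histogram(trace, bucket=50):
    """Bucket packet lengths into a normalized histogram (the feature vector)."""
    counts = Counter(length // bucket for length in trace)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical length distributions."""
    return sum(min(h1.get(b, 0.0), h2.get(b, 0.0)) for b in set(h1) | set(h2))

def classify(trace, profiles):
    """Label an encrypted trace with the closest known site profile."""
    h = length_histogram(trace)
    return max(profiles, key=lambda site: similarity(h, profiles[site]))

# Hypothetical training traces: packet lengths observed when loading each site.
profiles = {
    "news-site": length_histogram([1500, 1500, 1500, 620, 80, 1500]),
    "chat-app": length_histogram([120, 90, 140, 110, 95, 130]),
}
print(classify([125, 100, 1480, 115, 90], profiles))  # -> "chat-app"
```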
83
Ant Tree Miner Amyntas for intrusion detection. Botes, Frans Hendrik. January 2018
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2018. / With the constant evolution of information systems, companies have to acclimatise to the vast increase of data flowing through their networks. Business processes rely heavily on information technology and operate within a framework of little to no space for interruptions. Cyber attacks aimed at interrupting business operations, false intrusion detections and leaked information burden companies with large monetary and reputational costs. Intrusion detection systems analyse network traffic to identify suspicious patterns that indicate an attempt to compromise the system. Classifiers (algorithms) are used to classify the data into different categories, e.g. malicious or normal network traffic. Recent surveys within intrusion detection highlight the need for improved detection techniques and warrant further experimentation. This experimental research project focuses on implementing swarm intelligence techniques within the intrusion detection domain. The Ant Tree Miner algorithm induces decision trees by using ant colony optimisation techniques. The Ant Tree Miner offers high accuracy with efficient results; however, limited research has been performed on this classifier in other domains such as intrusion detection. The research provides the intrusion detection domain with a new algorithm that improves upon the results of decision trees and ant colony optimisation techniques when applied to the domain. The research has led to valuable insights into the Ant Tree Miner classifier within a previously unexplored domain and created an intrusion detection benchmark for future researchers.
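As a heavily simplified illustration of the ant colony optimisation idea behind the Ant Tree Miner (not the published algorithm itself), the sketch below lets ants pick a split attribute with probability proportional to pheromone and reinforces pheromone on attributes whose splits separate the classes well; `split_quality` is a crude stand-in for the information-theoretic heuristics such classifiers actually use.

```python
# ACO attribute selection, sketched: ants sample split attributes guided by
# pheromone; good splits deposit more pheromone, steering later ants.
import random

def split_quality(rows, attr):
    """Fraction of rows matching the majority class of their attribute value
    (a crude stand-in for information gain)."""
    groups = {}
    for row in rows:
        groups.setdefault(row[attr], []).append(row["class"])
    correct = sum(max(labels.count(c) for c in set(labels))
                  for labels in groups.values())
    return correct / len(rows)

def aco_pick_split(rows, attrs, ants=30, evaporation=0.1):
    pheromone = {a: 1.0 for a in attrs}
    for _ in range(ants):
        attr = random.choices(attrs, weights=[pheromone[a] for a in attrs])[0]
        q = split_quality(rows, attr)
        pheromone = {a: (1 - evaporation) * p for a, p in pheromone.items()}
        pheromone[attr] += q  # reinforce choices that separated classes well
    return max(pheromone, key=pheromone.get)

# Hypothetical toy traffic records: "duration" separates the classes perfectly.
rows = [
    {"proto": "tcp", "duration": "long", "class": "attack"},
    {"proto": "udp", "duration": "long", "class": "attack"},
    {"proto": "tcp", "duration": "short", "class": "normal"},
    {"proto": "udp", "duration": "short", "class": "normal"},
]
print(aco_pick_split(rows, ["proto", "duration"]))  # usually "duration"
```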
84
Construction and formal security analysis of cryptographic schemes in the public key setting. Baek, Joonsang, 1973-. January 2004
Abstract not available
85
Knowledge-based anomaly detection. Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2007
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task, most of which attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. Firstly, a large amount of training data covering all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad-hoc situations. In contrast, in the proposed method, a number of models are gradually created to cover a variety of seen patterns while in use. Each model covers a specific region in the problem space, so any novel or ad-hoc patterns can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules. In essence, we use Ripple Down Rules to partition a domain and add new partitions as new situations are identified. Within each supposedly homogeneous partition we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they are reasonably robust with small amounts of data, a critical situation that occurs whenever a new partition is added. We developed a two-knowledge-base approach: one knowledge base partitions the domain, and within each partition statistics are accumulated on a number of different parameters. The resultant data are passed to a second knowledge base, which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, but with the advantage that the RDR approach allows new patterns of use to be rapidly added to the model. We also used the approach to extend previous work on prudent expert systems: expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
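A minimal sketch of the two-stage idea described above, with all details assumed rather than taken from the thesis: hand-written rules stand in for the Ripple Down Rules partitioner, and each partition keeps simple statistics that stay conservative while it has little data.

```python
# Partitioned anomaly detection: rules route traffic into homogeneous
# contexts; per-partition statistics flag values far outside what was seen.
import math

class Partition:
    def __init__(self):
        self.values = []

    def add(self, x):
        self.values.append(x)

    def is_anomalous(self, x, k=3.0, min_samples=10):
        # Stay conservative while a new partition has little data (the
        # robustness-with-few-samples concern raised above).
        if len(self.values) < min_samples:
            return False
        mean = sum(self.values) / len(self.values)
        var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
        return abs(x - mean) > k * math.sqrt(var) if var > 0 else x != mean

def partition_key(record):
    """Stand-in for Ripple Down Rules: fixed rules pick a context. In RDR
    proper, exception rules are added as new situations are identified."""
    if record["is_holiday"]:
        return "holiday"
    return "weekday" if record["weekday"] < 5 else "weekend"

partitions = {}
def observe(record):
    part = partitions.setdefault(partition_key(record), Partition())
    alarm = part.is_anomalous(record["bytes"])
    part.add(record["bytes"])
    return alarm

for day in range(30):  # quiet baseline traffic
    observe({"weekday": day % 7, "is_holiday": False, "bytes": 1000 + day})
print(observe({"weekday": 2, "is_holiday": False, "bytes": 90000}))  # True
```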
87
A Methodology for Detecting and Classifying Rootkit Exploits. Levine, John G. (John Glenn). 18 March 2004
John G. Levine. 164 pages. Directed by Dr. Henry L. Owen.
We propose a methodology to detect and classify rootkit exploits. The goal of this research is to provide system administrators, researchers, and security personnel with the information necessary to take the best possible recovery actions for systems that are compromised by rootkits. No such methodology is available at present to perform this function. It may also help to detect and fingerprint additional instances and prevent further security incidents involving rootkits. A formal framework was developed in order to classify a rootkit exploit as an existing rootkit, a modification to an existing rootkit, or an entirely new rootkit. A methodology was then described for applying this framework to rootkits under investigation. We then proposed some new methods to detect and characterize specific types of rootkit exploits. These methods consisted of identifying unique string signatures of binary executable files as well as examining the system call table within the system kernel. We established a Honeynet to aid our research efforts and then applied our methodology to a previously unseen rootkit that was targeted against the Honeynet. Using our methodology, we were able to uniquely characterize this rootkit and identify unique signatures that could be used in its detection. We applied our methodology against nine additional rootkit exploits and were able to identify unique characteristics for each of these rootkits. These characteristics could also be used in the prevention and detection of these rootkits.
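One of the detection methods described above, string-signature matching against binary executables, can be sketched in a few lines; the signature strings and rootkit names below are invented for illustration.

```python
# Scan a binary for string signatures unique to known rootkits. The
# signature database here is hypothetical.
KNOWN_SIGNATURES = {
    b"hide_proc_v2": "hypothetical-rootkit-A",
    b"/dev/.hidden": "hypothetical-rootkit-B",
}

def scan_binary(path):
    """Return the names of rootkits whose signatures appear in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for sig, name in KNOWN_SIGNATURES.items() if sig in data]

# Usage: alert if scan_binary("/bin/ls") returns a non-empty list.
```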
88
Secure Geolocation for Wireless Indoor Networks. Lim, Yu-Xi. 12 April 2006
The objective of this research is to develop an accurate system for indoor location estimation using a secure architecture based on the IEEE 802.11 standard for infrastructure networks. Elements of this secure architecture include: a server-oriented platform for greater trust and manageability; multiple wireless network parameters for improved accuracy; and Support Vector Regression (SVR) for accurate, high-resolution estimates. While these elements have been investigated individually in earlier research, none has combined them into a single security-oriented system. Thus this research investigates the feasibility of using these elements together.
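A minimal sketch of the SVR element, assuming scikit-learn and synthetic calibration data: one regressor per coordinate maps received-signal-strength readings from several access points to a position. The server-oriented architecture and security aspects are not modelled here.

```python
# SVR-based indoor positioning, sketched: learn RSSI -> coordinate mappings
# from calibration points, then predict the position of a new reading.
import numpy as np
from sklearn.svm import SVR

# Rows: (rssi_ap1, rssi_ap2, rssi_ap3) measured at known calibration points.
X = np.array([[-40, -70, -80], [-55, -55, -75], [-70, -45, -60], [-80, -60, -45]])
xs = np.array([0.0, 2.5, 5.0, 7.5])  # known x-coordinates (metres)
ys = np.array([0.0, 1.0, 2.0, 3.0])  # known y-coordinates (metres)

model_x = SVR(kernel="rbf", C=100).fit(X, xs)  # one regressor per coordinate
model_y = SVR(kernel="rbf", C=100).fit(X, ys)

reading = np.array([[-60, -50, -70]])  # new signal-strength observation
print(model_x.predict(reading)[0], model_y.predict(reading)[0])
```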
89
Scalable and efficient distributed algorithms for defending against malicious Internet activity. Sung, Minho. 31 July 2006
The threat of malicious Internet activities such as Distributed Denial of Service (DDoS) attacks, spam emails and Internet worms/viruses has been increasing in the last several years. The impact and frequency of these malicious activities are expected to grow unless they are properly addressed. In this thesis, we propose to design and evaluate a set of practical and effective protection measures against potential malicious activities in current and future networks. Our research objective is twofold.

First, we design methods to defend against DDoS attacks. Our research focuses on two important issues related to DDoS attack defense mechanisms. One issue is how to trace the sources of attacking packets, known as IP traceback. We propose a novel packet-logging-based (i.e., hash-based) traceback scheme that uses only a one-bit marking field in the IP header. It reduces processing and storage cost by an order of magnitude compared with existing hash-based schemes, and is therefore scalable to much higher link speeds (e.g., OC-768). Next, we propose an improved traceback scheme with lower storage overhead that uses more marking space in the IP header. Another issue in DDoS defense is to investigate protocol-independent techniques for improving the throughput of legitimate traffic during DDoS attacks. We propose a novel technique that can effectively filter out the majority of DDoS traffic, thus improving the overall throughput of the legitimate traffic.
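For context, the hash-based packet-logging idea that this traceback work builds on can be sketched with a Bloom filter: each router records digests of forwarded packets, and a victim can later ask routers whether they carried a given attack packet and so reconstruct its path. The parameters below are illustrative, and the one-bit-marking refinement proposed in the thesis is not shown.

```python
# Bloom-filter packet logging, the core of hash-based IP traceback.
import hashlib

class PacketLog:
    def __init__(self, bits=2 ** 20, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.filter = bytearray(bits // 8)

    def _positions(self, packet):
        for i in range(self.hashes):
            digest = hashlib.sha256(bytes([i]) + packet).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def record(self, packet):  # done for every forwarded packet
        for pos in self._positions(packet):
            self.filter[pos // 8] |= 1 << (pos % 8)

    def maybe_saw(self, packet):  # queried by the victim during traceback
        return all(self.filter[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(packet))

router_log = PacketLog()
router_log.record(b"attack-packet-invariant-fields")
print(router_log.maybe_saw(b"attack-packet-invariant-fields"))  # True
print(router_log.maybe_saw(b"some-other-packet"))  # False, w.h.p.
```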
Second, we investigate the problem of distributed network monitoring. We propose a set of novel distributed data streaming algorithms that allow scalable and efficient monitoring of aggregated traffic. Our algorithms target the specific network monitoring problem of finding common content in traffic traversing several nodes/links across the Internet. These algorithms find applications in network-wide intrusion detection, early warning for fast-propagating worms, and detection of hot objects and spam traffic.
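A minimal sketch of the underlying monitoring problem, using exact fingerprint sets where the thesis uses compact streaming sketches: each monitor fingerprints the payloads it sees, and a coordinator flags content reported by many monitors, the signal of a propagating worm or spam campaign.

```python
# Finding common content across monitoring nodes via payload fingerprints.
import hashlib
from collections import Counter

def fingerprint(payload: bytes) -> int:
    return int.from_bytes(hashlib.sha256(payload).digest()[:8], "big")

def node_summary(payloads):
    """What one monitoring node reports to the coordinator."""
    return {fingerprint(p) for p in payloads}

def common_content(summaries, threshold):
    """Fingerprints observed at >= threshold monitoring nodes."""
    counts = Counter(fp for s in summaries for fp in s)
    return {fp for fp, c in counts.items() if c >= threshold}

worm = b"GET /exploit.bin"
summaries = [
    node_summary([worm, b"normal flow 1"]),
    node_summary([worm, b"normal flow 2"]),
    node_summary([b"normal flow 3"]),
]
print(fingerprint(worm) in common_content(summaries, threshold=2))  # True
```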
90
Countering kernel malware in virtual execution environments. Xuan, Chaoting. 10 November 2009
We present a rootkit prevention system named DARK, which tracks suspicious Linux loadable kernel modules (LKMs) at a granular level by using on-demand emulation, a technique that dynamically switches a running system between virtualized and emulated execution. Combining the strengths of emulation and virtualization, DARK is able to thoroughly capture the activities of the target module in a guest operating system (OS) while maintaining reasonable run-time performance. To address integrity-violation and confidentiality-violation rootkits, we create a group of security policies that can detect all available Linux rootkits. It is shown that normal guest OS performance is unaffected; performance decreases only when rootkits attempt to run, and most rootkits are detected at installation.
Next, we present a sandbox-based malware analysis system called Rkprofiler that dynamically monitors and analyzes the behavior of Windows kernel malware. Kernel malware samples run inside a virtual machine (VM) that is supported and managed by a PC emulator. Rkprofiler provides several capabilities that other malware analysis systems do not have. First, it can detect the execution of malicious kernel code regardless of how the monitored kernel malware is loaded into the kernel and whether or not it is packed. Second, it captures all function calls made by the kernel malware and constructs call graphs from the trace files. Third, a technique called aggressive memory tagging (AMT) is proposed to track the dynamic data objects that the kernel malware visits. Last, Rkprofiler records and reports the hardware access events of kernel malware (e.g., MSR register reads and writes). Our evaluation results show that Rkprofiler can quickly expose the security-sensitive activities of kernel malware and thus reduces the effort required for tedious manual malware analysis.
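As an illustration of the call-graph capability described above, the sketch below builds a call graph from a function-call trace; the "caller -> callee" trace format and the function names are invented, since Rkprofiler derives calls from emulated execution rather than text logs.

```python
# Build a call graph (caller -> set of callees) from a trace file's lines.
from collections import defaultdict

def build_call_graph(trace_lines):
    graph = defaultdict(set)
    for line in trace_lines:
        caller, callee = (s.strip() for s in line.split("->"))
        graph[caller].add(callee)
    return graph

trace = [
    "rootkit_init -> hook_sys_call_table",
    "rootkit_init -> hide_module",
    "hide_module -> list_del",
]
for caller, callees in build_call_graph(trace).items():
    print(caller, "->", sorted(callees))
```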