101

Construction and formal security analysis of cryptographic schemes in the public key setting

Baek, Joonsang, 1973- January 2004 (has links)
Abstract not available
102

Knowledge based anomaly detection

Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task. Most of them attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. Firstly, a large amount of training data for all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad-hoc situations. In contrast, in the proposed method a number of models are gradually created, while in use, to cover the variety of patterns seen. Each model covers a specific region of the problem space, so novel or ad-hoc patterns can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules. In essence, we use Ripple Down Rules to partition the domain and add new partitions as new situations are identified. Within each supposedly homogeneous partition we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they remain reasonably robust with small amounts of data, a critical property whenever a new partition is added. We have developed a two-knowledge-base approach: one knowledge base partitions the domain, and within each partition statistics are accumulated on a number of different parameters; the resulting data are passed to a second knowledge base which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, with the advantage that the RDR approach allows new patterns of use to be rapidly added to the model. We also used the approach to extend previous work on prudent expert systems - expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
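The two-knowledge-base design in this abstract lends itself to a brief sketch: a rule-based partitioner (a minimal stand-in for a Ripple Down Rules knowledge base) routes each traffic sample to a partition, and each partition keeps simple running statistics that flag anomalies only once enough data has accumulated. The class names, rules, and thresholds below are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch, assuming the two-stage design above: a rule-based partitioner
# routes each traffic sample to a partition, and each partition keeps simple
# running statistics (Welford's method) used to flag anomalies.
# Names, rules, and thresholds are illustrative, not from the thesis.
import math
from dataclasses import dataclass

@dataclass
class Partition:
    """Running mean/variance for one region of the problem space."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x: float, z_threshold: float = 3.0) -> bool:
        # With very little data the partition abstains rather than alarms,
        # mirroring the small-sample robustness requirement in the abstract.
        if self.n < 10:
            return False
        std = math.sqrt(self.m2 / (self.n - 1)) or 1e-9
        return abs(x - self.mean) / std > z_threshold

class RuleBasedPartitioner:
    """A tiny stand-in for a Ripple Down Rules knowledge base:
    an ordered list of (predicate, partition-key) rules plus a default."""
    def __init__(self):
        self.rules = []        # list of (predicate, key)
        self.partitions = {}   # key -> Partition

    def add_rule(self, predicate, key):
        self.rules.append((predicate, key))
        self.partitions.setdefault(key, Partition())

    def route(self, sample):
        for predicate, key in self.rules:
            if predicate(sample):
                return self.partitions[key]
        return self.partitions.setdefault("default", Partition())

# Example: keep weekend and weekday traffic in separate partitions.
kb = RuleBasedPartitioner()
kb.add_rule(lambda s: s["weekday"] >= 5, "weekend")
kb.add_rule(lambda s: s["weekday"] < 5, "weekday")

for sample in ({"weekday": 2, "bytes": 1200}, {"weekday": 6, "bytes": 300}):
    part = kb.route(sample)
    if part.is_anomalous(sample["bytes"]):
        print("anomaly:", sample)
    part.update(sample["bytes"])
```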
104

A Methodology for Detecting and Classifying Rootkit Exploits

Levine, John G. (John Glenn) 18 March 2004 (has links)
A Methodology for Detecting and Classifying Rootkit Exploits. John G. Levine. 164 pages. Directed by Dr. Henry L. Owen. We propose a methodology to detect and classify rootkit exploits. The goal of this research is to provide system administrators, researchers, and security personnel with the information necessary to take the best possible recovery actions concerning systems that are compromised by rootkits. No such methodology is available at present to perform this function. This may also help to detect and fingerprint additional instances of rootkits and prevent further security incidents involving them. A formal framework was developed in order to classify a rootkit exploit as an existing rootkit, a modification to an existing rootkit, or an entirely new rootkit. A methodology was then described for applying this framework to rootkits under investigation. We then proposed some new methods to detect and characterize specific types of rootkit exploits. These methods consisted of identifying unique string signatures of binary executable files as well as examining the system call table within the system kernel. We established a Honeynet to aid in our research efforts and then applied our methodology to a previously unseen rootkit that was targeted against the Honeynet. By using our methodology we were able to uniquely characterize this rootkit and identify some unique signatures that could be used in its detection. We applied our methodology against nine additional rootkit exploits and were able to identify unique characteristics for each of these rootkits. These characteristics could also be used in the prevention and detection of these rootkits.
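The two detection methods named in this abstract, string signatures in binaries and inspection of the system call table, can be approximated in a short sketch. The signature values and paths are hypothetical, and reading /proc/kallsyms is used here only as a rough user-space stand-in for examining the kernel's system call table.

```python
# Illustrative sketch of the two detection ideas above:
# (1) scanning a binary for known byte-string signatures, and
# (2) comparing currently exported sys_* kernel symbols against a saved
#     baseline, a rough user-space stand-in for checking the syscall table.
# Signature values and paths are hypothetical, not taken from the thesis.
from pathlib import Path

KNOWN_SIGNATURES = {
    b"example-rootkit-banner": "example-rootkit",  # placeholder signature
}

def scan_binary(path: str) -> list[str]:
    """Return names of rootkits whose byte signatures appear in the file."""
    data = Path(path).read_bytes()
    return [name for sig, name in KNOWN_SIGNATURES.items() if sig in data]

def syscall_symbols(kallsyms: str = "/proc/kallsyms") -> dict[str, str]:
    """Map sys_* symbol names to addresses (real addresses require root on Linux)."""
    table = {}
    for line in Path(kallsyms).read_text().splitlines():
        addr, _kind, name = line.split()[:3]
        if name.startswith(("sys_", "__x64_sys_")):
            table[name] = addr
    return table

def diff_against_baseline(current: dict[str, str], baseline: dict[str, str]) -> list[str]:
    """Symbols whose addresses changed may indicate a hooked system call table."""
    return [name for name, addr in current.items()
            if name in baseline and baseline[name] != addr]
```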
105

Secure Geolocation for Wireless Indoor Networks

Lim, Yu-Xi 12 April 2006 (has links)
The objective of this research is to develop an accurate system for indoor location estimation using a secure architecture based on the IEEE 802.11 standard for infrastructure networks. Elements of this secure architecture include: a server-oriented platform for greater trust and manageability; multiple wireless network parameters for improved accuracy; and Support Vector Regression (SVR) for accurate, high-resolution estimates. While these elements have been investigated individually in earlier research, none of that work has combined them into a single security-oriented system. This research therefore investigates the feasibility of using these elements together.
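A minimal sketch of SVR-based location estimation, assuming scikit-learn: the features are received signal strengths (RSSI) from several access points and the targets are known (x, y) positions. The data, feature set, and hyperparameters below are illustrative, not the system described in the thesis.

```python
# Minimal SVR location-estimation sketch, assuming scikit-learn.
# RSSI readings and coordinates are invented for illustration.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

# Training data: rows of RSSI readings from three APs, measured at known points.
rssi = np.array([[-40, -70, -65],
                 [-55, -60, -72],
                 [-70, -50, -80],
                 [-62, -66, -58]], dtype=float)
positions = np.array([[1.0, 2.0],
                      [3.0, 2.5],
                      [5.0, 1.0],
                      [4.0, 4.0]])  # (x, y) in metres

# One SVR per output coordinate; an RBF kernel gives smooth, high-resolution estimates.
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(rssi, positions)

estimate = model.predict(np.array([[-50, -64, -70]], dtype=float))
print("estimated position:", estimate[0])
```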
106

Scalable and efficient distributed algorithms for defending against malicious Internet activity

Sung, Minho 31 July 2006 (has links)
The threat of malicious Internet activities such as Distributed Denial of Service (DDoS) attacks, spam email, and Internet worms/viruses has been increasing over the last several years. The impact and frequency of these malicious activities are expected to grow unless they are properly addressed. In this thesis, we propose to design and evaluate a set of practical and effective protection measures against potential malicious activities in current and future networks. Our research objective is twofold. First, we design methods to defend against DDoS attacks. Our research focuses on two important issues related to DDoS attack defense mechanisms. One issue is the method of tracing the sources of attacking packets, known as IP traceback. We propose a novel packet-logging-based (i.e., hash-based) traceback scheme using only a one-bit marking field in the IP header. It reduces processing and storage costs by an order of magnitude compared to existing hash-based schemes, and is therefore scalable to much higher link speeds (e.g., OC-768). Next, we propose an improved traceback scheme with lower storage overhead that uses more marking space in the IP header. The other issue in DDoS defense is investigating protocol-independent techniques for improving the throughput of legitimate traffic during DDoS attacks. We propose a novel technique that can effectively filter out the majority of DDoS traffic, thus improving the overall throughput of the legitimate traffic. Second, we investigate the problem of distributed network monitoring. We propose a set of novel distributed data streaming algorithms that allow scalable and efficient monitoring of aggregated traffic. Our algorithms target the specific network monitoring problem of finding common content in traffic traversing several nodes/links across the Internet. These algorithms find applications in network-wide intrusion detection, early warning for fast-propagating worms, and detection of hot objects and spam traffic.
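The hash-based (packet-logging) traceback idea can be sketched with a Bloom filter of packet digests kept at each router, which a victim can later query to reconstruct an attack path. This illustrates the general hash-based approach only, not the thesis's one-bit-marking refinement; the sizes, hash choices, and class names are arbitrary assumptions.

```python
# Rough sketch of hash-based packet logging for IP traceback: each router
# stores compact digests of forwarded packets in a Bloom filter, so a victim
# can later ask "did this packet pass through you?" without full packet logs.
# Sizes, hash choices, and names are illustrative, not from the thesis.
import hashlib

class PacketDigestLog:
    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _indexes(self, packet: bytes):
        # A real system digests only the invariant parts of the packet;
        # here the whole byte string stands in for those fields.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(1, "big") + packet).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def record(self, packet: bytes) -> None:
        for idx in self._indexes(packet):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def probably_forwarded(self, packet: bytes) -> bool:
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(packet))

router_log = PacketDigestLog()
router_log.record(b"attack-packet-bytes")
print(router_log.probably_forwarded(b"attack-packet-bytes"))  # True
print(router_log.probably_forwarded(b"unrelated-packet"))     # almost certainly False
```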
107

Countering kernel malware in virtual execution environments

Xuan, Chaoting 10 November 2009 (has links)
We present a rootkit prevention system named DARK, which tracks suspicious Linux loadable kernel modules (LKMs) at a granular level by using on-demand emulation, a technique that dynamically switches a running system between virtualized and emulated execution. Combining the strengths of emulation and virtualization, DARK is able to thoroughly capture the activities of the target module in a guest operating system (OS), while maintaining reasonable run-time performance. To address integrity-violation and confidentiality-violation rootkits, we create a group of security policies that can detect all available Linux rootkits. We show that normal guest OS performance is unaffected; performance decreases only when rootkits attempt to run, and most rootkits are detected at installation. Next, we present a sandbox-based malware analysis system called Rkprofiler that dynamically monitors and analyzes the behavior of Windows kernel malware. Kernel malware samples run inside a virtual machine (VM) that is supported and managed by a PC emulator. Rkprofiler provides several capabilities that other malware analysis systems do not have. First, it can detect the execution of malicious kernel code regardless of how the monitored kernel malware is loaded into the kernel and whether or not it is packed. Second, it captures all function calls made by the kernel malware and constructs call graphs from the trace files. Third, a technique called aggressive memory tagging (AMT) is proposed to track the dynamic data objects that the kernel malware visits. Last, Rkprofiler records and reports the hardware access events of kernel malware (e.g., MSR register reads and writes). Our evaluation results show that Rkprofiler can quickly expose the security-sensitive activities of kernel malware and thus reduces the effort required for tedious manual malware analysis.
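One of the Rkprofiler analysis steps mentioned above, building call graphs from function-call traces, is simple enough to sketch. The trace format and function names below are invented for illustration and are not Rkprofiler's actual output.

```python
# Small sketch of call-graph construction from a trace of function-call events.
# The "caller -> callee" trace format is an assumption made for illustration.
from collections import defaultdict

def build_call_graph(trace_lines):
    """trace_lines: iterable of 'caller -> callee' strings; returns an adjacency map."""
    graph = defaultdict(set)
    for line in trace_lines:
        caller, callee = (s.strip() for s in line.split("->"))
        graph[caller].add(callee)
    return graph

trace = [
    "rootkit_init -> register_hook",
    "register_hook -> patch_ssdt",
    "rootkit_init -> hide_process",
]
for caller, callees in build_call_graph(trace).items():
    print(caller, "->", sorted(callees))
```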
108

Forensic framework for honeypot analysis

Fairbanks, Kevin D. 05 April 2010 (has links)
The objective of this research is to evaluate and develop new forensic techniques for use in honeynet environments, in an effort to address areas where anti-forensic techniques defeat current forensic methods. The fields of Computer and Network Security have expanded over time to encompass many complex ideas and algorithms. A student of these fields can easily fall into the pattern of treating preventive measures as the only major thrust of the topics, yet it is equally important to be able to determine the cause of a security breach. Thus, the field of Computer Forensics has grown. In this field, there exist toolkits and methods that are used to forensically analyze production and honeypot systems. To counter the toolkits, anti-forensic techniques have been developed. Honeypots and production systems have several intrinsic differences, and these differences can be exploited to produce honeypot data sources that are not currently available from production systems. This research seeks to examine possible honeypot data sources and cultivate novel methods to combat anti-forensic techniques. In this document, three parts of a forensic framework are presented which were developed specifically for honeypot and honeynet environments. The first, TimeKeeper, is an inode preservation methodology which utilizes the Ext3 journal. This is followed by an examination of dentry logging, which is primarily used to map inode numbers to filenames in Ext3. The final component presented is the initial research behind a toolkit for the examination of the recently deployed Ext4 file system. Each respective chapter includes the necessary background information and an examination of related work, as well as the architecture, design, conceptual prototyping, and results from testing each major framework component.
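The inode-preservation and dentry-logging ideas behind TimeKeeper can be illustrated conceptually: as metadata records for an inode are recovered (for example, from successive Ext3 journal commits), every version is kept rather than only the latest, together with a log of the filenames seen for that inode. Journal parsing itself is abstracted away here, and the record format is an assumption made for illustration.

```python
# Conceptual sketch only: preserve every recovered metadata snapshot per inode
# and log the filenames each inode was seen under (dentry-style logging).
# The journal-parsing step is abstracted away; record fields are invented.
from collections import defaultdict

class InodeTimeline:
    def __init__(self):
        self.metadata_history = defaultdict(list)  # inode -> [metadata dicts]
        self.name_log = defaultdict(set)           # inode -> {filenames}

    def record_metadata(self, inode: int, metadata: dict) -> None:
        """Preserve a snapshot (e.g., mtime/size) recovered from the journal."""
        self.metadata_history[inode].append(dict(metadata))

    def record_dentry(self, inode: int, filename: str) -> None:
        """Remember which filenames pointed at this inode."""
        self.name_log[inode].add(filename)

timeline = InodeTimeline()
timeline.record_metadata(4211, {"mtime": 1262304000, "size": 4096})
timeline.record_metadata(4211, {"mtime": 1262390400, "size": 0})  # later tampering
timeline.record_dentry(4211, "/var/log/auth.log")
print(timeline.metadata_history[4211])
```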
109

Cyber security in power systems

Sridharan, Venkatraman 06 April 2012 (has links)
Many automation and power control systems are integrated into the 'Smart Grid' concept for efficiently managing and delivering electric power. This integrated approach creates several challenges that need to be taken into consideration, such as cyber security issues, information sharing, and regulatory compliance. Several issues need to be addressed in the area of cyber security: there are currently no metrics for evaluating cyber security, methodologies to detect cyber attacks are in their infancy, there is a perceived lack of security built into smart grid systems, and there is no mechanism for information sharing on cyber security incidents. In this thesis, we discuss the vulnerabilities in power system devices and present ideas and a proposal towards multiple-threat system intrusion detection. We propose to test the multiple-threat methods for cyber security monitoring on a multi-laboratory test bed, and to aid the development of a SCADA test bed to be constructed on the Georgia Tech campus.
110

Effective and scalable botnet detection in network traffic

Zhang, Junjie 03 July 2012 (has links)
Botnets represent one of the most serious threats against Internet security, since they serve as platforms that are responsible for the vast majority of large-scale and coordinated cyber attacks, such as distributed denial of service, spamming, and information theft. Detecting botnets is therefore of great importance, and a number of network-based botnet detection systems have been proposed. However, as botnets perform attacks in an increasingly stealthy way and the volume of network traffic is rapidly growing, existing botnet detection systems are faced with significant challenges in terms of effectiveness and scalability. The objective of this dissertation is to build novel network-based solutions that can boost both the effectiveness of existing botnet detection systems, by detecting botnets whose attacks are very hard to observe in network traffic, and their scalability, by adaptively sampling network packets that are likely to be generated by botnets. Specifically, this dissertation describes three unique contributions. First, we built a new system to detect drive-by download attacks, which represent one of the most significant and popular methods for botnet infection. The goal of our system is to boost the effectiveness of existing drive-by download detection systems by detecting a large number of drive-by download attacks that are missed by these existing detection efforts. Second, we built a new system to detect botnets with peer-to-peer (P2P) command and control (C&C) structures (i.e., P2P botnets), where P2P C&Cs currently represent the most robust C&C structures against disruption efforts. Our system aims to boost the effectiveness of existing P2P botnet detection by detecting P2P botnets in two challenging scenarios: i) botnets perform stealthy attacks that are extremely hard to observe in the network traffic; ii) bot-infected hosts are also running legitimate P2P applications (e.g., BitTorrent and Skype). Finally, we built a novel traffic analysis framework to boost the scalability of existing botnet detection systems. Our framework can effectively and efficiently identify the small percentage of hosts that are likely to be bots, and then forward network traffic associated with these hosts to existing detection systems for fine-grained analysis. Our traffic analysis framework includes a novel botnet-aware and adaptive packet sampling algorithm, and a scalable flow-correlation technique.
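The botnet-aware adaptive sampling component described above can be sketched as a per-host sampling rate that rises with a suspicion score, so that scarce fine-grained analysis capacity is focused on likely bots. The scoring, rate bounds, and class names are invented for illustration.

```python
# Rough sketch of botnet-aware adaptive packet sampling: hosts with higher
# suspicion scores get a higher per-packet sampling probability. The scoring
# and rate bounds are illustrative assumptions, not the dissertation's algorithm.
import random

class AdaptiveSampler:
    def __init__(self, base_rate: float = 0.01, max_rate: float = 0.5):
        self.base_rate = base_rate
        self.max_rate = max_rate
        self.suspicion = {}  # host -> score in [0, 1]

    def update_suspicion(self, host: str, score: float) -> None:
        self.suspicion[host] = min(max(score, 0.0), 1.0)

    def rate_for(self, host: str) -> float:
        s = self.suspicion.get(host, 0.0)
        return self.base_rate + s * (self.max_rate - self.base_rate)

    def sample(self, host: str) -> bool:
        """Decide whether to forward this host's packet for fine-grained analysis."""
        return random.random() < self.rate_for(host)

sampler = AdaptiveSampler()
sampler.update_suspicion("10.0.0.5", 0.9)  # flagged by a coarse botnet heuristic
print(sampler.rate_for("10.0.0.5"), sampler.rate_for("10.0.0.7"))
```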
