351

Countering kernel malware in virtual execution environments

Xuan, Chaoting 10 November 2009 (has links)
We present a rootkit prevention system, named DARK, that tracks suspicious Linux loadable kernel modules (LKMs) at a granular level by using on-demand emulation, a technique that dynamically switches a running system between virtualized and emulated execution. Combining the strengths of emulation and virtualization, DARK is able to thoroughly capture the activities of the target module in a guest operating system (OS) while maintaining reasonable run-time performance. To address both integrity-violation and confidentiality-violation rootkits, we create a group of security policies that can detect all available Linux rootkits. Normal guest OS performance is shown to be unaffected: performance decreases only when rootkits attempt to run, and most rootkits are detected at installation. Next, we present a sandbox-based malware analysis system called Rkprofiler that dynamically monitors and analyzes the behavior of Windows kernel malware. Kernel malware samples run inside a virtual machine (VM) that is supported and managed by a PC emulator. Rkprofiler provides several capabilities that other malware analysis systems do not have. First, it can detect the execution of malicious kernel code regardless of how the monitored kernel malware is loaded into the kernel and whether or not it is packed. Second, it captures all function calls made by the kernel malware and constructs call graphs from the trace files. Third, a technique called aggressive memory tagging (AMT) is proposed to track the dynamic data objects that the kernel malware visits. Last, Rkprofiler records and reports the hardware access events of kernel malware (e.g., MSR register reads and writes). Our evaluation results show that Rkprofiler can quickly expose the security-sensitive activities of kernel malware and thus reduce the effort spent on tedious manual malware analysis.
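
As a rough illustration of the call-graph step described in this abstract, the sketch below (plain C, not Rkprofiler's actual code; the trace format of one "caller callee" address pair per line and the fixed-size table are assumptions made for the example) reads a function-call trace file and emits the resulting call graph in Graphviz DOT form:

    /*
     * Minimal sketch: build a call graph from a trace file whose lines each
     * hold one "caller callee" pair of hexadecimal addresses, then print the
     * graph in Graphviz DOT form. The format and limits are illustrative only.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_EDGES 8192

    struct edge { unsigned long caller, callee; };
    static struct edge edges[MAX_EDGES];
    static size_t n_edges;

    /* Record a caller->callee edge once, ignoring repeated calls. */
    static void add_edge(unsigned long caller, unsigned long callee)
    {
        for (size_t i = 0; i < n_edges; i++)
            if (edges[i].caller == caller && edges[i].callee == callee)
                return;
        if (n_edges < MAX_EDGES)
            edges[n_edges++] = (struct edge){ caller, callee };
    }

    int main(int argc, char **argv)
    {
        FILE *trace = fopen(argc > 1 ? argv[1] : "trace.log", "r");
        if (!trace) { perror("fopen"); return 1; }

        unsigned long caller, callee;
        while (fscanf(trace, "%lx %lx", &caller, &callee) == 2)
            add_edge(caller, callee);
        fclose(trace);

        puts("digraph callgraph {");
        for (size_t i = 0; i < n_edges; i++)
            printf("  \"%#lx\" -> \"%#lx\";\n", edges[i].caller, edges[i].callee);
        puts("}");
        return 0;
    }

A real trace would also carry symbol names and timestamps; the point here is only that a call graph reduces to an edge list deduplicated over the observed caller/callee pairs.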
352

Improving operating systems security: two case studies

Wei, Jinpeng 14 August 2009 (has links)
Malicious attacks on computer systems attempt to obtain and maintain illicit control over the victim system. To obtain unauthorized access, they often exploit vulnerabilities in the victim system, and to maintain illicit control, they apply various hiding techniques to remain stealthy. In this dissertation, we discuss and present solutions for two classes of security problems: TOCTTOU (time-of-check-to-time-of-use) and K-Queue. TOCTTOU is a vulnerability that can be exploited to obtain unauthorized root access, and K-Queue is a hiding technique that can be used to maintain stealthy control of the victim kernel. The first security problem is TOCTTOU, a race condition in Unix-style file systems in which an attacker exploits the small timing gap between a file system call that checks a condition and a subsequent kernel call that uses (depends on) that condition. Our contributions on TOCTTOU include: (1) a model that enumerates the complete set of potential TOCTTOU vulnerabilities; (2) a set of tools that detect TOCTTOU vulnerabilities in Linux applications such as vi, gedit, and rpm; (3) a theoretical as well as an experimental evaluation of security risks, showing that TOCTTOU vulnerabilities can no longer be considered "low risk" given the wide-scale deployment of multiprocessors; (4) an event-driven protection mechanism and its implementation that defend Linux applications against TOCTTOU attacks at low performance overhead. The second security problem addressed in this dissertation is the kernel queue, or K-Queue, which an attacker can use to achieve continual malicious function execution without persistently changing either kernel code or data, thereby preventing state-of-the-art kernel integrity monitors such as CFI and SBCFI from detecting the attack. Based on our successful defense against a concrete instance of K-Queue-driven attacks that use the soft timer mechanism, we design and implement a solution to the general class of K-Queue-driven attacks, including (1) a unified static analysis framework and toolset that can generate specifications of legitimate K-Queue requests and the checker code in an automated way; (2) a runtime reference monitor that validates K-Queue invariants and guards such invariants against tampering; and (3) a comprehensive experimental evaluation of our static analysis framework and K-Queue checkers.
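
To make the check/use race concrete, the following C sketch shows the classic TOCTTOU-vulnerable pattern (an illustration, not code from the dissertation; the filename and helper function are hypothetical):

    /*
     * Illustrative TOCTTOU-vulnerable pattern. The <check, use> pair is
     * access() followed by open(): between the two calls an attacker can
     * replace "report.txt" with a symlink to a protected file, so a
     * privileged process opens a file the real user could not access.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int open_if_allowed(const char *path)
    {
        if (access(path, R_OK | W_OK) != 0) {  /* check: real-uid permission */
            perror("access");
            return -1;
        }
        /* timing gap: the object named by 'path' can be swapped here */
        return open(path, O_RDWR);             /* use: opens whatever is there now */
    }

    int main(void)
    {
        int fd = open_if_allowed("report.txt");
        if (fd >= 0)
            close(fd);
        return 0;
    }

A common fix is to operate on the opened file descriptor instead of re-resolving the path, e.g. open() first and then verify ownership with fstat(), which removes the exploitable gap between check and use.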
353

Forensic framework for honeypot analysis

Fairbanks, Kevin D. 05 April 2010 (has links)
The objective of this research is to evaluate and develop new forensic techniques for use in honeynet environments, in an effort to address areas where anti-forensic techniques defeat current forensic methods. The fields of Computer and Network Security have expanded with time to become inclusive of many complex ideas and algorithms. A student of these fields can easily fall into the thought pattern of treating preventive measures as the only major thrust of the topics. It is equally important to be able to determine the cause of a security breach; thus, the field of Computer Forensics has grown. In this field, there exist toolkits and methods that are used to forensically analyze production and honeypot systems. To counter these toolkits, anti-forensic techniques have been developed. Honeypots and production systems have several intrinsic differences. These differences can be exploited to produce honeypot data sources that are not currently available from production systems. This research seeks to examine possible honeypot data sources and cultivate novel methods to combat anti-forensic techniques. In this document, three parts of a forensic framework are presented which were developed specifically for honeypot and honeynet environments. The first, TimeKeeper, is an inode preservation methodology which utilizes the Ext3 journal. This is followed by an examination of dentry logging, which is primarily used to map inode numbers to filenames in Ext3. The final component presented is the initial research behind a toolkit for the examination of the recently deployed Ext4 file system. Each respective chapter includes the necessary background information and an examination of related work, as well as the architecture, design, conceptual prototyping, and results from testing each major framework component.
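
The forensic value of dentry logging can be seen in a minimal sketch like the one below (an illustration of the general idea only, not the framework's code; the record layout and sample events are invented): directory-entry events are appended as (inode, name) pairs so that a later query can map a recovered inode number back to every filename it was known by.

    #include <stdio.h>

    #define MAX_RECORDS 256
    #define NAME_LEN    64

    struct dentry_record { unsigned long inode; char name[NAME_LEN]; };
    static struct dentry_record rec[MAX_RECORDS];
    static size_t n_rec;

    /* Append one (inode, name) pair as a directory entry is created/renamed. */
    static void log_dentry(unsigned long inode, const char *name)
    {
        if (n_rec < MAX_RECORDS) {
            rec[n_rec].inode = inode;
            snprintf(rec[n_rec].name, NAME_LEN, "%s", name);
            n_rec++;
        }
    }

    /* Print every name the given inode has been associated with. */
    static void names_for_inode(unsigned long inode)
    {
        for (size_t i = 0; i < n_rec; i++)
            if (rec[i].inode == inode)
                printf("inode %lu -> %s\n", inode, rec[i].name);
    }

    int main(void)
    {
        log_dentry(1234, "secret.txt");    /* hypothetical events */
        log_dentry(1234, "renamed.txt");
        names_for_inode(1234);
        return 0;
    }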
354

Cyber security in power systems

Sridharan, Venkatraman 06 April 2012 (has links)
Many automation and power control systems are integrated into the 'Smart Grid' concept for efficiently managing and delivering electric power. This integrated approach creates several challenges that need to be taken into consideration, such as cyber security issues, information sharing, and regulatory compliance. Several issues need to be addressed in the area of cyber security. Currently, there are no metrics for evaluating cyber security, and methodologies to detect cyber attacks are in their infancy. There is a perceived lack of security built into smart grid systems, and there is no mechanism for sharing information on cyber security incidents. In this thesis, we discuss the vulnerabilities in power system devices and present ideas and a proposal towards multiple-threat system intrusion detection. We propose to test the multiple-threat methods for cyber security monitoring on a multi-laboratory test bed, and to aid the development of a SCADA test bed to be constructed on the Georgia Tech campus.
355

Effective and scalable botnet detection in network traffic

Zhang, Junjie 03 July 2012 (has links)
Botnets represent one of the most serious threats to Internet security, since they serve as platforms responsible for the vast majority of large-scale and coordinated cyber attacks, such as distributed denial of service, spamming, and information theft. Detecting botnets is therefore of great importance, and a number of network-based botnet detection systems have been proposed. However, as botnets perform attacks in an increasingly stealthy way and the volume of network traffic grows rapidly, existing botnet detection systems face significant challenges in terms of effectiveness and scalability. The objective of this dissertation is to build novel network-based solutions that boost both the effectiveness of existing botnet detection systems, by detecting botnets whose attacks are very hard to observe in network traffic, and their scalability, by adaptively sampling network packets that are likely to be generated by botnets. Specifically, this dissertation describes three unique contributions. First, we built a new system to detect drive-by download attacks, which represent one of the most significant and popular methods of botnet infection. The goal of our system is to boost the effectiveness of existing drive-by download detection systems by detecting a large number of drive-by download attacks that are missed by these existing detection efforts. Second, we built a new system to detect botnets with peer-to-peer (P2P) command-and-control (C&C) structures (i.e., P2P botnets), where P2P C&C currently represents the most robust C&C structure against disruption efforts. Our system aims to boost the effectiveness of existing P2P botnet detection by detecting P2P botnets in two challenging scenarios: i) botnets perform stealthy attacks that are extremely hard to observe in the network traffic; ii) bot-infected hosts are also running legitimate P2P applications (e.g., BitTorrent and Skype). Finally, we built a novel traffic analysis framework to boost the scalability of existing botnet detection systems. Our framework can effectively and efficiently identify a small percentage of hosts that are likely to be bots, and then forward network traffic associated with these hosts to existing detection systems for fine-grained analysis, thereby boosting their scalability. The traffic analysis framework includes a novel botnet-aware and adaptive packet sampling algorithm and a scalable flow-correlation technique.
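
The sketch below illustrates one plausible shape of a botnet-aware adaptive sampling decision (an assumption made for illustration, not the framework's actual algorithm): each host carries a suspicion score, and its packets are sampled with a probability that grows with that score, so traffic from likely bots reaches the fine-grained detectors while bulk benign traffic is mostly skipped.

    #include <stdbool.h>
    #include <stdlib.h>

    struct host_state { double suspicion; };  /* 0.0 benign .. 1.0 likely bot */

    #define BASE_RATE 0.01                    /* baseline rate for unknown hosts */

    /* Decide whether to forward this host's next packet for deep analysis. */
    static bool sample_packet(const struct host_state *h)
    {
        double p = BASE_RATE + (1.0 - BASE_RATE) * h->suspicion;
        return ((double)rand() / RAND_MAX) < p;
    }

    /* Raise the score when a coarse indicator fires, e.g. periodic beaconing
     * or contact with addresses seen in other suspected hosts' peer lists. */
    static void report_suspicious(struct host_state *h, double weight)
    {
        h->suspicion += (1.0 - h->suspicion) * weight;
    }

    int main(void)
    {
        struct host_state host = { 0.0 };
        report_suspicious(&host, 0.5);         /* coarse indicator observed */
        return sample_packet(&host) ? 0 : 1;   /* exit code used only for demo */
    }

The design point is that sampling stays cheap per packet while the per-host state lets the sampler bias its limited budget toward the small fraction of hosts most likely to be bots.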
356

Preventing abuse of online communities

Irani, Danesh 02 July 2012 (has links)
Online communities are growing at a phenomenal rate, and with the large number of users these communities contain, attackers are drawn to exploit those users. Denial of information (DoI) attacks and information leakage attacks are two popular attacks that target users of online communities. These information-based attacks are linked by their opposing views on low-quality information. On the one hand, denial of information attacks, which primarily use low-quality information (such as spam and phishing), are a nuisance for information consumers. On the other hand, information leakage attacks, which use inadvertently leaked information, are less effective when low-quality information is used, and thus leakage of low-quality information is preferred by private information producers. In this dissertation, I introduce techniques for preventing abuse from these attacks in online communities using meta-model classification and information unification approaches, respectively. The meta-model classification approach involves classifying the "connected payload" associated with the information and using the classification result to make the determination. This approach allows for detection of DoI attacks in emerging domains where the amount of information may be constrained. My information unification approach allows for modeling and mitigating information leakage attacks. Unifying information across domains, followed by a quantification of the information leaked, provides one of the first studies of users' susceptibility to information leakage attacks. Further, the modeling introduced allows me to quantify the reduced threat of information leakage attacks after applying information cloaking.
357

Information management and the biological warfare threat

Martinez, Antonio, January 2002 (has links)
Thesis (M.S.)--Naval Postgraduate School, 2002. / Title from title screen (viewed June 18, 2003). Includes bibliographical references.
358

New cryptographic schemes with application in network security and computer forensics

Jiang, Lin, 蒋琳 January 2010 (has links)
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
359

A scalable and secure networking paradigm using identity-based cryptography

Kwok, Hon-man, Sammy., 郭漢文. January 2011 (has links)
published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
360

An investigation of the information security implementation strategies in further education and training colleges in South Africa

Mohlabeng, Moyahabo Rossett January 2014 (has links)
M. Tech. Information Networks / The increasing sophistication of information security threats and the ever-growing body of regulation have made information security a critical function in higher education institutions. Research was undertaken to investigate the implementation of information security strategies in higher education institutions in South Africa. This thesis investigates the following: how will the formulation of an information security strategy improve information security in higher education institutions; in what way should higher education institutions employ information security policies in order to improve information security; and how may the adoption of an information security framework create information security awareness among employees in higher education institutions?
