171

Security Architecture and Protocols for Overlay Network Services

Srivatsa, Mudhakar 16 May 2007 (has links)
Conventional wisdom suggests that in order to build a secure system, security must be an integral component of the system design. However, cost considerations drive most system designers to channel their efforts into the system's performance, scalability and usability. With little or no emphasis on security, such systems are vulnerable to a wide range of attacks that can potentially compromise the confidentiality, integrity and availability of sensitive data. It is often cumbersome to redesign and implement massive systems with security as one of the primary design goals. This thesis advocates a proactive approach that cleanly retrofits security solutions into existing system architectures. The first step in this approach is to identify security threats, vulnerabilities and potential attacks on a system or an application. The second step is to develop security tools in the form of customizable and configurable plug-ins that address these security issues and minimally modify existing system code, while preserving its performance and scalability metrics. This thesis uses overlay network applications as a vehicle to address the challenges involved in supporting security in large-scale distributed systems. In particular, the focus is on two popular applications: publish/subscribe networks and VoIP networks. Our work on VoIP networks has for the first time identified and formalized caller identification attacks on VoIP networks. We have identified two attacks: a triangulation-based timing attack on the VoIP network's route set-up protocol and a flow analysis attack on the VoIP network's voice session protocol. These attacks allow an external observer (adversary) to nearly uniquely identify the true caller (and receiver) with high probability. Our work on publish/subscribe networks has resulted in the development of a unified framework for handling event confidentiality, integrity, access control and DoS attacks, while incurring only a small overhead on the system. We have proposed a key isomorphism paradigm to preserve the confidentiality of events on publish/subscribe networks while permitting scalable content-based matching and routing. Our work on overlay network security has resulted in a novel information hiding technique on overlay networks. Our solution represents the first attempt to transparently hide the location of data items on an overlay network.
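The confidentiality requirement for content-based publish/subscribe — brokers must route on event content they are not allowed to read — can be illustrated with a much simpler construction than the key isomorphism paradigm developed in the thesis. The sketch below (Python, with a hypothetical shared group key) replaces attribute values with keyed HMAC tokens so that an untrusted broker can match equality predicates without seeing plaintext; unlike the thesis's scheme it supports only equality matching and leaks which events carry identical values.

```python
import hmac
import hashlib

def attr_token(key: bytes, attribute: str, value: str) -> str:
    """Deterministic keyed token for one attribute/value pair (illustrative only)."""
    msg = f"{attribute}={value}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def publish(key: bytes, event: dict) -> set:
    """Publisher replaces plaintext attributes with keyed tokens before routing."""
    return {attr_token(key, a, str(v)) for a, v in event.items()}

def subscribe(key: bytes, predicate: dict) -> set:
    """Subscriber expresses equality predicates as tokens over the same key."""
    return {attr_token(key, a, str(v)) for a, v in predicate.items()}

def broker_match(event_tokens: set, subscription_tokens: set) -> bool:
    """An untrusted broker matches on tokens without learning plaintext values."""
    return subscription_tokens.issubset(event_tokens)

if __name__ == "__main__":
    k = b"shared-group-key"          # hypothetical key shared by publisher and subscribers
    ev = publish(k, {"topic": "quotes", "symbol": "XYZ"})
    sub = subscribe(k, {"symbol": "XYZ"})
    print(broker_match(ev, sub))     # True: the broker routes without seeing 'XYZ'
```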
172

Improving host-based computer security using secure active monitoring and memory analysis

Payne, Bryan D. 03 June 2010 (has links)
Thirty years ago, research in designing operating systems to defeat malicious software was very popular. The primary technique was to design and implement a small security kernel that could provide security assurances to the rest of the system. However, as operating systems grew in size throughout the 1980s and 1990s, research into security kernels slowly waned. From a security perspective, the story was bleak. Providing security to one of these large operating systems typically required running software within that operating system. This weak security foundation made it relatively easy for attackers to subvert the entire system without detection. The research presented in this thesis aims to reimagine how we design and deploy computer systems. We show that through careful use of virtualization technology, one can effectively isolate the security-critical components in a system from malicious software. Furthermore, we can control this isolation to give the security software a complete view of the running system for monitoring. This view includes all of the information necessary for implementing useful security applications, including the system memory, storage, hardware events, and network traffic. In addition, we show how to perform both passive and active monitoring securely using this new system architecture. Security applications must be redesigned to work within this new monitoring architecture. The data acquired through our monitoring is typically very low-level and difficult to use directly. In this thesis, we describe work that helps bridge this semantic gap by locating data structures within the memory of a running virtual machine. We also describe work that shows a useful and novel security framework made possible through this new monitoring architecture. This framework correlates human interaction with the system to distinguish legitimate from malicious outgoing network traffic.
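As a rough illustration of the low-level view such monitoring yields, the sketch below scans a raw snapshot of guest memory for a byte signature assumed to mark a known kernel structure. The signature, file name and offsets are hypothetical; real introspection tools derive signatures from constraints on the structure's layout and must also handle address translation, both omitted here.

```python
import mmap
import re

# Hypothetical signature: a byte pattern assumed to mark the start of a known
# kernel structure. Real tools derive signatures from invariant field values
# in the structure's layout rather than a single magic string.
STRUCT_SIGNATURE = re.compile(rb"\x00TASK")

def scan_snapshot(path: str, max_hits: int = 10):
    """Scan a raw guest-memory snapshot for candidate structure offsets."""
    hits = []
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
        for match in STRUCT_SIGNATURE.finditer(mem):
            hits.append(match.start())
            if len(hits) >= max_hits:
                break
    return hits

if __name__ == "__main__":
    for offset in scan_snapshot("guest-memory.raw"):  # hypothetical snapshot file
        print(f"candidate structure at physical offset {offset:#x}")
```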
173

Semantic view re-creation for the secure monitoring of virtual machines

Carbone, Martim 28 June 2012 (has links)
The insecurity of modern-day software has created the need for security monitoring applications. Two serious deficiencies are commonly found in these applications. First, the absence of isolation from the system being monitored allows malicious software to tamper with them. Second, the lack of secure and reliable monitoring primitives in the operating system makes them easy to evade. A technique known as Virtual Machine Introspection attempts to solve these problems by leveraging the isolation and mediation properties of full-system virtualization. A problem known as the semantic gap, however, arises as a result of the low-level separation enforced by the hypervisor. This thesis proposes and investigates novel techniques to overcome the semantic gap, advancing the state of the art in syntactic and semantic view re-creation for applications that conduct passive and active monitoring of virtual machines. First, we propose a new technique for reconstructing a syntactic view of the guest OS kernel's heap state by applying a combination of static code analysis and dynamic memory analysis. Our key contribution is the accuracy and completeness of our analysis. We also propose a new technique that allows out-of-VM applications to invoke and securely execute API functions inside the monitored guest's kernel, eliminating the need for the application to know details of the guest's internals. Our key contribution is the ability to overcome the semantic gap in a robust and secure manner. Finally, we propose a new virtualization-based event monitoring technique based on the interception of kernel data modifications. Our key contribution is the ability to monitor operating system events in a general and secure fashion.
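To make the syntactic side of the semantic gap concrete, the sketch below walks a circular linked list embedded in a flat view of guest memory, using field offsets of the kind that static analysis of the kernel would supply. All offsets and field names are hypothetical, and virtual-to-physical address translation is omitted; this illustrates the general problem, not the thesis's reconstruction technique.

```python
import struct

# Hypothetical layout constants for a guest kernel structure; in practice they
# would come from static analysis of the kernel binary or its debug information.
LIST_NEXT_OFFSET = 0x08   # offset of the 'next' pointer inside the list node
NAME_OFFSET = 0x30        # offset of a fixed-size name field from the node start
NAME_LEN = 16

def read_pointer(mem: bytes, addr: int) -> int:
    """Read a little-endian 64-bit pointer at the given offset into guest memory."""
    return struct.unpack_from("<Q", mem, addr)[0]

def walk_list(mem: bytes, head: int, max_nodes: int = 256):
    """Walk a circular linked list embedded in a flat view of guest memory.

    'head' is the address of the list head. A real introspection tool would also
    translate guest-virtual to guest-physical addresses, which is omitted here.
    """
    seen = set()
    node = read_pointer(mem, head + LIST_NEXT_OFFSET)
    while node != head and node not in seen and len(seen) < max_nodes:
        seen.add(node)
        raw = mem[node + NAME_OFFSET : node + NAME_OFFSET + NAME_LEN]
        yield node, raw.split(b"\x00", 1)[0].decode(errors="replace")
        node = read_pointer(mem, node + LIST_NEXT_OFFSET)
```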
174

Practical authentication in large-scale internet applications

Dacosta, Italo 03 July 2012 (has links)
Due to their massive user base and request load, large-scale Internet applications have mainly focused on goals such as performance and scalability. As a result, many of these applications rely on weaker but more efficient and simpler authentication mechanisms. However, as recent incidents have demonstrated, powerful adversaries are exploiting the weaknesses in such mechanisms. While more robust authentication mechanisms exist, most of them fail to address the scale and security needs of these large-scale systems. In this dissertation we demonstrate that by taking into account the specific requirements and threat model of large-scale Internet applications, we can design authentication protocols for such applications that are not only more robust but also have low impact on performance, scalability and existing infrastructure. In particular, we show that there is no inherent conflict between stronger authentication and other system goals. For this purpose, we have designed, implemented and experimentally evaluated three robust authentication protocols: Proxychain, for SIP-based VoIP authentication; One-Time Cookies (OTC), for Web session authentication; and Direct Validation of SSL/TLS Certificates (DVCert), for server-side SSL/TLS authentication. These protocols not only offer better security guarantees, but they also have low performance overheads and do not require additional infrastructure. In so doing, we provide robust and practical authentication mechanisms that can improve the overall security of large-scale VoIP and Web applications.
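The general idea behind per-request tokens such as One-Time Cookies — derive a disposable credential from a session secret and bind it to the specific request being made, so that capturing one token does not grant session-wide access — can be sketched with a plain HMAC. The Python sketch below is an illustration of that idea under assumed parameters (token lifetime, fields covered), not the exact OTC construction evaluated in the dissertation.

```python
import hmac
import hashlib
import os
import time

def new_session_secret() -> bytes:
    """Issued once at login and stored by the client; never sent again in the clear."""
    return os.urandom(32)

def request_token(secret: bytes, method: str, path: str, timestamp: int) -> str:
    """Derive a disposable per-request token bound to the request being made."""
    msg = f"{method} {path} {timestamp}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify(secret: bytes, method: str, path: str, timestamp: int,
           token: str, max_skew: int = 60) -> bool:
    """Server-side check: recompute the token and reject stale or mismatched ones."""
    if abs(int(time.time()) - timestamp) > max_skew:
        return False
    expected = request_token(secret, method, path, timestamp)
    return hmac.compare_digest(expected, token)

if __name__ == "__main__":
    secret = new_session_secret()
    ts = int(time.time())
    tok = request_token(secret, "GET", "/inbox", ts)
    print(verify(secret, "GET", "/inbox", ts, tok))   # True
    print(verify(secret, "GET", "/admin", ts, tok))   # False: token is bound to the request
```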
175

Monitoring and analysis system for performance troubleshooting in data centers

Wang, Chengwei 13 January 2014 (has links)
Not long ago, on Christmas Eve 2012, a war of troubleshooting began in Amazon's data centers. It started at 12:24 PM with the mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), a mistake that went unnoticed at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. Within about six minutes, it had evolved into a critical one in which EC2 customers were significantly affected. Netflix, for example, which was using hundreds of Amazon ELB services, experienced an extensive streaming outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention. As the Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time-consuming. To address the troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations which data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope. By running these algorithms in VScope, data center operators are notified when performance anomalies happen. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to a performance issue. VScope's capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a 'thin layer' that accounts for no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone, and in one of the use cases explored in the dissertation it operates with over 400% less perturbation than brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces; the experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
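As an illustration of the kind of online anomaly detection an operator might run on top of such a monitoring overlay, the sketch below flags metric samples that deviate sharply from an exponentially weighted running baseline. The smoothing factor, threshold and warm-up period are illustrative assumptions; VScope's own detection algorithms are not reproduced here.

```python
class EwmaDetector:
    """Flag a metric sample as anomalous when it strays far from a running baseline.

    A deliberately simple stand-in for online anomaly detection on a monitoring
    overlay; the smoothing factor, threshold and warm-up length are illustrative.
    """

    def __init__(self, alpha: float = 0.1, threshold: float = 3.0, warmup: int = 5):
        self.alpha = alpha          # weight given to the newest sample
        self.threshold = threshold  # number of standard deviations tolerated
        self.warmup = warmup        # samples to observe before flagging anything
        self.mean = None
        self.var = 0.0
        self.count = 0

    def update(self, value: float) -> bool:
        self.count += 1
        if self.mean is None:
            self.mean = value
            return False
        deviation = value - self.mean
        std = self.var ** 0.5
        anomalous = self.count > self.warmup and abs(deviation) > self.threshold * max(std, 1e-6)
        # Update the running estimates after the decision so an outlier does not
        # immediately inflate its own baseline.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return anomalous

if __name__ == "__main__":
    detector = EwmaDetector()
    latencies_ms = [10, 11, 9, 10, 12, 11, 10, 95, 10, 11]  # synthetic request latencies
    for t, v in enumerate(latencies_ms):
        if detector.update(v):
            print(f"sample {t}: latency {v} ms flagged as anomalous")
```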
176

Physical-layer security

Bloch, Matthieu 05 May 2008 (has links)
As wireless networks continue to flourish worldwide and play an increasingly prominent role, it has become crucial to provide effective solutions to the inherent security issues associated with a wireless transmission medium. Unlike traditional solutions, which usually handle security at the application layer, the primary concern of this thesis is to analyze and develop solutions based on coding techniques at the physical layer. First, an information-theoretically secure communication protocol for quasi-static fading channels was developed and its performance with respect to theoretical limits was analyzed. A key element of the protocol is a reconciliation scheme for secret-key agreement based on low-density parity-check codes, which is specifically designed to operate on non-binary random variables and offers high reconciliation efficiency. Second, the fundamental trade-offs between cooperation and security were analyzed by investigating the transmission of confidential messages to cooperative relays. This information-theoretic study highlighted the importance of jamming as a means to increase secrecy and confirmed the importance of carefully chosen relaying strategies. Third, other applications of physical-layer security were investigated. Specifically, the use of secret-key agreement techniques for alternative cryptographic purposes was analyzed, and a framework for the design of practical information-theoretic commitment protocols over noisy channels was proposed. Finally, the benefit of using physical-layer coding techniques beyond the physical layer was illustrated by studying security issues in client-server networks. A coding scheme exploiting packet losses at the network layer was proposed to ensure reliable communication between clients and servers and security against colluding attackers.
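The theoretical limit against which such protocols are measured is the secrecy capacity. For the Gaussian wiretap channel it takes the well-known form

    C_s = \left[ \log_2(1 + \mathrm{SNR}_m) - \log_2(1 + \mathrm{SNR}_e) \right]^+

where SNR_m and SNR_e are the signal-to-noise ratios of the legitimate (main) channel and the eavesdropper's channel, and [x]^+ denotes max(x, 0). In the quasi-static fading setting considered in the thesis these SNRs vary with the fading realizations, so the relevant limits are naturally expressed in terms of outage probabilities rather than a single capacity value; the formula above is standard background, not a result of the thesis.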
177

Source authentication in group communication

Al-Ibrahim, Mohamed Hussain January 2005 (has links)
Thesis (PhD)--Macquarie University, Division of Information and Communication Sciences, Dept. of Computing, 2004. Bibliography: leaves 163-175. Contents: Introduction -- Cryptographic essentials -- Multicast: structure and security -- Authentication of multicast streams -- Authentication of concast communication -- Authentication of transit flows -- One-time signatures for authenticating group communication -- Authentication of anycast communication -- Authentication of joining operation -- Conclusion and future directions. Multicast is a relatively new and emerging communication mode in which a sender sends a message to a group of recipients in just one connection establishment... reducing broadband overhead and increasing resource utilization in the already congested and contended network... The focus of the research in this area has been in two directions: first, building an efficient routing infrastructure, and secondly, building a sophisticated security infrastructure. The focus of this work is on the second issue. An ideal authenticated multicast environment ... provides authenticity for all the communication operations in the system... We ... propose a comprehensive solution to the problem ... for all its possible operations: 1. one-to-one (or joining mode), 2. one-to-many (or broadcast mode), 3. many-to-one (or concast mode), 4. intermediate (or transit mode)... We study the ... mode known as anycast, in which a server is selected from a group of servers. Further we develop ... schemes for group-based communication exploiting the distinct features of one-time signatures... covering situations where a threshold number of participants are involved and ... where a proxy signer is required.
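For background on the one-time signatures the thesis builds on, the sketch below implements the classic Lamport scheme: the signer commits to two hash preimages per digest bit and reveals one preimage per bit of the message digest, so each key pair may sign exactly one message. This is standard textbook material offered for orientation; the thesis's threshold and proxy variants are considerably more involved.

```python
import hashlib
import os

HASH = hashlib.sha256
N = 256  # bit length of the message digest being signed

def keygen():
    """Lamport one-time key pair: two random preimages per digest bit."""
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(N)]
    pk = [[HASH(x).digest() for x in pair] for pair in sk]
    return sk, pk

def sign(sk, message: bytes):
    """Reveal one preimage per bit of the message digest; the key must not be reused."""
    digest = HASH(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, message: bytes, signature) -> bool:
    """Check each revealed preimage against the published hash for that bit."""
    digest = HASH(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
    return all(HASH(sig).digest() == pk[i][b]
               for i, (sig, b) in enumerate(zip(signature, bits)))

if __name__ == "__main__":
    sk, pk = keygen()
    msg = b"stream block 42"
    sig = sign(sk, msg)
    print(verify(pk, msg, sig))          # True
    print(verify(pk, b"tampered", sig))  # False
```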
178

Social media risks in large and medium enterprises in the Cape Metropole : the role of internal auditors

Gwaka, Leon Tinashe January 2015 (has links)
Thesis (MTech (Internal Auditing))--Cape Peninsula University of Technology, 2015. Social media has undoubtedly emerged as one of the greatest developments of this technology-driven generation. Despite having existed for years, social media has recently surged drastically in popularity, with billions of users worldwide reported to be on at least one social media platform. This increase in users of social media has been further facilitated by governmental and private-sector initiatives to boost Internet connectivity and bridge the digital divide globally. Mobile Internet access has also fuelled the use of social media, as it allows easy and economical connectivity anytime, anywhere. The availability of hundreds of social media platforms has presented businesses with several opportunities to conduct business activities using social media. The use of social media has been reported to offer businesses plenty of benefits when this use is strategically aligned to business objectives. On the other side of the coin, these social media platforms have also emerged as new hunting grounds for fraudsters and other information-technology-related criminals. As with any invention, engaging with social media for business has its own inherent risks; this further complicates existing information-technology risks and also presents businesses with new risks. Despite blossoming into a global phenomenon, social media has no universally accepted frameworks or approaches (and thus no safeguards) for engaging with it, resulting in almost unlimited risk exposure. The uncertainty, that is, the risks surrounding social media platforms, proves problematic in determining the optimum social media platform to use in business. Furthermore, organisations face challenges in deciding whether to formally adopt it or totally ignore it, with the latter emerging not to be a viable option. The complex nature of social media has made it difficult for enterprises to place a monetary value on these platforms and determine a return on investment. From a governance perspective, it remains a challenge for most enterprises to identify and appoint individuals responsible for social media management within the business, although social media strategist positions have recently been surfacing. By its nature, social media triggers matters relating to governance, risk and compliance, which implies that internal auditors are expected to champion the adoption of social media in enterprises. As a relatively new concept, the role that internal auditors should play towards social media appears not to be well defined. Through examination of existing guidelines, an attempt is made to define the role of internal auditors towards social media.
179

An exploratory study of techniques in passive network telescope data analysis

Cowie, Bradley January 2013 (has links)
Careful examination of the composition and concentration of malicious traffic in transit on the channels of the Internet provides network administrators with a means of understanding and predicting damaging attacks directed towards their networks. This allows action to be taken to mitigate the effect that these attacks have on the performance of their networks and the Internet as a whole, by readying network defences and providing early warning to Internet users. One approach to malicious traffic monitoring that has garnered some success in recent times, as exhibited by the study of fast-spreading Internet worms, involves analysing data obtained from network telescopes. While some research has considered using measures derived from network telescope datasets to study large-scale network incidents such as Code-Red, SQLSlammer and Conficker, there is very little documented discussion of the merits and weaknesses of approaches to analysing network telescope data. This thesis is an introductory study in network telescope analysis and aims to consider the variables associated with the data received by network telescopes and how these variables may be analysed. The core research of this thesis considers both novel and previously explored analysis techniques from the fields of security metrics, baseline analysis, statistical analysis and technical analysis as applied to network telescope datasets. These techniques were evaluated as approaches to recognising unusual behaviour by observing their ability to identify notable incidents in network telescope datasets.
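As one concrete example of the sort of technique borrowed from technical analysis and applied to telescope data, the sketch below raises a signal when a short-term moving average of daily packet counts crosses above a long-term baseline. The window sizes and synthetic counts are illustrative assumptions, not parameters or data from the thesis.

```python
from collections import deque

def moving_average_crossover(counts, short_window=3, long_window=14):
    """Yield indices where the short-term average of daily packet counts first rises
    above the long-term average -- a simple 'technical analysis' style signal that
    traffic reaching the telescope is surging.
    """
    short_q, long_q = deque(maxlen=short_window), deque(maxlen=long_window)
    above = False
    for day, count in enumerate(counts):
        short_q.append(count)
        long_q.append(count)
        if len(long_q) < long_window:
            continue  # wait until the long-term baseline has a full window
        short_ma = sum(short_q) / len(short_q)
        long_ma = sum(long_q) / len(long_q)
        if short_ma > long_ma and not above:
            above = True
            yield day
        elif short_ma <= long_ma:
            above = False

if __name__ == "__main__":
    # Synthetic daily packet counts with a surge near the end of the series.
    daily_counts = [100, 110, 95, 105, 98, 102, 99, 101, 97, 103,
                    100, 104, 98, 96, 250, 400, 520, 610]
    for day in moving_average_crossover(daily_counts):
        print(f"day {day}: short-term average crossed above the long-term baseline")
```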
180

Towards an evaluation and protection strategy for critical infrastructure

Gottschalk, Jason Howard January 2015 (has links)
From an Information Security perspective, Critical Infrastructure is often overlooked as something of high importance to protect, which may result in Critical Infrastructure being at risk of Cyber-related attacks with potentially dire consequences. Furthermore, what is considered Critical Infrastructure is often a complex discussion, with varying opinions across audiences. Traditional Critical Infrastructure includes power stations, water and sewage pump stations, gas pipelines, power grids and, as a new entrant, the "internet of things". This list is not complete, and a constant challenge exists in identifying Critical Infrastructure and its interdependencies. The purpose of this research is to highlight the importance of protecting Critical Infrastructure as well as to propose a high-level framework aiding in the identification and securing of Critical Infrastructure. To achieve this, key case studies involving Cyber crime and Cyber warfare, as well as attack vectors and their impact on Critical Infrastructure (where applicable), were identified and discussed. Furthermore, industry-related material was researched to identify key controls that would aid in protecting Critical Infrastructure. Initiatives that countries were pursuing that would aid in the protection of Critical Infrastructure were also identified and discussed. Research was conducted into the various standards, frameworks and methodologies available to aid in the identification, remediation and, ultimately, the protection of Critical Infrastructure. A key output of the research was the development of a hybrid approach to identifying Critical Infrastructure and its associated vulnerabilities, together with an approach for remediation with specific metrics (based on the research performed). The conclusion based on the research is that there is often a need and a requirement to identify and protect Critical Infrastructure; however, this is usually initiated or driven by non-owners of Critical Infrastructure (governments, governing bodies, standards bodies and security consultants). Furthermore, where there are active initiatives by owners, the suggested approaches are often very high level in nature, with little direct guidance available for very immature environments.
