171

Perception of employees concerning information security policy compliance : case studies of a European and South African university

Lububu, Steven January 2018 (has links)
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2018. / This study recognises that, regardless of information security policies, information about institutions continues to be leaked due to a lack of employee compliance. The problem is that information leakages have serious consequences for institutions, especially those that rely on information for their sustainability, functionality and competitiveness. Institutions therefore seek to ensure that information about their processes, activities and services is secured, which they do through policy enforcement and compliance. The aim of this study is to explore the extent of non-compliance with information security policy in an institution. The study followed an interpretive, qualitative case study approach to understand the meaningful characteristics of actual security breaches in institutions. Qualitative data was collected from two universities through semi-structured interviews with 17 participants. Two departments were selected, Human Resources and the Administrative office, based on the following criteria: both play key roles within an institution, both maintain and improve the university’s policies, and both manage and keep confidential university information (Human Resources handles and keeps employees’ information, whilst the Administrative office manages students’ records). The study used structuration theory as a lens to view and interpret the data. Qualitative content analysis was used to analyse documentation, such as brochures and information obtained from the websites of the case study universities, and the documentation was then used to support the data from the interviews. The findings revealed several factors that influence non-compliance with information security policy, such as a lack of leadership skills, favouritism, fraud, corruption, insufficient infrastructure, a lack of security education, and miscommunication. In the context of this study, these factors have severe consequences for an institution, such as the loss of the institution’s credibility or even its closure. Recommendations for further study are also provided.
172

Elliptic curves: identity-based signing and quantum arithmetic

Unknown Date (has links)
Pairing-friendly curves and elliptic curves with a trapdoor for the discrete logarithm problem are versatile tools in the design of cryptographic protocols. We show that curves having both properties enable deterministic identity-based signing with “short” signatures in the random oracle model. At PKC 2003, Choon and Cheon proposed an identity-based signature scheme along with a provable security reduction. We propose a modification of their scheme with several performance benefits. In addition to faster signing, the signature size can be reduced for batch signing, and if multiple signatures for the same identity need to be verified, the verification can be accelerated. Neither the signing nor the verification algorithm relies on the availability of a (pseudo)random generator, and we give a provable security reduction in the random oracle model to the (ℓ-)Strong Diffie-Hellman problem. Implementing the group arithmetic is a cost-critical task when designing quantum circuits for Shor’s algorithm to solve the discrete logarithm problem. We introduce a tool for the automatic generation of addition circuits for ordinary binary elliptic curves, a prominent platform group for digital signatures. Our Python software generates circuit descriptions that, without increasing the number of qubits or the T-depth, involve less than 39% of the number of T-gates in the best previous construction. The software also optimizes the (CNOT) depth of F₂-linear operations by means of suitable graph colorings. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
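For orientation, the identity-based signing referred to above can be pictured with the standard pairing-based scheme from PKC 2003 (often called the Cha–Cheon scheme). The sketch below is a minimal rendering of that baseline under the usual group and hash-function assumptions, not the deterministic modification or the quantum circuitry proposed in the dissertation.

```latex
% Baseline identity-based signature in the Cha--Cheon style (PKC 2003), sketched
% under standard assumptions: G_1 = <P> of prime order q, a bilinear pairing
% e : G_1 x G_1 -> G_2, and hash functions H_1 : {0,1}^* -> G_1, H_2 : {0,1}^* x G_1 -> Z_q.
\begin{align*}
\textbf{Setup:}   \quad & s \xleftarrow{\$} \mathbb{Z}_q^{*}, \qquad P_{\mathrm{pub}} = sP\\
\textbf{Extract:} \quad & Q_{\mathrm{ID}} = H_1(\mathrm{ID}), \qquad D_{\mathrm{ID}} = s\,Q_{\mathrm{ID}}\\
\textbf{Sign}(m): \quad & r \xleftarrow{\$} \mathbb{Z}_q^{*}, \quad U = r\,Q_{\mathrm{ID}}, \quad h = H_2(m, U), \quad V = (r + h)\,D_{\mathrm{ID}}\\
\textbf{Verify:}  \quad & \text{accept iff } e(P, V) = e\!\left(P_{\mathrm{pub}},\, U + h\,Q_{\mathrm{ID}}\right)
\end{align*}
```

Verification works because e(P, (r+h)·s·Q_ID) = e(sP, (r+h)·Q_ID); removing the fresh randomness r without losing security is precisely where a deterministic variant has to depart from this baseline.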
173

General Deterrence Theory: Assessing Information Systems Security Effectiveness in Large versus Small Businesses

Schuessler, Joseph H. 05 1900 (has links)
This research sought to shed light on information systems security (ISS) by conceptualizing an organization's use of countermeasures using general deterrence theory, positing a non-recursive relationship between threats and countermeasures, and extending the ISS construct developed in prior research. Industry affiliation and organizational size are considered in terms of the differences in the threats that firms face, the different countermeasures firms use, and, ultimately, how a firm's ISS effectiveness is affected. Six information systems professionals were interviewed in order to develop the instruments needed to assess the research model; the final instrument was further refined through pilot testing to clarify its wording and layout. Finally, members of the Association of Information Technology Professionals were surveyed online. The model was assessed using SmartPLS and a two-stage least squares analysis. Results indicate that a non-recursive relationship does indeed exist between threats and countermeasures and that general deterrence theory can be used to effectively frame an organization's use of countermeasures. Implications for practitioners include the ability to target the use of certain countermeasures to have desired effects on both ISS effectiveness and future threats. Additionally, the model put forth in this research can be used by practitioners both to assess their current ISS effectiveness and to prescriptively target desired levels of ISS effectiveness.
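For readers unfamiliar with the estimation technique named above, a two-stage least squares (2SLS) fit of a simultaneously determined variable can be sketched in a few lines. The variable names, instruments, and simulated data below are invented for illustration and are not drawn from the study's instrument or dataset.

```python
import numpy as np

# Illustrative two-stage least squares (2SLS): estimate the effect of a
# simultaneously determined regressor (countermeasures) on an outcome
# (ISS effectiveness) using exogenous instruments. All variables are simulated.
rng = np.random.default_rng(0)
n = 500
firm_size = rng.normal(size=n)                       # hypothetical instrument
industry = rng.normal(size=n)                        # hypothetical instrument
threats = 0.8 * firm_size + rng.normal(size=n)
countermeasures = 0.6 * threats + 0.5 * industry + rng.normal(size=n)
effectiveness = 0.7 * countermeasures + rng.normal(size=n)

def ols(y, X):
    """Least squares with an intercept; returns the design matrix and coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1, beta

# Stage 1: project the endogenous regressor onto the instruments.
Z1, gamma = ols(countermeasures, np.column_stack([firm_size, industry]))
countermeasures_hat = Z1 @ gamma

# Stage 2: regress the outcome on the first-stage fitted values.
_, beta = ols(effectiveness, countermeasures_hat)
print("2SLS estimate of the countermeasure effect:", round(beta[1], 3))  # roughly 0.7
```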
174

Distributed file systems in an authentication system

Merritt, John W January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
175

Scaling and Visualizing Network Data to Facilitate in Intrusion Detection Tasks

Abdullah, Kulsoom B. 07 April 2006 (has links)
As the trend of successful network attacks continues to rise, better forms of intrusion detection and prevention are needed. This thesis addresses network traffic visualization techniques that aid administrators in recognizing attacks. Views of port statistics and Intrusion Detection System (IDS) alerts have been developed; each helps address the issues involved in analyzing large network datasets. Because of the volume of traffic and the range of possible port numbers and IP addresses, scaling techniques are necessary. A port-based overview of network activity produces an improved representation for detecting and responding to malicious activity. We have found that presenting an overview using stacked histograms of aggregate port activity, combined with the ability to drill down for finer details, allows small yet important details to be noticed and investigated without being obscured by large volumes of usual traffic. Another problem administrators face is the cumbersome amount of alarm data generated by IDS sensors. As a result, important details are often overlooked, and it is difficult to get an overall picture of what is occurring in the network by manually traversing textual alarm logs. We have designed a novel visualization to address this problem by showing alarm activity within a network. Alarm data is presented in an overview from which system administrators can get a general sense of network activity and easily detect anomalies; they can then zoom and drill down for details. Based on our study of system administrator requirements, this graphical layout addresses what system administrators need to see, is faster and easier than analyzing text logs, and uses visualization techniques to effectively scale and display the data. With this design, we have built a tool that effectively uses operational alarm log data generated on the Georgia Tech campus network. For both of these systems, we describe the input data, the system design, and examples. Finally, we summarize potential future work.
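To make the port-overview idea concrete, the following sketch draws a stacked bar chart of aggregate activity per port range from made-up traffic counts. It is only an illustration of the visualization style described above, not the thesis's tool or its data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up packet counts per destination-port bucket for a few traffic categories;
# in a real deployment these would come from live network traffic, not random data.
rng = np.random.default_rng(1)
port_buckets = ["0-1023", "1024-8191", "8192-16383", "16384-32767", "32768-65535"]
categories = ["web", "mail", "ssh", "other"]
counts = rng.integers(10, 1000, size=(len(categories), len(port_buckets)))

# Stacked bars: each port bucket shows aggregate activity split by category, so a
# small but unusual spike stays visible next to the large "usual" traffic.
bottom = np.zeros(len(port_buckets))
for cat, row in zip(categories, counts):
    plt.bar(port_buckets, row, bottom=bottom, label=cat)
    bottom += row

plt.yscale("log")                      # log scale keeps small anomalies visible
plt.ylabel("packets (log scale)")
plt.xlabel("destination port range")
plt.legend()
plt.title("Aggregate port activity overview (illustrative data)")
plt.show()
```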
176

Security Architecture and Protocols for Overlay Network Services

Srivatsa, Mudhakar 16 May 2007 (has links)
Conventional wisdom suggests that in order to build a secure system, security must be an integral component of the system design. However, cost considerations drive most system designers to focus their efforts on the system's performance, scalability and usability. With little or no emphasis on security, such systems are vulnerable to a wide range of attacks that can potentially compromise the confidentiality, integrity and availability of sensitive data. It is often cumbersome to redesign and implement massive systems with security as one of the primary design goals. This thesis advocates a proactive approach that cleanly retrofits security solutions into existing system architectures. The first step in this approach is to identify security threats, vulnerabilities and potential attacks on a system or an application. The second step is to develop security tools in the form of customizable and configurable plug-ins that address these security issues and minimally modify existing system code, while preserving its performance and scalability. This thesis uses overlay network applications to work through the challenges involved in supporting security in large-scale distributed systems. In particular, the focus is on two popular applications: publish/subscribe networks and VoIP networks. Our work on VoIP networks has, for the first time, identified and formalized caller identification attacks on VoIP networks. We have identified two attacks: a triangulation-based timing attack on the VoIP network's route set-up protocol and a flow analysis attack on the VoIP network's voice session protocol. These attacks allow an external observer (adversary) to (nearly) uniquely identify the true caller (and receiver) with high probability. Our work on publish/subscribe networks has resulted in a unified framework for handling event confidentiality, integrity, access control and DoS attacks, while incurring only a small overhead on the system. We have proposed a key isomorphism paradigm to preserve the confidentiality of events on publish/subscribe networks while permitting scalable content-based matching and routing. Our work on overlay network security has resulted in a novel information hiding technique for overlay networks. Our solution represents the first attempt to transparently hide the location of data items on an overlay network.
177

Improving host-based computer security using secure active monitoring and memory analysis

Payne, Bryan D. 03 June 2010 (has links)
Thirty years ago, research into designing operating systems to defeat malicious software was very popular. The primary technique was to design and implement a small security kernel that could provide security assurances to the rest of the system. However, as operating systems grew in size throughout the 1980s and 1990s, research into security kernels slowly waned. From a security perspective, the story was bleak: providing security for one of these large operating systems typically required running software within that operating system, and this weak security foundation made it relatively easy for attackers to subvert the entire system without detection. The research presented in this thesis aims to reimagine how we design and deploy computer systems. We show that through careful use of virtualization technology, one can effectively isolate the security-critical components of a system from malicious software. Furthermore, we can control this isolation so that the security software retains a complete view of the running system for monitoring. This view includes all of the information needed to implement useful security applications, including system memory, storage, hardware events, and network traffic. In addition, we show how to perform both passive and active monitoring securely using this new system architecture. Security applications must be redesigned to work within this new monitoring architecture. The data acquired through our monitoring is typically very low-level and difficult to use directly. In this thesis, we describe work that helps bridge this semantic gap by locating data structures within the memory of a running virtual machine. We also describe a useful and novel security framework made possible by this new monitoring architecture; this framework correlates human interaction with the system to distinguish legitimate from malicious outgoing network traffic.
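A heavily simplified, offline analogue of "locating data structures in memory" is scanning a raw memory image for a known structure layout. In the sketch below the magic value, field layout, and file name are all invented for illustration; real introspection of the kind described in this thesis works against live VM memory and actual kernel data-structure layouts.

```python
import struct

# Hypothetical structure: 4-byte magic, 4-byte pid, 16-byte fixed-width name.
MAGIC = b"\xde\xc0\xad\x0b"            # invented signature, not a real OS constant
RECORD_FORMAT = "<4sI16s"              # little-endian: magic, pid, name
RECORD_SIZE = struct.calcsize(RECORD_FORMAT)

def scan_memory_image(path):
    """Scan a raw memory dump for records matching the hypothetical structure layout."""
    hits = []
    with open(path, "rb") as f:
        data = f.read()
    offset = data.find(MAGIC)
    while offset != -1 and offset + RECORD_SIZE <= len(data):
        _, pid, raw_name = struct.unpack_from(RECORD_FORMAT, data, offset)
        name = raw_name.split(b"\x00", 1)[0].decode("ascii", errors="replace")
        hits.append({"offset": offset, "pid": pid, "name": name})
        offset = data.find(MAGIC, offset + 1)
    return hits

if __name__ == "__main__":
    for hit in scan_memory_image("guest-memory.raw"):   # path is illustrative
        print(f"0x{hit['offset']:08x}  pid={hit['pid']:<6} {hit['name']}")
```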
178

Semantic view re-creation for the secure monitoring of virtual machines

Carbone, Martim 28 June 2012 (has links)
The insecurity of modern-day software has created the need for security monitoring applications. Two serious deficiencies are commonly found in these applications. First, the absence of isolation from the system being monitored allows malicious software to tamper with them. Second, the lack of secure and reliable monitoring primitives in the operating system makes them easy to evade. A technique known as Virtual Machine Introspection attempts to solve these problems by leveraging the isolation and mediation properties of full-system virtualization. A problem known as the semantic gap, however, arises from the low-level separation enforced by the hypervisor. This thesis proposes and investigates novel techniques to overcome the semantic gap, advancing the state of the art in syntactic and semantic view re-creation for applications that conduct passive and active monitoring of virtual machines. First, we propose a new technique for reconstructing a syntactic view of the guest OS kernel's heap state by applying a combination of static code analysis and dynamic memory analysis; our key contribution is the accuracy and completeness of this analysis. We also propose a new technique that allows out-of-VM applications to invoke and securely execute API functions inside the monitored guest's kernel, eliminating the need for the application to know details of the guest's internals; our key contribution is the ability to overcome the semantic gap in a robust and secure manner. Finally, we propose a new virtualization-based event monitoring technique based on the interception of kernel data modifications; our key contribution is the ability to monitor operating system events in a general and secure fashion.
179

Practical authentication in large-scale internet applications

Dacosta, Italo 03 July 2012 (has links)
Due to their massive user base and request load, large-scale Internet applications have mainly focused on goals such as performance and scalability. As a result, many of these applications rely on weaker but more efficient and simpler authentication mechanisms. However, as recent incidents have demonstrated, powerful adversaries are exploiting the weaknesses in such mechanisms. While more robust authentication mechanisms exist, most of them fail to address the scale and security needs of these large-scale systems. In this dissertation we demonstrate that by taking into account the specific requirements and threat model of large-scale Internet applications, we can design authentication protocols for such applications that are not only more robust but also have low impact on performance, scalability and existing infrastructure. In particular, we show that there is no inherent conflict between stronger authentication and other system goals. For this purpose, we have designed, implemented and experimentally evaluated three robust authentication protocols: Proxychain, for SIP-based VoIP authentication; One-Time Cookies (OTC), for Web session authentication; and Direct Validation of SSL/TLS Certificates (DVCert), for server-side SSL/TLS authentication. These protocols not only offer better security guarantees, but they also have low performance overheads and do not require additional infrastructure. In so doing, we provide robust and practical authentication mechanisms that can improve the overall security of large-scale VoIP and Web applications.
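As a rough illustration of per-request session credentials in the spirit of one-time tokens (and only that; this is not the OTC protocol as specified in the dissertation), a session secret can be used to derive an HMAC over each individual request:

```python
import hashlib
import hmac
import os
import time

# A per-session secret shared between client and server at login (illustrative).
session_key = os.urandom(32)

def make_request_token(key: bytes, method: str, path: str, nonce: bytes, ts: int) -> str:
    """Derive a per-request token bound to the request line, a nonce, and a timestamp."""
    msg = f"{method} {path} {ts}".encode() + nonce
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request_token(key: bytes, method: str, path: str, nonce: bytes,
                         ts: int, token: str, max_age: int = 60) -> bool:
    """Recompute the token server-side; reject stale timestamps and mismatches.
    A real design would also remember used nonces to stop replays within the window."""
    if abs(time.time() - ts) > max_age:
        return False
    expected = make_request_token(key, method, path, nonce, ts)
    return hmac.compare_digest(expected, token)

# Client side: attach a fresh nonce, timestamp, and token to each request.
nonce, ts = os.urandom(16), int(time.time())
token = make_request_token(session_key, "GET", "/inbox", nonce, ts)
# Server side: verify; a replayed or tampered request fails the check.
print(verify_request_token(session_key, "GET", "/inbox", nonce, ts, token))
```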
180

Monitoring and analysis system for performance troubleshooting in data centers

Wang, Chengwei 13 January 2014 (has links)
It was not long ago: on Christmas Eve 2012, a war of troubleshooting began in Amazon's data centers. It started at 12:24 PM with the mistaken deletion of state data for the Amazon Elastic Load Balancing service (ELB for short), a mistake that went unnoticed at the time. It first led to a local issue in which a small number of ELB service APIs were affected. Within about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. Netflix, for example, which was using hundreds of Amazon ELB services, experienced an extensive streaming outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention. As the Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time consuming. To address the troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations which data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope; by running these algorithms in VScope, data center operators are notified when performance anomalies occur. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to a performance issue. VScope's capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime perturbs system and application performance negligibly, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a ‘thin layer’ that accounts for no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found through application-level monitoring alone, and in one of the use cases explored in the dissertation it operates with levels of perturbation over 400% lower than those of brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces; the experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
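As a stand-in for the kind of anomaly detection VScope runs (the dissertation's algorithms are more sophisticated; the window size, threshold, and latency stream below are invented for illustration), a sliding-window z-score check over a single metric stream already conveys the idea:

```python
from collections import deque
from statistics import mean, stdev

def zscore_anomalies(samples, window=30, threshold=3.0):
    """Flag samples deviating from the trailing window by more than `threshold` sigmas."""
    history = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                anomalies.append((i, x))
        history.append(x)
    return anomalies

# Illustrative latency stream (ms): steady around 20 ms with one injected spike.
latencies = [20 + (i % 5) * 0.3 for i in range(100)]
latencies[70] = 180.0
print(zscore_anomalies(latencies))    # expect the spike at index 70 to be flagged
```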
