11

Data Protection over Cloud

January 2016
abstract: Data protection has long been a point of contention and a heavily researched field. With advances in Internet technologies, securing data has become much more challenging. Cloud services have become very popular, and given their ease of access and availability, it is hard to avoid storing data in the cloud. This, however, poses a significant risk to data security, as more of one's data becomes available to a third party. With easy transmission and almost unlimited storage of data, securing sensitive information has become a major challenge. Cloud service providers cannot be trusted completely with one's data: it is not uncommon for providers to mine stored data for patterns that generate ad revenue, or to divulge information to third parties such as government and law-enforcement agencies. For enterprises that use cloud services, this poses a risk to intellectual property and business secrets. With more and more employees using the cloud for day-to-day work, businesses now face the risk of losing or leaking information. In this thesis, I focus on ways to protect data and information from the cloud provider, a third party not authorized to use the data, while still utilizing cloud services for transfer and availability. This research proposes an alternative to an on-premise secure infrastructure, giving the user flexibility in protecting the data and control over it. The project uses cryptography to protect data and creates a secure architecture for secret-key migration so that data can be decrypted securely by the intended recipient. It utilizes Intel's technology, which gives it an added advantage over other existing solutions. / Dissertation/Thesis / Masters Thesis Computer Science 2016
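To make the pattern such a design rests on concrete, here is a minimal, hedged sketch of envelope encryption: data is encrypted locally under a fresh per-object key, and only a wrapped copy of that key travels with the ciphertext, so the cloud provider never sees plaintext or a usable key. All names are illustrative, the pyca/cryptography package is assumed, and the thesis's actual Intel-hardware-based key-migration architecture is not modeled here.

```python
# Illustrative sketch only (not the thesis's implementation): envelope
# encryption. The cloud stores the returned blob; the data key travels
# wrapped so that only the intended recipient can unwrap it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_for_cloud(plaintext: bytes, recipient_pub) -> dict:
    data_key = AESGCM.generate_key(bit_length=256)   # fresh per-object key
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    wrapped = recipient_pub.encrypt(data_key, OAEP)  # only recipient can unwrap
    return {"nonce": nonce, "ct": ciphertext, "wrapped_key": wrapped}

def decrypt_from_cloud(blob: dict, recipient_priv) -> bytes:
    data_key = recipient_priv.decrypt(blob["wrapped_key"], OAEP)
    return AESGCM(data_key).decrypt(blob["nonce"], blob["ct"], None)

priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
blob = encrypt_for_cloud(b"business secrets", priv.public_key())
assert decrypt_from_cloud(blob, priv) == b"business secrets"
```

Wrapping a fresh symmetric key per object, rather than encrypting data directly with RSA, keeps bulk encryption fast and allows a key to be re-wrapped for a new recipient without re-encrypting the data.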
12

TCPA/TCG and NGSCB : Benefits and Risks for Users

Ericson, Peter January 2004
Trusted computing has been proposed as a way to significantly enhance computer security and privacy by building them into the design of computing platforms instead of adding them on top of an inherently insecure foundation; however, the project has attracted much criticism. This dissertation looks at trusted computing from the user perspective. Possible beneficial uses of the technology are brought up, and some of the raised criticism is discussed and analyzed to find out whether it is correct on all points, or whether some of it is the result of misinformation or misunderstanding. The conclusion is that not all the arguments against trusted computing are correct, and that the possible implications for users are taken into account in the development process. The dissertation ends on a positive note, concluding that trusted computing is possible without the worst fears of the critics coming true.
13

Exploring Trusted Platform Module Capabilities: A Theoretical and Experimental Study

Gunupudi, Vandana 05 1900
Trusted platform modules (TPMs) are hardware modules, bound to a computer's motherboard, that are now included in many desktops and laptops. Augmenting computers with these hardware modules adds powerful functionality in distributed settings, allowing us to reason about the security of these systems in new ways. In this dissertation, I study the functionality of TPMs from a theoretical as well as an experimental perspective. On the theoretical front, I leverage various features of TPMs to construct applications, like random oracles, that are impossible to implement in a standard model of computation. Apart from random oracles, I construct a new cryptographic primitive that is essentially a non-interactive form of the standard primitive of oblivious transfer. I apply this new primitive to secure mobile agent computations, where interaction between various entities is typically required to ensure security. I prove these constructions secure using standard cryptographic techniques and assumptions. To test the practicality of these constructions and their applications, I performed an experimental study, both on an actual TPM and on a software TPM simulator enhanced to reflect timings from a real TPM. This allowed me to benchmark the performance of the applications and to test the feasibility of the proposed extensions to standard TPMs. My tests also show that these constructions are practical.
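As a rough intuition for why trusted hardware enables a non-interactive form of oblivious transfer, the toy model below uses a module the host cannot open to enforce a one-shot choice between two sealed messages: the receiver learns exactly one message, and the sender never observes which. This is a didactic sketch under the stated assumption of an idealized module; it is not the dissertation's construction, and `ToyTrustedModule` is not a real TPM interface.

```python
# Toy model of one-out-of-two oblivious transfer via trusted hardware.
# The module holds keys the host cannot read and enforces a one-shot
# choice, much as a monotonic counter would. Didactic sketch only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyTrustedModule:
    def __init__(self):
        self._k0 = AESGCM.generate_key(bit_length=128)
        self._k1 = AESGCM.generate_key(bit_length=128)
        self._used = False

    def sender_keys(self):
        return self._k0, self._k1          # the sender learns both keys

    def open_one(self, choice: int, c0, c1):
        assert not self._used, "one-shot: a second query is refused"
        self._used = True                  # receiver gets exactly one message
        nonce, ct = (c0, c1)[choice]
        key = (self._k0, self._k1)[choice]
        return AESGCM(key).decrypt(nonce, ct, None)

def seal(key: bytes, msg: bytes):
    nonce = os.urandom(12)
    return nonce, AESGCM(key).encrypt(nonce, msg, None)

tpm = ToyTrustedModule()
k0, k1 = tpm.sender_keys()
c0, c1 = seal(k0, b"message zero"), seal(k1, b"message one")
print(tpm.open_one(1, c0, c1))             # b'message one'; choice hidden from sender
```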
14

Reducing Size and Complexity of the Security-Critical Code Base of File Systems

Weinhold, Carsten 09 July 2014 (PDF)
Desktop and mobile computing devices increasingly store critical data, both personal and professional in nature. Yet, the enormous code bases of their monolithic operating systems (hundreds of thousands to millions of lines of code) are likely to contain exploitable weaknesses that jeopardize the security of this data in the file system. Using a highly componentized system architecture based on a microkernel (or a very small hypervisor) can significantly improve security. The individual operating system components have smaller code bases running in isolated address spaces so as to provide better fault containment. Their isolation also allows for smaller trusted computing bases (TCBs) of applications that comprise only a subset of all components. In my thesis, I built VPFS, a virtual private file system designed for such a componentized system architecture. It aims at reducing the amount of code and complexity that a file system implementation adds to the TCB of an application. The basic idea behind VPFS is similar to that of a VPN, which securely reuses an untrusted network: the core component of VPFS implements all functionality and cryptographic algorithms that an application needs to rely upon for confidentiality and integrity of file system contents. This security-critical core reuses a much more complex, and therefore untrusted, file system stack for non-critical functionality and access to the storage device. Additional trusted components ensure recoverability.
15

Reducing Size and Complexity of the Security-Critical Code Base of File Systems

Weinhold, Carsten 14 January 2014
Desktop and mobile computing devices increasingly store critical data, both personal and professional in nature. Yet, the enormous code bases of their monolithic operating systems (hundreds of thousands to millions of lines of code) are likely to contain exploitable weaknesses that jeopardize the security of this data in the file system. Using a highly componentized system architecture based on a microkernel (or a very small hypervisor) can significantly improve security. The individual operating system components have smaller code bases running in isolated address spaces so as to provide better fault containment. Their isolation also allows for smaller trusted computing bases (TCBs) of applications that comprise only a subset of all components. In my thesis, I built VPFS, a virtual private file system designed for such a componentized system architecture. It aims at reducing the amount of code and complexity that a file system implementation adds to the TCB of an application. The basic idea behind VPFS is similar to that of a VPN, which securely reuses an untrusted network: the core component of VPFS implements all functionality and cryptographic algorithms that an application needs to rely upon for confidentiality and integrity of file system contents. This security-critical core reuses a much more complex, and therefore untrusted, file system stack for non-critical functionality and access to the storage device. Additional trusted components ensure recoverability.
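A minimal sketch of the split the VPFS abstract describes: a small trusted core encrypts and authenticates every block before handing it to the untrusted file system stack (a plain dict stands in for that stack here), and verifies integrity on the way back. The names and block layout are illustrative assumptions, not VPFS's actual on-disk format; the pyca/cryptography package is assumed.

```python
# Sketch of a trusted core reusing untrusted storage: the key never
# leaves the core, and every block is encrypted and authenticated
# before it crosses into the untrusted file system stack.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class TrustedCore:
    def __init__(self, untrusted_store: dict):
        self._key = AESGCM.generate_key(bit_length=256)  # never leaves the core
        self._store = untrusted_store                    # complex, untrusted code

    def write_block(self, block_id: int, data: bytes):
        nonce = os.urandom(12)
        aad = block_id.to_bytes(8, "big")                # bind ciphertext to its slot
        self._store[block_id] = nonce + AESGCM(self._key).encrypt(nonce, data, aad)

    def read_block(self, block_id: int) -> bytes:
        blob = self._store[block_id]
        aad = block_id.to_bytes(8, "big")
        # raises InvalidTag if the untrusted stack tampered with or swapped blocks
        return AESGCM(self._key).decrypt(blob[:12], blob[12:], aad)

fs = TrustedCore(untrusted_store={})
fs.write_block(7, b"confidential contents")
assert fs.read_block(7) == b"confidential contents"
```

Binding the block number into the authenticated data is what stops the untrusted stack from silently swapping two individually valid ciphertext blocks.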
16

Trustworthy services through attestation

Lyle, John January 2011
Remote attestation is a promising mechanism for assurance of distributed systems. It allows users to identify the software running on a remote system before trusting it with an important task. This functionality is arriving at exactly the right time, as security-critical systems, such as healthcare and financial services, are increasingly being hosted online. However, attestation has limitations and has been criticized for being impractical: too much effort is required for too little reward, as a large, rapidly changing list of software must be maintained by users, who then have insufficient information to make a trust decision. As a result, attestation is rarely used today. This thesis evaluates attestation in a service-oriented context to determine whether it can be made practical for assurance of servers rather than client machines. There are reasons to expect that it can: servers run fewer programs, and the overhead of integrity reporting is more appropriate on a server which may be protecting important assets. However, a literature review and new experiments show that problems remain, many stemming from the large trusted computing base as well as the lack of information linking software identity to expected behaviour. Three novel solutions are proposed. First, web service middleware is restructured to minimize the software running at the endpoint, thus lowering the effort for the relying party. A key advantage of the proposed two-tier structure is that strong integrity guarantees can be made without loss of conformance with service standards. Secondly, a program modelling approach is investigated to further automate the attestation and verification process and add more information about system behaviour. Several sets of programs are modelled, including the bootloader, a web service and a menu-based shell. Finally, service behaviour is attested through source code properties established during compilation. This provides a trustworthy and verifiable connection between the identity of the software on a service platform and its expected runtime behaviour. This approach is applicable to any programming language and verification method, and has the advantage of not requiring a runtime monitor. These contributions are evaluated using an example e-voting service to show the level of assurance attestation can provide. Overall, this thesis demonstrates that attestation can be made significantly more practical through the described new techniques. Although some problems remain, with further improvements to operating systems and better software engineering methods, attestation may become a trustworthy and reliable assurance mechanism for web services.
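The core verification step attestation builds on can be illustrated compactly. The sketch below, a simplified model rather than the thesis's code, replays a measurement log through the TPM's PCR extend rule (new PCR = SHA-256 of old PCR concatenated with the measured digest) and appraises each entry against a known-good list; quote signature verification and the real event log format are omitted.

```python
# Sketch of attestation appraisal: replay the measurement log through
# the PCR extend rule, compare with the quoted PCR, then check every
# measured digest against a known-good list.
import hashlib

def replay_log(measurements: list[bytes]) -> bytes:
    pcr = bytes(32)                               # PCRs start at all zeros
    for digest in measurements:
        pcr = hashlib.sha256(pcr + digest).digest()
    return pcr

def appraise(measurements, quoted_pcr: bytes, known_good: set) -> bool:
    if replay_log(measurements) != quoted_pcr:    # log and quote disagree
        return False
    return all(d in known_good for d in measurements)

boot = [hashlib.sha256(c).digest() for c in (b"bootloader", b"kernel", b"service")]
print(appraise(boot, replay_log(boot), set(boot)))  # True for this self-consistent example
```

The thesis's criticism of the client-side approach is visible even in this toy: the appraiser needs a complete, up-to-date `known_good` set, which is far easier to maintain for a small server software stack than for a general-purpose desktop.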
17

Towards a trusted grid architecture

Cooper, Andrew January 2010
The malicious host problem is challenging in distributed systems such as grids and clouds. Rival organisations may share the same physical infrastructure. Administrators might deliberately or accidentally compromise users' data. The thesis concerns the development of a security architecture that allows users to place a high degree of trust in remote systems to process their data securely. The problem is tackled through a new security layer that ensures users' data can only be accessed within a trusted execution environment. Access to encrypted programs and data is authorised by a key management service using trusted computing attestation. Strong data integrity and confidentiality protection on remote hosts is provided by the job security manager virtual machine. The trusted grid architecture supports the enforcement of digital rights management controls. Subgrids allow users to define a strong trusted boundary for delegated grid jobs. Recipient keys enforce a trusted return path for job results to help users create secure grid workflows. Mandatory access controls allow stakeholders to mandate the software that is available to grid users. A key goal of the new architecture is backwards compatibility with existing grid infrastructure and data. This is achieved using a novel virtualisation architecture where the security layer is pushed down to the remote host, so it does not need to be pre-installed by the service provider. A new attestation scheme, called origin attestation, supports the execution of unmodified, legacy grid jobs. These features will ease the transition to a trusted grid and help make it practical for deployment on a global scale.
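The key-release decision at the heart of this architecture can be sketched as follows. This is a hedged illustration with invented names (`KeyManagementService`, `release_key`), not the thesis's interface: the service stores a job's key and hands it out only to hosts whose attested state satisfies the job owner's policy.

```python
# Hypothetical sketch of attestation-gated key release: a trusted key
# management service releases a job's decryption key only to hosts
# whose attested state matches policy. Attestation transport and
# quote signatures are abstracted away.
import hashlib, os

class KeyManagementService:
    def __init__(self):
        self._job_keys = {}        # job_id -> key, held by the trusted KMS
        self._policies = {}        # job_id -> set of acceptable host states

    def register_job(self, job_id: str, trusted_states: set) -> bytes:
        key = os.urandom(32)
        self._job_keys[job_id] = key
        self._policies[job_id] = trusted_states
        return key                 # owner encrypts program and data with it

    def release_key(self, job_id: str, attested_state: bytes) -> bytes:
        if attested_state not in self._policies[job_id]:
            raise PermissionError("host not in a trusted execution state")
        return self._job_keys[job_id]

good = hashlib.sha256(b"job-security-manager-vm").digest()
kms = KeyManagementService()
kms.register_job("job-42", {good})
assert kms.release_key("job-42", good)      # a trusted host obtains the key
```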
18

Integrity Verification of Applications on RADIUM Architecture

Tarigopula, Mohan Krishna 08 1900
Trusted Computing capability has become ubiquitous, and it is being widely deployed in consumer devices as well as enterprise platforms. As the number of threats increases at an exponential rate, securing systems against them is becoming a daunting task. In this context, software integrity measurement at runtime, with the support of trusted platforms, can be a better security strategy. Trusted Computing devices like the TPM secure the evidence of a breach or an attack, and they remain tamper-proof as long as the hardware platform is physically secured. This type of trusted security is crucial for forensic analysis in the aftermath of a breach. The advantages of trusted platforms can be further leveraged if they are used wisely. RADIUM (Race-free on-demand Integrity Measurement Architecture) is one such architecture, built on the strength of the TPM. RADIUM provides an asynchronous root of trust to overcome the time-of-check-to-time-of-use (TOCTOU) problem of DRTM. Even though the underlying architecture is trusted, attacks can still compromise applications at runtime by exploiting their vulnerabilities. I propose an application-level integrity measurement solution that fits into RADIUM, to extend trusted computing capability to the application layer. This is based on the concept of program invariants, which can be used to learn the correct behavior of an application. I used Daikon, a tool that infers likely dynamic invariants, and developed a method of observing these properties at runtime to verify integrity. The integrity measurement component was implemented as a Python module on top of Volatility, a virtual machine introspection tool. My approach is a first step towards integrity attestation using hypervisor-based introspection on RADIUM, and a proof of concept of application-level measurement capability.
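To illustrate the invariant-based check described above, here is a toy sketch: it "learns" the kind of simple likely invariants Daikon reports (reduced here to value ranges) from clean runs, then flags runtime observations that violate them. The real system derives invariants with Daikon and observes program state through Volatility-based introspection; none of that machinery is modeled here, and all names are illustrative.

```python
# Toy invariant learning and runtime checking. A real deployment would
# take invariants from Daikon and observations from VM introspection.
class RangeInvariant:
    def __init__(self):
        self.lo, self.hi = float("inf"), float("-inf")

    def learn(self, value: float):
        self.lo, self.hi = min(self.lo, value), max(self.hi, value)

    def holds(self, value: float) -> bool:
        return self.lo <= value <= self.hi

invariants = {"request_len": RangeInvariant()}
for clean_value in (120, 64, 512, 300):          # observations from clean runs
    invariants["request_len"].learn(clean_value)

print(invariants["request_len"].holds(256))     # True: within the learned range
print(invariants["request_len"].holds(65536))   # False: a possible compromise
```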
19

Trusted Computing & Digital Rights Management : Theory & Effects

Gustafsson, Daniel, Stewén, Tomas January 2004
Trusted Computing Platform Alliance (TCPA), now known as Trusted Computing Group (TCG), is a trusted computing initiative created by IBM, Microsoft, HP, Compaq, Intel and several other smaller companies. Their goal is to create a secure trusted platform with the help of new hardware and software. TCG has developed a new chip, the Trusted Platform Module (TPM), which is the core of this initiative and is attached to the motherboard. An analysis is made of TCG's specifications, and a summary is written of the different parts and functionalities implemented by this group. Microsoft is developing an operating system that can make use of TCG's TPM and other new hardware. This operating system initiative is called NGSCB (Next Generation Secure Computing Base), formerly known as Palladium. This implementation makes use of TCG's main functionalities with a few additions. An analysis is made of Microsoft's NGSCB specifications, and a summary is written of how this operating system will work. NGSCB is expected in Microsoft's next operating system, Longhorn version 2, which will have its release no sooner than 2006. Intel has developed hardware needed for a trusted platform and has come up with a template for how operating system developers should implement their OS to make use of this hardware. TCG's TPM is also a part of this system, which is called LaGrande. An analysis is also made of this trusted computing initiative, and it is summarized. This initiative is very similar to NGSCB, but Microsoft and Intel are not willing to comment on that. DRM (Digital Rights Management) is a technology that protects digital content (audio, video, documents, e-books, etc.) with rights. A DRM system manages the rights connected to the content and provides security for it through encryption. First, Microsoft's RMS (Rights Management System), which controls the rights of documents within a company, is considered. Second, a general digital media DRM system that handles e-commerce for digital content is considered. Analysis and discussion are made of what effects TC (Trusted Computing) and DRM could have for home users, companies and suppliers of TC hardware and software. The different questions stated in the problem formulation are also discussed and considered. There are good and bad effects for every group, but if TC works as stated today, the pros will outweigh the cons. The same goes for DRM on a TC platform. Since the benefits outweigh the drawbacks, we think that TC should be completed and taken into production. TC and DRM provide a good base of security, and it is then up to developers to use this in a good and responsible way.
20

Improving System Security Through TCB Reduction

Kauer, Bernhard 16 April 2015 (PDF)
The OS (operating system) is the primary target of today's attacks. A single exploitable defect can be sufficient to break the security of the system and give full control over all the software on the machine. Because current operating systems are too large to be defect-free, the best approach to improving system security is to reduce their code to more manageable levels. This work shows how the security-critical part of the OS, the so-called TCB (Trusted Computing Base), can be reduced from millions to fewer than a hundred thousand lines of code to achieve these security goals. Shrinking the software stack by more than an order of magnitude is an open challenge, since no single technique can currently achieve this. We therefore followed a holistic approach and improved the design as well as the implementation of several system layers, starting with a new OS called NOVA. NOVA provides a small TCB both for newly written applications and for legacy code running inside virtual machines. Virtualization is thereby the key technique to ensure that compatibility requirements will not increase the minimal TCB of our system. The main contribution of this work is to show how the virtual machine monitor for NOVA was implemented with significantly fewer lines of code without affecting the performance of its guest OS. To reduce the overall TCB of our system, other parts had to be improved as well. Additional contributions are the simplification of the OS debugging interface, the reduction of the boot stack, and a new programming language called B1 that can be more easily compiled.
