41

Generalised key distribution patterns

Novak, Julia January 2012 (has links)
Given a network of users with certain secure communication requirements, we examine the mathematics that underpins the distribution of the necessary secret information to enable secure communications within that network. More precisely, we let 𝒫 be a network of users and 𝒢, ℱ be some predetermined families of subsets of those users. The secret information (keys or subkeys) must be distributed in such a way that for any G ∈ 𝒢, the members of G can communicate securely among themselves without fear of the members of some F ∈ ℱ (that has no users in common with G) colluding together to either eavesdrop on what is being said (and understand the content of the message) or tamper with the message, undetected. In the case when 𝒢 and ℱ comprise all the subsets of 𝒫 that have some fixed cardinality t and w respectively, we have a well-known and much studied problem. However, in this thesis we remove these rigid cardinality constraints and make 𝒢 and ℱ as unrestricted as possible. This allows for situations where the members of 𝒢 and ℱ are completely irregular, giving a much less well-known and less studied problem. Without any regularity emanating from cardinality constraints, the best approach to the study of these general structures is unclear. It is unreasonable to expect that highly regular objects (such as designs or finite geometries) play any significant role in the analysis of such potentially irregular structures. Thus, we require some new techniques and a more general approach. In this thesis we use methods from set theory and ideas from convex analysis in order to construct these general structures and provide some mathematical insight into their behaviour. Furthermore, we analyse these general structures by exploiting the proof techniques of other authors in new ways, tightening existing inequalities and generalising results from the literature.
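As an editorial sketch of the requirement described above, assuming the reconstructed notation 𝒫 (the users), 𝒢 (communicating groups) and ℱ (potential collusions), and introducing k_G, K_F and H purely for illustration, the condition can be phrased information-theoretically:

```latex
% Illustrative restatement only, not the thesis's formal definition:
% a coalition F disjoint from G learns nothing about the key protecting G.
\[
  \forall G \in \mathcal{G},\ \forall F \in \mathcal{F} \text{ with } F \cap G = \emptyset :
  \qquad
  H\bigl(k_G \mid K_F\bigr) = H\bigl(k_G\bigr)
\]
% where k_G is the key enabling secure communication within G, K_F is the set of
% keys or subkeys held collectively by the members of F, and H is Shannon entropy.
```

This is only one common way of formalising such secrecy requirements; the thesis's own definitions of generalised key distribution patterns may differ in detail.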
42

Mixed radix design flow for security applications

Rafiev, Ashur January 2011 (has links)
The purpose of secure devices, such as smartcards, is to protect sensitive information against software and hardware attacks. Implementation of the appropriate protection techniques often implies non-standard methods that are not supported by the conventional design tools. Over the past decade the designers of secure devices have been working hard on customising the workflow. The presented research aims at collecting the up-to-date experience in this area and creating a generic approach to the secure design flow that can be used as guidance by engineers. Well-known countermeasures to hardware attacks imply the use of specific signal encodings. Therefore, multi-valued logic has been considered as a primary aspect of the secure design. The choice of radix is crucial for multi-valued logic synthesis. Practical examples reveal that it is not always possible to find the optimal radix when taking into account actual physical parameters of multi-valued operations. In other words, each radix has its advantages and disadvantages. Our proposal is to synthesise logic in different radices, so that it can benefit from their combination. With respect to the design opportunities of the existing tools and the possibilities of developing new tools that would fill the gaps in the flow, two distinct design approaches have been formed: conversion driven design and pre-synthesis. The conversion driven design approach takes the outputs of mature and time-proven electronic design automation (EDA) synthesis tools to generate mixed radix datapath circuits in an endeavour to investigate the relative advantages and disadvantages. An algorithm underpinning the approach is presented and formally described, together with secure gate-level implementations. The obtained results are reported, showing an increase in power consumption and thus giving further motivation for the second approach. The pre-synthesis approach is aimed at improving efficiency by using multi-valued logic synthesis techniques to produce an abstract component-level circuit before mapping it onto a technology library. Reed-Muller expansions over Galois field arithmetic have been chosen as a theoretical foundation for this approach. In order to enable the combination of radices at the mathematical level, the multi-valued Reed-Muller expansions have been developed into mixed radix Reed-Muller expansions. The goals of the work are to estimate the potential of the new approach and to analyse its impact on circuit parameters down to the level of physical gates. The benchmark results show that the approach extends the search space for optimisation and provides information on how the implemented functions are related to different radices. The theory of two-level radix models and the corresponding computation methods are the primary theoretical contribution. It has been implemented in the RMMixed tool and interfaced to the standard EDA tools to form a complete security-aware design flow.
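The following minimal Python sketch illustrates the general idea of a mixed-radix representation, in which each digit position may use a different radix. It is an editorial illustration only, unrelated to the RMMixed tool or the synthesis flow developed in the thesis.

```python
def to_mixed_radix(value, radices):
    """Decompose a non-negative integer into mixed-radix digits.

    `radices` lists the radix of each position, least significant first,
    e.g. [2, 3, 2] mixes binary and ternary digit positions.
    """
    digits = []
    for r in radices:
        value, digit = divmod(value, r)
        digits.append(digit)
    if value != 0:
        raise ValueError("value does not fit in the given radices")
    return digits  # least significant digit first


def from_mixed_radix(digits, radices):
    """Recombine mixed-radix digits back into an integer."""
    value, weight = 0, 1
    for digit, r in zip(digits, radices):
        value += digit * weight
        weight *= r
    return value


# Example: decimal 11 with position radices [2, 3, 2] -> digits [1, 2, 1].
assert to_mixed_radix(11, [2, 3, 2]) == [1, 2, 1]
assert from_mixed_radix([1, 2, 1], [2, 3, 2]) == 11
```

The same combination idea, applied at the level of logic synthesis rather than integer encoding, is what the conversion driven design and pre-synthesis approaches explore.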
43

A security infrastructure for dynamic virtual organizations

Hassan, Mohammad Waseem January 2008 (has links)
This thesis demonstrates how security can be provided in dynamic Virtual Organizations (dVOs) for enabling transient collaborations. The problem associated with the provision of security in dVOs is the absence of trust relationships among prospective dVO partners, who are potentially strangers. The traditional methods of providing security in a distributed setting and the existing dVO security approaches are inadequate in open environments where dVOs prevail. This has led to the use of alternative technologies. In this thesis the use of the Trust Management technique, based on the notions of trust and reputation as found in social systems, has been advocated. So far numerous studies on the formalisation of the social phenomena of trust and reputation have been carried out in different environments, e.g. P2P (Peer-to-Peer), MANET (Mobile Ad-hoc Networks) and pervasive computing. However, very little is known about building an architecture for Trust Management in scalable environments such as dVOs. Hence the main emphasis in this thesis has been to establish mechanisms for Trust Management at a global level and to carry out a study of their behaviour in a practical setting. In order to perform the above-mentioned task, a number of issues are involved, including: handling the problem of naming and identities, devising innovative strategies for trust data placement, establishing reputation and making it available in an efficient way, devising a mechanism for trust assessments that incorporates different parameters through the use of policy management techniques in different collaboration scenarios, and investigating the phenomenon of reputation evolution. For investigating the above issues, the Object-Oriented Analysis and Design (OOAD) methodology has been followed (with some extensions for service modelling) using the Unified Modelling Language (UML). The solution provides a Web Services based Trust Management Framework (TMF) comprising a scalable architecture for trust data dissemination and models for trust and reputation to address the above-mentioned issues. In order to validate the hypothesis, the evaluation of the proposed system has been carried out in terms of building initial relationships among prospective dVO partners. It was deduced from the above study that the Trust Management Framework (TMF) as proposed in this thesis could improve security in a dVO environment.
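As a hypothetical illustration of the kind of trust assessment such a Trust Management component might perform (not the trust or reputation models defined in the thesis), a toy score could blend first-hand experience with gathered reputation:

```python
def trust_score(direct_ratings, referee_reports, direct_weight=0.7):
    """Toy trust assessment combining direct experience with reputation.

    direct_ratings:  outcomes of this party's own past interactions, in [0, 1].
    referee_reports: ratings gathered from other dVO members, in [0, 1].
    direct_weight:   how much first-hand evidence outweighs hearsay.
    Invented for illustration; real models weight recency, referee
    credibility and context, and evolve reputation over time.
    """
    direct = sum(direct_ratings) / len(direct_ratings) if direct_ratings else 0.5
    reputation = sum(referee_reports) / len(referee_reports) if referee_reports else 0.5
    return direct_weight * direct + (1 - direct_weight) * reputation


# A newcomer with no direct history is judged mostly on reputation.
print(trust_score([], [0.9, 0.8, 0.7]))
print(trust_score([0.2, 0.3], [0.9, 0.8, 0.7]))
```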
44

Digital watermarking for compact discs and their effect on the error correction system

Rydyger, Kay January 2002 (has links)
A new technique, based on current compact disc technology, to image the transparent surface of a compact disc, or additionally the reflective information layer, has been designed, implemented and evaluated. This technique (the image capture technique) has been tested and successfully applied to the detection of mechanically introduced compact disc watermarks and biometrical information with a resolution of 1.6 µm x 14 µm. Software has been written which, when used with the image capture technique, recognises a compact disc based on its error distribution. The software detects digital watermarks which cause either laser signal distortions or decoding error events. Watermarks serve as secure media identifiers. The complete channel coding of a Compact Disc Audio system, including EFM modulation, error correction and interleaving, has been implemented in software. The performance of the error correction system of the compact disc has been assessed using this simulation model. An embedded data channel holding watermark data has been investigated. The covert channel is implemented by means of the error-correction ability of the Compact Disc system and was realised by the aforementioned techniques, such as engraving the reflective layer or the polysubstrate layer. Computer simulations show that watermarking schemes composed of regularly distributed single errors impose a minimum effect on the error correction system. Error rates increase by a factor of ten if regular single-symbol errors per frame are introduced; all other patterns further increase the overall error rates. Results show that background signal noise has to be reduced by 60% to account for the additional burden of this optimal watermark pattern. Two decoding strategies, usually employed in modern CD decoders, have been examined. Simulations take into account emulated bursty background noise as it appears in user-handled discs. Variations in output error rates, depending on the decoder and the type of background noise, became apparent. At low error rates (r < 0.003) the output symbol error rate for a bursty background differs by 20% depending on the decoder. Differences between a typical burst error distribution caused by user-handling and a non-burst error distribution have been found to be approximately 1% with the higher performing decoder. Simulation results show that the drop in error-correction rates due to the presence of a watermark pattern depends quantitatively on the characteristic type of the background noise. A four times smaller change to the overall error rate was observed when adding a regular watermark pattern to a characteristic background noise, as caused by user-handling, compared to a non-bursty background.
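The effect described above, a regular one-symbol-per-frame watermark competing with background noise for the correction capacity of each frame, can be illustrated with a deliberately simplified toy model. The frame size, correction capability and error model below are placeholders, not the CIRC channel-coding simulation built in the thesis.

```python
import random

def frames_lost(num_frames, symbols_per_frame, correctable, background_rate,
                watermark=False, seed=0):
    """Count frames whose symbol errors exceed the correction capability.

    Each frame holds `symbols_per_frame` symbols protected by a code assumed to
    correct up to `correctable` symbol errors per frame. Background noise
    corrupts each symbol independently with probability `background_rate`;
    the watermark adds one deliberate symbol error to every frame.
    """
    rng = random.Random(seed)
    lost = 0
    for _ in range(num_frames):
        errors = sum(rng.random() < background_rate for _ in range(symbols_per_frame))
        if watermark:
            errors += 1  # regular single-symbol error per frame
        if errors > correctable:
            lost += 1
    return lost

# Compare frame-loss counts with and without the regular watermark pattern.
print(frames_lost(10_000, 32, 4, 0.02))
print(frames_lost(10_000, 32, 4, 0.02, watermark=True))
```

Even this crude model shows why a regular single-error pattern is a comparatively gentle watermark: it only pushes over the threshold those frames that background noise has already brought close to the correction limit.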
45

Structured investigation of digital incidents in complex computing environments

Stephenson, Peter Reynolds January 2004 (has links)
No description available.
46

Data security in photonic information systems using quantum based approaches

Clarke, Patrick Joseph January 2013 (has links)
The last two decades have seen a revolution in how information is stored and transmitted across the world. In this digital age, it is vital for banking systems, governments and businesses that this information can be transmitted to authorised receivers quickly and efficiently. Current classical cryptosystems rely on the computational difficulty of calculating certain mathematical functions, but with the advent of quantum computers implementing efficient quantum algorithms, these systems could be rendered insecure overnight. Quantum mechanics thankfully also provides the solution, in which information is transmitted on single photons called qubits and any attempt by an adversary to gain information on these qubits is limited by the laws of quantum mechanics. This thesis looks at three distinct quantum information experiments. Two of the systems implement the distribution of quantum keys, in which the presence of an eavesdropper introduces unavoidable errors by the laws of quantum mechanics. The first scheme used a quantum dot in a micropillar cavity as a single-photon source. A polarisation encoding scheme was used for implementing the BB84 quantum cryptographic protocol, which operated at a wavelength of 905 nm and a clock frequency of 40 MHz. A second system implemented phase encoding using asymmetric (unbalanced) Mach-Zehnder interferometers with a weak coherent source, operating at a wavelength of 850 nm and pulsed at a clock rate of 1 GHz. The system used depolarised light propagating in the fibre quantum channel. This helps to eliminate the random evolution of the state of polarisation of photons as a result of stress-induced changes in the intrinsic birefringence of the fibre. The system operated completely autonomously, using custom software to compensate for path length fluctuations in the arms of the interferometer, and used a variety of different single-photon detector technologies. The final quantum information scheme looked at quantum digital signatures, which allow a sender, Alice, to distribute quantum signatures to two parties, Bob and Charlie, such that they are able to authenticate that the message originated from Alice and that the message was not altered in transmission.
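For readers unfamiliar with BB84, the sifting step at the heart of such polarisation- or phase-encoded systems can be sketched as follows. This is a textbook idealisation (lossless channel, no eavesdropper, no error estimation or privacy amplification), not the experimental systems described above.

```python
import secrets

BASES = ("rectilinear", "diagonal")

def bb84_sift(n_pulses):
    """Toy BB84 basis sifting.

    Alice encodes each random bit in a randomly chosen basis; Bob measures
    in his own random basis. Afterwards they compare bases publicly and keep
    only the positions where the bases agree, yielding the sifted key.
    """
    alice_bits = [secrets.randbelow(2) for _ in range(n_pulses)]
    alice_bases = [secrets.choice(BASES) for _ in range(n_pulses)]
    bob_bases = [secrets.choice(BASES) for _ in range(n_pulses)]

    sifted = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
    return sifted  # on average, half of the pulses survive sifting

print(len(bb84_sift(1000)))
```

In the real systems the sifted key is further processed with error correction and privacy amplification, and any eavesdropping attempt shows up as an elevated quantum bit error rate.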
47

Accelerating fully homomorphic encryption over the integers

Moore, Ciara Marie January 2015 (has links)
Many organisations are moving towards using cloud storage and cloud computation services. This raises the important issue of data security and privacy. Fully homomorphic encryption (FHE) is a privacy-preserving technique which allows computations on encrypted data without the use of a decryption key. FHE schemes have widespread applications, from secure cloud computation to the secure access of medical records for statistical purposes. However, current software implementations of FHE schemes are not practical for real-time applications due to slow performance and the inherently large parameter sizes required to guarantee an adequate level of security. Therefore, in this thesis, algorithmic and architectural optimisations of FHE hardware designs are proposed to improve the performance of these schemes, targeting the FPGA platform. The focus is on FHE over the integers. The first reported hardware designs of the encryption step of the integer-based FHE scheme are proposed, incorporating Comba and FFT multiplication methods. These designs achieve speed-up factors of up to 13 and 45 respectively compared to the existing benchmark software implementation. A novel design in which a low Hamming weight multiplicand is incorporated into the multiplication required in the encryption step is also proposed to further enhance performance, and a speed-up factor of up to 130 is attained. A comprehensive analysis of multiplication techniques for large integers is presented to determine the most suitable hardware building blocks for FHE operations. Hardware designs of novel combinations of multiplication methods are proposed for this purpose. For some applications, these combined multiplier architectures are shown to perform better than architectures using individual multiplication methods. Throughout this research, it is shown that optimised hardware architectures of FHE schemes can greatly improve practicality; significant speed-up factors, ranging up to 130, are achieved with the hardware design of the encryption step of the integer-based FHE scheme.
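In the integer-based (DGHV-style) setting, the encryption step being accelerated is essentially arithmetic on very large integers. The toy sketch below shows the symmetric-key variant of such a scheme with deliberately tiny, insecure parameters, purely to illustrate the shape of the computation; the thesis targets hardware acceleration of the full-size public-key version.

```python
import secrets

def keygen(eta=64):
    """Toy secret key: a random odd eta-bit integer (far too small to be secure)."""
    return secrets.randbits(eta) | 1 | (1 << (eta - 1))

def encrypt(p, m, rho=16, gamma=256):
    """Encrypt a bit m as c = q*p + 2*r + m, with a large random q and small noise r."""
    assert m in (0, 1)
    q = secrets.randbits(gamma)
    r = secrets.randbelow(1 << rho)
    return q * p + 2 * r + m

def decrypt(p, c):
    """Recover the bit: reduce mod p (removing q*p), then mod 2 (removing 2*r)."""
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0) == 0 and decrypt(p, c1) == 1
assert decrypt(p, c0 + c1) == 1   # ciphertext addition acts as XOR on the bits
assert decrypt(p, c0 * c1) == 0   # ciphertext multiplication acts as AND
```

At realistic security levels the integers involved run to hundreds of thousands or millions of bits, which is why Comba, FFT and low Hamming weight multiplier designs dominate the cost of the encryption step.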
48

Obfuscation of abstract data-types

Drape, Stephen January 2004 (has links)
No description available.
49

User-controlled access management to resources on the Web

Machulak, Maciej Pawel January 2014 (has links)
The rapidly developing Web environment provides users with a wide set of rich services as varied and complex as desktop applications. Those services are collectively referred to as "Web 2.0", with such examples as Facebook, Google Apps, Salesforce, or Wordpress, among many others. These applications are used for creating, managing, and sharing online data between users and services on the Web. With the shift from desktop computers to the Web, users create and store more of their data online rather than on the hard drives of their computers. This data includes personal information, documents, photos, and other resources. Irrespective of the environment, either the desktop or the Web, it is the user who creates the data, disseminates it and shares it. On the Web, however, sharing resources poses new security and usability challenges which were not present in traditional computing. Access control, also known as authorisation, which aims to protect such sharing, is currently poorly addressed in this environment. Existing access control is often not well suited to the increasing amount of highly distributed Web data and does not give users the required flexibility in managing their data. This thesis discusses new solutions to access control for the Web. Firstly, it presents a proposal named User-Managed Access Control (UMAC) together with its architecture and protocol. The thesis then focuses on the User-Managed Access (UMA) solution that is researched by the User-Managed Access Work Group at Kantara Initiative. The UMA approach allows the user to play a pivotal role in assigning access rights to their resources, which may be spread across multiple cloud-based Web applications. Unlike existing authorisation systems, it relies on a user's centrally located security requirements for these resources. The security requirements are expressed in the form of access control policies and are stored and evaluated in a specialised component called the Authorisation Manager. Users are provided with a consistent user experience for managing access control for their distributed online data and with a holistic view of the security applied to this data. Furthermore, this thesis presents the software that implements the UMA proposal. In particular, it shows frameworks that allow Web applications to delegate their access control function to an Authorisation Manager. It also presents the design and implementation of an Authorisation Manager and discusses its evaluation, conducted with a user study. It then discusses the design and implementation of a second, improved Authorisation Manager. Furthermore, this thesis presents the applicability of the UMA approach and the implemented software to real-world scenarios.
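As a hypothetical illustration of the central idea, a host application delegating its authorisation decisions to a user-configured Authorisation Manager, consider the following toy sketch. The class and policy structure here are invented for illustration; they are not the UMA protocol messages or the thesis's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """A user-authored access control policy held by the Authorisation Manager."""
    resource_id: str
    allowed_requesters: set = field(default_factory=set)
    allowed_scopes: set = field(default_factory=set)

class AuthorisationManager:
    """Toy central policy decision point in the spirit of the UMA approach."""

    def __init__(self):
        self._policies = {}

    def register_policy(self, policy: Policy):
        self._policies[policy.resource_id] = policy

    def authorise(self, requester: str, resource_id: str, scope: str) -> bool:
        """Called by a Web application (host) that has delegated access control."""
        policy = self._policies.get(resource_id)
        if policy is None:
            return False  # deny by default when no policy is registered
        return requester in policy.allowed_requesters and scope in policy.allowed_scopes

am = AuthorisationManager()
am.register_policy(Policy("photo-42", {"bob@example.org"}, {"read"}))
assert am.authorise("bob@example.org", "photo-42", "read")
assert not am.authorise("eve@example.org", "photo-42", "read")
```

The benefit illustrated is the holistic view: one component holds the user's policies for resources scattered across many hosts, instead of each host keeping its own incompatible sharing settings.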
50

Dual-phase side-channel evaluations : leakage detection and exploitation

Mather, Luke January 2015 (has links)
Side-channel analysis may be used by an adversary to recover secret information from some form of environmental data emitted by a cryptographic device or application. In this thesis, we discuss some of the challenges faced by evaluation bodies attempting to certify the resistance of devices and applications to side-channel attacks, with relevance to the development of the Common Criteria version 3.1 and FIPS 140-3 standardisation documents. We separate this question into two components: identifying the presence of information leakage in a detection phase, and determining the exact level of the resistance of a device in an exploitation phase. We explore these two components when applied to information leakage in cryptographic hardware and networked web applications. For the detection phase, we demonstrate how various hypothesis tests can be used to reliably detect the presence of information leakage, either as part of a "pass or fail" style approach or to identify instances of leakage warranting further investigation. For the exploitation phase, we present a novel method for combining the results of multiple differential power analysis attacks, finding that in some cases we can dramatically increase the success rate of an adversary using the same data set. We also focus on the implications of the growth in high-performance computing technologies on the evaluation processes, demonstrating that dramatic decreases in the running time of common algorithms can be achieved using modern general purpose graphics processing unit devices and a pipelined architecture. This suggests that the efficiency of the implementation of an attack should be of concern to side-channel evaluators and researchers.
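As an illustration of the detection phase, a pointwise Welch's t-test in the style of fixed-versus-random leakage assessment can flag potentially leaking sample points. The 4.5 threshold is the value commonly used in such methodologies; this sketch is not the specific set of hypothesis tests evaluated in the thesis.

```python
import numpy as np
from scipy import stats

def detect_leakage(traces_fixed, traces_random, threshold=4.5):
    """Pointwise Welch's t-test between two sets of power traces.

    traces_fixed / traces_random: arrays of shape (n_traces, n_samples),
    e.g. traces recorded for a fixed plaintext versus random plaintexts.
    Sample points whose |t|-statistic exceeds the threshold are flagged
    as warranting further investigation.
    """
    t_stat, _ = stats.ttest_ind(traces_fixed, traces_random,
                                axis=0, equal_var=False)
    return np.flatnonzero(np.abs(t_stat) > threshold)

# Synthetic example: a small leak injected at sample 100 of the fixed set.
rng = np.random.default_rng(0)
fixed = rng.normal(size=(5000, 200))
rand = rng.normal(size=(5000, 200))
fixed[:, 100] += 0.2
print(detect_leakage(fixed, rand))   # typically reports sample index 100
```

A detection of this kind only establishes the presence of leakage; quantifying how easily it can be exploited belongs to the second phase discussed above.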
