101 |
Development of advanced cryptographic algorithms using number theoretic transforms. Yang, Xiao Bo, January 2008
No description available.
|
102 |
New attacks on FCSR-based stream ciphers. Ali, Arshad, January 2011
This thesis presents a new family of cryptanalytic attacks on a class of binary additive synchronous stream ciphers, the theory of which is based on the properties of 2-adic numbers. We refer to this new family of cryptanalytic attacks as State Transition Attacks (STAs); we identify three variants of this class of attack, namely Conventional State Transition Attacks (CSTAs), Fast State Transition Attacks (FSTAs) and Improved State Transition Attacks (ISTAs). These attack variants give rise to trade-offs between data, time and memory complexities. The thesis describes STAs on a class of binary additive synchronous stream ciphers whose keystream generators use l-sequences, which are generated by binary Feedback with Carry Shift Registers (FCSRs). A new theory of linearisation intervals for FCSR state update functions is also presented, and results on correlations between the feedback bit and the Hamming weights of the main and carry registers of Galois FCSRs are developed. These theoretical findings are used to cryptanalyse an eSTREAM candidate known as F-FCSR-H v2, as well as two variants of this cipher, known as F-FCSR-H and F-FCSR-16. This cryptanalysis yields State Recovery Algorithms (SRAs) for these ciphers. The cryptanalytic attacks on F-FCSR-H v2, F-FCSR-H and F-FCSR-16 presented in this thesis are the most efficient attacks known so far on these ciphers. The thesis also presents an FCSR key recovery algorithm which works in conjunction with the SRAs in order to recover the effective key used in these ciphers. The thesis also presents various techniques that can be considered prerequisites for simulating new attacks on FCSR-based stream ciphers. In order to describe these techniques, the thesis defines a small-scale variant of the F-FCSR-H type keystream generators and names it the T-cipher. The thesis develops a statistical analysis for the T-cipher and uses it to describe various aspects of the sequences generated by such ciphers. These include computing the frequency distribution of linearisation intervals, and formulating and solving systems of equations over these intervals. The thesis further presents enumeration and pseudocode algorithms for solving systems of equations over the finite field F2.
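As background for the FCSR constructions discussed above, the following is a minimal sketch of a Fibonacci-style feedback-with-carry shift register, showing how the integer carry (memory) enters the 2-adic state update. The F-FCSR family itself uses a Galois architecture with per-cell carries, and the taps and initial state below are toy values, not parameters of F-FCSR-H v2.

```python
# Minimal sketch of a Fibonacci-style FCSR keystream generator (illustrative only;
# the F-FCSR ciphers use a Galois architecture with per-cell carry bits).
def fcsr_keystream(taps, state, memory, nbits):
    """taps = [q_1..q_r], state = [a_{n-1}..a_{n-r}] (0/1 lists of equal length),
    memory = integer carry; returns nbits keystream bits."""
    out = []
    for _ in range(nbits):
        # sigma_n = sum_i q_i * a_{n-i} + m_{n-1}   (ordinary integer sum)
        sigma = sum(q * a for q, a in zip(taps, state)) + memory
        bit = sigma & 1          # a_n = sigma mod 2
        memory = sigma >> 1      # m_n = (sigma - a_n) / 2
        state = [bit] + state[:-1]
        out.append(bit)
    return out

# Hypothetical toy parameters, not taken from any F-FCSR variant.
print(fcsr_keystream(taps=[1, 0, 1, 1], state=[1, 0, 0, 1], memory=0, nbits=16))
```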
|
103 |
Formal analysis of modern security protocols in current standards. Horvat, Marko, January 2015
While research has been done in the past on evaluating standardised security protocols, most notably TLS, there is still room for improvement. Modern security protocols need to be rigorously and thoroughly analysed, ideally before they are widely deployed, so as to minimise the impact of often creative, powerful adversaries. We explore the potential vulnerabilities of modern security protocols specified in current standards, including TLS 1.2, TLS 1.3, and SSH. We introduce and formalise the threat of Actor Key Compromise (AKC), and show how this threat can and cannot be avoided in the protocol design stage. We identify AKC-related and other serious security flaws in protocols from the ISO/IEC 11770 standard, construct realistic exploits, and harden the protocols to ensure strong security properties. Based on our work, the ISO/IEC 11770 working group is releasing an updated version of the standard that incorporates our suggested improvements. We analyse the unilaterally and mutually authenticated modes of the TLS 1.3 Handshake and Record protocols according to revision 06 of their specification draft. We verify session key secrecy and perfect forward secrecy in both modes with respect to a powerful symbolic attacker and an unbounded number of threads. Subsequently, we model and verify the standard authenticated key exchange requirements in revision 10. We analyse a proposal for its extension and uncover a flaw in it, which directly impacts the draft of revision 11.
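To illustrate what a symbolic (Dolev-Yao style) attacker can do, here is a toy deduction procedure that saturates the attacker's knowledge under pair projection and decryption with known keys, then asks whether a secret is derivable. It is only an illustration of the attacker model, not the formal models or tools used in the thesis, and all term names are invented.

```python
# Toy symbolic deduction: can the attacker derive a term from observed messages?
# Terms are ('pair', a, b) or ('enc', msg, key); atoms are strings.
def saturate(knowledge):
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            new = set()
            if isinstance(t, tuple) and t[0] == 'pair':
                new |= {t[1], t[2]}                       # projection
            if isinstance(t, tuple) and t[0] == 'enc' and t[2] in known:
                new.add(t[1])                             # decrypt with known key
            if not new <= known:
                known |= new
                changed = True
    return known

# Hypothetical run: the session key stays secret unless the actor's key ltk leaks.
msgs = {('enc', 'k_session', 'ltk'), ('pair', 'nonce_A', 'nonce_B')}
print('k_session' in saturate(msgs))             # False: ltk not known
print('k_session' in saturate(msgs | {'ltk'}))   # True: actor key compromise
```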
|
104 |
Openness for privacy: applying open approaches to personal data challenges. Binns, Reuben, January 2015
This thesis comprises three papers undertaken as part of a PhD by publication or 'Three-Paper PhD', in addition to an introduction and conclusion. The introduction outlines the concept of Openness for Privacy, which describes a class of technological, social and policy approaches for addressing the challenges of personal data. Various manifestations of this concept are investigated in the three papers. The first paper explores the idea of 'open data for privacy', in particular the potential of machine-readable privacy notices to provide transparency and insight into organisations' uses of personal data. It provides an empirical overview of UK organisations' personal data practices. The second paper examines services which give individuals transparency and control over their digital profiles, assessing the potential benefits to industry, and the empowering potential for individuals. The first part is a user study, which tests how consumer responses to personalised targeting are affected by the degree of transparency and control they have over their profiles, with implications for digital marketing and advertising. The second part draws from qualitative data, and theoretical perspectives, to develop an account of the empowering potential of these services. The third paper concerns Privacy Impact Assessments (PIAs), a regulatory tool included in the European Union's proposed general data protection regulation reform. It assesses the potential of PIAs through concepts from regulatory theory, namely, meta-regulation and the open corporation, and outlines implications for regulators, civil society and industry.
|
105 |
The effectiveness of intrusion detection systems. Iheagwara, Charles M., January 2004
This study investigates the following hypothesis: "The effectiveness of intrusion detection systems can be improved by rethinking the way the IDS is managed and by adopting effective and systematic implementation approaches." This submission introduces the work done to show the validity of this hypothesis, demonstrates its practicability, and discusses how technical factors, local environmental (systems/network) factors, and implementation and management factors affect intrusion detection system effectiveness. We conduct studies on intrusion detection systems to expand our knowledge of their basic concepts, designs, approaches and implementation pitfalls. We analyze implementations of the major intrusion detection system approaches/products and their inherent limitations in different environments. We discuss the issues that affect intrusion detection system effectiveness and explore the dependencies on several components, each of which is different and variable in nature. We then investigate each component as a separate and independent sub-hypothesis. To provide evidence in support of the hypothesis, we conduct several studies using different approaches: experimental investigations, case studies, and analytical studies (with empirically derived arguments). We develop methodologies for testing intrusion detection systems in switched and gigabit environments and perform tests to measure their effectiveness against a wide range of tunable parameters and environmentally desirable characteristics for a broad range of known intrusions. The experimental results establish the impact of deployment techniques on intrusion detection system effectiveness. The results also establish empirical bandwidth limits for selecting appropriate intrusion detection technologies/products for highly scalable environments. Through case studies, we demonstrate how management and implementation methods affect intrusion detection system effectiveness and the return on investment. Finally, in our analytical work we illustrate how system configuration settings and local security policies affect intrusion detection system effectiveness. Together, the results provide the evidence in support of the hypothesis and, hence, we contribute to the existing body of knowledge by suggesting and demonstrating ways to improve the effectiveness of intrusion detection systems.
|
106 |
Investigations on dirty paper trellis codes for watermarking. Wang, C. K., January 2007
Recently, watermarking has been modelled as communication with side information at the transmitter. The advantage of this is that, in theory, the interference due to the cover Work or host signal can be eliminated, thereby improving the capacity of the watermarking system. Hence a number of different practical methods have been proposed, one of which is based on dirty paper trellis coding. These codes are a form of spherical code, and as such have the advantage of being robust to amplitude scaling. Dirty paper trellises have a number of design parameters. There is a lack of understanding of the influence of these parameters on performance, and this thesis attempts to address this. In particular, the thesis examines the following parameters: (i) the number of states and the number of arcs per state in the trellis, (ii) the distribution of the codewords generated by the trellis, and (iii) the cost function associated with each arc. Experimental results are provided on both synthetic signals and real images that demonstrate how performance is affected, and a number of suggestions and improved designs are discussed. First, a deeper understanding of trellis configurations is provided that serves as a foundation on which to choose the best trellis structure based on bit error rate performance and computational cost. Secondly, trellis coded modulation (TCM) is adapted for use in a dirty paper trellis. This results in an improved distribution of the codewords on the sphere, which leads to improved performance. Lastly, during embedding, the embedder usually searches for the codeword that has the highest linear correlation with the cover Work. However, this codeword may be difficult to embed due to perceptual constraints. We show that searching for a codeword that maximises a cost function based on linear correlation and perceptual distance can significantly improve performance.
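The codeword-selection idea described in the last sentences can be sketched as follows: candidate codewords are scored either by plain linear correlation with the cover Work, or by a combined cost that trades correlation against a perceptual distance. The weighting and the distance measure below are illustrative assumptions, not the thesis's exact cost function.

```python
import numpy as np

def linear_correlation(cover, codeword):
    # Normalised inner product between cover Work and candidate codeword.
    return float(np.dot(cover, codeword)) / len(cover)

def combined_cost(cover, codeword, alpha=0.5):
    # Trade correlation against a stand-in perceptual distance (Euclidean here).
    corr = linear_correlation(cover, codeword)
    perceptual = float(np.linalg.norm(cover - codeword))
    return corr - alpha * perceptual

rng = np.random.default_rng(0)
cover = rng.normal(size=64)                                   # toy cover signal
candidates = [np.sign(rng.normal(size=64)) for _ in range(8)] # toy codewords
best = max(candidates, key=lambda c: combined_cost(cover, c)) # codeword to embed
```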
|
107 |
Machine learning methods for behaviour analysis and anomaly detection in video. Isupova, Olga, January 2017
Behaviour analysis and anomaly detection are key components of intelligent vision systems. Anomaly detection can be considered from two perspectives: abnormal events can be defined as those that violate typical activities, or as a sudden change in behaviour. Topic modelling and change point detection methodologies, respectively, are employed to achieve these objectives. The thesis starts with the development of novel learning algorithms for a dynamic topic model. Topics extracted by the learning algorithms represent typical activities happening within an observed scene. These typical activities are used for likelihood computation. The likelihood serves as a normality measure in anomaly detection decision making. A novel anomaly localisation procedure is proposed. In the considered dynamic topic model the number of topics, i.e., typical activities, must be specified in advance. A novel dynamic nonparametric hierarchical Dirichlet process topic model is then developed in which the number of topics is determined from data. Conventional posterior inference algorithms require processing the whole data set through several passes, which is computationally intractable for massive or sequential data. Therefore, batch and online inference algorithms for the proposed model are developed. A novel normality measure is derived for decision making in anomaly detection. The latter part of the thesis considers behaviour analysis and anomaly detection within the change point detection methodology. A novel general framework for change point detection is introduced. Gaussian process time series data is considered, and a change is defined as an alteration in the hyperparameters of the Gaussian process prior. The problem is formulated in the context of statistical hypothesis testing, and several tests suitable both for offline and online data processing and for multiple change point detection are proposed. Theoretical properties of the proposed tests are derived based on the distribution of the test statistics.
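A minimal sketch of the likelihood-based normality measure mentioned above: with a fixed topic-word matrix and topic mixture, a video clip's bag of visual words is scored by its log-likelihood, and clips scoring below a threshold are flagged as abnormal. The model parameters, counts and threshold are toy assumptions, not the thesis's learned model.

```python
import numpy as np

def log_likelihood(word_counts, phi, theta):
    # phi[k, w] = p(word w | topic k); theta[k] = mixture weight of topic k.
    word_probs = theta @ phi                       # p(word), marginalised over topics
    return float(np.sum(word_counts * np.log(word_probs + 1e-12)))

phi = np.array([[0.7, 0.2, 0.1],                   # topic 0: "typical activity A"
                [0.1, 0.2, 0.7]])                  # topic 1: "typical activity B"
theta = np.array([0.6, 0.4])
normal_clip = np.array([14, 4, 4])                 # visual-word counts for a clip
odd_clip    = np.array([1, 20, 1])
threshold = -30.0                                  # assumed decision threshold
for clip in (normal_clip, odd_clip):
    ll = log_likelihood(clip, phi, theta)
    print(ll, 'abnormal' if ll < threshold else 'normal')
```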
|
108 |
Proving cryptographic C programs secure with general-purpose verification tools. Dupressoir, François, January 2013
Security protocols, such as TLS or Kerberos, and security devices such as the Trusted Platform Module (TPM), Hardware Security Modules (HSMs) or PKCS#11 tokens, are central to many computer interactions. Yet, such security-critical components are still often found vulnerable to attack after their deployment, either because the specification is insecure or because of implementation errors. Techniques exist to construct machine-checked proofs of security properties for abstract specifications. However, this may leave the final executable code, often written in lower-level languages such as C, vulnerable both to logical errors and to low-level flaws. Recent work on verifying security properties of C code is often based on soundly extracting, from C programs, protocol models on which security properties can be proved. However, in such methods, any change in the C code, however trivial, may require one to perform a new and complex security proof. Our goal is therefore to develop or identify a framework in which security properties of cryptographic systems can be formally proved, and that can also be used to soundly verify, using existing general-purpose tools, that a C program shares the same security properties. We argue that the current state of general-purpose verification tools for the C language, as well as for functional languages, is sufficient to achieve this goal, and illustrate our argument by developing two verification frameworks around the VCC verifier. In the symbolic model, we illustrate our method by proving authentication and weak secrecy for implementations of several network security protocols. In the computational model, we illustrate our method by proving authentication and strong secrecy properties for an exemplary key management API, inspired by the TPM.
|
109 |
Towards a framework for trust negotiations in composite web services. Thomas, Anitta, 08 1900
Web Services propose a framework for the standardisation of interfaces and interaction, and for publishing software components as services on the Internet. By using this framework, composite services that make use of more than one Web service can be created. Although a composite Web service may provide a unified service to the service requestor, it cannot be viewed as a single unified entity when its trustworthiness is evaluated, since its constituent services may differ in their non-functional attributes. Based on the context of trust (which includes security, reliability, quality of service, etc.), information has to be collected from the constituent services of a composite Web service. Trust negotiation is performed to gather information from these services, with the ultimate goal of establishing trust relationships with them. In this dissertation, the Web Services framework is analysed to determine its support for trust negotiation in any trust context. A trust negotiation procedure is subsequently presented, which can be applied in a composite as well as an elementary Web service. / Computing / M.Sc. (Computer Science)
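One common way to realise such a trust negotiation is an iterative, policy-guarded exchange of credentials. The toy simulation below uses an eager disclosure strategy, where each party releases a credential as soon as its release policy is satisfied by what the other party has already disclosed; the credential names and policies are invented for illustration and are not taken from the dissertation's procedure.

```python
# Toy trust negotiation between a requestor and one constituent service.
# Each side maps a credential to the set of credentials the other side must
# disclose first; negotiation succeeds when the target credential is released.
def negotiate(policies_a, policies_b, target):
    disclosed_a, disclosed_b = set(), set()
    progress = True
    while progress:
        progress = False
        for cred, required in policies_a.items():
            if cred not in disclosed_a and required <= disclosed_b:
                disclosed_a.add(cred); progress = True
        for cred, required in policies_b.items():
            if cred not in disclosed_b and required <= disclosed_a:
                disclosed_b.add(cred); progress = True
        if target in disclosed_b:
            return True
    return False

requestor = {'membership_card': set(), 'credit_token': {'service_licence'}}
service   = {'service_licence': {'membership_card'}, 'grant_access': {'credit_token'}}
print(negotiate(requestor, service, 'grant_access'))  # True: trust is established
```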
|
110 |
A generic framework for process execution and secure multi-party transaction authorizationWeigold, Thomas January 2010 (has links)
Process execution engines are not only an integral part of workflow and business process management systems but are increasingly used to build process-driven applications. In other words, they are potentially used in all kinds of software across all application domains. However, contemporary process engines and workflow systems are unsuitable for use in such diverse application scenarios for several reasons. The main shortcomings can be observed in the areas of interoperability, versatility, and programmability. Therefore, this thesis makes a step away from domain-specific, monolithic workflow engines towards generic and versatile process runtime frameworks, which enable integration of process technology into all kinds of software. To achieve this, the idea and corresponding architecture of a generic and embeddable process virtual machine (ePVM), which supports defining process flows along the theoretical foundation of communicating extended finite state machines, are presented. The architecture focuses on the core process functionality such as control flow and state management, monitoring, persistence, and communication, while using JavaScript as a process definition language. This approach leads to a very generic yet easily programmable process framework. A fully functional prototype implementation of the proposed framework is provided along with multiple example applications. Despite the fact that business processes are increasingly automated and controlled by information systems, humans are still involved, directly or indirectly, in many of them. Thus, for process flows involving sensitive transactions, a highly secure authorization scheme supporting asynchronous multi-party transaction authorization must be available within process management systems. Therefore, along with the ePVM framework, this thesis presents a novel approach for secure remote multi-party transaction authentication: the zone trusted information channel (ZTIC). The ZTIC approach uniquely combines multiple desirable properties such as the highest level of security, ease of use, mobility, remote administration, and smooth integration with existing infrastructures into one device and method. Extensively evaluating both the ePVM framework and the ZTIC, this thesis shows that the ePVM in combination with the ZTIC approach represents a unique and very powerful framework for building workflow systems and process-driven applications, including support for secure multi-party transaction authorization.
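The process model underlying the ePVM can be illustrated with a minimal extended finite state machine: named states, local variables, and message-driven transitions with guards. The sketch below is a Python illustration of that idea only; the ePVM itself defines processes in JavaScript and exposes a different interface, and the two-approval authorization rule is an invented example.

```python
# Minimal sketch of a process as an extended finite state machine: states,
# local variables ("extensions"), and guarded, message-driven transitions.
class Process:
    def __init__(self):
        self.state = 'awaiting_request'
        self.vars = {'approvals': 0}

    def handle(self, message):
        if self.state == 'awaiting_request' and message == 'transaction_request':
            self.state = 'collecting_approvals'
        elif self.state == 'collecting_approvals' and message == 'approval':
            self.vars['approvals'] += 1
            if self.vars['approvals'] >= 2:      # guard: multi-party authorization
                self.state = 'authorized'
        elif message == 'reject':
            self.state = 'rejected'
        return self.state

p = Process()
for m in ['transaction_request', 'approval', 'approval']:
    print(p.handle(m))   # ends in 'authorized' after two approvals
```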
|