371 |
Destructive and constructive aspects of efficient algorithms and implementation of cryptographic hardware. Meurice de Dormale, Guerric. 04 October 2007 (has links)
In an ever-increasing digital world, the need for secure communications over unsecured channels such as the Internet has exploded. To meet the different security requirements, communication devices have to perform expensive cryptographic operations. Hardware processors are therefore often needed to meet goals such as speed, ubiquity or cost-effectiveness. For such devices, the size of the security parameters is chosen as small as possible to save resources and time. It is therefore necessary to know the effective security of given sets of parameters in order to achieve the best trade-off between efficiency and security. The best way to address this problem is by means of accurate cost estimations of dedicated hardware attacks.
In this thesis, we investigate two aspects of cryptographic hardware: constructive applications that deal with general-purpose secure devices, and destructive applications that handle dedicated hardware attacks against cryptosystems. Their sets of constraints are clearly different, but both need efficient algorithms and hardware architectures.
First, we deal with efficient and novel modular inversion and division algorithms on the Field-Programmable Gate Array (FPGA) hardware platform. Such algorithms are an important building block for both constructive and destructive uses of elliptic curve cryptography.
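The thesis itself targets FPGA architectures, but the underlying algorithmic idea is easy to illustrate in software. The sketch below is a minimal, hypothetical Python rendering of a binary (shift-and-subtract) modular inversion, the family of division-free algorithms commonly used for this purpose; it is not the author's hardware design, and the demo prime is an arbitrary choice.

```python
def binary_mod_inverse(a, p):
    """Compute a^{-1} mod p (p an odd prime) using only shifts, additions and
    subtractions -- the structure favoured by division-free hardware designs."""
    u, v = a % p, p
    x1, x2 = 1, 0
    while u != 1 and v != 1:
        while u % 2 == 0:
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + p) // 2
        while v % 2 == 0:
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + p) // 2
        if u >= v:
            u, x1 = u - v, x1 - x2
        else:
            v, x2 = v - u, x2 - x1
    return x1 % p if u == 1 else x2 % p


if __name__ == "__main__":
    p = 2**255 - 19          # an arbitrary odd prime, chosen only for the demo
    a = 123456789
    inv = binary_mod_inverse(a, p)
    assert (a * inv) % p == 1
    print(hex(inv))
```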
Then, we provide new or highly improved architectures for attacks against the RC5 cipher, elliptic curves over GF(2^m), and RSA, the latter by means of efficient engines for the elliptic curve method of factorization (ECM). We show that FPGA-based solutions are far more cost-effective and power-efficient than software-based solutions. Our resulting cost assessments should serve as a basis for improving the accuracy of current hardware- or software-based security evaluations.
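The ECM engines above are hardware designs, but the arithmetic they accelerate can be sketched in a few lines. The following is a toy, illustrative Python version of ECM stage 1 on short Weierstrass curves, where a non-invertible denominator in the group law reveals a factor of n; the curve choices, smoothness bound and demo number are arbitrary toy values, not the parameters used in the thesis.

```python
import math
import random

def ecm_stage1(n, B=1000, trials=50):
    """Toy stage-1 ECM: pick random curves mod n, multiply a random point by
    small prime powers, and harvest a factor when a modular inverse fails."""

    def add(P, Q, a, n):
        """Chord-and-tangent addition mod n; returns (point, factor), where a
        non-None factor means a denominator was not invertible mod n."""
        if P is None:
            return Q, None
        if Q is None:
            return P, None
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % n == 0:
            return None, None                          # result is the point at infinity
        if P == Q:
            num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
        else:
            num, den = (y2 - y1) % n, (x2 - x1) % n
        g = math.gcd(den, n)
        if g != 1:
            return None, g                             # the "lucky failure" ECM exploits
        lam = num * pow(den, -1, n) % n
        x3 = (lam * lam - x1 - x2) % n
        return (x3, (lam * (x1 - x3) - y1) % n), None

    def mul(k, P, a, n):
        """Double-and-add scalar multiplication, propagating any found factor."""
        R, factor = None, None
        while k and factor is None:
            if k & 1:
                R, factor = add(R, P, a, n)
            if factor is None:
                P, factor = add(P, P, a, n)
            k >>= 1
        return R, factor

    for _ in range(trials):
        x, y, a = (random.randrange(n) for _ in range(3))
        P = (x, y)                                     # b is implied by the random point
        for p in range(2, B):
            if any(p % q == 0 for q in range(2, p)):   # keep only primes (toy-sized B)
                continue
            k = p ** int(math.log(B, p))               # largest power of p below B
            P, factor = mul(k, P, a, n)
            if factor is not None:
                if factor < n:
                    return factor
                break                                  # point vanished mod n; next curve
            if P is None:
                break
    return None

if __name__ == "__main__":
    print(ecm_stage1(455839))                          # 599 * 761, a classic ECM demo number
```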
Finally, we handle the efficiency-flexibility trade-off problem for high-speed hardware implementations of elliptic curve cryptography. Then, we present efficient Elliptic Curve Digital Signature Algorithm (ECDSA) coprocessors for smart cards. We also show that, surprisingly, affine coordinates can be an attractive solution for such an application.
|
372 |
A Secured Data Protocol for the Trusted Truck(R) System. Bulusu, Srinivasa Anuradha. 01 December 2010 (has links)
Security has become one of the major concerns in Intelligent Transportation Systems (ITS). The Trusted Truck(R) System provides an efficient wireless communication mechanism for the safe exchange of messages between moving vehicles (trucks) and roadside inspection stations. The vehicles and the station are equipped with processing units, but with different computational capabilities. To make the Trusted Truck(R) system more secure, this thesis proposes a secured data protocol which ensures data integrity, message authentication and non-repudiation. The protocol is distinctive in that it is cost-effective, resource-efficient, and embeds itself into the Trusted Truck(R) environment without demanding any additional infrastructure. The protocol also balances the computational load between the vehicle and the station by incorporating an innovative key transport mechanism. Digital signatures and encryption techniques are used for authentication and data confidentiality. Cryptographic algorithms along with optimization methods are used for the digital signatures, and the computational times of the algorithms are analyzed. Combining all these techniques, an efficient secured data protocol is developed and implemented successfully.
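The abstract does not give the protocol's message formats, so the following is only a generic, hypothetical illustration of the signature step that provides integrity, authentication and non-repudiation: the vehicle signs a hash of the message with a textbook RSA private key (toy-sized Mersenne primes, no padding), and the inspection station verifies it with the public key. It is not the Trusted Truck(R) protocol itself; a production design would use a standardized, padded signature scheme with full-size keys.

```python
import hashlib

# Toy RSA key, for illustration only.
p, q = 2**31 - 1, 2**61 - 1          # Mersenne primes, far too small for real use
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def digest(message: bytes) -> int:
    # Hash reduced mod n only because the toy modulus is tiny.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def vehicle_sign(message: bytes) -> int:
    """Vehicle signs the message digest with its private exponent."""
    return pow(digest(message), d, n)

def station_verify(message: bytes, signature: int) -> bool:
    """Roadside station checks the signature with the vehicle's public key."""
    return pow(signature, e, n) == digest(message)

msg = b"inspection report: brakes OK, load 18.2 t"
sig = vehicle_sign(msg)
print(station_verify(msg, sig))            # True
print(station_verify(msg + b"!", sig))     # False: tampering is detected
```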
|
373 |
Étude du métamorphisme viral : modélisation, conception et détection [Study of viral metamorphism: modeling, design and detection]. Borello, Jean-Marie. 01 April 2011 (has links) (PDF)
Protection against malicious code is a major challenge. The recent examples of the Conficker and Stuxnet worms are enough to show that any information system can now be the target of this kind of attack. This thesis therefore addresses the threat posed by malicious code, and in particular the case of metamorphism, which is the culmination of code-evolution (obfuscation) techniques that allow a program to evade detection. We approach metamorphism from two directions. In the first part, we build a metamorphic engine in order to estimate its offensive potential. To this end, we propose a code obfuscation technique whose inverse transformation is shown to be NP-complete in the setting of static analysis. We then apply this engine to a previously detected malware sample in order to evaluate the capabilities of current detection tools. In the second part, we seek to detect not only the variants generated by our metamorphic engine, but also those derived from known malware. For this, we propose a dynamic detection approach based on behavioral similarity between programs. We use Kolmogorov complexity to define a new similarity measure obtained by lossless compression. Finally, a malware detection prototype is proposed and evaluated.
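Kolmogorov complexity is not computable, so compression-based similarity measures of this kind are usually approximated with an off-the-shelf lossless compressor. The sketch below is a generic, hypothetical illustration (not the author's prototype) using the normalized compression distance with zlib: two behavior traces that share most of their structure compress much better together than two unrelated ones.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: approximates Kolmogorov-based
    similarity with a real compressor; values near 0 mean "very similar"."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical behavior traces (sequences of observed API calls).
trace_a = b"open;read;decrypt;write;connect;send;" * 40
trace_b = b"open;read;decrypt;write;connect;send;sleep;" * 40   # slight variant
trace_c = b"alloc;draw;draw;draw;refresh;" * 40                 # unrelated program

print(ncd(trace_a, trace_b))   # small: behaviorally close
print(ncd(trace_a, trace_c))   # larger: behaviorally distant
```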
|
374 |
Implementing the Schoof-Elkies-Atkin Algorithm with NTL. Kok, Yik Siong. 25 April 2013 (has links)
In elliptic curve cryptography, cryptosystems are based on an additive subgroup of an elliptic curve defined over a finite field, and the hardness of the Elliptic Curve Discrete Logarithm Problem is dependent on the order of this subgroup. In particular, we often want to find a subgroup with large prime order. Hence when finding a suitable curve for cryptography, counting the number of points on the curve is an essential step in determining its security.
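For intuition only, here is a hypothetical brute-force count of the points on a small curve y^2 = x^3 + ax + b over F_p using Euler's criterion; this naive O(p) method is exactly what Schoof-style algorithms replace for primes of cryptographic size.

```python
def count_points(a: int, b: int, p: int) -> int:
    """Number of points on y^2 = x^3 + a*x + b over F_p, point at infinity included."""
    def legendre(n: int) -> int:
        # Euler's criterion: 1 if n is a nonzero square mod p, -1 if not, 0 if n == 0 mod p.
        if n % p == 0:
            return 0
        return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

    # Each x contributes 1 + legendre(x^3 + ax + b) affine points.
    return 1 + sum(1 + legendre(x * x * x + a * x + b) for x in range(p))

# Toy example: a curve over a 13-element field.
print(count_points(a=2, b=3, p=13))
```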
In 1985, René Schoof proposed the first deterministic polynomial-time algorithm for point counting on elliptic curves over finite fields. The algorithm was improved by Noam Elkies and Oliver Atkin, resulting in an algorithm which is sufficiently fast for practical purposes. The enhancements leverage the arithmetic properties of the l-th classical modular polynomial, where l is either an Elkies or an Atkin prime. As the Match-Sort algorithm associated with Atkin primes runs in exponential time, it is eschewed in common practice.
In this thesis, I will discuss my implementation of the Schoof-Elkies-Atkin algorithm in C++, which makes use of the NTL package. The implementation also supports the computation of classical modular polynomials via isogeny volcanoes, based on the methods proposed recently by Bröker, Lauter and Sutherland.
Existing complexity analysis of the Schoof-Elkies-Atkin algorithm focuses on its asymptotic performance. As such, there is no estimate of the actual impact of the Match-Sort algorithm on the running time of the Schoof-Elkies-Atkin algorithm for elliptic curves defined over prime fields of cryptographic sizes. I will provide rudimentary estimates for the largest Elkies or Atkin prime used, and discuss the variants of the Schoof-Elkies-Atkin algorithm using their run-time performances.
The running times of the SEA variants support the use of Atkin primes for prime fields of sizes up to 256 bits. At this size, the selective use of Atkin primes runs in half the time of the Elkies-only variant on average. This suggests that Atkin primes should be used in point counting on elliptic curves of cryptographic sizes.
|
375 |
Quantum Cryptography in Real-life Applications: Assumptions and Security. Zhao, Yi. 03 March 2010 (has links)
Quantum cryptography, or quantum key distribution (QKD), provides a means of unconditionally secure communication. The security is in principle based on the fundamental laws of physics. Security proofs show that if quantum cryptography is appropriately implemented, even the most powerful eavesdropper cannot decrypt the message from a cipher.
The implementations of quantum crypto-systems in real life may not fully comply with the assumptions made in the security proofs. Such a discrepancy between experiment and theory can be fatal to the security of a QKD system. In this thesis we address a number of these discrepancies.
A perfect single-photon source is often assumed in many security proofs. However, a weak coherent source is widely used in real-life QKD implementations. Decoy state protocols have been proposed as a novel approach to dramatically improve the performance of a weak-coherent-source-based QKD implementation without jeopardizing its security. Here, we present the first experimental demonstrations of decoy state protocols. Our experimental scheme was later adopted by most decoy state QKD implementations.
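The weakness that decoy states address is easy to quantify: a phase-randomized weak coherent pulse of mean photon number mu emits n photons with Poisson probability, so a non-negligible fraction of non-empty pulses carries more than one photon and is exposed to photon-number-splitting attacks. The snippet below is only an illustrative calculation with assumed intensities (mu = 0.5 for the signal, 0.1 for a decoy-like weak pulse), not the parameters of the experiments reported here.

```python
from math import exp, factorial

def poisson(n: int, mu: float) -> float:
    """P(n photons) for a phase-randomized weak coherent pulse of intensity mu."""
    return exp(-mu) * mu**n / factorial(n)

def multi_photon_fraction(mu: float) -> float:
    """Fraction of non-empty pulses containing two or more photons."""
    p_empty = poisson(0, mu)
    p_single = poisson(1, mu)
    return (1 - p_empty - p_single) / (1 - p_empty)

for mu in (0.5, 0.1):                      # assumed signal and decoy-like intensities
    print(f"mu = {mu}: {multi_photon_fraction(mu):.3f} of non-empty pulses are multi-photon")
```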
In the security proofs of decoy state protocols, as well as many other QKD protocols, it is widely assumed that the sender generates a phase-randomized coherent state. This assumption has been enforced in only a few implementations. We close this gap in two steps: first, we implement and verify the phase randomization experimentally; second, we prove the security of a QKD implementation without the coherent state assumption.
In many security proofs of QKD, it is assumed that all the detectors on the receiver's side have identical detection efficiencies. We show experimentally that this assumption may be violated in a commercial QKD implementation due to an eavesdropper's malicious manipulation. Moreover, we show that the eavesdropper can learn part of the final key shared by the legitimate users as a consequence of this violation of the assumptions.
|
376 |
Homomorphic Encryption. Weir, Brandon. January 2013 (has links)
In this thesis, we provide a summary of fully homomorphic encryption and, in particular, look at the BGV encryption scheme by Brakerski, Gentry, and Vaikuntanathan, as well as the DGHV encryption scheme by van Dijk, Gentry, Halevi, and Vaikuntanathan. We explain the mechanisms developed by Gentry in his breakthrough work, and show examples of how they are used.
While looking at the BGV encryption scheme, we make improvements to the underlying lemmas dealing with modulus switching and noise management, and show that the lemmas as currently stated are false. We then examine a lower bound on the hardness of the Learning With Errors lattice problem, and use this to develop specific parameters for the BGV encryption scheme at a variety of security levels.
We then study the DGHV encryption scheme, and show how this somewhat homomorphic encryption scheme can be implemented both as a fully homomorphic encryption scheme with bootstrapping and as a leveled fully homomorphic encryption scheme using the techniques from the BGV encryption scheme. We then extend the parameters from the optimized version of this scheme to higher security levels, and describe a more straightforward way of arriving at these parameters.
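To make the "somewhat homomorphic" part concrete, here is a toy, secret-key rendering of the DGHV idea over the integers (all parameter sizes are illustrative and far below anything secure): a bit m is hidden as m + 2r + p*q, decryption is (c mod p) mod 2, and adding or multiplying ciphertexts adds or multiplies the plaintext bits as long as the noise term stays smaller than p.

```python
import secrets

# Toy DGHV-style parameters: p is the secret key; real instantiations use
# keys and noise hundreds to thousands of bits long.
P_BITS, NOISE_BITS, Q_BITS = 64, 8, 128
p = secrets.randbits(P_BITS) | (1 << (P_BITS - 1)) | 1      # odd, full-size secret key

def encrypt(m: int) -> int:
    r = secrets.randbits(NOISE_BITS)           # small noise
    q = secrets.randbits(Q_BITS)               # large random multiple of the key
    return m + 2 * r + p * q

def decrypt(c: int) -> int:
    return (c % p) % 2                          # correct while the noise stays below p

c0, c1 = encrypt(0), encrypt(1)
print(decrypt(c0 + c1))        # XOR of the plaintext bits -> 1
print(decrypt(c0 * c1))        # AND of the plaintext bits -> 0
print(decrypt(c1 * c1))        # AND -> 1
```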
|
377 |
Leakage Resilience and Black-box Impossibility Results in Cryptography. Juma, Ali. 31 August 2011 (has links)
In this thesis, we present constructions of leakage-resilient cryptographic primitives, and we give black-box impossibility results for certain classes of constructions of pseudo-random number generators.
The traditional approach for preventing side-channel attacks has been primarily hardware-based. Recently, there has been significant progress in developing algorithmic approaches for preventing such attacks. These algorithmic approaches involve modeling side-channel attacks as leakage on the internal state of a device; constructions secure against such leakage are leakage-resilient.
We first consider the problem of storing a key and computing on it repeatedly in a leakage-resilient manner. For this purpose, we define a new primitive called a key proxy. Using a fully-homomorphic public-key encryption scheme, we construct a leakage-resilient key proxy. We work in the "only computation leaks" leakage model, tolerating a logarithmic number of bits of polynomial-time computable leakage per computation and an unbounded total amount of leakage.
We next consider the problem of verifying that a message sent over a public channel has not been modified, in a setting where the sender and the receiver have previously shared a key, and where the adversary controls the public channel and is simultaneously mounting side-channel attacks on both parties. Using only the assumption that pseudo-random generators exist, we construct a leakage-resilient shared-private-key authenticated session protocol. This construction tolerates a logarithmic number of bits of polynomial-time computable leakage per computation, and an unbounded total amount of leakage. This leakage occurs on the entire state, input, and randomness of the party performing the computation.
Finally, we consider the problem of constructing a large-stretch pseudo-random generator given a one-way permutation or given a smaller-stretch pseudo-random generator. The standard approach for doing this involves repeatedly composing the given object with itself. We provide evidence that this approach is necessary. Specifically, we consider three classes of constructions of pseudo-random generators from pseudo-random generators of smaller stretch or from one-way permutations, and for each class, we give a black-box impossibility result that demonstrates a contrast between the stretch that can be achieved by adaptive and non-adaptive black-box constructions.
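The "repeated composition" approach mentioned above can be sketched generically: given a length-increasing function G mapping N units to N+1, re-feeding part of its output into G yields as many output units as desired. The code below is purely structural and hypothetical; it uses SHA-256 truncation as a stand-in for G (which is not a proven pseudo-random generator, only a placeholder) to show the sequential, and hence adaptive, composition pattern.

```python
import hashlib

N = 16  # internal state size in bytes

def g(seed: bytes) -> bytes:
    """Stand-in for a generator stretching N bytes to N+1 bytes (NOT a proven PRG)."""
    assert len(seed) == N
    return hashlib.sha256(seed).digest()[: N + 1]

def stretch(seed: bytes, out_len: int) -> bytes:
    """Extend a small-stretch generator by sequential composition:
    each call to g yields one output byte and the next internal state."""
    state, out = seed, bytearray()
    while len(out) < out_len:
        block = g(state)
        out.append(block[-1])      # emit one byte
        state = block[:N]          # feed the rest back in (adaptive composition)
    return bytes(out)

print(stretch(b"\x00" * N, 32).hex())
```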
|
380 |
Efficient algorithm to construct phi function in vector space secret sharing scheme and application of secret sharing scheme in Visual Cryptography. Potay, Sunny. 01 May 2012 (has links)
Secret sharing refers to a method through which a secret key K can be shared among a group of authorized participants, such that when they come together later, they can recover the secret key K to decrypt the encrypted message, while any unauthorized group cannot determine K. Some of the important secret sharing schemes are the Shamir Threshold Scheme, the Monotone Circuit Scheme, and the Brickell Vector Space Scheme. Brickell's vector space secret sharing construction requires the existence of a function from the set of participants P into the vector space Z_p^d, where p is a prime number and d is a positive integer. There is no known algorithm to construct such a function in general. We developed an efficient algorithm to construct this function for some special secret sharing schemes. We also give an algorithm to demonstrate how a secret sharing scheme can be used in visual cryptography.
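As background for the threshold schemes mentioned above, here is a minimal, generic sketch of Shamir's (t, n) scheme over a prime field (it is not the phi-construction algorithm of this thesis, and the modulus is an arbitrary choice): the dealer hides the secret in the constant term of a random degree t-1 polynomial, and any t shares recover it by Lagrange interpolation at zero.

```python
import secrets

P = 2**127 - 1          # a Mersenne prime, used as the field modulus

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares, any t of which suffice to recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares) -> int:
    """Lagrange interpolation at x = 0 over F_P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(recover(shares[:3]))      # any 3 of the 5 shares recover 123456789
print(recover(shares[1:4]))
```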
|