91 |
"Proposta de esquemas de criptografia e de assinatura sob modelo de criptografia de chave pública sem certificado" / "Proposal for encryption and signature schemes under certificateless public key cryptography model"Goya, Denise Hideko 28 June 2006 (has links)
Sob o modelo de criptografia de chave pública baseada em identidades (ID-PKC), a própria identidade dos usuários é usada como chave pública, de modo a dispensar a necessidade de uma infra-estrutura de chaves públicas (ICP), na qual o gerenciamento de certificados digitais é complexo. Por outro lado, sistemas nesse modelo requerem uma entidade capaz de gerar chaves secretas. Essa entidade é conhecida por PKG (Private Key Generator); ela possui uma chave-mestra e mantém custódia das chaves secretas geradas a partir dessa chave-mestra. Naturalmente, a custódia de chaves é indesejável em muitas aplicações. O conceito de Criptografia de Chave Pública sem Certificado, ou Certificateless Public Key Cryptography (CL-PKC), foi proposto para que a custódia de chaves fosse eliminada, mantendo, porém, as características de interesse: a não necessidade de uma ICP e a eliminação de certificados digitais. CL-PKC deixa de ser um sistema baseado em identidades, pois é introduzida uma chave pública, gerada a partir de uma informação secreta do usuário. Nesta dissertação, apresentamos a construção de dois esquemas, um CL-PKE e um CL-PKS, baseados em emparelhamentos bilineares sobre curvas elípticas. Ambas propostas: (1) eliminam custódia de chaves; (2) dispensam certificados digitais; (3) são mais eficientes, sob certos aspectos, que esquemas anteriormente publicados; (4) e são seguros contra ataques adaptativos de texto cifrado escolhido (em CL-PKE) e contra ataques adaptativos de mensagem escolhida (em CL-PKS), sob o modelo de oráculos aleatórios. / Under the model of Identity-Based Cryptography (ID-PKC), the public key can be the user's identity, so the model does not require a Public Key Infrastructure (PKI) with its complex management of Digital Certificates. On the other hand, this system requires a Private Key Generator (PKG), a trusted authority who is in possession of a master key and can generate any of the private keys. In this way, the PKG can exercise the so-called key escrow, which is undesirable in many applications. The concept of Certificateless Public Key Cryptography (CL-PKC) was proposed in order to remove the key-escrow characteristic of identity-based cryptography while requiring neither a PKI nor Digital Certificates to certify the public keys. CL-PKC is no longer identity-based, because public keys are introduced to bind the identities to their secret keys. In this thesis we construct two schemes, one CL-PKE and one CL-PKS, based on bilinear pairing functions, which: (1) do not allow key escrow by the PKG; (2) do not require Digital Certificates; (3) are more efficient, in some aspects, than previously published CL-PKE and CL-PKS schemes; and (4) are secure, in the sense of resisting adaptive chosen-ciphertext attacks (for CL-PKE) and adaptive chosen-message attacks (for CL-PKS), in the Random Oracle Model.
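To illustrate the certificateless structure described in this abstract, the toy sketch below shows how a full private key combines a KGC-issued partial key with a user-chosen secret, so that the key generator alone never holds the complete key. It is only a conceptual illustration under assumed parameters and names; the thesis's actual schemes use bilinear pairings over elliptic curves, which are not modelled here.

```python
import hashlib
import secrets

# Toy certificateless key generation over integers modulo a Mersenne prime.
# Conceptual only: the real constructions use pairings on elliptic curves.
P = 2**127 - 1   # toy prime modulus (assumption)
G = 3            # toy generator (assumption)

def h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Key Generating Centre (KGC): master secret s, public mpk = G^s mod P.
s = secrets.randbelow(P - 2) + 1
mpk = pow(G, s, P)

def kgc_partial_key(identity: str) -> int:
    """Partial private key that binds the identity to the master secret."""
    return (s * h(identity.encode())) % (P - 1)

# User: picks an additional secret x and publishes pk = G^x mod P, which
# plays the role of the uncertified public key introduced by CL-PKC.
identity = "alice@example.com"
x = secrets.randbelow(P - 2) + 1
pk = pow(G, x, P)

# The full private key needs both halves; the KGC never learns x, so it
# cannot reconstruct the complete key: key escrow is removed.
full_private_key = (kgc_partial_key(identity), x)
```

The point is purely structural: any decryption or signing operation would require both components, so the KGC, which lacks x, cannot impersonate the user, and no certificate is needed to vouch for pk.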
|
93 |
A Polymorphic Finite Field Multiplier
Das, Saptarsi, 06 1900
Cryptographic algorithms such as the Advanced Encryption Standard and Elliptic Curve Cryptography are designed using the algebraic properties of finite fields, so the performance of these algorithms depends on the performance of the underlying field operations. Moreover, different algorithms use finite fields of widely varying order. In order to cater to these finite fields of different orders in an area-efficient manner, it is necessary to design solutions in the form of hardware consolidations, keeping the performance requirements in mind. Due to their small area occupancy and high utilization, such circuits are less likely to stay idle and are therefore less prone to energy loss through leakage power dissipation. Another class of applications that relies on finite field algebra is the family of error detection and correction techniques. Most of the classical block codes used to detect bit errors in communications over noisy channels apply the algebraic properties of finite fields. Cyclic redundancy check is one such algorithm, used to detect errors in data in computer networks. Reed-Solomon codes are the most notable among classical block codes because of their widespread use in storage devices such as CDs, DVDs and HDDs.
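As a concrete instance of the error-detection use of GF(2) arithmetic mentioned above, the short sketch below computes a CRC as polynomial division over GF(2); the particular 8-bit generator polynomial is an illustrative assumption, not one discussed in the thesis.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8: remainder of the message polynomial divided by the
    generator x^8 + x^2 + x + 1 over GF(2) (MSB-first, no reflection)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:                        # leading coefficient set:
                crc = ((crc << 1) ^ poly) & 0xFF  # subtract (XOR) the generator
            else:
                crc = (crc << 1) & 0xFF
    return crc

print(hex(crc8(b"123456789")))
```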
In this work we present the architecture of a polymorphic multiplier for operations over various extensions of GF(2). We evolved the architecture of a textbook shift-and-add multiplier into the architecture of the polymorphic multiplier through a generalized mathematical formulation. The polymorphic multiplier is capable of morphing itself at runtime to create datapaths for multiplications of various orders. In order to exploit the resources optimally, we also introduced the capability of sub-word parallel execution in the polymorphic multiplier. The synthesis results of an instance of such a polymorphic multiplier show about 41% savings in area, with 21% degradation in maximum operating frequency, compared to a collection of dedicated multipliers with equivalent functionality. We introduced the multiplier as an accelerator unit for field operations in the coarse-grained runtime reconfigurable platform called REDEFINE. We observed about 40-50% improvement in the performance of the AES algorithm and about 52× improvement in the performance of the Karatsuba-Ofman multiplication algorithm.
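The "textbook shift-and-add" starting point referred to above can be sketched in a few lines of software; the field, GF(2^8) with the AES reduction polynomial, is chosen here purely for illustration and is not one of the configurations evaluated in the thesis.

```python
def gf2m_mul(a: int, b: int, irred: int, m: int) -> int:
    """Shift-and-add multiplication in GF(2^m).

    Polynomials are packed into integers (bit i = coefficient of x^i);
    `irred` is the full irreducible polynomial including the x^m term.
    """
    result = 0
    for i in range(m):
        if (b >> i) & 1:          # if the x^i coefficient of b is set,
            result ^= a << i      # add (XOR) a shifted copy of a
    # reduce the degree-(2m-2) product modulo the irreducible polynomial
    for i in range(2 * m - 2, m - 1, -1):
        if (result >> i) & 1:
            result ^= irred << (i - m)
    return result

# Example in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B):
# the FIPS-197 reference example {57} * {83} = {C1}.
print(hex(gf2m_mul(0x57, 0x83, 0x11B, 8)))
```

A hardware version essentially unrolls the two loops into an AND/XOR network, which is the datapath that the polymorphic design then generalizes across field sizes.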
|
94 |
Authentication issues in low-cost RFID
El Moustaine, Ethmane, 13 December 2013
This thesis focuses on issues related to authentication in low-cost radio frequency identification technology, more commonly referred to as RFID. This technology is often described as the next technological revolution after the Internet. However, due to the very limited resources of RFID tags in terms of computation, memory and energy, conventional security algorithms cannot be implemented on low-cost tags, making security and privacy an important research subject today. First, we investigate scalability in low-cost RFID systems by developing an ns-3 module to simulate the universal low-cost RFID standard EPC Class-1 Generation-2, in order to establish a strict framework for secure identification in low-cost RFID systems. We show that symmetric-key cryptography is excluded from use in any scalable low-cost RFID standard. We then propose a scalable authentication protocol based on our adaptation of the well-known public-key cryptosystem NTRU. This protocol is specially designed for low-cost RFID systems and can be efficiently implemented on low-cost tags. Finally, we consider zero-knowledge identification, i.e. the case where no secret needs to be shared between the tag and the reader. Such identification approaches are very helpful in many RFID applications in which the tag constantly changes its administrative domain. We propose two lightweight zero-knowledge identification approaches based on the GPS and randomized GPS schemes. The proposed approaches consist in storing precomputed values in the back-end in the form of coupons; the GPS-based variant can thus be private, and the number of coupons can be much higher than in other approaches, leading to higher resistance to denial-of-service attacks for cheaper tags.
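To make the coupon-based, GPS-style identification mentioned above more concrete, here is a rough sketch of one round of a basic GPS (Girault-Poupard-Stern) identification with precomputed commitments; the group and parameter sizes are toy assumptions and do not reflect the exact variants designed in the thesis.

```python
import secrets

# Toy public parameters (the real scheme works in an RSA-like or
# elliptic-curve group with much larger sizes; these are assumptions).
P = 2**127 - 1           # toy prime modulus
G = 3                    # toy generator
S = 2**32                # bound on the tag's secret key
B = 2**20                # bound on the reader's challenge
A = 2**80                # bound on the commitment nonce (A >> S*B)

# Tag key pair: secret s, public key I = G^(-s) mod P.
s = secrets.randbelow(S)
I = pow(pow(G, s, P), P - 2, P)      # modular inverse via Fermat's little theorem

# "Coupons": commitments precomputed off-line and stored for later rounds.
coupons = [(r, pow(G, r, P)) for r in (secrets.randbelow(A) for _ in range(4))]

# One identification round.
r, x = coupons.pop()                 # tag sends the precomputed commitment x
c = secrets.randbelow(B)             # reader replies with a random challenge
y = r + s * c                        # tag's answer: one multiply-add, no modulus
assert (pow(G, y, P) * pow(I, c, P)) % P == x   # reader's check
```

The on-tag work at identification time reduces to the single integer multiply-and-add that computes y, which is what makes the coupon approach attractive for constrained tags.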
|
95 |
The use of technology to automate the registration process within the Torrens system and its impact on fraud : an analysis
Low, Rouhshi, January 2008
Improvements in technology and the Internet have driven a rapid rise in the use of technology in sectors such as medicine, the courts and banking. The conveyancing sector is experiencing a similar revolution, with technology touted as able to improve the effectiveness of the land registration process. In some jurisdictions, such as New Zealand and Canada, the paper-based land registration system has been replaced with one in which the creation, preparation and lodgement of land title instruments are managed in a wholly electronic environment. In Australia, proposals for an electronic registration system are under way. The research question addressed by this thesis is what the impact on fraud of automating the registration process would be. This is pertinent because of the adverse impact of fraud on the underlying principles of the Torrens system, particularly security of title. The thesis first charts the importance of security of title, examining how security of title is achieved within the Torrens system and the effects that fraud has on this. Case examples are used to analyse the perpetration of fraud under the paper registration system. Functional electronic registration systems are then analysed in comparison with the paper-based registration system to reveal what changes might be made to conveyancing practices were an electronic registration system implemented. Whether, and if so how, these changes might impact upon paper-based frauds, and whether they might open up new opportunities for fraud in an electronic registration system, forms the next step in the analysis. The final step is to use these findings to propose measures that might be used to minimise fraud opportunities in an electronic registration system, so that as far as possible the Torrens system might be kept free from fraud, and the philosophical objectives of the system, as initially envisaged by Sir Robert Torrens, might be met.
|
96 |
Η μέθοδος παραγοντοποίησης ακεραίων αριθμών number field sieve : θεωρία και υλοποίηση / The integer factorization algorithm number field sieve : theory and implementation
Καραπάνος, Νικόλαος, 21 September 2010
Πολλά κρυπτογραφικά σχήματα δημόσιου κλειδιού βασίζονται στο γεγονός ότι είναι υπολογιστικά δύσκολο να παραγοντοποιήσουμε μεγάλους ακέραιους αριθμούς. Ο ταχύτερος, και ταυτόχρονα πολυπλοκότερος, κλασσικός αλγόριθμος που είναι γνωστός μέχρι σήμερα για την παραγοντοποίηση ακεραίων μήκους άνω των 110 δεκαδικών ψηφίων είναι ο General Number Field Sieve (GNFS). Ο αλγόριθμος αυτός είναι ο καρπός πολλών ετών έρευνας, κατά τη διάρκεια της οποίας παράγονταν ολοένα και ταχύτεροι αλγόριθμοι για να καταλήξουμε μέχρι στιγμής στον αλγόριθμο GNFS.
Πρωταρχικός σκοπός της παρούσης μεταπτυχιακής εργασίας είναι η παρουσίαση του θεωρητικού μαθηματικού υπόβαθρου πάνω στο οποίο βασίζεται ο GNFS καθώς και η ακολουθιακή υλοποίηση της βασικής εκδοχής του αλγορίθμου. Ως γλώσσα υλοποίησης επιλέχθηκε η C++. Η υλοποίηση έγινε σε συνεργασία με τον συμφοιτητή μου και αγαπητό φίλο Χρήστο Μπακογιάννη, όπου στα πλαίσια της μεταπτυχιακής του εργασίας πραγματοποιήθηκε η μεταφορά της ακολουθιακής υλοποίησης του αλγορίθμου σε παράλληλο κατανεμημένο περιβάλλον χρησιμοποιώντας το Message Passing Interface (MPI). Ο πηγαίος κώδικας της υλοποίησης καθώς και σχετικές πληροφορίες υπάρχουν online στη σελίδα http://kmgnfs.cti.gr.
Σημειώνεται πως για την ευκολότερη και απρόσκοπτη ανάγνωση της εργασίας αυτής, ο αναγνώστης θα πρέπει να έχει ένα βαθμό εξοικείωσης με βασικές έννοιες της θεωρίας αριθμών, της αλγεβρικής θεωρίας αριθμών και της γραμμικής άλγεβρας. / Many public-key cryptosystems build their security on our inability to factor very large integers. The General Number Field Sieve (GNFS) is the most efficient, and at the same time most complex, classical algorithm known for factoring integers of more than 110 decimal digits. This algorithm is the result of many years of research, during which ever faster algorithms were developed, culminating, so far, in the GNFS.
The main purpose of this master thesis is the presentation of the mathematical ideas on which the GNFS is built, as well as a sequential implementation of the basic version of the algorithm. C++ was the language of choice. The implementation was carried out in collaboration with my colleague and dear friend Christos Bakogiannis, who, as part of his master thesis, ported the sequential implementation to a parallel distributed environment using the Message Passing Interface (MPI). The source code of the implementations is publicly available online at http://kmgnfs.cti.gr.
It is presumed that the reader is familiar with basic concepts of number theory, algebraic number theory and linear algebra.
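As background, the final step of sieve-based factoring methods such as the GNFS is a congruence of squares x^2 ≡ y^2 (mod n), from which gcd(x - y, n) exposes a factor. The deliberately naive Fermat-style search below illustrates only that relation on a toy modulus; it bears no resemblance to the thesis's GNFS implementation, and the modulus is an illustrative assumption.

```python
from math import gcd, isqrt

def congruent_squares_factor(n: int) -> int:
    """Toy factoring via a congruence of squares x^2 ≡ y^2 (mod n).

    Brute-force Fermat-style search: practical only for tiny n, but it is
    the same relation that the number field sieve constructs algebraically."""
    for x in range(isqrt(n) + 1, n):
        y2 = x * x - n              # hope that x^2 - n is a perfect square
        y = isqrt(y2)
        if y * y == y2:
            d = gcd(x - y, n)
            if 1 < d < n:
                return d
    return n                         # n prime or search failed

d = congruent_squares_factor(8051)   # 8051 = 83 * 97
print(d, 8051 // d)
```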
|
97 |
ARQUITETURAS DE CRIPTOGRAFIA DE CHAVE PÚBLICA: ANÁLISE DE DESEMPENHO E ROBUSTEZ / PUBLIC-KEY CRYPTOGRAPHY ARCHITECTURES: PERFORMANCE AND ROBUSTNESS EVALUATION
Perin, Guilherme, 15 April 2011
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Given the evolution of the data communication field and the resulting increase of the information flow in data networks, security has become a major concern. Modern cryptographic methods are mathematically reliable; however, their implementation in hardware leaks confidential information through side channels such as power consumption and electromagnetic emissions. Although performance issues are crucial for a hardware design, aspects of robustness against attacks based on side-channel information have gained much attention in recent years. This work focuses on hardware architectures based on the RSA public-key algorithm, originally proposed in 1977 by Rivest, Shamir and Adleman. This algorithm has modular exponentiation as its main operation, performed through successive modular multiplications. Because RSA involves integers of 1024 bits or more, the division inherent in modular multiplication becomes the main concern. The Montgomery algorithm, proposed in 1985, is a widely used method for hardware designs of modular multiplication, because it avoids divisions and all operations are performed in a multiple-precision context, with all terms represented in a numerical base that is generally a power of two. This dissertation proposes a systolic architecture able to perform the Montgomery modular multiplication with multiple-precision arithmetic. Next, an improvement to the systolic architecture is presented, through an architecture that computes the Montgomery multiplication by multiplexing the multi-precision arithmetic processes. The multiplexed architecture is employed in the left-to-right square-and-multiply and square-and-multiply-always modular exponentiation methods, is subjected to SPA (Simple Power Analysis) and SEMA (Simple Electromagnetic Analysis) side-channel attacks, and its robustness is analysed. Different word sizes (numerical bases) are applied, as well as different input operands. To strengthen the SPA and SEMA attacks, the power-consumption and electromagnetic traces are demodulated in amplitude to eliminate the influence of the clock harmonics on the acquired traces. Finally, interpretations, conclusions and countermeasure propositions for the multiplexed architecture against the implemented side-channel attacks are presented. / Com a expansão da área de comunicação de dados e o consequente aumento do fluxo de informações, a segurança tem se tornado uma grande preocupação. Apesar dos métodos criptográficos modernos serem matematicamente seguros, sua implementação em hardware tende a apresentar fugas de informações confidenciais por canais laterais, tais como consumo de potência e emissões eletromagnéticas. Embora questões de desempenho sejam cruciais para um projeto de hardware, aspectos de robustez contra ataques baseados em fugas de informações por canais laterais tem ganhado maior atenção nos últimos anos. Neste trabalho, explora-se arquiteturas em hardware voltadas para o algoritmo de chave pública RSA, originalmente proposto em 1977 por Rivest, Shamir e Adleman. Este algoritmo possui como principal operação a exponenciação modular, e esta é calculada através de sucessivas multiplicações modulares. Sendo que o RSA envolve números inteiros da ordem de 1024 bits ou mais, a operação de divisão inerente em multiplicações modulares torna-se o principal problema. O algoritmo de Montgomery, proposto em 1985, é um método bastante utilizado na implementação da multiplicação modular em hardware, pois além de evitar divisões, trabalha em um contexto de precisão múltipla com termos representados por bases numéricas, geralmente, potências de dois. Dentro deste contexto, propõe-se inicialmente uma arquitetura sistólica, baseada nas propriedades de aritmética de precisão múltipla do Algoritmo de Montgomery. Em seguida, apresenta-se uma melhoria para a arquitetura sistólica, através de uma arquitetura que realiza a multiplicação modular de Montgomery voltada à multiplexação dos processos aritméticos. A arquitetura multiplexada é empregada nos métodos de exponenciação modular left-to-right square-and-multiply e square-and-multiply always e é submetida a ataques por canais laterais SPA (Simple Power Analysis) e SEMA (Simple Electromagnetic Analysis) e aspectos de robustez da arquitetura multiplexada são analisados para diversos tamanhos de palavras (base numérica do algoritmo de Montgomery). Como proposta de melhoria aos ataques por canais laterais simples, os traços de consumo de potência e emissão eletromagnética são demodulados em amplitude de modo a eliminar a influência das harmônicas do sinal de clock sobre os traços coletados. Por fim, interpretações e conclusões dos resultados são apresentados, assim como propostas de contra-medidas para a arquitetura multiplexada com relação aos ataques por canais laterais realizados.
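For readers unfamiliar with the building block discussed in this abstract, a minimal software model of Montgomery multiplication (REDC) and its use inside left-to-right square-and-multiply exponentiation is sketched below; it is a plain integer-level illustration with an assumed toy modulus, not the word-serial systolic or multiplexed hardware developed in the dissertation.

```python
def mont_mul(a: int, b: int, n: int, r_bits: int, n_prime: int) -> int:
    """Montgomery product a*b*R^-1 mod n (REDC), with no division by n."""
    r_mask = (1 << r_bits) - 1
    t = a * b
    m = ((t & r_mask) * n_prime) & r_mask
    u = (t + m * n) >> r_bits
    return u - n if u >= n else u

def mont_exp(base: int, exp: int, n: int) -> int:
    """Left-to-right square-and-multiply using Montgomery multiplication."""
    r_bits = n.bit_length()            # choose R = 2^r_bits > n (n must be odd)
    r = 1 << r_bits
    n_prime = (-pow(n, -1, r)) % r     # n * n' == -1 (mod R); Python 3.8+
    base_m = (base % n) * r % n        # operand in Montgomery form
    acc = r % n                        # Montgomery form of 1
    for bit in bin(exp)[2:]:
        acc = mont_mul(acc, acc, n, r_bits, n_prime)           # always square
        if bit == "1":
            acc = mont_mul(acc, base_m, n, r_bits, n_prime)    # key-dependent multiply
    return mont_mul(acc, 1, n, r_bits, n_prime)                # leave Montgomery form

# Self-check against Python's built-in modular exponentiation.
n = 0xD5BBB96D30086EC484EBA3D7F9CAEB07    # arbitrary odd toy modulus (assumption)
assert mont_exp(0x1234567, 65537, n) == pow(0x1234567, 65537, n)
```

The key-dependent multiply inside the loop is exactly what SPA and SEMA try to read off a power or electromagnetic trace, which is what motivates the square-and-multiply-always variant analysed above.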
|
98 |
CONTRA-MEDIDA POR RANDOMIZAÇÃO DE ACESSO À MEMÓRIA EM ARQUITETURA DE CRIPTOGRAFIA DE CHAVE PÚBLICA / MEMORY RANDOM ACCESS COUNTERMEASURE ON A PUBLIC KEY CRYPTOGRAPHY ARCHITECTURE
Henes, Felipe Moraes, 18 November 2013
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The constant expansion of data communication systems, owing to the large flow of information that passes through them, has made security an item of constant concern. Even considering the efficient encryption systems that exist today, which offer sound mathematical protection, some hardware implementations of these systems can leak confidential information through side-channel attacks, such as power-consumption and electromagnetic-radiation analysis. Performance issues are of fundamental importance in the design of a physical system; however, aspects that make the system robust against side-channel attacks have received more attention in recent years. This work focuses on hardware architectures based on the RSA public-key algorithm, proposed by Rivest, Shamir and Adleman in 1977, whose main operation is modular exponentiation, computed through successive modular multiplications. The RSA algorithm involves integers on the order of 1024 or 2048 bits, so the division inherent in modular multiplication can become a major problem. In order to avoid these divisions, the Montgomery algorithm, proposed in 1985, appears as an efficient alternative. In this context, this dissertation presents a multiplexed architecture based on the properties of the Montgomery algorithm. Next, an improvement to this architecture is presented, implemented through the randomization of internal memory accesses, in order to increase system robustness against specialized side-channel attacks. Finally, the implemented architecture is exposed to SPA (Simple Power Analysis) and SEMA (Simple Electromagnetic Analysis) side-channel attacks, and the security and robustness of the implemented system are evaluated and presented. / A constante expansão dos sistemas de comunicação de dados devido ao grande fluxo de informações que trafegam por estes sistemas tem feito com que a segurança se torne um item de constante preocupação. Mesmo ao considerar-se os eficientes sistemas de criptografia atuais, os quais apresentam relevante proteção matemática, a implementação em hardware destes sistemas tende a propiciar a fuga de informações confidenciais através de ataques por canais laterais, como consumo de potência e emissão eletromagnética. Mesmo sabendo-se que questões de desempenho tem fundamental importância no projeto de um sistema físico, aspectos que tornem o sistema robusto frente a ataques por canais laterais tem obtido maior atenção nos últimos anos. Neste trabalho apresentam-se arquiteturas implementadas em hardware para o cálculo do algoritmo de chave pública RSA, proposto por Rivest, Shamir e Adleman em 1977, o qual tem como principal operação a exponenciação modular, calculada a partir de várias multiplicações modulares. Sabendo-se que o algoritmo RSA envolve números inteiros da ordem de 1024 ou 2048 bits, a divisão inerente em multiplicações modulares pode tornar-se o grande problema. A fim de que se evite estas divisões, o algoritmo de Montgomery, proposto em 1985, aparece como uma boa alternativa por também trabalhar em um contexto de precisão múltipla e com números na base numérica de potência de dois. Neste contexto apresenta-se inicialmente uma arquitetura multiplexada, baseada nas propriedades de execução do algoritmo de Montgomery. A seguir apresenta-se uma melhoria a esta arquitetura com a implementação da randomização dos acessos às memórias internas, com o objetivo de aumentar a robustez do sistema frente a ataques por canais laterais especializados. Sendo assim, a arquitetura implementada é submetida a ataques por canais laterais SPA (Simple Power Analysis) e SEMA (Simple Electromagnetic Analysis) e os aspectos de segurança e robustez do sistema implementado são analisados e apresentados.
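The general idea behind this countermeasure can be illustrated in software: independent word-level operations of a multi-precision step are visited in a freshly shuffled order on every execution, so the moment at which a given intermediate value appears in the power or electromagnetic trace changes from run to run. The sketch below is only a conceptual model under assumed data structures, not the memory-access mechanism actually implemented in the hardware described by the dissertation.

```python
import secrets

def add_words_shuffled(a, b, base=2**32):
    """Multi-precision addition whose independent word accesses are visited
    in a fresh random order each call (a toy model of access randomization).

    Carries are propagated in a second, fixed pass so that shuffling the
    first pass does not change the arithmetic result."""
    n = len(a)
    partial = [0] * n
    order = list(range(n))
    for i in range(n - 1, 0, -1):          # Fisher-Yates shuffle with a CSPRNG
        j = secrets.randbelow(i + 1)
        order[i], order[j] = order[j], order[i]
    for i in order:                        # randomized access pattern
        partial[i] = a[i] + b[i]
    out, carry = [], 0                     # deterministic carry propagation
    for w in partial:
        s = w + carry
        out.append(s % base)
        carry = s // base
    if carry:
        out.append(carry)
    return out

# Same result as ordinary addition, different access order on every run.
x = [0xFFFFFFFF, 0x00000001, 0x12345678]   # little-endian 32-bit words
y = [0x00000001, 0xFFFFFFFF, 0x00000000]
print(add_words_shuffled(x, y))
```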
|
99 |
Gaussian sampling in lattice-based cryptography / Le Gaussian sampling dans la cryptographie sur les réseaux euclidiens
Prest, Thomas, 08 December 2015
Bien que relativement récente, la cryptographie à base de réseaux euclidiens s’est distinguée sur de nombreux points, que ce soit par la richesse des constructions qu’elle permet, par sa résistance supposée à l’avènement des ordinateurs quantiques ou par la rapidité dont elle fait preuve lorsqu’instanciée sur certaines classes de réseaux. Un des outils les plus puissants de la cryptographie sur les réseaux est le Gaussian sampling. À très haut niveau, il permet de prouver qu’on connaît une base particulière d’un réseau, et ce sans dévoiler la moindre information sur cette base. Il permet de réaliser une grande variété de cryptosystèmes. De manière quelque peu surprenante, on dispose de peu d’instanciations pratiques de ces schémas cryptographiques, et les algorithmes permettant d’effectuer du Gaussian sampling sont peu étudiés. Le but de cette thèse est de combler le fossé qui existe entre la théorie et la pratique du Gaussian sampling. Dans un premier temps, nous étudions et améliorons les algorithmes existants, à la fois par une analyse statistique et une approche géométrique. Puis nous exploitons les structures sous-tendant de nombreuses classes de réseaux, ce qui nous permet d’appliquer à un algorithme de Gaussian sampling les idées de la transformée de Fourier rapide, passant ainsi d’une complexité quadratique à quasilinéaire. Enfin, nous utilisons le Gaussian sampling en pratique et instancions un schéma de signature et un schéma de chiffrement basé sur l’identité. Le premier fournit des signatures qui sont les plus compactes obtenues avec les réseaux à l’heure actuelle, et le deuxième permet de chiffrer et de déchiffrer à une vitesse près de mille fois supérieure à celle obtenue en utilisant un schéma à base de couplages sur les courbes elliptiques. / Although rather recent, lattice-based cryptography has stood out on numerous points, be it by the variety of constructions that it allows, by its expected resistance to quantum computers, or by its efficiency when instantiated on some classes of lattices. One of the most powerful tools of lattice-based cryptography is Gaussian sampling. At a high level, it allows one to prove knowledge of a particular lattice basis without disclosing any information about this basis. It makes it possible to realize a wide array of cryptosystems. Somewhat surprisingly, few practical instantiations of such schemes exist, and the algorithms which perform Gaussian sampling are seldom studied. The goal of this thesis is to fill the gap between the theory and practice of Gaussian sampling. First, we study and improve the existing algorithms, by both a statistical analysis and a geometrical approach. We then exploit the structures underlying many classes of lattices and apply the ideas of the fast Fourier transform to a Gaussian sampler, allowing us to reach a quasilinear complexity instead of a quadratic one. Finally, we use Gaussian sampling in practice to instantiate a signature scheme and an identity-based encryption scheme. The first one yields signatures that are the most compact currently obtained in lattice-based cryptography, and the second one allows encryption and decryption that are about one thousand times faster than those obtained with a pairing-based counterpart on elliptic curves.
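For readers new to the primitive, the elementary building block, a discrete Gaussian over the integers sampled by rejection, can be sketched as follows; the parameters are illustrative assumptions, and this deliberately naive sampler is unrelated to the fast-Fourier-style lattice samplers developed in the thesis.

```python
import math
import secrets

def sample_discrete_gaussian(sigma: float, center: float = 0.0, tau: float = 12.0) -> int:
    """Rejection sampling from the discrete Gaussian D_{Z, sigma, center}.

    Candidates are drawn uniformly from a window of +/- tau*sigma around the
    center and accepted with probability exp(-(x - center)^2 / (2 sigma^2));
    floating-point rounding makes this only an approximation of the ideal
    distribution."""
    lower = math.floor(center - tau * sigma)
    upper = math.ceil(center + tau * sigma)
    while True:
        x = lower + secrets.randbelow(upper - lower + 1)
        rho = math.exp(-((x - center) ** 2) / (2 * sigma ** 2))
        if secrets.randbelow(1 << 53) < int(rho * (1 << 53)):  # accept w.p. rho
            return x

print([sample_discrete_gaussian(sigma=3.2) for _ in range(5)])
```

A production sampler would also need to run in constant time, since the pattern of rejections can itself become a side channel.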
|
100 |
Advances in public-key cryptology and computer exploitation / Avancées en cryptologie à clé publique et exploitation informatique
Géraud, Rémi, 05 September 2017
La sécurité de l’information repose sur la bonne interaction entre différents niveaux d’abstraction : les composants matériels, systèmes d’exploitation, algorithmes, et réseaux de communication. Cependant, protéger ces éléments a un coût ; ainsi de nombreux appareils sont laissés sans bonne couverture. Cette thèse s’intéresse à ces différents aspects, du point de vue de la sécurité et de la cryptographie. Nous décrivons ainsi de nouveaux algorithmes cryptographiques (tels que des raffinements du chiffrement de Naccache–Stern), de nouveaux protocoles (dont un algorithme d’identification distribuée à divulgation nulle de connaissance), des algorithmes améliorés (dont un nouveau code correcteur et un algorithme efficace de multiplication d’entiers), ainsi que plusieurs contributions à visée systémique relevant de la sécurité de l’information et à l’intrusion. En outre, plusieurs de ces contributions s’attachent à l’amélioration des performances des constructions existantes ou introduites dans cette thèse. / Information security relies on the correct interaction of several abstraction layers: hardware, operating systems, algorithms, and networks. However, protecting each component of the technological stack has a cost; for this reason, many devices are left unprotected or under-protected. This thesis addresses several of these aspects, from a security and cryptography viewpoint. To that effect we introduce new cryptographic algorithms (such as extensions of the Naccache–Stern encryption scheme), new protocols (including a distributed zero-knowledge identification protocol), improved algorithms (including a new error-correcting code, and an efficient integer multiplication algorithm), as well as several contributions relevant to information security and network intrusion. Furthermore, several of these contributions address the performance of existing and newly-introduced constructions.
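As context for the "efficient integer multiplication algorithm" mentioned in this abstract, the classical Karatsuba recursion that such algorithms are typically measured against is sketched below; this is the textbook baseline, not the algorithm contributed by the thesis, and the cutoff is an arbitrary assumption.

```python
def karatsuba(x: int, y: int, cutoff_bits: int = 64) -> int:
    """Classical Karatsuba multiplication: three half-size products instead
    of four, giving O(n^1.585) instead of the schoolbook O(n^2)."""
    if x.bit_length() <= cutoff_bits or y.bit_length() <= cutoff_bits:
        return x * y                       # small operands: native multiply
    m = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> m, x & ((1 << m) - 1)
    y_hi, y_lo = y >> m, y & ((1 << m) - 1)
    z2 = karatsuba(x_hi, y_hi, cutoff_bits)
    z0 = karatsuba(x_lo, y_lo, cutoff_bits)
    z1 = karatsuba(x_hi + x_lo, y_hi + y_lo, cutoff_bits) - z2 - z0
    return (z2 << (2 * m)) + (z1 << m) + z0

a = 123456789123456789123456789123456789
b = 987654321987654321987654321987654321
assert karatsuba(a, b) == a * b
```

Replacing four half-size products with three is what drops the exponent from 2 to log2(3), roughly 1.585.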
|