  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

NTRU over the Eisenstein Integers

Jarvis, Katherine, 29 March 2011
NTRU is a fast public-key cryptosystem that is constructed using polynomial rings with integer coefficients. We present ETRU, an NTRU-like cryptosystem based on the Eisenstein integers. We discuss parameter selection and develop a model for the probability of decryption failure. We also provide an implementation of ETRU. We use theoretical and experimental data to compare the security and efficiency of ETRU to NTRU with comparable parameter sets and show that ETRU is an improvement over NTRU in terms of security.
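As a rough illustration of the arithmetic underlying such a scheme, the sketch below implements multiplication and the norm in the Eisenstein integers Z[w], where w satisfies w^2 + w + 1 = 0. It is an illustrative sketch only, not code from the thesis, and the class name is a placeholder.

# Illustrative sketch only: arithmetic in the Eisenstein integers Z[w],
# where w is a primitive cube root of unity, so w^2 = -1 - w.
# This is not the ETRU implementation described in the thesis.
class Eisenstein:
    def __init__(self, a, b):
        # Represents a + b*w with integer a, b.
        self.a, self.b = a, b

    def __add__(self, other):
        return Eisenstein(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*w)(c + d*w) = ac + (ad + bc)*w + bd*w^2
        #                    = (ac - bd) + (ad + bc - bd)*w, using w^2 = -1 - w.
        a, b, c, d = self.a, self.b, other.a, other.b
        return Eisenstein(a * c - b * d, a * d + b * c - b * d)

    def norm(self):
        # N(a + b*w) = a^2 - a*b + b^2, a non-negative integer.
        return self.a * self.a - self.a * self.b + self.b * self.b

    def __repr__(self):
        return f"{self.a} + {self.b}w"

# Example: (1 + 2w)(3 + w) = 1 + 5w, and N(1 + 5w) = 21 = N(1 + 2w) * N(3 + w).
z = Eisenstein(1, 2) * Eisenstein(3, 1)
print(z, z.norm())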
92

Message Authentication and Recognition Protocols Using Two-Channel Cryptography

Mashatan, Atefeh, 27 November 2008
We propose a formal model for non-interactive message authentication protocols (NIMAPs) using two channels and analyze all the attacks that can occur in this model. Further, we introduce the notion of hybrid-collision resistant (HCR) hash functions. This leads to a new proposal for a NIMAP based on HCR hash functions. This protocol is as efficient as the best previous NIMAP while having a very simple structure and not requiring any long strings to be authenticated ahead of time.

We investigate interactive message authentication protocols (IMAPs) and propose a new IMAP based on the existence of interactive-collision resistant (ICR) hash functions, a new notion of hash function security. The efficient and easy-to-use structure of our IMAP makes it very practical in real-world ad hoc network scenarios.

We also look at message recognition protocols (MRPs) and prove that there is a one-to-one correspondence between non-interactive MRPs and digital signature schemes with message recovery. Further, we look at an existing recognition protocol and point out its inability to recover from a specific adversarial disruption. We improve this protocol by suggesting a variant equipped with a resynchronization process. Moreover, another variant of the protocol is proposed which self-recovers in case of an intrusion. Finally, we propose a new design for message recognition in ad hoc networks which does not make use of hash chains. This new design uses random passwords that are refreshed in each session, as opposed to precomputed elements of a hash chain.
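The sketch below shows only the generic two-channel pattern behind NIMAPs: the long message travels over an insecure broadband channel while a short string travels over a narrow-band authenticated channel. It is not the HCR-based protocol of the thesis; a plain truncated hash is used purely for illustration, and the send_* callables are hypothetical stand-ins for the two channels.

# Illustrative sketch of the generic two-channel authentication pattern.
# NOT the HCR-based NIMAP from the thesis; a truncated SHA-256 digest is
# used only to show the structure, and 'send_*' functions are hypothetical.
import hashlib

def sender(message: bytes, send_insecure, send_authenticated):
    send_insecure(message)                       # adversary may modify this channel
    digest = hashlib.sha256(message).digest()
    send_authenticated(digest[:16])              # short string, assumed authentic

def receiver(received_message: bytes, received_digest: bytes) -> bool:
    expected = hashlib.sha256(received_message).digest()[:16]
    return expected == received_digest           # accept only if they match

# Toy usage: simulate both channels with a local dictionary.
channel = {}
sender(b"hello ad hoc network",
       lambda m: channel.update(insecure=m),
       lambda d: channel.update(authenticated=d))
print(receiver(channel["insecure"], channel["authenticated"]))  # True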
94

Digital Signature : Comparative study of its usage in developed and developing countries

Thangavel, Jayakumar, January 2014
Online trading is growing rapidly, which makes security the biggest concern when carrying out trade by electronic means. As many other operations move to the digital environment and the internet, identity validation must be added to the digital environment as well. When data are transferred, the user should be able to verify that the original data were not altered in transit from sender to receiver. It has also become necessary to authenticate users frequently to ensure security and avoid fraud. There are many different ways of online identification, among which the digital signature is considered one of the most powerful means of authentication. Online users therefore use digital signatures to authenticate the sender and to maintain the integrity of the document sent. In this paper, a study is carried out to identify the usage of digital signatures, and people's attitudes towards them, in developed and developing countries, supported by a survey.
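To make the mechanism concrete, here is a minimal sketch of how a digital signature provides sender authentication and document integrity. It is not part of the survey itself and assumes the third-party Python 'cryptography' package is installed.

# Illustrative sketch: signing and verifying a document with Ed25519.
# Assumes the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

document = b"Purchase order 1234: 100 units"

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # distributed to verifiers

signature = private_key.sign(document)       # signer authenticates the document

try:
    public_key.verify(signature, document)   # raises if document or signature changed
    print("signature valid: sender authenticated, document intact")
except InvalidSignature:
    print("signature invalid: reject the document")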
96

Distributed pre-computation for a cryptanalytic time-memory trade-off

Taber, Michael S., January 2008
Thesis (M.S.), Rochester Institute of Technology, 2008. Typescript; includes bibliographical references (leaves 106-107).
98

Protocolos criptográficos de identificação baseados em reticulados / Lattice-based identification schemes

Oniki Chiquito, Izumi, 22 August 2018
Advisor: Ricardo Dahab. Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Computação, 2012.

One of the main concerns of the field of Information Security is access control, which refers to restricting the use of several kinds of resources, such as data, places, devices and services, by an individual. Identification schemes are cryptographic algorithms that allow verifying, with some level of certainty, whether an individual's identity claim is legitimate. Such schemes therefore make it possible to provide controlled access and to grant privileges only to entities or individuals whose identities have been verified. Lattice-based algorithms are of particular interest for cryptographic applications because the community believes they remain secure even against attacks using quantum computers, as opposed to the cryptosystems in use today that are based on Number Theory problems. For this reason, identification schemes whose security is related to lattice problems have received growing attention in recent years. In this work, we address the main recent proposals for lattice-based identification schemes. After introducing the algorithms, we make a comparative analysis of selected schemes, using experimental data collected from our own implementation of the algorithms. The implementation phase also aims to fill the absence of experimental results for this class of protocols and to begin validating these schemes for practical use. Other issues, such as optimization possibilities and expectations for the future of the area, are also discussed.
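For readers unfamiliar with identification schemes, the sketch below shows the commit-challenge-response structure they share, using classic Schnorr identification over a toy group. It is not one of the lattice-based schemes studied in the dissertation, and the tiny hard-coded parameters are for illustration only.

# Illustrative sketch of a three-move identification scheme (Schnorr),
# shown only for its commit-challenge-response structure; the schemes in
# the dissertation are lattice-based and differ substantially.
import random

p, q, g = 2039, 1019, 4          # toy group: q divides p - 1, g has order q

x = random.randrange(1, q)       # prover's secret key
y = pow(g, x, p)                 # prover's public key

# Commit: prover picks a random nonce r and sends t = g^r mod p.
r = random.randrange(1, q)
t = pow(g, r, p)

# Challenge: verifier sends a random c.
c = random.randrange(0, q)

# Response: prover sends s = r + c*x mod q.
s = (r + c * x) % q

# Verify: accept iff g^s == t * y^c (mod p); always holds for an honest prover.
print(pow(g, s, p) == (t * pow(y, c, p)) % p)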
99

Resource-constrained and Resource-efficient Modern Cryptosystem Design

Aysu, Aydin, 20 July 2016
In the context of a system design, resource-constraints refer to severe restrictions on allowable resources, while resource-efficiency is the capability to achieve a desired performance and, at the same time, to reduce wasted resources. To design for low-cost platforms, these fundamental concepts are useful under different scenarios and they call for different approaches, yet they are often mixed. Resource-constrained systems require aggressive optimizations, even at the expense of performance, to meet the stringent resource limitations. On the other hand, resource-efficient systems need a careful trade-off between resources and performance, to achieve the best possible combination. Designing systems for resource-constraints with the optimizations for resource-efficiency, or vice versa, can result in a suboptimal solution. Using modern cryptographic applications as the driving domain, I first distinguish resource-constraints from resource-efficiency. Then, I introduce the recurring strategies to handle these cases and apply them on modern cryptosystem designs. I illustrate that by clarifying the application context, and then by using appropriate strategies, it is possible to push the envelope on what is perceived as achievable, by up to two orders of magnitude.

In the first part of this dissertation, I focus on resource-constrained modern cryptosystems. The driving application is Physical Unclonable Function (PUF) based symmetric-key authentication. I first propose the smallest block cipher at the 128-bit security level. Then, I show how to systematically extend this design into the smallest application-specific instruction set processor for PUF-based authentication protocols. I conclude this part by proposing a compact method to combine multiple PUF components within a system into a single device identifier.

In the second part of this dissertation, I focus on resource-efficient modern cryptosystems. The driving application is post-quantum public-key schemes. I first demonstrate energy-efficient computing techniques for post-quantum digital signatures. Then, I propose an area-efficient partitioning and a hardware/software codesign for its implementation. The results of these implemented modern cryptosystems validate the advantage of my approach by quantifying the drastic improvements over the previous best.
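The sketch below illustrates only the general idea of PUF-based challenge-response authentication; it is not the protocol or processor design from the dissertation. The physical PUF is modeled here as a keyed pseudorandom function, whereas on real hardware the response would come from device-specific manufacturing variation.

# Illustrative sketch of PUF-based symmetric-key authentication.
# The PUF is modeled as HMAC keyed with a per-device secret; this is an
# abstraction, not the dissertation's hardware design.
import hmac, hashlib, os

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    # Stand-in for the physical challenge-response behaviour of a PUF.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

# Enrollment: the verifier records challenge/response pairs in a secure phase.
device_secret = os.urandom(32)                 # models the unclonable physical state
challenge = os.urandom(16)
stored_response = puf_response(device_secret, challenge)

# Authentication: the verifier replays the challenge; only the genuine
# device can reproduce the expected response.
claimed_response = puf_response(device_secret, challenge)
print("device authenticated"
      if hmac.compare_digest(claimed_response, stored_response)
      else "authentication failed")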
100

Exploring the Composition of Coding Theory and Cryptography through Secure Computation, Succinct Arguments, and Local Codes

Alexander R Block (13154601), 26 July 2022
We examine new ways in which coding theory and cryptography continue to be composed together, and show that the composition of these two fields yields new constructions in the areas of Secure Computation Protocols, Succinct Interactive Arguments, and Locally Decodable Codes. This dissertation is a continuation of several decades of research in composing coding theory and cryptography; examples include secret sharing, encryption schemes, randomness extraction, pseudo-random number generation, and the PCP theorem, to name a few.

In Part I of this dissertation, we examine the composition of coding theory with cryptography, explicitly and implicitly. On the explicit side, we construct a new family of linear error-correcting codes, based on algebraic geometric codes, and use this family to construct new correlation extractors (Ishai et al., FOCS 2009). Correlation extractors are two-party secure computation protocols for distilling samples of a leaky correlation (e.g., pre-processed secret shares that have been exposed to side-channel attacks) into secure and fresh shares of another correlation (e.g., shares of oblivious transfer). Our correlation extractors are (nearly) optimal in all parameters. On the implicit side, we use coding-theoretic arguments to show the security of succinct interactive arguments (Micali, FOCS 1994). Succinct interactive arguments are a restriction of interactive proofs (Goldwasser, Micali, Rackoff, STOC 1985) for which security only holds against computationally bounded provers (i.e., probabilistic polynomial time), and where the proofs are sub-linear in the size of the statement being proven. Our new succinct interactive arguments are the first public-coin, zero-knowledge arguments with time- and space-efficient provers: we give two protocols where any NP statement that is verifiable by a time-T space-S RAM program is provable in time O~(T) and space S * polylog(T).

In Part II of this dissertation, we examine the composition of cryptography with coding theory, again explicitly and implicitly, focusing specifically on locally decodable codes (Katz and Trevisan, STOC 2000). Locally decodable codes, or LDCs, are error-correcting codes with super-efficient probabilistic decoding procedures that allow for decoding individual symbols of the encoded message, without decoding the entire codeword. On the implicit side, we utilize cryptographic analysis tools to give a conceptually simpler proof of the so-called "Hamming-to-InsDel" compiler (Ostrovsky and Paskin-Cherniavsky, ITS 2015). This compiler transforms any Hamming LDC (i.e., a code that is resilient to bit-flip errors) to another LDC that is resilient to the broad class of insertion-deletion errors, approximately preserving the rate and error-tolerance of the code at the cost of a poly-logarithmic increase in the query complexity. We further extend this compiler to both the private LDC (Ostrovsky, Pandey, and Sahai, ICALP 2007) setting, where the encoder and decoder are assumed to share a secret key unknown to the adversarial channel, and the resource-bounded LDC (Blocki, Kulkarni, and Zhou, ITC 2020) setting, where the adversarial channel is assumed to be resource constrained. On the explicit side, we utilize two cryptographic primitives to give new constructions of alternative notions of LDCs. First, we use cryptographic puzzles (Bitansky et al., ITCS 2016) to construct resource-bounded Hamming LDCs in the standard model without random oracles, answering an open question of Blocki, Kulkarni, and Zhou (ITC 2020); we then naturally extend these LDCs to the InsDel setting via our previously mentioned compiler. Second, we use digital signature schemes to directly construct computationally relaxed LDCs (Blocki et al., ITIT 2021) that are resilient to insertion-deletion errors. Computationally relaxed LDCs allow the decoder to output an extra symbol signifying it does not know the correct output and are only secure against probabilistic polynomial time adversarial channels. To the best of our knowledge, this is the first such LDC (of any type) resilient against insertion-deletion errors that does not rely on the aforementioned compiler.
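For intuition on local decoding, the sketch below implements the textbook 2-query local decoder for the Hadamard code. It is a classical example only; none of the constructions from the dissertation are reproduced here.

# Illustrative sketch: local decoding with the Hadamard code, the classic
# 2-query LDC. Not a construction from the dissertation.
import random

def hadamard_encode(x):
    # Codeword lists <x, a> mod 2 for every a in {0,1}^k (length 2^k).
    k = len(x)
    return [sum(xi * ((a >> i) & 1) for i, xi in enumerate(x)) % 2
            for a in range(2 ** k)]

def local_decode(codeword, k, i, num_trials=20):
    # Recover bit x[i] with 2 queries per trial: C[a] xor C[a ^ e_i] = x[i]
    # whenever both queried positions are uncorrupted; take a majority vote.
    votes = 0
    for _ in range(num_trials):
        a = random.randrange(2 ** k)
        votes += codeword[a] ^ codeword[a ^ (1 << i)]
    return int(votes > num_trials // 2)

x = [1, 0, 1, 1]
c = hadamard_encode(x)
c[5] ^= 1                                    # corrupt one codeword symbol
# With high probability this recovers [1, 0, 1, 1] despite the corruption.
print([local_decode(c, len(x), i) for i in range(len(x))])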
