11

Convenient Decentralized Authentication Using Passwords

Van Der Horst, Timothy W. 10 March 2010 (has links) (PDF)
Passwords are a very convenient way to authenticate. In terms of simplicity and portability they are very difficult to match. Nevertheless, current password-based login mechanisms are vulnerable to phishing attacks and typically require users to create and manage a new password for each of their accounts. This research investigates the potential for indirect/decentralized approaches to improve password-based authentication. Adoption of a decentralized authentication mechanism requires agreement between users and service providers on a trusted third party that vouches for users' identities. Email providers are the de facto trusted third parties on the Internet. Proof of email address ownership is typically required both to create an account and to reset a password when it is forgotten. Despite its shortcomings (e.g., latency, vulnerability to passive attack), this approach is a practical solution to the difficult problem of authenticating strangers on the Internet. This research utilizes this emergent, lightweight relationship with email providers to offload primary user authentication from service providers, thus reducing the need for service provider-specific passwords. Our goal is to provide decentralized authentication that maintains the convenience and portability of passwords while improving their assurances (especially against phishing). Our first step to leverage this emergent trust, Simple Authentication for the Web (SAW), improves the security and convenience of email-based authentication and moves it from the background into the forefront, replacing the need for an account-specific password. Wireless Authentication using Remote Passwords (WARP) adapts the principles of SAW to authentication in wireless networks. Lightweight User AUthentication (Luau) improves upon WARP and unifies user authentication across the application and network (especially wireless) layers. Our final protocol, pwdArmor, started as a simple wrapper to facilitate the use of existing databases of password verifiers in Luau, but grew into a generic middleware framework that augments the assurances of conventional password protocols.
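
The email-based pattern this abstract builds on (proving mailbox ownership through a short-lived, single-use token) can be sketched as follows. This is a generic illustration, not the SAW protocol itself; send_email is a hypothetical stand-in for a real mail transport.

    import hmac, hashlib, os, time

    SERVER_KEY = os.urandom(32)      # per-service secret held by the relying site
    TOKEN_LIFETIME = 300             # seconds a mailed token stays valid

    def issue_token(email: str) -> str:
        ts = str(int(time.time()))
        mac = hmac.new(SERVER_KEY, f"{email}|{ts}".encode(), hashlib.sha256).hexdigest()
        return f"{ts}.{mac}"

    def verify_token(email: str, token: str) -> bool:
        try:
            ts, mac = token.split(".")
            age = time.time() - int(ts)
        except ValueError:
            return False
        if age > TOKEN_LIFETIME:
            return False
        expected = hmac.new(SERVER_KEY, f"{email}|{ts}".encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(mac, expected)

    # send_email(address, issue_token(address))   # hypothetical: mail the token as a login link
    # verify_token(address, token_from_link)      # presenting the token proves mailbox access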
12

Efficiency of Logic Minimization Techniques for Cryptographic Hardware Implementation

Raghuraman, Shashank 15 July 2019 (has links)
With significant research effort being directed towards designing lightweight cryptographic primitives, logical metrics such as gate count are extensively used in estimating their hardware quality. Specialized logic minimization tools have been built to make use of gate count as the primary optimization cost function. The first part of this thesis aims to investigate the effectiveness of such logical metrics in predicting hardware efficiency of corresponding circuits. Mapping a logical representation onto hardware depends on the standard cell technology used, and is driven by trade-offs between area, performance, and power. This work evaluates the aforementioned parameters for circuits optimized for gate count, and compares them with a set of benchmark designs. Extensive analysis is performed over a wide range of frequencies at multiple levels of abstraction and system integration, to understand the different regions in the solution space where such logic minimization techniques are effective. A prototype System-on-Chip (SoC) is designed to benchmark the performance of these circuits on actual hardware. This SoC is built with the aim of including multiple other cryptographic blocks for analysis of their hardware efficiency. The second part of this thesis analyzes the overhead involved in integrating selected authenticated encryption ciphers onto an SoC, and explores different design alternatives for the same. Overall, this thesis is intended to serve as a comprehensive guideline on hardware factors that can be overlooked, but must be considered, during logical-to-physical mapping and during the integration of standalone cryptographic blocks onto a complete system. / Master of Science / The proliferation of embedded smart devices for the Internet-of-Things necessitates a constant search for smaller and more power-efficient hardware. The need to ensure the security of such devices has been driving extensive research on lightweight cryptography, which focuses on minimizing the logic footprint of cryptographic hardware primitives. Different designs are optimized, evaluated, and compared based on the number of gates required to express them at a logical level of abstraction. The expectation is that circuits requiring fewer gates to represent their logic will be smaller and more efficient on hardware. However, converting a logical representation into a hardware circuit, known as “synthesis”, is not trivial. The logic is mapped to a “library” of hardware cells, and one of many possible solutions for a function is selected - a process driven by trade-offs between area, speed, and power consumption on hardware. Our work studies the impact of synthesis on logical circuits with minimized gate count. We evaluate the hardware quality of such circuits by comparing them with that of benchmark designs over a range of speeds. We wish to answer questions such as “At what speeds do logical metrics correctly predict area- and power-efficiency?”, and “What impact does this have after integrating cryptographic primitives onto a complete system?”. As part of this effort, we build a System-on-Chip in order to observe the efficiency of these circuits on actual hardware. This chip also includes recently developed ciphers for authenticated encryption. The second part of this thesis explores different ways of integrating these ciphers onto a system, to understand their effect on the ciphers’ compactness and performance.
Our overarching aim is to provide a suitable reference on how synthesis and system integration affect the hardware quality of cryptographic blocks, for future research in this area.
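
For readers unfamiliar with the gate-count metric discussed above: logical size is usually normalized to gate equivalents (GE), the synthesized cell area divided by the area of a NAND2 cell in the target library. A minimal sketch of that conversion follows; the cell area and block figures are made-up placeholders, not values from the thesis.

    # Hypothetical numbers for illustration only.
    NAND2_AREA_UM2 = 0.798        # assumed NAND2 cell area in the target library

    post_synthesis_area_um2 = {
        "gate_count_optimized_core": 5230.4,
        "benchmark_core":            4810.7,
    }

    for block, area in post_synthesis_area_um2.items():
        ge = area / NAND2_AREA_UM2     # gate equivalents: a library-neutral size metric
        print(f"{block}: {area:.1f} um^2 ~= {ge:.0f} GE")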
13

Vector Instruction Set Extensions for Efficient and Reliable Computation of Keccak

Rawat, Hemendra Kumar 27 August 2016 (has links)
Recent processor architectures such as Intel Westmere (and later) and ARMv8 include instruction-level support for the Advanced Encryption Standard (AES), for the Secure Hashing Standard (SHA-1, SHA-2) and for carry-less multiplication. These crypto-instructions are optimized for a single algorithm and provide significant performance improvements over software written using general-purpose instructions. However, today's secure systems and protocols do not rely on just one, but on a suite of many cryptographic applications that are expected to work in a correct and reliable manner. In this work, we propose a new instruction set for supporting efficient and reliable cryptography on modern processors. For efficiency, we propose flexible instruction set extensions for Keccak, a cryptographic kernel for hashing, authenticated encryption, key-stream generation and random-number generation. Keccak is the basis of the SHA-3 standard and the newly proposed Keyak and Ketje authenticated ciphers. For reliability, we propose a set of trusted instructions to verify the integrity of a cryptographic software library. These instructions are aimed at detecting tampering in the software or in the configurable hardware. We develop the instruction extensions for a 128-bit interface, commonly available in the vector processing unit of many modern processors. Simulation results on the GEM5 architectural simulator show that the proposed instructions not only improve the performance of Keccak applications by 2 times (over NEON programming) and 6 times (over assembly programming), but also improve the reliability of applications at a performance overhead of just 6%. / Master of Science
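
For context, the theta step of the Keccak-f[1600] permutation shows the kind of lane-wide, bitwise work that 128-bit vector extensions target. The sketch below follows the public Keccak specification in plain Python; it is not the instruction set proposed in the thesis.

    MASK64 = (1 << 64) - 1

    def rotl64(x: int, n: int) -> int:
        return ((x << n) | (x >> (64 - n))) & MASK64

    def theta(A):
        """Theta step of Keccak-f[1600]; A[x][y] are the 64-bit lanes of the 5x5 state."""
        C = [A[x][0] ^ A[x][1] ^ A[x][2] ^ A[x][3] ^ A[x][4] for x in range(5)]
        D = [C[(x - 1) % 5] ^ rotl64(C[(x + 1) % 5], 1) for x in range(5)]
        return [[A[x][y] ^ D[x] for y in range(5)] for x in range(5)]

    state = [[0] * 5 for _ in range(5)]
    state[0][0] = 1
    state = theta(state)   # every lane update is independent, which is what makes packing
                           # two 64-bit lanes into one 128-bit vector register attractive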
14

Cryptanalyse des algorithmes de chiffrement symétrique / Cryptanalysis of symmetric encryption algorithms

Chaigneau, Colin 28 November 2018 (has links)
Nowadays, cryptology is heavily used to protect stored and transmitted data against malicious attacks, by means of security algorithms. Cryptology comprises cryptography, the design of these algorithms, and cryptanalysis, the analysis of their security. In this thesis, we focus on the cryptanalysis of symmetric encryption algorithms, that is, cryptographic algorithms that rely on a secret value shared beforehand between two parties to ensure both encryption and decryption. We present three attacks against symmetric encryption algorithms. The first two cryptanalyses target two high-profile candidates of the CAESAR cryptographic competition, the AEZ and NORX algorithms, while the last one targets the Kravatte algorithm, an instance of the Farfalle construction based on the Keccak permutation. Farfalle is a multipurpose pseudo-random function (PRF) developed by the same design team as the Keccak permutation used in the SHA-3 hash function. The CAESAR competition, which began in 2015, aims at selecting a portfolio of algorithms recommended for authenticated encryption. The two candidates analysed, AEZ and NORX, reached the third round of the CAESAR competition but were not selected as finalists. These two results contributed to the cryptanalysis effort required in such a competition; this effort did not establish enough confidence to justify that AEZ and NORX advance to the final round. AEZ is a construction based on the AES primitive that aims at offering optimal resistance against more permissive attack scenarios than those usually considered for authenticated encryption algorithms. We show here that one can recover all the secret material used in AEZ with an abnormally high success probability. NORX is an authenticated encryption algorithm based on a variant of the so-called sponge construction used, for instance, in the SHA-3 hash function. Its internal permutation is inspired by those of BLAKE and ChaCha. We show that one can leverage a strong structural property of this permutation to recover the secret key, thanks to the designers' non-conservative choice of reducing the security margin of the sponge construction. Finally, the last cryptanalysis reconsiders the robustness of the Kravatte algorithm. Kravatte is an efficient and parallelizable PRF with inputs and outputs of variable length. In this analysis, we exploit the low algebraic degree of the Keccak permutation used in Kravatte to mount three key-recovery attacks targeting different parts of the construction: a higher-order differential attack, an algebraic meet-in-the-middle attack and an attack based on a linear recurrence distinguisher.
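
The higher-order differential attack mentioned last exploits low algebraic degree: XOR-ing a Boolean function of degree d over all points of a subspace of dimension d+1 always gives zero. A toy demonstration of this principle, unrelated to Kravatte's actual components:

    import itertools

    def f(x0, x1, x2, x3):
        # toy Boolean function of algebraic degree 2 (a hypothetical example)
        return (x0 & x1) ^ (x2 & x3) ^ x0

    # XOR f over the 2^3 points of a 3-dimensional subspace of GF(2)^4.
    basis = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]
    acc = 0
    for coeffs in itertools.product((0, 1), repeat=3):
        point = [0, 0, 0, 0]
        for c, vec in zip(coeffs, basis):
            if c:
                point = [p ^ v for p, v in zip(point, vec)]
        acc ^= f(*point)
    print(acc)   # 0: any 3rd-order derivative of a degree-2 function vanishes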
15

TCB Minimizing Model of Computation (TMMC)

Bushra, Naila 13 December 2019 (has links)
The integrity of information systems is predicated on the integrity of the processes that manipulate data. Processes are conventionally executed on the von Neumann (VN) architecture. The VN computation model is plagued by a large trusted computing base (TCB), due to the need to include memory and input/output devices inside the TCB. This situation is becoming increasingly unjustifiable due to the steady addition of complex features such as platform virtualization, hyper-threading, etc. In this research work, we propose a new model of computation - the TCB minimizing model of computation (TMMC) - which explicitly seeks to minimize the TCB, viz., the hardware and software that need to be trusted to guarantee the integrity of execution of a process. More specifically, in one realization of the model, the TCB can be shrunk to include only a low-complexity module; in a second realization, the TCB can be shrunk to include nothing, by executing processes in a blockchain network. The practical utilization of TMMC using a low-complexity trusted module, as well as a blockchain network, is detailed in this research work. The utility of the TMMC model in guaranteeing the integrity of execution of a wide range of useful algorithms (graph algorithms, computational geometric algorithms, NP algorithms, etc.), and of complex large-scale processes composed of such algorithms, is investigated.
16

Architecture Design and Performance Optimization of Wireless Mesh Networks

He, Bing 03 August 2010 (has links)
No description available.
17

Contributions à l'efficacité des mécanismes cryptographiques / Contributions to the Efficiency of Cryptographic Mechanisms

Atighehchi, Kevin 21 September 2015 (has links)
The need for continuing innovation in terms of performance and resource savings impels us to optimize the design and use of cryptographic mechanisms. This leads us to consider several aspects in this dissertation: parallel cryptographic algorithms, incremental cryptographic algorithms and authenticated dictionaries. In the context of parallel cryptography we are interested in tree-based hash functions. In particular, we show which tree structures to use to reach an optimal running time, and, for this running time, how to decrease the number of processors involved. We also explore alternative (sub-optimal) tree structures which decrease the number of synchronizations in multithreaded implementations while balancing the workload among the threads as evenly as possible. Incremental cryptographic schemes allow the efficient updating of cryptographic forms when we change some blocks of the corresponding documents. We show that existing incremental schemes restrict the possible modification operations too much. We then introduce new algorithms which use these schemes as black boxes to allow a broad range of modification operations, while preserving a privacy property about these operations. We then turn our attention to authenticated dictionaries, which are used to authenticate answers to queries on a dictionary by providing users with an authentication proof for each answer. We focus on authenticated dictionaries based on hash trees and we propose a solution to remedy their main shortcoming, the size of the proofs provided to users.
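
The proof-size concern for hash-tree-based authenticated dictionaries is easy to see in a minimal Merkle-tree sketch (a generic illustration, not the scheme proposed in the thesis): each authenticated answer carries one sibling hash per tree level, i.e. O(log n) hashes.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build_tree(leaves):
        """Return the list of levels; level 0 holds hashed leaves, padded to a power of two."""
        level = [h(x) for x in leaves]
        while len(level) & (len(level) - 1):
            level.append(h(b""))                       # pad with empty-leaf hashes
        levels = [level]
        while len(level) > 1:
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            levels.append(level)
        return levels

    def prove(levels, index):
        """Authentication path: one sibling hash per level, O(log n) hashes in total."""
        path = []
        for level in levels[:-1]:
            path.append(level[index ^ 1])
            index //= 2
        return path

    def verify(root, leaf, index, path):
        node = h(leaf)
        for sibling in path:
            node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
            index //= 2
        return node == root

    leaves = [b"alpha", b"bravo", b"charlie", b"delta"]
    levels = build_tree(leaves)
    root = levels[-1][0]
    proof = prove(levels, 2)
    print(verify(root, b"charlie", 2, proof))   # True, with only log2(4) = 2 sibling hashes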
18

Akcelerace vektorových a krytografických operací na platformě x86-64 / Acceleration of Vector and Cryptographic Operations on x86-64 Platform

Šlenker, Samuel January 2017 (has links)
The aim of this thesis was to study and compare the older and newer SIMD processing units of modern microprocessors on the x86-64 platform. The thesis provides an overview of the fastest ways to compute vector operations on matrices and vectors, including the corresponding source code. Furthermore, the thesis focuses on authenticated encryption, specifically on the block cipher AES operating in Galois Counter Mode, and on a discussion of the possibilities that instruction-set extensions offer for cryptographic support.
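
On x86-64, the GHASH component of AES-GCM is typically accelerated with the carry-less multiplication instruction PCLMULQDQ. The underlying operation, polynomial multiplication over GF(2) followed by reduction in GF(2^128), can be sketched in plain Python; this is an illustration of the operation only (it ignores GCM's reflected bit ordering) and is not code from the thesis.

    def clmul(a: int, b: int) -> int:
        """Carry-less (polynomial) multiplication over GF(2)."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            b >>= 1
        return result

    GCM_POLY = (1 << 128) | 0x87     # x^128 + x^7 + x^2 + x + 1

    def gf128_mul(a: int, b: int) -> int:
        p = clmul(a, b)
        for i in range(p.bit_length() - 1, 127, -1):   # reduce modulo GCM_POLY
            if (p >> i) & 1:
                p ^= GCM_POLY << (i - 128)
        return p

    print(hex(gf128_mul(2, 1 << 127)))   # x * x^127 mod GCM_POLY = x^7 + x^2 + x + 1 -> 0x87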
19

Formal Verification of a LTE Security Protocol for Dual-Connectivity : An Evaluation of Automatic Model Checking Tools

Pfeffer, Katharina January 2014 (has links)
Security protocols are ubiquitously used in various applications with the intention to ensure secure and private communication. To achieve this goal, a mechanism offering reliable and systematic protocol verification is needed. Accordingly, a major interest in academic research on formal methods for protocol analysis has been apparent for the last two decades. Such methods formalize the operational semantics of a protocol, laying the base for protocol verification with automatic model checking tools. So far, little work in this field has focused on protocol standardization. Within this thesis a security analysis of a novel Authenticated Key-Exchange (AKE) protocol for secure association handover between two Long-Term Evolution (LTE) base stations (which support dual-connectivity) is carried out by applying two state-of-the-art tools for automated model checking (Scyther and Tamarin Prover). In the course of this, a formal protocol model and tool input models are developed. Finally, the suitability of the tools for LTE protocol analysis is evaluated. The major outcome is that neither of the two applied tools is capable of modeling and verifying the dual-connectivity protocol accurately and in such detail that it would make them particularly useful in the considered setting. The reasons for this are restrictions in the syntax of Scyther and the degraded performance of Tamarin on complex protocol input models. However, the use of formal methods in protocol standardization can be highly beneficial, since it implies a careful consideration of a protocol's fundamentals. Hence, formal methods are helpful to improve and structure a protocol's design process when applied in conjunction with current practices.
20

Algoritmos de autenticação de mensagens para redes de sensores. / Message authentication algorithms for wireless sensor networks.

Simplício Junior, Marcos Antonio 12 March 2010 (has links)
Security is an important concern in any modern network. However, networks that are highly dependent on constrained devices (such as sensors, tokens and smart cards) impose a difficult challenge: their reduced availability of memory, processing power and (especially) energy hinders the deployment of many modern cryptographic algorithms known to be secure. This inconvenience affects not only the deployment of symmetric ciphers, which provide data confidentiality, but also Message Authentication Codes (MACs), used to attest the integrity and authenticity of messages. Due to the existence of dedicated block ciphers whose performance and security are adequate for use in resource-constrained scenarios (e.g., the Curupira-2), the focus of this document is on the design and analysis of message authentication algorithms. Our goal is to develop a secure and lightweight solution for deployment in resource-constrained scenarios, with special focus on Wireless Sensor Networks (WSNs). Marvin is the name of the MAC algorithm proposed in this document. Marvin adopts the Alred structure, allowing it to reuse parts of an underlying block cipher's machinery; as a result, Marvin's implementation builds on the cipher's efficiency and introduces little impact in terms of memory occupation. Moreover, this algorithm presents a flexible and highly parallelizable structure, allowing many implementation optimizations depending on the resources available on the target platform. Marvin can be used not only as an authentication-only function, but also in an Authenticated Encryption with Associated Data (AEAD) scheme, combining authentication and encryption. In this document, we define a new AEAD proposal called LetterSoup, which is based on the LFSRC (Linear Feedback Shift Register Counter) mode of operation and builds on Marvin. Together with the specification of both algorithms, we provide a detailed security analysis and evaluate their performance in some representative scenarios.
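
The Alred structure mentioned above keys the computation only at the start and end, with cheap unkeyed rounds in between, which is why such a MAC adds little on top of a block cipher already present on the device. A structural sketch follows; toy_cipher and toy_round are hypothetical hashlib stand-ins for the real block cipher and its reduced-round function, so this shows only the data flow, not the actual Marvin design.

    import hashlib

    def toy_cipher(key: bytes, block: bytes) -> bytes:
        """Stand-in for the underlying keyed block cipher E_K (hypothetical, not secure)."""
        return hashlib.sha256(key + block).digest()[:16]

    def toy_round(block: bytes) -> bytes:
        """Stand-in for a few unkeyed rounds of the cipher (hypothetical)."""
        return hashlib.sha256(b"round" + block).digest()[:16]

    def alred_style_mac(key: bytes, message: bytes) -> bytes:
        # 1. Initialise the state with one full (keyed) cipher call.
        state = toy_cipher(key, bytes(16))
        # 2. Absorb each 16-byte block using only cheap, unkeyed round functions.
        for i in range(0, len(message), 16):
            block = message[i:i + 16].ljust(16, b"\x00")
            state = toy_round(bytes(a ^ b for a, b in zip(state, block)))
        # 3. Finalise with a second keyed cipher call to produce the tag.
        return toy_cipher(key, state)

    print(alred_style_mac(b"k" * 16, b"attack at dawn").hex())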
