81

Modeling, simulations, and experiments to balance performance and fairness in P2P file-sharing systems

Li, Yunzhao January 1900 (has links)
Doctor of Philosophy / Department of Electrical and Computer Engineering / Don Gruenbacher / Caterina Scoglio / In this dissertation, we investigate research gaps that remain in P2P file-sharing systems: the necessity of fairness maintenance during the content-information publishing/retrieving process, and the effect of stranger policies on P2P fairness. First, through a wide range of measurements in the KAD network, we present the impact of a poorly designed incentive fairness policy on the performance of looking up content information. The KAD network, designed to help peers publish and retrieve sharing information, adopts distributed hash table (DHT) technology and is integrated into the aMule/eMule P2P file-sharing network. We develop a distributed measurement framework that employs multiple test nodes running on the PlanetLab testbed. During the measurements, the routing tables of around 20,000 peers are crawled and analyzed. More than 3,000,000 pieces of source-location information from the publishing tables of multiple peers are retrieved and contacted. Based on these measurements, we show that the routing table is well maintained, while the maintenance policy for the source-location-information publishing table is poorly designed. Both the current maintenance schedule for the publishing table and the weak incentive policy on publishing peers result in low availability of the publishing table, which in turn causes low lookup performance in the KAD network. Moreover, we propose three possible solutions to address these issues: a self-maintenance scheme with a short renewal interval, a chunk-based publishing/retrieving scheme, and a fairness scheme. Second, using both numerical analyses and agent-based simulations, we evaluate the impact of different stranger policies on system performance and fairness. We find that the most restrictive stranger policy brings the best fairness, at a cost of performance degradation.
How performance and fairness vary under different stranger policies is not consistent: a trade-off exists between controlling free-riding and maintaining system performance. Thus, P2P designers need to handle strangers carefully according to their individual design goals. We also show that BitTorrent prefers to maintain fairness with an extremely restrictive stranger policy, while aMule/eMule’s fully rewarding stranger policy benefits free-riders.
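KAD is a Kademlia-style DHT, so its lookups walk toward a key by XOR distance. A minimal sketch of that routing step (node IDs here are small hypothetical integers; real KAD identifiers are 128-bit):

```python
# Minimal sketch of Kademlia-style XOR-distance routing, as used by KAD.
# Node IDs are small integers standing in for 128-bit KAD identifiers.

def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance between two node/key IDs is their bitwise XOR."""
    return a ^ b

def closest_contacts(routing_table, target: int, k: int = 3):
    """Return the k contacts closest to the target key, as one lookup step would."""
    return sorted(routing_table, key=lambda peer: xor_distance(peer, target))[:k]

routing_table = [0b1010, 0b0111, 0b1100, 0b0001, 0b1111]
print(closest_contacts(routing_table, target=0b1011))  # → [10, 15, 12]
```

Each lookup round queries these closest contacts for even closer peers, converging on the nodes responsible for the key.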
82

Analyse et conception de fonctions de hachage cryptographiques / Analysis and design of cryptographic hash functions

Manuel, Stéphane 23 November 2010 (has links) (PDF)
A hash function is a function that takes an argument of arbitrary finite size and returns an element of fixed length. There are different types of hash functions, corresponding to as many domains of use. Among them, cryptographic hash functions stand out for the variety of tasks entrusted to them and for the many security requirements they must satisfy. The cryptographic hash functions most widely used in practice belong to the MD-SHA family, whose best-known members are MD5 and SHA-1. In recent years, new cryptanalytic techniques have appeared. These techniques, although very complex, have proved so effective that they led to the abandonment of MD5 and SHA-1 and to the opening of an international competition for the development of a new cryptographic hash algorithm. The research carried out for this thesis is both analytical and constructive. We study the new advances in the cryptanalysis of hash functions, and in particular their application to SHA-0 and SHA-1. In that context, we present the best practical attack known to date against SHA-0, and we propose the first classification of the disturbance vectors used by collision attacks against SHA-1. We then turn to the design of new functions through the XOR-Hash and FSB functions.
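The fixed-length-output property described above can be illustrated with Python's standard hashlib; MD5 and SHA-1 appear here only because they are the family members discussed, and both are considered broken for security purposes:

```python
import hashlib

# A hash function maps input of any length to a fixed-length digest:
# MD5 always yields 128 bits (32 hex chars), SHA-1 always 160 bits (40 hex chars).
for data in [b"a", b"a much longer input " * 100]:
    print(len(hashlib.md5(data).hexdigest()),
          len(hashlib.sha1(data).hexdigest()))   # always "32 40"
```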
83

Statistinė SHA-3 konkurso maišos funkcijų analizė / Statistical analysis of hash functions from the SHA-3 competition

Orvidaitė, Halina 04 July 2014 (has links)
Pagrindinis magistro baigiamojo darbo tikslas buvo, pasinaudojant NIST SHA-3 maišos algoritmų kompresijos funkcijomis, sukurti pseudo-atsitiktinių skaičių generatorių ir atlikti juo sugeneruotų sekų statistinius testus. Darbo metu surinkau pagrindinę teorinę bazę, reikalingą, norint susipažinti su naujosiomis SHA-3 maišos funkcijomis bei NIST pateikiamu statistinių testų paketu. Detaliai išanalizavau algoritmus, kurie šiuo metu yra maišos funkcijų standartai, ir kurių savybių tenkinimas yra minimalus reikalavimas SHA-3 algoritmų kandidatams. Detaliai pristačiau kiekvieną iš penkių finalinių SHA-3 algoritmų, testavimo algoritmus, kurie yra pateikti statistinių testų pakete: aptariau jų idėją ir tikslą, pateikiamus įvesties kintamuosius, atliekamus algoritmų žingsnius, reikalavimus funkcijoms paduodamiems kintamiesiems bei gautų rezultatų interpretavimo aspektus. Taip pat pristačiau sugalvotą pseudo-atsitiktinių skaičių generatoriaus algoritmą ir jo Java realizaciją. Sugeneravus testinių duomenų paketą, jį įvertinau NIST statistinių testų pagalba. / The main aim of my final master's thesis was to gather a theoretical basis covering cryptology and its elements, the current hash function standards, and the NIST SHA-3 competition. During my studies I gathered the information needed to understand the hash algorithms represented by the five finalists of the NIST SHA-3 competition. I analyzed in detail the algorithms of the current hash function standards, whose properties are the minimum requirement that SHA-3 candidates must fulfil in order to win the competition, and I presented each SHA-3 finalist's function with a deep analysis. I also covered the Statistical Test Suite created by the US National Institute of Standards and Technology, which tests binary streams generated by random or pseudorandom number generators. I have given a detailed description of the algorithms in that statistical suite: the main idea and aim of each test, the input variables, the steps of the algorithms, the requirements on input data, and the interpretation of results. I have also introduced an algorithm for a pseudorandom number generator and given its realization in Java. Finally, I created a test data suite and assessed it with the NIST statistical test suite.
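A generator of the kind described, built on a SHA-3 hash step and then screened with a NIST-style statistical test, can be sketched as follows. The counter-mode construction and the monobit frequency test below are illustrative choices, not the thesis's exact design:

```python
import hashlib
from math import erfc, sqrt

def hash_prng_bits(seed: bytes, n_bits: int) -> str:
    """Generate a bit stream by hashing seed||counter with SHA3-256 (counter mode)."""
    out, counter = [], 0
    while len(out) * 256 < n_bits:           # each digest contributes 256 bits
        digest = hashlib.sha3_256(seed + counter.to_bytes(8, "big")).digest()
        out.append("".join(f"{byte:08b}" for byte in digest))
        counter += 1
    return "".join(out)[:n_bits]

def monobit_p_value(bits: str) -> float:
    """NIST STS frequency (monobit) test: p-value from the bias of ones vs zeros."""
    s = sum(1 if b == "1" else -1 for b in bits)
    return erfc(abs(s) / sqrt(len(bits)) / sqrt(2))

stream = hash_prng_bits(b"seed", 10_000)
print(f"monobit p-value: {monobit_p_value(stream):.4f}")
```

A p-value above the chosen significance level (NIST uses 0.01) means the stream shows no detectable bias under this particular test; the full suite applies many such tests.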
84

Fingerprinting codes and separating hash families

Rochanakul, Penying January 2013 (has links)
The thesis examines two related combinatorial objects, namely fingerprinting codes and separating hash families. Fingerprinting codes are combinatorial objects that have been studied for more than 15 years due to their applications in digital data copyright protection and their combinatorial interest. Four well-known types of fingerprinting codes are studied in this thesis: traceability, identifiable parent property, secure frameproof, and frameproof codes. Each type of code is named after the security property it guarantees. However, the power of these four types of fingerprinting codes is limited by a certain condition. The first known attempt to go beyond that limitation came with the concept of two-level traceability codes, introduced by Anthapadmanabhan and Barg (2009). This thesis extends their work to the other three types of fingerprinting codes, so four types of two-level fingerprinting codes are defined. In addition, the relationships between the different types of codes are studied. We propose the first explicit non-trivial constructions for two-level fingerprinting codes and provide some bounds on the size of these codes. Separating hash families were introduced by Stinson, van Trung, and Wei in 1998 as a tool for creating an explicit construction of frameproof codes. In this thesis, we state a new definition of separating hash families, and we focus mainly on improving previously known bounds for separating hash families in some special cases related to fingerprinting codes. We improve upper bounds on the size of frameproof and secure frameproof codes in the language of separating hash families.
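The frameproof property above can be made concrete with the standard descendant-set formulation: a coalition can forge any word that agrees with some coalition member in every position, and a code is frameproof if no forged word equals an innocent user's codeword. A small brute-force check under that definition (the example codes are hypothetical):

```python
from itertools import combinations, product

def descendants(coalition):
    """All words a coalition can forge: position i may take any symbol the coalition holds at i."""
    columns = [{word[i] for word in coalition} for i in range(len(coalition[0]))]
    return {"".join(choice) for choice in product(*columns)}

def is_frameproof(code, coalition_size):
    """Frameproof: no coalition of the given size can produce an innocent user's codeword."""
    for coalition in combinations(code, coalition_size):
        framed = (descendants(coalition) & set(code)) - set(coalition)
        if framed:                     # some innocent user could be framed
            return False
    return True

print(is_frameproof(["000", "011", "101"], 2))    # no 2-coalition frames anyone
print(is_frameproof(["00", "01", "10", "11"], 2)) # coalition {01, 10} forges 00 and 11
```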
85

Geo-process lookup management

Hägglund, Andreas January 2017 (has links)
This thesis presents a method to deploy and look up applications and devices based on a geographical location. The proposed solution is a combination of two existing technologies: a geocode system to encode latitude and longitude coordinates, and a Distributed Hash Table (DHT) where values are stored and accessed as key-value pairs. The purpose of this work is to be able to search a specific location for the closest device that meets the user's needs, such as finding an Internet of Things (IoT) device. The thesis covers a method for searching by iterating over key-value pairs in the DHT and expanding the area to find devices further away. The search is performed using two main algorithm implementations, LayerExpand and SpiralBoxExpand, which scan the area around where the user started the search. LayerExpand and SpiralBoxExpand are tested and evaluated in comparison to each other. The comparison results are presented as plots in which both functions are shown together. The analysis shows how the size of the DHT, the number of users, and the size of the search area affect the performance of the searches.
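The expanding-area search can be pictured as probing grid cells ring by ring around the start cell, with each cell key then looked up in the DHT. The ring-by-ring order below is an assumption in the spirit of SpiralBoxExpand, not the thesis's exact algorithm:

```python
def ring(cx: int, cy: int, r: int):
    """Grid cells at Chebyshev distance exactly r from (cx, cy)."""
    if r == 0:
        return [(cx, cy)]
    return [(x, y)
            for x in range(cx - r, cx + r + 1)
            for y in range(cy - r, cy + r + 1)
            if max(abs(x - cx), abs(y - cy)) == r]

def expanding_search(cx, cy, found, max_radius=5):
    """Probe cells ring by ring; return the first radius holding a device, and its hits."""
    for r in range(max_radius + 1):
        hits = [cell for cell in ring(cx, cy, r) if found(cell)]
        if hits:
            return r, hits
    return None

# Hypothetical device registry keyed by geocoded grid cell.
devices = {(2, 3), (5, 5)}
print(expanding_search(0, 0, lambda cell: cell in devices))  # → (3, [(2, 3)])
```

In the real system the `found` probe would be a DHT get on the cell's geocode key rather than a set lookup.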
86

Analýza návrhu nových hašovacích funkcí pro soutěž SHA-3 / Analysis of the design of new hash functions for the SHA-3 competition

Marková, Lucie January 2011 (has links)
In the present work we study a linearization framework for assessing the security of hash functions, and we analyze the proposal of the hash function BLAKE. The thesis demonstrates a limitation of a method presented in the linearization framework, because of which the method could not be applied to its full extent. Further, the thesis explains how to find a message difference for a second-preimage attack with the help of linear codes. To that end, a matrix representing the linearized compression function of BLAKE is constructed. The thesis as a PDF file, together with the source code of the computations created in Mathematica, is on an enclosed CD.
87

Exploitation de la logique propositionnelle pour la résolution parallèle des problèmes cryptographiques / Exploitation of propositional logic for parallel solving of cryptographic problems

Legendre, Florian 30 June 2014 (has links)
La démocratisation des ordinateurs, des téléphones portables et surtout de l'Internet a considérablement révolutionné le monde de la communication. Les besoins en matière de cryptographie sont donc plus nombreux et la nécessité de vérifier la sûreté des algorithmes de chiffrement est vitale. Cette thèse s'intéresse à l'étude d'une nouvelle cryptanalyse, appelée cryptanalyse logique, qui repose sur l'utilisation de la logique propositionnelle - à travers le problème de satisfaisabilité - pour exprimer et résoudre des problèmes cryptographiques. Plus particulièrement, les travaux présentés ici portent sur une certaine catégorie de chiffrements utilisés dans les protocoles d'authentification et d'intégrité de données qu'on appelle fonctions de hachage cryptographiques. Un premier point concerne l'aspect modélisation d'un problème cryptographique en un problème de satisfaisabilité et sa simplification logique. Sont ensuite présentées plusieurs façons d'utiliser cette modélisation fine, dont un raisonnement probabiliste sur les données du problème qui permet, entre autres, d'améliorer les deux principaux points d'une attaque par cryptanalyse logique, à savoir la modélisation et la résolution. Un second point traite des attaques menées en pratique. Dans ce cadre, la recherche de préimages pour les fonctions de hachage les plus couramment utilisées mène à repousser les limites de la résistance de ces fonctions à la cryptanalyse logique. À cela s'ajoutent plusieurs attaques pour la recherche de collisions dans le cadre de la logique propositionnelle. / The democratization of increasingly high-performance digital technologies, and especially of the Internet, has considerably changed the world of communication.
Consequently, needs in cryptography are more and more numerous, and the necessity of verifying the security of cipher algorithms is essential. This thesis deals with a new cryptanalysis, called logical cryptanalysis, which is based on the use of logical formalism to express and solve cryptographic problems. More precisely, the work presented here focuses on a particular category of ciphers, called cryptographic hash functions, used in authentication and data-integrity protocols. Logical cryptanalysis is a specific algebraic cryptanalysis in which the cryptographic problem is expressed as a satisfiability problem, commonly called the SAT problem. This is a combinatorial decision problem that is central to complexity theory, and in past years work by the scientific community has produced efficient solvers for industrial and academic problems. The work presented in this thesis is the fruit of an exploration between satisfiability and cryptanalysis, and has yielded new results and innovative methods for weakening cryptographic functions. The first contribution is the modeling of a cryptographic problem as a SAT problem. For this, we present some rules that make it easy to describe the basic operations involved in cipher algorithms. A section is then dedicated to logical reasoning, in order to simplify the produced SAT formulas and to show how satisfiability can enrich knowledge of a studied problem. Furthermore, we present several ways of using this fine-grained modeling to apply probabilistic reasoning to the data associated with the generated SAT formulas. This improved both the modeling and the solving of the problem, and revealed a weakness concerning the use of round constants. Second, a section is devoted to practical attacks. Within this framework, we tackled preimages of the most popular cryptographic hash functions.
Moreover, the collision problem is also approached in different ways; in particular, Stevens's one-block collision attack on MD5 was translated into the logical context. It is interesting to note that in both cases, logical cryptanalysis takes a new look at the considered problems.
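The modeling step described above, turning a cipher's operations into a SAT instance, can be illustrated with the textbook CNF encoding of a single XOR constraint, one of the basic building blocks of hash-function rounds (a minimal sketch, not the thesis's encoder):

```python
from itertools import product

def xor_clauses(a: int, b: int, c: int):
    """CNF clauses (DIMACS-style signed literals) forcing c = a XOR b."""
    return [[-a, -b, -c], [a, b, -c], [a, -b, c], [-a, b, c]]

def satisfies(assignment, clauses):
    """assignment maps variable -> bool; each clause needs at least one true literal."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

# Exhaustive check: the clauses admit exactly the XOR truth table.
clauses = xor_clauses(1, 2, 3)
for a, b, c in product([False, True], repeat=3):
    assert satisfies({1: a, 2: b, 3: c}, clauses) == (c == (a != b))
print("xor encoding verified")
```

A full encoder composes thousands of such local constraints (XORs, modular additions, rotations) into one formula, which is then handed to an off-the-shelf SAT solver.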
88

Hashing algorithms: A comparison for blockchains in Internet of things

Dahlin, Karl January 2018 (has links)
In today’s society, blockchains and the Internet of Things are two widely discussed subjects, and this has led to the idea of combining them by using a blockchain in the Internet of Things. The objective of this study has been to determine which hashing algorithm is best for a blockchain used in an Internet of Things network. This has been done by first selecting a few hashing algorithms and setting up a scenario where a blockchain can be used in an Internet of Things network, and then specifying what to compare: speed, input length, and output length. The study has been conducted with the aid of literature studies of the hashing algorithms and a program that implements the algorithms and tests their speed. The study has shown that, out of the selected hashing algorithms MD5, SHA-256, and SHA3-256, and under the conditions specified for the study, SHA3-256 is the best option if speed is not of the utmost importance in the scenario, since it comes from a newer standard and does not have a maximum input length. If speed is very important, in other words if SHA3-256 is too slow, then SHA-256 would be best for the Internet of Things network.
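A comparison along these lines can be reproduced with Python's hashlib and timeit; the algorithm list mirrors the study, but absolute timings will of course depend on the machine and library build:

```python
import hashlib
import timeit

data = b"x" * 262_144   # 256 KiB of sample input

for name in ["md5", "sha256", "sha3_256"]:
    algo = getattr(hashlib, name)
    digest_bits = algo(b"").digest_size * 8   # output length in bits
    seconds = timeit.timeit(lambda: algo(data).digest(), number=10)
    print(f"{name}: {digest_bits}-bit digest, {seconds:.4f}s for 10 hashes")
```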
89

Är lösenord informationssystemens akilleshäl? En studie av studenter vid Högskolan i Borås lösenord / Passwords: The Achilles’ heel of information systems? A study of student passwords at the University College of Borås

Ahlgren, Martin, Blomberg, Marcus, Davidsson, Per January 2008 (has links)
This study concerns the passwords used by students at the University College of Borås, and whether they are good enough to protect the information the students want to protect. It addresses the problems of using passwords as an authentication solution, as well as the properties that make a good password. A good password contains more than fourteen characters and is a mix of letters (lowercase and uppercase), digits, and special characters, and it contains no phrases, well-known expressions, or words found in a dictionary. Passwords are often protected with a so-called hash function, a one-way function that creates a checksum. To store passwords securely with this method, one should use a so-called salt, a text string that makes the storage itself more secure. Several methods can be used to crack passwords: so-called dictionary attacks, which generate passwords from a word list; brute-force attacks, which generate possible permutations and compare each with the password to be cracked; and rainbow table attacks, a method using pre-generated tables containing all possible permutations. Through a survey, we obtained a picture of what the passwords of students at the University College of Borås look like. Through an experiment, we confirmed that dictionary attacks can crack 34% of all the passwords, and this is only one of the methods. We also concluded that an authentication solution based only on username/password is not an adequate solution from an IT security perspective, although it may not bring down the entire information system. / Uppsatsnivå: C
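The salted storage and the dictionary attack described above can be sketched in a few lines. SHA-256 is used for brevity; a real system should prefer a deliberately slow password-hashing function such as bcrypt, scrypt, or Argon2:

```python
import hashlib
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Store (salt, hash) rather than the password; a random salt defeats rainbow tables."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password.encode()).digest()

def dictionary_attack(salt: bytes, target_hash: bytes, wordlist):
    """Try every candidate word against the salted hash, as the study's experiment did."""
    for word in wordlist:
        if hashlib.sha256(salt + word.encode()).digest() == target_hash:
            return word
    return None

salt, stored = store_password("sunshine")          # a weak, dictionary word
wordlist = ["password", "123456", "sunshine", "qwerty"]
print(dictionary_attack(salt, stored, wordlist))   # → sunshine
```

The salt blocks precomputed tables, but as the example shows, it does not protect a password that sits in the attacker's word list.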
90

Uma arquitetura escalável para recuperação e atualização de informações com relação de ordem total. / A scalable architecture for retrieving and updating information with a total order relationship.

Rocha, Vladimir Emiliano Moreira 17 November 2017 (has links)
Desde o início do século XXI, vivenciamos uma explosão na produção de informações de diversos tipos, tais como fotos, áudios, vídeos, entre outros. Dentre essas informações, existem aquelas em que a informação pode ser dividida em partes menores, mas que devem ser relacionadas seguindo uma ordem total. Um exemplo deste tipo de informação é um arquivo de vídeo que foi dividido em dez segmentos identificados com números de 1 a 10. Para reproduzir o vídeo original a partir dos segmentos é necessário que seus identificadores estejam ordenados. A estrutura denominada tabela de hash distribuída (DHT) tem sido amplamente utilizada para armazenar, atualizar e recuperar esse tipo de informação de forma eficiente em diversos cenários, como monitoramento de sensores e vídeo sob demanda. Entretanto, a DHT apresenta problemas de escalabilidade quando um membro da estrutura não consegue atender as requisições recebidas, trazendo como consequência a inacessibilidade da informação. Este trabalho apresenta uma arquitetura em camadas denominada MATe, que trata o problema da escalabilidade em dois níveis: estendendo a DHT com a introdução de agentes baseados na utilidade e organizando a quantidade de requisições solicitadas. A primeira camada trata a escalabilidade ao permitir a criação de novos agentes com o objetivo de distribuir as requisições evitando que um deles tenha a escalabilidade comprometida. A segunda camada é composta por grupos de dispositivos organizados de tal forma que somente alguns deles serão escolhidos para fazer requisições. A arquitetura foi implementada para dois cenários onde os problemas de escalabilidade acontecem: (i) monitoramento de sensores; e (ii) vídeo sob demanda. Para ambos cenários, os resultados experimentais mostraram que MATe melhora a escalabilidade quando comparada com as implementações originais da DHT. 
/ Since the beginning of the 21st century, we have experienced explosive growth in the generation of information such as photos, audio, and videos. Within this information, there are cases in which the information can be divided into smaller parts that must be related following a total order. For example, a video file can be divided into ten segments identified with numbers from 1 to 10; to play the original video from these segments, their identifiers must be fully ordered. A structure called a Distributed Hash Table (DHT) has been widely used to efficiently store, update, and retrieve this kind of information in several application domains, such as video on demand and sensor monitoring. However, a DHT encounters scalability issues when one of its members cannot answer the requests it receives, making the information inaccessible. This work presents MATe, a layered architecture that addresses the scalability problem on two levels: extending the DHT with the introduction of utility-based agents, and organizing the volume of requests. The first layer manages scalability by allowing the creation of new agents to distribute requests, preventing any one agent from having its scalability compromised. The second layer is composed of groups of devices, organized in such a way that only a few of them are chosen to perform requests. The architecture was implemented in two application scenarios where scalability problems arise: (i) sensor monitoring; and (ii) video on demand. For both scenarios, the experimental results show that MATe improves scalability when compared to the original DHT implementations.
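The segment scenario above can be reduced to its core: keys placed on DHT members by hashing, with ordered keys letting a client reassemble the segments in sequence. This sketch uses simple hash-modulo placement on hypothetical nodes, far simpler than a real DHT ring or MATe's agent and grouping layers:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]   # hypothetical DHT members

def responsible_node(key: str) -> str:
    """Place a key on a node by hashing it (simplified hash-modulo placement)."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

# Ten totally ordered video segments; sorting the keys restores playback order
# even though the segments may live on different nodes.
placement = {f"video42/{i:02d}": responsible_node(f"video42/{i:02d}")
             for i in range(1, 11)}
for key in sorted(placement):
    print(key, "->", placement[key])
```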
