181

OPTIMALIZACE ALGORITMŮ A DATOVÝCH STRUKTUR PRO VYHLEDÁVÁNÍ REGULÁRNÍCH VÝRAZŮ S VYUŽITÍM TECHNOLOGIE FPGA / OPTIMIZATION OF ALGORITHMS AND DATA STRUCTURES FOR REGULAR EXPRESSION MATCHING USING FPGA TECHNOLOGY

Kaštil, Jan Unknown Date (has links)
This dissertation deals with fast regular expression matching in network traffic using FPGA technology. Regular expression matching in network traffic is a computationally demanding operation used mainly in network security and in the monitoring of high-speed computer networks. Current solutions cannot achieve the required multi-gigabit throughput while meeting all the requirements placed on matching units. The highest throughputs are achieved by implementations based on innovative hardware architectures implemented in FPGAs or ASICs. This dissertation describes new matching-unit architectures suitable for implementation in both FPGA and ASIC. The basic idea of the proposed architectures is to use a perfect hash function to implement the transition table of a finite automaton. An architecture was also proposed that allows the user to introduce a small probability of error during matching and thereby reduce the memory requirements of the matching unit. The dissertation analyzes the influence of this error probability on the overall reliability of the system and compares it with currently used solutions. As part of the dissertation, the properties of regular expressions used in the analysis of modern network traffic were measured. The analysis shows that a large fraction of these regular expressions are suitable for implementation using the proposed architectures. To achieve high throughput, the thesis proposes a new alphabet transformation algorithm that allows the matching unit to process multiple characters in a single step. Unlike current methods, the proposed algorithm allows the construction of an automaton processing an arbitrary number of symbols per clock cycle. Compared to current methods, the implemented architectures achieve memory savings of up to 200 MB.
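
A minimal software sketch of the two ideas in this abstract: a finite-automaton transition table held in a hash table (standing in for the perfect-hash transition memory the thesis implements in FPGA/ASIC), and an alphabet transformation that precomputes k-symbol steps so the automaton consumes several input characters per transition. The example DFA and all names are illustrative, not taken from the thesis.

```python
from itertools import product

# DFA over {'a', 'b'} that accepts strings containing the substring "ab".
# The transition table is a hash map keyed by (state, symbol).
delta = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 2, (2, 'b'): 2,
}
accepting = {2}

def k_step_table(delta, states, alphabet, k):
    """Precompute transitions over blocks of k symbols (one 'wide' step each)."""
    table = {}
    for s in states:
        for block in product(alphabet, repeat=k):
            t = s
            for c in block:          # follow k single-symbol transitions
                t = delta[(t, c)]
            table[(s, ''.join(block))] = t
    return table

delta2 = k_step_table(delta, states={0, 1, 2}, alphabet='ab', k=2)

def matches(text, table, k):
    state = 0
    for i in range(0, len(text) - len(text) % k, k):  # whole k-symbol blocks only
        state = table[(state, text[i:i + k])]
    return state in accepting

print(matches('bbab', delta2, k=2))  # True: "ab" occurs
```

In hardware the point of the transformation is that each clock cycle performs one table lookup regardless of k, trading memory for throughput.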
182

Chord - A Distributed Hash Table

Liao, Yimei 24 July 2006 (has links)
An introduction to the Chord algorithm.
183

Chord - A Distributed Hash Table

Liao, Yimei 21 August 2007 (has links)
The source has been converted into PDF format. An introduction to the Chord algorithm.
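
A compact sketch of the core Chord idea: nodes and keys share one m-bit identifier ring, and a key is stored on its successor node (the first node clockwise from the key's identifier). Real Chord adds finger tables for O(log N) routing; this toy version, with illustrative names throughout, does a simple sorted-list successor lookup.

```python
import hashlib
from bisect import bisect_right

M = 16  # identifier bits: 2**16 positions on the ring

def chord_id(name: str) -> int:
    """Hash a node or key name onto the identifier ring."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), 'big') % (2 ** M)

nodes = sorted(chord_id(f'node-{i}') for i in range(8))

def successor(key_id: int) -> int:
    """First node identifier clockwise from key_id, wrapping around the ring."""
    i = bisect_right(nodes, key_id)
    return nodes[i % len(nodes)]

key = chord_id('some-file.txt')
print(f'key {key} is stored on node {successor(key)}')
```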
184

Algebraicko-diferenční analýza Keccaku / Algebraic-differential analysis of Keccak

Seidlová, Monika January 2016 (has links)
In this thesis, we analyze the cryptographic sponge function family Keccak - the winner of the SHA-3 Cryptographic Hash Standard competition. Firstly, we explore how higher-order differentials can be used to forge a tag in a parallelizable MAC function. We introduce new terms and theory studying which affine spaces remain affine after one round of Keccak's underlying permutation Keccak-f. This allows us to improve the forgery. Secondly, collisions in Keccak could be generated from pairs of values that follow particular differential trails in Keccak-f. We tested finding pairs for a given differential trail in reduced-round Keccak-f using algebraic techniques with the mathematics software SAGE. We found a pair for a 4-round trail in Keccak-f[50] in under 5 minutes and for a 3-round trail in Keccak-f[100] in 80 seconds on a regular PC.
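
For readers unfamiliar with the sponge construction Keccak is built on, here is a toy absorb/squeeze sketch. The rate/capacity split and padding shape are the generic sponge idea; the permutation is a deliberately weak stand-in, NOT Keccak-f, and the sizes are far smaller than Keccak's.

```python
RATE, CAPACITY = 8, 8          # bytes; Keccak uses a much larger state
WIDTH = RATE + CAPACITY

SBOX = [(7 * x + 3) % 256 for x in range(256)]  # affine byte bijection

def toy_permutation(state: bytes) -> bytes:
    # Rotate bytes, then apply a byte-wise bijection: a true (but
    # cryptographically worthless) permutation of the state space.
    return bytes(SBOX[b] for b in state[1:] + state[:1])

def sponge_hash(message: bytes, digest_len: int = 16) -> bytes:
    state = bytes(WIDTH)
    # Byte-aligned pad10*1: 0x01, zero bytes, 0x80, up to a rate multiple.
    padded = message + b'\x01' + bytes((-len(message) - 2) % RATE) + b'\x80'
    for i in range(0, len(padded), RATE):          # absorb rate-sized blocks
        block = padded[i:i + RATE] + bytes(CAPACITY)
        state = toy_permutation(bytes(a ^ b for a, b in zip(state, block)))
    out = b''
    while len(out) < digest_len:                   # squeeze
        out += state[:RATE]
        state = toy_permutation(state)
    return out[:digest_len]

print(sponge_hash(b'hello').hex())
```

The capacity bytes are never directly exposed or overwritten by input, which is what the sponge security argument rests on.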
185

Persistence and Node Failure Recovery in Strongly Consistent Key-Value Datastore

Ehsan ul Haque, Muhammad January 2012 (has links)
Consistency preservation of replicated data is a critical aspect for distributed databases which are strongly consistent. Further, in the fail-recovery model each process also needs to deal with the management of stable storage and amnesia [1]. CATS is a key/value datastore which combines Distributed Hash Table (DHT)-like scalability and self-organization with atomic consistency of the replicated items. However, being an in-memory data store with consistency and partition tolerance (CP), it suffers from permanent unavailability in the event of a majority failure. The goals of this thesis were twofold: (i) to implement disk-persistent storage in CATS, which would allow the records and the state of the nodes to be persisted on disk, and (ii) to design a node failure-recovery algorithm for CATS which enables the system to run under the assumptions of a fail-recovery model without violating consistency. For disk-persistent storage, two existing key/value databases, LevelDB [2] and BerkeleyDB [3], are used. LevelDB is an implementation of log-structured merge trees [4], whereas BerkeleyDB is an implementation of log-structured B+ trees [5]. Both have been used as underlying local storage for nodes, and the throughput and latency of the system with each is discussed. A technique to improve performance by allowing concurrent operations on the nodes is also discussed. The node failure-recovery algorithm is designed with the goal of allowing nodes to crash and then recover without violating consistency, and of reinstating availability once a majority of nodes recover. The recovery algorithm is based on persisting the state variables of the Paxos [6] acceptor and proposer and consistent group memberships. For fault tolerance and recovery, processes also need to copy records from the replication group. This becomes problematic when the number of records and the amount of data is huge. For this problem, a technique for transferring key/value records in bulk is also described, and its effect on the latency and throughput of the system is discussed.
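
A sketch of the fail-recovery requirement at the heart of this design: a Paxos acceptor must persist its promised/accepted state to stable storage before replying, so that after a crash it can recover without breaking earlier promises. The file-based storage and field names below are illustrative, not CATS's actual format.

```python
import json
import os

class DurableAcceptor:
    """Single-decree Paxos acceptor whose state survives crashes."""

    def __init__(self, path='acceptor_state.json'):
        self.path = path
        self.state = {'promised': -1, 'accepted_n': -1, 'accepted_v': None}
        if os.path.exists(path):              # fail-recovery: reload prior state
            with open(path) as f:
                self.state = json.load(f)

    def _persist(self):
        tmp = self.path + '.tmp'              # write-then-rename avoids torn writes
        with open(tmp, 'w') as f:
            json.dump(self.state, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, self.path)

    def on_prepare(self, n):
        if n > self.state['promised']:
            self.state['promised'] = n
            self._persist()                   # durable BEFORE the promise is sent
            return ('promise', self.state['accepted_n'], self.state['accepted_v'])
        return ('nack', self.state['promised'], None)

    def on_accept(self, n, v):
        if n >= self.state['promised']:
            self.state.update(promised=n, accepted_n=n, accepted_v=v)
            self._persist()                   # durable BEFORE acknowledging
            return ('accepted', n, v)
        return ('nack', self.state['promised'], None)
```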
186

Combined robust and fragile watermarking algorithms for still images. Design and evaluation of combined blind discrete wavelet transform-based robust watermarking algorithms for copyright protection using mobile phone numbers and fragile watermarking algorithms for content authentication of digital still images using hash functions.

Jassim, Taha D. January 2014 (has links)
This thesis deals with copyright protection and content authentication for still images. New blind transform-domain, block-based algorithms using one-level and two-level Discrete Wavelet Transform (DWT) were developed for copyright protection. A mobile number with international code is used as the watermarking data. The robust algorithms embed the watermarking information in the Low-Low frequency coefficients of the DWT. The watermarking information is embedded in the green channel of RGB colour images and the Y channel of YCbCr images, and is scrambled using a secret key to increase the security of the algorithms. Because the watermarking information is small compared to the host image, the embedding process is repeated several times, which increases the robustness of the algorithms. A shuffling process is applied during the multiple embedding in order to avoid spatial correlation between the host image and the watermarking information. The effects of using one and two levels of DWT on robustness and image quality have been studied. The Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM) and the Normalized Correlation Coefficient (NCC) are used to evaluate the fidelity of the images. Several greyscale and colour still images are used to test the new robust algorithms. The new algorithms offered better robustness against attacks such as JPEG compression, scaling, salt-and-pepper noise, Gaussian noise, filtering and other image-processing operations than DCT-based algorithms. The authenticity of the images was assessed with a fragile watermarking algorithm that embeds a hash function (MD5) as the watermarking information in the spatial domain. The new algorithm showed high sensitivity to any tampering with the watermarked images. The combined fragile and robust watermarking caused minimal distortion to the images, and the combined scheme achieved both copyright protection and content authentication.
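
A minimal sketch of the fragile half of such a scheme: an MD5 digest of the image content is embedded in the least-significant bits of the spatial domain, so any tampering changes the recomputed digest. This is a simplified toy in the spirit of the abstract, not the thesis's exact embedding algorithm.

```python
import hashlib
import numpy as np

def embed_fragile(img: np.ndarray) -> np.ndarray:
    """Embed MD5 of the LSB-cleared image into the first 128 LSBs."""
    flat = (img.copy() & 0xFE).ravel()              # clear all LSBs first
    digest = hashlib.md5(flat.tobytes()).digest()   # hash content without LSBs
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    flat[:bits.size] |= bits                        # write the 128 digest bits
    return flat.reshape(img.shape)

def verify_fragile(img: np.ndarray) -> bool:
    flat = img.ravel()
    stored = np.packbits(flat[:128] & 1).tobytes()
    expected = hashlib.md5((flat & 0xFE).tobytes()).digest()
    return stored == expected

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed_fragile(img)
print(verify_fragile(marked))   # True: image untouched
marked[10, 10] ^= 0x80          # tamper with a single pixel
print(verify_fragile(marked))   # False: tampering detected
```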
187

[en] HX: A PROPOSAL OF A NEW STREAM CIPHER BASED ON COLLISION RESISTANT HASH FUNCTIONS / [pt] HX: UMA PROPOSTA DE UMA NOVA CIFRA DE FLUXO BASEADA EM FUNÇÕES DE HASH RESISTENTES À COLISÃO

MARCIO RICARDO ROSEMBERG 25 March 2021 (has links)
In the near future, we will live in smart cities. Our houses, our cars and most of our appliances will be interconnected. If the infrastructure of smart cities fails to provide privacy and security, citizens will be reluctant to participate and the main advantages of a smart city will dissolve. Several encryption algorithms have recently been broken or significantly weakened, and key lengths are increasing as available computing power grows. In addition, a recent study discovered that 93 percent of 20,000 Android applications had violated one or more cryptographic rules. Those violations either weaken the encryption or render it useless. Another problem is authentication: a single compromised private key from any intermediate certificate authority can compromise every smart city that uses digital certificates for authentication. In this work, we investigate why such violations occur and we propose HX, a modular encryption algorithm based on collision-resistant hash functions that automatically mitigates cryptographic rule violations, and HXAuth, a symmetric-key authentication protocol that works in tandem with the Secure RDF Authentication Protocol (SRAP) or independently with a pre-shared secret. Our experiments point in the direction that most developers do not have the necessary background in cryptography to correctly use encryption algorithms, even those who believed they had. Our experiments also prove that HX is safe, modular, and stronger, more effective and more efficient than AES, Salsa20 and HC-256.
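
A generic illustration of the construction family the abstract describes: deriving a keystream by hashing (key, nonce, counter) and XORing it with the plaintext. This is NOT the HX algorithm itself, only the standard hash-based stream-cipher pattern it builds on.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Generate a keystream by hashing key || nonce || counter blocks."""
    counter, out = 0, b''
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))   # encryption == decryption

ct = xor_cipher(b'secret key', b'nonce-01', b'attack at dawn')
print(xor_cipher(b'secret key', b'nonce-01', ct))   # b'attack at dawn'
```

As with any stream cipher, a (key, nonce) pair must never be reused across messages, since XORing two ciphertexts would cancel the keystream.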
188

Secure and Efficient Implementations of Cryptographic Primitives

Guo, Xu 30 May 2012 (has links)
Pervasive computing opens up many new challenges. Personal and sensitive data and computations are distributed over a wide range of computing devices, which presents great challenges for cryptographic system design: how to protect privacy, authentication, and integrity in this distributed and connected computing world, and how to satisfy the requirements of different platforms, ranging from resource-constrained embedded devices to high-end servers. Moreover, once mathematically strong cryptographic algorithms are implemented in either software or hardware, they are known to be vulnerable to various implementation attacks. Although many countermeasures have been proposed, selecting and integrating a set of countermeasures thwarting multiple attacks into a single design is far from trivial; security, performance and cost need to be considered together. The research presented in this dissertation deals with the secure and efficient implementation of cryptographic primitives, focusing on how to integrate cryptographic coprocessors in an efficient and secure way. The outcome of this research leads to four contributions to hardware security research. First, we propose a programmable and parallel Elliptic Curve Cryptography (ECC) coprocessor architecture. We use a systematic way of analyzing the impact of System-on-Chip (SoC) integration on cryptographic coprocessor performance and optimize the hardware/software codesign of cryptographic coprocessors. Second, we provide a hardware evaluation methodology for the NIST SHA-3 standardization process. Our research efforts cover both the fourteen SHA-3 Second Round candidates and the five Third Round finalists. We design the first SHA-3 benchmark chip and discuss the impact of technology on the SHA-3 hardware evaluation process. Third, we discuss two technology-dependent issues in the fair comparison of cryptographic hardware. We provide a systematic approach to cross-platform comparison between SHA-3 FPGA and ASIC benchmarking results and propose a methodology for lightweight hash designs. Finally, we provide guidelines for selecting implementation-attack countermeasures in ECC cryptosystem designs, and discuss how to integrate a set of countermeasures to resist a collection of side-channel analysis (SCA) attacks and fault attacks.
The first part of the dissertation discusses how system integration can affect the efficiency of cryptographic primitives. We focus on the SoC integration of cryptographic coprocessors and analyze the system profile in a co-simulation environment and then on an actual FPGA-based SoC platform. We use this system-level design flow to analyze the SoC integration issues of two block ciphers: the existing Advanced Encryption Standard (AES) and the newly proposed lightweight cipher PRESENT. Next, we use hardware/software codesign techniques to design a programmable ECC coprocessor architecture which is highly flexible and scalable for integration into a SoC architecture. The second part of the dissertation describes our efforts in designing a hardware evaluation methodology for the NIST SHA-3 standardization process. Our Application-Specific Integrated Circuit (ASIC) implementation results for the five SHA-3 finalists are the first real ASIC measurement results reported in the literature. As a contribution to the NIST SHA-3 competition, we provide timely ASIC implementation cost and performance results for the five finalists in the final round of the SHA-3 evaluation process. We define a consistent and comprehensive hardware evaluation methodology for the NIST SHA-3 standardization process, from Field-Programmable Gate Array (FPGA) prototyping to ASIC implementation. The third part of the dissertation extends the discussion of hardware benchmarking of NIST SHA-3 candidates by analyzing the impact of technology on the fair comparison of cryptographic hardware. First, a cross-platform comparison between the FPGA and ASIC results of SHA-3 designs demonstrates the gap between the two sets of benchmarking results, and we describe a systematic approach to analyzing a SHA-3 hardware benchmark process for both FPGAs and ASICs. Next, by observing the interaction of hash algorithm design, architecture design, and technology mapping, we propose a methodology for lightweight hash implementation and apply it to CubeHash optimizations; our ultra-lightweight design of the CubeHash algorithm represents the smallest ASIC implementation of this algorithm reported in the literature. We then introduce a cost model for analyzing the hardware cost of lightweight hash implementations. The fourth part of the dissertation discusses cryptosystem designs resistant to SCA attacks and fault attacks. We complete a comprehensive survey of the state of the art in secure ECC implementations and propose a methodology for selecting countermeasures to thwart multiple side-channel attacks and fault attacks, focusing on a systematic way of organizing and understanding known attacks and countermeasures. / Ph. D.
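
One classic SCA countermeasure from this line of work, sketched in software: the Montgomery ladder performs the same number of group operations for every scalar bit, removing the key-dependent work pattern of naive double-and-add. Shown on a toy additive group mod p so it runs self-contained; a real ECC design applies the same schedule to curve points and replaces the if/else with a constant-time conditional swap.

```python
P = 2**255 - 19  # toy prime modulus

def ladder_mul(k: int, x: int, bits: int = 64) -> int:
    """Compute k*x mod P with a fixed, bit-independent operation pattern."""
    r0, r1 = 0, x % P                      # invariant: r1 - r0 == x (mod P)
    for i in reversed(range(bits)):
        if (k >> i) & 1:                   # both branches: one add, one double
            r0, r1 = (r0 + r1) % P, (2 * r1) % P
        else:
            r1, r0 = (r0 + r1) % P, (2 * r0) % P
    return r0

assert ladder_mul(123456789, 42) == (123456789 * 42) % P
print('ladder result matches direct multiplication')
```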
189

Interconnection of Heterogeneous Overlay Networks: Definition, Formalization and Applications / Povezivanje heterogenih prekrivajućih mreža: definicija, formalizacija i primene

Marinković Bojan 10 October 2014 (has links)
This Ph.D. thesis addresses topics related to overlay networks: their definition, formalization and applications. Descriptions of the Chord and Synapse protocols using the ASM formalism are presented, and both a high-level and a refined proof of the correctness of the Chord formalization are given. A probabilistic assessment of the exhaustiveness of the Synapse protocol is performed. An updated version of the Proposal of metadata schemata for movable cultural heritage, as well as a Proposal of metadata schemata for describing collections, are provided. Based on the Chord protocol, a Distributed Catalog of digitized collections of Serbian cultural heritage is implemented.
190

Analýza návrhu hašovací funkce CubeHash / Analysis of the CubeHash proposal

Stankovianska, Veronika January 2013 (has links)
The present thesis analyses the proposal of CubeHash with special emphasis on the following papers: "Inside the Hypercube" [1], "Symmetric States and Their Improved Structure" [7] and "Linearisation Framework for Collision Attacks" [6]. The CubeHash algorithm is presented in a concise manner together with a proof that the CubeHash round function R : ({0,1}^32)^32 → ({0,1}^32)^32 is a permutation. The results of [1] and [7] concerning the CubeHash symmetric states are reviewed, corrected and substantiated by proofs. More precisely, working with a definition of D-symmetric state based on [7], the thesis proves both that for V = Z_2^4 and its linear subspace D there are 2^(2|V|/|D|) D-symmetric states, and that an internal state x is D-symmetric if and only if the state R(x) is D-symmetric. In response to [1], the thesis presents a step-by-step computation of a lower bound for the number of distinct symmetric states, explains why the improved preimage attack does not work as stated, and gives a mathematical background for a search for fixed points in R. The thesis further points out that the linearisation method from [6] fails to consider the equation (A ⊕ α) + β = (A + β) ⊕ α (∗), present during the CubeHash iteration phase. Necessary and sufficient conditions for A being a solution to (∗) are...
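
A quick empirical look at the equation (∗) from this abstract, checking (A ⊕ α) + β = (A + β) ⊕ α exhaustively over tiny word sizes. It holds only for special (A, α, β) combinations (α = 0 always works), which is why a linearisation of CubeHash's add/xor mixing has to account for it. The 4-bit word size is chosen only to keep the search small.

```python
from itertools import product

W = 4                 # word size in bits (CubeHash uses 32-bit words)
MASK = (1 << W) - 1   # reduce additions mod 2**W

def holds(a: int, alpha: int, beta: int) -> bool:
    return ((a ^ alpha) + beta) & MASK == (((a + beta) & MASK) ^ alpha)

total = solutions = 0
for a, alpha, beta in product(range(1 << W), repeat=3):
    total += 1
    solutions += holds(a, alpha, beta)

print(f'{solutions}/{total} triples satisfy (*) for {W}-bit words')
```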
