91.
Kombinatorika hashovacích funkcí (Combinatorics of Hash Functions). Sýkora, Jiří. January 2012.
In this thesis, we study hash functions. We focus mainly on the famous Merkle-Damgård construction and its generalisation. We show that even this generalised construction is not resistant to multicollision attacks. Combinatorics on words plays a fundamental role in the construction of our attack. We prove that regularities unavoidably appear in long words with a bounded number of symbol occurrences. We present our original results concerning regularities in long words, lowering some earlier published estimates and thus reducing the complexity of the attack. Our results show that generalised iterated hash functions are interesting from a theoretical rather than a practical point of view.
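As background for the attack setting, the plain iterated (Merkle-Damgård) construction can be sketched in a few lines; the compression function `f`, the initial value `iv`, and the padding rule below are generic placeholders, not the generalised construction analysed in the thesis.

```python
def merkle_damgard(message: bytes, iv: bytes, f, block_size: int = 64) -> bytes:
    """Iterate a compression function f(state, block) -> state over padded blocks."""
    # Length-strengthening padding: a 0x80 byte, zero bytes, then the 64-bit bit length.
    bit_length = 8 * len(message)
    message += b"\x80"
    while len(message) % block_size != block_size - 8:
        message += b"\x00"
    message += bit_length.to_bytes(8, "big")

    state = iv
    for i in range(0, len(message), block_size):
        state = f(state, message[i:i + block_size])
    return state  # the final chaining value is the digest
```

Joux-style multicollision attacks exploit exactly this chaining: one collision per successive chaining value multiplies into exponentially many colliding messages, and the thesis shows that generalised iterations remain vulnerable to such attacks.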
92.
Application of Huffman Data Compression Algorithm in Hashing Computation. Devulapalli Venkata, Lakshmi Narasimha. 01 April 2018.
Cryptography is the art of protecting information by encrypting the original message into an unreadable format. A cryptographic hash function is a hash function which takes an arbitrary-length text message as input and converts it into a fixed-length output that is infeasible to invert. The values returned by the hash function are called the message digest, or simply hash values. Because of their versatility, hash functions are used in many applications such as message authentication, digital signatures, and password hashing [Thomsen and Knudsen, 2005].
The purpose of this study is to apply the Huffman data compression algorithm to the SHA-1 hash function in cryptography. The Huffman data compression algorithm is an optimal prefix-code compression algorithm in which the frequencies of the letters are used to compress the data [Huffman, 1952]. An integrated approach is applied to obtain a new compressed hash function by integrating Huffman codes into the core hashing computation of the original hash function.
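As an illustration of the frequency-based prefix codes referred to above, a minimal Huffman code builder might look like the following sketch (the function name and example string are ours, not taken from the thesis):

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict:
    """Build a prefix code: more frequent symbols receive shorter bit strings."""
    heap = [[freq, i, sym] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {heap[0][2]: "0"}
    tick = len(heap)  # tie-breaker so the heap never has to compare tree payloads
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], tick, (lo[2], hi[2])])
        tick += 1
    codes = {}
    def walk(node, prefix=""):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2])
    return codes

# 'e' is the most frequent symbol here, so it gets the shortest code.
print(huffman_codes("cheeseecheese"))
```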
93.
Scalable Streaming Graph Partitioning. Seyed Khamoushi, Seyed Mohammadreza. January 2017.
Large-scale graph-structured datasets are growing at an increasing rate. Social network graphs are an example of these datasets. Processing large-scale graph-structured datasets is central to many applications, ranging from telecommunication to biology, and has led to the development of many parallel graph algorithms. The performance of parallel graph algorithms largely depends on how the underlying graph is partitioned. In this work, we focus on streaming vertex-cut graph partitioning algorithms, where partitioners receive a graph as a stream of vertices and edges and assign partitions to them on arrival, once and for all. Some of these algorithms maintain a state during partitioning. In some cases, the size of the state is so large that it cannot be kept in a single machine's memory. In many real-world scenarios, several instances of a streaming graph partitioning algorithm are run simultaneously to improve the system throughput. However, running several instances of a partitioner drops the partitioning quality considerably because each partitioner has only incomplete information. Even frequent state sharing, combined with buffering mechanisms, does not completely solve the problem because of the heavy communication overhead produced by the partitioners. In this thesis, we propose an algorithm which tackles the low scalability and performance of existing streaming graph partitioning algorithms by providing an efficient way of sharing state, combined with a windowing mechanism. We compare state-of-the-art streaming graph partitioning algorithms with our proposed solution with respect to performance and efficiency. Our solution combines a batch processing method with a shared-state mechanism to achieve both outstanding performance and high partitioning quality; the shared-state mechanism is used for sharing the states of partitioners. We provide a robust implementation of our method in the PowerGraph framework. Furthermore, we empirically evaluate the impact of partitioning quality on how graph algorithms perform in a real cloud environment. The results show that our proposed method outperforms other algorithms in terms of partitioning quality and resource consumption and improves partitioning time considerably. On average, our method improves partitioning time by 23%, decreases communication load by 15%, and increases memory consumption by only 5% compared to the state-of-the-art streaming graph partitioners.
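A streaming vertex-cut partitioner of the kind studied here assigns each edge to a partition on arrival, using only the state gathered so far. The greedy rule below is a generic illustration (loosely in the spirit of PowerGraph-style greedy assignment), not the algorithm proposed in the thesis; the `replicas` map is the kind of partitioner state whose sharing across parallel instances the thesis addresses.

```python
from collections import defaultdict

def greedy_vertex_cut(edge_stream, num_partitions: int) -> dict:
    """Assign each arriving edge to a partition, trying to keep vertex replication low."""
    replicas = defaultdict(set)   # vertex -> partitions that already hold a replica of it
    load = [0] * num_partitions   # number of edges assigned to each partition
    assignment = {}
    for u, v in edge_stream:
        # Prefer partitions both endpoints touch, then either endpoint, then any partition.
        candidates = (replicas[u] & replicas[v]) or (replicas[u] | replicas[v]) \
                     or set(range(num_partitions))
        p = min(candidates, key=lambda i: load[i])  # break ties by current load
        assignment[(u, v)] = p
        replicas[u].add(p)
        replicas[v].add(p)
        load[p] += 1
    return assignment

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)]
print(greedy_vertex_cut(edges, num_partitions=2))
```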
94.
Distributed k-ary System: Algorithms for Distributed Hash Tables. Ghodsi, Ali. January 2006.
This dissertation presents algorithms for data structures called distributed hash tables (DHTs) or structured overlay networks, which are used to build scalable, self-managing distributed systems. The provided algorithms guarantee lookup consistency in the presence of dynamism: they guarantee consistent lookup results while nodes join and leave. Similarly, the algorithms guarantee that routing never fails while nodes join and leave. Previous algorithms for lookup consistency either suffer from starvation, do not work in the presence of failures, or lack a proof of correctness. Several group communication algorithms for structured overlay networks are presented. We provide an overlay broadcast algorithm which, unlike previous algorithms, avoids redundant messages, reaching all nodes in O(log n) time while using O(n) messages, where n is the number of nodes in the system. The broadcast algorithm is used to build overlay multicast. We introduce bulk operations, which enable a node to efficiently make multiple lookups or send a message to all nodes in a specified set of identifiers. The algorithm ensures that all specified nodes are reached in O(log n) time, sending at most O(log n) messages per node, regardless of the input size of the bulk operation. Moreover, the algorithm avoids sending redundant messages. Previous approaches required multiple lookups, which consume more messages and can render the initiator a bottleneck. Our algorithms are used in DHT-based storage systems, where nodes can do thousands of lookups to fetch large files. We use the bulk operation algorithm to construct a pseudo-reliable broadcast algorithm. Bulk operations can also be used to implement efficient range queries. Finally, we describe a novel way to place replicas in a DHT, called symmetric replication, that enables parallel recursive lookups. Parallel lookups are known to reduce latencies, but previously only costly iterative lookups could be parallelised. Moreover, with symmetric replication, joins and leaves only require exchanging O(1) messages, while other schemes require at least log(f) messages for a replication degree of f. The algorithms have been implemented in a middleware called the Distributed k-ary System (DKS), which is briefly described.
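The symmetric replication scheme mentioned above can be sketched directly: with an identifier space of size N and replication degree f (where f divides N), the replicas of a key sit at evenly spaced identifiers on the ring, so a node can compute all replica locations locally and issue the f lookups in parallel. The function name below is ours, not from the dissertation.

```python
def symmetric_replica_ids(key: int, n: int, f: int) -> list:
    """Replica identifiers of `key` in an identifier space of size n, replication degree f."""
    assert n % f == 0, "the replication degree must divide the identifier space size"
    step = n // f
    return [(key + i * step) % n for i in range(f)]

# With n = 16 and f = 4, key 3 is replicated at identifiers 3, 7, 11 and 15.
print(symmetric_replica_ids(3, 16, 4))
```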
95.
Distributed Key Generation and Its Applications. Kate, Aniket. 25 June 2010.
Numerous cryptographic applications require a trusted authority to hold a secret. With a plethora of malicious attacks over the Internet, however, it is difficult to establish and maintain such an authority in online systems. Secret-sharing schemes attempt to solve this problem by distributing the required trust to hold and use the secret over multiple servers; however, they still require a trusted dealer to choose and share the secret, and have problems related to single points of failure and key escrow. A distributed key generation (DKG) scheme overcomes these hurdles by removing the requirement of a dealer in secret sharing. A (threshold) DKG scheme achieves this using a complete distribution of the trust among a number of servers such that any subset of servers of size greater than a given threshold can reveal or use the shared secret, while any smaller subset cannot. In this thesis, we make contributions to DKG in the computational security setting and describe three applications of it.
We first define a constant-size commitment scheme for univariate polynomials over finite fields and use it to reduce the size of broadcasts required for DKG protocols in the synchronous communication model by a linear factor. Further, we observe that the existing (synchronous) DKG protocols do not provide a liveness guarantee over the Internet and design the first DKG protocol for use over the Internet. Observing the necessity of long-term stability, we then present proactive security and group modification protocols for our DKG system. We also demonstrate the practicality of our DKG protocol over the Internet by testing our implementation over PlanetLab.
For the applications, we use our DKG protocol to define IND-ID-CCA secure distributed private-key generators (PKGs) for three important identity-based encryption (IBE) schemes: Boneh and Franklin's BF-IBE, Sakai and Kasahara's SK-IBE, and Boneh and Boyen's BB1-IBE. These IBE schemes cover all three important IBE frameworks: full-domain-hash IBEs, exponent-inversion IBEs, and commutative-blinding IBEs, respectively, and our distributed PKG constructions can easily be modified for other IBE schemes in these frameworks. As the second application, we use our distributed PKG for BF-IBE to define an onion routing circuit construction mechanism in the identity-based setting, which solves the scalability problem in single-pass onion routing circuit construction without hampering forward secrecy. As the final application, we use our DKG implementation to design a threshold signature architecture for quorum-based distributed hash tables and use it to define two robust communication protocols in these peer-to-peer systems.
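The threshold behaviour described above (any subset larger than the threshold can use the secret, any smaller subset learns nothing) is classically obtained from Shamir's polynomial secret sharing, which DKG protocols realise without any single dealer ever knowing the secret. The sketch below is only an illustration over a toy prime field, not the DKG protocol of this thesis.

```python
import random

P = 2**31 - 1  # a small prime field, for illustration only

def share(secret: int, threshold: int, n: int) -> list:
    """Split `secret` into n shares; any `threshold` of them suffice to reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term, i.e. the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, threshold=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares reconstruct 123456789
```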
96.
Understanding Churn in Decentralized Peer-to-Peer Networks. Yao, Zhongmei. August 2009.
This dissertation presents a novel modeling framework for understanding the dynamics of peer-to-peer (P2P) networks under churn (i.e., random user arrival/departure) and for designing systems more resilient against node failure. The proposed models are applicable to general distributed systems under a variety of conditions on graph construction and user lifetimes. The foundation of this work is a new churn model that describes user arrival and departure as a superposition of many periodic (renewal) processes. It not only allows general (non-exponential) user lifetime distributions, but also captures the heterogeneous behavior of peers. We utilize this model to analyze link dynamics and the ability of the system to stay connected under churn. Our results offer exact computation of user-isolation and graph-partitioning probabilities for any monotone lifetime distribution, including heavy-tailed cases found in real systems. We also propose an age-proportional random-walk algorithm for creating links in unstructured P2P networks that achieves zero isolation probability as the system size becomes infinite. We additionally obtain many insightful results on the transient distribution of in-degree, the edge arrival process, system size, and lifetimes of live users as simple functions of the aggregate lifetime distribution.
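Purely as an illustration of the modelling setting (neighbours with heavy-tailed lifetimes and no repair), the toy Monte Carlo sketch below estimates the probability that all of a node's initial neighbours fail within an observation window; the thesis computes such isolation probabilities exactly rather than by simulation, and all parameters here are invented.

```python
import random

def isolation_probability(neighbors: int, mean_life: float, shape: float,
                          window: float, trials: int = 100000) -> float:
    """Estimate P(all neighbors with Pareto(shape) lifetimes die before `window`)."""
    scale = mean_life * (shape - 1) / shape  # Pareto scale that yields the desired mean
    isolated = 0
    for _ in range(trials):
        lifetimes = [scale * (1.0 - random.random()) ** (-1.0 / shape)
                     for _ in range(neighbors)]
        if max(lifetimes) < window:
            isolated += 1
    return isolated / trials

# With the mean lifetime fixed, heavier tails (smaller shape) make very long-lived
# neighbours more likely, so the estimated isolation probability drops.
print(isolation_probability(neighbors=6, mean_life=1.0, shape=3.0, window=2.0))
print(isolation_probability(neighbors=6, mean_life=1.0, shape=1.5, window=2.0))
```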
The second half of this work studies churn in structured P2P networks that are usually built upon distributed hash tables (DHTs). Users in DHTs maintain two types of neighbor sets: routing tables and successor/leaf sets. The former determine link lifetimes and routing performance of the system, while the latter are built to ensure DHT consistency and connectivity. Our first result in this area proves that the robustness of DHTs is mainly determined by the zone size of selected neighbors, which leads us to propose a min-zone algorithm that significantly reduces link churn in DHTs. Our second result uses the Chen-Stein method to understand concurrent failures among the strongly dependent successor sets of many DHTs and finds an optimal stabilization strategy for keeping Chord connected under churn.
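For context on the routing tables discussed above: in a Chord-like DHT with an m-bit identifier space, node n keeps a finger pointing at the successor of n + 2^i for each i, while its successor list holds the next few nodes clockwise on the ring. The toy computation below is our own illustration, not code from the dissertation.

```python
def finger_targets(node_id: int, m: int) -> list:
    """Identifiers that node_id's m routing-table (finger) entries should point at."""
    return [(node_id + 2**i) % 2**m for i in range(m)]

def successor(ident: int, live_nodes: list) -> int:
    """The first live node clockwise from `ident` on the ring."""
    ring = sorted(live_nodes)
    return next((n for n in ring if n >= ident), ring[0])

nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]        # a toy 6-bit identifier ring
print([successor(t, nodes) for t in finger_targets(8, m=6)])
```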
97.
On The Security Of Tiger Hash Function. Ozen, Onur. 01 January 2008.
Recent years have witnessed several real threats to the most widely used hash functions, which are generally inspired by MD4, such as MD5, RIPEMD, SHA-0 and SHA-1. These extraordinary developments in the cryptanalysis of hash functions brought the attention of cryptology researchers to alternative designs. Tiger is an important alternative hash function and is considered secure so far, as there is no known collision attack on the full (24-round) Tiger. It was designed by Biham and Anderson in 1995 to be very fast on modern computers.
In the last two years, some weaknesses have been found in the Tiger hash function. First, at FSE 2006, Kelsey and Lucks found a collision for 16-17 rounds of Tiger and a pseudo-near-collision for 20 rounds. Then, Mendel et al. extended this attack to find a 19-round collision and a 22-round pseudo-near-collision. Finally, in 2007, Mendel and Rijmen found a pseudo-near-collision for the full Tiger. In this work, we modify the attack of Kelsey and Lucks slightly and present the exact values of the differences used in the attack.
Moreover, there have been several cryptanalysis papers investigating the randomness properties of hash functions under encryption modes. In these papers, related-key boomerang and related-key rectangle attacks are performed on MD4, MD5, HAVAL and SHA. In this thesis, we introduce our 17-, 19- and 21-round related-key boomerang and rectangle distinguishers for the encryption mode of Tiger.
98.
Statistical Analysis Of Block Ciphers And Hash Functions. Sulak, Fatih. 01 February 2011.
One of the most basic properties expected from block ciphers and hash functions is passing statistical randomness testing, as they are supposed to behave like random mappings. Previously, testing of AES candidate block ciphers was done using the statistical tests defined in the NIST Test Suite. As some of the tests in this suite require long sequences, data sets were formed by concatenating the outputs of the algorithms obtained from various input types. However, the nature of block cipher and hash function algorithms necessitates devising tests and test parameters focused particularly on short sequences; therefore, we propose a package of statistical randomness tests which produce reliable results for short sequences and test the outputs of the algorithms directly rather than their concatenations. Moreover, we propose an alternative method to evaluate the test results and state the computations of the related probabilities required for the new evaluation method.
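As a generic example of the kind of statistical randomness test being adapted for short sequences, the standard monobit (frequency) test turns the imbalance of ones and zeros in a single output block into a p-value; this is a textbook test, not one of the new tests proposed in the thesis.

```python
import math

def monobit_p_value(bits: str) -> float:
    """Frequency (monobit) test: p-value under the hypothesis that the bits are random."""
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2.0 * n))

print(monobit_p_value("10" * 64))    # balanced 128-bit block -> p-value 1.0
print(monobit_p_value("1110" * 32))  # heavily biased block   -> p-value near 0
```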
We also propose another package of statistical tests designed on the basis of certain cryptographic properties of block ciphers and hash functions to evaluate their randomness, namely cryptographic randomness testing. The packages are applied to the AES finalists and produce more precise results than those obtained in similar applications. Moreover, the packages are also applied to the SHA-3 second-round candidate algorithms.
99.
Verifiable and redactable medical documents. Brown, Jordan Lee. 16 July 2012.
The objective of the proposed research is to answer the question of how to provide verification and redactability for medical documents at a manageable computational cost to all parties involved. The approach examines the use of Merkle Hash Trees to provide the required redaction and verification characteristics. Using the Merkle Hash Tree, the elements of various Continuity of Care Documents are extracted for storage in the signature scheme. An analysis of the approach and of the characteristics that make it a likely candidate for success is provided within. A description of a framework implementation and a sample application are provided to demonstrate potential uses of the system. Finally, results from various experiments with the framework are included to provide concrete evidence of a solution to the question that was the focus of this research.
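A Merkle hash tree commits to every element of a document with a single root hash, and a redacted element can be replaced by its leaf hash so that the remaining elements still verify against the signed root. The sketch below is illustrative only, not the thesis's signature scheme over Continuity of Care Documents.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Root hash over already-hashed leaves (leaf count assumed a power of two)."""
    level = leaves
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

elements = [b"name", b"diagnosis", b"medication", b"notes"]
leaves = [h(e) for e in elements]
root = merkle_root(leaves)  # this root is what the document signature would cover

# Redaction: disclose only the leaf hash of "diagnosis"; the verifier recomputes
# the same root from the remaining plaintext elements plus that hash.
disclosed = [h(b"name"), leaves[1], h(b"medication"), h(b"notes")]
print(merkle_root(disclosed) == root)  # True -> the redacted document still verifies
```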
100.
Chord - A Distributed Hash Table. Liao, Yimei. 24 July 2006.
An introduction to the Chord algorithm.