31

Perfect Hash Families: Constructions and Applications

Kim, Kyung-Mi January 2003 (has links)
Let A and B be finite sets with |A| = n and |B| = m. An (n, m, w)-perfect hash family is a collection F of functions from A to B such that for any X ⊆ A with |X| = w, there exists at least one f ∈ F such that f is one-to-one when restricted to X. Perfect hash families are basic combinatorial structures, and they have played important roles in computer science in areas such as database management, operating systems, and compiler construction. Such hash families are used for memory-efficient storage and fast retrieval of items such as reserved words in programming languages, command names in interactive systems, or commonly used words in natural languages. More recently, perfect hash families have found numerous applications in cryptography, for example to broadcast encryption schemes, secret sharing, key distribution patterns, visual cryptography, cover-free families, and secure frameproof codes. In this thesis, we survey constructions and applications of perfect hash families. For constructions, we divide the results into three parts, depending on the underlying structure and properties of the constructions: combinatorial structures, linear functionals, and algebraic structures. For applications, we focus on those related to cryptography.
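
To make the defining property concrete, here is a brute-force check of the (n, m, w)-perfect-hash-family condition, given as an illustrative Python sketch (my own example, not material from the thesis; all names are hypothetical):

```python
from itertools import combinations

def is_perfect_hash_family(functions, A, w):
    """Brute-force check of the (n, m, w)-PHF property: for every w-subset X
    of A, at least one function in `functions` must be one-to-one on X."""
    for X in combinations(A, w):
        if not any(len({f(x) for x in X}) == len(X) for f in functions):
            return False
    return True

# Toy example with A = {0, ..., 5}, B = {0, 1, 2}, w = 2.
A = range(6)
F = [lambda x: x % 3, lambda x: x // 3]
print(is_perfect_hash_family(F, A, 2))  # True: every pair is separated by some f
```

Brute force is exponential in w and only illustrates the definition; the constructions surveyed in the thesis are meant to avoid this kind of exhaustive checking.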
32

Aspects of Metric Spaces in Computation

Skala, Matthew Adam January 2008 (has links)
Metric spaces, which generalise the properties of commonly-encountered physical and abstract spaces into a mathematical framework, frequently occur in computer science applications. Three major kinds of questions about metric spaces are considered here: the intrinsic dimensionality of a distribution, the maximum number of distance permutations, and the difficulty of reverse similarity search. Intrinsic dimensionality measures the tendency for points to be equidistant, which is diagnostic of high-dimensional spaces. Distance permutations describe the order in which a set of fixed sites appears while moving away from a chosen point; the number of distinct permutations determines the amount of storage space required by some kinds of indexing data structure. Reverse similarity search problems are constraint satisfaction problems derived from distance-based index structures. Their difficulty reveals details of the structure of the space. Theoretical and experimental results are given for these three questions in a wide range of metric spaces, with commentary on the consequences for computer science applications and additional related results where appropriate.
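
As a concrete illustration of one of these notions (a sketch of my own, assuming Euclidean distance; not code from the thesis): the distance permutation of a point is the order in which a set of fixed sites appears when sorted by distance from that point, and the number of distinct permutations over a space is the quantity that bounds the storage of some index structures.

```python
import math

def distance_permutation(point, sites, dist=math.dist):
    """Indices of `sites` ordered by increasing distance from `point`."""
    return tuple(sorted(range(len(sites)), key=lambda i: dist(point, sites[i])))

# Three fixed sites in the Euclidean plane; count the distinct permutations
# observed over a handful of query points.
sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
points = [(0.2, 0.1), (0.9, 0.8), (0.1, 0.9), (0.5, 0.5)]
print({distance_permutation(p, sites) for p in points})
```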
33

A Grouped Hamming Network

Logan, Bryan January 2010 (has links)
A distributed hash table (DHT) is a type of peer-to-peer (P2P) network that, like a traditional hash table, maps keys to values. Unlike a traditional hash table, however, the data is distributed across a network, with each node being responsible for a particular range of keys. Numerous DHTs have been presented and have become the cornerstone of wildly popular P2P file-sharing applications such as BitTorrent. Each of these DHTs trades off the number of pointers maintained per node against the overhead and lookup time; storing more pointers decreases the lookup time at the expense of increased overhead. A Grouped Hamming Network (GHN), the overlay network presented in this thesis, allows the number of pointers per node to be any increasing function of n with P(n) = Ω(log n). The system presented assumes that nodes fail independently and uniformly at random with some probability q = 1 − p. Three different schemes for routing in a GHN are presented. For each routing scheme, a theoretical estimate of the probability of failure is derived and optimal configurations in terms of n and P(n) are given. Simulations of GHNs with various configurations indicate that these estimates are indeed accurate for reasonable values of q and that the optimal configurations are accurate.
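
For readers unfamiliar with the underlying mechanism, the following is a generic consistent-hashing-style sketch of how a DHT assigns each key to the node responsible for it (an illustration only, with hypothetical names; it is not the GHN overlay or any of its routing schemes):

```python
import hashlib
from bisect import bisect_left

def ring_id(name, bits=16):
    """Hash a node name or key onto a small identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << bits)

def responsible_node(key, node_ids, bits=16):
    """Successor rule: the node with the smallest identifier >= the key's
    identifier owns the key, wrapping around the ring."""
    ids = sorted(node_ids)
    i = bisect_left(ids, ring_id(key, bits))
    return ids[i % len(ids)]

nodes = {ring_id(f"node{i}") for i in range(8)}
print(responsible_node("some-key", nodes))
```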
34

Fast Hash-Based Algorithms for Analyzing Large Collections of Evolutionary Trees

Sul, Seung Jin December 2009 (has links)
Phylogenetic analysis can easily produce tens of thousands of equally plausible evolutionary trees. Consensus trees and topological distance matrices are often used to summarize the evolutionary relationships among the trees of interest. However, current approaches are not designed to analyze very large tree collections. In this dissertation, we present two fast algorithms, HashCS and HashRF, for analyzing large collections of evolutionary trees based on a novel hash table data structure, which provides a convenient and fast approach to store and access the bipartition information collected from the tree collections. Our HashCS algorithm is a fast O(nt) technique for constructing consensus trees, where n is the number of taxa and t is the number of trees. By reprocessing the bipartition information in our hash table, HashCS constructs strict and majority consensus trees. In addition to a consensus algorithm, we design a fast topological distance algorithm called HashRF to compute the t × t Robinson-Foulds distance matrix, which requires O(nt^2) running time. An RF distance matrix provides plenty of data-mining opportunities to help researchers understand the evolutionary relationships contained in their collection of trees. We also introduce a series of extensions based on HashRF to provide researchers with a more convenient set of tools for analyzing their trees. We provide extensive experimentation regarding the practical performance of our hash-based algorithms across a diverse collection of biological and artificial trees. Our results show that both algorithms easily outperform existing consensus and RF matrix implementations. For example, on our biological trees, HashCS and HashRF are 1.8 and 100 times faster than PAUP*, respectively. We show two real-world applications of our fast hashing algorithms: (i) comparing phylogenetic heuristic implementations, and (ii) clustering and visualizing trees. In our first application, we design novel methods to compare PaupRat and Rec-I-DCM3, two popular phylogenetic heuristics that use the Maximum Parsimony criterion, and show that RF distances are more effective than parsimony scores at identifying heterogeneity within a collection of trees. In our second application, we empirically show how to determine the distinct clusters of trees within large tree collections. We use two different techniques to identify distinct tree groups. Both techniques show that partitioning the trees into distinct groups and summarizing each group separately is a better representation of the data. Additional benefits of our approach are better consensus trees as well as insightful information regarding the convergence behavior of phylogenetic heuristics. Our fast hash-based algorithms provide scientists with very powerful tools for analyzing the relationships within their large phylogenetic tree collections in new and exciting ways. Our work opens many opportunities for future research, including detecting convergence and designing better heuristics. Furthermore, our hash tables admit many potential extensions; for example, our hashing structure can also be used to design algorithms for computing other distance measures such as Nearest Neighbor Interchange (NNI), Subtree Pruning and Regrafting (SPR), and Tree Bisection and Reconnection (TBR) distances.
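
To illustrate the bipartition-hashing idea in miniature (a simplified sketch under my own assumptions, not the HashCS/HashRF implementation), the Robinson-Foulds distance between two trees is the size of the symmetric difference of their bipartition sets, which hash-based sets make cheap to compute:

```python
def bipartitions(tree, taxa):
    """Non-trivial bipartitions of a tree given as nested tuples of taxon
    names; each bipartition is encoded canonically as the side containing a
    fixed reference taxon, so it hashes consistently across trees."""
    ref = min(taxa)
    parts = set()
    def clade(node):
        if isinstance(node, str):
            return frozenset([node])
        s = frozenset().union(*(clade(child) for child in node))
        if 1 < len(s) < len(taxa) - 1:          # skip trivial splits
            parts.add(s if ref in s else taxa - s)
        return s
    clade(tree)
    return parts

def rf_distance(t1, t2, taxa):
    """Robinson-Foulds distance between two trees on the same taxon set."""
    return len(bipartitions(t1, taxa) ^ bipartitions(t2, taxa))

taxa = frozenset("ABCDE")
t1 = ((("A", "B"), "C"), ("D", "E"))
t2 = ((("A", "C"), "B"), ("D", "E"))
print(rf_distance(t1, t2, taxa))  # 2
```

The dissertation's hash table generalizes this kind of pairwise computation to the full t × t matrix over the entire collection.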
35

A Structured Segment Tree Approach to Supporting Range Queries in P2P Systems

Huang, Tzu-lun 05 July 2007 (has links)
A peer-to-peer (P2P) system is a distributed system whose component nodes participate in similar roles. Every user node (peer) can exchange and contribute its resources to others in the system. Just as peers may dynamically join and leave the system, data will also be inserted into and removed from the system dynamically. Given a certain range, a range query finds every data item whose value lies within that range. For example, a range query can find all the Beatles' works between 1961 and 1968 for us. However, once range data is distributed over a P2P system through the hash functions used in many P2P systems, the continuity of the range data is no longer guaranteed. Therefore, finding scattered data whose values lie within a certain range is costly in a P2P system. The Distributed Segment Tree (DST) method preserves the local continuity of range data at each node by using a segment tree and can break any given range into the minimum number of node intervals whose union constitutes the whole requested range. The DST method works on top of the Distributed Hash Table (DHT) logic; therefore, it can be applied in any DHT-based P2P system. However, the data distribution of the DST method may cause overlapping. When searching a data range, the DST method sends more requests than are really needed. Although the DST method provides a Downward Load Stripping Mechanism, the load on peers still may not be balanced. The main reason for these problems is that the DST method applies the DHT logic to P2P systems. Therefore, in this thesis, we propose a method called the Structured Segment Tree (SST) that does not use the DHT logic but embeds the structure of the segment tree into the P2P system. In fact, the P2P network topology of an SST is the structure of a segment tree. Unlike a DST, an SST can fully reflect the properties of the original segment tree. Each peer in our proposed P2P system represents a node of a segment tree. Data intervals at the same level are contiguous and do not overlap with each other, and the union of the data intervals at a fully populated level is exactly the whole data range that the P2P system can support. When searching a data range, the SST method sends only as many requests as needed. In addition, we add sibling links to preserve spatial locality and speed up the search. For the issue of load balance, our SST method also performs better than the DST method. Our simulations show that the SST method routes through fewer peers to locate the requested range data than the DST method and that the load under our method is more balanced than that under the DST method.
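
As an aside, here is a minimal sketch of the classic segment-tree range decomposition that both DST and SST build on (my own illustration, not code from the thesis): a query range is broken into a minimal set of canonical node intervals, each of which would be served by a different peer.

```python
def decompose(lo, hi, node_lo, node_hi):
    """Minimal set of canonical segment-tree intervals whose union is
    [lo, hi], starting from the root interval [node_lo, node_hi] and
    splitting at the midpoint."""
    if hi < node_lo or node_hi < lo:
        return []                      # no overlap with this node
    if lo <= node_lo and node_hi <= hi:
        return [(node_lo, node_hi)]    # node interval fully covered
    mid = (node_lo + node_hi) // 2
    return decompose(lo, hi, node_lo, mid) + decompose(lo, hi, mid + 1, node_hi)

# A query for [3, 11] over the root interval [0, 15] breaks into three
# canonical intervals: [(3, 3), (4, 7), (8, 11)].
print(decompose(3, 11, 0, 15))
```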
36

Design and FPGA Implementation of Hash Processor

Siltu (celebi), Tugba 01 December 2007 (has links) (PDF)
In this thesis, an FPGA-based hash processor is designed and implemented using a hardware description language, VHDL. Hash functions are among the most important cryptographic primitives and are used in several fields such as communication integrity and signature authentication. These functions are used to obtain a fixed-size fingerprint, or hash value, of an arbitrarily long message. The hash functions SHA-1 and SHA2-256 are examined in order to find common instructions so that they can be implemented using the same hardware blocks on the FPGA. As a result of this study, a hash processor supporting SHA-1 and SHA2-256 hashing and having a standard UART serial interface is proposed. The proposed hash processor has 14 instructions; 6 of them are special instructions developed for the SHA-1 and SHA2-256 hash functions. The address length of the instructions is six bits and the data length is 32 bits. The proposed instruction set can be extended for other hash algorithms, which can then be implemented on the same architecture. The hardware is described in VHDL and verified on Xilinx FPGAs. The advantages and open issues of implementing hash functions using a processor structure are also discussed.
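
For reference, the fixed-size-fingerprint property described above can be demonstrated in a few lines with Python's standard hashlib (a software illustration only, unrelated to the FPGA design itself):

```python
import hashlib

message = b"an arbitrarily long message" * 1000

# Regardless of the input length, SHA-1 yields a 160-bit (20-byte) digest
# and SHA-256 a 256-bit (32-byte) digest.
print(hashlib.sha1(message).hexdigest())    # 40 hex characters
print(hashlib.sha256(message).hexdigest())  # 64 hex characters
```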
37

An E-Cash Protocol with Efficient Double-Spending Revocability

Yu, Yao-chun 25 August 2009 (has links)
Due to the fast progress of internet technologies, electronic commerce has become more and more popular, and many people and businesses conduct their transactions via the internet. Technologies such as credit cards, electronic tickets, e-cash, and other advanced services have realized the vision of electronic commerce. In this thesis, we propose an off-line e-cash scheme with anonymity, untraceability, double-spending checking, and traceability. Anonymity and untraceability must be possessed by any e-cash scheme. In an off-line e-cash scheme, the bank or a trusted third party (TTP) must be able to revoke the anonymity of a user who has doubly spent her/his e-cash. In our proposed e-cash scheme, the bank can quickly derive the identity of a user who has doubly spent her/his e-cash without the participation of the TTP. If illegal transactions are reported, the TTP can also directly revoke the anonymity of the user who spent her/his e-cash in those transactions. In addition, the police may need to trace a specific user in some situations, and we propose a process, called traceability, to satisfy this requirement.
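
As a rough illustration of how offline double-spending can expose a spender's identity in general (a classic Chaum-style challenge-splitting idea, sketched under simplifying assumptions of my own; it is not the scheme proposed in this thesis, and all names are hypothetical):

```python
import secrets

def withdraw(identity_bits):
    """Embed the spender's identity as random shares: for each bit, store
    (r, r XOR bit). Either share alone reveals nothing about the bit."""
    coin = []
    for b in identity_bits:
        r = secrets.randbits(1)
        coin.append((r, r ^ b))
    return coin

def spend(coin, challenge_bits):
    """The merchant's random challenge selects one share per position."""
    return [pair[c] for pair, c in zip(coin, challenge_bits)]

def detect_double_spend(chal1, resp1, chal2, resp2):
    """If one coin answers two different challenges, every position where the
    challenges differ exposes both shares, i.e. one identity bit."""
    return {i: r1 ^ r2                       # r XOR (r XOR bit) = bit
            for i, (c1, r1, c2, r2) in enumerate(zip(chal1, resp1, chal2, resp2))
            if c1 != c2}

identity = [1, 0, 1, 1]                      # toy 4-bit identity
coin = withdraw(identity)
c1, c2 = [0, 1, 0, 1], [1, 1, 0, 0]          # two independent challenges
print(detect_double_spend(c1, spend(coin, c1), c2, spend(coin, c2)))
```

A single spend answers only one challenge and leaks nothing; only a double-spender's two responses can be combined this way, which is the kind of revocability property discussed above.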
38

Description of the Security System in MONARCH

Trinks, Holger, Auerbach, Bert 10 February 1999 (has links)
This document briefly describes the security concept of the MONARCH archive system. The archived documents are protected against subsequent modification by several hash algorithms and digital signatures, so that the integrity of the archived publications can be guaranteed to MONARCH users.
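
As a generic illustration of the hash-plus-signature idea described above (a Python sketch under assumptions of my own, with an HMAC tag standing in for the digital signature; it is not the actual MONARCH implementation):

```python
import hashlib, hmac, secrets

ARCHIVE_KEY = secrets.token_bytes(32)   # stands in for the archive's signing key

def fingerprint(document: bytes) -> dict:
    """Record several hash values plus a keyed tag at archiving time."""
    return {
        "sha1": hashlib.sha1(document).hexdigest(),
        "sha256": hashlib.sha256(document).hexdigest(),
        "tag": hmac.new(ARCHIVE_KEY, document, "sha256").hexdigest(),
    }

def unchanged(document: bytes, record: dict) -> bool:
    """Verify that an archived document has not been modified afterwards."""
    return (hashlib.sha256(document).hexdigest() == record["sha256"]
            and hmac.compare_digest(
                hmac.new(ARCHIVE_KEY, document, "sha256").hexdigest(),
                record["tag"]))

record = fingerprint(b"archived publication")
print(unchanged(b"archived publication", record))   # True
print(unchanged(b"tampered publication", record))   # False
```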
40

Design and Implementation of a Learning System for Hashing

Siantidis, Zissis. January 2004 (has links)
University of Stuttgart, diploma thesis, 2004.
