11

Improved Internet Security Protocols Using Cryptographic One-Way Hash Chains

Alabrah, Amerah 01 January 2014 (has links)
In this dissertation, new approaches that utilize one-way cryptographic hash functions to design improved network security protocols are investigated. The proposed approaches are designed to be scalable and easy to implement with modern technology. The first contribution explores session cookies, with emphasis on the threat of session hijacking attacks resulting from session cookie theft or sniffing. In the proposed scheme, these cookies are replaced by easily computed authentication credentials based on Lamport's well-known one-time passwords. The basic idea in this scheme revolves around utilizing sparse caching units, where authentication credentials pertaining to cookies are stored and fetched when needed, thereby mitigating the computational overhead generally associated with one-way hash constructions. The second and third proposed schemes rely on dividing the one-way hash construction into a hierarchical two-tier construction, with each tier responsible for some aspect of authentication and generated using a different hash function. By arranging different cryptographic hash functions in two tiers, the hierarchical two-tier protocol (our second contribution) gives a significant performance improvement over previously proposed solutions for securing Internet cookies. The third contribution achieves further performance gains by indexing authentication credentials by their position within a multi-dimensional hash chain. In the fourth proposed scheme, the one-way hash construction is applied to user and broadcast authentication in wireless sensor networks; because of known energy and memory constraints, the scheme is modified to mitigate computational overhead so it can be easily applied in this setting. The fifth scheme combines the benefits of the sparse cache-supported scheme and the hierarchical scheme, and the resulting hybrid approach achieves efficient performance at the lowest possible caching cost. The sixth proposal presents an authentication scheme tailored to the multi-server single sign-on (SSO) environment. It utilizes the one-way hash construction in a Merkle hash tree and a hash calendar to prevent impersonation and session hijacking attacks, and it explores the optimal configuration of the one-way hash chain in this environment. All the proposed protocols are validated by extensive experimental analyses based on simulations of the many scenarios envisioned, supported by analytical models derived from mathematical formulations of the environment under investigation.
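The Lamport one-time-password chain underpinning the first contribution is easy to sketch. Below is a minimal illustration, not the dissertation's scheme (names and parameters are made up): the server stores only the chain anchor, the client reveals preimages in reverse order, and each accepted credential becomes the new anchor. Recomputing deep chain positions is the overhead the dissertation's sparse caching units are designed to avoid.

```python
import hashlib

def h(data: bytes) -> bytes:
    """One step of the one-way hash (SHA-256)."""
    return hashlib.sha256(data).digest()

def build_chain(seed: bytes, n: int) -> list[bytes]:
    """Return [h(seed), h^2(seed), ..., h^n(seed)]."""
    chain = [h(seed)]
    for _ in range(n - 1):
        chain.append(h(chain[-1]))
    return chain

chain = build_chain(b"client-secret", 1000)
anchor = chain[-1]  # server stores only h^1000(seed) at registration

def verify(credential: bytes) -> bool:
    """Server side: accept if hashing the credential yields the current
    anchor, then advance the anchor so the credential cannot be replayed."""
    global anchor
    if h(credential) == anchor:
        anchor = credential
        return True
    return False

assert verify(chain[-2])        # first request reveals h^999(seed)
assert not verify(chain[-2])    # a replayed credential is rejected
```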
12

Building Robust Distributed Infrastructure Networks

Benshoof, Brendan 09 May 2016 (has links)
Many competing designs for Distributed Hash Tables exist, exploring multiple models of addressing, routing, and network maintenance. Designing a general theoretical model and implementation of a Distributed Hash Table allows exploration of the possible properties of Distributed Hash Tables. We propose a generalized model of DHT behavior, centered on utilizing Delaunay triangulation in a given metric space to maintain the network's topology. We show that this model can produce network topologies that approximate existing DHT methods and provide a starting point for further exploration. We use our generalized model of DHT construction to design and implement more efficient Distributed Hash Table protocols, and discuss the qualities of potential successors to existing DHT technologies.
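The appeal of a Delaunay-based topology is that greedy routing cannot get stuck: forwarding to whichever neighbor is closest to the key always reaches the node responsible for it. A toy sketch under simplifying assumptions (a one-dimensional metric space, where the Delaunay neighbors are just each node's left and right neighbors; all names are illustrative):

```python
import hashlib

RING = 2**16  # toy 1-D key space

def node_id(name: str) -> int:
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % RING

def dist(a: int, b: int) -> int:
    return abs(a - b)  # the metric; the general model allows any metric space

# On a line, the Delaunay neighbors of a node are simply the nearest
# node on each side.
nodes = sorted(node_id(f"node-{i}") for i in range(8))
neighbors = {n: [m for m in (nodes[i - 1] if i > 0 else None,
                             nodes[i + 1] if i < len(nodes) - 1 else None)
                 if m is not None]
             for i, n in enumerate(nodes)}

def greedy_route(start: int, key: int) -> int:
    """Hop to the neighbor closest to the key; on a Delaunay topology a
    local minimum is guaranteed to be the globally closest node."""
    current = start
    while True:
        best = min(neighbors[current], key=lambda n: dist(n, key))
        if dist(best, key) >= dist(current, key):
            return current  # no neighbor is closer: this node owns the key
        current = best

owner = greedy_route(nodes[0], node_id("some-object"))
```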
13

CCFS cryptographically curated file system

Goldman, Aaron David 07 January 2016 (has links)
The Internet was originally designed to be a next-generation phone system that could withstand a Soviet attack. Today, we ask the Internet to perform tasks that no longer resemble phone calls, in the face of threats that no longer resemble Soviet bombardment. Yet we have come to rely on names that can be subverted at every level of the stack or simply be allowed to rot by their original creators. It is possible to build networks of content that serve the content distribution needs of today while withstanding the hostile environment that all modern systems face.

This dissertation presents the Cryptographically Curated File System (CCFS), which offers five properties that we feel a modern content distribution system should provide. The first property is Strong Links, which maintains that only the owner of a link can change the content to which it points. The second property, Permissionless Distribution, allows anyone to become a curator without depending on a naming or numbering authority. Third, Independent Validation arises from the fact that the object seeking affirmation need not choose the source of trust. Connectivity, the fourth property, allows any curator to delegate and curate the right to alter links: each curator can delegate control of a link, the designee can do the same, and the result is a chain of trust from the original curator to the one who assigned the content. Lastly, with the property of Collective Confidence, trust need not come from a single source, but can instead be an aggregate affirmation. Since CCFS embodies all five of these properties, it can serve as the foundational technology for a more robust Web.

CCFS can serve as the base of a web that performs the tasks of today’s Web, and may even outperform it. In the third chapter, we present a number of scenarios that demonstrate the capacity and potential of CCFS. The system can be used as a publication platform that has been re-optimized within the constraints of the modern Internet rather than the constraints of decades past. The curated links can still be organized into a hierarchical namespace (e.g., a Domain Name System (DNS)) and de jure verifications (e.g., a Certificate Authority (CA) system), but can also support social, professional, and reputational graphs. This data can be distributed, versioned, and archived more efficiently. Although communication systems were not designed for such a content-centric system, the combination of broadcasts and point-to-point communications is perfectly suited for scaling the distribution while allowing communities to share the burdens of hosting and maintenance. CCFS even supports the privacy of friend-to-friend networks without sacrificing the ability to interoperate with the wider world. Finally, CCFS does all of this without damaging the ability to operate search engines or alert systems, providing the discovery mechanism that is vital to a usable, useful web.

To demonstrate the viability of this model, we built a research prototype. The test results demonstrate that while the CCFS prototype is not ready to be used as a drop-in replacement for all file system use cases, the system is feasible. CCFS is fast enough to be usable and can be used to publish, version, archive, and search data. Even in this crude form, CCFS already demonstrates advantages over previous state-of-the-art systems.

When the Internet was designed, there were relatively few computers, far weaker than the computers we have now, and they were largely connected to each other over reliable links. Computing was expensive and propagation delay was negligible. Since then, propagation delay has not improved along a Moore’s-Law curve, and latency has come to dominate all other costs of retrieving content; specifically, propagation time has come to dominate latency. To improve latency, we are paying more for storage, processing, and bandwidth: the only way to improve propagation delay is to move content closer to its destination, so we store multiple copies and search multiple locations, trading off storage, bandwidth, and processing for lower propagation delay. The computing world should re-evaluate these trade-offs because the situation has changed. We need an Internet designed for the technologies in use today, rather than the tools of the 20th century. CCFS, which embraces this trade-off in favor of lower propagation delay, is better suited to 21st-century technologies.

Although CCFS is not preferable in all situations, it can still offer tremendous value. Better robustness, performance, and democracy make CCFS a contribution to the field. Robustness comes from the cryptographic assurances provided by the five properties of CCFS. Performance comes from the locality of content. Democracy arises from the lack of a centralized authority that may grant the right of free speech only to those who espouse rhetoric compatible with its ideals. Combined, this model for a cryptographically secure, content-centric system provides a novel contribution to the state of communications technology and information security.
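Of the five properties, Strong Links is the most mechanical, and a toy version fits in a few lines: content is stored under its own hash, and a link can only be re-pointed by the holder of that link's key. This is a hedged sketch of the general idea, not CCFS itself; HMAC stands in for the public-key signatures a real curator would use, and all names are made up.

```python
import hashlib
import hmac

store: dict[bytes, bytes] = {}               # content hash -> content
links: dict[str, tuple[bytes, bytes]] = {}   # link name -> (target hash, tag)

def put(content: bytes) -> bytes:
    """Immutable storage: the address of content is its hash."""
    addr = hashlib.sha256(content).digest()
    store[addr] = content
    return addr

def set_link(name: str, target: bytes, key: bytes) -> None:
    """Only a holder of `key` can produce a valid tag for this link."""
    tag = hmac.new(key, name.encode() + target, hashlib.sha256).digest()
    links[name] = (target, tag)

def follow(name: str, key: bytes) -> bytes:
    """Validate the link before trusting it, then fetch the content."""
    target, tag = links[name]
    expect = hmac.new(key, name.encode() + target, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("link was altered by a non-owner")
    return store[target]

curator_key = b"curator-secret"
addr = put(b"hello, robust web")
set_link("home", addr, curator_key)
assert follow("home", curator_key) == b"hello, robust web"
```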
14

Geometric Filter: A Space and Time Efficient Lookup Table with Bounded Error

Zhao, Yang 11 1900 (has links)
Lookup tables are frequently used in many applications to store and retrieve key-value pairs. Designing efficient lookup tables can be challenging under constraints on storage, query response time, and/or result accuracy. This thesis proposes the Geometric filter, a lookup table with a space requirement close to the theoretical lower bound, efficient construction, fast querying, and guaranteed accuracy. The Geometric filter consists of a sequence of hash tables whose sizes form a descending geometric series. Compared with its predecessor, the Bloomier filter, its encoding runs twice as fast and uses less memory, and it allows updates after encoding. We analyze the efficiency of the proposed lookup table in terms of its storage requirement and error bound, and run experiments on the Web 1TB 5-gram dataset to evaluate its effectiveness.
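The geometric layout is the part worth visualizing: because each level is half the size of the one above, the whole cascade costs only about twice the top table, and keys that collide at one level simply fall through to the next. A rough sketch of that layout under stated assumptions (illustrative only; the real Geometric filter's encoding and error-bound machinery is more involved, and all names here are invented):

```python
import hashlib

class GeometricTable:
    """A cascade of hash tables whose sizes halve at each level."""

    def __init__(self, top_size: int = 64, levels: int = 6):
        # Sizes form a descending geometric series: 64, 32, 16, ...
        self.tables = [[None] * max(1, top_size >> i) for i in range(levels)]

    def _slot(self, key: str, level: int) -> int:
        d = hashlib.sha256(f"{level}:{key}".encode()).digest()
        return int.from_bytes(d[:8], "big") % len(self.tables[level])

    def insert(self, key: str, value) -> bool:
        # A key that collides at one level cascades to the next.
        for level in range(len(self.tables)):
            i = self._slot(key, level)
            if self.tables[level][i] is None:
                self.tables[level][i] = (key, value)
                return True
        return False  # all levels collided: overflow

    def get(self, key: str):
        for level in range(len(self.tables)):
            entry = self.tables[level][self._slot(key, level)]
            if entry is not None and entry[0] == key:
                return entry[1]
        return None

gt = GeometricTable()
gt.insert("alpha", 1)
gt.insert("beta", 2)
assert gt.get("alpha") == 1 and gt.get("gamma") is None
```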
15

TUPLE FILTERING IN SILK USING CUCKOO HASHES

Webb, Aaron 25 August 2010 (has links)
SiLK Tools is a suite of network flow tools that network analysts use to detect intrusions, viruses, worms, and botnets, and to analyze network performance. One tool in SiLK is tuple filtering, where flows are filtered based on inclusion in a “multi-key” set (MKset) whose unique members are composite keys drawing their values from multiple fields of a SiLK flow record. We propose and evaluate a more efficient method of implementing MKset filtering that uses cuckoo hashes, which underlie McHugh et al.’s cuckoo bag (cubag) suite of MKset SiLK tools. Our solution improves execution time for filtering with an MKset of size k by a factor of O(log k), and decreases the memory footprint of MKset filtering by 50%. The solution also saves 90% of the disk space for MKset file storage, and adds functionality for transformations such as subnet masking on flow records during MKset filtering.
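Cuckoo hashing is what buys the constant-time membership tests: every key has exactly two candidate slots, and inserts evict and relocate occupants rather than probe further. A minimal set-style sketch of the technique, not the cubag implementation (sizes and keys are illustrative):

```python
import hashlib

class CuckooSet:
    """Two tables, two hash functions; lookups probe exactly two slots."""

    def __init__(self, size: int = 16):
        self.size = size
        self.tables = [[None] * size, [None] * size]

    def _h(self, key: str, which: int) -> int:
        d = hashlib.sha256(f"{which}:{key}".encode()).digest()
        return int.from_bytes(d[:8], "big") % self.size

    def contains(self, key: str) -> bool:
        return any(self.tables[t][self._h(key, t)] == key for t in (0, 1))

    def insert(self, key: str, max_kicks: int = 32) -> bool:
        if self.contains(key):
            return True
        t = 0
        for _ in range(max_kicks):
            i = self._h(key, t)
            # Place the key, evicting any occupant of the slot.
            key, self.tables[t][i] = self.tables[t][i], key
            if key is None:
                return True
            t ^= 1  # the evicted key retries in the other table
        return False  # likely cycle; a full implementation would rehash

cuckoo = CuckooSet()
assert cuckoo.insert("10.0.0.1,443,tcp")      # a composite "multi-key"
assert cuckoo.contains("10.0.0.1,443,tcp")
```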
16

Geometric Filter: A Space and Time Efficient Lookup Table with Bounded Error

Zhao, Yang Unknown Date
No description available.
17

An ethnographic look at Rabbit Hash, Kentucky

Clare, Callie. January 2007 (has links)
Thesis (M.A.)--Bowling Green State University, 2007. / Document formatted into pages; contains iv, 104 p. Includes bibliographical references.
18

Contention resolution in hashing-based shared memory simulations

Stemann, Volker. January 1995 (has links)
Also published as: doctoral dissertation, Universität-Gesamthochschule Paderborn, 1995.
19

MicroCuckoo Hash Engine for High-Speed IP Lookup

Tata, Nikhitha 23 June 2017 (has links)
Internet data traffic is tripling every two years due to exponential growth in the number of routers. Routers classify packets by determining each packet's flow, using rule-checking mechanisms applied to the packet headers. However, the memory components, such as TCAMs, that store these rules are expensive and power-hungry. Hence, although current hardware IP-lookup algorithms achieve multi-gigabit speeds, they suffer from substantial memory overhead. To overcome this limitation, we propose a packet classification methodology based on a MicroCuckoo hash technique for routing packets. This approach significantly reduces memory requirements by eliminating the need for TCAM cells entirely. Cuckoo hashing provides very high-speed, hardware-accelerated table lookups and is economical compared to TCAMs. The proposed IP-lookup algorithm is implemented as a simulation-based hardware/software model, which is developed, tested, and synthesized using the Vivado HLS tool. / Master of Science
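A common TCAM-free IP-lookup pattern pairs hash tables with longest-prefix-first probing; a hardware cuckoo hash engine would replace the plain dictionaries below. This is a hedged sketch of the general technique, not the thesis's MicroCuckoo design (routes and names are invented, and it is IPv4-only):

```python
import ipaddress

tables: dict[int, dict[int, str]] = {}  # prefix_len -> {prefix_bits: next_hop}

def add_route(cidr: str, next_hop: str) -> None:
    net = ipaddress.ip_network(cidr)
    tables.setdefault(net.prefixlen, {})[int(net.network_address)] = next_hop

def lookup(addr: str) -> str | None:
    ip = int(ipaddress.ip_address(addr))
    for plen in sorted(tables, reverse=True):   # longest prefix first
        # Mask the address down to this prefix length and probe the table.
        prefix = ip & (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF
        hop = tables[plen].get(prefix)
        if hop is not None:
            return hop
    return None

add_route("10.0.0.0/8", "eth0")
add_route("10.1.0.0/16", "eth1")
assert lookup("10.1.2.3") == "eth1"   # the more specific route wins
assert lookup("10.9.9.9") == "eth0"
```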
20

Towards a Framework for DHT Distributed Computing

Rosen, Andrew 12 August 2016 (has links)
Distributed Hash Tables (DHTs) are protocols and frameworks used by peer-to-peer (P2P) systems. They serve as the organizational backbone for many P2P file-sharing systems due to their scalability, fault-tolerance, and load-balancing properties. These same properties are highly desirable in a distributed computing environment, especially one that wants to use heterogeneous components. We show that DHTs can be used not only as the framework for a P2P file-sharing service, but as a P2P distributed computing platform. We propose a P2P distributed computing framework built on distributed hash tables, based on our prototype system ChordReduce. This framework would make it simple and efficient for developers to create their own distributed computing applications. Unlike Hadoop and similar MapReduce frameworks, ours can be used both in the context of a datacenter and as part of a P2P computing platform, which opens up new possibilities for building platforms for distributed computing problems. One advantage of our system is an autonomous load-balancing mechanism: nodes can independently acquire work from other nodes in the network rather than sitting idle, and more powerful nodes can use the mechanism to acquire more work, exploiting the heterogeneity of the network. By utilizing this load-balancing algorithm, a datacenter could easily leverage additional P2P resources at runtime on an as-needed basis. Our framework will allow MapReduce-like or distributed machine learning platforms to be easily deployed in a greater variety of contexts.
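ChordReduce builds on Chord, whose central rule is that nodes and work items share one hash ring and each item belongs to the first node clockwise from its ID. That rule is also what makes autonomous load balancing natural, since ownership shifts automatically as nodes join and leave. A minimal sketch of the assignment rule under toy assumptions (the ring size and names are illustrative; real Chord uses a 160-bit ring and finger tables for O(log n) lookups):

```python
import hashlib
from bisect import bisect_right

RING = 2**16  # toy ring size

def chord_id(name: str) -> int:
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % RING

nodes = sorted(chord_id(f"worker-{i}") for i in range(8))

def owner(task: str) -> int:
    """The successor of the task's ID on the ring, wrapping past the top."""
    tid = chord_id(task)
    i = bisect_right(nodes, tid)
    return nodes[i % len(nodes)]

# Tasks spread across workers; removing a node reassigns only its share
# of the ring to its successor, which is the basis for fault tolerance.
assignments = {t: owner(t) for t in ("map-001", "map-002", "reduce-01")}
```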
