1 |
Building Robust Distributed Infrastructure Networks. Benshoof, Brendan. 09 May 2016.
Many competing designs for Distributed Hash Tables exist, exploring multiple models of addressing, routing, and network maintenance. Designing a general theoretical model and implementation of a Distributed Hash Table allows exploration of the possible properties of Distributed Hash Tables. We propose a generalized model of DHT behavior, centered on using Delaunay triangulation in a given metric space to maintain the network's topology. We show that, using this model, we can produce network topologies that approximate existing DHT methods and provide a starting point for further exploration. We use our generalized model of DHT construction to design and implement more efficient Distributed Hash Table protocols, and we discuss the qualities of potential successors to existing DHT technologies.
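As a rough illustration of the topology-maintenance idea above, the sketch below derives each node's overlay links from a Delaunay triangulation over node addresses in a 2-D Euclidean metric space and routes greedily toward the point a key maps to. The coordinates, helper names, and use of scipy are illustrative assumptions, not the thesis' implementation.

```python
# Minimal sketch: derive overlay neighbor links from a Delaunay triangulation
# of node positions in a 2-D Euclidean metric space, then route greedily.
# Node coordinates and function names are illustrative assumptions.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(points):
    """Map each node index to the set of its Delaunay neighbors."""
    tri = Delaunay(points)
    neighbors = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:          # each simplex is a triangle (3 vertex indices)
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    return neighbors

def greedy_route(points, neighbors, start, key_point):
    """Forward toward the node closest to the key's point; on a Delaunay
    graph in Euclidean space, greedy forwarding does not get stuck."""
    current = start
    path = [current]
    while True:
        candidates = [current] + list(neighbors[current])
        best = min(candidates, key=lambda i: np.linalg.norm(points[i] - key_point))
        if best == current:                 # no neighbor is closer: this node owns the key
            return path
        current = best
        path.append(current)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    pts = rng.random((50, 2))               # 50 nodes with random 2-D addresses
    nbrs = delaunay_neighbors(pts)
    print(greedy_route(pts, nbrs, start=0, key_point=np.array([0.9, 0.1])))
```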
|
2 |
A Grouped Hamming Network. Logan, Bryan. January 2010.
A distributed hash table (DHT) is a type of peer-to-peer (P2P) network that, like
traditional hash tables, maps keys to values. Unlike traditional hash tables, however, the
data is distributed across a network with each node being responsible for a particular range
of keys. Numerous DHTs have been presented and have become the cornerstone of
wildly popular P2P file-sharing applications, such as BitTorrent. Each of these DHTs
trades off the number of pointers maintained per node against overhead and lookup time;
storing more pointers decreases the lookup time at the expense of increased overhead.
A Grouped Hamming Network (GHN), the overlay network presented in this thesis,
allows for the number of pointers per node to be any increasing function of n, P(n) =
Ω(log n). The system presented assumes that nodes fail independently and uniformly at
random with some probability q = 1 − p. Three different schemes for routing in a GHN
are presented. For each routing scheme, a theoretical estimate of the probability of failure
is derived, along with optimal configurations in terms of n and P(n). Simulations of
GHNs with various configurations indicate that these estimates are indeed accurate
for reasonable values of q and that the derived configurations are in fact optimal.
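To make the pointer/lookup trade-off concrete, the sketch below shows plain Hamming-distance (hypercube-style) routing, the family of overlays a GHN generalizes; the grouped scheme, its P(n) pointer function, and its failure model are not reproduced here.

```python
# Sketch of plain Hamming-distance (hypercube) routing, to illustrate the
# family of overlays a GHN generalizes; the grouped scheme from the thesis
# is not reproduced here.
def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def hypercube_route(source: int, target: int, dimension: int):
    """Forward by flipping one differing ID bit per hop; the number of hops
    equals the Hamming distance between source and target."""
    path = [source]
    current = source
    for bit in range(dimension):
        mask = 1 << bit
        if (current ^ target) & mask:      # this bit still differs: fix it
            current ^= mask
            path.append(current)
    return path

# Example: route between two 8-bit node IDs (4 differing bits -> 4 hops).
print(hypercube_route(0b00001111, 0b10100101, dimension=8))
print(hamming_distance(0b00001111, 0b10100101))
```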
|
3 |
A Structured Segment Tree Approach to Supporting Range Queries in P2P Systems. Huang, Tzu-lun. 05 July 2007.
A Peer-to-Peer (P2P) system is a distributed system whose component nodes play similar roles. Every user node (peer) can exchange resources with, and contribute resources to, other nodes in the system. Just as peers may join and leave the system dynamically, data items are also inserted into and removed from the system dynamically. Given a range, a range query finds every data item whose value lies within that range; for example, a range query can retrieve all of the Beatles' works from 1961 to 1968. However, once range data is distributed over a P2P system through the hash function widely used in many P2P systems, the continuity of the range data is no longer guaranteed, so locating the scattered items whose values lie within a given range becomes expensive. The Distributed Segment Tree (DST) method preserves the local continuity of range data at each node by using a segment tree, and it can decompose any given range into the minimum number of node intervals whose union covers the whole requested range. Because the DST method is built on Distributed Hash Table (DHT) logic, it can be applied to any DHT-based P2P system. However, the DST method's data distribution may cause overlap, so when searching a range it sends more requests than are really needed, and although it introduces the Downward Load Stripping Mechanism, the load on peers may still be unbalanced. The root cause of these problems is that the DST method layers DHT logic onto the P2P system. Therefore, in this thesis, we propose a method called the Structured Segment Tree (SST), which does not use DHT logic but instead embeds the structure of the segment tree directly into the P2P system: the P2P network topology of an SST is the structure of a segment tree. Unlike a DST, an SST fully reflects the properties of the original segment tree. Each peer in our proposed system represents one node of a segment tree; data intervals at the same level are contiguous and do not overlap, and the union of the data intervals at a fully populated level is exactly the whole data range the system supports. When searching a range, the SST method sends only as many requests as are needed. In addition, we add sibling links to preserve spatial locality and speed up searches. Our SST method also balances load better than the DST method. Our simulations show that the SST method visits fewer peers than the DST method to locate the requested range data, and that the resulting load is more evenly balanced.
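The range-decomposition property that both DST and SST rely on can be illustrated in a few lines: a segment tree breaks a query range into the minimum set of canonical node intervals, each of which maps to one peer. The interval bounds and naming below are illustrative, not the thesis' protocol.

```python
# Sketch: decompose a query range into the canonical segment-tree intervals
# that cover it. Intervals are half-open [lo, hi); the tree spans [0, 16).
def canonical_intervals(lo, hi, node_lo=0, node_hi=16):
    """Return the minimal set of segment-tree node intervals covering [lo, hi)."""
    if hi <= node_lo or node_hi <= lo:
        return []                                  # no overlap with this node
    if lo <= node_lo and node_hi <= hi:
        return [(node_lo, node_hi)]                # node fully inside the query
    mid = (node_lo + node_hi) // 2
    return (canonical_intervals(lo, hi, node_lo, mid) +
            canonical_intervals(lo, hi, mid, node_hi))

# Example: the range [3, 11) over a tree spanning [0, 16) decomposes into
# [(3, 4), (4, 8), (8, 10), (10, 11)] -- each tuple maps to one peer.
print(canonical_intervals(3, 11))
```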
|
5 |
P2P SIP over mobile ad hoc networks / [SIP P2P pour les réseaux mobiles ad hoc]. Wongsaardsakul, Thirapon. 04 October 2010.
This work presents a novel Peer-to-Peer (P2P) framework for the Session Initiation Protocol (SIP) on Mobile Ad Hoc Networks (MANETs). SIP follows a client-server model of computing, which can introduce a single point of failure; P2P SIP addresses this problem with a distributed implementation based on a P2P paradigm. However, neither the traditional SIP architecture nor existing P2P SIP architectures are suitable for MANETs, because they were originally designed for infrastructure networks in which most nodes are static. We focus on distributed P2P resource lookup mechanisms for SIP that can tolerate failures resulting from node mobility. Our target application is SIP-based multimedia communication in a rapidly deployable disaster emergency network. To achieve our goal, we provide four contributions, as follows. The first contribution is a novel P2P lookup architecture based on a P2P overlay network called a Structured Mesh Overlay Network (SMON).
This overlay network enables P2P applications to perform fast resource lookups in the MANET environment. SMON utilizes a cross-layer design based on a Distributed Hash Table (DHT) and has direct access to OLSR routing information. Its cross-layer design allows the overlay network's performance to be optimized as the network topology changes. The second contribution is a distributed SIP architecture on MANET, called SIPMON, providing SIP user location discovery in a P2P manner that tolerates single and multiple points of failure. Our approach extends traditional SIP user location discovery by utilizing the DHT in SMON to distribute SIP object identifiers over SMON. It offers constant-time SIP user discovery, which results in a fast call setup time between two MANET users. From simulation and experiment results, we find that SIPMON provides the lowest call setup delay when compared to existing broadcast-based approaches. The third contribution is an extended SIPMON supporting several participating MANETs connected to the Internet. This extension (SIPMON+) provides seamless mobility support, allowing a SIP user to roam from an ad hoc network to an infrastructure network such as the Internet without interrupting an ongoing session. We propose a novel OLSR Overlay Network (OON), a single overlay network containing MANET nodes and some nodes on the Internet. These nodes can communicate using the same OLSR routing protocol; therefore, SIPMON can be automatically extended without modifying SIPMON's internal operations. Through our test-bed experiments, we show that SIPMON+ has better performance in terms of call setup delay and handoff delay than MANET for Network Mobility (MANEMO). The fourth contribution is a proof of concept and a prototype of P2P multimedia communication based on SIPMON+ for post-disaster recovery missions. We evaluate our prototype and MANEMO-based approaches through experimentation in real disaster situations (Vehicle-to-Infrastructure scenarios). We found that our prototype outperforms MANEMO-based approaches in terms of call setup delay, packet loss, and deployment time.
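A minimal sketch of the P2P user-location idea described above, assuming the SIP address-of-record (AOR) is hashed to a DHT key and the current contact URI is stored under it; the in-memory dict stands in for the SMON overlay, and the class and method names are assumptions.

```python
# Sketch of DHT-based SIP user location: hash the SIP address-of-record
# (AOR) to a key and store the user's current contact URI under it.
# The dict stands in for the SMON overlay; names here are illustrative.
import hashlib

class TinyDHT:
    """Stand-in for a real DHT overlay: put/get by 160-bit key."""
    def __init__(self):
        self.store = {}

    @staticmethod
    def key_for(aor: str) -> str:
        return hashlib.sha1(aor.encode()).hexdigest()

    def put(self, aor: str, contact: str) -> None:
        self.store[self.key_for(aor)] = contact   # REGISTER: publish location

    def get(self, aor: str):
        return self.store.get(self.key_for(aor))  # INVITE: resolve callee

dht = TinyDHT()
dht.put("sip:alice@example.org", "sip:alice@10.0.0.7:5060")
# A caller resolves the callee's contact with one DHT lookup (constant time),
# then sends the INVITE directly to that address.
print(dht.get("sip:alice@example.org"))
```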
|
6 |
Modeling, simulations, and experiments to balance performance and fairness in P2P file-sharing systems. Li, Yunzhao. January 1900.
Doctor of Philosophy / Department of Electrical and Computer Engineering / Don Gruenbacher / Caterina Scoglio / In this dissertation, we investigate research gaps that still exist in P2P file-sharing systems: the need to maintain fairness during the content-information publishing/retrieving process, and the effect of stranger policies on P2P fairness.
First, through a wide range of measurements in the KAD network, we present the impact of a poorly designed incentive fairness policy on the performance of looking up content information. The KAD network, designed to help peers publish and retrieve sharing information, adopts distributed hash table (DHT) technology and is integrated into the aMule/eMule P2P file-sharing network. We develop a distributed measurement framework that employs multiple test nodes running on the PlanetLab testbed. During the measurements, the routing tables of around 20,000 peers are crawled and analyzed, and more than 3,000,000 pieces of source-location information from the publishing tables of multiple peers are retrieved and contacted. Based on these measurements, we show that the routing table is well maintained, while the maintenance policy for the source-location publishing table is not well designed. Both the current maintenance schedule for the publishing table and the poor incentive policy on publishing peers result in low availability of the publishing table, which in turn causes poor lookup performance in the KAD network. Moreover, we propose three possible solutions to address these issues: a self-maintenance scheme with a short renewal interval, a chunk-based publishing/retrieving scheme, and a fairness scheme.
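A schematic sketch of the proposed self-maintenance idea: a publisher re-publishes its source-location entries on a renewal interval shorter than the expiry time applied by the peers storing them, so the publishing table stays available. The interval, data layout, and publish callback below are illustrative assumptions, not KAD's actual parameters.

```python
# Schematic sketch of self-maintenance: republish source-location entries on
# an interval shorter than the expiry time used by the storing peers. The
# concrete interval and the publish callback are illustrative assumptions.
import time

def republish_loop(entries, publish, renewal_interval_s=1800, rounds=3):
    """Republish every entry each renewal interval (e.g. 30 min < expiry)."""
    for round_no in range(rounds):
        for key, source_info in entries.items():
            publish(key, source_info)     # push <content-hash, source location> again
        if round_no < rounds - 1:
            time.sleep(renewal_interval_s)

# Usage sketch: entries map content hashes to this peer's location info.
entries = {"a1b2c3": {"ip": "198.51.100.7", "port": 4672}}
republish_loop(entries, publish=lambda k, v: print("republish", k, v),
               renewal_interval_s=1, rounds=2)
```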
Second, using both numerical analyses and agent-based simulations, we evaluate the impact of different stranger policies on system performance and fairness. We find that the most restrictive stranger policy brings the best fairness at the cost of performance degradation, and that performance and fairness do not vary consistently across stranger policies. A trade-off exists between controlling free-riding and maintaining system performance; thus, P2P designers must handle strangers carefully according to their individual design goals. We also show that BitTorrent prefers to maintain fairness with an extremely restrictive stranger policy, while aMule/eMule's fully rewarding stranger policy promotes free-riders' benefit.
|
7 |
Geo-process lookup management. Hägglund, Andreas. January 2017.
This thesis presents a method to deploy and look up applications and devices based on a geographical location. The proposed solution is a combination of two existing technologies: a geocode system that encodes latitude and longitude coordinates, and a Distributed Hash Table (DHT) in which values are stored and accessed as <key, value> pairs. The purpose of this work is to be able to search a specific location for the closest device that meets the user's needs, such as finding an Internet of Things (IoT) device. The thesis covers a search method that iterates over key-value pairs in the DHT and expands the area to find devices farther away. The search is performed using two main algorithm implementations, LayerExpand and SpiralBoxExpand, which scan the area around the point where the user started the search. LayerExpand and SpiralBoxExpand are tested and evaluated in comparison to each other, and the comparison results are presented as plots in which both functions are shown together. The analysis shows how the size of the DHT, the number of users, and the size of the search area affect the performance of the searches.
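A sketch of the expanding-area lookup in the spirit of LayerExpand/SpiralBoxExpand, assuming a simple lat/lon grid cell as the geocode and a dict standing in for the DHT; the thesis' actual geocode and algorithms may differ.

```python
# Sketch of the expanding-area lookup idea: devices are stored in a DHT
# under a grid-cell key derived from their coordinates; the search widens
# ring by ring around the user's cell until something is found. The grid
# encoding and names are illustrative, not the thesis' exact geocode.
def cell_key(lat, lon, cell_deg=0.01):
    """Quantize coordinates into a grid cell identifier (the DHT key)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def layer_expand(dht, lat, lon, max_layers=5, cell_deg=0.01):
    """Check the user's cell first, then each surrounding ring of cells."""
    cx, cy = cell_key(lat, lon, cell_deg)
    for layer in range(max_layers + 1):
        found = []
        for dx in range(-layer, layer + 1):
            for dy in range(-layer, layer + 1):
                if max(abs(dx), abs(dy)) != layer:
                    continue                       # only the outermost ring is new
                found.extend(dht.get((cx + dx, cy + dy), []))
        if found:
            return layer, found
    return None, []

# Usage sketch: the dict stands in for <key, value> lookups in the DHT.
dht = {cell_key(59.3293, 18.0686): ["iot-sensor-17"]}
print(layer_expand(dht, 59.3300, 18.0700))
```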
|
8 |
Uma arquitetura escalável para recuperação e atualização de informações com relação de ordem total / A scalable architecture for retrieving information with total order relationship. Rocha, Vladimir Emiliano Moreira. 17 November 2017.
Since the beginning of the 21st century, we have experienced explosive growth in the production of information of many kinds, such as photos, audio, and videos. Some of this information can be divided into smaller parts that must be related following a total order. For example, a video file can be divided into ten segments identified with numbers from 1 to 10; to play the original video from these segments, their identifiers must be fully ordered. A structure called a Distributed Hash Table (DHT) has been widely used to efficiently store, update, and retrieve this kind of information in several application domains, such as video on demand and sensor monitoring. However, the DHT encounters scalability problems when one of its members cannot answer the requests it receives, making the information inaccessible. This work presents MATe, a layered architecture that addresses the scalability problem on two levels: extending the DHT with the introduction of utility-based agents, and organizing the volume of requests. The first layer handles scalability by allowing the creation of new agents to distribute requests, preventing any single agent from having its scalability compromised. The second layer is composed of groups of devices organized in such a way that only a few of them are chosen to perform requests. The architecture was implemented for two application scenarios where scalability problems arise: (i) sensor monitoring; and (ii) video on demand. For both scenarios, the experimental results show that MATe improves scalability when compared to the original DHT implementations.
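A minimal sketch of the total-order use case described above: the segments of one item are stored in the DHT under keys that embed their sequence numbers, and the item is reassembled by fetching the keys in order. The key scheme and the dict stand-in are assumptions; MATe's agent and device-group layers are not modeled.

```python
# Sketch of storing and retrieving totally ordered segments through a DHT:
# each segment key embeds its sequence number, so the original item can be
# reassembled in order. The dict stands in for the DHT.
import hashlib

def segment_key(item_id: str, index: int) -> str:
    return hashlib.sha1(f"{item_id}/{index:06d}".encode()).hexdigest()

def publish_segments(dht, item_id, segments):
    for i, data in enumerate(segments, start=1):
        dht[segment_key(item_id, i)] = data

def fetch_in_order(dht, item_id, count):
    """Retrieve segments 1..count and concatenate them in total order."""
    return b"".join(dht[segment_key(item_id, i)] for i in range(1, count + 1))

dht = {}
publish_segments(dht, "video-42", [b"part1", b"part2", b"part3"])
print(fetch_in_order(dht, "video-42", 3))  # b'part1part2part3'
```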
|
9 |
Scalable Streaming Graph Partitioning. Seyed Khamoushi, Seyed Mohammadreza. January 2017.
Large-scale graph-structured datasets are growing at an increasing rate; social network graphs are one example. Processing large-scale graph-structured datasets is central to many applications, ranging from telecommunication to biology, and has led to the development of many parallel graph algorithms. The performance of parallel graph algorithms largely depends on how the underlying graph is partitioned. In this work, we focus on streaming vertex-cut graph partitioning algorithms, where partitioners receive a graph as a stream of vertices and edges and assign partitions to them on arrival, once and for all. Some of these algorithms maintain a state during partitioning; in some cases, the state is so large that it cannot be kept in a single machine's memory. In many real-world scenarios, several instances of a streaming graph partitioning algorithm are run simultaneously to improve system throughput. However, running several instances of a partitioner drops the partitioning quality considerably because each partitioner has incomplete information. Even frequent state sharing, combined with buffering mechanisms, does not completely solve the problem because of the heavy communication overhead produced by the partitioners. In this thesis, we propose an algorithm that tackles the low scalability and performance of existing streaming graph partitioning algorithms by providing an efficient way of sharing state, combined with a windowing mechanism. We compare state-of-the-art streaming graph partitioning algorithms with our proposed solution with respect to performance and efficiency. Our solution combines a batch processing method with a shared-state mechanism, used to share partitioner state, to achieve both outstanding performance and high partitioning quality. We provide a robust implementation of our method in the PowerGraph framework. Furthermore, we empirically evaluate the impact of partitioning quality on how graph algorithms perform in a real cloud environment. The results show that our proposed method outperforms other algorithms in terms of partitioning quality and resource consumption and improves partitioning time considerably: on average, our method improves partitioning time by 23%, decreases communication load by 15%, and increases memory consumption by only 5% compared to state-of-the-art streaming graph partitioning algorithms.
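As a point of reference for streaming vertex-cut partitioning, the sketch below assigns each arriving edge, once and for all, to a partition using a simple greedy rule over a vertex-replica table; that table plays the role of the state that concurrent partitioner instances would share. This is a generic greedy scheme with a crude balance cap, not the thesis' windowed shared-state algorithm.

```python
# Sketch of greedy streaming vertex-cut partitioning: each arriving edge is
# assigned, once and for all, to the least-loaded partition that already
# holds a replica of one of its endpoints (falling back to the globally
# least-loaded partition to keep sizes balanced). The `replicas` table is
# the state that concurrent partitioner instances would share.
from collections import defaultdict

def partition_stream(edges, num_partitions, slack=2):
    replicas = defaultdict(set)            # vertex -> partitions holding a replica
    load = [0] * num_partitions            # edges assigned to each partition
    assignment = {}
    for u, v in edges:
        candidates = (replicas[u] | replicas[v]) or set(range(num_partitions))
        p = min(candidates, key=lambda i: load[i])
        lightest = min(range(num_partitions), key=lambda i: load[i])
        if load[p] - load[lightest] > slack:     # crude balance cap
            p = lightest
        assignment[(u, v)] = p
        replicas[u].add(p)
        replicas[v].add(p)
        load[p] += 1
    # replication factor = average number of partitions per vertex (lower is better)
    rf = sum(len(s) for s in replicas.values()) / len(replicas)
    return assignment, load, rf

edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 1)]
print(partition_stream(edges, num_partitions=2))
```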
|
10 |
DistroFS: En lösning för distribuerad lagring av filer / DistroFS: A Solution For Distributed File Storage. Hansen, Peter; Norell, Olov. January 2007.
Currently existing distributed hash table (DHT) implementations use a small storage size for data; for example, OpenDHT limits stored values to 1 kByte. Is it possible to store files larger than 1 kByte using the DHT technique? Is there a way to protect the data without losing too much performance? Our solution was to develop client and server software. This software uses the DHT technique to split files and distribute their parts across a cluster of servers. To see whether the software worked as intended, we created a test based on our opening questions. The test shows that it is indeed possible to store files larger than 1 kByte securely using the DHT technique without losing any significant performance.
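A minimal sketch of the split-and-distribute idea as described: break a file into chunks no larger than the DHT's per-value limit, store each chunk under its content hash, and keep a manifest of chunk keys for reassembly and integrity checking. The 1 kByte limit mirrors the OpenDHT constraint mentioned above; the dict stands in for the server cluster.

```python
# Sketch of the split-and-distribute idea: break a file into chunks that fit
# the DHT's per-value limit (1 kByte here, mirroring OpenDHT), store each
# chunk under its content hash, and keep a manifest to reassemble and verify.
import hashlib

CHUNK_SIZE = 1024  # bytes, per-value limit of the underlying DHT

def store_file(dht, data):
    """Store a file as hashed chunks; the returned manifest lists chunk keys."""
    manifest = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        dht[key] = chunk                      # content-addressed: key verifies the chunk
        manifest.append(key)
    return manifest

def fetch_file(dht, manifest):
    parts = []
    for key in manifest:
        chunk = dht[key]
        assert hashlib.sha256(chunk).hexdigest() == key   # integrity check
        parts.append(chunk)
    return b"".join(parts)

dht = {}
manifest = store_file(dht, b"x" * 5000)       # a 5 kB file becomes 5 chunks
print(len(manifest), fetch_file(dht, manifest) == b"x" * 5000)
```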
|