21

P2P SIP over mobile ad hoc networks

Wongsaardsakul, Thirapon 04 October 2010 (has links) (PDF)
This work presents a novel Peer-to-Peer (P2P) framework for the Session Initiation Protocol (SIP) on Mobile Ad Hoc Networks (MANETs). SIP follows a client-server model of computing, which can introduce a single point of failure. P2P SIP addresses this problem with a distributed implementation based on the P2P paradigm. However, neither the traditional SIP architecture nor P2P SIP is suitable for MANETs, because both were originally designed for infrastructure networks in which most nodes are static. We focus on distributed P2P resource lookup mechanisms for SIP that can tolerate failures resulting from node mobility. Our target application is SIP-based multimedia communication in a rapidly deployable disaster emergency network. To achieve our goal, we make four contributions. The first contribution is a novel P2P lookup architecture based on a P2P overlay network called the Structured Mesh Overlay Network (SMON). This overlay network enables P2P applications to perform fast resource lookups in the MANET environment. SMON utilizes a cross-layer design based on a Distributed Hash Table (DHT) and has direct access to OLSR routing information. Its cross-layer design allows the overlay network performance to be optimized as the network topology changes. The second contribution is a distributed SIP architecture on MANET (SIPMON) that provides SIP user location discovery in a P2P manner and tolerates single and multiple points of failure. Our approach extends traditional SIP user location discovery by utilizing the DHT in SMON to distribute SIP object identifiers over the overlay. It offers constant-time SIP user discovery, which results in a fast call setup time between two MANET users. From simulation and experiment results, we find that SIPMON provides the lowest call setup delay compared to existing broadcast-based approaches. The third contribution is an extended SIPMON supporting several participating MANETs connected to the Internet. This extension (SIPMON+) provides seamless mobility support, allowing a SIP user to roam from an ad hoc network to an infrastructure network such as the Internet without interrupting an ongoing session. We propose a novel OLSR Overlay Network (OON), a single overlay network containing MANET nodes and some nodes on the Internet, all of which communicate using the same OLSR routing protocol. Therefore, SIPMON can be extended automatically without modifying SIPMON internal operations. Through our test-bed experiments, we show that SIPMON+ outperforms MANET for Network Mobility (MANEMO) in terms of call setup delay and handoff delay. The fourth contribution is a proof of concept and prototype of P2P multimedia communication based on SIPMON+ for post-disaster recovery missions. We evaluate our prototype and MANEMO-based approaches through experimentation in real disaster situations (Vehicle-to-Infrastructure scenarios). We find that our prototype outperforms MANEMO-based approaches in terms of call setup delay, packet loss, and deployment time.
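The constant-time user discovery described in this abstract rests on the standard DHT pattern of hashing a SIP identifier onto the overlay's key space and storing the contact binding at the node responsible for that key. The sketch below illustrates that registration/lookup pattern only; the class and function names are illustrative assumptions, not the thesis's SIPMON API.

```python
import hashlib

class TinyDHT:
    """Toy stand-in for a DHT overlay: a dict keyed by 160-bit identifiers."""
    def __init__(self):
        self.store = {}

    @staticmethod
    def key(text: str) -> int:
        # Hash an arbitrary string (e.g. a SIP URI) onto the DHT identifier space.
        return int.from_bytes(hashlib.sha1(text.encode()).digest(), "big")

    def put(self, k: int, value) -> None:
        self.store[k] = value          # real overlay: route the PUT to the node owning k

    def get(self, k: int):
        return self.store.get(k)       # real overlay: resolved by the overlay lookup


def register(dht: TinyDHT, sip_uri: str, contact: str) -> None:
    """Publish a SIP user's current contact address under hash(sip_uri)."""
    dht.put(TinyDHT.key(sip_uri), contact)


def locate(dht: TinyDHT, sip_uri: str):
    """Resolve a SIP URI to a contact address before sending an INVITE."""
    return dht.get(TinyDHT.key(sip_uri))


if __name__ == "__main__":
    dht = TinyDHT()
    register(dht, "sip:alice@manet.example", "10.0.0.7:5060")
    print(locate(dht, "sip:alice@manet.example"))   # -> 10.0.0.7:5060
```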
22

DistroFS: En lösning för distribuerad lagring av filer / DistroFS: A Solution For Distributed File Storage

Hansen, Peter, Norell, Olov January 2007 (has links)
Currently existing distributed hash table (DHT) implementations allow only small values to be stored, such as OpenDHT's storage size limit of 1 kByte per value. Is it possible to store files larger than 1 kByte using the DHT technique? Is there a way to protect the stored data without losing too much performance? Our solution was to develop client and server software that uses the DHT technique to split files and distribute their parts across a cluster of servers. To see whether the software worked as intended, we created a test based on our opening questions. The test showed that it is indeed possible to store files larger than 1 kByte securely using the DHT technique without losing any significant performance.
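To make the file-splitting idea concrete, here is a minimal sketch of chunking a file into pieces small enough for a 1 kByte-per-value store, keying each chunk by its hash, and keeping a manifest of chunk keys for reassembly. It is a generic illustration of the technique under stated assumptions, not the DistroFS implementation.

```python
import hashlib

CHUNK_SIZE = 1024  # mirrors the 1 kByte value limit mentioned above

def split_and_store(data: bytes, dht: dict) -> list[str]:
    """Split data into <=1 kByte chunks, store each under its SHA-1 digest,
    and return the ordered list of chunk keys (the file's manifest)."""
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha1(chunk).hexdigest()
        dht[key] = chunk               # real system: PUT to the responsible server
        manifest.append(key)
    return manifest

def fetch(manifest: list[str], dht: dict) -> bytes:
    """Reassemble the file by fetching every chunk named in the manifest."""
    return b"".join(dht[key] for key in manifest)

if __name__ == "__main__":
    dht = {}
    blob = b"x" * 5000                 # a file larger than the 1 kByte limit
    keys = split_and_store(blob, dht)
    assert fetch(keys, dht) == blob and len(keys) == 5
```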
23

Chord - A Distributed Hash Table

Liao, Yimei 24 July 2006 (has links)
An introduction to the Chord algorithm.
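For readers unfamiliar with Chord, the sketch below shows its core idea: node and key identifiers share one circular identifier space, each key is owned by its successor node, and the full protocol reaches that node in O(log n) hops via finger tables. The linear scan of a sorted ring here is a deliberate simplification for clarity, not the thesis's material.

```python
import hashlib
from bisect import bisect_left

M = 2 ** 160  # SHA-1 identifier space used by Chord

def chord_id(name: str) -> int:
    """Map a node address or resource name onto the identifier circle."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % M

def successor(node_ids: list[int], key: int) -> int:
    """Return the first node clockwise from `key` on the ring (wrapping at M).
    Chord proper locates this node in O(log n) hops using finger tables;
    here we simply scan a sorted copy of the ring."""
    ring = sorted(node_ids)
    i = bisect_left(ring, key)
    return ring[i % len(ring)]

if __name__ == "__main__":
    nodes = [chord_id(f"node-{n}") for n in range(8)]
    key = chord_id("some-resource")
    print(hex(successor(nodes, key)))   # the node responsible for the key
```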
24

Chord - A Distributed Hash Table

Liao, Yimei 21 August 2007 (has links)
Source has been converted into PDF format. An introduction to the Chord algorithm.
25

Persistence and Node Failure Recovery in Strongly Consistent Key-Value Datastore

Ehsan ul Haque, Muhammad January 2012 (has links)
Consistency preservation of replicated data is a critical aspect for distributed databases which are strongly consistent. Further, in the fail-recovery model each process also needs to deal with the management of stable storage and amnesia [1]. CATS is a key/value data store which combines Distributed Hash Table (DHT)-like scalability and self-organization with atomic consistency of the replicated items. However, being an in-memory data store with consistency and partition tolerance (CP), it suffers from permanent unavailability in the event of a majority failure. The goals of this thesis were twofold: (i) to implement disk-persistent storage in CATS, which would allow the records and the state of the nodes to be persisted on disk, and (ii) to design a node failure-recovery algorithm for CATS which enables the system to run under the assumption of a fail-recovery model without violating consistency. For disk-persistent storage, two existing key/value databases, LevelDB [2] and BerkeleyDB [3], are used. LevelDB is an implementation of log-structured merge trees [4], whereas BerkeleyDB is an implementation of log-structured B+ trees [5]. Both have been used as the underlying local storage for nodes, and the throughput and latency of the system with each is discussed. A technique to improve performance by allowing concurrent operations on the nodes is also discussed. The node failure-recovery algorithm is designed to allow nodes to crash and then recover without violating consistency, and to reinstate availability once the majority of nodes recover. The recovery algorithm is based on persisting the state variables of the Paxos [6] acceptor and proposer and on consistent group memberships. For fault tolerance and recovery, processes also need to copy records from the replication group. This becomes problematic when the number of records and the amount of data is huge. For this problem, a technique for transferring key/value records in bulk is also described, and its effect on the latency and throughput of the system is discussed.
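The recovery approach described above hinges on writing the Paxos acceptor state (promised ballot, accepted ballot, accepted value) to stable storage before replying, so a crashed node can rejoin without contradicting earlier promises. The sketch below illustrates that write-before-reply discipline against a generic key/value store; the names, layout, and message shapes are assumptions for illustration, not CATS's actual schema.

```python
import json

class DurableAcceptor:
    """Paxos acceptor whose state survives crashes by being written to a
    local key/value store (LevelDB/BerkeleyDB in the thesis) before replying."""

    def __init__(self, store, acceptor_id: str):
        self.store = store                      # anything dict-like with get/__setitem__
        self.key = f"paxos/acceptor/{acceptor_id}"
        raw = store.get(self.key)
        self.state = json.loads(raw) if raw else {
            "promised": -1, "accepted_ballot": -1, "accepted_value": None}

    def _persist(self) -> None:
        # Durability point: state must reach stable storage before any reply is sent.
        self.store[self.key] = json.dumps(self.state)

    def on_prepare(self, ballot: int):
        if ballot > self.state["promised"]:
            self.state["promised"] = ballot
            self._persist()
            return ("promise", self.state["accepted_ballot"],
                    self.state["accepted_value"])
        return ("nack", self.state["promised"], None)

    def on_accept(self, ballot: int, value):
        if ballot >= self.state["promised"]:
            self.state.update(promised=ballot,
                              accepted_ballot=ballot, accepted_value=value)
            self._persist()
            return ("accepted", ballot)
        return ("nack", self.state["promised"])

if __name__ == "__main__":
    disk = {}                                   # stand-in for the local database
    acc = DurableAcceptor(disk, "node-1")
    print(acc.on_prepare(5))                    # ('promise', -1, None)
    print(acc.on_accept(5, "value-A"))          # ('accepted', 5)
    recovered = DurableAcceptor(disk, "node-1") # after a crash, state is restored
    print(recovered.on_prepare(3))              # ('nack', 5, None)
```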
26

A Machine-Checked Proof of Correctness of Pastry / Une preuve certifiée par la machine de la correction du protocole Pastry

Azmy, Noran 24 November 2016 (has links)
A distributed hash table (DHT) is a peer-to-peer network that offers the function of a classic hash table, but where different key-value pairs are stored at different nodes on the network. Like a classic hash table, the main function provided by a DHT is key lookup, which retrieves the value stored at a given key. Examples of DHT protocols include Chord, Pastry, Kademlia and Tapestry. Such DHT protocols promise certain correctness and performance guarantees, but attempts at formal verification typically uncover border cases that violate those guarantees. In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and developed a model called LuPastry, for which he provided a partial proof of correct delivery of lookup messages, mechanized in the TLA+ Proof System, under the assumption that no nodes leave the network. In analyzing Lu's proof, I discovered that it contained unproven assumptions, and found counterexamples to several of these assumptions. The contribution of this thesis is threefold. First, I present LuPastry+, a revised TLA+ specification of LuPastry. Aside from needed bug fixes, LuPastry+ introduces new operators and definitions that make the specification more modular, isolate the complexity of reasoning to circumscribed parts of the proof, and significantly improve proof automation. Second, I present a complete TLA+ proof of correct delivery for LuPastry+. Third, I prove that the final step of the node join process of LuPastry/LuPastry+ is not necessary to achieve consistency. In particular, I develop a new specification with a simpler node join process, which I call Simplified LuPastry+, and prove correct delivery of lookup messages for this new specification. The proof of correctness for Simplified LuPastry+ is written by reusing the proof for LuPastry+, which represents a success story in proof reuse, especially for proofs of this size. Each of the two proofs amounts to over 32,000 proof steps; to my knowledge, they are currently the largest proofs written in the TLA+ language and, together with Lu's proof, the only examples of applying full theorem proving to the verification of DHT protocols.
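The delivery property being verified concerns prefix-based routing: each Pastry node forwards a lookup toward a known node whose identifier shares a longer prefix with the key, falling back to the numerically closest candidate, until the responsible node is reached. The sketch below is only a rough Python illustration of that next-hop rule; it abstracts away the routing table, leaf set, and join process that the mechanized proofs actually reason about.

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Length of the common hex-digit prefix of two identifiers."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(self_id: str, known: list[str], key: str) -> str:
    """Pastry-style forwarding rule (simplified): prefer a known node whose ID
    shares a longer prefix with the key than ours does; otherwise fall back
    to the candidate numerically closest to the key."""
    own = shared_prefix_len(self_id, key)
    better = [n for n in known if shared_prefix_len(n, key) > own]
    if better:
        return max(better, key=lambda n: shared_prefix_len(n, key))
    candidates = known + [self_id]
    return min(candidates, key=lambda n: abs(int(n, 16) - int(key, 16)))

if __name__ == "__main__":
    print(next_hop("a100", ["a1f0", "b000", "a1b9"], "a1b7"))  # -> "a1b9"
```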
27

The design and implementation of a robust, cost-conscious peer-to-peer lookup service

Harvesf, Cyrus Mehrabaun 17 November 2008 (has links)
Peer-to-peer (p2p) technology provides an excellent platform for the delivery of rich content and media that scales with the rapid growth of the Internet. This work presents a lookup service design and implementation that provides provable fault tolerance and operates in a cost-conscious manner over the Internet.

Using a distributed hash table (DHT) as a foundation, we propose a replica placement that improves object availability and reachability to implement a robust lookup service. We present a framework that describes tree-based routing DHTs and formally prove several properties for DHTs of this type. Specifically, we prove that our replica placement, which we call MaxDisjoint, creates a provable number of disjoint routes from any source node to a replica set. We evaluate this technique through simulation and demonstrate that it creates disjoint routes more effectively than existing replica placements. Furthermore, we show that disjoint routes have a marked impact on routing robustness, which we measure as the probability of lookup success.

To mitigate the costs incurred by multi-hop DHT routing, we develop an organization-based id assignment scheme that bounds the transit costs of prefix-matching routes. To further reduce costs, we use MaxDisjoint placement to create multiple routes of varying costs. This technique helps reduce cost in two ways: (1) replication may create local copies of an object that can be accessed at zero transit cost and (2) MaxDisjoint replication creates multiple, bounded cost, disjoint routes of which the minimal cost route can be used to resolve the lookup. We model the trade-off between the storage cost and routing cost benefit of replication to find the optimal degree to which an object should be replicated. We evaluate our approach using a lookup service implementation and show that it dramatically reduces cost over existing DHT implementations. Furthermore, we show that our technique can be used to manage objects of varying popularity in a manner that is more cost effective than caching.

By improving its robustness and cost effectiveness, we aim to increase the pervasiveness of p2p in practice and unlock the potential of this powerful technology.
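As a rough illustration of how replica placement can manufacture disjoint prefix routes (this is the general intuition, not the thesis's exact MaxDisjoint algorithm), one common trick in tree-based routing DHTs is to derive replica identifiers that differ from the key in the most significant digit, so that lookups toward each replica leave the source along different branches of the routing tree.

```python
def replica_ids(key: str, degree: int, base: int = 16) -> list[str]:
    """Derive `degree` replica identifiers for `key` by rewriting its most
    significant digit. In a prefix-routing DHT, lookups toward identifiers
    that differ in the first digit diverge at the first hop, which is the
    intuition behind disjoint-route replica placement."""
    first = int(key[0], base)
    ids = []
    for offset in range(degree):
        digit = (first + offset) % base
        ids.append(format(digit, "x") + key[1:])
    return ids

if __name__ == "__main__":
    print(replica_ids("3fa9", degree=4))  # ['3fa9', '4fa9', '5fa9', '6fa9']
```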
28

[en] LUAPS - LUA PUBLISH-SUBSCRIBE / [pt] LUAPS - LUA PUBLISH-SUBSCRIBE

MARIO MENDES DE O ZIMMERMANN 24 July 2006 (has links)
Publish-subscribe systems are defined by their basic communication model. However, most existing publish-subscribe systems incorporate other mechanisms in their implementation. This work seeks a better understanding of publish-subscribe systems by defining an architecture in which different layers group related decisions and constructions. Based on this architecture, we describe a system developed in Lua that uses a distributed hash table as its base. The system differs in its architecture from monolithic publish-subscribe systems and focuses on generality, flexibility and extensibility.
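A common way to layer publish-subscribe on top of a distributed hash table, and a plausible reading of the layered design summarized above, is rendezvous routing: the topic name is hashed to a key, subscribers register at the node responsible for that key, and publishers deliver events there for fan-out. The sketch below is a generic illustration of that pattern (written in Python rather than Lua) and is not LuaPS's actual code.

```python
import hashlib
from collections import defaultdict

class RendezvousPubSub:
    """Topic-based pub/sub over a DHT-like store: subscriptions and events
    meet at the node responsible for hash(topic)."""

    def __init__(self):
        # Stand-in for the DHT: topic key -> list of subscriber callbacks.
        self.subscriptions = defaultdict(list)

    @staticmethod
    def topic_key(topic: str) -> str:
        return hashlib.sha1(topic.encode()).hexdigest()

    def subscribe(self, topic: str, callback) -> None:
        # Real system: store the subscription at the node owning topic_key(topic).
        self.subscriptions[self.topic_key(topic)].append(callback)

    def publish(self, topic: str, event) -> None:
        # Real system: route the event to the rendezvous node, which fans it out.
        for callback in self.subscriptions[self.topic_key(topic)]:
            callback(event)

if __name__ == "__main__":
    bus = RendezvousPubSub()
    bus.subscribe("sensors/temperature", lambda e: print("got", e))
    bus.publish("sensors/temperature", {"celsius": 21.5})
```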
29

Structured peer-to-peer overlays for NATed churn intensive networks

Chowdhury, Farida January 2015 (has links)
The widespread coverage and ubiquitous presence of mobile networks has propelled the usage and adoption of mobile phones to an unprecedented level around the globe. The computing capabilities of these mobile phones have improved considerably, supporting a vast range of third-party applications. Simultaneously, Peer-to-Peer (P2P) overlay networks have experienced tremendous growth in terms of usage as well as popularity in recent years, particularly in fixed wired networks. In particular, Distributed Hash Table (DHT) based structured P2P overlay networks offer major advantages to users of mobile devices and networks, such as a scalable, fault-tolerant and self-managing infrastructure that does not exhibit single points of failure. Integrating P2P overlays on the mobile network seems a logical progression considering the popularity of both technologies. However, it imposes several challenges that need to be handled, such as the limited hardware capabilities of mobile phones and churn-intensive mobile networks (i.e. networks with frequent joining and leaving of nodes) offering limited yet expensive bandwidth. This thesis investigates the feasibility of extending P2P to mobile networks so that users can take advantage of both technologies: P2P and mobile networks. This thesis utilises OverSim, a P2P simulator, to experiment with the performance of various P2P overlays, considering high churn and bandwidth consumption, which are the two most crucial constraints of mobile networks. The experiment results show that Kademlia and EpiChord are the two most appropriate P2P overlays for implementation in mobile networks. Furthermore, Network Address Translation (NAT) is a major barrier to the adoption of P2P overlays in mobile networks. Integrating NAT traversal approaches with P2P overlays is a crucial step for P2P overlays to operate successfully on mobile networks. This thesis presents a general approach of NAT traversal for ring-based overlays without the use of a single dedicated server, which is then implemented in OverSim. Several experiments have been performed under NATs to determine the suitability of the chosen P2P overlays in NATed environments. The results show that the performance of these overlays is comparable in terms of successful lookups in both NATed and non-NATed environments, with Kademlia and EpiChord exhibiting the best performance. The presence of NATs and also the level of churn in a network influence the routing techniques used in P2P overlays. Recursive routing is more resilient to the IP connectivity restrictions posed by NATs but not very robust in high-churn environments, whereas iterative routing is more suitable for high-churn networks but difficult to use in NATed environments. Kademlia supports both of these routing schemes whereas EpiChord supports only iterative routing. This undermines the usefulness of EpiChord in NATed environments. In order to harness the advantages of both routing schemes, this thesis presents an adaptive routing scheme, called Churn Aware Routing Protocol (ChARP), combining recursive and iterative lookups, where nodes can switch between recursive and iterative routing depending on their lifetimes. The proposed approach has been implemented in OverSim and several experiments have been carried out. The experiment results indicate an improved performance, which in turn validates the applicability and suitability of ChARP in NATed environments.
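The idea behind ChARP, as described above, is that a node's accumulated lifetime predicts its stability: long-lived nodes can afford recursive forwarding, while freshly joined, churn-prone nodes fall back to iterative lookups. A minimal sketch of such a lifetime-based switch is shown below; the threshold and interface are assumptions for illustration, not the protocol's specification.

```python
import time

class AdaptiveRouter:
    """Switch between recursive and iterative lookups based on node lifetime,
    in the spirit of a churn-aware routing policy."""

    def __init__(self, lifetime_threshold_s: float = 600.0):
        self.joined_at = time.monotonic()
        self.lifetime_threshold_s = lifetime_threshold_s

    def lifetime(self) -> float:
        return time.monotonic() - self.joined_at

    def routing_mode(self) -> str:
        # Young nodes are statistically more likely to leave soon, so prefer
        # iterative lookups (the querier keeps control of each hop); stable,
        # long-lived nodes may forward recursively.
        if self.lifetime() >= self.lifetime_threshold_s:
            return "recursive"
        return "iterative"

if __name__ == "__main__":
    router = AdaptiveRouter(lifetime_threshold_s=600.0)
    print(router.routing_mode())   # "iterative" right after joining
```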
30

FreeCore : un système d'indexation de résumés de document sur une Table de Hachage Distribuée (DHT) / FreeCore: an indexing system for document summaries on a Distributed Hash Table (DHT)

Ngom, Bassirou 13 July 2018 (has links)
This thesis examines the problem of indexing and searching in Distributed Hash Tables (DHTs). It proposes a distributed system for storing document summaries based on their content. Concretely, the thesis uses Bloom filters (BFs) to represent document summaries and proposes an efficient method for inserting and retrieving documents represented by BFs in an index distributed over a DHT. Content-based storage has a dual advantage: it groups similar documents together so they can be found more quickly, and at the same time it allows documents to be retrieved through keyword searches expressed as Bloom filters. However, processing a keyword query represented by a Bloom filter is a complex operation and requires a mechanism to locate the Bloom filters that represent the documents stored in the DHT. The thesis therefore proposes, in a second step, two Bloom filter indexes distributed over DHTs. The first proposed index system combines the principles of content-based indexing and inverted lists and addresses the large amount of data stored by content-based indexes: by using long Bloom filters, this solution makes it possible to store documents on a larger number of servers and to index them using less space. Next, the thesis proposes a second index system that efficiently supports superset query processing (keyword queries) using a prefix tree. This solution exploits the distribution of the data and proposes a configurable distribution function that allows documents to be indexed with a balanced binary tree, so that they are spread efficiently over the indexing servers. In addition, as a third solution, the thesis proposes an efficient method for locating documents containing a given set of keywords. Compared to solutions of the same category, this last solution performs superset searches at a lower cost and constitutes a solid foundation for superset searching on index systems built on top of DHTs. Finally, the thesis presents a prototype of a peer-to-peer system for content indexing and keyword search. This prototype, ready to be deployed in a real environment, was evaluated in the PeerSim simulation environment, which allowed the theoretical performance of the algorithms developed throughout the thesis to be measured.
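To ground the Bloom-filter representation used above, the sketch below builds a small Bloom filter over a document's keywords and applies the bitwise containment test that underlies superset-query matching: every bit set by the query filter must also be set in the document filter. The filter length, number of hash functions, and hashing scheme are illustrative assumptions, not FreeCore's parameters.

```python
import hashlib

M_BITS = 256   # filter length in bits
K_HASH = 3     # hash functions per keyword

def _positions(word: str):
    """Derive K_HASH bit positions for a keyword from salted SHA-1 digests."""
    for i in range(K_HASH):
        digest = hashlib.sha1(f"{i}:{word}".encode()).digest()
        yield int.from_bytes(digest, "big") % M_BITS

def bloom(keywords) -> int:
    """Encode a set of keywords as an integer bitmask (the document summary)."""
    bits = 0
    for word in keywords:
        for pos in _positions(word):
            bits |= 1 << pos
    return bits

def may_contain(doc_filter: int, query_filter: int) -> bool:
    """Superset test: the document may match if it covers all query bits
    (false positives are possible, false negatives are not)."""
    return doc_filter & query_filter == query_filter

if __name__ == "__main__":
    doc = bloom(["distributed", "hash", "table", "indexing"])
    print(may_contain(doc, bloom(["hash", "indexing"])))    # True
    print(may_contain(doc, bloom(["unrelated", "terms"])))  # very likely False
```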
