1 |
A Study of Traffic Locality and Reliability in Peer-to-Peer Video Streaming Applications. Zhang, Xiangyang, 27 April 2012.
The past decade has witnessed tremendous growth of peer-to-peer (P2P) video
streaming applications on the Internet. For these applications, playback
smoothness and timeliness are the two most important aspects of users' viewing
experiences, whereas the amount of traffic is Internet service providers' main
concern. Based on playback delay, video streaming can be classified into
on-demand streaming, live streaming, and interactive streaming. P2P live
streaming applications typically serve an arbitrary number of users with tens
of seconds of playback delay and a high packet delivery rate, but their heavy
traffic incurs substantial financial cost and threatens the quality of other
services. Interactive streaming applications usually have a small group size,
several hundred milliseconds of playback delay, and a reasonable traffic
volume, but cannot achieve a high packet delivery rate. The goal of this thesis
is to study traffic locality and reliable delivery of packets in large-scale
live streaming and small-scale interactive streaming applications, while keeping
the playback delay well below the targeted applications' limits.
For P2P live streaming applications, we first identify "typical" schemes from
existing P2P live streaming schemes, investigate packet propagation behavior and
the impact of neighboring strategies on system performance, and then propose
innovative schemes that take both users' viewing experience and traffic locality
into consideration. We show that network-driven tree-based schemes that use
the swarming technique as a retransmission-based error-correction mechanism
are superior to data-driven swarm-based or tree-based schemes, and that a
properly designed tree-based scheme can localize traffic while maintaining a
high packet delivery rate.
For interactive streaming applications, we analyze the efficacy of systematic
forward error-correction (FEC) codes against the bursty errors of Internet links
when using peers to provide multiple one-hop paths between two communication
parties. We find that although using peers for path diversity often results in
a lower post-FEC packet loss ratio, some conditions do apply. The interplay of
a number of factors, such as the Internet links' error ratio and burst length
and the coding parameters, determines the performance of FEC. We provide
guidelines and computation methods to determine whether the use of peers for
path diversity can be justified. / Thesis (Ph.D., Computing) -- Queen's University, 2012-04-26.
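As a hedged illustration of the kind of computation such guidelines involve (a sketch, not the thesis's actual method), the Python below estimates the post-FEC packet loss ratio of a systematic (n, k) erasure code under an independent-loss assumption; modelling the burst lengths analyzed in the thesis would require a Gilbert-Elliott style channel instead, and all parameter values here are hypothetical.

    from math import comb

    def post_fec_loss(n: int, k: int, p: float) -> float:
        """Residual loss ratio of a systematic (n, k) erasure code when
        each packet is lost independently with probability p. A source
        packet stays lost only if it is erased and the block has more
        than n - k erasures, so decoding fails."""
        residual = 0.0
        for lost in range(n - k + 1, n + 1):      # undecodable blocks
            p_block = comb(n, lost) * p**lost * (1 - p)**(n - lost)
            residual += p_block * lost / n        # expected lost fraction
        return residual

    # Hypothetical comparison: one direct Internet path versus peer-relay
    # paths whose individual links are slightly lossier.
    print(post_fec_loss(10, 8, 0.05))   # direct path
    print(post_fec_loss(10, 8, 0.07))   # one-hop peer paths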
|
2 |
Information Diffusion in Complex Networks: Measurement-Based Analysis Applied to Modelling. Faria Bernardes, Daniel, 21 March 2014.
Understanding information diffusion on complex networks is a key issue from both a theoretical and an applied perspective. Epidemiology-inspired SIR models have been proposed to model information diffusion, and recent papers have analyzed this question from a data-driven perspective. We complement these findings by investigating whether epidemic models, calibrated with a systematic procedure, are capable of reproducing key properties of spreading cascades. We first identify a large-scale, rich dataset from which we can reconstruct both the diffusion trail and the underlying network. Second, we examine the simple SIR model as a baseline and conclude that it is unable to generate structurally realistic spreading cascades; we found the same result for model extensions that take into account heterogeneities observed in the data. In contrast, models that take into account the time patterns available in the data generate qualitatively more similar cascades. Although one key property was not reproduced by any model, this result highlights the importance of taking time patterns into account. We also analyzed the impact of the underlying network structure on the models examined. In our data the observed cascades were constrained in time, so we could not rely on theoretical results relating the asymptotic behavior of the epidemic to topological features of the network. Through simulations we assessed the impact of common topological properties on time-bounded epidemics and identified that, in our context, the distribution of the neighbors of seed nodes had the greatest impact among the investigated properties. We conclude by discussing the perspectives opened by this work.
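As a minimal sketch of the baseline model discussed here (the simple SIR dynamics only, not the authors' calibration procedure or dataset), the following simulates a discrete-time SIR cascade on a toy contact network; the graph, seed, and probabilities are illustrative.

    import random

    def sir_cascade(neighbors, seed, p_infect, t_max):
        """Discrete-time SIR spread on a graph given as {node: [nbrs]}.
        Each infected node gets one chance to infect each susceptible
        neighbor with probability p_infect, then recovers for good.
        Returns the set of nodes ever infected (the cascade)."""
        infected, recovered = {seed}, set()
        for _ in range(t_max):
            fresh = set()
            for u in infected:
                for v in neighbors.get(u, []):
                    if v not in infected and v not in recovered \
                            and random.random() < p_infect:
                        fresh.add(v)
            recovered |= infected
            infected = fresh
            if not infected:
                break
        return recovered | infected

    # Toy contact graph; a real study would use the reconstructed network.
    g = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2, 5], 5: [4]}
    print(len(sir_cascade(g, seed=1, p_infect=0.3, t_max=10)))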
|
3 |
Návrh využití technologie Blockchain ve firemním prostředí / Implementation of Blockchain technology. Dzurdzíková, Kristína, January 2020.
This diploma thesis deals with the design of a way to use blockchain technology in a corporate environment. The main goal of the work is to create a proposal for a business process and to implement it on a specific blockchain platform. The analysis of the current state describes the process as it runs today and the company's requirements for the functionality of the new technology. In the design part of the work, I compare specific blockchain platforms and, based on this comparison, choose the most suitable solution for implementing my proposal. This part also includes the design of a methodology for verifying whether a process is suitable for implementation on a blockchain, describes how to proceed when choosing a suitable solution, and highlights its key factors.
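As a purely hypothetical sketch of what such a suitability methodology might encode (the thesis's actual criteria are not reproduced here), the following captures a decision rule common in blockchain decision trees: a shared ledger tends to pay off only when several mutually distrusting parties write shared state and no trusted intermediary is acceptable. All field names are invented for illustration.

    def blockchain_suitable(process: dict) -> bool:
        """Toy decision rule: recommend a distributed ledger only when
        multiple distrusting writers share state without an acceptable
        trusted intermediary. Field names are hypothetical."""
        return (process["writers"] > 1
                and process["shared_state"]
                and not process["parties_trust_each_other"]
                and not process["trusted_intermediary_acceptable"])

    print(blockchain_suitable({
        "writers": 3,
        "shared_state": True,
        "parties_trust_each_other": False,
        "trusted_intermediary_acceptable": False,
    }))  # -> True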
|
4 |
Peer to peer systém pro vzdálené ovládání počítače / Peer-to-Peer System for Remote Computer Control. Lejtnar, Michal, January 2017.
This thesis deals with the creation of a decentralized peer-to-peer system designed for the remote control of computers. The P2P network and the individual nodes in it are inspired by the hybrid peer-to-peer architecture used by the Skype application. For remote control, the application uses the terminal services available in the Windows operating system, namely MS Remote Desktop and MS Remote Assistance. The entire application is written in the C# programming language.
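As a hedged sketch of the hybrid architecture described (supernodes keep a directory while ordinary peers then talk directly), the following shows registration and lookup in illustrative Python; the thesis's implementation is in C#, and the identifiers and addresses below are invented.

    class Supernode:
        """Directory role in a hybrid P2P overlay: ordinary peers
        register their current address here, then contact each other
        directly for the actual remote-control session."""
        def __init__(self):
            self.registry = {}              # peer id -> (host, port)

        def register(self, peer_id, addr):
            self.registry[peer_id] = addr

        def lookup(self, peer_id):
            return self.registry.get(peer_id)

    hub = Supernode()
    hub.register("office-pc", ("203.0.113.7", 4000))
    # The controlling peer resolves the target and then opens a direct
    # session (MS Remote Desktop in the thesis's actual application).
    print(hub.lookup("office-pc"))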
|
5 |
Information Diffusion in Complex Networks: Measurement-Based Analysis Applied to Modelling / Phénomènes de diffusion sur les grands réseaux : mesure et analyse pour la modélisation. Faria Bernardes, Daniel, 21 March 2014.
In this thesis we studied information diffusion in large real-world graphs, focusing on the structural patterns of the propagation. Empirically, capturing the structure of diffusion cascades in terms of simple measures proved difficult. Theoretically, the classical approach is to study stochastic contagion models, but the formal analysis of such models remains limited, because real-world graphs generally have a complex topology and the diffusion process takes place within a bounded time window. A better understanding of the empirical data, of the theoretical models, and of the link between the two is therefore crucial for characterizing diffusion in large real-world graphs. After a survey of real-world graphs and of diffusion in this context in the first chapter, we describe our dataset and discuss its relevance in Chapter 2. In Chapter 3 we assess the adequacy of the simple SIR model and of two extensions that take heterogeneities of our dataset into account. In Chapter 4 we explore taking time into account, both in the evolution of the underlying network and in the diffusion model. In Chapter 5 we assess the impact of the structure of the underlying graph on the structure of the diffusion cascades generated by the models studied in the previous chapters. We close the thesis with a review of the results and of the perspectives opened by this work.
|
6 |
A Machine-Checked Proof of Correctness of Pastry / Une preuve certifiée par la machine de la correction du protocole Pastry. Azmy, Noran, 24 November 2016.
Peer-to-peer (P2P) networks are an increasingly popular model for programming Internet applications because they favor decentralization, scalability, fault tolerance, and self-organization. Unlike the traditional client-server model, a P2P network is a decentralized distributed system in which all nodes interact directly with one another and act as both providers and consumers of services and resources. A distributed hash table (DHT) is built on top of a P2P network and offers the same services as a classic hash table, except that the different key-value pairs are stored at different nodes of the network. The main function of a DHT is key lookup, which retrieves the value associated with a given key. DHT protocols include Chord, Pastry, Kademlia, and Tapestry. These protocols promise certain correctness and performance guarantees, but attempts to verify them formally invariably run into border cases in which some of those guarantees are violated. In his PhD thesis, Tianxiang Lu reported correctness problems in published versions of Pastry and designed a model, called LuPastry, for which he provided a partial proof, mechanized in the TLA+ Proof System, that lookup messages are delivered to the correct node in the absence of node departures. In analyzing Lu's proof, I discovered that it contained many unproven assumptions, and I found counterexamples to several of them. This thesis makes three contributions. First, I present LuPastry+, a revised TLA+ specification of LuPastry. Beyond the necessary bug fixes, LuPastry+ introduces new operators and definitions that make the specification more modular, isolate reasoning complexity to circumscribed parts of the proof, and significantly improve proof automation. Second, I present a complete TLA+ proof of correct delivery for LuPastry+. Third, I prove that the final step of the node join process of LuPastry (and LuPastry+) is not necessary to guarantee consistency. Concretely, I exhibit a new specification with a simplified node join process, which I call Simplified LuPastry+, and prove that it guarantees correct delivery of lookup messages. The proof of correctness for Simplified LuPastry+ is obtained by reusing the proof for LuPastry+, which represents a success story in proof reuse, especially at this scale: each of the two proofs amounts to over 32,000 proof steps. To my knowledge, these are currently the largest proofs written in the TLA+ language and, together with Lu's proof, the only examples of applying full theorem proving to the verification of DHT protocols.
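As a hedged illustration of the routing behavior these proofs reason about (a toy sketch of Pastry-style prefix routing, not the verified TLA+ specification), the following picks the next hop as the known node whose identifier shares the longest prefix with the key; the hex identifiers and the flat routing table are simplifications.

    def shared_prefix_len(a: str, b: str) -> int:
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def next_hop(self_id: str, key: str, known: list) -> str:
        """One Pastry-style routing step over hex node identifiers:
        choose, among ourselves and all known nodes, the one sharing the
        longest id prefix with the key, breaking ties by numeric
        closeness. A lookup is delivered once no node improves on the
        current holder."""
        candidates = known + [self_id]
        return max(candidates,
                   key=lambda n: (shared_prefix_len(n, key),
                                  -abs(int(n, 16) - int(key, 16))))

    routing_table = ["a13f", "b2c0", "b2f7", "f001"]  # hypothetical
    print(next_hop("b200", "b2f3", routing_table))    # -> b2f7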
|
7 |
On the Design of Socially-Aware Distributed Systems. Kourtellis, Nicolas, 01 January 2012.
Social media services and applications enable billions of users to share an unprecedented amount of social information, which is further augmented by location and collocation information from mobile phones, and can be aggregated to provide an accurate digital representation of the social world. This dissertation argues that extracted social knowledge from this wealth of information can be embedded in the design of novel distributed, socially-aware applications and services, consequently improving system response time, availability and resilience to attacks, and reducing system overhead. To support this thesis, two research avenues are explored.
First, this dissertation presents Prometheus, a socially-aware peer-to-peer service that collects social information from multiple sources, maintains it in a decentralized fashion on user-contributed nodes, and exposes it to applications through an interface that implements non-trivial social inferences. The system's socially-aware design leads to multiple system improvements: 1) it increases service availability by allowing users to manage their social information via socially-trusted peers, 2) it improves social inference performance and reduces message overhead by exploiting naturally-formed social groups, and 3) it reduces the opportunity of attackers to influence application requests. These performance improvements are assessed via simulations and a prototype deployment on a local cluster and on a worldwide testbed (PlanetLab) under emulated application workloads.
Second, this dissertation defines the projection graph, the result of decentralizing a social graph onto a peer-to-peer system such as Prometheus, and studies the system's network properties and how they can be used to design more efficient socially-aware distributed applications and services. In particular: 1) it analytically formulates the relation between centrality metrics such as degree centrality, node betweenness centrality, and edge betweenness centrality in the social graph and in the emerging projection graph, 2) it experimentally demonstrates on real networks that for small groups of users mapped on peers, there is high association of social and projection graph properties, 3) it shows how these properties of the (dynamic) projection graph can be accurately inferred from the properties of the (slower changing) social graph, and 4) it demonstrates with two search application scenarios the usability of the projection graph in designing social search applications and unstructured P2P overlays.
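As a hedged sketch of the projection-graph construction (a toy example, not the dissertation's analytical formulation), the following maps social-graph nodes onto peers, induces the peer-level projection graph, and compares betweenness centrality in both; the graph and the user-to-peer mapping are invented.

    import networkx as nx

    # Toy social graph and a hypothetical mapping of users onto peers.
    social = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5), (2, 5), (5, 6)])
    peer_of = {1: "A", 2: "A", 3: "B", 4: "B", 5: "C", 6: "C"}

    # Projection graph: two peers are adjacent iff they host socially
    # connected users (the social graph decentralized onto peers).
    projection = nx.Graph()
    projection.add_nodes_from(set(peer_of.values()))
    for u, v in social.edges():
        if peer_of[u] != peer_of[v]:
            projection.add_edge(peer_of[u], peer_of[v])

    # Compare centralities in the social graph and its projection.
    print(nx.betweenness_centrality(social))
    print(nx.betweenness_centrality(projection))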
These research results lead to the formulation of lessons applicable to the design of socially-aware applications and distributed systems for improved application performance such as social search, data dissemination, data placement and caching, as well as for reduced system communication overhead and increased system resilience to attacks.
|
8 |
Detection of malicious user communities in data networks. Moghaddam, Amir, 04 April 2011.
Malicious users in data networks may form social interactions to create communities in abnormal fashions that deviate from the communication standards of a network. As a community, these users may perform many illegal tasks such as spamming, denial-of-service attacks, spreading confidential information, or sharing illegal content. They may use different methods to evade existing security systems, such as session splicing, polymorphic shellcode, changing port numbers, and basic string manipulation. One way to masquerade the traffic is to change the data-rate patterns or to use very low (trickle) data rates for communication, the latter being the focus of this research. Network administrators consider these communities of users a serious threat.
In this research, we propose a framework that not only detects abnormal data-rate patterns in a stream of traffic by using a type of neural network, the Self-Organizing Map (SOM), but also detects and reveals the community structure of these users for further decisions. Through a set of comprehensive simulations, we show that the suggested framework is able to detect these malicious user communities with low false negative and false positive rates.
We further discuss ways of improving the performance of the neural network by studying the size of the SOM.
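As a hedged sketch of the detection idea (a from-scratch miniature SOM, not the thesis's framework), the following trains a small self-organizing map on vectors of normal traffic rates and flags windows whose quantization error exceeds anything seen in training; the data, grid size, and threshold are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_som(data, grid=4, epochs=300, lr=0.5, sigma=1.0):
        """Train a grid x grid self-organizing map on row vectors."""
        w = rng.random((grid, grid, data.shape[1]))
        coords = np.dstack(np.meshgrid(np.arange(grid), np.arange(grid),
                                       indexing="ij"))
        for t in range(epochs):
            x = data[rng.integers(len(data))]
            bmu = np.unravel_index(                 # best-matching unit
                np.argmin(((w - x) ** 2).sum(axis=2)), (grid, grid))
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            h = np.exp(-d2 / (2 * sigma ** 2)) * lr * (1 - t / epochs)
            w += h[:, :, None] * (x - w)            # pull neighborhood
        return w

    def quantization_error(w, x):
        return np.sqrt(((w - x) ** 2).sum(axis=2)).min()

    normal = rng.normal(1.0, 0.1, size=(500, 8))    # normal rate windows
    som = train_som(normal)
    threshold = max(quantization_error(som, x) for x in normal)
    trickle = np.full(8, 0.05)                      # very low data rates
    print(quantization_error(som, trickle) > threshold)  # flags anomaly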
|
9 |
Architecture événementielle pour les environnements virtuels collaboratifs sur le web : application à la manipulation et à la visualisation d'objets en 3D / Event-based architecture for web-based virtual collaborative environments: application to manipulation and visualisation of 3D objects. Desprat, Caroline, 01 December 2017.
The technological evolution of the web in recent years has fostered the arrival of collaborative virtual environments for large-scale 3D modelling. While collaboration brings geographically distant users together in a single shared space for a common goal, the hardware resources those users contribute (computation, storage, 3D rendering) along with their knowledge are still too rarely exploited, and doing so is a challenge: the system must remain simple, efficient, and transparent to its users while supporting effective collaboration both computationally and, of course, in the 3D modelling work itself. To scale well, many systems use a so-called "hybrid" network architecture that combines client-server and peer-to-peer. Optimistic replication adapts well to the properties of these distributed environments: the dynamicity and number of users, and the type (3D) and size of the data handled. This thesis presents a model for web-based collaborative 3D editing systems. Its client-side architecture, 3DEvent, moves the 3D domain logic closer to the user in the form of events. This event-oriented architecture rests on the observation that strong traceability and history over the 3D data are needed while a model is being assembled, an aspect carried intrinsically by the event-sourcing design pattern. The model is completed by a peer-to-peer middleware responsible for the synchronisation and consistency of the system. To implement it, we propose the WebRTC standard, a recent web API familiar to developers of cloud services. The model was evaluated through two user studies on its responsiveness and its acceptance by users, conducted with several groups of users performing 3D model assembly tasks.
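As a hedged sketch of the event-sourcing pattern the architecture relies on (illustrative Python, not the 3DEvent implementation), the following stores 3D edits as an append-only event log and rebuilds scene state by replay, which is what provides the traceability and history discussed above; the event fields are invented.

    class Scene:
        """State is rebuilt purely by replaying the event log
        (event sourcing): the log, not the state, is the source of
        truth, so the full editing history stays auditable."""
        def __init__(self):
            self.objects = {}               # object id -> position

        def apply(self, event):
            kind, obj = event["type"], event["id"]
            if kind in ("add", "move"):
                self.objects[obj] = event["pos"]
            elif kind == "remove":
                self.objects.pop(obj, None)

    def replay(events):
        scene = Scene()
        for e in events:
            scene.apply(e)
        return scene

    log = [
        {"type": "add",  "id": "cube1", "pos": (0, 0, 0)},
        {"type": "move", "id": "cube1", "pos": (1, 2, 0)},
    ]
    print(replay(log).objects)              # {'cube1': (1, 2, 0)}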
|
10 |
在高度分散式環境下進行Top-k相似文件檢索 / Similar Top-k documents retrieval in highly distributed environments. 王俊閎 (Wang, Chun Hung), Unknown Date.
In query processing over document databases, similar top-k document queries help users retrieve, from a huge document collection, the set of documents most relevant to a query document: the documents in the database are ranked by their similarity to the query document, and the k most similar documents are returned to the user. Centralized search engines, however, suffer from limited coverage and scalability, which makes such rank-oriented query processing costly in time and computation. In recent years, using peer-to-peer (P2P) architectures to tackle document retrieval problems has become a trend, but supporting rank-oriented similarity queries in a highly distributed environment is difficult because of the lack of global knowledge and of appropriate system coordinators.
In this work, we first pre-cluster the document database at each node, and we propose a zone-partitioning approach [1] that divides the P2P environment into several sub-zones and then builds a feature index table. At query time, the index table speeds up the selection of the top-k most similar clusters and ensures an adequate number of returned results. Our experiments compare the proposed method with a traditional centralized search engine and with the SON-based approach [1]; in a highly distributed environment, our method outperforms both on similar top-k document queries.
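As a hedged sketch of the two-stage idea described above (cluster-level pruning through a feature index, then document-level top-k ranking by similarity), the following uses cosine similarity over toy vectors; the index layout, cluster contents, and probe count are invented rather than taken from the thesis.

    import heapq, math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    # Feature index: one centroid vector summarizing each cluster.
    index = {"c1": [1.0, 0.1, 0.0],
             "c2": [0.0, 1.0, 0.2],
             "c3": [0.1, 0.0, 1.0]}
    clusters = {"c1": {"d1": [0.9, 0.2, 0.0], "d2": [1.0, 0.0, 0.1]},
                "c2": {"d3": [0.1, 0.9, 0.3]},
                "c3": {"d4": [0.0, 0.1, 0.9]}}

    def top_k(query, k, probe=2):
        # Stage 1: pick promising clusters from the index alone.
        best = heapq.nlargest(probe, index,
                              key=lambda c: cosine(index[c], query))
        # Stage 2: rank only the documents inside those clusters.
        docs = [(cosine(vec, query), doc)
                for c in best for doc, vec in clusters[c].items()]
        return heapq.nlargest(k, docs)

    print(top_k([1.0, 0.2, 0.0], k=2))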
|