1
Advanced Indexing Techniques for File Sharing in P2P Networks. Ponnavaikko, Kovendhan. 22 May 2002.
No description available.
2
On The Issues Of Supporting On-Demand Streaming Application Over Peer-to-Peer Networks. Kalapriya, K. 06 1900.
Bandwidth and resource constraints at the server side are a limitation for the deployment of streaming media applications, and often lead to resource saturation during sudden surges in requests. End System Multicast (ESM) is used to overcome this problem: resources such as storage and bandwidth available at the end systems are used to deliver streaming media. In ESM, the end systems (also known as peers) form a network commonly known as a Peer-to-Peer (P2P) network. Peers that receive the stream in turn act as routable components and forward the stream to serve other requests. These peers do not possess server-like characteristics; they differ from a server in the following ways: (a) they join and exit the system at will, and (b) unlike servers, they are not reliable sources of media. This induces instability in the network, so delivering streaming media over such an unstable peer network is a challenging task. ESM supports two kinds of media streaming: live streaming and on-demand streaming.
ESM is well studied as a means of supporting live streaming. In this thesis we explore the effectiveness of using ESM to support on-demand streaming over a P2P network. There are two major issues in supporting on-demand streaming video: (a) unlike live streaming, every request must be served from the beginning of the stream, and (b) peer characteristics (particularly the transience of peers) induce instability in the network. In our work, late-arriving peers can join an existing stream if the initial segments can be served to them; a single stream thus serves multiple requests, and throughput increases. We propose a patching mechanism in which the initial segments of the media are temporarily cached in the peers as patches. Peers contribute storage as they join, and this storage space is used to cache the initial segments. The patching mechanism is controlled by the Expanding Window Control Protocol (EWCP).
EWCP defines a "virtual window" that logically represents the aggregate cache contributed by the peers. The window expands as peers contribute more resources: the larger the window, the more clients can be served by a single stream. A GAP forms when contiguous segments of media are lost, and it limits the expansion of the virtual window. We explore the conditions that lead to the formation of GAPs, namely the transience and non-cooperation of peers. The transience of peers, coupled with the real-time nature of the application, requires fast failure-recovery algorithms and methods to overcome the loss of media segments. We propose an efficient peer-management protocol that provides constant failure-recovery time, and we explore several redundancy techniques to overcome the loss of video segments caused by peer transience.
Peer characteristics (session duration, resource contribution, etc.) have a significant impact on performance, so the design of a peer-management protocol must take them into account to be effective. In this thesis we present a detailed analysis of the relationship between peer characteristics and performance. Our results indicate that peer characteristics and the real-time nature of the application govern the performance of the system. Based on our study, we propose algorithms that consider these parameters and increase the performance of the system. Finally, we bring all the pieces of our work together into a comprehensive system architecture for streaming media over P2P networks. We have implemented a prototype Black-Board System (BBS), a distance-learning utility that reflects the main concepts of our work, and we show that algorithms that exploit peer characteristics perform well in P2P networks.
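A minimal sketch of the virtual-window idea behind the patching mechanism described above: joining peers contribute cache space for initial segments, a late arrival can be patched only while no GAP has opened, and a departing peer may create one. The class and method names are illustrative assumptions, not the thesis's actual EWCP interface.

    class VirtualWindow:
        """Aggregated cache of initial segments ("patches") across peers."""

        def __init__(self):
            self.cached_segments = set()  # segment indices held somewhere in the overlay

        def peer_joins(self, contributed_segments):
            # A new peer's contributed storage expands the window.
            self.cached_segments |= set(contributed_segments)

        def peer_leaves(self, lost_segments):
            # A departing peer may open a GAP: once contiguous segments
            # are lost, later arrivals can no longer be patched past them.
            self.cached_segments -= set(lost_segments)

        def can_serve_late_arrival(self, current_position):
            # A late peer joins the ongoing stream only if every initial
            # segment up to the live position is still cached (no GAP).
            return all(s in self.cached_segments for s in range(current_position))

    window = VirtualWindow()
    window.peer_joins(range(0, 5))     # first peer caches segments 0-4
    window.peer_joins(range(5, 9))     # second peer expands the window
    print(window.can_serve_late_arrival(current_position=9))  # True
    window.peer_leaves([5, 6])         # peer transience opens a GAP
    print(window.can_serve_late_arrival(current_position=9))  # False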
3
Reducing the cumulative file download time and variance in a P2P overlay via proximity-based peer selection. Carasquilla, Uriel J. 01 January 2013.
The time it takes to download a file in a peer-to-peer (P2P) overlay network depends on several factors: the quality of the network between peers (e.g., packet loss, latency, and link failures), distance, the peer-selection technique, and packet loss due to Internet Service Providers (ISPs) engaging in traffic shaping. Recent research shows that P2P download time is adversely impacted by the presence of distant peers, particularly when traffic crosses an ISP that may be throttling P2P traffic. It has also been observed that additional delays are introduced when distant candidate nodes for exchanging data are included during the formation of a P2P overlay. Researchers have therefore shifted their attention to the peer-selection mechanism, questioning the random technique because it ignores the location of nodes in the topology of the underlying physical network. Selecting nodes for interaction in a distributed system based on their position in the network thus continues to be an active area of research. The goal of this work was to reduce the cumulative file download time and its variance for the majority of participating peers in a P2P network by using a peer-selection mechanism that favors nearby nodes. In the proposed proximity strategy, the Internet address space is separated into IP blocks that belong to different Autonomous Systems (AS), and IP blocks are further broken up into subsets named zones. Each zone is given a landmark (a.k.a. beacon) with a known geographical location, for example a router or DNS server. As peers joined the network, they were grouped into zones based on their geographical distance to the selected beacons, and peers that ended up in the same zone were put at the top of the list of available nodes for interaction during the formation of the overlay. Experiments were conducted to compare the proposed proximity-based peer-selection strategy to the random strategy. The results indicate that the proximity technique outperforms the random approach both in a network with low packet loss and latency and in a more realistic network subject to packet loss, traffic shaping, and long distances. However, this improved performance came at the cost of additional memory (230 megabytes) and, to a lesser extent, some additional CPU cycles to run the subroutines needed to group peers into zones. The framework and algorithms developed for this work made it possible to implement a fully functioning prototype of the proximity strategy. This prototype enabled high-fidelity testing with a real client implementation in real networks, including the Internet, without having to rely exclusively on event-driven simulations to prove the hypothesis.
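A sketch of the zone assignment and same-zone-first candidate ordering the abstract describes: a joining peer is placed in the zone of its nearest beacon, and same-zone peers are preferred when forming the overlay. Beacon coordinates and the great-circle distance model are assumptions of this sketch.

    import math

    def haversine_km(a, b):
        """Great-circle distance between two (lat, lon) points, in km."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))

    def assign_zone(peer_location, beacons):
        # A joining peer lands in the zone of its geographically nearest beacon.
        return min(beacons, key=lambda name: haversine_km(peer_location, beacons[name]))

    def order_candidates(my_zone, candidates):
        # Same-zone peers go to the top of the list used to form the overlay;
        # False (same zone) sorts before True (different zone).
        return sorted(candidates, key=lambda c: c["zone"] != my_zone)

    beacons = {"zone-east": (40.7, -74.0), "zone-west": (37.8, -122.4)}
    my_zone = assign_zone((41.0, -73.5), beacons)          # -> "zone-east"
    peers = [{"ip": "10.0.0.2", "zone": "zone-west"},
             {"ip": "10.0.0.3", "zone": "zone-east"}]
    preferred = order_candidates(my_zone, peers)           # east peer listed first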
4
Gap Filling in Climate Data Series Using P2P Networks (Preenchimento de Falhas de Séries de Dados Climáticos Utilizando Redes P2P). Schmitke, Luiz Rafael. 30 June 2012.
Agriculture is one of the activities on which climate has the greatest impact, influencing both the techniques and the crops employed. Much of agricultural productivity depends on climatic conditions, which arise from natural factors and cannot be controlled. Although the weather cannot be controlled, it can be predicted, or its conditions even simulated, to try to minimize its impact on agriculture. Such predictions and simulations require data collected from weather stations, which may be conventional or automatic, and the data must be free of gaps and abnormal values. Most of these errors are caused by signal interference, disconnections, oxidation of cables, and the spatio-temporal variation of the climate, which in turn generate those problems in the climate databases. The main objective of this research is therefore to create a model capable of filling the gaps in climate databases; it does not aim to correct abnormal observations, nor to replace the statistical methods used for the same purpose. To this end, a model was created that fills gaps in climate data by exchanging data among stations using a P2P architecture, and an application was built on top of it to test its gap-filling performance. The tests used databases from the cities of Ponta Grossa, Fernandes Pinheiro, and Telêmaco Borba, provided by the Instituto Tecnológico SIMEPAR, and from the cities of Castro, Carambeí, Pirai do Sul, and Tibagi, provided by the Fundação ABC, all consisting of daily data collected at automatic stations. The results show that the performance of the P2P correction model was satisfactory compared to the simulator used in the tests: it produced worse results only in February, which corresponds to summer, while for autumn, winter, and spring the P2P model outperformed the simulated one. It was also found that the number of stations participating in the network at the time of correction influences the results: the more stations, the better the results obtained.
Keywords: P2P networks, correction, climate data.
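A toy sketch of the gap-filling idea: a station with a missing daily reading asks peer stations in the P2P network for their values on the same day. The abstract does not specify the reconstruction formula, so the simple averaging rule and data layout below are assumptions.

    import statistics

    def fill_gaps(series, peer_series):
        """Fill missing readings (None) in a station's daily series using
        the values peer stations report for the same day.

        `series` is the local station's list of daily readings;
        `peer_series` is a list of equally long series from peer stations.
        """
        filled = list(series)
        for day, value in enumerate(series):
            if value is None:
                # Collect whatever the peers hold for the same day.
                peer_values = [p[day] for p in peer_series if p[day] is not None]
                if peer_values:
                    # More participating stations give a more robust estimate,
                    # matching the abstract's finding that results improve as
                    # the number of stations in the network grows.
                    filled[day] = statistics.mean(peer_values)
        return filled

    local = [21.3, None, 19.8]
    peers = [[21.0, 20.5, 19.9], [21.6, 20.1, None]]
    print(fill_gaps(local, peers))  # [21.3, 20.3, 19.8]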
5
Making Coding Practical: From Servers to Smartphones. Shojania, Hassan. 01 September 2010.
The fundamental insight behind the use of coding in computer networks is that the information to be transmitted from the source in a session can be inferred, or decoded, by the intended receivers, and does not have to be transmitted verbatim. Several coding techniques have gained popularity in recent years. Among them is random network coding with random linear codes, in which a node in a network topology transmits a linear combination of incoming, or source, packets to its outgoing links. The high computational complexity of random linear codes (RLC) is well known theoretically, and is used to motivate the application of more efficient codes, such as traditional Reed-Solomon (RS) codes and, more recently, fountain codes (LT codes). Factors such as computational complexity, network overhead, and deployment flexibility can make one coding scheme more attractive than the others for a given application. While there is no one-size-fits-all coding solution, random linear coding is very flexible, well known to achieve optimal flow rates in multicast sessions, and universally adopted in proposed protocols that use network coding. However, its practicality has been questioned due to its high computational complexity, and to date no commercial real-world system that takes advantage of the power of network coding has been reported in the literature.
This research represents the first attempt at a high-performance design and implementation of network coding. The objective of this work is to explore the computational limits of network coding on off-the-shelf modern processors, and to provide a solid reference implementation to facilitate commercial deployment of network coding. We promote the development of new coding-based systems and protocols through a comprehensive toolkit whose coding implementations are not just reference implementations: they attain the performance and flexibility needed for widespread adoption.
The final work, packaged as a toolkit code-named Tenor, includes high-performance implementations of a number of coding techniques: random linear network coding (RLC), fountain codes (LT codes), and Reed-Solomon (RS) codes on CPUs (single- and multi-core, for both the Intel x86 and IBM POWER families), on GPUs (single and multiple), and on mobile/embedded devices based on ARMv6 and ARMv7 cores. Tenor is cross-platform, with support for Linux, Windows, Mac OS X, and iPhone OS on both 32-bit and 64-bit platforms. The toolkit comprises some 23K lines of C++ code.
To validate the effectiveness of the Tenor toolkit, we build coding-based on-demand media streaming systems with GPU-based servers, thousands of clients emulated on a cluster of computers, and a small number of actual iPhone devices. To facilitate the deployment of such large experiments, we develop Blizzard, a high-performance framework with two main goals: 1) emulating hundreds of client/peer applications on each physical node, and 2) facilitating scalable servers that can efficiently communicate with thousands of clients. Our experiences offer an illustration of Tenor components in action and their benefits in rapid system development. With Tenor, it is trivial to switch from one coding technique to another, scale up to thousands of clients, and deliver actual video that can be played back even on mobile devices.
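For concreteness, a minimal sketch of the RLC encoding step whose cost motivates this work: a coded packet is a random linear combination of the source packets over a finite field. The GF(2^8) field with the AES reduction polynomial is a common choice, but an assumption here; Tenor's internals are not shown.

    import os

    def gf_mul(a, b):
        """Multiply two bytes in GF(2^8) modulo the polynomial 0x11B."""
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B
            b >>= 1
        return p

    def rlc_encode(source_blocks):
        """Produce one coded block: a random linear combination of the source
        blocks, plus the coefficient vector a receiver needs to decode."""
        coeffs = list(os.urandom(len(source_blocks)))  # random field elements
        coded = bytearray(len(source_blocks[0]))
        for c, block in zip(coeffs, source_blocks):
            for i, byte in enumerate(block):
                coded[i] ^= gf_mul(c, byte)  # addition in GF(2^8) is XOR
        return coeffs, bytes(coded)

    # Example: three 4-byte source blocks yield one coded block. The nested
    # byte-by-byte loop is exactly the per-byte work whose cost the thesis
    # attacks with SIMD, multi-core, and GPU implementations.
    blocks = [b"\x01\x02\x03\x04", b"\x05\x06\x07\x08", b"\x09\x0a\x0b\x0c"]
    coeffs, coded = rlc_encode(blocks)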
6
Design of platforms for computing context with spatio-temporal locality. Ziotopoulos, Agisilaos Georgios. 02 June 2011.
This dissertation is in the area of pervasive computing. It focuses on designing platforms for storing, querying, and computing contextual information; more specifically, we are interested in platforms for storing and querying spatio-temporal events where queries exhibit locality. Recent advances in sensor technologies have made it possible to gather a variety of information on the status of users, the environment, machines, etc. By combining this information with computation we are able to extract context, i.e., a filtered high-level description of the situation. In many cases, the information gathered exhibits locality in both space and time, i.e., an event is likely to be consumed at a location close to where it was produced, at a time close to when it was produced. This dissertation builds on this observation to create better platforms for computing context.
We claim three key contributions.
First, we have studied the problem of designing and optimizing spatial organizations for exchanging context. Our thesis contains original theoretical work on how to create a platform, based on the cells of a Voronoi diagram, that optimizes the energy and bandwidth required for mobiles to exchange contextual information tied to specific locations in the platform. Additionally, we applied our results to the problem of optimizing a system for surveilling the locations of entities within a given region.
Second, we have designed a platform for storing and querying spatio-temporal events that exhibit locality. Our platform is based on a P2P infrastructure in which peers, organized according to the Voronoi diagram associated with their locations, store events based on the events' locations. We have developed theoretical results, based on spatial point processes, for the delay experienced by a typical query in this system. Additionally, we used simulations to study heuristics that improve the performance of our platform, and we devised protocols for the replicated storage of events to increase its fault tolerance.
Finally, we propose a design for a platform, based on RFID tags, to support context-aware computing in indoor spaces. Our platform exploits the structure found in most indoor spaces to encode contextual information in suitably designed RFID tags. The elements of our platform collaborate through a set of messages we developed to offer context-aware services to the platform's users. We validated our research with an example hardware design of the RFID tag and a software emulation of the tag's functionality.
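A small sketch of the Voronoi placement rule underlying the second contribution: with point sites, the Voronoi cell containing a query point is simply the cell of the nearest site, so deciding which peer stores an event needs no explicit diagram. The flat-plane distance model and names are assumptions of this sketch.

    import math

    def nearest_peer(peers, event_location):
        """Return the peer whose Voronoi cell contains the event location,
        i.e., the peer nearest to it. Peers are (x, y) points in the plane."""
        ex, ey = event_location
        return min(peers, key=lambda p: math.hypot(p[0] - ex, p[1] - ey))

    # Each peer stores (and later answers queries for) the events that fall
    # inside its own Voronoi cell -- queries exhibiting spatial locality
    # therefore tend to resolve at nearby peers.
    peers = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
    storing_peer = nearest_peer(peers, event_location=(6.0, 7.0))  # -> (5.0, 8.0)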
7
On P2P Networks and P2P-Based Content Discovery on the Internet. Memon, Ghulam. 17 June 2014.
The Internet has evolved into a medium centered around content: people watch videos on YouTube, share their pictures via Flickr, and use Facebook to keep in touch with their friends. Yet the only globally deployed service for discovering content, the Domain Name System (DNS), does not discover content at all; it merely translates domain names into locations. The lack of persistent naming, in particular, makes content discovery, as opposed to domain discovery, challenging. Content Distribution Networks (CDNs), which augment DNS with location awareness, suffer from the same lack of persistent content names. Recently, several infrastructure-level solutions to this problem have emerged, but their fundamental limitation is that they fail to preserve the autonomy of network participants: the storage a participant must devote to resolution may not be proportional to its capacity, and these solutions cannot be deployed incrementally. To the best of our knowledge, content discovery services based on peer-to-peer (P2P) networks are the only ones that support persistent content names, and they come with the built-in advantages of scalability and deployability. However, P2P networks have been deployed in the real world only recently, and their real-world characteristics are not well understood. Understanding these characteristics is important for improving performance and for proposing new designs that address the weaknesses of existing ones. In this dissertation, we first propose a novel, lightweight technique for capturing P2P traffic. Using our captured data, we characterize several aspects of P2P networks and draw conclusions about their weaknesses. Next, we create a botnet to demonstrate the lethality of these weaknesses. Finally, we address the weaknesses of P2P systems in the design of a P2P-based content discovery service, which resolves the drawbacks of existing content discovery systems and can operate at Internet scale.
This dissertation includes both previously published/unpublished and co-authored material.
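Persistent-name resolution in P2P systems is typically realized with a distributed hash table; the consistent-hashing sketch below illustrates the principle (hash the name onto a ring of nodes), though the dissertation's actual design is not shown and all names here are illustrative.

    import hashlib
    from bisect import bisect_right

    def _hash(key):
        # Map a string onto a large circular identifier space.
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    class NameRing:
        """Consistent-hashing ring: a persistent content name always
        resolves to the node whose identifier follows the name's hash."""

        def __init__(self, nodes):
            self.ring = sorted((_hash(n), n) for n in nodes)

        def resolve(self, content_name):
            h = _hash(content_name)
            ids = [node_id for node_id, _ in self.ring]
            i = bisect_right(ids, h) % len(self.ring)  # wrap around the ring
            return self.ring[i][1]

    ring = NameRing(["node-a", "node-b", "node-c"])
    # The same name resolves to the same node no matter where the content
    # currently lives -- this is what makes the name persistent.
    print(ring.resolve("video/lecture-01"))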
8
Characterizing dissemination of illegal copies of content through BitTorrent networks. Schmidt, Adler Hoff. January 2013.
BitTorrent (BT) networks are nowadays the most widely employed method of peer-to-peer (P2P) file sharing on the Internet. Recent monitoring reports reveal that the content copies being shared are mostly illegal and that movies are the most popular media type. Research efforts carried out to understand the dynamics of content production and sharing in BT networks have been unable to provide precise information regarding the dissemination of illegal copies. In this work we perform an extensive experimental study in order to characterize the behavior of producers, publishers, providers, and consumers of copyright-infringing files. The study is based on seven months of traces obtained by monitoring swarms sharing movies via one of the most popular public BT communities. Traces were obtained with an extension of a BitTorrent "universe" observation architecture, which allowed the collection of a database with information about more than 55,000 torrents, 1,000 trackers, and 1.9 million IPs. Our analysis not only shows that a small group of active users is responsible for the majority of disseminated illegal copies, but also unravels existing relationships among these actors and characterizes the consumption patterns of users interested in this particular kind of content.
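A small helper illustrating the kind of concentration measurement behind the finding that a few active users account for most disseminated copies; the data layout, threshold, and toy numbers are assumptions, not the study's data.

    def top_contributor_share(copies_by_user, top_fraction=0.01):
        """Fraction of all disseminated copies attributable to the most
        active `top_fraction` of users (e.g., the top 1%)."""
        counts = sorted(copies_by_user.values(), reverse=True)
        k = max(1, int(len(counts) * top_fraction))
        return sum(counts[:k]) / sum(counts)

    # Toy example: three heavy publishers dominate a population of many
    # occasional ones -- the concentration pattern the monitoring reports.
    publishers = {f"user{i}": 1 for i in range(100)}
    publishers.update({"heavy1": 500, "heavy2": 300, "heavy3": 200})
    share = top_contributor_share(publishers, top_fraction=0.03)  # ~0.91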