291.
Peer-to-peer-based file-sharing beyond the dichotomy of 'downloading is theft' vs. 'information wants to be free': how Swedish file-sharers motivate their action
Andersson, Jonas January 2010 (has links)
This thesis aims to offer a comprehensive analysis of peer-to-peer-based file-sharing by focusing on the discourses about use, agency and motivation involved, and how they interrelate with the infrastructural properties of file-sharing. Peer-to-peer-based file-sharing is here defined as the unrestricted duplication of digitised media content between autonomous end-nodes on the Internet. It has become an extremely popular pastime, largely involving music, film, games and other media which is copied without the permission of the copyright holders. Due to its illegality, the popular understanding of the phenomenon tends to overstate its conflictual elements, framing it within a legalistic 'copyfight'. This is most markedly manifested in the dichotomised image of file-sharers as 'pirates' allegedly opposed to the entertainment industry. The thesis is an attempt to counter this dichotomy by using a more heterodox synthesis of perspectives, aiming to assimilate the phenomenon's complex intermingling of technological, infrastructural, economic and political factors. The geographic context of this study is Sweden, a country characterised by early broadband penetration and subsequently widespread unrestricted file-sharing, paralleled by a lively and well-informed public debate. This gives geographic specificity and further context to the file-sharers' own justificatory discourses, serving to highlight and problematise some principal assumptions about the phenomenon. The thesis thus serves as a geographically contained case study which will have analytical implications outside of its immediate local context, and as an inquiry into two aspects of file-sharer argumentation: the ontological understandings of digital technology and the notion of agency. These, in turn, relate to particular forms of sociality in late modernity. 
Although the agencies and normative forces involved are innumerable, controversies about agency tend to order themselves in a more comprehensive way as they are appropriated discursively. The invocation of agency found in the justificatory discourses - both in the public debate and among individual respondents - thus allows for a more productive and critically attentive understanding of the phenomenon than previously available.
292.
Peer Selection Algorithm in Stochastic Content Delivery Networks to Reduce File Download Time
Lehrfeld, Michael Richard 01 January 2010 (has links)
The download duration in peer-to-peer overlay networks is highly dependent upon the client's selection of candidate node-servers and the algorithms used in that process. Recent findings suggest that as node-server network capacity increases, the deviation from the average total download time can vary by as much as 300 percent between selection algorithms. This work investigated the current selection algorithms based upon chunk size, parallel connections, permanent connection, and time-based switching.
The time-based switching algorithm is a variation of the chunk-based algorithm. Time-based switching enables a client to randomly select a new node-server, regardless of connection speed, at predetermined time slots. Simulations indicate a 41% decrease in download time when compared to chunk-based switching.
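The difference between a permanent connection and time-based switching can be illustrated with a toy simulation (the peer capacities, slot length, and file size below are invented for illustration and are not the dissertation's experimental parameters):

```python
import random

def download_time_permanent(file_size, capacities, rng):
    """Stay with one randomly chosen node-server for the whole download."""
    return file_size / rng.choice(capacities)

def download_time_switching(file_size, capacities, slot, rng):
    """Time-based switching: at each predetermined time slot the client
    randomly selects a new node-server, regardless of connection speed."""
    remaining, elapsed = file_size, 0.0
    while remaining > 0:
        rate = rng.choice(capacities)            # new random peer this slot
        transferred = min(remaining, rate * slot)
        elapsed += transferred / rate
        remaining -= transferred
    return elapsed

rng = random.Random(42)
capacities = [10, 50, 100, 500]                  # hypothetical peer rates (KB/s)
runs = 2000
perm = sum(download_time_permanent(5000, capacities, rng) for _ in range(runs)) / runs
switch = sum(download_time_switching(5000, capacities, 5, rng) for _ in range(runs)) / runs
print(perm, switch)
```

Averaged over many runs, switching avoids being stuck with a slow peer for an entire download, which is the intuition behind the reported improvement; the exact percentages depend on the capacity distribution.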
The effects of inserting chokepoints into the time-based switching algorithm were also investigated, with the aim of improving a client's download performance by preemptively releasing the client from a poorly performing node-server. To achieve this, the client gathers the peer-to-peer network overlay capacity from a global catalog; this information seeds a client choke algorithm. Clients then continually update a local capacity average based upon past download sessions, and this local average is compared against the current download session. A margin is introduced to allow the client to vary from the calculated average capacity. The client performs comparisons against the chokepoints and decides whether to depart a node-server that does not meet minimum capacity standards.
Experimental results in this research demonstrated the effectiveness of applying a choking algorithm to improve client download duration as well as to increase the accuracy of download duration estimates. In the single-downloader scenario, the choke-based algorithm improved performance by up to 44% under extreme congestion and by a more modest 13% under normal conditions. The multiple-client scenarios yielded on average a 1% decrease in client download duration along with a 44% increase in download homogeneity. Furthermore, the results indicate that a client-based choking algorithm can decrease overall peer-to-peer network congestion by improving the client's selection of node-servers.
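The choke decision itself reduces to comparing the current session's rate against a locally tracked average, offset by the margin. A minimal sketch (the 25% margin and the rates are hypothetical values, not the dissertation's):

```python
def should_choke(session_rate, past_rates, margin=0.25):
    """Depart the current node-server when the session's observed rate
    falls more than `margin` below the locally tracked average."""
    if not past_rates:
        return False                  # no history yet: keep the connection
    local_avg = sum(past_rates) / len(past_rates)
    return session_rate < (1.0 - margin) * local_avg

past_sessions = [90.0, 110.0, 100.0]  # rates from previous downloads (KB/s)
choke_slow = should_choke(40.0, past_sessions)   # far below the 100 KB/s average
keep_ok = should_choke(95.0, past_sessions)      # within the margin
print(choke_slow, keep_ok)
```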
293.
Performance Analysis of Structured Overlay Networks / Leistungsbewertung Strukturierter Overlay Netze
Binzenhöfer, Andreas January 2007 (has links) (PDF)
Overlay networks establish logical connections between users on top of the physical network. While randomly connected overlay networks provide only a best-effort service, a new generation of structured overlay systems based on Distributed Hash Tables (DHTs) has been proposed by the research community. However, there is still a lack of understanding of the performance of such DHTs. Additionally, these architectures are highly distributed and therefore appear as a black box to the operator. Yet an operator does not want to lose control over the system and needs to be able to continuously observe and examine its current state at runtime. This work addresses both problems and shows how the solutions can be combined into a more self-organizing overlay concept. At first, we evaluate the performance of structured overlay networks under different aspects and thereby illuminate to what extent such architectures are able to support carrier-grade applications. Secondly, to enable operators to monitor and understand their deployed system in more detail, we introduce both active and passive methods to gather information about the current state of the overlay network. / An overlay network is the combination of several components into a logical topology built on top of an existing physical infrastructure. Since random connections between the individual participants are very inefficient, structured overlay networks were designed, in which the relationships between the participants are strictly prescribed. Such structured mechanisms promise great potential, but this potential has not yet been sufficiently investigated or scientifically demonstrated. In this work, the performance of structured overlay networks is examined using mathematical methods and event-driven simulation.
Since this performance depends strongly on the current situation in the overlay, we develop methods with which important system parameters can be estimated or measured at runtime, both passively and actively. Together, the proposed methods lead to self-organizing mechanisms that monitor the current state of the overlay, evaluate it, and, if necessary, adapt automatically to the current conditions.
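As a sanity check on DHT lookup performance, consider an idealized, fully populated Chord-style ring (an assumption made here purely for illustration; real deployments are sparse and subject to churn). On such a ring, greedy finger routing clears one set bit of the clockwise distance per hop, so a lookup takes at most log2 of the ring size hops and about half of that on average:

```python
import random

def chord_hops_full_ring(m, src, key):
    """On a fully populated Chord ring with 2**m nodes, greedy finger
    routing takes one hop per set bit of the clockwise distance."""
    dist = (key - src) % (1 << m)
    return bin(dist).count("1")

m, trials = 20, 5000
rng = random.Random(1)
hops = [chord_hops_full_ring(m, rng.randrange(1 << m), rng.randrange(1 << m))
        for _ in range(trials)]
avg = sum(hops) / trials
print(avg, max(hops))   # average near m/2, never above m
```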
294.
Performance Evaluation of Future Internet Applications and Emerging User Behavior / Leistungsbewertung von zukünftigen Internet-Applikationen und auftretenden Nutzerverhaltens
Hoßfeld, Tobias January 2009 (has links) (PDF)
In future telecommunication systems, we observe an increasing diversity of access networks. The separation of transport services and applications or services leads to multi-network services, i.e., a future service has to work transparently to the underlying network infrastructure. Multi-network services with edge-based intelligence, like P2P file sharing or the Skype VoIP service, impose new traffic control paradigms on the future Internet. Such services adapt the amount of consumed bandwidth to reach different goals. A selfish behavior tries to keep the QoE of a single user above a certain level. Skype, for instance, repeats voice samples depending on the perceived end-to-end loss. From the viewpoint of a single user, the replication of voice data overcomes the degradation caused by packet loss and makes it possible to maintain a certain QoE. The cost of this achievement is a higher amount of consumed bandwidth. However, if the packet loss is caused by congestion in the network, this additionally required bandwidth worsens the network situation even further. Altruistic behavior, on the other hand, would reduce the bandwidth consumption in such a way that the pressure on the network is relieved and the overall network performance is thus improved. In this monograph, we analyzed the impact of the overlay, P2P, and QoE paradigms on future Internet applications and their interactions with the emerging user behavior. The shift of intelligence toward the edge is accompanied by a change in the emerging user behavior and traffic profile, as well as a change from multi-service networks to multi-network services. In addition, edge-based intelligence may lead to higher dynamics in the network topology, since the applications are often controlled by an overlay network, which can rapidly change in size and structure as nodes can join or leave the overlay in an entirely distributed manner.
As a result, we found that the performance evaluation of such services poses new challenges, since novel key performance factors, like the pollution of P2P systems, first have to be identified, and appropriate models of the emerging user behavior are required, e.g. taking into account user impatience. As a common denominator of the studies presented in this work, we focus on a user-centric view when evaluating the performance of future Internet applications. For a subscriber of a certain application or service, the perceived quality, expressed as QoE, will be the major criterion of the user's satisfaction with the network and service providers. We selected three different case studies and characterized each application's performance from the end user's point of view: (1) cooperation in mobile P2P file-sharing networks, (2) modeling of online TV recording services, and (3) QoE of edge-based VoIP applications. The user-centric approach facilitates the development of new mechanisms to overcome problems arising from the changing user behavior. An example is the proposed CycPriM cooperation strategy, which copes with selfish user behavior in mobile P2P file-sharing systems. An adequate mechanism has also been shown to be efficient in a heterogeneous B3G network with mobile users conducting vertical handovers between different wireless access technologies. The consideration of the user behavior and the user-perceived quality guides the appropriate modeling of future Internet applications. In the case of the online TV recording service, this enables the comparison between different technical realizations of the system, e.g. using server clusters or P2P technology, to properly dimension the installed network elements and to assess the costs for service providers. Technologies like P2P help to overcome phenomena like flash crowds and improve scalability compared to server clusters, which may get overloaded in such situations.
Nevertheless, P2P technology poses additional challenges and elicits user behavior different from that seen in traditional client/server systems. Besides the willingness to share files and the churn of users, peers may be malicious and offer fake content to disturb the data dissemination. Finally, understanding and quantifying QoE with respect to QoS degradations permits the design of sophisticated edge-based applications. To this end, we identified and formulated the IQX hypothesis as an exponential interdependency between QoE and QoS parameters, which we validated for different examples. The appropriate modeling of the emerging user behavior, taking into account the user's perceived quality and its interactions with the overlay and P2P paradigms, will finally help to design future Internet applications. / Applications in today's Internet are increasingly provided by intelligent end nodes whose communication is realized in logical, virtual networks (overlays). This increased provisioning of services through such overlays, for example peer-to-peer file-sharing networks or telephony over the Internet, is described by a paradigm shift from multi-service networks to multi-network services. While a multi-service network offers different services within one network, a multi-network service describes service provisioning across different networks and network access technologies, as is the case in the Internet. As a consequence, the technical quality of a telecommunication service (Quality of Service, QoS) can no longer be the sole metric for the quality of a service. Instead, the quality experienced by the user (Quality of Experience, QoE) must be considered. This QoE has to be modeled accordingly in order to assess the performance of today's and future Internet applications.
Taking QoE into account also involves novel user behavior, which likewise has to be modeled. One example is service abandonment by impatient users while downloading films, or when the quality of Internet telephony is insufficient. The shift of application intelligence toward the end nodes gives rise to emerging user behavior and changing traffic characteristics that distinguish such applications from classical client-server applications. Examples are selfish or altruistic user behavior in contributing end-user resources to service provisioning, or malicious user behavior aimed at deliberately disturbing a service (pollution). In both cases, the time-dynamic behavior patterns (churn, flash crowds) must be taken into account. Moreover, to plan and evaluate such large overlay networks, new performance evaluation models are needed, so that, for example, simulations scale or time-dynamic user behavior can be captured in analytical models. This doctoral thesis works through these aspects in three application examples: a content distribution network, network-based video recorders (online TV recorders), and voice telephony over P2P (VoP2P). The results and investigations of this work are structured according to these application examples.
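The IQX hypothesis mentioned above relates QoE exponentially to a QoS impairment, QoE = α·exp(-β·impairment) + γ. A small sketch on a MOS-like 1-5 scale (the parameter values below are illustrative defaults, not fits taken from the monograph):

```python
import math

def iqx_qoe(impairment, alpha=3.5, beta=4.0, gamma=1.5):
    """IQX hypothesis: QoE decays exponentially with the QoS impairment
    (e.g. a packet-loss ratio); alpha, beta, gamma are illustrative."""
    return alpha * math.exp(-beta * impairment) + gamma

qoe_clean = iqx_qoe(0.0)    # no impairment: alpha + gamma = 5.0
qoe_lossy = iqx_qoe(0.5)
print(qoe_clean, qoe_lossy)
```

A property worth noting: the sensitivity dQoE/d(impairment) is proportional to QoE - γ, i.e. a given degradation hurts most while the current quality is still high.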
295.
Peer-to-peer network architecture for massive online gaming
Shongwe, Bongani 01 September 2014
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014. / Virtual worlds and massive multiplayer online games are amongst the most popular applications on the Internet. In order to host these applications a reliable architecture is required. It is essential for the architecture to handle high user loads, maintain a complex game state, promptly respond to game interactions, and prevent cheating, amongst other properties. Many of today's Massive Multiplayer Online Games (MMOG) use client-server architectures to provide multiplayer service. Clients (players) send their actions to a server; the latter calculates the game state and publishes the information to the clients. Although the client-server architecture has been widely adopted in the past for MMOG, it suffers from many limitations. First, applications based on a client-server architecture are difficult to support and maintain given the dynamic user base of online games, and such architectures do not easily scale or handle heavy loads. Also, the server constitutes a single point of failure. We argue that peer-to-peer architectures can provide better support for MMOG: they enable the user base to scale to large numbers, and they limit the disruptions experienced by players when other nodes fail.
This research designs and implements a peer-to-peer architecture for MMOG. The architecture aims at reducing message latency both over the network and on the application layer. We refine the communication between nodes to reduce network latency by using SPDY, a protocol designed to reduce web page load time. For the application layer, an event-driven paradigm is used to process messages. Through user load simulation, we show that our peer-to-peer design is able to process and reliably deliver messages in a timely manner. Furthermore, by distributing the work conducted by a game server, our research shows that a peer-to-peer architecture responds to requests more quickly than client-server models.
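The event-driven paradigm for application-layer message processing can be sketched as a timestamped queue with handlers registered per message kind (a minimal illustration; the message types and payloads are invented, not taken from the dissertation's implementation):

```python
import heapq

class EventLoop:
    """Minimal event-driven dispatcher: messages carry a timestamp and are
    processed in time order by handlers registered per message kind."""
    def __init__(self):
        self.queue, self.handlers, self.seq = [], {}, 0
    def on(self, kind, handler):
        self.handlers[kind] = handler
    def post(self, at, kind, payload):
        heapq.heappush(self.queue, (at, self.seq, kind, payload))
        self.seq += 1                 # tie-breaker for equal timestamps
    def run(self):
        processed = []
        while self.queue:
            at, _, kind, payload = heapq.heappop(self.queue)
            processed.append(self.handlers[kind](at, payload))
        return processed

game_state = {"players": {}}

def handle_move(at, msg):
    game_state["players"][msg["id"]] = msg["pos"]
    return ("move", msg["id"], at)

loop = EventLoop()
loop.on("move", handle_move)
loop.post(12, "move", {"id": "p2", "pos": (4, 4)})   # posted out of order
loop.post(5, "move", {"id": "p1", "pos": (1, 0)})
log = loop.run()
print(log)   # handled in timestamp order regardless of arrival order
```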
296.
Quantitative Risk Assessment for Residential Mortgages
Ren, Qingyun 01 May 2017 (has links)
The crisis of the mortgage market and the mortgage-backed security (MBS) market in 2008 dragged down the economy on a worldwide scale. Many studies have therefore attempted to explore the factors influencing mortgage default risk. This project, in cooperation with the company EnerScore, revolves around discovering a correlation between portfolios of mortgages and underlying energy expenditures. EnerScore's core product provides an internal dataset related to home energy efficiency for American homes and assigns every home a corresponding home energy efficiency rating, called an "EnerScore." The goal is to show that energy-efficient homes potentially have lower default risk than standard homes, because homes which lack energy efficiency are associated with higher energy costs; this leaves less money to make the mortgage payment and thereby increases default risk. The first phase of this project involves finding a foreclosure dataset to be used to design the quantitative model. Due to limited availability and constraints related to default data, Google search query data is used to develop a broad-based and real-time index of mortgage default risk and establish a meaningful scientific correlation. After analyzing several statistical models to explore this correlation, the regression tree model showed that the EnerScore is a strong predictor of mortgage default risk when using city-level mortgage default risk data and EnerScore data.
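The flavor of the regression-tree result can be illustrated with a depth-1 tree (a single threshold split) fitted by least squares on invented EnerScore-like data; this is a toy stand-in for the project's model, and the numbers are made up:

```python
def best_stump(xs, ys):
    """Fit a depth-1 regression tree: pick the threshold that minimises
    the summed squared error of the two resulting leaf means."""
    best = None
    for thr in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= thr]
        right = [y for x, y in zip(xs, ys) if x > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    return best[1:]            # (threshold, mean_below, mean_above)

# Invented data: lower efficiency rating -> higher default rate.
scores = [20, 30, 40, 55, 70, 80, 90, 95]
defaults = [0.09, 0.08, 0.07, 0.03, 0.02, 0.02, 0.01, 0.01]
t, mean_below, mean_above = best_stump(scores, defaults)
print(t, mean_below, mean_above)
```

The fitted split separates low-rated homes (higher mean default rate) from high-rated ones, which is the shape of relationship the project set out to demonstrate.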
297.
Desenvolvimento de um ambiente de computação voluntária baseado em computação ponto-a-ponto / Development of a volunteer computing environment based on peer-to-peer computing
Santiago, Caio Rafael do Nascimento 13 March 2015
The computational needs of scientific experiments often require powerful computers. An alternative way to obtain this processing power is to take advantage of the idle cycles of personal computers, volunteered by their owners. This technique is known as volunteer computing and has great potential to help scientists. However, several issues can reduce its efficiency when applied to complex scientific experiments, for example those involving long-running processing or very large input or output data. In an attempt to solve some of these problems, approaches applying peer-to-peer (P2P) concepts have arisen.
In this project, an execution environment and an activity scheduler that apply P2P concepts to workflow execution with volunteer computing were specified, developed, and tested. Compared with local execution of activities and with traditional volunteer computing, execution time improved (a reduction of up to 22% relative to traditional volunteer computing in the most complex tests), and in some cases the server's upload bandwidth consumption was also reduced by up to 62%.
298.
P2P LENDING MARKET: DETERMINANTS OF INTEREST RATE AND DEFAULT RISK
Liu, Guanting January 2019
The peer-to-peer (P2P) lending industry has grown fast in recent years. This study examines the credit evaluation system of one P2P platform, Lending Club. Using empirical methods, the author discusses the determinants of interest rates and default risk in the P2P lending market, and concludes that the evaluation system developed by Lending Club can predict the risk of loans. Collecting more information about borrowers' credit history may further increase the accuracy of the model.
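As an illustration of the kind of credit evaluation discussed, default risk is commonly scored with a logistic model over borrower features. The features, weights, and numbers below are entirely invented for the sketch and are not Lending Club's actual system:

```python
import math

def default_probability(features, weights, bias):
    """Toy logistic model of default risk from borrower features -
    an illustration only; the weights are invented."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [debt-to-income ratio, past delinquencies, FICO/850]
weights = [2.0, 0.8, -3.0]
bias = -1.0
risky = default_probability([0.45, 3, 620 / 850], weights, bias)
steady = default_probability([0.10, 0, 780 / 850], weights, bias)
print(risky, steady)   # higher probability for the riskier profile
```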
299.
An analysis of continuous consistency models in real time peer-to-peer fighting games
Huynh, Martin, Valarino, Fernando January 2019
This study analyses different methods of maintaining a consistent state between two peers in a real-time fighting game played over a network. Current methods of state management are explored in a comprehensive literature review, which establishes baseline knowledge and a theoretical comparison of use cases for the two most common models: delay and rollback. These results were then further explored in a practical case study in which a test fighting game implementing both delay and rollback networking was created in Unity3D. The networking strategies were tested by a group of ten users under different simulated network conditions, and their experiences were documented using a Likert-style questionnaire at each stage of testing. Based on user feedback, the implemented rollback strategy provided an overall better user experience: rollback remained more responsive and stable than the delay implementation as network latency increased, suggesting that rollback is also more fault-tolerant than delay.
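The rollback model tested here can be sketched in a few lines: the game simulates ahead with a predicted remote input, and when the real input arrives late it rewinds to the snapshot for that frame and re-simulates. This is a minimal sketch with an integer game state; the prediction rule of repeating the last confirmed input is one common choice, not necessarily the study's:

```python
class RollbackSession:
    """Rollback networking in miniature: simulate ahead with predicted
    remote inputs; on a late-arriving real input, rewind and re-simulate."""
    def __init__(self):
        self.inputs = []        # input applied at each simulated frame
        self.snapshots = [0]    # snapshots[f] = state before frame f
    def state(self):
        return self.snapshots[-1]
    def advance(self, remote_input=None):
        # Predict a missing input by repeating the last known one.
        if remote_input is None:
            remote_input = self.inputs[-1] if self.inputs else 0
        self.inputs.append(remote_input)
        self.snapshots.append(self.snapshots[-1] + remote_input)
    def confirm(self, frame, real_input):
        if self.inputs[frame] == real_input:
            return              # prediction was right: nothing to redo
        self.inputs[frame] = real_input
        # Rewind to the snapshot before `frame`, re-simulate forward.
        for f in range(frame, len(self.inputs)):
            self.snapshots[f + 1] = self.snapshots[f] + self.inputs[f]

s = RollbackSession()
s.advance(2)      # frame 0: remote input arrived on time
s.advance()       # frame 1: input missing, predicted as 2
s.advance()       # frame 2: still missing, predicted as 2
s.confirm(1, 5)   # frame 1's real input arrives late: rollback + resim
print(s.state())  # 2 + 5 + 2 = 9, as if all inputs had arrived in order
```

Unlike delay networking, the local player never waits for the remote input; the cost is the occasional visible correction when a prediction was wrong.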
300.
Performance and security issues in peer-to-peer based content distribution networks. / CUHK electronic theses & dissertations collection
January 2007 (has links)
Peer-to-Peer (P2P) networks, especially P2P-based content distribution networks (CDN), have enabled large-scale content distribution without major infrastructure support in recent years. However, P2P-based CDNs suffer from performance issues such as stability and scalability, as well as security threats, due to their decentralized nature. In this thesis, we address the performance and security issues in P2P-based CDNs.
We first consider a BitTorrent-like file swarming system. A simple mathematical model is presented for understanding its performance. With the model we find that under the stable state the peer distribution follows an asymmetric U-shaped curve, which is determined and influenced by various factors. We also analyze the content availability in the system and study its dying process, in which the integrity of the content is endangered. An innovative "tit-for-tat" unchoking strategy enabling more peers to finish their download jobs and prolonging the system's lifetime is proposed.
We then consider an application-layer tree-like overlay for synchronous live media multicasting, and in particular address the instability of the multicast overlay caused by nodes' abrupt departures. A set of algorithms is proposed to improve the overlay's stability based on actively estimating the nodes' lifetime model. To support our solution, we have studied the lifetime model via real-world measurements and have formally proved the effectiveness of the algorithms. The experimental performance evaluation indicates that our algorithms work inexpensively and can improve the overlay's stability considerably.
We also consider asynchronous on-demand media (MoD) streaming using P2P networks. In particular, we aim to improve the scalability of the system by proposing a novel probabilistic caching mechanism. Theoretical analysis shows that by engaging the proposed mechanism with a flexible system parameter, better scalability can be achieved by a MoD system with less workload imposed on the server. Moreover, we show by simulation that our caching mechanism can improve the streaming service perceived by peers under various conditions of server capacities and network environments.
Finally, to improve the security of P2P-based CDNs against peer misbehavior, we present a stochastic analytical model for understanding the performance of P2P rating systems, which are widely employed for safeguarding P2P-based CDNs. We study two representative designs, the unstructured self-managing rating (UMR) system and the structured supervising rating (SSR) system, under various network environments and adversary attacks. We also propose a configurable loosely supervising rating (LSR) system, and show that it works inexpensively and can trade off between the features of the UMR and SSR systems, providing better overall performance according to the application context.
Tian, Ye. / "July 2007." / Adviser: Kam-Wing Ng. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1119. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 180-193). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
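The scalability intuition behind probabilistic caching can be shown with a toy model in which each peer keeps a streamed segment with probability p and can then serve later requests itself. The reachability probability and all numbers below are invented for illustration; this is not the thesis's exact mechanism:

```python
import random

def serve_requests(requests, cache_prob, rng):
    """Count how many requests for one media segment must be served by
    the central server when peers cache the segment with `cache_prob`."""
    cached_copies, server_hits = 0, 0
    for _ in range(requests):
        # Assume a 90% chance that some caching peer is reachable.
        if cached_copies > 0 and rng.random() < 0.9:
            pass                      # served from a peer's cache
        else:
            server_hits += 1          # falls back to the server
        if rng.random() < cache_prob:
            cached_copies += 1        # this peer keeps a copy
    return server_hits

rng = random.Random(7)
no_cache = serve_requests(1000, 0.0, rng)
with_cache = serve_requests(1000, 0.3, rng)
print(no_cache, with_cache)
```

Raising the caching probability shifts load from the server to the peers, which is the scalability lever the flexible system parameter controls.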