351 |
Quality-consciousness in Large-scale Content Distribution in the Internet. Gupta, Minaxi, 23 July 2004 (has links)
Content distribution is the primary function of the Internet today. Technologies like multicast and peer-to-peer networks hold the potential to serve content to large populations in a scalable manner. While multicast provides an efficient transport mechanism for one-to-many and many-to-many delivery of data in an Internet environment, peer-to-peer networks allow scalable content location and retrieval among large groups of users.
Incorporating quality-consciousness into these technologies is necessary to enhance the overall experience of clients. This dissertation focuses on architectures and mechanisms that enhance multicast and peer-to-peer content distribution through quality-consciousness. In particular, the following aspects of quality-consciousness are addressed: 1) client latency, 2) service differentiation, and 3) content quality.
Data analysis shows that existing multicast scheduling algorithms behave unfairly when the access conditions for popular files change: they favor the popular files while penalizing files whose access conditions have not changed. To maintain client latency for all files under dynamic access conditions, we develop a novel multicast scheduling algorithm that requires no change in server provisioning.
Service differentiation is a desirable functionality for both multicast and peer-to-peer networks. For multicast, we design a scalable, low-overhead service differentiation architecture. For peer-to-peer networks, we focus on a protocol that provides different levels of service to peers based on their contributions to the system.
The ability to associate reliable reputations with peers is a useful feature of peer-to-peer networks. Reliable reputations can help establish trust and hence improve content quality; they can also serve as a substrate for a service differentiation scheme. This dissertation develops two methods of tracking peer reputations with varying degrees of reliability and overhead.
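The kind of popularity-driven scheduling critiqued in this abstract can be made concrete with RxW, a classic broadcast scheduler that always serves the file with the largest requests-times-wait product, so a surge in one file's popularity starves files whose demand is unchanged. A minimal sketch (the data layout is our own assumption, and this is a known baseline scheduler, not the dissertation's new algorithm):

```python
def rxw_pick(queues, now):
    """RxW broadcast scheduling (Aksoy and Franklin): transmit the file
    maximizing (number of pending requests) x (wait of the oldest request).
    A classic popularity-sensitive scheduler of the kind the dissertation
    finds unfair under dynamic access conditions; the data layout
    (dict of file -> request arrival times) is our own assumption."""
    best_file, best_score = None, -1.0
    for name, arrivals in queues.items():
        if not arrivals:
            continue
        score = len(arrivals) * (now - min(arrivals))
        if score > best_score:
            best_file, best_score = name, score
    return best_file
```

With three pending requests, a newly popular file immediately outscores a file with one long-waiting request, which is exactly the unfairness the dissertation's algorithm is designed to avoid.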
|
352 |
The Design and Evaluation of Advanced TCP-based Services over an Evolving Internet. He, Qi, 19 July 2005 (has links)
Performance evaluation continues to play an important role in network research. Two types of research efforts related to network performance evaluation are particularly noteworthy: (1) using performance evaluation to understand specific problems and to design better solutions, and (2) designing efficient performance evaluation methodologies.
This thesis addresses several performance evaluation challenges, encompassing both categories of effort listed above, in building high-performance TCP-based network services in the context of overlay routing and peer-to-peer systems.
With respect to the first type of research effort, this thesis addresses two issues related to the design of TCP-based network services:
1. Prediction of large-transfer TCP throughput: Predicting the TCP throughput attainable on given paths is useful for applications such as route selection in overlay routing. Based on a systematic measurement study, we evaluate the accuracy of two categories of TCP throughput prediction techniques and then analyze the factors that affect the accuracy of each.
2. Congestion control and message loss in Gnutella peer-to-peer networks: We evaluate the congestion control mechanisms and message loss behavior in a real-world overlay network, the Gnutella system. The challenges for congestion control in such a network are analyzed, as are the design tradeoffs of alternative mechanisms. To study such systems with packet-level detail, we build a scalable, extensible and portable packet-level simulator of peer-to-peer systems.
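Formula-based prediction, one of the two technique categories the measurement study evaluates, can be illustrated with the well-known square-root model, which maps a path's RTT and loss rate to an attainable rate. This is a representative sketch, not the thesis's exact estimator, and the path measurements below are hypothetical:

```python
import math

def predict_tcp_throughput(mss_bytes, rtt_s, loss_rate):
    """Square-root model (Mathis et al.): rate ~ (MSS/RTT) * sqrt(3/(2p)).
    A representative member of the formula-based prediction category;
    the other category studied uses history-based time-series predictors."""
    if loss_rate <= 0.0:
        raise ValueError("the formula-based model needs a positive loss rate")
    return (mss_bytes / rtt_s) * math.sqrt(1.5 / loss_rate)

# Overlay route selection as the abstract describes it: rank candidate
# paths (hypothetical MSS, RTT, loss measurements) by predicted rate.
paths = {"direct": (1460, 0.120, 0.020), "via_relay": (1460, 0.150, 0.002)}
best_path = max(paths, key=lambda p: predict_tcp_throughput(*paths[p]))
```

Note how the relay path wins despite its longer RTT because its loss rate is an order of magnitude lower, which is why prediction accuracy matters for route selection.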
The second part of the thesis, representing the second type of effort above, proposes two techniques to improve network simulation by exploiting the detailed knowledge of TCP:
1. Speeding up network simulation by exploiting TCP steady-state predictability: We develop a technique that uses prediction to accurately summarize a series of packet events, saving processing cost while maintaining fidelity. Our technique integrates well with packet-level simulations and is more faithful in several respects than previous optimization techniques.
2. TCP workload generation under link load constraints: We develop an algorithm that generates traffic for a specific network configuration such that realistic and specific load conditions are obtained on user-specified links. At the same time, the algorithm minimizes the simulation memory requirement.
|
353 |
Throughput and Fairness Considerations in Overlay Networks for Content Distribution. Karbhari, Pradnya, 26 August 2005 (has links)
The Internet has been designed as a best-effort network, which does not provide any additional services to applications using the network. Overlay networks, which form an application layer network on top of the underlying Internet, have emerged as popular means to provide specific services and greater control to applications. Overlay networks offer a wide range of services, including content distribution, multicast and multimedia streaming. In my thesis, I focus on overlay networks for content distribution, used by applications such as bulk data transfer, file sharing and web retrieval.
I first investigate the construction of such overlay networks by studying the bootstrapping functionality in an example network (the Gnutella peer-to-peer system). This study comprises the analysis and performance measurements of Gnutella servents and measurement of the GWebCache system that helps new peers find existing peers on the Gnutella network.
Next, I look at fairness issues that arise when a client retrieves data in multipoint-to-point sessions, formed due to the use of content distribution networks. A multipoint-to-point session comprises multiple connections from multiple servers to a single client over multiple paths, initiated to retrieve a single application-level object. I investigate fairness of rate allocation from a session point of view, and propose fairness definitions and algorithms to achieve them.
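One baseline against which session-level fairness can be compared is classic per-connection max-min fairness at a shared bottleneck, computable by progressive filling. The sketch below assumes a single bottleneck (e.g., the client's access link); the thesis's session-level definitions go beyond this per-connection view:

```python
def max_min_allocation(demands, capacity):
    """Progressive filling: classic per-connection max-min fair sharing of
    one bottleneck among a session's connections. A baseline only; the
    thesis argues for fairness defined at the session level rather than
    per connection."""
    allocation = {}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining:
        share = cap / len(remaining)
        satisfied = {c: d for c, d in remaining.items() if d <= share}
        if not satisfied:
            for conn in remaining:       # everyone is bottlenecked: equal split
                allocation[conn] = share
            return allocation
        for conn, demand in satisfied.items():
            allocation[conn] = demand    # under-demanding connections are sated
            cap -= demand
            del remaining[conn]
    return allocation
```

Under this per-connection view, a session opening many connections captures many fair shares, which is precisely the incentive problem that motivates defining fairness per session instead.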
Finally, I consider the problem of designing an overlay network for content distribution that is fair to competing overlay networks while maximizing the total end-to-end throughput of the data it carries. As a first step, I investigate this design problem for a single path in an Overlay-TCP network. I propose two schemes that dynamically provision the number of TCP connections on each hop of an Overlay-TCP path to maximize the end-to-end throughput using few extraneous connections. Next, I design an Overlay-TCP network with the secondary goal of intra-overlay network fairness. I propose four schemes for deciding the number of TCP connections to be used on each overlay hop, and show that one can vary the proportion of sharing between competing overlay networks by varying the maximum number of connections allowed on overlay hops in each competing network.
|
354 |
Transmission Schemes, Caching Algorithms and P2P Content Distribution with Network Coding for Efficient Video Streaming Services. Kao, Yung-cheng, 23 February 2010 (has links)
For more than a decade, streaming media services, including on-line conferences, distance education and movie broadcasting, have gained much popularity on the Internet. Because of the high bandwidth requirements and long-lived nature of video streaming, supporting these services incurs a huge transmission cost. In addition, adapting rich multimedia content to various resource-constrained devices presents a challenge: the limited and time-varying network bandwidth complicates content adaptation, and differentiated content delivery may be required to meet diverse client profiles and user preferences. Therefore, to reduce the transmission cost of serving heterogeneous clients, this dissertation proposes several novel schemes, including a transcoding-enabled proxy caching scheme, reactive transmission schemes, and a network-coding P2P content distribution scheme, to support efficient multiple-version and layered video delivery in proxy-attached network environments and to provide an efficient interactive IPTV service in a peer-to-peer network.
Firstly, for multiple-version caching in the transcoding-enabled proxy, we focus on reducing the required server bandwidth and startup delay by caching the optimal versions of each video. A generalized video object profit function is derived from the extended weighted transcoding graph to calculate the individual cache profit of a certain version of a video object, and the aggregate profit from caching multiple versions of the same video object. This function takes into account the popularity of each version of a video object, the transcoding delay among versions, and the average access duration of each version. Based on the profit function, cache replacement algorithms are proposed to reduce startup delay and network traffic by caching the video objects with maximum profit.
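The profit idea can be sketched as follows: a cached version also serves every version it can be transcoded to, saving that version's server fetch cost minus the transcoding delay, weighted by popularity. This is our illustrative rendering, not the dissertation's exact function (which also weights by average access duration); all names and numbers are assumptions:

```python
def aggregate_profit(cached_version, popularity, fetch_cost, transcode_delay):
    """Illustrative transcoding-graph profit: sum, over every version
    reachable from the cached one, of popularity-weighted savings
    (server fetch cost minus transcoding delay)."""
    profit = 0.0
    for version, delay in transcode_delay[cached_version].items():
        saving = fetch_cost[version] - delay
        if saving > 0:
            profit += popularity[version] * saving
    return profit

def greedy_cache(candidates, capacity):
    """Profit-density cache replacement: keep the (name, profit, size)
    candidates with the highest profit per unit size until full."""
    chosen, used = [], 0
    for name, profit, size in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen
```

Caching one high-quality version can thus be worth more than its own hit rate suggests, because it also absorbs requests for lower-quality versions via transcoding.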
Next, a set of proxy-assisted transmission schemes is proposed to reduce the transmission cost of layered video streaming by integrating proxy caching with reactive transmission schemes, peer-to-peer mesh networks and multicast capability. These schemes allow multiple requests to be serviced by a single transmission and thus significantly reduce the total transmission cost. The optimal proxy prefix cache allocation is also calculated for each transmission scheme, identifying the cached layers and cached length of each video that minimize the aggregate transmission cost. The calculation accounts for the fact that the reduction in transmission cost from caching X layers of a video comes not only from requests for X layers, but also from requests for fewer than X layers.
Finally, we propose a network coding equivalent content distribution (NCECD) scheme to decrease server stress, startup delay and jumping latency in support of the random access operations desirable in peer-to-peer on-demand video streaming. Random access operations are difficult to support efficiently because of the asynchronous interactive behavior of users and the dynamic nature of peers. In NCECD, videos are divided into segments, which are further divided into blocks; these blocks are encoded into independent coded blocks that are distributed to the local storage of different peers. With NCECD, a new client only needs to connect to a sufficient number of parent peers to view the whole video, and rarely needs to find new parents when performing random access operations. Whereas most existing methods must search for parent peers holding the segments of interest, NCECD uses the properties of network coding to cache equivalent content on most peers, so that searches are rarely needed. An analysis of system parameters shows how to achieve reasonable block loss rates for peer-to-peer interactive video-on-demand streaming.
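The decode condition behind "a sufficient number of parent peers" is a rank condition: a segment of k blocks is recoverable once a peer holds k linearly independent coded blocks, regardless of which parents supplied them. A toy sketch over GF(2) (XOR coding with blocks modeled as ints); the thesis's actual construction additionally guarantees independence of the stored combinations:

```python
import random

def encode_block(blocks, rng):
    """One random linear combination over GF(2): XOR a random nonzero
    subset of a segment's blocks. Returns (coefficient vector, payload).
    A toy stand-in for the coded blocks NCECD distributes to peers."""
    coeffs = [rng.randint(0, 1) for _ in blocks]
    if not any(coeffs):
        coeffs[rng.randrange(len(blocks))] = 1  # never emit the zero combination
    payload = 0
    for c, b in zip(coeffs, blocks):
        if c:
            payload ^= b
    return coeffs, payload

def rank_gf2(vectors):
    """Gaussian elimination over GF(2). A peer can decode a segment of
    k blocks exactly when its collected coefficient vectors have rank k."""
    rows = [int("".join(str(bit) for bit in v), 2) for v in vectors]
    rank = 0
    for bit in reversed(range(len(vectors[0]))):
        pivot_idx = next((i for i, r in enumerate(rows) if (r >> bit) & 1), None)
        if pivot_idx is None:
            continue
        pivot = rows.pop(pivot_idx)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return rank
```

Because any full-rank set of combinations decodes the segment, parents are interchangeable, which is why random seeks rarely require searching for new parents.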
Experimental results demonstrate that the proposed schemes achieve significant transmission cost savings, high delay and bandwidth saving ratios, low startup, jumping-search and new-parent-connection delays, and reduced server resource usage. Hence, these schemes can be integrated to build an efficient video streaming platform providing high-performance, high-quality IPTV services to a diversity of clients.
|
355 |
A Study of Consumer's Cognition on Peer-to-Peer Recommendation Appeal and Tie Strength - A Case of Online Group-Buying. Lin, Keng-Kuei, 30 August 2010 (has links)
Online group-buying has recently become one of the most popular online business models. Both the initiator and the participants hope to recruit more consumers in order to aggregate larger orders and thus obtain cheaper prices. Traditionally, consumers invite their friends or families to join a group-buy so as to collect more orders, hoping that the relationship will influence their behavior. As communication and coordination through the Internet become more convenient, recruiting a wider circle of friends via e-mail has become easy and popular.
Furthermore, virtual communities keep growing because their members share the same interests, concerns, and needs. It is quite possible that members with the same needs will initiate a group-buying activity to fulfill them. Since information sharing is a major activity among members of virtual communities, the degree of interaction affects the tie strength between them. If members can send peer-to-peer recommendation e-mail to other members who may be interested in a group-buying transaction, it may improve group-buying performance.
In addition, marketing via e-mail is increasingly common, and different marketing appeals produce different effects: rational appeals focus on the product itself, while emotional appeals aim to change the consumer's feelings.
The purpose of this research is to explore the difference in advertisement attitude between consumers who click the peer-to-peer recommendation e-mail and those who do not. We also examine whether these two groups differ in their perceived tie strength with the e-mail sender.
The results show that the group clicking the recommendation mail has a better advertisement attitude than the group not clicking, and that emotional appeals induce better perceived reliability of the appeal among subjects.
|
356 |
Understanding Churn in Decentralized Peer-to-Peer Networks. Yao, Zhongmei, August 2009 (has links)
This dissertation presents a novel modeling framework for understanding the dynamics of peer-to-peer (P2P) networks under churn (i.e., random user arrival/departure) and for designing systems more resilient against node failure. The proposed models are applicable to general distributed systems under a variety of conditions on graph construction and user lifetimes.
The foundation of this work is a new churn model that describes user arrival and departure as a superposition of many periodic (renewal) processes. It not only allows general (non-exponential) user lifetime distributions, but also captures the heterogeneous behavior of peers. We utilize this model to analyze link dynamics and the ability of the system to stay connected under churn. Our results offer exact computation of user-isolation and graph-partitioning probabilities for any monotone lifetime distribution, including the heavy-tailed cases found in real systems. We also propose an age-proportional random-walk algorithm for creating links in unstructured P2P networks that achieves zero isolation probability as system size becomes infinite. We additionally obtain many insightful results on the transient distribution of in-degree, the edge arrival process, system size, and the lifetimes of live users as simple functions of the aggregate lifetime distribution.
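Why non-exponential lifetimes matter can be illustrated with renewal theory's length-biased sampling: a peer found alive at a random instant belongs to a session chosen proportionally to its length, so heavy-tailed churn makes observed residual lifetimes large. A Monte Carlo sketch with Pareto lifetimes (all parameters illustrative, not the dissertation's derivation):

```python
import random

def pareto_lifetime(alpha, x_min, rng):
    """Heavy-tailed session length with P(L > x) = (x_min / x)^alpha."""
    return x_min / (1.0 - rng.random()) ** (1.0 / alpha)

def sample_residual(lifetimes, rng):
    """Length-biased observation: pick the session covering a uniformly
    random instant of the window (longer sessions are proportionally more
    likely), then a uniform point inside it; the distance to its end is
    the observed residual lifetime."""
    target = rng.random() * sum(lifetimes)
    acc = 0.0
    for life in lifetimes:
        acc += life
        if target <= acc:
            return rng.random() * life
    return lifetimes[-1]  # guard against floating-point round-off

rng = random.Random(42)
lifetimes = [pareto_lifetime(1.5, 1.0, rng) for _ in range(10000)]
residuals = [sample_residual(lifetimes, rng) for _ in range(1000)]
```

Under an exponential model the residual distribution would match the lifetime distribution; under the heavy tail it does not, which is why selecting neighbors proportionally to age (as the proposed random walk does) favors long-lived peers.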
The second half of this work studies churn in structured P2P networks, which are usually built upon distributed hash tables (DHTs). Users in DHTs maintain two types of neighbor sets: routing tables and successor/leaf sets. The former determine link lifetimes and routing performance, while the latter ensure DHT consistency and connectivity. Our first result in this area proves that the robustness of DHTs is mainly determined by the zone size of selected neighbors, which leads us to propose a min-zone algorithm that significantly reduces link churn in DHTs. Our second result uses the Chen-Stein method to understand concurrent failures among the strongly dependent successor sets of many DHTs and finds an optimal stabilization strategy for keeping Chord connected under churn.
|
357 |
Robust and Scalable Sampling Algorithms for Network Measurement. Wang, Xiaoming, August 2009 (has links)
Recent growth of the Internet in both scale and complexity has imposed a number of difficult challenges on existing measurement techniques and approaches, which are essential both for network management and for many ongoing research projects. For any measurement algorithm, achieving both accuracy and scalability is very challenging given hard resource constraints (e.g., bandwidth, delay, physical memory, and CPU speed). My dissertation research tackles this problem by first proposing a novel mechanism called residual sampling, which intentionally introduces a predetermined amount of bias into the measurement process. We show that such biased sampling can be extremely scalable; moreover, we develop residual estimation algorithms that can unbiasedly recover the original information from the sampled data. Utilizing these results, we further develop two versions of the residual sampling mechanism: a continuous version for characterizing the user lifetime distribution in large-scale peer-to-peer networks and a discrete version for monitoring flow statistics (including per-flow counts and the flow size distribution) in high-speed Internet routers. For the former application in P2P networks, this work presents two methods: the ResIDual-based Estimator (RIDE), which takes single-point snapshots of the system and assumes stationary arrivals, and Uniform RIDE (U-RIDE), which takes multiple snapshots and adapts to systems with arbitrary (including non-stationary) arrival processes. For the latter application in traffic monitoring, we introduce Discrete RIDE (D-RIDE), which samples each flow with a geometric random variable. Our numerous simulations and experiments with P2P networks and real Internet traces confirm that these algorithms make accurate estimates of the monitored metrics while meeting hard resource constraints. These results show that residual sampling provides an ideal solution for balancing accuracy and scalability.
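The discrete mechanism can be sketched as geometric sampling with a Horvitz-Thompson-style correction: a flow of size s is detected with probability 1-(1-p)^s, and weighting each detection by the inverse of that probability yields an unbiased count. This is our simplified rendering, not the exact D-RIDE estimator, which must also work without knowing flow sizes in advance:

```python
import random

def residual_sample_flow(size, p, rng):
    """Geometric residual sampling in the spirit of D-RIDE: walk a flow's
    packets and start counting at the first sampled packet, so the skipped
    prefix is geometrically distributed. Returns the residual packet
    count, or None if no packet of the flow is ever sampled."""
    for j in range(size):
        if rng.random() < p:
            return size - j  # packets from the first sample onward
    return None

def estimate_flow_count(flow_sizes, p, rng):
    """Horvitz-Thompson correction: each detected flow contributes the
    inverse of its detection probability, 1 - (1-p)^s, making the
    total-flow estimate unbiased over the sampling randomness."""
    estimate = 0.0
    for s in flow_sizes:
        if residual_sample_flow(s, p, rng) is not None:
            estimate += 1.0 / (1.0 - (1.0 - p) ** s)
    return estimate
```

Scalability comes from the skipped prefix: the counter only engages after the geometric trigger fires, so short flows mostly cost nothing while the correction restores unbiasedness.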
|
358 |
The Evaluation of Inquiry-based Learning with Incentive Mechanisms on Peer-to-Peer Networks. Wu, Shih-neng, 27 July 2004 (has links)
With the rapid development of information technologies, especially Internet technology, people can communicate more flexibly via various media, through which knowledge can also be shared. Learning activities conducted on the Web, whether digital content retrieval or inter-personal interaction, are becoming popular. This research has two main objectives. One is to develop incentive mechanisms that enhance the quantity and quality of information shared through peer-to-peer (P2P) networks. The other is to implement and evaluate the proposed mechanisms for inquiry-based learning on P2P networks.
A pricing-like incentive mechanism is embedded in each peer to determine the prices for sharing a document, issuing a question, and responding to a question. Through experiments, this study evaluates the mechanism's effects on mitigating free-riding and on information exchange through the P2P network. The results show the effectiveness of the incentive mechanisms for inquiry-based learning on P2P networks.
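The pricing-like idea can be sketched as a point economy: sharing a document or answering a question earns points, issuing a question spends them, so free riders eventually cannot ask. All prices and rewards below are illustrative assumptions, not the paper's calibrated values:

```python
class LearningPeer:
    """Toy pricing-like incentive economy for inquiry-based learning:
    contribution earns points, consumption costs points, which throttles
    free riding. Constants are illustrative assumptions."""
    ASK_PRICE = 5
    ANSWER_REWARD = 3
    SHARE_REWARD = 2

    def __init__(self, points=10):
        self.points = points

    def share_document(self):
        self.points += self.SHARE_REWARD

    def ask(self, responder):
        """Pay to issue a question; the responder is rewarded for answering."""
        if self.points < self.ASK_PRICE:
            return False  # budget exhausted: free riding is throttled
        self.points -= self.ASK_PRICE
        responder.points += self.ANSWER_REWARD
        return True
```

A peer that only asks drains its initial budget in a couple of rounds, while a peer that shares and answers accumulates the budget needed to keep asking.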
|
359 |
The Study of Dynamic Team Formation in Peer-to-Peer Networks. Chiang, Chi-hsun, 27 July 2004 (has links)
Most virtual communities are built on client/server systems, which have limitations such as maintenance cost and personal data protection. The peer-to-peer system has strengths that overcome these limitations; we therefore port the virtual community to a peer-to-peer system.
There are two main team formation approaches in current virtual community collaboration, and either approach alone has its limitations. In this study, we adopt the social network concept to design a team formation mechanism that overcomes the limitations of the current approaches. Moreover, because of the nature of peer-to-peer systems, messages are exchanged directly over the network, and the proposed mechanism also reduces the traffic cost of the team formation process while maintaining the fitness of the members chosen for the same team.
|
360 |
Achieving Electronic Healthcare Record (EHR) Interoperability Across Healthcare Information Systems. Kilic, Ozgur, 01 June 2008 (has links) (PDF)
Providing an interoperability infrastructure for Electronic Healthcare Records (EHRs) is on the agenda of many national and regional eHealth initiatives. Two important integration profiles have been specified for this purpose: "IHE Cross-enterprise Document Sharing (XDS)" and "IHE Cross-Community Access (XCA)". XDS describes how to share EHRs in a community of healthcare enterprises, and XCA describes how EHRs are shared across communities.
However, no current solution addresses some of the important challenges of cross-community exchange environments. The first challenge is scalability: if every community joining the network needs to connect to every other community, the solution will not scale. Furthermore, each community may use a different coding vocabulary for the same metadata attribute, in which case the target community cannot interpret a query involving that attribute. Another important challenge is that each community has a different patient identifier domain, and querying for patient identifiers in another community using patient demographic data may raise patient privacy concerns. Yet another challenge in cross-community EHR access is EHR interoperability, since the communities may be using different EHR content standards.
|