1

Characterizing Web Response Time

Liu, Binzhang M.S. 07 May 1998 (has links)
It is critical to understand WWW latency in order to design better HTTP protocols. In this study we characterize Web response time and examine the effects of proxy caching, network bandwidth, traffic load, persistent connections for a page, and periodicity. Based on studies with four workloads, we show that at least a quarter of the total elapsed time is spent on establishing TCP connections with HTTP/1.0. The distributions of connection time and elapsed time can be modeled using Pearson, Weibull, or Log-logistic distributions. We also characterize the effect of a user's network bandwidth on response time: the average connection time from a client on a 33.6 Kbps modem is twice that from a client on switched Ethernet. We estimate that the elapsed-time savings from using persistent connections for a page vary from about a quarter to a half. Response times display strong daily and weekly patterns. This study finds that a proxy caching server is sensitive to traffic load. Contrary to the common assumption about Web proxy caching, it also finds that a single stand-alone Squid proxy cache does not always reduce response time for our workloads. Implications of these results for future versions of the HTTP protocol and for Web application design are also discussed. / Master of Science
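To make the connection-cost finding concrete, the following minimal sketch (Python, not taken from the thesis; the host and path are placeholders) shows how the share of elapsed time spent on TCP connection establishment could be measured for an HTTP/1.0-style request, where each object is fetched over a fresh connection:

# A minimal sketch, not the thesis's instrumentation: time the TCP handshake
# separately from the full HTTP/1.0 transfer and report the connection share.
import socket
import time

def fetch_http10(host: str, path: str = "/", port: int = 80):
    """Fetch one object over a fresh connection, timing each phase."""
    t0 = time.perf_counter()
    sock = socket.create_connection((host, port))    # TCP three-way handshake
    t_connected = time.perf_counter()
    try:
        request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        body = b""
        while chunk := sock.recv(4096):               # read until the server closes
            body += chunk
    finally:
        sock.close()
    elapsed = time.perf_counter() - t0
    return t_connected - t0, elapsed, len(body)

if __name__ == "__main__":
    connect, elapsed, size = fetch_http10("example.com")   # placeholder host
    print(f"connect {connect * 1e3:.1f} ms, total {elapsed * 1e3:.1f} ms, "
          f"connection share {connect / elapsed:.0%}, {size} bytes received")

With persistent connections, the handshake cost is paid once per server rather than once per object, which is the source of the quarter-to-half savings the abstract reports.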
2

Transmission Schemes, Caching Algorithms and P2P Content Distribution with Network Coding for Efficient Video Streaming Services

Kao, Yung-cheng 23 February 2010 (has links)
For more than a decade, streaming media services, including on-line conferences, distance education and movie broadcasting, have gained much popularity on the Internet. Because of the high bandwidth requirements and long-lived nature of video streaming, supporting these streaming media services requires a huge transmission cost. In addition, adapting rich multimedia content to various resource-constrained devices presents a challenge. The limited and time-varying network bandwidth complicates the content adaptation tasks, and differentiated content delivery may be required to meet diverse client profiles and user preferences. Therefore, in order to reduce the transmission cost of serving heterogeneous clients, this dissertation proposes several novel schemes, including a transcoding-enabled proxy caching scheme, reactive transmission schemes, and a network-coding P2P content distribution scheme, to support efficient multiple-version and layered video delivery in the proxy-attached network environment and to provide an efficient interactive IPTV service in a peer-to-peer network. Firstly, for multiple-version caching in the transcoding-enabled proxy, we focus on reducing the required server bandwidth and startup delay by caching the optimal versions of each video. A generalized video object profit function is derived from the extended weighted transcoding graph to calculate the cache profit of an individual version of a video object, as well as the aggregate profit from caching multiple versions of the same video object. This function takes into account the popularity of each version of a video object, the transcoding delay among versions, and the average access duration of each version. Based on the profit function, cache replacement algorithms are proposed that reduce startup delay and network traffic by caching the video objects with maximum profit. Next, a set of proxy-assisted transmission schemes is proposed to reduce the transmission cost of layered video streaming by integrating proxy caching with reactive transmission schemes, peer-to-peer mesh networks and multicast capability. These transmission schemes allow multiple requests to be serviced by a single transmission and thus significantly reduce the total required transmission cost. The optimal proxy prefix cache allocation is also calculated for each transmission scheme, identifying which layers and what prefix length of each video to cache so as to minimize the aggregate transmission cost. The calculation accounts for the fact that the reduction in transmission cost obtained by caching X layers of a video comes not only from requests for X layers, but also from requests for fewer than X layers. Finally, we propose a network coding equivalent content distribution (NCECD) scheme to decrease server stress, startup delay and jumping latency in supporting the random access operations that are desirable for peer-to-peer on-demand video streaming. Random access operations are difficult to support efficiently because of the asynchronous interactive behavior of users and the dynamic nature of peers. In NCECD, videos are divided into segments, which are further divided into blocks; these blocks are encoded into independent encoded blocks that are distributed to the local storage of different peers. With NCECD, a new client only needs to connect to a sufficient number of parent peers in order to view the whole video, and rarely needs to find new parents when performing random access operations.
Whereas most existing methods must search for parent peers holding the segments of interest, NCECD uses the properties of network coding to cache equivalent content on most peers, so that such searches are rarely needed. An analysis of the system parameters shows how to achieve reasonable block loss rates for peer-to-peer interactive video-on-demand streaming. Experimental results demonstrate that the proposed schemes achieve significant transmission cost savings, high delay and bandwidth saving ratios, low startup and jump-search delays, low delay when connecting to a new parent peer, and reduced server resource usage. Hence, the proposed schemes can be integrated to build an efficient video streaming platform that provides high-performance, high-quality IPTV services to a diversity of clients.
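As an illustration of the profit-driven replacement idea above, the following minimal sketch (Python) evicts the cached video version with the lowest profit first. The profit formula used here (popularity times startup delay saved, divided by size) and all class and field names are assumptions made for this example; the dissertation's generalized profit function additionally weighs the transcoding delay among versions and the average access duration of each version.

# A sketch of profit-driven cache replacement, not the dissertation's algorithm.
from dataclasses import dataclass

@dataclass
class CachedVersion:
    video_id: str
    version: str           # e.g. "1080p", "480p"
    size_mb: float
    popularity: float      # request rate observed for this version
    delay_saved_s: float   # startup delay avoided by serving it from the cache

    @property
    def profit(self) -> float:
        # Assumed formula: benefit per unit of cache space consumed.
        return self.popularity * self.delay_saved_s / self.size_mb

class ProfitCache:
    def __init__(self, capacity_mb: float):
        self.capacity_mb = capacity_mb
        self.used_mb = 0.0
        self.entries: dict[tuple[str, str], CachedVersion] = {}

    def admit(self, item: CachedVersion) -> bool:
        """Cache `item`, evicting the lowest-profit versions while that pays off."""
        while self.used_mb + item.size_mb > self.capacity_mb:
            victim = min(self.entries.values(), key=lambda e: e.profit, default=None)
            if victim is None or victim.profit >= item.profit:
                return False          # not worth displacing more profitable versions
            self._evict(victim)
        self.entries[(item.video_id, item.version)] = item
        self.used_mb += item.size_mb
        return True

    def _evict(self, victim: CachedVersion) -> None:
        del self.entries[(victim.video_id, victim.version)]
        self.used_mb -= victim.size_mb

A real transcoding-enabled proxy would also credit a cached higher-quality version with the requests it can serve by transcoding down to lower versions, which is what the aggregate profit in the abstract captures.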
3

A Peer To Peer Web Proxy Cache For Enterprise Networks

Ravindranath, C K 06 1900 (has links)
In this thesis, we propose a decentralized peer-to-peer (P2P) Web proxy cache for enterprise networks (ENs). Currently, enterprises use a centralized proxy-based Web cache, where a dedicated proxy server does the caching. A dedicated proxy Web cache has to be over-provisioned to handle peak loads; it is expensive, a single point of failure, and a bottleneck. In a P2P Web cache, the clients themselves cooperate in caching the Web objects without any dedicated proxy cache. The resources of the client machines are pooled together to form a Web cache. This eliminates the need for extra hardware and the single point of failure, and improves the average response time, since all the machines serve the request queue. The most important attraction of the P2P scheme is its inherent scalability. Squirrel was the earliest P2P Web cache. Squirrel is built upon a structured P2P protocol called Pastry. Pastry is based on consistent hashing, a hashing scheme that performs well in the presence of client membership changes. Consistent-hashing-based protocols are designed for Internet-wide environments, to handle very large membership sizes and high rates of membership change. To minimize the protocol bandwidth, the membership state maintained at each peer is very small: it consists of information about the peer's immediate neighbours and about a few other P2P members, to achieve faster lookup. This scheme has the following drawbacks: (i) since peers do not maintain information about all the other peers in the system, a peer needing an object has to find the peer responsible for that object through a multi-hop lookup, which increases the latency, and (ii) the number of objIds assigned to a peer depends on the hashing used and can be skewed, which affects the load distribution. The popular applications of the P2P paradigm have been file-sharing systems. These systems are deployed across the Internet, so the existing P2P protocols were designed to operate within the constraints of Internet environments. The P2P Web proxy cache is a more recent application of the P2P paradigm; it operates across the entire network of an enterprise. An enterprise network (EN) comprises all the computing and communications capabilities of an institution. Institutions typically consist of many departments, with each department having and managing its own local area network (LAN). The available bandwidth in LANs is very high, and LANs have low latency and low error rates. EN environments have smaller membership sizes, less frequent membership changes and more available bandwidth. Hence, in such environments, the P2P protocol can afford to store more membership information. This thesis explores the significant differences between EN and Internet environments. It proposes a new P2P protocol designed to exploit these differences, and a P2P Web proxy caching scheme based on this new protocol. Specifically, it shows that it is possible to maintain complete and consistent membership information on ENs. The thesis then presents a load distribution policy for a P2P system with complete and consistent membership information that achieves (i) load balance and (ii) minimum object migrations after each node join or node leave event. The proposed system incurs extra storage and bandwidth costs; we show that the necessary storage is available on general workstations and that the required bandwidth is feasible in modern networks.
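The lookup contrast discussed above can be sketched as follows; this is a hypothetical illustration in Python, not the thesis's buddy-based protocol, and the hash function, peer names and URL are placeholders. With consistent hashing, an object belongs to the first peer clockwise of its hash on a ring, and a peer that keeps only partial membership state may need several routing hops to reach that owner; with a complete and consistent membership view, every peer computes the owner locally and contacts it in one hop.

# Hypothetical sketch: object-to-peer placement under consistent hashing versus
# direct placement when every peer holds the complete membership list.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Owner = first peer clockwise of the object's hash. Ring arcs can be
    uneven, so the number of objects assigned to a peer can be skewed."""
    def __init__(self, peers: list[str]):
        self.ring = sorted((h(p), p) for p in peers)
        self.keys = [k for k, _ in self.ring]

    def owner(self, url: str) -> str:
        idx = bisect.bisect_right(self.keys, h(url)) % len(self.ring)
        return self.ring[idx][1]

class CompleteMembership:
    """Every peer knows the full, consistent peer list, so the owner of any
    object is computed locally (single-hop lookup) and each peer is
    responsible for an equal share of the hash space."""
    def __init__(self, peers: list[str]):
        self.peers = sorted(peers)

    def owner(self, url: str) -> str:
        return self.peers[h(url) % len(self.peers)]

peers = [f"pc{i:02d}.cs.example.edu" for i in range(16)]      # placeholder peers
url = "http://www.example.com/index.html"                     # placeholder URL
print(ConsistentHashRing(peers).owner(url), CompleteMembership(peers).owner(url))

Note that the simple modulo assignment shown here remaps many objects whenever a peer joins or leaves; the thesis's load distribution policy is designed precisely to avoid that, achieving load balance with minimum object migrations.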
We then evaluated the improvement in performance achieved by the proposed system over existing consistent-hashing-based systems. We have shown that, without investing in any special hardware, the P2P system can match the performance of dedicated proxy caches. We have further shown that the buddy-based P2P scheme has a better load distribution, especially under heavy loads, when load balancing becomes critical. We have also shown that, for large P2P systems, the buddy-based scheme has a lower latency than the consistent-hashing-based schemes. Further, we have compared the costs of the proposed scheme and the existing consistent-hashing-based scheme for different loads (i.e., rates of Web object requests), and identified the situations in which the proposed scheme is likely to perform best. In summary, the thesis shows that (i) the membership dynamics of P2P systems on ENs are different from those of Internet file-sharing systems and (ii) it is feasible in ENs to maintain a complete and consistent view of the P2P membership at all peers. We have designed a structured P2P protocol for LANs that maintains a complete and consistent view of membership information at all peers. Using this scheme, P2P Web caches achieve single-hop routing and a better-balanced load distribution; the complete and consistent view of membership information enables single-hop lookup and flexible load assignment.
