1 |
Improving the Caching Performances in Web using Cooperative Proxies. Huang, Li-te, 03 February 2009
Nowadays, Web caching has been widely used and has become one of the most promising ways to reduce network traffic, server load, and user-experienced latency while users surf the Web. Caching techniques have been studied extensively in the context of traditional systems. However, these techniques are not directly applicable to the Web because of the larger working-set size and cache storage in proxies. Many studies have presented approaches to improving the performance of Web caching. Two of the most representative approaches are hash routing [25] and directory-based digests [12]. Hash routing provides a mapping from the URL of an object to the location of the proxy that holds the cached object, while a directory-based digest records pairs of proxy locations and object URLs to answer queries when a local miss occurs at any proxy. Hash routing best utilizes storage space by eliminating duplicated objects among proxies, while directory-based digests allow object replicas among proxies to tolerate proxy failures. These two conventional approaches have complementary trade-offs.
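As a rough illustration of the hash-routing idea described above (not the exact scheme of [25]), the following Python sketch maps an object URL deterministically to one of a fixed set of cooperating proxies; the proxy names and the hash choice are placeholders, not details from the thesis.

import hashlib

# Hypothetical set of cooperating proxies (names are placeholders).
PROXIES = ["proxy-a.example", "proxy-b.example", "proxy-c.example"]

def responsible_proxy(url: str) -> str:
    """Map an object URL to the proxy responsible for caching it.

    A simple digest-modulo scheme; real hash-routing proposals differ in
    detail but share the idea of a deterministic URL-to-proxy mapping,
    which avoids duplicating the same object at several proxies.
    """
    digest = hashlib.sha1(url.encode("utf-8")).digest()
    return PROXIES[int.from_bytes(digest[:4], "big") % len(PROXIES)]

print(responsible_proxy("http://www.example.com/index.html"))

Because every proxy computes the same mapping locally, a miss can be forwarded directly to the responsible peer; a directory-based digest instead answers the same question by looking up advertised (proxy, URL) pairs.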
In this thesis, a comprehensive approach to cooperative caching for Web proxies, combining hash routing and directory-based digests, is presented. Our approach subsumes these widely used schemes and thus offers a spectrum of trade-offs between the overall hit ratio and the associated overhead. The performance and overhead of the proposed mechanism were evaluated through simulations driven by real-life proxy traces, and the experimental results showed that our approach outperforms the previous schemes.
2 |
Caching dynamic data for web applications. Mahdavi, Mehregan, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2006
Web portals are one of the most rapidly growing applications, providing a single interface to access different sources (providers). The results from the providers are typically obtained by each provider querying a database and returning an HTML or XML document. Performance, and in particular fast response time, is one of the critical issues in such applications. User dissatisfaction increases dramatically with response time, leading to abandonment of Web sites, which in turn can mean lost revenue for the providers and the portal. Caching is one of the key techniques for addressing the performance of such applications. In this work we focus on improving the performance of portal applications via caching. We discuss the limitations of existing caching solutions in such applications and introduce a caching strategy based on collaboration between the portal and its providers. Providers trace their logs, extract information to identify good candidates for caching, and notify the portal. Caching at the portal is decided based on scores calculated by providers and associated with objects. We evaluate the performance of the collaborative caching strategy using simulation data. We show how providers can trace their logs, calculate cache-worthiness scores for their objects, and notify the portal. We also address the issue of heterogeneous scoring policies across providers and introduce mechanisms to regulate caching scores. Finally, we show how the portal and providers can synchronize their meta-data in order to minimize the overhead associated with collaboration for caching.
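The abstract does not give the scoring formula, so the following Python sketch is only a plausible illustration of how a provider might derive cache-worthiness scores from its access log, weighting how often an object is requested against how often its underlying data changes; the log format, field names, and weights are assumptions.

from collections import defaultdict

def cache_worthiness(log_entries, alpha=0.7):
    """Score objects from a provider log: popular, rarely-changing
    objects score high and are good caching candidates to report
    to the portal."""
    requests, updates = defaultdict(int), defaultdict(int)
    for obj, event in log_entries:
        if event == "request":
            requests[obj] += 1
        else:
            updates[obj] += 1

    total = sum(requests.values()) or 1
    scores = {}
    for obj in set(requests) | set(updates):
        popularity = requests[obj] / total
        stability = requests[obj] / (requests[obj] + updates[obj] + 1)
        scores[obj] = alpha * popularity + (1 - alpha) * stability
    return scores

log = [("stock_quote", "request"), ("stock_quote", "update"),
       ("weather_map", "request"), ("weather_map", "request")]
print(cache_worthiness(log))

A portal receiving such scores from several providers would still need to regulate them, as the thesis notes, before comparing objects across providers with different scoring policies.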
3 |
Replica placement algorithms for efficient internet content delivery. Xu, Shihong, January 2009
This thesis covers three main issues in content delivery, with a focus on placement algorithms for replica servers and replica contents. In a content delivery system, the location of replicas is very important, as captured by the maxim: closer is better. However, considering the costs incurred by replication, it is a challenge to deploy replicas in a cost-effective manner. The objective of our work is to optimally select the locations of replicas, which include sites for replica server deployment, servers for hosting replica contents, and en-route caches for object caching. Our solutions for the corresponding applications are presented in three parts of the work, which make significant contributions to the design of scalable, reliable, and efficient systems for Internet content delivery. In the first part, we define the Fault-Tolerant Facility Allocation (FTFA) problem for the placement of replica servers, which relaxes the well-known Fault-Tolerant Facility Location (FTFL) problem by allowing an integer (instead of binary) number of facilities per site. We show that the problem is NP-hard even for the metric version, where connection costs satisfy the triangle inequality. We propose two efficient algorithms for the metric FTFA problem with approximation factors 1.81 and 1.61 respectively, where the second algorithm is also shown to be (1.11, 1.78)- and (1, 2)-approximate through the proposed inverse dual fitting technique. The first bi-factor approximation result is further used to achieve a 1.52-approximation algorithm, and the second a 4-approximation algorithm, for the metric Fault-Tolerant k-Facility Allocation problem, where an upper bound k on the number of facilities applies. In the second part, we formulate the problem of QoS-aware content replication for parallel access in terms of maximizing the combined download speed, where each client has a given degree of parallel connections determined by its QoS requirement. The problem is converted into the metric FTFL problem, and we propose an approximation algorithm that is implemented in a distributed manner with asynchronous communication. We show theoretically that the cost of our solution is no more than 2F* + RC*, where F* and C* are two components of any optimal solution and R is the maximum number of parallel connections. Numerical experiments show that the cost of our solutions is comparable (within 4% error) to the optimal solutions. In the third part, we establish a mathematical formulation for the en-route web caching problem in a multi-server network that takes into account all requests (to any server) passing through the intermediate nodes on a request/response path. The problem is to cache the requested object optimally on the path so that the total system gain is maximized. We consider the unconstrained case and two QoS-constrained cases, using efficient dynamic-programming-based methods. Simulation experiments show that our methods either yield a steady performance improvement (in the unconstrained case) or provide the required QoS guarantees. / Thesis (Ph.D.) - University of Adelaide, School of Computer Science, 2009
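The exact gain model of the third part is not reproduced in this abstract, so the following Python sketch only illustrates the flavour of an en-route caching dynamic programme on a single request/response path for a single object: nodes 0..n-1 sit between the clients and the origin server (node n), and we choose where to cache so that retrieval latency plus caching cost is minimised. The rates, latencies, costs, and the single-object restriction are simplifying assumptions, not the thesis's formulation.

def place_en_route_caches(r, d, c):
    """Choose intermediate nodes on a request path to cache one object.

    r[i] = request rate entering the path at node i,
    d[i] = latency of the link from node i to node i+1,
    c[i] = cost of caching the object at node i.
    A request entering at node i is answered by the first node >= i that
    holds a copy (the origin server, node n, always does).
    """
    n = len(r)
    # dist[i][j] = latency from node i to node j along the path (i <= j <= n)
    dist = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            dist[i][j] = dist[i][j - 1] + d[j - 1]

    cost = [0.0] * (n + 1)      # cost[k]: serve requests at nodes >= k, node k cached
    nxt = [n] * (n + 1)         # next cached node after k in the optimal plan
    for k in range(n - 1, -1, -1):
        best, best_m = float("inf"), n
        for m in range(k + 1, n + 1):
            between = sum(r[i] * dist[i][m] for i in range(k + 1, m))
            if c[k] + between + cost[m] < best:
                best, best_m = c[k] + between + cost[m], m
        cost[k], nxt[k] = best, best_m

    # Choose the first cached node on the path (k = n means no en-route cache at all).
    total, first = min(
        (sum(r[i] * dist[i][k] for i in range(k)) + cost[k], k)
        for k in range(n + 1)
    )
    placement, k = [], first
    while k < n:
        placement.append(k)
        k = nxt[k]
    return total, placement

print(place_en_route_caches(r=[5, 1, 8], d=[2, 2, 4], c=[10, 10, 10]))

For this small example the plan caches at nodes 0 and 2, the intuitive answer given the high request rates entering at those nodes.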
4 |
A Caching And Streaming Framework For Multimedia. Paknikar, Shantanu, 12 1900
No description available.
5 |
Nätverksoptimering med öppen källkod : En studie om nätverksoptimering för sjöfarten [Network optimization with open source: A study on network optimization for shipping]. Deshayes, Dan; Sedvallsson, Simon, January 1900
This degree project examines how data traffic over a satellite link can be optimized in order to reduce loading times and the amount of data transferred. The purpose of the study is to investigate to what extent data traffic between ship and shore via a satellite link can be controlled so that the traffic becomes more efficient. By applying DNS caching, web caching, and ad blocking with pfSense as the platform, the project ran experiments against different web sites and measured loading times and the amount of transferred data. The results showed good potential for optimizing the network traffic: the measurements indicated a reduction in transferred data of up to 94% and in loading times of 67%.
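The abstract does not say which tooling was used for the measurements, so the following minimal Python sketch only illustrates one way to collect comparable before/after numbers for a single resource (it does not fetch embedded images or scripts, so it underestimates full page-load time); the test URL is a placeholder.

import time
import urllib.request

def fetch_stats(url: str):
    """Fetch one URL and report elapsed time and bytes transferred."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    return time.monotonic() - start, len(body)

elapsed, size = fetch_stats("http://example.com/")
print(f"{elapsed:.2f} s, {size} bytes")

Running the same measurement with and without the caching and ad-blocking proxy in the path gives the kind of loading-time and data-volume comparison reported above.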
6 |
Cache Design for Massive Heterogeneous Data of Mobile Social Media. Zhang, Ruiyang, January 2014
As social media gains ever-increasing popularity, Online Social Networks (OSNs) have become important repositories for information retrieval. The concept of social search, therefore, is gradually being recognized as the next breakthrough in this field, and it is expected to become a dominant topic in industry. However, retrieving information from OSNs with a high Quality of Experience is non-trivial as a result of the prevalence of mobile applications for social networking services. To shorten user-perceived latency, Web caching was introduced and has been studied extensively for years. Nevertheless, previous works seldom focus on Web caching solutions for social search. In the context of this master's thesis project, emphasis is given to the design of a Web caching system used to cache public data from social media, with the objective of improving the user experience in terms of data freshness and perceived service latency. More specifically, a Web caching strategy named the Staleness-Bounded LRU (SB-LRU) algorithm is proposed to limit the period of validity of cached data. In addition, a Two-Level Web Caching System that adopts the SB-LRU algorithm is proposed in order to shorten the user-perceived latency. Results of trace-driven simulations and performance evaluations demonstrate that serving clients with stale data is avoided and that user-perceived latencies are significantly shortened when the proposed Web caching system is used in the use case of unauthenticated social search. In addition, the design idea in this project is believed to be helpful for the design of a Web caching system for social search that is capable of caching user-specific data for different clients.
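The abstract names the Staleness-Bounded LRU idea without spelling out its mechanics, so the sketch below is only a guess at its core behaviour: an ordinary LRU cache whose entries carry a timestamp and are discarded once they exceed a staleness bound. The class name, parameters, and example keys are invented for illustration.

import time
from collections import OrderedDict

class StalenessBoundedLRU:
    """LRU cache whose entries are treated as misses once they exceed
    a fixed staleness bound, so clients are never served data older
    than that bound."""

    def __init__(self, capacity: int, max_staleness: float):
        self.capacity = capacity
        self.max_staleness = max_staleness
        self._store = OrderedDict()          # key -> (value, insert_time)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, inserted = item
        if time.monotonic() - inserted > self.max_staleness:
            del self._store[key]             # too stale: evict and report a miss
            return None
        self._store.move_to_end(key)         # refresh LRU position
        return value

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = (value, time.monotonic())
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = StalenessBoundedLRU(capacity=1000, max_staleness=30.0)
cache.put("search:coffee shops", {"results": ["..."]})
print(cache.get("search:coffee shops"))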
7 |
Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming [Study, analysis, and development of a content delivery network and its user-redirection algorithm for web and streaming services]. Molina Moreno, Benjamin, 02 September 2013
This thesis was created within the research line on Content Distribution Mechanisms in IP Networks, which has carried out its activity in different research projects and in the course "Mecanismos de Distribución de Contenidos en Redes IP" of the doctoral programme "Telecomunicaciones" taught by the Department of Communications of the UPV and, currently, in the Máster Universitario en Tecnologías, Sistemas y Redes de Comunicación.
The growth of the Internet is widely known, both in the number of clients and in the traffic generated. This makes it possible to bring a multimedia interface to clients, in which data, voice, video, music, etc. can coexist. While this represents a business opportunity along multiple dimensions, the issue of scalability must be addressed seriously: the average performance of a system should not degrade as the number of clients or the volume of requested information grows.
The study and analysis of web and streaming content distribution using CDNs is the object of this project. The approach is taken from a generalist perspective, ignoring network-layer solutions such as IP multicast, as well as resource reservation, since they are not natively available in the Internet infrastructure. This leads to the introduction of the application layer as the coordinating framework for content distribution. Among these networks, also called overlay networks, a Content Delivery Network (CDN) was chosen.
This type of application-level network is highly scalable and allows full control over the resources and functionality of all the elements of its architecture. This makes it possible to evaluate the performance of a CDN that distributes multimedia content in terms of: required bandwidth, response time obtained by clients, perceived quality, distribution mechanisms, time-to-live when caching is used, etc.
CDNs were born at the end of the nineties and had as their main objective the elimination or mitigation of the so-called flash-crowd effect, caused by a massive influx of clients. Currently, this type of network is directing most of its efforts toward the ability to offer streaming media over the Internet.
For a thorough analysis, this thesis proposes an initial, simplified CDN model, both at the theoretical and the practical level. On the theoretical side, a mathematical model is presented that allows a CDN to be evaluated analytically. This model becomes considerably more complex as new functionalities are introduced, so a simulation model is proposed and developed that makes it possible, on the one hand, to check the validity of the mathematical framework and, on the other, to establish a comparative framework for the practical implementation of the CDN, a task carried out in the final phase of the thesis. In this way, the results obtained cover theory, simulation, and practice. / Molina Moreno, B. (2013). Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31637
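The redirection algorithm itself is not described in this abstract, so, purely as an illustration of the kind of decision a CDN request-routing component makes, the following Python sketch picks a surrogate for each request by trading off network proximity against current surrogate load; the surrogate names, metrics, and weighting are assumptions, not the algorithm developed in the thesis.

from dataclasses import dataclass

@dataclass
class Surrogate:
    name: str
    rtt_ms: float    # estimated network distance to the client
    load: float      # current utilisation in [0, 1]

def pick_surrogate(surrogates, alpha=0.5):
    """Return the surrogate with the best proximity/load trade-off
    (lower combined score is better)."""
    max_rtt = max(s.rtt_ms for s in surrogates) or 1.0
    return min(surrogates,
               key=lambda s: alpha * (s.rtt_ms / max_rtt) + (1 - alpha) * s.load)

surrogates = [Surrogate("edge-madrid", 12.0, 0.80),
              Surrogate("edge-paris", 35.0, 0.20),
              Surrogate("origin", 90.0, 0.10)]
print(pick_surrogate(surrogates).name)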
8 |
Deterministic Object Management in Large Distributed Systems. Mikhailov, Mikhail, 05 March 2003
Caching is a widely used technique to improve the scalability of distributed systems. A central issue with caching is maintaining object replicas consistent with their master copies. Large distributed systems, such as the Web, typically deploy heuristic-based consistency mechanisms, which increase delay and place extra load on the servers, while not providing guarantees that cached copies served to clients are up-to-date. Server-driven invalidation has been proposed as an approach to strong cache consistency, but it requires servers to keep track of which objects are cached by which clients.
We propose an alternative approach to strong cache consistency, called MONARCH, which does not require servers to maintain per-client state. Our approach builds on a few key observations. Large and popular sites, which attract the majority of the traffic, construct their pages from distinct components with various characteristics. Components may have different content types, change characteristics, and semantics. These components are merged together to produce a monolithic page, and the information about their uniqueness is lost. In our view, pages should serve as containers holding distinct objects with heterogeneous type and change characteristics while preserving the boundaries between these objects. Servers compile object characteristics and information about relationships between containers and embedded objects into explicit object management commands. Servers piggyback these commands onto existing request/response traffic so that client caches can use these commands to make object management decisions.
The use of explicit content control commands is a deterministic, rather than heuristic, object management mechanism that gives content providers more control over their content. The deterministic object management with strong cache consistency offered by MONARCH allows content providers to make more of their content cacheable. Furthermore, MONARCH enables content providers to expose internal structure of their pages to clients.
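The abstract describes the mechanism (explicit object-management commands compiled by the server and piggybacked onto responses) without giving the command vocabulary, so the following Python sketch is only a hypothetical rendering of the idea; the command names, URLs, and data layout are invented and are not MONARCH's actual encoding.

# The server describes each object embedded in a container page explicitly,
# and the client cache follows those commands instead of guessing with
# freshness heuristics.
CONTAINER_RESPONSE = {
    "container": "/news/front-page",
    "objects": {
        "/img/logo.png":      {"command": "cache-until-invalidated"},
        "/css/site.css":      {"command": "cache-until-invalidated"},
        "/api/breaking.json": {"command": "revalidate-with-container"},
        "/ads/slot1.html":    {"command": "never-cache"},
    },
}

class CommandDrivenCache:
    """Client cache that obeys explicit per-object commands."""

    def __init__(self):
        self.store = {}      # url -> body
        self.policy = {}     # url -> command string

    def apply_commands(self, response):
        # Record the server's instructions for every embedded object.
        for url, meta in response["objects"].items():
            self.policy[url] = meta["command"]

    def admit(self, url, body):
        if self.policy.get(url) != "never-cache":
            self.store[url] = body

    def needs_revalidation(self, url):
        # Deterministic answer derived from the server's command.
        return self.policy.get(url) == "revalidate-with-container"

cache = CommandDrivenCache()
cache.apply_commands(CONTAINER_RESPONSE)
cache.admit("/img/logo.png", b"...png bytes...")
print(cache.needs_revalidation("/api/breaking.json"))   # True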
We evaluated MONARCH using simulations with content collected from real Web sites. The results show that MONARCH provides strong cache consistency for all objects, even for unpredictably changing ones, and incurs smaller byte and message overhead than heuristic policies. The results also show that as the request arrival rate or the number of clients increases, the amount of server state maintained by MONARCH remains the same while the amount of server state incurred by server invalidation mechanisms grows.
9 |
A Peer To Peer Web Proxy Cache For Enterprise Networks. Ravindranath, C K, 06 1900
In this thesis, we propose a decentralized peer-to-peer (P2P) Web proxy cache for enterprise networks (ENs). Currently, enterprises use a centralized proxy-based Web cache, where a dedicated proxy server does the caching. A dedicated proxy Web cache has to be over-provisioned to handle peak loads. It is expensive, a single point of failure, and a bottleneck. In a P2P Web cache, the clients themselves cooperate in caching the Web objects without any dedicated proxy cache. The resources from the client machines are pooled together to form a Web cache. This eliminates the need for extra hardware and the single point of failure, and improves the average response time, since all the machines serve the request queue. The most important attraction of the P2P scheme is its inherent scalability.
Squirrel was the earliest P2P Web cache. Squirrel is built upon a structured P2P protocol called Pastry. Pastry is based on consistent hashing, a special kind of hashing that performs well in the presence of client membership changes. Consistent-hashing-based protocols are designed for Internet-wide environments to handle very large membership sizes and high rates of membership change. To minimize the protocol bandwidth, the membership state maintained at each peer is very small. This state consists of information about the peer's immediate neighbours, and about a few other P2P members, to achieve faster lookups.
This scheme has the following drawbacks: (i) since peers do not maintain information about all the other peers in the system, any peer needing an object has to find the peer responsible for the object through a multi-hop lookup, thereby increasing the latency, and (ii) the number of objIds assigned to a peer depends on the hashing used, and this can be skewed, which affects the load distribution.
The popular applications of the P2P paradigm have been file-sharing systems. These systems are deployed across the Internet. Hence, the existing P2P protocols were designed to operate within the constraints of Internet environments. The P2P proxy Web cache is a more recent application of the P2P paradigm. P2P Web proxy caches operate across the entire network of an enterprise. An enterprise network (EN) comprises all the computing and communications capabilities of an institution. Institutions typically consist of many departments, with each department having and managing its own local area network (LAN). The available bandwidth in LANs is very high. LANs have low latency and low error rates. EN environments have smaller membership sizes, less frequent membership changes, and more available bandwidth. Hence, in such environments, the P2P protocol can afford to store more membership information.
This thesis explores the significant differences between EN and Internet environments. It proposes a new P2P protocol designed to exploit these differences, and a P2P Web proxy caching scheme based on this new protocol. Specifically, it shows that it is possible to maintain complete and consistent membership information in ENs. The thesis then presents a load distribution policy for a P2P system with complete and consistent membership information that achieves (i) load balance and (ii) minimum object migrations subsequent to each node join or node leave event.
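As an illustration of why a complete and consistent membership view enables single-hop lookups (this is not the thesis's buddy-based policy), the following Python sketch gives every peer the full member list, so any object can be resolved to its owner locally; the simple digest-modulo assignment balances load but, unlike the policy proposed in the thesis, makes no attempt to minimise object migrations on joins and leaves.

import hashlib

class MembershipDirectory:
    """Complete, consistent membership view replicated at every peer."""

    def __init__(self, members):
        self.members = sorted(members)       # identical ordering at every peer

    def owner(self, object_url: str) -> str:
        # Resolved locally in one step: no multi-hop routing needed.
        digest = hashlib.sha1(object_url.encode("utf-8")).digest()
        return self.members[int.from_bytes(digest[:4], "big") % len(self.members)]

    def join(self, member: str):
        self.members = sorted(self.members + [member])

    def leave(self, member: str):
        self.members = sorted(m for m in self.members if m != member)

directory = MembershipDirectory(["pc-101", "pc-102", "pc-103"])
print(directory.owner("http://intranet/reports/q3.pdf"))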
The proposed system incurs extra storage and bandwidth costs. We have seen that the necessary storage is available on general workstations and that the required bandwidth is feasible in modern networks. We then evaluated the improvement in performance achieved by the system over existing consistent-hashing-based systems. We have shown that, without investing in any special hardware, the P2P system can match the performance of dedicated proxy caches. We have further shown that the buddy-based P2P scheme has a better load distribution, especially under heavy loads, when load balancing becomes critical. We have also shown that for large P2P systems, the buddy-based scheme has lower latency than the consistent-hashing-based schemes. Further, we have compared the costs of the proposed scheme and the existing consistent-hashing-based scheme for different loads (i.e., rates of Web object requests), and identified the situations in which the proposed scheme is likely to perform best.
In summary, the thesis shows that (i) the membership dynamics of P2P systems on ENs are different from those of Internet file-sharing systems and (ii) it is feasible, in ENs, to maintain a complete and consistent view of the P2P membership at all the peers. We have designed a structured P2P protocol for LANs that maintains a complete and consistent view of membership information at all peers. Using this scheme, P2P Web caches achieve single-hop routing and a better-balanced load distribution. The complete and consistent view of membership information enables single-hop lookups and flexible load assignment.