11

Caching Techniques For Dynamic Web Servers

Suresha, * 07 1900 (has links)
Websites are shifting from a static model to a dynamic model in order to provide their users with dynamic, interactive, and personalized experiences. However, dynamic content generation comes at a cost: each request requires computation as well as communication across multiple components within the website and across the Internet. In fact, dynamic pages are constructed on the fly, on demand. Due to their construction overheads and non-cacheability, dynamic pages result in substantially increased user response times, server load, and bandwidth consumption, as compared to static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites. A variety of strategies have been proposed to address these issues. Many of these solutions perform well in their individual contexts, but have not been analyzed in an integrated fashion. In our work, we have carried out a study of combining a carefully chosen set of these approaches and analyzed their behavior. Specifically, we consider solutions based on the recently proposed fragment caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page construction overheads by integrating fragment caching with various techniques such as proxy-based caching of dynamic contents, pre-generating pages, and caching program code. We start by presenting a dynamic proxy caching technique that combines the benefits of both proxy-based and server-side caching approaches, without suffering from their individual limitations. This technique concentrates on reducing the bandwidth consumption due to dynamic web pages.
Then, we move on to mechanisms for reducing dynamic page construction times: during normal loading, this is done through a hybrid technique of fragment caching and page pre-generation, utilizing the excess capacity with which web servers are typically provisioned to handle peak loads. During peak loading, it is achieved by integrating fragment caching and code caching, optionally augmented with page pre-generation. In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages, with the goal of achieving reduced bandwidth consumption from the web infrastructure perspective and reduced page construction times from the user's perspective.
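The core idea behind fragment caching, which this abstract builds on, is that a dynamic page is assembled from independently cached pieces, so only stale or personalized fragments are regenerated per request. The sketch below illustrates that idea only; the class and method names are hypothetical, not the thesis's implementation.

```python
import time

class FragmentCache:
    """Toy fragment cache: each fragment carries its own TTL, so a page
    can mix long-lived shared fragments with per-user content."""

    def __init__(self):
        self._store = {}  # key -> (html, expires_at)

    def get_or_render(self, key, ttl, render):
        entry = self._store.get(key)
        now = time.time()
        if entry and entry[1] > now:
            return entry[0]                  # fresh fragment: reuse it
        html = render()                      # stale or missing: regenerate only this piece
        self._store[key] = (html, now + ttl)
        return html

cache = FragmentCache()

def build_page(user):
    # Shared fragments are rendered once and served to every user;
    # only the personalized greeting is built per request.
    header = cache.get_or_render("header", 3600, lambda: "<header>Site</header>")
    news = cache.get_or_render("news", 60, lambda: "<ul><li>latest items</li></ul>")
    greeting = f"<p>Hello, {user}</p>"       # personalized, never cached
    return header + greeting + news
```

Because the shared fragments stay cached across requests, a second request for a different user reuses them and pays only for the personalized part.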
12

Wireless network caching scheme for a cost saving wireless data access

Wang, Jerry Chun-Ping, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
Recent widespread use of computer and wireless communication technologies has increased the demand for data services via wireless channels. However, providing a high data rate in wireless systems is expensive due to many technical and physical limitations. Unlike voice service, data service can tolerate delays and allows burst transfer of information; thus, an alternative approach had to be formulated. This approach is known as the "Infostation." An infostation is an inexpensive, high-speed wireless disseminator that features discontinuous coverage and a high radio transmission rate, using many short-distance, high-bandwidth local wireless stations over a large terrain. As opposed to ubiquitous networks, each infostation provides independent wireless connectivity at a relatively shorter distance compared to a traditional cellular network. However, due to the discontinuous nature of the infostation network, there is no data service available between stations, and clients become completely disconnected from the outside world. During the disconnected period, clients have to access information locally. Thus, the need for a good wireless network caching scheme has arisen. In this dissertation, we explore the use of the infostation model for disseminating and caching data. Our initial approach focuses on large datasets that exhibit hierarchical structure. In order to facilitate information delivery, we exploit the hierarchical nature of the file structure, then propose generic content scheduling and cache management strategies for infostations. We examine the performance of our proposed strategies with the network simulator QualNet. Our simulation results demonstrate an improvement in the rate of successful data access, alleviating excessive waiting overheads during disconnected periods. Moreover, our technique allows infostations to be combined with traditional cellular networks, avoiding data access via the scarce and expensive wireless channel for the purpose of cost reduction.
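The cache management strategy above is described only generically; a common baseline policy for a client caching content between infostation contacts is least-recently-used (LRU) eviction. The sketch below shows that baseline, assuming a fixed-capacity client cache; the thesis's own hierarchy-aware strategies would refine this, and the class here is purely illustrative.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: a baseline policy for keeping the most
    recently accessed objects available during disconnected periods."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None                       # local miss: no data until the next station
        self._items.move_to_end(key)          # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)   # evict the least recently used entry
```

A local miss here is costly in the infostation setting, since the client cannot refetch until it reaches the next station, which is why the choice of eviction policy matters.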
13

Using cooperation to improve the experience of web services consumers

Luo, Yuting 11 September 2009 (has links)
Web Services (WS) are one of the most promising approaches for building loosely coupled systems. However, due to the heterogeneous and dynamic nature of the WS environment, ensuring good QoS is still non-trivial. While WS tend to scale better than tightly coupled systems, they introduce a larger communication overhead and are more susceptible to server/resource latency. Traditionally this problem has been addressed by relying on negotiated Service Level Agreements to ensure the required QoS, or on the development of elaborate compensation handlers to minimize the impact of undesirable latency. This research focuses on the use of cooperation between consumers and providers as an effective means of optimizing resource utilization and consumer experiences. It introduces a novel cooperative approach to implement this cooperation between consumers and providers.
14

Cache-Aware Virtual Page Management

Szlavik, Alexander 19 February 2013 (has links)
With contemporary research focusing its attention primarily on benchmark-driven performance evaluation, the study of fundamental memory characteristics has fallen by the wayside. This thesis presents a systematic study of the expected performance characteristics of contemporary multi-core CPUs. These characteristics are the primary influence on benchmarking variability and need to be quantified if more accurate benchmark results are desired. With the aid of a new, highly customizable micro-benchmark suite, these CPU-specific attributes are evaluated and contrasted. The benchmark tool provides a framework for accurately measuring instruction throughput and integrates hardware performance counters to gain insight into machine-level caching performance. Additionally, the Linux operating system's impact on cache utilization is evaluated. With careful virtual memory management, cache misses may be reduced, significantly contributing to benchmark result stability. Finally, a popular cache performance model, the stack distance profile, is evaluated with respect to contemporary CPU architectures. While particularly popular in multi-core contention-aware scheduling projects, modern incarnations of the model fail to account for trends in CPU cache hardware, leading to measurable degrees of inaccuracy.
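For readers unfamiliar with the model this abstract evaluates: the stack distance of an access is the number of distinct addresses touched since the previous access to the same address, and the histogram of these distances over a trace is the stack distance profile. A minimal (and deliberately unoptimized) way to compute it:

```python
def stack_distance_profile(trace):
    """Histogram of stack (reuse) distances over an access trace.
    A fully associative LRU cache holding C lines captures exactly the
    accesses whose stack distance is less than C; first touches are
    cold misses, recorded as infinite distance."""
    stack = []   # LRU stack, most recently used address at the front
    profile = {}
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)    # depth in the LRU stack = stack distance
            stack.remove(addr)
        else:
            d = float("inf")         # cold miss: never seen before
        stack.insert(0, addr)
        profile[d] = profile.get(d, 0) + 1
    return profile
```

For the trace A B C A B A, the second A has distance 2 (B and C intervened), the second B has distance 2, and the final A has distance 1, so a 2-line LRU cache would capture only that last reuse.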
15

Managing Cache Consistency to Scale Dynamic Web Systems

Wasik, Chris January 2007 (has links)
Data caching is a technique that can be used by web servers to speed up the response time of client requests. Dynamic websites are becoming more popular, but they pose a problem: it is difficult to cache dynamic content, as each user may receive a different version of a webpage. Caching fragments of content in a distributed way solves this problem, but poses a maintainability challenge: cached fragments may depend on other cached fragments, or on underlying information in a database. When the underlying information is updated, care must be taken to ensure the cached information is also invalidated. If new code is added that updates the database, the cache can very easily become inconsistent with the underlying data. The deploy-time dependency analysis method solves this maintainability problem by analyzing web application source code at deploy time and statically writing cache dependency information into the deployed application. This allows for the significant performance gains of distributed object caching, without any of the maintainability problems that such caching creates.
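The kind of dependency tracking the analysis above would emit can be pictured as a mapping from database tables to the cached fragments that depend on them, so that any write to a table invalidates exactly its dependents. This is a toy illustration of that idea with hypothetical names, not the thesis's actual code:

```python
class DependencyCache:
    """Cache where each entry registers the database tables it was built
    from; a write to a table invalidates every dependent entry."""

    def __init__(self):
        self._cache = {}
        self._deps = {}   # table name -> set of cache keys depending on it

    def put(self, key, value, tables):
        self._cache[key] = value
        for t in tables:
            self._deps.setdefault(t, set()).add(key)

    def get(self, key):
        return self._cache.get(key)

    def on_write(self, table):
        # Called by the data layer whenever `table` is updated.
        for key in self._deps.pop(table, set()):
            self._cache.pop(key, None)   # invalidate dependent entries
```

The maintainability win described in the abstract is that the `tables` argument need not be written by hand: deploy-time analysis of the application source can derive it, so newly added database-updating code cannot silently leave the cache inconsistent.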
18

Improving the Caching Performances in Web using Cooperative Proxies

Huang, Li-te 03 February 2009 (has links)
Nowadays, the Web caching technique has been widely used and has become one of the most promising ways to reduce network traffic, server load, and user-experienced latency while users surf the Web. In the context of traditional systems, caching techniques have been extensively studied. However, these techniques are not directly applicable to the Web due to the larger size of the working set and cache storage in proxies. Many studies have presented approaches to improving the performance of Web caching. Two of the most representative approaches are hash routing [25] and directory-based digest [12]. Hash routing provides a mapping from the URL of an object to the location of the proxy that holds the cached object, while a directory-based digest records pairs of proxy locations and object URLs for answering queries when local misses occur at any proxy. Hash routing can best utilize storage space by eliminating duplicated objects among proxies, while a directory-based digest allows object replicas among proxies to resist proxy failures. These two conventional approaches have complementary tradeoffs. In this thesis, a comprehensive approach to cooperative caching for Web proxies, using a combination of hash routing and directory-based digest, is presented. Our approach tends to subsume these widely used approaches and thus gives a spectrum of trade-offs between the overall hit ratio and its associated overhead. Through simulations using real-life proxy traces, the performance and overhead of our proposed mechanism were evaluated. The experimental results showed that our approach outperforms the previous competitors.
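Hash routing as described above works because every cooperating proxy computes the same deterministic URL-to-proxy mapping, so each object lives at exactly one place and no directory lookups are needed. One well-known way to realize it is highest-random-weight (rendezvous) hashing, in the spirit of CARP; the sketch below is illustrative, not the scheme evaluated in the thesis:

```python
import hashlib

def hash_route(url, proxies):
    """Map an object URL deterministically to one proxy: score every proxy
    against the URL and pick the highest score. All proxies agree on the
    winner, and removing a proxy only remaps the objects it owned."""
    def score(proxy):
        digest = hashlib.md5((url + proxy).encode()).hexdigest()
        return int(digest, 16)
    return max(proxies, key=score)
```

A useful property of this scheme, compared with simple modulo hashing, is stability: when a proxy fails, objects mapped to the surviving proxies keep their owners, since the per-proxy scores of the survivors are unchanged.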
