1 |
Optimized Nested Complex Event Processing Using Continuous Caching. Ray, Medhabi. 12 October 2011 (has links)
"Complex Event Processing (CEP) has become increasingly important for tracking and monitoring anomalies and trends in event streams emitted from business processes such as supply chain management to online stores in e-commerce. These monitoring applications submit complex event queries to track sequences of events that match a given pattern. While the state-of-the-art CEP systems mostly focus on the execution of flat sequence queries, we instead support the execution of nested CEP queries specified by the (NEsted Event Language) NEEL. However the iterative execution often results in the repeated recomputation of similar or even identical results for nested sub- expressions as the window slides over the event stream. This work proposes to optimize NEEL execution performance by caching intermediate results. In particular a method of applying selective caching of intermediate results called Continuous Sliding Caching technique has been designed. Then a further optimization of the previous technique which we call the Semantic Caching and the Continuous Semantic Caching have been proposed. Techniques for incrementally loading, purging and exploiting the cache content are described. Our experimental study using real- world stock trades evaluates the performance of our proposed caching strategies for different query types."
|
2 |
ADAPTIVE PROFILE DRIVEN DATA CACHING AND PREFETCHING IN MOBILE ENVIRONMENT. Mahmood, Omer. January 2005 (has links)
This thesis describes a new method of calculating data priority by using adaptive mobile user and device profiles which change with user location, time of day, available networks and data access history. The profiles are used for data prefetching, selection of the most suitable wireless network and cache management on the mobile device, in order to optimally utilize the device's storage capacity and available bandwidth. Some of the inherent characteristics of mobile devices that arise from user movements are: non-persistent connections, limited bandwidth and storage capacity, and changes in the device's geographical location and connection (e.g., the connection may switch from GPRS to WLAN to Bluetooth). New research is being carried out to make mobile devices work more efficiently by reducing and/or eliminating these limitations. The focus of this research is to propose, evaluate and test a new user profiling technique which specifically caters to mobile device users who need to access large amounts of data, possibly more than the device can store, during the course of a day or week. This work involves the development of an intelligent user profiling system, along with a mobile device caching system, which first allocates weights (priorities) to the different sets and subsets of the total given data based on the user's location, appointment information, preferences, device capabilities and available networks. The profile then automatically changes the data weights with user movements, the history of cached data access and the characteristics of available networks. The Adaptive User and Device Profiles were designed to handle a broad range of issues associated with: changing network types and conditions; limited storage capacity and document type support of mobile devices; and changes in user data needs due to movements at different times of the day. Many research areas have been addressed through this research, but the primary focus has remained on four core areas: selecting the most suitable wireless network; allocating weights to different datasets and subsets by integrating the user's movements; previously accessed data; and time of day combined with user appointment information and device capabilities.
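A minimal sketch of how such a profile might combine location, schedule, access history, device capacity and network speed into a single priority score is shown below. The fields, factors and constants are illustrative assumptions for the sketch, not the thesis's actual scoring model; re-running the planner as location, appointments, history or network conditions change is what produces the adaptive behaviour described above.

    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        """Illustrative adaptive profile; fields and constants are assumptions
        for this sketch, not the thesis's actual model."""
        location: str
        appointments: dict = field(default_factory=dict)  # location -> datasets needed there
        history: dict = field(default_factory=dict)       # dataset -> past access count

    def dataset_weight(profile, dataset, size_kb, free_kb, network_kbps):
        """Combine location, schedule, access history, device capacity and the
        current network into a single prefetch/cache priority for one dataset."""
        if size_kb > free_kb:
            return 0.0                                     # cannot fit on the device at all
        weight = float(profile.history.get(dataset, 0))    # frequently used data matters more
        if dataset in profile.appointments.get(profile.location, []):
            weight += 5.0                                  # needed for an appointment here
        return weight * min(1.0, network_kbps / 1000.0)    # cheaper to prefetch on fast links

    def plan_prefetch(profile, datasets, free_kb, network_kbps):
        """Greedily prefetch the highest-weight datasets that still fit on the device."""
        ranked = sorted(datasets, key=lambda d: dataset_weight(
            profile, d, datasets[d], free_kb, network_kbps), reverse=True)
        plan, remaining = [], free_kb
        for name in ranked:
            if datasets[name] <= remaining and \
               dataset_weight(profile, name, datasets[name], remaining, network_kbps) > 0:
                plan.append(name)
                remaining -= datasets[name]
        return plan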
|
3 |
Cache architectures to improve IP lookups. Ravinder, Sunil. 11 1900 (has links)
IP address lookup is an important processing function of Internet routers. The challenge lies in finding the longest prefix that matches the packet's destination address. One of the main concerns in IP address lookup is the average lookup time. In previous work, caching was shown to be an effective method of reducing it: information about recent IP lookup results is stored so that future lookups can be answered more quickly. In this thesis, we present two architectures that combine a prefix cache with a dynamic substride cache. The dynamic substride cache stores the longest possible substrides from previous lookups and is used in conjunction with the prefix cache. Successful hits in both caches help reduce the number of worst-case lookups in the low-level memory containing the IP routing table, which is stored as a trie data structure.
From simulations, we show that the two architectures achieve up to a 99.9% global hit rate. Furthermore, we present analytical models to find optimal designs for the two architectures. We also show that the architectures can support incremental updates once appropriate modifications are made to the trie data structure.
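To make the caching idea concrete, here is a minimal sketch of a prefix cache sitting in front of a trie-based longest-prefix-match lookup. It is an illustration under assumed structures (bit-string addresses, a binary trie, a tiny cache with no eviction policy), not the thesis's architecture, and it omits the dynamic substride cache entirely; restricting the cache to leaf prefixes is one simple way to keep a prefix cache consistent with longest-prefix-match semantics.

    class TrieNode:
        __slots__ = ("children", "next_hop")
        def __init__(self):
            self.children = {}       # bit ('0' or '1') -> TrieNode
            self.next_hop = None     # set if a stored prefix ends at this node

    class Router:
        """Binary trie for longest-prefix match, fronted by a small prefix cache."""

        def __init__(self, cache_size=32):
            self.root = TrieNode()
            self.cache = {}          # cached prefix (bit string) -> next hop
            self.cache_size = cache_size

        def insert(self, prefix, next_hop):
            node = self.root
            for bit in prefix:
                node = node.children.setdefault(bit, TrieNode())
            node.next_hop = next_hop
            self.cache.clear()       # conservative: invalidate on any table update

        def lookup(self, addr_bits):
            # Fast path: does any cached prefix cover this address?
            for plen in range(len(addr_bits), 0, -1):
                hit = self.cache.get(addr_bits[:plen])
                if hit is not None:
                    return hit
            # Slow path: full trie walk over the routing table.
            node, best_node, best_prefix = self.root, None, ""
            for i, bit in enumerate(addr_bits):
                node = node.children.get(bit)
                if node is None:
                    break
                if node.next_hop is not None:
                    best_node, best_prefix = node, addr_bits[:i + 1]
            if best_node is None:
                return None
            # Only leaf prefixes are safe to cache: no longer, more specific
            # prefix can override them for other addresses they cover.
            if not best_node.children and len(self.cache) < self.cache_size:
                self.cache[best_prefix] = best_node.next_hop
            return best_node.next_hop

    r = Router()
    r.insert("1101", "A"); r.insert("11", "B")
    r.lookup("11010000")   # -> "A" (trie walk; "1101" is a leaf, so it is cached)
    r.lookup("11011111")   # -> "A" (served from the prefix cache)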
|
4 |
Cache architectures to improve IP lookups. Ravinder, Sunil. Unknown Date
No description available.
|
5 |
ADAPTIVE PROFILE DRIVEN DATA CACHING AND PREFETCHING IN MOBILE ENVIRONMENT. Mahmood, Omer. January 2005 (has links)
This thesis describes a new method of calculating data priority by using adaptive mobile user and device profiles which change with user location, time of day, available networks and data access history. The profiles are used for data prefetching, selection of the most suitable wireless network and cache management on the mobile device, in order to optimally utilize the device's storage capacity and available bandwidth. Some of the inherent characteristics of mobile devices that arise from user movements are: non-persistent connections, limited bandwidth and storage capacity, and changes in the device's geographical location and connection (e.g., the connection may switch from GPRS to WLAN to Bluetooth). New research is being carried out to make mobile devices work more efficiently by reducing and/or eliminating these limitations. The focus of this research is to propose, evaluate and test a new user profiling technique which specifically caters to mobile device users who need to access large amounts of data, possibly more than the device can store, during the course of a day or week. This work involves the development of an intelligent user profiling system, along with a mobile device caching system, which first allocates weights (priorities) to the different sets and subsets of the total given data based on the user's location, appointment information, preferences, device capabilities and available networks. The profile then automatically changes the data weights with user movements, the history of cached data access and the characteristics of available networks. The Adaptive User and Device Profiles were designed to handle a broad range of issues associated with: changing network types and conditions; limited storage capacity and document type support of mobile devices; and changes in user data needs due to movements at different times of the day. Many research areas have been addressed through this research, but the primary focus has remained on four core areas: selecting the most suitable wireless network; allocating weights to different datasets and subsets by integrating the user's movements; previously accessed data; and time of day combined with user appointment information and device capabilities.
|
6 |
Ecological Determinants of Foraging and Caching Behaviour in Sympatric Heteromyid Rodents / Determinants of Foraging and Caching in Heteromyids. Leaver, Lisa. 06 1900 (has links)
A series of studies was carried out in order to ascertain some of the ecological determinants of the foraging and caching behaviour of heteromyid rodents (kangaroo rats, Dipodomys, and pocket mice, Chaetodipus). The results show that heteromyids are sensitive to cues of predation while they are foraging. They put more effort into foraging under the safety of cover and in the dark of the new moon, when the risk of predation from visually hunting predators is low. They also modulate their selectivity in relation to cues of predation risk, requiring a better pay-off (a more valuable food) as risk increases. The kangaroo rats and pocket mice compete for resources, and the pocket mice are at an aggressive disadvantage to the kangaroo rats at primary resource patches. However, the pocket mice compensate at least partially for their loss by engaging in cache pilferage. Finally, a study of the scatter-caching decisions made by kangaroo rats demonstrates that they adaptively modulate cache spacing by placing more valuable seeds into caches that are more widely spaced. This differential spacing decreases the probability that pilferers conducting area-localised search after encountering one cache will be able to locate further caches. The results are discussed in relation to current theory and empirical findings. / Thesis / Doctor of Philosophy (PhD)
|
7 |
Improving Network Performance and Document Dissemination by Enhancing Cache Consistency on the Web Using Proxy and Server Negotiation. Doswell, Felicia. 06 September 2005 (has links)
Use of proxy caches in the World Wide Web is beneficial to the end user, network administrator, and server administrator since it reduces the amount of redundant traffic that circulates through the network. In addition, end users get quicker access to documents that are cached. However, the use of proxies introduces additional issues that need to be addressed. In particular, there is growing concern over how to maintain cache consistency and coherency among cached versions of documents.
The existing consistency protocols used in the Web are proving to be insufficient to meet the growing needs of the Internet population. For example, too many messages sent over the network are due to caches guessing when their copy is inconsistent. One option is to apply the cache coherence strategies already in use for many other distributed systems, such as parallel computers. However, these methods are not satisfactory for the World Wide Web due to its larger size and more diverse access patterns.
Many decisions must be made when exploring World Wide Web coherency, such as whether to provide consistency at the proxy level (client pull) or to allow the server to handle it (server push). What trade-offs are inherent in each of these decisions? The suitability of any method strongly depends upon the conditions of the network (e.g., the document types that are frequently requested or the state of the network load) and the resources available (e.g., disk space and the type of cache available). Version 1.1 of HTTP is the first protocol version to give explicit rules for consistency on the Web. Many proposed algorithms require changes to HTTP/1.1. However, this is not necessary to provide a suitable solution.
One goal of this dissertation is to study the characteristics of document retrieval and modification to determine their effect on proposed consistency mechanisms. A set of effective consistency policies is identified from the investigation. The main objective of this dissertation is to use these findings to design and implement a consistency algorithm that provides improved performance over the current mechanisms proposed in the literature. Ideally, we want an algorithm that provides strong consistency. However, we do not want to further degrade the network or place an undue burden on the server to gain this advantage. We propose a system based on the notion of soft state and on server push. In this system, the proxy has some influence on what state information is maintained at the server (a spatial consideration) as well as on how long to maintain the information (a temporal consideration). We perform a benchmark study of the performance of the new algorithm in comparison with existing proposed algorithms. Our results show that the Synchronous Nodes for Consistency (SINC) framework provides an average of 20% control message savings by limiting how much polling occurs with the current Web cache consistency mechanism, Adaptive Client Polling. In addition, the algorithm shows 30% savings in state-space overhead at the server by limiting the amount of per-proxy and per-document state information required at the server. / Ph. D.
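The soft-state, server-push idea can be sketched generically. The following is an illustration of a lease-style mechanism in the spirit of the approach described, not the SINC protocol itself; message formats, lease renewal and proxy-side revalidation after expiry are omitted, and the class and method names are assumptions for the sketch.

    import time

    class OriginServer:
        """Keeps soft (expiring) per-proxy interest state and pushes invalidations."""

        def __init__(self, lease_seconds=60):
            self.lease_seconds = lease_seconds
            self.interest = {}                 # url -> {proxy: lease expiry time}

        def register(self, proxy, url):
            # The proxy asks the server to remember it for this document (spatial
            # state), but only for a bounded time (temporal state).
            self.interest.setdefault(url, {})[proxy] = time.time() + self.lease_seconds

        def document_changed(self, url):
            now = time.time()
            listeners = self.interest.get(url, {})
            for proxy, expiry in list(listeners.items()):
                if expiry >= now:
                    proxy.invalidate(url)      # push while the lease still holds
                del listeners[proxy]           # soft state: forget either way

    class ProxyCache:
        def __init__(self, server):
            self.server = server
            self.store = {}                    # url -> cached document

        def get(self, url, fetch):
            if url not in self.store:          # miss: fetch and register interest
                self.store[url] = fetch(url)
                self.server.register(self, url)
            return self.store[url]             # hit: no per-request polling needed

        def invalidate(self, url):
            self.store.pop(url, None)          # after lease expiry a real proxy would revalidate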
|
8 |
Fundamentals of Cache Aided Wireless Networks. Sengupta, Avik. 06 December 2016 (has links)
Caching at the network edge has emerged as a viable solution for alleviating the severe capacity crunch in content-centric next-generation 5G wireless networks by leveraging localized content storage and delivery. Caching generally works in two phases, namely (i) a storage phase, where parts of popular content are pre-fetched and stored in caches at the network edge during times of low network load, and (ii) a delivery phase, where content is distributed to users at times of high network load by leveraging the locally stored content. Cache-aided networks therefore have the potential to leverage storage at the network edge to increase bandwidth efficiency. In this dissertation we ask the following question: What are the theoretical and practical guarantees offered by cache-aided networks for reliable content distribution while minimizing transmission rates and increasing network efficiency?
We furnish an answer to this question by identifying fundamental Shannon-type limits for cache-aided systems. To this end, we first consider a cache-aided network where the cache storage phase is assisted by a central server and users can demand multiple files at each transmission interval. To service these demands, we consider two delivery models: (i) centralized content delivery, where demands are serviced by the central server; and (ii) device-to-device-assisted distributed delivery, where demands are satisfied by leveraging the collective content of user caches. For such cache-aided networks, we develop a new technique for characterizing information-theoretic lower bounds on the fundamental storage-rate trade-off. Furthermore, using the new lower bounds, we establish the optimal storage-rate trade-off to within a constant multiplicative gap and show that, for the case of multiple demands per user, treating each set of demands independently is order-optimal. To address the concerns of privacy in multicast content delivery over such cache-aided networks, we introduce the problem of caching with secure delivery. We propose schemes which achieve information-theoretic security in cache-aided networks and show that the achievable rate is within a constant multiplicative factor of the information-theoretic optimal secure rate. We then extend our theoretical analysis to the wireless domain by studying a cloud- and cache-aided wireless network from the perspective of low-latency content distribution. To this end, we define a new performance metric, namely the normalized delivery time, or NDT, which captures the worst-case delivery latency. We propose achievable schemes with an aim to minimize the NDT and derive information-theoretic lower bounds which show that the proposed schemes achieve optimality to within a constant multiplicative factor of 2 for all values of problem parameters. Finally, we consider the problem of caching and content distribution in a multi-small-cell heterogeneous network from a reinforcement learning perspective for the case when the popularity of content is unknown. We propose a novel topology-aware learning-aided collaborative caching algorithm and show that collaboration among multiple small cells for cache-aided content delivery outperforms local caching in most network topologies of practical interest. The results presented in this dissertation show definitively that cache-aided systems appreciably increase network efficiency and are a viable solution for the ever-evolving capacity demands in the wireless communications landscape. / Ph. D. / Caching at the network edge has emerged as a viable solution for alleviating the severe capacity crunch in content-centric next-generation 5G wireless networks by leveraging localized content storage and delivery. Caching generally works in two phases, namely (i) a storage phase, where parts of popular content are pre-fetched and stored in caches at the network edge during times of low network load, and (ii) a delivery phase, where content is distributed to users at times of high network load by leveraging the locally stored content. Cache-aided networks therefore have the potential to leverage storage at the network edge to increase bandwidth efficiency. In this dissertation we study cache-aided systems from an information-theoretic perspective and identify fundamental Shannon-type limits for such systems.
The results presented in this dissertation show definitively that cache-aided systems appreciably increase network efficiency and are a viable solution for the ever-evolving capacity demands in the wireless communications landscape.
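For context only (this is the classical baseline against which storage-rate trade-offs of this kind are usually measured, not one of the dissertation's own bounds): in the centralized coded-caching setting of Maddah-Ali and Niesen, with N files, K users and a per-user cache of M files, the achievable worst-case delivery rate is

    R(M) \;=\; K\left(1 - \frac{M}{N}\right)\cdot\frac{1}{1 + KM/N},
    \qquad M \in \left\{0, \tfrac{N}{K}, \tfrac{2N}{K}, \ldots, N\right\},

where the factor K(1 - M/N) reflects the local caching gain and 1/(1 + KM/N) the global gain obtained from coded multicast transmissions; information-theoretic lower bounds of the kind developed above characterize how far any scheme can improve on rates of this form.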
|
9 |
Caching dynamic data for web applications. Mahdavi, Mehregan, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2006 (has links)
Web portals are a rapidly growing class of applications, providing a single interface for accessing different sources (providers). The results from the providers are typically obtained by each provider querying a database and returning an HTML or XML document. Performance, and in particular fast response time, is one of the critical issues in such applications. User dissatisfaction increases dramatically with response time, leading to abandonment of Web sites, which in turn can mean lost revenue for both the providers and the portal. Caching is one of the key techniques that address the performance of such applications. In this work we focus on improving the performance of portal applications via caching. We discuss the limitations of existing caching solutions in such applications and introduce a caching strategy based on collaboration between the portal and its providers. Providers trace their logs, extract information to identify good candidates for caching, and notify the portal. Caching at the portal is decided based on scores calculated by providers and associated with objects. We evaluate the performance of the collaborative caching strategy using simulation data. We show how providers can trace their logs, calculate cache-worthiness scores for their objects and notify the portal. We also address the issue of heterogeneous scoring policies across providers and introduce mechanisms to regulate caching scores. We also show how the portal and the providers can synchronize their metadata in order to minimize the overhead associated with collaboration for caching.
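A minimal sketch of this collaboration is given below; the scoring formula, normalization step and log format are assumptions for illustration, not the thesis's actual cache-worthiness model or synchronization protocol. Each provider derives a score from its own access and update logs, scores are normalized so that providers with different scoring policies remain comparable, and the portal caches the top-scored objects.

    from collections import Counter

    def cache_worthiness(access_log, update_log, now, horizon=3600.0):
        """Hypothetical provider-side scoring over (url, timestamp) log entries:
        objects read often but updated rarely recently are good cache candidates."""
        reads = Counter(url for url, t in access_log if now - t <= horizon)
        updates = Counter(url for url, t in update_log if now - t <= horizon)
        return {url: n / (1.0 + updates[url]) for url, n in reads.items()}

    def normalize(scores):
        """Regulate heterogeneous provider scoring policies by rescaling to [0, 1]."""
        top = max(scores.values(), default=0.0)
        return {url: (s / top if top else 0.0) for url, s in scores.items()}

    def portal_select(provider_scores, capacity):
        """The portal caches the globally best-scored objects across all providers."""
        merged = [(s, provider, url)
                  for provider, scores in provider_scores.items()
                  for url, s in normalize(scores).items()]
        merged.sort(reverse=True)
        return [(provider, url) for s, provider, url in merged[:capacity]]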
|
10 |
NETWORKING ISSUES IN DEFER CACHE - IMPLEMENTATION AND ANALYSIS. PRABHU, SHALAKA K. January 2003 (has links)
No description available.
|