About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

GPU implementace algoritmů irradiance a radiance caching / GPU implementation of the irradiance and radiance caching algorithms

Bulant, Martin January 2015
The objective of this work is to create software implementing two algorithms for global illumination computation: irradiance and radiance caching, implemented in the CUDA framework on a graphics card (GPU). Parallel implementation on the GPU should improve algorithm speed compared to a CPU implementation. The software is written on top of an existing framework for global illumination computation, which allows the work to focus on algorithm implementation only. This work should speed up the testing of new or existing methods for global illumination computation, because the saving and reusing of intermediate results can be applied to other algorithms too.
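As background for the reuse mechanism this abstract alludes to, below is a minimal CPU-side sketch of the classic irradiance-cache query (after Ward et al.), the operation such a thesis would port to CUDA. The record layout, the clamping, and the threshold `alpha` are illustrative assumptions, not the thesis's actual code.

```python
import math

def cache_weight(p, n, rec):
    """Ward-style weight of cached record `rec` at query point p with normal n.

    rec = (position, normal, harmonic mean distance to visible surfaces,
           stored irradiance); this tuple layout is an illustrative choice.
    """
    pos, nrm, r_mean, _ = rec
    dist = math.dist(p, pos)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n, nrm))))
    denom = dist / r_mean + math.sqrt(max(0.0, 1.0 - dot))
    return float("inf") if denom == 0.0 else 1.0 / denom

def lookup(cache, p, n, alpha=0.3):
    """Blend usable cached records; None means 'compute irradiance in full'."""
    total_w = total_e = 0.0
    for rec in cache:
        w = cache_weight(p, n, rec)
        if w > 1.0 / alpha:          # close enough in position and orientation
            total_w += w
            total_e += w * rec[3]
    return total_e / total_w if total_w > 0.0 else None
```

When `lookup` returns None, the renderer computes irradiance in full (for example by hemisphere sampling) and inserts a new record; a GPU version evaluates many such lookups in parallel, one per shading point.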
32

Transmission Schemes, Caching Algorithms and P2P Content Distribution with Network Coding for Efficient Video Streaming Services

Kao, Yung-cheng 23 February 2010
For more than a decade, streaming media services, including on-line conferences, distance education and movie broadcasting, have gained much popularity on the Internet. Due to the high bandwidth requirements and long-lived nature of video streaming, supporting these services incurs a huge transmission cost. In addition, adapting rich multimedia content to various resource-constrained devices presents a challenge: the limited and time-varying network bandwidth complicates content adaptation, and differentiated content delivery may be required to meet diverse client profiles and user preferences. Therefore, to reduce the transmission cost of serving heterogeneous clients, this dissertation proposes several novel schemes, including a transcoding-enabled proxy caching scheme, reactive transmission schemes, and a network-coding P2P content distribution scheme, to support efficient multiple-version and layered video delivery in proxy-attached network environments as well as efficient interactive IPTV service in a peer-to-peer network. First, for multiple-version caching in the transcoding-enabled proxy, we focus on reducing the required server bandwidth and startup delay by caching the optimal versions of each video. A generalized video object profit function is derived from the extended weighted transcoding graph to calculate both the individual cache profit of a given version of a video object and the aggregate profit from caching multiple versions of the same video object. This function takes into account the popularity of each version of a video object, the transcoding delay among versions, and the average access duration of each version. Based on the profit function, cache replacement algorithms are proposed that reduce startup delay and network traffic by caching the video objects with maximum profit. Next, a set of proxy-assisted transmission schemes is proposed to reduce the transmission cost of layered video streaming by integrating proxy caching with reactive transmission schemes, peer-to-peer mesh networks and multicast capability. These schemes allow multiple requests to be serviced by a single transmission, significantly reducing the total transmission cost. The optimal proxy prefix cache allocation is also calculated for each transmission scheme, identifying the cached layers and cached length of each video that minimize the aggregate transmission cost. The allocation accounts for the fact that caching X layers of a video reduces transmission cost not only for requests for X layers, but also for requests for fewer than X layers. Finally, we propose a network coding equivalent content distribution (NCECD) scheme to decrease server stress, startup delay and jumping latency while supporting the random access operations that are desirable for peer-to-peer on-demand video streaming. Random access operations are difficult to support efficiently due to the asynchronous interactive behaviour of users and the dynamic nature of peers. In NCECD, videos are divided into segments, which are further divided into blocks; these blocks are encoded into independent coded blocks that are distributed to the local storage of different peers. With NCECD, a new client only needs to connect to a sufficient number of parent peers to view the whole video, and rarely needs to find new parents when performing random access operations.
Whereas most existing methods must search for parent peers containing the segments of interest, NCECD uses the properties of network coding to cache equivalent content on most peers, so that searches are rarely needed. An analysis of system parameters shows how to achieve reasonable block loss rates for peer-to-peer interactive video-on-demand streaming. Experimental results demonstrate that the proposed schemes achieve significant transmission cost savings, high delay and bandwidth saving ratios, low startup, jump-search and new-parent connection delays, and reduced demand on server resources. These schemes can therefore be integrated to build an efficient video streaming platform delivering high-performance, high-quality IPTV services to a diversity of clients.
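The NCECD scheme itself is not reproduced here, but the block-coding idea it builds on, random linear network coding, can be sketched compactly. The toy version below works over GF(2), so a coded block is just the XOR of a random subset of source blocks; the function names and the decoder's full-rank assumption are illustrative.

```python
import random

def rlnc_encode(blocks):
    """One coded block: XOR of a random non-empty subset of source blocks.

    Coding is over GF(2): the coefficient vector is a 0/1 list and
    'addition' of payloads is bytewise XOR.
    """
    k, n = len(blocks), len(blocks[0])
    coeffs = [0] * k
    while not any(coeffs):
        coeffs = [random.randint(0, 1) for _ in range(k)]
    payload = bytearray(n)
    for c, blk in zip(coeffs, blocks):
        if c:
            for i, byte in enumerate(blk):
                payload[i] ^= byte
    return coeffs, bytes(payload)

def rlnc_decode(coded, k):
    """Recover the k source blocks from coded (coeffs, payload) pairs.

    Assumes the collected coefficient vectors span GF(2)^k; a real peer
    keeps fetching coded blocks from parents until that holds.
    """
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(k):
        pivot = next(r for r in range(col, len(rows)) if rows[r][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:   # eliminate this column
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(k)]
```

Because any k linearly independent coded blocks suffice to recover a segment, a peer can pull blocks from whichever parents are available instead of searching for one holding a specific segment, which is the property the dissertation exploits.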
33

Design and Evaluation of Enhanced Network Caching Systems to Improve Content Delivery in the Internet / Conception et évaluation de systèmes de caching de réseau pour améliorer la distribution des contenus sur Internet

Araldo, Andrea Giuseppe 07 October 2016
Network caching can help cope with today's Internet traffic explosion and sustain the demand for an increasing user Quality of Experience (QoE). Nonetheless, the techniques proposed in the literature so far do not exploit all the potential benefits: they usually aim to optimize hit ratio or other network-centric metrics, such as path length or latency, while network operators are more interested in practical metrics like cost and quality of experience. We devise caching techniques that directly target these latter objectives and show that doing so yields better performance. More specifically, we first propose novel strategies that reduce the Internet Service Provider (ISP) operational cost by preferentially caching the objects whose retrieval cost is largest; we show that a trade-off exists between classic hit-ratio maximization and cost reduction. We then focus on video delivery, since it is the most QoE-sensitive service and represents most of the Internet traffic. Classic caching techniques ignore its particular characteristics, for example that each video is available in several representations, encoded at different bit-rates and resolutions; we devise techniques that take this into account. Finally, we point out that the techniques presented in the literature assume perfect knowledge of the objects crossing the network. However, most traffic today is encrypted, making such caching techniques inapplicable. To overcome this limit, we propose a mechanism that allows ISPs to cache even though they cannot observe the objects being sent.
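As a concrete illustration of cost-aware replacement, the thesis's first contribution, here is a sketch in the spirit of the classic GreedyDual policy, which keeps objects whose upstream retrieval is expensive. This is a textbook stand-in under illustrative names, not the strategy proposed in the thesis.

```python
class GreedyDualCache:
    """Cost-aware replacement: evict the object with the lowest credit H,
    where H is inflated over time so cheap-to-refetch objects age out first.

    fetch_cost(key) models what the ISP pays to retrieve the object
    upstream; fetch(key) stands in for the actual retrieval.
    """

    def __init__(self, capacity, fetch_cost, fetch=lambda k: f"object-{k}"):
        self.capacity, self.fetch_cost, self.fetch = capacity, fetch_cost, fetch
        self.inflation = 0.0   # rises to the credit of each evicted object
        self.credit = {}       # key -> H value
        self.data = {}

    def get(self, key):
        if key not in self.data:
            if len(self.data) >= self.capacity:
                victim = min(self.credit, key=self.credit.get)
                self.inflation = self.credit[victim]
                del self.credit[victim], self.data[victim]
            self.data[key] = self.fetch(key)
        self.credit[key] = self.inflation + self.fetch_cost(key)  # (re)set H
        return self.data[key]
```

For instance, `GreedyDualCache(1000, fetch_cost=lambda k: 10 if k.startswith("transit/") else 1)` biases the cache toward objects fetched over costly links; the cost function here is a made-up example.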
34

Cooperative caching for object storage

Kaynar Terzioglu, Emine Ugur 29 October 2022
Data is increasingly stored in data lakes, vast immutable object stores that can be accessed from anywhere in the data center. By providing low-cost, scalable storage, immutable object-storage-based data lakes today serve a wide range of applications with diverse access patterns. Unfortunately, performance can suffer for applications that do not match the access patterns for which the data lake was designed. Moreover, in many of today's (non-hyperscale) data centers, limited bisection bandwidth constrains data lake performance. Many compute clusters therefore integrate caches, both to address the mismatch between application performance requirements and the capabilities of the shared data lake, and to reduce the demand on the data center network. However, per-cluster caching (i) means the expensive cache resources cannot be shifted between clusters based on demand, (ii) makes sharing expensive because data accessed by multiple clusters is independently cached by each of them, and (iii) makes it difficult for clusters to grow and shrink if their servers are being used to cache storage. In this dissertation, we present two novel data-center-wide cooperative cache architectures, Datacenter-Data-Delivery Network (D3N) and Directory-Based Datacenter-Data-Delivery Network (D4N), that are designed to be part of the data lake itself rather than part of the compute clusters that use it. D3N and D4N distribute caches across the data center to enable data sharing and elasticity of cache resources, with requests transparently directed to nearby cache nodes. They dynamically adapt to changes in access patterns and accelerate workloads while providing the same consistency, trust, availability, and resilience guarantees as the underlying data lake. We find that exploiting the immutability of object stores significantly reduces complexity and enables cache management strategies that were not feasible in previous cooperative cache systems for file- or block-based storage. D3N is a multi-layer cooperative cache that targets workloads with large read-only datasets, such as big data analytics. It is designed to be easily integrated into existing data lakes, with only limited support for write caching of intermediate data, and it avoids any global state by, for example, using consistent hashing to locate blocks and making all caching decisions based purely on local information. Our prototype is performant enough to fully exploit the (5 GB/s read) SSDs and (40 Gbit/s) NICs in our system and improves the runtime of realistic workloads by up to 3x. The simplicity of D3N has enabled us, in collaboration with industry partners, to upstream the two-layer version of D3N into the existing code base of the Ceph object store as a new experimental feature, making it available to the many Ceph-based data lakes around the world. D4N is a directory-based cooperative cache that provides a reliable write tier and a distributed directory that maintains global state. It explores the use of global state to implement more sophisticated cache management policies and enables application-specific tuning of caching policies, supporting a wider range of applications than D3N. In contrast to previous cache systems that implement their own mechanism for maintaining dirty data redundantly, D4N re-uses the existing data lake (Ceph) software to implement the write tier and exploits the semantics of immutable objects to move aged objects to the shared data lake.
This design greatly reduces the barrier to adoption and enables D4N to take advantage of sophisticated data lake features such as erasure coding. We demonstrate that D4N is performant enough to saturate the bandwidth of its SSDs, automatically adapts replication to the demands of the working set, and outperforms the state-of-the-art cluster cache Alluxio. While it will be substantially more complicated to integrate the D4N prototype into production-quality code that can be adopted by the community, these results are compelling enough that our partners are starting that effort. D3N and D4N demonstrate that cooperative caching techniques, originally designed for file systems, can be employed to integrate caching into today's immutable object-based data lakes. We find that the properties of immutable object storage greatly simplify the adoption of these techniques and enable integration of caching in a fashion that allows re-use of existing, battle-tested software, greatly reducing the barrier to adoption. By integrating caching in the data lake rather than the compute cluster, this research opens the door to efficient data-center-wide sharing of data and resources.
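The abstract notes that D3N avoids global state by using consistent hashing to locate blocks. Below is a minimal sketch of that standard technique, not D3N's actual Ceph code; node names and the virtual-node count are illustrative.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Hash cache nodes and block IDs onto one ring; a block belongs to the
    first node clockwise from its hash, so no global directory is needed."""

    def __init__(self, nodes, vnodes=64):
        # vnodes: virtual points per node, to even out the load
        self.ring = sorted(
            (self._h(f"{node}#{v}"), node) for node in nodes for v in range(vnodes)
        )

    @staticmethod
    def _h(key):
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, block_id):
        i = bisect.bisect(self.ring, (self._h(block_id), ""))
        return self.ring[i % len(self.ring)][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("bucket/object-123:block-7"))   # deterministic owner
```

Adding or removing a cache node remaps only about 1/N of the blocks, which is what lets cache resources grow and shrink without coordination.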
35

Towards Efficient Delivery of Dynamic Web Content

Ramaswamy, Lakshmish Macheeri 26 August 2005
The advantages of cache cooperation in edge cache networks serving dynamic web content were studied. The design of a cooperative edge cache grid, a large-scale cooperative edge cache network for delivering highly dynamic web content with varying server update frequencies, was presented. A cache-clouds-based architecture was proposed to promote low-cost cache cooperation in the cooperative edge cache grid. An Internet-landmarks-based scheme, called the selective landmarks-based server-distance-sensitive clustering scheme, for grouping edge caches into cooperative clouds was presented. A dynamic hashing technique for efficient, load-balanced, and reliable document lookups and updates was presented. A utility-based scheme for cooperative document placement in cache clouds was proposed. The proposed architecture and techniques were evaluated through trace-based simulations using both real-world and synthetic traces. Results showed that the proposed techniques provide significant performance benefits. A framework for automatically detecting cache-effective fragments in dynamic web pages was also presented. Two types of fragments in web pages, namely shared fragments and lifetime-personalization fragments, were identified and formally defined. A hierarchical fragment-aware web page model called the augmented-fragment tree model was proposed. An efficient algorithm to detect maximal fragments that are shared among multiple documents was proposed, and a practical algorithm for detecting fragments based on their lifetime and personalization characteristics was designed. The proposed framework and algorithms were evaluated through experiments on real web sites, and the effect of adopting the detected fragments on web caches and origin servers was experimentally studied.
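To make the shared-fragment idea concrete, a drastically simplified detector might fingerprint candidate fragments and keep those that recur across pages. The real framework operates on an augmented-fragment tree of the page structure; the naive chunking and thresholds below are illustrative assumptions.

```python
from collections import defaultdict

def shared_fragments(pages, min_len=30, min_share=2):
    """Report fragments occurring in at least `min_share` distinct pages;
    such fragments can be cached once and assembled into many responses.

    Here a 'fragment' is naively a blank-line-separated chunk of the page
    body, standing in for a node of the augmented-fragment tree.
    """
    seen = defaultdict(set)                    # fragment text -> page ids
    for page_id, body in pages.items():
        for frag in body.split("\n\n"):
            frag = frag.strip()
            if len(frag) >= min_len:
                seen[frag].add(page_id)
    return {f: ids for f, ids in seen.items() if len(ids) >= min_share}
```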
36

Challenges faced by foraging Eastern grey squirrels, Sciurus carolinensis : competition, pilferage and predation risks

Jayne, Kimberley January 2014
This thesis examines how Eastern grey squirrels, Sciurus carolinensis, modify their foraging and hoarding behaviour in relation to different risks, particularly those which involve a trade-off between securing food resources and avoiding a negative outcome with a competitor. While foraging for food to eat and hoard, squirrels must compete with conspecifics and heterospecifics for access to resources, and they must ensure the safety of their food hoards from onlookers or opportunistic pilferers. While engaging in these behaviours in the most efficient way, they must also avoid being predated upon. Five studies were conducted to further understanding of grey squirrel foraging, hoarding and pilferage behaviours, and of how these are affected by different risk factors. The data in this thesis provide experimental evidence that grey squirrels respond directly to conspecific presence as a cue of pilferage risk and adjust their behaviour in ways that may help to reduce cache theft. The data also support the view that conspecific and heterospecific competitors pose risks to foraging and caching, with squirrels modifying their behaviour in ways that serve to avoid negative competitive interactions. Predation risk was found to be particularly disruptive to foraging behaviour, and it also had a seasonal effect upon pilferage rates of experimenter-made caches. A variety of strategies that squirrels might use to pilfer caches was investigated; however, the data did not provide a clear indication of the pilferage strategy used: squirrels did not seem to use observational spatial memory, nor did they simply pilfer in profitable foraging locations. This thesis raises questions about the mechanisms grey squirrels use to assess pilferage risk and how they engage in pilferage in comparison to other caching species; the studies conducted illustrate different methods that future research could use to investigate food hoarding and pilfering behaviour in wild and captive squirrels.
37

Reliable Writeback for Client-side Flash Caches

Qin, Dai 04 July 2014
Modern data centers are increasingly using shared storage solutions for ease of management. Data is cached on the client side on inexpensive, high-capacity flash devices, helping improve performance and reduce contention on the storage side. Currently, write-through caching is used because it ensures consistency and durability under client failures, but it offers poor performance for write-heavy workloads. In this work, we propose two write-back caching policies, called write-back flush and write-back persist, that provide strong reliability guarantees under two different client failure models. These policies rely on storage applications such as file systems and databases issuing write barriers to persist their data, because these barriers are the only reliable method for storing data durably on storage media. Our evaluation shows that these policies achieve performance close to write-back caching while providing stronger guarantees than vanilla write-through caching.
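A schematic sketch of how the two policies might treat a write barrier, under one plausible reading of the abstract (dirty data buffered until the barrier arrives); the class and method names are illustrative, not the thesis's interfaces.

```python
class WriteBackCache:
    """Client-side flash cache with barrier-aware write-back (schematic).

    `flash` and `storage` are dict-like stand-ins for the local flash
    device and the shared storage array.
    """

    def __init__(self, policy, flash, storage):
        assert policy in ("flush", "persist")
        self.policy, self.flash, self.storage = policy, flash, storage
        self.dirty = {}                  # block -> data, not yet durable

    def write(self, block, data):
        self.dirty[block] = data         # acknowledge fast, defer durability

    def write_barrier(self):
        """Issued by the file system or database: data must now be durable."""
        if self.policy == "flush":
            self.storage.update(self.dirty)   # write-back flush: push to array
        else:
            self.flash.update(self.dirty)     # write-back persist: local flash
        self.dirty.clear()
```

Under "flush", barrier-ordered data is as durable as the shared storage itself; under "persist", it is durable provided the local flash survives the client failure, the weaker of the two failure models.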
38

Variations on the Theme of Caching

Gaspar, Cristian January 2005
This thesis is concerned with caching algorithms. We investigate three variations of the caching problem: web caching in the Torng framework, relative competitiveness, and caching with request reordering.

In the first variation we define different cost models involving page sizes and page costs, and present the cost framework introduced by Torng in [29]. We then analyze the competitive ratio of online deterministic marking algorithms in the BIT cost model combined with the Torng framework, showing that, given some specific restrictions on the set of possible request sequences, any marking algorithm is 2-competitive.

The second variation consists of using the relative competitive ratio on an access graph as a complexity measure. We use the concept of access graphs introduced by Borodin [11] to define our own concept of relative competitive ratio, and demonstrate results on the relative competitiveness of two cache eviction policies in both the basic and the Torng framework combined with the CLASSICAL cost model.

The third variation is caching with request reordering. Two reordering models are defined. We prove some important results about the value of a move and the number of orderings, then demonstrate results about the approximation factor and competitive ratio of offline and online reordering schemes, respectively.
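For readers outside the area, the marking algorithms analyzed in the first variation follow a simple phase scheme. Here is a minimal sketch in the plain unit-cost paging model, not the BIT/Torng setting the thesis studies; the random eviction choice is one arbitrary instantiation.

```python
import random

def marking_faults(requests, k):
    """Fault count of a generic marking algorithm with cache size k.

    Pages are unmarked at the start of a phase and marked when requested;
    on a fault, some unmarked page is evicted (arbitrarily: random here).
    When every cached page is marked, a new phase begins.
    """
    cache, marked, faults = set(), set(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) >= k:
                if not cache - marked:        # all marked: start a new phase
                    marked.clear()
                cache.remove(random.choice(list(cache - marked)))
        cache.add(p)
        marked.add(p)
    return faults

print(marking_faults(list("abcadbeabc"), k=3))
```

LRU is one member of this family (it always evicts the least-recently-used page, which is necessarily unmarked), which is why bounds proved for arbitrary marking algorithms apply broadly.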
39

Search Engine Optimisation Using Past Queries

Garcia, Steven, steven.garcia@student.rmit.edu.au January 2008
World Wide Web search engines process millions of queries per day from users all over the world. Efficient query evaluation is achieved through the use of an inverted index, where, for each word in the collection, the index maintains a list of the documents in which the word occurs. Query processing may also require access to document-specific statistics, such as document length; word statistics, such as the number of unique documents in which a word occurs; and collection-specific statistics, such as the number of documents in the collection. The index maintains individual data structures for each of these sources of information, and repeatedly accesses each to process a query. A by-product of a web search engine is a list of all queries entered into the engine: a query log. Analyses of query logs have shown repetition of query terms in the requests made to the search system. In this work we explore techniques that take advantage of the repetition of user queries to improve the accuracy or efficiency of text search. We introduce an index organisation scheme that favours those documents that are most frequently requested by users and show that, in combination with early-termination heuristics, query processing time can be dramatically reduced without reducing the accuracy of the search results. We examine the stability of such an ordering and show that an index based on as little as 100,000 training queries can support at least 20 million requests. We show the correlation between frequently accessed documents and relevance, and attempt to exploit the demonstrated relationship to improve search effectiveness. Finally, we deconstruct the search process to show that query-time redundancy can be exploited at various levels of the search process. We develop a model that illustrates the improvements that can be achieved in query processing time by caching different components of a search system, and validate it by simulation using a document collection and query log. Results on our test data show that a well-designed cache can reduce disk activity by more than 30%, with a cache that is one tenth the size of the collection.
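As one concrete instance of the component caching the final paragraph models, a result-level cache keyed on the normalized query string could look like the sketch below; the capacity and normalization are illustrative choices, not the thesis's configuration.

```python
from collections import OrderedDict

class QueryResultCache:
    """LRU cache over whole result lists, exploiting query-log repetition.

    `search` is the expensive evaluation against the inverted index.
    """

    def __init__(self, search, capacity=10_000):
        self.search, self.capacity = search, capacity
        self.cache = OrderedDict()

    def query(self, q):
        key = " ".join(q.lower().split())      # light query normalization
        if key in self.cache:
            self.cache.move_to_end(key)        # hit: refresh recency, no I/O
            return self.cache[key]
        results = self.search(key)             # miss: full index traversal
        self.cache[key] = results
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return results
```

Caches of postings lists and documents sit at lower levels of the search process and can be layered beneath this in the same fashion.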
40

Coherent Shared Memories for FPGAs

Woods, David 17 February 2010
To build a shared-memory programming model for FPGAs, a fast and highly parallel method of accessing the shared memory is required. This thesis presents a first look at how to implement a coherent caching system in an FPGA. The coherent caching system consists of multiple distributed caches that implement the write-once coherence protocol, allowing efficient access to system memory while simplifying the user programming model. Several test applications are used to verify functionality and assess the performance of the current system. Results show that with a processor-based system, some applications could benefit from improvements to the coherence system, but for many applications the current system is sufficient. However, the current coherent caching system is not sufficient for most hardware-core-based systems, because their faster memory accesses quickly saturate shared system resources. Moreover, the performance of distributed-memory systems currently surpasses that of the coherent caching system. Performance results are promising, and given the potential for improvements, future work on this system is warranted.
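The write-once protocol the thesis implements is a classic snooping scheme (due to Goodman): the first write to a cached line is written through to memory, and later writes stay local. Below is a simplified behavioural sketch of one line's state machine in software, with an illustrative stub for the shared bus; it omits details such as write misses and write-back on replacement.

```python
INVALID, VALID, RESERVED, DIRTY = "I", "V", "R", "D"

class Bus:
    """Stand-in for the FPGA's shared interconnect: just logs transactions."""
    def fetch(self, line): print("bus: fetch line from memory or dirty owner")
    def write_through(self, line): print("bus: write word through to memory")
    def invalidate_others(self, line): print("bus: invalidate other copies")
    def supply_data(self, line): print("bus: dirty owner supplies the line")

class WriteOnceLine:
    """One cache line under the write-once protocol (simplified).

    The first write after a fill goes through to memory (V -> R), keeping
    memory current; subsequent writes stay local (R/D -> D), so hot lines
    get write-back behaviour without extra bus traffic.
    """

    def __init__(self, bus):
        self.state, self.bus = INVALID, bus

    def read(self):
        if self.state == INVALID:
            self.bus.fetch(self)
            self.state = VALID

    def write(self):
        if self.state in (INVALID, VALID):
            self.bus.write_through(self)       # the one 'write-once'
            self.bus.invalidate_others(self)
            self.state = RESERVED
        else:                                  # RESERVED or DIRTY
            self.state = DIRTY                 # purely local from now on

    def snoop_read(self):
        """Another cache reads this line on the bus."""
        if self.state == DIRTY:
            self.bus.supply_data(self)         # memory is stale; we own it
        if self.state in (RESERVED, DIRTY):
            self.state = VALID                 # demote to shared

    def snoop_invalidate(self):
        self.state = INVALID
```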
