  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Data Prefetching in Thin-Client/Server Computing over Wide Area Network

An, Feng-Wen 28 July 2003 (has links)
The thin-client/server computing model runs applications solely on a server, with client devices connecting to the server through the Internet to carry out their work. The traditional thin-client/server model comprises only a single server and works only within a LAN environment, which severely restricts its applicability. To meet the demand for reasonable response time over a WAN, a modified thin-client/server computing model, MAS TC/S, was proposed. In MAS TC/S, multiple application servers are deployed across the WAN, and each client device can freely connect to whichever application server is closest to it. However, reducing the delay associated with fetching absent files, which are stored on other servers, is a challenging issue in MAS TC/S. We propose employing data prefetching mechanisms to speed up file fetching. We use a suffix tree-like structure to store users' previous file access records and define two temporal relationships between records, followed-by and concurrent-with, to decide the set of files that should be prefetched together. Each file access subsequence is associated with a set of predicted file sets, each carrying a different weight. Given a current file access session, we first find a matching file access subsequence and then choose the predicted set with the highest weight. Based on the chosen predicted set, suitable files are prefetched to the connected server. We compare our method with the All-Kth-Order Markov model and find that it achieves a higher hit ratio across various operating regions.
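The weighted predicted-set idea from this abstract can be sketched as follows. This is a simplified illustration using a fixed-order lookup table (closer in spirit to the All-Kth-Order Markov baseline than to the thesis's suffix tree with followed-by/concurrent relations); all names are invented for the example.

```python
from collections import defaultdict

class PrefetchPredictor:
    """Sketch: map each observed file-access subsequence of length
    `order` to candidate next files, each weighted by its count."""

    def __init__(self, order=2):
        self.order = order
        # subsequence (tuple) -> {next file: weight}
        self.table = defaultdict(lambda: defaultdict(int))

    def train(self, session):
        # Record which file follows each subsequence in a past session.
        for i in range(len(session) - self.order):
            key = tuple(session[i:i + self.order])
            self.table[key][session[i + self.order]] += 1

    def predict(self, recent):
        # Match the tail of the current session, then pick the
        # candidate with the highest weight.
        candidates = self.table.get(tuple(recent[-self.order:]))
        if not candidates:
            return None
        return max(candidates, key=candidates.get)
```

A server-side agent would train this on logged sessions and, on each access, prefetch the predicted file to the client's connected application server.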
2

Evaluation of Instruction Prefetch Methods for Coresonic DSP Processor

Lind, Tobias January 2016 (has links)
With increasing demands on mobile communication transfer rates, the circuits in mobile phones must be designed for higher performance while maintaining low power consumption for longer battery life. One possible way to improve an existing architecture is to implement instruction prefetching. By predicting which instructions will be executed ahead of time, instructions can be prefetched from memory to increase performance, and instructions that will soon be executed again can be stored temporarily to avoid fetching them from memory multiple times. A trace-driven simulator makes it possible to model the existing hardware while running a realistic scenario, and different instruction prefetch methods can be implemented in this simulator to measure how they perform. It is shown that execution time can be reduced by up to 5 percent and the number of memory accesses by up to 25 percent with a simple loop buffer and return stack. Execution time can be reduced even further with more complex methods such as branch target prediction and branch condition prediction.
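The loop-buffer measurement described above can be illustrated with a tiny trace-driven sketch: replay a trace of instruction addresses through a small LRU-managed buffer and count how many memory fetches remain. Buffer size and trace are illustrative, not Coresonic parameters.

```python
from collections import OrderedDict

def simulate_loop_buffer(trace, buffer_size=8):
    """Replay an address trace through a small loop buffer (LRU)
    and return how many fetches still go out to memory."""
    buffer = OrderedDict()
    fetches = 0
    for addr in trace:
        if addr in buffer:
            buffer.move_to_end(addr)   # hit: served from the buffer
        else:
            fetches += 1               # miss: fetch from memory
            buffer[addr] = True
            if len(buffer) > buffer_size:
                buffer.popitem(last=False)  # evict least recently used
    return fetches
```

For a tight four-instruction loop executed ten times, only the first iteration misses, so memory accesses drop from 40 to 4 — the kind of reduction the thesis measures on realistic traces.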
3

Multi-Video Streaming with DASH / Multi-Video Streaming med DASH

Johansson, Markus, Andersson, Sebastian January 2017 (has links)
Multi-video streaming allows the viewer to interact with the stream by choosing and switching between several different camera angles to view the stream from. For this report we implement and evaluate adaptive multi-video streaming with dash.js. With the help of dash.js and self-made additions, multiple parallel video streams which are synchronized in time are implemented to provide a good user experience with smooth switching between streams. These streams are delivered according to dash.js's own HTTP-based Adaptive Streaming algorithms to achieve adaptive streams under varying conditions. In order to optimize the usage of the available bandwidth in terms of video quality in a multi-video environment, we use camera-switching probabilities to adapt the quality and allocated bandwidth of streams. By utilizing the functions of dash.js we create two prefetching policies and analyze their results together with the standard non-prefetch dash.js implementation in a multi-view video environment. Our results present the improvements in terms of stalling with a prefetch implementation, and the possibility of a good policy to further optimize a multi-view video implementation in terms of stalling, quality and bandwidth usage. The compatibility of dash.js with a multi-view video environment is also discussed, and the pros and cons of dash.js in its current state are presented.
4

Proxy-based prefetching and pushing of web resources / Proxy-baserad prefetching och pushing av web resurser

Holm, Jacob January 2016 (has links)
The use of the WWW is more prevalent now than ever. Latency has a significant impact on the WWW: higher latency means longer webpage loading times, so lowering latency lowers loading time. Latency is often caused by data traveling long distances or passing through gateways that add processing delays to forwarded packets. In this thesis we evaluate the latency benefits of different algorithms for prefetching and pushing of web resources from a proxy when the client cache is known. We found that the most beneficial algorithm is a two-sequence data-mining technique. This algorithm is evaluated on a live system, where we improve loading time by approximately 246 ms with only a 27% traffic increase on average. The results were measured by evaluating a large set of clients on Opera Turbo 2, a distributed proxy with knowledge of the client's cache. We also concluded that, by using a more conservative strategy, we can push prefetched resources to the client, reducing client requests by approximately 9.3% without any significant traffic increase between proxy and client.
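A pairwise ("two-sequence") mining prefetcher of the kind the abstract evaluates could be sketched as below: count how often resource B is requested immediately after resource A, and push B when that probability clears a threshold and B is not already in the client's cache. The threshold and resource names are illustrative, not the thesis's tuned values.

```python
from collections import defaultdict

class PairPrefetcher:
    """Sketch: mine (previous, next) request pairs and push likely
    next resources that the client does not already have cached."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.follows = defaultdict(lambda: defaultdict(int))
        self.total = defaultdict(int)

    def record(self, prev, nxt):
        # Observe one request transition in the proxy's traffic.
        self.follows[prev][nxt] += 1
        self.total[prev] += 1

    def to_push(self, current, client_cache):
        # Resources worth pushing after `current` is requested.
        return [nxt for nxt, n in self.follows[current].items()
                if nxt not in client_cache
                and n / self.total[current] >= self.threshold]
```

The threshold is what makes the strategy "conservative": raising it pushes fewer resources, trading latency savings against extra proxy-to-client traffic.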
5

Adaptive and intelligent memory systems / Système mémoire adaptatif intelligent

Sridharan, Aswinkumar 15 December 2016 (has links)
In this thesis, we focus on addressing interference at the shared memory-hierarchy resources, the last-level cache and off-chip memory access, in the context of large-scale multicore systems. The first work focuses on shared last-level caches, where the number of applications sharing the cache can exceed the cache's associativity. To manage caches in such situations, our solution estimates the cache footprint of each application to approximate how well it could utilize the cache. This quantitative estimate of cache utility explicitly allows enforcing different priorities across applications. The second part brings prefetch awareness into cache management: in particular, we observe that prefetched cache blocks exhibit good reuse behavior in the context of larger caches. Our third work addresses interference between on-demand and prefetch requests at the shared off-chip memory access. It is based on two fundamental observations: the fraction of prefetch requests generated, and its correlation with prefetch usefulness and prefetcher-caused interference. Together, these observations lead to a mechanism that controls the flow of prefetch requests between the LLC and off-chip memory.
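The flow-control idea in the third work could be sketched as a quota on in-flight prefetches at the off-chip interface, shrunk when prefetches dominate traffic but are mostly useless. The thresholds and scaling rule below are invented for illustration; the thesis's actual mechanism is not specified here.

```python
def prefetch_quota(prefetch_fraction, prefetch_accuracy, max_inflight=32):
    """Return how many prefetch requests may be in flight to off-chip
    memory. Full quota when prefetches are few or accurate; otherwise
    scale the quota down with the observed accuracy."""
    if prefetch_fraction <= 0.4 or prefetch_accuracy >= 0.5:
        return max_inflight
    # Prefetches are numerous and inaccurate: throttle proportionally.
    return max(1, int(max_inflight * prefetch_accuracy))
```

A memory controller would recompute this quota each epoch from counters tracking the prefetch share of requests and the fraction of prefetched blocks later used on demand.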
