11 |
Förebyggande cachning [Preventive caching]. Bobeck, Samuel; Hallqvist, Daniel. January 2012 (has links)
No description available.
|
12 |
Algoritmo de prefetching de dados temporizado para sistemas multiprocessadores baseados em NoC [A timed data prefetching algorithm for NoC-based multiprocessor systems]. Silveira, Maria Cireno Ribeiro. 09 March 2015 (has links)
Prefetching is regarded as an effective technique for mitigating a well-known problem in computer systems: the gap between processor performance and memory-access performance. The goal of prefetching is to bring data closer to the processor, retrieving it from memory and loading it into the local cache ahead of use; once the processor requests the data, it is already available in the cache, reducing the miss rate and the miss penalty. For NoC-based multiprocessor systems, prefetching efficiency is even more critical to performance, since the data-access time varies with the distance between processor and memory and with the network traffic.

This work proposes a timed data prefetching algorithm that aims to minimize core penalty through a time-prediction-based prefetching solution for NoC-based multiprocessor systems. The algorithm uses a proactive process, initiated by the server, to issue prefetch requests based on the cache-miss history and on NoC information. In experiments with 16 cores, the proposed algorithm reduced processor penalty in 53.6% of the cases compared with event-based (cache-miss-triggered) prefetching, with a best-case penalty reduction of 29%.
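The abstract describes the timing idea only in prose; the Python sketch below illustrates one way a memory-side ("server") controller could predict a core's next miss and push the data early enough to hide the NoC transfer. All names, the mean-gap predictor, and the per-hop latency model are assumptions for illustration, not the thesis's actual algorithm.

```python
# A minimal sketch of time-predicted, server-initiated prefetching.
# TimedPrefetcher, hop_latency, and the mean-gap predictor are assumed
# names and policies, not the algorithm from the thesis.

from collections import defaultdict, deque

class TimedPrefetcher:
    def __init__(self, hop_latency=3, history=4):
        self.hop_latency = hop_latency           # cycles per NoC hop (assumed)
        self.miss_times = defaultdict(lambda: deque(maxlen=history))

    def record_miss(self, core, block, cycle):
        """Server-side log of a cache miss reported by a core."""
        self.miss_times[(core, block)].append(cycle)

    def schedule(self, core, block, now, hops):
        """Return the cycle at which to inject the prefetch, or None."""
        times = self.miss_times[(core, block)]
        if len(times) < 2:
            return None                          # not enough history yet
        # Predict the next miss as the last miss plus the mean inter-miss gap.
        gaps = [b - a for a, b in zip(times, list(times)[1:])]
        predicted_miss = times[-1] + sum(gaps) // len(gaps)
        transfer = hops * self.hop_latency       # time to cross the NoC
        inject_at = predicted_miss - transfer    # push so data arrives on time
        return max(now, inject_at)
```

The key contrast with event-based prefetching is visible in `schedule`: the request is issued at a predicted time rather than in reaction to a miss.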
|
13 |
Model-Driven Dual Caching for Nomadic Service-Oriented Architecture Clients. Liu, Xin. 15 August 2007 (has links)
Mobile devices have evolved over the years from resource-constrained devices that supported only the most basic tasks to powerful handheld computing devices. However, the most significant step in this evolution was the introduction of wireless connectivity, which enabled them to host applications that require Internet connectivity, such as email, web browsers and, perhaps most importantly, smart/rich clients. Being able to host smart clients allows the users of mobile devices to seamlessly access the Information Technology (IT) resources of their organizations.

One increasingly popular way of enabling access to IT resources is through Web Services (WS). This trend has been aided by the rapid availability of WS packages and tools, most notably the efforts of the Apache group and of Integrated Development Environment (IDE) vendors. But the widespread use of WS raises a question for users of mobile devices such as laptops or PDAs: whether, and how, they can participate in WS. Unlike their wired counterparts (desktop computers and servers), they rely on a wireless network characterized by low bandwidth and unreliable connectivity.

The aim of this thesis is to enable mobile devices to host Web Services consumers. It introduces a Model-Driven Dual Caching (MDDC) approach to overcome problems arising from temporary loss of connectivity and fluctuations in bandwidth.
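As a rough illustration of the dual-caching idea, the sketch below pairs a response cache (to answer WS calls while disconnected) with an outbox (to queue requests until connectivity returns). Class and method names and the TTL-based freshness check are assumptions, not Liu's actual MDDC design.

```python
# A hedged sketch of dual caching for an intermittently connected WS client:
# one cache replays responses while offline, the other queues outgoing
# requests for later synchronization. All names are illustrative.

import time

class DualCache:
    def __init__(self, ttl=300):
        self.responses = {}      # request key -> (response, timestamp)
        self.outbox = []         # requests queued while disconnected
        self.ttl = ttl           # freshness window in seconds (assumed)

    def call(self, key, payload, connected, send):
        if connected:
            resp = send(payload)             # real Web Service call
            self.responses[key] = (resp, time.time())
            return resp
        # Offline: serve a cached response if it is still fresh,
        # and queue the request for later synchronization.
        self.outbox.append(payload)
        cached = self.responses.get(key)
        if cached and time.time() - cached[1] < self.ttl:
            return cached[0]
        return None                          # cache miss while offline

    def flush(self, send):
        """Replay queued requests once the link is back."""
        while self.outbox:
            send(self.outbox.pop(0))
```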
|
14 |
Just-In-Time Push Prefetching: Accelerating the Mobile Web. Armstrong, Nicholas Daniel Robert. January 2011 (has links)
Web pages take noticeably longer to load when the Internet is accessed over high-latency wide-area wireless networks like 3G. This delay can result in lower user satisfaction and lost revenue for website operators. By locating a just-in-time prefetching push proxy in the Internet service provider's mobile network core and routing mobile client web requests through it, web page load times can be perceptibly reduced. Our analysis and experimental results demonstrate that the use of a push proxy results in a much smaller dependency on the mobile-client-to-network latency than in environments where no proxy is used; in particular, only one full round trip from client to server is necessary, regardless of the number of resources referenced by a web page. In addition, we find that the ideal location for a push proxy is close to the servers that the mobile client accesses, minimizing the latency between the proxy and those servers; this is in contrast to traditional prefetching proxies, which do not push prefetched items to the client and are best deployed halfway between client and server.
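A minimal sketch of the proxy-side behavior described above: the proxy fetches the page near the origin servers, resolves the embedded resources itself over the low-latency wired side, and returns everything in one bundle, so the high-latency wireless link is crossed only once. The regex-based resource extraction and the bundle format are illustrative assumptions, not the thesis implementation.

```python
# Illustrative push-proxy sketch: one client round trip returns the page
# plus all referenced resources, fetched proxy-side near the servers.

import re
from urllib.request import urlopen
from urllib.parse import urljoin

RESOURCE_RE = re.compile(r'(?:src|href)="([^"]+\.(?:css|js|png|jpg))"')

def handle_request(page_url):
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    bundle = {page_url: html}
    for ref in RESOURCE_RE.findall(html):
        url = urljoin(page_url, ref)
        try:
            bundle[url] = urlopen(url).read()   # fetched near the origin servers
        except OSError:
            pass                                # skip resources that fail
    return bundle   # pushed to the mobile client in a single response
```

Because the resource fetches happen on the wired side, the client-to-proxy latency is paid once per page rather than once per resource, which is the property the abstract highlights.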
|
16 |
A Prefetching Method for Interactive Web GIS Applications. Yesilmurat, Serdar. 01 March 2010 (has links) (PDF)
A web GIS system faces a major issue in serving map data to client applications. Since most GIS services provide their geospatial data as basic image formats like PNG and JPEG, constructing those images and transferring them over the Internet are costly operations. Various approaches have been proposed to improve this inefficient process. Caching responses on the client side is the most commonly implemented solution, but this method is not adequate by itself. Besides caching responses, predicting the client's next likely requests and updating the cache with the responses to those requests provides a remarkable performance improvement. This procedure is called "prefetching". Via prefetching, caching mechanisms can be used more effectively and efficiently. This study proposes a prefetching algorithm called Retrospective Adaptive Prefetch (RAP). The algorithm is built on a heuristic that takes the user's former actions into consideration, reducing the user-perceived response time and improving navigation efficiency. The caching mechanism developed takes the memory capacity of the client machine into account to set the cache capacity by default; alternatively, the cache size can be configured manually. RAP is compared with four other methods, and the experiments show that it provides better performance enhancements than the compared methods.
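As an illustration of a retrospective heuristic of this kind, the sketch below votes over the user's recent pan directions and prefetches map tiles ahead of the predicted movement. The weighting, history length, and prefetch depth are assumptions, not Yesilmurat's exact RAP algorithm.

```python
# Illustrative tile prefetcher: recent pan moves vote on a direction, and
# tiles ahead in that direction are fetched before the user requests them.

from collections import Counter, deque

class RetrospectivePrefetcher:
    def __init__(self, depth=2, history=8):
        self.moves = deque(maxlen=history)   # recent pan vectors, e.g. (1, 0)
        self.depth = depth                   # how many tiles ahead to fetch

    def record_pan(self, dx, dy):
        self.moves.append((dx, dy))

    def predict(self, x, y):
        """Tile coordinates to prefetch from the current tile (x, y)."""
        if not self.moves:
            return []
        # Weight recent moves more heavily: the latest move counts double.
        votes = Counter(self.moves)
        votes[self.moves[-1]] += 1
        dx, dy = votes.most_common(1)[0][0]
        return [(x + i * dx, y + i * dy) for i in range(1, self.depth + 1)]
```

For example, after `record_pan(1, 0)` twice, `predict(10, 5)` returns `[(11, 5), (12, 5)]`: the two tiles east of the current view.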
|
18 |
A Study on Flat-Address-Space Heterogeneous Memory Architectures. Islam, Mahzabeen. 05 1900 (has links)
In this dissertation, we present a number of studies that primarily focus on data-movement challenges among different types of memories (viz., 3D-DRAM, DDRx DRAM and NVM) employed together as a flat-address heterogeneous memory system. We introduce two hardware-based techniques for prefetching data from slow off-chip phase-change memory (PCM) to fast on-chip memories. These techniques efficiently fetch data from PCM and place it into processor-resident or 3D-DRAM-resident buffers without putting high demand on bandwidth, providing significant performance improvements.

Next, we explore page-migration techniques for flat-address memory systems, which differ in when pages are migrated (periodically or instantaneously) and how the migrations are managed (OS-based or hardware-based). In the first page-migration study, we present several epoch-based page-migration policies for different organizations of flat-address memories consisting of two (2-level) and three (3-level) types of memory modules; these policies yield significant energy savings. In the next study, we devise an efficient "on-the-fly" page-migration technique that migrates a page from slow PCM to fast 3D-DRAM as soon as it receives a certain number of memory accesses, without waiting for any specific time interval. Furthermore, we present a lightweight hardware-assisted address-reconciliation process for managing the addresses of migrated pages. Such on-the-fly page migration with hardware-assisted address reconciliation provides significant performance improvement over systems using epoch-based page migration and OS-based address management.

Finally, we develop an analytical model that employs offline analyses of per-page memory-access counts and recommends whether an application is migration-friendly. This can be useful in deciding whether page migration (either epoch-based or on-the-fly) should be used or turned off for a given application. Thus, our data-management techniques and model enable significant performance improvements for flat-address heterogeneous memory systems involving NVMs.
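A minimal sketch of threshold-triggered ("on-the-fly") promotion, with a software remap table standing in for the hardware-assisted address reconciliation described above; the threshold value and all names are assumptions for illustration.

```python
# Illustrative on-the-fly migration controller: a PCM page is promoted to
# 3D-DRAM as soon as its access count crosses a threshold, and subsequent
# accesses are redirected through a remap table (the software stand-in for
# hardware address reconciliation). THRESHOLD is an assumed value.

THRESHOLD = 32          # accesses before a PCM page is promoted (assumed)

class MigrationController:
    def __init__(self):
        self.counts = {}     # PCM page -> access count so far
        self.remap = {}      # old PCM page address -> new 3D-DRAM address

    def access(self, page, migrate):
        # Reconciliation: redirect accesses to pages that already moved.
        if page in self.remap:
            return self.remap[page]
        n = self.counts.get(page, 0) + 1
        self.counts[page] = n
        if n >= THRESHOLD:
            # Promote immediately instead of waiting for an epoch boundary.
            self.remap[page] = migrate(page)   # copy PCM -> 3D-DRAM, new address
            del self.counts[page]
            return self.remap[page]
        return page
```

The contrast with epoch-based policies is in `access`: promotion happens at the moment the count crosses the threshold, not at the end of a fixed interval.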
|
19 |
Improving L2 Cache Performance through Stream-Directed Optimizations. Sohoni, Sohum. 06 October 2004 (has links)
No description available.
|
20 |
Intelligent Caching to Mitigate the Impact of Web Robots on Web Servers. Rude, Howard Nathan. January 2016 (has links)
No description available.
|