61

Improving L2 Cache Performance through Stream-Directed Optimizations

Sohoni, Sohum 06 October 2004 (has links)
No description available.
62

Knowledge Accelerated Algorithms and the Knowledge Cache

Goyder, Matthew 19 July 2012 (has links)
No description available.
63

Spatio-Temporal Correlation in the Performance of Cache-Enabled Cellular Networks

Krishnan, Shankar 19 July 2016 (has links)
Exact characterization and performance analysis of wireless networks should incorporate dependencies or correlations in space and time, i.e., study how the network performance varies spatially and temporally given prior information about the performance at previous locations and time slots. This spatio-temporal correlation in wireless networks is usually characterized by studying metrics such as the joint coverage probability at two spatial locations/time slots or the spatio-temporal correlation coefficient. While developing models and analytical expressions for the two extreme cases of spatio-temporal correlation, i.e., (i) the uncorrelated scenario and (ii) the fully correlated scenario, is relatively easy, studying the intermediate case is non-trivial. In this thesis, we develop realistic and tractable analytical frameworks based on random spatial models (using tools from stochastic geometry) for modeling and analysis of correlation in cellular networks. With ever-increasing data demand, caching popular content in the storage of small cells (small cell caching) or the memory of user devices (device caching) is seen as a good alternative to offload demand from macro base stations and reduce backhaul loads. After providing generic results for traditional cellular networks, we study two applications exploiting spatio-temporal correlation in cache-enabled cellular networks. First, we determine the optimal cache content to be stored in the cache of a small cell network that maximizes the hit probability and minimizes the reception energy for the two extreme cases of correlation. Our results concretely demonstrate that the optimal cache contents are significantly different for the two correlation scenarios, thereby indicating the need for correlation-aware caching strategies. Second, we look at a distributed caching scenario in user devices and show that spatio-temporal correlation (user mobility) can be exploited to significantly improve the network performance (in terms of coverage probability and local delay). / Master of Science
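As a rough illustration of the hit-probability objective mentioned above, the following Python sketch computes the hit probability obtained when a small cell caches the k most popular files under an assumed Zipf popularity law; the catalog size, skew parameter and cache sizes are hypothetical values, not figures from the thesis.

    # Minimal sketch: hit probability of caching the k most popular files,
    # assuming (hypothetically) that content requests follow a Zipf law.

    def zipf_popularity(n_files, gamma):
        """Return normalized request probabilities p_1 >= p_2 >= ... >= p_n."""
        weights = [1.0 / (rank ** gamma) for rank in range(1, n_files + 1)]
        total = sum(weights)
        return [w / total for w in weights]

    def hit_probability(popularity, cache_size):
        """Hit probability when the cache stores the cache_size most popular files."""
        return sum(sorted(popularity, reverse=True)[:cache_size])

    if __name__ == "__main__":
        p = zipf_popularity(n_files=1000, gamma=0.8)   # hypothetical catalog and skew
        for k in (10, 50, 100):
            print(f"cache size {k}: hit probability {hit_probability(p, k):.3f}")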
64

Pre-fetch document caching to improve World-Wide Web user response time

Lee, David Chunglin 01 October 2008 (has links)
The World-Wide Web, or the Web, is currently one of the most heavily used network services. Because of this, improvements and new technologies are rapidly being developed and deployed. One important area of study is improving user response time through the use of caching mechanisms. Most prior work considered multiple user caches running on cache relay systems. These systems are mostly post-caching systems; they perform no "look ahead," or pre-fetch, functions. This research studies a pre-fetch caching scheme based on Web server access statistics. The scheme employs a least-recently-used replacement policy and allows multiple simultaneous document retrievals to occur. The scheme is based on a combined statistical and locality-of-reference model associated with the links in hypertext systems. Results show that cache hit rates are doubled compared with schemes that use only post-caching, while results for user response time improvements are mixed. The conclusion is that pre-fetch caching of Web documents offers an improvement over post-caching methods and should be studied in detail for both single-user and multiple-user systems. / Master of Science
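The scheme described above, an LRU cache combined with statistics-driven pre-fetching, can be sketched roughly as follows. This is not the author's implementation: the fetch function, prefetch width and link-statistics format are illustrative assumptions.

    from collections import OrderedDict

    class PrefetchLRUCache:
        """Toy LRU document cache that pre-fetches the most frequently
        followed links of the current page (illustrative sketch only)."""

        def __init__(self, capacity, fetch_fn, prefetch_width=2):
            self.capacity = capacity          # max number of cached documents
            self.fetch_fn = fetch_fn          # function: url -> document body
            self.prefetch_width = prefetch_width
            self.cache = OrderedDict()        # url -> document, in LRU order

        def _store(self, url, doc):
            self.cache[url] = doc
            self.cache.move_to_end(url)
            while len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used

        def get(self, url, link_stats=None):
            """Return the document for url; on a miss, fetch it and pre-fetch
            the outgoing links with the highest server access counts."""
            if url in self.cache:
                self.cache.move_to_end(url)
                return self.cache[url]
            doc = self.fetch_fn(url)
            self._store(url, doc)
            candidates = sorted((link_stats or {}).items(),
                                key=lambda kv: kv[1], reverse=True)
            for link, _count in candidates[:self.prefetch_width]:
                if link not in self.cache:
                    self._store(link, self.fetch_fn(link))
            return doc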
65

Performance Evaluation of Web Archiving Through In-Memory Page Cache

Vishwasrao, Saket Dilip 23 June 2017 (has links)
This study proposes and evaluates a new method for Web archiving. We leverage the caching infrastructure in Web servers for archiving. Redis is used as the page cache and its persistence mechanism is exploited for archiving. We experimentally evaluate the performance of our archival technique using the Greek version of Wikipedia deployed on Amazon cloud infrastructure. We show that there is a slight increase in latencies of the rendered pages due to archiving. Though the server performance is comparable at larger page cache sizes, the maximum throughput the server can handle decreases significantly at lower cache sizes due to more disk write operations as a result of archiving. Since pages are dynamically rendered and the technology stack of Wikipedia is extensively used in a number of Web applications, our results should have broad impact. / Master of Science / This study proposes and evaluates a new method for Web archiving. To reduce response time for serving webpages, Web Servers store recently rendered pages in memory. This process is known as caching. We modify this caching mechanism of Web Servers for archival. We then experimentally evaluate the impact of our archival technique on Web Servers. We observe that the time to render a Web page increases slightly as long as the Web Server is under moderate load. Through our experiments, we establish limits on the maximum requests a Web Server can handle without increasing the response time. We ensure our experiments are conducted on Web Servers using technologies that are widely used today. Thus our results should have broad impact.
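A minimal sketch of the archival idea, assuming the redis-py client and a locally running Redis instance (the key naming and the helper render_page are hypothetical): rendered pages are written to Redis as a page cache, and Redis's append-only-file persistence keeps them on disk, which is what the archiving step relies on.

    import redis

    # Sketch: use Redis both as a page cache and, via its persistence
    # mechanism, as an archive of rendered pages (illustrative only).
    r = redis.Redis(host="localhost", port=6379)
    r.config_set("appendonly", "yes")   # enable AOF persistence so cached pages survive on disk

    def render_page(title):
        # Stand-in for the wiki rendering step (hypothetical helper).
        return f"<html><body><h1>{title}</h1></body></html>"

    def get_page(title):
        key = f"pagecache:{title}"
        cached = r.get(key)
        if cached is not None:
            return cached.decode("utf-8")        # served from the page cache
        html = render_page(title)
        r.set(key, html)                         # cached and persisted for archiving
        return html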
66

Network optimization with open source software: A study of network optimization for the shipping industry

Deshayes, Dan, Sedvallsson, Simon January 1900 (has links)
This thesis describes how network traffic transmitted over a satellite link can be optimized in order to reduce loading times and the amount of data transferred. The purpose of the study was to investigate to what extent data traffic between ship and shore over a satellite link can be controlled so that the traffic becomes more efficient. Using DNS caching, web caching and ad blocking with pfSense as the platform, experiments were performed against different websites, measuring loading times and the amount of data transferred. The results showed considerable potential for optimizing the network traffic, with the measured values indicating a reduction in transferred data of up to 94% and in loading times of 67%.
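The kind of measurement described above can be approximated with a short script such as the one below; the proxy address and target URLs are placeholders, and this is only a sketch of how load time and transferred data might be compared with and without a caching proxy, not the setup used in the study.

    import time
    import requests

    # Sketch: compare page load time and transferred bytes with and without
    # a caching proxy (for example, one running on the pfSense gateway).
    # The proxy address and URLs are hypothetical placeholders.
    PROXY = {"http": "http://192.168.1.1:3128", "https": "http://192.168.1.1:3128"}
    URLS = ["http://example.com/"]

    def measure(url, proxies=None):
        start = time.monotonic()
        response = requests.get(url, proxies=proxies, timeout=30)
        elapsed = time.monotonic() - start
        return elapsed, len(response.content)

    for url in URLS:
        direct = measure(url)
        via_proxy = measure(url, proxies=PROXY)
        print(url, "direct:", direct, "via proxy:", via_proxy)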
67

An Experimental Evaluation of Consistent Hashing with Bounded Loads in Online Video Distribution

Bernardo de Campos Vidal Camilo 14 December 2018 (has links)
Video consumption accounts for a large part of Internet traffic today and tends to increase further in the coming years. In this work, we investigate ways to improve caching in video content delivery networks (CDNs) in order to reduce their response time and increase users' quality of experience. From the analysis of different techniques, we concluded that consistent hashing with bounded loads has interesting characteristics for this purpose and fits the video delivery scenario well. To verify its performance, we built an experimentation platform and, using data from a real video CDN, compared it against plain consistent hashing and the least-connections balancing method, all implemented in an equivalent manner to permit a fair comparison. Lastly, we discuss the results of this evaluation, highlighting the benefits and limitations of the technique in the considered context.
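The load-bounding idea evaluated here can be sketched in a few lines of Python: a request walks the consistent-hashing ring and skips any server whose load has already reached a fixed fraction above the current average. The hash function, virtual-node count and load factor below are illustrative choices, not those of the thesis.

    import hashlib
    import math
    from bisect import bisect

    class BoundedLoadConsistentHash:
        """Sketch of consistent hashing with bounded loads: a request is sent to
        the first server clockwise on the hash ring whose current load stays
        below ceil(load_factor * average load)."""

        def __init__(self, servers, load_factor=1.25, replicas=100):
            self.load_factor = load_factor
            self.load = {s: 0 for s in servers}          # requests assigned per server
            self.ring = sorted((self._hash(f"{s}#{i}"), s)
                               for s in servers for i in range(replicas))
            self.keys = [h for h, _ in self.ring]
            self.total = 0

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def assign(self, request_key):
            self.total += 1
            capacity = math.ceil(self.load_factor * self.total / len(self.load))
            start = bisect(self.keys, self._hash(request_key))
            for i in range(len(self.ring)):
                _, server = self.ring[(start + i) % len(self.ring)]
                if self.load[server] < capacity:         # skip overloaded servers
                    self.load[server] += 1
                    return server
            raise RuntimeError("no server below the load bound")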
68

Distributed caching methods in small cell networks

Bastug, Ejder 14 December 2015 (has links)
This thesis explores one of the key enablers of 5G wireless networks leveraging small cell network deployments, namely proactive caching. Endowed with predictive capabilities and harnessing recent developments in storage, context-awareness and social networks, peak traffic demands can be substantially reduced by proactively serving predictable user demands, via caching at base stations and users' devices. In order to show the effectiveness of proactive caching techniques, we tackle the problem from two different perspectives, namely a theoretical and a practical one. In the first part of this thesis, we use tools from stochastic geometry to model and analyse the theoretical gains of caching at base stations. In particular, we focus on 1) single-tier networks where small base stations with limited storage are deployed, 2) multi-tier networks with limited backhaul, and 3) multi-tier clustered networks with two different topologies, namely coverage-aided and capacity-aided deployments. Therein, we characterize the gains of caching in terms of average delivery rate and mean delay, and show several trade-offs as a function of the number of base stations, storage size, content popularity behaviour and target content bitrate. In the second part of the thesis, we take a more practical approach to proactive caching and focus on content popularity estimation and algorithmic aspects. In particular: 1) we first investigate the gains of proactive caching both at base stations and user terminals, by exploiting recent tools from machine learning and enabling social-network-aware device-to-device (D2D) communications; 2) we propose a transfer learning approach that exploits the rich contextual information extracted from D2D interactions (referred to as the source domain) in order to better estimate content popularity and cache strategic contents at the base stations (referred to as the target domain); 3) finally, to estimate content popularity in practice, we collect real mobile traffic data of users from a telecom operator, gathered from several base stations over intervals of several hours. This large amount of data falls within the scope of big data and requires novel machine learning mechanisms to handle. We therefore propose a parallelized architecture in which content popularity estimation from this data and caching at the base stations are carried out simultaneously. Our results and analysis provide key insights into the deployment of cache-enabled small base stations, which are seen as a promising solution for 5G heterogeneous cellular networks.
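A simple sketch of the popularity-driven placement step is given below, with an exponentially smoothed request-rate estimate standing in for the learning machinery described above; the smoothing factor and cache size are assumed values.

    from collections import defaultdict

    class ProactiveCache:
        """Sketch: estimate content popularity from observed requests with
        exponential smoothing, then proactively cache the top-k contents
        at a base station before the next peak period (illustrative only)."""

        def __init__(self, cache_size, alpha=0.3):
            self.cache_size = cache_size
            self.alpha = alpha                      # smoothing factor (assumed)
            self.popularity = defaultdict(float)    # content id -> smoothed request rate
            self.cached = set()

        def observe_window(self, request_counts):
            """Update popularity estimates from one observation window
            (request_counts maps content id -> number of requests)."""
            for content in set(self.popularity) | set(request_counts):
                observed = request_counts.get(content, 0)
                self.popularity[content] = (self.alpha * observed
                                            + (1 - self.alpha) * self.popularity[content])

        def refresh_cache(self):
            """Proactively place the currently most popular contents in the cache."""
            ranked = sorted(self.popularity, key=self.popularity.get, reverse=True)
            self.cached = set(ranked[:self.cache_size])
            return self.cached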
69

Adaptive Caching of Distributed Components

Pohl, Christoph 01 May 2005 (has links) (PDF)
Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. The thesis at hand therefore tries to outsource caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can adapt to caching access characteristics with respect to data cacheability properties, thus healing misconfigurations and optimizing itself towards an appropriate configuration. Speculative prefetching of data likely to be queried in the immediate future complements the presented approach.
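The runtime-adaptation idea can be illustrated with a small Python sketch that keeps caching results of an expensive call only while observed reuse justifies it; the hit-ratio threshold and observation window are assumptions, not part of the thesis.

    import functools

    def adaptive_cache(min_hit_ratio=0.2, window=50):
        """Sketch of an adaptive cache for expensive remote calls: results are
        cached per argument tuple, but if the observed hit ratio over the last
        `window` lookups drops below `min_hit_ratio`, the cache is flushed and
        caching of rarely reused data is effectively suspended (illustrative)."""
        def decorator(fn):
            store, history = {}, []

            @functools.wraps(fn)
            def wrapper(*args):
                hit = args in store
                history.append(hit)
                if len(history) > window:
                    history.pop(0)
                if hit:
                    return store[args]
                result = fn(*args)
                # Only keep filling the cache while reuse makes it worthwhile.
                if len(history) < window or sum(history) / len(history) >= min_hit_ratio:
                    store[args] = result
                else:
                    store.clear()
                return result
            return wrapper
        return decorator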
70

Adaptive Caching of Distributed Components

Pohl, Christoph 12 May 2005 (has links)
Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. The thesis at hand therefore tries to outsource caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can adapt to caching access characteristics with respect to data cacheability properties, thus healing misconfigurations and optimizing itself towards an appropriate configuration. Speculative prefetching of data likely to be queried in the immediate future complements the presented approach.
