101.
Paralelizando unidades de cache hierárquicas para roteadores ICN / Parallelizing hierarchical cache units for ICN routers. Mansilha, Rodrigo Brandão, January 2017 (has links)
A key challenge in Information-Centric Networking (ICN) is to develop cache units (also called Content Stores, CS) that meet three requirements: large storage space, fast operation, and affordable cost. The so-called Hierarchical Content Store (HCS) is a promising approach to satisfying these requirements jointly. It exploits the temporal correlation between content requests to predict future demands; for example, a user who requests the first minute of a movie is assumed to also request the second minute. In theory, this assumption enables proactive content transfers from a relatively large but slow cache area (Layer 2, L2) to a faster but smaller cache area (Layer 1, L1). The hierarchical structure thereby has the potential to increase both the throughput and the size of the CS by an order of magnitude while keeping cost constant. However, developing an HCS introduces several practical challenges. The L2 and L1 memory levels must be carefully coupled with respect to their transfer rates and sizes, which depend on both hardware aspects (e.g., the L2 read rate, the use of multiple physical SSDs in parallel, bus speed) and software aspects (e.g., the SSD controller, memory management). In this context, this thesis presents two main contributions. First, we propose an architecture that overcomes the inherent HCS bottlenecks by parallelizing multiple HCS instances. In summary, the proposed scheme avoids concurrency problems (specifically, synchronization) through deterministic partitioning of content requests among multiple threads. Second, we propose a methodology for investigating HCS designs that combines emulation techniques with analytical modeling. The proposed methodology offers advantages over prototyping- and simulation-based methods. We emulate the L2 to enable the investigation of a wider range of boundary scenarios (in terms of both hardware and software) than would be possible through prototyping with current technologies. Moreover, the emulation employs real prototype code for the other HCS components (e.g., L1, layer management, and the API), providing more realistic results than simulation would.
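The deterministic request partitioning described above can be illustrated with a small sketch (ours, not the thesis code; the function and shard names are hypothetical): hashing the content name selects the thread, so all requests for a given content land on the same HCS instance and no inter-thread synchronization is needed.

```python
import hashlib

NUM_THREADS = 4  # one Hierarchical Content Store (HCS) instance per thread

def shard_for(content_name: str) -> int:
    """Deterministically map a content name to one HCS shard.

    Every request for the same name lands on the same shard, so the
    per-thread caches never need locks to stay consistent."""
    digest = hashlib.sha1(content_name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_THREADS

# Partition a batch of incoming requests among the per-thread queues.
queues = [[] for _ in range(NUM_THREADS)]
for req in ["/movie/minute-1", "/movie/minute-2", "/news/item-7"]:
    queues[shard_for(req)].append(req)
```

Because the mapping is a pure function of the name, no coordination protocol between threads is needed to decide ownership of a content item.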
102.
Utvärdering av cachningsalgoritm för dynamiskt genererade webbsidor / Evaluation of a caching algorithm for dynamically generated web pages. Handfast, Benny, January 2005 (has links)
Web servers on the Internet today use dynamic web pages generated with the help of database systems for their users. This has led to a heavy load on web servers, and one method to reduce that load is caching. This work implements and tests a specific caching algorithm called Online View Selection in a web-game scenario. A potential problem is identified in the algorithm that can cause stale information to be delivered to the client, and the algorithm is modified to handle this problem. The test results show that the modified algorithm and the original provide equivalent performance. The modified algorithm is shown to work, but the problem with the original algorithm rarely arises in the web-game scenario.
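The staleness problem described above can be sketched as follows (a hypothetical toy model, assuming a single version counter per database; this is not the actual Online View Selection algorithm): without a freshness check, a view cached before a write is served unchanged.

```python
class ViewCache:
    """Toy cache of materialized query results over a versioned database."""

    def __init__(self):
        self.views = {}        # query -> (result, db_version_when_cached)
        self.db_version = 0    # bumped on every write to the database

    def write(self):
        """A database update; cached views are not invalidated here."""
        self.db_version += 1

    def get(self, query, compute, check_freshness=False):
        """Return the cached result, recomputing on a miss.

        Without check_freshness, a view cached before a write is
        served as-is, which is exactly the stale-delivery problem."""
        if query in self.views:
            result, cached_at = self.views[query]
            if not check_freshness or cached_at == self.db_version:
                return result
        result = compute()
        self.views[query] = (result, self.db_version)
        return result
```

Serving from `get()` without `check_freshness` reproduces the stale-delivery issue; passing `check_freshness=True` mirrors the spirit of the modification, at the cost of recomputing views after every write.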
103.
A Caching And Streaming Framework For Multimedia. Paknikar, Shantanu, 12 1900 (has links) (PDF)
No description available.
104.
Service-Oriented Information-Centric Vehicular Ad-hoc Networks. Modesto, Felipe, 29 May 2019
With vehicular mobile communication becoming a daily requirement and an ever-increasing number of services being available to passengers, it is clear that vehicular networks require efficient communication systems.
VANETs, one of the most significant trends in ad-hoc networking, have much to gain from improved content delivery, and one of the leading contenders for mobile networks is the Information-Centric Networking approach.
The peculiarities of the vehicular environment require specialized solutions, tailored to highly mobile settings.
The main contribution of this thesis is the introduction of a novel architecture and its components.
We extensively discuss Information-Centric Vehicular Ad-hoc Networks.
Additionally, we perform an in-depth analysis of integrating bus-based transit systems into VANETs, not only as participating members but as service providers and official agents, including their roles and potential challenges.
We perform statistical analysis on real-world data to demonstrate the intrinsic potential of public transit systems.
From the discussions presented, we introduce a novel service-based system architecture for Information-Centric Networking named SEVeN.
The proposed model is designed to enable service exchange and service management in highly competitive vehicular ad-hoc networks.
The proposed SEVeN architecture includes the introduction of a novel purpose-defined naming policy and service sub-layer as well as a service prioritization policy named LBD.
We also discuss the current state of ICN caching in VANETs, existing issues faced by vehicular networks, and potential approaches based on intermediate cache coordination that can mitigate existing shortcomings.
We perform a series of simulations and analyze the efficiency of popularity-based caching in various network configurations to expose its current shortcomings.
From this discussion, we propose two cache content insertion policies, UG-Cache and MG-Cache, for ICN-VANETs.
In these policies, cache insertion decisions are made based on recommendations from the content sender, which depend on request frequency and cache distance.
We also introduce a caching policy based on collaborative observation of locality in request frequency, designed to allow vehicles to preemptively distribute and store content in a reserved portion of the cache, based on cooperative observation of requests with provider-based location correlation.
All novel elements proposed by this thesis are discussed, described, and evaluated within its chapters.
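A minimal sketch of the kind of sender-recommended insertion rule described for UG-Cache and MG-Cache (our illustration only; the thresholds and function names are assumptions, not the thesis parameters): content is inserted when it is requested often and the caching node is far enough from the provider.

```python
def should_cache(request_freq: float, hops_from_provider: int,
                 freq_threshold: float = 0.5, min_hops: int = 2) -> bool:
    """Hypothetical insertion rule in the spirit of UG-/MG-Cache.

    The content sender recommends caching items that are requested
    often, and prefers placing them far from the provider, i.e.
    closer to the consumers, to shorten future retrieval paths."""
    popular = request_freq >= freq_threshold
    far_from_provider = hops_from_provider >= min_hops
    return popular and far_from_provider
```

A real policy would learn the frequency threshold from observed traffic rather than fix it; the sketch only shows how the two signals (frequency, distance) combine into an insertion decision.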
105.
Simulation of content-aware caching policies for tiled 360 videos / Simulering av content-aware caching policies för 360 videos. Latif, Rami, January 2020 (has links)
Video streaming is used daily by people around the world, plays a big role in many Internet users' daily lives, and is today responsible for the majority of Internet traffic. As 360 video streaming services become increasingly popular, and each such user session requires much higher bandwidth than traditional video streaming, optimized solutions for this type of video are becoming increasingly important. One method that has been proposed to reduce bandwidth usage is the use of proxy servers. In this thesis, we evaluate custom-adapted prefetching policies that try to improve the user's Quality of Experience (QoE). Defining a prefetching policy for something as adaptive as 360 video brings challenges that need to be simulated before release in the real world. Without proper testing, a prefetching policy can do more harm than good by flooding the network with an unnecessary number of transmissions. Prior research has shown that the QoE of HTTP-based adaptive streaming (HAS) clients can be improved with content-aware prefetching (e.g., Krishnamoorthi et al. 2013). However, there has been limited prior work adapting and evaluating such policies in the context of 360 streaming. This thesis presents a simulation-based evaluation of proxy-assisted 360 solutions that includes custom-designed prefetching policies. The main contributions of the thesis are as follows. First, we implement four types of proxy-assisted prefetching policies and simulate them under two scenarios with different network conditions. One scenario simulates a network environment with a bottleneck located between the client and the proxy, while the other places the bottleneck between the proxy and the server. The cooperation between the client and the proxy is evaluated for each scenario and prefetching policy. Second, we evaluate the proxy-assisted prefetching policies against baselines and against each other with regard to their ability to improve the viewer's QoE.
Our results show that the bottleneck location has a major impact on proxy performance and that simple prefetching policies can enable clients to download larger amounts of data, which has a significant effect on the viewer's QoE. Considering that 360 videos require much higher bandwidth than traditional video streaming, service providers may consider integrating prefetching policies for 360 video streaming.
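The viewport-driven prefetching idea behind such policies can be sketched as follows (a toy illustration under our own assumption of an 8-tile horizontal grid; it is not one of the four evaluated policies): rather than fetching the full panorama, the proxy prefetches only the predicted tile and its neighbours for the next segment.

```python
def tiles_to_prefetch(viewport_center: int, grid_width: int = 8,
                      lookahead: int = 1) -> list:
    """Hypothetical viewport-driven prefetch rule for tiled 360 video.

    Prefetch the tile the viewer is predicted to face plus `lookahead`
    horizontal neighbours on each side, wrapping around the panorama,
    instead of all `grid_width` tiles of the segment."""
    return sorted({(viewport_center + d) % grid_width
                   for d in range(-lookahead, lookahead + 1)})
```

With `lookahead=1` the proxy downloads 3 of 8 tiles per segment, trading a risk of missing a fast head turn against a large bandwidth saving.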
106.
Seed dispersal by black-backed Jackals (Canis mesomelas) and hairy-footed gerbils (Gerbillurus spp.) of !nara (Acanthosicyos horridus) in the central Namib Desert. Shikesho, Saima Dhiginina, 29 September 2021
This study investigated primary seed dispersal of !nara (Acanthosicyos horridus) by Black-backed Jackals (Canis mesomelas) and secondary seed dispersal by scatter-hoarding hairy-footed gerbils (Gerbilliscus (Gerbillurus) spp.) in the central Namib Desert. This was accomplished by examining visitation rates and fruit removal of !nara melons, primarily by jackals. In addition, I determined the viability and germination rate of !nara seeds collected from jackal scat. The results indicate that jackals were the dominant species to visit !nara (93.3%) and the only !nara frugivores recorded by camera traps over two !nara fruiting seasons. There was no difference in the viability of ingested seeds and control seeds, but germination rates of ingested !nara seeds were significantly higher (50.4%) than those of control !nara seeds (34%). This component of the study suggests that Black-backed Jackals are the main primary dispersers of !nara seeds in the central Namib Desert. I furthermore examined secondary seed dispersal by tracking !nara seeds to determine whether scatter-hoarding hairy-footed gerbils were caching or consuming seeds. I recorded the distance moved, the depth of seed burial, the recovery rate, and the habitats in which seeds were buried, across three habitat types. Hairy-footed gerbils removed 100% of the !nara seeds from experimental sites and cached 60.3% of all the !nara seeds removed. The gerbils frequently retrieved the buried caches within two days (77% of the time) and re-cached them elsewhere. The majority of caches were in open areas (83%) and consisted of only one (39%) or two seeds (45%). Only 1.7% of the cached seeds were not retrieved by the gerbils during the 30-day observation periods. !Nara seeds were moved an average distance of 29.1±1.6 m and buried at an average depth of 4±0.2 cm. Although there is a high probability of cache retrieval, some of the cached seeds survived.
As gerbil caches are at locations favourable for plant establishment, and as buried seeds are more likely to survive until conditions become suitable for germination and seedling establishment, seed dispersal by hairy-footed gerbils is advantageous to !nara plants. Therefore, hairy-footed gerbil species in the central Namib Desert contribute to secondary seed dispersal of !nara. The combined interaction of endozoochory by Black-backed Jackals (Canis mesomelas) and synzoochory by hairy-footed gerbils (Gerbillurus spp.) in dispersing seeds of !nara plants (Acanthosicyos horridus) in the central Namib Desert suggests that diplochory is highly likely.
107.
Limites fondamentales de stockage pour les réseaux de diffusion de liens partagés et les réseaux de combinaison / Fundamental Limits of Cache-aided Shared-link Broadcast Networks and Combination Networks. Wan, Kai, 29 June 2018
In this thesis, we investigated the coded caching problem by building the connection between coded caching with uncoded placement and index coding, and by leveraging index coding results to characterize the fundamental limits of the coded caching problem. We mainly analysed the caching problem in the shared-link broadcast model and in combination networks. In the first part of this thesis, for cache-aided shared-link broadcast networks, we considered the constraint that content is placed uncoded within the caches. When the cache contents are uncoded and the user demands are revealed, the caching problem can be connected to an index coding problem. We derived fundamental limits for the caching problem by using tools from the index coding problem. A novel index coding achievable scheme was first derived based on distributed source coding. This inner bound was proved to be strictly better than the widely used "composite (index) coding" inner bound by leveraging the previously ignored correlation among composites and non-unique decoding. For the centralized caching problem, an outer bound under the constraint of uncoded cache placement is proposed based on the "acyclic index coding outer bound". This outer bound is proved to be achieved by the cMAN scheme when the number of files is not less than the number of users, and by the proposed novel index coding achievable scheme otherwise. For the decentralized caching problem, this thesis proposes an outer bound under the constraint that each user stores bits uniformly and independently at random. This outer bound is achieved by dMAN when the number of files is not less than the number of users, and by our proposed novel index coding inner bound otherwise.
In the second part of this thesis, we considered the centralized caching problem in two-hop relay networks, where the server communicates with cache-aided users through intermediate relays. Because of the hardness of analysis on general networks, we mainly considered a well-known class of symmetric relay networks, combination networks, comprising H relays and (H choose r) users, where each user is connected to a different r-subset of relays. We aimed to minimize the max link load for worst-case demands. We derived outer and inner bounds in this thesis. For the outer bound, the straightforward approach is to consider a cut of x relays; the total load transmitted to these x relays can then be outer bounded by the bound for the shared-link model with (x choose r) users. We used this strategy to extend the outer bounds for the shared-link model, including the acyclic index coding outer bound, to combination networks. We also tightened the extended acyclic index coding outer bound in combination networks by further leveraging the network topology and the joint entropy of the various random variables. For the achievable schemes, there are two approaches: separation and non-separation. In the separation approach, we use cMAN cache placement and multicast message generation independent of the network topology, and then deliver the cMAN multicast messages based on the network topology. In the non-separation approach, we design the placement and/or the multicast messages based on the network topology. We proposed four delivery schemes in the separation approach. In the non-separation approach, for any uncoded cache placement, we first proposed a delivery scheme that generates multicast messages based on the network topology. Moreover, we also extended our results to more general models, such as combination networks with cache-aided relays and users, and caching systems in more general relay networks. Optimality results were given under some constraints, and numerical evaluations showed that our proposed schemes outperform the state of the art.
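The cut-based outer-bound argument can be illustrated numerically (a simplified sketch of ours: we use the standard cut-set lower bound for the shared-link model and split the load evenly over the x cut links; the bounds derived in the thesis are tighter).

```python
from math import comb

def shared_link_cutset_bound(K: int, N: int, M: float) -> float:
    """Classic cut-set lower bound on the delivery rate of a
    shared-link network with K users, N files, and cache size M:
    max over s served users of s - s*M / floor(N/s)."""
    return max(s - s * M / (N // s) for s in range(1, min(N, K) + 1))

def combination_cut_bound(H: int, r: int, N: int, M: float, x: int) -> float:
    """A cut of x relays isolates comb(x, r) users, so the shared-link
    bound for those users, spread over the x cut links, lower-bounds
    the max link load (an even split is assumed for simplicity)."""
    users_behind_cut = comb(x, r)
    if users_behind_cut == 0:
        return 0.0
    return shared_link_cutset_bound(users_behind_cut, N, M) / x
```

For example, with H=4 relays, r=2, N=6 files and cache size M=3, a cut of x=2 relays isolates one user and yields a per-link load bound of 0.25 files.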
108.
Optimal Network Coding Under Some Less-Restrictive Network Models. Chih-Hua Chang (10214267), 12 March 2021
Network coding is a critical technique when designing next-generation network systems, since its use can significantly improve the throughput and performance (delay/reliability) of the system. In the traditional design paradigm without network coding, different information flows are transported much like commodity flows: the flows are kept separate while being forwarded through the network. Network coding, however, allows nodes in the network not only to forward packets but also to process the incoming information messages, with the goal of improving throughput, reducing delay, or increasing reliability. Specifically, network coding is a critical tool when designing absolute Shannon-capacity-achieving schemes for various broadcasting and multicasting applications. In this thesis, we study optimal network schemes for some applications with less restrictive network models. A common component of the models/approaches is how to use network coding to take advantage of a broadcast communication channel.

In the first part of the thesis, we consider the system of one server transmitting K information flows, one for each of K users (destinations), through a broadcast packet erasure channel with ACK/NACK. The capacity region of 1-to-K broadcast packet erasure channels with ACK/NACK is known for some scenarios, e.g., K<=3. However, existing achievability schemes with network coding either require knowing the target rate in advance, and/or have a complicated description of the achievable rate region for which it is difficult to prove whether it matches the capacity or not. In this part, we propose a new network coding protocol with the following features: (i) its achievable rate region is identical to the capacity region for all scenarios in which the capacity is known; (ii) its achievable rate region is much more tractable and has been used to derive new capacity rate vectors; (iii) it employs sequential encoding that naturally handles dynamic packet arrivals; (iv) it automatically adapts to unknown packet arrival rates; (v) it is based on GF(q) with q>=K. Numerically, for K=4, it admits an average control overhead of 1.1% (assuming each packet has 1000 bytes), average encoding memory usage of 48.5 packets, and average per-packet delay of 513.6 time slots when operating at 95% of the capacity.

In the second part, we focus on the coded caching system of one server and K users, where each user k has cache memory of size M_k and demands one file among the N files currently stored at the server. The coded caching system consists of two phases. Phase 1, the placement phase: each user accesses the N files and fills its cache memory during off-peak hours. Phase 2, the delivery phase: during peak hours, each user submits his/her own file request, and the server broadcasts a set of packets simultaneously to the K users with the goal of successfully delivering the desired packets to each user. Due to the high complexity of the coded caching problem with heterogeneous file sizes and heterogeneous cache memory sizes for arbitrary N and K, prior works focus on solving the optimal worst-case rate with homogeneous file sizes, and mostly on designing order-optimal coded caching schemes with user-homogeneous file popularity that attain the lower bound within a constant factor. In this part, we derive the average rate capacity for the microscopic 2-user/2-file (N=K=2) coded caching problem with heterogeneous file sizes, cache memory sizes, and user-dependent heterogeneous file popularity. The study sheds further insight on the complexity and optimal scheme design of the general coded caching problem with full heterogeneity.

In the third part, we further study the coded caching system of one server, K=2 users, and N>=2 files, and focus on the user-dependent file popularity of the two users. In order to approach the exactly optimal uniform average rate of the system, we simplify the file demand popularity to binary outputs, i.e., each user has either no interest (probability 0) or positive uniform interest (a constant probability) in each of the N files. Under this model, the file popularity of each user is characterized by his/her file demand set of positive interest among the N files. Specifically, we analyze the case of two users (K=2). We show exact capacity results for one overlapped file of the two file demand sets for arbitrary N, and for two overlapped files of the two file demand sets for N=3. To investigate the performance with more overlapped files, we also present the average rate capacity under the constraint of selfish and uncoded prefetching, with explicit prefetching schemes that achieve those capacities. All the results allow for arbitrary (and not necessarily identical) users' cache capacities and numbers of files in each file demand set.
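The basic coded caching gain for the N=K=2 case can be illustrated with the classic half-file placement and a single XOR broadcast (our toy example with homogeneous file and cache sizes, not the heterogeneous scheme derived in the thesis): one coded transmission serves both users at once.

```python
# Each file is split into two halves; user k caches the k-th half of
# both files during the placement phase.
A = [b"A1", b"A2"]                # file A, split into two halves
B = [b"B1", b"B2"]                # file B, split into two halves

def xor(p: bytes, q: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(p, q))

cache1 = {"A": A[0], "B": B[0]}   # user 1 caches the first halves
cache2 = {"A": A[1], "B": B[1]}   # user 2 caches the second halves

# Delivery: user 1 wants A (missing A2), user 2 wants B (missing B1).
broadcast = xor(A[1], B[0])       # one coded transmission for both

assert xor(broadcast, cache1["B"]) == A[1]   # user 1 recovers A2
assert xor(broadcast, cache2["A"]) == B[0]   # user 2 recovers B1
```

Sending `A2 XOR B1` instead of `A2` and `B1` separately halves the delivery load, which is exactly the multicast gain that coded caching exploits.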
109.
Analyzing Caching Gain in Small Geographical Areas in IP Access Networks. Du, Manxing, January 2012
Since its emergence, user-generated content (UGC) has become the driving force in the growth of Internet traffic. As one of the most successful and popular UGC systems, YouTube contributes a great share of Internet traffic volume and has attracted a lot of academic interest. The continuously increasing amount of IP traffic motivates the need for better network design, more efficient content distribution mechanisms, and more sustainable system development. Web caching is one of the widely used techniques to reduce inter-ISP (Internet Service Provider) traffic, and it is considered an important part of the design of a content distribution infrastructure. This master's thesis utilizes a one-month trace of YouTube traffic in two residential networks in Sweden. Based upon a systematic and in-depth measurement, we focus on analyzing the geographic locality of traffic patterns within small areas for these two networks. We summarize the YouTube traffic characteristics and user replay patterns, and then discuss why caching can be useful for YouTube-like systems. We present the optimal caching gain on a per-area basis and also divide users into two groups, PC and mobile device users, to show the caching gain for each group. Overall, an infinite-capacity proxy cache for each small area could reduce the YouTube streaming data traffic by 30% to 45%. The results presented in this thesis help us to understand YouTube traffic and user behavior, and provide valuable information for ISPs to design more efficient caching mechanisms. When this work began, we thought that a reduction of backhaul traffic (especially for mobile operators) might delay the need for investments in upgrading network capacity. However, an important conclusion from this thesis project is that cache efficiency depends on the terminal type. For mobile terminals (smartphones, iPads, etc.), a terminal cache solution is found to be the most efficient, while for PCs in fixed networks a network cache would be more efficient. It should be noted that the mobile terminals covered in the project were connected through home Wi-Fi, so further research is needed in order to draw definite conclusions about caching solutions for cellular networks.
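The optimal (infinite-cache) gain per area can be estimated from a request trace with a simple calculation (our sketch, assuming equal-sized objects, whereas the thesis accounts for actual traffic volumes): with unlimited cache capacity every repeated request is a hit, so the savings equal one minus the fraction of unique requests.

```python
def infinite_cache_savings(requests) -> float:
    """Upper bound on the cacheable fraction of traffic in one area.

    With an infinite per-area proxy cache, only the first request for
    each object misses; every repeat is a hit, so the savings are
    1 - unique/total.  (Assumes equal-sized objects.)"""
    total = len(requests)
    if total == 0:
        return 0.0
    return (total - len(set(requests))) / total
```

Applying this per geographic area and per user group (PC vs. mobile) gives the kind of upper-bound caching gain the thesis reports, e.g. a trace where 40% of requests are repeats bounds the traffic reduction at 40%.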
110.
Caches collaboratifs noyau adaptés aux environnements virtualisés / A kernel cooperative cache for virtualized environments. Lorrillere, Maxime, 04 February 2016
With the advent of cloud architectures, virtualization has become a key mechanism for ensuring isolation and flexibility. However, a drawback of using virtual machines (VMs) is the fragmentation of physical resources. As operating systems leverage free memory for I/O caching, memory fragmentation is particularly problematic for I/O-intensive applications, which suffer a significant performance drop. In this context, providing the ability to dynamically adjust the resources allocated among the VMs is a primary concern. To address this issue, this thesis proposes a distributed cache mechanism called Puma. Puma pools together the free memory left unused by VMs: it enables a VM to entrust clean page-cache pages to other VMs. Puma extends the Linux kernel page cache, and thus remains transparent to both applications and the rest of the operating system. Puma adjusts itself dynamically to the caching activity of a VM, which it evaluates by means of metrics derived from existing Linux kernel memory management mechanisms. Our experiments show that Puma significantly improves the performance of I/O-intensive applications and that it adapts well to dynamically changing conditions.
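The pooling idea behind Puma can be sketched in a few lines (our simplification, not Puma's kernel implementation; the class and function names are hypothetical): a local page-cache miss first probes the memory lent by peer VMs before touching disk, since a remote memory hit is far cheaper than a disk read.

```python
class Vm:
    """Toy peer VM that lends part of its free memory to others."""

    def __init__(self, free_pages: int):
        self.free_pages = free_pages
        self.remote_cache = {}   # pages entrusted to this VM by peers

    def can_lend(self) -> bool:
        # A real system would use caching-activity metrics here,
        # not a fixed free-page count.
        return self.free_pages > 0

def read_page(key, local_cache, peers, read_from_disk):
    """Serve a page: local cache, then pooled remote memory, then disk."""
    if key in local_cache:
        return local_cache[key]
    for peer in peers:                   # try the pooled remote memory
        if key in peer.remote_cache:
            return peer.remote_cache[key]
    data = read_from_disk(key)           # slowest path
    local_cache[key] = data
    return data
```

The sketch only shows the lookup path; the interesting part of the real system is deciding dynamically, from memory-pressure metrics, how many pages each node should lend or reclaim.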