1

Cache Coherence State Based Replacement Policies

Agarwal, Tanuj Kumar, January 2015
Cache replacement policies can play a pivotal role in the overall performance of a system by preserving data locality and thus limiting off-chip accesses. In a shared-memory system, a cache coherence protocol is necessary to ensure correctness of data computations by maintaining the state of entries in the cache. In this work we build and investigate cache replacement policies that use the information provided by cache coherence protocol states. The coherence state of an entry gives us an idea of its status with respect to the other cores in the system. State-based analysis of the SPLASH-2 and PARSEC benchmark suites shows that this information hints at the locality patterns of cache blocks, which can be used to prioritize the order in which cache states are replaced. We model ten different cache-state-based replacement policies: three with fixed priorities and seven whose priorities vary dynamically around the most recently used state. We compare these policies against the standard replacement policies (LRU, FIFO and Random) in terms of system performance and ease of implementation. We develop our simulation framework using the Multi2Sim simulator, in which we model the state-based replacement policies. We simulate the SPLASH-2 and PARSEC benchmark suites over a variety of configurations, varying the number of cores, the associativity of each cache level, and private versus shared L2 cache. We characterize the programs to find the components critical to performance. For an 8-core system we observe that the best of these state-based replacement policies shows marginal improvements in IPC over the Random and FIFO policies, falling slightly short of LRU. We design the state-based replacement policies using a smaller cache (the CSL cache), which stores the state information of the blocks in the main cache. The CSL cache communicates with the controller to provide the replacement entry.
The complexity of the resulting system is comparable to FIFO and is independent of the associativity of the cache.
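The fixed-priority flavor of such a state-based policy can be sketched as follows. This is a hypothetical illustration only: the MESI state names and the particular priority order are assumptions for the sketch, not the thesis's actual design.

```python
# Sketch of a fixed-priority, coherence-state-based victim selection.
# Assumed order: Invalid lines are free slots, Shared lines are cheap to
# drop (another core may still hold a copy), Exclusive next, and Modified
# last (evicting a Modified line forces a write-back).

from dataclasses import dataclass

STATE_PRIORITY = {"I": 0, "S": 1, "E": 2, "M": 3}  # lower = evicted first

@dataclass
class CacheLine:
    tag: int
    state: str  # one of "M", "E", "S", "I"

def choose_victim(cache_set: list[CacheLine]) -> int:
    """Return the index of the way to evict from a set."""
    return min(range(len(cache_set)),
               key=lambda i: STATE_PRIORITY[cache_set[i].state])

ways = [CacheLine(0x10, "M"), CacheLine(0x20, "S"),
        CacheLine(0x30, "E"), CacheLine(0x40, "S")]
victim = choose_victim(ways)  # first "S" line: index 1
```

The dynamic variants described in the abstract would reorder `STATE_PRIORITY` at runtime relative to the most recently used state rather than keeping it fixed.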
2

Competitive cache replacement strategies for a shared cache

Katti, Anil Kumar, 08 July 2011
We consider cache replacement algorithms at a shared cache in a multicore system that receives an arbitrary interleaving of requests from processes with full knowledge of their individual request sequences. We establish tight bounds on the competitive ratio of deterministic and randomized cache replacement strategies when processes share memory blocks. Our main result for this case is a deterministic algorithm called GLOBAL-MAXIMA, which is optimal up to a constant factor when processes share memory blocks. Our framework is a generalization of the application-controlled caching framework, in which processes access disjoint sets of memory blocks. We also present a deterministic algorithm called RR-PROC-MARK that exactly matches the lower bound on the competitive ratio of deterministic cache replacement algorithms when processes access disjoint sets of memory blocks. We extend our results to multiple levels of caches and prove that an exclusive cache is better than both inclusive and non-inclusive caches; this validates experimental findings in the literature. Our results can be applied to shared caches in multicore systems in which processes work together on multithreaded computations such as the Gaussian elimination paradigm, the fast Fourier transform, and matrix multiplication. In these computations, processes have full knowledge of their individual request sequences and can share memory blocks.
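Algorithms like RR-PROC-MARK build on the classic marking technique from competitive caching analysis. The minimal single-sequence version below only illustrates the marking idea itself; the per-process round-robin coordination implied by the thesis's algorithm names is not modeled, and the class name is a label chosen for this sketch.

```python
# Minimal marking-style cache (sketch): on a hit the block is marked; on a
# miss with a full cache, an unmarked block is evicted; when every block is
# marked, all marks are cleared and a new phase begins.
import random

class MarkingCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines: set[int] = set()
        self.marked: set[int] = set()

    def access(self, block: int) -> bool:
        """Process one request; return True on a hit."""
        if block in self.lines:
            self.marked.add(block)
            return True
        if len(self.lines) >= self.capacity:
            unmarked = self.lines - self.marked
            if not unmarked:              # all marked: start a new phase
                self.marked.clear()
                unmarked = set(self.lines)
            victim = random.choice(sorted(unmarked))
            self.lines.discard(victim)
        self.lines.add(block)
        self.marked.add(block)
        return False
```

Randomized marking algorithms of this shape are O(log k)-competitive in the standard single-sequence model, which is the baseline the shared-cache results above generalize.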
3

Cache strategies for internet-based video on-demand distribution

Moreira, Josilene Aires, 31 January 2011
Aires Moreira, Josilene; Fawzi Hadj Sadok, Djamel. Cache strategies for internet-based video on-demand distribution. 2011. Tese (Doutorado). Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2011. Funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.
4

Information-Centric Networking: a natural design for IoT applications?

Meddeb, Maroua, 27 September 2017
The Internet of Things (IoT) is commonly perceived as the extension of the current Internet to our physical world. It interconnects an unprecedented number of sensors/actuators, referred to as things, to the Internet. Facing the important challenges imposed by device heterogeneity and the tremendous generated traffic, the current Internet protocol suite has reached its limits. Information-Centric Networking (ICN) has recently received a lot of attention as a potential Internet architecture to be adopted in an IoT ecosystem. The ICN paradigm shapes the foreseen future Internet architecture by focusing on the data itself rather than its hosting location. It is a shift from a host-centric communication model to a content-centric one, supporting among others unique and location-independent content names, in-network caching, and name-based routing. By providing easy data access and reducing both the retrieval delay and the load on the data producer, ICN can be a viable framework to support the IoT, interconnecting billions of heterogeneous constrained objects. Among several ICN architectures, Named Data Networking (NDN) is considered a suitable ICN architecture for IoT systems. Nevertheless, new issues have emerged that slow down the ambitions behind using the ICN paradigm in IoT environments. In fact, we have identified three major challenges. Since IoT devices are usually resource-constrained, with harsh limitations on energy, memory, and processing power, the adopted in-network caching techniques should be optimized. Furthermore, IoT data are transient and frequently updated by the producer, which imposes stringent requirements for maintaining cached data freshness. Finally, in IoT scenarios, devices are frequently mobile and IoT applications require data continuity. In this thesis, we propose a caching strategy that considers device constraints. Then, we introduce a novel cache freshness mechanism to monitor the validity of cached contents in an IoT environment. Furthermore, to improve caching efficiency, we also propose a cache replacement policy that aims to raise system performance while maintaining data freshness. Finally, we introduce a novel name-based routing scheme for NDN/IoT networks to support producer mobility.
We simulate and compare our proposals against several relevant schemes under a real-traffic IoT network. Our schemes exhibit good system performance in terms of hop reduction ratio, server hit reduction ratio, response latency, and packet loss, while providing a low cache cost and significantly improving content validity.
5

High-performance memory system architectures using data compression

Baek, Seungcheol, 22 May 2014
The Chip Multi-Processor (CMP) paradigm has cemented itself as the archetypal philosophy of future microprocessor design. Rapidly diminishing technology feature sizes have enabled the integration of ever-increasing numbers of processing cores on a single chip die. This abundance of processing power has magnified the venerable processor-memory performance gap, known as the "memory wall". Bridging this gap requires a high-performing memory structure, and an attractive solution is the use of compression in the memory hierarchy. In this thesis, to use compression techniques more efficiently, compressed cacheline size information is studied, and size-aware cache management techniques and a hot-cacheline prediction scheme for dynamic early decompression are proposed. The proposed works also attempt to mitigate the limitations of phase-change memory (PCM), such as low write performance and limited long-term endurance. One promising solution is the deployment of hybridized memory architectures that fuse dynamic random access memory (DRAM) and PCM, combining the best attributes of each technology by using the DRAM as an off-chip cache. A dual-phase compression technique is proposed for high-performing DRAM/PCM hybrid environments, and a multi-faceted wear-leveling technique is proposed for the long-term endurance of compressed PCM. This thesis also includes a new compression-based hybrid multi-level cell (MLC)/single-level cell (SLC) PCM management technique that aims to combine the performance edge of SLCs with the higher capacity of MLCs in a hybrid environment.
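The size-aware management idea can be sketched as follows: when lines occupy their compressed size rather than a fixed slot, one insertion may need to evict several small victims, or none at all if the new line fits. The LRU ordering and byte budget here are illustrative assumptions for the sketch, not the thesis's proposed design.

```python
# Sketch of size-aware management for a compressed cache: capacity is a
# byte budget, each line occupies its compressed size, and insertions
# evict least-recently-used lines until the new line fits.
from collections import OrderedDict

class SizeAwareCompressedCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.lines: OrderedDict[int, int] = OrderedDict()  # tag -> compressed size

    def access(self, tag: int, compressed_size: int) -> list[int]:
        """Touch (or insert) a line; return the tags evicted to make room."""
        evicted = []
        if tag in self.lines:
            self.used -= self.lines.pop(tag)   # re-insert to refresh recency
        while self.used + compressed_size > self.capacity:
            victim, size = self.lines.popitem(last=False)  # least recently used
            self.used -= size
            evicted.append(victim)
        self.lines[tag] = compressed_size
        self.used += compressed_size
        return evicted
```

The point the sketch makes is that compression changes the replacement question from "which way do I evict?" to "which set of victims frees enough bytes?", which is why size-awareness matters for such caches.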
