21

Optimisation des performances des logiciels de traitement de données sur les périphériques de stockage SSD / Performance optimization for data processing software on SSD storage devices

Laga, Arezki 20 December 2018
The growing volume of data poses a real challenge to data processing software, such as DBMSs (DataBase Management Systems), and to data storage infrastructure. New technologies have emerged to relieve the pressure exerted by these large volumes of data. This thesis considers emerging secondary storage technologies, specifically flash-based SSD (Solid State Drive) storage devices, which offer up to 10 times the performance of traditional magnetic drives. However, SSDs present a new performance model, which calls for optimizing the I/O cost of data processing and management algorithms. This thesis first proposes an I/O cost model for data processing algorithms on SSDs; the model mainly considers the data volume, the allocated memory space, the SSD I/O performance, and the data distribution. It then proposes a new external sorting algorithm, MONTRES, optimized to reduce the I/O cost when the volume of data to sort exceeds the available main memory by an order of magnitude. Finally, it proposes Lynx, a data prefetching mechanism that uses machine learning to predict and anticipate future reads from secondary storage.
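Neither the thesis's I/O cost model nor MONTRES is reproduced here; as a rough, hedged illustration of the kind of cost such a model captures, the sketch below estimates the I/O volume of a classical external merge sort (the baseline an SSD-optimized sort improves on) from the data volume and the allocated memory, with an asymmetry factor standing in for the SSD read/write cost gap. All parameter names and values are assumptions, not the thesis's.

```python
import math

def external_sort_io_cost(data_bytes, mem_bytes, read_cost=1.0, write_cost=1.5):
    """Rough I/O cost of a classical external merge sort.

    Each pass reads and writes the whole dataset once; the number of
    merge passes grows with log(data/memory). write_cost > read_cost
    stands in for the read/write asymmetry of flash-based SSDs.
    """
    if data_bytes <= mem_bytes:
        return 0.0                                   # fits in memory: no external I/O
    fanout = max(2, mem_bytes // (4 * 1024 * 1024))  # merge arity with assumed 4 MiB buffers
    runs = math.ceil(data_bytes / mem_bytes)         # sorted runs after run formation
    passes = 1 + math.ceil(math.log(runs, fanout))   # run formation + merge passes
    return passes * data_bytes * (read_cost + write_cost)

# Example: sorting 64 GiB of data with 1 GiB of working memory
print(external_sort_io_cost(64 * 2**30, 2**30))
```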
22

Applications des technologies mémoires MRAM appliquées aux processeurs embarqués / MRAM applied to Embedded Processors Architecture and Memory Hierarchy

Cargnini, Luís Vitório 12 November 2013
With the advent of submicron manufacturing flows below 45 nm, the semiconductor industry has begun to face new challenges in continuing to evolve according to Moore's law. For the widespread adoption of embedded systems, a major constraint has become the power consumption of the IC. Moreover, memory technologies such as SRAM, the current standard of integrated memory for the memory hierarchy, and FLASH, for non-volatile storage, face extremely intricate constraints in yielding memory arrays at technology nodes below 45 nm. Importantly, non-volatile memories have until now not been adopted into the memory hierarchy, due to their density and, as with FLASH, the necessity of multi-voltage operation. This thesis works within these constraints and provides some answers: it presents methods, and results extracted from those methods, in support of the goal of delineating a roadmap for adopting a new memory technology that is non-volatile, low-power, low-leakage, SEU/MEU-resistant, and scalable, with performance similar to current SRAM, physically equivalent to SRAM or even better, with an area density 4 to 8 times that of an SRAM cell, and without the necessity of a multi-voltage domain as in FLASH. This memory is MRAM (magnetic RAM), according to the ITRS a candidate to replace SRAM in the near future. Instead of storing charge, MRAM stores a magnetic orientation, set through the spin-torque orientation of the free-layer alloy in the MTJ (Magnetic Tunnel Junction). Spin is a quantum state of matter which, in some metallic materials, can have its orientation switched by applying a current polarized in the direction of the desired field orientation. Once the magnetic field orientation is set, a sense amplifier driving a current through the MTJ, the memory cell element of MRAM, can measure the orientation from the resistance variation: the higher the resistance, the lower the passing current, which is identified as a logic zero; the lower the resistance, the more current passes, which the sense amplifier reads as a logic one. The information is thus not a stored charge but a magnetic field orientation, which is why it is not affected by SEU or MEU caused by high-energy particles; nor can voltage variations change the content of the memory cell, as they can for charges trapped in a floating gate. Regarding MRAM, this thesis addresses the following aspects of applying MRAM to the memory hierarchy: describing the current state of the art in MRAM design and its use in the memory hierarchy; providing an overview of a mechanism to mitigate MRAM write latency at the cache level (the composite memory bank principle); analyzing the power characteristics of a system using MRAM for the L1 and L2 caches, using a dedicated evaluation flow; proposing a methodology to infer system power consumption and performance; and finally, building on the memory bank analysis, describing a composite memory bank, a simple way to generate a memory bank that makes some compromises in power but achieves latency equivalent to SRAM, maintaining similar performance.
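As a toy illustration of the read mechanism described above, in which a sense amplifier converts the MTJ's resistance state into a bit, the following sketch uses the abstract's convention (high resistance reads as logic zero, low resistance as logic one). The resistance and voltage figures are assumed, illustrative values, not measurements from the thesis.

```python
# Toy model of an MRAM read. The sense amplifier compares the current
# through the MTJ against a reference sitting midway between the
# parallel (low-R) and anti-parallel (high-R) states.
R_PARALLEL = 2_000.0      # ohms (assumed): low resistance  -> logic 1
R_ANTIPARALLEL = 4_000.0  # ohms (assumed): high resistance -> logic 0
R_REF = (R_PARALLEL + R_ANTIPARALLEL) / 2

def sense(mtj_resistance_ohms, v_read=0.1):
    """Return the sensed bit: more current than the reference reads as 1."""
    i_cell = v_read / mtj_resistance_ohms
    i_ref = v_read / R_REF
    return 1 if i_cell > i_ref else 0

print(sense(R_PARALLEL), sense(R_ANTIPARALLEL))  # -> 1 0
```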
23

Adaptive Hierarchical RAID

Muppalaneni, Nitin 01 1900
Redundant Arrays of Inexpensive Disks, or RAID, is a popular method of improving the reliability and performance of disk storage. Of the various RAID levels, mirrored (RAID1) and rotating parity (RAID5) configurations have become most popular. Mirroring provides the best overall performance and is easier to configure, but incurs 100 percent storage overhead for the redundancy. Rotating parity, on the other hand, is quite inexpensive for the redundancy it provides and shows impressive performance for reads and full-stripe writes in normal mode, but its small-write performance is poor due to the read-modify-write cycle involved. Performance drops drastically when one of the disks fails and the system enters degraded mode, and RAID5 is also relatively difficult to configure. Typical non-scientific system disk access patterns exhibit very high locality of reference. This thesis presents the design and implementation of an Adaptive Hierarchical RAID array that exploits this locality: frequently accessed data is migrated towards the top of the hierarchy and infrequently accessed data is moved down, so the array adaptively rearranges itself to match the access patterns. Previous work on adaptive hierarchical RAID, such as HP AutoRAID, has explored one part of the design space, namely configurable storage at the SCSI level with no interaction with higher layers such as the volume manager. This thesis explores a different design point, one centered at the volume manager layer. This matters because, with Fibre Channel disks and SCSI-3, Storage Area Networks (SANs) no longer need a conventional controller but rather a modified controller that is much closer to a volume manager.
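The thesis's implementation lives in the volume manager; as a hedged sketch of the migration policy it describes (hot data promoted to the mirrored tier, cold data demoted to parity), consider the following. The tier capacity, the heat metric, and the eviction rule are all illustrative assumptions.

```python
from collections import defaultdict

class AdaptiveTiering:
    """Minimal sketch: blocks with high access counts are kept in a
    small mirrored (RAID1) tier; everything else resides in the large
    rotating-parity (RAID5) tier."""

    def __init__(self, raid1_capacity_blocks=1024):
        self.capacity = raid1_capacity_blocks
        self.heat = defaultdict(int)  # block id -> access count
        self.raid1 = set()            # blocks currently in the mirrored tier

    def access(self, block):
        self.heat[block] += 1
        if block not in self.raid1:
            self._maybe_promote(block)

    def _maybe_promote(self, block):
        if len(self.raid1) < self.capacity:
            self.raid1.add(block)
            return
        coldest = min(self.raid1, key=lambda b: self.heat[b])
        if self.heat[block] > self.heat[coldest]:
            self.raid1.discard(coldest)  # demote the coldest block to RAID5
            self.raid1.add(block)        # promote the newly hot block
```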
24

Memory-aware algorithms : from multicores to large scale platforms

Jacquelin, Mathias 20 July 2011 (PDF)
This thesis focuses on memory-aware algorithms tailored for hierarchical memory architectures, found for instance within multicore processors. We first study the matrix product on multicore architectures: we model such a processor, derive lower bounds on the communication volume, introduce three ad hoc algorithms, and experimentally assess their performance. We then target a more complex operation, the QR factorization of tall matrices, revisiting existing algorithms to better exploit the parallelism of multicore processors; we study the critical paths of many algorithms, prove some of them to be asymptotically optimal, and assess their performance. In the next study, we focus on scheduling streaming applications onto a heterogeneous multicore platform, the QS 22. We introduce a model of the platform and use steady-state scheduling techniques to maximize throughput, presenting a mixed integer programming approach that computes an optimal solution together with simpler heuristics. We then focus on minimizing the amount of memory required for tree-shaped workflows, targeting a classical two-level memory system in which I/O represents transfers from one memory to the other. We propose a new exact algorithm and show that there exist trees for which postorder traversals are arbitrarily bad. We then study the problem of minimizing the I/O volume for a given memory, show that it is NP-hard, and provide a set of heuristics. Finally, we compare archival policies for BLUE WATERS: we introduce two archival policies, adapt the well-known RAIT strategy, and provide a model of the tape storage platform used to assess the performance of the three policies through simulation.
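Of the results above, the tree-workflow memory model is the easiest to make concrete. In this model, each task produces an output that must stay in memory until its parent runs; the peak footprint of a postorder traversal can then be computed recursively, as in this hedged sketch (the tree encoding and sizes are illustrative, and output size is taken as the only memory cost):

```python
def postorder_peak(tree, out_size, node=0):
    """Peak memory of a postorder traversal of a task tree.

    tree: dict node -> list of children; out_size: dict node -> bytes.
    Returns (peak_memory, bytes_resident_after_executing_node).
    """
    resident, peak = 0, 0
    for child in tree.get(node, []):
        child_peak, child_out = postorder_peak(tree, out_size, child)
        peak = max(peak, resident + child_peak)  # earlier siblings' outputs stay resident
        resident += child_out
    peak = max(peak, resident + out_size[node])  # node needs all inputs plus its own output
    return peak, out_size[node]

tree = {0: [1, 2], 1: [], 2: []}    # a root with two leaf children
sizes = {0: 1, 1: 10, 2: 10}
print(postorder_peak(tree, sizes))  # -> (21, 1)
```

Intuitively, trees that force a postorder to hold many large sibling outputs at once are where the exact, non-postorder algorithm pays off.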
25

A reliable, secure phase-change memory as a main memory

Seong, Nak Hee 07 August 2012
The main objective of this research is to provide an efficient and reliable method for using multi-level cell (MLC) phase-change memory (PCM) as a main memory. As DRAM scaling approaches the physical limit, alternative memory technologies are being explored for future computing systems. Among them, PCM is the most mature, with announced commercial products for NOR flash replacement. Its fast access latency and scalability have led researchers to investigate PCM as a feasible candidate for DRAM replacement, and the multi-level potential of PCM cells can further enhance scalability by increasing the number of bits stored per cell. However, the two major challenges for adopting MLC PCM are the limited write endurance and the resistance drift issue. To alleviate the negative impact of the limited write endurance, this thesis first introduces a secure wear-leveling scheme called Security Refresh. It argues that a PCM design not only has to consider normal wear-out under typical application behavior but, more importantly, must take the worst-case scenario into account: the presence of malicious exploits and a compromised OS. Durability and security must therefore be addressed simultaneously. Security Refresh avoids information leaks by constantly migrating data's physical locations inside the PCM, obfuscating the actual data placement from users and system software. In addition to the secure wear-leveling scheme, this thesis proposes SAFER, a hardware-efficient multi-bit stuck-at-fault error recovery scheme that can function in conjunction with existing wear-leveling techniques. The limited write endurance leads to wear-out-related permanent failures, and technology scaling increases the variation in cell lifetime, resulting in early failures of many cells. SAFER exploits the key attribute that a failed cell with a stuck-at value is still readable, making it possible to continue using the failed cell to store data and thereby reducing the hardware overhead for error recovery. A further approach to the limited write endurance is a hybrid phase-change memory architecture that can dynamically classify, detect, and isolate frequent writes before they reach the phase-change memory. This architecture employs a small SRAM-based Isolation Cache with a detection mechanism based on a multi-dimensional Bloom filter and a binary classifier. These techniques are orthogonal to, and can be combined with, other wear-out management schemes for a synergistic result. Lastly, this thesis quantitatively studies the state of the art for MLC PCM in dealing with the resistance drift problem, shows that previous techniques such as scrubbing or error-correction schemes cannot provide a sufficient level of reliability, and proposes tri-level-cell (3LC) PCM, demonstrating that 3LC PCM can be a viable solution to achieve the soft error rate of DRAM and the performance of single-level-cell PCM.
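SAFER's full scheme dynamically partitions a data word so that each partition contains at most one stuck-at bit; the sketch below shows only the underlying principle in its simplest form: one stuck cell per word, recovered by storing the word inverted whenever the desired bit disagrees with the stuck value. The word width and encoding are assumptions for illustration.

```python
def write_word(data, stuck_pos, stuck_val, width=8):
    """Store `data` in a word whose cell `stuck_pos` is stuck at `stuck_val`.

    Returns (stored_bits, inverted_flag). Because a stuck-at cell is
    still readable, inverting the whole word makes the stuck value
    coincide with the desired bit.
    """
    bits = [(data >> i) & 1 for i in range(width)]
    inverted = bits[stuck_pos] != stuck_val
    if inverted:
        bits = [b ^ 1 for b in bits]
    bits[stuck_pos] = stuck_val  # the physical cell holds this value regardless
    return bits, inverted

def read_word(bits, inverted):
    if inverted:
        bits = [b ^ 1 for b in bits]
    return sum(b << i for i, b in enumerate(bits))

stored, flag = write_word(0b10110010, stuck_pos=1, stuck_val=0)
assert read_word(stored, flag) == 0b10110010  # data recovered despite the stuck cell
```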
26

O impacto da hierarquia de memória sobre a arquitetura IPNoSys / The impact of the memory hierarchy on the IPNoSys architecture

Damasceno, Alexandro Lima 27 July 2016
Over the years, as technology has advanced, the search for improvements in the performance of computer systems has been constant. Computer systems have evolved both in processing capacity and in the complexity of the implemented architectures. In such systems the use of memory is crucial, since memory is responsible for storing the data to be processed. Ideally, memories would have unlimited storage capacity, instantaneous data access, and an extremely low cost per bit; in real systems, however, memories do not exhibit these characteristics, as storage capacity, speed, and cost per bit grow in proportion to one another. One technique used to balance these factors and improve the performance of computer systems is the memory hierarchy. Among new technologies and proposals for new processor organizations, a model that has been adopted by computer systems designers is the MPSoC (multiprocessor system-on-chip), which offers higher energy and computational efficiency. In this scenario, with many processing elements, networks-on-chip (NoCs) are more efficient than buses. An NoC consists of a set of routers and interconnected channels forming a switched network; the cores are connected to the network terminals and communication occurs through the exchange of packets. NoCs have traditionally been designed exclusively for communication in SoCs. One unconventional architecture, however, integrates processing and communication in an NoC: IPNoSys. The IPNoSys (Integrated Processing NoC System) architecture is an unconventional processor that uses networks-on-chip and implements processing and routing units to handle and process instructions. It takes advantage of NoC characteristics such as scalability and parallel communication to efficiently execute programs that exploit thread-level parallelism. Currently, the IPNoSys architecture has four memory modules physically distributed at the corners of the network that form a unified address space, each associated with an access unit in charge of managing it. Given this organization, this work develops a new memory hierarchy for IPNoSys and investigates the possible impact on performance and on the programming model.
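The dissertation's concrete hierarchy design is specific to IPNoSys, but the trade-off it builds on, a small fast memory filtering accesses to a large slow one, can be sketched generically. The cache geometry and the trace below are assumptions for illustration only:

```python
class DirectMappedCache:
    """Minimal direct-mapped cache model counting hits and misses."""

    def __init__(self, n_lines=256, line_bytes=32):
        self.n_lines, self.line_bytes = n_lines, line_bytes
        self.tags = [None] * n_lines

    def access(self, addr):
        line = addr // self.line_bytes
        idx, tag = line % self.n_lines, line // self.n_lines
        hit = self.tags[idx] == tag
        self.tags[idx] = tag  # fill the line on a miss
        return hit

cache = DirectMappedCache()
trace = list(range(0, 8192, 4)) * 2  # two sequential sweeps over 8 KiB
hits = sum(cache.access(a) for a in trace)
print(f"hit rate: {hits / len(trace):.2%}")  # spatial + temporal locality pay off
```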
27

Novel Cache Hierarchies with Photonic Interconnects for Chip Multiprocessors

Puche Lara, José 13 April 2021
Current multicores face the challenge of sharing resources among the different processor cores. Two main shared resources act as major performance bottlenecks in current designs: the off-chip main memory bandwidth and the last level cache. Additionally, as the core count grows, the network-on-chip is also becoming a potential performance bottleneck, since traditional designs may face scalability issues in the near future. Memory hierarchies communicated through fast interconnects are implemented in almost every current design, as they reduce the number of off-chip accesses and the overall latency. Main memory, caches, and interconnection resources, together with other widely used techniques like prefetching, help alleviate the huge memory access latencies and limit the impact of the core-memory speed gap. However, sharing these resources brings several concerns, one of the most challenging being the management of inter-application interference. Since almost every running application needs to access main memory, all of them are exposed to interference from co-runners on their way to the memory controller. For this reason, making efficient use of the available cache space, together with achieving fast and scalable interconnects, is critical to sustaining performance in current and future designs. This dissertation analyzes and addresses the most important shortcomings of two major shared resources: the Last Level Cache (LLC) and the Network on Chip (NoC). First, we study the scalability of both electrical and optical NoCs for future multicores and many-cores. To perform this study, we model optical interconnects in a cycle-accurate multicore simulation framework; a proper model is required, since otherwise important performance deviations may appear in the evaluation results. The study reveals that, as the core count grows, the effect of distance on end-to-end latency can negatively impact processor performance. It also shows that silicon nanophotonics are a viable solution to the mentioned latency problems. This dissertation is also motivated by important design concerns in current memory hierarchies: the oversizing of private cache space, data replication overheads, and lack of flexibility in the sharing of cache structures. These issues, which can be overcome in high-performance processors by virtue of huge LLCs, can compromise performance in low-power processors. To address them, we propose a more efficient cache hierarchy organization that leverages optical interconnects. The proposed architecture is conceived as an optically interconnected two-level cache hierarchy composed of multiple cache modules that can be dynamically turned on and off independently. Experimental results show that, compared to conventional designs, static energy consumption improves by up to 60% while achieving similar performance. Finally, we extend the proposal to support both sequential and parallel applications. This extension is required because the proposal adapts to the dynamic cache space needs of the running applications, and the behavior of multithreaded applications differs widely from that of single-threaded programs. Coherence management is also addressed, which is challenging since, in the proposed approach, each cache module can be assigned to any core at a given time. For parallel applications, the evaluation shows up to 78% static energy savings. In summary, this thesis tackles major challenges originating from the sharing of on-chip caches and communication resources in current multicores, and proposes new cache hierarchy organizations leveraging optical interconnects to address them. The proposed organizations reduce both static and dynamic energy consumption compared to conventional approaches while achieving similar performance, which results in better energy efficiency. / Puche Lara, J. (2021). Novel Cache Hierarchies with Photonic Interconnects for Chip Multiprocessors [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/165254
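As a hedged sketch of the dynamic aspect of the proposal, independent cache modules assigned to cores on demand with the rest switched off to save static energy, one can imagine an allocator like the following. The miss-rate threshold, module count, and proportional rule are assumptions, not the dissertation's policy:

```python
def assign_modules(miss_rates, n_modules, threshold=0.05):
    """Distribute cache modules among cores in proportion to demand.

    miss_rates: per-core private-cache miss rates. Cores below the
    threshold get no module; unassigned modules stay powered off.
    """
    demanding = {c: m for c, m in enumerate(miss_rates) if m > threshold}
    alloc = {c: 0 for c in range(len(miss_rates))}
    total = sum(demanding.values())
    if total == 0:
        return alloc  # every module may be switched off
    for core, rate in demanding.items():
        alloc[core] = max(1, round(n_modules * rate / total))
    return alloc

print(assign_modules([0.01, 0.20, 0.10, 0.02], n_modules=8))
# -> {0: 0, 1: 5, 2: 3, 3: 0}: only the demanding cores wake modules up
```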
28

Exploitation efficace des architectures parallèles de type grappes de NUMA à l’aide de modèles hybrides de programmation / Efficient exploitation of NUMA-cluster parallel architectures using hybrid programming models

Clet-Ortega, Jérôme 18 April 2012
Modern computing servers usually consist of clusters of machines with several multi-core CPUs featuring a highly hierarchical hardware design. Exploiting these servers efficiently is the major challenge for implementations of programming models such as MPI or OpenMP. A common practice is to combine the two models to benefit from the advantages of each; however, these models were not designed to work together, which leads to performance issues. The work in this thesis aims to assist developers in programming such hybrid applications. It relies on an analysis of the architectural hierarchy of the computing system to size the execution resources (processes and threads). Rather than the classical hybrid approach of creating one multithreaded MPI process per node, we automatically evaluate alternative solutions, with several multithreaded processes per node, better suited to modern computing systems.
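The dimensioning step can be pictured with a small enumeration of candidate layouts: how many MPI processes per node, and how many OpenMP threads each, given the node's hardware hierarchy. The topology figures below are assumptions (the thesis derives them from the actual machine, e.g. via a tool such as hwloc):

```python
def hybrid_layouts(numa_nodes=4, cores_per_numa=8):
    """Enumerate (processes per node, threads per process) combinations
    that exactly tile the cores of one node."""
    total = numa_nodes * cores_per_numa
    layouts = []
    for procs in range(1, total + 1):
        if total % procs:
            continue
        threads = total // procs
        # layouts where each process maps onto whole NUMA nodes avoid
        # remote-memory traffic between the threads of a process
        numa_aligned = procs <= numa_nodes and numa_nodes % procs == 0
        layouts.append((procs, threads, numa_aligned))
    return layouts

for procs, threads, aligned in hybrid_layouts():
    note = "  <- NUMA-aligned" if aligned else ""
    print(f"{procs:2d} process(es) x {threads:2d} thread(s){note}")
```

The classical hybrid approach is the `procs = 1` row; the alternatives the thesis evaluates automatically correspond to the other NUMA-aligned rows.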
29

Managing the memory hierarchy in GPUs

Dublish, Saumay Kumar January 2018
Pervasive use of GPUs across multiple disciplines is a result of continuous adaptation of the GPU architectures to address the needs of upcoming application domains. One such vital improvement is the introduction of the on-chip cache hierarchy, used primarily to filter the high bandwidth demand to the off-chip memory. However, in contrast to traditional CPUs, the cache hierarchy in GPUs is presented with significantly different challenges such as cache thrashing and bandwidth bottlenecks, arising due to small caches and high levels of memory traffic. These challenges lead to severe congestion across the memory hierarchy, resulting in high memory access latencies. In memory-intensive applications, such high memory access latencies often get exposed and can no longer be hidden through multithreading, and therefore adversely impact system performance. In this thesis, we address the inefficiencies across the memory hierarchy in GPUs that lead to such high levels of congestion. We identify three major factors contributing to poor memory system performance: first, disproportionate and insufficient bandwidth resources in the cache hierarchy; second, poor cache management policies; and third, high levels of multithreading. In order to revitalize the memory hierarchy by addressing the above limitations, we propose a three-pronged approach. First, we characterize the bandwidth bottlenecks present across the memory hierarchy in GPUs and identify the architectural parameters that are most critical in alleviating congestion. Subsequently, we explore the architectural design space to mitigate the bandwidth bottlenecks in a cost-effective manner. Second, we identify significant inter-core reuse in GPUs, presenting an opportunity to reuse data among the L1s. We exploit this reuse by connecting the L1 caches with a lightweight ring network to facilitate inter-core communication of shared data. We show that this technique reduces traffic to the L2 cache, freeing up bandwidth for other accesses. Third, we present Poise, a machine learning approach to mitigate cache thrashing and bandwidth bottlenecks by altering the levels of multithreading. Poise comprises a supervised learning model, trained offline on a set of profiled kernels to make good warp scheduling decisions, and a hardware inference engine that predicts warp scheduling decisions at runtime using the model learned during training. In summary, we address the problem of bandwidth bottlenecks across the memory hierarchy in GPUs by exploring how to best scale, supplement and utilize the existing bandwidth resources. These techniques provide an effective and comprehensive methodology to mitigate the bandwidth bottlenecks in the GPU memory hierarchy.
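Poise's hardware inference engine and feature set go beyond what an abstract can convey; the sketch below only illustrates the offline/online split it describes, using an ordinary least-squares fit over two made-up kernel features to predict a warp limit. The features, training data, and model family are all assumptions, not Poise's design.

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit of y ~ X plus a bias term."""
    A = np.hstack([np.asarray(X, float), np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return coef

# Offline phase: profiled kernels, (L1 miss rate, memory intensity)
# paired with the warp limit that profiling found to perform best.
X = [[0.05, 0.2], [0.30, 0.8], [0.15, 0.5], [0.40, 0.9]]
y = [48, 8, 24, 6]
coef = fit_linear(X, y)

# Online phase: a cheap dot product predicts the warp limit at runtime.
def predict_warp_limit(miss_rate, mem_intensity, max_warps=48):
    raw = coef[0] * miss_rate + coef[1] * mem_intensity + coef[2]
    return int(max(1, min(max_warps, round(raw))))

print(predict_warp_limit(0.25, 0.7))
```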
30

Design Space Exploration and Optimization of Embedded Memory Systems

Rabbah, Rodric Michel 11 July 2006
Recent years have witnessed the emergence of microprocessors embedded within a plethora of devices used in everyday life. Embedded architectures are customized through a meticulous and time-consuming design process to satisfy stringent constraints with respect to performance, area, power, and cost. In embedded systems, the cost of the memory hierarchy limits its ability to play as central a role as it does in general-purpose systems, due to stringent constraints that fundamentally limit the physical size and complexity of the memory system. Ultimately, application developers and system engineers are charged with the heavy burden of reducing the memory requirements of an application. This thesis offers the intriguing possibility that compilers can play a significant role in the automatic design space exploration and optimization of embedded memory systems. This insight is founded upon a new analytical model and novel compiler optimizations that are specifically designed to increase the synergy between the processor and the memory system. The analytical models serve to characterize intrinsic program properties, quantify the impact of compiler optimizations on the memory system, and provide deep insight into the trade-offs that affect memory system design.
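The thesis's analytical model itself is not reproduced here, but one classical intrinsic program property such models build on is reuse distance: the number of distinct addresses touched between two consecutive uses of the same address, which bounds the cache size needed for the second use to hit. A hedged, minimal calculator:

```python
def reuse_distances(trace):
    """Reuse distance of each access (None for cold accesses).

    An access hits in a fully associative LRU cache of C lines iff its
    reuse distance is strictly less than C.
    """
    last_seen = {}
    distances = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            between = trace[last_seen[addr] + 1 : i]
            distances.append(len(set(between)))  # distinct addresses in between
        else:
            distances.append(None)
        last_seen[addr] = i
    return distances

trace = ["a", "b", "c", "a", "b", "b"]
print(reuse_distances(trace))  # -> [None, None, None, 2, 2, 0]
```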
