31.
Algorithm/architecture codesign of low power and high performance linear algebra compute fabrics
Pedram, Ardavan, 27 September 2013 (has links)
In the past, we could rely on technology scaling and new micro-architectural techniques to improve the performance of processors. Nowadays, both of these methods are reaching their limits. The primary concern in future architectures with billions of transistors on a chip and limited power budgets is power/energy efficiency. Full-custom design of application-specific cores can yield up to two orders of magnitude better power efficiency than conventional general-purpose cores. However, a tremendous design effort is required to integrate a new accelerator for each new application. In this dissertation, we present the design of specialized compute fabrics that maintain the efficiency of full-custom hardware while providing enough flexibility to execute a whole class of coarse-grain operations. The broad vision is to develop integrated and specialized hardware/software solutions that are co-optimized and co-designed across all layers, ranging from the basic hardware foundations all the way to application programming support through standard linear algebra libraries. We address these issues specifically in the context of dense linear algebra applications. In the process, we pursue the main questions that architects will face while designing such accelerators. How broad is the class of applications that the accelerator can support? What are the limiting factors that prevent utilization of these accelerators on the chip? What is the maximum achievable performance/efficiency? Answering these questions requires expertise and careful codesign of the algorithms and the architecture in order to select the best possible components, datapaths, and data movement patterns, resulting in a more efficient hardware-software codesign. In some cases, codesign reduces complexities that are imposed on the algorithm side by initial limitations of the architecture. We design a specialized Linear Algebra Processor (LAP) architecture and discuss the details of mapping matrix-matrix multiplication onto it. We further verify the flexibility of our design by computing a broad class of linear algebra kernels. We conclude that this architecture can perform a broad range of matrix-matrix operations, as complex as matrix factorizations and even Fast Fourier Transforms (FFTs), while maintaining its ASIC-level efficiency. We present a power-performance model that compares state-of-the-art CPUs and GPUs with our design. The model reveals sources of inefficiency in CPUs and GPUs, and we demonstrate how to overcome them in the process of designing our LAP. As we progress through this dissertation, we introduce modifications of the original matrix-matrix multiplication engine to facilitate the mapping of more complex operations, and we observe the resulting performance and efficiency using our power estimation methodology. Compared to other conventional architectures for linear algebra applications and FFT, our LAP is over an order of magnitude better in terms of power efficiency. Based on our estimates, up to 55 and 25 GFLOPS/W single- and double-precision efficiencies are achievable on a single chip in standard 45nm technology.
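The heart of the LAP discussion is the mapping of blocked matrix-matrix multiplication, whose data reuse is what makes near-ASIC efficiency possible. As an illustration only (not the LAP's actual datapath or control), the following C sketch shows the blocked GEMM loop nest: each block of C stays resident while panels of A and B stream past it, the reuse pattern such an accelerator would keep in local storage. The block size `bc` is an arbitrary tuning parameter.

```c
#include <stddef.h>

/* Illustrative blocked matrix product C += A * B (row-major, n x n).
 * Each bc x bc block of C stays "resident" while panels of A and B
 * stream past it -- the reuse pattern an accelerator like the LAP
 * keeps in local store instead of a cache hierarchy. */
void gemm_blocked(size_t n, size_t bc,
                  const double *A, const double *B, double *C)
{
    for (size_t i0 = 0; i0 < n; i0 += bc)
        for (size_t j0 = 0; j0 < n; j0 += bc)
            for (size_t k0 = 0; k0 < n; k0 += bc)
                /* one block update: C[i0..][j0..] += A-panel * B-panel */
                for (size_t i = i0; i < i0 + bc && i < n; i++)
                    for (size_t k = k0; k < k0 + bc && k < n; k++) {
                        double a = A[i * n + k];
                        for (size_t j = j0; j < j0 + bc && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```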
32.
Architectural enhancement for message passing interconnects
Khun Jush, Farshad, 21 October 2008 (has links)
Research in high-performance architecture has focused on achieving more computing power to solve computationally intensive problems. Advances in single-processor technology alone cannot deliver the several hundred- or thousand-fold performance improvements such applications need. The parallel-architecture approach promises more computing power and scalability. Cluster computing, built from low-cost, high-performance processors, has become an alternative to proprietary and expensive supercomputer platforms. As in any other parallel system, communication overhead (including hardware, software, and network) adversely affects computation performance in a cluster environment; decreasing this overhead is therefore the main concern in such environments.
Communication overhead is the key obstacle to reaching hardware performance limits and is mostly associated with software overhead, a significant portion of which is attributed to message copying. Message copying is largely caused by a lack of knowledge of the next received message, which can be dealt with through speculation. To reduce this copying overhead and advance toward a finer granularity, architectural extensions comprising a specialized network cache and instructions to manage the operations of these extensions were introduced. In order to investigate the effectiveness of the proposed architectural enhancement, a simulation environment was established by expanding an existing single-thread infrastructure to one that can run MPI applications. The proposed extensions were then implemented, along with the MPI functions, on top of the SimpleScalar infrastructure.
Further, two techniques were proposed to achieve zero-copy data transfer in message-passing environments: two policies that determine when a message is to be bound and sent to the data cache, called Direct to Cache Transfer (DTCT) and lazy DTCT. The simulations showed that using the proposed network extension together with the DTCT techniques resulted in fewer data cache misses than when the DTCT techniques were not used. This involved a study of the possible overhead and cache pollution introduced by the operating system and the communication stack, as exemplified by Linux, TCP/IP, and M-VIA, and an exploration of their effects on the proposed extensions. Ultimately, this enabled a comparison of the performance achieved by applications running on a system incorporating the proposed extension with the performance of the same applications running on a standard system. The results showed that the proposed approach could improve the performance of MPI applications by 15 to 20%.
Moreover, data transfer mechanisms and the associated components in the Cell BE processor were studied. For this, two general data transfer methods were explored involving the PUT and GET functions, demonstrating that SPE-initiated DMA data transfers are faster than the corresponding PPE-initiated DMAs. The main components of each data transfer were also investigated. In the SPE-initiated GET function, the main component is data delivery. However, the PPE-initiated GET function shows a long DMA issue time as well as a lengthy gap in receiving successive messages. It was demonstrated that the main components of the SPE-initiated PUT function are data delivery and latency (that is, the time to receive the first byte), and the main components of the PPE-initiated PUT function are the DMA issue time and latency. Further, an investigation revealed that memory-management overhead is comparable to the data transfer time; this calls for techniques to hide this unavoidable overhead in order to reach high-throughput communication in MPI implementations on the Cell BE processor.
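Communication overhead of the kind discussed here is commonly quantified with an MPI ping-pong microbenchmark, where the round-trip time includes all software copying costs. The sketch below is a generic benchmark for illustration, not the thesis' SimpleScalar-based simulation setup; message size and repetition count are arbitrary.

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal MPI ping-pong: the round-trip time exposes the software
 * overhead (message copies included) that a specialized network
 * cache aims to reduce. Run with two ranks. */
int main(int argc, char **argv)
{
    enum { N = 4096, REPS = 1000 };
    static char buf[N];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    if (rank == 0)
        printf("avg one-way latency: %g us\n",
               (MPI_Wtime() - t0) / (2.0 * REPS) * 1e6);
    MPI_Finalize();
    return 0;
}
```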
33.
Co-scheduling for large-scale applications: memory and resilience
Pottier, Loïc, 18 September 2018 (has links)
This thesis explores co-scheduling problems in the context of large-scale applications, with two main focuses: the memory side, in particular the cache memory, and the resilience side. With the recent advent of many-core architectures such as chip multiprocessors (CMP), the number of processing units is increasing. In this context, the benefits of co-scheduling techniques have been demonstrated. The main idea behind co-scheduling is to execute applications concurrently rather than in sequence in order to improve the global throughput of the platform. But sharing resources often generates interference. With a growing number of processing units accessing the same last-level cache, this interference among co-scheduled applications becomes critical. In addition, with the increasing number of processors, the probability of a failure increases too. Resilience aspects must be taken into account, especially for co-scheduling, because failure-prone resources might be shared between applications.
On the memory side, we focus on the interference in the last-level cache; one solution used to reduce it is cache partitioning. Extensive simulations demonstrate the usefulness of co-scheduling when our efficient cache-partitioning strategies are deployed. We also investigate the same problem on a real cache-partitioned chip multiprocessor, using the Cache Allocation Technology recently provided by Intel. Still on the memory side, we then study how to model and schedule task graphs on the new many-core architectures, such as the Knights Landing architecture. These architectures offer a new level in the memory hierarchy through a new on-package high-bandwidth memory. Current approaches usually do not take this new memory level into account, yet new scheduling algorithms and data-partitioning schemes are needed to take advantage of this deep memory hierarchy.
On the resilience side, we explore the impact of failures on co-scheduling performance. The co-scheduling approach has been demonstrated in a fault-free context, but large-scale computer systems are confronted with frequent failures, and resilience techniques must be employed for large applications to execute efficiently. Indeed, failures may create severe imbalance between applications and significantly degrade performance. We aim at minimizing the expected completion time of a set of co-scheduled applications in a failure-prone context by redistributing processors.
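For readers unfamiliar with the Cache Allocation Technology mentioned above: on Linux it is exposed through the resctrl filesystem. The sketch below is a minimal, hypothetical illustration of confining one co-scheduled application to a subset of L3 ways. It assumes resctrl is mounted at /sys/fs/resctrl and an 8-way capacity mask on cache id 0; the group name, way mask, and PID handling are illustration values only, not the thesis' partitioning strategy.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Hypothetical sketch: confine one application to 4 of 8 L3 ways
 * using Intel CAT via the Linux resctrl interface. Paths, mask and
 * group name are illustration values. */
int confine_app(pid_t pid)
{
    mkdir("/sys/fs/resctrl/app0", 0755);      /* new resource group */

    FILE *f = fopen("/sys/fs/resctrl/app0/schemata", "w");
    if (!f) return -1;
    fprintf(f, "L3:0=0f\n");                  /* grant 4 of 8 ways  */
    fclose(f);

    f = fopen("/sys/fs/resctrl/app0/tasks", "w");
    if (!f) return -1;
    fprintf(f, "%d\n", (int)pid);             /* move the app in    */
    fclose(f);
    return 0;
}
```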
34.
Adaptive Hierarchical RAID
Muppalaneni, Nitin, 01 1900 (has links) (PDF)
No description available.
35.
Performance Analysis of Complex Shared Memory Systems
Molka, Daniel, 10 March 2017 (has links)
Systems for high performance computing are getting increasingly complex. On the one hand, the number of processors is increasing. On the other hand, the individual processors are getting more and more powerful. In recent years, the latter has to a large extent been achieved by increasing the number of cores per processor. Unfortunately, scientific applications often fail to fully utilize the available computational performance. Therefore, performance analysis tools that help to localize and fix performance problems are indispensable. Large scale systems for high performance computing typically consist of multiple compute nodes that are connected via network. Tools that analyze performance problems arising from the use of multiple nodes are readily available. However, the increasing number of cores per processor that can be observed within the last decade represents a major change in the node architecture. Therefore, this work concentrates on the analysis of the node performance.
The goal of this thesis is to improve the understanding of the achieved application performance on existing hardware. It can be observed that the scaling of parallel applications on multi-core processors differs significantly from the scaling on multiple processors. Therefore, the properties of shared resources in contemporary multi-core processors as well as remote accesses in multi-processor systems are investigated and their respective impact on the application performance is analyzed. As a first step, a comprehensive suite of highly optimized micro-benchmarks is developed. These benchmarks are able to determine the performance of memory accesses depending on the location and coherence state of the data. They are used to perform an in-depth analysis of the characteristics of memory accesses in contemporary multi-processor systems, which identifies potential bottlenecks. However, in order to localize performance problems, it also has to be determined to which extent the application performance is limited by certain resources.
Therefore, a methodology to derive metrics for the utilization of individual components in the memory hierarchy as well as waiting times caused by memory accesses is developed in the second step. The approach is based on hardware performance counters, which record the number of certain hardware events. The developed micro-benchmarks are used to selectively stress individual components, which makes it possible to identify the events that provide a reasonable assessment of the utilization of the respective component and of the time spent waiting for memory accesses to complete. Finally, the knowledge gained from this process is used to implement a visualization of memory related performance issues in existing performance analysis tools.
The results of the micro-benchmarks reveal that the increasing number of cores per processor and the usage of multiple processors per node leads to complex systems with vastly different performance characteristics of memory accesses depending on the location of the accessed data. Furthermore, it can be observed that the aggregated throughput of shared resources in multi-core processors does not necessarily scale linearly with the number of cores that access them concurrently, which limits the scalability of parallel applications. It is shown that the proposed methodology for the identification of meaningful hardware performance counters yields useful metrics for the localization of memory related performance limitations.
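Micro-benchmarks of this kind typically measure load-to-use latency with pointer chasing, where each load depends on the previous one. The following C sketch is a generic example of the idea, not the thesis' benchmark suite; the buffer size and iteration count are arbitrary, it assumes an 8-byte size_t, and the coherence-state control that the thesis' benchmarks add is omitted.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pointer-chasing latency micro-benchmark (generic sketch). Each
 * load depends on the previous one, so the time per iteration
 * approximates the access latency of the memory level the buffer
 * fits in; vary n to walk the hierarchy. */
int main(void)
{
    const size_t n = 1u << 22;          /* 4 Mi slots = 32 MiB: misses L3 */
    const size_t iters = 20 * 1000 * 1000;
    size_t *next = malloc(n * sizeof *next);

    /* Random cyclic permutation (Sattolo's algorithm) defeats the
     * hardware prefetcher: the chase visits every slot once. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;  /* j in [0, i) */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    volatile size_t p = 0;              /* serializes the loads */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++) p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.2f ns per dependent load\n", ns / (double)iters);
    free(next);
    return 0;
}
```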
36.
The HELLS-Join: A Heterogeneous Stream join for ExtremeLy Large windows
Karnagel, Tomas; Habich, Dirk; Schlegel, Benjamin; Lehner, Wolfgang, 19 September 2022 (has links)
Upcoming processors are combining different computing units in a tightly-coupled approach using a unified shared memory hierarchy. This tightly-coupled combination leads to novel properties with regard to cooperation and interaction. This paper demonstrates the advantages of those processors for a stream-join operator as an important data-intensive example. In detail, we propose our HELLS-Join approach, employing all heterogeneous devices by outsourcing parts of the algorithm to the appropriate device. Our HELLS-Join performs better than CPU stream joins, allowing wider time windows, higher stream frequencies, and more streams to be joined than before.
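To make the operator concrete, the sketch below shows the single-threaded skeleton of a symmetric windowed stream join, the baseline computation that HELLS-Join partitions across heterogeneous devices. The tuple layout, the tuple-count-based window, and the window size are simplifying assumptions for illustration, not the paper's design.

```c
#include <stdio.h>

/* Single-threaded sketch of a symmetric windowed stream join.
 * Each arriving tuple probes the opposite stream's window, then
 * enters its own. Window size and layout are illustration values. */
#define WIN 1024                 /* tuples kept per stream window */

typedef struct { long ts; int key; } tuple_t;

typedef struct {
    tuple_t buf[WIN];
    int     head, count;         /* circular buffer state */
} window_t;

static void window_insert(window_t *w, tuple_t t)
{
    w->buf[w->head] = t;
    w->head = (w->head + 1) % WIN;
    if (w->count < WIN) w->count++;
}

/* Probe the opposite window; emit matches on equal keys. */
static void probe(const window_t *w, tuple_t t)
{
    for (int i = 0; i < w->count; i++)
        if (w->buf[i].key == t.key)
            printf("match: key %d (ts %ld, %ld)\n",
                   t.key, t.ts, w->buf[i].ts);
}

/* Arrival handler for stream R (symmetric code handles stream S). */
void on_arrival_r(window_t *r, window_t *s, tuple_t t)
{
    probe(s, t);                 /* probe phase: offloadable to GPU */
    window_insert(r, t);         /* maintenance phase: stays on CPU */
}
```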
37.
Multicore architectures and worst-case execution time
Lesage, Benjamin, 21 May 2013 (has links)
Critical tasks in real-time systems are subject to both timing and correctness constraints. Hence, the validation of a real-time system relies on the estimation of its tasks' worst-case execution times (WCET). Resource sharing, as it occurs on multicore architectures, hinders the computation of such estimates: the timing behaviour of a task is affected by its co-running tasks, whether through arbitration of resource accesses or through concurrent modifications of a resource's state. This study focuses on estimating the contribution of the memory hierarchy to tasks' worst-case execution times. Existing analysis methods, defined for instruction caches, are extended to support private and shared data caches, hence allowing for the analysis of rich memory hierarchies. Cache bypass is then used to reduce the pressure laid by concurrent tasks on shared cache levels. To this end, we propose different bypass heuristics based on capturing the reuse of cache blocks between memory accesses. Our second proposal is the Preti partitioning scheme, which allocates to each task a cache space free from inter-task conflicts. Preti offers the added benefit of preserving average-case performance for non-critical tasks running alongside real-time ones on hybrid-criticality systems.
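As a rough illustration of cache bypass (under assumptions not taken from the thesis: a direct-mapped shared cache model and a precomputed reuse verdict per access), the sketch below lets accesses without forthcoming reuse skip allocation, so they cannot evict lines that will be reused.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bypass sketch: accesses whose block shows no
 * forthcoming reuse (as a static reuse analysis would establish
 * offline) skip the shared cache instead of evicting useful lines.
 * Direct-mapped model and the reuse flag are assumptions. */
#define SETS 256
#define LINE 64

typedef struct { uint64_t tag; bool valid; } line_t;
static line_t cache[SETS];

/* Returns true on hit. 'will_be_reused' stands in for the verdict
 * of the offline reuse analysis. */
bool access_block(uint64_t addr, bool will_be_reused)
{
    uint64_t blk = addr / LINE;
    uint64_t set = blk % SETS;

    if (cache[set].valid && cache[set].tag == blk)
        return true;                    /* hit */

    if (will_be_reused) {               /* allocate only useful blocks */
        cache[set].tag = blk;
        cache[set].valid = true;
    }
    /* else: bypass -- serve the access without touching the cache,
     * preserving the set for blocks that will be reused. */
    return false;
}
```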
38.
Compilation methodologies for reconfigurable array architecture systems
Γεωργιόπουλος, Σταύρος, 01 February 2013 (has links)
This PhD thesis focuses on the development of efficient compilation techniques for coarse-grain reconfigurable array architectures. Data-intensive applications were used to evaluate the proposed methodologies. The aim is to optimize application execution with respect to characteristics of reconfigurable systems such as performance, instructions per clock cycle, integration area, and utilization of the processing resources. This is achieved by introducing novel mapping techniques and by finding optimal architectures.
In the first part of the thesis, research, development, and automation of compilation techniques targeting coarse-grain reconfigurable arrays were carried out. The main feature of these architectures is the presence of a large number of processing elements working in parallel, thus speeding up the execution of applications that exhibit operation-level parallelism. In embedded systems they operate as a coprocessor.
Research on reconfigurable array architectures has gained considerable interest because of their flexibility, scalability, and performance, particularly for data-intensive applications. Nevertheless, compiling applications onto these architectures is highly complex. Appropriate tools and special mapping methodologies are needed to exploit the characteristics of these architectures. With this in mind, we proposed a novel retargetable methodology for mapping applications, which has also been automated through a prototype compilation tool targeting a parametric architectural template. The result was finding the best architectures with respect to performance, instructions per clock cycle, and tool execution time for a set of applications.
The efficiency of a reconfigurable array architecture in terms of speed and hardware cost is difficult to measure, so few studies have examined the effect of architectural parameters on factors such as integration area and the number of instructions per clock cycle. Moreover, no prior work had examined the impact of multipliers embedded in the processing elements of reconfigurable architectures. Using the existing retargetable compilation methodology and a parametric implementation of the architecture in a hardware description language, we examine the effect of multipliers from both the mapping and the architecture perspectives.
We also describe an original mapping methodology introduced for efficiently mapping the Fast Fourier Transform (FFT) algorithm onto coarse-grain reconfigurable array architectures. The FFT algorithm is characterized by a large number of operations, primarily multiplications, that slow down the performance of a reconfigurable architecture. Exploiting the internal repetitive structure of the algorithm and using a reconfigurable architecture template of 16 processing elements, we developed a novel mapping technique. Additionally, our technique takes into account the memory hierarchy between main memory and the reconfigurable architecture in order to further accelerate the execution of the FFT. Using the proposed mapping technique leads to processing-element utilization above 90%, a value at least 37% higher than the best reported in the literature.
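The internal repetitive structure exploited by the mapping technique is the FFT butterfly: every stage repeats the same multiply-add pattern, which is what makes the algorithm amenable to a fixed grid of processing elements. For reference, a textbook iterative radix-2 Cooley-Tukey FFT is sketched below; this is a standard formulation, not the thesis' mapped implementation.

```c
#include <complex.h>
#include <math.h>
#include <stddef.h>

/* Iterative radix-2 Cooley-Tukey FFT, in place, n a power of two.
 * Every stage repeats the same butterfly -- the regular structure a
 * reconfigurable array can distribute across its processing
 * elements; the multiplications by w are the costly operations. */
void fft(double complex *x, size_t n)
{
    const double pi = acos(-1.0);

    /* bit-reversal permutation */
    for (size_t i = 1, j = 0; i < n; i++) {
        size_t bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) { double complex t = x[i]; x[i] = x[j]; x[j] = t; }
    }
    /* log2(n) stages of butterflies */
    for (size_t len = 2; len <= n; len <<= 1) {
        double complex wl = cexp(-2.0 * I * pi / (double)len);
        for (size_t i = 0; i < n; i += len) {
            double complex w = 1.0;
            for (size_t k = 0; k < len / 2; k++) {
                double complex u = x[i + k];
                double complex v = x[i + k + len / 2] * w;
                x[i + k]           = u + v;   /* butterfly outputs */
                x[i + k + len / 2] = u - v;
                w *= wl;                      /* next twiddle factor */
            }
        }
    }
}
```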
39.
Memory hierarchy in embedded multiprocessor systems built around networks on chip
Belhadj Amor, Hela, 05 October 2017 (has links)
Multi- and many-core parallel systems, which deliver high computing power at low energy cost, are nowadays a reality. However, exploiting the performance of these architectures depends on how efficiently the system manages data accesses. The aim of our work is to improve the efficiency of these accesses by exploiting the characteristics of the hardware architecture.
In a first part, we propose a new cache hierarchy organization that aims at maximizing the use of the available storage space at each level. This solution, based on non-uniform cache access (NUCA) architectures, supports inter- and intra-level transfers within the hierarchy. It requires a cache coherence protocol adapted to its specifications.
The transfer of data within the hierarchy is, of course, also a determinant of system performance. In a second part, we consider the specific communication needs of the protocol. We propose a virtualized network as an ad hoc communication medium to manage coherence traffic at a lower cost. It links the caches of the same level to support intra-level transfers, which are a specificity of our protocol, in order to reduce the average access latency.
40.
Memory-aware algorithms: from multicores to large scale platforms
Jacquelin, Mathias, 20 July 2011 (has links)
This thesis focuses on memory-aware algorithms tailored for hierarchical memory architectures, found for instance within multicore processors. We first study the matrix product on multicore architectures. We model such a processor, derive lower bounds on the communication volume, introduce three ad hoc algorithms that reduce this volume, and experimentally assess their performance. We then target a more complex operation: the QR factorization of tall matrices (more rows than columns). We revisit existing algorithms to better exploit the parallelism of multicore processors, study the critical paths of many algorithms, prove some of them to be asymptotically optimal, and assess their performance.
In the next study, we focus on scheduling streaming applications onto a heterogeneous multicore platform, the QS 22. We introduce a model of the platform and use steady-state scheduling techniques so as to maximize the throughput. We present a mixed-integer programming approach that computes an optimal solution, and propose simpler heuristics.
We then focus on minimizing the amount of memory required for tree-shaped workflows, targeting a classical two-level memory system in which I/O operations are transfers from one memory level to the other. We propose a new exact algorithm, and show that there exist trees for which postorder traversals are arbitrarily bad. We then study the problem of minimizing the I/O volume for a given memory, show that it is NP-hard, and provide a set of heuristics.
Finally, we compare archival policies for BLUE WATERS. We introduce two archival policies and adapt the well-known RAIT strategy. We provide a model of the tape storage platform, and use it to assess the performance of the three policies through simulation.
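To illustrate why traversal order matters for the tree-shaped workflows discussed above, the following C sketch computes the peak memory of the best postorder traversal under a simple model (executing a node requires all of its children's results plus its own output in memory). The model and code are illustrative assumptions rather than the thesis' exact framework, which further shows that even this best postorder can be arbitrarily worse than an optimal, non-postorder traversal.

```c
#include <stdlib.h>

/* Peak memory of the best postorder traversal of a task tree.
 * Children are visited in decreasing (peak - out), the classical
 * ordering that minimizes the peak among postorder traversals. */
typedef struct node {
    long out;                  /* size of this node's output */
    int nchildren;
    struct node **child;
} node_t;

long postorder_peak(const node_t *n)
{
    if (n->nchildren == 0) return n->out;

    long *P = malloc((size_t)n->nchildren * sizeof *P);
    long *O = malloc((size_t)n->nchildren * sizeof *O);
    for (int i = 0; i < n->nchildren; i++) {
        P[i] = postorder_peak(n->child[i]);
        O[i] = n->child[i]->out;
    }
    /* sort children by decreasing slack P - O (selection sort) */
    for (int i = 0; i < n->nchildren; i++)
        for (int j = i + 1; j < n->nchildren; j++)
            if (P[j] - O[j] > P[i] - O[i]) {
                long t;
                t = P[i]; P[i] = P[j]; P[j] = t;
                t = O[i]; O[i] = O[j]; O[j] = t;
            }

    long held = 0, peak = 0;   /* held = child results kept so far */
    for (int i = 0; i < n->nchildren; i++) {
        if (held + P[i] > peak) peak = held + P[i];
        held += O[i];
    }
    if (held + n->out > peak) peak = held + n->out;

    free(P); free(O);
    return peak;
}
```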