11

Advances Towards Data-Race-Free Cache Coherence Through Data Classification

Davari, Mahdad January 2017 (has links)
Providing a consistent view of the shared memory based on precise and well-defined semantics—a memory consistency model—has been an enabling factor in the widespread acceptance and commercial success of shared-memory architectures. Moreover, cache coherence protocols have been employed by the hardware to relieve programmers of the burden of dealing with the memory inconsistency that emerges in the presence of private caches. The principle behind all such cache coherence protocols is to guarantee that consistent values are read from the private caches at all times. In its most stringent form, a cache coherence protocol eagerly enforces two invariants before each data modification: i) no other core has a copy of the data in its private caches, and ii) all other cores know where to obtain the consistent data should they need it later. Nevertheless, by partly transferring the responsibility for maintaining those invariants to the programmers, commercial multicores have adopted weaker memory consistency models, namely Total Store Order (TSO), in order to optimize performance for the common case. Moreover, memory models with more relaxed invariants have been proposed based on the observation that more and more software is written in compliance with Data-Race-Free (DRF) semantics. The semantics of DRF software can be leveraged by the hardware to infer when data in the private caches might be inconsistent. As a result, the hardware ignores the inconsistent data and retrieves the consistent data from the shared memory. DRF semantics therefore relieves the hardware of the burden of eagerly enforcing the strong consistency invariants before each data modification. Instead, consistency is guaranteed only when needed. This enables manifold optimizations, such as reduced energy consumption and improved performance and scalability. How efficiently inconsistent data is detected and discarded is an important factor in the efficiency of such coherence protocols: discarding data that is in fact consistent does not affect correctness, but it results in performance loss and increased energy consumption. In this thesis we show how data classification can be leveraged as an effective tool to simplify cache coherence based on DRF semantics. In particular, we introduce simple but efficient hardware-based private/shared data classification techniques that can be used to efficiently detect inconsistent data, thus enabling low-overhead and scalable cache coherence solutions based on DRF semantics.
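The private/shared distinction that drives these techniques can be illustrated with a toy software model. The sketch below is a hypothetical classifier, not the hardware mechanism proposed in the thesis: it records which core first touches each cache-line-sized block and promotes the block to shared as soon as a second core accesses it; only shared blocks need to be treated as potentially inconsistent at synchronization points.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <unordered_set>

// Toy private/shared classifier, illustrative only: real proposals work in
// hardware (or in the TLB/page tables) rather than on a software map.
class Classifier {
public:
    enum class State { Private, Shared };

    // Record an access by `core` to the block containing `addr` and
    // return the block's classification after the access.
    State access(std::uint64_t addr, int core) {
        std::uint64_t block = addr >> 6;               // assume 64-byte cache blocks
        if (shared_.count(block)) return State::Shared;
        auto it = owner_.find(block);
        if (it == owner_.end()) {                      // first touch: private to this core
            owner_[block] = core;
            return State::Private;
        }
        if (it->second == core) return State::Private; // same core again: still private
        shared_.insert(block);                         // second core observed: now shared
        return State::Shared;
    }

    bool isShared(std::uint64_t addr) const { return shared_.count(addr >> 6) != 0; }

private:
    std::unordered_map<std::uint64_t, int> owner_;  // block -> first core that touched it
    std::unordered_set<std::uint64_t> shared_;      // blocks accessed by more than one core
};

int main() {
    Classifier c;
    c.access(0x1000, 0);                     // core 0: block becomes private
    c.access(0x1008, 0);                     // same block, same core: still private
    c.access(0x1010, 1);                     // core 1: block reclassified as shared
    std::cout << c.isShared(0x1000) << "\n"; // prints 1
    // At synchronization points, a DRF-based protocol need only self-invalidate
    // blocks classified as shared; private data is always consistent.
}
```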
12

A Hierarchy Navigation Framework: Supporting Scalable Interactive Exploration over Large Databases

Mehta, Nishant K 27 August 2004 (has links)
"Modern computer applications from business decision support to scientific data analysis use visualization techniques. However, visual exploration tools do not scale well for large data sets, i.e., the level of clutter on the screen is typically unacceptable. To solve the problem of cluttering at the interface level, visualization tools have recently been extended to support hierarchical views of the data, with support for focusing and drilling-down using interactive selection. To solve the scalability problem, we now investigate how best to couple such a near real-time responsive visualization tool with a database management system. Our solution proposes a framework containing three major components: hierarchy encoding, caching and prefetching. Since the direct implementation of the visual user interactions on hierarchical data sets corresponds to recursive query processing, we have developed a hierarchy encoding method, called the MinMax tree, that pushes the on-line recursive processing step into an off-line precomputation step. The MinMax encoding scheme allows us to map the hierarchy to a 2-dimensional space and the recursive navigation operations at the interface level to 2-dimensional spatial range queries. These queries can then be answered efficiently using spatial indexes. To compliment this encoding scheme we employ a caching strategy that exploits user navigation characteristics to cache the nodes having high probability of being referenced again. Based on user characteristics we choose to implement two replacement policies one which exploits temporal locality (LRU) and the other exploits spatial locality (Distance). Also, to enhance the performance of the cache we propose using a prefetching mechanism that predicts and prefetches future user requests into the cache. Together the components form a comprehensive framework that scales the visualization tool to support navigation operations over large data sets. The techniques have been incorporated into XmdvTool, a free software package for multi-variate data visualization and exploration. Our experimental results quantify the effectiveness of each component and show that collectively the components scale the XmdvTool to support navigation operations over large data sets. Mainly, our experimental results show that together the components can achieve 63\% to 96\%reduction in response time latency even with limited system resources."
13

Topology-aware load balancing for performance portability over parallel high performance systems / Équilibrage de charge prenant en compte la topologie des plates-formes de calcul parallèle pour la portabilité des performances

Lima Pilla, Laércio 11 April 2014 (has links)
Cette thèse présente nos travaux de recherche qui ont comme principal objectif d'assurer la portabilité des performances et le passage à l'échelle des applications scientifiques complexes exécutées sur des plates-formes multi-coeurs parallèles et hiérarchiques. La portabilité des performances est obtenue lorsque l'ordonnancement des tâches d'une application permet de réduire les périodes d'inactivité des coeurs de la plate-forme. Cette portabilité des performances peut être affectée par différents problèmes tels que des déséquilibres de charge, des communications coûteuses et des surcoûts provenant de l'ordonnancement des tâches. Le déséquilibre de charge est la conséquence de comportements de charges irrégulières et dynamiques, où le volume de calcul varie dynamiquement en fonction de la tâche et de l'étape de simulation. Les communications coûteuses sont provoquées par un ordonnancement qui ne prend pas en compte les différents temps de communication entre tâches sur une plate-forme hiérarchique. Cela est accentué par des communications non uniformes et asymétriques au niveau mémoire et réseau. Enfin, ces surcoûts peuvent être générés par des algorithmes de placement trop complexes dont les coûts ne seraient pas compensés par les gains de performance. Pour atteindre cet objectif de portabilité des performances, notre approche repose sur une récolte d'informations précises sur la topologie de la machine qui vont aider les algorithmes d'ordonnancement de tâches à prendre les bonnes décisions. Dans ce contexte, nous avons proposé une modélisation générique de la topologie des plates-formes parallèles. Le modèle comprend des latences et des bandes passantes mesurées de la mémoire et du réseau qui mettent en évidence des asymétries. Ces informations sont utilisées par nos trois algorithmes d'équilibrage de charge nommés NucoLB, HwTopoLB et HierarchicalLB. De plus, ces algorithmes utilisent des informations provenant de l'exécution de l'application. NucoLB se concentre sur les aspects non uniformes des plates-formes parallèles, alors que HwTopoLB considère l'ensemble de la hiérarchie pour ses décisions, et HierarchicalLB combine ces algorithmes hiérarchiquement pour réduire son surcoût d'ordonnancement de tâches. Ces algorithmes cherchent à atténuer le déséquilibre de charge et les communications coûteuses tout en limitant les surcoûts de migration des tâches. Les résultats expérimentaux avec les trois régulateurs de charge proposés ont montré des améliorations de performances sur les meilleurs algorithmes de l'état de l'art : NucoLB a présenté jusqu'à 19% d'amélioration de performances sur un noeud de calcul ; HwTopoLB a amélioré les performances en moyenne de 19%, et HierarchicalLB a surclassé HwTopoLB de 22% en moyenne sur des plates-formes avec plus de dix noeuds de calcul. Ces résultats ont été obtenus en répartissant la charge entre les ressources disponibles, en réduisant les coûts de communication des applications et en gardant les surcoûts d'équilibrage de charge faibles. En ce sens, nos algorithmes d'équilibrage de charge permettent la portabilité des performances pour les applications scientifiques tout en étant indépendants de l'application et de l'architecture du système. / This thesis presents our research to provide performance portability and scalability to complex scientific applications running over hierarchical multicore parallel platforms. Performance portability is attained when low core idleness is achieved while mapping a given application to different platforms; it can be affected by performance problems such as load imbalance, costly communications, and overheads coming from the task mapping algorithm. Load imbalance is a result of irregular and dynamic load behaviors, where the amount of work to be processed varies depending on the task and the step of the simulation. Meanwhile, costly communications are caused by a task distribution that does not take into account the different communication times present in a hierarchical platform. This includes nonuniform and asymmetric communication costs at the memory and network levels. Lastly, task mapping overheads come from the execution time of the task mapping algorithm trying to mitigate load imbalance and costly communications, and from the migration of tasks. Our approach to achieve the goal of performance portability is based on the hypothesis that precise machine topology information can help task mapping algorithms in their decisions. In this context, we proposed a generic machine topology model of parallel platforms composed of one or more multicore compute nodes. It includes profiled latencies and bandwidths at the memory and network levels, and highlights asymmetries and nonuniformity at both levels. This information is employed by our three proposed topology-aware load balancing algorithms, named NucoLB, HwTopoLB, and HierarchicalLB. Besides topology information, these algorithms also employ application information gathered at runtime. NucoLB focuses on the nonuniform aspects of parallel platforms, while HwTopoLB considers the whole hierarchy in its decisions, and HierarchicalLB combines these algorithms hierarchically to reduce its task mapping overhead. These algorithms seek to mitigate load imbalance and costly communications while averting task migration overheads. Experimental results with the proposed load balancers over different platforms composed of one or more multicore compute nodes showed performance improvements over state-of-the-art load balancing algorithms: NucoLB presented improvements of up to 19% on one compute node; HwTopoLB achieved performance improvements of 19% on average; and HierarchicalLB outperformed HwTopoLB by 22% on average on parallel platforms with ten or more compute nodes. These results were achieved by equalizing work among the available resources, reducing the communication costs experienced by applications, and keeping load balancing overheads low. In this sense, our load balancing algorithms provide performance portability to scientific applications while remaining independent of the application and the system architecture.
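The shape of the decision such balancers make can be illustrated with a short sketch. It is a generic, hypothetical greedy mapping step, not NucoLB, HwTopoLB, or HierarchicalLB themselves: each task goes to the core that minimizes projected compute load plus a communication penalty weighted by profiled topology distance, the two terms the abstract balances.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Generic topology-aware greedy mapping step: pick, for each task, the core
// that minimizes projected compute load plus communication cost, where the
// cost of talking to a peer grows with topology distance. Illustrative only.
struct Task { double load; std::vector<int> peers; };  // peers: tasks it talks to

int pickCore(const Task& t,
             const std::vector<double>& coreLoad,
             const std::vector<int>& peerCore,   // core of each already-placed task, -1 if unplaced
             const std::vector<std::vector<double>>& dist) {  // profiled topology distances
    int best = 0;
    double bestCost = 1e300;
    for (std::size_t c = 0; c < coreLoad.size(); ++c) {
        double cost = coreLoad[c] + t.load;      // projected compute load on core c
        for (int p : t.peers)
            if (peerCore[p] >= 0) cost += dist[c][peerCore[p]];  // communication penalty
        if (cost < bestCost) { bestCost = cost; best = static_cast<int>(c); }
    }
    return best;
}

int main() {
    // Two cores with a small cross-core distance (e.g., same node).
    std::vector<std::vector<double>> dist = {{0.0, 0.1}, {0.1, 0.0}};
    std::vector<double> coreLoad = {2.0, 0.5};
    std::vector<int> peerCore = {0, -1};         // task 0 is already on core 0
    Task t{1.0, {0}};                            // new task communicates with task 0
    std::cout << pickCore(t, coreLoad, peerCore, dist) << "\n";
    // Prints 1: the lighter core wins despite the communication penalty.
}
```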
14

Intégration de technologies de mémoires non volatiles émergentes dans la hiérarchie de caches pour améliorer l'efficacité énergétique / Integration of emerging non volatile memory technologies in cache hierarchy for improving energy-efficiency

Péneau, Pierre-Yves 31 October 2018 (has links)
De nos jours, des efforts majeurs pour la conception de systèmes sur puces performants et efficaces énergétiquement sont en cours. Le déclin de la loi de Moore au début du XXIe siècle a poussé les concepteurs à augmenter le nombre de cœurs par processeur pour continuer d'améliorer les performances. En conséquence, la surface de silicium occupée par les mémoires caches a augmenté. La finesse de gravure toujours plus petite a également fait augmenter le courant de fuite des transistors CMOS. Ainsi, la consommation énergétique des mémoires occupe une part de plus en plus importante dans la consommation globale des puces. Pour diminuer cette consommation, de nouvelles technologies de mémoires émergent depuis une dizaine d'années : les mémoires non volatiles (NVM). Ces mémoires ont la particularité d'avoir un courant de fuite très faible comparé aux technologies CMOS classiques. De fait, leur utilisation dans une architecture permettrait de diminuer la consommation globale de la hiérarchie de caches. Cependant, ces technologies souffrent de latences d'accès plus élevées que la SRAM, de coûts énergétiques d'accès plus importants et d'une durée de vie limitée. Leur intégration à des systèmes sur puces nécessite de continuer à rechercher des solutions. Cette thèse cherche à évaluer l'impact d'un changement de technologie dans la hiérarchie de caches. Plus spécifiquement, elle s'intéresse au cache de dernier niveau (LLC) et la technologie non volatile considérée est la STT-MRAM. Nos travaux adoptent un point de vue architectural dans lequel une modification de la technologie n'est pas retenue. Nous cherchons alors à intégrer les caractéristiques différentes de la STT-MRAM lors de la conception de la hiérarchie mémoire. Une première étude a permis de mettre en place un cadre d'exploration architectural pour des systèmes contenant des mémoires émergentes. Une seconde étude sur les optimisations architecturales au niveau du LLC a été menée pour identifier quelles sont les opportunités d'intégration de la STT-MRAM. Le but est d'améliorer l'efficacité énergétique tout en atténuant les pénalités d'accès dues aux fortes latences de cette technologie. / Today, intensive efforts to design energy-efficient and high-performance systems-on-chip (SoCs) are underway. The decline of Moore's law in the early 21st century pushed designers to increase the number of cores per processor to continue to improve performance. As a result, the silicon area occupied by cache memories has increased. Ever smaller technology nodes have also increased the leakage current of CMOS transistors. Thus, the energy consumption of memories represents an increasingly important share of the overall consumption of chips. To reduce this energy consumption, new memory technologies have emerged over the past decade: non-volatile memories (NVM). These memories have the particularity of a very low leakage current compared to conventional CMOS technologies. As such, their use in an architecture would reduce the overall consumption of the cache hierarchy. However, these technologies suffer from higher access latencies than SRAM, higher access energy costs, and limited lifetime. Their integration into SoCs requires a continuous research effort. This thesis aims to evaluate the impact of a change of technology in the cache hierarchy. More specifically, we are interested in the Last-Level Cache (LLC), and the non-volatile technology considered is STT-MRAM. Our work adopts an architectural point of view in which modifying the technology itself is not considered. Instead, we seek to integrate the distinctive characteristics of STT-MRAM at the architectural level when designing the memory hierarchy. A first study set up an architectural exploration framework for systems containing emerging memories. A second study on architectural optimizations at the LLC level was conducted to identify opportunities for integrating STT-MRAM. The goal is to improve energy efficiency while mitigating the access penalties due to the high latency of this technology.
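The trade-off the abstract describes can be made concrete with back-of-the-envelope arithmetic. The sketch below uses invented, order-of-magnitude parameters rather than measured values from the thesis: STT-MRAM nearly eliminates static leakage but pays more per access, writes especially, so it wins on total energy whenever leakage would otherwise dominate.

```cpp
#include <iostream>

// Back-of-the-envelope LLC energy model. All numbers below are invented,
// order-of-magnitude placeholders, not values from the thesis: STT-MRAM is
// modeled with near-zero leakage but costlier accesses (writes in particular).
struct CacheTech {
    double leakage_w;  // static power (W)
    double read_nj;    // energy per read (nJ)
    double write_nj;   // energy per write (nJ)
};

double totalEnergyJ(const CacheTech& t, double seconds,
                    double reads, double writes) {
    // total = static leakage over time + dynamic energy of all accesses
    return t.leakage_w * seconds + (reads * t.read_nj + writes * t.write_nj) * 1e-9;
}

int main() {
    CacheTech sram     = {2.0,  0.5, 0.5};  // leaky, cheap accesses
    CacheTech sttmram  = {0.05, 0.8, 2.5};  // near-zero leakage, costly writes
    double secs = 1.0, reads = 1e8, writes = 3e7;
    std::cout << "SRAM LLC:     " << totalEnergyJ(sram,    secs, reads, writes) << " J\n";
    std::cout << "STT-MRAM LLC: " << totalEnergyJ(sttmram, secs, reads, writes) << " J\n";
    // With these placeholder numbers the STT-MRAM cache wins on total energy
    // because leakage dominates; a write-heavy workload shifts the balance,
    // which is why write latency/energy mitigation is the thesis's focus.
}
```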
15

Network processor memory hierarchy designs for IP packet classification

Low, Douglas Wai Kok. January 2005 (has links)
Thesis (Ph. D.)--University of Washington, 2005. / Vita. Includes bibliographical references (p. 132-136).
16

Scalable and effective clustering, scheduling and monitoring of self-organizing grids

Yang, Weishuai. January 2008 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, 2008. / Includes bibliographical references.
17

Controle adaptativo para acesso à memória compartilhada em sistemas em chip / Adaptive control for shared memory access in systems-on-chip

Bonatto, Alexsandro Cristóvão January 2014 (has links)
Acessos simultâneos gerados por Elementos de Processamento (EP) contidos nos Sistemas em Chip (SoC) para um único canal de memória externa colocam desafios que requerem uma atenção especial por constituírem o gargalo para o desempenho de processamento. No caso em que os EPs são microprocessadores, a questão fica ainda mais evidente, pois a taxa de aumento da velocidade dos microprocessadores excede a taxa de aumento da velocidade da DRAM. Ambas aumentam exponencialmente, mas o expoente dos microprocessadores é maior do que o das memórias. Este efeito é denominado de "muro de memória" (Memory Wall) e indica que o gargalo de processamento está relacionado à diferença de velocidade. Neste cenário, novas estratégias de controle de acesso são necessárias para melhorar o desempenho. Plataformas heterogêneas de processamento multimídia são formadas por diversos EPs. Os acessos concorrentes a regiões de memória não contíguas em uma DRAM reduzem a largura de banda e aumentam a latência de acesso aos dados, degradando o desempenho de processamento. Esta tese mostra que a eficiência computacional pode ser melhorada com o uso de um fluxo de projeto centralizado em memória, ou seja, orientado para os aspectos funcionais da DRAM. Neste trabalho é apresentado um subsistema de memória com gerenciamento adaptativo de compartilhamento do canal de memória entre múltiplos clientes. Esta tese apresenta a arquitetura de um controlador de memória com comportamento predizível que faz a avaliação do pior caso de execução para as transações solicitadas pelos clientes em tempo de execução. Um modelo baseado em atrasos é utilizado para prever os piores casos para o conjunto de clientes. O subsistema de memória centraliza a comunicação de dados e gerencia os acessos dos diversos EPs do sistema, de forma que a comunicação seja atendida de acordo com as necessidades de cada aplicação. São apresentadas três contribuições principais: 1) um método de projeto de sistemas integrados centralizado em memória, que orienta o projeto para os aspectos funcionais da memória compartilhada; 2) um modelo baseado em atrasos para estimar o pior caso de execução do sistema, quanto aos tempos de resposta e largura de banda mínima alocada por cliente; 3) um árbitro adaptativo para gerenciamento dos acessos à memória externa com garantias de prazos de execução das transações. / The number of Processing Elements (PE) contained in a System-on-Chip (SoC) follows the growth of the number of transistors per chip. A SoC composed of multiple PEs, in some applications such as multimedia, implements algorithms that handle large volumes of data and justify the use of an external memory with large capacity. External memory accesses are shared by multiple PEs, adding challenges that demand special attention because they constitute the bottleneck for performance and a relevant factor in power consumption. In the case where the PEs are microprocessors, this issue becomes even more evident, as the rate of increase in the speed of microprocessors exceeds that of DRAM. This effect is called the "memory wall" and means that the processing bottleneck is related to the speed of data access. In this scenario, new access control strategies are needed to improve processing performance. Heterogeneous platforms for multimedia processing are formed by several PEs. Concurrent accesses to DRAM reduce bandwidth and increase data access latency, degrading processing performance. This thesis shows that significant improvements in computational efficiency can be obtained using a design methodology oriented to the functional aspects of DRAM, through a memory subsystem with adaptive management. It presents a data communication architecture for the integration of the system's PEs, based on an analytical model, to reduce latency and guarantee Quality of Service (QoS). The memory subsystem is organized as a hierarchy of memories, with the integration of the PEs centered on the main memory. The memory subsystem centralizes data communication and manages the accesses of the system's several PEs so that communication is served according to the needs of each application. This thesis proposes three major contributions: 1) a methodology for designing integrated systems based on the memory-centric design approach; 2) an analytical model based on delays used to evaluate the worst-case performance of the memory subsystem; 3) an arbiter for adaptive management of accesses to the external memory with guaranteed transaction execution times.
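The kind of worst-case bound such a predictable controller must establish can be illustrated with simple arithmetic. The sketch below assumes a plain round-robin arbiter and invented transaction times (a simplification; the thesis's arbiter is adaptive): a request from one client waits, in the worst case, for one maximal transaction from every other client.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Worst-case service latency under plain round-robin arbitration: a request
// from one client may have to wait for one maximal-length transaction from
// every other client before its own is served. Transaction times are invented
// placeholders; this shows the shape of the bound, not the thesis's arbiter.
double worstCaseLatency(const std::vector<double>& maxTxTime, std::size_t client) {
    double wait = 0.0;
    for (std::size_t i = 0; i < maxTxTime.size(); ++i)
        if (i != client) wait += maxTxTime[i];  // every other client served once first
    return wait + maxTxTime[client];            // then the client's own transaction
}

int main() {
    // Max transaction times (us) for four hypothetical clients:
    // CPU, GPU, DMA engine, display controller.
    std::vector<double> maxTx = {1.2, 4.0, 2.5, 0.8};
    for (std::size_t c = 0; c < maxTx.size(); ++c)
        std::cout << "client " << c << ": worst case "
                  << worstCaseLatency(maxTx, c) << " us\n";
    // An adaptive arbiter tightens these bounds by reordering or budgeting
    // clients whose deadlines would otherwise be missed.
}
```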
20

Performance Analysis of Complex Shared Memory Systems

Molka, Daniel 22 March 2017 (has links)
Systems for high performance computing are getting increasingly complex. On the one hand, the number of processors is increasing. On the other hand, the individual processors are getting more and more powerful. In recent years, the latter has to a large extent been achieved by increasing the number of cores per processor. Unfortunately, scientific applications often fail to fully utilize the available computational performance. Therefore, performance analysis tools that help to localize and fix performance problems are indispensable. Large scale systems for high performance computing typically consist of multiple compute nodes that are connected via network. Performance analysis tools that analyze performance problems arising from the use of multiple nodes are readily available. However, the increasing number of cores per processor that can be observed within the last decade represents a major change in the node architecture. Therefore, this work concentrates on the analysis of node performance. The goal of this thesis is to improve the understanding of the achieved application performance on existing hardware. It can be observed that the scaling of parallel applications on multi-core processors differs significantly from the scaling on multiple processors. Therefore, the properties of shared resources in contemporary multi-core processors as well as remote accesses in multi-processor systems are investigated and their respective impact on the application performance is analyzed. As a first step, a comprehensive suite of highly optimized micro-benchmarks is developed. These benchmarks are able to determine the performance of memory accesses depending on the location and coherence state of the data. They are used to perform an in-depth analysis of the characteristics of memory accesses in contemporary multi-processor systems, which identifies potential bottlenecks. However, in order to localize performance problems, it also has to be determined to what extent the application performance is limited by certain resources. Therefore, a methodology to derive metrics for the utilization of individual components in the memory hierarchy as well as waiting times caused by memory accesses is developed in the second step. The approach is based on hardware performance counters, which record the number of certain hardware events. The developed micro-benchmarks are used to selectively stress individual components, which makes it possible to identify the events that provide a reasonable assessment of the utilization of the respective component and of the amount of time that is spent waiting for memory accesses to complete. Finally, the knowledge gained from this process is used to implement a visualization of memory related performance issues in existing performance analysis tools. The results of the micro-benchmarks reveal that the increasing number of cores per processor and the usage of multiple processors per node lead to complex systems with vastly different performance characteristics of memory accesses depending on the location of the accessed data. Furthermore, it can be observed that the aggregated throughput of shared resources in multi-core processors does not necessarily scale linearly with the number of cores that access them concurrently, which limits the scalability of parallel applications. It is shown that the proposed methodology for the identification of meaningful hardware performance counters yields useful metrics for the localization of memory related performance limitations.
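The classic building block of such a micro-benchmark suite is pointer chasing: a chain of dependent loads through a random cyclic permutation defeats hardware prefetchers, so the time per hop approximates the access latency of whatever level of the memory hierarchy holds the working set. Below is a minimal sketch of the technique; the author's suite additionally controls thread placement, data location, and coherence states, which this sketch does not.

```cpp
#include <algorithm>
#include <chrono>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// Pointer-chasing latency micro-benchmark: each load depends on the previous
// one, so the measured time per hop approximates the latency of the memory
// level that holds the working set. Minimal sketch only.
int main() {
    const std::size_t n = 1 << 22;  // ~32 MiB of size_t: larger than most LLCs

    // Build one big random cycle: visiting nodes in shuffled order and linking
    // them into a ring guarantees the chase touches all n elements.
    std::vector<std::size_t> perm(n);
    std::iota(perm.begin(), perm.end(), 0);
    std::shuffle(perm.begin(), perm.end(), std::mt19937_64{42});
    std::vector<std::size_t> next(n);
    for (std::size_t i = 0; i < n; ++i)
        next[perm[i]] = perm[(i + 1) % n];

    std::size_t idx = 0;
    const std::size_t hops = 10'000'000;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < hops; ++i)
        idx = next[idx];            // dependent load: no overlap, no prefetch
    auto t1 = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
    // Printing idx keeps the compiler from optimizing the loop away.
    std::cout << "avg latency: " << ns << " ns/hop (idx=" << idx << ")\n";
}
```

Shrinking `n` so the working set fits in the L1, L2, or last-level cache makes the same loop report that level's latency instead, which is how such a benchmark maps out the hierarchy.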
