11 |
Contribution à l’émergence de nouvelles méthodes parallèles et réparties intelligentes utilisant un paradigme de programmation multi-niveaux pour le calcul extrême / Contribution to the emergence of new intelligent parallel and distributed methods using a multi-level programming paradigm for extreme computing
Wu, Xinzhe, 22 March 2019 (has links)
Krylov iterative methods are frequently used on High-Performance Computing (HPC) systems to solve the extremely large sparse linear systems and eigenvalue problems arising in science and engineering. With the increase in both the number of computing units and the heterogeneity of supercomputers, the time spent in global communication and synchronization severely degrades the parallel performance of iterative methods. Programming on supercomputers tends to become distributed and parallel. Algorithm development should therefore consider the following principles: 1) multi-granularity parallelism; 2) hierarchical memory; 3) minimization of global communication; 4) promotion of asynchronicity; 5) multi-level scheduling strategies and manager engines to handle heavy traffic and improve fault tolerance. In response to these goals, we present a distributed and parallel multi-level programming paradigm for Krylov methods on HPC platforms. The first part of our work focuses on the implementation of a scalable matrix generator that creates test matrices with customized eigenvalues for benchmarking iterative methods on supercomputers. In the second part, we study the numerical and parallel performance of the proposed distributed and parallel iterative method. Its implementation with a manager engine and runtime handles heavy communication traffic, fault tolerance, and reusability. In the third part, an auto-tuning scheme is introduced for the smart selection of its parameters at runtime. Finally, we analyse the possibility of implementing the distributed and parallel paradigm in a graph-based workflow runtime environment.
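The abstract does not spell out how such a generator works; as a minimal illustration of the idea of a test matrix with prescribed eigenvalues (an assumption for illustration, not the scalable generator developed in the thesis), the sketch below applies an orthogonal similarity transform to a mildly non-symmetric triangular matrix whose diagonal carries the chosen spectrum.

```python
# Minimal sketch, not the thesis's generator: a similarity transform
# preserves eigenvalues, so Q T Q^T has exactly the spectrum placed on
# the diagonal of the triangular matrix T.
import numpy as np

def matrix_with_spectrum(eigenvalues, coupling=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = len(eigenvalues)
    # A small strictly upper-triangular part keeps the matrix non-symmetric
    # while leaving the eigenvalue problem well conditioned.
    T = np.diag(eigenvalues) + coupling * np.triu(rng.standard_normal((n, n)), k=1)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal Q
    return Q @ T @ Q.T

if __name__ == "__main__":
    target = np.linspace(1.0, 100.0, 50)               # prescribed spectrum
    A = matrix_with_spectrum(target)
    recovered = np.sort(np.linalg.eigvals(A).real)
    print("max deviation from prescribed spectrum:", np.max(np.abs(recovered - target)))
```

Such a matrix can then be handed to any Krylov solver to check its convergence behaviour against the known spectrum.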
|
12 |
Energy consumption and performance of HPC architecture for Exascale / Consumo de energia e desempenho de arquiteturas PAD para Exascale
Oliveira, Daniel Alfonso Gonçalves de, January 2013 (has links)
One of the main concerns in building the next generation of High Performance Computing (HPC) systems is energy consumption. To break the exascale barrier, the scientific community needs to investigate alternatives that cope with energy consumption. Current HPC systems are power-hungry and already consume megawatts of power. Future exascale systems will be strongly constrained by their energy consumption requirements. Therefore, general-purpose high-power processors could be replaced by new architectures in HPC design. Two architectures emerge in the HPC context. The first architecture uses Graphics Processing Units (GPUs). GPUs have many processing cores, supporting the simultaneous execution of thousands of threads and adapting well to massively parallel applications.
Today, top-ranked HPC systems feature many GPUs, which deliver high processing speed at a low energy budget for various parallel applications. The second architecture uses low-power processors, such as ARM processors, which are improving in performance while keeping power consumption as low as possible. As an example of this gain, projects like Mont-Blanc bet on ARM to build energy-efficient HPC systems. This work aims to verify the potential of these emerging architectures. We evaluate these architectures and compare them to the most common current HPC architecture: high-power processors such as Intel's. The main goal is to analyze the energy consumption and performance of these architectures in the HPC context. Therefore, heterogeneous HPC benchmarks were executed on all architectures. The results show that the GPU architecture is the fastest and the best in terms of energy efficiency: GPUs were at least 5 times faster while consuming 18 times less energy across all tested benchmarks. We also observed that high-power processors are faster than low-power processors and consume less energy for heavy-weight workloads, whereas for light-weight workloads low-power processors presented better energy efficiency. We conclude that heterogeneous systems combining GPUs and low-power processors can be an interesting solution to achieve greater energy efficiency. Although low-power processors presented worse energy efficiency for heavy-weight workloads, their extremely low power draw while processing an application is smaller than the idle power of the other architectures. Therefore, combining low-power processors with GPUs could result in an overall energy efficiency greater than that of high-power processors combined with GPUs.
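As a back-of-the-envelope companion to the final argument, the sketch below compares energy-to-solution for a GPU driven by a high-power host versus a low-power host; every power and runtime figure is a hypothetical placeholder, not a measurement from this work.

```python
# Hedged illustration with invented numbers (not measured values):
# while the GPU runs the kernel, the host mostly sits idle, so the host's
# idle power is what differentiates the two pairings.

def energy_to_solution(host_idle_w, gpu_busy_w, runtime_s):
    """Total joules while the GPU computes and the host only manages it."""
    return (host_idle_w + gpu_busy_w) * runtime_s

runtime = 120.0          # hypothetical kernel runtime in seconds
gpu_busy_w = 200.0       # hypothetical GPU power while busy
pairings = {
    "high-power host + GPU": 95.0,   # hypothetical idle power of an x86 host
    "low-power host + GPU": 4.0,     # hypothetical idle power of an ARM host
}

for name, host_idle_w in pairings.items():
    kj = energy_to_solution(host_idle_w, gpu_busy_w, runtime) / 1e3
    print(f"{name}: {kj:.1f} kJ")
```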
|
15 |
Designing a Scalable Network Analysis and Monitoring Tool with MPI Support
Augustine, Albert Mathews, 28 December 2016 (has links)
No description available.
|
16 |
Energy and Performance Models Enabling Design Space Exploration using Domain Specific Languages
Umar, Mariam, 25 May 2018 (has links)
With the advent of exascale architectures, maximizing performance while keeping energy consumption within reasonable limits has become one of the most critical design constraints. This constraint is particularly significant in light of the 20 MW power budget set by the U.S. Department of Energy for exascale supercomputing facilities. Therefore, understanding an application's characteristics, execution pattern, energy footprint, and the interactions among these aspects is critical to improving the application's performance as well as its utilization of the underlying resources.
With conventional methods of analyzing performance and energy consumption trends, scientists are forced to limit themselves to a manageable number of design parameters. While these modeling techniques have catered to the needs of current high-performance computing systems, the complexity and scale of exascale systems demand that large-scale design-space-exploration techniques be developed to enable comprehensive analysis and evaluation.
In this dissertation we present research on performance and energy modeling of current high-performance computing systems and future exascale systems. Our thesis focuses on the design space exploration of current and future architectures in terms of their reconfigurability, an application's sensitivity to hardware characteristics (e.g., system clock, memory bandwidth), its execution patterns, its communication behavior, and its utilization of resources. Our research is aimed at understanding how to maximize the performance of exascale systems, minimize their energy consumption, and understand the trade-offs between the two.
We use analytical, statistical, and machine-learning approaches to develop accurate, portable, and scalable performance and energy models. We develop application and machine abstractions using Aspen (a domain specific language) to implement and evaluate our modeling techniques. As part of our research, we develop and evaluate system-level performance and energy-consumption models that form part of an automated modeling framework, which analyzes application signatures to evaluate the sensitivity of reconfigurable hardware components for candidate exascale proxy applications. We also develop statistical and machine-learning-based models of an application's execution patterns on heterogeneous platforms. Finally, we propose a communication and computation modeling and mapping framework for exascale proxy architectures and evaluate it for an exascale proxy application. These models serve as external and internal extensions to Aspen, which enable proxy exascale architecture implementations and thus facilitate design space exploration of exascale systems. / Ph. D. / Performance monitoring and modeling have been extensively researched topics over the last decade. The traditional approaches of manually modeling performance and energy worked well for previous-generation computers. With the prevalence of complex high-performance computers and clusters, and the anticipation of future exascale architectures, conventional modeling approaches will not be sufficient. A number of factors limit conventional modeling approaches, e.g., the complexity of current and future architectures, the increasing number of performance parameters to monitor, the diversity of architectures, etc. This issue will worsen with the advent of exascale architectures, which encompass complex micro-architectures along with increases in scale never before encountered in the computing industry.
In this dissertation, we focus on two primary aspects of performance and energy modeling in the context of current high-performance computing systems and future exascale architectures. We focus on adapting conventional modeling approaches to incorporate the properties of accuracy, scalability, portability, and architecture independence. Centered on performance and energy improvements, we also develop design space exploration techniques that study the effects of reconfigurable hardware on application performance. We also quantitatively measure how application performance changes with changing hardware configurations, using analytical and machine-learning modeling techniques. We explore a theoretical exascale architecture and validate it for performance limits. We develop a communication and computation model for the proxy exascale architecture and test it for strong and weak scaling in a molecular dynamics co-design study.
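The dissertation's models are built with Aspen; the sketch below is only a generic stand-in (plain Python, not Aspen syntax, with assumed workload and power constants) showing what a small design-space sweep over clock frequency and memory bandwidth looks like when runtime is the maximum of a compute-bound term and a memory-bound term.

```python
# A minimal analytical design-space sweep (illustrative only, not Aspen).
# Runtime is modeled as max(compute-bound, memory-bound); energy adds a
# static term that accrues over the whole runtime. All constants are assumed.
from itertools import product

FLOPS_NEEDED = 2e15      # total floating-point operations (assumed workload)
BYTES_MOVED = 5e14       # total DRAM traffic in bytes (assumed workload)

def runtime_s(clock_ghz, mem_bw_gbs, cores=64, flops_per_cycle=16):
    compute = FLOPS_NEEDED / (cores * flops_per_cycle * clock_ghz * 1e9)
    memory = BYTES_MOVED / (mem_bw_gbs * 1e9)
    return max(compute, memory)

def energy_j(t, clock_ghz, static_w=50.0, dyn_w_per_ghz=40.0):
    return t * (static_w + dyn_w_per_ghz * clock_ghz)

best = min(product([1.0, 1.5, 2.0, 2.5], [100, 200, 400]),
           key=lambda cfg: energy_j(runtime_s(*cfg), cfg[0]))
t = runtime_s(*best)
print(f"lowest-energy config: clock={best[0]} GHz, bw={best[1]} GB/s, "
      f"runtime={t:.1f} s, energy={energy_j(t, best[0]) / 1e3:.1f} kJ")
```

Sweeping even a handful of parameters this way quickly becomes intractable by hand at exascale, which is the motivation for the automated framework described above.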
|
17 |
Combining checkpointing and other resilience mechanisms for exascale systems / L'utilisation conjointe de mécanismes de sauvegarde de points de reprise (checkpoints) et d'autres mécanismes de résilience pour les systèmes exascales
Bentria, Dounia, 10 December 2014 (has links)
In this thesis, we are interested in scheduling and optimization problems in probabilistic contexts. The contributions of this thesis come in two parts. The first part is dedicated to the optimization of different fault-tolerance mechanisms for very large-scale machines that are subject to a probability of failure, and the second part is devoted to the optimization of the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates. In the first chapter, we present the related work of the first part and then introduce some new general results that are useful for resilience on exascale systems. In the second chapter, we study a unified model for several well-known checkpoint/restart protocols.
The proposed model is generic enough to encompass both extremes of the checkpoint/restart space, from coordinated approaches to a variety of uncoordinated checkpoint strategies. We propose a detailed analysis of several scenarios, including some of the most powerful currently available HPC platforms, as well as anticipated exascale designs. In the third, fourth, and fifth chapters, we study the combination of different fault-tolerance mechanisms (replication, fault prediction, and detection of silent errors) with the traditional checkpoint/restart mechanism. We evaluated several models using simulations. Our results show that these models are useful for a set of application models in the context of future exascale systems. In the second part of the thesis, we study the problem of minimizing the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates. The problem is to determine the order in which predicates should be evaluated so as to shortcut part of the query evaluation and minimize the expected cost. In the sixth chapter, we present the related work of the second part, and in the seventh chapter, we study the problem for queries expressed in disjunctive normal form. We consider the more general case where each data stream can appear in multiple predicates, and we consider two models: one where each predicate can access a single stream and one where each predicate can access multiple streams.
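The unified model itself is not given in the abstract; as a hedged baseline, the sketch below uses the classical first-order Young/Daly approximation (a standard reference point, not necessarily the model of this thesis), where the expected waste per unit of useful work is C/T + T/(2·MTBF) and the optimal checkpoint period is T ≈ √(2·C·MTBF).

```python
# Young/Daly first-order sketch (a standard baseline, not the thesis model):
# waste(T) ~ C/T (time lost writing checkpoints) + T/(2*mtbf) (expected
# re-execution after a failure). Minimizing gives T_opt = sqrt(2*C*mtbf).
import math

def waste(period_s, ckpt_cost_s, mtbf_s):
    return ckpt_cost_s / period_s + period_s / (2.0 * mtbf_s)

ckpt_cost = 60.0          # seconds to write one checkpoint (assumed)
mtbf = 24 * 3600.0        # one failure per day on average (assumed)

t_opt = math.sqrt(2.0 * ckpt_cost * mtbf)
print(f"optimal period ~ {t_opt / 60:.1f} min, "
      f"waste at optimum ~ {100 * waste(t_opt, ckpt_cost, mtbf):.2f}%")

# A coarse numerical scan agrees with the closed form.
scan = min((waste(t, ckpt_cost, mtbf), t) for t in range(60, 20000, 60))
print(f"numerical minimum near {scan[1] / 60:.1f} min")
```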
|
18 |
Photonic Interconnection Networks for Exascale Computers
Duro Gómez, José, 24 May 2021 (has links)
In recent years, multiple research projects around the world have focused on the design of supercomputers able to reach the exascale computing barrier, with the aim of supporting the execution of applications important for our society in fields such as health, artificial intelligence, and meteorology. Given the growing trend in computational power with each supercomputer generation, this objective is expected to be reached in the coming years. However, achieving this goal requires addressing several major challenges in the design and development of the system. One of the main ones is to achieve fast and efficient communications between the huge number of computational nodes and the memory systems.
Photonic technology provides several advantages over current electrical networks, such as higher bandwidth in the links, greater network parallelism thanks to DWDM, and better cable management due to its much smaller size. In this thesis, a feasibility study and development of interconnection networks using photonic technology have been carried out for future exascale systems within the European ExaNeSt project.
First, a characterization study of the network requirements of exascale applications has been carried out. The results of the analysis help understand the network requirements of exascale applications and thereby guide the design of the system network. This analysis considers three main parameters: the distribution of the messages based on their size and type, the required bandwidth consumption throughout the execution, and the spatial communication patterns between the nodes. The study reveals the need for a fast and efficient interconnection network, since most communications consist of bursts of transmissions, each with an average message size of 50 KB.
Next, this dissertation concentrates on identifying the main elements that differentiate photonic networks from electrical ones. We identify a sequence of steps in the design and implementation of a simulator, either i) modeling photonic technology from scratch or ii) extending an existing electrical network simulator in order to model photonics.
After that, two main performance comparison studies between electrical networks and different configurations of photonic networks are presented using classical topologies. In the former study, carried out with both synthetic traffic and traces of ExaNeSt on a torus, a fat tree, and a dragonfly, we found that photonic technology represents a noticeable improvement over electrical technology. Furthermore, the study shows that the parameter that most affects performance is the bandwidth of the photonic channel. The latter study analyzes the performance of real applications in large-scale simulations on a jellyfish topology. The results of this study corroborate the conclusions obtained in the previous one, also revealing that photonic technology allows reducing the complexity of some topologies, and therefore the cost of the network.
In the previous studies we observed that the network was underutilized, mainly because the topologies studied for electrical networks do not take advantage of the features provided by photonic technology. For this reason, we propose Segment Switching, a switching strategy aimed at reducing the length of the routes by implementing buffers at intermediate nodes along the path. Experimental results show that each of the studied topologies presents different buffering requirements. For the torus, the higher the number of buffers in the network, the higher the performance. For the fat tree, the key parameter is the buffer size: a configuration with buffers on all switches achieves performance similar to one that locates buffers only at the top level.
In summary, this thesis studies the use of photonic technology for networks of exascale systems and proposes to take advantage of the characteristics of this technology in current electrical network topologies. / This thesis has been conceived from the work carried out by the Polytechnic University of Valencia in the ExaNeSt European project. / Duro Gómez, J. (2021). Photonic Interconnection Networks for Exascale Computers [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/166796
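The tracing infrastructure is not detailed in the abstract; the sketch below (run over a tiny made-up trace, not ExaNeSt data) computes the three characterization parameters named above: the message-size distribution, the bandwidth demand over time, and the spatial communication matrix between nodes.

```python
# Minimal characterization of a (synthetic, made-up) point-to-point trace:
# each record is (time_s, src, dst, bytes). Not ExaNeSt tooling or data.
from collections import Counter, defaultdict

trace = [(0.01, 0, 1, 32_000), (0.02, 0, 2, 64_000), (0.03, 1, 2, 8_000),
         (1.10, 2, 3, 48_000), (1.12, 3, 0, 50_000), (1.15, 2, 3, 4_000)]

# 1) Message-size distribution (bucketed by the power of two just above each size).
sizes = Counter(1 << msg[3].bit_length() for msg in trace)

# 2) Bandwidth demand per one-second window.
bw = defaultdict(int)
for t, _, _, size in trace:
    bw[int(t)] += size

# 3) Spatial communication matrix (bytes exchanged per src->dst pair).
comm = defaultdict(int)
for _, src, dst, size in trace:
    comm[(src, dst)] += size

print("size buckets (upper bound -> count):", dict(sizes))
print("bytes per second window:", dict(bw))
print("bytes per (src, dst):", dict(comm))
```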
|
19 |
Data Transfer and Management through the IKAROS framework : Adopting an asynchronous non-blocking event driven approach to implement the Elastic-Transfer's IMAP client-server connection
Gkikas, Nikolaos, January 2015 (has links)
Given the current state of input/output (I/O) and storage devices in petascale systems, incremental solutions would be ineffective when implemented in exascale environments. According to "The International Exascale Software Roadmap" by Dongarra et al., existing I/O architectures are not sufficiently scalable, especially because current shared file systems have limitations when used in large-scale environments. These limitations are: bandwidth does not scale economically to large-scale systems; I/O traffic on the high-speed network can affect, and be affected by, other unrelated jobs; and I/O traffic on the storage server can affect, and be affected by, other unrelated jobs. Future applications on exascale computers will require I/O bandwidth proportional to their computational capabilities. To avoid these limitations, C. Filippidis, C. Markou, and Y. Cotronis proposed the IKAROS framework. In this thesis project, the capabilities of the publicly available elastic-transfer (eT) module, which was directly derived from IKAROS, are expanded. The eT uses Google's Gmail service as a utility for efficient meta-data management. Gmail is accessible through the IMAP protocol, and the existing version of the eT framework implements the Internet Message Access Protocol (IMAP) client-server connection through the "Inbox" module from the Node Package Manager (NPM) of the Node.js programming language. This module was used as a proof of concept, but in a production environment this implementation undermines the system's scalability and leads to an inefficient allocation of system resources when a large number of concurrent requests arrive at the eT meta-data server (MDS) at the same time. This thesis solves this problem by adopting an asynchronous, non-blocking, event-driven approach to implement the IMAP client-server connection. This was done by integrating and modifying the "Imap" NPM module from the NPM repository to suit the eT framework. Additionally, since the JavaScript Object Notation (JSON) format has become one of the most widespread data-interchange formats, the eT meta-data scheme is modified so that the system's meta-data can easily be parsed as JSON objects. This feature gives the framework wider compatibility and interoperability with external systems. The operational behavior of the new module was evaluated through a set of data transfer experiments over a wide-area network. These experiments were performed to ensure that the changes in the system's architecture did not affect its performance.
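The thesis implements the connection with the Node.js "Imap" NPM module; purely as a language-neutral illustration of the same asynchronous, non-blocking, event-driven idea, the Python asyncio sketch below (an assumption, not the eT code) serves many concurrent meta-data requests on a single event loop instead of blocking on each one.

```python
# Conceptual asyncio sketch of the non-blocking, event-driven pattern the
# thesis adopts for its IMAP-backed meta-data server (not the eT code).
import asyncio
import json
import time

async def fetch_metadata(client_id: int) -> dict:
    # Stand-in for a network round-trip to the meta-data store (e.g. IMAP).
    await asyncio.sleep(0.2)          # yields the event loop instead of blocking
    return {"client": client_id, "chunks": [f"part-{client_id}-{i}" for i in range(3)]}

async def main(n_clients: int = 50) -> None:
    start = time.perf_counter()
    # All requests are in flight concurrently on one event loop.
    results = await asyncio.gather(*(fetch_metadata(i) for i in range(n_clients)))
    elapsed = time.perf_counter() - start
    print(json.dumps(results[0]))     # meta-data serialized as JSON
    print(f"{n_clients} requests served in {elapsed:.2f}s "
          f"(~0.2s, not {0.2 * n_clients:.0f}s)")

if __name__ == "__main__":
    asyncio.run(main())
```

Replacing the simulated round-trip with a real IMAP fetch keeps the shape of the solution the same: one loop, many in-flight requests.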
|