481 |
Jahresbericht 2013 zur kooperativen DV-Versorgung 16 November 2017 (has links) (PDF)
No description available.
|
482 |
Jahresbericht 2015 zur kooperativen DV-Versorgung Reuschel, Petra 16 November 2017 (has links) (PDF)
No description available.
|
483 |
Méthodes de préconditionnement pour la résolution de systèmes linéaires sur des machines massivement parallèles / Preconditioning methods for solving linear systems on massively parallel machines Qu, Long 10 April 2014 (has links)
Cette thèse traite d’une nouvelle classe de préconditionneurs qui ont pour but d’accélérer la résolution des grands systèmes creux, courants dans les problèmes scientifiques ou industriels, par les méthodes itératives préconditionnées. Pour appliquer ces préconditionneurs, la matrice d’entrée doit être réorganisée avec un algorithme de dissection emboîtée. Nous introduisons également une technique de recouvrement qui adapte l’idée de chevauchement des sous-domaines, issue des méthodes de décomposition de domaine, aux méthodes de dissection emboîtée afin d’améliorer la convergence de nos préconditionneurs. Les résultats montrent que cette technique de recouvrement nous permet d’améliorer la vitesse de convergence de Nested SSOR (NSSOR) et de Nested Modified Incomplete LU with Rowsum property (NMILUR), les préconditionneurs que nous étudions. La dernière partie de cette thèse porte sur nos contributions dans le domaine du calcul parallèle. Nous présentons la distribution des données et les algorithmes parallèles utilisés pour la mise en oeuvre de nos préconditionneurs. Les résultats montrent que, sur une grille régulière 400x400x400, le nombre d’itérations nécessaire à la résolution avec un de nos préconditionneurs, Nested Filtering Factorization (NFF), n’augmente que légèrement quand le nombre de sous-domaines augmente jusqu’à 2048. En ce qui concerne les performances d’exécution sur le super-calculateur Curie, il passe à l’échelle jusqu’à 2048 coeurs et il est 2,6 fois plus rapide que le préconditionneur Schwarz Additif Restreint (RAS), un des préconditionneurs basés sur les méthodes de décomposition de domaine implémentés dans la bibliothèque de calcul scientifique PETSc, bien connue de la communauté. / This thesis addresses a new class of preconditioners that aim at accelerating the solution of large sparse systems, arising in scientific and engineering problems, by preconditioned iterative methods. To apply these preconditioners, the input matrix needs to be reordered with K-way nested dissection. We also introduce an overlapping technique that adapts the idea of overlapping subdomains from domain decomposition methods to nested dissection based methods, in order to improve the convergence of these preconditioners. Results show that this overlapping technique improves the convergence rate of the Nested SSOR (NSSOR) and Nested Modified Incomplete LU with Rowsum property (NMILUR) preconditioners that we worked on. We also present the data distribution and parallel algorithms used to implement these preconditioners. Results show that on a 400x400x400 regular grid, the number of iterations with the Nested Filtering Factorization preconditioner (NFF) increases only slightly as the number of subdomains grows to 2048. In terms of runtime performance on the Curie supercomputer, it scales up to 2048 cores and is 2.6 times faster than the domain decomposition preconditioner Restricted Additive Schwarz (RAS) as implemented in the well-known scientific computing library PETSc.
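To make the preconditioning idea concrete, here is a minimal Python sketch (not the thesis' NSSOR/NMILUR/NFF code, and without the nested dissection reordering): it plugs a classical single-level SSOR preconditioner into SciPy's conjugate gradient solver on a 2D Poisson model problem. The grid size and relaxation parameter are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_2d(n):
    """5-point finite-difference Laplacian on an n x n grid (sparse, SPD)."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

def ssor_preconditioner(A, omega=1.0):
    """M = (D + wL) D^{-1} (D + wU) / (w(2 - w)); returns M^{-1} as an operator."""
    d = A.diagonal()
    D = sp.diags(d)
    L = sp.tril(A, k=-1)
    U = sp.triu(A, k=1)
    lower = (D + omega * L).tocsr()          # forward-substitution factor
    upper = (D + omega * U).tocsr()          # backward-substitution factor

    def apply(r):
        y = spla.spsolve_triangular(lower, r, lower=True)
        y = d * y                            # multiply by D
        z = spla.spsolve_triangular(upper, y, lower=False)
        return omega * (2.0 - omega) * z

    return spla.LinearOperator(A.shape, matvec=apply)

A = poisson_2d(40)
b = np.ones(A.shape[0])
iterations = {"plain CG": 0, "SSOR-preconditioned CG": 0}

def counter(name):
    def cb(_xk):
        iterations[name] += 1
    return cb

spla.cg(A, b, callback=counter("plain CG"))
spla.cg(A, b, M=ssor_preconditioner(A), callback=counter("SSOR-preconditioned CG"))
print(iterations)   # the preconditioned solve should converge in far fewer iterations
```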
|
484 |
Routing on the Channel Dependency Graph: Domke, Jens 20 June 2017 (has links) (PDF)
In the pursuit of ever-increasing compute power, and with Moore's law slowly coming to an end, high-performance computing has started to scale out to larger systems. Alongside the increasing system size, the interconnection network is growing to accommodate and connect tens of thousands of compute nodes. These networks have a large influence on the total cost, application performance, energy consumption, and overall system efficiency of the supercomputer. Unfortunately, state-of-the-art routing algorithms, which define the packet paths through the network, do not utilize this important resource efficiently. Topology-aware routing algorithms are becoming increasingly inapplicable due to irregular topologies, which are either irregular by design or, more often, the result of hardware failures. Exchanging faulty network components potentially requires whole-system downtime, further increasing the cost of the failure. This management approach becomes more and more impractical due to the scale of today's networks and the accompanying steady decrease of the mean time between failures. Alternative methods of operating and maintaining these high-performance interconnects, both in terms of hardware and software management, are necessary to mitigate the negative effects experienced by scientific applications executed on the supercomputer. However, existing topology-agnostic routing algorithms either suffer from poor load balancing or are not bounded in the number of virtual channels needed to resolve deadlocks in the routing tables.
The fail-in-place strategy, a method well established in storage systems where only critical component failures are repaired, is a feasible solution for current and future HPC interconnects as well as for other large-scale installations such as data center networks. However, an appropriate combination of topology and routing algorithm is required to minimize the throughput degradation for the entire system. This thesis contributes a network simulation toolchain to facilitate the process of finding a suitable combination, either during system design or while the system is in operation. On top of this foundation, a key contribution is a novel scheduling-aware routing, which reduces fault-induced throughput degradation while improving overall network utilization. The scheduling-aware routing performs frequent property-preserving routing updates to optimize the path balancing for simultaneously running batch jobs. The increased deployment of lossless interconnection networks, in conjunction with fail-in-place modes of operation and topology-agnostic, scheduling-aware routing algorithms, necessitates new solutions to the routing-deadlock problem. Therefore, this thesis further advances the state of the art by introducing a novel concept of routing on the channel dependency graph, which allows the design of a universally applicable destination-based routing capable of optimizing the path balancing without exceeding a given number of virtual channels, a common hardware limitation. This disruptive innovation enables implicit deadlock avoidance during path calculation, instead of solving both problems separately as all previous solutions do.
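For illustration only, the following Python sketch shows the classic channel dependency graph check that this routing concept builds on: every consecutive pair of channels used by a route adds a dependency edge, and a cycle in that graph indicates a possible routing deadlock (the Dally/Seitz condition). The routes and topology are toy assumptions, not the thesis' routing engine.

```python
from collections import defaultdict

def channel_dependency_graph(routes):
    """routes: list of node paths, e.g. [0, 1, 2] uses channels (0,1) and (1,2)."""
    cdg = defaultdict(set)
    for path in routes:
        channels = list(zip(path, path[1:]))
        for c_in, c_out in zip(channels, channels[1:]):
            cdg[c_in].add(c_out)          # a packet holds c_in while requesting c_out
    return cdg

def has_cycle(graph):
    """Iterative DFS with three colours (white/grey/black) to detect a back edge."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)
    for start in list(graph):
        if colour[start] != WHITE:
            continue
        stack = [(start, iter(graph[start]))]
        colour[start] = GREY
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if colour[nxt] == GREY:
                    return True           # back edge -> cyclic CDG -> potential deadlock
                if colour[nxt] == WHITE:
                    colour[nxt] = GREY
                    stack.append((nxt, iter(graph[nxt])))
                    advanced = True
                    break
            if not advanced:
                colour[node] = BLACK
                stack.pop()
    return False

# A 4-node ring routed only clockwise has a cyclic CDG (deadlock possible),
# while monotonic routes on a line do not.
ring_routes = [[0, 1, 2], [1, 2, 3], [2, 3, 0], [3, 0, 1]]
line_routes = [[0, 1, 2], [1, 2, 3]]
print(has_cycle(channel_dependency_graph(ring_routes)))   # True
print(has_cycle(channel_dependency_graph(line_routes)))   # False
```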
|
485 |
Mouvement de données et placement des tâches pour les communications haute performance sur machines hiérarchiques Moreaud, Stéphanie 12 October 2011 (has links)
Les architectures des machines de calcul sont de plus en plus complexes et hiérarchiques, avec des processeurs multicœurs, des bancs mémoire distribués et de multiples bus d'entrées-sorties. Dans le cadre du calcul haute performance, l'efficacité de l'exécution des applications parallèles dépend du coût de communication entre les tâches participantes, qui est impacté par l'organisation des ressources, en particulier par les effets NUMA ou de cache. Les travaux de cette thèse visent à l'étude et à l'optimisation des communications haute performance sur les architectures hiérarchiques modernes. Ils consistent tout d'abord en l'évaluation de l'impact de la topologie matérielle sur les performances des mouvements de données, internes aux calculateurs ou au travers de réseaux rapides, et pour différentes stratégies de transfert, types de matériel et plateformes. Dans une optique d'amélioration et de portabilité des performances, nous proposons ensuite de prendre en compte les affinités entre les communications et le matériel au sein des bibliothèques de communication. Ces recherches s'articulent autour de l'adaptation du placement des tâches en fonction des schémas de transfert et de la topologie des calculateurs, ou au contraire autour de l'adaptation des stratégies de mouvement de données à une répartition définie des tâches. Ce travail, intégré aux principales bibliothèques MPI, permet de réduire de façon significative le coût des communications et d'améliorer ainsi les performances applicatives. Les résultats obtenus témoignent de la nécessité de prendre en compte les caractéristiques matérielles des machines modernes pour en exploiter la quintessence. / The emergence of multicore processors has led to increasing complexity inside modern servers, with many cores, distributed memory banks and multiple Input/Output buses. The execution time of parallel applications depends on the efficiency of the communications between computing tasks. On recent architectures, the communication cost is largely impacted by hardware characteristics such as NUMA or cache effects. In this thesis, we propose to study and optimize high performance communication on hierarchical architectures. We first evaluate the impact of hardware affinities on data movement, inside servers or across high-speed networks, and for multiple transfer strategies, technologies and platforms. We then propose to take the affinities between the hardware and the communicating tasks into account inside the communication libraries, to improve performance and ensure its portability. To do so, we suggest adapting the task binding to the transfer method and the topology, or adjusting the data transfer strategies to a given task distribution. Our approaches have been integrated into major MPI implementations. They significantly reduce the communication costs and improve the overall application performance. These results highlight the importance of considering the hardware topology of today's servers in order to fully exploit them.
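As a rough sketch of the placement idea (not the algorithm used in the thesis or in any particular MPI library), the Python fragment below greedily maps communicating tasks onto cores so that heavily communicating pairs land on nearby cores. The communication matrix and the core distance matrix (for example derived from hwloc topology information) are assumed inputs.

```python
def greedy_placement(comm, dist):
    """Map tasks to cores so that heavily communicating pairs sit close together.
    comm[i][j]: traffic between tasks i and j; dist[a][b]: hardware distance
    between cores a and b. Assumes at least as many cores as tasks."""
    n_tasks, n_cores = len(comm), len(dist)
    # process task pairs by decreasing communication volume
    pairs = sorted(((comm[i][j], i, j) for i in range(n_tasks) for j in range(i)),
                   reverse=True)
    placement, free_cores = {}, set(range(n_cores))

    def nearest_free(core):
        return min(free_cores, key=lambda c: dist[core][c])

    for _, i, j in pairs:
        for t in (i, j):
            if t in placement or not free_cores:
                continue
            other = j if t == i else i
            if other in placement:
                placement[t] = nearest_free(placement[other])   # stick to the partner
            else:
                placement[t] = min(free_cores)                  # arbitrary seed core
            free_cores.remove(placement[t])
    for t in range(n_tasks):                                    # tasks with no traffic
        if t not in placement:
            placement[t] = free_cores.pop()
    return placement

# toy example: tasks 0-3, cores 0-3, where cores {0,1} and {2,3} share a socket
comm = [[0, 9, 1, 1], [9, 0, 1, 1], [1, 1, 0, 9], [1, 1, 9, 0]]
dist = [[0, 1, 2, 2], [1, 0, 2, 2], [2, 2, 0, 1], [2, 2, 1, 0]]
print(greedy_placement(comm, dist))   # heavy pairs end up on the same socket
```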
|
486 |
An investigation into parallel job scheduling using service level agreements Ali, Syed Zeeshan January 2014 (has links)
A scheduler, as a central component of a computing site, aggregates computing resources and is responsible for distributing the incoming load (jobs) among those resources. In such an environment, the optimum performance of the system under service level agreement (SLA) based workloads can be achieved by calculating the priority of SLA-bound jobs using an integrated heuristic. The SLA defines the service obligations and expectations for the use of the computational resources. The integrated heuristic combines different SLA terms, assigning a specific weight to each term. The weights are computed with a parameter-sweep technique in order to obtain the schedule that yields the optimum performance of the system under the workload. Sweeping the parameters of the integrated heuristic was observed to be computationally expensive, and it becomes even more expensive when no value of the computed weights results in a schedule that improves performance; in such situations the heuristic incurs computation cost without delivering optimum performance. There is therefore a need to detect the situations in which the integrated heuristic can be exploited beneficially. For that reason, in this thesis we propose a metric based on the concept of utilization to evaluate SLA-based parallel workloads of independent jobs and to detect whether the integrated heuristic will have an impact on the workload.
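A hedged sketch of the integrated-heuristic idea follows: job priority is a weighted combination of SLA terms, and the weights are chosen by a parameter sweep. The SLA terms, the toy single-resource schedule, and the penalty objective are illustrative assumptions, not the thesis' actual terms or its utilization metric.

```python
import itertools

def priority(job, weights):
    """Integrated heuristic: weighted sum of (normalised) SLA terms."""
    return (weights["deadline"] * 1.0 / max(job["time_to_deadline"], 1) +
            weights["penalty"]  * job["penalty_per_hour"] +
            weights["size"]     * 1.0 / max(job["requested_cores"], 1))

def evaluate(jobs, weights):
    """Toy objective: run jobs in priority order on one resource and
    return the total SLA penalty incurred (lower is better)."""
    t, total_penalty = 0, 0.0
    for job in sorted(jobs, key=lambda j: priority(j, weights), reverse=True):
        t += job["runtime"]
        overrun = max(0, t - job["time_to_deadline"])
        total_penalty += overrun * job["penalty_per_hour"]
    return total_penalty

def sweep(jobs, step=0.25):
    """Exhaustive sweep over weight combinations -- the expensive step that the
    proposed utilisation-based metric tries to avoid when it cannot pay off."""
    grid = [i * step for i in range(int(1 / step) + 1)]
    best = None
    for wd, wp, ws in itertools.product(grid, repeat=3):
        weights = {"deadline": wd, "penalty": wp, "size": ws}
        score = evaluate(jobs, weights)
        if best is None or score < best[0]:
            best = (score, weights)
    return best

jobs = [
    {"runtime": 4, "time_to_deadline": 5,  "penalty_per_hour": 2.0, "requested_cores": 64},
    {"runtime": 2, "time_to_deadline": 3,  "penalty_per_hour": 5.0, "requested_cores": 16},
    {"runtime": 6, "time_to_deadline": 20, "penalty_per_hour": 1.0, "requested_cores": 128},
]
print(sweep(jobs))   # best (penalty, weights) pair found by the sweep
```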
|
487 |
High-Performance Analytics (HPA) / High-Performance Analytics (HPA) Soukup, Petr January 2012 (has links)
The aim of this thesis on High-Performance Analytics is to give a structured overview of high-performance methods for data analysis. The introduction deals with the definitions of primary and secondary data analysis and with primary systems, which are not appropriate for analytical processing. The use of mobile devices, modern information technologies and other factors has rapidly changed the character of data. The major part of the thesis is devoted to the historical turn towards new approaches to analytical data processing brought about by Big Data, a very frequent term these days. Towards the end of the thesis, the system resources that play a major role in these new approaches, as well as the technological solutions of High-Performance Analytics themselves, are discussed. The second, practical part of the thesis compares the performance of conventional methods for data analysis with one of the high-performance methods of High-Performance Analytics (more precisely, In-Memory Analytics). The individual solutions are compared in an identical environment on a High-Performance Analytics server. The methods are applied to a data sample whose volume is increased after every round of measurements. The conclusion evaluates the test results and discusses possible uses of the individual High-Performance Analytics methods.
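The measurement protocol described above can be pictured with a small Python sketch: the same aggregation is timed on a sample whose volume doubles after each round, once re-read from disk (standing in for a conventional approach) and once on data already held in memory (standing in for In-Memory Analytics). The data, sizes and operations are assumptions for illustration, not the environment used in the thesis.

```python
import time
import numpy as np
import pandas as pd

def make_sample(rows):
    """Synthetic sample with a grouping key and a numeric value column."""
    rng = np.random.default_rng(0)
    return pd.DataFrame({"key": rng.integers(0, 100, rows),
                         "value": rng.random(rows)})

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

rows = 100_000
for round_no in range(4):
    df = make_sample(rows)
    df.to_csv("sample.csv", index=False)                  # data sitting in "primary" storage
    t_disk = timed(lambda: pd.read_csv("sample.csv").groupby("key")["value"].mean())
    t_mem  = timed(lambda: df.groupby("key")["value"].mean())   # in-memory aggregation
    print(f"{rows:>10} rows   on-disk {t_disk:.3f}s   in-memory {t_mem:.3f}s")
    rows *= 2                                             # grow the sample after each round
```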
|
488 |
Opérateurs arithmétiques parallèles pour la cryptographie asymétrique / Parallel arithmetical operators for asymmetric cryptography Izard, Thomas 19 December 2011 (links)
Les protocoles de cryptographie asymétrique nécessitent des calculs arithmétiques dans différentes structures mathématiques de grandes tailles. Pour garantir une sécurité suffisante, ces tailles varient de plusieurs centaines à plusieurs milliers de bits et rendent les opérations arithmétiques coûteuses en temps de calcul. D’autre part, les architectures grand public actuelles embarquent plusieurs unités de calcul, réparties sur les processeurs et éventuellement sur les cartes graphiques. Ces ressources sont aujourd’hui facilement exploitables grâce à des interfaces de programmation parallèle comme OpenMP ou CUDA. Dans cette thèse, nous étudions la parallélisation d’opérateurs à différents niveaux arithmétiques. Nous nous intéressons plus particulièrement à la multiplication entre entiers multiprécision, à la multiplication modulaire et enfin à la multiplication scalaire sur les courbes elliptiques. Dans chacun des cas, nous étudions différents ordonnancements des calculs permettant d’obtenir les meilleures performances. Nous proposons également une bibliothèque permettant la parallélisation sur processeur graphique d’instances d’opérations modulaires et d’opérations sur les courbes elliptiques. Enfin, nous proposons une méthode d’optimisation automatique de la multiplication scalaire sur les courbes elliptiques pour de petits scalaires, permettant l’élimination des sous-expressions communes apparaissant dans la formule et l’application systématique de transformations arithmétiques. / Asymmetric cryptography requires computations in large finite mathematical structures. To ensure the required security, these sizes range from several hundred to several thousand bits, which makes the arithmetic operations expensive in terms of computation time. On the other hand, current architectures have several computing units, distributed over the processors and GPUs, which can easily be exploited using dedicated programming interfaces such as OpenMP or CUDA. In this dissertation, we investigate the parallelization of operators at different arithmetic levels. In particular, our research focuses on parallel multiprecision and modular multiplications, and on the parallelization of scalar multiplication over elliptic curves. We also propose a library to parallelize modular operations and elliptic curve operations on graphics processors. Finally, we present a method to optimize elliptic curve scalar multiplication for small scalars.
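As a reference point for the operation being parallelized, here is a textbook double-and-add scalar multiplication on a short Weierstrass curve over a prime field, in plain Python. The tiny curve parameters are illustrative assumptions; none of the thesis' parallel or optimized formulas are reproduced.

```python
# Toy curve y^2 = x^3 + ax + b over F_p; not a cryptographic curve.
p, a, b = 97, 2, 3

def point_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent (doubling) slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mul(k, P):
    """Left-to-right double-and-add: computes k*P."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)          # double
        if bit == "1":
            R = point_add(R, P)      # add
    return R

P = (3, 6)                           # 6^2 = 36 = 27 + 6 + 3 (mod 97), so P is on the curve
print(scalar_mul(20, P))
```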
|
489 |
Calcul hautes performances pour les formulations intégrales en électromagnétisme basses fréquences. Intégration, compression matricielle par ondelettes et résolution sur architecture GPGPU / High performance computing for integral formulations in low frequencies electromagnetism – Integration, wavelets matrix compression and solving on GPGPU architecture Rubeck, Christophe 18 December 2012 (links)
Les méthodes intégrales sont des méthodes particulièrement bien adaptées à la modélisation des systèmes électromagnétiques car, contrairement aux méthodes par éléments finis, elles ne nécessitent pas le maillage des matériaux inactifs tels que l'air. Ces modèles sont donc légers en termes de nombre de degrés de liberté. Cependant, ce sont des méthodes à interactions totales qui génèrent des matrices de systèmes d'équations pleines. Ces matrices sont longues à calculer en temps processeur et coûteuses à stocker dans la mémoire vive de l'ordinateur. Nous réduisons dans ces travaux les temps de calcul grâce au parallélisme, c'est-à-dire l'utilisation de plusieurs processeurs, notamment sur cartes graphiques (GPGPU). Nous réduisons également le coût du stockage mémoire via de la compression matricielle par ondelettes (il s'agit d'un algorithme proche de la compression d'images). C'est une compression avec pertes ; nous avons ainsi développé un critère pour contrôler l'erreur introduite par la compression. Les méthodes développées sont appliquées à une formulation électrostatique de calcul de capacités, mais elles sont a priori également applicables à d'autres formulations. / Integral equation methods are widely used in electromagnetism modeling because, in contrast to finite element methods, they do not require the meshing of non-active materials such as air. Therefore they lead to formulations with few degrees of freedom. However, they also lead to fully dense systems of equations, which are long to compute and very expensive to store in memory. This work presents different parallel computation strategies to reduce the computation time, with a particular focus on the use of graphics processing units (GPGPU). The next point is to reduce the memory requirements through wavelet compression (an algorithm similar to image compression). Since this compression is lossy, a criterion is proposed to control the error it introduces. The methodology is applied to an electrostatic formulation for capacitance computation, but it is general and could also be used with other integral formulations.
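A minimal sketch of the lossy-compression-with-error-control idea follows, using a plain orthonormal Haar transform rather than the wavelets and error criterion developed in the thesis: a dense matrix mimicking a smooth interaction kernel is transformed, small coefficients are dropped, and the resulting relative error is measured. The matrix, threshold and sizes are assumptions.

```python
import numpy as np

def haar_1d(v):
    """Full 1-D Haar decomposition (length must be a power of two)."""
    v = v.astype(float).copy()
    n = len(v)
    while n > 1:
        half = n // 2
        avg  = (v[0:n:2] + v[1:n:2]) / np.sqrt(2)
        diff = (v[0:n:2] - v[1:n:2]) / np.sqrt(2)
        v[:half], v[half:n] = avg, diff
        n = half
    return v

def inverse_haar_1d(v):
    v = v.astype(float).copy()
    n = 1
    while n < len(v):
        avg, diff = v[:n].copy(), v[n:2 * n].copy()
        v[0:2 * n:2] = (avg + diff) / np.sqrt(2)
        v[1:2 * n:2] = (avg - diff) / np.sqrt(2)
        n *= 2
    return v

def haar_2d(M, inverse=False):
    f = inverse_haar_1d if inverse else haar_1d
    M = np.apply_along_axis(f, 1, M)      # transform rows
    return np.apply_along_axis(f, 0, M)   # then columns

# dense "interaction-like" matrix: smooth decay away from the diagonal
n = 256
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = 1.0 / (1.0 + np.abs(i - j))

W = haar_2d(A)
threshold = 1e-3 * np.abs(W).max()
W_compressed = np.where(np.abs(W) > threshold, W, 0.0)   # drop small coefficients

ratio = np.count_nonzero(W_compressed) / W.size
error = np.linalg.norm(haar_2d(W_compressed, inverse=True) - A) / np.linalg.norm(A)
print(f"kept {ratio:.1%} of coefficients, relative error {error:.2e}")
```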
|
490 |
A simulation workflow to evaluate the performance of dynamic load balancing with over decomposition for iterative parallel applications Tesser, Rafael Keller January 2018 (links)
Nesta tese é apresentado um novo workflow de simulação para avaliar o desempenho do balanceamento de carga dinâmico baseado em sobre-decomposição aplicado a aplicações paralelas iterativas. Seus objetivos são realizar essa avaliação com modificações mínimas da aplicação e a baixo custo em termos de tempo e de necessidade de recursos computacionais. Muitas aplicações paralelas sofrem com desbalanceamento de carga dinâmico (temporal) que não pode ser tratado a nível de aplicação. Este pode ser causado por características intrínsecas da aplicação ou por fatores externos de hardware ou software. Como demonstrado nesta tese, tal desbalanceamento é encontrado mesmo em aplicações cujo código não aparenta qualquer dinamismo. Portanto, faz-se necessário utilizar mecanismos de balanceamento de carga dinâmico a nível de runtime. Este trabalho foca no balanceamento de carga dinâmico baseado em sobre-decomposição. No entanto, avaliar e ajustar o desempenho de tal técnica pode ser custoso. Isso geralmente requer modificações na aplicação e uma grande quantidade de execuções para obter resultados estatisticamente significativos com diferentes combinações de parâmetros de balanceamento de carga. Além disso, para que essas medidas sejam úteis, são usualmente necessárias grandes alocações de recursos em um sistema de produção. Simulated Adaptive MPI (SAMPI), nosso workflow de simulação, emprega uma combinação de emulação sequencial e replay de rastros para reduzir os custos dessa avaliação. Tanto a emulação sequencial como o replay de rastros requerem um único nó computacional. Além disso, o replay demora apenas uma pequena fração do tempo de uma execução paralela real da aplicação. Adicionalmente à simulação de balanceamento de carga, foram desenvolvidas técnicas de agregação espacial e de rescaling a nível de aplicação, as quais aceleram o processo de emulação. Para demonstrar os potenciais benefícios do balanceamento de carga dinâmico com sobre-decomposição, foram avaliados os ganhos de desempenho obtidos ao empregar essa técnica em uma aplicação iterativa paralela da área de geofísica (Ondes3D). Adaptive MPI (AMPI) foi utilizado para prover o suporte a balanceamento de carga dinâmico, resultando em ganhos de desempenho de até 36,58% em 288 cores de um cluster. Essa avaliação também é usada para ilustrar as dificuldades encontradas nesse processo, justificando assim o uso de simulação para facilitá-la. Para implementar o workflow SAMPI, foi utilizada a interface SMPI do simulador SimGrid, tanto no modo de emulação como no de replay de rastros. Para validar esse simulador, foram comparadas execuções simuladas (SAMPI) e reais (AMPI) da aplicação Ondes3D. As simulações apresentaram uma evolução do balanceamento de carga bastante similar à das execuções reais. Adicionalmente, SAMPI estimou com sucesso a melhor heurística de balanceamento de carga para os cenários testados. Além dessa validação, nesta tese é demonstrado o uso de SAMPI para a exploração de parâmetros de balanceamento de carga e para o planejamento de capacidade computacional. Quanto ao desempenho da simulação, estimamos que o workflow completo é capaz de simular a execução do Ondes3D com 24 combinações de parâmetros de balanceamento de carga em 5 horas para o nosso cenário de terremoto mais pesado e em 3 horas para o mais leve. / In this thesis we present a novel simulation workflow to evaluate, at low cost, the performance of dynamic load balancing with over-decomposition applied to iterative parallel applications.
Its goals are to perform such an evaluation with minimal application modification and at a low cost in terms of time and resource requirements. Many parallel applications suffer from dynamic (temporal) load imbalance that cannot be treated at the application level. It may be caused by intrinsic characteristics of the application or by external software and hardware factors. As demonstrated in this thesis, such dynamic imbalance can be found even in applications whose codes do not hint at any dynamism. Therefore, we need to rely on runtime dynamic load balancing mechanisms, such as dynamic load balancing based on over-decomposition. The problem is that evaluating and tuning the performance of such a technique can be costly. This usually entails modifications to the application and a large number of executions to get statistically sound performance measurements with different load balancing parameter combinations. Moreover, useful and accurate measurements often require big resource allocations on a production cluster. Our simulation workflow, dubbed Simulated Adaptive MPI (SAMPI), employs a combined sequential emulation and trace-replay simulation approach to reduce the cost of such an evaluation. Both sequential emulation and trace replay require a single computer node. Additionally, the trace-replay simulation lasts a small fraction of the real-life parallel execution time of the application. Besides the basic SAMPI simulation, we developed spatial aggregation and application-level rescaling techniques to speed up the emulation process. To demonstrate the real-life performance benefits of dynamic load balancing with over-decomposition, we evaluated the performance gains obtained by employing this technique on an iterative parallel geophysics application called Ondes3D. Dynamic load balancing support was provided by Adaptive MPI (AMPI). This resulted in up to a 36.58% performance improvement on 288 cores of a cluster. This real-life evaluation also illustrates the difficulties found in this process, thus justifying the use of simulation. To implement the SAMPI workflow, we relied on SimGrid's Simulated MPI (SMPI) interface in both emulation and trace-replay modes. To validate our simulator, we compared simulated (SAMPI) and real-life (AMPI) executions of Ondes3D. The simulations presented a load balance evolution very similar to the real executions and were also successful in choosing the best load balancing heuristic for each scenario. Besides the validation, we demonstrate the use of SAMPI for load balancing parameter exploration and for computational capacity planning. As for the performance of the simulation itself, we roughly estimate that our full workflow can simulate the execution of Ondes3D with 24 different load balancing parameter combinations in 5 hours for our heavier earthquake scenario and in 3 hours for the lighter one.
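To illustrate what load balancing with over-decomposition means in practice, here is a hedged Python sketch: the domain is split into many more chunks than processes, and a greedy rebalancer (in the spirit of, but much simpler than, Charm++/AMPI load balancers) reassigns chunks between iterations using measured per-chunk loads. The synthetic loads stand in for the time-varying costs of an application such as Ondes3D.

```python
import heapq
import random

def greedy_rebalance(chunk_loads, num_procs):
    """Assign each chunk, heaviest first, to the currently least-loaded process."""
    heap = [(0.0, p) for p in range(num_procs)]            # (accumulated load, process)
    assignment = {}
    for chunk in sorted(chunk_loads, key=chunk_loads.get, reverse=True):
        load, proc = heapq.heappop(heap)
        assignment[chunk] = proc
        heapq.heappush(heap, (load + chunk_loads[chunk], proc))
    return assignment

def imbalance(chunk_loads, assignment, num_procs):
    """Max per-process load divided by the average (1.0 means perfect balance)."""
    per_proc = [0.0] * num_procs
    for chunk, proc in assignment.items():
        per_proc[proc] += chunk_loads[chunk]
    return max(per_proc) / (sum(per_proc) / num_procs)

random.seed(1)
num_procs, num_chunks = 8, 64                              # 8x over-decomposition
static = {c: c % num_procs for c in range(num_chunks)}     # block-cyclic baseline

for iteration in range(3):
    # per-chunk cost drifts over time, creating dynamic (temporal) imbalance
    loads = {c: 1.0 + 4.0 * random.random() * (c / num_chunks) for c in range(num_chunks)}
    rebalanced = greedy_rebalance(loads, num_procs)
    print(f"iter {iteration}: static imbalance {imbalance(loads, static, num_procs):.2f}, "
          f"rebalanced {imbalance(loads, rebalanced, num_procs):.2f}")
```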
|