71

Escalonamento Work-Stealing de programas Divisão-e-Conquista com MPI-2 / Scheduling Divide-and-Conquer programs by Work-Stealing with MPI-2

Pezzi, Guilherme Peretti January 2006 (has links)
Com o objetivo de ser portável e eficiente em arquiteturas HPC atuais, a execução de um programa paralelo deve ser adaptável. Este trabalho mostra como isso pode ser atingido utilizando MPI, através da criação dinâmica de processos, integrada com programação Divisão-e-Conquista e uma estratégia Work-Stealing para balancear os processos MPI, em ambientes heterogêneos e/ou dinâmicos, em tempo de execução. Este trabalho explica como implementar uma aplicação segundo o modelo de Divisão-e-Conquista com MPI, bem como a implementação de uma estratégia Work-Stealing. São apresentados resultados experimentais baseados em uma aplicação sintética, o problema das N-Rainhas (N-Queens). Validam-se tanto a adaptabilidade quanto a eficiência do código. Os resultados mostram que é possível utilizar um padrão amplamente difundido como o MPI, mesmo em plataformas de HPC não tão homogêneas como um cluster. / In order to be portable and efficient on modern HPC architectures, the execution of a parallel program must be adaptable. This work shows how to achieve this in MPI, by the dynamic creation of processes, coupled with Divide-and-Conquer programming and a Work-Stealing strategy to balance the MPI processes, in a heterogeneous and/or dynamic environment, at runtime. The application of Divide-and-Conquer with MPI is explained, as well as the implementation of a Work-Stealing strategy. Experimental results are provided, based on a synthetic application, the N-Queens computation. Both the adaptability of the code and its efficiency are validated. The results show that it is possible to use a widely adopted standard such as MPI, even on parallel HPC platforms that are not as homogeneous as a cluster.
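
As a concrete illustration of the mechanism this abstract relies on, the sketch below shows how an MPI-2 program can spawn child processes for the two halves of a Divide-and-Conquer step and collect their results through the resulting intercommunicator. It is a minimal example written for this listing, not code from the thesis, and it shows only the dynamic spawning, not the Work-Stealing part; the cutoff depth, the placeholder leaf computation, and reusing the same executable for parent and children are assumptions.

    /* Sketch: Divide-and-Conquer with MPI-2 dynamic process creation. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_DEPTH 3                    /* illustrative spawning cutoff */

    static long solve_leaf(int depth) { return depth; }   /* placeholder work */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Comm parent;
        MPI_Comm_get_parent(&parent);      /* MPI_COMM_NULL on the root process */
        int depth = (argc > 1) ? atoi(argv[1]) : 0;
        long result;

        if (depth < MAX_DEPTH) {
            /* Divide: spawn two children, one per subproblem. */
            char arg[16];
            snprintf(arg, sizeof arg, "%d", depth + 1);
            char *child_argv[] = { arg, NULL };
            MPI_Comm children;
            MPI_Comm_spawn(argv[0], child_argv, 2, MPI_INFO_NULL, 0,
                           MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);

            /* Conquer: collect the two partial results over the intercommunicator. */
            long part, sum = 0;
            for (int i = 0; i < 2; i++) {
                MPI_Recv(&part, 1, MPI_LONG, i, 0, children, MPI_STATUS_IGNORE);
                sum += part;
            }
            result = sum;
        } else {
            result = solve_leaf(depth);    /* base case computed locally */
        }

        if (parent != MPI_COMM_NULL)
            MPI_Send(&result, 1, MPI_LONG, 0, 0, parent);  /* report to the spawner */
        else
            printf("total = %ld\n", result);

        MPI_Finalize();
        return 0;
    }
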
72

Providing adaptability to MPI applications on current parallel architectures / Provendo adaptabilidade em aplicações MPI nas arquiteturas paralelas atuais

Cera, Marcia Cristina January 2012 (has links)
Atualmente, adaptabilidade é uma característica desejada em aplicações paralelas. Por exemplo, o crescente número de usuários competindo por recursos em arquiteturas paralelas gera mudanças constantes no conjunto de processadores disponíveis. Aplicações adaptativas são capazes de executar usando um conjunto volátil de processadores, oferecendo uma melhor utilização dos recursos. Este comportamento adaptativo é conhecido como maleabilidade. Outro exemplo vem da constante evolução das arquiteturas multi-core, as quais aumentam o número de cores em seus chips a cada nova geração. Adaptabilidade é a chave para permitir que os programas paralelos sejam portáveis de uma máquina a outra. Assim, os programas paralelos são capazes de adaptar a extração do paralelismo de acordo com o grau de paralelismo específico da arquitetura alvo. Este comportamento pode ser visto como um caso particular de evolutividade. Nesse sentido, esta tese está focada em: (i) maleabilidade para adaptar a execução das aplicações paralelas às mudanças na disponibilidade dos processadores; e (ii) evolutividade para adaptar a extração do paralelismo de acordo com propriedades da arquitetura e dos dados de entrada. Portanto, a questão remanescente é "Como prover e suportar aplicações adaptativas?". Esta tese visa responder tal questão com base no MPI (Message-Passing Interface), o qual é a API paralela padrão para HPC em ambientes distribuídos. Nosso trabalho baseia-se nas características do MPI-2 que permitem criar processos em tempo de execução, dando alguma flexibilidade às aplicações MPI. Aplicações MPI maleáveis usam a criação dinâmica de processos para expandir-se nas ações de crescimento (para usar processadores extras). As ações de diminuição (para liberar processadores) finalizam os processos MPI que executam nos processadores requeridos, preservando os dados da aplicação. Note que as aplicações maleáveis requerem suporte do ambiente de execução, uma vez que precisam ser notificadas sobre a disponibilidade dos processadores. Aplicações MPI evolutivas seguem o paradigma do paralelismo de tarefas explícitas para permitir adaptação em tempo de execução. Assim, a criação dinâmica de processos é usada para extrair o paralelismo, ou seja, para criar novas tarefas MPI sob demanda. Para prover tais aplicações, nós definimos tarefas MPI abstratas, implementamos a sincronização entre elas através da troca de mensagens, e propusemos uma abordagem para ajustar a granularidade das tarefas MPI, visando eficiência em ambientes distribuídos. Os resultados experimentais validaram nossa hipótese de que aplicações adaptativas podem ser providas usando características do MPI-2. Adicionalmente, esta tese identificou os requisitos no nível do ambiente de execução para suportá-las em clusters. Portanto, as aplicações MPI maleáveis melhoraram a utilização de recursos de clusters; e as aplicações de tarefas explícitas adaptaram a extração do paralelismo de acordo com a arquitetura alvo, mostrando que este paradigma também é eficiente em ambientes distribuídos. / Currently, adaptability is a desired feature in parallel applications. For instance, the increasing number of users competing for resources on parallel architectures causes dynamic changes in the set of available processors. Adaptive applications are able to execute using a set of volatile processors, providing better resource utilization. This adaptive behavior is known as malleability. Another example comes from the constant evolution of multi-core architectures, which increase the number of cores with each new generation of chips. Adaptability is the key to allowing parallel programs to be portable from one multi-core machine to another. Thus, parallel programs can adapt the unfolding of their parallelism to the specific degree of parallelism of the target architecture. This adaptive behavior can be seen as a particular case of evolutivity. In this sense, this thesis is focused on: (i) malleability to adapt the execution of parallel applications according to changes in processor availability; and (ii) evolutivity to adapt the unfolding of the parallelism at runtime according to the architecture and input data properties. Thus, the open issue is "How to provide and support adaptive applications?". This thesis aims to answer this question taking into account MPI (Message-Passing Interface), which is the standard parallel API for HPC in distributed-memory environments. Our work is based on MPI-2 features that allow spawning processes at runtime, adding some flexibility to MPI applications. Malleable MPI applications use dynamic process creation to expand themselves in growth actions (to use additional processors). The shrinkage actions (to release processors) end the execution of the MPI processes on the required processors in such a way that the application's data are preserved. Notice that malleable applications require runtime environment support to execute, since they must be notified about processor availability. Evolving MPI applications follow the explicit task parallelism paradigm to allow their runtime adaptation. Thus, dynamic process creation is used to unfold the parallelism, i.e., to create new MPI tasks on demand. To provide these applications we defined abstract MPI tasks, implemented the synchronization among these tasks through message exchanges, and proposed an approach to adjust MPI task granularity aiming at efficiency in distributed-memory environments. Experimental results validated our hypothesis that adaptive applications can be provided using MPI-2 features. Additionally, this thesis identifies the runtime-level requirements to support these applications in cluster environments. Thus, malleable MPI applications were able to improve cluster utilization, and the explicit-task ones were able to adapt the unfolding of their parallelism to the target architecture, showing that this programming paradigm can also be efficient in distributed-memory contexts.
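
The growth action described above rests on standard MPI-2 calls. The fragment below is a minimal, generic sketch of that mechanism, written for this listing rather than taken from the thesis: the current members of the application spawn additional processes and merge them into a single working communicator. The function name grow_communicator and its arguments are illustrative assumptions.

    #include <mpi.h>

    /* Spawn `extra` new processes running `exe` and return an intracommunicator
     * containing the old processes plus the new ones (old ranks ordered first).
     * The spawned side must call MPI_Comm_get_parent() and then
     * MPI_Intercomm_merge(parent, 1, &merged) for the merge to complete. */
    MPI_Comm grow_communicator(MPI_Comm work, int extra, const char *exe)
    {
        MPI_Comm inter, merged;
        /* Collective over `work`; rank 0 acts as the spawning root. */
        MPI_Comm_spawn((char *)exe, MPI_ARGV_NULL, extra, MPI_INFO_NULL, 0,
                       work, &inter, MPI_ERRCODES_IGNORE);
        /* Merge the two groups of the intercommunicator into one flat group. */
        MPI_Intercomm_merge(inter, 0, &merged);
        MPI_Comm_free(&inter);
        return merged;
    }
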
73

Mapeamento estático de processos MPI com emparelhamento perfeito de custo máximo em cluster homogêneo de multi-cores / Static MPI processes mapping using maximum weighted perfect matching at homogeneous multi-core clusters

Ferreira, Manuela Klanovicz January 2012 (has links)
Um importante fator que precisa ser considerado para alcançar alto desempenho em aplicações paralelas é a distribuição dos processos nos núcleos do sistema, denominada mapeamento de processos. Mesmo o mapeamento estático de processos é um problema NP-difícil. Por esse motivo, são utilizadas heurísticas que dependem da aplicação e do hardware no qual a aplicação será mapeada. Nas arquiteturas atuais, além da possibilidade de haver mais de um processador por nó do cluster, é possível haver mais de um núcleo de processamento por processador; assim, o mapeamento estático de processos pode considerar pelo menos três níveis de comunicação entre os processos que executam em um cluster multi-core: intra-chip, intra-nó e inter-nó. Este trabalho propõe a heurística MapEME (Mapeamento Estático MPI com Emparelhamento), que emprega o Emparelhamento Perfeito de Custo Máximo (EPCM) no cálculo do mapeamento estático de processos paralelos MPI em processadores multi-core. Os resultados alcançados pelo mapeamento gerado pela MapEME são comparados aos resultados obtidos pelo mapeamento gerado pela aplicação Scotch, que utiliza o Biparticionamento Recursivo Dual (BRD), já utilizado como heurística para mapeamento estático de processos. Ambas as heurísticas são comparadas à Busca Exaustiva (BE) para verificar o quanto estão próximas do ótimo. Os três métodos têm a complexidade e o ganho no tempo de execução em relação à distribuição padrão da biblioteca MPICH2 comparados entre si. A principal contribuição deste trabalho é mostrar que a heurística EPCM apresenta ganho de até 40%, equivalente ao da já difundida BRD, e possui uma complexidade menor ao ser aplicada em um cluster multi-core que compartilha cache nível 2 a cada dois núcleos. / An important factor that must be considered to achieve high performance in parallel applications is the mapping of processes onto cores. However, since even static mapping is an NP-hard problem, it requires mapping heuristics that depend on the application and on the hardware on which it will be mapped. On current architectures there can be more than one multi-core processor per node, and consequently the process mapping can consider three types of process communication: intra-chip, intra-node and inter-node. This work proposes MapEME (Static MPI Mapping using Matching), which uses Maximum Weighted Perfect Matching (MWPM) to compute the static process mapping, and analyzes its performance. The results provided by MapEME are compared with the results of the Scotch application, which uses Dual Recursive Bipartitioning (DRB), a heuristic already used for static mapping. Both heuristics are compared with Exhaustive Search (ES) to verify how close they are to the optimum. The complexities of the three methods are analyzed, and the gain of each mapping over the standard MPICH2 distribution is measured. The main contribution of this work is to show that the EPCM heuristic provides gains of up to 40%, close to the DRB gain, and has lower complexity when applied to a multi-core cluster in which every two cores share an L2 cache.
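
To make the matching formulation tangible, the toy program below enumerates the three possible perfect matchings of four processes and keeps the one that maximizes the traffic placed inside each core pair. It only illustrates the objective EPCM optimizes; the real heuristic uses a maximum weighted perfect matching algorithm rather than brute force, and the communication matrix shown here is invented.

    #include <stdio.h>

    #define N 4   /* number of MPI processes; assumed even */

    int main(void)
    {
        /* comm[i][j]: bytes exchanged between processes i and j (assumed input) */
        long comm[N][N] = {
            {0, 40, 5, 1}, {40, 0, 2, 3}, {5, 2, 0, 60}, {1, 3, 60, 0}
        };

        /* With N = 4 there are only 3 perfect matchings; enumerate them. */
        int matchings[3][N] = { {1, 0, 3, 2}, {2, 3, 0, 1}, {3, 2, 1, 0} };
        long best = -1;
        int best_mate[N];

        for (int m = 0; m < 3; m++) {
            long w = 0;
            for (int i = 0; i < N; i++)
                if (i < matchings[m][i])
                    w += comm[i][matchings[m][i]];   /* weight kept inside a pair */
            if (w > best) {
                best = w;
                for (int i = 0; i < N; i++) best_mate[i] = matchings[m][i];
            }
        }

        for (int i = 0; i < N; i++)
            if (i < best_mate[i])
                printf("place processes %d and %d on cores sharing an L2 cache\n",
                       i, best_mate[i]);
        return 0;
    }
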
74

Escalonamento estático de programas-MPI

Silva, Rafael Ennes January 2006 (has links)
O bom desempenho de uma aplicação paralela é obtido conforme o modo como as técnicas de paralelização são empregadas. Para utilizar essas técnicas, é preciso encontrar uma forma adequada de extrair o paralelismo. Esta extração pode ser feita através de um grafo representativo da aplicação. Neste trabalho são aplicados métodos de particionamento de grafos para otimizar as comunicações entre os processos que fazem parte de uma computação paralela. Nesse contexto, a alocação dos processos almeja minimizar a quantidade de comunicações entre processadores. Esta técnica é frequentemente adotada em Processamento de Alto Desempenho - PAD. No entanto, a construção do grafo geralmente está embutida no programa, cujas estruturas de dados privadas são empregadas na construção do grafo. A proposta é usar ferramentas diretamente em programas MPI, empregando apenas os recursos padrões da norma MPI 1.2. O objetivo é fornecer uma biblioteca (b-MPI) portável para o escalonamento estático de programas MPI. O escalonamento estático realizado pela biblioteca é feito através do mapeamento de processos. Esse mapeamento busca agrupar os processos que trocam muitas informações em uma mesma máquina, o que nesse caso diminui o volume de dados trafegados pela rede. O mapeamento será realizado estaticamente após uma execução prévia do programa MPI. As aplicações alvo para o uso da b-MPI são aquelas que mantêm o mesmo padrão de comunicação após execuções sucessivas. A validação da biblioteca foi realizada através da Transformada Rápida de Fourier disponível no pacote FFTW, da resolução do Problema de Transferência de Calor através do Método de Schwarz e Multigrid e da Fatoração LU implementada no benchmark HPL. Os resultados mostraram que a b-MPI pode ser utilizada para distribuir os processos eficientemente, minimizando o volume de mensagens trafegadas pela rede. / The performance of a parallel application depends on how the parallelization techniques are applied. To make use of these techniques, it is necessary to find an appropriate way to extract the parallelism. This extraction can be done through a representative graph of the application. In this work, graph partitioning methods are applied to optimize the communication between processes that belong to a parallel computation. In this context, the process allocation aims to minimize the amount of communication between processors. This technique is frequently adopted in High Performance Computing - HPC. However, the graph construction is generally embedded in the program, whose private data structures are used to build the graph. The proposal is to use tools directly in MPI programs, employing only features of the MPI 1.2 standard. The goal is to provide a portable library (b-MPI) to statically schedule MPI programs. The static scheduling performed by the library is done through the mapping of processes. This mapping seeks to place processes that exchange a lot of information on the same machine, which decreases the data volume sent through the network. The mapping is done statically, after a previous execution of the MPI program. The target applications for b-MPI are those that keep the same communication pattern across successive executions. The library is validated with the Fast Fourier Transform available in the FFTW package, the solution of a heat transfer problem through the Additive Schwarz and Multigrid methods, and the LU factorization implemented in the HPL benchmark. The results show that b-MPI can be used to distribute the processes efficiently, minimizing the volume of messages exchanged through the network.
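
One way to obtain the communication graph with nothing beyond the MPI standard, in the spirit of what b-MPI needs from a previous execution, is the PMPI profiling interface. The wrapper below is a sketch written for this listing, not the b-MPI source: it accumulates the bytes each rank sends to every destination and dumps the weighted edges at finalization (the const-qualified buffer follows the MPI-3 prototype; older headers declare it without const).

    #include <mpi.h>
    #include <stdio.h>

    #define MAX_RANKS 1024
    static long long bytes_to[MAX_RANKS];   /* edge weights of the process graph */

    int MPI_Send(const void *buf, int count, MPI_Datatype type, int dest,
                 int tag, MPI_Comm comm)
    {
        int size;
        MPI_Type_size(type, &size);
        if (dest >= 0 && dest < MAX_RANKS)
            bytes_to[dest] += (long long)count * size;        /* record traffic */
        return PMPI_Send(buf, count, type, dest, tag, comm);  /* real send */
    }

    int MPI_Finalize(void)
    {
        int rank;
        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (int d = 0; d < MAX_RANKS; d++)
            if (bytes_to[d] > 0)
                fprintf(stderr, "edge %d -> %d : %lld bytes\n", rank, d, bytes_to[d]);
        return PMPI_Finalize();
    }
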
75

Modelagem e dimensionamento do custo de migração de processos em programas MPI

Neves, Marcelo Veiga January 2009 (has links)
A migração de processos é importante em programas MPI por vários motivos, tais como permitir re-escalonamento de processos, balanceamento de cargas e tolerância a falhas. Independentemente do tipo do uso da migração, conhecer o custo imposto pela realização desta operação é um problema pertinente. Quando se utiliza migração para tentar diminuir o tempo de execução de uma aplicação paralela, este custo passa a ser um ponto crítico. Existem algumas soluções para migração de processos em programas MPI disponíveis atualmente. No entanto, ainda não existe um estudo que quantifique o custo destas migrações. Nesse contexto, este trabalho apresenta um estudo para modelar e dimensionar o custo de migração de processos em programas MPI. Primeiramente, o trabalho identificou, analisou, avaliou e, quando necessário, adaptou as principais soluções disponíveis atualmente para migrar processos MPI. Com base nessas soluções, foram criados modelos de custo que poderão ser utilizados para estimar dinamicamente os custos de migração e auxiliar na tomada de decisão em algoritmos de escalonamento. Os modelos criados foram utilizados para estimar os custos de migração em aplicações paralelas e o resultado foi comparado com os custos de migração reais. Nesta comparação, os valores previstos ficaram bastante próximos dos valores observados no experimento, demonstrando a qualidade das previsões dos modelos propostos. / Process migration is important for MPI programs for different reasons, such as process rescheduling, load balancing and fault tolerance. Knowing the cost of this operation is a pertinent problem, regardless of how migration is used. Whenever migration is used for improving the performance of parallel applications, its cost becomes a deciding point. Nowadays, there are some solutions to process migration available for MPI programs. However, there is no study that quantifies the migration cost and its impact on the execution of MPI programs. In this context, this work presents a study for modeling and dimensioning the process migration cost in MPI programs. First, we identified, analyzed, evaluated and, when needed, adapted the main solutions currently available to migrate MPI processes. Based on these solutions, we defined cost models. These models can be used to dynamically estimate the migration costs and to guide scheduling decisions. These models were used to predict the migration cost in parallel applications and the result was compared to observed migration costs. In this comparison, the predicted values were very similar to those observed in the experiment. This work also evaluates the impact of migration on the execution of real parallel applications, in order to verify the viability of applying this approach to improve performance.
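
The abstract does not reproduce the cost models themselves. As an illustration only, and not as the models proposed in the thesis, a first-order model of this kind typically splits the migration cost into a fixed coordination overhead and a transfer term proportional to the process state:

    T_mig ≈ α + S / B

where α is the setup and coordination latency, S the size of the process state to be transferred, and B the effective network bandwidth; a scheduler would compare such an estimate against the expected gain before deciding to migrate.
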
76

Large Scale Parallel Inference of Protein and Protein Domain families / Inférence des familles de protéines et de domaines protéiques à grande échelle

Rezvoy, Clément 28 September 2011 (has links)
Les domaines protéiques sont des segments indépendants qui sont présents de façon récurrente dans plusieurs protéines. L'arrangement combinatoire de ces domaines est à l'origine de la diversité structurale et fonctionnelle des protéines. Plusieurs méthodes ont été développées pour permettre d'inférer la décomposition des protéines en domaines ainsi que la classification de ces domaines en familles. L'une de ces méthodes, MkDom2, permet l'inférence des familles de domaines de façon gloutonne : les familles sont inférées l'une après l'autre de façon à créer un découpage des protéines en arrangements de domaines et un classement de ces domaines en familles. MkDom2 est à l'origine de la base de données ProDom et est essentiel pour sa mise à jour. L'augmentation exponentielle du nombre de séquences à analyser a rendu obsolète cette méthode, qui nécessite désormais plusieurs années de calcul pour calculer ProDom. Nous proposons un nouvel algorithme, MPI_MkDom2, permettant l'exploration simultanée de plusieurs familles de domaines sur une plate-forme de calcul distribué. MPI_MkDom2 est un algorithme distribué et asynchrone gérant l'équilibrage de charge pour une utilisation efficace de la plate-forme de calcul; il assure la création d'un découpage non-recouvrant de l'ensemble des protéines. Une mesure de proximité entre les classifications de domaines est définie afin d'évaluer l'effet du parallélisme sur le partitionnement produit. Nous proposons un second algorithme, MPI_MkDom3, permettant le calcul simultané d'une classification des domaines protéiques et des protéines en familles partageant le même arrangement en domaines. / Protein domains are recurring independent segments of proteins. The combinatorial arrangement of domains is at the root of the functional and structural diversity of proteins. Several methods have been developed to infer protein domain decomposition and domain family clustering from sequence information alone. MkDom2 is one of those methods. MkDom2 infers domain families in a greedy fashion: families are inferred one after the other in order to create a delineation of domains on proteins and a clustering of those domains into families. MkDom2 is instrumental in the building of the ProDom database. The exponential growth of the number of sequences to process has rendered MkDom2 obsolete; it would now take several years to compute a new release of ProDom. We present a new algorithm, MPI_MkDom2, allowing the computation of several families at once across a distributed computing platform. MPI_MkDom2 is an asynchronous distributed algorithm managing load balancing to ensure efficient platform usage; it ensures the creation of a non-overlapping partitioning of the whole protein set. A new proximity measure is defined to assess the effect of the parallel computation on the result. We also propose a second algorithm, MPI_MkDom3, allowing the simultaneous computation of a clustering of protein domains as well as of full proteins sharing the same domain arrangement.
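
The load-balancing behaviour described for MPI_MkDom2 follows the classic dynamic-dispatch pattern, sketched below in generic form; this is not the MPI_MkDom2 source, and the tags, message contents and seed numbering are illustrative. The master hands out one family seed at a time and a worker asks for more work whenever it becomes idle, which keeps a heterogeneous platform busy.

    #include <mpi.h>

    #define TAG_WORK 1
    #define TAG_STOP 2

    /* Runs on rank 0: hand out `n_seeds` work items to `n_workers` workers.
     * Each worker loops: send an idle notice to rank 0, receive a seed,
     * stop when the message arrives with TAG_STOP, otherwise process it. */
    void dispatch_seeds(int n_seeds, int n_workers)
    {
        int next = 0, stopped = 0, request;
        MPI_Status st;

        while (stopped < n_workers) {
            /* A worker announces that it is idle. */
            MPI_Recv(&request, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            if (next < n_seeds) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
                next++;
            } else {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP, MPI_COMM_WORLD);
                stopped++;
            }
        }
    }
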
77

Résolution des équations de Maxwell tridimensionnelles instationnaires sur architecture massivement multicoeur / Resolution of tridimensional instationary Maxwell's equations on massively multicore architecture

Strub, Thomas 13 March 2015 (has links)
Cette thèse s'inscrit dans un projet d'innovation duale RAPID financé par DGA/DS/MRIS et appelé GREAT, faisant intervenir la société Axessim, l'ONERA, INRIA, l'IRMA et le CEA. Ce projet a pour but la mise en place d'une solution industrielle de simulation électromagnétique basée sur une méthode Galerkin Discontinue (GD) parallèle sur maillage hexaédrique. Dans un premier temps, nous établissons un schéma numérique adapté à un système de lois de conservation. Nous pouvons ainsi appliquer cette approche aux équations de Maxwell, mais également à tout système hyperbolique. Dans un second temps, nous mettons en place une parallélisation à deux niveaux de ce schéma. D'une part, les calculs sont parallélisés sur carte graphique au moyen de la bibliothèque OpenCL. D'autre part, plusieurs cartes graphiques peuvent être utilisées, chacune étant pilotée par un processus MPI. De plus, les communications MPI et les calculs OpenCL sont asynchrones, ce qui permet d'obtenir une forte accélération. / This thesis is part of a RAPID dual-innovation project funded by DGA/DS/MRIS and called GREAT, involving the Axessim company, ONERA, INRIA, IRMA and the CEA. This project aims at establishing an industrial electromagnetic simulation solution based on a parallel Discontinuous Galerkin (DG) method on hexahedral meshes. First, we establish a numerical scheme adapted to a system of conservation laws. We can apply this approach to the Maxwell equations but also to any hyperbolic system. In a second step, we set up a two-level parallelization of this scheme. On the one hand, the calculations are parallelized on the graphics card using the OpenCL library. On the other hand, multiple graphics cards can be used, each driven by an MPI process. In addition, MPI communications and OpenCL computations are launched asynchronously in order to obtain a strong acceleration.
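
The two-level parallelization mentioned in the abstract relies on overlapping GPU work with MPI transfers. The fragment below is a generic sketch of that overlap, not code from the GREAT solver: the interior-cell kernel is enqueued on the device (OpenCL enqueues are non-blocking), the halo exchange is posted with non-blocking MPI calls, and both are completed only when their results are needed. All variable and kernel names are assumptions.

    #include <mpi.h>
    #include <CL/cl.h>

    /* One time step of the overlapped pattern: GPU volume work proceeds
     * while the MPI halo exchange is in flight. */
    void dg_step(cl_command_queue q, cl_kernel interior_kernel, size_t gws,
                 double *send_halo, double *recv_halo, int n, int left, int right,
                 MPI_Comm comm)
    {
        MPI_Request req[2];

        /* 1. Launch the interior computation on the GPU; returns immediately. */
        clEnqueueNDRangeKernel(q, interior_kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
        clFlush(q);

        /* 2. Post the halo exchange while the GPU works. */
        MPI_Irecv(recv_halo, n, MPI_DOUBLE, left,  0, comm, &req[0]);
        MPI_Isend(send_halo, n, MPI_DOUBLE, right, 0, comm, &req[1]);

        /* 3. Complete both before computing the flux terms that need remote data. */
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
        clFinish(q);
    }
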
78

Static/Dynamic Analyses for Validation and Improvements of Multi-Model HPC Applications. / Analyse statique/dynamique pour la validation et l'amélioration des applications parallèles multi-modèles

Saillard, Emmanuelle 24 September 2015 (has links)
L’utilisation du parallélisme des architectures actuelles dans le domaine du calcul hautes performances oblige à recourir à différents langages parallèles. Ainsi, l’utilisation conjointe de MPI pour le parallélisme gros grain, à mémoire distribuée, et d’OpenMP pour le parallélisme de threads fait partie des pratiques de développement d’applications pour supercalculateurs. Des erreurs liées à l’utilisation conjointe de ces langages de parallélisme sont actuellement difficiles à détecter, et cela limite l’écriture de codes permettant des interactions plus poussées entre ces niveaux de parallélisme. Des outils ont été proposés afin de pallier ce problème. Cependant, ces outils sont généralement focalisés sur un type de modèle et permettent une vérification dite statique (à la compilation) ou dynamique (à l’exécution). Pourtant, une combinaison statique/dynamique donnerait des informations plus pertinentes. En effet, le compilateur est en mesure de donner des informations relatives au comportement général du code, indépendamment du jeu d’entrée. C’est par exemple le cas des problèmes liés aux communications collectives du modèle MPI. Cette thèse a pour objectif de développer des analyses statiques/dynamiques permettant la vérification d’une application parallèle mélangeant plusieurs modèles de programmation, afin de diriger les développeurs vers un code parallèle multi-modèles correct et performant. La vérification se fait en deux étapes. Premièrement, de potentielles erreurs sont détectées lors de la phase de compilation. Ensuite, un test au runtime est ajouté pour savoir si le problème va réellement se produire. Grâce à ces analyses combinées, nous renvoyons des messages précis aux utilisateurs et évitons les situations de blocage. / Supercomputing plays an important role in several innovative fields, speeding up prototyping or validating scientific theories. However, supercomputers are evolving rapidly, with now millions of processing units, posing the question of their programmability. Despite the emergence of more widespread and functional parallel programming models, developing correct and effective parallel applications still remains a complex task. Although debugging solutions have emerged to address this issue, they often come with restrictions. Moreover, programming-model evolutions stress the need for a convenient validation tool able to handle hybrid applications. Indeed, as current scientific applications mainly rely on the Message Passing Interface (MPI) parallel programming model, new hardware designed for Exascale, with higher node-level parallelism, clearly advocates for MPI+X solutions, with X a thread-based model such as OpenMP. But integrating two different programming models inside the same application can be error-prone, leading to complex bugs that are unfortunately mostly detected at runtime. In an MPI+X program, not only must the correctness of MPI be ensured, but also its interactions with the multi-threaded model; for example, identical MPI collective operations cannot be performed by multiple non-synchronized threads. This thesis aims at developing a combination of static and dynamic analysis to enable an early verification of hybrid HPC applications. The first pass statically verifies the thread level required by an MPI+OpenMP application and outlines execution paths leading to potential deadlocks.
Thanks to this analysis, the code is selectively instrumented, displaying an error and synchronously interrupting all processes if the actual scheduling leads to a deadlock situation.
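
The collective-operation hazard mentioned above can be shown with a small constructed example (not code from the thesis): calling the same MPI collective from every OpenMP thread of a process is erroneous, and the usual correction is to funnel the call through a single thread.

    #include <mpi.h>
    #include <omp.h>

    /* Requires MPI to be initialized with at least MPI_THREAD_FUNNELED. */
    void reduce_partial_sums(double *local, double *global, MPI_Comm comm)
    {
        #pragma omp parallel
        {
            /* ... threaded computation producing *local ... */

            /* WRONG: every thread would issue the same MPI_Allreduce,
             * giving several concurrent collectives on one communicator. */
            /* MPI_Allreduce(local, global, 1, MPI_DOUBLE, MPI_SUM, comm); */

            /* Correct: exactly one thread per process performs the collective. */
            #pragma omp barrier
            #pragma omp master
            MPI_Allreduce(local, global, 1, MPI_DOUBLE, MPI_SUM, comm);
            #pragma omp barrier
        }
    }
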
79

Virtualisation en contexte HPC / Virtualisation in HPC context

Capra, Antoine 17 December 2015 (has links)
Afin de répondre aux besoins croissants de la simulation numérique et de rester à la pointe de la technologie, les supercalculateurs doivent être constamment améliorés. Ces améliorations peuvent être d’ordre matériel ou logiciel. Cela force les applications à s’adapter à un nouvel environnement de programmation au fil de leur développement. Il devient alors nécessaire de se poser la question de la pérennité des applications et de leur portabilité d’une machine à une autre. L’utilisation de machines virtuelles peut être une première réponse à ce besoin de pérennisation en stabilisant les environnements de programmation. Grâce à la virtualisation, une application peut être développée au sein d’un environnement figé, sans être directement impactée par l’environnement présent sur une machine physique. Pour autant, l’abstraction supplémentaire induite par les machines virtuelles entraîne en pratique une perte de performance. Nous proposons dans cette thèse un ensemble d’outils et de techniques afin de permettre l’utilisation de machines virtuelles en contexte HPC. Tout d’abord, nous montrons qu’il est possible d’optimiser le fonctionnement d’un hyperviseur afin de répondre le plus fidèlement aux contraintes du HPC que sont : le placement des fils d’exécution et la localité mémoire des données. Puis, en s’appuyant sur ce résultat, nous avons proposé un service de partitionnement des ressources d’un noeud de calcul par le biais des machines virtuelles. Enfin, pour étendre nos travaux à une utilisation pour des applications MPI, nous avons étudié les solutions et performances réseau d’une machine virtuelle. / To meet the growing needs of numerical simulation and remain at the forefront of technology, supercomputers must be constantly improved. These improvements can be of a hardware or software nature. This forces applications to adapt to a new programming environment throughout their development. It then becomes necessary to raise the question of the sustainability of applications and of their portability from one machine to another. The use of virtual machines may be a first answer to this need for sustainability, by stabilizing programming environments. With virtualization, applications can be developed in a fixed environment, without being directly impacted by the current environment of a physical machine. However, the additional abstraction induced by virtual machines in practice leads to a loss of performance. We propose in this thesis a set of tools and techniques to enable the use of virtual machines in an HPC context. First, we show that it is possible to optimize the operation of a hypervisor to respond accurately to the HPC constraints of execution-thread placement and memory data locality. Then, based on this result, we proposed a service for partitioning the resources of a compute node by means of virtual machines. Finally, to extend our work to MPI applications, we studied the network solutions and performance of a virtual machine.
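
The two constraints named for HPC, thread placement and memory data locality, can be illustrated with a small host-side fragment: it pins the calling thread to a core and allocates its buffer on the corresponding NUMA node, which is the behaviour a tuned hypervisor must preserve for the guest. This sketch assumes a Linux host with libnuma (link with -lnuma) and is not taken from the thesis.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <numa.h>
    #include <stdlib.h>

    /* Pin the calling thread to `core` and return a buffer of n doubles
     * allocated on the NUMA node owning that core. */
    double *pin_and_allocate(int core, size_t n)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        sched_setaffinity(0, sizeof set, &set);          /* thread placement */

        if (numa_available() < 0)
            return malloc(n * sizeof(double));           /* no NUMA support: fall back */
        int node = numa_node_of_cpu(core);
        return numa_alloc_onnode(n * sizeof(double), node);  /* memory locality */
    }
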
80

Improving the Hybrid model MPI+Threads through Applications, Runtimes and Performance tools / Amélioration du modèle hybride MPI+Threads à travers les applications, les supports d’exécution et outils d’analyse de performance

Maheo, Aurèle 25 September 2015 (has links)
Afin de répondre aux besoins de plus en plus importants en puissance de calcul de la part des applications numériques, les supercalculateurs ont dû évoluer et sont ainsi de plus en plus compliqués à programmer. Ainsi, en plus de l’apparition des systèmes à mémoire partagée, des architectures dites NUMA (Non Uniform Memory Access) sont présentes au sein de ces machines, fournissant plusieurs niveaux de parallélisme. Une autre contrainte, la diminution de la mémoire disponible par coeur de calcul, doit être soulignée. C’est ainsi que des modèles parallèles tels que MPI (Message Passing Interface) ne permettent plus aux codes scientifiques haute performance de passer à l’échelle et d’exploiter efficacement les machines de calcul, et doivent donc être combinés avec d’autres modèles plus adaptés aux architectures à mémoire partagée. OpenMP, en tant que modèle standardisé, est un choix privilégié pour être combiné avec MPI. Mais mélanger deux modèles avec des paradigmes différents est une tâche compliquée et peut engendrer des goulets d’étranglement qui doivent être identifiés. Cette thèse a pour objectif d’aborder ces limitations et met en avant plusieurs contributions couvrant divers aspects. Notre première contribution permet de réduire le surcoût des supports exécutifs OpenMP en optimisant le travail d’activation et de synchronisation des threads OpenMP pour les codes MPI+OpenMP. Dans un second temps, nous nous focalisons sur les opérations collectives MPI. Notre contribution a pour but d’optimiser l’opération MPI Allreduce en réutilisant des unités de calcul inoccupées et en faisant intervenir des threads OpenMP. Nous introduisons également le concept de collectives unifiées, impliquant des tâches MPI et des threads OpenMP dans une même opération. Enfin, nous nous intéressons à l’analyse de performance et plus précisément à l’instrumentation des applications MPI+OpenMP, et notre dernière contribution consiste en l’implémentation et l’évaluation de l’outil OpenMP Tools API (OMPT) dans le support exécutif OpenMP du framework MPC. Cet outil nous permet d’instrumenter des constructions OpenMP et de conduire une analyse axée aussi bien du côté des applications que des supports d’exécution. / To provide increasing computational power for numerical simulations, supercomputers evolved and are now more and more complex to program. Indeed, after the appearance of shared memory systems emerged architectures such as NUMA (Non Uniform Memory Access) systems, providing several levels of parallelism. Another constraint, the decreasing amount of memory per compute core, has to be mentioned. Therefore, parallel models such as the Message Passing Interface (MPI) are no longer sufficient to enable scalability of High Performance applications, and have to be coupled with another model adapted to shared memory architectures. OpenMP, as a de facto standard, is a good candidate to be mixed with MPI. The principle is to use this model to augment legacy codes already parallelized with MPI. But hybridizing scientific codes is a complex task; bottlenecks exist and need to be identified. This thesis tackles these limitations and proposes different contributions covering various aspects. Our first contribution reduces the overhead of the OpenMP layer by optimizing the creation and synchronization of threads for MPI+OpenMP codes. In a second step, we target MPI collective operations. Our contribution consists in proposing a technique to exploit idle cores in order to help the operation, with the example of the MPI Allreduce collective. We also introduce unified collectives involving both MPI tasks and OpenMP threads. Finally, we focus on the performance analysis of hybrid MPI+OpenMP codes, and our last contribution consists in the implementation of the OpenMP Tools API (OMPT), an instrumentation tool, inside the OpenMP runtime of the MPC framework. This tool allows us to instrument and profile OpenMP constructs and enables analysis on both the runtime and application sides.
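
A prerequisite shared by hybrid MPI+OpenMP codes of the kind discussed here, shown below as a generic illustration rather than code from the thesis or from MPC, is requesting and checking the MPI thread-support level at start-up: at least MPI_THREAD_FUNNELED is needed when OpenMP threads coexist with MPI calls issued by the master thread.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Ask for FUNNELED: only the master thread will make MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            fprintf(stderr, "insufficient MPI thread support (%d)\n", provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        /* ... hybrid MPI+OpenMP computation would go here ... */
        MPI_Finalize();
        return 0;
    }
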
