11

Um ambiente de monitoramento de recursos e escalonamento cooperativo de aplicações paralelas em grades computacionais. / A resource monitoring and parallel application cooperative scheduling environment on computing grids.

Paula, Nilton Cézar de 23 January 2009 (has links)
A computing grid is an alternative for improving the performance of parallel applications, since it allows the simultaneous use of many distributed resources. However, to make good use of a grid, the resources must be used in a way that optimizes some criterion. To this end, various scheduling strategies have been proposed, but the great challenge is to exploit the potential that the resources offer for executing parallel applications. A strategy often used in current scheduling systems is to schedule a parallel application on the resources of a single cluster. Although simple, this strategy is very limited, mainly because of low resource utilization. This thesis proposes and implements GCSE (Grid Cooperative Scheduling Environment), a system that provides a cooperative scheduling strategy for using distributed resources efficiently. The processes of a parallel application can be distributed over the resources of several clusters and computers, all connected by public communication networks. GCSE also manages application execution and offers a set of primitives that provides information about the execution environments in support of inter-process communication. In addition, a data prefetching strategy is proposed to further improve application performance. Good scheduling requires discovering the distributed resources. To that end, the LIMA (Light-weIght Monitoring Architecture) system was designed and implemented. LIMA provides a set of strategies and mechanisms for distributed storage of, and efficient access to, information about the distributed resources, and it adds discovery facilities and integration with GCSE and other systems. Finally, tests and evaluations of the results are presented for the integrated use of GCSE and LIMA, which together compose a robust environment for executing parallel applications.
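A rough illustration of the co-allocation task a cooperative scheduler faces (not GCSE's actual algorithm, which the abstract does not detail) is sketched below in C++: the processes of one application are placed greedily across several clusters, always drawing from the cluster with the most free nodes. The `Cluster` type and the placement policy are assumptions made for the example.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical view of one cluster as seen by the scheduler.
struct Cluster {
    std::string name;
    int freeNodes;  // nodes currently available
};

// Greedy co-allocation sketch: spread `nProcs` processes over the
// clusters, always taking from the cluster with most free nodes.
// Returns (cluster, processes placed) pairs, or empty on failure.
std::vector<std::pair<std::string, int>>
coAllocate(std::vector<Cluster> clusters, int nProcs) {
    std::vector<std::pair<std::string, int>> placement;
    while (nProcs > 0) {
        auto best = std::max_element(
            clusters.begin(), clusters.end(),
            [](const Cluster& a, const Cluster& b) {
                return a.freeNodes < b.freeNodes;
            });
        if (best == clusters.end() || best->freeNodes == 0)
            return {};  // not enough resources anywhere
        int take = std::min(best->freeNodes, nProcs);
        placement.emplace_back(best->name, take);
        best->freeNodes -= take;
        nProcs -= take;
    }
    return placement;
}

int main() {
    std::vector<Cluster> grid = {{"clusterA", 8}, {"clusterB", 5}};
    for (auto& [name, n] : coAllocate(grid, 10))
        std::cout << n << " processes -> " << name << '\n';
}
```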
12

Partitionnement dans les systèmes de gestion de données parallèles / Data Partitioning in Parallel Data Management Systems

Liroz Gistau, Miguel 17 December 2013 (has links)
During recent years, the volume of data that is captured and generated has exploded. Advances in computer technologies, which provide cheap storage and increased computing capabilities, have allowed organizations to perform complex analyses of this data and to extract valuable knowledge from it. This trend has been very important not only for industry, but also for science, where enhanced instruments and more complex simulations call for efficient management of huge quantities of data.

Parallel computing is a fundamental technique in the management of large quantities of data, as it leverages the concurrent use of multiple computing resources. To take advantage of parallel computing, we need efficient data partitioning techniques, which are in charge of dividing the whole dataset and assigning the partitions to the processing nodes. Data partitioning is a complex problem, as it has to weigh different and often contradictory concerns, such as data locality, load balancing and maximizing parallelism.

In this thesis, we study the problem of data partitioning, particularly in continuously growing scientific parallel databases and in the MapReduce framework.

In the case of scientific databases, we consider the partitioning of very large databases to which new data is appended continuously, as in astronomical applications. Existing approaches are limited, since the complexity of the workload and the continuous appends restrict the applicability of traditional techniques. We propose two partitioning algorithms that dynamically assign new data elements to partitions using a technique based on data affinity. Our algorithms obtain very good data partitions in a low execution time compared to traditional approaches.

We also study how to improve the performance of the MapReduce framework using data partitioning techniques. In particular, we are interested in efficient partitioning of the input datasets to reduce the amount of data that has to be transferred in the shuffle phase. We design and implement a strategy which, by capturing the relationships between input tuples and intermediate keys, obtains an efficient partitioning that can significantly reduce MapReduce's communication overhead.
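A minimal sketch of affinity-based dynamic partitioning, under the assumption that affinity can be scored per partition (the thesis's actual algorithms are not given in the abstract): each arriving element goes to the partition it has the strongest affinity with, falling back to the least-loaded partition when no affinity stands out. The toy `affinity` function and the threshold are illustrative only.

```cpp
#include <cstddef>
#include <vector>

struct Element { long id; /* payload omitted */ };

// Toy affinity: fraction of partition members in the same id bucket.
// A real system would use observed co-access statistics instead.
double affinity(const Element& e, const std::vector<Element>& part) {
    if (part.empty()) return 0.0;
    std::size_t same = 0;
    for (const auto& m : part)
        if (m.id / 100 == e.id / 100) ++same;
    return static_cast<double>(same) / part.size();
}

// Assign one new element to a partition: affinity first,
// least-loaded as the fallback when no affinity stands out.
std::size_t assign(const Element& e,
                   std::vector<std::vector<Element>>& parts,
                   double minAffinity = 0.1) {
    std::size_t best = 0, leastLoaded = 0;
    double bestScore = -1.0;
    for (std::size_t i = 0; i < parts.size(); ++i) {
        double s = affinity(e, parts[i]);
        if (s > bestScore) { bestScore = s; best = i; }
        if (parts[i].size() < parts[leastLoaded].size()) leastLoaded = i;
    }
    std::size_t target = (bestScore >= minAffinity) ? best : leastLoaded;
    parts[target].push_back(e);
    return target;
}
```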
14

Cooperative Resource Management for Parallel and Distributed Systems

Klein-Halmaghi, Cristian 29 November 2012 (has links) (PDF)
High-Performance Computing (HPC) resources, such as supercomputers, clusters, grids and HPC clouds, are managed by Resource Management Systems (RMSs) that multiplex resources among multiple users and decide how computing nodes are allocated to user applications. As more and more petascale computing resources are built and exascale is to be achieved by 2020, optimizing resource allocation to applications is critical to ensure their efficient execution. However, current RMSs, such as batch schedulers, only offer a limited interface. In most cases, the application has to choose resources blindly at submission, without being able to adapt its choice to the state of the target resources, either before it starts or during execution. The goal of this thesis is to improve resource management so as to allow applications to allocate resources efficiently. We achieve this by proposing software architectures that promote collaboration between the applications and the RMS, thus allowing applications to negotiate the resources they run on. To this end, we start by analysing the various types of applications and their resource requirements, categorizing them as rigid, moldable, malleable or evolving. For each case, we highlight the opportunities they open up for improving resource management.

The first contribution deals with moldable applications, for which resources are negotiated only before they start. We propose CooRMv1, a centralized RMS architecture which delegates resource selection to the application launchers. Simulations show that the solution is both scalable and fair; the results are validated through a prototype implementation deployed on Grid'5000. Second, we focus on negotiating allocations of geographically distributed resources managed by multiple institutions. We build upon CooRMv1 and propose distCooRM, a distributed RMS architecture which allows moldable applications to efficiently co-allocate resources managed by multiple independent agents. Simulation results show that distCooRM is well-behaved and scales well for a reasonable number of applications. Next, attention is shifted to run-time negotiation of resources, to improve support for malleable and evolving applications. We propose CooRMv2, a centralized RMS architecture that enables efficient scheduling of evolving applications, especially non-predictable ones. It allows applications to inform the RMS about their maximum expected resource usage through pre-allocations; resources that are pre-allocated but unused can be filled by malleable applications. Simulation results show that considerable gains can be achieved. Last, production-ready software is used as a starting point to illustrate the interest, as well as the difficulty, of improving cooperation between existing systems. GridTLSE is used as an application and DIET as an RMS to study a previously unsupported use case. We identify the underlying problem of scheduling optional computations and propose an architecture to solve it. Real-life experiments on the Grid'5000 platform show that several metrics are improved, such as user satisfaction, fairness and the number of completed requests; moreover, the solution is shown to be scalable.
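The pre-allocation idea can be made concrete with a small interface sketch. The C++ fragment below is a guess at the shape of such an application-RMS contract, not CooRMv2's actual API: an evolving application declares an upper bound on its resource usage, then grows and shrinks within that bound, while the RMS may lend the unused remainder to malleable applications.

```cpp
#include <map>
#include <string>

// Hypothetical RMS-side view of one evolving application.
struct PreAllocation {
    int maxNodes;   // upper bound declared by the application
    int inUse = 0;  // nodes currently held
    int unused() const { return maxNodes - inUse; }
};

class Rms {
    std::map<std::string, PreAllocation> apps_;
public:
    void preAllocate(const std::string& app, int maxNodes) {
        apps_[app] = PreAllocation{maxNodes};
    }
    // Growth is guaranteed to succeed while within the pre-allocation;
    // the RMS first reclaims nodes lent to malleable applications.
    bool grow(const std::string& app, int nodes) {
        auto& p = apps_.at(app);
        if (nodes > p.unused()) return false;  // exceeds declared bound
        reclaimFromMalleable(nodes);
        p.inUse += nodes;
        return true;
    }
    void shrink(const std::string& app, int nodes) {
        apps_.at(app).inUse -= nodes;
    }
    // Total head-room the RMS may lend to malleable applications.
    int lendable() const {
        int sum = 0;
        for (const auto& [name, p] : apps_) sum += p.unused();
        return sum;
    }
private:
    void reclaimFromMalleable(int /*nodes*/) { /* shrink borrowers */ }
};
```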
15

Εφαρμογές στο πλέγμα υπολογιστών / Applications on the computing grid

Κοκκάλα, Χρυσούλα 13 October 2013 (has links)
Today, the development of heterogeneous and distributed environments, such as grid environments, makes it feasible to solve computationally intensive problems in a reliable and economical way. The Computing Grid is a growing infrastructure that provides access to computing power and storage distributed around the world. It was introduced to satisfy the needs of applications that require large numbers of computations, as well as the communication of the people who run them. One problem that can exploit the advantages of the Grid is the crew scheduling problem, which is complex and time-consuming because of the many constraints attached to it. This thesis presents in detail the structure of the Grid, how it operates and how it serves its users. We also document the methodology and the procedure for submitting jobs to the Grid from the user's point of view. We focus on efficiently solving the human-resource scheduling problem, specifically for the nursing staff of a hospital, using parallel processing in a computer-network environment.
16

  • Parallel Kafka Producer Applications: Their performance and its limitations

Sundbom, Arvid January 2023 (has links)
"This paper examines multi-threaded Kafka producer applications, and how the performance of such applications is affected by how the number of producer instances relates to the number of executing threads. Specifically, the performance of such applications when using a single producer instance, shared among all threads, and when each thread is allotted a separate, private instance, is compared. This comparison is carried out for a number of different producer configurations and varying levels of computational work per message produced.Overall, the data indicates that utilizing private producer instances results in highe rperformance, in terms of data throughput, than sharing a single instance among the executing threads. The magnitude of this difference is affected, to some extent, by the configuration profiles used to create the producer instances, as well as the computational workload of the application hosting the producers. Specifically, configuring producers for reliability seems to increase the difference, and so does increasing the rate at which messages are to be produced.As a result of this, Brod, a wrapper library [56], based on an implementation of a client library for Apache Kafka [25], has been developed. The purpose of the library is to provide functionality which simplifies the development of multi-threadedKafka producer applications."
17

ChipCflow - uma ferramenta para execução de algoritmos utilizando o modelo a fluxo de dados dinâmico em hardware reconfigurável / ChipCflow - a tool for executing algorithms using the dynamic dataflow model on reconfigurable hardware

Lopes, Joelmir José 29 June 2012 (has links)
Owing to the complexity of applications and the growing demand for systems that use millions of transistors and complex hardware, tools that convert C into a Hardware Description Language (HDL), such as VHDL or Verilog, have been developed. In this context, this thesis presents the ChipCflow project, which uses a dataflow architecture to implement high-performance logic on Field Programmable Gate Arrays (FPGAs). Dataflow machines are programmable computers whose hardware is optimized for fine-grained, data-driven parallel computation; in other words, the execution of programs is determined by data availability, so parallelism is intrinsic to these systems. Meanwhile, with advances in microelectronics, FPGAs have been adopted mainly because of their flexibility, their support for implementing complex systems, and their intrinsic parallelism. One of the challenges is to create tools that let programmers use a high-level language (HLL), such as C, and produce hardware directly. Such tools should exploit the programmers' existing experience, the parallelism of the dynamic dataflow architecture, and the flexibility and parallelism of FPGAs to produce efficient hardware, optimized for high performance and low power consumption. The ChipCflow project is a tool that converts application programs written in C into VHDL, based on the dynamic dataflow architecture. The main goal of this thesis is to define and implement the operators of ChipCflow using a dynamic dataflow architecture on FPGAs. These operators use tagged tokens to identify the data belonging to each operator instance, and both the operators and their instances use an asynchronous implementation model on the FPGA to achieve higher speed and lower power consumption.
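Tagged-token matching is the core mechanism of a dynamic dataflow machine: a two-input operator fires only when tokens carrying the same tag have arrived on both inputs. The sketch below shows that matching logic in software, with illustrative types; ChipCflow itself realizes it as asynchronous hardware on the FPGA.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// A token carries a value plus a tag identifying the operator
// instance (e.g. loop iteration) it belongs to.
struct Token {
    std::uint32_t tag;
    int value;
};

// Matching store for a two-input dataflow operator: the first
// token of a pair waits until its partner with the same tag
// arrives; then the operator fires and the entry is removed.
class MatchingStore {
    std::unordered_map<std::uint32_t, int> waiting_;
public:
    // Returns the fired result (here: addition) once both
    // operands with the same tag are present.
    std::optional<Token> arrive(const Token& t) {
        auto it = waiting_.find(t.tag);
        if (it == waiting_.end()) {
            waiting_.emplace(t.tag, t.value);  // wait for partner
            return std::nullopt;
        }
        int partner = it->second;
        waiting_.erase(it);
        return Token{t.tag, partner + t.value};  // fire: e.g. '+'
    }
};

int main() {
    MatchingStore add;
    add.arrive({7, 40});            // waits for its partner
    auto out = add.arrive({7, 2});  // matches tag 7, fires
    return out && out->value == 42 ? 0 : 1;
}
```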
18

應用同步選擇網路在派翠網路之分析 / Application of SNC (Synchronized Choice Net) to the analysis of Petri nets

巫亮宏 Unknown Date (has links)
Well-behaved SNC covers well-behaved nets in various classes of FC (free-choice) nets and is not included in AC (asymmetric choice). An SNC allows internal choices and concurrency and is therefore powerful for modeling. Any SNC is bounded and its liveness conditions are simple. An integrated algorithm has been presented for verifying that a net is an SNC and checking its liveness with polynomial time complexity. Scholars often need to verify properties of nets appearing in the literature. Verification by a CAD tool is less desirable than verification by hand, due to the extra effort of inputting the model and learning to use the tool. We propose to manually search for the maximum SNC component and then locate bad siphons in an incremental manner. We then apply Lautenbach's Marking Condition (MC) for liveness to verify the liveness property. There are, however, two drawbacks associated with this MC. First, it guarantees only deadlock-freeness, not necessarily liveness. We have identified the structural cause for this and developed corresponding liveness conditions. Second, a net may be live even if the MC is not satisfied. We have identified the structural cause for this as well, and the MC has been adjusted based on our proposed new theory.
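For reference, the standard structural notions the abstract relies on can be stated compactly; this is textbook material (including Commoner's theorem for free-choice nets), not the thesis's new marking-condition results.

```latex
% For a Petri net N = (P, T, F) and a set of places S \subseteq P,
% write {}^{\bullet}S for its pre-set and S^{\bullet} for its post-set.
\[
  S \neq \emptyset \text{ is a siphon} \iff {}^{\bullet}S \subseteq S^{\bullet},
  \qquad
  S \neq \emptyset \text{ is a trap} \iff S^{\bullet} \subseteq {}^{\bullet}S .
\]
% An unmarked siphon can never regain tokens; a marked trap can never
% lose all of its tokens. Commoner's theorem: a free-choice net
% (N, M_0) is live iff every siphon of N contains a trap that is
% marked under M_0.
```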
20

SkePU 2: Language Embedding and Compiler Support for Flexible and Type-Safe Skeleton Programming

Ernstsson, August January 2016 (has links)
This thesis presents SkePU 2, the next generation of the SkePU C++ framework for programming heterogeneous parallel systems using the skeleton programming concept. SkePU 2 is presented after a thorough study of the state of the art in parallel programming models, frameworks and tools, including other skeleton programming systems. The advancements in SkePU 2 include a modern C++11 foundation, a native syntax for skeleton parameterization with user functions, and an entirely new source-to-source translator based on Clang compiler front-end libraries. SkePU 2 extends the functionality of SkePU 1 by embracing metaprogramming techniques and C++11 features such as variadic templates and lambda expressions. The results are improved programmability and performance in many situations, as shown in both a usability survey and performance evaluations on high-performance computing hardware. SkePU's skeleton programming model is also extended with a new construct, Call, unique in the sense that it does not impose any predefined skeleton structure and can encapsulate arbitrary user-defined multi-backend computations. We conclude that SkePU 2 is a promising new direction for the SkePU project, and a solid basis for future work, for example in performance optimization.
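To make the skeleton concept concrete, here is a generic Map skeleton built from plain variadic templates and a lambda user function. It mirrors the flavor of SkePU 2's native syntax but is not SkePU code; real SkePU skeletons additionally dispatch among CPU, OpenMP, CUDA and OpenCL backends.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// A minimal Map skeleton: applies a user function elementwise
// over any number of equally sized input vectors (variadic, as
// in SkePU 2's multi-input skeletons).
template <typename Func>
class Map {
    Func f_;
public:
    explicit Map(Func f) : f_(f) {}

    template <typename Out, typename... In>
    void operator()(std::vector<Out>& out, const std::vector<In>&... in) {
        // A real framework would dispatch to a parallel backend here.
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] = f_(in[i]...);
    }
};

// Helper for C++11-style template argument deduction.
template <typename Func>
Map<Func> makeMap(Func f) { return Map<Func>(f); }

int main() {
    auto saxpy = makeMap([](float x, float y) { return 2.0f * x + y; });
    std::vector<float> x = {1, 2, 3}, y = {4, 5, 6}, r(3);
    saxpy(r, x, y);
    for (float v : r) std::cout << v << ' ';  // prints: 6 9 12
}
```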
