1 |
Ordonnancement de graphes de tâches sur des plates-formes de calcul modernes / Scheduling task graphs on modern computing platforms. Simon, Bertrand, 04 July 2018
This thesis deals with three main themes related to task graph scheduling on modern computing platforms. A task graph is a classical model of a program to be executed, for instance a scientific computing application. Decomposing an application into several tasks makes it possible to exploit its potential parallelism without adapting the program to the target computing platform. The graph describes the tasks as well as their dependencies: some tasks cannot be started before others have completed. The execution of an application is then determined by a schedule of the graph, computed by dedicated software, which describes among other things which resources are allocated to each task and at what time. The three themes studied are the following: exploiting the inner parallelism of tasks, using accelerators such as GPUs, and coping with limited memory.

Some applications exhibit two types of parallelism that can be exploited: several tasks can be executed concurrently, and each task may be executed on several processors, which reduces its processing time. We propose and study two models of this processing time reduction, in order to exploit both types of parallelism efficiently. We then study how to make efficient use of accelerators such as GPUs in a dynamic context, where the future tasks to schedule are not known in advance. The main difficulty is deciding whether a task should be executed on one of the few available accelerators or on one of the many classical processors. The last theme concerns a main memory of limited size and the resulting need for expensive data transfers. We address it through two scenarios. When such transfers can be avoided, we propose to modify the graph so as to guarantee that any execution fits in the available memory, which allows the tasks to be scheduled dynamically at runtime. When every schedule requires transfers, we study how to minimize their amount. The study of these three themes has led to a better understanding of the complexity of these problems, and the solutions proposed in this theoretical framework may influence future software implementations.
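To make the accelerator theme concrete, here is a minimal Python sketch, not taken from the thesis: the single-threshold dispatch rule, the task timings and all numbers are illustrative assumptions. It shows the kind of online decision the abstract describes, namely whether an incoming task should go to a rare accelerator or to one of many classical cores.

    import heapq

    def dispatch(tasks, n_cpu, n_gpu, threshold):
        """Greedy online dispatcher: send a task to a GPU only if its
        speedup there is at least `threshold`, otherwise use a CPU.
        `tasks` is a list of (cpu_time, gpu_time) pairs processed in
        arrival order; returns the resulting makespan."""
        # Each heap entry is the time at which that unit becomes free.
        cpus = [0.0] * n_cpu
        gpus = [0.0] * n_gpu
        heapq.heapify(cpus)
        heapq.heapify(gpus)
        for cpu_time, gpu_time in tasks:
            if gpus and cpu_time / gpu_time >= threshold:
                free = heapq.heappop(gpus)
                heapq.heappush(gpus, free + gpu_time)
            else:
                free = heapq.heappop(cpus)
                heapq.heappush(cpus, free + cpu_time)
        return max(cpus + gpus)

    # Example: 3 GPU-friendly tasks, 3 CPU-friendly ones, 4 CPUs, 1 GPU.
    tasks = [(8.0, 1.0)] * 3 + [(2.0, 1.5)] * 3
    print(dispatch(tasks, n_cpu=4, n_gpu=1, threshold=4.0))  # 3.0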
|
2 |
Programmation des architectures hétérogènes à l'aide de tâches divisibles ou modulables / Programming heterogeneous architectures using moldable tasks. Cojean, Terry, 26 March 2018
Computers equipped with accelerators are now commonplace among high performance computing platforms. This evolution has driven research efforts to design tools that ease the programming of applications able to use all the computing units of such machines. The StarPU runtime system, developed in the STORM team at INRIA Bordeaux, was designed to serve as a target for parallel language compilers and specialized libraries (linear algebra, Fourier transforms, etc.). To provide applications with portability of code and performance, StarPU schedules dynamic task graphs efficiently on all the heterogeneous computing units of the machine.

One of the most difficult aspects when decomposing an application into a graph of tasks is choosing the granularity of the tasks, which typically goes hand in hand with the size of the blocks used to partition the problem's data. Granularities that are too small prevent the efficient use of accelerators such as GPUs, which need few tasks with massive internal data parallelism to reach peak performance. Conversely, classical processors typically reach their best performance at much finer granularities. The granularity of a task not only depends on the type of computing unit it will run on; it also influences the amount of parallelism available in the system: too many small tasks may flood the runtime system with needless overhead, whereas too few large tasks may lead to a parallelism deficiency. Currently, most approaches to this problem settle on an intermediate task granularity that makes optimal use of neither the processors nor the accelerators. The objective of this thesis is to tackle this granularity problem by aggregating resources, so as to no longer consider many separate resources but a few large ones collaborating on the execution of the same task. A theoretical machine and scheduling model representing this process has existed for several decades: parallel tasks. The work of this thesis consists in putting this model to practical use, by implementing parallel task management mechanisms in StarPU and by implementing and evaluating parallel task schedulers from the literature. The model is validated by improving the programming and optimizing the execution of numerical applications on modern computing machines.
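As a rough illustration of the trade-off behind aggregating cores into a few large resources, here is a hedged Python sketch assuming an Amdahl-style speedup model with a per-core management overhead. StarPU itself relies on measured performance models rather than this formula; the numbers are invented.

    def moldable_time(work, alpha, p):
        """Amdahl-style execution time of one moldable task on p cores:
        an alpha fraction of the work is sequential, the rest scales."""
        return alpha * work + (1.0 - alpha) * work / p

    def best_allotment(work, alpha, max_cores, overhead):
        """Pick the core count minimizing runtime when each extra core
        also adds a fixed management overhead."""
        times = {p: moldable_time(work, alpha, p) + overhead * p
                 for p in range(1, max_cores + 1)}
        return min(times, key=times.get)

    # Example: 100 units of work, 5% sequential, up to 16 aggregated cores.
    for overhead in (0.0, 0.5):
        print(overhead, best_allotment(100.0, 0.05, 16, overhead))
    # With no overhead the sketch grabs all 16 cores; with even a small
    # per-core cost the best allotment drops (here to 14).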
|
3 |
Exploiting multiple levels of parallelism and online refinement of unstructured meshes in atmospheric model application. Schepke, Claudio, January 2012
Weather forecasts over long periods of time have become increasingly important. Worldwide concern about the consequences of climate change has stimulated research to determine the climate of the coming decades. At the same time, current modeling and simulation of climate and weather are still far from the desired accuracy. Refining the representation of the Earth's surface, and thereby increasing the number of discrete points used in climate modeling and the precision of the computed solutions, is a goal that conflicts with the performance of numerical applications. Applications that simulate long periods of time and involve a large number of operations have execution times that are infeasible on traditional computer architectures. To overcome this, a climate model can adopt different levels of surface refinement, using more discrete points only in regions where higher precision is required. This is the case for the Ocean-Land-Atmosphere Model, which allows the static refinement of a particular region of the Earth at the start of the execution. Dynamic mesh refinement, however, would make it possible to better understand specific climatic conditions that appear at execution time in any region of interest, without restarting the application. With the advent of multi-core processors and the adoption of GPUs for general-purpose computing, computer architectures offer several levels of parallelism: today there is parallelism within a processor, among processors, and among computers. To extract the maximum performance from current computers, all available levels of parallelism must be used when developing concurrent applications. However, no single parallel programming interface exploits all these levels well at the same time. In this context, this thesis investigates how to exploit the different levels of parallelism in atmospheric models by combining classical parallel programming interfaces, and how such models can support mesh refinement at execution time. The results obtained from the implementations show that it is possible to reduce the execution time of an atmospheric simulation by using different levels of parallelism through the combined use of parallel programming interfaces. Higher performance was also achieved for atmospheric applications that refine their meshes at runtime. As a consequence, a higher-resolution mesh can be adopted to represent the Earth's atmosphere, and the numerical forecasts become more accurate.
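The thesis combines interfaces such as MPI and OpenMP; purely to illustrate the nesting of a coarse level (across nodes or regions) and a fine level (within a region), here is a standard-library Python sketch. The function names, region sizes and the placeholder computation are invented, and Python threads stand in for the fine level for illustration only.

    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def refine_cell(cell):
        # Fine-grained work on one mesh cell (placeholder computation).
        return cell * cell

    def process_region(region):
        # Inner level: thread-level parallelism within one mesh region.
        with ThreadPoolExecutor(max_workers=4) as pool:
            return sum(pool.map(refine_cell, region))

    if __name__ == "__main__":
        # Outer level: process-level parallelism across mesh regions,
        # analogous to distributing regions among cluster nodes.
        regions = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            print(sum(pool.map(process_region, regions)))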
|
4 |
Realisierung einer Schedulingumgebung für gemischt-parallele Anwendungen und Optimierung von layer-basierten Schedulingalgorithmen / Development of a scheduling support environment for mixed parallel applications and optimization of layer-based scheduling algorithms. Kunis, Raphael, 25 January 2011
One challenge of parallel processing is achieving scalability of large parallel applications across different parallel systems. The central problem is that an application may run very well on one parallel system, yet porting it to another system usually leads to poor results.

By using the programming model of parallel tasks with dependencies, scalability can be improved considerably for many parallel algorithms. Programming with parallel tasks yields task graphs with dependencies representing a parallel application, which is also called a mixed parallel application. The basis for an efficient execution of a mixed parallel application is a suitable schedule, which prescribes an efficient mapping of the parallel tasks onto the processors of the parallel system. Scheduling algorithms are used to compute such a schedule.

A central problem in determining a schedule for mixed parallel applications is that scheduling is already NP-hard for single-processor tasks with dependencies on a parallel system with two processors. Therefore, only approximation algorithms and heuristics exist to compute a schedule. One approach is layer-based scheduling algorithms: these first build layers of independent parallel tasks and then compute a schedule for each layer separately.

A weakness of these scheduling algorithms is the combination of the individual layer schedules into the global schedule. The presented Move-blocks algorithm offers an elegant way to improve this combination step, by merging the schedules of consecutive layers.

Although a multitude of scheduling algorithms for mixed parallel applications exist, there has so far been no comprehensive tool support for scheduling. In particular, no scheduling environment unites a variety of scheduling algorithms. The presentation of SEParAT, a flexible, component-based and extensible scheduling environment, is the second focus of this dissertation. SEParAT supports several usage scenarios that go well beyond pure scheduling, e.g. comparing scheduling algorithms and extending and implementing new scheduling algorithms. Besides these usage scenarios, both the internal processing of a scheduling pass and the component-based software architecture are presented in detail.
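A minimal Python sketch of the layering step described above, assuming tasks are numbered in topological order; the per-layer scheduling and the Move-blocks merging of consecutive layer schedules are not shown.

    from collections import defaultdict

    def build_layers(succ, n_tasks):
        """Split a task DAG into layers of mutually independent tasks:
        layer k holds the tasks whose longest chain of predecessors
        has length k, the layering used by layer-based schedulers."""
        preds = defaultdict(set)
        for u, vs in succ.items():
            for v in vs:
                preds[v].add(u)
        level = {}
        for u in range(n_tasks):  # assumes ids 0..n-1 in topological order
            level[u] = 1 + max((level[p] for p in preds[u]), default=-1)
        layers = defaultdict(list)
        for u, k in level.items():
            layers[k].append(u)
        return [layers[k] for k in sorted(layers)]

    # Example DAG: 0 -> {1, 2}, 1 -> {3}, 2 -> {3}
    print(build_layers({0: [1, 2], 1: [3], 2: [3]}, 4))
    # [[0], [1, 2], [3]]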
|
5 |
Efficient optimal multiprocessor scheduling algorithms for real-time systems. Nelissen, Geoffrey, 08 January 2013
Real-time systems are composed of a set of tasks that must respect deadlines. They are found in applications as diverse as telecommunications, medical devices, cars, planes, satellites and military systems. Missing deadlines in a real-time system may have consequences ranging from a degraded quality of service to the complete stop of the application, or even the death of people. Being able to prove the correct operation of such systems is therefore essential; this is the goal of real-time scheduling theory.

In recent years, we have witnessed a paradigm shift in computing platform architectures: uniprocessor platforms have given way to multiprocessor architectures. While real-time scheduling theory can be considered mature for uniprocessor systems, it is still an evolving research field for multiprocessor architectures. One of the main difficulties with multiprocessor platforms is to provide an optimal scheduling algorithm, i.e. one that constructs a schedule respecting all task deadlines for any task set for which a solution exists. Although optimal multiprocessor real-time scheduling algorithms exist, they usually cause an excessive number of task preemptions and migrations during the schedule. These preemptions and migrations cause overheads that must be added to the task execution times; task sets that would have been schedulable if preemptions and migrations were free thus become unschedulable in practice. An efficient scheduling algorithm is therefore one that either minimizes the number of preemptions and migrations or reduces their cost.

This dissertation presents the following results:

- We show that reducing the "fairness" of the schedule favorably impacts the number of preemptions and migrations. Hence, all the scheduling algorithms proposed in this thesis tend to reduce or even suppress the fairness of the computed schedule.

- We propose three new online scheduling algorithms. The first, BF2, is optimal for the scheduling of sporadic tasks in discrete-time environments and reduces the number of task preemptions and migrations compared with the state of the art in discrete-time systems. The second is optimal for the scheduling of periodic tasks in a continuous-time environment; because it is based on a semi-partitioned scheme, it should favorably impact the preemption overheads. The third, U-EDF, is optimal for the scheduling of sporadic and dynamic task sets in a continuous-time environment. It is the first real-time scheduling algorithm that is not based on the notion of fairness and nevertheless remains optimal for the scheduling of sporadic (and dynamic) systems. This result was achieved by extending the uniprocessor algorithm EDF to the multiprocessor scheduling problem.

- Because coding techniques are also evolving as the degree of parallelism in computing platforms increases, we provide solutions enabling the scheduling of parallel tasks with existing scheduling algorithms, which were initially designed for sequential independent tasks.
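For intuition on what optimality means in this setting, the following Python sketch implements the classical feasibility condition for periodic tasks with implicit deadlines on m identical processors with preemption and migration allowed. An optimal algorithm meets exactly this bound; BF2 and U-EDF go further by also limiting preemptions and migrations and handling sporadic tasks, which this sketch does not capture.

    def feasible(tasks, m):
        """Feasibility test for periodic, implicit-deadline tasks on m
        identical processors: schedulable by an optimal algorithm iff
        no single task needs more than one processor and the total
        utilization does not exceed m."""
        utilizations = [c / t for c, t in tasks]  # (WCET, period) pairs
        return max(utilizations) <= 1.0 and sum(utilizations) <= m

    # Example: three tasks, each of utilization 0.5, on two processors.
    print(feasible([(2, 4), (3, 6), (4, 8)], m=2))  # 1.5 <= 2 -> True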
|