21 |
Um estudo sobre problemas de escalonamento de tarefas com atrasos de comunicação de valores extremos / A study of scheduling problems subjected to extreme delay values
Pires, Renan Ferraz January 2013 (has links)
Esta dissertação de mestrado apresenta um estudo sobre problemas de escalonamento de tarefas com atrasos de comunicação. Mais precisamente, são abordados problemas de escalonar um conjunto de tarefas em um conjunto de máquinas paralelas de número limitado ou não, e tarefas de tempo de processamento unitário, sujeitas a relações de precedência, e com atrasos de comunicação estabelecidos para cada par de tarefas precedentes, assumindo valores extremos, ou seja, podendo ser desprezíveis ou infinitamente grandes, isto com o objetivo de minimizar o tempo em que a última tarefa escalonada termina seu processamento - minimização do makespan. Sendo assim, dois problemas são demonstrados serem da classe NP-difícil. Para o primeiro, a quantidade de processadores é indicada a cada instância, sendo este resultado válido ainda que as relações de precedência formem um conjunto de cadeias (P|chains; cij ∈ {0, ∞}; pj = 1|Cmax). O segundo problema admite relações de precedência arbitrárias e é válido para qualquer quantidade fixa de processadores diferente de um (P2|prec; cij ∈ {0, ∞}; pj = 1|Cmax). Por outro lado, neste trabalho, dois outros problemas são demonstrados serem solúveis em tempo polinomial, ou seja, estarem na classe P, ambos quando uma quantidade ilimitada de processadores está disponível. É visto que, se a ordem de precedência das tarefas é limitada a uma árvore descendente, o problema é polinomial (P∞|tree; cij ∈ {0, ∞}; pj = 1|Cmax). O outro caso polinomial demonstrado é válido quando é permitido processar a mesma tarefa em mais de um processador (P∞|prec; cij ∈ {0, ∞}; pj = 1|Cmax). Para ambos os casos são apresentados os algoritmos polinomiais. Finalmente, são apresentados resultados para o problema de escalonar tarefas particionadas em conjuntos para os quais todas as tarefas devem ser processadas no mesmo processador. O problema é NP-difícil quando a quantidade de processadores é determinada a cada instância. Esse resultado é válido ainda que a precedência seja restrita a duas cadeias. O problema se torna polinomial quando o conjunto de partições é limitado por constante e as cadeias são restritas em uma das duas formas: pela quantidade delas ou pela quantidade de tarefas em cada uma delas. Como trabalho futuro, este estudo deixa em aberto a NP-Completude do problema de escalonar sob tais atrasos de comunicação de valores extremos, para uma quantidade fixa de processadores, quando a ordem de precedência é de alguma forma restrita, por exemplo, uma árvore descendente (Pm|out-tree; cij ∈ {0, ∞}; pj = 1|Cmax).
/ This Master's Thesis presents a study on scheduling problems subject to communication delays. More precisely, it addresses problems of scheduling a set of tasks on a set of parallel machines, limited in number or not, where the tasks (or jobs) have unit execution time and are subject to a precedence relation. Communication delays are imposed on each pair of precedence-related tasks and take extreme values, i.e., they are either negligible or infinitely large. The objective is to minimize the completion time of the last job to be processed, that is, to minimize the makespan. Thus, NP-hardness is demonstrated for two problems. In the first, the number of processors is given in the instance, and the result holds even when the precedence relation is restricted to a set of chains (P|chains; cij ∈ {0, ∞}; pj = 1|Cmax). The second result is valid when arbitrary precedence relations are allowed and any fixed number of processors greater than one is available (P2|prec; cij ∈ {0, ∞}; pj = 1|Cmax). Two other problems are demonstrated to have polynomial-time solutions, both when an unlimited number of processors is available. The first requires the precedence relation to be an out-tree (P∞|tree; cij ∈ {0, ∞}; pj = 1|Cmax). The second is valid when the same job may be executed on more than one processor (P∞|prec; cij ∈ {0, ∞}; pj = 1|Cmax). For both cases, polynomial algorithms are presented. Finally, results are presented for the problem of scheduling jobs that are partitioned into sets whose tasks must all be executed on the same processor. The problem is NP-hard when the number of processors is given in the instance, and this holds even if the precedence relation consists of two chains. Also, it is shown that the problem becomes solvable in polynomial time if the number of partitions is bounded by a constant and the chains are bounded by a constant in either their number or the number of tasks each chain may have. As future work, this study leaves open whether scheduling tasks subject to such extreme-valued communication delays is NP-hard when a fixed number of processors is available and the precedence relation is somehow restricted, for example, to an out-tree (Pm|out-tree; cij ∈ {0, ∞}; pj = 1|Cmax).
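To make the problem input concrete, the following is a minimal sketch (an illustration only, not one of the thesis's algorithms): it builds a small instance of unit-execution-time tasks whose precedence edges carry communication delays restricted to {0, ∞}, and it computes the critical-path length, a lower bound on Cmax for any delay values that is attained when all delays are negligible and unlimited processors are available. The task names and edge delays are invented.

```python
import math
from collections import defaultdict

# Unit-execution-time tasks with precedence edges labelled by a
# communication delay that is either 0 (negligible) or math.inf.
# Edges are (predecessor, successor, delay); values are invented.
edges = [
    ("a", "b", 0),
    ("a", "c", math.inf),
    ("b", "d", math.inf),
    ("c", "d", 0),
]
tasks = {"a", "b", "c", "d"}

def critical_path_length(tasks, edges):
    """Longest chain of unit tasks, ignoring the communication delays.

    This lower-bounds Cmax for any delay values; for unlimited processors
    and all delays equal to 0 it is the optimal makespan (ASAP schedule).
    """
    preds = defaultdict(list)
    for u, v, _delay in edges:
        preds[v].append(u)

    finish = {}  # earliest finish time with unlimited processors, no delays

    def f(t):
        if t not in finish:
            finish[t] = 1 + max((f(p) for p in preds[t]), default=0)
        return finish[t]

    return max(f(t) for t in tasks)

print(critical_path_length(tasks, edges))  # -> 3 (the chain a -> b -> d)
```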
|
22 |
Energy Efficiency Analysis of ARM big.LITTLE Global Task Scheduling
Israelsson, Sigurd January 2015 (has links)
In this paper, an ARM big.LITTLE system with Global Task Scheduling is evaluated in terms of its energy efficiency and its performance measured in execution time. The big.LITTLE system is evaluated against the same system with only the big or only the LITTLE processor active. The evaluation is done by performing experiments that target three different levels of load: full load, varying load, and low load. The benchmarks PARSEC Blackscholes and BBench are used to put the system under a synthetic workload in the tests. The results show that, overall, big.LITTLE achieves an improvement in execution time for all test scenarios, although a very slim one for varying load, and is more energy efficient than the big processor, with the possible exception of the low-load scenario. However, the LITTLE processor by itself is found to be the most energy-efficient configuration, even though it showed the slowest execution time.
|
23 |
Virtuální servery s operačním systémem Fedora / Virtual servers with Fedora operating system
Gajdušek, Ondřej January 2021 (has links)
This thesis deals with the PlanetLab Server Manager application, whose aim is to simplify application development within the experimental distributed network PlanetLab. The thesis presents the PlanetLab network and describes its infrastructure. The PlanetLab Server Manager application, abbreviated as plbmng, is described, its current state is evaluated, and the existing functionality is presented to the reader. The work then focuses mainly on the implementation of new functionality, namely software distribution and job scheduling to run the software at a specified time. The last part of the work presents an experiment that demonstrates real-life usage of the newly added functionality. The results of the work are published in a public repository on the GitLab platform, and the application was published in the package index for Python packages, PyPI.
|
24 |
DYNAMIC TASK OFFLOADING FOR LATENCY MINIMIZATION IN IOT EDGE-CLOUD ENVIRONMENTS
Haimin Ku (12457464) 26 April 2022 (has links)
With the exponential growth and diversity of Internet of Things (IoT) devices, computation-intensive and delay-sensitive applications, such as object detection, smart homes, and smart grids, are emerging constantly. We can adopt the paradigm of cloud computing to offload computation-heavy tasks from IoT devices to a cloud server, which can overcome the limitations of IoT devices with more powerful resources. However, the cloud computing architecture can incur high latency, which is not suitable for IoT devices that have limited computing and storage capabilities. Edge computing has been introduced to improve this situation by deploying an edge device near the IoT devices, which can provide them with computing resources at low latency compared to cloud computing. Nevertheless, the edge server may not be able to complete all the tasks offloaded from the devices in time when requests are flooding in. In such cases, the edge server can offload some of the requested tasks to a cloud server to further speed up processing with more powerful cloud resources. In this paper, we aim to minimize the average completion time of tasks in an IoT edge-cloud environment by optimizing the task offloading ratio from edge to cloud, based on Deep Deterministic Policy Gradient (DDPG), a type of Reinforcement Learning (RL) approach. We propose a dynamic task offloading decision mechanism deployed on the edge that determines the amount of computation to be processed in the cloud server, considering multiple factors needed to complete a task. Simulation results demonstrate that our dynamic task offloading decision mechanism improves the overall completion time of tasks compared to naïve approaches.
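As a hedged illustration of the quantity being optimized (the thesis learns the offloading ratio with DDPG from the observed system state; the closed form below only applies to this invented, simplified model), the sketch splits a task between edge and cloud so that the locally processed part and the uploaded-and-remotely-processed part finish at the same time. All parameter names and numbers are assumptions, not the thesis's system model.

```python
def completion_time(rho, cycles, data_bits, f_edge, f_cloud, uplink_bps):
    """Completion time when a fraction rho of a task is offloaded to the cloud.

    Simplified model (an assumption): the edge executes (1 - rho) of the
    cycles locally while the cloud part is uploaded and executed remotely;
    the task finishes when the slower of the two sides finishes.
    """
    t_edge = (1 - rho) * cycles / f_edge
    t_cloud = rho * data_bits / uplink_bps + rho * cycles / f_cloud
    return max(t_edge, t_cloud)

def balancing_ratio(cycles, data_bits, f_edge, f_cloud, uplink_bps):
    """Offload ratio that equalizes edge and cloud finishing times."""
    edge_cost = cycles / f_edge
    cloud_cost = data_bits / uplink_bps + cycles / f_cloud
    return edge_cost / (edge_cost + cloud_cost)

# Invented example: 2e9 CPU cycles, 8 Mbit of input data, a 1 GHz edge CPU,
# a 4 GHz cloud CPU and a 100 Mbit/s uplink.
rho = balancing_ratio(2e9, 8e6, 1e9, 4e9, 100e6)
print(rho, completion_time(rho, 2e9, 8e6, 1e9, 4e9, 100e6))
```

A learned policy such as DDPG plays the role of this formula when load, bandwidth, and queueing vary over time and no simple closed form captures the completion time.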
|
25 |
A Scheduling and Partitioning Model for Stencil-based Applications on Many-Core Devices / Modèle d'Ordonnancement et de Partitionnement pour Applications à Maillages et Calculs Réguliers dans le Cadre d'Accélérateurs de Type «ManyCore»
Papin, Jean-Charles 08 September 2016 (has links)
La puissance de calcul des plus grands calculateurs ne fait qu'augmenter: de quelques centaines de cœurs de calculs dans les années 1990, on en est maintenant à plusieurs millions! Leur infrastructure évolue aussi: elle n'est plus linéaire, mais complètement hiérarchique. Les applications de calcul intensif, largement utilisées par la communauté scientifique, doivent donc se munir d'outils permettant d'utiliser pleinement l'ensemble de ces ressources de manière efficace. La simulation numérique repose bien souvent sur d'importants calculs dont le coût, en termes de temps et d'accès mémoire, peut fortement varier au cours du temps: on parle de charge de calcul variable. Dans cette Thèse, on se propose d'étudier les outils actuels de répartition des données et des calculs, afin de voir les raisons qui font que de tels outils ne sont pas pleinement adaptés aux fortes variations de charge ainsi qu'à la hiérarchie toujours plus importante des nouveaux calculateurs. Nous proposerons alors un nouveau modèle d'ordonnancement et de partitionnement, basé sur des interactions physiques, particulièrement adapté aux applications basées sur des maillages réguliers et présentant de fortes variations de charge au cours du temps. Nous validerons alors ce modèle en le comparant à des outils de partitionnement de graphes reconnus et largement utilisés, et verrons les raisons qui le rendent plus performant pour des applications aussi bien parallèles que distribuées. Enfin, nous proposerons une interface nous permettant d'utiliser cette méthode d'ordonnancement dans des calculateurs toujours plus hiérarchiques.
/ The computing capability of the largest computing centers is still increasing: from a few hundred cores in the 90's, they can now exceed several million cores! Their infrastructure also evolves: it is no longer linear, but fully hierarchical. High-performance applications, widely used by the scientific community, rely on tools that allow them to efficiently and fully use computing resources. Numerical simulations mostly rely on large computation chains whose cost (computing load), either computing time or memory access time, can vary strongly over time: this is referred to as dynamic computing load evolution. In this thesis, we propose to study current data partitioning and computation scheduling tools, and to explore their limitations with regard to strong and repetitive load variations as well as the ever-increasing cluster hierarchy. We then propose a new scheduling and partitioning model, based on physical interactions, particularly suitable for regular-mesh-based applications that produce strong computing load variations over time. We then compare our model against well-known and widely used graph partitioning tools and discuss the reasons that make it more reliable for such parallel and distributed applications. Lastly, we propose a multi-level scheduling interface that is specially designed to allow the use of our model in even more hierarchical clusters.
|
26 |
REAL-TIME SCHEDULING ALGORITHMS FOR PRECEDENCE RELATED TASKS ON HETEROGENEOUS MULTIPROCESSORS
AULUCK, NITIN 23 May 2005 (has links)
No description available.
|
27 |
ENERGY MANAGEMENT FOR BATTERY-POWERED RECONFIGURABLE COMPUTING PLATFORMS AND NETWORKS
KHAN, JAWAD B. January 2005 (has links)
No description available.
|
28 |
Parallel Processing of Large Scale Genomic Data
Kutlu, Mucahid 09 October 2015 (has links)
No description available.
|
29 |
GPScheDVS: A New Paradigm of the Autonomous CPU Speed Control for Commodity-OS-based General-Purpose Mobile Computers with a DVS-friendly Task Scheduling
Kim, Sookyoung 25 September 2008 (links)
This dissertation studies the problem of increasing battery life-time and reducing CPU heat dissipation without degrading system performance in commodity-OS-based general-purpose (GP) mobile computers using the dynamic voltage scaling (DVS) function of modern CPUs. The dissertation especially focuses on the impact of task scheduling on the effectiveness of DVS in achieving this goal. The task scheduling mechanism used in most contemporary general-purpose operating systems (GPOS) prioritizes tasks based only on their CPU occupancies irrespective of their deadlines. In currently available autonomous DVS schemes for GP mobile systems, the impact of this GPOS task scheduling is ignored and a DVS scheme merely predicts and enforces the lowest CPU speed that can meet tasks' deadlines without meddling with task scheduling. This research, however, shows that it is impossible to take full advantage of DVS in balancing energy/power and performance in the current DVS paradigm due to the mismatch between the urgency (i.e., having a nearer deadline) and priority of tasks under the GPOS task scheduling. This research also shows that, consequently, a new DVS paradigm is necessary, where a "DVS-friendly" task scheduling assigns higher priorities to more urgent tasks.
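As a rough sketch of what enforcing "the lowest CPU speed that can meet tasks' deadlines" can look like once scheduling is deadline-based, the snippet below picks the minimum available frequency that keeps a periodic task set within the EDF utilization bound, assuming (simplistically) that execution time scales inversely with frequency. This is a textbook illustration under those assumptions, not the prediction mechanism proposed in this dissertation.

```python
def lowest_feasible_frequency(tasks, frequencies, f_max):
    """Lowest frequency at which the task set stays EDF-schedulable.

    tasks: list of (wcet_at_f_max, period) pairs, in seconds.
    Assumes execution time scales as f_max / f, i.e. a purely CPU-bound
    workload; memory-bound phases would scale less than this.
    """
    u_at_f_max = sum(c / t for c, t in tasks)
    for f in sorted(frequencies):          # try the slowest speeds first
        if u_at_f_max * (f_max / f) <= 1.0:
            return f
    return f_max                           # even the highest speed is needed

# Invented example: three periodic tasks (U = 0.4 at f_max), frequencies in MHz.
tasks = [(0.002, 0.010), (0.005, 0.050), (0.010, 0.100)]
print(lowest_feasible_frequency(tasks, [600, 800, 1000, 1400, 2000], 2000))  # -> 800
```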
The dissertation begins by showing how the mismatch between the urgency and priority of tasks limits the effectiveness of DVS and why conventional real-time (RT) task scheduling, which is intrinsically DVS-friendly, cannot be used in GP systems. Then, the dissertation describes the requirements for "DVS-friendly GP" task scheduling as follows. Unlike the existing GPOS task scheduling, it should prioritize tasks by their deadlines. But, at the same time, it must be able to do so without a priori knowledge of the deadlines and must be able to handle the various tasks running in today's GP systems, unlike conventional RT task scheduling. These include sporadic tasks, such as user-interactive tasks, and tasks having dependencies on each other, such as a family of threads and user-interface server/client tasks. Therefore, the first major result of this research is to propose a new DVS paradigm for commodity-OS-based GP mobile systems in which DVS is performed under a DVS-friendly GP task scheduling that meets these requirements.
The dissertation then proposes GPSched, a DVS-friendly GP task scheduling mechanism for commodity-Linux-based GP mobile systems, as the second major result. GPSched autonomously prioritizes tasks by their deadlines, using the type of service that each task is involved with as an indicator of its deadline. At the same time, GPSched properly handles a family of threads and user-interface server/client tasks by distinguishing and scheduling them as a group, and user-interactive tasks by incorporating a feature of current GPOS task scheduling (raising the priority of a task that is idle most of the time), which is desirable to quickly respond to user input events, into its prioritization mechanism.
The final major result is GPScheDVS, the integration of GPSched and a task-based DVS scheme customized for GPSched called GPSDVS. GPScheDVS provides two alternative modes: (1) the system-energy-centric (SE) mode aiming at a longer battery life-time by reducing system energy consumption and (2) the CPU-power-centric (CP) mode focusing on limiting CPU heat dissipation by reducing CPU power consumption.
Experiments conducted under a set of real-life usage scenarios on a laptop show that the best, worst, and average reductions of system energy consumption by the SE mode GPScheDVS were 24%, -1%, and 17%, respectively, over the no-DVS case and 11%, -1%, and 5%, respectively, over the state-of-the-art task-based DVS scheme in the current DVS paradigm. The experiments also show that the best, worst, and average reductions of CPU energy consumption by the SE mode GPScheDVS were 69%, 0%, and 43% over the no-DVS case and 26%, -1%, and 13% over the state-of-the-art task-based DVS scheme in the current DVS paradigm. Considering that no power management was performed on non-CPU components for the experiments, these results imply that the system energy savings achievable by GPScheDVS will be increased if the non-CPU components' power is properly managed. On the other hand, the best, worst, and average reductions of average CPU power by the CP mode GPScheDVS were 69%, 49%, and 60% over the no-DVS case and 63%, 0%, and 30% over the existing task-based DVS scheme. Furthermore, oscilloscope measurements show that the best, worst, and average reduction of peak system power by the CP mode GPScheDVS were 29%, 10%, and 23% over the no-DVS case and 28%, 6%, and 22% over the existing task-based DVS scheme signifying that GPScheDVS is effective also in restraining the peak CPU power.
On top of these advantages in energy and power, the experimental results show that GPScheDVS even improves system performance in either mode, due to its deadline-based task scheduling property. For example, the deadline meet ratio for continuous video playback under GPScheDVS was at least 91.2%, whereas the ratios for the no-DVS case and the existing task-based DVS scheme dropped to 71.3% and 71.0%, respectively. / Ph. D.
|
30 |
Dynamic Power-Aware Techniques for Real-Time Multicore Embedded Systems
March Cabrelles, José Luis 30 March 2015 (links)
The continuous shrink of transistor sizes has allowed more complex and powerful devices to be implemented in the same area, which provides new capabilities and functionalities. However, this complexity increase comes with a considerable rise in power consumption. This situation is critical in portable devices, where the energy budget is limited and, hence, battery lifetime defines the usefulness of the system. Therefore, power consumption has become a major concern in the design of real-time multicore embedded systems.
This dissertation proposes several techniques aimed at saving energy without sacrificing real-time schedulability in this type of system. The proposed techniques deal with different main components of the system. In particular, they affect the task partitioner and the scheduler, as well as the memory controller.
Some of the techniques are especially tailored for multicores with shared Dynamic Voltage and Frequency Scaling (DVFS) domains. Workload balancing among cores in a given domain has a strong impact on power consumption, since all the cores sharing a DVFS domain must run at the speed required by the most loaded core.
In this thesis, a novel workload partitioning algorithm is proposed, namely Load-bounded Resource Balancing (LRB). The proposal allocates tasks to cores so as to balance the consumption of a given resource (processor or memory) among cores, improving real-time schedulability by increasing the overlap between processor and memory activity. However, distributing tasks in this way regardless of the individual core utilizations could lead to unfair load distributions; that is, one of the cores could become much more loaded than the others. To avoid this scenario, when a given utilization threshold is exceeded, tasks are assigned to the least loaded core.
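A minimal sketch of the partitioning idea described above (not the thesis's actual LRB implementation): each task goes to the core that best balances a chosen resource, unless that core's utilization would exceed a threshold, in which case the task falls back to the least loaded core. The task fields and the threshold value are assumptions made for illustration.

```python
def partition(tasks, n_cores, threshold=0.7):
    """Assign tasks to cores, balancing one resource while bounding load.

    tasks: list of dicts with 'resource' (the metric being balanced, e.g.
    memory demand) and 'utilization' (the task's CPU utilization).
    """
    resource = [0.0] * n_cores   # accumulated balanced resource per core
    load = [0.0] * n_cores       # accumulated CPU utilization per core
    mapping = []
    for task in tasks:
        # Core that keeps the balanced resource most even...
        core = min(range(n_cores), key=lambda c: resource[c])
        # ...unless it is already too loaded: then fall back to the
        # least loaded core to avoid an unfair utilization distribution.
        if load[core] + task["utilization"] > threshold:
            core = min(range(n_cores), key=lambda c: load[c])
        resource[core] += task["resource"]
        load[core] += task["utilization"]
        mapping.append(core)
    return mapping

tasks = [{"resource": 0.5, "utilization": 0.3},
         {"resource": 0.2, "utilization": 0.4},
         {"resource": 0.4, "utilization": 0.2}]
print(partition(tasks, n_cores=2))  # -> [0, 1, 1]
```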
Unfortunately, workload partitioning alone is sometimes not able to achieve a good workload balance among cores. Therefore, this work also explores novel task migration approaches, and two task migration heuristics are proposed. The first heuristic, referred to as Single Option Migration (SOM), attempts to perform only one migration, when the workload changes, to improve the utilization balance. Three variants of the SOM algorithm have been devised, depending on the point in time at which the migration attempt is performed: when a task arrives in the system (SOMin), when a task leaves the system (SOMout), and in both cases (SOMin-out). The second heuristic, referred to as Multiple Option Migration (MOM), explores an additional alternative workload partitioning before performing the migration attempt.
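A hedged sketch of the single-migration idea: when the workload changes, look only at the most and the least loaded core and move at most one task between them, and only if that reduces the utilization imbalance. The data structures are invented; this is not the dissertation's SOM implementation.

```python
def single_option_migration(core_tasks):
    """Try a single migration from the most to the least loaded core.

    core_tasks: one dict per core mapping task_id -> utilization.
    Returns (task_id, src_core, dst_core), or None if no single migration
    improves the utilization imbalance.
    """
    loads = [sum(tasks.values()) for tasks in core_tasks]
    src = max(range(len(loads)), key=loads.__getitem__)
    dst = min(range(len(loads)), key=loads.__getitem__)
    best, best_gap = None, loads[src] - loads[dst]
    for task_id, u in core_tasks[src].items():
        gap = abs((loads[src] - u) - (loads[dst] + u))
        if gap < best_gap:
            best, best_gap = (task_id, src, dst), gap
    return best

cores = [{"t1": 0.5, "t2": 0.3}, {"t3": 0.2}]
print(single_option_migration(cores))  # -> ('t2', 0, 1)
```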
Regarding the memory controller, new scheduling policies are devised. Conventional policies used in Non-Real-Time (NRT) systems are not appropriate for systems providing support for both Hard Real-Time (HRT) and Soft Real-Time (SRT) tasks: those policies can introduce variability in the latencies of memory requests and, hence, cause an HRT deadline miss that could lead to a critical failure of the real-time system. To deal with this drawback, a simple policy, referred to as HR-first, which prioritizes requests of HRT tasks, is proposed. In addition, a more advanced approach, namely ATR-first, is presented. ATR-first prioritizes only those requests of HRT tasks that are necessary to ensure real-time schedulability, improving the Quality of Service (QoS) of SRT tasks.
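A minimal sketch of an HR-first-style selection step, assuming each memory request simply carries a flag saying whether it was issued by an HRT task; a real memory controller operates on DRAM command queues, and ATR-first additionally checks which HRT requests are actually needed for schedulability, which is not modelled here.

```python
import heapq

class HRFirstQueue:
    """Request queue that serves hard real-time (HRT) requests before
    soft/non-real-time ones, FIFO within each class (a sketch of the
    HR-first policy described above, not the actual controller)."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves arrival order

    def add(self, request_id, is_hrt):
        # Lower tuples are served first: HRT (0) before non-HRT (1).
        heapq.heappush(self._heap, (0 if is_hrt else 1, self._seq, request_id))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = HRFirstQueue()
q.add("srt-read-A", is_hrt=False)
q.add("hrt-read-B", is_hrt=True)
q.add("hrt-write-C", is_hrt=True)
print(q.next_request(), q.next_request(), q.next_request())
# -> hrt-read-B hrt-write-C srt-read-A
```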
Finally, this thesis also tackles dynamic execution time estimation. The accuracy of this estimation is important not only to avoid deadline misses of HRT tasks but also to increase the QoS of SRT systems. Besides, it can also help to improve the schedulability of the system and to reduce power consumption. The Processor-Memory (Proc-Mem) model, which dynamically predicts the execution time of a real-time application at each frequency level, is proposed. During the first hyperperiod, this model measures at run time, using Performance Monitoring Counters (PMCs), the portion of time that each core spends performing computation (CPU), waiting for memory (MEM), or both (OVERLAP). This information is then used to estimate the execution time at any other working frequency. / March Cabrelles, JL. (2014). Dynamic Power-Aware Techniques for Real-Time Multicore Embedded Systems [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48464
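A hedged sketch of how measured CPU, MEM, and OVERLAP portions could be turned into an execution-time estimate at another frequency, assuming that the compute-only portion scales inversely with frequency, that the memory-only portion does not scale, and that the overlapped region is bounded by whichever of its two components dominates after scaling. This is one plausible reading of the Proc-Mem idea, not the model's actual equations.

```python
def estimate_time(cpu_only, mem_only, overlap, f_measured, f_target):
    """Estimate execution time at f_target from a profile taken at f_measured.

    cpu_only: time computing with no outstanding memory requests (seconds)
    mem_only: time stalled on memory with the core otherwise idle
    overlap:  time where computation and memory accesses overlap
    Assumption (a sketch, not the thesis's exact model): compute time scales
    with frequency, memory time does not, and the overlapped region lasts as
    long as its slower component after scaling.
    """
    scale = f_measured / f_target
    scaled_overlap = max(overlap * scale,  # compute side, slowed down
                         overlap)          # memory side, unchanged
    return cpu_only * scale + mem_only + scaled_overlap

# Profile measured at 2.0 GHz during the first hyperperiod (invented numbers).
print(estimate_time(cpu_only=4.0, mem_only=1.0, overlap=2.0,
                    f_measured=2.0e9, f_target=1.0e9))  # -> 13.0
```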
|