About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

On exploiting spare capacity in hard real-time systems

Davis, Robert Ian January 1995 (has links)
No description available.
2

A Task Selection Based Power-aware Scheduling Algorithm for Applying DVS

Mori, Yuichiro, Asakura, Koichi, Watanabe, Toyohide 08 November 2009 (has links)
No description available.
3

Real-Time Task Scheduling under Thermal Constraints

Ahn, Youngwoo August 2010 (has links)
As the speed of integrated circuits increases, so does their power consumption. Most of this power is turned into heat, which must be dissipated effectively for the circuit to avoid thermal damage. Thermal control has therefore emerged as an important issue in the design and management of circuits and systems. Dynamic speed scaling, where the input power is temporarily reduced by appropriately slowing down the circuit, is one of the major techniques for managing power so as to maintain safe temperature levels. In this study, we focus on thermally-constrained hard real-time systems, where timing guarantees must be met without exceeding safe temperature levels within the microprocessor. Speed scaling mechanisms provided in many of today’s processors offer opportunities to temporarily increase the processor speed beyond levels that would be safe over extended time periods. This dissertation addresses the problem of safely controlling the processor speed when scheduling mixed workloads with both hard real-time periodic tasks and non-real-time, but latency-sensitive, aperiodic jobs. We first introduce the Transient Overclocking Server, which safely reduces the response time of aperiodic jobs in the presence of hard real-time periodic tasks and thermal constraints. We then propose a design-time (off-line) execution-budget allocation scheme for the application of the Transient Overclocking Server. We show that there is an optimal budget allocation which depends on the temporal characteristics of the aperiodic workload. In order to provide a quantitative framework for the allocation of budget during system design, we present a queuing model and validate the model with results from a discrete-event simulator. Next, we describe an on-line thermally-aware transient overclocking method to reduce the response time of aperiodic jobs efficiently at run-time. We describe a modified Slack-Stealing algorithm that considers the thermal constraints of the system together with the deadline constraints of periodic tasks. With the thermal model and temperature data provided by embedded thermal sensors, we compute slack for the aperiodic workload at run-time that satisfies both thermal and temporal constraints. We show that the proposed Thermally-Aware Slack-Stealing algorithm minimizes the response times of aperiodic jobs while guaranteeing both the thermal safety of the system and the schedulability of the real-time tasks. The two proposed speed control algorithms are examples of so-called proactive schemes, since they rely on a prediction of the thermal trajectory to control the temperature before safe levels are exceeded. In practice, the effectiveness of proactive speed control for the thermal management of a system relies on the accuracy of the thermal model that underlies the prediction of the effects of speed scaling and task execution on the temperature of the processor. Due to variances in the manufacturing of the circuit and in the environment in which it operates, an accurate thermal model can be obtained only at deployment time. The absence of power data makes a straightforward derivation of a model impossible. We therefore study and describe a methodology to efficiently infer the thermal model based on monitoring of system temperatures and the number of instructions used for task executions.
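The following is a minimal, hypothetical sketch (not the dissertation's actual algorithm or parameters) of the proactive idea described in this abstract: using a simple first-order thermal model, estimate how long the processor could run at a boosted frequency before the safe temperature would be exceeded.

    # Illustrative sketch only: given an assumed first-order lumped thermal
    # model dT/dt = A*P(f) - B*T, estimate how long the processor may run at a
    # boosted ("transient overclocking") frequency before reaching T_MAX.
    # All constants below are hypothetical placeholders.

    A = 0.8          # heating coefficient, assumed
    B = 0.05         # cooling coefficient (1/second), assumed
    T_MAX = 80.0     # safe die temperature in degrees Celsius, assumed

    def power(freq_ghz: float) -> float:
        """Assumed cubic relation between frequency and dynamic power."""
        return 2.0 * freq_ghz ** 3

    def safe_overclock_budget(t_now: float, boost_freq: float, dt: float = 0.01) -> float:
        """Simulate the thermal trajectory at boost_freq and return how many
        seconds the boost can be sustained before reaching T_MAX."""
        t, elapsed = t_now, 0.0
        while t < T_MAX:
            t += dt * (A * power(boost_freq) - B * t)
            elapsed += dt
            if elapsed > 10.0:      # trajectory settles below T_MAX: no limit needed
                return float("inf")
        return elapsed

    print(safe_overclock_budget(t_now=55.0, boost_freq=3.2))
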
4

Dynamic Voltage and Frequency Scaling Enhanced Task Scheduling Technologies Toward Greener Cloud Computing

Aldhahri, Eiman Ali 01 May 2014 (has links)
The skyrocketing amount of electricity consumed by data centers around the globe has become a serious issue for cloud computing and the entire IT industry. The demand for data centers is rapidly increasing due to the widespread usage of cloud services, and it also leads to huge carbon emissions contributing to the global greenhouse effect. The US Environmental Protection Agency has declared that data centers represent a substantial portion of energy consumption in the US and the whole world. Some of this energy consumption is caused by idle servers or servers running at higher-than-necessary frequencies. Because Dynamic Voltage and Frequency Scaling (DVFS) technology is available in many CPUs, strategically reducing CPU frequency without affecting the Quality of Service (QoS) is desirable. Our goal in this work is to calculate and tune to the best CPU frequency for each running task, combined with two commonly-used scheduling approaches, namely the round robin and first fit algorithms, given the CPU configuration and the execution deadline. The effectiveness of our algorithms is evaluated in a CloudSim/CloudReport simulation environment as well as on a real hypervisor-based computer system with a power gauge. The open-source CloudReport, based on the CloudSim simulator, has been used to integrate our DVFS algorithm with the two scheduling algorithms to illustrate the efficiency of power saving in different scenarios. Furthermore, electricity consumption is measured and compared using the power gauge of a Watts Up meter.
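As a rough illustration of the frequency-tuning step described above, the sketch below picks the lowest discrete DVFS state that still meets a task's deadline; the frequency table, cycle count, and deadline are assumed values, and the thesis combines such a selection with round robin and first fit placement.

    # Hedged sketch: choose the lowest available CPU frequency that still lets
    # a task of a known cycle count finish before its deadline.

    FREQ_STEPS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # assumed DVFS states

    def lowest_feasible_frequency(cycles: float, deadline_s: float) -> float:
        """Return the smallest frequency (GHz) such that cycles / f <= deadline."""
        for f in FREQ_STEPS_GHZ:
            if cycles / (f * 1e9) <= deadline_s:
                return f
        return FREQ_STEPS_GHZ[-1]    # even the top frequency misses the deadline

    # Example: a task of 3 billion cycles with a 2-second deadline
    print(lowest_feasible_frequency(cycles=3e9, deadline_s=2.0))   # -> 1.6
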
5

Software Design of A Task-level High Level Synthesis Method

Jian, Jia-Dau 07 September 2004 (has links)
Along with the development of VLSI technology and the trend toward system-on-chip design, traditional high-level synthesis cannot cope with the relative complexity of system-on-chip designs. In order to achieve optimal resource allocation, meet performance and power requirements, and reduce design time, we need high-level synthesis software that deals with system-level behavior. In consideration of system complexity, we have proposed a high-level synthesis method that synthesizes at the level of task-grained units in a system behavior. This method performs efficient task-level resource allocation, task binding, and task scheduling to reach a system design that meets the performance and power requirements at a low implementation cost. We utilize a simulated annealing technique to achieve overall system optimization. We designed and implemented the software for the task-level high-level synthesis method. In this research, the design consists of three modules: the initial synthesis module, the heuristic movement module, and the performance evaluation module. We will use the software to carry out experiments with the task-level high-level synthesis method on application systems to verify its capability in designing system chips.
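Below is a hedged sketch of how simulated annealing can drive task binding toward a low-cost design, in the spirit of the method summarized above; the cost model, move set, and cooling schedule here are illustrative assumptions rather than the thesis implementation.

    # Illustrative simulated-annealing loop for task-to-resource binding.
    import math
    import random

    TASKS = ["t0", "t1", "t2", "t3"]
    UNITS = ["u0", "u1"]                      # assumed functional units
    EXEC_TIME = {("t0", "u0"): 4, ("t0", "u1"): 2,
                 ("t1", "u0"): 3, ("t1", "u1"): 5,
                 ("t2", "u0"): 2, ("t2", "u1"): 2,
                 ("t3", "u0"): 6, ("t3", "u1"): 3}
    UNIT_COST = {"u0": 1.0, "u1": 2.5}        # assumed area/power cost per unit

    def cost(binding):
        """Weighted sum of schedule length (per-unit serial execution) and
        the implementation cost of the units actually used."""
        load = {u: 0 for u in UNITS}
        for t, u in binding.items():
            load[u] += EXEC_TIME[(t, u)]
        latency = max(load.values())
        area = sum(UNIT_COST[u] for u in set(binding.values()))
        return latency + 0.5 * area

    def anneal(steps=5000, temp=10.0, cooling=0.999):
        binding = {t: random.choice(UNITS) for t in TASKS}
        best = dict(binding)
        for _ in range(steps):
            cand = dict(binding)
            cand[random.choice(TASKS)] = random.choice(UNITS)   # random re-binding move
            delta = cost(cand) - cost(binding)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                binding = cand
                if cost(binding) < cost(best):
                    best = dict(binding)
            temp *= cooling
        return best, cost(best)

    print(anneal())
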
6

Exploring heterogeneous scheduling using the task-centric programming model

Podobas, Artur, Brorsson, Mats, Vlassov, Vladimir January 2012 (has links)
Computer architecture technology is moving towards more heterogeneous solutions, which will contain a number of processing units with different capabilities that may increase the performance of the system as a whole. However, with increased performance comes increased complexity; complexity that is barely handled even in today's homogeneous multiprocessing systems. The present study tries to solve a small piece of the heterogeneous puzzle: how can we exploit all system resources in a performance-effective and user-friendly way? Our proposed solution includes a run-time system capable of using a variety of different heterogeneous components while providing the user with the already familiar task-centric programming model interface. Furthermore, when dealing with non-uniform workloads, we show that traditional approaches based on centralized or work-stealing queue algorithms do not work well, and we propose a scheduling algorithm based on trend analysis to distribute work in a performance-effective way across resources.
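The sketch below illustrates, under assumed resource names and a deliberately simple trend heuristic, how trend analysis of per-resource throughput could steer work distribution; it is not the paper's run-time system.

    # Hypothetical trend-based dispatch: keep a short history of per-resource
    # throughput and send the next task to the resource with the best
    # predicted rate.
    from collections import defaultdict, deque

    HISTORY = defaultdict(lambda: deque(maxlen=8))   # recent tasks/sec per resource
    RESOURCES = ["cpu0", "cpu1", "gpu0"]

    def record_completion(resource: str, tasks_per_sec: float) -> None:
        HISTORY[resource].append(tasks_per_sec)

    def predicted_rate(resource: str) -> float:
        """Simple linear trend: last observation plus the recent slope."""
        h = HISTORY[resource]
        if len(h) < 2:
            return h[-1] if h else 1.0               # optimistic default
        slope = (h[-1] - h[0]) / (len(h) - 1)
        return h[-1] + slope

    def pick_resource() -> str:
        return max(RESOURCES, key=predicted_rate)

    record_completion("cpu0", 120.0); record_completion("cpu0", 110.0)
    record_completion("gpu0", 300.0); record_completion("gpu0", 340.0)
    print(pick_resource())                            # -> gpu0
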
7

Design of a software component model with task scheduling for many-core based parallel architectures, application to the Gysela5D code

Richard, Jérôme 06 December 2017 (has links)
This thesis aims to define and validate a programming model that combines the description of software architectures with dynamic task scheduling in a high-performance context, for example by integrating the advantages of models such as L²C and StarPU. The final goal is to propose a model capable of supporting applications such as Gysela5D on current and future parallel architectures (such as a wide variety of clusters and supercomputers including accelerators).
8

Locality-aware Scheduling and Characterization of Task-based Programs

Muddukrishna, Ananya January 2014 (has links)
Modern computer architectures expose an increasing number of parallel features supported by complex memory access and communication structures. Currently used task scheduling techniques perform poorly since they focus solely on balancing computation load across parallel features and remain oblivious to the locality properties of the supporting structures. We contribute locality-aware task scheduling mechanisms which improve execution-time performance on average by 44% and 11%, respectively, on two locality-sensitive architectures: the Tilera TILEPro64 manycore processor and a four-socket SMP machine based on the AMD Opteron 6172 processor. Programmers need task performance metrics, such as the amount of task parallelism and task memory hierarchy utilization, to analyze the performance of task-based programs. However, existing tools indicate performance mainly using thread-centric metrics, so programmers resort to low-level and tedious thread-centric analysis methods to infer task performance. We contribute tools and methods to characterize task-based OpenMP programs at the level of tasks, with which programmers can quickly understand important properties of the task graph, such as the critical path and parallelism, as well as properties of individual tasks, such as instruction count and memory behavior.
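As a small illustration of locality-aware dispatch (not the thesis implementation), the sketch below places each task on the queue of the memory node that owns most of its data pages; the node layout and task structure are hypothetical.

    # Hypothetical locality-aware task placement across NUMA node queues.
    from collections import Counter, defaultdict

    NODE_QUEUES = defaultdict(list)          # NUMA node id -> list of pending tasks

    def home_node(task_pages, page_to_node):
        """Pick the node owning the largest share of the task's data pages."""
        counts = Counter(page_to_node[p] for p in task_pages)
        return counts.most_common(1)[0][0]

    def submit(task_name, task_pages, page_to_node):
        NODE_QUEUES[home_node(task_pages, page_to_node)].append(task_name)

    # Example: pages 0-3 live on node 0, pages 4-7 on node 1
    page_to_node = {p: (0 if p < 4 else 1) for p in range(8)}
    submit("stencil_tile_a", [0, 1, 2], page_to_node)
    submit("stencil_tile_b", [5, 6, 7], page_to_node)
    print(dict(NODE_QUEUES))   # -> {0: ['stencil_tile_a'], 1: ['stencil_tile_b']}
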
9

Performance analysis of task scheduling algorithms in grid computing using simulators

Rodamilans, Charles Boulhosa 10 February 2009 (has links)
Grid scheduling has been studied extensively because it is very important for Grid performance. Due to its complexity, Grid scheduling is subdivided into resource scheduling and application scheduling. The quality of scheduling is related to the task scheduling algorithm. This dissertation presents the AGSA (Analysis of Grid Scheduling Algorithms) methodology for the comparison of task scheduling algorithms in Grid computing. The purpose of the methodology is to analyze the behavior and performance of the algorithms in various scenarios. The CEGSE (Characterization oriEnted Grid Scheduling Environment) simulation environment was developed to create and simulate these scenarios. The case studies confirm the effectiveness of the methodology.
10

Optimizing a software build system through multi-core processing

Dahlberg, Robin January 2019 (has links)
In modern software development, continuous integration has become an integral part of agile development methods, which advocate that developers should integrate their code frequently. Configura currently has one dedicated machine performing tasks such as building the software and running system tests each time a developer submits new code to the main repository. One of the main practices of continuous integration advocates a fast build in order to keep the feedback loop short for developers, leading to increased productivity. Configura’s build system, named Build Central, currently uses a sequential build procedure to execute these tasks and was becoming too slow to keep up with the number of requested builds. The primary method for speeding up this procedure was to utilize the multi-core architecture of the build machine. In order to accomplish this, the system needed to deploy a scheduling algorithm to distribute and order tasks correctly. In this thesis, six scheduling algorithms are implemented and compared. Four of these algorithms are based on the classic list scheduling approach, and two additional algorithms based on dynamic scheduling principles are proposed. In this particular system, the dynamic algorithms proved to have better performance than the static scheduling algorithms. Performance on Build Central, using four processing cores, was improved to approximately 3.4 times faster execution time on an average daily build, resulting in a large increase in the number of builds that can be performed each day.
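A minimal sketch of the dynamic-scheduling idea the thesis favors, under an invented dependency graph: build tasks are dispatched to a worker pool as soon as all of their prerequisites complete, rather than following a precomputed static order. Task names, durations, and dependencies are made up.

    # Hedged sketch of dynamic build-task scheduling over a dependency graph.
    import time
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    TASKS = {            # task -> (duration in seconds, dependencies)
        "parse":   (0.2, []),
        "codegen": (0.4, ["parse"]),
        "libs":    (0.3, ["parse"]),
        "link":    (0.2, ["codegen", "libs"]),
    }

    def run(name):
        time.sleep(TASKS[name][0])   # stand-in for the actual build step
        return name

    def dynamic_build(cores=4):
        done, running = set(), {}
        with ThreadPoolExecutor(max_workers=cores) as pool:
            while len(done) < len(TASKS):
                # dispatch every task whose dependencies have all finished
                for name, (_, deps) in TASKS.items():
                    if name not in done and name not in running and set(deps) <= done:
                        running[name] = pool.submit(run, name)
                # block until at least one running task completes, then collect
                wait(list(running.values()), return_when=FIRST_COMPLETED)
                for name in list(running):
                    if running[name].done():
                        done.add(running[name].result())
                        del running[name]
        return done

    print(dynamic_build())
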
