1

A List-based Low Power Scheduling Mechanism for Processor-in-Memory Systems

Shu, Yu-Wen 21 July 2003 (has links)
Power consumption is becoming an increasingly important issue in the design of computing systems. Most low-power research has focused on semiconductor techniques and hardware architecture design, with comparatively little use of software optimization. In this thesis, list scheduling is employed to reduce the energy cost of a Processor-in-Memory system without sacrificing execution performance. In our list-based low-power scheduling algorithm, a priority list is maintained at each scheduling step. The scheduling kernel uses mobility as the priority to determine which task is scheduled onto which processor, based on an energy-delay-product cost model. Experimental results are presented and discussed.
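The following Python sketch illustrates the general shape of such a mobility-priority list scheduler with an energy-delay-product cost model; the function names, data structures, and the exact form of the cost model are illustrative assumptions, not taken from the thesis.

```python
def mobility(task, asap, alap):
    """Mobility = ALAP start - ASAP start; a smaller value means less slack."""
    return alap[task] - asap[task]

def list_schedule_edp(tasks, deps, processors, asap, alap, energy, delay):
    """Greedy list scheduler: at each step pick the ready task with the
    lowest mobility and place it on the processor minimizing an
    energy-delay product (EDP). All inputs are hypothetical dictionaries."""
    done, schedule, finish = set(), {}, {}
    proc_free = {p: 0 for p in processors}
    while len(done) < len(tasks):
        # Tasks whose predecessors have all been scheduled.
        ready = [t for t in tasks
                 if t not in done and all(d in done for d in deps.get(t, []))]
        task = min(ready, key=lambda t: mobility(t, asap, alap))

        def edp(p):
            start = max(proc_free[p],
                        max((finish[d] for d in deps.get(task, [])), default=0))
            return energy[task][p] * (start + delay[task][p]), start

        proc = min(processors, key=lambda p: edp(p)[0])
        _, start = edp(proc)
        finish[task] = start + delay[task][proc]
        proc_free[proc] = finish[task]
        schedule[task] = (proc, start)
        done.add(task)
    return schedule
```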
2

Método beam search aplicado a problemas de programação da produção / Beam search method for scheduling problems

Jesus Filho, José Eurípedes Ferreira de 05 December 2018 (has links)
In this thesis, two different scheduling problems are addressed: the Flexible Job Shop Scheduling Problem with sequencing flexibility and the Flowshop Scheduling Problem with waiting times and sequence permutation. For both problems, a list scheduling (LS) algorithm that exploits features of the problem is first developed and then extended to a Beam Search (BS) method that uses the LS in its main elements: (1) level expansion, (2) local evaluation of candidates, and (3) global evaluation of candidates. All the proposed methods are deterministic, and their pseudocodes are carefully described to ensure the replicability of the reported results. The performance of the proposed methods is evaluated using instances and other heuristic methods from the literature. The computational results show the efficiency of the proposed heuristics, which outperformed the literature methods while using little computational time.
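As a rough illustration of how a beam search can be built around a list-scheduling procedure, the Python skeleton below keeps the best partial schedules at each level, pre-filters children with a cheap local evaluation, and ranks survivors with a global evaluation (which could itself be an LS completion of the partial schedule). All function names and parameters are hypothetical; this is a sketch of the three elements named above, not the thesis's algorithm.

```python
import heapq

def beam_search(root, expand, is_complete, local_eval, global_eval,
                beam_width, filter_width):
    """Generic deterministic beam search over partial schedules."""
    beam, best = [root], None
    while beam:
        candidates = []
        for node in beam:
            # (1) level expansion, then (2) local evaluation to pre-filter children.
            for child in sorted(expand(node), key=local_eval)[:filter_width]:
                if is_complete(child):
                    if best is None or global_eval(child) < global_eval(best):
                        best = child
                else:
                    candidates.append(child)
        # (3) global evaluation keeps only the `beam_width` most promising nodes.
        beam = heapq.nsmallest(beam_width, candidates, key=global_eval)
    return best
```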
3

Generalizing List Scheduling for Stochastic Soft Real-time Parallel Applications

Dandass, Yoginder Singh 13 December 2003 (has links)
Advanced architecture processors provide features such as caches and branch prediction that result in improved, but variable, execution time of software. Hard real-time systems require tasks to complete within timing constraints. Consequently, hard real-time systems are typically designed conservatively through the use of tasks' worst-case execution times (WCET) in order to compute deterministic schedules that guarantee tasks' execution within given time constraints. This use of pessimistic execution time assumptions provides real-time guarantees at the cost of decreased performance and resource utilization. In soft real-time systems, however, meeting deadlines is not an absolute requirement (i.e., missing a few deadlines does not severely degrade system performance or cause catastrophic failure). In such systems, a guaranteed minimum probability of completing by the deadline is sufficient. Therefore, there is considerable latitude in such systems for improving resource utilization and performance as compared with hard real-time systems, through the use of more realistic execution time assumptions. Given probability distribution functions (PDFs) representing tasks' execution time requirements, and tasks' communication and precedence requirements, represented as a directed acyclic graph (DAG), this dissertation proposes and investigates algorithms for constructing non-preemptive stochastic schedules. New PDF manipulation operators developed in this dissertation are used to compute tasks' start and completion time PDFs during schedule construction. PDFs of the schedules' completion times are also computed and used to systematically trade the probability of meeting end-to-end deadlines for schedule length and jitter in task completion times. Because of the NP-hard nature of the non-preemptive DAG scheduling problem, the new stochastic scheduling algorithms extend traditional heuristic list scheduling and genetic list scheduling algorithms for DAGs by using PDFs instead of fixed time values for task execution requirements. The stochastic scheduling algorithms also account for delays caused by communication contention, typically ignored in prior DAG scheduling research. Extensive experimental results are used to demonstrate the efficacy of the new algorithms in constructing stochastic schedules. Results also show that through the use of the techniques developed in this dissertation, the probability of meeting deadlines can be usefully traded for performance and jitter in soft real-time systems.
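For intuition, the snippet below shows two discrete-PDF operators of the kind such an approach needs: the PDF of the later of two independent predecessor completion times, and the PDF of a start time plus an execution time. The array representation (one probability per integer time slot), the function names, and the example values are illustrative assumptions, not the dissertation's operators.

```python
import numpy as np

def pdf_max(p, q):
    """PDF of max(X, Y) for independent X ~ p, Y ~ q (index = time slot)."""
    n = max(len(p), len(q))
    p, q = np.pad(p, (0, n - len(p))), np.pad(q, (0, n - len(q)))
    cdf_max = np.cumsum(p) * np.cumsum(q)      # CDF of the maximum
    return np.diff(cdf_max, prepend=0.0)

def pdf_add(p, q):
    """PDF of X + Y for independent X ~ p, Y ~ q (discrete convolution)."""
    return np.convolve(p, q)

# Example: two predecessors finish at uniform{2,3} and uniform{3,4} slots;
# the task itself runs for 1 or 2 slots with equal probability.
pred1 = np.array([0.0, 0.0, 0.5, 0.5])
pred2 = np.array([0.0, 0.0, 0.0, 0.5, 0.5])
exec_time = np.array([0.0, 0.5, 0.5])
start = pdf_max(pred1, pred2)          # start-time PDF of the task
completion = pdf_add(start, exec_time) # completion-time PDF of the task
```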
4

On the Near-Optimality of List Scheduling Heuristics for Local and Global Instruction Scheduling

Chase, Michael January 2006 (has links)
Modern architectures allow multiple instructions to be issued at once and have other complex features. To account for this, compilers perform instruction scheduling after generating the output code. The instruction scheduling problem is to find an optimal schedule given the limitations and capabilities of the architecture. While this can be done optimally, a greedy algorithm known as list scheduling is used in practice in most production compilers.

List scheduling is generally regarded as being near-optimal in practice, provided a good choice of heuristic is used. However, previous work comparing a list scheduler against an optimal scheduler either assumes an idealized architectural model or uses too few test cases to strongly prove or disprove the assumed near-optimality of list scheduling. It remains an open question whether or not list scheduling performs well when scheduling for a realistic architectural model.

Using constraint programming, we developed an efficient optimal scheduler capable of scheduling even very large blocks within a popular benchmark suite in a reasonable amount of time. I improved the architectural model and optimal scheduler by allowing for an issue width not equal to the number of functional units, instructions that monopolize the processor for one cycle, and non-fully-pipelined instructions. I then evaluated the performance of list scheduling for this more realistic architectural model.

I found that when scheduling basic blocks using a realistic architectural model, only 6% or less of the schedules produced by a list scheduler are non-optimal, but when scheduling superblocks, at least 40% of the schedules produced by a list scheduler are non-optimal. Furthermore, when the list scheduler and optimal scheduler differed, the optimal scheduler was able to improve schedule cost by at least 5% on average, realizing maximum improvements of 82%. This suggests that list scheduling is only a viable solution in practice when scheduling basic blocks. When scheduling superblocks, the advantage of using a list scheduler is its speed, not the quality of schedules produced, and other alternatives to list scheduling should be considered.
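To make the comparison concrete, here is a minimal cycle-driven list scheduler for a basic block in Python, using a longest-latency-path (critical-path) heuristic and a single issue-width limit as the only resource constraint. This is a deliberately simplified sketch of the general technique, not the scheduler or the architectural model evaluated in the thesis.

```python
def list_schedule_block(dag, latency, issue_width):
    """dag: {instr: set of predecessor instrs}; latency: {instr: cycles}."""
    # Priority of an instruction = length of its longest latency path to a leaf.
    prio = {}
    def critical_path(i):
        if i not in prio:
            succs = [j for j, preds in dag.items() if i in preds]
            prio[i] = latency[i] + max((critical_path(j) for j in succs), default=0)
        return prio[i]

    finish, schedule, cycle = {}, {}, 0
    while len(schedule) < len(dag):
        # Instructions whose predecessors have all completed by this cycle.
        ready = [i for i in dag if i not in schedule
                 and all(finish.get(p, float("inf")) <= cycle for p in dag[i])]
        # Issue up to `issue_width` instructions, highest priority first.
        for i in sorted(ready, key=critical_path, reverse=True)[:issue_width]:
            schedule[i] = cycle
            finish[i] = cycle + latency[i]
        cycle += 1
    return schedule
```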
5

Optimizing a software build system through multi-core processing

Dahlberg, Robin January 2019 (has links)
In modern software development, continuous integration has become an integral part of agile development methods, which advocate that developers should integrate their code frequently. Configura currently has one dedicated machine performing tasks such as building the software and running system tests each time a developer submits new code to the main repository. One of the main practices of continuous integration is keeping the build fast so that the feedback loop for developers stays short, leading to increased productivity. Configura's build system, named Build Central, currently uses a sequential build procedure to execute these tasks and was becoming too slow to keep up with the number of requested builds. The primary method for speeding up this procedure was to utilize the multi-core architecture of the build machine. To accomplish this, the system needed a scheduling algorithm to distribute and order tasks correctly. In this thesis, six scheduling algorithms are implemented and compared. Four of these algorithms are based on the classic list scheduling approach, and two additional algorithms are proposed that are based on dynamic scheduling principles. In this particular system, the dynamic algorithms proved to have better performance than the static scheduling algorithms. Performance on Build Central, using four processing cores, improved to approximately 3.4 times faster execution time on an average daily build, resulting in a large increase in the number of builds that can be performed each day.
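A dynamic scheduler of this kind can be sketched as a worker pool that submits a task as soon as its prerequisites finish, rather than fixing an order up front as a static list schedule would. The Python sketch below is a hypothetical illustration; the task names, dependency table, and worker count are assumptions with no relation to Build Central's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_build(tasks, deps, actions, workers=4):
    """tasks: iterable of task names; deps: {task: set of prerequisite tasks};
    actions: {task: zero-argument callable performing the task}."""
    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(tasks):
            # Submit every task whose prerequisites have all completed.
            for t in tasks:
                if t not in done and t not in running and deps.get(t, set()) <= done:
                    running[t] = pool.submit(actions[t])
            # Block until at least one running task finishes.
            finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for t in [t for t, f in running.items() if f in finished]:
                running.pop(t).result()   # re-raise any task failure
                done.add(t)
```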
6

Multiprocessor scheduling in the presence of link contention delays

Macey, Benjamin January 2004 (has links)
[Truncated abstract] Parallel computing is recognised today as an important tool in the solution of a wide variety of computationally intensive problems, problems which were previously considered intractable. While it offers the promise of vastly increased performance, parallel computing introduces additional complexities which are not encountered with sequential processing. One of these is the scheduling problem, in which the individual tasks comprising a parallel program are scheduled onto the processors comprising the parallel architecture. The objective is to minimise execution time while still preserving the precedence relations between the tasks. Scheduling is of vital importance since a poor task schedule can undo any potential gains from the parallelism present in the application. Inappropriate scheduling can result in the hardware being used inefficiently, or worse, the program could run slower in parallel than on a single processor. The scheduling problem is one of the more difficult problems facing the parallel programmer. In fact, it is NP-complete in the general case. As a result, a large number of heuristic methods with sub-optimal performance but polynomial, rather than exponential, time complexity have been proposed. In order to simplify their algorithms, researchers have restricted the problem: by making assumptions concerning the parallel architecture or imposing limitations on the task graph representing the parallel program. The evolution of the task scheduling problem has involved the gradual relaxation of these restrictions. A major change occurred when the assumption of zero inter-processor communication costs was removed. This was driven by the increasing popularity of distributed-memory message-passing multiprocessors.
7

Variations on Rotation Scheduling

Richter, Michael Edwin 13 September 2007 (has links)
No description available.
8

A Study Of Genetic Representation Schemes For Scheduling Soft Real-Time Systems

Bugde, Amit 13 May 2006 (has links)
This research presents a hybrid algorithm that combines List Scheduling (LS) with a Genetic Algorithm (GA) for constructing non-preemptive schedules for soft real-time parallel applications represented as directed acyclic graphs (DAGs). The execution time requirements of the applications' tasks are assumed to be stochastic and are represented as probability distribution functions. The performance, in terms of schedule length, of three different genetic representation schemes is evaluated and compared for a number of different DAGs. The approaches presented in this research produce shorter schedules than HLFET, a popular LS approach, for all of the sample problems. Of the three genetic representation schemes investigated, PosCT, the technique that allows the GA to learn which tasks to delay in order to allow other tasks to complete, produced the shortest schedules for a majority of the sample DAGs.
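One common way to combine a GA with list scheduling, in the spirit of the schemes compared above, is to encode a chromosome as a priority ordering of the tasks and decode it with a list scheduler whose makespan serves as the fitness. The Python sketch below shows such a decode step and a permutation-preserving mutation; the representation, names, and deterministic durations (rather than PDFs) are simplifying assumptions, not the specific PosCT scheme.

```python
import random

def decode(chromosome, deps, duration, num_procs):
    """List-schedule tasks in chromosome (priority) order; return the makespan."""
    finish, proc_free = {}, [0.0] * num_procs
    while len(finish) < len(chromosome):
        # Highest-priority task whose predecessors have all finished.
        task = next(t for t in chromosome if t not in finish
                    and all(p in finish for p in deps.get(t, [])))
        p = min(range(num_procs), key=lambda i: proc_free[i])
        start = max(proc_free[p],
                    max((finish[d] for d in deps.get(task, [])), default=0.0))
        finish[task] = start + duration[task]
        proc_free[p] = finish[task]
    return max(finish.values())

def mutate(chromosome):
    """Swap two positions, preserving the permutation property."""
    c = chromosome[:]
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c
```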
9

Multi-objective trade-off exploration for Cyclo-Static and Synchronous Dataflow graphs

Sinha, Ashmita 30 October 2012 (has links)
Many digital signal processing and real-time streaming systems are modeled using dataflow graphs, such as Synchronous Dataflow (SDF) and Cyclo-static Dataflow (CSDF) graphs, which allow static analysis and optimization techniques. However, mapping such descriptions into tightly constrained real-time implementations requires optimization of resource sharing, buffering, and scheduling across a multi-dimensional latency-throughput-area objective space. This requires techniques that can find the Pareto-optimal set of implementations for the designer to choose from. In this work, we address the problem of multi-objective mapping and scheduling of SDF and CSDF graphs onto heterogeneous multi-processor platforms. Building on previous work, this thesis extends existing two-stage hybrid heuristics that combine an evolutionary algorithm with an integer linear programming (ILP) model to jointly optimize throughput, area, and latency for SDF graphs. The primary contributions of this work include: (1) extension of the ILP model to support CSDFGs with additional buffer size optimizations; (2) a further optimization of the ILP-based scheduling model that achieves a runtime speedup of almost a factor of 10 compared to the existing SDFG formulation; (3) a list scheduling heuristic that replaces the ILP model in the hybrid heuristic to generate Pareto-optimal solutions at significantly decreased runtime while maintaining near-optimality of the solutions within an acceptable gap of 10% compared to its ILP counterparts. The list scheduling heuristic presented in this work is based on existing modulo scheduling approaches for software pipelining in the compiler domain, but has been extended by introducing a new concept of mobility-based rescheduling before resorting to backtracking. It is shown in this work that when mobility-based rescheduling is performed, the number of required backtrackings, and hence the overall complexity and runtime, is reduced.
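To illustrate just the mobility-based rescheduling idea, the toy Python sketch below places operations into a modulo reservation table for a single shared resource; when an operation's mobility window is fully occupied, an occupant that still has mobility left is evicted and re-queued instead of backtracking. Everything here (a single resource, no dependence or recurrence constraints, the data structures and names) is a simplifying assumption for illustration, not the heuristic developed in the thesis.

```python
def modulo_schedule(ops, asap, alap, II):
    """ops: operations in priority order; asap/alap: earliest/latest start
    cycle per op (asap is mutated as victims are pushed later); II: the
    initiation interval of the software pipeline."""
    table = {}                       # cycle mod II -> op occupying that slot
    placed, worklist = {}, list(ops)
    while worklist:
        op = worklist.pop(0)
        window = range(asap[op], alap[op] + 1)
        free = next((c for c in window if (c % II) not in table), None)
        if free is not None:
            table[free % II], placed[op] = op, free
            continue
        # Mobility-based rescheduling: evict an occupant that can still move
        # to a later cycle instead of backtracking immediately.
        evict_at = next((c for c in window
                         if alap[table[c % II]] > placed[table[c % II]]), None)
        if evict_at is None:
            return None              # would require true backtracking or a larger II
        victim = table[evict_at % II]
        asap[victim] = placed.pop(victim) + 1     # force the victim later
        table[evict_at % II], placed[op] = op, evict_at
        worklist.append(victim)
    return placed
```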
