  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A field study of schedulers in industry : understanding their work, practices and performance

Crawford, Sarah January 2000 (has links)
No description available.
2

Models and Complexity Results in Real-Time Scheduling Theory

Ekberg, Pontus January 2015 (has links)
When designing real-time systems, we want to prove that they will satisfy given timing constraints at run time. The main objective of real-time scheduling theory is to analyze properties of mathematical models that capture the temporal behaviors of such systems. These models typically consist of a collection of computational tasks, each of which generates an infinite sequence of task activations. In this thesis we study different classes of models and their corresponding analysis problems. First, we consider models of mixed-criticality systems. The timing constraints of these systems state that all tasks must meet their deadlines for the run-time scenarios fulfilling certain assumptions, for example on execution times. For the other scenarios, only the most important tasks must meet their deadlines. We study both tasks with sporadic activation patterns and tasks with complicated activation patterns described by arbitrary directed graphs. We present sufficient schedulability tests, i.e., methods used to prove that a given collection of tasks will meet their timing constraints under a particular scheduling algorithm. Second, we consider models where tasks can lock mutually exclusive resources and have activation patterns described by directed cycle graphs. We present an optimal scheduling algorithm and an exact schedulability test. Third, we address a pair of longstanding open problems in real-time scheduling theory. These concern the computational complexity of deciding whether a collection of sporadic tasks are schedulable on a uniprocessor. We show that this decision problem is strongly coNP-complete in the general case. In the case where the asymptotic resource utilization of the tasks is bounded by a constant smaller than 1, we show that it is weakly coNP-complete.
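The schedulability tests this abstract mentions are procedures that prove a task set meets its deadlines under a given scheduling algorithm. As a hedged illustration of what such a test looks like — the classical response-time analysis for sporadic tasks under fixed-priority preemptive uniprocessor scheduling, not the thesis's mixed-criticality analysis, with invented task parameters:

```python
import math

def response_time_analysis(tasks):
    """Exact schedulability test for sporadic tasks under fixed-priority
    preemptive scheduling on a uniprocessor (the classic response-time
    recurrence). tasks: list of (C, T, D) = (WCET, period, deadline),
    sorted highest priority first. Returns True iff all deadlines hold."""
    for i, (C, T, D) in enumerate(tasks):
        R = C
        while True:
            # Interference from all higher-priority tasks released in [0, R)
            interference = sum(math.ceil(R / Tj) * Cj for Cj, Tj, _ in tasks[:i])
            R_next = C + interference
            if R_next > D:
                return False          # worst-case deadline miss
            if R_next == R:
                break                 # fixed point reached: task i is fine
            R = R_next
    return True

# Hypothetical tasks (WCET, period, deadline) in rate-monotonic order:
print(response_time_analysis([(1, 4, 4), (2, 6, 6)]))   # True: schedulable
print(response_time_analysis([(2, 4, 4), (3, 6, 6)]))   # False: overloaded
```

The second task set has total utilization 1.0, which fixed-priority scheduling cannot sustain here; the recurrence detects the miss without simulating the schedule.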
3

Missile autopilot design using a gain scheduling technique

White, David Paul January 1994 (has links)
No description available.
4

Automated process modelling and continuous improvement

Fresco, John Anthony January 2010 (has links)
This thesis discusses and demonstrates the benefits of simulating and optimising a manufacturing control system in order to improve the flow of production material through a system with high-variety, low-volume output requirements. The need for synchronous flow and the factors affecting it are also discussed, along with the consequences of poor flow and various solutions for overcoming it. A study and comparison of various planning and control methodologies designed to promote the flow of material through a manufacturing system was carried out to identify a suitable system to model. The research objectives were to: identify the best system to model that will promote flow; identify the potential failure mechanisms within that system that have not yet been resolved; and produce a model that can fully resolve, or reduce the probability of, the identified failure mechanisms having an effect. This research led to an investigation into the main elements of a Drum-Buffer-Rope (DBR) environment in order to generate a comprehensive description of the requirements for DBR implementation and operation, and to attempt to improve on the limitations identified in the research literature. These requirements have been grouped into three areas: (a) plant layout and kanban controls, (b) planning and control, and (c) DBR infrastructure. A DBR model combined with genetic algorithms was developed with the aim of maximising the throughput level for an individual product mix. The results of the experiments have identified new knowledge on how DBR processes facilitate and impede material-flow synchronisation within high-variety/low-volume manufacturing environments.
The research results were limited by the assumptions and constraints of the model. The research has highlighted that as such a model becomes more complex it also becomes more volatile and more difficult to control, leading to the conclusion that more research is required, extending the complexity of the model by adding more product mix and system variability and comparing the results with those of this research. It is then expected that the model will enable a quick system response to large variations in product demand within the mixed-model manufacturing industry.
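The abstract describes coupling a DBR model with genetic algorithms to maximise throughput for a product mix. A minimal sketch of that idea follows — the product profits, drum (bottleneck) minutes, capacity, and GA parameters are all invented for illustration, not the thesis's model:

```python
import random

random.seed(1)

# Hypothetical data: per-unit profit and per-unit time on the bottleneck
# (the "drum") for three products; the drum has limited capacity.
PROFIT   = [40, 55, 30]
DRUM_MIN = [5, 9, 4]          # minutes on the constraint resource per unit
CAPACITY = 2400               # minutes of drum time available

def fitness(mix):
    """Throughput (profit) of a product mix; infeasible mixes score 0."""
    used = sum(q * m for q, m in zip(mix, DRUM_MIN))
    return sum(q * p for q, p in zip(mix, PROFIT)) if used <= CAPACITY else 0

def evolve(pop_size=30, generations=200):
    """Simple GA: keep the fittest half, refill with one-point crossover
    plus occasional mutation of a single product quantity."""
    pop = [[random.randint(0, 300) for _ in PROFIT] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(PROFIT))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.3:                 # mutation
                i = random.randrange(len(child))
                child[i] = max(0, child[i] + random.randint(-20, 20))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The GA converges towards loading the drum with the products that earn the most profit per constraint minute, which is exactly the throughput logic DBR builds on.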
5

The impact of multitasking on critical chain portfolios

Ghaffari, Mahdi January 2017 (has links)
Critical Chain Project Management (CCPM) is a project scheduling technique developed to overcome some of the deficiencies of traditional methods. In a single-project environment, the critical chain is the longest chain of activities in a project network, taking into account both activity precedence and resource dependencies; in multi-project environments, the constraint is the resource that impedes projects' earlier completion. CCPM relies on buffers to protect the critical chain and to monitor and control the project. The literature review conducted by this study reveals that research on CCPM principles in multi-project environments is still extremely scarce. The review also suggests that the outright elimination of multitasking (i.e. switching back and forth among two or more concurrent tasks) by imposing a relay-race mentality (i.e. starting a task as soon as it becomes available and finishing it as soon as possible), one of the main features of CCPM, might worsen the resource constraints of CCPM portfolios and cause the creation of over-protective buffers. It further implies that there is a moderate level of multitasking that can benefit such environments by improving resource availability and requiring shorter protective buffers. This research aims to bridge the gap by investigating the impact of the level of multitasking on resource availability issues and on project and feeding buffer sizing in CCPM portfolios with different resource capacities. This is pursued by adopting a deductive approach, developing five research hypotheses considering ten different levels of resource capacity, testing the hypotheses through Monte Carlo simulations of randomly generated project data, and comparing the results with deterministic duration values of the same portfolios with 30%, 40% and 50% feeding and project buffer sizes.
In total, ten portfolios of similar size, variability and complexity, each containing four projects, were simulated. It was concluded that: firstly, some limited levels of multitasking, determined in relation to the level of resource capacity, can benefit the time performance of CCPM portfolios; secondly, shorter buffer sizes can be justified by lifting the ban on multitasking while maintaining a lower rate of resource capacity; and finally, the element of the relay-race work ethic that completely bans multitasking should not be implemented, as it proved counterproductive in terms of resource availability. Seven recommendations and a buffer-sizing framework are provided as complementary guidelines to practitioners' own experience, knowledge and judgment, in addition to an explanation of theoretical and practical contributions and suggestions for future research.
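The Monte Carlo comparison of 30%, 40% and 50% buffer sizes described above can be sketched in a few lines. The triangular duration distribution and the activity estimates below are assumptions made for illustration, not the study's actual data or portfolios:

```python
import random

random.seed(42)

def simulate_chain(mean_durations, buffer_ratio, runs=10000):
    """Monte Carlo estimate of the probability that a serial chain of
    activities finishes within its deterministic duration plus a project
    buffer sized as a fixed fraction of that duration. Durations are drawn
    from a triangular distribution (an illustrative assumption)."""
    planned = sum(mean_durations)
    deadline = planned * (1 + buffer_ratio)
    hits = 0
    for _ in range(runs):
        actual = sum(random.triangular(0.5 * d, 2.5 * d, d) for d in mean_durations)
        if actual <= deadline:
            hits += 1
    return hits / runs

chain = [10, 15, 8, 12]   # hypothetical critical-chain activity estimates
for ratio in (0.3, 0.4, 0.5):
    p = simulate_chain(chain, ratio)
    print(f"{int(ratio * 100)}% buffer -> on-time probability {p:.2f}")
```

Larger buffers trade schedule length for on-time probability; running such a simulation across resource-capacity levels is the kind of experiment the study uses to judge which buffer size is actually needed.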
6

Escalonamento de tarefas em processadores de velocidade variável em múltiplas organizações / Energy-aware multi-organization scheduling problem

Raphael, Pedro Luis Furio 08 May 2015 (has links)
We study the problem of scheduling a set of well-defined tasks on a variable-speed processor with the objective of minimizing energy consumption, which is given as a function of the processor's speed; this setting is known as Dynamic Speed Scaling. We also relate this problem to the Multi-Organization Scheduling Problem (MOSP), in which several independent organizations share tasks and resources to achieve a better global solution while respecting individual (selfish) constraints. We prove that the combined problem is NP-complete and design several efficient heuristics that achieve significant energy savings in an experimental setup.
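The core intuition behind speed scaling is that energy as a function of speed is convex, so spreading the same work evenly over the available time is cheaper than alternating fast and slow. A minimal sketch, assuming the common cube-law power model (which may differ from the thesis's energy function):

```python
def energy(speed, time, alpha=3):
    """Energy of running at `speed` for `time`, with the common cube-law
    power model P(s) = s**alpha (an assumption, not the thesis's model)."""
    return (speed ** alpha) * time

# 20 units of work must finish within 10 time units.
work, horizon = 20.0, 10.0

# Constant speed: by convexity of s**alpha this is energy-optimal.
e_constant = energy(work / horizon, horizon)     # speed 2 for 10 time units

# An unbalanced split of the same 20 units over the same horizon costs more.
e_split = energy(3.0, 5.0) + energy(1.0, 5.0)    # speeds 3 then 1

print(e_constant, e_split)   # 80.0 vs 140.0
```

Both schedules complete identical work by the same deadline, yet the uneven one spends 75% more energy; multi-organization variants add the question of whose tasks get which intervals.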
8

Um estudo sobre formulações matemáticas e estratégias algorítmicas para problemas de escalonamento em máquinas paralelas com penalidades de antecipação e atraso / A study of mathematical formulations and algorithmic strategies for scheduling problems on parallel machines with earliness and tardiness penalties

Amorim, Rainer Xavier de 27 March 2013 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This dissertation presents a study of scheduling problems with earliness and tardiness penalties on identical parallel machines, considering independent, weighted jobs with arbitrary processing times. An analysis of the major integer-programming formulations is given, together with the main results from the literature. An integer formulation based on a network-flow model is also proposed for the problem; it can be applied to single and parallel machines without idle time. Exact implicit-enumeration methods were studied and applied to the problem through the integer linear programming solver CPLEX and the UFFLP library, and, most importantly, global-optimization algorithmic strategies based on local-search heuristics and the path-relinking technique were developed. The computational experiments show that the proposed algorithmic strategies are competitive with existing results from the literature for single-machine scheduling, on instances based on the OR-Library benchmark with 40, 50, 100, 150, 200 and 300 jobs, where all the optimal values were found, and, most importantly, they are the best strategy for multiprocessor environments involving 2, 4 and 10 identical parallel machines.
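The objective this abstract works with — weighted earliness/tardiness on parallel machines without idle time — is easy to state as code. A hedged sketch of the cost evaluation only (the instance is invented; the thesis's formulations and heuristics optimize over such schedules, which this fragment does not do):

```python
def earliness_tardiness_cost(jobs, schedule):
    """Weighted earliness/tardiness cost of a no-idle-time schedule on
    identical parallel machines. jobs: {id: (processing_time, due_date,
    w_early, w_tardy)}; schedule: one ordered list of job ids per machine."""
    total = 0
    for machine in schedule:
        t = 0
        for j in machine:
            p, due, we, wt = jobs[j]
            t += p                              # no idle time between jobs
            total += we * max(0, due - t) + wt * max(0, t - due)
    return total

jobs = {                 # hypothetical instance: (p, d, w_early, w_tardy)
    "A": (3, 3, 1, 5),
    "B": (2, 7, 1, 5),
    "C": (4, 4, 2, 4),
}
# A finishes at its due date, C at its due date, B finishes 2 early.
print(earliness_tardiness_cost(jobs, [["A", "B"], ["C"]]))   # 2
```

Exact methods (CPLEX over an integer formulation) and local-search/path-relinking heuristics both search the space of such machine assignments and orderings for the minimum of this function.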
9

Deployment of mixed criticality and data driven systems on multi-cores architectures / Déploiement de systèmes à flots de données en criticité mixte pour architectures multi-coeurs

Medina, Roberto 30 January 2019 (has links)
Nowadays, the design of safety-critical systems is pushing towards the integration of multiple system components onto a single shared computation platform. Mixed-criticality systems in particular allow critical components with a high degree of confidence (i.e. a low probability of failure) to share computation resources with less- or non-critical components without requiring software isolation mechanisms (as opposed to partitioned systems). Traditionally, safety-critical systems have been designed using models of computation such as data-flow graphs and real-time scheduling to obtain logical and temporal correctness. Nonetheless, the resources given to data-flow representations and real-time scheduling techniques are based on worst-case analysis, which often leads to under-utilization of the computation capacity: the allocated resources are not always fully used. This under-utilization becomes more noticeable on multi-core architectures, where the difference between best-case and worst-case performance is more significant.
The mixed-criticality execution model proposes a solution to this problem. To allocate resources efficiently while ensuring safe execution of the most critical components, resources are allocated according to the operational mode the system is in. As long as sufficient processing capacity is available to meet all deadlines, the system remains in a 'low-criticality' operational mode. If the system demand increases, critical components are prioritized to meet their deadlines, their computation resources are increased, and less- or non-critical components are potentially penalized; the system is said to transition to a 'high-criticality' operational mode.
Incorporating mixed-criticality aspects into the data-flow model of computation is nonetheless a difficult problem, as it requires new scheduling methods capable of handling precedence constraints and variations in timing budgets. Although mixed-criticality scheduling has been well studied for single- and multi-core platforms, the problem of data dependencies on multi-core platforms has rarely been considered, and existing methods lead to poor resource usage, which contradicts the main purpose of mixed criticality. For this reason, our first objective is to design new, efficient scheduling methods for data-driven mixed-criticality systems. We define a meta-heuristic producing scheduling tables for all operational modes of the system. These tables are proven correct: when the system demand increases, critical components will never miss a deadline. Two implementations based on existing preemptive global algorithms were developed to gain in schedulability and resource usage; in some cases these implementations schedule more than 60% of systems compared to existing approaches.
While the mixed-criticality model claims that critical and non-critical components can share the same computation platform, the interruption of non-critical components degrades their availability significantly. This is a problem, since non-critical components need to deliver a minimum service guarantee, and recent works in mixed criticality have recognized this limitation. We therefore define methods to evaluate the availability of non-critical components; to our knowledge, our evaluations are the first capable of quantifying availability. We also propose enhancements, compatible with our scheduling methods, that limit the impact critical components have on non-critical ones. These enhancements are evaluated with probabilistic automata and show a considerable improvement in availability, e.g. improvements of over 2% in a context where increases of the order of 10^-9 are significant.
Our contributions have been integrated into an open-source framework. This tool also provides an unbiased generator used to evaluate scheduling methods for data-driven mixed-criticality systems.
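The low-criticality/high-criticality mode switch described above can be illustrated with a toy execution model. This is a deliberately simplified sketch — task names, budgets and the "shed all LO tasks" policy are assumptions for illustration, not the thesis's scheduling-table approach:

```python
def run_mixed_criticality(tasks, actual_times):
    """Toy mixed-criticality run: execute tasks in order with low-mode
    budgets; if a HI-criticality task overruns its LO budget, switch to
    HI mode and shed the remaining LO-criticality tasks. tasks: list of
    (name, criticality, C_LO, C_HI). Returns (final mode, completed)."""
    mode, completed = "LO", []
    for name, crit, c_lo, c_hi in tasks:
        if mode == "HI" and crit == "LO":
            continue                  # LO tasks are dropped after the switch
        if crit == "HI":
            assert actual_times[name] <= c_hi, "vetted HI budget exceeded"
            if mode == "LO" and actual_times[name] > c_lo:
                mode = "HI"           # budget overrun triggers the switch
        completed.append(name)
    return mode, completed

tasks = [("ctrl", "HI", 2, 4), ("log", "LO", 1, 1),
         ("nav", "HI", 3, 6), ("ui", "LO", 2, 2)]

# Nominal run: everything fits the optimistic LO budgets.
print(run_mixed_criticality(tasks, {"ctrl": 2, "log": 1, "nav": 3, "ui": 2}))

# "ctrl" overruns its LO budget: the system degrades to HI mode and the
# non-critical "log" and "ui" tasks lose service.
print(run_mixed_criticality(tasks, {"ctrl": 3, "log": 1, "nav": 5, "ui": 2}))
```

The second run shows exactly the availability problem the thesis quantifies: the critical tasks are safe, but the non-critical ones are silently dropped after the switch.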
10

Casos especiais ótimos de algoritmos aproximativos para problemas de escalonamento com restrições de precedência em processadores paralelos idênticos / Optimal special cases of approximation algorithms for scheduling problems with precedence constraints on identical parallel processors

Lever, Elton Carlos Costa 22 June 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This dissertation addresses the class of job scheduling problems with precedence constraints and unit execution times on identical parallel processors. This class of problems is of great importance in computational complexity theory, since small variations in the conditions involved in the scheduling turn an easy problem into a very difficult one. Two major questions concern the number of processors: if the number of processors is variable and given as input, the problem is proved to be NP-complete, but if the number of processors is fixed, the problem is still open. In this context, the research focuses on the problem already proven NP-complete, for which we investigated the main approximation algorithms in the literature and their approximation-ratio proofs, such as Garey & Johnson's 2-approximation algorithm and the improvements of Hu, of Coffman & Graham, and of Gangal & Ranade, whose ratio of 2 − 7/(3P + 1) is the best in the literature. The approximation-ratio proofs of these algorithms are presented in detail. As the main contribution of this research, optimality was proved for specific classes of directed acyclic graphs involving arborescences (precedence trees, such as in-trees and out-trees) for the best approximation algorithms in the literature.
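One of the algorithms this abstract names, Hu's level algorithm for unit-time tasks whose precedence graph is an in-tree, is short enough to sketch directly. The instance below is invented for illustration:

```python
from collections import defaultdict

def hu_schedule(parent, m):
    """Hu's level algorithm: unit-execution-time tasks whose precedence
    graph is an in-tree (each task names its unique successor in `parent`;
    the root maps to None), on m identical processors. At each unit step,
    run up to m ready tasks, preferring the highest level (distance to the
    root). Optimal for this special case. Returns (makespan, steps)."""
    def level(v):
        return 0 if parent[v] is None else 1 + level(parent[v])

    levels = {v: level(v) for v in parent}
    remaining_children = defaultdict(int)
    for v, p in parent.items():
        if p is not None:
            remaining_children[p] += 1

    ready = [v for v in parent if remaining_children[v] == 0]
    steps = []
    while ready:
        ready.sort(key=lambda v: -levels[v])     # highest level first
        batch, ready = ready[:m], ready[m:]
        steps.append(batch)
        for v in batch:                          # release tasks whose
            p = parent[v]                        # children all finished
            if p is not None:
                remaining_children[p] -= 1
                if remaining_children[p] == 0:
                    ready.append(p)
    return len(steps), steps

# In-tree: a, b precede c; d precedes e; c, e precede the root f.
parent = {"a": "c", "b": "c", "c": "f", "d": "e", "e": "f", "f": None}
makespan, steps = hu_schedule(parent, 2)
print(makespan, steps)   # 4 steps: [a, b], [d, c], [e], [f]
```

On this instance no two-processor schedule can beat four steps (c cannot start before both a and b finish, and f needs both c and e), which matches the optimality claim for in-trees.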
