21

Characterizing The Vulnerability Of Parallelism To Resource Constraints

Vivekanand, V 01 1900 (has links) (PDF)
No description available.
22

Avaliação de algoritmos de roteamento e escalonamento de mensagens para redes WirelessHART / Evaluation of routing and message scheduling algorithms for WirelessHART networks

Dickow, Victor Hugo January 2014 (has links)
The use of wireless networks has grown considerably in recent years, and protocols based on this technology are being developed for a wide variety of applications. Reliability is one of the main requirements for communication protocols in industrial environments: interference, noisy surroundings, and the high risk associated with the monitored industrial processes all raise the demands on protocol reliability, redundancy, and security. The WirelessHART protocol is a wireless communication standard designed specifically for process monitoring and control, meeting the requirements for use in industrial environments. The WirelessHART standard defines several technical aspects to be observed in the development of algorithms, and the routing and message-scheduling algorithms are highly relevant to meeting the timing, reliability, and security requirements. The standard specifies routing and scheduling requirements, but it does not define the algorithms to be used. The goal of this dissertation is to analyze some of the main algorithms that have been proposed specifically for the WirelessHART protocol and to present a set of algorithms suitable for application to it. Analyses and comparisons between the algorithms are carried out, providing an in-depth study of their impact on protocol performance.
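As an illustration of the kind of routing decision the standard leaves open, the sketch below builds a simple upstream routing graph toward the gateway, giving each node up to two parents for path redundancy where the topology allows it. This is not the algorithm evaluated in the dissertation, only a minimal example under assumptions; the topology representation and the two-parent rule are choices made for the sketch.

```python
from collections import deque

def build_upstream_graph(links, gateway):
    """Build an upstream routing graph toward the gateway.

    links: dict mapping node -> set of neighbor nodes (assumed symmetric).
    Returns dict node -> list of parent nodes closer to the gateway.
    Each node keeps up to two parents for path redundancy, in the
    spirit of WirelessHART graph routing (illustrative only).
    """
    # Breadth-first search from the gateway to obtain hop distances.
    dist = {gateway: 0}
    queue = deque([gateway])
    while queue:
        node = queue.popleft()
        for nbr in links.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)

    parents = {}
    for node, d in dist.items():
        if node == gateway:
            continue
        # Candidate parents are neighbors strictly closer to the gateway.
        cands = [n for n in links.get(node, ()) if dist.get(n, float("inf")) < d]
        cands.sort(key=lambda n: dist[n])
        parents[node] = cands[:2]  # keep at most two parents
    return parents

if __name__ == "__main__":
    topology = {
        "GW": {"A", "B"},
        "A": {"GW", "B", "C"},
        "B": {"GW", "A", "C"},
        "C": {"A", "B"},
    }
    print(build_upstream_graph(topology, "GW"))
```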
23

Performance Modelling and Analysis of Weighted Fair Queueing for Scheduling in Communication Networks. An investigation into the Development of New Scheduling Algorithms for Weighted Fair Queueing System with Finite Buffer.

Alsawaai, Amina S.M. January 2010 (has links)
Analytical modelling and characterization of Weighted Fair Queueing (WFQ) have recently received considerable attention from several researchers, since WFQ offers the minimum delay and an optimal fairness guarantee. However, all previous work on WFQ has focused on developing approximations of the scheduler with an infinite buffer because of supposed scalability problems in the WFQ computation. The main aim of this thesis is to study the WFQ system by providing an analytical WFQ model, a theoretical construct based on a form of processor sharing with finite capacity. Solutions for classes with Poisson arrivals and exponential service are derived and verified against the global balance solution. The thesis shows that the proposed analytical models can give very good results under particular conditions, closely matching the WFQ algorithms; the accuracy of the models is verified by simulations of the WFQ model, performed with the QNAP-2 simulator. In addition, the thesis presents several performance studies demonstrating the power of the proposed analytical model in providing accurate delay bounds for a large number of classes. These results do not resolve all open issues in the WFQ system; they represent a starting point for the author's future research. The author believes that the most promising research activities lie in scheduling methods that provide statistical guarantees to multi-class services, and that alternative software could accommodate far more buffer places than the three-class buffer case considered here, which was constrained by the software limitations of this thesis. While these are good topics for long-term research, the short-to-medium term will see increasing interest in modifying the WFQ models to provide differentiated services. / Ministry of Higher Education
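For readers unfamiliar with the discipline being modelled, the sketch below is a textbook-style Weighted Fair Queueing scheduler that tags packets with virtual finish times and serves them in increasing finish-time order, with a finite per-class buffer (packets arriving to a full buffer are dropped). It is a minimal illustration under simplifying assumptions, not the analytical model developed in the thesis; class names and sizes are invented for the example.

```python
from collections import deque

class WFQScheduler:
    """Minimal Weighted Fair Queueing with finite per-class buffers."""

    def __init__(self, weights, buffer_size):
        self.weights = weights                 # class -> weight
        self.buffer_size = buffer_size         # max packets per class
        self.queues = {c: deque() for c in weights}
        self.virtual_time = 0.0
        self.last_finish = {c: 0.0 for c in weights}

    def enqueue(self, cls, length):
        """Admit a packet of `length` bits to class `cls`, or drop it."""
        if len(self.queues[cls]) >= self.buffer_size:
            return False                       # buffer full: packet lost
        start = max(self.virtual_time, self.last_finish[cls])
        finish = start + length / self.weights[cls]
        self.last_finish[cls] = finish
        self.queues[cls].append((finish, length))
        return True

    def dequeue(self):
        """Serve the head-of-line packet with the smallest finish time."""
        candidates = [(q[0][0], c) for c, q in self.queues.items() if q]
        if not candidates:
            return None
        finish, cls = min(candidates)
        self.virtual_time = finish
        _, length = self.queues[cls].popleft()
        return cls, length

if __name__ == "__main__":
    wfq = WFQScheduler({"voice": 3.0, "data": 1.0}, buffer_size=5)
    for _ in range(4):
        wfq.enqueue("voice", 200)
        wfq.enqueue("data", 1500)
    while (pkt := wfq.dequeue()) is not None:
        print(pkt)
```

The higher-weight class accumulates virtual finish times more slowly, so it receives a proportionally larger share of service; the finite buffer is what distinguishes the finite-capacity setting studied in the thesis from the infinite-buffer approximations it criticizes.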
24

Comparison of Scheduling Algorithms for a Multi-Product Batch-Chemical Plant with a Generalized Serial Network

Tra, Niem-Trung L. 03 February 2000 (has links)
Despite recent advances in computer power and the development of better algorithms, theoretical scheduling methodologies developed for batch-chemical production are seldom applied in industry (Musier & Evans 1989 and Grossmann et al. 1992). Scheduling decisions may have significant impact on overall company profitability by defining how capital is utilized, the operating costs required, and the ability to meet due dates. The purpose of this research is to compare different production scheduling methods by applying them to a real-world multi-stage, multi-product, batch-chemical production line. This research addresses the problem that the theoretical algorithms are seldom applied in industry and allows for performance analysis of several theoretical algorithms. The research presented in this thesis focuses on the development and comparison of several scheduling algorithms. The two objectives of this research are to: 1. modify different heuristic production scheduling algorithms to minimize tardiness for a multi-product batch plant involving multiple processing stages with several out-of-phase parallel machines in each stage; and 2. compare the robustness and performance of these production schedules using a stochastic discrete event simulation of a real-world production line. The following three scheduling algorithms are compared: 1. a modified Musier and Evans scheduling algorithm (1989); 2. a modified Ku and Karimi Sequence Building Algorithm (1991); and 3. a greedy heuristic based on an earliest-due-date (EDD) policy. Musier and Evans' heuristic improvement method (1989) is applied to the three algorithms. The computation times to determine the total tardiness of each schedule are compared. Finally, all the schedules are tested for robustness and performance in a stochastic setting with the use of a discrete event simulation (DES) model. Mignon, Honkomp, and Reklaitis' evaluation techniques (1995) and Multiple Comparison of the Best are used to help determine the best algorithm. / Master of Science
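As a rough illustration of the simplest of the three heuristics compared, the sketch below builds an earliest-due-date (EDD) sequence on a single machine and computes total tardiness. It is only a hedged example of the general EDD idea with invented job data; the thesis applies the policy to a multi-stage plant with parallel out-of-phase machines, which this sketch does not model.

```python
def edd_schedule(jobs):
    """Sequence jobs by earliest due date and compute total tardiness.

    jobs: list of (name, processing_time, due_date) tuples.
    Returns (ordered job names, total tardiness). Single-machine
    illustration only; a real batch plant has multiple stages with
    parallel units per stage.
    """
    order = sorted(jobs, key=lambda j: j[2])   # earliest due date first
    clock = 0.0
    total_tardiness = 0.0
    sequence = []
    for name, proc_time, due in order:
        clock += proc_time
        total_tardiness += max(0.0, clock - due)
        sequence.append(name)
    return sequence, total_tardiness

if __name__ == "__main__":
    demo_jobs = [("A", 4, 6), ("B", 2, 5), ("C", 6, 18), ("D", 3, 9)]
    print(edd_schedule(demo_jobs))
```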
25

Estudo comparativo de técnicas de escalonamento de tarefas dependentes para grades computacionais / Comparative Study of Task Dependent Scheduling Algorithms to Grid Computing

Aliaga, Alvaro Henry Mamani 22 August 2011 (has links)
As science advances, many applications in different areas require large amounts of computational power. Grid computing is an important alternative for obtaining high processing power, but that power must be used well: with specialized scheduling techniques, resources can be used appropriately. Many scheduling algorithms have been proposed for grid computing, so a sound methodology is needed to choose the one that offers the best performance for a given setting. In this work we compare scheduling algorithms for dependent tasks: (a) Heterogeneous Earliest Finish Time (HEFT), (b) Critical Path on a Processor (CPOP), and (c) Path Clustering Heuristic (PCH). Each algorithm is evaluated with different applications and on different architectures through simulation, following four criteria: (i) performance, (ii) scalability, (iii) adaptability, and (iv) workload distribution. We distinguish two kinds of grid applications, (i) regular and (ii) irregular, since the scalability criterion is not easy to compare for irregular applications. Under this set of criteria, the HEFT algorithm achieves the best performance and scalability, while all three algorithms show the same level of adaptability; in workload distribution, HEFT makes better use of the resources than the others. CPOP and PCH, on the other hand, schedule the tasks of the critical path onto the processor that minimizes their earliest finish time, an approach that is not always the most appropriate.
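For context, the sketch below shows the core of the HEFT heuristic compared in the study: tasks are ranked by upward rank (average computation cost plus the longest path to an exit task) and then assigned, in decreasing rank order, to the processor that gives the earliest finish time. It is a simplified illustration under assumed inputs (no insertion-based slot search, communication costs given per DAG edge), not the exact implementation evaluated in the dissertation.

```python
def heft(tasks, succ, comp, comm, procs):
    """Simplified HEFT list scheduling.

    tasks: list of task ids forming a DAG.
    succ:  dict task -> list of successor tasks.
    comp:  dict (task, proc) -> computation cost.
    comm:  dict (parent, child) -> communication cost, paid only when
           parent and child run on different processors.
    procs: list of processor ids.
    Returns dict task -> (proc, start, finish).
    """
    # Upward rank: average computation cost + longest path to an exit task.
    rank = {}
    def upward_rank(t):
        if t in rank:
            return rank[t]
        avg = sum(comp[(t, p)] for p in procs) / len(procs)
        tail = max((comm[(t, s)] + upward_rank(s) for s in succ.get(t, [])),
                   default=0.0)
        rank[t] = avg + tail
        return rank[t]
    for t in tasks:
        upward_rank(t)

    pred = {t: [] for t in tasks}
    for t, ss in succ.items():
        for s in ss:
            pred[s].append(t)

    proc_free = {p: 0.0 for p in procs}
    sched = {}
    for t in sorted(tasks, key=lambda x: rank[x], reverse=True):
        best = None
        for p in procs:
            # Data from each predecessor is ready after its finish time,
            # plus communication delay if it ran on another processor.
            ready = max((sched[q][2] + (0.0 if sched[q][0] == p else comm[(q, t)])
                         for q in pred[t]), default=0.0)
            start = max(ready, proc_free[p])
            finish = start + comp[(t, p)]
            if best is None or finish < best[2]:
                best = (p, start, finish)
        sched[t] = best
        proc_free[best[0]] = best[2]
    return sched

if __name__ == "__main__":
    tasks = ["t1", "t2", "t3"]
    succ = {"t1": ["t2", "t3"]}
    comp = {("t1", "p0"): 3, ("t1", "p1"): 4,
            ("t2", "p0"): 5, ("t2", "p1"): 4,
            ("t3", "p0"): 6, ("t3", "p1"): 5}
    comm = {("t1", "t2"): 2, ("t1", "t3"): 1}
    print(heft(tasks, succ, comp, comm, ["p0", "p1"]))
```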
26

Uma arquitetura de nuvem em comunidade para aplicações de tempo real. / A community cloud architecture for real-time applications.

Ös, Marcelo Dutra 30 November 2015 (has links)
Cloud Computing is a distributed computing paradigm that has been applied extensively to many fields of interest in the last few years, ranging from ordinary web applications to high-performance computing. The pay-per-use model and ubiquitous access methods have made Cloud Computing an interesting and popular alternative for both enterprises and universities. Among the deployment models adopted, one of the most prominent is the community cloud, in which several entities with similar interests build, maintain, and use the same infrastructure of cloud services. The cloud computing paradigm can also be attractive to applications with real-time processing requirements, mainly because of its capacity to handle huge amounts of data and its elasticity, that is, the dynamic, on-demand insertion or removal of computing resources. In this thesis, the requirements of a community cloud for real-time applications are identified. Based on these requirements and on a review of the literature on real-time distributed systems and real-time clouds, a real-time community cloud architecture is proposed. A case study of a real-time trading application at a stock exchange is presented as a feasible application for this model, and a real-time scheduling algorithm is proposed for this environment. Finally, a simulator is built to demonstrate quantitatively the performance improvements this architecture brings.
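The abstract does not describe the proposed real-time scheduling algorithm, so as a purely illustrative stand-in the sketch below dispatches incoming requests to a pool of cloud workers in earliest-deadline-first (EDF) order and records whether each deadline was met. All names, the single-queue model, and the choice of EDF are assumptions for the example, not the thesis's design.

```python
import heapq

def edf_dispatch(requests, num_workers):
    """Dispatch requests earliest-deadline-first to a pool of workers.

    requests: list of (name, release_time, exec_time, deadline).
    Returns list of (name, worker, start, finish, met_deadline).
    """
    pending = sorted(requests, key=lambda r: r[1])    # by release time
    workers = [(0.0, w) for w in range(num_workers)]  # (free_at, worker id)
    heapq.heapify(workers)
    ready, schedule, i = [], [], 0
    while i < len(pending) or ready:
        free_at, w = heapq.heappop(workers)
        # Admit every request released by the time this worker is free.
        while i < len(pending) and pending[i][1] <= free_at:
            name, rel, ex, dl = pending[i]
            heapq.heappush(ready, (dl, name, rel, ex))
            i += 1
        if not ready:                                  # idle until next release
            heapq.heappush(workers, (pending[i][1], w))
            continue
        dl, name, rel, ex = heapq.heappop(ready)       # earliest deadline first
        start = max(free_at, rel)
        finish = start + ex
        schedule.append((name, w, start, finish, finish <= dl))
        heapq.heappush(workers, (finish, w))
    return schedule

if __name__ == "__main__":
    reqs = [("order1", 0, 2, 5), ("quote1", 0, 1, 2), ("order2", 1, 3, 9)]
    print(edf_dispatch(reqs, num_workers=2))
```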
29

Etude des méthodes d'ordonnancement sur les réseaux de capteurs sans fil. / Study on Scheduling over Wireless Sensor Networks.

Alghamdi, Bandar 06 November 2015 (has links)
Wireless Body Area Networks (WBANs) are the most critical class of Wireless Sensor Networks (WSNs). A WBAN interconnects tiny sensing devices placed on, around, or inside the human body and must be a self-organizing network architecture, able to manage all of the network's requirements efficiently. A WBAN usually contains at least two body sensors, each of which sends packets to or receives packets from the Personal Area Network Coordinator (PANC); the PANC is responsible for scheduling the tasks of its child nodes. Task scheduling in a WBAN requires a dynamic and adaptive mechanism to handle the emergencies that can occur with a given patient and to improve the most important parameters of the network, such as link quality, response time, throughput, duty cycle, and packet delivery. In this thesis, we propose three task scheduling techniques: Semi-Dynamic Scheduling (SDS), Efficient Dynamic Scheduling (EDS), and High Priority Scheduling (HPS). Moreover, a comprehensive study of WBAN platforms is performed, classifying and qualitatively evaluating the existing platforms. We also investigate mobility models for WBANs by designing an architecture that describes them, and we detail a diagnosis procedure that uses classification methods to detect dangerous epidemic diseases quickly. Our proposals are then validated with two techniques to check their feasibility: simulation scenarios using the well-known OPNET network simulator, and real implementations on TelosB motes running TinyOS.
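As a rough illustration of priority-based scheduling of the kind the HPS approach suggests (the exact algorithm is not described in the abstract), the sketch below lets a coordinator assign superframe data slots to body sensors so that emergency traffic is served before routine traffic. Slot counts, priority values, and sensor names are assumptions made for the example.

```python
def assign_slots(sensors, num_slots):
    """Assign superframe slots to body sensors by priority.

    sensors: list of (name, priority, slots_requested); a lower priority
             value means more urgent traffic (e.g. 0 = emergency).
    num_slots: total data slots available in the superframe.
    Returns dict name -> list of slot indices (illustrative only).
    """
    allocation = {name: [] for name, _, _ in sensors}
    next_slot = 0
    # Serve the most urgent sensors first; less urgent ones get what remains.
    for name, priority, requested in sorted(sensors, key=lambda s: s[1]):
        granted = min(requested, num_slots - next_slot)
        allocation[name] = list(range(next_slot, next_slot + granted))
        next_slot += granted
        if next_slot >= num_slots:
            break
    return allocation

if __name__ == "__main__":
    body_sensors = [("ecg", 0, 4), ("temperature", 2, 1), ("motion", 1, 2)]
    print(assign_slots(body_sensors, num_slots=6))
```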
30

Parallelize Automated Tests in a Build and Test Environment

Durairaj, Selva Ganesh January 2016 (has links)
This thesis investigates ways to reduce the total time spent on testing and the waiting times incurred when running multiple automated test cases in a test framework. The "Automated Test Framework", developed by Axis Communications AB, is used to write functional tests for both the hardware and the software of a resource; this thesis considers the functional tests that exercise the software. In the current infrastructure, tests are executed sequentially and resources are allocated with a First In, First Out (FIFO) scheduling algorithm. From the user's point of view, it is inefficient to wait many hours for tests that take only a few minutes to execute. The thesis consists of two main parts: (1) identifying a plugin that suits the framework and executes tests in parallel, reducing their overall execution time, and (2) analyzing various scheduling algorithms to address the resource allocation problem that arises, with limited resources available, when tests are run in parallel. Distributing multiple tests across several resources and executing them in parallel improves the test strategy and reduces the overall execution times of test suites. Case studies were created to emulate problematic scenarios at the company, and sample tests were written to reflect the real tests in the framework. Because of the complexity of the current architecture and the limited resources available for running tests in parallel, a simulator was developed with the identified plugin on a multi-core computer, with each core simulating a resource. Multiple tests were run using the simulator to explore, check, and assess whether the overall execution time of the tests could be reduced. When the automated tests were run in parallel, resource allocation became a problem because only limited resources were available, and scheduling algorithms were considered to address it. A prototype was developed to mimic the behaviour of a scheduling plugin, the scheduling algorithms were implemented in it, and it was given input values and tested against the scenarios described in the case studies. The results from the prototype were used to analyze the impact of the various scheduling algorithms on reducing the tests' waiting times. The combined use of the simulator and the scheduler prototype helped in understanding how to minimize the total time spent on testing and how to improve the resource allocation process.
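To make the resource allocation question concrete, the sketch below simulates dispatching test jobs onto a fixed pool of test resources under two policies, FIFO and shortest-job-first, and reports the average waiting time under each. It is a generic illustration with invented job data, not the Axis framework, its plugin, or the prototype built in the thesis.

```python
import heapq

def simulate(jobs, num_resources, policy="fifo"):
    """Simulate dispatching test jobs onto a pool of resources.

    jobs: list of (test_name, duration) in submission order.
    policy: "fifo" keeps submission order, "sjf" runs shortest tests first.
    Returns the average waiting time (time between submission and start),
    assuming all jobs are submitted at time zero.
    """
    queue = list(jobs)
    if policy == "sjf":
        queue.sort(key=lambda j: j[1])
    resources = [0.0] * num_resources          # time each resource frees up
    heapq.heapify(resources)
    total_wait = 0.0
    for name, duration in queue:
        start = heapq.heappop(resources)       # earliest available resource
        total_wait += start                    # waited from t=0 until start
        heapq.heappush(resources, start + duration)
    return total_wait / len(jobs)

if __name__ == "__main__":
    test_jobs = [("boot_test", 30), ("smoke_test", 5), ("io_test", 12),
                 ("net_test", 7), ("stress_test", 45)]
    for p in ("fifo", "sjf"):
        print(p, simulate(test_jobs, num_resources=2, policy=p))
```

On this toy workload, shortest-job-first roughly halves the average waiting time compared with FIFO, which is the kind of effect the scheduler prototype is used to quantify.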
