1

Adaptive Scheduling in a Distributed Cyber-Physical System: A case study on Future Power Grids

Choudhari, Ashish 01 December 2015
Cyber-physical systems (CPS) are systems composed of physical and computational components. CPS components are typically interconnected through a communication network that allows them to interact and take automated actions beneficial for the overall system. The future power grid is a major example of a cyber-physical system. Traditionally, power grids have used a centralized approach to manage the energy produced at power sources or large power plants. With the advancement and growing availability of renewable energy sources such as wind farms and solar installations, an increasing number of energy sources are connecting to the power grid. Managing this large number of energy sources with a centralized technique is impractical and computationally very expensive, so a decentralized approach to monitoring and scheduling energy across the power grid is preferred. In a decentralized approach, the computational load is distributed among grid entities that are interconnected through a readily available communication network such as the Internet. The communication network allows the grid entities to coordinate, exchange their power-state information with each other, and take automated actions that lead to efficient use of both energy and network bandwidth. Thus, the future power grid is appropriately called a "Smart-Grid". While Smart-Grids provide efficient energy operations, they also pose several challenges in the design, verification, and monitoring phases. The computer network serves as the backbone for scheduling messages between Smart-Grid entities, so the network delays experienced by messages play a vital role in grid stability and overall system performance. In this work, we study the effects of network delays on Smart-Grid performance and propose adaptive algorithms to efficiently schedule messages between grid entities. The proposed algorithms also ensure grid stability and perform network congestion control. Through this work, we derive useful conclusions about Smart-Grid performance and identify new challenges that can serve as future research directions in this domain.
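To make the idea of adaptive message scheduling concrete, here is a minimal sketch in which a grid entity adapts the interval between its power-state messages based on the network delay it observes, using an AIMD-style (additive decrease of the interval, multiplicative back-off) congestion-control rule. This is an illustration only, not the dissertation's algorithm; the class, the thresholds, and the simulated send_state_update helper are all hypothetical.

```python
import random

class AdaptiveMessageScheduler:
    """Adapts the interval between power-state messages based on
    observed network delay (AIMD-style congestion control).
    Hypothetical sketch; not the algorithm from the thesis."""

    def __init__(self, base_interval=1.0, delay_threshold=0.2):
        self.interval = base_interval           # seconds between messages
        self.delay_threshold = delay_threshold  # delay treated as congestion

    def record_delay(self, measured_delay):
        if measured_delay > self.delay_threshold:
            # Congestion suspected: back off multiplicatively.
            self.interval = min(self.interval * 2.0, 30.0)
        else:
            # Network looks healthy: probe for more bandwidth additively.
            self.interval = max(self.interval - 0.1, 0.1)

def send_state_update(entity_id):
    """Stand-in for sending a power-state message to a neighboring
    grid entity; returns a simulated network delay in seconds."""
    return random.uniform(0.05, 0.4)

scheduler = AdaptiveMessageScheduler()
for _ in range(5):
    delay = send_state_update("feeder-42")
    scheduler.record_delay(delay)
    print(f"observed delay {delay:.3f}s -> next interval {scheduler.interval:.2f}s")
```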
2

ADAPTIVE, MULTI-OBJECTIVE JOB SHOP SCHEDULING USING GENETIC ALGORITHMS

Metta, Haritha 01 January 2008
This research proposes a method to solve the adaptive, multi-objective job shop scheduling problem. Adaptive scheduling is necessary to deal with the internal and external disruptions faced in real-life manufacturing environments. Two objectives are optimized simultaneously: mean tardiness, to effectively meet customer due-date requirements, and mean flow time, to reduce the lead time jobs spend in the system. An asexual-reproduction genetic algorithm with multiple mutation strategies is developed to solve the multi-objective optimization problem. The model is tested for single-day and multi-day adaptive scheduling, and the results are compared with those reported in the literature for standard problems and with priority dispatching rules. The findings indicate that the genetic algorithm model can find good solutions within a short computational time.
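As a rough illustration of an asexual-reproduction genetic algorithm with multiple mutation strategies, the sketch below evolves a job sequence that minimizes a weighted sum of mean flow time and mean tardiness. It simplifies the job shop to a single machine and uses invented job data; the mutation operators, weights, and population settings are assumptions for illustration, not the thesis's model.

```python
import random

# Hypothetical single-machine simplification: each job has a
# processing time and a due date. (The thesis addresses the full
# job shop case; this sketch only illustrates the GA mechanics.)
JOBS = [(5, 12), (3, 7), (8, 25), (2, 6), (6, 20)]  # (proc_time, due_date)

def evaluate(sequence):
    """Return (mean flow time, mean tardiness) for a job sequence."""
    t, flows, tards = 0, [], []
    for j in sequence:
        proc, due = JOBS[j]
        t += proc
        flows.append(t)
        tards.append(max(0, t - due))
    n = len(sequence)
    return sum(flows) / n, sum(tards) / n

def fitness(sequence, w_flow=0.5, w_tard=0.5):
    """Weighted sum of the two objectives; lower is better."""
    mft, mt = evaluate(sequence)
    return w_flow * mft + w_tard * mt

def mutate(seq):
    """Asexual reproduction: copy the parent, then apply one of
    several mutation strategies (swap, insertion, inversion)."""
    child = seq[:]
    i, j = sorted(random.sample(range(len(child)), 2))
    op = random.choice(("swap", "insert", "invert"))
    if op == "swap":
        child[i], child[j] = child[j], child[i]
    elif op == "insert":
        child.insert(j, child.pop(i))
    else:
        child[i:j + 1] = reversed(child[i:j + 1])
    return child

population = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness)
    survivors = population[:10]  # elitist selection, no crossover
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = min(population, key=fitness)
print("best sequence:", best, "objectives (flow, tardiness):", evaluate(best))
```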
3

Scientific Workflows for Hadoop

Bux, Marc Nicolas 07 August 2018
Scientific workflows provide a means to model, execute, and exchange the increasingly complex analysis pipelines necessary for today's data-driven science. Over the last decades, scientific workflow management systems have emerged to facilitate the design, execution, and monitoring of such workflows. At the same time, the amounts of data generated in various areas of science have outpaced hardware advancements. Parallelization and distributed execution are generally proposed to deal with increasing amounts of data. However, the resources provided by distributed infrastructures are subject to heterogeneity, dynamic performance changes at runtime, and occasional failures. To leverage the scalability of these infrastructures despite this performance variability, workflow management systems have to progress: parallelization potential in scientific workflows has to be detected and exploited; simulation frameworks, which are commonly employed for the evaluation of scheduling mechanisms, have to consider the instability encountered on the infrastructures they emulate; adaptive scheduling mechanisms have to be employed to optimize resource utilization in the face of instability; and state-of-the-art systems for scalable distributed resource management and storage, such as Apache Hadoop, have to be supported. This dissertation presents novel solutions for these requirements. First, we introduce DynamicCloudSim, a cloud computing simulation framework that adequately models the various aspects of variability encountered in computational clouds. Second, we outline ERA, an adaptive scheduling policy that optimizes workflow makespan by exploiting heterogeneity, replicating bottlenecks in workflow execution, and adapting to changes in the underlying infrastructure. Finally, we present Hi-WAY, an execution engine that integrates ERA and enables the highly scalable execution of scientific workflows written in a number of languages on Hadoop.
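The following sketch illustrates the general idea behind an ERA-like adaptive policy as described in the abstract: estimate per-node performance, place tasks greedily by earliest finish time, and speculatively replicate tasks that look like bottlenecks on slow nodes. The node speeds, task sizes, and replication threshold are invented for illustration; this is not the ERA implementation from the dissertation.

```python
import heapq

# Hypothetical node speeds (work units/sec) observed at runtime;
# adaptive schedulers maintain such estimates and refresh them as
# tasks complete on heterogeneous, unstable infrastructure.
node_speed = {"node-a": 1.0, "node-b": 0.4, "node-c": 0.9}
tasks = {"t1": 10.0, "t2": 4.0, "t3": 7.0}  # task -> work units

def schedule(tasks, node_speed, replicate_factor=1.5):
    """Greedy sketch: place each task on the node that becomes free
    first; when its projected runtime there exceeds replicate_factor
    times the best achievable runtime, launch a speculative replica
    on the next free node to hedge against the slow one."""
    ready = [(0.0, n) for n in node_speed]  # (time node becomes free, node)
    heapq.heapify(ready)
    plan = []
    for task, work in sorted(tasks.items(), key=lambda kv: -kv[1]):
        free_at, node = heapq.heappop(ready)
        runtime = work / node_speed[node]
        best_runtime = work / max(node_speed.values())
        plan.append((task, node))
        if runtime > replicate_factor * best_runtime and ready:
            # Task looks like a bottleneck on this node: replicate it.
            free2, node2 = heapq.heappop(ready)
            plan.append((task + " (replica)", node2))
            heapq.heappush(ready, (free2 + work / node_speed[node2], node2))
        heapq.heappush(ready, (free_at + runtime, node))
    return plan

for assignment in schedule(tasks, node_speed):
    print(assignment)
```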
