1

The impact of decentral dispatching strategies on the performance of intralogistics transport systems

Klein, Nils (17 July 2014)
This thesis focuses on control strategies for intralogistics transport systems. It evaluates how switching from central to decentral dispatching approaches influences the performance of these systems. Many ideas and prototypes for implementing decentral control have been suggested by the scientific community, but usually only the qualitative advantages of this new paradigm are stated; the impact on performance is not quantified and analyzed. Additionally, decentral control is often confused with distributed algorithms, or relies on aggregating local information into global information. In the latter case, the technological limitations caused by the communication overhead are not considered. The decentral prototypes usually focus only on routing.

This thesis takes a step back and provides a generic simulation environment which can be used by other researchers to test and compare control strategies in the future. The test environment is used for developing four truly decentral dispatching strategies which work only on the basis of local information. These strategies are compared to a central approach for controlling transportation systems. Input data from two real-world applications is used for a series of simulation experiments with three different layout complexities.

Based on the simulation studies, neither the central nor the decentral dispatching strategies show universally superior performance. The results depend on the combination of input data set and layout scenario. The expected efficiency loss for the decentral approaches can be confirmed for stable input patterns: regardless of the layout complexity, the decentral strategies always need more vehicles to reach the performance level of the central control rule when these input characteristics are present. In the case of varying input data and high throughput, the decentral strategies outperform the central approach in simple layouts; they require fewer vehicles and less vehicle movement to achieve the central performance. Layout simplicity makes the central dispatching strategy prone to undesired effects, and the simple-minded decentral decision rules can achieve better performance in this kind of environment. However, only complex layouts are a relevant benchmark scenario for transferring decentral ideas to real-world applications. In such a scenario the decentral performance deteriorates, while the layout-dependent influences on the central strategy become less relevant. This is true for both analyzed input data sets. Consequently, the decentral strategies require at least 36% to 53% more vehicles and 20% to 42% more vehicle movement to achieve the lowest central performance level. Their usage can therefore currently not be justified based on investment and operating costs.

The characteristics of decentral systems limit their own performance. The restriction to local information leads to poor dispatching decisions, which in turn induce self-reinforcing inefficiencies. In addition, the application of decentral strategies requires larger storage location capacity. In several disturbance scenarios the decentral strategies perform fairly well and show their ability to adapt to changed environmental conditions. However, their performance after the disturbance remains unpredictable in some cases and relates to the properties of self-organizing complex systems. The real-world applicability therefore has to be called into question.
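The abstract does not spell out the four decentral dispatching rules, but the core idea, each vehicle deciding purely on locally observable information without a global assignment authority, can be illustrated with a minimal sketch. The following Python fragment is an assumption-laden illustration (the Request and Vehicle structures and the sensing_range parameter are invented for this example), not one of the strategies evaluated in the thesis:

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Request:
    request_id: int
    pickup: float          # position of the pickup station on a 1-D track (assumed layout)
    claimed: bool = False

@dataclass
class Vehicle:
    vehicle_id: int
    position: float
    sensing_range: float   # assumption: limits which requests are locally visible

def decentral_dispatch(vehicle: Vehicle, requests: list[Request]) -> Request | None:
    # Pick the nearest unclaimed request within the vehicle's sensing range.
    visible = [r for r in requests
               if not r.claimed and abs(r.pickup - vehicle.position) <= vehicle.sensing_range]
    if not visible:
        return None        # no local knowledge of open requests, so the vehicle idles
    chosen = min(visible, key=lambda r: abs(r.pickup - vehicle.position))
    chosen.claimed = True  # purely local claim; conflicts would need a local handshake in practice
    return chosen

requests = [Request(1, pickup=2.0), Request(2, pickup=9.0)]
shuttle = Vehicle(vehicle_id=1, position=3.0, sensing_range=4.0)
print(decentral_dispatch(shuttle, requests))   # claims request 1; request 2 is out of range

A central dispatcher, by contrast, would assign all open requests from one global view of the system, which is exactly the information advantage whose performance effect the thesis quantifies.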
2

The impact of decentral dispatching strategies on the performance of intralogistics transport systems

Klein, Nils (14 August 2013)
3

Run-time Adaptation of Role-based Software Systems

Weißbach, Martin (06 September 2018)
Self-adaptive software systems possess the ability to modify their own structure or behavior in response to changes in their operational environment. Access to sensor data providing information on the monitored environment is a necessary prerequisite in such software systems. In the future, self-adaptive software systems will be increasingly distributed and interconnected to perform their assigned tasks, e.g., within smart environments or as part of autonomous systems. Adaptations of a software system's structure or behavior will therefore have to be performed consistently on multiple remote subsystems.

Current approaches, however, do not fully support the run-time adaptation of distributed and interconnected software systems. Supported adaptations are local to a specific device and do not require further coordination, or the execution of such adaptations is controlled by a centralized management system. Approaches that do support a decentralized adaptation process help to determine a stable state, e.g., defined by quiescence, of one adaptable entity without central knowledge ahead of the actual adaptation process. The execution of complex adaptation scenarios comprising several adaptations on multiple computational devices is currently not supported. Consequently, inherent properties of a distributed system, such as intermittent connectivity or local adaptation failures, pose further challenges for the execution of adaptations affecting system parts deployed to multiple devices.

Adaptation operations in the current research landscape cover different types of changes that can be performed upon a self-adaptive software system. Simple adaptations allow the modification of bindings between components or services as well as the removal, or the creation and integration, of such components or services. Semantically more expressive operations allow for the relocation of behavioral parts of the system.

In this thesis, a coordination protocol is presented that supports the decentralized execution of multiple, possibly dependent adaptation operations and ensures a consistent transition of the software system from its source to a desired target configuration. An adaptation operation describes exactly one behavioral modification of the system, e.g., the addition or replacement of a component representing a behavioral element of the system's configuration. We rely on the notion of Roles as an abstraction to define the software system's static and dynamic, i.e., context-dependent, parts. Roles are an intuitive means to describe behavioral adaptations in distributed, context-dependent software systems due to their behavioral, relational and context-dependent nature. Adaptation operations therefore utilize the Role concept to describe the intended run-time modifications of the software system. The proposed protocol is designed to maintain a consistent transition of the software system from a given source to a target configuration in the presence of link failures between remote subsystems, i.e., when messages used by the protocol to coordinate the adaptation process are lost in transmission, and in case of local failures during the adaptation process.

The evaluation of the approach comprises two aspects. In a first step, the correctness of the coordination protocol is formally validated using the model checking tool PRISM; the protocol is shown to be deadlock-free even in the presence of coordination message losses and local adaptation failures. In a second step, the approach is evaluated with the help of an emulated execution environment in which the rates of coordination message losses and adaptation failures are varied. The adaptation duration and the partial unavailability of the system, i.e., the time during which roles are passive due to ongoing adaptations, are measured, as well as the success rate of the adaptation process for different rates of message losses and adaptation failures.
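The protocol itself is not detailed in the abstract. As a rough illustration of the transactional idea, moving all affected subsystems from a source to a target configuration or not at all, the following Python sketch runs a plain prepare/commit exchange with retransmission on message loss; it is a generic simplification with invented names (Participant, deliver), not the decentralized, role-specific protocol presented in the thesis:

import random

class Participant:
    def __init__(self, name, fail_prepare=False):
        self.name = name
        self.fail_prepare = fail_prepare   # models a local adaptation failure
        self.state = "source"

    def on_prepare(self):
        if self.fail_prepare:
            return "abort"
        self.state = "prepared"            # e.g. affected roles are passivated (quiescent)
        return "ready"

    def on_decision(self, decision):
        self.state = "target" if decision == "commit" else "source"

def deliver(loss_rate=0.3):
    # Unreliable link: a coordination message is lost with probability loss_rate.
    return random.random() >= loss_rate

def execute_adaptation(participants, max_retries=10):
    votes = {}
    for p in participants:
        for _ in range(max_retries):       # retransmit PREPARE until it gets through
            if deliver():
                votes[p.name] = p.on_prepare()
                break
        else:
            votes[p.name] = "abort"        # unreachable subsystem, abort the whole adaptation
    decision = "commit" if all(v == "ready" for v in votes.values()) else "abort"
    for p in participants:
        while not deliver():               # keep retransmitting the decision message
            pass
        p.on_decision(decision)
    return decision, {p.name: p.state for p in participants}

nodes = [Participant("robot-arm"), Participant("gripper")]
print(execute_adaptation(nodes))           # either all reach "target" or all stay at "source"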
4

Coordinated Execution of Adaptation Operations in Distributed Role-based Software Systems

Weißbach, Martin; Springer, Thomas (01 July 2021)
Future applications will run in a highly heterogeneous and dynamic execution environment that forces them to adapt their behavior and offered functionality to the user's or the system's current situation. Since application components in such heterogeneous multi-device systems will be distributed over multiple interconnected devices and cooperate to achieve a common goal, a coordinated adaptation is required to ensure consistent system behavior. In this paper we present a decentralized adaptation middleware for adapting a distributed software system. Our approach supports the reliable execution of multiple adaptation operations that depend on each other and are performed transactionally, even in unsteady environments characterized by message loss or node failures. We implemented our approach in a search-and-rescue robot scenario to show its feasibility and conducted initial performance evaluations.
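As a hypothetical example of what such dependent operations might look like to the middleware, the snippet below declares a small adaptation plan as a dependency graph and derives a valid execution order; the operation and device names are invented and not taken from the paper:

from graphlib import TopologicalSorter    # Python standard library (3.9+)

# Each operation targets one device; the sets name the operations it depends on.
adaptation_plan = {
    "deactivate_autonomy@robot": set(),
    "activate_manual_control@operator_tablet": {"deactivate_autonomy@robot"},
    "rebind_camera_stream@robot": {"deactivate_autonomy@robot"},
}

# Dependencies fix the order in which the operations may be executed; if any of
# them fails, transactional semantics require rolling back the whole plan.
execution_order = list(TopologicalSorter(adaptation_plan).static_order())
print(execution_order)    # "deactivate_autonomy@robot" is guaranteed to come first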
5

Distributed Collaboration on Versioned Decentralized RDF Knowledge Bases

Arndt, Natanael (30 June 2021)
The aim of this thesis is to support the development of RDF knowledge bases in a distributed collaborative setup. A new methodology for distributed collaborative knowledge engineering, called Quit, is presented. It follows the premise that it is necessary to express dissent throughout a collaboration process and to provide individual workspaces for each collaborator. The approach is inspired by and based on the Git methodology for collaboration in software engineering. The state-of-the-art analysis shows that no existing system consistently transfers the Git methodology to knowledge engineering. The key features of the Quit methodology are independent workspaces for each user and a shared distributed workspace for the collaboration. Throughout the whole collaboration process, data provenance plays an important role. To support the methodology, the Quit Stack is implemented as a collection of microservices that allow the Semantic Web data structure and standard interfaces to be integrated into the distributed collaborative process. To complement the distributed data authoring, appropriate methods to support the data management process are researched; these management processes are in particular the creation and authoring of data as well as the publication and exploration of data. The application of the methodology is shown in various use cases for distributed collaboration on organizational data and on research data. Furthermore, the implementation is quantitatively compared to related work. Finally, it can be concluded that the consistent approach followed by the Quit methodology enables a wide range of distributed Semantic Web knowledge engineering scenarios.

Contents: Preface by Thomas Riechert; Preface by Cesare Pautasso; 1 Introduction; 2 Preliminaries; 3 State of the Art; 4 The Quit Methodology; 5 The Quit Stack; 6 Data Creation and Authoring; 7 Publication and Exploration; 8 Application and Evaluation; 9 Conclusion and Future Work; Bibliography; Web References; Lists of Figures, Tables, Listings, Definitions and Acronyms, and Namespace Prefixes
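The Quit Stack's own interfaces are not described in this abstract, so the fragment below only sketches the underlying methodology: versioning the state of an RDF graph in an ordinary Git repository so that every change becomes a commit that can be branched, merged and traced for provenance. It uses rdflib and the git command line; the repository layout and the example namespace are assumptions, not part of the Quit Stack:

import subprocess
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")   # invented namespace for the example
REPO = "knowledge-base"                 # assumed path of a local, already initialized Git repository

def commit_change(graph: Graph, message: str) -> None:
    # Serialize the graph to Turtle and record the new state as a Git commit.
    graph.serialize(destination=f"{REPO}/graph.ttl", format="turtle")
    subprocess.run(["git", "-C", REPO, "add", "graph.ttl"], check=True)
    subprocess.run(["git", "-C", REPO, "commit", "-m", message], check=True)

g = Graph()
g.add((EX.alice, EX.worksOn, Literal("Quit methodology")))
# commit_change(g, "Record Alice's contribution")   # requires rdflib, git and the repository above

Two collaborators would each work on their own clone or branch of such a repository, so disagreement simply shows up as diverging commits until it is resolved by a merge.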
