201 |
A Hands-on Modular Laboratory Environment to Foster Learning in Control System Security
Deshmukh, Pallavi Prafulla 07 July 2016 (has links)
Cyber-Physical Systems (CPSes) form the core of Industrial Control Systems (ICS) and critical infrastructures. These systems use computers to control and monitor physical processes in many critical industries including aviation, industrial automation, transportation, communications, waste treatment, and power systems. Increasingly, these systems are connected with corporate networks and the Internet, making them susceptible to the same kinds of cyber-attacks that target conventional IT networks. Furthermore, attacks like the Stuxnet worm have exposed weaknesses in CPS security, prompting the research community to develop more effective security mechanisms. While this remains an important research topic, CPS security is often given little attention in undergraduate programs. There can be a significant disconnect between control system engineers with CPS engineering skills and network engineers with an IT background.
This thesis describes hands-on courseware to help students bridge this gap. This courseware incorporates cyber-physical security concepts into effective learning modules that highlight real-world technical issues. A modular learning approach helps students understand CPS architectures and their vulnerabilities to cyber-attacks via experiential learning, and acquire practical skills through actively participating in the hands-on exercises. The ultimate goal of these lab modules is to show how an adversary would break into a conventional CPS by exploiting various network protocols and security measures implemented in the system. A mock testbed environment is created using commercial off-the-shelf hardware to address the unique aspects of a CPS and serve as a cybersecurity trainer for students from control system or IT backgrounds. The modular nature of this courseware, which uses an economical and easily replicable hardware testbed, makes this experience uniquely available as an adjunct to conventional embedded systems, control system design, or cybersecurity courses. To assess the impact of this courseware, an evaluation survey is developed to measure the understanding of the unique aspects of CPS security addressed. These modules leverage existing academic subjects, help students understand the sequence of steps taken by adversaries, and serve to bridge theory and practice. / Master of Science
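The modules' focus on exploiting CPS network protocols can be illustrated with a small sketch. The thesis does not name specific protocols, so the example below assumes Modbus/TCP, a widely deployed and unauthenticated ICS protocol; the PLC address and coil layout are hypothetical. It shows how little effort an adversary with network access needs to read controller state, which is the kind of exposure such lab modules are built around.

```python
# Hypothetical sketch: reading coil states from a Modbus/TCP device using only
# the standard library, illustrating that the protocol has no authentication.
# The device address and register layout below are illustrative assumptions.
import socket
import struct

def read_coils(host: str, start: int, count: int, unit: int = 1) -> bytes:
    pdu = struct.pack(">BHH", 0x01, start, count)        # function 0x01 = Read Coils
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", 0x0001, 0x0000, len(pdu) + 1, unit)
    with socket.create_connection((host, 502), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        response = sock.recv(256)
    return response[9:]   # coil bytes follow the 7-byte MBAP and 2-byte PDU header

if __name__ == "__main__":
    coils = read_coils("192.168.0.10", start=0, count=8)  # hypothetical PLC
    print("coil states:", format(coils[0], "08b"))
```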
|
202 |
Fast and Scalable Structure-from-Motion for High-precision Mobile Augmented Reality Systems
Bae, Hyojoon 24 April 2014 (has links)
A key problem in mobile computing is providing people access to necessary cyber-information associated with their surrounding physical objects. Mobile augmented reality is one of the emerging techniques that address this key problem, allowing users to see the cyber-information associated with real-world physical objects by overlaying it on the physical objects' imagery. As a consequence, many mobile augmented reality approaches have been proposed to identify and visualize relevant cyber-information on users' mobile devices by intelligently interpreting users' positions and orientations in 3D and their associated surroundings. However, existing approaches for mobile augmented reality primarily rely on Radio Frequency (RF) based location tracking technologies (e.g., Global Positioning Systems or Wireless Local Area Networks), which typically do not provide sufficient precision in RF-denied areas, or which require additional hardware and custom mobile devices.
To remove the dependency on external location tracking technologies, this dissertation presents a new vision-based context-aware approach for mobile augmented reality that allows users to query and access semantically-rich 3D cyber-information related to real-world physical objects and see it precisely overlaid on top of imagery of the associated physical objects. The approach requires no RF-based location tracking modules, external hardware attachments on the mobile devices, or optical/fiducial markers for localizing a user's position. Rather, the user's 3D location and orientation are automatically and purely derived by comparing images from the user's mobile device to a 3D point cloud model generated from a set of pre-collected photographs.
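A common way to realize the pose-recovery step described above is to solve the Perspective-n-Point (PnP) problem robustly with RANSAC. The sketch below is not the dissertation's HD4AR pipeline; it assumes feature matching has already produced 2D-3D correspondences and that the camera intrinsics K are known, and it uses placeholder data throughout.

```python
# Sketch of camera pose recovery from 2D-3D correspondences via PnP + RANSAC.
# Assumes feature matching already paired image keypoints (pts_2d) with
# point-cloud coordinates (pts_3d); K is an assumed pinhole intrinsic matrix.
import numpy as np
import cv2

pts_3d = np.random.rand(50, 3).astype(np.float32)          # placeholder model points
pts_2d = np.random.rand(50, 2).astype(np.float32) * 480    # placeholder matches
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts_3d, pts_2d, K, distCoeffs=None,
    reprojectionError=4.0, iterationsCount=100)
if ok:
    R, _ = cv2.Rodrigues(rvec)            # rotation matrix of the camera
    cam_pos = (-R.T @ tvec).ravel()       # camera position in model coordinates
    print("estimated camera position:", cam_pos)
```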
A further challenge of mobile augmented reality is creating 3D cyber-information and associating it with real-world physical objects, especially using the limited 2D user interfaces of standard mobile devices. To address this challenge, this research provides a new image-based 3D cyber-physical content authoring method designed specifically for the limited screen sizes and capabilities of commodity mobile devices. This new approach not only provides a method for creating 3D cyber-information with standard mobile devices, but also automatically associates user-driven cyber-information with real-world physical objects in 3D.
Finally, a key challenge of scalability for mobile augmented reality is addressed in this dissertation. In general, mobile augmented reality must work regardless of users' location and environment, both in terms of physical scale, such as the size of objects, and in terms of cyber-information scale, such as the total number of cyber-information entities associated with physical objects. However, many existing approaches for mobile augmented reality have been tested only on limited real-world use-cases and struggle to scale. By designing fast direct 2D-to-3D matching algorithms for localization, as well as applying a caching scheme, the proposed research consistently supports near real-time localization and information association regardless of users' location, the size of physical objects, and the number of cyber-physical information items.
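The direct 2D-to-3D matching idea can be sketched as a nearest-neighbor search over descriptors, with the search structure built once and reused (cached) across queries. The descriptor dimension, ratio-test threshold, and random data below are illustrative assumptions, not the dissertation's tuned pipeline.

```python
# Sketch of direct 2D-to-3D matching: a k-d tree over the 3D points' feature
# descriptors is built once (and could be cached to disk), then each query
# image descriptor is matched to its nearest 3D point.
import numpy as np
from scipy.spatial import cKDTree

model_desc = np.random.rand(10000, 128).astype(np.float32)  # one descriptor per 3D point
model_xyz = np.random.rand(10000, 3)

tree = cKDTree(model_desc)        # build once; reuse (cache) across query images

query_desc = np.random.rand(500, 128).astype(np.float32)    # descriptors from one photo
dist, idx = tree.query(query_desc, k=2)                     # two nearest for a ratio test
good = dist[:, 0] < 0.8 * dist[:, 1]                        # Lowe-style ratio test
matches_3d = model_xyz[idx[good, 0]]                        # matched 3D coordinates
print(f"{good.sum()} confident 2D-to-3D matches")
```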
To realize all of these research objectives, five research methods are developed and validated: 1) Hybrid 4-Dimensional Augmented Reality (HD4AR), 2) plane-transformation-based 3D cyber-physical content authoring from a single 2D image, 3) cached k-d tree generation for fast direct 2D-to-3D matching, 4) a double-stage matching algorithm with a single indexed k-d tree, and 5) k-means clustering of 3D physical models with geo-information. After each solution is discussed in technical detail, the perceived benefits and limitations of the research are presented alongside validation results. / Ph. D.
|
203 |
Data Analytics for Statistical Learning
Komolafe, Tomilayo A. 05 February 2019 (has links)
The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. Big data is a widely used term without a clear definition. The difference between big data and traditional data can be characterized by four Vs: velocity (speed at which data is generated), volume (amount of data generated), variety (the data can take on different forms), and veracity (the data may be of poor/unknown quality). As many industries begin to recognize the value of big data, organizations try to capture it through means such as side-channel data in a manufacturing operation, unstructured text-data reported by healthcare personnel, various demographic information of households from census surveys, and the range of communication data that define communities and social networks.
Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert the data into a usable format; finally, the pre-processed data is analyzed using statistical tools. In this stage, called statistical learning of the data, analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data, and (2) identify anomalies in the process.
However, several open challenges still exist in this framework for big data analytics. Recently, data types such as free-text data are also being captured. Although many established processing techniques exist for other data types, free-text data comes from a wide range of individuals and is subject to syntax, grammar, language, and colloquialisms that require substantially different processing approaches. Once the data is processed, open challenges still exist in the statistical learning step of understanding the data.
Statistical learning aims to satisfy two objectives: (1) develop a model that highlights general patterns in the data, and (2) create a signaling mechanism to identify whether outliers are present in the data. Statistical modeling is widely utilized, as researchers have created a variety of statistical models to explain everyday phenomena such as energy usage behavior, traffic patterns, and stock market behavior, among others. However, new applications of big data with increasingly varied designs present interesting challenges. Consider the example of free-text analysis posed above. There is renewed interest in modeling free-text narratives from sources such as online reviews, customer complaints, or patient safety event reports into intuitive themes or topics. As previously mentioned, documents describing the same phenomena can vary widely in their word usage and structure.
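As a baseline illustration (not the community extraction model introduced in this work), free-text documents can be grouped into rough themes with TF-IDF features and k-means; the toy patient-safety narratives below are invented for the example.

```python
# Baseline sketch: group free-text documents into rough topics with
# TF-IDF features and k-means clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "patient fell while transferring from bed to wheelchair",
    "wrong medication dose administered at night shift",
    "patient slipped on wet floor near nursing station",
    "medication order entered for the wrong patient",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for doc, label in zip(docs, labels):
    print(label, doc)
```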
Another recent area of interest in statistical learning is using the environmental conditions in which people live, work, and grow to infer their quality of life. It is well established that social factors play a role in overall health outcomes; however, the clinical application of these social determinants of health remains a recent and open problem. These examples are just a few of many wherein new applications of big data pose complex challenges requiring thoughtful and inventive approaches to processing, analyzing, and modeling data.
Although a large body of research exists in the area of anomaly detection, increasingly complicated data sources (such as side-channel data or network-based data) present equally convoluted challenges. For effective anomaly detection, analysts define parameters and rules, so that when large collections of raw data are aggregated, pieces of data that do not conform are easily noticed and flagged.
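A minimal version of such rule-based flagging is a control-chart check: estimate the center and spread from in-control data, then flag observations outside the control limits. The three-sigma rule and sample sizes below are conventional SPC choices, not this work's specific method.

```python
# Minimal sketch of rule-based anomaly flagging: estimate center and spread
# from in-control data, then flag points outside three-sigma control limits.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(loc=10.0, scale=0.5, size=500)     # in-control readings
stream = np.concatenate([rng.normal(10.0, 0.5, 100),
                         rng.normal(12.0, 0.5, 5)])      # shift injected at the end

mu, sigma = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = mu + 3 * sigma, mu - 3 * sigma                # control limits
flags = np.where((stream > ucl) | (stream < lcl))[0]
print("flagged indices:", flags)
```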
In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare, manufacturing, and social-networking industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, text data poses several distinct challenges, so I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection:
o I address the research area of statistical modeling in two ways:
- There are open challenges in defining models to characterize text data. I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups.
- In health care, it is well established that social factors play a role in overall health outcomes; however, developing a statistical model that characterizes these relationships is an open research area. I developed statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors.
o I address the research area of anomaly detection in two ways:
- A variety of anomaly detection techniques already exist; however, some of these methods lack rigorous statistical investigation, making them ineffective for practitioners. I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements.
- Manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks. I developed a sensor-based side-channel technique for anomaly detection in a manufacturing process. / Ph. D. / The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. The fields of manufacturing and healthcare are two examples of industries currently undergoing significant transformations due to the rise of big data. The addition of large sensory systems is changing how parts are manufactured and inspected, and the prevalence of Health Information Technology (HIT) systems is likewise changing how healthcare services are delivered. These industries are turning to big data analytics in the hope of acquiring many of the benefits other sectors are experiencing, including reduced cost, improved safety, and boosted productivity. However, many challenges exist throughout the big data analytics framework, from pre-processing raw data, to statistical modeling of the data, to identifying anomalies present in the data or process. This work offers significant contributions in each of the aforementioned areas and includes practical real-world applications.
Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert the data into a usable format; finally, the pre-processed data is analyzed using statistical tools. In this stage, called 'statistical learning of the data', analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data, and (2) identify anomalies or outliers in the process.
In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare and manufacturing industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, text data poses several distinct challenges, so I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection:
o I address the research area of statistical modeling in two ways:
- There are open challenges in defining models to characterize text data. I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups.
- In health care, it is well established that social factors play a role in overall health outcomes; however, developing a statistical model that characterizes these relationships is an open research area. I developed statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors.
o I address the research area of anomaly detection in two ways:
- A variety of anomaly detection techniques already exist; however, some of these methods lack rigorous statistical investigation, making them ineffective for practitioners. I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements.
- Manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks. I developed a sensor-based side-channel technique for anomaly detection in a manufacturing process.
|
204 |
Exploring the Cooperative Abilities Between Homogeneous Robotic Arms: An Explorative Study of Robotics and Reinforcement Learning
Järnil Pérez, Tomas January 2024 (has links)
The field of robotics has witnessed significant advancements in recent years, with robotic arms playing a pivotal role in various industrial and research applications. In large-scale manufacturing, manual labour has been replaced with robots due to their efficiency in time and cost. However, to replace human labour, robots need to collaborate the way humans do. This master's thesis, conducted at the Cyber-physical Systems Lab (CPS-Lab) at Uppsala University, delves into the intricacies of cooperative interactions between two homogeneous robotic arms powered by machine learning algorithms, aiming to explore their collective capabilities. The project focuses on implementing a multi-agent cart-pole experiment that challenges the two robotic arms' cooperative abilities. First, the problem is simulated, and afterwards implemented in real life. The experiment is evaluated by the performance of the various machine learning algorithms tested. In the end, the simulation yielded poor results due to the complexity of the problem and the lack of proper hyperparameter tuning. The real-life experiment failed immediately, caused by robotic arms that were not designed for this application, a large simulation-to-reality gap, and latency in the controller design. Overall, the results show that the experiment was challenging for the robotic arms, but that it might be possible under different circumstances.
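For orientation, the simulate-first step can be sketched with the classic single cart-pole from gymnasium; the thesis's two-arm, physical, multi-agent variant is substantially harder and is not reproduced here. The hand-coded policy below is a stand-in for the machine learning algorithms the thesis evaluates.

```python
# Sketch of the simulate-first step with the classic single cart-pole
# (gymnasium's CartPole-v1); the two-arm multi-agent variant is not shown.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(500):
    # Trivial hand-coded policy: push in the direction the pole is falling.
    action = 1 if obs[2] > 0 else 0       # obs[2] is the pole angle
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break
env.close()
print("episode return:", total_reward)
```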
|
205 |
Enhancing Trust in Reconfigurable Hardware Systems
Venugopalan, Vivek 01 March 2017 (has links)
A Cyber-Physical System (CPS) is a large-scale, distributed, embedded system consisting of various components that are glued together to realize control, computation, and communication functions. Although these systems are complex, they are ubiquitous in the Internet of Things (IoT) era of autonomous vehicles/drones, smart homes, smart grids, etc., where everything is connected. These systems are vulnerable to unauthorized penetration due to the absence of proper security features and safeguards to protect important information. Examples such as the typewriter hack, in which subversive chips leaked keystroke data, and hardware backdoors that crippled anti-aircraft guns during an attack demonstrate the need to protect all system functions. As more attention is paid to securing such systems, establishing trust in untrusted components at the integration stage becomes a high priority.
This work builds on a red-black security system, in which an architecture testbed is developed with critical and non-critical IP cores and subjected to a variety of Hardware Trojan Threats (HTTs). These attacks defeat the classic trusted-hardware model assumptions and demonstrate the ability of Trojans to evade detection methods based on physical characteristics. A novel metric is defined for hardware Trojan detection, termed the HTT Detectability Metric (HDM), which leverages a weighted combination of normalized physical parameters. Security analysis results show that using HDM, 86% of the implemented Trojans were detected, compared to using power consumption, timing variation, or resource utilization alone. This led to the formulation of security requirements for the development of a novel, distributed, and secure methodology for enhancing trust in systems developed under untrusted environments, called FIDelity Enhancing Security (FIDES). FIDES employs a decentralized information flow control (DIFC) model that enables safe and distributed information flows between various elements of the system, such as IP cores, physical memory, and registers. The DIFC approach annotates/tags each data item with its sensitivity level and the identity of the participating entities during communication.
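The idea behind an HDM-style score can be sketched as follows: normalize each side-channel's deviation from a trusted (golden) reference and combine the deviations with weights. The parameters, weights, and threshold below are assumptions for illustration, not the dissertation's calibrated values.

```python
# Illustrative sketch of an HDM-style score: normalize each side-channel
# deviation from a trusted (golden) design, then combine with weights.
# All numbers below are assumed for illustration.
golden = {"power_mW": 120.0, "delay_ns": 5.0, "luts": 4000}
observed = {"power_mW": 131.0, "delay_ns": 5.4, "luts": 4180}
weights = {"power_mW": 0.5, "delay_ns": 0.3, "luts": 0.2}

hdm = sum(weights[k] * abs(observed[k] - golden[k]) / golden[k] for k in golden)
THRESHOLD = 0.03   # assumed decision threshold
print(f"HDM = {hdm:.4f} ->", "Trojan suspected" if hdm > THRESHOLD else "looks clean")
```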
Trust-Enhanced FIDES (TE-FIDES) is proposed to address the vulnerabilities arising from the declassification process during communication between third-party soft IP cores. TE-FIDES employs a secure enclave approach for preserving the confidentiality of the sensitive information in the system. TE-FIDES is evaluated by targeting an IoT-based smart grid CPS application, where malicious third-party soft IP cores are prevented from causing a system blackout. The resulting hardware implementation using TE-FIDES is found to be resilient to multiple hardware Trojan attacks. / Ph. D. / The Internet-of-Things (IoT) has emerged as one of the most innovative multidisciplinary paradigms, combining heterogeneous sensors, software architectures, embedded hardware systems, and data analytics. With the growth in deployment of IoT systems, the security of the sensors and the trustworthiness of the data exchanged are of paramount significance. IoT security approaches are derived from the vulnerabilities existing in cyber-physical systems (CPS) and the countermeasures designed against them. An unauthorized penetration due to the absence of safeguards can cripple the system and leak sensitive data. This dissertation studies the vulnerabilities posed by the presence of hardware Trojans in such IoT-based CPS. FIDelity Enhancing Security (FIDES), named after the Roman goddess of trust, is a novel, distributed and secure methodology proposed to address the security requirements and enhance the trust of systems developed in untrusted environments. FIDES utilizes a distributed scheme that monitors the communication between the Intellectual Property (IP) cores using tags. Trust-Enhanced FIDES (TE-FIDES) is proposed to reduce the vulnerabilities arising from the declassification process of the third-party soft IP cores. TE-FIDES employs a secure enclave approach for preserving the integrity of the sensitive information in the system. In addition, TE-FIDES also uses a trust metric to record snapshots of each IP core's state during the declassification process. TE-FIDES is evaluated by mapping an IoT-based CPS application and subjecting it to a variety of hardware Trojan attacks. The performance costs for resilient and trustworthy operation of the TE-FIDES implementation are evaluated, and TE-FIDES proves to be resilient to the attacks with acceptable cyber costs.
|
206 |
Implementing telerobotics in industrial assembling
Tébar, Erica January 2024 (has links)
Remote control of automation systems is undeniably a crucial aspect of their development, as it eliminates unnecessary travel to operate them. Therefore, a framework is proposed not only for controlling an industrial robotic system but also for monitoring its behaviour and environment to ensure efficient and secure control over it. This project is carried out within the field of robotics, although its application can extend to other domains, such as automotive, among others. In this project, a system based on Industry 5.0 and Cyber-Physical Systems is developed and implemented that is capable of storing and recovering the data collected from a robotic station while allowing its control through a user interface, giving the operator the opportunity to control an industrial assembly process remotely in a reliable and safe way.
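The store-and-recover part of such a framework can be sketched with a small telemetry log; the schema, field names, and SQLite backend below are illustrative assumptions rather than the thesis's actual implementation.

```python
# Minimal sketch of storing and recovering robot-station telemetry,
# using SQLite from the standard library; the table layout is assumed.
import sqlite3
import time

conn = sqlite3.connect("station.db")
conn.execute("""CREATE TABLE IF NOT EXISTS telemetry (
                    ts REAL, joint INTEGER, position REAL, torque REAL)""")

def record(joint: int, position: float, torque: float) -> None:
    conn.execute("INSERT INTO telemetry VALUES (?, ?, ?, ?)",
                 (time.time(), joint, position, torque))
    conn.commit()

def recover(joint: int, since: float):
    return conn.execute(
        "SELECT ts, position, torque FROM telemetry WHERE joint = ? AND ts >= ?",
        (joint, since)).fetchall()

record(joint=1, position=0.42, torque=1.7)
print(recover(joint=1, since=time.time() - 3600))
```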
|
207 |
Unveiling automation potential through a better understanding of ideal cycle time
Wilbers, S., Kupper, S., Günther, N., van de Sand, R., Prell, B., Speck, S., Reiff-Stephan, J. 20 February 2025 (has links)
This publication demonstrates that determining the maximum speed, or Ideal Cycle Time (ICT), of machinery or cyber-physical systems is crucial for uncovering the limits of automation in a given system, potentially increasing Overall Equipment Effectiveness (OEE) and identifying opportunities for further digitization, automation, and AI integration. Based on a literature review and expert interviews, methods for establishing ICT mentioned in the literature were identified and cross-checked against what practitioners actually use in operation and how they apply them. The identified methods were: Empirical Measurement and Data Analysis, Time Studies, Statistical Process Control (SPC), Benchmarking, Simulation and Modeling, Expert Judgment, and Continuous Improvement Practices. We contrast these with insights obtained from interviews conducted with experts from companies in the German federal state of Brandenburg, representing diverse industries and sectors. Findings suggest that while companies have some awareness of their ICT or maximum operational speeds, they often lack a structured method for determining them and frequently apply combinations of established methods inconsistently. We deduce that a formalized approach to defining ICT can better reveal system limitations and the potential for expanding them through advanced automation. We argue that a well-defined ICT is essential for pushing the boundaries of automated systems, contributing to more effective, Humanity-Centered Automation (HCA) solutions.
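The role of ICT in OEE follows directly from the textbook decomposition OEE = Availability × Performance × Quality, where Performance = (ICT × total count) / run time, so a poorly founded ICT distorts the performance factor. The sketch below uses made-up shift numbers.

```python
# Textbook OEE calculation showing where the Ideal Cycle Time enters:
# an over-estimated ICT inflates apparent losses, and vice versa.
planned_time_min = 480.0      # one shift
downtime_min = 60.0
run_time_min = planned_time_min - downtime_min
total_count = 18000           # parts produced
good_count = 17500
ict_min = 0.02                # assumed Ideal Cycle Time: 1.2 s per part

availability = run_time_min / planned_time_min
performance = (ict_min * total_count) / run_time_min
quality = good_count / total_count
oee = availability * performance * quality
print(f"A={availability:.3f} P={performance:.3f} Q={quality:.3f} OEE={oee:.3f}")
```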
|
208 |
Design, Implementation and Validation of Resource-Aware and Resilient Wireless Networked Control Systems
Araújo, José January 2014 (has links)
Networked control over wireless networks is of growing importance in many application domains such as industrial control, building automation, and transportation systems. Wide deployment, however, requires systematic design tools to enable efficient resource usage while guaranteeing closed-loop control performance. The control system may be greatly affected by the inherent imperfections and limitations of the wireless medium and by the malfunction of system components. In this thesis, we make five important contributions that address these issues. In the first contribution, we consider event- and self-triggered control and investigate how to efficiently tune and execute these paradigms for appropriate control performance. Communication strategies for aperiodic control are devised, where we jointly address the selection of medium-access control and scheduling policies. Experimental results show that the best trade-off is obtained by a hybrid scheme, combining event- and self-triggered control together with contention-based and contention-free medium-access control. The second contribution proposes an event-based method to select between fast and slow periodic sampling rates. The approach is based on linear quadratic control, and the event condition is a quadratic function of the system state. Numerical and experimental results show that this hybrid controller is able to reduce the average sampling rate in comparison to a traditional periodic controller, while achieving the same closed-loop control performance. In the third contribution, we develop compensation methods for out-of-order communications and time-varying delays using a game-theoretic minimax control framework. We devise a linear temporal coding strategy where the sensor combines the current and previous measurements into a single packet to be transmitted. An experimental evaluation is performed in a multi-hop networked control scenario with a routing-layer vulnerability exploited by a malicious application. The experimental and numerical results show the advantages of the proposed compensation schemes. The fourth contribution proposes a distributed reconfiguration method for sensor and actuator networks. We consider systems where sensors and actuators cooperate to recover from faults. Reconfiguration is performed to achieve model matching, while minimizing the steady-state estimation error covariance and a linear quadratic control cost. The reconfiguration scheme is implemented in a room-heating testbed, and experimental results demonstrate the method's ability to automatically reconfigure the faulty system in a distributed and fast manner. The final contribution is a co-simulator, which combines the control system simulator Simulink with the wireless network simulator COOJA. The co-simulator integrates physical plant dynamics with realistic wireless network models and the actual embedded software running on the networked devices. Hence, it allows for the validation of the complete wireless networked control system, including the study of the interactions between software and hardware components.
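The second contribution's mechanism can be sketched in a few lines: sample at the fast rate only while a quadratic function of the state exceeds a threshold, and fall back to the slow rate otherwise. The scalar plant, gain, and threshold below are assumptions, not the thesis's design.

```python
# Minimal sketch of event-based switching between slow and fast sampling:
# sample fast only while the quadratic event condition Q * x**2 exceeds
# a threshold. All parameters are illustrative assumptions.
import numpy as np

a, b, k = 1.02, 0.5, 1.2        # open-loop unstable plant, state-feedback gain
Q, threshold = 1.0, 0.05        # event condition: Q * x**2 > threshold
x, t, samples = 1.0, 0.0, 0
rng = np.random.default_rng(1)

while t < 50.0:
    fast = Q * x**2 > threshold
    period = 0.1 if fast else 1.0       # switch between fast and slow rates
    u = -k * x                          # control is updated at each sample
    # For simplicity the same one-step model is reused at both rates;
    # a real design would re-discretize the plant for each period.
    x = a * x + b * u + rng.normal(scale=0.01)
    t += period
    samples += 1

print(f"{samples} samples over 50 s (a pure fast-rate design would use 500)")
```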
|
209 |
Engineering complex systems with multigroup agents
Case, Denise Marie January 1900 (has links)
Doctor of Philosophy / Computing and Information Sciences / Scott A. DeLoach / As sensor prices drop and computing devices continue to become more compact and powerful, computing capabilities are being embedded throughout our physical environment. Connecting these devices in cyber-physical systems (CPS) enables applications with significant societal impact and economic benefit. However, engineering CPS poses modeling, architectural, and engineering challenges that must be addressed before the desired benefits can be fully realized. For the cyber parts of CPS, two decades of work in the design of autonomous agents and multiagent systems (MAS) offer design principles for distributed intelligent systems and formalizations for agent-oriented software engineering (AOSE). MAS foundations offer a natural fit for enabling distributed interacting devices. In some cases, complex control structures such as holarchies can be advantageous. These can motivate complex organizational strategies when implementing such systems with a MAS, and some designs may require agents to act in multiple groups simultaneously. Such agents must be able to manage their multiple associations and assignments in a consistent and unambiguous way. This thesis shows how designing agents as systems of intelligent subagents offers a reusable and practical approach to designing complex systems. It presents a set of flexible, reusable components developed for OBAA++, an organization-based architecture for single-group MAS, and shows how these components were used to develop the Adaptive Architecture for Systems of Intelligent Systems (AASIS), which enables multigroup agents suitable for complex, multigroup MAS. This work illustrates the reusability and flexibility of the approach by using AASIS to simulate a CPS for an intelligent power distribution system (IPDS) operating two multigroup MAS concurrently: one providing continuous voltage control and a second conducting discrete power auctions near sources of distributed generation.
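The bookkeeping such a multigroup agent needs can be sketched with assignments keyed by (group, role), so that simultaneous memberships stay unambiguous. The names and fields below are illustrative, not the AASIS implementation.

```python
# Sketch of multigroup-agent bookkeeping: each assignment is keyed by
# (group, role), so simultaneous memberships never collide.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Assignment:
    group: str
    role: str
    task: str

@dataclass
class MultigroupAgent:
    name: str
    assignments: dict = field(default_factory=dict)  # (group, role) -> Assignment

    def assign(self, group: str, role: str, task: str) -> None:
        self.assignments[(group, role)] = Assignment(group, role, task)

    def tasks_for(self, group: str):
        return [a.task for (g, _), a in self.assignments.items() if g == group]

agent = MultigroupAgent("feeder_31")
agent.assign("voltage_control", "sensor", "report local voltage")
agent.assign("power_auction", "bidder", "bid surplus generation")
print(agent.tasks_for("power_auction"))
```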
|
210 |
Echtzeitfähige Softwareagenten zur Realisierung cyber-physischer Produktionssysteme [Real-Time-Capable Software Agents for Realizing Cyber-Physical Production Systems]
Theiss, Sebastian 13 October 2016 (links) (PDF)
Current economic trends, such as increasing globalization and the growing technical sophistication and individualization of many consumer goods, lead to rising complexity and high flexibility requirements in the automation technology used to manufacture these goods. One concept for addressing these requirements is to design automated plants as modular systems of flexibly combinable cyber-physical components. The eponymous union of a mechatronic part with local computing capacity enables manufacturers of such components to prepare ready-to-use software building blocks for typical control, operation, or diagnostic tasks, thereby significantly reducing the (re-)engineering effort involved in (re)designing the overall system. However, this vision places high demands on the underlying software architecture that are not fully met by the technologies currently used to realize automated systems.
The agent-orientation paradigm is a viable approach to realizing such loosely coupled distributed systems and, through powerful interaction mechanisms and the tight integration of semantic knowledge, promises additional functionality: components designed as agents could partially take over their own logical interconnection during commissioning, after retooling, or in response to operational disturbances. This gives rise to capabilities such as self-configuration and self-healing, summarized in the literature under the term Self-X. However, the lack of real-time capability, particularly with respect to these interaction mechanisms, has so far limited the applicability of agent systems in automation and hindered the exploitation of these potentials.
This dissertation therefore designs a real-time-capable runtime environment for software agents and then reworks existing communication mechanisms with regard to their real-time capability. In this context, the concept of semantic addressing creates a versatile way to send messages to selected groups of agents with certain, semantically described properties. The taxonomy trees used for knowledge representation offer a degree of expressiveness sufficient for many tasks and also permit processing under hard real-time conditions. Finally, the mechanisms created are captured in a response-time model with which the timely reaction of an agent system to locally or distributedly handled events can be checked and proven. This addresses a main point of criticism of agent systems and could lead to a lasting increase in the acceptance of the agent paradigm. While large parts of the developed solution can be understood as generally applicable fundamental research, automation-specific concerns are referenced throughout in the formulation of requirements, the presentation of examples, and the explanation of design decisions. The thesis concludes with a critical evaluation of the results against the background of possible use in future automation systems, rounding out the overall picture.
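Semantic addressing over a taxonomy tree can be sketched as follows: a message addressed to a taxonomy node is delivered to every agent whose declared type lies at or below that node, and the bounded tree depth is what keeps the matching analyzable under hard real-time constraints. The example taxonomy and agent names are assumptions, not the dissertation's vocabulary.

```python
# Sketch of semantic addressing over a taxonomy tree: a message addressed to
# a node reaches every agent whose declared type is that node or a descendant.
# Matching cost is bounded by tree depth, which suits real-time analysis.
taxonomy = {                      # child -> parent ("is-a" edges)
    "TemperatureSensor": "Sensor",
    "PressureSensor": "Sensor",
    "Sensor": "Device",
    "Actuator": "Device",
}

def is_a(node: str, ancestor: str) -> bool:
    while node is not None:
        if node == ancestor:
            return True
        node = taxonomy.get(node)   # None once the root is passed
    return False

agents = {"t1": "TemperatureSensor", "p1": "PressureSensor", "v1": "Actuator"}

def semantic_send(target_type: str, message: str) -> None:
    for name, typ in agents.items():
        if is_a(typ, target_type):
            print(f"deliver to {name}: {message}")

semantic_send("Sensor", "report current reading")   # reaches t1 and p1
```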
|