1 |
Development of Integration Algorithms for Vision/Force Robot Control with Automatic Decision System. Bdiwi, Mohamad, 12 August 2014.
In advanced robot applications, the challenge today is that the robot should perform different successive subtasks to achieve one or more complicated tasks, much as a human would. Such tasks require combining different kinds of sensors in order to obtain complete information about the work environment. From the point of view of control, however, more sensors mean more possible structures for the control system. Vision and force sensors are the most common external sensors in robot systems, and the literature offers numerous control algorithms and structures for vision/force robot control, e.g. shared control, traded control, etc. The open problems in integrating vision/force robot control can be summarized as follows:
• How to define which subspaces should be vision-, position- or force-controlled?
• When should the controller switch from one control mode to another?
• How to ensure that the visual information can be used reliably?
• How to define the most appropriate vision/force control structure?
In many previous works, a single vision/force control structure, pre-defined by the programmer, is used throughout a specified task. If the task is modified or changed, it becomes very complicated for the user to describe the task and to define the most appropriate vision/force robot control, especially if the user is inexperienced. Furthermore, vision and force sensors are often used only as simple feedback (e.g. the vision sensor usually serves as a position estimator) or for obstacle avoidance. As a result, much useful sensor information that could help the robot perform the task autonomously is lost.
In our opinion, this failure to define the most appropriate vision/force robot control, together with the weak utilization of all the information the sensors could provide, imposes important limits that prevent the robot from being versatile, autonomous, dependable and user-friendly. The scope of this thesis is therefore to help increase autonomy, versatility, dependability and user-friendliness in those areas of robotics that require vision/force integration. More concretely:
1. Autonomy: in the form of an automatic decision system that defines the most appropriate vision/force control modes for different kinds of tasks and chooses the best vision/force control structure depending on the surrounding environment and a priori knowledge (a minimal sketch of such a decision layer is given at the end of this abstract).
2. Versatility: by preparing relevant scenarios for different situations in which both visual servoing and force control are necessary and indispensable.
3. Dependability: in the sense that the robot should rely on its own sensors rather than on reprogramming and human intervention. In other words, the robot system should use all the information the vision and force sensors can provide, not only about the target object but also about features extracted from the whole scene.
4. User-friendliness: by designing a high-level description of the task, the object and the sensor configuration that is suitable even for an inexperienced user.
If these properties are achieved to a reasonable degree, the proposed robot system can:
• Perform different successive and complex tasks.
• Grasp/contact and track imprecisely placed objects with different poses.
• Automatically decide the most appropriate combination of vision/force feedback for every task and react immediately, from one control cycle to the next, to changes caused by unforeseen events.
• Benefit from all the advantages of different vision/force control structures.
• Benefit from all the information provided by the sensors.
• Reduce human intervention and reprogramming during the execution of the task.
• Facilitate task description and the entry of a priori knowledge for the user, even if he/she is inexperienced.
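The abstract above does not specify an implementation, but as a rough illustration of point 1, the following Python sketch shows how an automatic decision layer might assign a control mode to each task-space axis and blend vision, force and position feedback through selection matrices. The mode set, thresholds, gains and helper functions are hypothetical and are not taken from the thesis.

```python
import numpy as np

# Hypothetical per-axis control modes for a 3-DOF translational task space.
VISION, FORCE, POSITION = "vision", "force", "position"

def decide_modes(force_meas, target_visible, contact_threshold=2.0):
    """Assign a control mode to each Cartesian axis (x, y, z).

    Axes with significant measured contact force become force-controlled;
    the remaining axes are vision-controlled if the target is visible,
    otherwise they fall back to plain position control.
    The threshold (in newtons) is an illustrative assumption.
    """
    modes = []
    for f in force_meas:
        if abs(f) > contact_threshold:
            modes.append(FORCE)
        elif target_visible:
            modes.append(VISION)
        else:
            modes.append(POSITION)
    return modes

def hybrid_command(modes, vision_err, force_err, pos_err,
                   k_v=0.8, k_f=0.002, k_p=0.5):
    """Blend the three feedback signals with diagonal selection matrices,
    so each axis is driven by exactly one sensor modality."""
    S_v = np.diag([m == VISION for m in modes]).astype(float)
    S_f = np.diag([m == FORCE for m in modes]).astype(float)
    S_p = np.diag([m == POSITION for m in modes]).astype(float)
    # Task-space velocity command (proportional gains are placeholders).
    return S_v @ (k_v * vision_err) + S_f @ (k_f * force_err) + S_p @ (k_p * pos_err)

# Example control cycle: contact detected along z, target visible in the image.
modes = decide_modes(force_meas=[0.1, 0.3, 6.5], target_visible=True)
v_cmd = hybrid_command(modes,
                       vision_err=np.array([0.02, -0.01, 0.0]),
                       force_err=np.array([0.0, 0.0, -1.5]),
                       pos_err=np.zeros(3))
print(modes, v_cmd)
```

Re-running decide_modes every control cycle is what allows the controller to switch modes immediately when contact is made or lost, as described in the bullet list above.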
|
2 |
Gestures in human-robot interaction / development of intuitive gesture vocabularies and robust gesture recognition. Bodiroža, Saša, 16 February 2017.
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. Therefore, they can be used effectively in human-robot interaction, and in human-machine interaction in general, as a way for a robot or a machine to infer a meaning. In order for people to use gestures intuitively and to understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary indicates which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, i.e. the classification of body motion into discrete gesture classes using pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, with a focus on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. Building on the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms.
A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained with a small number of training samples and used in real-life scenarios, lowering the effect of environmental constraints and gesture properties. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
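The abstract gives no implementation details, but a minimal sketch of the general idea, dynamic time warping against a single stored template per gesture class (one-shot learning), might look as follows. The feature representation, the toy trajectories and the rejection threshold are invented for illustration and are not the thesis' actual algorithm.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two trajectories,
    each given as an array of shape (time, features)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(gesture, templates, reject_threshold=15.0):
    """One-shot nearest-template classification: each class is represented
    by a single recorded example. The rejection threshold is a placeholder."""
    best_label, best_dist = None, np.inf
    for label, template in templates.items():
        d = dtw_distance(gesture, template)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist < reject_threshold else "unknown"

# Toy 2-D hand trajectories standing in for real motion-capture features.
templates = {
    "wave":  np.array([[0, 0], [1, 1], [0, 2], [1, 3], [0, 4]], dtype=float),
    "point": np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]], dtype=float),
}
observed = np.array([[0, 0], [1.1, 0.9], [0.1, 2.2], [0.9, 3.1]], dtype=float)
print(classify(observed, templates))
```

Because DTW aligns sequences of different lengths and speeds, a single template per class can already absorb much of the timing variability between users, which is what makes the one-shot setting workable.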
|
3 |
Learning Continuous Human-Robot Interactions from Human-Human Demonstrations. Vogt, David, 02 March 2018.
This dissertation develops a data-driven method for machine learning of human-robot interactions from human-human demonstrations. During a training phase, the movements of two interaction partners are recorded via motion capture and learned in a two-person interaction model. At runtime, the model is used both to recognize the movements of the human interaction partner and to generate adapted robot movements. The performance of the approach is evaluated in three complex applications, each of which requires continuous motion coordination between human and robot. The result of the dissertation is a learning method that enables intuitive, goal-directed and safe collaboration with robots.
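As a purely illustrative reading of this abstract (the thesis' actual interaction model is not described here), the sketch below stores paired motion-capture segments from human-human demonstrations and, at runtime, retrieves the partner-side segment whose human-side counterpart best matches the observed human motion. The data, the class interface and the nearest-neighbour matching are placeholder assumptions.

```python
import numpy as np

class InteractionModel:
    """Toy two-person interaction model: stores paired (human, partner)
    motion segments from demonstrations and, given an observed human
    segment, returns an adapted motion for the robot to play back."""

    def __init__(self):
        self.pairs = []  # list of (human_segment, partner_segment)

    def add_demonstration(self, human_segment, partner_segment):
        self.pairs.append((np.asarray(human_segment, dtype=float),
                           np.asarray(partner_segment, dtype=float)))

    def generate_robot_motion(self, observed_human_segment):
        obs = np.asarray(observed_human_segment, dtype=float)
        # Nearest-neighbour lookup over the demonstrated human segments,
        # standing in for the learned recognition step described above.
        dists = [np.linalg.norm(obs - h) for h, _ in self.pairs]
        _, partner = self.pairs[int(np.argmin(dists))]
        return partner  # the robot replays the matched partner motion

# Two demonstrated hand-over variants, encoded as tiny 1-D position traces.
model = InteractionModel()
model.add_demonstration(human_segment=[0.0, 0.2, 0.4], partner_segment=[0.4, 0.2, 0.0])
model.add_demonstration(human_segment=[0.0, 0.1, 0.1], partner_segment=[0.1, 0.1, 0.0])
print(model.generate_robot_motion([0.0, 0.25, 0.45]))
```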
|
4 |
Multimodal Learning Companions. Yun, Hae Seon, 20 December 2024.
Technologies such as sensors can help in understanding learners' progress and states (e.g. boredom, giving-up behaviour), and these detected states can be used to develop a support system that acts as a companion. To this end, this dissertation investigates three research questions: 1) How can multimodal sensor data, such as physiological and embedded sensor data, be used to design learning companions that provide learners with an awareness of their states? 2) How can learning companions be designed for different modality interfaces, such as screen-based agents and embodied robots, in order to investigate various means of providing effective advice to learners? 3) How can non-technical users be supported in designing and using multimodal learning companions in their various use cases? To answer these research questions, a design-based research (DBR) methodology was used, considering both theory and practice. The derived design considerations guided the design of the learning companions as well as of the platform for building multimodal learning companions. The findings of this dissertation reveal an association between changes in physiological sensor values and emotional arousal, which is also supported by prior studies. It was also found that sensor devices such as mobile and wearable devices, together with Facial Expression Recognition (FER), can add to the methods for detecting learners' states. Furthermore, designing a learning companion requires consideration of the different modalities of the involved technology, in addition to the appropriate design of application scenarios. It is also necessary to integrate the stakeholders (e.g. teachers) into the design process while considering the data privacy of the target users (e.g. students). The dissertation employs DBR to investigate real-life educational issues, considering both theories and practical constraints. Even though the studies conducted are limited, involving only small samples and thus lacking generalizability, some authentic educational needs were derived, and the corresponding solutions were devised and tested in this dissertation.
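To make the first research question more concrete, here is a minimal, purely hypothetical sketch of fusing a wearable heart-rate stream with a facial expression label to flag a learner state. The thresholds, state labels and decision rule are illustrative assumptions, not the detection method used in the dissertation.

```python
from statistics import mean

def detect_learner_state(heart_rates, baseline_hr, facial_expression):
    """Very rough rule-based fusion of two modalities.

    heart_rates: recent heart-rate samples (beats per minute) from a wearable.
    baseline_hr: the learner's resting heart rate measured beforehand.
    facial_expression: label from a facial expression recognition model,
                       e.g. "neutral", "frustrated", "bored" (hypothetical set).
    """
    arousal = mean(heart_rates) - baseline_hr  # elevated HR as a crude arousal proxy
    if facial_expression == "frustrated" and arousal > 10:
        return "needs_support"      # companion offers a hint or encouragement
    if facial_expression == "bored" and arousal < 2:
        return "needs_engagement"   # companion suggests a more challenging task
    return "on_track"

# Example readings from a short learning session.
print(detect_learner_state(heart_rates=[92, 95, 97], baseline_hr=78,
                           facial_expression="frustrated"))
```

In a real companion, a rule like this would only be a starting point; the dissertation's findings suggest combining several modalities and validating the detected states with the learners and teachers involved.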
|