
Imitação de expressões faciais para aprendizado de emoções em robótica social / Imitation of facial expressions for emotion learning in social robotics

Santos, Valéria de Carvalho. 12 July 2012.
Sociable robots must be able to interact, communicate, understand, and relate to humans in a natural way. Although many social robots have been developed successfully, there are still many limitations to overcome. Important advances are needed in the development of mechanisms that allow more realistic interactions and that regulate the relationship between robots and humans. One way to make interactions more realistic is through facial expressions of emotion. In this context, this work gives a virtual robotic head the ability to imitate facial expressions of emotion, with the goal of enabling more realistic and lasting interactions with humans. To this end, learning by imitation is used, in which the robotic head mimics facial expressions made by a user during social interaction. The imitation learning is performed with artificial neural networks. The facial expressions considered in this work are: neutral, happiness, anger, surprise, and sadness. Experimental results are presented that show the good performance of the proposed imitation system.
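The pipeline the abstract describes (facial-expression features in, one of five expression labels out) can be sketched as a small feed-forward network. This is an illustrative reconstruction, not the thesis's actual architecture: the feature dimensionality, layer sizes, training procedure, and data below are all assumptions, with synthetic clusters standing in for real image-derived features.

```python
import numpy as np

EXPRESSIONS = ["neutral", "happiness", "anger", "surprise", "sadness"]
rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class ExpressionNet:
    """One-hidden-layer classifier from facial-feature vectors to expressions."""
    def __init__(self, n_features=10, n_hidden=16, n_classes=5, lr=0.3):
        self.W1 = rng.normal(0, 0.3, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.3, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return softmax(self.h @ self.W2 + self.b2)

    def train_step(self, X, y_onehot):
        p = self.forward(X)
        d2 = (p - y_onehot) / len(X)              # softmax cross-entropy gradient
        dh = (d2 @ self.W2.T) * (1 - self.h ** 2)  # backprop through tanh
        self.W2 -= self.lr * self.h.T @ d2
        self.b2 -= self.lr * d2.sum(axis=0)
        self.W1 -= self.lr * X.T @ dh
        self.b1 -= self.lr * dh.sum(axis=0)

    def predict(self, X):
        return [EXPRESSIONS[i] for i in self.forward(X).argmax(axis=1)]

# Synthetic stand-in for extracted facial features: one well-separated
# cluster per expression (the thesis would use real image-derived features).
X = np.vstack([4 * np.eye(10)[c] + rng.normal(0, 0.5, (20, 10)) for c in range(5)])
y = np.repeat(np.arange(5), 20)
net = ExpressionNet()
for _ in range(500):
    net.train_step(X, np.eye(5)[y])
accuracy = np.mean([p == EXPRESSIONS[c] for p, c in zip(net.predict(X), y)])
```

During interaction, each predicted label would then drive the virtual head to render the matching expression, closing the imitation loop.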

Robot Motion and Task Learning with Error Recovery

Chang, Guoting. January 2013.
The ability to learn is essential for robots to function and perform services within a dynamic human environment. Robot programming by demonstration facilitates learning through a human teacher without the need to develop new code for each task that the robot performs. In order for learning to be generalizable, the robot needs to be able to grasp the underlying structure of the task being learned. This requires appropriate knowledge abstraction and representation. The goal of this thesis is to develop a learning by imitation system that abstracts knowledge of human demonstrations of a task and represents the abstracted knowledge in a hierarchical framework. The learning by imitation system is capable of performing both action and object recognition based on video stream data at the lower level of the hierarchy, while the sequence of actions and object states observed is reconstructed at the higher level of the hierarchy in order to form a coherent representation of the task. Furthermore, error recovery capabilities are included in the learning by imitation system to improve robustness to unexpected situations during task execution. The first part of the thesis focuses on motion learning to allow the robot to both recognize the actions for task representation at the higher level of the hierarchy and to perform the actions to imitate the task. In order to efficiently learn actions, the actions are segmented into meaningful atomic units called motion primitives. These motion primitives are then modeled using dynamic movement primitives (DMPs), a dynamical system model that can robustly generate motion trajectories to arbitrary goal positions while maintaining the overall shape of the demonstrated motion trajectory. The DMPs also contain weight parameters that are reflective of the shape of the motion trajectory. 
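A minimal sketch of the DMP formulation described above, assuming the standard discrete DMP: a spring-damper transformation system pulled toward the goal, shaped by a forcing term of weighted Gaussian basis functions that fades as an exponentially decaying canonical phase goes to zero. The gains, basis functions, and weights here are illustrative defaults, not the thesis's learned values.

```python
import numpy as np

def dmp_rollout(x0, g, weights, centers, widths,
                tau=1.0, dt=0.001, alpha_s=4.0, K=100.0, D=20.0):
    """Integrate a 1-D discrete DMP: a critically damped spring pulling the
    state x toward goal g, shaped by a forcing term f(s) that scales with
    (g - x0) and fades as the canonical phase s decays from 1 toward 0."""
    x, v, s = x0, 0.0, 1.0
    traj = [x]
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)       # Gaussian basis
        f = (psi @ weights) / (psi.sum() + 1e-10) * s * (g - x0)
        a = K * (g - x) - D * v + f                      # transformation system
        v += a * dt / tau
        x += v * dt / tau
        s += -alpha_s * s * dt / tau                     # canonical system
        traj.append(x)
    return np.array(traj)

centers = np.linspace(0.0, 1.0, 10)
widths = np.full(10, 25.0)
plain = dmp_rollout(0.0, 1.0, np.zeros(10), centers, widths)    # no shaping
shaped = dmp_rollout(0.0, 2.0, 50.0 * np.ones(10), centers, widths)
```

Because the forcing term vanishes with the phase, the rollout reaches an arbitrary goal while the weight vector preserves the demonstrated trajectory's shape; those same weights are what the clustering step below groups to recognize motions.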
These weight parameters are clustered using affinity propagation (AP), an efficient exemplar-based clustering algorithm, in order to determine groups of similar motion primitives, thus performing motion recognition. The approach of DMPs combined with AP was experimentally verified on two separate motion data sets for its ability to recognize and generate motion primitives. The second part of the thesis outlines how the task representation is created and used for imitating observed tasks. This includes object and object state recognition using simple computer vision techniques as well as the automatic construction of a Petri net (PN) model to describe an observed task. Tasks are composed of a sequence of actions that have specific pre-conditions, i.e., object states required before the action can be performed, and post-conditions, i.e., object states that result from the action. The PNs inherently encode pre-conditions and post-conditions of a particular event, i.e., action, and can model tasks as a coherent sequence of actions and object states. In addition, PNs are very flexible in modeling a variety of tasks, including tasks that involve both sequential and parallel components. The automatic PN creation process has been tested on both a sequential two-block stacking task and a three-block stacking task involving both sequential and parallel components. The PN provides a meaningful representation of the observed tasks that can be used by a robot to imitate the tasks. Lastly, error recovery capabilities are added to the learning by imitation system in order to allow the robot to readjust the sequence of actions needed during task execution. The error recovery component is able to deal with two types of errors: unexpected but known situations, and unexpected, unknown situations. In the case of unexpected but known situations, the learning system is able to search through the PN to identify the known situation and the actions needed to complete the task.
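The place/transition mechanics that let a Petri net encode pre- and post-conditions can be sketched in a few lines. This is a minimal hand-built net for a two-block stacking task; the place and transition names are invented for illustration, whereas the thesis constructs its nets automatically from observed demonstrations.

```python
class PetriNet:
    """Minimal place/transition net: each action (transition) consumes tokens
    from its pre-condition places and deposits tokens in its post-condition
    places, mirroring how pre-/post-conditions gate and update object states."""
    def __init__(self, marking):
        self.marking = dict(marking)        # place -> token count
        self.transitions = {}               # name -> (pre, post) token maps

    def add_transition(self, name, pre, post):
        self.transitions[name] = (pre, post)

    def enabled(self, name):
        pre, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in pre.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"pre-conditions of {name!r} not satisfied")
        pre, post = self.transitions[name]
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Two-block stacking: places are illustrative object/robot states.
net = PetriNet({"A_on_table": 1, "B_on_table": 1, "hand_empty": 1})
net.add_transition("pick_A",
                   {"A_on_table": 1, "hand_empty": 1}, {"holding_A": 1})
net.add_transition("place_A_on_B",
                   {"holding_A": 1, "B_on_table": 1},
                   {"A_on_B": 1, "B_on_table": 1, "hand_empty": 1})
net.fire("pick_A")
net.fire("place_A_on_B")
```

After the two firings the marking records the stacked state, and `pick_A` is no longer enabled; parallel task branches fall out naturally, since any transition whose pre-condition places hold tokens may fire.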
This ability is useful not only for error recovery from known situations, but also for human-robot collaboration, where the human unexpectedly helps to complete part of the task. In the case of situations that are both unexpected and unknown, the robot prompts the human demonstrator to teach it how to recover from the error to a known state. By observing the error recovery procedure and automatically extending the PN with the error recovery information, the situation encountered becomes part of the known situations, and the robot is able to autonomously recover from the error in the future. This error recovery approach was tested successfully on errors encountered during the three-block stacking task.
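Recovering from an unexpected but known state amounts to searching the net for a firing sequence that reaches the goal marking. The abstract does not specify the search procedure, so the sketch below assumes a breadth-first search over markings; the actions and state names model the three-block stacking example but are invented for illustration.

```python
from collections import deque

def plan_recovery(marking, goal, actions):
    """Breadth-first search over net markings: from an unexpected but known
    state, find the shortest action sequence reaching a marking that covers
    the goal places. Returns None when no sequence exists, which would be
    the cue to ask the human demonstrator for a new recovery demonstration."""
    def covers(m):
        return all(m.get(p, 0) >= n for p, n in goal.items())
    queue = deque([(dict(marking), [])])
    seen = {frozenset(marking.items())}
    while queue:
        m, plan = queue.popleft()
        if covers(m):
            return plan
        for name, (pre, post) in actions.items():
            if all(m.get(p, 0) >= n for p, n in pre.items()):
                m2 = dict(m)
                for p, n in pre.items():
                    m2[p] -= n
                for p, n in post.items():
                    m2[p] = m2.get(p, 0) + n
                key = frozenset(m2.items())
                if key not in seen:
                    seen.add(key)
                    queue.append((m2, plan + [name]))
    return None

# Mid-task, blocks B and C were found back on the table: replan from there.
actions = {
    "stack_B_on_A": ({"A_placed": 1, "B_on_table": 1},
                     {"A_placed": 1, "B_on_A": 1}),
    "stack_C_on_B": ({"B_on_A": 1, "C_on_table": 1},
                     {"B_on_A": 1, "C_on_B": 1}),
}
plan = plan_recovery({"A_placed": 1, "B_on_table": 1, "C_on_table": 1},
                     {"C_on_B": 1}, actions)
```

When the search fails, the state is unknown rather than merely unexpected, and extending the net with a newly demonstrated recovery sequence makes that state reachable for future runs.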
