  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Expression synthesis on robots

Riek, Laurel Dawn January 2011 (has links)
No description available.
2

A study on nonverbal behaviors of humanoid robots / CUHK electronic theses & dissertations collection

January 2015 (has links)
As humanoid robots move from science fiction to reality, and are gradually being used in education, health care, and entertainment, interactions between humans and humanoid robots are becoming critically important. Previous findings show that in human-robot interaction (HRI) people tend to communicate with humanoid robots as if they were humans, which requires humanoid robots to be behaviorally more humanlike and socially more sophisticated. Nonverbal behaviors (e.g. gaze cues and gestures) are essential communication signals in human-human interaction (HHI), and they are equally important in HRI. This thesis reports a study on the nonverbal behaviors of humanoid robots in the hope of facilitating more natural HRI. Through extensive HHI and HRI experiments, intuitive robot gaze cues and gestures are studied, and their impact on HRI is demonstrated. / Gaze cues can subtly mediate how human-human handovers take place. We conjecture that such an effect also exists in human-robot handovers. Based on observations of the giver's gaze behaviors during human-human handovers, several typical gaze patterns are extracted and transferred to a PR2 humanoid robot for carrying out robot-to-human handovers. In two consecutive HRI experiments the robot hands objects to human receivers while using different gaze patterns. Results show that where the robot gazes and how it changes its gaze direction during the handover can significantly affect human receivers' reaching time for the handed object and their subjective experience (likeability, anthropomorphism, etc.) of the handover. / Emblematic gestures, such as waving, are frequently used in HHI because their meanings are self-contained and can be understood without spoken words. We conjecture that emblematic gestures are also applicable to humanoid robots during HRI. Several commonly used emblematic gestures are identified and transferred to a NAO humanoid robot to be evaluated by human subjects.
Results show that the perceived meanings of the robot's emblematic gestures are generally consistent with the perceived meanings of a human's emblematic gestures, but the recognition rate in the robot case is lower. To improve this, two design methods are implemented: hand-puppeteering (designers manipulate the robot's limbs by hand as if manipulating a puppet) and motion mapping (human gesture trajectories are captured by an RGB-D sensor, and the corresponding joint trajectories are mapped to the robot's joints). Results show that gestures designed with the motion mapping method are faster and have a larger range of motion, while gestures designed with the hand-puppeteering method are perceived subjectively as more likeable and as better conveying semantic meaning. / This research contributes to the design of humanoid robots' nonverbal behaviors with theoretically and empirically grounded methodologies, and offers a better understanding of gaze cues and gestures in both HHI and HRI. Findings from this research provide instructive and valuable references for many practical application scenarios involving interactive robots.
/ Zheng, Minhua. / Thesis Ph.D. Chinese University of Hong Kong 2015. / Includes bibliographical references (leaves 139-159). / Abstracts also in Chinese. / Title from PDF title page (viewed on 07, October, 2016).
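The motion mapping method described above implies rescaling captured human joint trajectories into a robot's (usually narrower) mechanical limits. A minimal sketch of that idea, with joint names, ranges, and the linear mapping all being illustrative assumptions rather than the thesis's actual method:

```python
# Hypothetical sketch of the "motion mapping" idea: captured human joint
# trajectories are linearly rescaled into a robot's joint limits.
# The ranges and the waving trajectory are invented for illustration.

def map_trajectory(human_traj, human_range, robot_range):
    """Linearly rescale a joint-angle trajectory (degrees) from the
    human's range of motion into the robot's mechanical limits."""
    h_lo, h_hi = human_range
    r_lo, r_hi = robot_range
    scale = (r_hi - r_lo) / (h_hi - h_lo)
    return [r_lo + (a - h_lo) * scale for a in human_traj]

# An elbow "wave" trajectory (illustrative values, degrees).
human_wave = [0.0, 45.0, 90.0, 45.0, 0.0]
robot_wave = map_trajectory(human_wave, human_range=(0.0, 90.0),
                            robot_range=(-30.0, 30.0))
print(robot_wave)  # trajectory squeezed into the robot's +/-30 degree limit
```

A real implementation would do this per joint, with the RGB-D skeleton tracker supplying the human trajectories.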
3

An Android and Visual C-based controller for a Delta Parallel Robot for use as a classroom training tool

Bezuidenhout, Sarel January 2013 (has links)
This report shows the development of a Delta Parallel robot to aid in teaching the basics of robotic motion programming. The platform is developed at a fraction of the cost of conventional commercial training systems. The report therefore covers the development procedure as well as the development of some example training material. The system uses wireless serial data communication in the form of a Bluetooth connection. This connection allows an Android tablet, functioning as the human-machine interface (HMI) for the system, to communicate with the motion controller. The motion controller is written in C. This allows future development of the machine, and allows the system to be used at an integral level, should the trainers require an in-depth approach. The motion control software is implemented on a RoBoard, a development board specifically designed for low- to mid-range robotics. The report concludes with an example task being completed on the training platform. This demonstrates some basic robotic motion programming aspects, including point-to-point, linear, and circular motion types, as well as setting and resetting outputs. Performance parameters such as repeatability and reproducibility are important, as they indirectly show how easily the system can be manipulated from the software. Finally, the results are briefly discussed, and recommendations for improvements to the training system and suggestions for future development are given.
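An HMI-to-controller link like the one described usually needs some command framing over the serial stream. As a rough illustration only — the command names, frame layout, and checksum scheme below are invented for this sketch, not taken from the report:

```python
# Illustrative ASCII framing for motion commands sent from an Android
# HMI to a motion controller over a Bluetooth serial link: one frame
# per move, with a simple additive checksum. All names are assumptions.

def frame_command(motion, x, y, z):
    """Encode a motion command ('PTP', 'LIN', 'CIRC') with target
    coordinates in mm as a newline-terminated frame with a checksum."""
    body = f"{motion},{x:.1f},{y:.1f},{z:.1f}"
    checksum = sum(body.encode()) % 256
    return f"${body}*{checksum:02X}\n"

def parse_command(frame):
    """Validate the checksum and return (motion, x, y, z), or None."""
    frame = frame.strip()
    if not frame.startswith("$") or "*" not in frame:
        return None
    body, checksum = frame[1:].split("*")
    if int(checksum, 16) != sum(body.encode()) % 256:
        return None
    motion, x, y, z = body.split(",")
    return motion, float(x), float(y), float(z)

frame = frame_command("LIN", 10.0, -5.0, 120.0)
print(parse_command(frame))
```

The checksum matters here because Bluetooth serial links can drop or corrupt bytes; a frame that fails validation is simply discarded.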
4

Joint attention in human-robot interaction

Huang, Chien-Ming 07 July 2010 (has links)
Joint attention, a crucial component in interaction and an important milestone in human development, has recently drawn much attention from the robotics community. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to direct another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator verifies that the responder has shifted their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability of a robot to ensure that joint attention is reached by the interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the human developmental timeline. Infants start with the skill of following a caregiver's gaze, and then exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as they age and their social skills mature, initiating actions often come with an ensuring behavior: looking back and forth between the caregiver and the referred object to see whether the caregiver is paying attention to it. We conducted two experiments to investigate joint attention in human-robot interaction.
The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
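The three-part model (responding, initiating, ensuring) can be caricatured as a small state holder. The class and method names below are illustrative, not the thesis's implementation:

```python
# Toy sketch of a three-part joint attention model: respond (follow the
# partner's gaze), initiate (direct attention to a target), and ensure
# (check back until the partner shares the focus). Names are invented.

class JointAttention:
    def __init__(self):
        self.focus = None          # robot's current attention target
        self.partner_focus = None  # estimated focus of the human partner

    def respond(self, partner_gaze_target):
        """Respond: follow the partner's gaze to share their focus."""
        self.partner_focus = partner_gaze_target
        self.focus = partner_gaze_target

    def initiate(self, target):
        """Initiate: direct own attention (gaze/point) to a target."""
        self.focus = target

    def ensure(self, observed_partner_focus):
        """Ensure: compare the partner's observed focus with our own;
        repeat the initiating cue until the two match."""
        self.partner_focus = observed_partner_focus
        if self.partner_focus != self.focus:
            self.initiate(self.focus)  # repeat the cue
            return False               # joint attention not yet reached
        return True

ja = JointAttention()
ja.initiate("red cup")
print(ja.ensure("blue box"))  # False: partner not attending yet
print(ja.ensure("red cup"))   # True: joint attention reached
```

The point of the sketch is the ensuring loop: initiating alone ends after the cue, whereas ensuring closes the loop by monitoring the partner.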
5

Generation and use of a discrete robotic controls alphabet for high-level tasks

Gargas, Eugene Frank, III 06 April 2012 (has links)
The objective of this thesis is to generate a discrete alphabet of low-level robotic controllers rich enough to mimic the actions of high-level users using the robot for a specific task. This alphabet will be built through the analysis of various user data sets in a modified version of the motion description language, MDLe. It can then be used to mimic the actions of a future user attempting to perform the task by calling scaled versions of the controls in the alphabet, potentially reducing the amount of data required to be transmitted to the robot, with minimal error. In this thesis, theory is developed that will allow the construction of such an alphabet, as well as its use to mimic new actions. A MATLAB algorithm is then built to implement the theory. This is followed by an experiment in which various users drive a Khepera robot through different courses with a joystick. The thesis concludes by presenting results which suggest that a relatively small group of users can generate an alphabet capable of mimicking the actions of other users, while drastically reducing bandwidth.
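The idea of an alphabet of low-level controls whose scaled versions reproduce a user's driving can be sketched as follows. The atoms, scaling scheme, and unicycle kinematics here are illustrative stand-ins for the MDLe formalism, not the thesis's actual construction:

```python
import math

# A tiny "alphabet" of motion atoms for a differential-drive robot,
# each a (linear velocity, angular velocity) pair. Illustrative only.
ALPHABET = {
    "fwd":   (1.0, 0.0),
    "left":  (0.0, 1.0),
    "right": (0.0, -1.0),
}

def execute(plan):
    """Integrate a plan of (atom, scale, duration) triples from the
    origin with a unit timestep, returning the final (x, y, heading).
    Scaled atoms stand in for the scaled alphabet controls."""
    x = y = theta = 0.0
    for atom, scale, duration in plan:
        v, w = ALPHABET[atom]
        for _ in range(duration):
            theta += scale * w
            x += scale * v * math.cos(theta)
            y += scale * v * math.sin(theta)
    return x, y, theta

# Drive forward, quarter-turn left, forward again at half speed:
pose = execute([("fwd", 1.0, 2), ("left", math.pi / 8, 4), ("fwd", 0.5, 2)])
print(pose)
```

A plan like this is far smaller than streaming raw joystick samples, which is the bandwidth argument made in the abstract.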
6

Guided teaching interactions with robots: embodied queries and teaching heuristics

Cakmak, Maya 17 May 2012 (has links)
The vision of personal robot assistants continues to become more realistic with technological advances in robotics. The increase in the capabilities of robots presents boundless opportunities for them to perform useful tasks for humans. However, it is not feasible for engineers to program robots for all possible uses. Instead, we envision general-purpose robots that can be programmed by their end-users. Learning from Demonstration (LfD) is an approach that allows users to program new capabilities on a robot by demonstrating what is required of it. Although LfD has become an established area of robotics, many challenges remain in making it effective and intuitive for naive users. This thesis contributes to addressing these challenges in several ways. First, the problems that occur in teaching-learning interactions between humans and robots are characterized through human-subject experiments in three different domains. To address these problems, two mechanisms for guiding human teachers in their interactions are developed: embodied queries and teaching heuristics. Embodied queries, inspired by Active Learning queries, are questions asked by the robot so as to steer the teacher towards providing more informative demonstrations. They leverage the robot's embodiment to physically manipulate the environment and to communicate the question. Two technical contributions are made in developing embodied queries. The first is Active Keyframe-based LfD -- a framework for learning human-segmented skills in continuous action spaces and producing four different types of embodied queries to improve learned skills. The second is Intermittently-Active Learning, in which a learner makes queries selectively so as to create balanced interactions with the benefits of fully-active learning. Empirical findings from five experiments with human subjects are presented.
These identify interaction-related issues in generating embodied queries, characterize human question asking, and evaluate implementations of Intermittently-Active Learning and Active Keyframe-based LfD on the humanoid robot Simon. The second mechanism, teaching heuristics, is a set of instructions given to human teachers in order to elicit more informative demonstrations from them. These instructions are devised based on an understanding of what constitutes an optimal teacher for a given learner, with techniques grounded in Algorithmic Teaching. The utility of teaching heuristics is empirically demonstrated through six human-subject experiments that involve teaching concepts or tasks to a virtual agent, or teaching skills to Simon. With a diverse set of human-subject experiments, this thesis demonstrates the necessity of guiding humans in teaching interactions with robots, and verifies the utility of the two proposed mechanisms in improving sample efficiency and final performance, while enhancing the user interaction.
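The core of Intermittently-Active Learning, as summarized above, is that the learner queries selectively rather than after every demonstration. A hedged sketch of one way to decide when to query; the confidence measure and threshold are assumptions, not the thesis's criterion:

```python
# Illustrative selective-querying rule: ask the teacher only when the
# learner's confidence in its current prediction falls below a
# threshold. The vote-based confidence measure is invented for this
# sketch; the thesis's actual query conditions may differ.

def should_query(label_votes, threshold=0.75):
    """Query the teacher when the top-label confidence (fraction of
    agreeing votes, e.g. from nearest-neighbour demonstrations) is
    below the threshold."""
    total = sum(label_votes.values())
    top = max(label_votes.values())
    return (top / total) < threshold

print(should_query({"grasp": 9, "push": 1}))  # confident: no query
print(should_query({"grasp": 4, "push": 3}))  # uncertain: ask teacher
```

Tuning the threshold trades off teacher burden against learning speed, which is the "balanced interactions" motivation given in the abstract.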
7

Choreographic abstractions for style-based robotic motion

LaViers, Amy 20 September 2013 (has links)
What does it mean to do the disco? Or perform a cheerleading routine? Or move in a style appropriate for a given mode of human interaction? Answering these questions requires an interpretation of what differentiates two distinct movement styles and a method for parsing this difference into quantitative parameters. Furthermore, such an understanding of principles of style has applications in control, robotics, and dance theory. This thesis presents a definition of “style of motion” that is rooted in dance theory, a framework for stylistic motion generation that separates basic movement ordering from its precise trajectory, and an inverse optimal control method for extracting these stylistic parameters from real data. On the generation side, the processes of sequencing and scaling are modulated by the stylistic parameters enumerated: an automaton that lists basic primary movements, sets which determine the final structure of the state machine that encodes allowable sequences, and weights in an optimal control problem that generates motions of the desired quality. This generation framework is demonstrated on a humanoid robotic platform in two distinct case studies -- disco dancing and cheerleading. To extract the parameters that comprise the stylistic definition put forth, two inverse optimal control problems are posed and solved -- one to classify individual movements and one to segment longer movement sequences into smaller motion primitives. The motion of a real human leg (recorded via motion capture) is classified in an example. Thus, the contents of the thesis comprise a tool to produce and understand stylistic motion.
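The sequencing idea above — a state machine of basic movements whose allowable transitions are restricted by style-specific sets — can be sketched in miniature. The movements, transitions, and the "cheer" style below are invented for illustration:

```python
# Toy sketch: a base state machine of movement primitives, with a
# style defined (in part) by the subset of movements it may use.
# All states and the style set are illustrative, not from the thesis.

BASE_TRANSITIONS = {
    "stand": {"step", "point", "clap"},
    "step":  {"stand", "step", "spin"},
    "point": {"stand", "clap"},
    "clap":  {"stand", "point"},
    "spin":  {"stand"},
}

def allowed(style_moves, sequence):
    """A sequence is stylistically allowable if every movement belongs
    to the style's set and every transition exists in the base machine."""
    if any(m not in style_moves for m in sequence):
        return False
    return all(b in BASE_TRANSITIONS[a] for a, b in zip(sequence, sequence[1:]))

cheer = {"stand", "clap", "point"}
print(allowed(cheer, ["stand", "clap", "point", "stand"]))  # True
print(allowed(cheer, ["stand", "step", "spin"]))            # False
```

In the thesis's framework the surviving sequences would then be given trajectories by an optimal control problem whose weights encode the style's movement quality; here only the sequencing layer is sketched.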
8

The role of trust and relationships in human-robot social interaction

Wagner, Alan Richard. January 2009 (has links)
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2010. / Committee Chair: Arkin, Ronald C.; Committee Member: Christensen, Henrik I.; Committee Member: Fisk, Arthur D.; Committee Member: Ram, Ashwin; Committee Member: Thomaz, Andrea. Part of the SMARTech Electronic Thesis and Dissertation Collection.
9

Expressive Motion Synthesis for Robot Actors in Robot Theatre

Sunardi, Mathias I. 01 January 2010 (has links)
Lately, personal and entertainment robots are becoming more and more common. In this thesis, the application of entertainment robots in the context of a Robot Theatre is studied. Specifically, the thesis focuses on the synthesis of expressive movements or animations for the robot performers (Robot Actors). A novel paradigm that emerged from computer animation is to represent motion data as a set of signals. Thus, preprogrammed motion data can be quickly modified using common signal processing techniques such as multiresolution filtering and spectral analysis. However, manual adjustment of the filtering and spectral method parameters, as well as good artistic skill, is still required to obtain the desired expressions in the resulting animation. Music contains timing, timbre, and rhythm information which humans can translate into affect and express through movement dynamics, such as in dancing. Music data is therefore assumed to contain affective information which can be expressed in the movements of a robot. In this thesis, music data is used as an input signal to generate motion data (Dance) for a custom-made Lynxmotion robot and to modify a sequence of pre-programmed motion data (Scenario) for a KHR-1 robot. The music data in MIDI format is parsed for timing and melodic information, which are then mapped to joint angle values. Surveys were conducted to validate the usefulness and contribution of music signals in adding expressiveness to the movements of a robot for the Robot Theatre application.
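A minimal sketch of the pitch-to-joint-angle mapping described above, assuming parsed (pitch, duration) pairs; the pitch range, servo range, and linear mapping are assumptions for illustration, not the thesis's actual scheme:

```python
# Illustrative mapping from parsed MIDI notes to servo keyframes:
# higher pitch -> larger joint angle, note duration -> hold time.
# Ranges below are invented; a real system works from parsed MIDI events.

def notes_to_angles(notes, pitch_range=(48, 84), angle_range=(0.0, 180.0)):
    """Map each (midi_pitch, duration_beats) pair to a keyframe of
    (servo angle in degrees, hold time in beats), clamping pitches
    outside the chosen range."""
    p_lo, p_hi = pitch_range
    a_lo, a_hi = angle_range
    keyframes = []
    for pitch, beats in notes:
        pitch = min(max(pitch, p_lo), p_hi)
        angle = a_lo + (pitch - p_lo) * (a_hi - a_lo) / (p_hi - p_lo)
        keyframes.append((round(angle, 1), beats))
    return keyframes

melody = [(60, 1.0), (64, 0.5), (67, 0.5), (72, 2.0)]  # C major arpeggio
print(notes_to_angles(melody))
```

With this 36-semitone-to-180-degree mapping each semitone moves the servo five degrees, so the arpeggio produces a visibly rising gesture.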
10

Mathematical modeling and control of a piezoelectric cellular actuator exhibiting quantization and flexibility

Schultz, Joshua Andrew 21 August 2012 (has links)
This thesis presents mathematical modeling and control techniques that can be used to predict and specify the performance of biologically inspired actuation systems called cellular actuators. Cellular actuators are modular units designed to be connected in bundles in a manner similar to human muscle fibers. They are characterized by inherent compliance and large numbers of on-off discrete control inputs. In this thesis, mathematical tools are developed that connect performance to the physical manifestation of the device. A camera positioner inspired by the human eye is designed to demonstrate how these tools can be used to create an actuator with a useful force-displacement characteristic. Finally, control architectures are presented that use discrete switching inputs to produce smooth motion of these systems despite an innate tendency toward oscillation. These are demonstrated in simulation and experiment.
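The quantization that on-off cellular units impose can be sketched simply: a bundle of identical units can realize only discrete total displacements, so a desired value is approximated by the nearest achievable count. The unit stroke and bundle size below are illustrative assumptions:

```python
# Minimal sketch of actuation quantization: n_units identical on-off
# cells, each contributing a fixed stroke when switched on, so only
# multiples of the unit stroke are realizable. Values are illustrative.

def quantize_command(desired, unit_stroke=0.5, n_units=8):
    """Return (units_on, realized displacement) approximating the
    desired displacement with identical on-off units."""
    units_on = round(desired / unit_stroke)
    units_on = min(max(units_on, 0), n_units)
    return units_on, units_on * unit_stroke

print(quantize_command(1.3))  # nearest realizable value
```

The gap between desired and realized displacement is exactly the quantization error the thesis's control architectures have to smooth over, alongside the bundle's inherent compliance.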
