About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Vision-based place categorization

Bormann, Richard Klaus Eduard 18 November 2010 (has links)
In this thesis we investigate visual place categorization by combining successful global image descriptors with a method of visual attention in order to automatically detect meaningful objects for places. The idea behind this is to incorporate information about typical objects into place categorization without the need for tedious labelling of important objects. Instead, the applied attention mechanism is intended to find the objects a human observer would focus on first, so that the algorithm can use their discriminative power to infer the place category. Besides this object-based place categorization approach, we employ the Gist and CENTRIST descriptors as holistic image descriptors. To exploit the power of all these descriptors we employ SVM-DAS (the discriminative accumulation scheme) for cue integration and furthermore smooth the output trajectory with a delayed Hidden Markov Model. For the classification of the variety of descriptors we present and evaluate several classification methods, among them a joint probability modelling approach with two approximations as well as a modified KNN classifier, AdaBoost, and SVM. The latter two classifiers are enhanced for multi-class use with a probabilistic computation scheme which treats the individual classifiers as peers rather than as a hierarchical sequence. We evaluate and tune the different descriptors and classifiers in extensive tests, mainly on a dataset of six homes. After these experiments we extend the basic algorithm with further filtering and tracking methods and evaluate their influence on performance. Finally, we also test our algorithm within a university environment and on a real robot within a home environment.
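As a concrete illustration of the kind of temporal smoothing the abstract describes, the following is a minimal sketch of HMM forward filtering over per-frame place category scores. It is not the thesis code: the transition matrix, score normalization, and the assumption that per-frame scores come from a cue-integration step like SVM-DAS are all illustrative.

```python
# Sketch: smooth per-frame place category scores with a simple HMM forward filter.
# The stay probability and uniform prior are illustrative assumptions.
import numpy as np

def hmm_smooth(frame_scores, stay_prob=0.9):
    """frame_scores: (T, C) nonnegative per-frame class scores (e.g. from cue integration)."""
    T, C = frame_scores.shape
    # Transition model: strong preference for remaining in the current place category.
    A = np.full((C, C), (1.0 - stay_prob) / (C - 1))
    np.fill_diagonal(A, stay_prob)

    belief = np.full(C, 1.0 / C)               # uniform prior over place categories
    smoothed = np.empty((T, C))
    for t in range(T):
        likelihood = frame_scores[t] / (frame_scores[t].sum() + 1e-12)
        belief = likelihood * (A.T @ belief)   # predict with the transition model, weight by evidence
        belief /= belief.sum()
        smoothed[t] = belief
    return smoothed.argmax(axis=1)             # most likely place category per frame
```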
12

A study of human-robot interaction with an assistive robot to help people with severe motor impairments

Choi, Young Sang. January 2009 (has links)
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2010. / Committee Chair: Kemp, Charles; Committee Member: Glass, Jonathan; Committee Member: Griffin, Paul; Committee Member: Howard, Ayanna; Committee Member: Thomaz, Andrea. Part of the SMARTech Electronic Thesis and Dissertation Collection.
13

The audio/visual mismatch and the uncanny valley: an investigation using a mismatch in the human realism of facial and vocal aspects of stimuli

Szerszen, Kevin A. 16 March 2011 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Empirical research on the uncanny valley has primarily been concerned with visual elements. The current study is intended to show how manipulating auditory variables of the stimuli affects participants' ratings. The focus of this research is to investigate whether an uncanny valley effect occurs when humans are exposed to stimuli that have an incongruity between auditory and visual aspects. Participants were exposed to sets of stimuli that were either congruent or incongruent in their levels of audio/visual humanness. Explicit measures were used to explore whether a mismatch in the human realism of facial and vocal aspects produces an uncanny valley effect and to attempt to explain a possible cause of this effect. Results indicate that an uncanny valley effect occurs when humans are exposed to stimuli that have an incongruity between auditory and visual aspects.
14

The role of trust and relationships in human-robot social interaction

Wagner, Alan Richard 10 November 2009 (has links)
Can a robot understand a human's social behavior? Moreover, how should a robot act in response to a human's behavior? If the goals of artificial intelligence are to understand, imitate, and interact with human-level intelligence, then researchers must also explore the social underpinnings of this intellect. Our endeavor is buttressed by work in biology, neuroscience, social psychology, and sociology. Initially developed by Kelley and Thibaut, social psychology's interdependence theory serves as a conceptual skeleton for the study of social situations, a computational process of social deliberation, and relationships (Kelley & Thibaut, 1978). We extend and expand their original work to explore the challenge of interaction with an embodied, situated robot. This dissertation investigates the use of outcome matrices as a means for computationally representing a robot's interactions. We develop algorithms that allow a robot to create these outcome matrices from perceptual information and then to use them to reason about the characteristics of its interactive partner. This work goes on to introduce algorithms that afford a means for reasoning about a robot's relationships and the trustworthiness of a robot's partners. Overall, this dissertation embodies a general, principled approach to human-robot interaction which results in a novel and scientifically meaningful approach to topics such as trust and relationships.
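To make the notion of an outcome matrix concrete, here is a minimal sketch in the spirit of interdependence theory. It is not the dissertation's algorithms: the action labels, payoff values, and the maximin action-selection rule are illustrative assumptions.

```python
# Sketch: an outcome matrix (rows = robot actions, columns = partner actions)
# with one simple decision rule when the partner's choice is unknown.
from dataclasses import dataclass
import numpy as np

@dataclass
class OutcomeMatrix:
    robot_actions: list           # row labels
    partner_actions: list         # column labels
    robot_outcomes: np.ndarray    # shape (R, P): payoff to the robot
    partner_outcomes: np.ndarray  # shape (R, P): payoff to the partner

    def maximin_action(self):
        """Choose the robot action whose worst-case outcome is best."""
        worst_case = self.robot_outcomes.min(axis=1)
        return self.robot_actions[int(worst_case.argmax())]

m = OutcomeMatrix(
    robot_actions=["assist", "wait"],
    partner_actions=["cooperate", "defect"],
    robot_outcomes=np.array([[8, 1], [3, 3]]),
    partner_outcomes=np.array([[8, 10], [3, 0]]),
)
print(m.maximin_action())   # -> "wait" under these illustrative payoffs
```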
15

Task transparency in learning by demonstration: gaze, pointing, and dialog

dePalma, Nicholas Brian 07 July 2010 (has links)
This body of work explores an emerging aspect of human-robot interaction: transparency. Socially guided machine learning has shown that highly immersive robotic behaviors yield better performance and shorter training times than less interactive behaviors. While other work explores transparency in learning by demonstration using non-verbal cues to point out the importance or preference users may have towards behaviors, my work follows this argument and attempts to extend it by offering cues to the internal task representation. What I show is that task transparency, or the ability to connect and discuss the task in a fluent way, prompts the user to shape and correct the learned goal in ways that may be impossible with other present-day learning by demonstration methods. Additionally, some participants are shown to prefer task-transparent robots that appear to have the ability of "introspection," in which the robot can modify the learned goal by methods other than demonstration alone.
16

Interactive text response for assistive robotics in the home

Ajulo, Morenike 18 May 2010 (has links)
In a home environment, there are many tasks that a human may need to accomplish. These activities, which range from picking up a telephone to clearing rooms in the house, all share the common theme of fetching. These tasks can only be completed correctly with consideration of many things, including an understanding of what the human wants, recognition of the correct item in the environment, and manipulation and grasping of the object of interest. The focus of this work is on addressing one aspect of this problem: decomposing an image scene such that a task-specific object of interest can be identified. In this work, communication between human and robot is represented using a feedback formalism. This involves the back-and-forth transfer of textual information between the human and the robot such that the robot receives all information necessary to recognize the task-specific object of interest. We name this new communication mechanism Interactive Text Response (ITR), which we believe provides a novel contribution to the field of human-robot interaction. The methodology employed involves capturing a view of the scene that contains an object of interest. Then, the robot makes inquiries based on its current understanding of the scene to disambiguate between objects in the scene. In this work, we discuss the development of ITR in human-robot interaction and an understanding of the variability, ease of recognition, clutter, and workload needed to develop an interactive robot system.
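The back-and-forth textual exchange ITR describes can be pictured with a minimal disambiguation loop like the sketch below. It is illustrative only, not the thesis implementation: the attribute names, candidate objects, and question wording are made-up assumptions.

```python
# Sketch: ask attribute questions until only one candidate object in the scene remains.
def disambiguate(candidates, ask):
    """candidates: list of dicts with attribute fields; ask: callable(question) -> answer string."""
    attributes = ["color", "shape", "location"]
    for attr in attributes:
        if len(candidates) <= 1:
            break
        values = {c[attr] for c in candidates}
        if len(values) < 2:
            continue                      # this attribute cannot discriminate between candidates
        answer = ask(f"What is the {attr} of the object you want? ({', '.join(sorted(values))})")
        candidates = [c for c in candidates if c[attr] == answer.strip().lower()]
    return candidates[0] if len(candidates) == 1 else None

scene = [
    {"name": "mug",    "color": "red",   "shape": "cylinder", "location": "table"},
    {"name": "remote", "color": "black", "shape": "box",      "location": "sofa"},
    {"name": "book",   "color": "red",   "shape": "box",      "location": "table"},
]
target = disambiguate(scene, ask=input)   # the robot's "inquiries" are answered by the user
```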
17

Modeling of operator action for intelligent control of haptic human-robot interfaces

Gallagher, William John 13 January 2014 (has links)
Control of systems requiring direct physical human-robot interaction (pHRI) requires special consideration of the motion, dynamics, and control of both the human and the robot. Humans actively change their dynamic characteristics during motion, and robots should be designed with this in mind. Both the case of humans trying to control haptic robots using physical contact and the case of wearable robots that must work with human muscles are pHRI systems. Force feedback haptic devices require physical contact between the operator and the machine, which creates a coupled system. This human contact creates a situation in which the stiffness of the system changes based on how the operator modulates the stiffness of their arm. The natural human tendency is to increase arm stiffness to attempt to stabilize motion. However, this increases the overall stiffness of the system, making it more difficult to control and reducing stability. Instability poses a threat of injury or load damage for large assistive haptic devices with heavy loads. Controllers do not typically account for this, as operator stiffness is often not directly measurable. The common solution of using a controller with significantly increased damping has the disadvantage of slowing the device and decreasing operator efficiency. By expanding the information available to the controller, it can be designed to adjust a robot's motion based on how the operator is interacting with it and allow for faster movement in low-stiffness situations. This research explored the utility of a system that can estimate operator arm stiffness and compensate accordingly. By measuring muscle activity, a model of the human arm was utilized to estimate the stiffness level of the operator and then adjust the gains of an impedance-based controller to stabilize the device. This achieved the goal of reducing oscillations and increasing device performance, as demonstrated through a series of user trials with the device. Through the design of this system, the effectiveness of a variety of operator models was analyzed and several different controllers were explored. The final device has the potential to increase the performance of operators and reduce fatigue due to usage, which in industrial settings could translate into better efficiency and higher productivity.

Similarly, wearable robots must consider human muscle activity. Wearable robots, often called exoskeleton robots, are used for a variety of tasks, including force amplification, rehabilitation, and medical diagnosis. Force amplification exoskeletons operate much like haptic assist devices and could leverage the same adaptive control system. The latter two types, however, are designed with the purpose of modulating human muscles, in which case the wearer's muscles must adapt to the way the robot moves, the reverse of the robot adapting to how the human moves. In this case, the robot controller must apply a force to the arm to cause the arm muscles to adapt and generate a specific muscle activity pattern. This related problem is explored and a muscle control algorithm is designed that allows a wearable robot to induce a specified muscle pattern in the wearer's arm. The two problems, in which the robot must adapt to the human's motion and in which the robot must induce the human to adapt its motion, are related critical problems that must be solved to enable simple and natural physical human-robot interaction.
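The idea of scaling controller gains with estimated operator stiffness can be sketched as follows. This is not the thesis controller: the linear EMG-to-stiffness mapping, the gain constants, and the simple spring-damper force law are all assumptions chosen only to illustrate the adaptation step.

```python
# Sketch: raise the damping of an impedance-style haptic controller as the
# operator's estimated arm stiffness (from normalized EMG activation) increases.
def estimate_arm_stiffness(emg_activation, k_min=200.0, k_max=2000.0):
    """emg_activation in [0, 1]; returns an illustrative arm stiffness estimate in N/m."""
    a = min(max(emg_activation, 0.0), 1.0)
    return k_min + a * (k_max - k_min)

def impedance_force(pos_err, vel_err, k_arm, k_robot=500.0, b_base=20.0, b_gain=0.05):
    """Commanded force: spring toward the target plus damping that grows with
    the operator's estimated stiffness to suppress oscillation."""
    b = b_base + b_gain * k_arm
    return k_robot * pos_err + b * vel_err

f = impedance_force(pos_err=0.02, vel_err=-0.1,
                    k_arm=estimate_arm_stiffness(emg_activation=0.7))
```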
18

A study of human-robot interaction with an assistive robot to help people with severe motor impairments

Choi, Young Sang 06 July 2009 (has links)
The thesis research aims to further the study of human-robot interaction (HRI) issues, especially regarding the development of an assistive robot designed to help individuals with motor impairments. In particular, individuals with amyotrophic lateral sclerosis (ALS) represent a potential user population that possesses an array of motor impairments due to the progressive nature of the disease. Through a review of the literature, an initial target for robotic assistance was determined to be object retrieval and delivery tasks to aid with dropped or otherwise unreachable objects, which represent a common and significant difficulty for individuals with limited motor capabilities. This thesis research has been conducted as part of a larger, collaborative project between the Georgia Institute of Technology and Emory University. To this end, we developed and evaluated a semi-autonomous mobile healthcare service robot named EL-E. I conducted four human studies involving patients with ALS with the following objectives: 1) to investigate and better understand the practical, everyday needs and limitations of people with severe motor impairments; 2) to translate these needs into pragmatic tasks or goals to be achieved through an assistive robot and reflect these needs and limitations in the robot's design; 3) to develop practical, usable, and effective interaction mechanisms by which impaired users can control the robot; and 4) to evaluate the performance of the robot and improve its usability. I anticipate that the findings from this research will contribute to ongoing research in the development and evaluation of effective and affordable assistive manipulation robots, which can help to mitigate the difficulties, frustration, and lost independence experienced by individuals with significant motor impairments and improve their quality of life.
19

Design, analysis, and simulation of a humanoid robotic arm applied to catching

Yesmunt, Garrett Scot January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / There have been many endeavors to design humanoid robots that have human characteristics such as dexterity, autonomy, and intelligence. Humanoid robots are intended to cooperate with humans and perform useful work that humans can perform. The main advantage of humanoid robots over other machines is that they are flexible and multi-purpose. In this thesis, a human-like robotic arm is designed and used in a task that is typically performed by humans, namely, catching a ball. The robotic arm was designed to closely resemble a human arm, based on anthropometric studies. A rigid multibody dynamics software package was used to create a virtual model of the robotic arm, perform experiments, and collect data. The inverse kinematics of the robotic arm was solved using a Newton-Raphson numerical method with a numerically calculated Jacobian. The system was validated by testing its ability to find a kinematic solution for the catch position and successfully catch the ball within the robot's workspace. The tests were conducted by throwing the ball such that its path intersected different target points within the robot's workspace. The method used for determining the catch location consists of finding the intersection of the ball's trajectory with a virtual catch plane. The hand orientation was set so that the normal vector to the palm of the hand is parallel to the trajectory of the ball at the intersection point, and a vector perpendicular to this normal vector remains in a constant orientation during the catch. It was found that this catch orientation approach was reliable within a 0.35 x 0.4 meter window in the robot's workspace. For all tests within this window, the robotic arm successfully caught and dropped the ball in a bin. Also, for the tests within this window, the maximum position and orientation (Euler angle) tracking errors were 13.6 mm and 4.3 degrees, respectively. The average position and orientation tracking errors were 3.5 mm and 0.3 degrees, respectively. The work presented in this study can be applied to humanoid robots in industrial assembly lines and hazardous environment recovery tasks, amongst other applications.
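The Newton-Raphson inverse kinematics step with a numerically computed Jacobian can be illustrated on a planar two-link arm, as in the sketch below. It is not the thesis code: the link lengths, tolerance, iteration limit, and the reduction to two joints are simplifying assumptions made only to show the iteration itself.

```python
# Sketch: Newton-Raphson IK with a central-difference (numerical) Jacobian
# on a planar 2-link arm.
import numpy as np

L1, L2 = 0.3, 0.25   # illustrative link lengths (m)

def fk(q):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

def numerical_jacobian(q, eps=1e-6):
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2); dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)   # central difference
    return J

def ik_newton(target, q0, tol=1e-6, max_iter=100):
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        q += np.linalg.solve(numerical_jacobian(q), err)  # Newton-Raphson update
    return q

q_catch = ik_newton(target=np.array([0.35, 0.20]), q0=[0.3, 0.5])  # joint angles for the catch point
```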
