41

A Novel Approach for Performance Assessment of Human-Robotic Interaction

Abou Saleh, Jamil 16 March 2012 (has links)
Robots have always been touted as powerful tools that could be used effectively in a number of applications ranging from automation to human-robot interaction. In order for such systems to operate adequately and safely in the real world, they must be able to perceive and must possess a certain level of reasoning ability. Toward this end, performance evaluation metrics serve as important measures. This research work is intended to be a further step toward identifying common metrics for task-oriented human-robot interaction. We believe that within the context of human-robot interaction systems, both humans' and robots' actions and interactions (jointly and independently) can significantly affect the quality of the accomplished task. As such, our goal becomes that of providing a foundation upon which we can assess how well the human and the robot perform as a team. Thus, we propose a generic metric to assess the performance of a human-robot team in which one or more robots are involved. Sequential and parallel robot cooperation schemes with varying levels of task dependency are considered, and the proposed performance metric is augmented and extended to accommodate such scenarios. This is supported by intuitively derived mathematical models and advanced numerical simulations. To model such a metric efficiently, we propose a two-level fuzzy temporal model to evaluate and estimate human trust in automation while collaborating and interacting with robots and machines to complete tasks. Trust modelling is critical, as it directly influences how much interaction time should be dedicated, directly and indirectly, to the robot. Another fuzzy temporal model is also presented to evaluate human reliability during interaction time. A significant body of research indicates that system failures are attributable to humans almost as often as to machines, and therefore assessing this factor in human-robot interaction systems is crucial. The proposed framework is based on the most recent research in the areas of human-machine interaction and performance evaluation metrics. The fuzzy knowledge bases are further updated by implementing an application robotic platform where robots and users interact via semi-natural language to achieve tasks with varying levels of complexity and completion rates. User feedback is recorded and used to tune the knowledge base where needed. This work is intended to serve as a foundation for further quantitative research to evaluate the performance of human-robot teams in achieving collective tasks.
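As an illustration of the kind of fuzzy temporal trust update described above, the following sketch blends recent task performance and fault rate into a trust estimate through triangular memberships, a small hand-written rule base, and first-order temporal smoothing. The variables, rules, and gains are illustrative assumptions, not the thesis's actual two-level model.

```python
# Illustrative sketch of a temporal fuzzy trust estimator (not the thesis's
# actual two-level model): trust is updated from recent task performance and
# fault rate using triangular memberships and a small hand-written rule base.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_trust_update(prev_trust, success_rate, fault_rate, alpha=0.3):
    """One temporal update step: blend previous trust with a fuzzy estimate."""
    # Fuzzify inputs (all variables are assumed to lie in [0, 1]).
    perf = {"low":  tri(success_rate, -0.5, 0.0, 0.5),
            "med":  tri(success_rate,  0.0, 0.5, 1.0),
            "high": tri(success_rate,  0.5, 1.0, 1.5)}
    fault = {"low":  tri(fault_rate, -0.5, 0.0, 0.5),
             "high": tri(fault_rate,  0.0, 1.0, 2.0)}

    # Tiny rule base: rule strength (min of antecedents) -> trust level.
    rules = [
        (min(perf["high"], fault["low"]), 0.9),   # good performance, few faults
        (min(perf["med"],  fault["low"]), 0.6),
        (min(perf["med"],  fault["high"]), 0.35),
        (perf["low"], 0.1),                       # poor performance alone implies low trust
    ]
    num = sum(w * level for w, level in rules)
    den = sum(w for w, _ in rules) or 1.0
    trust_now = num / den                         # weighted-average defuzzification

    # Temporal (first-order) smoothing: new evidence shifts trust gradually.
    return (1 - alpha) * prev_trust + alpha * trust_now

trust = 0.5
for success, faults in [(0.9, 0.1), (0.8, 0.0), (0.4, 0.6)]:
    trust = fuzzy_trust_update(trust, success, faults)
    print(f"trust estimate: {trust:.2f}")
```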
42

TOURBOT: a research and product design study applying human-robot interaction and universal design principles to the development of a tour guide robot

Terrell, Robert Vern, Liu, Tsai Lu, January 2009 (has links)
Thesis--Auburn University, 2009. / Abstract. Vita. Includes bibliographical references (p. 123-126).
43

Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver's Gift

Arsenio, Artur Miguel 26 September 2004 (has links)
The goal of this work is to build a cognitive system for the humanoid robot, Cog, that exploits human caregivers as catalysts to perceive and learn about actions, objects, scenes, people, and the robot itself. This thesis addresses a broad spectrum of machine learning problems across several categorization levels. Actions by embodied agents are used to automatically generate training data for the learning mechanisms, so that the robot develops categorization autonomously. Taking inspiration from the human brain, a framework of algorithms and methodologies was implemented to emulate different cognitive capabilities on the humanoid robot Cog. This framework is effectively applied to a collection of AI, computer vision, and signal processing problems. Cognitive capabilities of the humanoid robot are developmentally created, starting from infant-like abilities for detecting, segmenting, and recognizing percepts over multiple sensing modalities. Human caregivers provide a helping hand for communicating such information to the robot. This is done by actions that create meaningful events (by changing the world in which the robot is situated) thus inducing the "compliant perception" of objects from these human-robot interactions. Self-exploration of the world extends the robot's knowledge concerning object properties. This thesis argues for enculturating humanoid robots using infant development as a metaphor for building a humanoid robot's cognitive abilities. A human caregiver redesigns a humanoid's brain by teaching the humanoid robot as she would teach a child, using children's learning aids such as books, drawing boards, or other cognitive artifacts. Multi-modal object properties are learned using these tools and inserted into several recognition schemes, which are then applied to developmentally acquire new object representations. The humanoid robot therefore sees the world through the caregiver's eyes. Building an artificial humanoid robot's brain, even at an infant's cognitive level, has been a long quest which still lies only in the realm of our imagination. Our efforts towards such a dimly imaginable task are developed according to two alternate and complementary views: cognitive and developmental.
44

Can I Have a Robot Friend? / Kan en robot vara min vän?

Tistelgren, Mathias January 2018 (has links)
The development of autonomous social robots is still in its infancy, but there is no reason to think that it will not continue. In fact, the robotics industry is growing rapidly. Since this trend is showing no signs of abating, it is relevant to ask what type of relations we can have with these machines. Is it, for example, possible to be friends with them? In this thesis I argue that it is unlikely that we will ever be able to be friends with robots. To believe otherwise is to be deceived, a trap it is all too easy to fall into given the enormous effort put into making social robots as human-like as possible and making human-robot interaction as smooth as possible. But robots are not always what they seem. For instance, the capacity to enter into a friendship of one's own volition is a core requirement for a relationship to be termed friendship. We also have a duty to act morally towards our friends, to treat them with due respect. To be able to do this we need to have self-knowledge, a sense of ourselves as persons in our own right. We do not have robots who display these capacities today, nor is it a given that we ever will.
45

The impact of robot tutor social behaviour on children

Kennedy, James R. January 2017 (has links)
Robotic technologies possess great potential to enter our daily lives because they have the ability to interact with our world. But our world is inherently social. Whilst humans often have a natural understanding of this complex environment, it is much more challenging for robots. The field of social Human-Robot Interaction (HRI) seeks to endow robots with the characteristics and behaviours that would allow for intuitive multimodal interaction. Education is a social process and previous research has found strong links between the social behaviour of teachers and student learning. This therefore presents a promising application opportunity for social human-robot interaction. The thesis presented here is that a robot with tailored social behaviour will positively influence the outcomes of tutoring interactions with children and consequently lead to an increase in child learning when compared to a robot without this social behaviour. It has long been established that one-to-one tutoring provides a more effective means of learning than the current typical school classroom model (one teacher to many students). Schools increasingly supplement their teaching with technology such as tablets and laptops to offer this personalised experience, but a growing body of evidence suggests that robots lead to greater learning than other media. It is posited that this is due to the increased social presence of a robot. This work adds to the evidence that robots hold a social advantage over other technological media, and that this indeed leads to increased learning. In addition, the work here contributes to existing knowledge by seeking to expand our understanding of how to manipulate robot social behaviour in educational interactions such that the behaviour is tailored for this purpose. To achieve this, a means of characterising social behaviour is required, as is a means of measuring the success of the behaviour for the interaction. To characterise the social behaviour of the robot, the concept of immediacy is taken from the human-human literature and validated for use in HRI. Greater use of immediacy behaviours is also tied to increased cognitive learning gains in humans. This can be used to predict the same effect for the use of social behaviour by a robot, with learning providing an objective measure of success for the robot behaviour given the education application. It is found here that when implemented on a robot in tutoring scenarios, greater use of immediacy behaviours generally does tend to lead to increased learning, but a complex picture emerges. Merely the addition of more social behaviour is insufficient to increase learning; it is found that a balance should be struck between the addition of social cues, and the congruency of these cues.
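To make the notion of immediacy concrete, the sketch below scores a tutoring session's robot behaviour from a few coded nonverbal cues and pairs it with a normalised learning-gain measure. The cue set, weights, and names are assumptions for illustration, not the validated instrument used in the thesis.

```python
# A hedged sketch of scoring robot "immediacy" from coded nonverbal cues.
# The cue names and weights are illustrative assumptions; the idea is simply
# that more (and better-timed) social cues yield a higher immediacy score that
# can be compared against measured learning gains.

from dataclasses import dataclass

@dataclass
class SessionCoding:
    gaze_at_child_ratio: float   # fraction of time the robot gazes at the child
    gestures_per_minute: float
    vocal_variation: float       # 0..1, prosodic variation of the voice
    personalisation: float       # 0..1, use of the child's name / references

def immediacy_score(c: SessionCoding) -> float:
    """Weighted combination of cues, scaled to 0..1 (weights are assumptions)."""
    gestures = min(c.gestures_per_minute / 10.0, 1.0)  # saturate at 10/min
    return (0.35 * c.gaze_at_child_ratio +
            0.25 * gestures +
            0.20 * c.vocal_variation +
            0.20 * c.personalisation)

def learning_gain(pre: float, post: float, max_score: float) -> float:
    """Normalised learning gain, a common measure in tutoring studies."""
    return (post - pre) / (max_score - pre) if max_score > pre else 0.0

high_social = SessionCoding(0.8, 8.0, 0.7, 0.9)
low_social = SessionCoding(0.3, 2.0, 0.2, 0.0)
print(immediacy_score(high_social), immediacy_score(low_social))
print(learning_gain(pre=4, post=7, max_score=10))
```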
46

An Android and Visual C-based controller for a Delta Parallel Robot for use as a classroom training tool

Bezuidenhout, Sarel January 2013 (has links)
This report describes the development of a Delta parallel robot to aid in teaching the basics of robotic motion programming. The platform is developed at a fraction of the cost of conventional commercial training systems. The report therefore covers the development procedure as well as the creation of some of the example training material. The system uses wireless serial data communication in the form of a Bluetooth connection. This connection allows an Android tablet, functioning as the human-machine interface (HMI) for the system, to communicate with the motion controller. The motion controller is implemented in C. This allows future development of the machine and lets the system be used at a more integrated level, should the trainers require an in-depth approach. The motion control software is implemented on a RoBoard, a development board specifically designed for low- to mid-range robotics. The report concludes with an example task being completed on the training platform. This demonstrates some of the basic aspects of robotic motion programming, including point-to-point, linear, and circular motion types, as well as setting and resetting outputs. Performance parameters such as repeatability and reproducibility are important, as they indirectly indicate how easily the system can be manipulated from the software. Finally, the results are briefly discussed, and recommendations for improving the training system and suggestions for future development are given.
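As a rough illustration of how such a trainer might expose motion programming to students, the sketch below defines a line-based command protocol (point-to-point, linear, and circular moves plus digital outputs) sent over a Bluetooth serial link using pyserial. The command names, serial settings, and responses are assumptions for illustration, not the report's actual HMI-to-controller protocol.

```python
# A minimal sketch of a line-based motion-command protocol over a Bluetooth
# serial link, in the spirit of the training platform described above. The
# command names (PTP, LIN, CIRC, OUT) and the serial settings are assumptions;
# the real HMI-to-controller protocol may differ.

import serial  # pyserial

class DeltaTrainerLink:
    def __init__(self, port="/dev/rfcomm0", baud=115200):
        # A Bluetooth SPP connection typically appears as an ordinary serial port.
        self.ser = serial.Serial(port, baud, timeout=1.0)

    def _send(self, line: str) -> str:
        self.ser.write((line + "\n").encode("ascii"))
        return self.ser.readline().decode("ascii").strip()  # e.g. "OK" / "ERR"

    def move_ptp(self, x, y, z):
        """Point-to-point move: fastest joint-interpolated path to (x, y, z)."""
        return self._send(f"PTP {x:.1f} {y:.1f} {z:.1f}")

    def move_linear(self, x, y, z, feed=100.0):
        """Straight-line move of the tool centre point at the given feed rate."""
        return self._send(f"LIN {x:.1f} {y:.1f} {z:.1f} F{feed:.0f}")

    def move_circular(self, via, end, feed=100.0):
        """Circular arc through a via point to the end point."""
        vx, vy, vz = via
        ex, ey, ez = end
        return self._send(f"CIRC {vx:.1f} {vy:.1f} {vz:.1f} "
                          f"{ex:.1f} {ey:.1f} {ez:.1f} F{feed:.0f}")

    def set_output(self, channel: int, on: bool):
        """Set or reset a digital output, e.g. a gripper valve."""
        return self._send(f"OUT {channel} {1 if on else 0}")

if __name__ == "__main__":
    # Requires a paired Bluetooth device bound to the serial port above.
    link = DeltaTrainerLink()
    link.move_ptp(0, 0, -250)                 # approach position
    link.set_output(1, True)                  # close gripper
    link.move_linear(80, 0, -250, feed=50)    # straight transfer
    link.move_circular((40, 40, -250), (0, 80, -250))
    link.set_output(1, False)                 # release part
```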
47

Generating Explanations of Robot Policies in Continuous State Spaces

Struckmeier, Oliver January 2018 (has links)
Transparency in HRI describes the method of making the current state of a robot or intelligent agent understandable to a human user. Applying transparency mechanisms to robots improves the quality of interaction as well as the user experience. Explanations are an effective way to make a robot's decision making transparent. We introduce a framework that uses natural language labels attached to a region in the continuous state space of the robot to automatically generate local explanations of a robot's policy. We conducted a pilot study and investigated how the generated explanations helped users to understand and reproduce a robot policy in a debugging scenario.
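A minimal sketch of the labelled-region idea: axis-aligned regions of a continuous state space carry natural-language labels, and a local explanation of the policy's action is generated from the labels of the regions containing the current state. The region bounds, labels, and sentence template are illustrative assumptions, not the framework's actual implementation.

```python
# Regions of the continuous state space carry natural-language labels; a local
# explanation of the policy's choice is built from the labels of the regions
# containing the current state. Everything below is a toy illustration.

from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class LabelledRegion:
    low: Sequence[float]    # lower corner of an axis-aligned box
    high: Sequence[float]   # upper corner
    label: str              # natural-language description of this region

    def contains(self, state: Sequence[float]) -> bool:
        return all(l <= s <= h for l, s, h in zip(self.low, state, self.high))

def explain(state, policy: Callable, regions: list[LabelledRegion]) -> str:
    action = policy(state)
    matches = [r.label for r in regions if r.contains(state)]
    where = " and ".join(matches) if matches else "an unlabelled part of the state space"
    return f"I chose '{action}' because I am {where}."

# Toy 2-D state: (distance to person [m], speed [m/s]).
regions = [
    LabelledRegion((0.0, 0.0), (1.0, 2.0), "very close to the person"),
    LabelledRegion((0.0, 1.0), (5.0, 2.0), "moving fast"),
]
policy = lambda s: "slow down" if s[0] < 1.0 or s[1] > 1.0 else "keep going"
print(explain((0.6, 1.4), policy, regions))
# -> I chose 'slow down' because I am very close to the person and moving fast.
```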
48

Planning Challenges in Human-Robot Teaming

January 2014 (has links)
abstract: As robotic technology and its various uses grow steadily more complex and ubiquitous, humans are coming into increasing contact with robotic agents. A large portion of such contact is cooperative interaction, where both humans and robots are required to work on the same application towards achieving common goals. These application scenarios are characterized by a need to leverage the strengths of each agent as part of a unified team to reach those common goals. To ensure that the robotic agent is truly a contributing team-member, it must exhibit some degree of autonomy in achieving goals that have been delegated to it. Indeed, a significant portion of the utility of such human-robot teams derives from the delegation of goals to the robot, and autonomy on the part of the robot in achieving those goals. In order to be considered truly autonomous, the robot must be able to make its own plans to achieve the goals assigned to it, with only minimal direction and assistance from the human. Automated planning provides the solution to this problem -- indeed, one of the main motivations that underpinned the beginnings of the field of automated planning was to provide planning support for Shakey the robot with the STRIPS system. For a long time, however, automated planners suffered from scalability issues that precluded their application to real-world, real-time robotic systems. Recent decades have seen a gradual abeyance of those issues, and fast planning systems are now the norm rather than the exception. However, some of these advances in speedup and scalability have been achieved by ignoring or abstracting out challenges that real world integrated robotic systems must confront. In this work, the problem of planning for human-robot teaming is introduced. The central idea -- the use of automated planning systems as mediators in such human-robot teaming scenarios -- and the main challenges inspired by real-world scenarios that must be addressed in order to make such planning seamless are presented: (i) Goals which can be specified or changed at execution time, after the planning process has completed; (ii) Worlds and scenarios where the state changes dynamically while a previous plan is executing; (iii) Models that are incomplete and can be changed during execution; and (iv) Information about the human agent's plan and intentions that can be used for coordination. These challenges are compounded by the fact that the human-robot team must execute in an open world, rife with dynamic events and other agents; and in a manner that encourages the exchange of information between the human and the robot. As an answer to these challenges, implemented solutions and a fielded prototype that combines all of those solutions into one planning system are discussed. Results from running this prototype in real world scenarios are presented, and extensions to some of the solutions are offered as appropriate. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2014
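Challenges (i) and (ii) amount to an execute-monitor-replan loop in which goals and world state may change while a plan is running. The toy sketch below illustrates that loop with a trivial grid world and a breadth-first planner standing in for a real planning system; the names and the event model are illustrative assumptions, not the thesis's prototype.

```python
# A small sketch of challenges (i)/(ii): an execute-monitor-replan loop in
# which goals and world state may change while a plan is running. A toy grid
# world and BFS planner stand in for a real planner.

from collections import deque

def bfs_plan(start, goal, blocked, size=5):
    """Shortest path on a small grid, avoiding blocked cells."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return list(reversed(path))
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in parents):
                parents[nxt] = cell
                frontier.append(nxt)
    return None

def execute(start, goal, blocked, events):
    """Follow the plan one step at a time, replanning whenever anything changes."""
    state, plan = start, bfs_plan(start, goal, blocked)
    for step in range(20):
        goal, blocked, changed = events(step, goal, blocked)
        if changed or plan is None or state not in plan:
            plan = bfs_plan(state, goal, blocked)      # replan from current state
        if plan is None or state == goal:
            break
        state = plan[plan.index(state) + 1]            # execute the next action
    return state == goal

# Example events: the human moves the goal at step 3 and an obstacle appears.
def events(step, goal, blocked):
    if step == 3:
        return (4, 4), blocked | {(2, 2)}, True
    return goal, blocked, False

print(execute(start=(0, 0), goal=(3, 0), blocked=set(), events=events))
```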
49

Facilitating Human-Robot Collaboration Using a Mixed-Reality Projection System

January 2017 (has links)
abstract: Human-robot collaboration can be a challenging exercise, especially when both the human and the robot want to work simultaneously on a given task. It becomes difficult for the human to understand the intentions of the robot and vice-versa. To overcome this problem, a novel approach using the concept of Mixed-Reality has been proposed, which uses the surrounding space as the canvas to augment projected information on and around 3D objects. A vision-based tracking algorithm precisely detects the pose and state of the 3D objects, and human-skeleton tracking is performed to create a system that is both human-aware and context-aware. Additionally, the system can warn humans about the intentions of the robot, thereby creating a safer environment to work in. An easy-to-use and universal visual language has been created which could form the basis for interaction in various human-robot collaborations in manufacturing industries. An objective and subjective user study was conducted to test the hypothesis that using this system to execute a human-robot collaborative task would result in higher performance than traditional methods such as printed instructions and mobile devices. Multiple measuring tools were devised to analyze the data, which finally led to the conclusion that the proposed mixed-reality projection system does improve the human-robot team's efficiency and effectiveness and hence will be a better alternative in the future. / Dissertation/Thesis / Masters Thesis Computer Science 2017
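As a rough illustration of the projection step in such a system, the sketch below uses a pinhole model to map a 3D anchor point next to a tracked object into projector pixel coordinates where a cue could be drawn. The intrinsics and pose values are placeholders, not the system's actual calibration or rendering pipeline.

```python
# Given the tracked pose of an object in the projector frame, a pinhole model
# maps a 3-D anchor point next to the object to projector pixels where a cue
# (e.g. a warning about the robot's next motion) can be drawn. All numbers are
# placeholders for illustration.

import numpy as np

def project_point(p_world, R, t, K):
    """Project a 3-D point into projector pixel coordinates (pinhole model)."""
    p_cam = R @ p_world + t          # rigid transform into the projector frame
    u, v, w = K @ p_cam              # apply intrinsics
    return np.array([u / w, v / w])  # perspective divide

# Placeholder projector intrinsics (focal lengths and principal point, pixels).
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0,    0.0,   1.0]])

# Tracked object pose: here simply 1.2 m in front of the projector, no rotation.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.2])

# Draw the cue 10 cm to the right of the object's origin, on the work surface.
anchor = np.array([0.10, 0.0, 0.0])
pixel = project_point(anchor, R, t, K)
print(f"draw warning glyph at projector pixel {pixel.round(1)}")
```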
50

Human-Robot Cooperation: Communication and Leader-Follower Dynamics

January 2014 (has links)
abstract: As robotic systems are used in increasingly diverse applications, the interaction of humans and robots has become an important area of research. In many of the applications of physical human-robot interaction (pHRI), the robot and the human can be seen as cooperating to complete a task with some object of interest. Often these applications are in unstructured environments where many paths can accomplish the goal. This creates a need for the ability to communicate a preferred direction of motion between both participants in order to move in a coordinated way. This communication method should be bidirectional to fully utilize both the robot's and the human's capabilities. Moreover, often in cooperative tasks between two humans, one human will operate as the leader of the task and the other as the follower. These roles may switch during the task as needed. The need for communication extends into this area of leader-follower switching. Furthermore, not only is there a need to communicate the desire to switch roles but also to control this switching process. Impedance control has been used as a way of dealing with some of the complexities of pHRI. For this investigation, it was examined whether impedance control can be utilized as a way of communicating a preferred direction between humans and robots. The first set of experiments tested whether a human could detect the preferred direction of a robot by grasping and moving an object coupled to the robot. The second set tested the reverse case: whether the robot could detect the preferred direction of the human. The ability to detect the preferred direction was shown to be up to 99% effective. Using these results, a control method to allow a human and robot to switch leader and follower roles during a cooperative task was implemented and tested. This method proved successful 84% of the time. This control method was refined using adaptive control, resulting in lower interaction forces and a success rate of 95%. / Dissertation/Thesis / M.S. Mechanical Engineering 2014
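As a hedged one-dimensional illustration of the idea, the sketch below has the robot signal its preferred direction through a stiff virtual spring toward a target position and yield the leader role when the human pushes against that intent persistently. The gains, thresholds, and switching rule are illustrative assumptions, not the controllers evaluated in the thesis.

```python
# A 1-D sketch: an impedance-controlled robot signals a preferred direction via
# a stiff virtual spring toward its target and yields the leader role when the
# human pushes against it persistently. Gains and thresholds are assumptions.

def impedance_force(x, v, x_ref, k, b):
    """Virtual spring-damper force the robot renders at the shared handle."""
    return -k * (x - x_ref) - b * v

def simulate(steps=400, dt=0.01, mass=2.0):
    x, v = 0.0, 0.0
    x_ref = 0.5          # robot's preferred target position (leader intent)
    role = "robot_leads"
    k_lead, k_follow, b = 200.0, 20.0, 15.0
    push_timer = 0.0

    for i in range(steps):
        t = i * dt
        # Human applies a steady opposing force after 1 s (wants to move away).
        f_human = -25.0 if t > 1.0 else 0.0

        k = k_lead if role == "robot_leads" else k_follow
        f_robot = impedance_force(x, v, x_ref, k, b)

        # Role switching: sustained human force against the robot's intent
        # is read as a request to take over the lead.
        opposing = f_human * (x_ref - x) < 0
        push_timer = push_timer + dt if (opposing and abs(f_human) > 15.0) else 0.0
        if role == "robot_leads" and push_timer > 0.5:
            role = "human_leads"   # robot becomes a compliant follower

        a = (f_human + f_robot) / mass
        v += a * dt
        x += v * dt
    return role, round(x, 3)

print(simulate())   # expected: robot yields the lead and drifts with the human
```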
