51

Physical Human-Bicycle Interfaces for Robotic Balance Assistance

January 2020 (has links)
abstract: Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult or even impossible for persons with disabilities. These difficulties may be partly alleviated by providing active balance and steering assistance to the rider. In order to provide this assistance while maintaining free maneuverability, it is necessary to measure the position of the rider on the bicycle and to understand the rider's intent. Applying autonomy to bicycles also has the potential to address some of the challenges posed by traditional automobiles, including CO2 emissions, land use for roads and parking, pedestrian safety, high ownership cost, and difficulty traversing narrow or partially obstructed paths. The Smart Bike research platform provides a set of sensors and actuators designed to aid in understanding human-bicycle interaction and to provide active balance control to the bicycle. The platform consists of two specially outfitted bicycles, one with force and inertial measurement sensors and the other with robotic steering and a control moment gyroscope, along with the associated software for collecting useful data and running controlled experiments. Each bicycle operates as a self-contained embedded system, which can be used for untethered field testing or can be linked to a remote user interface for real-time monitoring and configuration. Testing with both systems reveals promising capability for applications in human-bicycle interaction and robotics research. / Dissertation/Thesis / Masters Thesis Software Engineering 2020
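The abstract does not spell out the balance-control law, but platforms of this kind typically stabilize roll with state feedback applied through the control moment gyroscope. Below is a minimal, hypothetical PD loop run against a linearized inverted-pendulum roll model standing in for the bicycle; all gains and physical parameters are illustrative assumptions, not values from the thesis.

```python
# Hypothetical PD roll-stabilization sketch for a CMG-equipped bicycle.
# A linearized inverted-pendulum model stands in for the real vehicle.

KP, KD = 400.0, 80.0        # PD gains on roll angle and roll rate (assumed)
TORQUE_LIMIT = 50.0         # CMG output torque saturation, N*m (assumed)
MASS, HEIGHT, INERTIA = 20.0, 1.0, 20.0  # kg, CoM height m, roll inertia kg*m^2 (assumed)
G = 9.81
DT = 0.01                   # 100 Hz control loop

roll, roll_rate = 0.1, 0.0  # start 0.1 rad away from upright

for step in range(500):     # 5 simulated seconds
    # PD law: corrective torque opposing the measured lean. On the real
    # platform the simulated state would be replaced by the IMU reading.
    torque = -(KP * roll + KD * roll_rate)
    torque = max(-TORQUE_LIMIT, min(TORQUE_LIMIT, torque))
    # Linearized roll dynamics: gravity destabilizes, the CMG torque corrects.
    roll_accel = (MASS * G * HEIGHT * roll + torque) / INERTIA
    roll_rate += roll_accel * DT
    roll += roll_rate * DT

print(f"roll after 5 s: {roll:.4f} rad")  # decays toward upright with these gains
```

With the assumed gains the closed loop is stable, so the simulated lean decays toward upright; on the hardware, the torque command would go to the CMG driver instead of the integrator.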
52

Modeling Human Learning in Games

Alghamdi, Norah K. 12 1900 (has links)
Human-robot interaction is an important and broad area of study. To achieve successful interaction, we have to study human decision-making rules. This work investigates human learning rules in games in the presence of intelligent decision makers. In particular, we analyze human behavior in a congestion game. The game models traffic in a simple scenario where multiple vehicles share two roads. The human player controls ten vehicles and decides how to distribute them between the two roads; one hundred simulated players each control one vehicle. The game is repeated for many rounds, allowing the players to adapt and formulate a strategy, and after each round the cost of each road and visual assistance are shown to the human player. The goal of all players is to minimize the total congestion experienced by the vehicles they control. To demonstrate our results, we first built a human-player simulator using the Fictitious Play and Regret Matching algorithms. Then, we showed the passivity property of these algorithms after adjusting the passivity condition to suit a discrete-time formulation. Next, we conducted the experiment online to allow players to participate. A similar analysis was done on the collected data to study the passivity of the human decision-making rule. We observe different performances with different types of virtual players. However, in all cases the human decision rule satisfied the passivity condition. This result implies that human behavior can be modeled as passive, and systems can be designed to use these results to influence human behavior and reach desirable outcomes.
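As a rough illustration of the simulated players described above, the sketch below runs a population of fictitious-play agents in a two-road congestion game. A smoothed (logit) best response is used in place of the pure best response to avoid the degenerate oscillation of perfectly synchronized deterministic players; the linear cost function, smoothing temperature, and the randomized stand-in for the human player are assumptions, not the thesis's actual setup.

```python
# Smoothed fictitious play in a two-road congestion game (illustrative sketch).

import math
import random

N_ROUNDS = 200
N_SIM_PLAYERS = 100        # simulated players, one vehicle each
HUMAN_VEHICLES = 10        # vehicles placed by the (here randomized) "human"

def road_cost(load):
    """Assumed congestion cost: grows linearly with the number of vehicles."""
    return float(load)

def logit_choice(avg_costs, tau=5.0):
    """Smoothed best response: prefer the road with lower empirical average cost."""
    p0 = 1.0 / (1.0 + math.exp((avg_costs[0] - avg_costs[1]) / tau))
    return 0 if random.random() < p0 else 1

# Each simulated player tracks the empirical average cost observed on each road.
beliefs = [[0.0, 0.0] for _ in range(N_SIM_PLAYERS)]

for t in range(1, N_ROUNDS + 1):
    choices = [logit_choice(b) for b in beliefs]
    human_on_road0 = random.randint(0, HUMAN_VEHICLES)   # stand-in for the human
    loads = [choices.count(0) + human_on_road0,
             choices.count(1) + (HUMAN_VEHICLES - human_on_road0)]
    for b in beliefs:                                    # fictitious-play update
        b[0] += (road_cost(loads[0]) - b[0]) / t
        b[1] += (road_cost(loads[1]) - b[1]) / t

print("final split of simulated vehicles:", choices.count(0), "vs", choices.count(1))
```

With a linear cost, the population split drifts toward the equilibrium in which both roads carry roughly equal load.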
53

Motion Analysis of Physical Human-Human Collaboration with Varying Modus

Freeman, Seth Michael 05 April 2022 (has links)
Despite the existence of robots that are capable of lifting heavy loads, robotic assistants that can help people move objects as part of a team are not available, because these robots lack the critical intelligence needed for efficient and effective performance. This work makes progress toward improved intelligence for robotic lifting assistants by studying human-human teams in order to understand basic principles of co-manipulation teamwork. The effect of modus, or the manner in which a team moves an object together, is the primary subject of this work. Data was collected from over 30 human-human trials in which participants in teams of two co-manipulated an object that weighed 60 pounds. These participants maneuvered through a series of five obstacles while carrying the object, exhibiting one of four modi at any given time. The raw data from these experiments was cleaned and distilled into pose, velocity, acceleration, and interaction-wrench trajectories. Classifying the original base set of four modi with a neural net showed that two of the four modi were very similar, such that classification between three modi was more appropriate. The three modi used in classification were quickly, smoothly, and avoiding obstacles. Using a convolutional neural net, the three modi could be classified from a validation set with up to 85% accuracy. Detecting modus has the potential to greatly improve human-robot co-manipulation by providing a means to determine an appropriate robot-behavior objective function. Survey data showed that participants trust each other more after working together and feel that their partners are more qualified after they have worked together. A number of modified scales were also shown to be reliable, which will allow future researchers in human-robot co-manipulation to properly evaluate how humans feel about working with each other. These same scales will also provide a useful comparison to human-robot teams in order to determine how much humans trust robots as co-manipulation team members.
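The abstract names a convolutional neural net over trajectory data but gives no architecture. The following is a minimal PyTorch sketch of a 1-D CNN classifying fixed-length multichannel windows into the three modi; the channel count, window length, and layer sizes are illustrative assumptions, not the thesis's network.

```python
# Illustrative 1-D CNN for classifying co-manipulation modus from trajectory windows.

import torch
import torch.nn as nn

N_CHANNELS = 24   # e.g. 6 pose + 6 velocity + 6 acceleration + 6 wrench (assumed)
WINDOW = 200      # samples per classification window (assumed)
N_MODI = 3        # quickly, smoothly, avoiding obstacles

class ModusClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one feature vector
        )
        self.head = nn.Linear(64, N_MODI)

    def forward(self, x):              # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

model = ModusClassifier()
logits = model(torch.randn(8, N_CHANNELS, WINDOW))   # a dummy batch of 8 windows
print(logits.shape)                                  # torch.Size([8, 3])
```

The adaptive pooling layer makes the classifier indifferent to the exact window length, which is convenient when segmenting variable-speed trials.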
54

People Detection based on Points Tracked by an Omnidirectional Camera and Interaction Distance for Service Robots System

Tasaki, Tsuyoshi 24 September 2013 (has links)
Kyoto University / 0048 / New-system doctoral course / Doctor of Informatics / Degree No. Kō 17926 / Jōhaku No. 508 / Shinsei||Jō||90 (Main Library) / 30746 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Hiroshi G. Okuno, Professor Tatsuya Kawahara, Professor Yuichi Nakamura, Professor Atsushi Igarashi / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
55

Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control

Khan Mohd, Tauheed 06 September 2019 (has links)
No description available.
56

Using Augmented Reality technology to improve health and safety for workers in Human Robot Collaboration environment: A literature review

Chemmanthitta Gopinath, Dinesh January 2022 (has links)
Human Robot Collaboration (HRC) allows humans to operate more efficiently by reducing the effort they must expend. Robots can perform the majority of difficult and repetitive activities with or without human input. However, there is a risk of accidents and collisions when people and robots operate closely together, so safety in this area is extremely important. There are various techniques for increasing worker safety, and one of them is the use of Augmented Reality (AR). AR implementation in industry is still in its early stages. The goal of this study is to examine how workers' safety may be enhanced when AR is used in an HRC setting. A literature review is carried out, as well as a case study in which managers and engineers from Swedish firms are interviewed about their experiences with AR-assisted safety. This is a qualitative exploratory study aimed at gathering extensive insight into the field, since the objective is to explore approaches by which AR can improve safety. Inductive qualitative analysis was used to examine the data. According to the studies reviewed, visualisation, awareness, ergonomics, and communication are the most critical areas in which AR may improve safety. When performing a task, augmented reality helps the user visualize instructions and information, allowing them to complete the task more quickly and without mistakes. When working near robots, AR enhances awareness, helps anticipate mishaps, and improves worker trust in a collaborative atmosphere. When AR is used to interact with collaborative robots, it causes fewer physical and psychological challenges than traditional approaches. AR allows operators to communicate with robots and make adjustments without having to touch them. As a result, accidents are avoided and safety is ensured. There is a gap between theoretical study findings and the data gathered from interviews. Even though AR and HRC are not new topics, and many studies are being conducted on them, there are key aspects that influence their adoption in industry. Due to considerations such as education, experience, suitability, system complexity, time, and technology, managers in various firms employ HRC and AR less for ensuring safety. This study also presents possible future solutions to these challenges.
57

Initial steps toward human augmented mapping

Topp, Elin Anna January 2006 (has links)
With progress in research and product development, humans and robots are coming ever closer to each other, and the idea of a personalised general service robot is not too far-fetched. Crucial for such a service robot is the ability to navigate its working environment, which must be assumed to be an arbitrary domestic or office-like environment shared with human users and bystanders. With methods developed and investigated in the field of simultaneous localisation and mapping, it has become possible for mobile robots to explore and map an unknown environment while staying localised with respect to their starting point and surroundings. These approaches, however, do not consider the representation of the environment that humans use to refer to particular places. Robotic maps are often metric representations of features obtained from sensory data, whereas humans have a more topological, in fact partially hierarchical, way of representing environments. Especially for communication between a user and her personal robot, it is thus necessary to provide a link between the robotic map and the human understanding of the robot's workspace. The term Human Augmented Mapping is used for a framework that allows a robotic map to be integrated with human concepts, thereby facilitating communication about the environment. By assuming an interactive setting for the map-acquisition process, the user can influence the process significantly: personal preferences can become part of the environment representation that the robot acquires. Advantages also become obvious for the mapping process itself, since in an interactive setting the robot can ask for information and resolve ambiguities with the help of the user. Thus, a scenario of a "guided tour", in which a user can ask a robot to follow her and be shown the surroundings, is assumed as the starting point for a system that integrates robotic mapping, interaction, and human environment representations. Based on results from robotics research, psychology, human-robot interaction, and cognitive science, a general architecture for a Human Augmented Mapping system is presented. This architecture combines a hierarchically organised robotic mapping approach with interaction abilities by means of a high-level environment model. An initial system design and implementation that combines a tracking-and-following approach with a mapping system is described. Observations from a pilot study in which this initial system was used successfully are reported; they support the assumptions about the usefulness of the environment model as the link between robotic and human representations. / QC 20101125
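A minimal sketch of the link the abstract describes between a metric robotic map and human place concepts: labels given during a guided tour index into metric poses, so a spoken place name can later be resolved to a navigation goal. Class and method names here are hypothetical, not the thesis's implementation.

```python
# Hypothetical data structure linking human place labels to metric poses.

from dataclasses import dataclass, field

@dataclass
class Region:
    """A human-labeled place ("kitchen"), anchored to poses in the metric map."""
    label: str
    poses: list = field(default_factory=list)   # (x, y, theta) samples from the tour

@dataclass
class HumanAugmentedMap:
    regions: dict = field(default_factory=dict)

    def add_label(self, label, robot_pose):
        """Called when the user names the current place during a guided tour."""
        self.regions.setdefault(label, Region(label)).poses.append(robot_pose)

    def resolve(self, label):
        """Translate a human place name into a metric goal for the planner."""
        region = self.regions.get(label)
        if region is None:
            return None          # unknown place: the robot can ask the user
        xs, ys, ths = zip(*region.poses)
        return (sum(xs) / len(xs), sum(ys) / len(ys), ths[0])

m = HumanAugmentedMap()
m.add_label("kitchen", (2.0, 3.5, 0.0))
m.add_label("kitchen", (2.4, 3.1, 0.2))
print(m.resolve("kitchen"))      # averaged metric goal for "kitchen"
```

The `resolve` fallback returning `None` is where the interactive clarification described above would hook in: an unknown label becomes a question to the user rather than a failure.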
58

Using Augmented Virtuality to Improve Human-Robot Interactions

Nielsen, Curtis W. 03 February 2006 (has links) (PDF)
Mobile robots can be used in situations and environments that are distant from an operator. In order to control a robot effectively, the operator requires an understanding of the environment and situation around the robot. Since the robot is at a remote distance and cannot be directly observed, the information the operator needs to develop an awareness of the robot's situation comes from the user interface, and the interface's usefulness depends on how the information from the remote environment is presented. Conventional interfaces for interacting with mobile robots typically present information in a multi-windowed display where different sets of information appear in different windows. These disjoint sets of information require significant cognitive processing on the part of the operator to interpret and understand. To reduce this cognitive effort, requirements and technology for a three-dimensional augmented-virtuality interface are presented. The 3D interface is designed to combine multiple sets of information into a single correlated window, which can reduce the cognitive processing required to interpret and understand the information in comparison to a conventional (2D) interface. The usefulness of the 3D interface is validated, in comparison to a prototype of conventional 2D interfaces, through a series of navigation- and exploration-based user studies. The user studies reveal that operators are able to drive the robot, build maps, find and identify items, and finish tasks faster with the 3D interface than with the 2D interface. Moreover, operators have fewer collisions, avoid walls better, and use a pan-tilt-zoom camera more with the 3D interface than with the 2D interface. Performance with the 3D interface is also more tolerant of network delay and distracting sets of information. Finally, principles for presenting multiple sets of information to a robot operator are presented and used to discuss and illustrate possible extensions of the 3D interface to other domains.
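The core mechanism behind such a single correlated window is expressing every information source (map, robot model, camera imagery) in one reference frame before rendering. A minimal sketch, assuming 2-D poses and homogeneous transforms; the rendering itself is omitted and the names are hypothetical:

```python
# Placing robot-frame observations into the world frame shared with the map.

import numpy as np

def pose_to_matrix(x, y, theta):
    """2-D robot pose -> 3x3 homogeneous transform (world <- robot)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

T_world_robot = pose_to_matrix(1.0, 2.0, np.pi / 2)

# A point observed in the robot frame (homogeneous coordinates)...
p_robot = np.array([0.5, 0.0, 1.0])
# ...is mapped into the world frame, where the map already lives:
p_world = T_world_robot @ p_robot
print(p_world[:2])   # -> [1.0, 2.5]: ready to draw in the same scene as the map
```

Once camera billboards, map cells, and the robot avatar all share this frame, one virtual camera can present them together instead of in disjoint windows.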
59

Effect of a human-teacher vs. a robot-teacher on human learning a pilot study

Smith, Melissa A. B. 01 August 2011 (has links)
Studies of the dynamics of human-robot interactions have increased within the past decade as robots become more integrated into the daily lives of humans. However, much of the research into learning and robotics has focused on methods that would allow robots to learn from humans, and very little has been done on how and what, if anything, humans could learn from programmed robots. A between-subjects experiment was conducted comparing two groups: one in which participants learned a simple pick-and-place block task from a video of a human teacher, and one in which participants learned the same task from a video of a robotic teacher. After being taught the task, the participants performed a 15-minute distracter task and were then timed in their reconstruction of the block configuration. An exit survey asking about their level of comfort learning from robot and computer entities was given upon completion. Results showed no significant difference in the rebuild scores of the two groups, but a marginally significant difference in rebuild times. Exit survey results, research implications, and future work are discussed.
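For readers unfamiliar with the analysis, a between-subjects comparison like this one is commonly evaluated with an independent-samples t-test. A minimal sketch with made-up placeholder numbers, not the study's data:

```python
# Independent-samples t-test comparing the two teaching conditions.

from scipy import stats

human_teacher_times = [92.1, 101.4, 88.0, 110.2, 95.7]    # seconds, placeholder
robot_teacher_times = [104.3, 118.9, 99.5, 121.0, 108.8]  # seconds, placeholder

t, p = stats.ttest_ind(human_teacher_times, robot_teacher_times)
print(f"t = {t:.2f}, p = {p:.3f}")   # e.g. p < .10 would read as "marginally significant"
```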
60

Investigation Of Tactile Displays For Robot To Human Communication

Barber, Daniel 01 January 2012 (has links)
Improvements in autonomous systems technology and a growing demand within military operations are spurring a revolution in Human-Robot Interaction (HRI). These mixed-initiative human-robot teams are enabled by Multi-Modal Communication (MMC), which supports redundancy and levels of communication that are more robust than single-mode interaction (Bischoff & Graefe, 2002; Partan & Marler, 1999). Tactile communication via vibrotactile displays is an emerging technology, potentially beneficial to advancing HRI. Incorporating tactile displays within MMC requires developing messages equivalent in communicative power to the speech and visual signals used in the military. Toward that end, two experiments were performed to investigate the feasibility of a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence structure for robot-to-human communication. Experiment one evaluated tactons from the literature with standardized parameters, grouped into categories (directional, dynamic, and static) based on the nature and meaning of the patterns, to inform the design of a tactile syntax. This experiment revealed that directional tactons performed better than non-directional tactons; therefore, a syntax for experiment two composed of a non-directional and a directional tacton was more likely to perform better than chance. Experiment two tested the syntax structure of equally performing tactons identified in experiment one, revealing participants' ability to interpret tactile sentences better than chance, with or without the presence of an independent concurrent task. This finding advanced the state of the art in tactile displays from one-word to two-word phrases, facilitating inclusion of the tactile modality within MMC for HRI.
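A minimal sketch of the two-word sentence structure the experiments tested, composing a non-directional tacton with a directional one. The tactor layout, patterns, and timings below are illustrative assumptions, not the standardized parameters from the study.

```python
# Composing two-word tactile "sentences" from a lexicon of tactons.
# Assumed hardware: eight tactors worn in a belt, indexed clockwise from the navel.

TACTONS = {
    "halt":   [(0, 0.4), (4, 0.4)],           # static: pulses front and back
    "rally":  [(i, 0.1) for i in range(8)],   # dynamic: a sweep around the belt
    "north":  [(0, 0.3)],                     # directional: single front pulse
    "east":   [(2, 0.3)],                     # directional: single right pulse
}

def sentence(*words):
    """Concatenate tactons with a short inter-word gap (None = pause)."""
    out = []
    for w in words:
        out.extend(TACTONS[w])                # each entry: (tactor index, seconds)
        out.append((None, 0.2))               # pause separating the "words"
    return out[:-1]                           # drop the trailing pause

# "rally east": a non-directional command followed by a direction.
print(sentence("rally", "east"))
```

The point of the fixed word order (action, then direction) is that the listener can decode the sentence incrementally, which is what made two-word phrases interpretable above chance even under a concurrent task.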
