  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Optimal behavior composition for robotics

Bartholomew, Paul D. 22 May 2014 (has links)
The development of a humanoid robot that mimics human motion requires extensive programming as well as an understanding of the motion limitations of the robot. Programming the countless possibilities for a robot's response to observed human motion can be time consuming. To simplify this process, this thesis presents a new approach for mimicking captured human motion data through the development of a composition routine. This routine is built upon a behavior-based framework and is coupled with optimization by calculus to determine the appropriate weightings of predetermined motion behaviors. The completion of this thesis helps to fill a void in human/robot interactions involving mimicry and behavior-based design. Technological advancements in the way computers and robots identify human motion and determine for themselves how to approximate that motion have helped make possible the mimicry of observed human subjects. In fact, many researchers have developed humanoid systems that are capable of mimicking human motion data; however, these systems do not use behavior-based design. This thesis will explain the framework and theory behind our optimal behavior composition algorithm and the selection of sinusoidal motion primitives that make up a behavior library. This algorithm breaks captured motion data into various time intervals, then optimally weights the defined behaviors to best approximate the captured data. Since this routine does not reference previous or following motion sequences, discontinuities may exist between time intervals. To address this issue, the addition of a PI controller to regulate and smooth out the transitions between time intervals will be shown. The effectiveness of using the optimal behavior composition algorithm to create an approximated motion that mimics captured motion data will be demonstrated through an example configuration of hardware and a humanoid robot platform.
An example of arm motion mimicry will be presented and includes various image sequences from the mimicry as well as trajectories containing the joint positions for both the human and the robot.
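The core idea of optimally weighting a library of sinusoidal behaviors to approximate one interval of captured motion can be sketched as a least-squares fit. This is a hypothetical illustration, not the thesis's actual algorithm; the function name, frequencies, and data are invented for the example.

```python
# Hypothetical sketch: optimally weight sinusoidal motion primitives to
# approximate one interval of captured joint-angle data. All names,
# frequencies, and data here are illustrative, not from the thesis.
import numpy as np

def compose_behaviors(t, captured, freqs):
    """Least-squares weights for a library of sinusoidal behaviors.

    t        : (N,) sample times within one interval
    captured : (N,) captured joint trajectory for that interval
    freqs    : behavior frequencies (rad/s) in the assumed library
    """
    # Each column is one behavior evaluated over the interval
    # (sine and cosine terms, plus a constant-offset behavior).
    basis = np.column_stack(
        [np.sin(w * t) for w in freqs]
        + [np.cos(w * t) for w in freqs]
        + [np.ones_like(t)]
    )
    # Optimal weights minimize the squared error to the captured data.
    weights, *_ = np.linalg.lstsq(basis, captured, rcond=None)
    return weights, basis @ weights  # weights and composed approximation

t = np.linspace(0.0, 1.0, 100)
captured = 0.5 * np.sin(2 * np.pi * t) + 0.1   # synthetic "captured" motion
weights, approx = compose_behaviors(t, captured, freqs=[2 * np.pi, 4 * np.pi])
```

Because each interval is fit independently, the composed trajectories can disagree at interval boundaries, which is exactly the discontinuity problem the abstract's PI controller addresses.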
122

Towards quantifying upper-arm rehabilitation metrics for children through interaction with a humanoid robot

Brooks, Douglas A. 24 April 2012 (has links)
The objective of this research effort is to further rehabilitation techniques for children by developing and validating the core technologies needed to integrate therapy instruction with child-robot play interaction in order to improve upper-arm rehabilitation. Using computer vision techniques such as Motion History Imaging (MHI), Multimodal Mean, edge detection, and Random Sample Consensus (RANSAC), movements can be quantified through robot observation. By also incorporating three-dimensional data, obtained via an infrared projector and coupled with Principal Component Analysis (PCA), depth information can be utilized to create a robust algorithm. Finally, utilizing prior knowledge regarding exercise data, physical therapeutic metrics, and novel approaches, a mapping to therapist instructions can be created, allowing robotic feedback and intelligent interaction.
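Of the vision techniques named above, Motion History Imaging is simple enough to sketch directly: moving pixels are stamped at maximum intensity and everything else fades, so recent motion appears brightest. This is a minimal hypothetical sketch; the threshold and decay values are illustrative, not taken from the thesis.

```python
# Hypothetical sketch of a Motion History Image (MHI) update, one of the
# vision techniques named in the abstract; thresholds and the decay rate
# are illustrative assumptions.
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=255, thresh=30, decay=16):
    """Fold one new grayscale frame into a motion history image.

    Pixels that changed by more than `thresh` are set to the maximum
    intensity `tau`; all other pixels fade by `decay`, so the most
    recent motion is the brightest region of the image.
    """
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    faded = np.clip(mhi.astype(int) - decay, 0, tau)
    return np.where(motion, tau, faded).astype(np.uint8)

prev_frame = np.zeros((4, 4), dtype=np.uint8)
frame = prev_frame.copy()
frame[1, 1] = 200  # a single "moving" pixel between frames
mhi = update_mhi(np.zeros((4, 4), dtype=np.uint8), prev_frame, frame)
```

Summed or thresholded over time, such an image yields simple scalar measures of how much and where a child moved, which is the kind of quantification the abstract describes.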
123

Human coordination of robot teams: an empirical study of multimodal interface design

Cross, E. Vincent. Gilbert, Juan E., January 2009 (has links)
Thesis (Ph. D.)--Auburn University. / Abstract. Includes bibliographical references (p. 86-89).
124

Evaluation of a human-robot collaboration in an industrial workstation

Gonzalez, Victoria, Ruiz Castro, Pamela January 2018 (has links)
The fast changes in the industry require improved production workstations which ensure the workers' safety and improve the efficiency of the production. Technology developments and revised legislation have increased the possibility of using collaborative robots. This allows for new types of industry workstations where robots and humans cooperate in performing tasks. In addition to safety, the design of collaborative workstations needs to consider the areas of ergonomics and task allocation to ensure appropriate work conditions for the operators, while providing overall system efficiency. In the design development process of such workstations, software simulations can improve quality and save time and money by supporting decision making and allowing concepts to be tested before a physical workstation is created, which in turn leads to better final solutions and a faster process of implementation or reconfiguration. The aim of this study is to investigate the possibility of having a human-robot collaboration in a workstation that is based on a use-case from the industry. The concept designs will be simulated and verified through a physical prototype, with which ergonomic analysis, time analysis, and risk assessments will be compared to validate the resultant collaborative workstation.
125

Metrics to evaluate human teaching engagement from a robot's point of view

Novanda, Ori January 2017 (has links)
This thesis was motivated by a study of how robots can be taught by humans, with an emphasis on allowing persons without programming skills to teach robots. The focus of this thesis was to investigate what criteria could or should be used by a robot to evaluate whether a human teacher is (or potentially could be) a good teacher in robot learning by demonstration; in effect, choosing the teacher that can maximize the benefit to the robot in learning by imitation/demonstration. The study approached this topic by taking a technology snapshot in time to see if a representative example of research laboratory robot technology is capable of assessing teaching quality. With this snapshot, this study evaluated how humans observe teaching quality to attempt to establish measurement metrics that can be transferred as rules or algorithms that are beneficial from a robot's point of view. To evaluate teaching quality, the study looked at the teacher-student relationship from a human-human interaction perspective. Two factors were considered important in defining a good teacher: engagement and immediacy. The study reviewed further literature on the detailed elements of engagement and immediacy, and also examined physical effort as a possible metric for measuring the level of engagement of the teachers. An investigatory experiment was conducted to evaluate which modality the participants prefer to employ in teaching a robot when the robot can be taught using voice, gesture demonstration, or physical manipulation. The findings from this experiment suggested that the participants appeared to have no preference in terms of human effort for completing the task. However, there was a significant difference in human enjoyment preferences of input modality and a marginal difference in the robot's perceived ability to imitate.
A main experiment was conducted to study the detailed elements that might be used by a robot in identifying a 'good' teacher. The main experiment was conducted in two subexperiments. The first part recorded the teacher's activities and the second part analysed how humans evaluate the perception of engagement when assessing another human teaching a robot. The results from the main experiment suggested that in human teaching of a robot (human-robot interaction), humans (the evaluators) also look for some immediacy cues that happen in human-human interaction for evaluating the engagement.
126

HRC implementation in laboratory environment: Development of an HRC demonstrator

Boberg, Arvid January 2018 (has links)
Eurofins is one of the world's largest laboratories which, among other things, offers chemical and microbiological analyses in agriculture, food and environment. Several hundred thousand tests of various foods are executed each year at Eurofins' facility in Jönköping, and the current processes include many repetitive manual tasks which could cause ergonomic problems. The company therefore wants to investigate the possibilities of utilizing Human-Robot Collaboration (HRC) at their facility. Human-Robot Collaboration is a growing concept that has made a big impression in both robot development and Industry 4.0. An HRC approach allows humans and robots to share their workspaces and work side by side, without being separated by a protective fence, which is common among traditional industrial robots. Human-Robot Collaboration is therefore believed to be able to optimize the workflows and relieve human workers from unergonomic tasks. The overall aim of the research project presented is to help the company gain a better understanding of existing HRC technologies. To achieve this goal, the state of the art of HRC had to be investigated and the needs, possibilities and limitations of HRC applications had to be identified at Eurofins' facility. Once these had been addressed, a demonstrator could be built and used for evaluating the applicability and suitability of HRC at Eurofins. The research project presented used the design science research process. The state of the art of HRC was studied in a comprehensive literature review, which also covered sterile robots and mobile robotics. The literature review identified possible research gaps in both HRC in laboratory environments and mobile solutions for HRC applications. The areas studied in the literature review together formed the basis of the prepared observations and interviews, used to generate the data needed to develop the design science research artefact, the demonstrator.
ABB's software for robotic simulation and offline programming, RobotStudio, was used in the development of the demonstrator, with the collaborative robot YuMi chosen for the HRC implementation. The demonstrator presented in the research project has been built, tested and refined in accordance with the design science research process. When the demonstrator could illustrate an applicable solution, it was evaluated for its performance and quality using a mixed-methods approach. Limitations were identified in both the performance and quality of the demonstrator's illustrated HRC implementation, including adaptability and sterility constraints. The research project concluded that an HRC application would be possible at a station of interest to the company, but it is not recommended due to the identified constraints. Instead, the company was recommended to look for stations which are more standardized and have less strict hygiene requirements. By the end of the research project, additional knowledge had been contributed to the company, including how HRC can affect today's working methods at Eurofins and in laboratory environments in general.
127

Localisation et suivi de visages à partir d'images et de sons : une approche Bayésienne temporelle et commutative / From images and sounds to face localization and tracking: a switching dynamical Bayesian framework

Drouard, Vincent 18 December 2017 (has links)
In this thesis, we address the well-known problem of head-pose estimation in the context of human-robot interaction (HRI). We accomplish this task in a two-step approach. First, we focus on the estimation of the head pose from visual features. We design features that can represent the face under different orientations and at various resolutions in the image, resulting in a high-dimensional representation of a face from an RGB image. Inspired by [Deleforge 15], we propose to solve the head-pose estimation problem by building a link between the head-pose parameters and the high-dimensional features perceived by a camera. This link is learned using a high-to-low probabilistic regression built from a probabilistic mixture of affine transformations. With respect to classic head-pose estimation methods, we extend the head-pose parameters by adding variables that account for variety in the observations (e.g., misaligned face bounding-boxes), to obtain a method that is robust under realistic conditions. Evaluation shows that our approach achieves better results than classic regression methods and results similar to state-of-the-art head-pose methods that use additional cues (e.g., depth information). Secondly, we propose a temporal model that uses a tracker's ability to combine information from both the present and the past. Our aim is to produce a smoother estimation output over time and to correct oscillations between two consecutive independent estimates.
The proposed approach embeds the previous regression into a temporal filtering framework. This extension is part of the family of switching dynamic models and keeps all the advantages of the mixture of affine regressions. Overall, the proposed tracker gives a more accurate and smoother estimation of the head pose on a video sequence. In addition, the proposed switching dynamic model gives better results than standard tracking models such as the Kalman filter. While applied here to the head-pose estimation problem, the methodology presented in this thesis is quite general and can be used to solve various regression and tracking problems; for example, we applied it to the tracking of a sound source in an image.
128

On Enhancing Myoelectric Interfaces by Exploiting Motor Learning and Flexible Muscle Synergies

January 2015 (has links)
abstract: Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots to safely interact in dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands in myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
/ Dissertation/Thesis / Doctoral Dissertation Mechanical Engineering 2015
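Muscle synergies of the kind discussed above are commonly extracted from rectified EMG with non-negative matrix factorization, which this sketch illustrates using Lee-Seung multiplicative updates. This is a generic technique, not the dissertation's specific method; the synthetic EMG data and synergy count are invented for the example.

```python
# Hypothetical sketch: extracting muscle synergies from rectified EMG via
# non-negative matrix factorization (Lee-Seung multiplicative updates).
# This is a generic technique, not the dissertation's method; the data
# and synergy count are synthetic.
import numpy as np

def extract_synergies(emg, n_synergies, iters=500, seed=0):
    """Factor emg (muscles x samples) ~= W (muscles x syn) @ H (syn x samples)."""
    rng = np.random.default_rng(seed)
    m, n = emg.shape
    W = rng.random((m, n_synergies)) + 1e-6   # synergy vectors (non-negative)
    H = rng.random((n_synergies, n)) + 1e-6   # activation coefficients
    for _ in range(iters):
        # Multiplicative updates preserve non-negativity by construction.
        H *= (W.T @ emg) / (W.T @ W @ H + 1e-9)
        W *= (emg @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

rng = np.random.default_rng(1)
true_W = rng.random((8, 2))            # 8 muscles, 2 underlying synergies
true_H = rng.random((2, 100))          # synergy activations over time
emg = true_W @ true_H                  # synthetic rank-2 "EMG" envelope
W, H = extract_synergies(emg, n_synergies=2)
error = np.linalg.norm(emg - W @ H) / np.linalg.norm(emg)
```

The columns of `W` are the synergy vectors; simultaneous and proportional control schemes typically map the rows of `H` to motion commands.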
129

Human Factors Analysis of Automated Planning Technologies for Human-Robot Teaming

January 2015 (has links)
abstract: Humans and robots need to work together as a team to accomplish certain shared goals due to the limitations of current robot capabilities. Human assistance is required to accomplish tasks, as human capabilities are often better suited for certain tasks and complement robot capabilities in many situations. Given the necessity of human-robot teams, it has long been assumed that for the robotic agent to be an effective team member, it must be equipped with automated planning technologies that help in achieving the goals delegated to it by its human teammates, as well as in deducing its own goals to proactively support its human counterpart by inferring their goals. However, there has not been any systematic evaluation of the accuracy of this claim. In my thesis, I perform a human factors analysis of the effectiveness of such automated planning technologies for remote human-robot teaming. In the first part of my study, I investigate the effectiveness of automated planning in remote human-robot teaming scenarios. In the second part, I investigate the effectiveness of a proactive robot assistant in remote human-robot teaming scenarios. Both investigations are conducted in a simulated urban search and rescue (USAR) scenario, where the human-robot teams are deployed during the early phases of an emergency response to explore all areas of the disaster scene. Through both studies, I evaluate how effective automated planning technology is in helping human-robot teams move closer to human-human teams. I utilize both objective measures (like accuracy and time spent on primary and secondary tasks, Robot Attention Demand, etc.) and a set of subjective Likert-scale questions (on situation awareness, immediacy, etc.) to investigate the trade-offs between different types of remote human-robot teams.
The results from both studies suggest that intelligent robots with automated planning capability and proactive support ability are welcomed in general. / Dissertation/Thesis / Masters Thesis Computer Science 2015
130

Mixture of Interaction Primitives for Multiple Agents

January 2017 (has links)
abstract: In a collaborative environment where multiple robots and human beings are expected to collaborate to perform a task, it becomes essential for a robot to be aware of the multiple agents working in its environment. A robot must also learn to adapt to different agents in the workspace and conduct its interaction based on the presence of these agents. A theoretical framework, called Interaction Primitives, was previously introduced to perform interaction learning from demonstrations in a two-agent work environment. This document is an in-depth description of a new Python framework for Interaction Primitives between two agents in single-task as well as multiple-task work environments, and of an extension of the original framework to a work environment with multiple agents performing a single task. The original theory of Interaction Primitives has been extended to create a framework which captures correlation between more than two agents while performing a single task. The Python framework is an intuitive, generic, easy-to-install, and easy-to-use library for applying Interaction Primitives in a work environment. The library was tested in simulated environments and in a controlled laboratory environment. The results and benchmarks of this library are available in the related sections of this document. / Dissertation/Thesis / Masters Thesis Computer Science 2017
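The essential computation behind Interaction Primitives is conditioning a joint Gaussian over all agents' trajectory parameters on the observed agent's parameters to predict the robot's response. This sketch shows only that Gaussian-conditioning step in miniature; the numbers are illustrative and none of this reflects the actual API of the framework described above.

```python
# Hypothetical sketch of the core Interaction Primitives computation:
# model the agents' trajectory weights as one joint Gaussian, then
# condition the robot's weights on the observed human weights. The
# dimensions and covariance values are illustrative, and this is not
# the API of the framework described in the abstract.
import numpy as np

def condition_robot_weights(mu, sigma, n_human, w_human_obs):
    """Posterior over robot weights given observed human weights.

    mu, sigma : joint mean/covariance over [human weights, robot weights]
    n_human   : number of human-weight dimensions (first block of mu)
    """
    mu_h, mu_r = mu[:n_human], mu[n_human:]
    S_hh = sigma[:n_human, :n_human]
    S_rh = sigma[n_human:, :n_human]
    S_rr = sigma[n_human:, n_human:]
    gain = S_rh @ np.linalg.inv(S_hh)             # regression of robot on human
    mu_post = mu_r + gain @ (w_human_obs - mu_h)  # shifted mean
    sigma_post = S_rr - gain @ S_rh.T             # reduced uncertainty
    return mu_post, sigma_post

mu = np.array([0.0, 0.0, 1.0])                    # 2 human dims, 1 robot dim
sigma = np.array([[1.0, 0.0, 0.8],
                  [0.0, 1.0, 0.0],
                  [0.8, 0.0, 1.0]])
mu_r, var_r = condition_robot_weights(mu, sigma, 2, np.array([1.0, 0.0]))
```

The multi-agent extension the abstract describes enlarges this joint distribution to cover more than two agents, so correlations among all agents are captured by the same conditioning step.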
