521 |
Controlling a Robot Hand in Simulation and Reality. Birgestam, Magnus, January 2008.
This master's thesis was carried out at KTH, the Royal Institute of Technology in Stockholm, and is part of a robot hand project called 10-X, whose aim is to develop a low-cost robot hand that is light and strong. The project's goal is to further improve the ability to control the robot hand in a user-friendly way. This has been done by implementing a controller, previously developed and used at KTH, that is intuitive and easy to customize to the needs of different kinds of grasps. To make the controller easy to use, a user interface has been created. Before the controller was implemented on the real hand, it was developed and tested in a simulation created in MATLAB/Simulink with the help of GraspIt!, a graphical physics engine. The movement of the robot finger is driven by the force from a leaf spring and a tendon that bends the finger. The finger is also exposed to contact forces, and all of these components had to be modelled in the simulation to make the finger behave properly.
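For a concrete picture of the finger model, the torque balance at a single joint can be sketched in a few lines. This is a minimal illustration assuming a linear torsional model of the leaf spring and simple viscous damping; all parameter values are invented, not those of the 10-X hand.

```python
def finger_joint_accel(theta, theta_dot, f_tendon,
                       r_tendon=0.01, k_spring=0.15, b=0.002,
                       inertia=1e-4, tau_contact=0.0):
    """Angular acceleration of a single finger joint [rad/s^2].

    Torque-balance sketch of the model described in the abstract: the tendon
    bends the finger, a leaf spring (modelled here as a linear torsional
    spring) opposes it, and contact torques enter externally. Parameters are
    illustrative only.
    """
    tau = (r_tendon * f_tendon      # tendon moment about the joint
           - k_spring * theta       # leaf-spring restoring torque
           - b * theta_dot          # damping
           + tau_contact)           # contact with object/environment
    return tau / inertia

# Integrate one explicit Euler step from rest with 20 N of tendon tension.
theta, theta_dot, dt = 0.0, 0.0, 1e-3
theta_dot += finger_joint_accel(theta, theta_dot, f_tendon=20.0) * dt
theta += theta_dot * dt
```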
|
522 |
A Multisensor 3D Perception System for the Mobile Robot HILARE. Ferrer, Michel, 21 December 1982.
The study presented here falls within the broad field of artificial vision. It concerns more specifically the integration of the three-dimensional (3D) perception system of the autonomous mobile robot HILARE. This system consists of a solid-state matrix camera, a laser rangefinder, and a mechanical structure that deflects the laser beam. This dissertation describes: the design of the deflection structure; the multilevel video image processing software, based on the use of a topological operator; and the software for analysing the rangefinder data. These programs respectively allow the robot to be localized by detecting light beacons, and the ground projection of the obstacles present in the explored area to be determined. This information is transmitted to the higher control level over a radio link.
|
523 |
Requirements for effective collision detection on industrial serial manipulators. Schroeder, Kyle Anthony, 16 October 2013.
Human-robot interaction (HRI) is the future of robotics. It is essential in expanding markets such as surgical, medical, and therapy robots. However, existing industrial systems can also benefit from safe and effective HRI. Many new robots are fitted with joint torque sensors to enable effective human-robot collision detection, but many existing, off-the-shelf industrial robotic systems are not equipped with such sensors. This work presents and demonstrates a method for effective collision detection on a system with motor current feedback instead of joint torque sensors. The effectiveness of the method is also evaluated by simulating collisions with human hands and arms. Joint torques are estimated from the input motor currents. Friction and hysteresis losses are estimated for each joint of a seven-degree-of-freedom (DOF) SIA5D manipulator. The estimated joint torques are validated by comparison with joint torques predicted by the recursive Newton-Euler equations. During a pick-and-place motion, the estimation error in joint 2 is less than 10 newton meters; acceleration increases the estimation uncertainty, giving errors of up to 20 newton meters over the entire workspace. When the manipulator makes contact with the environment or a human, the same technique can be used to estimate contact torques from motor current. The current-estimated contact torque is validated against the torque calculated from a measured force; the error in contact force is less than 10 newtons. Collision detection is demonstrated on the SIA5D using estimated joint torques. The effectiveness of the collision detection is explored through simulated collisions with human hands and arms, both for a typical pick-and-place motion and for trajectories that traverse the entire workspace. The simulated forces and pressures are compared to acceptable maximums for human hands and arms. During pick-and-place motions with vertical and lateral end-effector motions at 10 mm/s and 25 mm/s, the maximum forces and pressures remained below acceptable levels. At and near singular configurations some collisions can be difficult to detect; fortunately, these configurations are generally avoided for kinematic reasons.
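The detection scheme described in the abstract reduces to a torque-residual test. The following Python sketch is illustrative only: the torque constant, gear ratio, friction coefficients, currents, and threshold are assumed values, not SIA5D parameters, and the friction model is simplified to viscous-plus-Coulomb terms.

```python
import numpy as np

def estimate_joint_torque(current, kt, gear_ratio, viscous, coulomb, velocity):
    """Estimate joint torque [Nm] from motor current, subtracting friction.

    kt is the motor torque constant [Nm/A]; all values here are illustrative.
    """
    motor_torque = kt * gear_ratio * current
    friction = viscous * velocity + coulomb * np.sign(velocity)
    return motor_torque - friction

def detect_collision(estimated_tau, model_tau, threshold):
    """Flag a collision when the residual between the current-estimated torque
    and the Newton-Euler model torque exceeds a per-joint threshold [Nm]."""
    residual = np.abs(estimated_tau - model_tau)
    return residual > threshold, residual

# Example: 7-DOF arm, with a threshold roughly matching the ~10-20 Nm
# estimation errors reported in the abstract (hypothetical numbers).
tau_est = estimate_joint_torque(
    current=np.array([1.2, 3.5, 0.8, 2.1, 0.4, 0.6, 0.2]),        # measured [A]
    kt=0.5, gear_ratio=120.0, viscous=0.02, coulomb=0.5,
    velocity=np.array([0.1, -0.2, 0.05, 0.3, -0.1, 0.0, 0.1]))    # [rad/s]
tau_model = np.array([70.0, 205.0, 45.0, 120.0, 22.0, 35.0, 10.0])  # Newton-Euler
collided, residual = detect_collision(tau_est, tau_model, threshold=20.0)
```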
|
524 |
Studies on the underlying mechanism of interlimb coordination of legged robots using nonlinear oscillators (非線形振動子を用いた脚ロボットの肢間協調メカニズムに関する研究). Fujiki, Soichiro (藤木, 聡一朗), 23 March 2015.
Kyoto University (京都大学) / Doctor of Engineering (new system, course doctorate), degree no. 甲第18946号 (工博第3988号) / Department of Aeronautics and Astronautics, Graduate School of Engineering, Kyoto University / Examining committee: Prof. Kei Senda (chair), Prof. Kenji Fujimoto, Prof. Fumitoshi Matsuno / Conferred under Article 4, Paragraph 1 of the Degree Regulations
|
525 |
Virtual lead-through robot programming: Programming virtual robot by demonstration. Boberg, Arvid, January 2015.
This report describes the development of an application that allows a user to program a robot in a virtual environment using hand motions and gestures. The application is inspired by robot lead-through programming, an easy and hands-on approach to programming robots. However, instead of performing the programming online, which causes a loss in productivity, it draws on the strength of offline programming, where the user operates in a virtual environment; the method thus saves money and avoids disturbing the production environment. To convey hand-gesture information to the application, which is implemented for RobotStudio, a Kinect sensor feeds the data into the virtual environment. Similar work has been carried out before in which hand movements manipulate a physical robot's motion, but little has been done for virtual robots. The results could simplify the process of programming robots and support the move towards human-robot collaboration, a major focus of this work, as they allow people to interact and communicate with robots. The application was developed in the programming language C# and has two functions that interact with each other: one for the Kinect and its tracking, and one for installing the application in RobotStudio and applying the computed data to the robot. The Kinect's functionality is used through three simple hand gestures to jog the robot and create targets for it: open, closed, and "lasso". A prototype of the application was completed that lets the user teach a virtual robot desired tasks by moving it to different positions and saving them with hand gestures. The prototype could be applied both to one-armed robots and to a two-armed robot such as ABB's YuMi. Controlling the robot's orientation while jogging proved too complicated to develop and implement in time and became the application's main bottleneck; it remains one of several suggestions for further work.
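A rough sketch of the gesture-to-programming logic follows. The thesis implements this as a C# add-in for RobotStudio using the Kinect SDK; the Python below is a language-neutral illustration with hypothetical types and a simplified jogging rule, not the actual API.

```python
from dataclasses import dataclass, field
from enum import Enum

class HandState(Enum):
    OPEN = "open"      # jog the robot, following the tracked hand
    CLOSED = "closed"  # hold position
    LASSO = "lasso"    # save the current pose as a program target

@dataclass
class GestureTeacher:
    """Maps tracked hand states and positions to jogging and target creation.

    Hypothetical logic for illustration; orientation control (the prototype's
    main limitation) is deliberately left out.
    """
    targets: list = field(default_factory=list)
    current_pose: tuple = (0.0, 0.0, 0.0)

    def update(self, hand_state: HandState, hand_position: tuple):
        if hand_state is HandState.OPEN:
            # Jog: move the virtual tool centre point toward the tracked hand.
            self.current_pose = hand_position
        elif hand_state is HandState.LASSO:
            # Save the pose the user has jogged to as a program target.
            self.targets.append(self.current_pose)
        # CLOSED: do nothing; the robot holds its pose.
        return self.current_pose

teacher = GestureTeacher()
teacher.update(HandState.OPEN, (0.42, 0.10, 0.55))   # jog to a position
teacher.update(HandState.LASSO, (0.42, 0.10, 0.55))  # save it as a target
```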
|
527 |
Modelling the User's Emotional Profile in Spoken Human-Machine Interaction. Delaborde, Agnès, 19 December 2013.
This thesis deals with the study and formalisation of emotional human-machine interaction. Beyond the one-off detection of paralinguistic information (emotions, disfluencies, ...), the aim is to provide the system with a dynamic interactional and emotional profile of the user, enriched during the interaction. This profile allows the machine to adapt its response strategies to the speaker, and it can also serve to better manage long-term relationships. The profile is based on a multi-level representation of the emotional and interactional cues extracted from the audio with the LIMSI emotion-detection tools. Low-level cues (variations in F0, energy, etc.) yield information on the type of emotion expressed, its strength, the degree of talkativeness, and so on. These mid-level elements are used by the system to build up, over the course of the interactions, the user's emotional and interactional profile. The profile comprises six dimensions: optimism, extroversion, emotional stability, self-confidence, affinity, and dominance (based on the OCEAN personality model and theories of the interpersonal circumplex). The system's social behaviour is adapted according to this profile, the state of the current task, and the robot's current behaviour. The rules for creating and updating the emotional and interactional profile, and for automatically selecting the robot's behaviour, were implemented in fuzzy logic using the decision engine developed by a partner in the ROMEO project. The system was implemented on the NAO robot. To study the various elements of the emotional interaction loop between the user and the system, we took part in the design of several systems: a pre-scripted Wizard-of-Oz system, a semi-automated system, and an autonomous emotional-interaction system. These systems made it possible to collect data while controlling several emotion-elicitation parameters within an interaction; we present the results of these experiments, as well as evaluation protocols for human-robot interaction using systems with different degrees of autonomy.
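As a rough illustration of how such a profile could be accumulated over dialogue turns, the sketch below replaces the thesis's fuzzy-logic rules with a simple exponential moving average per dimension; the cue-to-score mapping and all numbers are hypothetical.

```python
DIMENSIONS = ["optimism", "extroversion", "emotional_stability",
              "self_confidence", "affinity", "dominance"]

class EmotionalProfile:
    """Running emotional/interactional profile of a user.

    Simplified sketch: the thesis implements the update rules in fuzzy logic
    on a decision engine; here each dimension is just an exponential moving
    average of per-turn scores derived from mid-level audio cues.
    """
    def __init__(self, smoothing: float = 0.1):
        self.values = dict.fromkeys(DIMENSIONS, 0.5)  # neutral prior in [0, 1]
        self.alpha = smoothing

    def update(self, turn_scores: dict):
        """Blend this turn's [0, 1] scores into the long-term profile."""
        for dim, score in turn_scores.items():
            old = self.values[dim]
            self.values[dim] = (1 - self.alpha) * old + self.alpha * score

profile = EmotionalProfile()
# Hypothetical per-turn scores, e.g. derived from F0 variation and energy.
profile.update({"extroversion": 0.8, "emotional_stability": 0.3})
```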
|
528 |
Control system architectures for distributed manipulators and modular robots. Thatcher, Terence W., January 1987.
This thesis outlines the evolution of computer hardware and software architectures suitable for the programming and control of modular robots and distributed manipulators. Fundamental aspects of automating manufacturing functions are considered, and the use of flexible machines, constructed from components of a family of mechanical modules and associated control system elements, is proposed. Many of the features of these flexible machines can be identified with those of conventional industrial robots. However, they represent a broader class of manufacturing machine, inasmuch as the industrial user defines the kinematics and dynamics of the manipulator. Such flexible machines can be referred to as "modular robots" or, where the mechanical modules are arranged in concurrently operating but mechanically decoupled groups, as "distributed manipulators". The main body of the work reported centred on the design of a family of computer control system elements that can serve a range of distributed manipulator and modular robot forms. These control system elements, whose cost is commensurate with the size and complexity of the manipulator's mechanical configuration, necessarily share many features with robot controllers but must also offer reconfigurability, programmability, and adequate control performance across the considerable array of manipulator configurations that can be constructed.
|
529 |
An Augmented Reality Human-Robot Collaboration System. Green, Scott Armstrong, January 2008.
Although robotics is well established as a research field, there has been relatively little work on human-robot collaboration. This type of collaboration is going to become an increasingly important issue as robots work ever more closely with humans. Clearly, there is a growing need for research on human-robot collaboration and communication between humans and robotic systems.
Research into human-human communication can be used as a starting point in developing a robust human-robot collaboration system. Previous research into collaborative efforts with humans has shown that grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication. Therefore, these items comprise a list of required attributes of an effective human-robot collaborative system.
Augmented Reality (AR) is a technology for overlaying three-dimensional virtual graphics onto the user's view of the real world. It also allows real-time interaction with these virtual graphics, enabling a user to reach into the augmented world and manipulate it directly. The internal state of a robot and its intended actions can be displayed through the virtual imagery in the AR environment. Therefore, AR can bridge the divide between human and robotic systems and enable effective human-robot collaboration.
This thesis describes the work involved in developing the Augmented Reality Human-Robot Collaboration (AR-HRC) system. It first derives design criteria for the system from a review of communication and collaboration in human-human interaction, the current state of Human-Robot Interaction (HRI), and related work in AR. A review of research in multimodal interfaces follows, highlighting the benefits of such an interface design. An AR multimodal interface was then developed to determine whether this type of design improved performance over a single-modality design. The multimodal interface was indeed found to improve performance, providing the impetus to adopt a multimodal design approach for the AR-HRC system.
The architectural design of the system is then presented. A user study conducted to determine what kinds of interaction people would use when collaborating with a mobile robot is discussed, after which the integration of a mobile robot into the system is described. Finally, an evaluation of the AR-HRC system is presented.
|
530 |
A Cognitive Approach to Representing Proximal Haptic Interaction between a Human and a Humanoid. Bussy, Antoine, 10 October 2013.
Robots are on the verge of entering our homes. Before they do, they must acquire the ability to interact physically with humans, safely and efficiently. Such capabilities are essential if they are to live among us and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, our goal is to give the bipedal humanoid robot HRP-2 the ability to perform haptic joint actions with a human partner. First, we study how human dyads collaborate to transport a bulky object. From this study we extract a global model of motion primitives, which we use to implement proactive behaviour on the HRP-2 robot so that it can perform the same task with a human. We then evaluate the performance of this proactive control scheme in user studies. Finally, we outline several directions in which this work could evolve: stabilising a humanoid through physical interaction, generalising the motion-primitive model to other collaborative tasks, and including vision in haptic collaborative tasks.
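As an illustration of primitive-based proactive behaviour, the sketch below encodes a trapezoidal velocity reference over a hypothetical set of transport primitives; the actual primitives and parameters in the thesis come from the human-dyad study, not from this code.

```python
from enum import Enum, auto

class Primitive(Enum):
    """Motion primitives for a shared object-transport task (illustrative set;
    the thesis derives its primitive model from human-dyad experiments)."""
    START = auto()   # accelerate from rest
    CRUISE = auto()  # constant-velocity transport
    STOP = auto()    # decelerate to rest

def proactive_velocity(primitive: Primitive, t: float,
                       v_max: float = 0.4, ramp: float = 1.0) -> float:
    """Desired object velocity [m/s] for the current primitive.

    A proactive follower commits to the inferred primitive and generates its
    own velocity reference, rather than purely reacting to measured forces.
    Parameters are hypothetical, not HRP-2 values.
    """
    if primitive is Primitive.START:
        return min(v_max, ramp * t)       # ramp up from rest
    if primitive is Primitive.CRUISE:
        return v_max                      # steady transport
    return max(0.0, v_max - ramp * t)     # ramp down to rest

print(proactive_velocity(Primitive.START, 0.2))  # 0.2 m/s during ramp-up
```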
|