551 |
Podpora vizuálního programování mobilního robota / Visual Programming Backend for a Mobile Robot. Staněk, Ondřej. January 2017 (has links)
Title: Visual Programming Backend for a Mobile Robot Author: Bc. Ondřej Staněk Department: The Department of Software Engineering Supervisor: RNDr. David Obdržálek, Ph.D. Supervisor's e-mail address: David.Obdrzalek@mff.cuni.cz Abstract: In this work, the author designs and implements a solution for programming small mobile robots using a visual programming language. A suitable visual programming front-end is selected and back-end layers are created that allow execution of the program in a mobile robot. The author designs and implements a virtual machine that runs alongside the original robot firmware on an 8-bit microcontroller with limited resources. A code generator layer compiles the visual representation of the program into a sequence of bytecode instructions that is interpreted on board the mobile robot. The solution supports typical features of procedural programming languages, in particular: variables, expressions, conditional statements, loops, static arrays, function calls and recursion. Emphasis is placed on the robustness of the implementation. To verify and maintain code quality, methods of automated software testing are used. Keywords: visual programming language, virtual machine, mobile robot, Blockly
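The abstract above describes compiling Blockly programs into bytecode that a small virtual machine interprets on the robot's microcontroller. The following sketch illustrates the general shape of such a stack-based dispatch loop; the opcodes, encoding and stack model are hypothetical and are not taken from the thesis.

```python
# Minimal sketch of a stack-based bytecode interpreter loop (hypothetical
# instruction set; the thesis's actual VM and opcodes are not shown here).
PUSH, ADD, JMP_IF_ZERO, HALT = range(4)

def run(bytecode):
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == PUSH:                       # push the next byte as a literal
            stack.append(bytecode[pc + 1])
            pc += 2
        elif op == ADD:                      # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == JMP_IF_ZERO:              # conditional jump to an absolute address
            target = bytecode[pc + 1]
            pc = target if stack.pop() == 0 else pc + 2
        elif op == HALT:
            break
    return stack

# Example: compute 2 + 3
print(run([PUSH, 2, PUSH, 3, ADD, HALT]))    # [5]
```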
|
552 |
Haptic Perception, Decision-making, and Learning for Manipulation with Artificial Hands. January 2016 (has links)
abstract: Robotic systems are outmatched by the abilities of the human hand to perceive and manipulate the world. Human hands are able to physically interact with the world to perceive, learn, and act to accomplish tasks. Limitations of robotic systems to interact with and manipulate the world diminish their usefulness. In order to advance robot end effectors, specifically artificial hands, rich multimodal tactile sensing is needed. In this work, a multi-articulating, anthropomorphic robot testbed was developed for investigating tactile sensory stimuli during finger-object interactions. The artificial finger is controlled by a tendon-driven remote actuation system that allows for modular control of any tendon-driven end effector and capabilities for both speed and strength. The artificial proprioception system enables direct measurement of joint angles and tendon tensions while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. Next, attention was focused on real-time artificial perception for decision-making. A robotic system needs to perceive its environment in order to make decisions. Specific actions such as “exploratory procedures” can be employed to classify and characterize object features. Prior work on offline perception was extended to develop an anytime predictive model that returns the probability of having touched a specific feature of an object based on minimally processed sensor data. Developing models for anytime classification of features facilitates real-time action-perception loops. Finally, by combining real-time action-perception with reinforcement learning, a policy was learned to complete a functional contour-following task: closing a deformable ziplock bag. The approach relies only on proprioceptive and localized tactile data. A Contextual Multi-Armed Bandit (C-MAB) reinforcement learning algorithm was implemented to maximize cumulative rewards within a finite time period by balancing exploration versus exploitation of the action space. Performance of the C-MAB learner was compared to a benchmark Q-learner that eventually returns the optimal policy. To assess robustness and generalizability, the learned policy was tested on variations of the original contour-following task. The work presented contributes to the full range of tools necessary to advance the abilities of artificial hands with respect to dexterity, perception, decision-making, and learning. / Dissertation/Thesis / Doctoral Dissertation Mechanical Engineering 2016
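The contour-following policy described above is learned with a Contextual Multi-Armed Bandit that balances exploration and exploitation. Below is a minimal epsilon-greedy sketch of that general idea; the contexts, actions, reward signal and learning rate are hypothetical placeholders rather than the dissertation's actual formulation.

```python
import random
from collections import defaultdict

# Minimal sketch of a contextual multi-armed bandit with epsilon-greedy
# action selection (hypothetical contexts and actions).
class ContextualBandit:
    def __init__(self, actions, epsilon=0.1, lr=0.2):
        self.actions = actions
        self.epsilon = epsilon          # exploration rate
        self.lr = lr                    # learning rate for value updates
        self.q = defaultdict(float)     # estimated reward per (context, action)

    def select(self, context):
        if random.random() < self.epsilon:               # explore
            return random.choice(self.actions)
        return max(self.actions,                         # exploit
                   key=lambda a: self.q[(context, a)])

    def update(self, context, action, reward):
        key = (context, action)
        self.q[key] += self.lr * (reward - self.q[key])

# Hypothetical usage: pick an action given tactile context, then learn from reward.
bandit = ContextualBandit(actions=["pinch_slide", "regrasp", "pull"])
a = bandit.select(context="zipper_edge_detected")
bandit.update("zipper_edge_detected", a, reward=1.0)
```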
|
553 |
Decision Making in Human-Robot Interaction / Processus décisionnels pour l'interaction homme-robot. Fiore, Michelangelo. 19 October 2016 (has links)
Un intérêt croissant est aujourd'hui porté sur les robots capables de conduire des activités de collaboration d'une manière naturelle et efficace. Nous avons développé une architecture et un système qui traitent des aspects décisionnels de ce problème. Nous avons mis en oeuvre cette architecture pour traiter trois problèmes différents : le robot observateur, le robot équipier et enfin le robot instructeur. Dans cette thèse, nous discutons des défis et problématiques de la coopération homme-robot, puis nous décrivons l'architecture que nous avons développée et enfin détaillons sa mise en oeuvre et les algorithmes spécifiques à chacun des scénarios. Dans le cadre du scénario du robot observateur, le robot maintient un état du monde à jour au moyen d'un raisonnement géométrique effectué sur les données de perception, produisant ainsi une description symbolique de l'état du monde et des agents présents. Nous montrons également, sur la base d'un système de raisonnement intégrant des processus de décision de Markov (MDPs) et des réseaux Bayésiens, comment le robot est capable d'inférer les intentions et les actions futures de ses partenaires humains, à partir d'une observation de leurs mouvements relatifs aux objets de l'environnement. Nous identifions deux types de comportements proactifs : corriger les croyances de l'homme en lui fournissant l'information pertinente qui lui permettra de réaliser son but, aider physiquement la personne dans la réalisation de sa tâche, une fois celle-ci identifiée par le robot. Dans le cas du robot équipier, ce dernier doit réaliser une tâche en coopération avec un partenaire humain. Nous introduisons un planificateur nommé Human-Aware Task Planner et détaillons la gestion par notre système du plan partagé par un composant appelé Plan Management component. Grâce à ce système, le robot peut collaborer avec les hommes selon trois modalités différentes : robot leader, human leader, ou equal partners. Nous discutons des fonctions qui permettent au robot de suivre les actions de son partenaire humain et de vérifier qu'elles sont compatibles ou non avec le plan partagé et nous montrons comment le robot est capable de produire des comportements sûrs qui permettent de réaliser la tâche en prenant en compte de manière explicite la présence et les actions de l'homme ainsi que ses préférences. L'approche est fondée sur des processus décisionnels de Markov hiérarchisés avec observabilité mixte et permet d'estimer l'engagement de l'homme et de réagir en conséquence à différents niveaux d'abstraction. Enfin, nous discutons d'une approche prospective fondée sur un planificateur multi-agent probabiliste mettant en œuvre des MDPs et de sa pertinence quant à l'amélioration du composant de gestion de plan partagé. Dans le scénario du robot instructeur, nous détaillons les processus décisionnels qui permettent au robot d'adapter le plan partagé (shared plan) en fonction de l'état de connaissance et des désirs de son partenaire humain. Selon le cas, le robot donne plus ou moins de détails sur le plan et adapte son comportement aux connaissances de l'homme ; une étude utilisateur a également été menée permettant de valider la pertinence de cette approche. Finalement, nous présentons la mise en œuvre d'un robot guide autonome et détaillons les processus décisionnels que nous y avons intégrés pour lui permettre de guider des voyageurs dans un hall d'aéroport en s'adaptant au mieux au contexte et aux désirs des personnes guidées.
Nous illustrons dans ce contexte des comportements adaptatifs et proactifs. Ce système a été effectivement intégré sur le robot Spencer qui a été déployé dans le terminal principal de l'aéroport d'Amsterdam (Schiphol). Le robot a fonctionné de manière robuste et satisfaisante. Une étude utilisateur a permis, dans ce cas également, de mesurer les performances et de valider le système. / In recent years there has been increasing interest in robots that are able to cooperate with humans not only as simple tools, but as full agents, able to execute collaborative activities in a natural and efficient way. In this work, we have developed an architecture for Human-Robot Interaction able to execute joint activities with humans. We have applied this architecture to three different problems, which we call the robot observer, the robot coworker, and the robot teacher. After quickly giving an overview of the main aspects of human-robot cooperation and of the architecture of our system, we detail these problems. In the observer problem the robot monitors the environment, analyzing perceptual data through geometrical reasoning to produce symbolic information. We show how the system is able to infer humans' actions and intentions by linking physical observations, obtained by reasoning on humans' motions and their relationships with the environment, with planning and humans' mental beliefs, through a framework based on Markov Decision Processes and Bayesian Networks. We show, in a user study, that this model approaches the capacity of humans to infer intentions. We also discuss the possible reactions that the robot can execute after inferring a human's intention. We identify two possible proactive behaviors: correcting the human's belief, by giving information to help him correctly accomplish his goal, and physically helping him to accomplish the goal. In the coworker problem the robot has to execute a cooperative task with a human. In this part we introduce the Human-Aware Task Planner, used in different experiments, and detail our plan management component. The robot is able to cooperate with humans in three different modalities: robot leader, human leader, and equal partners. We introduce the problem of task monitoring, where the robot observes human activities to understand if they are still following the shared plan. After that, we describe how our robot is able to execute actions in a safe and robust way, taking humans into account. We present a framework used to achieve joint actions, by continuously estimating the robot's partner activities and reacting accordingly. This framework uses hierarchical Mixed Observability Markov Decision Processes, which allow us to estimate variables, such as the human's commitment to the task, and to react accordingly, splitting the decision process into different levels. We present an example of a Collaborative Planner, for the handover problem, and then a set of laboratory experiments for a robot coworker scenario. Additionally, we introduce a novel multi-agent probabilistic planner, based on Markov Decision Processes, and discuss how we could use it to enhance our plan management component. In the robot teacher problem we explain how we can adapt the plan explanation and monitoring of the system to the users' knowledge of the task to perform. Using this idea, the robot explains in less detail tasks that the user has already performed several times, going more in-depth on new tasks.
We show, in a user study, that this adaptive behavior is perceived better by users than a system without this capability. Finally, we present a case study for a human-aware robot guide. This robot is able to guide users with adaptive and proactive behaviors, changing its speed to adapt to their needs, proposing a new pace to better suit the task's objectives, and directly engaging users to propose help. This system was integrated with other components to deploy a robot at Schiphol Airport in Amsterdam, to guide groups of passengers to their flight gates. We performed user studies both in a laboratory and in the airport, demonstrating the robot's capabilities and showing that it is appreciated by users.
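The observer scenario described above infers human intentions by linking observed motion to candidate goals through probabilistic reasoning. The toy sketch below shows a Bayesian update over candidate goals, driven by how much a person's motion reduces the distance to each goal; the goals, motion model and scaling factor are hypothetical and do not reproduce the thesis's MDP/Bayesian-network framework.

```python
import numpy as np

# Toy illustration of Bayesian intention inference from observed motion:
# goals the person is moving towards gain probability mass.
def update_intention(prior, person_prev, person_now, goals, beta=5.0):
    likelihoods = []
    for g in goals:
        # How much the motion reduced the distance to this goal
        progress = np.linalg.norm(person_prev - g) - np.linalg.norm(person_now - g)
        likelihoods.append(np.exp(beta * progress))   # softmax-style likelihood
    posterior = prior * np.array(likelihoods)
    return posterior / posterior.sum()

goals = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]   # e.g. two objects on a table
prior = np.array([0.5, 0.5])
# Person moves from (0, 0) towards the first object:
post = update_intention(prior, np.array([0.0, 0.0]), np.array([0.4, 0.0]), goals)
print(post)   # probability mass shifts towards goal 0
```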
|
554 |
Využití provozní kapacity dojících robotů v systému svobodného pohybu zvířat. / The Exploitation of Functional Capacity of Robotic Milking Machines in System of Free Moving Animals. REICHOVÁ, Sandra. January 2010 (has links)
The aim of the thesis was to objectively assess the exploitation of the functional capacity of milking machines in a system of free moving animals. Data from seven agricultural companies were analysed. Data collection took place from January to November 2009; the preliminary data were provided by the individual farms taking part in the T4C programme, and the information on problematic dairy cows came directly from their breeders. The first assessed criterion was the average production of the dairy cows. The highest production (28.79 kg) was achieved by small private agricultural companies, while the lowest (25.22 kg) was found in middle-sized companies. As far as breed is concerned, the Holstein dairy cows gave the highest milk yields (40.43, 30.16 and 27.01 kg), whereas the CRV Fleckvieh dairy cows gave the lowest (24.83, 21.04 and 22.74 kg). The next criterion was the number of milkings performed by the robotic milking system. Small private agricultural companies showed the highest milking frequency (140.88), whereas the big agricultural companies showed the lowest (119.28). The Holstein dairy cows were milked most by the robotic milking machines (130.34), while the CRV Fleckvieh dairy cows were milked least (107.94). The middle-sized agricultural companies achieved the highest number of milkings per dairy cow per day (2.47), and the small private agricultural companies the lowest (2.34). The mixed herds of Holsteins and CRV Fleckviehs reached a rate of 2.47; the Holstein dairy cows showed a milking frequency of 2.45 per day and the CRV Fleckviehs 2.32. The dairy cows from the big agricultural companies visited the robotic milking machine most willingly and consequently showed the highest number of refusals per milking (2.19); the lowest values of this criterion were found in small private agricultural companies (1.10). With reference to breed, the lowest number of refusals per milking was 1.85, whereas the CRV Fleckviehs showed the highest (2.25). The exploitation of the time capacity proved most effective in small private agricultural companies (78.61%), whereas the lowest time exploitation was found in big agricultural companies (68.11%). As far as breed is concerned, the Holstein dairy cows were milked for the longest time (73.21%), while the CRV Fleckviehs were milked for the shortest (63.17%). The highest proportion of dairy cows that needed to be accompanied to the robotic milking machine was recorded in big agricultural companies (20.1%). A remarkably lower number of problematic cows was found in middle-sized agricultural companies (9.7%), and the number in small private agricultural companies was similar (9.3%). The most problematic dairy cows came from the mixed herds (18.7%), while the CRV Fleckviehs were the least problematic (8.8%).
|
555 |
Localização e navegação de robô autônomo através de odometria e visão estereoscópica / Localization and navigation of an autonomous mobile robot through odometry and stereoscopic vision. Delgado Vargas, Jaime Armando, 1986-. 20 August 2018 (has links)
Orientador: Paulo Roberto Gardel Kurka / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Previous issue date: 2012 / Resumo: Este trabalho apresenta a implementação de um sistema de navegação com visão estereoscópica em um robô móvel, que permite a construção de mapa de ambiente e localização. Para isto é necessário conhecer o modelo cinemático do robô, técnicas de controle, algoritmos de identificação de características em imagens (features), reconstrução 3D com visão estereoscópica e algoritmos de navegação. Utilizam-se métodos para a calibração de câmera desenvolvida no âmbito do grupo de pesquisa da FEM/UNICAMP e da literatura. Resultados de análises experimentais e teóricas são comparados. Resultados adicionais mostram a validação do algoritmo de calibração de câmera, acurácia dos sensores, resposta do sistema de controle, e reconstrução 3D. Os resultados deste trabalho são de importância para futuros estudos de navegação robótica e calibração de câmeras / Abstract: This work presents the implementation of a navigation system with stereoscopic vision on a mobile robot, which enables environment mapping and localization. This requires knowledge of the robot's kinematic model, control techniques, algorithms for identifying image features (such as SIFT), 3D reconstruction with stereoscopic vision, and navigation algorithms. Camera calibration methods developed within the FEM/UNICAMP research group and taken from the literature are used. Experimental and theoretical results are compared. Additional results cover the validation of the camera calibration algorithm, the accuracy of the sensors, the response of the control system, and 3D reconstruction. These results are relevant for future studies of robotic navigation and camera calibration / Mestrado / Mecanica dos Sólidos e Projeto Mecanico / Mestre em Engenharia Mecânica
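The abstract above relies on stereoscopic 3D reconstruction of image features for mapping and localization. As an illustration only, the sketch below triangulates a point from a rectified stereo pair; the focal length, baseline and principal point are hypothetical calibration values, not those of the robot described in the dissertation.

```python
# Minimal sketch of 3D point reconstruction from a rectified stereo pair,
# of the kind used in stereoscopic navigation (hypothetical calibration).
def triangulate(u_left, v_left, u_right, f=700.0, baseline=0.12, cx=320.0, cy=240.0):
    """Return the 3D point (X, Y, Z) in the left camera frame, in metres."""
    disparity = u_left - u_right          # pixels; assumes rectified images
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f * baseline / disparity          # depth from similar triangles
    X = (u_left - cx) * Z / f
    Y = (v_left - cy) * Z / f
    return X, Y, Z

# A feature seen at (400, 250) in the left image and (380, 250) in the right:
print(triangulate(400.0, 250.0, 380.0))   # ~ (0.48, 0.06, 4.2)
```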
|
556 |
Desenvolvimento de um sistema de controle adaptativo e integrado para locomoção de um robo bipede com tronco / Development of an integrated adaptive control system for a biped robot with a trunk. Gonçalves, João Bosco. 12 June 2004 (has links)
Orientador: Douglas Eduardo Zampieri / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica
Previous issue date: 2004 / Resumo: Este trabalho concebeu um robô bípede composto por uma sucessão de elos rígidos interconectados por 12 articulações rotativas, permitindo movimentos tridimensionais. O robô bípede é constituído por dois subsistemas: tronco e membros inferiores. A modelagem matemática foi realizada em separado para cada um dos subsistemas, que são integrados pelas forças reativas de vínculo. Nossa proposta permite ao robô bípede executar a andadura dinâmica utilizando o tronco para fornecer o balanço dinâmico (estabilidade postural). De forma inédita, foi desenvolvido um gerador automático de trajetória para o tronco que processa as informações de posições e acelerações impostas aos membros inferiores, dotando o robô bípede de reflexos. Foi desenvolvido um gerador de marcha que utiliza a capacidade do robô bípede de executar movimentos tridimensionais, implicando andadura dinamicamente estável sem a efetiva utilização do tronco. O gerador automático de trajetória para o tronco entra em ação se a marcha gerada não mantiver o balanço dinâmico, restabelecendo uma marcha estável. Foi projetado um sistema de controle adaptativo por modelo de referência que utiliza redes neurais artificiais. A avaliação de estabilidade é feita segundo o critério de Lyapunov. O sistema de controle e o gerador automático de trajetórias para o tronco são integrados, compondo os mecanismos adaptativos desenvolvidos para solucionar o modo de andar dinâmico / Abstract: The main objective of this work is to design a biped robot with a trunk. The mathematical model was derived by considering two subsystems: the legs and the trunk. The trajectories of the trunk are planned to compensate for torques inherent to the dynamic gait, preserving the dynamic balance of the biped robot. An automatic trunk trajectory generator was developed that processes the information on positions and accelerations imposed on the legs. A gait generator was developed that uses the capacity of the biped robot to execute three-dimensional movements, yielding a stable dynamic gait without the effective use of the trunk. The automatic trunk trajectory generator acts if the generated gait does not maintain dynamic balance, re-establishing a stable dynamic gait. A model-reference adaptive control system was designed using an RBF neural network, with stability evaluated according to the Lyapunov criterion. The control system and the automatic trunk trajectory generator are integrated, composing the adaptive mechanisms developed to achieve dynamic walking / Doutorado / Mecanica dos Sólidos e Projeto Mecanico / Doutor em Engenharia Mecânica
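The abstract above mentions a model-reference adaptive controller built on an RBF neural network with Lyapunov-based stability analysis. The sketch below shows only the forward evaluation of such an RBF approximator; the centres, widths and weights are hypothetical, and in the thesis's scheme the output weights would be adapted online by a Lyapunov-derived law.

```python
import numpy as np

# Minimal sketch of a radial basis function (RBF) network evaluation, the kind
# of approximator used in model-reference adaptive control schemes.
def rbf_forward(x, centres, width, weights):
    # Gaussian activations for each basis function centre
    phi = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2.0 * width ** 2))
    return weights @ phi                  # weighted sum of activations

centres = np.array([[-1.0], [0.0], [1.0]])   # three centres on a 1-D input
weights = np.array([0.2, 1.0, -0.5])         # would be adapted online in practice
print(rbf_forward(np.array([0.3]), centres, width=0.5, weights=weights))
```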
|
557 |
Des robots sur la scène, aspects du cyber-théâtre contemporain / Robots on stage, aspects of contemporary cyber-theatre. Fénelon, Ian. 30 January 2017 (has links)
« On considère que le XXIe siècle sera “le siècle des robots” comme le XXe siècle a été le siècle des ordinateurs », annonce Franck Bauchard. Or, si les robots ne semblent pas avoir tout à fait envahi notre quotidien, en revanche, au théâtre, ils sont bien là. Depuis plusieurs années, le « cyber-théâtre » convoque sur scène, et ailleurs (dans les musées, dans la rue, dans le cadre d’expositions d’art contemporain) toutes sortes de robots, qu’il s’agisse d’androïdes, d’humanoïdes, de robots zoomorphes, ou « simplement » mécanomorphes. Quelles formes revêt ce théâtre ? Qu’advient-il de l’acteur, mis, une fois de plus, en présence d’un énième artefact technologique, cette fois mobile et intelligent, avec lequel il devra partager la scène recomposée ? Enfin, comment le public réagit-il à la présence concrète de ces monstres tridimensionnels, qui jusque-là n’existaient qu’au cinéma, ou dans la littérature ?Un corpus élargi nous permettra d’apprécier la diversité du cyber-théâtre contemporain, qu’il soit « anthropo-mimétique », ou « non-anthropo-mimétique ». Néanmoins, nous avancerons l’hypothèse que le premier se révèle étriqué et rigide, au point, parfois, de mettre en péril l’acte théâtral. En revanche, nous saluerons dans le second un théâtre riche d’esthétiques débridées, et de dramaturgies inédites. Ce théâtre-là interroge et refonde une relation homme / machine millénaire et tumultueuse, mais désormais basée sur l’égalité, et le respect des paradigmes de chacun. Sur ces scènes occupées par des créatures technologiques autonomes et imprévisibles, l’acteur aux aguets n’a jamais été autant immergé dans le hic et nunc, tandis que le robot, lui, s’augmente d’une dimension nouvelle : il accède à ce que nous avons baptisé la « vie scénique ». / « We assume that the 21st century will be the century of robots, just as the 20th century was the century of computers », announces Franck Bauchard. Yet while robots do not quite seem to have invaded our daily life, in the theatre they are very much present. For some years now, robots have appeared on stage and in various other places, such as museums, public spaces and contemporary art exhibitions. These new technological creatures range from humanoids and androids to zoomorphic machines and « pure » machines that generally do not refer to anything familiar. What does this theatre look like? What becomes of the actor, placed once more on a technological stage, which this time he must share with mobile, intelligent creatures? Finally, how does the audience react to such real three-dimensional monsters, which until now existed only in cinema or literature? A broad corpus allows us to appreciate the diversity of contemporary cyber-creation, whether « anthropo-mimetic » or « non-anthropo-mimetic ». We then advance the hypothesis that anthropo-mimetic cyber-theatre proves narrow and rigid, sometimes to the point of endangering the theatrical act, whereas the latter reveals audacious aesthetics and new dramaturgies. This theatre questions and reinvents the relationship between robots and humans, now based on equality and collaboration. Thus, during performance, actors coping with such autonomous and unpredictable technological creatures gain spontaneity and presence, while the robots themselves are elevated to a new dimension: they acquire what we have called « scenic life ».
|
558 |
A retro-projected robotic head for social human-robot interaction. Delaunay, Frédéric C. January 2016 (has links)
As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Robotic faces and heads until recently belonged to one of the following categories: virtual, mechatronic or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, as well as its driving software designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze direction control, facial expressions and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, an R-PAF demonstrator and experimental platform, has demonstrated robustness in both extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, as well as improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations presents the first study on human performance in reading robotic gaze and another first on users' ethnic preferences towards a robot face.
|
559 |
Conception et réalisation d'un contrôleur d'exécution pour un robot mobile à roues omnidirectionnel et non holonome / Design and implementation of an execution controller for an omnidirectional non-holonomic wheeled mobile robot. Clavien, Lionel. January 2017 (has links)
So-called "service" robots must coexist with humans in everyday life. They are thus confronted with dynamic environments that are not specifically adapted to them. To move through such environments efficiently, they must have, among other things, a highly mobile base. Omnidirectional mobile bases using conventional steerable wheels (RCO) offer a good compromise between mobility and mechanical complexity. However, since they generally have more actuators than degrees of freedom, they require rigorous coordination of their actuators to guarantee precise and safe motion.
Coordinating the actuators is the role of the execution controller. Coordination based on the concept of chassis motion about its instantaneous center of rotation (ICR) is a known method. However, the parametrizations commonly used to describe the ICR position all suffer from representational singularities, which hinders the design of an effective execution controller. Moreover, most execution controllers presented in the literature are not suited to RCO with a mechanical coupling between steering and propulsion (called AZIMUT wheels), which, for instance, make it possible to sense forces applied externally to the base. Finally, these execution controllers cannot easily handle the position, velocity, and acceleration constraints imposed by the actuators.
This thesis addresses the execution control problem for AZIMUT-3, a non-holonomic omnidirectional mobile base using AZIMUT wheels. A new configuration space for the chassis motion, together with a parametrization of it that is free of any representational singularity, is first proposed. To guarantee wheel coordination, control is performed explicitly in this configuration space, and the kinematic models established for the robot make it possible to map between the configuration space of the chassis motion and that of the actuator motion in both directions. Since control is not performed in the actuator configuration space, the chassis motion must be estimated from the data provided by the actuators. A new iterative algorithm for estimating the ICR position is therefore proposed.
The execution controller designed on the basis of these elements respects the actuators' position, velocity, and acceleration constraints and handles the coupling specific to the AZIMUT wheels. It also handles the structural singularities inherent to mobile robots using RCO. Results of tests performed with AZIMUT-3 demonstrate the performance of the designed execution controller in terms of constraint satisfaction, odometric accuracy, and command execution speed. The extension of the kinematic model and the execution controller to all non-holonomic omnidirectional mobile robots using RCO is also discussed.
|
560 |
Safe Human Robot Collaboration : By using laser scanners, robot safety monitoring system and trap routine speed control. Yan, Nannan. January 2016 (has links)
Nowadays, robots are commonly used to perform automation tasks. With the trend towards low-volume and customised products, flexible manufacturing is introduced to increase working efficiency and flexibility. Human-robot collaboration therefore plays an important role in automated production, and safety must be considered in the design of this kind of robot cell. This work presents the design of safe human-robot collaboration by equipping an industrial robot cell with SICK laser scanners, a safety monitoring system and trap routine speed control. It also investigates the reliability of an RGB-D camera for robot safety. The aim is a safe robot system using a standard industrial robot for human-robot collaboration; the challenge is to ensure the operator's safety at all times. The work reviews safety standards and directives, safety requirements for collaboration, and related work on the design of collaborative robot cells, and performs a risk assessment before carrying out the evaluation. Based on the literature review, it presents the layout design concept and the logic for slowing down and resuming robot motion: the speed is first reduced to manual speed and then to zero, depending on the distance between the human and the robot. Validation and verification of the proposed safe solution for human-robot collaboration are carried out to test its reliability and feasibility. The project realizes automatic resume: the robot can continue working without the reset button being pressed manually after the operator leaves the robot cell, provided there was no access to the prohibited area. Manual reset is retained to ensure safety when people do access the prohibited area. Several special cases that may occur in human-robot collaboration are described and analysed. Finally, future work to improve the proposed safe robot cell design is presented.
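The speed-reduction logic described above lowers the robot speed from automatic to manual and then to zero as a person approaches. The sketch below captures that zone-based logic; the distance thresholds and speed values are hypothetical placeholders, not the values configured in the actual cell.

```python
# Minimal sketch of the distance-based speed reduction logic described above.
# Zone thresholds and speed values are hypothetical placeholders.
WARNING_DISTANCE = 2.0    # m: below this, reduce to manual (reduced) speed
STOP_DISTANCE = 1.0       # m: below this, stop the robot
FULL_SPEED = 1.0          # normalised automatic speed
MANUAL_SPEED = 0.25       # normalised reduced speed

def speed_command(human_distance_m: float) -> float:
    """Return the allowed speed override given the closest human distance
    reported by the laser scanners."""
    if human_distance_m < STOP_DISTANCE:
        return 0.0
    if human_distance_m < WARNING_DISTANCE:
        return MANUAL_SPEED
    return FULL_SPEED

# Example: scanner reports a person 1.4 m from the robot.
print(speed_command(1.4))   # 0.25 -> slow down to manual speed
```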
|