1. DESIGN AND VALIDATION OF AN IMPROVED HYBRID PNEUMATIC-ELECTRIC ACTUATOR (Ashby, Graham)
As collaborative robots become more prevalent, it is desirable to improve their inherent safety at the mechanical level while maintaining good position tracking. One method is to replace the electric motor and gearing currently used with an alternative actuator that introduces less inertia, friction, and stiffness. A promising approach is the use of hybrid pneumatic-electric actuators (HPEAs). A first-generation (GEN1), proof-of-concept HPEA with low payload capacity and poor mechanical reliability was improved upon to produce the next generation of HPEA. The second-generation (GEN2) actuator developed in this work was designed to increase payload capacity and improve mechanical reliability while maintaining low inertia, low friction, and low stiffness. The torque capacity was improved by 511% while inertia increased by only 292%.
The majority of the system was modeled using the relevant physical laws. The solenoid valves' inverse model was provided by a black-box artificial neural network (ANN), and the electric motor's model was empirical. The models were used to develop a position controller with an inner-loop pressure controller based upon the ANN. An alternative (non-model-based) pressure controller was also developed for comparison with the ANN-based controller. The system could operate as a purely pneumatic actuator or as an HPEA.
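To make the control structure concrete, the following is a minimal sketch of an inner pressure loop built around a learned inverse valve model. The network architecture, the proportional outer law, the gains, and all signal names are illustrative assumptions; the thesis's actual ANN and controller are not reproduced here.

```python
import numpy as np

class ANNInverseValveModel:
    """Tiny MLP standing in for the valves' black-box inverse model:
    maps (desired mass flow, chamber pressure, supply pressure) to a
    valve command. In practice the weights come from offline training."""
    def __init__(self, w1, b1, w2, b2):
        self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

    def command(self, mdot_des, p_chamber, p_supply):
        x = np.array([mdot_des, p_chamber, p_supply])
        h = np.tanh(self.w1 @ x + self.b1)     # hidden layer
        return float(self.w2 @ h + self.b2)    # valve command (e.g., duty cycle)

def pressure_control_step(p_des, p_meas, p_supply, ann, kp=2.0):
    """One step of the inner pressure loop: a proportional law turns
    pressure error into a desired mass flow, and the ANN inverse model
    turns that flow into a valve command. Gains are placeholders."""
    mdot_des = kp * (p_des - p_meas)           # desired mass flow
    return ann.command(mdot_des, p_meas, p_supply)

# Usage with random placeholder weights, purely for illustration:
rng = np.random.default_rng(0)
ann = ANNInverseValveModel(rng.normal(size=(8, 3)), np.zeros(8),
                           rng.normal(size=8), 0.0)
u = pressure_control_step(p_des=400e3, p_meas=350e3, p_supply=600e3, ann=ann)
```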
Experimentally, it was found that position control based upon the two pressure controllers led to similar performance, but the ANN-based controller was superior more often. The hybrid mode reduced the position error of the purely pneumatic mode by approximately 55% for vertical cycloidal position tracking. The GEN2 achieved lower position tracking errors than those reported in prior work on other HPEAs and on purely pneumatic actuator control. Compared to the GEN1, the GEN2 achieved lower position tracking errors in both pneumatic and hybrid operation. The GEN2 will serve as a superior testbed for future HPEA control and collaborative robotics research. / Thesis / Master of Applied Science (MASc) / Robots that work directly with people are becoming increasingly numerous in industry as their costs decrease. As robots and humans work more and more closely, there is a desire for the robot to be more inherently safe by merit of its underlying mechanical design. Previous research resulted in a prototype hybrid pneumatic-electric actuator (HPEA) designed to improve inherent safety through its low inertia, low friction, and low stiffness. This prototype proved successful, but it had a low payload capacity and an unreliable mechanical design. The goal of this research was to design, build, model, control, and validate a second-generation HPEA with a larger payload capacity and a more reliable mechanical design, while maintaining low friction, inertia, and stiffness. Furthermore, the improved actuator should maintain or improve upon the good position trajectory tracking of the prior actuator. These goals were successfully achieved with the improved prototype developed in this work.
2. Characterisation of remote nuclear environments (Wright, Thomas, January 2018)
Many legacy nuclear facilities exist, with the number of such facilities due to increase in the future. For a variety of reasons, some of these facilities have poorly documented blueprints and floor plans. This has led to many areas within such facilities being left unexplored, and in an unknown state, for some considerable time. The risk to health that these areas might pose has in some cases precluded human exploration, and facilities have been maintained in a containment state for many years. However, in more recent years there has been a move to decommission such facilities. The change of strategy from containment to decommissioning will require knowledge of what it is that needs to be decommissioned. It is hoped that an autonomous or semi-autonomous robotic solution can satisfy this requirement.

For successful mapping of such environments, the robot must be capable of producing complete scans of the world around it. As it moves through the environment, the robot will not only need to map the presence, type, and extent of radioactivity, but do so in a way that is economical from the perspective of battery life. Additionally, the presence of radioactivity poses a threat to the robot's electronics. Exposure to radiation will be necessary but should be minimised to prolong the functional life of the robot. Some tethered robots have been developed for such applications, but these can cause issues such as snagging, or the tether inadvertently spreading contamination as it is dragged along the floor.

Nuclear environments pose unique challenges because of the radiation. Alpha and beta radiation have a short emission distance and therefore cannot be detected until the robot is in very close proximity. Although the robot will not be disabled by these forms of radiation, it may become contaminated, which is undesirable. Radiation from gamma sources can be detected at range; however, pinpointing a source requires sensors to be taken close to the emitter, which harms the robot's electronics: gamma radiation, for example, damages silicon-based electronics. Anything entering these environments is deemed to be contaminated and will eventually require disposal. Consequently, the number of entries made should ideally be minimised, to reduce the production and spread of potential waste and contamination.

This thesis presents results from an investigation of ways to provide complete scans of an environment using novel algorithms that take advantage of common features found in industrial environments, thereby allowing gaps in the data set to be detected. From this data it is then possible to calculate the minimum set of waypoints that must be visited for all of the gaps to be filled in. This is achieved by taking into account the sensor's parameters, such as minimum and maximum range, angle of incidence, and optimal sensing distance, along with robot and environmental factors. An investigation into appropriate exploration strategies has also been undertaken, examining the ways in which gamma radiation sources affect the coverage of an environment. It uncovered undesired behaviours exhibited by the robot when radiation is present. To overcome these behaviours, a novel movement strategy is presented, along with a set of linear and binary battery modifiers, which adapt common movement strategies to help improve overall coverage of an unknown environment.
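As an illustration of how a battery modifier might bias a coverage strategy, here is a minimal sketch of radiation- and battery-aware waypoint scoring. The weights, the linear form of the modifier, and the helper functions `dose_rate_at` and `info_gain_of` are all illustrative assumptions, not the thesis's exact formulation.

```python
import math

def score_waypoint(candidate, robot_pos, battery_level,
                   dose_rate_at, info_gain_of, w_info=1.0, w_dist=0.5):
    """Score a candidate waypoint for gap-filling coverage. A linear
    battery modifier (battery_level in [0, 1]) scales down the value of
    distant, information-rich goals as the battery drains, while the
    predicted gamma dose rate penalises waypoints that would expose
    the robot's electronics."""
    dist = math.dist(candidate, robot_pos)
    battery_factor = max(0.0, min(1.0, battery_level))  # linear modifier
    return (w_info * info_gain_of(candidate) * battery_factor
            - w_dist * dist
            - dose_rate_at(candidate))

def choose_waypoint(candidates, robot_pos, battery_level,
                    dose_rate_at, info_gain_of):
    """Pick the best-scoring candidate as the next goal."""
    return max(candidates, key=lambda c: score_waypoint(
        c, robot_pos, battery_level, dose_rate_at, info_gain_of))
```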
Collaborative exploration of unknown environments has also been investigated, examining the specific challenges that radiation and contamination pose. This work presents new ways of allowing multiple robots to independently explore an environment, sharing knowledge as they go, while safely covering unknown hazardous space in which a robot may be lost to contamination or radiation damage.
3. Intuitive, iterative and assisted virtual guides programming for human-robot comanipulation (Sanchez Restrepo, Susana, 01 February 2018)
For a very long time, automation was driven by the use of traditional industrial robots placed in cages, programmed to repeat more or less complex tasks at their highest speed and with maximum accuracy. This robot-oriented solution is heavily dependent on hard automation, which requires pre-specified fixtures and time-consuming programming, hindering robots from becoming flexible and versatile tools. These robots have since evolved towards a new generation of small, inexpensive, inherently safe, and flexible systems that work hand in hand with humans. In these new collaborative workspaces, the human can be included in the loop as an active agent. As a teacher and as a co-worker, the human can influence the decision-making process of the robot. In this context, virtual guides are an important tool used to assist the human worker by reducing physical effort and cognitive overload during task accomplishment.
However, the construction of virtual guides often requires expert knowledge and modeling of the task. These limitations restrict the usefulness of virtual guides to scenarios with unchanging constraints. To overcome these challenges and enhance the flexibility of virtual guide programming, this thesis presents a novel approach that allows the worker to create virtual guides by demonstration, through an iterative method based on kinesthetic teaching and displacement splines. Thanks to this approach, the worker is able to iteratively modify the guides while being assisted by them, making the process more intuitive and natural while reducing its strain. Our approach allows local refinement of virtual guiding trajectories through physical interaction with the robot: the worker can move a specific Cartesian keypoint of the guide or re-demonstrate a portion of it. We also extended our approach to 6D virtual guides, where displacement splines are defined via Akima interpolation (for translation) and quadratic interpolation of quaternions (for orientation). The worker can initially define a virtual guiding trajectory and then use the assistance in translation to concentrate solely on defining the orientation along the path. We demonstrated that these innovations provide a novel and intuitive solution that increases the human's comfort during human-robot comanipulation, in two industrial scenarios with a collaborative robot (cobot).
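To give a flavour of the 6D interpolation described above, here is a minimal sketch that builds a pose interpolant from demonstrated keypoints using SciPy: Akima splines for translation and spherical linear interpolation (Slerp) for orientation. Slerp is used here only as a simpler stand-in for the quadratic quaternion interpolation used in the thesis, and all keypoint values are made up.

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator
from scipy.spatial.transform import Rotation, Slerp

def make_6d_guide(times, positions, quats_xyzw):
    """Build a 6D virtual-guide interpolant from demonstrated keypoints:
    Akima splines for the Cartesian translation, Slerp for orientation."""
    pos_spline = Akima1DInterpolator(times, np.asarray(positions), axis=0)
    ori_slerp = Slerp(times, Rotation.from_quat(quats_xyzw))
    def guide(t):
        return pos_spline(t), ori_slerp(t).as_quat()  # (xyz, quaternion)
    return guide

# Three hypothetical keypoints captured by kinesthetic teaching
t = np.array([0.0, 1.0, 2.0])
p = np.array([[0.0, 0.0, 0.5], [0.2, 0.1, 0.5], [0.4, 0.1, 0.6]])
q = np.array([[0, 0, 0.0, 1.0], [0, 0, 0.259, 0.966], [0, 0, 0.5, 0.866]])
pose_at = make_6d_guide(t, p, q)
print(pose_at(0.5))  # interpolated pose between the first two keypoints
```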
4. Characterizing Mental Workload in Physical Human-Robot Interaction Using Eye-Tracking Measures (Upasani, Satyajit Abhay, 06 July 2023)
Recent technological developments have ushered in an exciting era for collaborative robots (cobots), which can operate in close proximity to humans, sharing and supporting task goals. While there is increasing research on the biomechanical and ergonomic consequences of using cobots, there is relatively little work on the potential motor-cognitive demand associated with these devices. These cognitive demands primarily stem from the need to form accurate internal (mental) models of robot behavior, while also dealing with the intrinsic motor-cognitive demands of physical co-manipulation tasks and visually monitoring the environment to ensure safe operation. The primary aim of this work was to investigate the viability of eye-tracking measures for characterizing mental workload during the use of cobots, while accounting for the potential effects of learning, task type, expertise, and age differences. While eye-tracking is gaining traction in the surgical and rehabilitation robotics domains, systematic investigations of eye-tracking for studying interactions with industrial cobots are currently lacking. We conducted three studies in which participants of different ages and expertise levels learned, over multiple trials, to perform upper- and lower-limb tasks using a dual-armed cobot and a whole-body powered exoskeleton, respectively. Robot-control difficulty was manipulated by changing the joint impedance on one of the robot arms (for the dual-armed cobot).
The first study demonstrated that when individuals were learning to interact with a dual-armed cobot to perform an upper-limb co-manipulation task simulated in a virtual reality (VR) environment, pupil dilation (PD) and stationary gaze entropy (SGE) were the most sensitive and reliable measures of mental workload. A combination of eye-tracking measures predicted performance with greater accuracy than the experimental task variables. Measures of visual attentional focus were more sensitive to task-difficulty manipulations than typical eye-tracking workload measures, and PD was the most sensitive to changes in workload over learning. The second study showed that, compared to walking freely, walking while using a complex whole-body powered exoskeleton: a) increased the PD of novices but not experts, b) reduced SGE in both groups, and c) led to greater downward-focused gaze (on the walking path) in experts compared to novices. In the third study, using an upper-limb co-manipulation task similar to that of Study 1, we found that the PD of younger adults decreased at a faster rate over learning than that of older adults, and that older adults showed a significantly greater drop in gaze transition entropy with an increase in task difficulty. Also, PD was sensitive to learning and robot difficulty but not to environmental complexity (collisions with objects in the task environment), whereas gaze-behavior measures were generally more sensitive to environmental complexity.
This research is the first to conduct a comprehensive analysis of mental workload in physical human-robot interaction (pHRI) using eye-tracking measures. PD was consistently found to show larger effects over learning than over task difficulty. Gaze-behavior measures quantifying visual attention towards environmental areas of interest showed relatively large effects of task difficulty and should continue to be explored in future research. While walking in a powered exoskeleton, both novices and experts exhibited compensatory gaze strategies. This finding highlights potentially persistent effects of using cobots on visual attention, with possible implications for safety and situational awareness. Older adults were found to apply greater mental effort (indicated by sustained PD) and followed more constrained gaze patterns in order to maintain levels of performance similar to younger adults. Perceived-workload measures could not capture these age differences, highlighting the advantages of eye-tracking measures. Lastly, the differential sensitivity of pupillary and gaze-behavior metrics to different types of task demands highlights the need for future research to employ both kinds of measures when evaluating pHRI. Important questions for future research are the sensitivity of eye-tracking workload measures over long-term adaptation to cobots and the generalizability of eye-tracking measures to real-world (non-VR) tasks. / Doctor of Philosophy / Collaborative robots (cobots) are an exciting and novel technology that may be used to assist human workers in manual industrial work, reduce physical demand, and potentially enable older adults to re-enter the workforce. However, relatively little is known about the cognitive demands that cobots may impose on the human user. Although intended to assist humans, some cobots have been found to be difficult to use because of the time and effort needed to learn their control dynamics (i.e., to learn how to physically control them to perform a complex manual task). Thus, it is important to better understand the mental demand (workload) that a human operator may experience while using a cobot, and how this demand may vary over time and with learning. Eye-tracking is a promising technique for measuring a cobot operator's mental workload, since it can provide measures that correlate with the involuntary physiological response to mental workload (e.g., pupil dilation, PD), as well as capture the voluntary gaze strategies (e.g., the durations and patterns of where people look) used to perform a physical/motor task. Eye-tracking measures may be used to continuously and precisely evaluate whether a cobot imposes excessive workload on the human operator; if high workload is observed, the cobot may be programmed to adapt its behavior to reduce workload. Although eye-tracking is gaining traction in the surgical and rehabilitation robotics domains, systematic investigations of eye-tracking for studying interactions with industrial cobots are currently lacking.
We designed three studies in which we investigated: 1) the ability of eye-tracking measures to capture changes in mental workload while participants learned to use a cobot under different difficulty levels; 2) the changes in pupil diameter and gaze behavior when participants walked while wearing a whole-body powered exoskeleton, as opposed to walking freely, and the potential differences between novice and expert exoskeleton users; and 3) the differences in mental workload and visual attention between younger and older adults while learning to use a cobot. The first and third studies used virtual reality (VR) to simulate the task environment, allowing precise control over the presentation of stimuli.
In Study 1, we found that at higher difficulty levels, participants' pupils were significantly more dilated (i.e., participants experienced higher mental workload) than at lower difficulty levels. PD also gradually decreased as participants learned to better perform the task. In difficult task conditions, participants gazed more frequently at the robot and showed higher randomness (entropy) in their gaze patterns. The proportion of gaze falling on certain objects was at least as sensitive an indicator of task difficulty as PD and gaze entropy. In Study 2, we found that walking in a whole-body exoskeleton was cognitively demanding, but only for novice participants. However, both novice and expert participants changed their gaze patterns while walking in the exoskeleton: both groups lowered their gaze and focused on the walking path to a greater extent than when walking freely. Lastly, in Study 3, we found that older adults applied greater mental effort to maintain levels of performance similar to younger adults. Older adults also exhibited more repetitive scanning patterns than younger adults when task difficulty increased, possibly due to an age-related reduction in the capacity to control attention. Our work demonstrates that eye-tracking measures are sensitive and reliable metrics of workload, and that different metrics are sensitive to different sources of workload. Specifically, PD was sensitive to robot difficulty, while measures of visual attention were generally more sensitive to the complexity of the task environment. Important questions for future research are the potential changes in eye-tracking workload measures over longer periods of learning to use cobots, and how these results generalize to real-world tasks not performed in virtual reality.
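For readers unfamiliar with the entropy measures referred to above, the following is a generic sketch of how stationary gaze entropy (SGE) and gaze transition entropy (GTE) can be computed from a sequence of fixated areas of interest (AOIs), using the standard Shannon-entropy definitions; it is not the exact pipeline used in these studies.

```python
import numpy as np

def gaze_entropies(aoi_sequence, n_aois):
    """SGE: Shannon entropy of the distribution of fixations over AOIs.
    GTE: stationary-weighted entropy of the first-order transition
    probabilities between consecutive fixations."""
    seq = np.asarray(aoi_sequence)
    p = np.bincount(seq, minlength=n_aois) / len(seq)  # stationary distribution
    sge = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    trans = np.zeros((n_aois, n_aois))                 # transition counts
    for a, b in zip(seq[:-1], seq[1:]):
        trans[a, b] += 1
    rows = trans.sum(axis=1, keepdims=True)
    cond = np.divide(trans, rows, out=np.zeros_like(trans), where=rows > 0)
    gte = -sum(p[i] * np.sum(cond[i][cond[i] > 0] * np.log2(cond[i][cond[i] > 0]))
               for i in range(n_aois))
    return sge, gte

# Example: fixations over three AOIs (0 = robot, 1 = task object, 2 = path)
print(gaze_entropies([0, 1, 0, 2, 1, 1, 0, 2], n_aois=3))
```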
5. Design and validation of a medical robotic device system to control two collaborative robots for ultrasound-guided needle insertions (Berger, Johann; Unger, Michael; Keller, Johannes; Reich, C. Martin; Neumuth, Thomas; Melzer, Andreas, 08 August 2024)
Percutaneous biopsy is a critical intervention for diagnosis and staging in cancer therapy. Robotic systems can improve the efficiency and outcome of such procedures while alleviating stress for physicians and patients. However, the high complexity of operation and the limited possibilities for robotic integration in the operating room (OR) decrease user acceptance and the number of deployed robots. Collaborative systems and standardized device communication may provide approaches to overcome these problems. Using terminology derived from the IEEE 11073 SDC standard for medical device systems, we designed and validated a medical robotic device system (MERODES) to access and control a collaborative setup of two KUKA robots for ultrasound-guided needle insertions. The system is based on a novel standard for service-oriented device connectivity and utilizes collaborative principles to enhance the user experience. Implementing separate workflow applications allows for flexible system setup and configuration. The system was validated in three test scenarios, measuring accuracies for 1) co-registration, 2) needle target planning in a water bath, and 3) needle target planning in an abdominal phantom. The co-registration accuracy averaged 0.94 ± 0.42 mm. The positioning errors ranged from 0.86 ± 0.42 to 1.19 ± 0.70 mm in the water bath setup and from 1.69 ± 0.92 to 1.96 ± 0.86 mm in the phantom. The presented results serve as a proof of concept and add to the current state of the art, easing system deployment and fast configuration for percutaneous robotic interventions.
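As background to the co-registration accuracy reported above, the following is a generic sketch of rigid point-set registration via the Kabsch/SVD method, one standard way to co-register two robot base frames from paired landmark measurements; the paper's actual registration procedure may differ.

```python
import numpy as np

def rigid_register(points_a, points_b):
    """Estimate the rotation R and translation t such that
    points_b ~ R @ points_a + t, from paired 3D landmarks."""
    A, B = np.asarray(points_a, float), np.asarray(points_b, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

def mean_residual(points_a, points_b, R, t):
    """Mean landmark residual after registration, the same kind of
    quantity as the reported co-registration accuracy."""
    resid = np.asarray(points_b) - (np.asarray(points_a) @ R.T + t)
    return float(np.linalg.norm(resid, axis=1).mean())
```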
6. Gesture Segmentation and Recognition for Cognitive Human-Robot Interaction (Simao, Miguel, 17 December 2018)
This thesis presents a human-robot interaction (HRI) framework for classifying large vocabularies of static and dynamic hand gestures captured with wearable sensors. Static and dynamic gestures are classified separately thanks to a segmentation process. Experimental tests on the UC2017 hand gesture dataset showed high classification accuracy. For online frame-by-frame classification using raw, incomplete data, Long Short-Term Memory (LSTM) deep networks and Convolutional Neural Networks (CNNs) performed better than static models trained on specially crafted features, at the cost of training and inference time. Online classification of dynamic gestures enables successful predictive classification. The rejection of out-of-vocabulary gestures is handled through semi-supervised learning of a network in the Auxiliary Conditional Generative Adversarial Network framework. The proposed network achieved high accuracy in rejecting untrained patterns from the UC2018 DualMyo dataset.
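As a point of reference for the frame-by-frame approach described above, here is a minimal PyTorch sketch of an LSTM that emits a class distribution at every time step, which is what makes predictive (early) classification possible. The layer sizes and feature dimensions are illustrative assumptions, not those of the thesis.

```python
import torch
import torch.nn as nn

class FrameGestureLSTM(nn.Module):
    """Frame-by-frame gesture classifier: an LSTM consumes raw sensor
    frames and a linear head emits class logits at every step."""
    def __init__(self, n_features=20, hidden=64, n_classes=24):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):          # frames: (batch, time, features)
        out, _ = self.lstm(frames)
        return self.head(out)           # logits: (batch, time, classes)

# Online use: feed one frame at a time, carrying the hidden state forward.
model = FrameGestureLSTM()
state = None
frame = torch.randn(1, 1, 20)                    # one incoming sensor frame
out, state = model.lstm(frame, state)
probs = model.head(out).softmax(dim=-1)          # running class estimate
```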
7. Recognition of gestures and actions for human-robot collaboration on assembly lines (Coupeté, Eva, 10 November 2016)
Collaborative robots are becoming more and more present in our everyday lives. Within the industrial environment in particular, they have emerged as a preferred solution for making factory assembly lines more flexible and cost-effective, and for reducing the hardship of the operators' work. However, to enable smooth and efficient collaboration, robots should be able to understand their environment, and in particular the actions of the humans around them. With this aim in mind, we decided to study technical gesture recognition. Specifically, we want the robot to be able to synchronize with the operator, adapt its speed, and understand whether something unexpected arises. We considered two use cases, one dealing with co-presence and the other with collaboration, both inspired by existing tasks on automotive assembly lines. First, for the co-presence use case, we evaluated the feasibility of technical gesture recognition using inertial sensors. We obtained a very good result (96% correct recognition of isolated gestures with one operator), which encouraged us to pursue this approach. For the collaborative use case, we decided to focus on non-intrusive sensors to minimize the disturbance to operators, and we chose a depth camera.
We filmed the operators from a top view to prevent most of the potential occlusions. We introduce an algorithm that tracks the operator's hands by calculating the geodesic distances between the points of the upper body and the top of the head. We also design and evaluate a gesture recognition approach based on discrete Hidden Markov Models (HMMs), taking the hand positions as input. We propose a method to adapt our recognition system to new operators, and we embed inertial sensors on the tools to refine our results. We obtain the very good result of 90% correct recognition in real time for 13 operators. Finally, we formalize and detail a complete methodology for achieving technical gesture recognition on assembly lines.
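To illustrate the geodesic-distance idea, here is a generic sketch that builds a k-nearest-neighbour graph over the depth points of the upper body and finds the extremities farthest from the top of the head along the body surface, which can serve as hand candidates. The value of k and the distance threshold are illustrative assumptions, not the thesis's tuned parameters.

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import kneighbors_graph

def hand_candidates(points, head_top_idx, k=8, frac=0.9):
    """points: (N, 3) upper-body points from a top-view depth camera.
    Build a kNN graph, compute geodesic distances from the top of the
    head with Dijkstra, and return the indices of points lying in the
    farthest fraction of the reachable surface (hand candidates)."""
    graph = kneighbors_graph(points, k, mode='distance')
    dist = dijkstra(graph, directed=False, indices=head_top_idx)
    reachable = np.isfinite(dist)
    cutoff = frac * dist[reachable].max()
    return np.where(reachable & (dist >= cutoff))[0]
```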