11 |
Challenges when introducing collaborative robots in SME manufacturing industry
Schnell, Marie. January 2021
Collaborative robots, or cobots, are seen as an alternative to traditional industrial robots since they are more flexible, less space-consuming, and can share the workspace with human operators. For small and medium-sized enterprises (SMEs), adoption is still at an early stage. This study aims to examine the challenges manufacturing SMEs face when introducing collaborative robots into their business. A literature review is conducted as well as a case study in which managers and operators from five Swedish companies are interviewed about their experiences of introducing collaborative robots. Additional interviews with international researchers in the field are conducted as well. Since the aim is to understand the challenges in a rather new field, which human-robot collaboration still is for SMEs, this is a qualitative explorative study intended to gather rich insight into the field. The data have been analyzed using inductive qualitative analysis. The results show that the biggest challenges for manufacturing SMEs when introducing collaborative robots relate to safety, performance, strategy, involvement, and training. Safety aspects are crucial since human operators work closely with collaborative robots and risk serious injuries, even though the managers and operators in the case study do not seem worried, as they perceive the robots as quite slow and safe. Proper safety assessments are also important, although there is concern about the lack of proper safety regulations. Other challenges relate to performance and strategy, e.g., how to achieve cost-effectiveness with small production volumes and make the robotic investment pay off in the long run, but also how to choose a suitable cobot solution and a reliable supplier, find suitable work tasks, and maintain quality when the cobot fails to recognize a defective product or skewed inputs on the production line. The recommendation from the companies in the case study is to start with an easy task and to see it as a long-term investment. One important key to success is to find a flexible cobot solution that suits the company's individual needs. Employee involvement is another success factor, since involving the operators from the beginning leads to better acceptance and understanding of the new technology and the changed work situation. There is also a need for skilled, educated workers, although the case study shows that the SMEs highlight the importance of choosing a robot system that is easy to learn and easy to use for everyone. The researchers in the study highlight the need for smarter solutions equipped with enabling technologies, and the SME managers call for flexible, removable solutions with sensors and vision systems for quality control and the ability to handle surprises along the way.
12 |
Design and Modeling of Variable Stiffness Mechanisms for Collaborative Robots and Flexible Grasping
Jiaming Fu. 27 April 2024
To ensure safety, traditional industrial robots must operate within cages that separate them from human workers. This requirement has led to the rapid development of collaborative robots (cobots) designed to work closely alongside humans. However, existing cobots often prioritize either performance aspects, such as precision, speed, and payload capacity, or safety, leading to a challenging balance between the two. To address this issue, this dissertation introduces innovative concepts and methodologies for variable stiffness mechanisms. These mechanisms are used to create easily fabricated cobot components that allow intrinsically controllable trade-offs between safety and performance in human-robot collaboration. Additionally, the end-effectors developed based on these mechanisms enable flexible and adaptive gripping of objects, enhancing the utility and efficiency of cobots in various applications.

This article-based dissertation comprises five peer-reviewed articles. The first article introduces a reconfigurable variable stiffness parallel-guided beam (VSPB) whose stiffness can be adjusted discretely; an accurate stiffness model is also established, and the simple, reliable mechanical structure achieves broad stiffness variation. The second article discusses several discrete variable stiffness actuators (DVSAs) suitable for robotic joints. These DVSAs offer high stiffness ratios, rapid switching speeds, low energy consumption, and compact structures compared to most existing variable stiffness actuators. The third article introduces a discrete variable stiffness link (DVSL), applied to the arm of a collaborative robot. Comprising three serially connected VSPBs, it offers eight different stiffness modes to accommodate diverse application scenarios and represents the first DVSL in the world. The fourth article presents a variable stiffness gripper (VSG) with two fingers, each capable of continuous stiffness adjustment. The VSG is a low-cost, customizable universal robotic hand capable of grasping objects of different types, shapes, weights, fragility, and hardness. The fifth article introduces another robotic hand, the world's first discrete variable stiffness gripper (DVSG). It features four different stiffness modes for discrete stiffness adjustment at various gripper positions by engaging or disengaging its ribs. Unlike the VSG, the DVSG therefore focuses more on adaptability to object shapes during grasping.

These research achievements have the potential to facilitate the construction and popularization of next-generation collaborative robots, thereby enhancing productivity in industry and possibly leading to the integration of personal robotic assistants into countless households.
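As a rough illustration of the discrete-mode idea summarized above (not the dissertation's own VSPB model), the sketch below enumerates the 2^3 = 8 stiffness modes of three serially connected two-state beams under an ideal series-spring assumption, 1/k_eff = sum(1/k_i); the stiffness values are placeholders chosen so that all eight modes come out distinct.

```python
from itertools import product

# Placeholder stiffness values (N/mm) for the "compliant" and "stiff" states of
# three serially connected two-state beams; the numbers are illustrative only
# and are not taken from the dissertation's VSPB model.
SEGMENT_STATES = [(4.0, 400.0), (6.0, 600.0), (8.0, 800.0)]

def series_stiffness(ks):
    """Effective stiffness of ideal springs in series: 1/k_eff = sum(1/k_i)."""
    return 1.0 / sum(1.0 / k for k in ks)

# Each beam switches independently, so three beams give 2**3 = 8 discrete modes.
for mode in product((0, 1), repeat=3):
    ks = [SEGMENT_STATES[i][s] for i, s in enumerate(mode)]
    print(f"mode {mode}: k_eff = {series_stiffness(ks):.2f} N/mm")
```

The combinatorial count of modes follows the same logic regardless of how each beam's two states are physically realized.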
13 |
Design, konstruktion och validering av robothantering för omflyttning mellan fixturer / Design, construction and validation of a robot for object transportation between fixtures
Thisner, Mattias. January 2022
LightLab is a company in an expansion phase. Automation means making a process more or less self-running, without human intervention, and is used, for example, to increase efficiency, improve quality, and/or relieve human work in a process. In this respect, LightLab wants to look into automating the transfer of a piece of glass (a glass chip) between two fixtures.

The work lays a theoretical foundation in which different robot types are examined to identify the one best suited to LightLab's needs. This is determined in a current-state analysis where the requirements are specified. A collaborative version of the articulated-arm robot emerges as the best general proposal for the task. In addition, the work analyzes and proposes tools that allow the robot to grip the glass, as well as vision systems for quality assurance of the chip. The general solution is then tested in physical experiments and in the simulation software RobotStudio.

Once the simulation is done, a market survey is conducted to find collaborative articulated-arm robots that meet the requirement specification. Finally, as a result of the pre-study, a proposed solution based on three models is presented: GoFa (ABB), iisy (KUKA), or UR3e (Universal Robots). ABB and KUKA are also of interest given their other industrial robot models, and good contact with them and experience with their robot systems can provide a solid foundation for the future.

Further work remains to design and implement a gripper of some kind, where both suction cups and a mechanical gripper are possible tools for the robot. A suction-cup tool that grips the chip from below has the least impact on the chip.

The vision system already available at LightLab, the Gocator inline 2420, can be used for quality assurance of the glass chips. A 2D vision system is a viable alternative.

In the future, when the production stage becomes more fixed, other robot models that perform specific tasks will be needed. KUKA and ABB are therefore the favorites among the three, since good contact with and experience of their systems can lower the threshold for future development steps.
14 |
Explainable Reinforcement Learning for Risk Mitigation in Human-Robot Collaboration Scenarios / Förklarbar förstärkningsinlärning inom människa-robot-samarbete för riskreducering
Iucci, Alessandro. January 2021
Reinforcement Learning (RL) algorithms are highly popular in the robotics field for solving complex problems, learning from dynamic environments, and generating optimal outcomes. However, one of the main limitations of RL is the lack of model transparency, including the inability to explain why a given output was generated. Explainability becomes even more crucial when RL outputs influence human decisions, such as in Human-Robot Collaboration (HRC) scenarios, where safety requirements must be met. This work focuses on the application of two explainability techniques, "Reward Decomposition" and "Autonomous Policy Explanation", to an RL algorithm that forms the core of a risk mitigation module for robot operation in a collaborative automated warehouse scenario. Reward Decomposition gives insight into the factors that influenced the robot's choice by decomposing the reward function into sub-functions. It also allows the creation of Minimal Sufficient Explanations (MSX), sets of relevant reasons for each decision taken during the robot's operation. The second technique, Autonomous Policy Explanation, provides a global overview of the robot's behavior by answering queries asked by human users, and gives insight into the decision guidelines embedded in the robot's policy. Since the synthesized policy descriptions and query answers are in natural language, this tool facilitates algorithm diagnosis even by non-expert users. The results show an improvement in the RL algorithm, which now chooses more evenly distributed actions, and a full policy for the robot's decisions is produced that is for the most part aligned with expectations. The work provides an analysis of the results of applying both techniques, both of which led to increased transparency of the robot's decision process. These explainability methods not only built trust in the robot's choices, which proved to be among the optimal ones in most cases, but also made it possible to find weaknesses in the robot's policy, making them a helpful tool for debugging purposes.
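To make the reward decomposition idea concrete, here is a minimal tabular sketch, not the thesis implementation: the reward component names ("progress", "risk"), state/action sizes, and hyperparameters are hypothetical. Each component keeps its own Q-table, the summed Q-values drive action selection, and the per-component advantages over the best alternative action are the raw material from which an MSX can be assembled.

```python
import numpy as np

# Minimal sketch of reward decomposition for explainable RL, assuming a tabular
# setting with two hypothetical reward components; this is illustrative and not
# the thesis' warehouse risk-mitigation module.
N_STATES, N_ACTIONS = 10, 4
COMPONENTS = ("progress", "risk")
ALPHA, GAMMA = 0.1, 0.95

# One Q-table per reward component; the agent acts on their sum.
q = {c: np.zeros((N_STATES, N_ACTIONS)) for c in COMPONENTS}

def total_q(state):
    return sum(q[c][state] for c in COMPONENTS)

def update(state, action, next_state, decomposed_reward):
    # Each component is updated with its own reward signal but bootstraps on
    # the greedy action of the *summed* Q-values, so the component values stay
    # an additive decomposition of a single policy.
    greedy_next = int(np.argmax(total_q(next_state)))
    for c in COMPONENTS:
        target = decomposed_reward[c] + GAMMA * q[c][next_state, greedy_next]
        q[c][state, action] += ALPHA * (target - q[c][state, action])

def component_advantages(state, action):
    # Per-component advantage of the chosen action over the best alternative;
    # the smallest subset of positive advantages that outweighs the negative
    # ones forms a Minimal Sufficient Explanation (MSX) for the choice.
    others = [a for a in range(N_ACTIONS) if a != action]
    best_other = others[int(np.argmax(total_q(state)[others]))]
    return {c: float(q[c][state, action] - q[c][state, best_other]) for c in COMPONENTS}
```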