11 |
Programování robotických akcí v rozšířené realitě / Robot Programming in Augmented Reality
Sabela, David January 2020 (has links)
The aim of this master's thesis was to develop an application that allows its users to program robotic actions with the help of augmented reality. The application is demonstrative in character and was built with the goals of intuitive handling and good integration of augmented reality. This experimental application enables users to design a program for a robot using visual instructions, conditions and links, and to test it by visualizing the passage through the program. The application is implemented using Unity3D and the AR Foundation technology. The result was tested by a group of volunteers, whose feedback can be considered generally positive.
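The abstract describes programs assembled from visual instructions, conditions, and links. As a rough illustration of that node-graph model (in Python for brevity; the thesis application itself is built in Unity3D, and all names below are hypothetical), a minimal sketch:

```python
from dataclasses import dataclass

# A toy model of a visual robot program: instruction nodes connected by
# links, with an optional condition selecting between two outgoing links.
# Names and structure are illustrative, not taken from the thesis.

@dataclass
class Node:
    action: str                        # e.g. "move_to", "grip"
    true_next: "Node | None" = None    # followed when condition holds (or always)
    false_next: "Node | None" = None   # followed when condition fails
    condition: "object | None" = None  # callable(state) -> bool, or None

def run_program(start: Node, state: dict) -> list:
    """Walk the node graph, returning the sequence of executed actions."""
    trace, node = [], start
    while node is not None:
        trace.append(node.action)
        if node.condition is not None and not node.condition(state):
            node = node.false_next
        else:
            node = node.true_next
    return trace

# Example: grip only if an object was detected, otherwise retract.
grip = Node("grip")
retract = Node("retract")
move = Node("move_to", true_next=grip, false_next=retract,
            condition=lambda s: s["object_detected"])

print(run_program(move, {"object_detected": True}))   # ['move_to', 'grip']
print(run_program(move, {"object_detected": False}))  # ['move_to', 'retract']
```

Visualizing the passage through the program, as the thesis describes, would then amount to highlighting each node as it appears in the trace.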
|
12 |
Design of a virtual robot cell at IKEA Industry : Digital twin of a packaging robot cell
Larsson, Kevin, Winqvist, Max January 2022 (has links)
This report studies how a digital twin can be utilized through offline robot programming and simulation for a robot packaging line; additionally, the advantages and challenges of a robot digital twin are reported. This thesis project and the robot simulation were done in collaboration with Ikea Industry. The obtained result was in the form of a digital twin, a digital copy of a physical robot cell at Ikea Industry's packaging line. The results show that a digital twin can indeed be utilized for layout planning and robot optimization. An energy consumption chart, which depends on the time taken to package ten boards, was created. This chart can be used for further optimization of the robot cell. The study also shows that a digital twin can save time and money, especially in the design phase, for small and medium-sized companies that do not have the resources to create a dedicated physical robot cell for testing.
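The abstract mentions an energy-consumption chart as a function of the time taken to package ten boards. A minimal sketch of how such chart data might be tabulated; the power figures and the active/idle split below are invented placeholders, not values from the thesis:

```python
# Hypothetical power draws for the packaging cell (not thesis data):
IDLE_POWER_KW = 0.4    # assumed standby draw
ACTIVE_POWER_KW = 2.5  # assumed average draw while the robot is moving

def energy_for_batch(cycle_time_s: float, active_fraction: float) -> float:
    """Energy in kWh to package ten boards, given the total cycle time in
    seconds and the fraction of that time the robot is actively moving."""
    active_s = cycle_time_s * active_fraction
    idle_s = cycle_time_s - active_s
    return (active_s * ACTIVE_POWER_KW + idle_s * IDLE_POWER_KW) / 3600.0

# Tabulate chart points for a range of simulated cycle times:
for t in (60, 90, 120):
    print(t, energy_for_batch(t, 0.7))
```

Sweeping the cycle time in simulation and plotting these points is one plausible way such a chart could be produced from a digital twin.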
|
13 |
Robotic 3D Printing of sustainable structures / Robot 3D-printing med hållbara strukturer
Alkhatib, Tammam January 2023 (has links)
This bachelor thesis aims to integrate and evaluate a 3D printing robotic cell at the Smart Industry Group (SIG) lab at Linnaeus University (LNU). A sustainable structure consisting of wood fiber polymer composites was 3D printed with an industrial robot. Sustainable 3D printing material can be recycled or burned for energy afterwards. The 3D printing material used in this thesis stems from certified forests. The objective is to utilise this technology in manufacturing courses and research projects at the SIG lab at LNU. This objective is achieved by creating an operation manual and a video tutorial in this thesis. The integration and evaluation process involves offline robot programming, simulation, and practical experiments on the 3D printing robotic cell.
|
14 |
Realizing real-time digital twins for industrial cobots
Climent Giménez, Eire María January 2024 (has links)
This paper presents a comprehensive study on the implementation of real-time digital twins for industrial cobots, with a focus on the ABB GoFa CRB 15000 cobot. It highlights the relevance of digital twins in the context of Industry 4.0 and their ability to improve operational efficiency by representing physical processes in virtual models. The project's primary objective is to develop a real-time digital twin system that enables bidirectional monitoring and control of industrial cobots, particularly in assembly tasks. This paper addresses the challenges encountered and the proposed solutions, which include offline and online programming, the adoption of the OPC UA protocol for communication, and the use of ABB RobotStudio for simulation. A framework for understanding the implementation is provided, followed by a detailed analysis of the results obtained, as well as discussion and conclusions. An exploration of possible future work is also included, providing a comprehensive view of the project and its importance in the field of industrial robotics and digital twin technology.
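The bidirectional loop the abstract describes, where the twin mirrors the physical cobot's state and can push commands back, can be sketched with plain Python objects standing in for the OPC UA client/server pair the thesis uses. No real communication stack is involved here, and all node names are hypothetical:

```python
class TwinChannel:
    """Toy stand-in for an OPC UA connection: holds last-known node values."""
    def __init__(self):
        self.values = {}
    def write(self, node_id, value):
        self.values[node_id] = value
    def read(self, node_id):
        return self.values.get(node_id)

class DigitalTwin:
    def __init__(self, channel):
        self.channel = channel
        self.joint_angles = [0.0] * 6  # the GoFa CRB 15000 has six joints

    def sync_from_physical(self):
        """Monitoring direction: pull joint state published by the robot."""
        angles = self.channel.read("robot/joints")
        if angles is not None:
            self.joint_angles = list(angles)

    def command_physical(self, target):
        """Control direction: push a target configuration back to the robot."""
        self.channel.write("twin/target", target)

channel = TwinChannel()
channel.write("robot/joints", [0, 30, -45, 0, 90, 0])  # robot side publishes
twin = DigitalTwin(channel)
twin.sync_from_physical()
twin.command_physical([10, 20, -30, 0, 80, 5])
print(twin.joint_angles)            # mirrored state
print(channel.read("twin/target"))  # command visible to the robot side
```

In the real system, `TwinChannel` would be an OPC UA client session and the reads/writes would target nodes exposed by the robot controller, with RobotStudio simulating the physical side.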
|
15 |
Human-Inspired Robot Task Teaching and Learning
Wu, Xianghai 28 October 2009 (has links)
Current methods of robot task teaching and learning have several limitations: highly-trained personnel are usually required to teach robots specific tasks; service-robot systems are limited in learning different types of tasks utilizing the same system; and the teacher’s expertise in the task is not well exploited. A human-inspired robot-task teaching and learning method is developed in this research with the aim of allowing general users to teach different object-manipulation tasks to a service robot, which will be able to adapt its learned tasks to new task setups.
The proposed method was developed to be interactive and intuitive to the user. In a closed loop with the robot, the user can intuitively teach the tasks, track the learning states of the robot, direct the robot attention to perceive task-related key state changes, and give timely feedback when the robot is practicing the task, while the robot can reveal its learning progress and refine its knowledge based on the user’s feedback.
The human-inspired method consists of six teaching and learning stages: 1) checking and teaching the needed background knowledge of the robot; 2) introduction of the overall task to be taught to the robot: the hierarchical task structure, and the involved objects and robot hand actions; 3) teaching the task step by step, and directing the robot to perceive important state changes; 4) demonstration of the task in whole, and offering vocal subtask-segmentation cues in subtask transitions; 5) robot learning of the taught task using a flexible vote-based algorithm to segment the demonstrated task trajectories, a probabilistic optimization process to assign obtained task trajectory episodes (segments) to the introduced subtasks, and generalization of the taught task trajectories in different reference frames; and 6) robot practicing of the learned task and refinement of its task knowledge according to the teacher’s timely feedback, where the adaptation of the learned task to new task setups is achieved by blending the task trajectories generated from pertinent frames.
An agent-based architecture was designed and developed to implement this robot-task teaching and learning method. This system has an interactive human-robot teaching interface subsystem, which is composed of: a) a three-camera stereo vision system to track user hand motion; b) a stereo-camera vision system mounted on the robot end-effector to allow the robot to explore its workspace and identify objects of interest; and c) a speech recognition and text-to-speech system, utilized for the main human-robot interaction.
A user study involving ten human subjects was performed using two tasks to evaluate the system based on time spent by the subjects on each teaching stage, efficiency measures of the robot’s understanding of users’ vocal requests, responses, and feedback, and their subjective evaluations. Another set of experiments was done to analyze the ability of the robot to adapt its previously learned tasks to new task setups using measures such as object, target and robot starting-point poses; alignments of objects on targets; and actual robot grasp and release poses relative to the related objects and targets. The results indicate that the system enabled the subjects to naturally and effectively teach the tasks to the robot and give timely feedback on the robot’s practice performance. The robot was able to learn the tasks as expected and adapt its learned tasks to new task setups. The robot properly refined its task knowledge based on the teacher’s feedback and successfully applied the refined task knowledge in subsequent task practices. The robot was able to adapt its learned tasks to new task setups that were considerably different from those in the demonstration. The alignments of objects on the target were quite close to those taught, and the executed grasping and releasing poses of the robot relative to objects and targets were almost identical to the taught poses. The robot-task learning ability was affected by limitations of the vision-based human-robot teleoperation interface used in hand-to-hand teaching and the robot’s capacity to sense its workspace. Future work will investigate robot learning of a variety of different tasks and the use of more robot in-built primitive skills.
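Stage 6 of the method adapts a learned task to a new setup by blending task trajectories generated from pertinent reference frames (e.g. the object frame and the target frame). A minimal 2D sketch of such blending, with a weight that shifts from the first frame to the second along the motion; the weighting scheme here is illustrative, not the thesis's exact formulation:

```python
def blend(traj_obj, traj_tgt):
    """Blend two equal-length 2D trajectories point by point, moving the
    weight from the object-frame trajectory (start) to the target-frame
    trajectory (end)."""
    n = len(traj_obj)
    out = []
    for i in range(n):
        w = i / (n - 1)  # 0.0 at the start, 1.0 at the end
        (xo, yo), (xt, yt) = traj_obj[i], traj_tgt[i]
        out.append(((1 - w) * xo + w * xt, (1 - w) * yo + w * yt))
    return out

# Example: trajectories re-generated in two frames for a new setup.
from_object = [(0, 0), (1, 1), (2, 2)]
from_target = [(0, 2), (1, 3), (2, 4)]
print(blend(from_object, from_target))  # starts on the first, ends on the second
```

The appeal of this kind of blending is that the grasp end of the motion stays accurate relative to the object while the release end stays accurate relative to the target, even when both have moved since the demonstration.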
|
17 |
Using Event-Based and Rule-Based Paradigms to Develop Context-Aware Reactive Applications
Le, Truong Giang 30 September 2013 (has links) (PDF)
Context-aware pervasive computing has attracted significant research interest from both academia and industry worldwide. It covers a broad range of applications that support many manufacturing and daily-life activities. For instance, industrial robots detect changes in the working environment of the factory to adapt their operations to the requirements. Automotive control systems may observe other vehicles, detect obstacles, and monitor the fuel level or the air quality in order to warn drivers in case of emergency. Another example is power-aware embedded systems, which need to work based on current power/energy availability since power consumption is an important issue. These kinds of systems can also be considered smart applications. In practice, successful implementation and deployment of context-aware systems depend on the mechanism for recognizing and reacting to variations in the environment. In other words, we need a well-defined and efficient adaptation approach so that a system's behavior can be dynamically customized at runtime. Moreover, concurrency should be exploited to improve the performance and responsiveness of such systems. All these requirements, along with the need for safety, dependability, and reliability, pose a big challenge for developers. In this thesis, we propose a novel programming language called INI, which supports both event-based and rule-based programming paradigms and is suitable for building concurrent, context-aware reactive applications. In our language, both events and rules can be defined explicitly, in a stand-alone way or in combination. Events in INI run in parallel (synchronously or asynchronously) in order to handle multiple tasks concurrently, and they may trigger the actions defined in rules. Besides, events can interact with the execution environment to adjust their behavior if necessary and respond to unpredictable changes.
We apply INI in both academic and industrial case studies, namely an object-tracking program running on the humanoid robot Nao and an M2M gateway. This demonstrates the soundness of our approach as well as INI's capabilities for constructing context-aware systems. Additionally, since context-aware programs are widely applicable and more complex than regular ones, they pose a higher demand for quality assurance. Therefore, we formalize several aspects of INI, including its type system and operational semantics. Furthermore, we develop a tool called INICheck, which can convert a significant subset of INI to Promela, the input modeling language of the model checker SPIN. Hence, SPIN can be applied to verify properties or constraints that need to be satisfied by INI programs. Our tool allows programmers to gain assurance about their code and its behavior.
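The event-triggers-rule pattern the abstract describes can be mimicked in plain single-threaded Python to show the control flow only. INI's own syntax and its concurrent event execution are not reproduced here; all names below are illustrative:

```python
class RuleEngine:
    """Toy event/rule dispatcher: rules pair an event name with a
    condition on the event payload and an action to run when it holds."""
    def __init__(self):
        self.rules = []  # list of (event_name, condition, action)

    def on(self, event_name, condition, action):
        self.rules.append((event_name, condition, action))

    def fire(self, event_name, payload):
        """Deliver an event; return the results of all triggered actions."""
        results = []
        for name, cond, action in self.rules:
            if name == event_name and cond(payload):
                results.append(action(payload))
        return results

engine = RuleEngine()
# Rule: warn when an obstacle is detected closer than 0.5 m
# (echoing the automotive example from the abstract).
engine.on("obstacle",
          lambda p: p["distance"] < 0.5,
          lambda p: f"warn driver: obstacle at {p['distance']} m")

print(engine.fire("obstacle", {"distance": 0.3}))  # rule triggers
print(engine.fire("obstacle", {"distance": 2.0}))  # condition false, no action
```

In INI itself, each event source would additionally run in parallel and could be reconfigured at runtime, which this sequential sketch does not attempt to show.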
|
18 |
Mixed reality for assembly processes, programming and guiding
Peirotén López de Arbina, Borja, Romero Luque, Elisabeth María January 2023 (has links)
Assembly processes are an integral part of many industries, including manufacturing and production. These processes typically involve the use of robots and automated equipment to perform tasks such as picking, placing, and joining components. One solution is Mixed Reality (MR), which combines virtual and real-world elements to create an immersive environment for the operator. MR technology can be used to guide operators through the assembly process, providing real-time feedback and instructions, as well as allowing them to program the assembly process and make adjustments as needed. The project focused on developing a user interface for the HoloLens 2 glasses that allows operators to select different tools and robots and configure targets and processes for an assembly station. The team also developed a system to send information about targets, paths, and joint values to the virtual and real robot, which allowed operators to easily program the robot to perform the assembly process. It was possible to develop and test the MR system in a real-world assembly setting, evaluating its effectiveness in improving the efficiency and accuracy of the process. This project aims to demonstrate the potential of MR technology for improving assembly processes and to provide a proof of concept for future development in this field.
|
19 |
Improving the user experience of touchscreen text-based code editor in an industrial robot controller / Förbättring av användarupplevelsen för textbaserad kodredigerare med pekskärm i en industriell robotkontroller
Xu, Xuanling January 2023 (has links)
This project investigated the touchscreen text-based code editor in the OmniCore FlexPendant to improve its usability and user experience. This is a powerful but complex application used to program industrial robots. The objective is to redesign the user interface and interactions to make them more user-friendly and intuitive, with the goal of improving efficiency. The principles for designing complex applications and touchscreen products are generated as an outcome. From an academic standpoint, the research aims to fill the gap in text-based code editors for robot controller design and to inspire touchscreen code editor design in other fields. Design thinking served as the framework for the design process, which encompassed seven steps ranging from exploration to conceptualization and user testing. Guidance for improvement was generated through 'become a user' sessions, competitive analysis, and user studies. In the design phase, a high-fidelity prototype was built upon the original design with completely new interfaces, structures, and interactions. The user experience and usability were evaluated during user testing by measuring task completion time, applying two standard user experience measurements, and conducting a brief interview. The results indicate that the new design achieved better completion efficiency in tasks, better user experience and usability scores, and received positive feedback from participants. The new solution meets the objectives and is considered a good reference for the design of industrial robot programming solutions. / Denna studie undersökte den pekskärms- och textbaserade kodeditorn i OmniCore FlexPendant, för att förbättra dess användbarhet och användarupplevelse. Det är en kraftfull men komplex applikation som används för att programmera industrirobotar. Målet är en omarbetning av användargränssnittet och interaktionerna för att göra dem mer användarvänliga och intuitiva, med målet att förbättra effektiviteten.
Principerna för att utforma komplexa applikationer och pekskärmsprodukter genereras som ett resultat. Ur ett akademiskt perspektiv syftar forskningen till att fylla luckan gällande design av textbaserade kodeditor för robotkontroller, och inspirera vid designen av pekskärmsbaserade kodeditorer inom andra fält. ”Design thinking” tjänade som ramverk för designprocessen, vilken omfattade sju steg som sträckte sig från utforskning till konceptualisering och användartestning. Vägledning för förbättringar tas fram genom ”att vara en användare”, konkurrensanalys och användarstudier. I designfasen byggs en högupplöst prototyp baserat på den ursprungliga designen med helt nya gränssnitt, struktur och interaktioner. Användarupplevelsen och användbarheten utvärderas under användartestning genom att räkna tid, tillämpa två standardmått för användarupplevelse och genomföra en kort intervju. Resultaten visar att den nya designen uppnådde högre effektivitet i uppgifter, bättre användarupplevelse och högre användbarhetspoäng samt fick positiv feedback från deltagarna. Den nya lösningen uppfyller målen och anses vara en bra referens för design av lösningar för programmering av industrirobotar.
|
20 |
Using Event-Based and Rule-Based Paradigms to Develop Context-Aware Reactive Applications / Programmation événementielle et programmation à base de règles pour le développement d'applications réactives sensibles au contexte
Le, Truong Giang 30 September 2013 (has links)
Les applications réactives et sensibles au contexte sont des applications intelligentes qui observent l'environnement (ou contexte) dans lequel elles s'exécutent et qui adaptent, si nécessaire, leur comportement en cas de changements dans ce contexte, ou afin de satisfaire les besoins ou d'anticiper les intentions des utilisateurs. La recherche dans ce domaine suscite un intérêt considérable tant de la part des académiques que des industriels. Les domaines d'applications sont nombreux : robots industriels qui peuvent détecter les changements dans l'environnement de travail de l'usine pour adapter leurs opérations ; systèmes de contrôle automobiles pour observer d'autres véhicules, détecter les obstacles, ou surveiller le niveau d'essence ou la qualité de l'air afin d'avertir les conducteurs en cas d'urgence ; systèmes embarqués monitorant la puissance énergétique disponible et modifiant la consommation en conséquence. Dans la pratique, le succès de la mise en œuvre et du déploiement de systèmes sensibles au contexte dépend principalement du mécanisme de reconnaissance et de réaction aux variations de l'environnement. En d'autres termes, il est nécessaire d'avoir une approche adaptative bien définie et efficace de sorte que le comportement des systèmes puisse être modifié dynamiquement à l'exécution. En outre, la concurrence devrait être exploitée pour améliorer les performances et la réactivité des systèmes. Toutes ces exigences, ainsi que les besoins en sécurité et fiabilité, constituent un grand défi pour les développeurs. C'est pour permettre une écriture plus intuitive et directe d'applications réactives et sensibles au contexte que nous avons développé dans cette thèse un nouveau langage appelé INI. Pour observer les changements dans le contexte et y réagir, INI s'appuie sur deux paradigmes : la programmation événementielle et la programmation à base de règles. Événements et règles peuvent être définis en INI de manière indépendante ou en combinaison.
En outre, les événements peuvent être reconfigurés dynamiquement au cours de l'exécution. Un autre avantage d'INI est qu'il supporte la concurrence afin de gérer plusieurs tâches en parallèle et ainsi améliorer les performances et la réactivité des programmes. Nous avons utilisé INI dans deux études de cas : une passerelle M2M multimédia et un programme de suivi d'objet pour le robot humanoïde Nao. Enfin, afin d'augmenter la fiabilité des programmes écrits en INI, un système de typage fort a été développé, et la sémantique opérationnelle d'INI a été entièrement définie. Nous avons en outre développé un outil appelé INICheck qui permet de convertir automatiquement un sous-ensemble d'INI vers Promela pour permettre une analyse par model checking à l'aide de l'outil SPIN. / Context-aware pervasive computing has attracted significant research interest from both academia and industry worldwide. It covers a broad range of applications that support many manufacturing and daily-life activities. For instance, industrial robots detect changes in the working environment of the factory to adapt their operations to the requirements. Automotive control systems may observe other vehicles, detect obstacles, and monitor the fuel level or the air quality in order to warn drivers in case of emergency. Another example is power-aware embedded systems, which need to work based on current power/energy availability since power consumption is an important issue. These kinds of systems can also be considered smart applications. In practice, successful implementation and deployment of context-aware systems depend on the mechanism for recognizing and reacting to variations in the environment. In other words, we need a well-defined and efficient adaptation approach so that a system's behavior can be dynamically customized at runtime. Moreover, concurrency should be exploited to improve the performance and responsiveness of such systems.
All these requirements, along with the need for safety, dependability, and reliability, pose a big challenge for developers. In this thesis, we propose a novel programming language called INI, which supports both event-based and rule-based programming paradigms and is suitable for building concurrent, context-aware reactive applications. In our language, both events and rules can be defined explicitly, in a stand-alone way or in combination. Events in INI run in parallel (synchronously or asynchronously) in order to handle multiple tasks concurrently, and they may trigger the actions defined in rules. Besides, events can interact with the execution environment to adjust their behavior if necessary and respond to unpredictable changes. We apply INI in both academic and industrial case studies, namely an object-tracking program running on the humanoid robot Nao and an M2M gateway. This demonstrates the soundness of our approach as well as INI's capabilities for constructing context-aware systems. Additionally, since context-aware programs are widely applicable and more complex than regular ones, they pose a higher demand for quality assurance. Therefore, we formalize several aspects of INI, including its type system and operational semantics. Furthermore, we develop a tool called INICheck, which can convert a significant subset of INI to Promela, the input modeling language of the model checker SPIN. Hence, SPIN can be applied to verify properties or constraints that need to be satisfied by INI programs. Our tool allows programmers to gain assurance about their code and its behavior.
|