Subject: "programming by demonstration"

1. Automating iterative tasks with programming by demonstration (Paynter, Gordon W., January 2000)
Programming by demonstration is an end-user programming technique that allows people to create programs by showing the computer examples of what they want to do. Users do not need specialised programming skills. Instead, they instruct the computer by demonstrating examples, much as they might show another person how to do the task. Programming by demonstration empowers users to create programs that perform tedious and time-consuming computer chores. However, it is not in widespread use, and is instead confined to research applications that end users never see. This makes it difficult to evaluate programming by demonstration tools and techniques. This thesis claims that domain-independent programming by demonstration can be made available in existing applications and used by end users to automate iterative tasks. It is supported by Familiar, a domain-independent, AppleScript-based programming-by-demonstration tool embodying standard machine learning algorithms. Familiar is designed for end users, so it works in the existing applications that they regularly use. The assertion that programming by demonstration can be made available in existing applications is validated by identifying the relevant platform requirements and a range of platforms that meet them. A detailed scrutiny of AppleScript highlights problems with the architecture and with many implementations, and yields a set of guidelines for designing applications that support programming by demonstration. An evaluation shows that end users are capable of using programming by demonstration to automate iterative tasks. However, the subjects tended to prefer other tools, choosing Familiar only when the alternatives were unsuitable or unavailable. Familiar's inferencing is evaluated on an extensive set of examples, highlighting the tasks it can perform and the functionality it requires.
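For illustration, a minimal sketch of the kind of inference such a tool performs: given two demonstrated iterations of a task, predict the next one by extrapolating the parameters that change. The event format and the linear-extrapolation rule are assumptions for illustration, not Familiar's actual algorithm (Python is used for all sketches in this listing).

```python
"""Sketch: predicting the next iteration of a demonstrated task.
The (command, argument) event format and the arithmetic-series
rule are illustrative assumptions."""

def predict_next(iterations):
    """Predict the next iteration from prior ones.

    Each iteration is a list of (command, argument) pairs; numeric
    arguments are extrapolated linearly, others are assumed constant.
    """
    last, prev = iterations[-1], iterations[-2]
    nxt = []
    for (cmd, arg), (_, prev_arg) in zip(last, prev):
        if isinstance(arg, int) and isinstance(prev_arg, int):
            nxt.append((cmd, arg + (arg - prev_arg)))  # continue the series
        else:
            nxt.append((cmd, arg))  # non-numeric arguments repeat unchanged
    return nxt

# Two demonstrated cycles: open file N, convert it, save it.
demo = [
    [("open", 1), ("convert", "tiff"), ("save", 1)],
    [("open", 2), ("convert", "tiff"), ("save", 2)],
]
print(predict_next(demo))  # [('open', 3), ('convert', 'tiff'), ('save', 3)]
```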

2. Extracting Human Strategies for Use in Robotic Assembly (Birkhimer, Craig E., 10 January 2005)
No description available.

3. Automatizovaná extracia informácií z internetu / Automated web information extraction (Smotrila, Tomáš, January 2011)
Web sites offer a huge amount of information, often in pages generated from data stored in databases. However, the emphasis is placed on displaying the information, not on making it machine-processable. Part of this thesis is the design and implementation of a prototype system that retrieves data from dynamically generated web pages using the programming-by-demonstration technique. Such a system allows the user to show it with the mouse how to gather information from a website. From such an example, the system derives a procedure for acquiring information from similar sites. The implemented system is able to collect user-relevant information from similar sites, for example in the form of a simple table suitable for further machine processing.
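As a rough sketch of this kind of generalization (the element-path format and the wildcard rule are assumptions for illustration, not the thesis's implementation): record the paths of the elements the user clicked, and turn the positions that differ across demonstrations into wildcards that match every sibling in the repeating structure.

```python
"""Sketch: generalizing a demonstrated selection to similar pages.
The path format ('tag[index]' components joined by '/') is an
illustrative assumption."""

import re

def generalize(demonstrated_paths):
    """Turn element paths the user clicked into one extraction pattern."""
    split = [p.split("/") for p in demonstrated_paths]
    pattern = []
    for steps in zip(*split):
        tags = {re.sub(r"\[\d+\]", "", s) for s in steps}
        assert len(tags) == 1, "demonstrations must share a tag structure"
        # Keep the index only if every demonstration agrees on it.
        pattern.append(steps[0] if len(set(steps)) == 1 else tags.pop() + "[*]")
    return "/".join(pattern)

# The user clicked two cells of the same column in a result table:
clicks = ["html/body/table/tr[1]/td[2]", "html/body/table/tr[3]/td[2]"]
print(generalize(clicks))  # html/body/table/tr[*]/td[2]
```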

4. Virtual lead-through robot programming: Programming virtual robot by demonstration (Boberg, Arvid, January 2015)
This report describes the development of an application which allows a user to program a robot in a virtual environment by the use of hand motions and gestures. The application is inspired by robot lead-through programming, an easy and hands-on approach for programming robots; but instead of performing it online, which causes a loss in productivity, it also draws on the strength of offline programming, where the user operates in a virtual environment. The method thus saves money and avoids disturbing the production environment. To convey hand gesture information into the application, which is implemented for RobotStudio, a Kinect sensor is used to enter the data into the virtual environment. Similar work has been performed before in which a physical robot's movement is manipulated by hand movements, but much less so for virtual robots. The results could simplify the process of programming robots and support the work towards human-robot collaboration, a major focus of this work, as it allows people to interact and communicate with robots. The application was developed in the programming language C# and has two functions that interact with each other: one for the Kinect and its tracking, and the other for installing the application in RobotStudio and passing the calculated data to the robot. The Kinect's functionality is used through three simple hand gestures to jog the robot and create targets for it: open, closed and "lasso". A prototype of this application was completed which, through motions, allowed the user to teach a virtual robot desired tasks by moving it to different positions and saving them with hand gestures. The prototype could be applied both to one-armed robots and to a two-armed robot such as ABB's YuMi. Controlling the robot's orientation while running proved too complicated to develop and implement in time and became the application's main bottleneck, but it remains one of several suggestions for further work in this project.
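A sketch of what the gesture-to-command mapping might look like, in Python rather than the report's C#; only the three gesture names come from the abstract, while the command set and the rule that a "lasso" gesture saves a target are assumptions for illustration.

```python
"""Sketch: mapping the three Kinect hand states (open, closed, 'lasso')
to robot-jogging commands. The command set and transition rules are
illustrative assumptions; the report's actual logic may differ."""

from dataclasses import dataclass, field

@dataclass
class GestureJogger:
    targets: list = field(default_factory=list)

    def update(self, hand_state, hand_position):
        """Translate one tracked frame into a jog command.

        open   -> move the virtual robot toward the hand position
        closed -> hold position (clutch disengaged)
        lasso  -> save the current position as a robot target
        """
        if hand_state == "open":
            return ("jog_to", hand_position)
        if hand_state == "closed":
            return ("hold", None)
        if hand_state == "lasso":
            self.targets.append(hand_position)
            return ("save_target", hand_position)
        raise ValueError(f"unknown hand state: {hand_state}")

jogger = GestureJogger()
for state, pos in [("open", (0.4, 0.1, 0.6)), ("lasso", (0.4, 0.1, 0.6))]:
    print(jogger.update(state, pos))
```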

5. Intuitive, iterative and assisted virtual guides programming for human-robot comanipulation / Programmation intuitive, itérative et assistée de guides virtuels pour la comanipulation homme-robot (Sanchez Restrepo, Susana, 1 February 2018)
For a very long time, automation was driven by the use of traditional industrial robots placed in cages, programmed to repeat more or less complex tasks at their highest speed and with maximum accuracy. This robot-oriented solution depends heavily on hard automation, which requires pre-specified fixtures and time-consuming programming, hindering robots from becoming flexible and versatile tools. These robots have since evolved towards a new generation of small, inexpensive, inherently safe and flexible systems that work hand in hand with humans. In these new collaborative workspaces the human can be included in the loop as an active agent: as a teacher and as a co-worker he can influence the robot's decision-making process. In this context, virtual guides are an important tool for assisting the human worker by reducing physical effort and cognitive overload during task accomplishment. However, the construction of virtual guides often requires expert knowledge and modeling of the task, which restricts their usefulness to scenarios with unchanging constraints. To overcome these challenges and enhance the flexibility of virtual guides programming, this thesis presents a novel approach that allows the worker to create virtual guides by demonstration through an iterative method based on kinesthetic teaching and displacement splines. Thanks to this approach, the worker can iteratively modify the guides while being assisted by them, making the process more intuitive and natural and reducing its painfulness. The approach allows local refinement of virtual guiding trajectories through physical interaction with the robot: the user can modify a specific Cartesian keypoint of the guide or re-demonstrate a portion. We also extended the approach to 6D virtual guides, where displacement splines are defined via Akima interpolation (for translation) and quadratic interpolation of quaternions (for orientation). The worker can initially define a virtual guiding trajectory and then use the assistance in translation to concentrate only on defining the orientation along the path. We demonstrated these innovations in two industrial scenarios with a collaborative robot (cobot), showing that they provide a novel and intuitive solution that increases the human's comfort during human-robot comanipulation.
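A minimal sketch of assembling such a 6D guide from demonstrated keyframes, assuming SciPy: Akima interpolation for translation as described, and SLERP as a simpler stand-in for the quadratic quaternion interpolation the thesis uses. The keyframe values are invented for illustration.

```python
"""Sketch: a 6D virtual guide from demonstrated keyframes.
Translation uses Akima interpolation as in the thesis; orientation
uses SLERP as a stand-in for quadratic quaternion interpolation."""

import numpy as np
from scipy.interpolate import Akima1DInterpolator
from scipy.spatial.transform import Rotation, Slerp

# Demonstrated keyframes: times, positions (m) and orientations.
t_key = np.array([0.0, 1.0, 2.0, 3.0])
p_key = np.array([[0.40, 0.00, 0.30],
                  [0.50, 0.10, 0.35],
                  [0.55, 0.25, 0.30],
                  [0.50, 0.40, 0.25]])
q_key = Rotation.from_euler("xyz", [[0, 0, 0],
                                    [0, 10, 0],
                                    [0, 20, 5],
                                    [0, 30, 10]], degrees=True)

pos_spline = Akima1DInterpolator(t_key, p_key)  # smooth, overshoot-resistant
ori_interp = Slerp(t_key, q_key)                # shortest-arc orientation blend

# Sample the guide densely for the controller.
t = np.linspace(0.0, 3.0, 61)
guide_positions = pos_spline(t)      # (61, 3) translations along the guide
guide_orientations = ori_interp(t)   # 61 interpolated rotations
print(guide_positions[30], guide_orientations[30].as_quat())
```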

6. Programming by demonstration of robot manipulators (Skoglund, Alexander, January 2009)
If a non-expert wants to program a robot manipulator, he needs a natural interface that does not require rigorous robot programming skills. Programming-by-demonstration (PbD) is an approach which enables the user to program a robot by simply showing it how to perform a desired task. In this approach, the robot recognizes what task it should perform and learns how to perform it by imitating the teacher. One fundamental problem in imitation learning arises from the fact that embodied agents often have different morphologies. Thus, a direct skill transfer from a human to a robot is not possible in the general case. Therefore, we need a systematic approach to PbD that takes the capabilities of the robot into account, regarding both perception and body structure. In addition, the robot should be able to learn from experience and improve over time. This raises the question of how to determine the demonstrator's goal or intentions. We show that it is possible, to some degree, to infer these from multiple demonstrations. We address the problem of generating a reach-to-grasp motion that produces the same results as a human demonstration. It is also of interest to learn which parts of a demonstration provide important information about the task. The major contribution is the investigation of a next-state planner using a fuzzy time-modeling approach to reproduce a human demonstration on a robot. We show that the proposed planner can generate executable robot trajectories based on a generalization of multiple human demonstrations. We use the notion of hand-states as a common motion language between the human and the robot. It allows the robot to interpret the human motions as its own, and it also synchronizes reaching with grasping. Other contributions include the model-free learning of the human-to-robot mapping, and how an imitation metric can be used for reinforcement learning of new robot skills. The experimental part of this thesis presents the implementation of PbD of pick-and-place tasks on different robotic hands/grippers. The platforms consist of manipulators and motion-capture devices.
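A toy sketch of the hand-state idea: reaching (distance to the object) and grasping (finger aperture) live in one state vector, and a next-state planner proposes each successive state, so the two are synchronized in a single motion. The state components and the proportional update rule are assumptions for illustration, not the thesis's fuzzy time-modeling planner.

```python
"""Sketch: planning in hand-state space. The three state components
and the proportional update are illustrative assumptions."""

import numpy as np

def next_state(state, goal, gain=0.2):
    """One planner step: move a fraction of the way toward the goal."""
    return state + gain * (goal - state)

# Hand-state: [distance to object (m), approach angle (rad), aperture (m)]
state = np.array([0.30, 0.8, 0.10])   # far away, hand wide open
goal = np.array([0.00, 0.0, 0.04])    # at the object, fingers on its surface

trajectory = [state]
while np.linalg.norm(trajectory[-1] - goal) > 1e-3:
    trajectory.append(next_state(trajectory[-1], goal))

# Reaching and grasping close in on the goal together, as one motion.
print(len(trajectory), "steps; final state:", np.round(trajectory[-1], 3))
```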

7. Applying Machine Learning Techniques to Rule Generation in Intelligent Tutoring Systems (Jarvis, Matthew P, 29 April 2004)
The purpose of this research was to apply machine learning techniques to automate rule generation in the construction of Intelligent Tutoring Systems. By using a pair of somewhat intelligent iterative-deepening, depth-first searches, we were able to generate production rules from a set of marked examples and domain background knowledge. Such production rules required independent searches for both the "if" and "then" portions of the rule. This automated rule generation allows generalized rules with a small number of sub-operations to be generated in a reasonable amount of time, and provides non-programmer domain experts with a tool for developing Intelligent Tutoring Systems.
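A sketch of the search idea, under assumptions: a toy operation set stands in for the domain background knowledge, and an iterative-deepening, depth-first search looks for a composition of operations that reproduces a marked example (roughly, the "then" side of a rule).

```python
"""Sketch: iterative-deepening DFS over background-knowledge operations,
searching for a composition that maps a marked input to its output.
The operation set and example are illustrative assumptions."""

def ids_compose(ops, example_in, example_out, max_depth=4):
    """Find an operation sequence f1..fk with fk(...f1(in)...) == out."""
    def dfs(value, depth, path):
        if value == example_out:
            return path
        if depth == 0:
            return None
        for name, fn in ops.items():
            try:
                nxt = fn(value)          # apply one background-knowledge op
            except Exception:
                continue                 # op not applicable to this value
            found = dfs(nxt, depth - 1, path + [name])
            if found is not None:
                return found
        return None

    for depth in range(1, max_depth + 1):  # iterative deepening
        path = dfs(example_in, depth, [])
        if path is not None:
            return path
    return None

# Background knowledge: primitive operations the tutor domain provides.
ops = {"add_1": lambda x: x + 1, "double": lambda x: 2 * x, "negate": lambda x: -x}
print(ids_compose(ops, 3, 8))  # ['add_1', 'double'] because (3 + 1) * 2 == 8
```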

8. Programmation d'un robot par des non-experts / End-user Robot Programming in Cobotic Environments (Liang, Ying Siu, 12 June 2019)
This research continues the work carried out during my master's (M2R) on programming by demonstration applied to cobotics in an industrial setting. The subject sits at the crossroads of several fields (human-robot interaction, automated planning, machine learning). The goal is now to go beyond the first results obtained during the M2R and to find a generic framework for programming "cobots" (collaborative robots) in industrial environments. In the cobotic approach, a human operator, as a domain expert directly involved in carrying out tasks on the line, teaches the robot new tasks and uses it as an "agile" assistant. In this context, the thesis proposes an end-user programming style of teaching, that is, one simple enough that programming the industrial robot Baxter requires no robotics expertise.

The increasing presence of robots in industries has not gone unnoticed. Cobots (collaborative robots) are revolutionising industries by allowing robots to work in close collaboration with humans. Large industrial players have incorporated them into their production lines, but smaller companies hesitate due to high initial costs and the lack of programming expertise. In this thesis we introduce a framework that combines two disciplines, Programming by Demonstration and Automated Planning, to allow users without programming knowledge to program a robot. The user constructs the robot's knowledge base by teaching it new actions by demonstration, and associates their semantic meaning to enable the robot to reason about them. The robot adopts a goal-oriented behaviour by using automated planning techniques, where users teach action models expressed in a symbolic planning language. We present preliminary work on user experiments using a Baxter Research Robot to evaluate our approach. We conducted qualitative user experiments to evaluate the user's understanding of the symbolic planning language and the usability of the framework's programming process, and showed that users with little to no programming experience can adopt the symbolic planning language and use the framework. We further present our work on a Programming by Demonstration system used for organisation tasks. The system includes a goal inference model to accelerate the programming process by predicting the user's intended product configuration.
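A sketch of how an action model might be induced from demonstrations in such a framework, assuming a STRIPS-style fact encoding (the encoding and the toy demonstrations are illustrative, not the thesis's system): preconditions are the facts that held before every demonstration of the action, and effects are the facts that changed.

```python
"""Sketch: learning a symbolic action model from demonstrations.
The fact encoding and the two toy demonstrations are assumptions."""

def learn_action_model(demos):
    """demos: list of (facts_before, facts_after) set pairs for one action."""
    preconditions = set.intersection(*(before for before, _ in demos))
    add_effects = set.intersection(*(after - before for before, after in demos))
    del_effects = set.intersection(*(before - after for before, after in demos))
    return preconditions, add_effects, del_effects

# Two demonstrations of "pick(a)" in different world states.
demos = [
    ({"handempty", "ontable(a)", "clear(a)", "clear(b)"},
     {"holding(a)", "clear(b)"}),
    ({"handempty", "ontable(a)", "clear(a)"},
     {"holding(a)"}),
]
pre, add, delete = learn_action_model(demos)
print("pre:", pre)       # {'handempty', 'ontable(a)', 'clear(a)'}
print("add:", add)       # {'holding(a)'}
print("del:", delete)    # {'handempty', 'ontable(a)', 'clear(a)'}
```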

9. Robot Task Learning from Human Demonstration (Ekvall, Staffan, January 2007)
Today, most robots used in industry are preprogrammed and require a well-defined and controlled environment. Reprogramming such robots is often a costly process requiring an expert. By enabling robots to learn tasks from human demonstration, robot installation and task reprogramming are simplified. In a longer time perspective, the vision is that robots will move out of factories and into our homes and offices: robots should be able to learn how to set a table or how to fill the dishwasher. Clearly, robot learning mechanisms are required to enable robots to adapt and operate in a dynamic environment, in contrast to the well-defined factory assembly line. This thesis presents contributions in the field of robot task learning. A distinction is made between direct and indirect learning. Using direct learning, the robot learns tasks while being directly controlled by a human, for example in a teleoperative setting. Indirect learning, however, allows the robot to learn tasks by observing a human performing them. A challenging and realistic assumption that is decisive for the indirect learning approach is that the task-relevant objects are not necessarily at the same location at execution time as when the learning took place. Thus, it is not sufficient to learn movement trajectories and absolute coordinates; different methods are required for a robot that is to learn tasks in a dynamic home or office environment. This thesis presents contributions to several of these enabling technologies. Object detection and recognition are used together with pose estimation in a Programming by Demonstration scenario. The vision system is integrated with a localization module which enables the robot to learn mobile tasks. The robot is able to recognize human grasp types, map human grasps to its own hand, and evaluate suitable grasps before grasping an object. The robot can learn tasks from a single demonstration, but it also has the ability to adapt and refine its knowledge as more demonstrations are given. Here, the ability to generalize over multiple demonstrations is important, and we investigate a method for automatically identifying the underlying constraints of the tasks. The majority of the methods have been implemented on a real, mobile robot featuring a camera, an arm for manipulation and a parallel-jaw gripper. The experiments were conducted in an everyday environment with real, textured objects of various shapes, sizes and colors.
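A sketch of the generalization idea, under assumptions: treat task features whose variance across demonstrations is small as constraints that must be reproduced, and the rest as free. The feature set and threshold are invented for illustration, not the thesis's actual method.

```python
"""Sketch: identifying task constraints as low-variance features
across multiple demonstrations. Features and threshold are
illustrative assumptions."""

import numpy as np

def find_constraints(demos, feature_names, threshold=1e-3):
    """demos: (n_demos, n_features) array of per-demonstration features."""
    variances = np.var(demos, axis=0)
    return {name: float(np.mean(demos[:, i]))          # constrained value
            for i, (name, v) in enumerate(zip(feature_names, variances))
            if v < threshold}

# Three demonstrations of placing a cup: x varies freely, but the height
# above the table and the final gripper aperture are nearly identical.
features = ["place_x", "place_height", "final_aperture"]
demos = np.array([[0.30, 0.020, 0.071],
                  [0.55, 0.021, 0.070],
                  [0.42, 0.019, 0.070]])
print(find_constraints(demos, features))
# {'place_height': 0.02, 'final_aperture': 0.0703...}; place_x is free
```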

10. Un robot curieux pour l'apprentissage actif par babillage d'objectifs : choisir de manière stratégique quoi, comment, quand et de qui apprendre / A Curious Robot Learner for Interactive Goal-Babbling: Strategically Choosing What, How, When and from Whom to Learn (Nguyen, Sao Mai, 27 November 2013)
The challenges posed by robots operating in human environments on a daily basis and in the long term point out the importance of adaptivity to changes which can be unforeseen at design time. The robot must learn continuously in an open-ended, non-stationary and high-dimensional space. It must be able to know which parts to sample and what kinds of skills are interesting to learn. One way is to decide what to explore by oneself. Another way is to refer to a mentor. We name these two ways of collecting data sampling modes. The first sampling mode corresponds to algorithms developed in the literature to autonomously drive the robot toward interesting parts of the environment or useful kinds of skills; such algorithms are called artificial curiosity or intrinsic motivation algorithms. The second sampling mode corresponds to social guidance or imitation, where the teacher indicates where to explore as well as where not to explore. Starting from the study of the relationships between these two concurrent methods, we ended up building an algorithmic architecture with a hierarchical learning structure, called Socially Guided Intrinsic Motivation (SGIM). We built an intrinsically motivated active learner which learns how its actions can produce varied consequences or outcomes. It actively learns online by sampling data which it chooses using several sampling modes. On the meta-level, it actively learns which data-collection strategy is most efficient for improving its competence and generalising from its experience to a wide variety of outcomes. The interactive learner thus learns multiple tasks in a structured manner, discovering by itself developmental sequences.
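A sketch of the meta-level strategy choice, under assumptions: an epsilon-greedy rule picks whichever sampling mode has recently produced the most competence progress. The rule, the progress window and the simulated gains are illustrative, not SGIM's exact algorithm.

```python
"""Sketch: choosing a sampling mode (autonomous exploration vs. social
guidance) by recent competence progress. Epsilon-greedy selection and
the simulated rewards are illustrative assumptions."""

import random
from collections import deque

class StrategySelector:
    def __init__(self, strategies, window=10, epsilon=0.1):
        self.progress = {s: deque(maxlen=window) for s in strategies}
        self.epsilon = epsilon

    def choose(self):
        """Mostly pick the strategy with the highest recent competence gain."""
        if random.random() < self.epsilon or not any(self.progress.values()):
            return random.choice(list(self.progress))
        mean = lambda d: sum(d) / len(d) if d else 0.0
        return max(self.progress, key=lambda s: mean(self.progress[s]))

    def record(self, strategy, competence_gain):
        self.progress[strategy].append(competence_gain)

selector = StrategySelector(["autonomous_exploration", "social_guidance"])
for step in range(100):
    s = selector.choose()
    # Simulated outcome: the mentor helps a lot early, less once competent.
    gain = random.gauss(0.5 if s == "social_guidance" and step < 50 else 0.2, 0.05)
    selector.record(s, max(0.0, gain))
```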