21

Learning Multi-step Dual-arm Tasks From Demonstrations

Natalia S Sanchez Tamayo 29 July 2020
Surgeon expertise can be difficult to capture through direct robot programming. Deep imitation learning (DIL) is a popular method for teaching robots to autonomously execute tasks by learning from demonstrations, and DIL approaches have previously been applied to surgical automation. However, previous approaches do not consider the full range of dexterous robot motion required in general surgical tasks, leaving out tooltip rotation changes or modeling only one robotic arm. Hence, they are not directly applicable to tasks that require rotation and dual-arm collaboration, such as debridement. We propose to address this limitation by formulating a DIL approach for the execution of dual-arm surgical tasks that includes changes in tooltip orientation and position as well as gripper actions.

In this thesis, a framework for multi-step surgical task automation is designed and implemented by leveraging deep imitation learning. The framework optimizes Recurrent Neural Networks (RNNs) for the execution of whole surgical tasks while considering tooltip translations and rotations as well as gripper actions. The proposed network architecture implicitly optimizes for the interaction between the two robotic arms, as opposed to modeling each arm independently. The networks were trained directly on the human demonstrations and require neither task-specific hand-crafted models nor manual segmentation of the demonstrations.

The proposed framework was implemented and evaluated in simulation for two relevant surgical tasks: peg transfer and surgical debridement. The tasks were tested under random initial conditions to challenge the networks' ability to generalize to variable settings. The performance of the framework was assessed using task and subtask success as well as a set of quantitative metrics. Experimental evaluation showed favorable results for automating surgical tasks under variable conditions: for surgical debridement, the framework obtained a task success rate comparable to the human success rate, while for the peg transfer task it displayed moderate overall task success. Quantitative metrics indicate that the robot-generated trajectories possess similar or better motion economy than the human demonstrations.
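To make the jointly-modeled dual-arm architecture concrete, here is a minimal sketch in PyTorch of a single recurrent network that predicts tooltip translation, orientation, and gripper commands for both arms from one shared hidden state. The layer sizes, the 8-values-per-arm output layout (3-D translation, 4-D quaternion, 1 gripper logit), and the name `DualArmPolicy` are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

class DualArmPolicy(nn.Module):
    """Single RNN jointly modeling both arms (a sketch, not the thesis code).

    Assumed output per arm: 3-D tooltip translation, 4-D orientation
    quaternion, and 1 gripper open/close logit -> 8 values per arm, 16 total.
    """
    def __init__(self, state_dim=32, hidden_dim=128):
        super().__init__()
        self.rnn = nn.LSTM(state_dim, hidden_dim, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, 16)  # both arms decoded from one state

    def forward(self, states, hidden=None):
        out, hidden = self.rnn(states, hidden)
        y = self.head(out)
        left, right = y[..., :8], y[..., 8:]
        return left, right, hidden

# Behavior-cloning loss over a demonstration batch of shape (B, T, state_dim):
policy = DualArmPolicy()
states = torch.randn(4, 50, 32)      # dummy demonstration states
targets = torch.randn(4, 50, 16)     # dummy expert commands for both arms
left, right, _ = policy(states)
loss = nn.functional.mse_loss(torch.cat([left, right], dim=-1), targets)
loss.backward()
```

The design choice the abstract highlights is visible here: one shared recurrent state drives both arms' outputs, so inter-arm coordination is learned implicitly rather than by training two independent models.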
22

On the Efficiency of Transfer Learning in a Fighter Pilot Behavior Modelling Context

Sandström, Viktor January 2021
Recent deep learning techniques make it possible to create realistic models of human fighter pilot behavior. However, these techniques often depend on large datasets that are unavailable in many settings or expensive to produce. Transfer learning is an active research field whose core idea is to leverage the knowledge gained from studying a problem for which large amounts of training data are readily available when tackling a different, related problem. The related problem is called the target task and the initial problem is called the source task. In a successful transfer scenario, less data or less training is required to reach high-quality results on the target task. The first part of this thesis focuses on the development of a fighter pilot model using behavior cloning, a method that reduces an imitation learning problem to standard supervised learning. The resulting model, called a policy, is capable of imitating a human pilot controlling a fighter jet in the military combat simulator Virtual BattleSpace 3. In this simulator, the forces acting on the aircraft can be modelled using one of several flight dynamic models (FDMs). In the second part, the efficiency of transfer learning is measured. This is done by replacing the built-in FDM with one that responds very differently to control inputs, and subsequently training two policies on successively larger amounts of data. One policy was trained using only the new FDM, whereas the other exploited the knowledge gained in the first part of the thesis through a technique called fine-tuning. The results indicate that a model already capable of handling one FDM adapts to a different FDM with less data than a previously untrained policy.
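A minimal sketch of that scratch-versus-fine-tune comparison, assuming PyTorch; the network shape, learning rates, epoch counts, and the random stand-in data are illustrative assumptions rather than the thesis's actual setup.

```python
import torch
import torch.nn as nn

def make_policy(obs_dim=24, act_dim=4):
    # Behavior-cloning policy: aircraft state -> control commands (sizes assumed).
    return nn.Sequential(
        nn.Linear(obs_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, act_dim),
    )

def bc_epoch(policy, opt, obs, expert_act):
    # One supervised pass over demonstration data.
    loss = nn.functional.mse_loss(policy(obs), expert_act)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Dummy demonstration batches standing in for the two FDMs' datasets;
# the target-FDM data is deliberately scarce, as in the thesis's setting.
src_obs, src_act = torch.randn(512, 24), torch.randn(512, 4)
tgt_obs, tgt_act = torch.randn(128, 24), torch.randn(128, 4)

# Policy 1: pretrain on the source FDM, then fine-tune on the target FDM
# with a smaller learning rate so the pretrained weights are only adjusted.
pretrained = make_policy()
opt = torch.optim.Adam(pretrained.parameters(), lr=1e-3)
for _ in range(100):
    bc_epoch(pretrained, opt, src_obs, src_act)
opt = torch.optim.Adam(pretrained.parameters(), lr=1e-4)  # fine-tuning phase
for _ in range(100):
    bc_epoch(pretrained, opt, tgt_obs, tgt_act)

# Policy 2: the baseline trains on the same target data from scratch.
scratch = make_policy()
opt = torch.optim.Adam(scratch.parameters(), lr=1e-3)
for _ in range(100):
    bc_epoch(scratch, opt, tgt_obs, tgt_act)
```

Comparing the two policies' validation losses at increasing amounts of target data reproduces the thesis's core measurement: how much data fine-tuning saves over training from scratch.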
23

Flying High: Deep Imitation Learning of Optimal Control for Unmanned Aerial Vehicles

Ericson, Ludvig January 2018
Optimal control for multicopters is difficult, in part because of the low processing power available on board and the instability inherent to multicopters. Deep imitation learning is a method for approximating an expert control policy with a neural network, and it has the potential to improve control for multicopters. We investigate the performance and reliability of deep imitation learning with trajectory optimization as the expert policy by first defining a dynamics model for multicopters and then applying a trajectory optimization algorithm to it. Our investigation shows that network architecture plays an important role in the characteristics of both the learning process and the resulting control policy, and that trajectory optimization in particular can be leveraged to improve convergence times for imitation learning. Finally, we identify some limitations and future areas of study and development for the technology.
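The pipeline the abstract describes — an expensive trajectory optimizer labels states offline, and a small network imitates it — might look roughly like the following sketch. `solve_trajopt` is a placeholder for the actual optimizer (e.g. an iLQR solve over the multicopter dynamics model), and all dimensions are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def solve_trajopt(x0):
    # Placeholder expert: returns a control for state x0. A real
    # implementation would optimize a trajectory over a horizon from x0
    # and return the first control of the optimal plan.
    return -0.5 * x0[:4]  # hypothetical 4 rotor commands

# Label sampled states with near-optimal controls (slow, done offline).
states = [np.random.randn(12) for _ in range(1024)]   # assumed 12-D state
actions = [solve_trajopt(x) for x in states]

X = torch.tensor(np.stack(states), dtype=torch.float32)
U = torch.tensor(np.stack(actions), dtype=torch.float32)

# Fit a small network to the expert's labels (standard supervised learning).
net = nn.Sequential(nn.Linear(12, 64), nn.Tanh(), nn.Linear(64, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(net(X), U)
    opt.zero_grad(); loss.backward(); opt.step()
# Once distilled, `net` evaluates fast enough for onboard real-time control.
```

The payoff is in the last comment: the distilled network sidesteps the low onboard processing power that makes running the optimizer itself in real time impractical.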
24

Performance Evaluation of Imitation Learning Algorithms with Human Experts

Båvenstrand, Erik, Berggren, Jakob January 2019
The purpose of this thesis was to compare the performance of three different imitation learning algorithms with human experts under limited expert time. The central question was: ”How should one implement imitation learning in a simulated car racing environment, using human experts, to achieve the best performance when access to the experts is limited?” We limited the work to the three algorithms Behavior Cloning, DAGGER, and HG-DAGGER, and limited the implementation to the car racing simulator TORCS. The agents consisted of the same type of feedforward neural network, which utilized sensor data provided by TORCS. By comparing the performance of the different algorithms at different amounts of expert time, we conclude that HG-DAGGER performed best. Here, performance is measured as the distance covered in a set time. Its performance also scaled well with more expert time, which the others' did not. This result confirms previously published comparisons of these algorithms.
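For reference, the core loop that distinguishes DAGGER from plain Behavior Cloning can be sketched as below. The gym-style `env` and queryable `expert` are assumed stand-ins; with a human expert, the labeling call is exactly the expensive step that the thesis's expert-time budget limits. HG-DAGGER modifies this loop so the human intervenes, and labels, only when the learner leaves a safe region.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def dagger(env, expert, iterations=5, horizon=500):
    """Minimal DAGGER sketch: roll out the learner, label the visited states
    with the expert's action, and retrain on the aggregated dataset."""
    obs_data, act_data = [], []
    policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    for it in range(iterations):
        obs = env.reset()
        for _ in range(horizon):
            # The expert drives on iteration 0; afterwards the learner drives,
            # but the expert still labels every state the learner visits.
            expert_act = expert(obs)
            act = expert_act if it == 0 else policy.predict([obs])[0]
            obs_data.append(obs)
            act_data.append(expert_act)
            obs, reward, done, info = env.step(act)
            if done:
                obs = env.reset()
        policy.fit(np.array(obs_data), np.array(act_data))  # aggregate & retrain
    return policy
```

Because the expert labels states the *learner* actually reaches, DAGGER avoids the compounding-error problem of Behavior Cloning, at the cost of more expert queries per unit of training data.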
25

Training an Adversarial Non-Player Character with an AI Demonstrator: Applying Unity ML-Agents

Jlali, Yousra Ramdhana January 2022
Background. Game developers are continuously searching for new ways of populating their vast game worlds with competent and engaging Non-Player Characters (NPCs), and researchers believe Deep Reinforcement Learning (DRL) might be the solution for emergent behavior. Consequently, work fusing NPCs with DRL practices has surged in recent years; however, the proposed solutions rarely outperform traditional script-based NPCs. Objectives. This thesis explores a novel method of developing an adversarial DRL NPC by combining Reinforcement Learning (RL) algorithms. Our goal is to produce an agent that surpasses its script-based opponents by first mimicking their actions. Methods. The experiment commences with Imitation Learning (IL) before proceeding with supplementary DRL training in which the agent is expected to improve its strategies. Lastly, all agents participate in 100-deathmatch tournaments so that their deathmatch performances can be statistically evaluated and differentiated. Results. Statistical tests reveal that the agents reliably differ from one another and that our learning agent performed poorly in comparison to its script-based opponents. Conclusions. Based on our computed statistics, we conclude that our solution was unsuccessful in developing a skilled hostile DRL agent, as it was unable to demonstrate any form of proficiency in deathmatches. No further improvements could be applied to our ML agent due to time constraints. However, we believe our outcome can serve as a stepping-stone for future experiments within this branch of research.
26

AI-based modeling of brain and behavior: combining neuroimaging, imitation learning and video games

Kemtur, Anirudha 07 1900
Recent advances in the field of Artificial Intelligence have paved the way for the development of novel models of brain activity. Artificial Neural Networks (ANNs) trained on complex tasks, such as image recognition and language processing, can be used to predict brain dynamics in response to a wide range of stimuli with unprecedented accuracy, a process called brain encoding. Videogames have been extensively studied in the AI field but have hardly been used yet for brain encoding. Videogames provide a promising framework for understanding brain activity in rich, engaging and active environments, in contrast to the mostly passive tasks currently dominating the field, such as image viewing. A major challenge raised by complex videogames is that individual behavior is highly variable across subjects, and we hypothesized that ANNs need to account for subject-specific behavior in order to properly capture brain dynamics. In this study, we aimed to use ANNs to model functional magnetic resonance imaging (fMRI) and behavioral gameplay data, which we collected while subjects played the Shinobi III videogame.
Using imitation learning, we trained an ANN to play the game while closely replicating the unique gameplay style of each individual participant. We found that the hidden layers of our imitation learning model successfully encode task-relevant neural representations and predict individual brain dynamics with higher accuracy than various control models, including models trained on other subjects' actions. The highest correlations between layer activations and brain signals were observed in biologically plausible brain areas, i.e. somatosensory, attentional and visual networks. Our results highlight the potential of combining imitation learning, brain imaging, and videogames to uncover subject-specific relationships between brain and behavior.
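The standard way to test this kind of claim — that a layer's activations predict brain dynamics — is a regularized linear encoding model. The sketch below, using scikit-learn with random stand-in arrays, illustrates that evaluation; the dimensions (256 hidden features, 400 brain parcels) and the preprocessing noted in the comments are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Brain-encoding sketch: X holds the network's hidden-layer activations per
# fMRI time point (in practice, convolved with a hemodynamic response and
# downsampled to the scanner's repetition time), Y the measured BOLD signal
# per brain parcel. Random arrays stand in for both here.
n_timepoints, n_features, n_parcels = 600, 256, 400
X = np.random.randn(n_timepoints, n_features)
Y = np.random.randn(n_timepoints, n_parcels)

# No shuffling: time series are split into contiguous train/test segments.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, shuffle=False)

# One ridge-regularized linear encoding model per parcel, fit jointly.
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
model.fit(X_tr, Y_tr)
pred = model.predict(X_te)

# Encoding accuracy: per-parcel correlation between predicted and measured
# signals; control models (e.g. trained on other subjects) use the same score.
r = [np.corrcoef(pred[:, i], Y_te[:, i])[0, 1] for i in range(n_parcels)]
print(f"median encoding r = {np.median(r):.3f}")
```

Comparing this score across models — the subject's own imitation network versus one trained on other subjects' actions — is what supports the abstract's subject-specificity claim.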
27

Imitation Learning on Branching Strategies for Branch and Bound Problems

Axén, Magnus January 2023
A new branch of machine learning and deep learning models has evolved in constrained optimization, specifically for mixed integer programming (MIP) problems. These models draw inspiration from earlier solver methods, primarily the branch and bound heuristic. Working within the branch and bound framework, machine and deep learning models enhance either the computational efficiency or the performance of the solver. This thesis examines how imitating different variable selection strategies of classical MIP solvers behaves on a state-of-the-art deep learning model. A recently developed deep learning algorithm is used, which represents the branch and bound state as a bipartite graph. This graph serves as the input to a graph network model, which determines the variable in the MIP on which branching occurs. The thesis compares how imitating different classical branching strategies affects various algorithm outputs and, most importantly, solving time. More specifically, it conducts an empirical study on a MIP known as the facility location problem (FLP) and compares the different imitation targets. The thesis shows that the deep learning algorithm can outperform the classical methods in terms of solving time. In particular, imitating branching strategies that produce small branch and bound trees leads to faster discovery of the global optimum. Lastly, it is shown that a smaller embedding size in the network model is preferable for these instances, given the trade-off between variable selection quality and time cost.
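Schematically, the imitation setup described here reduces to a scorer over the variable nodes of the bipartite graph, trained with cross-entropy against the expert strategy's branching choice. Everything in the sketch below — the feature sizes, the single mean-pooled message pass, and the name `BranchScorer` — is an illustrative assumption, not the exact graph network from the thesis.

```python
import torch
import torch.nn as nn

class BranchScorer(nn.Module):
    """Toy learning-to-branch model: the branch and bound node is a bipartite
    graph with variable nodes on one side, constraint nodes on the other,
    connected where a variable appears in a constraint."""
    def __init__(self, var_feats=8, con_feats=5, hidden=32):
        super().__init__()
        self.var_emb = nn.Linear(var_feats, hidden)
        self.con_emb = nn.Linear(con_feats, hidden)
        self.msg = nn.Linear(2 * hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, V, C, A):
        # V: (n_vars, var_feats), C: (n_cons, con_feats),
        # A: (n_cons, n_vars) 0/1 incidence matrix.
        v, c = torch.relu(self.var_emb(V)), torch.relu(self.con_emb(C))
        # One constraint-to-variable message pass, mean-pooled per variable.
        deg = A.sum(dim=0, keepdim=True).clamp(min=1).T
        pooled = (A.T @ c) / deg
        v = torch.relu(self.msg(torch.cat([v, pooled], dim=-1)))
        return self.out(v).squeeze(-1)  # one branching score per variable

# Imitation loss: cross-entropy against the expert strategy's chosen variable,
# so the network mimics choices that keep branch and bound trees small.
scorer = BranchScorer()
V, C, A = torch.randn(10, 8), torch.randn(6, 5), (torch.rand(6, 10) < 0.3).float()
expert_choice = torch.tensor(3)  # index picked by the expert rule
loss = nn.functional.cross_entropy(scorer(V, C, A).unsqueeze(0),
                                   expert_choice.unsqueeze(0))
loss.backward()
```

At solve time the scorer replaces the expert rule: the solver branches on the highest-scoring fractional variable, trading the expert's per-node computation for one cheap forward pass.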
28

Emergence of language-like latents in deep neural networks

Lu, Yuchen 05 1900
The emergence of language is regarded as one of the hallmarks of human intelligence. We therefore hypothesize that the emergence of language-like latents or representations in a deep learning system could help models achieve better compositional and out-of-distribution generalization. In this thesis, we present a series of papers that explore this hypothesis in different fields, including interactive language learning, imitation learning and computer vision.
29

Offline Reinforcement Learning for Downlink Link Adaptation: A study on dataset and algorithm requirements for offline reinforcement learning

Dalman, Gabriella January 2024
This thesis studies offline reinforcement learning as an optimization technique for downlink link adaptation, one of the many control loops in radio access networks. The work studies the impact of the quality of pre-collected datasets, in terms of how well the data covers the state-action space and whether or not it was collected by an expert policy. The data quality is evaluated by training three different algorithms: Deep Q-networks (DQN), Critic regularized regression (CRR), and Monotonic advantage re-weighted imitation learning (MARWIL). The performance is measured for each combination of algorithm and dataset, and their need for hyperparameter tuning and their sample efficiency are studied. The results showed Critic regularized regression to be the most robust, because it learned well from any of the datasets used in the study and did not require extensive hyperparameter tuning. Deep Q-networks required careful hyperparameter tuning, but paired with the expert data it reached rewards as high as the agents trained with Critic regularized regression. Monotonic advantage re-weighted imitation learning needed data from an expert policy to reach a high reward. In summary, offline reinforcement learning can succeed in a telecommunication use case such as downlink link adaptation, and Critic regularized regression was the preferred algorithm because it performed well with all three datasets presented in the thesis.
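Of the three algorithms, MARWIL is the most compact to sketch. The PyTorch fragment below shows its characteristic advantage-re-weighted imitation update on a fixed offline batch; the sizes, the simple Monte-Carlo-return advantage estimate, and the loss weighting are illustrative assumptions (the published algorithm also normalizes the advantage).

```python
import torch
import torch.nn as nn

obs_dim, n_actions, beta = 16, 8, 1.0
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
value = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(policy.parameters()) + list(value.parameters()), lr=3e-4)

def marwil_step(obs, acts, returns):
    """obs: (B, obs_dim); acts: (B,) logged actions; returns: (B,) MC returns."""
    v = value(obs).squeeze(-1)
    adv = (returns - v).detach()              # advantage of the logged action
    logp = torch.log_softmax(policy(obs), -1).gather(1, acts.unsqueeze(1)).squeeze(1)
    # Re-weighted imitation: upweight logged actions that beat the value
    # baseline; beta = 0 recovers plain behavior cloning.
    policy_loss = -(torch.exp(beta * adv) * logp).mean()
    value_loss = nn.functional.mse_loss(v, returns)
    loss = policy_loss + 0.5 * value_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# One update on a dummy batch drawn from the offline dataset:
marwil_step(torch.randn(32, 16), torch.randint(0, 8, (32,)), torch.randn(32))
```

The exp-weighting makes visible why MARWIL leans on expert data, as the thesis observed: when the logged behavior is poor, few actions beat the baseline, and the update has little good behavior to imitate.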
30

A Curious Robot Learner for Interactive Goal-Babbling: Strategically Choosing What, How, When and from Whom to Learn

Nguyen, Sao Mai 27 November 2013
The challenges posed by robots operating in human environments on a daily basis and in the long term point out the importance of adaptivity to changes which can be unforeseen at design time. The robot must learn continuously in an open-ended, non-stationary and high-dimensional space. It must be able to know which parts to sample and what kinds of skills are interesting to learn. One way is to decide what to explore by oneself. Another way is to refer to a mentor. We name these two ways of collecting data sampling modes. The first sampling mode corresponds to algorithms developed in the literature to autonomously drive the robot toward interesting parts of the environment or useful kinds of skills. Such algorithms are called artificial curiosity or intrinsic motivation algorithms. The second sampling mode corresponds to social guidance or imitation, where the teacher indicates where to explore as well as where not to explore. Starting from the study of the relationships between these two concurrent methods, we ended up building an algorithmic architecture with a hierarchical learning structure, called Socially Guided Intrinsic Motivation (SGIM). We have built an intrinsically motivated active learner which learns how its actions can produce varied consequences or outcomes. It actively learns online by sampling data which it chooses using several sampling modes. On the meta-level, it actively learns which data collection strategy is most efficient for improving its competence and generalizing from its experience to a wide variety of outcomes. The interactive learner thus learns multiple tasks in a structured manner, discovering by itself developmental sequences.
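The meta-level choice between sampling modes can be illustrated with a small bandit-style loop: track the competence progress each mode yields and favor the currently most profitable one. The epsilon-greedy rule, the progress window, and the toy numbers below are illustrative assumptions, not the SGIM implementation.

```python
import random

# Two ways of collecting data: explore by oneself, or ask the mentor.
MODES = ["autonomous_exploration", "imitation_of_teacher"]

progress = {m: [] for m in MODES}   # recent competence gains per mode

def choose_mode(epsilon=0.2, window=10):
    # Occasionally sample every mode so its progress estimate stays fresh.
    if random.random() < epsilon or any(not progress[m] for m in MODES):
        return random.choice(MODES)
    mean_gain = {m: sum(progress[m][-window:]) / len(progress[m][-window:])
                 for m in MODES}
    return max(mean_gain, key=mean_gain.get)

def record_outcome(mode, competence_before, competence_after):
    progress[mode].append(competence_after - competence_before)

# Toy run: imitation helps early on, autonomous exploration pays off later,
# and the meta-learner shifts modes accordingly.
competence = 0.0
for step in range(100):
    mode = choose_mode()
    gain = (0.02 if mode == "imitation_of_teacher" else 0.01) if step < 50 \
        else (0.005 if mode == "imitation_of_teacher" else 0.03)
    record_outcome(mode, competence, competence + gain)
    competence += gain
```

This captures the abstract's meta-learning claim in miniature: which strategy is most efficient is itself learned from experience, so the learner can discover developmental sequences such as imitating first and exploring autonomously later.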
