21

Prostředí pro vývoj modulárních řídících systémů v robotice / Development Environment for Modular Control Systems in Robotics

Petrůšek, Tomáš January 2010 (has links)
The subject of the thesis is the design and implementation of a modular control system environment, which could be used in robotics. Both autonomous and guided robots are supported. The higher-level software components like localization, steering, decision making, etc. are effectively separated from the underlying hardware devices and their communication protocols in the environment. Based on the layered design, hardware-independent algorithms can be implemented. These can run on different hardware platforms just by exchanging specific device drivers. Written in C++ using standard libraries, the final software is highly portable and extensible. Support for new platforms and hardware modules can be implemented easily. The whole system was tested on two robots and the particular instances of the systems built using this development environment are included in the solution and partially described in the thesis.
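The layered idea described above, higher-level algorithms talking only to abstract device interfaces so that drivers can be swapped per platform, can be illustrated with a small sketch. The thesis implements this in C++; the Python below, and every class and method name in it, is an illustrative assumption rather than code from the thesis.

```python
from abc import ABC, abstractmethod

class RangeSensor(ABC):
    """Hardware-independent sensor interface; a concrete driver hides the device protocol."""
    @abstractmethod
    def read_distances(self) -> list[float]:
        ...

class DriveUnit(ABC):
    """Hardware-independent actuator interface."""
    @abstractmethod
    def set_speeds(self, left: float, right: float) -> None:
        ...

class SimulatedSensor(RangeSensor):
    def read_distances(self) -> list[float]:
        return [1.0, 0.4, 1.2]          # stand-in for a real driver's protocol handling

class ConsoleDrive(DriveUnit):
    def set_speeds(self, left: float, right: float) -> None:
        print(f"wheel speeds: left={left:.1f} right={right:.1f}")

def avoid_obstacles(sensor: RangeSensor, drive: DriveUnit) -> None:
    """Higher-level logic written only against the interfaces, so it runs unchanged
    on any platform that provides matching device drivers."""
    left, front, right = sensor.read_distances()
    if front < 0.5:
        drive.set_speeds(0.3, -0.3)     # obstacle ahead: rotate in place
    else:
        drive.set_speeds(0.5, 0.5)      # path clear: drive straight

avoid_obstacles(SimulatedSensor(), ConsoleDrive())
```

Porting such a system to new hardware would then amount to writing new driver classes against the same interfaces, which is the portability property the abstract claims.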
22

The Hippocampus code : a computational study of the structure and function of the hippocampus

Rennó Costa, César 17 September 2012 (has links)
There is no consensual understanding of what the activity of hippocampal neurons represents. While experiments with humans foster a dominant view of the hippocampus as an episodic memory system, experiments with rodents promote its role as a spatial cognitive system. Although there is abundant evidence pointing to an overlap between these two theories, the dissociation is sustained by conflicting physiological data. This thesis proposes that the functional role of the hippocampus should be analyzed in terms of its structure and function rather than by correlating neuronal activity with behavioral performance. The identification of the hippocampus code, i.e., the set of computational principles underlying the input-output transformations of neural activity, might ultimately provide a unifying understanding of its role. In this thesis we present a theoretical model that quantitatively describes and interprets the selectivity of regions of the hippocampus to spatial and non-spatial variables observed in experiments with rats. The results suggest that the multiple aspects of memory expressed in human and rodent data derive from similar principles. This approach suggests new principles for memory, pattern completion and plasticity. In addition, by creating a causal tie between the neural circuitry and behavior through a robotic control framework, we show that the conjunctive nature of neural selectivity observed in the hippocampus is needed for effective problem solving in real-world tasks such as foraging. Altogether, these results advance the concept that the hippocampal code is generic to the different aspects of memory highlighted in the literature.
23

Stratégie de navigation sûre dans un environnement industriel partiellement connu en présence d’activité humaine / Safe navigation strategy in a partially known industrial environment in the presence of human activity

Burtin, Gabriel Louis 26 June 2019 (has links)
In this work, we propose a safe system for robot navigation in a structured indoor environment. The main idea is to combine two sensors (a lidar and a monocular camera) to ensure fast computation and robustness. The sensors were chosen because they rely on different physical principles, so they are unlikely to be disturbed by the same source at the same time. The localization algorithm is fast and efficient while retaining a degraded mode in case one sensor fails. To reach this objective, we optimized the data processing at several levels, such as the amount of data to process and the algorithms themselves. We apply a polygonal approximation to the 2D lidar data and a vertical-segment detection to the colour image; fusing these two sources in an extended Kalman filter yields a reliable localization. If the lidar fails, the Kalman filter can still operate; if the camera fails, the robot can fall back on lidar scan matching. The data from both sensors also serve other purposes: the lidar identifies doors, potential points of encounter with humans, and the camera supports pedestrian detection and tracking. The work was mainly developed and validated in an advanced robotic simulator (4DV-Sim) and then confirmed by real-world experiments. This methodology allowed us both to develop our ideas and to validate and improve the usefulness of this robotic tool.
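A rough picture of how such lidar/camera fusion is typically organized can be given with a generic extended Kalman filter skeleton. This is only a sketch under assumptions: the state layout, the motion model f with Jacobian F, and the measurement models h with Jacobians H are placeholders, not the segment-based models of the thesis.

```python
import numpy as np

class SimpleEKF:
    """Generic EKF predict/update skeleton (state and models are placeholders)."""

    def __init__(self, x0, P0):
        self.x = np.asarray(x0, dtype=float)   # state estimate, e.g. (x, y, heading)
        self.P = np.asarray(P0, dtype=float)   # state covariance

    def predict(self, f, F, Q):
        self.x = f(self.x)                     # propagate state through the motion model
        self.P = F @ self.P @ F.T + Q          # propagate covariance with its Jacobian

    def update(self, z, h, H, R):
        y = z - h(self.x)                      # innovation: measurement minus prediction
        S = H @ self.P @ H.T + R               # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```

In a setup like the one described, one update call would presumably consume the lidar-segment measurements and another the camera's vertical-line measurements, so losing one sensor simply drops its update step while prediction and the other sensor's updates continue, which is the degraded mode the abstract mentions.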
24

Domain Adaptation to Meet the Reality-Gap from Simulation to Reality

Forsberg, Fanny January 2022 (has links)
Being able to train machine learning models on simulated data is of great interest in several applications, one of them being autonomous driving, because it is easier to collect large labeled datasets and to perform reinforcement learning in simulation. However, transferring the learned models to the real world can be hard due to differences between simulation and reality, for example in materials, textures, lighting and content. One approach is domain adaptation: making the simulations as similar as possible to reality. The thesis's main focus is to investigate domain adaptation as a way to close the reality-gap, and to compare it with an alternative method, domain randomization. Two domain adaptation methods, one adapting the simulated data to reality and the other adapting the test data to simulation, are compared with domain randomization. They are evaluated with a classifier making decisions for a robot car while driving in reality. The evaluation consists of a quantitative part on real-world data and a qualitative part observing how well the robot drives and avoids obstacles. The results show that the reality-gap is very large and that the examined methods reduce it, with the two domain adaptation methods giving the largest decrease. However, none of them led to satisfactory driving.
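For contrast with the adaptation methods, the domain-randomization idea mentioned above, perturbing the appearance of simulated images so a model stops relying on simulator-specific appearance, can be sketched in a few lines. The transforms and magnitudes below are assumptions for illustration, not the thesis's pipeline.

```python
import numpy as np

def randomize_image(img, rng):
    """Randomly perturb a simulated image (H x W x 3, floats in [0, 1]) so a model
    trained on it becomes less sensitive to appearance differences from reality."""
    img = img * rng.uniform(0.6, 1.4)                                    # random brightness
    img = (img - img.mean()) * rng.uniform(0.7, 1.3) + img.mean()        # random contrast
    img = img + rng.normal(0.0, rng.uniform(0.0, 0.05), size=img.shape)  # sensor-like noise
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
simulated_batch = rng.random((4, 120, 160, 3))        # stand-in for rendered simulator frames
augmented = [randomize_image(frame, rng) for frame in simulated_batch]
```

The design trade-off the thesis compares is roughly this: randomization widens the training distribution in the hope of covering reality, whereas the adaptation methods explicitly translate one domain toward the other.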
25

Robot Proficiency Self-Assessment Using Assumption-Alignment Tracking

Cao, Xuan 01 April 2024 (has links) (PDF)
A robot is proficient if its performance on its task(s) satisfies a specific standard. While the design of autonomous robots often emphasizes such proficiency, another important attribute of autonomous robot systems is their ability to evaluate their own proficiency. A robot should be able to conduct proficiency self-assessment (PSA), i.e., assess how well it can perform a task before, during, and after it has attempted the task. We propose the assumption-alignment tracking (AAT) method, which provides time-indexed assessments of the veracity of the assumptions made by the robot's generators, for designing autonomous robots that can effectively evaluate their own performance. AAT can be considered a general framework for using robot sensory data to extract useful features, which are then used to build data-driven PSA models. We develop various AAT-based data-driven approaches to PSA from different perspectives. First, we use AAT for estimating robot performance. AAT features encode how the robot's current running condition varies from the normal condition, which correlates with the deviation between the robot's current performance and its normal performance. We use the k-nearest neighbors algorithm to model that correlation. Second, AAT features are used for anomaly detection. We treat anomaly detection as a one-class classification problem where only data from the robot operating in normal conditions are used in training, decreasing the burden of acquiring data in various abnormal conditions. The cluster boundary of data points from normal conditions, which serves as the decision boundary between normal and abnormal conditions, can be identified by mainstream one-class classification algorithms. Third, we improve PSA models that predict robot success/failure by introducing meta-PSA models that assess the correctness of the PSA models. The probability that a PSA model's prediction is correct is conditioned on four features: 1) the mean distance from a test sample to its nearest neighbors in the training set; 2) the probability of success predicted by the PSA model; 3) the ratio between the robot's current performance and its performance standard; and 4) the percentage of the task the robot has already completed. Meta-PSA models trained on these four features using a random forest algorithm improve PSA models with respect to both discriminability and calibration. Finally, we explore how AAT can be used to generate a new type of explanation of robot behavior/policy from the perspective of a robot's proficiency. AAT provides three pieces of information for explanation generation: (1) veracity assessment of the assumptions on which the robot's generators rely; (2) proficiency assessment measured by the probability that the robot will successfully accomplish its task; and (3) counterfactual proficiency assessment computed with the veracity of some assumptions varied hypothetically. The information provided by AAT fits the situation-awareness-based framework for explainable artificial intelligence. The efficacy of AAT is comprehensively evaluated using robot systems with a variety of robot types, generators, hardware, and tasks, including a simulated robot navigating a maze-based (discrete-time) Markov chain environment, a simulated robot navigating a continuous environment, and both a simulated and a real-world robot arranging blocks of different shapes and colors in a specific order on a table.
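The first two uses of AAT features described above, k-nearest-neighbor performance estimation and one-class anomaly detection trained only on normal-condition data, map onto standard library components. The sketch below is a minimal illustration under assumptions: the AAT feature vectors and performance labels are synthetic stand-ins, and the specific estimators and hyperparameters are not taken from the dissertation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import OneClassSVM

# Hypothetical AAT feature matrix: one row of assumption-veracity features per time step,
# collected while the robot ran in normal conditions, plus the measured performance.
rng = np.random.default_rng(0)
train_features = rng.random((500, 8))
train_performance = rng.random(500)

# 1) Performance estimation: nearby feature vectors should correspond to similar performance,
#    so a k-nearest-neighbor regressor models the correlation directly.
perf_model = KNeighborsRegressor(n_neighbors=5).fit(train_features, train_performance)

# 2) Anomaly detection as one-class classification: only normal-condition data are needed;
#    the learned boundary separates normal from abnormal operating conditions.
anomaly_model = OneClassSVM(nu=0.05).fit(train_features)

current = rng.random((1, 8))                          # AAT features at the current time step
predicted_performance = perf_model.predict(current)[0]
is_normal = anomaly_model.predict(current)[0] == 1    # +1 = inside the normal-condition boundary
```

The appeal of the one-class formulation noted in the abstract shows up here: no abnormal-condition data are required at training time, only a boundary around what "normal" looks like.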
26

Machine Learning for Road Following by Autonomous Mobile Robots

Warren, Emily Amanda 25 September 2008 (has links)
No description available.
27

ODAR : Obstacle Detecting Autonomous Robot / ODAR : Autonom hinderupptäckande robot

HALTORP, EMILIA, BREDHE, JOHANNA January 2020 (has links)
The industry for autonomous vehicles is growing. According to studies, nine out of ten traffic accidents are due to human error; if autonomous cars become safe enough, they have the potential to save thousands of lives every year. Obstacle-detecting autonomous robots can also be used in other situations, for example where the terrain is inaccessible to humans for various reasons. In this project, a self-navigating, obstacle-detecting robot was built. The robot uses ultrasonic sensors to detect and avoid obstacles. A navigation algorithm was created and implemented on the Arduino. Two servo motors drive the two rear wheels, and a caster wheel sits at the front; this layout made it possible to implement differential drive, which enabled quick and tight turns. Tests showed that the robot could successfully navigate a room with various obstacles placed in it. The placement of the sensors worked well considering the number of sensors used; obstacle detection could have been improved with more sensors. The tests also confirmed that ultrasonic sensors work well for this kind of task.
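The core of such a navigation algorithm, mapping ultrasonic distance readings to differential-drive wheel speeds, is small enough to sketch. The project itself runs equivalent logic on an Arduino in C/C++; the Python below is only an illustration, and the three-sensor layout, thresholds, and speed values are assumptions rather than the report's actual parameters.

```python
def steer(left_cm: float, front_cm: float, right_cm: float, safe_cm: float = 30.0):
    """Return (left_speed, right_speed) for a differential-drive robot
    given three ultrasonic distance readings in centimetres."""
    if front_cm > safe_cm:
        return 1.0, 1.0        # path clear: both wheels forward
    if left_cm > right_cm:
        return -0.5, 0.5       # more room on the left: spin left in place
    return 0.5, -0.5           # otherwise spin right in place

print(steer(80.0, 20.0, 45.0))  # front blocked, right side clearer -> spin right
```

Because the two driven wheels can turn in opposite directions, the robot can rotate on the spot, which is what makes the quick, tight turns mentioned in the abstract possible.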
28

Obstacle Avoidance for an Autonomous Robot Car using Deep Learning / En autonom robotbil undviker hinder med hjälp av djupinlärning

Norén, Karl January 2019 (has links)
The focus of this study was deep learning. A small, autonomous robot car was used for obstacle avoidance experiments. The robot car used a camera for taking images of its surroundings, and a convolutional neural network used the images for obstacle detection. The available dataset of 31 022 images was used to train the Xception model. We compared two different implementations for making the robot car avoid obstacles: mapping image classes directly to steering commands served as the reference implementation, while the main implementation of this study separated obstacle detection and steering logic into different modules. The former reached an obstacle avoidance ratio of 80 %, the latter 88 %. Different hyperparameters were examined during training; we found the number of frozen layers and the number of epochs important to optimize. Weights were loaded from ImageNet before training, and the frozen-layers setting decided how many layers remained trainable after that; training all layers (no frozen layers) worked best. For the number of epochs, training for 10-25 epochs proved important. The best model used no frozen layers, trained for 21 epochs, and reached a test accuracy of 85.2 %.
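The training setup described, Xception initialized from ImageNet weights, all layers left trainable, and roughly 21 epochs of fine-tuning, corresponds to a standard Keras transfer-learning recipe. The sketch below is one plausible version under assumptions: the number of classes, input size, optimizer, and data pipeline are illustrative, not the thesis's exact configuration.

```python
import tensorflow as tf

# Xception backbone with ImageNet weights and no classification head.
base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      pooling="avg", input_shape=(299, 299, 3))
base.trainable = True                      # "no frozen layers": fine-tune every layer

num_classes = 3                            # assumption: number of obstacle/steering classes
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Inputs should be preprocessed with tf.keras.applications.xception.preprocess_input.
# model.fit(train_ds, validation_data=val_ds, epochs=21)   # ~10-25 epochs reported as the sweet spot
```

The frozen-layers hyperparameter the abstract mentions would simply correspond to setting `layer.trainable = False` on the first N layers of `base` instead of fine-tuning everything.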
30

Experiments in off-policy reinforcement learning with the GQ(lambda) algorithm

Delp, Michael 06 1900 (has links)
Off-policy reinforcement learning is useful in many contexts. Maei, Sutton, Szepesvari, and others have recently introduced a new class of algorithms, the most advanced of which is GQ(lambda), for off-policy reinforcement learning. These algorithms are the first stable methods for general off-policy learning whose computational complexity scales linearly with the number of parameters, thereby making them potentially applicable to large applications involving function approximation. Despite these promising theoretical properties, these algorithms had received no significant empirical test of their effectiveness in off-policy settings prior to the current work. Here, GQ(lambda) is applied to a variety of prediction and control domains, including on a mobile robot, where it is able to learn multiple optimal policies in parallel from random actions. Overall, we find GQ(lambda) to be a promising algorithm for use with large real-world continuous learning tasks. We believe it could be the base algorithm of an autonomous sensorimotor robot.
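To give a flavour of this algorithm family, the sketch below shows a single update of the lambda = 0 member: a gradient-TD (TDC-style) update on linear action-value features with a secondary weight vector w, learning about a fixed target policy from behaviour-generated data. It is a simplified sketch, not the full GQ(lambda) algorithm, which additionally maintains importance-weighted eligibility traces; the update form and step sizes here should be checked against the original papers rather than taken as definitive.

```python
import numpy as np

def gradient_td_step(theta, w, phi, r, next_phis, target_probs,
                     gamma=0.99, alpha=0.05, beta=0.01):
    """One lambda = 0 gradient-TD update for off-policy action-value prediction.

    theta        : main weight vector (the value estimate is theta . phi)
    w            : secondary weight vector used for the gradient correction
    phi          : feature vector of the state-action pair actually taken
    next_phis    : matrix with one feature row per available action in the next state
    target_probs : target-policy probabilities over those next actions
    """
    phi_bar = target_probs @ next_phis                   # expected next feature under the target policy
    delta = r + gamma * theta @ phi_bar - theta @ phi    # TD error toward the target policy
    theta = theta + alpha * (delta * phi - gamma * (w @ phi) * phi_bar)  # corrected TD update
    w = w + beta * (delta - w @ phi) * phi               # secondary weights track the expected TD error
    return theta, w
```

The per-step cost is a handful of vector operations, which is the linear-in-parameters complexity the abstract highlights, and because the TD error is formed from the target policy's expected next feature, the update can learn about one policy while actions come from another, even purely random, behaviour policy.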
