221

GRAPH OPTIMIZATION AND PROBABILISTIC SLAM OF MOBILE ROBOTS USING AN RGB-D SENSOR

23 March 2021 (has links)
Mobile robots have a wide range of applications, including autonomous vehicles, industrial robots and unmanned aerial vehicles. Autonomous mobile navigation is a challenging subject due to the high uncertainty and nonlinearity inherent to unstructured environments, robot motion and sensor measurements. To perform autonomous navigation, a robot needs a map of the environment and an estimate of its own pose with respect to the global coordinate system. However, the robot usually has no prior knowledge about the environment and must create a map from sensor information while localizing itself at the same time, a problem called Simultaneous Localization and Mapping (SLAM). SLAM formulations use probabilistic algorithms to handle the uncertainties of the problem, and the graph-based approach is one of the state-of-the-art solutions for SLAM. For many years, laser range finders (LRFs) were the most popular sensor choice for SLAM. However, RGB-D sensors are an interesting alternative due to their low cost. This work presents an RGB-D SLAM implementation with a graph-based probabilistic approach. The proposed methodology uses the Robot Operating System (ROS) as middleware. The implementation is tested on a low-cost robot and on real-world datasets from the literature. The implementation of a pose-graph optimization tool for MATLAB is also presented.
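The graph-based back-end described in this abstract reduces to a least-squares problem over robot poses linked by odometry and loop-closure edges. As an illustrative sketch only (not the thesis's MATLAB tool), here is a translation-only 2-D pose graph solved with NumPy; the function name and the toy measurements are invented for the example:

```python
import numpy as np

def optimize_pose_graph(n_poses, edges):
    """Least-squares pose-graph optimization (translation-only toy).

    edges: list of (i, j, z) where z is the measured 2-D offset x_j - x_i.
    Pose 0 is anchored at the origin with a prior residual.
    Returns an (n_poses, 2) array of optimized positions.
    """
    rows, rhs = [], []
    for i, j, z in edges:
        for d in range(2):                      # x and y components
            r = np.zeros(2 * n_poses)
            r[2 * j + d] = 1.0
            r[2 * i + d] = -1.0
            rows.append(r)
            rhs.append(z[d])
    # Anchor pose 0 at the origin (the graph is otherwise gauge-free).
    for d in range(2):
        r = np.zeros(2 * n_poses)
        r[d] = 1.0
        rows.append(r)
        rhs.append(0.0)
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_poses, 2)

# Odometry says the robot moved +1 m in x three times; a loop closure
# reports that pose 3 actually sits 2.7 m from pose 0. Least squares
# spreads the 0.3 m disagreement evenly over the edges.
edges = [
    (0, 1, np.array([1.0, 0.0])),
    (1, 2, np.array([1.0, 0.0])),
    (2, 3, np.array([1.0, 0.0])),
    (0, 3, np.array([2.7, 0.0])),   # loop-closure constraint
]
poses = optimize_pose_graph(4, edges)
```

Real graph SLAM additionally estimates orientations (making the problem nonlinear), but the structure of the optimization is the same.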
222

A vision system for real-time SLAM applications / Un système de vision pour la localisation et cartographie temps-réel

Nguyen, Dai-Duong 07 December 2018 (has links)
SLAM (Simultaneous Localization and Mapping) plays an important role in several applications such as autonomous robots, smart vehicles and unmanned aerial vehicles (UAVs). Real-time vision-based SLAM has become a subject of widespread interest in many research efforts. One solution to the computational complexity of the image-processing algorithms dedicated to SLAM applications is to perform high- and/or low-level processing on co-processors in order to build a System on Chip (SoC). Heterogeneous architectures have demonstrated their potential as candidates for a system on chip in a hardware/software co-design approach. The aim of this thesis is to propose a vision system implementing a SLAM algorithm on a heterogeneous architecture (CPU-GPU or CPU-FPGA). The study evaluates whether these types of heterogeneous architectures are advantageous, which elementary functions and/or operators should be added on chip, and how to integrate image processing and the SLAM kernel on a heterogeneous architecture (i.e., how to map visual SLAM onto a System on Chip). A visual SLAM system has two parts: the front-end (feature extraction, image processing) and the back-end (SLAM kernel). For the front-end, we studied several feature detection and description algorithms. We developed our own algorithm, the HOOFR (Hessian ORB Overlapped FREAK) extractor, which offers a better compromise between precision and processing time than state-of-the-art methods. It is based on modifications of the ORB (Oriented FAST and Rotated BRIEF) detector and the bio-inspired FREAK (Fast Retina Keypoint) descriptor. The improvements were validated on well-known real datasets. For the back-end, we propose the HOOFR-SLAM Stereo algorithm, which uses images acquired by a stereo camera to perform simultaneous localization and mapping. Its performance was evaluated on several datasets (KITTI, New College, Malaga, MRT, St Lucia, ...). Afterwards, to reach a real-time system, we studied the algorithmic complexity of HOOFR SLAM as well as current hardware architectures dedicated to embedded systems. We used a methodology based on algorithm complexity and functional-block partitioning; the processing time of each block was analyzed taking into account the constraints of the targeted architectures. We implemented HOOFR SLAM on a massively parallel CPU-GPU architecture and evaluated its performance on a powerful workstation and on embedded systems. We propose a system-level architecture and a design methodology to integrate a visual SLAM algorithm on an SoC, highlighting a compromise between versatility, parallelism, processing speed and localization accuracy; a comparison with conventional systems evaluates the defined architecture. To reduce energy consumption, we also studied the implementation of the front-end on an FPGA-based SoC, with the SLAM kernel intended to run on a CPU. We proposed a parallelized architecture using the HLS (high-level synthesis) method and OpenCL programming, and validated it on an Altera Arria 10 SoC board. A comparison with state-of-the-art systems showed that the designed architecture offers better performance and a good compromise between power consumption and processing time.
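The front-end described above pairs a keypoint detector with a binary descriptor (FREAK), and such descriptors are matched by Hamming distance. The following is a minimal, hypothetical sketch of that matching step only, not the HOOFR implementation; the packed 256-bit descriptors and the distance threshold are made up for illustration:

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=64):
    """Brute-force match binary descriptors (rows of packed uint8 bytes).

    For each descriptor in desc_a, find the nearest descriptor in desc_b
    by Hamming distance; keep the pair only if the distance is below
    max_dist. Returns a list of (i, j, dist) tuples.
    """
    matches = []
    for i, da in enumerate(desc_a):
        # XOR then popcount gives the Hamming distance to every row of desc_b.
        dists = np.unpackbits(desc_b ^ da, axis=1).sum(axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j, int(dists[j])))
    return matches

# Three random 256-bit descriptors (32 bytes each); the second query
# differs from desc_b[1] by a single flipped bit.
rng = np.random.default_rng(0)
desc_b = rng.integers(0, 256, size=(3, 32), dtype=np.uint8)
desc_a = desc_b[[0, 1]].copy()
desc_a[1, 0] ^= 0b00000001          # flip one bit
matches = hamming_match(desc_a, desc_b)
```

Production systems typically add a ratio test or cross-check and use hardware popcount, but the distance computation is the same.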
223

Towards visual navigation in dynamic and unknown environment: trajectory learning and following, with detection and tracking of moving objects

Márquez-Gámez, David Alberto 26 October 2012 (has links)
The global objective of this work concerns the navigation of autonomous robots over long routes in dynamic outdoor environments, more precisely the development and evaluation of advanced perception functions embedded on vehicles moving in a convoy formation along an a priori unknown route in urban or natural environments. Three issues are tackled. First, several state-of-the-art methods were integrated to cope with visual mapping and trajectory learning for a vehicle A equipped with a stereovision sensor, moving in a large-scale environment assumed to be static. Two modes are then proposed for the execution of this trajectory by a vehicle B equipped with a single camera: either a delayed mode, in which B initially loads all representations learnt by A and executes the recorded trajectory alone, or a convoy mode, in which B follows A, which sends it the trajectory sections over a communication link as soon as they are learnt. Finally, changing and dynamic environments are considered, dealing with the detection of events from images acquired on a moving vehicle: detection of changes (disappearances or appearances of static objects, typically cars parked in an urban environment) and detection of mobile objects (pedestrians or other vehicles).
224

Outdoor robotic navigation by GPS and monocular vision sensors fusion

Codol, Jean-Marie 15 February 2012 (has links)
We are witnessing nowadays the importation of ICT (Information and Communications Technology) into robotics. In the upcoming years, these technologies will give birth to general-public service robotics. This future, if realised, will be the result of research conducted in several domains: mechatronics, telecommunications, automatic control, signal and image processing, artificial intelligence... One particularly interesting aspect of mobile robotics is the simultaneous localisation and mapping problem: in many cases, to behave intelligently, a mobile robot has to map its environment and localise itself within it. The questions are then: what precision can we aim for in terms of localisation, and at what cost? In this context, one objective of many robotics laboratories, whose results are keenly awaited by industry, is positioning and mapping of the environment that is at once precise, usable everywhere, reliable, low-cost and real-time. The preferred sensors are inexpensive ones, such as a standard GPS receiver (of metric precision) and a set of embeddable payload sensors (e.g. video cameras). These types of sensors constitute the main support of our work. In this thesis, we address the localisation problem of a mobile robot and choose to handle it with a probabilistic approach. The procedure is as follows: we first define our "variables of interest", a set of random variables; we then describe their distribution laws and evolution models; and finally we determine a cost function so as to build an observer (a class of algorithms whose objective is to minimise the cost function). Our contribution consists of using raw GPS measurements (the measurements issued from the code and phase correlation loops, respectively called code and phase pseudorange measurements) for precise low-cost navigation in suburban outdoor environments. By exploiting the integer property of GPS phase ambiguities, we extend the navigation to achieve a precise, low-cost GPS-RTK (Real-Time Kinematic) system in local differential mode. Our propositions have been validated through experiments on our robotic demonstrator.
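The raw-measurement navigation described above builds on the standard single-epoch pseudorange fix: each code pseudorange is modeled as geometric range plus a receiver clock bias. The sketch below is a toy Gauss-Newton solver under that simplified model (no atmospheric or satellite-clock terms), not the thesis's GPS-RTK system; the satellite geometry and all names are invented for the example:

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton fix of receiver position and clock bias from code
    pseudoranges: rho_i = ||sat_i - p|| + b (toy model, no error terms).

    sat_pos: (n, 3) satellite positions. Returns (position, clock bias).
    """
    x = np.zeros(4)                                  # [px, py, pz, bias]
    for _ in range(iters):
        diffs = sat_pos - x[:3]
        ranges = np.linalg.norm(diffs, axis=1)
        residuals = pseudoranges - (ranges + x[3])
        # Jacobian of the predicted pseudorange w.r.t. [p, b].
        J = np.hstack([-diffs / ranges[:, None],
                       np.ones((len(sat_pos), 1))])
        dx, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x = x + dx
    return x[:3], x[3]

# Hypothetical geometry: five pseudo-satellites and a known receiver
# state, to check that the solver recovers position and clock bias.
sats = np.array([[20000.0, 0.0, 0.0], [0.0, 20000.0, 0.0],
                 [0.0, 0.0, 20000.0], [13000.0, 13000.0, 13000.0],
                 [-20000.0, 5000.0, 5000.0]])
p_true, b_true = np.array([1.0, 2.0, 3.0]), 0.5
rho = np.linalg.norm(sats - p_true, axis=1) + b_true
p, b = solve_position(sats, rho)
```

Carrier-phase RTK, as in the thesis, additionally resolves integer phase ambiguities against a base station, which is what pushes precision from metres to centimetres.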
225

CONTEMPORARY COLLECTIVE READING PRACTICES: A COMPARATIVE STUDY IN GROUPS OF YOUNG AND ELDERLY PEOPLE

PHILIPPE CUNHA FERRARI 12 May 2022 (has links)
This thesis studies three contemporary collective and hybrid reading practices: social networks, BookTok and BookTube, and slams. Its main objective is to analyze how young and elderly people interact with these reading practices, observing the differences and similarities between these age groups. A qualitative research method was used, with semi-structured interviews of 40 young people aged 18 to 30 and 40 elderly people aged 60 to 80, both groups holding or pursuing a higher-education degree and residing in Rio de Janeiro. These three reading practices use online and offline formats and allow shared reading, with the possibility of exchanges and comments between readers. The virtual and the face-to-face in contemporary reading practices appear more integrated, accumulated, linked and therefore also entangled with each other. The contents read remain diverse, covering all subjects, but the contemporary way of reading takes on new forms: new reading practices combine and agglutinate in contemporaneity as varied, coexisting ways of reading.
226

REAL-TIME METRIC-SEMANTIC VISUAL SLAM FOR DYNAMIC AND CHANGING ENVIRONMENTS

JOAO CARLOS VIRGOLINO SOARES 05 July 2022 (has links)
Mobile robots have become increasingly important in modern society, as they can perform tasks that are tedious or too repetitive for humans, such as cleaning and patrolling. Most of these tasks require a certain level of autonomy. To be fully autonomous and perform navigation, a robot needs a map of the environment and its pose within this map. The Simultaneous Localization and Mapping (SLAM) problem is the task of estimating both the map and the localization simultaneously, using only sensor measurements. Visual SLAM performs SLAM using only cameras for sensing. The main advantage of using cameras is the possibility of solving computer vision problems that provide high-level information about the scene, such as object detection. However, most visual SLAM systems assume a static environment, which limits their applicability in real-world scenarios. This thesis presents solutions to the visual SLAM problem in dynamic and changing environments. A custom deep-learning-based people detector allows our solution to deal with crowded environments, and a combination of a robust object tracker and a point-filtering algorithm enables our visual SLAM system to perform well in highly dynamic environments containing moving objects. Furthermore, this thesis proposes a visual SLAM method for changing environments, i.e., scenes where objects are moved after the robot has already mapped them. All proposed methods are tested on public datasets and in experiments, and compared with several state-of-the-art methods, achieving high accuracy in real time.
227

Improving business performance with organizational learning: A case study of factors affecting organizational learning and its relationship with business performance

BENGTSSON, LUDVIG, SKOG, PONTUS January 2018 (has links)
This thesis is an intra-organizational case study that investigates the concept of organizational learning and its relationship with business performance, and explores the factors affecting organizational learning. A mixed-methods approach is used, combining quantitative data from a survey instrument called the Strategic Learning Assessment Map (SLAM) with qualitative data from interviews and observations. At the studied organization, the organizational-level knowledge stock has the strongest association with business performance, followed by the group-level knowledge stock; the individual-level knowledge stock and misalignment do not reach reasonable significance. Organizational culture and information-processing capacity were identified as the main barriers to organizational learning. Furthermore, individuals at the studied organization acquire knowledge in informal ways and learn routines over heuristics, which were also identified as main factors affecting business performance.
228

AUV SLAM constraint formation using side scan sonar

Schouten, Marco January 2022 (has links)
Autonomous underwater vehicle (AUV) navigation has long been a challenging problem, due to the drift present in underwater environments and the lack of precise localisation systems such as GPS. The uncertainty of the vehicle's pose therefore grows with the mission's duration. This research investigates methods to form constraints on the vehicle's pose throughout typical surveys. Current underwater navigation relies on acoustic sensors. Side-scan sonar (SSS) is cheaper than a multibeam echosounder (MBES) but generates 2D intensity images of wide sections of the seafloor instead of 3D representations. The methodology consists in extracting information from pairs of side-scan sonar images representing overlapping portions of the seafloor and computing the sensor pose transformation between the two image reference frames to generate constraints on the pose. The chosen approach relies on optimisation methods within a Simultaneous Localisation and Mapping (SLAM) framework to directly correct the trajectory and provide the best estimate of the AUV pose. The optimisation system was tested on simulated data as a proof of concept and, as an experimental trial, on an annotated dataset of overlapping side-scan sonar images provided by SMaRC. The simulated results indicate that the AUV pose error can be reduced by optimisation, even with various noise levels in the measurements.
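The constraint formation described above ultimately yields a relative sensor-pose transform between the frames of two overlapping images, which becomes an edge in the SLAM graph. As a hedged illustration of what such an edge looks like (the sonar-image registration step itself is omitted), here is the relative SE(2) transform between two hypothetical survey poses:

```python
import numpy as np

def se2_mat(x, y, theta):
    """Homogeneous 3x3 matrix for a 2-D pose (x, y, heading theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def relative_constraint(pose_i, pose_j):
    """Relative transform T_i^{-1} T_j between two vehicle poses: the
    quantity an image-registration step would report as a graph edge,
    expressed in the frame of pose i."""
    T = np.linalg.inv(se2_mat(*pose_i)) @ se2_mat(*pose_j)
    return T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])

# Hypothetical poses on a survey line, both heading +y (theta = pi/2):
# pose j sits 1 m further along the heading, i.e. 1 m "forward" of pose i.
dx, dy, dth = relative_constraint((1.0, 2.0, np.pi / 2),
                                  (1.0, 3.0, np.pi / 2))
```

In a real pipeline this edge, with an associated covariance, is handed to the pose-graph optimiser alongside the dead-reckoning edges.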
229

Registration and Localization of Unknown Moving Objects in Markerless Monocular SLAM

Troutman, Blake 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Simultaneous localization and mapping (SLAM) is a general device localization technique that uses real-time sensor measurements to develop a virtualization of the sensor's environment while also using this growing virtualization to determine the position and orientation of the sensor. This is useful for augmented reality (AR), in which a user looks through a head-mounted display (HMD) or viewfinder to see virtual components integrated into the real world. Visual SLAM (i.e., SLAM in which the sensor is an optical camera) is used in AR to determine the exact device/headset movement so that the virtual components can be accurately redrawn to the screen, matching the perceived motion of the world around the user as the user moves the device/headset. However, many potential AR applications may need access to more than device localization data in order to be useful; they may need to leverage environment data as well. Additionally, most SLAM solutions make the naive assumption that the environment surrounding the system is completely static (non-moving). Given these circumstances, it is clear that AR may benefit substantially from a SLAM solution that detects objects that move in the scene and ultimately provides localization data for each of these objects. This problem is known as the dynamic SLAM problem. Current attempts to address the dynamic SLAM problem often use machine learning to develop models that identify the parts of the camera image that belong to one of many classes of potentially moving objects. The limitation of these approaches is that it is impractical to train models to identify every possible object that moves; additionally, some potentially moving objects may be static in the scene, which these approaches often do not account for.
Other attempts to address the dynamic SLAM problem also localize the moving objects they detect, but these systems almost always rely on depth sensors or stereo camera configurations, which have significant limitations in real-world use cases. This dissertation presents a novel approach for registering and localizing unknown moving objects in the context of markerless, monocular, keyframe-based SLAM, with no prior information required about object structure, appearance, or existence. This work also details a novel deep learning solution for determining SLAM map initialization suitability in structure-from-motion-based initialization approaches. The dissertation goes on to validate these approaches by implementing them in a markerless, monocular SLAM system called LUMO-SLAM, built from the ground up to demonstrate this approach to unknown moving object registration and localization. Results are collected for the LUMO-SLAM system, addressing the accuracy of its camera localization estimates, the accuracy of its moving object localization estimates, and the consistency with which it registers moving objects in the scene. These results show that this solution to the dynamic SLAM problem, though not a practical solution for all use cases, can register and localize unknown moving objects accurately enough to be useful for some applications of AR without compromising the system's ability to also perform accurate camera localization.
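One common geometric cue for detecting moving points in monocular SLAM, independent of the specific method in this dissertation, is reprojection error: after the camera pose has been robustly estimated from (mostly static) map points, points whose observations remain far from their predicted image locations are candidates for belonging to a moving object. A minimal sketch, with assumed intrinsics and a hypothetical pixel threshold:

```python
import numpy as np

# Assumed pinhole intrinsics (not from the dissertation).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(K, R, t, X):
    """Project a 3D world point into the image (pinhole model)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# Estimated camera pose: world frame coincides with the camera frame here.
R, t = np.eye(3), np.zeros(3)

points = np.array([[0.0, 0.0, 5.0],    # static map point
                   [1.0, 0.0, 5.0],    # static map point
                   [0.5, 0.5, 4.0]])   # this point actually moved in the world
observations = np.array([project(K, R, t, p) for p in points])
observations[2] += np.array([12.0, -9.0])  # observed at its new image position

REPROJ_THRESH_PX = 3.0  # hypothetical tuning parameter
errors = np.array([np.linalg.norm(project(K, R, t, p) - obs)
                   for p, obs in zip(points, observations)])
moving_mask = errors > REPROJ_THRESH_PX  # flags the third point as moving
```

In practice such flagged points would then be clustered and tracked as object hypotheses; this sketch only shows the per-point classification step.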
230

The Interconnectivity Between SLAM and Autonomous Exploration : Investigation Through Integration / Interaktionen mellan SLAM och autonom utforskning : Undersökning genom integration

Ívarsson, Elliði January 2023 (has links)
Two crucial functionalities of a fully autonomous robotic agent are localization and navigation. The problem of enabling an agent to localize itself in an unknown environment is an extensive and widely studied topic. One of the main areas of this topic focuses on Simultaneous Localization and Mapping (SLAM). Many advancements in this field have been made over the years, resulting in robust and accurate localization systems. Navigation has also progressed substantially throughout the years, resulting in efficient path planning algorithms and effective exploration strategies. Although an abundance of research exists on these two topics, far less exists on the combination of the two and their effect on each other. Therefore, the aim of this thesis was to integrate two state-of-the-art components from each respective area of research into a functioning system. This was done with the aim of studying the interconnectivity between these components while also documenting the integration process and identifying important considerations for similar future endeavours. Evaluations of the system showed that it performed with surprisingly good accuracy, although it was severely lacking in robustness. Integration efforts showed good promise; however, it is clear that the two fields are heavily linked and need to be considered in a mutual context when it comes to a complete integrated system. / In robotics, capabilities such as localization and navigation are prerequisites for a fully autonomous agent. Enabling an agent to localize itself in an unknown environment is an extensive and widely studied topic, and a main focus within it is Simultaneous Localization and Mapping (SLAM), which refers to localization performed in parallel with active mapping of the surroundings. Great progress has been made in this area over the years, resulting in robust and accurate robot localization systems.
Corresponding advances in robot navigation have moreover enabled efficient algorithms and strategies for path planning and autonomous exploration. Despite the large body of research that exists on localization and navigation separately, the interplay between the two areas, and the possibility of coupling the two aspects, is less studied. To investigate this, the goal of this thesis was therefore to integrate two state-of-the-art systems from the respective areas into one interconnected system. Beyond allowing the capabilities and performance of the integrated system to be studied, the study was carried out with the intention of documenting the integration process and identifying important insights about the integration, in order to support future studies on the interplay between localization and navigation. Evaluations of the integrated system showed a higher level of accuracy than expected, but found a marked lack of robustness. The results of the integration work are considered promising, and above all highlight that there is a strong link between the two areas and that they should be considered in a shared context when intended for use in a complete integrated system.
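A standard way autonomous exploration couples back into SLAM is frontier-based exploration: the SLAM map supplies an occupancy grid, and exploration targets are chosen among "frontier" cells, i.e. known-free cells bordering unexplored space. A minimal sketch of frontier detection, with a hand-made grid (0 = free, 1 = occupied, -1 = unknown); it is an illustration of the general strategy, not the system integrated in this thesis:

```python
import numpy as np

def find_frontiers(grid):
    """Return (row, col) of free cells with at least one unknown 4-neighbour."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:          # only known-free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.array([[0,  0, -1],
                 [0,  1, -1],
                 [0,  0,  0]])
targets = find_frontiers(grid)  # free cells bordering the unknown right edge
```

In an integrated system, a planner would pick one of these cells (e.g. the nearest or most informative), navigate toward it, and the SLAM module would extend the map along the way, which is exactly the feedback loop whose interconnectivity the thesis investigates.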
