41

Odometria visual baseada em técnicas de structure from motion / Visual odometry based on structure from motion techniques

Silva, Bruno Marques Ferreira da 15 February 2011 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With advances in Computer Vision algorithms and in computer processing power, the subarea known as Structure from Motion (SFM) began to supply mathematical tools for building localization systems for robotics and Augmented Reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position that uses a single previously calibrated camera as positional sensor and is based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as a probabilistic model of camera state transitions unnecessary. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared against localization data gathered from a mobile robotic platform.
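The dead-reckoning core of such an SFM pipeline is the composition of frame-to-frame relative poses into a global trajectory. A minimal sketch in pure Python (a hypothetical 2D/SE(2) simplification of the 3D case, with made-up motions, not the thesis's actual pipeline):

```python
import math

def compose(pose, delta):
    """Compose a global SE(2) pose with a relative motion (dx, dy, dtheta)
    expressed in the current camera frame."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the relative translation into the world frame, then accumulate.
    gx = x + dx * math.cos(th) - dy * math.sin(th)
    gy = y + dx * math.sin(th) + dy * math.cos(th)
    return (gx, gy, th + dth)

def integrate(relative_motions, start=(0.0, 0.0, 0.0)):
    """Chain frame-to-frame motions into a trajectory (the visual odometry core)."""
    trajectory = [start]
    for delta in relative_motions:
        trajectory.append(compose(trajectory[-1], delta))
    return trajectory

# Four 1 m forward moves, each followed by a 90-degree left turn: a closed square.
moves = [(1.0, 0.0, math.pi / 2)] * 4
path = integrate(moves)
end = path[-1]  # should return (approximately) to the origin
```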
42

Terrain sensor for semi active suspension in CV90

Nordin, Fredrik January 2017 (has links)
The combat vehicle CV90 has a semi-active hydraulic suspension system that uses inertial measurements for regulation in order to improve accessibility. To further improve performance, measurements of the upcoming terrain can be used, for example, to prepare for impacts. This master's thesis investigates the ability of existing and new sensors to provide these measurements. Two test runs were performed, with very different conditions and outcomes. The results suggest that a sweeping LIDAR was the most accurate and robust solution. However, promising results were also achieved with an infrared thermal camera using a very recent visual odometry algorithm, especially given that no effort was put into tuning that algorithm's parameters.
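The terrain-preview idea can be illustrated with a small sketch (pure Python; the geometry and all numbers are hypothetical, not the CV90 setup): projecting one forward-looking range return into look-ahead distance and terrain height, and computing how much time the suspension has to react.

```python
import math

def terrain_point(range_m, pitch_rad, sensor_height):
    """Project a single forward-looking range return (hypothetical geometry:
    sensor mounted sensor_height above a flat reference, beam pitched down
    by pitch_rad) into look-ahead distance and terrain height."""
    ahead = range_m * math.cos(pitch_rad)   # horizontal distance to the hit point
    drop = range_m * math.sin(pitch_rad)    # vertical drop of the beam
    height = sensor_height - drop           # terrain height vs the reference plane
    return ahead, height

def preview_time(ahead_m, speed_mps):
    """Time until the vehicle reaches the measured point: the budget the
    semi-active suspension has to prepare for an impact."""
    return ahead_m / speed_mps

# A 4 m return at 30 degrees down from a sensor 2 m up hits flat ground.
ahead, height = terrain_point(range_m=4.0, pitch_rad=math.radians(30), sensor_height=2.0)
t = preview_time(ahead, speed_mps=2.0)
```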
43

Um sistema de localização robótica para ambientes internos baseado em redes neurais. / An indoor robot localization system based on neural networks.

Vitor Luiz Martinez Sanches 15 April 2009 (has links)
In this research, aspects related to the robot localization problem have been studied, and a robot localization system has been built. To determine the localization of a mobile robot with respect to a topological map of its environment, a deterministic solution is proposed. This solution is applied to provide localization for position-tracking problems, although it is also of interest to observe the performance of the proposed method on global localization problems. The proposed system is based on feature vectors composed of momentary measurements extracted from the sensory data of the robot's perception. Estimates made from odometry, sonar, and magnetic compass readings are used together in these feature vectors in order to characterize the scenes observed by the robot. The localization problem is thus solved as a pattern recognition problem. The topology of the environment is known, and the correlation between each place in this environment and its features is stored using artificial neural networks. The localization system was experimentally evaluated, in the field, on a real robotic platform; promising results were obtained and are presented.
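The pattern-recognition formulation can be illustrated by reducing the neural network to its simplest stand-in, a nearest-neighbour classifier over feature vectors (pure Python; the place names and sensor values are invented for illustration, not the thesis's data):

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def localize(observation, place_features):
    """Classify an observation vector (e.g. sonar ranges plus a compass
    heading) to the closest stored place. The thesis solves this matching
    with neural networks; nearest-neighbour is a toy substitute."""
    return min(place_features, key=lambda p: euclidean(observation, place_features[p]))

# Toy topological map: three places, each described by four sonar ranges (m)
# and a heading (rad). All values are made up.
places = {
    "corridor": [2.0, 0.5, 2.0, 0.5, 0.0],
    "room_a":   [3.5, 3.0, 2.8, 3.2, 1.6],
    "doorway":  [0.8, 0.4, 0.9, 0.5, 0.0],
}
guess = localize([2.1, 0.6, 1.9, 0.5, 0.1], places)  # -> "corridor"
```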
44

Localização baseada em odometria visual / Localization based on visual odometry

André Toshio Nogueira Nishitani 26 June 2015 (has links)
The localization problem consists of estimating the position of a robot with respect to some external reference and is an essential part of the navigation systems of robots and autonomous vehicles. Localization based on visual odometry stands out against encoder-based odometry in estimating the rotation and direction of the robot's movement. This kind of approach is also an attractive choice for the control systems of autonomous vehicles in urban environments, where visual information is needed anyway to extract semantic information from street signs, traffic lights, and other markings. In this context, this work proposes the development of a visual odometry system that uses visual information from a monocular camera, based on 3D reconstruction, to estimate the vehicle's pose. The absolute scale problem, inherent to the use of monocular cameras, is solved using prior knowledge of the metric relation between image points and world points lying on a common plane.
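The scale-recovery step admits a very small sketch (pure Python, invented numbers): a known metric distance between two reconstructed points on a common plane fixes the free scale of a monocular reconstruction, which can then be applied to the estimated translation.

```python
def recover_scale(known_metric_dist, reconstructed_dist):
    """Monocular SfM recovers geometry only up to an unknown scale; one known
    metric distance between two reconstructed points fixes it."""
    return known_metric_dist / reconstructed_dist

def apply_scale(translation, scale):
    """Rescale an up-to-scale translation vector into metric units."""
    return [t * scale for t in translation]

# Hypothetical numbers: two lane markings reconstructed 0.4 units apart
# are known to be 3.0 m apart in the world.
s = recover_scale(3.0, 0.4)              # scale factor, about 7.5
t_metric = apply_scale([0.2, 0.0, 1.0], s)
```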
45

Learning-based Visual Odometry - A Transformer Approach

Rao, Anantha N 04 October 2021 (has links)
No description available.
46

Návrh a realizace odometrických snímačů pro mobilní robot s Ackermannovým řízením / Design and realization of odometry sensors for mobile robot with Ackermann steering

Porteš, Petr January 2017 (has links)
The aim of this thesis is to design and construct odometry sensors for Bender 2, a mobile robot with Ackermann steering, and to design a mathematical model that evaluates the robot's trajectory from the data measured by these sensors. The first part summarizes the theoretical background, while the second, practical part describes the design of the front axle, the design and operating software of the front encoders, and the odometry models. The last part deals with the processing and evaluation of the measured data.
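Odometry for Ackermann steering is commonly approximated by a bicycle model. A sketch of one such update step (pure Python, hypothetical parameters; not the actual Bender 2 model from the thesis):

```python
import math

def ackermann_step(x, y, heading, distance, steer_angle, wheelbase):
    """One odometry update under a bicycle approximation of Ackermann
    steering: the rear axle travels `distance` while the front wheel is
    held at `steer_angle`."""
    if abs(steer_angle) < 1e-9:  # straight-line special case
        return (x + distance * math.cos(heading),
                y + distance * math.sin(heading),
                heading)
    turn_radius = wheelbase / math.tan(steer_angle)
    dtheta = distance / turn_radius
    # Exact integration along the circular arc.
    nx = x + turn_radius * (math.sin(heading + dtheta) - math.sin(heading))
    ny = y - turn_radius * (math.cos(heading + dtheta) - math.cos(heading))
    return nx, ny, heading + dtheta

# Drive straight 2 m, then a quarter circle at 45 degrees of steer
# (wheelbase 1 m gives a 1 m turn radius).
x, y, th = ackermann_step(0.0, 0.0, 0.0, 2.0, 0.0, wheelbase=1.0)
x2, y2, th2 = ackermann_step(x, y, th, math.pi / 2, math.radians(45), wheelbase=1.0)
```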
47

Ovládání robotického ramene s využitím rozšířené reality a tabletu / Control of Robot Manipulator Using Augmented Reality and Tablet

Pristaš, Martin January 2018 (has links)
The aim of this thesis is to create an experimental application for manipulating virtual objects in the augmented reality using an tablet for controlling the robotic arm. There is created various ways of manipulating virtual objects for their translation, rotation, and scale change. These methods are tested on several users and compared within their usability. The application allows you to send the position of virtual object changes to the PR2 robot arm and simulate manipulation of virtual objects with augmented reality.
48

Metody současné sebelokalizace a mapování pro hloubkové kamery / Methods for Simultaneous Self-localization and Mapping for Depth Cameras

Ligocki, Adam January 2017 (has links)
This master's thesis deals with fusing position data from an existing real-time implementation of visual SLAM with wheel odometry. Combining the data suppresses the undesirable errors of each of these measurement methods, making it possible to create a more accurate 3D model of the examined environment. The thesis first presents the theory needed to handle 3D SLAM. It then describes the properties of the open-source SLAM project used and the individual software modifications made to it. Next, it explains the principles of combining the position information obtained from the visual and odometric sensors, and describes the differential-drive chassis used to produce the wheel odometry. Finally, the thesis summarizes the results achieved by the data fusion and compares them with the original accuracy of the visual SLAM.
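The fusion idea can be illustrated in its simplest scalar form, inverse-variance weighting of two position estimates (pure Python; the variances and positions are invented, and the thesis's actual fusion is more elaborate than this one-line combination):

```python
def fuse(estimate_a, var_a, estimate_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates of the
    same quantity. The fused variance is smaller than either input variance,
    which is the point of combining visual SLAM with wheel odometry."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Visual SLAM says 10.2 m (variance 0.04); wheel odometry says 9.4 m
# (wheel slip, variance 0.36). All values are made up for illustration.
pos, var = fuse(10.2, 0.04, 9.4, 0.36)
```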
49

Amélioration des méthodes de navigation vision-inertiel par exploitation des perturbations magnétiques stationnaires de l’environnement / Improving Visual-Inertial Navigation Using Stationary Environmental Magnetic Disturbances

Caruso, David 01 June 2018 (has links)
This thesis addresses the 6-DOF positioning problem that arises in augmented reality applications and focuses on solutions based on embedded sensors. Nowadays, the performance reached by visual-inertial navigation systems is starting to be adequate for AR applications. Nonetheless, those systems rely on position corrections from visual sensors at a relatively high frequency to mitigate the quick drift of low-cost inertial sensors, which is a problem when the visual environment is unfavorable. In parallel, recent work by Sysnav has shown that it is feasible to leverage the magnetic field to reduce inertial integration drift, thanks to a new type of low-cost IMU that includes, in addition to the traditional accelerometers and gyrometers, an array of magnetometers. Yet this magnetic approach to dead reckoning fails if the stationarity and non-uniformity hypotheses on the magnetic field are not fulfilled in the vicinity of the sensor. We develop a robust dead-reckoning solution that simultaneously combines information from all of these sources: magnetic, visual, and inertial sensors. We present several approaches to the fusion problem, using either filtering or non-linear optimization paradigms, and we develop an efficient way to use magnetic error terms in a classical bundle adjustment, inspired by ideas already used for inertial terms. We evaluate the performance of these estimators on data from real sensors and demonstrate the benefits of the fusion compared with visual-inertial and magneto-inertial solutions. Finally, we study theoretical properties of the estimators that are linked to invariance theory.
50

Tracking motion in mineshafts : Using monocular visual odometry

Suikki, Karl January 2022 (has links)
LKAB has a mineshaft trolley used for scanning mineshafts. It is suspended down into a mineshaft by wire, scanning the shaft on both descent and ascent using two LiDAR (Light Detection and Ranging) sensors and an IMU (Inertial Measurement Unit) used for tracking the position. With good tracking, the LiDAR scans could be used to create a three-dimensional model of the mineshaft for monitoring, planning, and visualization. Tracking with an IMU alone is very unstable, since most IMUs are susceptible to disturbances and drift over time; we therefore strive to track the movement using monocular visual odometry instead. Visual odometry tracks movement based on video or images: it is the process of retrieving the pose of a camera by analyzing a sequence of images from one or multiple cameras. The mineshaft trolley is also equipped with a camera that films the descent and ascent, and we aim to use this video for tracking. We present a simple algorithm for visual odometry and test its tracking on multiple datasets: KITTI datasets of traffic scenes accompanied by their ground-truth trajectories, mineshaft data intended for the mineshaft trolley operator, and self-captured data accompanied by an approximate ground-truth trajectory. The algorithm is feature-based, meaning that it tracks recognizable keypoints across consecutive images. We compare the performance of our algorithm on the different datasets using two feature detection and description systems, ORB and SIFT. We find that the algorithm performs well on the KITTI datasets with both ORB and SIFT, whose largest total trajectory errors against ground truth are 3.1 m and 0.7 m, respectively, over 51.8 m of movement.
The tracking of the self-captured dataset shows, by visual inspection, that the algorithm can perform well on data that has not been captured as carefully as the KITTI datasets. We find, however, that we cannot track the movement with the current data from the mineshaft: the algorithm finds too few matching features in consecutive images, which breaks the pose estimation of the visual odometry. Comparing how ORB and SIFT find features in the mineshaft images, we find that SIFT performs better by finding more features. The mineshaft data was never intended for visual odometry and is therefore not suitable for this purpose. We argue that the tracking could work in the mineshaft if the visual conditions were improved, by providing more even lighting and better camera placement, or if the visual odometry were combined with other sensors, such as an IMU, that assist it when it fails.
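The feature-matching step where the mineshaft data fails can be sketched as descriptor matching with a Lowe-style ratio test (pure Python; the 8-bit binary descriptors here are toy stand-ins for real 256-bit ORB descriptors, and the values are invented):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match(desc_query, descs_train, ratio=0.8):
    """Lowe-style ratio test: accept a match only when the best training
    descriptor is clearly closer than the second best. When too few matches
    survive this test, pose estimation breaks down."""
    matches = []
    for qi, q in enumerate(desc_query):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(descs_train))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:  # unambiguous match only
            matches.append((qi, best[1]))
    return matches

# Two query descriptors against three training descriptors (toy 8-bit values).
query = [0b10110010, 0b01001101]
train = [0b10110011, 0b01110000, 0b01001100]
good = match(query, train)  # -> [(0, 0), (1, 2)]
```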
