11

Interest Point Sampling for Range Data Registration in Visual Odometry

PANWAR, VIVEK 07 November 2011 (has links)
Accurate registration of 3D data is one of the most challenging problems in a number of Computer Vision applications. Visual Odometry is one such application, which determines the motion, or change in position, of a moving rover by registering 3D data captured by an on-board range sensor in a pairwise manner. The performance of Visual Odometry depends upon two main factors, the first being the quality of the 3D data, which itself depends upon the type of sensor being used. The second factor is the robustness of the registration algorithm. While sensors like stereo cameras and LIDAR scanners have been used in the past to improve the performance of Visual Odometry, the introduction of the Velodyne LIDAR scanner is fairly new and has been less investigated, particularly for odometry applications. This thesis presents and examines a new method for registering 3D point clouds generated by a Velodyne scanner mounted on a moving rover. The method is based on one of the most widely used registration algorithms, Iterative Closest Point (ICP). The proposed method is divided into two steps. The first step, which is also the main contribution of this work, is the introduction of a new point sampling method, which prudently selects points that belong to the regions of greatest geometric variance in the scan. Interest Point (Region) Sampling plays an important role in the performance of ICP by effectively discounting regions with non-uniform resolution and selecting regions with high geometric variance and uniform resolution. The second step is to use the sampled scan pairs as input to a new plane-to-plane variant of ICP, known as Generalized ICP (GICP). Several experiments have been executed to test the compatibility and robustness of Interest Point Sampling (IPS) for a variety of terrain landscapes. Through these experiments, which include comparisons of variants of ICP and past sampling methods, this work demonstrates that the combination of IPS and GICP results in the least localization error compared to all other tested methods. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-11-03 11:12:43.596
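A minimal sketch of the kind of interest-point (region) sampling described above — keeping only points whose local neighborhoods show the greatest geometric variance before running (G)ICP. This is not the thesis's actual algorithm; the surface-variation measure, neighborhood size and keep fraction are assumptions for illustration.

```python
# Hypothetical sketch of interest-point sampling by local geometric variance.
# Not the thesis implementation; neighborhood size and keep fraction are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def interest_point_sampling(points, k=20, keep_fraction=0.2):
    """Keep the fraction of points whose local neighborhoods show the
    greatest geometric variance (surface variation from PCA eigenvalues)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                 # k nearest neighbors per point
    variation = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                 # 3x3 local covariance
        eigvals = np.sort(np.linalg.eigvalsh(cov))
        variation[i] = eigvals[0] / eigvals.sum()    # surface-variation measure
    n_keep = int(keep_fraction * len(points))
    keep = np.argsort(variation)[-n_keep:]           # highest-variance regions
    return points[keep]

# Example: sample a cloud before feeding scan pairs to an ICP/GICP registration step.
cloud = np.random.rand(5000, 3)
sampled = interest_point_sampling(cloud)
```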
12

Visual Odometry Aided by a Sun Sensor and an Inclinometer

Lambert, Andrew 12 December 2011 (has links)
Due to the absence of any satellite-based global positioning system on Mars, the Mars Exploration Rovers commonly track position changes of the vehicle using a technique called visual odometry (VO), where updated rover poses are determined by tracking keypoints between stereo image pairs. Unfortunately, the error of VO grows super-linearly with the distance traveled, primarily due to the contribution of orientation error. This thesis outlines a novel approach incorporating sun sensor and inclinometer measurements directly into the VO pipeline, utilizing absolute orientation information to reduce the error growth of the motion estimate. These additional measurements have very low computation, power, and mass requirements, providing a localization improvement at nearly negligible cost. The mathematical formulation of this approach is described in detail, and extensive results are presented from experimental trials utilizing data collected during a 10 kilometre traversal of a Mars analogue site on Devon Island in the Canadian High Arctic.
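The key idea above — injecting absolute orientation so that heading error does not integrate into position error — can be illustrated with a toy dead-reckoning comparison. This is not the thesis's estimator (which fuses the measurements inside the full VO pipeline); the 2D simplification, variable names and numbers below are assumptions for illustration.

```python
# Toy illustration (not the thesis formulation): composing motion increments with
# an absolute heading (e.g. sun-sensor aided) instead of an integrated, drifting one.
import numpy as np

def dead_reckon(deltas, headings):
    """Accumulate planar position from per-step forward/lateral increments,
    rotating each increment by the supplied heading (radians)."""
    pos = np.zeros(2)
    track = [pos.copy()]
    for (dx, dy), theta in zip(deltas, headings):
        c, s = np.cos(theta), np.sin(theta)
        pos = pos + np.array([[c, -s], [s, c]]) @ np.array([dx, dy])
        track.append(pos.copy())
    return np.array(track)

# A small constant heading drift makes the integrated-heading error grow with
# distance travelled, while absolute headings keep it bounded.
n = 200
deltas = np.tile([0.5, 0.0], (n, 1))                     # 0.5 m forward per step
true_heading = np.zeros(n)
drifting = true_heading + np.cumsum(np.full(n, 0.001))   # 1 mrad of drift per step
path_drift = dead_reckon(deltas, drifting)
path_aided = dead_reckon(deltas, true_heading)
print("final position error, drifting heading:",
      np.linalg.norm(path_drift[-1] - path_aided[-1]))
```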
14

Standalone and embedded stereo visual odometry based navigation solution

Chermak, L 17 July 2015 (has links)
This thesis investigates techniques for, and designs, an autonomous stereo-vision-based navigation sensor to improve stereo visual odometry for the purpose of navigation in unknown environments. In particular, it targets autonomous navigation in a space-mission context, which imposes challenging constraints on algorithm development and hardware requirements. For instance, the Global Positioning System (GPS) is not available in this context, so a navigation solution cannot rely on such external sources of information. This problem is addressed by conceiving an intelligent perception-sensing device that provides precise outputs for absolute and relative 6-degree-of-freedom (DOF) positioning. This is achieved using only images from calibrated stereo cameras, possibly coupled with an inertial measurement unit (IMU), while fulfilling real-time processing requirements. Moreover, no prior knowledge about the environment is assumed. Robotic navigation has motivated the investigation of different and complementary areas such as stereovision, visual motion estimation, optimisation and data fusion, and several contributions have been made in these areas. Firstly, an efficient feature detection, stereo matching and feature tracking strategy based on the Kanade-Lucas-Tomasi (KLT) feature tracker is proposed to form the base of the visual motion estimation. Secondly, in order to cope with extreme illumination changes, a high dynamic range (HDR) imaging solution is investigated and a comparative assessment of feature tracking performance is conducted. Thirdly, a two-view local bundle adjustment scheme based on trust-region minimisation is proposed for precise visual motion estimation. Fourthly, a novel KLT feature tracker using IMU information is integrated into the visual odometry pipeline. Finally, a smart standalone stereo visual/IMU navigation sensor is designed, integrating an innovative combination of hardware and the novel software solutions proposed above. As a result of a balanced combination of hardware and software implementation, we achieved 5 fps processing of up to 750 initial features at a resolution of 1280x960, which to our knowledge is the highest resolution reached in real time for visual odometry applications. In addition, the visual odometry accuracy of our algorithm matches the state of the art, with less than 1% relative error in the estimated trajectories. / © Cranfield University, 2014
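The detection-and-tracking front end described above is commonly built from Shi-Tomasi corners plus pyramidal Lucas-Kanade optical flow. Below is a minimal OpenCV sketch of that generic pattern — not the thesis's hardware-tuned, IMU-aided tracker — with placeholder file names and illustrative parameters.

```python
# Minimal KLT-style detection and tracking sketch using OpenCV (not the thesis
# implementation; parameters and file names are illustrative placeholders).
import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect good features to track in the previous frame (Shi-Tomasi corners).
prev_pts = cv2.goodFeaturesToTrack(prev, maxCorners=750,
                                   qualityLevel=0.01, minDistance=7)

# Track them into the current frame with pyramidal Lucas-Kanade optical flow.
curr_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, prev_pts, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

# Keep only successfully tracked correspondences; these would feed the
# two-view motion estimation / local bundle adjustment stage.
good_prev = prev_pts[status.ravel() == 1]
good_curr = curr_pts[status.ravel() == 1]
print(f"tracked {len(good_curr)} of {len(prev_pts)} features")
```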
15

RGB-D SLAM: an implementation framework based on the joint evaluation of spatial velocities

Coppejans, Hugo Herman Godelieve January 2017 (has links)
In pursuit of creating a fully automated navigation system that is capable of operating in dynamic environments, a large amount of research is being devoted to systems that use visual-odometry-assisted methods to estimate the position of a platform with regard to the environment surrounding it. This includes systems that do and do not know the environment a priori, as both rely on the same methods for localisation. For the combined problem of localisation and mapping, Simultaneous Localisation and Mapping (SLAM) is the de facto choice, and in recent years, with the advent of color and depth (RGB-D) sensors, RGB-D SLAM has become a hot topic for research. Most research focuses on improving the overall system accuracy, or more specifically the performance with regard to the overall trajectory error. While this approach quantifies the performance of the system as a whole, the individual frame-to-frame performance is often not mentioned or explored properly. While this directly ties in to the overall performance, the level of scene cohesion experienced between two successive observations can vary greatly over a single dataset of observations. The focus of this dissertation is the translational and rotational velocities experienced by the sensor between two successive observations and their effect on the final accuracy of the SLAM implementation. The frame rate is specifically used to alter and evaluate the different spatial velocities experienced over multiple datasets of RGB-D data. Two systems were developed to illustrate and evaluate the potential of various approaches to RGB-D SLAM. The first system is a real-world implementation where SLAM is used to localise and map the environment surrounding a quadcopter platform. A Microsoft Kinect is directly mounted to the quadcopter and is used to provide an RGB-D datastream to a remote processing terminal. This terminal runs a SLAM implementation that can alternate between different visual odometry methods. The remote terminal acts as the position controller for the quadcopter, replacing the need for a direct human operator. A semi-automated system is implemented that allows a human operator to designate waypoints within the environment that the quadcopter moves to. The second system uses a series of publicly available RGB-D datasets with their accompanying ground-truth readings to simulate a real RGB-D datastream. This is used to evaluate the performance of the various RGB-D SLAM approaches to visual odometry. For each of the datasets, the accompanying translational and angular velocity on a frame-to-frame basis can be calculated. This can, in turn, be used to evaluate the frame-to-frame accuracy of the SLAM implementation, where the spatial velocity can be manually altered by occluding frames within the sequence. Thus, an accurate relationship can be calculated between the frame rate, the spatial velocity and the performance of the SLAM implementation. Three image processing techniques were used to implement the visual odometry for RGB-D SLAM. SIFT, SURF and ORB were compared across eight of the TUM database datasets. SIFT had the best performance, with a 30% increase over SURF and double the performance of ORB. By implementing SIFT using CUDA, the feature detection and description process takes only 18 ms, negating the disadvantage that SIFT has compared to SURF and ORB. The RGB-D SLAM implementation was compared to four prominent research papers, and showed comparable results.
The effect of rotation and translation was evaluated based on the effect of each rotation and translation axis. It was found that the z-axis (scale) and the roll-axis (scene orientation) have a lower effect on the average RPE error on a frame-to-frame basis. When evaluating rotation and translation separately, rotation was found to have a much greater impact on performance. On average, a rotation of 1deg resulted in a 4mm translation error and a 20% rotation error, whereas a translation of 10mm resulted in a rotation error of 0.2deg and a translation error of 45%. The combined effect of rotation and translation had a multiplicative effect on the error metric. The quadcopter platform designed to work with the SLAM implementation did not function ideally, but it was sufficient for the purpose. The quadcopter is able to self-stabilise within the environment, given a spacious area. For smaller, enclosed areas the backdraft generated by the quadcopter motors led to some instability in the system. A frame-to-frame error of 40.34mm and 1.93deg was estimated for the quadcopter system. / Dissertation (MEng)--University of Pretoria, 2017. / Electrical, Electronic and Computer Engineering / MEng / Unrestricted
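The frame-to-frame accuracy discussed above is typically measured as relative pose error (RPE). Below is a minimal sketch of that metric in the spirit of the TUM RGB-D benchmark; the pose format (time-aligned 4x4 homogeneous matrices, one per frame) is an assumption here, not taken from the dissertation.

```python
# Sketch of a frame-to-frame relative pose error (RPE) computation.
# Assumes time-aligned 4x4 homogeneous pose matrices for ground truth and estimate.
import numpy as np

def relative_pose_error(gt_poses, est_poses, delta=1):
    """Return per-frame translational (m) and rotational (rad) RPE over `delta` frames."""
    trans_err, rot_err = [], []
    for i in range(len(gt_poses) - delta):
        # Relative motion over `delta` frames, ground truth vs. estimate.
        gt_rel = np.linalg.inv(gt_poses[i]) @ gt_poses[i + delta]
        est_rel = np.linalg.inv(est_poses[i]) @ est_poses[i + delta]
        err = np.linalg.inv(gt_rel) @ est_rel
        trans_err.append(np.linalg.norm(err[:3, 3]))
        # Rotation angle of the residual rotation matrix.
        cos_angle = np.clip((np.trace(err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        rot_err.append(np.arccos(cos_angle))
    return np.array(trans_err), np.array(rot_err)

# Skipping frames (e.g. using every 2nd pose) raises the effective spatial velocity
# between successive observations, mirroring the frame-occlusion evaluation above.
```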
16

Visual Odometry for Autonomous MAV with On-Board Processing

Greenberg, Jacob January 2015 (has links)
A new visual registration algorithm (Adaptive Iterative Closest Keypoint, AICK) is tested and evaluated as a positioning tool on a Micro Aerial Vehicle (MAV). Captured frames from a Kinect-like RGB-D camera are analyzed and an estimated position of the MAV is extracted. The hope is to find a positioning solution for GPS-denied environments; this thesis focuses on an indoor office environment. The MAV is flown manually, capturing in-flight RGB-D images which are registered with the AICK algorithm. The result is analyzed to determine whether AICK is viable for autonomous flight based on on-board positioning estimates. The results show potential for a working autonomous MAV in GPS-denied environments; however, some surroundings have proven difficult. The lack of visual features on, e.g., a white wall causes problems and uncertainties in the positioning, which is even more troublesome when the distance to the surroundings exceeds the RGB-D camera's depth range. With further work on these weaknesses, we believe that a robust autonomous MAV using AICK for positioning is plausible.
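AICK itself is not detailed in the abstract, but any keypoint-based RGB-D registration ultimately estimates a rigid transform between matched 3D keypoints. Below is a minimal Kabsch/SVD sketch of that sub-step; the outlier-free correspondence assumption and the names are simplifications for illustration, not the AICK algorithm.

```python
# Minimal sketch of the rigid-alignment sub-step used by keypoint-based RGB-D
# registration (Kabsch/SVD). Not the AICK algorithm itself; assumes matched,
# outlier-free 3D keypoints src[i] <-> dst[i] as (N, 3) arrays.
import numpy as np

def rigid_transform(src, dst):
    """Return R (3x3) and t (3,) minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # correct a possible reflection
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```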
17

Dynamics-Enabled Localization of UAVs using Unscented Kalman Filter

Omotuyi, Oyindamola January 2021 (has links)
No description available.
18

Visual odometry from omnidirectional camera

Diviš, Jiří January 2013 (has links)
We present a system that estimates the motion of a robot relying solely on images from an onboard omnidirectional camera (visual odometry). Compared to other visual odometry hardware, ours is unusual in utilizing a high-resolution, low frame-rate (1 to 3 Hz) omnidirectional camera mounted on a robot that is propelled using continuous tracks. We focus on high-precision estimates in scenes where objects are far away from the camera. This is achieved by utilizing an omnidirectional camera, which is able to stabilize the motion estimates between camera frames that are known to be ill-conditioned for narrow-field-of-view cameras. We employ a feature-based approach for estimating camera motion. Given our hardware, large amounts of camera rotation can occur between frames, so we use feature matching rather than feature tracking.
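A minimal sketch of feature matching (as opposed to tracking) between two frames, using ORB descriptors and cross-checked brute-force matching in OpenCV. The detector choice, parameters and file names are assumptions for illustration, not taken from the thesis.

```python
# Sketch of feature matching between two frames (ORB + brute-force Hamming matching).
# Detector choice, parameters and file names are illustrative placeholders.
import cv2

img1 = cv2.imread("omni_frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("omni_frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Cross-checked brute-force matching tolerates large inter-frame rotation,
# unlike tracking, which assumes small displacements between frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = [kp1[m.queryIdx].pt for m in matches]
pts2 = [kp2[m.trainIdx].pt for m in matches]
print(f"{len(matches)} putative correspondences for motion estimation")
```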
20

Improving the Utility of Egocentric Videos

Biao Ma (6848807) 15 August 2019 (has links)
For either entertainment or documenting purposes, people are starting to record their lives using egocentric cameras mounted on either a person or a vehicle. Our target is to improve the utility of these egocentric videos.

For egocentric videos with an entertainment purpose, we aim to enhance the viewing experience to improve overall enjoyment. We focus on First-Person Videos (FPVs), which are recorded by wearable cameras. People record FPVs in order to share their First-Person Experience (FPE). However, raw FPVs are usually too shaky to watch, which ruins the experience. We explore the mechanism of human perception and propose a biometric-based measurement called the Viewing Experience (VE) score, which measures both the stability and the First-person Motion Information (FPMI) of an FPV. This enables us to further develop a system that stabilizes FPVs while preserving their FPMI. Experimental results show that our system is robust and efficient in measuring and improving the VE of FPVs.

For egocentric videos whose goal is documentation, we aim to build a system that can centrally collect, compress and manage the videos. We focus on Dash Camera Videos (DCVs), which are used by people to document the route they drive each day. We propose a system that classifies videos according to the route driven, using GPS and visual information. When new DCVs are recorded, their bit-rate can be reduced by jointly compressing them with videos recorded on a similar route. Experimental results show that our system outperforms other similar solutions and the standard HEVC, particularly under varying illumination.

The First-Person Video viewing experience topic and the Dashcam Video compression topic represent two applications that rely on Visual Odometers (VOs): visual augmentation and robotic perception. Different applications have different requirements for VOs, and the performance of VOs is also influenced by many different factors. To help our system and other users working on similar applications, we further propose a system that investigates the performance of different VOs under various factors. The proposed system is shown to provide suggestions on selecting VOs based on the application.
