About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Matching of image features and vector objects to automatically correct spatial misalignment between image and vector data sets

O'Donohue, Daniel Gerard January 2010 (has links)
Direct georeferencing of aerial imagery has the potential to meet the escalating demand for image data sets of ever higher temporal and spatial resolution. However, variability in the spatial accuracy of the resulting images may severely limit the use of this technology in operations involving other data sets. Spatial misalignment between data sets can be corrected manually; however, an automated solution is preferable given the volume of data involved. This research developed and tested an automated custom solution to the spatial misalignment between directly georeferenced aerial thermal imagery and vector data representing building outlines. The procedure uses geometric matches between image features and vector objects to relate pixel locations to geographic coordinates. The results suggest that the concept is valid and capable of significantly improving the spatial accuracy of directly georeferenced aerial imagery.
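As a rough illustration of the final step described above, relating pixel locations to geographic coordinates, the sketch below fits a 2D affine geotransform to matched pixel/map point pairs by least squares. The correspondences and coordinate values are invented; the thesis' actual matching of image features to building outlines is not reproduced.

```python
# Minimal sketch (not the thesis code): given matched pixel<->map point pairs,
# fit a 2D affine geotransform by least squares and use it to georeference pixels.
import numpy as np

def fit_affine(pixel_pts, map_pts):
    """pixel_pts, map_pts: (N, 2) arrays of corresponding points, N >= 3."""
    px = np.asarray(pixel_pts, dtype=float)
    mp = np.asarray(map_pts, dtype=float)
    # Design matrix for x' = a*x + b*y + c, y' = d*x + e*y + f
    A = np.hstack([px, np.ones((len(px), 1))])
    coeffs, *_ = np.linalg.lstsq(A, mp, rcond=None)   # shape (3, 2)
    return coeffs

def pixel_to_map(coeffs, pixel_pts):
    px = np.asarray(pixel_pts, dtype=float)
    A = np.hstack([px, np.ones((len(px), 1))])
    return A @ coeffs

# Made-up correspondences (image feature centroids vs. building-outline vertices)
pixels = [(120, 45), (480, 60), (130, 400), (470, 390)]
coords = [(172035.1, 5423980.4), (172110.8, 5423978.0),
          (172036.9, 5423905.2), (172108.3, 5423907.7)]
T = fit_affine(pixels, coords)
print(pixel_to_map(T, [(300, 220)]))
```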
2

Using linear features for absolute and exterior orientation

Park, David W. G. January 1999 (has links)
No description available.
3

Navigation eines mobilen Roboters durch ebene Innenräume / Navigation of a mobile robot through planar indoor spaces

Buchmann, Lennart 07 February 2023 (has links)
The appraisal, trading and collecting of art objects no longer takes place exclusively in analogue form. The company 4ARTechnologies develops software solutions for the digital collection management of physical and digital art. Using applications on mobile devices, users can register and authenticate their paintings and periodically create precise condition reports. Because of human limitations, creating these condition reports leads to problems in handling the application, and the task is therefore to be automated with the help of a mobile robot. The goal of this work is the development of a navigation system for a mobile robot that solves the following problem: localizing a painting, approaching it without collisions, and positioning the robot horizontally centered in front of it. The target platform of the software is the mobile operating system iOS. Methods for the navigation of mobile robots and for computer-aided image recognition were examined for the solution. The navigation software uses feature matching from the OpenCV library to find the target. Relative localization methods such as pose tracking and odometry are used to estimate the robot's own position. The environment and the robot's movement are represented on a topological map, and obstacles are avoided using an implemented BUG3 algorithm. Contents: 1. Introduction (problem description and scope; robot setup; constraints and requirements); 2. Theoretical foundations (robotics and mobile robotics; navigation: localization, mapping, SLAM, path finding, augmented reality; computer vision: OpenCV, template recognition, template-based matching, feature-based matching); 3. Practical implementation (navigation program flow: connecting to the robot, initial exploration, localization and approach, collision avoidance, final approach and positioning); 4. Tests (interfering factors); 5. Conclusion and outlook.
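The following is a hedged sketch, not the thesis implementation, of how a registered painting might be located in a camera frame with OpenCV feature matching; it uses ORB with a brute-force matcher and a RANSAC homography, is shown in Python rather than on the iOS target platform, and the file names are placeholders.

```python
# Illustrative sketch only: locating a reference painting in a camera frame
# with OpenCV feature matching and a homography. File names are placeholders.
import cv2
import numpy as np

template = cv2.imread("painting_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_f, des_f = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)[:100]

if len(matches) >= 4:
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Project the template outline into the frame to get the painting's location.
    h, w = template.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    outline = cv2.perspectiveTransform(corners, H)
    center = outline.reshape(-1, 2).mean(axis=0)  # used to steer toward a centered position
    print("painting center in frame:", center)
```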
4

Visual control of multi-rotor UAVs

Duncan, Stuart Johann Maxwell January 2014 (has links)
Recent miniaturization of computer hardware, MEMS sensors, and high-energy-density batteries has enabled highly capable mobile robots to become available at low cost. This has driven the rapid expansion of interest in multi-rotor unmanned aerial vehicles. Another area which has expanded simultaneously is small powerful computers, in the form of smartphones, which nearly always have a camera attached and many of which now contain an OpenCL-compatible graphics processing unit. By combining the results of these two developments, a low-cost multi-rotor UAV can be produced with a low-power onboard computer capable of real-time computer vision. The system should also use general-purpose computer vision software to facilitate a variety of experiments. To demonstrate this I have built a quadrotor UAV based on control hardware from the Pixhawk project and paired it with an ARM-based single-board computer similar to those in high-end smartphones. The quadrotor weighs 980 g and has a flight time of 10 minutes. The onboard computer is capable of running a pose estimation algorithm above the 10 Hz requirement for stable visual control of a quadrotor. A feature tracking algorithm was developed for efficient pose estimation, which relaxed the requirement for outlier rejection during matching. Compared with a RANSAC-only algorithm the pose estimates were less variable, with a Z-axis standard deviation of 0.2 cm compared with 2.4 cm for RANSAC. Processing time per frame was also faster with tracking, with 95 % confidence that tracking would process the frame within 50 ms, while for RANSAC the 95 % confidence time was 73 ms. The onboard computer ran the algorithm with a total system load of less than 25 %. All computer vision software uses the OpenCV library for common computer vision algorithms, fulfilling the requirement for running general-purpose software. The tracking algorithm was used to demonstrate the capability of the system by performing visual servoing of the quadrotor (after manual takeoff). Response to external perturbations was poor, however, requiring manual intervention to avoid crashing. This was due to poor visual controller tuning and to variations in image acquisition and attitude estimate timing caused by using free-running image acquisition. The system, and the tracking algorithm, serve as proof of concept that visual control of a quadrotor is possible using small low-power computers and general-purpose computer vision software.
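A minimal sketch of the kind of pipeline described above, substituting standard OpenCV building blocks for the thesis' own algorithm: features are tracked with Lucas-Kanade optical flow rather than re-matched every frame, and relative pose is recovered from the essential matrix. The camera intrinsics and file names are placeholder assumptions.

```python
# Hedged sketch (not the thesis implementation): track features between frames
# instead of re-matching, then recover relative camera pose. K is a placeholder.
import cv2
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics

prev = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

# Detect features once in the previous frame, then track them with Lucas-Kanade,
# which is the general idea behind relaxing per-frame outlier rejection.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=400, qualityLevel=0.01, minDistance=8)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
good0 = p0[status.ravel() == 1].reshape(-1, 2)
good1 = p1[status.ravel() == 1].reshape(-1, 2)

# Relative pose (rotation and translation direction) from the tracked points.
E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```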
5

Robust Cooperative Strategy for Contour Matching Using Epipolar Geometry

Yuan, Miaolong, Xie, Ming, Yin, Xiaoming 01 1900 (has links)
Feature matching in images plays an important role in computer vision tasks such as 3D reconstruction, motion analysis, object recognition, target tracking and dynamic scene analysis. In this paper, we present a robust cooperative strategy to establish the correspondence of contours between two uncalibrated images based on the recovered epipolar geometry. We take into account two representations of contours in an image: contour points and contour chains. The method proposed in the paper is composed of two consecutive steps: (1) the first step uses the LMedS method to estimate the fundamental matrix based on Hartley’s 8-point algorithm; (2) the second step uses a new robust cooperative strategy to match contours. The presented approach has been tested with various real images, and experimental results show that our method can produce more accurate contour correspondences. / Singapore-MIT Alliance (SMA)
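A small sketch of the first step, fundamental matrix estimation with LMedS, using OpenCV's findFundamentalMat on synthetic two-view correspondences standing in for real contour-point matches; the cooperative contour-matching stage itself is not reproduced, and the camera and motion values are invented.

```python
# Sketch of LMedS fundamental matrix estimation on synthetic two-view data.
import cv2
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# Synthetic 3D points seen from two views (stand-in for real contour-point matches).
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(60, 3))
R, _ = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))   # second camera rotated about Y
t = np.array([[0.5], [0.0], [0.0]])               # ...and translated along X

def project(P, R, t):
    Xc = (R @ P.T + t).T
    x = (K @ Xc.T).T
    return (x[:, :2] / x[:, 2:3]).astype(np.float32)

pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, R, t)

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)

# Epipolar residual |x2^T F x1|; values near zero indicate consistent matches.
x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
print("mean residual:", np.abs(np.einsum("ij,jk,ik->i", x2, F, x1)).mean())
```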
6

Article identification for inventory list in a warehouse environment

Gao, Yang January 2014 (has links)
In this paper, an object recognition system that uses local image features has been developed. In the system, multiple classes of objects can be recognized in an image. The system is divided into two parts: object detection and object identification. Object detection is based on SIFT features, which are invariant to image illumination, scaling and rotation. SIFT features extracted from a test image are matched against a database of SIFT features from known object images. DBSCAN clustering is used for multiple-object detection, and RANSAC is used to reduce the number of false detections. Object identification is based on the 'Bag-of-Words' model, a method based on vector quantization of SIFT descriptors of image patches. In this model, K-means clustering and Support Vector Machine (SVM) classification are applied.
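The detection stage can be illustrated roughly as below: assuming SIFT matches against one database object have already been collected, DBSCAN groups their image locations so that several instances of the object can be separated. The coordinates are made up, and the eps/min_samples values are illustrative, not the thesis' settings.

```python
# Rough sketch of the multi-object detection idea, not the thesis code:
# cluster the locations of matched keypoints with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

# matched_pts: (N, 2) image coordinates of test-image keypoints that matched the object.
matched_pts = np.array([
    [102, 210], [110, 205], [98, 220], [105, 215],   # instance 1
    [410, 330], [405, 340], [398, 335], [415, 328],  # instance 2
    [600, 50],                                       # likely a false match
], dtype=float)

labels = DBSCAN(eps=40.0, min_samples=3).fit_predict(matched_pts)
for label in set(labels):
    if label == -1:
        continue  # noise points, analogous to false matches removed later by RANSAC
    cluster = matched_pts[labels == label]
    print(f"object instance {label}: {len(cluster)} matches, centroid {cluster.mean(axis=0)}")
```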
7

Visual Stereo Odometry for Indoor Positioning

Johansson, Fredrik January 2012 (has links)
In this master thesis a visual odometry system is implemented and explained. Visual odometry is a technique that can be used on autonomous vehicles to determine their current position; it is preferably used indoors, where GPS does not work. The only input to the system is the images from a stereo camera, and the output is the current location given as a relative position. In the C++ implementation, image features are found and matched between the stereo images and the previous stereo pair, which gives a range of 150-250 verified feature matches. The image coordinates are triangulated into a 3D point cloud. The distance between two subsequent point clouds is minimized with respect to rigid transformations, which gives the motion described by six parameters: three for the translation and three for the rotation. Noise in the image coordinates causes reconstruction errors, which makes the motion estimation very sensitive. The results from six experiments show that the weakness of the system is its ability to distinguish rotations from translations. However, if the system has additional knowledge of how it is moving, the minimization can be done with only three parameters and the system can estimate its position with less than 5 % error.
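Two of the steps described above can be sketched as follows, under assumed variable names: triangulating verified stereo matches into a 3D point cloud with OpenCV, and estimating the rigid motion between two successive clouds with an SVD-based least-squares fit. This is an illustration in Python, not the thesis' C++ code.

```python
# Sketch of stereo triangulation and rigid motion estimation between point clouds.
import cv2
import numpy as np

def triangulate(P_left, P_right, pts_left, pts_right):
    """pts_*: (N, 2) pixel coordinates of verified matches; returns (N, 3) points."""
    Xh = cv2.triangulatePoints(P_left, P_right, pts_left.T, pts_right.T)  # (4, N)
    return (Xh[:3] / Xh[3]).T

def rigid_fit(A, B):
    """Least-squares R, t with B ~ R @ A + t, for matched (N, 3) point clouds A, B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Toy check: recover a known rotation about Z plus a translation.
rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, size=(200, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
B = A @ R_true.T + np.array([0.3, -0.1, 0.05])
R_est, t_est = rigid_fit(A, B)
print(np.round(R_est, 3), np.round(t_est, 3))
```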
8

Wide Baseline Stereo Image Rectification and Matching

Hao, Wei 01 December 2011 (has links)
Perception of depth information is central to three-dimensional (3D) vision problems. Stereopsis is an important passive vision technique for depth perception. Wide baseline stereo is a challenging problem that has recently attracted much interest from both the theoretical and application perspectives. In this research we approach the problem of wide baseline stereo using the geometric and structural constraints within feature sets. The major contribution of this dissertation is a more efficient paradigm, compared to the state of the art, for handling the challenges introduced by perspective distortion in wide baseline stereo. To support this paradigm, a new feature-matching algorithm is proposed that extends state-of-the-art matching methods to larger-baseline cases. The proposed matching algorithm takes advantage of both the local feature descriptor and the structure pattern of the feature set, and enhances the matching results in the case of large viewpoint change. In addition, an innovative rectification for uncalibrated images is proposed to make dense matching in wide baseline stereo possible. We noticed that existing rectification methods do not take the need for shape adjustment into account. By introducing geometric constraints on the pattern of the feature points, we propose a rectification method that maximizes structural congruency based on Delaunay triangulation nets and thus avoids some existing problems of other methods. The rectified stereo images can then be used to generate a dense depth map of the scene. The task is much simplified compared to some existing methods because the 2D search problem is reduced to a 1D search. To validate the proposed methods, real-world images are used to test the performance, and comparisons to state-of-the-art methods are provided. The performance of the dense matching with respect to a changing baseline is also studied.
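For comparison, the standard OpenCV route to rectifying an uncalibrated pair from point matches looks roughly like the sketch below; the dissertation's Delaunay-based, structure-preserving rectification refines this idea and is not reproduced here. Image file names and thresholds are placeholders.

```python
# Baseline sketch: uncalibrated rectification from SIFT matches with OpenCV.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0)
inl1, inl2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

h, w = img1.shape
_, H1, H2 = cv2.stereoRectifyUncalibrated(inl1, inl2, F, (w, h))
rect1 = cv2.warpPerspective(img1, H1, (w, h))
rect2 = cv2.warpPerspective(img2, H2, (w, h))
# Dense matching can now search along horizontal scanlines (the 1D search noted above).
```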
9

Online 3D Reconstruction and Ground Segmentation using Drone based Long Baseline Stereo Vision System

Kumar, Prashant 16 November 2018 (has links)
This thesis presents online 3D reconstruction and ground segmentation using unmanned aerial vehicle (UAV) based stereo vision. For this purpose, a long baseline stereo vision system has been designed and built. The system is intended to work as part of an air- and ground-based multi-robot autonomous terrain surveying project at the Unmanned Systems Lab (USL), Virginia Tech, acting as a first-responder robotic system in disaster situations. Areas covered by this thesis are the design of the long baseline stereo vision system, a study of the raw stereo vision output, techniques to filter outliers from that output, a 3D reconstruction method, and a study of improving running time by controlling the density of the point clouds. The presented work makes use of filtering methods and implementations in the Point Cloud Library (PCL) and of feature matching on the graphics processing unit (GPU) using OpenCV with CUDA. Besides 3D reconstruction, the challenge in the project was speed, and several steps and ideas are presented to achieve it. The presented 3D reconstruction algorithm uses feature matching in 2D images, converts keypoints to 3D using disparity images, estimates the rigid body transformation between matched 3D keypoints, and fits the point clouds together. To correct and control orientation and localization errors, it fits re-projected UAV positions to GPS-recorded UAV positions using the iterative closest point (ICP) algorithm as a correction step. A new but computationally intensive process that uses superpixel clustering and plane fitting to increase the resolution of disparity images to sub-pixel resolution is also presented. The results section reports the accuracy of the 3D reconstruction. The presented process is able to generate application-acceptable semi-dense 3D reconstruction and ground segmentation at 8-12 frames per second (fps). In a 3D reconstruction of an area of 25 x 40 m², with a UAV flight altitude of 23 m, the average obstacle localization error and the average obstacle size/dimension error are found to be 17 cm and 3 cm, respectively. / MS / This thesis presents near real-time (online) visual reconstruction in three dimensions (3D) using a ground-facing camera system on an unmanned aerial vehicle. Another result of this thesis is separating the ground from obstacles on the ground. To do this, a camera system using two cameras (a stereo vision system), with the cameras positioned comparatively far from each other at 60 cm, was designed, and an algorithm and software for the visual 3D reconstruction were developed. The system is intended to work as part of an air- and ground-based multi-robot autonomous terrain surveying project at the Unmanned Systems Lab, Virginia Tech, acting as a first-responder robotic system in disaster situations. The presented work makes use of the Point Cloud Library and of library functions on the graphics processing unit using OpenCV with CUDA, which are popular computer vision libraries. Besides 3D reconstruction, the challenge in the project was speed, and several steps and ideas are presented to achieve it. The presented 3D reconstruction algorithm is based on feature matching, a popular way to mathematically identify unique pixels in an image. Besides using image features in the 3D reconstruction, the algorithm also includes a correction step that corrects and controls orientation and localization errors using the iterative closest point algorithm. A new but computationally intensive process that improves the resolution of disparity images, an output of the developed stereo vision system, from single-pixel accuracy to sub-pixel accuracy is also presented. The results section reports the accuracy of the 3D reconstruction. The presented process is able to generate application-acceptable 3D reconstruction and ground segmentation at 8-12 frames per second. In a 3D reconstruction of an area of 25 x 40 m², with a UAV flight altitude of 23 m, the average obstacle localization error and the average obstacle size/dimension error are found to be 17 cm and 3 cm, respectively.
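One step, ground segmentation, can be sketched as a plain RANSAC plane fit on a point cloud, as below; the thesis pipeline itself relies on PCL filtering and GPU feature matching, so this NumPy version is only an illustration with made-up data and an assumed inlier threshold.

```python
# Hedged sketch of ground segmentation: RANSAC plane fitting on a point cloud.
import numpy as np

def ransac_ground_plane(points, n_iter=200, threshold=0.10, rng=np.random.default_rng(1)):
    """points: (N, 3) in metres. Returns (inlier_mask, plane (a, b, c, d)) with ax+by+cz+d=0."""
    best_mask, best_plane = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate sample, try again
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (*normal, d)
    return best_mask, best_plane

# Toy cloud: flat ground with a small box-shaped obstacle on top.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 25, 2000), rng.uniform(0, 40, 2000),
                          rng.normal(0, 0.03, 2000)])
box = np.column_stack([rng.uniform(10, 11, 200), rng.uniform(20, 21, 200),
                       rng.uniform(0.0, 0.5, 200)])
cloud = np.vstack([ground, box])
mask, plane = ransac_ground_plane(cloud)
print("ground points:", mask.sum(), "obstacle points:", (~mask).sum())
```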
10

Real-time Aerial Photograph Alignment using Feature Matching / Placering av flygfoton i realtid utifrån bildegenskaper

Magnvall, Andreas, Henne, Alexander January 2021 (has links)
With increased mobile hardware capabilities, improved UAVs and modern algorithms, accurate maps can be created in real time by capturing overlapping photographs of the ground. One mapping method is to position the photos relying purely on GPS position and altitude; however, GPS inaccuracies will then be visible in the created map. In this paper we instead present a method for aligning the photos correctly with the help of feature matching. Feature matching is a well-known method that analyses two photos to find similar parts. If an overlap exists, feature matching can be used to find and localise those parts, which can then be used to position one image over the other at the overlap. By repeating the process, a whole map can be created. For this purpose, we have also evaluated a selection of feature detection and matching algorithms. The algorithm found to perform best was SIFT with FLANN, which was then used in a prototype for creating a complete map of a forest. Feature matching is in many cases superior to GPS positioning, although it cannot be fully relied upon, as failed or incorrect matching is a common occurrence.
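A rough sketch of the SIFT-with-FLANN alignment step the paper settles on, assuming two overlapping aerial photos on disk; the file names, ratio threshold, and simple compositing are illustrative choices, not taken from the paper.

```python
# Sketch: align one overlapping aerial photo onto another with SIFT + FLANN.
import cv2
import numpy as np

base = cv2.imread("photo_a.jpg")
new = cv2.imread("photo_b.jpg")

sift = cv2.SIFT_create()
kp_b, des_b = sift.detectAndCompute(cv2.cvtColor(base, cv2.COLOR_BGR2GRAY), None)
kp_n, des_n = sift.detectAndCompute(cv2.cvtColor(new, cv2.COLOR_BGR2GRAY), None)

flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})  # KD-tree index
knn = flann.knnMatch(des_n, des_b, k=2)
good = [m for m, n in knn if m.distance < 0.7 * n.distance]  # Lowe's ratio test

src = np.float32([kp_n[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the new photo into the base photo's frame; repeating this over a flight builds the map.
h, w = base.shape[:2]
aligned = cv2.warpPerspective(new, H, (w, h))
mosaic = np.where(aligned.sum(axis=2, keepdims=True) > 0, aligned, base)
cv2.imwrite("mosaic.jpg", mosaic)
```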
