  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Indoor scene verification : Evaluation of indoor scene representations for the purpose of location verification

Finfando, Filip January 2020 (has links)
When the human visual system looks at two pictures taken in some indoor location, it is fairly easy to tell whether they were taken in exactly the same place, even when the location has never been visited in reality. This is possible because we can attend to multiple factors such as spatial properties (window shape, room shape), common patterns (floor, walls), or the presence of specific objects (furniture, lighting). Changes in camera pose, illumination, or furniture location, and digital alterations of the image (e.g. watermarks), have little influence on this ability. Traditional approaches to measuring the perceptual similarity of images struggle to reproduce this skill. This thesis defines the Indoor scene verification (ISV) problem as distinguishing whether two indoor scene images were taken in the same indoor space or not. It explores the capabilities of state-of-the-art perceptual similarity metrics by introducing two new datasets designed specifically for this problem. Perceptual hashing, ORB, FaceNet and NetVLAD are evaluated as baseline candidates. The results show that NetVLAD provides the best results on both datasets and is therefore chosen as the baseline for the experiments aiming to improve it. Three experiments are carried out, testing the impact of using a different training dataset, changing the deep neural network architecture, and introducing a new loss function. Quantitative analysis of the AUC score shows that switching from VGG16 to MobileNetV2 yields an improvement over the baseline.
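To make the perceptual-hashing baseline concrete, here is a minimal sketch of the classic average-hash ("aHash") scheme on toy data: an image is reduced to a 64-bit fingerprint, and two images are compared by the Hamming distance of their fingerprints. The 8x8 "images" and values below are illustrative; the thesis's actual implementation and hash variant are not specified here.

```python
# Average hash: each bit records whether a pixel is brighter than the image mean.
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale image (list of rows)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming_distance(h1, h2):
    """Number of differing bits; small distances suggest the same scene."""
    return bin(h1 ^ h2).count("1")

# Two toy "images": the second is the first with a mild global brightness change.
img_a = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
img_b = [[min(255, p + 10) for p in row] for row in img_a]

d = hamming_distance(average_hash(img_a), average_hash(img_b))
# d == 0: a uniform brightness shift leaves the hash unchanged
```

The robustness to illumination comes from thresholding against the image's own mean; the same construction is, however, easily fooled by viewpoint changes, which is one reason such baselines struggle on ISV.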
12

Image-based localization in urban environment : application to augmented reality

Fond, Antoine 06 April 2018 (has links)
This thesis addresses the problem of localization in urban areas. Inferring accurate positioning in the city is important in many applications such as augmented reality or mobile robotics. However, systems based on inertial sensors (IMUs) are subject to significant drift, and GPS data can suffer from a valley effect that limits their accuracy. A natural solution is to rely on camera pose estimation in computer vision.
We notice that buildings are the main visual landmarks for humans but also objects of interest for augmented reality applications. We therefore aim to compute the camera pose relative to a database of known reference buildings from a single image. The problem is twofold: find the visible references in the current image (place recognition) and compute the camera pose relative to them. Conventional approaches to these two sub-problems are challenged in urban environments by strong perspective effects, frequent repetitions, and the visual similarity between facades. While approaches specific to these environments have been developed that exploit their high structural regularity, they still suffer from a number of limitations in the detection and recognition of facades as well as in pose computation through model registration. The original method developed in this thesis belongs to this family of specific approaches and aims to overcome these limitations in terms of efficiency and robustness to occlusions and to changes of viewpoint and illumination. To do so, the main idea is to take advantage of recent advances in deep learning with convolutional neural networks to extract high-level information on which geometric models can be based. Our approach is thus a mixed Bottom-Up/Top-Down one and is divided into three key stages. We first propose a method to estimate the rotation of the camera pose. The three main vanishing points of images of urban environments, known as the Manhattan vanishing points, are detected by a convolutional neural network (CNN) that estimates both these vanishing points and a segmentation of the image relative to them. A second refinement step uses this information and the image segments in a Bayesian formulation to estimate these points efficiently and more accurately. Estimating the camera's rotation allows the images to be rectified, removing perspective effects before the translation is sought.
In a second contribution, we aim to detect the facades in these rectified images and to recognize them among a database of known buildings in order to estimate a rough translation. For the sake of efficiency, a series of cues based on facade-specific characteristics (repetitions, symmetry, semantics) is proposed to enable the fast selection of potential facade candidates. These are then classified as facade or non-facade according to a new contextual CNN descriptor. Finally, the detected facades are matched to the references by a nearest-neighbour search under a metric learned on these descriptors. Although a rough translation can already be estimated from the detected facades, we refine this result by relying on the semantic segmentation of the image inferred by a CNN, chosen for its robustness to changes of illumination and small deformations. Since the facade is identified in the previous step, we adopt a model-based approach by registration. Because the registration and segmentation problems are linked, a Bayesian model is proposed that allows both to be solved jointly. This joint processing improves the results of both registration and segmentation while remaining efficient in terms of computation time. These three parts have been validated on established community datasets. The results show that our approach is fast and more robust to changes in shooting conditions than previous methods.
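The final matching step above, nearest-neighbour search under a learned metric, can be sketched as follows. The descriptors, the diagonal weight vector, and the database names below are illustrative stand-ins, not the thesis's actual CNN descriptors or learned metric.

```python
import math

def weighted_distance(d1, d2, weights):
    """Diagonal-Mahalanobis-style distance: dimensions with larger learned
    weights contribute more to the comparison."""
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, d1, d2)))

def match_facade(query, database, weights):
    """Return (distance, id) of the closest reference facade."""
    return min((weighted_distance(query, desc, weights), name)
               for name, desc in database.items())

weights = [2.0, 0.5, 1.0]                      # learned per-dimension importance
database = {"facade_a": [0.9, 0.1, 0.3],
            "facade_b": [0.2, 0.8, 0.7]}
dist, name = match_facade([0.85, 0.2, 0.25], database, weights)
# name == "facade_a"
```

In practice the metric would be learned (e.g., so that descriptors of the same facade under different viewpoints are pulled together), and the linear scan would be replaced by an indexed nearest-neighbour structure for large databases.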
13

Localisation of Mobile Robot in the Environment

Urban, Daniel January 2018 (has links)
This diploma thesis deals with the problem of mobile robot localisation in the environment based on current 2D and 3D sensor data and previous records. The work focuses on detecting places previously visited by the robot. The implemented system is suitable for loop detection, using Gestalt 3D descriptors. The output of the system provides the corresponding positions at which the robot was already located. The functionality of the system has been tested and evaluated on LiDAR data.
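The core of such a loop-detection system can be sketched as a descriptor comparison against the robot's history: the current scan's descriptor is matched against those of past scans, and places within a distance threshold are reported as loop candidates. The descriptors and the threshold below are illustrative toy values, not the thesis's actual Gestalt descriptors.

```python
def detect_loops(current, history, threshold=0.5):
    """Return (index, distance) for every past place matching the current scan."""
    matches = []
    for idx, past in enumerate(history):
        # Euclidean distance between descriptor vectors.
        dist = sum((a - b) ** 2 for a, b in zip(current, past)) ** 0.5
        if dist < threshold:
            matches.append((idx, dist))
    return matches

history = [[0.1, 0.9], [0.8, 0.2], [0.12, 0.88]]   # descriptors of past scans
loops = detect_loops([0.11, 0.89], history)
# places 0 and 2 are reported as revisited; place 1 is not
```

A real system would additionally verify candidates geometrically (e.g., by scan registration) before accepting a loop closure, since descriptor distance alone can produce false positives.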
14

Relative Navigation of Micro Air Vehicles in GPS-Degraded Environments

Wheeler, David Orton 01 December 2017 (has links)
Most micro air vehicles rely heavily on reliable GPS measurements for proper estimation and control, and therefore struggle in GPS-degraded environments. When GPS is not available, the global position and heading of the vehicle are unobservable. This dissertation establishes the theoretical and practical advantages of a relative navigation framework for MAV navigation in GPS-degraded environments. It explores how the consistency, accuracy, and stability of current navigation approaches degrade during prolonged GPS dropout and in the presence of heading uncertainty. Relative navigation (RN) is presented as an alternative approach that maintains observability by working with respect to a local coordinate frame. RN is compared with several current estimation approaches in a simulation environment and in hardware experiments. While still subject to global drift, RN is shown to produce consistent state estimates and stable control. Estimating relative states requires unique modifications to current estimation approaches. This dissertation further provides a tutorial exposition of the relative multiplicative extended Kalman filter, presenting how to properly ensure observable state estimation while maintaining consistency. The filter is derived using both inertial and body-fixed state definitions and dynamics. Finally, this dissertation presents a series of prolonged flight tests demonstrating the effectiveness of the relative navigation approach for autonomous GPS-degraded MAV navigation in varied, unknown environments. The system is shown to utilize a variety of vision sensors, work indoors and outdoors, run in real time with onboard processing, and require no special tuning for particular sensors or environments. Despite leveraging off-the-shelf sensors and algorithms, the flight tests demonstrate stable front-end performance with low drift.
The flight tests also demonstrate the onboard generation of a globally consistent, metric, and localized map by identifying and incorporating loop-closure constraints and intermittent GPS measurements. With this map, mission objectives are shown to be autonomously completed.
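The relative-navigation idea, keyframe-to-keyframe odometry estimates composed by a back end into a global pose while the front end works only in the local frame, can be sketched in 2D. The pose deltas below are illustrative; the dissertation's filter operates on full 3D states with covariances.

```python
import math

def compose(pose, delta):
    """Compose a global SE(2) pose (x, y, theta) with a relative
    (dx, dy, dtheta) step expressed in the pose's own frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Odometry between consecutive keyframes: forward 1 m, turn 90 deg, forward 1 m.
edges = [(1.0, 0.0, 0.0), (0.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]
pose = (0.0, 0.0, 0.0)
for e in edges:
    pose = compose(pose, e)
# pose is approximately (1, 1, pi/2)
```

The key property is that each edge is observable from local sensing alone; global drift accumulates only through the chain of compositions and can later be corrected by loop closures or intermittent GPS in the pose graph.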
15

Robust Optimization for Simultaneous Localization and Mapping / Robuste Optimierung für simultane Lokalisierung und Kartierung

Sünderhauf, Niko 25 April 2012 (has links) (PDF)
SLAM (Simultaneous Localization And Mapping) has been a very active and almost ubiquitous problem in the field of mobile and autonomous robotics for over two decades. For many years, filter-based methods dominated the SLAM literature, but a change of paradigm has been observed recently. Current state-of-the-art solutions of the SLAM problem are based on efficient sparse least-squares optimization techniques. However, it is commonly known that least-squares methods are by default not robust against outliers. In SLAM, such outliers arise mostly from data association errors like false-positive loop closures. Since the optimizers in current SLAM systems are not robust against outliers, they have to rely heavily on certain preprocessing steps to prevent or reject all data association errors. False-positive loop closures in particular will lead to catastrophically wrong solutions with current solvers. The problem is commonly acknowledged in the literature, but no concise solution has been proposed so far. The main focus of this work is to develop a novel formulation of the optimization-based SLAM problem that is robust against such outliers. The developed approach allows the back-end part of the SLAM system to change parts of the topological structure of the problem's factor graph representation during the optimization process. The back end can thereby discard individual constraints and converge towards correct solutions even in the presence of many false-positive loop closures. This largely increases the overall robustness of the SLAM system and closes a gap between the sensor-driven front end and the back-end optimizers. The approach is evaluated on both large-scale synthetic and real-world datasets. This work furthermore shows that the developed approach is versatile and can be applied beyond SLAM, in other domains where least-squares optimization problems are solved and outliers have to be expected.
This is successfully demonstrated in the domain of GPS-based vehicle localization in urban areas where multipath satellite observations often impede high-precision position estimates.
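The sensitivity of plain least squares to outliers, and the effect of letting the optimizer down-weight suspicious constraints, can be illustrated on a scalar toy problem via iteratively reweighted least squares with Huber-style weights. This is a much simpler mechanism than the thesis's switchable-constraint formulation, but it shows the same principle; all numbers are toy values.

```python
def robust_mean(measurements, k=1.0, iterations=20):
    """Weighted mean where residuals beyond k are down-weighted (Huber weights)."""
    estimate = sum(measurements) / len(measurements)
    for _ in range(iterations):
        weights = []
        for z in measurements:
            r = abs(z - estimate)
            weights.append(1.0 if r <= k else k / r)   # outliers get weight k/|r|
        estimate = sum(w * z for w, z in zip(weights, measurements)) / sum(weights)
    return estimate

data = [10.1, 9.9, 10.0, 10.2, 50.0]    # one gross outlier, like a false loop closure
naive = sum(data) / len(data)            # 18.04: pulled far toward the outlier
robust = robust_mean(data)               # stays near the inlier consensus (~10.3)
```

In the SLAM setting the "measurements" are graph constraints rather than scalars, and discarding a constraint changes the graph topology, but the failure mode of the naive estimate and the recovery by reweighting are the same in spirit.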
16

Superpixels and their Application for Visual Place Recognition in Changing Environments

Neubert, Peer 03 December 2015 (has links) (PDF)
Superpixels are the result of an image oversegmentation. They are an established intermediate-level image representation used for various applications including object detection, 3D reconstruction and semantic segmentation. While there are various approaches to creating such segmentations, there is a lack of knowledge about their properties; in particular, contradicting results have been published in the literature. This thesis identifies segmentation quality, stability, compactness and runtime as important properties of superpixel segmentation algorithms. While established evaluation methodologies are available for some of these properties, this is not the case for segmentation stability and compactness. Therefore, this thesis presents two novel metrics for their evaluation based on ground-truth optical flow. These two metrics are used together with other novel and existing measures to create a standardized benchmark for superpixel algorithms, which is used for an extensive comparison of available algorithms. The evaluation results motivate two novel segmentation algorithms that better balance the trade-offs of existing algorithms: the proposed Preemptive SLIC algorithm incorporates a local preemption criterion into the established SLIC algorithm and saves about 80 % of the runtime; the proposed Compact Watershed algorithm combines seeded watershed segmentation with compactness constraints to create regularly shaped, compact superpixels at the even higher speed of the plain watershed transformation. Operating autonomous systems based on visual navigation over the course of days, weeks or months requires repeated recognition of places despite severe appearance changes, as induced for example by illumination changes, day-night cycles, changing weather or seasons - a severe problem for existing methods.
Therefore, the second part of this thesis presents two novel approaches that incorporate superpixel segmentations into place recognition in changing environments. The first is the learning of systematic appearance changes: instead of matching images between, for example, summer and winter directly, an additional prediction step is proposed. Based on superpixel vocabularies, a predicted image is generated that shows how the summer scene could look in winter, or vice versa. The presented results show that, if certain assumptions on the appearance changes and the available training data are met, existing holistic place recognition approaches can benefit from this additional prediction step. Holistic approaches to place recognition are known to fail in the presence of viewpoint changes. Therefore, this thesis presents a new place recognition system based on local landmarks and Star-Hough. Star-Hough is a novel approach to incorporating the spatial arrangement of local image features into the computation of image similarities. It is based on star graph models and Hough voting, and is particularly suited for local features with low spatial precision and high outlier rates, as are expected in the presence of appearance changes. The novel landmarks are a combination of local region detectors and descriptors based on convolutional neural networks. This thesis presents and evaluates several new approaches to incorporating superpixel segmentations into local region detection. While the proposed system can be used with different types of local regions, the combination with regions obtained from the novel multiscale superpixel grid in particular performs superior to state-of-the-art methods - a promising basis for practical applications.
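One of the properties the thesis benchmarks, compactness, can be made concrete with a simple shape measure: the isoperimetric quotient 4*pi*A / P^2, which is 1.0 for a perfect circle and smaller for elongated or ragged segments. The grid masks below are toy examples, and this particular formula is an illustrative choice of measure, not necessarily the metric proposed in the thesis.

```python
import math

def compactness(mask):
    """Isoperimetric quotient of a binary mask (list of 0/1 rows), with
    area = pixel count and perimeter = number of exposed pixel edges."""
    rows, cols = len(mask), len(mask[0])
    area, perimeter = 0, 0
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            area += 1
            # An edge is exposed if its neighbour is outside the mask.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < rows and 0 <= cc < cols) or not mask[rr][cc]:
                    perimeter += 1
    return 4 * math.pi * area / perimeter ** 2

square = [[1, 1], [1, 1]]                 # compact 2x2 block
line = [[1, 1, 1, 1]]                     # elongated 1x4 strip
# compactness(square) > compactness(line)
```

Regularly shaped superpixels, as produced by the proposed Compact Watershed algorithm, score high under such a measure; meandering watershed regions score low.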
18

Cooperative Navigation of Fixed-Wing Micro Air Vehicles in GPS-Denied Environments

Ellingson, Gary James 05 November 2019 (has links)
Micro air vehicles have recently gained popularity due to their potential as autonomous systems. Their future impact, however, will depend in part on how well they can navigate in GPS-denied and GPS-degraded environments. In response to this need, this dissertation investigates a potential solution for GPS-denied operations called relative navigation. The method utilizes keyframe-to-keyframe odometry estimates and their covariances in a global back end that represents the global state as a pose graph. The back end is able to effectively represent nonlinear uncertainties and incorporate opportunistic global constraints. The GPS-denied research community has, for the most part, neglected to consider fixed-wing aircraft. This dissertation enables fixed-wing aircraft to utilize relative navigation by accounting for their sensing requirements. The development of an odometry-like, front-end, EKF-based estimator that utilizes only a monocular camera and an inertial measurement unit is presented. The filter uses the measurement model of the multi-state-constraint Kalman filter and regularly performs relative resets in coordination with keyframe declarations. In addition to the front-end development, a method is provided to account for front-end velocity bias in the back-end optimization. Finally, a method is presented for enabling multiple vehicles to improve navigational accuracy by cooperatively sharing information. Modifications to the relative navigation architecture are presented that enable decentralized, cooperative operations amidst temporary communication dropouts. The proposed framework also includes the ability to incorporate inter-vehicle measurements and utilizes a new concept called the coordinated reset, which is necessary for optimizing the cooperative odometry and improving localization. Each contribution is demonstrated through simulation and/or hardware flight testing. Simulation and Monte Carlo testing are used to show the expected quality of the results.
Hardware flight-test results show the front-end estimator performance, several back-end optimization examples, and cooperative GPS-denied operations.
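Why cooperatively sharing information improves accuracy can be illustrated with the simplest possible fusion rule: two vehicles holding independent Gaussian estimates of the same quantity combine them by inverse-variance weighting, and the fused variance is smaller than either input. The numbers are illustrative, not from the dissertation's flight tests.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance-weighted fusion of two independent scalar estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)        # always below min(var_a, var_b)
    return fused, fused_var

# Vehicle A is confident (small variance); vehicle B is not.
fused, fused_var = fuse(5.0, 1.0, 7.0, 4.0)
# fused == 5.4, fused_var == 0.8: better than either vehicle alone
```

The dissertation's cooperative framework solves a much richer version of this problem (pose graphs, inter-vehicle measurements, communication dropouts), but the same principle, that independent information always tightens the estimate, is what makes sharing worthwhile.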
20

Visual Place Recognition in Changing Environments using Additional Data-Inherent Knowledge

Schubert, Stefan 15 November 2023 (has links)
Visual place recognition is the task of finding the same places in a set of database images for a given set of query images. This becomes particularly challenging for long-term applications when the environmental conditions change between or within the database and query set, e.g., from day to night. Visual place recognition in changing environments can be used when global position data like GPS is unavailable or very inaccurate, or for redundancy. It is required for tasks like loop closure detection in SLAM, candidate selection for global localization, or multi-robot/multi-session mapping and map merging. In contrast to pure image retrieval, visual place recognition can often build upon additional information and data to improve performance, runtime, or memory usage. This includes additional data-inherent knowledge: information that is contained in the image sets themselves because of the way they were recorded. Using data-inherent knowledge avoids a dependency on other sensors, which increases the generality of such methods for integration into many existing place recognition pipelines. This thesis focuses on the usage of additional data-inherent knowledge. After a discussion of the basics of visual place recognition, the thesis gives a systematic overview of existing data-inherent knowledge and corresponding methods. Subsequently, the thesis concentrates on a deeper consideration and exploitation of four different types of additional data-inherent knowledge: 1) sequences, i.e., the database and query set are recorded as spatio-temporal sequences so that consecutive images are also adjacent in the world; 2) knowledge of whether the environmental conditions within the database and query set are constant or continuously changing; 3) intra-database similarities between the database images; and 4) intra-query similarities between the query images. Except for sequences, all of these types have received little attention in the literature so far.
For the exploitation of knowledge about constant conditions within the database and query set (e.g., database: summer, query: winter), the thesis evaluates different descriptor standardization techniques. For the alternative scenario of continuous condition changes (e.g., database: sunny to rainy, query: sunny to cloudy), the thesis first investigates the qualitative and quantitative impact on the performance of image descriptors. It then proposes and evaluates four unsupervised learning methods, including our novel clustering-based descriptor standardization method K-STD and three PCA-based methods from the literature. To address the high computational effort of descriptor comparisons during place recognition, our novel method EPR for efficient place recognition is proposed. Given a query descriptor, EPR uses sequence information and intra-database similarities to identify nearly all matching descriptors in the database. For a structured combination of several sources of additional knowledge in a single graph, the thesis presents our novel graphical framework for place recognition. After the minimization of the graph's error with our proposed ICM-based optimization, the place recognition performance can be significantly improved. For an extensive experimental evaluation of all methods in this thesis and beyond, a benchmark for visual place recognition in changing environments is presented, which is composed of six datasets with thirty sequence combinations.
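The sequence idea above can be sketched in a few lines: instead of matching a single query image, the similarity scores of consecutive (database, query) pairs along a diagonal of the pairwise-similarity matrix are accumulated, which suppresses single-image mismatches. The descriptors and sequence length below are toy values, and this sketch is far simpler than the thesis's EPR or graph-based methods.

```python
def similarity(d1, d2):
    """Cosine similarity between two descriptors."""
    dot = sum(a * b for a, b in zip(d1, d2))
    na = sum(a * a for a in d1) ** 0.5
    nb = sum(b * b for b in d2) ** 0.5
    return dot / (na * nb)

def best_sequence_match(db, query, seq_len=2):
    """Return the database start index whose aligned run of seq_len images
    best matches the query sequence."""
    best = None
    for start in range(len(db) - seq_len + 1):
        score = sum(similarity(db[start + i], query[i]) for i in range(seq_len))
        if best is None or score > best[1]:
            best = (start, score)
    return best[0]

db = [[1.0, 0.0], [0.9, 0.4], [0.0, 1.0], [0.1, 0.9]]
query = [[0.0, 0.95], [0.15, 0.9]]        # revisits the end of the database route
start = best_sequence_match(db, query)
# start == 2
```

Real sequence methods additionally allow for velocity differences between the two traversals (non-unit-slope diagonals); the fixed alignment here is the simplest special case.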
