1 |
Robust Optimization for Simultaneous Localization and Mapping / Robuste Optimierung für simultane Lokalisierung und Kartierung
Sünderhauf, Niko, 25 April 2012 (has links) (PDF)
SLAM (Simultaneous Localization And Mapping) has been a very active and almost ubiquitous problem in the field of mobile and autonomous robotics for over two decades. For many years, filter-based methods dominated the SLAM literature, but a paradigm shift has been observed recently.
Current state-of-the-art solutions to the SLAM problem are based on efficient sparse least squares optimization techniques. However, least squares methods are well known not to be robust against outliers by default. In SLAM, such outliers arise mostly from data association errors like false positive loop closures. Since the optimizers in current SLAM systems are not robust against outliers, they have to rely heavily on preprocessing steps to prevent or reject all data association errors. False positive loop closures in particular lead to catastrophically wrong solutions with current solvers. The problem is widely acknowledged in the literature, but no concise solution has been proposed so far.
The main focus of this work is to develop a novel formulation of the optimization-based SLAM problem that is robust against such outliers. The developed approach allows the back-end part of the SLAM system to change parts of the topological structure of the problem's factor graph representation during the optimization process. The back-end can thereby discard individual constraints and converge towards correct solutions even in the presence of many false positive loop closures. This greatly increases the overall robustness of the SLAM system and closes a gap between the sensor-driven front-end and the back-end optimizers. The approach is evaluated on both large-scale synthetic and real-world datasets.
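The idea of letting the optimizer discard individual loop-closure constraints can be sketched with switch variables that scale each loop-closure residual and are optimized jointly with the poses. The following toy 1D pose-chain example uses assumed unit weights and scipy as a generic solver rather than the thesis's own back-end:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 1D pose chain (assumed weights, not the thesis back-end): odometry
# says each step is +1; one loop closure is correct, one is a false positive.
odometry = [(i, i + 1, 1.0) for i in range(4)]
loops = [(0, 4, 4.0),   # correct: x4 - x0 = 4
         (0, 3, 0.0)]   # false positive: claims x3 - x0 = 0

def residuals(zv):
    x, s = zv[:5], zv[5:]                    # poses and switch variables
    res = [x[0]]                             # anchor x0 at 0
    for i, j, d in odometry:
        res.append(x[j] - x[i] - d)
    for k, (i, j, d) in enumerate(loops):
        res.append(s[k] * (x[j] - x[i] - d)) # switched loop constraint
        res.append(1.0 - s[k])               # prior pulling the switch to "on"
    return np.array(res)

z0 = np.concatenate([np.arange(5.0), np.ones(2)])  # odometry init, switches on
sol = least_squares(residuals, z0).x
# The optimizer keeps the correct loop (sol[5] stays near 1) and effectively
# turns the false one off (sol[6] drops), so the poses stay near 0..4.
```

Because the switched residual vanishes as its switch approaches zero, the cost of an inconsistent loop closure can be "bought off" by paying only the small switch prior, leaving the consistent constraints to determine the solution.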
This work furthermore shows that the developed approach is versatile and can be applied beyond SLAM, in other domains where least squares optimization problems are solved and outliers are to be expected. This is successfully demonstrated in the domain of GPS-based vehicle localization in urban areas, where multipath satellite observations often impede high-precision position estimates.
|
3 |
Faktorgraph-basierte Sensordatenfusion zur Anwendung auf einem Quadrocopter / Factor Graph Based Sensor Fusion for a Quadrotor UAV
Lange, Sven, 13 December 2013 (has links) (PDF)
Sensor data fusion is a ubiquitous task in mobile robotics and beyond. This thesis questions the approach typically used for sensor data fusion in robotics, solves the problem with novel algorithms based on a factor graph, and compares them with a corresponding Extended Kalman Filter implementation. The focus is on the technical and algorithmic sensor concept for navigating a flying robot indoors. Extensive experiments show the quality improvements achieved with the new sensor fusion variant, but also its limitations, as well as cases in which both variants produce nearly identical results. Besides experiments based on a hardware-oriented simulation, the approach is also evaluated on real hardware data.
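The comparison between factor-graph smoothing and an Extended Kalman Filter can be illustrated in a minimal linear-Gaussian 1D setting (an assumed toy model, not the thesis's quadrotor setup), where batch least squares over the whole trajectory and a recursive Kalman filter agree on the latest state:

```python
import numpy as np

# Toy 1D fusion (assumed model): a robot takes noisy odometry steps u[k]
# and receives absolute position measurements z[k].
u = [1.0, 1.1, 0.9]      # odometry (true step size: 1.0)
z = [1.2, 1.9, 3.1]      # absolute position measurements
q, r = 0.1, 0.2          # odometry and measurement noise variances

# (a) Recursive fusion with a Kalman filter.
x, p = 0.0, 1e-6         # initial state, known almost exactly
for uk, zk in zip(u, z):
    x, p = x + uk, p + q                     # predict
    g = p / (p + r)                          # Kalman gain
    x, p = x + g * (zk - x), (1 - g) * p     # update

# (b) Batch fusion: a chain-shaped factor graph solved as sparse least
# squares; rows are whitened prior, odometry, and measurement factors.
n = len(u) + 1
A, b = np.zeros((2 * n - 1, n)), np.zeros(2 * n - 1)
A[0, 0] = 1e3                                # prior factor: x0 = 0
for i in range(len(u)):
    w = 1.0 / np.sqrt(q)                     # odometry factor x[i+1]-x[i]=u[i]
    A[1 + i, i], A[1 + i, i + 1], b[1 + i] = -w, w, w * u[i]
    wm = 1.0 / np.sqrt(r)                    # measurement factor x[i+1]=z[i]
    row = 1 + len(u) + i
    A[row, i + 1], b[row] = wm, wm * z[i]
x_batch = np.linalg.lstsq(A, b, rcond=None)[0]
```

In this linear case both estimates of the final pose coincide; the practical differences the thesis investigates arise with nonlinear models, where the batch formulation can re-linearize past states while the filter cannot.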
|
5 |
Structureless Camera Motion Estimation of Unordered Omnidirectional Images
Sastuba, Mark, 08 August 2022 (has links)
This work aims at providing a novel camera motion estimation pipeline for large collections of unordered omnidirectional images. In order to keep the pipeline as general and flexible as possible, cameras are modelled as unit spheres, which allows any central camera type to be incorporated. For each camera, an unprojection lookup called a P2S-map (Pixel-to-Sphere map) is generated from the intrinsics, mapping pixels to their corresponding positions on the unit sphere. Consequently, the camera geometry becomes independent of the underlying projection model. The pipeline also generates P2S-maps from world map projections with fewer distortion effects, as they are known from cartography. Using P2S-maps from camera calibration and world map projections allows omnidirectional camera images to be converted into an appropriate world map projection in order to apply standard feature extraction and matching algorithms for data association. The proposed estimation pipeline combines the flexibility of SfM (Structure from Motion), which handles unordered image collections, with the efficiency of PGO (Pose Graph Optimization), which is used as the back-end in graph-based Visual SLAM (Simultaneous Localization and Mapping) approaches to optimize camera poses from large image sequences. SfM uses BA (Bundle Adjustment) to jointly optimize camera poses (motion) and 3D feature locations (structure), which becomes computationally expensive for large-scale scenarios. In contrast, PGO solves for camera poses (motion) from measured transformations between cameras, keeping the optimization manageable. The proposed estimation algorithm combines both worlds. It obtains up-to-scale transformations between image pairs using two-view constraints, which are jointly scaled using trifocal constraints. A pose graph is generated from the scaled two-view transformations and solved by PGO to obtain camera motion efficiently, even for large image collections.
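The role of a P2S-map can be sketched for the simplest case, an ideal equirectangular image (an assumed toy model; the thesis generates such lookups from calibrated intrinsics of general central camera models):

```python
import numpy as np

def p2s_map(width, height):
    """Return an (H, W, 3) lookup of unit-sphere directions, one per pixel,
    for an ideal equirectangular image (longitude/latitude layout)."""
    u = (np.arange(width) + 0.5) / width          # pixel centers in [0, 1)
    v = (np.arange(height) + 0.5) / height
    lon = (u - 0.5) * 2.0 * np.pi                 # longitude in (-pi, pi)
    lat = (0.5 - v) * np.pi                       # latitude in (-pi/2, pi/2)
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

m = p2s_map(8, 4)   # every entry is a point on the unit sphere
```

Once such a lookup exists for each camera, downstream geometry only ever sees unit-sphere directions, which is what makes the pipeline independent of the projection model.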
The obtained results can be used as input data to provide initial pose estimates for further 3D reconstruction purposes, e.g. to build a sparse structure from feature correspondences in an SfM or SLAM framework with further refinement via BA.
The pipeline also incorporates fixed extrinsic constraints from multi-camera setups as well as depth information provided by RGBD sensors. The entire camera motion estimation pipeline does not need to generate a sparse 3D structure of the captured environment and is therefore called SCME (Structureless Camera Motion Estimation).
1 Introduction
1.1 Motivation
1.1.1 Increasing Interest of Image-Based 3D Reconstruction
1.1.2 Underground Environments as Challenging Scenario
1.1.3 Improved Mobile Camera Systems for Full Omnidirectional Imaging
1.2 Issues
1.2.1 Directional versus Omnidirectional Image Acquisition
1.2.2 Structure from Motion versus Visual Simultaneous Localization and Mapping
1.3 Contribution
1.4 Structure of this Work
2 Related Work
2.1 Visual Simultaneous Localization and Mapping
2.1.1 Visual Odometry
2.1.2 Pose Graph Optimization
2.2 Structure from Motion
2.2.1 Bundle Adjustment
2.2.2 Structureless Bundle Adjustment
2.3 Corresponding Issues
2.4 Proposed Reconstruction Pipeline
3 Cameras and Pixel-to-Sphere Mappings with P2S-Maps
3.1 Types
3.2 Models
3.2.1 Unified Camera Model
3.2.2 Polynomial Camera Model
3.2.3 Spherical Camera Model
3.3 P2S-Maps - Mapping onto Unit Sphere via Lookup Table
3.3.1 Lookup Table as Color Image
3.3.2 Lookup Interpolation
3.3.3 Depth Data Conversion
4 Calibration
4.1 Overview of Proposed Calibration Pipeline
4.2 Target Detection
4.3 Intrinsic Calibration
4.3.1 Selected Examples
4.4 Extrinsic Calibration
4.4.1 3D-2D Pose Estimation
4.4.2 2D-2D Pose Estimation
4.4.3 Pose Optimization
4.4.4 Uncertainty Estimation
4.4.5 Pose Graph Representation
4.4.6 Bundle Adjustment
4.4.7 Selected Examples
5 Full Omnidirectional Image Projections
5.1 Panoramic Image Stitching
5.2 World Map Projections
5.3 World Map Projection Generator for P2S-Maps
5.4 Conversion between Projections based on P2S-Maps
5.4.1 Proposed Workflow
5.4.2 Data Storage Format
5.4.3 Real World Example
6 Relations between Two Camera Spheres
6.1 Forward and Backward Projection
6.2 Triangulation
6.2.1 Linear Least Squares Method
6.2.2 Alternative Midpoint Method
6.3 Epipolar Geometry
6.4 Transformation Recovery from Essential Matrix
6.4.1 Cheirality
6.4.2 Standard Procedure
6.4.3 Simplified Procedure
6.4.4 Improved Procedure
6.5 Two-View Estimation
6.5.1 Evaluation Strategy
6.5.2 Error Metric
6.5.3 Evaluation of Estimation Algorithms
6.5.4 Concluding Remarks
6.6 Two-View Optimization
6.6.1 Epipolar-Based Error Distances
6.6.2 Projection-Based Error Distances
6.6.3 Comparison between Error Distances
6.7 Two-View Translation Scaling
6.7.1 Linear Least Squares Estimation
6.7.2 Non-Linear Least Squares Optimization
6.7.3 Comparison between Initial and Optimized Scaling Factor
6.8 Homography to Identify Degeneracies
6.8.1 Homography for Spherical Cameras
6.8.2 Homography Estimation
6.8.3 Homography Optimization
6.8.4 Homography and Pure Rotation
6.8.5 Homography in Epipolar Geometry
7 Relations between Three Camera Spheres
7.1 Three-View Geometry
7.2 Crossing Epipolar Planes Geometry
7.3 Trifocal Geometry
7.4 Relation between Trifocal, Three-View and Crossing Epipolar Planes
7.5 Translation Ratio between Up-To-Scale Two-View Transformations
7.5.1 Structureless Determination Approaches
7.5.2 Structure-Based Determination Approaches
7.5.3 Comparison between Proposed Approaches
8 Pose Graphs
8.1 Optimization Principle
8.2 Solvers
8.2.1 Additional Graph Solvers
8.2.2 False Loop Closure Detection
8.3 Pose Graph Generation
8.3.1 Generation of Synthetic Pose Graph Data
8.3.2 Optimization of Synthetic Pose Graph Data
9 Structureless Camera Motion Estimation
9.1 SCME Pipeline
9.2 Determination of Two-View Translation Scale Factors
9.3 Integration of Depth Data
9.4 Integration of Extrinsic Camera Constraints
10 Camera Motion Estimation Results
10.1 Directional Camera Images
10.2 Omnidirectional Camera Images
11 Conclusion
11.1 Summary
11.2 Outlook and Future Work
Appendices
A.1 Additional Extrinsic Calibration Results
A.2 Linear Least Squares Scaling
A.3 Proof Rank Deficiency
A.4 Alternative Derivation Midpoint Method
A.5 Simplification of Depth Calculation
A.6 Relation between Epipolar and Circumferential Constraint
A.7 Covariance Estimation
A.8 Uncertainty Estimation from Epipolar Geometry
A.9 Two-View Scaling Factor Estimation: Uncertainty Estimation
A.10 Two-View Scaling Factor Optimization: Uncertainty Estimation
A.11 Depth from Adjoining Two-View Geometries
A.12 Alternative Three-View Derivation
A.12.1 Second Derivation Approach
A.12.2 Third Derivation Approach
A.13 Relation between Trifocal Geometry and Alternative Midpoint Method
A.14 Additional Pose Graph Generation Examples
A.15 Pose Graph Solver Settings
A.16 Additional Pose Graph Optimization Examples
Bibliography
|
6 |
Towards Dense Visual SLAM
Pietzsch, Tobias, 05 December 2011 (has links) (PDF)
Visual Simultaneous Localisation and Mapping (SLAM) is concerned with simultaneously estimating the pose of a camera and a map of the environment from a sequence of images. Traditionally, sparse maps comprising isolated point features have been employed, which facilitate robust localisation but are not well suited to advanced applications. In this thesis, we present map representations that allow a denser description of the environment. In one approach, planar features are used to represent textured planar surfaces in the scene. This model is applied within a visual SLAM framework based on the Extended Kalman Filter. We present solutions to several challenges that arise from this approach.
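For planar features, a standard geometric relation (not necessarily the parameterization used in the thesis) is the plane-induced homography: a plane with unit normal n and distance d in the first camera frame induces H = R + t nᵀ / d, mapping normalized image coordinates between two calibrated views. A minimal consistency check:

```python
import numpy as np

def plane_homography(R, t, n, d):
    """Homography induced by the plane {X : n @ X = d} (given in the first
    camera frame) between two calibrated views with X2 = R @ X + t."""
    return R + np.outer(t, n) / d

# Consistency check with a point lying on the plane.
n, d = np.array([0.0, 0.0, 1.0]), 2.0
X = np.array([0.3, -0.1, 2.0])                # satisfies n @ X = d
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])   # small sideways translation
H = plane_homography(R, t, n, d)
x1 = X / X[2]                                 # normalized point, view 1
x2_pred = H @ x1
x2_pred = x2_pred / x2_pred[2]                # predicted point, view 2
X2 = R @ X + t
x2 = X2 / X2[2]                               # ground-truth point, view 2
```

This relation is why a textured plane, unlike an isolated point, constrains camera motion through every pixel of the patch rather than a single correspondence.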
|
8 |
Visual Place Recognition in Changing Environments using Additional Data-Inherent Knowledge
Schubert, Stefan, 15 November 2023 (links)
Visual place recognition is the task of finding the same places in a set of database images for a given set of query images. This becomes particularly challenging in long-term applications when the environmental condition changes between or within the database and query set, e.g., from day to night. Visual place recognition in changing environments can be used when global position data like GPS is unavailable or very inaccurate, or for redundancy. It is required for tasks like loop closure detection in SLAM, candidate selection for global localization, or multi-robot/multi-session mapping and map merging.
In contrast to pure image retrieval, visual place recognition can often build upon additional information and data to improve performance, runtime, or memory usage. This includes additional data-inherent knowledge, i.e., information that is contained in the image sets themselves because of the way they were recorded. Using data-inherent knowledge avoids dependence on other sensors, which increases the generality of the methods and eases their integration into many existing place recognition pipelines.
This thesis focuses on the usage of additional data-inherent knowledge. After discussing the basics of visual place recognition, the thesis gives a systematic overview of existing data-inherent knowledge and corresponding methods. Subsequently, the thesis concentrates on a deeper consideration and exploitation of four different types of additional data-inherent knowledge: 1) sequences, i.e., the database and query set are recorded as spatio-temporal sequences so that consecutive images are also adjacent in the world, 2) knowledge of whether the environmental conditions within the database and query set are constant or continuously changing, 3) intra-database similarities between the database images, and 4) intra-query similarities between the query images. Except for sequences, these types have received little attention in the literature so far.
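Knowledge type 1, sequences, is classically exploited by aggregating similarity scores along diagonals of the query-vs-database similarity matrix (a SeqSLAM-style sketch on synthetic inputs, not the thesis's method):

```python
import numpy as np

def sequence_scores(S, L=3):
    """Aggregate a (num_query, num_db) similarity matrix S by summing
    along length-L diagonal segments (constant-velocity assumption)."""
    nq, ndb = S.shape
    out = np.full((nq, ndb), -np.inf)
    for i in range(nq - L + 1):
        for j in range(ndb - L + 1):
            out[i, j] = sum(S[i + k, j + k] for k in range(L))
    return out

# Synthetic example: true matches lie on the diagonal; one spurious
# single-image high similarity fools per-image matching.
S = np.full((6, 6), 0.1)
np.fill_diagonal(S, 0.9)
S[2, 3] = 1.0                       # spurious match for query image 2
seq = sequence_scores(S, L=3)
# Per-image argmax for query 2 picks database image 3 (wrong);
# sequence aggregation picks database image 2 (correct).
```

A single outlier similarity rarely extends along a whole diagonal segment, which is why sequence aggregation suppresses it.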
For the exploitation of knowledge about constant conditions within the database and query set (e.g., database: summer, query: winter), the thesis evaluates different descriptor standardization techniques. For the alternative scenario of continuous condition changes (e.g., database: sunny to rainy, query: sunny to cloudy), the thesis first investigates the qualitative and quantitative impact of such changes on the performance of image descriptors. It then proposes and evaluates four unsupervised learning methods, including our novel clustering-based descriptor standardization method K-STD and three PCA-based methods from the literature. To address the high computational effort of descriptor comparisons during place recognition, our novel method EPR for efficient place recognition is proposed. Given a query descriptor, EPR uses sequence information and intra-database similarities to identify nearly all matching descriptors in the database. For a structured combination of several sources of additional knowledge in a single graph, the thesis presents our novel graphical framework for place recognition. After minimizing the graph's error with our proposed ICM-based optimization, the place recognition performance can be significantly improved. For an extensive experimental evaluation of all methods in this thesis and beyond, a benchmark for visual place recognition in changing environments is presented, which is composed of six datasets with thirty sequence combinations.
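A minimal illustration of why descriptor standardization helps under a constant condition change (plain per-dimension standardization on synthetic descriptors; the thesis's K-STD is a clustering-based refinement of this basic idea):

```python
import numpy as np

def standardize(D):
    """Per-dimension zero-mean, unit-variance standardization of a
    (num_images, dim) descriptor matrix."""
    return (D - D.mean(axis=0)) / (D.std(axis=0) + 1e-12)

def cosine_similarities(Q, DB):
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    DBn = DB / np.linalg.norm(DB, axis=1, keepdims=True)
    return Qn @ DBn.T                       # (num_query, num_db)

rng = np.random.default_rng(42)
base = rng.normal(size=(10, 32))            # one signature per place
db = base + 0.01 * rng.normal(size=(10, 32))           # condition A
query = base + 0.01 * rng.normal(size=(10, 32)) + 2.0  # condition B: offset
S = cosine_similarities(standardize(query), standardize(db))
matches = S.argmax(axis=1)                  # best database image per query
```

The simulated condition change is a global additive offset; standardizing each set separately removes it, so cosine matching recovers the place identities.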
|