1 |
Monocular vision based localization and mapping. Jama, Michal. January 1900.
Doctor of Philosophy / Department of Electrical and Computer Engineering / Balasubramaniam Natarajan / Dale E. Schinstock / In this dissertation, two applications related to vision-based localization and mapping are considered:
(1) improving satellite-navigation-based location estimates using on-board camera images, and
(2) deriving position information from video stream and using it to aid an auto-pilot of an unmanned aerial vehicle (UAV).
In the first part of this dissertation, a method is presented for analyzing bundle adjustment (BA), the minimization process used in stereo-imagery-based 3D terrain reconstruction to refine estimates of camera poses (positions and orientations). Imagery obtained with pushbroom cameras is of particular interest. This work proposes a method to identify cases in which BA does not work as intended, i.e., cases in which the pose estimates returned by BA are no more accurate than the estimates provided by a satellite navigation system, due to the existence of degrees of freedom (DOF) in the BA. Using inaccurate pose estimates causes warping and scaling effects in the reconstructed terrain and prevents the terrain from being used in scientific analysis.
The main contributions of this part of the work are: 1) the formulation of a method for detecting DOF in the BA; and 2) the identification of DOF in two camera geometries commonly used to obtain stereo imagery. This part also presents results demonstrating that avoiding these DOF can give significant accuracy gains in aerial imagery.
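The DOF described here manifest themselves as flat directions of the BA cost surface, i.e., near-null directions of the Jacobian. As a general illustration of that idea (not the dissertation's actual detection method), one can count the near-zero singular values of a Jacobian:

```python
import numpy as np

def count_gauge_freedoms(J, tol=1e-8):
    """Count near-zero singular values of a bundle-adjustment
    Jacobian. Each one corresponds to a direction in which the
    reprojection cost is flat, i.e. an unconstrained DOF."""
    s = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(s < tol * s.max()))

# Toy Jacobian with one linearly dependent column, mimicking a
# camera geometry that leaves one direction unconstrained.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 5))
J[:, 4] = J[:, 0] + J[:, 1]  # introduce the dependency
print(count_gauge_freedoms(J))  # 1
```

In a real BA problem the Jacobian is large and sparse, so the same diagnosis would use sparse eigensolvers rather than a dense SVD.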
The second part of this dissertation proposes a vision-based system for UAV navigation: a monocular simultaneous localization and mapping (SLAM) system that measures the position and orientation of the camera and builds a map of the environment from the video stream of a single camera. This differs from common SLAM solutions, which use sensors that measure depth directly, such as LIDAR, stereoscopic cameras, or depth cameras. The SLAM solution was built by significantly modifying and extending a recent open-source SLAM system that is fundamentally different from the traditional approach to the SLAM problem.
The modifications made are those needed to provide the position measurements necessary for the navigation solution on a UAV while simultaneously building the map, all while maintaining control of the UAV.
The main contributions of this part include: 1) extension of the map-building algorithm so that it can be used realistically while controlling a UAV and simultaneously building the map; 2) improved performance of the SLAM algorithm at lower camera frame rates; and 3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV using monocular vision for navigation is feasible and can be effective in Global Positioning System (GPS) denied environments.
|
2 |
Precise Image Registration and Occlusion Detection. Khare, Vinod. 08 September 2011.
No description available.
|
3 |
Outdoor robotic navigation by GPS and monocular vision sensor fusion. Codol, Jean-Marie. 15 February 2012.
We are witnessing today the importation of ICT (Information and Communications Technology) into robotics. In the coming years, the union of these technologies will give birth to consumer service robotics. This future, if realised, will be the fruit of upstream research in many domains: mechatronics, telecommunications, control, signal and image processing, artificial intelligence ... One particularly interesting aspect of mobile robotics is the simultaneous localisation and mapping problem: in many cases, a mobile robot must localise itself in its environment before it can act intelligently. The question is then: what precision can we hope for in terms of localisation, and at what cost? In this context, an objective shared by robotics research laboratories, whose results are keenly awaited by industry, is positioning and environment mapping that is at once precise, available everywhere, reliable, low-cost, and real-time. The sensors of choice are inexpensive ones such as a standard GPS receiver (of metric precision) and a set of payload sensors that can be carried on board (such as video cameras). These types of sensors are the main support of our work.
In this thesis, we address the localisation problem of a mobile robot and choose to treat it with a probabilistic approach. The procedure is as follows: we define our 'variables of interest' (a set of random variables), describe their distribution laws and evolution models, and then determine a cost function from which to build an observer (a class of algorithms whose objective is to find the minimum of the cost function). Our contribution consists of using raw GPS measurements (raw data are the measurements produced by the code and carrier tracking loops, respectively called code and carrier-phase pseudorange measurements) for precise low-cost navigation in suburban outdoor environments. Exploiting the integer property of GPS carrier-phase ambiguities, we extend this navigation to a precise, low-cost GPS-RTK (Real-Time Kinematic) system in local differential mode. Our proposals are validated by experiments carried out on our robotic demonstrator.
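For background, the code pseudoranges mentioned in this abstract determine a position fix through a nonlinear least-squares problem in the receiver position and clock bias. The following sketch is a textbook Gauss-Newton solver on synthetic data, not the thesis's implementation:

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=10):
    """Iterative least squares for receiver position and clock bias
    from code pseudoranges (Gauss-Newton on the range equations).
    sat_pos: (N, 3) satellite positions; pseudoranges: (N,)."""
    x = np.zeros(4)                          # [px, py, pz, clock bias]
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)
        predicted = d + x[3]
        # Jacobian: unit vectors from satellites to receiver, plus a
        # column of ones for the clock-bias term.
        H = np.hstack([(x[:3] - sat_pos) / d[:, None],
                       np.ones((len(d), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    return x

# Synthetic check: five satellites, known truth, noise-free ranges.
rng = np.random.default_rng(1)
truth = np.array([1.0, 2.0, 3.0, 0.5])
sats = rng.uniform(-30, 30, size=(5, 3)) * 1e3
rho = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
est = solve_position(sats, rho)
print(np.allclose(est, truth, atol=1e-6))  # True
```

Real raw-data processing adds atmospheric, ephemeris, and multipath error models, and the thesis's RTK contribution further resolves the integer carrier-phase ambiguities; none of that is shown here.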
|
4 |
Computer vision system for identifying road signs using triangulation and bundle adjustment. Krishnan, Anupama. January 1900.
Master of Science / Department of Electrical and Computer Engineering / Christopher L. Lewis / This thesis describes the development of an automated computer vision system that identifies and inventories road signs from imagery acquired by the Kansas Department of Transportation's road profiling system, which takes images every 26.4 feet on highways throughout the state. Statistical models characterizing the typical size, color, and physical location of signs are used to help identify signs in the imagery. First, two phases of a computationally efficient K-Means clustering algorithm are applied to the images to achieve over-segmentation; the novel second phase ensures over-segmentation without excessive computation. Extremely large and very small segments are rejected, and the remaining segments are classified based on color. Finally, the frame-to-frame trajectories of sign-colored segments are analyzed using triangulation and bundle adjustment to determine their physical location relative to the road video log system. Objects having the appropriate color and physical placement are entered into a sign database. To develop the statistical models used for classification, a representative set of images was segmented and manually labeled, yielding joint probabilistic models characterizing the color and location typical of road signs. Receiver Operating Characteristic curves were generated and analyzed to adjust the thresholds for class identification. The system was tested and its performance characteristics are presented.
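A two-phase over-segmentation of the kind described can be sketched as follows. The function names, parameters, and data are hypothetical; this only illustrates the coarse-then-refine idea, not the thesis's algorithm:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = ((X[:, None, :] - C) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return labels, C

def oversegment(X, k_coarse=4, k_fine=2):
    """Coarse clustering, then re-cluster each coarse cluster, so the
    result is over-segmented without one expensive high-k pass."""
    coarse, _ = kmeans(X, k_coarse)
    out = np.zeros(len(X), dtype=int)
    next_label = 0
    for j in range(k_coarse):
        m = coarse == j
        if m.sum() >= k_fine:
            sub, _ = kmeans(X[m], k_fine, seed=j + 1)
            out[m] = next_label + sub
            next_label += k_fine
        else:
            out[m] = next_label
            next_label += 1
    return out

# Synthetic "pixels" in RGB space: four colour blobs.
rng = np.random.default_rng(0)
blobs = [rng.normal(c, 5.0, size=(100, 3))
         for c in ([0, 0, 0], [200, 0, 0], [0, 200, 0], [0, 0, 200])]
X = np.vstack(blobs)
segments = oversegment(X)
print(len(np.unique(segments)))  # at most k_coarse * k_fine segments
```

The thesis's second phase specifically guarantees over-segmentation at low cost; this sketch only shows the generic structure of such a two-pass scheme.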
|
5 |
Large volume artefact for calibration of multi-sensor projected fringe systems. Tarvaz, Tahir. January 2015.
Fringe projection is a commonly used optical technique for measuring the shapes of objects up to about 1 m across. There are, however, many instances in the aerospace and automotive industries where it would be desirable to extend the benefits of the technique (e.g., high temporal and spatial sampling rates, non-contacting measurement) to much larger measurement volumes. This thesis describes a process developed to create a large global measurement volume from two or more independent shape measurement systems. A new 3-D large-volume calibration artefact, together with a hexapod positioning stage, has been designed and manufactured to allow calibration of volumes of up to 3 m × 1 m × 1 m. The artefact was built from carbon fibre composite tubes, chrome steel spheres, and mild steel end caps with rare-earth rod magnets. Its major advantage over other commonly used artefacts is the dimensionally stable relationship between features spanning multiple individual measurement volumes, allowing several scanners to be calibrated within a global coordinate system even when they have non-overlapping fields of view. The calibration artefact is modular, providing the scalability needed to address still larger measurement volumes and volumes of different geometries; both it and the positioning stage are easy to transport and to assemble on site. The artefact also provides traceability for calibration through independent measurements on a mechanical CMM. The dimensions of the assembled artefact have been found to be consistent with those of the individual tube lengths, demonstrating that gravitational distortion corrections are not needed for the artefact size considered here. Deformations due to thermal and hygral effects have also been quantified experimentally.
The thesis describes the complete calibration procedure: large volume calibration artefact design, manufacture and testing; initial estimation of the sensor geometry parameters; processing of the calibration data from manually selected regions-of-interest (ROI) of the artefact features; artefact pose estimation; automated control point selection, and finally bundle adjustment. An accuracy of one part in 17 000 of the global measurement volume diagonal was achieved and verified.
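The artefact's sphere features are typically reduced to centre points by least-squares fitting before pose estimation and bundle adjustment. As general background (with made-up numbers, not the thesis's data), the standard algebraic sphere fit looks like this:

```python
import numpy as np

def fit_sphere(P):
    """Linear least-squares sphere fit to 3-D surface points P (N, 3).
    Rearranging |p - c|^2 = r^2 gives the linear system
    |p|^2 = 2 c.p + (r^2 - |c|^2) in the unknowns c and r."""
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = np.sqrt(sol[3] + c @ c)
    return c, r

# Synthetic points on a 25 mm sphere centred at (0.5, 0.2, 0.1) m.
rng = np.random.default_rng(0)
d = rng.standard_normal((200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
P = np.array([0.5, 0.2, 0.1]) + 0.025 * d
c, r = fit_sphere(P)
print(np.allclose(c, [0.5, 0.2, 0.1]), round(r, 3))  # True 0.025
```

With fringe-projection data the fit would be run on the measured point cloud of each sphere, and the residuals give a quick per-feature quality check.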
|
6 |
Pose estimation on large blocks of panoramic images from mobile mapping systems. Cannelle, Bertrand. 04 December 2013.
Driven by the development and democratisation of digital globes and consumer geolocation systems, terrestrial mobile 3D mapping of urban environments has grown enormously over the last ten years. The main remaining obstacles for these systems are, on the one hand, the precise localisation of the data required by certain applications (autonomous urban driving, surveying, etc.), owing to GPS masking and multipath in urban canyons, and, on the other hand, scaling up the processing given the considerable data volumes acquired each day (several terabytes). The first part of this thesis presents mobile mapping from both the system and the usage points of view. The Stéréopolis V2, a multi-camera mobile mapping vehicle developed at the MATIS laboratory of the French National Institute of Geographic and Forest Information (IGN), is described in detail in order to present the data used in this work; the image blocks handled here contain from several hundred thousand up to a million images. The second part addresses calibration of the system: first, intrinsic camera calibration using a panoramic acquisition geometry, which removes the need for metrological 3D target networks; then, extrinsic calibration of the vehicle's imagers, which precisely determines the position and orientation of the cameras on the mobile mapping platform. Two procedures are detailed and compared: an "off-line" one requiring a dedicated acquisition with a network of metrological targets, and an "on-line" one using only standard acquisition data.
We show that, under the same conditions, the on-line method yields a precision comparable to the off-line one while adapting better to varying in-situ acquisition conditions. The third part details the bundle adjustment applied to multi-camera mobile mapping data, which estimates the poses of a very large number of images; the equations and several use cases of the method are laid out. The structuring and management of the data in a warehouse is also developed, since it allows large volumes to be handled efficiently and hence the method to scale. The fourth and final part proposes several registration methods that can be used individually or in combination to bring distinct image sequences into coherence (loops, multi-date passes, etc.) in different application contexts.
|
7 |
3D mapping with iPhone. Lundqvist, Tobias. January 2011.
Today, 3D models of cities are created from aerial images using a camera rig. The images, together with sensor data from the flights, are stored for further processing when building the 3D models. However, there is market demand for a more mobile solution of satisfactory quality. If the camera position can be calculated for each image, an existing algorithm is available for creating the 3D models. This master's thesis project investigates whether the iPhone 4 offers image and sensor data of sufficient quality for building 3D models. Movements and rotations calculated from sensor data were intended to form the foundation of the image processing and to refine the camera position estimates. In practice, the 3D models are built from image processing alone, since the sensor data proved too inaccurate to use. As a consequence, the scale of the 3D models is unknown, and a measurement of the real objects is needed to make scaling possible. Compared with a test algorithm already available in the SBD's system that calculates 3D models from images alone, the quality of the 3D model produced in this project is, to the human eye, almost the same or in some respects even better.
|
8 |
Fusion of carrier-phase differential GPS, bundle-adjustment-based visual SLAM, and inertial navigation for precisely and globally-registered augmented reality. Shepard, Daniel Phillip. 16 September 2013.
Methodologies are proposed for combining carrier-phase differential GPS (CDGPS), visual simultaneous localization and mapping (SLAM), and inertial measurements to obtain precise and globally-referenced position and attitude estimates of a rigid structure connecting a GPS receiver, a camera, and an inertial measurement unit (IMU). As part of developing these methodologies, observability of globally-referenced attitude based solely on GPS-based position estimates and visual feature measurements is proven. Determination of attitude in this manner eliminates the need for attitude estimates based on magnetometer and accelerometer measurements, which are notoriously susceptible to magnetic disturbances. This combination of navigation techniques, if coupled properly, is capable of attaining centimeter-level or better absolute positioning and degree-level or better absolute attitude accuracies in any space, both indoors and out. Such a navigation system is ideally suited for application to augmented reality (AR), which often employs a GPS receiver, a camera, and an IMU, and would result in tight registration of virtual elements to the real world. A prototype AR system is presented that represents a first step towards coupling CDGPS, visual SLAM, and inertial navigation. While this prototype AR system does not couple CDGPS and visual SLAM tightly enough to obtain some of the benefit of the proposed methodologies, the system is capable of demonstrating an upper bound on the precision that such a combination of navigation techniques could attain. Test results for the prototype AR system are presented for a dynamic scenario that demonstrate sub-centimeter-level positioning precision and sub-degree-level attitude precision. This level of precision would enable convincing augmented visuals.
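As background on why attitude is recoverable from vector correspondences: aligning a set of directions observed in the body frame with the same directions expressed in the global frame is Wahba's problem, which has a closed-form SVD solution (the Kabsch algorithm). The sketch below is illustrative background, not the dissertation's estimator:

```python
import numpy as np

def kabsch(A, B):
    """Rotation R minimising ||R @ A - B||_F over proper rotations
    (Wahba's problem). A, B are (3, N) matched direction sets."""
    U, _, Vt = np.linalg.svd(B @ A.T)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Recover a known rotation from ten matched directions.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                           # force a proper rotation
A = rng.standard_normal((3, 10))
B = Q @ A
print(np.allclose(kabsch(A, B), Q))  # True
```

In the fused system the corresponding directions would come from CDGPS baselines and triangulated visual features, with noise handled in a filter rather than a one-shot fit.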
|
9 |
3D Modeling using Multi-View Images. January 2010.
There is a growing interest in the creation of three-dimensional (3D) images and videos due to the growing demand for 3D visual media in commercial markets. A possible solution for producing 3D media files is to convert existing 2D images and videos to 3D. The 2D to 3D conversion methods that estimate the depth map of a 2D scene for 3D reconstruction present an efficient approach to saving on the cost of coding, transmitting and storing 3D visual media in practical applications. Various 2D to 3D conversion methods based on depth maps have been developed using existing image and video processing techniques. The depth maps can be estimated either from a single 2D view or from multiple 2D views. This thesis presents a MATLAB-based 2D to 3D conversion system from multiple views based on the computation of a sparse depth map. The system is able to deal with multiple views obtained from uncalibrated hand-held cameras, without prior knowledge of the camera parameters or scene geometry. The implemented system consists of techniques for image feature detection and registration, two-view geometry estimation, projective 3D scene reconstruction, and metric upgrade to reconstruct the 3D structures by means of a metric transformation. The implemented 2D to 3D conversion system is tested using different multi-view image sets. The reconstructed sparse depth maps of feature points provide relative depth information about the objects in the scene. Sample ground-truth depth data points are used to calculate a scale factor, so that true depth can be estimated by scaling the relative depth information. It was found that the reconstructed depth map is consistent with the ground-truth depth data. / Dissertation/Thesis / M.S. Electrical Engineering 2010
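The scale-factor step can be written as a one-line least-squares fit. A minimal sketch, under the assumption that ground-truth depths are matched one-to-one with reconstructed relative depths (the numbers below are made up):

```python
import numpy as np

def metric_scale(relative_depths, true_depths):
    """Least-squares scale s minimising ||s * rel - true||^2, used to
    upgrade an up-to-scale reconstruction to metric units from a few
    known ground-truth depths."""
    rel = np.asarray(relative_depths, dtype=float)
    true = np.asarray(true_depths, dtype=float)
    return float(rel @ true / (rel @ rel))

# Three feature points with known true depths.
print(metric_scale([1.0, 2.0, 4.0], [2.5, 5.0, 10.0]))  # 2.5
```

Multiplying every reconstructed depth by the returned factor then yields depths in the ground-truth units.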
|
10 |
Motion blur in digital images: analysis, detection and correction of motion blur in photogrammetry. Sieberth, Till. January 2016.
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have high ground resolution and good spectral and radiometric resolution due to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of UAV image processing is the degradation caused by blur from camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as by strong winds, turbulence, or sudden operator inputs. Such blur disturbs the visual analysis and interpretation of the data, causes errors, and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently done manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary that is both reliable and quick. This thesis demonstrates the negative effect that blurred images have on photogrammetric processing. It shows that small amounts of blur have serious impacts on target detection and slow down processing by requiring human intervention, while larger blur can make an image completely unusable, so that it must be excluded from processing. To exclude such images from large datasets, an algorithm was developed. The new method detects blur caused by linear camera displacement and is based on how humans detect blur: by comparing an image with other images to establish whether it is blurred or not. The algorithm simulates this procedure by creating a comparison image through image processing.
Creating the comparison image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard deviation), does not on its own provide an absolute number for judging whether an image is blurred; to reach a reliable judgement of image sharpness, a SIEDS value has to be compared to the other SIEDS values of the same dataset. The algorithm enables blurred images to be excluded, so that photogrammetric processing can proceed without them. It is also possible to use deblurring techniques to restore blurred images. Deblurring is a widely researched topic, often based on the Wiener or Richardson-Lucy deconvolution, which require precise knowledge of both the blur path and its extent. Even with knowledge of the blur kernel, the correction introduces artefacts such as ringing, and the deblurred image appears muddy and not completely sharp. In the study reported here, overlapping images are used to support the deblurring process, and an algorithm based on the Fourier transform is presented. This works well in flat areas, but the need for geometrically correct sharp images for deblurring may limit its application. Another way to enhance the images is the unsharp mask method, which improves them significantly and makes photogrammetric processing more successful. However, deblurring needs to focus on geometrically correct restoration to assure geometrically correct measurements. To this end, a novel edge-shifting approach was developed that aims at geometrically correct deblurring. The idea of edge shifting appears promising but requires more advanced programming.
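The relative-judgement strategy (an image is only "blurred" in comparison with the rest of its dataset) can be mimicked with any no-reference sharpness score. The sketch below uses variance of the Laplacian, a common stand-in; it is not the SIEDS metric itself, and the median threshold is an arbitrary illustrative choice:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian response. The absolute
    value is meaningless on its own; only comparison across one
    dataset ranks sharpness, as with SIEDS."""
    L = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
         + img[1:-1, :-2] + img[1:-1, 2:])
    return float(L.var())

def flag_blurred(scores, k=0.5):
    """Flag images scoring below k times the dataset median."""
    med = np.median(scores)
    return [s < k * med for s in scores]

# A sharp random texture versus a 2x2 box-blurred copy of it.
rng = np.random.default_rng(0)
sharp = rng.standard_normal((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(np.roll(sharp, 1, 0), 1, 1)) / 4.0
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```

On real flight imagery the scores would be computed per image and the dataset-relative threshold tuned to the camera and flight conditions.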
|