1 |
Visual Simultaneous Localization and Mapping for a tree climbing robot. Wisely Babu, Benzun Pious. 19 September 2013 (has links)
"This work addresses the problem of generating a 3D mesh grid model of a tree by a climbing robot for tree inspection. In order to generate a consistent model of the tree while climbing, the robot needs to be able to track its location while generating the model. Hence we explored this problem as a subset of Simultaneous Localization and Mapping problem. The monocular camera based Visual Simultaneous Localization and Mapping(VSLAM) algorithm was adopted to map the features on the tree. Multi-scale grid based FAST feature detector combined with Lucas Kande Optical flow was used to extract features from the tree. Inverse depth representation of feature was selected to seamlessly handle newly initialized features. The camera and the feature states along with their co-variances are managed in an Extended Kalman filter. In our VSLAM implementation we have attempted to track a large number of features. From the sparse spatial distribution of features we get using Extended Kalman filter we attempt to generate a 3D mesh grid model with the help of an unordered triangle fitting algorithm. We explored the implementation in C++ using Eigen, OpenCV and Point Cloud Library. A multi-threaded software design of the VSLAM algorithm was implemented. The algorithm was evaluated with image sets from trees susceptible to Asian Long Horn Beetle. "
2 |
Localisation précise d'un véhicule par couplage vision/capteurs embarqués/systèmes d'informations géographiques / Localisation of a vehicle through low-cost sensors and geographic information systems fusion. Salehi, Achkan. 11 April 2018 (has links)
The fusion of sensors and databases whose errors are independent is today the most reliable, and therefore the most widespread, state-of-the-art solution to the localization problem. Current autonomous and semi-autonomous vehicles, as well as augmented reality applications targeting industrial contexts, exploit large graphs of sensors and databases that are expensive and far from trivial to design, calibrate and synchronize. Democratizing these technologies therefore requires exploring the possibility of exploiting low-cost and easily accessible sensors and databases. These information sources are naturally tainted by higher uncertainty, and several obstacles to their effective practical use remain. Moreover, the recent but dazzling successes of deep neural networks in a variety of tasks suggest that they could be a viable, low-cost alternative to some components of current SLAM systems. In this thesis, we focus on the large-scale localization of a vehicle in a georeferenced coordinate frame from a low-cost system, based on the fusion of a monocular video stream, untextured but georeferenced 3D building models, terrain elevation models, and data from either a low-cost GPS or the vehicle's odometry. Our work targets the resolution of two problems. The first arises in the fusion, via a barrier-term cost, of VSLAM and the positioning information provided by a low-cost GPS. This fusion method is, to the best of our knowledge, the most robust against GPS uncertainties, but it is more demanding in terms of computational resources than fusion through linear cost functions. We propose an algorithmic optimization of this approach based on the definition of a particular barrier term. The second problem is the data association problem between the primitives that represent the geometry of the scene (e.g. 3D points) and the 3D building models. Previous works rely on simple geometric criteria and are therefore very sensitive to occlusions in urban environments. We exploit deep convolutional neural networks in order to identify and associate the map elements corresponding to building façades with the 3D models. Although our contributions are largely independent of the underlying SLAM system, our experiments are based on constrained key-frame bundle adjustment. The proposed solutions are evaluated on synthetic sequences as well as on real urban sequences over distances of several kilometres. These experiments demonstrate significant performance gains for VSLAM/GPS fusion and a considerable improvement in the robustness of the building constraints to occlusions.
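As an illustration of the barrier-term fusion discussed above, the sketch below adds a log-barrier penalty that keeps the estimated camera position inside a confidence radius around a GPS fix. The thesis defines its own, more elaborate barrier term, so this formulation, the function name and the radius parameter are assumptions.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <limits>

// Log-barrier penalty that grows without bound as the estimated camera
// position approaches the boundary of a sphere of radius `radius` centred on
// a GPS fix. Added to the bundle-adjustment cost, it keeps the VSLAM estimate
// consistent with the GPS measurement without pulling it exactly onto the
// (uncertain) GPS position. Illustrative formulation only, not the specific
// barrier term defined in the thesis.
double gpsBarrierCost(const Eigen::Vector3d& cameraPosition,
                      const Eigen::Vector3d& gpsPosition,
                      double radius)
{
    const double d = (cameraPosition - gpsPosition).norm();
    if (d >= radius)                                         // outside the trust region
        return std::numeric_limits<double>::infinity();      // treated as infeasible
    return -std::log(1.0 - d / radius);  // ~0 near the centre, +inf at the boundary
}
```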
3 |
COMPARISON OF THE GRAPH-OPTIMIZATION FRAMEWORKS G2O AND SBA. Victorin, Henning. January 2016 (has links)
This thesis starts with an introduction to Simultaneous Localization and Mapping (SLAM) and further background on Visual SLAM (VSLAM). The goal of VSLAM is to map the world with a camera and, at the same time, localize the camera in that world. One important step is to optimize the acquired map, which can be done in several different ways. In this thesis, two state-of-the-art optimization frameworks are identified and compared, namely the g2o package and the SBA package. The results show that SBA performs better on smaller datasets and g2o on larger ones. It is also discovered that there is an error in the implementation of the pinhole camera model in the SBA package.
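For reference, a minimal sketch of the standard pinhole projection that both packages implement inside their reprojection-error terms; this is generic Eigen code with illustrative parameter names, not the SBA implementation in which the thesis found the error.

```cpp
#include <Eigen/Dense>

// Standard pinhole projection: transform a world point into the camera frame
// and project it onto the image plane using focal lengths (fx, fy) and the
// principal point (cx, cy). Bundle-adjustment frameworks such as g2o and SBA
// implement some variant of this model in their reprojection-error terms.
Eigen::Vector2d projectPinhole(const Eigen::Matrix3d& R,   // world-to-camera rotation
                               const Eigen::Vector3d& t,   // world-to-camera translation
                               const Eigen::Vector3d& Xw,  // 3D point in the world frame
                               double fx, double fy, double cx, double cy)
{
    const Eigen::Vector3d Xc = R * Xw + t;  // point in the camera frame
    return Eigen::Vector2d(fx * Xc.x() / Xc.z() + cx,
                           fy * Xc.y() / Xc.z() + cy);
}
```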
4 |
Taking Man Out of the Loop: Methods to Reduce Human Involvement in Search and Surveillance Applications. Brink, Kevin Michael. December 2010 (has links)
There has always been a desire to apply technology to human endeavors to increase a person's capabilities and to reduce the number or skill level of the people involved, or to replace the people altogether. Three fundamental areas are investigated where technology can enable the reduction or removal of humans in complex tasks. The first area of research is the rapid calibration of multiple camera systems whose cameras share an overlapping field of view, allowing for 3D computer vision applications; a simple method for the rapid calibration of such systems is introduced. The second area of research is the autonomous exploration of hallways or other urban-canyon environments in the absence of a global positioning system (GPS), using only an inertial measurement unit (IMU) and a monocular camera. Desired paths that generate accurate vehicle state estimates for simple ground vehicles are identified, and the benefits of integrated estimation and control are investigated; it is demonstrated that considering estimation accuracy is essential to produce efficient guidance and control. The Schmidt-Kalman filter is applied to the vision-aided inertial navigation system in a novel manner, reducing the state vector size significantly. The final area of research is a decentralized, swarm-based approach to source localization that uses a high-fidelity environment model to provide vehicle updates directly. The approach is an extension of a standard quadratic model that provides linear updates; the new approach leverages information from the higher-order terms of the environment model, showing dramatic improvement over the standard method.
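The Schmidt-Kalman filter mentioned above treats some variables as "consider" states whose estimates are never corrected but whose uncertainty still enters the update, which is what allows the effective state vector to stay small. Below is a minimal Eigen sketch of one such measurement update in its generic textbook form; the variable names are illustrative and this is not the thesis' vision-aided implementation.

```cpp
#include <Eigen/Dense>

// One measurement update of a Schmidt ("consider") Kalman filter: the
// estimated states x are corrected, while the consider states are never
// updated; only their uncertainty (Pcc, and the cross term Pxc) is accounted
// for in the gain and covariance update.
struct SchmidtState {
    Eigen::VectorXd x;    // estimated states
    Eigen::MatrixXd Pxx;  // covariance of x
    Eigen::MatrixXd Pxc;  // cross-covariance between x and the consider states
    Eigen::MatrixXd Pcc;  // covariance of the consider states (never reduced)
};

void schmidtUpdate(SchmidtState& s,
                   const Eigen::VectorXd& z,   // measurement
                   const Eigen::MatrixXd& Hx,  // measurement Jacobian w.r.t. x
                   const Eigen::MatrixXd& Hc,  // measurement Jacobian w.r.t. consider states
                   const Eigen::MatrixXd& R)   // measurement noise covariance
{
    // Innovation covariance includes the consider-state uncertainty.
    const Eigen::MatrixXd S = Hx * s.Pxx * Hx.transpose()
                            + Hx * s.Pxc * Hc.transpose()
                            + Hc * s.Pxc.transpose() * Hx.transpose()
                            + Hc * s.Pcc * Hc.transpose() + R;

    // Gain for the estimated states only; the consider-state gain is zero.
    const Eigen::MatrixXd Kx =
        (s.Pxx * Hx.transpose() + s.Pxc * Hc.transpose()) * S.inverse();

    s.x   += Kx * (z - Hx * s.x);  // consider states assumed zero-mean here
    s.Pxx -= Kx * (Hx * s.Pxx + Hc * s.Pxc.transpose());
    s.Pxc -= Kx * (Hx * s.Pxc + Hc * s.Pcc);
    // s.Pcc is deliberately left unchanged.
}
```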
5 |
Localization of UAVs Using Computer Vision in a GPS-Denied Environment. Aluri, Ram Charan. 05 1900 (has links)
The main objective of this thesis is to propose a localization method for a UAV using various computer vision and machine learning techniques. Such localization plays a major role in planning the flight strategy and acts as a navigational contingency method in the event of a GPS failure. The implementation of the algorithms exploits the high processing capability of the graphics processing unit, making it more efficient. The method involves several neural networks working in synergy to perform the localization. This thesis is part of a collaborative project between the University of North Texas, Denton, USA, and the University of Windsor, Ontario, Canada. The localization is divided into three phases, namely object detection, recognition, and location estimation. Object detection and position estimation are discussed in detail, while recognition is covered briefly. Finally, future strategies to help the UAV complete its mission in case of an eventuality, such as the introduction of an edge server and wireless charging methods, are also briefly introduced.
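The abstract does not detail how location estimation is performed, but one simple, commonly used way to turn a neural-network detection into a position estimate is to combine the pinhole camera model with a landmark of known size, as in the hedged sketch below; the function and parameter names are assumptions, not the thesis' method.

```cpp
#include <Eigen/Dense>

// If the real-world height of a detected landmark is known, the pinhole model
// gives its range from the height of its bounding box in pixels, and the
// bounding-box centre gives its offset from the optical axis. Illustrative
// stand-in only; the thesis' estimation pipeline is not specified here.
Eigen::Vector3d positionFromDetection(double u, double v,    // bounding-box centre (pixels)
                                      double boxHeightPx,    // bounding-box height (pixels)
                                      double objectHeightM,  // known object height (metres)
                                      double fx, double fy,  // focal lengths (pixels)
                                      double cx, double cy)  // principal point (pixels)
{
    const double z = fy * objectHeightM / boxHeightPx;  // range along the optical axis
    const double x = (u - cx) * z / fx;                 // lateral offset
    const double y = (v - cy) * z / fy;                 // vertical offset
    return Eigen::Vector3d(x, y, z);                    // position in the camera frame
}
```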
6 |
RELOCALIZATION AND LOOP CLOSING IN VISION SIMULTANEOUS LOCALIZATION AND MAPPING (VSLAM) OF A MOBILE ROBOT USING ORB METHOD. Venkatanaga Amrusha Aryasomyajula (8728027). 24 April 2020 (has links)
It is essential for a mobile robot during autonomous navigation to be able to detect revisited places, or loop closures, while performing Vision Simultaneous Localization and Mapping (VSLAM). Loop closing has been identified as one of the critical data association problems when building maps, and it is an efficient way to eliminate errors and improve the accuracy of robot localization and mapping. In order to solve the loop closing problem, the ORB-SLAM algorithm, a feature-based simultaneous localization and mapping system that operates in real time, is used. This system includes loop closing and relocalization and allows automatic initialization.

In order to evaluate the performance of the algorithm, monocular, stereo and RGB-D cameras are used. The aim of this thesis is to show the accuracy of the relocalization and loop closing process using the ORB-SLAM algorithm in a variety of environmental settings. The performance of relocalization and loop closing in different challenging indoor scenarios is demonstrated through various experiments. Experimental results show the applicability of the approach in real-time applications such as autonomous navigation.
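A minimal sketch of the ORB feature extraction and matching on which such relocalization and loop-closure candidates are checked. ORB-SLAM itself first queries a bag-of-words database for candidate keyframes and then verifies them geometrically; this sketch covers only the ORB matching step, with illustrative parameter values.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Extract ORB features from a query frame and a candidate keyframe and match
// them with a Hamming-distance brute-force matcher. Cross-checking keeps only
// mutually best matches, a cheap substitute for ratio testing; the full
// ORB-SLAM pipeline adds bag-of-words retrieval and geometric verification.
std::vector<cv::DMatch> matchOrbFeatures(const cv::Mat& queryImage,
                                         const cv::Mat& keyframeImage)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);  // up to 1000 keypoints per image

    std::vector<cv::KeyPoint> kpQuery, kpKeyframe;
    cv::Mat descQuery, descKeyframe;
    orb->detectAndCompute(queryImage, cv::noArray(), kpQuery, descQuery);
    orb->detectAndCompute(keyframeImage, cv::noArray(), kpKeyframe, descKeyframe);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descQuery, descKeyframe, matches);
    return matches;
}
```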
7 |
Reconstruction 3D de l'environnement dynamique d'un véhicule à l'aide d'un système multi-caméras hétérogène en stéréo wide-baseline / 3D reconstruction of the dynamic environment surrounding a vehicle using a heterogeneous multi-camera system in wide-baseline stereo. Mennillo, Laurent. 05 June 2019 (has links)
This Ph.D. thesis was carried out in the automotive industry in collaboration with Groupe Renault and concerns in particular the development of advanced driver-assistance systems and autonomous vehicles. The progress made by the scientific community during the last decades, notably in the fields of computer science and robotics, has been so significant that it now enables the implementation of complex embedded systems in vehicles. These systems aim first to reduce the risks inherent in driving by assisting drivers, and ultimately to offer fully autonomous means of transport. The multibody SLAM methods currently integrated in such vehicles mostly rely on high-performance and relatively expensive onboard sensors such as laser rangefinders. Digital cameras, on the other hand, are far cheaper, are becoming common on mass-produced vehicles, and generally provide driver-assistance functions such as parking assistance or emergency braking. This increasingly common implantation also makes it possible to consider their use for reconstructing the dynamic environment close to the vehicle in three dimensions. From a scientific point of view, existing multibody visual SLAM techniques can be divided into two categories of methods. The first and historically older category concerns stereo methods, which use several cameras with overlapping fields of view to reconstruct the observed dynamic scene. Most of them rely on identical stereo pairs placed at a short distance from one another, which allows dense matching of feature points in the images and the estimation of disparity maps used to segment the motion of the reconstructed points. The other category, monocular methods, uses only a single camera during the reconstruction process. This implies compensating for the ego-motion of the acquisition system when independently estimating the motion of the other mobile objects in the scene. These methods are more difficult and raise several problems, notably the partitioning of the initial space into subspaces representing the individual motion of each mobile object, but also the estimation of the relative reconstruction scale of these objects when aggregating them into the static scene. The industrial motivation for this thesis, namely the reuse of the multi-camera systems already installed in vehicles, mostly composed of a front camera and surround cameras equipped with very wide-angle lenses, led to the development of a multibody reconstruction method adapted to heterogeneous multi-camera systems in wide-baseline stereo. This method is incremental and allows the reconstruction of sparse mobile points and their trajectories, thanks in particular to several geometric constraints used to segment the reconstructed points and their trajectories. Finally, a quantitative and qualitative evaluation of the method was conducted on two separate datasets, one of which was developed during this work in order to present characteristics similar to existing heterogeneous systems.
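As one hedged illustration of the kind of geometric constraint used to separate static from mobile points, the sketch below flags a reconstructed point as mobile when its reprojection under the estimated ego-motion alone disagrees with the newly observed feature; the thesis uses several such constraints, and this particular test, its names and its threshold are assumptions.

```cpp
#include <Eigen/Dense>

// Classify a reconstructed 3D point as static or mobile: if, after applying
// the camera ego-motion alone, the point reprojects far from the feature
// actually observed in the new image, its own motion must account for the
// discrepancy. Only one simple instance of the geometric constraints
// mentioned in the abstract; names and threshold are illustrative.
bool isLikelyMobile(const Eigen::Matrix3d& R,          // ego-rotation (world to camera)
                    const Eigen::Vector3d& t,          // ego-translation
                    const Eigen::Vector3d& pointWorld, // reconstructed 3D point
                    const Eigen::Vector2d& observedPx, // matched feature in the new image
                    const Eigen::Matrix3d& K,          // camera intrinsic matrix
                    double thresholdPx = 3.0)
{
    const Eigen::Vector3d pc = R * pointWorld + t;  // point in the new camera frame
    if (pc.z() <= 0.0) return true;                 // behind the camera: inconsistent
    const Eigen::Vector3d uvw = K * pc;
    const Eigen::Vector2d projected(uvw.x() / uvw.z(), uvw.y() / uvw.z());
    return (projected - observedPx).norm() > thresholdPx;
}
```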
8 |
Visual simultaneous localization and mapping in a noisy static environment. Makhubela, J. K. 03 1900 (has links)
M. Tech. (Department of Information and Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology / Simultaneous Localization and Mapping (SLAM) has seen tremendous interest within the research community in recent years due to its ability to make a robot truly independent in navigation. Visual Simultaneous Localization and Mapping (VSLAM) refers to an autonomous mobile robot equipped with a vision sensor, such as a monocular, stereo, omnidirectional or Red-Green-Blue-Depth (RGB-D) camera, localizing itself in and mapping an unknown environment. The purpose of this research is to address the problem of environmental noise, such as light intensity in a static environment, which renders a VSLAM system ineffective. In this study, we introduce a Light Filtering Algorithm into the VSLAM method to reduce the amount of noise and improve the robustness of the system in a static environment, together with the Extended Kalman Filter (EKF) algorithm for localization and mapping and the A* algorithm for navigation. Experiments were performed in simulation. Experimental results show detection of 60% of the total landmarks or land features within the simulation environment and a root mean square error (RMSE) of 0.13 m, which is small compared with other SLAM systems from the literature. The inclusion of the Light Filtering Algorithm has enabled the VSLAM system to navigate in an obscure environment.
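The abstract does not specify how the Light Filtering Algorithm works, so the sketch below shows only one common, assumed stand-in for such a preprocessing step: contrast-limited adaptive histogram equalization (CLAHE) applied to each frame before feature extraction, which reduces the effect of uneven light intensity.

```cpp
#include <opencv2/opencv.hpp>

// Normalize illumination before feature extraction so that bright spots and
// shadows disturb the VSLAM front end less. CLAHE re-balances local contrast
// tile by tile. This is an assumed, illustrative preprocessing step, not the
// thesis' Light Filtering Algorithm.
cv::Mat normalizeIllumination(const cv::Mat& frameBgr)
{
    cv::Mat gray, equalized;
    cv::cvtColor(frameBgr, gray, cv::COLOR_BGR2GRAY);

    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(/*clipLimit=*/2.0, cv::Size(8, 8));
    clahe->apply(gray, equalized);
    return equalized;
}
```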