  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Visual navigation in unmanned air vehicles with simultaneous location and mapping (SLAM)

Li, X 15 August 2014 (has links)
This thesis focuses on the theory and implementation of visual navigation techniques for autonomous air vehicles in outdoor environments. The aim of this study is to fuse and cooperatively develop an incremental map for multiple air vehicles through the application of Simultaneous Location and Mapping (SLAM). Without loss of generality, two unmanned air vehicles (UAVs) are investigated for the generation of ground maps from current and a priori data. Each UAV is equipped with an inertial navigation system and external sensing elements, which may comprise a mixture of visible and thermal infrared (IR) image sensors, with special emphasis on stereo digital cameras. The corresponding stereopsis provides the crucial three-dimensional (3-D) measurements. The visual aerial navigation problems tackled here are therefore interpreted as stereo-vision-based SLAM (vSLAM) for both single- and multiple-UAV applications.

The investigation first addresses feature extraction. Potential landmarks are selected from airborne camera images, since distinctive points identified in the images are a prerequisite for the subsequent processing, and the choice of feature extraction algorithm has a large influence on feature matching/association in 3-D mapping. To this end, effective variants of the scale-invariant feature transform (SIFT) algorithm are employed in comprehensive feature extraction experiments on both visible and infrared aerial images. As the UAV often operates in an uncertain location within complex and cluttered environments, dense and blurred images are practically inevitable. Finding feature correspondences therefore becomes a challenge, involving both feature matching between the first and second images of the same stereo frame and data association between mapped landmarks and camera measurements. A number of tests with different techniques are conducted by incorporating ideas from graph theory and graph matching. Novel approaches, based respectively on classification and on hypergraph transformation (HGTM), are proposed to solve data association in stereo-vision-based navigation. These strategies are then investigated for UAV applications within SLAM so as to achieve robust matching/association in highly cluttered environments.

The unknown nonlinearities in the system model, together with noise, introduce undesirable INS drift and errors. The pros and cons of various candidate data filtering algorithms for resolving this issue are therefore appraised against the specific requirements of the applications. These filters are investigated within visual SLAM for data filtering and fusion in both single and cooperative navigation, so that the updated information required to construct and maintain a globally consistent map can be provided by a suitable algorithm, balancing computational accuracy against the load imposed by the increasing map size. The research provides an overview of feasible filters such as the extended Kalman filter, extended information filter, unscented Kalman filter and unscented H-infinity filter. As visual intuition plays an important role in how humans recognise objects, research on textured 3-D mapping is also conducted to support both statistical and visual analysis for aerial navigation.
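For orientation, the correspondence step described above can be illustrated with a generic OpenCV baseline: SIFT keypoints are extracted in the left and right images of a stereo frame and matched with Lowe's ratio test. This is a minimal sketch of standard SIFT matching only; it does not reproduce the SIFT variants or the classification/HGTM association methods developed in the thesis, and the function name and parameters are illustrative.

# Generic SIFT stereo-matching baseline (illustrative only; not the
# thesis's SIFT variants or HGTM data association).
import cv2

def match_stereo_features(left_path, right_path, ratio=0.75):
    """Extract SIFT keypoints in both stereo images and keep matches
    that pass Lowe's ratio test."""
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(left, None)
    kp_r, des_r = sift.detectAndCompute(right, None)

    # Brute-force matcher, two nearest neighbours per descriptor for the ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des_l, des_r, k=2)

    good = []
    for pair in candidates:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp_l, kp_r, good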
Various techniques are proposed to smooth textures and minimise mosaicing errors during the reconstruction of 3-D textured maps with vSLAM for UAVs. Finally, with covariance intersection (CI) techniques applied across multiple sensors, various cooperative data fusion strategies are introduced for distributed and decentralised UAVs performing cooperative vSLAM (C-vSLAM). Despite the complex structure of the highly nonlinear system models residing in the cooperative platforms, robust and accurate estimates in collaborative mapping and location are achieved through HGTM association and communication strategies. Data fusion among UAVs and estimation for visual navigation via SLAM are verified and validated using both simulations and real data sets. / © Cranfield University, 2013
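The covariance intersection rule mentioned in this abstract is a standard result: two estimates (xa, Pa) and (xb, Pb) with unknown cross-correlation are fused as inv(P) = w*inv(Pa) + (1-w)*inv(Pb) and x = P*(w*inv(Pa)*xa + (1-w)*inv(Pb)*xb), with the weight w chosen, for example, to minimise the trace of P. The sketch below implements only this generic rule; it is not the thesis's decentralised C-vSLAM fusion architecture, and the grid search over w is an illustrative choice.

# Standard covariance-intersection fusion (generic illustration only).
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, n_grid=101):
    """Fuse two estimates (xa, Pa) and (xb, Pb) whose cross-correlation
    is unknown, choosing the weight w that minimises trace(P)."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)

    best_w, best_trace = 0.0, np.inf
    for w in np.linspace(0.0, 1.0, n_grid):
        P_inv = w * Pa_inv + (1.0 - w) * Pb_inv
        tr = np.trace(np.linalg.inv(P_inv))
        if tr < best_trace:
            best_w, best_trace = w, tr

    # Recompute the fused estimate with the best weight.
    P = np.linalg.inv(best_w * Pa_inv + (1.0 - best_w) * Pb_inv)
    x = P @ (best_w * Pa_inv @ xa + (1.0 - best_w) * Pb_inv @ xb)
    return x, P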
2

Six DOF tracking system based on smartphones internal sensors for standalone mobile VR

Duque, Fredd January 2019 (has links)
Nowadays, mid-range smartphones have enough computational power to run simultaneous location and mapping (SLAM) algorithms which, together with their onboard inertial sensors, make them capable of position and rotation tracking. Based on this, Google and Apple have released their respective software development kits (SDKs) that allow smartphones to run augmented reality applications with six-degrees-of-freedom tracking. The same approach could be applied to virtual reality head-mounted displays (HMDs) based on smartphones, but current virtual reality SDKs only offer rotational tracking. In this study, the positional tracking technology used for mobile augmented reality applications has been implemented in a virtual reality head-mounted display powered only by a smartphone, by combining virtual and augmented reality SDKs. Compatibility issues between the SDKs had to be resolved to develop a working prototype. An objective and controlled measurement study comprising 34,200 measurements was conducted to test the tracking accuracy, precision and jitter of the prototype against the Oculus Rift, a dedicated virtual reality system. The results show that the developed prototype offers decent tracking precision and accuracy in optimal conditions, although both were found to be highly dependent on the camera view. Jitter showed the opposite behaviour: it was dependent on the device used but independent of the camera view. In its optimal conditions, user studies demonstrated that the prototype offered the same perceived tracking performance as the Oculus Rift, although jitter was quite noticeable and a common user complaint. Further studies are proposed to improve the tracking performance of the prototype by filtering jitter and by using two or more cameras at different angles to correlate feature points and obtain a wider view of the environment where the prototype is used.
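For reference, the three reported metrics are commonly computed as follows (assumed, conventional definitions; the study may define them differently): accuracy as the mean distance to a ground-truth reference, precision as the spread of repeated measurements, and jitter as the mean frame-to-frame displacement while the device is held still. A minimal sketch under those assumptions:

# Common tracking metrics for a nominally static pose (assumed definitions;
# not necessarily the exact formulas used in the study).
import numpy as np

def tracking_metrics(positions, reference):
    """positions: (N, 3) tracked positions for a nominally static pose;
    reference: (3,) ground-truth position for that pose."""
    positions = np.asarray(positions, dtype=float)
    reference = np.asarray(reference, dtype=float)

    errors = np.linalg.norm(positions - reference, axis=1)

    accuracy = errors.mean()   # mean distance to ground truth
    precision = errors.std()   # spread of repeated samples
    # Jitter: mean frame-to-frame displacement while the device is held still.
    jitter = np.linalg.norm(np.diff(positions, axis=0), axis=1).mean()

    return {"accuracy": accuracy, "precision": precision, "jitter": jitter}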
