1 |
Monitoring-Camera-Assisted SLAM for Indoor Positioning and Navigation
Zheng, Haoyue January 2021 (has links)
In the information age, intelligent indoor positioning and navigation services are required in many application scenarios. However, most current visual positioning systems cannot function alone and have to rely on additional information from other modules. Nowadays, public places are usually equipped with monitoring cameras, which can be exploited as anchors for positioning, thus enabling the vision module to work independently.
In this thesis, a high-precision indoor positioning and navigation system is proposed, which integrates monitoring cameras and smartphone cameras. First, based on feature matching and geometric relationships, the system obtains the transformation scale from relative lengths in the cameras' perspective to actual distances in the floor plan. Second, through scale transformation, projection, rotation, and translation, the user's initial position in the real environment is determined. Then, as the user moves, the system continues to track the user and provide correct navigation prompts.
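The scale-and-rigid-transform step can be illustrated with a minimal 2D sketch. Only the similarity-transform part is shown (the thesis also involves projection); the function and parameter names below are assumptions for illustration, not the thesis' notation.

```python
import numpy as np

def to_floor_plan(p_cam, scale, theta, t):
    """Map a 2D point from the camera's relative frame to floor-plan
    coordinates: scale by the recovered factor, rotate by theta (radians),
    then translate by t. A generic sketch, not the thesis' implementation."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scale * (R @ np.asarray(p_cam, dtype=float)) + np.asarray(t, dtype=float)
```

For example, a point one unit along the camera's x-axis, with scale 2 and a 90-degree rotation, lands two floor-plan units along y from the translation offset.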
The designed system is implemented and tested in different application scenarios. Experiments show that the system achieves a positioning accuracy of 0.46 m and a successful navigation rate of 90.6%, outperforming state-of-the-art schemes by 13% and 3% respectively. Moreover, the system latency is only 0.2 s, which meets real-time demands.
In summary, assisted by widely deployed monitoring cameras, our system can provide users with accurate and reliable indoor positioning and navigation services. / Thesis / Master of Applied Science (MASc)
|
2 |
Exploração autônoma utilizando SLAM monocular esparso
Pittol, Diego January 2018 (has links)
In recent years, we have seen the dawn of a large number of applications that use autonomous robots. For a robot to be considered truly autonomous, it is essential that it can learn about the environment in which it operates. SLAM (Simultaneous Localization and Mapping) methods build a map of the environment the robot travels through while simultaneously estimating the robot's trajectory. However, to obtain a complete map of the environment autonomously, the robot must be guided through the entire environment, which is the exploration problem. Cameras are inexpensive sensors that can be used to build 3D maps. However, exploration over maps generated by monocular SLAM methods (i.e. methods that extract information from a single camera) is still an open problem, since such methods produce sparse or semi-dense maps that are ill-suited for navigation and exploration. For such a situation, exploration methods must be developed that can cope with the limitations of cameras and the lack of information in the maps generated by monocular SLAM. We propose an exploration strategy that uses local volumetric maps, generated from lines of sight, allowing the robot to navigate safely. In these local maps, goals are defined that lead the robot to explore the environment while avoiding obstacles. The proposed approach aims to answer the fundamental question in exploration: "Where to go?". In addition, it seeks to determine correctly when the environment has been sufficiently explored and exploration should stop. The approach is evaluated in experiments on single-room and multi-room environments.
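The idea of building a local volumetric map from lines of sight can be illustrated with a toy 2D occupancy grid: cells along the ray from the robot to a sparse map point are marked free, and the endpoint is marked occupied. This is a generic sketch of the concept, not the thesis' actual data structure or algorithm.

```python
import numpy as np

def carve_line_of_sight(grid, robot, point):
    """Carve a line of sight into a local grid initialised to unknown (0):
    cells along the ray become free (1), the endpoint occupied (2).
    Illustrative only; a real system would use a proper ray-traversal."""
    r = np.asarray(robot, dtype=float)
    p = np.asarray(point, dtype=float)
    # Sample the segment densely enough to touch every crossed cell.
    n = int(np.ceil(np.linalg.norm(p - r))) * 2 + 1
    for s in np.linspace(0.0, 1.0, n):
        c = tuple(np.round(r + s * (p - r)).astype(int))
        grid[c] = 1
    grid[tuple(np.round(p).astype(int))] = 2
    return grid
```

Repeating this for every visible map point yields a local free-space map in which navigation goals can be placed safely.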
|
5 |
Sensor fusion to detect scale and direction of gravity in monocular SLAM systems
Tucker, Seth C. January 2017 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Monocular simultaneous localization and mapping (SLAM) is an important technique that enables very inexpensive environment mapping and pose estimation in small systems such as smartphones and unmanned aerial vehicles (UAVs). However, the information generated by monocular SLAM is in an arbitrary and unobservable scale, leading to drift and making it difficult to use with other sources of odometry for control or navigation. To correct this, the odometry needs to be aligned with metric-scale odometry from another device, or else scale must be recovered from known features in the environment. Typically, known environmental features are not available, and for systems such as cellphones or UAVs, which may experience sustained, small-scale, irregular motion, an IMU is often the only practical option. Because accelerometers measure both acceleration and gravity, an inertial measurement unit (IMU) must filter out gravity and track orientation with complex algorithms in order to provide a linear acceleration measurement that can be used to recover SLAM scale. In this thesis, an alternative method is proposed, which detects and removes gravity from the accelerometer measurement by using the unscaled direction of acceleration derived from the SLAM odometry.
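The core estimation problem can be illustrated with a generic linear least-squares sketch: if the unscaled SLAM acceleration a_i and the accelerometer specific force f_i are expressed in a common frame, then f_i = s·a_i + g, which is linear in the unknown scale s and gravity vector g. The formulation and variable names below are assumptions for illustration, not the thesis' exact method.

```python
import numpy as np

def estimate_scale_and_gravity(a_slam, f_imu):
    """Least-squares fit of f_i = s * a_i + g over N samples.
    a_slam: (N, 3) unscaled accelerations from SLAM odometry.
    f_imu:  (N, 3) accelerometer specific-force measurements.
    Assumes both are in a common frame; returns (s, g)."""
    a = np.asarray(a_slam, dtype=float)
    f = np.asarray(f_imu, dtype=float)
    n = a.shape[0]
    # Each sample contributes three rows: [a_i | I3] @ [s, gx, gy, gz] = f_i
    A = np.hstack([a.reshape(-1, 1), np.tile(np.eye(3), (n, 1))])
    x, *_ = np.linalg.lstsq(A, f.reshape(-1), rcond=None)
    return x[0], x[1:]
```

Once g is known it can be subtracted from the accelerometer output, leaving a linear acceleration usable for scale recovery.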
|
6 |
Implementation and Evaluation of Monocular SLAM
Martinsson, Jesper January 2022 (has links)
This thesis report explains the research, implementation, and testing of a monocular SLAM system in an application developed by Voysys AB called Oden, as well as the making and investigation of a new data set used to test the SLAM system. The system uses CUDASIFT to find and match feature points, OpenCV to compute the initial guess, and the Ceres Solver to optimize the results. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University
|
7 |
Fusion of IMU and Monocular-SLAM in a Loosely Coupled EKF
Fåhraeus, Henrik January 2017 (has links)
Camera-based navigation is becoming more and more popular and is often the cornerstone of Augmented and Virtual Reality. However, camera-based navigation systems are less accurate during fast movements, and they are often resource-intensive in terms of CPU and battery consumption. Moreover, the image-processing algorithms introduce latencies, causing the information about the current position to be delayed. This thesis investigates whether a camera and an IMU can be fused in a loosely coupled Extended Kalman Filter to reduce these problems. An IMU introduces unnoticeable latencies, and its performance is not affected by fast movements. For accurate tracking using an IMU it is important to estimate the bias correctly, so a new method was used in a calibration step to see if it could improve the result. A method to estimate the relative position and orientation between the camera and the IMU is also evaluated. The filter shows promising results for estimating orientation: it can estimate the orientation without latency and offers accurate tracking during fast rotations when the camera alone cannot. Position, however, is much harder, and no performance gain could be seen. Some methods that are likely to improve the tracking are discussed and suggested as future work.
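A loosely coupled filter of this kind can be sketched in one rotational dimension: the state holds an angle and a gyro bias, gyro rates drive the prediction, and the camera's angle estimate is the measurement. The state choice, class name, and noise values below are illustrative assumptions, not the thesis' design.

```python
import numpy as np

class OrientationEKF:
    """Minimal 1-D loosely coupled EKF: state x = [angle, gyro_bias].
    predict() integrates the bias-corrected gyro rate; update() fuses the
    camera's angle estimate. Noise parameters are placeholders."""
    def __init__(self, q_angle=1e-4, q_bias=1e-6, r_cam=1e-2):
        self.x = np.zeros(2)                 # [angle, bias]
        self.P = np.eye(2)                   # state covariance
        self.Q = np.diag([q_angle, q_bias])  # process noise
        self.R = r_cam                       # camera measurement noise

    def predict(self, gyro_rate, dt):
        self.x[0] += (gyro_rate - self.x[1]) * dt
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, cam_angle):
        H = np.array([[1.0, 0.0]])           # camera observes the angle only
        S = H @ self.P @ H.T + self.R
        K = (self.P @ H.T) / S               # Kalman gain (2x1)
        self.x += (K * (cam_angle - self.x[0])).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```

A delayed camera measurement could be handled by re-running the prediction from the measurement's timestamp, one common way to address the latency issue described above.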
|
8 |
Monocular Depth Estimation Using Deep Convolutional Neural Networks
Larsson, Susanna January 2019 (has links)
For a long time stereo cameras have been deployed in visual Simultaneous Localization And Mapping (SLAM) systems to gain 3D information. Even though stereo cameras show good performance, their main disadvantage is the complex and expensive hardware setup they require, which limits the use of such systems. A simpler and cheaper alternative is a monocular camera; however, monocular images lack the important depth information. Recent works have shown that having access to depth maps in a monocular SLAM system is beneficial, since they can be used to improve the 3D reconstruction. This work proposes a deep neural network that predicts dense high-resolution depth maps from monocular RGB images by casting the problem as a supervised regression task. The network architecture follows an encoder-decoder structure in which multi-scale information is captured and skip connections are used to recover details. The network is trained and evaluated on the KITTI dataset, achieving results comparable to state-of-the-art methods. With further development, this network shows good potential to be incorporated in a monocular SLAM system to improve the 3D reconstruction.
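The encoder-decoder-with-skip structure can be shown with a tiny, weight-free numpy sketch: the encoder compresses by pooling, the decoder expands by upsampling, and the skip connection blends the full-resolution feature back in to recover detail. This is purely structural; a real depth network would use learned convolutions.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling: one encoder downsampling step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling: one decoder step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_encoder_decoder(img):
    """Structural sketch of an encoder-decoder with one skip connection:
    the upsampled code alone is blurry, so it is blended with the
    same-resolution encoder feature to restore detail."""
    skip = img                # full-resolution feature saved for the skip
    code = avg_pool2(img)     # encoder: compress to a coarse representation
    up = upsample2(code)      # decoder: expand back to full resolution
    return 0.5 * (up + skip)  # skip connection reintroduces fine detail
```

In a trained network the blend would be a learned concatenation-plus-convolution rather than a fixed average, but the information flow is the same.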
|
9 |
Collaborative SLAM with Crowdsourced Data
Huai, Jianzhu 18 May 2017 (has links)
No description available.
|
10 |
Localization of Combat Aircraft at High Altitude using Visual Odometry
Nilsson Boij, Jenny January 2022 (has links)
Most of the navigation systems used in today's aircraft rely on Global Navigation Satellite Systems (GNSS). However, GNSS is not fully reliable: it can, for example, be jammed by attacks on the space or ground segments of the system, or denied in inaccessible areas. Hence, to ensure successful navigation it is of great importance to be able to continuously establish the aircraft's location without relying on external reference systems. Localization is one of many sub-problems in navigation and is the focus of this thesis. This brings us to the field of visual odometry (VO), which involves determining position and orientation with the help of images from one or more camera sensors. To date, however, most VO systems have primarily been deployed on ground vehicles and low-flying multi-rotor systems. This thesis seeks to extend VO to new applications by exploring it in a fairly new context: a fixed-wing piloted combat aircraft, for vision-only pose estimation in applications with extremely large scene depth. A major part of this research work is the data gathering, where the data is collected using the flight simulator X-Plane 11. Three different flight routes are flown, a straight line, a curve, and a loop, under two visual conditions: in clear weather with daylight and during sunset. The method used in this work is ORB-SLAM3, an open-source library for visual simultaneous localization and mapping (SLAM). It has shown excellent results in previous works and has become a benchmark often used in the field of visual pose estimation. ORB-SLAM3 tracks the straight line of 78 km very well at an altitude above 2700 m. The absolute trajectory error (ATE) is 0.072% of the total distance traveled in daylight and 0.11% during sunset. These results are of the same magnitude as ORB-SLAM3 achieves on the EuRoC MAV dataset. For the curved trajectory of 79 km, the ATE is 2.0% and 1.2% of the total distance traveled in daylight and sunset respectively.
The longest flight route of 258 km shows the challenges of visual pose estimation. Although the system manages to close loops in daylight, its ATE is 3.6% of the total distance traveled. During sunset the features do not possess enough invariant characteristics to close loops, resulting in an even larger ATE of 14% of the total distance traveled. Hence, to be able to properly rely on vision for localization, more sensor information is needed. Since all aircraft already possess an inertial measurement unit (IMU), future work naturally includes incorporating IMU data into the system. Nevertheless, the results from this research show that vision is useful, even at the high altitudes and speeds used by a combat aircraft.
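The ATE-as-percentage-of-distance metric used throughout these results can be sketched as follows. The sketch assumes the estimated and ground-truth trajectories are already time-aligned and expressed in the same frame; the report's exact alignment procedure may differ.

```python
import numpy as np

def ate_percent(est, gt):
    """Root-mean-square absolute trajectory error, expressed as a
    percentage of the total ground-truth distance travelled.
    est, gt: (N, D) arrays of time-aligned positions in a common frame."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    rmse = np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1)))        # ATE RMSE
    dist = np.sum(np.linalg.norm(np.diff(gt, axis=0), axis=1))      # path length
    return 100.0 * rmse / dist
```

For instance, a constant 1 m offset over a 100 m straight path gives 1.0%; on the 78 km straight route above, 0.072% corresponds to an RMSE of roughly 56 m.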
|