
Fusion of Ladybug3 omnidirectional camera and Velodyne Lidar

The advent of autonomous vehicles is accelerating a revolution in the car industry. Volvo Car Corporation aims to develop the next generation of autonomous vehicles, and engineers in its Active Safety CAE group have initiated a series of research projects to enhance safety functions for autonomous driving. This thesis work was carried out at Active Safety CAE with the group's support.

Perception plays a pivotal role in autonomous driving. This thesis therefore proposes to improve a vehicle's vision by fusing two different types of data: point clouds from a Velodyne HDL-64E S3 High Definition LiDAR sensor and images from a Ladybug3 omnidirectional camera. This report presents the whole fusion pipeline for point clouds and image data. An experiment is set up for collecting and synchronizing the multi-sensor data streams: a platform is built that supports the mounting of the Velodyne, the Ladybug3 and their accessories, as well as the connections to a GPS unit and a laptop. The software and programming environment used for recording, synchronizing and storing the data are also described. Synchronization is achieved mainly by matching timestamps between the different datasets, so creating timestamp log files is the primary task in this step.

The focus of this report is the extrinsic calibration between the Velodyne and the Ladybug3 camera, which is required for matching the two datasets correctly. We develop a semi-automatic calibration method that requires very little human intervention: a checkerboard is used to acquire a small set of corresponding feature points from the laser point cloud and the image. From these correspondences the displacement between the sensors is computed, and the laser points are then back-projected into the image. If the original and back-projected images are sufficiently consistent, the transformation parameters are accepted. The displacement between the camera and the laser scanner is estimated in two separate steps: first, the pose of the checkerboard in the image is estimated, giving its depth in the camera coordinate system; then the transformation between the camera and the laser scanner is computed in three-dimensional space.

Finally, the datasets are fused by combining color information from the image with range information from the point cloud. Other applications related to data fusion are developed in support of future work. The report closes with a conclusion and a discussion of possible improvements: for example, better calibration accuracy might be achieved with other methods, and adding texture to the cloud points would generate a more realistic model.
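To make the synchronization step concrete, the following is a minimal Python sketch of timestamp matching, pairing each LiDAR sweep with the nearest camera frame. The array names and the tolerance value are illustrative assumptions, not the actual log-file format used in the thesis.

```python
import numpy as np

def match_timestamps(lidar_ts, camera_ts, tolerance=0.05):
    """Pair each LiDAR sweep with the nearest camera frame by timestamp.

    lidar_ts, camera_ts: 1-D arrays of GPS-synchronized times in seconds.
    tolerance: maximum allowed offset in seconds; pairs beyond it are dropped.
    Returns a list of (lidar_index, camera_index) pairs.
    """
    lidar_ts = np.asarray(lidar_ts)
    camera_ts = np.asarray(camera_ts)
    pairs = []
    for i, t in enumerate(lidar_ts):
        j = int(np.argmin(np.abs(camera_ts - t)))  # nearest camera frame
        if abs(camera_ts[j] - t) <= tolerance:
            pairs.append((i, j))
    return pairs
```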
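The first calibration step, estimating the checkerboard's pose and depth in the camera frame, could look like the OpenCV sketch below. It assumes a single rectified pinhole view with known intrinsics K and distortion coefficients dist; the real Ladybug3 is omnidirectional, so this is a simplification, and the board dimensions are placeholders.

```python
import cv2
import numpy as np

def estimate_board_pose(image, K, dist, board_size=(8, 6), square=0.10):
    """Estimate the checkerboard pose (R, t) in the camera frame.

    image: 8-bit grayscale image containing the board.
    K, dist: 3x3 intrinsic matrix and distortion coefficients.
    board_size: inner-corner grid (cols, rows); square: edge length in metres.
    """
    found, corners = cv2.findChessboardCorners(image, board_size)
    if not found:
        return None
    # 3-D corner coordinates in the board's own frame (all on the Z = 0 plane).
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec              # tvec gives the board's position (depth) in camera coordinates
```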
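The second step, computing the camera-to-laser transformation in three-dimensional space, can be posed as a least-squares rigid registration of the corresponding 3-D points. The thesis abstract does not name a specific algorithm, so the Kabsch/SVD method below is one plausible choice, not necessarily the one actually used.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform with dst ~= R @ src + t (Kabsch / SVD).

    src, dst: (N, 3) arrays of corresponding 3-D points, N >= 3, not collinear.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```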
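The consistency check then amounts to back-projecting laser points into the image with the estimated extrinsics. A sketch under the same pinhole assumption, where R and t map LiDAR coordinates into the camera frame:

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Back-project LiDAR points into image pixels.

    points: (N, 3) array of LiDAR points in the scanner frame.
    R, t: extrinsic rotation (3x3) and translation (3,) from LiDAR to camera.
    K: 3x3 camera intrinsic matrix.
    Returns (M, 2) pixel coordinates and the mask of points in front of the camera.
    """
    cam = points @ R.T + t          # rigid transform into the camera frame
    in_front = cam[:, 2] > 0        # keep only points with positive depth
    cam = cam[in_front]
    pix = cam @ K.T                 # perspective projection
    pix = pix[:, :2] / pix[:, 2:3]  # homogeneous normalization
    return pix, in_front
```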
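Finally, fusing color and range reduces to looking up the pixel each projected point lands on. The sketch below reuses project_lidar_to_image from above and discards points that fall outside the frame. Saving the resulting [x, y, z, r, g, b] rows, for instance as a PLY file, yields a colored point cloud viewable in standard tools.

```python
import numpy as np

def colorize_point_cloud(points, image, R, t, K):
    """Attach RGB color from the image to each visible LiDAR point.

    image: (H, W, 3) array; points, R, t, K as in project_lidar_to_image.
    Returns an (M, 6) array of [x, y, z, r, g, b] for points inside the frame.
    """
    pix, in_front = project_lidar_to_image(points, R, t, K)
    pts = points[in_front]
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)
    h, w = image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = image[v[inside], u[inside]]  # nearest-pixel color lookup
    return np.hstack([pts[inside], colors.astype(float)])
```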

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:kth-172431
Date January 2015
Creators Zhao, Guanyi
Publisher KTH, Geodesi och satellitpositionering
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
Relation TRITA-GIT EX ; 15-011
