
SENSOR FUSION FOR AUTONOMOUS NAVIGATION

This report describes the progress made in designing and constructing an unmanned surface vehicle, with a focus on the development of the vessel's perception system, which consists of three LiDAR sensors and a camera. To process and integrate the data from these sensors, a sensor fusion architecture was implemented using a method called Point Painting. This approach labels the points of a point cloud with class labels by projecting the pixels of a semantically segmented image onto the cloud. Since the algorithm was first published, more capable segmentation networks have been developed, so the segmentation network is replaced with a more modern architecture to improve accuracy. On the Cityscapes dataset, the implementation achieves accuracy competitive with state-of-the-art models for semantic segmentation.
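The core of the approach is the projection step: each LiDAR point is transformed into the camera frame, projected onto the image plane, and given the class label of the pixel it lands on. The sketch below illustrates this under assumed inputs (a hypothetical intrinsic matrix K, a LiDAR-to-camera transform T_cam_lidar, and a per-pixel class map already produced by a segmentation network); it is an illustration of the general Point Painting idea, not the thesis's actual implementation.

import numpy as np

def paint_point_cloud(points_lidar, seg_map, K, T_cam_lidar):
    """Append a semantic class label to each LiDAR point.

    points_lidar: (N, 3) xyz points in the LiDAR frame.
    seg_map:      (H, W) integer class IDs from a semantic segmentation network.
    K:            (3, 3) camera intrinsic matrix (assumed pinhole model).
    T_cam_lidar:  (4, 4) rigid transform from the LiDAR frame to the camera frame.
    Returns an (N, 4) array of [x, y, z, class_id]; points behind the camera or
    outside the image get class_id = -1.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])        # homogeneous coordinates, (N, 4)
    pts_cam = (T_cam_lidar @ homo.T).T[:, :3]                # points in the camera frame, (N, 3)

    labels = np.full(n, -1, dtype=np.int32)
    in_front = pts_cam[:, 2] > 0                             # keep only points in front of the camera

    # Perspective projection onto the image plane.
    uvw = (K @ pts_cam[in_front].T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)

    h, w = seg_map.shape
    in_image = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    # Look up the segmentation class at each projected pixel (row = v, column = u).
    idx = np.where(in_front)[0][in_image]
    labels[idx] = seg_map[uv[in_image, 1], uv[in_image, 0]]

    return np.hstack([points_lidar, labels[:, None].astype(points_lidar.dtype)])

The painted cloud can then be passed to a downstream 3D detector or planner, which is the main benefit of the method: the LiDAR geometry is enriched with the camera's semantic information without changing the point cloud's structure.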

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:mdh-61810
Date January 2023
Creators Österlund, Dan
Publisher Mälardalens universitet, Akademin för innovation, design och teknik
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
