
Semantic segmentation of seabed sonar imagery using deep learning

Large parts of the ocean have yet to be mapped, and investigating them calls for autonomous underwater vehicles. Current state-of-the-art underwater positioning often relies on external data from other vessels or beacons. Processing seabed image data could potentially improve the autonomy of underwater vehicles. In this thesis, image data from a synthetic aperture sonar (SAS) was manually segmented into two classes: sand and gravel. Two different convolutional neural networks (CNNs) were trained with different loss functions, and the results were examined. The best-performing network, U-Net trained with the IoU loss function, achieved Dice coefficient and IoU scores of 0.645 and 0.476, respectively. It was concluded that CNNs are a viable approach for segmenting SAS image data, but there is much room for improvement.
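The record does not include the thesis code, but the abstract's training and evaluation setup can be illustrated with a minimal PyTorch sketch of a differentiable IoU (Jaccard) loss and a Dice metric for binary (sand vs. gravel) masks. The class name SoftIoULoss, the tensor shapes, and the smoothing constant are illustrative assumptions, not taken from the thesis.

```python
import torch
import torch.nn as nn


class SoftIoULoss(nn.Module):
    """Differentiable IoU (Jaccard) loss for binary segmentation.

    Assumes `logits` has shape (N, 1, H, W) and `targets` holds {0, 1}
    masks of the same shape; names and shapes are illustrative, not from
    the thesis.
    """

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(logits)
        intersection = (probs * targets).sum(dim=(1, 2, 3))
        union = (probs + targets - probs * targets).sum(dim=(1, 2, 3))
        iou = (intersection + self.eps) / (union + self.eps)
        # Minimising 1 - IoU pushes predicted masks toward the ground truth.
        return 1.0 - iou.mean()


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice coefficient between two binary masks, used here as an evaluation metric."""
    intersection = (pred * target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```

In a setup like this, the IoU loss would drive training of a U-Net, while the Dice coefficient and thresholded IoU would be computed on held-out SAS tiles for the kind of scores reported above.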

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-160561
Date January 2019
Creators Granli, Petter
Publisher Linköpings universitet, Programvara och system
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
