A key component of autonomous outdoor navigation in unstructured environments is the classification of terrain. Recent developments in machine learning show promising results in scene segmentation but are limited by the labels used during supervised training. In this work, we present and evaluate a flexible strategy for terrain classification based on three components: a deep convolutional neural network trained on colour, depth and infrared data, which provides feature vectors for image segmentation; a set of exchangeable segmentation engines that operate in this feature space; and a novel, air-pressure-based actuator responsible for distinguishing rigid obstacles from those that merely appear rigid. Through the use of unsupervised learning we eliminate the need for labelled training data and allow our system to adapt to previously unseen terrain classes. We evaluate the performance of this classification scheme on a mobile robot platform in an environment containing vegetation and trees, using a Kinect v2 sensor as a low-cost depth camera. Our experiments show that the features generated by our neural network are currently not competitive with state-of-the-art implementations and that our system is not yet ready for real-world applications.
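To illustrate the general idea of label-free segmentation in a learned feature space, the following is a minimal sketch: per-pixel feature vectors (random stand-ins here for the CNN outputs) are clustered with k-means, one of several possible segmentation engines. The feature dimensions, cluster count and use of scikit-learn are assumptions for illustration, not the configuration used in the thesis.

import numpy as np
from sklearn.cluster import KMeans

H, W, D = 120, 160, 32          # image height/width and feature dimension (assumed)
N_TERRAIN_CLUSTERS = 5          # number of terrain clusters (assumed)

# Stand-in for per-pixel feature vectors produced by a CNN
# from colour, depth and infrared input.
features = np.random.rand(H, W, D).astype(np.float32)

# Flatten to (num_pixels, feature_dim) and cluster in feature space,
# requiring no labelled training data.
flat = features.reshape(-1, D)
kmeans = KMeans(n_clusters=N_TERRAIN_CLUSTERS, n_init=10, random_state=0)
labels = kmeans.fit_predict(flat)

# Reshape cluster assignments back into a per-pixel segmentation map.
segmentation = labels.reshape(H, W)
print(segmentation.shape, np.unique(segmentation))

Because the segmentation engine only sees feature vectors, the clustering step could be swapped for another unsupervised method without retraining the feature extractor.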
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:ltu-60893 |
Date | January 2016 |
Creators | Zeltner, Felix |
Publisher | Luleå tekniska universitet, Institutionen för system- och rymdteknik |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |