Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
Identifier | oai:union.ndltd.org:DRESDEN/oai:qucosa:de:qucosa:92418 |
Date | 02 July 2024 |
Creators | Reitmann, Stefan, Neumann, Lorenzo, Jung, Bernhard |
Contributors | Technische Universität Bergakademie Freiberg |
Publisher | MDPI |
Source Sets | Hochschulschriftenserver (HSSS) der SLUB Dresden |
Language | English |
Detected Language | English |
Type | info:eu-repo/semantics/publishedVersion, doc-type:article, info:eu-repo/semantics/article, doc-type:Text |
Rights | info:eu-repo/semantics/openAccess |
Relation | 1424-8220, 2052857-7, https://doi.org/10.3390/s21062144 |