As the automotive industry is heavily regulated from a quality standpoint, excellence in production is obligatory. Since human error can never be eliminated entirely, new solutions must be found. The transition to more data-driven production strategies enables the implementation of automated vision systems to replace humans in simple classification tasks. As research in the field of artificial intelligence advances, the hardware required to run the algorithms shrinks. Concurrently, small computing platforms break new performance records and the two innovation spaces converge. This work harnesses the state of the art from both domains by implementing a plug-on vision system, driven by a resource-constrained edge device, in a production line. The implemented CNN model, based on the MobileNetV2 architecture, achieved 97.80% accuracy, 99.93% precision, and 95.67% recall. The model was trained on only 100 physical samples, which were expanded at a ratio of 1:15 through a combination of real-world and digital augmentations. The core of the vision system was a commodity device, the Raspberry Pi 4. The solution fulfilled all the requirements while sparking new development ideas for future work.
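The reported accuracy, precision, and recall follow the standard confusion-matrix definitions. A minimal sketch of those formulas (the counts below are hypothetical illustrations, not figures taken from the thesis):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Compute accuracy, precision, and recall from confusion-matrix counts.

    tp/fp/fn/tn: true positives, false positives, false negatives, true negatives.
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total        # fraction of all predictions that are correct
    precision = tp / (tp + fp)          # of predicted positives, fraction truly positive
    recall = tp / (tp + fn)             # of actual positives, fraction found
    return accuracy, precision, recall

# Hypothetical counts, chosen only to illustrate the formulas:
acc, prec, rec = classification_metrics(tp=287, fp=2, fn=13, tn=698)
print(f"accuracy={acc:.2%} precision={prec:.2%} recall={rec:.2%}")
# → accuracy=98.50% precision=99.31% recall=95.67%
```

High precision with somewhat lower recall, as in the thesis results, indicates a model that rarely raises false alarms but misses a small share of true defects.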
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:mdh-63130 |
Date | January 2023 |
Creators | Moberg, John, Widén, Jonathan |
Publisher | Mälardalens universitet, Akademin för innovation, design och teknik |
Source Sets | DiVA Archive at Upsalla University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |