Image analysis and object detection are used across many industries for purposes such as anomaly detection, automated workflows, and monitoring tool wear. This thesis addresses the challenge of achieving precise robot navigation between fixed start and end points by combining GPS and image analysis. The motivation is to facilitate the creation of immersive videos, aimed mainly at individuals with disabilities, enabling them to virtually explore diverse locations through a compilation of shorter video clips. The research examines several object detection frameworks and tools, including NVIDIA DetectNet and YOLOv5. After a comparative evaluation of their performance and accuracy, the thesis implements a prototype system built around an Elegoo Smart Robot Car, a camera, a GPS module, and an embedded NVIDIA Jetson Nano. Performance metrics such as precision, recall, and mean average precision (mAP) are used to assess the models' effectiveness. The findings indicate that the system detects objects with high accuracy and speed, and remains robust across varying lighting conditions and camera settings.
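The metrics named in the abstract can be sketched as follows. This is a minimal illustration of how precision, recall, and average precision (AP, the per-class quantity averaged to get mAP) are typically computed for a detector; it is not the thesis's actual evaluation code, and the detection counts below are made-up.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Precision = TP/(TP+FP); Recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def average_precision(is_tp: list, n_gt: int) -> float:
    """AP as the area under the precision-recall curve, accumulated
    at every rank where recall increases. `is_tp` flags each detection
    (True = matches a ground-truth box) in descending confidence order;
    `n_gt` is the number of ground-truth objects."""
    tp = fp = 0
    ap = 0.0
    for hit in is_tp:
        if hit:
            tp += 1
            ap += (tp / (tp + fp)) / n_gt  # precision at this recall step
        else:
            fp += 1
    return ap

# Illustrative numbers only: 8 correct detections, 2 false alarms, 2 misses.
p, r = precision_recall(tp=8, fp=2, fn=2)
```

Averaging AP over all object classes (and, in COCO-style evaluation, over several IoU thresholds) yields the mAP figure reported when comparing models such as DetectNet and YOLOv5.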
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-527635 |
Date | January 2024 |
Creators | Balusulapalem, Hanumat Sri Naga Sai, Amarwani, Julie Rajkumar |
Publisher | Uppsala universitet, Institutionen för informationsteknologi |
Source Sets | DiVA Archive at Upsalla University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |
Relation | IT ; mDV 24 004 |