Object detection in autonomous driving systems is a safety-critical function that demands precise implementation. However, existing solutions often rely on single-sensor systems, leading to insufficient data representation and diminished accuracy and speed in object detection. Our research addresses these challenges with fusion-based object detection frameworks and augmentation techniques that incorporate both camera and LiDAR sensor data. First, we introduce Sniffer Faster R-CNN (SFR-CNN), a novel fusion framework that improves region proposal generation by refining proposals from both LiDAR and image-based sources, thereby accelerating detection. Second, we propose Sniffer Faster R-CNN++, a late fusion network that integrates pre-trained single-modality detectors, improving detection accuracy while reducing computational complexity. Our approach employs enhanced proposal refinement algorithms to improve the detection of distant objects, yielding significant accuracy gains on challenging datasets such as KITTI and nuScenes. Finally, to address the sparsity inherent in LiDAR data, we introduce a novel method that generates virtual LiDAR points from camera images and augments them with semantic labels to detect sparsely distributed and occluded objects effectively. The integration of distance-aware data augmentation (DADA) further enhances the model's ability to recognize distant objects, leading to significant overall improvements in detection accuracy.
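The virtual-point idea described above can be illustrated with a minimal sketch: back-projecting per-pixel depth estimated from a camera image into 3D points through the camera intrinsics, carrying each pixel's semantic label along with the point. The function name `virtual_lidar_points` and its interface are illustrative assumptions, not the dissertation's actual implementation, which additionally handles proposal refinement and distance-aware augmentation.

```python
import numpy as np

def virtual_lidar_points(depth, labels, K):
    """Back-project a per-pixel depth map into 3D camera-frame points,
    attaching each point's semantic label.

    depth  : (H, W) array of depth values in meters (0 = no estimate)
    labels : (H, W) integer array of per-pixel semantic class ids
    K      : (3, 3) camera intrinsics matrix
    Returns (N, 3) points and (N,) labels for pixels with valid depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # keep only pixels with a depth estimate
    z = depth[valid]
    # standard pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=1)
    sem = labels[valid]
    return pts, sem
```

In a full pipeline, the returned semantically labeled points would be merged with the real (sparse) LiDAR sweep before detection, densifying coverage of distant and occluded objects.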
Identifier | oai:union.ndltd.org:unt.edu/info:ark/67531/metadc2332584 |
Date | 05 1900 |
Creators | Dhakal, Sudip |
Contributors | Yang, Qing, Fu, Song, Morozov, Kirill, Zhao, Hui |
Publisher | University of North Texas |
Source Sets | University of North Texas |
Language | English |
Detected Language | English |
Type | Thesis or Dissertation |
Format | Text |
Rights | Public, Dhakal, Sudip, Copyright, Copyright is held by the author, unless otherwise noted. All rights reserved. |