This study addresses the challenge of lens obfuscation in off-road autonomous vehicles, which compromises the visual inputs essential for safe navigation. Using a tiered approach, the research employs neural network architectures for preliminary image classification, semantic segmentation, and image-to-image translation to rectify obscured visual inputs. Initial classification with MobileNetV2 determines whether a frame is obscured, U-Net-based semantic segmentation then localizes the obfuscated regions, and a modified Pix2Pix model restores the affected image content. The evaluation shows promising results in improving visual clarity, marking a significant step toward more robust autonomous vehicle operation in off-road environments. This work lays a foundation for future exploration of advanced neural network architectures for real-time implementation on off-road terrain.
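A minimal sketch of the tiered pipeline the abstract describes, assuming a PyTorch implementation. The names ObfuscationPipeline, TinyUNet, and the restorer argument, as well as the "class 0 = clear" convention, are illustrative assumptions and not taken from the thesis; a trained pix2pix-style generator would be supplied in place of the stand-in restorer.

```python
# Illustrative sketch only: three-stage pipeline (classify -> segment -> restore).
# Component names and wiring are assumptions, not the thesis implementation.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class TinyUNet(nn.Module):
    """Toy U-Net stand-in with one down/up level; emits a per-pixel obfuscation mask."""

    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.up = nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return torch.sigmoid(self.up(self.down(x)))


class ObfuscationPipeline(nn.Module):
    def __init__(self, restorer: nn.Module):
        super().__init__()
        # Stage 1: binary "clear vs. obscured" classifier built on MobileNetV2.
        self.classifier = mobilenet_v2(weights=None)
        self.classifier.classifier[1] = nn.Linear(self.classifier.last_channel, 2)
        # Stage 2: segmentation network that localizes obfuscated regions.
        self.segmenter = TinyUNet()
        # Stage 3: image-to-image translation model (e.g., a pix2pix-style generator).
        self.restorer = restorer

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Single-image (batch size 1) inference for simplicity.
        logits = self.classifier(image)
        if logits.argmax(dim=1).item() == 0:  # assumed convention: class 0 == "clear"
            return image
        mask = self.segmenter(image)          # (1, 1, H, W) soft mask of obscured pixels
        restored = self.restorer(image)       # full-frame restoration candidate
        # Composite: restored content where the mask fires, original pixels elsewhere.
        return mask * restored + (1 - mask) * image


# Usage: nn.Identity() stands in for a trained restoration generator.
pipeline = ObfuscationPipeline(restorer=nn.Identity()).eval()
with torch.no_grad():
    out = pipeline(torch.rand(1, 3, 256, 256))
```

The composite step reflects the staging in the abstract: restoration is applied only where the segmentation stage flags obscuration, so clear regions of the frame pass through untouched.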
Identifier | oai:union.ndltd.org:MSSTATE/oai:scholarsjunction.msstate.edu:td-7126
Date | 10 May 2024
Creators | Harvel, Nicholas J.
Publisher | Scholars Junction
Source Sets | Mississippi State University
Detected Language | English
Type | text
Format | application/pdf
Source | Theses and Dissertations