This thesis investigates the application of index-based, region segmentation, and deep learning methods to the sensor fusion of omnidirectional (O-D) infrared (IR) sensors, Kinect sensors, and O-D vision sensors to increase the level of intelligent perception for unmanned robotic platforms. The first goal of this work is to provide a more robust calibration approach and improve the calibration of low-resolution, noisy IR O-D cameras. The second goal is to identify the best approach to sensor fusion for vegetation detection. We compare index-based, region segmentation, and deep learning methods, seeking a significant reduction in false positives while maintaining reasonable vegetation detection.
The results are as follows:
Direct Spherical Calibration of the IR camera provided more consistent and robust calibration-board capture and produced the best overall calibration results, with sub-pixel accuracy.
The best approach to sensor fusion for vegetation detection was the deep learning approach; the three methods are detailed in the following chapters, with their results summarized here.
The modified Normalized Difference Vegetation Index (NDVI) approach achieved 86.74% recognition with a 32.5% false positive rate (peaking at 80%).
Thermal Region Fusion (TRF) achieved a lower recognition rate of 75.16% but reduced false positives to 11.75% (a 64% reduction).
Our Deep Learning Fusion Network (DeepFuseNet) produced the best results, with a significant (92%) reduction in false positives compared to our modified NDVI approach, achieving 95.6% recognition with 2% false positives.
Current approaches focus primarily on O-D color vision for localization, mapping, and tracking and do not adequately address the application of these sensors to vegetation detection. We demonstrate the contrast between current approaches and our deep sensor fusion network (DeepFuseNet) for vegetation detection. The combination of O-D IR and O-D color vision, coupled with deep learning for the extraction of vegetation material type, has great potential for robot perception. This thesis examines two architectures for the fusion of O-D IR and O-D visual sensors: 1) autoencoder feature extractors feeding a deep Convolutional Neural Network (CNN) fusion network (DeepFuseNet), and 2) bottleneck CNN feature extractors feeding a deep CNN fusion network (DeepFuseNet). We show that the vegetation recognition rate and the number of false detections inherent in classical index-based spectral decomposition are greatly improved using our DeepFuseNet architecture.
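To make the second architecture concrete, the following is a minimal sketch of two modality-specific bottleneck CNN feature extractors feeding a shared fusion classifier. The layer sizes, class names, and the binary vegetation/non-vegetation output are illustrative assumptions, not the thesis's actual DeepFuseNet configuration.

```python
# Sketch only: two bottleneck CNN feature extractors (one per modality)
# feeding a fully connected fusion head, in the spirit of DeepFuseNet.
import torch
import torch.nn as nn

class BottleneckExtractor(nn.Module):
    """Small CNN that compresses one image stream into a feature vector."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # bottleneck: one 64-d global feature
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class DeepFuseNetSketch(nn.Module):
    """Concatenates IR and color features, then classifies vegetation / not."""
    def __init__(self, feat_dim=128, n_classes=2):
        super().__init__()
        self.ir_branch = BottleneckExtractor(in_channels=1, feat_dim=feat_dim)
        self.rgb_branch = BottleneckExtractor(in_channels=3, feat_dim=feat_dim)
        self.fusion = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, ir_img, rgb_img):
        fused = torch.cat([self.ir_branch(ir_img), self.rgb_branch(rgb_img)], dim=1)
        return self.fusion(fused)

# Example: one 1-channel IR frame and one 3-channel color frame, 128x128.
logits = DeepFuseNetSketch()(torch.randn(1, 1, 128, 128), torch.randn(1, 3, 128, 128))
```

In the first architecture described above, the convolutional branches would instead be the encoder halves of pretrained autoencoders, with the same concatenate-then-classify fusion head.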
We first investigate the calibration of an omnidirectional infrared (IR) camera for intelligent perception applications. Edge boundaries in the low-resolution O-D IR image are not as sharp as those from color vision cameras, so standard calibration methods are harder to use and less accurate for the low-definition omnidirectional IR camera. To address omnidirectional IR camera calibration more fully, we propose a new control-point discovery methodology based on calibration grid center coordinates, and a Direct Spherical Calibration (DSC) approach, for a more robust and accurate calibration. DSC addresses the limitations of existing methods by using the spherical coordinates of the centroid of the calibration board to directly triangulate the location of the camera center and iteratively solve for the camera parameters. We compare DSC to three baseline visual calibration methodologies, augmenting them with additional spherical output for comparison. We also examine the optimum number of calibration boards using an evolutionary algorithm and Pareto optimization to find the best combination of accuracy, methodology, and number of calibration boards. The benefits of DSC are more efficient calibration-board geometry selection and better accuracy than the three baseline visual calibration methodologies.
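As a rough illustration of the spherical-coordinate step, the sketch below maps a detected calibration-board centroid in an omnidirectional image to azimuth and elevation angles, assuming an equidistant fisheye projection. The projection model, function name, and parameter values are assumptions for illustration only; the thesis's DSC formulation may differ.

```python
# Hedged sketch: pixel centroid -> spherical coordinates under an assumed
# equidistant fisheye model (r = f * theta). Not the thesis's exact DSC math.
import math

def centroid_to_spherical(u, v, cx, cy, pixels_per_radian):
    """Map a pixel (u, v) to (azimuth, elevation) about the optical axis."""
    dx, dy = u - cx, v - cy                # offset from the image centre
    r = math.hypot(dx, dy)                 # radial distance in pixels
    azimuth = math.atan2(dy, dx)           # angle around the optical axis
    theta = r / pixels_per_radian          # equidistant assumption
    elevation = math.pi / 2 - theta        # angle measured from the axis
    return azimuth, elevation

# Example: a board centroid at (820, 430) in an image centred at (640, 480).
az, el = centroid_to_spherical(820, 430, 640, 480, pixels_per_radian=300.0)
```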
In the context of vegetation detection, the fusion of omnidirectional (O-D) infrared (IR) and color vision sensors may increase the level of vegetation perception for unmanned robotic platforms. A literature search found no significant research in our area of interest: the fusion of O-D IR and O-D color vision sensors for the extraction of feature material type has not been adequately addressed. We augment index-based spectral decomposition with IR region-based spectral decomposition to address the number of false detections inherent in index-based spectral decomposition alone. Our work shows that fusing the Normalized Difference Vegetation Index (NDVI) from the O-D color camera with the thresholded IR signature of the associated vegetation region minimizes the number of false detections seen with NDVI alone. The contribution of this work is the demonstration of two new techniques: the Thresholded Region Fusion (TRF) technique for the fusion of O-D IR and O-D color, and the fusion of the Kinect vision sensor with the O-D IR camera. Our experimental validation demonstrates a 64% reduction in false detections with our method compared to classical index-based detection.
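The sketch below shows the underlying fusion idea: an NDVI-style index map gated by a thresholded thermal region, so index detections are kept only where the IR signature is plausible for vegetation. The classical NDVI formula, (NIR - Red) / (NIR + Red), is standard; the band choices, threshold values, and the thesis's "modified" NDVI are assumptions here, not the exact TRF algorithm.

```python
# Hedged sketch of NDVI + thresholded IR region fusion (illustrative values).
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Classical NDVI = (NIR - Red) / (NIR + Red), values in [-1, 1]."""
    return (nir - red) / (nir + red + eps)

def fuse_vegetation_mask(nir, red, ir_thermal,
                         ndvi_thresh=0.3, ir_low=0.2, ir_high=0.6):
    """Keep NDVI detections only inside a plausible thermal band,
    suppressing false positives that the index alone would report."""
    index_mask = ndvi(nir, red) > ndvi_thresh
    thermal_mask = (ir_thermal > ir_low) & (ir_thermal < ir_high)
    return index_mask & thermal_mask

# Example on random normalized bands of a 480x640 frame.
h, w = 480, 640
mask = fuse_vegetation_mask(np.random.rand(h, w), np.random.rand(h, w),
                            np.random.rand(h, w))
```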
Finally, we compare our DeepFuseNet results with our previous work using the Normalized Difference Vegetation Index (NDVI) and IR region-based spectral fusion. This work shows that the fusion of the O-D IR and O-D visual streams using our DeepFuseNet deep learning approach outperforms the previous NDVI fused with far-infrared region segmentation. Our experimental validation demonstrates a 92% reduction in false detections with our method compared to classical index-based detection. This work contributes a new technique for the fusion of O-D vision and O-D IR sensors using two deep CNN feature extractors feeding into a fully connected CNN fusion network (DeepFuseNet).
Identifier | oai:union.ndltd.org:vcu.edu/oai:scholarscompass.vcu.edu:etd-6795
Date | 01 January 2019
Creators | Stone, David L. |
Publisher | VCU Scholars Compass |
Source Sets | Virginia Commonwealth University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | © The Author |