
DEVELOPMENT OF MULTIMODAL FUSION-BASED VISUAL DATA ANALYTICS FOR ROBOTIC INSPECTION AND CONDITION ASSESSMENT

This dissertation broadly focuses on autonomous condition assessment of civil infrastructure using vision-based methods, which offer a plausible alternative to existing manual techniques. A region-based convolutional neural network (Faster R-CNN) is exploited to detect various earthquake-induced damage types in reinforced concrete buildings. Four damage categories are considered: surface cracks, spalling, spalling with exposed rebars, and severely buckled rebars. The performance of the model is evaluated on image data collected from buildings damaged in several past earthquakes in different parts of the world. The proposed algorithm can be integrated with inspection drones or mobile robotic platforms for quick assessment of damaged buildings, enabling expeditious planning of retrofit operations, minimization of damage cost, and timely restoration of essential services.

In addition, a computer vision-based approach is presented to track the evolution of damage over time by analyzing historical visual inspection data. Once a defect is detected in a recent inspection data set, its spatial correspondences in the data collected during previous rounds of inspection are identified by leveraging popular computer vision techniques. A single reconstructed view is then generated for each inspection round by synthesizing the candidate corresponding images. The chronology of damage thus established facilitates time-based quantification and lucid visual interpretation. This study is likely to enhance the efficiency of structural inspection by introducing the time dimension into the autonomous condition assessment pipeline.

Finally, this dissertation incorporates depth fusion into a CNN-based semantic segmentation model. A 3D animation and visual effects software package is exploited to generate a synthetic database of spatially aligned RGB and depth image pairs representing damage categories commonly observed in reinforced concrete buildings. A number of encoding techniques are explored for representing the depth data, and various schemes for fusing RGB and depth data are investigated to identify the best fusion strategy. Depth fusion is observed to significantly enhance the performance of deep learning-based damage segmentation algorithms. Furthermore, strategies are proposed to estimate depth information from the corresponding RGB frame, which eliminates the need for depth sensing at the time of deployment without compromising segmentation performance. Overall, the scientific research presented in this dissertation is a stepping stone towards realizing a fully autonomous structural condition assessment pipeline.
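The detection stage of the first study can be pictured with a short sketch. The dissertation does not specify its training framework, backbone, or configuration; the torchvision model zoo, the class list, and the helper function below are assumptions made purely for illustration.

```python
# Minimal sketch: a Faster R-CNN detector configured for the four damage
# categories named in the abstract. The use of torchvision and a
# COCO-pretrained ResNet-50 FPN backbone is an assumption, not the
# dissertation's reported setup.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

DAMAGE_CLASSES = [
    "background",               # index 0 is reserved for background
    "surface_crack",
    "spalling",
    "spalling_exposed_rebar",
    "buckled_rebar",
]

def build_damage_detector():
    # Start from a pretrained detector and replace the box predictor
    # so it outputs the five classes above (4 damage types + background).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(DAMAGE_CLASSES))
    return model

model = build_damage_detector()
model.eval()
with torch.no_grad():
    # One dummy RGB frame, standing in for a drone or robot camera image.
    predictions = model([torch.rand(3, 480, 640)])
```

After fine-tuning on labeled damage images, `predictions` would contain per-image boxes, labels, and scores that a downstream assessment module can consume.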
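For the second study, the abstract refers only to "popular computer vision techniques" for finding spatial correspondences across inspection rounds. One common choice is ORB feature matching with a RANSAC-estimated homography, sketched below with OpenCV; the actual method used in the dissertation may differ.

```python
# Minimal sketch: relating a defect image from the latest inspection to a
# frame from an earlier round. ORB + RANSAC homography is an illustrative
# assumption for the unspecified correspondence technique.
import cv2
import numpy as np

def match_to_previous_round(current, previous, min_matches=10):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(current, None)
    kp2, des2 = orb.detectAndCompute(previous, None)
    if des1 is None or des2 is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # not enough evidence that the two views overlap

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # The homography maps the current defect region into the older image,
    # so the same structural surface can be compared across time.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

Repeating this over each historical round, and warping the matched images into a common frame, yields the per-round reconstructed views from which a damage chronology can be read off.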
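The third study's RGB-depth fusion could follow many schemes, and the abstract compares several without detailing them here. The minimal two-branch network below illustrates one such scheme, feature-level fusion by channel concatenation; it is an assumption for illustration, not the reported architecture.

```python
# Minimal sketch of feature-level RGB-D fusion for damage segmentation.
# Two shallow encoder branches (one per modality) are fused by channel
# concatenation before a per-pixel classification head.
import torch
import torch.nn as nn

class RGBDFusionSegNet(nn.Module):
    def __init__(self, num_classes=5):  # 4 damage categories + background
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.rgb_branch = branch(3)    # color stream
        self.depth_branch = branch(1)  # encoded depth stream
        self.head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(fused)

net = RGBDFusionSegNet()
logits = net(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128))
```

In line with the abstract's final point, the depth input could in principle be replaced at deployment by a monocular depth estimate computed from the RGB frame itself, removing the need for a physical depth sensor.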

Identifier: 10.25394/pgs.17104541.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/17104541
Date: 01 December 2021
Creators: Tarutal Ghosh Mondal (11775980)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY-NC-SA 4.0
Relation: https://figshare.com/articles/thesis/DEVELOPMENT_OF_MULTIMODAL_FUSION-BASED_VISUAL_DATA_ANALYTICS_FOR_ROBOTIC_INSPECTION_AND_CONDITION_ASSESSMENT/17104541
