Integrating Multiple Deep Learning Models for Disaster Description in Low-Altitude Videos

Computer vision technologies are rapidly improving and becoming increasingly important in disaster response. Most disaster description techniques focus on either identifying objects or categorizing disasters. In this study, we trained multiple deep neural networks on low-altitude imagery with highly imbalanced and noisy labels. We use labeled images from the LADI dataset to formulate a solution to a general problem in disaster classification and object detection. Our research integrated and developed multiple deep learning models that perform both object detection and disaster scene classification. Our solution is competitive in the TRECVID Disaster Scene Description and Indexing (DSDI) task, demonstrating that it is comparable to other proposed approaches in retrieving disaster-related video clips.
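
The abstract describes the pipeline only at a high level; the following is a minimal Python sketch of the kind of two-branch setup it outlines, pairing a multi-label scene classifier with an off-the-shelf object detector on a single video frame. The backbone choices, the number of LADI-style scene labels, the positive-class weighting used to hedge against label imbalance, and the score-threshold fusion are illustrative assumptions, not the thesis's actual models.

    # Minimal sketch (not the thesis's exact pipeline): disaster scene
    # classification plus object detection on one low-altitude frame.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    NUM_SCENE_LABELS = 32  # assumed count of LADI-style scene labels

    # Multi-label scene classifier: ResNet-50 trunk with a sigmoid head.
    scene_model = models.resnet50(weights="DEFAULT")
    scene_model.fc = nn.Linear(scene_model.fc.in_features, NUM_SCENE_LABELS)

    # Class imbalance handled with per-label positive weights in BCE loss
    # (training loop omitted; the weight value here is an assumption).
    pos_weight = torch.ones(NUM_SCENE_LABELS) * 5.0
    criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

    # Off-the-shelf detector for disaster-related objects (e.g., debris, vehicles).
    detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    def describe_frame(image, label_names, score_thresh=0.5):
        """Return scene labels and detected boxes for one frame (PIL image)."""
        scene_model.eval()
        detector.eval()
        with torch.no_grad():
            x = preprocess(image).unsqueeze(0)
            scene_scores = torch.sigmoid(scene_model(x))[0]
            detections = detector([transforms.ToTensor()(image)])[0]
        labels = [label_names[i] for i, s in enumerate(scene_scores)
                  if s >= score_thresh]
        boxes = detections["boxes"][detections["scores"] >= score_thresh]
        return labels, boxes

In a video-retrieval setting such as DSDI, the per-frame labels and detections would then be aggregated over a clip before matching against a disaster-related query; that aggregation step is likewise outside this sketch.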

Identifier: oai:union.ndltd.org:unt.edu/info:ark/67531/metadc2048642
Date: 12 1900
Creators: Wang, Haili
Contributors: Buckles, Bill; Yang, Qing; Namuduri, Kamesh; Oh, JungHwan
Publisher: University of North Texas
Source Sets: University of North Texas
Language: English
Detected Language: English
Type: Thesis or Dissertation
Format: Text
Rights: Public, Wang, Haili, Copyright. Copyright is held by the author, unless otherwise noted. All rights reserved.