The influx of unmanned aerial systems over the last decade has increased the need for airspace awareness. Monitoring solutions such as drone detection, tracking, and classification are increasingly important for maintaining regulatory and security compliance, as well as for recognizing aircraft that may not be compliant. Vision systems offer significant size, weight, power, and cost (SWaP-C) advantages, which motivates exploration of algorithms that further aid monitoring performance. A method is explored that classifies aircraft by using vision systems to measure their motion characteristics. It builds on the assumption that, at a minimum, continuous visual detection of an object of interest (or, better, visual tracking) is already accomplished. Monocular vision is in part limited by range/scale ambiguity: the range and scale of an object projected onto the image plane of a camera under a pinhole model are generally lost. Classification of the aircraft offers an indirect way to recover scale information via the aircraft's identity. The measured motion characteristics can then be used to classify the perceived object based on its unique motion profile over time, using signal classification techniques. The study is not limited to unmanned aircraft; the simulated dataset also includes full-scale aircraft to provide a representative range of aircraft scale and motion.

/ Doctor of Philosophy /

The influx of small drones over the last decade has increased the need for airspace awareness, to ensure they do not become a nuisance when operated by unqualified or ill-intentioned personnel. Monitoring the airspace around locations where drone usage would be unwanted or a security concern is increasingly necessary, especially for longer-range, higher-endurance fixed-wing (airplane) drones.
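The range/scale ambiguity mentioned in the abstract above can be illustrated with a minimal pinhole-projection sketch. The focal length and object sizes below are hypothetical, chosen only to show that scaled-up geometry projects to an identical image footprint:

```python
# Pinhole projection: an object of width w at range z appears with
# image width w_img = f * w / z. Scaling both w and z by the same
# factor leaves w_img unchanged, so a single image cannot separate
# object size from range -- the range/scale ambiguity noted above.
def projected_width(f: float, w: float, z: float) -> float:
    return f * w / z

f = 800.0                                         # focal length in pixels (hypothetical)
small_near = projected_width(f, w=1.5, z=100.0)   # 1.5 m drone at 100 m
large_far = projected_width(f, w=15.0, z=1000.0)  # 15 m aircraft, 10x larger and 10x farther
assert abs(small_near - large_far) < 1e-9         # identical width on the image plane
```

Knowing the aircraft's identity (and hence its physical size) collapses this ambiguity, which is one motivation for classification given in the abstract.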
This work presents a solution utilizing a single camera to address the classification part of fixed-wing drone monitoring, as cameras are extremely common, generally inexpensive, information-rich sensors. Once an aircraft of interest is detected, classifying it can provide additional information about its intentions. Classification can also help improve visual detection and tracking performance, since it changes expectations of where and how the aircraft may continue to travel. Most existing visual classification works rely on features visible on the aircraft itself or on its silhouette shape. This work discusses an approach to classification that characterizes the visually perceived motion of an aircraft as it flies through the air. The study is not limited to drones; the simulated dataset also includes full-scale aircraft. Video of an airplane is used to extract motion from each frame. This motion is condensed into a single time signal, which is then classified using a neural network trained to recognize audio samples via a time-frequency representation called a spectrogram. This transfer-learning approach with ResNet-based spectrogram classification achieves 90.9% precision on the simulated test set.
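The signal-to-spectrogram step described above can be sketched with SciPy on a synthetic motion signal. All signal parameters here are illustrative assumptions, not values from the dissertation, and the motion-extraction and ResNet fine-tuning stages are only indicated in comments:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic per-frame motion signal standing in for the condensed
# motion measurement described above: 10 s of 30 fps video, with an
# oscillatory component plus noise. (Parameters are hypothetical.)
fs = 30.0                                   # video frame rate, Hz
t = np.arange(300) / fs                     # 10 s of frame timestamps
rng = np.random.default_rng(0)
motion = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)

# Time-frequency representation; a 2-D "image" like this is what a
# network pretrained on audio spectrograms (e.g. a ResNet backbone)
# can be fine-tuned to classify.
freqs, times, Sxx = spectrogram(motion, fs=fs, nperseg=64, noverlap=48)
log_spec = 10 * np.log10(Sxx + 1e-12)       # dB scale, as in audio pipelines
print(log_spec.shape)                       # (frequency bins, time frames)
```

The log-power scaling mirrors common audio preprocessing, which is what makes weights pretrained on audio spectrograms a plausible transfer-learning starting point for motion spectrograms.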
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/107805 |
Date | 19 January 2022 |
Creators | Chaudhry, Haseeb |
Contributors | Mechanical Engineering, Kochersberger, Kevin Bruce, Tokekar, Pratap, Woolsey, Craig A., Wicks, Alfred L. |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Language | English |
Detected Language | English |
Type | Dissertation |
Format | ETD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |