Visual surveillance of human crowds in dynamic environments has attracted a great deal of computer vision research in recent years. Moving object detection, which conventionally comprises motion segmentation and, optionally, object classification, is the first major task in any visual surveillance application. After the targets are detected, their geo-locations must be estimated so that they share a common reference coordinate system for higher-level decision-making. Depending on the required fidelity of the decision, multi-target data association may also be needed at higher levels to differentiate multiple targets across a series of frames.

Applying these vision-based algorithms to a crowd surveillance system (a major application studied in this dissertation) that uses a team of cooperative unmanned vehicles (UVs) introduces new challenges. Because the visual sensors move with the UVs, both the targets and the environment are dynamic, which adds to the complexity and uncertainty of the video processing. Moreover, the limited onboard computational resources call for more efficient algorithms. Responding to these challenges, the goal of this dissertation is to design and develop an effective and efficient visual surveillance system, based on the dynamic data-driven application systems (DDDAS) paradigm, to be used by cooperative UVs for autonomous crowd control and border patrol.

The proposed visual surveillance system comprises four modules:
1) a motion detection module, in which a new sliding-window-based method for detecting multiple moving objects is proposed to segment the moving foreground using the moving camera onboard the unmanned aerial vehicle (UAV);
2) a target recognition module, in which a customized method based on histograms of oriented gradients (HOG) is applied to classify human targets using the onboard camera of the unmanned ground vehicle (UGV);
3) a target geo-localization module, in which a new moving-landmark-based method is proposed for estimating the geo-location of the detected crowd from the UAV, while a heuristic method based on triangulation is applied for geo-locating the detected individuals via the UGV; and
4) a multi-target data association module, in which the affinity score is dynamically adjusted to comply with the changing dispersion of the detected targets over successive frames.

In this dissertation, a cooperative team of one UAV and multiple UGVs with onboard visual sensors is used to exploit the complementary characteristics (e.g., different fidelities and view perspectives) of these UVs for crowd surveillance. The DDDAS paradigm is applied to these vision-based modules, unifying the computational and instrumentation aspects of the application system for more accurate or efficient analysis according to the scenario. To illustrate and demonstrate the proposed visual surveillance system, aerial and ground video sequences from the UVs, as well as simulation models, are developed, and experiments are conducted on them. The experimental results on both the developed videos and datasets from the literature show the effectiveness and efficiency of the proposed modules and their promising performance in the considered crowd surveillance application.
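As a rough illustration of the kind of processing the recognition and data association modules perform (and not the dissertation's customized algorithms), the sketch below pairs OpenCV's stock HOG pedestrian detector with a simple distance-based affinity matrix solved by the Hungarian algorithm. The fixed Gaussian affinity, the max_dist threshold, and the helper function names are illustrative assumptions; the dissertation instead adjusts the affinity score dynamically with the dispersion of the detected targets.

    # Illustrative sketch only; not the dissertation's customized methods.
    # Uses OpenCV's default HOG person detector and SciPy's Hungarian solver.
    import cv2
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def detect_people(frame):
        """Return candidate pedestrian bounding boxes (x, y, w, h)."""
        boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
        return boxes

    def associate(prev_centroids, curr_centroids, max_dist=50.0):
        """Match detections across frames by centroid distance.

        A fixed Gaussian of the Euclidean distance serves as the affinity
        here (an assumption for illustration); matches farther apart than
        max_dist pixels are discarded.
        """
        if len(prev_centroids) == 0 or len(curr_centroids) == 0:
            return []
        dists = np.linalg.norm(
            prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=2)
        affinity = np.exp(-dists / max_dist)           # higher = more similar
        rows, cols = linear_sum_assignment(-affinity)  # maximize total affinity
        return [(r, c) for r, c in zip(rows, cols) if dists[r, c] <= max_dist]

    # Example usage on a (hypothetical) ground video stream:
    # cap = cv2.VideoCapture("ground_video.mp4")
    # prev = np.empty((0, 2))
    # while cap.isOpened():
    #     ok, frame = cap.read()
    #     if not ok:
    #         break
    #     boxes = detect_people(frame)
    #     curr = np.array([[x + w / 2, y + h / 2] for (x, y, w, h) in boxes])
    #     matches = associate(prev, curr)
    #     prev = curr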
Identifier | oai:union.ndltd.org:arizona.edu/oai:arizona.openrepository.com:10150/625649
Date | January 2017
Creators | Minaeian, Sara
Contributors | Son, Young-Jun, Liu, Jian, Valacich, Joseph S., Lien, Jyh-Ming
Publisher | The University of Arizona. |
Source Sets | University of Arizona |
Language | en_US |
Detected Language | English |
Type | text, Electronic Dissertation |
Rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. |