Digital surveillance systems are indispensable for household protection, community security, and traffic monitoring. With instant awareness of unusual events, timely measures can be taken to curb crimes as they happen. The surveillance video can be subjected to further analysis to pin down the relevant figures, timing, and course of a specific event.
With the growing number of surveillance cameras and the expansion of the surveillance area, the traditional split-screen display cannot provide an intuitive correspondence between the images acquired and the areas under surveillance, which may in turn delay the response to an event requiring immediate attention. A mapping between the acquired images and the 3D surveillance area is therefore needed to establish an intuitive visual correspondence. Moreover, a traditional monitoring system is usually equipped with either a plurality of wide-angle lenses on stationary bases or a single wide-angle lens that travels periodically on a chassis. The cost of setting up a system with multiple surveillance cameras is high, and once the captured images are displayed in a split-screen monitor they are usually too small to be useful for identifying the contents of a scene. A single-lens configuration, on the other hand, cannot continuously monitor the same area: the periodic scanning pattern unavoidably leaves some portions unattended. Even though wide-angle lenses cover a wider surveillance area, the lower resolution of the captured images does not allow effective identification of figures or articles in the scene. The configuration and functionality of traditional surveillance systems thus leave much to be desired.
In the proposed Event-Triggered Virtual Surveillance Environment, satellite pictures are employed to provide 2D surveillance-area information. Users then enter relevant site information, such as the number of floors, the floor map, and the locations and types of the cameras. This information is combined to construct the 3D surveillance scene, and the images acquired by the surveillance cameras are pasted into the constructed 3D model to provide an intuitive visual presentation. The status of other types of surveillance devices, such as infrared sensors and circuit breakers, can also be displayed visually in the constructed 3D scene.
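As an illustration of how a camera feed might be pasted onto the constructed scene, the following is a minimal sketch, not the thesis implementation: it warps one frame onto a planar wall region of the scene texture with a homography computed by OpenCV. The corner correspondences and file names are hypothetical stand-ins for the calibration a real deployment would supply.

```python
# Sketch: paste a camera frame onto a planar surface of the constructed 3D scene
# by warping it with a homography (assumed approach, hypothetical coordinates).
import cv2
import numpy as np

def paste_frame_onto_scene(frame, scene_texture, wall_quad_px):
    """Warp `frame` onto the quadrilateral `wall_quad_px` (four pixel corners)
    inside `scene_texture`, the rendered texture image of the 3D scene."""
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(wall_quad_px)
    H = cv2.getPerspectiveTransform(src, dst)
    size = scene_texture.shape[1::-1]                 # (width, height)
    warped = cv2.warpPerspective(frame, H, size)
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
    scene_texture[mask > 0] = warped[mask > 0]        # overwrite the wall region
    return scene_texture

# Hypothetical usage: the quad marks where this camera's view appears on a wall.
# frame = cv2.imread("camera01.jpg"); scene = cv2.imread("scene_atlas.png")
# scene = paste_frame_onto_scene(frame, scene,
#                                [[420, 130], [760, 150], [750, 390], [430, 380]])
```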
The images captured by a plural number of cameras are integrated, and intrusion paths are analyzed and predicted. With coordination between different lenses, suspected objects can be detected and tracked. In one embodiment, both wide-angle and telephoto lenses are used to overcome the pitfalls of the aforementioned systems. A wide-angle lens is responsible for wide-area surveillance; when an unusual event is detected, one or several telephoto lenses mounted on movable chassis are directed to lock onto and track the targets. Images with much higher resolution than those of the wide-angle lens can be acquired through the telephoto lenses, so wide coverage and improved resolution of target images are satisfied simultaneously. Face detection and face recognition can be further applied to the acquired images to determine whether a target contains a human face and to recognize the identity of the subject: no warning is given for registered persons, while an instant warning is dispatched for illegal intrusion. Combined with the analysis of target size and movement pattern and with dynamic background updating, false alarms caused by changes in environmental lighting, fallen leaves, water ripples, etc., can be significantly reduced. Upon detection of a security threat, the surveillance video and a warning message are transmitted over the Internet and wireless channels to pre-selected network terminals or personal mobile devices to facilitate a timely response. The warning settings can also be modified from these network terminals or personal mobile devices.
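The dynamic background updating and target-size analysis mentioned above could, for example, be realized with a slowly adapting running-average background model. The sketch below assumes OpenCV and uses hypothetical values for the learning rate and the minimum target area; it is one plausible realization, not the method of the thesis.

```python
# Sketch: running-average background model plus target-size filtering,
# so gradual lighting drift is absorbed and only sufficiently large
# foreground blobs are reported as targets.
import cv2
import numpy as np

ALPHA = 0.02        # background learning rate (assumed value)
MIN_AREA = 1500     # minimum blob area in pixels to count as a target (assumed)

def detect_targets(frame, background):
    """Return bounding boxes of large foreground objects and the updated
    floating-point background model (None on the first call)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if background is None:
        return [], gray
    diff = cv2.absdiff(gray, background)
    _, fg = cv2.threshold(diff.astype(np.uint8), 25, 255, cv2.THRESH_BINARY)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= MIN_AREA]
    # Slowly blend the current frame into the background so lighting changes,
    # ripples, and fallen leaves fade in instead of triggering alarms.
    cv2.accumulateWeighted(gray, background, ALPHA)
    return boxes, background
```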
The focuses of this paper include 3D surveillance scene construction, the mapping of surveillance images onto the constructed scene, object detection, and virtual patrol. In the first phase, 2D information extracted from satellite pictures is combined with manually entered site data to construct the 3D surveillance scene, and the images received from the cameras are pasted into the corresponding areas of the constructed scene. The mapping between the surveillance images and the scene provides a more intuitive presentation for a wide-range area equipped with a massive number of cameras. In the next phase, dynamic background updating, together with the analysis of target size and movement pattern, reduces the false alarm rate in the face of environmental lighting changes, fallen leaves, etc. Users can also inspect the surveillance scene in real time or change the settings of the remote surveillance system.
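For the coordination between the wide-angle and telephoto lenses, one possible handoff, assuming co-located cameras and a simple pinhole model rather than the calibration actually used, is to convert the target's pixel position in the wide-angle view into pan/tilt angles for the movable telephoto lens:

```python
# Sketch (assumed geometry, not the thesis algorithm): map a target position
# in the wide-angle image to pan/tilt angles for the telephoto lens.
import math

def pan_tilt_for_target(cx, cy, img_w, img_h, hfov_deg=90.0, vfov_deg=60.0):
    """Return pan/tilt angles in degrees, relative to the wide-angle optical
    axis, for the target centred at pixel (cx, cy); field-of-view values are
    hypothetical defaults."""
    x = (cx / img_w - 0.5) * 2.0 * math.tan(math.radians(hfov_deg) / 2.0)
    y = (0.5 - cy / img_h) * 2.0 * math.tan(math.radians(vfov_deg) / 2.0)
    return math.degrees(math.atan(x)), math.degrees(math.atan(y))

# Hypothetical usage: target blob centred at (1120, 260) in a 1280x720 frame.
# pan, tilt = pan_tilt_for_target(1120, 260, 1280, 720)
```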
Identifier | oai:union.ndltd.org:NSYSU/oai:NSYSU:etd-0126110-120446
Date | 26 January 2010 |
Creators | Chen, Shih-pin |
Contributors | Chungnan Lee, John Y. Chiang, Shing-Min Liu |
Publisher | NSYSU |
Source Sets | NSYSU Electronic Thesis and Dissertation Archive |
Language | Cholon |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0126110-120446 |
Rights | not_available, Copyright information available at source archive |