Improved detection and tracking of objects in surveillance video
Denman, Simon Paul, January 2009 (has links)
Surveillance networks are typically monitored by a few people, viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data, to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification.

Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, and make it unnecessary for the task of detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point each frame. Such systems however do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects.

This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system.

A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and maintenance of identity is handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance in uncertain conditions arising from occlusion and noise provided by a particle filter. The system is evaluated using the ETISEO database.

The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually.

Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
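The Condensation-style particle filter described in the abstract maintains a set of weighted position hypotheses and cycles through resample, predict, and weight steps each frame. The sketch below illustrates that generic cycle in Python; the `likelihood` callable is a hypothetical stand-in for the colour/feature model a real tracker would use, and this is an illustration of the general technique, not the thesis's SCF implementation.

```python
import numpy as np

def condensation_step(particles, weights, likelihood, motion_std=2.0, rng=None):
    """One resample-predict-weight cycle of a Condensation-style particle filter.

    particles:  (N, 2) array of candidate object positions (x, y).
    weights:    (N,) normalised importance weights from the previous frame.
    likelihood: callable mapping an (N, 2) array of positions to per-particle
                observation likelihoods (stand-in for an extracted feature model).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    # Resample particles in proportion to the previous weights.
    idx = rng.choice(n, size=n, p=weights)
    resampled = particles[idx]
    # Predict: diffuse each particle with Gaussian process noise.
    predicted = resampled + rng.normal(0.0, motion_std, size=resampled.shape)
    # Weight: evaluate the feature likelihood at each predicted position.
    new_weights = likelihood(predicted)
    new_weights = new_weights / new_weights.sum()
    return predicted, new_weights
```

Iterating this step concentrates the particle cloud around the object's most likely position, which is the behaviour the abstract relies on for tracking through occlusion and noise.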
A Perception Payload for Small-UAS Navigation in Structured Environments
Bharadwaj, Akshay S., 26 September 2018 (has links)
No description available.
MULTI-SPECTRAL FUSION FOR SEMANTIC SEGMENTATION NETWORKS
Justin Cody Edwards (14700769), 31 May 2023 (links)
<p>Semantic segmentation is a machine learning task that is seeing increased utilization in multiple fields, from medical imagery to land demarcation and autonomous vehicles. Semantic segmentation performs the pixel-wise classification of images, creating a new, segmented representation of the input that can be useful for detecting various terrain and objects within an image. Recently, convolutional neural networks have been heavily utilized when creating neural networks tackling the semantic segmentation task. This is particularly true in the field of autonomous driving systems.</p>
<p>The requirements of automated driver assistance systems (ADAS) drive semantic segmentation models targeted for deployment on ADAS to be lightweight while maintaining accuracy. A commonly used method to increase accuracy in the autonomous vehicle field is to fuse multiple sensory modalities. This research focuses on leveraging the fusion of long wave infrared (LWIR) imagery with visual spectrum imagery to fill in the inherent performance gaps when using visual imagery alone. This comes with a host of benefits, such as increased performance in various lighting conditions and adverse environmental conditions. Utilizing this fusion technique is an effective method of increasing the accuracy of a semantic segmentation model. Being a lightweight architecture is key for successful deployment on ADAS, as these systems often have resource constraints and need to operate in real time. Multi-Spectral Fusion Network (MFNet) [1] meets these requirements by leveraging a sensory fusion approach, and as such was selected as the baseline architecture for this research.</p>
<p>Many improvements were made upon the baseline architecture by leveraging a variety of techniques. These improvements include a novel loss function, categorical cross-entropy dice loss; the introduction of squeeze and excitation (SE) blocks; the addition of pyramid pooling; a new fusion technique; and drop input data augmentation. These improvements culminated in the creation of the Fast Thermal Fusion Network (FTFNet). Further improvements were made by introducing depthwise separable convolutional layers, leading to the lightweight FTFNet variants FTFNet Lite 1 & 2.</p>
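A combined cross-entropy and Dice loss, as named in the abstract above, typically blends a pixel-averaged cross-entropy term with a soft per-class Dice term. The NumPy sketch below shows one plausible formulation; the equal `dice_weight` blend, the per-class averaging, and the function name are assumptions for illustration, not the thesis's exact definition.

```python
import numpy as np

def cce_dice_loss(probs, one_hot, eps=1e-7, dice_weight=0.5):
    """Blend of categorical cross-entropy and soft Dice loss for segmentation.

    probs:   (N, C) softmax outputs, one row per pixel.
    one_hot: (N, C) one-hot ground-truth labels.
    The equal blend via dice_weight is an assumed choice; the thesis's
    exact weighting may differ.
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    # Pixel-averaged categorical cross-entropy.
    cce = -np.mean(np.sum(one_hot * np.log(probs), axis=1))
    # Soft Dice coefficient computed per class, then averaged.
    intersection = np.sum(probs * one_hot, axis=0)
    denom = np.sum(probs, axis=0) + np.sum(one_hot, axis=0)
    dice = np.mean((2.0 * intersection + eps) / (denom + eps))
    # Cross-entropy drives per-pixel accuracy; (1 - Dice) drives region overlap.
    return (1.0 - dice_weight) * cce + dice_weight * (1.0 - dice)
```

The Dice term counteracts class imbalance (small foreground regions contribute equally per class), which is one common motivation for combining it with cross-entropy in lightweight segmentation networks.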
Characterisation of volcanic emissions through thermal vision
Bombrun, Maxime, 01 October 2015 (links)
In April 2010, the eruption of Eyjafjallajökull (Iceland) threw volcanic ash across northwest Europe for six days, which led to air travel disruption. This crisis spotlighted the necessity of parameterising plume dynamics through emission, dispersion and fallout, so as to better model, track and forecast cloud motions. This eruption was labelled as a Strombolian-to-Sub-Plinian eruption type. Strombolian eruptions are coupled with a large range of volcanic event types (lava flows, paroxysms) and eruption styles (Hawaiian, Sub-Plinian) and offer a partial precursory indicator of more dangerous eruptions. In addition, Strombolian eruptions are small enough to allow observation from within a few hundred meters with relative safety, for both operators and equipment. Since 2001, thermal cameras have been increasingly used to track, parameterise and understand dynamic volcanic events. However, analysis, modelling and post-processing of thermal data are still not fully automated. In this thesis, I focus on the different components of Strombolian eruptions at the full range of remote-sensing spatial scales, from millimeters for individual particles to kilometers for entire features in satellite images. Overall, I aim to characterise volcanic emissions through thermal vision.