91 |
Human Detection, Tracking and Segmentation in Surveillance Video / Shu, Guang, 01 January 2014
This dissertation addresses the problem of human detection and tracking in surveillance videos. Even though this is a well-explored topic, many challenges remain when confronted with data from real-world situations, including appearance variation, illumination changes, camera motion, cluttered scenes and occlusion. This dissertation proposes several novel methods that improve on the current state of human detection and tracking by learning scene-specific information from video feeds.

Firstly, we propose a novel method for human detection that employs unsupervised learning and superpixel segmentation. The performance of generic human detectors usually degrades in unconstrained video environments due to varying lighting conditions, backgrounds and camera viewpoints. To handle this problem, we employ an unsupervised learning framework that improves the detection performance of a generic detector when it is applied to a particular video. In our approach, a generic DPM human detector collects initial detection examples. These examples are segmented into superpixels and then represented using a Bag-of-Words (BoW) framework. The superpixel-based BoW feature encodes useful color features of the scene, which provides additional information. Finally, a new scene-specific classifier is trained using the BoW features extracted from the new examples. Compared to previous work, our method learns scene-specific information through superpixel-based features, so it avoids many of the false detections typically produced by a generic detector. We demonstrate a significant improvement in the performance of a state-of-the-art detector.

Given robust human detection, we propose a robust multiple-human tracking framework using a part-based model. Human detection using part models has become quite popular, yet its extension to tracking has not been fully explored. Single-camera multiple-person tracking is often hindered by difficulties such as occlusion and changes in appearance. We address such problems by developing an online-learning tracking-by-detection method. Our approach learns part-based, person-specific Support Vector Machine (SVM) classifiers that capture the articulations of moving human bodies against dynamically changing backgrounds. With the part-based model, our approach handles partial occlusions in both the detection and the tracking stages. In the detection stage, we select the subset of parts that maximizes the probability of detection, which leads to a significant improvement in detection performance in cluttered scenes. In the tracking stage, we handle occlusions dynamically by distributing the score of the learned person classifier among its corresponding parts, which allows us to detect and predict partial occlusions and prevents the classifiers' performance from degrading. Extensive experiments on several challenging sequences demonstrate state-of-the-art performance in multiple-person tracking.

Next, in order to obtain precise boundaries of humans, we propose a novel method for multiple-human segmentation in videos that incorporates human detection and part-based detection potentials into a multi-frame optimization framework. In the first stage, after obtaining the superpixel segmentation for each detection window, we separate the superpixels corresponding to a human from the background by minimizing an energy function using a Conditional Random Field (CRF). We use the part detection potentials from the DPM detector, which provide useful information about human shape. In the second stage, the spatio-temporal constraints of the video are leveraged to build a tracklet-based Gaussian Mixture Model for each person, and the boundaries are smoothed by multi-frame graph optimization. Compared to previous work, our method automatically segments multiple people in videos with accurate boundaries, and it is robust to camera motion. Experimental results show that our method achieves better segmentation accuracy than previous methods on several challenging video sequences.

Most work in Computer Vision deals with point solutions: a specific algorithm for a specific problem. Putting different algorithms together into one integrated, real-world system, however, is a big challenge. Finally, we introduce an efficient tracking system, NONA, for high-definition surveillance video. We implement the system using a multi-threaded architecture (Intel Threading Building Blocks (TBB)) that executes video ingestion, tracking, and video output in parallel. To improve tracking accuracy without sacrificing efficiency, we employ several useful techniques: Adaptive Template Scaling handles the scale change caused by objects moving towards the camera, while Incremental Searching and Local Frame Differencing resolve challenging issues such as scale change, occlusion and cluttered backgrounds. We tested our tracking system on a high-definition video dataset and achieved acceptable tracking accuracy while maintaining real-time performance.
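To make the superpixel Bag-of-Words step concrete, below is a minimal Python sketch, assuming SLIC superpixels with mean-color descriptors and a k-means codebook; the vocabulary size and descriptor choice are illustrative assumptions, not the dissertation's exact settings.

```python
# Sketch: superpixel-based Bag-of-Words encoding of detection windows.
# Assumes scikit-image and scikit-learn; descriptor and vocabulary size
# are illustrative, not the dissertation's exact configuration.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def superpixel_bow(windows, n_segments=50, vocab_size=64):
    """Encode each detection window as a normalized histogram over a
    color-codeword vocabulary learned from all windows' superpixels."""
    per_window = []
    for win in windows:  # win: HxWx3 RGB image, floats in [0, 1]
        labels = slic(win, n_segments=n_segments, start_label=0)
        # Descriptor of each superpixel: its mean color.
        feats = np.array([win[labels == s].mean(axis=0)
                          for s in np.unique(labels)])
        per_window.append(feats)
    # Learn the color codebook over superpixels pooled from every window.
    vocab = KMeans(n_clusters=vocab_size, n_init=10).fit(np.vstack(per_window))
    bows = []
    for feats in per_window:
        words = vocab.predict(feats)
        hist, _ = np.histogram(words, bins=np.arange(vocab_size + 1))
        bows.append(hist / max(hist.sum(), 1))  # L1-normalized BoW vector
    return np.array(bows)
```

A scene-specific classifier (for example, a linear SVM) can then be trained on these BoW vectors from confident positive and negative detections, in the spirit of the pipeline described above.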
|
92 |
Automating Deep-Sea Video Annotation / Egbert, Hanson, 01 June 2021
As the world explores opportunities to develop offshore renewable energy capacity, there will be a growing need for pre-construction biological surveys and post-construction monitoring in the challenging marine environment. Underwater video is a powerful tool to facilitate such surveys, but the interpretation of the imagery is costly and time-consuming. Emerging technologies have improved automated analysis of underwater video, but these technologies are not yet accurate or accessible enough for widespread adoption in the scientific community or industries that might benefit from these tools.
To address these challenges, prior research developed a website that allows users to: (1) quickly play and annotate underwater videos, (2) create a short tracking video for each annotation that shows how an annotated concept moves through time, (3) verify the accuracy of existing annotations and tracking videos, (4) create a neural network model from existing annotations, and (5) automatically annotate unwatched videos using a previously created model. Using both validated and unvalidated annotations as well as annotations generated automatically from tracking, it counts Rathbunaster californicus (starfish) and Strongylocentrotus fragilis (sea urchin) with count accuracies of 97% and 99% and F1 scores of 0.90 and 0.81, respectively.
This thesis explores several improvements to the model above: first, a method to synchronize JavaScript video frames with a stable Python environment; second, reinforcement training that uses marine biology experts and the verification feature; and finally, a hierarchical method that allows the model to combine predictions for related concepts. On average, the hierarchical method improved F1 scores from 0.42 to 0.45 (a relative increase of 7%) and count accuracy from 58% to 69% (a relative increase of 19%) for the concepts Umbellula Lindahli and Funiculina.
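As a rough illustration of the hierarchical method, the sketch below lifts child-concept confidences to a parent concept so that related, individually uncertain predictions can reinforce each other; the taxonomy and the max-combination rule are assumptions made for illustration, not the thesis's actual hierarchy.

```python
# Sketch: hierarchical aggregation of per-concept detection confidences.
# The taxonomy and combination rule are illustrative assumptions.
TAXONOMY = {
    "sea pen": ["Umbellula Lindahli", "Funiculina"],  # parent -> children
}

def aggregate_hierarchy(scores, taxonomy):
    """Lift child-concept confidences to their parent concept."""
    combined = dict(scores)
    for parent, children in taxonomy.items():
        child_scores = [scores.get(c, 0.0) for c in children]
        # The parent is at least as confident as its best child, so the
        # model can fall back to the parent when children are ambiguous.
        combined[parent] = max(scores.get(parent, 0.0), *child_scores)
    return combined

print(aggregate_hierarchy({"Umbellula Lindahli": 0.4, "Funiculina": 0.7}, TAXONOMY))
# {'Umbellula Lindahli': 0.4, 'Funiculina': 0.7, 'sea pen': 0.7}
```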
|
93 |
Accelerating Multi-target Visual Tracking on Smart Edge Devices / Nalaie, Keivan, January 2023
Multi-object tracking (MOT) is a key building block in video analytics and finds extensive use in surveillance, search-and-rescue, and autonomous-driving applications. Object detection, a crucial stage in MOT, dominates the overall tracking inference time due to its reliance on Deep Neural Networks (DNNs). Despite the superior performance of cutting-edge object detectors, their extensive computational demands limit their real-time application on embedded devices with constrained processing capabilities. Hence, we aim to reduce the computational burden of object detection while maintaining tracking performance.
As the first approach, we adapt frame resolutions to reduce computational complexity. During inference, frame resolutions can be tuned according to the complexity of visual scenes. We present DeepScale, a model-agnostic frame resolution selection approach that operates on top of existing fully convolutional network-based trackers. By analyzing the effect of frame resolution on detection performance, DeepScale strikes good trade-offs between detection accuracy and processing speed by adapting frame resolutions on-the-fly.
Our second approach enhances the efficiency of a tracker through model adaptation. We introduce AttTrack, which expedites tracking by interleaving the execution of object detectors of different model sizes during inference. A sophisticated network (the teacher) runs only on keyframes, while for non-keyframes, knowledge is transferred from the teacher to a smaller network (the student) to improve the latter's performance.
Our third contribution exploits temporal-spatial redundancies to enable real-time multi-camera tracking. We propose the MVSparse pipeline, which consists of a central processing unit (on an edge server or in the cloud) that aggregates information from multiple cameras, and distributed lightweight Reinforcement Learning (RL) agents running on the individual cameras that predict the informative blocks in the current frame based on past frames from the same camera and detection results from the other cameras. / Thesis / Doctor of Science (PhD)
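A minimal sketch of on-the-fly resolution adaptation in the spirit of DeepScale follows; the candidate resolutions, confidence heuristic, and detector interface (a callable returning detections with a "score" field) are hypothetical placeholders rather than the thesis implementation, which learns the resolution selection rather than using a fixed rule.

```python
# Sketch: adapting frame resolution to scene difficulty before detection.
# Resolutions, thresholds, and the detector interface are placeholders.
import cv2

RESOLUTIONS = [(1088, 608), (704, 384), (448, 256)]  # fine -> coarse (w, h)

def detect_adaptive(frames, detector, conf_floor=0.5):
    level = 0  # start at the finest resolution
    for frame in frames:
        w, h = RESOLUTIONS[level]
        dets = detector(cv2.resize(frame, (w, h)))
        mean_conf = sum(d["score"] for d in dets) / len(dets) if dets else 0.0
        # Easy scenes (confident detections) tolerate a cheaper resolution;
        # hard scenes push the next frame back to a finer one.
        if mean_conf > 0.8 and level < len(RESOLUTIONS) - 1:
            level += 1
        elif mean_conf < conf_floor and level > 0:
            level -= 1
        yield dets, RESOLUTIONS[level]
```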
|
94 |
Kinematic Object Track Stitcher for Post Tracking Fragmentation Detection and Correction / Beigh, Alex Wunderlin, 03 June 2015
No description available.
|
95 |
Efficient and Robust Video Understanding for Human-robot Interaction and Detection / Li, Ying, 09 October 2018
No description available.
|
96 |
Directional Ringlet Intensity Feature Transform for Tracking in Enhanced Wide Area Motion Imagery / Krieger, Evan, January 2015
No description available.
|
97 |
Advanced wavelet application for video compression and video object tracking / He, Chao, 13 September 2005
No description available.
|
98 |
Ball tracking algorithm for mobile devices / Rzechowski, Kamil, January 2020
Object tracking seeks to determine an object's size and location in subsequent video frames, given its appearance and location in the first frame. Object tracking approaches fall into two categories: online trained trackers and offline trained trackers. The first group is based on handcrafted features such as HOG or Color Names; it offers high inference speed but suffers from the lack of highly discriminative features. The second group uses Convolutional Neural Networks as feature extractors; these generate highly meaningful features but limit inference speed, with object appearance learned in an offline phase. This report investigates the problem of tracking a soccer ball on mobile devices. Keeping in mind the limited computational resources of mobile devices, we propose a fused tracker. At the beginning of the video, the simple online trained tracker runs; as soon as it loses the ball, the more advanced tracker, based on deep neural networks, takes over. The fusion speeds up inference by using the simple tracker as much as possible, while keeping the tracking success rate high by falling back to the more advanced tracker once the first tracker loses the object. Both quantitative and qualitative experiments demonstrate the validity of this approach.
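The fused-tracker control flow can be sketched as follows; both trackers are stand-ins (for instance, a KCF-style correlation tracker and a CNN-based re-detector), and the confidence threshold is an illustrative assumption, not the report's tuned value.

```python
# Sketch of the fused tracker: run the cheap tracker until it loses the
# ball, then fall back to the expensive deep tracker to re-acquire it.
# fast_tracker.update/init and deep_tracker.detect are assumed interfaces.

def fused_track(frames, fast_tracker, deep_tracker, conf_thresh=0.3):
    box = None
    for frame in frames:
        if box is not None:
            box, conf = fast_tracker.update(frame)
            if conf >= conf_thresh:
                yield box, "fast"  # cheap path: handcrafted-feature tracker
                continue
        # Fast tracker lost the ball (or no box yet): re-detect with the
        # deep tracker, then hand the result back to the fast tracker.
        box = deep_tracker.detect(frame)
        if box is not None:
            fast_tracker.init(frame, box)
        yield box, "deep"
```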
|
99 |
Identifying seedling patterns in time-lapse imaging / Gustafsson, Nils, January 2024
With a changing climate, it is necessary to investigate how different plants are affected by drought, which is the starting point for this project. The project aims to apply machine learning tools to learn predictive patterns of Scots pine seedlings in response to drought conditions, by measuring the canopy area and growth rate of the seedlings in time-lapse images. Five different families of Scots pine are researched in this project, so five different sets of time-lapse images are used as the data set. The research group has previously created a method for finding the canopy area and computing the growth rate for the different families. Furthermore, the seedlings rotate in an individual pattern each day; according to the research group, this rotation could affect their drought tolerance, but it is currently not being measured. We therefore propose a method that uses an object detection model, such as Mask R-CNN, to detect and localize each seedling's region of interest. Within the obtained region of interest, the goal is to apply an object-tracking algorithm, such as a dense optical flow algorithm. Using methods such as the Shi-Tomasi or Lucas-Kanade method, we aim to find feature points and track their motion between images to recover the direction and velocity of each seedling's rotation. The tracking algorithms are then evaluated on how well they estimate the rotation features against an annotated subset of the time-lapse data set.
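A minimal sketch of the proposed rotation measurement, assuming OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade flow; summarizing point motion as the median angular displacement about the ROI centroid is an illustrative choice, not the thesis's final method.

```python
# Sketch: per-seedling rotation between two frames from sparse optical
# flow inside the detected region of interest (ROI).
import cv2
import numpy as np

def rotation_between(prev_gray, next_gray, roi_mask):
    # Shi-Tomasi corners restricted to the seedling's ROI.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=5,
                                  mask=roi_mask)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.flatten() == 1
    good_old = pts[ok].reshape(-1, 2)
    good_new = nxt[ok].reshape(-1, 2)
    if len(good_old) == 0:
        return 0.0
    center = good_old.mean(axis=0)
    d0, d1 = good_old - center, good_new - center
    a0 = np.arctan2(d0[:, 1], d0[:, 0])
    a1 = np.arctan2(d1[:, 1], d1[:, 0])
    dtheta = np.arctan2(np.sin(a1 - a0), np.cos(a1 - a0))  # wrap to [-pi, pi]
    return float(np.median(dtheta))  # radians/frame; sign gives direction
```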
|
100 |
A LIGHTWEIGHT CAMERA-LIDAR FUSION FRAMEWORK FOR TRAFFIC MONITORING APPLICATIONS / A CAMERA-LIDAR FUSION FRAMEWORK / Sochaniwsky, Adrian, January 2024
Intelligent Transportation Systems are advanced technologies used to reduce traffic and increase road safety for vulnerable road users. Real-time traffic monitoring is an important technology for collecting and reporting the information required to achieve these goals through the detection and tracking of road users inside an intersection. To be effective, these systems must be robust to all environmental conditions. This thesis explores the fusion of camera and Light Detection and Ranging (LiDAR) sensors to create an accurate, real-time traffic monitoring system. Sensor fusion leverages the complementary characteristics of the sensors to increase system performance in low-light and inclement weather conditions. To achieve this, three primary components are developed: a 3D LiDAR detection pipeline, a camera detection pipeline, and a decision-level sensor fusion module. The proposed pipeline is lightweight, running at 46 Hz on modest computer hardware, and accurate, scoring 3% higher than the camera-only pipeline on the Higher Order Tracking Accuracy metric. The camera-LiDAR fusion system is built on the ROS 2 framework, which provides a well-defined and modular interface for developing and evaluating new detection and tracking algorithms. Overall, the fusion of camera and LiDAR sensors will enable future traffic monitoring systems to provide cities with real-time information critical for increasing safety and convenience for all road users. / Thesis / Master of Applied Science (MASc)

Accurate traffic monitoring systems are needed to improve the safety of road users. These systems allow an intersection to "see" vehicles and pedestrians, providing near-instant information to assist future autonomous vehicles, and supplying data that lets city planners and officials reduce traffic, emissions, and travel times. This thesis aims to design, build, and test a traffic monitoring system that uses a camera and a 3D laser scanner to find and track road users in an intersection. By combining a camera and a 3D laser scanner, the system aims to perform better than either sensor alone. Furthermore, this thesis collects test data to prove that the system is accurate, able to see vehicles and pedestrians during the day and at night, and fast enough for "live" use.
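A minimal sketch of the decision-level fusion step, assuming the LiDAR detections have already been projected into the image plane; the IoU threshold and Hungarian matching are illustrative choices, not necessarily the thesis's exact fusion module.

```python
# Sketch: decision-level camera-LiDAR fusion by optimal IoU matching of
# camera boxes against image-projected LiDAR boxes (x1, y1, x2, y2).
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse(cam_boxes, lidar_boxes, min_iou=0.3):
    """Matched pairs carry both sensors' evidence; unmatched detections
    survive as single-sensor results."""
    fused, used_c, used_l = [], set(), set()
    cost = np.array([[1.0 - iou(c, l) for l in lidar_boxes] for c in cam_boxes])
    if cost.size:
        for ci, li in zip(*linear_sum_assignment(cost)):
            if 1.0 - cost[ci, li] >= min_iou:
                fused.append(("both", cam_boxes[ci], lidar_boxes[li]))
                used_c.add(ci)
                used_l.add(li)
    fused += [("camera", c, None) for i, c in enumerate(cam_boxes) if i not in used_c]
    fused += [("lidar", None, l) for i, l in enumerate(lidar_boxes) if i not in used_l]
    return fused
```

Keeping the unmatched single-sensor detections is what lets a fused system degrade gracefully when one sensor is weak, for example the camera in low light or the LiDAR in heavy rain.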
|