  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Tracking a tennis ball using image processing techniques

Mao, Jinzi 30 August 2006
In this thesis we explore several algorithms for automatic real-time tracking of a tennis ball. We first investigate the use of background subtraction with color/shape recognition for fast tracking of the tennis ball. We then compare our solution with a cascade of boosted Haar classifiers [68] in a simulated environment to estimate accuracy and ideal processing speeds. The results show that the background subtraction techniques were not only faster but also more accurate than the Haar classifiers. Following these promising results, we extend the background subtraction approach and develop three other improved techniques. These techniques use more accurate background models and more reliable, stringent criteria, allowing us to track the tennis ball in a real tennis environment with cameras having higher resolutions and frame rates.

We tested our techniques on a large number of real tennis videos. In the indoor environment, we achieved a true positive rate of about 90%, a false alarm rate of less than 2%, and a tracking speed of about 20 fps. In the outdoor environment, the performance of our techniques is not as good as in the indoor cases due to the complexity and instability of outdoor scenes. This problem can be mitigated by resetting our system so that the camera focuses mainly on the tennis ball, minimizing the influence of external factors.

Despite these limitations, our techniques are able to track a tennis ball with very high accuracy and at speeds that most currently available tracking techniques cannot achieve. We are confident that the motion information generated by our techniques is reliable and accurate. Given these promising results, we believe some real-world applications can be constructed.
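The combination of background subtraction with a color gate, as described in this abstract, can be sketched as follows. This is a hypothetical minimal illustration, not the thesis code; the threshold and color range are made-up placeholder values.

```python
# Background-subtraction sketch: flag pixels that differ from a static
# background frame AND fall inside a ball-like intensity range, then take
# the centroid of the surviving pixels as the ball position.

def subtract_background(frame, background, diff_thresh=30, color_range=(80, 120)):
    """Binary mask of pixels that moved and match the color range.

    frame, background: 2D lists of intensity values (stand-ins for images).
    """
    h, w = len(frame), len(frame[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            moved = abs(frame[y][x] - background[y][x]) > diff_thresh
            ball_colored = color_range[0] <= frame[y][x] <= color_range[1]
            mask[y][x] = 1 if (moved and ball_colored) else 0
    return mask

def centroid(mask):
    """Centroid (x, y) of foreground pixels, or None if the mask is empty."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
```

In practice the background model would be updated over time and the color test done in a proper color space, but the structure of the per-frame test is the same.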
3

Transformational Models for Background Subtraction with Moving Cameras

Zamalieva, Daniya January 2014 (has links)
No description available.
4

Motion Detection for Video Surveillance

Rahman, Junaedur January 2008 (has links)
This thesis addresses the broad subject of automatic motion detection and analysis in video surveillance image sequences. Besides proposing a new solution, several previous algorithms are evaluated, some of which turn out to be noticeably complementary. In real-time surveillance, detecting and tracking multiple objects and monitoring their activities in both outdoor and indoor environments are challenging tasks for a video surveillance system. A number of real-time problems limited the scope of this work from the beginning, namely illumination changes, moving backgrounds, and shadow detection. An improved background subtraction method is followed by foreground segmentation, data evaluation, shadow detection in the scene, and finally the motion detection method. The algorithm is applied to a number of practical problems to observe whether it leads to the expected solution. Several experiments are carried out in different challenging environments. Test results show that in most of the problematic environments, the proposed algorithm produces better-quality results.
5

Detecting and tracking multiple interacting objects without class-specific models

Bose, Biswajit, Wang, Xiaogang, Grimson, Eric 25 April 2006 (has links)
We propose a framework for detecting and tracking multiple interacting objects from a single, static, uncalibrated camera. The number of objects is variable and unknown, and object-class-specific models are not available. We use background subtraction results as measurements for object detection and tracking. Given these constraints, the main challenge is to associate pixel measurements with (possibly interacting) object targets. We first track clusters of pixels, and note when they merge or split. We then build an inference graph, representing relations between the tracked clusters. Using this graph and a generic object model based on spatial connectedness and coherent motion, we label the tracked clusters as whole objects, fragments of objects or groups of interacting objects. The outputs of our algorithm are entire tracks of objects, which may include corresponding tracks from groups of objects during interactions. Experimental results on multiple video sequences are shown.
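The merge/split bookkeeping on tracked pixel clusters described above can be illustrated with a simple overlap test between consecutive frames. This is a hypothetical sketch of the idea, not the authors' implementation; clusters are reduced to axis-aligned bounding boxes.

```python
def overlaps(a, b):
    """Axis-aligned box overlap; boxes are (x1, y1, x2, y2)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def merge_split_events(prev_boxes, curr_boxes):
    """Detect merge and split events between two frames' cluster boxes.

    merges maps a current-box index to the previous boxes that fused into it;
    splits maps a previous-box index to the current boxes it broke into.
    These events are the edges of the inference graph described above.
    """
    merges, splits = {}, {}
    for j, c in enumerate(curr_boxes):
        parents = [i for i, p in enumerate(prev_boxes) if overlaps(p, c)]
        if len(parents) >= 2:
            merges[j] = parents
    for i, p in enumerate(prev_boxes):
        children = [j for j, c in enumerate(curr_boxes) if overlaps(p, c)]
        if len(children) >= 2:
            splits[i] = children
    return merges, splits
```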
6

Multiple Human Body Detection in Crowds

Feng, Weinan January 2012 (has links)
The objective of this project is to use digital imaging devices to monitor a delineated area of public space and to gather statistics about people moving across this area. A feasible detection approach, based on background subtraction, has been developed and tested on 39 images. Individual pedestrians in the images can be detected and counted. The approach is suited to detecting and counting pedestrians who do not overlap, with a detection accuracy higher than 80%.
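Counting non-overlapping pedestrians from a background-subtraction mask amounts to counting connected foreground blobs. A minimal sketch (hypothetical, not the project's code) using 4-connected flood fill:

```python
def count_blobs(mask, min_size=1):
    """Count 4-connected foreground blobs in a binary mask — one blob per
    pedestrian under the no-overlap assumption described above."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill this blob and measure its size
                stack, size = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    size += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if size >= min_size:  # reject speckle noise
                    count += 1
    return count
```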
7

Stationary object detection by a pair of PTZ cameras

Guillot, Constant 23 January 2012 (has links)
Video analysis for video surveillance needs good resolution in order to analyse video streams with maximum robustness. In the context of stationary object detection in wide areas, such as car parks, a good compromise between a limited number of cameras and high coverage of the area is hard to achieve. Here we use a pair of Pan-Tilt-Zoom (PTZ) cameras whose parameters (pan, tilt and zoom) can change. The cameras cycle through a predefined set of parameters chosen so that the entire scene is covered at an adapted resolution. For each triplet of parameters, a camera can be treated as a stationary camera with a very low frame rate, referred to as a view. First, each view is considered independently. A background subtraction algorithm, robust to changes in illumination and based on a grid of SURF descriptors, is proposed in order to separate background from foreground. The detection and segmentation of stationary objects is then done by re-identifying foreground descriptors against a foreground model. Next, in order to filter out false alarms and to localise the objects in the 3D world, the detected stationary silhouettes are matched between the two cameras in rectified coordinates. To remain robust to segmentation errors, instead of matching one silhouette to another, groups of silhouettes from the two cameras that mutually explain each other are matched. Each group (usually one silhouette per camera, but sometimes more) then corresponds to a stationary object. Finally, triangulating the top and bottom points of the silhouettes gives an estimate of the object's 3D position and size.
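For a rectified camera pair, the final triangulation step reduces to the standard disparity relation Z = f·B/d, after which the silhouette's pixel height converts to a metric height. A hypothetical sketch (the focal length, baseline, and coordinates below are made-up values, not the thesis setup):

```python
def triangulate_height(top_y, bottom_y, x_left, x_right, focal, baseline):
    """Depth and metric height from matched top/bottom silhouette points
    in a rectified stereo pair.

    Depth from horizontal disparity: Z = focal * baseline / disparity;
    height from the silhouette's vertical extent back-projected at Z.
    Units: pixels for image coordinates/focal, metres for baseline.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    depth = focal * baseline / disparity
    pixel_height = abs(bottom_y - top_y)
    height = pixel_height * depth / focal
    return depth, height
```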
8

Deep Learning Approach to Trespass Detection using Video Surveillance Data

Bashir, Muzammil 22 April 2019 (has links)
While railroad trespassing is a dangerous activity with significant security and safety risks, regular patrolling of potential trespassing sites is infeasible due to exceedingly high resource demands and personnel costs. There is thus a need for an automated trespass detection and early-warning tool leveraging state-of-the-art machine learning techniques. Leveraging video surveillance through security cameras, this thesis designs a novel approach called ARTS (Automated Railway Trespassing detection System) that tackles the problem of detecting trespassing activity. In particular, we adopt a CNN-based deep learning architecture (Faster-RCNN) as the core component of our solution. However, these deep-learning-based methods, while effective, are known to be computationally expensive and time consuming, especially when applied to a large amount of surveillance data. Given the sparsity of railroad trespassing activity, we design a dual-stage deep learning architecture composed of an inexpensive prefiltering stage for activity detection followed by a high-fidelity trespass detection stage for robust classification. The former is responsible for filtering out frames that show little to no activity, thereby reducing the amount of data to be processed by the latter, more compute-intensive stage, which adopts state-of-the-art Faster-RCNN to ensure effective classification of trespassing activity. The resulting dual-stage architecture, ARTS, represents a flexible solution capable of trading off performance against computational time. We demonstrate the efficacy of our approach on a public-domain surveillance dataset.
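The dual-stage idea — a cheap activity score gating an expensive detector — can be sketched as below. This is a hypothetical illustration, not ARTS itself: the activity score here is a simple mean frame difference, and the expensive detector is passed in as a callable (standing in for something like Faster-RCNN).

```python
def activity_score(frame, prev_frame):
    """Cheap stage-1 score: mean absolute difference between consecutive frames."""
    n = len(frame) * len(frame[0])
    return sum(abs(a - b)
               for ra, rb in zip(frame, prev_frame)
               for a, b in zip(ra, rb)) / n

def dual_stage_detect(frames, expensive_detector, activity_thresh=5.0):
    """Run the expensive detector only on frames passing the cheap prefilter;
    quiet frames are reported as 'no activity' without heavy computation."""
    results = []
    for prev, curr in zip(frames, frames[1:]):
        if activity_score(curr, prev) > activity_thresh:
            results.append(expensive_detector(curr))
        else:
            results.append("no activity")
    return results
```

The threshold trades missed detections against wasted detector invocations, which mirrors the performance/compute trade-off described above.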
9

Using computer vision to simulate virtual human crowd behaviors

Jacques Junior, Julio Cezar Silveira 20 February 2006 (has links)
This study presents a model to extract information from the real world using computer vision techniques. In particular, we use tracking algorithms to extract the trajectories of filmed people, aiming to simulate and validate the behavior of virtual human crowds. A great challenge when trying to reproduce the behavior of a crowd in a given space realistically (by means of simulation) is supplying the simulation model with all the attributes needed to describe the movement of virtual people. Individual and collective features of people can produce a great variety of behaviors, making the modeling complex. Furthermore, the space itself contains restrictions that can interfere with people's behavior. This study proposes a model in which people in the real world have their trajectories captured automatically. In a post-processing step, the captured trajectories are used to generate velocity fields that help compute the movement of virtual humans, providing more realistic simulations.
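Building a velocity field from captured trajectories, as the post-processing step above describes, can be sketched as follows. This is a hypothetical minimal version, not the thesis model: each trajectory is a list of (x, y) points sampled at a fixed rate, and each grid cell accumulates the average step vector of trajectories passing through it.

```python
def velocity_field(trajectories, grid_w, grid_h, cell=1.0):
    """Average velocity vector per grid cell, accumulated from trajectories.

    trajectories: lists of (x, y) points sampled at a fixed frame rate, so a
    consecutive-point difference is a velocity in cells-per-frame.
    Returns a dict mapping (cell_x, cell_y) -> (vx, vy).
    """
    sums, counts = {}, {}
    for traj in trajectories:
        for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
            key = (int(x0 // cell), int(y0 // cell))
            if 0 <= key[0] < grid_w and 0 <= key[1] < grid_h:
                vx, vy = x1 - x0, y1 - y0
                sx, sy = sums.get(key, (0.0, 0.0))
                sums[key] = (sx + vx, sy + vy)
                counts[key] = counts.get(key, 0) + 1
    return {k: (sx / counts[k], sy / counts[k]) for k, (sx, sy) in sums.items()}
```

A simulated agent entering a cell would then steer toward that cell's average vector, reproducing the filmed crowd's flow.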
10

Intelligent computer vision processing techniques for fall detection in enclosed environments

Rhuma, Adel January 2014 (has links)
Detecting unusual movement (falls) of elderly people in enclosed environments is receiving increasing attention and is likely to have massive potential social and economic impact. In this thesis, new intelligent computer-vision-based techniques are proposed to detect falls in indoor environments for senior citizens living independently, such as in intelligent homes. Different types of features extracted from video-camera recordings are exploited together with both background subtraction analysis and machine learning techniques. Initially, an improved background subtraction method is used to extract the region of a person in the recording of a room environment. A selective updating technique is introduced for adapting the background model so that the human body region is not absorbed into the background model when it is static for prolonged periods of time. Since two-dimensional features can generate false alarms and are not invariant to different directions, more robust three-dimensional features are next extracted from a three-dimensional person representation formed from the measurements of multiple calibrated video cameras. The extracted three-dimensional features are used to construct a single Gaussian model by maximum likelihood, and falls are distinguished from non-fall activity by comparing the model output with a single threshold. In the final part of the work, new fall detection schemes that use only one uncalibrated video camera are tested in a real elderly person's home environment. These approaches are based on two-dimensional features that describe different human body postures. The extracted features are used to construct a supervised posture classifier for abnormal posture detection. Finally, rules set according to the characteristics of fall activity are used to build a robust fall detection model.
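The single-Gaussian-plus-threshold scheme described above can be illustrated in one dimension. This is a hypothetical sketch, not the thesis implementation: a Gaussian is fitted by maximum likelihood to features of normal (non-fall) activity, and a new observation is flagged when it lies too many standard deviations from the mean.

```python
import math

def fit_gaussian(samples):
    """Maximum-likelihood mean and variance of 1-D feature samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

def is_fall(feature, mean, var, thresh=3.0):
    """Flag a fall when the feature is more than `thresh` standard deviations
    from the model of normal activity (equivalent to a likelihood threshold)."""
    return abs(feature - mean) > thresh * math.sqrt(var)
```

The real system uses multi-dimensional 3-D features, but the decision rule — compare the model output against a single threshold — has the same shape.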
