1 |
Human Body Part Detection And Multi-human Tracking In Surveillance Videos. Topcu, Hasan Huseyin, 01 May 2012 (has links) (PDF)
With recent developments in Computer Vision and Pattern Recognition, surveillance applications are equipped with event/activity understanding and interpretation capabilities, which usually require recognizing humans in real-world scenes. Real-world scenes such
as airports, streets and train stations are complex because they involve many people, complicated occlusions and cluttered backgrounds. Despite this complexity, human detectors can locate pedestrians accurately even in such scenes,
and visual trackers can follow targets in cluttered environments. The integration of visual object detection and tracking, the fundamental building blocks of existing
surveillance applications, is one solution to the multi-human tracking problem in crowded scenes and is the topic studied in this thesis.
In this thesis, human body part detectors capable of detecting human heads and upper bodies are trained with Support Vector Machines (SVM) using Histograms of Oriented Gradients (HOG), one of the state-of-the-art descriptors for human detection. The training process is examined in detail by investigating the effects of
the HOG descriptor's parameters. Human heads and upper bodies are searched only within regions of interest (ROIs) computed by motion detection. These body part detectors are then integrated with a multi-human tracker that solves the data association problem with the
Multi-Scan Markov Chain Monte Carlo Data Association (MCMCDA) algorithm. Associated measurements of upper-body locations are used to correct the state of each track,
and state estimation is performed with a Kalman filter. The performance of the detectors is evaluated on the MIT Pedestrian dataset and the INRIA Person dataset.
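To make the detection pipeline described in this abstract concrete, the sketch below trains a linear SVM on HOG features extracted from head/upper-body patches and slides it over a motion-based ROI. It is a minimal example using scikit-image and scikit-learn, not the thesis's implementation; the window size, HOG parameters, regularization strength and data-loading conventions are assumptions.

```python
# Minimal HOG + linear SVM body-part detector sketch (not the thesis's code).
# Assumes positive/negative grayscale patches of a fixed window size are available.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN_SIZE = (64, 64)  # assumed head/upper-body window size

def hog_features(patch):
    """Compute a HOG descriptor for one grayscale WIN_SIZE patch."""
    return hog(patch,
               orientations=9,          # number of gradient orientation bins
               pixels_per_cell=(8, 8),  # cell size, one of the parameters studied
               cells_per_block=(2, 2),  # block size for local contrast normalization
               block_norm='L2-Hys')

def train_detector(pos_patches, neg_patches):
    """Train a linear SVM to separate body-part patches from background patches."""
    X = np.array([hog_features(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    clf = LinearSVC(C=0.01)  # regularization strength is an assumption
    clf.fit(X, y)
    return clf

def detect(clf, roi, stride=8):
    """Slide WIN_SIZE windows over a motion-based ROI and keep positive responses."""
    h, w = roi.shape
    detections = []
    for y0 in range(0, h - WIN_SIZE[1] + 1, stride):
        for x0 in range(0, w - WIN_SIZE[0] + 1, stride):
            window = roi[y0:y0 + WIN_SIZE[1], x0:x0 + WIN_SIZE[0]]
            if clf.decision_function([hog_features(window)])[0] > 0:
                detections.append((x0, y0))
    return detections
```

In a full pipeline, the returned window positions would then be merged (e.g. by non-maximum suppression) before being passed to the data association stage.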
|
2 |
Reconnaissance de formes et suivi de mouvements en 4D temps-réel : Restauration de cartes de profondeur / 4D real-time object recognition and tracking: depth map restoration. Brazey, Denis, 09 December 2014 (has links)
In this dissertation, we are interested in several issues related to 3D data processing. The first concerns people detection and tracking in depth map sequences; we propose an improvement of an existing method based on a segmentation stage followed by a tracking module. The second issue is head detection and modelling in 3D point clouds, for which we adopt a probabilistic approach based on a new spherical mixture model. The last application deals with the restoration of depth maps containing missing data. To solve this problem, we propose a surface approximation method based on interpolating Dm-splines with scale transforms to approximate and restore the image. The presented results illustrate the efficiency of the developed algorithms.
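To illustrate the depth-map restoration step, the sketch below fills missing depth values by scattered-data interpolation with SciPy. It is a simplified stand-in for the Dm-spline approximation with scale transforms described in the abstract, not that method itself; the invalid-value convention and choice of interpolant are assumptions.

```python
# Simplified depth map restoration sketch: fill missing values by interpolation.
# This is NOT the Dm-spline method of the thesis, only a stand-in illustration.
import numpy as np
from scipy.interpolate import griddata

def restore_depth(depth, invalid_value=0.0):
    """Fill missing depth pixels (marked with invalid_value) from valid neighbors."""
    h, w = depth.shape
    yy, xx = np.mgrid[0:h, 0:w]
    valid = depth != invalid_value
    if valid.all():
        return depth.copy()
    pts = np.column_stack([yy[valid], xx[valid]])
    # Interpolate a smooth surface through the valid samples...
    filled = griddata(pts, depth[valid], (yy, xx), method='cubic')
    # ...and fall back to nearest-neighbor where the cubic interpolant is
    # undefined (outside the convex hull of the valid samples).
    nearest = griddata(pts, depth[valid], (yy, xx), method='nearest')
    filled[np.isnan(filled)] = nearest[np.isnan(filled)]
    return filled

# Example: a synthetic depth ramp with a square hole of missing data.
if __name__ == "__main__":
    depth = np.fromfunction(lambda y, x: 1.0 + 0.01 * x, (64, 64))
    depth[20:30, 20:30] = 0.0          # simulate missing data
    restored = restore_depth(depth)
    print(np.abs(restored - (1.0 + 0.01 * np.arange(64))).max())
```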
|
3 |
Tackling pedestrian detection in large scenes with multiple views and representations / Une approche réaliste de la détection de piétons multi-vues et multi-représentations pour des scènes extérieures. Pellicanò, Nicola, 21 December 2018 (has links)
Pedestrian detection and tracking have become important fields of Computer Vision research because of their implications for many applications, e.g. surveillance, autonomous cars and robotics. Pedestrian detection in high-density crowds is a natural extension of this body of work. The ability to track each pedestrian independently in a dense crowd has multiple applications: the study of human social behavior under high densities, the detection of anomalies, and large-event infrastructure planning.

On the other hand, high-density crowds introduce novel problems for the detection task. First, clutter and occlusion are taken to the extreme, so that only heads are visible and they are not easily separable from the moving background. Second, heads are usually small (typically less than ten pixels in diameter) and have little or no texture. This follows from two independent constraints: each camera needs a field of view as wide as possible, and the pedestrians must not be identifiable, for privacy reasons. In this work we develop a complete framework to handle pedestrian detection and tracking under these difficulties, using multiple cameras to implicitly deal with the heavy occlusions.

As a first contribution, we propose a robust method for camera pose estimation in surveillance environments. We handle problems such as large distances between cameras, large perspective variations and scarcity of matching information by exploiting an entire video stream to perform the calibration, so that it converges quickly to a good solution. Moreover, we are concerned not only with the global fitness of the solution but also with reaching low local errors.

As a second contribution, we propose an unsupervised multiple-camera detection method that exploits the visual consistency of pixels between views to estimate the presence of a pedestrian. After a fully automatic metric registration of the scene, one can jointly estimate the presence of a pedestrian and its height, allowing detections to be projected onto a common ground plane and hence enabling 3D tracking, which can be much more robust than image-space tracking.

In the third part, we study methods for supervised pedestrian detection on single views. Specifically, we aim to build a dense pedestrian segmentation of the scene starting from spatially imprecise labels, i.e. head centers instead of full head contours, since extracting the latter is unfeasible in a dense crowd. Most notably, deep architectures for semantic segmentation are studied and adapted to the problem of small head detection in cluttered environments.

As a last contribution, we propose a novel framework for efficient information fusion in 2D spaces. The final aim is to fuse multiple sensors (supervised detectors on each view and an unsupervised multi-view detector) at the ground-plane level, which is thus our frame of discernment. Since the space complexity of such a frame is very large, we propose an efficient compound-hypothesis representation that is invariant to the scale of the search space. Through this representation we can define efficient basic operators and combination rules of Belief Function Theory. Furthermore, we propose a complementary graph-based description of the relationships between compound hypotheses (i.e. intersections and inclusions) that supports efficient algorithms for, e.g., high-level decision making.

Finally, we demonstrate the information fusion approach both at a spatial level, i.e. between detectors of different natures, and at a temporal level, by performing evidential tracking of pedestrians on real large-scale scenes in sparse and dense conditions.
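The belief-function fusion mentioned in the last paragraphs of this abstract can be illustrated with the standard, non-compound machinery. The sketch below combines two mass functions over a small frame of discernment with Dempster's rule; it does not use the thesis's scalable compound-hypothesis representation, and the frame and mass values are made-up examples.

```python
# Illustration of Dempster's rule of combination over a small discernment frame.
# This is the textbook rule, not the thesis's compound-hypothesis scheme.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    # Normalize by the non-conflicting mass (1 - K).
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Example: two detectors reporting beliefs over ground-plane cells {c1, c2, c3}.
FRAME = frozenset({"c1", "c2", "c3"})
m_single_view = {frozenset({"c1"}): 0.6, frozenset({"c1", "c2"}): 0.3, FRAME: 0.1}
m_multi_view = {frozenset({"c1", "c2"}): 0.5, frozenset({"c3"}): 0.2, FRAME: 0.3}

fused = dempster_combine(m_single_view, m_multi_view)
for subset, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(subset), round(mass, 3))
```

The thesis's contribution addresses the case where the frame (the discretized ground plane) is far too large for this explicit subset enumeration, hence the compound-hypothesis representation.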
|
4 |
Multi-sensor multi-person tracking on a mobile robot platform. Poschmann, Peter, 28 May 2018 (has links) (PDF)
Service robots need to be aware of persons in their vicinity in order to interact with them. People tracking enables the robot to perceive persons by fusing the information of several sensors. Most robots rely on laser range scanners and RGB cameras for this task. The thesis focuses on the detection and tracking of heads. This allows the robot to establish eye contact, which makes interactions feel more natural.
Developing a fast and reliable pose-invariant head detector is challenging. The head detector that is proposed in this thesis works well on frontal heads, but is not fully pose-invariant. This thesis further explores adaptive tracking to keep track of heads that do not face the robot. Finally, head detector and adaptive tracker are combined within a new people tracking framework and experiments show its effectiveness compared to a state-of-the-art system.
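As a rough illustration of how detections can update a head track in a system like the one described above, the sketch below runs a constant-velocity Kalman filter and associates each incoming detection to the track with a gated nearest-neighbor rule. It is a generic people-tracking building block, not the framework developed in the thesis; the state model, noise levels, time step and gating threshold are assumptions.

```python
# Generic constant-velocity Kalman track for a head position.
# Illustrative only; not the tracking framework proposed in the thesis.
import numpy as np

DT = 0.1  # assumed time step between sensor updates (seconds)

# State x = [px, py, vx, vy]; measurement z = [px, py].
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01   # process noise (assumption)
R = np.eye(2) * 0.05   # measurement noise, could differ per sensor (assumption)

class HeadTrack:
    def __init__(self, z0):
        self.x = np.array([z0[0], z0[1], 0.0, 0.0])
        self.P = np.eye(4)

    def predict(self):
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def correct(self, z):
        """Standard Kalman update with a position measurement z = [px, py]."""
        y = z - H @ self.x                       # innovation
        S = H @ self.P @ H.T + R                 # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

def associate(track, detections, gate=1.0):
    """Pick the detection closest to the predicted position, within a gate."""
    pred = H @ track.x
    dists = [np.linalg.norm(np.asarray(d) - pred) for d in detections]
    if not dists or min(dists) > gate:
        return None
    return detections[int(np.argmin(dists))]

# Example: one track updated with a short sequence of (simulated) detections.
track = HeadTrack([0.0, 0.0])
for z in ([0.11, 0.02], [0.19, 0.01], [0.32, 0.05]):
    track.predict()
    det = associate(track, [np.array(z)])
    if det is not None:
        track.correct(det)
print(track.x.round(3))
```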
|