About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Real-Time Spatial Object Tracking on iPhone

Heidari, Amin 08 December 2011 (has links)
In this thesis, a novel object tracking algorithm is proposed that tracks objects on the Apple iPhone 4 platform in real time. The system uses the color information in the frames from the iPhone camera, in parallel with data from the iPhone's motion sensors, to cancel the effect of device rotation while tracking and while matching candidate tracks. The proposed system also adapts to changes in target appearance and size, making the tracking robust to such changes. Several experiments on real video sequences illustrate the functionality of the proposed approach.
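The thesis itself is not reproduced in this listing, so the sketch below only illustrates the general idea stated in the abstract: use a gyroscope-measured roll change to undo in-plane camera rotation before comparing a candidate patch against the stored template. The SciPy-based rotation, the normalized cross-correlation score, and all names are assumptions for illustration, not the author's implementation.

```python
import numpy as np
from scipy import ndimage

def compensate_rotation(patch, roll_delta_deg):
    """Rotate a candidate patch by the negative of the measured device roll
    change so that in-plane camera rotation does not degrade matching."""
    return ndimage.rotate(patch, -roll_delta_deg, reshape=False, mode="nearest")

def ncc_score(template, candidate):
    """Normalized cross-correlation between two equally sized gray patches."""
    t = (template - template.mean()) / (template.std() + 1e-8)
    c = (candidate - candidate.mean()) / (candidate.std() + 1e-8)
    return float((t * c).mean())

# Usage: simulate a 15-degree camera roll, then undo it before scoring.
template = np.random.rand(32, 32)
candidate = ndimage.rotate(template, 15.0, reshape=False, mode="nearest")
print(ncc_score(template, compensate_rotation(candidate, 15.0)))
```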
2

Multi-Template Temporal Siamese Network for Visual Object Tracking

Sekhavati, Ali 04 January 2023 (has links)
Visual object tracking is the task of assigning a unique ID to an object in a video, determining whether it is present in the current frame and, if so, precisely localizing its position. Object tracking faces numerous challenges, such as illumination change, partial or full occlusion, change of target appearance, blur caused by camera motion, the presence of objects similar to the target, and changes in video quality over time. Because of these challenges, traditional computer vision techniques cannot deliver high-quality tracking, especially over long sequences. Almost all state-of-the-art object trackers today use artificial intelligence, more specifically convolutional neural networks. In this work, we present a Siamese-based tracker that differs from previous work in two ways. First, most Siamese-based trackers take the target in the first frame as the ground truth. Despite the success of such methods, this does not guarantee robust tracking, as it cannot handle many of the challenges that change the target's appearance, such as motion blur, occlusion, and pose variation. In this work, while keeping the first frame as a template, we add five additional templates that are dynamically updated and replaced according to the target classification score in different frames; diversity, similarity, and recency are the criteria for choosing its members. We call this the bag of dynamic templates. Second, many Siamese-based trackers are prone to mistakenly tracking a similar-looking object instead of the intended target. Previous work has proposed computationally expensive remedies, such as tracking all distractors along with the target and discriminating among them in every frame. We instead handle this issue by estimating the target's position in the next frame from its bounding-box coordinates in previous frames. We use a temporal network over the past several frames, measure the classification scores of candidates against the templates in the bag of dynamic templates, and maintain a sequential confidence value that reflects how confident the tracker has been in previous frames. We call this module the robustifier; it prevents the tracker from continuously switching between the target and possible distractors. Extensive experiments on the OTB 50, OTB 100, and UAV20L datasets demonstrate the superiority of our work over state-of-the-art methods.
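As a rough illustration only, below is a minimal sketch of a bag of dynamic templates: the first-frame template is kept fixed, up to five dynamic slots are filled with confident crops, redundant crops are rejected to preserve diversity, and the least recently added slot is evicted when the bag is full. The thresholds, the cosine-similarity appearance measure, and all names are assumptions, not the thesis implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flattened appearance vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

class TemplateBag:
    """First-frame template plus up to `capacity` dynamically replaced templates."""

    def __init__(self, first_template, capacity=5):
        self.anchor = first_template          # never replaced
        self.dynamic = []                     # list of (template, frame_added)
        self.capacity = capacity

    def score(self, candidate):
        """Best similarity of a candidate crop against all stored templates."""
        sims = [cosine_sim(self.anchor, candidate)]
        sims += [cosine_sim(t, candidate) for t, _ in self.dynamic]
        return max(sims)

    def maybe_add(self, crop, cls_score, frame_idx,
                  min_score=0.8, max_redundancy=0.95):
        """Insert a confident crop if it is not redundant with stored templates;
        evict the least recently added template when the bag is full."""
        if cls_score < min_score:
            return
        if self.dynamic and max(cosine_sim(t, crop) for t, _ in self.dynamic) > max_redundancy:
            return                            # too similar: keeps the bag diverse
        if len(self.dynamic) >= self.capacity:
            self.dynamic.sort(key=lambda e: e[1])
            self.dynamic.pop(0)               # recency criterion
        self.dynamic.append((crop, frame_idx))
```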
3

Multi-Modal Visual Tracking Using Infrared Imagery

Wettermark, Emma, Berglund, Linda January 2021 (has links)
Generic visual object tracking is the task of tracking one or several objects through all frames of a video, knowing only the location and size of the target in the initial frame. Visual tracking can be carried out in the infrared and visual spectra simultaneously; this is known as multi-modal tracking. Utilizing both spectra can result in a more versatile tracker, since tracking in infrared imagery makes it possible to detect objects even in poor visibility or complete darkness. However, infrared imagery lacks much of the detail present in visual images. A common method for visual tracking is to use discriminative correlation filters (DCF), which are applied to detect the object in every frame of an image sequence. This thesis investigates aspects of a DCF-based tracker operating in the two modalities, infrared and visual imagery. First, it was investigated whether tracking benefits from using two channels instead of one, and what happens to the tracking result if one of the channels is degraded by an external cause. It was also investigated whether adding image features can further improve tracking. The results show that tracking improves when using two channels instead of a single one, and that utilizing two channels is a good way to build a robust tracker that still performs even when one channel is degraded. Deep features extracted from a pre-trained convolutional neural network improved tracking the most, although their use made the tracker significantly slower.
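To make the DCF idea concrete, here is a minimal MOSSE-style sketch (an assumption for illustration; the thesis does not necessarily use this exact formulation): one filter per channel is fitted in the Fourier domain against a desired Gaussian response with a shared denominator over channels, and detection sums the per-channel correlation responses, so an infrared and a visual channel can be fused by adding their responses. The Gaussian label, the regularization term, and all names are assumptions.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak at the patch center."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-((y - h // 2) ** 2 + (x - w // 2) ** 2) / (2 * sigma ** 2))

def train_filters(channels, lam=1e-2):
    """Closed-form multi-channel correlation filters (MOSSE-like)."""
    G = np.fft.fft2(gaussian_response(channels[0].shape))
    Fs = [np.fft.fft2(c) for c in channels]
    denom = sum(F * np.conj(F) for F in Fs) + lam
    return [G * np.conj(F) / denom for F in Fs]

def detect(filters, channels):
    """Sum per-channel responses; the peak location gives the target shift."""
    response = sum(np.real(np.fft.ifft2(H * np.fft.fft2(c)))
                   for H, c in zip(filters, channels))
    return np.unravel_index(np.argmax(response), response.shape), response

# Usage with one visual and one infrared patch of the same size:
visual = np.random.rand(64, 64)
infrared = np.random.rand(64, 64)
filters = train_filters([visual, infrared])
(peak_row, peak_col), _ = detect(filters, [visual, infrared])
```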
4

Visual tracking of articulated and flexible objects

Wesierski, Daniel 25 March 2013 (has links) (PDF)
Humans can track objects visually with little effort. For a computer, however, it is hard to track a fast-moving object under varying illumination and occlusion, in clutter, and with an appearance that varies in the camera's projective space due to non-rigidity or changes in viewpoint. Since a generic, precise, robust, and fast tracker would enable many applications, object tracking has been a fundamental problem of practical importance since the beginnings of computer vision. The first contribution of the thesis is a computationally efficient approach to tracking objects of various shapes and motions. It describes a unifying tracking system that can be configured to track the pose of a deformable object in a low- or high-dimensional state space. The object is decomposed into a chained assembly of segments of multiple parts, arranged under a hierarchy of tailored spatio-temporal constraints. The robustness and generality of the approach are demonstrated on tracking various flexible and articulated objects. Haar-like features are widely used in tracking. The second contribution of the thesis is a parser of ensembles of Haar-like features that computes them efficiently. The features are decomposed into simpler kernels, possibly shared by subsets of features, thus forming multi-pass convolutions. Discovering and aligning these kernels within and between passes yields recursive trees of kernels that require fewer memory operations than the classic computation, producing the same result more efficiently. The approach is validated experimentally on popular examples of Haar-like features.
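For context on the second contribution, the sketch below shows the classic way a Haar-like feature is evaluated from an integral image in constant time per rectangle; the thesis's kernel-decomposition parser is an optimization on top of this kind of computation. The code is a generic illustration with assumed names, not the author's parser.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c] (zero-padded on top/left)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four integral-image lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect_horizontal(ii, r, c, h, w):
    """Two-rectangle Haar-like response: left half minus right half of an h x w window."""
    half = w // 2
    return box_sum(ii, r, c, r + h, c + half) - box_sum(ii, r, c + half, r + h, c + w)

# Usage: evaluate one feature on a random gray patch.
patch = np.random.rand(24, 24)
ii = integral_image(patch)
print(haar_two_rect_horizontal(ii, 4, 4, 12, 16))
```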
5

A New Evolutionary Metaheuristic for the Formation of Topologically Ordered Maps, and Extensions

José Everardo Bessa Maia 03 November 2011 (has links)
Topologically ordered maps are data representation techniques based on dimensionality reduction, with the special property of preserving the neighborhood relations between the prototypes in the data space and their respective positions in the output space. Based on this property, topologically ordered maps are applied mainly in clustering, vector quantization, dimensionality reduction, and data visualization. This thesis proposes a new classification of the existing algorithms for forming topologically ordered maps, based on the mechanism of correlation between the input and output spaces, and describes a new algorithm based on evolutionary computation, called EvSOM, for forming topologically ordered maps. The main properties of the new algorithm are its flexibility, letting the user weight the relative importance of vector quantization and topology preservation in the final map, and its good rejection of outliers compared with the Kohonen SOM algorithm. The work develops an empirical evaluation of these properties. EvSOM is a hybrid, neural-evolutionary, biologically inspired algorithm that uses concepts from competitive neural networks, evolutionary computation, optimization, and iterative approximation. To validate its applicability, EvSOM is extended and specialized to solve two relevant basic problems in image processing and computer vision, namely medical image registration and visual tracking of objects in video. The algorithm exhibits satisfactory performance in both applications.
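As a rough, hedged sketch of how a user-weighted trade-off between vector quantization and topology preservation might be scored in an evolutionary search (these formulas and names are generic illustrations, not the EvSOM definition from the thesis): quantization error is the mean distance to the best-matching prototype, topographic error is the fraction of samples whose two best prototypes are not neighbors on the output grid, and a single weight alpha blends them into a fitness value to maximize.

```python
import numpy as np

def best_two_matches(x, prototypes):
    """Indices of the closest and second-closest prototypes to sample x."""
    d = np.linalg.norm(prototypes - x, axis=1)
    order = np.argsort(d)
    return order[0], order[1], d[order[0]]

def fitness(X, prototypes, grid_pos, alpha=0.5):
    """Blend of (negative) quantization error and topographic error.

    X         : (n, d) data samples
    prototypes: (m, d) prototype vectors
    grid_pos  : (m, 2) integer positions of each prototype on the output grid
    alpha     : user weight between quantization (1.0) and topology (0.0)
    """
    qe, te = 0.0, 0.0
    for x in X:
        b1, b2, dist = best_two_matches(x, prototypes)
        qe += dist
        # Topographic error: best and second-best units should be grid neighbors.
        if np.abs(grid_pos[b1] - grid_pos[b2]).sum() > 1:
            te += 1.0
    qe /= len(X)
    te /= len(X)
    return -(alpha * qe + (1.0 - alpha) * te)   # higher fitness is better

# Usage: score a random 3x3 map on toy 2-D data.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
protos = rng.random((9, 2))
grid = np.array([(i, j) for i in range(3) for j in range(3)])
print(fitness(X, protos, grid, alpha=0.7))
```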
6

Implementation and evaluation of a 3D tracker

Robinson, Andreas January 2014 (has links)
Many methods have been developed for visual tracking of generic objects. The vast majority of these assume the world is two-dimensional, either ignoring the third dimension or only dealing with it indirectly. This causes difficulties for the tracker when the target approaches or moves away from the camera, is occluded or moves out of the camera frame. Unmanned aerial vehicles (UAVs) are increasingly used in civilian applications and some of these will undoubtedly carry tracking systems in the future. As they move around, these trackers will encounter both scale changes and occlusions. To improve the tracking performance in these cases, the third dimension should be taken into account. This thesis extends the capabilities of a 2D tracker to three dimensions, with the assumption that the target moves on a ground plane. The position of the tracker camera is established by matching the video it produces to a sparse point-cloud map built with off-the-shelf structure-from-motion software. A target is tracked with a generic 2D tracker and subsequently positioned on the ground. Should the target disappear from view, its motion on the ground is predicted. In combination, these simple techniques are shown to improve the robustness of a tracking system on a moving platform under target scale changes and occlusions.
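As a small illustrative sketch of the ground-plane assumption (generic geometry, not code from the thesis): given camera intrinsics K and a camera pose (R, t) recovered against a structure-from-motion map, a 2D detection can be placed on the ground by intersecting its viewing ray with the world plane z = 0. The conventions below (x_cam = R x_world + t, z-up world) and all names are assumptions.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Intersect the viewing ray of pixel (u, v) with the world plane z = 0.

    Convention: x_cam = R @ x_world + t, so the camera center is C = -R.T @ t.
    Returns the 3-D ground point, or None if the ray is (near) parallel to the plane
    or the intersection lies behind the camera.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-projected direction
    ray_world = R.T @ ray_cam
    C = -R.T @ t                                          # camera center in world frame
    if abs(ray_world[2]) < 1e-9:
        return None
    s = -C[2] / ray_world[2]                              # solve C_z + s * d_z = 0
    if s <= 0:
        return None
    return C + s * ray_world

# Usage: a camera 5 m above the origin, looking straight down at the ground.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.array([[1.0, 0, 0], [0, -1.0, 0], [0, 0, -1.0]])
t = -R @ np.array([0.0, 0.0, 5.0])
print(pixel_to_ground(320, 240, K, R, t))                 # approximately [0, 0, 0]
```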
7

On-line fusion of visual object tracking algorithms

Leang, Isabelle 15 December 2016 (has links)
Visual object tracking is an elementary function of computer vision that has been the subject of numerous studies. Drift over time is one of the most critical phenomena to control, because it leads to the permanent loss of the tracked target. Despite the many approaches proposed in the literature to counter this phenomenon, none outperforms the others in terms of robustness to the various sources of visual perturbation: illumination changes, occlusion, sudden camera motion, and changes of appearance. The objective of this thesis is to exploit the complementarity of a set of tracking algorithms by developing on-line fusion strategies capable of combining them generically. The proposed fusion chain consists of selecting trackers based on indicators of their health, combining their outputs, and correcting them. On-line drift prediction was studied as a key element of the selection mechanism. Several methods are proposed for each step of the chain, giving rise to 46 possible fusion configurations. Evaluated on three databases, the study highlighted several key findings: effective selection greatly improves robustness; correction improves robustness but is sensitive to bad selection, which makes updating preferable to reinitialization; it is more advantageous to combine a small number of complementary trackers with homogeneous performance than a large number; and the robustness of a fusion of a few trackers is correlated with the incompleteness measure, which makes it possible to select the combination of trackers suited to a given application context.
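A minimal, purely illustrative sketch of one select, combine, and correct fusion step (the indicator, weighting, and correction below are assumptions; the thesis evaluates many concrete variants): trackers whose confidence indicator falls below a threshold are excluded, the remaining boxes are fused by a confidence-weighted average, and the excluded trackers can then be corrected by re-seeding them with the fused box.

```python
import numpy as np

def fuse_step(boxes, confidences, min_conf=0.3):
    """One fusion step over the per-tracker outputs of the current frame.

    boxes      : (n, 4) array of [x, y, w, h] from n trackers
    confidences: (n,) per-tracker confidence indicators in [0, 1]
    Returns (fused_box, selected_mask); trackers outside the mask should be
    corrected, e.g. re-initialized on the fused box.
    """
    boxes = np.asarray(boxes, dtype=float)
    conf = np.asarray(confidences, dtype=float)
    selected = conf >= min_conf
    if not selected.any():                       # no reliable tracker this frame
        return None, selected
    w = conf[selected] / conf[selected].sum()    # confidence-weighted combination
    fused = (boxes[selected] * w[:, None]).sum(axis=0)
    return fused, selected

# Usage: three trackers, one of which has drifted and reports low confidence.
boxes = [[100, 80, 40, 60], [102, 82, 42, 58], [300, 10, 40, 60]]
conf = [0.9, 0.8, 0.1]
fused, selected = fuse_step(boxes, conf)
print(fused, selected)   # the drifted tracker is deselected and can be re-seeded
```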
8

Towards robust visual object tracking: proposal selection and occlusion reasoning

Hua, Yang 10 June 2016 (has links)
In this dissertation we address the problem of visual object tracking, wherein the goal is to localize an object and determine its trajectory over time. In particular, we focus on challenging scenarios where the object undergoes significant transformations, becomes occluded, or leaves the field of view. To this end, we propose two robust methods which learn a model for the object of interest and update it to reflect its changes over time. Our first method addresses the tracking problem in the context of objects undergoing severe geometric transformations, such as rotation or change in scale. We present a novel proposal-selection algorithm, which extends the traditional discriminative tracking-by-detection approach. This method proceeds in two stages: proposal followed by selection. In the proposal stage, we compute a candidate pool that represents the potential locations of the object by robustly estimating the geometric transformations. The best proposal is then selected from this candidate set to localize the object precisely, using multiple appearance and motion cues. Second, we consider the problem of model update in visual tracking, i.e., determining when to update the model of the target, which may become occluded or leave the field of view. To address this, we use motion cues to identify the state of the object in a principled way, and update the model only when the object is fully visible. In particular, we utilize long-term trajectories in combination with a graph-cut based technique to estimate the parts of the object that are visible. We have evaluated both our approaches extensively on several tracking benchmarks, notably the recent online tracking benchmark and the visual object tracking challenge datasets. Both our approaches compare favorably to the state of the art and show significant improvement over several other recent trackers. Specifically, our submission to the visual object tracking challenge organized in 2015 was the winner in one of the competitions.
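A hedged sketch of the proposal-then-selection idea (a generic illustration; the actual method's transformation estimation, features, and cues are more sophisticated): candidate boxes are generated by applying a set of shift, scale, and rotation hypotheses around the previous location, each candidate is scored with an appearance term (normalized cross-correlation against the template) and a motion term penalizing large jumps, and the best-scoring candidate is selected. All names, thresholds, and weights are assumptions.

```python
import numpy as np
from scipy import ndimage

def ncc(a, b):
    """Normalized cross-correlation between two equally sized gray patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def crop(frame, cx, cy, size):
    """Square crop centered at (cx, cy); assumes it stays inside the frame."""
    h = size // 2
    return frame[int(cy) - h:int(cy) + h, int(cx) - h:int(cx) + h]

def resize_nn(img, out_size):
    """Nearest-neighbor resize to (out_size, out_size)."""
    rows = (np.arange(out_size) * img.shape[0] / out_size).astype(int)
    cols = (np.arange(out_size) * img.shape[1] / out_size).astype(int)
    return img[np.ix_(rows, cols)]

def select_proposal(frame, template, prev_center,
                    scales=(0.9, 1.0, 1.1), angles=(-10, 0, 10),
                    shifts=(-8, 0, 8), motion_weight=0.002):
    """Score geometric proposals with appearance (NCC) and a jump penalty."""
    tsize = template.shape[0]
    best, best_score = None, -np.inf
    for dx in shifts:
        for dy in shifts:
            cx, cy = prev_center[0] + dx, prev_center[1] + dy
            for s in scales:
                psize = max(4, int(round(tsize * s)))       # scale hypothesis
                patch = resize_nn(crop(frame, cx, cy, psize), tsize)
                for a in angles:                            # rotation hypothesis
                    warped = ndimage.rotate(patch, a, reshape=False)
                    score = ncc(template, warped) - motion_weight * (dx * dx + dy * dy)
                    if score > best_score:
                        best_score, best = score, (cx, cy, s, a)
    return best, best_score

# Usage: the target appearance is a 32 x 32 crop centered at (60, 60).
frame = np.random.rand(120, 120)
template = frame[44:76, 44:76].copy()
print(select_proposal(frame, template, prev_center=(60, 60)))
```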
