11

Self-correcting Bayesian target tracking

Biresaw, Tewodros Atanaw January 2015 (has links)
Visual tracking, a building block for many applications, faces challenges such as occlusions, illumination changes, background clutter and variable motion dynamics that may degrade tracking performance and are likely to cause failures. In this thesis, we propose a Track-Evaluate-Correct framework (self-correction) for existing trackers in order to achieve robust tracking. For a tracker in the framework, we embed an evaluation block to check the tracking quality and a correction block to avoid upcoming failures or to recover from them. We present a generic representation and formulation of self-correcting tracking for Bayesian trackers using a Dynamic Bayesian Network (DBN). Self-correcting tracking operates like a self-aware system: parameters are tuned in the model, or different models are fused or selected piece-wise, in order to deal with tracking challenges and failures. In the DBN representation, parameter tuning, fusion and model selection are driven by evaluation and correction variables, and inference over these variables explains the operation of self-correcting tracking. The specific contributions under this generic framework are correlation-based self-correcting tracking for an extended object with model points, and tracker-level fusion, as described below. To improve the probabilistic tracking of an extended object represented by a set of model points, we apply the Track-Evaluate-Correct framework to achieve self-correcting tracking: the tracker is combined with an on-line performance measure and a correction technique. We correlate model point trajectories to improve, on-line, the accuracy of a failed or uncertain tracker; a model point tracker gets assistance from neighbouring trackers whenever degradation in its performance is detected by the on-line performance measure.
The correction of the model point state is based on correlation information from the states of the other trackers. Partial Least Squares regression is used to adaptively model the correlation of point tracker states from short windowed trajectories. Experimental results on data obtained from optical motion capture systems show improved tracking performance compared to the baseline tracker and other state-of-the-art trackers. The framework also allows appropriate re-initialisation of local trackers to recover from failures caused by clutter and missed detections in the motion capture data. Finally, we propose a tracker-level fusion framework that achieves self-correcting tracking by combining trackers addressing different tracking challenges to improve overall performance. As a novelty, the framework includes an online performance measure that identifies the track quality level of each tracker and guides the fusion: the trackers assist each other through appropriate mixing of their prior states, and the track quality level is also used to update the target appearance model. We demonstrate the framework with two Bayesian trackers on video sequences with various challenges and show its robustness compared to the independent use of the constituent trackers, and also compared to other state-of-the-art trackers. The online-performance-measure-based appearance model update and prior mixing allow the proposed framework to deal with tracking challenges.
12

Sistema de visão omnidirecional aplicado no controle de robôs móveis. / Omnidirectional vision system applied to mobile robots control.

Grassi Júnior, Valdir 07 May 2002 (has links)
Omnidirectional vision systems produce images with a 360-degree field of view and are well suited to tasks such as robot navigation, tele-operation and visual servoing. Such systems do not require moving the camera towards the direction of attention, but they do require non-conventional image processing, since the acquired image is mapped onto a non-linear polar coordinate system. An effective way to obtain an omnidirectional image is through the combined use of lenses and mirrors: several convex mirror shapes can be used, mounting the camera with its optical axis aligned with the centre of the mirror.
The most commonly used mirror shapes are conic, parabolic, hyperbolic and spherical. In this work, an omnidirectional vision system was built using a hyperbolic mirror, mounted on a mobile robot and applied to a control task. The task of interest is tracking a moving target in real time while keeping the distance between the robot and the target constant. This is accomplished by feeding visual information about the target, acquired in real time by the vision system, back to the robot controller using a visual servo control approach.
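The distance-keeping behaviour described above can be sketched as a simple proportional visual-servo loop: the range to the target (as recovered from the omnidirectional image) is compared to a desired range, and the error drives the robot's forward velocity. This is a minimal 1-D sketch under assumed gains and names, not the controller actually used in the thesis.

```python
# Hypothetical proportional servo law on the range error, with actuator
# saturation. All parameter values (gain, limits, speeds) are illustrative.

def servo_velocity(measured_dist, desired_dist=1.0, k_p=0.8, v_max=0.5):
    """Forward velocity command that drives the range error to zero."""
    error = measured_dist - desired_dist
    v = k_p * error
    return max(-v_max, min(v_max, v))  # saturate the actuator command

# Simulate a target receding at a constant 0.1 m/s; with a pure P-law the
# robot settles at the desired distance plus a steady-state offset.
dt, robot, target = 0.05, 0.0, 1.5
for _ in range(400):
    target += 0.1 * dt
    robot += servo_velocity(target - robot) * dt

print(round(target - robot, 3))
```

The steady-state offset (0.1 / k_p above the desired distance) is the classic limitation of a pure proportional law; an integral term would remove it at the cost of slower transients.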
14

Rastreamento de jogadores de futebol em sequências de imagens. / Tracking soccer players in image sequences.

Rodrigo Dias Arnaut 30 November 2009 (has links)
Visual tracking in image sequences has been studied extensively over the last 30 years because of its many applications in real-time computer vision systems; even so, few algorithms are available that perform the task reliably. This dissertation presents an effective and efficient method and system architecture for tracking players in soccer games. The system input consists of videos captured by static cameras installed in soccer stadiums. The output is the trajectory described by each player during a match, given in the image plane. The system comprises two processing stages: initialization and tracking.
Initialization is critical to tracking performance; its goal is to produce a rough estimate of the configuration and characteristics of each target, which the tracker uses as an initial state estimate. The tracking stage uses Kalman filters to model the contour, position and velocity of the players. Results are presented using real data, quantitative assessments are provided, and the proposed system is compared with a related system. The experiments show that the proposed system achieves very promising results.
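The position-and-velocity part of such a tracker can be sketched as a constant-velocity Kalman filter on a player's image coordinates. This is a minimal illustration with assumed matrices and noise levels, not the dissertation's actual model (which also tracks the contour).

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy] in pixels
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])
H = np.array([[1, 0, 0, 0],       # detections observe position only
              [0, 1, 0, 0.0]])
Q = 0.01 * np.eye(4)              # process noise (illustrative)
R = 1.0 * np.eye(2)               # detection noise, ~1 pixel

x = np.zeros(4)                   # initial state from the initialization stage
P = 10.0 * np.eye(4)

def kf_step(x, P, z):
    # Predict with the motion model, then correct with the detection.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Player moving at ~2 px/frame along x, observed through noisy detections.
rng = np.random.default_rng(1)
for k in range(1, 60):
    z = np.array([2.0 * k, 0.0]) + rng.standard_normal(2)
    x, P = kf_step(x, P, z)

print(x[:3])  # estimated x, y and x-velocity
```

After a few dozen frames the velocity estimate converges near the true 2 px/frame, which is what makes the filter useful for bridging short occlusions between detections.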
15

Aportació als mètodes de seguiment tridimensional d'objectes d'alta velocitat d'operació mitjançant l'estereovisió / Contribution to methods for high-speed three-dimensional object tracking using stereovision

Aranda, Joan 16 October 1997 (has links)
No description available.
16

Efficient Calibration Of A Multi-camera Measurement System Using A Target With Known Dynamics

Aykin, Murat Deniz 01 August 2008 (has links) (PDF)
Multi-camera measurement systems are widely used to extract information about the 3D configuration or "state" of one or more real-world objects. Camera calibration is the process of pre-determining all the remaining optical and geometric parameters of the measurement system, which are either static or slowly varying. For a single camera, these consist of the internal parameters of the camera optics and construction; for a multi-camera system, they also include the geometric positioning of the individual cameras, namely the "external" parameters. Calibration is a necessary step before any actual state measurements can be made with the system. This thesis considers such a multi-camera state measurement system and, in particular, the problem of procedurally effective, high-performance calibration. It presents a novel calibration algorithm that uses the known dynamics of a ballistically thrown target object and employs the Extended Kalman Filter (EKF) to calibrate the multi-camera system. The state-space representation of the target state is augmented with the unknown calibration parameters, which are assumed to be static or slowly varying with respect to the state, resulting in a "super-state" vector. The EKF recursively estimates this super-state, thereby producing estimates of the static camera parameters. Both simulation studies and actual experiments demonstrate that when the ballistic path of the target is processed by the improved versions of the EKF algorithm, the camera calibration parameter estimates asymptotically converge to their actual values. Since the image frames of the target trajectory can be acquired first and then processed off-line, subsequent improvements of the EKF algorithm include repeated and bidirectional versions in which the same calibration images are used repeatedly.
The repeated EKF (R-EKF) converges with a limited number of image frames when the initial target state is accurately provided, while its repeated bidirectional version (RB-EKF) improves calibration accuracy by also estimating the initial target state. The primary contribution of the approach is a fast calibration procedure that needs no standard or custom-made calibration target plates covering the majority of the camera field of view. Human assistance is also minimized, since all frame data are processed automatically and assistance is limited to making the target throws. The speed of convergence and the accuracy of the results promise a field-applicable calibration procedure.
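The "super-state" idea can be illustrated in one dimension: the known ballistic dynamics of a target are augmented with a single static calibration parameter (here an unknown measurement scale c), and an EKF estimates both jointly. This is a toy sketch under assumed values, not the thesis's multi-camera model; as in the R-EKF case, the initial target state is assumed to be accurately known.

```python
import numpy as np

dt, g = 0.02, 9.81
rng = np.random.default_rng(2)

c_true = 1.7                                  # unknown "camera" scale factor
x = np.array([0.0, 12.0, 1.0])                # super-state: [height, velocity, c]
P = np.diag([1e-4, 1e-4, 1.0])                # target state known, c uncertain
Q = np.diag([1e-6, 1e-6, 0.0])                # c is static (no process noise)
R = np.array([[1e-2]])

h_true, v_true = 0.0, 12.0
for _ in range(100):
    # Simulate the true ballistic flight and a scaled, noisy measurement.
    h_true += v_true * dt
    v_true -= g * dt
    z = c_true * h_true + 0.1 * rng.standard_normal()

    # EKF predict: known ballistic dynamics for (h, v); c stays constant.
    x = np.array([x[0] + x[1] * dt, x[1] - g * dt, x[2]])
    F = np.array([[1, dt, 0], [0, 1, 0], [0, 0, 1.0]])
    P = F @ P @ F.T + Q

    # EKF update: z = c * h is nonlinear in the state, Jacobian is [c, 0, h].
    H = np.array([[x[2], 0.0, x[0]]])
    S = H @ P @ H.T + R
    K = (P @ H.T) / S[0, 0]
    x = x + (K * (z - x[2] * x[0])).ravel()
    P = (np.eye(3) - K @ H) @ P

print(round(float(x[2]), 2))  # estimate of the calibration parameter c
```

Because the dynamics are known exactly and the initial state is accurate, every measurement constrains c, and the estimate converges toward its true value without any calibration plate.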
17

Localized statistical models in computer vision

Lankton, Shawn M. 14 September 2009 (has links)
Computer vision approximates human vision using computers. Two subsets are explored in this work: image segmentation and visual tracking. Segmentation involves partitioning an image into logical parts, and tracking analyzes objects as they change over time. The presented research explores a key hypothesis: localizing analysis of visual information can improve the accuracy of segmentation and tracking results. Accordingly, a new class of segmentation techniques based on localized analysis is developed and explored. Next, these techniques are applied to two challenging problems: neuron bundle segmentation in diffusion tensor imagery (DTI) and plaque detection in computed tomography angiography (CTA) imagery. Experiments demonstrate that local analysis is well suited for these medical imaging tasks. Finally, a visual tracking algorithm is shown that uses temporal localization to track objects that change drastically over time.
18

Robust target localization and segmentation using statistical methods

Arif, Omar 05 April 2010 (has links)
This thesis aims to contribute to the area of visual tracking, the process of following an object of interest through a sequence of successive images. It explores kernel-based statistical methods, which map the data to a higher-dimensional space. A pre-image framework is provided to find the mapping from the embedding space back to the input space for several manifold learning and dimensionality reduction algorithms. Two visual tracking algorithms are developed that are robust to noise and occlusions. The first uses a kernel PCA-based eigenspace representation; the de-noising and clustering capabilities of kernel PCA lead to a robust algorithm. This framework is extended to incorporate background information in an energy-based formulation, minimized using graph cuts, and to track multiple objects using a single learned model. In the second method, a robust density-comparison framework is developed and applied to visual tracking: an object is tracked by minimizing the distance between a model distribution and given candidate distributions. The superior performance of kernel-based algorithms comes at the price of increased storage and computational requirements, so a novel method is developed that exploits the universal approximation capabilities of generalized radial basis function neural networks to reduce those requirements.
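The density-comparison idea behind the second method can be sketched as follows: the target is located by minimizing a distance between a model intensity distribution and the distribution of each candidate window. This minimal sketch uses a Bhattacharyya-based distance on gray-level histograms over a synthetic frame; the distance choice, data and window sizes are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def hist(patch, bins=16):
    """Normalized gray-level histogram of an image patch (values in [0, 1])."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 1))
    return h / h.sum()

def bhattacharyya_dist(p, q):
    """Distance derived from the Bhattacharyya coefficient of two histograms."""
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

rng = np.random.default_rng(4)
frame = rng.uniform(0.0, 0.3, (60, 60))                # dark background
frame[20:30, 35:45] = rng.uniform(0.7, 1.0, (10, 10))  # bright 10x10 target

model = hist(frame[20:30, 35:45])   # model distribution of the target

# Scan candidate windows; keep the one whose distribution matches best.
best, best_d = None, np.inf
for r in range(0, 50, 5):
    for c in range(0, 50, 5):
        d = bhattacharyya_dist(model, hist(frame[r:r + 10, c:c + 10]))
        if d < best_d:
            best, best_d = (r, c), d

print(best)  # → (20, 35)
```

In a real tracker the scan would be replaced by a local search (e.g. mean-shift-style iterations) around the previous position, but the objective, a distribution distance, is the same.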
19

Perceptual Segmentation of Visual Streams by Tracking of Objects and Parts

Papon, Jeremie 17 October 2014 (has links)
No description available.
20

Visual Tracking With Group Motion Approach

Arslan, Ali Erkin 01 January 2003 (has links) (PDF)
This study develops an algorithm for tracking single visual targets, with feature detection as the underlying image processing technique. The key idea is to treat the data supplied by feature detection as observations from a group of targets having similar motion dynamics, so that a single visual target is regarded as a group of multiple targets. As in other multi-target tracking applications, accurate data association and state estimation under clutter are required; the group tracking approach is therefore combined with the well-known probabilistic data association technique to cope with both problems. The applicability of the method, to visual tracking in particular and to other cases, is also discussed.
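The probabilistic data association (PDA) step mentioned above can be sketched as a state update in which every validated measurement contributes, weighted by its likelihood under the predicted distribution. This is a minimal single-step illustration with an identity measurement model; the clutter density, detection probability and all numbers are illustrative assumptions, and the full PDA gate-probability terms are omitted.

```python
import numpy as np

def pda_update(x_pred, P_pred, measurements, R, clutter_density=0.1,
               p_detect=0.9):
    """One PDA update: mix innovations over association hypotheses."""
    S = P_pred + R                      # innovation covariance (H = I here)
    innovations = [z - x_pred for z in measurements]
    # Gaussian likelihood of each validated measurement under the prediction.
    likelihoods = [p_detect * np.exp(-0.5 * v @ np.linalg.solve(S, v))
                   / np.sqrt((2 * np.pi) ** len(v) * np.linalg.det(S))
                   for v in innovations]
    miss = clutter_density * (1.0 - p_detect)   # "all clutter" hypothesis
    weights = np.array(likelihoods + [miss])
    weights /= weights.sum()
    # Combined innovation: the miss hypothesis contributes a zero innovation.
    combined = sum(w * v for w, v in zip(weights, innovations))
    K = P_pred @ np.linalg.inv(S)
    return x_pred + K @ combined

x_pred = np.array([10.0, 10.0])
P_pred, R = 1.0 * np.eye(2), 0.5 * np.eye(2)
# One measurement near the prediction, one clutter point far away.
zs = [np.array([10.4, 9.8]), np.array([14.0, 6.0])]
x_new = pda_update(x_pred, P_pred, zs, R)
print(np.round(x_new, 2))
```

The far measurement receives a negligible weight, so the update is dominated by the plausible one; with a group of feature points sharing motion dynamics, the same weighting is applied per point against the group's predicted state.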
