1 |
Monocular and Binocular Visual Tracking. Salama, Gouda Ismail Mohamed, 06 January 2000
Visual tracking is one of the most important applications of computer vision. Several tracking systems have been developed that either focus mainly on targets moving on a plane or reduce the 3-dimensional tracking problem to tracking a set of characteristic points on the target. These approaches are seriously handicapped in complex visual situations, particularly those involving significant perspective, texture, repeating patterns, or occlusion.
This dissertation describes a new approach to visual tracking for monocular and binocular image sequences, and for both passive and active cameras. The method combines Kalman-type prediction with steepest-descent search for correspondences, using 2-dimensional affine mappings between images. This approach differs significantly from many recent tracking systems, which emphasize the recovery of 3-dimensional motion and/or structure of objects in the scene. We argue that 2-dimensional area-based matching is sufficient in many situations of interest, and we present experimental results with real image sequences to illustrate the efficacy of this approach.
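The core loop described above can be summarized in a short sketch. The following Python fragment is illustrative only and is not the dissertation's implementation: it pairs a constant-velocity Kalman predictor on the target's image position with an SSD cost over a 2-dimensional affine warp of a template, and it substitutes a general-purpose optimizer for the dissertation's hand-derived steepest-descent search; all parameter values are assumptions.

```python
# Minimal sketch (not the author's code): constant-velocity Kalman prediction
# of the target's image position, plus an SSD cost over a 2-D affine warp of
# the template. Noise covariances and the optimizer choice are assumptions.
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

# --- constant-velocity Kalman filter on image coordinates (x, y, vx, vy) ---
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.eye(2, 4)                            # only the position is observed
Q = 1e-2 * np.eye(4)                        # process noise (assumed)
R = 1.0 * np.eye(2)                         # measurement noise (assumed)

def kalman_predict(x, P):
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# --- area-based matching: SSD between template and affinely warped patch ---
def ssd_cost(params, template, image):
    a, b, c, d, tx, ty = params
    warped = affine_transform(image, np.array([[a, b], [c, d]]),
                              offset=[tx, ty], output_shape=template.shape)
    return float(np.sum((warped - template) ** 2))

def match_affine(template, image, init=(1, 0, 0, 1, 0, 0)):
    # The dissertation uses a steepest-descent search; a general-purpose
    # optimizer stands in for it here.
    res = minimize(ssd_cost, np.array(init, float), args=(template, image),
                   method="Nelder-Mead")
    return res.x, res.fun   # affine parameters and residual matching error
```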
In the absence of occlusion, matching between two images is a simple one-to-one mapping; when occlusion occurs, some incorrect matches are inevitable, and few approaches have been developed to address this issue. This dissertation considers the effect of occlusion on tracking a moving object in both monocular and binocular image sequences. The visual tracking system described here attempts to detect occlusion from the residual error computed by the matching method: if the residual matching error exceeds a user-defined threshold, the tracked object may be occluded by another object. When occlusion is detected, tracking continues with locations predicted by Kalman filtering, which serves as a predictor of the target position until the target reemerges from the occlusion. Although the method uses a constant-image-velocity Kalman filter, it has been shown to function reasonably well in non-constant-velocity situations. Experimental results show that tracking can be maintained during periods of substantial occlusion.
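Reusing the helpers from the previous sketch, the occlusion-handling logic described above might look as follows; the threshold value and the way the measurement is derived from the affine offset are illustrative assumptions, not the dissertation's actual parameters.

```python
# Occlusion-aware tracking step (sketch). Depends on kalman_predict,
# kalman_update, and match_affine from the previous fragment.
OCCLUSION_THRESHOLD = 500.0   # stand-in for the user-defined threshold

def track_step(x, P, template, frame):
    x, P = kalman_predict(x, P)                    # predicted target position
    params, residual = match_affine(template, frame)
    if residual > OCCLUSION_THRESHOLD:
        # Likely occluded: coast on the constant-velocity prediction and
        # skip the measurement update so a bad match cannot corrupt the filter.
        return x, P, True
    z = x[:2] + params[4:6]       # position measurement from the affine offset
    x, P = kalman_update(x, P, z)
    return x, P, False
```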
The area-based approach to image matching often involves correlation-based comparisons between images, which requires specifying a size for the correlation windows. Accordingly, a new approach based on moment invariants was developed to select the window size adaptively. The approach detects a sudden increase or decrease in the first Maitra moment invariant, and a robust regression model is applied to smooth the invariant, making the method robust against noise.
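A hedged sketch of this window-selection idea is shown below. It assumes the first Maitra invariant can be computed from the Hu moments as sqrt(phi2)/phi1, and it grows the window until that value changes abruptly; the growth step and jump threshold are arbitrary, and the robust-regression smoothing step is omitted.

```python
# Adaptive window-size selection (sketch, assumed definition of the first
# Maitra invariant). Requires the window to stay inside the image.
import numpy as np
import cv2

def first_maitra_invariant(patch):
    hu = cv2.HuMoments(cv2.moments(patch.astype(np.float32))).ravel()
    return np.sqrt(abs(hu[1])) / (abs(hu[0]) + 1e-12)

def adaptive_window_size(image, cx, cy, min_half=3, max_half=25, jump=0.3):
    prev = None
    for h in range(min_half, max_half + 1):
        patch = image[cy - h:cy + h + 1, cx - h:cx + h + 1]
        beta1 = first_maitra_invariant(patch)
        # Stop growing when the invariant jumps by more than a relative factor.
        if prev is not None and abs(beta1 - prev) > jump * (abs(prev) + 1e-12):
            return 2 * (h - 1) + 1      # last window size before the jump
        prev = beta1
    return 2 * max_half + 1
```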
This dissertation also considers the effect of spatial quantization on several moment invariants. Of particular interest are the affine moment invariants, which have emerged in recent years as a useful tool for image reconstruction, image registration, and recognition of deformed objects. Traditional analysis assumes moments and moment invariants for images defined in the continuous domain, yet quantization of the image plane is necessary for digital processing. Image acquisition by a digital system imposes spatial and intensity quantization that, in turn, introduces errors into moment and invariant computations. The dissertation derives expressions for the quantization-induced error in several important cases. Although it considers spatial quantization only, this represents an important extension of work by other researchers.
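The flavor of this quantization effect can be illustrated numerically. The sketch below computes the first affine moment invariant of Flusser and Suk, I1 = (mu20*mu02 - mu11^2) / mu00^4, for a synthetic shape rendered on a fine grid and on a grid eight times coarser; the gap between the two values is the kind of spatial-quantization error the dissertation characterizes analytically. The synthetic ellipse is purely illustrative.

```python
# Empirical illustration of quantization error in an affine moment invariant.
import numpy as np

def central_moment(img, p, q):
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def affine_invariant_I1(img):
    mu00 = central_moment(img, 0, 0)
    mu20 = central_moment(img, 2, 0)
    mu02 = central_moment(img, 0, 2)
    mu11 = central_moment(img, 1, 1)
    return (mu20 * mu02 - mu11 ** 2) / mu00 ** 4

def ellipse(n, a, b):
    # Filled ellipse sampled on an n x n grid.
    y, x = np.mgrid[0:n, 0:n]
    return (((x - n / 2) / a) ** 2 + ((y - n / 2) / b) ** 2 <= 1).astype(float)

fine   = ellipse(512, 180.0, 90.0)      # fine spatial sampling
coarse = ellipse(64, 22.5, 11.25)       # same shape on an 8x coarser grid
print(affine_invariant_I1(fine), affine_invariant_I1(coarse))
```

In the continuous limit both values would be identical; the residual difference printed here comes entirely from sampling the shape on a discrete grid.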
A mathematical theory for this visual tracking approach is presented in the dissertation. The approach can track a moving object in an image sequence both when the camera is passive and when it is actively controlled. The algorithm is computationally cheap and suitable for real-time implementation. We implemented the proposed method on an active vision system and carried out monocular and binocular tracking experiments on various kinds of objects in different environments. These experiments demonstrated very good performance on real images in fairly complicated situations. / Ph. D.
2 |
Comparative Analysis of Window Supports for the Shape From Focus Technique (Análise comparativa entre suportes para janelamento na técnica Shape From Focus). Silva, Marcelo Robson de Azevedo Martins da, 27 September 2017
There are many techniques for reconstructing three-dimensional objects on a computer; some are used in controlled environments and others in settings that do not require great precision. Shape From Focus is a well-known method that uses a stack of photographs taken with different focal settings to reconstruct a fairly accurate depth map. The method is most stable when reconstructing very small or microscopic objects, but it has recently been applied to the reconstruction of larger environments. As a result, the Shape From Focus depth-map reconstruction model must now cope with greater amounts of interference in the photo stack, such as lens distortion, increased depth of field, the zoom effect, and noise introduced by the environment. This work analyzes the effect of an adaptive support for the evaluation window of the focus-quality measure in the Shape From Focus method. Although different works on this topic use several variations of the evaluation window, adaptive support can provide an alternative for achieving stability and confidence in the resulting depth map by limiting the error introduced by global interference.
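For orientation, a minimal Shape From Focus pipeline with a fixed evaluation window might look like the following sketch. It is not the code analyzed in the thesis; the modified-Laplacian focus measure and the window size are illustrative choices standing in for whichever focus-quality measure and adaptive support the thesis compares.

```python
# Baseline Shape From Focus with a fixed evaluation window (sketch).
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def modified_laplacian(img):
    # Sum of absolute second derivatives along x and y (a common focus measure).
    kx = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], float)
    ky = kx.T
    return np.abs(convolve(img, kx)) + np.abs(convolve(img, ky))

def shape_from_focus(stack, window=9):
    # stack: array of shape (num_frames, H, W), one image per focal setting.
    focus = np.stack([uniform_filter(modified_laplacian(frame), size=window)
                      for frame in stack])
    # Depth index per pixel: the frame in which the windowed focus peaks.
    return np.argmax(focus, axis=0)
```

An adaptive support would replace the fixed `window` above with a per-pixel neighborhood chosen from the local image content, which is the variation whose stability the thesis evaluates.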