1. Automatic Classification of Fish in Underwater Video; Pattern Matching - Affine Invariance and Beyond

Gundam, Madhuri 15 May 2015 (has links)
Underwater video is used by marine biologists to observe, identify, and quantify living marine resources. Video sequences are typically analyzed manually, which is a time-consuming and laborious process; automating it saves significant time and cost. This work proposes a technique for automatic fish classification in underwater video. The steps involved are background subtraction, fish region tracking, and feature-based classification. Background processing separates moving objects from their surrounding environment. Tracking associates multiple views of the same fish in consecutive frames; this step is especially important since recognizing and classifying one or a few of the views as a species of interest may allow labeling the entire sequence as that species. Shape features are extracted from each object using Fourier descriptors and presented to a nearest neighbor classifier. Finally, the nearest neighbor results are combined in a probabilistic-like framework to classify an entire sequence.

The majority of existing pattern matching techniques focus on affine invariance, mainly because rotation, scale, translation, and shear are common image transformations. In some situations, however, other transformations may be modeled as a small deformation on top of an affine transformation. The proposed algorithm complements existing Fourier transform-based pattern matching methods in such situations. First, the spatial-domain pattern is decomposed into non-overlapping concentric circular rings centered at the middle of the pattern. The Fourier transforms of the rings are computed and then mapped to the polar domain. The algorithm assumes that the individual rings are rotated with respect to each other; the variable angles of rotation provide information about the directional features of the pattern. This angle of rotation is determined starting from the Fourier transform of the outermost ring and moving inward to the innermost ring. Two different approaches, one using a dynamic programming algorithm and the other a greedy algorithm, are used to determine the directional features of the pattern.
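The Fourier-descriptor-plus-nearest-neighbor step described in the abstract can be sketched as follows. This is a minimal, generic illustration of the technique, not the thesis's implementation; the function names and the choice of 16 coefficients are my own assumptions.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Translation-, scale-, and rotation-invariant shape descriptor
    from a closed boundary given as an (N, 2) array of (x, y) points."""
    z = contour[:, 0] + 1j * contour[:, 1]          # boundary as a complex signal
    F = np.fft.fft(z)
    # Drop F[0] (encodes translation); keep low positive and negative frequencies.
    coeffs = np.concatenate([F[1:n_coeffs + 1], F[-n_coeffs:]])
    # Magnitudes remove rotation/start-point phase; dividing by |F[1]| removes scale.
    return np.abs(coeffs) / np.abs(F[1])

def nearest_neighbor(query_desc, train_descs, train_labels):
    """Classify a descriptor by its nearest training descriptor (Euclidean)."""
    dists = np.linalg.norm(train_descs - query_desc, axis=1)
    return train_labels[int(np.argmin(dists))]
```

A scaled and rotated copy of a training shape maps to (nearly) the same descriptor, so the nearest neighbor recovers its class.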
2. Affine Region Tracking and Augmentation Using MSER and Adaptive SIFT Model Generation

Marano, Matthew James 01 June 2009 (has links) (PDF)
Relatively complex Augmented Reality (AR) algorithms are becoming widely available due to advancements in affordable mobile computer hardware. To take advantage of this, a new method is developed for tracking 2D regions without prior knowledge of an environment and without building a computationally expensive world model. In this method, affinely invariant planar regions in a scene are found using the Maximally Stable Extremal Region (MSER) detector. A region is selected by the user to define a search space, and the Scale Invariant Feature Transform (SIFT) is then used to detect affine invariant keypoints in the region. If three or more keypoint matches are found across frames, the affine transform A of the region is calculated. A 2D image is then transformed by A, causing it to appear stationary on the 2D region being tracked. The search region itself is tracked by transforming the previous search region by A, defining a new location, size, and shape for the search region. Testing reveals that the method robustly tracks planar surfaces despite affine changes in the geometry of a scene, and many real-world surfaces provide adequate texture for successful augmentation. Regions found in multiple frames are consistent with one another, with a mean cross-correlation of 0.608 relating augmented regions. The system can handle up to a 45° out-of-plane viewpoint change with respect to the camera. Although rotational changes appear to skew the affine transform slightly, translational and scale changes introduce little distortion and provide convincing augmentations of graphics onto the real world.
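The step of computing an affine transform A from three or more keypoint matches can be sketched as a least-squares fit of the six affine parameters. This is a generic illustration under my own naming, not the thesis's code (which would obtain the matches from MSER/SIFT rather than taking them as given).

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 3."""
    n = src.shape[0]
    # Each match contributes two rows:
    # [x y 1 0 0 0] p = x'   and   [0 0 0 x y 1] p = y'
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]],
                     [p[3], p[4], p[5]]])

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix M to (N, 2) points."""
    return pts @ M[:, :2].T + M[:, 2]
```

With exactly three non-collinear matches the system is exactly determined; with more matches the least-squares solution averages out keypoint localization noise.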
3. Some problems on temporally consistent video editing and object recognition

Sadek, Rida 07 December 2012 (has links)
Video editing and object recognition are two significant fields in computer vision: the first has remarkably assisted digital production and post-production tasks on digital video footage; the second is fundamental to image classification and image-based search in large databases (e.g. the web). In this thesis, we address two problems: we present a novel formulation that tackles video editing tasks, and we develop a mechanism for generating more robust descriptors for objects in an image.

Concerning the first problem, this thesis proposes two variational models to perform temporally coherent video editing. These models are applied to change an object's (rigid or non-rigid) texture throughout a given video sequence. One model is based on propagating color information from a given frame (or between two given frames) along the motion trajectories of the video; the other is based on propagating gradient-domain information. The models presented in this thesis require minimal user intervention and automatically accommodate illumination changes in the scene.

Concerning the second problem, this thesis addresses affine invariance in object recognition. We introduce a way to generate geometric affine invariant quantities that are used in the construction of feature descriptors, and we show that using these quantities achieves more robust recognition than state-of-the-art descriptors.
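The core idea of propagating an edited frame's colors along motion trajectories can be illustrated with a much simpler mechanism than the thesis's variational models: nearest-neighbor warping of the edited keyframe along per-frame backward optical flow. All names here are my own, and the sketch ignores occlusions and subpixel interpolation that a real system must handle.

```python
import numpy as np

def warp_backward(prev, flow):
    """Pull colors from frame t into frame t+1.
    prev: (H, W, 3) image at time t.
    flow: (H, W, 2) backward flow; pixel (x, y) at t+1 came from
          (x + flow[y, x, 0], y + flow[y, x, 1]) at t."""
    H, W, _ = prev.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return prev[src_y, src_x]          # nearest-neighbor lookup

def propagate_colors(keyframe, backward_flows):
    """Propagate an edited keyframe through the sequence, one flow per step."""
    frames = [keyframe]
    for flow in backward_flows:
        frames.append(warp_backward(frames[-1], flow))
    return frames
```

An edit painted once on the keyframe then "rides" the motion field through the rest of the clip, which is the temporal-consistency property the abstract describes.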
