
Rotation Invariant Object Recognition from One Training Example

Yokono, Jerry Jun; Poggio, Tomaso (27 April 2004)
Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. Such a descriptor, based on a set of oriented Gaussian derivative filters, is used in our recognition system. We report here an evaluation of several techniques for orientation estimation to achieve rotation invariance of the descriptor. We also describe feature selection based on a single training image: virtual images are generated by rotating and rescaling the image, and robust features are selected. The results confirm robust performance in cluttered scenes, in the presence of partial occlusions, and when the object is embedded in different backgrounds.
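The orientation-estimation idea behind rotation invariance can be sketched in a few lines. This is an illustrative assumption, not the authors' exact method: estimate a patch's dominant orientation from its mean gradient, then sample the descriptor in a frame rotated by that angle.

```python
import numpy as np

def dominant_orientation(patch):
    """Dominant orientation of a patch, in degrees, from its mean gradient.

    Resampling the patch (or the filter bank) rotated by this angle is one
    simple way to make a local descriptor rotation invariant.
    """
    gy, gx = np.gradient(patch.astype(float))  # row (y) and column (x) gradients
    return np.degrees(np.arctan2(gy.mean(), gx.mean()))

# a diagonal intensity ramp I(y, x) = y + x has its gradient at 45 degrees
ramp = np.add.outer(np.arange(16.0), np.arange(16.0))
angle = dominant_orientation(ramp)
```

Averaging the gradient is only one of several orientation estimators such a report could compare; peak-of-histogram schemes, for example, behave differently near symmetric patches.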

Evaluation of sets of oriented and non-oriented receptive fields as local descriptors

Yokono, Jerry Jun; Poggio, Tomaso (24 March 2004)
Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. We propose a performance criterion for a local descriptor based on the tradeoff between selectivity and invariance, and evaluate several local descriptors against it: Gaussian derivatives up to the third order, gray image patches, and Laplacian-based descriptors with either three-scale or one-scale filters. We compare selectivity and invariance under several image changes such as rotation, scale, brightness, and viewpoint, keeping the dimensionality of the descriptors roughly constant. The overall results indicate good performance by the descriptor based on a set of oriented Gaussian filters. Interestingly, oriented receptive fields similar to the Gaussian derivatives, as well as receptive fields similar to the Laplacian, are found in primate visual cortex.
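A minimal sketch of such a filter-bank descriptor, under illustrative assumptions (separable filtering, a single scale sigma, responses read at the patch centre only): Gaussian derivatives up to third order in x and y give a 16-dimensional response vector, and Euclidean distances between vectors can probe selectivity versus invariance.

```python
import numpy as np

def gaussian_derivative_kernels(sigma=2.0, radius=6):
    """1-D Gaussian plus its first three derivatives, sampled on a grid."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    kernels = [g]
    for _ in range(3):                       # derivatives up to third order
        kernels.append(np.gradient(kernels[-1], x))
    return kernels

def descriptor(patch, sigma=2.0):
    """16-D vector of separable Gaussian-derivative responses at the patch centre."""
    ks = gaussian_derivative_kernels(sigma)
    c = patch.shape[0] // 2
    feats = []
    for kx in ks:
        for ky in ks:
            # separable filtering: rows with kx, then columns with ky
            rows = np.apply_along_axis(np.convolve, 1, patch, kx, "same")
            resp = np.apply_along_axis(np.convolve, 0, rows, ky, "same")
            feats.append(resp[c, c])
    return np.array(feats)

# selectivity-vs-invariance probe: a blob, its 1-pixel shift, and a checkerboard
yy, xx = np.mgrid[0:17, 0:17]
blob = np.exp(-((yy - 8.0)**2 + (xx - 8.0)**2) / 20.0)
shifted = np.exp(-((yy - 8.0)**2 + (xx - 9.0)**2) / 20.0)
checker = ((yy + xx) % 2).astype(float)

d_shift = np.linalg.norm(descriptor(blob) - descriptor(shifted))
d_other = np.linalg.norm(descriptor(blob) - descriptor(checker))
```

A good descriptor in the paper's sense keeps `d_shift` (small deformation of the same feature) well below `d_other` (a genuinely different pattern).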

Novos descritores de texturas dinâmicas utilizando padrões locais e fusão de dados / New dynamic texture descriptors using local patterns and data fusion

Langoni, Virgílio de Melo (21 September 2017)
In recent decades, dynamic textures (temporal textures, i.e. textures with movement) have attracted intense interest from researchers in digital image processing and computer vision. Several techniques have been developed, or refined, for feature extraction based on dynamic textures. In many cases these techniques combine two or more pre-existing methodologies and aim only at extracting features, not at improving the quality of the extracted features; when the features are "poor" in quality, the final result may suffer. This work therefore proposes descriptors that extract dynamic features from video sequences and fuse the resulting information to increase overall performance in the segmentation and/or recognition of moving textures and scenes. Results on two video databases show that the proposed descriptors, called D-LMP and D-SLMP, outperform the LBP-TOP descriptor from the literature. Besides higher overall accuracy, precision, and sensitivity, the proposed descriptors extract features in less time than LBP-TOP, which makes them more practical for most applications. Fusing data from regions with different dynamic characteristics further improved the descriptors' performance, showing that the technique applies not only to the classification of dynamic textures themselves but also to the classification of general scenes in videos.
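LBP-TOP, the baseline the proposed descriptors are compared against, can be sketched compactly. This is a simplified illustration, not the thesis's D-LMP/D-SLMP: basic 8-neighbour binary codes, only the three middle orthogonal planes of the video volume, and no uniform-pattern mapping.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern codes for the interior pixels of a 2-D array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit   # one bit per neighbour
    return codes

def lbp_top_histogram(volume):
    """Concatenate LBP histograms from the middle XY, XT and YT planes of a (t, h, w) volume."""
    t, h, w = volume.shape
    planes = [volume[t // 2], volume[:, h // 2, :], volume[:, :, w // 2]]
    hists = [np.bincount(lbp_codes(p).ravel(), minlength=256) for p in planes]
    return np.concatenate(hists)

# toy video volume: 6 frames of 10x10 pixels
vol = (np.arange(600).reshape(6, 10, 10) % 7).astype(float)
hist = lbp_top_histogram(vol)   # three 256-bin histograms, concatenated
```

The XT and YT planes are what make the histogram sensitive to motion, not just spatial texture; that temporal component is what descriptors like D-LMP aim to capture better.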

Detekce poznávací značky v obraze / Image-Based Licence Plate Recognition

Vacek, Michal (January 2009)
The first part of the thesis surveys known methods of license plate detection: preprocessing-based methods, AdaBoost-based methods, and extremal-region detection methods. The thesis then describes and implements its own approach, which uses local detectors to build a visual vocabulary for plate recognition. All measurements are summarized at the end.

Detekce pohyblivého objektu ve videu na CUDA / Moving Object Detection in Video Using CUDA

Čermák, Michal (January 2011)
This thesis deals with a model-based approach to 3D tracking from monocular video. The pose of the 3D model is estimated dynamically by minimizing an objective function with a particle filter; the objective function is based on the similarity between the rendered scene and the real video.
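That estimation loop can be illustrated with a toy version, under stated assumptions: a 1-D state in place of a 6-DoF pose, and a synthetic quadratic error in place of the rendered-vs-video similarity.

```python
import numpy as np

def particle_filter_step(particles, objective, noise=0.1, rng=None):
    """One predict/weight/resample step of a particle filter.

    particles : (N,) array of pose hypotheses (1-D toy state)
    objective : callable; lower value = better match (rendered-vs-video error)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # predict: diffuse hypotheses with process noise
    particles = particles + rng.normal(0.0, noise, size=particles.shape)
    # weight: turn the objective into (unnormalised) likelihoods
    w = np.exp(-np.array([objective(p) for p in particles]))
    w /= w.sum()
    estimate = float(np.sum(w * particles))   # weighted-mean state estimate
    # resample: draw particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], estimate

# toy objective: the true "pose" is 2.0; error grows quadratically with distance
objective = lambda p: (p - 2.0) ** 2

rng = np.random.default_rng(42)
particles = rng.uniform(-5, 5, size=500)
for _ in range(20):
    particles, estimate = particle_filter_step(particles, objective, rng=rng)
```

After a few iterations the particle cloud concentrates around the minimizer of the objective, which is how the filter "minimizes" it without gradients; a real tracker would evaluate the objective by rendering the 3D model at each hypothesized pose.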
