
Utilizando contexto na representação de imagens para a classificação de cenas (Using context in image representation for scene classification)

Previous issue date: 2014

Scene classification is a very popular topic in computer vision, with applications such as content-based image organization and retrieval and robot localization and navigation. However, automatic scene classification is a challenging task due to several factors, such as occlusion, shadows, reflections, and variations in illumination and scale.

Among the approaches that address the automatic scene classification problem are those that use non-parametric transforms and those that have improved classification performance by exploiting contextual information. This work therefore proposes two image descriptors that associate contextual information, i.e., information coming from neighboring regions, with a non-parametric transform. The aim is an approach that does not excessively increase the dimension of the feature vector and that does not rely on the bag-of-features intermediate representation, thereby reducing the computational cost and eliminating the need for parameter setting, which makes the descriptors usable by people without expertise in pattern recognition.

The proposed descriptors are CMCT (Contextual Modified Census Transform) and ECMCT (Extended CMCT), and their performance is evaluated on four public datasets. Five variations of these descriptors (GistCMCT, GECMCT, GistCMCT-SM, ECMCT-SM, and GECMCT-SM), obtained by combining each of them with other descriptors, are also proposed. The results on the four datasets show that the proposed representations are competitive and increase classification rates when compared with other descriptors.
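For readers unfamiliar with the underlying transform, the sketch below illustrates a plain Modified Census Transform histogram descriptor in Python with NumPy. It is a generic illustration under stated assumptions (grayscale input, a 4x4 spatial grid, simple per-cell normalization, function names chosen here for clarity), not the thesis's CMCT or ECMCT; the contextual step of aggregating information from neighboring regions is only indicated in a comment.

    # Minimal sketch of a Modified Census Transform (MCT) histogram descriptor.
    # Generic illustration only; not the CMCT/ECMCT definitions from the thesis.
    import numpy as np

    def modified_census_transform(gray):
        """Encode each 3x3 neighborhood by comparing its pixels to the
        neighborhood mean, yielding a 9-bit code (0..511) per pixel."""
        h, w = gray.shape
        codes = np.zeros((h - 2, w - 2), dtype=np.int32)
        # The nine shifted views of the image form the 3x3 neighborhood.
        patches = [gray[i:h - 2 + i, j:w - 2 + j] for i in range(3) for j in range(3)]
        mean = sum(p.astype(np.float64) for p in patches) / 9.0
        for bit, p in enumerate(patches):
            codes |= (p > mean).astype(np.int32) << bit
        return codes

    def mct_histogram(gray, grid=(4, 4)):
        """Build a descriptor by histogramming MCT codes over a spatial grid
        and concatenating the per-cell histograms (no bag-of-features step)."""
        codes = modified_census_transform(gray.astype(np.float64))
        gh, gw = grid
        cells = []
        for ys in np.array_split(np.arange(codes.shape[0]), gh):
            for xs in np.array_split(np.arange(codes.shape[1]), gw):
                cell = codes[np.ix_(ys, xs)]
                hist, _ = np.histogram(cell, bins=512, range=(0, 512))
                cells.append(hist / max(hist.sum(), 1))  # normalize each cell
        # A contextual variant could also append statistics of neighboring cells
        # to each cell's histogram before concatenation (assumption, for illustration).
        return np.concatenate(cells)

    if __name__ == "__main__":
        img = np.random.randint(0, 256, (128, 128)).astype(np.float64)
        print(mct_histogram(img).shape)  # (4*4*512,) = (8192,)

Concatenating per-cell histograms directly, as above, is one way to avoid the bag-of-features quantization step mentioned in the abstract, since no codebook or clustering parameters are required.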

Identifier: oai:union.ndltd.org:IBICT/oai:dspace2.ufes.br:10/1626
Date: 27 June 2014
Creators: Gazolli, Kelly Assis de Souza
Contributors: Conci, Aura, Rauber, Thomas Walter, Vassallo, Raquel Frizera, Cicarelli, Patrick Marques, Salles, Evandro Ottoni Teatini
Source Sets: IBICT Brazilian ETDs
Language: Portuguese
Detected Language: Portuguese
Type: info:eu-repo/semantics/publishedVersion, info:eu-repo/semantics/doctoralThesis
Format: text
Source: reponame:Repositório Institucional da UFES, instname:Universidade Federal do Espírito Santo, instacron:UFES
Rights: info:eu-repo/semantics/openAccess
