Previous issue date: 2013-08-02 / Conselho Nacional de Desenvolvimento Científico e Tecnológico / Visual attention is an important task in autonomous robotics, but its
complexity makes the required processing time significant. We propose an architecture for
feature selection using foveated images that is guided by visual attention tasks and that
reduces the processing time required to perform these tasks. Our system can be applied
to bottom-up or top-down visual attention. The foveation model determines which scales
are used by the feature extraction algorithm. The system is able to discard features
that are not strictly necessary for the tasks, thus reducing the processing time. If
the fovea is correctly placed, then it is possible to reduce the processing time without
compromising the quality of the task outputs. The distance of the fovea from the object
is also analyzed. If the visual system loses tracking during top-down attention, basic
fovea placement strategies can be applied. Experiments have shown that this approach
can reduce processing time by up to 60%. To validate the method, we
tested it with the feature extraction algorithm known as Speeded Up Robust Features (SURF), one
of the most efficient approaches for feature extraction. With the proposed architecture,
we can meet the real-time requirements of robot vision, mainly for application in
autonomous robotics. / A atenção visual é uma importante tarefa em robótica autônoma, mas devido à sua
complexidade, o tempo de processamento necessário é significativo. Propõe-se uma arquitetura
para seleção de features usando imagens foveadas que é guiada por tarefas envolvendo
atenção visual e que reduz o tempo de processamento para realizar tais tarefas.
O sistema proposto pode ser aplicado para atenção bottom-up ou top-down. O modelo
de foveamento determina quais escalas devem ser utilizadas no algoritmo de extração de
features. O sistema é capaz de descartar features que não são essenciais para a realização
da tarefa e, dessa forma, reduz o tempo de processamento. Se a fóvea é corretamente
posicionada, então é possível reduzir o tempo de processamento sem comprometer o desempenho
da tarefa. A distância da fóvea para o objeto também é analisada. Caso o
sistema visual perca o tracking na atenção top-down, estratégias básicas de reposicionamento
da fóvea podem ser aplicadas. Experimentos demonstram que é possível reduzir
em até 60% o tempo de processamento com essa abordagem. Para validar o método proposto,
são realizados testes com o algoritmo de extração de features SURF, um dos mais
eficientes existentes. Com a arquitetura proposta para seleção de features, é possível cumprir
requisitos de um sistema de visão em tempo-real com possíveis aplicações na área de
robótica
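The core idea described in the abstract — letting the fovea position decide which scales of the feature-extraction pyramid are processed at each image location — can be illustrated with a minimal sketch. This is not the thesis implementation; the function names (`scales_for_point`, `count_scale_evaluations`) and the linear distance-to-scale rule are illustrative assumptions.

```python
# Hedged sketch of fovea-guided scale selection: points near the fovea keep
# all pyramid scales (fine detail); farther points keep only coarser scales,
# so fewer features are extracted there and processing time drops.

def scales_for_point(x, y, fovea, num_scales, radius_step):
    """Return the pyramid scale indices to process at pixel (x, y).

    Assumption: each `radius_step` pixels of distance from the fovea drops
    the finest remaining scale, down to a minimum of one (coarsest) scale.
    """
    dist = ((x - fovea[0]) ** 2 + (y - fovea[1]) ** 2) ** 0.5
    dropped = min(int(dist // radius_step), num_scales - 1)
    return list(range(dropped, num_scales))

def count_scale_evaluations(width, height, fovea, num_scales, radius_step):
    """Total (pixel, scale) evaluations under foveated scale selection."""
    return sum(
        len(scales_for_point(x, y, fovea, num_scales, radius_step))
        for y in range(height)
        for x in range(width)
    )

if __name__ == "__main__":
    w, h, scales = 64, 64, 4
    uniform = w * h * scales  # cost of processing every scale everywhere
    foveated = count_scale_evaluations(w, h, (32, 32), scales, 16)
    print(f"uniform: {uniform}, foveated: {foveated}, "
          f"saved: {100 * (1 - foveated / uniform):.0f}%")
```

In a real detector such as SURF, the selection would gate which octaves of the Hessian response are computed per region; the sketch only counts the work saved by the gating rule.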
Identifier | oai:union.ndltd.org:IBICT/oai:repositorio.ufrn.br:123456789/15235 |
Date | 02 August 2013 |
Creators | Gomes, Rafael Beserra |
Contributors | CPF:32541457120, http://lattes.cnpq.br/1562357566810393, Carvalho, Bruno Motta de, CPF:79228860472, http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4791070J6, Chaimowicz, Luiz, CPF:62830732634, http://lattes.cnpq.br/4499928813481251, Leite, Luiz Eduardo Cunha, CPF:02702379419, http://lattes.cnpq.br/4080017602605582, Cesar Junior, Roberto Marcondes, CPF:07053817814, http://lattes.cnpq.br/2240951178648368, Gonçalves, Luiz Marcos Garcia |
Publisher | Universidade Federal do Rio Grande do Norte, Programa de Pós-Graduação em Engenharia Elétrica, UFRN, BR, Automação e Sistemas; Engenharia de Computação; Telecomunicações |
Source Sets | IBICT Brazilian ETDs |
Language | Portuguese |
Detected Language | English |
Type | info:eu-repo/semantics/publishedVersion, info:eu-repo/semantics/doctoralThesis |
Format | application/pdf |
Source | reponame:Repositório Institucional da UFRN, instname:Universidade Federal do Rio Grande do Norte, instacron:UFRN |
Rights | info:eu-repo/semantics/openAccess |