CAPES / Mobile robot applications that involve automated exploration and inspection of environments are often dependent on novelty detection, the ability to differentiate between common and uncommon perceptions. Because novelty can be anything that deviates from the normal context, we argue that implementing a novelty filter requires exploiting the robot's sensory data from the ground up, building models of normality rather than of abnormality. In this work we use unrestricted colour visual data as perceptual input to on-line incremental learning algorithms. Unlike other sensor modalities, vision can provide a variety of useful information about the environment through massive amounts of data, which often need to be reduced for real-time operation. Here we use mechanisms of visual attention to select candidate image regions to be encoded and fed to higher levels of processing, enabling the localisation of novel features within the input image frame. An extensive series of experiments using visual input, obtained by a real mobile robot interacting with laboratory and medium-scale real-world environments, is used to discuss different visual novelty filter configurations. We compare the performance and functionality of novelty detection mechanisms based on the Grow-When-Required neural network and on incremental Principal Component Analysis. Results are assessed using both qualitative and quantitative methods, demonstrating the advantages and disadvantages of each investigated approach.
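The abstract contrasts novelty filters built on the Grow-When-Required network and on incremental Principal Component Analysis, both of which learn a model of normality on-line and flag perceptions that the model explains poorly. Below is a minimal sketch of the incremental-PCA variant only, assuming scikit-learn's IncrementalPCA, flattened image regions supplied by a visual attention stage, and an illustrative reconstruction-error threshold; the class name, parameters, and threshold are hypothetical and not taken from the thesis.

# Minimal sketch (not the thesis implementation): a novelty filter that models
# "normality" with incremental PCA and flags regions whose reconstruction
# error exceeds a threshold. Names, parameters and the threshold are
# illustrative assumptions.
import numpy as np
from sklearn.decomposition import IncrementalPCA


class PCANoveltyFilter:
    def __init__(self, n_components=16, batch_size=32, threshold=0.05):
        self.pca = IncrementalPCA(n_components=n_components)
        self.batch_size = batch_size
        self.threshold = threshold
        self._buffer = []      # pending samples for the next partial_fit
        self._fitted = False

    def score(self, region):
        """Reconstruction error of a flattened image region (novelty score)."""
        x = region.reshape(1, -1).astype(np.float64)
        if not self._fitted:
            return np.inf      # nothing learned yet: everything is novel
        x_hat = self.pca.inverse_transform(self.pca.transform(x))
        return float(np.mean((x - x_hat) ** 2))

    def update(self, region):
        """Incorporate a region into the model of normality (on-line learning)."""
        self._buffer.append(region.reshape(-1).astype(np.float64))
        if len(self._buffer) >= max(self.batch_size, self.pca.n_components):
            self.pca.partial_fit(np.vstack(self._buffer))
            self._buffer.clear()
            self._fitted = True

    def is_novel(self, region):
        return self.score(region) > self.threshold


# Usage: regions would come from a visual attention mechanism (salient patches
# extracted from each camera frame); random data is used here only to show the
# call pattern.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    filt = PCANoveltyFilter()
    for _ in range(10):                       # "exploration" phase
        for patch in rng.normal(size=(32, 24 * 24 * 3)):
            filt.update(patch)
    test_patch = rng.normal(size=24 * 24 * 3)
    print("score:", filt.score(test_patch), "novel:", filt.is_novel(test_patch))

A Grow-When-Required filter would replace the PCA model with a topology-growing network of prototype nodes and use habituation rather than reconstruction error as the novelty measure; the surrounding update/score/is_novel structure would stay the same.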
Identifier | oai:union.ndltd.org:IBICT/oai:repositorio.utfpr.edu.br:1/644
Date | 06 1900
Creators | Vieira Neto, Hugo
Contributors | Nehmzow, Ulrich
Publisher | University of Essex, Curitiba, Department of Computer Science |
Source Sets | IBICT Brazilian ETDs |
Language | English |
Detected Language | English |
Type | info:eu-repo/semantics/publishedVersion, info:eu-repo/semantics/doctoralThesis |
Source | reponame:Repositório Institucional da UTFPR, instname:Universidade Tecnológica Federal do Paraná, instacron:UTFPR |
Rights | info:eu-repo/semantics/openAccess |