1

Context-Based Vision System for Place and Object Recognition

Torralba, Antonio, Murphy, Kevin P., Freeman, William T., Rubin, Mark A. 19 March 2003
While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a context-based vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street), and to use that information to provide contextual priors for object recognition (e.g., table, chair, car, computer). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides real-time feedback to the user.
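The pipeline this abstract describes (global image feature, then place posterior, then contextual object priors) can be illustrated with a minimal sketch. The prototype vectors and probability tables below are invented for the example and are not the authors' model:

```python
# Hypothetical sketch of the general pipeline: a low-dimensional global
# ("gist"-style) image feature is softly matched against known places, and
# the resulting place posterior re-weights per-object priors.
import numpy as np

def place_posterior(gist, place_prototypes, sigma=1.0):
    """Soft-assign a global image feature to known places."""
    # place_prototypes: dict place_name -> mean gist vector (assumed layout)
    names = list(place_prototypes)
    d = np.array([np.linalg.norm(gist - place_prototypes[n]) for n in names])
    w = np.exp(-0.5 * (d / sigma) ** 2)
    return dict(zip(names, w / w.sum()))

def object_prior(posterior, p_obj_given_place):
    """Contextual prior P(object) = sum over places of P(object|place) P(place)."""
    objects = next(iter(p_obj_given_place.values())).keys()
    return {o: sum(posterior[p] * p_obj_given_place[p][o] for p in posterior)
            for o in objects}

# Toy usage with made-up statistics
prototypes = {"office": np.array([0.9, 0.1]), "street": np.array([0.1, 0.9])}
p_obj = {"office": {"computer": 0.8, "car": 0.05},
         "street": {"computer": 0.05, "car": 0.7}}
post = place_posterior(np.array([0.8, 0.2]), prototypes)
print(object_prior(post, p_obj))  # computer prior dominates in an office-like scene
```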
2

Translating sensor measurements into texts for localization and mapping with mobile robots

Maffei, Renan de Queiroz January 2017
Simultaneous Localization and Mapping (SLAM), fundamental for building robots with true autonomy, is one of the most difficult problems in Robotics. It consists of estimating the position of a robot moving in an unknown environment while incrementally building the map of that environment. Arguably the most crucial requirement for proper localization and mapping is precise place recognition, that is, determining whether the robot is at the same place on different occasions just by looking at the observations it takes. Most approaches in the literature do well when using highly expressive sensors such as cameras, or when the robot operates in environments with little ambiguity. This is not the case, however, for robots equipped only with range-finder sensors in highly ambiguous structured indoor environments. A good SLAM strategy must be able to handle these scenarios, deal with noise and observation errors, and, especially, model the environment and estimate the robot state efficiently. Our proposal in this work is to translate sequences of raw laser measurements into an efficient and compact text representation and to address the place recognition problem using linguistic processing techniques.
First, we translate raw sensor measurements into simple observation values computed through a novel observation model based on kernel density estimation, called Free-Space Density (FSD). These values are quantized into significant classes, allowing the environment to be divided into contiguous regions of homogeneous spatial density, such as corridors and corners. Regions are represented compactly by simple words composed of three syllables: the spatial-density value, the size, and the orientation variation of that region. In the end, the chains of words associated with all observations made by the robot compose a text, in which we search for matches of n-grams (i.e., sequences of words), a popular technique from shallow linguistic processing. The technique is also successfully applied in scenarios of long-term operation, where we must deal with semi-static objects (i.e., objects that move occasionally, such as doors and furniture). All approaches were evaluated in simulated and real scenarios, obtaining good results.
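A rough illustration of the word-and-n-gram idea described above follows; all thresholds and the word format are assumptions for the example, not the thesis's actual encoding:

```python
# A minimal sketch (not the thesis code): quantized free-space density
# values become "words", and revisits are detected by matching n-grams
# between the current text and the stored one.
def to_words(fsd_values, sizes, turns, density_bins=(0.3, 0.6)):
    """Build one word per region: density class + size class + turn class."""
    words = []
    for d, s, t in zip(fsd_values, sizes, turns):
        dens = "low" if d < density_bins[0] else "mid" if d < density_bins[1] else "high"
        size = "short" if s < 2.0 else "long"          # thresholds are illustrative
        turn = "straight" if abs(t) < 0.3 else "curved"
        words.append(f"{dens}-{size}-{turn}")
    return words

def ngram_matches(text_a, text_b, n=3):
    """Return (pos_in_a, pos_in_b) pairs where an n-gram of text_a reappears in text_b."""
    grams_b = {tuple(text_b[i:i + n]): i for i in range(len(text_b) - n + 1)}
    return [(i, grams_b[tuple(text_a[i:i + n])])
            for i in range(len(text_a) - n + 1)
            if tuple(text_a[i:i + n]) in grams_b]

stored = to_words([0.2, 0.7, 0.2, 0.5], [3.1, 1.0, 2.8, 1.2], [0.1, 1.2, 0.0, 0.2])
query  = to_words([0.2, 0.7, 0.2], [3.0, 1.1, 2.9], [0.05, 1.1, 0.1])
print(ngram_matches(query, stored, n=2))  # candidate loop-closure positions
```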
3

Scene Segmentation and Object Classification for Place Recognition

Cheng, Chang 01 August 2010
This dissertation tries to solve the place recognition and loop-closing problem in a way similar to the human visual system. First, a novel image segmentation algorithm is developed. It is based on a Perceptual Organization model, which allows the algorithm to 'perceive' the special structural relations among the constituent parts of an unknown object and hence to group them together without object-specific knowledge. Then a new object recognition method is developed. Based on the fairly accurate segmentations generated by the segmentation algorithm, an informative object description is built that includes not only appearance (colors and textures) but also part layout and shape information. Next, a novel feature selection algorithm is developed that can select the subset of features that best describes the characteristics of an object class; classifiers trained with the selected features can classify objects with high accuracy. In the next step, a subset of the salient objects in a scene is selected as landmark objects to label the place. The landmark objects are highly distinctive and widely visible. Each landmark object is represented by a list of SIFT descriptors extracted from the object surface, a representation that allows an object to be reliably recognized under certain viewpoint changes. To achieve efficient scene matching, an indexing structure is developed that uses the texture and color features of objects as indexing features; both are viewpoint-invariant and can therefore effectively find candidate objects with surface characteristics similar to those of a query object. Experimental results show that the object-based place recognition and loop detection method can efficiently recognize a place in a large, complex outdoor environment.
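The landmark-matching step can be sketched roughly as below. The two-stage design (coarse color-histogram shortlist, then SIFT matching with a ratio test) follows the abstract, but every threshold, function name, and data layout is an assumption:

```python
# A hedged sketch: each landmark object is a list of SIFT descriptors; a
# coarse color-histogram index shortlists candidates before descriptor
# matching. Requires opencv-python >= 4.4 for SIFT.
import cv2
import numpy as np

sift = cv2.SIFT_create()

def describe_object(patch_bgr):
    """Return (color-histogram index key, SIFT descriptors) for an object patch."""
    hist = cv2.calcHist([patch_bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).flatten()
    hist /= hist.sum() + 1e-9
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = sift.detectAndCompute(gray, None)
    return hist, desc

def match_landmark(query, database, shortlist=5, ratio=0.75):
    """database: list of (hist, desc, name) tuples; returns best name and match count."""
    q_hist, q_desc = query
    # coarse stage: nearest color histograms only
    ranked = sorted(database, key=lambda e: np.linalg.norm(q_hist - e[0]))[:shortlist]
    bf = cv2.BFMatcher()
    best, best_count = None, 0
    for hist, desc, name in ranked:
        if q_desc is None or desc is None:
            continue
        pairs = bf.knnMatch(q_desc, desc, k=2)
        good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best_count:
            best, best_count = name, len(good)
    return best, best_count
```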
4

Mechanisms of place recognition and path integration based on the insect visual system

Stone, Thomas Jonathan January 2017
Animals are often able to solve complex navigational tasks in very challenging terrain, despite using low-resolution sensors and minimal computational power, providing inspiration for robots. In particular, many species of insect are known to solve complex navigation problems, often combining an array of different behaviours (Wehner et al., 1996; Collett, 1996). Their nervous systems are also comparatively simple relative to those of mammals and other vertebrates. In the first part of this thesis, the visual input of a navigating desert ant, Cataglyphis velox, was mimicked by capturing images in ultraviolet (UV) at wavelengths similar to those of the ant's compound eye. The natural segmentation of ground and sky led to the hypothesis that skyline contours could be used by ants as features for navigation. As a proof of concept, sky-segmented binary images were used as input to SeqSLAM, an established localisation algorithm (Milford and Wyeth, 2012), validating the plausibility of this claim (Stone et al., 2014). A follow-up investigation sought to determine whether using the sky as a feature would help overcome image-matching problems that the ant often faced, such as variance in tilt and yaw rotation. A robotic localisation study showed that using spherical harmonics (SH), a representation in the frequency domain, combined with the extracted sky can greatly help robots localise on uneven terrain. Results showed improved performance over state-of-the-art point-feature localisation methods on fast, bumpy tracks (Stone et al., 2016a). In the second part, an approach to understanding how insects perform a navigational task called path integration was pursued by modelling part of the brain of the sweat bee Megalopta genalis. A recent discovery that two populations of cells act as a celestial compass and a visual odometer, respectively, led to the hypothesis that circuitry at their point of convergence in the central complex (CX) could give rise to path integration. A firing-rate-based model was developed, with connectivity derived from the overlap of observed neural arborisations of individual cells, and successfully used to build up a home vector and steer an agent back to the nest (Stone et al., 2016b). This approach has the appeal that neural circuitry is highly conserved across insects, so findings here could have wide implications for insect navigation in general. The developed model is the first functioning path integrator based on individual cellular connections.
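The home-vector computation underlying path integration can be written compactly. This is the textbook accumulation of compass and odometry signals, not the firing-rate central-complex model developed in the thesis:

```python
# Illustrative path integration under the thesis's two-input view: a compass
# heading (celestial compass) and a speed estimate (visual odometer) are
# accumulated into a home vector.
import math

def integrate_path(headings, speeds, dt=0.1):
    x = y = 0.0
    for theta, v in zip(headings, speeds):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
    home_distance = math.hypot(x, y)
    home_bearing = math.atan2(-y, -x)   # direction pointing back to the nest
    return home_distance, home_bearing

# Outbound leg: east then north; the home vector points back south-west.
d, b = integrate_path([0.0] * 50 + [math.pi / 2] * 50, [1.0] * 100)
print(f"distance {d:.1f} m, bearing {math.degrees(b):.0f} deg")
```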
5

Deep visual place recognition for mobile surveillance services: Evaluation of localization methods for GPS-denied environment

Blomqvist, Linus January 2022
Can an outward-facing camera on a bus be used to recognize its location in a GPS-denied environment? Observit provides cloud-based mobile surveillance services for bus operators using IP cameras with wireless connectivity. The continuous gathering of video information opens up new possibilities for additional services. One such service is to use this information, together with visual place recognition technology, to locate the vehicle where an image was taken. The objective of this thesis has been to answer how well learnable visual place recognition methods can localize a bus in a GPS-denied environment, and whether a lightweight model can achieve results as accurate as a heavyweight model. To this end, four model architectures have been implemented, trained, and evaluated on a purpose-built dataset of places of interest. A visual place recognition application has also been implemented in order to test the models on bus video footage. The results show that the heavyweight model, built on VGG16 with Patch-NetVLAD, performed best on the task across different recall@N values and achieved a recall@1 score of 92.31%. The lightweight model, which used a MobileNetV2 backbone with Patch-NetVLAD, scored similar recall@N results and achieved the same recall@1 score. The thesis shows that, with different localization methods, it is possible for a vehicle to identify its position in a GPS-denied environment using a model that could be deployed on a camera. This work impacts companies that rely on cameras as their source of service.
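The recall@N metric reported above can be computed with a short routine like the following sketch; the feature shapes, ground-truth format, and tolerance are assumptions:

```python
# A query counts as correct if any of its top-N retrieved reference places
# lies within a ground-truth tolerance.
import numpy as np

def recall_at_n(query_feats, ref_feats, gt_ref_idx, n=1, tol=0):
    hits = 0
    for q, gt in zip(query_feats, gt_ref_idx):
        dists = np.linalg.norm(ref_feats - q, axis=1)
        top_n = np.argsort(dists)[:n]
        if np.any(np.abs(top_n - gt) <= tol):   # tol frames around ground truth
            hits += 1
    return hits / len(query_feats)

rng = np.random.default_rng(0)
refs = rng.normal(size=(100, 128))
queries = refs[:20] + 0.05 * rng.normal(size=(20, 128))   # noisy revisits
print(recall_at_n(queries, refs, gt_ref_idx=np.arange(20), n=1))
```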
6

Place recognition based visual localization in changing environments

Qiao, Yongliang 03 April 2017
In many applications, it is crucial that a robot or vehicle localizes itself within the world, especially for autonomous navigation and driving. The goal of this thesis is to improve place recognition performance for visual localization in changing environments. The approach is as follows: in an off-line phase, geo-referenced images of each location are acquired and features are extracted and saved; in the on-line phase, the vehicle localizes itself by identifying a previously visited location through image or sequence retrieval. However, visual localization is challenging due to drastic appearance and illumination changes caused by weather conditions or seasonal change. This thesis addresses this challenge by strengthening the ability to describe and recognize places. Several approaches are proposed:
1) A multi-feature combination of CSLBP (extracted from the gray-scale image and the disparity map) and HOG features is used for visual localization. By taking advantage of depth, texture, and shape information, recognition performance can be improved. In addition, locality-sensitive hashing (LSH) is used to speed up place recognition;
2) Visual localization across seasons is proposed, based on sequence matching and a feature combination of GIST and CSLBP. Matching places by considering sequences and feature combinations provides high robustness to extreme perceptual changes;
3) All-environment visual localization is proposed, based on automatically learned Convolutional Network (ConvNet) features and localized sequence matching. To improve computational efficiency, LSH is used to achieve real-time visual localization with minimal accuracy degradation.
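The LSH speed-up mentioned in approaches 1 and 3 can be illustrated with a minimal random-hyperplane hash; the parameters, class name, and labels are invented for the example:

```python
# Descriptors are hashed to short binary codes so that only same-bucket
# candidates need exact comparison at query time.
import numpy as np
from collections import defaultdict

class HyperplaneLSH:
    def __init__(self, dim, n_bits=16, seed=0):
        self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
        self.buckets = defaultdict(list)

    def _key(self, v):
        return tuple((self.planes @ v > 0).astype(int))

    def add(self, v, label):
        self.buckets[self._key(v)].append((v, label))

    def query(self, v):
        # exact comparison only inside the matching bucket
        cands = self.buckets.get(self._key(v), [])
        if not cands:
            return None
        return min(cands, key=lambda c: np.linalg.norm(c[0] - v))[1]

index = HyperplaneLSH(dim=64)
rng = np.random.default_rng(1)
base = rng.normal(size=(500, 64))
for i, v in enumerate(base):
    index.add(v, f"place_{i}")
print(index.query(base[42] + 0.01 * rng.normal(size=64)))  # likely "place_42"
```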
7

Relocalization and Loop Closing in Vision Simultaneous Localization and Mapping (VSLAM) of a Mobile Robot Using ORB Method

Venkatanaga Amrusha Aryasomyajula (8728027) 24 April 2020
It is essential for a mobile robot during autonomous navigation to be able to detect revisited places, or loop closures, while performing Vision Simultaneous Localization and Mapping (VSLAM). Loop closing has been identified as one of the critical data-association problems when building maps, and it is an efficient way to eliminate errors and improve the accuracy of robot localization and mapping. To solve the loop-closing problem, the ORB-SLAM algorithm is used: a feature-based simultaneous localization and mapping system that operates in real time, includes loop closing and relocalization, and allows automatic initialization.
To check the performance of the algorithm, monocular, stereo, and RGB-D cameras are used. The aim of this thesis is to show the accuracy of the relocalization and loop-closing processes using the ORB-SLAM algorithm in a variety of environmental settings. The performance of relocalization and loop closing in different challenging indoor scenarios is demonstrated through various experiments. Experimental results show the applicability of the approach in real-time applications such as autonomous navigation.
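The ORB feature matching at the core of loop detection can be sketched as below; ORB-SLAM itself adds a bag-of-words index and geometric verification on top, and the thresholds here are illustrative:

```python
# Two frames are declared a loop-closure candidate when enough ORB features
# match under a ratio test.
import cv2

orb = cv2.ORB_create(nfeatures=1000)

def is_loop_candidate(img_a, img_b, min_matches=40, ratio=0.75):
    _, da = orb.detectAndCompute(img_a, None)
    _, db = orb.detectAndCompute(img_b, None)
    if da is None or db is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # Hamming norm for binary ORB descriptors
    pairs = matcher.knnMatch(da, db, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches
```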
8

Visual-words-based probabilistic methods for semantic place recognition

Dubois, Mathieu 20 February 2012
Human beings naturally organize their space into discrete units. These units, called "semantic places", are characterized by their spatial extent and functional unity. Moreover, we are able to quickly recognize a given place (e.g., office 205) and its category (i.e., an office) solely from its visual appearance. Recent work in semantic place recognition seeks to endow robots with similar capabilities. Contrary to classical localization and mapping work, this problem is usually tackled as a supervised learning problem. Our contributions are twofold. First, we combine global image characterization, which captures the global organization of the image, with visual-words methods, which are usually based on unsupervised classification of local signatures. Our second, closely related, contribution is to use several images for recognition, applying Bayesian methods for temporal integration. Our first model does not use the natural temporal ordering of images; temporal integration is then very simple but has difficulties when the robot moves from one place to another. We therefore develop several mechanisms to detect place transitions; these mechanisms are simple and require no additional learning. A second model augments the classical Bayesian filtering approach by using the local order among images.
We compare our methods to state-of-the-art algorithms on place recognition and place categorization tasks, using several databases. We study the influence of system parameters and compare the different global characterization methods on the same dataset. These experiments show that our approach, while simple, leads to better results, especially on the place categorization task.
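The naive temporal-integration model described above (the one that ignores image order) amounts to a recursive Bayes update over place labels. The following sketch uses invented likelihood values and a simple forgetting term, which is an assumption rather than the thesis's exact transition handling:

```python
# Recursive Bayes filter over place labels: each image's class likelihoods
# update a belief vector; the leak term slowly forgets old evidence.
import numpy as np

def integrate(likelihoods, prior=None, leak=0.05):
    """likelihoods: (T, K) array of per-image P(image | place) values."""
    T, K = likelihoods.shape
    belief = np.full(K, 1.0 / K) if prior is None else prior
    for t in range(T):
        belief = (1 - leak) * belief + leak / K     # forget a little each step
        belief = belief * likelihoods[t]
        belief /= belief.sum()
    return belief

# Three places; five images that mostly support place 1
obs = np.array([[0.2, 0.7, 0.1]] * 4 + [[0.5, 0.3, 0.2]])
print(integrate(obs))   # belief concentrates on place 1 despite the last image
```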
