  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Dynamics of Multi-Agent Systems with Bio-Inspired Active and Passive Sensing

Jahromi Shirazi, Masoud 22 October 2020 (has links)
Active sensors, such as radar, lidar and sonar, emit a signal into the environment and gather information from its reflection. In contrast, passive sensors such as cameras and microphones rely on signals emitted from the environment. In current applications of active sensors in multi-agent autonomous systems, agents rely only on their own active sensing and filter out any information available passively. However, fusing passive and active sensing information may improve the agents' accuracy. Moreover, there is evidence that bats using biosonar eavesdrop on conspecifics' echolocation sounds, a successful example of active and passive sonar sensor fusion in nature. We studied the effect of such information fusion in the framework of two problems: collective behavior in a multi-agent system using the Vicsek model, and the canonical robotics problem of Simultaneous Localization And Mapping (SLAM). Collective behavior refers to the emergence of complex behavior in a group of individuals through local interaction. The Vicsek model is a well-established flocking model based on alignment of individuals with their neighbors in the presence of noise. We studied aligned motion in a group in which the agents employ both active and passive sensing. Our study shows that the group behavior is less sensitive to measurement accuracy than to modeling precision. Therefore, using measurement values from the noisier passive sonar can be beneficial. In addition, group alignment improves when the passive measurements are not dramatically noisier than the active measurements. In the SLAM problem, a robot scans an unknown environment, building a map while simultaneously localizing itself within that map. We studied a landmark-based SLAM problem in which the robot uses both active and passive sensing strategies. The information provided passively can improve the accuracy of the active sensing measurements and compensate for its blind spot. 
We developed an estimation algorithm using an Extended Kalman Filter and employed Monte Carlo simulation to find a parameter region in which fusing passive and active sonar information improves the performance of the robot. Our analysis shows this region overlaps the common range of active sonar parameters. / Doctor of Philosophy / Group behavior is a fascinating phenomenon in animal groups such as bird flocks, fish schools, bee colonies and fireflies. For instance, many species of fireflies synchronize their flashing when they bioluminesce. This synchronization pattern is a group behavior created through local interactions among sensing individuals in the group. The research question for this dissertation is inspired by group behavior in bats. Bats use echolocation to perceive the environment: they emit a sound, listen to its echo returning from objects, and by analyzing the echo obtain information about their surroundings. It has been observed that bats may also use the echoes of other bats' sounds to perceive their environment. In other words, they use two different sensors: one is called active sonar, since they actively make the sound and listen to its echoes, and the other is called passive sonar, since they merely listen to the sounds generated by other bats. If this information is useful, can we exploit it in the design of engineered systems? We investigated this question using numerical simulation of two test-bed problems. The first problem is based on a mathematical flocking model in which the individuals in the group align through local interaction. We found that eavesdropping improves the alignment of the group within a range of model parameters relevant to the sensing capabilities of sonar sensors. The other problem is the canonical robotics problem known as simultaneous localization and mapping (SLAM). 
In this problem, a robot explores an unknown environment, creating a map of the environment (mapping) and reporting the path it takes within the map (localization). We found that when the robot uses both passive and active sonar, depending on the accuracy of the two sensing approaches, it can improve the accuracy of both the generated map and the robot's estimated path.
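The Vicsek model at the heart of the first test-bed problem is compact enough to sketch. The following is a generic implementation of the standard model (without the active/passive fusion studied in the dissertation); the box size, interaction radius, and noise amplitude are illustrative values, not those of the thesis:

```python
import numpy as np

def vicsek_step(pos, theta, v=0.03, r=1.0, eta=0.1, L=5.0, rng=None):
    """One Vicsek update: each agent adopts the circular mean heading of
    all neighbors within radius r (including itself), plus uniform
    angular noise of amplitude eta, then moves with constant speed v
    in a periodic box of side L."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        # displacement to every agent, wrapped for periodic boundaries
        d = pos - pos[i]
        d -= L * np.round(d / L)
        mask = (d ** 2).sum(axis=1) <= r ** 2
        # circular mean of neighbor headings, plus noise
        mean_angle = np.arctan2(np.sin(theta[mask]).mean(),
                                np.cos(theta[mask]).mean())
        new_theta[i] = mean_angle + eta * (rng.random() - 0.5)
    new_pos = (pos + v * np.column_stack((np.cos(new_theta),
                                          np.sin(new_theta)))) % L
    return new_pos, new_theta

def polar_order(theta):
    """Alignment order parameter: 1 = perfectly aligned, ~0 = disordered."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

With low noise, the polar order parameter climbs from near zero toward one as headings align; raising `eta` past a critical value destroys the ordered phase, which is the sensitivity the abstract studies under noisier passive measurements.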
22

Applications of Sensor Fusion to Classification, Localization and Mapping

Abdelbar, Mahi Othman Helmi Mohamed Helmi Hussein 30 April 2018 (has links)
Sensor fusion is an essential framework in many engineering fields. It is a relatively new paradigm for integrating data from multiple sources to synthesize new information that, in general, would not have been feasible from the individual parts alone. Within the wireless communications field, many emerging technologies, such as Wireless Sensor Networks (WSN), the Internet of Things (IoT), and spectrum sharing schemes, depend on large numbers of distributed nodes working collaboratively and sharing information. In addition, there is a huge proliferation of smartphones in the world with a growing set of cheap, powerful embedded sensors. Smartphone sensors can collectively monitor a diverse range of human activities and the surrounding environment far beyond the scale of what was possible before. Wireless communications thus open up great opportunities for applying sensor fusion techniques at multiple levels. In this dissertation, we identify two key problems in wireless communications that can greatly benefit from sensor fusion algorithms: Automatic Modulation Classification (AMC), and indoor localization and mapping based on smartphone sensors. Automatic Modulation Classification is a key technology in Cognitive Radio (CR) networks, spectrum sharing, and military wireless applications. Although extensively researched, the performance of signal classification at a single node is largely bounded by channel conditions, which can easily be unreliable. Applying sensor fusion techniques to the signal classification problem within a network of distributed nodes is presented as a means to overcome the detrimental channel effects faced by single nodes and to provide more reliable classification performance. Indoor localization and mapping has gained increasing interest in recent years. Currently deployed positioning techniques, such as the widely successful Global Positioning System (GPS), are optimized for outdoor operation. 
Providing indoor location estimates with high accuracy, down to the room or suite level, is an ongoing challenge. Recently, smartphone sensors, especially accelerometers and gyroscopes, have provided attractive solutions to the indoor localization problem through Pedestrian Dead-Reckoning (PDR) frameworks, although these still face several challenges. Sensor fusion algorithms can be applied to provide new and efficient solutions to the indoor localization problem at two different levels: fusion of measurements from different sensors in a smartphone, and fusion of measurements from several smartphones within a collaborative framework. / Ph. D.
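Decision fusion across distributed nodes, as proposed here for AMC, can be sketched in its simplest form: if each node reports per-class log-likelihoods and the channel noise is conditionally independent across nodes, summing log-likelihoods implements the optimal product rule. The class set and likelihood numbers below are invented for illustration, not taken from the dissertation:

```python
import numpy as np

def fuse_loglikelihoods(node_lls):
    """Sum per-node class log-likelihoods (product rule for conditionally
    independent per-node observations) and pick the maximum-likelihood
    class index."""
    total = np.sum(node_lls, axis=0)
    return int(np.argmax(total)), total

# Hypothetical log-likelihoods for classes [BPSK, QPSK, 8PSK] reported
# by three nodes; node 0 sits in a bad channel and favors the wrong class.
node_lls = np.array([[-1.0, -1.2, -3.0],
                     [-2.0, -0.5, -2.5],
                     [-1.8, -0.7, -2.2]])
label, total = fuse_loglikelihoods(node_lls)
# label == 1: the network recovers QPSK despite node 0's local error
```

This is the sense in which distributed classification overcomes single-node channel effects: one faded node is outvoted by the aggregate evidence.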
23

Semantic and Fiducial Aided Graph Simultaneous Localization and Mapping for Robotic In-Space Assembly and Servicing of Large Truss Structures

Chapin, Samantha Helen Glassner 22 May 2024 (has links)
This research focuses on the development of the semantic and fiducial aided graph simultaneous localization and mapping (SF-GraphSLAM) method, tailored for robotic assembly and servicing of large truss structures. SF-GraphSLAM contributes to the state of the art by creating a novel way to add associations between map landmarks, in this scenario fiducials, by pre-generating a semantic map of expected relations based on the known models of the truss modules, kinematic information about deployable modules, and hardware constraints for assembled modules. This additional information about the expected fiducial relations, and therefore the relative poses of the truss modules, can be used to add robustness against camera pose and measurement error. In parallel, the concept of a mixed-assembly truss structure paradigm was created and analyzed. This mixed-assembly method reduces the number of modules required to construct large truss structures by using a mixture of deployable and assembled truss modules to create a checkerboard array that is scalable to various dimensions and shapes while still minimizing the number of modules compared to a strut-by-strut method. Leveraging this paradigm, SF-GraphSLAM starts at an advantage in terms of minimizing the state vector for the assembly testing. In addition, owing to the added knowledge of the structure and the choice to utilize fiducial markers, SF-GraphSLAM can minimize the number of fiducials used to define the structure and therefore has the minimum state space needed to solve the assembly scenario, greatly improving the real-time estimation process between assembly steps. These optimizations have a larger effect as the size of the scaled end structure increases. SF-GraphSLAM is derived in mathematical form following the same core process used for the pose and measurement components of the base GraphSLAM. 
SF-GraphSLAM is evaluated against state-of-the-art GraphSLAM through simulation using an example 3x3x3 mixed-assembly truss structure, known as the Built On-orbit Robotically-assembled Gigatruss (BORG). A physical BORG test truss was constructed to enable hardware trials of the SF-GraphSLAM algorithm. Although this ground hardware is not ideal for the high-precision application of space structures, it allows for rapid experimental robotic testing. This tailored SF-GraphSLAM approach contributes to the state of the art of robotic in-space servicing, assembly, and manufacturing (ISAM) by providing progress on a method for the autonomous robotic assembly of movable modules into larger structures. This will be critical for missions such as robotically assembling a large antenna structure or space telescope. Furthermore, the core methodology studies how to best utilize information in a large-scale structure environment, including non-static flexible or deployable modules, to map it adequately, which is also applicable to the larger field of robotic operations dealing with structures, such as bridge surveying. / Doctor of Philosophy / The goal of this research is to enable in-space assembly of large truss structures by advancing the state of the art of how a robot can map the structure it is actively assembling. The concept of having a robot create a map of the landmarks it sees, in this case truss elements, while keeping track of its own movement is known as simultaneous localization and mapping (SLAM). This research focuses on the creation of a method called semantic and fiducial aided graph simultaneous localization and mapping (SF-GraphSLAM). The added semantic information is the model knowledge of the truss structure the robot is assembling, including what kinds of modules it contains and their desired relationships to each other. 
Fiducials are optical markers that provide the identification, position, and orientation of whatever they are mounted to. Combining these concepts, SF-GraphSLAM can use easily identifiable fiducials to mark components of the truss structure and use knowledge of how the structure should be assembled to help estimate where the actual physical components are at different stages of the assembly process. This method is used to check whether a truss module is assembled correctly after each step, to ensure the final structure is within the desired requirements. The concept can be likened to assembling a LEGO model: a person verifies they are using the correct brick for the next assembly step and then compares the state of the model with the reference photo before proceeding with the build. An incorrectly assembled module in an early step could result in a later module not being able to be properly placed, or in the final assembled structure not being within operational tolerances. This research shows how SF-GraphSLAM can be implemented for the application of assembling a truss structure out of both deployable and assembled modules. Mathematical analysis, simulations, and hardware testing were completed to compare this new method to the state-of-the-art approach. SF-GraphSLAM is a critical step in the development required to make autonomous robotic assembly of larger structures in space feasible.
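The semantic factors described above can be illustrated with a toy 1D graph: two fiducial positions are estimated from noisy range measurements, plus one strongly weighted factor encoding the spacing expected from the known truss model. This is a generic weighted least-squares sketch of the idea, not the dissertation's actual formulation; all numbers are illustrative:

```python
import numpy as np

def solve_landmarks(z1, z2, d_model, sigma_meas=0.05, sigma_model=0.005):
    """Estimate two 1D fiducial positions x = [l1, l2] from direct
    measurements z1, z2 and a model-derived relative constraint
    l2 - l1 = d_model, solved as weighted least squares."""
    A = np.array([[1.0, 0.0],    # measurement of l1
                  [0.0, 1.0],    # measurement of l2
                  [-1.0, 1.0]])  # semantic factor: expected spacing
    b = np.array([z1, z2, d_model])
    w = np.array([1 / sigma_meas, 1 / sigma_meas, 1 / sigma_model])
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return x
```

With noisy measurements (z1 = 1.02, z2 = 2.10) and a model spacing of 1.0, the tightly weighted semantic factor pulls the estimated spacing to the model value while the pair's midpoint still follows the measurements; this is the robustness against measurement error that the pre-generated relation map buys.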
24

Scalable online decentralized smoothing and mapping

Cunningham, Alexander G. 22 May 2014 (has links)
Many applications for field robots can benefit from large numbers of robots, especially applications where the objective is for the robots to cover or explore a region. A key enabling technology for robust autonomy in these teams of small and cheap robots is the development of collaborative perception to account for the shortcomings of the small and cheap sensors on the robots. In this dissertation, I present DDF-SAM to address the decentralized data fusion (DDF) inference problem with a smoothing and mapping (SAM) approach to single-robot mapping that is online, scalable and consistent while supporting a variety of sensing modalities. The DDF-SAM approach performs fully decentralized simultaneous localization and mapping in which robots choose a relevant subset of variables from their local map to share with neighbors. Each robot summarizes their local map to yield a density on exactly this chosen set of variables, and then distributes this summarized map to neighboring robots, allowing map information to propagate throughout the network. Each robot fuses summarized maps it receives to yield a map solution with an extended sensor horizon. I introduce two primary variations on DDF-SAM, one that uses a batch nonlinear constrained optimization procedure to combine maps, DDF-SAM 1.0, and one that uses an incremental solving approach for substantially faster performance, DDF-SAM 2.0. I validate these systems using a combination of real-world and simulated experiments. In addition, I evaluate design trade-offs for operations within DDF-SAM, with a focus on efficient approximate map summarization to minimize communication costs.
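The map-summarization step in DDF-SAM (condensing a robot's local density onto just the variables it chooses to share) corresponds, for Gaussian densities in information form, to marginalization via the Schur complement. A generic sketch of that operation, not the DDF-SAM codebase itself:

```python
import numpy as np

def summarize(Lam, eta, keep):
    """Marginalize a Gaussian in information form (Lam, eta) onto the
    index set `keep`, eliminating the remaining variables with the
    Schur complement. Returns the summarized (Lam_m, eta_m)."""
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(len(eta)), keep)
    Laa = Lam[np.ix_(keep, keep)]
    Lab = Lam[np.ix_(keep, drop)]
    Lbb = Lam[np.ix_(drop, drop)]
    # Schur complement: Laa - Lab Lbb^{-1} Lab^T
    Lam_m = Laa - Lab @ np.linalg.solve(Lbb, Lab.T)
    eta_m = eta[keep] - Lab @ np.linalg.solve(Lbb, eta[drop])
    return Lam_m, eta_m
```

A useful correctness check is that the mean recovered from the summarized density equals the corresponding entries of the full joint mean; the summarized matrix is also what the receiving robot fuses, which is why choosing a small `keep` set directly bounds communication cost.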
25

Auto-localização e construção de mapas de ambiente para robôs móveis baseados em visão omnidirecional estéreo. / Simultaneous localization and map building for mobile robots with omnidirectional estereo vision.

Oliveira, Paulo Roberto Godoi de 14 April 2008 (has links)
Este projeto consiste no desenvolvimento de um sistema para auto-localização e construção de mapas de ambiente para robôs móveis em um ambiente estruturado, ou seja, que pode ser descrito através de primitivas geométricas. O mapa é construído a partir da reconstrução de imagens adquiridas por um sistema de visão omnidirecional estéreo baseado em um espelho duplo de perfil hiperbólico. A partir de uma única imagem obtida, utilizando-se algoritmos de visão estéreo, realiza-se a reconstrução tridimensional do ambiente em torno do robô e, assim, obtêm-se as distâncias de objetos presentes no ambiente ao sistema de visão. A partir da correspondência da reconstrução de várias imagens tomadas em diferentes posições cria-se o mapa do ambiente. Além do mapa global do ambiente, o sistema também realiza o cálculo da localização do robô no ambiente utilizando informações obtidas na correspondência da reconstrução da seqüência de imagens e a odometria do robô. O sistema de construção de mapas de ambiente e auto-localização do robô é testado em um ambiente virtual e um ambiente real. Os resultados obtidos tanto na construção do mapa global do ambiente, como na localização do robô, mostram que o sistema é capaz de obter informação com a acuracidade necessária para permitir a sua utilização para navegação de robôs móveis. O tempo computacional necessário para reconstruir as imagens, calcular a posição do robô e criar o mapa global do ambiente possibilita que o sistema desenvolvido seja usado em uma aplicação que necessite da geração do mapa global em um intervalo de tempo na ordem de poucos segundos. Ressalta-se que este projeto teve como ponto de partida um projeto de iniciação científica financiado pela FAPESP. Esse trabalho de iniciação científica foi publicado na forma de um trabalho de conclusão de curso (Oliveira, 2005). / This project comprises the development of a system for self-localization and environment map building for mobile robots in a structured environment, i.e., one that can be described by geometric primitives. 
The map is built from images acquired by an omnidirectional stereo vision system based on a double-lobed hyperbolic mirror. From a single acquired image, using stereo vision algorithms, the environment around the robot is three-dimensionally reconstructed and the distances of objects in the environment from the vision system are calculated. From the matching of several reconstructions obtained from images taken at different positions, the global environment map is created. Besides the global map, the system also calculates the localization of the mobile robot using information obtained from the matching of the sequence of image reconstructions and the robot's odometry. The map building and robot localization system is tested in virtual and real environments. The computational time required for the calculations is on the order of a few seconds. The results obtained, both for map building and for robot localization, show that the system is capable of generating information with enough accuracy to allow its use for mobile robot navigation. This project had as its starting point a scientific initiation project supported by FAPESP, which was published as a graduation work (Oliveira, 2005).
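For intuition, the depth recovery underlying any stereo rig reduces to triangulation. The sketch below is only the textbook pinhole relation Z = f·b/d; the actual omnidirectional double-lobed-mirror geometry used in this thesis requires an image-unwrapping step and mirror-specific projection model not shown here:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulate depth from focal length (pixels), baseline (meters)
    and disparity (pixels) for an idealized rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point not triangulable")
    return f_px * baseline_m / disparity_px

# e.g. f = 500 px, baseline 0.1 m, disparity 25 px -> depth 2.0 m
```

Note how the depth error grows as disparity shrinks, which is why nearby objects are ranged far more accurately than distant ones.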
26

A Novel Mobile Robot Navigation Method Based On Combined Feature Based Scan Matching And Fastslam Algorithm

Ozgur, Ayhan 01 September 2010 (has links) (PDF)
The main focus of this study is the implementation of a practical indoor localization and mapping algorithm for large-scale, structured indoor environments. Building an incrementally consistent map while also using it for localization is a partially unsolved problem of prime importance for mobile robot navigation. Within this framework, a combined method consisting of feature-based scan matching and the FastSLAM algorithm, using LADAR and odometer sensors, is presented. In this method, improved data association and localization accuracy are achieved by feeding the SLAM module with better incremental pose information from scan matching instead of the raw odometer output. This thesis presents the following contributions to indoor localization and mapping. Firstly, a method combining feature-based scan matching and FastSLAM is developed. Secondly, improved geometrical relations are used for scan matching, and a novel method based on vector transformation is used for the calculation of the pose difference; these are carefully studied and tuned based on localization and mapping failures encountered in different realistic LADAR datasets. Thirdly, in addition to position, the use of orientation information in line-segment- and corner-oriented data association is presented as an extension to the FastSLAM module. The method is tested with LADAR and odometer data taken from real robot platforms operated in different indoor environments. In addition to datasets from the literature, our own datasets were collected on a Pioneer 3AT experimental robot platform. As a result, a real-time localization algorithm that performs well in large-scale, structured environments is achieved.
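Feeding SLAM with scan-matching increments rather than raw odometry, as this method does, amounts to composing small relative poses onto the current estimate. A generic SE(2) pose-composition sketch (not the thesis's vector-transformation method):

```python
import math

def compose(pose, delta):
    """Compose an incremental SE(2) pose `delta` (e.g. from scan
    matching), expressed in the robot frame, onto the current world
    pose: returns pose (+) delta with heading wrapped to (-pi, pi]."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            (th + dth + math.pi) % (2 * math.pi) - math.pi)
```

For example, a robot facing +y (heading pi/2) that scan-matches a 1 m forward increment ends up at (0, 1) with heading unchanged; replacing noisy odometric deltas with scan-matched deltas in this composition is precisely where the improved incremental pose enters the filter.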
27

Multi-robot Coordination Control Methodology For Search And Rescue Operations

Topal, Sebahattin 01 September 2011 (has links) (PDF)
This dissertation presents a novel multi-robot coordination control algorithm for search and rescue (SAR) operations. Continuous and rapid coverage of unstructured and complex disaster areas in search of possibly buried survivors is a time-critical operation, where prior information about the environment is either unavailable or very limited. Human navigation of such areas is dangerous due to the nature of the debris. Hence, exploration of unknown disaster environments with a team of robots is gaining importance day by day to increase the efficiency of SAR operations. Localization of possible survivors necessitates uninterrupted navigation of robotic aiding devices within the rubble without getting trapped in dead ends. In this work, novel goal-oriented prioritized exploration and map merging methodologies are proposed to generate an efficient multi-robot coordination control strategy. These two methodologies are merged to make the proposed methodology more realistic for real-world applications. Prioritized exploration of an environment is the first important task of an efficient coordination control algorithm for multiple robots. A goal-oriented, prioritized exploration approach for victim search in unknown environments is presented, based on the percolation model, which describes the behavior of liquid in random media. In our approach, robots begin prioritized exploration from the regions with the highest likelihood of containing victims, using a percolation-model-inspired controller. A novel map merging algorithm is presented to increase the performance of the SAR operation in terms of time and energy. The problem of merging partial occupancy grid environment maps, extracted independently by individual robot units during SAR operations, is solved for complex disaster environments. 
Moreover, these maps are combined using intensity- and area-based features without knowing the initial positions and orientations of the robots. The proposed approach overcomes limitations of existing works in the literature: a limited overlapping area between the robots' partial maps is sufficient for good merging performance, and unstructured partial environment maps can be merged efficiently. These abilities allow multi-robot teams to efficiently generate the occupancy grid map of catastrophe areas and localize victims buried in the debris.
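Once the feature-based alignment has registered two partial maps into a common frame, fusing the overlapping cells is straightforward in log-odds form: independent evidence adds. A generic sketch of that final fusion step only (the alignment itself, the hard part solved in this dissertation, is omitted):

```python
import numpy as np

def merge_grids(logodds_a, logodds_b):
    """Fuse two aligned occupancy grids held in log-odds form by
    summation; unknown cells (log-odds 0) contribute nothing."""
    return logodds_a + logodds_b

def to_prob(logodds):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-logodds))
```

When both robots report a cell occupied, the merged probability exceeds either individual estimate (agreement sharpens the map), while cells observed by only one robot simply inherit that robot's belief.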
28

Autonomous Mapping and Exploration of Dynamic Indoor Environments / Autonom kartläggning och utforskning av dynamiska inomhusmiljöer

Fåk, Joel, Wilkinson, Tomas January 2013 (has links)
This thesis describes all the parts needed to build a complete system for autonomous indoor mapping in 3D. The robotic platform used is a two-wheeled Segway operating in a planar environment. This, together with wheel odometers, an Inertial Measurement Unit (IMU), two Microsoft Kinects and a laptop, comprises the backbone of the system, which can be divided into three parts. The first is the localization and mapping part, fundamentally a SLAM (simultaneous localization and mapping) algorithm implemented using the registration technique Iterative Closest Point (ICP). Along with the map being in 3D, it is also designed to handle the mapping of dynamic scenes, something absent from the standard SLAM design. The planning used by the system is twofold: first, path planning, finding a path from the current position to a destination; and second, target planning, determining where to go next given the current state of the map and the robot. The third part of the system comprises the control and collision systems which, while they have not received much focus, are necessary for a fully autonomous system. Contributions made by this thesis include: the 3D map framework Octomap is extended to handle the mapping of dynamic scenes; a new method for target planning, based on image processing, is presented; and a calibration procedure for the robot is derived that gives a full six-degree-of-freedom pose for each Kinect. Results show that our calibration procedure produces an accurate pose for each Kinect, which is crucial for a functioning system. The dynamic mapping is shown to outperform the standard occupancy grid in fundamental situations that arise when mapping dynamic scenes. Additionally, the results indicate that the target planning algorithm provides a fast and easy way to plan new target destinations. Finally, the entire system's autonomous mapping capabilities are evaluated, producing promising results. 
However, the evaluation also highlights problems that limit the system's performance, such as the inaccuracy and short range of the Kinects, and noise added and reinforced by the multiple subsystems. / Detta exjobb beskriver delarna som krävs för att bygga ett komplett system som autonomt kartlägger inomhusmiljöer i tre dimensioner. Robotplattformen är en Segway, som är kapabel att röra sig i ett plan. Segwayn, tillsammans med en tröghetssensor, två Microsoft Kinects och en bärbar dator, utgör grunden till systemet, som kan delas i tre delar: En lokaliserings- och karteringsdel, som i grunden är en SLAM-algoritm (simultan lokalisering och kartläggning) baserad på registreringsmetoden Iterative Closest Point (ICP). Kartan som byggs upp är i tre dimensioner och ska dessutom hantera kartläggningen av dynamiska miljöer, något som originalformuleringen av SLAM-problemet inte klarar av. En automatisk planeringsdel, som består av två delar. Dels ruttplanering som går ut på att hitta en väg från sin nuvarande position till det valda målet, och dels målplanering som innebär att välja ett mål att åka till givet den nuvarande kartan och robotens nuvarande position. Systemets tredje del är regler- och kollisionssystemen. Dessa system har inte varit i fokus i detta arbete, men de är ändå högst nödvändiga för att ett autonomt system skall fungera. Detta examensarbete bidrar med följande: Octomap, ett ramverk för kartläggning i 3D, har utökats för att hantera kartläggningen av dynamiska miljöer; En ny metod för målplanering, baserad på bildbehandling, läggs fram; En kalibreringsprocedur för roboten är framtagen som ger den fullständiga posen i förhållande till roboten för varje Kinect. Resultaten visar att vår kalibreringsprocedur ger en noggrann pose för varje Kinect, vilket är avgörande för att systemet ska fungera. Metoden för kartläggningen av dynamiska miljöer visas prestera bra i grundläggande situationer som uppstår vid kartläggning av dynamiska miljöer. 
Vidare visas att målplaneringsalgoritmen ger ett snabbt och enkelt sätt att planera mål att åka till. Slutligen utvärderas hela systemets autonoma kartläggningsförmåga, som ger lovande resultat. Dock lyfter resultat även fram problem som begränsar systemets prestanda, till exempel Kinectens onoggranhet och korta räckvidd samt brus som läggs till och förstärks av de olika subsystemen.
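The dynamic-scene extension to occupancy mapping described in this entry hinges on bounding how confident a cell may become. A generic sketch of a clamped log-odds cell update in the style used by OctoMap-like frameworks; the constants are common defaults, not this thesis's tuned values:

```python
def update_cell(l, hit, l_occ=0.85, l_free=-0.4, l_min=-2.0, l_max=3.5):
    """Clamped log-odds occupancy update. Clamping bounds a cell's
    confidence, so a long-observed obstacle that later moves can be
    'un-mapped' after only a few contradicting free-space observations,
    which is the key to handling dynamic scenes."""
    l += l_occ if hit else l_free
    return max(l_min, min(l_max, l))
```

Without clamping, a cell hit hundreds of times accumulates unbounded log-odds and would need hundreds of free observations to flip; with the clamp, a handful suffice.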
29

Bayesian mechanisms in spatial cognition : towards real-world capable computational cognitive models of spatial memory

Madl, Tamas January 2016 (has links)
Existing computational cognitive models of spatial memory often neglect difficulties posed by the real world, such as sensory noise, uncertainty, and high spatial complexity. On the other hand, robotics is unconcerned with understanding biological cognition. This thesis takes an interdisciplinary approach towards developing cognitively plausible spatial memory models able to function in realistic environments despite sensory noise and spatial complexity. We hypothesized that Bayesian localization and error correction account for how brains might maintain accurate location estimates despite sensory errors. We argued that these mechanisms are psychologically plausible (producing human-like behaviour) as well as neurally plausible (implementable in brains). To support our hypotheses, we reported modelling results of neural recordings from rats (acquired outside this PhD), constituting the first evidence for Bayesian inference in neurons representing spatial location, as well as modelling of human behaviour data. In addition to dealing with uncertainty, spatial representations have to be stored and used efficiently in realistic environments, using structured representations such as hierarchies (which facilitate efficient retrieval and route planning). Evidence suggests that human spatial memories are structured hierarchically, but the process responsible for these structures has remained unknown. We investigated the features influencing them using data from experiments in real-world and virtual reality environments, and proposed a computational model able to predict them in advance (based on clustering in psychological space). We have extended a general cognitive architecture, LIDA (Learning Intelligent Distribution Agent), with these probabilistic models of how brains might estimate, correct, and structure representations of spatial locations. 
We demonstrated the ability of the resulting model to deal with the challenges of realistic environments by running it in high-fidelity robotic simulations, modelled after participants' actual cities. Our results show that the model can deal with noise, uncertainty and complexity, and that it can reproduce the spatial accuracies of human participants.
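The Bayesian error-correction hypothesis can be illustrated with the scalar Gaussian case: fusing a prior location estimate with a noisy landmark observation yields a precision-weighted compromise whose uncertainty is smaller than either source. A minimal sketch, not the thesis's full model:

```python
def bayes_fuse(mu_prior, var_prior, z, var_obs):
    """Product of two Gaussians (1D Bayes/Kalman update): the posterior
    mean is a precision-weighted average of prior and observation, and
    the posterior variance is always smaller than either input."""
    k = var_prior / (var_prior + var_obs)  # gain: how much to trust z
    mu = mu_prior + k * (z - mu_prior)
    var = (1 - k) * var_prior
    return mu, var
```

With equal variances the estimate lands halfway between prior and observation; as the observation becomes more precise, the gain pulls the posterior toward it. This shrinking of uncertainty with each landmark encounter is the mechanism proposed for keeping biological location estimates accurate despite noisy senses.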
