1

Grades de evidência com visão estéreo omnidirecional para robôs móveis. / Evidence grids with omnidirectional stereovision for mobile robots.

Corrêa, Fabiano Rogério 27 August 2004 (has links)
Autonomous mobile robots depend on information acquired from their sensors to make decisions while carrying out their tasks. Vision systems provide a large amount of data about the environment in which the robot operates. In particular, an omnidirectional vision system can provide information about the entire space around the robot in a single image. By processing a pair (or more) of omnidirectional images, the distances between the robot and the objects in its workspace can be obtained. Because all sensing is subject to uncertainty, a probabilistic sensor model is needed so that the acquired information can be used in the robot's internal decision-making processes during its task. Thus, using an omnidirectional stereovision system as the sole source of information for a stochastic spatial representation of the environment known as Evidence Grids, the robot can determine the occupancy probability of the space around it and navigate autonomously. This work presents a stereo algorithm for omnidirectional images and a model of the omnidirectional stereovision system used to update the Evidence Grids. It is the first stage of a project whose goal is navigation and exploration of unknown, unstructured environments using a probabilistic model based on Evidence Grids as the robot's knowledge base.
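The Evidence Grid update this abstract describes is, at its core, a Bayesian accumulation of range evidence over a discretized map. Below is a minimal log-odds sketch of that update; the inverse sensor model probabilities, grid layout, and helper names are hypothetical illustrations, not the thesis's actual model for omnidirectional stereo.

```python
import numpy as np

# Hypothetical inverse sensor model values (not from the thesis).
P_OCC, P_FREE, P_PRIOR = 0.7, 0.3, 0.5

def logit(p):
    return np.log(p / (1.0 - p))

L_OCC, L_FREE, L_PRIOR = logit(P_OCC), logit(P_FREE), logit(P_PRIOR)

class EvidenceGrid:
    """2D grid storing occupancy evidence in log-odds form."""
    def __init__(self, size, resolution):
        self.res = resolution
        self.logodds = np.full((size, size), L_PRIOR)

    def update_ray(self, x0, y0, x1, y1):
        """Apply one range measurement: cells along the ray are free,
        the endpoint cell is occupied. Walks the ray in fixed steps
        (a crude stand-in for Bresenham traversal); assumes
        non-negative coordinates inside the grid."""
        n = int(max(abs(x1 - x0), abs(y1 - y0)) / self.res) + 1
        xs, ys = np.linspace(x0, x1, n), np.linspace(y0, y1, n)
        for i, (x, y) in enumerate(zip(xs, ys)):
            cx, cy = int(x / self.res), int(y / self.res)
            l = L_OCC if i == n - 1 else L_FREE
            # Bayesian log-odds update: add evidence, subtract prior.
            self.logodds[cy, cx] += l - L_PRIOR

    def probability(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))
```

Working in log-odds turns the multiplicative Bayesian update into a simple addition per cell, which is why occupancy and evidence grids are almost always stored this way.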
2

Real Time SLAM Using Compressed Occupancy Grids For a Low Cost Autonomous Underwater Vehicle

Cain, Christopher Hawthorn 07 May 2014 (has links)
The research presented in this dissertation pertains to the development of a real-time SLAM solution that can be performed by a low cost autonomous underwater vehicle equipped with low cost, memory-constrained computing resources. The design of a custom rangefinder for underwater applications is presented. The rangefinder makes use of two laser line generators and a camera to measure the unknown distance to objects in an underwater environment. A visual odometry algorithm is introduced that makes use of a downward-facing camera to provide the underwater vehicle with localization information. The sensor suite, composed of the laser rangefinder, downward-facing camera, and a digital compass, is verified using the Extended Kalman Filter based solution to the SLAM problem along with the particle filter based solution known as FastSLAM, to ensure that it provides information accurate enough to solve the SLAM problem for our low cost underwater vehicle. Next, an extension of the FastSLAM algorithm that stores the map of the environment in an occupancy grid is introduced. The use of occupancy grids greatly increases the memory required by the algorithm, so a version of the FastSLAM algorithm that stores the occupancy grids using the Haar wavelet representation is presented. Finally, a form of the FastSLAM algorithm is presented that stores the occupancy grid in compressed form to reduce the amount of memory required. Experimental results show that the compressed algorithm achieves the same result as the algorithm that stores the complete occupancy grid while using only 40% of the memory required to store the complete grid. / Ph. D.
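The dissertation's compression idea, storing each occupancy grid in the Haar wavelet domain, can be illustrated with a single-level 2D Haar transform. The following sketch hand-rolls that transform in NumPy and zeroes out small detail coefficients; the threshold and function names are hypothetical, and a real implementation would apply the transform recursively and store the surviving coefficients sparsely.

```python
import numpy as np

def haar2d(grid):
    """One level of the 2D Haar transform: an averaged band plus
    three detail bands. Assumes even grid dimensions."""
    a = (grid[0::2, 0::2] + grid[0::2, 1::2]
         + grid[1::2, 0::2] + grid[1::2, 1::2]) / 4.0   # approximation
    h = (grid[0::2, 0::2] - grid[0::2, 1::2]
         + grid[1::2, 0::2] - grid[1::2, 1::2]) / 4.0   # horizontal detail
    v = (grid[0::2, 0::2] + grid[0::2, 1::2]
         - grid[1::2, 0::2] - grid[1::2, 1::2]) / 4.0   # vertical detail
    d = (grid[0::2, 0::2] - grid[0::2, 1::2]
         - grid[1::2, 0::2] + grid[1::2, 1::2]) / 4.0   # diagonal detail
    return a, h, v, d

def compress(grid, threshold=0.05):
    """Zero out detail coefficients below threshold. Large uniform
    regions of an occupancy grid (all free or all unknown) produce
    near-zero details, so most coefficients vanish."""
    a, h, v, d = haar2d(grid)
    return a, [np.where(np.abs(b) < threshold, 0.0, b) for b in (h, v, d)]

def reconstruct(a, bands):
    """Invert the single-level transform."""
    h, v, d = bands
    grid = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    grid[0::2, 0::2] = a + h + v + d
    grid[0::2, 1::2] = a - h + v - d
    grid[1::2, 0::2] = a + h - v - d
    grid[1::2, 1::2] = a - h - v + d
    return grid
```

The appeal for a memory-constrained vehicle is that maps are mostly uniform: evidence concentrates at obstacle boundaries, so after thresholding only a small fraction of coefficients must actually be stored.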
3

Triangulation Based Fusion of Sonar Data with Application in Mobile Robot Mapping and Localization

Wijk, Olle January 2001 (has links)
No description available.
4

Building an Efficient Occupancy Grid Map Based on Lidar Data Fusion for Autonomous Driving Applications

Salem, Marwan January 2019 (has links)
The Localization and Map Building module is a core building block in the design of an autonomous vehicle. It describes the vehicle's ability to create an accurate model of its surroundings while maintaining its position in the environment. In this thesis work, we contribute to the autonomous driving research area by providing a proof of concept for integrating SLAM solutions into commercial vehicles, improving the robustness of the Localization and Map Building module. The proposed system applies Bayesian inference within the occupancy grid mapping framework and uses a Rao-Blackwellized Particle Filter (RBPF) to estimate the vehicle trajectory. The work was done at Scania CV, where a heavy-duty vehicle equipped with a multiple-Lidar sensor architecture was used. Low level sensor fusion of the different Lidars was performed, and a parallelized implementation of the algorithm was achieved on a GPU. When tested on datasets frequently used in the community, the implemented algorithm outperformed the scan-matching technique and showed acceptable performance in comparison with another state-of-the-art RBPF implementation that adds some improvements to the algorithm. The performance of the complete system was evaluated on a designed set of real scenarios. The proposed system showed a significant improvement in the estimated trajectory and provided accurate occupancy representations of the vehicle's surroundings. The fusion module was found to build more informative occupancy grids than the grids obtained from the individual sensors.
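The Rao-Blackwellized Particle Filter mentioned here factorizes the SLAM posterior: each particle carries a pose hypothesis plus its own occupancy grid, so once a pose is sampled the map update is exact. A minimal skeleton of that loop is sketched below; the motion and measurement functions are hypothetical placeholders, not the thesis's Scania implementation.

```python
import numpy as np

class Particle:
    def __init__(self, pose, grid):
        self.pose = pose        # (x, y, heading)
        self.grid = grid        # per-particle occupancy grid (log-odds)
        self.weight = 1.0

def rbpf_step(particles, odometry, scan,
              sample_motion, scan_likelihood, update_grid):
    """One RBPF-SLAM iteration: sample poses, weight each by the
    scan's likelihood against that particle's own map, update the
    maps, then resample when particle weights degenerate."""
    for p in particles:
        # 1. Propagate the pose hypothesis through a noisy motion model.
        p.pose = sample_motion(p.pose, odometry)
        # 2. Weight: how well does the scan match this particle's map?
        p.weight *= scan_likelihood(scan, p.pose, p.grid)
        # 3. Conditioned on the sampled pose, the grid update is exact.
        update_grid(p.grid, p.pose, scan)

    # 4. Normalize and resample when the effective sample size drops.
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    n_eff = 1.0 / np.sum(w ** 2)
    if n_eff < len(particles) / 2:
        idx = np.random.choice(len(particles), len(particles), p=w)
        particles = [Particle(particles[i].pose,
                              particles[i].grid.copy()) for i in idx]
    return particles
```

The per-particle grid is what makes the memory and compute costs heavy, and it is also what a GPU parallelization can exploit: every particle's weighting and map update is independent of the others.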
5

Environment Perception for Autonomous Driving: A 1/10 Scale Implementation of Low Level Sensor Fusion Using Occupancy Grid Mapping

Rawat, Pallav January 2019 (has links)
Autonomous driving has recently gained a lot of recognition and provides challenging research with the aim of making transportation safer, more convenient, and more efficient. This emerging technology also has widespread applications and implications beyond all current expectations in other fields of robotics. Environment perception is one of the big challenges for autonomous robots. Although many methods have been developed around single-sensor approaches, different sensor types have different operational characteristics and failure modes, so they complement each other. Different sensors provide different sets of data, which makes it difficult to combine the information into a unified picture. The proposed solution consists of low level sensor fusion of LIDAR and stereo camera data within an occupancy grid framework. Bayesian inference is used, and a real time system has been implemented on a 1/10 scale robot vehicle. The result of the thesis shows that it is possible to use a 2D LIDAR and a stereo camera to build a map of the environment. The implementation focuses on practical issues such as the blind spots of the individual sensors. Overall, the fused occupancy grid gives better results than the occupancy grids from the individual sensors. Sensor confidence is higher for the camera, since the mapping frequency of a 2D LIDAR is low.
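The fusion step this abstract describes is commonly done cell by cell in log-odds space, where evidence from independent sensors simply adds. A minimal sketch follows; the confidence weights and example values are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def fuse_grids(lidar_logodds, camera_logodds,
               w_lidar=0.4, w_camera=0.6, prior=0.0):
    """Fuse two log-odds occupancy grids cell by cell.

    Under an independence assumption, evidence from separate sensors
    adds in log-odds space; the per-sensor confidence weights
    (hypothetical values here) let a more trusted sensor dominate."""
    return prior + w_lidar * (lidar_logodds - prior) \
                 + w_camera * (camera_logodds - prior)

def to_probability(logodds):
    """Map fused log-odds back to occupancy probabilities in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-logodds))

# Example: a cell in the LIDAR's blind spot (log-odds 0 = unknown)
# that the camera marks as occupied still ends up likely occupied.
lidar = np.array([[0.0]])
camera = np.array([[2.0]])
print(to_probability(fuse_grids(lidar, camera)))  # ~0.77
```

This is also where the abstract's blind-spot observation shows up naturally: a sensor that cannot see a cell contributes no evidence for it, so the fused cell is decided by whichever sensor does cover it.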
6

High Fidelity Localization and Map Building from an Instrumented Probe Vehicle

Thornton, Douglas Anthony 24 May 2017 (has links)
No description available.
7

Exploitation of map data for the perception of intelligent vehicles / Exploitation des données cartographiques pour la perception de véhicules intelligents

Kurdej, Marek 05 February 2015 (has links)
This thesis is situated in the domains of robotics and data fusion, and concerns geographic information systems. We study the utility of adding digital maps, which model the urban environment in which the vehicle evolves, as a virtual sensor that improves perception results. Indeed, maps contain a phenomenal quantity of information about the environment: its geometry, its topology, and additional contextual information. In this work, we extract road surface geometry and building models in order to deduce the context and the characteristics of each detected object. Our method is based on an extension of occupancy grids: evidential perception grids. It makes it possible to model explicitly the uncertainty related to the map and sensor data, and it has the further advantage of representing homogeneously the data originating from various sources: lidar, camera, or maps. The maps are handled on equal terms with the physical sensors. This approach allows us to add geographic information without attributing undue importance to it, which is essential in the presence of errors.
In our approach, the information fusion result, stored in a perception grid, is used to predict the state of the environment at the next instant. Estimating the characteristics of dynamic elements no longer satisfies the static-world hypothesis, so it is necessary to adjust the level of certainty attributed to these pieces of information. We do so by applying temporal discounting. Because existing methods are not well suited to this application, we propose a family of discounting operators that take into account the type of information handled. The studied algorithms have been validated through tests on real data: we developed prototypes in Matlab and C++ software based on the Pacpus framework, and we present the results of experiments performed in real conditions.
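The temporal discounting mentioned above has a standard form in evidence theory: mass committed to focal elements decays toward ignorance (the full frame Ω) as the information ages. Below is a minimal sketch for one grid cell's mass function over {free, occupied, Ω}; the decay rate is a made-up illustration, not one of the thesis's proposed operators.

```python
def discount(mass, alpha=0.9):
    """Classical evidential discounting of one cell's mass function.

    mass: dict over {'free', 'occupied', 'omega'} summing to 1, where
    'omega' is the ignorance mass. Each step keeps a fraction alpha of
    the committed evidence and moves the rest onto omega, so stale
    cells drift back toward 'unknown'."""
    discounted = {k: alpha * v for k, v in mass.items() if k != 'omega'}
    discounted['omega'] = 1.0 - sum(discounted.values())
    return discounted

# A cell confidently marked occupied fades toward ignorance over time.
cell = {'free': 0.1, 'occupied': 0.8, 'omega': 0.1}
for _ in range(3):
    cell = discount(cell)
print(cell)  # occupied mass decays, omega grows
```

The thesis's contribution is precisely that a single alpha is too blunt: map-derived facts (a building footprint) and sensor-derived facts (a detected pedestrian) should age at different rates, hence a family of type-aware discounting operators.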
