1

Collision Avoidance for Automated Vehicles Using Occupancy Grid Map and Belief Theory

Soltani, Reza 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This thesis discusses occupancy grid maps, collision avoidance systems and belief theory, and proposes several recent and effective methods, including a predictive occupancy grid map, a risk evaluation model, and the role of the occupancy grid map within belief function theory. The approach addresses decision uncertainty by combining the perception of the environment with a degree of belief in the acceptability of a driving command. Finally, the thesis shows how the proposed models mitigate or prevent collisions.
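As a rough illustration of how a risk evaluation model can be driven by a predictive occupancy grid (a generic sketch, not the model proposed in this thesis), a planned trajectory can be scored by treating each traversed cell's predicted occupancy probability as an independent collision chance:

```python
# Illustrative sketch (not the thesis' model): scoring the collision risk of a
# planned trajectory from a predicted occupancy grid, by treating each
# traversed cell's occupancy probability as an independent collision chance.
import numpy as np

def trajectory_risk(predicted_grid, cells_on_path):
    """predicted_grid[i, j] = predicted occupancy probability; cells_on_path = [(i, j), ...]."""
    p_free = np.prod([1.0 - predicted_grid[i, j] for i, j in cells_on_path])
    return 1.0 - p_free            # probability of hitting at least one occupied cell

grid = np.zeros((10, 10))
grid[4, 5] = 0.3                   # a cell another vehicle is predicted to occupy
print(trajectory_risk(grid, [(4, i) for i in range(10)]))   # -> 0.3
```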
2

Occupancy grid mapping using stereo vision

Burger, Alwyn Johannes 03 1900 (has links)
Thesis (MEng)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: This thesis investigates the use of stereo vision sensors for dense autonomous mapping. It characterises and analyses the errors made during the stereo matching process so measurements can be correctly integrated into a 3D grid-based map. Maps are required for navigation and obstacle avoidance on autonomous vehicles in complex, unknown environments. The safety of the vehicle as well as the public depends on an accurate mapping of the environment of the vehicle, which can be problematic when inaccurate sensors such as stereo vision are used. Stereo vision sensors are relatively cheap and convenient, however, and a system that can create reliable maps using them would be beneficial. A literature review suggests that occupancy grid mapping poses an appropriate solution, offering dense maps that can be extended with additional measurements incrementally. It forms a grid representation of the environment by dividing it into cells, and assigns a probability to each cell of being occupied. These probabilities are updated with measurements using a sensor model that relates measurements to occupancy probabilities. Numerous forms of these sensor models exist, but none of them appear to be based on meaningful assumptions and sound statistical principles. Furthermore, they all seem to be limited by an assumption of unimodal, zero-mean Gaussian measurement noise. Therefore, we derive a principled inverse sensor model (PRISM) based on physically meaningful assumptions. This model is capable of approximating any realistic measurement error distribution using a Gaussian mixture model (GMM). Training a GMM requires a characterisation of the measurement errors, which are related to the environment as well as which stereo matching technique is used. Therefore, a method for fitting a GMM to the error distribution of a sensor using measurements and ground truth is presented. Since we may consider the derived principled inverse sensor model to be theoretically correct under its assumptions, we use it to evaluate the approximations made by other models from the literature that are designed for execution speed. We show that at close range these models generally offer good approximations that worsen with an increase in measurement distance. We test our model by creating maps using synthetic and real world data. Comparing its results to those of sensor models from the literature suggests that our model calculates occupancy probabilities reliably. Since our model captures the limited measurement range of stereo vision, we conclude that more accurate sensors are required for mapping at greater distances.
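The GMM fitting step described above can be sketched as follows. This is an illustrative reconstruction, assuming paired measured and ground-truth depths and using scikit-learn's GaussianMixture; the number of components and the synthetic noise model are arbitrary choices rather than values from the thesis:

```python
# Sketch: fitting a Gaussian mixture model (GMM) to stereo depth errors,
# assuming paired arrays of measured and ground-truth depths are available.
# The number of components (3) is an illustrative choice, not from the thesis.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_error_gmm(measured_depth, true_depth, n_components=3, seed=0):
    """Fit a 1-D GMM to the depth measurement errors (measured - true)."""
    errors = (np.asarray(measured_depth) - np.asarray(true_depth)).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(errors)
    return gmm  # weights_, means_, covariances_ characterise the error distribution

# Example with synthetic data: a dominant near-zero mode plus an outlier mode.
rng = np.random.default_rng(0)
true_d = rng.uniform(2.0, 20.0, size=5000)
noise = np.where(rng.random(5000) < 0.9,
                 rng.normal(0.0, 0.05 * true_d),   # disparity-like noise grows with depth
                 rng.normal(1.0, 1.0, size=5000))  # gross stereo mismatches
gmm = fit_error_gmm(true_d + noise, true_d)
print(gmm.weights_, gmm.means_.ravel())
```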
3

Integer Occupancy Grids : a probabilistic multi-sensor fusion framework for embedded perception / Grille d'occupation entière : une méthode probabiliste de fusion multi-capteurs pour la perception embarquée

Rakotovao Andriamahefa, Tiana 21 February 2017 (has links)
Perception is a primary task for an autonomous car where safety is of utmost importance. A perception system builds a model of the driving environment by fusing measurements from multiple perceptual sensors including LIDARs, radars, vision sensors, etc. Fusion based on occupancy grids builds a probabilistic environment model that takes sensor uncertainties into account. This thesis aims to integrate the computation of occupancy grids into embedded low-cost and low-power platforms. Occupancy grids, however, involve intensive probability calculations that can hardly be processed in real time on embedded hardware. As a solution, this thesis introduces the Integer Occupancy Grid framework. Integer Occupancy Grids rely on a proven mathematical foundation that enables probabilistic fusion to be processed through simple additions of integers. The hardware/software integration of Integer Occupancy Grids is safe and reliable. The numerical errors involved are bounded and parametrized by the user. Integer Occupancy Grids enable real-time computation of multi-sensor fusion on embedded low-cost and low-power processing platforms dedicated to automotive applications.
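The key idea of fusing through integer additions can be sketched with the standard log-odds view of occupancy fusion: quantize the log-odds axis with a fixed step so that Bayesian fusion of independent sources reduces to adding small integers. The thesis bounds the resulting error formally; the step value below is only an illustrative parameter, not the framework's actual construction.

```python
# Minimal sketch of fusing occupancy probabilities by adding integers,
# assuming a fixed quantisation step on the log-odds axis. The step (0.1)
# is illustrative; the thesis derives and bounds the error formally.
import math

STEP = 0.1  # log-odds quantisation step (user-chosen error parameter)

def prob_to_index(p):
    return round(math.log(p / (1.0 - p)) / STEP)

def index_to_prob(k):
    odds = math.exp(k * STEP)
    return odds / (1.0 + odds)

def fuse(indices):
    # Bayesian fusion of independent sensor opinions is a sum in log-odds,
    # hence a plain integer addition on the quantised indices.
    return sum(indices)

cell_readings = [0.7, 0.8, 0.4]              # occupancy probabilities from 3 sensors
fused_index = fuse(prob_to_index(p) for p in cell_readings)
print(index_to_prob(fused_index))            # fused occupancy probability
```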
4

Stereo Vision-based Autonomous Vehicle Navigation

Meira, Guilherme Tebaldi 26 April 2016 (has links)
Research efforts on the development of autonomous vehicles date back to the 1920s and recent announcements indicate that those cars are close to becoming commercially available. However, the most successful prototypes that are currently being demonstrated rely on an expensive set of sensors. This study investigates the use of an affordable vision system as a planner for the Robocart, an autonomous golf cart prototype developed by the Wireless Innovation Laboratory at WPI. The proposed approach relies on a stereo vision system composed of a pair of Raspberry Pi computers, each one equipped with a Camera Module. They are connected to a server and their clocks are synchronized using the Precision Time Protocol (PTP). The server uses timestamps to obtain a pair of simultaneously captured images. Images are processed to generate a disparity map using stereo matching and points in this map are reprojected to the 3D world as a point cloud. Then, an occupancy grid is built and used as input for an A* graph search that finds a collision-free path for the robot. Due to the non-holonomic constraints of a car-like robot, a Pure Pursuit algorithm is used as the control method to guide the robot along the computed path. The cameras are also used by a Visual Odometry algorithm that tracks points on a sequence of images to estimate the position and orientation of the vehicle. The algorithms were implemented using the C++ language and the open source library OpenCV. Tests in a controlled environment show promising results and the interfaces between the server and the Robocart have been defined, so that the proposed method can be used on the golf cart as soon as the mechanical systems are fully functional.
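For reference, the Pure Pursuit control step named above can be sketched as follows, assuming a kinematic bicycle model; the wheelbase and lookahead values are illustrative, not the Robocart's actual parameters:

```python
# Hedged sketch of the Pure Pursuit steering law, assuming a kinematic
# bicycle model; wheelbase and lookahead are illustrative values.
import math

def pure_pursuit_steering(pose, path, lookahead=2.0, wheelbase=1.7):
    """pose = (x, y, yaw); path = list of (x, y) waypoints in the world frame."""
    x, y, yaw = pose
    # pick the first waypoint at least `lookahead` metres from the vehicle
    target = next(((px, py) for px, py in path
                   if math.hypot(px - x, py - y) >= lookahead), path[-1])
    # angle to the target point expressed in the vehicle frame
    alpha = math.atan2(target[1] - y, target[0] - x) - yaw
    # standard pure-pursuit curvature converted to a front-wheel steering angle
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

print(pure_pursuit_steering((0.0, 0.0, 0.0), [(1.0, 0.2), (3.0, 1.0), (6.0, 2.5)]))
```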
5

Data Driven Selective Sensing for 3D Image Acquisition

Curtis, Phillip 26 November 2013 (has links)
It is well established that acquiring large amounts of range data with vision sensors can quickly lead to important data management challenges where processing capabilities become saturated and pre-empt full usage of the information available for autonomous systems to make educated decisions. While sub-sampling offers a naïve solution for reducing dataset dimension after acquisition, it does not capitalize on the knowledge available in already acquired data to selectively and dynamically drive the acquisition process over the most significant regions in a scene, the latter being generally characterized by variations in depth and surface shape in the context of 3D imaging. This thesis discusses the development of two formal improvement measures, the first based upon surface meshes and Ordinary Kriging that focuses on improving scene accuracy, and the second based upon probabilistic occupancy grids that focuses on improving scene coverage. Furthermore, three selection processes to automatically choose which locations within the field of view of a range sensor to acquire next are proposed based upon the two formal improvement measures. The first two selection processes each use only one of the proposed improvement measures. The third selection process combines both improvement measures in order to counterbalance the parameters of the accuracy of knowledge about the scene and the coverage of the scene. The proposed algorithms mainly target applications using random access range sensors, defined as sensors that can acquire depth measurements at a specified location within their field of view. Additionally, the algorithms are applicable to the case of estimating the improvement and point selection from within a single point of view, with the purpose of guiding the random access sensor to locations it can acquire. However, the framework is developed to be independent of the range sensing technology used, and is validated with range data of several scenes acquired from many different sensors employing various sensing technologies and configurations. Furthermore, the experimental results of the proposed selection processes are compared against those produced by a random sampling process, as well as a neural gas selective sensing algorithm.
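A coverage-oriented improvement measure of the kind described can be sketched as follows; this is a simplified stand-in (binary entropy of occupancy probabilities summed over a sliding window), not the thesis' exact formulation, and the window size is arbitrary:

```python
# Sketch of a coverage-style improvement measure on a probabilistic occupancy
# grid: cells near p = 0.5 are the least known, so their binary entropy is
# high and they are good candidates for the next measurement.
import numpy as np

def cell_entropy(p):
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def best_window(grid, win=5):
    """Return the top-left index of the win x win window with the highest
    total entropy, i.e. where a new measurement would improve coverage most."""
    h = cell_entropy(grid)
    best, best_score = (0, 0), -1.0
    for i in range(grid.shape[0] - win + 1):
        for j in range(grid.shape[1] - win + 1):
            score = h[i:i + win, j:j + win].sum()
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

grid = np.full((20, 20), 0.5)          # unknown everywhere
grid[:10, :10] = 0.05                  # already observed as free
print(best_window(grid))               # points into the still-unknown region
```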
6

Contribution to evidential models for perception grids : application to intelligent vehicle navigation / Contribution aux modèles évidentiels pour les grilles de perception : application à la navigation des véhicules intelligents

Yu, Chunlei 15 September 2016 (has links)
For intelligent vehicle applications, a perception system is a key component for characterizing, in real time, a model of the driving environment surrounding the vehicle. When modeling the environment, obstacle information is the first feature that has to be managed, since collisions can be fatal for other road users or for the passengers on board the considered vehicle. Characterization of the occupied space is therefore crucial but not sufficient for autonomous vehicles, since the control system needs to find the navigable space for safe trajectory planning.
Indeed, in order to run on public roads with other users, the vehicle needs to follow the traffic rules which are, for instance, described by markings painted on the carriageway. In this work, we focus on an ego-centered grid-based approach to model the environment. The objective is to include, in a unified world model, obstacle information together with semantic road rules. To model obstacle information, occupancy is handled by interpreting the information of different sensors into the values of the cells. To model the semantics of the navigable space, we propose to introduce the notion of lane grids, which consist of integrating semantic lane information into the cells of the grid. The combination of these two levels of information gives a refined environment model. When interpreting sensor data into obstacle information, uncertainty inevitably arises from ignorance and errors. Ignorance is due to the perception of new areas, and errors come from noisy measurements and imprecise pose estimation. In this research, the belief function theory is adopted to deal with uncertainties, and we propose evidential models for different kinds of sensors such as lidars and cameras. Lane grids contain semantic lane information coming, for instance, from lane markings. To this end, we propose to use a prior map which contains detailed road information including road orientation and lane markings. This information is extracted from the map by using a pose estimate provided by a localization system. In the proposed model, we integrate lane information into the grids by taking into account the uncertainty of the estimated pose. The proposed algorithms have been implemented and tested on real data acquired on public roads. We have developed algorithms in Matlab and C++ using the PACPUS software framework developed at the laboratory.
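The evidential fusion underlying such grids can be illustrated with Dempster's rule on the two-element frame {Free, Occupied}, where each source also assigns mass to the ignorance set; the mass values below are purely illustrative, not the evidential sensor models proposed in the thesis:

```python
# Minimal sketch of Dempster's rule on the frame {Free, Occupied} used by
# evidential occupancy grids: each source assigns mass to Free (F),
# Occupied (O) and the ignorance set F u O (Omega). Values are illustrative.
def dempster_combine(m1, m2):
    """m = dict with keys 'F', 'O', 'Omega'; masses sum to 1."""
    conflict = m1['F'] * m2['O'] + m1['O'] * m2['F']
    k = 1.0 - conflict                      # normalisation factor
    return {
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['Omega'] + m1['Omega'] * m2['F']) / k,
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['Omega'] + m1['Omega'] * m2['O']) / k,
        'Omega': (m1['Omega'] * m2['Omega']) / k,
    }

lidar_cell = {'F': 0.0, 'O': 0.6, 'Omega': 0.4}    # lidar hit: some belief in Occupied
camera_cell = {'F': 0.3, 'O': 0.0, 'Omega': 0.7}   # camera sees road surface: belief in Free
print(dempster_combine(lidar_cell, camera_cell))   # fused masses, conflict redistributed
```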
7

Mapeamento com Sonar Usando Grade de Ocupação Baseado em Modelagem Probabilística / Sonar Mapping Using an Occupancy Grid Based on Probabilistic Modeling

Souza, Anderson Abner de Santana 15 February 2008 (has links)
In this work, we propose a probabilistic mapping method with the mapped environment represented through a modified occupancy grid. The main idea of the proposed method is to allow a mobile robot to construct, in a systematic and incremental way, the geometry of its surrounding space, obtaining at the end a complete environment map. As a consequence, the robot can move in the environment safely, based on a confidence value for the data obtained from its perceptive system. The map is represented coherently with the sensory data, whether noisy or not, coming from the exteroceptive and proprioceptive sensors of the robot. The characteristic noise incorporated in the data from these sensors is treated by probabilistic modeling, so that its effects are visible in the final result of the mapping process. The results of the experiments performed indicate the viability of the methodology and its applicability in the area of autonomous mobile robotics, thus constituting a contribution to the field.
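The probabilistic treatment of sonar noise in grid mapping typically reduces to the standard log-odds cell update sketched below; the inverse sensor model probabilities are assumed values for illustration, not the calibrated ones from this work:

```python
# Sketch of the standard log-odds occupancy update that probabilistic grid
# mapping with sonar typically relies on; the sensor-model probabilities
# below are illustrative assumptions.
import math

def logodds(p):
    return math.log(p / (1.0 - p))

P_HIT, P_MISS, P_PRIOR = 0.7, 0.35, 0.5   # assumed inverse sensor model values

def update_cell(l_cell, cell_range, measured_range, beam_resolution=0.1):
    """Update one cell on the sonar beam axis given the measured range."""
    if cell_range < measured_range - beam_resolution:
        p = P_MISS                         # beam passed through: probably free
    elif abs(cell_range - measured_range) <= beam_resolution:
        p = P_HIT                          # at the echo: probably occupied
    else:
        return l_cell                      # behind the echo: no information
    return l_cell + logodds(p) - logodds(P_PRIOR)

l = 0.0                                    # log-odds of the prior (p = 0.5)
for z in (2.0, 2.1, 2.0):                  # three sonar returns near 2 m
    l = update_cell(l, cell_range=2.0, measured_range=z)
print(1.0 - 1.0 / (1.0 + math.exp(l)))     # posterior occupancy probability
```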
8

Data Driven Selective Sensing for 3D Image Acquisition

Curtis, Phillip January 2013 (has links)
It is well established that acquiring large amounts of range data with vision sensors can quickly lead to important data management challenges where processing capabilities become saturated and pre-empt full usage of the information available for autonomous systems to make educated decisions. While sub-sampling offers a naïve solution for reducing dataset dimension after acquisition, it does not capitalize on the knowledge available in already acquired data to selectively and dynamically drive the acquisition process over the most significant regions in a scene, the latter being generally characterized by variations in depth and surface shape in the context of 3D imaging. This thesis discusses the development of two formal improvement measures, the first based upon surface meshes and Ordinary Kriging that focuses on improving scene accuracy, and the second based upon probabilistic occupancy grids that focuses on improving scene coverage. Furthermore, three selection processes to automatically choose which locations within the field of view of a range sensor to acquire next are proposed based upon the two formal improvement measures. The first two selection processes each use only one of the proposed improvement measures. The third selection process combines both improvement measures in order to counterbalance the parameters of the accuracy of knowledge about the scene and the coverage of the scene. The proposed algorithms mainly target applications using random access range sensors, defined as sensors that can acquire depth measurements at a specified location within their field of view. Additionally, the algorithms are applicable to the case of estimating the improvement and point selection from within a single point of view, with the purpose of guiding the random access sensor to locations it can acquire. However, the framework is developed to be independent of the range sensing technology used, and is validated with range data of several scenes acquired from many different sensors employing various sensing technologies and configurations. Furthermore, the experimental results of the proposed selection processes are compared against those produced by a random sampling process, as well as a neural gas selective sensing algorithm.
9

Grid-Based Multi-Sensor Fusion for On-Road Obstacle Detection: Application to Autonomous Driving / Rutnätsbaserad multisensorfusion för detektering av hinder på vägen: tillämpning på självkörande bilar

Gálvez del Postigo Fernández, Carlos January 2015 (has links)
Self-driving cars have recently become a challenging research topic, with the aim of making transportation safer and more efficient. Current advanced driving assistance systems (ADAS) allow cars to drive autonomously by following lane markings, identifying road signs and detecting pedestrians and other vehicles. In this thesis work we improve the robustness of autonomous cars by designing an on-road obstacle detection system. The proposed solution consists of the low-level fusion of radar and lidar through the occupancy grid framework. Two inference theories are implemented and evaluated: Bayesian probability theory and the Dempster-Shafer theory of evidence. Obstacle detection is performed through image processing of the occupancy grid. Finally, the additional features of Dempster-Shafer theory are leveraged by proposing a sensor performance estimation module and performing advanced conflict management. The work has been carried out at Volvo Car Corporation, where real experiments on a test vehicle have been performed under different environmental conditions and with different types of objects. The system has been evaluated according to the quality of the resulting occupancy grids, the detection rate, and the information content in terms of entropy. The results show a significant improvement of the detection rate over single-sensor approaches. Furthermore, the Dempster-Shafer implementation may slightly outperform the Bayesian one when there is conflicting information, although its high computational cost limits its practical application. Finally, we demonstrate that the proposed solution is easily scalable to include additional sensors.
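The obstacle detection step, performed through image processing of the occupancy grid, can be sketched as a threshold followed by connected-component labeling; the threshold and minimum blob size here are illustrative assumptions, not the values used in the thesis:

```python
# Hedged sketch of extracting obstacles from an occupancy grid by simple
# image processing: threshold the grid and label connected components.
import numpy as np
from scipy import ndimage

def detect_obstacles(grid, occ_threshold=0.7, min_cells=3):
    """Return bounding slices of connected occupied regions in the grid."""
    occupied = grid > occ_threshold
    labels, n = ndimage.label(occupied)
    boxes = ndimage.find_objects(labels)
    return [b for b in boxes
            if np.count_nonzero(labels[b]) >= min_cells]

grid = np.full((50, 50), 0.5)
grid[10:14, 20:23] = 0.9          # a car-sized blob of occupied cells
grid[40, 5] = 0.95                # an isolated (likely spurious) cell
print(detect_obstacles(grid))     # only the blob survives the size filter
```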
10

Obstacle Detection and Avoidance for an Automated Guided Vehicle / Detektion av hinder och hur de kan undvikas för ett autonomt guidat fordon

Berlin, Filip, Granath, Sebastian January 2021 (has links)
The need for faster and more reliable logistics solutions is rapidly increasing, driven by higher demands on logistics services to improve quality, quantity and speed while reducing error tolerance. An emerging answer to these demands is automation in warehouses, i.e., automated material handling. In order to provide a satisfactory solution, the vehicles need to be smart and able to resolve unexpected situations without human interaction. The purpose of this thesis was to investigate whether obstacle detection and avoidance in a semi-unknown environment could be achieved based on the data from a 2D LIDAR scanner. The work was done in cooperation with the development of a new load-handling vehicle at Toyota Material Handling. The vehicle navigates from a map that is created when it is introduced to the environment in which it will operate. Therefore, it cannot successfully navigate around new obstacles that are not represented in the map, something that often occurs in a material handling warehouse. The work in this thesis resulted in the implementation of a modified occupancy grid mapping algorithm that can create maps of previously unknown environments if the position and orientation of the AGV are known. The generated occupancy grid map could then be used in a lattice planner together with the A* planning algorithm to find the shortest path. The performance was tested in different scenarios at a testing facility at Toyota Material Handling. The results showed that the occupancy grid provided an accurate description of the environment and that the lattice planning provided the shortest path, given constraints on movement and the allowed closeness to obstacles. However, some performance enhancements could still be introduced to the system, as discussed further at the end of the report. The main conclusions of the project are that the proposed solution met the requirements placed upon the application, but could benefit from more efficient usage of the mapping algorithm combined with more extensive path planning.
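The A* search over the occupancy grid can be sketched as below on a plain 4-connected grid; the real lattice planner additionally respects the vehicle's motion constraints and its allowed closeness to obstacles, which this minimal version omits:

```python
# Compact sketch of A* over an occupancy grid (4-connected, unit step cost).
# Grid values: True = blocked cell, False = free cell.
import heapq

def astar(blocked, start, goal):
    rows, cols = len(blocked), len(blocked[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), start)]
    g_cost, parent, closed = {start: 0}, {start: None}, set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                                        # rebuild the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not blocked[nxt[0]][nxt[1]]:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float('inf')):
                    g_cost[nxt], parent[nxt] = ng, cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None                                                # no collision-free path

blocked = [[False] * 6 for _ in range(5)]
for r in range(4):
    blocked[r][3] = True                                       # a wall with one gap
print(astar(blocked, (0, 0), (0, 5)))                          # path routed through the gap
```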
