11

A Novel Fusion Technique for 2D LIDAR and Stereo Camera Data Using Fuzzy Logic for Improved Depth Perception

Saksena, Harsh 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Obstacle detection, avoidance and path finding for autonomous vehicles require precise information about the vehicle's environment for faultless navigation and decision making. As such, vision and depth perception sensors have become an integral part of autonomous vehicles in current research and development. The advancements made in vision sensors such as radars, Light Detection And Ranging (LIDAR) sensors and compact high-resolution cameras are encouraging; however, individual sensors can be prone to error and misinformation due to environmental factors such as scene illumination, object reflectivity and object transparency. Sensor fusion, the combination of multiple sensors perceiving similar or related information over a network, is applied to provide more robust and complete system information and to minimize the overall perceived error of the system. 3D LIDAR and the monocular camera are the most commonly utilized vision sensors for sensor fusion. 3D LIDARs offer high accuracy and resolution for depth capture in any given environment and have a broad range of applications such as terrain mapping and 3D reconstruction. Despite 3D LIDAR being the superior sensor for depth, its high cost and sensitivity to the environment make it a poor choice for mid-range applications such as autonomous rovers, RC cars and robots. 2D LIDARs are more affordable, more easily available and have a wider range of applications than 3D LIDARs, making them the more obvious choice for budget projects. The primary objective of this thesis is to implement a smart and robust sensor fusion system that uses a 2D LIDAR and a stereo depth camera to capture depth and color information of an environment. The depth points generated by the LIDAR are fused with the depth map generated by the stereo camera by a fuzzy system that implements smart fusion and corrects gaps in the depth information of the stereo camera. The use of a fuzzy system for sensor fusion of a 2D LIDAR and a stereo camera is a novel approach to the sensor fusion problem, and the output of the fuzzy fusion provides higher depth confidence than either sensor provides individually. This thesis explores the multiple layers of sensor and data fusion applied to the vision system, both to the camera and LIDAR data individually and in relation to each other. It details the development and implementation of the fuzzy-logic-based fusion approach, the fuzzification of the input data, the selection of the fuzzy system for depth-specific fusion for the given vision system, and how fuzzy logic can be utilized to provide information that is vastly more reliable than the information provided by the camera and LIDAR separately.
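As a rough, hypothetical illustration of depth-level fusion of this kind (the membership functions, rule weights and distance ranges below are assumptions, not the thesis's fuzzy system):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b with support [a, c]."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def fuse_depth(stereo_depth, lidar_depth, lidar_valid):
    """Fuse one image row of stereo depth with projected 2D-LIDAR depth.

    stereo_depth : (N,) metres, NaN where stereo matching failed
    lidar_depth  : (N,) metres, LIDAR returns projected onto the same row
    lidar_valid  : (N,) bool, True where a LIDAR return exists
    """
    # Fuzzy memberships: trust in each sensor and in their mutual agreement.
    trust_stereo = tri(np.nan_to_num(stereo_depth, nan=1e6), 0.3, 2.0, 8.0)
    trust_lidar = lidar_valid * tri(lidar_depth, 0.1, 3.0, 12.0)
    agreement = tri(np.nan_to_num(np.abs(stereo_depth - lidar_depth), nan=1e6),
                    -0.5, 0.0, 0.5)

    # Defuzzify as a confidence-weighted average of the two readings.
    w_s = trust_stereo * (0.5 + 0.5 * agreement)
    w_l = trust_lidar
    fused = (w_s * np.nan_to_num(stereo_depth) + w_l * lidar_depth) \
            / np.maximum(w_s + w_l, 1e-9)

    # Fall back to whichever sensor is available when the other has no reading.
    fused = np.where(np.isnan(stereo_depth), lidar_depth, fused)
    fused = np.where(~lidar_valid & ~np.isnan(stereo_depth), stereo_depth, fused)
    return fused
```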
12

ORB-SLAM PERFORMANCE FOR INDOOR ENVIRONMENT USING JACKAL MOBILE ROBOT

Tianshu Ruan (8632812) 16 April 2020 (has links)
This thesis explains how Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM), one of the best visual SLAM solutions, works indoors and evaluates its performance for three different cameras: a monocular camera, a stereo camera and an RGB-D camera. Three experiments are designed to find the limitations of the algorithm. From the experiments, RGB-D SLAM gives the most accurate results for the indoor environment. Monocular SLAM performs better than stereo SLAM on our platform due to limited computation power; it is expected that stereo SLAM would provide better results if the computational power of the experimental platform were increased. The ORB-SLAM results demonstrate the applicability of the approach for autonomous navigation and future autonomous cars.
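The abstract does not state the evaluation metric, but visual-SLAM accuracy is commonly reported as absolute trajectory error (ATE); a minimal sketch, assuming time-aligned estimated and ground-truth positions:

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE) after rigid alignment (Kabsch/Horn).

    est, gt : (N, 3) arrays of time-aligned estimated and ground-truth positions.
    """
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)            # 3x3 cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # rotation mapping est onto gt
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```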
13

Road Surface Modeling using Stereo Vision / Modellering av Vägyta med hjälp av Stereokamera

Lorentzon, Mattis, Andersson, Tobias January 2012 (has links)
Modern-day cars are often equipped with a variety of sensors that collect information about the car and its surroundings. The stereo camera is an example of a sensor that, in addition to regular images, also provides distances to points in its environment. This information can, for example, be used for detecting approaching obstacles and warning the driver if a collision is imminent, or even automatically braking the vehicle. Objects that constitute a potential danger are usually located on the road in front of the vehicle, which makes the road surface a suitable reference level from which to measure object heights. This Master's thesis describes how an estimate of the road surface can be found in order to make these height measurements. The thesis describes how the large amount of data generated by the stereo camera can be scaled down to a more effective representation in the form of an elevation map. The report discusses a method for relating data from different instances in time using information from the vehicle's motion sensors and shows how this method can be used for temporal filtering of the elevation map. For estimating the road surface, two different methods are compared: one that uses a RANSAC approach to iterate towards a good surface model fit, and one that uses conditional random fields to model the probability that different parts of the elevation map belong to the road. A way to detect curb lines and use them to improve the road surface estimate is shown. Both methods for road classification show good results, with a few differences that are discussed towards the end of the report. An example of how the road surface estimate can be used to detect obstacles is also included.
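As a generic illustration of the RANSAC-style variant (not the thesis's implementation; the inlier threshold and iteration count are placeholders), a ground-plane fit to stereo 3D points could look like:

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, inlier_thresh=0.05, seed=None):
    """Fit a plane n.x + d = 0 to (N, 3) points with RANSAC.

    Returns (normal, d, inlier_mask); the threshold uses the same unit as points.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        dist = np.abs(points @ n + d)         # point-to-plane distances
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    if best_model is None:
        raise ValueError("no valid plane hypothesis found")
    return best_model[0], best_model[1], best_inliers
```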
14

Novel 3D Back Reconstruction using Stereo Digital Cameras

Kumar, Anish Unknown Date
No description available.
15

Systém pro autonomní mapování závodní dráhy / System for autonomous racetrack mapping

Soboňa, Tomáš January 2021 (has links)
The focus of this thesis is to theoretically design, describe, implement and verify the functionality of the selected concept for race track mapping. The theoretical part of the thesis describes the ORB-SLAM2 algorithm for vehicle localization. It then further describes the format of the map - an occupancy grid - and the method of its creation. Such a map should be in a suitable format for use by other trajectory planning systems. Several cameras, as well as computer units, are described in this part, and based on parameters and tests, the most suitable ones are selected. The thesis also proposes the architecture of the mapping system; it describes the individual units that make up the system, what is exchanged between the units, and in what format the system output is sent. The individual parts of the system are first tested separately and subsequently the system is tested as a whole. Finally, the achieved results are evaluated, as well as the possibilities for further expansion.
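A toy sketch of the occupancy-grid map format the abstract refers to (cell size, log-odds increments and the simple ray traversal are assumptions, not parameters from the thesis):

```python
import numpy as np

class OccupancyGrid:
    """Minimal 2D occupancy grid updated with log-odds from range hits."""

    def __init__(self, size_m=100.0, resolution=0.1):
        self.res = resolution
        n = int(size_m / resolution)
        self.log_odds = np.zeros((n, n))                   # 0 = unknown
        self.origin = np.array([size_m / 2, size_m / 2])   # world (0,0) at centre

    def world_to_cell(self, xy):
        return tuple(((np.asarray(xy) + self.origin) / self.res).astype(int))

    def update(self, robot_xy, hit_xy, l_occ=0.85, l_free=-0.4):
        """Mark the endpoint as occupied and cells along the ray as free."""
        r, c = self.world_to_cell(robot_xy)
        hr, hc = self.world_to_cell(hit_xy)
        n = max(abs(hr - r), abs(hc - c), 1)
        for i in range(n):                                 # coarse ray traversal
            rr = r + (hr - r) * i // n
            cc = c + (hc - c) * i // n
            self.log_odds[rr, cc] += l_free
        self.log_odds[hr, hc] += l_occ

    def probability(self):
        """Convert log-odds back to occupancy probabilities in [0, 1]."""
        return 1.0 / (1.0 + np.exp(-self.log_odds))
```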
16

Fusion of Stationary Monocular and Stereo Camera Technologies for Traffic Parameters Estimation

Ali, Syed Musharaf 07 March 2017 (has links)
A modern-day intelligent transportation system (ITS) relies on reliable and accurate estimates of traffic parameters. Travel speed, traffic flow, and traffic state classification are the main traffic parameters of interest. These parameters can be estimated through efficient vision-based algorithms and appropriate camera sensor technology. With the advances in camera technologies and increasing computing power, the use of monocular vision, stereo vision, and camera sensor fusion technologies has been an active research area in the field of ITS. In this thesis, we investigated stationary monocular and stereo camera technology for traffic parameter estimation. Stationary camera sensors provide large spatial-temporal information about a road section at relatively low installation cost. Two novel scientific contributions for vehicle detection and recognition are proposed. The first is the use of stationary stereo camera technology, and the second is the fusion of monocular and stereo camera technologies. A vision-based ITS consists of several hardware and software components. The overall performance of such a system depends not only on these single modules but also on their interaction. Therefore, a systematic approach considering all essential modules was chosen instead of focusing on one element of the complete system chain. This leads to detailed investigations of several core algorithms, e.g. background subtraction, histogram-based fingerprints, and data fusion methods. From experimental results on standard datasets, we concluded that the proposed fusion-based approach, combining monocular and stereo camera technologies, performs better than either technology alone for vehicle detection and vehicle recognition. Moreover, this research work has the potential to provide a low-cost vision-based solution for online traffic monitoring systems in urban and rural environments.
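A minimal sketch of the background-subtraction stage mentioned among the core algorithms, using OpenCV's MOG2 model on a stationary-camera video (a generic illustration, not the thesis's tuned pipeline; the minimum blob area is a placeholder):

```python
import cv2

def detect_moving_vehicles(video_path, min_area=500):
    """Yield per-frame bounding boxes of moving blobs from a stationary camera."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                            detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)
        # Shadows are marked as 127 by MOG2; keep only strong foreground (255).
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
        mask = cv2.morphologyEx(
            mask, cv2.MORPH_OPEN,
            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        yield [cv2.boundingRect(c) for c in contours
               if cv2.contourArea(c) >= min_area]
    cap.release()
```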
17

Evaluation and Analysis of Perception Systems for Autonomous Driving

Sharma, Devendra January 2020 (has links)
For safe mobility, an autonomous vehicle must perceive its surroundings accurately. There are many perception tasks associated with understanding the local environment, such as object detection, localization, and lane analysis. Object detection, in particular, plays a vital role in determining an object's location and classifying it correctly, and is one of the challenging tasks in the self-driving research area. Before employing an object detection module in autonomous vehicle testing, an organization needs a precise analysis of the module. Hence, it becomes crucial for a company to have an evaluation framework to evaluate an object detection algorithm's performance. This thesis develops a comprehensive framework for evaluating and analyzing object detection algorithms, both 2D (camera image-based) and 3D (LiDAR point cloud-based). The pipeline developed in this thesis provides the ability to evaluate multiple models with ease, quantified by the key performance metrics Average Precision, F-score, and Mean Average Precision. The 40-point interpolation method is used to calculate the Average Precision.
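For reference, 40-point interpolated Average Precision (used, for example, by the KITTI benchmark) can be computed from a precision-recall curve roughly as follows; the detection-matching step that produces the curve is not shown, and the recall levels assumed here exclude zero:

```python
import numpy as np

def average_precision_40pt(precision, recall):
    """40-point interpolated AP from precision/recall arrays.

    precision, recall : values obtained by sweeping the detection score
    threshold (recall is non-decreasing along the sweep).
    """
    precision = np.asarray(precision, dtype=float)
    recall = np.asarray(recall, dtype=float)
    ap = 0.0
    for r in np.linspace(1.0 / 40.0, 1.0, 40):           # recall levels 1/40 .. 1
        mask = recall >= r
        p = precision[mask].max() if mask.any() else 0.0  # interpolated precision
        ap += p / 40.0
    return ap
```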
18

Fusão de informações obtidas a partir de múltiplas imagens visando à navegação autônoma de veículos inteligentes em ambiente agrícola / Fusion of information obtained from multiple images for autonomous navigation of intelligent vehicles in an agricultural environment

Utino, Vítor Manha 08 April 2015 (has links)
This work presents a support system for the autonomous navigation of ground vehicles, with a focus on structured environments in an agricultural scenario. Obstacle position estimates are generated based on the fusion of detections obtained by processing data from two cameras, one stereo and one thermal. Three obstacle detection modules have been developed. The first module uses monocular images from the stereo camera to detect novelties in the environment by comparing the current state with the previous state. The second module uses the Stixel technique to delimit obstacles above the ground plane. Finally, the third module uses thermal images to find signatures that reveal the presence of an obstacle. The detection modules are fused using Dempster-Shafer theory, which provides an estimate of the presence of obstacles in the environment. The experiments were executed in a real agricultural environment. System validation was performed in well-lit scenarios, with uneven terrain and varied obstacles. The system showed satisfactory performance considering the use of an approach based on only three detection modules, with methods that do not prioritize obstacle confirmation but rather the search for new obstacles. This dissertation presents the main components of an obstacle detection system and the necessary steps for its design, as well as results of experiments with the use of a real vehicle.
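A small sketch of Dempster's rule of combination for two detectors over the frame {obstacle, free}; the mass values in the example are illustrative, not those produced by the thesis's modules:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'obstacle', 'free'}.

    Each mass dict maps 'obstacle', 'free' and 'unknown' (the whole frame)
    to non-negative values summing to 1.
    """
    # Conflict: one source supports obstacle while the other supports free.
    k = m1['obstacle'] * m2['free'] + m1['free'] * m2['obstacle']
    if k >= 1.0:
        raise ValueError('total conflict, sources cannot be combined')
    norm = 1.0 - k
    combined = {
        'obstacle': (m1['obstacle'] * m2['obstacle']
                     + m1['obstacle'] * m2['unknown']
                     + m1['unknown'] * m2['obstacle']) / norm,
        'free': (m1['free'] * m2['free']
                 + m1['free'] * m2['unknown']
                 + m1['unknown'] * m2['free']) / norm,
    }
    combined['unknown'] = 1.0 - combined['obstacle'] - combined['free']
    return combined

# Example: the stereo-novelty detector weakly reports an obstacle, the thermal
# detector strongly reports one; the combined belief in an obstacle increases.
m_stereo = {'obstacle': 0.5, 'free': 0.1, 'unknown': 0.4}
m_thermal = {'obstacle': 0.7, 'free': 0.0, 'unknown': 0.3}
print(dempster_combine(m_stereo, m_thermal))
```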
20

Měření rychlosti vozidel pomocí stereo kamery / Vehicle Speed Measurement Using Stereo Camera Pair

Najman, Pavel January 2021 (has links)
This thesis seeks to answer the question of whether it is currently possible to autonomously measure vehicle speed with a stereoscopic measurement method with a mean error within 1 km/h, a maximum error within 3 km/h, and a standard deviation within 1 km/h. These error ranges are based on the requirements of the OIML, whose recommendations form the basis of the metrological legislation of many countries. To answer this question, a hypothesis is formulated and subsequently tested. A method that uses a stereo camera to measure vehicle speed is proposed and experimentally evaluated. The experimental results show that the proposed method outperforms existing methods. The mean measurement error is approximately 0.05 km/h, the standard deviation of the error is less than 0.20 km/h, and the maximum absolute error is less than 0.75 km/h. These results lie within the required ranges and thus confirm the tested hypothesis.
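A back-of-the-envelope sketch of how speed follows from stereo geometry (not the thesis's method): depth is recovered from disparity as Z = f·B/d, the tracked point is back-projected to 3D in each frame, and speed is the travelled distance over the elapsed time. The calibration parameters below are placeholders.

```python
import numpy as np

def speed_kmh(disparities_px, pixel_coords, times_s, f_px, baseline_m, cx, cy):
    """Estimate the speed of one tracked point from stereo observations.

    disparities_px : disparity of the tracked point in each frame (pixels)
    pixel_coords   : (N, 2) image coordinates (u, v) of the point per frame
    times_s        : frame timestamps in seconds
    f_px, baseline_m, cx, cy : stereo calibration (focal length in pixels,
                               baseline in metres, principal point)
    """
    d = np.asarray(disparities_px, dtype=float)
    uv = np.asarray(pixel_coords, dtype=float)
    z = f_px * baseline_m / d                          # depth from disparity
    x = (uv[:, 0] - cx) * z / f_px
    y = (uv[:, 1] - cy) * z / f_px
    pts = np.stack([x, y, z], axis=1)                  # 3D track in camera frame
    dist = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    return 3.6 * dist / (times_s[-1] - times_s[0])     # m/s -> km/h
```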
