11

Key Technologies in Low-cost Integrated Vehicle Navigation Systems

Zhao, Yueming January 2013
Vehicle navigation systems incorporate on-board sensors and signal receivers to provide the positioning and guidance information needed by land, marine, airborne and space vehicles. Among the various navigation solutions, the Global Positioning System (GPS) and the Inertial Navigation System (INS) are the two basic systems. Because the two are complementary in many respects, GPS/INS integrated navigation has been a hot research topic in recent decades; this thesis analyses the advantages and disadvantages of each individual system and of their combination. Micro-Electro-Mechanical Systems (MEMS) sensors have solved the price, size and weight problems of the traditional INS and hence are widely applied in GPS/INS integrated systems. The main problem with MEMS is large sensor errors, which degrade the navigation performance at an exponential rate. By means of different methods, such as the autoregressive model, the Gauss-Markov process, Power Spectral Density and Allan Variance, we analyse the stochastic errors within the MEMS sensors. The test results show that the different methods give similar estimates of the stochastic error sources, and an equivalent model of the coloured noise components (random walk, bias instability and ramp noise) is given.

Three levels of GPS/IMU integration structure, i.e. loose, tight and ultra-tight GPS/IMU navigation, are introduced with a brief analysis of the characteristics of each. The loose integration principles are presented with detailed equations, as are the INS navigation principles. The Extended Kalman Filter (EKF) is introduced as the data fusion algorithm, the core of the whole navigation system. Based on the system model, we show the propagation of position standard errors under the tight integration structure in different scenarios; even fewer than four observable GNSS satellites can contribute to the integrated system, especially to the orientation errors. A real test with loose integration is carried out, and the EKF performance is analysed in detail.

Since GPS receivers normally work together with a digital map, the map matching principle and its link-choosing problem are briefly introduced, and we propose to solve this problem by lane detection from real-time images. The procedures for image-based lane detection are presented. Tests on highways, city streets and pathways were carried out successfully, and analyses with possible solutions are given for some failure situations. To counter the large error drift of the IMU, we propose to support the IMU orientation with camera motion estimation from image pairs. The estimation theory and computer vision principles are briefly introduced, both point- and line-matching algorithms are given, and finally an L1-norm estimator with balanced adjustment is proposed to deal with possible mismatches (outliers). Tests and comparisons with the RANSAC algorithm are also presented.

For the latest trend of MEMS chip sensors, their industry and market are introduced. To evaluate the MEMS navigation performance, we augment the EKF with an equivalent coloured noise model, and a basic observability analysis is given. A realistic simulated navigation test is carried out with single and multiple MEMS sensors, and an array of 5-10 sensors is recommended based on the test results and analysis. Finally, some suggestions for future research are proposed.
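The Allan variance analysis mentioned in this abstract is the standard tool for separating MEMS stochastic error sources. The thesis code is not published; the following is a minimal sketch of the overlapping Allan variance computation, with the sampling rate, cluster times and the synthetic gyro signal chosen purely for illustration:

```python
import numpy as np

def allan_variance(rate, fs, taus):
    """Overlapping Allan variance of an inertial sensor output.

    rate : 1-D array of gyro/accelerometer samples (e.g. rad/s)
    fs   : sampling frequency in Hz
    taus : iterable of cluster (averaging) times in seconds
    """
    theta = np.cumsum(rate) / fs              # integrated signal (angle)
    n = theta.size
    out = []
    for tau in taus:
        m = int(tau * fs)                     # samples per cluster
        if m < 1 or 2 * m >= n:
            out.append(np.nan)
            continue
        # overlapping second difference of the integrated signal
        d = theta[2 * m:] - 2 * theta[m:n - m] + theta[:n - 2 * m]
        out.append(np.sum(d ** 2) / (2 * tau ** 2 * (n - 2 * m)))
    return np.asarray(out)

# Synthetic stand-in for MEMS gyro data: white noise plus a slow ramp.
fs = 100.0
t = np.arange(0, 600, 1 / fs)
gyro = 0.01 * np.random.randn(t.size) + 1e-5 * t
adev = np.sqrt(allan_variance(gyro, fs, np.logspace(-1, 2, 30)))
# On a log-log plot of adev vs tau, a -1/2 slope indicates angle random
# walk, a flat floor bias instability, and a +1/2 slope rate random walk.
```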
12

In Situ Detection of Road Lanes Using Raspberry Pi

Chahal, Ashwani 01 May 2018
A self-driving car is a vehicle that can drive without human intervention by making correct decisions based on environmental conditions. Since the technology is in its early stages, moving entirely beyond human involvement is still a long way off; however, rapid technological advances are being made toward the safety of the driver and passengers. One such safety feature is a lane detection system that enables a vehicle to detect road lane lines in various weather conditions. This research provides a feasible and economical solution for detecting road lane lines while driving in sunny, rainy, or snowy conditions. An algorithm is designed to perform real-time road lane line detection on a low-voltage computer that can easily be powered in a regular vehicle. The algorithm runs on a Raspberry Pi computer placed inside the car. A camera attached to the vehicle's windshield captures real-time images and passes them to the Raspberry Pi for processing. The algorithm processes each frame and determines the lane lines, and the detected lanes can be viewed on a 7-inch display connected to the Raspberry Pi. The entire system is mounted inside a Jeep Wrangler for the experiments and is powered by the vehicle's standard 12-15 V supply. The algorithm detects road lane lines with approximately 97% accuracy in all tested weather conditions.
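The abstract does not include the algorithm itself; a minimal per-frame pipeline in the same spirit (grayscale, Canny edges, region-of-interest mask, probabilistic Hough transform, all standard OpenCV calls) might look as follows, where the thresholds and the ROI polygon are tuning guesses rather than the thesis's values:

```python
import cv2
import numpy as np

def detect_lanes(frame):
    """Return the input frame with candidate lane-line segments drawn on it."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)          # thresholds are tuning guesses

    # Keep only a trapezoidal region of interest ahead of the vehicle.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    poly = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)),
                      (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, poly, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform for line segments.
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
    return frame

# On a Raspberry Pi, cv2.VideoCapture(0) would supply the camera frames,
# and each frame would be passed through detect_lanes() before display.
```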
13

Computer Vision Based Robust Lane Detection via Multiple Model Adaptive Estimation Technique

Iman Fakhari (11806169) 07 January 2022
The lane-keeping system, whether in an autonomous vehicle (AV) or as part of an advanced driver assistance system (ADAS), is one of the primary features of AVs and ADAS. Existing lane-keeping systems rely on either computer vision or deep learning algorithms for their lane detection stage. However, even the strongest image processing pipelines and the most robust deep learning algorithms suffer lane detection inaccuracies under certain conditions. The sources of these inaccuracies include rainy or foggy weather, high-contrast shadows cast by buildings and objects on the street, and faded lane markings. Since the lane detection unit of these systems drives the steering control, even a momentary loss of lane detection accuracy can result in an accident or failure. Many lane detection algorithms based on computer vision and deep learning have been presented in recent years, each with its own pros and cons: a model may perform well in some situations and fail in others, and deep learning-based methods, for example, are vulnerable to samples unlike their training data. In this research, multiple lane detection models are evaluated and combined to implement a robust lane detection algorithm. The purpose is to develop an estimator-based Multiple Model Adaptive Estimation (MMAE) algorithm for the lane-keeping system, improving the robustness of lane detection. The AirSim simulation environment was used to verify the performance of the implemented algorithm. The simulated test vehicle was equipped with a front camera and a rear camera. The front camera images are used to detect the lane and the offset between the vehicle and the center of the lane; the rear camera, which offered better lane detection performance, serves as the reference for estimating the uncertainty of each model. The simulation results showed that combining the two implemented models with MMAE performs robustly even in case studies where one of the models fails: the algorithm detects the failure of either model and switches to the properly working one. The proposed algorithm still has limitations; it could be improved by replacing the PID controller with an MPC controller in future studies. In addition, the presented algorithm uses two computer vision-based models; adding a deep learning-based model could further improve the MMAE, although such a network should be trained on AirSim output images, since differences in camera location, camera configuration, colors, and contrast would otherwise degrade its accuracy.
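The core of MMAE is a recursive re-weighting of candidate models by how well each one's output agrees with a reference. Below is a minimal sketch of that update, assuming Gaussian innovation statistics and using illustrative names and numbers; the thesis's actual likelihood model is not given in the abstract:

```python
import numpy as np

def mmae_update(probs, residuals, variances):
    """One MMAE step: re-weight each model by the Gaussian likelihood
    of its residual, then normalize.

    probs     : prior probability of each model
    residuals : each model's innovation this frame (e.g. disagreement
                between its lane estimate and the reference estimate)
    variances : innovation variance assumed for each model
    """
    probs = np.asarray(probs, dtype=float)
    r = np.asarray(residuals, dtype=float)
    s = np.asarray(variances, dtype=float)
    likelihood = np.exp(-0.5 * r ** 2 / s) / np.sqrt(2 * np.pi * s)
    posterior = probs * likelihood
    posterior = np.maximum(posterior, 1e-12)  # floor to avoid model lock-out
    return posterior / posterior.sum()

# Two lane-detection models; model 0 drifts this frame while model 1
# agrees with the reference, so probability mass shifts to model 1.
p = np.array([0.5, 0.5])
p = mmae_update(p, residuals=[1.8, 0.1], variances=[0.25, 0.25])
blended_offset = p @ np.array([1.8, 0.1])     # probability-weighted estimate
```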
14

Camera Based Deep Learning Algorithms with Transfer Learning in Object Perception

Hu, Yujie January 2021
The perception system is the key for autonomous vehicles to sense and understand the surrounding environment. As the cheapest and most mature sensor, the monocular camera creates a rich and accurate visual representation of the world. The objective of this thesis is to investigate whether camera-based deep learning models with transfer learning can achieve 2D object detection, License Plate Detection and Recognition (LPDR), and highway lane detection in real time. The You Only Look Once version 3 (YOLOv3) algorithm, with and without transfer learning, is applied to the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset for car, cyclist, and pedestrian detection; objects are detected in real time, and transfer learning boosts detection performance. The Convolutional Recurrent Neural Network (CRNN) algorithm with a pre-trained model is applied to multiple License Plate (LP) datasets for real-time LP recognition; the optimized model is then used to recognize Ontario LPs with high accuracy. The Efficient Residual Factorized ConvNet (ERFNet) algorithm with transfer learning and a cubic spline model are modified and implemented on the TuSimple dataset for lane segmentation and interpolation, with detection performance and speed comparable to other state-of-the-art algorithms. / Thesis / Master of Applied Science (MASc)
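For the lane interpolation step, TuSimple-style evaluation queries lane positions at fixed row anchors, which a cubic spline fitted over the segmented lane points provides. A minimal sketch with SciPy, using made-up sample points rather than the thesis's data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Lane pixels from a segmentation network as (row, col) pairs sampled
# at fixed row anchors, TuSimple-style; the values are illustrative.
rows = np.array([710, 650, 590, 530, 470, 410], dtype=float)
cols = np.array([630, 648, 670, 697, 730, 768], dtype=float)

# Fit col = f(row); CubicSpline requires strictly increasing abscissae.
order = np.argsort(rows)
spline = CubicSpline(rows[order], cols[order])

# Interpolate the lane at every 10-pixel row anchor for evaluation.
query_rows = np.arange(410, 720, 10)
lane_cols = spline(query_rows)
```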
15

Hardware Accelerated Particle Filter for Lane Detection and Tracking in OpenCL

Madduri, Nikhil January 2014
A road lane detection and tracking algorithm is developed, tailored in particular to run on high-performance heterogeneous hardware such as GPUs and FPGAs in autonomous road vehicles. The algorithm was initially developed in C/C++ and was ported to OpenCL, which supports computation on heterogeneous hardware. A novel road lane detection algorithm is proposed using random sampling of particles modeled as straight lines, with weights assigned to the particles based on their location in the gradient image. To improve the computational efficiency of lane detection, lane tracking is introduced in the form of a particle filter. Particle creation in the detection step and the prediction and measurement updates in the tracking step are computed in parallel on the GPU/FPGA using OpenCL, while the rest of the code runs on a host CPU. The software was tested on two GPUs (NVIDIA GeForce GTX 660 Ti and NVIDIA GeForce GTX 285) and an FPGA (Altera Stratix-V), which gave computational frame rates of up to 104 Hz, 79 Hz and 27 Hz, respectively. The code was tested on video streams from five datasets covering varying lighting conditions, strong shadows and light to moderate traffic, and was found to be robust in all these situations for detecting a single lane.
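As a rough CPU-side sketch of the particle weighting and resampling steps the thesis offloads to OpenCL, with line particles in Hesse normal form and all noise scales chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def score_particles(grad, rho, theta, n_samples=64):
    """Weight each line particle (rho, theta in Hesse normal form,
    x*cos(theta) + y*sin(theta) = rho) by the mean gradient magnitude
    sampled along the line inside the image."""
    h, w = grad.shape
    t = np.linspace(0.0, 1.0, n_samples)
    weights = np.zeros(rho.size)
    for i in range(rho.size):
        c, s = np.cos(theta[i]), np.sin(theta[i])
        x0, y0 = rho[i] * c, rho[i] * s       # foot of the normal
        u = -2000.0 + 4000.0 * t              # parameter along the line
        xs, ys = x0 + u * s, y0 - u * c
        inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
        if inside.any():
            weights[i] = grad[ys[inside].astype(int),
                              xs[inside].astype(int)].mean()
    total = weights.sum()
    return weights / total if total > 0 else np.full(rho.size, 1 / rho.size)

def predict_resample(rho, theta, weights):
    """Resample particles in proportion to their weights, then jitter
    them as the motion model for the next frame."""
    idx = rng.choice(rho.size, size=rho.size, p=weights)
    return (rho[idx] + rng.normal(0, 2.0, rho.size),
            theta[idx] + rng.normal(0, 0.01, rho.size))
```

These two steps are the data-parallel kernels: every particle is scored and moved independently, which is what makes the filter map well onto a GPU or FPGA.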
16

Lane Detection for DEXTER, an Autonomous Robot, in the Urban Challenge

McMichael, Scott Thomas 25 January 2008
No description available.
17

Lane Detection and Obstacle Avoidance in Mobile Robots

Rajasingh, Joshua January 2010
No description available.
18

A study on lane detection methods for autonomous driving

Cudrano, Paolo January 2019
Machine perception is a key element for the research on autonomous driving vehicles. In particular, we focus on the problem of lane detection with a single camera. Many lane detection systems have been developed and many algorithms have been published over the years. However, while they are already commercially available to deliver lane departure warnings, their reliability is still unsatisfactory for fully autonomous scenarios. In this work, we questioned the reasons for such limitations. After examining the state of the art and the relevant literature, we identified the key methodologies adopted. We present a self-standing discussion of bird’s eye view (BEV) warping and common image preprocessing techniques, followed by gradient-based and color-based feature extraction and selection. Line fitting algorithms are then described, including least squares methods, Hough transform and random sample consensus (RANSAC). Polynomial and spline models are considered. As a result, a general processing pipeline emerged. We further analyzed each key technique by implementing it and performing experiments using data we previously collected. At the end of our evaluation, we designed and developed an overall system, finally studying its behavior. This analysis allowed us on one hand to gain insight into the reasons holding back present systems, and on the other to propose future developments in those directions. / Thesis / Master of Science (MSc)
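Of the line-fitting techniques this survey covers, RANSAC is the one whose behavior is easiest to miss in prose: it trades a guaranteed fit for robustness to outlying edge pixels. A minimal sketch for a single lane line, with the iteration count and inlier tolerance as assumed tuning values:

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_line(points, n_iters=200, inlier_tol=3.0):
    """Robustly fit a 2-D line to candidate lane pixels (an (N, 2) array)
    by random sample consensus; returns (centroid, direction, inlier_mask)."""
    best_inliers = np.zeros(points.shape[0], dtype=bool)
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(points.shape[0], 2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        diff = points - p1
        dist = np.abs(diff[:, 0] * d[1] - diff[:, 1] * d[0])
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine with a total-least-squares fit on the consensus set.
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0], best_inliers
```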
19

Real-Time Traffic Lane Detection System for Intelligent Vehicles Using Image Processing (Sistema de detecção em tempo real de faixas de sinalização de trânsito para veículos inteligentes utilizando processamento de imagem)

Alves, Thiago Waszak January 2017
Mobility is a hallmark of our civilization. Both freight and passenger transport share a huge infrastructure of connecting links operated with the support of a sophisticated logistic system. An optimized symbiosis of mechanical and electrical modules, vehicles evolve continuously with the integration of technological advances and are engineered to offer the best in comfort, safety, speed and economy. Regulations organize the flow of road transport and its interactions, stipulating rules to avoid conflicts. But driving can become stressful under different conditions, leaving human drivers prone to misjudgments and creating accident conditions. Efforts to reduce traffic accidents range from re-education campaigns to new technologies. These topics have increasingly attracted the attention of researchers and industry toward image-based Intelligent Transportation Systems that aim to prevent accidents and help the driver interpret urban signage.

This work presents a study of techniques for real-time detection of traffic lane markings in urban and intercity environments, with the goal of highlighting the lane markings for the driver or for an autonomous vehicle, providing greater control of the traffic area allotted to the vehicle and alerts for possible risk situations. The main contribution of this work is to optimize how image processing techniques are applied to extract the lane markings, in order to reduce the computational cost of the system. To achieve this optimization, small search areas of fixed size and dynamic positioning were defined. These search areas isolate the regions of the image that contain the lane markings, reducing by up to 75% the total area to which the lane extraction techniques are applied. Experimental results show that the algorithm is robust under many variations of ambient light, shadows and pavements of different colors, in urban environments as well as on highways and motorways. The results show an average correct detection rate of 98.1%, with an average processing time of 13.3 ms.
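The fixed-size, dynamically positioned search areas are the heart of the claimed 75% reduction. A minimal sketch of how one such window might be recentred each frame, with the window dimensions and all names invented for illustration:

```python
import numpy as np

WIN_W, WIN_H = 120, 60   # fixed window size in pixels (an assumption)

def update_window(edge_img, cx, cy):
    """Recentre one search window on the strongest edge response inside
    it, so the window follows the lane marking from frame to frame."""
    h, w = edge_img.shape
    x0, x1 = max(0, cx - WIN_W // 2), min(w, cx + WIN_W // 2)
    y0, y1 = max(0, cy - WIN_H // 2), min(h, cy + WIN_H // 2)
    patch = edge_img[y0:y1, x0:x1]
    if patch.sum() == 0:
        return cx, cy                    # no evidence: keep the position
    cols = patch.sum(axis=0)             # column histogram of edge pixels
    new_cx = x0 + int(np.argmax(cols))
    return new_cx, cy

# Per frame: compute the edge image once, then run this cheap update for
# each lane window instead of applying lane extraction to the full image.
```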
