21

Real-time Vision-Based Lane Detection with 1D Haar Wavelet Transform on Raspberry Pi

Sudini, Vikas Reddy 01 May 2017 (has links)
Rapid progress is being made towards the realization of autonomous cars. Since the technology is in its early stages, human intervention is still necessary to ensure hazard-free operation of autonomous driving systems, and substantial research efforts are underway to enhance driver and passenger safety in autonomous cars. Toward that end, GreedyHaarSpiker, a real-time vision-based algorithm, is proposed for road lane detection in different weather conditions. The algorithm has been implemented in Python 2.7 with OpenCV 3.0 and tested on a Raspberry Pi 3 Model B (ARMv8, 1 GB RAM) coupled to a Raspberry Pi camera board v2. To test the algorithm's performance, the Raspberry Pi and the camera board were mounted inside a Jeep Wrangler. The algorithm performed best in sunny weather with no snow on the road; its performance deteriorated at night or when the road surface was covered with snow.
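The thesis names a 1D Haar wavelet transform as the core of its detector but the GreedyHaarSpiker implementation is not reproduced in this record. As a minimal sketch of the underlying transform only (assuming NumPy; the function name `haar1d` and the pixel-row example are illustrative, not taken from the thesis), a single-level 1D Haar decomposition of a row of grayscale intensities can be written as:

```python
import numpy as np

def haar1d(signal):
    """Single-level 1D Haar wavelet transform.

    Returns (approximation, detail) coefficients for a 1D signal such as
    one row of a grayscale road image.
    """
    s = np.asarray(signal, dtype=np.float64)
    if s.size % 2:                      # pad to even length if needed
        s = np.append(s, s[-1])
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2)  # low-pass: local averages
    detail = (even - odd) / np.sqrt(2)  # high-pass: local differences,
                                        # large where intensity jumps
    return approx, detail

# Example: strong detail coefficients mark sharp bright/dark transitions,
# which is how painted lane markings stand out against asphalt.
row = np.array([30, 32, 31, 200, 210, 205, 33, 30], dtype=float)
print(haar1d(row)[1])
```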
22

Sistema de detecção em tempo real de faixas de sinalização de trânsito para veículos inteligentes utilizando processamento de imagem / Real-time traffic lane marking detection system for intelligent vehicles using image processing

Alves, Thiago Waszak January 2017 (has links)
Mobility is an imprint of our civilization. Both freight and passenger transport share a huge infrastructure of connecting links operated with the support of a sophisticated logistic system. As an optimized symbiosis of mechanical and electrical modules, vehicles evolve continuously with the integration of technological advances and are engineered to offer the best in comfort, safety, speed and economy. Regulations organize the flow of road transportation and its interactions, stipulating rules to avoid conflicts. But driving can become stressful under different conditions, leaving human drivers prone to misjudgments and creating accident conditions. Efforts to reduce traffic accidents, which may cause injuries and even deaths, range from re-education campaigns to new technologies. These topics have increasingly attracted the attention of researchers and industry toward image-based Intelligent Transportation Systems that aim to prevent accidents and help the driver interpret urban signage.
This work presents a study of real-time techniques for detecting road lane markings in urban and intercity environments, with the objective of highlighting the lane markings for the driver or for an autonomous vehicle, providing greater awareness of the traffic area assigned to the vehicle and alerts of possible risk situations. The main contribution of this work is to optimize how image processing techniques are used to extract the lane markings, in order to reduce the computational cost of the system. To achieve this optimization, small search areas of fixed size and dynamic positioning were defined. These search areas isolate the regions of the image that contain the lane markings, reducing by up to 75% the total area to which the lane-extraction techniques are applied. Experimental results show that the algorithm is robust under varied ambient lighting, shadows and pavement colors, both in urban environments and on highways and motorways. The results show an average correct detection rate of 98.1%, with an average processing time of 13.3 ms.
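The search-area idea can be illustrated roughly as follows (this is not the author's code; the window size, the positioning rule, and the function names are assumptions). Each frame, a small fixed-size window is re-centred on the lane marking found in the previous frame, and only that window is processed:

```python
import numpy as np

WIN_W, WIN_H = 80, 60   # fixed search-window size in pixels (illustrative values)

def search_window(gray, center_x, bottom_y):
    """Crop a fixed-size window whose horizontal position follows the lane
    marking detected in the previous frame (dynamic positioning)."""
    h, w = gray.shape
    x0 = int(np.clip(center_x - WIN_W // 2, 0, w - WIN_W))
    y0 = int(np.clip(bottom_y - WIN_H, 0, h - WIN_H))
    return gray[y0:y0 + WIN_H, x0:x0 + WIN_W], (x0, y0)

def update_center(window, x_offset):
    """Re-estimate the marking position as the column with the strongest
    bright response inside the window."""
    column_strength = window.astype(np.float32).sum(axis=0)
    return x_offset + int(column_strength.argmax())

# Per frame, only the small window is processed instead of the full image.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[:, 300:310] = 255                       # synthetic bright lane marking
win, (x0, _) = search_window(frame, center_x=290, bottom_y=480)
print(update_center(win, x0))                 # ~300: the window tracks the marking
```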
24

Zařízení varovného systému pro udržení vozidla v jízdním pruhu / Warning system to keep the vehicle in the lane

Fendrich, Vítězslav January 2019 (has links)
This thesis addresses the design of a device that detects lane departure of a vehicle from the video feed of a camera module; the device is intended to be attached to the vehicle's windshield. The initial part of the thesis covers current methods of lane departure detection from a video feed. The following part describes the selection of suitable hardware, specifically the latest model of the Raspberry Pi, after which a suitable enclosure for this hardware was designed and produced on a 3D printer. Subsequently, an appropriate LDWS algorithm is chosen and designed. The next part defines the scope and parameters of a test database against which the proper functionality of the device is verified. The final part of the thesis evaluates the detection success rate on the acquired database.
25

A Novel Lightweight Lane Departure Warning System Based on Computer Vision for Improving Road Safety

Chen, Yue 14 May 2021 (has links)
With the rapid improvement of Advanced Driver Assistance Systems (ADAS), autonomous driving has become one of the hottest topics of recent years. Many technologies related to autonomous driving use sensors installed on the vehicle to collect information about the road status and the outside environment while driving, with the aim of warning the driver of potential danger as quickly as possible; this has become a focus of autonomous driving research. Although autonomous driving brings plenty of convenience, its safety still faces difficulties: even an experienced driver cannot guarantee full attention to the state of the road at all times. The lane departure warning system (LDWS) was developed for this reason. The purpose of an LDWS is to determine whether the vehicle is within the safe driving area; if the vehicle leaves this area, the LDWS detects it and alerts the driver, for example by sound or vibration, so that the driver returns to the safe driving area. This thesis proposes a novel lightweight LDWS model, LEHA, which divides the entire LDWS into three stages: image preprocessing, lane detection, and lane departure recognition. Unlike deep-learning-based LDWS methods, the LEHA model achieves high accuracy and efficiency while relying only on simple hardware. The image preprocessing stage processes the original road image to remove noise that is irrelevant to the detection result: a novel grayscale preprocessing algorithm converts the road image to grayscale, removing its color information; a binarization method then extracts the lane lines from the background; and a newly designed image smoothing step reduces most of the remaining noise, which improves the accuracy of the subsequent lane detection stage. After the processed image is obtained, the lane detection stage detects and marks the lane lines. A region of interest (ROI) removes the irrelevant parts of the road image to reduce detection time; the Canny edge detector then extracts the edges of the lane lines; finally, a novel Hough transform method detects the position of the lane and marks it. The lane departure recognition stage calculates the deviation between the vehicle and the centerline of the lane to determine whether the warning should be triggered. The last part of the thesis presents experimental results comparing different lane conditions and reports the accuracy of the proposed LDWS in terms of detection and departure recognition: the detection rate is 98.2%, the departure-recognition rate is 99.1%, and the average processing time is 20.01 × 10⁻³ s per image.
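The three-stage structure described above follows the classical pipeline of grayscale conversion, thresholding, smoothing, ROI masking, Canny edges and a Hough transform. Below is a minimal sketch of that general pipeline using OpenCV; it illustrates the technique, not the LEHA implementation, and all threshold values and the ROI polygon are assumptions:

```python
import cv2
import numpy as np

def detect_lane_lines(bgr_frame):
    """Classical lane-detection pipeline: grayscale, binarize, smooth,
    mask a region of interest, Canny edges, probabilistic Hough lines."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 160, 255, cv2.THRESH_BINARY)   # bright markings
    smooth = cv2.GaussianBlur(binary, (5, 5), 0)                    # suppress noise

    # Keep only the lower trapezoid of the image where the road is expected.
    h, w = smooth.shape
    roi_mask = np.zeros_like(smooth)
    roi = np.array([[(0, h), (w, h), (int(0.6 * w), int(0.6 * h)),
                     (int(0.4 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi_mask, roi, 255)
    masked = cv2.bitwise_and(smooth, roi_mask)

    edges = cv2.Canny(masked, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2)

# Example: run on one frame and draw the detected segments.
frame = cv2.imread("road.jpg")
if frame is not None:
    for x1, y1, x2, y2 in detect_lane_lines(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
```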
26

Lane Detection based on Contrast Analysis

Kumar, Surinder 09 June 2016 (has links)
Computer vision and image processing systems are ubiquitous in the automotive domain and in manufacturing, and lane detection warning systems have become an elementary part of the modern automotive industry. Due to recent progress in computer vision and image processing methods, economical and flexible use of computer vision is now pervasive, and computing with images is no longer limited to science but extends to the arts, the social sciences and even hobbyists. Image processing is a key technology in the automotive industry; hardly a single manufacturing process is thinkable without imaging. The application of image processing and computer vision methods on embedded platforms has been an ongoing research area for many years. OpenCV is an open-source computer vision library containing optimized algorithms and methods for designing and implementing applications based on video and image processing; these methods are organized into modules for specific fields, including graphical user interfaces, machine learning and feature extraction [43]. Vision-based automotive systems have become an important mechanism for lane detection and warning systems that alert a driver about the localization of the vehicle on the road [1]. In the automotive electronics market, vision-based approaches to the lane detection problem have been designed and developed using different hardware and software components, including wireless sensors, camera modules, Field-Programmable Gate Array (FPGA) based systems, GPUs and digital signal processors (DSPs) [13]. The software modules are built on top of real-time operating systems and hardware description languages such as Verilog or VHDL. One of the most time-critical tasks of vision-based systems is testing applications in real physical environments with a wide variety of driving scenarios and validating the whole system to automotive industry standards; for validating and testing advanced driver assistance systems, commercial tools are available, including Assist ADTF from the company Elektrobit (EB) [43]. In addition to the design and strict real-time requirements of advanced driver assistance applications on embedded platforms, the complexity and characteristics of the implemented algorithms are two parameters that need to be taken into consideration when choosing hardware and software components [13]. Developing vision-based automotive applications on a microcontroller alone is not a feasible approach [35] [13], and GPU-based solutions are attractive but have other issues, including power consumption. In this thesis project, the image and video processing modules of the OpenCV library are used for the road lane detection problem. In the proposed lane detection methods, low-level image processing algorithms extract information relevant to lane detection by applying contrast analysis to pixel-level intensity values. Furthermore, the work at hand presents different approaches for solving relevant partial problems in the domain of lane detection. The aim of the work is to apply contrast analysis based on low-level image processing methods to extract relevant lane-model information from the grid of pixel intensity values available in an image frame.
The approaches presented in this work are based on contrast analysis of a binary mask extracted from the image frame by applying a range threshold. Lane feature models based on sets of points in the image frame are used for detecting lanes in color frames captured from video. For performance measurement and evaluation, the proposed methods are tested on different system setups, including Linux, Microsoft Windows, Code::Blocks, Visual Studio 2012 and the Linux-based Raspbian Jessie operating system, running on Intel i3, AMD A8 APU, and embedded (Raspberry Pi 2 Model B) ARMv7 processors respectively.
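A rough sketch of the range-threshold step that produces such a binary mask, together with a simple pixel-level contrast measure, is given below (illustrative only; the threshold bounds and function name are assumptions, not values from the thesis):

```python
import cv2
import numpy as np

def lane_mask_by_range_threshold(bgr_frame, lo=180, hi=255):
    """Build a binary mask of high-intensity pixels (candidate lane markings)
    by range-thresholding the grayscale frame, then report a simple contrast
    measure between masked pixels and the background."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    mask = cv2.inRange(gray, lo, hi)                 # 255 inside [lo, hi], else 0

    marking = gray[mask > 0]
    background = gray[mask == 0]
    contrast = float(marking.mean() - background.mean()) if marking.size else 0.0
    return mask, contrast

# Example on a synthetic frame: a bright stripe on dark asphalt.
frame = np.full((240, 320, 3), 40, dtype=np.uint8)
frame[:, 150:160] = 220
mask, contrast = lane_mask_by_range_threshold(frame)
print(mask.max(), round(contrast, 1))   # 255 and a large positive contrast
```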
27

Comparative study on road and lane detection in mixed criticality embedded systems / Jämförande studie av olika väghållningsalgoritmer

FERHATOVIC, SANEL January 2017 (has links)
One of the main challenges for advanced driver assistance systems (ADAS) is the environment perception problem. One factor that makes ADAS hard to implement is the large number of different conditions that have to be taken care of; the main sources of condition diversity are lane and road appearance, image clarity issues and poor visibility conditions. A review of current lane detection algorithms has been carried out and, based on that, a lane detection algorithm has been developed and implemented on a mixed criticality platform. The thesis is part of a larger group project of five master's thesis students creating a demonstrator for autonomous platoon driving. The final lane detection algorithm consists of preprocessing steps where the image is converted to grayscale and everything except the region of interest (ROI) is cut away; OpenCV, a library for image processing, is used for edge detection and the Hough transform. An algorithm for error calculation is developed which compares the center and direction of the lane with the actual vehicle position and direction during real experiments. The lane detection system is implemented on a Raspberry Pi which communicates with a mixed criticality platform through UART. The demonstrator vehicle can achieve a measured speed of 3.5 m/s with reliable lane keeping using the developed algorithm. The bottleneck appears to be the lateral control of the vehicle rather than lane detection; further work should focus on vehicle control and possibly on extending the ROI to detect curves at an earlier stage.
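The error-calculation idea — comparing the lane's center and direction with the vehicle's position and heading — can be sketched roughly as follows. This is a simplified illustration, not the thesis code: it assumes the camera sits at the horizontal centre of the vehicle and that the two lane borders are available as image-space line segments.

```python
import numpy as np

def _bottom_x(x1, y1, x2, y2):
    """x coordinate of the segment endpoint closest to the vehicle
    (the one lower in the image, i.e. with the larger y)."""
    return x1 if y1 >= y2 else x2

def lane_errors(left_line, right_line, image_width):
    """Lateral offset (pixels) and heading error (radians) from two detected
    lane-border segments (x1, y1, x2, y2) in image coordinates, assuming the
    camera is centred on the vehicle and looks along the lane."""
    lane_center_x = (_bottom_x(*left_line) + _bottom_x(*right_line)) / 2.0
    lateral_error_px = lane_center_x - image_width / 2.0

    def angle(x1, y1, x2, y2):
        # Direction of the border measured from the top endpoint to the
        # bottom endpoint; a border pointing straight "ahead" gives pi/2.
        if y1 < y2:
            x1, y1, x2, y2 = x2, y2, x1, y1
        return np.arctan2(y1 - y2, x1 - x2)

    heading_error_rad = (angle(*left_line) + angle(*right_line)) / 2.0 - np.pi / 2.0
    return lateral_error_px, heading_error_rad

# Example: lane centre 20 px left of the image centre, symmetric borders,
# so the heading error is ~0.
print(lane_errors((100, 480, 180, 300), (500, 480, 420, 300), image_width=640))
```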
28

Studies of Spectral Distortion Under ATR Condition in Spectroelectrochemical Sensor Development of Laser Induced Fluorescence Detection System for Multilane Capillary Electrophoresis Microchips

Piruska, Aigars January 2006 (has links)
No description available.
29

Analysis of Robustness in Lane Detection using Machine Learning Models

Adams, William A. January 2015 (has links)
No description available.
30

Multi-viewpoint lane detection with applications in driver safety systems

Borkar, Amol 19 December 2011 (has links)
The objective of this dissertation is to develop a Multi-Camera Lane Departure Warning (MCLDW) system and a framework to evaluate it. A Lane Departure Warning (LDW) system is a safety feature included in a few luxury automobiles; using a single camera, it informs the driver if a lane change is imminent. The core component of an LDW system is a lane detector, whose objective is to find lane markers on the road. Therefore, we start this dissertation by explaining the requirements of an ideal lane detector, and then present several algorithmic implementations that meet these requirements. After selecting the best implementation, we present the MCLDW methodology. Using a multi-camera setup, the MCLDW system combines the detected lane marker information from each camera's view to estimate the immediate distance between the vehicle and the lane marker, and signals a warning if this distance is under a certain threshold. Next, we introduce a procedure to create ground truth and a database of videos which serve as the framework for evaluation. Ground truth is created using an efficient procedure called Time-Slicing that allows the user to quickly annotate the true locations of the lane markers in each frame of the videos. Subsequently, we describe the details of a database of driving videos that has been put together to help establish a benchmark for evaluating existing lane detectors and LDW systems. Finally, we conclude the dissertation by summarizing the contributions of the research and discussing avenues for future work.
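The warning rule described here — combine the per-camera distance estimates and warn when the nearest marker is closer than a threshold — could be sketched as below. All names and the threshold value are illustrative assumptions, not taken from the dissertation:

```python
def should_warn(distances_m, threshold_m=0.3):
    """Given lane-marker distances estimated from each camera view
    (in metres; None when a camera sees no marker), warn when the
    closest detected marker is nearer than the threshold."""
    detected = [d for d in distances_m if d is not None]
    if not detected:
        return False            # no marker seen by any camera: no decision
    return min(detected) < threshold_m

# Example: front camera sees the marker 0.8 m away, side camera 0.25 m away.
print(should_warn([0.8, 0.25, None]))   # True: departure warning
```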
