  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Detekce aut přijíždějících ke křižovatce / Detection of the Cars Approaching the Crossroad

Vácha, Lukáš January 2016 (has links)
Traffic monitoring using computer vision is increasingly sought after in practice: it can be installed non-invasively and is useful in many applications. This thesis focuses on the automatic detection of vehicles approaching a crossroads. It describes selected methods for detecting moving vehicles and for tracking them, and on the basis of these methods designs an application that is implemented and tested under different lighting and weather conditions and with vehicles approaching from various directions.
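The moving-vehicle detection methods the thesis surveys commonly start from background subtraction. The following is a minimal sketch of that idea, not the thesis's implementation: a running-average background model and a threshold on the per-pixel deviation. Frames are grayscale images given as lists of rows; the learning rate and threshold values are illustrative assumptions.

```python
# Minimal background-subtraction sketch for moving-vehicle detection.
# All parameter values (alpha, threshold) are illustrative assumptions.

def update_background(background, frame, alpha=0.05):
    """Running-average background model: bg <- (1-alpha)*bg + alpha*frame."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def foreground_mask(background, frame, threshold=30):
    """Pixels deviating from the background by more than the threshold
    are marked as foreground (candidate moving vehicles)."""
    return [[1 if abs(f - b) > threshold else 0
             for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

# Toy 3x3 example: a static scene, then a bright "vehicle" pixel appears.
bg = [[10.0] * 3 for _ in range(3)]
frame = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
mask = foreground_mask(bg, frame)   # foreground only at the changed pixel
bg = update_background(bg, frame)   # background slowly absorbs the change
```

Connected groups of foreground pixels would then be tracked across frames, which is the tracking stage the abstract refers to.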
32

Contribution au positionnement des véhicules communicants fondé sur les récepteurs GPS et les systèmes de vision / Contribution of communicant vehicles positionning using GPS receivers and vision systems

Challita, Georges 16 September 2009 (has links)
This thesis work was carried out in the STI team of the LITIS laboratory, in collaboration with the CAOR robotics centre of the École des Mines de Paris and INRIA Rocquencourt, whose LARA platform of instrumented vehicles was used for experiments. The goal is to contribute to the localization of intelligent vehicles equipped with GPS (Global Positioning System) receivers, vision systems, and communication hardware enabling cooperation between vehicles. In urban areas, GPS performance is strongly degraded: signal reception suffers from masking and poor satellite geometry, and the signal can be corrupted by multipath reflections off buildings, tunnels, and similar structures, so the robustness, accuracy, and availability of the position estimate can drop significantly. A complementary source of information is therefore needed to compensate for the weaknesses of the GPS receiver. The originality of this work lies in using the data provided by the on-board vision system. This system is based on a single camera (monovision) and performs robust detection of obstacles on the road as well as rain detection, the latter developed in collaboration with Valeo. The distance from an obstacle to the ego vehicle is computed using the pinhole camera model together with the flat-road assumption. Vehicles equipped with wireless communication based on the 802.11g+ standard cooperate by exchanging their GPS coordinates when available, making the positions of surrounding vehicles known; the communication link is also used for V2I or V2V weather alerts based on the rain detection. For reliable relative positioning, a tracking algorithm based on particle filtering is used, fusing the GPS and vision data probabilistically at the different stages of the filter. Finally, a real-time experimental validation on the LARA prototype vehicles was carried out on several scenarios.
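The distance computation mentioned above, pinhole model plus flat-road assumption, has a simple closed form: a road point imaged at pixel row v below the horizon row v0 lies at longitudinal distance Z = f·h / (v − v0), where f is the focal length in pixels and h the camera height above the road. A sketch, with all numeric values being illustrative assumptions rather than LARA calibration data:

```python
# Flat-road distance from a single camera (pinhole model sketch).
# f_px, cam_height_m and v0 are assumed example values.

def flat_road_distance(v, f_px=800.0, cam_height_m=1.5, v0=240.0):
    """Distance in metres to a road point imaged at row v (requires v > v0)."""
    if v <= v0:
        raise ValueError("point must be below the horizon row v0")
    return f_px * cam_height_m / (v - v0)

# The bottom edge of a detected vehicle's bounding box approximates its
# contact point with the road, hence its distance:
d = flat_road_distance(v=300.0)  # 800 * 1.5 / 60 = 20 m
```

Note how the distance grows without bound as v approaches the horizon row, which is why monocular range estimates far ahead are fragile and benefit from fusion with GPS and communicated positions.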
33

Dataset Evaluation Method for Vehicle Detection Using TensorFlow Object Detection API / Utvärderingsmetod för dataset inom fordonsigenkänning med användning av TensorFlow Object Detection API

Furundzic, Bojan, Mathisson, Fabian January 2021 (has links)
Recent developments in the field of object detection have highlighted significant variation in quality between visual datasets. As a result, there is a need for a standardized approach to validating visual dataset features and their contribution to performance. With a focus on vehicle detection, this thesis develops an evaluation method for comparing visual datasets and uses it to determine which dataset yields the detection model with the greatest ability to detect vehicles. The datasets compared are BDD100K, KITTI and Udacity, each used to train its own model. Applying the developed evaluation method gave a strong indication of BDD100K's superior performance. Further analysis and feature extraction covering dataset size, label distribution and average labels per image was conducted, and real-world experiments were performed to validate the evaluation method. All features and experimental results pointed to BDD100K's superiority over the other datasets, validating the method. Furthermore, the TensorFlow Object Detection API's ability to improve the performance gained from a visual dataset was studied: through the use of augmentations, it was concluded that the API serves as a useful tool for increasing the performance obtained from a visual dataset.
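Comparing models trained on different datasets, as done above, rests on a detection metric such as average precision (AP). A minimal sketch of how AP is computed from a ranked list of detections follows; the detections here are (confidence, is-true-positive) pairs and the toy data is an illustrative assumption, not a result from the thesis.

```python
# Average precision from ranked detections: area under the
# precision-recall curve, accumulated at each true-positive rank.

def average_precision(detections, num_ground_truth):
    """AP for one class; detections are (confidence, is_true_positive)."""
    detections = sorted(detections, key=lambda d: -d[0])
    tp = 0
    ap, prev_recall = 0.0, 0.0
    for i, (_, is_tp) in enumerate(detections, start=1):
        if is_tp:
            tp += 1
            precision = tp / i
            recall = tp / num_ground_truth
            ap += precision * (recall - prev_recall)
            prev_recall = recall
    return ap

# Three detections, two ground-truth vehicles: hits at ranks 1 and 3.
dets = [(0.9, True), (0.8, False), (0.7, True)]
ap = average_precision(dets, num_ground_truth=2)  # 0.5*1.0 + 0.5*(2/3)
```

Mean AP (mAP) then averages this value over the object classes, giving a single number per dataset-trained model that the evaluation method can rank.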
34

Deep Learning-Based Vehicle Recognition Schemes for Intelligent Transportation Systems

Ma, Xiren 02 June 2021 (has links)
With increasing security concerns in Intelligent Transportation Systems (ITS), Vision-based Automated Vehicle Recognition (VAVR) has recently attracted considerable attention. A comprehensive VAVR system contains three components: Vehicle Detection (VD), Vehicle Make and Model Recognition (VMMR), and Vehicle Re-identification (VReID), which perform coarse-to-fine recognition in three steps. VAVR systems can be widely used in suspicious-vehicle recognition, urban traffic monitoring, and automated driving. Vehicle recognition is complicated by the subtle visual differences between vehicle models, so building a VAVR system that recognizes vehicle information quickly and accurately has gained tremendous attention. In this work, taking advantage of emerging deep learning methods with their powerful feature-extraction and pattern-learning abilities, we propose several models for vehicle recognition. First, we propose a novel Recurrent Attention Unit (RAU) that extends the standard Convolutional Neural Network (CNN) architecture for VMMR. RAU learns to recognize the discriminative parts of a vehicle at multiple scales and builds up a connection with the prominent information in a recurrent way. The proposed ResNet101-RAU achieves excellent recognition accuracy of 93.81% on the Stanford Cars dataset and 97.84% on the CompCars dataset. Second, to construct efficient vehicle recognition models, we simplify the structure of RAU and propose a Lightweight Recurrent Attention Unit (LRAU). The LRAU extracts discriminative part features by generating attention masks that locate the keypoints of a vehicle (e.g., logo, headlights); each mask is generated from the feature maps received by the LRAU and the attention state produced by the preceding LRAU. By adding LRAUs to standard CNN architectures, we construct three efficient VMMR models, which achieve state-of-the-art results: 93.94% accuracy on the Stanford Cars dataset, 98.31% on the CompCars dataset, and 99.41% on the NTOU-MMR dataset. In addition, we construct a one-stage Vehicle Detection and Fine-grained Recognition (VDFG) model by combining the LRAU with a general object detection model; results show the VDFG model achieves excellent performance at real-time processing speed. Third, to address the VReID task, we design the Compact Attention Unit (CAU), which has a compact structure and relies on a single attention map to extract the discriminative local features of a vehicle. We add two CAUs to a truncated ResNet to construct a small but efficient VReID model, ResNetT-CAU, whose model size is 60% smaller than the original ResNet. Extensive experiments on the VeRi and VehicleID datasets indicate that ResNetT-CAU achieves the best re-identification results on both. In summary, experimental results on challenging VMMR and VReID benchmarks indicate that our models achieve the best VMMR and VReID performance while keeping a small model size and fast image processing speed.
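The attention gating at the heart of units like the LRAU can be illustrated in miniature: a mask in [0, 1] is produced from the current features and the preceding attention state, and the features are reweighted elementwise so that discriminative locations (logo, headlights) dominate. The real unit is a learned CNN module; the scalar arithmetic below on 1-D "feature maps" is only an illustrative assumption of the mechanism, not the thesis's architecture.

```python
import math

def attention_mask(features, prev_state, w_f=1.0, w_s=1.0):
    """Sigmoid of a weighted sum of current features and previous
    attention state; in the real unit these weights are learned."""
    return [1.0 / (1.0 + math.exp(-(w_f * f + w_s * s)))
            for f, s in zip(features, prev_state)]

def apply_attention(features, mask):
    """Elementwise reweighting: attended locations keep their activation."""
    return [f * m for f, m in zip(features, mask)]

feats = [0.2, 3.0, -1.0]   # strong activation at a "headlight" location
state = [0.0, 0.0, 0.0]    # no preceding attention state yet
mask = attention_mask(feats, state)
attended = apply_attention(feats, mask)
```

Chaining such units, with each mask conditioned on the previous state, is what makes the attention recurrent across the network's stages.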
35

Fusion of Stationary Monocular and Stereo Camera Technologies for Traffic Parameters Estimation

Ali, Syed Musharaf 07 March 2017 (has links)
Modern-day intelligent transportation systems (ITS) rely on reliably and accurately estimated traffic parameters. Travel speed, traffic flow, and traffic-state classification are the main parameters of interest; they can be estimated through efficient vision-based algorithms and appropriate camera sensor technology. With advances in camera technology and increasing computing power, the use of monocular vision, stereo vision, and camera sensor fusion has been an active research area in ITS. In this thesis, we investigate stationary monocular and stereo camera technology for traffic parameter estimation. Stationary camera sensors provide extensive spatial-temporal information about a road section at relatively low installation cost. Two novel scientific contributions for vehicle detection and recognition are proposed: the first is the use of stationary stereo camera technology, and the second is the fusion of monocular and stereo camera technologies. A vision-based ITS consists of several hardware and software components, and its overall performance depends not only on these individual modules but also on their interaction. Therefore, a systematic approach considering all essential modules was chosen instead of focusing on one element of the complete system chain. This leads to detailed investigations of several core algorithms, e.g. background subtraction, histogram-based fingerprints, and data fusion methods. From experimental results on standard datasets, we conclude that the proposed fusion-based approach, combining monocular and stereo camera technologies, performs better than either technology alone for vehicle detection and vehicle recognition. Moreover, this work has the potential to provide a low-cost vision-based solution for online traffic monitoring in urban and rural environments.
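One simple form such sensor fusion can take is decision-level fusion: a vehicle hypothesis is scored by both the monocular and the stereo pipeline, and the scores are combined with fixed weights. The sketch below illustrates only this generic idea; the weights, scores, and threshold are illustrative assumptions, and the thesis investigates proper data-fusion methods on real modules.

```python
# Weighted-sum decision fusion of two detector confidences (sketch).
# w_mono, w_stereo and threshold are assumed example values.

def fuse(mono_conf, stereo_conf, w_mono=0.4, w_stereo=0.6, threshold=0.5):
    """Returns (fused score, accepted-as-vehicle flag)."""
    score = w_mono * mono_conf + w_stereo * stereo_conf
    return score, score >= threshold

# The stereo cue can rescue a weak monocular detection:
score, accepted = fuse(mono_conf=0.3, stereo_conf=0.8)  # 0.12 + 0.48
```

Weighting the stereo channel more heavily reflects the intuition that its depth cue is the more reliable of the two; in practice such weights would be tuned or learned on validation data.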
36

Use of improved Deep Learning and DeepSORT for Vehicle estimation / Användning av förbättrad djupinlärning och DeepSORT för fordonsuppskattning

Zheng, Danna January 2022 (has links)
Intelligent Transportation Systems (ITS) have high application value in today's vehicle surveillance and in future applications such as automated driving. A crucial part of ITS is detecting and tracking vehicles in a real-time video stream with high accuracy and low GPU consumption. In this project, we select the one-stage deep learning detector YOLO version 4 (YOLOv4) to generate bounding boxes with vehicle class, location, and confidence, and the Simple Online and Realtime Tracking with a Deep Association Metric (DeepSORT) tracker to track vehicles from the detector's output. Furthermore, to make the detector more suitable for practical use, especially when a vehicle is small or occluded, we improve its structure by adding attention mechanisms and reducing parameters, so as to detect vehicles with relatively high accuracy and low GPU memory usage. With the baseline model, the YOLOv4 and DeepSORT pipeline achieves 82.4% mean average precision over three vehicle classes with 63.945 MB of parameters at 19.98 frames per second. After optimization, the improved model achieves 85.84% mean average precision over the three classes with 44.158 MB of parameters at 18.65 frames per second. Compared with the original YOLOv4, the improved detector increases mean average precision by 3.44 percentage points and reduces parameters by 30.94% while maintaining high detection speed, demonstrating the validity and practical applicability of the proposed improved YOLOv4 detector.
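The improvement figures quoted in the abstract follow directly from the reported numbers and can be checked with two lines of arithmetic:

```python
# Verifying the reported gains of the improved YOLOv4 over the baseline.

baseline_map, improved_map = 82.40, 85.84          # mean average precision, %
baseline_params, improved_params = 63.945, 44.158  # parameter size, MB

map_gain = improved_map - baseline_map             # 3.44 percentage points
param_reduction = (baseline_params - improved_params) / baseline_params * 100
# param_reduction is about 30.94 %
```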
37

Real Time Vehicle Detection for Intelligent Transportation Systems

Shurdhaj, Elda, Christián, Ulehla January 2023 (has links)
This thesis analyzes how object detectors perform under winter weather conditions, specifically in areas with varying degrees of snow cover. The investigation evaluates the effectiveness of commonly used object detection methods (YOLOv8, YOLOv5, and Faster R-CNN) in identifying vehicles in snowy environments. Additionally, the study explores how to label vehicle objects within a set of image frames so as to obtain high-quality annotations in terms of correctness, detail, and consistency: training data is the cornerstone of machine learning, and inaccurate or inconsistent annotations can mislead a model into learning incorrect patterns and features. Data augmentation techniques such as rotation, scaling, and color alteration are applied to improve robustness to such variations. The study aims to contribute to the field of deep learning by providing insights into the challenges of detecting vehicles in snowy conditions and offering suggestions for improving the accuracy and reliability of object detection systems. Furthermore, the investigation examines the real-time tracking and detection capabilities of edge devices applied to aerial images under these weather conditions. What drives this research is the need to address the research gap concerning vehicle detection using drones, especially in adverse weather: substantial datasets were scarce before Mokayed et al. published the Nordic Vehicle Dataset. Using unmanned aerial vehicles (UAVs) to capture real images in different settings and under various snow-cover conditions in the Nordic region expands the existing data, which had previously been restricted to non-snowy weather. In recent years, the use of drones to capture real-time data for optimizing intelligent transport systems has surged; their aerial perspective allows data to be collected efficiently over large areas for precise and timely monitoring of vehicular movement, an area that is imperative to address. Snowy weather can create an environment of limited visibility, significantly complicating data interpretation and object detection. The emphasis is on the real-time tracking and detection capabilities of edge devices, so this study integrates edge computing into drone systems to explore the speed and efficiency of data processing.
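The augmentation techniques named above (rotation, scaling, color alteration) are simple pixel-level transforms. A minimal sketch of two of them on a grayscale image represented as a list of rows; real pipelines use library transforms, and these operations are illustrative assumptions, not the thesis's augmentation code.

```python
# Two toy augmentations: geometric (horizontal flip) and photometric
# (brightness shift, clamped to the valid 0-255 intensity range).

def horizontal_flip(image):
    """Mirror each row left-to-right."""
    return [list(reversed(row)) for row in image]

def adjust_brightness(image, delta):
    """Shift all intensities by delta, clamped to 0-255."""
    return [[max(0, min(255, p + delta)) for p in row] for row in image]

img = [[10, 20, 30],
       [40, 50, 60]]
flipped = horizontal_flip(img)
brighter = adjust_brightness(img, 200)  # values near white clamp at 255
```

Bounding-box labels must of course be transformed consistently with the image (a flip mirrors the box coordinates too), which is part of why annotation consistency matters so much for augmented training data.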
38

Měření rychlosti vozidel pomocí stereo kamery / Vehicle Speed Measurement Using Stereo Camera Pair

Najman, Pavel January 2021 (has links)
This thesis seeks to answer the question of whether it is currently possible to measure vehicle speed autonomously using a stereoscopic measurement method with a mean error within 1 km/h, a maximum error within 3 km/h, and a standard deviation within 1 km/h. These error ranges are based on the requirements of the OIML, whose recommendations form the basis of the metrological legislation of many countries. To answer this question, a hypothesis is formulated and then tested. A method that uses a stereo camera pair for vehicle speed measurement is proposed and experimentally evaluated. The experimental results show that the proposed method outperforms existing methods: the mean measurement error is approximately 0.05 km/h, the standard deviation of the error is below 0.20 km/h, and the maximum absolute error is below 0.75 km/h. These results fall within the required ranges and thus confirm the tested hypothesis.
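The stereoscopic principle behind such a method can be sketched in a few lines: the disparity d between the left and right image of a point gives its depth Z = f·B / d (f: focal length in pixels, B: baseline in metres), and speed follows from the displacement of the vehicle's position between two timestamps. All numbers below are illustrative assumptions, not the thesis's calibration or results.

```python
# Depth from stereo disparity and speed from two depth measurements.
# f_px and baseline_m are assumed example calibration values.

def stereo_depth(disparity_px, f_px=1000.0, baseline_m=0.5):
    """Depth in metres of a point with the given stereo disparity."""
    return f_px * baseline_m / disparity_px

def speed_kmh(z1_m, z2_m, dt_s):
    """Speed from longitudinal positions z1, z2 taken dt seconds apart."""
    return abs(z2_m - z1_m) / dt_s * 3.6

z1 = stereo_depth(25.0)          # 1000 * 0.5 / 25 = 20 m
z2 = stereo_depth(20.0)          # 1000 * 0.5 / 20 = 25 m
v = speed_kmh(z1, z2, dt_s=0.36) # 5 m in 0.36 s -> 50 km/h
```

Because depth error grows with distance (disparity shrinks), the accuracy figures the thesis targets depend heavily on baseline, resolution, and sub-pixel disparity estimation.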
39

Měření rychlosti automobilů z dohledové kamery / Speed Measurement of Vehicles from Surveillance Camera

Jaklovský, Samuel January 2018 (has links)
This master's thesis focuses on the fully automatic calibration of a traffic surveillance camera used for speed measurement of passing vehicles. It describes the theoretical background and algorithms related to this problem, and based on them a comprehensive system for automatic calibration and speed measurement was designed and successfully implemented. The implemented system is optimized to process the smallest possible portion of the video input for automatic calibration: the calibration parameters are obtained after processing only two and a half minutes of input video. The accuracy of the system was evaluated on the BrnoCompSpeed dataset. The speed measurement error using the automatic calibration is 8.15 km/h; this error is mainly caused by inaccurate scale acquisition, and when the scale is replaced by a manually obtained one, the error drops to 2.45 km/h. The speed-measuring system itself has an error of only 1.62 km/h (evaluated using manual calibration parameters).
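The abstract's attribution of most of the error to the scale has a direct explanation: measured speed is pixel displacement per unit time multiplied by the scale (metres per pixel), so a relative error in the scale produces the same relative error in the speed. A sketch with illustrative assumed numbers:

```python
# Speed from tracked pixel displacement and a calibration scale.
# The scale values below are assumed examples, not BrnoCompSpeed data.

def measured_speed_kmh(pixel_disp, dt_s, scale_m_per_px):
    """Speed implied by a pixel displacement over dt seconds."""
    return pixel_disp / dt_s * scale_m_per_px * 3.6

true_scale = 0.05    # m/px, ground-truth scale
auto_scale = 0.055   # 10% overestimate from automatic calibration

v_true = measured_speed_kmh(100, 0.36, true_scale)  # 50 km/h
v_auto = measured_speed_kmh(100, 0.36, auto_scale)  # 55 km/h, 10% error
```

This linear coupling is why swapping in a manually obtained scale, with everything else unchanged, cuts the reported error from 8.15 km/h to 2.45 km/h.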
40

Systém pro asistenci při nepřehledných dopravních situacích / Traffic assistant system for complicated situations

Podola, David January 2019 (has links)
T-intersections are among the most common places where collisions happen, and an intelligent traffic mirror is one possible way to reduce the accident rate. The mirror observes the situation around the intersection, processes the data, and informs the driver whether the situation is safe and the junction can be entered. The aim of this thesis is a feasibility study of reliable camera-based detection of non-stationary objects. The core of the intended product, the detection algorithm, detected objects reliably at short distances from the camera, but detection quality degraded as the distance grew. One possible way to achieve better detection at longer distances is to use a camera with greater zoom. Based on this example improvement proposal, the feasibility of a solution based on optical methods was confirmed.
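The proposed fix, a camera with greater zoom, can be motivated with the pinhole model: an object of width W at distance Z spans roughly w = f·W / Z pixels, so doubling the focal length doubles the object's pixel footprint and pushes reliable detection out to greater distances. The values below are illustrative assumptions.

```python
# Pixel footprint of an object under the pinhole model (sketch).

def pixel_width(f_px, object_width_m, distance_m):
    """Approximate image width in pixels of an object seen head-on."""
    return f_px * object_width_m / distance_m

car_w = 1.8  # metres, an assumed typical car width
near = pixel_width(800.0, car_w, 10.0)          # 144 px: easy to detect
far = pixel_width(800.0, car_w, 80.0)           # 18 px: hard to detect
far_zoomed = pixel_width(1600.0, car_w, 80.0)   # 36 px with 2x focal length
```

The trade-off, of course, is a narrower field of view, so a zoomed camera covers less of the area around the intersection per sensor.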
