361 |
En jämförelse mellan två öppna ramverk för objektigenkänning : En undersökning gällande noggrannhet och tidsåtgång vid träning och test / A comparison between two open frameworks for object detection - A study regarding precision and duration with training and test. Tirus, Nicklas January 2018 (has links)
The partner for whom this study was conducted aims to build a detector for rail traffic based on image recognition and artificial intelligence. The problem is that today's solutions are expensive, so a prerequisite is that the detector be built from consumer products to lower the cost, and that it be easy to install and maintain. Several frameworks for object detection exist, but they build on different methods and techniques. The study was therefore carried out as a case study whose purpose was to compare two widely used object detection frameworks in order to identify advantages and disadvantages regarding accuracy and time required for training and testing. The challenges encountered along the way are also highlighted. The study then summarizes these findings to generate ideas and discussion about how they could be applied to the new train detector. The frameworks compared are OpenCV and Google TensorFlow. They build on different object detection techniques, mainly cascade classification and neural networks. The frameworks were tested with a dataset of 400 images of various rail vehicles, where the wheel axles were used as the target for object detection. The tests were judged against criteria for accuracy, training time, and the complexity of configuration and use. The results showed that OpenCV had a fast training process but low precision and a more complex configuration and usage process. TensorFlow had a slower training process but better precision and a less complex configuration. The conclusion of the study is that TensorFlow showed the best results and has the most potential for use in the new train detector. This is based on the study's results and on the fact that TensorFlow builds on more modern neural network techniques for object detection.
/ The research in this thesis is conducted with the partner's aim to construct a new train detection system that uses image recognition and artificial intelligence. Detectors like these that exist today are expensive, so the construction is going to be based around the use of consumer electronics to lower the cost and simplify installation and maintenance. Several frameworks for object detection are available, but they use different approaches and methods. This thesis is therefore carried out as a case study that compares two widely used frameworks for image recognition tasks. The purpose is to identify advantages and disadvantages regarding training and testing when using these frameworks. Also highlighted are the different challenges encountered in the process. The summary of the results is used to form ideas and a discussion about how to implement a framework in the new detection system. The frameworks compared in this study are OpenCV and Google TensorFlow. These frameworks use different methods for object detection, mainly cascade classifiers and convolutional neural nets. The frameworks were tested using a dataset of 400 images of different trains where the wheel axles were used as the object of interest. The results were analyzed based on criteria regarding precision, total training time, and complexity of configuration and usage. The results showed that OpenCV had a faster training process but low precision and a more complex configuration. TensorFlow had a much longer training process but better precision and a less complex configuration. The conclusion of the study is that TensorFlow showed the best overall result and has better potential for implementation in the new detection system. This is based on the results from the study, but also on the fact that the framework is developed with a more modern approach, using convolutional neural nets for object detection.
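The study's headline criterion, detection precision over the 400-image test set, reduces to a simple computation over detection counts. A minimal sketch, with illustrative counts rather than the study's actual figures:

```python
def detection_metrics(tp, fp, fn):
    """Precision and recall from true-positive, false-positive and miss counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative counts, not the study's numbers: out of 400 wheel-axle
# instances, a detector finds 320 and raises 40 false alarms.
p, r = detection_metrics(tp=320, fp=40, fn=80)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.89 recall=0.80
```

The same counts can be tallied for both frameworks to reproduce the kind of precision comparison the study reports.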
|
362 |
Vision stéréoscopique temps-réel pour la navigation autonome d'un robot en environnement dynamique / Real-time stereovision for autonomous robot navigation in dynamic environment. Derome, Maxime 22 June 2017 (has links)
The objective of this thesis is to design an embedded stereoscopic perception system enabling autonomous robot navigation in dynamic environments (i.e. containing mobile objects). To that end, we imposed several constraints: 1) Since we want to navigate in unknown terrain and in the presence of any type of mobile object, we adopted a purely geometric approach. 2) To ensure maximum coverage of the field of view, we chose dense estimation methods that process every pixel of the image. 3) Since the algorithms must run on board a robot, we took the greatest care to select or design particularly fast algorithms, so as to harm the system's reactivity as little as possible. The approach presented in this manuscript and its contributions are as follows. First, we study several stereo matching algorithms that estimate a disparity map, from which a depth map can be deduced by triangulation. This evaluation highlights an algorithm that does not appear in the KITTI benchmarks but offers an excellent accuracy/computation-time tradeoff. We also propose a method for filtering disparity maps. By coding these algorithms in CUDA to benefit from GPU acceleration, we show that they run very fast (19 ms on KITTI images, on a GeForce GTX Titan GPU). Second, we want to perceive mobile objects and estimate their motion. To do so, we compute the motion of the stereo rig by visual odometry, in order to isolate, within the 2D or 3D apparent motion (estimated by optical flow or scene flow algorithms), the part induced by each object's own motion.
Starting from the observation that only the FOLKI optical flow algorithm allows real-time computation, we propose several modifications of it that slightly improve its performance at the cost of an increased computation time. Regarding scene flow, no existing algorithm reaches the desired execution speed, so we propose a new approach decoupling structure and motion to estimate scene flow quickly. Three algorithms are proposed to exploit this structure-motion decomposition, and one of them, particularly efficient, estimates scene flow very quickly with relatively good accuracy. To our knowledge, it is the only published scene flow algorithm able to run at video rate on the KITTI data (10 Hz). Third, to detect moving objects and segment them in the image, we present different statistical models and different residuals on which to base detection by thresholding a chi-squared criterion. We propose rigorous statistical modeling that accounts for all estimation uncertainties, notably those of visual odometry, which to our knowledge had not been done in the context of moving object detection. We also propose a new residual for detection, using an image prediction method that eases uncertainty propagation and the derivation of the chi-squared criterion. The gain brought by the proposed residual and error model is demonstrated by evaluating the detection algorithms on examples from the KITTI dataset. Finally, to experimentally validate our perception system on board a robotic platform, we implement our code under ROS, with some code in CUDA for GPU acceleration.
We describe the perception and navigation system used for the proof of concept, which shows that our perception system is suitable for an embedded application. / This thesis aims at designing an embedded stereoscopic perception system that enables autonomous robot navigation in dynamic environments (i.e. including mobile objects). To do so, we need to satisfy several constraints: 1) We want to be able to navigate in unknown environments and with any type of mobile object, thus we adopt a purely geometric approach. 2) We want to ensure the best possible coverage of the field of view, so we employ dense methods that process every pixel in the image. 3) The algorithms must be compliant with an embedded platform, therefore we must carefully design them so they are fast enough to keep a certain level of reactivity. The approach presented in this thesis manuscript and its contributions are summarized below. First, we study several stereo matching algorithms that estimate a disparity map from which a depth map can be deduced by triangulation. This comparative study highlights one algorithm that is not in the KITTI benchmarks but offers a great accuracy/processing-time tradeoff. We also propose a filtering method to post-process the disparity maps. By coding these algorithms in CUDA to benefit from hardware acceleration on a graphics processing unit (GPU), we show that they perform very fast (19 ms on KITTI images, with a GeForce GTX Titan GPU). Second, we want to detect mobile objects and estimate their motion. To do so, we compute the stereo rig motion using visual odometry, in order to isolate the part induced by moving objects in the 2D or 3D apparent motion (estimated by optical flow or scene flow algorithms). Considering that the only optical flow algorithm able to perform in real time is FOLKI, we propose several modifications of it that slightly improve its performance at the cost of a longer processing time.
Regarding scene flow estimation, existing algorithms cannot reach the desired computation speed, so we propose a new approach that decouples structure and motion for fast scene flow estimation. Three algorithms are proposed to use this structure-motion decomposition, and one of them, particularly efficient, enables very fast scene flow computation with relatively good accuracy. To our knowledge, it is the only published scene flow algorithm able to perform at frame rate on the KITTI dataset (10 Hz). Third, to detect moving objects and segment them in the image, we present several statistical models and residual quantities on which detection can be based by thresholding a chi-squared criterion. We propose rigorous statistical modeling that takes into account all the uncertainties occurring during estimation, in particular during visual odometry, which to our knowledge had not been done in the context of moving object detection. We also propose a new residual quantity for detection, using an image prediction approach to facilitate uncertainty propagation and the modeling of the chi-squared criterion. The benefit brought by the proposed residual quantity and error model is demonstrated by evaluating the detection algorithms on samples of annotated KITTI data. Finally, we implement our algorithms on ROS to run the perception system on an embedded platform, and we code some algorithms in CUDA to accelerate computation on the GPU. We describe the perception and navigation system used for the experimental validation. Our experiments show that the proposed stereovision perception system is suitable for embedded robotic applications.
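The triangulation step mentioned above, deducing depth from a stereo disparity map, follows the standard pinhole relation Z = f·B/d. A minimal sketch, with the focal length and baseline chosen to be roughly KITTI-like and the disparity value purely illustrative:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth Z = f * B / d under the rectified pinhole stereo model."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

# Focal length and baseline roughly in the range of the KITTI setup;
# the disparity value is illustrative.
z = depth_from_disparity(disparity_px=36.0, focal_px=720.0, baseline_m=0.54)
print(f"{z:.2f} m")  # 10.80 m
```

Applying this per pixel to a dense disparity map yields the dense depth map the thesis builds on.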
|
363 |
E-scooter Rider Detection System in Driving Environments. Kumar Apurv (11184732) 06 August 2021 (has links)
E-scooters are ubiquitous, and their number keeps growing, increasing their interactions with other vehicles on the road. E-scooter riders exhibit atypical behavior that differs greatly from that of other vulnerable road users, creating new challenges for vehicle active safety systems and automated driving functionalities. Detecting e-scooter riders from other vehicles is the first step in mitigating these risks. This research presents a novel vision-based system to differentiate between e-scooter riders and regular pedestrians, together with a benchmark dataset of e-scooter riders in natural environments. An efficient system pipeline built from two existing state-of-the-art convolutional neural networks (CNNs), You Only Look Once (YOLOv3) and MobileNetV2, performs detection of these vulnerable e-scooter riders.
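The abstract does not spell out how the two networks are combined, but a common way to decide whether a detected person is associated with a detected e-scooter is bounding-box overlap. A hedged sketch of that association step, using hypothetical boxes and an assumed overlap threshold, not the thesis's actual pipeline:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

person = (100, 50, 160, 230)    # hypothetical person box from a detector
scooter = (110, 150, 170, 250)  # hypothetical e-scooter box
print(iou(person, scooter) > 0.1)  # True: overlapping boxes suggest a rider
```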
|
364 |
Metody hlubokého učení pro zpracování obrazů / Deep learning methods for image processing. Křenek, Jakub January 2017 (has links)
This master's thesis deals with deep learning methods for image recognition tasks, from the earliest methods to modern ones. The main focus is on convolutional neural network-based models for classification, detection and image segmentation. These methods are put to practical use in counting passing cars in video from a traffic camera. After several tests of the available models, the YOLOv2 architecture was chosen and retrained on our own dataset. The application also integrates the SORT tracking algorithm.
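The counting step on top of YOLOv2 detections and SORT tracks is not detailed in the abstract; one common scheme counts each track once when it crosses a virtual line in the frame. A minimal sketch under that assumption, with made-up track coordinates:

```python
def count_crossings(track_ys, line_y):
    """Count tracks that cross a virtual counting line (one count per track).

    track_ys: one list per tracked car, holding its y-coordinate over time.
    """
    count = 0
    for ys in track_ys:
        before = any(y < line_y for y in ys)   # seen above the line
        after = any(y >= line_y for y in ys)   # seen at or below the line
        if before and after:
            count += 1
    return count

tracks = [
    [40, 80, 130, 190],  # crosses the line at y=100
    [20, 35, 60],        # never reaches the line
    [90, 120, 160],      # crosses
]
print(count_crossings(tracks, line_y=100))  # 2
```

Counting per track rather than per frame is what the tracker buys over raw detections: the same car is not counted twice.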
|
365 |
Anonymizace videa / Video Anonymization. Mokrý, Martin January 2019 (has links)
The goal of this thesis is to design and create an automatic system for video anonymization. To ensure its functionality, the system makes use of various object detectors on the image, as well as active tracking of the detected objects. Adjustments are then applied to these detected objects to ensure a sufficient level of anonymization. The main benefit of this system is speeding up the anonymization of videos intended for later publication.
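The "adjustments" applied to detected regions are not specified in the abstract; pixelation by block averaging is one typical anonymization choice. A minimal sketch on a small grayscale grid (a real system would apply this to the detected boxes of full video frames):

```python
def pixelate(image, x0, y0, x1, y1, block=2):
    """Anonymize a region by replacing each block with its average intensity."""
    out = [row[:] for row in image]
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            ys = range(by, min(by + block, y1))
            xs = range(bx, min(bx + block, x1))
            vals = [image[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out

img = [[10, 20, 30, 40],
       [50, 60, 70, 80],
       [15, 25, 35, 45],
       [55, 65, 75, 85]]
blurred = pixelate(img, 0, 0, 2, 2)  # pixelate the top-left 2x2 region
print(blurred[0][0] == blurred[1][1] == 35)  # True: (10+20+50+60)//4
```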
|
366 |
Využití moderních metod zpracování obrazu při kontrole laboratorních procesů / Use of modern image processing methods in the control of laboratory processes. Kiac, Martin January 2019 (has links)
The thesis deals with the processing and detection of specific objects in images on the Android mobile platform. The main objective of this work was to design and then implement a mobile application for the Android operating system that allows control of pipetting processes based on images from a mobile device camera. The application uses the OpenCV library for image processing. The resulting application should serve primarily in laboratories as a tool for complete analysis of the pipetting process. The work is divided into two main chapters, which further consist of sections and smaller subsections. The first chapter is devoted to the theoretical background of this work: the technologies used, the Android operating system, the OpenCV library, and the relevant parts of image processing. The second chapter deals with the design and subsequent practical solution of this work: the proposed procedure, the important techniques, and the methods for processing and analyzing the camera image. The thesis concludes with an evaluation of the results.
|
367 |
Autonomní vozidlo pro model dopravní situace / Autonomous vehicle for traffic situation model. Schneiderka, Dominik January 2020 (has links)
This thesis describes the development of an autonomous car for the Carrera 143 racing track. The main objective of the car is to stop when a traffic light shows red or when there is an obstacle in front of the car. The thesis also describes the electrical schematics used to control the car and their placement on the car. The image processing algorithms target the Raspberry Pi Zero processing unit and are written in the C/C++ programming language, with the OpenCV library used for image processing. All source code was developed in Microsoft Visual Studio 2019.
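The abstract does not describe how the red light is recognized; a simple approach is to threshold the fraction of red-looking pixels in a region of interest around the traffic light. A hedged sketch with illustrative color thresholds, not the thesis's actual values:

```python
def is_red_light(pixels, red_ratio_threshold=0.3):
    """Decide 'red light' if enough pixels in a region of interest look red.

    pixels: (r, g, b) tuples from an assumed traffic-light region;
    all thresholds here are illustrative.
    """
    def looks_red(p):
        r, g, b = p
        return r > 150 and g < 100 and b < 100

    red = sum(1 for p in pixels if looks_red(p))
    return red / len(pixels) >= red_ratio_threshold

roi = [(200, 30, 40)] * 4 + [(90, 90, 90)] * 6  # 40% red-ish pixels
print(is_red_light(roi))  # True -> the car should stop
```

In a real OpenCV pipeline this would typically be done in HSV space for robustness to lighting, but the ratio-threshold decision is the same.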
|
368 |
Sledování osob v záznamu z dronu / Tracking People in Video Captured from a Drone. Lukáč, Jakub January 2020 (has links)
This thesis addresses recording the position of people in footage from a drone camera and determining their location. The absolute position of a tracked person is derived relative to the camera position, i.e. relative to the placement of a drone equipped with the appropriate sensors. After processing, the collected data are rendered as the corresponding paths. The thesis further aims to use available solutions to the sub-problems: detecting people in the image, identifying individual people over time, determining the distance of an object from the camera, and processing the necessary sensor data. The examined methods are then used to design a solution that works on the stated problem in real time. The implementation consists of using an Intel NCS accelerator together with a Raspberry Pi directly as part of the drone. The resulting system is able to generate output on the position of people in the camera frame and present it accordingly.
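Deriving a person's absolute position from the drone pose and an estimated distance can be sketched, under a flat-ground simplification, as a rotation plus translation in the world frame. The angles and ranges below are illustrative, not values from the thesis:

```python
import math

def person_position(drone_xy, drone_heading_deg, bearing_deg, distance_m):
    """Ground-plane position of a person from the drone pose and a range estimate.

    drone_heading_deg: drone yaw in the world frame; bearing_deg: angle of the
    person relative to the camera axis. A flat-ground simplification.
    """
    angle = math.radians(drone_heading_deg + bearing_deg)
    dx = distance_m * math.cos(angle)
    dy = distance_m * math.sin(angle)
    return drone_xy[0] + dx, drone_xy[1] + dy

# A person 4 m straight ahead of a drone at (10, 5) heading "north" (+y)
x, y = person_position(drone_xy=(10.0, 5.0), drone_heading_deg=90.0,
                       bearing_deg=0.0, distance_m=4.0)
print(f"({x:.1f}, {y:.1f})")  # (10.0, 9.0)
```

Evaluating this per frame for each tracked identity produces the paths the thesis renders.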
|
369 |
Detekce cizích objektů v rentgenových snímcích hrudníku s využitím metod strojového učení / Detection of foreign objects in X-ray chest images using machine learning methods. Matoušková, Barbora January 2021 (has links)
Foreign objects in chest X-rays (CXR) cause complications during automatic image processing. To prevent errors caused by these foreign objects, it is necessary to find them automatically and omit them from the analysis. These are mainly buttons, jewellery, implants, wires and tubes. At the same time, finding pacemakers and other implanted devices can help with automatic processing. The aim of this work was to design a method for the detection of foreign objects in CXR. For this task, the Faster R-CNN method with a pre-trained ResNet50 network for feature extraction was chosen; it was trained on 4,000 images and then tested on 1,000 images from a publicly available database. After finding the optimal learning parameters, the trained network achieves 75% precision, 77% recall and a 76% F1 score. However, part of the error stems from non-uniform annotations of objects in the data, because not all annotated foreign objects are located in the lung area, as stated in the description.
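The reported 76% F1 score is consistent with the reported precision and recall, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The reported 75% precision and 77% recall give the reported 76% F1.
print(round(f1_score(0.75, 0.77), 2))  # 0.76
```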
|
370 |
Sledování osob ve videu z dronu / Tracking People in Video Captured from a Drone. Lukáč, Jakub January 2021 (has links)
This thesis addresses recording the position of people in footage from a drone camera and determining their location. The absolute position of a tracked person is derived relative to the camera position, i.e. relative to the placement of a drone equipped with the appropriate sensors. After processing, the collected data are rendered as the corresponding paths in a graph. The thesis further aims to use available solutions to the sub-problems: detecting people in the image, identifying individual people over time, determining the distance of an object from the camera, and processing the necessary sensor data. The examined methods are then used to design a solution that works on the stated problem in real time. The implementation consists of using an Intel NCS accelerator together with a Raspberry Pi directly as part of the drone. The resulting system is able to generate output on the position of detected people in the camera frame and present it accordingly.
|