41 |
Machine visual feedback through CNN detectors : Mobile object detection for industrial application
Rexhaj, Kastriot January 2019 (has links)
This paper concerns itself with object detection as a possible solution to Valmet’s quest for a visual-feedback system that can help operators and other personnel interact more easily with their machines and equipment. New advancements in deep learning, specifically CNN models, have been exploring neural networks with detection capabilities. Object detection has historically been mostly inaccessible to industry due to the complex solutions involving various tricky image-processing algorithms. In that regard, deep learning offers a more accessible way to create scalable object detection solutions. This study has therefore chosen to review recent literature detailing detection models, with a selective focus on factors making them realizable on ARM hardware and, in turn, on mobile devices like phones. An attempt was made to single out the most lightweight and hardware-efficient model and implement it as a prototype in order to help Valmet in their decision process around future object detection products. The survey led to the choice of an SSD-MobileNetV2 detection architecture due to promising characteristics making it suitable for performance-constrained smartphones. This CNN model was implemented on Valmet’s phone of choice, the Samsung Galaxy S8, and it successfully achieved object detection functionality. Evaluation shows a mean average precision of 60 % in detecting objects and a performance of 4.7 FPS on the chosen phone model. TensorFlow was used for developing, training and evaluating the model. The report concludes by recommending that Valmet pursue solutions built on top of these kinds of models, and expresses an optimistic outlook on this type of technology for the future. Realizing performance of this magnitude on a mid-tier phone using deep learning (which historically is very computationally intensive) sets us up for great strides with this type of technology; along with better smartphones, great benefits are expected for both industry and consumers. / This report treats object detection as a possible solution to Valmet’s demand for a visual feedback system that can help operators and other personnel interact more easily with machines and equipment. In recent years, new advances in deep learning have enabled the development of neural network architectures with detection capabilities. Since the industrial sector has a harder time adopting highly specialized algorithms and complex image-processing methods (as was previously the case with object detection), deep learning methods instead make it possible to create self-learning systems that are re-adaptable and almost intuitive in the cases where such technology is called upon. This study has therefore chosen to examine a couple of such technologies in order to find possible implementations that can be realized on something as simple as a mobile phone. The selection has therefore consisted of finding detection models that are hardware-efficient and implementing such a system to serve as a prototype and a basis for Valmet’s further discussions on object detection solutions. The study chose to implement an SSD-MobileNetV2 model architecture as it exhibited promising characteristics regarding the hardware requirements. The model was implemented and evaluated on Valmet’s most common phone, the Samsung Galaxy S8, and the results showed a good ability of the model to detect objects. The chosen model achieved 60 % precision on the evaluation images and reached 4.7 FPS on the target phone.
TensorFlow was used for programming and as a supporting software tool for training, evaluation and further implementation. The study expresses optimistic expectations for this type of technology; combined with better smartphones in the future, it may lead to revolutionary solutions for both industry and consumers.
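As an illustration of how such a model runs on a phone, the sketch below shows single-frame inference with a TensorFlow Lite SSD detector in Python; the model file name is hypothetical, and the output tensor order can vary between exports, so this is a sketch of the pattern rather than the thesis's actual code.

```python
import numpy as np
import tensorflow as tf

# Hypothetical file name; any TFLite-converted SSD-MobileNetV2 detector follows this pattern.
interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet_v2.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def detect(image_rgb, score_threshold=0.5):
    """Run one RGB frame (HxWx3 uint8) through the detector."""
    h, w = input_details[0]["shape"][1:3]
    batch = tf.image.resize(image_rgb[np.newaxis].astype(np.float32), (h, w))
    batch = tf.cast(batch, input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], batch.numpy())
    interpreter.invoke()
    # Typical post-processed SSD outputs: boxes, classes, scores (order may differ per export).
    boxes = interpreter.get_tensor(output_details[0]["index"])[0]
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    return [(box, int(cls), float(s))
            for box, cls, s in zip(boxes, classes, scores) if s >= score_threshold]
```

On Android the same model would be driven through the TensorFlow Lite Java/Kotlin API, but the tensor plumbing is the same.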
|
42 |
Sledování osob v záznamu z dronu / Tracking People in Video Captured from a Drone
Lukáč, Jakub January 2020 (has links)
This work addresses the possibility of recording the positions of people in footage from a drone camera and determining their location. The absolute position of a tracked person is derived relative to the position of the camera, i.e., relative to the placement of the drone equipped with the appropriate sensors. After processing, the collected data are plotted as the corresponding paths. The work further aims to make use of available solutions to the constituent subproblems: detecting people in an image, identifying individual people over time, determining the distance of an object from the camera, and processing the necessary sensor data. It then applies the surveyed methods to design a solution that works on the stated problem in real time. The implementation uses an Intel NCS accelerator together with a Raspberry Pi directly as part of the drone. The resulting system is able to generate output on the positions of people in the camera view and present it accordingly.
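One of the subproblems above, deriving a person's absolute position from the drone's own GPS fix, reduces to simple geometry once the camera bearing and the distance to the person are known. A minimal sketch of that projection (not the thesis's code; the bearing is assumed to equal the drone heading):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def person_position(drone_lat, drone_lon, heading_deg, distance_m):
    """Project a detected person's (lat, lon) from the drone's GPS fix,
    the camera bearing and the estimated ground distance to the person.
    Equirectangular approximation, fine for distances of tens of metres."""
    bearing = math.radians(heading_deg)
    d_lat = distance_m * math.cos(bearing) / EARTH_RADIUS_M
    d_lon = distance_m * math.sin(bearing) / (
        EARTH_RADIUS_M * math.cos(math.radians(drone_lat)))
    return (drone_lat + math.degrees(d_lat),
            drone_lon + math.degrees(d_lon))

# Example: person detected 25 m from the drone, camera facing due east.
print(person_position(49.2273, 16.5976, 90.0, 25.0))
```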
|
43 |
Sledování osob ve videu z dronu / Tracking People in Video Captured from a Drone
Lukáč, Jakub January 2021 (has links)
This work addresses the possibility of recording the positions of people in footage from a drone camera and determining their location. The absolute position of a tracked person is derived relative to the position of the camera, i.e., relative to the placement of the drone equipped with the appropriate sensors. After processing, the collected data are plotted as the corresponding paths in a graph. The work further aims to make use of available solutions to the constituent subproblems: detecting people in an image, identifying individual people over time, determining the distance of an object from the camera, and processing the necessary sensor data. It then applies the surveyed methods to design a solution that works on the stated problem in real time. The implementation uses an Intel NCS accelerator together with a Raspberry Pi directly as part of the drone. The resulting system is able to generate output on the positions of detected people in the camera view and present it accordingly.
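Another of the listed subproblems, identifying individual people over time, can in its simplest form be handled by matching each new detection to the track whose last bounding box overlaps it most. A sketch of such greedy IoU matching (the thesis may use a more elaborate method; this only illustrates the idea):

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def assign_ids(tracks, detections, next_id, threshold=0.3):
    """Match detections to known tracks by IoU with each track's last box;
    unmatched detections open new identities. Returns (id, box) pairs."""
    matched, used = [], set()
    for det in detections:
        best_id, best = None, threshold
        for track_id, last_box in tracks.items():
            score = iou(det, last_box)
            if track_id not in used and score > best:
                best_id, best = track_id, score
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        tracks[best_id] = det
        matched.append((best_id, det))
    return matched, next_id
```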
|
44 |
APPLYING UAVS TO SUPPORT THE SAFETY IN AUTONOMOUS OPERATED OPEN SURFACE MINES
Hamren, Rasmus January 2021 (has links)
Interest in unmanned aerial vehicles (UAVs) is expanding across numerous industries and applications. UAV development is accelerating worldwide, with ever more sensor attachments and functions being added, so that multi-function UAVs can be used in areas where they have not operated before. Because they are accessible, cheap to purchase and easy to use, they are replacing expensive systems such as helicopter- and airplane-based surveillance. UAVs are also being applied to surveillance itself, combining object detection with video surveillance and adding the mobility to find an object from the air without interfering with vehicles or people on the ground. In this thesis, we address the problem of using a UAV on autonomously operated sites to find objects and critical situations, supporting site operators with an extra safety layer derived from the UAV's camera. When an object is found on such a site, the system uses the UAV's GPS coordinates to place the detected object onto a grid map, giving the operator a coordinate map showing where the objects are and whether a critical situation can occur. Critical situations are reported directly from the object detections: a safety-distance circle around each object raises a warning if objects come too close to each other. The system itself, however, only provides the operator with extra safety information and warnings; the choice of pressing the emergency stop remains with the operator. Object detection uses You Only Look Once (YOLO) as the main detection neural network (NN), combined with edge detection to gain accuracy in bird's-eye views and motion detection to help find all moving objects on site, even when the detector alone cannot find them. The results show that UAV surveillance of an autonomous site is an excellent way to add extra safety, both when the operator is out of focus and for finding objects on site before startup, since the operator can fly the UAV around the site and thereby gain an extra safety layer for finding humans before operations begin. The UAV can also be moved to a specific position where extra safety is needed, prompting the operator to limit autonomous vehicle speeds in that area because humans are working there. A single object detection method on its own is of limited use, but the combined detection methods lead to promising results, and projecting the detected objects onto a Global Positioning System (GPS) map opens a new field of study, giving the operator a viewable interface outside of the object detection libraries.
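The safety-distance-circle check described above amounts to comparing pairwise distances between the objects projected onto the GPS grid map. A minimal sketch (the 10 m radius is an assumption; the thesis does not fix a value here):

```python
import itertools
import math

SAFETY_RADIUS_M = 10.0  # assumed safety-distance circle radius

def ground_distance_m(a, b):
    """Approximate ground distance in metres between two (lat, lon) points."""
    mid_lat = math.radians((a[0] + b[0]) / 2)
    dy = (a[0] - b[0]) * 111_320                     # metres per degree of latitude
    dx = (a[1] - b[1]) * 111_320 * math.cos(mid_lat)
    return math.hypot(dx, dy)

def safety_warnings(objects):
    """Warn for each pair of detected objects whose safety circles overlap.
    `objects` maps an object label to its projected (lat, lon) on the grid map."""
    return [(name_a, name_b)
            for (name_a, pos_a), (name_b, pos_b) in itertools.combinations(objects.items(), 2)
            if ground_distance_m(pos_a, pos_b) < 2 * SAFETY_RADIUS_M]

# Example: a human and a hauler detected close together on the grid map.
print(safety_warnings({"human_1": (59.3251, 18.0710), "hauler_3": (59.3252, 18.0711)}))
```

The warnings remain advisory, matching the design in which the emergency stop stays in the operator's hands.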
|
45 |
Training of Object Detection Spiking Neural Networks for Event-Based Vision
Johansson, Olof January 2021 (has links)
Event-based vision offers high dynamic range, high time resolution and lower latency than conventional frame-based vision sensors. These attributes are useful in varying light conditions and fast motion. However, there are no neural network models and training protocols optimized for object detection with event data, and conventional artificial neural networks for frame-based data are not directly suitable for that task. Spiking neural networks are natural candidates, but further work is required to develop an efficient object detection architecture and an end-to-end training protocol. Object detection in varying light conditions is, for example, identified as a challenging problem for the automation of construction equipment such as earth-moving machines, where the aims are to increase operator safety and make repetitive processes less tedious. This work focuses on the development and evaluation of a neural network for object detection with data from an event-based sensor. Furthermore, the strengths and weaknesses of an event-based vision solution are discussed in relation to the known challenges described in former works on the automation of earth-moving machines. A solution for object detection with event data is implemented as a modified YOLOv3 network with spiking convolutional layers, trained with a backpropagation algorithm adapted for spiking neural networks. The performance is evaluated on the N-Caltech101 dataset with classes for airplanes and motorbikes, resulting in a mAP of 95.8% for the combined network and 98.8% for the original YOLOv3 network with the same architecture. The solution is investigated as a proof of concept, and suggestions for further work based on a recurrent spiking neural network are described.
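A "backpropagation algorithm adapted for spiking neural networks" usually means replacing the non-differentiable spike with a surrogate gradient. The thesis's exact formulation is not given here, so the following is a generic sketch of a trainable leaky integrate-and-fire step in PyTorch:

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate
    (fast-sigmoid derivative) in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output / (1.0 + 10.0 * v.abs()) ** 2

def lif_step(x, v, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire time step: leak, integrate,
    spike where the threshold is crossed, reset by subtraction."""
    v = beta * v + x
    spikes = SpikeFn.apply(v - threshold)
    return spikes, v - spikes * threshold

# Example: run the membrane state over 10 time steps of (stand-in) conv output.
v = torch.zeros(1, 16, 32, 32)
for _ in range(10):
    spikes, v = lif_step(torch.randn(1, 16, 32, 32), v)
```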
|
46 |
Advanced Data Augmentation : With Generative Adversarial Networks and Computer-Aided Design
Thaung, Ludwig January 2020 (has links)
CNN-based (Convolutional Neural Network) visual object detectors often reach human-level accuracy but need to be trained with large amounts of manually annotated data. Collecting and annotating this data can frequently be time-consuming and financially expensive. Using generative models to augment the data can help minimize the amount of data required and increase detection performance. Many state-of-the-art generative models are Generative Adversarial Networks (GANs). This thesis investigates if and how one can utilize image data to generate new data through GANs to train a YOLO-based (You Only Look Once) object detector, and how CAD (Computer-Aided Design) models can aid in this process. In the experiments, different GAN models are trained and evaluated by visual inspection or with the Fréchet Inception Distance (FID) metric. The data provided by Ericsson Research consist of images of antenna and baseband equipment along with annotations and segmentations. Ericsson Research supplied the YOLO detector, and no modifications are made to it. Finally, the YOLO detector is trained on data generated by the chosen model and evaluated by Average Precision (AP). The results show that the generative models designed in this work can produce RGB images of high quality. However, the quality is reduced if binary segmentation masks are to be generated as well. The experiments with CAD input data did not result in images that could be used for training the detector. The GAN designed in this work is able to successfully replace objects in images with the style of other objects. The results show that training the YOLO detector with GAN-modified data leads to the same detection performance as training with real data. The results also show that the shapes and backgrounds of the antennas contributed more to detection performance than their style and colour.
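The FID metric mentioned above compares the Gaussian statistics of Inception features extracted from real and generated images. A minimal sketch of the computation itself (feature extraction omitted):

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real, feats_fake):
    """FID between two feature sets of shape (n_samples, dims):
    ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerical error
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

# Example with random vectors standing in for Inception activations.
rng = np.random.default_rng(0)
print(frechet_inception_distance(rng.normal(size=(500, 64)),
                                 rng.normal(loc=0.5, size=(500, 64))))
```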
|
47 |
Detecting small and fast objects using image processing techniques : A project study within sport analysis
Gustafsson, Simon, Persson, Andreas January 2021 (has links)
This study has put three different object detection techniques to the test. The goal was to investigate small and fast-moving objects to see which technique's performance is most suitable for the sport of padel. The study aims to cover and explain the different conditions that can improve or degrade performance for small and fast object detection. The three techniques use different approaches for detecting one or multiple objects and could serve as a guideline for future object detection development. The proposed techniques utilize background histogram calculation, HSV masking with edge detection, and DNN frameworks together with the COCO dataset. All techniques are tested on outdoor video footage to generate data, which indicates that Canny edge detection is a prominent candidate for further research given its high detection rate. However, YOLO shows excellent potential for multiple-object detection at a very high confidence grade, which provides reliable and accurate detection of a targeted object. The study's conclusion is that, depending on what the end purpose aims to achieve, both Canny and YOLO have potential for future small and fast object detection.
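As an illustration of the HSV-masking-plus-edge-detection technique, a sketch with OpenCV follows; the HSV thresholds are hypothetical and would have to be tuned to the footage:

```python
import cv2
import numpy as np

# Hypothetical HSV range for a yellow-green padel ball; tune per camera and lighting.
LOWER_HSV = np.array([25, 80, 80])
UPPER_HSV = np.array([45, 255, 255])

def detect_ball(frame_bgr):
    """Mask ball-coloured pixels in HSV space, then use Canny edges and
    contours to localize small round candidates as (x, y, radius)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    edges = cv2.Canny(mask, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for contour in contours:
        if 5 < cv2.contourArea(contour) < 500:  # size gate for a small, distant ball
            (x, y), radius = cv2.minEnclosingCircle(contour)
            candidates.append((int(x), int(y), int(radius)))
    return candidates
```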
|
48 |
Object and Anomaly Detection
Klarin, Kristofer, Larsson, Daniel January 2022 (has links)
This project aims to contribute to the discussion regarding the reproducibility of machine learning research. This is done by utilizing the methods specified in the report Improving Reproducibility in Machine Learning Research [30] to select an appropriate object detection machine learning research paper for reproduction. Furthermore, this report explains fundamental concepts of object detection. The chosen machine learning research paper, You Only Look Once (YOLO) [40], is then explained, implemented and trained with various hyperparameters and pre-processing steps. While the reproduction did not achieve the results presented by the original machine learning paper, some key insights were established. Firstly, the results of the project demonstrate the importance of pretraining. Secondly, the checklist provided by NeurIPS [30] should be adjusted so that it is applicable in more situations.
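For readers unfamiliar with the reproduced paper, the original YOLO divides the image into an S×S grid and makes the cell containing an object's centre responsible for predicting it. A sketch of encoding one ground-truth box into a YOLOv1-style target tensor, using the paper's S=7, B=2, C=20 defaults (training-time box responsibility is simplified here):

```python
import numpy as np

S, B, C = 7, 2, 20  # grid size, boxes per cell, classes (YOLOv1 defaults)

def encode_target(boxes, labels):
    """Encode normalized (cx, cy, w, h) ground-truth boxes into an
    S x S x (B*5 + C) target; (x, y) are relative to the responsible cell."""
    target = np.zeros((S, S, B * 5 + C), dtype=np.float32)
    for (cx, cy, w, h), label in zip(boxes, labels):
        col, row = min(int(cx * S), S - 1), min(int(cy * S), S - 1)
        x_cell, y_cell = cx * S - col, cy * S - row
        for b in range(B):  # in training, only the best-matching box is responsible
            target[row, col, b * 5:b * 5 + 5] = [x_cell, y_cell, w, h, 1.0]
        target[row, col, B * 5 + label] = 1.0
    return target

# One object of class 11, centred slightly right of the image middle.
print(encode_target([(0.6, 0.5, 0.2, 0.3)], [11]).shape)  # (7, 7, 30)
```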
|
49 |
A Study on Fault Tolerance of Object Detector Implemented on FPGA / En studie om feltolerans för objektdetektor implementerad på FPGA
Yang, Tiancheng January 2023 (has links)
Object detection has attracted great research interest in recent years, since it is the eyes of machines and a fundamental task in computer vision aimed at identifying and locating objects of interest. Hardware accelerators typically aim to increase throughput for real-time requirements while lowering power consumption. Studies of fault tolerance ensure that the algorithm executes correctly even in the presence of faults. This thesis covers these topics and provides a Field-Programmable Gate Array (FPGA) implementation of an object detection algorithm, You Only Look Once (YOLO), while investigating the fault tolerance of the implementation. A baseline implementation on the FPGA is provided first, and then two fault-tolerant implementations, one with triple modular redundancy and one with time redundancy, are applied, implemented and tested. Stuck-at faults are injected into the implementations to study the fault tolerance. Our FPGA implementation of YOLO provides a high-speed, low-power and highly configurable hardware accelerator for object detection. In this degree project, the implementation is designed as a combination of self-designed VHDL modules and Xilinx-provided Intellectual Property (IP). Compared to other research or open-source versions that use High-Level Synthesis (HLS), this design is more configurable for future reference and removes unnecessary hardware black boxes. Compared to other studies of hardware accelerators, this thesis focuses on fault tolerance. This degree project creates room for more work on exploring fault tolerance, e.g., creating a more fault-tolerant implementation or investigating how certain faults can affect the result. It is also possible to use the implementation from this thesis as a baseline for other research purposes, since the implementation is stand-alone and highly configurable. / Object detection has attracted great research interest in recent years, as it is the eyes of machines and a fundamental task in computer vision that aims at identifying and locating objects of interest. Hardware accelerators usually aim at boosting throughput for real-time requirements while lowering power consumption. Studies on fault tolerance ensure that the algorithm is performed correctly even in the presence of errors. This thesis covers these topics and provides a Field-Programmable Gate Array (FPGA) implementation of an object detection algorithm, You Only Look Once (YOLO), while investigating the fault tolerance of the implementation. A baseline implementation on the FPGA is first provided, and then two fault-tolerant implementations, one with triple modular redundancy and one with time redundancy, are applied, implemented, and tested. Stuck-at faults are injected into the implementations to study the fault tolerance. Our FPGA implementation of YOLO provides a high-speed, low-power-consumption, and highly configurable hardware accelerator for object detection. In this thesis, the implementation is designed with a combination of self-designed VHDL modules and Xilinx-provided Intellectual Property (IP). Compared to other research or open-source versions using High-Level Synthesis (HLS), this design is more configurable for future reference and removes unnecessary hardware black boxes. Compared to other studies on hardware accelerators, this thesis focuses on fault tolerance.
This thesis creates space for more work on exploring fault tolerance, e.g., creating a more fault-tolerant implementation or investigating how certain faults could affect the result. It is also possible to use the implementation from this thesis as a baseline for other research purposes, as the implementation is stand-alone and highly configurable.
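The thesis implements its redundancy in VHDL; as a language-agnostic illustration, here is a small behavioral sketch of triple modular redundancy masking an injected stuck-at-0 fault (Python, with 8-bit words standing in for one pipeline stage's output):

```python
def majority_vote(a, b, c, width=8):
    """Bitwise majority voter: each output bit is 1 when at least
    two of the three redundant module outputs agree on 1."""
    return ((a & b) | (a & c) | (b & c)) & ((1 << width) - 1)

def stuck_at_0(value, bit):
    """Inject a stuck-at-0 fault by forcing one output bit to 0."""
    return value & ~(1 << bit)

def tmr(compute, x, faulty_bit=None):
    """Run three copies of the same combinational function and vote;
    one copy optionally suffers a stuck-at-0 fault on `faulty_bit`."""
    outputs = [compute(x), compute(x), compute(x)]
    if faulty_bit is not None:
        outputs[0] = stuck_at_0(outputs[0], faulty_bit)
    return majority_vote(*outputs)

# Stand-in for one 8-bit stage of the accelerator.
stage = lambda x: (x + 42) & 0xFF

assert tmr(stage, 200, faulty_bit=4) == stage(200)  # single fault is masked
```

Time redundancy trades throughput instead of area for the same effect: the stage is evaluated repeatedly over time and the results are voted on.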
|
50 |
Object Based Image Retrieval Using Feature Maps of a YOLOv5 Network / Objektbaserad bildhämtning med hjälp av feature maps från ett YOLOv5-nätverk
Essinger, Hugo, Kivelä, Alexander January 2022 (has links)
As Machine Learning (ML) methods have gained traction in recent years, some problems regarding the construction of such methods have arisen. One such problem is the collection and labeling of data sets. Specifically, many applications of Computer Vision (CV) need a set of images, each labeled as either being of some class or not. Creating such data sets can be very time consuming. This project sets out to tackle this problem by constructing an end-to-end system for searching for objects in images (i.e. an Object Based Image Retrieval (OBIR) method) using an object detection framework (You Only Look Once (YOLO) [16]). The goal of the project was to create a method that, given an image q of an object of interest, searches for the same or similar objects in a set of other images S. The core concept of the idea is to pass the image q through an object detection model (in this case YOLOv5 [16]), create a "fingerprint" (which can be seen as a sort of identity for an object) from a set of feature maps extracted from the YOLOv5 [16] model, and look for corresponding similar parts in the sets of feature maps extracted from the other images. An investigation was conducted into which values to select for a few different parameters, including a comparison of the performance of a couple of different similarity metrics. The parameter combination that resulted in the highest F_Top_300 score (a measure indicating the number of relevant images retrieved among the top 300 recommended images) in the parameter selection phase was: Layer: 23 | Pool Method: max | Similarity Metric: Euclidean | FP Kernel Size: 4. Evaluation of the method resulted in the following F_Top_300 scores: Mouse: 0.820 | Duck: 0.640 | Coin: 0.770 | Jet ski: 0.443 | Handgun: 0.807 | Average: 0.696. / While ML methods have become more popular in recent years, some problems have arisen regarding the construction of such methods. One such problem is the collection and annotation of data. More specifically, many computer vision methods need a set of images annotated as either belonging or not belonging to a particular class. Creating such data sets can be very time consuming. The method constructed in this project aims to combat this problem by building an end-to-end system for searching for objects in images (that is, an OBIR method) with the help of an object detection algorithm (YOLO). The goal of the project was to create a method that, given an image q of an object, searches for the same or similar objects in a library of images S. The main concept behind the idea is to run the image q through the object detection model (in this case YOLOv5 [16]), create a "fingerprint" (which can be seen as a sort of identity for an object) from a collection of feature maps extracted from the YOLOv5 model [16], and look for similar parts of the feature map collections in other images. An investigation was carried out into which values to use for a number of parameters, including a comparison of the performance resulting from different similarity metrics. The parameter combination that gave the highest F_Top_300 (a measure indicating the proportion of relevant images among the 300 most highly recommended images) was: Layer: 23 | Pool Method: max | Similarity Metric: Euclidean | FP Kernel Size: 4. Evaluating the method with these parameter choices resulted in the following F_Top_300 scores: Mouse: 0.820 | Duck: 0.640 | Coin: 0.770 | Jet ski: 0.443 | Handgun: 0.807 | Average: 0.696.
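The report's exact fingerprinting pipeline is not reproduced here, but a minimal sketch of the idea, max-pooling a (stand-in) layer-23 feature map with kernel size 4 into a fingerprint and ranking a library by Euclidean distance, might look like:

```python
import numpy as np

def fingerprint(feature_map, kernel_size=4):
    """Max-pool a (C, H, W) feature map into a compact fingerprint vector,
    mirroring the winning parameters (max pooling, kernel size 4)."""
    c, h, w = feature_map.shape
    hk, wk = h // kernel_size, w // kernel_size
    cropped = feature_map[:, :hk * kernel_size, :wk * kernel_size]
    pooled = cropped.reshape(c, hk, kernel_size, wk, kernel_size).max(axis=(2, 4))
    return pooled.reshape(-1)

def rank_images(query_map, library_maps, kernel_size=4):
    """Rank library images by Euclidean distance between fingerprints;
    smaller distance means more similar to the query object."""
    q = fingerprint(query_map, kernel_size)
    dists = [np.linalg.norm(q - fingerprint(m, kernel_size)) for m in library_maps]
    return np.argsort(dists)

# Random arrays standing in for YOLOv5 layer-23 feature maps.
rng = np.random.default_rng(0)
query = rng.normal(size=(256, 32, 32))
library = [rng.normal(size=(256, 32, 32)) for _ in range(5)] + [query + 0.01]
print(rank_images(query, library))  # the near-copy of the query (index 5) ranks first
```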
|