11

Thermal human detection for Search & Rescue UAVs / Termisk människodetektion för sök- och räddnings UAVs

Wiklund-Oinonen, Tobias January 2022 (has links)
Unmanned Aerial Vehicles (UAVs) could play an important role in Search & Rescue (SAR) operations thanks to their ability to cover large, remote, or inaccessible search areas quickly without putting any personnel at risk. As UAVs become autonomous, the problem of identifying humans under a variety of conditions can be solved with computer vision applied to a thermal camera. In some cases it is necessary to operate one or several small, agile UAVs to search for people in dense and narrow environments where flying at high altitude is not a viable option, for example in a forest, a cave, or a collapsed building. A small UAV has limited carrying capacity, which is why this thesis aimed to propose a lightweight thermal solution for human detection that could be mounted on a small SAR UAV operating in dense environments. The hardware consisted of a Raspberry Pi 4 and a FLIR Lepton 3.5 thermal camera, chosen mainly for their small size and weight while fitting within the budget constraints. For object detection, EfficientDet-Lite0 in TensorFlow Lite format was used thanks to its good balance between speed, accuracy, and resource usage. A custom dataset of thermal images was collected and used for training. The objective was to characterize the disturbances and challenges this solution might face during a UAV SAR operation in dense environments, and to measure how the performance of the proposed platform varied with increasing environmental coverage of a human. This was done through a literature study, an experiment in a replicated dense environment, and observations of system behavior combined with analysis of the measurements. The disturbances that affect a thermal camera used for human detection were found to be a mixture of objective and subjective parameters, which formed the basis for the types of phenomena to include in a diverse thermal dataset. The experiment showed that stable and reliable detection performance can be expected up to 75% vegetational coverage of a human. When the human was fully covered, the solution was not reliable when trained on the dataset used in this thesis.
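The detection stack described above (a FLIR Lepton 3.5 feeding an EfficientDet-Lite0 model in TensorFlow Lite on a Raspberry Pi 4) can be sketched as a short inference loop. The model file, input size, and output tensor ordering below are assumptions for illustration, not the thesis's actual artifacts.

```python
# Minimal sketch: running an EfficientDet-Lite0 TFLite detector on a thermal frame.
# Paths, input size, and output tensor ordering are assumptions and depend on how
# the model was exported (e.g. with TFLite Model Maker); verify against the actual model.
import numpy as np
import cv2
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="efficientdet_lite0_thermal.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

# A FLIR Lepton 3.5 frame is 160x120, 16-bit radiometric; normalise it to 8-bit and
# replicate to 3 channels to match the detector's RGB-like input.
raw = np.random.randint(0, 65535, (120, 160), dtype=np.uint16)  # placeholder frame
frame8 = cv2.normalize(raw, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
rgb = cv2.cvtColor(frame8, cv2.COLOR_GRAY2RGB)
h, w = inp["shape"][1], inp["shape"][2]
resized = cv2.resize(rgb, (w, h))[np.newaxis, ...]

interpreter.set_tensor(inp["index"], resized.astype(inp["dtype"]))
interpreter.invoke()

# Typical EfficientDet-Lite exports return boxes, classes, scores, count (order may differ).
boxes = interpreter.get_tensor(outs[0]["index"])[0]
scores = interpreter.get_tensor(outs[2]["index"])[0]
for box, score in zip(boxes, scores):
    if score > 0.5:
        print("person candidate", box, float(score))
```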
12

A Study on Fault Tolerance of Image Sensor-based Object Detection in Indoor Navigation / En studie om feltolerans för bildsensorbaserad objektdetektering i inomhusnavigering

Wang, Yang January 2022 (has links)
With the fast development of embedded deep-learning computing systems, applications powered by deep learning are moving from the cloud to the edge. When neural networks (NNs) are deployed on devices in complex environments, various types of faults can occur: soft errors caused by cosmic radiation and radioactive impurities, voltage instability, aging, temperature variations, etc. This draws increasing attention to the reliability of embedded NN systems. In this project, we build a virtual simulation system in Gazebo to simulate and test an embedded NN system for indoor navigation. The system detects objects in the virtual environment with the help of a virtual camera (the image sensor) and an object-detection module based on YOLO v3, and makes the corresponding control decisions. We also designed and simulated a fault-injection module based on the working principle of the image sensor, and tested the functionality and fault tolerance of the YOLO network. In addition, a network-pruning algorithm is introduced to study the relationship between different degrees of pruning and the network's tolerance to sensor faults.
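A fault-injection step of the kind the project simulates can be sketched as a function that corrupts a camera frame before it reaches the detector. The fault types and rates below are illustrative assumptions, not the thesis's exact fault model.

```python
# Minimal sketch of image-sensor fault injection: stuck ("dead") pixels and random
# bit flips applied to a frame before detection. Rates and fault types are assumptions.
import numpy as np

def inject_sensor_faults(img, dead_pixel_rate=1e-3, bitflip_rate=1e-4, seed=0):
    """Return a copy of img with stuck pixels and random bit flips injected."""
    rng = np.random.default_rng(seed)
    faulty = img.copy()
    h, w = faulty.shape[:2]

    # Stuck-at faults: randomly chosen pixels become fully dark or fully bright.
    mask = rng.random((h, w)) < dead_pixel_rate
    vals = rng.choice(np.array([0, 255], dtype=faulty.dtype), size=int(mask.sum()))
    faulty[mask] = vals if faulty.ndim == 2 else vals[:, None]

    # Soft errors: flip one random bit in a small fraction of the raw pixel values.
    flat = faulty.reshape(-1)
    idx = rng.integers(0, flat.size, int(bitflip_rate * flat.size))
    flat[idx] ^= (1 << rng.integers(0, 8, idx.size)).astype(flat.dtype)
    return faulty

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # placeholder camera frame
noisy = inject_sensor_faults(frame)
# `noisy` is then fed to the YOLO v3 detector and its output compared with the clean frame.
```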
13

Enhancing Simulated Sonar Images With CycleGAN for Deep Learning in Autonomous Underwater Vehicles / Keywords: deep learning, machine learning, sonar, simulation, GAN, CycleGAN, YOLO-v4, sparse data, uncertainty analysis

Norén, Aron January 2021 (has links)
This thesis addresses the issue of data sparsity in the sonar domain. A data pipeline is set up to generate and enhance sonar data. The possibilities and limitations of using CycleGAN as a tool to enhance simulated sonar images for training neural networks for detection and classification are studied. A neural network is trained on the enhanced simulated sonar images and tested on real sonar images to evaluate the quality of these images. The novelty of this work lies in extending previous methods to a more general framework and showing that GAN-enhanced simulations work for complex tasks on field data. Using real sonar images to enhance the simulated images resulted in improved classification compared to a classifier trained solely on simulated images.
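The CycleGAN enhancement step can be sketched as the standard two-generator objective with adversarial and cycle-consistency terms. The tiny networks below are stand-ins for the full ResNet generators and PatchGAN discriminators, and only the generator update is shown.

```python
# Minimal sketch of the CycleGAN objective used to translate simulated sonar images
# toward the real-sonar domain: two generators (G: sim->real, F: real->sim), two
# discriminators, an adversarial loss, and the cycle-consistency loss.
import torch
import torch.nn as nn

def tiny_net(out_ch):  # placeholder for a real generator/discriminator architecture
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, out_ch, 3, padding=1))

G, F = tiny_net(1), tiny_net(1)            # generators sim->real and real->sim
D_real, D_sim = tiny_net(1), tiny_net(1)   # discriminator stand-ins (updates omitted here)
adv, l1 = nn.MSELoss(), nn.L1Loss()        # LSGAN-style adversarial loss + cycle L1
opt = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)

sim = torch.rand(4, 1, 64, 64)   # batch of simulated sonar images (placeholder)
real = torch.rand(4, 1, 64, 64)  # batch of real sonar images (placeholder)

fake_real = G(sim)               # simulated image pushed toward the real domain
fake_sim = F(real)
loss_adv = adv(D_real(fake_real), torch.ones_like(D_real(fake_real))) + \
           adv(D_sim(fake_sim), torch.ones_like(D_sim(fake_sim)))
loss_cycle = l1(F(fake_real), sim) + l1(G(fake_sim), real)  # cycle consistency
loss = loss_adv + 10.0 * loss_cycle    # lambda = 10 as in the original CycleGAN paper

opt.zero_grad()
loss.backward()
opt.step()
# After training, G(sim) provides the "enhanced" images used to train the detector.
```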
14

Real-time Human Detection using Convolutional Neural Networks with FMCW RADAR RGB data / Upptäckt av människor i real-tid med djupa faltningsnät samt FMCW RADAR RGB data

Phan, Anna, Medina, Rogelio January 2022 (has links)
Machine learning has been employed in the automotive industry, together with cameras, to detect objects in surround-sensing technology. You Only Look Once is a state-of-the-art object-detection algorithm especially suitable for real-time applications due to its speed and relatively high accuracy compared to competing methods. Recent studies have investigated whether radar data can be used as an alternative to camera data with You Only Look Once, since radars are more robust to changing environments such as varying weather and lighting conditions. These studies use 3D radar data consisting of range, angle, and velocity, transformed into a 2D image representation in either the Range-Angle or the Range-Doppler domain. Furthermore, the processed radar image can be rendered in either a Cartesian or a polar coordinate system. This study combines previous work, using You Only Look Once with Range-Angle radar images, and examines which coordinate system, Cartesian or polar, is more suitable. Localization and classification performance is evaluated using a combination of concepts and evaluation metrics. Ultimately, the conclusion is that the Cartesian coordinate system prevails, with a significant improvement compared to polar.
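The Cartesian rendering compared in the study amounts to resampling the polar Range-Angle map onto an x-y grid. A minimal sketch follows; the map size, maximum range, and field of view are illustrative assumptions.

```python
# Minimal sketch of rendering a Range-Angle radar map in Cartesian coordinates.
import numpy as np

def range_angle_to_cartesian(ra_map, max_range_m=50.0, fov_deg=120.0, out_size=256):
    n_range, n_angle = ra_map.shape
    # Cartesian grid: x lateral (left/right), y forward (range direction).
    xs = np.linspace(-max_range_m, max_range_m, out_size)
    ys = np.linspace(0.0, max_range_m, out_size)
    X, Y = np.meshgrid(xs, ys)

    r = np.sqrt(X**2 + Y**2)                 # metres from the sensor
    theta = np.degrees(np.arctan2(X, Y))     # degrees from boresight

    # Map physical (r, theta) back to bin indices in the polar map.
    r_idx = np.clip((r / max_range_m) * (n_range - 1), 0, n_range - 1).astype(int)
    t_idx = np.clip(((theta + fov_deg / 2) / fov_deg) * (n_angle - 1), 0, n_angle - 1).astype(int)

    cart = ra_map[r_idx, t_idx]
    cart[(r > max_range_m) | (np.abs(theta) > fov_deg / 2)] = 0  # outside the sensor's view
    return cart

polar = np.random.rand(128, 64)          # placeholder range x angle intensities
cartesian = range_angle_to_cartesian(polar)  # image fed to the detector in the Cartesian case
```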
15

Deep YOLO-Based Detection of Breast Cancer Mitotic-Cells in Histopathological Images

Al Zorgani, Maisun M., Mehmood, Irfan, Ugail, Hassan 25 March 2022 (has links)
Coinciding with advances in whole-slide imaging scanners, it has become essential to automate conventional image-processing techniques to assist pathologists with tasks such as mitotic-cell detection. In histopathological image analysis, the mitotic-cell count is a significant biomarker for the prognosis of breast cancer grade and aggressiveness. However, counting mitotic cells is tedious and time-consuming because of the difficulty of distinguishing mitotic cells from normal cells. To tackle this challenge, several deep-learning-based Computer-Aided Diagnosis (CAD) approaches have recently been developed to count mitotic cells in histopathological images. Such CAD systems achieve outstanding performance, so histopathologists can use them as a second-opinion system. However, CAD systems need to keep improving as deep-learning network architectures progress. In this work, we investigate the deep YOLO (You Only Look Once) v2 network for mitotic-cell detection on the ICPR (International Conference on Pattern Recognition) 2012 breast cancer histopathology dataset. The results show that the proposed architecture achieves a good F1-measure of 0.839.
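The reported F1-measure is the harmonic mean of precision and recall over detections matched to ground-truth mitotic cells. A minimal sketch with placeholder counts:

```python
# Minimal sketch of the F1-measure: detections are matched to ground-truth cells
# (e.g. by centroid distance or IoU) to obtain TP/FP/FN counts, then F1 is computed.
def f1_measure(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f1_measure(tp=83, fp=20, fn=12))  # placeholder counts, not the paper's numbers
```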
16

Near Realtime Object Detection : Optimizing YOLO Models for Efficiency and Accuracy for Computer Vision Applications

Abo Khalaf, Mulham January 2024 (has links)
The objective of this study is to improve the efficiency and accuracy of YOLO models by optimizing them, particularly when computing resources are limited. The urgent need for near-real-time object recognition in applications such as surveillance systems and autonomous driving underscores the importance of processing speed and high accuracy. The thesis focuses on the difficulties of implementing complex object-detection models on low-capacity devices, namely the Jetson Orin Nano, and proposes several optimization methods to overcome these obstacles. We performed several trials and made methodological improvements to decrease processing requirements while maintaining strong object-detection performance. Key components of the research include careful model training, the use of assessment criteria, and the investigation of how optimization affects model performance in real-life settings. The study shows that strong performance of YOLO models can be achieved despite limited resources, bringing substantial advancements in computer vision and machine learning.
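One common optimization path for YOLO models on a Jetson-class device is reduced-precision export. A minimal sketch assuming the Ultralytics toolchain and a yolov8n checkpoint, which may differ from the exact models and tooling used in the thesis:

```python
# Minimal sketch: export a YOLO model to an FP16 TensorRT engine and reload it for
# inference. The checkpoint, image size, and test image are assumptions; the export
# should be run on the Jetson itself (or a machine with matching TensorRT).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                            # assumed baseline checkpoint
model.export(format="engine", half=True, imgsz=640)   # FP16 TensorRT engine

optimized = YOLO("yolov8n.engine")                    # load the optimized engine
results = optimized("test_image.jpg")                 # hypothetical test image
# Latency and accuracy of `optimized` are then compared against the PyTorch baseline.
```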
17

Detekce a klasifikace létajících objektů / Detection and classification of flying objects

Jurečka, Tomáš January 2021 (has links)
The thesis deals with the detection and classification of flying objects. The work is divided into three parts. The first part describes the creation of a dataset of flying objects, using reverse image search. The second part surveys algorithms for detection, tracking, and classification; the individual algorithms are then applied and evaluated. The last part covers the design of the hardware components.
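A minimal sketch of how the detection and tracking stages can be linked, using greedy IoU association of new detections to existing tracks; the box format and threshold are assumptions, not the thesis's chosen tracker.

```python
# Minimal sketch of tracking-by-detection association via greedy IoU matching.
def iou(a, b):  # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_thr=0.3):
    """Greedily match track boxes to detection boxes; return pairs and unmatched detections."""
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, iou_thr
        for di, d in enumerate(detections):
            if di not in used and iou(t, d) > best_iou:
                best, best_iou = di, iou(t, d)
        if best is not None:
            pairs.append((ti, best))
            used.add(best)
    unmatched = [di for di in range(len(detections)) if di not in used]
    return pairs, unmatched
```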
18

Localization of UAVs Using Computer Vision in a GPS-Denied Environment

Aluri, Ram Charan 05 1900 (has links)
The main objective of this thesis is to propose a localization method for a UAV using various computer vision and machine learning techniques. Localization plays a major role in planning the flight strategy and acts as a navigational contingency method in the event of a GPS failure. The implementation of the algorithms exploits the high processing capability of the graphics processing unit, making it more efficient. The method involves several neural networks working in synergy to perform the localization. This thesis is part of a collaborative project between the University of North Texas, Denton, USA, and the University of Windsor, Ontario, Canada. The localization is divided into three phases: object detection, recognition, and location estimation. Object detection and position estimation are discussed in this thesis, along with a brief overview of recognition. Future strategies to help the UAV complete its mission in case of an eventuality, such as the introduction of an edge server and wireless charging methods, are also briefly introduced.
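The location-estimation phase can be illustrated with pinhole-camera back-projection of a detected landmark's pixel centre; the intrinsics and depth below are illustrative assumptions, not the project's calibration.

```python
# Minimal sketch: back-project the pixel centre of a detected landmark into a
# camera-frame 3D point using the pinhole model and an assumed depth.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],    # fx, 0, cx  (assumed intrinsics)
              [0.0, 800.0, 240.0],    # 0, fy, cy
              [0.0, 0.0, 1.0]])

def pixel_to_camera_point(u, v, depth_m, K):
    """Back-project pixel (u, v) at a known depth into camera coordinates (X, Y, Z)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # normalised image ray
    return ray * depth_m                             # scale so that Z equals the depth

bbox_centre = (350, 260)   # centre of a detected landmark bounding box (placeholder)
point_cam = pixel_to_camera_point(*bbox_centre, depth_m=12.0, K=K)
print(point_cam)  # the UAV pose is then inferred relative to known landmark positions
```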
19

DETECTION AND SEGMENTATION OF DEFECTS IN X-RAY COMPUTED TOMOGRAPHY IMAGE SLICES OF ADDITIVELY MANUFACTURED COMPONENT USING DEEP LEARNING

Acharya, Pradip 01 June 2021 (has links)
Additive manufacturing (AM) allows building complex shapes with high accuracy. X-ray Computed Tomography (XCT) is one of the most promising non-destructive evaluation techniques for detecting subsurface defects in additively manufactured components. Automatic defect detection and segmentation methods can assist part inspection for quality control. However, automatic detection and segmentation of defects in XCT data of AM parts is challenging due to the contrast, size, and appearance of the defects. In this research, different deep learning techniques were applied to publicly available XCT image datasets of additively manufactured cobalt-chrome samples produced by the National Institute of Standards and Technology (NIST). To assist data labeling, image-processing techniques were applied: median filtering, automatic local thresholding using Bernsen's algorithm, and contour detection. YOLOv5, a state-of-the-art convolutional neural network (CNN) based object-detection algorithm, was applied for defect detection. Defect segmentation in XCT slices was achieved with U-Net, a CNN-based network originally developed for biomedical image segmentation. Three variants of YOLOv5 (YOLOv5s, YOLOv5m, and YOLOv5l) were implemented in this study. YOLOv5s achieved a defect-detection mean average precision (mAP) of 88.45% at an intersection over union (IoU) threshold of 0.5, and YOLOv5m achieved an mAP of 57.78% over IoU thresholds from 0.5 to 0.95. In addition, YOLOv5s achieved a defect-detection recall of 87.65%, whereas YOLOv5l achieved a precision of 71.61%. YOLOv5 and U-Net show promising results for defect detection and segmentation, respectively. It is thus found that deep learning techniques can improve automatic defect detection and segmentation in XCT data of AM parts.
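The labeling-assistance pipeline (median filtering, local thresholding, contour detection) can be sketched with OpenCV; since OpenCV has no built-in Bernsen threshold, adaptive mean thresholding stands in for it here, and the kernel sizes and area cutoff are assumptions.

```python
# Minimal sketch of preprocessing an XCT slice to propose candidate defect regions
# that can seed YOLOv5 labels. Adaptive mean thresholding substitutes for Bernsen's
# algorithm; parameters are illustrative.
import cv2
import numpy as np

slice_img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # placeholder XCT slice

smoothed = cv2.medianBlur(slice_img, 5)
binary = cv2.adaptiveThreshold(smoothed, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 31, 5)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Candidate defect regions become bounding boxes after filtering out tiny blobs.
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 10]
print(len(boxes), "candidate defect regions")
```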
20

ERROR DETECTION IN PRODUCTION LINES VIA DEPENDABLE ARCHITECTURES IN CONVOLUTIONAL NEURAL NETWORKS

Olsson, Erik January 2023 (has links)
The need for products has increased during the last few years, and this high demand needs to be met with higher means of production. The use of neural networks can be key to increasing production without compromising product quality or the well-being of human workers. This thesis looks into the concept of reliable architectures in convolutional neural networks and how they can be implemented. The neural networks are trained to recognize features in images in order to identify certain objects; these recognitions are then compared across models to see which of them gave the best prediction. Using multiple models creates a reliable architecture from which results can be produced, and these results can then be used in combination with algorithms to improve prediction certainty. The aim of implementing the networks with these algorithms is to improve the results without having to change the network configurations.
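The model-comparison idea can be sketched as a simple majority vote across independently trained classifiers; the agreement rule below is an illustrative assumption, not the thesis's exact architecture.

```python
# Minimal sketch of a dependable ensemble: several independently trained CNNs classify
# the same image, and the prediction is only accepted when a majority of models agree.
from collections import Counter

def ensemble_vote(predictions, min_agreement=2):
    """predictions: one class label per model. Returns (label, confident)."""
    label, count = Counter(predictions).most_common(1)[0]
    return label, count >= min_agreement

# e.g. three CNNs classifying the same product image
print(ensemble_vote(["ok", "defect", "defect"]))                    # ('defect', True)
print(ensemble_vote(["ok", "defect", "scratch"], min_agreement=2))  # no majority -> low certainty
```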
