1

Scenanalys av trafikmiljön / Scene analysis of the traffic environment

Alsalehy, Ahmad, Alsayed, Ghada January 2021 (has links)
The number of road users increases every year, and congestion increases with it. Studies have therefore been carried out using object-detection algorithms on video streams. By analysing the resulting data it is possible to build better infrastructure that reduces congestion and accidents. The analysed data can, for example, be a count of how many road users travel on a given road (Slottsbron in Halmstad) during a given period. This thesis investigates theoretically how a YOLO algorithm together with TensorFlow can be used to detect different road users. The evaluation methods used in the project to obtain results and draw conclusions were mAP, training and testing of our own and others' YOLO models, and monitoring of FPS and temperature values. To enable real-time detection of traffic flow, the Jetson Nano toolkit was used. Several comparisons were made to determine which YOLO model is most suitable. The test results for the different YOLO models show that the YOLO TensorFlow implementations can detect road users with acceptable accuracy. The conclusion is that the Jetson Nano has enough processing power to detect different road users in real time using the original YOLO implementation. The methods for detecting road users are standard and work for analysing traffic flows. Testing in more varied traffic environments over longer periods is required to further verify the Jetson Nano's suitability.
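The mAP evaluation this thesis relies on matches predictions to ground truth by intersection-over-union (IoU). A minimal illustrative sketch of the IoU step — not code from the thesis:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap at all.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (0.5 is a common choice); precision is then averaged over recall levels to produce mAP.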
2

3D-Objekterkennung mit Jetson Nano und Integration mit KUKA KR6-Roboter für autonomes Pick-and-Place / 3D object detection with Jetson Nano and integration with a KUKA KR6 robot for autonomous pick-and-place

Pullela, Akhila, Wings, Elmar 27 January 2022 (has links)
Machine-vision systems offer innovative solutions for the manufacturing process. Cameras and the associated vision systems can be used to identify, inspect and localize parts on a conveyor belt or in a bin of parts. Robots are then used to pick up each part and place it in the assembly area, or even to carry out basic assembly directly. The system for this project consists of a KUKA KR6 900 robot that receives the position (x, y and z coordinates of the object's centroid) and the orientation of a component from a vision system based on a Jetson Nano. The goal of this project is automatic recognition of an object using a 2D camera and evaluation with the deep-learning algorithm Darknet YOLOv4, so that the robot can grip and place the object. The project uses two different object types: a cuboid and a cylinder. Image recognition is performed on the Jetson Nano, where real-world coordinates are computed from the pixel coordinates and then transmitted over the TCP/IP interface of the KUKA KR6 900 to carry out the pick-and-place operation. The flexibility of a robot whose control is supported by machine vision in this way can reduce the need for precisely engineered part feeders, increasing flexibility in the manufacturing cell and enabling short production runs and adaptability.
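The pipeline described — pixel centroid to robot coordinates, sent over TCP/IP — can be sketched as follows. This is a hedged illustration only: the calibration constants, the JSON message format and the helper names are invented here, and the thesis's actual transformation and KUKA protocol may differ.

```python
import json
import socket

# Hypothetical planar hand-eye calibration: image pixels to robot base frame (mm).
# All values are placeholders, not taken from the thesis.
MM_PER_PIXEL_X = 0.42
MM_PER_PIXEL_Y = 0.42
ORIGIN_X_MM = 120.0
ORIGIN_Y_MM = -80.0
TABLE_Z_MM = 35.0  # assumed constant pick height for a known part

def pixel_to_robot(u, v):
    """Map an object centroid in pixels to (x, y, z) in robot coordinates."""
    return (ORIGIN_X_MM + u * MM_PER_PIXEL_X,
            ORIGIN_Y_MM + v * MM_PER_PIXEL_Y,
            TABLE_Z_MM)

def send_pose(sock, u, v, angle_deg):
    """Serialize a pick pose and send it over an already-open TCP socket."""
    x, y, z = pixel_to_robot(u, v)
    msg = json.dumps({"x": x, "y": y, "z": z, "a": angle_deg})
    sock.sendall(msg.encode() + b"\n")
```

In practice the camera-to-robot transform would come from a calibration routine rather than fixed constants, and the message layout must match what the robot controller parses.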
3

Design and Implementation of Sensing Methods on One-Tenth Scale of an Autonomous Race Car

Veeramachaneni, Harshitha 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Self-driving is the capacity of a vehicle to drive itself without human intervention. To accomplish this, the vehicle uses mechanical and electronic parts, sensors, actuators and an AI computer. The on-board computer runs advanced software that allows the vehicle to perceive and understand its environment from sensor input, localize itself in that environment, and plan the optimal route from point A to point B. Autonomous driving is not an easy task, and developing autonomous-driving solutions is a highly valuable skill in today's software-engineering field. ROS is a robust and versatile communication middleware (framework) tailored to, and widely used for, robotics applications. This thesis shows how ROS can be used to create autonomous-driving software by investigating autonomous-driving problems, examining existing solutions and building a prototype vehicle using ROS. The main focus of this thesis is to develop and implement a one-tenth-scale autonomous RACECAR equipped with a Jetson Nano board as the on-board computer, a PCA9685 as the PWM driver, sensors, and a ROS-based software architecture. By following the methods presented in this thesis, it is possible to build an autonomous RACECAR that runs on ROS.
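The PCA9685 mentioned above generates servo/ESC signals with a 12-bit PWM counter. A small sketch of the pulse-width-to-tick conversion such a driver performs — the frequency and pulse values are typical hobby-servo numbers, not taken from the thesis, and the register writes a real driver library would do are omitted:

```python
PCA9685_RESOLUTION = 4096  # 12-bit PWM counter
PWM_FREQ_HZ = 50           # typical update rate for hobby servos and ESCs

def pulse_us_to_ticks(pulse_us, freq_hz=PWM_FREQ_HZ):
    """Convert a servo pulse width in microseconds to a 12-bit on-count.

    At 50 Hz the PWM period is 20,000 us, so a 1500 us neutral pulse
    occupies 7.5% of the period, i.e. about 307 of 4096 ticks.
    """
    period_us = 1_000_000 / freq_hz
    ticks = round(pulse_us / period_us * PCA9685_RESOLUTION)
    # Clamp to the counter's valid range.
    return max(0, min(PCA9685_RESOLUTION - 1, ticks))
```

A ROS node for steering would map a commanded angle to a pulse width and then call a conversion like this before writing the chip's registers over I2C.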
4

Machine Learning Aided Millimeter Wave System for Real Time Gait Analysis

Alanazi, Mubarak Alayyat 10 August 2022 (has links)
No description available.
5

Objektdetektering av trafikskyltar på inbyggda system med djupinlärning / Object detection of traffic signs on embedded systems using deep learning

Wikström, Pontus, Hotakainen, Johan January 2023 (has links)
In recent years, AI has developed significantly and become more popular than ever before. The applications of AI are expanding, making knowledge about its application, and about the systems it can be applied to, more important. This project compares and evaluates deep-learning models for object detection of traffic signs on the embedded systems Nvidia Jetson Nano and Raspberry Pi 3 Model B. The models compared and evaluated are YOLOv5, SSD MobileNet V1, FOMO and EfficientDet-Lite0. The project evaluates the performance of these models on the aforementioned embedded systems, measuring metrics such as CPU usage, FPS and RAM. Deep-learning models are resource-intensive, and embedded systems have limited resources. Embedded systems often have different processor architectures than regular computers, which means that some frameworks and libraries may not be compatible. The results show that the tested systems are capable of object detection, but with varying performance. The Jetson Nano performs at a level we consider sufficiently high for use in production, depending on the specific requirements. The Raspberry Pi 3 performs at a level that may not be acceptable for real-time recognition of traffic signs. We see the greatest potential in EfficientDet-Lite0 and YOLOv5 for recognizing traffic signs. The distance at which the models detect signs seems to be important for how many signs they find. For this reason, SSD MobileNet V1 is not recommended without further training despite its superior speed. YOLOv5 stood out as the model that detected signs at the longest distance and made the most detections overall. Considering all the results, we believe that EfficientDet-Lite0 is the model that performs best.
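The FPS figures such comparisons report come down to wall-clock timing over a batch of frames. A minimal harness sketch — not the project's actual benchmarking code, and `process_frame` here stands in for whatever inference call a given model exposes:

```python
import time

def measure_fps(process_frame, frames):
    """Time a per-frame callable over a list of frames and return average FPS."""
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")
```

For stable numbers one would warm the model up first (the first inference often pays one-time initialization costs) and time several hundred frames.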
6

Robotické následování osoby pomocí neuronových sítí / Robotic Tracking of a Person using Neural Networks

Zakarovský, Matúš January 2020 (has links)
The main goal of this thesis was to create a neural-network-based software solution capable of detecting a person and then following them. This was achieved by fulfilling the individual points of the assignment. The first part of the thesis describes the hardware, software libraries and application programming interfaces (APIs) used, as well as the robotic platform supplied by the robotics and artificial-intelligence group of the Department of Control and Instrumentation at Brno University of Technology, on which the resulting robot was built. A survey of several types of neural networks for person detection was then carried out. Four detectors were described in detail; some of them were later tested on a desktop computer or on an NVIDIA Jetson Nano. In the next step, a software solution consisting of five programs was created, achieving goals such as person detection using the ped-100 neural network, estimation of the person's real-world distance from the robot using a monocular camera, and control of the robot to successfully reach the target. The output of this thesis is a robotic platform capable of detecting and following a person, usable in practice.
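Estimating a person's distance from a single (monocular) camera, as this thesis does, is commonly done with the pinhole model and a known object height. A hedged sketch of that idea — the thesis's actual method may differ, and the focal length and height values below are illustrative:

```python
def distance_from_bbox(focal_px, real_height_m, bbox_height_px):
    """Pinhole-model range estimate: d = f * H / h.

    focal_px       -- camera focal length expressed in pixels
    real_height_m  -- assumed real-world height of the detected person (m)
    bbox_height_px -- height of the detection bounding box in pixels
    """
    return focal_px * real_height_m / bbox_height_px
```

With an assumed focal length of 700 px and a 1.7 m person, a 170 px tall bounding box maps to roughly 7 m. The estimate degrades when the person is partially occluded or the assumed height is wrong, which is why the bounding-box width or ground-plane geometry is sometimes used instead.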
7

VOICE COMMAND RECOGNITION WITH DEEP NEURAL NETWORK ON EDGE DEVICES

Md Naim Miah (11185971) 26 July 2021 (has links)
Interconnected devices are becoming attractive solutions for integrating physical parameters and making them more accessible for further analysis. Edge devices, located at the end of the physical world, measure and transfer data to a remote server using either wired or wireless communication. The exploding number of sensors used in the Internet of Things (IoT), medical fields and industry demands huge bandwidth and computational capability in the cloud, where the data is processed by Artificial Neural Networks (ANNs) – especially audio, video and images from hundreds of edge devices. Additionally, continuous transmission of information to a remote server not only hampers privacy but also increases latency and consumes more power. Deep Neural Networks (DNNs) are proving to be very effective for cognitive tasks, such as speech recognition and object detection, attracting researchers to apply them in edge devices. Microcontrollers and single-board computers are the most commonly used types of edge devices. They have gone through significant advancements over the years and can perform more sophisticated computations, making them a reasonable choice for implementing DNNs. In this thesis, a DNN model is trained and implemented for Keyword Spotting (KWS) on two types of edge devices: a bare-metal embedded device (microcontroller) and a robot car. The unnecessary components and noise of the audio samples are removed, and speech features are extracted using Mel-Frequency Cepstral Coefficients (MFCC). On the bare-metal microcontroller platform, these features are efficiently extracted using a Digital Signal Processing (DSP) library, which makes the calculation much faster. A Depthwise Separable Convolutional Neural Network (DSCNN) based model is proposed and trained to an accuracy of about 91% with only 721 thousand trainable parameters.
After implementing the DNN on the microcontroller, the converted model takes only 11.52 Kbyte (2.16%) of RAM and 169.63 Kbyte (8.48%) of Flash on the test device. It performs 287,673 Multiply-and-Accumulate (MACC) operations and takes about 7 ms to execute the model. The trained model was also implemented on the robot car, Jetbot, to build a voice-controlled robotic vehicle. The robot accepts a few selected voice commands, such as "go" and "stop", and executes them with reasonable accuracy. The Jetbot takes about 15 ms to execute the KWS. Thus, this study demonstrates the implementation of neural-network-based KWS on two different types of edge devices: a bare-metal embedded device without any Operating System (OS) and a robot car running embedded Linux. It also shows the feasibility of bare-metal offline KWS for autonomous systems, particularly autonomous vehicles.
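Depthwise separable convolutions, as used in the DSCNN above, cut MACC counts by splitting a standard convolution into a per-channel depthwise pass and a 1x1 pointwise pass. A small illustrative calculator for the stride-1, same-padding case — the layer shapes are made-up examples, not the thesis's architecture:

```python
def dsconv_maccs(h, w, c_in, c_out, k):
    """MACCs for one depthwise-separable conv layer (stride 1, 'same' padding).

    Depthwise pass: one k x k filter per input channel.
    Pointwise pass: a 1 x 1 convolution mixing c_in channels into c_out.
    """
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

def standard_conv_maccs(h, w, c_in, c_out, k):
    """MACCs for the equivalent standard convolution."""
    return h * w * c_in * c_out * k * k
```

For a 10x10 feature map with 8 input and 16 output channels and 3x3 kernels, the separable form needs 20,000 MACCs against 115,200 for the standard convolution — the roughly 1/k² + 1/c_out saving that makes these layers attractive on microcontrollers.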
8

Hardware Implementation of Learning-Based Camera ISP for Low-Light Applications

Preston Rashad Rahim (17676693) 20 December 2023 (has links)
A camera's image signal processor (ISP) is responsible for taking the mosaiced and noisy image signal from the image sensor and processing it in such a way that the end-result image is informative and accurately captures the scene. Real-time video capture in photon-limited environments remains a challenge for many ISPs today. In these conditions, the image signal is dominated by photon shot noise. Deep-learning methods show promise in extracting the underlying image signal from the noise, but modern AI-based ISPs are too computationally complex to be realized as fast and efficient hardware ISPs. An ISP algorithm, BLADE2, has been designed, which leverages AI in a computationally conservative manner to demosaic and denoise low-light images. The original implementation of this algorithm is in Python/PyTorch. This thesis explores taking BLADE2 and implementing it on a general-purpose GPU via a suite of Nvidia optimization toolkits, as well as a low-level implementation in C/C++, bringing the algorithm closer to FPGA realization. The GPU implementation demonstrated significant throughput gains, and the C/C++ implementation demonstrated the feasibility of further hardware development.
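The "photon-limited" regime above follows directly from Poisson statistics: when shot noise dominates, the noise is the square root of the photon count, so SNR grows only as the square root of the signal. A short sketch of that relation (illustrative, not from the thesis):

```python
import math

def shot_noise_snr(photons_per_pixel):
    """SNR of a photon-shot-noise-limited pixel: N / sqrt(N) = sqrt(N)."""
    return math.sqrt(photons_per_pixel)

def snr_db(snr_linear):
    """Express a linear amplitude SNR in decibels."""
    return 20 * math.log10(snr_linear)
```

At 100 photons per pixel the SNR is only 10 (20 dB), and every doubling of light buys just 1.5 dB — which is why low-light ISPs lean so heavily on denoising rather than exposure.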
9

Methods for Multisensory Detection of Light Phenomena on the Moon as a Payload Concept for a Nanosatellite Mission

Maurer, Andreas January 2020 (has links)
For 500 years, transient light phenomena (TLP) have been observed on the lunar surface by ground-based observers. The actual physical cause of most of these events is still unknown. Current plans by NASA and SpaceX to send astronauts back to the Moon, and already-successful deep-space CubeSat missions, will in the future allow research nanosatellite missions to cislunar space. This thesis presents a new hardware and software concept for a future payload on such a nanosatellite. The main task was to develop and implement a high-performance image-processing algorithm to detect short brightening flashes on the lunar surface. Possible reference scenarios were analyzed based on a review of historically reported phenomena, possible explanatory theories for these phenomena, and currently active and planned ground- or space-based observatories. From the presented scenarios, the detection of brightening events was chosen and requirements for this scenario were stated. Afterwards, possible detectors, processing computers and image-processing algorithms were researched and compared against the specified requirements. This analysis of available algorithms was used to develop a new high-performance algorithm for detecting transient brightening events on the Moon. The implementation of this algorithm, running on the processor and the integrated GPU of a Mac mini, achieved a frame rate of 55 FPS while processing images with a resolution of 4.2 megapixels. Its functionality and performance were verified on the remote telescope operated by the Chair of Space Technology of the University of Würzburg. Furthermore, the developed algorithm was successfully ported to the Nvidia Jetson Nano and its performance compared with an FPGA-based image-processing algorithm. The results were used to choose an FPGA as the main processing computer of the payload. The concept uses two backside-illuminated CMOS image sensors connected to a single FPGA, on which the developed image-processing algorithm is to be implemented. Further work is required to realize the proposed concept by building the actual hardware and porting the developed algorithm to this platform.
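The core of any brightening-flash detector is comparing each new frame against a reference and flagging sudden positive excursions. A deliberately simplified sketch of that idea, operating on per-frame mean brightness values — the thesis's algorithm works on full images and is far more elaborate (per-pixel comparison, cosmic-ray and satellite-glint rejection), so treat this only as an illustration:

```python
def detect_flashes(frame_means, threshold=5.0):
    """Return indices of frames whose mean brightness jumps above the previous frame.

    frame_means -- sequence of per-frame mean pixel values (arbitrary counts)
    threshold   -- minimum positive jump, in the same counts, to call a flash
    """
    events = []
    for i in range(1, len(frame_means)):
        if frame_means[i] - frame_means[i - 1] > threshold:
            events.append(i)
    return events
```

A per-pixel version of the same comparison is what makes the GPU and FPGA implementations interesting: at 4.2 megapixels and 55 FPS, the difference-and-threshold step alone runs over 200 million pixel comparisons per second.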
