1

Increasing public's value-action on climate change: Integrating intelligence analytics to edge devices in industry 4.0

Fauzi, Muhammad Alfalah; Saragih, Harriman Samuel; Dwiandani, Amalia. 12 March 2020
The rapid growth of Big Data and the Internet of Things (IoT) offers promising potential for advancing methods and applications that increase public awareness of climate change. The fundamental principle behind this approach is to quantify the major factors that affect climate change, the best known being greenhouse gases (GHG), with CO2, methane, and nitrous oxide as major contributors. By utilizing Big Data and IoT, an approximate GHG release can be calculated and embedded inside common household devices such as thermostats and water/heat/electricity/gas meters. An example is the CO2 released per cubic meter of water. Using reverse calculation, an approximate CO2 release can be derived sequentially as follows: (1) the water meter measures consumption, (2) calculate the horsepower and kWh of the pump used to supply one m3 of water, (3) calculate the amount of fossil fuel needed to produce one kWh, and (4) calculate the CO2 released to the atmosphere from burning that fossil fuel (per metric ton/barrel). Such analytics are then embedded in household devices, providing updated information on the GHG produced by hourly/daily/weekly/monthly energy usage, thereby educating the public and increasing their awareness of climate change. This approach can be further developed to provide an alarm indicating the share of GHG released to the atmosphere due to excessive use of electricity/water/gas. Further actions to influence socio-economic behaviour can later be established, such as a government rewards program for people who successfully manage their GHG emissions.
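As a hedged illustration of the reverse-calculation chain described in this abstract, the sketch below walks one cubic meter of metered water back to an approximate CO2 figure. The pump energy and grid emission factor are assumed constants chosen for illustration, not values from the thesis.

```python
# Hypothetical sketch of the reverse calculation described above.
# All constants are illustrative assumptions, not figures from the thesis.

PUMP_KWH_PER_M3 = 0.45        # assumed electricity to pump/treat 1 m^3 of water
GRID_KG_CO2_PER_KWH = 0.7     # assumed emission factor of a fossil-fuel-heavy grid

def co2_from_water(m3_consumed: float) -> float:
    """Approximate kg of CO2 attributable to a metered water volume."""
    kwh = m3_consumed * PUMP_KWH_PER_M3          # (2) pump energy for the volume
    return kwh * GRID_KG_CO2_PER_KWH             # (3)+(4) fuel burned per kWh -> CO2

if __name__ == "__main__":
    daily_m3 = 0.9                               # (1) reading from the water meter
    print(f"~{co2_from_water(daily_m3):.2f} kg CO2 for {daily_m3} m3 of water")
```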
2

Edge Machine Learning for Wildlife Conservation : Detection of Poachers Using Camera Traps

Arnesson, Pontus; Forslund, Johan. January 2021
This thesis presents how deep learning can be utilized for detecting humans in a wildlife setting using image classification. Two different solutions have been implemented, both of which use a camera-equipped microprocessor to capture the images. In one of the solutions, the deep learning model is run on the microprocessor itself, which requires the size of the model to be as small as possible. The other solution sends images from the microprocessor to a more powerful computer where a larger object detection model is run. Both solutions are evaluated using standard image classification metrics and compared against each other. To adapt the models to the wildlife environment, transfer learning is used with training data from a similar setting that has been manually collected and annotated. The thesis describes a complete system's implementation and results, including data transfer, parallel computing, and hardware setup.

One of the contributions of this thesis is an algorithm that improves the classification performance on images where a human is far away from the camera. The algorithm detects motion in the images and extracts only the area where there is movement. This is specifically important on the microprocessor, where the classification model is too simple to handle those cases. By only applying the classification model to this area, the task becomes simpler, resulting in better performance. In conclusion, when integrating this algorithm, a model running on the microprocessor gives sufficient results to run as a camera trap for humans. However, test results show that this implementation still underperforms compared to a model run on a more powerful computer.
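A minimal sketch of the motion-crop idea described above, assuming an OpenCV-based frame-differencing pipeline and a generic `classify` stub; these stand in for the thesis's actual on-device implementation.

```python
# Hypothetical sketch: crop to the moving region before classification.
# Library choice (OpenCV) and the classify() stub are assumptions, not the thesis code.
import cv2
import numpy as np

def moving_region(prev_gray: np.ndarray, gray: np.ndarray, min_area: int = 200):
    """Return the bounding box (x, y, w, h) of the largest moving area, or None."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

def classify_frame(prev_frame, frame, classify):
    """Apply the (small) classifier only to the area where motion was detected."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    box = moving_region(prev_gray, gray)
    if box is None:
        return None                               # nothing moved, skip inference
    x, y, w, h = box
    return classify(frame[y:y + h, x:x + w])      # e.g. a tiny human/no-human model
```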
3

TOWARDS REVERSE ENGINEERING DEEP NEURAL NETWORKS ON EDGE DEVICES

Ruoyu Wu. 20 June 2024
Deep Neural Networks (DNNs) have been deployed on edge devices for numerous applications, including computer vision, speech recognition, and anomaly detection. When deployed on edge devices, dedicated DNN compilers are used to compile DNNs into binaries that exploit instruction set architecture (ISA) features and hardware accelerators (e.g., NPU, GPU). These DNN binaries on edge devices process sensitive user information, conduct critical missions, and are considered confidential intellectual property.

From the security standpoint, the ability to reverse engineer such binaries (i.e., recover the original, high-level representation of the implemented DNN) enables several applications, such as DNN model stealing, gray/white-box adversarial machine learning attacks and defenses, and backdoor detection. However, no existing reverse engineering technique can recover a high-level representation of a DNN model from its compiled binary code.

In this dissertation, we propose the following pioneering research for reverse engineering DNNs on edge devices. (i) We design and implement the first compiler- and ISA-agnostic DNN decompiler, DnD, based on static analysis and capable of extracting DNN models from DNN binaries running on CPU-only devices without a hardware accelerator. We show that our decompiler can perfectly recover DNN models from different DNN binaries. Furthermore, it can extract DNN models used by real-world micro-controllers and enable white-box adversarial machine learning attacks against those models. (ii) We design and implement a novel data-driven approach, NeuroScope, based on dynamic analysis and machine learning to reverse engineer DNN binaries. This compiler-independent and code-feature-free approach supports a larger variety of DNN binaries across different DNN compilers and hardware platforms. We demonstrate its capability by using it to reverse engineer DNN binaries unsupported by previous approaches with high accuracy. Moreover, we showcase how NeuroScope can be used to reverse engineer a proprietary DNN binary compiled with a closed-source compiler and enable gray-box adversarial machine learning attacks.
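To illustrate why recovering a high-level model matters for security, the sketch below shows a standard white-box FGSM attack of the kind such a recovered model enables. It assumes the recovered network has been re-materialized as a Keras model named `recovered_model`; this is an illustrative assumption, not the dissertation's tooling.

```python
# Hypothetical sketch: white-box FGSM attack on a recovered model.
# `recovered_model` is an assumed tf.keras.Model rebuilt from a decompiled binary.
import tensorflow as tf

def fgsm_attack(recovered_model, x, y_true, epsilon=0.01):
    """Perturb input x in the gradient-sign direction to flip the prediction."""
    x = tf.convert_to_tensor(x)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
    with tf.GradientTape() as tape:
        tape.watch(x)
        pred = recovered_model(x)
        loss = loss_fn(y_true, pred)
    grad = tape.gradient(loss, x)
    x_adv = x + epsilon * tf.sign(grad)          # classic FGSM step
    return tf.clip_by_value(x_adv, 0.0, 1.0)     # keep pixels in a valid range
```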
4

A Smart Surveillance System Using Edge-Devices for Wildlife Preservation in Animal Sanctuaries

Linder, Johan; Olsson, Oscar. January 2022
The Internet of Things is a constantly developing field. With advancements in algorithms for object detection and classification in images and videos, the possibilities of what can be done with small, cost-efficient edge devices are increasing. This work presents how camera traps and deep learning can be utilized for surveillance in remote environments, such as animal sanctuaries in the African savannah. The camera traps connect to a smart surveillance network where images and sensor data are analysed. The analysis can then be used to produce valuable information, such as the location of endangered animals or unauthorized humans, for park rangers working to protect the wildlife in these animal sanctuaries. Different motion detection algorithms are tested and evaluated based on related research within the subject. The work made in this thesis builds upon two previous theses made within Project Ngulia. The implemented surveillance system in this project consists of camera sensors, a database, a REST API, a classification service, an FTP server and a web dashboard for displaying sensor data and resulting images. A contribution of this work is an end-to-end smart surveillance system that can use different camera sources to produce valuable information for stakeholders. The camera software developed in this work targets the ESP32-based M5Stack Timer Camera and runs a motion detection algorithm based on Self-Organizing Maps. This improves the selection of data that is fed to the image classifier on the server. This thesis also contributes an algorithm for iterative image classification that handles the issue of objects occupying only a small part of an image, which makes them harder to classify correctly.
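As a hedged sketch of the iterative-classification idea mentioned above, the snippet below tiles an image and re-runs a generic classifier on each tile so that small objects occupy a larger share of the classified crop. The tiling scheme and the `classify` stub are assumptions for illustration, not the thesis's exact algorithm.

```python
# Hypothetical sketch: classify overlapping tiles so small objects fill the crop.
# Tile size, stride, and the classify() stub are illustrative assumptions.
import numpy as np

def classify_tiles(image: np.ndarray, classify, tile=224, stride=112, threshold=0.5):
    """Run the classifier on overlapping tiles; return detections above threshold."""
    h, w = image.shape[:2]
    hits = []
    for y in range(0, max(h - tile, 0) + 1, stride):
        for x in range(0, max(w - tile, 0) + 1, stride):
            crop = image[y:y + tile, x:x + tile]
            label, score = classify(crop)         # e.g. ("elephant", 0.83)
            if score >= threshold:
                hits.append((label, score, (x, y, tile, tile)))
    return hits
```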
5

Cross-layer optimization for joint visual-inertial localization and object detection on resource-constrained devices

Baldassari, Elisa. January 2021
Expectations for running high-performance cyber-physical applications on resource-constrained devices are continuously increasing. The available hardware is still a main limitation in this context, both in terms of computation capability and energy budget. At the same time, one must ensure the robust and accurate execution of the deployed applications, since their failure may entail risks for humans and the surrounding environment. These limits and risks grow when multiple applications are executed on the same device. This thesis focuses on providing a trade-off between required performance and power consumption for two fundamental applications in the mobile autonomous vehicle scenario: localization and object detection. The multi-objective optimization is performed in a cross-layer manner, exploring both application and platform configurable parameters with Design Space Exploration (DSE). The metrics of interest are localization and detection accuracy, detection latency, and power consumption. Predictive models are designed to estimate these metrics and ensure robust execution, excluding potentially faulty configurations from the design space. The research is approached empirically, performing tests on the Nvidia Jetson AGX and NX platforms. Results show that optimal configurations for a single application are in general sub-optimal or faulty in the concurrent execution case, while the opposite is sometimes applicable.
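A minimal sketch of the kind of design space exploration described above: enumerate configurations, drop those a predictive model flags as faulty, and keep the Pareto-optimal accuracy/latency/power trade-offs. The parameter names and the `predict` stub are assumptions for illustration, not the thesis's actual design space.

```python
# Hypothetical sketch: cross-layer DSE with a faulty-configuration filter
# and Pareto selection. Parameter names and predict() are illustrative assumptions.
from itertools import product

def pareto_front(points):
    """Keep configs not dominated on (accuracy high, latency low, power low)."""
    def dominates(a, b):
        return (a["acc"] >= b["acc"] and a["lat"] <= b["lat"] and a["pow"] <= b["pow"]
                and (a["acc"] > b["acc"] or a["lat"] < b["lat"] or a["pow"] < b["pow"]))
    return [p for p in points if not any(dominates(q, p) for q in points)]

def explore(predict):
    # assumed knobs: input resolution, model scale, platform clock profile
    space = product([320, 640], [0.25, 0.5, 1.0], ["low", "mid", "max"])
    candidates = []
    for res, scale, clock in space:
        cfg = {"res": res, "scale": scale, "clock": clock}
        est = predict(cfg)            # predictive model: metrics dict, or None if faulty
        if est is None:
            continue                  # excluded as a potentially faulty configuration
        candidates.append({**cfg, **est})
    return pareto_front(candidates)
```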
6

Real Time Vehicle Detection for Intelligent Transportation Systems

Shurdhaj, Elda; Christián, Ulehla. January 2023
This thesis aims to analyze how object detectors perform under winter weather conditions, specifically in areas with varying degrees of snow cover. The investigation evaluates the effectiveness of commonly used object detection methods in identifying vehicles in snowy environments, including YOLOv8, YOLOv5, and Faster R-CNN. Additionally, the study explores the method of labeling vehicle objects within a set of image frames to produce high-quality annotations in terms of correctness, detail, and consistency. Training data is the cornerstone upon which the development of machine learning is built: inaccurate or inconsistent annotations can mislead the model, causing it to learn incorrect patterns and features. Data augmentation techniques such as rotation, scaling, and color alteration have been applied to improve robustness to these variations. The study aims to contribute to the field of deep learning by providing valuable insights into the challenges of detecting vehicles in snowy conditions and offering suggestions for improving the accuracy and reliability of object detection systems. Furthermore, the investigation examines edge devices' real-time tracking and detection capabilities when applied to aerial images under these weather conditions. What drives this research is the need to address the research gap concerning vehicle detection using drones, especially in adverse weather conditions; substantial datasets were scarce before Mokayed et al. published the Nordic Vehicle Dataset. Using unmanned aerial vehicles (UAVs), or drones, to capture real images in different settings and under various snow cover conditions in the Nordic region expands the existing dataset, which had previously been restricted to non-snowy weather conditions. In recent years, the use of drones to capture real-time data for optimizing intelligent transport systems has surged. The potential of drones to provide an aerial perspective, efficiently collecting data over large areas for precise and timely monitoring of vehicular movement, is an area that is imperative to address. Snowy weather conditions can create an environment of limited visibility, significantly complicating data interpretation and object detection. The emphasis is on edge devices' real-time tracking and detection capabilities; this study therefore integrates edge computing into drone technologies to explore the speed and efficiency of data processing in such systems.
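A hedged sketch of the kind of augmentation mentioned above (rotation, scaling, color alteration), written with TensorFlow image ops; the specific parameter ranges are illustrative assumptions rather than the thesis's training configuration.

```python
# Hypothetical sketch: simple rotation / scale / color augmentation for training images.
# Parameter ranges are illustrative assumptions, not the thesis's settings.
import random
import tensorflow as tf

def augment(image: tf.Tensor) -> tf.Tensor:
    """Randomly rotate (90-degree steps), rescale, and jitter color of one HxWx3 float image."""
    image = tf.image.rot90(image, k=random.randint(0, 3))
    h, w = image.shape[0], image.shape[1]
    scale = random.uniform(0.8, 1.2)
    image = tf.image.resize(image, (int(h * scale), int(w * scale)))
    image = tf.image.resize_with_crop_or_pad(image, h, w)   # back to the original size
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_saturation(image, 0.8, 1.2)
    return tf.clip_by_value(image, 0.0, 1.0)
```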
7

Hardware/Software Co-Design for Keyword Spotting on Edge Devices

Jacob Irenaeus M Bushur. 29 April 2023
The introduction of artificial neural networks (ANNs) to speech recognition applications has sparked the rapid development and popularization of digital assistants. These digital assistants perform keyword spotting (KWS), constantly monitoring the audio captured by a microphone for a small set of words or phrases known as keywords. Upon recognizing a keyword, a larger audio recording is saved and processed by a separate, more complex neural network. More broadly, neural networks in speech recognition have popularized voice as a means of interacting with electronic devices, sparking an interest among individuals in using speech recognition in their own projects. However, while large companies have the means to develop custom neural network architectures alongside proprietary hardware platforms, such development precludes those lacking similar resources from developing efficient and effective neural networks for embedded systems. While small, low-power embedded systems are widely available in the hobbyist space, a clear process is needed for developing a neural network that accounts for the limitations of these resource-constrained systems. In contrast, a wide variety of neural network architectures exists, but often little thought is given to deploying these architectures on edge devices.

This thesis first presents an overview of audio processing techniques, artificial neural network fundamentals, and machine learning tools. A summary of a set of specific neural network architectures is also discussed. Finally, the process of implementing and modifying these existing neural network architectures and training specific models in Python using TensorFlow is demonstrated. The trained models are also subjected to post-training quantization to evaluate the effect on model performance. The models are evaluated using metrics relevant to deployment on resource-constrained systems, such as memory consumption, latency, and model size, in addition to the standard comparisons of accuracy and parameter count. After evaluating the models and architectures, the process of deploying one of the trained and quantized models is explored on an Arduino Nano 33 BLE using TensorFlow Lite for Microcontrollers and on a Digilent Nexys 4 FPGA board using CFU Playground.
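A minimal sketch of TensorFlow post-training quantization of the kind described above, assuming a trained Keras model named `model` and a small `representative_audio_dataset` generator; both names are illustrative placeholders rather than artifacts from the thesis.

```python
# Hypothetical sketch: full-integer post-training quantization with TFLite.
# `model` and `representative_audio_dataset` are assumed placeholders.
import tensorflow as tf

def quantize(model, representative_audio_dataset):
    """Convert a trained Keras KWS model to an int8 TFLite flatbuffer."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_audio_dataset   # calibrates scales
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    tflite_model = converter.convert()
    with open("kws_int8.tflite", "wb") as f:    # ready for TFLite Micro / a C array
        f.write(tflite_model)
    return tflite_model
```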
