71

Energy-Efficient Detection of Atrial Fibrillation in the Context of Resource-Restrained Devices

Kheffache, Mansour January 2019 (has links)
eHealth is an emerging practice at the intersection of the ICT and healthcare fields, in which computing and communication technology is used to improve traditional healthcare processes or to create new opportunities for better health services; it can be considered under the umbrella of the Internet of Things. A common practice in eHealth is the use of machine learning for computer-aided diagnosis, where an algorithm is fed a biomedical signal and provides a diagnosis in the same way a trained radiologist would. This work considers the task of atrial fibrillation detection and proposes a range of novel algorithms designed for energy efficiency. Based on our working hypothesis that computationally simple operations and low-precision data types are key to energy efficiency, we evaluate various algorithms in the context of resource-constrained health-monitoring wearable devices. Finally, we assess the sustainability dimension of the proposed solution.
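As a hedged illustration of that working hypothesis, the sketch below screens for rhythm irregularity using only integer arithmetic on RR intervals (the beat-to-beat spacing of an ECG). The statistic, window size, and threshold are invented for illustration and are not the algorithms the thesis evaluates.

```python
# Hypothetical sketch: an integer-only AF screen based on RR-interval
# irregularity. Threshold and window size are illustrative assumptions,
# not values from the thesis.
def af_suspected(rr_ms, threshold_ms2=10000, window=30):
    """rr_ms: consecutive RR intervals in integer milliseconds."""
    recent = rr_ms[-window:]
    if len(recent) < 2:
        return False
    # Mean squared successive difference, integer arithmetic only.
    diffs = [(b - a) ** 2 for a, b in zip(recent, recent[1:])]
    mssd = sum(diffs) // len(diffs)
    return mssd > threshold_ms2

print(af_suspected([810, 640, 930, 700, 1010, 620, 880]))  # irregular -> True
```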
72

A Low Power AI Inference Accelerator for IoT Edge Computing

Hansson, Olle January 2021 (has links)
This thesis investigates the possibility of porting a neural network model trained and modeled in TensorFlow to a low-power AI inference accelerator for IoT edge computing. A slightly modified LeNet-5 neural network model is presented and implemented such that an input rate of 10 frames per second is possible while consuming 4 mW of power. The system is simulated in software and synthesized using the FreePDK45 technology library. The simulation results show no loss of accuracy, but the synthesis results are less favorable for area and power. The default version of the accelerator uses the single-precision floating-point format (float32), while a modified accelerator using the bfloat16 number representation shows significant improvements in area and power with almost no additional loss of accuracy.
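To make the bfloat16 trade-off concrete, here is a small sketch that emulates bfloat16 by keeping only the top 16 bits of a float32 (sign, 8-bit exponent, 7-bit mantissa). Hardware converters usually round to nearest even; plain truncation is used here for brevity, and this is not the accelerator's actual conversion logic.

```python
import numpy as np

# Emulate bfloat16: same exponent range as float32, mantissa cut to 7 bits.
def to_bfloat16(x):
    bits = np.array([x], dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)[0]

print(to_bfloat16(0.15625))     # 0.15625 -- exactly representable, no loss
print(to_bfloat16(3.14159265))  # ~3.140625 -- small 7-bit mantissa error
```

The halved word width is what drives the reported area and power savings, while the preserved 8-bit exponent keeps dynamic range, explaining the near-unchanged accuracy.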
73

Edge Processing of Image for UAS Sense and Avoidance

Rave, Christopher J. 26 August 2021 (has links)
No description available.
74

TOWARDS TRUSTWORTHY ON-DEVICE COMPUTATION

Heejin Park (12224933) 20 April 2022 (has links)
Driven by breakthroughs in mobile and IoT devices, on-device computation is becoming increasingly promising. Meanwhile, there is a growing concern over its security: it faces many threats in the wild while not supervised by security experts, and the computation is highly likely to touch users' privacy-sensitive information. Towards trustworthy on-device computation, we present novel system designs focusing on two key applications: stream analytics, and machine learning training and inference.

First, we introduce StreamBox-TZ (SBT), a secure stream analytics engine for ARM-based edge platforms. SBT contributes a data plane that isolates only the analytics' data and computation in a trusted execution environment (TEE). By design, SBT achieves a minimal trusted computing base (TCB) inside the TEE, incurring modest security overhead.

Second, we design a minimal GPU software stack (50 KB), called GPURip. GPURip allows developers to record GPU computation ahead of time, to be replayed later on client devices. In doing so, GPURip excludes the original GPU stack from run time, eliminating its wide attack surface and exploitable vulnerabilities.

Finally, we propose CoDry, a novel approach for a TEE to record GPU computation remotely. CoDry provides online GPU recording in a safe and practical way: it hosts GPU stacks in the cloud that collaboratively perform a dry run with client GPU models. To overcome frequent interactions over a wireless connection, CoDry implements a suite of key optimizations.
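The record-ahead-of-time idea can be sketched generically. The toy log-and-replay pattern below only illustrates the concept; it is not GPURip's interface, which operates on real GPU command streams rather than Python objects.

```python
# Illustrative record/replay pattern (not GPURip's actual API): commands
# are issued against a full stack once, offline, then replayed on the
# device from the captured log without the stack being present.
class Recorder:
    def __init__(self):
        self.log = []

    def dispatch(self, kernel, args):
        self.log.append((kernel, args))   # record instead of executing

def replay(log, execute):
    for kernel, args in log:
        execute(kernel, args)             # thin replayer, no driver stack

rec = Recorder()
rec.dispatch("conv2d", {"stride": 1})
rec.dispatch("relu", {})
replay(rec.log, lambda k, a: print("run", k, a))
```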
75

Offline Task Scheduling in a Three-layer Edge-Cloud Architecture

Mahjoubi, Ayeh January 2023 (has links)
Internet of Things (IoT) devices are increasingly being used everywhere, from the factory to the hospital to the home to the car. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks, and many obstacles must be overcome when offloading tasks to the cloud. In practice, an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices. In this thesis, we model the offloading problem in an edge-cloud infrastructure as a Mixed-Integer Linear Programming (MILP) problem and look for efficient optimization techniques to tackle it, aiming to minimize the total delay of the system after completing all tasks of all services requested by all users. To accomplish this, we use exact approaches such as the simplex method to find a solution to the MILP problem. Because exact techniques require substantial processing resources and considerable time, we propose several heuristic and meta-heuristic methods and use the simplex results as a benchmark to evaluate them. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results; meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems. We propose two meta-heuristic approaches, one based on a genetic algorithm and the other on simulated annealing. Compared to the heuristic algorithms, the genetic algorithm-based method yields a more accurate solution but requires more time and resources, while the simulated annealing-based method is a better fit for the problem, producing more accurate solutions in less time than the genetic algorithm-based method.
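As a rough illustration of the meta-heuristic side, the toy sketch below applies simulated annealing to a three-task assignment over device, edge, and cloud tiers. The delay table and cooling schedule are invented, and the thesis's MILP includes many constraints omitted here.

```python
import math, random

# Toy simulated annealing for task placement (fabricated delays).
delay = {"device": [9, 7, 8], "edge": [4, 5, 3], "cloud": [6, 6, 6]}
tiers = list(delay)

def total_delay(assign):
    return sum(delay[t][i] for i, t in enumerate(assign))

assign = [random.choice(tiers) for _ in range(3)]
temp = 5.0
while temp > 0.01:
    cand = assign[:]
    cand[random.randrange(3)] = random.choice(tiers)  # perturb one task
    d = total_delay(cand) - total_delay(assign)
    # Accept improvements always; accept worse moves with a probability
    # that shrinks as the temperature cools.
    if d < 0 or random.random() < math.exp(-d / temp):
        assign = cand
    temp *= 0.95

print(assign, total_delay(assign))  # e.g. ['edge', 'edge', 'edge'] 12
```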
76

Dynamic container orchestration for a device-cloud continuum

Alfonso Rodriguez Garzon, Camilo January 2023 (has links)
Edge computing has emerged as a paradigm to support the growing demand for real-time processing of data generated at the edge of the network. Since devices at the edge are resource-constrained, one of the challenges in the area is how to schedule workloads. The scheduling problem is difficult to tackle due to the multitude of sources from which variables originate, the diversity of algorithms and execution methods, and tasks involving information dissemination and action execution. This project explores the problem and implements a system that simplifies the construction of a scheduler for edge computing, reducing the cognitive load on developers working in the area and letting them focus their attention on their domain of expertise. To construct the solution, a literature review is conducted, a set of functional and non-functional requirements is proposed, an implementation using a Kubernetes operator and a Python application is developed, and the solution is evaluated and validated against the requirements as well as a use case and a test case. The results demonstrate that the system generates customized instances capable of receiving any number of inputs, outsourcing the execution of the scheduling logic, and interacting with different outputs. This allows developers to rapidly deploy instances for their own needs, focusing on their domain of expertise.
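As a hedged illustration of the kind of pluggable scoring logic such a scheduler instance might outsource, the snippet below combines arbitrary input signals into a single node score. The signal names and weights are invented; this is not the project's Kubernetes operator code.

```python
# Hypothetical node-scoring step: any number of input signals folded
# into one score per candidate node (weights are illustrative only).
def score(node, weights={"free_cpu": 0.5, "free_mem": 0.3, "rtt_ms": -0.2}):
    return sum(w * node.get(k, 0) for k, w in weights.items())

nodes = [
    {"name": "edge-a",  "free_cpu": 0.8, "free_mem": 0.5, "rtt_ms": 5},
    {"name": "cloud-1", "free_cpu": 0.9, "free_mem": 0.9, "rtt_ms": 40},
]
best = max(nodes, key=score)
print(best["name"])  # edge-a: the latency penalty dominates here
```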
77

Belief Rule-Based Workload Orchestration in Multi-access Edge Computing

Jamil, Mohammad Newaj January 2022 (has links)
Multi-access Edge Computing (MEC) is a standard network architecture for edge computing, proposed to handle the tremendous computation demands of emerging resource-intensive and latency-sensitive applications and services and to accommodate the Quality of Service (QoS) requirements of ever-growing user populations through computation offloading. Since end-user demand is unknown in a rapidly changing dynamic environment, processing offloaded tasks on a non-optimal server can deteriorate QoS due to high latency and increased task failures. To deal with this challenge in MEC, a two-stage Belief Rule-Based (BRB) workload orchestrator is proposed to distribute the workload of end-users to the optimal computing units, support strict QoS requirements, ensure efficient utilization of computational resources, minimize task failures, and reduce the overall service time. The proposed BRB workload orchestrator decides the optimal execution location for each task offloaded from User Equipment (UE) within the overall MEC architecture based on network conditions, computational resources, and task requirements. The EdgeCloudSim simulator is used to conduct comprehensive simulation experiments evaluating the performance of the proposed BRB orchestrator against four workload orchestration approaches from the literature with different types of applications. In these experiments, the proposed workload orchestrator outperforms the state-of-the-art approaches and ensures efficient utilization of computational resources while minimizing task failures and reducing the overall service time.
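For a flavor of belief-rule inference, the sketch below computes activation weights from antecedent matching degrees and combines rule beliefs by a simple weighted average. The full BRB methodology combines rules with the evidential reasoning approach rather than a plain average, and all numbers here are invented.

```python
# Simplified belief-rule activation (a sketch, not the thesis's full
# evidential-reasoning combination). Each rule carries antecedent
# matching degrees, a rule weight, and beliefs over {edge, cloud}.
rules = [
    {"match": [0.9, 0.7], "weight": 1.0, "belief": {"edge": 0.8, "cloud": 0.2}},
    {"match": [0.2, 0.9], "weight": 0.8, "belief": {"edge": 0.3, "cloud": 0.7}},
]

def prod(xs):
    p = 1.0
    for x in xs:
        p *= x
    return p

acts = [r["weight"] * prod(r["match"]) for r in rules]  # activation weights
total = sum(acts)
combined = {loc: sum(a / total * r["belief"][loc]
                     for a, r in zip(acts, rules))
            for loc in ("edge", "cloud")}
print(combined)  # pick the location with the highest combined belief
```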
78

A neuromorphic approach for edge use allocation

Petersson Steenari, Kim January 2022 (has links)
This paper introduces a new way of solving an edge user allocation problem using a network of spiking neurons. The network should quickly, and at low energy cost, solve the optimization problem of allocating users to servers while minimizing the number of servers hired and thereby the associated hiring cost. The demonstrated method is a simulation of an approach that could be implemented on neuromorphic hardware. It is written in Python using the Brian2 spiking neural network simulator. The core of the method involves simulating an energy function through the use of circuit motifs. The dynamics of these circuit motifs mimic a search for the lowest-energy point in an energy landscape, corresponding to a valid solution to the edge user allocation problem. The paper also shows the results of testing this network within the Brian2 environment.
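As a hedged, minimal Brian2 example (not the paper's actual circuit motifs), the snippet below builds two leaky integrate-and-fire neurons whose firing rates track a constant drive, the basic ingredient from which such energy-landscape networks are composed. All parameters are invented.

```python
from brian2 import NeuronGroup, SpikeMonitor, run, ms

# Two leaky integrate-and-fire neurons; the one with the stronger
# constant drive fires more often. Parameters are illustrative only.
eqs = '''
dv/dt = (I - v) / (10*ms) : 1
I : 1 (constant)
'''
G = NeuronGroup(2, eqs, threshold='v > 1', reset='v = 0', method='exact')
G.I = [1.5, 1.1]          # stronger drive -> higher firing rate
mon = SpikeMonitor(G)
run(100*ms)
print(mon.count)           # spike counts per neuron
```

In the paper's setting, mutual inhibition between such units would steer the network toward low-energy states encoding a valid user-to-server allocation.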
79

Federated Learning for edge computing : Real-Time Object Detection

Memia, Ardit January 2023 (has links)
In domains where data is sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices. Federated Learning (FL) has recently emerged as a promising solution to collaborative machine learning challenges while maintaining data privacy. With FL, multiple entities, whether cross-device or cross-silo, can jointly train models without compromising the locality or privacy of their data. Instead of moving data to a central storage system or cloud for model training, code is moved to the data owners' local sites, and incremental local updates are combined into a global model. In this way, FL enhances data privacy and reduces the probability of eavesdropping to a certain extent. In this thesis we have applied Federated Learning to a Real-Time Object Detection (RTOB) model in order to investigate its performance and privacy awareness compared to a traditional centralized ML environment. Several object detection models were built using the YOLO framework and trained with a custom dataset for indoor object detection. Local tests were performed and the most optimal model was chosen by evaluating training and testing metrics; an NVIDIA Jetson Nano external device was then used to train the model and integrate it into a Federated Learning environment using an open-source FL framework. Experiments were conducted along the way to choose the optimal YOLO model (YOLOv8) and the FL framework best suited to our study (FEDn). We observed a gradual enhancement in balancing the APC factors (Accuracy-Privacy-Communication) as we transitioned from basic local models to the YOLOv8 implementation within the FEDn system, both locally and in the SSC Cloud production environment. Although we encountered technical challenges deploying the YOLOv8-FEDn system on the SSC Cloud, preventing it from reaching a finalized state, our preliminary findings indicate its potential as a robust foundation for FL applications in RTOB models at the edge.
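At the core of such FL systems is a weighted averaging of client updates. The sketch below shows the generic FedAvg step under invented shapes and client sizes; FEDn's actual combiner protocol differs in its details.

```python
import numpy as np

# Generic FedAvg: average client model updates weighted by the number
# of local training samples (all values fabricated for illustration).
def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

w1 = np.array([0.2, 0.4])   # update from client 1 (1000 samples)
w2 = np.array([0.6, 0.0])   # update from client 2 (3000 samples)
print(fedavg([w1, w2], [1000, 3000]))  # -> [0.5  0.1]
```

Only these aggregated parameters cross the network, which is what keeps the raw images on the Jetson-class edge devices.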
80

Heterogeneous IoT Network Architecture Design for Age of Information Minimization

Xia, Xiaohao 01 February 2023 (has links) (PDF)
Timely data collection and execution in heterogeneous Internet of Things (IoT) networks, in which different protocols and spectrum bands coexist, such as WiFi, RFID, Zigbee, and LoRa, requires further investigation. This thesis studies the problem of age-of-information (AoI) minimization in heterogeneous IoT networks consisting of heterogeneous IoT devices, an intermediate layer of multi-protocol mobile gateways (M-MGs) that collect and relay data from IoT objects and perform computing tasks, and heterogeneous access points (APs). A federated matching framework is presented to model the collaboration between different service providers (SPs) in deploying and sharing M-MGs and to minimize the average weighted sum of the age-of-information and energy consumption. Further, we develop a two-level multi-protocol multi-agent actor-critic (MP-MAAC) method to solve the optimization problem, where M-MGs and SPs can learn collaborative strategies through their own observations. The M-MGs' strategies include selecting IoT objects for data collection, execution, relaying, and/or offloading to SPs' access points, while SPs decide on spectrum allocation. Finally, to improve the convergence of the learning process, we incorporate federated learning into the multi-agent collaborative framework. The numerical results show that our Fed-Match algorithm reduces the AoI by a factor of four, collects twice as many packets as existing approaches, reduces the penalty by a factor of five when relaying is enabled, and establishes design principles for the stability of the training process.
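To fix intuition on the metric itself, the toy below computes a time-averaged age of information for a single source: age grows linearly between deliveries and drops to the delivered update's system delay at each delivery. The delivery times and delays are fabricated.

```python
# Toy AoI trace: (delivery time, system delay of the delivered update).
deliveries = [(2.0, 0.5), (5.0, 1.5), (6.0, 0.3)]

t_prev, age_prev, area = 0.0, 0.0, 0.0
for t, delay in deliveries:
    peak = age_prev + (t - t_prev)          # age just before delivery
    area += (age_prev + peak) / 2 * (t - t_prev)  # trapezoid under the curve
    t_prev, age_prev = t, delay             # age resets to the update's delay
print("average AoI:", area / t_prev)        # ~1.67 for this trace
```

Minimizing this area is what the gateway placement, relaying, and spectrum decisions above trade off against energy consumption.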
