  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

QoE-Aware Video Communication in Emerging Network Architectures

Sadat, Mohammad Nazmus 04 October 2021 (has links)
No description available.
72

Computation Offloading for Real-Time Applications

Tahirović, Emina January 2023 (has links)
With the vast and ever-growing range of applications that seek real-time data processing and timing-predictable services comes an extensive list of problems in establishing these applications in the real-time domain. Depending on their purpose, real-time applications impose vastly different demands on resources. Some require large computational power, large storage capacities, and large energy reserves. However, not all devices can be equipped with processors, batteries, or power banks adequate for such resource requirements. While these issues can be mitigated by offloading computations to the cloud, this is not a universal solution for all applications. Real-time applications need to be predictable and reliable, whereas the cloud can introduce high and unpredictable latencies. One possible improvement in predictability and reliability comes from offloading to the edge, which is closer than the cloud and can reduce latencies. However, even the edge comes with certain limitations, and it is not entirely clear how, where, and when applications should be offloaded. The problem then presents itself as: how should real-time applications in the edge-cloud architecture be modeled? Moreover, how should they be modeled so as to be agnostic of particular technologies and to provide support for timing analysis? Another consideration is the question of 'when' to offload to the edge-cloud architecture. For example, critical computations can be offloaded to the edge while less critical computations go to the cloud, but before one can determine 'where' to offload, one must determine 'when'. This thesis therefore focuses on designing a new technology-agnostic mathematical model that allows holistic modeling of real-time applications on the edge-cloud continuum and provides support for timing analysis.
73

Distributed Artificial Intelligence Based on Edge Computing

Fagerström, Rebecca, Neüman, Simon January 2023 (has links)
The future Internet is expected to be driven by the prevalence of the Internet of Things (IoT), where it is envisioned that anything can be connected. In the last decade, there has been a paradigm shift in IoT from centralized cloud computing to so-called edge computing, in order to compute tasks closer to the source of data generation. However, IoT still faces major challenges in computation, storage, and networking. This systematic literature review therefore investigates how edge computing can help accomplish distributed intelligence in IoT systems, along with the known challenges and barriers. Following the PRISMA guidelines, a systematic database search and selection process was carried out to find relevant research on the topic. The data analysis method chosen for this study is content analysis, which aids in structuring and categorizing the data and allows for the application of a coding process. Using content analysis and the selection criteria, 15 of 53 papers, published between 2017 and early 2023, were selected for review. One of the main challenges mentioned in all reviewed papers was the resource constraints of IoT devices together with the growing volumes of data, which have become a bottleneck in the system. Limited processing capacity makes it difficult for the devices to independently carry out complex data processing and AI analysis. The distributed nature of edge computing relies on heavy information exchange between edge devices, creating a large communication load that limits its efficiency. Nevertheless, edge computing opens up a more natural way of processing data at the edge of the network, aiming to bring low latency, high reliability, distributed intelligence, and sufficient network bandwidth to applications requiring real-time analysis.
74

Smart Security System Based on Edge Computing and Face Recognition

Heejae Han (9226565) 27 April 2023 (has links)
<p>Physical security is one of the most basic human needs. People care about it for various reasons: for the safety and security of personnel, to protect private assets, to prevent crime, and so forth. With the recent proliferation of AI, various smart physical security systems are being introduced to the world. Many researchers and engineers are developing AI-driven physical security systems capable of identifying potential security threats by monitoring and analyzing data collected from various sensors. One of the most popular ways to detect unauthorized entry into a restricted space is face recognition. With a collected stream of images and a proper algorithm, a security system can recognize the faces detected in the images and send an alert when unauthorized faces are recognized. In recent years, there has been active research and development on neural networks for face recognition; FaceNet is one of the more advanced algorithms. However, not much work has been done to show what kind of end-to-end system architecture is effective for running heavy computational loads such as neural network inference. This study therefore explores different hardware options that can be used in security systems powered by a state-of-the-art face recognition algorithm, and proposes that an edge computing-based approach can significantly reduce overall system latency and enhance system reactiveness. To analyze the pros and cons of the proposed system, this study presents two different end-to-end system architectures. The first is an edge computing-based system that performs most computational tasks at the edge node of the system; the other is a traditional application server-based system that performs the core computational tasks at the application server. Both systems adopt domain-specific hardware, Tensor Processing Units, to accelerate neural network inference. This paper walks through the implementation details of each system and explores its effectiveness, providing a performance analysis of each system with regard to accuracy and latency and outlining the pros and cons of each.</p>
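As a minimal illustration of the recognition step described above: FaceNet-style systems compare fixed-length face embeddings by distance, declaring a match when the Euclidean distance between two L2-normalised embeddings falls below a threshold. The toy 4-d embeddings and the 1.0 threshold below are invented for illustration (real FaceNet embeddings are 128-d and thresholds are tuned on a validation set).

```python
import numpy as np

MATCH_THRESHOLD = 1.0  # hypothetical; real systems tune this on validation data

def normalize(v):
    """L2-normalise an embedding vector."""
    return v / np.linalg.norm(v)

def is_same_person(emb_a, emb_b, threshold=MATCH_THRESHOLD):
    """Match two face embeddings by Euclidean distance after normalisation."""
    dist = np.linalg.norm(normalize(emb_a) - normalize(emb_b))
    return dist < threshold

# Toy 4-d embeddings standing in for 128-d FaceNet outputs.
enrolled = np.array([1.0, 0.0, 0.0, 0.0])
probe_same = np.array([0.9, 0.1, 0.0, 0.0])   # similar direction -> small distance
probe_diff = np.array([0.0, 1.0, 0.0, 0.0])   # orthogonal -> distance ~ sqrt(2)
```

In the edge-based architecture, this comparison (and the embedding inference before it) would run on the edge node, so only match/no-match alerts cross the network.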
75

Real-time Cloudlet PaaS for GreenIoT : Design of a scalable server PaaS and a GreenIoT application

Li, Hengsha January 2018 (has links)
Cloudlet is a recent topic that has attracted much interest in network systems research. It can be characterized as a PaaS (Platform as a Service) layer that allows mobile clients to execute their code in the cloud, and it can be seen as a layer at the edge of the communication network. In this thesis, we present a cloudlet architecture design that includes the cloudlet code as part of the client application itself. We first provide an overview of related work and describe existing challenges that need to be addressed. Next, we present an overall design for a cloudlet-based implementation. Finally, we present the cloudlet architecture, including a prototype of both the client application and the cloudlet server. For the prototype, a CO2 data visualization application, we focus on how to format the functions on the client side, how to schedule the cloudlet PaaS on the server, and how to make the server scalable. We conclude with a performance evaluation. Cloudlet technology is likely to be widely applied in IoT projects, such as data visualization of air quality and water quality, fan control and traffic steering, or other use cases. Compared to the traditional centralized cloud architecture, cloudlet offers high responsiveness, flexibility, and scalability.
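The scheduling idea in the abstract — clients submit function calls that the cloudlet queues and executes — can be outlined as follows. The handler name, the CO2-averaging function, and the FIFO policy are illustrative assumptions, not the thesis's actual PaaS implementation.

```python
from collections import deque

class Cloudlet:
    """Toy cloudlet PaaS: registers handlers and runs queued client requests FIFO."""
    def __init__(self):
        self.handlers = {}
        self.queue = deque()

    def register(self, name, fn):
        """Make a named function available to clients."""
        self.handlers[name] = fn

    def submit(self, name, payload):
        """Client-side call: enqueue a request for remote execution."""
        self.queue.append((name, payload))

    def run_all(self):
        """Server-side scheduler: drain the queue in FIFO order."""
        results = []
        while self.queue:
            name, payload = self.queue.popleft()
            results.append(self.handlers[name](payload))
        return results

# Hypothetical CO2-visualisation helper offloaded from the client.
cloudlet = Cloudlet()
cloudlet.register("co2_avg", lambda readings: sum(readings) / len(readings))
cloudlet.submit("co2_avg", [400, 420, 410])
```

Scalability in a real deployment would come from running many such workers behind a dispatcher; the sketch only shows the handler-registration and scheduling shape.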
76

Accelerating Multi-target Visual Tracking on Smart Edge Devices

Nalaie, Keivan January 2023 (has links)
Multi-object tracking (MOT) is a key building block in video analytics and finds extensive use in surveillance, search and rescue, and autonomous driving applications. Object detection, a crucial stage in MOT, dominates the overall tracking inference time due to its reliance on Deep Neural Networks (DNNs). Despite the superior performance of cutting-edge object detectors, their extensive computational demands limit their real-time application on embedded devices with constrained processing capabilities. Hence, we aim to reduce the computational burden of object detection while maintaining tracking performance. As the first approach, we adapt frame resolutions to reduce computational complexity: during inference, frame resolutions can be tuned according to the complexity of visual scenes. We present DeepScale, a model-agnostic frame resolution selection approach that operates on top of existing fully convolutional network-based trackers. By analyzing the effect of frame resolution on detection performance, DeepScale strikes good trade-offs between detection accuracy and processing speed by adapting frame resolutions on the fly. Our second approach enhances the efficiency of a tracker through model adaptation. We introduce AttTrack, which expedites tracking by interleaving the execution of object detectors of different model sizes during inference: a sophisticated network (the teacher) runs only on keyframes, while for non-keyframes knowledge is transferred from the teacher to a smaller network (the student) to improve the latter's performance. Our third contribution exploits temporal-spatial redundancies to enable real-time multi-camera tracking. We propose the MVSparse pipeline, which consists of a central processing unit (on an edge server or in the cloud) that aggregates information from multiple cameras, and distributed lightweight Reinforcement Learning (RL) agents running on individual cameras that predict the informative blocks in the current frame based on past frames from the same camera and detection results from other cameras. / Thesis / Doctor of Science (PhD)
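The resolution-adaptation idea behind DeepScale can be sketched as a lookup: given an estimated detection accuracy for each candidate input resolution, pick the cheapest (lowest) resolution that still meets a target accuracy. The accuracy table below is invented for illustration; DeepScale itself learns this trade-off from the detector's behaviour on real scenes.

```python
# Hypothetical accuracy estimates per frame resolution (width in pixels).
# Lower resolution -> faster inference but lower detection accuracy.
ACCURACY_BY_RES = {320: 0.70, 480: 0.80, 640: 0.88, 960: 0.91}

def select_resolution(target_accuracy, table=ACCURACY_BY_RES):
    """Return the smallest resolution whose estimated accuracy meets the target."""
    for res in sorted(table):
        if table[res] >= target_accuracy:
            return res
    return max(table)  # target unreachable: fall back to the highest resolution
```

On a per-frame basis, a tracker would re-estimate the table (or a proxy for scene complexity) and call this selection before each detection pass.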
77

From Edge Computing to Edge Intelligence: exploring novel design approaches to intelligent IoT applications

Antonini, Mattia 11 June 2021 (has links)
The Internet of Things (IoT) has deeply changed how we interact with our world. Today, smart homes, self-driving cars, connected industries, and wearables are just a few mainstream applications where IoT plays the role of enabling technology. When IoT became popular, Cloud Computing was already a mature technology, able to deliver the computing resources necessary to execute heavy tasks (e.g., data analytics, storage, AI tasks, etc.) on data coming from IoT devices, so practitioners started to design and implement their applications exploiting this approach. However, after a hype that lasted a few years, cloud-centric approaches started showing their main limitations when connecting many devices to remote endpoints: high latency, bandwidth usage, big data volumes, reliability, privacy, and so on. At the same time, a few new distributed computing paradigms emerged and gained attention. Among them, Edge Computing shifts the execution of applications to the edge of the network (a portion of the network physically close to the data sources) and improves on the Cloud Computing paradigm. Its success has been fostered by new powerful embedded computing devices able to satisfy the ever-increasing computing requirements of many IoT applications. Given this context, how can next-generation IoT applications take advantage of the opportunity offered by Edge Computing to shift processing from the cloud toward the data sources and exploit ever-more-powerful devices? This thesis provides the ingredients and guidelines for practitioners to foster the migration from cloud-centric to novel distributed design approaches for IoT applications at the edge of the network, addressing the issues of the original approach. This requires designing the processing pipeline of applications while considering the system requirements and the constraints imposed by embedded devices. To make this process smoother, the transition is split into steps: first, offloading the processing (including the Artificial Intelligence algorithms) to the edge of the network; then, distributing the computation across multiple edge devices, ever closer to the data sources, based on system constraints; and finally, optimizing the processing pipeline and the AI models to run efficiently on the target IoT edge devices. Each step has been validated by delivering a real-world IoT application that fully exploits the novel approach. This paradigm shift leads the way toward the design of Edge Intelligence IoT applications that efficiently and reliably execute Artificial Intelligence models at the edge of the network.
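The distribution step above — spread pipeline stages across tiers under resource constraints — can be caricatured with a greedy placement: each stage goes to the closest tier that still has capacity for it. The stage costs and tier capacities below are hypothetical, and real placements would also weigh latency and data movement, not just capacity.

```python
def place_pipeline(stage_costs, capacities):
    """Greedily assign each pipeline stage to the closest tier with spare capacity.

    stage_costs: per-stage compute costs (arbitrary units).
    capacities:  dict tier -> capacity, ordered closest-first (device, edge, cloud).
    """
    remaining = dict(capacities)
    placement = []
    for cost in stage_costs:
        for tier in capacities:            # closest tier first
            if remaining[tier] >= cost:
                remaining[tier] -= cost
                placement.append(tier)
                break
        else:
            placement.append("cloud")      # assume the cloud absorbs any overflow
    return placement

# Hypothetical three-stage IoT pipeline: filtering, feature extraction, AI model.
tiers = {"device": 2, "edge": 5, "cloud": float("inf")}
```

The final optimization step (model compression, quantization) would then shrink the costs themselves so more stages fit on the lower tiers.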
78

Utilising waste heat from Edge-computing Micro Data Centres : Financial and Environmental synergies, Opportunities, and Business Models / Tillvaratagande av spillvärme från Edge-computing Micro Data Center : finansiella och miljömässiga synergier, möjligheter, och affärsmodeller

Dowds, Eleanor Jane, El-Saghir, Fatme January 2021 (has links)
In recent times, there has been an explosion in the need for high-density computing and data processing. As a result, the Information and Communication Technology (ICT) sector's demand on global energy resources has tripled in the last five years. Edge computing, which brings computing power close to the user, is set to be the cornerstone of future communication and information transport, satisfying the demand for instant response times and near-zero latency needed for applications such as 5G, self-driving vehicles, face recognition, and much more. The Micro Data Centre (micro DC) is key hardware in the shift to edge computing. Being self-contained, with in-rack liquid cooling systems, these micro data centres can be placed wherever they are needed most, often in areas not usually thought of as locations for data centres, such as offices and housing blocks. This presents an opportunity to make the ICT industry greener and to help lower total global energy demand, while fulfilling both data processing and heating requirements. If a solution can be found to capture and utilise waste heat from the growing number of micro data centres, it would have a massive impact on overall energy consumption. This project explores this potential synergy by investigating two ways of utilising waste heat: first, supplying waste heat to the district heating network (Case 1), and second, using the micro DC as a 'data furnace' supplying heat to the near vicinity (Cases 2 and 3). Two scenarios of differing costs and incomes are explored in each case, and a sensitivity analysis is performed to determine how sensitive each scenario is to changing internal and external factors. The results achieved were extremely promising: capturing waste heat from micro data centres, whether supplying the local district heating network or providing the central heating of the near vicinity, proves to be both economically and physically viable. The three business models ('Cases') created not only show good financial promise, but also demonstrate a way of creating value through a greener way of computing and heat supply. The amount of waste heat that can be captured is sufficient to heat many apartments in residential blocks and office buildings, and the temperatures achieved are sufficient to meet the heating requirements of these facilities, meaning no extra energy is required for priming the waste heat. It is hoped that the investigations and analyses performed in this thesis will further the discussion around utilising waste heat from lower-energy sources, such as micro DCs, so that one day potential can become reality.
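As a back-of-the-envelope illustration of the claim that captured waste heat can warm many apartments: nearly all electrical power drawn by a data centre leaves as heat, so rack power times a capture efficiency gives usable heating power. Every figure below (rack power, capture efficiency, per-flat demand) is an assumption for illustration, not a result from the thesis.

```python
# Hypothetical figures: a 10 kW micro DC rack, 85% of its heat captured by
# in-rack liquid cooling, and 1.5 kW average heat demand per apartment.
RACK_POWER_KW = 10.0
CAPTURE_EFFICIENCY = 0.85
FLAT_DEMAND_KW = 1.5

def flats_heated(rack_kw=RACK_POWER_KW, eff=CAPTURE_EFFICIENCY,
                 demand_kw=FLAT_DEMAND_KW):
    """Apartments whose average heat demand the captured waste heat covers."""
    usable_heat_kw = rack_kw * eff  # almost all electrical input becomes heat
    return int(usable_heat_kw / demand_kw)
```

Under these assumptions a single rack covers the average demand of a handful of flats; whether the heat is also at a usable temperature is the separate question the thesis's cases investigate.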
79

Low-Power Wireless Sensor Node with Edge Computing for Pig Behavior Classifications

Xu, Yuezhong 25 April 2024 (has links)
A wireless sensor node (WSN) system, capable of sensing animal motion and transmitting motion data wirelessly, is an effective and efficient way to monitor pigs' activity. However, sampling and transmitting raw sensor data consumes so much power that WSN batteries have to be frequently charged or replaced. The proposed work solves this issue through a WSN edge computing solution, in which a Random Forest Classifier (RFC) is trained and implemented on the WSNs. The implementation of the RFC on the WSNs does not save power by itself, but the RFC predicts animal behavior so that the WSNs can adaptively adjust the data sampling frequency to reduce power consumption. In addition, the WSNs can transmit less data by sending RFC predictions instead of raw sensor data. The proposed RFC classifies common animal activities (eating, drinking, laying, standing, and walking) with an F1 score of 93%. WSN power consumption is reduced by 25% with edge computing intelligence, compared to a WSN that samples and transmits raw sensor data periodically at 10 Hz. / Master of Science / A wireless sensor node (WSN) system that detects animal movement and wirelessly transmits this data is a valuable tool for monitoring pigs' activity. However, sampling and transmitting raw sensor data consumes a significant amount of power, leading to frequent recharging or replacement of WSN batteries. To address this issue, our proposed solution integrates edge computing into the WSNs, using a Random Forest Classifier (RFC). The RFC is trained and deployed within the WSNs to predict animal behavior, allowing adaptive adjustment of the data sampling frequency to reduce power consumption. Additionally, by transmitting RFC predictions instead of raw sensor data, the WSNs conserve power by transmitting less data. Our RFC accurately classifies common animal activities, such as eating, drinking, laying, standing, and walking, achieving an F1 score of 93%. With the integration of edge computing intelligence, WSN power consumption is reduced by 25% compared to a traditional WSN that periodically samples and transmits raw sensor data at 10 Hz.
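The power-saving mechanism — classify behaviour at the edge, then slow the sampling rate during low-activity behaviours and transmit only the predicted label — can be sketched as a simple policy. The rates and the activity grouping below are illustrative assumptions, not the tuned values from the thesis.

```python
# Hypothetical adaptive-sampling policy driven by the classifier's last prediction.
LOW_ACTIVITY = {"laying", "standing"}  # assumed low-motion behaviours
HIGH_RATE_HZ = 10.0                    # baseline raw-data sampling rate
LOW_RATE_HZ = 2.0                      # reduced rate during low activity

def next_sampling_rate(predicted_behavior):
    """Pick the accelerometer sampling rate from the last predicted behaviour."""
    return LOW_RATE_HZ if predicted_behavior in LOW_ACTIVITY else HIGH_RATE_HZ

def duty_cycle_saving(behavior_log):
    """Fraction of samples avoided versus always sampling at HIGH_RATE_HZ."""
    used = sum(next_sampling_rate(b) for b in behavior_log)
    return 1.0 - used / (HIGH_RATE_HZ * len(behavior_log))
```

Since pigs spend long stretches laying, even this coarse two-rate policy yields a sizeable duty-cycle reduction, which is the intuition behind the reported 25% power saving.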
80

EdgeFn: A Lightweight Customizable Data Store for Serverless Edge Computing

Paidiparthy, Manoj Prabhakar 01 June 2023 (has links)
Serverless Edge Computing is an extension of the serverless computing paradigm that enables the deployment and execution of modular software functions on resource-constrained edge devices. However, it poses several challenges due to the edge network's dynamic nature and serverless applications' latency constraints. In this work, we introduce EdgeFn, a lightweight distributed data store for serverless edge computing systems. While serverless computing platforms simplify the development and automated management of software functions, running serverless applications reliably on resource-constrained edge devices poses multiple challenges, including a lack of flexibility, minimal control over management policies, high data shipping, and cold-start latencies. EdgeFn addresses these challenges by providing distributed data storage for serverless applications and allows users to define custom policies that affect the life cycle of serverless functions and their objects. First, we study the challenges existing serverless systems face in adapting to the edge environment. Second, we propose a distributed data store on top of a Distributed Hash Table (DHT) based Peer-to-Peer (P2P) overlay, which achieves data locality by co-locating a function and its data. Third, we implement programmable callbacks for storage operations, which users can leverage to define custom policies for their applications; we also describe some use cases that can be built using these callbacks. Finally, we evaluate EdgeFn's scalability and performance using an industry-generated trace workload and real-world edge applications. / Master of Science
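The data-locality idea — place a function's objects by the function's key, so a DHT routes both to the same node — can be sketched with stable key hashing. The node names, key scheme, and four-node overlay are assumptions for illustration, not EdgeFn's actual placement algorithm.

```python
import hashlib

NODES = [f"edge-node-{i}" for i in range(4)]  # hypothetical P2P overlay members

def owner(key, nodes=NODES):
    """Map a key onto a node via a stable hash (a stand-in for DHT routing)."""
    digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

def place_function(fn_key):
    """A function lives on the node that owns its key."""
    return owner(fn_key)

def place_object(fn_key, obj_name):
    # Co-location: an object is placed by its owning function's key, not its
    # own name, so the function and its data always land on the same node
    # and invocations avoid shipping data across the overlay.
    return owner(fn_key)
```

Programmable callbacks would then hook into `put`/`get` on the owning node, which is where EdgeFn lets users attach custom life-cycle policies.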
