71

Low latency and Resource efficient Orchestration for Applications in Mobile Edge Cloud

Doan, Tung 21 March 2023 (has links)
Recent years have witnessed an increasing number of mobile devices such as smartphones and tablets characterized by low computing and storage capabilities. Meanwhile, there is explosive growth in applications on mobile devices that require high computing and storage capabilities. These challenges led to the introduction of cloud computing, which empowers mobile devices with remote computing and storage resources. However, cloud computing is centrally designed and thus suffers from noticeable issues such as high communication latency and potential vulnerability. To tackle the problems posed by central cloud computing, the Mobile Edge Cloud (MEC) has recently been introduced to bring computing and storage resources into the proximity of mobile devices, for example at base stations or shopping centers. MEC has therefore become a key enabling technology for emerging use cases such as autonomous driving and the tactile internet. Despite these potential benefits, the design of MEC makes the deployment of applications challenging. First, as MEC brings computation and storage resources closer to mobile devices, the MEC servers that provide those resources become widely distributed across the network. Moreover, MEC servers typically have a small-footprint design so that they can be flexibly placed at various locations, and thus provide only limited resources. The challenge is to deploy applications in a cost-efficient manner. Second, applications have stringent requirements such as high mobility or low latency, and the challenge is to deploy applications in MEC so that these needs are satisfied. Considering the above challenges, this thesis studies the orchestration of MEC applications. In particular, for computation offloading, we propose offloading schemes for immersive applications in MEC, such as Augmented Reality or Virtual Reality (AR/VR), that exploit application characteristics.
For resource optimization, since many MEC applications such as gaming and streaming require the support of network functions such as encoders and decoders, we first present placement schemes that allow network functions to be shared efficiently between multiple MEC applications. We then introduce the design of the proposed MANO framework in MEC, advocating the joint orchestration of MEC applications and network functions. For mobility support, low latency applications for use cases such as autonomous driving have to migrate seamlessly from one MEC server to another, following the mobility of the mobile device, in order to guarantee low latency communication. Traditional migration approaches based on virtual machine (VM) or container migration suspend the application at one MEC server and then recover it at another. These approaches require the transfer of the entire VM or container state and consequently lead to service interruption due to high migration time. Therefore, we advocate migration techniques that take advantage of application states.
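The state-based migration idea above can be illustrated with a minimal sketch: instead of moving a whole VM or container image, only the application-level state is serialized on the source MEC server and restored on the target. The `ARSession` class and its fields are hypothetical placeholders, not taken from the thesis.

```python
import json

class ARSession:
    """Toy stand-in for an MEC application with explicit state (hypothetical)."""

    def __init__(self):
        self.state = {"user_pose": [0.0, 0.0, 0.0], "tracked_objects": []}

    def export_state(self) -> bytes:
        # Only the application-level state is serialized, not the whole
        # VM/container image, so the amount of data transferred stays small.
        return json.dumps(self.state).encode()

    @classmethod
    def restore(cls, blob: bytes) -> "ARSession":
        session = cls()
        session.state = json.loads(blob.decode())
        return session

# Migration: export the state on the source MEC server, ship the (small)
# blob over the network, and restore the session on the target server.
src = ARSession()
src.state["user_pose"] = [1.0, 2.0, 0.5]
blob = src.export_state()
dst = ARSession.restore(blob)
assert dst.state == src.state  # the service resumes where it left off
```

The service interruption then scales with the size of the application state rather than with the size of the container image.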
72

QoE-Aware Video Communication in Emerging Network Architectures

Sadat, Mohammad Nazmus 04 October 2021 (has links)
No description available.
73

Computation Offloading for Real-Time Applications

Tahirović, Emina January 2023 (has links)
With the vast and ever-growing range of applications that seek real-time data processing and timing-predictable services comes an extensive list of problems when trying to establish these applications in the real-time domain. Depending on the purpose of a real-time application, the demands it places on resources differ greatly. Some real-time applications require large computational power, large storage capacities, and large energy reserves. However, not all devices can be equipped with processors, batteries, or power banks adequate for such resource requirements. While these issues can be mitigated by offloading computations to the cloud, this is not a universal solution for all applications. Real-time applications need to be predictable and reliable, whereas the cloud can cause high and unpredictable latencies. One possible improvement in predictability and reliability comes from offloading to the edge, which is closer than the cloud and can reduce latencies. However, even the edge comes with certain limitations, and it is not exactly clear how, where, and when applications should be offloaded. The problem then presents itself as: how should real-time applications in the edge-cloud architecture be modeled? Moreover, how should they be modeled to be agnostic to specific technologies and to provide support for timing analysis? Another question is 'when' to offload to the edge-cloud architecture: for example, critical computations can be offloaded to the edge while less critical computations are offloaded to the cloud, but before one can determine 'where' to offload, one must determine 'when'. Thus, this thesis focuses on designing a new technology-agnostic mathematical model that allows holistic modeling of real-time applications on the edge-cloud continuum and provides support for timing analysis.
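The 'when and where' question above can be made concrete with a toy decision rule following the criticality heuristic the abstract mentions (critical work to the edge, less critical work to the cloud). The `offload_decision` interface and every number below are illustrative assumptions, not the thesis's actual model.

```python
def offload_decision(local_wcet, remote_wcet, rtt, deadline, criticality):
    """Decide where a real-time task should run (illustrative sketch).

    local_wcet / remote_wcet: worst-case execution times in ms.
    rtt: (edge_rtt, cloud_rtt) round-trip network delays in ms (assumed known).
    Returns 'local', 'edge', or 'cloud'.
    """
    edge_rtt, cloud_rtt = rtt
    edge_time = remote_wcet + edge_rtt    # remote response = compute + network
    cloud_time = remote_wcet + cloud_rtt

    # Stay local when it both meets the deadline and beats any remote option.
    if local_wcet <= deadline and local_wcet <= min(edge_time, cloud_time):
        return "local"
    # Critical tasks go to the predictable, nearby edge if it fits the deadline.
    if criticality == "high":
        return "edge" if edge_time <= deadline else "local"
    # Less critical tasks may tolerate the cloud's longer, variable latency.
    return "cloud" if cloud_time <= deadline else "edge"
```

For instance, a critical task that cannot finish locally in time (`offload_decision(50, 10, (5, 40), 20, "high")`) lands on the edge, while the same workload with a relaxed deadline and low criticality goes to the cloud.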
74

Distributed Artificial Intelligence Based on Edge Computing

Fagerström, Rebecca, Neüman, Simon January 2023 (has links)
The future Internet is expected to be driven by the prevalence of the Internet of Things (IoT), where it is envisioned that anything can be connected. In the last decade, there has been a paradigm shift in IoT from centralized cloud computing to so-called edge computing, in order to compute tasks closer to the source of data generation. However, IoT still faces major challenges in terms of computational, storage, and network resources. This systematic literature review therefore investigates how edge computing can help accomplish distributed intelligence in IoT systems, and what the known challenges and barriers are. Following the PRISMA guidelines, a systematic database search and selection process was carried out to find relevant research on the topic. The data analysis method chosen for this study is content analysis, which aids in structuring and categorizing the data and allows a coding process to be applied. Using content analysis and the selection criteria, 15 of the 53 candidate papers, published between 2017 and early 2023, were selected for review. One of the main challenges mentioned in all reviewed papers was the resource constraints of IoT devices, which, together with the growing amounts of data, have become a bottleneck in the system. Limited processing capacity makes it difficult for the devices to independently complete complex data processing and AI analysis. The distributed nature of edge computing relies on heavy information exchange between edge devices, creating a large communication load that limits its efficiency. However, edge computing opens up a more natural way of processing data at the edge of the network, which aims to bring low latency, high reliability, distributed intelligence, and sufficient network bandwidth to applications requiring real-time analysis.
75

Smart Security System Based on Edge Computing and Face Recognition

Heejae Han (9226565) 27 April 2023 (has links)
Physical security is one of the most basic human needs. People care about it for various reasons: for the safety and security of personnel, to protect private assets, to prevent crime, and so forth. With the recent proliferation of AI, various smart physical security systems are being introduced to the world. Many researchers and engineers are working on AI-driven physical security systems that can identify potential security threats by monitoring and analyzing data collected from various sensors. One of the most popular ways to detect unauthorized entry to a restricted space is face recognition. With a collected stream of images and a proper algorithm, a security system can recognize the faces detected in the images and send an alert when unauthorized faces are recognized. In recent years, there has been active research and development on neural networks for face recognition; FaceNet, for example, is one of the most advanced algorithms. However, not much work has been done to show what kind of end-to-end system architecture is effective for running heavyweight computational loads such as neural network inference. Thus, this study explores different hardware options that can be used in security systems powered by a state-of-the-art face recognition algorithm and proposes that an edge computing based approach can significantly reduce overall system latency and enhance system reactiveness. To analyze the pros and cons of the proposed system, this study presents two different end-to-end system architectures. The first is an edge computing-based system that performs most of the computational tasks at the edge node of the system, and the other is a traditional application server-based system that performs the core computational tasks at the application server. Both systems adopt domain-specific hardware, Tensor Processing Units, to accelerate neural network inference. This paper walks through the implementation details of each system and explores their effectiveness, providing a performance analysis of each system with regard to accuracy and latency and outlining the pros and cons of each.
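At its core, FaceNet-style recognition reduces to nearest-neighbour search over face embeddings: a detected face is accepted if its embedding lies within a distance threshold of an enrolled identity, and triggers an alert otherwise. The sketch below illustrates this; the three-dimensional vectors and the 0.8 threshold are invented for brevity (real FaceNet embeddings are 128- or 512-dimensional).

```python
import math

# Enrolled identities mapped to face embeddings. These 3-D vectors are
# made-up stand-ins for the high-dimensional embeddings a real model emits.
AUTHORIZED = {
    "alice": [0.12, 0.88, 0.45],
    "bob":   [0.91, 0.10, 0.33],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, threshold=0.8):
    """Return the closest enrolled identity, or None (unauthorized -> alert)."""
    name, dist = min(
        ((n, euclidean(embedding, e)) for n, e in AUTHORIZED.items()),
        key=lambda t: t[1],
    )
    return name if dist < threshold else None
```

In the edge-based architecture this matching step (and the TPU-accelerated embedding inference feeding it) runs on the edge node, so only alerts, not raw video, need to cross the network.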
76

Real-time Cloudlet PaaS for GreenIoT : Design of a scalable server PaaS and a GreenIoT application

Li, Hengsha January 2018 (has links)
Cloudlet is a recent topic that has attracted much interest in network systems research. A cloudlet can be characterized as a PaaS (Platform as a Service) layer that allows mobile clients to execute their code in the cloud, and can be seen as a layer at the edge of the communication network. In this thesis, we present a cloudlet architecture design in which the cloudlet code is part of the client application itself. We first provide an overview of related work and describe existing challenges that need to be addressed. Next, we present an overall design for a cloudlet-based implementation. We then present the cloudlet architecture, including a prototype of both the client application and the cloudlet server. For the prototype, a CO2 data visualization application, we focus on how to format the functions on the client side, how to schedule the cloudlet PaaS on the server, and how to make the server scalable. Finally, we conclude with a performance evaluation. Cloudlet technology is likely to be widely applied in IoT projects, such as data visualization of air quality and water quality, fan control and traffic steering, or other use cases. Compared to the traditional centralized cloud architecture, a cloudlet offers high responsiveness, flexibility, and scalability.
77

Accelerating Multi-target Visual Tracking on Smart Edge Devices

Nalaie, Keivan January 2023 (has links)
Multi-object tracking (MOT) is a key building block in video analytics and finds extensive use in surveillance, search and rescue, and autonomous driving applications. Object detection, a crucial stage in MOT, dominates the overall tracking inference time due to its reliance on Deep Neural Networks (DNNs). Despite the superior performance of cutting-edge object detectors, their extensive computational demands limit their real-time application on embedded devices with constrained processing capabilities. Hence, we aim to reduce the computational burden of object detection while maintaining tracking performance. As the first approach, we adapt frame resolutions to reduce computational complexity: during inference, the frame resolution can be tuned according to the complexity of the visual scene. We present DeepScale, a model-agnostic frame resolution selection approach that operates on top of existing fully convolutional network-based trackers. By analyzing the effect of frame resolution on detection performance, DeepScale strikes a good trade-off between detection accuracy and processing speed by adapting frame resolutions on the fly. Our second approach focuses on enhancing the efficiency of a tracker through model adaptation. We introduce AttTrack, which expedites tracking by interleaving the execution of object detectors of different model sizes during inference: a sophisticated network (the teacher) runs only on keyframes while, for non-keyframes, knowledge is transferred from the teacher to a smaller network (the student) to improve the latter's performance. Our third contribution exploits temporal-spatial redundancies to enable real-time multi-camera tracking.
We propose the MVSparse pipeline, which consists of a central processing unit (on an edge server or in the cloud) that aggregates information from multiple cameras, and distributed lightweight Reinforcement Learning (RL) agents running on the individual cameras that predict the informative blocks in the current frame based on past frames from the same camera and detection results from the other cameras. / Thesis / Doctor of Science (PhD)
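The resolution-adaptation idea behind DeepScale can be sketched as a controller that picks the cheapest frame resolution whose estimated detection accuracy still meets a target. The profile table and the linear accuracy penalty for scene complexity below are invented for illustration; the real system learns this trade-off from data.

```python
# Candidate frame heights with (invented) baseline accuracy and relative
# compute cost; lower resolution means faster but less accurate detection.
PROFILES = [  # (frame height, estimated mAP at that height, relative cost)
    (1088, 0.72, 1.00),
    (704,  0.67, 0.45),
    (416,  0.58, 0.18),
]

def select_resolution(scene_complexity: float, target_map: float = 0.60) -> int:
    """Pick the cheapest resolution meeting the accuracy target.

    scene_complexity in [0, 1]: harder scenes (many small objects) lose
    more accuracy at low resolution, modeled here as a linear penalty.
    """
    for height, base_map, _cost in sorted(PROFILES, key=lambda p: p[2]):
        estimated_map = base_map - 0.15 * scene_complexity
        if estimated_map >= target_map:
            return height
    return PROFILES[0][0]  # nothing suffices: fall back to full resolution
```

An easy scene can then be processed at a reduced resolution, while a crowded one falls back to full resolution, which is the accuracy/speed trade-off the abstract describes.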
78

Utilising waste heat from Edge-computing Micro Data Centres : Financial and Environmental synergies, Opportunities, and Business Models / Tillvaratagande av spillvärme från Edge-computing Micro Data Center : finansiella och miljömässiga synergier, möjligheter, och affärsmodeller

Dowds, Eleanor Jane, El-Saghir, Fatme January 2021 (has links)
In recent times, there has been an explosion in the need for high-density computing and data processing. As a result, the Information and Communication Technology (ICT) sector's demand on global energy resources has tripled in the last five years. Edge computing, which brings computing power close to the user, is set to be the cornerstone of future communication and information transport, satisfying the demand for instant response times and near-zero latency needed for applications such as 5G, self-driving vehicles, face recognition, and much more. The Micro Data Centre (micro DC) is key hardware in the shift to edge computing. Being self-contained, with in-rack liquid cooling systems, these micro data centres can be placed wherever they are needed most, often in areas not usually thought of as locations for data centres, such as offices and housing blocks. This presents an opportunity to make the ICT industry greener and contribute to lowering total global energy demand, while fulfilling both the need for data processing and heating requirements. If a solution can be found to capture and utilise waste heat from the growing number of micro data centres, it would have a massive impact on overall energy consumption. This project explores this potential synergy by investigating two different ways of utilising waste heat: the first supplies waste heat to the district heating network (Case 1), and the second uses the micro DC as a 'data furnace' supplying heat to the near vicinity (Cases 2 and 3). Two scenarios with differing costs and incomes are explored in each case, and a sensitivity analysis is performed to determine how sensitive each scenario is to changing internal and external factors. The results achieved were extremely promising: capturing waste heat from micro data centres, and both supplying the local district heating network and providing the central heating of the near vicinity, proves to be both economically and physically viable.
The three business models ('Cases') created not only show good financial promise, but also demonstrate a way of creating value through a greener way of computing and heat supply. The amount of waste heat that can be captured is sufficient to heat many apartments in residential blocks and office buildings, and the temperatures achieved have proven sufficient to meet the heating requirements of these facilities, meaning no extra energy is required for priming the waste heat. It is our hope that the investigations and analyses performed in this thesis will further the discussion around the utilisation of waste heat from lower-energy sources, such as micro DCs, so that one day, potential can become reality.
79

From Edge Computing to Edge Intelligence: exploring novel design approaches to intelligent IoT applications

Antonini, Mattia 11 June 2021 (has links)
The Internet of Things (IoT) has deeply changed how we interact with our world. Today, smart homes, self-driving cars, connected industries, and wearables are just a few mainstream applications where IoT plays the role of enabling technology. When IoT became popular, Cloud Computing was already a mature technology able to deliver the computing resources necessary to execute heavy tasks (e.g., data analytics, storage, AI tasks, etc.) on data coming from IoT devices, so practitioners started to design and implement their applications exploiting this approach. However, after a hype phase that lasted a few years, cloud-centric approaches started showing some of their main limitations when dealing with the connectivity of many devices to remote endpoints, such as high latency, bandwidth usage, big data volumes, reliability, privacy, and so on. At the same time, a few new distributed computing paradigms emerged and gained attention. Among them, Edge Computing makes it possible to shift the execution of applications to the edge of the network (a partition of the network physically close to the data sources) and provides improvements over the Cloud Computing paradigm. Its success has been fostered by new powerful embedded computing devices able to satisfy the ever-increasing computing requirements of many IoT applications. Given this context, how can next-generation IoT applications take advantage of the opportunity offered by Edge Computing to shift processing from the cloud toward the data sources and exploit ever-more-powerful devices? This thesis provides the ingredients and the guidelines for practitioners to foster the migration from cloud-centric to novel distributed design approaches for IoT applications at the edge of the network, addressing the issues of the original approach. This requires designing the processing pipelines of applications with the system requirements and the constraints imposed by embedded devices in mind.
To make this process smoother, the transition is split into different steps, starting with the offloading of processing (including the Artificial Intelligence algorithms) to the edge of the network, then the distribution of computation across multiple edge devices, and even closer to the data sources depending on system constraints, and, finally, the optimization of the processing pipeline and AI models to run efficiently on the target IoT edge devices. Each step has been validated by delivering a real-world IoT application that fully exploits the novel approach. This paradigm shift leads the way toward the design of Edge Intelligence IoT applications that efficiently and reliably execute Artificial Intelligence models at the edge of the network.
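One concrete example of the final model-optimization step is post-training quantization, which shrinks a model so it can run efficiently on constrained edge devices. The sketch below quantizes float weights to 8-bit integers with a symmetric scale; it is a generic illustration of the technique, not the thesis's specific optimization pipeline.

```python
# Symmetric post-training quantization of a weight vector to int8 range.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1          # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    return [v * scale for v in quantized]

# A model stores the small integers plus one scale factor, cutting the
# memory footprint roughly 4x versus float32 at a bounded accuracy cost.
w = [0.51, -1.27, 0.02, 0.9]            # made-up float weights
q, s = quantize(w)
w_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))
```

The reconstruction error `err` is bounded by half a quantization step (`s / 2`), which is the usual argument for why 8-bit inference often preserves accuracy.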
80

Promoting Systematic Practices for Designing and Developing Edge Computing Applications via Middleware Abstractions and Performance Estimation

Dantas Cruz, Breno 09 April 2021 (has links)
Mobile, IoT, and wearable devices have been transitioning from passive consumers to active generators of massive amounts of user-generated data. Edge-based processing eliminates network bottlenecks and improves data privacy. However, developing edge applications remains hard, and developers often have to employ ad-hoc software development practices to meet their requirements. By doing so, they introduce low-level and hard-to-maintain code into the codebase, which is error-prone, expensive to maintain, and vulnerable in terms of security. The thesis of this research is that modular middleware abstractions, exemplar use cases, and ML-based performance estimation can make the design and development of edge applications more systematic. To prove this thesis, this dissertation comprises three research thrusts: (1) understand the characteristics of edge-based applications in terms of their runtime, architecture, and performance; (2) provide exemplary use cases to support the development of edge-based applications; (3) innovate in the realm of middleware to address the unique challenges of edge-based data transfer and processing. We provide programming support and performance estimation methodologies to help edge-based application developers improve their software development practices. This dissertation is based on three conference papers, presented at MOBILESoft 2018, VTC 2020, and IEEE SMDS 2020. / Doctor of Philosophy / Mobile, IoT, and wearable devices are generating massive volumes of user data. Processing this data can reveal valuable insights. For example, a wearable device collecting its user's vitals can use the collected data to provide health advice. Typically, the collected data is sent to remote computing resources for processing. However, due to the vastly increasing volumes of such data, it becomes infeasible to transfer it efficiently over the network.
Edge computing is an emerging system architecture that employs nearby devices for processing and can be used to alleviate the aforementioned data transfer problem. However, it remains hard to design and develop edge computing applications, making this a task reserved for expert developers. This dissertation is concerned with democratizing the development of edge applications, so that the task becomes accessible to regular developers. The overriding idea is to make the design and implementation of edge applications more systematic by means of programming support, exemplary use cases, and methodologies.
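As a flavour of what ML-based performance estimation can look like in practice, the sketch below fits a least-squares line that predicts end-to-end latency from payload size, letting a developer ask "how slow will this get?" before deploying. The sample measurements are fabricated, and the dissertation's actual estimation models are more sophisticated than a single-feature linear fit.

```python
# Fabricated profiling measurements: (payload size in MB, observed latency in ms).
samples = [(1, 12.0), (2, 21.5), (4, 41.0), (8, 80.5)]

# Ordinary least squares for a one-feature linear model: latency = a*x + b.
n = len(samples)
mean_x = sum(x for x, _ in samples) / n
mean_y = sum(y for _, y in samples) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
         / sum((x - mean_x) ** 2 for x, _ in samples))
intercept = mean_y - slope * mean_x

def predict_latency(payload_mb: float) -> float:
    """Estimated end-to-end latency in ms for a given payload size."""
    return slope * payload_mb + intercept
```

Even this toy model lets a tool flag, at design time, payload sizes whose predicted latency would violate an application's responsiveness budget.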
