61

RESOURCE MANAGEMENT IN EDGE COMPUTING FOR INTERNET OF THINGS APPLICATIONS

Galanis, Ioannis 01 December 2020 (has links)
The Internet of Things (IoT) computing paradigm has connected smart objects ("things") and brought new services into the proximity of the user. Edge Computing (EC), a natural evolution of the traditional IoT, has been proposed to deal with the ever-increasing (i) number of IoT devices and (ii) amount of data traffic produced by IoT endpoints. EC promises to significantly reduce the unwanted latency imposed by multi-hop communication delays and suggests that, instead of uploading all the data to the remote cloud for further processing, it is beneficial to perform computation at the "edge" of the network, close to where the data is produced. However, bringing computation to the edge level has created numerous challenges, as edge devices struggle to keep up with growing application requirements (e.g. Neural Networks or video-based analytics). In this thesis, we adopt the EC paradigm and aim to address these open challenges. Our goal is to bridge the performance gap caused by the increased requirements of IoT applications with respect to IoT platform capabilities and to provide latency- and energy-efficient computation at the edge level. Our first step is to study the performance of IoT applications that are based on Deep Neural Networks (DNNs). The exploding need to deploy DNN-based applications on resource-constrained edge devices has created several challenges, mainly due to the complex nature of DNNs. DNNs are becoming deeper and wider in order to fulfill users' expectations for high accuracy, while also becoming power hungry. For instance, executing a DNN on an edge device can drain the battery within minutes. Our solution to make DNNs more energy- and inference-friendly is a hardware-aware method that re-designs a given DNN architecture. Instead of proxy metrics, we measure DNN performance on real edge devices and capture their energy and inference time. Our method finds alternative DNN architectures that consume up to 78.82% less energy and are up to 35.71% faster than the reference networks. In order to achieve end-to-end optimal performance, we also need to manage the resources of the edge device that executes a DNN-based application. Due to their unique characteristics, we distinguish edge devices into two categories: (i) neuromorphic platforms designed to execute Spiking Neural Networks (SNNs), and (ii) general-purpose edge devices suitable to host a DNN. For the first category, we train a traditional DNN and then convert it to a spiking representation. We target the SpiNNaker neuromorphic platform and develop a novel technique that efficiently configures the platform-dependent parameters in order to achieve the highest possible SNN accuracy. Experimental results show that our technique is 2.5× faster than an exhaustive approach and can reach up to 0.8% higher accuracy compared to a CPU-based simulation method. Regarding general-purpose edge devices, we show that a DNN-unaware platform can result in sub-optimal DNN performance in terms of power and inference time. Our approach configures the frequency of the device components (GPU, CPU, memory) and achieves an average of 33.4% and up to 66.3% inference time improvement and an average of 42.8% and up to 61.5% power savings compared to the predefined configuration of an edge device. The last part of this thesis is the offloading optimization between the edge devices and the gateway.
The offloaded tasks create contention effects on the gateway, which can lead to application slowdown. Our proposed solution configures (i) the number of application stages that are executed on each edge device, and (ii) the achieved utility in terms of Quality of Service (QoS) on each edge device. Our technique manages to (i) maximize the overall QoS and (ii) simultaneously satisfy network constraints (bandwidth) and user expectations (execution time). In the case of multi-gateway deployments, we tackle the problem of unequal workload distribution. In particular, we propose a workload-aware management scheme that performs intra- and inter-gateway optimizations. The intra-gateway mechanism provides a balanced execution environment for the applications and achieves up to 95% performance-deviation improvement compared to un-optimized systems. The presented inter-gateway method balances the workload among multiple gateways and is able to achieve a global performance threshold.
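As a rough illustration of the device-level tuning described above, the sketch below enumerates candidate CPU/GPU/memory frequency settings and keeps the configuration that minimizes a weighted latency/power cost. The frequency tables, the `profile()` stub, and the cost weights are hypothetical placeholders standing in for real on-device measurements, not the thesis's actual method or numbers.

```python
from itertools import product

# Hypothetical candidate frequencies (MHz); real devices expose their own tables.
CPU_FREQS = [652, 1113, 1479, 2035]
GPU_FREQS = [318, 675, 905, 1300]
MEM_FREQS = [800, 1331, 1600]

def profile(cpu, gpu, mem):
    """Placeholder for a real measurement of one DNN inference.

    On actual hardware this would set the frequencies (e.g. via sysfs),
    run the network, and read back latency (ms) and power (W). Here we
    return a toy analytical estimate so the sketch is runnable.
    """
    latency = 5e4 / gpu + 1e4 / cpu + 2e3 / mem       # ms, toy model
    power = 0.002 * cpu + 0.004 * gpu + 0.001 * mem   # W, toy model
    return latency, power

def best_configuration(latency_weight=0.5, power_weight=0.5):
    """Return the frequency triple minimizing a weighted latency/power cost."""
    best, best_cost = None, float("inf")
    for cpu, gpu, mem in product(CPU_FREQS, GPU_FREQS, MEM_FREQS):
        latency, power = profile(cpu, gpu, mem)
        cost = latency_weight * latency + power_weight * power
        if cost < best_cost:
            best, best_cost = (cpu, gpu, mem), cost
    return best, best_cost

if __name__ == "__main__":
    config, cost = best_configuration()
    print("chosen (cpu, gpu, mem) MHz:", config, "cost:", round(cost, 2))
```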
62

Energy-Efficient Bandwidth Allocation for Integrating Fog with Optical Access Networks

Helmy, Ahmed 03 December 2019 (has links)
Access networks have been going through many reformations to adapt to arising traffic trends and become better suited for many new, demanding applications. To that end, incorporating fog and edge computing has become a necessity for supporting many emerging applications as well as alleviating network congestion. At the same time, energy efficiency has become a strong imperative for access networks to reduce both their operating costs and carbon footprint. In this dissertation, we address these two challenges in long-reach optical access networks. We first study the integration of fog and edge computing with optical access networks, which is believed to form a highly capable access network by combining the huge fiber capacity with closer-to-the-edge computing and storage resources. In our study, we examine the offloading performance under different cloudlet placements when the underlying bandwidth allocation is either centralized or decentralized. We combine analytical modeling and simulation results in order to identify the different factors that affect the offloading performance within each paradigm. To address the energy-efficiency requirement, we introduce novel enhancements and modifications to both allocation paradigms that aim to improve their network performance while conserving energy. We consider this work to be one of the first to explore the integration of fog and edge computing with optical access networks from both bandwidth allocation and energy efficiency perspectives, in order to identify which allocation paradigm would be able to meet the requirements of next-generation access networks.
63

Evaluating mobile edge-computing on base stations : Case study of a sign recognition application

Castellanos Nájera, Eduardo January 2015 (has links)
Mobile phones have evolved from feature phones to smartphones with processing power that rivals that of personal computers from ten years ago. Nevertheless, the computing power of personal computers has also multiplied in the past decade. Consequently, the gap between mobile platforms and personal computers and servers still exists. Mobile Cloud Computing (MCC) has emerged as a paradigm that leverages this difference in processing power. It achieves this goal by augmenting smartphones with resources from the cloud, including processing power and storage capacity. Recently, Mobile Edge Computing (MEC) has brought the benefits of MCC to one hop away from the end user. Furthermore, it also provides additional advantages, e.g., access to network context information, reduced latency, and location awareness. This thesis explores the advantages provided by MEC in practice by augmenting an existing application called Human-Centric Positioning System (HoPS). HoPS is a system that relies on context information and information extracted from a photograph of signposts to estimate a user's location. This thesis presents the challenges of enabling HoPS in practice, and implements strategies that make use of the advantages provided by MEC to tackle those challenges. Afterwards, it presents an evaluation of the resulting system and discusses the implications of the results. To summarise, we make three primary contributions in this thesis: (1) we find that it is possible to augment HoPS and improve its response time by a factor of four by offloading the code processing; (2) we can improve the overall accuracy of HoPS by leveraging the additional processing power at the MEC; (3) we observe that improved network conditions can lead to reduced response time; nevertheless, the difference becomes insignificant compared with the heavy processing required. / The development of mobile phones has proceeded at a rapid pace. Today's smartphones have more processing power than desktop computers had ten years ago. At the same time, computer processors have also become much more powerful, so gaps remain between the mobile platform and computers and servers. Mobile Cloud Computing (MCC) is used today to leverage the processing power of the different platforms; it improves the smartphone's processing power and memory with help from the cloud. Recently, Mobile Edge Computing (MEC) has brought the benefits of MCC to one step from the end user. In addition, MEC offers further advantages, for example access to network context information, reduced latency, and location awareness. This thesis explores the practical benefits of MEC using the application Human-Centric Positioning System (HoPS). HoPS is a system that tries to locate the user by combining contextual information with information from photographs of signposts. The thesis also presents the obstacles that can arise when HoPS is deployed in practice, and uses the advantages of MEC to address them. It then evaluates and discusses the resulting system.
To summarise, the thesis makes three main contributions: (1) we find that it is possible to improve HoPS and reduce its response time to a quarter by offloading the code processing; (2) we find that the overall accuracy of HoPS can be improved by using the additional processing power at the MEC; (3) we see that improved network conditions can lead to reduced response time, but the difference is negligible compared with the amount of processing required.
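The offloading benefit reported in contribution (1) can be framed with the classic decision rule: offload only when transferring the input and computing remotely beats computing locally. The sketch below is a hedged illustration with assumed task sizes and link rates, not measurements from the thesis.

```python
def should_offload(input_bytes, local_time_s, remote_time_s, uplink_bps, rtt_s=0.05):
    """Offload only if sending the data and computing remotely is faster.

    All parameters are illustrative; a real system would measure them online.
    """
    transfer_time = input_bytes * 8 / uplink_bps + rtt_s
    return transfer_time + remote_time_s < local_time_s

# Example: a 2 MB photo of a signpost, 8 s local recognition,
# 2 s on the edge server, 20 Mbit/s uplink.
if should_offload(2_000_000, local_time_s=8.0, remote_time_s=2.0, uplink_bps=20e6):
    print("offload the recognition task to the MEC server")
else:
    print("process the photo locally on the phone")
```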
64

DYNAMIC TASK OFFLOADING FOR LATENCY MINIMIZATION IN IOT EDGE-CLOUD ENVIRONMENTS

Haimin Ku (12457464) 26 April 2022 (has links)
With the exponential growth and diversity of Internet of Things (IoT) devices, computation-intensive and delay-sensitive applications, such as object detection, smart homes, and smart grids, are emerging constantly. We can adopt the paradigm of cloud computing to offload computation-heavy tasks from IoT devices to a cloud server, which can break through the limitations of IoT devices with more powerful resources. However, the cloud computing architecture can incur high latency, which is not suitable for IoT devices that have limited computing and storage capabilities. Edge computing has been introduced to improve this situation by deploying an edge device near the IoT devices that can provide them with computing resources at lower latency than cloud computing. Nevertheless, the edge server may not be able to complete all the offloaded tasks from the devices in time when requests flood in. In such cases, the edge server can offload some of the requested tasks to a cloud server to further speed up the offloading process with more powerful cloud resources. In this thesis, we aim to minimize the average completion time of tasks in an IoT edge-cloud environment by optimizing the task offloading ratio from edge to cloud, based on Deep Deterministic Policy Gradient (DDPG), a type of Reinforcement Learning (RL) approach. We propose a dynamic task offloading decision mechanism, deployed on the edge, that determines the amount of computation to be processed in the cloud server, considering multiple factors, in order to complete a task. Simulation results demonstrate that our dynamic task offloading decision mechanism improves the overall completion time of tasks compared to naïve approaches.
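The quantity the DDPG agent learns to output is essentially a continuous split ratio. The following sketch illustrates the underlying cost model with assumed processing rates and a brute-force search standing in for the learned policy; it is not the thesis's simulation setup.

```python
def completion_time(task_cycles, data_bits, ratio,
                    edge_rate=5e9, cloud_rate=5e10, link_bps=100e6, rtt_s=0.02):
    """Estimated completion time when `ratio` of the task is sent to the cloud.

    Edge and cloud parts run in parallel; the cloud part also pays the
    transfer delay. All rates are illustrative assumptions.
    """
    edge_part = (1 - ratio) * task_cycles / edge_rate
    cloud_part = ratio * task_cycles / cloud_rate + ratio * data_bits / link_bps + rtt_s
    return max(edge_part, cloud_part)

def best_ratio(task_cycles, data_bits, steps=100):
    """Brute-force the ratio a trained DDPG actor would output directly."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates, key=lambda r: completion_time(task_cycles, data_bits, r))

if __name__ == "__main__":
    r = best_ratio(task_cycles=4e9, data_bits=8e6)
    print("offloading ratio:", r,
          "completion time (s):", round(completion_time(4e9, 8e6, r), 3))
```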
65

Universal Mobile Service Execution Framework for Device-To-Device Collaborations

Le, Minh 01 May 2018 (has links)
There is high demand for effective, high-performance collaboration between mobile devices in places where traditional Internet connections are unavailable, unreliable, or significantly overburdened, such as battlefields, disaster zones, isolated rural areas, or crowded public venues. To enable collaboration among devices in opportunistic networks, code offloading and Remote Method Invocation are the two major mechanisms for ensuring that code portions of applications are successfully transmitted to and executed on remote platforms. Although these domains have received considerable research attention for a decade, limitations in multi-device connectivity, system error handling, and cross-platform compatibility have prevented these technologies from being broadly applied in the mobile industry. To address these problems, we designed and developed UMSEF, a Universal Mobile Service Execution Framework, which is an innovative and radical approach to mobile computing in opportunistic networks. Our solution is built as a component-based mobile middleware architecture that is flexible and adaptive to multiple network topologies, tolerant of network errors, and compatible with multiple platforms. We provide an effective algorithm for estimating the resource availability of a device, for better performance and energy consumption, and a novel platform for mobile remote method invocation based on declarative annotations over multi-group device networks. Real-world experiments show that our approach not only achieves better performance and energy consumption but can also be extended to large-scale ubiquitous or IoT systems.
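The resource-availability estimate mentioned above can be pictured as a simple weighted score over a device's spare CPU, memory, and battery. The linear form and weights in the sketch below are illustrative assumptions, not UMSEF's actual algorithm.

```python
def availability_score(cpu_idle_frac, free_mem_frac, battery_frac,
                       weights=(0.5, 0.3, 0.2)):
    """Score in [0, 1] summarizing how much a device can contribute right now.

    The linear form and the weights are illustrative assumptions, not the
    estimation algorithm from the thesis.
    """
    w_cpu, w_mem, w_bat = weights
    return w_cpu * cpu_idle_frac + w_mem * free_mem_frac + w_bat * battery_frac

# Pick the peer with the highest score to host the next offloaded method call.
peers = {
    "phone_a": availability_score(0.7, 0.4, 0.9),
    "tablet_b": availability_score(0.3, 0.8, 0.5),
}
print("selected peer:", max(peers, key=peers.get))
```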
66

Smart Resource Allocation in Internet-of-Things: Perspectives of Network, Security, and Economics

January 2019 (has links)
Emerging from years of research and development, the Internet-of-Things (IoT) has finally paved its way into our daily lives. From the smart home to Industry 4.0, IoT has been fundamentally transforming numerous domains with its unique power of interconnecting devices worldwide. However, the capability of IoT is largely constrained by the limited resources it can employ in various application scenarios, including computing power, network resources, dedicated hardware, etc. The situation is further exacerbated by the stringent quality-of-service (QoS) requirements of many IoT applications, such as delay, bandwidth, security, reliability, and more. This mismatch between resources and demands has greatly hindered the deployment and utilization of IoT services in many resource-intensive and QoS-sensitive scenarios such as autonomous driving and virtual reality. I believe that the resource issue in IoT will persist in the near future due to technological, economic, and environmental factors. In this dissertation, I seek to address this issue by means of smart resource allocation. I propose mathematical models to formally describe various resource constraints and application scenarios in IoT. Based on these, I design smart resource allocation algorithms and protocols to maximize system performance in the face of resource restrictions. Different aspects are tackled, including the networking, security, and economics of the entire IoT ecosystem. For different problems, different algorithmic solutions are devised, including optimal algorithms, provable approximation algorithms, and distributed protocols. The solutions are validated with rigorous theoretical analysis and/or extensive simulation experiments. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2019
67

A ZERO-TRUST-BASED IDENTITY MANAGEMENT MODEL FOR VOLUNTEER CLOUD COMPUTING

Albuali, Abdullah 01 December 2021 (has links) (PDF)
Non-conventional cloud computing models such as volunteer and mobile clouds have become increasingly popular in cloud computing research. Volunteer cloud computing is a more economical, greener alternative to the current model based on data centers, in which tens of thousands of dedicated servers facilitate cloud services. Volunteer clouds offer numerous benefits: no upfront investment to procure the many servers needed for traditional data center hosting; no maintenance costs, such as electricity for cooling and running servers; and physical closeness to edge computing resources, such as individually owned PCs. Despite these benefits, such systems introduce their own technical challenges due to the dynamics and heterogeneity of volunteer computers, which are shared not only among cloud users but also between cloud and local users. Key issues in cloud computing such as security, privacy, reliability, and availability thus need to be addressed more critically in volunteer cloud computing. Emerging paradigms such as volunteer cloud computing, where trust among entities is nonexistent, are plagued by security issues. Thus, this study presents a zero-trust model that does not assign trust to any volunteer node (VN) and always verifies, using a server-client topology for all communications, whether internal or external (between VNs and the system). To ensure the model chooses only the most trusted VNs in the system, two sets of monitoring mechanisms are used. The first uses a series of reputation-based trust management mechanisms to filter VNs at various critical points in their life cycle. This set of mechanisms helps the volunteer cloud management system detect malicious activities, violations, and failures among VNs through innovative monitoring policies that lower the trust scores of less trusted VNs and reward the most trusted VNs during their life cycle in the system. The second set of mechanisms uses adaptive behavior-evaluation contexts in VN identity management by computing each node's challenge score and risk rate in order to predict a trust score. Furthermore, the study resulted in a volunteer-computing-as-a-service (VCaaS) cloud system that uses undedicated hosts as resources. Both cuCloud and the open-source CloudSim platform are used to evaluate the proposed model. The results show that zero-trust identity management for volunteer clouds can execute a range of applications securely, reliably, and efficiently. With the help of the proposed model, volunteer clouds can be a potential enabler for various edge computing applications. Edge computing could use volunteer cloud computing along with the proposed trust system and penalty module (ZTIMM and ZTIMM-P) to manage the identity of all VNs that are part of the volunteer edge computing architecture.
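To make the trust-score idea concrete, the sketch below combines a reputation value, a challenge score, and a risk rate into a single admission decision. The weighting, the discount by risk rate, and the threshold are illustrative assumptions rather than the exact ZTIMM formulation.

```python
def trust_score(reputation, challenge_score, risk_rate,
                alpha=0.6, beta=0.4, threshold=0.5):
    """Combine history and current behavior into a single trust value.

    `reputation`, `challenge_score`, and `risk_rate` are assumed to lie in
    [0, 1]; the weighted form and threshold are illustrative, not the exact
    ZTIMM formulation.
    """
    score = (alpha * reputation + beta * challenge_score) * (1.0 - risk_rate)
    return score, score >= threshold

score, admit = trust_score(reputation=0.8, challenge_score=0.9, risk_rate=0.2)
print(f"trust={score:.2f}, admit volunteer node: {admit}")
```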
68

Joint Resource Management and Task Scheduling for Mobile Edge Computing

Wei, Xinliang January 2023 (has links)
In recent years, edge computing has become an increasingly popular computing paradigm for enabling real-time data processing and mobile intelligence. Edge computing allows computing at the edge of the network, where data is generated, and distributes data to nearby edge servers to reduce data access latency and improve data processing efficiency. In addition, with the advance of the Artificial Intelligence of Things (AIoT), not only are massive amounts of data generated by everyday smart devices, such as smart light bulbs, smart cameras, and various sensors, but a large number of parameters of complex machine learning models also have to be trained and exchanged by these AIoT devices. Classical cloud-based platforms have difficulty communicating and processing these data and models effectively with sufficient privacy and security protection. Due to the heterogeneity of edge elements, including edge servers, mobile users, data resources, and computing tasks, the key challenge is how to effectively manage resources (e.g. data, services) and schedule tasks (e.g. ML/FL tasks) in the edge clouds so as to meet the QoS of mobile users or maximize the platform's utility. To that end, this dissertation studies joint resource management and task scheduling for mobile edge computing. The key contributions of the dissertation are two-fold. First, we study the data placement problem in edge computing and propose a popularity-based method as well as several load-balancing strategies to effectively place data in the edge network. We further investigate a joint resource placement and task dispatching problem and formulate it as an optimization problem. We propose a two-stage optimization method and a reinforcement learning (RL) method to maximize the total utility of all tasks. Second, we focus on a specific computing task, federated learning (FL), and study the joint participant selection and learning scheduling problem for multi-model federated edge learning. We formulate a joint optimization problem and propose several multi-stage optimization algorithms to solve it. To further improve FL performance, we leverage the power of quantum computing (QC) and propose a hybrid quantum-classical Benders' decomposition (HQCBD) algorithm, as well as a multiple-cuts version, to accelerate the convergence of the HQCBD algorithm. We show that the proposed algorithms achieve the same optimal value as classical Benders' decomposition running on a classical CPU, but with fewer convergence iterations. / Computer and Information Science
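The popularity-based placement with load balancing can be sketched as a greedy loop: place the most popular items first, each on the least-loaded edge server with remaining capacity. This is an illustrative simplification under assumed inputs, not the dissertation's formal algorithm.

```python
def place_data(items, servers):
    """Greedy popularity-based placement with simple load balancing.

    `items` maps item -> popularity; `servers` maps server -> capacity
    (in item slots). Most popular items are placed first, each on the
    currently least-loaded server that still has room.
    """
    load = {s: 0 for s in servers}
    placement = {}
    for item, _pop in sorted(items.items(), key=lambda kv: kv[1], reverse=True):
        feasible = [s for s in servers if load[s] < servers[s]]
        if not feasible:
            break  # remaining items stay in the remote cloud
        target = min(feasible, key=load.get)
        placement[item] = target
        load[target] += 1
    return placement

print(place_data({"d1": 90, "d2": 70, "d3": 40, "d4": 10},
                 {"edge1": 2, "edge2": 1}))
```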
69

Light Field Video Processing and Streaming Using Applied AI

Hu, Xinjue 16 November 2022 (has links)
As a new form of volumetric media, a Light Field (LF) can provide users with a true 6 Degrees-Of-Freedom (DOF) immersive experience, because an LF captures the scene with photo-realism, including aperture-limited changes in viewpoint. Nevertheless, the large size and high dimensionality of LF data pose greater challenges for processing and transmission. The main focus of this study is the application of applied Artificial Intelligence (AI) methods to the transmission and processing of LF data, thereby alleviating the performance bottlenecks of existing methods. Uncompressed LF data are too large for network transmission, which is why LF compression has become an important research topic. A new LF compression algorithm based on Graph Neural Networks (GNNs) is proposed in this work. It uses the graph network model to fit the similarity between LF viewpoints, so that only the data of a few essential anchor viewpoints need to be transmitted after compression, and a complete LF matrix can be reconstructed from the graph model at the decoding end. Through the design of a two-layer compression structure, this method also addresses the weak generalization of LF reconstruction algorithms when dealing with high-frequency components. Compared with existing compression methods, this algorithm achieves a higher compression ratio and better quality. Furthermore, to improve adaptability to the real-time requirements of different LF applications and the robustness requirements of unreliable network environments, an adaptive LF video transmission scheme based on Multiple Description Coding (MDC) is proposed. It divides the LF matrix into descriptions at different downsampling ratios and optimizes the scheduling of the description transmission queue. It also adaptively adjusts the design of the basic GNN unit so that the proposed method can respond more flexibly to real-time changes in user viewpoint requests, saving unnecessary viewpoint transmission overhead as much as possible and minimizing the adverse impact of network packet loss and network status fluctuations on LF transmission services. For LF processing, depth estimation has been a very hot topic in recent years. To achieve a good balance between performance on narrow- and wide-baseline LF data, a novel optical-flow-based LF depth estimation scheme is proposed, which uses a convolutional neural network (CNN) to predict the patch matrix after the optical-flow offset. After the optical-flow-assisted offset, the disparity between patches is normalized to a unified numerical range, which effectively mitigates the overfitting problem of LF depth estimation networks caused by the uneven distribution of the baseline range of LF samples. Experimental results show that the proposed uniform-patch-based estimation mechanism generalizes well to LF data with different baselines and is compatible with various existing narrow-baseline LF depth estimation algorithms. Finally, since LF processing places high requirements on both the computing and caching capabilities of the infrastructure, a framework that combines Multi-access Edge Computing (MEC) technology with LF applications is proposed in this thesis.
In this study, the problem is transformed via Lyapunov optimization, and an optimized search algorithm based on the Markov approximation method is designed that adaptively schedules and adjusts the task offloading strategy and resource allocation scheme, so as to provide users with the best service experience in the LF viewpoint interpolation task. Numerical results demonstrate that this edge-based framework can achieve a dynamic balance between energy and caching consumption while meeting the low-latency requirements of LF applications.
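The MDC idea of splitting the LF into independently decodable descriptions can be illustrated by subsampling the viewpoint grid in a staggered pattern, so that any received subset still covers the scene coarsely. The fixed 2x2 pattern below is an assumed example, not the adaptive downsampling scheme proposed in the thesis.

```python
def split_descriptions(num_rows, num_cols):
    """Split an LF viewpoint grid into four subsampled descriptions.

    Each description keeps every second viewpoint in a staggered pattern,
    so missing descriptions only reduce angular resolution rather than
    dropping whole regions of the grid. The fixed 2x2 pattern is an
    illustrative assumption.
    """
    descriptions = {0: [], 1: [], 2: [], 3: []}
    for r in range(num_rows):
        for c in range(num_cols):
            descriptions[2 * (r % 2) + (c % 2)].append((r, c))
    return descriptions

for d, views in split_descriptions(4, 4).items():
    print(f"description {d}: {views}")
```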
70

Low latency and Resource efficient Orchestration for Applications in Mobile Edge Cloud

Doan, Tung 21 March 2023 (has links)
Recent years have witnessed an increasing number of mobile devices, such as smartphones and tablets, characterized by low computing and storage capabilities. Meanwhile, there is explosive growth of applications on mobile devices that require high computing and storage capabilities. These challenges led to the introduction of cloud computing, which empowers mobile devices with remote computing and storage resources. However, cloud computing is centrally designed and thus encounters noticeable issues such as high communication latency and potential vulnerability. To tackle the problems posed by central cloud computing, the Mobile Edge Cloud (MEC) has recently been introduced to bring computing and storage resources into proximity with mobile devices, such as at base stations or shopping centers. MEC has therefore become a key enabling technology for various emerging use cases such as autonomous driving and the tactile internet. Despite these potential benefits, the design of MEC makes the deployment of applications challenging. First, as MEC aims to bring computation and storage resources closer to mobile devices, the MEC servers that provide those resources become extremely diverse in the network. Moreover, MEC servers typically have a small-footprint design so that they can be flexibly placed at various locations, and thus provide limited resources. The challenge is to deploy applications in a cost-efficient manner. Second, applications have stringent requirements such as high mobility or low latency. The challenge is to deploy applications in MEC so as to satisfy these needs. Considering the above challenges, this thesis studies the orchestration of MEC applications. In particular, for computation offloading, we propose offloading schemes for immersive MEC applications such as Augmented Reality and Virtual Reality (AR/VR) that exploit application characteristics. For resource optimization, since many MEC applications such as gaming and streaming require the support of network functions such as encoders and decoders, we first present placement schemes that allow network functions to be shared efficiently among multiple MEC applications. We then introduce the design of the proposed MANO framework in MEC, advocating the joint orchestration of MEC applications and network functions. For mobility support, low-latency applications for use cases such as autonomous driving have to migrate seamlessly from one MEC server to another, following the mobility of the mobile device, to guarantee low-latency communication. Traditional migration approaches based on virtual machine (VM) or container migration suspend the application at one MEC server and then recover it at another. These approaches require the transfer of the entire VM or container state and consequently lead to service interruption due to high migration time. Therefore, we advocate migration techniques that take advantage of application state.
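The application-state migration argued for above can be sketched as serializing only a small application-level state object and restoring it on the target MEC server, instead of transferring a whole VM or container image. The class, fields, and in-process handoff below are hypothetical illustrations, not the thesis's implementation.

```python
import json

class EdgeApp:
    """Toy application whose session state fits in a small dictionary."""

    def __init__(self, state=None):
        self.state = state or {"user": None, "frame_id": 0, "results": []}

    def export_state(self):
        # Only the application-level state is serialized -- a few bytes,
        # instead of a full VM/container snapshot.
        return json.dumps(self.state).encode()

    @classmethod
    def import_state(cls, blob):
        return cls(json.loads(blob.decode()))

# Migration between two MEC servers, sketched as an in-process handoff.
source_instance = EdgeApp({"user": "car-42", "frame_id": 1512, "results": ["stop"]})
blob = source_instance.export_state()          # transferred over the network
target_instance = EdgeApp.import_state(blob)   # resumed on the target server
print("resumed at frame", target_instance.state["frame_id"])
```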
