1 |
BC Framework for CAV Edge Computing. Chen, Haidi, 05 1900 (has links)
Edge computing and Connected Autonomous Vehicles (CAVs) are complementary fields. With its low latency and high responsiveness, edge computing is a better fit than cloud computing for CAV workloads. Moreover, containerized applications eliminate the tedious procedures for setting up the required environment, so deploying applications on new machines is far more user-friendly than before. This paper therefore proposes a framework developed for the CAV edge computing scenario. The framework consists of several programs written in different languages and uses Docker to containerize these applications so that deployment is simple and easy. It has two parts. One runs on the vehicle's on-board unit, which exposes data to the closest edge device and receives the output generated by that device. The other runs on the edge device, which collects and processes the heavy data load and broadcasts output to the vehicles, so a vehicle does not need to perform heavyweight tasks that could drain its limited power.
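As an illustration of the two-part split described above, the following minimal sketch models the exchange between a vehicle on-board unit and its nearest edge device as two plain Python classes passing messages in memory. The class and method names (OnBoardUnit, EdgeDevice, process_and_broadcast) and the advisory-speed example are assumptions made for illustration; the actual framework packages these roles as separate Docker containers communicating over the network.

```python
import json
import random


class EdgeDevice:
    """Stands in for the containerized edge-side service: it collects sensor
    batches from nearby vehicles, performs the heavy processing, and
    broadcasts the result back (hypothetical API for illustration)."""

    def __init__(self):
        self.inbox = []

    def receive(self, payload: str) -> None:
        self.inbox.append(json.loads(payload))

    def process_and_broadcast(self) -> dict:
        # The heavyweight aggregation happens at the edge, not on the vehicle.
        speeds = [r["speed_kmh"] for r in self.inbox]
        return {"advisory_speed_kmh": sum(speeds) / len(speeds)}


class OnBoardUnit:
    """Stands in for the lightweight vehicle-side container: it only exposes
    raw readings and consumes the edge device's output."""

    def __init__(self, vehicle_id: str, edge: EdgeDevice):
        self.vehicle_id = vehicle_id
        self.edge = edge

    def publish_reading(self) -> None:
        reading = {"vehicle": self.vehicle_id,
                   "speed_kmh": random.uniform(40, 110)}
        self.edge.receive(json.dumps(reading))


if __name__ == "__main__":
    edge = EdgeDevice()
    obus = [OnBoardUnit(f"veh-{i}", edge) for i in range(3)]
    for obu in obus:
        obu.publish_reading()
    print(edge.process_and_broadcast())
```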
|
2 |
Agile and Scalable Design and Dimensioning of NFV-Enabled MEC Infrastructure to Support Heterogeneous Latency-Critical Applications. Abou Haibeh, Lina, 12 May 2023 (has links)
Mobile edge computing (MEC) has recently been introduced as a key technology in response to the emergence of new heterogeneous computing applications, resource-constrained mobile devices, and the long delays of traditional cloud data centers. Although many researchers have studied how heterogeneous latency-critical application requirements interact with the MEC system, very few have addressed how to deploy a flexible and scalable MEC infrastructure at the mobile operator for the expected heterogeneous mobile traffic.
The proposed system model in this research project relies on the Network Function Virtualization (NFV) concept to virtualize the MEC infrastructure and provide a scalable and flexible infrastructure regardless of the underlying physical hardware. In NFV-enabled networks, the received mobile workload is often deployed as Service Function Chains (SFCs), which are responsible for fulfilling users' service requests by steering traffic through different VNF types and virtual links. Thus, efficient VNF placement and orchestration mechanisms are required to address the challenges of heterogeneous user requests, varied Quality of Service (QoS) requirements, and network traffic dynamicity.
This research project addresses the scalable design and dimensioning of an agile NFV-enabled MEC infrastructure from a dual perspective. First, a neural network model (i.e., a subset of machine learning) helps proactively auto-scale the various virtual service instances by predicting the number of SFCs required for a time-varying mobile traffic load. Second, a Mixed-Integer Linear Program (MILP) is used to create the physical MEC system infrastructure by mapping the predicted virtual SFC networks onto the MEC nodes while minimizing deployment costs. Numerical results show that the machine learning (ML) model achieves a high prediction accuracy of 95.6%, which demonstrates the added value of using the ML technique at the edge network in reducing deployment costs while ensuring the delay requirements of different latency-critical applications with high acceptance rates. Due to the exponential nature of the MILP formulation, we also propose a scalable Benders decomposition approach with near-optimal results at a significantly reduced design and dimensioning cost. Numerical results show the viability of the Benders decomposition approach in its proximity to the optimal dimensioning cost and its reasonable solution time.
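To make the two-stage idea concrete, the sketch below pairs a trivial demand predictor with a greedy placement routine that maps predicted SFC instances onto MEC nodes under capacity and delay limits. All numbers and names are invented for illustration, and the greedy loop is only a simplified stand-in for the neural-network predictor, the MILP, and the Benders decomposition used in the thesis.

```python
from dataclasses import dataclass


@dataclass
class MecNode:
    name: str
    cpu_capacity: int     # available vCPUs
    deploy_cost: float    # cost per hosted SFC instance
    delay_ms: float       # access delay seen by users of this node
    used: int = 0


def predict_sfc_count(recent_loads, per_sfc_capacity=50):
    """Toy stand-in for the neural-network predictor: scale the most recent
    load by a naive trend and size the SFC pool accordingly."""
    trend = recent_loads[-1] / max(recent_loads[0], 1)
    expected = recent_loads[-1] * trend
    return max(1, round(expected / per_sfc_capacity))


def place_sfcs(n_sfcs, nodes, cpu_per_sfc=2, max_delay_ms=20.0):
    """Greedy cost-minimizing placement (simplified stand-in for the MILP):
    each SFC instance goes to the cheapest node that still has capacity
    and meets the delay bound."""
    placement = []
    for i in range(n_sfcs):
        feasible = [n for n in nodes
                    if n.used + cpu_per_sfc <= n.cpu_capacity
                    and n.delay_ms <= max_delay_ms]
        if not feasible:
            raise RuntimeError("request rejected: no feasible MEC node")
        best = min(feasible, key=lambda n: n.deploy_cost)
        best.used += cpu_per_sfc
        placement.append((f"sfc-{i}", best.name))
    return placement


if __name__ == "__main__":
    nodes = [MecNode("edge-a", cpu_capacity=8, deploy_cost=1.0, delay_ms=5),
             MecNode("edge-b", cpu_capacity=16, deploy_cost=1.4, delay_ms=12)]
    demand = predict_sfc_count([120, 150, 180])
    print(demand, place_sfcs(demand, nodes))
```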
|
3 |
Multi-Access Edge Computing Assisted Mobile Ad-hoc Cloud. Bhuchhada, Jay Kumar, 05 September 2019 (links)
Mobile Ad-hoc Cloud offers users the capability to offload intensive tasks to a cloud composed of voluntary mobile devices. Because these devices are available in close proximity, intensive tasks can be processed locally. In addition, the literature referred to in the text identifies a specific class of applications that is well served when processed at the user level. However, the lack of commitment, the mobility, and the unpredictability of the mobile devices make providing a rich ad-hoc cloud service challenging. Furthermore, the resource availability of these devices impacts the service offered to the requester.
As a result, this thesis aims to address the challenges mentioned above. With the support of Multi-Access Edge Computing (MEC), a mobile ad-hoc Infrastructure-as-a-Service composition framework is proposed. An ad-hoc application server is designed to operate on the MEC platform to compose and manage the mobile ad-hoc cloud. The server uses the information provided by MEC services to compose volunteer resources for a given request. As well, a heuristic approach to multi-dimensional bin packing is considered, extending the Euclidean distance for sub-task selection. In addition, to address the lack of resource availability, an architecture for the Mobile Ad-hoc Cloud (MAC) using SDN is proposed. The logically centralized controller works with the application server to migrate requests seamlessly from one region to another. Inspired by the benefits of MEC, a mobility mechanism is introduced to address the movement of participants. Finally, the evaluation showed that the proposed MAC framework not only made better use of resources but also provided a consistent and scalable service.
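The following sketch illustrates the kind of Euclidean-distance heuristic mentioned above: each volunteer device is a multi-dimensional bin (CPU, memory, battery), and each sub-task is assigned to the device whose residual capacity vector lies closest to the sub-task's demand vector. The dimensions, example values, and tie-breaking rules are illustrative assumptions, not the thesis's exact formulation.

```python
import math


def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def assign_subtasks(subtasks, devices):
    """Greedy multi-dimensional bin packing: for every sub-task demand
    vector (cpu, mem, battery), pick the volunteer device whose remaining
    capacity vector is closest in Euclidean distance and still fits."""
    assignment = {}
    residual = {name: list(cap) for name, cap in devices.items()}
    for task_id, demand in subtasks.items():
        candidates = [(euclidean(cap, demand), name)
                      for name, cap in residual.items()
                      if all(c >= d for c, d in zip(cap, demand))]
        if not candidates:
            assignment[task_id] = None      # no volunteer can host it
            continue
        _, chosen = min(candidates)
        assignment[task_id] = chosen
        residual[chosen] = [c - d for c, d in zip(residual[chosen], demand)]
    return assignment


if __name__ == "__main__":
    devices = {"phone-1": (4.0, 2.0, 0.8),   # cpu cores, GB RAM, battery frac
               "tablet-1": (6.0, 4.0, 0.5)}
    subtasks = {"t1": (2.0, 1.0, 0.2), "t2": (3.0, 2.0, 0.2)}
    print(assign_subtasks(subtasks, devices))
```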
|
4 |
Online Optimization for Edge Computing under Uncertainty in Wireless Networks. Lee, Gilsoo, 24 April 2020 (has links)
Edge computing is an emerging technology that can overcome the limitations of centralized cloud computing by enabling distributed, low-latency computation at the network edge. In particular, in edge computing, some of the cloud's functionalities such as storage, processing, and computing are migrated to end-user devices called edge nodes so as to reduce the round-trip delay needed to reach the cloud data center. Despite the major benefits and practical applications of edge computing, one must address many technical challenges that include edge network formation, computational task allocation, and radio resource allocation, while considering the uncertainties inherent in edge nodes, such as incomplete future information on their wireless channel gains and computing capabilities. The goal of this dissertation is to develop foundational science for the deployment, performance analysis, and low-complexity optimization of edge computing under the aforementioned uncertainties. First, the problems of edge network formation and task distribution are jointly investigated while considering a hybrid edge-cloud architecture under uncertainty about the arrivals of computing tasks. In particular, a novel online framework is proposed to form an edge network, distribute the computational tasks, and update a target competitive ratio defined as the ratio between the latency achieved by the proposed online algorithm and the optimal latency. The results show that the proposed framework achieves the target competitive ratio, which is affected by the wireless data rate and the computing speeds of the edge nodes. Next, a new notion of ephemeral edge computing is proposed in which edge computing must occur under a stringent requirement on the total computing time available for the computing process. To maximize the number of computed tasks in ephemeral edge networks under uncertainty about future task arrivals, a novel online framework is proposed to enable a source edge node to offload computing tasks from sensors and allocate them to neighboring edge nodes for distributed task computing, within the limited total time period. Then, edge computing is applied to mobile blockchain and online caching systems. First, a mobile blockchain framework is designed to use edge devices as mobile miners, and the performance is analyzed in terms of the probability of a forking event and energy consumption. Second, an online computational caching framework is designed to minimize the edge network latency. The proposed caching framework enables each edge node to store intermediate computation results (IRs) from previous computations and download IRs from neighboring nodes under uncertainty about future computations. Subsequently, online optimization is extended to investigate other edge networking applications. In particular, the problem of online ON/OFF scheduling of self-powered small cell base stations is studied in the presence of energy harvesting uncertainty, with the goal of minimizing operational costs that consist of the energy consumption and transmission delay of a network. Such a framework can enable the self-powered base stations to function as energy-efficient edge nodes. Also, the problem of radio resource allocation is studied when a base station is assisted by self-powered reconfigurable intelligent surfaces (RIS).
To this end, a deep reinforcement learning approach is proposed to jointly optimize the transmit power, phase shifting, and the ON/OFF states of the RIS reflectors under uncertainty about the downlink wireless channel information and the energy harvested at the RIS. Finally, the online problem of dynamic channel allocation is studied for full-duplex device-to-device (D2D) networks so that D2D users can share their data with low communication latency as users dynamically arrive in the network. In conclusion, the analytical foundations and frameworks presented in this dissertation will provide key guidelines for the effective design of edge computing in wireless networks. / Doctor of Philosophy / Smart cities will rely on an Internet of Things (IoT) system that interconnects cars, drones, sensors, home appliances, and other digital devices. Modern IoT systems are inherently designed to process real-time information such as temperature, humidity, or even car navigation data, at any time and location. A unique challenge in the design of such an IoT is the need to process large volumes of data over a wireless network that consists of heterogeneous IoT devices such as smartphones, vehicles, home access points, robots, and drones. These devices must perform local (on-device or so-called edge) processing of their data without relying on a remote cloud. This vision of a smart city as a mobile computing platform gives rise to the emerging concept of edge computing, with which smartphones, sensors, vehicles, and drones can exchange and process data locally on their own devices. Edge computing overcomes the limitations of centralized cloud computation by enabling distributed, low-latency computation at the network edge.
Despite the promising opportunities of edge computing as an enabler for smart city services such as autonomous vehicles, drones, or smart homes, one must address many challenges related to managing time-varying resources, such as energy and storage, in a dynamic way. For instance, managing communication, energy, and computing resources in an IoT requires handling many uncertain factors, such as the intermittent availability of wireless connectivity and the fact that the devices do not know a priori what type of tasks they need to process. The goal of this dissertation is to address the fundamental challenges of edge computing under uncertainty in an IoT. In particular, this dissertation introduces novel mathematical algorithms and frameworks that exploit ideas from the fields of online optimization, machine learning, and wireless communication to enable future IoT services such as smart factories, virtual reality, and autonomous systems. In this dissertation, holistic frameworks are developed by designing, analyzing, and optimizing wireless communication systems with an emphasis on emerging IoT applications. To this end, various mathematical frameworks and efficient algorithms are proposed by drawing on tools from wireless communications, online optimization, and machine learning to yield key innovations. The results show that the developed solutions can enable an IoT to operate efficiently in the presence of uncertainty stemming from time-varying dynamics such as the mobility of vehicles or changes in the wireless networking environment. As such, the outcomes of this research can be used as a building block for the large-scale deployment of smart city technologies that rely heavily on the IoT.
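As a concrete illustration of the online task-distribution idea described in the technical abstract above, the sketch below greedily assigns arriving tasks to the edge node that currently finishes them earliest and reports an empirical competitive ratio against a crude offline lower bound. The node parameters, the cycles-per-bit figure, and the lower bound are illustrative assumptions and do not reproduce the dissertation's analysis.

```python
def online_assign(tasks_bits, nodes):
    """Greedily send each arriving task to the node with the earliest
    estimated completion time (transmit + compute). Each node is a dict
    with 'rate_bps' and 'cycles_per_sec' entries."""
    finish = [0.0] * len(nodes)
    for bits in tasks_bits:                      # tasks arrive one by one
        costs = [bits / n["rate_bps"] + bits * 100 / n["cycles_per_sec"]
                 for n in nodes]                 # assume 100 cycles per bit
        best = min(range(len(nodes)), key=lambda i: finish[i] + costs[i])
        finish[best] += costs[best]
    return max(finish)                           # makespan = overall latency


def offline_lower_bound(tasks_bits, nodes):
    """Crude lower bound: each task at its best node, work spread evenly."""
    per_task_best = [min(b / n["rate_bps"] + b * 100 / n["cycles_per_sec"]
                         for n in nodes) for b in tasks_bits]
    return sum(per_task_best) / len(nodes)


if __name__ == "__main__":
    nodes = [{"rate_bps": 2e6, "cycles_per_sec": 1e9},
             {"rate_bps": 5e6, "cycles_per_sec": 5e8}]
    tasks = [4e5, 8e5, 2e5, 6e5]                 # task sizes in bits
    ratio = online_assign(tasks, nodes) / offline_lower_bound(tasks, nodes)
    print("empirical competitive ratio approx.", round(ratio, 2))
```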
|
5 |
Self-Adaptive Edge Services: Enhancing Reliability, Efficiency, and Adaptiveness under Unreliable, Scarce, and Dissimilar Resources. Song, Zheng, 27 May 2020 (has links)
As compared to traditional cloud computing, edge computing provides computational, sensor, and storage resources co-located with client requests, thereby reducing network transmission and providing context awareness. While server farms can allocate cloud computing resources on demand at runtime, edge-based heterogeneous devices, ranging from stationary servers to mobile, IoT, and energy-harvesting devices, are not nearly as reliable or abundant. As a result, edge application developers face the following obstacles: 1) heterogeneous devices provide hard-to-access resources, due to dissimilar capabilities, operating systems, execution platforms, and communication interfaces; 2) unreliable resources cause high failure rates, due to device mobility, low energy status, and other environmental factors; 3) resource scarcity hinders performance; 4) the dissimilar and dynamic resources across edge environments make QoS impossible to guarantee. Edge environments are characterized by the prevalence of equivalent functionalities, which satisfy the same application requirements by different means. The thesis of this research is that equivalent functionalities can be exploited to improve the reliability, efficiency, and adaptiveness of edge-based services. To prove this thesis, this dissertation comprises three key interrelated research thrusts: 1) create a system architecture and programming support for providing edge services that run on heterogeneous and ever-changing edge devices; 2) introduce programming abstractions for executing equivalent functionalities; 3) apply equivalent functionalities to improve the reliability, efficiency, and adaptiveness of edge services. We demonstrate how connected devices with unreliable, dynamic, and scarce resources can automatically form a reliable, adaptive, and efficient execution environment for sensing, computing, and other non-trivial tasks. This dissertation is based on five conference papers, presented at ICDCS'20, ICWS'19, EDGE'19, CLOUD'18, and MobileSoft'18. / Doctor of Philosophy / As mobile and IoT devices generate ever-increasing volumes of sensor data, it has become impossible to transfer this data to remote cloud-based servers for processing. As an alternative, edge computing coordinates nearby computing resources that can be used for local processing. However, while cloud computing resources are abundant and reliable, edge computing resources are scarce and unreliable. This dissertation research introduces novel execution strategies that make it possible to provide reliable, efficient, and flexible edge-based computing services in dissimilar edge environments.
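The notion of equivalent functionalities lends itself to a simple fallback pattern: try one concrete way of satisfying a request, and if the hosting device fails or times out, fall back to another functionally equivalent way. The sketch below captures just that pattern; the provider names and failure model are invented for illustration and are far simpler than the programming abstractions developed in the dissertation.

```python
def ocr_on_gpu_phone(image):
    # Simulated failure: the volunteer phone moved out of range.
    raise TimeoutError("phone moved out of range")


def ocr_on_stationary_server(image):
    return f"text extracted from {image} on the edge server"


def ocr_via_cloud_relay(image):
    return f"text extracted from {image} via a cloud relay"


def run_equivalent(providers, request):
    """Execute the first equivalent functionality that succeeds. Each
    provider satisfies the same application requirement by a different
    means (different device, platform, or algorithm)."""
    for provider in providers:
        try:
            return provider(request)
        except Exception as err:
            print(f"{provider.__name__} failed ({err}); trying next equivalent")
    raise RuntimeError("all equivalent functionalities failed")


if __name__ == "__main__":
    equivalents = [ocr_on_gpu_phone, ocr_on_stationary_server, ocr_via_cloud_relay]
    print(run_equivalent(equivalents, "receipt.jpg"))
```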
|
6 |
Computational Offloading for Real-Time Computer Vision in Unreliable Multi-Tenant Edge Systems. Jackson, Matthew Norman, 26 June 2023 (has links)
The demand for serving Computer Vision applications at the Edge, where Edge Devices generate vast quantities of data, clashes with the reality that many Devices are largely unable to process their data in real time. While computational offloading, not to the Cloud but to nearby Edge Nodes, offers convenient acceleration for these applications, such systems are not without their constraints. As Edge networks may be unreliable or wireless, offloading quality is sensitive to communication bottlenecks. Unlike seemingly unlimited Cloud resources, an Edge Node serving multiple clients may incur delays due to resource contention. This project describes relevant Computer Vision workloads and shows how an effective offloading framework must adapt to constraints that impact the Quality of Service yet have not been adequately addressed in previous literature. We design an offloading controller, based on closed-loop control theory, that enables Devices to maximize their throughput by offloading appropriately under variable conditions. This approach ensures a Device can utilize the maximum available offloading bandwidth. Finally, we constructed a realistic testbed and conducted measurements to demonstrate the superiority of our offloading controller over previous techniques. / Master of Science / Devices like security cameras and some Internet of Things gadgets produce valuable real-time video for AI applications. A field within AI research called Computer Vision aims to use this visual data to compute a variety of useful workloads in a way that mimics the human visual system. However, many workloads, such as classifying objects shown in a video, have large computational demands, especially when we want to keep up with the frame rate of a real-time video. Unfortunately, these devices, called Edge Devices because they are located far from Cloud datacenters at the edge of the network, are notoriously weak at Computer Vision algorithms and, if running on a battery, will drain it quickly. In order to keep up, we can offload the computation of these algorithms to nearby servers, but we need to keep in mind that the bandwidth of the network might be variable and that too many clients connected to a single server will overload it. A slow network or an overloaded server incurs delays that reduce processing throughput. This project describes relevant Computer Vision workloads and shows that an offloading framework that adapts to these constraints has not yet been addressed in previous literature. We designed an offloading controller that measures feedback from the system and adapts how a Device offloads computation, in order to achieve the best possible throughput despite variable conditions. Finally, we constructed a realistic testbed and conducted measurements to demonstrate the superiority of our offloading controller over previous techniques.
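To give a flavor of the closed-loop idea described above, the sketch below implements a small proportional controller that adjusts the fraction of frames a device offloads based on measured latency relative to a frame deadline. The gains, target, starting fraction, and toy latency model are illustrative assumptions, not the controller designed and tuned in the thesis.

```python
class OffloadController:
    """Proportional feedback controller: raise the offload fraction while
    measured latency is below the frame deadline, back off when the network
    or a multi-tenant Edge Node becomes the bottleneck."""

    def __init__(self, target_latency_ms, gain=0.002):
        self.target = target_latency_ms
        self.gain = gain
        self.offload_fraction = 0.2        # start conservatively

    def update(self, measured_latency_ms):
        error = self.target - measured_latency_ms   # positive = headroom
        self.offload_fraction += self.gain * error
        self.offload_fraction = min(1.0, max(0.0, self.offload_fraction))
        return self.offload_fraction


def simulated_latency(fraction, contention):
    """Toy plant: more offloading and more server contention raise latency."""
    return 20 + 60 * fraction * contention


if __name__ == "__main__":
    ctrl = OffloadController(target_latency_ms=50)
    for step, contention in enumerate([1.0] * 10 + [2.0] * 10):
        latency = simulated_latency(ctrl.offload_fraction, contention)
        frac = ctrl.update(latency)
        if step % 5 == 0:
            print(f"step {step:2d}: latency {latency:5.1f} ms, offload {frac:.2f}")
```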
|
7 |
System Infrastructure for Mobile-Cloud Convergence. Ha, Kiryong, 01 December 2016 (has links)
The convergence of mobile computing and cloud computing enables new mobile applications that are both resource-intensive and interactive. For these applications, end-to-end network bandwidth and latency matter greatly when cloud resources are used to augment the computational power and battery life of a mobile device. This dissertation designs and implements a new architectural element called a cloudlet that arises from the convergence of mobile computing and cloud computing. Cloudlets represent the middle tier of a 3-tier hierarchy (mobile device, cloudlet, cloud) that achieves the right balance between cloud consolidation and network responsiveness. We first present quantitative evidence showing that cloud location can affect the performance of mobile applications and cloud consolidation. We then describe an architectural solution using cloudlets that are a seamless extension of today's cloud computing infrastructure. Finally, we define the minimal functionalities that cloudlets must offer above and beyond standard cloud computing, and address the corresponding technical challenges.
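The cloudlet argument can be summarized as a latency-budget comparison across the three tiers. The sketch below picks an execution site for a task by estimating transfer plus compute time on the device, a nearby cloudlet, and the distant cloud; the round-trip times, bandwidths, and speedups are illustrative assumptions rather than measurements from the dissertation.

```python
from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    rtt_ms: float          # network round-trip time to reach this tier
    bandwidth_mbps: float  # uplink bandwidth toward this tier
    speedup: float         # compute speed relative to the mobile device


def completion_time_ms(tier, input_mb, local_compute_ms):
    # Local execution needs no transfer; remote tiers pay RTT plus upload.
    transfer = 0.0 if tier.speedup == 1.0 else \
        tier.rtt_ms + input_mb * 8 / tier.bandwidth_mbps * 1000
    return transfer + local_compute_ms / tier.speedup


def choose_tier(input_mb, local_compute_ms):
    tiers = [Tier("device", rtt_ms=0, bandwidth_mbps=float("inf"), speedup=1.0),
             Tier("cloudlet", rtt_ms=5, bandwidth_mbps=50, speedup=8.0),
             Tier("cloud", rtt_ms=80, bandwidth_mbps=10, speedup=16.0)]
    return min(tiers, key=lambda t: completion_time_ms(t, input_mb, local_compute_ms))


if __name__ == "__main__":
    # A 2 MB frame that would take 400 ms to process on the phone itself.
    best = choose_tier(input_mb=2.0, local_compute_ms=400)
    print("run on:", best.name)
```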
|
8 |
Mobility-Oriented Data Retrieval for Computation Offloading in Vehicular Edge Computing. Soto Garcia, Victor, 21 February 2019 (has links)
Vehicular edge computing (VEC) brings the cloud paradigm to the edge of the network, allowing nodes such as Roadside Units (RSUs) and On-Board Units (OBUs) in vehicles to perform services with location awareness and low delay requirements. Furthermore, it alleviates the bandwidth congestion caused by the large number of data requests in the network. One of the major components of VEC, computation offloading, has gained increasing attention with the emergence of mobile and vehicular applications with high-computing and low-latency demands, such as Intelligent Transportation Systems and IoT-based applications. However, existing challenges need to be addressed for vehicles' resources to be used efficiently. The primary challenge is the mobility of the vehicles, followed by intermittent or absent connectivity. Therefore, the MPR (Mobility Prediction Retrieval) data retrieval protocol proposed in this work allows VEC to efficiently retrieve the processed output data of an offloaded application by using both vehicles and roadside units as communication nodes. The developed protocol uses geo-location information about the network infrastructure and the users to accomplish efficient data retrieval in a Vehicular Edge Computing environment. Moreover, the proposed MPR protocol relies on both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication to achieve reliable retrieval of data, giving it a higher retrieval rate than methods that use V2I or V2V only. Finally, the experiments performed show that the proposed protocol achieves more reliable data retrieval with lower communication delay when compared to related techniques.
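A toy version of the mobility-aware retrieval decision is sketched below: given the requesting vehicle's position and velocity, predict where it will be when the offloaded result is ready and hand the output either to an RSU it will pass (V2I) or to a nearby vehicle heading that way (V2V). The positions, radio range, and straight-line prediction are illustrative assumptions and are much simpler than the MPR protocol itself.

```python
import math


def predict_position(pos, velocity, seconds):
    """Straight-line mobility prediction for the requesting vehicle."""
    return (pos[0] + velocity[0] * seconds, pos[1] + velocity[1] * seconds)


def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def choose_relay(vehicle_pos, vehicle_vel, ready_in_s, rsus, nearby_vehicles,
                 radio_range_m=300.0):
    """Pick the node expected to be within radio range of the vehicle's
    predicted position when the result is ready: prefer V2I (an RSU on the
    path), otherwise fall back to a V2V relay."""
    future = predict_position(vehicle_pos, vehicle_vel, ready_in_s)
    reachable_rsus = [(distance(future, p), name) for name, p in rsus.items()
                      if distance(future, p) <= radio_range_m]
    if reachable_rsus:
        return "V2I", min(reachable_rsus)[1]
    reachable_cars = [(distance(future, p), name)
                      for name, p in nearby_vehicles.items()
                      if distance(future, p) <= radio_range_m]
    if reachable_cars:
        return "V2V", min(reachable_cars)[1]
    return "store-and-forward", None     # no relay predicted in range


if __name__ == "__main__":
    rsus = {"rsu-12": (900.0, 0.0), "rsu-13": (1800.0, 0.0)}
    cars = {"veh-7": (950.0, 10.0)}
    print(choose_relay((0.0, 0.0), (30.0, 0.0), ready_in_s=30,
                       rsus=rsus, nearby_vehicles=cars))
```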
|
9 |
Deployment of AI Model inside Docker on ARM-Cortex-based Single-Board Computer: Technologies, Capabilities, and Performance. WoldeMichael, Helina Getachew, January 2018 (has links)
IoT has become tremendously popular. It provides information access, processing, and connectivity for a huge number of devices and sensors. IoT systems, however, often do not process the information locally, but rather send it to remote locations in the Cloud. As a result, this adds a huge amount of data traffic to the network and additional delay to data processing. The latter can have a significant impact on applications that require fast response times, such as sophisticated artificial intelligence (AI) applications including augmented reality, face recognition, and object detection. Consequently, the edge computing paradigm, which enables computation of data near its source, has gained significant importance in recent years as a way to achieve fast response times. IoT devices can be employed to provide computational resources at the edge of the network, near the sensors and actuators. The aim of this thesis work is to design and implement an edge computing concept that brings AI models to a small embedded IoT device through the use of virtualization. Virtualization technology enables the easy packaging and shipping of applications to different hardware platforms, and it additionally enables the mobility of AI models between edge devices and the Cloud. We implement an AI model inside a Docker container, which is deployed on a Firefly-RK3399 single-board computer (SBC). Furthermore, we conduct CPU and memory performance evaluations of Docker on the Firefly-RK3399. The methodology adopted to reach our goal is experimental research. First, the relevant literature was studied, and the feasibility of our concept is demonstrated by implementation. We then set up an experiment that measures performance metrics by applying synthetic load in multiple scenarios. Results are validated by repeating the experiment and through statistical analysis. The results of this study show that an AI model can successfully be deployed and executed inside a Docker container on an ARM-Cortex-based single-board computer. A Docker image of the OpenFace face recognition model is built for the ARM architecture of the Firefly SBC. The performance evaluation reveals that the overhead of Docker in terms of CPU and memory is negligible. The research work covers the mechanisms by which an AI application can be containerized on the ARM architecture. We conclude that these methods can be applied to containerize software applications on ARM-based IoT devices. Furthermore, the insignificant overhead introduced by Docker makes it practical to deploy applications inside containers. The functionality of the IoT device, i.e., the Firefly-RK3399, is exploited in this thesis. The device is shown to be capable and powerful, and it provides insight for further studies.
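A measurement loop like the one sketched below is one way to collect the kind of CPU and memory figures discussed above. It assumes the docker-py client (pip package "docker") is installed and a local Docker daemon is reachable, and it uses a throwaway Alpine container with a synthetic CPU load as a stand-in for the containerized OpenFace model; treat it as an assumption-laden sketch rather than the thesis's actual measurement setup.

```python
import docker  # pip install docker


def cpu_percent(stats):
    """Standard Docker stats calculation: CPU delta over system delta,
    scaled by the number of online CPUs."""
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"].get("cpu_usage", {}).get("total_usage", 0))
    sys_delta = (stats["cpu_stats"].get("system_cpu_usage", 0)
                 - stats["precpu_stats"].get("system_cpu_usage", 0))
    if sys_delta <= 0:
        return 0.0
    return cpu_delta / sys_delta * stats["cpu_stats"].get("online_cpus", 1) * 100


def measure(image="alpine", command="sh -c 'yes > /dev/null'", samples=5):
    client = docker.from_env()
    container = client.containers.run(image, command, detach=True)
    try:
        for _ in range(samples):
            stats = container.stats(stream=False)   # one snapshot per call
            mem_mb = stats["memory_stats"].get("usage", 0) / 1e6
            print(f"cpu {cpu_percent(stats):6.1f} %   mem {mem_mb:7.1f} MB")
    finally:
        container.remove(force=True)


if __name__ == "__main__":
    measure()
```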
|
10 |
Efficient Resource Management for Video Applications in the Era of Internet-of-Things (IoT). Perala, Sai Saketh Nandan, 01 May 2018 (has links)
The Internet-of-Things (IoT) is a network of interconnected devices with sensing, monitoring, and processing functionalities that work cooperatively to offer services. Smart buildings, self-driving cars, house monitoring and management, and city electricity and pollution monitoring are some examples where IoT systems have already been deployed. Among the different kinds of devices in IoT, cameras play a vital role, since they can capture rich and valuable content. However, since multiple IoT devices share the same gateway, the data produced by high-definition cameras congests the network and depletes the available computational resources, resulting in Quality-of-Service degradation for the visual content. In this thesis, we present an edge-based resource management framework for serving video processing applications in an Internet-of-Things (IoT) environment. In order to support the computational demands of latency-sensitive video applications and utilize the available network resources effectively, we employ an edge-based resource management policy. We evaluate our proposed framework with a face recognition use case.
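The sketch below illustrates one simple form such an edge-based policy could take: an edge gateway admits camera streams in priority order until its CPU budget is exhausted and downgrades the frame rate of the rest so the shared uplink is not congested. The priorities, budgets, and downgrade rule are illustrative assumptions rather than the policy evaluated in the thesis.

```python
from dataclasses import dataclass


@dataclass
class CameraStream:
    name: str
    priority: int        # higher value = more important (e.g. entrance camera)
    fps: int
    cpu_per_fps: float   # CPU share needed per processed frame per second


def allocate(streams, cpu_budget, fallback_fps=5):
    """Admit streams at full frame rate in priority order; once the edge
    node's CPU budget runs out, serve the remaining streams at a reduced
    frame rate instead of forwarding raw video upstream."""
    plan, remaining = {}, cpu_budget
    for s in sorted(streams, key=lambda s: s.priority, reverse=True):
        full_cost = s.fps * s.cpu_per_fps
        if full_cost <= remaining:
            plan[s.name] = s.fps
            remaining -= full_cost
        else:
            degraded = min(s.fps, fallback_fps, int(remaining / s.cpu_per_fps))
            plan[s.name] = max(degraded, 0)
            remaining -= plan[s.name] * s.cpu_per_fps
    return plan


if __name__ == "__main__":
    cams = [CameraStream("lobby", priority=3, fps=30, cpu_per_fps=0.02),
            CameraStream("parking", priority=1, fps=30, cpu_per_fps=0.02),
            CameraStream("entrance", priority=5, fps=30, cpu_per_fps=0.02)]
    print(allocate(cams, cpu_budget=1.0))
```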
|