11. Software-Defined Computational Offloading for Mobile Edge Computing. Krishna, Nitesh. 03 May 2018.
Computational offloading advances the deployment of Mobile Edge Computing (MEC) in next-generation communication networks. However, the distributed nature of mobile users and the complexity of their applications make it challenging to schedule tasks reasonably among multiple devices. Therefore, by leveraging the ideas of Software-Defined Networking (SDN) and Service Composition (SC), we propose a Software-Defined Service Composition (SDSC) model. In this model, the SDSC controller is deployed at the edge of the network and composes services in a centralized manner to reduce task-execution latency and traffic on the access links while satisfying user-specific requirements. We formulate low-latency service composition as a Constraint Satisfaction Problem (CSP) to make it a user-centric approach. With the advent of SDN, a global view and control of the entire network are available to the network controller, which our SDSC approach further leverages.
Furthermore, service discovery and task offloading are designed for the MEC environment so that users can have a complex and robust system, and task execution is performed in a distributed manner. We also define a QoS model that provides the composition rule forming the best possible service composition at the time of need.
Moreover, we extend our SDSC model to account for the constant mobility of mobile devices. To address the mobility issue, we propose a mobility model and a mobility-aware QoS approach enabled in the SDSC model. Experimental simulation results demonstrate that our approach obtains better performance than an energy-saving greedy algorithm and a random offloading approach in a mobile environment.
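As a rough illustration of the CSP-style formulation this abstract describes, the sketch below enumerates candidate providers per service and keeps the lowest-latency composition that satisfies a latency budget and an access-link load budget. All service names, latency figures, and budgets are assumptions for illustration and do not come from the thesis.

```python
from itertools import product

# Hypothetical candidate providers for each service in the composition.
# Each candidate is (provider_name, expected_latency_ms, access_link_load).
candidates = {
    "face_detect": [("edge_a", 12, 0.2), ("cloud", 45, 0.6)],
    "face_match":  [("edge_b", 20, 0.3), ("cloud", 60, 0.7)],
    "notify":      [("edge_a", 5, 0.1), ("edge_b", 8, 0.1)],
}

LATENCY_BUDGET_MS = 60   # user-specific end-to-end requirement (assumed)
LINK_LOAD_BUDGET = 0.9   # cap on total access-link traffic (assumed)

def feasible(assignment):
    """Check the CSP constraints for one candidate composition."""
    total_latency = sum(c[1] for c in assignment)
    total_load = sum(c[2] for c in assignment)
    return total_latency <= LATENCY_BUDGET_MS and total_load <= LINK_LOAD_BUDGET

def compose():
    """Enumerate assignments and return the lowest-latency feasible one."""
    services = list(candidates)
    best = None
    for assignment in product(*(candidates[s] for s in services)):
        if feasible(assignment):
            latency = sum(c[1] for c in assignment)
            if best is None or latency < best[0]:
                best = (latency, dict(zip(services, (c[0] for c in assignment))))
    return best

print(compose())  # e.g. (37, {'face_detect': 'edge_a', 'face_match': 'edge_b', 'notify': 'edge_a'})
```

A real SDSC controller would of course use the SDN-provided network view instead of hard-coded latencies, but the constraint-checking structure is the same.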
12. Adaptive Distributed Caching for Scalable Machine Learning Services. Drolia, Utsav. 01 August 2017.
Applications for Internet-enabled devices use machine learning to process captured data to make intelligent decisions or provide information to users. Typically, the computation to process the data is executed in cloud-based backends. The devices are used for sensing data, offloading it to the cloud, receiving responses, and acting upon them. However, this approach leads to high end-to-end latency due to communication over the Internet. This dissertation proposes reducing this response time by minimizing offloading and pushing computation close to the source of the data, i.e., to edge servers and devices themselves. To adapt to the resource-constrained environment at the edge, it presents an approach that leverages spatiotemporal locality to push subparts of the model to the edge. This approach is embodied in a distributed caching framework, Cachier. Cachier is built upon a novel caching model for recognition and is distributed across edge servers and devices. The analytical caching model for recognition provides a formulation for the expected latency of recognition requests in Cachier. The formulation incorporates the effects of compute time and accuracy, as well as network conditions, thus providing a method to compute expected response times under various conditions. This is utilized as a cost function by Cachier at edge servers and devices. By analyzing requests at the edge server, Cachier caches relevant parts of the trained model at edge servers, which are used to respond to requests, minimizing the number of requests that go to the cloud. Cachier then uses context-aware prediction to prefetch parts of the trained model onto devices. Requests can then be processed on the devices, minimizing the number of offloaded requests. Finally, Cachier enables cooperation between nearby devices to exchange prefetched data, reducing the dependence on remote servers even further. The efficacy of Cachier is evaluated with an art recognition application driven by real-world traces gathered at museums. By conducting a large-scale study with different control variables, we show that Cachier can lower latency, increase scalability, and decrease infrastructure resource usage, while maintaining high accuracy.
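The sketch below gives one hedged reading of the kind of expected-latency cost function described in this abstract; the additive hit/miss decomposition and all parameter names (hit probability, edge and cloud compute times, cloud round-trip time, edge accuracy) are illustrative assumptions rather than the exact formulation in the dissertation.

```python
def expected_latency(p_hit, t_edge, t_cloud, rtt_cloud, p_correct_edge):
    """Expected response time for a recognition request under a cached sub-model.

    p_hit          -- probability the cached sub-model can serve the request
    t_edge         -- compute time of the cached (smaller) model at the edge
    t_cloud        -- compute time of the full model in the cloud
    rtt_cloud      -- network round-trip time to the cloud backend
    p_correct_edge -- probability the edge answer is accepted as accurate
    """
    # A request is tried at the edge only when the needed sub-model is cached;
    # if the edge answer is not accepted, the request falls back to the cloud.
    p_edge_resolved = p_hit * p_correct_edge
    edge_cost = p_hit * t_edge                          # edge compute paid on every hit
    cloud_cost = (1 - p_edge_resolved) * (rtt_cloud + t_cloud)
    return edge_cost + cloud_cost

# Example: a well-tuned cache mostly avoids the cloud round trip (times in seconds).
print(expected_latency(p_hit=0.8, t_edge=0.05, t_cloud=0.15,
                       rtt_cloud=0.20, p_correct_edge=0.95))
```

A cost function of this shape can be minimized over what to cache: caching more classes raises the hit probability but also raises the edge compute time, which is the trade-off the abstract alludes to.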
13. Efficient and Proactive Offloading Techniques for Sustainable and Mobility-aware Resource Management in Heterogeneous Mobile Cloud Environments. Guan, Shichao. 28 May 2020.
To support increasingly sophisticated sensors and resource-hungry applications on today's lithium-battery-powered devices, and to further augment mobile computing power, the concept of Cloudlet-based offloading has been proposed, which migrates part of an application's computing tasks from battery-limited, low-capacity mobile elements to the local edge. Such Cloudlet-based offloading technologies extend the provisioning of computing and storage capabilities from remote Cloud Data Centers to the proximity of end users via heterogeneous networks. However, Cloudlet-based offloading requires coordination among User Equipment, inter-Cloudlet nodes, and remote Cloud Data Centers, which raises new challenges regarding how to enable Cloudlet-based offloading in the mobile edge environment and how to achieve execution- and energy-efficient offloading allocation under limited available resources.
In this dissertation, a Cloudlet-based Mobile Cloud offloading prototype is first proposed. A mechanism for handling diverse computing resources is described; by adopting it, idle public resources can easily be configured as additional computing capabilities in the virtual resource pool. A fast deployment model is built to reduce the migration and installation cost of adopting the platform. An energy-saving strategy is utilized to reduce the consumption of computing resources, and security components are implemented to protect sensitive information and block malicious attacks in the cloud.
Concerning the limited processing capability at the edge, a task-centric, energy-aware Cloudlet-based Mobile Cloud model is formulated. A Cloudlet task-based offloading mechanism is proposed to achieve energy-aware offloading resource preparation and scheduling on the Cloudlet, and a Cloud task-centric scheduling algorithm is presented for green, collaborative offloading between the Cloudlet and the remote Cloud.
Considering the dynamics and heterogeneity of the offloading environment, a hybrid offloading model is proposed to address heterogeneous, resource-constrained offloading on dynamic Cloudlets. A queue-based offloading framework is developed to formulate and analyze mixed migration-based and partition-based offloading behaviours on the Cloudlet. The execution- and energy-aware heterogeneous offloading resource allocation problem is formalized and solved, and a time-series-based load prediction model is designed on the Cloudlet to achieve fine-grained proactive resource allocation.
Regarding the mobility of User Equipment and the diverse priorities of offloading tasks, an edge-based, mobility-aware offloading model is formulated to address intra-Cloudlet offloading scheduling and inter-Cloudlet, load-aware heterogeneous resource allocation. A priority-based queueing model is designed to formulate the intra-Cloudlet mobility-aware offloading scheduling problem, which is resolved by a heuristic solution. The energy-aware inter-Cloudlet resource selection procedure is formalized as a mobility-aware multi-site resource allocation model, which is further solved by lightweight dynamic load balancing. A rough sketch of the kind of execution- and energy-aware placement decision this line of work involves is given after this abstract.
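The sketch below places each offloaded task on whichever tier (local Cloudlet or remote Cloud) minimizes a weighted execution-time and device-energy cost. It is only an illustration of the flavor of the scheduling problem; the workload model, rates, power figures, and the weighting itself are all assumptions, not the dissertation's formulation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cycles: float      # CPU cycles required (assumed workload model)
    input_bits: float  # data to transfer if sent to the remote cloud

# Assumed platform parameters.
CLOUDLET_RATE = 2e9      # cycles/s available at the local Cloudlet
CLOUD_RATE = 8e9         # cycles/s at the remote Cloud Data Center
UPLINK_BPS = 2e6         # WAN uplink toward the cloud
CLOUDLET_POWER_W = 2.0   # power drawn while the Cloudlet computes
TX_POWER_W = 1.5         # radio power while uploading to the cloud
ALPHA = 0.5              # weight between latency and energy

def cost_on_cloudlet(task):
    t = task.cycles / CLOUDLET_RATE
    e = CLOUDLET_POWER_W * t
    return ALPHA * t + (1 - ALPHA) * e

def cost_on_cloud(task):
    t = task.input_bits / UPLINK_BPS + task.cycles / CLOUD_RATE
    e = TX_POWER_W * (task.input_bits / UPLINK_BPS)   # edge-side energy: transmission only
    return ALPHA * t + (1 - ALPHA) * e

def schedule(tasks):
    """Greedy per-task placement: pick the cheaper tier for each task."""
    return {t.name: ("cloudlet" if cost_on_cloudlet(t) <= cost_on_cloud(t) else "cloud")
            for t in tasks}

tasks = [Task("ocr", 4e9, 8e6), Task("sensor_fusion", 6e8, 2e5)]
print(schedule(tasks))  # e.g. {'ocr': 'cloudlet', 'sensor_fusion': 'cloud'}
```

The dissertation's queue-based and prediction-based models refine exactly the quantities hard-coded here (available capacity, expected load) before a placement decision is made.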
14. Cooperative Perception for Connected Autonomous Vehicle Edge Computing System. Chen, Qi. 08 1900.
This dissertation first conducts a study on raw-data-level cooperative perception for enhancing the detection ability of self-driving systems for connected autonomous vehicles (CAVs). A LiDAR (Light Detection and Ranging) point-cloud-based 3D object detection method is deployed to enhance detection performance by expanding the effective sensing area, capturing critical information in multiple scenarios, and improving detection accuracy. In addition, a point-cloud-feature-based cooperative perception framework is proposed on an edge computing system for CAVs; the intrinsically small size of the features is used to achieve real-time edge computing without running the risk of congesting the network. To distinguish small objects such as pedestrians and cyclists in 3D data, an end-to-end multi-sensor fusion model is developed to perform 3D object detection from multi-sensor data. Experiments show that by solving perception on camera and LiDAR data jointly, the detection model can leverage the advantages of high-resolution images and physical-world LiDAR mapping data, which leads the KITTI benchmark on 3D object detection. Finally, an application of cooperative perception is deployed at the edge to heal the live map for autonomous vehicles. Through 3D reconstruction and multi-sensor fusion detection, experiments on a real-world dataset demonstrate that a high-definition (HD) map at the edge can provide well-sensed local data for CAV navigation.
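As a loose illustration of feature-level cooperative perception (not the thesis's actual model), the sketch below merges spatially aligned bird's-eye-view feature maps from nearby vehicles with an element-wise maximum before a detection head would run; the array shapes and the max-fusion rule are assumptions chosen only to show why exchanging compact features, rather than raw point clouds, keeps the network load small.

```python
import numpy as np

def fuse_feature_maps(local_feat, received_feats):
    """Element-wise max fusion of spatially aligned voxel feature maps.

    local_feat     -- (C, H, W) feature map from the ego vehicle's LiDAR
    received_feats -- list of (C, H, W) maps already warped into the ego frame
    """
    fused = local_feat
    for feat in received_feats:
        fused = np.maximum(fused, feat)   # keep the strongest response per grid cell
    return fused

# Toy example with assumed dimensions: 64 channels over a 200x200 BEV grid.
ego = np.random.rand(64, 200, 200).astype(np.float32)
neighbor = np.random.rand(64, 200, 200).astype(np.float32)
fused = fuse_feature_maps(ego, [neighbor])
print(fused.shape)  # (64, 200, 200), handed to the 3D detection head
```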
15. I-BOT: Interference-Based Orchestration of Tasks for Dynamic Unmanaged Edge Computing. Shikhar Suryavansh (9193610). 31 July 2020.
The increasing cost of cloud services and the need for decentralization of servers have led to a rise of interest in edge computing. In recent years, edge computing has become a popular choice for latency-sensitive applications like facial recognition and augmented reality because it is closer to the end users than the cloud. However, the presence of multiple edge servers adversely affects reliability due to the difficulty of maintaining heterogeneous servers. In this thesis, we first evaluate the performance of various server configuration models in edge computing using EdgeCloudSim, a popular simulator for edge computing. The performance is evaluated in terms of service time and percentage of failed tasks for an augmented reality application. We evaluated the following edge computing models: exclusive (Mobile only, Edge only, Cloud only) and hybrid (Edge and Cloud hybrid with load balancing on the Edge, and Mobile and Edge hybrid). We analyzed the impact of varying parameters such as WAN bandwidth, cost of cloud resources, and heterogeneity of edge servers on the performance of the edge computing models. We show that, due to variation in these parameters, the exclusive models are not sufficient for the computational requirements and there is a need for hybrid edge computing models.
Next, we introduce a novel edge computing model called unmanaged edge computing and propose an orchestration scheme for this scenario. Although infrastructure providers are working toward creating managed edge networks, personal devices such as laptops, desktops, and tablets, which are widely available and underutilized, can also be used as potential edge devices. We call such devices Unmanaged Edge Devices (UEDs). Scheduling application tasks on such an unmanaged edge system is not straightforward for three fundamental reasons: heterogeneity in the computational capacity of the UEDs, uncertainty in the availability of the UEDs (due to devices leaving the system), and interference among multiple tasks sharing a UED. In this work, we present I-BOT, an interference-based orchestration scheme for latency-sensitive tasks on an Unmanaged Edge Platform (UEP). It minimizes the completion time of applications and is bandwidth efficient. I-BOT brings forth three innovations. First, it profiles and predicts the interference patterns of the tasks to make scheduling decisions. Second, it uses a feedback mechanism to adjust for changes in the computational capacity of the UEDs and a prediction mechanism to handle their sporadic exits, both fundamental characteristics of a UEP. Third, it accounts for input dependence of tasks in its scheduling decisions (such as two tasks requiring the same input data). To demonstrate the effectiveness of I-BOT, we run real-world unit experiments on UEDs to collect data to drive our simulations. We then run end-to-end simulations with applications representing autonomous driving, composed of multiple tasks. We compare against two basic baselines (random and round-robin) and two state-of-the-art schemes, Lavea [SEC-2017] and Petrel [MSN-2018], for scheduling these applications on varying-sized UEPs. Compared to these baselines, I-BOT significantly reduces the average service time of application tasks. This reduction is more pronounced in dynamic heterogeneous environments, which would be the case in a UEP.
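A rough, assumption-laden sketch of interference-aware placement in the spirit described above is given below: each candidate UED is scored by its predicted completion time inflated by an interference factor for the tasks already running on it, discounted by its estimated probability of staying in the system. The scoring rule and all numbers are illustrative and are not taken from the thesis.

```python
# Hypothetical per-UED state: relative speed, predicted interference factor per
# co-located task, number of running tasks, and estimated probability of staying.
ueds = {
    "laptop_1":  {"speedup": 2.0, "interference": 1.3, "running": 2, "p_stay": 0.90},
    "desktop_1": {"speedup": 3.0, "interference": 1.2, "running": 4, "p_stay": 0.95},
    "tablet_1":  {"speedup": 1.0, "interference": 1.5, "running": 0, "p_stay": 0.60},
}

CLOUD_FALLBACK_COST = 2.5  # penalty (in task-time units) if the UED leaves mid-task

def expected_cost(task_time, ued):
    """Completion time inflated by interference, plus an expected dropout penalty."""
    run_time = (task_time / ued["speedup"]) * ued["interference"] ** ued["running"]
    return ued["p_stay"] * run_time + (1 - ued["p_stay"]) * (run_time + CLOUD_FALLBACK_COST)

def place(task_time):
    """Pick the UED with the lowest expected cost for this task."""
    return min(ueds, key=lambda name: expected_cost(task_time, ueds[name]))

print(place(task_time=1.0))  # e.g. 'desktop_1'
```

In I-BOT's terms, the interference factor would come from profiling, and the stay probability from the exit-prediction mechanism; the feedback loop would keep both estimates current.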
16. Computation Offloading of 5G Devices at the Edge Using WebAssembly. Hansson, Gustav. January 2021.
With an ever-increasing percentage of the human population connected to the Internet, the amount of data produced and processed is at an all-time high. Edge computing has emerged as a paradigm to handle this growth and, combined with 5G, enables complex time-sensitive applications running on resource-restricted devices. This master's thesis investigates the use of WebAssembly in the context of computational offloading at the edge. The focus is on utilizing WebAssembly to move computationally heavy parts of a system from an end device to an edge server. An objective is to improve program performance by reducing the execution time and energy consumption on the end device. A proof-of-concept offloading system is developed to research this. The system is evaluated on three use cases: calculating Fibonacci numbers, matrix multiplication, and image recognition. Each use case is tested on a Raspberry Pi 3 and a Pi 4, comparing execution of the WebAssembly module both locally and offloaded. Each test is also run natively on both the server and the end device to provide a baseline for comparison.
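A minimal sketch of the local-versus-offloaded decision such a system could make is shown below. The endpoint URL, module name, and threshold are assumptions for illustration, the offload call is a generic HTTP POST rather than the thesis's actual protocol, and the local path stands in for invoking the compiled WebAssembly module on the device.

```python
import time
import urllib.request

EDGE_ENDPOINT = "http://edge-server.local:8080/run"  # hypothetical offload endpoint

def run_locally(n):
    """Stand-in for running the Fibonacci WebAssembly module on the end device."""
    a, b = 0, 1
    for _ in range(n):           # iterative Fibonacci, one of the thesis's use cases
        a, b = b, a + b
    return a

def run_offloaded(module, args):
    """Ship a job description to the edge server and wait for the result."""
    body = f"{module}:{args}".encode()
    with urllib.request.urlopen(EDGE_ENDPOINT, data=body, timeout=5) as resp:
        return int(resp.read())

def execute(n, offload_threshold=30):
    """Offload only when the job is large enough to amortize the network cost."""
    if n < offload_threshold:
        return run_locally(n)
    try:
        return run_offloaded("fib.wasm", n)
    except OSError:
        return run_locally(n)    # fall back to local execution if the edge is unreachable

start = time.perf_counter()
print(execute(40), f"({time.perf_counter() - start:.4f}s)")
```

Measuring the elapsed time on both paths, as done around the final call, is essentially how the reported local/offloaded comparison on the Raspberry Pi devices is obtained.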
17. Privacy Protection and Mobility Enhancement in Internet. Zhang, Ping. 05 1900.
Indiana University-Purdue University Indianapolis (IUPUI) / The Internet has substantially embraced mobility over the last decade. Cellular data networks carry the majority of mobile Internet access traffic and have become the de facto solution for accessing the Internet in a mobile fashion, while many clean-slate Internet mobility solutions have been proposed but none has been widely deployed. Mobile Internet users are increasingly concerned about their privacy, as both research and real-world incidents show that leaks of communication and location privacy can lead to serious consequences. Even the communication itself between a mobile user and peer users or websites can leak considerable private information about the mobile user, such as location history, to other parties.
Additionally, compared to ordinary Internet access, connecting through a cellular network has yet to provide equivalent connection stability or longevity.
In this research we propose a novel paradigm that leverages concurrent far-side proxies to maximize network location privacy protection and minimize the interruption and performance penalty brought by mobility. To avoid the deployment-feasibility hurdle, we also investigate the root causes impeding the adoption of existing Internet mobility proposals and propose guidelines for creating an economically feasible solution toward this goal.
Based on these findings, we design a mobility support system offered as a value-added service by mobility service providers and built on elastic infrastructure that leverages various cloud-aided designs, in order to satisfy economic feasibility and explore the architectural trade-offs among service QoS, economic viability, security, and privacy.
18. Intelligent Device Selection in Federated Edge Learning with Energy Efficiency. Peng, Cheng. 12 1900.
Indiana University-Purdue University Indianapolis (IUPUI) / Due to the increasing demand from mobile devices for real-time responses from cloud computing services, federated edge learning (FEL) has emerged as a new computing paradigm that utilizes edge devices to achieve efficient machine learning while protecting their data privacy. Implementing efficient FEL suffers from the challenges of devices' limited computing and communication resources, as well as unevenly distributed datasets, which has inspired several existing studies focusing on device selection to optimize time consumption and data diversity. However, these studies fail to consider the energy consumption of edge devices given their limited power supply, which can seriously affect the cost-efficiency of FEL through unexpected device dropouts.
To fill this gap, we propose a device selection model capturing both energy consumption and data diversity optimization, under constraints on time consumption and training data amount. We then solve the optimization problem by reformulating the original model and designing a novel algorithm, named E2DS, to greatly reduce the time complexity. By comparing with two classical FEL schemes, we validate the superiority of our proposed device selection mechanism for FEL with extensive experimental results.
Furthermore, for each device in a real FEL environment, multiple tasks occupy the CPU at the same time, so the CPU frequency available for training fluctuates constantly, which may lead to large errors in computing energy consumption. To solve this problem, we deploy reinforcement learning to learn the frequency so as to approach its real value. In addition, rather than increasing data diversity, we consider a more direct way to improve convergence speed by using loss values. We then formulate an optimization problem that minimizes energy consumption and maximizes loss values to select the appropriate set of devices. After reformulating the problem, we design a new algorithm, FCE2DS, as the solution, which achieves better performance in convergence speed and accuracy. Finally, we compare the performance of the proposed scheme with the previous scheme and a traditional scheme to verify its improvement in multiple aspects.
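The sketch below conveys the flavor of device selection described in this abstract: devices are greedily ranked by a score that trades off reported loss value against estimated per-round energy, under a round-deadline constraint. The scoring rule, the greedy strategy, and all numbers are illustrative assumptions, not the E2DS or FCE2DS algorithms themselves.

```python
# Hypothetical per-device estimates collected before a training round.
devices = [
    {"id": "d1", "loss": 0.82, "energy_j": 3.1, "round_time_s": 4.0},
    {"id": "d2", "loss": 0.40, "energy_j": 1.2, "round_time_s": 2.5},
    {"id": "d3", "loss": 0.95, "energy_j": 5.0, "round_time_s": 7.5},
    {"id": "d4", "loss": 0.60, "energy_j": 2.0, "round_time_s": 3.0},
]

ROUND_DEADLINE_S = 6.0   # synchronous round: the slowest selected device sets the pace
MAX_DEVICES = 3
BETA = 0.5               # weight between loss contribution and energy cost

def score(d):
    # Higher loss -> more useful update; higher energy -> more costly to include.
    return BETA * d["loss"] - (1 - BETA) * (d["energy_j"] / 5.0)

def select(devices):
    chosen = []
    for d in sorted(devices, key=score, reverse=True):
        if len(chosen) == MAX_DEVICES:
            break
        if d["round_time_s"] <= ROUND_DEADLINE_S:   # deadline constraint
            chosen.append(d["id"])
    return chosen

print(select(devices))  # e.g. ['d1', 'd4', 'd2']; d3 is excluded by the deadline
```

In the abstract's second scheme, the energy estimate used in such a score would itself come from the learned CPU-frequency model rather than a fixed per-device number.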
19. Contributions to Infrastructure Deployment and Management in Vehicular Networks. Lamb, Zachary W. 01 October 2019.
No description available.
20. Computation Offloading and Service Caching in Heterogeneous MEC Wireless Networks. Zhang, Yongqiang. 04 1900.
Mobile edge computing (MEC) can dramatically boost the computation capability and prolong the lifetime of mobile users by offloading computation-intensive tasks to the edge cloud. In this thesis, a spatially random two-tier heterogeneous network (HetNet) is modelled to feature random node distribution, where the small-cell base stations (SBSs) and the macro base stations (MBSs) are cascaded with resource-limited servers and resource-unlimited servers, respectively. Only a certain type of application service and a finite number of offloaded tasks can be cached and processed in the resource-limited edge server. For that setup, we investigate the performance of two offloading strategies corresponding to integrated access and backhaul (IAB)-enabled MEC networks and traditional cellular MEC networks. Using tools from stochastic geometry and queueing theory, we derive the average delay for the two strategies in order to better understand the influence of IAB on MEC networks. Simulation results are provided to verify the derived expressions and to reveal various system-level insights.
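As a hedged illustration of how queueing theory enters such a delay analysis (this is a generic M/M/1 decomposition under assumed Poisson task arrivals, not the expressions derived in the thesis), the average delay of a task offloaded to a resource-limited SBS server can be split into a wireless transmission term and a sojourn-time term:

```latex
\bar{D}_{\mathrm{SBS}}
  = \underbrace{\frac{L}{R_{\mathrm{SBS}}}}_{\text{wireless transmission}}
  + \underbrace{\frac{1}{\mu_{\mathrm{SBS}} - \lambda_{\mathrm{SBS}}}}_{\text{M/M/1 sojourn time at the edge server}},
  \qquad \lambda_{\mathrm{SBS}} < \mu_{\mathrm{SBS}},
```

where \(L\) is the task size in bits, \(R_{\mathrm{SBS}}\) the achievable uplink rate (which stochastic geometry characterizes over the random SBS deployment), \(\lambda_{\mathrm{SBS}}\) the arrival rate of offloaded tasks at the SBS server, and \(\mu_{\mathrm{SBS}}\) its service rate; the corresponding MBS-tier expression would add a backhaul (IAB or wired) hop before the resource-unlimited server.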