1 |
Agile and Scalable Design and Dimensioning of NFV-Enabled MEC Infrastructure to Support Heterogeneous Latency-Critical Applications. Abou Haibeh, Lina, 12 May 2023 (has links)
Mobile edge computing (MEC) has recently been introduced as a key technology, emerging in response to new heterogeneous computing applications, resource-constrained mobile devices, and the long delays of traditional cloud data centers. Although many researchers have studied how heterogeneous latency-critical application requirements interact with the MEC system, very few have addressed how to deploy a flexible and scalable MEC infrastructure at the mobile operator for the expected heterogeneous mobile traffic.
The proposed system model in this research project relies on the Network Function Virtualization (NFV) concept to virtualize the MEC infrastructure and provide a scalable and flexible infrastructure regardless of the underlying physical hardware. In NFV-enabled networks, the received mobile workload is often deployed as Service Function Chains (SFCs), which accomplish users' service requests by steering traffic through different Virtual Network Function (VNF) types and virtual links. Thus, efficient VNF placement and orchestration mechanisms are required to address the challenges of heterogeneous user requests, varying Quality of Service (QoS) requirements, and network traffic dynamicity.
This research project addresses the scalable design and dimensioning of an agile NFV-enabled MEC infrastructure from a dual perspective. First, a neural network model (i.e., a subset of machine learning) proactively auto-scales the various virtual service instances by predicting the number of SFCs required for a time-varying mobile traffic load. Second, a Mixed-Integer Linear Program (MILP) is used to dimension the physical MEC infrastructure by mapping the predicted virtual SFC networks to MEC nodes while minimizing deployment costs. Numerical results show that the machine learning (ML) model achieves a high prediction accuracy of 95.6%, which demonstrates the added value of using ML at the edge network to reduce deployment costs while meeting the delay requirements of different latency-critical applications with high acceptance rates. Because this MILP formulation scales exponentially, we also propose a scalable Benders decomposition approach that yields near-optimal results at a significantly reduced computational cost. Numerical results show the viability of the Benders decomposition approach in its proximity to the optimal dimensioning cost and its reasonable solution time.
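As a rough illustration of the kind of placement MILP described above, the following sketch maps a handful of predicted SFCs onto MEC nodes while minimizing deployment cost, subject to per-node capacity and per-SFC delay budgets. All numbers and variable names are hypothetical and the model is far simpler than the thesis formulation; it only shows the general shape of such a program in Python with PuLP.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

    sfcs = range(3)            # SFCs predicted by the ML model for the next interval
    nodes = range(2)           # candidate MEC nodes
    cpu_demand = [4, 2, 6]     # vCPUs each SFC needs (assumed)
    cpu_cap = [8, 8]           # vCPU capacity per MEC node (assumed)
    deploy_cost = [[3, 5],     # cost of placing SFC i on node j (assumed)
                   [2, 4],
                   [6, 3]]
    delay = [[1.0, 2.5],       # end-to-end delay if SFC i runs on node j (ms, assumed)
             [1.5, 1.0],
             [2.0, 1.2]]
    delay_budget = [2.0, 1.5, 2.0]   # per-SFC latency requirement (ms, assumed)

    prob = LpProblem("sfc_placement", LpMinimize)
    x = LpVariable.dicts("x", [(i, j) for i in sfcs for j in nodes], cat=LpBinary)

    # Objective: minimize total deployment cost.
    prob += lpSum(deploy_cost[i][j] * x[i, j] for i in sfcs for j in nodes)

    for i in sfcs:
        # Each SFC is placed on exactly one node.
        prob += lpSum(x[i, j] for j in nodes) == 1
        # Only placements that meet the SFC's latency budget are allowed.
        prob += lpSum(delay[i][j] * x[i, j] for j in nodes) <= delay_budget[i]

    for j in nodes:
        # Node capacity constraint.
        prob += lpSum(cpu_demand[i] * x[i, j] for i in sfcs) <= cpu_cap[j]

    prob.solve(PULP_CBC_CMD(msg=0))
    for i in sfcs:
        for j in nodes:
            if x[i, j].value() > 0.5:
                print(f"SFC {i} -> MEC node {j}")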
|
2 |
Software-Defined Computational Offloading for Mobile Edge Computing. Krishna, Nitesh, 03 May 2018 (has links)
Computational offloading advances the deployment of Mobile Edge Computing (MEC) in next-generation communication networks. However, the distributed nature of mobile users and the complexity of their applications make it challenging to schedule tasks reasonably among multiple devices. Therefore, by leveraging the ideas of Software-Defined Networking (SDN) and Service Composition (SC), we propose a Software-Defined Service Composition (SDSC) model. In this model, the SDSC controller is deployed at the edge of the network and composes services in a centralized manner to reduce task execution latency and traffic on the access links while satisfying user-specific requirements. We formulate low-latency service composition as a Constraint Satisfaction Problem (CSP) to make it a user-centric approach. With the advent of SDN, a global view and control of the entire network are made available to the network controller, which our SDSC approach further leverages.
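As a minimal illustration of the CSP view of service composition, the sketch below brute-forces a three-stage composition and keeps the lowest-latency combination that satisfies a user-specific latency budget. The service names, latencies, and budget are assumptions for illustration only, not the SDSC model itself.

    from itertools import product

    # Hypothetical candidate instances for a three-stage composition
    # (names, latencies in ms, and the budget are illustrative).
    candidates = {
        "decode": [("edge-decode", 4), ("cloud-decode", 20)],
        "detect": [("edge-detect", 9), ("cloud-detect", 30)],
        "render": [("edge-render", 3), ("cloud-render", 15)],
    }
    latency_budget_ms = 25   # user-specific requirement

    def compose(candidates, budget):
        """Brute-force CSP solver: return the feasible composition with the lowest latency."""
        best = None
        for combo in product(*candidates.values()):
            total = sum(latency for _, latency in combo)
            if total <= budget and (best is None or total < best[1]):
                best = ([name for name, _ in combo], total)
        return best  # None if the constraints cannot be satisfied

    print(compose(candidates, latency_budget_ms))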
Furthermore, the service discovery and the offloading of tasks are designed for the MEC environment so that users benefit from a comprehensive and robust system. Moreover, this approach performs task execution in a distributed manner. We also define a QoS model that provides the composition rules used to form the best possible service composition at the time of need.
Moreover, we have extended our SDSC model to account for the constant mobility of mobile devices. To address the mobility issue, we propose a mobility model and a mobility-aware QoS approach within the SDSC model. The experimental simulation results demonstrate that our approach achieves better performance than an energy-saving greedy algorithm and a random offloading approach in a mobile environment.
|
3 |
Efficient and Proactive Offloading Techniques for Sustainable and Mobility-aware Resource Management in Heterogeneous Mobile Cloud Environments. Guan, Shichao, 28 May 2020 (has links)
To support increasingly sophisticated sensors and resource-hungry applications on devices powered by today's lithium-based batteries, and to further augment mobile computing power, the concept of Cloudlet-based offloading is proposed, which enables part of an application's computing tasks to be migrated from battery-limited, low-capacity mobile elements to the local edge. Such Cloudlet-based offloading technologies extend the provisioning of computing and storage capabilities from remote Cloud Data Centers to the proximity of end users via heterogeneous networks. However, Cloudlet-based offloading must coordinate among User Equipment, inter-Cloudlet nodes, and remote Cloud Data Centers, which raises new challenges regarding how to enable Cloudlet-based offloading in the mobile edge environment and how to achieve execution- and energy-efficient offloading allocation under limited available resources.
In this dissertation, a Cloudlet-based Mobile Cloud offloading prototype is first proposed. A mechanism for handling diverse computing resources is described; by adopting it, idle public resources can easily be configured as additional computing capacity in the virtual resource pool. A fast deployment model is built to reduce the migration and installation cost of adopting the platform. An energy-saving strategy is used to reduce the consumption of computing resources. Security components are implemented to protect sensitive information and block malicious attacks in the cloud.
Concerning the limited processing capability at the edge, a task-centric, energy-aware Cloudlet-based Mobile Cloud model is then formulated. A Cloudlet task-based offloading mechanism is proposed to achieve energy-aware offloading resource preparation and scheduling on the Cloudlet. A Cloud task-centric scheduling algorithm is presented for green, collaborative offloading between the Cloudlet and the remote Cloud.
Considering the dynamic and heterogeneous nature of the offloading environment, a hybrid offloading model is then proposed to solve the heterogeneous, resource-constrained offloading issues on dynamic Cloudlets. A queue-based offloading framework is developed to formulate and analyze the mixed migration-based and partition-based offloading behaviours on the Cloudlet. The execution- and energy-aware heterogeneous offloading resource allocation problem is formalized and solved. A time-series-based load prediction model is designed on the Cloudlet to achieve fine-grained proactive resource allocation.
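The dissertation's specific load predictor is not detailed here, so the following sketch uses simple exponential smoothing as a stand-in to show how a Cloudlet could forecast the next interval's offloading load and pre-provision resources; the observation window, smoothing factor, and per-VM capacity are all assumed values.

    def exponential_smoothing(load_history, alpha=0.4):
        """One-step-ahead load forecast via simple exponential smoothing.
        The alpha value and sample data are illustrative assumptions."""
        forecast = load_history[0]
        for observed in load_history[1:]:
            forecast = alpha * observed + (1 - alpha) * forecast
        return forecast

    # Offloading requests observed per interval on a Cloudlet (hypothetical).
    history = [12, 15, 14, 20, 26, 24, 31]
    predicted = exponential_smoothing(history)

    # Pre-provision VMs/containers for the next interval ahead of the arrivals.
    capacity_per_vm = 10
    vms_needed = -(-int(round(predicted)) // capacity_per_vm)   # ceiling division
    print(f"predicted load ~ {predicted:.1f} requests -> pre-provision {vms_needed} VMs")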
Regarding the mobility of User Equipment and the diverse priorities of offloading tasks, an edge-based, mobility-aware offloading model is formulated to solve the intra-Cloudlet offloading scheduling issue and the inter-Cloudlet load-aware heterogeneous resource allocation issue. A priority-based queueing model is designed to formulate the intra-Cloudlet mobility-aware offloading scheduling problem, which is resolved by a heuristic solution. The energy-aware inter-Cloudlet resource selection procedure is formalized in a mobility-aware multi-site resource allocation model, which is further solved by lightweight dynamic load balancing.
|
4 |
A Comparative Study on Service Migration for Mobile Edge Computing Based on Deep Learning. Park, Sung woon, 15 June 2023 (has links)
Over the past few years, Deep Learning (DL), a promising technology leading the next generation of intelligent environments, has attracted significant attention and has been used intensively in various fields of the fourth industrial revolution era. Applications of Deep Learning in the area of Mobile Edge Computing (MEC) have achieved remarkable outcomes. Among the several functionalities of MEC, service migration frameworks have been proposed to overcome the shortcomings of traditional methodologies in supporting high-mobility users with real-time responses.
Service migration in MEC is a complex optimization problem that considers several dynamic environmental factors to make an optimal decision on whether, when, and where to migrate. In line with this trend, various service migration frameworks based on a variety of optimization algorithms have been proposed to overcome the limitations of traditional methodologies. However, a more sophisticated and realistic model is needed, one that reduces the computational complexity and addresses the inefficiency of existing frameworks. Therefore, an efficient service migration mechanism that can capture the environmental variables comprehensively is required.
In this thesis, we propose an enhanced service migration model to address user proximity issues. We first introduce innovative service migration models for single-user and multi-user scenarios that overcome the user proximity issue while preserving service execution efficiency. Second, we formulate the service migration process as a complicated optimization problem and utilize Deep Reinforcement Learning (DRL) to estimate the optimal policy, jointly minimizing the migration cost, transaction cost, and consumed energy. Lastly, we compare the proposed models with existing migration methodologies through analytical simulations from various aspects. The numerical results demonstrate that the proposed models can estimate the optimal policy despite the computational complexity caused by the dynamic environment and high-mobility users.
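To make the migrate-or-stay decision structure concrete, the sketch below trains a tabular Q-learning agent on a toy model in which the state is the hop distance between a user and the node hosting its service. The thesis itself uses deep reinforcement learning over a much richer state space; all costs and parameters here are invented for illustration.

    import random

    # Toy MDP: state = hop distance between the user and the edge node hosting
    # the service (0..4). Actions: 0 = keep, 1 = migrate to the user's nearest
    # node. Costs are illustrative assumptions, not the thesis model.
    N_STATES, ACTIONS = 5, (0, 1)
    MIGRATION_COST, LATENCY_COST = 3.0, 1.0

    def step(state, action):
        if action == 1:                       # migrate: service follows the user
            cost, state = MIGRATION_COST, 0
        else:                                 # keep: pay latency growing with distance
            cost = LATENCY_COST * state
        # User mobility: distance drifts by at most one hop per slot.
        state = min(N_STATES - 1, max(0, state + random.choice((-1, 0, 1))))
        return -cost, state                   # reward = negative cost

    q = [[0.0, 0.0] for _ in range(N_STATES)]
    alpha, gamma, eps = 0.1, 0.9, 0.1
    state = 0
    for _ in range(20000):
        action = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda a: q[state][a])
        reward, nxt = step(state, action)
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

    policy = ["migrate" if q[s][1] > q[s][0] else "keep" for s in range(N_STATES)]
    print(dict(enumerate(policy)))   # typically: keep when close, migrate when far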
|
5 |
RESOURCE MANAGEMENT FOR MOBILE COMPUTATION OFFLOADING. Chen, Hong, 11 1900 (has links)
Mobile computation offloading (MCO) is a way of improving mobile device (MD) performance by offloading certain task executions to a more resourceful edge server (ES), rather than running them locally on the MD. This thesis first considers the problem of assigning the wireless communication bandwidth and the ES capacity needed for this remote task execution, so that task completion time constraints are satisfied. The objective is to minimize the average power consumption of the MDs, subject to a cost budget constraint on communication and computation resources. The thesis includes contributions for both soft and hard task completion deadline constraints. The soft deadline case aims to create assignments so that the probability of task completion time deadline violation does not exceed a given violation threshold. In the hard deadline case, it creates resource assignments where task completion time deadlines are always satisfied. The problems are first formulated as mixed integer nonlinear programs. Approximate solutions are then obtained by decomposing the problems into a collection of convex subproblems that can be efficiently solved. Results are presented that demonstrate the quality of the proposed solutions, which can achieve near optimum performance over a wide range of system parameters.
The thesis then introduces algorithms for static task class partitioning in MCO. The objective is to partition a given set of task classes into two sets: those classes that are executed locally and those that are permitted to contend for remote ES execution. The goal is to find the task class partition that gives the minimum mean MD power consumption subject to task completion deadlines. The thesis generates these partitions for both soft and hard task completion deadlines. Two variations of the problem are considered. The first assumes that the wireless and computational capacities are given, and the second generates both capacity assignments subject to an additional resource cost budget constraint. Two class ordering methods are introduced: one based on a task latency criterion, and another that first sorts and groups classes based on a mean power consumption criterion and then orders the task classes within each group based on a task completion time criterion. A variety of simulation results are presented that demonstrate the excellent performance of the proposed solutions.
The thesis then considers the use of digital twins (DTs) to offload physical system (PS) activity. Each DT periodically communicates with its PS, and uses these updates to implement features that reflect the real behaviour of the device. A given feature can be implemented using different models that create the feature with differing levels of system accuracy. The objective is to maximize the minimum feature accuracy for the requested features by making appropriate model selections subject to wireless channel and ES resource availability. The model selection problem is first formulated as an NP-complete integer program. It is then decomposed into multiple subproblems, each consisting of a modified Knapsack problem. A polynomial-time approximation algorithm based on dynamic programming is proposed to solve it efficiently while violating its constraints by at most a given factor. A generalization of the model selection problem is then given, and the thesis proposes an approximation algorithm using dependent rounding to solve it efficiently with guaranteed bounds on constraint violation. A variety of simulation results are presented that demonstrate the excellent performance of the proposed solutions. / Thesis / Doctor of Philosophy (PhD) / Mobile devices (MDs) such as smartphones are currently used to run a wide variety of application tasks. An alternative to local task execution is to arrange for some MD tasks to be run on a remote non-mobile edge server (ES). This is referred to as mobile computation offloading (MCO). The work in this thesis studies two important facets of the MCO problem.
1. The first considers the joint effects of communication and computational resource assignment on task completion times. This work optimizes task offloading decisions, subject to task completion time requirements and the cost that one is willing to incur when designing the network. Procedures are proposed whose objective is to minimize average mobile device power consumption, subject to these cost constraints.
2. The second considers the use of digital twins (DTs) as a way of implementing mobile computation offloading. A DT implements features that describe its physical system (PS) using models that are hosted at the ES. A model selection problem is studied, where multiple DTs share the execution services at a common ES. The objective is to optimize the feature accuracy obtained by DTs subject to the communication and computation resource availability. The thesis proposes different approximation and decomposition methods that solve these problems efficiently.
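The knapsack flavour of the model selection problem described above can be illustrated with a plain multiple-choice knapsack solved by dynamic programming, as sketched below. It maximizes the summed accuracy under a compute budget rather than the minimum feature accuracy targeted in the thesis, and all costs and accuracies are hypothetical; the thesis's actual subproblem is a modified knapsack.

    def best_accuracy(features, budget):
        """Multiple-choice knapsack DP: pick one model per feature, maximize
        total accuracy, keep total compute cost within the budget."""
        NEG = float("-inf")
        dp = [0.0] * (budget + 1)           # best accuracy with cost <= b, no features yet
        for models in features:
            new = [NEG] * (budget + 1)
            for b in range(budget + 1):
                if dp[b] == NEG:
                    continue
                for cost, acc in models:
                    if b + cost <= budget and dp[b] + acc > new[b + cost]:
                        new[b + cost] = dp[b] + acc
            for b in range(1, budget + 1):  # restore the "cost <= b" meaning
                new[b] = max(new[b], new[b - 1])
            dp = new
        return dp[budget]

    # Hypothetical DT: two features, candidate models given as (compute cost, accuracy).
    features = [
        [(1, 0.60), (3, 0.80), (5, 0.95)],   # candidate models for feature A
        [(2, 0.70), (4, 0.90)],              # candidate models for feature B
    ]
    print(f"best total accuracy within 6 units: {best_accuracy(features, 6):.2f}")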
|
6 |
Binary Multi-User Computation Offloading via Time Division Multiple Access. Manouchehrpour, Mohammad Amin, January 2023 (has links)
The limited energy and computing power of small smart devices restricts their ability to support a wide range of applications, especially those needing quick responses. Mobile edge computing offers a potential solution by providing computing resources at the network access points that can be shared by the devices. This enables the devices to offload some of their computational tasks to the access points. To make this work well for multiple devices, we need to judiciously allocate the available communication and computing resources among the devices.
The main focus of this thesis is on (near) optimal resource allocation in a K-user offloading system that employs the time division multiple access (TDMA) scheme. In this thesis, we develop effective algorithms for the resource allocation problem that aim to minimize the overall (cost of the) energy that the devices consume in completing their computational tasks within the specified deadlines while respecting the devices' constraints.
This problem is tackled for tasks that cannot be divided, and hence the system must make a binary decision as to whether or not a task should be offloaded. This implies the need to develop an effective decision-making algorithm to identify a suitable group of devices for offloading. This thesis commences by developing efficient communication resource allocation algorithms that incorporate the impact of finite, integer-valued block lengths in low-latency computational offloading systems with reserved computing resources. In particular, it addresses the challenge of minimizing total energy consumption in a binary offloading scenario involving K users.
The approach considers different approximations of the fundamental rate limit in the finite block length regime, departing from the conventional asymptotic rate limits developed by Shannon. Two such alternatives, namely the normal approximation and the SNR-gap approximation, are explored.
A decomposition approach is employed, dividing the problem into an inner component that seeks an optimal solution for the communication resource allocation within a defined set of offloading devices, and an outer component aimed at identifying a suitable set of offloading devices.
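The inner/outer decomposition can be pictured as follows: a greedy outer loop grows the offloading set while an inner routine prices each candidate set. The inner routine below simply splits the TDMA frame equally among offloaders and uses a textbook Shannon-rate energy expression, which is only a crude stand-in for the optimal allocations and finite-block-length rate approximations developed in the thesis; all parameters are invented for illustration.

    # Illustrative two-level decomposition for binary offloading over TDMA.
    W, N0, FRAME = 1e6, 1e-9, 0.05          # bandwidth (Hz), noise PSD (W/Hz), frame (s)
    bits    = [2e5, 1e5, 3e5, 1.5e5]        # task sizes to upload (bits), assumed
    gain    = [1e-3, 5e-4, 2e-3, 8e-4]      # channel gains, assumed
    e_local = [0.9, 0.4, 1.6, 0.6]          # local-execution energy per task (J), assumed

    def tx_energy(b, h, t):
        """Energy to send b bits in t seconds over a gain-h channel (Shannon rate)."""
        return t * (N0 * W / h) * (2 ** (b / (W * t)) - 1)

    def inner(offload_set):
        """Crude inner solver: split the TDMA frame equally among offloaders."""
        if not offload_set:
            return sum(e_local)
        t = FRAME / len(offload_set)
        off = sum(tx_energy(bits[i], gain[i], t) for i in offload_set)
        loc = sum(e_local[i] for i in range(len(bits)) if i not in offload_set)
        return off + loc

    # Outer greedy search: add the device that reduces total energy the most.
    chosen, best = set(), inner(set())
    while True:
        candidates = [(inner(chosen | {i}), i) for i in range(len(bits)) if i not in chosen]
        if not candidates:
            break
        energy, dev = min(candidates)
        if energy >= best:          # prune: stop when no addition helps
            break
        chosen, best = chosen | {dev}, energy

    print(f"offload set {sorted(chosen)}, total energy ~ {best:.3f} J")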
Given the finiteness of the block length and its integer nature, various relaxation techniques are employed to determine an appropriate communication resource allocation. These include incremental and independent rounding, alongside an extended search that utilizes randomization-based methods in both rounding schemes.
The findings reveal that incremental randomized rounding, when applied to the normal approximation of the rate limits, enhances system performance in terms of reducing the energy consumption of the offloading users.
Furthermore, customized pruned greedy search techniques for selecting the offloading devices efficiently generate good decisions. Indeed, the proposed approach outperforms a number of existing approaches. In the second contribution, we develop efficient algorithms that address the challenge of jointly allocating both computation and communication resources in a binary offloading system.
We employ a decomposition methodology similar to that in the previous work to perform the decision-making, but now along with joint computation and communication resource allocation. For the inner resource allocation problem, we divide the problem into two components: determining the allocation of computation resources, and the optimal allocation of communication resources for the given allocation of computation resources. The allocation of the computation resources implicitly determines a suitable order for data transmission, which facilitates the subsequent optimal allocation of the communication resources. In this thesis, we introduce two heuristic approaches for allocating the computation resources. These approaches maximize the allowable transmission time for the devices in sequence, starting from the largest, leading to a reduction in total offloading energy.
We demonstrate that the proposed heuristics substantially lower the computational burden associated with solving the joint computation--communication resource allocation problem while maintaining a low total energy.
In particular, its use results in substantially lower energy consumption than other simple heuristics. Additionally, the heuristics narrow the energy gap in comparison to a fictitious scenario in which each task has access to the whole computation resource without the need for sharing. / Thesis / Master of Applied Science (MASc)
|
7 |
Multiple Access Computation Offloading. Salmani, Mahsa, January 2019 (has links)
The limited energy and computational resources in small-scale smart devices impede the expansion of the range of applications that those devices can support, especially applications with tight latency constraints. Mobile edge computing is a promising framework that provides shared computational resources in the access points of the network and gives devices in that network the opportunity to offload (a portion of) their computational tasks to the access points. To effectively capture that opportunity in an offloading system with multiple devices, the available communication and computation resources must be efficiently allocated. The main focus of this thesis is on the optimal allocation of communication resources in a K-user offloading system. The resource allocation problem considered in this thesis seeks to minimize the total energy consumption of the users while the requirements of the users, and of their computational tasks, are met. That problem is addressed for two of the most widely considered classes of computational tasks in the literature, namely indivisible tasks (binary offloading) and divisible tasks (partial offloading).
This thesis begins with an exploration of the impact of the choice of multiple access scheme that is employed by the system on the total energy consumption of the users. In particular, the problem of minimizing the total energy consumption of a two-user
binary offloading system is tackled under various multiple access schemes, namely time division multiple access (TDMA), sequential decoding without time sharing, independent decoding, and multiple access schemes that can exploit the full capabilities of the channel, which are referred to as full multiple access schemes (FullMA) in this thesis. Using a decomposition-based approach, closed-form solutions to the resource allocation problem are obtained. Those expressions show that by exploiting the full capabilities of the channel, a FullMA scheme can significantly reduce the total energy consumption of the users as compared to the other schemes. The closed-form expressions also show that when the channel gains of the two users are equal, the TDMA scheme can achieve the optimal energy consumption. For the case of partial offloading, an analogous analysis leads to a reduced-dimension design problem and an extension to the optimality result for TDMA. In the next step of the development, the insights obtained from the decomposition-based analysis of the two-user case are used to tackle the communication resource allocation problem for a K-user offloading system in which the users are assumed to be served over a single time slot. Based on their performance in the two-user case, FullMA and TDMA schemes are considered. The mixed-integer optimization problem that arises in the binary offloading case is addressed by employing a decomposition approach with a closed-form expression obtained for the optimal resource allocation for given offloading decisions, and a tailored pruned greedy search algorithm developed herein for the offloading decisions. By exploiting the maximum allowable latency of each individual user, the proposed algorithm is able to significantly reduce the energy consumption of the users in comparison to the existing algorithms in the literature that assume equal latency constraints for all users. Furthermore, with the closed-form optimal solution to the resource allocation problem obtained for given offloading decisions, the proposed algorithm has a significantly lower computational cost compared to the existing algorithms. In the partial offloading case, a quasi-closed-form solution is obtained for the resource allocation problem.
Finally, a time-slotted signalling structure is proposed as an optimal transmission structure for a generic K-user offloading system. Furthermore, an optimal time-slotted structure that requires only K time slots is developed for a K-user offloading system that employs a FullMA scheme. The proposed time-slotted structure not only exploits the maximum latency constraint of each user, it also exploits the differences between the latency constraints of the users by taking advantage of the interference reduction that arises when a user finishes offloading. The proposed time-slotted FullMA signalling structure significantly reduces the energy consumption of the users compared to some existing methods that employ the TDMA scheme, and compared to those with FullMA but sub-optimal single-time-slot signalling structures. Moreover, the computational cost of the proposed time-slotted algorithm is significantly lower than that of the existing algorithms in the literature. / Dissertation / Doctor of Philosophy (PhD) / The rapid increase in the number of smart devices in wireless communication networks, and the expansion in the range of computationally-intensive and latency-sensitive applications that those devices are required to support, have highlighted their resource limitations in terms of energy, power, central processing unit (CPU), and memory. Mobile edge computing is a framework that provides shared computational resources at the access points of wireless networks and gives such devices the opportunity to offload (a portion of) their applications to be executed at the access points. In order to fully exploit such an opportunity when multiple devices seek to offload their applications, the available communication and computation resources must be efficiently allocated amongst those devices. The ultimate goal of this thesis is to obtain the optimal communication resource allocation in a K-user offloading system while different constraints on the devices and on the applications are satisfied. To that end, this thesis shows that the minimum energy consumption is obtained when the system exploits the full capabilities of the channel, the maximum allowable latency of each user, and the differences between the latency constraints of each user. Accordingly, this thesis proposes an optimized signalling structure and, based on that structure, low-complexity algorithms that achieve an energy-optimal resource allocation in a K-user offloading system.
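A small numeric example helps explain why exploiting each user's own latency limit pays off: the energy needed to push a fixed number of bits through a Gaussian channel grows steeply as the available transmission time shrinks. The figures below are illustrative assumptions, not values from the thesis.

    # Energy vs. transmission time for a fixed-size upload (illustrative parameters).
    W, N0, h = 1e6, 1e-9, 1e-3           # bandwidth (Hz), noise PSD (W/Hz), channel gain
    b = 4e5                               # task size in bits (assumed)

    def offload_energy(t):
        """Energy (J) to send b bits in t seconds at the Shannon-rate limit."""
        return t * (N0 * W / h) * (2 ** (b / (W * t)) - 1)

    for t in (0.2, 0.1, 0.05):            # full deadline vs. tighter shared-slot shares
        print(f"t = {t:4.2f} s -> energy ~ {offload_energy(t):7.3f} J")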
|
8 |
Agile Mobile Edge Computing and Network-coded Cooperation in 5G. Torre Arranz, Roberto, 28 July 2021 (has links)
The architecture of the network is undergoing a series of structural changes, from the core network to the user, to pave the way for 5G. New infrastructure elements are being massively deployed, making 5G more heterogeneous. This emerging paradigm, along with new services and handheld devices, creates a massive, highly mobile, heterogeneous environment with hard constraints on throughput, latency, resilience, and power consumption. This dissertation presents Agile MEC (AMEC), a shift in the concept of MEC to support user mobility through the rapid relocation of services, and Network-coded Cooperation (NCC), a new system for massive content distribution in cellular networks. In summary, AMEC provides a mobility framework that reliably reduces latency and power consumption in the system, and NCC improves network throughput, network resilience, and power consumption by offloading cellular traffic to underlay networks.
|
9 |
Cloudlet for the Internet-of-Things. Vargas Vargas, Fernando, January 2016 (has links)
With an increasing number of people living in urban areas, many cities around the globe face issues such as increased pollution and traffic congestion. In an effort to tackle such challenges, governments and city councils are formulating new and innovative strategies. The integration of ICT with these strategies creates the concept of smart cities. The Internet of Things (IoT) is a key driver for smart city initiatives, making it necessary to have an IT infrastructure that can take advantage of the many benefits that IoT can provide. The Cloudlet is a new infrastructure model that offers cloud-computing capabilities at the edge of the mobile network. This environment is characterized by low latency and high bandwidth, constituting a novel ecosystem where network operators can open their network edge to third parties, allowing them to flexibly and rapidly deploy innovative applications and services for mobile subscribers. In this thesis, we present a cloudlet architecture that leverages edge computing to provide a platform for IoT devices on top of which many smart city applications can be deployed. We first provide an overview of existing challenges and requirements in IoT systems development. Next, we analyse existing cloudlet solutions. Finally, we present our cloudlet architecture for IoT, including its design and a prototype solution. For our cloudlet prototype, we focused on a micro-scale emission model to calculate the CO2 emissions of each individual vehicle trip, and implemented the functionality that allows us to read CO2 data from CO2 sensors. The location data is obtained from an Android smartphone and is processed in the cloudlet. We conclude with a performance evaluation.
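As a hedged sketch of the per-trip aggregation such a prototype could run in the cloudlet, the snippet below pairs time-stamped GPS fixes from the phone with CO2 readings and integrates them over one trip; the field names, units (grams per second), and sample values are assumptions, not the prototype's actual data model or emission model.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two GPS fixes, in kilometres."""
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # (timestamp_s, lat, lon, co2_g_per_s) samples for one trip (hypothetical).
    samples = [
        (0,  59.3293, 18.0686, 2.1),
        (30, 59.3310, 18.0700, 2.6),
        (60, 59.3330, 18.0720, 3.0),
        (90, 59.3348, 18.0741, 2.4),
    ]

    distance_km, co2_g = 0.0, 0.0
    for (t0, la0, lo0, e0), (t1, la1, lo1, e1) in zip(samples, samples[1:]):
        distance_km += haversine_km(la0, lo0, la1, lo1)
        co2_g += 0.5 * (e0 + e1) * (t1 - t0)      # trapezoidal integration over time

    print(f"trip: {distance_km:.2f} km, {co2_g:.0f} g CO2, {co2_g / distance_km:.0f} g/km")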
|
10 |
System Infrastructure for Mobile-Cloud Convergence. Ha, Kiryong, 01 December 2016 (has links)
The convergence of mobile computing and cloud computing enables new mobile applications that are both resource-intensive and interactive. For these applications, end-to-end network bandwidth and latency matter greatly when cloud resources are used to augment the computational power and battery life of a mobile device. This dissertation designs and implements a new architectural element called a cloudlet, which arises from the convergence of mobile computing and cloud computing. Cloudlets represent the middle tier of a 3-tier hierarchy (mobile device, cloudlet, cloud) that achieves the right balance between cloud consolidation and network responsiveness. We first present quantitative evidence that cloud location can affect the performance of mobile applications and cloud consolidation. We then describe an architectural solution using cloudlets that is a seamless extension of today's cloud computing infrastructure. Finally, we define the minimal functionality that cloudlets must offer above and beyond standard cloud computing, and address the corresponding technical challenges.
|