41

Real-Time Resource Optimization for Wireless Networks

Huang, Yan 11 January 2021 (has links)
Resource allocation in modern wireless networks is constrained by increasingly stringent real-time requirements. Such real-time requirements typically come from, among others, the short coherence time of a wireless channel, the small time resolution for resource allocation in the OFDM-based radio frame structure, or the low-latency requirements of delay-sensitive applications. An optimal resource allocation solution is useful only if it can be determined and applied to the network entities within its expected time. For today's wireless networks such as 5G NR, this expected time (or real-time requirement) can be as low as 1 ms or even 100 μs. Most existing resource optimization solutions for wireless networks do not explicitly take the real-time requirement as a constraint when developing solutions. In fact, the mainstream of research relies on asymptotic complexity analysis when designing solution algorithms. Asymptotic complexity analysis is concerned only with the growth of an algorithm's computational complexity as the input size increases (as in big-O notation). It cannot capture a real-time requirement that is measured in wall-clock time. As a result, existing approaches such as exact or approximate optimization techniques from operations research are usually not useful for wireless networks in the field. Similarly, many problem-specific heuristic solutions with polynomial-time asymptotic complexities may suffer a similar fate if their complexities are not tested in actual wall-clock time. To address the limitations of existing approaches, this dissertation presents novel real-time solution designs for two types of optimization problems in wireless networks: i) problems that have closed-form mathematical models, and ii) problems that cannot be modeled in closed form. For the first type of problems, we propose a novel approach that consists of (i) problem decomposition, which breaks an original optimization problem into a large number of small and independent sub-problems, (ii) search intensification, which identifies the most promising problem sub-space and selects a small set of sub-problems to match the available GPU processing cores, and (iii) GPU-based large-scale parallel processing, which solves the selected sub-problems in parallel and finds a near-optimal solution to the original problem. The efficacy of this approach is illustrated by our solutions to the following two problems. • Real-Time Scheduling to Achieve Fair LTE/Wi-Fi Coexistence: We investigate a resource optimization problem for the fair coexistence of LTE and Wi-Fi in the unlicensed spectrum. The real-time requirement for finding the optimal channel division and LTE resource allocation solution is on a 1 ms time scale. This problem involves the optimal division of transmission time between LTE and Wi-Fi across multiple unlicensed bands, and the resource allocation among LTE users within LTE's "ON" periods. We formulate this optimization problem as a mixed-integer linear program and prove its NP-hardness. Then, by exploiting the unique problem structure, we propose a real-time solution design based on problem decomposition and GPU-based parallel processing techniques. Results from an implementation on the NVIDIA GPU/CUDA platform demonstrate that the proposed solution achieves a near-optimal objective and meets the 1 ms timing requirement in 4G LTE.
• An Ultrafast GPU-based Proportional Fair Scheduler for 5G NR: We study the popular proportional-fair (PF) scheduling problem in a 5G NR environment. The real-time requirement for determining the optimal (with respect to the PF objective) resource allocation and MCS selection solution is 125 μs (under 5G numerology 3). In this problem, we need to allocate frequency-time resource blocks on an operating channel and assign a modulation and coding scheme (MCS) for each active user in the cell. We present GPF+, a GPU-based real-time PF scheduler. With GPF+, the original PF optimization problem is decomposed into a large number of small and independent sub-problems. We then employ a cross-entropy based search intensification technique to identify the most promising problem sub-space and select a small set of sub-problems to fit into a GPU. After solving the selected sub-problems in parallel using GPU cores, we find the best sub-problem solution and use it as the final scheduling solution. Evaluation results show that GPF+ provides near-optimal PF performance in a 5G cell while meeting the 125 μs real-time requirement. For the second type of problems, where there is no closed-form mathematical formulation, we propose to employ model-free deep learning (DL) or deep reinforcement learning (DRL) techniques, along with judicious consideration of the timing requirement throughout the design. Under DL/DRL, we employ deep function approximators (neural networks) to learn the unknown objective function of an optimization problem, approximate an optimal algorithm for finding resource allocation solutions, or discover important mapping functions related to the resource optimization. To meet the real-time requirement, we propose to augment DL or DRL methods with optimization techniques at the input or output of the deep function approximators to reduce their complexities and computational time. Under this approach, we study the following two problems: • A DRL-based Approach to Dynamic eMBB/URLLC Multiplexing in 5G NR: We study the problem of dynamically multiplexing eMBB and URLLC on the same channel through preemptive resource puncturing. The real-time requirement for determining the optimal URLLC puncturing solution is 1 ms (under 5G numerology 0). A major challenge in solving this problem is that it cannot be modeled using closed-form mathematical expressions. To address this issue, we develop a model-free DRL approach that employs a deep neural network to learn an optimal algorithm for allocating the URLLC puncturing over the operating channel, with the objective of minimizing the adverse impact of URLLC traffic on eMBB. Our contributions include a novel learning method that exploits the intrinsic properties of the URLLC puncturing optimization problem to achieve fast and stable learning convergence, and a mechanism to ensure the feasibility of the deep neural network's output puncturing solution. Experimental results demonstrate that our DRL-based solution significantly outperforms state-of-the-art algorithms proposed in the literature and meets the 1 ms real-time requirement for dynamic multiplexing. • A DL-based Link Adaptation for eMBB/URLLC Multiplexing in 5G NR: We investigate MCS selection for eMBB traffic under the impact of URLLC preemptive puncturing. The real-time requirement for determining the optimal MCSs for all eMBB transmissions scheduled in a transmission interval is 125 μs (under 5G numerology 3).
The objective is to have eMBB meet a given block-error rate (BLER) target under the adverse impact of URLLC puncturing. Since this problem cannot be mathematically modeled in closed form, we propose a DL-based solution design that uses a deep neural network to learn and predict the BLER of a transmission under each MCS level. Based on these BLER predictions, an optimal MCS can then be found for each transmission to achieve the BLER target. To meet the 5G real-time requirement, we implement this design on a hybrid CPU-GPU architecture to minimize the execution time. Extensive experimental results show that our design can select optimal MCSs under the impact of preemptive puncturing and meet the 125 μs timing requirement. / Doctor of Philosophy / In modern wireless networks such as 4G LTE and 5G NR, the optimal allocation of radio resources must be performed within a real-time requirement on a 1 ms or even 100 μs time scale. Such a real-time requirement comes from the physical properties of wireless channels, the short time resolution for resource allocation defined in the wireless communication standards, and the low-latency requirements of delay-sensitive applications. The real-time requirement, although necessary for wireless networks in the field, has hardly been considered a key constraint for solution design in the research community. Existing solutions in the literature mostly consider theoretical computational complexities rather than actual computation time as measured by a wall clock. To address the limitations of existing approaches, this dissertation presents real-time solution designs for two types of optimization problems in wireless networks: i) problems that have mathematical models, and ii) problems that cannot be modeled mathematically. For the first type of problems, we propose a novel approach that consists of (i) problem decomposition, (ii) search intensification, and (iii) GPU-based large-scale parallel processing techniques. The efficacy of this approach is illustrated by our solutions to the following two problems. • Real-Time Scheduling to Achieve Fair LTE/Wi-Fi Coexistence: We investigate a resource optimization problem for the fair coexistence of LTE and Wi-Fi users in the same (unlicensed) spectrum. The real-time requirement for finding the optimal LTE resource allocation solution is on a 1 ms time scale. • An Ultrafast GPU-based Proportional Fair Scheduler for 5G NR: We study the popular proportional-fair (PF) scheduling problem in a 5G NR environment. The real-time requirement for determining the optimal resource allocation and modulation and coding scheme (MCS) for each user is 125 μs. For the second type of problems, where there is no mathematical formulation, we propose to employ model-free deep learning (DL) or deep reinforcement learning (DRL) techniques, along with judicious consideration of the timing requirement throughout the design. Under this approach, we study the following two problems: • A DRL-based Approach to Dynamic eMBB/URLLC Multiplexing in 5G NR: We study the problem of dynamically multiplexing eMBB and URLLC on the same channel through preemptive resource puncturing. The real-time requirement for determining the optimal URLLC puncturing solution is 1 ms. • A DL-based Link Adaptation for eMBB/URLLC Multiplexing in 5G NR: We investigate MCS selection for eMBB traffic under the impact of URLLC preemptive puncturing.
The real-time requirement for determining the optimal MCSs for all eMBB transmissions scheduled in a transmission interval is 125 μs.
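The decompose / intensify / evaluate-in-parallel pattern this abstract describes can be sketched in a few lines. In the sketch below, NumPy broadcasting stands in for the CUDA kernels, candidate schedules are drawn uniformly at random (the dissertation uses cross-entropy based search intensification), and all dimensions, rates, and the PF-style metric are illustrative assumptions rather than the author's implementation.

```python
# Hedged sketch of the decomposition + parallel-evaluation pattern; not the
# dissertation's code. NumPy stands in for GPU kernels.
import numpy as np

rng = np.random.default_rng(0)
N_USERS, N_RBS, N_CANDS = 25, 100, 4096                   # assumed cell sizes

inst_rate = rng.uniform(0.1, 2.0, size=(N_USERS, N_RBS))  # per-RB rates (Mb/s)
avg_thru = rng.uniform(0.5, 5.0, size=N_USERS)            # long-term averages

# (i) decomposition: each candidate RB-to-user mapping is an independent
# sub-problem; (ii) a cross-entropy step would resample around the elites.
cands = rng.integers(0, N_USERS, size=(N_CANDS, N_RBS))

# (iii) parallel evaluation of a PF-style objective for all candidates at once
alloc = inst_rate[cands, np.arange(N_RBS)]                # rate each RB delivers
per_user = np.zeros((N_CANDS, N_USERS))
for u in range(N_USERS):                                  # accumulate per user
    per_user[:, u] = np.where(cands == u, alloc, 0.0).sum(axis=1)
pf = np.log1p(per_user / avg_thru).sum(axis=1)            # sum of log-utilities

best = cands[np.argmax(pf)]                               # near-optimal schedule
print("best PF value:", pf.max(), "first RB owners:", best[:10])
```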
42

End-to-End Autonomous Driving with Deep Reinforcement Learning in Simulation Environments

Wang, Bingyu 10 April 2024 (has links)
In the rapidly evolving field of autonomous driving, the integration of Deep Reinforcement Learning (DRL) promises significant advancements towards achieving reliable and efficient vehicular systems. This study presents a comprehensive examination of DRL's application within a simulated autonomous driving context, with a focus on the nuanced impact of representation learning parameters on the performance of end-to-end models. An overview of the theoretical underpinnings of machine learning, deep learning, and reinforcement learning is provided, laying the groundwork for their application in autonomous driving scenarios. The methodology outlines a detailed framework for training autonomous vehicles in the Duckietown simulation environment, employing both non-end-to-end and end-to-end models to investigate the effectiveness of various reinforcement learning algorithms and representation learning techniques. At the heart of this research are extensive simulation experiments designed to evaluate the Proximal Policy Optimization (PPO) algorithm's effectiveness within the established framework. The study delves into reward structures and the impact of representation learning parameters on the performance of end-to-end models. A critical comparison of the models in the validation chapter highlights the significant role of representation learning parameters in the outcomes of DRL-based autonomous driving systems. The findings reveal that meticulous adjustment of representation learning parameters markedly influences the end-to-end training process. Notably, image segmentation techniques significantly enhance feature recognizability and model performance.

Contents:
List of Figures
List of Tables
List of Abbreviations
List of Symbols
1 Introduction
1.1 Autonomous Driving Overview
1.2 Problem Description
1.3 Research Structure
2 Research Background
2.1 Theoretical Basis
2.1.1 Machine Learning
2.1.2 Deep Learning
2.1.3 Reinforcement Learning
2.2 Related Work
3 Methodology
3.1 Problem Definition
3.2 Simulation Platform
3.3 Observation Space
3.3.1 Observation Space of Non-End-to-End Model
3.3.2 Observation Space of End-to-End Model
3.4 Action Space
3.5 Reward Shaping
3.5.1 Speed Penalty
3.5.2 Position Reward
3.6 Map and Training Dataset
3.6.1 Map Design
3.6.2 Training Dataset
3.7 Variational Autoencoder Structure
3.7.1 Mathematical Foundation for VAE
3.8 Reinforcement Learning Framework
3.8.1 Actor-Critic Method
3.8.2 Policy Gradient
3.8.3 Trust Region Policy Optimization
3.8.4 Proximal Policy Optimization
4 Simulation Experiments
4.1 Experimental Setup
4.2 Representation Learning Model
4.3 End-to-End Model
5 Results
6 Validation and Evaluation
6.1 Validation of End-to-End Model
6.2 Evaluation of End-to-End Model
6.2.1 Comparison with Baselines
6.2.2 Comparison with Different Representation Learning Models
7 Conclusion and Future Work
7.1 Summary
7.2 Future Research
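As a point of reference for the PPO experiments this study evaluates, a minimal sketch of PPO's clipped surrogate loss is given below; the function name and toy tensors are illustrative, not code from the thesis.

```python
# Hedged sketch of PPO's clipped surrogate loss (Schulman et al., 2017);
# shapes and names are illustrative assumptions, not the thesis code.
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective; returns the loss to minimise."""
    ratio = torch.exp(logp_new - logp_old)            # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()      # maximise the surrogate

# toy usage with random data
lp_new, lp_old, adv = torch.randn(64), torch.randn(64), torch.randn(64)
print(ppo_clip_loss(lp_new, lp_old, adv))
```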
43

Autonomous Navigation with Deep Reinforcement Learning in Carla Simulator

Wang, Peilin 08 December 2023 (has links)
With the rapid development of autonomous driving and artificial intelligence technology, end-to-end autonomous driving has become a research hotspot. This thesis explores the application of deep reinforcement learning to realizing end-to-end autonomous driving. We built a deep reinforcement learning virtual environment in the Carla simulator and, based on it, trained a policy model to control a vehicle along a preplanned route. We used the Proximal Policy Optimization algorithm due to its stable performance. Considering the complexity of end-to-end autonomous driving, we also carefully designed a comprehensive reward function to train the policy model more efficiently. The model inputs for this study are of two types: firstly, real-time road information and vehicle state data obtained from the Carla simulator, and secondly, real-time images captured by the vehicle's front camera. To understand the influence of different input information on the training effect and model performance, we conducted a detailed comparative analysis. The test results showed that the accuracy and significance of the information have a significant impact on the learning effect of the agent, which in turn directly affects the performance of the model. Through this study, we have not only confirmed the potential of deep reinforcement learning in the field of end-to-end autonomous driving, but also provided an important reference for future research and development of related technologies.
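A composite reward of the kind the thesis describes, rewarding progress along a planned route while penalising deviation and collisions, might look as follows; the weights, thresholds, and choice of terms are illustrative assumptions, not the author's exact design.

```python
# Hedged sketch of a route-following reward; constants are assumptions.
def driving_reward(speed, lateral_offset, heading_error, collided,
                   target_speed=8.0):
    """Reward progress along the route, penalise deviation and crashes."""
    if collided:
        return -100.0                                    # terminal penalty
    r_speed = 1.0 - abs(speed - target_speed) / target_speed
    r_lane = max(0.0, 1.0 - abs(lateral_offset) / 2.0)   # metres from centre
    r_head = max(0.0, 1.0 - abs(heading_error) / 0.5)    # radians off-heading
    return 0.4 * r_speed + 0.4 * r_lane + 0.2 * r_head

print(driving_reward(speed=7.5, lateral_offset=0.3,
                     heading_error=0.05, collided=False))
```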
44

AI-Based Self-Adaptive Software in a 5G Simulation

Jönsson, Axel, Hammarhjelm, Erik January 2024 (has links)
5G has emerged to revolutionize the telecommunications industry. With its many possibilities come great challenges, such as managing the increased complexity of the many parameters in these new networks. It is common practice to test new network features before deploying them, and this is often done in a simulated environment. The task of this thesis was to investigate whether self-adaptive software, in simulations at Ericsson, could dynamically change the bandwidth to increase the net throughput while minimizing the packet loss, i.e., to maximize the overall quality of service on the network without the need for human intervention. A simple simulation of a 5G network was created to train and test the effect of two proposed AI models. The models tested were Proximal Policy Optimization and Deep Deterministic Policy Gradient, where the former showed promising results while the latter did not yield any significant improvements compared to the benchmarks. The study indicates that self-adaptive software in simulated environments can effectively be achieved using AI while increasing the quality of service.
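A minimal toy environment in the spirit of this study, where the agent's action adjusts the bandwidth and the reward trades throughput against packet loss, could be sketched as below; the dynamics, constants, and names are assumptions, not Ericsson's simulator.

```python
# Hedged sketch of a bandwidth-adaptation environment; toy dynamics only.
import random

class BandwidthEnv:
    def __init__(self, max_bw=100.0):
        self.max_bw = max_bw
        self.bw = max_bw / 2          # current allocated bandwidth
        self.demand = 40.0            # offered traffic

    def step(self, delta):
        """Action: increase or decrease bandwidth by delta."""
        self.bw = min(max(self.bw + delta, 1.0), self.max_bw)
        self.demand = max(5.0, self.demand + random.gauss(0, 5))
        served = min(self.bw, self.demand)
        loss = max(0.0, self.demand - self.bw)   # traffic dropped
        reward = served - 2.0 * loss             # QoS trade-off
        return (self.bw, self.demand), reward

env = BandwidthEnv()
print(env.step(+5.0))
```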
45

Intelligent autoscaling in Kubernetes: the impact of container performance indicators in model-free DRL methods

Praturlon, Tommaso January 2023 (has links)
A key challenge in the field of cloud computing is to automatically scale software containers in a way that accurately matches the demand for the services they run. To manage such components, container orchestrator tools such as Kubernetes are employed, and in the past few years, researchers have attempted to optimise its autoscaling mechanism with different approaches. Recent studies have showcased the potential of Actor-Critic Deep Reinforcement Learning (DRL) methods in container orchestration, demonstrating their effectiveness in various use cases. However, despite the availability of solutions that integrate multiple container performance metrics to evaluate autoscaling decisions, a critical gap exists in understanding how model-free DRL algorithms interact with a state space based on those metrics. Thus, the primary objective of this thesis is to investigate the impact of the state space definition on the performance of model-free DRL methods in the context of horizontal autoscaling within Kubernetes clusters. In particular, our findings reveal distinct behaviours associated with various sets of metrics. Notably, those sets that exclusively incorporate parameters present in the reward function demonstrate superior effectiveness. Furthermore, our results provide valuable insights when compared to related works, as our experiments demonstrate that a careful metric selection can lead to remarkable Service Level Agreement (SLA) compliance, with as low as 0.55% violations and even surpassing baseline performance in certain scenarios.
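The thesis's central variable, the metric-based state space, can be illustrated with a small sketch: the observation keeps only a chosen set of container metrics, and the reward penalises SLA violations plus a resource cost. Metric names, the SLA threshold, and the weighting below are illustrative assumptions, not the thesis's configuration.

```python
# Hedged sketch of a metric-selected DRL state and SLA-aware reward.
def build_state(metrics, selected):
    """Keep only the chosen performance indicators as the observation."""
    return [metrics[k] for k in selected]

def reward(latency_ms, sla_ms, replicas, w_cost=0.1):
    """Penalise SLA violations first, resource usage second."""
    sla_penalty = 1.0 if latency_ms > sla_ms else 0.0
    return -sla_penalty - w_cost * replicas

metrics = {"cpu": 0.72, "mem": 0.41, "rps": 830.0, "p95_latency_ms": 180.0}
state = build_state(metrics, ["cpu", "p95_latency_ms"])  # reward-aligned set
print(state, reward(metrics["p95_latency_ms"], sla_ms=200.0, replicas=3))
```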
46

Deep Reinforcement Learning for Building Control: A comparative study for applying Deep Reinforcement Learning to Building Energy Management

Zheng, Wanfu January 2022 (has links)
Energy and the environment have become hot topics around the world. The building sector accounts for a high proportion of energy consumption, with over one-third of energy use globally. A variety of optimization methods have been proposed for building energy management, mainly divided into two types: model-based and model-free. Model Predictive Control is a model-based method but is not widely adopted by the building industry, as it requires too much expertise and time to develop a model. Model-free Deep Reinforcement Learning (DRL) has seen successful applications in game playing and robotics control. We therefore explored the effectiveness of DRL algorithms applied to building control and investigated which DRL algorithm performs best. Three DRL algorithms were implemented: Deep Deterministic Policy Gradient (DDPG), Double Deep Q-Learning (DDQN), and Soft Actor-Critic (SAC). We used the Building Optimization Testing (BOPTEST) framework, a standardized virtual testbed, to test the DRL algorithms. Performance is evaluated by two Key Performance Indicators (KPIs): thermal discomfort and operational cost. The results show that the DDPG agent performs best, outperforming the baseline by reducing thermal discomfort by 91.5% and 18.3% and operational cost by 11.0% and 14.6% during the peak and typical heating periods, respectively. The DDQN and SAC agents do not show a clear performance advantage over the baseline. This research highlights the excellent control performance of the DDPG agent, suggesting that applying DRL to building control can achieve better performance than conventional control methods.
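A reward that combines BOPTEST's two KPIs, thermal discomfort and operational cost, might be shaped as in the sketch below; the comfort band, time step, and weighting are illustrative assumptions rather than the thesis configuration.

```python
# Hedged sketch of a two-KPI building-control reward; constants are assumptions.
def building_reward(t_zone, t_low, t_high, power_kw, price_per_kwh,
                    dt_h=0.25, w_comfort=10.0):
    """Negative weighted sum of discomfort (Kh) and energy cost."""
    discomfort = max(0.0, t_low - t_zone) + max(0.0, t_zone - t_high)
    cost = power_kw * dt_h * price_per_kwh
    return -(w_comfort * discomfort * dt_h + cost)

print(building_reward(t_zone=19.2, t_low=20.0, t_high=24.0,
                      power_kw=5.0, price_per_kwh=0.12))
```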
47

Reinforcement learning for EV charging optimization: A holistic perspective for commercial vehicle fleets

Cording, Enzo Alexander January 2023 (has links)
Recent years have seen an unprecedented uptake in electric vehicles, driven by the global push to reduce carbon emissions. At the same time, intermittent renewables are being deployed increasingly. These developments are putting flexibility measures such as dynamic load management in the spotlight of the energy transition. Flexibility measures must consider EV charging, as it has the ability to introduce grid constraints: in Germany, the cumulative power of all EV onboard chargers amounts to ca. 120 GW, while the German peak load only amounts to 80 GW. Commercial operations have strong incentives to optimize charging and flatten peak loads in real time, given that the highest quarter-hour can determine the power-related energy bill, and that a blown fuse due to overloading can halt operations. Increasing research effort has therefore gone into real-time-capable optimization methods. Reinforcement Learning (RL) has particularly gained attention due to its versatility, performance, and real-time capabilities. This thesis implements such an approach and introduces FleetRL as a realistic RL environment for EV charging, with a focus on commercial vehicle fleets. Through its implementation, it was found that RL saved up to 83% compared to static benchmarks, and that grid overloading was entirely avoided in some scenarios by sacrificing small portions of SOC or by delaying the charging process. Linear optimization with one year of perfect knowledge outperformed RL, but reached its practical limits in one use case, where a feasible solution could not be found by the solver. Overall, this thesis makes a strong case for RL-based EV charging. It further provides a foundation to build upon: a modular, open-source software framework that integrates an MDP model, schedule generation, and non-linear battery degradation.
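One step of a FleetRL-style charging MDP, with a site fuse limit and a peak-load penalty as described above, could be sketched as follows; fleet size, charger ratings, and the penalty form are illustrative assumptions, not FleetRL's actual interface.

```python
# Hedged sketch of one fleet-charging step; numbers are assumptions.
import numpy as np

def charge_step(soc, actions, p_max_kw=11.0, site_limit_kw=50.0,
                dt_h=0.25, capacity_kwh=60.0):
    """actions in [0, 1] scale each charger; clip the site to its fuse limit."""
    power = np.clip(actions, 0, 1) * p_max_kw
    scale = min(1.0, site_limit_kw / max(power.sum(), 1e-9))  # avoid overload
    power *= scale
    soc = np.minimum(1.0, soc + power * dt_h / capacity_kwh)
    peak_penalty = 0.0 if scale == 1.0 else 1.0               # fuse was binding
    reward = soc.mean() - peak_penalty                        # charge vs. peaks
    return soc, reward

soc, r = charge_step(np.array([0.3, 0.5, 0.8]), np.array([1.0, 1.0, 0.4]))
print(soc, r)
```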
48

Robust Deep Reinforcement Learning for Portfolio Management

Masoudi, Mohammad Amin 27 September 2021 (has links)
In finance, the use of Automated Trading Systems (ATS) on markets is growing every year, and trades generated by algorithms now account for most of the orders that arrive at stock exchanges (Kissell, 2020). Historically, these systems were based on advanced statistical methods and signal processing designed to extract trading signals from financial data. The recent success of machine learning has attracted the interest of the financial community, and Reinforcement Learning, a subcategory of machine learning, has been broadly applied by investors and researchers in building trading systems (Kissell, 2020). In this thesis, we address the issue that deep reinforcement learning may be susceptible to sampling errors and over-fitting, and propose a robust deep reinforcement learning method that integrates techniques from reinforcement learning and robust optimization. We back-test and compare the performance of the developed algorithm, Robust DDPG, against a UBAH (Uniform Buy and Hold) benchmark and other RL algorithms, and show that the robust algorithm of this research can significantly reduce the downside risk of an investment strategy and can ensure a safer path for the investor's portfolio value.
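The robust idea, evaluating an allocation against worst-case perturbed returns inside a small uncertainty set, can be sketched as below; the sampling-based inner minimization and all numbers are illustrative assumptions, not the author's Robust DDPG implementation.

```python
# Hedged sketch of a worst-case (robust) portfolio reward; assumptions only.
import numpy as np

def worst_case_reward(weights, mean_returns, epsilon=0.01, n_samples=256,
                      rng=np.random.default_rng(1)):
    """Min over sampled perturbations ||delta|| <= epsilon of portfolio return."""
    delta = rng.normal(size=(n_samples, len(mean_returns)))
    delta = epsilon * delta / np.linalg.norm(delta, axis=1, keepdims=True)
    perturbed = mean_returns + delta                 # uncertainty-ball samples
    return (perturbed @ weights).min()               # pessimistic reward

w = np.array([0.5, 0.3, 0.2])
mu = np.array([0.001, 0.0005, 0.002])
print(worst_case_reward(w, mu))
```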
49

Nuclear Renewable Integrated Energy System Power Dispatch Optimization for Tightly Coupled Co-Simulation Environment using Deep Reinforcement Learning

Sah, Suba January 2021 (has links)
No description available.
50

Slice-Aware Radio Resource Management for Future Mobile Networks

Khodapanah, Behnam 05 June 2023 (has links)
The concept of network slicing has been introduced to enable mobile networks to accommodate multiple heterogeneous use cases that are anticipated to be served within a single physical infrastructure. Slices are end-to-end virtual networks that share the resources of a physical network, spanning the core network (CN) and the radio access network (RAN). RAN slicing can be more challenging than CN slicing, as the former deals with the distribution of radio resources, whose capacity is not constant over time and is hard to extend. The main challenge in RAN slicing is to simultaneously improve multiplexing gains and assure sufficient isolation between slices, meaning that no slice can negatively influence the performance of another. In this work, a flexible and configurable framework for RAN slicing is provided, in which the diverse requirements of slices are taken into account and slice management algorithms adjust the control parameters of different radio resource management (RRM) mechanisms to satisfy the slices' service level agreements (SLAs). A new entity, called the RAN slice orchestrator, translates the key performance indicator (KPI) targets of the SLAs into these control parameters. Diverse algorithms governing this entity are introduced, ranging from heuristics-based to model-free methods. In addition, a protection mechanism is constructed to prevent slices from negatively influencing each other's performance. The simulation-based analysis demonstrates the feasibility of slicing the RAN with multiplexing gains and slice isolation.
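The role of the RAN slice orchestrator, translating per-slice KPI targets into RRM control parameters, can be illustrated with a simple proportional-feedback sketch; the update rule, the higher-is-better KPI semantics, and all numbers are illustrative assumptions, not the thesis's algorithms.

```python
# Hedged sketch of a KPI-to-parameter feedback loop for slice scheduling weights.
def orchestrate(slices, gain=0.1):
    """slices: {name: {'kpi': measured, 'target': SLA target, 'weight': share}};
    KPIs here are higher-is-better quantities (e.g. throughput, reliability)."""
    for s in slices.values():
        error = (s["target"] - s["kpi"]) / s["target"]   # relative SLA gap
        s["weight"] = max(0.05, s["weight"] * (1 + gain * error))
    total = sum(s["weight"] for s in slices.values())
    for s in slices.values():                            # renormalise the shares
        s["weight"] /= total
    return slices

slices = {"embb":  {"kpi": 90.0,   "target": 100.0,  "weight": 0.5},
          "urllc": {"kpi": 0.9990, "target": 0.9999, "weight": 0.5}}
print(orchestrate(slices))
```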
