1. An Automated VNF Manager based on Parameterized-Action MDP and Reinforcement Learning. Li, Xinrui, 15 April 2021.
Managing and orchestrating the behaviour of virtualized Network Functions (VNFs) remains a major challenge due to their heterogeneity and the ever-increasing resource demands of the served flows. In this thesis, we propose a novel VNF manager (VNFM) that employs a parameterized-action reinforcement learning mechanism to simultaneously decide on the optimal VNF management action (e.g., migration, scaling, termination or rebooting) and the action's corresponding configuration parameters (e.g., the migration location or the amount of resources needed for scaling). More precisely, we first propose a novel parameterized-action Markov decision process (PAMDP) model that accurately describes each VNF, the instances of its components and their communication, as well as the set of management actions permitted to the VNFM and the rewards of realizing these actions. The use of parameterized actions allows us to rigorously represent the functionalities of the VNFM in order to perform various lifecycle management (LCM) operations on the VNFs. Next, we propose a two-stage reinforcement learning (RL) scheme that alternates between learning an action-value function for the discrete LCM actions and updating the action-parameter selection policy. In contrast to existing machine learning schemes, the proposed work provides a holistic management platform that unifies efforts which previously targeted individual LCM functions, such as VNF placement and scaling. Performance evaluation results demonstrate the efficiency of the proposed VNFM in maintaining the required performance level of the VNF while optimizing its resource configurations.
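The toy sketch below illustrates, in Python, the kind of two-stage parameterized-action selection and value update the abstract describes: a discrete LCM action is chosen from Q-values, then a per-action parameter model emits its continuous configuration, and the action-value function receives a TD update. The state features, linear models, epsilon-greedy exploration and reward are invented stand-ins for illustration, not the thesis's actual PAMDP or learning scheme (the parameter-policy update stage is omitted for brevity).

```python
# Illustrative sketch only: a toy parameterized-action step in the spirit of the
# PAMDP described above. All numbers and models here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

ACTIONS = ["migrate", "scale", "terminate", "reboot"]   # discrete LCM actions
STATE_DIM = 4                                           # e.g., CPU, memory, latency, load
Q_weights = rng.normal(scale=0.1, size=(len(ACTIONS), STATE_DIM))      # linear Q(s, a)
param_weights = rng.normal(scale=0.1, size=(len(ACTIONS), STATE_DIM))  # per-action parameter model

def select_action(state, epsilon=0.1):
    """Stage 1: pick a discrete LCM action; Stage 2: pick its continuous parameter."""
    q_values = Q_weights @ state
    if rng.random() < epsilon:
        a = int(rng.integers(len(ACTIONS)))   # explore
    else:
        a = int(np.argmax(q_values))          # exploit
    param = float(param_weights[a] @ state)   # e.g., scaling amount or target-host score
    return a, param

def q_update(state, a, reward, next_state, alpha=0.01, gamma=0.95):
    """TD(0) update of the action-value function for the chosen discrete action."""
    target = reward + gamma * np.max(Q_weights @ next_state)
    td_error = target - Q_weights[a] @ state
    Q_weights[a] += alpha * td_error * state

# One toy interaction step with a random "environment".
state = rng.random(STATE_DIM)
action, parameter = select_action(state)
reward = -abs(parameter - 0.5)                # placeholder reward
next_state = rng.random(STATE_DIM)
q_update(state, action, reward, next_state)
print(ACTIONS[action], round(parameter, 3))
```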
2. A Data Model Driven Approach to Managing Network Functions Virtualization: Aiding Network Operators in Provisioning and Configuring Network Functions. Sällberg, Kristian, January 2015.
This master's thesis explains why certain network services are difficult to provision and configure using IT automation and cloud orchestration software. An improvement is proposed and motivated. The proposed improvement enables network operators to define a set of data models describing how to provision and interconnect a set of Virtual Network Functions (VNFs), and possibly existing physical network functions, to form networks. Moreover, the proposed solution enables network operators to change the configuration at runtime. The work can be seen as a step towards self-managing and auto-scaling networks. The proposed approach is compared to a well-known cloud management system (OpenStack) in order to evaluate whether it decreases the amount of time network operators need to design network topologies and services containing VNFs. Data is collected through observations of network operators, interviews, and experiments. Analysis of this data shows that the proposed approach can decrease the time required to design network topologies and services, provided that the network operators are already acquainted with the data modeling language YANG. The time required to provision VNFs so that they respond to connections can also be decreased using the proposed approach. The proposed approach does not offer as much functionality as OpenStack, as it is limited to VNF scenarios.
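As a rough illustration of the data-model-driven idea, the sketch below declares a desired VNF topology as data and derives the provisioning actions needed to reach it. The dictionary schema, field names and reconcile logic are hypothetical; the thesis itself expresses such models in YANG rather than Python.

```python
# Hypothetical sketch: the operator declares the desired topology as data, and a
# provisioning routine diffs it against the running state. Fields are invented.
desired_topology = {
    "vnfs": {
        "fw1": {"image": "firewall", "cpu": 2, "ram_mb": 2048},
        "lb1": {"image": "loadbalancer", "cpu": 1, "ram_mb": 1024},
    },
    "links": [("fw1", "lb1")],
}

running_state = {"vnfs": {"fw1": {"image": "firewall", "cpu": 2, "ram_mb": 2048}}, "links": []}

def reconcile(desired, running):
    """Return the provisioning actions needed to reach the desired topology."""
    actions = []
    for name, spec in desired["vnfs"].items():
        if name not in running["vnfs"]:
            actions.append(("instantiate", name, spec))
        elif running["vnfs"][name] != spec:
            actions.append(("reconfigure", name, spec))   # runtime configuration change
    for link_pair in desired["links"]:
        if link_pair not in running["links"]:
            actions.append(("connect", *link_pair))
    return actions

print(reconcile(desired_topology, running_state))
# e.g. [('instantiate', 'lb1', {...}), ('connect', 'fw1', 'lb1')]
```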
3. Evaluation of Using Secure Enclaves in Virtualized Radio Environments. Norberg, Emil, January 2019.
Virtual Network Functions (VNFs) are software applications that process network packets in virtualized environments such as clouds. Using VNFs to process network traffic inside a cloud, which could be controlled by a third party, exposes the secrets stored within the VNFs to a significant number of threats. Trusted Execution Environments (TEEs) are hardware technologies dedicated to protecting software from other malicious applications and users. Open Enclave and Asylo are two SDKs that decouple software and hardware and enable developers to build applications that utilize TEEs without creating hardware dependencies. Open Enclave and Asylo are still in an early stage of development, Asylo in particular. The impact of integrating Open Enclave and Asylo into VNFs from a security and performance perspective was addressed by performing a risk assessment and running performance experiments. The identified vulnerabilities in VNFs were mitigated by using the security properties available from TEEs. The results show that protecting VNFs with Open Enclave and Asylo mitigates a significant number of threats. However, the VNFs suffer a performance penalty when using TEEs, and they remain vulnerable to side-channel and Denial-of-Service attacks.
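The following plain-Python analogy sketches the trust boundary the abstract relies on: the VNF's secret lives only inside a trusted component that exposes a narrow operation interface, so the untrusted packet-processing code never handles the key. This is only a conceptual stand-in; real TEEs such as Open Enclave and Asylo enforce the boundary in hardware, and the class and method names here are invented.

```python
# Conceptual analogy only: a "simulated enclave" holding a secret and exposing a
# narrow interface to untrusted VNF code. Real enclaves are not Python objects.
import hmac
import hashlib

class SimulatedEnclave:
    """Holds the secret and only exposes the operations the VNF needs."""
    def __init__(self, key: bytes):
        self.__key = key                        # never returned to the caller

    def sign(self, packet: bytes) -> bytes:
        return hmac.new(self.__key, packet, hashlib.sha256).digest()

class Vnf:
    """Untrusted packet-processing code: it can request signatures, not the key."""
    def __init__(self, enclave: SimulatedEnclave):
        self.enclave = enclave

    def process(self, packet: bytes) -> bytes:
        return packet + self.enclave.sign(packet)

vnf = Vnf(SimulatedEnclave(b"provisioned-secret"))
print(vnf.process(b"payload")[:16])
```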
4. Resource allocation in cloud and Content Delivery Network (CDN). Ahvar, Shohreh, 10 July 2018.
High energy costs and carbon emissions are two significant problems in the distributed computing domain, including distributed clouds and Content Delivery Networks (CDNs). Resource allocation methods (e.g., in the form of Virtual Machine (VM) or Virtual Network Function (VNF) placement algorithms) have a direct effect on cost, carbon emissions and Quality of Service (QoS). This thesis comprises three related parts. The first part targets resource allocation for distributed clouds, in the form of network-aware VM placement, and proposes cost- and carbon-emission-efficient allocation algorithms for green distributed clouds: after a survey of the state of the art on cost- and carbon-aware allocation, it introduces the network-aware resource allocation method NACER, the cost- and carbon-efficient VM placement method CACEV, and its dynamic extension D-CACEV. Given the similarity between the network-aware VM placement problem in distributed clouds and the VNF placement problem, the second part builds on this experience and proposes a new cost-efficient resource allocation (VNF placement) algorithm, CCVP, for network service provisioning in data centers and Internet Service Provider (ISP) networks, implements its output on a real platform, and examines the effect of reordering VNFs within service chains. Finally, the last part presents new cost-efficient VNF placement algorithms for value-added service provisioning in NFV-based CDNs.
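As a loose illustration of the cost- and carbon-aware placement problems studied in this thesis, the sketch below greedily places VMs on the data center that minimizes a weighted cost-plus-carbon score. The site attributes, weights and greedy rule are invented for the example and do not reproduce the NACER, CACEV or D-CACEV formulations.

```python
# Illustrative only: toy greedy cost- and carbon-aware VM placement.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    free_cpu: int
    price_per_cpu: float      # monetary cost (assumed units)
    carbon_per_cpu: float     # gCO2 per CPU-hour (assumed units)

def place(vms, sites, alpha=0.5):
    """Greedily place each VM on the site minimizing a weighted cost+carbon score."""
    placement = {}
    for vm_name, cpu_need in vms:
        candidates = [s for s in sites if s.free_cpu >= cpu_need]
        if not candidates:
            placement[vm_name] = None          # reject when capacity is exhausted
            continue
        best = min(candidates,
                   key=lambda s: alpha * s.price_per_cpu + (1 - alpha) * s.carbon_per_cpu)
        best.free_cpu -= cpu_need
        placement[vm_name] = best.name
    return placement

sites = [DataCenter("dc-green", 8, 0.09, 20.0), DataCenter("dc-cheap", 8, 0.05, 90.0)]
print(place([("vm1", 4), ("vm2", 4), ("vm3", 4)], sites, alpha=0.3))
```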
5. HAALO: A cloud native hardware accelerator abstraction with low overhead. Facchetti, Jeremy, January 2019.
With the upcoming 5G deployment and the exponentially increasing data transmitted over cellular networks, off-the-shelf hardware will not provide enough performance to cope with the traffic being transferred. To tackle that problem, hardware accelerators will be of great support thanks to their better performance and lower energy consumption. However, hardware accelerators are not a silver bullet, as their very nature prevents them from being as flexible as CPUs. The integration of hardware accelerators into Kubernetes and Docker, respectively the most used tools for orchestration and containerization, is still not as flexible as it needs to be. In this thesis, we developed a framework that allows a more flexible integration of these accelerators into a Kubernetes cluster using Docker containers, making use of an abstraction layer instead of the classic virtualization process. Our results compare the performance of an execution with and without the framework developed during this thesis. We found that the framework's overhead depends on the size of the data being processed by the accelerator but does not exceed a very low percentage of the total execution time. This framework provides an abstraction for hardware accelerators and thus offers an easy way to integrate hardware-accelerated applications into a heterogeneous cluster, or even across different clusters with different types of hardware accelerators. It also moves the hardware-specific parts of an accelerated program from the containers to the infrastructure and enables a new kind of service: OpenCL as a service.
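A hypothetical sketch of the abstraction-layer idea follows: application code in a container depends only on a thin accelerator interface, so the same workload can run on clusters with different accelerator types or fall back to the CPU. The interface, class names and broker endpoint are invented for illustration and are not the framework's actual API.

```python
# Conceptual sketch: containers talk to an abstract accelerator interface instead
# of binding to a specific device. Names are invented, not the HAALO API.
from abc import ABC, abstractmethod

class Accelerator(ABC):
    @abstractmethod
    def run(self, kernel_name: str, data: bytes) -> bytes: ...

class CpuFallback(Accelerator):
    def run(self, kernel_name: str, data: bytes) -> bytes:
        return data[::-1]                      # stand-in for real processing

class RemoteOpenCL(Accelerator):
    def __init__(self, endpoint: str):
        self.endpoint = endpoint               # e.g., a cluster-local "OpenCL as a service" broker
    def run(self, kernel_name: str, data: bytes) -> bytes:
        # A real deployment would ship the data to the broker; here we just fall back.
        return CpuFallback().run(kernel_name, data)

def process(acc: Accelerator, payload: bytes) -> bytes:
    # Application code depends only on the abstract interface, so the same container
    # image can run on clusters with different accelerator types.
    return acc.run("packet_filter", payload)

print(process(RemoteOpenCL("accel-broker:9000"), b"example"))
```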
6. Offloading Virtual Network Functions – Hierarchical Approach. Langlet, Jonatan, January 2020.
Next-generation mobile networks are designed to run in a virtualized environment, enabling rapid infrastructure deployment and high flexibility for coping with increasing traffic demands and new service requirements. Such network function virtualization imposes additional packet latencies and potential bottlenecks that are not present when legacy network functions run on dedicated hardware; these bottlenecks include PCIe transfer delays, virtualization overhead, and the use of commodity server hardware that is not optimized for packet processing. Through recent developments in P4-programmable networking devices, it is possible to implement complex packet processing pipelines directly in the network data plane, allowing critical traffic flows to be offloaded and flexibly hardware-accelerated on new programmable packet processing hardware before entering the virtualized environment. In this thesis, we design and implement a novel hybrid NFV processing architecture which integrates programmable NICs and commodity server hardware and is capable of offloading virtual network functions for specified traffic flows directly to the server network card; these flows completely bypass the softwarization overhead, while less sensitive traffic is processed on the underlying host server. An evaluation in a testbed with customized traffic generators shows that accelerated flows have significantly lower jitter and latency compared with flows processed on commodity server hardware. Our evaluation gives important insights into the design of such hardware-accelerated virtual network deployments, showing that hybrid network architectures are a viable solution for enabling infrastructure scalability without sacrificing critical-flow performance.
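The sketch below gives a toy version of the offload decision such a hybrid architecture must make: latency-critical flows receive a match-action rule destined for the programmable NIC, while the remaining traffic stays in the host's software pipeline. The flow fields, latency threshold, NIC rule capacity and rule format are assumptions made for illustration only.

```python
# Toy sketch of a hybrid NIC/host offload planner; all fields are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str
    dst: str
    dport: int
    latency_budget_us: int     # assumed per-flow latency requirement

def plan_offload(flows, nic_rule_capacity=2, threshold_us=100):
    """Split flows into NIC-offloaded rules and host-processed traffic."""
    critical = sorted((f for f in flows if f.latency_budget_us <= threshold_us),
                      key=lambda f: f.latency_budget_us)
    offloaded = critical[:nic_rule_capacity]       # NIC match-action tables are small
    host = [f for f in flows if f not in offloaded]
    nic_rules = [{"match": (f.src, f.dst, f.dport), "action": "process_on_nic"}
                 for f in offloaded]
    return nic_rules, host

flows = [Flow("10.0.0.1", "10.0.1.1", 5060, 50),
         Flow("10.0.0.2", "10.0.1.2", 443, 800),
         Flow("10.0.0.3", "10.0.1.3", 5004, 80)]
rules, host_flows = plan_offload(flows)
print(len(rules), "NIC rules;", len(host_flows), "flows on host")
```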
7. Content Delivery Networks as a Service (CDNaaS). Yala, Louiza, 23 November 2018.
The goal of this thesis is to study and evaluate the role of virtual CDNs (vCDNs) in improving end-users' Quality of Experience (QoE) while reducing service providers' costs and preserving service availability. First, we present the design and implementation of an architecture for the on-demand deployment of a vCDN infrastructure over a telco cloud, enabling a network operator to virtualize its CDN infrastructure and lease it to content providers. Second, we propose different algorithms for solving the Virtual Network Function (VNF) placement problem: resource allocation is modeled as an optimization problem that combines the information supplied in the content provider's request with data about the network and compute infrastructure, covering both the allocation of vCPUs to VMs and the placement of VMs on physical machines. We propose polynomial-time heuristic algorithms for a relaxed version of the problem and show experimentally that the derived solutions are close to the optimum. Finally, we study and evaluate solutions for placing VNFs at the edge, moving from the traditional central cloud to the edge, and show how our method can reduce delays while still providing a highly available service.
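As a simple illustration of the joint vCPU-to-VM and VM-to-physical-machine allocation mentioned above, the sketch below applies first-fit-decreasing bin packing at both levels. The capacities and demands are invented, and the thesis's own optimization model and heuristics are not reproduced here.

```python
# Toy two-level packing: vCPU demands into VMs, then VMs into physical machines.
def first_fit_decreasing(items, bin_capacity):
    """Pack item sizes into as few bins as possible; returns a list of bins."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= bin_capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

# Level 1: pack per-component vCPU demands into VMs of 8 vCPUs each (assumed size).
vcpu_demands = [3, 5, 2, 6, 1, 4]
vms = first_fit_decreasing(vcpu_demands, bin_capacity=8)

# Level 2: pack the resulting VM sizes into physical machines of 16 vCPUs each (assumed size).
pms = first_fit_decreasing([sum(vm) for vm in vms], bin_capacity=16)

print("VMs:", vms)
print("PMs:", pms)
```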
8. Autonomic Management and Orchestration Strategies in MEC-Enabled 5G Networks. Subramanya, Tejas, 26 October 2021.
5G and beyond mobile network technology promises to deliver unprecedented ultra-low latency and high data rates, paving the way for many novel applications and services. Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC) are two technologies expected to play a vital role in achieving the ambitious Quality of Service requirements of such applications. While NFV provides flexibility by enabling network functions to be dynamically deployed and interconnected to realize Service Function Chains (SFCs), MEC brings computing capability to the mobile network's edge, thus reducing latency and alleviating the transport network load. However, adequate mechanisms are needed to meet dynamically changing network service demands (in both single and multiple domains) and to optimally utilize network resources while ensuring that the end-to-end latency requirements of services are always satisfied. In this dissertation, we break the problem into three separate stages and present a solution for each of them.
First, we apply Artificial Intelligence (AI) techniques to drive NFV resource orchestration in MEC-enabled 5G architectures for single- and multi-domain scenarios. We propose three deep learning approaches to perform horizontal and vertical Virtual Network Function (VNF) auto-scaling: (i) Multilayer Perceptron (MLP) classification and regression (single domain), (ii) centralized Artificial Neural Network (ANN), centralized Long Short-Term Memory (LSTM) and centralized Convolutional Neural Network-LSTM (CNN-LSTM) (single domain), and (iii) federated ANN, federated LSTM and federated CNN-LSTM (multi-domain). We evaluate the performance of each of these deep learning models, trained on a commercial network operator dataset, and investigate the pros and cons of the different approaches to VNF auto-scaling. For the first approach, our results show that both the MLP classifier and the MLP regressor have strong predictive capability for auto-scaling, with the MLP regressor outperforming the MLP classifier in terms of accuracy. For the second approach, CNN-LSTM performs best for the QoS-prioritized objective and LSTM performs best for the cost-prioritized objective in one-step prediction, while the encoder-decoder CNN-LSTM model outperforms the encoder-decoder LSTM model for both objectives in multi-step prediction. For the third approach, both the federated LSTM and federated CNN-LSTM models perform better than the federated ANN model. We also note that, in general, the federated learning approaches perform worse than the centralized learning approaches.
Second, we employ Integer Linear Programming (ILP) techniques to formulate and solve a joint user association and SFC placement problem, where each SFC represents a service requested by a user with end-to-end latency and data rate requirements. We also develop a comprehensive end-to-end latency model considering radio delay, backhaul network delay and SFC processing delay for 5G mobile networks. We evaluate the proposed model using simulations based on a real operator network topology and real-world latency values. Our results show that the average end-to-end latency is reduced significantly when SFCs are placed on the MEC hosts according to their latency and data rate demands. Furthermore, we propose a heuristic algorithm to address the scalability issue of the ILP, which can solve the above association and mapping problem in seconds rather than hours.
Finally, we introduce lightMEC, a lightweight MEC platform for deploying mobile edge computing functionalities that allows hosting low-latency and bandwidth-intensive applications at the network edge. Measurements conducted over a real-life testbed demonstrate that lightMEC can support practical MEC applications without requiring any change to the functionality of existing mobile network nodes in the access and core network segments. The significant benefits of adopting the proposed architecture are analyzed based on a proof-of-concept demonstration of the content caching use case. Furthermore, we introduce an AI-driven Kubernetes orchestration prototype, implemented by leveraging the lightMEC platform, and assess the performance of the proposed deep learning models (from the first stage) in an experimental setup. The prototype evaluations confirm the simulation results achieved in the first stage of the thesis.
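A minimal sketch of the auto-scaling idea in the first stage is shown below: a regressor maps a sliding window of recent load samples to the number of VNF instances to provision next. The synthetic traffic trace, per-instance capacity and closed-form linear fit are illustrative stand-ins for the MLP/LSTM/CNN-LSTM models actually evaluated in the dissertation.

```python
# Toy auto-scaling regressor on synthetic data; not the dissertation's models.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic traffic load (requests/s) with a periodic pattern plus noise.
t = np.arange(500)
load = 100 + 60 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 5, t.size)
instances_needed = np.ceil(load / 40)          # assumed capacity: 40 req/s per instance

WINDOW = 8
X = np.stack([load[i:i + WINDOW] for i in range(len(load) - WINDOW)])
y = instances_needed[WINDOW:]

# Closed-form least-squares fit (with bias term) as the "model".
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def predict_scale(recent_load):
    """Predict how many VNF instances to provision for the next interval."""
    features = np.append(recent_load[-WINDOW:], 1.0)
    return max(1, int(np.ceil(features @ w)))

print(predict_scale(load[-WINDOW:]))
```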
9. Network Update and Service Chain Management in Software Defined Networks. Chen, Yang (0000-0003-0578-2016), January 2020.
Software Defined Networking (SDN) emerged in recent years to fundamentally change how we design, build and manage networks. To maximize network utilization, its control plane needs to frequently update the data plane via flow migration as network conditions change dynamically, which is known as network update. Network Function Virtualization (NFV) addresses the problems of traditional expensive hardware appliances by leveraging virtualization technology to implement network functions in software modules (middleboxes). These software modules, also called Virtual Network Functions (VNFs), are now commonly provisioned in modern networks, reflecting their increasing importance. The technical combination of SDN and NFV enables network service providers to pick service locations from multiple available servers and steer traffic through the appropriate VNFs, which is known as VNF deployment. A service chain consists of multiple VNFs chained in some order. VNFs are executed on virtualization platforms, which makes them more prone to error compared with dedicated hardware. As a result, one important issue for a service chain is its reliability, i.e., ensuring that each type of VNF in the chain performs its function properly, which is known as service chain resilience.
This dissertation presents our research on the three topics above, with the aim of improving network performance. Details are as follows:
1. Network Update: SDNs constantly need to migrate flows to update the network configuration for better system performance. However, the existing literature does not take flow-path overlapping information into consideration when flows' routes are re-allocated. Consequently, congestion occurs, resulting in deadlocks between flows and link resources, which block the update process and cause severe packet loss. We propose multiple solutions that exploit various kinds of spare (idle) resources in the network.
2. VNF Deployment: We focus on the VNF deployment problem under different settings and constraints, including: (1) network topology; (2) vertex capacity constraints; (3) traffic-changing effects; (4) heterogeneous or homogeneous models for a single VNF kind; (5) dependency relations between VNFs. We efficiently deploy VNF instances while making sure that the processing requirements of all flows are satisfied.
3. Resilient Service Chain Management: One effective way of ensuring VNF robustness is to provision redundancy in the form of backup instances deployed alongside active ones. In order to guarantee service chain reliability, we consider both server resource allocation and VNF backup assignment, aiming to minimize the total cost in terms of transmission delay and rule changes.
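The sketch below illustrates the redundancy idea behind topic 3 with a simple greedy rule: keep adding a backup instance to the VNF whose extra replica most improves end-to-end chain reliability, until a target is reached. The availability figures, independence assumption and stopping criteria are invented for illustration and are not the dissertation's cost model.

```python
# Hypothetical greedy backup assignment for a service chain.
def chain_reliability(availabilities, replicas):
    """Chain works iff every VNF has at least one working instance (independence assumed)."""
    rel = 1.0
    for p, r in zip(availabilities, replicas):
        rel *= 1.0 - (1.0 - p) ** r
    return rel

def assign_backups(availabilities, target=0.999, max_total=20):
    replicas = [1] * len(availabilities)            # start with active instances only
    while chain_reliability(availabilities, replicas) < target and sum(replicas) < max_total:
        base = chain_reliability(availabilities, replicas)
        gains = []
        for i in range(len(replicas)):
            trial = replicas.copy()
            trial[i] += 1                           # try one extra backup for VNF i
            gains.append(chain_reliability(availabilities, trial) - base)
        replicas[gains.index(max(gains))] += 1      # keep the most beneficial backup
    return replicas

# Example: a three-VNF chain (e.g., firewall -> NAT -> IDS) with assumed per-instance availabilities.
print(assign_backups([0.95, 0.99, 0.97]))
```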
10. Design and Performance Evaluation of Resource Allocation Mechanisms in Optical Data Center Networks. Vikrant, Nikam, January 2016.
A datacenter hosts hundreds of thousands of servers, and a huge amount of bandwidth is required to accommodate communication between them. Several packet-switched datacenter architectures have been proposed to meet this high bandwidth requirement using multilayer network topologies, however at the cost of increased network complexity and high power consumption. In recent years, the focus has shifted from packet switching to optical circuit switching for building data center networks, as it can support on-demand connectivity and high bit rates with low power consumption. At the same time, with the advent of Software Defined Networking (SDN) and Network Function Virtualization (NFV), the role of datacenters has become more crucial, increasing the need for dynamism and flexibility within a datacenter and adding more complexity to datacenter networking. With NFV, service chaining can be achieved in a datacenter, where virtualized network functions (VNFs) running on commodity servers are instantiated and terminated dynamically. A datacenter must also provide large capacity, as service chaining involves the steering of large aggregated flows. The use of optical circuit switching in data center networks is therefore quite promising for meeting such dynamic and high-capacity traffic requirements. In this thesis, a novel and modular optical data center network (DCN) architecture that uses multi-directional wavelength switches (MD-WSS) is introduced. A VNF service chaining use case is considered for the evaluation of this DCN, and the end-to-end service chaining problem is formulated as three interconnected sub-problems: multiplexing of VNF service chains, VNF placement in the datacenter, and routing and wavelength assignment. The thesis presents integer linear programming (ILP) formulations and heuristics for solving these problems and evaluates them numerically.
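As a small illustration of the routing-and-wavelength-assignment sub-problem named above, the sketch below performs first-fit wavelength assignment along precomputed paths under the wavelength-continuity constraint. The toy topology, paths and wavelength count are assumptions; the thesis solves this jointly with chain multiplexing and VNF placement via ILP and heuristics.

```python
# Toy first-fit wavelength assignment; topology and demands are invented.
NUM_WAVELENGTHS = 4

def link(u, v):
    """Undirected link identifier, stored in sorted order."""
    return tuple(sorted((u, v)))

def first_fit(path, link_usage):
    """Return the lowest-index wavelength free on every link of the path, or None if blocked."""
    links = [link(a, b) for a, b in zip(path, path[1:])]
    for w in range(NUM_WAVELENGTHS):
        if all(w not in link_usage.setdefault(l, set()) for l in links):
            for l in links:
                link_usage[l].add(w)            # reserve the wavelength on every hop
            return w
    return None  # demand blocked; a fuller solver would try another path or reject

demands = [["ToR1", "WSS1", "ToR3"], ["ToR2", "WSS1", "ToR3"], ["ToR1", "WSS1", "ToR2"]]
usage = {}
for path in demands:
    print(path, "-> wavelength", first_fit(path, usage))
```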