  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A Fog-based Cloud Paradigm for Time-Sensitive Applications

Bhowmick, Satyajit 20 October 2016 (has links)
No description available.
12

The security of big data in fog-enabled IoT applications including blockchain: a survey

Tariq, N., Asim, M., Al-Obeidat, F., Farooqi, M.Z., Baker, T., Hammoudeh, M., Ghafir, Ibrahim 24 January 2020 (has links)
Yes / The proliferation of inter-connected devices in critical industries, such as healthcare and the power grid, is changing the perception of what constitutes critical infrastructure. The rising interconnectedness of new critical industries is driven by the growing demand for seamless access to information as the world becomes more mobile and connected and as the Internet of Things (IoT) grows. Critical industries are essential to the foundation of today's society, and interruption of service in any of these sectors can reverberate through other sectors and even around the globe. In today's hyper-connected world, critical infrastructure is more vulnerable than ever to cyber threats, whether from state-sponsored actors, criminal groups or individuals. As the number of interconnected devices increases, the number of potential access points for hackers to disrupt critical infrastructure grows. This new attack surface emerges from fundamental changes in the technology systems of critical-infrastructure organizations. This paper aims to improve understanding of the challenges of securing future digital infrastructure while it is still evolving. After introducing the infrastructure generating big data, the functionality-based fog architecture is defined. In addition, a comprehensive review of security requirements in fog-enabled IoT systems is presented. Then, an in-depth analysis of fog computing security challenges and of big data privacy and trust concerns in relation to fog-enabled IoT is given. We also discuss blockchain as a key enabler to address many security-related issues in IoT and consider closely the complementary interrelationships between blockchain and fog computing.
In this context, this work formalizes the task of securing big data and its scope, provides a taxonomy to categorize threats to fog-based IoT systems, presents a comprehensive comparison of state-of-the-art contributions in the field according to their security service, and recommends promising research directions for future investigations.
13

Fog Computing for Heterogeneous Multi-Robot Systems With Adaptive Task Allocation

Bhal, Siddharth 21 August 2017 (has links)
The evolution of cloud computing has finally started to affect robotics, with several real-time cloud applications making their way into the field of late. Inherent benefits of cloud robotics include virtually infinite computational power and collaboration among a multitude of connected devices. However, its drawbacks include higher latency and higher overall energy consumption. Moreover, local devices in close proximity incur higher latency when communicating among themselves via the cloud, and the cloud is a single point of failure in the network. Fog computing is an extension of the cloud computing paradigm that provides data, compute, storage and application services to end-users on a so-called edge layer; its distinguishing characteristics are support for mobility and dense geographical distribution. We propose to study the implications of applying fog computing concepts in robotics by developing a middleware solution, a Robotic Fog Computing Cluster, for enabling adaptive distributed computation in heterogeneous multi-robot systems interacting with the Internet of Things (IoT). The developed middleware has a modular plug-in architecture based on micro-services and facilitates communication between IoT devices and multi-robot systems. In addition, it supports different load-balancing and task-allocation algorithms. In particular, we establish that we can enhance the performance of the distributed system, decreasing overall system latency, by combining established multi-criteria decision-making algorithms such as TOPSIS and TODIM with naive Q-learning and with neural-network-based Q-learning. / Master of Science
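The abstract above mentions TOPSIS as one of the multi-criteria decision-making algorithms used for task allocation. As a rough, self-contained sketch of how TOPSIS scores candidate nodes (the criteria, weights, and node values below are invented for illustration and are not taken from the thesis):

```python
# Illustrative TOPSIS scoring for picking a fog/robot node for a task.
# Criteria values and weights are assumptions, not the thesis's data.
import math

def topsis(matrix, weights, benefit):
    """Score alternatives (rows) over criteria (columns).
    benefit[j] is True if higher is better for criterion j."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    # 1. Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    # 2. Ideal best/worst value per criterion.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # 3. Relative closeness to the ideal solution (higher is better).
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Three candidate nodes scored on (CPU capacity, current load, link latency).
nodes = [[2.0, 0.3, 10.0], [1.5, 0.1, 5.0], [3.0, 0.8, 40.0]]
scores = topsis(nodes, weights=[0.4, 0.3, 0.3], benefit=[True, False, False])
print(max(range(len(scores)), key=scores.__getitem__))  # index of preferred node
```

A task allocator would dispatch the task to the highest-scoring node and could re-run the scoring as load and latency measurements change.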
14

Predicting Performance Run-time Metrics in Fog Manufacturing using Multi-task Learning

Nallendran, Vignesh Raja 26 February 2021 (has links)
The integration of Fog-Cloud computing in manufacturing has given rise to a new paradigm called Fog manufacturing. Fog manufacturing is a form of distributed computing platform that integrates a Fog-Cloud collaborative computing strategy to facilitate responsive, scalable, and reliable data analysis in manufacturing networks. The computation services provided by Fog-Cloud computing can effectively support quality prediction, process monitoring, and diagnosis efforts in a timely manner for manufacturing processes. However, the communication and computation resources for Fog-Cloud computing are limited in Fog manufacturing. Therefore, it is important to utilize the computation services effectively, based on optimal computation task offloading, scheduling, and hardware autoscaling strategies, to finish the computation tasks on time without compromising the quality of the computation service. A prerequisite for adopting such optimal strategies is to accurately predict the run-time metrics (e.g., time-latency) of the Fog nodes by capturing their inherent stochastic nature in real-time, because these run-time metrics are directly related to the performance of the computation service in Fog manufacturing. Specifically, since the computation flow and the data querying activities vary between the Fog nodes in practice, the run-time metrics that reflect performance in the Fog nodes are heterogeneous in nature, and this performance cannot be effectively modeled through traditional predictive analysis. In this thesis, a multi-task learning methodology is adopted to predict the run-time metrics that reflect performance in Fog manufacturing by addressing the heterogeneities among the Fog nodes. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models.
The proposed model can be further extended to computation task offloading and architecture optimization in Fog manufacturing to minimize time-latency and improve the robustness of the system. / Master of Science / Smart manufacturing aims at utilizing the Internet of Things (IoT), data analytics, cloud computing, etc. to handle varying market demand without compromising productivity or quality in a manufacturing plant. To support these efforts, Fog manufacturing has been identified as a suitable computing architecture to handle the surge of data generated from IoT devices. In Fog manufacturing, computational tasks are completed locally through interconnected computing devices called Fog nodes. However, the communication and computation resources in Fog manufacturing are limited, so their effective utilization requires optimal strategies to schedule the computational tasks and assign them to the Fog nodes. A prerequisite for adopting such strategies is to accurately predict the performance of the Fog nodes. In this thesis, a multi-task learning methodology is adopted to predict performance in Fog manufacturing. Specifically, since the computation flow and the data querying activities vary between the Fog nodes in practice, the metrics that reflect performance in the Fog nodes are heterogeneous in nature and cannot be effectively modeled through conventional predictive analysis. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models. The results show that the multi-task learning model has better prediction accuracy than the benchmarks and that it can model the heterogeneities among the Fog nodes. The proposed model can further be incorporated into scheduling and assignment strategies to effectively utilize Fog manufacturing's computational services.
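The core idea above, predicting heterogeneous per-node run-time metrics with multi-task learning, can be pictured minimally as a multi-task regression where tasks (Fog nodes) share one parameter but keep node-specific ones. Everything below (model form, synthetic data, training loop) is an illustrative assumption, not the thesis's actual method:

```python
# Toy multi-task linear regression: two "Fog nodes" share a common slope
# (latency vs. load) but have heterogeneous node-specific intercepts.
# All data is synthetic; the true parameters are slope 3.0, biases 2.0 and 5.0.
import random

random.seed(0)
tasks = []
for bias in (2.0, 5.0):  # heterogeneous per-node offsets
    xs = [random.uniform(0, 1) for _ in range(50)]
    ys = [3.0 * x + bias + random.gauss(0, 0.05) for x in xs]
    tasks.append((xs, ys))

w = 0.0            # shared slope, learned jointly across tasks
b = [0.0, 0.0]     # task-specific intercepts, learned per node
lr = 0.1
for _ in range(2000):
    for t, (xs, ys) in enumerate(tasks):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = w * x + b[t] - y
            gw += err * x
            gb += err
        w -= lr * gw / len(xs)       # shared parameter sees every task's data
        b[t] -= lr * gb / len(xs)    # intercept sees only its own node

print(round(w, 1), [round(v, 1) for v in b])
```

The shared slope pools statistical strength across nodes while the per-node intercepts absorb the heterogeneity, which is the general trade-off multi-task learning exploits.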
15

Placement des données de l'internet des objets dans une infrastructure de fog / Placement of internet of things data in a fog infrastructure

Naas, Mohammed Islam 19 February 2019 (has links)
In the coming years, the Internet of Things (IoT) will be one of the applications generating the most data. Nowadays, IoT data is stored in the Cloud. As the number of connected objects increases, transmitting the large amount of produced data to the Cloud will create bottlenecks.
As a result, latencies will be high and unpredictable. In order to reduce these latencies, Fog computing has been proposed as a paradigm extending Cloud services to the edge of the network. It consists of using any equipment located in the network (e.g. routers) to store and process data. The Fog therefore presents a heterogeneous infrastructure: its components differ in computing performance, storage capacity and network interconnections. This heterogeneity can further increase service latency, which raises a problem: a poor choice of data storage locations can increase the latency of the service. In this thesis, we propose a solution to this problem in the form of four contributions: 1. A formulation of the IoT data placement problem in the Fog as a linear program. 2. An exact solution to the data placement problem using CPLEX, a linear programming solver. 3. Two heuristics based on the principle of "divide and conquer" to reduce placement computation time. 4. An experimental platform for testing and evaluating solutions for IoT data placement in the Fog, integrating data placement management with iFogSim, a Fog and IoT environment simulator.
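Contribution 1 above casts IoT data placement as an optimization problem: choose a storage node for each data item so that total read latency is minimized under node capacity constraints. A toy brute-force version (standing in for the thesis's linear-program/CPLEX formulation; all latencies, capacities and names are invented) might look like:

```python
# Exhaustive search over placements of data items onto fog/cloud nodes,
# minimizing total consumer read latency subject to node capacities.
# A real instance would use an LP solver; this only illustrates the objective.
from itertools import product

latency = {  # latency[node][consumer] in ms (illustrative values)
    "gateway": {"app1": 5, "app2": 20},
    "cloud":   {"app1": 50, "app2": 50},
}
capacity = {"gateway": 1, "cloud": 10}    # max items storable per node
items = {"d1": "app1", "d2": "app2"}      # each data item has one consumer

best_cost, best_plan = float("inf"), None
for plan in product(latency, repeat=len(items)):
    assignment = dict(zip(items, plan))
    # skip placements that overload a node
    if any(list(assignment.values()).count(n) > capacity[n] for n in latency):
        continue
    cost = sum(latency[n][items[d]] for d, n in assignment.items())
    if cost < best_cost:
        best_cost, best_plan = cost, assignment

print(best_plan, best_cost)
```

The exponential search space (nodes^items) is exactly why the thesis needs an exact solver and divide-and-conquer heuristics for realistic instance sizes.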
16

Towards interoperable IOT systems with a constraint-aware semantic web of things / Vers une gestion intelligente des données de l'Internet des Objets

Seydoux, Nicolas 16 November 2018 (has links)
This thesis is situated in the Semantic Web of Things (SWoT) domain, at the interface between the Internet of Things (IoT) and the Semantic Web (SW). The integration of SW approaches into the IoT aims to tackle the important heterogeneity of resources, technologies and applications in the IoT, which creates interoperability issues impeding the deployment of IoT systems.
A first scientific challenge arises from the resource consumption of SW technologies, which is ill-suited to the limited computation and communication capabilities of IoT devices. Moreover, IoT networks are deployed at a large scale, while SW technologies have scalability issues. This thesis addresses this double challenge with two contributions. The first is the identification of quality criteria for IoT ontologies, leading to the proposition of IoT-O, a modular IoT ontology. IoT-O is deployed to enrich data from a smart building and to drive semIoTics, our autonomic computing application. The second contribution is EDR (Emergent Distributed Reasoning), a generic approach to dynamically distributing rule-based reasoning. Rules are propagated peer to peer, guided by descriptions exchanged among nodes. EDR is evaluated in two use cases, using a server and constrained nodes to simulate the deployment.
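EDR's rule propagation can be pictured as forwarding a rule hop by hop and deploying it only on nodes whose advertised capabilities satisfy its premises. The sketch below is a loose illustration of that idea only; the node graph, capability sets and rule format are all invented, not EDR's actual protocol:

```python
# Propagate a rule through a small node graph; deploy it wherever the node's
# advertised capabilities cover the rule's premises. Purely illustrative.
rule = {"needs": {"temperature"}, "then": "raise_alert"}

nodes = {
    "sensor-1": {"caps": {"temperature"}, "peers": ["gateway"]},
    "gateway":  {"caps": set(),           "peers": ["cloud"]},
    "cloud":    {"caps": set(),           "peers": []},
}

def propagate(rule, start, nodes):
    """Forward the rule from `start`; return the nodes that can apply it."""
    deployed, frontier, seen = [], [start], set()
    while frontier:
        name = frontier.pop()
        if name in seen:
            continue
        seen.add(name)
        node = nodes[name]
        if rule["needs"] <= node["caps"]:   # premises satisfied locally
            deployed.append(name)
        frontier.extend(node["peers"])      # keep propagating to peers
    return deployed

print(propagate(rule, "sensor-1", nodes))
```

The point of such peer-to-peer distribution is that reasoning happens near the data sources instead of on one central (and possibly overloaded) server.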
17

Un modèle à composant pour la gestion de contextes pervasifs orientés service / A component model for pervasive service oriented context management

Aygalinc, Colin 18 December 2017 (has links)
Pervasive computing promotes environments where a patchwork of heterogeneous and volatile resources is integrated into places of daily life. These hardware and software resources cooperate transparently, through applications, to provide high value-added services adapted to each user and their environment via the notion of context. Pervasive applications are now widely distributed, from distant cloud facilities down to Fog computing gateways or even sensors near the user. Depending on the location, various forms of context are needed by the applications. In this thesis, we focus on the context module at the Fog level. To simplify design and execution, Fog applications are built on top of a service-oriented platform, freeing the developer from technical complexity and providing support for handling dynamism. We propose to extend this approach by providing the context as a set of service descriptions, available at design time to the application developer. At runtime, depending on the availability of context sources and on current application needs, context services are published or withdrawn inside the platform by the context module. We tailor a specific component model to program this context module. The base unit of our component model is called a context entity. It is composed of highly coherent modules, each distinctly implementing one of the service descriptions proposed by the underlying context entity. These modules can simply describe their synchronization logic with context sources thanks to a domain-specific language. At runtime, context entity instances can be introspected and reconfigured.
An external autonomic manager uses these properties to dynamically match the context services exposed by the context module to the application needs. We have developed a reference implementation of our work, called CReAM, which has been used in a smart home gateway called iCASA, developed in partnership with Orange Labs.
18

Evaluating Distributed Machine Learning for Fog Computing IoT scenarios: A Comparison Between Distributed and Cloud-based Training on Tensorflow

El Ghamri, Hassan January 2022 (has links)
Day by day, Internet of Things (IoT) devices are becoming a bigger part of our lives.
Currently, these devices are heavily dependent on cloud computing, which can be a privacy risk. The general aim of this report is to investigate alternatives to cloud computing; a quite fascinating alternative is fog computing. Fog computing is a structure that utilizes the processing power of devices at the edge of the network (local devices) rather than fully relying on cloud computing. A specific case of this structure, distributed machine learning for IoT devices, is further investigated as the main objective of this report. This objective is achieved by answering the questions of what methods/tools are available to accomplish it and how well they function in comparison to cloud computing. There are three main stages in this study. The first stage was information gathering on two levels: first a basic level, exploring the field, and then a more specific one, gathering information about available tools for distributing machine learning and evaluating them. The second stage was implementing tests to verify the performance of each approach/tool chosen from the information gathered. The last stage was to summarize the results and reach conclusions. The study has shown that distributed machine learning is still too immature to replace cloud computing, since the existing tools are not optimized for this use case. The best option for now is to stick to cloud computing, but if somewhat lower performance can be tolerated, some IoT devices are powerful enough to process the machine learning task independently. Distributed machine learning is still quite a new concept, but it is growing fast; hopefully this growth will soon expand to support IoT devices.
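The distributed-training alternative the report evaluates can be illustrated with gradient averaging: each node computes a gradient on its own local data and only model updates cross the network, never the raw samples. This is a simplified sketch on invented data, not one of the tools the report actually tested:

```python
# Two IoT "nodes" each hold local samples of the same line y = 2x + 1.
# Each computes a local gradient; an aggregator averages the gradients and
# updates the shared model. Only (gw, gb) pairs would travel over the network.
node_data = [
    [(0.0, 1.0), (1.0, 3.0)],
    [(2.0, 5.0), (3.0, 7.0)],
]

w, b = 0.0, 0.0
lr = 0.05
for _ in range(3000):
    grads = []
    for data in node_data:                  # computed independently per node
        gw = sum((w * x + b - y) * x for x, y in data) / len(data)
        gb = sum((w * x + b - y) for x, y in data) / len(data)
        grads.append((gw, gb))
    # the aggregator averages per-node gradients (the only traffic needed)
    w -= lr * sum(g[0] for g in grads) / len(grads)
    b -= lr * sum(g[1] for g in grads) / len(grads)

print(round(w, 2), round(b, 2))
```

The privacy appeal noted in the report is visible here: the raw (x, y) samples never leave their node, only aggregated gradients do.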
19

Connected cars : a networking challenge and a computing resource for smart cities / Voitures connectées : un défi de réseautage et une ressource de calcul pour les villes intelligentes

Grassi, Giulio 31 October 2017 (has links)
In recent years we have seen a continuous integration of technology with the urban environment. This fusion aims to improve the efficiency and quality of life in big urban agglomerations while reducing the costs of their management.
Cities are getting "smarter and smarter", with a plethora of IoT devices and sensors deployed all over urban areas. Among those intelligent objects, an important role may be played by cars. Modern vehicles are (or will be) equipped with multiple network interfaces, and they have (or will have) computational capabilities and devices able to sense the environment. However, smart and connected cars represent not only an opportunity but also a challenge: computational capabilities are limited, and mobility and the diversity of network interfaces are obstacles when providing connectivity to the Internet and to other vehicles. When addressing the networking aspect, we believe that a shift in the Internet model is needed, from a host-oriented architecture (IP) to a more content-focused paradigm, the Information Centric Networking (ICN) architectures. This thesis thus analyzes the benefits and challenges of the ICN paradigm, in particular Named Data Networking (NDN), in the VANET domain, presenting the first implementation of NDN for VANET (V-NDN) running on real cars. It then proposes Navigo, an NDN-based forwarding mechanism for content retrieval over V2V and V2I communications, with the goal of efficiently discovering and retrieving data while reducing network overhead. Network mobility is a challenge not only for vehicles but for any connected mobile device. For this reason, this thesis extends its initial area of interest, VANET, and addresses the network mobility problem for generic mobile nodes, proposing an NDN-based solution dubbed MAP-Me. MAP-Me tackles the intra-AS content provider mobility problem without relying on any fixed node in the network; it exploits notification messages at the time of a handover, together with the forwarding plane, to keep the data provider "always" reachable. Finally, the "connected car" concept is not the only novel element in modern vehicles.
Cars will be not only connected but also smart, able to locally process data produced by in-car sensors. Vehicles are perfect candidates to play an important role in the recently proposed Fog computing architecture, which moves computational tasks typical of the cloud away from it and brings them to the edge, closer to where the data is produced. To prove that such a model, with the car as an edge computing node, is already feasible with current technology and not only a vision for the future, this thesis presents ParkMaster. ParkMaster is a fully deployed edge-based system that combines vision and machine learning techniques, the edge (the driver's smartphone) and the cloud to sense the environment and tackle the parking availability problem.
20

An operating system for 5G Edge Clouds / Un système d'exploitation pour 5G Edge Clouds

Manzalini, Antonio 08 July 2016 (has links)
Technology and socio-economic drivers are creating the conditions for a profound transformation of the Telco and ICT industries, called "Softwarization".
Software-Defined Networking and Network Functions Virtualization are two of the key enabling technologies paving the way towards this transformation. Softwarization will make it possible to virtualize all network and service functions of a Telco infrastructure and execute them on software platforms, fully decoupled from the underlying physical infrastructure (mostly based on standard hardware). Services will be provided using a "continuum" of virtual resources (processing, storage and communications), with very limited upfront capital investment and modest operating costs. 5G will be the first exploitation of Softwarization: a massively dense distributed infrastructure integrating processing, storage and (fixed and radio) networking capabilities. In summary, the overall goal of this thesis has been to investigate the technical challenges and business opportunities brought by Softwarization and 5G. In particular, the thesis proposes that 5G will need a sort of Operating System (5G OS) capable of operating the converged fixed, RAN and core infrastructures. The main contributions of this thesis have been: 1) defining a vision for future 5G infrastructures, scenarios, use-cases and main requirements; 2) defining the functional architecture of an Operating System for 5G; 3) designing the software architecture of a 5G OS for the "Edge Cloud"; 4) understanding the techno-economic impacts of the vision and the 5G OS, and the most effective strategies to exploit them.
