61 |
Scaled: Scalable Federated Learning via Distributed Hash Table Based Overlays. Kim, Taehwan, 14 April 2022 (has links)
In recent years, Internet-of-Things (IoT) devices have generated large amounts of personal data.
However, due to privacy concerns, collecting private data in cloud centers for training Machine Learning (ML) models has become unrealistic. To address this problem, Federated Learning (FL) was proposed. Yet the central bottleneck has become a severe concern, since the central node in traditional FL is responsible for the communication and aggregation of millions of edge devices. In this paper, we propose Scaled (Scalable Federated Learning via Distributed Hash Table Based Overlays) to conduct multiple concurrently running FL-based applications over edge networks. Specifically, Scaled adopts a fully decentralized multiple-master and multiple-slave architecture by exploiting Distributed Hash Table (DHT) based overlay networks. Moreover, Scaled improves scalability and adaptability by involving all edge nodes in training, aggregating, and forwarding. Overall, we make the following contributions in the paper. First, we investigate the existing FL frameworks and discuss their drawbacks. Second, we improve on the centralized master-slave architecture of existing FL frameworks by using DHT-based Peer-to-Peer (P2P) overlay networks. Third, we implement a subscription-based, application-level hierarchical forest for FL training.
Finally, we demonstrate Scaled's scalability and adaptability through large-scale experiments. / Master of Science / In recent years, Internet-of-Things (IoT) devices have generated large amounts of personal data.
However, due to privacy concerns, collecting private data on central servers for training Machine Learning (ML) models has become unrealistic. To address this problem, Federated Learning (FL) was proposed. In traditional ML, data from edge devices (e.g., phones) must be collected at a central server to start model training. In FL, training results, instead of the data, are collected to perform training. The benefit of FL is that private data can never be leaked during training. However, there is a major problem in traditional FL:
a single point of failure. When the central server loses power or is disconnected from the system, all data is lost. To address this problem, Scaled:
Scalable Federated Learning via Distributed Hash Table Based Overlays is proposed. Instead of having one powerful main server, Scaled launches many different servers to distribute the workload. Moreover, since Scaled is able to build and manage multiple trees at the same time, it allows multi-model training.
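To make the overlay idea concrete, the following sketch shows how edge nodes could be mapped onto a hash ring and grouped under aggregator masters, with a FedAvg-style weighted average at each master. This is an illustrative sketch written for this summary only; the ring size, node names, and the fedavg helper are assumptions and are not taken from the thesis.

```python
# Illustrative sketch only: consistent hashing assigns edge nodes to aggregator
# ("master") nodes, and each master averages model updates FedAvg-style.
import hashlib
from collections import defaultdict

def ring_position(key: str, ring_bits: int = 32) -> int:
    """Map a node id onto a hash ring, as a DHT such as Chord would."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (2 ** ring_bits)

def assign_to_masters(edge_nodes, masters):
    """Each edge node is served by the first master clockwise on the ring."""
    ring = sorted((ring_position(m), m) for m in masters)
    groups = defaultdict(list)
    for node in edge_nodes:
        pos = ring_position(node)
        owner = next((m for p, m in ring if p >= pos), ring[0][1])  # wrap around
        groups[owner].append(node)
    return groups

def fedavg(updates):
    """Weighted average of model vectors given as [(num_samples, [w1, w2, ...]), ...]."""
    total = sum(n for n, _ in updates)
    dim = len(updates[0][1])
    return [sum(n * w[i] for n, w in updates) / total for i in range(dim)]

if __name__ == "__main__":
    groups = assign_to_masters([f"edge-{i}" for i in range(8)],
                               ["master-a", "master-b", "master-c"])
    print(groups)
    print(fedavg([(100, [0.1, 0.2]), (300, [0.3, 0.4])]))
```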
|
62 |
Edge computing-based access network selection for heterogeneous wireless networks / Sélection de réseau d'accès basée sur le Edge Computing pour des réseaux sans fil hétérogènes. Li, Yue, 29 September 2017 (has links)
Over the past decades, mobile telecommunication networks have evolved from 1G to 4G. 4G allows the coexistence of different access networks, so users can connect to a heterogeneous network composed of several access networks. However, selecting the appropriate network is not an easy task for mobile users, since the conditions of each access network change rapidly. Moreover, in terms of usage, video streaming is becoming the main data service over mobile networks, which leads content providers and network operators to cooperate to guarantee delivery quality. In this context, the thesis proposes a novel approach for making optimal network selection decisions and an architecture that improves the performance of adaptive streaming services in a heterogeneous network. First, we introduce an analytical model describing the network selection procedure, initially considering only a single traffic class. We then design a selection strategy based on foundations from linear optimal control theory. MATLAB simulations are carried out to validate the efficiency of the proposed mechanism. On the same principle, we extend this model into a general analytical model describing network selection procedures in heterogeneous network environments with multiple traffic classes. The proposed model is then used to derive an adaptive mechanism based on control theory, which not only helps to steer traffic dynamically towards the most appropriate network access but also blocks residual traffic dynamically when the network is congested, by adjusting the optimal access probabilities. We also discuss the advantages of a transparent integration of the proposed mechanism with ANDSF, the standardized functional solution for network selection. A prototype is also implemented in ns-3. Second, we focus on improving DASH performance for mobile users in a 4G-only access network environment. We introduce a new architecture based on servers distributed at the network edge following the MEC standard. The proposed adaptation mechanism, running as a MEC service, can modify manifest files in real time in response to network congestion and to the dynamic demand for streaming flows. These modifications lead clients to select more appropriate bitrate/quality video representations. We developed a virtualized testbed to experiment with our proposal. The results obtained demonstrate its QoE benefits compared with traditional, purely client-driven adaptation approaches, since our approach improves not only the MOS but also fairness under congestion. Finally, we extend the proposed MEC-based architecture to support the DASH adaptive streaming service in a multi-access heterogeneous network, in order to maximize the QoE and fairness of mobile users. In this scenario, our mechanism must help users select both the video quality and the network, and we formulate this as an optimization problem.
This optimization problem can be solved with the IBM CPLEX tool, but doing so is time-consuming and cannot be considered at large scale. Consequently, we introduce a heuristic to approach the optimal solution with less complexity. We then run an experiment on our testbed. The results show that, compared with the IBM CPLEX tool, our algorithm achieves similar performance in overall QoE and fairness with significant time savings. / Telecommunication networks have evolved from 1G to 4G over the past decades. One of the typical characteristics of a 4G network is the coexistence of heterogeneous radio access technologies, which offers end users the capability to connect to them, and to switch between them, with their new-generation mobile devices. However, selecting the right network is not an easy task for mobile users, since access network conditions change rapidly. Moreover, video streaming is becoming the major data service over the mobile network, and content providers and network operators should cooperate to guarantee the quality of video delivery. To cope with this context, the thesis concerns the design of a novel approach for making an optimal network selection decision and an architecture for improving the performance of adaptive streaming in the context of a heterogeneous network. Firstly, we introduce an analytical model (i.e. a linear discrete-time system) to describe the network selection procedure considering one traffic class. Then, we design a selection strategy based on foundations from linear optimal control theory, with the objective of maximizing network resource utilization while meeting the constraints of the supported services. Computer simulations in MATLAB are carried out to validate the efficiency of the proposed mechanism. Based on the same principle, we extend this model into a general analytical model describing the network selection procedures in heterogeneous network environments with multiple traffic classes. The proposed model is then used to derive a scalable mechanism based on control theory, which not only assists in dynamically steering the traffic to the most appropriate network access but also helps block the residual traffic dynamically when the network is congested, by adjusting the access probabilities. We discuss the advantages of a seamless integration with ANDSF. A prototype is also implemented in ns-3. Simulation results show that the proposed scheme prevents network congestion and demonstrate the effectiveness of the controller design, which can maximize the allocation of network resources by converging the network workload to the targeted network occupancy. Thereafter, we focus on enhancing the performance of DASH in a mobile network environment for users with a single access network. We introduce a novel architecture based on MEC. The proposed adaptation mechanism, running as a MEC service, can modify the manifest files in real time, responding to network congestion and dynamic demand, thus driving clients towards selecting more appropriate quality/bitrate video representations. We developed a virtualized testbed to run experiments with the proposed scheme. The results demonstrate its QoE benefits compared to traditional, purely client-driven bitrate adaptation approaches, since our scheme notably improves both the achieved MOS and fairness in the face of congestion.
Finally, we extend the proposed MEC-based architecture to support the DASH service in a multi-access heterogeneous network in order to maximize the QoE and fairness of mobile users. In this scenario, our scheme should help users select both the video quality and the access network, and we formulate this as an optimization problem. The optimization problem can be solved by the IBM CPLEX tool; however, this is time-consuming and does not scale. Therefore, we introduce a heuristic algorithm that finds a near-optimal solution with less complexity. We then implement a testbed to conduct the experiment, and the results demonstrate that our algorithm achieves performance similar to the IBM CPLEX tool in overall QoE and fairness while saving considerable computation time.
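As a rough illustration of how a low-complexity heuristic for joint video-quality and access-network selection could look, the sketch below greedily grants the (network, bitrate) upgrade with the best QoE gain per unit of consumed capacity. The capacities, bitrate ladder, and logarithmic QoE model are hypothetical placeholders, not the thesis's actual formulation or algorithm.

```python
# Illustrative sketch only: greedy assignment of users to an (access network, bitrate) pair.
import math

NETWORKS = {"lte": 40.0, "wifi": 20.0}   # residual capacity in Mbit/s (assumed)
BITRATES = [1.0, 2.5, 5.0, 8.0]           # available DASH representations in Mbit/s (assumed)

def qoe(bitrate_mbps: float) -> float:
    """Toy logarithmic QoE model: diminishing returns at higher bitrates."""
    return math.log(1.0 + bitrate_mbps)

def greedy_assign(users):
    """Repeatedly apply the upgrade with the best QoE gain per Mbit/s of consumed capacity."""
    capacity = dict(NETWORKS)
    assignment = {u: (None, 0.0) for u in users}  # user -> (network, bitrate)
    improved = True
    while improved:
        improved = False
        best = None  # (gain per Mbit/s, user, network, bitrate)
        for u, (net, rate) in assignment.items():
            for n, cap in capacity.items():
                for r in BITRATES:
                    extra = r - (rate if n == net else 0.0)
                    if r > rate and extra <= cap:
                        gain = (qoe(r) - qoe(rate)) / extra
                        if best is None or gain > best[0]:
                            best = (gain, u, n, r)
        if best:
            _, u, n, r = best
            old_net, old_rate = assignment[u]
            if old_net is not None:
                capacity[old_net] += old_rate  # release previously held capacity
            capacity[n] -= r
            assignment[u] = (n, r)
            improved = True
    return assignment

if __name__ == "__main__":
    print(greedy_assign([f"user-{i}" for i in range(10)]))
```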
|
63 |
Semantic Driven Approach for Rapid Application Development in Industrial Internet of Things. Thuluva, Aparna Saisree, 13 May 2022 (has links)
The evolution of IoT has revolutionized industrial automation. Industrial devices at every level, such as field devices, control devices, and enterprise-level devices, are connected to the Internet, where they can be accessed easily. This has significantly changed the way applications are developed for industrial automation systems and has led to a paradigm shift in which novel IoT application development tools such as Node-RED can be used to develop complex industrial applications as IoT orchestrations. In the current state, however, these applications are bound strictly to devices from specific vendors and ecosystems. They cannot be reused with devices from other vendors and platforms, since the applications are not semantically interoperable. It is therefore desirable to use platform-independent, vendor-neutral application templates for common automation tasks. In the current state of Node-RED, however, such reusable and interoperable application templates cannot be developed. The interoperability problem at the data level can be addressed in IoT using Semantic Web (SW) technologies, but for an industrial engineer or an IoT application developer, SW technologies are not easy to use. To enable the efficient use of SW technologies for creating interoperable IoT applications, novel IoT tools are required. For this purpose, in this paper we propose a novel semantic extension to the widely used Node-RED tool by introducing semantic definitions, such as iot.schema.org semantic models, into Node-RED. The tool guides a non-expert in semantic technologies, such as a device vendor or a machine builder, in configuring the semantics of a device consistently. Moreover, it enables an engineer or IoT application developer to design and develop semantically interoperable IoT applications with minimal effort. Our approach accelerates the application development process by introducing novel semantic application templates, called Recipes, in Node-RED. Using Recipes, complex application development tasks such as skill matching between Recipes and existing things can be automated. We present an approach to perform automated skill matching on the Cloud or on the Edge of an automation system. We performed a quantitative and qualitative evaluation of our approach to test its feasibility and scalability in real-world scenarios. The results of the evaluation are presented and discussed in the paper. / The evolution of the Internet of Things (IoT) has revolutionized industrial automation. Industrial devices at all levels, such as field devices, control devices, and enterprise-level devices, are connected to the Internet and can therefore be accessed easily. This has markedly changed the way applications are developed on industrial automation systems and has led to a paradigm shift in which novel IoT application development tools such as Node-RED can be used to develop complex industrial applications as IoT orchestrations. At present, however, these applications are bound exclusively to devices from specific vendors and ecosystems. They cannot be connected to devices from other vendors and platforms, since the applications are not semantically interoperable. It is therefore desirable to use platform-independent, vendor-neutral application templates for general automation tasks.
In the current state of Node-RED, however, such reusable and interoperable application templates cannot be developed. These interoperability problems at the data level can be resolved in the IoT using Semantic Web (SW) technologies, but for engineers or developers of IoT applications, SW technologies are not easy to use. Novel IoT tools are therefore required for creating interoperable IoT applications. To this end, we propose a novel semantic extension of the widely used Node-RED tool by introducing semantic definitions such as iot.schema.org semantic models into Node-RED. The tool guides a device vendor or machine builder who is not an expert in semantic technologies in configuring the semantics of a device consistently. Moreover, it also enables an engineer or IoT application developer to design and develop semantically interoperable IoT applications with minimal effort. Our approach accelerates the application development process by introducing novel semantic application templates, called Recipes, for Node-RED. Using Recipes, complex application development tasks such as matching capabilities between Recipes and existing things can be automated. We demonstrate skill matching in the Cloud or at the industrial Edge of an automation system. We carried out a quantitative and qualitative evaluation of our approach to test its feasibility and scalability in real-world scenarios. The results of the evaluation are presented and discussed in this work.
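The following sketch illustrates, in simplified form, what skill matching between a Recipe and available things can mean in practice: each ingredient of a Recipe declares required capabilities, and a thing matches when its offered capabilities cover them. The capability names only mimic iot.schema.org-style interaction patterns and are assumptions for illustration, not the thesis's data model.

```python
# Illustrative sketch only: match the capabilities ("skills") required by a Recipe
# against the capabilities offered by available things.
RECIPE = {
    "ingredients": {
        "conveyor": {"PropertyReadable:Speed", "ActionInvokable:Start"},
        "camera":   {"EventSubscribable:ObjectDetected"},
    }
}

THINGS = {
    "belt-01":  {"PropertyReadable:Speed", "ActionInvokable:Start", "ActionInvokable:Stop"},
    "cam-edge": {"EventSubscribable:ObjectDetected"},
    "plc-07":   {"PropertyReadable:Temperature"},
}

def match_recipe(recipe, things):
    """Return, for every ingredient, the things whose offered skills cover the required ones."""
    matches = {}
    for ingredient, required in recipe["ingredients"].items():
        matches[ingredient] = [tid for tid, offered in things.items() if required <= offered]
    return matches

if __name__ == "__main__":
    for ingredient, candidates in match_recipe(RECIPE, THINGS).items():
        print(f"{ingredient}: {candidates or 'no match'}")
```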
|
64 |
Network Slicing to Enhance Edge Computing for Automated Warehouse / Network Slicing för att förbättra Edge Computing för Automated Warehouse. Wei, Xiaoyi, January 2022 (has links)
In previous work, a distributed safety framework supported by edge computing was developed to enable the real-time response of robots that collaborate with humans in a Human-Robot Collaboration (HRC) scenario. However, as the number of robots in the automated warehouse increases, the network becomes more prone to congestion. A network infrastructure that can fulfill the automated warehouse's needs is therefore desired. This work develops network slicing technology on the aforementioned network infrastructure and investigates its application in the automated warehouse scenario. The goal is to improve the performance of the network through network slicing, so that it can provide differentiated services to devices in the automated warehouse based on their needs, allowing network resources to be allocated more efficiently. With network optimization, low-latency and high-reliability communication for the robots can be achieved in the automated warehouse. In the experiments, the performance of network slicing was compared to a scenario without this technology. Specifically, in the standard Wireless Fidelity (Wi-Fi) network scenario without network slicing, all devices and robots are connected to one channel to send data to the Multi-access Edge Computing (MEC) server. For the network with slicing, we divide it into three slices based on different use cases: computers, Internet of Things (IoT) devices, and robots. Slices are created by defining multiple Service Set Identifiers (SSIDs) on a single Access Point (AP). Our results show that network slicing technology can significantly improve network performance in the automated warehouse. The network with slicing is superior to the one without slicing in terms of latency at different levels of network load, which is reduced by up to 53.6%. The throughput is also increased by up to 33.5% compared to the network without slicing. Meanwhile, the network with slicing maintains a relatively low error probability across all flows, with a median value of 0%. This shows that network slicing technology is beneficial for the automated warehouse network. / The concept of Human-Robot Collaboration (HRC) has become common in modern industry. Previous work presented a safety framework equipped with a Multi-access Edge Computing (MEC) server to provide sufficient resources to the robots operating in the automated warehouse under an HRC scenario. As the number of robots in the automated warehouse grows, the network becomes a bottleneck. A long-term, modern, and robust network for automated warehouses is therefore desirable, so that it can adapt to possible future needs. This project investigates the implementation of network slicing in an automated warehouse with an HRC scenario. The goal is to improve the performance of the network by slicing it so that it can provide differentiated services to the devices in the automated warehouse based on their needs, allowing network resources to be allocated more efficiently. With network optimization, low-latency, high-reliability robot communication can be achieved in the automated warehouse. We carried out experiments with two scenarios: a standard Wireless Fidelity (Wi-Fi) network and a Wi-Fi network with network slicing. In the standard Wi-Fi scenario, all devices and robots are connected to one channel to send data to the MEC server.
For the sliced network, we divide it into three slices based on different use cases: computers, Internet of Things (IoT) devices, and robots. The slices are created by defining several Service Set Identifiers (SSIDs) on a single Access Point (AP). Our results show that network slicing can significantly improve network performance in the automated warehouse. The sliced network is superior to the network without slicing in terms of latency at different levels of network load, which is reduced by up to 53.6%. The sliced network can also maintain a relatively low error probability to ensure network quality while providing high throughput. This shows that network slicing technology is beneficial for the automated warehouse network.
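To illustrate how slices can be realised as separate SSIDs on one access point, the sketch below generates a hostapd-style configuration with one BSS per slice (computers, IoT devices, robots). The interface names, SSIDs, and configuration options are assumptions made for illustration; the thesis's actual setup may differ.

```python
# Illustrative sketch only: emit a multi-BSS access point configuration where each
# warehouse slice gets its own SSID on a single physical AP.
SLICES = [
    {"name": "warehouse-computers", "iface": "wlan0"},
    {"name": "warehouse-iot",       "iface": "wlan0_1"},
    {"name": "warehouse-robots",    "iface": "wlan0_2"},
]

def build_config(slices, channel=36):
    """The first slice uses the physical interface; the remaining slices are added
    as additional BSS sections, each with its own SSID."""
    first, rest = slices[0], slices[1:]
    lines = [
        f"interface={first['iface']}",
        "driver=nl80211",
        "hw_mode=a",
        f"channel={channel}",
        f"ssid={first['name']}",
    ]
    for s in rest:
        lines += [f"bss={s['iface']}", f"ssid={s['name']}"]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(build_config(SLICES))
```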
|
65 |
An analysis of 5G orchestration : Defining the role of software orchestrators in 5G networks, and building a method to compare implementations of 5G orchestrators / En analys av 5G orkestrering : Hur orkestreringsprogramvaror används i 5G nätverk, och ett sätt att jämföra varianter av orkestreringsprogramvaror. Lex-Hammarskjöld, Justin, January 2021 (has links)
Software orchestrators like Kubernetes are growing in popularity with computer engineers for deploying and running complex software systems. Interestingly, new technical standards are now being proposed for the telecom industry to begin utilizing software orchestration for the software that runs inside cellular networks. The telecom industry is currently transitioning from 4G to 5G technology, and one of the central pieces of this development work is implementing a software orchestrator for 5G networks. This raises some questions about how and why the telecom industry will use software orchestration in its cellular networks. Software orchestration is a complex technology, and it is challenging to develop an implementation of a software orchestrator. Some important questions that this thesis addresses are: What do network operators need from this technology? And, given that telecom vendors like Ericsson and Huawei have developed their own versions of a 5G software orchestrator, which orchestrator should network operators choose? We investigate what 5G is and why the telecom industry is developing software orchestrators for the 5G roll-out, and, importantly, we determine the design requirements that the telecom industry has for these "5G orchestration systems". We interpret and break down technical whitepapers from the industry, and we build a picture of the IT stack of upcoming 5G networks. In our research, we find that software orchestration is being used to deploy and maintain complex software stacks such as the software-defined networking (SDN) systems that are central to 5G networks. We uncover some of the specializations needed in a software orchestrator for the telecom industry, such as modularity, high availability, and specialized system integration. With this information, we make feature and design recommendations for 5G orchestrators, and we compile a list of criteria that network operators can use to assess and compare different 5G orchestrators. / Software orchestrators such as Kubernetes are growing in popularity among IT engineers for deploying and running complex software systems. Owing to the ongoing transition from 4G to 5G, software orchestrators are now also used in mobile networks. This thesis investigates what 5G is, why the telecom industry uses software orchestrators for the new 5G networks, and what requirements the telecom industry has for these "5G orchestrators". The investigation is carried out through a literature study. It shows that software orchestrators are used to deploy and run the complex software systems that are central to 5G networks. Specialization requirements for software orchestrators in the telecom industry are identified, such as modularity, high availability, and specialized API hooks. Recommendations are made for the features of 5G orchestrators, and a list of criteria is compiled that network operators can use to assess and compare 5G orchestrators.
|
66 |
Candidate generation for relocation of black box applications in mobile edge computing environments / Kandidat generering för omlokalisering av applikationer i mobile edge computing-miljöer. Walden, Love, January 2022 (has links)
Applications today are generally deployed in public cloud environments such as Azure, AWS, etc. Mobile edge computing (MEC) enables these applications to be relocated to edge nodes located in close proximity to the end user, thereby allowing the application to serve the user at lower latency. However, these edge nodes have limited capacity, and hence the problem arises of when to relocate an application to an edge node. This thesis project tackles the problem of detecting when an application's quality of experience is degraded and of how to use this information to generate candidates for relocation to edge nodes. The assumption of this thesis project is that there is no insight into the application itself, meaning the applications are treated as black boxes. To detect quality of experience degradation, we chose to capture network packets and inspect protocol-level information. We chose WebRTC and HTTP as communication protocols because they were the most common protocols used in the target environment. We developed two application prototypes: the first was a rudimentary server based on HTTP, and the second was a video streaming application based on WebRTC. The prototypes were used to study the possibility of breaking down latency components and obtaining quality-of-service parameters. We then developed a recommendation engine that uses this information to generate relocation candidates. The recommendation engine was evaluated by placing the WebRTC prototype in scenarios that affect quality of experience and measuring the time taken to generate a relocation candidate for the application. The results of this project show that it is possible in some cases to break down latency components for HTTP-based applications. For WebRTC-based applications, however, our approach was not sufficient to break down latency components; instead, we had to rely on quality-of-service parameters to generate relocation candidates. Based on the outcomes of the project, we conclude that detecting quality of experience degradation for black-box applications leads to three generalizations. First, the underlying transport and communication protocol affects the available approaches and the obtainable information. Second, the implementation of the communication protocol also affects the obtainable information. Lastly, the underlying infrastructure can matter for the approaches used in this project. / Applications today are generally deployed in public cloud environments such as Azure, AWS, etc. Mobile edge computing (MEC) allows these applications to be moved to edge nodes located close to the end user, which lets the application offer the user lower latency. These edge nodes, however, have limited capacity, so the problem arises of when an application should be moved to an edge node. This thesis project attempts to tackle the problem of detecting when an application's quality of experience degrades, and how to use this information to generate candidates for relocation to edge nodes. The assumption of the thesis is that there is no insight into the application itself, meaning applications are treated as black boxes. To detect quality of experience degradation, we chose to capture network packets and inspect protocol-level information. We chose WebRTC and HTTP as communication protocols because they were the most common protocols in the target environment. We developed two application prototypes.
The first prototype was a rudimentary server based on HTTP, and the second was a video streaming application based on WebRTC. The prototypes were used to study the possibility of breaking down latency components and obtaining quality-of-service parameters. We then developed a recommendation engine that uses this information to generate relocation candidates. The recommendation engine was evaluated by placing the WebRTC prototype in scenarios that affect quality of experience and measuring the time taken to generate a relocation candidate for the application. The results of the project show that it is in some cases possible to break down latency components for HTTP-based applications. For WebRTC-based applications, however, our approach was not sufficient to break down latency components; instead, we had to rely on quality-of-service parameters to generate relocation candidates. Based on the outcomes of the project, we conclude that detecting quality of experience degradation for black-box applications leads to three generalizations. First, the underlying transport and communication protocol affects the available approaches and the obtainable information. Second, the implementation of the communication protocol also affects the obtainable information. Finally, the underlying infrastructure can matter for the approaches used in this project.
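The sketch below illustrates one simple way a recommendation engine could turn protocol-level measurements into relocation candidates: a rolling window of round-trip-time and loss samples is kept per application, and a candidate is flagged when most recent samples exceed a threshold. The thresholds, window size, and field names are assumptions for illustration, not the thesis's implementation.

```python
# Illustrative sketch only: rolling-window QoS monitor that flags an application
# as a relocation candidate when latency or loss stays above a threshold.
from collections import deque
from statistics import median

class RelocationMonitor:
    def __init__(self, window=20, rtt_ms_limit=150.0, loss_limit=0.05, min_bad=15):
        self.rtt = deque(maxlen=window)
        self.loss = deque(maxlen=window)
        self.rtt_ms_limit = rtt_ms_limit
        self.loss_limit = loss_limit
        self.min_bad = min_bad

    def observe(self, rtt_ms: float, loss_fraction: float) -> None:
        """Feed one QoS sample extracted from captured packets (e.g. RTCP-style stats)."""
        self.rtt.append(rtt_ms)
        self.loss.append(loss_fraction)

    def relocation_candidate(self) -> bool:
        """True when most of the recent samples exceed either threshold."""
        if len(self.rtt) < self.rtt.maxlen:
            return False
        bad = sum(1 for r, l in zip(self.rtt, self.loss)
                  if r > self.rtt_ms_limit or l > self.loss_limit)
        return bad >= self.min_bad

if __name__ == "__main__":
    monitor = RelocationMonitor()
    for i in range(25):
        monitor.observe(rtt_ms=60.0 + 10.0 * i, loss_fraction=0.01)  # latency creeping up
        if monitor.relocation_candidate():
            print(f"sample {i}: flag application as relocation candidate "
                  f"(median RTT {median(monitor.rtt):.0f} ms)")
            break
```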
|
67 |
Anomaly Detection in Industrial Networks using a Resource-Constrained Edge Device. Eliasson, Anton, January 2019 (has links)
The detection of false data-injection attacks in industrial networks is a growing challenge in industry because it requires knowledge of application- and protocol-specific behaviors. Profinet is a common communication standard currently used in industry and has the potential to encounter this type of attack. This motivates an examination of whether a solution based on machine learning, with a focus on anomaly detection, can be implemented and used to detect abnormal data in Profinet packets. Previous work has investigated this topic; however, no solution is available on the market yet. Any solution that aims to be adopted by industry requires detecting abnormal data at the application level and running the analytics on a resource-constrained device. This thesis presents an implementation that aims to detect abnormal data in Profinet packets, represented as online data streams generated in real time. The implemented unsupervised learning approach is validated on data from a simulated industrial use-case scenario. The results indicate that the method manages to detect all abnormal behaviors in an industrial network.
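As an illustration of the kind of lightweight, unsupervised detector that fits a resource-constrained edge device, the sketch below keeps running statistics per payload field (Welford's algorithm) and flags values far outside the learned range. The field layout and threshold are assumptions made for illustration; this is not the thesis's actual model.

```python
# Illustrative sketch only: online z-score anomaly detection over one payload field.
class OnlineZScoreDetector:
    def __init__(self, threshold=4.0, warmup=100):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.threshold = threshold
        self.warmup = warmup

    def update(self, x: float) -> bool:
        """Return True if x looks anomalous, then fold it into the running statistics."""
        anomalous = False
        if self.n >= self.warmup:
            std = (self.m2 / self.n) ** 0.5 or 1e-9
            anomalous = abs(x - self.mean) / std > self.threshold
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

if __name__ == "__main__":
    import random
    detector = OnlineZScoreDetector()
    stream = [random.gauss(50.0, 2.0) for _ in range(500)] + [250.0]  # injected false value
    flags = [detector.update(v) for v in stream]
    print("anomalies at indices:", [i for i, f in enumerate(flags) if f])
```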
|
68 |
Towards Unifying Stream Processing over Central and Near-the-Edge Data Centers. Peiro Sajjad, Hooman, January 2016 (has links)
In this thesis, our goal is to enable effective and efficient real-time stream processing in a geo-distributed infrastructure by combining the power of central data centers and micro data centers. Our research focus is to address the challenges of distributing stream processing applications and placing them closer to data sources and sinks. We enable applications to run in a geo-distributed setting and provide solutions for the network-aware placement of distributed stream processing applications across geo-distributed infrastructures. First, we evaluate Apache Storm, a widely used open-source distributed stream processing system, in the community network Cloud as an example of a geo-distributed infrastructure. Our evaluation exposes new requirements for stream processing systems to function in a geo-distributed infrastructure. Second, we propose a solution to facilitate the optimal placement of stream processing components on geo-distributed infrastructures. We present a novel method for partitioning a geo-distributed infrastructure into a set of computing clusters, each called a micro data center. According to our results, we can increase the minimum available bandwidth in the network and, likewise, reduce the average latency to less than 50% of its original value. Next, we propose a parallel and distributed graph partitioner, called HoVerCut, for fast partitioning of streaming graphs. Since much data can be represented in the form of a graph, graph partitioning can be used to assign graph elements to different data centers and thereby provide data locality for efficient processing. Last, we provide an approach, called SpanEdge, that enables stream processing systems to work on a geo-distributed infrastructure. SpanEdge unifies stream processing over the central and near-the-edge data centers (micro data centers). As a proof of concept, we implement SpanEdge by extending Apache Storm, enabling it to run across multiple data centers.
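To give a flavour of streaming graph partitioning in this setting, the sketch below shows a simple greedy edge partitioner that assigns each incoming edge to one of k partitions (for example, micro data centers), preferring partitions that already host one of the edge's endpoints and breaking ties by load. It is an illustrative sketch only and is not HoVerCut or SpanEdge.

```python
# Illustrative sketch only: greedy streaming edge partitioner with a locality-first heuristic.
from collections import defaultdict

def greedy_partition(edge_stream, k):
    load = [0] * k
    replicas = defaultdict(set)          # vertex -> set of partitions holding a copy
    assignment = []
    for u, v in edge_stream:
        def score(p):
            locality = (p in replicas[u]) + (p in replicas[v])
            return (locality, -load[p])  # prefer locality, then the least-loaded partition
        best = max(range(k), key=score)
        replicas[u].add(best)
        replicas[v].add(best)
        load[best] += 1
        assignment.append(((u, v), best))
    return assignment, load

if __name__ == "__main__":
    edges = [("a", "b"), ("b", "c"), ("c", "a"), ("d", "e"), ("e", "f"), ("a", "d")]
    assignment, load = greedy_partition(edges, k=2)
    for edge, part in assignment:
        print(edge, "-> micro-dc", part)
    print("load per micro data center:", load)
```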
|
69 |
Co-conception Logiciel/FPGA pour Edge-computing : promotion de la conception orientée objet / Software/FPGA co-design for Edge-computing : Promoting object-oriented design. Le, Xuan Sang, 31 May 2017 (has links)
L’informatique en nuage (cloud computing) est souvent le modèle de calcul le plus référencé pour l’internet des objets (Internet of Things).Ce modèle adopte une architecture où toutes les données de capteur sont stockées et traitées de façon centralisée. Malgré de nombreux avantages, cette architecture souffre d’une faible évolutivité alors même que les données disponibles sur le réseau sont en constante augmentation. Il est à noter que, déjà actuellement, plus de50 % des connexions sur Internet sont inter objets. Cela peut engendrer un problème de fiabilité dans les applications temps réel. Le calcul en périphérie (Edge computing) qui est basé sur une architecture décentralisée, est connue comme une solution pour ce problème émergent en : (1) renforçant l’équipement au bord du réseau et (2) poussant le traitement des données vers le bord.Le calcul en périphérie nécessite des noeuds de capteurs dotés d’une plus grande capacité logicielle et d’une plus grande puissance de traitement, bien que contraints en consommation d’énergie. Les systèmes matériels hybrides constitués de FPGAs et de processeurs offrent un bon compromis pour cette exigence. Les FPGAs sont connus pour permettre des calculs exhibant un parallélisme spatial, aussi que pour leur rapidité, tout en respectant un budget énergétique limité. Coupler un processeur au FPGA pour former un noeud garantit de disposer d’un environnement logiciel flexible pour ce nœud.La conception d’applications pour ce type de systèmes hybrides (réseau/logiciel/matériel) reste toujours une tâche difficile. Elle couvre un vaste domaine d’expertise allant du logiciel de haut niveau au matériel de bas niveau (FPGA). Il en résulte un flux de conception de système complexe, qui implique l’utilisation d’outils issus de différents domaines d’ingénierie. Une solution commune est de proposer un environnement de conception hétérogène qui combine/intègre l’ensemble de ces outils. Cependant, l’hétérogénéité intrinsèque de cette approche peut compromettre la fiabilité du système lors des échanges de données entre les outils.L’objectif de ce travail est de proposer une méthodologie et un environnement de conception homogène pour un tel système. Cela repose sur l’application d’une méthodologie de conception moderne, en particulier la conception orientée objet (OOD), au domaine des systèmes embarqués. Notre choix de OOD est motivé par la productivité avérée de cette méthodologie pour le développement des systèmes logiciels. Dans le cadre de cette thèse, nous visons à utiliser OOD pour développer un environnement de conception homogène pour les systèmes de type Edge Computing. Notre approche aborde trois problèmes de conception: (1) la conception matérielle, où les principes orientés objet et les patrons de conception sont utilisés pour améliorer la réutilisation, l’adaptabilité et l’extensibilité du système matériel. (2) la co-conception matériel/logiciel, pour laquelle nous proposons une utilisation de OOD afin d’abstraire l’intégration et la communication entre matériel et logiciel, ce qui encourage la modularité et la flexibilité du système. (3) la conception d’un intergiciel pour l’Edge Computing. Ainsi il est possible de reposer sur un environnement de développement centralisé des applications distribuées† tandis ce que l’intergiciel facilite l’intégration des nœuds périphériques dans le réseau, et en permet la reconfiguration automatique à distance. 
Ultimately, our solution offers software flexibility for implementing complex distributed algorithms and allows the full exploitation of FPGA performance. The FPGAs are placed in the nodes, as close as possible to the sensors' data acquisition, to deploy an efficient first stage of intensive processing. / Cloud computing is often the most referenced computational model for the Internet of Things. This model adopts a centralized architecture where all sensor data is stored and processed in a single location. Despite its many advantages, this architecture suffers from low scalability while the amount of data available on the network continuously increases. It is worth noting that, currently, more than 50% of Internet connections are between things. This can lead to reliability problems in real-time and latency-sensitive applications. Edge computing, which is based on a decentralized architecture, is known as a solution to this emerging problem by (1) reinforcing the equipment at the edge (things) of the network and (2) pushing the data processing to the edge. Edge-centric computing requires sensor nodes with more software capability and processing power while, like any embedded system, being constrained by energy consumption. Hybrid hardware systems consisting of an FPGA and a processor offer a good trade-off for this requirement. FPGAs are known to enable parallel and fast computation within a low energy budget, and the coupled processor provides a flexible software environment for edge-centric nodes. Application design for such a hybrid network/software/hardware (SW/HW) system remains a challenging task. It covers a large domain of system-level design, from high-level software to low-level hardware (FPGA). This results in a complex system design flow and involves the use of tools from different engineering domains. A common solution is to propose a heterogeneous design environment that combines and integrates these tools. However, the heterogeneous nature of this approach can pose reliability problems when it comes to data exchanges between tools. Our motivation is to propose a homogeneous design methodology and environment for such a system. We study the application of a modern design methodology, in particular object-oriented design (OOD), to the field of embedded systems. Our choice of OOD is motivated by the proven productivity of this methodology for the development of software systems. In the context of this thesis, we aim at using OOD to develop a homogeneous design environment for edge-centric systems. Our approach addresses three design concerns: (1) hardware design, where object-oriented principles and design patterns are used to improve the reusability, adaptability, and extensibility of the hardware system; (2) hardware/software co-design, for which we propose to use OOD to abstract the SW/HW integration and communication, which encourages system modularity and flexibility; (3) middleware design for Edge Computing. We rely on a centralized development environment for distributed applications, while the middleware facilitates the integration of the peripheral nodes in the network and allows automatic remote reconfiguration. Ultimately, our solution offers software flexibility for the implementation of complex distributed algorithms, complemented by the full exploitation of FPGA performance. The FPGAs are placed in the nodes, as close as possible to the data acquisition by the sensors, in order to deploy an effective first stage of intensive processing.
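The sketch below illustrates the object-oriented abstraction idea in miniature: the application programs against a common interface, while concrete classes decide whether a processing step runs in software or would be dispatched to an FPGA accelerator. The class names, device path, and filter example are assumptions made for illustration and do not reflect the thesis's actual design environment.

```python
# Illustrative sketch only: an OOD abstraction that hides whether a processing step
# executes on the CPU or on an FPGA accelerator.
from abc import ABC, abstractmethod

class Filter(ABC):
    """Common interface seen by the application, regardless of the implementation."""
    @abstractmethod
    def process(self, samples: list[float]) -> list[float]: ...

class SoftwareFilter(Filter):
    def __init__(self, coeffs):
        self.coeffs = coeffs

    def process(self, samples):
        # Plain convolution executed on the CPU as the software fallback.
        n = len(self.coeffs)
        return [sum(c * samples[i - j] for j, c in enumerate(self.coeffs))
                for i in range(n - 1, len(samples))]

class FpgaFilter(Filter):
    def __init__(self, device="/dev/uio0"):
        self.device = device  # hypothetical memory-mapped accelerator

    def process(self, samples):
        # Placeholder: a real implementation would write samples to the accelerator's
        # memory-mapped registers and read the filtered output back.
        raise NotImplementedError("requires the target FPGA design")

def run(filter_impl: Filter, samples):
    return filter_impl.process(samples)

if __name__ == "__main__":
    print(run(SoftwareFilter([0.25, 0.5, 0.25]), [1.0, 2.0, 3.0, 4.0]))
```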
|
70 |
Energy-Efficient Detection of Atrial Fibrillation in the Context of Resource-Restrained Devices. Kheffache, Mansour, January 2019 (has links)
eHealth is a recently emerging practice at the intersection of the ICT and healthcare fields, where computing and communication technology is used to improve traditional healthcare processes or to create new opportunities to provide better health services; eHealth can be considered under the umbrella of the Internet of Things. A common practice in eHealth is the use of machine learning for computer-aided diagnosis, where an algorithm is fed a biomedical signal and provides a diagnosis in the same way a trained radiologist would. This work considers the task of Atrial Fibrillation detection and proposes a range of novel algorithms to achieve energy efficiency. Based on our working hypothesis that computationally simple operations and low-precision data types are key to energy efficiency, we evaluate various algorithms in the context of resource-restrained health-monitoring wearable devices. Finally, we assess the sustainability dimension of the proposed solution.
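As an example of the kind of computationally simple, low-precision operation the working hypothesis points to, the sketch below screens for atrial fibrillation from RR-interval irregularity using only integer arithmetic on millisecond values. The window size and threshold are assumptions made for illustration, not the thesis's algorithms.

```python
# Illustrative sketch only: integer-arithmetic AF screen based on RR-interval irregularity.
def rr_irregularity_permille(rr_ms):
    """Mean absolute successive RR difference, scaled by the mean RR (per mille)."""
    diffs = [abs(b - a) for a, b in zip(rr_ms, rr_ms[1:])]
    mean_rr = sum(rr_ms) // len(rr_ms)
    return (1000 * sum(diffs) // len(diffs)) // mean_rr

def flag_af(rr_ms, threshold_permille=150):
    return rr_irregularity_permille(rr_ms) > threshold_permille

if __name__ == "__main__":
    regular = [800, 810, 805, 795, 800, 805, 798, 802]      # sinus-rhythm-like intervals
    irregular = [620, 940, 710, 1050, 660, 880, 730, 990]   # AF-like variability
    print("regular window flagged:", flag_af(regular))      # expected: False
    print("irregular window flagged:", flag_af(irregular))  # expected: True
```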
|