91.
Towards Unifying Stream Processing over Central and Near-the-Edge Data Centers. Peiro Sajjad, Hooman. January 2016.
In this thesis, our goal is to enable effective and efficient real-time stream processing in a geo-distributed infrastructure by combining the power of central data centers and micro data centers. Our research focuses on the challenges of distributing stream processing applications and placing them closer to data sources and sinks. We enable applications to run in a geo-distributed setting and provide solutions for the network-aware placement of distributed stream processing applications across geo-distributed infrastructures. First, we evaluate Apache Storm, a widely used open-source distributed stream processing system, in the community network Cloud, as an example of a geo-distributed infrastructure. Our evaluation exposes new requirements for stream processing systems to function in a geo-distributed infrastructure. Second, we propose a solution to facilitate the optimal placement of stream processing components on geo-distributed infrastructures. We present a novel method for partitioning a geo-distributed infrastructure into a set of computing clusters, each called a micro data center. According to our results, we can increase the minimum available bandwidth in the network and, likewise, reduce the average latency to less than 50% of its original value. Next, we propose a parallel and distributed graph partitioner, called HoVerCut, for fast partitioning of streaming graphs. Since much data can be represented in the form of a graph, graph partitioning can be used to assign graph elements to different data centers and provide data locality for efficient processing. Last, we provide an approach, called SpanEdge, that enables stream processing systems to work on a geo-distributed infrastructure. SpanEdge unifies stream processing over central and near-the-edge data centers (micro data centers). As a proof of concept, we implement SpanEdge by extending Apache Storm to run across multiple data centers.
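The streaming-graph-partitioning idea behind partitioners such as HoVerCut can be illustrated with a minimal sketch: each incoming edge is greedily assigned to the partition that already holds its endpoints (to keep locality) while penalizing overloaded partitions (to keep balance). The scoring rule below is invented for illustration and is not HoVerCut's actual heuristic.

```python
# Hypothetical sketch of greedy streaming (vertex-cut) graph partitioning,
# in the spirit of partitioners like HoVerCut; not the thesis's algorithm.

def partition_stream(edges, k):
    """Assign each incoming edge to one of k partitions, greedily
    favoring partitions that already hold an endpoint (locality)
    and penalizing heavily loaded partitions (balance)."""
    replicas = {}          # vertex -> set of partitions holding a copy
    load = [0] * k         # edges per partition
    assignment = []
    for u, v in edges:
        best, best_score = 0, float("-inf")
        for p in range(k):
            locality = (p in replicas.get(u, set())) + (p in replicas.get(v, set()))
            score = locality - load[p] / (max(load) + 1)
            if score > best_score:
                best, best_score = p, score
        assignment.append(best)
        load[best] += 1
        replicas.setdefault(u, set()).add(best)
        replicas.setdefault(v, set()).add(best)
    return assignment, load

# Two triangles stream in; the partitioner keeps each one together.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6)]
assignment, load = partition_stream(edges, 2)
print(assignment, load)  # -> [0, 0, 0, 1, 1, 1] [3, 3]
```

In this toy run each triangle lands wholly in one partition, which is exactly the data locality the abstract argues for when mapping graph elements to data centers.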
92.
Software/FPGA Co-design for Edge Computing: Promoting Object-Oriented Design. Le, Xuan Sang. 31 May 2017.
Cloud computing is often the most referenced computational model for the Internet of Things.
This model adopts a centralized architecture where all sensor data is stored and processed in a single location. Despite many advantages, this architecture suffers from low scalability while the data available on the network is continuously increasing. It is worth noting that, already today, more than 50% of internet connections are between things. This can lead to reliability problems in real-time and latency-sensitive applications. Edge computing, which is based on a decentralized architecture, is known as a solution to this emerging problem by: (1) reinforcing the equipment at the edge (things) of the network and (2) pushing data processing to the edge. Edge-centric computing requires sensor nodes with more software capability and processing power while, like any embedded system, being constrained in energy consumption. Hybrid hardware systems consisting of an FPGA and a processor offer a good trade-off for this requirement. FPGAs are known to enable parallel and fast computation within a low energy budget. The coupled processor provides a flexible software environment for edge-centric nodes. Application design for such hybrid network/software/hardware (SW/HW) systems remains a challenging task. It covers a large domain of system-level design, from high-level software to low-level hardware (FPGA). This results in a complex system design flow and involves the use of tools from different engineering domains. A common solution is to propose a heterogeneous design environment that combines and integrates these tools. However, the heterogeneous nature of this approach can compromise system reliability when it comes to data exchanges between tools. Our motivation is to propose a homogeneous design methodology and environment for such systems. We study the application of a modern design methodology, in particular object-oriented design (OOD), to the field of embedded systems. Our choice of OOD is motivated by the proven productivity of this methodology for the development of software systems. In the context of this thesis, we aim at using OOD to develop a homogeneous design environment for edge-centric systems. Our approach addresses three design concerns: (1) hardware design, where object-oriented principles and design patterns are used to improve the reusability, adaptability, and extensibility of the hardware system; (2) hardware/software co-design, for which we propose to use OOD to abstract SW/HW integration and communication, encouraging system modularity and flexibility; and (3) middleware design for edge computing. We rely on a centralized development environment for distributed applications, while the middleware facilitates the integration of peripheral nodes in the network and allows automatic remote reconfiguration. Ultimately, our solution offers software flexibility for the implementation of complex distributed algorithms, complemented by full exploitation of FPGA performance. The FPGAs are placed in the nodes, as close as possible to sensor data acquisition, in order to deploy an effective first stage of intensive processing.
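The co-design concern described above, abstracting SW/HW integration behind object-oriented interfaces, can be sketched as follows. All names here (FilterCore, SoftwareFilter, FpgaFilter) are invented for illustration and are not part of the thesis's framework; the "FPGA" back end merely delegates to a software model, standing in for a real driver.

```python
# Illustrative sketch (not the thesis's framework): object-oriented
# abstraction of SW/HW integration, where software and FPGA back ends
# share one interface so application code stays hardware-agnostic.
from abc import ABC, abstractmethod

class FilterCore(ABC):
    """Common interface for a processing core, hiding whether it runs
    in software or on the FPGA fabric."""
    @abstractmethod
    def process(self, samples): ...

class SoftwareFilter(FilterCore):
    def __init__(self, alpha):
        self.alpha = alpha
        self.state = 0.0
    def process(self, samples):
        out = []
        for x in samples:
            self.state += self.alpha * (x - self.state)  # first-order IIR low-pass
            out.append(self.state)
        return out

class FpgaFilter(FilterCore):
    """Stand-in for a driver that would stream samples to an FPGA IP
    core; here it simply delegates to the software model (a simulation)."""
    def __init__(self, alpha):
        self.model = SoftwareFilter(alpha)
    def process(self, samples):
        return self.model.process(samples)

def run_pipeline(core: FilterCore, samples):
    return core.process(samples)   # application code never changes

print(run_pipeline(SoftwareFilter(0.5), [1.0, 1.0, 1.0]))  # -> [0.5, 0.75, 0.875]
```

Swapping `SoftwareFilter` for `FpgaFilter` leaves `run_pipeline` untouched, which is the modularity the OOD approach aims at.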
93.
Energy-Efficient Detection of Atrial Fibrillation in the Context of Resource-Restrained Devices. Kheffache, Mansour. January 2019.
eHealth is an emerging practice at the intersection of ICT and healthcare, where computing and communication technology is used to improve traditional healthcare processes or create new opportunities to provide better health services; eHealth can be considered under the umbrella of the Internet of Things. A common practice in eHealth is the use of machine learning for computer-aided diagnosis, where an algorithm is fed a biomedical signal to provide a diagnosis, in the same way a trained radiologist would. This work considers the task of atrial fibrillation detection and proposes a range of novel algorithms to achieve energy efficiency. Based on our working hypothesis that computationally simple operations and low-precision data types are key to energy efficiency, we evaluate various algorithms in the context of resource-restrained health-monitoring wearable devices. Finally, we assess the sustainability dimension of the proposed solution.
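The working hypothesis, that simple operations on low-precision data can suffice, can be illustrated with a toy detector: atrial fibrillation is characterized by irregular RR intervals, which an integer-only irregularity score can flag. The rule and threshold below are invented for illustration and are not the thesis's actual algorithms.

```python
# Hypothetical illustration of the working hypothesis (simple ops,
# low-precision integer data): flag irregular RR intervals with
# integer-only arithmetic; not the thesis's actual detector.

def af_suspected(rr_ms, threshold=100):
    """Flag possible atrial fibrillation when the mean absolute
    successive difference of RR intervals (integer milliseconds)
    exceeds a threshold. Uses only add/subtract/compare and one
    integer division, which are cheap on a wearable MCU."""
    diffs = [abs(b - a) for a, b in zip(rr_ms, rr_ms[1:])]
    return sum(diffs) // len(diffs) > threshold

regular = [800, 810, 795, 805, 800]       # steady sinus rhythm
irregular = [620, 940, 710, 1050, 580]    # erratic intervals
print(af_suspected(regular), af_suspected(irregular))  # -> False True
```

The point of the sketch is the cost profile, not clinical accuracy: no floating point, no transcendental functions, and data that fits in 16-bit integers.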
94.
Design and implementation of simulation tools, protocols and architectures to support service platforms on vehicular networks. Báguena Albaladejo, Miguel. 18 July 2017.
Products related to Intelligent Transportation Systems (ITS) are becoming
a reality on our roads.
All car manufacturers are starting to include Internet
access in their vehicles and to integrate smartphones directly from the
dashboard, but more and more services will be introduced in the near future.
Connectivity through "vehicular networks" will become a cornerstone of every
new proposal, and offering an adequate quality of service is obviously desirable.
However, a lot of work is needed for vehicular networks to offer performance
similar to that of wired networks.
Vehicular networks can be characterized by two main features: high variability,
due to mobility levels that can reach up to 250 kilometers per hour,
and heterogeneity, since various competing versions from different vendors
have been and will be released. Therefore, to make the deployment of efficient
services possible, an extensive study must be carried out and adequate tools
must be proposed and developed. This PhD thesis addresses the service deployment
problem in these networks at three different levels: (i) the physical
and link layer, showing an exhaustive analysis of the physical channel and
models; (ii) the network layer, proposing a forwarding protocol for IP packets;
and (iii) the transport layer, where protocols are proposed to improve data
delivery.
First of all, the two main wireless technologies used in vehicular networks
were studied and modeled, namely the 802.11 family of standards, particularly
802.11p, and cellular networks, focusing on LTE. Since 802.11p is a
quite mature standard, we defined (i) a propagation and attenuation model
capable of replicating the transmission range and the fading behavior of real
802.11p devices, both in line-of-sight conditions and when obstructed by small
obstacles, and (ii) a visibility model able to deal with large obstacles, such
as buildings and houses, in a realistic manner.
Additionally, we proposed a
model based on high-level performance indicators (bandwidth and delay) for
LTE, which makes application validation and evaluation easier.
At the network layer, a hybrid protocol called AVE is proposed for packet
forwarding by switching among a set of standard routing strategies. Depending
on the specific scenario, AVE selects one out of four different routing solutions:
a) two-hop direct delivery, b) Dynamic MANET On-demand (DYMO), c)
greedy georouting, and d) store-carry-and-forward technique, to dynamically
adapt its behavior to the specific situation.
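AVE's switching among its four strategies can be sketched as a simple selector driven by local observations. The selection rules below are invented for illustration; the protocol's actual decision logic is in the thesis.

```python
# Hedged sketch of AVE-style strategy switching (the selection rules
# here are invented for illustration, not AVE's actual logic).

def select_strategy(neighbor_count, dest_known, route_stable):
    """Pick one of the four forwarding strategies from simple
    observations of the local scenario."""
    if neighbor_count == 0:
        return "store-carry-and-forward"   # no contacts: buffer the packet
    if dest_known and neighbor_count <= 2:
        return "two-hop direct delivery"   # destination within easy reach
    if route_stable:
        return "DYMO"                      # topology stable enough for routes
    return "greedy georouting"             # otherwise fall back on positions

print(select_strategy(0, True, False))   # isolated vehicle
print(select_strategy(5, False, True))   # dense, stable urban scenario
```

The value of a hybrid scheme is exactly this kind of per-situation dispatch: each strategy is only used in the regime where it performs well.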
At the transport layer, we proposed a content delivery protocol for reliable
and bidirectional unicast communication in lossy links that improves content
delivery in situations where the wireless network is the bottleneck.
It has
been designed, validated, optimized, and its performance has been analyzed
in terms of throughput and resource efficiency.
Finally, at system level, we propose an edge-assisted computing model that
allows reducing the response latency of several queries by placing a computing
unit at the network edge. This way, traffic traversal through the Internet is
avoided when not needed.
This scheme could be used in both 802.11p and
cellular networks, and in this thesis we decided to focus on its evaluation using
LTE networks.
The platform presented in this thesis combines all the individual efforts to
create a single efficient platform. This new environment could be used by any
provider to improve the quality of the user experience obtainable through the
proposed vehicular network-based services. / Báguena Albaladejo, M. (2017). Design and implementation of simulation tools, protocols and architectures to support service platforms on vehicular networks [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/85333
95.
A Low Power AI Inference Accelerator for IoT Edge Computing. Hansson, Olle. January 2021.
This thesis investigates the possibility of porting a neural network model trained and modeled in TensorFlow to a low-power AI inference accelerator for IoT edge computing. A slightly modified LeNet-5 neural network model is presented and implemented such that an input rate of 10 frames per second is possible while consuming 4 mW of power. The system is simulated in software and synthesized using the FreePDK45 technology library. The simulation results show no loss of accuracy, but the synthesis results are less favorable for area and power. The default version of the accelerator uses the single-precision floating-point format, float32, while a modified accelerator using the bfloat16 number representation shows significant improvements in area and power with almost no additional loss of accuracy.
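The float32-to-bfloat16 trade-off mentioned above is easy to see in code: bfloat16 keeps float32's 8 exponent bits but only 7 mantissa bits, so a bfloat16 value is obtained by dropping the low 16 bits of the float32 encoding. The sketch below (a simplified software model, not the accelerator's hardware) shows round-to-nearest-even truncation; NaN and exponent-overflow edge cases are ignored for brevity.

```python
# Software model of float32 -> bfloat16 conversion: keep the sign and
# 8 exponent bits, round the 23-bit mantissa down to 7 bits.
import struct

def to_bfloat16(x):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # float32 bit pattern
    bits += 0x7FFF + ((bits >> 16) & 1)   # round to nearest, ties to even
    bits &= 0xFFFF0000                    # drop the low 16 mantissa bits
    return struct.unpack(">f", struct.pack(">I", bits))[0]

print(to_bfloat16(3.1415926))  # -> 3.140625 (small precision loss, half the bits)
print(to_bfloat16(1.0))        # -> 1.0 (exactly representable values survive)
```

Because the exponent field is unchanged, bfloat16 covers the same dynamic range as float32, which is why networks like LeNet-5 typically lose almost no accuracy while the multiplier area shrinks substantially.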
96.
Edge Processing of Image for UAS Sense and Avoidance. Rave, Christopher J. 26 August 2021.
No description available.
97.
Towards Trustworthy On-Device Computation. Heejin Park (12224933). 20 April 2022.
Driven by breakthroughs in mobile and IoT devices, on-device computation is becoming promising. Meanwhile, there is a growing concern over its security: it faces many threats in the wild while not supervised by security experts, and the computation is highly likely to touch users' privacy-sensitive information. Towards trustworthy on-device computation, we present novel system designs focusing on two key applications: stream analytics, and machine learning training and inference.
First, we introduce Streambox-TZ (SBT), a secure stream analytics engine for ARM-based edge platforms. SBT contributes a data plane that isolates only the analytics' data and computation in a trusted execution environment (TEE). By design, SBT achieves a minimal trusted computing base (TCB) inside the TEE, incurring modest security overhead.
Second, we design a minimal GPU software stack (50 KB), called GPURip. GPURip allows developers to record GPU computation ahead of time, to be replayed later on client devices. In doing so, GPURip excludes the original GPU stack from run time, eliminating its wide attack surface and exploitable vulnerabilities.
Finally, we propose CoDry, a novel approach for a TEE to record GPU computation remotely. CoDry provides online GPU recording in a safe and practical way; it hosts GPU stacks in the cloud that collaboratively perform a dry run with client GPU models. To overcome frequent interactions over a wireless connection, CoDry implements a suite of key optimizations.
98.
Offline Task Scheduling in a Three-layer Edge-Cloud Architecture. Mahjoubi, Ayeh. January 2023.
Internet of Things (IoT) devices are increasingly used everywhere, from the factory to the hospital to the house to the car. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. Thus, many obstacles need to be overcome when offloading tasks to the cloud. In practice, an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices. In this thesis, we model the offloading problem in an edge-cloud infrastructure as a Mixed-Integer Linear Programming (MILP) problem and look for efficient optimization techniques to tackle it, aiming to minimize the total delay of the system after completing all tasks of all services requested by all users. To accomplish this, we use exact approaches such as simplex to find a solution to the MILP problem. Because exact techniques such as simplex require substantial processing resources and a considerable amount of time, we propose several heuristic and meta-heuristic methods, using the simplex results as a benchmark to evaluate them. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results. Meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems. We propose two meta-heuristic approaches, one based on a genetic algorithm and the other on simulated annealing.
Compared to the heuristic algorithms, the genetic algorithm-based method yields a more accurate solution but requires more time and resources to solve the MILP, while the simulated annealing-based method is a better fit for the problem, since it produces more accurate solutions in less time than the genetic-based method.
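The simulated-annealing idea applied to offloading can be sketched in a few lines: assign each task to one of the three tiers (device, edge, cloud) and accept random reassignments that reduce total delay, occasionally accepting worse ones while the temperature is high. The delay matrix, cooling schedule, and move set below are invented for illustration, not the thesis's model.

```python
# Minimal sketch of simulated annealing for task offloading:
# assign tasks to tiers {0: IoT device, 1: edge, 2: cloud} to
# minimize total delay. All numbers are illustrative.
import math, random

def anneal(delays, steps=5000, t0=10.0):
    """delays[i][j] = delay of task i when executed on tier j."""
    random.seed(1)                       # deterministic for the demo
    n = len(delays)
    assign = [0] * n                     # start: everything on-device
    cost = sum(delays[i][assign[i]] for i in range(n))
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9        # linear cooling
        i, j = random.randrange(n), random.randrange(3)
        delta = delays[i][j] - delays[i][assign[i]]
        # Accept improving moves always; worsening moves with prob e^(-delta/t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            assign[i], cost = j, cost + delta
    return assign, cost

delays = [[9, 3, 6], [8, 4, 2], [2, 7, 9]]   # 3 tasks x 3 tiers
assign, cost = anneal(delays)
print(assign, cost)
```

The genetic-algorithm alternative would instead keep a population of such assignment vectors and combine them by crossover and mutation; annealing's single-solution loop is what makes it cheaper per iteration.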
99.
Latency-aware Optimization of the Existing Service Mesh in Edge Computing Environment. Sun, Zhen. January 2019.
Edge computing, as an approach to leveraging computation capabilities located in different places, is widely deployed in industry nowadays. With the development of edge computing, many big companies have moved from the traditional monolithic software architecture to a microservice design. To provide better performance for applications that contain numerous loosely coupled modules deployed among multiple clusters, service routing among those clusters needs to be effective. However, most existing solutions are dedicated to static service routing and load balancing strategies, so application performance cannot be effectively optimized when network conditions change. To address this problem, we propose a dynamic weighted round robin algorithm and implement it on top of the cutting-edge service mesh Istio. The solution is packaged as a Docker image called RoutingAgent, which is simple to deploy and manage. With the RoutingAgent running in the system, the weights of the target routing clusters are dynamically changed based on the detected inter-cluster network latency. Consequently, the client-side request turnaround time decreases. The solution is evaluated in an emulated environment. Compared to Istio without RoutingAgent, the experiment results show that client-side latency can be effectively minimized by the proposed solution in a multi-cluster environment with dynamic network conditions. In addition to minimizing response time, emulation results demonstrate that the loads of all clusters are well balanced.
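The core of a latency-driven dynamic weighted round robin, as described in this entry, is the mapping from measured inter-cluster latencies to routing weights. The inverse-latency rule below is an illustrative sketch, not RoutingAgent's actual formula.

```python
# Hedged sketch: recompute weighted-round-robin weights from measured
# inter-cluster latencies (inverse-latency weighting), so traffic shifts
# away from clusters that became slow. Illustrative, not RoutingAgent's
# exact algorithm.

def latency_to_weights(latency_ms, total=100):
    """Give each cluster a routing weight proportional to 1/latency,
    normalized to integers summing to `total` (service meshes such as
    Istio accept percentage-style destination weights)."""
    inv = [1.0 / l for l in latency_ms]
    s = sum(inv)
    weights = [round(total * x / s) for x in inv]
    weights[0] += total - sum(weights)   # absorb rounding drift
    return weights

# Cluster latencies of 10, 20 and 40 ms: the fastest cluster gets
# the largest share of requests.
print(latency_to_weights([10, 20, 40]))  # -> [57, 29, 14]
```

Re-running this whenever a latency probe fires is what turns a static round-robin policy into the dynamic one the thesis evaluates.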
100.
Sensor data computation in a heavy vehicle environment: An Edge computation approach. Vadivelu, Somasundaram. January 2018.
In a heavy vehicle, the internet connection is not reliable, primarily because the truck often travels to remote locations where a network might not be available. The data generated by the sensors in a vehicle might not reach the internet when the connection is poor, so it is appropriate to store and do some basic computation on that data in the heavy vehicle itself and send it to the cloud when there is a good network connection. The process of doing computation near the place where data is generated is called edge computing. Scania has its own edge computation solution, which it uses for computations such as preprocessing and storing of sensor data. Scania's solution is compared with a commercial edge computing platform, AWS (Amazon Web Services) Greengrass, in terms of data efficiency, CPU load, and memory footprint. The conclusion shows that the Greengrass solution works better than the current Scania solution in terms of CPU load and memory footprint. In data efficiency the Scania solution is currently more efficient, but as the data sizes handled by the truck grow, the Greengrass solution may prove competitive. One more topic explored in this thesis is the digital twin. A digital twin is the virtual form of a physical entity; it can be formed by obtaining real-time values from sensors attached to the physical device. With the help of the sensor values, a system approximating the state of the device can be framed, which can then act as the digital twin. The digital twin can be considered an important use case of edge computing. Here, the digital twin is realized with the help of the AWS Device Shadow.