  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Komunikace v prostředí tzv. mobile edge-cloud / Communication in mobile edge-cloud environment

Papík, Ondřej January 2018 (has links)
Edge-cloud brings computation power as close to clients as possible, reducing latency and overall computation time in the cloud. Because clients are mobile, tasks must be able to migrate between different servers. The goal of this thesis is to examine the communication problems this raises and to propose a framework architecture. Our framework builds on gRPC and is written as a module for it; it is platform independent, uses reliable communication, and focuses on ease of use. We provide an implementation of the framework together with example uses.
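The migration the abstract describes (a task following a mobile client from one edge server to another) can be pictured with a minimal sketch. This is an illustration, not the thesis's actual gRPC module: the `Task` and `Server` classes and the pickle transport stand in for whatever state encoding and gRPC messages the framework really uses.

```python
import pickle

class Task:
    """A resumable task whose state can be shipped to another server."""
    def __init__(self, total_steps):
        self.total_steps = total_steps
        self.done = 0
        self.result = 0

    def step(self):
        self.result += self.done
        self.done += 1

class Server:
    """Stand-in for one edge node; holds tasks keyed by id."""
    def __init__(self):
        self.tasks = {}

    def migrate_out(self, task_id):
        # Serialize the task state; over the wire this would be a gRPC message.
        return pickle.dumps(self.tasks.pop(task_id))

    def migrate_in(self, task_id, blob):
        self.tasks[task_id] = pickle.loads(blob)

# A task starts on server a, the client moves, and the task follows to b.
a, b = Server(), Server()
a.tasks["t1"] = Task(total_steps=4)
a.tasks["t1"].step()
a.tasks["t1"].step()
blob = a.migrate_out("t1")
b.migrate_in("t1", blob)
b.tasks["t1"].step()       # resumes where it left off
print(b.tasks["t1"].done)  # 3
```

The key property the sketch shows is that the task's progress survives the move: server b continues counting from the state server a serialized.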
2

Comparative Study of REST and gRPC for Microservices in Established Software Architectures

Johansson, Martin, Isabella, Olivos January 2023 (has links)
This study compares two communication styles commonly used in distributed systems, REST and gRPC. As microservices increasingly replace monolithic structures, network performance plays a significantly larger role: companies rely on their users, who demand faster applications. This study aims to determine which of these frameworks achieves lower response times in different scenarios. We performed four tests reflecting real-life scenarios within an established API, together with baseline performance tests. The results indicate that gRPC increasingly outperforms REST as the size of the transmitted data grows. The study provides a brief understanding of how REST compares to newer frameworks and shows that exploring new options is valuable; a more in-depth evaluation is needed to understand the factors influencing performance further.
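One reason larger payloads favor gRPC is its binary Protobuf encoding versus REST's typical JSON, where field names are repeated in every record. The toy comparison below illustrates the effect with a fixed binary layout standing in for Protobuf; the record shape and field names are assumptions, not the study's actual API.

```python
import json
import struct

# One record: (id: uint32, value: float64) — a toy stand-in for the
# study's transmitted objects; field names are illustrative.
records = [(i, i * 0.5) for i in range(1000)]

# REST-style payload: JSON, with field names repeated per record.
json_bytes = json.dumps(
    [{"id": i, "value": v} for i, v in records]
).encode("utf-8")

# gRPC-style payload: a packed binary layout, no per-record field names.
# (Real Protobuf uses varints and field tags, but is similarly compact.)
binary_bytes = b"".join(struct.pack("<Id", i, v) for i, v in records)

print(len(json_bytes), len(binary_bytes))
```

With 1000 records the binary form is 12 bytes per record (4 + 8), while the JSON form carries quotes, braces, and the `"id"`/`"value"` keys for every record, so the gap widens with payload size.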
3

Étude des gerbes hadroniques à l'aide du prototype du calorimètre hadronique semi-digital et comparaison avec les modèles théoriques utilisés dans le logiciel GEANT4 / Hadronic shower study with the semi-digital hadronic calorimeter and comparison with theoretical models used in GEANT4

Steen, Arnaud 26 November 2015 (has links)
The International Linear Collider (ILC) is an electron-positron collider project proposed to become the next particle collider after the Large Hadron Collider (LHC). It will allow detailed study of the new 125 GeV boson discovered in 2012 by the CMS and ATLAS experiments, a particle compatible with the standard-model Higgs boson, and may also allow physicists to uncover new physics. To exploit this new collider, two collaborations are developing two detectors: the International Large Detector (ILD) and the Silicon Detector (SiD). These general-purpose detectors are optimised for particle-flow algorithms; each consists of a central tracker and calorimetry systems inside a superconducting magnet, itself surrounded by a yoke instrumented with muon chambers. The team from Lyon in which I worked during my Ph.D. has contributed extensively to the development of the semi-digital hadronic calorimeter (SDHCAL). This highly granular calorimeter is one option for the ILD hadronic calorimeter. A prototype was built in 2011: roughly 1 m3 in size, it consists of 48 glass resistive plate chambers, contains more than 440,000 readout channels of 1 cm2, and weighs about 10 tonnes. It meets the constraints imposed by the ILC (high granularity, low power consumption, pulsed power supply, etc.) and is regularly tested on beam lines at CERN. The collected data allowed me to study hadronic showers in detail. Methods to reconstruct the energy of hadronic showers precisely have been developed to improve the SDHCAL energy resolution. My main contribution was the simulation of hadronic showers within the SDHCAL: a realistic simulation of the glass resistive plate chambers was developed by studying the prototype's response to muons and electromagnetic showers, after which I compared different hadronic-shower simulation models with experimental data. The SDHCAL granularity also enables fine studies of hadronic-shower topology, such as the lateral and longitudinal shower extent. Building on this simulation work, I finally studied the reconstruction of the W and Z boson masses in a full simulation of the ILD, in order to estimate the performance of the ILD with the SDHCAL and particle-flow techniques.
4

Using Non-Intrusive Instrumentation to Analyze any Distributed Middleware in Real-Time

Lui, Nyalia May 2021 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Dynamic Binary Instrumentation (DBI) is one way to monitor a distributed system in real time without modifying source code. Previous work has shown it is possible to instrument distributed systems using standards-based distributed middleware; existing work, however, only applies to a single middleware, such as CORBA. This thesis therefore presents a tool named the Standards-based Distributed Middleware Monitor (SDMM), which generalizes two modern standards-based distributed middleware, the Data Distribution Service (DDS) and gRemote Procedure Call (gRPC). SDMM uses DBI to extract values and other data relevant to monitoring a distributed system in real time. Using dynamic instrumentation allows SDMM to capture information without a priori knowledge of the distributed system under instrumentation. We applied SDMM to systems created with two DDS vendors, RTI Connext DDS and OpenDDS, as well as to gRPC, which is a complete remote procedure call framework. Our results show that the data collection process contributes less than 2% of the run-time overhead in all test cases.
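The core idea of monitoring without modifying source can be sketched in Python by patching callables at run time. This is only a function-level analogue of what SDMM does (SDMM instruments at the binary level with DBI); the `Monitor` and `Channel` names are illustrative.

```python
import functools
import time

class Monitor:
    """Collects call names and latencies without editing the target code."""
    def __init__(self):
        self.records = []

    def instrument(self, obj, name):
        original = getattr(obj, name)

        @functools.wraps(original)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = original(*args, **kwargs)
            # Record the call after it completes, preserving its return value.
            self.records.append((name, time.perf_counter() - start))
            return result

        setattr(obj, name, wrapper)

# Target "middleware" code, left untouched on disk.
class Channel:
    def send(self, payload):
        return len(payload)

monitor = Monitor()
monitor.instrument(Channel, "send")  # patch at run time, as DBI patches binaries

ch = Channel()
ch.send(b"hello")
ch.send(b"world!")
print(len(monitor.records))  # 2
```

As with SDMM, the monitored code needs no a priori changes: the wrapper is attached after the fact and the instrumented calls behave exactly as before.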
5

Benchmarking and Accelerating TensorFlow-based Deep Learning on Modern HPC Systems

Biswas, Rajarshi 12 October 2018 (has links)
No description available.
6

CoreWCF - en väg till .NET5? / CoreWCF - a path to .NET 5?

Campalto, Anton January 2023 (has links)
Today, many companies are asking whether they should upgrade their applications from Dotnet Framework to Dotnet 6. Saab AB is one of these companies and has therefore chosen to investigate the possibility of migrating its MSS application to Dotnet 6, and to find out how much work such a migration would involve. To migrate an application that uses the obsolete WCF framework, WCF needs to be replaced with either gRPC or CoreWCF. Since MSS uses WCF, this report focuses on the latter replacement, CoreWCF. Based on the results, it is clear that considerable work will be required to move MSS to Dotnet 6. Finally, suggestions for further research in the area are given.
7

Imagerie tomographique d'un volcan à l'aide des muons atmosphériques / Tomographic imaging of volcanoes using atmospheric muons

Béné, Samuel 22 December 2016 (has links)
Atmospheric muons are elementary particles originating from the interaction of high-energy cosmic rays with atoms in the upper atmosphere. Their ability to travel through large amounts of matter and their abundance at ground level allow their flux to be used as a probe for the radiography of large objects. This technique, muography, is of particular interest for the study of volcanoes. The Tomuvol collaboration, within which this thesis took place, aims at developing a detector and analysis techniques allowing such a measurement to be performed, using a volcano from Auvergne as a case study: the Puy de Dôme. This document describes the author's contributions to this work, focusing first on the instrumentation aspect, with the calibration and optimisation of the GRPC chambers used to perform the measurement. The performance of the detector during the various data-acquisition campaigns at the base of the Puy de Dôme is also summed up. A second part is dedicated to the physical analysis of the data: first, the Monte-Carlo simulations developed with the GEANT4 software are described; then, a kernel-based method for estimating the transmitted flux of atmospheric muons is presented, and the resulting estimated density map of the Puy de Dôme is compared with results from geophysical techniques.
8

A Comparison of Pull- and Push- based Network Monitoring Solutions : Examining Bandwidth and System Resource Usage

Pettersson, Erik January 2021 (has links)
Monitoring of computer networks is central to ensuring that they function as intended, and solutions based on SNMP have been used since the inception of the protocol. SNMP is, however, increasingly challenged by solutions that, instead of requiring a request-response message flow, simply send information to a central collector at predefined intervals. These solutions are often based on Protobuf and gRPC, which are supported and promoted by equipment manufacturers such as Cisco, Huawei, and Juniper. Two monitoring models exist. The pull model used by SNMP, where requests are sent out to retrieve data, has historically been widely used; the push model, where data is sent at predefined intervals without a preceding request, is used by the implementations built on Protobuf and gRPC. There is a perceived need to understand which model uses bandwidth and the monitored system's memory and processing resources more efficiently. The purpose of the thesis is to compare two monitoring solutions, one being SNMP and the other based on Protobuf and gRPC, to determine whether one makes more efficient use of bandwidth and of the system resources available to the network equipment. This could aid those who operate networks or develop monitoring software in deciding how to implement their solutions. The study is conducted as a case study in which two routers, manufactured by Cisco and Huawei, were used to gather data about the bandwidth, memory, and CPU utilisation of the two solutions. The measurements show that when retrieving objects with 1-byte values, SNMP performed better. For objects with larger values, SNMP performed best until 26 objects were retrieved per message; above this point the combination of Protobuf and gRPC performed better, resulting in fewer bytes being sent for a given number of objects. No impact on the memory and CPU utilisation of the routers was observed.
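The crossover the study observes (SNMP cheaper for few or small objects, gRPC push cheaper beyond some message size) can be modeled with a toy byte budget: pull pays a request plus a verbose per-object encoding each interval, while push pays a larger fixed framing but a compact per-object encoding. All sizes below are illustrative assumptions, not the thesis's measured values, so the crossover point here differs from the measured one.

```python
# Toy model of one hour of monitoring at 10-second intervals. All byte
# sizes are illustrative assumptions, not measured SNMP or gRPC values.
INTERVALS = 360
PULL_REQUEST = 80     # bytes per SNMP-style request (header + OIDs)
PULL_HEADER = 30      # framing bytes per response
PULL_PER_OBJECT = 25  # OID repeated with each value
PUSH_HEADER = 200     # HTTP/2 + gRPC framing per pushed message
PUSH_PER_OBJECT = 10  # compact Protobuf encoding per object

def pull_bytes(objects):
    # Pull: every interval costs a request out plus a response back.
    return INTERVALS * (PULL_REQUEST + PULL_HEADER + objects * PULL_PER_OBJECT)

def push_bytes(objects):
    # Push: every interval costs only the pushed data message.
    return INTERVALS * (PUSH_HEADER + objects * PUSH_PER_OBJECT)

crossover = next(n for n in range(1, 1000) if push_bytes(n) < pull_bytes(n))
print(crossover)  # 7 — with these assumed sizes, push wins above 7 objects
```

The shape matches the study's finding: the fixed framing makes push more expensive for small messages, while the compact per-object encoding makes it cheaper once enough objects share one message.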
9

Lookaside Load Balancing in a Service Mesh Environment / Extern Lastbalansering i en Service Mesh Miljö

Johansson, Erik January 2020 (has links)
As more online services are migrated from monolithic systems into decoupled distributed microservices, the need for efficient internal load-balancing solutions increases. Today, two main approaches exist for load balancing internal traffic between microservices. One approach uses either a central or a sidecar proxy to load balance queries over all available server endpoints. The other approach lets clients themselves decide which of the available endpoints to send queries to. This study investigates a new approach called lookaside load balancing, which consists of a load balancer that uses the control plane to gather a list of service endpoints and their current load. The load balancer can then dynamically provide clients with a subset of suitable endpoints to connect to directly. The endpoint distribution is controlled by a lookaside load-balancing algorithm; this study presents such an algorithm, which works by changing the endpoint assignment in order to keep the current load between an upper and a lower bound. To compare these three load-balancing approaches, a test environment was constructed in Kubernetes and modeled to resemble a real service mesh. With this environment we performed four experiments. The first aimed at finding suitable settings for the lookaside load-balancing algorithm as well as a baseline load configuration for clients and servers. The second evaluated the underlying network infrastructure to test for possible bias in latency measurements. The final two experiments evaluated each load-balancing approach in both high- and low-load scenarios. The results show that lookaside load balancing can achieve performance similar to client-side load balancing in terms of latency and load distribution, but with a smaller CPU and memory footprint. When load is high and uneven, or when compute resource usage should be minimized, the centralized proxy approach is better. With regard to traffic-flow control and failure resilience, we show that lookaside load balancing is better than client-side load balancing. We conclude that lookaside load balancing can be an alternative to both client-side and proxy load balancing in some scenarios.
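The rebalancing idea the abstract describes — changing endpoint assignments to keep per-endpoint load within bounds — can be sketched as follows. This is a simplified illustration, not the thesis's exact algorithm: load is counted as one unit per assigned client, and only the upper-bound (overload) case is handled.

```python
def rebalance(assignments, load, upper):
    """Move clients off endpoints whose load exceeds the upper bound.

    assignments maps client -> endpoint; load maps endpoint -> load units,
    one unit per assigned client for simplicity. The thesis's algorithm
    also uses a lower bound (e.g. to consolidate idle endpoints); this
    sketch handles only the overload case, and all names are illustrative.
    """
    for client, ep in list(assignments.items()):
        if load[ep] > upper:
            target = min(load, key=load.get)  # least-loaded endpoint
            if target != ep and load[target] < upper:
                load[ep] -= 1
                load[target] += 1
                assignments[client] = target

load = {"s1": 9, "s2": 0, "s3": 0}
assignments = {f"c{i}": "s1" for i in range(9)}  # everything piled on s1
rebalance(assignments, load, upper=4)
print(sorted(load.values()))  # [2, 3, 4]
```

After rebalancing, no endpoint exceeds the upper bound and clients connect directly to their assigned endpoint, which is what lets the lookaside balancer stay off the data path.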
10

Using Non-Intrusive Instrumentation to Analyze any Distributed Middleware in Real-Time

Nyalia James-Korsuk Lui (10686993) 10 May 2021 (has links)
Dynamic Binary Instrumentation (DBI) is one way to monitor a distributed system in real-time without modifying source code. Previous work has shown it is possible to instrument distributed systems using standards-based distributed middleware. Existing work, however, only applies to a single middleware, such as CORBA.

This thesis therefore presents a tool named the Standards-based Distributed Middleware Monitor (SDMM), which generalizes two modern standards-based distributed middleware, the Data Distribution Service (DDS) and gRemote Procedure Call (gRPC). SDMM uses DBI to extract values and other data relevant to monitoring a distributed system in real-time. Using dynamic instrumentation allows SDMM to capture information without a priori knowledge of the distributed system under instrumentation. We applied SDMM to systems created with two DDS vendors, RTI Connext DDS and OpenDDS, as well as gRPC which is a complete remote procedure call framework. Our results show that the data collection process contributes to less than 2% of the run-time overhead in all test cases.
