121

Distributed Communication for Streetlight Systems : A decentralized solution / Distributerad kommunication för gatlyktesystem : En decentraliserad lösning

Wallin, Fredrik January 2016 (has links)
Streetlights are usually lit during all dark hours even when no vehicles or other objects are using the road. Instead of wasting energy on keeping the streetlights lit when the road is empty, the streetlights should be lit whenever vehicles are in their proximity and turned off otherwise. A distributed network can be used to handle the communication between streetlights for sharing information about nearby vehicles. Streetlight systems exist that adapt to the environment and handle communication, but they are still not optimized for country roads with a low frequency of vehicles. Therefore, distributed communication for streetlight systems is implemented by letting the streetlights be part of a distributed system. Each streetlight is represented by a Zolertia RE-Mote, a sensor for detecting objects, and an LED. These streetlight nodes are wirelessly connected as a mesh network in which they can communicate with each other and forward data packets to nodes farther away in the network. The concept of placing streetlights in a distributed system is believed to work and can be considered for application to streetlights on country roads to save energy.
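The core idea above — lights that switch on when a vehicle is detected and warn their neighbours ahead — can be sketched as a toy simulation. This is a hypothetical illustration of the concept, not the thesis's Zolertia/Contiki implementation; the class and method names (`Streetlight`, `notify`) are invented.

```python
# Hedged sketch: each streetlight node turns on when its own sensor fires or
# when a neighbour forwards a detection message, so only lights near a vehicle
# are lit. Names and the hop-limited flooding scheme are illustrative.

class Streetlight:
    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbours = []   # adjacent nodes in the mesh
        self.lit = False

    def detect_vehicle(self, hops=2):
        """Sensor fired: light up and ask `hops` lights ahead to pre-light."""
        self.lit = True
        for n in self.neighbours:
            n.notify(hops - 1)

    def notify(self, hops):
        self.lit = True
        if hops > 0:
            for n in self.neighbours:
                if not n.lit:          # avoid re-flooding already-lit nodes
                    n.notify(hops - 1)

def build_road(n):
    """A line of streetlights along a country road, linked as a simple mesh."""
    lights = [Streetlight(i) for i in range(n)]
    for a, b in zip(lights, lights[1:]):
        a.neighbours.append(b)
        b.neighbours.append(a)
    return lights

lights = build_road(6)
lights[0].detect_vehicle(hops=2)       # vehicle enters at one end
print([l.lit for l in lights])         # only the first few lights are lit
```

Only the first three of six lights come on; the rest of the road stays dark, which is exactly the energy saving the thesis targets for low-traffic roads.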
122

Data Processing and Collection in Distributed Systems

Andersson, Sara January 2021 (has links)
Distributed systems can be seen in a variety of applications in use today. Tritech provides several systems that to some extent consist of distributed systems of nodes. These nodes collect data, and the data have to be processed. A problem that often appears when designing these systems is deciding where the data should be processed, i.e., which architecture is the most suitable one for the system. Deciding the architecture for these systems is not simple, especially since the answer changes rather quickly with the development in these areas. The thesis aims to study which factors affect the choice of architecture in a distributed system and how these factors relate to each other. To be able to analyze which factors affect the choice of architecture, and to what extent, a simulator was implemented. The simulator received information about the factors as input and returned one or several architecture configurations as output. The input factors to the simulator were chosen by performing qualitative interviews. The factors analyzed in the thesis were: security, storage, working memory, size of data, number of nodes, data processing per data set, robust communication, battery consumption, and cost. From the qualitative interviews as well as from the prestudy, five architecture configurations were chosen: thin-client server, thick-client server, three-tier client-server, peer-to-peer, and cloud computing. The simulator was validated on the three given use cases: agriculture, the train industry, and the industrial Internet of Things. The validation consisted of five existing projects from Tritech, and the simulator produced correct results for three of the five. From the simulator results it could be seen which factors affect the choice of architecture more than others and which are hard to provide in the same architecture because they conflict. The conflicting factors were security together with working memory and robust communication; working memory together with battery consumption also proved hard to provide within the same architecture. Therefore, according to the simulator, the factors that most affect the choice of architecture were working memory, battery consumption, security, and robust communication. Using the simulator results, a decision matrix was designed to facilitate the choice of architecture. The evaluation of the decision matrix consisted of four projects from Tritech covering the three given use cases, and it showed that of the two architectures receiving the most points in each case, one was the architecture used in the validated project.
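A decision matrix of the kind described can be sketched as a weighted scoring function: each architecture is scored against the project's factor requirements and the candidates are ranked. The weights and per-architecture capability scores below are invented for illustration; the thesis derives its values from interviews and simulation.

```python
# Hedged sketch of a decision matrix for architecture choice. A project weights
# each factor by importance; each architecture is rated on how well it provides
# that factor; the weighted sum ranks the candidates. All numbers are invented.

REQUIREMENTS = {"security": 3, "battery": 1, "working_memory": 2}  # weights

ARCHITECTURES = {
    "thin-client server":  {"security": 3, "battery": 3, "working_memory": 3},
    "thick-client server": {"security": 2, "battery": 1, "working_memory": 1},
    "peer-to-peer":        {"security": 1, "battery": 2, "working_memory": 2},
}

def rank(requirements, architectures):
    """Return (architecture, score) pairs sorted from best to worst."""
    scores = {
        name: sum(requirements[f] * caps[f] for f in requirements)
        for name, caps in architectures.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(REQUIREMENTS, ARCHITECTURES):
    print(f"{name}: {score}")
```

With these invented numbers the thin-client server wins (score 18), reflecting the abstract's observation that memory-hungry, security-sensitive workloads push processing away from the nodes themselves.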
123

Secured trust and reputation system : analysis of malicious behaviors and optimization / Gestion de la confiance et de la réputation sécurisée : analyse des attaques possibles et optimisation

Bradai, Amira 29 September 2014 (has links)
Reputation mechanisms offer a novel and effective way of ensuring the level of trust essential to the functioning of any critical system. They collect information about the history (i.e., past transactions) of participants and make their reputation public. Prospective participants guide their decisions by considering reputation information, and thus make more informed choices. Online reputation mechanisms enjoy huge success: they are present in most e-commerce sites available today and are taken seriously by human users. Existing reputation systems were conceived with the assumption that users share feedback honestly, but such systems, like those in peer-to-peer networks, are generally compromised by malicious users. This leads to problems in cooperation, aggregation, and evaluation. Some users want to use resources from the network but do not want to contribute back; others manipulate trust evaluations and provide wrong estimates. We have recently seen increasing evidence that some users strategically manipulate their reports and behave maliciously. To properly protect against those users, a reputation management system is required. In some systems a trusted third entity exists and can aggregate the information; peer-to-peer networks, however, have no central control or repository, and the large size of distributed and hybrid networks makes reputation management an even more challenging task. Hence the reputation management system should perform all its tasks in a distributed fashion. When these kinds of systems are implemented, peers try to deceive them to gain maximum advantage. This thesis describes ways of making reputation mechanisms more trustworthy and optimized by providing defense mechanisms and analysis. Different kinds of malicious behaviors exist, and for each one we present a complete analysis, simulations, and a real use-case example, in both distributed and non-distributed settings.
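One standard defense against dishonest raters, of the general kind this line of work studies, is robust aggregation: discard extreme ratings before averaging so a few malicious reports cannot drag a reputation score arbitrarily far. The sketch below uses a trimmed mean; it illustrates the principle, not the specific mechanism proposed in the thesis, and the rating values are invented.

```python
# Hedged sketch: aggregate peer ratings with a trimmed mean so a few dishonest
# raters have limited influence on the final reputation score.

def trimmed_mean(ratings, trim=0.2):
    """Drop the lowest and highest `trim` fraction of ratings, then average."""
    r = sorted(ratings)
    k = int(len(r) * trim)
    core = r[k:len(r) - k] if k else r
    return sum(core) / len(core)

honest = [0.8, 0.9, 0.85, 0.82, 0.88]
attack = honest + [0.0, 0.0]           # two malicious raters report zero
print(round(trimmed_mean(attack), 2))  # stays close to the honest consensus
```

A plain mean over the attacked ratings drops to about 0.61, while the trimmed mean stays at 0.67 — still biased, but noticeably closer to the honest consensus of 0.85, and the gap shrinks as the honest majority grows.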
124

The Architecture of Blockchain System across the Manufacturing Supply Chain

Lu, Zheyi January 2018 (has links)
With the increasing popularity of blockchain, the technology underlying cryptocurrencies, the decentralized potential of the blockchain technique is driving a new wave across the manufacturing industry. This paper introduces how to use the blockchain technique as a tool for solving supply-chain-related tasks in the manufacturing industry, driving quantum leaps in efficiency, agility, and innovation compared with traditional centralized management systems. The paper introduces the blockchain technique with its value properties and the manufacturing industry's requirements for it. It also presents a clear blockchain architecture based on the manufacturing supply chain management mechanism, describing its characteristics, consensus algorithms, smart contracts, network, scalability, and databases. The paper also presents a practical supply-chain DApp built on this architecture.
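The core property blockchain brings to supply-chain records is tamper evidence: each block commits to its predecessor by hash, so altering an earlier record breaks the chain. The minimal sketch below shows just that property; a real platform adds consensus, smart contracts, and a peer-to-peer network on top, and the record fields here are invented.

```python
import hashlib
import json

# Hedged sketch of a minimal hash chain for supply-chain events. Only the
# tamper-evidence property is shown; everything else a blockchain platform
# provides (consensus, contracts, networking) is out of scope.

def block_hash(block):
    body = json.dumps({"prev": block["prev"], "payload": block["payload"]},
                      sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def make_block(prev_hash, payload):
    block = {"prev": prev_hash, "payload": payload}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    """Check every stored hash and every link to the previous block."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):
            return False
        if i and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, {"event": "part manufactured", "id": "A-1"})
chain = [genesis, make_block(genesis["hash"], {"event": "shipped", "id": "A-1"})]
print(verify(chain))                    # True
chain[0]["payload"]["id"] = "A-2"       # tamper with an earlier record
print(verify(chain))                    # False: the stored hash no longer matches
```

Because each block's hash covers its predecessor's hash, rewriting any historical event would require recomputing every later block — which is exactly what consensus among independent supply-chain participants is meant to prevent.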
125

SPOONS: Netflix Outage Detection Using Microtext Classification

Augustine, Eriq A 01 March 2013 (has links) (PDF)
Every week there are over a billion new posts to Twitter services, and many of those messages contain feedback to companies about their services. One company that recognizes this unused source of information is Netflix. That is why Netflix initiated the development of a system that lets them respond to the millions of Twitter and Netflix users who are acting as sensors and reporting all types of user-visible outages. This system enhances the feedback loop between Netflix and its customers by increasing the amount of customer feedback that Netflix receives and reducing the time it takes for Netflix to receive the reports and respond to them. The goal of the SPOONS (Swift Perceptions of Online Negative Situations) system is to use Twitter posts to determine when Netflix users are reporting a problem with any of the Netflix services. This work covers the architecture of the SPOONS system and framework as well as outage detection using tweet classification.
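At its simplest, the classification step amounts to deciding whether a tweet reads like an outage report. The toy keyword filter below conveys the idea only; SPOONS itself uses trained classifiers over tweet text, volume, and sentiment, and the keyword list and sample tweets here are invented.

```python
# Hedged sketch: a toy keyword-based filter flagging tweets that likely report
# a service outage. A real system like SPOONS trains classifiers instead of
# hard-coding terms; these keywords are purely illustrative.

OUTAGE_TERMS = {"down", "outage", "error", "not working", "broken"}

def looks_like_outage_report(tweet):
    text = tweet.lower()
    return any(term in text for term in OUTAGE_TERMS)

tweets = [
    "Netflix is down again, can't stream anything",
    "Just finished a great show on Netflix",
    "Getting an error every time I open the app",
]
reports = [t for t in tweets if looks_like_outage_report(t)]
print(len(reports))  # 2
```

A keyword filter like this has obvious failure modes ("that plot twist broke me"), which is precisely why outage detection benefits from classifiers trained on labeled tweets rather than fixed term lists.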
126

DISTRIBUTED ARCHITECTURE FOR A GLOBAL TT&C NETWORK

Martin, Fredric W. October 1994 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / Use of top-down design principles and standard interface techniques provides the basis for a global telemetry data collection, analysis, and satellite control network with a high degree of survivability via a distributed architecture. Use of Commercial Off-The-Shelf (COTS) hardware and software minimizes costs and provides for easy expansion and adaptation to new satellite constellations. Adaptive techniques and low-cost multiplexers provide for graceful system-wide degradation and flexible data distribution.
127

"Índices de carga e desempenho em ambientes paralelos/distribuídos - modelagem e métricas" / Load and Performance Index for Parallel/Distributed System - Modelling and Metrics

Branco, Kalinka Regina Lucas Jaquie Castelo 15 December 2004 (has links)
This thesis approaches the problem of evaluating an adequate load index, or performance index, for use in process scheduling in heterogeneous parallel/distributed computing systems. A wide literature review with the corresponding critical analysis is presented. This review is the basis for the comparison of the existing metrics for evaluating the heterogeneity/homogeneity degree of computing systems. A new metric is proposed in this work, removing the restrictions identified during the comparative study. Results from the application of the new metric are presented and discussed. This thesis also proposes the concept of temporal heterogeneity/homogeneity, which can be used for future improvements in scheduling policies for parallel/distributed heterogeneous computing platforms. A new performance index (Vector for Index of Performance - VIP), generalizing the concept of a load index, is proposed based on a Euclidean metric. This new index is applied to the implementation of a scheduling policy and widely tested through modeling and simulation. The results obtained are presented and statistically analyzed. It is shown that the new index achieves good results in general, and a mapping is presented showing the advantages and disadvantages of its adoption compared with traditional metrics.
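The idea of a vector-valued index reduced through a Euclidean metric can be sketched briefly: represent each machine's load as a vector of resource metrics and use the Euclidean distance to an idle reference point as a single scalar for scheduling. The metric names and values below are illustrative assumptions, not the thesis's actual VIP definition.

```python
import math

# Hedged sketch of a Euclidean performance index over a load vector. A smaller
# distance from the idle point means a less loaded machine. Metric names and
# values are invented; the thesis's VIP is more elaborate.

IDLE = (0.0, 0.0, 0.0)  # (cpu utilisation, memory pressure, io wait)

def vip_index(load_vector):
    """Scalar index: Euclidean distance from the idle reference point."""
    return math.dist(load_vector, IDLE)

machines = {
    "node-a": (0.9, 0.7, 0.4),
    "node-b": (0.2, 0.1, 0.0),
    "node-c": (0.5, 0.5, 0.5),
}
best = min(machines, key=lambda m: vip_index(machines[m]))
print(best)  # the least loaded machine
```

Collapsing several resource dimensions into one scalar is what lets a scheduler compare heterogeneous machines with a single ordering, which is the practical appeal of generalizing a load index into a vector-based one.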
128

MOS - Modelo Ontológico de Segurança para negociação de política de controle de acesso em multidomínios. / MOS - Ontological Security Model for access control policy negotiation in multi-domains.

Venturini, Yeda Regina 07 July 2006 (has links)
The evolution in network technology and the growing number of portable and fixed devices belonging to a user, which share resources with each other, introduce new concepts and challenges in the network and information security area. This new reality motivated the development of a project for personal security domain formation and secure association between such domains, creating a multi-domain. Multi-domain formation introduces new challenges concerning the access control security policy, since multi-domains are composed of independent administrative domains that share resources for collaborative work. This work presents the main concepts concerning personal security domains and multi-domains, and proposes a security model to allow dynamic security policy negotiation and composition for access control in a multi-domain. The proposed model is called MOS, an ontological security model. The MOS is a role-based access control model whose elements are defined by an ontology. The ontology defines a common, standardized semantic language, allowing policy interpretation by different domains. Policy negotiation is made possible by the definition of an importation policy and an exportation policy in each domain. These policies express the partial contributions of each domain to the multi-domain policy. The use of ontology allows dynamic multi-domain policy composition, as well as the verification and resolution of conflicts of interest, which are incompatibilities between the importation and exportation policies. The MOS was validated through a viability analysis for personal multi-domain applications. The analysis was made through the definition of a concrete model and the simulation of access control policy negotiation and composition. The simulation was carried out by defining a multi-domain for collaborative research projects. The results demonstrate that the MOS enables an automatable procedure for creating the access control policy in multi-domains.
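The importation/exportation negotiation can be pictured with plain set operations: what one domain requires and the other offers becomes the composed multi-domain policy, and anything required but not offered surfaces as a conflict to resolve. This is a loose illustration of the negotiation step only — MOS works over an ontology, not flat permission strings, and all names below are invented.

```python
# Hedged sketch of import/export policy negotiation between two domains.
# Real MOS negotiation operates over ontology concepts; here permissions are
# flat strings purely for illustration.

domain_a = {"export": {"read:dataset", "run:analysis"},
            "import": {"read:results"}}
domain_b = {"export": {"read:results"},
            "import": {"read:dataset", "write:dataset"}}

def negotiate(a, b):
    """Grant what one side imports and the other exports; flag the rest."""
    granted = (a["import"] & b["export"]) | (b["import"] & a["export"])
    conflicts = (a["import"] - b["export"]) | (b["import"] - a["export"])
    return granted, conflicts

granted, conflicts = negotiate(domain_a, domain_b)
print(sorted(granted))    # permissions both sides agree on
print(sorted(conflicts))  # requirements the other domain does not offer
```

Here `write:dataset` is a conflict of interest in miniature: domain B requires it but domain A never exports it, so the composed policy cannot grant it without renegotiation. An ontology adds what flat strings cannot: recognizing that two domains' differently named concepts denote the same permission.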
129

Petrinetze zum Entwurf selbststabilisierender Algorithmen / Petri Nets for the Design of Self-Stabilizing Algorithms

Vesper, Tobias 08 December 2000 (has links)
In 1974, Edsger W. Dijkstra introduced the notion of self-stabilization in computer science. A system is self-stabilizing if, regardless of its initial state, it reaches stable behaviour after a finite number of actions. This thesis focuses on the design of self-stabilizing algorithms. We introduce a new Petri net based method for their design and validate it on several case studies. In each case study, our stepwise design starts from the algorithmic idea of an existing algorithm and leads to a new self-stabilizing algorithm. One of these is a new randomized self-stabilizing algorithm for leader election in a ring of processors, derived from a published algorithm which we show here, for the first time, to be incorrect. We prove that our algorithm is space-minimal. A further result is the first algorithm for self-stabilizing token-passing in an asynchronous environment that works without time-out actions. Petri nets form a uniform formal framework for the modelling and verification of these algorithms.
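Dijkstra's original 1974 K-state token ring is the canonical example of the self-stabilization property the thesis builds on, and it is small enough to simulate: machine 0 increments its state when it equals its predecessor's, every other machine copies its predecessor, and from any initial state the ring converges to exactly one privileged machine. The simulation below follows Dijkstra's published rules; it is offered as background illustration, not as the thesis's own Petri-net-derived algorithms.

```python
import random

# Dijkstra's K-state self-stabilizing token ring (1974), simulated. From an
# arbitrary initial state, repeatedly firing one privileged machine converges
# the ring to a legitimate state with exactly one privilege (requires K >= n).

def privileged(states, i):
    n = len(states)
    if i == 0:                              # "bottom" machine
        return states[0] == states[n - 1]
    return states[i] != states[i - 1]

def step(states, i, K):
    if i == 0:
        states[0] = (states[0] + 1) % K     # bottom increments modulo K
    else:
        states[i] = states[i - 1]           # others copy their predecessor

def stabilize(n, K, seed=0):
    rng = random.Random(seed)
    states = [rng.randrange(K) for _ in range(n)]   # arbitrary initial state
    for _ in range(10 * n * K):             # well beyond the O(n^2) move bound
        movers = [i for i in range(n) if privileged(states, i)]
        step(states, rng.choice(movers), K) # at least one machine is always privileged
    return sum(privileged(states, i) for i in range(n))

print(stabilize(n=5, K=7))  # exactly one machine holds the privilege
```

The count of privileged machines after convergence is the "stable behaviour" in Dijkstra's sense: once legitimate, every subsequent move keeps exactly one privilege circulating around the ring.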
130

Simulation réaliste de l'exécution des applications déployées sur des systèmes distribués avec un focus sur l'amélioration de la gestion des fichiers / Realistic simulation of the execution of applications deployed on large distributed systems with a focus on improving file management

Chai, Anchen 14 January 2019 (has links)
Simulation is a powerful tool for studying distributed systems. It allows researchers to evaluate different scenarios in a reproducible manner, which is hardly possible in real experiments. However, simulations often rely on simplified models whose realism is rarely investigated in the literature, leading to questionable accuracy of the simulated metrics. In this context, the main aim of our work is to improve the realism of simulations, with a focus on file transfer in a large distributed production system, the EGI federated e-Infrastructure. Based on the findings obtained from realistic simulations, we then propose reliable recommendations to improve file management in the Virtual Imaging Platform (VIP). In order to realistically reproduce certain behaviors of the real system in simulation, we need an inside view of it; we therefore collect and analyze a set of execution traces of one particular application executed on EGI via VIP. The realism of simulations is investigated with respect to two main aspects in this thesis: the simulator and the platform model. Based on the knowledge obtained from the traces, we design and implement a simulator, built on SimGrid, that provides an environment as close as possible to the real execution conditions for file transfers on EGI, including the replica-selection algorithms used in the production system. A complete description of a realistic platform model is also built by leveraging the information recorded in the traces. The accuracy of our platform model is evaluated by confronting the simulation results with the ground truth of real transfers; the proposed model largely outperforms the state-of-the-art model in reproducing the real-life variability of file transfers on EGI. Finally, we cross-evaluate different file replication strategies by simulation using an enhanced state-of-the-art model and our platform model built from traces. The simulation results highlight that the two models lead to different qualitative replication decisions, even though they reflect a similar hierarchical network topology. This shows that the realism of the platform model used in simulation is essential for producing results that are relevant and applicable to real systems. We also find that selecting sites hosting a large number of executed jobs as replication targets is a reliable recommendation to improve VIP file management, and that adopting our proposed dynamic replication strategy can further reduce file transfer durations, except in extreme cases (very poorly connected sites) that only our platform model is able to capture.
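The final recommendation — replicate files to the sites that executed the most jobs — reduces to a simple ranking, sketched below. This is an illustration of the stated heuristic only; the site names and job counts are invented, and the thesis's dynamic strategy involves considerably more than a static top-k choice.

```python
# Hedged sketch of the replication heuristic recommended above: place replicas
# at the sites with the highest executed-job counts, on the assumption that
# past job placement predicts where data will be needed. Data is invented.

def pick_replica_sites(jobs_per_site, n_replicas=2):
    """Return the n_replicas sites with the highest executed-job counts."""
    ranked = sorted(jobs_per_site, key=jobs_per_site.get, reverse=True)
    return ranked[:n_replicas]

jobs_per_site = {"site-A": 420, "site-B": 35, "site-C": 310, "site-D": 8}
print(pick_replica_sites(jobs_per_site))  # the two busiest sites
```

The heuristic's weakness, as the abstract notes, is the extreme case: a busy but very poorly connected site can rank high on job count yet be a bad replica host, which is exactly the behaviour only the trace-based platform model captured.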
