81 |
Using a Diffusive Approach for Load Balancing in Peer-to-peer Systems. Qiao, Ying. January 2012.
We developed a diffusive load balancing scheme that equalizes the available capacities of nodes in a peer-to-peer (P2P) system. These nodes may have different resource capacities, geographic locations, or availabilities (i.e., the length of time they have been part of the peer-to-peer system). The services on these nodes may have different service times and request arrival rates. Using the diffusive scheme, the system is able to maintain similar response times for its services. Our scheme is a modification of the diffusive load balancing algorithms proposed for parallel computing systems, and it is able to handle services with heterogeneous resource requirements and P2P nodes with heterogeneous capacities. We also adapted the diffusive scheme to clustered peer-to-peer systems, where a load balancing operation may move services or nodes between clusters.
After a literature survey of this field, this thesis investigates the following issues using analytical reasoning and extensive simulation studies. The load balancing operations equalize the available capacities of the nodes in a neighborhood to their average. As a result, the available capacities of all nodes in the P2P system converge to the global average. We found that this convergence is faster when the scheme uses neighborhoods defined by the structure of the structured P2P overlay network rather than randomly selected neighbors. For a system with churn (i.e., nodes joining and leaving), the load balancing operations keep the standard deviation of the available capacities of nodes within a bound. This bound depends on the amount of churn and the frequency of load balancing operations, as well as on the capacities of the nodes; the sizes of the services, however, have little impact on it. In a clustered peer-to-peer system, the size of the bound largely depends on the average cluster size. When nodes are moved among clusters for load balancing, the numbers of cluster splits and merges are reduced, which may reduce the maintenance cost of the overlay network.
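To make the diffusive idea concrete, here is a minimal sketch (not the author's implementation): it assumes synchronous rounds, a fixed neighborhood graph, and load abstracted to a single number per node, and shows how repeated exchanges proportional to neighborhood imbalances drive every node's available capacity toward the global average.

```python
import random

def diffusive_round(available, neighbors, alpha=0.5):
    """One synchronous diffusive load-balancing round (toy model).

    available: dict node -> available capacity
    neighbors: dict node -> list of neighboring nodes
    alpha:     fraction of each pairwise imbalance exchanged per round
    """
    updated = dict(available)
    for node, nbrs in neighbors.items():
        for nbr in nbrs:
            # Move load proportionally to the capacity difference
            # (positive diff => this node offloads work toward the neighbor).
            diff = available[node] - available[nbr]
            updated[node] -= alpha * diff / (len(nbrs) + 1)
    return updated

# Toy example: a ring of 8 nodes with random available capacities.
nodes = list(range(8))
neighbors = {i: [(i - 1) % 8, (i + 1) % 8] for i in nodes}
available = {i: random.uniform(0, 100) for i in nodes}

for _ in range(50):
    available = diffusive_round(available, neighbors)

print("spread after 50 rounds:",
      max(available.values()) - min(available.values()))
```

In the thesis's setting the exchanged quantity would correspond to services being moved between heterogeneous P2P nodes (or nodes between clusters), and neighborhoods would come from the structured overlay rather than a fixed ring.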
|
82 |
Distributed Algorithms for Networks Formation in a Scalable Internet of Things. Jedda, Ahmed. January 2014.
The Internet of Things (IoT) is a vision that aims at inter-connecting every identifiable physical object (or thing) via a global networking infrastructure (e.g., the legacy Internet). Several architectures have been proposed to realize this vision, many of which agree that the IoT shall be considered a global network of networks. These networks are used to manage wireless sensors, Radio Frequency IDentification (RFID) tags, RFID readers, and other types of electronic devices, and to integrate them into the IoT. A major requirement of IoT architectures is scalability, the capability of delivering high performance even when the input size (e.g., the number of IoT objects) is large. This thesis studies and proposes solutions to meet this requirement, focusing specifically on the scalability issues found in the networks of the IoT. It proposes several network formation algorithms to achieve these objectives, where a network formation algorithm is an algorithm that, applied to a certain network, optimizes it to perform its tasks more efficiently by virtually deleting some of its nodes and/or edges.
The thesis focuses on three types of networks found in the IoT: 1) RFID reader coverage networks, whose main task is to cover (i.e., identify, monitor, track, sense) IoT objects located in a given area; 2) reader inter-communication networks, whose main task is to guarantee that their nodes can inter-communicate with each other and hence use their resources more efficiently (the thesis specifically considers inter-communication networks of readers using Bluetooth); and 3) Object Name Systems (ONS), networks of several inter-connected database servers (i.e., a distributed database) whose main task is to resolve an object identifier into an Internet address and thus enable inter-communication via the Internet. These networks were chosen for several reasons: the technologies and concepts found in them are among the major enablers of the IoT, and they solve tasks that are central to any IoT architecture. In particular, the thesis a) studies the data and reader redundancy problem found in RFID reader coverage networks and introduces decentralized RFID coverage and reader collision avoidance algorithms to solve it, b) contributes to the problem of forming multihop inter-communication networks of Bluetooth-equipped readers by proposing decentralized, time-efficient Bluetooth Scatternet Formation algorithms, and c) introduces a geographically aware ONS architecture based on Peer-To-Peer (P2P) computing to overcome weaknesses of existing ONS architectures.
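As an illustration of the reader-redundancy problem mentioned under a), the following sketch implements a simple greedy redundant-reader elimination on a toy coverage instance. It is not one of the thesis's decentralized algorithms; it is a hedged, centralized baseline in the spirit of classic redundant-reader elimination heuristics, switching off any reader whose tags are all covered by the remaining readers.

```python
def redundant_readers(coverage):
    """Greedy elimination of redundant RFID readers (toy, centralized).

    coverage: dict reader -> set of tag ids the reader can interrogate.
    Returns the readers that can be switched off while keeping every
    tag covered by at least one remaining reader.
    """
    active = set(coverage)
    removed = set()
    # Consider readers with the smallest coverage first.
    for reader in sorted(coverage, key=lambda r: len(coverage[r])):
        rest = active - {reader}
        covered_by_rest = set().union(*(coverage[r] for r in rest)) if rest else set()
        if coverage[reader] <= covered_by_rest:   # every tag still covered
            active.discard(reader)
            removed.add(reader)
    return removed

# Toy instance: reader C's tags are all covered by A and B together.
coverage = {
    "A": {1, 2, 3},
    "B": {3, 4, 5},
    "C": {2, 3, 4},
    "D": {6},
}
print(redundant_readers(coverage))   # expected: {'C'}
```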
|
83 |
Algorithmes distribués pour l'optimisation de déploiement des microrobots MEMS / Distributed algorithms for optimizing the deployment of MEMS microrobots. Lakhlef, Hicham. 24 November 2014.
MEMS microrobots are miniaturized elements that can sense and act on their environment. They are about a millimeter in size and have low memory capacity and limited energy. These inexpensive devices can perform many missions and tasks in a wide range of applications such as odor localization, firefighting, medical service, surveillance, search and rescue, and safety. To carry out these tasks and missions, they must run redeployment protocols to adapt to the working conditions. These algorithms should be efficient, scalable, and robust, and should use only local information.
Redeployment for mobile MEMS microrobots currently requires a positioning system and a map (predefined positions) of the target shape. Traditional positioning solutions such as GPS consume too much energy and are not applicable at the micro scale, and algorithmic positioning based on multilateration suffers from errors in the obtained coordinates. In the literature, if we want microrobots to self-reconfigure into a target shape consisting of P positions, each microrobot must have enough storage for at least P positions; if P is in the thousands or millions, so is the per-node memory requirement, and such algorithms do not scale. In this thesis, we propose self-reconfiguration protocols in which nodes are not aware of their position in the plane and do not store the positions of the target shape. The memory required per node is therefore reduced to constant complexity. The purpose of these fully distributed algorithms is to optimize the logical topology of the network of mobile MEMS microrobots in order to obtain a better message-exchange complexity and inexpensive communication. For the reconfiguration of a chain into a square, we show how to handle the dynamicity of the network to save energy, and we study how to use parallelism in motion to optimize the execution time and the number of movements. We also propose a solution in which the initial physical topology may be any connected configuration. With these solutions, the nodes can execute the algorithm regardless of where they are deployed, because the algorithm is independent of any map of the target shape, and they seek to reach the target shape with a minimum amount of movement.
|
84 |
Precise Analysis of Epidemic Algorithms / Analyse précise des algorithmes épidémiques. Kostrygin, Anatolii. 29 August 2017.
Epidemic algorithms are distributed algorithms in which the agents in the network contact peers in much the same way an epidemic spreads. In this work, we focus on randomized rumor spreading, a class of epidemic algorithms based on the paradigm that nodes call random neighbors and exchange information with these contacts. Randomized rumor spreading has found numerous applications, from the consistency maintenance of replicated databases to news spreading in social networks. Numerous mathematical analyses of different rumor spreading algorithms can be found in the literature.
Some of them provide extremely sharp estimates for the performance of such processes, but most of them rely on inherent properties of the concrete algorithms. We develop a new, simple, and generic method to analyze randomized rumor spreading processes in fully connected networks. In contrast to all previous works, which heavily exploit the precise definition of the process under investigation, we only need to understand the probability and the covariance of the events that uninformed nodes become informed. This universality allows us to easily analyze the classic push, pull, and push-pull protocols, both in their pure versions and in several variations, such as when messages fail with constant probability or when nodes call a random number of others each round. Some dynamic models can be analyzed as well, e.g., when the network is a random graph sampled independently each round [Clementi et al. (ESA 2013)]. Despite this generality, our method determines the expected rumor spreading time precisely apart from additive constants, which is more precise than almost all previous works. We also prove tail bounds showing that a deviation from the expectation by more than r rounds occurs with probability at most exp(−Ω(r)). We further use our method to examine the common assumption that nodes can answer any number of incoming calls. We observe that restricting nodes to answer only one incoming call per round leads to a significant increase in the runtime of the push-pull protocol: the double-logarithmic end phase of the process now takes logarithmic time, and the message complexity increases from the asymptotically optimal Θ(n log log n) [Karp, Shenker, Schindelhauer, Vöcking (FOCS 2000)] to Θ(n log n). We propose a simple variation of the push-pull protocol that restores the double-logarithmic end phase and thus the Θ(n log log n) message complexity.
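The push, pull, and push-pull protocols discussed above are easy to simulate. The sketch below is only an illustrative Monte Carlo experiment on the complete graph (it is not the analytical method of the thesis): in every round each node calls a uniformly random other node, and the rumor travels along a push, a pull, or both, depending on the protocol.

```python
import random

def rumor_rounds(n, protocol="push-pull", rng=random):
    """Simulate randomized rumor spreading on the complete graph K_n.

    Returns the number of rounds until all n nodes are informed.
    protocol: "push", "pull", or "push-pull".
    """
    informed = [False] * n
    informed[0] = True
    count, rounds = 1, 0
    while count < n:
        rounds += 1
        newly = []
        for u in range(n):
            v = rng.randrange(n - 1)
            if v >= u:                 # uniform partner distinct from u
                v += 1
            if protocol in ("push", "push-pull") and informed[u] and not informed[v]:
                newly.append(v)
            if protocol in ("pull", "push-pull") and informed[v] and not informed[u]:
                newly.append(u)
        for w in newly:                # apply the round synchronously
            if not informed[w]:
                informed[w] = True
                count += 1
    return rounds

n = 10_000
for p in ("push", "pull", "push-pull"):
    avg = sum(rumor_rounds(n, p) for _ in range(10)) / 10
    print(f"{p:10s} average rounds on K_{n}: {avg:.1f}")
```

On K_n the push protocol needs roughly log2 n + ln n rounds, while push-pull finishes faster thanks to its double-logarithmic end phase; the simulated averages reflect this gap.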
|
85 |
Robust Low Atomicity Peer-to-Peer Systems. Mohd Nor, Rizal. 06 July 2012.
No description available.
|
86 |
Byzantine Fault Tolerant Collaborative Editing. Babi, Mamdouh O. 24 May 2017.
No description available.
|
87 |
Entwurf und Verifikation von Petrinetzmodellen verteilter Algorithmen durch Verfeinerung unverteilter Algorithmen / Design and Verification of Petri Net Models of Distributed Algorithms by Refining Undistributed Algorithms. Wu, Bixia. 12 July 2007.
To make the design and verification of complicated distributed algorithms easier and more understandable, a refinement method is often used: a simple algorithm that fulfills the desired properties is refined stepwise into a complicated one, and each step preserves the desired properties. For message-based distributed algorithms we have developed a new refinement method. We begin with an initial algorithm containing actions that describe joint tasks of several agents. In each step, we refine one of these actions into a net that contains only actions describing the tasks of individual agents; each step is thus a distribution of an undistributed action. Such refinement steps are analyzed with the help of a new refinement notion, the distributing refinement. The crucial point is preserving the partial orders of the algorithm being refined, which is achieved through causalities between the agents' actions in the local refinement net. At design time, these causalities can be realized directly by message passing; at verification time, the validity of a causality can be read directly off the local refinement net. The method is therefore easy to use, and its application is demonstrated on several nontrivial examples in this thesis.
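The refinement step described above (replacing a joint action of two agents by local actions whose causality is enforced by a message) can be illustrated with a small sketch. The code below is not the thesis's Petri net formalism, only a hedged toy token game in Python: the joint transition t of agents A and B is split into tA, which also produces a message token, and tB, which can fire only after consuming that message.

```python
# Toy place/transition net: a marking maps places to token counts and a
# transition is (name, consumed_places, produced_places).
def enabled(marking, transition):
    _, consume, _ = transition
    return all(marking.get(p, 0) >= 1 for p in consume)

def fire(marking, transition):
    name, consume, produce = transition
    assert enabled(marking, transition), f"{name} is not enabled"
    m = dict(marking)
    for p in consume:
        m[p] -= 1
    for p in produce:
        m[p] = m.get(p, 0) + 1
    return m

# Undistributed net: one joint action of agents A and B.
t_joint = ("t", ["A.ready", "B.ready"], ["A.done", "B.done"])

# Distributed refinement: A acts locally and sends a message; B can only
# act after receiving it, so the causality tA -> tB replaces the joint step.
t_A = ("tA", ["A.ready"], ["A.done", "msg"])
t_B = ("tB", ["B.ready", "msg"], ["B.done"])

m0 = {"A.ready": 1, "B.ready": 1}
print(fire(m0, t_joint))       # the joint step in the undistributed net
print(enabled(m0, t_B))        # False: B must wait for A's message
m1 = fire(m0, t_A)             # A's local step produces the message
print(enabled(m1, t_B))        # True: the message enables B
print(fire(m1, t_B))
```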
|
88 |
Fairneß, Randomisierung und Konspiration in verteilten Algorithmen / Fairness, Randomization, and Conspiracy in Distributed Algorithms. Völzer, Hagen. 08 December 2000.
Concepts such as fairness (i.e., fair conflict resolution), randomization (i.e., coin flips), and partial synchrony are frequently used to solve fundamental synchronization and coordination problems in distributed systems, such as the mutual exclusion problem (mutex problem for short) and the consensus problem. For some of these problems it has been proven that, without such concepts, no solution exists. Impossibility results of this kind improve our understanding of how distributed algorithms work, as well as of the trade-off between a tractable model and a powerful model of distributed computation. In this thesis, we prove two new impossibility results and investigate their reasons. We are particularly concerned with models for randomized distributed algorithms, since little is yet known about the limitations of randomization with respect to the solvability of problems in distributed systems. By a solution through randomization we mean that the problem under consideration is solved with probability 1.
In the first part of the thesis, we investigate the relationship between fairness and randomization. On the one hand, it is known that some problems (e.g., the consensus problem) are solvable through randomization but not through fairness. On the other hand, we show that there are problems (viz. the mutex problem) that are solvable through fairness but not through randomization. These results imply that fairness cannot be implemented by coin flips. In the second part of the thesis, we consider a model that combines fairness and randomization. Such a model is quite powerful, admitting solutions to the mutex problem, the consensus problem, and the generalized mutex problem. In the generalized mutex problem (a.k.a. the dining philosophers problem), a neighborhood relation is given and mutual exclusion must be achieved for each pair of neighbors. We finally consider the crash-tolerant generalized mutex problem, where every hungry agent must eventually become critical provided that neither it nor one of its neighbors crashes. We prove that even the combination of fairness and randomization does not admit a solution to this problem. We argue that the reason for this impossibility is the inherent occurrence of an undesirable phenomenon known as conspiracy, which had not yet been properly characterized. We characterize conspiracy on the basis of non-sequential runs, and we show that it can be prevented with the additional assumption of partial synchrony: every conspiracy-prone system can be refined to a randomized system which, under the assumptions of partial synchrony and fairness, is conspiracy-free with probability 1. Partial synchrony means that each event consumes a bounded amount of time, where, however, the bound is not known. We use a non-sequential semantics for distributed algorithms, which is essential to some parts of the thesis; in particular, we develop a non-sequential semantics for randomized distributed algorithms, since none exists in the literature. In this semantics, causal independence is reflected by stochastic independence.
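The generalized mutex (dining philosophers) problem mentioned above has a classic randomized solution in the style of Lehmann and Rabin, simulated with threads in the sketch below. This is not taken from the thesis and deliberately ignores the crash tolerance and conspiracy issues the thesis studies; it only illustrates how coin flips break the symmetry between neighbors so that, with probability 1, no deadlock arises.

```python
import random
import threading
import time

N_PHIL, MEALS = 5, 20
forks = [threading.Lock() for _ in range(N_PHIL)]

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N_PHIL]
    for _ in range(MEALS):
        while True:
            # Coin flip: choose which fork to wait for first.
            first, second = (left, right) if random.random() < 0.5 else (right, left)
            first.acquire()
            # Try the other fork without blocking; on failure back off and retry.
            if second.acquire(blocking=False):
                break
            first.release()
            time.sleep(random.random() * 0.001)
        # Critical section: both forks held, so neighbors are excluded.
        second.release()
        first.release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N_PHIL)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all philosophers finished", MEALS, "meals; no deadlock occurred")
```

Because a philosopher never blocks while already holding a fork, the circular-wait pattern of the deterministic symmetric solution cannot occur here.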
|
89 |
Vérification formelle d'algorithmes distribués en PlusCal-2 / Formal Verification of distributed algorithms using PlusCal-2. Akhtar, Sabina. 09 May 2012.
Designing sound algorithms for concurrent and distributed systems is subtle and challenging. These systems are prone to deadlocks and race conditions, which are therefore hard to reproduce. Formal verification is a key technique: the system and its properties are modeled formally and then verified by means of model checking. Formal languages like TLA+ can describe complicated algorithms quite concisely, but algorithm designers often find it difficult to model an algorithm in the form of formulas. In this thesis, we present PlusCal-2, a language that aims at being similar to pseudo-code while being formally verifiable. PlusCal-2 improves upon Lamport's PlusCal algorithm language by lifting some of its restrictions and adding new constructs. Our language is intended for describing algorithms at a high level of abstraction, and finite instances of algorithms described in PlusCal-2 can be verified with the TLC model checker. The second contribution of this thesis is a study of partial-order reduction methods using conditional and constant dependency relations. To compute conditional dependency for PlusCal-2 algorithms, we exploit their locality information and express it in the form of independence predicates. We propose an adaptation of a dynamic partial-order reduction algorithm for a variant of the TLC model checker. As an alternative to partial-order reduction based on conditional dependency, we also describe a variant of a static partial-order reduction algorithm for TLC that relies on a constant dependency relation. We present experimental results along with a proof of correctness.
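To make the model-checking workflow concrete, here is a hedged toy example in plain Python (not PlusCal-2, TLA+, or TLC): it enumerates all reachable states of a finite two-process Peterson mutex instance by breadth-first search and checks the mutual exclusion invariant in every state, which is essentially what an explicit-state model checker does on a finite instance. The state encoding and the atomicity of the entry step are simplifying assumptions made for this illustration.

```python
from collections import deque

# State: (pc0, pc1, flag0, flag1, turn); pc in {"idle", "wait", "cs"}.
INIT = ("idle", "idle", False, False, 0)

def successors(state):
    pc0, pc1, flag0, flag1, turn = state
    for i in (0, 1):
        j = 1 - i
        pc, flag, t = [pc0, pc1], [flag0, flag1], turn
        if pc[i] == "idle":            # request entry: raise flag, yield priority
            pc[i], flag[i], t = "wait", True, j
        elif pc[i] == "wait":          # enter only if the other process yields
            if flag[j] and t != i:
                continue               # blocked: no step for this process
            pc[i] = "cs"
        else:                          # "cs": leave and lower the flag
            pc[i], flag[i] = "idle", False
        yield (pc[0], pc[1], flag[0], flag[1], t)

# Breadth-first exploration of the reachable state space, checking the invariant.
seen, queue = {INIT}, deque([INIT])
while queue:
    s = queue.popleft()
    assert not (s[0] == "cs" and s[1] == "cs"), f"mutual exclusion violated in {s}"
    for nxt in successors(s):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)
print(f"explored {len(seen)} reachable states; mutual exclusion holds")
```

Partial-order reduction techniques like those studied in the thesis aim to shrink exactly this kind of state space by not exploring interleavings of independent steps.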
|
90 |
Grafos evolutivos na modelagem e análise de redes dinâmicas / Evolving Graphs in the Modeling and Analysis of Dynamic Networks. Floriano, Paulo Henrique. 29 February 2012.
Many networks with dynamic properties (for instance MANETs, DTNs, and opportunistic networks) are in operation today. In this work, we study a model for these networks called Evolving Graphs, which expresses the dynamicity of the connections between nodes through a simple extension of the common graph structure. This model is used by the framework proposed by Casteigts et al. for defining distributed algorithms in dynamic networks, which uses evolving graphs to represent the network topology and graph relabelling to express the communication between nodes. Using this approach, we study the distributed mutual exclusion problem in dynamic networks and several algorithms proposed to solve it, in order to define and validate their necessary and sufficient connectivity conditions. Apart from the formalization of algorithms, the evolving graph model can also be used to analyze dynamic networks. Traces of real dynamic networks are widely used in the literature to study algorithms, as they produce more realistic results than simulated networks with movement patterns. From the details of every connection between nodes in a trace, it is possible to build an evolving graph, from which a large amount of information can be extracted, such as optimal journeys between nodes, the variation of connectivity over time, stability, and periodicity.
With this information, a researcher can observe the characteristics of a trace more precisely, which makes it easier to choose the most appropriate trace for a given need. Furthermore, knowing such characteristics of a network in advance helps in studying the behavior of algorithms executed over it and provides a validation of assumptions commonly made by researchers. To provide this information, we developed a web tool that analyzes dynamic network traces and aggregates the data in an easily readable format. In this work, we describe the implementation and usefulness of every service in the tool.
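The evolving graph model lends itself to a compact implementation. The sketch below is a hedged toy version, not the web tool described in the thesis: each edge carries the set of time steps at which it is present, and a Dijkstra-like search computes the earliest arrival time of a journey between two nodes, i.e., a foremost journey, one of the metrics listed above. The one-step traversal delay and the time horizon are assumptions of the sketch.

```python
import heapq

class EvolvingGraph:
    """Toy evolving graph: edges labelled with their presence times."""

    def __init__(self):
        self.presence = {}          # (u, v) -> set of time steps

    def add_edge(self, u, v, times):
        self.presence.setdefault((u, v), set()).update(times)
        self.presence.setdefault((v, u), set()).update(times)

    def earliest_arrival(self, src, dst, t_start=0, horizon=100):
        """Earliest time a journey from src can reach dst (or None)."""
        best = {src: t_start}
        heap = [(t_start, src)]
        while heap:
            t, u = heapq.heappop(heap)
            if u == dst:
                return t
            if t > best.get(u, float("inf")):
                continue                     # stale heap entry
            for (a, b), times in self.presence.items():
                if a != u:
                    continue
                # The edge can be taken at its first presence time >= t.
                usable = [s for s in times if t <= s < horizon]
                if usable and min(usable) + 1 < best.get(b, float("inf")):
                    best[b] = min(usable) + 1   # traversal takes one step
                    heapq.heappush(heap, (best[b], b))
        return None

g = EvolvingGraph()
g.add_edge("a", "b", {1, 5})
g.add_edge("b", "c", {2})
g.add_edge("a", "c", {9})
print(g.earliest_arrival("a", "c"))   # 3: take a->b at time 1, then b->c at time 2
```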
|