1 |
Distributed computation in networked systems. Costello, Zachary Kohl, 27 May 2016
The objective of this thesis is to develop a theoretical understanding of computation in networked dynamical systems and demonstrate practical applications supported by the theory. We are interested in understanding how networks of locally interacting agents can be controlled to compute arbitrary functions of the initial node states. In other words, can a dynamical networked system be made to behave like a computer? In this thesis, we take steps towards answering this question with a particular model class for distributed, networked systems which can be made to compute linear transformations.
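As a toy illustration of the linear-transformation case, consider the following sketch (our own, not from the thesis; the ring topology and the weights are assumptions) in which nodes repeatedly average with their neighbours, so the network as a whole computes a linear function of the initial states:

```python
# Hypothetical illustration (the weights below are our own choice, not taken
# from the thesis): a network whose nodes repeatedly average with their
# neighbours computes a linear transformation of the initial node states --
# here the global mean, the simplest such linear map.

# Symmetric, doubly stochastic weight matrix for a 4-node ring network.
W = [
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
]

x = [1.0, 3.0, 5.0, 7.0]              # initial node states
for _ in range(200):                  # each step uses only neighbour values
    x = [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Every node converges to the mean of the initial states (4.0): the network
# realizes the linear map x0 -> (1/n) * sum(x0) at every node.
print(x)
```

Because each row of the weight matrix sums to one and only neighbouring entries are nonzero, every update is a purely local operation, which is exactly the constraint the thesis places on its model class.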
|
2 |
Consensus Algorithms and Distributed Structure Estimation in Wireless Sensor Networks. January 2017
abstract: Distributed wireless sensor networks (WSNs) have recently attracted researchers due to advantages such as low power consumption, scalability and robustness to link failures. In sensor networks with no fusion center, consensus is a process where all the sensors in the network achieve global agreement using only local transmissions. In this dissertation, several consensus and consensus-based algorithms in WSNs are studied.
Firstly, a distributed consensus algorithm for estimating the maximum and minimum values of the initial measurements in a sensor network in the presence of communication noise is proposed. In the proposed algorithm, a soft-max approximation is used together with a non-linear average consensus algorithm. A design parameter controls the trade-off between the soft-max error and the convergence speed. An analysis of this trade-off gives guidelines on how to choose the design parameter for the max estimate. It is also shown that if some prior knowledge of the initial measurements is available, the consensus process can be accelerated.
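For intuition, here is a small sketch (our own construction; the measurement values and the choices of the design parameter are invented) of the soft-max approximation that underlies such a max estimate, showing the trade-off the abstract describes: a larger design parameter shrinks the soft-max bias, at the cost of a harder consensus problem in practice.

```python
import math

# Our own toy, not the dissertation's exact algorithm: the soft-max
# (1/b) * log(sum_i exp(b * x_i)) always upper-bounds the true maximum,
# and the gap is at most log(n)/b, so it shrinks as b grows.

x = [2.0, 3.5, 1.0, 4.0, 3.0]     # initial sensor measurements
true_max = max(x)

for b in (1.0, 5.0, 50.0):        # the design parameter
    est = math.log(sum(math.exp(b * v) for v in x)) / b
    # Bias bound: true_max <= est <= true_max + log(n)/b
    print(b, est)
```

In the distributed setting each node would hold one `x_i` and the sum inside the logarithm would be obtained by average consensus on the values `exp(b * x_i)`, which is what makes the soft-max form attractive.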
Secondly, a distributed system size estimation algorithm is proposed. The proposed algorithm is based on distributed average consensus and L2-norm estimation. Different sources of error are explicitly discussed, and the distribution of the final estimate is derived. The Cramér-Rao bounds (CRBs) for the system size estimator under average and max consensus strategies are also considered, and different consensus-based system size estimation approaches are compared.
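One classic consensus-based size estimator, sketched here as our own toy (not necessarily the dissertation's exact method; the network size and gossip schedule are invented), has a single anchor node start at 1 and all others at 0; average consensus then drives every node to 1/N, so each node recovers N by inverting its local value:

```python
import random

# Toy illustration (our own, with invented parameters): randomized pairwise
# gossip preserves the sum of the node values, so averaging drives every
# node to sum/N = 1/N, from which each node can estimate the system size.

random.seed(0)
N = 12
x = [0.0] * N
x[0] = 1.0                         # the anchor node starts at 1

for _ in range(5000):              # random gossip: a pair averages its values
    i, j = random.sample(range(N), 2)
    m = 0.5 * (x[i] + x[j])
    x[i] = x[j] = m

estimates = [1.0 / v for v in x]   # every node's local estimate of N
print(estimates)
```

Communication noise, as discussed in the abstract, would perturb each averaging step and is one of the error sources whose effect on the final estimate's distribution the dissertation analyzes.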
Then, a consensus-based network center and radius estimation algorithm is described. The center localization problem is formulated as a convex optimization problem with a summation form by using soft-max approximation with exponential functions. Distributed optimization methods such as stochastic gradient descent and diffusion adaptation are used to estimate the center. Then, max consensus is used to compute the radius of the network area.
Finally, two average consensus based distributed estimation algorithms are introduced: a distributed degree distribution estimation algorithm and an algorithm for tracking the dynamics of a desired parameter. Simulation results for all proposed algorithms are provided. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2017
|
3 |
Addressing scaling challenges in comparative genomics. Golenetskaya, Natalia, 09 September 2013
Comparative genomics is essentially a form of data mining in large collections of n-ary relations between genomic elements. Growth in the number of sequenced genomes puts a stress on comparative genomics that increases, at worst geometrically, with every increase in sequence data. Even modestly sized labs now routinely obtain several genomes at a time and, like the large consortia, expect to be able to perform all-against-all analyses as part of these new multi-genome strategies. Addressing these needs at all levels requires rethinking the algorithmic frameworks and data storage technologies used for comparative genomics.
To meet these challenges of scale, this thesis develops novel methods based on NoSQL and MapReduce technologies. Starting from a characterization of the kinds of data used in comparative genomics and a study of typical usage patterns, we define a practical formalism for genomic Big Data, implement it on the Cassandra NoSQL platform, and evaluate its performance. Then, starting from two quite different global analyses in comparative genomics, we define two strategies for adapting these applications to the MapReduce paradigm and derive new algorithms. For the first, identifying gene fusion and fission events within a phylogeny, we reformulate the problem as a bounded parallel traversal that avoids the latency of graph-based algorithms. For the second, the consensus clustering used to identify protein families, we define an iterative sampling procedure that converges quickly to the desired global result. We implement each of these algorithms on the Hadoop MapReduce platform and evaluate their performance. The performance is competitive and scales much better than existing solutions, but requires particular (and future) effort to devise the specific algorithms.
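The MapReduce shape of an all-against-all analysis can be sketched as follows (a toy in plain Python with invented genome and gene names; the thesis's real analyses run on Hadoop and compute far richer relations than shared-gene counts):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sketch: an all-against-all genome comparison expressed as a
# map phase that emits (key, value) records and a reduce phase that
# aggregates them by key -- the shape MapReduce imposes on such analyses.

genomes = {
    "gA": {"geneX", "geneY", "geneZ"},
    "gB": {"geneY", "geneZ"},
    "gC": {"geneX", "geneZ"},
}

def map_phase(pair):
    """Emit (genome-pair, shared-gene-count) records for one comparison."""
    (na, sa), (nb, sb) = pair
    yield ((na, nb), len(sa & sb))

def reduce_phase(records):
    """Aggregate the emitted values by key, as a MapReduce reducer would."""
    out = defaultdict(int)
    for key, value in records:
        out[key] += value
    return dict(out)

records = [r for pair in combinations(sorted(genomes.items()), 2)
           for r in map_phase(pair)]
print(reduce_phase(records))
```

On a real cluster the map tasks run in parallel over partitions of the genome pairs and the framework shuffles records to reducers by key, which is what lets the all-against-all workload scale with added machines.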
|
4 |
Implementing and Evaluating the Coordination Layer and Time-Synchronization of a New Protocol for Industrial Communication Networks. Turan, Ulas, 01 September 2011
Currently, automation components of large-scale industrial systems are realized with distributed controller devices that use local sensor/actuator events and exchange shared events over communication networks. The fast-paced improvement of Ethernet has provoked its use in industrial communication networks. The incompatibility of the standard Ethernet protocol with real-time requirements has encouraged industrial and academic researchers to seek a resolution to this problem. However, existing solutions in the literature suggest a static bandwidth allocation for each controller device, which usually leads to inefficient bandwidth use. The Dynamic Distributed Dependable Real-time Industrial Communication Protocol (D3RIP) family dynamically updates the bandwidth allocation according to the messages generated by the control application. D3RIP is composed of two protocols: an interface layer that provides time-slotted access to the shared medium, based on accurate clock synchronization of the distributed controller devices, and a coordination layer that decides the ownership of real-time slots. In this thesis, the coordination layer protocol of the D3RIP family and the IEEE 1588 time synchronization protocol are implemented and tested on a real hardware system that resembles a factory plant floor. In the end, we constructed a system that runs an instance of the D3RIP family with 3 ms time slots and guarantees 6.6 ms latency for the real-time packets of the control application. The results prove that our implementation may be used in distributed controller realizations and encouraged us to further improve the timing constraints.
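The time-slotted access idea can be sketched as follows (a deliberate simplification, not the actual D3RIP code; the slot-ownership sets and cycle length are invented, and only the 3 ms slot length comes from the thesis): once the clocks are synchronized, every node can map the current time to a slot index and transmit only in slots it owns.

```python
# Hedged sketch of time-slotted medium access (our own toy): with clocks
# synchronized (e.g. via IEEE 1588), slot boundaries are globally agreed,
# so ownership of a slot suffices to guarantee collision-free transmission.

SLOT_MS = 3            # slot length used in the thesis experiments

def slot_index(now_ms: float) -> int:
    """Index of the time slot containing the instant `now_ms`."""
    return int(now_ms // SLOT_MS)

def may_transmit(now_ms: float, owned_slots: set[int], cycle: int) -> bool:
    """True if this node owns the current slot within a repeating cycle."""
    return slot_index(now_ms) % cycle in owned_slots

# A node owning slots {0, 2} of a 4-slot cycle; slots start at 0, 3, 6, 9 ms:
print(may_transmit(0.5, {0, 2}, 4))    # slot 0 -> owned
print(may_transmit(4.0, {0, 2}, 4))    # slot 1 -> not owned
print(may_transmit(6.1, {0, 2}, 4))    # slot 2 -> owned
```

In D3RIP it is the coordination layer that decides which node owns each real-time slot, and it revises that assignment dynamically as the control application generates messages, rather than fixing the sets in advance as this toy does.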
|
5 |
[en] VOLUMETRIC VISUALIZATION WITH RAY-CASTING IN A DISTRIBUTED ENVIRONMENT. ROBERTO DE BEAUCLAIR SEIXAS, 26 July 2002
[en] Ray-casting is a widely used volume visualization technique for creating medical images from data obtained by magnetic resonance imaging (MRI) and computed tomography (CT). It has, however, a high computational cost that results in a slow rendering process, compromising the interactivity necessary for a good comprehension of the three-dimensional data set. This work proposes optimization strategies for the ray-casting algorithm to improve its efficiency. Going further, the thesis investigates its use in a distributed computing environment, through a communication protocol between heterogeneous, non-dedicated workstations connected on a local network. The proposed ideas were implemented in two versions of the algorithm, one sequential and one parallel. The results obtained with these implementations on real data sets show that it is possible to achieve interactive time with currently available machines, under normal use of the local network by other users.
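The heart of such a renderer, compositing samples front-to-back along each ray, can be sketched as follows (our own toy, not the thesis implementation; the sample values and threshold are invented). Front-to-back order also enables early ray termination, a classic optimization for exactly the cost problem the abstract describes.

```python
# Illustrative sketch only: front-to-back compositing of (color, alpha)
# samples along one ray, with early ray termination once the accumulated
# opacity makes further samples invisible.

def composite_ray(samples, opacity_threshold=0.95):
    """Composite (color, alpha) samples front to back along a single ray."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # weight by remaining transparency
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_threshold:   # early ray termination
            break
    return color, alpha

# A ray crossing a semi-transparent voxel and then an opaque one:
print(composite_ray([(0.2, 0.5), (0.9, 1.0)]))
```

Because rays are independent, an image can be split into regions of rays and farmed out to workstations, which is the kind of decomposition a distributed ray-caster exploits.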
|
6 |
Strategies for Distributed Password Cracking. Večeřa, Vojtěch, January 2019
This thesis introduces viable password recovery tools and their categories, as well as the technologies and hardware commonly used in this field of informatics. It continues with an overview of the available benchmarking tools for the given hardware. The thesis then describes a custom benchmarking process targeting the aspects of interest. Later, the thesis moves to the distributed system Fitcrack, for which it proposes and experimentally implements new features. The thesis finishes with a comparison of the additions against the original state and highlights the areas of improvement.
|
7 |
Password Recovery in Distributed Environment. Kos, Ondřej, January 2016
The goal of this thesis is to design and implement a framework allowing password recovery in a distributed environment. The research therefore focuses on analyzing the security of passwords and the techniques used to attack them, and also presents methods for preventing such attacks. The Wrathion tool is described, which allows password recovery using acceleration on graphics cards through the integration of the OpenCL framework. An analysis of available environments providing means to run computing tasks on multiple devices is also conducted, based on which the OpenMPI platform is chosen for extending Wrathion. Various modifications and added components are disclosed, and the entire system is subjected to experiments aimed at measuring scalability and network traffic performance. The financial side of using the Wrathion tool is also discussed in terms of its usability in a cloud-based distributed environment.
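The core idea behind both of these password-recovery theses, carving a keyspace into disjoint chunks that independent workers search in parallel, can be sketched as follows (a toy unrelated to Wrathion's or Fitcrack's real internals; the charset, length, and worker count are invented):

```python
from itertools import islice, product

# Hypothetical sketch: a brute-force keyspace partitioned round-robin
# across workers, so the chunks are disjoint and jointly cover every
# candidate exactly once.

CHARSET = "ab1"
LENGTH = 3

def keyspace():
    """Enumerate every candidate password over CHARSET of length LENGTH."""
    for combo in product(CHARSET, repeat=LENGTH):
        yield "".join(combo)

def chunk_for_worker(worker_id: int, num_workers: int):
    """Every num_workers-th candidate, starting at offset worker_id."""
    return list(islice(keyspace(), worker_id, None, num_workers))

chunks = [chunk_for_worker(w, 4) for w in range(4)]
total = sum(len(c) for c in chunks)
print(total)          # 3**3 = 27 candidates, each covered exactly once
```

Real systems distribute ranges of candidate indices rather than materialized lists, and each worker hashes its candidates on a GPU; but the disjoint-cover property illustrated here is what makes the search embarrassingly parallel.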
|
8 |
Coded Computation for Speeding up Distributed Machine Learning. Wang, Sinong, 11 July 2019
No description available.
|
9 |
FATKID: a Finite Automaton Toolkit. Huysamen, Nico, 12 1900
Thesis (MSc)--Stellenbosch University, 2012 / ENGLISH ABSTRACT: This thesis presents the FATKID Finite Automata Toolkit. While many toolkits currently exist that can manipulate and process finite state automata, this toolkit was designed to effectively and efficiently generate, manipulate and process large numbers of finite automata by distributing the workflow across machines and running the computations in parallel. Other toolkits do not currently provide this functionality. We show that this framework is user-friendly and highly extensible. Furthermore, we show that the system effectively distributes the work to reduce computation time.
|
10 |
A decentralized P2P middleware for workflow computing. Siqueira, Thiago Senador de, 14 March 2008
Advisor: Edmundo Roberto Mauro Madeira / Master's dissertation (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2008 / Abstract: P2P computing has arisen as an alternative and complementary solution to grid computing. The use of P2P technology can provide flexible, decentralized execution and management of grid workflows. In this work we present a completely decentralized P2P middleware for workflow computing. The middleware collects the processing power shared by the peers in order to execute workflows, modeled as DAG structures, composed of a set of dependent tasks. Through a distributed scheduling algorithm and a leasing-based fault tolerance mechanism, the middleware achieves high execution parallelism and efficient execution recovery when failures occur. The middleware is implemented in Java, using RMI and the JXTA library. The experimental results obtained show the efficiency of the middleware in the distributed execution of workflows, as well as fast execution recovery in faulty scenarios. / Master's / Computer Science / Master in Computer Science
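The wave-by-wave execution of a DAG workflow can be sketched as follows (our own illustration, not the middleware's code; the task names are invented): at each step, every task whose dependencies are complete becomes ready, and all ready tasks can be dispatched to peers in parallel.

```python
# Minimal illustrative sketch of DAG workflow execution: repeatedly dispatch
# every task whose prerequisites are done.  Each "wave" is a set of tasks
# with no remaining dependencies on each other, hence runnable in parallel.

def run_workflow(deps):
    """deps maps task -> set of prerequisite tasks; returns execution waves."""
    done, waves = set(), []
    while len(done) < len(deps):
        ready = [t for t, reqs in deps.items()
                 if t not in done and reqs <= done]
        if not ready:
            raise ValueError("cycle in workflow DAG")
        waves.append(sorted(ready))      # these tasks can run concurrently
        done.update(ready)
    return waves

# Diamond-shaped DAG: a first, then b and c in parallel, then d.
print(run_workflow({"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}))
```

In the middleware this dispatch decision is itself distributed among the peers, and a leasing mechanism lets a task whose worker fails be reclaimed and re-executed, which is what keeps recovery fast in faulty scenarios.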
|