51.
Constant-RMR Implementations of CAS and Other Synchronization Primitives Using Read and Write Operations. Golab, Wojciech, 15 February 2011.
We consider asynchronous multiprocessors where processes communicate only by reading or writing shared memory. We show how to implement consensus, all comparison
primitives (such as CAS and TAS), and load-linked/store-conditional using only a constant number of remote memory references (RMRs), in both the cache-coherent and the
distributed-shared-memory models of such multiprocessors. Our implementations are
blocking, rather than wait-free: they ensure progress provided all processes that invoke
the implemented primitive are live.
Our results imply that any algorithm using read and write operations, comparison primitives, and load-linked/store-conditional can be simulated by an algorithm that uses read and write operations only, with at most a constant-factor increase in RMR complexity.
52.
Local-spin Algorithms for Variants of Mutual Exclusion Using Read and Write Operations. Danek, Robert, 30 August 2011.
Mutual exclusion (ME) is used to coordinate access to shared resources by concurrent
processes. We investigate several new N-process shared-memory algorithms for variants
of ME, each of which uses only reads and writes, and is local-spin, i.e., has bounded
remote memory reference (RMR) complexity. We study these algorithms under two
different shared-memory models: the distributed shared-memory (DSM) model, and the
cache-coherent (CC) model. In particular, we present the first known algorithm for first-
come-first-served (FCFS) ME that has O(log N) RMR complexity in both the DSM and
CC models, and uses only atomic reads and writes. Our algorithm is also adaptive to
point contention, i.e., the number of processes that are simultaneously active during a
passage by some process. More precisely, the number of RMRs a process makes per
passage in our algorithm is Θ(min(c, log N)), where c is the point contention. We also
present the first known FCFS abortable ME algorithm that is local-spin and uses only
atomic reads and writes. This algorithm has O(N) RMR complexity in both the DSM
and CC models, and is in the form of a transformation from abortable ME to FCFS
abortable ME. In conjunction with other results, this transformation also yields the
first known local-spin group mutual exclusion algorithm that uses only atomic reads
and writes. Additionally, we present the first known local-spin k-exclusion algorithms
that use only atomic reads and writes and tolerate up to k − 1 crash failures. These algorithms have RMR complexity O(N) in both the DSM and CC models. The simplest
of these algorithms satisfies a new fairness property, called k-FCFS, that generalizes the
FCFS fairness property for ME algorithms. A modification of this algorithm satisfies the
stronger first-in-first-enabled (FIFE) fairness property. Finally, we present a modification
to the FIFE k-exclusion algorithm that works with non-atomic reads and writes. The
high-level structure of all our k-exclusion algorithms is inspired by Lamport’s famous
Bakery algorithm.
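To make the local-spin and RMR notions concrete, here is the classical two-process Peterson lock written with volatile reads and writes in Java. It is not one of the algorithms from this thesis (those support N processes and achieve O(log N) RMRs); it only illustrates why the memory model matters: in the CC model the busy-wait reads hit cached copies and each passage incurs roughly a constant number of remote references, while in the DSM model the spin variables may reside in the other process's memory module, so the same loop is not local-spin.

```java
// Classical two-process Peterson lock using only volatile reads and
// writes. Shown to illustrate spinning and RMR counting; it is not an
// algorithm from the thesis.
public final class PetersonLock {
    private volatile boolean flag0, flag1; // flagI: process I wants to enter
    private volatile int victim;           // the process that yields

    public void lock(int i) {              // i is 0 or 1
        if (i == 0) { flag0 = true; } else { flag1 = true; }
        victim = i;
        // Busy-wait while the other process wants in and we are the victim.
        // In the CC model these reads are served from the local cache until
        // the other process writes; in the DSM model they may be remote on
        // every iteration, so the loop is not local-spin there.
        while (otherFlag(i) && victim == i) { /* spin */ }
    }

    public void unlock(int i) {
        if (i == 0) { flag0 = false; } else { flag1 = false; }
    }

    private boolean otherFlag(int i) {
        return (i == 0) ? flag1 : flag0;
    }
}
```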
53.
Software Tools for Separating Distribution Concerns. Tilevich, Eli, 18 November 2005.
With the advent of the Internet, distributed programming has become a necessity for the majority of application domains. Nevertheless, programming distributed systems remains a delicate and complex task. This dissertation explores separating distribution concerns,
the process of transforming a centralized monolithic program into a distributed one. This research develops algorithms, techniques, and tools for separating distribution concerns
and evaluates the applicability of the developed artifacts by identifying the distribution
concerns that they separate and the common architectural characteristics of the centralized programs that they transform successfully. The thesis of this research is that software tools working with standard mainstream languages, systems software, and virtual machines can effectively and efficiently separate distribution concerns from application logic for object-oriented programs that use multiple distinct sets of resources. Among the specific technical contributions of this dissertation are (1) a general algorithm for call-by-copy-restore semantics in remote procedure calls for linked data structures, (2) an analysis heuristic that determines which application objects get passed to which parts of native (i.e., platform-specific) code in the language runtime system for platform-independent binary code applications, (3) a technique for injecting code in such applications that will convert objects to the right representation so that they can be accessed correctly inside both application
and native code, (4) an approach to maintaining the Java centralized concurrency and synchronization semantics over remote procedure calls efficiently, and (5) an approach to enabling the execution of legacy Java code remotely from a web browser.
The technical contributions of this dissertation have been realized in three software tools for separating distribution concerns: NRMI, middleware with copy-restore semantics; GOTECH, a program generator for distribution; and J-Orchestra, an automatic partitioning system. This dissertation presents several case studies of successfully applying the developed
tools to third-party programs.
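Contribution (1), call-by-copy-restore for linked data structures, can be illustrated with a small self-contained sketch. The names below are ours, not NRMI's API, and the restore step here writes back field values only, whereas the general algorithm in the dissertation also handles structural changes.

```java
// A minimal sketch of call-by-copy-restore for a linked structure.
// Node, remoteIncrementAll, and callByCopyRestore are illustrative
// names only; they are not NRMI's actual API.
public final class CopyRestoreDemo {
    static final class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    // Stand-in for the remote procedure: it mutates the copy it receives.
    static void remoteIncrementAll(Node head) {
        for (Node n = head; n != null; n = n.next) n.value++;
    }

    // Copy phase: deep-copy the list, as if marshalling it to the server.
    static Node deepCopy(Node head) {
        if (head == null) return null;
        return new Node(head.value, deepCopy(head.next));
    }

    // Restore phase: write the callee's changes back into the caller's
    // original nodes, so existing aliases in the caller see the updates.
    static void restore(Node original, Node copy) {
        while (original != null && copy != null) {
            original.value = copy.value;
            original = original.next;
            copy = copy.next;
        }
    }

    static void callByCopyRestore(Node callerList) {
        Node copy = deepCopy(callerList);   // marshal
        remoteIncrementAll(copy);           // "remote" execution on the copy
        restore(callerList, copy);          // unmarshal the changes back
    }

    public static void main(String[] args) {
        Node list = new Node(1, new Node(2, new Node(3, null)));
        Node alias = list.next;             // the caller holds an alias
        callByCopyRestore(list);
        System.out.println(alias.value);    // prints 3: the alias sees the update
    }
}
```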
54.
The Design of Fault Tolerance of Cluster Computing Platform. Liao, Yu-tien, 29 August 2012.
When nodes fail in a distributed application service, the system not only pays extra cost to recover the missing results but also places additional load on the scheduler. To avoid recomputing the entire workload whenever a fault occurs, only the data assigned to the failed nodes is recomputed on backup machines. This thesis therefore experiments with three methods and analyzes their trade-offs: N + N nodes, N + 1 nodes, and N + 1 nodes with probability. The third method assigns a weight to each job before dispatching it, then converts that weight into a probability and a nice value (as defined by SLURM [1]) that influence the scheduler's ordering of jobs. When a fault occurs, the results computed on the healthy nodes are returned to the control node, and the failed node's jobs are either reassigned or not reassigned to a backup machine so that complete results can still be obtained. Finally, we compare the advantages and disadvantages of the three methods.
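The third method, as described, turns each job's weight into a probability and a SLURM nice value before scheduling. The sketch below shows one plausible mapping; the normalization and the nice scale are our assumptions, since the abstract does not give the exact formulas (in SLURM, a larger nice value lowers a job's priority).

```java
// A hypothetical sketch of the third method: convert per-job weights into
// probabilities and SLURM-style nice values. The normalization and scaling
// below are assumptions for illustration, not formulas from the thesis.
public final class WeightToNice {
    // Normalize weights into probabilities that sum to 1.
    static double[] toProbabilities(double[] weights) {
        double sum = 0.0;
        for (double w : weights) sum += w;
        double[] p = new double[weights.length];
        for (int i = 0; i < weights.length; i++) p[i] = weights[i] / sum;
        return p;
    }

    // Map a probability to a nice value: heavier (more probable) jobs get a
    // smaller nice value, i.e. higher priority under SLURM. NICE_RANGE is an
    // arbitrary illustrative scale.
    static final int NICE_RANGE = 100;

    static int toNice(double probability) {
        return (int) Math.round((1.0 - probability) * NICE_RANGE);
    }

    public static void main(String[] args) {
        double[] weights = {5.0, 3.0, 2.0};
        double[] p = toProbabilities(weights);
        for (int i = 0; i < p.length; i++) {
            System.out.printf("job %d: p=%.2f nice=%d%n", i, p[i], toNice(p[i]));
        }
    }
}
```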
55.
Fault tolerant pulse synchronization. Deconda, Keerthi, 15 May 2009.
Pulse synchronization is the emergence of spontaneous, synchronized firing across a network of sensor nodes. In the pulse synchronization model, all nodes across a network produce a pulse, or "fire", at regular intervals even without access to a shared global time. Previous researchers have proposed the Reachback Firefly algorithm for pulse synchronization, in which nodes react to the firings of other nodes by changing their period. We propose an extension to this algorithm that tolerates arbitrary, or Byzantine, faults of nodes. Our algorithm queues up all the firings heard in the current cycle and discards outliers at the end of the cycle. An adjustment is computed from the remaining values and used as the starting point of the next cycle. Through simulation we validate the performance of our algorithm and study its overhead in terms of convergence time and periodicity. The simulation considers two specific kinds of Byzantine faults: the No Jump model, in which faulty nodes follow their own firing cycle without reacting to firings heard from other nodes, and the Random Jump model, in which faulty nodes fire at a random time in their cycle.
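The per-cycle computation described above (queue the firings heard, discard outliers at the end of the cycle, average the rest into an adjustment) might look roughly like the sketch below; the specific outlier rule, trimming a fixed fraction of the extreme values, is our assumption, as the abstract does not specify one.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the per-cycle adjustment: collect the firing observations
// heard during the current cycle, discard outliers at the end of the
// cycle, and average what remains to obtain the starting point of the
// next cycle. The trimming rule (drop the smallest and largest fraction
// of values) is an assumption for illustration.
public final class FiringAdjustment {
    static double adjustment(List<Double> firingsHeard, double trimFraction) {
        if (firingsHeard.isEmpty()) return 0.0;
        List<Double> sorted = new ArrayList<>(firingsHeard);
        Collections.sort(sorted);
        int drop = Math.min((int) Math.floor(sorted.size() * trimFraction),
                            sorted.size() / 2);
        List<Double> kept = sorted.subList(drop, sorted.size() - drop);
        if (kept.isEmpty()) return 0.0;
        double sum = 0.0;
        for (double v : kept) sum += v;
        return sum / kept.size();   // used as the start of the next cycle
    }

    public static void main(String[] args) {
        // Phase offsets heard this cycle; 0.95 could come from a Byzantine
        // node firing at a random point in its cycle (Random Jump model).
        List<Double> heard = List.of(0.10, 0.12, 0.11, 0.95, 0.09);
        System.out.println(adjustment(heard, 0.2));   // prints 0.11
    }
}
```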
56.
Peer-to-peer support for Matlab-style computing. Agrawal, Rajeev, 30 September 2004.
Peer-to-peer technologies have shown great promise for sharing remote resources effectively. The resources shared by peers may be information, bandwidth, storage space, or computing power. When used properly, peer-to-peer systems scale well, are dynamic, autonomous, and fully distributed, and can exploit the heterogeneity of peers effectively. They provide an efficient infrastructure for applications seeking to distribute numerical computation. In this thesis, we investigate the feasibility of using a peer-to-peer infrastructure to distribute the computational load of Matlab and similar applications, in order to achieve performance benefits and scalability. We also develop a proof-of-concept application that distributes the computation of a Matlab-style application.
57.
Distributed computing and cryptography with general weak random sources. Li, Xin, Ph. D., 14 August 2015.
The use of randomness in computer science is ubiquitous. Randomized protocols have turned out to be much more efficient than their deterministic counterparts. In addition, many problems in distributed computing and cryptography are impossible to solve without randomness. However, these applications typically require uniform random bits, while in practice almost all natural random phenomena are biased. Moreover, even originally uniform random bits can be damaged if an adversary learns some partial information about these bits. In this thesis, we study how to run randomized protocols in distributed computing and cryptography with imperfect randomness. We use the most general model for imperfect randomness where the weak random source is only required to have a certain amount of min-entropy. One important tool here is the randomness extractor. A randomness extractor is a function that takes as input one or more weak random sources, and outputs a distribution that is close to uniform in statistical distance. Randomness extractors are interesting in their own right and are closely related to many other problems in computer science. Giving efficient constructions of randomness extractors with optimal parameters is one of the major open problems in the area of pseudorandomness. We construct network extractor protocols that extract private random bits for parties in a communication network, assuming that they each start with an independent weak random source, and some parties are corrupted by an adversary who sees all communications in the network. These protocols imply fault-tolerant distributed computing protocols and secure multi-party computation protocols where only imperfect randomness is available. The probabilistic method shows that there exists an extractor for two independent sources with logarithmic min-entropy, while known constructions are far from achieving these parameters. In this thesis we construct extractors for two independent sources with any linear min-entropy, based on a computational assumption. We also construct the best known extractors for three independent sources and affine sources. Finally we study the problem of privacy amplification. In this model, two parties share a private weak random source and they wish to agree on a private uniform random string through communications in a channel controlled by an adversary, who has unlimited computational power and can change the messages in arbitrary ways. All previous results assume that the two parties have local uniform random bits. We show that this problem can be solved even if the two parties only have local weak random sources. We also improve previous results in various aspects by constructing the first explicit non-malleable extractor and giving protocols based on this extractor.
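As a concrete illustration of the definition (not one of the constructions from this thesis), the classical Chor–Goldreich extractor outputs the inner product modulo 2 of two independent n-bit weak sources; the output bit is statistically close to uniform whenever the sources' min-entropies sum to sufficiently more than n.

```java
// The classical Chor–Goldreich two-source extractor: the inner product of
// two independent n-bit weak sources, taken modulo 2. The bit is close to
// uniform when the sources' min-entropies sum to (roughly) more than n.
// Shown only to illustrate what a randomness extractor is; it is not a
// construction from the thesis.
public final class InnerProductExtractor {
    static int extract(boolean[] x, boolean[] y) {
        if (x.length != y.length) throw new IllegalArgumentException("length mismatch");
        int bit = 0;
        for (int i = 0; i < x.length; i++) {
            if (x[i] && y[i]) bit ^= 1;   // accumulate x·y mod 2
        }
        return bit;
    }

    public static void main(String[] args) {
        boolean[] x = {true, false, true, true};    // sample from source 1
        boolean[] y = {false, true, true, false};   // sample from source 2
        System.out.println(extract(x, y));          // one nearly uniform bit
    }
}
```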
58.
Distributed cost-optimal planning. Jezequel, Loïg, 13 November 2012.
Automated planning is a field of artificial intelligence that aims at proposing methods to choose and order sets of actions so as to reach a given goal. A sequence of actions solving a planning problem is usually called a plan. In many cases one must find not just any plan but an optimal one. This notion of optimality can be defined by assigning costs to actions: an optimal plan is then a plan minimizing the sum of the costs of its actions. Planning problems are typically solved using algorithms such as A* that search for minimum-cost paths in graphs. In this thesis we focus on a particular approach to planning called factored planning, or modular planning. The idea is to decompose a planning problem into almost independent sub-problems (or components), search for plans in each component, and then assemble these local plans into a global plan for the original planning problem. The main interest of this approach is that, for some classes of planning problems, the components can be much simpler to solve than the original problem. First, we study the use of message-passing algorithms for factored planning. In this case the components of a problem are represented by weighted automata, which makes it possible to handle all the plans of a sub-problem and to perform factored cost-optimal planning; achieving cost-optimality of plans was not possible with previous factored planning methods. This approach is then extended with approximate resolution techniques ("turbo" algorithms) and with another representation of components for handling actions that are read-only in some components. We then describe another approach to factored planning: a distributed version of the well-known A* algorithm. Each component is managed by an agent responsible for finding a local plan in it, using information about its own component as well as information about the rest of the problem transmitted by the other agents. The main difference between this approach and the previous one is that it is not only modular but also distributed.
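Since both approaches build on minimum-cost path search, a compact sketch of the classical, centralized A* algorithm is given below for reference; the graph encoding and the zero heuristic in the example are simplifications of ours, and this is not the distributed, per-agent variant developed in the thesis.

```java
import java.util.*;
import java.util.function.IntToDoubleFunction;

// Classical A*: minimum-cost path search in a weighted graph guided by an
// admissible heuristic (one that never overestimates the remaining cost).
// This is the centralized algorithm that factored and distributed planning
// build on, not the per-agent distributed variant from the thesis.
public final class AStar {
    // edges.get(u) maps each neighbour v to the cost of the edge u -> v.
    static List<Integer> search(Map<Integer, Map<Integer, Double>> edges,
                                int start, int goal, IntToDoubleFunction h) {
        Map<Integer, Double> g = new HashMap<>();       // best known cost so far
        Map<Integer, Integer> parent = new HashMap<>(); // for path reconstruction
        PriorityQueue<Integer> frontier = new PriorityQueue<>(
            Comparator.comparingDouble((Integer n) ->
                g.getOrDefault(n, Double.MAX_VALUE) + h.applyAsDouble(n)));

        g.put(start, 0.0);
        frontier.add(start);
        while (!frontier.isEmpty()) {
            int u = frontier.poll();
            if (u == goal) return reconstruct(parent, goal);
            for (Map.Entry<Integer, Double> e : edges.getOrDefault(u, Map.of()).entrySet()) {
                double candidate = g.get(u) + e.getValue();
                if (candidate < g.getOrDefault(e.getKey(), Double.MAX_VALUE)) {
                    g.put(e.getKey(), candidate);
                    parent.put(e.getKey(), u);
                    frontier.remove(e.getKey()); // re-insert with the improved cost
                    frontier.add(e.getKey());
                }
            }
        }
        return List.of(); // no plan reaches the goal
    }

    static List<Integer> reconstruct(Map<Integer, Integer> parent, int goal) {
        LinkedList<Integer> path = new LinkedList<>();
        for (Integer n = goal; n != null; n = parent.get(n)) path.addFirst(n);
        return path;
    }

    public static void main(String[] args) {
        Map<Integer, Map<Integer, Double>> edges = Map.of(
            0, Map.of(1, 1.0, 2, 4.0),
            1, Map.of(2, 1.0, 3, 5.0),
            2, Map.of(3, 1.0));
        // Zero heuristic: A* degenerates to Dijkstra's algorithm, still optimal.
        System.out.println(search(edges, 0, 3, n -> 0.0)); // prints [0, 1, 2, 3]
    }
}
```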
59.
Creating dynamic application behavior for distributed performance analysis. Lepler, Joerg, January 1998.
No description available.
60.
The morphing architecture: runtime evolution of distributed applications. Williams, Nicholas P., Unknown Date.
No description available.