1 |
Designing a universal name service. Ma, Chaoying. January 1992.
No description available.
|
2 |
A cache coherence protocol for concurrency control and recovery in distributed object-oriented systems. Min, Sung-Gi. January 1993.
No description available.
|
3 |
Observation spaces and timed processes. Jeffrey, Alan. January 1991.
No description available.
|
4 |
Physical design of distributed object-oriented software. Durrant, Jonathan M. January 1998.
No description available.
|
5 |
OurFileSystem. Gass, Robert Benjamin. 21 April 2014.
OurFileSystem (OFS) is a peer-to-peer file and metadata sharing program. Peers freely join the network, but must be granted access to groups in which metadata and files are shared. Any peer may create a group and grant others access to the group. Group members have different degrees of authority to grant others access and set their authority. Metadata for files is created by users within the context of a group and distributed to all members of the group in the form of a post. Post templates can be created to set fields of metadata. Templates are distributed to all members of a group, and one can be selected when creating a post or searching for files. Metadata in posts is indexed, and sophisticated search on the metadata can be performed locally to help users find files of interest quickly. Files found during a search may be downloaded from peers upon request. Pieces of files are downloaded from as many different peers as possible to maximize bandwidth. Peers within a group may also be marked as bad locally. If a user marks another peer as bad within the context of a group, posts from that peer to the group are deleted and not shared with others. Furthermore, any peer that was granted access by a peer marked as bad is also marked bad. No further posts or authorizations are ever accepted from any peer marked as bad. OFS also supports small public and private messages, which are distributed to all peers in the network. Private messages are encrypted so that only the intended peer can decrypt the message. Lastly, OFS integrates well with anonymous overlay networks that support SOCKS proxies, such as Tor. I2P support has also been explicitly added.
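The transitive distrust rule described above lends itself to a compact illustration. The sketch below is a generic model of that rule only, with hypothetical names (Group, grant, mark_bad); it is not drawn from the OFS code.

```python
# Illustrative sketch of transitive "bad peer" marking within a group.
# Names and structure are invented for the example, not taken from OFS.

class Group:
    def __init__(self, creator):
        self.granted_by = {creator: None}   # peer -> peer that granted it access
        self.bad = set()                    # peers marked bad in this group
        self.posts = {}                     # peer -> posts received from that peer

    def grant(self, granter, new_peer):
        if granter in self.granted_by and granter not in self.bad:
            self.granted_by[new_peer] = granter

    def add_post(self, peer, post):
        if peer in self.granted_by and peer not in self.bad:
            self.posts.setdefault(peer, []).append(post)

    def mark_bad(self, peer):
        """Mark a peer bad, drop its posts, and recursively mark every peer
        whose access was granted by a bad peer."""
        if peer in self.bad:
            return
        self.bad.add(peer)
        self.posts.pop(peer, None)          # delete and stop sharing its posts
        for other, granter in list(self.granted_by.items()):
            if granter == peer:
                self.mark_bad(other)        # distrust propagates to grantees

g = Group("alice")
g.grant("alice", "bob")
g.grant("bob", "carol")
g.mark_bad("bob")
assert "carol" in g.bad   # carol was admitted by bob, so she is marked bad too
```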
|
6 |
On the Performance Analysis of Large Scale, Dynamic, Distributed and Parallel Systems. Ardelius, John. January 2013.
Evaluating the performance of large distributed applications is an important and non-trivial task. With the onset of Internet-wide applications there is an increasing need to quantify reliability, dependability and performance of these systems, both as a guide in system design and as a means to understand the fundamental properties of large-scale distributed systems. Previous research has mainly focused either on formalised models, where system properties can be deduced and verified using rigorous mathematics, or on measurements and experiments on deployed applications. Our aim in this thesis is to study models on an abstraction level lying between the two ends of this spectrum. We adopt a model of distributed systems inspired by methods used in the study of large-scale systems of particles in physics, and model the application nodes as a set of interacting particles, each with an internal state, whose actions are specified by the application program. We apply our modeling and performance evaluation methodology to four different distributed and parallel systems.

The first system is the distributed hash table (DHT) Chord running in a dynamic environment. We study the system under two scenarios. First, we study how performance (in terms of lookup latency) is affected in a network with finite communication latency. We show that an average delay, in conjunction with other parameters describing changes in the network (such as timescales for network repair and join and leave processes), induces fundamentally different system performance. We also verify our analytical predictions via simulations. In the second scenario we introduce network address translators (NATs) to the network model. This makes the overlay topology non-transitive, and we explore the implications of this fact for various performance metrics such as lookup latency, consistency and load balance. The latter analysis is mainly simulation-based. Even though these two studies focus on a specific DHT, many of our results can easily be translated to other similar ring-based DHTs with long-range links, and the same methodology can be applied even to DHTs based on other geometries.

The second type of system studied is an unstructured gossip protocol running a distributed version of the famous Bellman-Ford algorithm. The algorithm, called GAP, generates a spanning tree over the participating nodes, and the question we set out to study is how reliable this structure is (in terms of generating accurate aggregate values at the root) in the presence of node churn. All our analytical results are also verified using simulations.

The third system studied is a content distribution network (CDN) of interconnected caches in an aggregation access network. In this model, content, which sits at the leaves of the cache hierarchy tree, is requested by end users. Requests can then either be served by the first cache level or sent further up the tree. We study the performance of the whole system under two cache eviction policies, namely LRU and LFU. We compare our analytical results with traces from related caching systems.

The last system is a work-stealing heuristic for task distribution on the TileraPro64 chip. This system has access to a shared memory and is therefore classified as a parallel system. We create a model for the dynamic generation of tasks as well as how they are executed and distributed among the participating nodes. We study how the heuristic scales when the number of nodes exceeds the number of processors on the chip, as well as how different work-stealing policies compare with each other. The work on this model is mainly simulation-based.
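As an aside on the two eviction policies named in the abstract, the following minimal sketch contrasts LRU and LFU caches on a shared request trace. It is a generic illustration under simple assumptions, not the thesis's analytical cache model.

```python
# Minimal, self-contained sketch of the two eviction policies mentioned above
# (LRU and LFU). Generic illustration only; not the thesis's CDN cache model.
from collections import OrderedDict, Counter

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()

    def request(self, item):
        hit = item in self.store
        if hit:
            self.store.move_to_end(item)        # most recently used
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[item] = True
        return hit

class LFUCache:
    def __init__(self, capacity):
        self.capacity, self.store, self.freq = capacity, set(), Counter()

    def request(self, item):
        self.freq[item] += 1
        hit = item in self.store
        if not hit:
            if len(self.store) >= self.capacity:
                victim = min(self.store, key=lambda x: self.freq[x])
                self.store.remove(victim)       # evict least frequently used
            self.store.add(item)
        return hit

# Compare hit rates of both policies on the same request trace.
trace = [1, 2, 1, 3, 1, 4, 1, 5, 2, 1, 2, 6, 1, 2, 7]
for cache in (LRUCache(3), LFUCache(3)):
    hits = sum(cache.request(x) for x in trace)
    print(type(cache).__name__, "hit rate:", hits / len(trace))
```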
|
7 |
Fault-tolerant group communication protocols for asynchronous systems. Macedo, Raimundo Jose de Araujo. January 1994.
It is widely accepted that group communication (multicast) is a powerful abstraction that can be used whenever a collection of distributed processes cooperate to achieve a common goal such as load-sharing or fault-tolerance. Due to the uncertainties inherent to distributed systems (emerging from communication and/or process failures), group communication protocols have to face situations where, for instance, a sender process fails when a multicast is underway or where messages from different senders arrive in an inconsistent order at different destination processes. Further complications arise if processes belong to multiple groups. In this thesis, we make use of logical clocks [Lamport78] to develop the concept of Causal Blocks. We show that Causal Blocks provide a concise method for deducing ordering relationships between messages exchanged by processes of a group, resulting in simple methods for dealing with multiple groups. Based on the Causal Blocks representation, we present a protocol for total order message delivery which has constant and low message space overhead (i.e. the protocol-related information contained in a multicast message is small). We also present causal order protocols with different trade-offs between message space overhead and speed of message delivery. Furthermore, we show how the Causal Blocks representation can be used to easily deduce and maintain reliability information. Our protocols are fault-tolerant: ordering and liveness are preserved even if group membership changes occur (due to failures such as process crashes or network partitions). The total order protocol, together with a novel flow control mechanism, has been implemented over a set of networked Unix workstations, and experiments have been carried out to analyse its performance in varied group configurations.
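For readers unfamiliar with logical clocks, the following sketch illustrates the standard causal-delivery idea that such protocols build on, using vector clocks. It is a generic illustration with hypothetical names; it is not the Causal Blocks representation or the total order protocol of the thesis.

```python
# Generic illustration of causal message ordering with vector clocks.
# This sketches the standard logical-clock idea, NOT the Causal Blocks protocol.

class Process:
    def __init__(self, pid, n):
        self.pid, self.clock = pid, [0] * n

    def send(self):
        self.clock[self.pid] += 1
        return (self.pid, list(self.clock))     # message carries the sender's clock

    def deliverable(self, msg):
        """Causal delivery rule: the message is the next one expected from its
        sender, and everything it causally depends on has already been delivered."""
        sender, vc = msg
        return (vc[sender] == self.clock[sender] + 1 and
                all(vc[i] <= self.clock[i] for i in range(len(vc)) if i != sender))

    def deliver(self, msg):
        sender, vc = msg
        self.clock = [max(a, b) for a, b in zip(self.clock, vc)]

# p0 multicasts m1; p1 delivers m1 and then multicasts m2;
# p2 must not deliver m2 before m1.
p0, p1, p2 = Process(0, 3), Process(1, 3), Process(2, 3)
m1 = p0.send()
p1.deliver(m1)
m2 = p1.send()
assert not p2.deliverable(m2)   # m2 causally depends on m1, which p2 has not seen
assert p2.deliverable(m1)
p2.deliver(m1)
assert p2.deliverable(m2)
```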
|
8 |
Naming issues in the design of transparently distributed operating systems. Stroud, Robert James. January 1987.
Naming is of fundamental importance in the design of transparently distributed operating systems. A transparently distributed operating system should be functionally equivalent to the systems of which it is composed. In particular, the names of remote objects should be indistinguishable from the names of local objects. In this thesis we explore the implications that this recursive notion of transparency has for the naming mechanisms provided by an operating system. In particular, we show that a recursive naming system is more readily extensible than a flat naming system by demonstrating that it is in precisely those areas in which a system is not recursive that transparency is hardest to achieve. However, this is not so much a problem of distribution as a problem of scale. A system which does not scale well internally will not extend well to a distributed system. Building a distributed system out of existing systems involves joining the name spaces of the individual systems together. When combining name spaces it is important to preserve the identity of individual objects. Although unique identifiers may be used to distinguish objects within a single name space, we argue that it is difficult, if not impossible, in practice to guarantee the uniqueness of such identifiers between name spaces. Instead, we explore the possibility of using hierarchical identifiers, unique only within a localised context. However, we show that such identifiers cannot be used in an arbitrary naming graph without compromising the notion of identity and hence violating the semantics of the underlying system. The only alternative is to sacrifice a deterministic notion of identity by using random identifiers to approximate global uniqueness with a known probability of failure (which can be made arbitrarily small if the overall size of the system is known in advance).
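The closing point, that random identifiers approximate global uniqueness with a known and arbitrarily small probability of failure, can be made concrete with the standard birthday approximation. The sketch below is generic arithmetic, not a construction from the thesis.

```python
# Back-of-the-envelope illustration: with b-bit random identifiers and n objects,
# the probability of any collision is roughly 1 - exp(-n*(n-1) / 2^(b+1))
# (the birthday approximation). Generic math, not taken from the thesis.
import math

def collision_probability(n_objects, id_bits):
    space = 2 ** id_bits
    return -math.expm1(-n_objects * (n_objects - 1) / (2 * space))

# If the overall system size is known in advance, enough bits can be chosen
# to make the failure probability negligible.
for bits in (32, 64, 128):
    p = collision_probability(n_objects=10**6, id_bits=bits)
    print(f"{bits:3d}-bit ids, 1e6 objects: collision probability ~ {p:.3e}")
```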
|
9 |
Constructing reliable distributed applications using actions and objects. Wheater, Stuart Mark. January 1989.
A computation model for distributed systems which has found widespread acceptance is that of atomic actions (atomic transactions) controlling operations on persistent objects. Much current research work is oriented towards the design and implementation of distributed systems supporting such an object and action model. However, little work has been done to investigate the suitability of such a model for building reliable distributed systems. Atomic actions have many properties which are desirable when constructing reliable distributed applications, but these same properties can also prove to be obstructive. This thesis examines the suitability of atomic actions for building reliable distributed applications. Several new structuring techniques are proposed, providing more flexibility than hitherto possible for building a large class of applications. The proposed new structuring techniques are: Serialising Actions, Top-Level Independent Actions, N-Level Independent Actions, Common Actions and Glued Actions. A new generic form of action is also proposed, the Coloured Action, which provides more control over concurrency and recovery than traditional actions. It will be shown that Coloured Actions provide a uniform mechanism for implementing most of the new structuring techniques, and at the same time are no harder to implement than normal actions. Thus this proposal is of practical importance. The suitability of the new structuring techniques will be demonstrated by considering a number of applications. It will be shown that the proposed techniques provide natural tools for composing distributed applications.
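As background to the action and object model the abstract starts from, the following toy sketch shows an atomic action that either commits or rolls back its operations on persistent objects. It is a generic illustration with hypothetical names; it does not represent Coloured Actions or any of the structuring techniques proposed in the thesis.

```python
# Toy illustration of the baseline action/object model: an atomic action that
# either commits all of its operations on persistent objects or aborts and
# restores their prior state. Generic sketch only.

class PersistentObject:
    def __init__(self, value):
        self.value = value

class AtomicAction:
    def __init__(self):
        self._undo = []          # before-images for recovery on abort

    def write(self, obj, new_value):
        self._undo.append((obj, obj.value))   # record the old state
        obj.value = new_value

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            for obj, old in reversed(self._undo):
                obj.value = old   # abort: roll every object back
        return False              # commit is simply "do not undo"

account_a, account_b = PersistentObject(100), PersistentObject(0)
try:
    with AtomicAction() as act:
        act.write(account_a, account_a.value - 150)
        act.write(account_b, account_b.value + 150)
        if account_a.value < 0:
            raise RuntimeError("insufficient funds")   # forces an abort
except RuntimeError:
    pass
assert (account_a.value, account_b.value) == (100, 0)   # all-or-nothing
```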
|
10 |
Coding, Computing, and Communication in Distributed Storage Systems. Gerami, Majid. January 2016.
Conventional studies in communication networks mostly focus on securely and reliably transmitting data from a source node (or multiple source nodes) to multiple destinations. A more general problem appears when the destination nodes are interested in obtaining functions of the data available in distributed source nodes. For obtaining a function, transmitting all the data to a destination node and then computing the function might be inefficient. In order to exploit the network resources efficiently, the general problem calls for distributed computing in combination with coding and communication. This problem has applications in distributed systems, e.g., in wireless sensor networks, in distributed storage systems, and in distributed computing systems. Following this general problem formulation, we study the optimal and secure recovery of lost data in storage nodes and the reconstruction of a version of a file in distributed storage systems. The significance of this study stems from the fact that new trends in communications, including big data, the Internet of things, and low-latency, high-reliability communication, challenge existing centralized data storage systems. Distributed storage systems can rectify those issues by distributing thousands of storage nodes (possibly around the globe) and benefiting users by bringing data into their proximity. Yet, distributing the storage nodes brings new challenges. In these distributed systems, where storage nodes are connected through links and servers, communication plays a main role in their performance. In addition, a part of the network may fail, or, due to communication failure or delay, multiple versions of a file may exist. Moreover, an intruder can overhear the communications between storage nodes and obtain some information about the stored data. There are therefore challenges in reliability, security, availability, and consistency. To increase reliability, systems need to store redundant data in storage nodes and employ error control codes. To maintain reliability in a dynamic environment where storage nodes can fail, the system should have an autonomous repair process; namely, it should regenerate failed nodes with the help of other storage nodes. The repair process demands bandwidth and energy, or, in general, incurs transmission costs. We propose novel techniques to reduce the repair cost in distributed storage systems.

First, we propose surviving-node cooperation in repair, meaning that surviving nodes can combine their received data with their own stored data and then transmit toward the new node. In addition, we study the repair problem in multi-hop networks and consider the cost of transmitting data between storage nodes. While the classical repair model assumes the availability of direct links between the new node and the surviving nodes, we consider the case where such links may not be available, either due to failure or due to their cost. We formulate an optimization problem to minimize the repair cost and compare two systems, namely with and without surviving-node cooperation.

Second, we study the repair problem where the links between storage nodes are lossy, e.g., due to server congestion, load balancing, or an unreliable physical layer (wireless links). We model the lossy links as packet erasure channels and then derive the fundamental bandwidth-storage tradeoff in packet erasure networks. In addition, we propose dedicated-for-repair storage nodes to reduce the repair bandwidth.
Third, we generalize the repair model by proposing the concept of partial repair. That is, storage nodes may lose parts of their stored data. In partial repair, the lost data is recovered by exchanging data between storage nodes and using the data still available in the storage nodes as side information. For efficient partial repair, we propose two-layer coding in distributed storage systems and then derive the optimal bandwidth for partial repair.

Fourth, we study security in distributed storage systems, investigating security in partial repair. In particular, we propose codes that make partial repair secure in the sense of both strong and weak information-theoretic security.

Finally, we study consistency in distributed storage systems. Consistency means that distinct users obtain the latest version of a file in a system that stores multiple versions of a file. Given the probability that a storage node receives a version and a constraint on the node's storage space, we aim to find the optimal encoding of the multiple versions of a file that maximizes the probability that a read client connecting to a number of storage nodes obtains the latest version, or a version close to the latest.
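To make the notion of repair traffic concrete, the sketch below repairs a failed node in the simplest erasure-coded setting: a single XOR parity across three data nodes. The node names and block sizes are invented for the example; this is not the regenerating, cooperative, or two-layer coding studied in the thesis. Regenerating codes improve on this baseline by letting the new node download less than the full file when it can contact more surviving nodes.

```python
# Minimal illustration of repairing a failed storage node with one XOR parity
# across three data nodes. Node names and block sizes are made up; this is the
# simplest possible setting, not the codes studied in the thesis.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Three data nodes plus one parity node, each storing a 4-byte block.
nodes = {
    "n1": b"\x01\x02\x03\x04",
    "n2": b"\x10\x20\x30\x40",
    "n3": b"\xaa\xbb\xcc\xdd",
}
nodes["parity"] = xor_blocks(list(nodes.values()))

# Node n2 fails. The new node downloads one block from every surviving node
# (repair traffic = 3 blocks here) and XORs them to regenerate the lost block.
failed = nodes.pop("n2")
survivors = [block for name, block in nodes.items()]
regenerated = xor_blocks(survivors)
assert regenerated == failed

repair_traffic = sum(len(b) for b in survivors)
print("repair traffic:", repair_traffic, "bytes to rebuild a", len(failed), "byte block")
```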
|