161
Design of a Network Independent Emergency Service. Khayltash, Golara. 28 February 2007.
Student Number: 9301997W
MSc thesis
School of Electrical and Information Engineering
Faculty of Engineering and the Built Environment

Emergency services are vital for the minimization of damage, injury and loss of life.
These services are, by definition, a combination of telecommunications and information services, and are distributed by nature. However, most current emergency services do not take advantage of emerging technology and are hence restricted in the functionality they offer.
This project proposes the design of a full information structure for an emergency call centre service, which can be offered as a service or application on any core network. As emergency services are distributed and combine both telecommunications and information services, an appropriate design tool that caters for these issues is the Reference Model for Open Distributed Processing (RM-ODP), which is used in the design of the emergency service. In addition, OSA/Parlay Application Programming Interfaces (APIs) are used to give the application access to telecommunication network functionality.
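As a hedged illustration of how such an application could access network functionality through OSA/Parlay-style gateway interfaces, consider the sketch below. The class and method names (`get_user_location`, `notify`, the stub gateway) are hypothetical simplifications invented for this example, not the actual OSA/Parlay API.

```python
class EmergencyCallHandler:
    """Sketch of an emergency service on top of an OSA/Parlay-style
    gateway. All names here are hypothetical; they only mirror the
    design's intent of querying the network for a caller's location."""

    def __init__(self, gateway, dispatch_centre):
        self.gateway = gateway
        self.dispatch_centre = dispatch_centre

    def handle_call(self, caller_id):
        # Ask the network for the caller's position (fixed or mobile);
        # degrade gracefully if the network cannot provide it.
        try:
            location = self.gateway.get_user_location(caller_id)
        except LookupError:
            location = None  # the operator must ask the caller instead
        incident = {"caller": caller_id, "location": location}
        # Forward the incident record to emergency response personnel.
        self.dispatch_centre.notify(incident)
        return incident


class StubGateway:
    """In-process stand-in for the telecom gateway."""

    def get_user_location(self, caller_id):
        if caller_id == "+27115551234":
            return (-26.19, 28.03)  # latitude, longitude
        raise LookupError(caller_id)


class StubDispatch:
    def notify(self, incident):
        print("dispatching:", incident)


handler = EmergencyCallHandler(StubGateway(), StubDispatch())
handler.handle_call("+27115551234")
```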
The enterprise viewpoint examines the design requirements and considerations for an emergency system, which is the first step in designing a service based on the RM-ODP guidelines. Secondly, the information viewpoint is defined, identifying the information flows between the objects and classes defined in the enterprise viewpoint with the aid of robustness diagrams and high-level message sequence charts. Next, the computational viewpoint describes the components that make up the service and the interfaces through which they communicate, enabling the distribution of the system to be visualized. In addition, the engineering and technology viewpoints are briefly touched upon.
The RM-ODP proves to be a useful tool in the design of this application. The use of OSA/Parlay APIs has also proved beneficial, enabling the application to run on any platform, irrespective of the level of functionality it already provides. The benefits that this design offers over conventional emergency services include: full access to the functionality of the service for callers and emergency response personnel, despite any limitations of their telecommunications network; location of a caller from a fixed or mobile phone; and ease and speed both of obtaining relevant emergency information and of sending it to emergency response personnel.
Finally, we recommend improvements in the reliability and accuracy of locating mobile phones, as well as the creation of ways to identify the location of VoIP users.
162
Controle de acesso para sistemas distribuídos / Access control for distributed systems. Souza, Marcos Tork. 22 November 2010.
The creation of frameworks for access control in distributed systems is made difficult by this class of systems' own characteristics, demanding changes both in the architecture of the framework and in the access control policy model usually employed in non-distributed systems. This work aims to solve, or at least mitigate, these problems by formalizing the requirements of this class of environments on two distinct fronts (architecture and access control policy model) and analyzing the impact each has on the other. Two fundamental conclusions are supported by this analysis: the framework needs to be built in the form of a distributed system, and although a policy model can indeed be chosen, its specification must be modified to adapt to the specific features of the environment. The DRBAC (Distributed Role Based Access Control) framework is built on a distributed architecture and applies the role-based access control policy model. The architecture was obtained by expanding the reference architecture of access control tools, and the policy model specification was adapted from the one standardized by NIST (National Institute of Standards and Technology). The validation of this work is carried out through a series of experiments conducted on a proof-of-concept implementation of the framework.
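As a rough illustration of the role-based model that DRBAC builds on (core RBAC only; the names below are invented for this sketch, and the distribution of policy state across nodes, which is the thesis's actual contribution, is omitted):

```python
ROLE_PERMS = {
    "analyst": {"report:read"},
    "editor": {"report:read", "report:write"},
    "admin": {"report:read", "report:write", "policy:edit"},
}
USER_ROLES = {"alice": {"analyst"}, "bob": {"admin"}}

def check_access(user, permission):
    """Grant access iff some role assigned to the user carries the
    requested permission. In a distributed setting these tables and
    checks must themselves be replicated or partitioned."""
    return any(permission in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(check_access("alice", "report:read"))  # True
print(check_access("alice", "policy:edit"))  # False
```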
163
Symmetry breaking in congested models: lower and upper bounds. Riaz, Talal. 01 August 2019.
A fundamental issue in many distributed computing problems is the need for nodes to distinguish themselves from their neighbors, in a process referred to as symmetry breaking. Many well-known problems, such as Maximal Independent Set (MIS), t-Ruling Set, Maximal Matching, and (Δ+1)-Coloring, belong to the class of problems that require symmetry breaking. These problems have been studied extensively in the LOCAL model, which assumes arbitrarily large message sizes, but not as much in the CONGEST and k-machine models, which assume messages of size O(log n) bits. This dissertation focuses on finding upper and lower bounds for symmetry-breaking problems, such as MIS and t-Ruling Set, in these congested models.
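For intuition, the canonical randomized symmetry-breaking routine is Luby-style MIS: each node draws a random value and joins the independent set when it beats all of its still-active neighbors. The centralized simulation below is illustrative background, not one of the dissertation's algorithms.

```python
import random

def luby_mis(adj):
    """Simulate Luby's randomized MIS phase by phase.

    adj: dict mapping node -> set of neighbors (undirected graph).
    Each phase corresponds to O(1) synchronous rounds in which nodes
    exchange O(log n)-bit random values, so it fits CONGEST."""
    active = set(adj)
    mis = set()
    while active:
        # Each active node draws a random value and sends it to neighbors.
        val = {v: random.random() for v in active}
        # A node joins the MIS if it beats all of its active neighbors.
        joined = {v for v in active
                  if all(val[v] < val[u] for u in adj[v] if u in active)}
        mis |= joined
        # Joined nodes and their neighbors become inactive.
        removed = joined | {u for v in joined for u in adj[v]}
        active -= removed
    return mis

# Example: MIS on a 5-cycle.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(luby_mis(cycle))
```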
Chapter 2 shows that an MIS can be computed in O(√(log n · log log n)) rounds for graphs of constant arboricity in the CONGEST model. Chapter 3 shows that the t-ruling set problem, for t ≥ 3, can be computed in o(log n) rounds in the CONGEST model. Moreover, it is shown that a 2-ruling set can be computed in o(log n) rounds for a large range of values of the maximum degree of the graph. In the k-machine model, k machines must work together to solve a problem on an arbitrary n-node graph, where n is typically much larger than k. Chapter 4 shows that any algorithm in the BEEP model (which assumes 'primitive' single-bit messages) with message complexity M and round complexity T can be simulated in O(t(M/k² + T) poly(log n)) rounds in the k-machine model. Using this result, it is shown that MIS, Minimum Dominating Set (MDS), and Minimum Connected Dominating Set (MCDS) can all be solved in O(poly(log n) · m/k²) rounds in the k-machine model, where m is the number of edges in the input graph. It is shown that a 2-ruling set can be computed even faster, in O((n/k² + k) poly(log n)) rounds, in the k-machine model. On the other hand, using information-theoretic techniques and a reduction to a communication complexity problem, an Ω(n/(k² poly(log n))) round lower bound for MIS in the k-machine model is also shown. As far as we know, this is the first example of a lower bound in the k-machine model for a symmetry-breaking problem.
Chapter 5 focuses on the Max Clique problem in the CONGEST model. Max Clique is trivially solvable in one round in the LOCAL model, since each node can share its entire neighborhood with all of its neighbors. However, in the CONGEST model, nodes have to choose what to communicate and along which communication links. Thus, in a sense, they have to break symmetry, and this is forced upon them by the bandwidth constraints. Chapter 5 shows that an O(n^{3/5})-approximation to Max Clique can be computed in O(1) rounds in the CONGEST model. This dissertation ends with open questions in Chapter 6.
164
Shared and distributed memory parallel algorithms to solve big data problems in biological, social network and spatial domain applications. Sharma, Rahil. 01 December 2016.
Big data refers to information which cannot be processed and analyzed using traditional approaches and tools, due to the four V's: sheer Volume, the Velocity at which data is received and processed, and data Variety and Veracity. Today massive volumes of data originate in domains such as geospatial analysis and biological and social networks. Hence, scalable algorithms for efficient processing of this massive data are a significant challenge in the field of computer science. One way to achieve such efficient and scalable algorithms is by using shared- and distributed-memory parallel programming models. In this thesis, we present a variety of such algorithms to solve problems in the domains mentioned above. We solve five problems that fall into two categories.
The first group of problems deals with the issue of community detection. Detecting communities in real-world networks is of great importance because communities consist of patterns that can be viewed as independent components, each of which has distinct features and can be detected based upon network structure. For example, communities in social networks can help target users for marketing purposes, provide user recommendations for connecting with others and joining communities or forums, etc. We develop a novel sequential algorithm to accurately detect community structures in biological protein-protein interaction networks, where a community corresponds to a functional module of proteins. Generally, such sequential algorithms are computationally expensive, which makes them impractical to use for large real-world networks. To address this limitation, we develop a new highly scalable Symmetric Multiprocessing (SMP) based parallel algorithm to detect high-quality communities in large subsections of social networks like Facebook and Amazon. Due to the SMP architecture, however, our algorithm cannot process networks whose size exceeds the RAM of a single machine. With the increasing size of social networks, community detection has become even more difficult, since network size can reach hundreds of millions of vertices and edges. Processing such massive networks requires several hundred gigabytes of RAM, which is only possible by adopting a distributed infrastructure. To address this, we develop a novel hybrid (shared + distributed memory) parallel algorithm to efficiently detect high-quality communities in massive Twitter and .uk domain networks.
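To convey the flavor of community detection (a classic sequential heuristic, shown for illustration; it is not one of the parallel algorithms developed in the thesis), label propagation repeatedly lets each vertex adopt the most frequent label among its neighbors:

```python
from collections import Counter
import random

def label_propagation(adj, max_iters=20):
    """Detect communities by iterative label propagation.

    adj: dict vertex -> list of neighbors. Returns vertex -> label;
    vertices sharing a label form a community."""
    labels = {v: v for v in adj}           # start with unique labels
    vertices = list(adj)
    for _ in range(max_iters):
        random.shuffle(vertices)           # process in random order
        changed = False
        for v in vertices:
            if not adj[v]:
                continue
            # Adopt the most common label among neighbors.
            best = Counter(labels[u] for u in adj[v]).most_common(1)[0][0]
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:                    # converged
            break
    return labels
```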
The second group of problems deals with the issue of efficiently processing spatial Light Detection and Ranging (LiDAR) data. LiDAR data is widely used in forest and agricultural crop studies, landscape classification, 3D urban modeling, etc. Technological advancements in LiDAR sensors have enabled highly accurate and dense LiDAR point clouds, resulting in massive data volumes that pose processing and storage challenges. We develop the first published landscape-driven data reduction algorithm, which uses the slope map of the terrain as a filter to reduce the data without sacrificing its accuracy. Our algorithm is highly scalable and adopts a shared-memory parallel architecture. We also develop a parallel interpolation technique that is used to generate highly accurate continuous terrains, i.e. Digital Elevation Models (DEMs), from discrete LiDAR point clouds.
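A toy version of slope-driven point reduction might look as follows. The gridding scheme, thresholds, and random thinning of flat areas are all assumptions made for this sketch; the published algorithm is more careful about preserving accuracy.

```python
import numpy as np

def reduce_by_slope(points, slope_map, cell, keep_flat=0.1, slope_thresh=15.0):
    """Thin a LiDAR point cloud using a precomputed slope map.

    points: (N, 3) array of x, y, z samples.
    slope_map: 2D array of per-cell slope in degrees.
    cell: grid cell size in the same units as x and y.
    Keeps every point in steep cells, where detail matters most,
    and only a random keep_flat fraction elsewhere."""
    ix = np.clip((points[:, 0] / cell).astype(int), 0, slope_map.shape[0] - 1)
    iy = np.clip((points[:, 1] / cell).astype(int), 0, slope_map.shape[1] - 1)
    steep = slope_map[ix, iy] >= slope_thresh
    lucky = np.random.rand(len(points)) < keep_flat
    return points[steep | lucky]
```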
165
Effective task assignment strategies for distributed systems under highly variable workloads. Broberg, James Andrew (james@broberg.com.au). January 2007.
Heavy-tailed workload distributions are commonly experienced in many areas of distributed computing. Such workloads are highly variable: a small number of very large tasks make up a large proportion of the workload, making the load very hard to distribute effectively. Traditional task assignment policies are ineffective under these conditions, as they were formulated on the assumption of an exponentially distributed workload. Size-based task assignment policies have been proposed to handle heavy-tailed workloads, but their application is limited by their static nature and their assumption of prior knowledge of a task's service requirement. This thesis analyses existing approaches to load distribution under heavy-tailed workloads, and presents a new generalised task assignment policy that significantly improves performance for many distributed applications by intelligently addressing the negative effects that highly variable workloads have on performance. Many problems associated with the modelling and optimisation of systems under highly variable workloads are then addressed by a novel technique that approximates these workloads with simpler mathematical representations, without losing any of their pertinent original properties. Finally, we obtain advanced queueing metrics (such as the variance of key measurements like waiting time and slowdown, which are difficult to obtain analytically) through rigorous simulation.
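The size-based policies discussed here can be sketched as SITA-style dispatching: each host serves only one band of task sizes, so small tasks never queue behind the heavy tail. Note that this sketch assumes task sizes are known on arrival, exactly the static assumption the thesis's generalised policy aims to relax.

```python
import bisect

class SizeIntervalDispatcher:
    """Route tasks to hosts by size band (SITA-style).

    cutoffs: sorted size boundaries defining len(cutoffs) + 1 hosts.
    Isolating small tasks from very large ones limits the damage a
    heavy-tailed workload does to mean waiting time and slowdown."""
    def __init__(self, cutoffs):
        self.cutoffs = sorted(cutoffs)
        self.queues = [[] for _ in range(len(cutoffs) + 1)]

    def dispatch(self, task_size):
        host = bisect.bisect_right(self.cutoffs, task_size)
        self.queues[host].append(task_size)
        return host

# Example: three hosts for small (<1), medium (<100), and large tasks.
d = SizeIntervalDispatcher([1.0, 100.0])
print(d.dispatch(0.5), d.dispatch(42.0), d.dispatch(1e4))  # 0 1 2
```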
166
Towards expressive, well-founded and correct Aspect-Oriented Programming. Südholt, Mario. 11 July 2007.
This thesis aims at two different goals. First, a uniform presentation of the major relevant research results on EAOP-based expressive aspects. We motivate that these instantiations enable aspects to be defined more concisely and provide better support for formal reasoning over AO programs than standard atomic approaches and other proposed non-atomic approaches. Concretely, four groups of results are presented in order to substantiate these claims:

1. The EAOP model, which features pointcuts defined over the execution history of an underlying base program. We present a taxonomy of the major language design issues pertaining to non-atomic aspect languages, such as pointcut expressiveness (e.g., finite-state based, Turing-complete) and aspect composition mechanisms (e.g., precedence specifications vs. Turing-complete composition programs).

2. Support for the formal definition of aspect-oriented programming based on different semantic paradigms (among others, operational semantics and denotational semantics). Furthermore, we have investigated the static analysis of interactions among aspects as well as applicability conditions for aspects. The corresponding foundational work on AOP has also permitted us to investigate different weaver definitions that generalize those used in other approaches.

3. Several instantiations of the EAOP model for aspects concerning sequential program executions, in particular for component-based and system-level programming. The former has resulted in formally defined notions of aspects for the modification of component protocols, while the latter has shown, in particular, that expressive aspects can be implemented in a performance-critical domain with negligible to reasonable overhead.

4. Two instantiations of the EAOP model for distributed and concurrent programming that significantly increase the abstraction level of aspect definitions by means of domain-specific abstractions.
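To make "pointcuts defined over the execution history" concrete, a finite-state pointcut can be modeled as a small automaton that is advanced on each join point and triggers advice in accepting states. This is an illustrative reconstruction of the idea, not EAOP's actual syntax or semantics:

```python
class HistoryPointcut:
    """Finite-state pointcut: matches a sequence of events rather
    than a single join point. transitions: (state, event) -> state.
    Illustrative reconstruction only, not EAOP's notation."""
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions
        self.state = start
        self.accepting = accepting

    def observe(self, event):
        # Advance on each join point; unmatched events keep the state.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state in self.accepting

# Fire advice only on a write that follows an open without a close.
pc = HistoryPointcut(
    transitions={(0, "open"): 1, (1, "close"): 0, (1, "write"): 2},
    start=0,
    accepting={2},
)
for ev in ["write", "open", "write"]:
    if pc.observe(ev):
        print("advice runs after:", ev)   # triggers on the last write
```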
167
CAESAR: A proposed method for evaluating security in component-based distributed information systems. Peterson, Mikael. January 2004.
Background: The network-centric defense requires a method for securing vast dynamic distributed information systems. Currently, there are no efficient methods for establishing the level of IT security in vast dynamic distributed information systems.

Purpose: The target of this thesis was to design a method capable of determining the level of IT security of vast dynamic component-based distributed information systems.

Method: The work was carried out by first defining concepts of IT security and distributed information systems and by reviewing basic measurement and modeling theory. Thereafter, previous evaluation methods aimed at determining the level of IT security of distributed information systems were reviewed. Last, using the theoretic foundation and the ideas from the reviewed efforts, a new evaluation method, aimed at determining the level of IT security of vast dynamic component-based distributed information systems, was developed.

Results: This thesis outlines a new method, CAESAR, capable of predicting the security level in parts of, or an entire, component-based distributed information system. The CAESAR method consists of a modeling technique and an evaluation algorithm. In addition, a Microsoft Windows compliant software tool, ROME, which allows the user to easily model and evaluate distributed systems using the CAESAR method, is made available.
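The abstract does not detail CAESAR's evaluation algorithm. Purely as a strawman for what evaluating security over a component model can involve, one might aggregate per-component ratings over a dependency graph with a weakest-link rule; every rule and number below is invented for illustration and is not CAESAR:

```python
def security_level(component, depends_on, rating, memo=None):
    """Weakest-link aggregation over a component dependency graph.

    rating: component -> standalone security score in [0, 1].
    depends_on: component -> list of components it relies on.
    A component is treated as no more secure than its weakest
    dependency. Invented illustration, not the CAESAR algorithm."""
    memo = {} if memo is None else memo
    if component in memo:
        return memo[component]
    memo[component] = rating[component]   # guard against cycles
    deps = depends_on.get(component, [])
    level = min([rating[component]] +
                [security_level(d, depends_on, rating, memo) for d in deps])
    memo[component] = level
    return level

deps = {"web": ["auth", "db"], "auth": ["db"], "db": []}
score = {"web": 0.9, "auth": 0.8, "db": 0.6}
print(security_level("web", deps, score))  # 0.6: the db is the weak link
```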
168
Performance and availability trade-offs in fault-tolerant middleware. Szentiványi, Diana. January 2002.
Distributing the functionality of an application is in common use. Systems that are built with this feature in mind also have to provide high levels of dependability. One way of assuring availability of services is to tolerate faults in the system, thereby avoiding failures. Building distributed applications is not an easy task. Providing fault tolerance is even harder.

Using middleware as a mediator between hardware and operating systems on one hand and high-level applications on the other is a solution to the above difficult problems. It can help application writers by providing automatic generation of code supporting e.g. fault tolerance mechanisms, and by offering interoperability and language independence.

For over twenty years, the research community has been producing results in the area of fault tolerance. However, experimental studies of different platforms are performed mostly by using made-up simple applications. Also, especially in the case of CORBA, there is no fault-tolerant middleware totally conforming to the standard and well studied in terms of trade-offs.

This thesis presents a fault-tolerant CORBA middleware built and evaluated using a realistic application running on top of it. It also contains results obtained from experiments with an alternative infrastructure implementing a robust fault-tolerant algorithm using basic CORBA. In the first infrastructure, a problem is the existence of single points of failure. On the other hand, overheads and recovery times fall within acceptable ranges. When using the robust algorithm, the problem of single points of failure disappears; the problems here are memory usage, as well as overhead values and recovery times that can become quite long.

Report code: LiU-TEK-LIC-2002:55.
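To see where the measured overheads and recovery times come from, consider a generic primary-backup invocation path (a minimal sketch, not the thesis's CORBA infrastructure): synchronous propagation to backups is the per-call overhead, and promoting a backup on failure is the recovery time.

```python
class Replica:
    """In-process stand-in for a remote replica object."""
    def __init__(self):
        self.state = {}

    def apply(self, op, *args):
        if op == "put":
            key, value = args
            self.state[key] = value
            return value
        return self.state.get(args[0])


class ReplicatedService:
    """Primary-backup replication sketch (not the thesis middleware).

    Every update is applied on the primary, then synchronously
    propagated to backups: the source of steady-state overhead.
    On primary failure, the next backup is promoted: the time this
    takes is the recovery time."""
    def __init__(self, replicas):
        self.replicas = replicas            # replicas[0] is the primary

    def invoke(self, op, *args):
        while self.replicas:
            primary = self.replicas[0]
            try:
                result = primary.apply(op, *args)
                for backup in self.replicas[1:]:  # sync propagation
                    backup.apply(op, *args)       # = per-call overhead
                return result
            except ConnectionError:
                self.replicas.pop(0)              # promote next backup
        raise RuntimeError("all replicas failed")

svc = ReplicatedService([Replica(), Replica()])
svc.invoke("put", "incident", 42)
print(svc.invoke("get", "incident"))  # 42, served even if one replica dies
```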
169
Relating Inter-Agent and Intra-Agent Specifications (The Case of Live Sequence Charts). Bontemps, Yves. 20 April 2005.
The problem of relating inter-agent and intra-agent behavioral specifications is investigated. These two views are complementary, in that the former is closer to scenario-based user requirements whereas the latter is design-oriented. We use a graphical, user-friendly and very simple language as the inter-agent specification language: Live Sequence Charts (LSC). LSC is presented and its properties are investigated: it is highly succinct, but inexpressive. There are essentially two ways to relate inter-agent and intra-agent specifications: (i) checking that an intra-agent specification is correct with respect to some LSC specification, and (ii) automatically building an intra-agent specification from an LSC specification. Several variants of these problems exist: closed/open systems and centralized/distributed systems. We give inefficient but optimal algorithms solving all of these problems, except for the synthesis of open distributed systems, which we show is undecidable. All the problems considered are difficult, even for a very restricted subset of LSCs, without alternatives, interleaving, conditions or loops. We investigate the cost of extending the language with control-flow constructs, conditions, real time, and symbolic instances. An implementation of the algorithms is proposed. The applicability of the language is illustrated on a real-world case study.
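As a tiny illustration of problem (i), a run of a design can be checked against an LSC-like mandatory scenario by reducing the scenario to an automaton; this drastically simplifies the actual LSC semantics (no alternatives, conditions, or interleaving):

```python
def conforms(run, scenario):
    """Check that every occurrence of scenario[0] in the run is
    eventually followed by the rest of the scenario, in order.
    A crude stand-in for checking a run against a universal LSC."""
    i = 0
    for event in run:
        if i == 0 and event == scenario[0]:
            i = 1                          # scenario activated
        elif 0 < i < len(scenario) and event == scenario[i]:
            i += 1
            if i == len(scenario):
                i = 0                      # scenario completed; reset
    return i == 0                          # no pending, unfinished scenario

print(conforms(["req", "ack", "done"], ["req", "ack", "done"]))  # True
print(conforms(["req", "done"], ["req", "ack", "done"]))         # False
```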
170
Distributed k-ary System: Algorithms for Distributed Hash Tables. Ghodsi, Ali. January 2006.
This dissertation presents algorithms for data structures called distributed hash tables (DHTs) or structured overlay networks, which are used to build scalable self-managing distributed systems. The provided algorithms guarantee lookup consistency in the presence of dynamism: they guarantee consistent lookup results in the presence of nodes joining and leaving. Similarly, the algorithms guarantee that routing never fails while nodes join and leave. Previous algorithms for lookup consistency either suffer from starvation, do not work in the presence of failures, or lack proof of correctness.

Several group communication algorithms for structured overlay networks are presented. We provide an overlay broadcast algorithm which, unlike previous algorithms, avoids redundant messages, reaching all nodes in O(log n) time while using O(n) messages, where n is the number of nodes in the system. The broadcast algorithm is used to build overlay multicast.

We introduce bulk operation, which enables a node to efficiently make multiple lookups or send a message to all nodes in a specified set of identifiers. The algorithm ensures that all specified nodes are reached in O(log n) time, sending a maximum of O(log n) messages per node, regardless of the input size of the bulk operation. Moreover, the algorithm avoids sending redundant messages. Previous approaches required multiple lookups, which consume more messages and can render the initiator a bottleneck. Our algorithms are used in DHT-based storage systems, where nodes can do thousands of lookups to fetch large files. We use the bulk operation algorithm to construct a pseudo-reliable broadcast algorithm. Bulk operations can also be used to implement efficient range queries.

Finally, we describe a novel way to place replicas in a DHT, called symmetric replication, that enables parallel recursive lookups. Parallel lookups are known to reduce latencies. However, costly iterative lookups have previously been used to do parallel lookups. Moreover, joins or leaves only require exchanging O(1) messages, while other schemes require at least log(f) messages for a replication degree of f. The algorithms have been implemented in a middleware called the Distributed k-ary System (DKS), which is briefly described.
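The redundancy-free broadcast can be conveyed by interval splitting over the identifier space: a node delegates disjoint sub-intervals to its fingers, so each node receives the message exactly once. The sketch below assumes a static, non-wrapping identifier space with Chord-like fingers; the DKS algorithm handles the general dynamic case.

```python
def broadcast(node, lo, hi, fingers, deliver):
    """Interval-splitting overlay broadcast, sketched.

    `node` is responsible for informing all nodes in [lo, hi).
    It delegates disjoint sub-intervals to its fingers, so every
    node is reached exactly once: O(n) messages, O(log n) depth."""
    deliver(node)
    # Fingers of this node that fall inside its interval, ascending.
    useful = [f for f in fingers[node] if lo <= f < hi]
    for i, f in enumerate(useful):
        nxt = useful[i + 1] if i + 1 < len(useful) else hi
        broadcast(f, f, nxt, fingers, deliver)  # delegate [f, nxt)

# 8 nodes with fingers at distances +1, +2, +4 (no wrap-around).
fingers = {v: [v + d for d in (1, 2, 4) if v + d < 8] for v in range(8)}
received = []
broadcast(0, 0, 8, fingers, received.append)
print(sorted(received))  # [0, 1, ..., 7]; each node exactly once
```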