121 |
Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations
De Grande, Robson E. 26 July 2012 (has links)
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that arise initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Because the elements that compose distributed simulation applications have dynamic execution characteristics, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases resource overhead and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Given the relevance of dynamically balancing load for distributed simulations, many balancing approaches have been proposed to offer sub-optimal balancing solutions, but they are limited to certain simulation aspects, specific to particular applications, or unaware of HLA-based simulation characteristics. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, to enable the development of such balancing schemes, a migration technique is employed to perform reliable, low-latency simulation load transfers. Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms to observe distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load that minimizes imbalances. To overcome the drawbacks of this scheme, such as bottlenecks, overhead, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of and the reaction to load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction into the load redistribution algorithm. The developed balancing systems successfully improved the use of shared resources and increased the performance of distributed simulations.
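The centralized scheme described above amounts to observing per-node load and moving simulation entities from overloaded to underloaded hosts. The Python sketch below plans one rebalancing round; the imbalance threshold, the greedy policy, and all names are illustrative assumptions and do not reproduce the thesis's reallocation policies.

```python
# Hypothetical sketch of one centralized rebalancing round; threshold and
# greedy policy are assumptions for illustration only.
from statistics import mean

def plan_migrations(node_loads, federate_loads, threshold=0.2):
    """node_loads: {node: cpu_load}; federate_loads: {node: {federate: load}}."""
    avg = mean(node_loads.values())
    overloaded = [n for n, l in node_loads.items() if l > avg * (1 + threshold)]
    underloaded = sorted((n for n, l in node_loads.items() if l < avg * (1 - threshold)),
                         key=node_loads.get)
    plan = []
    for src in sorted(overloaded, key=node_loads.get, reverse=True):
        for fed, load in sorted(federate_loads[src].items(), key=lambda kv: -kv[1]):
            if not underloaded or node_loads[src] <= avg:
                break
            dst = underloaded[0]
            plan.append((fed, src, dst))      # migrate federate fed from src to dst
            node_loads[src] -= load
            node_loads[dst] += load
            if node_loads[dst] >= avg:        # destination has reached the average load
                underloaded.pop(0)
    return plan

print(plan_migrations({"n1": 0.9, "n2": 0.3, "n3": 0.4},
                      {"n1": {"f1": 0.3, "f2": 0.2, "f3": 0.4},
                       "n2": {"f4": 0.3}, "n3": {"f5": 0.4}}))
```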
|
122 |
An efficient execution model for reactive stream programs
Nguyen, Vu Thien Nga January 2015 (has links)
Stream programming is a paradigm in which a program is structured as a set of computational nodes connected by streams. By focusing on the data moving between computational nodes via streams, this programming model fits well for applications that process long sequences of data. We call such applications reactive stream programs (RSPs) to distinguish them from stream programs with rather small and finite input data. In stream programming, concurrency is expressed implicitly via communication streams. This helps to reduce the complexity of parallel programming. For this reason, stream programming has gained popularity as a programming model for parallel platforms. However, it is also challenging to analyse and improve the performance without an understanding of the program's internal behaviour. This thesis targets an efficient execution model for deploying RSPs on parallel platforms. This execution model includes a monitoring framework to understand the internal behaviour of RSPs, scheduling strategies for RSPs on uniform shared-memory platforms, and mapping techniques for deploying RSPs on heterogeneous distributed platforms. The foundation of the execution model is a study of the performance of RSPs in terms of throughput and latency. This study includes quantitative formulae for throughput and latency, and the identification of factors that influence these performance metrics. Based on the study of RSP performance, this thesis exploits characteristics of RSPs to derive effective scheduling strategies on uniform shared-memory platforms. Aiming to optimise both throughput and latency, these scheduling strategies are implemented in two heuristic-based schedulers. Both are designed to be centralised in order to provide load balancing for RSPs with dynamic behaviour as well as dynamic structures. The first uses the notion of positive and negative data demands on each stream to determine the scheduling priorities. This scheduler is independent of the runtime system. The second requires the runtime system to provide the position information of each computational node in the RSP, and uses that information to decide the scheduling priorities. Our experiments show that both schedulers provide similar performance while being significantly better than a reference implementation without dynamic load balancing. Also based on the study of RSP performance, we present in this thesis two new heuristic partitioning algorithms which are used to map RSPs onto heterogeneous distributed platforms. These are Kernighan-Lin Adaptation (KLA) and Congestion Avoidance (CA), where the main objective is to optimise throughput. This is a multi-parameter optimisation problem to which existing graph partitioning algorithms are not applicable. Compared to the generic meta-heuristic Simulated Annealing algorithm, both proposed algorithms achieve equally good or better results. KLA is faster for small benchmarks while slower for large ones. In contrast, CA is always orders of magnitude faster, even for very large benchmarks.
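To make the demand-based idea concrete, here is a small Python sketch of a scheduler that prefers nodes whose output streams have room to be filled and whose input streams hold pending data. This is only one plausible reading of a demand-driven heuristic, not the scheduler actually described in the thesis.

```python
# Illustrative demand-based priority sketch; the thesis's exact heuristic
# and its positive/negative demand bookkeeping are not reproduced here.
def node_priority(node, fill, capacity):
    """fill/capacity: dicts stream -> current items / buffer size."""
    out_demand = sum(capacity[s] - fill[s] for s in node["outputs"])  # space waiting to be filled
    in_pressure = sum(fill[s] for s in node["inputs"])                # data waiting to be consumed
    return out_demand + in_pressure

def pick_next(nodes, fill, capacity):
    runnable = [n for n in nodes if all(fill[s] > 0 for s in n["inputs"])]
    return max(runnable, key=lambda n: node_priority(n, fill, capacity), default=None)

nodes = [{"name": "filter", "inputs": ["a"], "outputs": ["b"]},
         {"name": "encode", "inputs": ["b"], "outputs": ["c"]}]
fill = {"a": 5, "b": 1, "c": 0}
capacity = {"a": 8, "b": 8, "c": 8}
chosen = pick_next(nodes, fill, capacity)
print(chosen["name"] if chosen else None)   # "filter": most demand upstream
```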
|
123 |
Towards Energy-Efficient Mobile Sensing: Architectures and Frameworks for Heterogeneous Sensing and Computing
Fan, Songchun January 2016 (has links)
Modern sensing apps require continuous and intense computation on data streams. Unfortunately, mobile devices are failing to keep pace despite advances in hardware capability. In contrast to powerful system-on-chips that evolve rapidly, battery capacities grow only marginally. This hinders the potential of long-running, compute-intensive sensing services such as image/audio processing, motion tracking and health monitoring, especially on small, wearable devices. In this thesis, we present three pieces of work that target improving the energy efficiency of mobile sensing. (1) In the first work, we study heterogeneous mobile processors that dynamically switch between high-performance and low-power cores according to tasks' performance requirements. We benchmark interactive mobile workloads and quantify the energy improvement of different microarchitectures. (2) Realizing that today's users often carry more than one mobile device, in the second work we extend the resource boundary of individual devices by prototyping a distributed framework that coordinates multiple devices. When devices share common sensing goals, the framework schedules sensing and computing tasks according to the devices' heterogeneity, improving the performance and latency of compute-intensive sensing apps. (3) In the third work, we study the power breakdown of motion sensing apps on wearable devices and show that traditional offloading schemes cannot mitigate sensing's high energy costs. We design a framework that allows the phone to take over sensing and computation by predicting the wearable's sensory data when the motions of the two devices are highly correlated. This allows the wearable to offload without communicating raw sensing data, resulting in little performance loss but significant energy savings. / Dissertation
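The third framework hinges on detecting when phone and wearable motion are correlated enough for the phone to stand in for the wearable's sensors. The sketch below illustrates that decision with a hypothetical correlation threshold and a simple linear predictor; it is not the framework's actual model.

```python
# Hedged sketch of the offloading decision: window, threshold, and the
# linear predictor are assumptions for illustration only.
import numpy as np

def should_predict(phone_acc, wear_acc, threshold=0.9):
    """True if recent phone/wearable motion is correlated enough that the
    phone can predict the wearable's sensor stream."""
    r = np.corrcoef(phone_acc, wear_acc)[0, 1]
    return r >= threshold

def fit_predictor(phone_acc, wear_acc):
    # Least-squares fit wear ~ a * phone + b over a calibration window.
    a, b = np.polyfit(phone_acc, wear_acc, deg=1)
    return lambda new_phone_sample: a * new_phone_sample + b

phone = np.array([0.1, 0.4, 0.9, 1.2, 0.8])
wear = np.array([0.12, 0.41, 0.95, 1.18, 0.83])
if should_predict(phone, wear):
    model = fit_predictor(phone, wear)
    print(round(model(1.0), 3))  # wearable sensing suspended; phone predicts the sample
```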
|
124 |
A Message Oriented Middleware Library
Kuhlman, Christopher James 01 January 2007 (has links)
A message oriented middleware inter-process communication library called Nora has been designed, constructed, and validated. The library is written in C++. The middleware is designed to bridge two of the main messaging standards, the Message Passing Interface (MPI) and the Data Distribution Service (DDS), by enabling communications for (1) computationally intensive distributed systems that typically follow a master-slave design and (2) general data distribution. The design is original and does not borrow from either specification. The library can be statically linked to application code so that the library is part of each application in a distributed system. The implementation of master-slave messaging has not yet been completed, but the great majority of the work is done; the general data distribution model has been fully implemented. The design is critically evaluated. A key aspect of the library is configurability. Various characteristics of the messaging library, such as the number of message producer and consumer threads, the message types serviced by each thread, the types of communication mechanisms, and others, are specified through a configuration file. Consequently, the library has to be built only once for all applications in a distributed system, and communications for each application are tailored through a unique configuration file. The library application programmer interface (API) is structured so that communication details can be isolated from the application code, and therefore applications are not affected by changes to the IPC configuration. Beyond its use for the two classes of problems listed above, the library is also suited for use by system architects who are investigating resource requirements and designs for new systems, because applications can be reconfigured quickly for different communication behavior on different platforms through the configuration file. Thus, it is useful for prototyping and performance evaluation.
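To illustrate the configuration-driven idea, the Python sketch below routes message types to consumer threads based on a configuration structure. The config keys, thread model, and routing table are hypothetical stand-ins, not Nora's file format or C++ API.

```python
# Minimal sketch of configuration-driven message consumption; the CONFIG
# layout is invented for illustration.
import queue, threading, time

CONFIG = {
    "consumers": [
        {"name": "status",  "message_types": ["STATUS"],          "threads": 1},
        {"name": "results", "message_types": ["RESULT", "ERROR"], "threads": 2},
    ],
}

def _drain(q, handler):
    while True:
        handler(q.get())              # each consumer thread blocks on its queue

def start_consumers(config, handler):
    routes = {}
    for consumer in config["consumers"]:
        q = queue.Queue()
        for mtype in consumer["message_types"]:
            routes[mtype] = q         # route every listed type to this consumer's queue
        for _ in range(consumer["threads"]):
            threading.Thread(target=_drain, args=(q, handler), daemon=True).start()
    return routes

routes = start_consumers(CONFIG, handler=print)
routes["RESULT"].put(("RESULT", 42))  # application code only sees the routing table
time.sleep(0.2)                       # let the daemon consumer print before exit
```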
|
125 |
Dynamické rekonfigurace v komponentovém systému SOFA2 / Dynamic reconfiguration in SOFA 2 component system
Babka, David January 2011 (has links)
SOFA 2 is a component system employing hierarchically composed components in a distributed environment. It contains concepts that allow for specifying dynamic reconfigurations of component architectures at runtime, which is essential for virtually any real-life application. The dynamic reconfigurations comprise creating/disposing components and creating/disposing connections between components. In contrast to the majority of component systems, SOFA 2 is able to specify possible architectural reconfigurations in the application architecture at design time. This allows the SOFA 2 runtime to follow the dynamic behavior of the application and reflect that behavior in architectural reconfigurations. The goal of this thesis is to reify these concepts of dynamic reconfiguration in the implementation of SOFA 2 and to demonstrate their usage on a demo application.
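As a purely illustrative sketch of the reconfiguration primitives mentioned above (creating/disposing components and connections), the Python below models an architecture that applies such changes at runtime; it is a hypothetical pseudo-API, not SOFA 2's actual interfaces.

```python
# Hypothetical reconfiguration pseudo-API; not the SOFA 2 interfaces.
class Architecture:
    def __init__(self):
        self.components, self.connections = {}, set()

    def create(self, name, factory):
        self.components[name] = factory()          # instantiate a new component

    def connect(self, required, provided):
        self.connections.add((required, provided))  # (component, port) pairs

    def disconnect(self, required, provided):
        self.connections.discard((required, provided))

    def dispose(self, name):
        # remove the component and every connection touching it
        self.connections = {c for c in self.connections
                            if name not in c[0] and name not in c[1]}
        del self.components[name]

arch = Architecture()
arch.create("logger", dict)
arch.connect(("client", "log"), ("logger", "log"))
arch.dispose("logger")   # a reconfiguration declared at design time, applied at runtime
print(arch.components, arch.connections)
```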
|
126 |
SOFAnet 2 / SOFAnet 2
Papež, Michal January 2011 (has links)
Abstract: The aim of SOFAnet 2, as the network environment of the SOFA 2 component system, is to exchange components between SOFAnodes in a simple and rational way. Current concerns of SOFA 2 users about software distribution are analyzed and discussed. New high-level concepts of Applications and Components are defined, together with their mapping to SOFA 2 first-class concepts and the means of their distribution and removal. Furthermore, a methodology to keep the SOFA 2 repository clean is introduced. All new elements, such as concepts and operations, are studied using a formal set model. The proposed concept of SOFAnet 2 is validated by a prototype implementation.
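One way to picture the repository-cleaning methodology with a set model: a component is removable when no installed application transitively depends on it. The sketch below uses invented relation names and is not SOFAnet 2's formal model.

```python
# Set-model sketch of "keep the repository clean": removable components are
# those unreachable from any installed application. Names are assumptions.
def reachable(roots, depends_on):
    seen, stack = set(), list(roots)
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(depends_on.get(c, ()))
    return seen

def removable(repository, applications, depends_on):
    return repository - reachable(applications, depends_on)

repo = {"A", "B", "C", "D"}
apps = {"A"}                        # top-level applications the user keeps
deps = {"A": {"B"}, "C": {"D"}}     # C and D belong to an application already removed
print(removable(repo, apps, deps))  # {'C', 'D'} can be garbage-collected
```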
|
127 |
Implementability of distributed systems described with scenarios / Implémentabilité de systèmes distribués décrits à l'aide de scénarios
Abdallah, Rouwaida 16 July 2013 (has links)
Distributed systems lie at the heart of many modern applications (social networks, web services, etc.). However, developers face many challenges in implementing distributed systems. The major one we focus on is avoiding erroneous behaviors that do not appear in the requirements of the distributed system and that are caused by the concurrency between the entities of this system. The automatic generation of code from the requirements of distributed systems remains an old dream. In this thesis, we consider the automatic generation of a skeleton of code covering the interactions between the entities of a distributed system. This allows us to avoid the erroneous behaviors caused by concurrency. In a later step, this skeleton can be completed by adding and debugging the code that describes the local actions happening on each entity, independently of its interactions with the other entities. The automatic generation that we consider starts from a scenario-based specification that formally describes the interactions within the informal requirements of a distributed system. We choose High-level Message Sequence Charts (HMSCs for short) as the scenario-based specification for the many advantages that they present: namely, clear graphical and textual representations and a formal semantics. Code generation from HMSCs requires an intermediate step, called synthesis, which transforms them into an abstract machine model describing each entity's local view of the interactions (a machine representing an entity defines its sequences of message sendings and receptions). From the abstract machine model, generating the skeleton's code becomes an easy task. A very intuitive abstract machine model for the synthesis of HMSCs is that of Communicating Finite State Machines (CFSMs). However, the synthesis from HMSCs into CFSMs may, in general, produce programs with more behaviors than described in the specification. We therefore restrict our specifications to a sub-class of HMSCs named local HMSCs. We show that for any local HMSC, behaviors can be preserved by the addition of communication controllers that intercept messages and add stamping information before resending them. We then propose a new technique, named localization, to transform an arbitrary HMSC specification into a local HMSC, hence allowing correct synthesis. We show that this transformation can be automated as a constraint optimization problem, in which the impact of the modifications brought to the original specification is minimized with respect to a cost function.
Finally, we have implemented the synthesis and localization approaches in an existing tool named SOFAT. In addition, we have added to SOFAT the automatic generation, from HMSCs, of Promela code and of Java code for REST-based web services.
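A common way to phrase the local-HMSC restriction is that every choice must be controlled by a single process. The Python sketch below checks that simplified condition over an abstract HMSC graph, assuming each branch is summarised by the single instance that initiates it; the thesis's precise definition is richer and is not reproduced here.

```python
# Simplified localness check over an HMSC represented as a graph of nodes;
# the per-branch "initiator" summary is an assumption of this sketch.
def is_local_hmsc(successors, initiator):
    """successors: node -> list of next nodes; initiator: (node, next) -> instance."""
    for node, branches in successors.items():
        if len(branches) > 1:
            initiators = {initiator[(node, b)] for b in branches}
            if len(initiators) > 1:
                return False   # two processes could resolve the choice concurrently
    return True

successors = {"n0": ["n1", "n2"], "n1": ["n0"], "n2": []}
initiator = {("n0", "n1"): "Client", ("n0", "n2"): "Server", ("n1", "n0"): "Client"}
print(is_local_hmsc(successors, initiator))   # False: the choice at n0 is non-local
```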
|
128 |
Models and algorithms for cyber-physical systems
Gujrati, Sumeet January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / Gurdip Singh / In this dissertation, we propose a cyber-physical system model and, based on this model, present algorithms for a set of distributed computing problems. Our model specifies a cyber-physical system as a combination of cyber-infrastructure, physical-infrastructure, and a user behavior specification. The cyber-infrastructure is superimposed on the physical-infrastructure and continuously monitors its (the physical-infrastructure's) changing state. Users operate in the physical-infrastructure and interact with the cyber-infrastructure using hand-held devices and sensors; their behavior is specified in terms of the actions they can perform (e.g., move, observe). While in traditional distributed systems users interact solely via the underlying cyber-infrastructure, users in a cyber-physical system may interact directly with one another, access sensor data directly, and perform actions asynchronously with respect to the underlying cyber-infrastructure. These additional types of interaction have an impact on how distributed algorithms for cyber-physical systems are designed. We augment distributed mutual exclusion and predicate detection algorithms so that they can accommodate user behavior, interactions among users, and the physical-infrastructure. The new algorithms have two components: one describing the behavior of the users in the physical-infrastructure and the other describing the algorithms in the cyber-infrastructure. Each combination of a user behavior and an algorithm in the cyber-infrastructure yields a different cyber-physical system algorithm. We have performed an extensive simulation study of our algorithms using the OMNeT++ simulation engine and the Uppaal model checker. We also propose a Cyber-Physical System Modeling Language (CPSML) to specify cyber-physical systems, and a centralized global state recording algorithm.
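To make the three-part model concrete, the toy Python sketch below separates a physical state, a lagging cyber view, and user actions (move, observe) that happen asynchronously with respect to the cyber-infrastructure. The state layout and action set are illustrative assumptions only, not CPSML.

```python
# Toy rendering of cyber-infrastructure, physical-infrastructure, and user
# actions; everything here is an illustrative assumption.
physical = {"room1": {"occupants": set()}, "room2": {"occupants": set()}}
cyber = {"observed": {}}                      # the cyber view lags the physical state

def move(user, src, dst):                     # user action, asynchronous w.r.t. cyber
    physical[src]["occupants"].discard(user)
    physical[dst]["occupants"].add(user)

def observe(user, room):                      # direct sensor access, bypassing cyber
    return len(physical[room]["occupants"])

def monitor():                                # cyber-infrastructure polling step
    cyber["observed"] = {r: len(s["occupants"]) for r, s in physical.items()}

move("alice", "room1", "room2")
print(observe("alice", "room2"), cyber["observed"])   # user sees 1; cyber has not polled yet
monitor()
print(cyber["observed"])
```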
|
129 |
Android application for file storage and retrieval over secured and distributed file servers
Kukkadapu, Sowmya January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / Recently, the world has been trending toward the use of smartphones. Today, almost every individual uses a smartphone for various purposes, aided by the availability of a large number of applications. The memory on the SD (Secure Digital) card is becoming a constraint on smartphone usage, since it stores large amounts of data, including various files and important documents.
Although many applications exist to fill the free space, we hardly have an application that manages the free memory according to the user's choice. Managing the free space on the SD card therefore requires a dedicated application. Moreover, important files stored on the Android device cannot be retrieved if the device is lost.
Targeting these memory issues, we developed an application that provides security for important documents, offloads files that are not immediately needed to distributed file servers, and retrieves them on request.
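As a rough sketch of the store-and-retrieve idea only: the Python below splits a file into chunks, spreads them across placeholder servers, and reassembles them on request. The server names, chunking scheme, and key derivation are assumptions; encryption and the actual Android client code are omitted.

```python
# Hedged sketch of distributed file storage and retrieval; local dicts stand
# in for remote servers, and the key scheme is invented for illustration.
import hashlib

SERVERS = ["srv-a", "srv-b", "srv-c"]         # hypothetical storage back ends
storage = {s: {} for s in SERVERS}            # stands in for the remote file servers

def store(name, data, chunk_size=4):
    index = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        server = SERVERS[(i // chunk_size) % len(SERVERS)]
        key = hashlib.sha256(name.encode() + bytes([i % 256])).hexdigest()[:12]
        storage[server][key] = chunk          # a real client would encrypt and upload here
        index.append((server, key))
    return index                              # kept on the phone for later retrieval

def retrieve(index):
    return b"".join(storage[server][key] for server, key in index)

idx = store("notes.txt", b"important document contents")
print(retrieve(idx))
```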
|
130 |
[en] SOFTWARE COMPONENTS WITH SUPPORT FOR DATA STREAMS / [pt] COMPONENTES DE SOFTWARE COM SUPORTE A FLUXO DE DADOS
VICTOR SA FREIRE FUSCO 18 January 2013 (has links)
[en] Component-based software development is a topic that has attracted considerable attention in recent years. This technique allows the construction of complex software systems in a quick and structured way. Several component models have been proposed by industry and academia. The majority of these component models adopt Remote Procedure Calls as their basic communication mechanism. The CORBA Component Model is the only one among the surveyed models that has work in progress to support communication over data streams. This support proves to be of great importance in systems that must deal with data from sensors and in systems that deal with audio and video transmission. The main goal of this work is to propose an architecture that enables the middleware Software Component System (SCS) to support applications that require data streaming. To this end, the SCS component model was extended to support stream ports. As evaluation, this work presents some experimental results on performance and scalability, as well as an application that exercises the needs of CSBase's algorithm flow executor, a framework used to build systems for grid computing.
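A hedged sketch of what a stream port adds over plain remote calls: instead of a request/reply method, a component exposes an output port that pushes a continuous sequence of samples to connected input ports. The port API below is hypothetical and does not mirror SCS's extended component model.

```python
# Hypothetical stream-port sketch (push-based, one output fanned out to many
# inputs); this is not the SCS API, only an illustration of the idea.
class OutStreamPort:
    def __init__(self):
        self.sinks = []

    def connect(self, in_port):
        self.sinks.append(in_port)

    def push(self, sample):
        for sink in self.sinks:           # fan the sample out to every connected consumer
            sink.receive(sample)

class InStreamPort:
    def __init__(self, on_sample):
        self.on_sample = on_sample

    def receive(self, sample):
        self.on_sample(sample)

camera_out = OutStreamPort()
encoder_in = InStreamPort(lambda s: print("encoding frame", s))
camera_out.connect(encoder_in)
for frame in range(3):                    # a continuous data flow rather than discrete RPCs
    camera_out.push(frame)
```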
|