221 |
Eidolon: adapting distributed applications to their environment. Potts, Daniel Paul, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2008
Grids, multi-clusters, NUMA systems, and ad-hoc collections of distributed computing devices all present diverse environments in which distributed applications can run. Because of the diversity of features these environments provide, a distributed application must be specifically designed and optimised for its deployment environment if it is to perform well. Such optimisations generally affect the application's communication structure, its consistency protocols, and its communication protocols. This thesis explores approaches to improving the ability of distributed applications to share consistent data efficiently, and with improved functionality, across wide-area and diverse environments. We identify a fundamental separation of concerns for distributed applications and use it to propose a new model, called the view model, which is a hybrid, cost-conscious approach to remote data sharing. It provides the mechanisms and interconnects needed to improve the flexibility and functionality of data sharing without defining new programming models or protocols. We employ the view model to adapt distributed applications to their run-time environment without modifying the application or inventing new consistency or communication protocols. We explore the use of view-model properties with several programming models and their consistency protocols. In particular, we focus on programming models used in distributed-shared-memory middleware and applications, as these can benefit significantly from the properties of the view model. Our evaluation demonstrates the benefits, side effects, and potential shortcomings of the view model by comparing it with traditional models when running distributed applications across several multi-cluster scenarios. In particular, we show that the view model improves the performance of distributed applications while reducing resource usage and communication overheads.
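The separation of concerns described above can be sketched as an interface that sits between the application's data accesses and whichever consistency and communication protocols suit the current environment. The sketch below is a hypothetical illustration only, not the thesis's actual API; the `View`, `ConsistencyProtocol`, and `Transport` names are assumptions made for the example.

```python
from abc import ABC, abstractmethod

class ConsistencyProtocol(ABC):
    """Hypothetical hook for an environment-specific consistency protocol."""
    @abstractmethod
    def before_read(self, key): ...
    @abstractmethod
    def after_write(self, key, value): ...

class Transport(ABC):
    """Hypothetical hook for an environment-specific communication protocol."""
    @abstractmethod
    def send(self, peer, message): ...

class View:
    """A view groups shared data with the protocols chosen for one environment.

    The application reads and writes through the view; swapping the protocol
    objects adapts it to a new environment without touching application code.
    """
    def __init__(self, consistency: ConsistencyProtocol, transport: Transport):
        self._data = {}
        self._consistency = consistency
        self._transport = transport

    def read(self, key):
        self._consistency.before_read(key)
        return self._data.get(key)

    def write(self, key, value):
        self._data[key] = value
        self._consistency.after_write(key, value)
```

Under this kind of interface, moving an application from a NUMA machine to a multi-cluster would amount to supplying different protocol objects to the view, which is the adaptation-without-modification idea the abstract describes.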
|
222 |
Control Strategies for the Next Generation Microgrids. Mehrizi-Sani, Ali, 06 December 2012
In the context of the envisioned electric power delivery system of the future, the smart grid, this dissertation focuses on control and management strategies for integrating distributed energy resources into the power system. This work conceptualizes a hierarchical framework for the control of microgrids---the building blocks of the smart grid---and develops the notion of potential functions for secondary control, used to devise intermediate set points that ensure feasible operation of the system. A scalar potential function is defined for each controllable unit of the microgrid such that its minimization corresponds to achieving the control goal. The set points are dynamically updated using communication within the microgrid. This strategy is generalized to (i) include both local and system-wide constraints and (ii) allow a distributed implementation.
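As an illustration only (the dissertation's specific potential functions may take a different form), a quadratic potential for unit \(i\) can penalise both distance from the unit's desired set point and violation of a shared system-wide limit, with the intermediate set point obtained by a gradient step using values communicated by neighbouring units:

\[
P_i(u_i) = \tfrac{1}{2}\bigl(u_i - u_i^{\mathrm{ref}}\bigr)^2
         + \tfrac{\mu}{2}\Bigl(u_i + \sum_{j \in \mathcal{N}_i} u_j - U^{\max}\Bigr)_+^2,
\qquad
u_i \leftarrow u_i - \alpha\,\frac{\partial P_i}{\partial u_i},
\]

where \(\mathcal{N}_i\) is the set of units that unit \(i\) communicates with, \((\cdot)_+\) denotes the positive part, and \(\alpha, \mu > 0\) are tuning gains. Minimising each \(P_i\) moves the unit toward its control goal while respecting the shared constraint, matching the generalization to local and system-wide constraints described above.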
This dissertation also proposes and evaluates a simple yet elaborate distributed strategy to mitigate the transients of controllable devices of the microgrid using local measurements. This strategy is based on response monitoring and is added on top of the existing controller of a power system device. It can be implemented using either the set point automatic adjustment (SPAA) or the set point automatic adjustment with correction enabled (SPAACE) method. SPAA takes advantage of an approximate model of the system to calculate intermediate set points such that the response to each one is acceptable. SPAACE treats the device as a generic system, monitors its response, and modulates its set point to achieve the desired trajectory. SPAACE bases its decisions on the trend of variations of the response and so accounts for inaccuracies and unmodeled dynamics.
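A minimal sketch of the SPAACE idea follows, under the assumption that the supervisory layer sees only the measured response and the commanded set point; the trend test and step sizes are illustrative choices, not the dissertation's actual rules.

```python
def spaace_step(target, setpoint, history, step=0.1, overshoot_band=0.05):
    """One supervisory update: nudge the commanded set point toward the target,
    but back off when the measured response is trending into overshoot.

    history: recent response measurements, newest last (local measurements only).
    """
    if len(history) < 2:
        return setpoint + step * (target - setpoint)

    trend = history[-1] - history[-2]   # direction the response is moving
    error = target - history[-1]        # remaining distance to the target

    heading_past_target = (error * trend) < 0 and abs(error) < overshoot_band * abs(target)
    if heading_past_target:
        # Temporarily pull the set point back to damp the transient.
        return setpoint - step * trend
    # Otherwise keep modulating the set point toward the final target.
    return setpoint + step * error
```

The point of the sketch is that the set point itself becomes the control knob of an outer loop driven purely by the observed response trend, which is why no accurate device model is needed.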
Case studies using the PSCAD/EMTDC software environment and MATLAB programming environment are presented to demonstrate the application and effectiveness of the proposed strategies in different scenarios.
|
224 |
Geographically Distributed Teams in a Collaborative Problem Solving Task. January 2012
As technology enhances our communication capabilities, the number of distributed teams has risen in both the public and private sectors. There is no doubt that these technological advancements have addressed a need for communication and collaboration in distributed teams. However, is all technology useful for effective collaboration? Are some methods (modalities) of communication more conducive than others to effective performance and collaboration in distributed teams? Although previous literature identifies some differences between modalities, there is little research on geographically distributed mobile teams (DMTs) performing a collaborative task. To investigate communication and performance in this context, I developed the GeoCog system. This system is a mobile communication and collaboration platform enabling small, distributed teams of three to participate in a variant of the military-inspired game "Capture the Flag". Within the task, teams were given one hour to complete as many "captures" as possible while utilizing resources to the advantage of the team. In this experiment, I manipulated the modality of communication across three conditions: text-based messaging only, vocal communication only, and a combination of the two. It was hypothesized that bi-modal communication would yield superior performance compared to either single-modality condition. Results indicated that performance was not affected by modality. Further results, including a communication analysis, are discussed within this paper. / Dissertation/Thesis / M.S. Applied Psychology 2012
|
225 |
Simulace distribuovaných systémů / Distributed Systems Simulation. Ďuriš, Anton, January 2021
This thesis is focused on modeling distributed systems using Petri nets. Distributed systems are increasingly being deployed in applications and computing systems, where their task is to ensure sufficient performance and stability for a large number of users. When modeling distributed systems, the stochastic behavior of Petri nets is important, as it provides more realistic simulations. Therefore, this thesis focuses mainly on timed Petri nets. The theoretical part of this thesis summarizes distributed systems, their properties, types and available architectures, as well as Petri nets, their representation, types and principle of operation. In the practical part, two models were implemented: a horizontally scaled web application divided into several services with a distributed database, and a large grid-computing system, specifically the BOINC platform with the Folding@home project. Both models were implemented using the Python PetNetSim library. The goal of this thesis is to simulate the created models under different scenarios of their behavior.
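PetNetSim's own API is not reproduced here; the generic sketch below only illustrates how a timed Petri net drives such a simulation, with exponentially distributed firing delays standing in for stochastic service times. The place and transition names are invented for the example.

```python
import heapq
import random

# Marking: tokens per place; a transition consumes and produces tokens after a random delay.
marking = {"requests": 5, "server_idle": 2, "done": 0}
transitions = {
    # name: (inputs, outputs, mean_delay)
    "serve": ({"requests": 1, "server_idle": 1}, {"done": 1, "server_idle": 1}, 0.8),
}

def enabled(name):
    inputs, _, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

clock, events = 0.0, []
while True:
    # Schedule every enabled transition with an exponential delay (timed Petri net).
    for name in transitions:
        if enabled(name):
            inputs, outputs, mean = transitions[name]
            for p, n in inputs.items():
                marking[p] -= n                     # reserve input tokens
            heapq.heappush(events, (clock + random.expovariate(1 / mean), name))
    if not events:
        break
    clock, name = heapq.heappop(events)             # fire the earliest scheduled transition
    for p, n in transitions[name][1].items():
        marking[p] += n                             # deposit output tokens
print(f"t={clock:.2f}, marking={marking}")
```

Scaling this pattern up, with places for queues, services, and compute nodes, is what the two thesis models do for the web application and the BOINC/Folding@home scenario.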
|
226 |
A tool for implementing distributed algorithms written in PROMELA, using DAJ toolkit. Nuthi, Kranthi Kiran, January 1900
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / PROMELA stands for Protocol Meta Language. It is a modeling language for developing distributed systems. It allows for the dynamic creation of concurrent processes, which can communicate through message channels. DAJ stands for Distributed Algorithms in Java. It is a Java toolkit for designing, implementing, simulating, and visualizing distributed algorithms. The toolkit consists of a Java class library with a simple programming interface that allows development of distributed algorithms based on a message-passing model. It also provides a visualization environment where the protocol execution can be paused, performed step by step, and restarted.
This project is a Java application designed to translate a model written in Promela into a model that uses the Java class library provided by DAJ, and then simulate it using DAJ. Even though there are similarities between the programming constructs of Promela and DAJ, the programming interface supported by DAJ is smaller, so the input has been confined to a subset of Promela. The implementation was performed in three steps. In the first step, an input domain was defined and an ANTLR grammar was written for the input structure; Java code was embedded in this grammar so that it parses the input and translates it into an intermediate XML format. In the second step, a StringTemplate group containing templates of the output model is used, along with a Java program that traverses the intermediate XML file and generates the output model. In the third step, the generated output model is compiled and then simulated and visualized using DAJ. The application has been tested on input models with different topologies, process nodes, messages, and variables, covering most of the input domain.
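The project's actual ANTLR grammar and XML schema are not shown in the abstract; the fragment below is only a hedged illustration of the intermediate step, using Python's standard library to emit an XML description for a hypothetical Promela channel and process declaration. The element and attribute names are invented for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical facts a parser might extract from a Promela fragment such as:
#   chan toServer = [4] of { int };
#   active proctype client() { toServer ! 42 }
model = ET.Element("model")
ET.SubElement(model, "channel", name="toServer", capacity="4", type="int")
proc = ET.SubElement(model, "process", name="client", active="true")
ET.SubElement(proc, "send", channel="toServer", value="42")

# This intermediate form is what a StringTemplate-style generator would walk
# to emit the corresponding DAJ (Java) model classes in the second step.
print(ET.tostring(model, encoding="unicode"))
```

Keeping the intermediate representation explicit is what lets the grammar (step one) and the code generator (step two) evolve independently.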
|
227 |
Distributed computation in networked systems. Costello, Zachary Kohl, 27 May 2016
The objective of this thesis is to develop a theoretical understanding of computation in networked dynamical systems and to demonstrate practical applications supported by the theory. We are interested in understanding how networks of locally interacting agents can be controlled to compute arbitrary functions of the initial node states. In other words, can a dynamical networked system be made to behave like a computer? In this thesis, we take steps towards answering this question with a particular class of models for distributed, networked systems that can be made to compute linear transformations.
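One concrete instance of this idea, offered as an illustrative sketch rather than the specific model class developed in the thesis, is a linear iteration over the network: each node repeatedly averages with its neighbours, and the network as a whole converges to a linear transformation of the initial states, in this case the average at every node.

```python
import numpy as np

# Ring of 5 agents; each agent only communicates with its two neighbours.
n = 5
A = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        A[i, j % n] = 1 / 3              # row-stochastic averaging weights

x = np.array([3.0, -1.0, 4.0, 1.0, 5.0])  # initial node states
for _ in range(200):                       # purely local interactions
    x = A @ x

# The network has "computed" the linear map x0 -> average(x0) at every node.
print(x, x.mean())
```

Choosing different (not necessarily symmetric) interaction weights changes which linear transformation the dynamics compute, which is the sense in which such a networked system can be programmed.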
|
228 |
The Application of a Distributed Computing Architecture to a Large Telemetry Ground Station. Buell, Robert K., October 1988
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada / The evolution of telemetry ground station systems over the past twenty years has tracked the evolution of the mini-computer industry during that same period. As the various mini-computer vendors introduced systems offering ever-increasing compute power and ever-increasing capabilities to support multiple simultaneous users, the high end of the telemetry ground station systems offered by the industry evolved from single-stream, single-user raw-data systems to multi-user, multi-stream systems supporting real-time data processing and display functions from a single CPU or, in some cases, a closely coupled set of CPUs. In more recent years we have seen the maturation of networking and clustering concepts within the digital computer industry to the point where such systems, coupled with current workstation technology, now permit the development of large telemetry ground station systems that accommodate large numbers of simultaneous users, each with his or her own dedicated computing resources. This paper discusses, at a hardware block-diagram and software functional level, the architecture of such a distributed system.
|
229 |
A MICROPROCESSOR-BASED DIGITAL VOICE NETWORK. Moses, J.; Sklar, R., October 1984
International Telemetering Conference Proceedings / October 22-25, 1984 / Riviera Hotel, Las Vegas, Nevada / The Digital Voice Network project is a 1984 IR&D program within the Microelectronic Systems Division of the Hughes Aircraft Company. The project is intended to advance the state-of-the-art in digital voice technology and demonstrate digital voice transmission using advanced microprocessor technology and a token-passing bus network architecture. This paper discusses the Digital Voice Network design architecture, voice terminal design and implementation, and finally future plans to satisfy digital voice requirements in a military environment.
|
230 |
Improving the Selection of Surrogates During the Cold-Start Phase of a Cyber Foraging Application to Increase Application Performance. Kowalczk, Brian, 31 August 2014
Mobile devices are generally less powerful and more resource-constrained than their desktop counterparts, yet many of the applications that are of the most value to users of mobile devices are resource intensive and difficult to support on a mobile device. Applications such as games, video playback, image processing, voice recognition, and facial recognition are resource intensive and often exceed the limits of mobile devices.
Cyber foraging is an approach that allows a mobile device to discover and utilize surrogate devices present in the local environment to augment the capabilities of the mobile device. Cyber foraging has been shown to be beneficial in augmenting the capabilities of mobile devices to conserve power, increase performance, and increase the fidelity of applications.
The cyber foraging scheduler determines what operation to execute remotely and what surrogate to use to execute the operation. Virtually all cyber foraging schedulers in use today utilize historical data in the scheduling algorithm. If historical data about a surrogate is unavailable, execution history must be generated before the scheduler's algorithm can utilize the surrogate. The period between the arrival time of a surrogate and when historical data become available is called the cold-start state. The cold-start state delays the utilization of potentially beneficial surrogates and can degrade system performance.
The major contribution of this research was the extension of a historical-based prediction algorithm into a low-overhead estimation-enhanced algorithm that eliminated the cold-start state. This new algorithm performed better than the historical and random scheduling algorithms in every operational scenario.
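A minimal sketch of the idea, assuming a surrogate advertises a rough capability figure (for example a benchmark score) when it joins: the scheduler scores surrogates from history when it exists and falls back to a specification-based estimate otherwise, so a newly arrived surrogate is never ignored during cold start. The field names and scoring rule are assumptions, not the dissertation's actual algorithm.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Surrogate:
    name: str
    advertised_ops_per_sec: float                         # capability claimed at discovery time
    history: List[float] = field(default_factory=list)    # observed latencies in seconds

    def expected_latency(self, work_units: float) -> float:
        if self.history:
            # Normal path: use the mean of observed latencies.
            return sum(self.history) / len(self.history)
        # Cold-start path: estimate from the advertised specification,
        # inflated slightly because an estimate is less trustworthy than history.
        return 1.2 * work_units / self.advertised_ops_per_sec

def pick_surrogate(surrogates: List[Surrogate], work_units: float) -> Surrogate:
    # Lowest expected latency wins, whether it comes from history or from an estimate.
    return min(surrogates, key=lambda s: s.expected_latency(work_units))

# A freshly discovered surrogate competes immediately instead of waiting for history.
pool = [Surrogate("laptop", 2e8, history=[0.9, 1.1]), Surrogate("new-desktop", 1e9)]
print(pick_surrogate(pool, 1e8).name)
```

Once the estimated surrogate has executed a few operations, real measurements replace the estimate, so the scheduler converges to the purely historical behaviour described above.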
The four operational scenarios simulated typical use-cases for a mobile device. The scenarios simulated an unconnected environment, an environment where every surrogate was available, an environment where all surrogates were initially unavailable and surrogates joined the system slowly over time, and an environment where surrogates randomly and quickly joined and departed the system.
One future research possibility is to extend the heuristic to include storage-system I/O performance. Additional extensions include accounting for architectural differences between CPUs and utilizing Bayesian estimates to provide metrics based upon performance specifications rather than direct measurement.
|