1. Model driven coordination framework for concurrency programming. Zimmerman, John Dean. January 2008.
Ensembles of distributed, autonomous, and heterogeneous entities that are situated in an environment, interact over both space and time, and strive to uphold some global system coherence, mission, and goal characterize a new class of systems coined Open Computational Systems (OCS). OCS are materializing as a result of various enabling Internet technologies; examples include ubiquitous computing, proactive computing, autonomic computing, network-centric computing, and network-centric warfare. OCS require a fundamental shift in the way we think about software development. To address this, we advocate a holistic approach in which models and tools come together to provide a platform for building, understanding, and monitoring software based on the notion of such systems. In this research project, this was investigated by adopting the generative communication paradigm, a framework for entity communication and collaboration that allows us to construct systems with the characteristics of an OCS. Model-Driven Engineering (MDE) technologies (Domain-Specific Modelling Languages and transformation engines) were used to provision a modelling environment for the construction, visualization, and transformation of systems based on the notion of OCS. An initial mechanism was then established, and a prototype was built for system understanding, verification, and validation.
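The generative communication paradigm mentioned in this abstract is typically realized as a tuple space, where entities coordinate by publishing, reading, and withdrawing tuples rather than addressing each other directly. The following is a minimal sketch of that style, assuming the classic Linda vocabulary (out/in/rd); the class and method names are illustrative and are not the thesis project's actual framework or API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

/**
 * Minimal tuple-space sketch in the spirit of generative communication
 * (Linda-style out/in/rd). Names and behavior are illustrative only and
 * do not reproduce the project's actual framework.
 */
public class TupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    /** Publish a tuple into the shared space ("out"); producer and consumer never meet directly. */
    public synchronized void out(Object... tuple) {
        tuples.add(tuple);
        notifyAll();
    }

    /** Withdraw a tuple whose first field equals the key, blocking until one is available ("in"). */
    public synchronized Object[] in(Object key) throws InterruptedException {
        while (true) {
            Iterator<Object[]> it = tuples.iterator();
            while (it.hasNext()) {
                Object[] t = it.next();
                if (t.length > 0 && t[0].equals(key)) {
                    it.remove();
                    return t;
                }
            }
            wait(); // decoupling in time: block until some entity publishes a match
        }
    }

    /** Read a matching tuple without removing it, or return null if none exists ("rd"). */
    public synchronized Object[] rd(Object key) {
        for (Object[] t : tuples) {
            if (t.length > 0 && t[0].equals(key)) {
                return t;
            }
        }
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        TupleSpace space = new TupleSpace();
        // One entity publishes an observation; another withdraws it later without
        // either knowing about the other, which is the decoupling in space and time
        // that characterizes an open computational system.
        space.out("temperature", "sensor-42", 21.5);
        System.out.println(Arrays.toString(space.in("temperature")));
    }
}
```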
2. A logging service as a universal subscriber. Sharp, Jayson. January 1900.
Master of Science / Department of Computing and Information Sciences / Eugene Vasserman / As medical systems expand to accommodate a growing number of devices, new ways to protect patient safety have to be developed. The Integrated Clinical Environment (ICE) standard defines what an integrated hospital system is. Within the specification is a direct call for a forensic logger that can be used to review patient and system data. The MDCF is one implementation of the ICE standard, but it lacked a key component that the standard requires: a logger. Many loggers exist in industry, with varying rates of success and usefulness. A medically sound logger has to be able to completely retell exactly what happened during an event, including patient, device, and system information, so that the right medical professional can provide the best care. Several loggers have been built for the MDCF, but few were practical due to the invasiveness of the service. A universal-subscriber logging service, that is, a service able to connect to all publishing data streams, was built for the MDCF; it records all information that passes over the MDCF messaging service. This implementation was then stress tested with varying numbers of devices and amounts of data. A reviewing tool was also built that allows replay of device data in a view similar to the original device UI. Future work includes storing system information such as state changes within the MDCF and system health, and further integrating the forensic reviewer into the core MDCF UI.
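A universal subscriber is simply a consumer attached to every publishing stream rather than a selected few. The sketch below illustrates the idea on a toy in-process message bus with an append-only forensic record for replay; the MDCF's real messaging service, topics, and wire format are not shown, and all names here are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.BiConsumer;

/**
 * Toy publish/subscribe bus with a "universal subscriber": a forensic logger
 * that receives every message on every topic and keeps an append-only record
 * for later replay. Illustrative only; it does not use the MDCF messaging API.
 */
public class UniversalSubscriberDemo {

    static class Bus {
        private final Map<String, List<BiConsumer<String, String>>> byTopic = new ConcurrentHashMap<>();
        private final List<BiConsumer<String, String>> universal = new CopyOnWriteArrayList<>();

        /** Ordinary subscribers pick individual topics. */
        void subscribe(String topic, BiConsumer<String, String> handler) {
            byTopic.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
        }

        /** A universal subscriber is attached to all publishing streams, present and future. */
        void subscribeToAll(BiConsumer<String, String> handler) {
            universal.add(handler);
        }

        void publish(String topic, String payload) {
            for (BiConsumer<String, String> h : byTopic.getOrDefault(topic, List.of())) {
                h.accept(topic, payload);
            }
            for (BiConsumer<String, String> h : universal) {
                h.accept(topic, payload);
            }
        }
    }

    /** Append-only forensic record of (timestamp, topic, payload). */
    static class ForensicLogger {
        private final List<String> records = new ArrayList<>();

        void record(String topic, String payload) {
            records.add(System.currentTimeMillis() + "\t" + topic + "\t" + payload);
        }

        List<String> replay() {
            return records;
        }
    }

    public static void main(String[] args) {
        Bus bus = new Bus();
        ForensicLogger logger = new ForensicLogger();
        bus.subscribeToAll(logger::record);            // the logger sees every stream

        bus.publish("pulse-oximeter/spo2", "97");      // simulated device data
        bus.publish("infusion-pump/rate", "2.5 ml/h"); // simulated device data

        logger.replay().forEach(System.out::println);  // a reviewing tool would consume this record
    }
}
```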
3. Data logger for medical device coordination framework. Gundimeda, Karthik. January 1900.
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / A software application or a hardware device performs well under favorable conditions. In practice, many factors can affect the performance and functioning of the system, so the scenarios in which the system fails or performs well need to be determined. Logging is one of the best methodologies for identifying such scenarios, and it can help characterize both worst-case and effective performance. Log levels give flexibility in logging different kinds of messages, and determining which messages to log is the key to logging well: all important events, state changes, and messages should be logged so that the high-level progress of the system is known.
The Medical Device Coordination Framework (MDCF) deals with device connectivity to the MDCF server. In this report, we propose a logging component for the existing MDCF. The logging component is inspired by the flight data recorder, or "black box", a device that logs every message passing through the aircraft's systems; this makes it reliable and easy to investigate any failure in the system, and it also allows scenarios to be replayed. The important state changes in the MDCF include device connection, scenario instantiation, the initial state of the MDCF server, and destination creation. Logging in the MDCF is implemented by wrapping the Log4j logging framework: the interface provided by the logging component is used by the MDCF in order to log. This implementation facilitates building more complex logging components for the MDCF.
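A minimal sketch of the kind of wrapper described above, assuming the Log4j 2 API (org.apache.logging.log4j) is on the classpath; the MdcfLogger interface, method names, and event names are hypothetical and stand in for the actual MDCF logging component.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

/**
 * Hypothetical logging component that wraps the Log4j 2 API behind a narrow
 * interface that framework code can call when important state changes occur.
 * Names are illustrative and do not reproduce the MDCF's actual interfaces.
 */
public final class MdcfLogger {

    private final Logger delegate;

    private MdcfLogger(Class<?> owner) {
        this.delegate = LogManager.getLogger(owner);
    }

    public static MdcfLogger forComponent(Class<?> owner) {
        return new MdcfLogger(owner);
    }

    /** State changes such as device connection or scenario instantiation are logged at INFO. */
    public void stateChange(String event, String detail) {
        delegate.info("STATE_CHANGE event={} detail={}", event, detail);
    }

    /** Individual messages are logged at DEBUG so routine traffic does not drown out INFO. */
    public void message(String channel, String payload) {
        delegate.debug("MESSAGE channel={} payload={}", channel, payload);
    }

    /** Failures keep the stack trace so an investigation can retell what happened. */
    public void failure(String event, Throwable cause) {
        delegate.error("FAILURE event=" + event, cause);
    }

    public static void main(String[] args) {
        MdcfLogger log = MdcfLogger.forComponent(MdcfLogger.class);
        log.stateChange("DEVICE_CONNECTED", "pulse-oximeter-01");
        log.stateChange("SCENARIO_INSTANTIATED", "pca-safety-interlock");
        log.message("pulse-oximeter-01/spo2", "97");
    }
}
```

Keeping the wrapper this narrow is what makes the approach extensible: the MDCF code only ever calls stateChange, message, or failure, so the underlying Log4j configuration (appenders, levels, output format) can evolve into a richer forensic logger without touching framework code.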
4. A Distributed Memory Implementation of LOCI. George, Thomas. 14 December 2001.
Distributed memory systems have gained immense popularity due to their favorable price/performance ratios. This study seeks to reduce the complexities involved in developing parallel applications for distributed memory systems. The Loci system is a coordination framework developed to eliminate most of the accidental complexities involved in numerical simulation software development. A distributed memory version of Loci was developed, then tested and validated using a finite-rate chemically reacting flow solver written in the sequential Loci framework. The application developed in the original sequential version of Loci was parallelized with minimal changes to its source code, and a comparison with the results from the original sequential version confirms a correct implementation. The performance measurements indicate that an efficient implementation has been achieved.
5. A Coordination Framework for Deploying Hadoop MapReduce Jobs on Hadoop Cluster. Raja, Anitha. January 2016.
Apache Hadoop is an open source framework that delivers reliable, scalable, and distributed computing. Hadoop services are provided for distributed data storage, data processing, data access, and security. MapReduce is the heart of the Hadoop framework and was designed to process vast amounts of data distributed over a large number of nodes. MapReduce has been used extensively to process structured and unstructured data in diverse fields such as e-commerce, web search, social networks, and scientific computation. Understanding the characteristics of Hadoop MapReduce workloads is the key to achieving improved configurations and refining system throughput. Thus far, MapReduce workload characterization in a large-scale production environment has not been well studied. In this thesis project, the focus is mainly on composing a Hadoop cluster (as an execution environment for data processing) to analyze two types of Hadoop MapReduce (MR) jobs via a proposed coordination framework. This coordination framework is referred to as a workload translator. The outcome of this work includes: (1) a parametric workload model for the target MR jobs, (2) a cluster specification to develop an improved cluster deployment strategy using the model and coordination framework, and (3) better scheduling and hence better performance of jobs (i.e. shorter job completion time). We implemented a prototype of our solution using Apache Tomcat on (OpenStack) Ubuntu Trusty Tahr, which uses RESTful APIs to (1) create a Hadoop cluster version 2.7.2 and (2) scale up and scale down the number of workers in the cluster. The experimental results showed that with well-tuned parameters, MR jobs can achieve a reduction in the job completion time and improved utilization of the hardware resources. The target audience for this thesis is developers. As future work, we suggest adding additional parameters to develop a more refined workload model for MR and similar jobs. / Apache Hadoop is an open source system that delivers reliable, scalable, and distributed computing. Hadoop services help with distributed data storage, processing, access, and security. MapReduce is a key part of the Hadoop system and is designed to process large amounts of data distributed over many nodes. MapReduce is used extensively to process structured and unstructured data in various fields, including e-commerce, web search, social media, and scientific computation. Understanding MapReduce workloads is important for obtaining improved configurations and results. However, MapReduce workloads in large-scale production environments have not yet been studied in depth. In this thesis project, much of the focus is placed on a Hadoop cluster (as an execution environment for data processing) to analyze two types of Hadoop MapReduce (MR) jobs via a proposed system. This system is referred to as a workload translator. The outcome of this work includes: (1) a parametric workload model for the target MR jobs, (2) a specification for developing improved cluster deployment strategies using both the model and the coordination framework, and (3) improved scheduling and job performance, i.e. shorter job completion time. We implemented a prototype using Apache Tomcat on (OpenStack) Ubuntu Trusty Tahr, which uses RESTful APIs to (1) create a Hadoop cluster version 2.7.2 and (2) scale the number of workers in the cluster up and down. The experimental results showed that with well-tuned parameters, MR jobs can achieve improvements, i.e. reduced job completion time and improved utilization of hardware resources. The target audience for this thesis is developers. As future work, we suggest adding additional parameters to develop a more general model for MR and similar jobs.
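The abstract does not spell out the prototype's REST endpoints, so the sketch below only illustrates the general shape of such a call using Java's built-in HttpClient (Java 11+); the host, the /coordinator/clusters/{id}/scale path, and the JSON fields are hypothetical and not taken from the thesis.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Hypothetical client call to a coordination-framework REST API that scales
 * the number of Hadoop workers up or down. Endpoint and payload are assumed,
 * not taken from the thesis.
 */
public class ScaleClusterClient {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Request three additional worker nodes for a cluster named "mr-bench-1".
        String body = "{\"workers\": 3, \"action\": \"scale-up\"}";

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://tomcat-host:8080/coordinator/clusters/mr-bench-1/scale"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The coordination framework would acknowledge the request and
        // reconfigure the Hadoop cluster accordingly.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```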