About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An Automated Multi-agent Framework For Testing Distributed System

Haque, Ehsanul 01 May 2013 (has links)
Testing is the part of the software development life cycle (SDLC) that ensures the quality and efficiency of the software. By detecting faults early, it gives developers confidence in the system, and it is therefore considered one of the most important parts of the SDLC. Unfortunately, testing is often neglected by developers, mainly because of the time and cost of the testing process. Testing requires considerable manpower, especially for a large system such as a distributed system. At the same time, bugs are more common in a large system than in a small centralized one, so there is no alternative to testing for finding and fixing them. The situation gets worse if the developers follow one of the most powerful development processes, continuous integration, because test cases must be written in each cycle of the process, which increases development time drastically. As a result, testing is often neglected for large systems. This is alarming, because distributed systems are among the most popular and widely adopted systems in both industry and academia, and a great many developers are engaged in delivering distributed software solutions. If these systems are delivered to users untested, we will likely end up with many buggy systems every year. Very few testing frameworks exist for distributed systems compared to the number available for traditional systems, mainly because testing a distributed system is a far more difficult and complex process than testing a centralized one. The most common technique for testing a centralized system is to test the middleware, which may not apply to a distributed system: unlike a traditional system, a distributed system can reside in multiple locations in different corners of the world, which makes testing and verification difficult. In addition, distributed systems have properties such as fault tolerance, availability, concurrency, responsiveness, and security that make the testing process more complex. This research proposes a multi-agent testing framework for distributed systems in which multiple agents communicate with each other to accomplish the whole testing process. The well-proven ideas for testing centralized systems are partially reused in the framework's design so that developers will be more comfortable using it. The research also focuses on automating the testing process, which reduces its time and cost and relieves developers from regenerating the same test cases before each release of the application. This paper briefly describes the architecture of the framework and the communication process between the agents.
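The abstract gives no implementation detail, but the agent coordination it describes might look roughly like the following sketch, in which a coordinator partitions test cases among per-node agents and collects their reports. All names (TestCase, NodeAgent, coordinate) are hypothetical illustrations, not the framework's actual API.

```python
# Hypothetical sketch: a coordinator dispatches test cases to node agents
# and aggregates their reports. Names and structure are illustrative
# assumptions, not the framework described in the thesis.
from dataclasses import dataclass
from queue import Queue
from threading import Thread

@dataclass
class TestCase:
    name: str
    target_node: str   # which distributed-system node this test exercises
    run: callable      # returns True on pass

@dataclass
class Report:
    case: str
    node: str
    passed: bool

class NodeAgent(Thread):
    """Runs the test cases assigned to one node and reports results back."""
    def __init__(self, node: str, inbox: Queue, outbox: Queue):
        super().__init__()
        self.node, self.inbox, self.outbox = node, inbox, outbox

    def run(self):
        while True:
            case = self.inbox.get()
            if case is None:           # sentinel: no more work
                break
            self.outbox.put(Report(case.name, self.node, case.run()))

def coordinate(cases, nodes):
    """Coordinator agent: partition cases by node, collect all reports."""
    reports, inboxes = Queue(), {n: Queue() for n in nodes}
    agents = [NodeAgent(n, inboxes[n], reports) for n in nodes]
    for a in agents:
        a.start()
    for c in cases:
        inboxes[c.target_node].put(c)
    for n in nodes:
        inboxes[n].put(None)
    for a in agents:
        a.join()
    return [reports.get() for _ in cases]

if __name__ == "__main__":
    cases = [TestCase("ping_a", "node-a", lambda: True),
             TestCase("ping_b", "node-b", lambda: True)]
    for r in coordinate(cases, ["node-a", "node-b"]):
        print(f"{r.case} on {r.node}: {'PASS' if r.passed else 'FAIL'}")
```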
2

AnalyzeThis: An Analysis Workflow-Aware Storage System

Sim, Hyogi 13 January 2015 (has links)
Supercomputing application simulations on hundreds of thousands of cores produce vast amounts of data that need to be analyzed on smaller-scale clusters to glean insights. The process is referred to as an end-to-end workflow. Extant workflow systems are stymied by the storage wall, resulting both from the disk-based parallel file system (PFS) failing to keep pace with the compute and memory subsystems and from inefficiencies in end-to-end workflow processing. In the post-petaflop era, supercomputers are provisioned with flash devices, as an intermediary between compute nodes and the PFS, enabling novel paradigms not just for expediting I/O, but also for the in-situ analysis of the simulation output data on the flash device. An array of such active flash elements allows us to fundamentally rethink the way data analysis workflows interact with storage systems. By blending the flash storage array and data analysis together in a seamless fashion, we create an analysis workflow-aware storage system, AnalyzeThis. Our guiding principle is that analysis-awareness be deeply ingrained in each and every layer of the storage system—active flash fabric, analysis object abstraction layer, scheduling layer within the storage, and an easy-to-use file system interface—thereby elevating data analyses as first-class citizens. Together, these concepts transform AnalyzeThis into a potent analytics-aware appliance. / Master of Science
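As a loose illustration of the "analysis object abstraction" named above (a sketch under assumed names; the thesis's actual interfaces are not shown in this abstract), data objects might carry the chain of analysis kernels to be executed where the data resides:

```python
# Minimal sketch: an "analysis object" bundles data with the chain of
# analysis kernels to apply, so the storage layer can run them near the
# data. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AnalysisObject:
    path: str                                     # data location on a flash element
    kernels: list = field(default_factory=list)   # ordered analysis steps

    def then(self, kernel):
        """Append an analysis kernel; returns self so calls chain."""
        self.kernels.append(kernel)
        return self

class ActiveFlashElement:
    """Stand-in for one active flash device executing kernels near the data."""
    def submit(self, obj: AnalysisObject, data):
        for kernel in obj.kernels:   # a real scheduler would place kernels
            data = kernel(data)      # across elements; this runs them serially
        return data

if __name__ == "__main__":
    obj = AnalysisObject("/flash/sim_output.dat").then(sum).then(lambda s: s * 2)
    print(ActiveFlashElement().submit(obj, [1, 2, 3]))   # -> 12
```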
3

Dynamically reconfigurable system

Edwards, Nigel John January 1989 (has links)
No description available.
4

Open implementation and flexibility in CSCW toolkits

Dourish, James Paul January 1996 (has links)
No description available.
5

A Lightweight Intrusion Detection System for the Cluster Environment

Liu, Zhen 02 August 2002 (has links)
As clusters of Linux workstations have gained in popularity, security in this environment has become increasingly important. While prevention methods such as access control can enhance the security level of a cluster system, intrusions are still possible, and therefore intrusion detection and recovery methods are necessary. In this thesis, a system architecture for an intrusion detection system in a cluster environment is presented. A prototype system called pShield based on this architecture for a Linux cluster environment is described and its capability to detect unique attacks on MPI programs is demonstrated. The pShield system was implemented as a loadable kernel module that uses a neural network classifier to model normal behavior of processes. A new method for generating artificial anomalous data is described that uses a limited amount of attack data in training the neural network. Experimental results demonstrate that using this method rather than randomly generated anomalies reduces the false positive rate without compromising the ability to detect novel attacks. A neural network with a simple activation function is used in order to facilitate fast classification of new instances after training and to ease implementation in kernel space. Our goal is to classify the entire trace of a program's execution based on neural network classification of short sequences in the trace. Therefore, the effect of anomalous sequences in a trace must be accumulated. Several trace classification methods were compared. The results demonstrate that methods that use information about the locality of anomalies are more effective than those that only look at the number of anomalies. The impact of pShield on system performance was evaluated on an 8-node cluster. Although pShield adds some overhead to each MPI communication API call, the experimental results show that a real-world parallel computing benchmark was slowed only slightly by the intrusion detection system. The results demonstrate the effectiveness of pShield as a lightweight intrusion detection system in a cluster environment. This work is part of the Intelligent Intrusion Detection project of the Center for Computer Security Research at Mississippi State University.
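The trace-classification idea lends itself to a small sketch: classify short sliding windows of a trace, then let runs of consecutive anomalous windows count for more than scattered ones. The window width, threshold, and stub classifier below are illustrative assumptions, not pShield's actual parameters.

```python
# Sketch of locality-aware trace classification, assuming a per-window
# anomaly decision already exists (here a stub). Window size, threshold,
# and the locality bonus are illustrative assumptions.
def is_anomalous(window):
    """Stub for the neural-network classifier over one short sequence."""
    return sum(window) > 10          # placeholder decision rule

def classify_trace(trace, width=6, threshold=5.0):
    """Flag a trace when anomalies cluster, not merely when they are many."""
    score, run = 0.0, 0
    for i in range(len(trace) - width + 1):
        if is_anomalous(trace[i:i + width]):
            run += 1                 # consecutive anomalous windows...
            score += run             # ...count more than isolated ones
        else:
            run = 0
    return score >= threshold        # True = intrusion suspected

if __name__ == "__main__":
    normal = [0, 1] * 20
    attack = [0, 1] * 10 + [5] * 6 + [0, 1] * 10
    print(classify_trace(normal), classify_trace(attack))   # False True
```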
6

GRAPE: parallel graph query engine

Xu, Jingbo January 2017 (has links)
The need for graph computations is evident in a multitude of use cases. To support computations on large-scale graphs, several parallel systems have been developed. However, existing graph systems require users to recast algorithms into new models, which makes parallel graph computation a privilege of experienced users only. Moreover, real-world applications often require much more complex graph processing workflows than previously evaluated. In response to these challenges, the thesis presents GRAPE, a distributed graph computation system, shipped with various applications for social network analysis, social media marketing and functional dependencies on graphs. Firstly, the thesis presents the foundation of GRAPE. The principled approach of GRAPE is based on partial evaluation and incremental computation. Sequential graph algorithms can be plugged into GRAPE with minor changes, and get parallelized as a whole. Termination and correctness are guaranteed under a monotonic condition. Secondly, as an application on GRAPE, the thesis proposes graph-pattern association rules (GPARs) for social media marketing. GPARs help users discover regularities between entities in social graphs and identify potential customers by exploring social influence. The thesis studies the problem of discovering top-k diversified GPARs and the problem of identifying potential customers with GPARs. Although both are NP-hard, parallel scalable algorithms on GRAPE are developed, which guarantee a polynomial speedup over sequential algorithms as the number of processors increases. Thirdly, the thesis proposes quantified graph patterns (QGPs), an extension of graph patterns that supports simple counting quantifiers on edges. QGPs naturally express universal and existential quantification, numeric and ratio aggregates, as well as negation. The thesis proves that the matching problem of QGPs remains NP-complete in the absence of negation, and is DP-complete for general QGPs. In addition, the thesis introduces quantified graph association rules defined with QGPs, to identify potential customers in social media marketing. Finally, to address the issue of data consistency, the thesis proposes a class of functional dependencies for graphs, referred to as GFDs. GFDs capture both attribute-value dependencies and topological structures of entities. The satisfiability and implication problems for GFDs are studied and proved to be coNP-complete and NP-complete, respectively. The thesis also proves that the validation problem for GFDs is coNP-complete. The parallel algorithms developed on GRAPE verify that GFDs provide an effective approach to detecting inconsistencies in knowledge and social graphs.
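The partial-evaluation-plus-incremental-computation approach can be sketched for single-source shortest paths (a simplified, single-process illustration with assumed fragment structures, not GRAPE's real interface): each fragment is evaluated with the sequential algorithm, border-vertex distances are exchanged as messages, and fragments are re-evaluated incrementally until no message improves any distance.

```python
# Sketch of GRAPE-style parallelization for SSSP. Fragment layout and
# message handling are illustrative assumptions.
import math

def peval(fragment, dist):
    """Partial evaluation: run the sequential relaxation on one fragment."""
    changed = True
    while changed:
        changed = False
        for (u, v, w) in fragment["edges"]:
            if dist.get(u, math.inf) + w < dist.get(v, math.inf):
                dist[v] = dist[u] + w
                changed = True
    # Messages: current distances of this fragment's border vertices.
    return {v: dist[v] for v in fragment["border"] if v in dist}

def grape_sssp(fragments, source):
    dist = [{} for _ in fragments]
    for frag, d in zip(fragments, dist):
        if source in frag["vertices"]:
            d[source] = 0
    msgs = [peval(f, d) for f, d in zip(fragments, dist)]   # PEval
    while any(msgs):                                        # IncEval rounds
        incoming = {}
        for m in msgs:
            for v, dv in m.items():
                incoming[v] = min(dv, incoming.get(v, math.inf))
        msgs = []
        for f, d in zip(fragments, dist):
            updated = {v: dv for v, dv in incoming.items()
                       if v in f["vertices"] and dv < d.get(v, math.inf)}
            d.update(updated)
            msgs.append(peval(f, d) if updated else {})
    return dist

if __name__ == "__main__":
    f1 = {"vertices": {"a", "b"}, "border": {"b"}, "edges": [("a", "b", 1)]}
    f2 = {"vertices": {"b", "c"}, "border": {"b"}, "edges": [("b", "c", 2)]}
    print(grape_sssp([f1, f2], "a"))   # b=1 in both fragments, c=3 in f2
```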
7

Customer-driven cost-performance comparison of a real-world distributed system

Turner, Nicholas James Nickerson 30 April 2019 (has links)
Many modern web applications run on distributed cloud systems, which allows them to scale their resources to match performance requirements. Scaling of resources at industry scales, however, is a financially expensive operation, and therefore one that should involve a business justification rooted in customer quality-of-service metrics over more commonly used utilization metrics. Additionally, changing the resources available to such a system is non-instantaneous, and thus a reasonable effort should be made to predict system performance at varying resource allocations and at various expected workloads. Common performance monitoring solutions look at general metrics such as CPU utilization or available memory. These metrics are at best an indirect means of evaluating customer experience, and at worst may provide no information as to whether users of a commercial application are satisfied with the product they have paid for. Instead, the use of application-specific metrics that accurately reflect the experience of system users, combined with research into how these metrics are affected by various tunable parameters, allows a company to make accurate decisions as to the desired performance perceived by their users versus the costs associated with providing that level of performance. This thesis uses a real-world software-as-a-service product as a case study in the development of quality-of-service metrics and the use of those metrics to determine business cases and costing packages for customers. The product used for this work is Phoenix, a state-of-the-art social media aggregation and analytics software-as-a-service web platform developed by Echosec Systems, Ltd. The product will be tested under real-world conditions on cloud hardware with a minimal test harness to ensure a realistic depiction of live production conditions. / Graduate
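As a small illustration of application-specific quality-of-service metrics as opposed to utilization metrics (the metric and target below are assumptions, not Phoenix's actual measures), one might summarize per-request latency percentiles against a customer-facing target:

```python
# Sketch: summarize customer-facing QoS from per-request latencies instead
# of host utilization. The p95 target is an illustrative assumption.
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

def qos_report(latencies, target_p95=2.0):
    p95 = percentile(latencies, 95)
    return {"p95_seconds": p95,
            "meets_target": p95 <= target_p95,   # business-facing yes/no
            "requests": len(latencies)}

if __name__ == "__main__":
    print(qos_report([0.4, 0.6, 0.5, 1.9, 3.2, 0.7, 0.5, 0.6]))
```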
8

A Database Supported Modeling Environment for Pandemic Planning and Course of Action Analysis

Ma, Yifei 24 June 2013 (has links)
Pandemics, such as the 2009 H1N1 and 2003 SARS outbreaks, can significantly impact public health and society. In addition to analyzing historic epidemic data, computational simulation of epidemic propagation processes and disease control strategies can help us understand the spatio-temporal dynamics of epidemics in the laboratory. Consequently, the public can be better prepared and the government can control future epidemic outbreaks more effectively. Recently, epidemic propagation simulation systems, which use high performance computing technology, have been proposed and developed to understand disease propagation processes. However, run-time infection situation assessment and intervention adjustment, two important steps in modeling disease propagation, are not well supported in these simulation systems. In addition, these simulation systems are computationally efficient in their simulations, but most of them have limited capabilities in terms of modeling interventions in realistic scenarios. In this dissertation, we focus on building a modeling and simulation environment for epidemic propagation and propagation control strategies. The objective of this work is to design a modeling environment that supports the previously missing functions while performing well on expected features such as modeling fidelity, computational efficiency, and modeling capability. Our proposed methodologies to build such a modeling environment are: 1) decoupled and co-evolving models for disease propagation, situation assessment, and propagation control strategy, and 2) assessing situations and simulating control strategies using relational databases. Our motivation for exploring these methodologies is as follows: 1) a decoupled and co-evolving model allows us to design modules for each function separately and makes this complex modeling system design simpler, and 2) simulating propagation control strategies using relational databases improves the modeling capability and human productivity of using this modeling environment. To evaluate our proposed methodologies, we have designed and built a loosely coupled and database-supported epidemic modeling and simulation environment. With detailed experimental results and realistic case studies, we demonstrate that our modeling environment provides the missing functions and greatly enhances many expected features, such as modeling capability, without significantly sacrificing computational efficiency and scalability. / Ph. D.
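The database-supported situation-assessment step might be expressed as a relational query along these lines (a sketch with a hypothetical schema, not the dissertation's actual design):

```python
# Sketch: run-time situation assessment over a relational store, using
# sqlite3 from the standard library. Schema and names are illustrative
# assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE infections (person_id INT, region TEXT, day INT)")
conn.executemany("INSERT INTO infections VALUES (?, ?, ?)",
                 [(1, "north", 3), (2, "north", 4), (3, "south", 4)])

# Assess the situation at day 4: new infections per region, so the
# simulation can adjust interventions before the next time step.
for region, count in conn.execute(
        "SELECT region, COUNT(*) FROM infections "
        "WHERE day = 4 GROUP BY region ORDER BY COUNT(*) DESC"):
    print(region, count)
```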
9

Modeling and Computation of Complex Interventions in Large-scale Epidemiological Simulations using SQL and Distributed Database

Kaw, Rushi 30 August 2014 (has links)
Scalability is an important problem in epidemiological applications that simulate complex intervention scenarios over large datasets. Indemics is one such interactive, data-intensive framework for high-performance computing (HPC) based large-scale epidemic simulations. In the Indemics framework, interventions are supplied from an external, standalone database, which proved to be an effective way of implementing interventions. Although this setup performs well for simple interventions and small datasets, performance and scalability for complex interventions and large datasets remain an issue. In this thesis, we present IndemicsXC, a scalable and massively parallel high-performance data engine for Indemics in a supercomputing environment. IndemicsXC has the ability to implement complex interventions over large datasets. Our distributed database solution retains the simplicity of Indemics by using the same SQL query interface for expressing interventions. We show that our solution implements the most complex interventions by intelligently offloading them to the supercomputer nodes and processing them in parallel. We present an extensive performance evaluation of our database engine with the help of various intervention case studies over synthetic population datasets. The evaluation of our parallel and distributed database framework illustrates its scalability over a standalone database. Our results show that the distributed data engine is efficient, offering a parallel, scalable and cost-efficient means of implementing interventions. The proposed cost model could be used to approximate intervention query execution time with decent accuracy. Our distributed database framework could be leveraged for fast, accurate and sensible decisions by public health officials during an outbreak. Finally, we discuss the considerations for using distributed databases to drive large-scale simulations. / Master of Science
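An intervention expressed through a SQL interface might look roughly like this (hypothetical tables and columns; the thesis's actual queries are not reproduced in the abstract): vaccinate every susceptible contact of an infected individual.

```python
# Sketch: an intervention written as SQL, in the spirit of a query-driven
# epidemic simulation. Tables and columns are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people   (id INT PRIMARY KEY, state TEXT);  -- S/I/R/V
    CREATE TABLE contacts (a INT, b INT);
    INSERT INTO people VALUES (1,'I'),(2,'S'),(3,'S'),(4,'S');
    INSERT INTO contacts VALUES (1,2),(1,3),(3,4);
""")

# Intervention: vaccinate every susceptible contact of an infected person.
conn.execute("""
    UPDATE people SET state = 'V'
    WHERE state = 'S' AND id IN (
        SELECT c.b FROM contacts c JOIN people p ON p.id = c.a
        WHERE p.state = 'I'
        UNION
        SELECT c.a FROM contacts c JOIN people p ON p.id = c.b
        WHERE p.state = 'I')
""")
print(conn.execute("SELECT id, state FROM people").fetchall())
# -> [(1, 'I'), (2, 'V'), (3, 'V'), (4, 'S')]
```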
10

A Distributed System Interface for a Flight Simulator

Zeitoun, Omar 11 1900 (has links)
The importance of flight training has been realized since the inception of manned flight. This thesis describes a project interfacing hardware cockpit instruments with flight simulation software over a distributed system. A TRC472 Flight Cockpit was linked with Presagis FlightSIM to fully simulate a Cessna 172 Skyhawk aircraft. The TRC472 contains flight gauges (airspeed indicator, RPM indicator, etc.), pilot control devices (rudder, yoke, etc.) and navigation systems (VOR, ADF, etc.), all connected to a computer through separate USB connections and identified as HIDs (Human Interface Devices). These devices required real-time interaction with the FlightSIM software; in total, 21 devices communicate at the same time. The TRC472 Flight Cockpit and the FlightSIM software run on a distributed system of computers and communicate over Ethernet. Serialization is used for data transfer across the connection link so that objects can be reproduced seamlessly on the different computers. Some of the TRC472 devices were straightforward to write to and read from, but some required calibration of raw I/O data and buffers. The project also required writing plugins to override and extend the FlightSIM software to communicate with the TRC472 Flight Cockpit. The final product is a full-fledged flight experience with the complete environment and physics of the Cessna 172. / Thesis / Master of Applied Science (MASc)
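The serialization step might be sketched as follows using Python's standard library (an illustrative assumption; the project itself targets Presagis FlightSIM plugins, and its actual wire format is not given in the abstract):

```python
# Sketch: serialize an instrument reading so an identical object can be
# reconstructed on the simulation host. Field names are illustrative.
import pickle, socket, struct
from dataclasses import dataclass

@dataclass
class GaugeReading:
    device: str        # e.g. "airspeed_indicator"
    value: float       # calibrated reading, not the raw I/O value
    timestamp: float

def send_reading(sock, reading):
    payload = pickle.dumps(reading)
    sock.sendall(struct.pack("!I", len(payload)) + payload)  # length-prefixed

def recv_reading(sock):
    (length,) = struct.unpack("!I", sock.recv(4))
    # For brevity, assumes recv returns the full payload (true for this
    # socketpair demo; a real link would loop until all bytes arrive).
    return pickle.loads(sock.recv(length))

if __name__ == "__main__":
    a, b = socket.socketpair()
    send_reading(a, GaugeReading("airspeed_indicator", 92.5, 0.0))
    print(recv_reading(b))
```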
