1 |
Dynamically reconfigurable system. Edwards, Nigel John. January 1989.
No description available.
|
2 |
Open implementation and flexibility in CSCW toolkits. Dourish, James Paul. January 1996.
No description available.
|
3 |
An Automated Multi-agent Framework For Testing Distributed System. Haque, Ehsanul. 01 May 2013.
Testing is the part of the software development life cycle (SDLC) that ensures the quality and efficiency of the software. It gives developers confidence in the system by detecting faults early, and it is therefore considered one of the most important parts of the SDLC. Unfortunately, testing is often neglected by developers, mainly because of the time and cost of the testing process. Testing requires substantial manpower, especially for a large system such as a distributed system. On the other hand, bugs are more common in a large system than in a small centralized one, so there is no alternative to testing for finding and fixing them. The situation gets worse if the developers follow continuous integration, one of the most powerful development processes, because test cases must be written in each cycle of the continuous integration process, which increases development time drastically. As a result, testing is often neglected for large systems. This is alarming, because distributed systems are among the most popular and widely accepted systems in both industry and academia, and many developers are engaged in delivering distributed software solutions. If these systems are delivered to users untested, we are likely to end up with many buggy systems every year. There are also very few testing frameworks on the market for distributed systems compared with the number available for traditional systems. The main reason is that testing a distributed system is a far more difficult and complex process than testing a centralized one. The most common technique for testing a centralized system is to test the middleware, which may not apply to a distributed system. Unlike a traditional system, a distributed system can reside in multiple locations in different corners of the world, which makes testing and verification difficult. In addition, distributed systems have basic properties such as fault tolerance, availability, concurrency, responsiveness, and security that make the testing process more complex and difficult. This research proposes a multi-agent-based testing framework for distributed systems in which multiple agents communicate with each other and accomplish the whole testing process. Well-established ideas from testing centralized systems are partially reused in the framework's design, so that developers will be more comfortable using it. The research also focuses on automating the testing process, which reduces the time and cost of testing and relieves developers from regenerating the same test cases before each release of the application. This paper briefly describes the architecture of the framework and the communication process between the agents.
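To make the controller/agent division of labour concrete, here is a minimal Python sketch of agents pulling test cases from a shared queue and reporting results. The names (node_agent, run_suite) and the thread-based simulation are illustrative assumptions, not the thesis's actual framework:

```python
import queue
import threading

# Hypothetical node agent: pulls test cases from a shared queue, runs them
# against its local node, and reports results back to the controller.
def node_agent(node_id, tasks, results):
    while True:
        test = tasks.get()
        if test is None:                      # poison pill: no more work
            break
        passed = test["check"](node_id)       # execute the test locally
        results.put((node_id, test["name"], passed))

def run_suite(test_cases, num_nodes=3):
    tasks, results = queue.Queue(), queue.Queue()
    agents = [threading.Thread(target=node_agent, args=(i, tasks, results))
              for i in range(num_nodes)]
    for a in agents:
        a.start()
    for t in test_cases:                      # controller dispatches work
        tasks.put(t)
    for _ in agents:                          # one poison pill per agent
        tasks.put(None)
    for a in agents:
        a.join()
    return [results.get() for _ in range(results.qsize())]

if __name__ == "__main__":
    suite = [{"name": "node-reachable", "check": lambda nid: True}]
    print(run_suite(suite))
```

In a real deployment the threads would be replaced by agents on separate hosts, which is exactly the part the framework's inter-agent communication protocol would have to supply.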
|
4 |
AnalyzeThis: An Analysis Workflow-Aware Storage System. Sim, Hyogi. 13 January 2015.
Supercomputing application simulations on hundreds of thousands of cores produce vast amounts of data that need to be analyzed on smaller-scale clusters to glean insights. The process is referred to as an end-to-end workflow. Extant workflow systems are stymied by the storage wall, resulting both from the disk-based parallel file system (PFS) failing to keep pace with the compute and memory subsystems and from the inefficiencies in end-to-end workflow processing. In the post-petaflop era, supercomputers are provisioned with flash devices as an intermediary between compute nodes and the PFS, enabling novel paradigms not just for expediting I/O but also for in-situ analysis of the simulation output data on the flash device. An array of such active flash elements allows us to fundamentally rethink the way data analysis workflows interact with storage systems. By blending the flash storage array and data analysis together in a seamless fashion, we create an analysis workflow-aware storage system, AnalyzeThis. Our guiding principle is that analysis-awareness be deeply ingrained in each and every layer of the storage system (the active flash fabric, the analysis object abstraction layer, the scheduling layer within the storage, and an easy-to-use file system interface), thereby elevating data analyses to first-class citizens. Together, these concepts transform AnalyzeThis into a potent analytics-aware appliance. / Master of Science
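One way to picture the scheduling layer of such a system is as a placement decision that favours the flash element already holding a task's input. The following toy Python sketch assumes a simple file-to-element placement map with a least-loaded fallback; the function and parameter names are hypothetical and do not come from AnalyzeThis itself:

```python
# Toy sketch of the data-locality idea behind an analysis-aware flash array:
# run each analysis task on the flash element that already holds its input,
# falling back to the least-loaded element otherwise.

def schedule(tasks, placement, num_elements):
    """tasks: list of (task_id, input_file); placement: file -> element."""
    load = [0] * num_elements
    assignment = {}
    for task_id, input_file in tasks:
        elem = placement.get(input_file)
        if elem is None:                      # input not resident on flash
            elem = load.index(min(load))      # pick least-loaded element
        assignment[task_id] = elem
        load[elem] += 1
    return assignment

print(schedule([("t1", "out.0"), ("t2", "out.1"), ("t3", "out.9")],
               {"out.0": 0, "out.1": 1}, num_elements=4))
```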
|
5 |
A Lightweight Intrusion Detection System for the Cluster Environment. Liu, Zhen. 02 August 2002.
As clusters of Linux workstations have gained in popularity, security in this environment has become increasingly important. While prevention methods such as access control can enhance the security level of a cluster system, intrusions are still possible, and therefore intrusion detection and recovery methods are necessary. In this thesis, a system architecture for an intrusion detection system in a cluster environment is presented. A prototype system called pShield based on this architecture for a Linux cluster environment is described, and its capability to detect unique attacks on MPI programs is demonstrated. The pShield system was implemented as a loadable kernel module that uses a neural network classifier to model normal behavior of processes. A new method for generating artificial anomalous data is described that uses a limited amount of attack data in training the neural network. Experimental results demonstrate that using this method rather than randomly generated anomalies reduces the false positive rate without compromising the ability to detect novel attacks. A neural network with a simple activation function is used in order to facilitate fast classification of new instances after training and to ease implementation in kernel space. Our goal is to classify the entire trace of a program's execution based on neural network classification of short sequences in the trace. Therefore, the effect of anomalous sequences in a trace must be accumulated. Several trace classification methods were compared. The results demonstrate that methods that use information about the locality of anomalies are more effective than those that only look at the number of anomalies. The impact of pShield on system performance was evaluated on an 8-node cluster. Although pShield adds some overhead to each MPI communication API call, the experimental results show that a real-world parallel computing benchmark was slowed only slightly by the intrusion detection system. The results demonstrate the effectiveness of pShield as a lightweight intrusion detection system in a cluster environment. This work is part of the Intelligent Intrusion Detection project of the Center for Computer Security Research at Mississippi State University.
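As a rough sketch of why locality matters in trace classification, the following Python function flags a trace only when anomalous sequence labels cluster inside a sliding window, rather than merely counting them over the whole trace. The window size, threshold, and 0/1 labelling abstract away pShield's neural network and are illustrative assumptions only:

```python
# Flag a trace if any window of `window` consecutive per-sequence labels
# contains more than `threshold` anomalies (1 = classifier said anomalous).

def trace_is_anomalous(labels, window=20, threshold=6):
    if len(labels) < window:
        return sum(labels) > threshold
    count = sum(labels[:window])
    if count > threshold:
        return True
    for i in range(window, len(labels)):
        count += labels[i] - labels[i - window]   # slide the window by one
        if count > threshold:
            return True
    return False

# Ten scattered anomalies do not trip the detector; the same ten bunched
# together do, which is the locality effect described above.
print(trace_is_anomalous([1, 0, 0, 0] * 10))      # False: spread out
print(trace_is_anomalous([0] * 30 + [1] * 10))    # True: clustered burst
```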
|
6 |
Customer-driven cost-performance comparison of a real-world distributed system. Turner, Nicholas James Nickerson. 30 April 2019.
Many modern web applications run on distributed cloud systems, which allows them to scale their resources to match performance requirements. Scaling resources at industry scale, however, is a financially expensive operation, and therefore one that should involve a business justification rooted in customer quality-of-service metrics rather than the more commonly used utilization metrics. Additionally, changing the resources available to such a system is not instantaneous, so a reasonable effort should be made to predict system performance at varying resource allocations and at various expected workloads.

Common performance monitoring solutions look at general metrics such as CPU utilization or available memory. These metrics are at best an indirect means of evaluating customer experience, and at worst may provide no information as to whether users of a commercial application are satisfied with the product they have paid for. Instead, the use of application-specific metrics that accurately reflect the experience of system users, combined with research into how these metrics are affected by various tunable parameters, allows a company to make accurate decisions weighing the performance perceived by its users against the costs associated with providing that level of performance.

This thesis uses a real-world software-as-a-service product as a case study in the development of quality-of-service metrics and the use of those metrics to determine business cases and costing packages for customers. The product used for this work is Phoenix, a state-of-the-art social media aggregation and analytics software-as-a-service web platform developed by Echosec Systems, Ltd. The product is tested under real-world conditions on cloud hardware with a minimal test harness to ensure a realistic depiction of live production conditions. / Graduate
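A minimal sketch of the premise, assuming a p95-latency service-level objective as the customer-facing metric (the SLO value and metric choice here are illustrative, not drawn from the Phoenix study):

```python
# Judge a scaling decision by what users actually experience (tail latency
# against an SLO) rather than by host-level utilization.

def p95(samples):
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

def needs_scale_up(latencies_ms, slo_ms=500):
    """Scale only when the 95th-percentile latency breaches the SLO."""
    return p95(latencies_ms) > slo_ms

requests = [120, 140, 180, 220, 260, 310, 450, 480, 510, 950]
print(p95(requests), needs_scale_up(requests))   # prints: 510 True
```

A host at 90% CPU serving every request under the SLO would, on this view, be a well-sized system rather than a scaling alarm.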
|
7 |
Simple Bivalency Proofs of the Lower Bounds in Synchronous Consensus Problems. Wang, Xianbing; Teo, Yong Meng; Cao, Jiannong. 01 1900.
A fundamental problem of fault-tolerant distributed computing is for the reliable processes to reach consensus. For a synchronous distributed system of n processes with up to t crash failures, of which f failures actually occur, we prove using a straightforward bivalency argument that the lower bound for reaching uniform consensus is (f + 2) rounds in the case of 0 < f ≤ t − 2, and a new lower bound for early-stopping consensus is min(t + 1, f + 2) rounds, where 0 ≤ f ≤ t. Both proofs are simpler and more intuitive than traditional methods such as backward induction. Our main contribution is that we solve the open problem of proving that bivalency can be applied to show the (f + 2)-round lower bound for synchronous uniform consensus. / Singapore-MIT Alliance (SMA)
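Restated in standard notation (with R, our symbol, denoting the worst-case number of rounds), the two bounds read:

```latex
% n processes, at most t crash failures, f failures actually occurring.
\begin{align*}
  \text{uniform consensus:} \quad        & R \ge f + 2
      && \text{for } 0 < f \le t - 2, \\
  \text{early-stopping consensus:} \quad & R \ge \min(t + 1,\; f + 2)
      && \text{for } 0 \le f \le t.
\end{align*}
```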
|
8 |
An ownership-based message admission control mechanism for curbing spam. Geng, Hongxing. 04 September 2007.
Unsolicited e-mail has brought much annoyance to users, making e-mail less reliable as a communication tool. This has happened because the current e-mail architecture has key limitations. For instance, while it allows senders to send as many messages as they want, it does not give recipients adequate capability to prevent unrestricted access to their mailbox. This research develops a new approach to equip recipients with the ability to control access to their mailbox.

This thesis builds an ownership-based approach to controlling mailbox usage, employing the CyberOrgs model. CyberOrgs is a model that provides facilities to control resources in multi-agent systems. We consider a mailbox to be a precious resource of its owner: any access to the resource requires the owner's permission. Thus, we give recipients the capability to manage their valuable resource, the mailbox. In our approach, message senders obtain permission to send messages through negotiation. In this negotiation, a sender makes a proposal and the intended recipient evaluates the proposal according to their own policies. A sender's desired outcome of a negotiation is a contract, which governs the subsequent communication between the sender and the recipient. Contracts help senders and recipients construct a long-term relationship.

Besides allowing individuals to control their mailbox, we consider groups, which represent organizations in human society, in order to allow organizations to manage their resources, including mailboxes, message-sending allowances, and contracts.

A prototype based on our approach has been implemented. In the prototype, policies are separated from mechanisms. Examples of policies are presented, and a public policy interface is exposed to allow programmers to develop custom policies. Experimental results demonstrate that system performance is policy-dependent. In other words, as long as policies are carefully designed, communication involving negotiation has minimal overhead compared to communication in which senders deliver messages to recipients directly.
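A toy rendering of the negotiation step, assuming hypothetical Proposal and Contract types and an example recipient policy; the actual CyberOrgs interfaces are not given in the abstract:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    sender: str
    rate_per_day: int    # messages the sender asks to be allowed to send
    purpose: str

@dataclass
class Contract:
    sender: str
    allowance: int       # granted messages per day

def recipient_policy(proposal):
    """Example policy: refuse bulk mail, cap any sender at 10 messages/day."""
    if proposal.purpose == "bulk" or proposal.rate_per_day > 100:
        return None                                  # proposal rejected
    return Contract(proposal.sender, min(proposal.rate_per_day, 10))

contract = recipient_policy(Proposal("alice@example.org", 5, "personal"))
print(contract)   # Contract(sender='alice@example.org', allowance=5)
```

Because the policy function is the only piece a recipient customizes, this separation mirrors the thesis's policy/mechanism split.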
|
9 |
Integrated Feeder Switching and Voltage Control for Increasing Distributed Generation Penetration. Su, Sheng-yi. 24 July 2009.
The design and regulation of power equipment installed in a distribution system are based on unidirectional power flow. When distributed generators (DG) are added to a distribution system, they may cause technical problems such as bidirectional current flow, increased fault capacity, and degraded power quality. In general, the utility should make sure that its power system can be operated safely and reliably before integrating DG into the system. Without adequate measures for DG, the capacity of DG is restricted by fault current, short-circuit capacity, feeder voltage, and other constraints. This research focuses on the influence of DG operations on the distribution system and on increasing DG integration capacity. The impacts of different combinations of DG generation profiles and control strategies are first analyzed, followed by the use of the particle swarm optimization (PSO) technique to search for better feeder reconfigurations in order to increase DG integration capacity.
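For readers unfamiliar with PSO, the following bare-bones Python loop shows the mechanics the abstract applies to feeder reconfiguration. The real problem is discrete (switch states) with power-flow constraints; here both are abstracted into a toy continuous objective with a penalty term, purely as an illustration under those stated assumptions:

```python
import random

def fitness(x):
    capacity = -sum((xi - 0.7) ** 2 for xi in x)    # toy DG-capacity surrogate
    penalty = sum(max(0.0, xi - 1.0) for xi in x)   # toy feasibility penalty
    return capacity - 10.0 * penalty

def pso(dim=5, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.random() for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                     # per-particle best
    gbest = max(pbest, key=fitness)                 # swarm-wide best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest, fitness(gbest)

print(pso())   # converges toward x_i = 0.7 on this toy objective
```

In the thesis's setting, fitness would instead run a power-flow calculation for a candidate switch configuration and score the DG capacity it admits.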
|