581 |
Distributed load balancing in a multiple server system by shift-invariant protocol sequences. / CUHK electronic theses & dissertations collection. January 2013 (has links)
Zhang, Yupeng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 45-48). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts also in Chinese.
|
582 |
Decentralized periodic broadcasting for large-scale video streaming. January 2006 (has links)
To Ka Ki. / Thesis submitted in: August 2005. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 55-56). / Abstracts in English and Chinese. / Acknowledgement --- p.i / Abstract --- p.ii / Abstract (Chinese) --- p.iii / Chapter 1 --- Introduction --- p.5 / Chapter 2 --- Related Works --- p.12 / Chapter 3 --- Decentralization of Periodic Broadcasting --- p.16 / Chapter 3.1 --- Staggered Broadcasting --- p.16 / Chapter 3.2 --- Modified Staggered Broadcasting --- p.17 / Chapter 4 --- Peers Synchronization --- p.21 / Chapter 4.1 --- Integrating PCS with Periodic Broadcasting --- p.22 / Chapter 4.2 --- Distributed PCS --- p.23 / Chapter 5 --- Performance Evaluations of Decentralized Periodic Broadcasting Architecture --- p.27 / Chapter 5.1 --- Sensitivity to Clock Drift --- p.28 / Chapter 5.2 --- System Dynamic Behavior --- p.30 / Chapter 6 --- P-NICE --- p.33 / Chapter 6.1 --- The Original NICE Protocol --- p.34 / Chapter 6.2 --- Parallel Overlay Architecture --- p.35 / Chapter 6.3 --- Control Overheads --- p.37 / Chapter 7 --- Performance Evaluations of P-NICE --- p.39 / Chapter 7.1 --- End-to-End Packet Delivery Ratio --- p.40 / Chapter 7.2 --- Utilization of Network Links --- p.41 / Chapter 7.3 --- Convergence Time of End-to-End Packet Delivery Ratio --- p.44 / Chapter 7.4 --- Effect of Number of Overlays --- p.45 / Chapter 7.5 --- End-to-End Data Delivery Delay --- p.47 / Chapter 7.6 --- Load Balance of Overlays --- p.47 / Chapter 7.7 --- Peers Reception Quality --- p.48 / Chapter 7.8 --- Control Overheads --- p.51 / Chapter 8 --- Conclusions --- p.53 / Bibliography --- p.55
|
583 |
Improving the Productivity of Volunteer Computing. Toth, David M. 15 March 2008 (has links)
The price of computers has dropped drastically over the past few years, enabling many households to have at least one computer. At the same time, the performance of computers has skyrocketed, far surpassing what a typical user needs, and most of the computational power of personal computers is wasted. Volunteer computing projects attempt to use this wasted computational power to solve problems that would otherwise be computationally infeasible. Some of these problems include medical applications such as searching for cures for AIDS and cancer. However, the number of volunteer computing projects is increasing rapidly, requiring improvements in the field of volunteer computing to enable the growing number of projects to continue making significant progress. This dissertation examines two ways to increase the productivity of volunteer computing: using the volunteered CPU cycles more effectively and exploring ways to increase the amount of CPU cycles that are donated. Each of the existing volunteer computing projects uses one of two task retrieval policies to enable the volunteered computers participating in projects to retrieve work. This dissertation compares the amount of work completed by the volunteered computers participating in projects based on which of the two task retrieval techniques the project employs. Additional task retrieval policies are also proposed and evaluated. The most commonly used task retrieval policy is shown to be less effective than both the less frequently used policy and a proposed policy. The potential that video game consoles have to be used for volunteer computing is explored, as well as the potential benefits of constructing different types of volunteer computing clients, rather than the most popular client implementation: the screensaver.
In addition to examining methods of increasing the productivity of volunteer computing, 140 traces of computer usage, detailing when computers are available to participate in volunteer computing, are collected and made publicly available. Volunteer computing project-specific information that can be used in researching how to improve volunteer computing is collected and combined into the first summary of which we are aware.
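The task-retrieval comparison described above can be illustrated with a toy simulation: a volunteer node is only intermittently available, and each contact with the project server costs time, so fetching several tasks per contact can complete more work than fetching one at a time. Everything here (function names, costs, and the availability model) is an illustrative assumption, not a detail taken from the dissertation.

```python
import random

def simulate(buffer_size, availability, n_steps=10_000, task_len=50, seed=0):
    # One volunteer node: each time step it is idle (able to donate CPU)
    # with probability `availability`. Contacting the server to fetch
    # tasks costs a whole step; buffer_size tasks arrive per contact.
    # All names, costs, and the availability model are assumptions.
    rng = random.Random(seed)
    queue = []        # locally buffered tasks (remaining work units each)
    completed = 0
    for _ in range(n_steps):
        if rng.random() >= availability:
            continue              # machine is busy or off this step
        if not queue:
            queue = [task_len] * buffer_size   # fetch: costs this step
            continue
        queue[0] -= 1             # one step of work on the current task
        if queue[0] == 0:
            queue.pop(0)
            completed += 1
    return completed

# Buffering several tasks per server contact amortizes retrieval
# overhead on an intermittently available machine.
print(simulate(buffer_size=1, availability=0.6),
      simulate(buffer_size=5, availability=0.6))
```

With identical availability traces (same seed), the buffered policy completes at least as many tasks because it pays the per-contact cost less often.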
|
584 |
Exploratory Visualization of Data with Variable Quality. Huang, Shiping 11 January 2005 (has links)
Data quality, which refers to the correctness, uncertainty, completeness, and other aspects of data, has become increasingly prevalent and has been addressed across multiple disciplines. Data quality issues can be introduced at any stage of data manipulation, such as data collection, transformation, and visualization. Data visualization is a process of data mining and analysis using graphical presentation and interpretation. The correctness and completeness of discoveries made through visualization depend to a large extent on the quality of the original data. Without the integration of quality information into the data presentation, analysis of data using visualization is incomplete at best and can lead to inaccurate or incorrect conclusions at worst. This thesis addresses the issue of data quality visualization. Incorporating data quality measures into data displays is challenging in that the display is apt to become cluttered when faced with many dimensions and data records. We investigate the incorporation of data quality information into traditional multivariate data display techniques, and develop novel visualization and interaction tools that operate in data quality space. We validate our results using several data sets that have variable quality associated with dimensions, records, and data values.
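One common way to integrate quality information into a display, in the spirit of the approach described above, is to map a per-record quality score to a visual attribute such as opacity. The mapping below is a minimal sketch under assumed conventions (quality normalized to [0, 1], with a floor so low-quality records stay visible); it is not the thesis's actual encoding.

```python
def quality_to_alpha(quality, min_alpha=0.15):
    # Map a quality score in [0, 1] to an opacity so that low-quality
    # records fade toward the background without disappearing entirely.
    # (Illustrative mapping; the floor value is an arbitrary choice.)
    q = max(0.0, min(1.0, quality))
    return min_alpha + (1.0 - min_alpha) * q

records = [
    {"value": 3.2, "quality": 1.00},   # fully trusted measurement
    {"value": 5.1, "quality": 0.40},   # partially imputed
    {"value": 9.9, "quality": 0.05},   # mostly guessed
]
for r in records:
    print(f"value={r['value']}  alpha={quality_to_alpha(r['quality']):.2f}")
```

The same scalar could equally drive hue, saturation, or point size; opacity is used here because it degrades gracefully in dense displays.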
|
585 |
Optimal Load Balancing in a Beowulf Cluster. Adams, Daniel Alan 02 May 2005 (has links)
PANTS (PANTS Application Node Transparency System) is a suite of programs designed to add transparent load balancing to a Beowulf cluster, so that processes are transferred among the nodes of the cluster to improve performance. PANTS provides the option of using one of several load balancing policies, each taking a different approach. This paper studies the scalability and performance of these policies on large clusters and under various workloads. We measure the performance of our policies on our current cluster, and use that performance data to build simulations that test the policies in larger clusters and under differing workloads. Two policies, one deterministic and one non-deterministic, are presented which offer optimal steady-state performance. We also present best practices and discuss the major challenges of load balancing policy design.
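As a rough sketch of the deterministic-versus-non-deterministic distinction mentioned above, the toy policies below assign jobs to cluster nodes either in a fixed round-robin order or uniformly at random, and then compare the resulting load imbalance. The policies, names, and numbers are illustrative assumptions, not the actual PANTS policies.

```python
import random

def round_robin(n_nodes, jobs):
    # Deterministic policy: jobs visit nodes in a fixed cyclic order.
    loads = [0] * n_nodes
    for i, cost in enumerate(jobs):
        loads[i % n_nodes] += cost
    return loads

def random_choice(n_nodes, jobs, seed=0):
    # Non-deterministic policy: each job goes to a uniformly random node.
    rng = random.Random(seed)
    loads = [0] * n_nodes
    for cost in jobs:
        loads[rng.randrange(n_nodes)] += cost
    return loads

jobs = [1] * 1000   # identical unit-cost jobs
rr = round_robin(8, jobs)
rc = random_choice(8, jobs)
# Imbalance = heaviest node's load minus lightest node's load.
print("round robin imbalance:", max(rr) - min(rr))
print("random imbalance:     ", max(rc) - min(rc))
```

With uniform jobs the deterministic policy is perfectly balanced, while random placement shows the statistical fluctuation that a real non-deterministic policy must keep bounded.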
|
586 |
Multicast-Based Interactive-Group Object-Replication for Fault Tolerance. Soria-Rodriguez, Pedro 25 October 1999 (has links)
Distributed systems are clusters of computers working together on one task. The sharing of information across different architectures, and the timely and efficient use of network resources for communication among computers, are some of the problems involved in implementing a distributed system. In the case of a low-latency system, network utilization and the responsiveness of the communication mechanism are even more critical.
This thesis introduces a new approach for the distribution of messages to computers in the system, in which the Common Object Request Broker Architecture (CORBA) is used in conjunction with IP multicast to implement a fault-tolerant, low-latency distributed system. Fault tolerance is achieved by replication of the current state of the system across several hosts. An update of the current state is initiated by a client application that contacts one of the state object replicas. The new information then needs to be distributed to all the members of the distributed system (the object replicas).
This state update is accomplished using a two-phase commit protocol, implemented over a binary tree structure along with IP multicast to reduce network utilization, distribute the computational load associated with state propagation, and achieve faster communication among the members of the distributed system. The use of IP multicast enhances the speed of message distribution, while the two-phase commit protocol encapsulates IP multicast to produce a reliable multicast service suitable for fault-tolerant, low-latency distributed applications. The binary tree structure, finally, is essential for sharing the load of processing state commit responses.
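The tree-structured response collection described above can be sketched as follows: replicas' PREPARE votes are combined bottom-up through a binary tree (stored here as a list, with the children of node i at 2i+1 and 2i+2), so the coordinator receives one aggregated response rather than one message per replica. This is a minimal single-process sketch, not the thesis's CORBA/IP-multicast implementation.

```python
def collect_votes(votes, i=0):
    # Combine votes bottom-up through a complete binary tree stored in a
    # list: node i's children are 2*i+1 and 2*i+2. Each internal node
    # forwards its own vote ANDed with its subtree's votes, so the root
    # (the coordinator) sees a single combined answer.
    if i >= len(votes):
        return True
    return (votes[i]
            and collect_votes(votes, 2 * i + 1)
            and collect_votes(votes, 2 * i + 2))

def two_phase_commit(votes):
    # Phase 1: the coordinator multicasts PREPARE; votes flow up the tree.
    decision = "COMMIT" if collect_votes(votes) else "ABORT"
    # Phase 2: the coordinator multicasts the decision to every replica.
    return decision

print(two_phase_commit([True] * 7))           # every replica is ready
print(two_phase_commit([True, False, True]))  # one replica votes no
```

The AND-aggregation is what lets the tree halve the coordinator's message-processing load at each level, which is the load-sharing benefit the abstract attributes to the binary tree.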
|
587 |
Clutter-Based Dimension Reordering in Multi-Dimensional Data Visualization. Peng, Wei 11 January 2005 (has links)
Visual clutter denotes a disordered collection of graphical entities in information visualization. It can obscure the structure present in the data. Even in a small dataset, visual clutter makes it hard for the viewer to find patterns, relationships, and structure. In this thesis, I study visual clutter with four distinct visualization techniques, and present the concept and framework of Clutter-Based Dimension Reordering (CBDR). Dimension order is an attribute that can significantly affect a visualization's expressiveness. By varying the dimension order in a display, it is possible to reduce clutter without reducing data content or modifying the data in any way. Clutter reduction is a display-dependent task. In this thesis, I apply the CBDR framework to four different visualization techniques. For each display technique, I determine what constitutes clutter in terms of display properties, then design a metric to measure the visual clutter in that display. Finally, I search for an order that minimizes the clutter in the display. Different algorithms for the search process are discussed in this thesis as well. To gather users' responses to the clutter measures used in the Clutter-Based Dimension Reordering process and to validate the usefulness of CBDR, I also conducted an evaluation with two groups of users. The study results show that users find our approach helpful for visually exploring datasets. The users also had many comments and suggestions for the CBDR approach, as well as for visual clutter reduction in general. The content and results of the user study are included in this thesis.
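A minimal sketch of the reordering idea: define a clutter metric over a dimension order, then search orders for one that minimizes it. Here clutter is (hypothetically) the summed dissimilarity between adjacent axes, as might suit a parallel-coordinates display, and the search is a brute-force enumeration; the thesis defines display-specific metrics and discusses more scalable search algorithms.

```python
from itertools import permutations

def clutter(order, dissim):
    # Clutter score for one dimension order: the sum of pairwise
    # dissimilarity between adjacent axes. This metric is invented for
    # illustration; CBDR defines a metric per display type.
    return sum(dissim[a][b] for a, b in zip(order, order[1:]))

def best_order(n_dims, dissim):
    # Exhaustive search over all n_dims! orders; feasible only for
    # small dimension counts.
    return min(permutations(range(n_dims)), key=lambda o: clutter(o, dissim))

# Toy symmetric dissimilarity matrix for 4 dimensions.
d = [[0, 9, 1, 4],
     [9, 0, 8, 2],
     [1, 8, 0, 7],
     [4, 2, 7, 0]]
print(best_order(4, d))
```

For realistic dimension counts the factorial search space is the reason heuristic orderings (greedy chaining, swapping, and similar) are needed.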
|
588 |
Development of an Automated Anesthesia System for the Stabilization of Physiological Parameters in Rodents. Hawkins, Kevin Michael 24 April 2003 (has links)
The testing of any physiological diagnostic system in vivo depends critically on the stability of the anesthetized animal used. That is, if the systemic physiological parameters are not tightly controlled, it is exceedingly difficult to assess the precision and accuracy of the system or interpret the consequences of disease. In order to ensure that all measurements taken using the experimental system are not affected by fluctuations in physiological state, the animal must be maintained in a tightly controlled physiologic range. The main goal of this project was to develop a robust monitoring and control system capable of maintaining the physiological parameters of the anesthetized animal in a predetermined range, using the instrumentation already present in the laboratory, and based on the LabVIEW software interface. A single user interface was developed that allowed for monitoring and control of key physiological parameters including body temperature (BT), mean arterial blood pressure (MAP), and end-tidal CO2 (ETCO2). Embedded within this interface was a fuzzy logic based control system designed to mimic the decision making of an anesthetist. The system was tested by manipulating the blood pressure of a group of anesthetized animal subjects using bolus injections of epinephrine and continuous infusions of phenylephrine (a vasoconstrictor) and sodium nitroprusside (a vasodilator). This testing showed that the system was able to significantly reduce the deviation from the set pressure (as measured by the root mean square value) while under control in the hypotension condition (p < 0.10). Though both the short-term and hypertension testing showed no significant improvement, the control system did successfully manipulate the anesthetic percentage in response to changes in MAP.
Though currently limited by the control variables being used, this system is an important first step towards a fully automated monitoring and control system and can be used as the basis for further research.
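A fuzzy-logic controller of the kind described above can be sketched with triangular membership functions over the blood-pressure error and a small rule base. The rule set, membership shapes, and output gains below are invented for illustration; they are not the thesis's actual controller.

```python
def tri(x, a, b, c):
    # Triangular membership function: 0 outside (a, c), peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def anesthetic_adjustment(map_error):
    # Fuzzy rule base loosely mimicking an anesthetist's decisions.
    # map_error = measured MAP minus target MAP, in mmHg (assumed units).
    # High pressure -> deepen anesthesia; low pressure -> lighten it.
    low  = tri(map_error, -40, -20, 0)   # pressure well below target
    ok   = tri(map_error, -10,   0, 10)  # pressure near target
    high = tri(map_error,   0,  20, 40)  # pressure well above target
    # Defuzzify: weighted average of each rule's output (change in
    # anesthetic percentage per control step; gains are assumptions).
    num = low * (-0.5) + ok * 0.0 + high * (+0.5)
    den = low + ok + high
    return num / den if den else 0.0

print(anesthetic_adjustment(-25))  # hypotensive: lighten anesthesia
print(anesthetic_adjustment(0))    # on target: no change
print(anesthetic_adjustment(25))   # hypertensive: deepen anesthesia
```

A real controller would add rules for rate of change and clamp the anesthetic percentage to a safe range, but the membership/rule/defuzzify structure is the same.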
|
589 |
Feature-Oriented Specification of Hardware Bus Protocols. Freitas, Paul Michael 29 April 2008 (has links)
Hardware engineers frequently create formal specification documents as part of the verification process. Doing so is a time-consuming and error-prone process, as the primary documents for communication standards use a mixture of prose, diagrams, and tables. We would like this process to be partially automated, with the engineer's role being to refine a machine-generated skeleton of a specification's formal model. We have created a preliminary intermediate language which allows specifications to be captured with formal semantics, and allows an engineer to easily find, understand, and modify critical portions of the specification. We have converted most of ARM's AMBA AHB specification to our language; our representation is able to follow the structure of the original document.
|
590 |
WAIT: Selective Loss Recovery for Multimedia Multicast. Mane, Pravin D. 31 July 2000 (has links)
Recently the Internet has been increasingly used for multi-party applications such as video conferencing, video-on-demand, and shared whiteboards. Multicast extensions to IP to support multi-party applications are best effort, often resulting in packet loss within the network. Since some multicast applications cannot tolerate packet loss, most existing reliable multicast schemes recover each and every lost packet. However, multimedia applications can tolerate a certain amount of packet loss and are sensitive to long recovery delays. We propose a new loss recovery technique that selectively repairs lost packets based upon the amount of packet loss and the delay expected for the repair. Our technique sends a special WAIT message down the multicast tree when a loss is detected, in order to reduce the number of retransmission requests. We also propose an efficient sender-initiated multicast trace-route mechanism for determining the multicast topology, and a mechanism to deliver the topology information to the multicast session participants. We evaluate our proposed technique using an event-driven network simulator, comparing it with two popular reliable multicast protocols, SRM and PGM. We conclude that our proposed WAIT protocol can reduce the overhead on a multicast session as well as improve the average end-to-end latency of the session.
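The selective-repair decision described above (repair only when the loss matters and the repair can arrive in time) can be sketched as a simple predicate. The parameter names and threshold values are illustrative assumptions, not figures from the thesis.

```python
def should_repair(loss_fraction, expected_repair_delay_ms,
                  max_tolerable_loss=0.05, playout_deadline_ms=200):
    # Selective repair in the spirit of WAIT: request a retransmission
    # only when the loss is too large for the media codec to conceal AND
    # the repair can plausibly arrive before the playout deadline.
    # Thresholds here are invented defaults for illustration.
    if loss_fraction <= max_tolerable_loss:
        return False   # small enough to conceal; don't add traffic
    if expected_repair_delay_ms > playout_deadline_ms:
        return False   # repair would arrive too late to be useful
    return True

print(should_repair(0.02, 50))    # small loss: tolerate it
print(should_repair(0.20, 50))    # heavy, repairable loss: recover
print(should_repair(0.20, 500))   # repair too slow: skip it
```

Suppressing hopeless or unnecessary repair requests is what lets a scheme like this cut retransmission overhead while keeping end-to-end latency low.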
|