321
DJM: distributed Java machine for Internet computing / CUHK electronic theses & dissertations collection / January 2002
Wong, Yuk Yin. / "December 2002." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 193-206). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
322
The design and implementation of a load distribution facility on Mach / January 1997
by Hsieh Shing Leung Arthur. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 78-81). / List of Figures --- p.viii / List of Tables --- p.ix / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Background and Related Work --- p.4 / Chapter 2.1 --- Load Distribution --- p.4 / Chapter 2.1.1 --- Load Index --- p.5 / Chapter 2.1.2 --- Task Transfer Mechanism --- p.5 / Chapter 2.1.3 --- Load Distribution Facility --- p.6 / Chapter 2.2 --- Load Distribution Algorithm --- p.6 / Chapter 2.2.1 --- Classification --- p.6 / Chapter 2.2.2 --- Components --- p.7 / Chapter 2.2.3 --- Stability and Effectiveness --- p.9 / Chapter 2.3 --- The Mach Operating System --- p.10 / Chapter 2.3.1 --- Mach kernel abstractions --- p.10 / Chapter 2.3.2 --- Mach kernel features --- p.11 / Chapter 2.4 --- Related Work --- p.12 / Chapter 3 --- The Design of Distributed Scheduling Framework --- p.16 / Chapter 3.1 --- System Model --- p.16 / Chapter 3.2 --- Design Objectives and Decisions --- p.17 / Chapter 3.3 --- An Overview of DSF Architecture --- p.17 / Chapter 3.4 --- The DSF server --- p.18 / Chapter 3.4.1 --- Load Information Module --- p.19 / Chapter 3.4.2 --- Movement Module --- p.22 / Chapter 3.4.3 --- Decision Module --- p.25 / Chapter 3.5 --- LD library --- p.28 / Chapter 3.6 --- User-Agent --- p.29 / Chapter 4 --- The System Implementation --- p.33 / Chapter 4.1 --- Shared data structure --- p.33 / Chapter 4.2 --- Synchronization --- p.37 / Chapter 4.3 --- Reentrant library --- p.39 / Chapter 4.4 --- Interprocess communication (IPC) --- p.42 / Chapter 4.4.1 --- Mach IPC --- p.42 / Chapter 4.4.2 --- Socket IPC --- p.43 / Chapter 5 --- Experimental Studies --- p.47 / Chapter 5.1 --- Load Distribution algorithms --- p.47 / Chapter 5.2 --- Experimental environment --- p.49 / Chapter 5.3 --- Experimental results --- p.50 / Chapter 5.3.1 --- Performance of LD algorithms --- p.50 / Chapter 5.3.2 --- Degree of task transfer --- p.54 / Chapter 5.3.3 --- Effect of threshold value --- p.55 / Chapter 6 --- Conclusion and Future Work --- p.57 / Chapter 6.1 --- Summary and Conclusion --- p.57 / Chapter 6.2 --- Future Work --- p.58 / Chapter A --- LD Library --- p.60 / Chapter B --- Sample Implementation of LD algorithms --- p.65 / Chapter B.1 --- LOWEST --- p.65 / Chapter B.2 --- THRHLD --- p.67 / Chapter C --- Installation Guide --- p.71 / Chapter C.1 --- Software Requirement --- p.71 / Chapter C.2 --- Installation Steps --- p.72 / Chapter C.3 --- Configuration --- p.73 / Chapter D --- User's Guide --- p.74 / Chapter D.1 --- The DSF server --- p.74 / Chapter D.2 --- The User Agent --- p.74 / Chapter D.3 --- LD experiment --- p.77 / Bibliography --- p.78
323
Path selection in multi-overlay application layer multicast / January 2009
Lin, Yangyang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (p. 50-53). / Abstract also in Chinese. / Chapter Chapter 1 --- Introduction --- p.1 / Chapter Chapter 2 --- Background and Related Work --- p.5 / Chapter 2.1 --- Latency-based Approaches --- p.5 / Chapter 2.2 --- Bandwidth-based Approaches --- p.6 / Chapter 2.3 --- Other Approaches --- p.8 / Chapter 2.4 --- Comparisons and Contributions --- p.9 / Chapter Chapter 3 --- RTT-based Path Selection Revisit --- p.11 / Chapter 3.1 --- Experimental Setting --- p.11 / Chapter 3.2 --- Relationship between RTT and Available Bandwidth --- p.12 / Chapter 3.3 --- Path Selection Accuracy and Efficiency of RTT --- p.13 / Chapter Chapter 4 --- Path Bandwidth measurement --- p.16 / Chapter 4.1 --- In-band Bandwidth Probing --- p.17 / Chapter 4.2 --- Scheduling Constraints --- p.19 / Chapter 4.3 --- Cascaded Bandwidth Probing --- p.20 / Chapter 4.4 --- Model Verification --- p.23 / Chapter Chapter 5 --- Adaptive Multi-overlay ALM --- p.26 / Chapter 5.1 --- Overlay Construction --- p.26 / Chapter 5.2 --- Overlay Adaptation --- p.28 / Chapter 5.3 --- RTT-based Path Selection --- p.30 / Chapter 5.4 --- Topology-Adaptation-Induced Data Loss --- p.31 / Chapter Chapter 6 --- Performance Evaluation --- p.33 / Chapter 6.1 --- Simulation Setting --- p.33 / Chapter 6.2 --- Topology-Adaptation-Induced Data Loss --- p.34 / Chapter 6.3 --- Data Delivery Performance --- p.36 / Chapter 6.4 --- Performance Variation across Peers --- p.38 / Chapter 6.5 --- Performance of Cross Traffic --- p.40 / Chapter 6.6 --- Overlay Topology Convergence --- p.42 / Chapter 6.7 --- Impact of Overlay Adaptation Triggering Threshold --- p.44 / Chapter 6.8 --- Impact of Peer Buffer Size --- p.46 / Chapter Chapter 7 --- Conclusion and future work --- p.48 / References --- p.50
324
Prevention and detection of deadlock in distributed systems: a survey of current literature / Vaughn, Rayford B. / January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries
325
A Quantitative Approach to Medical Decision Making / Meredith, John W. / 05 1900
The purpose of this study is to develop a technique by which a physician may use a predetermined data base to derive a preliminary diagnosis for a patient with a given set of symptoms. The technique will not yield an absolute diagnosis, but rather will point the way to a set of most likely diseases upon which the physician may concentrate his efforts. There will be no reliance upon a data base compiled from poorly kept medical records with non-standardized terminology. While this study produces a workable tool for the physician to use in the process of medical diagnosis, the ultimate responsibility for the patient's welfare must still rest with the physician.
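A minimal sketch of the kind of ranking such a technique implies, assuming a small predetermined database of symptom-given-disease likelihoods and disease priors; the diseases, symptoms, and probabilities below are invented for illustration and are not taken from the thesis:

    def rank_diseases(observed_symptoms, database, priors):
        """Rank candidate diseases by unnormalised posterior probability."""
        scores = {}
        for disease, likelihoods in database.items():
            score = priors[disease]  # prior prevalence of the disease
            for symptom in observed_symptoms:
                # unrecorded symptom/disease pairs get a small default likelihood
                score *= likelihoods.get(symptom, 0.01)
            scores[disease] = score
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Invented illustrative data: P(symptom | disease) and disease priors.
    database = {
        "influenza":   {"fever": 0.90, "cough": 0.80, "rash": 0.05},
        "measles":     {"fever": 0.95, "cough": 0.50, "rash": 0.90},
        "common cold": {"fever": 0.30, "cough": 0.70, "rash": 0.01},
    }
    priors = {"influenza": 0.05, "measles": 0.001, "common cold": 0.20}

    print(rank_diseases({"fever", "cough"}, database, priors))

As the abstract stresses, such a ranking only narrows the candidate set; the diagnosis itself remains the physician's responsibility.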
326
Data Allocation for Distributed Programs / Setiowijoso, Liono / 11 August 1995
This thesis shows that both data and code must be efficiently distributed to achieve good performance in a distributed system. Most previous research has either tried to distribute code structures to improve parallelism or to distribute data to reduce communication costs. Code distribution (exploiting functional parallelism) is an effort to distribute or to duplicate function codes to optimize parallel performance. On the other hand, data distribution tries to place data structures as close as possible to the function codes that use them, so that communication cost can be reduced. In particular, dataflow researchers have primarily focused on code partitioning and assignment. We have adapted existing data allocation algorithms for use with an existing dataflow-based system, ParPlum. ParPlum allows the execution of dataflow graphs on networks of workstations. To evaluate the impact of data allocation, we extended ParPlum to more effectively handle data structures. We then implemented tools to extract from dataflow graphs information that is relevant to the mapping algorithms and fed this information to our version of a data distribution algorithm. To see the relation between code and data parallelism, we added optimizations for the distribution of the loop function components and the data structure access components. All of this is done automatically without programmer or user involvement. We ran a number of experiments using matrix multiplication as our workload. We used different numbers of processors and different existing partitioning and allocation algorithms. Our results show that automatic data distribution greatly improves the performance of distributed dataflow applications. For example, with 15 x 15 matrices, applying data distribution speeds up execution by about 80% on 7 machines. Using data distribution and our code optimizations on 7 machines speeds up execution over the base case by 800%. Our work shows that it is possible to make efficient use of distributed networks with compiler support and shows that both code mapping and data mapping must be considered to achieve optimal performance.
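The data-placement idea in the abstract can be illustrated with a toy heuristic. The sketch below is a hypothetical greedy placement, not ParPlum's actual algorithm, and the task, block, and machine names are invented:

    from collections import Counter

    def place_blocks(block_accesses, code_placement):
        """Place each data block on the machine hosting most of its accessors.

        block_accesses: block name -> list of task ids that read or write it
        code_placement: task id -> machine the task was assigned to
        """
        placement = {}
        for block, tasks in block_accesses.items():
            machines = Counter(code_placement[t] for t in tasks)
            placement[block] = machines.most_common(1)[0][0]
        return placement

    # Toy blocked matrix multiply: tasks already mapped to machines m0 and m1.
    code_placement = {"mul_00": "m0", "mul_01": "m1", "acc_0": "m0"}
    block_accesses = {"A[0]": ["mul_00", "acc_0"], "B[1]": ["mul_01"]}
    print(place_blocks(block_accesses, code_placement))  # {'A[0]': 'm0', 'B[1]': 'm1'}

Placing data where most of its consumers run is the intuition behind the reported speedups: remote accesses, and hence communication cost, are reduced.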
327
Conceptual Modeling of Data with Provenance / Archer, David William / 01 January 2011
Traditional database systems manage data, but often do not address its provenance. In the past, users were often implicitly familiar with data they used, how it was created (and hence how it might be appropriately used), and from which sources it came. Today, users may be physically and organizationally remote from the data they use, so this information may not be easily accessible to them. In recent years, several models have been proposed for recording provenance of data. Our work is motivated by opportunities to make provenance easy to manage and query. For example, current approaches model provenance as expressions that may be easily stored alongside data, but are difficult to parse and reconstruct for querying, and are difficult to query with available languages. We contribute a conceptual model for data and provenance, and evaluate how well it addresses these opportunities. We compare the expressive power of our model's language to that of other models. We also define a benchmark suite with which to study performance of our model, and use this suite to study key model aspects implemented on existing software platforms. We discover some salient performance bottlenecks in these implementations, and suggest future work to explore improvements. Finally, we show that our implementations can comprise a logical model that faithfully supports our conceptual model.
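As a contrast with provenance stored as opaque expression strings, the sketch below keeps provenance as structured links that ordinary code can traverse; it is illustrative only and is not the conceptual model proposed in the thesis, and all names in it are invented:

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        value: object
        source: str                                        # where the value came from
        derived_from: list = field(default_factory=list)   # upstream Item objects

    def lineage(item):
        """Yield every upstream item that contributed to this one."""
        for parent in item.derived_from:
            yield parent
            yield from lineage(parent)

    a = Item(3, source="sensor_A")
    b = Item(4, source="sensor_B")
    total = Item(a.value + b.value, source="aggregator", derived_from=[a, b])
    print([p.source for p in lineage(total)])   # ['sensor_A', 'sensor_B']

Because the links are structured rather than embedded in a string, questions such as "which sources fed this value?" can be answered without parsing and reconstructing provenance expressions, which is the difficulty the abstract identifies in earlier approaches.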
328
GLOMAR: a component based framework for maintaining consistency of data objects within a heterogeneous distributed file system / Cuce, Simon / January 2003
Abstract not available
329
Impacts of Selective Outsourcing of Information Technology and Information Services / January 1998
This study identifies the impacts on an internal Information Technology (IT) department's policies and procedures caused by outsourcing selective IT functions, and assesses the threats and opportunities this presents to the internal IT group. The trend to selectively outsource IT functions implies that this can be done with minimal disruption and risk to the IT department's policies and processes. This research investigates whether this assertion is valid and develops a model for internal IT departments to respond to the challenges presented by selective outsourcing. Existing models of outsourcing currently in use and available to organizations are reviewed to assess their suitability or adaptability for selective outsourcing, and from this review the areas of internal IT policy and procedures most affected are identified. An analysis of the threats and opportunities presented to the internal IT department is also provided.

Research was conducted into one organization's experience with selective outsourcing to investigate how internal IT departments could approach selective outsourcing of internal IT functions and how they could develop strategies for responding to the challenges it poses. A case study was conducted of a recent selective outsourcing arrangement within the IT group of the target organization. The personal interview method was adopted to survey a cross-section of management and staff from the work groups involved in the selective outsourcing arrangement. The results obtained revolved around three major themes:

1. Planning (identified as time constraints, resource constraints and work load).
2. Management control (encompassing the structuring of the outsourcing relationship, human resource concerns, level of ownership, communication, the structure of the internal IT group and inter-departmental concerns).
3. Process (covering the quality of the procedures, the internal knowledge they assume, their informality and concerns over adherence to them).

For an IT group to develop strategies to respond to the challenges of selective outsourcing, it was identified that the IT group needs to remove internal barriers to process and strive to achieve single ownership of processes within functional work groups; nurture a shift in internal groups' thinking towards more planning rather than doing; improve the quality of internal IT procedures; and implement appropriate project team structures for task-specific selective outsourcing engagements and for ongoing vendor relationship management.
330
Semantically annotated multi-protocol adapter nodes: a new approach to implementing network-based information systems using ontologies / Falkner, Nickolas John Gowland / January 2007
Network-based information systems are an important class of distributed systems that serve large and diverse user communities with information and essential network services. Centrally defined standards for interoperation and information exchange ensure that any required functionality is provided but do so at the expense of flexibility and ease of system evolution. This thesis presents a novel approach to implementing network-based information systems in a knowledge-representation-based format using an ontological description of the service. Our approach allows us to provide flexible distributed systems that can conform to global standards while still allowing local developments and protocol extensions. We can share data between systems if we provide an explicit specification of the relationship between the knowledge in the system and the structure and nature of the values shared between systems. Existing distributed systems may share data based on the values and structures of that data, but we go beyond syntax-based value exchange to introduce a semantically-based exchange of knowledge.

The explicit statement of the semantics and syntax of the system in a machine-interpretable form provides the automated integration of different systems through the use of adapter nodes. Adapter nodes are members of more than one system and seamlessly transport data between the systems. We develop a multi-tier software architecture that characterises the values held inside the system depending on an ontological classification of their structure and context, to allow the definition of values in terms of the knowledge that they represent. Initially, received values are viewed as data, with no structural information. Structural and type information, and the context of the value, can then be associated with it through the use of ontologies, leading to a value-form referred to as knowledge: a value that is structurally and contextually rich.

This is demonstrated through an implementation process employing RDF, OWL and SPARQL to develop an ontological description of a network-based information system. The implementation provides evidence for the benefits and costs of representing a system in such a manner, including a complexity-based analysis of system performance. The implementation demonstrates the ability of such a representation to separate global standards-based requirements from local user requirements. This allows the addition of behaviour, specific to local needs, to otherwise global systems in a way that does not compromise the global standards. Our contribution is in providing a means for network-based information systems to retain the benefits of their global interaction while still allowing local customisation to meet user expectations. This thesis presents a novel use of ontologically-based representation and tools to demonstrate the benefits of the multi-tier software architecture with a separation of the contents of the system into data, information and knowledge. Our approach increases the ease of interoperation for large-scale distributed systems and facilitates the development of systems that can adapt to local requirements while retaining their wider interoperability. Further, our approach provides a strong contextual framework to ground concepts in the system and also supports the amalgamation of data from many sources to provide a rich and extensible network-based information system. / http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1295234 / Thesis (Ph.D.) -- School of Computer Science, 2007
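A small sketch of the general pattern the abstract describes, assuming the rdflib Python library is available; the namespace, class, and property names below are invented for illustration and are not the ontology developed in the thesis:

    from rdflib import Graph, Namespace, Literal, RDF

    EX = Namespace("http://example.org/netinfo#")   # hypothetical ontology namespace
    g = Graph()
    g.bind("ex", EX)

    # Annotate a value received from a network service with ontology terms,
    # so it carries structure and context (knowledge) rather than bare syntax.
    g.add((EX.record42, RDF.type, EX.HostAddressRecord))
    g.add((EX.record42, EX.hasValue, Literal("192.0.2.17")))
    g.add((EX.record42, EX.receivedFrom, EX.adapter_node_1))

    # Retrieve the value by what it means, not by its wire format.
    results = g.query("""
        PREFIX ex: <http://example.org/netinfo#>
        SELECT ?v WHERE { ?r a ex:HostAddressRecord ; ex:hasValue ?v . }
    """)
    for row in results:
        print(row.v)

An adapter node of the kind the abstract describes, belonging to more than one system, could carry such annotated values between systems once the relationship between their ontologies is stated explicitly.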