  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

The design and implementation of a load distribution facility on Mach.

January 1997 (has links)
by Hsieh Shing Leung Arthur. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 78-81). / List of Figures --- p.viii / List of Tables --- p.ix / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Background and Related Work --- p.4 / Chapter 2.1 --- Load Distribution --- p.4 / Chapter 2.1.1 --- Load Index --- p.5 / Chapter 2.1.2 --- Task Transfer Mechanism --- p.5 / Chapter 2.1.3 --- Load Distribution Facility --- p.6 / Chapter 2.2 --- Load Distribution Algorithm --- p.6 / Chapter 2.2.1 --- Classification --- p.6 / Chapter 2.2.2 --- Components --- p.7 / Chapter 2.2.3 --- Stability and Effectiveness --- p.9 / Chapter 2.3 --- The Mach Operating System --- p.10 / Chapter 2.3.1 --- Mach kernel abstractions --- p.10 / Chapter 2.3.2 --- Mach kernel features --- p.11 / Chapter 2.4 --- Related Work --- p.12 / Chapter 3 --- The Design of Distributed Scheduling Framework --- p.16 / Chapter 3.1 --- System Model --- p.16 / Chapter 3.2 --- Design Objectives and Decisions --- p.17 / Chapter 3.3 --- An Overview of DSF Architecture --- p.17 / Chapter 3.4 --- The DSF server --- p.18 / Chapter 3.4.1 --- Load Information Module --- p.19 / Chapter 3.4.2 --- Movement Module --- p.22 / Chapter 3.4.3 --- Decision Module --- p.25 / Chapter 3.5 --- LD library --- p.28 / Chapter 3.6 --- User-Agent --- p.29 / Chapter 4 --- The System Implementation --- p.33 / Chapter 4.1 --- Shared data structure --- p.33 / Chapter 4.2 --- Synchronization --- p.37 / Chapter 4.3 --- Reentrant library --- p.39 / Chapter 4.4 --- Interprocess communication (IPC) --- p.42 / Chapter 4.4.1 --- Mach IPC --- p.42 / Chapter 4.4.2 --- Socket IPC --- p.43 / Chapter 5 --- Experimental Studies --- p.47 / Chapter 5.1 --- Load Distribution algorithms --- p.47 / Chapter 5.2 --- Experimental environment --- p.49 / Chapter 5.3 --- Experimental results --- p.50 / Chapter 5.3.1 --- Performance of LD algorithms --- p.50 / Chapter 5.3.2 --- Degree of task transfer --- p.54 / 
Chapter 5.3.3 --- Effect of threshold value --- p.55 / Chapter 6 --- Conclusion and Future Work --- p.57 / Chapter 6.1 --- Summary and Conclusion --- p.57 / Chapter 6.2 --- Future Work --- p.58 / Chapter A --- LD Library --- p.60 / Chapter B --- Sample Implementation of LD algorithms --- p.65 / Chapter B.1 --- LOWEST --- p.65 / Chapter B.2 --- THRHLD --- p.67 / Chapter C --- Installation Guide --- p.71 / Chapter C.1 --- Software Requirement --- p.71 / Chapter C.2 --- Installation Steps --- p.72 / Chapter C.3 --- Configuration --- p.73 / Chapter D --- User's Guide --- p.74 / Chapter D.1 --- The DSF server --- p.74 / Chapter D.2 --- The User Agent --- p.74 / Chapter D.3 --- LD experiment --- p.77 / Bibliography --- p.78
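The sample implementations of the LOWEST and THRHLD algorithms appear only in Appendix B of the thesis; as a rough illustration of how such sender-initiated load distribution policies typically behave, a minimal sketch (the function names, probe limit, and threshold semantics are assumptions for illustration, not the thesis code):

```python
import random

def lowest(local_load, loads, threshold, probe_limit=3, rng=None):
    """LOWEST-style policy: if the local load exceeds the threshold,
    poll up to probe_limit remote hosts and transfer to the least-loaded
    one, provided its load is below the threshold."""
    if local_load <= threshold:
        return None  # no transfer needed
    rng = rng or random.Random(0)
    probed = rng.sample(list(loads), min(probe_limit, len(loads)))
    best = min(probed, key=lambda h: loads[h])
    return best if loads[best] < threshold else None

def thrhld(local_load, loads, threshold, probe_limit=3, rng=None):
    """THRHLD-style policy: poll hosts one by one and transfer to the
    first host whose load is below the threshold."""
    if local_load <= threshold:
        return None
    rng = rng or random.Random(0)
    for host in rng.sample(list(loads), min(probe_limit, len(loads))):
        if loads[host] < threshold:
            return host
    return None

loads = {"hostA": 5, "hostB": 1, "hostC": 3}
print(lowest(6, loads, threshold=4))  # hostB: the least-loaded probed host
print(thrhld(6, loads, threshold=4))  # first probed host under the threshold
```

LOWEST pays for more probing but picks the best destination among those probed; THRHLD stops at the first acceptable host, trading placement quality for lower probing cost.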
62

SOM4R: Um Middleware para Aplicações Robóticas baseado na Arquitetura Orientada a Recursos / SOM4R: A Middleware for Robotic Applications based on the Resource-Oriented Architecture

Marcus Vinicius Duarte Veloso 14 February 2014 (has links)
Middleware is the software layer, situated between the operating system and the applications layer or between layers of applications, that provides an infrastructure for integrating applications and data in a distributed processing system.
In this thesis we propose a new software layer (middleware) for the integration and intelligent sharing of robotic resources (sensors, actuators and/or services) identified by URIs (Uniform Resource Identifiers). It uses the TCP/IP network, employs protocols subject to fewer firewall restrictions, provides a human-machine interface implemented as a web portal, and offers a resource description language that makes data more portable and interoperable between different types of computers, operating systems and programming languages. The proposed middleware facilitates the interactive computing of multiple interconnected applications combined into a larger application, usually distributed over a computer network consisting of heterogeneous hardware and software. With this middleware model, it is possible to ensure secure access to resources, abstract the diversity of robotic hardware, reuse robot software infrastructure across multiple research efforts, reduce the coupling between applications, encourage code portability and support scalability of the architecture.
63

Path selection in multi-overlay application layer multicast.

January 2009 (has links)
Lin, Yangyang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (p. 50-53). / Abstract also in Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Background and Related Work --- p.5 / Chapter 2.1 --- Latency-based Approaches --- p.5 / Chapter 2.2 --- Bandwidth-based Approaches --- p.6 / Chapter 2.3 --- Other Approaches --- p.8 / Chapter 2.4 --- Comparisons and Contributions --- p.9 / Chapter 3 --- RTT-based Path Selection Revisit --- p.11 / Chapter 3.1 --- Experimental Setting --- p.11 / Chapter 3.2 --- Relationship between RTT and Available Bandwidth --- p.12 / Chapter 3.3 --- Path Selection Accuracy and Efficiency of RTT --- p.13 / Chapter 4 --- Path Bandwidth Measurement --- p.16 / Chapter 4.1 --- In-band Bandwidth Probing --- p.17 / Chapter 4.2 --- Scheduling Constraints --- p.19 / Chapter 4.3 --- Cascaded Bandwidth Probing --- p.20 / Chapter 4.4 --- Model Verification --- p.23 / Chapter 5 --- Adaptive Multi-overlay ALM --- p.26 / Chapter 5.1 --- Overlay Construction --- p.26 / Chapter 5.2 --- Overlay Adaptation --- p.28 / Chapter 5.3 --- RTT-based Path Selection --- p.30 / Chapter 5.4 --- Topology-Adaptation-Induced Data Loss --- p.31 / Chapter 6 --- Performance Evaluation --- p.33 / Chapter 6.1 --- Simulation Setting --- p.33 / Chapter 6.2 --- Topology-Adaptation-Induced Data Loss --- p.34 / Chapter 6.3 --- Data Delivery Performance --- p.36 / Chapter 6.4 --- Performance Variation across Peers --- p.38 / Chapter 6.5 --- Performance of Cross Traffic --- p.40 / Chapter 6.6 --- Overlay Topology Convergence --- p.42 / Chapter 6.7 --- Impact of Overlay Adaptation Triggering Threshold --- p.44 / Chapter 6.8 --- Impact of Peer Buffer Size --- p.46 / Chapter 7 --- Conclusion and Future Work --- p.48 / References --- p.50
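The contrast between the latency-based and bandwidth-based parent selection that this thesis revisits can be illustrated with a minimal sketch (the candidate fields and selection rules are illustrative assumptions, not the thesis's algorithm):

```python
def pick_parent_by_rtt(candidates):
    """RTT-based selection: choose the candidate parent with the
    smallest round-trip time, ignoring available bandwidth."""
    return min(candidates, key=lambda c: c["rtt_ms"])

def pick_parent_by_bandwidth(candidates, stream_rate_kbps):
    """Bandwidth-based selection: among candidates whose measured
    available bandwidth can sustain the stream, choose the lowest-RTT
    one; fall back to the highest-bandwidth candidate otherwise."""
    feasible = [c for c in candidates if c["abw_kbps"] >= stream_rate_kbps]
    if feasible:
        return min(feasible, key=lambda c: c["rtt_ms"])
    return max(candidates, key=lambda c: c["abw_kbps"])

peers = [
    {"id": "p1", "rtt_ms": 20, "abw_kbps": 300},   # close but congested
    {"id": "p2", "rtt_ms": 80, "abw_kbps": 1200},  # far but fast
]
print(pick_parent_by_rtt(peers)["id"])             # p1
print(pick_parent_by_bandwidth(peers, 800)["id"])  # p2
```

The example makes the thesis's point concrete: a pure RTT criterion picks the nearby congested peer, while a bandwidth-aware criterion picks the peer that can actually sustain the stream.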
64

Prevention and detection of deadlock in distributed systems : a survey of current literature

Vaughn, Rayford B January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
65

Data Allocation for Distributed Programs

Setiowijoso, Liono 11 August 1995 (has links)
This thesis shows that both data and code must be efficiently distributed to achieve good performance in a distributed system. Most previous research has tried either to distribute code structures to improve parallelism or to distribute data to reduce communication costs. Code distribution (exploiting functional parallelism) distributes or duplicates function code to optimize parallel performance. Data distribution, on the other hand, tries to place data structures as close as possible to the function code that uses them, so that communication cost can be reduced. Dataflow researchers in particular have focused primarily on code partitioning and assignment. We have adapted existing data allocation algorithms for use with an existing dataflow-based system, ParPlum, which allows the execution of dataflow graphs on networks of workstations. To evaluate the impact of data allocation, we extended ParPlum to handle data structures more effectively. We then implemented tools to extract from dataflow graphs the information relevant to the mapping algorithms and fed this information to our version of a data distribution algorithm. To explore the relation between code and data parallelism, we also optimized the distribution of the loop function components and the data structure access components. All of this is done automatically, without programmer or user involvement. We ran a number of experiments using matrix multiplication as our workload, with different numbers of processors and different existing partitioning and allocation algorithms. Our results show that automatic data distribution greatly improves the performance of distributed dataflow applications. For example, with 15 x 15 matrices, applying data distribution speeds up execution by about 80% on 7 machines; using data distribution together with our code optimizations on 7 machines speeds up execution over the base case by 800%.
Our work shows that it is possible to make efficient use of distributed networks with compiler support and shows that both code mapping and data mapping must be considered to achieve optimal performance.
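The abstract does not detail the adapted allocation algorithms; as a minimal illustration of the kind of data placement involved, a sketch that assigns contiguous row blocks of a matrix to machines so each block lives where its output rows are computed (a deliberately simple stand-in, not ParPlum's algorithm):

```python
def block_row_distribution(n_rows, n_machines):
    """Assign contiguous row blocks of an n_rows matrix to machines,
    keeping each block on the machine that computes the corresponding
    output rows so that remote accesses are avoided."""
    base, extra = divmod(n_rows, n_machines)
    placement, start = {}, 0
    for m in range(n_machines):
        count = base + (1 if m < extra else 0)  # spread the remainder
        placement[m] = range(start, start + count)
        start += count
    return placement

# 15 rows over 7 machines, as in the matrix-multiplication workload above
p = block_row_distribution(15, 7)
print([len(p[m]) for m in range(7)])  # [3, 2, 2, 2, 2, 2, 2]
```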
66

GLOMAR : a component based framework for maintaining consistency of data objects within a heterogeneous distributed file system

Cuce, Simon January 2003 (has links)
Abstract not available
67

Semantically annotated multi-protocol adapter nodes: a new approach to implementing network-based information systems using ontologies.

Falkner, Nickolas John Gowland January 2007 (has links)
Network-based information systems are an important class of distributed systems that serve large and diverse user communities with information and essential network services. Centrally defined standards for interoperation and information exchange ensure that any required functionality is provided but do so at the expense of flexibility and ease of system evolution. This thesis presents a novel approach to implementing network-based information systems in a knowledge-representation-based format using an ontological description of the service. Our approach allows us to provide flexible distributed systems that can conform to global standards while still allowing local developments and protocol extensions. We can share data between systems if we provide an explicit specification of the relationship between the knowledge in the system and the structure and nature of the values shared between systems. Existing distributed systems may share data based on the values and structures of that data but we go beyond syntax-based value exchange to introduce a semantically-based exchange of knowledge. The explicit statement of the semantics and syntax of the system in a machine-interpretable form provides the automated integration of different systems through the use of adapter nodes. Adapter nodes are members of more than one system and seamlessly transport data between the systems. We develop a multi-tier software architecture that characterises the values held inside the system depending on an ontological classification of their structure and context to allow the definition of values in terms of the knowledge that they represent. Initially, received values are viewed as data, with no structural information. Structural and type information, and the context of the value can now be associated with it through the use of ontologies, leading to a value-form referred to as knowledge: a value that is structurally and contextually rich. 
This is demonstrated through an implementation process employing RDF, OWL and SPARQL to develop an ontological description of a network-based information system. The implementation provides evidence for the benefits and costs of representing a system in such a manner, including a complexity-based analysis of system performance. The implementation demonstrates the ability of such a representation to separate global standards-based requirements from local user requirements. This allows the addition of behaviour, specific to local needs, to otherwise global systems in a way that does not compromise the global standards. Our contribution is in providing a means for network-based information systems to retain the benefits of their global interaction while still allowing local customisation to meet user expectations. This thesis presents a novel use of ontologically-based representation and tools to demonstrate the benefits of the multi-tier software architecture with a separation of the contents of the system into data, information and knowledge. Our approach increases the ease of interoperation for large-scale distributed systems and facilitates the development of systems that can adapt to local requirements while retaining their wider interoperability. Further, our approach provides a strong contextual framework to ground concepts in the system and also supports the amalgamation of data from many sources to provide a rich and extensible network-based information system. / http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1295234 / Thesis (Ph.D.) -- School of Computer Science, 2007
68

Communication performance measurement and analysis on commodity clusters.

Abdul Hamid, Nor Asilah Wati January 2008 (has links)
Cluster computers have become the dominant architecture in high-performance computing. Parallel programs on these computers are mostly written using the Message Passing Interface (MPI) standard, so the communication performance of the MPI library for a cluster is very important. This thesis investigates several different aspects of performance analysis for MPI libraries, on both distributed memory clusters and shared memory parallel computers. The performance evaluation was done using MPIBench, a new MPI benchmark program that provides some useful new functionality compared to existing MPI benchmarks. Since there has been only limited previous use of MPIBench, some initial work was done on comparing MPIBench with other MPI benchmarks, and improving its functionality, reliability, portability and ease of use. This work included a detailed comparison of results from the Pallas MPI Benchmark (PMB), SKaMPI, Mpptest, MPBench and MPIBench on both distributed memory and shared memory parallel computers, which has not previously been done. This comparison showed that the results for some MPI routines were significantly different between the different benchmarks, particularly for the shared memory machine. A comparison was done between Myrinet and Ethernet network performance on the same machine, an IBM Linux cluster with 128 dual processor nodes, using the MPICH MPI library. The analysis focused mainly on the scalability and variability of communication times for the different networks, making use of the capability of MPIBench to generate distributions of MPI communication times. The analysis provided an improved understanding of the effects of TCP retransmission timeouts on Ethernet networks. This analysis showed anomalous results for some MPI routines. 
Further investigation showed that this is because MPICH uses different algorithms for small and large message sizes for some collective communication routines, and the message size where this changeover occurs is fixed, based on measurements using a cluster with a single processor per node. Experiments were done to measure the performance of the different algorithms, which demonstrated that for some MPI routines the optimal changeover points were very different between Myrinet and Ethernet networks and for 1 and 2 processors per node. Significant performance improvements can be made by allowing the changeover points to be tuned rather than fixed, particularly for commodity Ethernet networks and for clusters with more than 1 process per node. MPIBench was also used to analyse the MPI performance and scalability of a large ccNUMA shared memory machine, an SGI Altix 3000 with 160 processors. The results were compared with a high-end cluster, an AlphaServer SC with Quadrics QsNet interconnect. For most MPI routines the Altix showed significantly better performance, particularly when non-buffered copy was used. MPIBench proved to be a very capable tool for analyzing MPI performance in a variety of different situations. / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1331421 / Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2008
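The changeover-point tuning described above can be sketched as a simple message-size test (the algorithm names follow the common MPICH broadcast algorithms, but the default threshold here is an illustrative assumption; actual values differ by routine, network and library version):

```python
def pick_bcast_algorithm(msg_bytes, changeover_bytes=12288):
    """Pick a broadcast algorithm by message size. Libraries such as
    MPICH switch from a latency-oriented algorithm for short messages
    to a bandwidth-oriented one for long messages at a fixed size;
    making the threshold tunable lets it be matched to the network
    (e.g. Myrinet vs Ethernet) and to the processes per node."""
    if msg_bytes < changeover_bytes:
        return "binomial-tree"      # few steps, good for short messages
    return "scatter-allgather"      # pipelined, good for long messages

print(pick_bcast_algorithm(1024))                              # binomial-tree
print(pick_bcast_algorithm(1 << 20))                           # scatter-allgather
print(pick_bcast_algorithm(1 << 20, changeover_bytes=1 << 22)) # binomial-tree
```

The third call shows the tuning the thesis argues for: raising the changeover point keeps the short-message algorithm in use for larger messages, which can pay off on commodity Ethernet.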
69

On resource placements and fault-tolerant broadcasting in toroidal networks

AlMohammad, Bader Fahed AlBedaiwi 13 November 1997 (has links)
Parallel computers are classified into multiprocessors and multicomputers. A multiprocessor system usually has a shared memory through which its processors can communicate, whereas the processors of a multicomputer system communicate by message passing through an interconnection network. Toroidal networks are a widely used class of interconnection networks. Compared to a hypercube, a torus has a larger diameter but better tradeoffs, such as higher channel bandwidth and lower node degree. Results on resource placements and fault-tolerant broadcasting in toroidal networks are presented. Given a limited number of resources, it is desirable to distribute them over the interconnection network so that the distance between a non-resource node and its closest resource is minimized. This problem is known as distance-d placement: each non-resource node must be within a distance of d or less from at least one resource, using the fewest resources possible. Solutions for distance-d placements in 2D and 3D tori are proposed and compared with placements used so far in practice. Simulation experiments show that the proposed solutions are superior to those placements in terms of reducing average network latency. The complexity of a multicomputer increases the chances of processor failures, so designing fault-tolerant communication algorithms is necessary for sufficient utilization of such a system. Broadcasting (single-node one-to-all) is one of the important communication primitives in a multicomputer. A non-redundant fault-tolerant broadcasting algorithm for a faulty toroidal network is designed. The algorithm can tolerate up to (2n-2) processor failures.
Compared to the optimal algorithm in a fault-free n-dimensional toroidal network, the proposed algorithm requires at most 3 extra communication steps using cut-through packet routing, and (n + 1) extra steps using store-and-forward routing. / Graduation date: 1998
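A flavour of the distance-d placement problem can be given with a classic distance-1 construction on a 2D torus, verified by brute force (this "diagonal" placement is a textbook perfect-placement construction for side lengths divisible by 5, not necessarily the thesis's proposed solutions):

```python
def distance1_placement(k):
    """Place a resource at every node (i, j) with (i + 2*j) % 5 == 0.
    When k is a multiple of 5 this is a perfect distance-1 placement:
    each node is within torus distance 1 of exactly one resource."""
    return {(i, j) for i in range(k) for j in range(k)
            if (i + 2 * j) % 5 == 0}

def covered(k, resources, d=1):
    """Check that every node of the k x k torus is within distance d
    of some resource, using wraparound (torus) distance."""
    def dist(a, b):
        dx = min(abs(a[0] - b[0]), k - abs(a[0] - b[0]))
        dy = min(abs(a[1] - b[1]), k - abs(a[1] - b[1]))
        return dx + dy
    return all(any(dist((x, y), r) <= d for r in resources)
               for x in range(k) for y in range(k))

res = distance1_placement(10)
print(len(res), covered(10, res))  # 20 True: 20 resources cover all 100 nodes
```

The construction works because, moving around the radius-1 neighbourhood of any node, the value (i + 2j) mod 5 takes all five residues exactly once, so exactly one neighbour (or the node itself) is a resource.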
70

Automatic program restructuring for distributed memory multicomputers

Ikei, Mitsuru 04 1900 (has links) (PDF)
M.S. / Computer Science and Engineering / To compile a Single Program Multiple Data (SPMD) program for a Distributed Memory Multicomputer (DMMC), we need to find data that can be processed in parallel and to distribute that data among processors so that interprocessor communication becomes reasonably small. Loop restructuring is needed to find parallelism in imperative programs, and array alignment is one effective step toward reducing the interprocessor communication caused by array references. Automatic conversion of imperative programs using these two restructuring steps has been implemented in the Tiny loop restructuring tool. The restructuring strategy is derived by translating the approach the compiler uses for the functional language Crystal to the imperative language Tiny. Although an imperative language can have more varied loop structures than a functional language, making it harder to select the optimal one, we can obtain a loop structure comparable to Crystal's. We can also find array alignment preference (temporal + spatial) relations in a Tiny source program, and we add a new construct, the align statement, to Tiny to express these alignment preferences. In this thesis, we discuss these program restructuring strategies as used for Tiny, in comparison with Crystal.
