21

Coordination and P2P computing

Ji, Lichun 27 September 2004
Peer-to-Peer (P2P) refers to a class of systems and applications that use distributed resources in a decentralized and autonomous manner to achieve a goal. A number of successful applications, like BitTorrent (for file and content sharing) and SETI@Home (for distributed computing), have demonstrated the feasibility of this approach.

As a new form of distributed computing, P2P computing has the same coordination problems as other forms of distributed computing. Coordination has long been considered an important issue in distributed computing, and many coordination models and languages have been developed.

This research focuses on how to solve coordination problems in P2P computing. In particular, it aims to provide a seamless P2P computing environment in which the migration of computation components is transparent. The research extends Manifold, an event-driven coordination model, to meet P2P computing requirements and integrates the resulting P2P-Manifold model into an existing platform. The integration hides the complexity of the coordination model and makes it easy to use.
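The abstract does not show the Manifold model itself; purely as a rough illustration of the event-driven coordination style it describes, the following Python sketch (hypothetical names, not the P2P-Manifold API) separates the coordinator, which reacts only to events, from the components that do the computing:

# Minimal sketch of event-driven coordination (hypothetical, not the
# actual Manifold/P2P-Manifold API): the coordinator reacts only to
# events and never touches the data the computation components exchange.

class EventBus:
    def __init__(self):
        self.handlers = {}          # event name -> list of callbacks

    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def raise_event(self, event, **info):
        for handler in self.handlers.get(event, []):
            handler(**info)

bus = EventBus()

# Coordinator side: pure control logic, driven by lifecycle events.
def on_peer_joined(peer):
    print(f"coordinator: wiring {peer} into the computation")

def on_peer_left(peer):
    print(f"coordinator: migrating work away from {peer}")

bus.subscribe("peer_joined", on_peer_joined)
bus.subscribe("peer_left", on_peer_left)

# Computation components only announce what happened:
bus.raise_event("peer_joined", peer="node-17")
bus.raise_event("peer_left", peer="node-17")

Separating control (events) from computation (data flow) in this way is what lets a component migrate between peers without the other components noticing.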
22

An Architecture for Geographically-Oriented Service Discovery on the Internet

Li, Qiyan January 2002
Most service discovery protocols available on the Internet are built upon its logical structure, and this is frequently evident in how they behave. For instance, Jini and SLP service providers announce their presence by multicasting service advertisements, an approach that is neither intended to scale nor capable of scaling to the size of the Internet. With mobile and wireless devices becoming increasingly popular, there is a growing need to perform service discovery in a wide-area context; because there is very little direct correlation between Internet topology and geographic location, protocols built on the logical structure cannot serve this need. Even for desktop computers, such a need can arise from time to time. This suggests the necessity of an architecture that allows users to locate resources on the Internet using geographic criteria. This thesis presents such an architecture, one that can be deployed with minimal effort in the existing network infrastructure. The geographic information can be shared among multiple applications in a fashion similar to the way DNS is shared throughout the Internet. The design and implementation of the architecture are discussed in detail, and three case studies illustrate how the architecture can be employed by various applications to satisfy dramatically different needs of end-users.
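The abstract leaves the architecture's internals to the thesis body; purely as an illustration of what geographically-oriented lookup means, here is a toy Python sketch (a single in-memory registry with invented service names, not the DNS-like distributed design the thesis proposes) that matches services against a geographic radius:

# Toy sketch of geographic service lookup (illustrative only; the
# thesis's actual architecture is distributed and DNS-like).
import math

registry = []   # list of (service_name, lat, lon) tuples

def register(name, lat, lon):
    registry.append((name, lat, lon))

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in km.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def discover(lat, lon, radius_km):
    # Return all services within radius_km of the query point.
    return [name for name, slat, slon in registry
            if haversine_km(lat, lon, slat, slon) <= radius_km]

register("campus-printer", 43.47, -80.54)    # hypothetical services
register("city-hotspot", 43.65, -79.38)
print(discover(43.47, -80.54, 25))           # -> ['campus-printer']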
24

A Simulation Platform for Experimentation and Evaluation of Distributed-Computing Systems

Xu, Yijia January 2005
Distributed simulations have been widely applied as a method to study complex systems that are analytically intractable and numerically prohibitive to evaluate. However, developing distributed simulations is not a trivial task. Moreover, distributed simulations may complicate analysis because their data sources are decentralized and heterogeneous, and it is important to integrate these data sources seamlessly. In system-design applications, it must also be possible to explore alternatives among hardware components, algorithms, and simulation models, and enabling these operations conveniently is critical for the distributed system as well. All these challenges raise the need for a workbench that facilitates rapid composition, evaluation, modification, and validation of components in a distributed system.

This dissertation proposes a platform to address these challenges, which we refer to as the SPEED-CS platform. Its architecture consists of multiple layers: a network layer, a component management layer, a components layer, and a modeling layer. It is a multi-agent system (MAS) containing static agents and mobile agents. The mobile agent, referred to as the Data Exchange Agent, is able to visit sub-simulations and has the intelligence to find the data useful for output analysis. Experiments show that the MAS requires much less network bandwidth than a "centralized" system in which simulations report data to an output analyst.

The application of the SPEED-CS platform is extended to handle systems with dynamic data sources. We demonstrate that the platform can be used for parallel-reality applications, where simulation parameters are updated according to real-time sensor information. Data Exchange Agents manage the collection, dissemination, and analysis of data from dynamic data sources, including simulations and/or physical systems.

The SPEED-CS platform is also implemented to integrate simulations and optimizations. The system provides services that facilitate distributed computing, event services, naming services, and component management. One important feature is that the component sets can be updated and enlarged as different models are added, which enables the platform to work as a testbed for exploring alternative system designs.

Finally, we conclude the dissertation with several future research topics.
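As a rough illustration of why a mobile Data Exchange Agent saves bandwidth, the following Python sketch (hypothetical classes, not the SPEED-CS API) has an agent visit each sub-simulation and carry back only a local summary instead of streaming raw samples to a central analyst:

# Sketch of the mobile-agent idea (hypothetical API, not SPEED-CS
# itself): each sub-simulation's raw samples stay local; only a small
# (count, mean) summary travels with the agent.

class SubSimulation:
    def __init__(self, name, samples):
        self.name = name
        self.samples = samples          # raw output data, kept local

class DataExchangeAgent:
    def __init__(self):
        self.summaries = {}

    def visit(self, sim):
        # Summarize in place; only the summary crosses the network.
        n = len(sim.samples)
        mean = sum(sim.samples) / n
        self.summaries[sim.name] = (n, mean)

sims = [SubSimulation("radar", [0.9, 1.1, 1.0]),
        SubSimulation("comms", [2.0, 2.2])]
agent = DataExchangeAgent()
for sim in sims:
    agent.visit(sim)
print(agent.summaries)     # {'radar': (3, 1.0), 'comms': (2, 2.1)}

A centralized analyst would have transferred every sample; here the traffic is independent of how much data each sub-simulation produces.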
25

Floyd: a functional programming language with distributed scope

Ilberg, Peter January 1998
No description available.
26

Efficient time representation in distributed systems

Torres-Rojas, Francisco Jose January 1995
No description available.
27

INDIGO: An In-Situ Distributed Gossip System Design and Evaluation

Ramanan, Paritosh 11 August 2015
Distributed gossip in networks is a well-studied problem that can be approached with different gossiping styles. This work focuses on the development, analysis, and evaluation of a novel in-situ distributed gossip protocol framework called INDIGO. A core aspect of INDIGO is its ability to execute seamlessly on both a simulation setup and a system-testbed setup, allowing easy portability. The evaluations focus on applying INDIGO to problems such as distributed average consensus, distributed seismic event location, and distributed seismic tomography. The results obtained validate the efficacy and reliability of INDIGO.
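INDIGO's own protocol is not specified in the abstract; the textbook randomized pairwise-gossip algorithm below (plain Python, a baseline rather than INDIGO itself) shows what distributed average consensus means: every node's value converges to the global mean through purely local exchanges.

# Classic randomized pairwise gossip for average consensus (a textbook
# baseline, not INDIGO's protocol): at each step two nodes replace
# their values with the pair's mean. For simplicity any two nodes may
# talk; in a real network the pair would be neighbors.
import random

random.seed(1)
values = [10.0, 2.0, 6.0, 8.0, 4.0]      # one value per node

for _ in range(2000):
    i, j = random.sample(range(len(values)), 2)
    avg = (values[i] + values[j]) / 2
    values[i] = values[j] = avg

print(values)            # all entries close to the true mean, 6.0

Each exchange preserves the sum of all values, which is why the fixed point every node reaches must be the global average.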
28

Pivot-based Data Partitioning for Distributed k Nearest Neighbor Mining

Kuhlman, Caitlin Anne 20 January 2017
This thesis addresses the need for a scalable distributed solution for k-nearest-neighbor (kNN) search, a fundamental data mining task. This unsupervised method poses particular challenges on shared-nothing distributed architectures, where global information about the dataset is not available to individual machines. The distance to search for neighbors is not known a priori, and therefore a dynamic data partitioning strategy is required to guarantee that exact kNN can be found autonomously on each machine. Pivot-based partitioning has been shown to facilitate bounding of partitions; however, state-of-the-art methods suffer from prohibitive data duplication (upwards of 20x the size of the dataset). In this work, an innovative method for solving exact distributed kNN search, called PkNN, is presented. The key idea is to perform the computation over several rounds, leveraging pivot-based data partitioning at each stage. Aggressive data-driven bounds limit communication costs, and a number of optimizations are designed for efficient computation. An experimental study on large real-world data (over 1 billion points) compares PkNN to the state-of-the-art distributed solution, demonstrating that the benefits of additional stages of computation heavily outweigh the added I/O overhead. PkNN achieves a data duplication rate close to 1, a significant speedup over previous solutions, and scales effectively in data cardinality and dimension. PkNN can facilitate distributed solutions to other unsupervised learning methods that rely on kNN search as a critical building block. As one example, a distributed framework for the Local Outlier Factor (LOF) algorithm is given. Testing on large real-world and synthetic data with varying characteristics measures the scalability of PkNN and the distributed LOF framework in data size and dimensionality.
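As a sketch of the general idea PkNN builds on (not its exact algorithm, which adds the data-driven bounds that limit duplication), the following Python fragment assigns each point to its nearest pivot, producing Voronoi-style partitions that separate machines could process independently:

# Sketch of pivot-based data partitioning: pick a few pivots, then
# route every point to the partition of its closest pivot. Each
# partition could then be handled by a different machine.
import math
import random

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

random.seed(0)
points = [(random.random(), random.random()) for _ in range(1000)]
pivots = random.sample(points, 4)            # e.g. 4 partitions

partitions = {i: [] for i in range(len(pivots))}
for p in points:
    nearest = min(range(len(pivots)), key=lambda i: dist(p, pivots[i]))
    partitions[nearest].append(p)

print([len(part) for part in partitions.values()])   # sizes sum to 1000

The hard part, which the thesis addresses, is that a point's k nearest neighbors may lie in an adjacent partition, so naive schemes replicate boundary data heavily; PkNN's multi-round bounds keep that duplication near 1x.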
30

Distributed Reconfigurable Simulation for Communication Systems

Kim, Song Hun 27 November 2002
The simulation of physical-layer communication systems often requires long execution times, due to the nature of Monte Carlo simulation: to obtain a valid result by producing enough errors, the number of bits or symbols being simulated must significantly exceed the inverse of the bit error rate of interest. This often results in hours or even days of execution on a personal computer or workstation.

Reconfigurable devices can perform certain functions faster than general-purpose processors, and they are more flexible than Application-Specific Integrated Circuit (ASIC) devices. This fast yet flexible property can be exploited for the simulation of communication systems. However, although reconfigurable devices are more flexible than ASICs, they are often not compatible with each other. Programs are usually written in hardware description languages such as the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), and a program written for one device often cannot be used for another because the devices have different architectures and programs are architecture-specific.

Distributed computing, which is not a new concept, refers to interconnecting a number of computing elements, often heterogeneous, to perform a given task. By applying distributed computing, reconfigurable devices and digital signal processors can be connected to form a distributed reconfigurable simulator.

In this work, it is shown that using reconfigurable devices can greatly increase the speed of simulation. A simple physical-layer communication system model was created using a WildForce board, a reconfigurable device, and its performance was compared to a traditional software simulation of the same system. Using the reconfigurable device, performance increased by approximately one hundred times, demonstrating the feasibility of using reconfigurable devices for the simulation of physical-layer communication systems.

In addition, a middleware architecture for distributed reconfigurable simulation is proposed and implemented. Using the middleware, reconfigurable devices and various computing elements can be integrated. The proposed middleware has several components: the master works as the server for the system; an object is any device that has computing capability; a resource is an algorithm or function implemented for a certain object; and an object and its resources are connected to the system through an agent. The middleware system was tested with three different objects and six resources, and its performance analyzed. The results show that it is possible to interconnect various objects to perform a distributed simulation using reconfigurable devices. Possible future research to enhance the architecture is also discussed.
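The inverse relationship between bit error rate and required simulation length is easy to see in a toy Monte Carlo model; the sketch below simulates BPSK over an AWGN channel (a standard textbook model, not the thesis's WildForce setup) and makes clear why a BER of 1e-6 demands on the order of 1e8 simulated bits:

# Why Monte Carlo BER simulation is slow: to observe roughly 100
# errors at a bit error rate of 1e-6, about 1e8 bits must pass
# through the channel model. Sketch: BPSK over AWGN, hard decisions.
import math
import random

def simulate_ber(ebn0_db, n_bits):
    # Noise standard deviation for unit-energy BPSK symbols.
    sigma = math.sqrt(1.0 / (2 * 10 ** (ebn0_db / 10)))
    errors = 0
    for _ in range(n_bits):
        bit = random.choice((0, 1))
        tx = 1.0 if bit else -1.0              # BPSK mapping
        rx = tx + random.gauss(0.0, sigma)     # AWGN channel
        if (rx > 0) != (bit == 1):             # hard-decision error
            errors += 1
    return errors / n_bits

print(simulate_ber(6.0, 200_000))   # roughly 2.4e-3 for BPSK at 6 dB

At 6 dB the error rate is high enough that 200,000 bits suffice; at the low error rates of practical interest the same loop body must run millions of times more, which is exactly the workload the thesis offloads to reconfigurable hardware.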
