21

Coordination and P2P computing

Ji, Lichun 27 September 2004
Peer-to-Peer (P2P) refers to a class of systems and applications that use distributed resources in a decentralized and autonomous manner to achieve a goal. A number of successful applications, like BitTorrent (for file and content sharing) and SETI@Home (for distributed computing), have demonstrated the feasibility of this approach. As a new form of distributed computing, P2P computing faces the same coordination problems as other forms of distributed computing. Coordination has long been considered an important issue in distributed computing, and many coordination models and languages have been developed. This research focuses on how to solve coordination problems in P2P computing; in particular, it aims to provide a seamless P2P computing environment in which the migration of computation components is transparent. The research extends Manifold, an event-driven coordination model, to meet P2P computing requirements and integrates the resulting P2P-Manifold model into an existing platform. The integration hides the complexity of the coordination model and makes it easy to use.
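
The coordination style described above is event-driven: components communicate only through events, never by holding direct references to one another, which is what makes their migration between peers transparent. As a rough illustration (this is not the Manifold or P2P-Manifold API, whose operations are not given in the abstract), a minimal event-driven coordinator might look like:

```python
from collections import defaultdict

class EventBus:
    """Minimal event-driven coordinator: components never call each other
    directly; they only raise and observe named events."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event, handler):
        self._handlers[event].append(handler)

    def raise_event(self, event, payload=None):
        for handler in self._handlers[event]:
            handler(payload)

# Hypothetical usage: a worker reacts to 'task.ready' events, so it can be
# migrated to another peer without the producer ever knowing or caring.
bus = EventBus()
bus.subscribe("task.ready", lambda task: print("processing", task))
bus.raise_event("task.ready", {"id": 42})
```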
22

An Architecture for Geographically-Oriented Service Discovery on the Internet

Li, Qiyan January 2002 (has links)
Most of the service discovery protocols available on the Internet are built upon its logical structure. This is frequently evident in the way these protocols behave. For instance, Jini and SLP service providers announce their presence by multicasting service advertisements, an approach that is neither intended to scale nor capable of scaling to the size of the Internet. With mobile and wireless devices becoming increasingly popular, there appears to be a need for performing service discovery in a wide-area context, as there is very little direct correlation between the Internet topology and geographic locations. Even for desktop computers, such a need can arise from time to time. This problem suggests the need for an architecture that allows users to locate resources on the Internet using geographic criteria. This thesis presents an architecture that can be deployed with minimal effort in the existing network infrastructure. The geographic information can be shared among multiple applications in a fashion similar to the way DNS is shared throughout the Internet. The design and implementation of the architecture are discussed in detail, and three case studies are used to illustrate how the architecture can be employed by various applications to satisfy dramatically different needs of end-users.
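
To make the idea of discovery by geographic criteria concrete, the sketch below filters a service registry by great-circle distance from the client. The registry layout, field names, and the 10 km default radius are illustrative assumptions, not the architecture the thesis proposes.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def discover(registry, lat, lon, service_type, radius_km=10.0):
    """Return services of the requested type within radius_km of (lat, lon)."""
    return [s for s in registry
            if s["type"] == service_type
            and haversine_km(lat, lon, s["lat"], s["lon"]) <= radius_km]

# Hypothetical registry entries
registry = [
    {"name": "printer-eng-2", "type": "printer", "lat": 43.47, "lon": -80.54},
    {"name": "printer-lib-1", "type": "printer", "lat": 43.46, "lon": -80.52},
]
print(discover(registry, 43.47, -80.54, "printer"))
```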
24

A SIMULATION PLATFORM FOR EXPERIMENTATION AND EVALUATION OF DISTRIBUTED-COMPUTING SYSTEMS

Xu, Yijia January 2005 (has links)
Distributed simulations have been widely applied to study complex systems that are analytically intractable and numerically prohibitive to evaluate. However, developing distributed simulations is not a trivial task. Moreover, distributed simulations may introduce difficulties for analysis because their data sources are decentralized and heterogeneous, so it is important to integrate these data sources seamlessly. In system-design applications it is also necessary to explore alternative hardware components, algorithms, and simulation models, and enabling these operations conveniently is equally critical for the distributed system. All these challenges raise the need for a workbench that facilitates rapid composition, evaluation, modification, and validation of components in a distributed system. This dissertation proposes a platform that addresses these challenges, which we refer to as the SPEED-CS platform. The architecture of the platform consists of multiple layers: a network layer, a component-management layer, a components layer, and a modeling layer. It is a multi-agent system (MAS) containing static agents and mobile agents. The mobile agent, referred to as the Data Exchange Agent, is able to visit sub-simulations and has the intelligence to find the data useful for output analysis. Experiments show that the MAS requires much less network bandwidth than a "centralized" system in which simulations report data directly to the output analyst. The application of the SPEED-CS platform is extended to handle systems with dynamic data sources. We demonstrate that the platform can be used for parallel-reality applications in which simulation parameters are updated according to real-time sensor information; Data Exchange Agents manage the collection, dissemination, and analysis of data from dynamic data sources, including simulations and physical systems. The SPEED-CS platform is also implemented to integrate simulations and optimizations. The system provides services that facilitate distributed computing, event services, naming services, and component management. An important feature is that the component sets can be updated and enlarged as different models are added, which enables the platform to serve as a testbed for exploring alternative system designs. Finally, we conclude the dissertation with several future research topics.
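
The bandwidth claim above, that a mobile Data Exchange Agent visiting sub-simulations needs far less traffic than having every simulation stream raw output to a central analyst, can be illustrated with a toy sketch. The class and field names below are assumptions for illustration only, not the SPEED-CS interfaces.

```python
class SubSimulation:
    """Stand-in for one sub-simulation holding its raw output data locally."""
    def __init__(self, name, samples):
        self.name = name
        self.samples = samples            # raw output that never leaves this site

class DataExchangeAgent:
    """Toy mobile agent: migrates to each sub-simulation, reduces the data
    locally, and carries only a small summary over the network."""
    def __init__(self):
        self.summaries = []

    def visit(self, sim):
        n = len(sim.samples)
        mean = sum(sim.samples) / n if n else 0.0
        self.summaries.append((sim.name, n, mean))   # a few numbers, not n samples

    def report(self):
        total = sum(n for _, n, _ in self.summaries)
        return sum(n * m for _, n, m in self.summaries) / total

sims = [SubSimulation("s1", [1.0, 2.0, 3.0]), SubSimulation("s2", [4.0, 5.0])]
agent = DataExchangeAgent()
for sim in sims:
    agent.visit(sim)        # conceptually, the agent moves to each site
print(agent.report())       # global mean (3.0) without shipping the raw samples
```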
25

Floyd : a functional programming language with distributed scope

Ilberg, Peter January 1998 (has links)
No description available.
26

Efficient time representation in distributed systems

Torres-Rojas, Francisco Jose January 1995 (has links)
No description available.
27

INDIGO: An In-Situ Distributed Gossip System Design and Evaluation

Ramanan, Paritosh 11 August 2015 (has links)
Distributed gossip in networks is a well-studied problem, and it can be carried out using different gossiping styles. This work focuses on the development, analysis, and evaluation of a novel in-situ distributed gossip protocol framework design called INDIGO. A core aspect of INDIGO is its ability to execute on a simulation setup as well as on a system testbed in a seamless manner, allowing easy portability. The evaluations focus on applying INDIGO to problems such as distributed average consensus, distributed seismic event location, and distributed seismic tomography. The results obtained validate the efficacy and reliability of INDIGO.
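
One of the applications named above, distributed average consensus, has a standard gossip formulation: pairs of neighbouring nodes repeatedly average their values, and every node converges to the global mean. The sketch below is that textbook scheme, shown only to illustrate the problem; it is not INDIGO's own protocol, whose message formats are not described here.

```python
import random

def gossip_average(values, neighbors, rounds=200, seed=0):
    """Pairwise gossip: repeatedly pick a random edge and average its endpoints.
    `values` maps node -> initial value; `neighbors` maps node -> adjacent nodes."""
    rng = random.Random(seed)
    x = dict(values)
    edges = [(u, v) for u in neighbors for v in neighbors[u] if u < v]
    for _ in range(rounds):
        u, v = rng.choice(edges)
        x[u] = x[v] = (x[u] + x[v]) / 2.0
    return x

values = {0: 10.0, 1: 0.0, 2: 5.0, 3: 1.0}
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # a 4-node ring
print(gossip_average(values, neighbors))   # every entry approaches the mean, 4.0
```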
28

Pivot-based Data Partitioning for Distributed k Nearest Neighbor Mining

Kuhlman, Caitlin Anne 20 January 2017 (has links)
This thesis addresses the need for a scalable distributed solution for k-nearest-neighbor (kNN) search, a fundamental data mining task. This unsupervised method poses particular challenges on shared-nothing distributed architectures, where global information about the dataset is not available to individual machines. The distance to search for neighbors is not known a priori, and therefore a dynamic data partitioning strategy is required to guarantee that exact kNN can be found autonomously on each machine. Pivot-based partitioning has been shown to facilitate bounding of partitions; however, state-of-the-art methods suffer from prohibitive data duplication (upwards of 20x the size of the dataset). In this work, an innovative method for solving exact distributed kNN search, called PkNN, is presented. The key idea is to perform computation over several rounds, leveraging pivot-based data partitioning at each stage. Aggressive data-driven bounds limit communication costs, and a number of optimizations are designed for efficient computation. An experimental study on large real-world data (over 1 billion points) compares PkNN to the state-of-the-art distributed solution, demonstrating that the benefits of additional stages of computation in the PkNN method heavily outweigh the added I/O overhead. PkNN achieves a data duplication rate close to 1, significant speedup over previous solutions, and scales effectively in data cardinality and dimension. PkNN can facilitate distributed solutions to other unsupervised learning methods which rely on kNN search as a critical building block. As one example, a distributed framework for the Local Outlier Factor (LOF) algorithm is given. Testing on large real-world and synthetic data with varying characteristics measures the scalability of PkNN and the distributed LOF framework in data size and dimensionality.
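
The pivot-based partitioning that PkNN builds on can be sketched simply: a few pivot points are chosen and every record is assigned to its closest pivot, so each machine receives one pivot's partition. The snippet below shows only that first step, under illustrative assumptions; the distance bounds and multi-round refinement that make the distributed search exact are what the thesis itself contributes.

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def pivot_partition(points, pivots):
    """Assign each point to its nearest pivot; one partition per pivot."""
    partitions = {i: [] for i in range(len(pivots))}
    for p in points:
        closest = min(range(len(pivots)), key=lambda i: euclidean(p, pivots[i]))
        partitions[closest].append(p)
    return partitions

points = [(0.1, 0.2), (0.9, 0.8), (0.15, 0.25), (0.8, 0.9)]
pivots = [(0.0, 0.0), (1.0, 1.0)]          # e.g. sampled from the data
print(pivot_partition(points, pivots))     # points near the origin vs. near (1, 1)
```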
30

Wireless Distributed Computing on the Android Platform

Karra, Kiran 23 October 2012 (has links)
The last couple of years have seen an explosive growth in smartphone sales. Additionally, the computational power of modern smartphones has been increasing at a high rate. For example, the popular iPhone 4S has a 1 GHz processor with 512 MB of RAM [5]. Other popular smartphones such as the Samsung Galaxy Nexus S also have similar specifications. These smartphones are as powerful as desktop computers of the 2005 era, and the tight integration of many different hardware chipsets in these mobile devices makes for a unique mobile platform that can be exploited for capabilities other than traditional uses of a phone, such as talk and text [4]. In this work, the concept of using smartphones that run the Android operating system for distributed computing over a wireless mesh network is explored. This is also known as wireless distributed computing (WDC). The complexities of WDC on mobile devices are different from those of traditional distributed computing because of, among other things, the unreliable wireless communications channel and the limited power available to each computing node. This thesis develops the theoretical foundations for WDC. A mathematical model representing the total amount of resources required to distribute a task with WDC is developed. It is shown that, for a distributable task, under certain conditions there exists a theoretical minimum amount of resources needed to perform the task using WDC. Finally, the WDC architecture is developed, an Android app implementation of the WDC architecture is tested, and it is shown in a practical application that using WDC to perform a task provides a performance increase over processing the job locally on the Android OS. / Master of Science
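
The trade-off the thesis formalizes, whether distributing a job across wireless nodes costs less than running it locally, can be illustrated with a simple hypothetical model. The formula and parameter names below are assumptions for illustration only, not the resource model derived in the thesis.

```python
def local_time(work, local_rate):
    """Time to finish the whole job on a single phone."""
    return work / local_rate

def distributed_time(work, n_nodes, node_rate, bytes_per_chunk, link_rate):
    """Time when the work is split evenly over n nodes, with each chunk shipped
    over the shared wireless channel before and after processing."""
    compute = (work / n_nodes) / node_rate
    transfer = 2 * n_nodes * bytes_per_chunk / link_rate   # send inputs + collect results
    return compute + transfer

work = 1e9                                        # abstract "operations"
print(local_time(work, 1e7))                      # ~100 s on one device
print(distributed_time(work, 5, 1e7, 2e6, 1e6))   # ~40 s: compute shrinks, transfer grows
```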
