  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Distributed multi-processing for high performance computing

Algire, Martin January 2000 (has links)
No description available.
232

Researches On Reverse Lookup Problem In Distributed File System

Zhang, Junyao 01 January 2010 (has links)
Recent years have witnessed an increasing demand for super data clusters. These petabyte-scale clusters can consist of thousands, or even tens of thousands, of storage nodes at a single site. At this scale, reliability becomes a great concern, and achieving high reliability requires data recovery and node reconstruction. Although extensive research has investigated how to sustain high performance and high reliability in the face of node failures at large scale, the reverse lookup problem, namely finding the list of objects stored on a failed node, remains open. This is especially true for storage systems with high requirements for data integrity and availability, such as scientific research data clusters. Existing solutions are either time consuming or expensive. Replication-based block placement can be used to realize fast reverse lookup, but such schemes are designed for centralized, small-scale storage architectures. In this thesis, we propose a fast and efficient reverse lookup scheme named Group-based Shifted Declustering (G-SD) layout that is able to locate the whole content of a failed node. G-SD extends our previous shifted declustering layout and applies it to large-scale file systems. Our mathematical proofs and real-life experiments show that G-SD is a scalable reverse lookup scheme that is up to an order of magnitude faster than existing schemes.
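The core idea behind a declustering-style layout, that replica placement is a pure function of the object id, so reverse lookup can be computed rather than looked up, can be sketched as follows. All names and parameters here are illustrative assumptions; the thesis's actual G-SD layout is more elaborate.

```python
# Hypothetical sketch of a shifted-declustering placement and its reverse
# lookup. Because placement is computable from the object id alone, the
# objects on a failed node can be enumerated without any lookup table.

def place(obj_id, n_nodes, n_replicas, shift=1):
    """Return the nodes holding replicas of obj_id: replica r sits
    r * shift slots past the primary node, modulo the node count."""
    primary = obj_id % n_nodes
    return [(primary + r * shift) % n_nodes for r in range(n_replicas)]

def reverse_lookup(failed_node, object_ids, n_nodes, n_replicas, shift=1):
    """Find every object with a replica on failed_node by inverting place()."""
    return [o for o in object_ids
            if failed_node in place(o, n_nodes, n_replicas, shift)]

# With 10 nodes and 3-way replication, node 3 holds replicas of any
# object whose primary node is 1, 2, or 3.
lost = reverse_lookup(3, range(100), n_nodes=10, n_replicas=3)
```

In a real system the candidate `object_ids` would themselves be derived per placement group rather than scanned, which is where the grouping in G-SD comes in.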
233

Virtualization And Self-organization For Utility Computing

Saleh, Mehdi 01 January 2011 (has links)
We present an alternative paradigm for utility computing when the delivery of service is subject to binding contracts; the solution we propose is based on resource virtualization and a self-management scheme. A virtual cloud aggregates a set of virtual machines that work in concert on the tasks specified by the service agreement. The first step in establishing a virtual cloud is to create a scale-free overlay network through a biased random walk; scale-free networks enjoy a set of remarkable properties, including robustness against random failures, favorable scaling, resilience to congestion, small diameter, and short average path length. Constraints such as limits on the cost per unit of service, limits on total cost, or the requirement to use only "green" computing cycles are then considered when a node of this overlay network decides whether to join the virtual cloud. A virtual cloud consists of a subset of the nodes assigned to the tasks specified by a Service Level Agreement (SLA), together with a virtual interconnection network, or overlay network, for the virtual cloud. SLAs could serve as a congestion control mechanism for an organization providing utility computing; this mechanism allows the system to reject new contracts when there is a danger of overloading the system and failing to fulfill existing contractual obligations. The objective of this thesis is to show that biased random walks in power-law networks are capable of responding to dynamic changes in the workload of utility computing.
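A biased random walk of the kind described above can be sketched as follows: each hop is chosen with probability proportional to some bias on the neighbor's degree, so high-degree hubs are visited disproportionately often. The graph, bias function, and parameters are illustrative assumptions, not the thesis's exact construction.

```python
import random

def biased_random_walk(adj, start, steps, bias=lambda deg: deg):
    """Walk the overlay graph for `steps` hops, choosing each next hop
    with probability proportional to bias(neighbor degree). Degree bias
    is one standard way to favor hubs when sampling or growing
    scale-free structure."""
    node = start
    visits = {v: 0 for v in adj}
    for _ in range(steps):
        nbrs = adj[node]
        weights = [bias(len(adj[n])) for n in nbrs]
        node = random.choices(nbrs, weights=weights)[0]
        visits[node] += 1
    return visits

# A hub (node 0) plus low-degree leaves: the walk should concentrate on 0.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
random.seed(7)
v = biased_random_walk(adj, start=3, steps=2000)
```

The visit counts approximate the walk's stationary distribution, which is what makes such walks useful for sampling candidate nodes for the virtual cloud.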
234

Generic Flow Algorithm for Analysis of Interdependent Multi-Domain Distributed Network Systems

Feinauer, Lynn Ralph 27 October 2009 (has links)
Since the advent of the computer in the late 1950s, scientists and engineers have pushed the limits of the computing power available to them to solve physical problems via computational simulations. Early computer languages evaluated program logic in a sequential manner, thereby forcing the designer to think of the problem solution in terms of a sequential process. Object-oriented analysis and design have introduced new concepts for solving systems of engineering problems. The term object-oriented was first introduced by Alan Kay [1] in the late 1960s; however, mainstream adoption of object-oriented programming did not occur until the mid- to late 1990s. The principles and methods underlying object-oriented programming center on objects that communicate with one another and work together to model the physical system. Program functions and data are grouped together to represent the objects. This dissertation extends object-oriented modeling concepts to model algorithms in a generic manner for solving interconnected, multi-domain problems. This work is based on an extension of Graph Trace Analysis (GTA), which was originally developed in the 1990s for power distribution system design. Because of GTA's ability to combine and restructure analysis methodologies from a variety of problem domains, it is now being used for integrated power distribution and transmission system design, operations, and control. Over the last few years, research has begun to formalize GTA into a multidiscipline approach that uses generic algorithms and a common model-based analysis framework. This dissertation provides an overview of the concepts used in GTA, and then discusses the main problems and potential generic-algorithm-based solutions associated with the design and control of interdependent reconfigurable systems. These include:
• Decoupling analysis into distinct component-level and system-level equations.
• Using iterator-based topology management and algorithms instead of matrices.
• Using composition to implement polymorphism and simplify data management.
• Using dependency components to structure analysis across different system types.
• Defining component-level equations for power, gas, and fluid systems in terms of across and through variables.
This dissertation presents a methodology for solving interdependent, multi-domain networks with generic algorithms. The methodology enables modeling of very large systems, and these systems can be solved without the need for matrix solvers. The solution technique incorporates a binary search algorithm for accelerating the solution of looped systems. The introduction of generic algorithms enables the system solver to be written so that it is independent of the system type. Example fluid and electrical systems are solved to illustrate the generic nature of the approach. / Ph. D.
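The idea of iterator-based topology management instead of matrices can be sketched in miniature: components are visited through traversal iterators over the connectivity, and a "through" variable (power, current, flow) is accumulated along the traversal rather than assembled into a system matrix. Class and method names here are assumptions for illustration, not the thesis's API.

```python
# A minimal sketch in the spirit of GTA: the system is analyzed by
# iterating over its topology, with no matrix ever assembled.

class Component:
    def __init__(self, name, load=0.0):
        self.name = name
        self.load = load          # the component's own demand
        self.children = []        # downstream components

    def feeder_iter(self):
        """Depth-first iterator over this component and everything downstream."""
        yield self
        for child in self.children:
            yield from child.feeder_iter()

def through_flow(comp):
    """The 'through' variable at comp is the sum of all downstream loads,
    computed by traversal rather than by solving a linear system."""
    return sum(c.load for c in comp.feeder_iter())

# A small radial system: substation feeding two laterals, one with a service.
root = Component("substation")
a = Component("lateral-a", load=5.0)
b = Component("lateral-b", load=3.0)
leaf = Component("service", load=2.0)
root.children = [a, b]
a.children = [leaf]
```

Because the solver only needs an iterator and a component-level equation, the same traversal works unchanged for electrical, gas, or fluid networks, which is the generic-algorithm point the abstract makes.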
235

Scalability Analysis of Parallel and Distributed Processing Systems via Fork and Join Queueing Network Models

Zeng, Yun 14 August 2018 (has links)
No description available.
236

Estimating temporary file sizes for query graphs in distributed relational database systems

Chao, Tian-Jy January 1985 (has links)
This thesis implements a part of the front-end software, the Optimizer, of the distributed database system being developed at Virginia Tech. The Optimizer generates a strategy for optimal query processing, and it represents and analyzes a given query by means of query trees and query graphs. This thesis develops PASCAL procedures that implement quantitative and qualitative rules to select query graphs requiring minimum communication costs. To develop these rules, the size of the temporary file generated by each required operation is estimated. The focus of this work is the implementation of a new technique for estimating these temporary file sizes. A detailed discussion of the implementation is presented and illustrated with a complete example, followed by a comparison with one of the existing methods, that proposed by Dwyer. / M.S.
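To make the estimation task concrete, here is a textbook-style estimator for the temporary file produced by an equijoin, using the classic selectivity formula |R||S| / max(V(R,a), V(S,a)). This is a standard approximation offered for illustration; it is not necessarily the thesis's new technique.

```python
def join_temp_size(rows_r, rows_s, distinct_r, distinct_s, out_row_bytes):
    """Estimate the size in bytes of the temporary file produced by an
    equijoin of R and S on attribute a, where distinct_r and distinct_s
    are the numbers of distinct values of a in each relation."""
    est_rows = rows_r * rows_s / max(distinct_r, distinct_s)
    return est_rows * out_row_bytes

# 1000-row R joined with 500-row S on a key with 100 distinct values in R:
# about 5000 result rows, 64 bytes each.
size = join_temp_size(1000, 500, 100, 50, out_row_bytes=64)
```

Summing such estimates over the operations in a query graph gives the communication cost that the selection rules minimize.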
237

Automatic, incremental, on-the-fly garbage collection of actors

Nelson, Jeffrey Ernest 10 June 2012 (has links)
Garbage collection is an important topic of research for operating systems, because applications are easier to write and maintain if they are unburdened by the concerns of storage management. The actor computation model is another important topic: it is a powerful, expressive model of concurrent computation. This thesis is motivated by the need for an actor garbage collector for a distributed real-time system under development by the Real-Time Systems Group at Virginia Tech. It is shown that traditional garbage collectors—even those that operate on computational objects—are not sufficient for actors. Three algorithms, with varying degrees of efficiency, are presented as solutions to the actor garbage collection problem. The correctness and execution complexity of the algorithms are derived. Implementation methods are explored, and directions for future research are proposed. / Master of Science
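One way to see why object-style reachability is insufficient for actors: an actor that no root references may still be live, because if it is unblocked it can send messages that eventually influence a root. The sketch below is a simplified rendering of that liveness criterion, assumed for illustration; the thesis's three algorithms are not reproduced here.

```python
def reachable(graph, starts):
    """Plain graph reachability from a set of start vertices."""
    seen, stack = set(starts), list(starts)
    while stack:
        for n in graph.get(stack.pop(), []):
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return seen

def live_actors(acquaintances, roots, unblocked):
    """An actor is live if a root can reach it, OR if it is unblocked and
    can itself reach a root (it may still send messages that matter),
    together with everything such an actor references."""
    inverse = {}
    for actor, targets in acquaintances.items():
        for t in targets:
            inverse.setdefault(t, []).append(actor)
    from_roots = reachable(acquaintances, roots)
    senders = reachable(inverse, roots) & set(unblocked)
    return from_roots | senders | reachable(acquaintances, senders)

# b is unreferenced by any root, but unblocked and acquainted with the
# root, so it is live; c references nothing and nothing references it.
acquaintances = {"root": ["a"], "b": ["root"], "c": []}
live = live_actors(acquaintances, roots={"root"}, unblocked={"b", "c"})
```

A collector using only `from_roots` would wrongly reclaim `b`, which is exactly the failure mode of applying an object collector to actors.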
238

Distributed Linda: design, development, and characterization of the data subsystem

Robinson, Patrick Glen 10 July 2009 (has links)
The Linda model for concurrent processing employs a shared data space approach for interprocess communication. The development of the distributed Linda system presented in this thesis implements distributed process creation within the shared data space framework and provides for parallel execution of Linda processes on a network of workstations. In addition, the design of the system’s shared data space allows it to be distributed over multiple hosts, providing for parallel access to various regions of the shared data space, thereby reducing contention among Linda processes for this resource. A preliminary analysis of the system’s execution profile has identified particular characteristics of the system which tend to limit computational performance under conditions of heavy I/O with the shared data space. An investigation of the system’s I/O behavior has led to the identification of a technique which can improve the I/O performance of the system by as much as an order of magnitude. However, this technique results in inefficient use of network bandwidth by the Linda system. Consequently, potential alternative techniques for improving the system’s I/O performance are presented. / Master of Science
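For readers unfamiliar with Linda, its shared data space is coordinated through a small set of primitives: `out` deposits a tuple, `rd` reads a matching tuple, and `in` reads and removes one. The toy, single-process sketch below illustrates those semantics only; the networking and distribution of the data space over multiple hosts, which is the subject of the thesis, is not shown.

```python
# A toy in-process tuple space illustrating Linda's coordination
# primitives. None is used as the wildcard in match patterns.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Deposit a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == f for p, f in zip(pattern, tup))

    def rd(self, pattern):
        """Read a matching tuple without removing it."""
        return next(t for t in self.tuples if self._match(pattern, t))

    def in_(self, pattern):
        """Read and remove a matching tuple ('in' is a Python keyword)."""
        t = self.rd(pattern)
        self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("task", 1, "pending"))
ts.out(("task", 2, "pending"))
```

In the distributed system described above, contention for these operations is what motivates partitioning the space across hosts.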
239

A model for end-to-end delay in distributed computer systems

Deeds, John J. 05 September 2009 (has links)
Mitchell [1,2] describes end-to-end performance for a LAN-based computer system as the total system throughput and delay for a single-thread transaction. This model is used for a variety of applications. The single-thread transaction might, for example, be a remote database update or a real-time control activity. To model end-to-end performance, one must include the host computers, the network interface units (NIUs), the host-NIU links, and the NIU-NIU links. Based on Jackson's Theorem, the total delay for a single-thread transaction traversing a computer network can be approximated by the sum of the delays in the host computers, the network interface units, the host-NIU links, and the NIU-NIU links. The host computer performance model can be refined by applying execution path analysis, which examines the structure of each software routine to be executed and provides an expression for time delay as a function of the probabilities associated with conditional branches and of the input data size. Spreadsheet models provide quick and convenient solutions for computer system tuning and capacity planning, as demonstrated by Thomas [10]. This thesis extends the typical modeling approach by providing a more detailed analysis of host computer delay, specifically through execution path analysis. In addition, spreadsheet models are implemented to demonstrate the execution path analysis and to provide comparisons with previously implemented models. / Master of Science
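The additive delay structure that Jackson's Theorem justifies can be shown in a few lines: model each stage (host, NIU, link) as an M/M/1 queue with mean sojourn time 1/(mu - lambda) and sum along the transaction's path. The rates below are illustrative assumptions, not figures from the thesis.

```python
def mm1_delay(service_rate, arrival_rate):
    """Mean time in system for an M/M/1 queue: 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def end_to_end_delay(stages):
    """Approximate total single-thread delay as the sum of per-stage
    delays, per Jackson's Theorem. Each stage is (mu, lambda)."""
    return sum(mm1_delay(mu, lam) for mu, lam in stages)

# host -> host-NIU link -> NIU -> NIU-NIU link -> NIU -> link -> host
# (transactions/sec; purely illustrative rates)
path = [(100.0, 60.0), (200.0, 60.0), (150.0, 60.0),
        (120.0, 60.0), (150.0, 60.0), (200.0, 60.0), (100.0, 60.0)]
total = end_to_end_delay(path)
```

Execution path analysis then refines the host stages by replacing the fixed service rate with an expression in branch probabilities and input size.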
240

A control framework for distributed (parallel) processing environments

Cline, George E. 04 December 2009 (has links)
A control framework for distributed (parallel) processing environments is introduced. The control framework supplies a centralized control sub-system within a distributed environment to manage the resources available on a network of computers. Special attention is centered on the execution characteristics of the framework as reflected within the Linda paradigm. Linda is a conceptually simple coordination language that supports parallelism. Linda and the control framework are combined to create Linda-LAN, a distributed parallel programming environment that utilizes a local area network of computers to provide a low-cost parallel processing solution. This thesis presents the design of the Linda-LAN environment along with an analysis on the applicability of the control framework within other distributed environments. In addition, the control sub-system's execution characteristics of stability and scalability are substantiated. / Master of Science
