81

On the Use of Double Auctions in Resource Allocation Problems in Large-scale Distributed Systems

Feng, Yuan 24 August 2011 (has links)
In this thesis, we explore double auction markets as a general approach to resource allocation problems in large-scale distributed systems, problems traditionally solved with optimization techniques. Widely adopted in real-world markets, double auctions can arbitrate mappings between participating players and traded commodities in a decentralized fashion, with every player selfishly maximizing her own utility. Through the design of prefetching strategies in peer-assisted video-on-demand systems, we show how the problem of minimizing server bandwidth costs by reallocating media content can be solved gracefully by a double auction market. However, not every resource allocation problem satisfies the requirements of double auctions. We illustrate the limitations of double auctions with an example of virtual machine migration in container-based datacenters, which we instead model as a Nash bargaining game and solve with a Nash bargaining solution.
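To make the clearing step concrete, here is a minimal call double auction in Python: bids and asks are matched greedily, and each matched pair trades at the midpoint price. This is a sketch of the generic mechanism only, not the prefetching market designed in the thesis.

```python
# Minimal call double auction: match the highest bids with the lowest
# asks while a trade remains mutually beneficial. Illustrative sketch;
# the thesis's market for media prefetching is more elaborate.

def clear_double_auction(bids, asks):
    """bids/asks: lists of (agent, price). Returns (buyer, seller, price) trades."""
    bids = sorted(bids, key=lambda b: b[1], reverse=True)  # best bid first
    asks = sorted(asks, key=lambda a: a[1])                # best ask first
    trades = []
    for (buyer, bid), (seller, ask) in zip(bids, asks):
        if bid < ask:                  # no further mutually beneficial trade
            break
        trades.append((buyer, seller, (bid + ask) / 2))    # split the surplus
    return trades

print(clear_double_auction([("p1", 10), ("p2", 7), ("p3", 4)],
                           [("q1", 5), ("q2", 6), ("q3", 9)]))
# [('p1', 'q1', 7.5), ('p2', 'q2', 6.5)]
```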
83

Efficient Pattern Search in Large, Partial-Order Data Sets

Nichols, Matthew January 2008 (has links)
The behaviour of a large, distributed system is inherently complex. One step towards making this behaviour more understandable to a user involves instrumenting the system and collecting data about its execution. We can model the data as traces (representing various sequential entities in the system, such as single-threaded processes) that contain both events local to the trace and communication events involving another trace. Visualizing this data provides a modest benefit to users, as it makes basic interactions in the system clearer and, with some user effort, more complex interactions can be determined. Unfortunately, visualization by itself is not an adequate solution, especially for large numbers of events and complex interactions among traces. A search facility can make this event data more useful. Previous work produced frameworks and algorithms that could form the core of such a search facility; however, shortcomings in the completeness of the frameworks and in the efficiency of the algorithms resulted in an inconsistent, incomplete, and inefficient solution. This thesis takes steps to remedy this situation. We propose a provably complete framework for determining precedence between sets of events, and propose additions to a previous pattern-specification language so it can specify a wider variety of search patterns. We improve the efficiency of the existing search algorithm, and provide a new, more efficient algorithm that processes a pattern in a fundamentally different way. Furthermore, the various proposed improvements have been implemented and are analysed empirically.
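As a generic illustration of precedence testing over partial-order event data (assuming a vector-timestamp representation, which the thesis does not prescribe), the happened-before relation can be checked component-wise:

```python
# Happened-before on vector timestamps: e precedes f iff e's clock is
# component-wise <= f's and the two differ. Events are concurrent when
# neither precedes the other.

def precedes(vc_e, vc_f):
    return all(a <= b for a, b in zip(vc_e, vc_f)) and vc_e != vc_f

def concurrent(vc_e, vc_f):
    return not precedes(vc_e, vc_f) and not precedes(vc_f, vc_e)

e = [1, 0, 0]   # local event on trace 0
f = [1, 2, 0]   # event on trace 1 after receiving from e
g = [0, 0, 1]   # independent event on trace 2
assert precedes(e, f)
assert concurrent(e, g) and concurrent(f, g)
```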
84

An Efficient Computation of Convex Closure on Abstract Events

Bedasse, Dwight Samuel January 2005 (has links)
The behaviour of distributed applications can be modeled as the occurrence of events and how these events relate to each other. Event data collected according to this event model can be visualized using process-time diagrams that are constructed from a collection of traces and events. One of the main characteristics of a distributed system is the large number of events that are involved, especially in practical situations. This large number of events, and hence large process-time diagrams, make distributed-system observation difficult for the user. However, event-predicate detection, a search mechanism able to detect and locate arbitrary predicates within a process-time diagram or event collection, can help the user to make sense of this large amount of data. Ping Xie used the convex-abstract event concept, developed by Thomas Kunz, to search for hierarchical event predicates. However, his algorithm for computing convex closure to construct compound events, and especially hierarchical compound events (i.e., compound events that contain other compound events), is inefficient. In one case it took, on average, close to four hours to search the collection of event data for a specific hierarchical event predicate. In another case, it took nearly one hour. This dissertation discusses an efficient algorithm, an extension of Ping Xie's algorithm, that employs a caching scheme to build compound and hierarchical compound events based on matched sub-patterns. In both cases cited above, the new execution times were reduced by over 94%. They now take, on average, less than four minutes.
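The following sketch illustrates the flavour of the two ingredients, convex closure and caching, on a generic partial order; the representation and the caching scheme here are assumptions for illustration, and are far simpler than the dissertation's algorithm.

```python
# Convex closure of a seed set under a partial order "leq": include
# every event x that lies between two seed members (a leq x leq b).
# Closures of repeated seeds are cached, loosely mirroring the reuse
# of matched sub-patterns; brute-force sketch only.

def convex_closure(events, leq, seed):
    closure = set(seed)
    for x in events:
        if any(leq(a, x) for a in seed) and any(leq(x, b) for b in seed):
            closure.add(x)
    return frozenset(closure)

_cache = {}

def cached_closure(events, leq, seed):
    key = frozenset(seed)
    if key not in _cache:
        _cache[key] = convex_closure(events, leq, key)
    return _cache[key]

# Example with a total order on integers (leq is plain <=).
print(sorted(cached_closure(range(10), lambda a, b: a <= b, {2, 5})))
# [2, 3, 4, 5]
```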
86

A distributed hard real-time Java system for high mobility components

Rho, Sangig 17 February 2005 (has links)
In this work we propose a methodology for providing real-time capabilities to component-based, on-the-fly reconfigurable, distributed systems. In such systems, software components migrate across computational resources at run-time to allow applications to adapt to changes in user requirements or to external events. We describe how we achieve run-time reconfiguration in distributed Java applications by appropriately migrating servers. Guaranteed-rate schedulers at the servers provide the necessary temporal protection and so simplify remote method invocation management. We describe how we manage overhead and resource utilization by controlling the parameters of the server schedulers. According to our measurements, this methodology provides real-time capability to component-based reconfigurable distributed systems in an efficient and effective way. In addition, we propose a new resource discovery protocol, REALTOR, which is based on a combination of pull-based and push-based resource information dissemination. REALTOR has been designed for real-time component-based distributed applications in very dynamic or adverse environments. REALTOR supports survivability and information assurance by allowing the migration of components to safe locations under emergencies such as external attack, malfunction, or lack of resources. Simulation studies show that under normal and heavy load conditions REALTOR remains very effective in finding available resources, and does so with a reasonably low communication overhead. REALTOR 1) effectively locates resources under highly dynamic conditions, 2) has an overhead that is system-size independent, and 3) works well in highly adverse environments. We evaluate the effectiveness of a REALTOR implementation as part of Agile Objects, an infrastructure for real-time capable, highly mobile Java components.
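A toy directory mixing push-based advertisement with pull-based queries conveys the basic idea; the class and method names here are illustrative and do not reflect REALTOR's actual protocol messages.

```python
# Toy resource directory combining push and pull dissemination.
# Sketch only; REALTOR's real protocol handles dynamics, timeouts,
# and adversity that this omits.

class ResourceDirectory:
    def __init__(self):
        self.ads = {}                          # node -> advertised capacity

    def push_advertise(self, node, capacity):
        """Nodes periodically push their spare capacity."""
        self.ads[node] = capacity

    def pull_query(self, needed):
        """A migrating component pulls candidate hosts on demand."""
        return [n for n, cap in self.ads.items() if cap >= needed]

directory = ResourceDirectory()
directory.push_advertise("host-a", 4)
directory.push_advertise("host-b", 1)
print(directory.pull_query(2))                 # ['host-a']
```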
87

Evaluation of Globe Location Service Performance

Reynisson, Gauti January 2000 (has links)
Performance evaluation of Globe’s location service is becoming necessary in order to help steer development in the right direction. In this paper I put the current implementation of the location service to work, and design and set up a number of tests with input data from a mobile phone environment provided by the Stanford University Mobile Activity Traces (SUMATRA). It turns out that the implementation is not ready for performance evaluation of this scale after all, and that no performance evaluation can be done with SUMATRA, since the data contains too many inconsistencies.
88

The weighted Byzantine Agreement Problem

Bridgman, John Francis 13 August 2012 (has links)
This report presents a weighted version of the Byzantine Agreement Problem and its solution under various conditions. In this version, each machine is assigned a weight depending on the application. Instead of assuming that at most $f$ out of $N$ machines fail, the algorithm assumes that the total weight of the machines that fail is at most $\rho < 1/3$. When each machine has weight $1/N$, this problem reduces to the standard Byzantine Generals Agreement Problem. By choosing weights appropriately, the weighted Byzantine Agreement Problem can be applied to situations where a subset of processes are more trusted. By using weights, the system can reach consensus in the presence of Byzantine failures, even when more than $N/3$ processes fail, so long as the total weight of the failed processes is less than $1/3$. Some properties of the Weighted Byzantine Agreement algorithms when the weight vectors are not the same at every process are discussed. Also, a method to update the weights of the processes after execution of the weighted Byzantine Agreement is given. The update method guarantees that the weight of any correct process is never reduced, and the weight of any faulty process, suspected by correct processes whose total weight is at least $1/4$, is reduced to $0$ for future instances. A short discussion of some weight assignment strategies is also given.
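A sketch of the weight-update rule described above, in Python: a process accused by correct processes of combined weight at least 1/4 has its weight zeroed. The renormalization step is an assumption added for readability, not necessarily part of the report's rule.

```python
# Weight-update sketch after an agreement instance: any process whose
# accusers (assumed correct) carry combined weight >= 1/4 is zeroed.
# Renormalization is an illustrative assumption.

def update_weights(weights, suspicions, threshold=0.25):
    """weights: {pid: w} summing to 1; suspicions: {pid: set of accusers}."""
    new = dict(weights)
    for pid, accusers in suspicions.items():
        if sum(weights[a] for a in accusers) >= threshold:
            new[pid] = 0.0                      # drop the suspected process
    total = sum(new.values())
    return {p: (w / total if total else 0.0) for p, w in new.items()}

weights = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}
print(update_weights(weights, {4: {1, 2}}))     # accuser weight 0.7 >= 0.25
# {1: 0.444..., 2: 0.333..., 3: 0.222..., 4: 0.0}
```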
89

A new approach to detecting failures in distributed systems

Leners, Joshua Blaise 18 September 2015 (has links)
Fault-tolerant distributed systems often handle failures in two steps: first, detect the failure and, second, take some recovery action. A common approach to detecting failures is end-to-end timeouts, but using timeouts brings problems. First, timeouts are inaccurate: just because a process is unresponsive does not mean that process has failed. Second, choosing a timeout is hard: short timeouts can exacerbate the problem of inaccuracy, and long timeouts can make the system wait unnecessarily. In fact, a good timeout value—one that balances the choice between accuracy and speed—may not even exist, owing to the variance in a system’s end-to-end delays. This dissertation posits a new approach to detecting failures in distributed systems: use information about failures that is local to each component, e.g., the contents of an OS’s process table. We call such information inside information, and use it as the basis in the design and implementation of three failure reporting services for data center applications, which we call Falcon, Albatross, and Pigeon. Falcon deploys a network of software modules to gather inside information in the system, and it guarantees that it never reports a working process as crashed by sometimes terminating unresponsive components. This choice helps applications by making reports of failure reliable, meaning that applications can treat them as ground truth. Unfortunately, Falcon cannot handle network failures because guaranteeing that a process has crashed requires network communication; we address this problem in Albatross and Pigeon. Instead of killing, Albatross blocks suspected processes from using the network, allowing applications to make progress during network partitions. Pigeon renounces interference altogether, and reports inside information to applications directly and with more detail to help applications make better recovery decisions. By using these services, applications can improve their recovery from failures both quantitatively and qualitatively. Quantitatively, these services reduce detection time by one to two orders of magnitude over the end-to-end timeouts commonly used by data center applications, thereby reducing the unavailability caused by failures. Qualitatively, these services provide more specific information about failures, which can reduce the logic required for recovery and can help applications better decide when recovery is not necessary.
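As a toy example of inside information (not Falcon's implementation), a monitor co-located with a process can ask the OS directly whether the process still exists, rather than inferring failure from an end-to-end timeout:

```python
# Local liveness probe using the OS as "inside information":
# os.kill(pid, 0) delivers no signal but reports whether the pid
# exists. A sketch only; Falcon composes several such layers and adds
# kill-based guarantees so crash reports can be treated as ground truth.

import errno
import os

def process_alive(pid):
    try:
        os.kill(pid, 0)              # existence check, no signal sent
        return True
    except OSError as e:
        if e.errno == errno.ESRCH:   # no such process: definitely gone
            return False
        if e.errno == errno.EPERM:   # exists, but owned by another user
            return True
        raise

print(process_alive(os.getpid()))    # True: we are clearly running
```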
90

Computational process networks : a model and framework for high-throughput signal processing

Allen, Gregory Eugene 16 June 2011 (has links)
Many signal and image processing systems for high-throughput, high-performance applications require concurrent implementations in order to realize desired performance. Developing software for concurrent systems is widely acknowledged to be difficult, with common industry practice leaving the burden of preventing concurrency problems on the programmer. The Kahn Process Network model provides the mathematically provable property of determinism of a program result regardless of the execution order of its processes, including concurrent execution. This model is also natural for describing streams of data samples in a signal processing system, where processes transform streams from one data type to another. However, a Kahn Process Network may require infinite memory to execute. I present the dynamic distributed deadlock detection and resolution (D4R) algorithm, which permits execution of Process Networks in bounded memory if it is possible. It detects local deadlocks in a Process Network, determines whether the deadlock can be resolved and, if so, identifies the process that must take action to resolve the deadlock. I propose the Computational Process Network (CPN) model which is based on the formalisms of Kahn’s PN model, but with enhancements that are designed to make it efficiently implementable. These enhancements include multi-token transactions to reduce execution overhead, multi-channel queues for multi-dimensional synchronous data, zero-copy semantics, and consumer and producer firing thresholds for queues. Firing thresholds enable memoryless computation of sliding window algorithms, which are common in signal processing systems. I show that the Computational Process Network model preserves the formal properties of Process Networks, while reducing the operations required to implement sliding window algorithms on continuous streams of data. I also present a high-throughput software framework that implements the Computational Process Network model using C++, and which maps naturally onto distributed targets. This framework uses POSIX threads, and can exploit parallelism in both multi-core and distributed systems. Finally, I present case studies to exercise this framework and demonstrate its performance and utility. The final case study is a three-dimensional circular convolution sonar beamformer and replica correlator, which demonstrates the high throughput and scalability of a real-time signal processing algorithm using the CPN model and framework.
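A toy process network in Python conveys the style: bounded queues provide back-pressure, and each process is a thread transforming one stream into another. This sketches the general PN idiom, not the CPN framework's C++ API or its firing thresholds.

```python
# Toy process network: each process is a thread mapping an input
# stream to an output stream, and bounded queues supply back-pressure
# (put() blocks when a queue is full).

import queue
import threading

def producer(out_q, n):
    for i in range(n):
        out_q.put(i)
    out_q.put(None)                 # end-of-stream marker

def scale(in_q, out_q, k):
    while (tok := in_q.get()) is not None:
        out_q.put(k * tok)
    out_q.put(None)

def consumer(in_q, result):
    while (tok := in_q.get()) is not None:
        result.append(tok)

q1, q2, result = queue.Queue(maxsize=4), queue.Queue(maxsize=4), []
stages = [threading.Thread(target=producer, args=(q1, 5)),
          threading.Thread(target=scale, args=(q1, q2, 10)),
          threading.Thread(target=consumer, args=(q2, result))]
for t in stages:
    t.start()
for t in stages:
    t.join()
print(result)                       # [0, 10, 20, 30, 40]
```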
