About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
61

Software design measures for distributed enterprise information systems

Rossi, Pablo Hernan, pablo@cs.rmit.edu.au, January 2004
Enterprise information systems are increasingly being developed as distributed information systems. Quality attributes of distributed information systems, as in the centralised case, should be evaluated as early and as accurately as possible in the software engineering process. In particular, software measures associated with quality attributes of such systems should consider the characteristics of modern distributed technologies. Early design decisions have a deep impact on the implementation of distributed enterprise information systems and thus on the ultimate quality of the software as an operational entity. Because the distributed-software engineering process affords software engineers a number of design alternatives, it is important to develop tools and guidelines that can be used to assess and compare design artefacts quantitatively. This dissertation contributes to the field of software engineering by proposing and evaluating software design measures for distributed enterprise information systems. In previous research, measures developed for distributed software have focused on code attributes and thus only provide feedback towards the end of the software engineering process. In contrast, this thesis proposes a number of specific design measures that provide quantitative information before implementation. These measures capture attributes of the structure and behaviour of distributed information systems that are deemed important for assessing their quality attributes, based on an analysis of the problem domain. The measures were evaluated theoretically and empirically as part of a well-defined methodology. On the one hand, we followed a formal framework based on measurement theory in order to carry out the theoretical validation of the proposed measures. On the other hand, the suitability of the measures as indicators of quality attributes was evaluated empirically with a robust statistical technique for exploratory research. The data sets analysed were gathered from several experiments and replications with a distributed enterprise information system. The results of the empirical evaluation show that most of the proposed measures are correlated with the quality attributes of interest, and that most of these measures may be used, individually or in combination, to estimate these quality attributes, namely efficiency, reliability and maintainability. The design of a distributed information system is modelled as a combination of its structure, which reflects static characteristics, and its behaviour, which captures complementary dynamic aspects. The behavioural measures showed slightly better individual and combined results than the structural measures in the experiments. This was in line with our expectations, since the measures were evaluated as indicators of non-functional quality attributes of the operational system. On the other hand, the structural measures provide useful feedback that is available earlier in the software engineering process. Finally, we developed a prototype application to collect the proposed measures automatically and examined typical real-world scenarios where the measures may be used to make design decisions as part of the software engineering process.
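The abstract does not list the proposed measures themselves, so the following is only a rough illustration of what a design-level measure for a distributed system can look like: a toy Python sketch that counts how many design-level calls cross node boundaries in a hypothetical deployment model. The component names, call graph, and both metrics are invented for the example and are not the measures proposed in the thesis.

```python
# Illustrative sketch only: a toy structural measure for a distributed design.
# The component names, call graph, and the two metrics are hypothetical
# examples, not the measures proposed in the thesis.

from collections import defaultdict

# Design-level model: which component is deployed on which node,
# and which components call which others.
deployment = {
    "web_ui": "node_a",
    "order_service": "node_b",
    "inventory_service": "node_b",
    "billing_service": "node_c",
}
calls = [
    ("web_ui", "order_service"),
    ("order_service", "inventory_service"),
    ("order_service", "billing_service"),
    ("web_ui", "billing_service"),
]

def remote_call_ratio(deployment, calls):
    """Fraction of design-level calls that cross a node boundary."""
    remote = sum(1 for src, dst in calls if deployment[src] != deployment[dst])
    return remote / len(calls) if calls else 0.0

def remote_fan_out(deployment, calls):
    """Per-component count of distinct remote components it calls."""
    fan_out = defaultdict(set)
    for src, dst in calls:
        if deployment[src] != deployment[dst]:
            fan_out[src].add(dst)
    return {component: len(targets) for component, targets in fan_out.items()}

print(remote_call_ratio(deployment, calls))   # 0.75 for this toy model
print(remote_fan_out(deployment, calls))      # {'web_ui': 2, 'order_service': 1}
```

Measures of this kind can be computed from design artefacts alone, before any implementation exists, which is the point the abstract makes about early feedback.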
62

Decentralized Modular Router Architectures

Hidell, Markus, January 2006
The Internet grows extremely fast in terms of the number of users and traffic volume, as well as in the number of services that must be supported. This development results in new requirements on routers, the main building blocks of the Internet. Existing router designs suffer from architectural limitations that make it difficult to meet future requirements, and the purpose of this thesis is to explore new ways of building routers. We take the approach of investigating distributed and modular router designs, where routers are composed of multiple modules that can be mapped onto different processing elements. The modules communicate through open, well-defined interfaces over an internal network. Our overall hypothesis is that such a combination of modularization and decentralization is a promising way to improve the scalability, flexibility, and robustness of Internet routers, properties that will be critical for new generations of routers. Our research methodology is based on design, implementation, and experimental verification. The design work has two main results: an overall system design and a distributed router control plane. The system design consists of interfaces, protocols, and internal mechanisms for the physical separation of a router's components. The distributed control plane is a decomposition of control software into independent modules mapped onto multiple distributed processing elements. Our design is evaluated and verified through the implementation of a prototype system. The experimental part of the work deals with two key issues. First, transport mechanisms for communicating internal control information between processing elements are evaluated. In particular, we investigate the use of reliable multicast protocols in this context. Results regarding communication overhead as well as the overall performance of routing table dissemination and installation are presented. The results show that even though there are certain costs associated with using reliable multicast, there are large performance gains to be made as the number of processing elements increases. Second, we present performance results for processing routing information in a distributed control plane. These results show that the processing time can be significantly reduced by distributing the workload over multiple processing elements. This indicates that considerable performance improvements can be made through the use of the distributed control plane architecture proposed in this thesis.
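As a rough, hypothetical sketch of the idea of spreading control-plane work over several processing elements, the following Python fragment partitions routing-table updates by hashing each prefix to a worker process and then merges the per-worker results. The partitioning scheme, data, and function names are invented for the example; they are not the interfaces, protocols, or reliable-multicast mechanisms designed in the thesis.

```python
# Illustrative sketch only: distributing routing-table processing across
# several control-plane processing elements by hashing the destination prefix.
# Everything here is a hypothetical stand-in for the thesis's actual design.

from concurrent.futures import ProcessPoolExecutor
import hashlib

ROUTE_UPDATES = [
    ("10.0.0.0/8", "peer1"),
    ("192.168.1.0/24", "peer2"),
    ("172.16.0.0/12", "peer1"),
    ("10.1.0.0/16", "peer3"),
]

def partition(updates, n_elements):
    """Assign each prefix to one processing element by a stable hash."""
    shards = [[] for _ in range(n_elements)]
    for prefix, next_hop in updates:
        idx = int(hashlib.sha1(prefix.encode()).hexdigest(), 16) % n_elements
        shards[idx].append((prefix, next_hop))
    return shards

def install_routes(shard):
    """Stand-in for the per-element work of validating and installing routes."""
    return {prefix: next_hop for prefix, next_hop in shard}

if __name__ == "__main__":
    shards = partition(ROUTE_UPDATES, n_elements=3)
    with ProcessPoolExecutor(max_workers=3) as pool:
        partial_tables = list(pool.map(install_routes, shards))
    # Merge the per-element results into one forwarding view.
    fib = {}
    for table in partial_tables:
        fib.update(table)
    print(fib)
```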
63

On the Use of Double Auctions in Resource Allocation Problems in Large-scale Distributed Systems

Feng, Yuan, 24 August 2011
In this thesis, we explore the use of double auction markets as a general approach to resource allocation problems in large-scale distributed systems, which are traditionally solved using optimization techniques. Prevalent in real-world markets, double auctions can arbitrate mappings between participating players and traded commodities in a decentralized fashion, with every player selfishly maximizing her own utility. Through the design of prefetching strategies in peer-assisted video-on-demand systems, we show how the problem of minimizing server bandwidth costs by reallocating media content can be solved gracefully by double auction markets. However, not every resource allocation problem satisfies the requirements of double auctions. We illustrate the limitations of double auctions with an example of virtual machine migration in container-based datacenters, which we instead model as a Nash bargaining game and solve with a Nash bargaining solution.
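For intuition about how a double auction matches buyers and sellers in a decentralized market, here is a small, self-contained Python sketch of a generic sealed-bid clearing rule: sort bids high to low, asks low to high, trade while a bid meets an ask, and set each price at the midpoint. This is a textbook scheme for illustration only, not the specific market design or prefetching strategy developed in the thesis.

```python
# Illustrative sketch only: clearing a simple sealed-bid double auction.
# The matching rule is a generic textbook scheme, not the thesis's market.

def clear_double_auction(bids, asks):
    """bids/asks: lists of (participant_id, price). Returns matched trades."""
    bids = sorted(bids, key=lambda b: b[1], reverse=True)
    asks = sorted(asks, key=lambda a: a[1])
    trades = []
    for (buyer, bid), (seller, ask) in zip(bids, asks):
        if bid < ask:
            break  # remaining pairs cannot trade profitably
        trades.append((buyer, seller, (bid + ask) / 2))
    return trades

# Hypothetical reading: buyers are peers requesting media segments, sellers
# are peers offering spare upload capacity, and prices encode valuations.
bids = [("peer_a", 10.0), ("peer_b", 7.0), ("peer_c", 3.0)]
asks = [("peer_x", 2.0), ("peer_y", 6.0), ("peer_z", 9.0)]
print(clear_double_auction(bids, asks))
# [('peer_a', 'peer_x', 6.0), ('peer_b', 'peer_y', 6.5)]
```

Note how the clearing rule needs no central optimizer: each participant only reveals a price, which is the property the abstract appeals to when contrasting double auctions with optimization techniques.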
65

Efficient Pattern Search in Large, Partial-Order Data Sets

Nichols, Matthew, January 2008
The behaviour of a large, distributed system is inherently complex. One step towards making this behaviour more understandable to a user involves instrumenting the system and collecting data about its execution. We can model the data as traces (representing various sequential entities in the system, such as single-threaded processes) that contain both events local to the trace and communication events involving another trace. Visualizing this data provides a modest benefit to users, as it makes basic interactions in the system clearer and, with some user effort, more complex interactions can be determined. Unfortunately, visualization by itself is not an adequate solution, especially for large numbers of events and complex interactions among traces. A search facility can make this event data much more useful. Previous work has produced various frameworks and algorithms that could form the core of such a search facility; however, shortcomings in the completeness of the frameworks and in the efficiency of the algorithms resulted in an inconsistent, incomplete, and inefficient solution. This thesis takes steps to remedy this situation. We propose a provably complete framework for determining precedence between sets of events and propose additions to a previous pattern-specification language so that it can specify a wider variety of search patterns. We improve the efficiency of the existing search algorithm and provide a new, more efficient algorithm that processes a pattern in a fundamentally different way. Furthermore, the various proposed improvements have been implemented and are analysed empirically.
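To make the notion of precedence in partial-order trace data concrete, the sketch below checks happens-before between events, and between sets of events, using vector timestamps. Vector clocks are one standard way to represent such partial orders; the thesis's own precedence framework and pattern-specification language are not reproduced here, and the convention used for set-to-set precedence is just one possible choice.

```python
# Illustrative sketch only: precedence between events (and sets of events)
# via vector timestamps. Not the framework proposed in the thesis.

def happens_before(v1, v2):
    """True if the event stamped v1 precedes the event stamped v2."""
    return all(a <= b for a, b in zip(v1, v2)) and any(a < b for a, b in zip(v1, v2))

def set_precedes(set1, set2):
    """One simple convention: every event in set1 precedes every event in set2."""
    return all(happens_before(e1, e2) for e1 in set1 for e2 in set2)

# Three traces, so three-entry vector timestamps.
send_a    = (1, 0, 0)
recv_a    = (1, 1, 0)
unrelated = (0, 0, 1)

print(happens_before(send_a, recv_a))               # True
print(happens_before(send_a, unrelated))            # False: concurrent events
print(set_precedes({send_a}, {recv_a, (2, 2, 0)}))  # True
```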
66

An Efficient Computation of Convex Closure on Abstract Events

Bedasse, Dwight Samuel, January 2005
The behaviour of distributed applications can be modeled as the occurrence of events and how these events relate to each other. Event data collected according to this event model can be visualized using process-time diagrams that are constructed from a collection of traces and events. One of the main characteristics of a distributed system is the large number of events that are involved, especially in practical situations. This large number of events, and hence large process-time diagrams, make distributed-system observation difficult for the user. However, event-predicate detection, a search mechanism able to detect and locate arbitrary predicates within a process-time diagram or event collection, can help the user to make sense of this large amount of data. Ping Xie used the convex-abstract event concept, developed by Thomas Kunz, to search for hierarchical event predicates. However, his algorithm for computing convex closure to construct compound events, and especially hierarchical compound events (i.e., compound events that contain other compound events), is inefficient. In one case it took, on average, close to four hours to search the collection of event data for a specific hierarchical event predicate. In another case, it took nearly one hour. This dissertation discusses an efficient algorithm, an extension of Ping Xie's algorithm, that employs a caching scheme to build compound and hierarchical compound events based on matched sub-patterns. In both cases cited above, the new execution times were reduced by over 94%. They now take, on average, less than four minutes.
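The caching idea can be illustrated with a toy convex closure over a small happens-before relation: given a set of events, the closure adds every event that lies between two members of the set, and a memoization cache avoids recomputing the closure for a set that has already been seen. The data, the depth-first reachability test, and the use of functools.lru_cache are all invented for the example; the dissertation's actual algorithm is not reproduced here.

```python
# Illustrative sketch only: a toy convex closure with a result cache keyed on
# the input event set, mimicking the idea of reusing matched sub-patterns.

from functools import lru_cache

EVENTS = {"a", "b", "c", "d", "e"}
# Direct happens-before edges of a small toy execution.
EDGES = {("a", "b"), ("b", "c"), ("c", "d"), ("b", "e")}

def precedes(x, y, edges=EDGES):
    """Transitive happens-before via depth-first search (fine for toy sizes)."""
    stack, seen = [x], set()
    while stack:
        cur = stack.pop()
        for u, v in edges:
            if u == cur and v not in seen:
                if v == y:
                    return True
                seen.add(v)
                stack.append(v)
    return False

@lru_cache(maxsize=None)
def convex_closure(event_set):
    """Smallest superset containing every event lying between two members."""
    between = {
        e for e in EVENTS
        if any(precedes(a, e) for a in event_set)
        and any(precedes(e, b) for b in event_set)
    }
    return frozenset(event_set) | between

print(sorted(convex_closure(frozenset({"a", "d"}))))  # ['a', 'b', 'c', 'd']
print(sorted(convex_closure(frozenset({"a", "d"}))))  # second call hits the cache
```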
68

The weighted Byzantine Agreement Problem

Bridgman, John Francis, 13 August 2012
This report presents a weighted version of the Byzantine Agreement Problem and its solution under various conditions. In this version, each machine is assigned a weight depending on the application. Instead of assuming that at most $f$ out of $N$ machines fail, the algorithm assumes that the total weight of the machines that fail is at most $\rho < 1/3$. When each machine has weight $1/N$, this problem reduces to the standard Byzantine Generals Agreement Problem. By choosing weights appropriately, the weighted Byzantine Agreement Problem can be applied to situations where a subset of the processes is more trusted. By using weights, the system can reach consensus in the presence of Byzantine failures even when more than $N/3$ processes fail, so long as the total weight of the failed processes is less than $1/3$. Properties of the weighted Byzantine Agreement algorithms when the weight vectors are not the same at every process are also discussed. In addition, a method to update the weights of the processes after execution of weighted Byzantine Agreement is given. The update method guarantees that the weight of any correct process is never reduced and that the weight of any faulty process suspected by correct processes whose total weight is at least $1/4$ is reduced to $0$ for future instances. A short discussion of some weight-assignment strategies is also given.
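The weight bookkeeping described above is simple enough to sketch directly. The following Python fragment checks whether the failed weight stays under 1/3 and applies the stated update rule (zero the weight of a process suspected by accusers whose combined weight is at least 1/4). The renormalization step is an assumption made for the example, and the consensus message exchange itself is omitted.

```python
# Illustrative sketch only: the weight bookkeeping from the abstract.
# The renormalization in update_weights is one plausible reading of the rule,
# not necessarily the exact procedure in the report.

def can_tolerate(weights, failed):
    """Agreement is possible while the total weight of failed processes < 1/3."""
    return sum(weights[p] for p in failed) < 1 / 3

def update_weights(weights, suspicions):
    """Zero the weight of any process suspected by accusers whose combined
    weight is at least 1/4, then renormalize so weights sum to 1 (assumed)."""
    new = dict(weights)
    for suspect, accusers in suspicions.items():
        if sum(weights[a] for a in accusers) >= 1 / 4:
            new[suspect] = 0.0
    total = sum(new.values())
    return {p: w / total for p, w in new.items()}

weights = {"p1": 0.4, "p2": 0.3, "p3": 0.2, "p4": 0.1}
print(can_tolerate(weights, failed={"p3", "p4"}))     # True: 0.3 < 1/3
print(can_tolerate(weights, failed={"p1"}))           # False: 0.4 >= 1/3
print(update_weights(weights, {"p4": {"p1", "p2"}}))  # p4's weight drops to 0
```

The first example shows the point made in the abstract: two of four machines have failed (more than N/3 by count), yet agreement is still possible because their combined weight is below 1/3.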
69

A new approach to detecting failures in distributed systems

Leners, Joshua Blaise, 18 September 2015
Fault-tolerant distributed systems often handle failures in two steps: first, detect the failure and, second, take some recovery action. A common approach to detecting failures is end-to-end timeouts, but using timeouts brings problems. First, timeouts are inaccurate: just because a process is unresponsive does not mean that the process has failed. Second, choosing a timeout is hard: short timeouts can exacerbate the problem of inaccuracy, and long timeouts can make the system wait unnecessarily. In fact, a good timeout value, one that balances accuracy against speed, may not even exist, owing to the variance in a system's end-to-end delays. This dissertation posits a new approach to detecting failures in distributed systems: use information about failures that is local to each component, e.g., the contents of an OS's process table. We call such information inside information, and use it as the basis for the design and implementation of three failure reporting services for data center applications, which we call Falcon, Albatross, and Pigeon. Falcon deploys a network of software modules to gather inside information in the system, and it guarantees that it never reports a working process as crashed by sometimes terminating unresponsive components. This choice helps applications by making reports of failure reliable, meaning that applications can treat them as ground truth. Unfortunately, Falcon cannot handle network failures, because guaranteeing that a process has crashed requires network communication; we address this problem in Albatross and Pigeon. Instead of killing, Albatross blocks suspected processes from using the network, allowing applications to make progress during network partitions. Pigeon renounces interference altogether and reports inside information to applications directly and in more detail, to help applications make better recovery decisions. By using these services, applications can improve their recovery from failures both quantitatively and qualitatively. Quantitatively, these services reduce detection time by one to two orders of magnitude compared with the end-to-end timeouts commonly used by data center applications, thereby reducing the unavailability caused by failures. Qualitatively, these services provide more specific information about failures, which can reduce the logic required for recovery and can help applications better decide when recovery is not necessary.
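The core idea, consulting local "inside information" such as the OS process table rather than waiting for an end-to-end timeout, can be sketched in a few lines. The snippet below uses the psutil library to give a local verdict about a process; it is only an illustration of the idea, not the Falcon, Albatross, or Pigeon implementation, and the surrounding reporting network is omitted.

```python
# Illustrative sketch only: a local observer consulting the OS process table
# (via psutil) to report a crash definitively, instead of inferring it from
# an end-to-end timeout. Not the dissertation's actual services.

import os
import psutil

def report_status(pid):
    """Local verdict about the process with the given pid."""
    try:
        proc = psutil.Process(pid)
        if proc.status() == psutil.STATUS_ZOMBIE:
            return "crashed"   # exited but not yet reaped by its parent
        return "running"       # alive according to the local OS
    except psutil.NoSuchProcess:
        return "crashed"       # the OS says the process is gone

# A remote failure detector would query this observer over the network and
# treat "crashed" as ground truth, falling back to timeouts only when the
# observer itself is unreachable.
print(report_status(os.getpid()))  # "running" for the current process
```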
70

Computational process networks: a model and framework for high-throughput signal processing

Allen, Gregory Eugene, 16 June 2011
Many signal and image processing systems for high-throughput, high-performance applications require concurrent implementations in order to realize the desired performance. Developing software for concurrent systems is widely acknowledged to be difficult, with common industry practice leaving the burden of preventing concurrency problems on the programmer. The Kahn Process Network model provides the mathematically provable property that a program's result is deterministic regardless of the execution order of its processes, including concurrent execution. This model is also natural for describing streams of data samples in a signal processing system, where processes transform streams from one data type to another. However, a Kahn Process Network may require infinite memory to execute. I present the dynamic distributed deadlock detection and resolution (D4R) algorithm, which permits execution of Process Networks in bounded memory whenever possible. It detects local deadlocks in a Process Network, determines whether the deadlock can be resolved and, if so, identifies the process that must take action to resolve the deadlock. I propose the Computational Process Network (CPN) model, which is based on the formalisms of Kahn's PN model but with enhancements designed to make it efficiently implementable. These enhancements include multi-token transactions to reduce execution overhead, multi-channel queues for multi-dimensional synchronous data, zero-copy semantics, and consumer and producer firing thresholds for queues. Firing thresholds enable memoryless computation of sliding-window algorithms, which are common in signal processing systems. I show that the Computational Process Network model preserves the formal properties of Process Networks while reducing the operations required to implement sliding-window algorithms on continuous streams of data. I also present a high-throughput software framework that implements the Computational Process Network model using C++ and maps naturally onto distributed targets. This framework uses POSIX threads and can exploit parallelism in both multi-core and distributed systems. Finally, I present case studies to exercise this framework and demonstrate its performance and utility. The final case study is a three-dimensional circular convolution sonar beamformer and replica correlator, which demonstrates the high throughput and scalability of a real-time signal processing algorithm using the CPN model and framework.
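To illustrate how a consumer firing threshold lets a sliding-window computation read overlapping data without keeping its own history, here is a single-threaded Python toy of a queue with a firing threshold and a hop size. The class and parameter names are invented for the example; the actual framework is a multi-threaded C++ implementation, and multi-channel queues, zero-copy semantics, and distribution are not shown.

```python
# Illustrative sketch only: a queue with a consumer firing threshold, so a
# sliding-window computation can see overlapping samples without copying them
# into its own history buffer. A toy, not the CPN framework itself.

from collections import deque

class ThresholdQueue:
    def __init__(self, threshold, hop):
        self.buf = deque()
        self.threshold = threshold   # tokens that must be present to fire
        self.hop = hop               # tokens actually consumed per firing

    def put(self, items):
        self.buf.extend(items)       # a multi-token "transaction"

    def ready(self):
        return len(self.buf) >= self.threshold

    def fire(self, func):
        """Apply func to a window of `threshold` tokens, then advance by `hop`."""
        window = [self.buf[i] for i in range(self.threshold)]
        result = func(window)
        for _ in range(self.hop):
            self.buf.popleft()
        return result

q = ThresholdQueue(threshold=4, hop=2)   # 4-sample window, 50% overlap
q.put([1, 2, 3, 4, 5, 6, 7, 8])
while q.ready():
    print(q.fire(sum))                   # sliding-window sums: 10, 18, 26
```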
