21

Extending the open grid services infrastructure to intermittently available network environments

Hampshire, Alastair January 2007 (has links)
No description available.
22

A routing calculus towards formalising the cost of computation in a distributed computer network

Gaur, Manish January 2007 (has links)
We model a distributed network with routers acting as an active component in determining the quality of service of the network. Our model may be considered an extension of the asynchronous distributed pi-calculus (ADpi). We believe that such models help in prototyping routing algorithms in the context of large networks and in reasoning about them while abstracting away excessive detail.
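The idea of routers determining per-hop cost can be made concrete with an ordinary least-cost route computation. This is only an illustrative sketch (standard Dijkstra over edge costs, with assumed names and graph encoding), not the process calculus the thesis develops:

```python
import heapq

def least_cost_route(graph, source, target):
    """graph: dict mapping node -> list of (neighbour, cost) edges.
    Returns (path, total_cost), or (None, inf) if target is unreachable."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return None, float("inf")
    # walk predecessor links back from target to source
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[target]
```

For example, with edges a→b (cost 1), a→c (cost 4) and b→c (cost 1), the route a→b→c at total cost 2 is preferred over the direct a→c hop.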
23

On dynamic resource allocation in systems with bursty sources

Slegers, Joris January 2009 (has links)
There is a trend towards using computing resources in a way that is further removed from their technical constraints. Users buy compute time on machines that they do not control or necessarily know the specifics of. Conversely, this means the providers of such resources have more freedom in allocating them amongst different tasks. They can use this freedom to provide more, or better, service by reallocating resources as demand for them changes. However, deciding when to reallocate resources is not trivial. In order to make good reallocation decisions, this thesis constructs a series of models. Each of the models concerns a resource allocation problem in the presence of bursty sources. The focus of the modelling, however, varies. In its most basic form it considers several different job types competing over the allocation of a limited number of servers. The goal there is to minimize the (weighted) mean time jobs spend in the system. The weighting can reflect the relative importance of the different job types. Reallocation of servers between job types is in general considered to be neither free nor instantaneous. We then show how to find the optimal static allocation of servers over job types. Finding the optimal dynamic allocation of servers is formulated as solving a Markov decision process. We show that this is practically infeasible for all but the simplest systems. Instead a number of heuristics are introduced. Some are fluid-approximation based and some are parameterless, i.e. they do not require a priori knowledge of the system's parameters. The performance of these heuristic policies is then explored in a series of simulations. A slightly different model is formulated next. Its goal is not to optimize allocation of servers over several job types, but rather between powered-up and powered-down states. In the powered-up state servers can provide service for incoming jobs. 
In the powered-down state servers cannot service incoming jobs but yield a profit due to power savings. Balancing power and performance is again formulated as a Markov decision process. This is not explicitly solved; instead some of the heuristics considered earlier are adapted to give dynamic policies for powering servers up and down. Their performance is again tested in a number of simulations, including some where the arrival process is not only bursty but also non-Markovian. The third and final model considers allocation of servers over different job types again. This time the servers experience breakdowns and subsequent repairs. During a repair period the servers cannot process any incoming jobs. To reduce the complexity of this model, it is assumed that switches of servers between job types are instantaneous, albeit not necessarily free. This is modelled as a Markov decision process and we show how to find the optimal static allocation of servers. For the dynamic allocation, previously considered heuristics are adapted again. Simulations then show the performance of these heuristics and the optimal static allocation in a number of scenarios.
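The static-allocation step the abstract mentions can be sketched as a small search. All names and the queueing approximation below are assumptions for illustration (each pool of c servers handling one job type is treated as a single fast M/M/1 queue), not the thesis's actual model:

```python
from itertools import product

def mean_jobs(lam, mu, c):
    """Approximate mean number of type-i jobs in the system when c servers,
    each of rate mu, are pooled into one M/M/1 server of rate c * mu."""
    if c == 0 or lam >= c * mu:
        return float("inf")  # allocation is unstable for this job type
    rho = lam / (c * mu)
    return rho / (1 - rho)

def best_static_allocation(n_servers, lam, mu, weights):
    """Exhaustively search all splits of n_servers over the job types,
    minimising the weighted mean number of jobs in the system."""
    k = len(lam)
    best, best_cost = None, float("inf")
    for alloc in product(range(n_servers + 1), repeat=k):
        if sum(alloc) != n_servers:
            continue
        cost = sum(w * mean_jobs(l, m, c)
                   for w, l, m, c in zip(weights, lam, mu, alloc))
        if cost < best_cost:
            best, best_cost = alloc, cost
    return best, best_cost
```

With two job types at arrival rates 3 and 1, unit service rates, weights 1 and 2, and ten servers, the search assigns six servers to the heavily loaded type and four to the highly weighted one.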
24

Supporting visualization users in diverse environments

Osborne, James Alexander January 2005 (has links)
No description available.
25

Evaluating JavaSpaces as a viable framework for distributed computing

Pritchard, Les January 2006 (has links)
No description available.
26

Resource discovery and distributed computing over peer-to-peer networks

Al-Dmour, Nidal January 2005 (has links)
No description available.
27

The architecture of an autonomic, resource-aware, workstation-based distributed database system

Macdonald, Angus January 2012 (has links)
Distributed software systems that are designed to run over workstation machines within organisations are termed workstation-based. Workstation-based systems are characterised by dynamically changing sets of machines that are used primarily for other, user-centric tasks. They must be able to adapt to and utilize spare capacity when and where it is available, and ensure that the non-availability of an individual machine does not affect the availability of the system. This thesis focuses on the requirements and design of a workstation-based database system, which is motivated by an analysis of existing database architectures that are typically run over static, specially provisioned sets of machines. A typical clustered database system — one that is run over a number of specially provisioned machines — executes queries interactively, returning a synchronous response to applications, with its data made durable and resilient to the failure of machines. There are no existing workstation-based databases. Furthermore, other workstation-based systems do not attempt to achieve the requirements of interactivity and durability, because they are typically used to execute asynchronous batch processing jobs that tolerate data loss — results can be re-computed. These systems use external servers to store the final results of computations rather than workstation machines. This thesis describes the design and implementation of a workstation-based database system and investigates its viability by evaluating its performance against existing clustered database systems and testing its availability during machine failures.
28

The development of reliable X-by-wire systems : assessing the effectiveness of a 'simulation first' approach

Ayavoo, Devaraj January 2006 (has links)
Networks of embedded processors play an increasingly important role in the control of automotive, aerospace, industrial, defence and medical systems. The requirements for such X-by-Wire applications are highly demanding and complex in nature, and there are numerous possible design and technology options available. As a consequence, in all but the most trivial systems, engineering teams who wish to identify the best solution can only hope to prototype a small percentage of the possible designs. Several researchers have argued that an effective solution to these problems is to use computer simulations in the early stages of the design process. The aim of this thesis is to explore the effectiveness of such a 'simulation first' approach when developing X-by-Wire systems. The main focus is on the automotive sector, but it is suggested that the techniques developed during the course of this project can be more widely applied. This document makes three main contributions, as follows. First, it provides clear empirical evidence of the extent to which a 'simulation first' approach can be used to support the development of non-trivial X-by-Wire systems. Second, it introduces a novel, cost-effective empirical small group methodology (SGM) for comparing different development techniques for embedded systems; the SGM is described and its effectiveness demonstrated in four non-trivial case studies. Third, evidence is presented which suggests that the SGM may be more widely applicable.
29

Efficient arbitration and bridging techniques for high-performance conventional multimedia servers

Maierhofer, Martin January 2000 (has links)
Recent progress in computing power and network bandwidth has enabled the emergence of sophisticated distributed multimedia applications for a large audience. Several applications, such as video on demand, utilise a client-server based approach, so servers are fundamental building blocks in such environments. Due to certain properties of multimedia applications and data types, multimedia servers face a set of complex challenges. This thesis focuses on advanced architectural support for high-volume data streaming, aiming at an improved cost/performance ratio of conventional servers. Traditionally, research in this area has focused on a narrow set of issues, such as storage allocation and disk scheduling. By comparison, other areas have not received as much attention. In particular, aspects relating to system buses and associated bridging structures have generally not been systematically investigated. In this thesis, a number of such issues are comprehensively studied, addressing crucial shortcomings of traditional server architectures. Formal analysis and extensive simulations are used to evaluate server architectures and components. In a preparatory step, the properties of I/O technologies and server components are surveyed, and a formal model of system bus performance is established, using the PCI local bus as representative of current system bus standards. Detailed investigations show that streaming performance suffers from the high initial latency of main memory access. To alleviate this problem, a novel "PCI Address Pipelining" scheme is proposed and evaluated. Analysis shows that such a technique can modestly improve maximum server load with little associated cost, while a hardware design using VHDL proves that it can be implemented efficiently. A second avenue of research concentrates on arbitration and its impact on server performance: to meet deadlines imposed by real-time data streaming, components must compete for access to shared resources. 
Consequently, latency and thus quality of service can depend on the properties of the arbiter. In this thesis, arbitration and other resource allocation techniques are comprehensively investigated in a wide range of data streaming applications. Results differ substantially from those obtained for multiprocessor machines, but do not indicate a clear performance improvement for any particular protocol. Hardware implementations, on the other hand, vary considerably both in arbitration speed and design size. Finally, a more general solution to streaming bottlenecks in traditional architectures is proposed in this thesis, combining benefits of parallel and conventional architectures: bus bridges and local buffering provide the framework for modular architectures based on inexpensive standard components. Two implementations of this design, as well as a third more restricted solution for ATM adapters, are critically evaluated and compared. Simulations show that a single streaming module can handle more than three times as many data streams as a traditional server. The scalability of modular architectures, however, is limited by global interrupt handling. A scheme based on local processing is shown to be highly efficient at removing this restriction. Investigations carried out in this thesis show that high streaming performance can be achieved by an inexpensive evolutionary adaptation of traditional servers, without the need to resort to parallel architectures.
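A bus arbiter of the general kind discussed can be illustrated with a minimal round-robin grant function. This is a sketch only, with assumed names and a boolean request vector; the thesis's evaluated protocols and their hardware realisations are not reproduced here:

```python
def round_robin_arbiter(requests, last_grant):
    """Grant the bus to the first requesting master after last_grant,
    wrapping around, so no master can be starved indefinitely.
    `requests` is a list of booleans, one per bus master."""
    n = len(requests)
    for offset in range(1, n + 1):
        idx = (last_grant + offset) % n
        if requests[idx]:
            return idx
    return None  # no master is requesting the bus
```

Feeding the returned grant back in as `last_grant` on the next cycle rotates priority among the masters, which bounds each master's worst-case wait — the property that matters when real-time streaming deadlines must be met.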
30

VOML : virtual organization modelling language

Rajper, Noor Jehan January 2012 (has links)
Virtual organizations (VOs) and their breeding environments are an emerging approach for developing systems as a consortium of autonomous entities formed to share costs and resources, better respond to opportunities, achieve shorter time-to-market and exploit fast-changing market opportunities. VOs cater for those demands by incorporating reconfigurations, making VOs highly resilient and agile by design. Reconfiguration of systems is an active research area, and many policy and specification languages have been developed for the purpose. However, all these approaches consider reconfiguration of a system as somewhat isolated from its business and operational model; it is usually assumed that the latter two remain unaffected by such reconfigurations, and the reconfiguration is usually limited to dynamic binding of the components the system consists of. However, the demands of VO reconfiguration go beyond dynamic binding and reach the level where it becomes crucial to keep changing the organizational structure (process model) of the system as well, which leads to changes of the operational/functional model. This continuous reconfiguration of the operational model emphasizes the need for a modelling language that allows specification and validation of such systems. This thesis approaches the problem of formal specification of VOs through the Virtual Organization Modelling Language (VOML) framework. The core of this framework comprises three languages, each capturing a specific aspect. The first, the Virtual Organization Structural modelling language (VO-S), focuses on structural aspects and on many of the characteristics particular to VOs, such as the relationships between members expressed in domain terminology. The second, the VO Reconfiguration language (VO-R for short), permits different reconfigurations of the structure of the VO. This language is an extension of APPEL for the domain of VOs. 
The third, the VO Operational modelling language (VO-O), describes the operational model of a VO in more detail. This language is an adaptation and extension of the Sensoria Reference Modelling Language for service-oriented architecture (SRML). Our framework models VOs using the VO-S and the VO-R, which are at a high level of abstraction and independent of a specific computational model. Mapping rules provide guidelines to generate operational models, thus ensuring that the two models conform to each other. The usability and applicability of VOML are validated through two case studies, one of which offers travel itineraries as a VO service and is a running example. The other case study is an adaptation of a case study on developing a chemical plant from [14].
