91. Design components. Iliasov, Alexei (January 2008)
Although it is generally recognised that formal modelling is crucial for ensuring the correctness of software systems, some obstacles to its wider adoption in software engineering persist. One of these is that its productivity is low; another is that a broad range of specific skills is required to use modelling techniques and tools efficiently. With the gap between computer performance and engineers' productivity growing, there is a need to raise the level of abstraction at which development is carried out and to off-load much of the routine work done manually today to computers. Formal modelling has all the characteristics required to replace programming and offer higher productivity. Nonetheless, as a branch of software engineering it has yet to be generally accepted. While there is substantial research accumulated in systems analysis and verification, not much has been done to foster higher productivity and efficiency of modelling activity. This study puts forward an approach that allows the modeller to encapsulate design ideas and experience in a reusable package. This package, called a design component, can be used in different ways. While a design component is generally intended for constructing a new design using an existing one, we base our approach on a refinement technique. The design encapsulated in the design component is injected into a formal development by formally refining an abstract model. This process is completely automated: the design component is integrated by a tool, with the corresponding correctness proofs also handled automatically. To help us construct design components we consider a number of techniques for transforming models and describing reusable designs. We then introduce the concept of a model transformation to encapsulate the syntactic rewrite rules used to produce new models. To capture high-level design we introduce a pattern language that allows us to build abstraction and refinement patterns from model transformations. Patterns automate the formal development process and reduce the number of proofs. To help the modeller plan and execute refinement steps, we introduce the concept of a modelling pattern, which combines refinement (or abstraction) patterns with modelling guidelines to form a complete design component.
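To make the idea of building refinement patterns from syntactic rewrite rules concrete, here is a minimal Python sketch under the assumption that a model is just a dictionary of variables, invariants and events; the rule name, the machine structure and the assignment strings are illustrative inventions, not the thesis's actual pattern language or tool support.

```python
# Toy illustration only: a "model transformation" implemented as a syntactic
# rewrite that turns an abstract machine (variables, invariants, events) into
# a refined one. The structures and the rule below are hypothetical.

from copy import deepcopy

def add_variable_rule(machine, name, init, invariant):
    """Rewrite rule: introduce a new variable, its initialisation and an invariant."""
    refined = deepcopy(machine)
    refined["variables"].append(name)
    refined["invariants"].append(invariant)
    refined["events"]["INITIALISATION"].append(name + " := " + init)
    return refined

abstract = {
    "variables": ["count"],
    "invariants": ["count >= 0"],
    "events": {"INITIALISATION": ["count := 0"]},
}

# A "refinement pattern" would then be a composition of such rules.
refined = add_variable_rule(abstract, "log", "{}", "log <: NAT")
print(refined["variables"])                     # ['count', 'log']
print(refined["events"]["INITIALISATION"])      # ['count := 0', 'log := {}']
```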
92. A grid and cloud-based framework for high throughput bioinformatics. Flanagan, Keith Stanley (January 2010)
Recent advances in genome sequencing technologies have unleashed a flood of new data. As a result, the computational analysis of bioinformatics data sets has been rapidly moving from a lab-based desktop computer environment to exhaustive analyses performed by large dedicated computing resources. Traditionally, large computational problems have been performed on dedicated clusters of high performance machines that are typically local to, and owned by, a particular institution. The current trend in Grid computing has seen institutions pooling their computational resources in order to offload excess computational work to remote locations during busy periods. In the last year or so, commercial Cloud computing initiatives have matured enough to offer a viable remote source of reliable computational power. Collections of idle desktop computers have also been used as a source of computational power in the form of 'volunteer Grids'. The field of bioinformatics is highly dynamic, with new or updated versions of software tools and databases continually being developed. Several different tools and datasets must often be combined into a coherent, automated workflow or pipeline. While existing solutions are available for constructing workflows, there is a clear need for long-lived analyses consisting of many interconnected steps to be able to migrate among Grid and cloud computational resources dynamically. This project involved research into the principles underlying the design and architecture of flexible, high-throughput bioinformatics processes. Following extensive requirements gathering, a novel Grid-based platform, Microbase, has been implemented that is based on service-oriented architectures and peer-to-peer data transfer technology. This platform has been shown to be amenable to utilising a wide range of hardware, from commodity desktop computers to high-performance cloud infrastructure. The system has been shown to drastically reduce the bandwidth requirements of bioinformatics data distribution, and therefore reduces both the financial and computational costs associated with cloud computing. The system is inherently modular in nature, comprising a service-based notification system, a data storage system, a scheduler and a job manager. In keeping with e-Science principles, each module can operate in physical isolation from the others, distributed within an intranet or across the Internet. Moreover, since each module is loosely coupled via Web services, modules have the potential to be used in combination with external service-oriented components or independently as part of another system. In order to demonstrate the utility of such an open source system to the bioinformatics community, a pipeline of interconnected bioinformatics applications was developed using the Microbase system to form a high throughput application for the comparative and visual analysis of microbial genomes. This application, the Automated Genome Analyser (AGA), has been developed to operate without user interaction. AGA exposes its results via Web services, which can be used by further analytical stages within Microbase or by external computational resources, and which can be queried by users via an interactive genome browser. In addition to providing the necessary infrastructure for scalable Grid applications, a modular development framework has been provided, which simplifies the process of writing Grid applications.
Microbase has been adopted by a number of projects ranging from comparative genomics to synthetic biology simulations.
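As a rough illustration of the modular, notification-driven architecture described above (a notification system plus a job manager reacting to new data), the following Python sketch wires a toy publisher to a toy job queue. The class and method names are assumptions made for the example and do not correspond to the real Microbase API.

```python
# Minimal sketch of the consumer/notification pattern: a notification system
# publishes "new data" messages and a job manager collects and processes them.
# All names here are illustrative, not Microbase's actual interfaces.

from collections import defaultdict

class NotificationSystem:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, responder):
        self.subscribers[topic].append(responder)

    def publish(self, topic, message):
        for responder in self.subscribers[topic]:
            responder(message)

class JobManager:
    def __init__(self):
        self.queue = []

    def enqueue(self, message):
        self.queue.append(message)

    def run(self):
        while self.queue:
            job = self.queue.pop(0)
            print("processing genome " + job["genome_id"])

notifier = NotificationSystem()
jobs = JobManager()
notifier.subscribe("new-genome", jobs.enqueue)
notifier.publish("new-genome", {"genome_id": "NC_000913"})
jobs.run()
```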
93. Interpreted graph models. Poliakov, Ivan (January 2011)
A model class called an Interpreted Graph Model (IGM) is defined. This class includes a large number of graph-based models that are used in asynchronous circuit design and other applications of concurrency. The defining characteristic of this model class is an underlying static graph-like structure to which behavioural semantics are attached using additional entities, such as tokens or node/arc states. The similarities in notation and expressive power allow a number of operations on these formalisms, such as visualisation, interactive simulation, serialisation, schematic entry and model conversion, to be generalised. A software framework called Workcraft was developed to take advantage of these properties of IGMs. Workcraft provides an environment for rapid prototyping of graph-like models and related tools. It provides a large set of standardised functions that considerably facilitate the task of providing tool support for any IGM. The concept of Interpreted Graph Models is the result of research into applying lower level models, such as Petri nets, as a back-end for the simulation and verification of higher level models that are more easily manipulated. The goal is to achieve a high degree of automation of this process. In particular, a method for verification of speed-independence of asynchronous circuits is presented. Using this method, the circuit is specified as a gate netlist and its environment is specified as a Signal Transition Graph. The circuit is then automatically translated into a behaviourally equivalent Petri net model. This model is then composed with the specification of the environment. A number of important properties can be established on this compound model, such as the absence of deadlocks and hazards. If a trace is found that violates the required property, it is automatically interpreted in terms of the switching of gates in the original gate-level circuit specification and may be presented visually to the circuit designer. A similar technique is also used for the verification of a model called Static Data Flow Structure (SDFS). This high level model describes the behaviour of an asynchronous data path. SDFS is particularly interesting because it models complex behaviours such as preemption, early evaluation and speculation. Preemption is a technique which allows data objects in a computation pipeline to be destroyed if the result of the computation is no longer needed, reducing power consumption. Early evaluation allows a circuit to compute its output using a subset of its inputs, preempting the inputs which are not needed. In speculation, all conflicting branches of a computation run concurrently without waiting for the selecting condition; once the selecting condition is computed, the unneeded branches are preempted. The automated Petri net based verification technique is especially useful in this case because of the complex nature of these features. As a result of this work, a number of cases are presented where the concept of IGMs and the Workcraft tool were instrumental. These include the design of two different types of arbiter circuits, the design and debugging of the SDFS model, the synthesis of asynchronous circuits from the Conditional Partial Order Graph model and the modification of the workflow of the Balsa asynchronous circuit synthesis system.
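Since an IGM is essentially a static graph with behavioural state (such as tokens) attached, a Petri net makes a convenient miniature example. The sketch below implements a toy token game and a naive reachability-based deadlock check in Python; it is illustrative only and is not Workcraft's actual verification back-end.

```python
# A toy Petri-net interpreter: a static graph (places, transitions, arcs)
# plus tokens as the attached behavioural state. The deadlock check is an
# exhaustive search over reachable markings and is a sketch, nothing more.

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def find_deadlock(initial, transitions):
    """Return a reachable marking with no enabled transition, or None."""
    seen, stack = set(), [initial]
    while stack:
        m = stack.pop()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(m, pre, post) for pre, post in transitions if enabled(m, pre)]
        if not successors:
            return m          # no transition can fire: deadlock
        stack.extend(successors)
    return None

# Two places in a cycle: the token circulates forever, so no deadlock.
transitions = [({"p1": 1}, {"p2": 1}), ({"p2": 1}, {"p1": 1})]
print(find_deadlock({"p1": 1, "p2": 0}, transitions))  # None
```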
94. WS-mediator for improving dependability of service composition. Chen, Yuhui (January 2008)
Web Services and service-oriented architectures (SOAs) represent a new paradigm for building distributed computing applications. In recent years, they have started to play a critical role in numerous e-Science and e-Commerce applications. The advantages of Web Services, such as their loosely coupled architecture and standardized interoperability, make them a desirable platform, especially for developing large-scale applications such as those based on cross-organizational service composition. However, the Web Service technology is now facing many serious issues that need to be addressed, one of the most important being the dependability of service composition. Web Service composition relies on individual component services and on computer networks, particularly the Internet. As the component services are autonomous, their dependability is unknown prior to use. In addition, computer networks are inherently unreliable media: from the user's perspective, network failures may undermine the dependability of Web Services. Consequently, failures of individual component services and of the network can undermine the dependability of the entire application relying on service composition. Our research is intended to contribute to achieving higher dependability of Web Service composition. We have developed a novel solution, called the WS-Mediator system, implementing resilience-explicit computing and fault tolerance mechanisms to improve the dependability of Web Service composition. It consists of a number of subsystems, called Sub-Mediators, which are deployed at various geographical locations across the Internet to monitor Web Services and dynamically generate Web Service dependability metadata in order to make resilience-explicit decisions. In addition to applying fault tolerance mechanisms that deal with various kinds of faults during service composition, the resilience-explicit reconfiguration mechanism dynamically selects the most dependable Web Services to achieve higher service composition dependability. A specific instance of the WS-Mediator architecture has been developed using Java Web Service technology. A series of experiments with real-world Web Services, in particular in the bioinformatics domain, have been carried out using the Java WS-Mediator. The results of the experiments have demonstrated the applicability of the WS-Mediator approach.
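The resilience-explicit idea, selecting among redundant component services on the basis of dynamically gathered dependability metadata and falling back on failure, can be sketched as follows. The metadata fields, scoring rule and service stubs are assumptions made for illustration, not WS-Mediator's actual metadata schema or interfaces.

```python
# Hedged sketch: rank candidate services by observed dependability metadata
# and fail over to the next candidate when a call fails. Illustrative only.

def rank(services):
    # Prefer a high observed success rate; break ties with low mean response time.
    return sorted(services, key=lambda s: (-s["success_rate"], s["mean_rtt_ms"]))

def invoke_with_failover(services, request):
    for service in rank(services):
        try:
            return service["call"](request)
        except Exception:
            continue  # fault tolerance: fall back to the next-ranked candidate
    raise RuntimeError("all candidate services failed")

def flaky_service(request):
    raise IOError("timeout")             # simulated network failure

def backup_service(request):
    return "result for " + request

candidates = [
    {"name": "blast-eu", "success_rate": 0.92, "mean_rtt_ms": 340, "call": flaky_service},
    {"name": "blast-us", "success_rate": 0.88, "mean_rtt_ms": 510, "call": backup_service},
]
print(invoke_with_failover(candidates, "query-1"))   # fails over to blast-us
```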
95. Enhancing the usability of rely-guarantee conditions for atomicity refinement. Pierce, Kenneth George (January 2009)
Formal methods are a useful tool for increasing confidence in the correctness of computer programs with respect to their specifications. Formal methods allow designers to model specifications, and these formal models can then be reasoned about in a rigorous way. Formal methods for sequential processes are well understood; however, formal methods for concurrent programs are more difficult because of the interference which may arise when programs run concurrently. Rely-guarantee reasoning is a well-established formal method for modelling concurrent programs. Rely-guarantee conditions offer a tractable and compositional approach to reasoning about concurrent programs, by allowing designers to reason about the interference inherent in concurrent systems. While useful, there are certain weaknesses in rely-guarantee conditions. In particular, the requirement for rely-guarantee conditions to describe whole-state updates can make large specifications unwieldy. Similarly, it can be difficult to describe problems which exhibit distinct phases of execution. The main contribution of this thesis is to show ways in which these two weaknesses of rely-guarantee reasoning can be addressed, which in turn enhances the usability of rely-guarantee conditions. Atomicity refinement is a potentially useful tool for simplifying the development of concurrent programs. The central idea is that designers can record (possibly unrealistic) atomicity assumptions about the eventual implementation of a program. This fiction of atomicity simplifies the design process by avoiding the difficult issue of interference. It is then necessary to identify ways in which this atomicity can be relaxed and concurrent execution introduced. This thesis also argues that the choice of data representation plays an important role in achieving atomicity refinement. In addition, this thesis presents an argument that rely-guarantee conditions and VDM offer a potentially fruitful approach to atomicity refinement. Specifically, rely-conditions can be used to represent assumptions about atomicity, and the refinement rules of VDM allow different data representations to be introduced. To this end, a more usable approach to rely-guarantee reasoning would benefit the search for a usable form of atomicity refinement. All of these points are illustrated with a novel development of Simpson's Four-Slot, a mechanism for asynchronous communication between processes.
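For readers unfamiliar with the case study, the sketch below transcribes Simpson's four-slot mechanism, as it is usually presented in the literature, into Python: a writer and a reader share a 2x2 array of data slots and a few control variables so that neither ever waits for the other. It illustrates the mechanism only; the thesis's rely-guarantee development of it is not reproduced here.

```python
# Sketch of Simpson's four-slot asynchronous communication mechanism.
# This is the textbook algorithm rendered in Python for illustration.

class FourSlot:
    def __init__(self):
        self.data = [[None, None], [None, None]]  # the four data slots
        self.slot = [0, 0]      # freshest slot within each pair
        self.latest = 0         # pair most recently written
        self.reading = 0        # pair the reader is currently using

    def write(self, item):
        pair = 1 - self.reading         # avoid the pair the reader holds
        index = 1 - self.slot[pair]     # avoid the slot last written in that pair
        self.data[pair][index] = item
        self.slot[pair] = index
        self.latest = pair

    def read(self):
        pair = self.latest
        self.reading = pair             # announce which pair is in use
        index = self.slot[pair]
        return self.data[pair][index]

channel = FourSlot()
channel.write("sample-1")
channel.write("sample-2")
print(channel.read())   # sample-2: always a coherent, recent value
```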
96. A context provisioning middleware with support for evolving awareness. Knappmeyer, Michael (January 2012)
This thesis contributes to the research domain of Ubiquitous and Context-aware Computing. It presents a novel middleware (entitled Context Provisioning Middleware with Support for Evolving Awareness, C-ProMiSE) that applies a consumer-producer role model as its architectural basis. The middleware aims at supporting diverse applications and services to easily and coherently acquire relevant context information. A mediating Context Broker facilitates the coordination between distributed Context Provider, Context Source and Context Consumer components. The chosen design principles support self-management capabilities and modular extendibility at run-time. Communication is based on the Representational State Transfer (REST) approach. Context is represented in ContextML, an XML-based modelling schema, providing a structured, generally applicable basis for various context domains, e.g. spatial, temporal, device-specific and user-centric properties. This combination of context management and context representation model allows for gradual and distributed context processing and reasoning. Context is abstracted in various layers, from primitive data up to high-level interpretation, e.g. users' activities. Specific emphasis is put on probabilistic reasoning over data originating from physical, virtual and logical sensors. In addition to the conceptualisation, a specific prototype implementation is presented and utilised as an experimentation testbed. Its functional evaluation covers field tests and context emulation. With regard to quantitative evaluation, the C-ProMiSE performance is finally assessed by applying both black-box tests of specific prototype components and Discrete Event Simulation at system level. The focus of the experimentation is to realistically estimate the responsiveness of context provisioning. The obtained results lend weight to the argument that a consumer-producer-broker based framework can serve as a basis for realising a distributed multi-domain context provisioning middleware that scales not only physically but also functionally with regard to context processing capabilities. Furthermore, the context management, context representation and context processing concepts allow for supporting a variety of emerging and evolving context-aware applications and services.
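The consumer-producer-broker role model can be illustrated with a small Python sketch in which providers register the context scope they can supply and the broker routes consumer queries to a matching provider. The REST transport and the ContextML payload are omitted, and all names and fields below are illustrative assumptions rather than the C-ProMiSE interfaces.

```python
# Toy consumer-producer-broker sketch: providers register by scope, the broker
# mediates consumer queries. Plain Python objects stand in for REST/ContextML.

class ContextBroker:
    def __init__(self):
        self.providers = {}               # scope -> provider callable

    def register(self, scope, provider):
        self.providers[scope] = provider

    def query(self, scope, entity):
        provider = self.providers.get(scope)
        if provider is None:
            raise LookupError("no provider for scope '" + scope + "'")
        return provider(entity)

def location_provider(entity):
    # A real provider would wrap a sensor or reasoner and return a ContextML document.
    return {"entity": entity, "scope": "position", "lat": 53.23, "lon": 7.15}

broker = ContextBroker()
broker.register("position", location_provider)
print(broker.query("position", "user:alice"))
```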
97. Network media distribution systems: understanding the media ecology of software using an evolutionary framework. Rawlings, Tomas Richard (January 2013)
Developed from the practice of building an open source Peer-to-Peer (P2P) distribution system, this research begins by examining the potential forking (duplicating two copies as separate entities) of a software development project. It first explores the discourses of forking within Software Studies and Actor Network Theory, concluding that both offer a degree of insight, especially around power relations, but that both lack the predictive and temporal qualities needed to model the emergent patterns arising from the development process. As an alternative view, the research considers the idea of forking as a software-borne process of speciation, where forking establishes a barrier to source-code flow much as a geographical barrier impedes genetic exchange in biology. Exploring the existing ideas and theories of applying an evolutionary model to the development of human technology, it argues that these fall into two broad areas of concern. The first of these areas considers evolution as too narrow a mechanistic process and so fails to fully account for the role of human society and culture. The second of these areas uses biological evolution as a metaphorical construct and by doing so fails to establish any solid methodological framework beyond that proxy. The research adopts the position that to fully examine technology evolution, the discourse should involve examples of technology which encompass complex assemblages of culture and technology. The research cites media as an example of such a technology, and P2P as a sub-set of media technology. Responding to the concerns with existing concepts of technology evolution, the research draws on the practice of software development to arrive at a new methodology. This method begins with the position that P2P technology has many advantages as an object of study, much as biologists study fruit flies for genetic research, where studying P2P is analogous to studying the genotype (genetic make-up) and phenotype (visual appearance) of a biological species. Building on this idea, it then argues that the source code used to create software is a parallel of DNA in form and function. From this basis the research proposes how examining the layers of committed source code of software projects can be used to establish a common analytical framework. This method allows the data gathered from this process to depict proto-phylogenetic trees of different P2P projects, giving an insight into software forking as a form of technology speciation. It concludes that a form of evolutionary process is occurring within software development and outlines a hypothesis as to how this form operates. This hypothesis proposes that the environment is the landscape consisting of human culture and societies, whereas the individual organism is the particular iteration of the technology, and an evolutionary process emerges from their interaction. It then explores some predictive uses of this hypothesis.
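As a toy illustration of treating committed source code as the 'genetic' record of a project, the sketch below compares snapshots of an original project and a fork using a Jaccard distance over their source lines, the kind of raw material from which a proto-phylogenetic tree could be built. The snapshots and the distance measure are assumptions made for the example; the research itself worked over full commit histories.

```python
# Illustrative sketch only: measure how far a fork has diverged from its
# parent by comparing sets of source lines, as a stand-in for "genetic" distance.

def jaccard_distance(snapshot_a, snapshot_b):
    a, b = set(snapshot_a), set(snapshot_b)
    return 1.0 - len(a & b) / len(a | b)

original = ["def connect(peer):", "    open_socket(peer)", "def share(file):"]
fork     = ["def connect(peer):", "    open_socket(peer)", "def share(file):",
            "def throttle(rate):"]           # the fork adds a feature

unrelated = ["class MediaPlayer:", "    pass"]

print(jaccard_distance(original, fork))       # 0.25: recent common ancestor
print(jaccard_distance(original, unrelated))  # 1.0: no shared "genetic" material
```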
98. A novel value analysis method for process network optimisation. Sadhukan, Jhuma (January 2002)
No description available.
99. The management of dynamically reconfigurable computing systems. Su, Lan (January 2008)
An abstract of the thesis of Lan Su, submitted to The University of Manchester Faculty of Engineering for the degree of Doctor of Philosophy of Engineering, presented March 2008. Title: The Management of Dynamically Reconfigurable Computing Systems. In recent years, there has been significant interest in the use of reconfigurable computing devices such as Field-Programmable Gate Arrays (FPGAs) to accelerate software computation. Many contemporary FPGAs support the ability to reconfigure some, or all, of the user logic at run-time, i.e. during the operation of the system. This is known as dynamic reconfiguration. One advantage of dynamic reconfiguration is the ability to execute designs that require a larger number of gates than are present on the FPGA. Some SRAM-based FPGAs support dynamic partial reconfiguration, which enables part of an FPGA to be reconfigured while the rest continues to operate. Such a capability opens up the prospect of multitasking hardware, analogous to the execution of concurrent software processes by an operating system. This thesis discusses the development of a multi-tasking operating system which supports the execution of applications by configuring the FPGA resources to execute hardware tasks at run-time. The operating system is called the FPGA Support System (FSS). This runs on a conventional processor attached to the target FPGA. For pragmatic reasons, the FSS was implemented on a personal computer, and the FPGA was a widely used Xilinx Virtex device on a PCI card. The key contribution of this thesis is the development of a device driver for the FSS, which can manipulate the Virtex bitstream at run-time to achieve partial reconfiguration. Because the architecture of the Virtex is frame-based (or column-based), two configuration algorithms (simple and advanced) were developed to support 'system calls' for task creation, deletion, suspension, resumption, etc., and all of these facilities are demonstrated via application programs. The experimental results indicated that the simple approach, suitable for one-dimensional column-based task placement, was an order of magnitude faster than the advanced method that can support two-dimensional placement. Research into hardware task scheduling and placement was also carried out. FSS algorithms were developed to decide when and where to place waiting tasks, and to reuse the hardware area occupied by terminated tasks as frequently as possible, thus executing as many tasks as possible on the hardware and achieving greater hardware resource utilisation. The management of dynamic reconfiguration for both 1D placement and a simplified form of 2D placement (quasi-2D) was simulated and compared using different scheduling policies. These simulation results indicated that a hybrid policy based upon the two scheduling policies provided the best results, and that the quasi-2D task placement algorithm performed better than the 1D placement algorithm in terms of average utilisation.
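One-dimensional, column-based placement of the kind the FSS performs can be sketched with a simple first-fit search over free device columns. The data structures and policy below are illustrative assumptions, not the FSS implementation.

```python
# Sketch of 1D column-based hardware task placement: a task requests a number
# of adjacent FPGA columns; the placer finds the first free run that fits and
# marks it busy until the task is deleted. Illustrative only.

class ColumnPlacer:
    def __init__(self, columns):
        self.free = [True] * columns      # one flag per device column

    def place(self, width):
        """First-fit: return the start column of a free run of `width`, or None."""
        run = 0
        for col, is_free in enumerate(self.free):
            run = run + 1 if is_free else 0
            if run == width:
                start = col - width + 1
                self.free[start:col + 1] = [False] * width
                return start
        return None

    def remove(self, start, width):
        self.free[start:start + width] = [True] * width

placer = ColumnPlacer(columns=8)
print(placer.place(3))   # 0 -> task occupies columns 0-2
print(placer.place(4))   # 3 -> task occupies columns 3-6
placer.remove(0, 3)      # first task terminates, its columns are reused
print(placer.place(2))   # 0
```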
100. A software component model with concurrency. Ntalamagkas, Ioannis (January 2009)
Component-Based Software Development (CBSD) represents a paradigm shift in software engineering, where the focus is on building systems out of pre-developed components. In CBSD, component developers are distinct from component users: the former concentrate on building reusable components, whereas the latter concentrate on building customised systems out of components. The different roles in component development give rise to the idealised component life cycle, which consists of the design, deployment and run-time phases. Ideally, component composition should occur in both the design and deployment phases, and it should be supported by a proper composition theory. Concurrency is an important issue in software engineering and in theoretical computer science. Decades of research have been devoted to finding efficient ways of discovering common concurrency errors, like deadlock, in abstract mathematical models and in software. Existing software component models do not address composition and concurrency completely. In general, component models support composition in either the design or the deployment phase, but not in both. Regarding concurrency, the support provided varies, ranging from complete formal models to leaving the concurrency aspect undefined and dependent on the implementation. More importantly, not all component models support both passive and active components. In this thesis we define a software component model that supports composition of active and passive components using explicit composition operators, in both the design and deployment phases of the idealised component life cycle. We also show that our composition connectors are control patterns, and we define their composition. This means that composite connectors are composed out of simpler ones. This allows for the hierarchical construction of complex control structures that can be used in further connector and component compositions. Connector composition constitutes a unique feature of our model.
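The idea of connectors as explicit, hierarchically composable control patterns can be sketched as follows: components expose only methods, while control resides in connectors such as a sequencer (invoke children in order) and a selector (choose a branch by a predicate), and a composite connector is built from simpler ones. The names and classes are illustrative and do not reproduce the thesis's formal model.

```python
# Hedged sketch of exogenous composition with explicit connectors.
# Components provide computation; connectors provide control and compose
# hierarchically because they share the same invoke interface.

class Component:
    def __init__(self, name):
        self.name = name

    def invoke(self, request):
        return self.name + "(" + request + ")"

class Sequencer:
    """Connector: invoke each child in order and collect the results."""
    def __init__(self, *children):
        self.children = children

    def invoke(self, request):
        return [child.invoke(request) for child in self.children]

class Selector:
    """Connector: route the request to one branch chosen by a predicate."""
    def __init__(self, predicate, if_true, if_false):
        self.predicate, self.if_true, self.if_false = predicate, if_true, if_false

    def invoke(self, request):
        branch = self.if_true if self.predicate(request) else self.if_false
        return branch.invoke(request)

# A composite connector: a selector whose branches are themselves sequencers.
pipeline = Selector(
    lambda req: req.startswith("batch"),
    Sequencer(Component("validate"), Component("process"), Component("archive")),
    Sequencer(Component("validate"), Component("process")),
)
print(pipeline.invoke("batch-42"))
```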