1. A method for rigorous development of fault-tolerant systems. Lopatkin, Ilya. January 2013.
With the rapid development of information systems and our increasing dependency on computer-based systems, ensuring their dependability becomes one of the most important concerns during system development. This is especially true for the mission- and safety-critical systems on which we rely not to put significant resources and lives at risk. Development of critical systems traditionally involves formal modelling as a fault prevention mechanism. At the same time, systems typically support fault tolerance mechanisms to mitigate runtime errors. However, fault tolerance modelling and, in particular, rigorous definitions of fault tolerance requirements, fault assumptions and system recovery have not been given enough attention during formal system development. The main contribution of this research is in developing a method for top-down formal design of fault-tolerant systems. The refinement-based method provides modelling guidelines presented in the following form: a set of modelling principles for systematic modelling of fault tolerance, a fault tolerance refinement strategy, and a library of generic modelling patterns assisting in disciplined integration of error detection and error recovery steps into models. The method supports separation of normal and fault-tolerant system behaviour during modelling. It provides an environment for explicit modelling of fault tolerance and modal aspects of system behaviour which ensures the rigour of the proposed development process. The method is supported by tools that are smoothly integrated into an industry-strength development environment. The proposed method is demonstrated on two case studies. In particular, the evaluation is carried out using a medium-scale industrial case study from the aerospace domain. The method is shown to provide support for explicit modelling of fault tolerance, to reduce development effort during modelling, to support reuse of fault tolerance modelling, and to facilitate adoption of formal methods.
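The thesis's models are built by formal refinement in a tool-supported environment that cannot be reproduced here; the following is only a minimal Python sketch of one idea the abstract names, the disciplined separation of normal behaviour from the error detection and error recovery steps. The component, the value range and the fault threshold are all hypothetical.

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    RECOVERING = auto()

class SensorUnit:
    """Toy component separating normal behaviour from fault tolerance."""

    def __init__(self, fault_threshold=3):
        self.mode = Mode.NORMAL
        self.errors = 0
        # Hypothetical fault assumption: at most N consecutive transient errors.
        self.fault_threshold = fault_threshold

    def read(self, raw_value):
        # Error detection step: an out-of-range reading is treated as an error.
        if not (0.0 <= raw_value <= 100.0):
            self.errors += 1
            self.mode = Mode.RECOVERING
            return self.recover()
        # Normal behaviour, modelled separately from the recovery path.
        self.errors = 0
        self.mode = Mode.NORMAL
        return raw_value

    def recover(self):
        # Error recovery step: fall back to a safe default while errors persist.
        if self.errors >= self.fault_threshold:
            raise RuntimeError("fault assumption violated: too many consecutive errors")
        return 50.0  # hypothetical safe fallback value
```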
2. The matching unit of the Manchester data-flow computer: a pseudo associative store with hardware hashing. Da Silva, Jose Guilherme Domingues. January 1982.
In the data-flow model of computation, instruction execution is determined by the availability of data rather than by an explicit or implicit sequential flow of control. One of the major problems in the architectural design of a data-flow computer is the detection of the availability of data. This problem is compounded if the data carry context information as well as pointers to the instructions that will use them; an instruction is then executable when all data directed to it within the same context are present. The solution adopted in the Manchester design is to limit the maximum number of operands of an instruction to two, and to use associative storage techniques to detect the presence of data. The use of true content-addressable memory is precluded by its low density and high cost, and therefore a pseudo associative store using hardware hashing techniques and implemented with conventional random-access memory is employed. The concept of sequence in the data-flow model of computation is unimportant: as a result, search operations do not have to be resolved in the same sequence in which the store is interrogated. This suggests a design which uses a main parallel hash table and a separate overflow mechanism operating in parallel. In this manner, an overflow search need not halt the progress of further main hash table searches. A pseudo associative store results whose average access time is very close to the cycle time of the original random-access memory.
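As a software illustration of the matching principle (not the hardware design), here is a minimal Python sketch: tokens are keyed on their destination instruction and context, a main hash table holds at most one waiting token per slot, and a separate overflow structure absorbs collisions so that main-table lookups can proceed independently. All names are hypothetical.

```python
class MatchingStore:
    """Sketch of a pseudo-associative matching store.

    Because each instruction has at most two operands, an arriving token
    either finds its partner waiting or is stored until the partner arrives.
    """

    def __init__(self, size=1024):
        self.size = size
        self.table = [None] * size   # main hash table (one token per slot)
        self.overflow = {}           # stands in for the parallel overflow unit

    def _hash(self, key):
        # Hardware hashing would fold address bits; Python's hash suffices here.
        return hash(key) % self.size

    def insert(self, dest, context, value):
        """Return the matched operand pair if the partner was present, else None."""
        key = (dest, context)
        slot = self._hash(key)
        entry = self.table[slot]
        if entry is not None and entry[0] == key:
            self.table[slot] = None
            return (entry[1], value)          # partner found: instruction is executable
        if key in self.overflow:
            return (self.overflow.pop(key), value)
        if entry is None:
            self.table[slot] = (key, value)   # store and wait for the partner
        else:
            self.overflow[key] = value        # hash collision: divert to overflow
        return None
```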
3. Reconstruction of the international information technology standardisation process within a component-based design framework: a component based project development setting perspective. Ngosi, Theodora N. January 2007.
No description available.
4. Principles for secure system design. Jürjens, Jan. January 2002.
No description available.
5. Design time detection of architectural mismatches in service oriented architectures. Gamble, Carl Jeffrey. January 2011.
Service Oriented Architecture (SOA) is a software component paradigm that has the potential to allow for flexible systems that are loosely coupled to each other. Services are discoverable entities that may be bound to at run time by a client, who is able to use the service correctly by referring to the service's description documents. Assumptions often have to be made in any design process if the problem domain is not fully specified. If those decisions concern the software architecture of a component, and it is inserted into a system with differing and incompatible assumptions, then we say that an architectural mismatch exists. Architectural styles are a form of software reuse. They can simply be used by referring to a name such as "client-server" or "pipe and filter", where these names may conjure up topologies and expected properties in the architect's mind. Given the right software environment, however, they can also be more rigorously defined. This can lead to a vocabulary of elements in the system and defined properties of those elements, along with rules and analysis to either show correctness of an implementation or reveal some emergent property of the whole. SOA includes a requirement that service components make available descriptions of themselves, indicating how they are to be used. With this in mind, and assuming we have a suitable description of the client application, it should be the case that we can detect architectural mismatches when designing a new system. Here designing can range from organising a set of existing components into a novel configuration through to devising an entirely new set of components for an SOA. This work investigates the above statement using Web Services as the SOA implementation and found that, to a degree, the statement is true. The only element of description required for a web service is the Web Service Description Language (WSDL) document, and this does indeed allow the detection of a small number of mismatches when represented using our minimal web service architectural style. However, from the literature we find that the above mismatches are only a subset of those that we argue should be detectable. In response to this we produce an enhanced web service architectural style containing properties and analysis supporting the detection of this more complete set of mismatches, and demonstrate its effectiveness against a number of case studies.
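As a rough illustration of the detection idea, the sketch below compares a client's architectural assumptions against a service's advertised properties and reports disagreements. In the thesis such properties would be derived from WSDL documents and the architectural style; the dictionaries and property names here are hypothetical.

```python
# Hypothetical property names; a real tool would extract these from service
# description documents rather than hand-written dicts.
def detect_mismatches(client_assumptions, service_description):
    """Return the architectural properties on which the two parties disagree."""
    mismatches = []
    for prop, expected in client_assumptions.items():
        actual = service_description.get(prop)
        if actual is not None and actual != expected:
            mismatches.append((prop, expected, actual))
    return mismatches

client = {"interaction": "synchronous", "encoding": "document-literal"}
service = {"interaction": "asynchronous", "encoding": "document-literal"}
print(detect_mismatches(client, service))
# [('interaction', 'synchronous', 'asynchronous')]
```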
6. A toolkit for model checking of electronic contracts. Abdelsadiq, Abubkr. January 2013.
In the business world, contracts are used to regulate business interactions between trading parties. In this context, an electronic contracting system can be used to monitor business-to-business interactions to ensure that they comply with the rights (permissions), obligations and prohibitions stipulated in contract clauses. Such an electronic contracting system will require an executable version of the contract (e-contract) for compliance checking. It is important to verify the correctness properties of an e-contract before deploying it for compliance checking. Model checkers are widely used for automatic verification of concurrent systems. However, such tools are not yet available for e-contracts with means for expressing directly and intuitively key concepts that appear recurrently in contracts, such as executions of business operations and the granting (cancellation, suspension, fulfilment, violation, etc.) of rights, obligations and prohibitions to role players. This thesis rectifies the situation by developing a high-level e-contract verification toolkit using the Spin model checker. A formal Contractual Business-To-Business interaction (CB2B) model, based on the concepts of contract compliance checking developed earlier at Newcastle University, has been constructed. Further, Promela, the input language of the Spin model checker, has been extended in a manner that enables specification of contract clauses in terms of contract entities: role players, business operations, rights, obligations and prohibitions. A given contract can now be expressed using extended Promela as a set of declarations and a set of Event-Condition-Action rules. In addition, the designer can specify the correctness requirements to be verified in Linear Temporal Logic directly in terms of the contract entities. A notable feature is that the CB2B model automatically checks for contract-independent properties: properties that must hold for all contracts. For example, at run time, a contract should not simultaneously grant a role player a right to perform an operation and also prohibit it. Thus, the toolkit hides much of the intricate detail of dealing with Promela processes communicating through channels and enables a designer to build verifiable abstract models directly in terms of contract entities. The usefulness of the toolkit is demonstrated by trying out a number of contract examples used by researchers working on contract verification. The thesis also shows how the toolkit can be used for generating test cases for testing an implemented system.
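To make the contract-independent property concrete, here is a Python miniature (not the toolkit's extended Promela) in which rights and prohibitions are tracked per role player and operation, and the consistency check from the example above is asserted after every update. All identifiers are hypothetical.

```python
class ContractState:
    """Tracks which (role, operation) pairs are currently permitted or forbidden."""

    def __init__(self):
        self.rights = set()        # (role, operation) pairs currently permitted
        self.prohibitions = set()  # (role, operation) pairs currently forbidden

    def grant_right(self, role, op):
        self.rights.add((role, op))
        self.check_consistency()

    def prohibit(self, role, op):
        self.prohibitions.add((role, op))
        self.check_consistency()

    def check_consistency(self):
        # Contract-independent property: no role player may simultaneously
        # hold a right and a prohibition for the same operation.
        clash = self.rights & self.prohibitions
        assert not clash, f"contract grants and prohibits the same operation: {clash}"

state = ContractState()
state.grant_right("buyer", "submit_order")
state.prohibit("buyer", "submit_order")  # raises AssertionError: property violated
```

A model checker would explore every interleaving of such rule firings rather than a single run, but the property being checked is the same.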
7. Ideology and information systems. Straub, Bernhard Heinrich. January 1991.
Ideology is a political and a sociological term. Thinking about 'system' makes the results of sociological thought appear relevant. The understanding of information systems (IS) as social systems or as phenomena in a social context justifies, therefore, the use of the concept of ideology in the context of IS. As a consequence of this application a series of hypotheses can be formulated about IS and the study of IS. Some of these are taken up in a thought experiment. This experiment is presented in a dialectical mould, consisting of thesis, antithesis and synthesis. In the thesis, the application of the concept of ideology is advocated because it is seen as contributing relevant knowledge to the study of IS. In the antithesis, the above hypothesis is opposed, and consequently the application of the concept of ideology is not advocated. In the synthesis, the application of the concept of ideology is put into a different perspective, where the importance of knowledge is substituted by an emphasis on thinking. Thinking is introduced as the touchstone of relevant knowledge. Its elusive nature is responsible for the elusiveness of claims in systems-thinking and -practice.
8. Self-diagnosis of faults in microprocessor systems. Bilton, J. M. January 1978.
No description available.
9. Experimental user interface design toolkit for interaction research (IDTR). Golovine, Jean Claude Raymond Rolin. January 2013.
The research reported and discussed in this thesis represents a novel approach to user interface evaluation and optimisation through cognitive modelling. This is achieved through the development and testing of a toolkit, or platform, titled Toolkit for Optimisation of Interface System Evolution (TOISE). The research is conducted in two main phases. In phase 1, the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture is used to design Simulated User (SU) models. This allows models of user interaction to be tested on a specific User Interface (UI). In phase 2, an evolutionary algorithm is added and used to evolve and test an optimised solution to user interface layout based on the original interface design. The thesis presents a technical background, followed by an overview of some applications in their respective fields. The core concepts behind TOISE are introduced through a discussion of the ACT-R architecture, with a focus on the ACT-R models that are used to simulate users. The notion of adding a genetic algorithm optimiser is introduced and discussed in terms of the feasibility of using simulated users as the basis for automated evaluation to optimise usability. The design and implementation of TOISE is presented and discussed, followed by a series of experiments that evaluate the TOISE system. While the research had to address and solve a large number of technical problems, the resulting system does demonstrate potential as a platform for automated evaluation and optimisation of user interface layouts. The limitations of the system and the approach are discussed and further work is presented. It is concluded that the research is novel and shows considerable promise in terms of feasibility and potential for optimising layout for enhanced usability.
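As a sketch of how an evolutionary optimiser can be driven by a simulated user, the following fragment evolves a hypothetical button layout against a crude pointer-travel cost. The real system uses ACT-R models as the simulated users; the cost function, task sequence and GA parameters below are invented stand-ins.

```python
import random

# Hypothetical stand-in for a cognitive simulated user: the cost of a layout
# is the total pointer travel over a fixed task sequence.
TASK = ["open", "edit", "save", "edit", "save", "close"]

def simulated_user_cost(layout):
    x = y = 0.0
    cost = 0.0
    for action in TASK:
        tx, ty = layout[action]
        cost += ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5  # Euclidean travel
        x, y = tx, ty
    return cost

def random_layout():
    # Place each distinct control somewhere on a hypothetical 800x600 canvas.
    return {a: (random.uniform(0, 800), random.uniform(0, 600)) for a in TASK}

def mutate(layout):
    child = dict(layout)
    a = random.choice(list(child))
    child[a] = (random.uniform(0, 800), random.uniform(0, 600))
    return child

def evolve(generations=200, pop_size=30):
    population = [random_layout() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=simulated_user_cost)
        survivors = population[: pop_size // 2]   # selection: keep the fitter half
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=simulated_user_cost)

best = evolve()
print(simulated_user_cost(best))  # cost of the evolved layout
```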
10. Performance management of event processing systems. Li, Chunhui. January 2014.
This thesis is a study of the performance management of Complex Event Processing (CEP) systems. CEP systems have characteristics distinct from other well-studied computer systems, such as batch and online transaction processing systems and database-centric applications, and these characteristics introduce new challenges and opportunities to performance management for CEP systems. Methodologies used for benchmarking CEP systems in many performance studies focus on scaling the load injection but do not consider the impact of the functional capabilities of CEP systems. This thesis proposes an approach to evaluating the performance of CEP engines' functional behaviours on events and develops a benchmark platform for CEP systems: CEPBen. The CEPBen benchmark platform is developed to explore the fundamental functional performance of event processing systems: filtering, transformation and event pattern detection. It is also designed to provide a flexible environment for exploring new metrics and influential factors for CEP systems and evaluating the performance of CEP systems. Studies on factors and new metrics are carried out using the CEPBen benchmark platform on Esper. Different measurement points for response time in the performance management of CEP systems are discussed, and the response time of a targeted event is proposed as a quality-of-service metric to be used alongside the traditional response time of CEP systems. Maximum query load is proposed as a capacity indicator with respect to query complexity, and the number of live objects in memory as a performance indicator with respect to memory management. Query depth is studied as a performance factor that influences CEP system performance.
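To illustrate per-event measurement of functional behaviours, the fragment below times a filtering step and a simple "followed-by" pattern detection for each incoming event, in the spirit of measuring the response time of a targeted event. This is a hypothetical miniature in Python, not CEPBen or Esper code; event fields and thresholds are invented.

```python
import time

def filter_op(event):
    return event["price"] > 100          # filtering behaviour (hypothetical predicate)

class FollowedBy:
    """Detect pattern: a 'bid' event followed by an 'ask' event on the same symbol."""
    def __init__(self):
        self.pending = set()
    def on_event(self, event):
        if event["type"] == "bid":
            self.pending.add(event["symbol"])
            return False
        return event["type"] == "ask" and event["symbol"] in self.pending

detector = FollowedBy()
events = [
    {"type": "bid", "symbol": "XYZ", "price": 101},
    {"type": "ask", "symbol": "XYZ", "price": 105},   # the "targeted event"
]
for e in events:
    start = time.perf_counter()
    matched = filter_op(e) and detector.on_event(e)   # filtered events skip the detector
    latency = time.perf_counter() - start             # response time of this event
    print(e["type"], matched, f"{latency * 1e6:.1f} us")
```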