1. Applications of lattice theory to model checking

Kashyap, Sujatha, 27 April 2015
Society is increasingly dependent on the correct operation of concurrent and distributed software systems. Examples of such systems include computer networks, operating systems, telephone switches and flight control systems. Model checking is a useful tool for ensuring the correctness of such systems, because it is a fully automatic technique whose use does not require expert knowledge. Additionally, model checking allows for the production of error trails when a violation of a desired property is detected. Error trails are an invaluable debugging aid, because they provide the programmer with the sequence of events that lead to an error.

Model checking typically operates by performing an exhaustive exploration of the state space of the program. Exhaustive state space exploration is not practical for industrial use in the verification of concurrent systems because of the well-known phenomenon of state space explosion caused by the exploration of all possible interleavings of concurrent events. However, the exploration of all possible interleavings is not always necessary for verification. In this dissertation, we show that results from lattice theory can be applied to ameliorate state space explosion due to concurrency, and to produce short error trails when an error is detected. We show that many CTL formulae exhibit lattice-theoretic structure that can be exploited to avoid exploring multiple interleavings of a set of concurrent events. We use this structural information to develop efficient model checking techniques for both implicit (partial order) and explicit (interleaving) models of the state space. For formulae that do not exhibit the required structure, we present a technique called predicate filtering, which uses a weaker property with the desired structural characteristics to obtain a reduced state space which can then be exhaustively explored. We also show that lattice theory can be used to obtain a path of shortest length to an error state, thereby producing short error trails that greatly ease the task of debugging.

We provide experimental results from a wide range of examples, showing the effectiveness of our techniques at improving the efficiency of verifying and debugging concurrent and distributed systems. Our implementation is based on the popular model checker SPIN, and we compare our performance against the state-of-the-art state space reduction strategies implemented in SPIN.
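
As a point of reference for the shortest-error-trail idea, the following is a minimal sketch of breadth-first search over an explicit state graph, which by construction returns a counterexample of minimum length. It is only a generic illustration, not the dissertation's lattice-theoretic algorithm; the `successors` and `is_error` callbacks and the toy transition system are assumptions made up for the example.

```python
from collections import deque

def shortest_error_trail(initial, successors, is_error):
    """Breadth-first search over an explicit state graph; returns a shortest
    sequence of states from `initial` to one satisfying `is_error`, or None
    if no error state is reachable (generic sketch, not the dissertation's
    lattice-based technique)."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_error(state):
            trail = []                      # reconstruct the trail via parent links
            while state is not None:
                trail.append(state)
                state = parent[state]
            return list(reversed(trail))
        for succ in successors(state):
            if succ not in parent:
                parent[succ] = state
                queue.append(succ)
    return None

# Toy transition system: states are (x, y) pairs, error when x + y exceeds 3.
succ = lambda s: [(s[0] + 1, s[1]), (s[0], s[1] + 1)]
print(shortest_error_trail((0, 0), succ, lambda s: s[0] + s[1] > 3))
# -> [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)], a trail of minimum length
```

The dissertation's contribution is to obtain such short trails without enumerating all interleavings; the sketch above only shows what a shortest trail looks like once a state graph is available.
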
2. Hierarchical Interface-Based Decentralized Supervisory Control

Liu, Huailiang, 11 December 2015
In decentralized control, agents have only a partial view and partial control of the system and must cooperate to achieve the control objective. In order to synthesize a decentralized control solution, a specification must satisfy the co-observability property. Existing co-observability verification methods require the possibly intractable construction of the complete system. To address this issue, we introduce an incremental verification of co-observability approach. Selected subgroups of the system are evaluated individually, until verification is complete. The new method is potentially much more efficient than the monolithic approaches, in particular for systems composed of many subsystems, allowing some previously intractable problems to become manageable. Properties of this new strategy are presented, along with a corresponding algorithm and an example.

To further increase the scalability of decentralized control, we wish to adapt the existing Hierarchical Interface-Based Supervisory Control (HISC) framework to support it. We introduce the Hierarchical Interface-Based Decentralized Supervisory Control (HIDSC) framework that extends HISC to decentralized control. To adapt co-observability for HIDSC, we propose a per-component definition of co-observability along with a verification strategy that requires only a single component at a time in order to verify co-observability. Finally, we provide and prove the necessary and sufficient conditions for supervisory control existence in the HIDSC framework and illustrate our approach with an example. As the entire system model never needs to be constructed, HIDSC potentially provides significant savings.
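
To give a flavour of the incremental style of verification described above, here is a schematic loop that tries subgroups of components in order of increasing size and composes the complete system only if no smaller subgroup settles the question. It is a hypothetical sketch, not the thesis's algorithm: `check` stands in for a black-box co-observability decision procedure restricted to a subgroup, and the assumption that a successful check on some subgroup suffices for the whole system is exactly the kind of property the incremental approach must justify.

```python
from itertools import combinations

def incremental_check(components, spec, check):
    """Schematic incremental verification loop (illustration only).

    `check(subgroup, spec)` stands in for a black-box decision procedure,
    e.g. a monolithic co-observability check restricted to `subgroup`.
    Subgroups are tried in order of increasing size; the complete system
    is composed only when no smaller subgroup settles the question."""
    for size in range(1, len(components) + 1):
        for subgroup in combinations(components, size):
            if check(subgroup, spec):
                return True, subgroup            # verified using this subgroup only
    return False, tuple(components)              # even the full system fails

# Hypothetical usage: four plant components and a stub check that succeeds
# once the subgroup contains both components named in the specification.
spec = {"needs": {"A", "C"}}
verified, used = incremental_check(["A", "B", "C", "D"], spec,
                                   lambda group, s: s["needs"].issubset(group))
print(verified, used)   # True ('A', 'C'): a 2-component check was enough
```
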
3. Systematic and Scalable Testing of Concurrent Programs

Simsa, Jiri, 16 December 2013
The challenge this thesis addresses is to speed up the development of concurrent programs by increasing the efficiency with which concurrent programs can be tested and consequently evolved. The goal of this thesis is to generate methods and tools that help software engineers increase confidence in the correct operation of their programs. To achieve this goal, this thesis advocates testing of concurrent software using a systematic approach capable of enumerating possible executions of a concurrent program. The practicality of the systematic testing approach is demonstrated by presenting a novel software infrastructure that repeatedly executes a program test, controlling the order in which concurrent events happen so that different behaviors can be explored across different test executions. By doing so, systematic testing circumvents the limitations of traditional ad-hoc testing, which relies on chance to discover concurrency errors.

However, the idea of systematic testing alone does not quite solve the problem of concurrent software testing. The combinatorial nature of the number of ways in which concurrent events of a program can execute causes an explosion of the number of possible interleavings of these events, a problem referred to as state space explosion. To address the state space explosion problem, this thesis studies techniques for quantifying the extent of state space explosion and explores several directions for mitigating state space explosion: parallel state space exploration, restricted runtime scheduling, and abstraction reduction. In the course of its research exploration, this thesis pushes the practical limits of systematic testing by orders of magnitude, scaling systematic testing to real-world programs of unprecedented complexity.
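
To make the combinatorics behind state space explosion concrete, the sketch below (a toy illustration, not the thesis's testing infrastructure) enumerates every schedule of two threads that each perform a read-modify-write on a shared counter, then replays each schedule against a simple oracle. Systematic enumeration covers all six schedules and therefore always exposes the lost-update outcome, whereas ad-hoc testing only finds it by chance.

```python
def interleavings(a, b):
    """Yield every interleaving of sequences a and b that preserves the
    internal order of each sequence: exactly the schedules a systematic
    tester must cover for two threads."""
    if not a or not b:
        yield list(a) + list(b)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    """Toy oracle: replay a schedule of read/write events on a shared counter."""
    counter, local = 0, {}
    for ev in schedule:
        thread = ev[1]
        if ev[0] == "r":
            local[thread] = counter          # read the shared counter
        else:
            counter = local[thread] + 1      # write back the incremented value
    return counter

t1, t2 = ["r1", "w1"], ["r2", "w2"]          # two read-modify-write threads
schedules = list(interleavings(t1, t2))
print(len(schedules))                        # 6 = 4! / (2! * 2!) interleavings
print(sorted({run(s) for s in schedules}))   # [1, 2]: some schedules lose an update
```
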
4. Scalable analysis of stochastic process algebra models

Tribastone, Mirco, January 2010
The performance modelling of large-scale systems using discrete-state approaches is fundamentally hampered by the well-known problem of state-space explosion, which causes exponential growth of the reachable state space as a function of the number of the components which constitute the model. Because they are mapped onto continuous-time Markov chains (CTMCs), models described in the stochastic process algebra PEPA are no exception.

This thesis presents a deterministic continuous-state semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying mathematics for the performance evaluation. This is suitable for models consisting of large numbers of replicated components, as the ODE problem size is insensitive to the actual population levels of the system under study. Furthermore, the ODE is given an interpretation as the fluid limit of a properly defined CTMC model when the initial population levels go to infinity. This framework allows the use of existing results which give error bounds to assess the quality of the differential approximation. The computation of performance indices such as throughput, utilisation, and average response time are interpreted deterministically as functions of the ODE solution and are related to corresponding reward structures in the Markovian setting.

The differential interpretation of PEPA provides a framework that is conceptually analogous to established approximation methods in queueing networks based on mean-value analysis, as both approaches aim at reducing the computational cost of the analysis by providing estimates for the expected values of the performance metrics of interest. The relationship between these two techniques is examined in more detail in a comparison between PEPA and the Layered Queueing Network (LQN) model. General patterns of translation of LQN elements into corresponding PEPA components are applied to a substantial case study of a distributed computer system. This model is analysed using stochastic simulation to gauge the soundness of the translation. Furthermore, it is subjected to a series of numerical tests to compare execution runtimes and accuracy of the PEPA differential analysis against the LQN mean-value approximation method.

Finally, this thesis discusses the major elements concerning the development of a software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment for PEPA, including modules for static analysis, explicit state-space exploration, numerical solution of the steady-state equilibrium of the Markov chain, stochastic simulation, the differential analysis approach herein presented, and a graphical framework for model editing and visualisation of performance evaluation results.
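
As a rough illustration of the fluid-approximation idea (not the PEPA ODE semantics or the thesis's tooling), the sketch below integrates, with forward Euler, a toy client/server population model: the ODE state is just two population counts, so the cost of the analysis is independent of how many clients the model contains. The model structure and rates are assumptions made up for the example.

```python
def simulate(n_clients=1000, n_servers=50, r_think=1.0, r_serve=10.0,
             dt=0.001, t_end=5.0):
    """Forward-Euler integration of a toy fluid (ODE) model: x_think and
    x_req count the clients in each local state, and the service rate is
    capped by the server population, mimicking cooperation on a shared
    action. Illustrative only."""
    x_think, x_req = float(n_clients), 0.0
    t = 0.0
    while t < t_end:
        think_rate = r_think * x_think                           # clients issuing requests
        serve_rate = min(r_serve * n_servers, r_serve * x_req)   # bounded by the servers
        x_think += dt * (serve_rate - think_rate)
        x_req   += dt * (think_rate - serve_rate)
        t += dt
    return x_think, x_req

print(simulate())   # tends towards roughly (500.0, 500.0): the servers saturate
# The ODE state is two numbers regardless of n_clients, so the cost of the
# analysis does not grow with the population, unlike the underlying CTMC.
```
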
5. Using Explicit State Space Enumeration For Specification Based Regression Testing

Chakrabarti, Sujit Kumar, 01 1900
Regression testing of an evolving software system may involve significant challenges. While it must maximise the probability of discovering whether the latest changes to the system have broken some existing feature, it also needs to be done as economically as possible. API libraries are a particularly important class of software systems. Such libraries typically constitute a very important component of many software systems. High quality requirements make it imperative to continually optimise the internal implementation of such libraries without affecting the external interface. Therefore, it is preferable to guide regression testing by some kind of formal specification of the library.

The testing problem comprises three parts: computation of test data, execution of tests, and analysis of test results. Current research mostly focuses on the first part. The objective of test data computation is to maximise the probability of uncovering bugs, and to do so with as few test cases as possible. The problem of test data computation for regression testing is to select a subset of the original test suite whose execution suffices to test for bugs possibly introduced by the modifications made since the last round of testing. A variant of this problem is that of regression testing of API libraries. The regression testing of an API is usually done by making function calls in such a way that the sequence of function calls thus made satisfies a test specification. The test specification in turn embodies some concept of completeness.

In this thesis, we focus on the problem of test sequence computation for the regression testing of API libraries. At the heart of this method lies the creation of a state space model of the API library, obtained by reverse engineering it through execution of the system, guided by a formal API specification. Once the state space graph is obtained, it is used to compute test sequences satisfying some test specification. We analyse the theoretical complexity of the problem of test sequence computation and provide various heuristic algorithms for it. State space explosion is a classical problem encountered whenever one attempts to create a finite state model of a program. Our method also faces this limitation. We explore a simple and intuitive method of ameliorating this problem: reducing the size of the state vector. We develop theoretical insights into this method, and present experimental results indicating its practical effectiveness. Finally, we bring all this together in the design and implementation of a tool called Modest.
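
The following sketch illustrates, generically, the two steps described above: building a state space graph of an API by executing it under a state abstraction, and then deriving call sequences that cover every discovered transition. It is not the Modest tool or the thesis's algorithms; the bounded-stack API, the `abstract` projection, and the transition-coverage criterion are assumptions chosen to keep the example small.

```python
from collections import deque

def build_state_graph(initial_state, operations, abstract):
    """Explore an API by executing it: `operations` maps a call name to a
    function state -> new state (raising if the call is not enabled), and
    `abstract` projects a concrete state onto the state vector used as a
    graph node. A real harness would drive the library under test."""
    start = abstract(initial_state)
    graph, seen = {start: {}}, {start}
    frontier = deque([initial_state])
    while frontier:
        concrete = frontier.popleft()
        node = abstract(concrete)
        for name, op in operations.items():
            try:
                nxt = op(concrete)
            except Exception:
                continue                           # call not enabled in this state
            target = abstract(nxt)
            graph[node][name] = target
            if target not in seen:
                seen.add(target)
                graph[target] = {}
                frontier.append(nxt)
    return start, graph

def transition_covering_sequences(start, graph):
    """One call sequence per discovered transition: a shortest path to the
    transition's source followed by the transition itself (a simple
    coverage-style test specification)."""
    paths, queue = {start: []}, deque([start])
    while queue:                                   # BFS for shortest call paths
        node = queue.popleft()
        for name, target in graph[node].items():
            if target not in paths:
                paths[target] = paths[node] + [name]
                queue.append(target)
    return [paths[src] + [name] for src in graph for name in graph[src]]

# Toy API: a stack bounded at two elements, abstracted by its size.
def push(s):
    if len(s) >= 2:
        raise OverflowError
    return s + [1]

def pop(s):
    if not s:
        raise IndexError
    return s[:-1]

start, graph = build_state_graph([], {"push": push, "pop": pop}, abstract=len)
print(transition_covering_sequences(start, graph))
# [['push'], ['push', 'push'], ['push', 'pop'], ['push', 'push', 'pop']]
```
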
