
Applications of Description Logic and Causality in Model Checking

Model checking is an automated technique for the verification of finite-state systems that is widely used in practice.
In model checking, a model M is verified against a specification $\varphi$, exhaustively checking that the tree of all computations of M satisfies $\varphi$.
When $\varphi$ fails to hold in M, the negative result is accompanied by a counterexample: a computation of M that demonstrates the failure. State-of-the-art model checkers apply Binary Decision Diagrams (BDDs) as well as satisfiability solvers for this task. However, both methods suffer from the state-explosion problem, which restricts the application of model checking to modestly sized systems. The importance of model checking makes it worthwhile to explore alternative technologies, in the hope of broadening the applicability of the technique to a wider class of systems.
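As a concrete (hedged) illustration of the exhaustive checking described above, the sketch below model-checks a simple invariant (safety) property by breadth-first exploration of a toy transition system, returning a counterexample path when the invariant fails. The two-bit counter system and all names here are made-up examples, not taken from the thesis:

```python
# Minimal sketch of explicit-state model checking for an invariant property:
# explore all reachable states of a transition system and, if the invariant
# fails somewhere, reconstruct a (shortest) counterexample path.
from collections import deque

def check_invariant(initial, successors, invariant):
    """BFS over reachable states. Returns (True, None) if the invariant
    holds in every reachable state, else (False, path) where path is a
    shortest computation from the initial state to a violating state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            path = []  # walk parent pointers back to the initial state
            while state is not None:
                path.append(state)
                state = parent[state]
            return False, path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None

# Toy model: states 0..3, each step increments modulo 4; invariant "state != 3".
ok, cex = check_invariant(0, lambda s: [(s + 1) % 4], lambda s: s != 3)
print(ok, cex)  # False [0, 1, 2, 3]
```

The state-explosion problem is visible even in this sketch: the `parent` dictionary stores every reachable state explicitly, which is exactly what symbolic (BDD- or SAT-based) methods try to avoid.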

Description Logic (DL) is a family of knowledge representation formalisms based on decidable fragments of first-order logic. DL is used mainly for designing ontologies in information systems. In recent years several DL reasoners have been developed, demonstrating an impressive capability to cope with very large ontologies.

This work consists of two parts. In the first, we harness the growing ability of DL reasoners to solve model checking problems. We show how DL can serve as a natural setting for representing and solving a model checking problem, and present a variety of encodings that translate such problems into consistency queries in DL.
Experimental results, using the Description Logic reasoner FaCT++, demonstrate that for some systems and properties, our method can
outperform existing ones.
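The specific encodings are developed in the thesis itself; as background only, a standard correspondence identifies the basic DL ALC with multi-modal logic K, so next-step temporal modalities translate directly into DL constructors. Modelling the transition relation of a system as a role $R$ and an atomic proposition $p$ as a concept $P$ (an illustrative choice, not the thesis's encoding), one gets:

```latex
% Illustrative ALC / modal-K correspondence (background, not the thesis's
% encodings): states become individuals, the transition relation a role R,
% and an atomic proposition p a concept P.
\begin{align*}
  \mathit{EX}\, p \;&\equiv\; \exists R . P
      && \text{some } R\text{-successor satisfies } p \\
  \mathit{AX}\, p \;&\equiv\; \forall R . P
      && \text{every } R\text{-successor satisfies } p
\end{align*}
```

Under an encoding in this spirit, a model checking question can be phrased as a consistency query: the knowledge base describing M is consistent together with a concept expressing $\neg\varphi$ exactly when some computation of M violates $\varphi$. Handling full temporal operators (which involve fixpoints) requires more machinery than this one-step correspondence shows.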

In the second part we approach a different aspect of model checking. When a specification fails to hold in a model and a counterexample is presented to the user, the counterexample may itself be complex and difficult to understand. We propose an automatic technique to find the computation steps, and their associated variable values, that are of particular importance in generating the counterexample. We use the notion of causality to formally define a set of causes for the failure of the specification on the given counterexample. We give a linear-time algorithm to detect the causes, and demonstrate how they can be presented to the user as a visual explanation of the failure.
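To make the flavour of causality-based explanation concrete, here is a simplified counterfactual sketch, not the thesis's definition or algorithm: given a counterexample trace for an invariant, a pair (step, variable) is flagged as a cause when flipping that single value would repair the violated step. All names and the example property below are assumptions for illustration:

```python
# Hedged sketch of counterfactual cause detection on a counterexample trace.
# A trace is a list of states; each state maps variable names to Booleans.
# The specification is a simple invariant: spec(state) must hold at each step.
# (step, var) is reported as a cause if flipping var alone makes the violated
# step satisfy the invariant -- a simplified criterion inspired by
# causality-based explanations, not the thesis's exact definition.

def find_causes(trace, spec):
    causes = []
    for step, state in enumerate(trace):
        if spec(state):
            continue  # this step satisfies the invariant; nothing to explain
        for var, value in state.items():
            flipped = dict(state)
            flipped[var] = not value
            if spec(flipped):  # flipping this one value repairs the step
                causes.append((step, var))
    return causes

# Example: invariant "not (req and not ack)" is violated at step 2.
trace = [
    {"req": False, "ack": False},
    {"req": True,  "ack": True},
    {"req": True,  "ack": False},  # violation
]
spec = lambda s: not (s["req"] and not s["ack"])
print(find_causes(trace, spec))  # [(2, 'req'), (2, 'ack')]
```

The reported pairs are exactly the step/value combinations one would highlight visually on the trace; the loop makes one pass over the trace with a constant amount of work per variable occurrence, in line with a linear-time analysis.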

Identifier: oai:union.ndltd.org:WATERLOO/oai:uwspace.uwaterloo.ca:10012/4485
Date: January 2009
Creators: Ben-David, Shoham
Source Sets: University of Waterloo Electronic Theses Repository
Language: English
Type: Thesis or Dissertation
