41 |
Applications of Description Logic and Causality in Model Checking. Ben-David, Shoham. January 2009.
Model checking is an automated technique for the verification of finite-state systems that is widely used in practice.
In model checking, a model M is verified against a specification $\varphi$, exhaustively checking that the tree of all computations of M satisfies $\varphi$.
When $\varphi$ fails to hold in M, the negative result is accompanied
by a counterexample: a computation in M that demonstrates the failure.
State-of-the-art model checkers apply Binary Decision Diagrams (BDDs) as well as satisfiability solvers for this task. However, both methods suffer from the state explosion problem, which restricts the application of model checking to only modestly sized systems. The importance of model checking makes it worthwhile to explore alternative technologies, in the hope of broadening the applicability of the technique to a wider class of systems.
Description Logic (DL) is a family of knowledge representation formalisms based on decidable fragments of first-order logic.
DL is used mainly for designing ontologies in information systems. In recent years several DL reasoners have been developed, demonstrating an impressive capability to cope with very large ontologies.
This work consists of two parts. In the first, we harness the growing power of DL reasoners to solve model checking problems.
We show how DL can serve as a natural setting for representing and solving a model checking problem, and present a variety of
encodings that translate such problems into consistency queries in DL.
Experimental results, using the Description Logic reasoner FaCT++, demonstrate that for some systems and properties, our method can
outperform existing ones.
In the second part we approach a different aspect of model checking. When a specification fails to hold in a model and a counterexample is presented to the user, the counterexample may itself be complex and difficult to understand. We propose an automatic technique to find the computation steps and their associated variable values that are of particular importance in generating the counterexample. We use the notion of causality to formally define a set
of causes for the failure of the specification on the given counterexample. We give a linear-time algorithm to detect
the causes, and we demonstrate how these causes can be presented to the user as a visual explanation of the failure.
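To make the counterfactual flavour of such causes concrete, the following Python sketch performs a simplified cause-detection pass over a counterexample for an invariant property; the trace encoding, the predicate interface, and the single-flip criterion are illustrative assumptions, not the thesis's exact definitions.

```python
# Illustrative sketch (not the thesis's exact algorithm): given a counterexample
# trace for an invariant property "always p", mark as a cause every (step, variable)
# pair whose value, if flipped in isolation, would make p hold at the failing step.
def find_causes(trace, p):
    """trace: list of dicts {var_name: bool}; p: predicate over one state."""
    causes = []
    for step, state in enumerate(trace):
        if p(state):
            continue                      # this step does not exhibit the failure
        for var, value in state.items():
            flipped = dict(state)
            flipped[var] = not value
            if p(flipped):                # flipping this value repairs p here
                causes.append((step, var))
    return causes

# Example: p requires "req implies ack"; the trace violates it at step 1.
trace = [{"req": False, "ack": False},
         {"req": True,  "ack": False},
         {"req": True,  "ack": True}]
print(find_causes(trace, lambda s: (not s["req"]) or s["ack"]))
# -> [(1, 'req'), (1, 'ack')]
```

The pass visits each trace position and each variable once, so its running time is linear in the size of the counterexample.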
|
42 |
Mapping Template Semantics to SMV. Lu, Yun. January 2004.
Template semantics is a template-based approach to describing the semantics of model-based notations, where a pre-defined template captures the notations' common semantics and parameters specify the notations' distinct semantics. In this thesis, we investigate using template semantics to parameterize the translation from a model-based notation to the input language of the SMV family of model checkers. We describe a fully automated translator that takes as input a specification written in template-semantics syntax and a set of template parameters encoding the specification's semantics, and generates an SMV model of the specification. The result is a parameterized technique for model checking specifications written in a variety of notations. Our work also shows how to represent complex composition operators, such as rendezvous synchronization, in the SMV language, which has no matching language construct.
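As a rough illustration of the kind of output such a translator might produce, the sketch below emits one common guard-based encoding of a two-process rendezvous in SMV; the module layout, state names, and the encoding itself are illustrative assumptions rather than the translation rules used in the thesis.

```python
# Illustrative sketch only: one common way to mimic rendezvous synchronization in
# SMV, which has no built-in rendezvous construct, is a shared guard that is true
# exactly when both processes sit at their rendezvous points, so both take the
# synchronizing step in the same transition.
def rendezvous_smv(p1="loc1", p2="loc2"):
    return f"""MODULE main
VAR
  {p1} : {{idle, ready, done}};
  {p2} : {{idle, ready, done}};
DEFINE
  sync := ({p1} = ready) & ({p2} = ready);
ASSIGN
  init({p1}) := idle;
  init({p2}) := idle;
  next({p1}) := case {p1} = idle : ready; sync : done; TRUE : {p1}; esac;
  next({p2}) := case {p2} = idle : ready; sync : done; TRUE : {p2}; esac;
"""

print(rendezvous_smv())
```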
|
44 |
Using Software Model Checking for Software Certification. Taleghani, Ali. January 2010.
Software certification is defined as the process of independently confirming that a system or component complies with its specified requirements and is acceptable for use. It consists of the following steps: (1) the software producer subjects her software to rigorous testing and submits for certification, among other documents, evidence that the software has been thoroughly verified, and (2) the certifier evaluates the completeness of the verification and confirms that the software meets its specifications. The certification process is typically a manual evaluation of the thousands of pages of documents that the software producer submits. Moreover, most current certification techniques focus on certifying testing results, even though formal methods are increasingly being used to verify software. Model checking is a formal verification method that systematically explores the entire execution state space of a software program to ensure that a property is satisfied in every program state.
As the field of model checking matures, there is a growing interest in its use for verification. In fact, several industrial-sized software projects have used model checking for verification, and there has been an increased push for techniques, preferably automated, to certify model checking results. Motivated by these challenges in certification, we have developed a set of automated techniques to certify model-checking results.
One technique, called search-carrying code (SCC), uses information collected by a model checker during the verification of a program to speed up the certification of that program. In SCC, the software producer's model checker performs an exhaustive search of a program's state space and creates a search script that acts as a certificate of verification. The certifier's model checker uses the search script to partition its search task into a number of smaller, roughly balanced tasks that can be distributed to parallel model checkers, thereby using parallelization to speed up certification.
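The partitioning step can be pictured with a small Python sketch; the representation of the search script as (DFS prefix, estimated subtree size) pairs is a hypothetical simplification, not the thesis's actual script format.

```python
# Sketch of SCC-style task partitioning under an assumed (hypothetical) script
# representation: the producer's model checker records DFS prefixes together with
# an estimate of the subtree below each prefix; the certifier greedily assigns
# prefixes to k parallel checkers so that estimated work is roughly balanced.
import heapq

def partition_script(script, k):
    """script: list of (dfs_prefix, estimated_size); returns k task lists."""
    tasks = [[] for _ in range(k)]
    heap = [(0, i) for i in range(k)]          # (assigned work, task index)
    heapq.heapify(heap)
    for prefix, size in sorted(script, key=lambda e: -e[1]):
        work, i = heapq.heappop(heap)          # least-loaded task so far
        tasks[i].append(prefix)
        heapq.heappush(heap, (work + size, i))
    return tasks

script = [("a.b", 120), ("a.c", 40), ("d", 90), ("e.f.g", 60), ("e.h", 30)]
for i, t in enumerate(partition_script(script, 2)):
    print(f"certifier {i}: {t}")
```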
When memory resources are limited, the producer's model checker can reduce its memory requirements by caching only a subset of the model-checking-search results. Caching increases the likelihood that an SCC verification task runs to completion and produces a search script that represents the program's entire state space. The downside of caching is that it can result in an increase in search time. We introduce cost-based caching, which achieves an exhaustive search faster than existing caching techniques.
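A rough sketch of the caching idea, under our own assumption (not the thesis's cost model) that each visited state carries an estimated re-exploration cost and the cheapest-to-recompute states are evicted first:

```python
# Purely illustrative sketch of cost-based caching of visited states: when the
# visited-state cache is full, evict the state that looks cheapest to re-explore
# if it is reached again. The cost metric is a hypothetical stand-in.
class CostBasedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cost = {}                       # state -> estimated re-exploration cost

    def insert(self, state, est_cost):
        if state in self.cost:
            return
        if len(self.cost) >= self.capacity:
            cheapest = min(self.cost, key=self.cost.get)
            del self.cost[cheapest]          # forgetting it may cause re-exploration
        self.cost[state] = est_cost

    def __contains__(self, state):
        return state in self.cost
```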
Finally, for cases when an exhaustive search is not possible, we present a novel method for estimating the state-space coverage of a partial model checking run. The coverage estimation can help the certifier to determine whether the partial model-checking results are adequate for certification.
|
45 |
Flexible and efficient reliability in memory systems. Yoon, Doe Hyun. 22 June 2011.
Future computing platforms will increasingly demand more stringent memory resiliency mechanisms due to shrinking memory cell size, reduced error margins, higher capacity, and higher reliability expectations. Traditional mechanisms, which apply error checking and correcting (ECC) codes uniformly across all memory locations, are inefficient: uniform protection dedicates resources to redundant information, demands a higher cost for stronger protection, and imposes a fixed (worst-case) error tolerance level and a fixed access granularity.
The design of modern computing platforms is a multi-objective optimization, balancing performance, reliability, and many other parameters within a constrained power budget. If resiliency mechanisms consume too many resources, we lose an opportunity to improve performance. Hence, it is important and necessary to enable more efficient and flexible memory resiliency mechanisms.
This dissertation develops techniques that enable efficient, adaptive, and dynamically tunable memory resiliency mechanisms.
First, we develop two-tiered protection, apply it to the last-level cache, and present Memory Mapped ECC (MME) and ECC FIFO. Two-tiered protection provides low-cost error detection or lightweight correction for common-case read operations, while the overhead of uncommon-case error correction is off-loaded to the main-memory namespace. MME and ECC FIFO use different schemes for managing redundant information in main memory. Both achieve a 15-25% reduction in area and a 9-18% reduction in power consumption of the last-level cache, while degrading performance by only 0.7% on average.
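As a rough illustration of the two-tiered read path (not the actual MME or ECC FIFO designs), the following Python sketch uses simple parity as a stand-in tier-1 detector and consults tier-2 information held in a main-memory table only when parity fails; the data layout and the correction callback are assumptions for illustration.

```python
# Conceptual sketch of a two-tiered read: tier 1 keeps only cheap per-line error
# detection, while tier-2 correction information lives in a main-memory region
# and is consulted only when tier 1 flags a problem.
def parity(bits):
    return sum(bits) % 2

def read_line(line_id, cache, tier2_in_memory, correct_with_tier2):
    """cache: line_id -> (data_bits, stored_parity); tier2_in_memory: line_id -> ECC."""
    data, stored_parity = cache[line_id]
    if parity(data) == stored_parity:
        return data                           # common case: no tier-2 access
    tier2 = tier2_in_memory[line_id]          # uncommon case: fetch strong ECC
    return correct_with_tier2(data, tier2)    # hypothetical correction routine
```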
Then, we apply two-tiered protection to main memory and augment the virtual memory interface to dynamically adapt error tolerance levels according to user, system, and environmental needs. This mechanism, Virtualized ECC (V-ECC), improves system energy efficiency by 12% and degrades performance by only 1-2% for chipkill-correct level protection. V-ECC also supports ECC in a system with no dedicated storage for redundant information.
Lastly, we propose the adaptive granularity memory system (AGMS), which allows different access granularities while supporting ECC. By not wasting off-chip bandwidth on transferring unnecessary data, AGMS achieves higher throughput (by 44%) and power efficiency (by 46%) in a 4-core CMP system. Furthermore, AGMS will provide further gains in future systems, where off-chip bandwidth will be comparatively scarce.
|
47 |
Redundancy-aware Electromigration Checking for Mesh Power Grids. Chatterjee, Sandeep. 21 November 2013.
Electromigration (EM) is re-emerging as a significant problem in modern integrated circuits (ICs). Especially in power grids, due to shrinking wire widths and increasing current densities, there is little or no margin left between the predicted EM stress and that allowed by the EM design rules. Statistical Electromigration Budgeting estimates the reliability of the grid by treating it as a series system. However, a power grid, with its many parallel paths, has much inherent redundancy. In this work, we propose a new model to estimate the MTF (mean time to failure) and reliability of the power grid under the influence of EM, which accounts for these redundancies. To implement the mesh model, we also develop a framework to estimate the change in the failure statistics of an interconnect as its effective EM current varies. The results indicate that the series model gives a pessimistic estimate of power grid MTF by a factor of 3-4.
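The contrast between the series assumption and a redundancy-aware view can be illustrated with a small Monte Carlo experiment; the lognormal failure times and the "grid survives until k branches have failed" criterion below are illustrative assumptions, not the thesis's mesh model.

```python
# Monte Carlo sketch: a series system fails at the first interconnect failure,
# while a redundancy-aware view lets parallel paths absorb early failures.
# Lognormal times-to-failure are the usual EM assumption; parameters are arbitrary.
import random, statistics

def sample_mttf(n_wires=50, k_redundant=3, trials=2000, mu=3.0, sigma=0.5):
    series, mesh = [], []
    for _ in range(trials):
        ttf = sorted(random.lognormvariate(mu, sigma) for _ in range(n_wires))
        series.append(ttf[0])                 # first failure kills a series system
        mesh.append(ttf[k_redundant - 1])     # redundancy absorbs early failures
    return statistics.mean(series), statistics.mean(mesh)

s, m = sample_mttf()
print(f"series MTTF estimate: {s:.1f}, redundancy-aware estimate: {m:.1f}")
```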
|
49 |
A methodology for detecting transient faults in parallel applications on multicore clusters. Montezanti, Diego Miguel. January 2014.
The increase in integration scale, aimed at improving the performance of current processors, together with the growth of computing systems, has made reliability a relevant concern. In particular, the growing vulnerability to transient faults has become critical, because these faults can corrupt the results of applications.
Historically, transient faults have been a concern in the design of critical systems, such as flight systems or high-availability servers, where the consequences of a fault can be disastrous. Although they are temporary, these faults are capable of altering the behavior of the computing system. Since the year 2000, reports of significant malfunctions in various supercomputers caused by transient faults have become more frequent.
The impact of transient faults becomes more relevant in the context of High Performance Computing (HPC). Even though the mean time between failures (MTBF) is on the order of 2 years for a commercial processor, in a supercomputer with hundreds or thousands of processors cooperating to solve a task, the MTBF decreases as the number of processors grows. This situation is aggravated by the advent of multicore processors and multicore cluster architectures, which incorporate a high degree of hardware-level parallelism. The incidence of transient faults is even greater for long-running applications that handle large volumes of data, given the high cost (in terms of time and resource utilization) of relaunching the execution from the beginning when incorrect results are obtained due to the occurrence of a fault.
These factors justify the need to develop specific strategies to improve reliability in HPC systems; in this regard, it is crucial to be able to detect so-called silent faults, which alter application results but are not intercepted by the operating system or any other software layer of the system, and therefore do not cause the execution to terminate abruptly.
In this context, this work analyzes a distributed, software-based methodology, designed for scientific parallel applications that use message passing, which is capable of detecting transient faults by validating the contents of the messages that are about to be sent to another process of the application. This previously published methodology tackles a problem not covered by existing proposals: it detects transient faults that allow execution to continue but are capable of corrupting the final results, improving system reliability and reducing the time after which the application can be relaunched, which is especially useful in long-running executions.
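A minimal sketch of the detection idea in Python: outgoing message contents are produced redundantly and compared before the send, so a silent fault that corrupted the payload is flagged instead of propagating. The replication scheme, the hashing, and the send wrapper are illustrative assumptions, not the published methodology's exact mechanism.

```python
# Illustrative sketch: validate a message's contents by recomputing them and
# comparing digests before the message leaves the process.
import hashlib, pickle

class SilentFaultDetected(Exception):
    pass

def digest(payload):
    return hashlib.sha256(pickle.dumps(payload)).hexdigest()

def validated_send(compute_payload, send):
    """compute_payload: re-executable function producing the outgoing message."""
    first = compute_payload()
    second = compute_payload()               # redundant execution of the producer
    if digest(first) != digest(second):      # contents disagree: silent fault
        raise SilentFaultDetected("message contents differ between replicas")
    send(first)

# Hypothetical usage: validated_send(lambda: kernel(chunk), comm.send)
```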
|
50 |
Guided random-walk based model checking. Bui, Hoai Thang, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2009.
The ever-increasing use of computer systems in society brings emergent challenges to companies and system designers. The reliability of software and hardware can be financially critical, and lives can depend on it. The growth in size and complexity of software, and increasing concurrency, compounds the problem. The potential for errors is greater than ever before, and the stakes are higher than ever before.
Formal methods, particularly model checking, attempt to prove mathematically that a model of the behaviour of a product is correct with respect to certain properties. Certain errors can therefore be proven never to occur in the model. This approach has tremendous potential in system development to provide guarantees of correctness. Unfortunately, in practice, model checking cannot handle the enormous sizes of the models of real-world systems. The reason is that the approach requires an exhaustive search of the model to be conducted. While there are exceptions, in general model checkers are said not to scale well.
In this thesis, we deal with this scaling issue by using a guiding technique that avoids searching areas of the model which are unlikely to contain errors. This technique is based on a process of model abstraction, in which a new, much smaller model is generated that retains certain important model information but discards the rest. This new model is called a heuristic. While model checking using a heuristic as a guide can be extremely effective, in the worst case (when the guide is of no help) it performs the same as exhaustive search, and hence it also does not scale well in all cases.
A second technique is employed to deal with the scaling issue. This technique is based on the concept of random walks. A random walk is simply a 'walk' through the model of the system, carried out by selecting states in the model randomly. Such a walk may encounter an error, or it may not. It is a non-exhaustive technique in the sense that only a manageable number of walks are carried out before the search is terminated. This technique cannot replace conventional model checking, as it can never guarantee the correctness of a model. It can, however, be a very useful debugging tool because it scales well. From this point of view, it relieves the system designer from the difficult task of dealing with the problem of size in model checking. Using random walks, the effort goes instead into looking for errors.
The effectiveness of model checking can be greatly enhanced if the above two techniques are combined: a random walk is used to search for errors, but the walk is guided by a heuristic. This, in a nutshell, is the focus of this work. We should emphasise that the random-walk approach uses the same formal model as model checking. Furthermore, the same heuristic technique is used to guide the random walk as in guided model checking. Together, guidance and random walks are shown in this work to result in vastly improved performance over conventional model checking. Verification has been sacrificed, of course, but the new technique is able to find errors far more quickly and deal with much larger models.
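A minimal Python sketch of the combined idea is given below; it assumes an explicit successor function, a user-chosen abstraction map, and a BFS-based distance heuristic in the abstract model, all of which are illustrative assumptions rather than the thesis's concrete implementation.

```python
# Sketch of heuristic-guided random walk: distances to abstract error states,
# computed by backward BFS in the abstract model, bias the choice of successor.
import random
from collections import deque

def abstract_distances(abs_succ, abs_errors, abs_states):
    """Backward BFS distances in the abstract model to the nearest error state."""
    preds = {}
    for a in abs_states:
        for b in abs_succ(a):
            preds.setdefault(b, set()).add(a)
    dist = {a: 0 for a in abs_errors}
    queue = deque(abs_errors)
    while queue:
        b = queue.popleft()
        for a in preds.get(b, ()):
            if a not in dist:
                dist[a] = dist[b] + 1
                queue.append(a)
    return dist

def guided_walk(init, succ, is_error, alpha, dist, max_steps=10_000):
    """Random walk preferring successors whose abstraction looks closer to an error."""
    state = init
    for _ in range(max_steps):
        if is_error(state):
            return state                          # found a concrete error state
        nxt = succ(state)
        if not nxt:
            state = init                          # dead end: restart the walk
            continue
        weights = [1.0 / (1 + dist.get(alpha(s), len(dist) + 1)) for s in nxt]
        state = random.choices(nxt, weights=weights)[0]
    return None
```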
|