1 | Sequential Equivalence Checking with Efficient Filtering Strategies for Inductive Invariants. Nguyen, Huy (24 May 2011)
Powerful sequential optimization techniques can drastically change the Integrated Circuit (IC) design paradigm. Due to the limited capability of sequential verification tools, however, aggressive sequential optimization is shunned nowadays because there is no efficient way to prove that equivalence is preserved after optimization. Because the number of transistors fitting on a single fixed-size die increases with Moore's law, the problem gets harder over time, and at an exponential rate. It is no surprise that functional verification has become a major bottleneck in the time-to-market of a product; the literature reports that 70% of design time is spent making sure the design is bug-free and operates correctly. One of the core verification tasks in achieving high-quality products is equivalence checking. Essentially, equivalence checking ensures that the optimized product preserves the functionality of the unoptimized model. This is important for industry because products are modified constantly to meet different goals such as low power, high performance, etc. The mainstream approaches to equivalence checking are simulation and formal verification. In the simulation approach, the golden design and the design under verification (DUV) are fed the same input stimuli and are expected to produce identical responses. In case of a discrepancy, traces are generated and the DUV undergoes modifications. With the increase in input pins and state elements in designs, exhaustive simulation becomes infeasible; hence, the completeness of the approach is not guaranteed and it must be accompanied by notions of coverage. Formal verification, on the other hand, relies on mathematical proofs and guarantees completeness over the search space. However, formal verification has problems of its own: it is usually resource intensive, and not every design can be verified after the optimization process. That is, the golden model and the DUV can be so different in structure that modern checkers give inconclusive results. For these reasons, this thesis focuses on improving the strength and the efficiency of sequential equivalence checking (SEC) using a formal approach.
While great strides have been made in verification for combinational circuits, SEC still remains rather rudimentary. Without powerful SEC as a backbone, aggressive sequential synthesis and optimization are often avoided if the optimized design cannot be proved equivalent to the original one. In an attempt to take on the challenges of SEC, we propose two frameworks that successfully determine equivalence for hard-to-verify circuits. The first framework utilizes arbitrary relations between any two nodes within the two sequential circuits in question. The two nodes can reside in the same circuit or across the circuits; likewise, they can be from the same time-frame or across time-frames. The merit of this approach is that it uses the global structure of the circuits to speed up the verification process. The second framework introduces techniques to identify a small but powerful subset of multi-node relations (involving more than two nodes), which help prune the large don't-care search space and result in a successful SEC framework. In contrast with previous approaches, in which an exponential number of multi-node relations are mined and learned, we alleviate the computation cost by selecting far fewer invariants to reach the desired conclusion. Although independent, the two frameworks can be used in sequence to complement each other. Experimental results demonstrate that our frameworks can handle many hard-to-verify cases and show a significant speed-up over previous approaches. / Master of Science
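To make the filtering idea concrete, the following is a minimal, hypothetical C++ sketch of how candidate two-node relations (equivalences and implications) could be collected from bit-parallel random simulation before being handed to an inductive proof engine; the node names, signatures, and overall structure are illustrative assumptions, not the thesis implementation.

```cpp
// Hedged sketch: mining candidate two-node relations (equivalence /
// implication) from bit-parallel random simulation of two sequential
// circuits.  Node values are abstracted as 64-bit signatures; in a real
// flow these would come from simulating the golden design and the DUV
// over several time-frames with the same stimuli, and every surviving
// candidate would still have to be proved by an inductive SAT check.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct NodeSig {
    std::string name;   // node identifier (circuit + time-frame + net)
    uint64_t    sig;    // 64 simulation samples packed into one word
};

int main() {
    // Hypothetical signatures; a real run would fill these from simulation.
    std::vector<NodeSig> nodes = {
        {"golden.t0.n7", 0xF0F0F0F0F0F0F0F0ULL},
        {"duv.t0.m3",    0xF0F0F0F0F0F0F0F0ULL},   // looks equivalent to n7
        {"golden.t1.n9", 0xFFFF0000FFFF0000ULL},
        {"duv.t0.m5",    0xF0F00000F0F00000ULL},   // m5 = 1 seems to imply n9 = 1
    };

    // Pairwise scan: keep only relations that no simulation sample refutes.
    for (size_t i = 0; i < nodes.size(); ++i) {
        for (size_t j = i + 1; j < nodes.size(); ++j) {
            uint64_t a = nodes[i].sig, b = nodes[j].sig;
            if (a == b)
                std::cout << "candidate EQ : " << nodes[i].name
                          << " == " << nodes[j].name << '\n';
            else if ((a & ~b) == 0)   // a=1 never observed with b=0
                std::cout << "candidate IMP: " << nodes[i].name
                          << " -> " << nodes[j].name << '\n';
            else if ((b & ~a) == 0)
                std::cout << "candidate IMP: " << nodes[j].name
                          << " -> " << nodes[i].name << '\n';
        }
    }
    return 0;
}
```

Any candidate that survives simulation is only a likely invariant; in a full SEC flow it would still have to be proved or refuted, for example by an inductive SAT check over the unrolled designs.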
2 | Dynamic Invariant Generation for Concurrent Programs. Chattopadhyay, Arijit (23 June 2014)
We propose a fully automated and dynamic method for generating likely invariants from multithreaded programs and then leveraging these invariants to infer atomic regions and diagnose concurrency errors in the software code. Although existing methods for dynamic invariant generation perform reasonably well on sequential programs, for multithreaded programs their effectiveness often drops dramatically, both in the number of invariants they can generate and in the likelihood of those invariants being true. We solve this problem by developing a new dynamic invariant generator, which consists of a new LLVM-based code instrumentation tool, an INSPECT-based thread interleaving explorer, and a customized inference engine inside Daikon. We have evaluated the resulting system on public-domain multithreaded C/C++ benchmarks. Our experiments show that the new method is effective in generating high-quality invariants. Furthermore, the state and transition invariants generated by our new method have proved useful both in error diagnosis and in identifying likely atomic regions in the concurrent software code. / Master of Science
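As a rough illustration of the trace-side inference, here is a small, hypothetical C++ sketch that keeps only those candidate invariants over two shared variables that hold in every state observed across several interleavings; the actual system uses LLVM-based instrumentation, the INSPECT explorer, and a customized Daikon engine rather than this hand-rolled filter, and the traces shown are made up.

```cpp
// Hedged sketch: a likely invariant survives only if no observed state,
// under any explored thread interleaving, refutes it.  The states below
// are hypothetical stand-ins for what the instrumentation would record
// at one program point.
#include <iostream>
#include <vector>

struct State { int x; int y; };   // one snapshot of the shared variables

int main() {
    // States observed at the same program point under three interleavings.
    std::vector<std::vector<State>> interleavings = {
        {{0, 0}, {1, 1}, {2, 2}},
        {{0, 0}, {2, 2}},
        {{1, 1}, {3, 3}},
    };

    bool eq_holds = true;    // candidate: x == y
    bool le_holds = true;    // candidate: x <= y
    bool nonneg   = true;    // candidate: x >= 0

    for (const auto& trace : interleavings)
        for (const auto& s : trace) {
            eq_holds = eq_holds && (s.x == s.y);
            le_holds = le_holds && (s.x <= s.y);
            nonneg   = nonneg   && (s.x >= 0);
        }

    // Report the candidates that no interleaving refuted.
    if (eq_holds) std::cout << "likely invariant: x == y\n";
    if (le_holds) std::cout << "likely invariant: x <= y\n";
    if (nonneg)   std::cout << "likely invariant: x >= 0\n";
    return 0;
}
```

The point of exploring multiple interleavings before filtering is that an invariant which happens to hold along one schedule can easily be violated along another, which is exactly why naive dynamic invariant generation loses precision on multithreaded code.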
3 | Dynamic pointer tracking and its applications. Zhang, Kun (12 January 2010)
Due to the significant limitations of static analysis and the dynamic nature of pointers in weakly typed programming languages like C and C++, the points-to sets obtained at compile time are quite conservative. Most static pointer analysis methods trade precision for analysis speed. The methods that complete the analysis in a reasonable amount of time are often context- and/or flow-insensitive, while methods that are context, flow, and field sensitive have to perform whole-program inter-procedural analysis and do not scale with program size. A large class of optimizations, including instruction prefetching, control and data speculation, removal of redundant load/store instructions, instruction scheduling, and memory disambiguation, suffers from the imprecise and conservative points-to sets computed statically. One could possibly live without these optimizations, but in domains involving memory security and safety, the lack of precise points-to sets can jeopardize security and safety. In particular, the lack of dynamic points-to sets drastically reduces the ability to reason about a program's memory access behavior, so illegal memory accesses can go unchecked, leading to bugs as well as security holes. On the other hand, points-to sets can also be very useful in other domains such as heap shape analysis and garbage collection. Knowledge of precise points-to sets is therefore becoming very important, but it has received little attention so far beyond a few studies, which have shown that pointers exhibit very interesting behaviors during execution. How to track such behaviors dynamically and benefit from them is the topic of this research.
In this work, we propose a technique to compute precise points-to sets through dynamic pointer tracking. First, the compiler performs pointer analysis to obtain the static points-to sets. Then, the compiler analyzes the program and inserts the instructions necessary to refine those sets. At runtime, the inserted instructions automatically update the points-to sets. Dynamic pointer tracking in software can be expensive and can be a barrier to the practicality of such methods, so several optimizations are proposed, including redundant update removal, post-loop update, special-pattern-driven update removal, pointer-initialization update removal, update propagation, invariant removal, and on-demand update optimization. Our experimental results demonstrate that our mechanism is able to compute the points-to sets dynamically with tolerable overheads. Finally, memory protection and garbage collection are presented as consumers of dynamic pointer tracking to illustrate its importance. In particular, it is shown how different memory properties can be easily tracked using the dynamic points-to sets, opening up new possibilities.
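For illustration, the following is a minimal, hypothetical C++ sketch of the runtime side of dynamic pointer tracking: a global table maps each tracked pointer to its current target, and update calls of the kind the compiler pass would insert after pointer assignments keep that table exact. The helper names (track_update, dump_points_to) and the table representation are assumptions of this sketch, not the thesis infrastructure.

```cpp
// Hedged sketch: manually "instrumented" pointer assignments that keep a
// runtime points-to table exact, mimicking the calls a compiler pass
// would insert automatically.
#include <iostream>
#include <map>
#include <string>

static std::map<std::string, std::string> points_to;  // pointer name -> current target

// The instrumentation pass would emit a call like this after "p = &x;".
void track_update(const std::string& ptr, const std::string& target) {
    points_to[ptr] = target;
}

void dump_points_to() {
    for (const auto& [ptr, target] : points_to)
        std::cout << ptr << " -> " << target << '\n';
}

int main() {
    int x = 1, y = 2;
    int* p = &x;  track_update("p", "x");   // inserted update
    int* q = p;   track_update("q", "x");   // copy propagates the target
    p = &y;       track_update("p", "y");   // reassignment refines the set

    // At this point the dynamic points-to sets are exact singletons,
    // whereas a flow-insensitive static analysis would report {x, y} for p.
    dump_points_to();
    std::cout << *p + *q << '\n';
    return 0;
}
```

Even this toy version shows where the cost comes from: every pointer assignment gains an extra runtime call, which is why the update-removal and propagation optimizations described above matter for practicality.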