61. Bayesian Model Checking Methods for Dichotomous Item Response Theory and Testlet Models. Combs, Adam, 02 April 2014.
No description available.
62. Trojan Detection in Hardware Designs. Raju, Akhilesh, January 2017.
No description available.
63. Static Learning for Problems in VLSI Test and Verification. Syal, Manan, 01 July 2005.
Static learning in the form of logic implications captures Boolean relationships between various gates in a circuit. In the past, logic implications have been applied in several areas of electronic design automation (EDA), including test-pattern generation, logic and fault simulation, fault diagnosis, and logic optimization. While logic implications have assisted in solving several EDA problems, their usefulness has not been fully explored. We believe that logic implications have not been carefully analyzed in the past, and this lack of thorough investigation has limited their applicability in solving hard EDA problems. In this dissertation, we offer deeper insights into the Boolean relationships exhibited in a circuit, and present techniques to extract their full potential in solving two hard problems in test and verification: (1) efficient identification of sequentially untestable stuck-at faults, and (2) equivalence checking of sequential circuits. Additionally, we define a new concept called multi-cycle path delay faults (M-pdfs) for latch-based designs with multiple clock domains, and propose an implication-based methodology for the identification of untestable M-pdfs in such designs.
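To make the notion of a logic implication concrete, the following Python sketch derives the values forced by a single signal assignment in a toy AND/OR netlist, using forward propagation and the contrapositive (backward) rules; the netlist and the propagation routine are illustrative assumptions, not the dissertation's implication engine.

```python
# Toy netlist: each gate output maps to (gate type, input signals).
# Hypothetical circuit: d = a AND b, e = d OR c.
NETLIST = {
    "d": ("AND", ["a", "b"]),
    "e": ("OR", ["d", "c"]),
}

def implications(netlist, signal, value):
    """Return every (signal, value) pair forced by asserting one signal."""
    known = {signal: value}
    changed = True
    while changed:
        changed = False
        for out, (gtype, ins) in netlist.items():
            ctrl = 0 if gtype == "AND" else 1          # controlling input value
            # Forward: a controlling input value forces the output.
            if any(known.get(i) == ctrl for i in ins) and known.get(out) != ctrl:
                known[out] = ctrl
                changed = True
            # Forward: all inputs known also forces the output.
            if all(i in known for i in ins) and out not in known:
                vals = [known[i] for i in ins]
                known[out] = int(all(vals)) if gtype == "AND" else int(any(vals))
                changed = True
            # Backward (contrapositive): a non-controlled output forces all inputs.
            if known.get(out) == 1 - ctrl:
                for i in ins:
                    if known.get(i) != 1 - ctrl:
                        known[i] = 1 - ctrl
                        changed = True
    del known[signal]
    return known

print(implications(NETLIST, "a", 0))   # {'d': 0}: a=0 forces d=0, but not e
print(implications(NETLIST, "e", 0))   # {'d': 0, 'c': 0}: e=0 forces both OR inputs to 0
```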
One of the main bottlenecks in the efficiency of test-pattern generation (TPG) is the presence of untestable faults in a design. State-of-the-art automatic test pattern generators (ATPGs) spend a lot of effort (in both time and memory) targeting untestable faults before aborting on them or, given enough computational resources, eventually identifying them as untestable. In either case, TPG is considerably slowed down by the presence of untestable faults, so efficient methods to identify such faults are desired. In this dissertation, we discuss a number of solutions that we have developed for untestable fault identification. The techniques that we propose are fault-independent and explore properties associated with logic implications to derive conclusions about untestable faults. Experimental results for benchmark circuits show that our techniques identify significantly more untestable faults, at low memory and computational overhead.
The second related problem that we address in this dissertation is determining the equivalence of sequential circuits. During the design phase, hardware goes through several stages of optimization (for area, speed, power, etc.). Determining the functional correctness of the design after each optimization step by means of exhaustive simulation can be prohibitively expensive. An alternative way to prove the functional correctness of the optimized design is to determine its functional equivalence with respect to some golden model which is known to be functionally correct. Efficient techniques to perform this process, known as equivalence checking, have been investigated in the research community. However, equivalence checking of sequential circuits remains a challenging problem. In an attempt to solve this problem, we propose a Boolean satisfiability (SAT) based framework that utilizes logic implications for sequential equivalence checking.
Finally, we define a new concept called multi-cycle path-delay faults (M-pdfs). Traditionally, path delay faults have been analyzed for flip-flop based designs over the boundary of a single clock cycle. However, path delay faults may span multiple clock cycles, and a technique is needed to model and analyze such faults. This is especially important for latch-based designs with multiple clock domains, because the problem of identifying untestable faults is more complex in such design environments. In this dissertation, we propose a three-step methodology to identify untestable M-pdfs in latch-based designs with multiple clocks using logic implications. / Ph. D.
64. The Fixpoint Checking Problem: An Abstraction Refinement Perspective. Ganty, Pierre P., 28 September 2007.
<P align="justify">Model-checking is an automated technique which aims at verifying properties of computer systems. A model-checker is fed with a model of the system (which capture all its possible behaviors) and a property to verify on this model. Both are given by a convenient mathematical formalism like, for instance, a transition system for the model and a temporal logic formula for the property.</P>
<P align="justify">For several reasons (the model-checking is undecidable for this class of model or the model-checking needs too much resources for this model) model-checking may not be applicable. For safety properties (which basically says "nothing bad happen"), a solution to this problem uses a simpler model for which model-checkers might terminate without too much resources. This simpler model, called the abstract model, over-approximates the behaviors of the concrete model. However the abstract model might be too imprecise. In fact, if the property is true on the abstract model, the same holds on the concrete. On the contrary, when the abstract model violates the property, either the violation is reproducible on the concrete model and so we found an error; or it is not reproducible and so the model-checker is said to be inconclusive. Inconclusiveness stems from the over-approximation of the concrete model by the abstract model. So a precise model yields the model-checker to conclude, but precision comes generally with an increased computational cost.</P>
<P align="justify">Recently, a lot of work has been done to define abstraction refinement algorithms. Those algorithms compute automatically abstract models which are refined as long as the model-checker is inconclusive. In the thesis, we give a new abstraction refinement algorithm which applies for safety properties. We compare our algorithm with previous attempts to build abstract models automatically and show, using formal proofs that our approach has several advantages. We also give several extensions of our algorithm which allow to integrate existing techniques used in model-checking such as acceleration techniques.</P>
<P align="justify">Following a rigorous methodology we then instantiate our algorithm for a variety of models ranging from finite state transition systems to infinite state transition systems. For each of those models we prove the instantiated algorithm terminates and provide encouraging preliminary experimental results.</P>
<P align="justify">Le model-checking est une technique automatisée qui vise à vérifier des propriétés sur des systèmes informatiques. Les données passées au model-checker sont le modèle du système (qui en capture tous les comportements possibles) et la propriété à vérifier. Les deux sont donnés dans un formalisme mathématique adéquat tel qu'un système de transition pour le modèle et une formule de logique temporelle pour la propriété.</P>
<P align="justify">Pour diverses raisons (le model-checking est indécidable pour cette classe de modèle ou le model-checking nécessite trop de ressources pour ce modèle) le model-checking peut être inapplicable. Pour des propriétés de sûreté (qui disent dans l'ensemble "il ne se produit rien d'incorrect"), une solution à ce problème recourt à un modèle simplifié pour lequel le model-checker peut terminer sans trop de ressources. Ce modèle simplifié, appelé modèle abstrait, surapproxime les comportements du modèle concret. Le modèle abstrait peut cependant être trop imprécis. En effet, si la propriété est vraie sur le modèle abstrait alors elle l'est aussi sur le modèle concret. En revanche, lorsque le modèle abstrait enfreint la propriété : soit l'infraction peut être reproduite sur le modèle concret et alors nous avons trouvé une erreur ; soit l'infraction ne peut être reproduite et dans ce cas le model-checker est dit non conclusif. Ceci provient de la surapproximation du modèle concret faite par le modèle abstrait. Un modèle précis aboutit donc à un model-checking conclusif mais son coût augmente avec sa précision.</P>
<P align="justify">Récemment, différents algorithmes d'abstraction raffinement ont été proposés. Ces algorithmes calculent automatiquement des modèles abstraits qui sont progressivement raffinés jusqu'à ce que leur model-checking soit conclusif. Dans la thèse, nous définissons un nouvel algorithme d'abstraction raffinement pour les propriétés de sûreté. Nous comparons notre algorithme avec les algorithmes d'abstraction raffinement antérieurs. A l'aide de preuves formelles, nous montrons les avantages de notre approche. Par ailleurs, nous définissons des extensions de l'algorithme qui intègrent d'autres techniques utilisées en model-checking comme les techniques d'accélérations.</P>
<P align="justify">Suivant une méthodologie rigoureuse, nous instancions ensuite notre algorithme pour une variété de modèles allant des systèmes de transitions finis aux systèmes de transitions infinis. Pour chacun des modèles nous établissons la terminaison de l'algorithme instancié et donnons des résultats expérimentaux préliminaires encourageants.</P>
65. Verifying Absence of ∞ Loops in Parameterized Protocols. Saksena, Mayank, January 2008.
The complex behavior of computer systems offers many challenges for formal verification. The analysis quickly becomes difficult as the number of participating processes increases.
A parameterized system is a family of systems parameterized on a number n, typically representing the number of participating processes. The uniform verification problem (to check whether a property holds for each instance) is an infinite-state problem. The automated analysis of parameterized and infinite-state systems has been the subject of research over the last 15–20 years. Much of the work has focused on safety properties. Progress in verification of liveness properties has been slow, as it is more difficult in general.
In this thesis, we consider verification of parameterized and infinite-state systems, with an emphasis on liveness, in the verification framework called regular model checking (RMC). In RMC, states are represented as words, sets of states as regular expressions, and the transition relation as a regular relation.
We extend the automata-theoretic approach to RMC. We define a specification logic sufficiently strong to specify systems representable using RMC, and linear temporal logic properties of such systems, and provide an automatic translation from a specification into an analyzable model.
We develop acceleration techniques for RMC which allow more uniform and automatic verification than before, with greater power. Using these techniques, we succeed in verifying safety and liveness properties of parameterized protocols from the literature.
We present a novel reachability-based verification method for liveness, in a general setting. We implement the method for RMC, with promising results.
Finally, we develop a framework for the verification of dynamic networks based on graph transformation, which generalizes the systems representable in RMC. In this framework we verify the latest version of the DYMO routing protocol, currently being considered for standardization by the IETF.
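To illustrate what an RMC-style encoding looks like, the sketch below writes down a token-passing ring in that spirit: configurations are words over {t, n}, the initial and bad sets are regular expressions, and the transition relation is the rewrite rule tn -> nt. The protocol and property are invented for illustration, and the check only enumerates small instances explicitly, whereas RMC manipulates these sets as automata and uses acceleration to cover all instance sizes at once.

```python
# Regular encoding of a token-passing ring, checked naively on small instances.
import re

INIT_RE = re.compile(r"^tn*$")                 # initial configurations: t n^k
BAD_RE = re.compile(r"^[tn]*t[tn]*t[tn]*$")    # two or more tokens present

def step(config):
    """Apply the regular relation 'tn -> nt' at every position."""
    succs = set()
    for i in range(len(config) - 1):
        if config[i:i + 2] == "tn":
            succs.add(config[:i] + "nt" + config[i + 2:])
    return succs

def check_instance(n):
    """Explicit-state safety check for the length-n instance."""
    init = "t" + "n" * (n - 1)
    assert INIT_RE.match(init)
    seen, frontier = {init}, [init]
    while frontier:
        c = frontier.pop()
        if BAD_RE.match(c):
            return False
        for s in step(c) - seen:
            seen.add(s)
            frontier.append(s)
    return True

print(all(check_instance(n) for n in range(2, 8)))   # True for these instances
```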
66. Temporal logic encodings for SAT-based bounded model checking. Sheridan, Daniel, January 2006.
Since its introduction in 1999, bounded model checking (BMC) has quickly become a serious and indispensable tool for the formal verification of hardware designs and, more recently, software. By leveraging propositional satisfiability (SAT) solvers, BMC overcomes some of the shortcomings of more conventional model checking methods. In model checking we automatically verify whether a state transition system (STS) describing a design has some property, commonly expressed in linear temporal logic (LTL). BMC is the restriction to only checking the looping and non-looping runs of the system that have bounded descriptions. The conventional BMC approach is to translate the STS runs and LTL formulae into propositional logic and then conjunctive normal form (CNF). This CNF expression is then checked by a SAT solver. In this thesis we study the effect on the performance of BMC of changing the translation to propositional logic. One novelty is to use a normal form for LTL which originates in resolution theorem provers. We introduce the normal form conversion early on in the encoding process and examine the simplifications that it brings to the generation of propositional logic. We further enhance the encoding by specialising the normal form to take advantage of the types of runs peculiar to BMC. We also improve the conversion from propositional logic to CNF. We investigate the behaviour of the new encodings by a series of detailed experimental comparisons using both hand-crafted and industrial benchmarks from a variety of sources. These reveal that the new normal form based encodings can reduce the solving time by a half in most cases, and up to an order of magnitude in some cases, the size of the improvement corresponding to the complexity of the LTL expression. We also compare our method to the popular automata-based methods for model checking and BMC.
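As a minimal illustration of the unrolling described above (not the encodings studied in the thesis), the sketch below builds the bound-k BMC check for a toy 2-bit counter and the safety property "the counter never reaches 3", and evaluates it by brute-force enumeration of assignments; a real flow would convert the formula to CNF and hand it to a SAT solver. The counter and property are invented for the example.

```python
# Toy BMC: unroll a 2-bit counter k steps and look for a run reaching value 3.
from itertools import product

def initial(state):
    b1, b0 = state
    return not b1 and not b0            # counter starts at 0

def trans(s, t):                        # counter increments modulo 4
    (b1, b0), (c1, c0) = s, t
    return c0 == (not b0) and c1 == (b1 != b0)

def bad(state):
    b1, b0 = state
    return b1 and b0                    # value 3 reached

def bmc(k):
    """Does some k-step run from an initial state visit a bad state?"""
    for bits in product([False, True], repeat=2 * (k + 1)):
        states = [bits[2 * i: 2 * i + 2] for i in range(k + 1)]
        if not initial(states[0]):
            continue
        if all(trans(states[i], states[i + 1]) for i in range(k)):
            if any(bad(s) for s in states):
                return states           # counterexample trace
    return None

print(bmc(2))   # None: 3 is not reachable within 2 steps
print(bmc(3))   # the trace 0, 1, 2, 3 witnessing the violation at bound 3
```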
67. Automatická kontrola překladu / Automatic Checking of Translation. Šimlovič, Juraj, January 2011.
Translation memories are becoming more and more popular with professional translators nowadays, especially in the fields of software localization and translation of technical and official documents. Although commercial systems that employ translation memories provide some limited capabilities for automatic checking of translations, these are mostly of a simple search-and-replace type, and none of them provides reasonable means of applying Czech morphology while checking. Professional translators could benefit from an automatic tool which would provide more advanced rule-based checking capabilities, taking Czech and even English morphology into account. Checking not only for correct use of terminology, but also for illicit translations and use of forbidden terms, would be useful. This thesis investigates the types of mistakes translators tend to make. A review of existing solutions for automatic translation checking for different languages is provided. An application is then designed and developed which attempts to search for some of the most frequent mistakes made in translations into Czech, taking morphology into account while searching.
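A hedged sketch of the kind of rule-based check argued for above is given below: for each source segment it verifies that an approved Czech term appears in the translation and that no forbidden term is used, with a crude prefix match standing in for real Czech morphological analysis. The glossary, the forbidden list and the prefix heuristic are invented for this example and are not the application developed in the thesis.

```python
# Naive terminology and forbidden-term check for English -> Czech segments.
GLOSSARY = {                      # English term -> approved Czech stem(s)
    "file": ["soubor"],
    "folder": ["složk"],          # matches složka, složky, složce, ...
}
FORBIDDEN = ["adresář"]           # e.g. a term the style guide bans for "folder"

def uses_stem(text, stem):
    """Very rough stand-in for morphological matching: prefix match on words."""
    return any(word.lower().startswith(stem) for word in text.split())

def check_segment(source, target):
    problems = []
    for en_term, stems in GLOSSARY.items():
        if en_term in source.lower() and not any(uses_stem(target, s) for s in stems):
            problems.append(f"missing approved translation of '{en_term}'")
    for term in FORBIDDEN:
        if uses_stem(target, term):
            problems.append(f"forbidden term '{term}' used")
    return problems

print(check_segment("Open the folder.", "Otevřete složku."))    # []
print(check_segment("Open the folder.", "Otevřete adresář."))   # two problems reported
```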
68. Teste e verificação formal do comportamento excepcional de programas Java / Testing and formal verification of the exceptional behavior of Java programs. Martins, Alexandre Locci, 09 June 2014.
Exception handling structures are extremely common in software developed in modern languages such as Java, and, when exercised, they strongly affect the software's behavior. Despite these two characteristics, the main verification techniques, software testing and formal verification, and the tools associated with them, tend to neglect exceptional behavior. Some of the factors that lead to this neglect are the lack of specification of exceptional behavior at the design level and the consequent implementation of exception handling structures based on the individual judgment of each programmer. This results in expressive parts of the code not being considered during verification and, consequently, in the possibility that errors related both to the handling structures themselves and to the code linked to them go undetected. To address this problem, we propose a technique, based on model checking, that automates the process of exercising exceptional paths. This makes it possible to observe the behavior of a piece of software when an exception occurs. With this technique, we intend to support applying to the paths that represent a program's exceptional behavior the same error detection techniques that are applied to the paths representing its normal behavior, and thereby improve the quality of software development.
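The idea of systematically exercising exceptional paths can be sketched as follows: represent a method as a control-flow graph with normal and exception edges, and enumerate the paths that take at least one exception edge. The tiny graph (a read that may raise an exception, with a single handler) and the enumeration are invented for illustration; the thesis drives a model checker over real Java programs rather than a hand-written graph like this one.

```python
# Toy control-flow graph: node -> list of (successor, edge kind).
CFG = {
    "entry": [("open", "normal")],
    "open": [("read", "normal"), ("catch", "exception")],
    "read": [("close", "normal"), ("catch", "exception")],
    "catch": [("close", "normal")],
    "close": [("exit", "normal")],
    "exit": [],
}

def exceptional_paths(start="entry", end="exit", bound=10):
    """All acyclic paths from start to end that traverse an exception edge."""
    results = []
    def walk(node, path, has_exc):
        if len(path) > bound:
            return
        if node == end:
            if has_exc:
                results.append(path)
            return
        for succ, kind in CFG[node]:
            if succ not in path:               # keep the sketch acyclic
                walk(succ, path + [succ], has_exc or kind == "exception")
        # (a real model checker explores program states, not just this CFG sketch)
    walk(start, [start], False)
    return results

for p in exceptional_paths():
    print(" -> ".join(p))
# entry -> open -> read -> catch -> close -> exit
# entry -> open -> catch -> close -> exit
```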
69. New Algorithms and Data Structures for the Emptiness Problem of Alternating Automata / Nouveaux algorithmes et structures de données pour le problème du vide des automates alternants. Maquet, Nicolas P. P., 03 March 2011.
This work studies new algorithms and data structures that are useful in the context of program verification. As computers have become more and more ubiquitous in our modern societies, an increasingly large number of computer-based systems are considered safety-critical. Such systems are characterized by the fact that a failure or a bug (computer error in the computing jargon) could potentially cause large damage, whether in loss of life, environmental damage, or economic damage. For safety-critical systems, the industrial software engineering community increasingly calls for using techniques which provide some formal assurance that a certain piece of software is correct.
One of the most successful program verification techniques is model checking, in which programs are typically abstracted by a finite-state machine. After this abstraction step, properties (typically in the form of some temporal logic formula) can be checked against the finite-state abstraction, with the help of automated tools. Alternating automata play an important role in this context, since many temporal logics on words and trees can be efficiently translated into those automata. This property allows for the reduction of model checking to automata-theoretic questions and is called the automata-theoretic approach to model checking. In this work, we provide three novel approaches for the analysis (emptiness checking) of alternating automata over finite and infinite words. First, we build on the successful framework of antichains to devise new algorithms for LTL satisfiability and model checking, using alternating automata. These algorithms combine antichains with reduced ordered binary decision diagrams in order to handle the exponentially large alphabets of the automata generated by the LTL translation. Second, we develop new abstraction and refinement algorithms for alternating automata, which combine the use of antichains with abstract interpretation, in order to handle ever larger instances of alternating automata. Finally, we define a new symbolic data structure, coined lattice-valued binary decision diagrams, that is particularly well suited for encoding the transition functions of alternating automata over symbolic alphabets. All of these works are supported by empirical evaluations that confirm the practical usefulness of our approaches.
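As a hedged aside, the core antichain idea referred to above can be shown in a few lines: when an algorithm explores sets of states (for instance during a subset-style construction for emptiness checking), it only needs to keep the sets that are not subsumed by one another. The sketch below keeps the ⊆-minimal sets; whether minimal or maximal elements are the right choice depends on the direction of the particular algorithm, and the example sets are invented.

```python
class Antichain:
    """Maintains the ⊆-minimal elements of a family of finite sets."""

    def __init__(self):
        self.elements = []          # list of frozensets, pairwise incomparable

    def insert(self, s):
        """Insert s; return False if s was subsumed by a stored element."""
        s = frozenset(s)
        if any(e <= s for e in self.elements):
            return False            # s is subsumed: some stored set is smaller
        # s is new: drop every stored set that s subsumes, then keep s
        self.elements = [e for e in self.elements if not s <= e]
        self.elements.append(s)
        return True

ac = Antichain()
print(ac.insert({1, 2, 3}))   # True   (first element)
print(ac.insert({1, 2}))      # True   ({1, 2, 3} is discarded, subsumed by {1, 2})
print(ac.insert({1, 2, 4}))   # False  (subsumed by {1, 2})
print(ac.elements)            # [frozenset({1, 2})]
```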
70. Invariant Procedures for Model Checking, Checking for Prior-Data Conflict and Bayesian Inference. Jang, Gun Ho, 13 August 2010.
We consider a statistical theory to be invariant when the results of two statisticians' independent data analyses, based upon the same statistical theory and using effectively the same statistical ingredients, are the same. We discuss three aspects of invariant statistical theories. First, both model checking and checking for prior-data conflict are assessments of a single null hypothesis without any specific alternative hypothesis. Hence, we conduct these assessments using a measure of surprise based on a discrepancy statistic. For the discrete case, it is natural to use the probability of obtaining a data point that is less probable than the observed data. For the continuous case, the natural analog of this is not invariant under equivalent choices of discrepancies. A new method is developed to obtain an invariant assessment. This approach also allows several discrepancies to be combined into one discrepancy via a single P-value.
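For the discrete case, the measure of surprise mentioned above can be illustrated numerically. The sketch below computes, for an assumed Binomial(10, 0.5) model, the probability of obtaining a count at most as probable as the observed one (one common convention; a strict reading of "less probable than" differs only on ties). The model and the observed values are chosen purely for illustration.

```python
# Discrete measure of surprise: tail probability of values at most as probable
# as the observed one, under an assumed Binomial(10, 0.5) model.
from math import comb

def binom_pmf(k, n=10, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def surprise_p_value(observed, n=10, p=0.5):
    """P(X is at most as probable as the observed value) under the model."""
    p_obs = binom_pmf(observed, n, p)
    return sum(binom_pmf(k, n, p) for k in range(n + 1) if binom_pmf(k, n, p) <= p_obs)

print(round(surprise_p_value(5), 4))   # 1.0: the mode is never surprising
print(round(surprise_p_value(9), 4))   # 0.0215: only counts 0, 1, 9, 10 are as rare
```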
Second, Bayesians have developed many noninformative priors that are supposed to contain no information concerning the true parameter value. Many of these are data dependent or improper, which can lead to a variety of difficulties. Gelman (2006) introduced the notion of weak informativity as a compromise between informative and noninformative priors, without a precise definition. We give a precise definition of weak informativity using a measure of prior-data conflict that assesses whether or not a prior places its mass around the parameter values having relatively high likelihood. In particular, we say a prior Pi_2 is weakly informative relative to another prior Pi_1 whenever Pi_2 leads to fewer prior-data conflicts a priori than Pi_1. This leads to a precise quantitative measure of how much less informative a weakly informative prior is.
Third, in Bayesian data analysis, highest posterior density inference is a commonly used method. This approach is not invariant to the choice of dominating measure or to reparametrizations. We explore properties of the relative surprise inferences suggested by Evans (1997). Relative surprise inferences, which compare the changes in belief from a priori to a posteriori, are invariant under reparametrizations. We mainly focus on the connection of relative surprise inferences to classical Bayesian decision theory, as well as on important optimality properties.