201 |
Constraint-Based Thread-Modular Abstract Interpretation
Kusano, Markus Jan Urban 25 July 2018 (has links)
In this dissertation, I present a set of novel constraint-based thread-modular abstract-interpretation techniques for static analysis of concurrent programs. Specifically, I integrate a lightweight constraint solver into a thread-modular abstract interpreter to reason about inter-thread interference more accurately. Then, I show how to extend the new analyzer from programs running on sequentially consistent memory to programs running on weak memory. Finally, I show how to perform incremental abstract interpretation, with and without the previously mentioned constraint solver, by analyzing only the regions of the program affected by a modification. I also demonstrate, through experiments, that these new constraint-based static analyzers are significantly more accurate than prior abstract-interpretation-based static analyzers, with lower runtime overhead, and that the incremental technique can drastically reduce runtime overhead in the presence of small program modifications. / Ph. D. / Software touches nearly every aspect of our lives, from smartphones, personal computers, and websites, to airplanes, cars, and medical equipment. Because of its ubiquity, we would like the software in our lives to operate correctly, that is, without unintended side effects or freezes. This dissertation presents a new technique to automatically analyze a piece of software and determine whether it runs as intended. We focus particularly on software in which multiple entities run simultaneously and thus can interact in many ways. Our automated analysis gives software developers high assurance that the software will always perform correctly and never exhibit unexpected issues.
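To make the thread-modular setting concrete, the sketch below shows a plain (non-constraint-refined) thread-modular interval analysis of two threads that increment a shared variable; it illustrates the general idea only, not the dissertation's analyzer, and the saturation bound used in place of widening, the thread bodies, and all names are invented for the example.

```python
# Minimal, illustrative thread-modular abstract interpretation over intervals.
# Each thread is analyzed in isolation; writes by other threads are folded in
# as "interference" intervals, and everything is iterated to a fixed point.

def join(a, b):
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

THREADS = {"t1": (1, 1), "t2": (2, 2)}   # interval of the increment each thread applies
INIT = (0, 0)                            # initial value of the shared variable x
BOUND = (0, 10)                          # assumed saturation bound (stand-in for widening)

def clamp(iv):
    return (max(iv[0], BOUND[0]), min(iv[1], BOUND[1]))

def analyze():
    # interference[t] over-approximates the values thread t may write to x
    interference = {t: None for t in THREADS}
    changed = True
    while changed:
        changed = False
        for t, inc in THREADS.items():
            # Reads of x see the initial value joined with every other thread's writes.
            view = INIT
            for u, written in interference.items():
                if u != t:
                    view = join(view, written)
            new = join(interference[t], clamp(add(view, inc)))
            if new != interference[t]:
                interference[t] = new
                changed = True
    result = INIT
    for written in interference.values():
        result = join(result, written)
    return result

if __name__ == "__main__":
    print("over-approximation of x:", analyze())   # (0, 10) with the toy bound above
```

In the dissertation's approach, a constraint solver additionally prunes interference combinations that no real interleaving can produce, which is what sharpens the results relative to a plain analysis like this one.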
|
202 |
Static Analysis to improve RTL Verification
Agrawal, Akash 06 March 2017 (has links)
Integrated circuits have traveled a long way, from general-purpose microprocessors to application-specific circuits. They have become an integral part of the modern era of technology that we live in. As applications and their complexity grow rapidly every day, so do the sizes of these circuits. With the increase in design size, the effort needed to test and verify these designs also increases. The goal of this thesis is to leverage static analysis techniques to reduce the effort of testing and verification at the register transfer level. Studying a design at the register transfer level exposes relational information about the design that is inaccessible at the structural level.
In this thesis, we present a way to generate a Data Dependency Graph and a Control Flow Graph from a register-transfer-level description of a circuit. Next, the generated graphs are used to perform relation mining to improve the test generation process in terms of speed, branch coverage, and the number of test vectors generated. The generated control flow graph provides valuable information about how control flows through the circuit design, and we use this information to build a framework that improves branch reachability analysis, mainly in terms of speed. We show the efficiency of our methods by running them on a suite of ITC'99 benchmark circuits. / Master of Science / In this era of modern technology, digital circuits and microprocessors have become an unavoidable part of everyone's life. The role of these circuits is becoming more and more critical as they run many critical services for us. Testing and verifying the design has always been an important aspect of designing these circuits. With the increasing number of applications and the advancement of technology, the size and complexity of designs have also increased. This imposes a need to test a design at a stage when it is easy both to test and to fix. A great deal of research has focused on automatically generating test patterns at an early stage of development, and the work presented in this thesis takes that effort one step further.
The method proposed in this work takes advantage of the fact that a design speaks for itself and can reveal a great deal of information if examined carefully. We present a way to extract important information about data dependencies and how data flows through the design. With the help of this information, we generate relations between design elements that help the test generation process achieve its goal more efficiently. We also use this information to help prove that certain parts of the design are unreachable. We show the efficiency of our methods by running them on benchmark designs.
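As a rough illustration of the kind of graphs involved (not the thesis's extraction flow, which starts from actual RTL), the sketch below builds a control flow graph and a data dependency graph from a toy, already-parsed block-level description; the block names, signals, and the way control dependence is folded into the dependency graph are all invented for the example.

```python
from collections import defaultdict

# Hypothetical flattened process: each block holds (target, source_operands) statements
# and an optional branch of the form (condition_signal, taken_block, not_taken_block).
blocks = {
    "b0": {"stmts": [("tmp", ["a", "b"])], "branch": ("mode", "b1", "b2")},
    "b1": {"stmts": [("y", ["tmp", "c"])], "branch": None},
    "b2": {"stmts": [("y", ["tmp"])], "branch": None},
}

def control_flow_graph(blocks):
    cfg = defaultdict(list)
    for name, blk in blocks.items():
        if blk["branch"]:
            _, taken, not_taken = blk["branch"]
            cfg[name] += [taken, not_taken]
    return dict(cfg)

def data_dependency_graph(blocks):
    # Edge u -> v means signal v is computed from signal u somewhere in the design.
    ddg = defaultdict(set)
    for blk in blocks.values():
        for target, sources in blk["stmts"]:
            for s in sources:
                ddg[s].add(target)
        if blk["branch"]:
            cond, taken, not_taken = blk["branch"]
            for succ in (taken, not_taken):
                for target, _ in blocks[succ]["stmts"]:
                    ddg[cond].add(target)   # control dependence recorded as a data edge
    return {k: sorted(v) for k, v in ddg.items()}

print(control_flow_graph(blocks))      # {'b0': ['b1', 'b2']}
print(data_dependency_graph(blocks))   # e.g. {'a': ['tmp'], ..., 'mode': ['y']}
```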
|
203 |
Quantitative verification of real-time properties with application to medical devices
Diciolla, Marco January 2014 (has links)
Probabilistic model checking is a powerful technique used to ensure the correct functioning of systems that exhibit real-time and stochastic behaviours. Many such systems are embedded and used in safety-critical situations, such as implantable medical devices. This thesis aims to develop a formal model-based framework tailored to the analysis and verification of cardiac pacemakers. The contributions are novel approaches for the automatic verification and validation of real-time properties over continuous-time models, which are applicable to software embedded in medical devices. First, we address the problem of model checking continuous-time Markov chain (CTMC) models against real-time specifications given in the form of temporal logic, namely metric temporal logic (MTL) and linear duration properties (LDP), or as timed automata (TA). The main question that we address is: given a continuous-time Markov chain, what is the probability of the set of timed paths that satisfy the real-time property under consideration? We provide novel algorithms to approximate this probability by generating systems of linear inequalities over variables that represent the waiting times in system states, and then solving multidimensional integrals over the resulting sets. Second, we present a model-based framework to support the design and verification of pacemakers against real-time properties. The pacemaker is modelled as a network of timed automata, whereas the human heart is modelled either as a network of timed automata or as a network of hybrid automata. Our framework can be instantiated with personalised heart models whose parameters can be learnt from patient data, and we have done so to validate our approach. We introduce property patterns and the counting metric temporal logic (CMTL) in order to specify the properties of interest. We provide new verification algorithms for networks of timed or hybrid automata against property patterns and CMTL. Finally, we pose and solve the parameter synthesis problem: given a network of timed automata containing model parameters, an objective function and a CMTL formula, find the set of parameter valuations, whenever it exists, that satisfies the CMTL formula and maximises the objective function. The framework has been implemented using Simulink, Matlab and Python code. Extensive experiments on pacemaker models have been carried out and the results are discussed in detail. The techniques developed in this thesis can assist in the design and verification of software embedded in medical devices.
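For orientation, the quantity at the heart of the first contribution can be sketched in standard CTMC notation (this is the textbook path-measure construction the approach builds on; the notation is assumed here, not taken from the thesis). For a discrete path sigma = s_0 s_1 ... s_n with exit rates E(s_i) and embedded transition probabilities P(s_i, s_{i+1}), the waiting times t_i are independent exponentials, so the probability mass of the timed paths along sigma whose waiting times lie in a region Z (for instance, the set described by the generated linear inequalities) is

```latex
\[
  \Pr(\sigma, Z) \;=\; \Bigl(\prod_{i=0}^{n-1} P(s_i, s_{i+1})\Bigr)
  \int_{Z} \prod_{i=0}^{n-1} E(s_i)\, e^{-E(s_i)\, t_i} \; dt_0 \cdots dt_{n-1} ,
\]
```

and the probability of the property is obtained by summing this quantity over the finitely many discrete paths that can satisfy it within the time bound.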
|
204 |
Automated Architecture-Based Verification of Safety-Critical Systems
Jaradat, Omar Tawffeeq Saleem January 2011 (has links)
Safety-critical systems require high levels of quality and dependability, where system correctness and safety are essential to avoid severe outcomes. Time and cost also impose significant constraints on the development process. Describing the behavior of a system at a high level provides a realistic view of the system and presents a valuable opportunity to verify it before committing the resources intended for its development. Architecture Description Languages (ADLs) provide the ability to capture and represent system-level details of components, interactions and configuration. The Architecture Analysis and Design Language (AADL), a member of the ADL family, has proven effective for designing software-intensive systems. In this report, we present a case study to validate “An Architecture-Based Verification Technique for AADL Specifications”. The technique combines model checking and model-based testing approaches adapted to an architectural perspective. The objectives of the verification process are 1) to ensure completeness and consistency of an AADL specification, and 2) to ensure conformance of an implementation with respect to its AADL specification. The technique had previously been applied only to small examples, and the goal of this thesis work is to validate it against a safety-critical system developed by a major vehicle manufacturer. Validation of the technique begins by investigating the system and specifying it in AADL. The defined verification criteria are subsequently applied to the AADL specification, which drives the verification process. The case study presents interesting results for the model checking (the completeness and consistency checking). Conformance testing, on the other hand, could not be performed on the implemented system but is an interesting topic for future work.
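As a loose illustration of what architecture-level completeness and consistency checks look like (a hypothetical sketch, not the validated technique, and the component and port names are invented), the fragment below flags connections that reference undeclared ports and declared ports that are never connected:

```python
# Toy component/connection model: components declare input and output ports,
# connections join an output port of one component to an input port of another.
components = {
    "speed_sensor": {"out": ["speed"]},
    "controller":   {"in": ["speed"], "out": ["brake_cmd"]},
    "brake":        {"in": ["brake_cmd"]},
}
connections = [
    ("speed_sensor", "speed", "controller", "speed"),
    ("controller", "brake_cmd", "brake", "brake_cmd"),
]

def check(components, connections):
    problems = []
    connected = set()
    for src, src_port, dst, dst_port in connections:
        if src_port not in components.get(src, {}).get("out", []):
            problems.append(f"inconsistent: {src}.{src_port} is not a declared output")
        if dst_port not in components.get(dst, {}).get("in", []):
            problems.append(f"inconsistent: {dst}.{dst_port} is not a declared input")
        connected |= {(src, src_port), (dst, dst_port)}
    for comp, ports in components.items():
        for direction in ("in", "out"):
            for p in ports.get(direction, []):
                if (comp, p) not in connected:
                    problems.append(f"incomplete: {comp}.{p} is never connected")
    return problems or ["specification passes the completeness and consistency checks"]

for msg in check(components, connections):
    print(msg)
```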
|
205 |
Higher-order model checking with traversals
Neatherway, Robin Philip January 2014 (has links)
Higher-order recursion schemes are a powerful model of functional computation that grew out of traditional recursive program schemes and generalisations of grammars. It is common to view recursion schemes as generators of possibly-infinite trees, which Ong showed to have a decidable monadic second-order theory, opening the door to applications in verification. Kobayashi later presented an intersection type characterisation of the model checking problem, on which most subsequent applied work is based. In recent work, recursion schemes have been considered to play a role similar to Boolean programs in verification of first-order imperative programs: a natural target for abstraction of programs with very large or infinite data domains. In this thesis we focus on the development of model checking algorithms for variants of recursion schemes. We start our contributions with a model checking algorithm inspired by the fully abstract game semantics of recursion schemes, but specified as a goal-directed approach to intersection type inference, which offers a unification of the views of Ong and Kobayashi. We build on this largely theoretical contribution with two orthogonal extensions and practical implementations. First, we develop a new extension of recursion schemes: higher-order recursion schemes with cases, which add non-determinism and a case construct operating over a finite data domain. These additions provide us with a more natural and succinct target for abstraction from functional programs: encoding data using functions inevitably results in an increase in the order and arity of the scheme, which have a direct impact on the worst-case complexity of the problem. We characterise the model checking problem using a novel intersection and union type system and give a practical algorithm for type inference in this system. We have carried out an empirical evaluation of the implementation --- the tool TravMC --- using a variety of problem instances from the literature and a new suite of problem instances derived via an abstraction-refinement procedure from functional programs. Second, we extend our approach from safety properties to all properties expressible in monadic second order logic using alternating parity tree automata as our specification language. We again provide an implementation and an empirical evaluation, which shows that despite the challenges accompanying liveness properties our tool scales beyond the current state of the art.
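For readers unfamiliar with recursion schemes, the toy sketch below unfolds a small order-1 scheme into a finite prefix of the tree it generates; it is purely illustrative, since tools like the one described above never build the (possibly infinite) tree explicitly but reason about it through intersection types. The rules, the encoding, and the productivity assumption are choices made for this example only.

```python
# Terms are tuples: ("T", name, args) for terminals, ("N", name, args) for
# nonterminals, ("V", name) for rule-bound variables.
RULES = {
    # S   -> br c (F c)
    "S": ([], ("T", "br", [("T", "c", []), ("N", "F", [("T", "c", [])])])),
    # F x -> br x (F (b x))
    "F": (["x"], ("T", "br", [("V", "x"),
                              ("N", "F", [("T", "b", [("V", "x")])])])),
}

def substitute(term, env):
    if term[0] == "V":
        return env[term[1]]
    kind, name, args = term
    return (kind, name, [substitute(a, env) for a in args])

def unfold(term, depth):
    # Assumes every rule body starts with a terminal (the scheme is productive).
    if term[0] == "N":
        params, body = RULES[term[1]]
        return unfold(substitute(body, dict(zip(params, term[2]))), depth)
    if depth == 0:
        return ("T", "*", [])          # truncation marker for the unexplored remainder
    return ("T", term[1], [unfold(a, depth - 1) for a in term[2]])

def pretty(term):
    _, name, args = term
    return name if not args else f"{name}({', '.join(pretty(a) for a in args)})"

print(pretty(unfold(("N", "S", []), 4)))   # a depth-4 prefix of the infinite value tree
```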
|
206 |
Game semantics based equivalence checking of higher-order programs
Hopkins, David G. B. January 2012 (has links)
This thesis examines the use of game semantics for the automatic equivalence checking of higher-order programs. Game semantics has proved to be a powerful method for constructing fully abstract models of logics and programming languages. Furthermore, the concrete nature of the semantics lends itself to algorithmic analysis. The game-semantic model can be used to identify fragments of languages which have a decidable observational equivalence problem. We investigate decidability results for different languages as well as the efficiency of these algorithms in practice. First we consider the call-by-value higher-order language with state, RML. This can be viewed as a canonical restriction of Standard ML to ground-type references. The O-strict fragment of RML is the largest set of type sequents for which, in the game-semantic denotation, justification pointers from O-moves are always uniquely reconstructible from the underlying move sequence. The O-strict fragment is surprisingly expressive, including higher-order types and difficult examples from the literature. By representing strategies as Visibly Pushdown Automata (VPA) we show that observational equivalence of O-strict terms is decidable (and in fact is ExpTime-complete). We then consider extensions of the O-strict fragment. Adding general recursion or using most non-O-strict types leads to undecidability. However, a limited form of recursion can be added while still preserving decidability (although the full power of DPDA is required). Next we examine languages with non-local control. This involves adding call/cc to our language and is known to correspond to dropping the game-semantic bracketing condition. In the call-by-name game-semantic model of Idealized Algol (IA), in which answers cannot justify questions, the visibility condition still implies a form of weak bracketing. By making bracketing violations explicit we show that we can still model the entire third-order fragment using VPA. We have also implemented tools based on these algorithms. Our model checkers Homer and Hector perform equivalence checking for third-order IA and O-strict RML respectively. Homer uses a naive explicit state method whereas Hector takes advantage of on-the-fly model checking. Our tools perform well on small yet challenging examples. On negative instances, the on-the-fly approach allows Hector to outperform Homer. To improve their performance, we also consider using ideas from symbolic execution. We propose a representation for finite automata using transitions labelled with formulas and guards which aims to take advantage of the symmetries of the game-semantic model so that strategies can be represented compactly. We refer to this representation as Symbolically Executed Automata (SEA). Using SEA allows much larger data types to be handled but is not as effective on larger examples with small data types.
|
207 |
What is the value of a Health Verified Program.
Schumacher, Kash Tucker January 1900 (has links)
Master of Agribusiness / Department of Agricultural Economics / Ted C. Schroeder / The beef cattle industry is one of the last industries in production agriculture that is not heavily integrated. Therefore, each segment of the industry is constantly looking for opportunities to increase the value of its cattle. In recent years, one of the opportunities available to cow-calf producers has been verification of certain production practices (e.g., Age and Source, Natural, and Non-Hormone Treated). The value flows from the consumer to the cow-calf producer: packers need these verified cattle to fill export contracts and are therefore willing to pay a premium for them.
The objective of this thesis was to determine the value of a Health Verified Program (HVP) to feedlot operators. An HVP is not required to export beef, unlike other verified programs, but it does verify the procedures that a group of calves received from the previous owner.
Since feedlots are a deciding factor in the value of an HVP, feedlot managers from across the United States were asked not only what value they place on the HVP but also other questions that could benefit others involved in the beef cattle industry. Regression models were used along with a correlation analysis to determine this value.
There is value to a health verified program, along with the other procedures available to cow-calf producers. Individual producers need to determine which verifications and procedures are economical and efficient for their individual operations, with all factors considered.
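Purely as a hypothetical illustration of the kind of regression mentioned above (the variables, magnitudes, and data below are invented, not the survey results), a hedonic price regression with a health-verification indicator might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
weight = rng.normal(550, 50, n)          # lot average weight, lbs (synthetic)
hvp = rng.integers(0, 2, n)              # 1 if the lot is health verified (synthetic)
price = 140 - 0.05 * (weight - 550) + 4.0 * hvp + rng.normal(0, 3, n)   # $/cwt, synthetic

# Ordinary least squares: price on an intercept, centered weight, and the HVP dummy.
X = np.column_stack([np.ones(n), weight - 550, hvp])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"estimated HVP premium: {beta[2]:.2f} $/cwt")   # recovers roughly the 4.0 built in
```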
|
208 |
Ferramenta CAD para extração de modelo de cobertura de saída por itens em verificação funcional. / CAD tool for output coverage model extraction in functional verification.
Muñoz Quispe, Joel Iván 25 October 2011 (has links)
In current integrated-system development environments, system requirements keep growing as more and more functionality is incorporated, leading to highly complex designs and, consequently, to more iterations in the design flow. One solution has been the use of IP cores to speed up hardware development. However, to guarantee a high level of reliability for these components, the verification process must show that all circuit properties are satisfied. The mainstream technique used in industry is dynamic functional verification, which aims to explore, through the injection of test vectors, as much of the circuit's state space as possible. The larger the number of possible states, the larger the number of test vectors that must be injected; since the number of test vectors has to be kept as low as possible, metrics for determining the completeness of the verification process, defined as coverage models, become necessary. Coverage metrics are established according to the observation strategy applied to the design under verification (DUV). The black-box approach is very common in industry: it stimulates the inputs and observes the events at the DUV outputs. To determine whether the system meets its specification, the verification engineer must define the output events considered relevant and the metrics that determine how many times they must be observed. This type of modeling is known as item coverage. The number of items and the events to be observed can be defined from the engineers' expert knowledge or, to simplify the task, a uniform distribution is adopted. Because these forms of modeling do not abstract all the properties of the circuit, the assumed distribution of the chosen events (parameters) is generally uncorrelated with the real profile observed while the testbenches run, and simulation time increases as a consequence. To address this problem, this work develops a methodology to obtain an output coverage model whose distribution profile is close to the real one, thereby assisting the verification engineer in selecting the output points or ranges of interest, in addition to the decisions derived from expert knowledge. The methodology finds the equation(s) that define the output(s) of the design under verification and, from them, computes the probability distribution of each observable event. At the core of the methodology is the PrOCov (Probabilistic Output Coverage) tool, designed with these goals. The methodology and the tool were tested on high-level models of three circuits, an FIR filter, an FFT processor and an elliptic filter, all described in SystemC. In all three cases, PrOCov found the respective output profiles satisfactorily; compared with the profiles obtained by simulation, excellent precision was achieved, with only small variations due to approximation errors. Variations in precision and simulation time as functions of the resolution of the output parameters (events) are also analyzed in this dissertation.
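To illustrate the underlying idea (a toy sketch only, not the PrOCov tool; the design equation and the coverage treatment are invented for the example), the fragment below enumerates a tiny input space, derives the exact output distribution of a simple design equation, and prints the per-item probabilities that an output coverage model could be weighted by instead of assuming a uniform profile:

```python
from collections import Counter
from itertools import product

def duv_output(a, b):
    # Toy "design equation": a saturating 4-bit adder.
    return min(a + b, 15)

# Exhaustive enumeration of the small input space gives the exact output distribution.
counts = Counter(duv_output(a, b) for a, b in product(range(16), repeat=2))
total = sum(counts.values())
dist = {y: c / total for y, c in sorted(counts.items())}

# Rare output values can now get their own coverage items, common ones can be merged.
for y, p in dist.items():
    print(f"output {y:2d}: p = {p:.4f}")
```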
|
209 |
Formal Methods For Verification Based Software Inspection
Powell, Daniel, n/a January 2003 (has links)
Useful processes that are independently repeatable are utilised in all branches of science and traditional engineering disciplines but seldom in software engineering. This is particularly so with processes used for detection and correction of defects in software systems. Code inspection, as introduced by Michael Fagan at IBM in the mid-1970s, is widely recognised as an effective technique for finding defects in software. Despite its reputation, code inspection, as it is currently practiced, is not a strictly repeatable process. This is due to the problems faced by inspectors when they attempt to paraphrase the complicated semantics of a unit of computer code. Verification-based software inspection, as advocated by the cleanroom software engineering community, requires that arguments of correctness be formulated with the code and its specification. These arguments rely on the reader being able to extract the semantics from the code. This thesis addresses the requirement for an independently repeatable, scalable and substantially automated method for yielding semantics from computer code in a complete, unambiguous and consistent manner in order to facilitate, and make repeatable, verification-based code inspection. Current literature regarding the use of code inspection for verification of software is surveyed. Empirical studies are referenced, comparing inspection to software testing and program proof. Current uses of formal methods in software engineering are discussed, with particular reference to formal method applications in verification. Forming the basis of the presented method is a systematic, and hence repeatable, approach to the derivation of program semantics. The theories and techniques proposed for deriving semantics from program code extend current algorithmic and heuristic techniques for deriving invariants. Additionally, the techniques introduced yield weaker forms of invariant information which are also useful for verification, defect detection and correction. Methods for using these weaker invariant forms, and tools to support these methods, are introduced. Algorithmic and heuristic techniques for investigating loop progress and termination are also introduced. Some of these techniques have been automated in supporting tools, and hence the corresponding defects can be repeatably identified. Throughout this thesis a strong emphasis is placed on describing implementable algorithms to realise the derivation techniques discussed. A number of these algorithms are implemented in a tool to support the application of the verification methods presented. The techniques and tools presented in this thesis are well suited to, but not limited to, supporting rigorous methods of defect detection as well as formal and semi-formal reasoning about correctness. The automation of these techniques in tools to support practical, formal code reading and correctness argument will assist in addressing the needs of trusted component technologies and the general requirement for quality in software.
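As a small, generic illustration of the role such derived information plays (this example is not taken from the thesis), the sketch below checks a candidate loop invariant and a termination variant at run time for a trivial summation loop; this is exactly the kind of argument a verification-based inspection asks the reader to make on paper:

```python
def sum_array(a):
    total, i = 0, 0
    while i < len(a):
        # Candidate invariant: total equals the sum of the first i elements.
        assert total == sum(a[:i])
        # Variant (termination measure): len(a) - i is non-negative and strictly decreases.
        variant_before = len(a) - i
        total += a[i]
        i += 1
        assert 0 <= len(a) - i < variant_before
    # Invariant plus the negated loop condition gives the postcondition.
    assert total == sum(a)
    return total

print(sum_array([3, 1, 4, 1, 5]))   # 14
```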
|
210 |
Architecture-Based Verification of Software-Intensive Systems
Johnsen, Andreas January 2010 (has links)
Development of software-intensive systems such as embedded systems for telecommunications, avionics and the automotive domain occurs under severe quality, schedule and budget constraints. As the size and complexity of software-intensive systems increase dramatically, the problems originating from the design and specification of the system architecture become increasingly significant. Architecture-based development approaches promise to improve the efficiency of software-intensive system development processes by reducing costs and time, while increasing quality. This apparent paradox is partially explained by the fact that the system architecture abstracts away unnecessary details, so that developers can concentrate both on the system as a whole and on its individual pieces, whether these are the components, the components' interfaces, or the connections among components. The use of architecture description languages (ADLs) provides an important basis for verification, since an ADL describes how the system should behave, at a high level and in a form from which automated tests can be generated. Analysis and testing based on architecture specifications allow detection of problems and faults early in the development process, even before the implementation phase, thereby saving significant cost and time. Furthermore, tests derived from the architecture specification can later be applied to the implementation to check the conformance of the implementation with respect to the specification. This thesis extends the knowledge base in the area of architecture-based verification. In this thesis report, an airplane control system is specified using the Architecture Analysis and Design Language (AADL). This specification will serve as the starting point of a system development process in which the developed architecture-based verification algorithms are applied.
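As a rough, hypothetical sketch of how an architecture specification can drive conformance testing (not the thesis's algorithms; the components and connections below are invented), each source-to-sink path through the declared connections is turned into a test obligation for the implementation:

```python
from collections import defaultdict

# Toy architecture: directed connections between components.
connections = [
    ("pilot_input", "autopilot"),
    ("altitude_sensor", "autopilot"),
    ("autopilot", "elevator_actuator"),
    ("autopilot", "warning_display"),
]

graph = defaultdict(list)
targets = set()
for src, dst in connections:
    graph[src].append(dst)
    targets.add(dst)
sources = [c for c in graph if c not in targets]   # components nobody feeds
sinks = {c for c in targets if c not in graph}     # components that feed nobody

def paths(node, prefix):
    prefix = prefix + [node]
    if node in sinks:
        yield prefix
    for nxt in graph.get(node, []):
        yield from paths(nxt, prefix)

obligations = [p for s in sources for p in paths(s, [])]
for i, path in enumerate(obligations, 1):
    print(f"test obligation {i}: stimulate {path[0]}, observe {' -> '.join(path[1:])}")
```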
|