71 |
Strategies for Scalable Symbolic Execution-based Test Generation
Krishnamoorthy, Saparya (02 August 2010)
With the advent of advanced program analysis and constraint solving techniques, several test generation tools use variants of symbolic execution. Symbolic techniques have been shown to be very effective in path-based test generation; however, they fail to scale to large programs due to the exponential number of paths to be explored. In this thesis, we focus on tackling this path explosion problem and propose search strategies to achieve quick branch coverage under symbolic execution, while exploring only a fraction of the paths in the program. We present a reachability-guided strategy that uses the reachability graph of the program to explore unvisited portions of the program, and a conflict-driven backtracking strategy that utilizes conflict analysis to perform non-chronological backtracking. We also propose error-directed search strategies that aim to catch bugs in the program faster, by targeting those parts of the program where bugs are likely to be found or that are hard to reach. We present experimental evidence that these strategies can significantly reduce the search space and improve the speed of test generation for programs. / Master of Science
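The reachability-guided idea can be pictured with a short Python sketch. This is an illustrative toy, not the tool developed in the thesis: the control-flow graph, the path representation and the scoring are invented for the example, and the constraint solving that drives real symbolic execution is elided. The strategy simply prefers the pending path whose frontier node can still reach the most uncovered parts of the program.

```python
# Toy CFG: node -> successor nodes (invented for illustration).
CFG = {"entry": ["a", "b"], "a": ["c"], "b": ["c"],
       "c": ["d", "exit"], "d": ["exit"], "exit": []}

def reachable_from(node):
    """Set of CFG nodes reachable from `node`."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(CFG[n])
    return seen

def pick_next(pending_paths, covered):
    """Reachability-guided selection: prefer the pending path whose frontier
    node can still reach the most uncovered nodes."""
    return max(pending_paths, key=lambda p: len(reachable_from(p[-1]) - covered))

# With {"entry", "a", "c"} covered, the path ending at "b" wins: it can still
# reach three uncovered nodes, versus two for the path ending at "a".
print(pick_next([["entry", "a"], ["entry", "b"]], {"entry", "a", "c"}))
```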
|
72 |
A comparison of two different model checking techniques
Bull, J. J. D (12 1900)
Thesis (MSc)--University of Stellenbosch, 2003. / ENGLISH ABSTRACT: Model checking is a computer-aided verification technique that is used to verify properties
about the formal description of a system automatically. This technique has been applied
successfully to detect subtle errors in reactive systems. Such errors are extremely difficult to
detect by using traditional testing techniques. The conventional method of applying model
checking is to construct a model manually either before or after the implementation of a
system. Constructing such a model requires time, skill and experience. An alternative method
is to derive a model from an implementation automatically.
In this thesis two techniques of applying model checking to reactive systems are compared,
both of which have problems as well as advantages. Two specific strategies are compared in
the area of protocol development:
1. Structuring a protocol as a transition system, modelling the system, and then deriving
an implementation from the model.
2. Automatically translating implementation code to a verifiable model.
Structuring a reactive system as a transition system makes it possible to verify the control flow of the system at implementation level, as opposed to verifying the control flow at an abstract level. The result is a closer correspondence between implementation and specification (model). At the same time, testing, which is restricted to small, independent code fragments that manipulate data, is simplified significantly.
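To make strategy 1 concrete, the sketch below structures a hypothetical stop-and-wait sender as an explicit transition system in Python; the states and events are invented for the example, not taken from the thesis.

```python
# A stop-and-wait sender structured as an explicit transition system.
TRANSITIONS = {
    ("idle",       "send"):    "wait_ack",
    ("wait_ack",   "ack"):     "idle",
    ("wait_ack",   "timeout"): "retransmit",
    ("retransmit", "resent"):  "wait_ack",
}

def step(state, event):
    """Advance the transition system; reject events not enabled in `state`."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"event {event!r} not enabled in state {state!r}")
    return TRANSITIONS[(state, event)]

def reachable(start="idle"):
    """Enumerate reachable control states -- the space a model checker explores."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (src, _), dst in TRANSITIONS.items():
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

print(sorted(reachable()))  # ['idle', 'retransmit', 'wait_ack']
```

Because the implementation executes this same table by interpretation, control-flow properties verified over the transition system hold for the implementation itself; only the small data-manipulating fragments attached to the transitions still need conventional testing.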
The construction of a model often takes too long; therefore, verification results may no longer
be applicable when they become available. To address this problem, the technique of automated
model extraction was suggested. This technique aims to reduce the time required to
construct a model by minimising manual input during model construction.
A transition system is a low-level formalism and direct execution through interpretation is feasible. However, the overhead of interpretation is the major disadvantage of this technique.
With automated model extraction there are disadvantages too. For example, differences between the implementation and specification languages, such as constructs present in the implementation language that cannot be expressed in the modelling language, make the development of an automated model extraction tool extremely difficult.
In conclusion, the two techniques are compared against a set of software development considerations.
Since a specific technique is not always preferable, guidelines are proposed to help
select the best approach in different circumstances. / AFRIKAANSE OPSOMMING: Model checking is a computer-based verification technique used to verify properties of the formal specification of a system. The technique has been applied successfully to detect subtle errors in reactive systems. Such errors are extremely difficult to detect using traditional testing techniques. Traditionally, model checking is applied by building a model before or after the implementation of a system. Building a model requires time, skill and experience. An alternative method is to derive a model automatically from an implementation.
In this thesis two techniques for applying model checking are compared, both of which have advantages as well as disadvantages. Two strategies are compared in the area of protocol development:
1. Structuring a protocol as a transition system, modelling the system, and then deriving an implementation from the model.
2. Automatically deriving a verifiable model from an implementation.
Structuring a reactive system as a transition system makes it possible to verify the control flow at implementation level, as opposed to verifying the control flow at an abstract level. The result is a closer correspondence between the implementation and the specification. At the same time, testing, which is restricted to small, independent code segments that manipulate data, is simplified significantly.
The construction of a model sometimes takes too long; consequently, by the time the verification results become available, they may no longer apply to the current version of the implementation. To address this problem, a technique for automatically deriving models from implementations was proposed. The aim of this technique is to reduce the time needed to build a model by keeping manual input to a minimum. A transition system is a low-level formalism and direct execution through interpretation is feasible; the overhead of the interpreter is, however, the major disadvantage of this technique. The technique of automatically deriving models from implementations also has disadvantages: for example, differences between the implementation language and the specification language, such as constructs used in the implementation language that cannot be expressed in the modelling language, make the development of a model extractor extremely difficult.
In conclusion, the two techniques are compared against a set of software development considerations. Since a specific technique cannot always be preferred, guidelines are proposed to help choose the best technique in different circumstances.
|
73 |
Investigating the non-termination of affine loops
Durant, Kevin (03 1900)
Thesis (MSc)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: The search for non-terminating paths within a program is a crucial part of software verification,
as the detection of an infinite path is often the only manner of falsifying program termination
- the failure of a termination prover to verify termination does not necessarily imply that a
program is non-terminating. This document describes the development and implementation of
two focussed techniques for investigating the non-termination of affine loops. The developed
techniques depend on the known non-termination concepts of recurrent sets and Jordan matrix
decomposition respectively, and imply the decidability of single-variable and cyclic affine loops.
Furthermore, the techniques prove to be practically capable methods both for locating non-terminating paths and for generating preconditions for non-termination. / AFRIKAANSE OPSOMMING: Software verification requires either a proof that a program terminates, or the detection of infinite executions. In this thesis we develop and implement two techniques for deciding the non-termination of affine loops. The techniques that are developed are based on concepts such as Jordan matrix decomposition and recurrent sets, which have been used in the past to investigate the termination of loops. The techniques can be used to decide the behaviour of both single-variable and cyclic affine loops: practically all non-terminating affine loops can be identified, and the conditions under which this non-terminating behaviour arises can be described.
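The recurrent-set concept can be illustrated on the simplest case. The following Python sketch, a toy under stated assumptions rather than the thesis's method, checks a sufficient condition for the non-termination of a one-variable loop `while x >= G: x = a*x + b` with `a >= 0`: the set S = {x : x >= L} is recurrent if it lies inside the guard (L >= G) and the update maps S into itself (a*L + b >= L, using that a*x + b is nondecreasing in x).

```python
def is_recurrent(a, b, G, L):
    """Sufficient check that S = {x : x >= L} is a recurrent set for
    'while x >= G: x = a*x + b' with a >= 0: every x in S satisfies the
    guard, and the loop body maps S back into S."""
    assert a >= 0, "monotonicity of the update is assumed"
    inside_guard = L >= G       # S implies the loop guard
    closed = a * L + b >= L     # minimum of a*x + b over S stays in S
    return inside_guard and closed

# 'while x >= 0: x = x + 1' diverges from every x >= 0:
print(is_recurrent(a=1, b=1, G=0, L=0))   # True
# 'while x >= 0: x = x - 1' terminates; the check rightly fails:
print(is_recurrent(a=1, b=-1, G=0, L=0))  # False
```

When the check succeeds, "x0 >= L" is a precondition for non-termination, in the spirit of the preconditions mentioned in the abstract.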
|
74 |
Program analysis with interpolants
Weissenbacher, Georg (January 2010)
This dissertation discusses novel techniques for interpolation-based software model checking, an approximate method which uses Craig interpolation to compute invariants of programs. Our work addresses two aspects of program analyses based on model checking: verification (the construction of correctness proofs for programs) and falsification (the detection of counterexamples that violate the specification). In Hoare's calculus, a proof of correctness comprises assertions which establish that a program adheres to its specification. The principal challenge is to derive appropriate assertions and loop invariants. Contemporary software verification tools use Craig interpolation (as opposed to traditional predicate transformers such as the weakest precondition) to derive approximate assertions. The performance of the model checker is contingent on the Craig interpolants computed. We present novel interpolation techniques which provide the following advantages over existing methods. Firstly, the resulting interpolants are sound with respect to the bit-level semantics of programs, which is an improvement over interpolation systems that use linear arithmetic over the reals to approximate bit-vector arithmetic and/or do not support bit-level operations. Secondly, our interpolation systems afford us a choice of interpolants and enable us to fine-tune their logical strength and structure. In contrast, existing procedures are limited to a single ad-hoc choice of an interpolant. Interpolation-based verification tools are typically forced to refine an initial approximation repeatedly in order to achieve the accuracy required to establish or refute the correctness of a program. The detection of a counterexample containing a repetitive construct may necessitate one refinement step (involving the computation of additional interpolants) for each iteration of the loop. We present a heuristic that aims to avoid the repeated and computationally expensive construction of interpolants, thus enabling the detection of deeply buried defects such as buffer overflows. Finally, we present an implementation of our techniques and evaluate them on a set of standardised device driver and buffer overflow benchmarks.
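For readers unfamiliar with the notion: a Craig interpolant I for an inconsistent pair (A, B) satisfies A entails I, I and B together are unsatisfiable, and I mentions only symbols shared by A and B. The sketch below checks these properties for a hand-picked candidate using the z3 SMT solver's Python bindings (assuming the z3-solver package; the formulas are invented and this illustrates the definition only, not the dissertation's bit-level interpolation systems):

```python
from z3 import Int, And, Not, Implies, Solver, unsat

x, y = Int("x"), Int("y")
A = And(x >= 0, y == x + 1)   # facts along a path prefix
B = y < 1                     # negated assertion on the suffix
I = y >= 1                    # candidate interpolant; mentions only shared y

def valid(f):
    s = Solver()
    s.add(Not(f))
    return s.check() == unsat

def inconsistent(*fs):
    s = Solver()
    s.add(*fs)
    return s.check() == unsat

print(valid(Implies(A, I)))   # A entails I      -> True
print(inconsistent(I, B))     # I contradicts B  -> True
```

Several formulas over y pass this check; choosing among them by logical strength and structure is precisely the degree of freedom the proposed interpolation systems expose.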
|
75 |
Certified Compilation and Worst-Case Execution Time Estimation / Compilation formellement vérifiée et estimation du pire temps d'éxécution
Maroneze, André Oliveira (17 June 2014)
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and the WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation itself is decomposed into three steps: a program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end. Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
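As a small illustration of the IPET side, the sketch below encodes a hypothetical four-block CFG with one loop (bound 10) as a linear program over block execution counts and solves it with SciPy; the LP relaxation happens to be integral for this toy flow problem. Block costs and the CFG are invented, and nothing here is formally verified, unlike the Coq development described above.

```python
from scipy.optimize import linprog

# Blocks: entry, loop header, loop body, exit, with per-block costs.
costs = [5, 2, 20, 1]
c = [-k for k in costs]   # linprog minimizes, so negate to maximize
A_eq = [
    [1,  0,  0,  0],      # the entry block executes exactly once
    [1, -1,  1,  0],      # header count = entry count + back edges from body
    [0,  1, -1, -1],      # header count = body count + exit count
]
b_eq = [1, 0, 0]
A_ub = [[0, 0, 1, 0]]     # loop bound: the body executes at most 10 times
b_ub = [10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("WCET estimate:", -res.fun)  # 5 + 11*2 + 10*20 + 1 = 228.0
```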
|
76 |
Post-silicon Functional Validation with Virtual Prototypes
Cong, Kai (03 June 2015)
Post-silicon validation has become a critical stage in the system-on-chip (SoC) development cycle, driven by increasing design complexity, a higher level of integration and decreasing time-to-market. According to recent reports, post-silicon validation effort comprises more than 50% of the overall development effort of a 65nm SoC. Though post-silicon validation covers many aspects ranging from electronic properties of hardware to performance and power consumption of whole systems, a central task remains validating the functional correctness of both hardware and its integration with software. There are several key challenges to achieving accelerated and low-cost post-silicon functional validation. First, there is only limited silicon observability and controllability; second, there is no good test coverage estimation over a silicon device; third, it is difficult to generate good post-silicon tests before a silicon device is available; fourth, there are no effective software robustness testing approaches to ensure the quality of hardware/software integration.
We propose a systematic approach to accelerating post-silicon functional validation with virtual prototypes. Post-silicon test coverage is estimated in the pre-silicon stage by evaluating the test cases on the virtual prototypes. Such analysis is first conducted on the initial test suite assembled by the user and subsequently on the expanded test suite, which includes test cases that are automatically generated. Based on the coverage statistics of the initial test suite on the virtual prototypes, test cases are automatically generated to improve the test coverage. In the post-silicon stage, our approach supports coverage evaluation of test cases on silicon devices to ensure the fidelity of early coverage evaluation. The generated test cases are issued to silicon devices to detect inconsistencies between virtual prototypes and silicon devices using conformance checking. We further extend the test case generation framework to generate and inject fault scenarios with virtual prototypes for driver robustness testing. Besides virtual prototype-based fault injection, an automatic driver fault injection approach is developed to support runtime fault generation and injection for driver robustness testing. Since virtual prototypes enable early driver development, our automatic driver fault injection approach can be applied to driver testing in both pre-silicon and post-silicon stages.
For preliminary evaluation, we have applied our coverage evaluation and test generation to several network adapters and their virtual prototypes. We have conducted coverage analysis for a suite of common tests on both the virtual prototypes and the silicon devices. The results show that our approach can estimate the test coverage with high fidelity. Based on the coverage estimation, we have employed our automatic test generation approach to generate additional tests. When the generated test cases were issued to both virtual prototypes and silicon devices, we observed significant coverage improvement, and we detected 20 inconsistencies between virtual prototypes and silicon devices, each of which reveals a virtual prototype or silicon device defect. After applying the virtual prototype-based fault injection approach to the virtual prototypes of three widely used network adapters, we generated and injected thousands of fault scenarios and found 2 driver bugs. For automatic driver fault injection, we have applied our approach to 12 widely used drivers with either virtual prototypes or silicon devices; after testing all these drivers, we found 28 distinct bugs.
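The conformance-checking step can be pictured with a small Python sketch: the same directed test is replayed on the virtual prototype and on the silicon device, and any divergence in the observed registers flags a defect in one of the two. The register-level interface and the fault-modelling "quirk" below are invented stand-ins, not the actual framework.

```python
class FakeDevice:
    """Stand-in for a device; a real run would bind a virtual prototype
    or a silicon test harness behind the same read/write interface."""
    def __init__(self, quirk=0):
        self.regs, self.quirk = {}, quirk
    def write(self, reg, val):
        self.regs[reg] = val + self.quirk   # the quirk models a defect
    def read(self, reg):
        return self.regs.get(reg, 0)

def run_test(device, test):
    """Replay a directed test (register writes) and read back observables."""
    for reg, val in test["writes"]:
        device.write(reg, val)
    return {reg: device.read(reg) for reg in test["observe"]}

def conformance_check(virtual_prototype, silicon, tests):
    """Return the tests on which the two disagree; each mismatch reveals a
    virtual prototype defect or a silicon defect."""
    return [t["name"] for t in tests
            if run_test(virtual_prototype, t) != run_test(silicon, t)]

tests = [{"name": "t0", "writes": [("CTRL", 1)], "observe": ["CTRL"]}]
print(conformance_check(FakeDevice(), FakeDevice(quirk=1), tests))  # ['t0']
```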
|
77 |
Towards effective and efficient temporal verification in grid workflow systems
Chen, Jinjun (January 2007)
In grid architecture, a grid workflow system is a type of high-level grid middleware
which aims to support large-scale sophisticated scientific or business processes in a
variety of complex e-science or e-business applications such as climate modelling,
disaster recovery, medical surgery, high energy physics, international stock market
modelling and so on. Such sophisticated processes often contain hundreds of
thousands of computation or data intensive activities and take a long time to
complete. In reality, they are normally time constrained. Correspondingly, temporal
constraints are enforced when they are modelled or redesigned as grid workflow
specifications at build-time. The main types of temporal constraints include upper
bound, lower bound and fixed-time. Then, temporal verification would be conducted
so that we can identify any temporal violations and handle them in time.
Conventional temporal verification research and practice have presented some
basic concepts and approaches. However, they have not paid sufficient attention to
overall temporal verification effectiveness and efficiency. In the context of grid
economy, any resources used for executing grid workflows must be paid for. Therefore, resources should mainly be used for the execution of the grid workflow itself rather than for
temporal verification. Poor temporal verification effectiveness or efficiency would
cause more resources diverted to temporal verification. Hence, temporal verification
effectiveness and efficiency become a prominent issue and deserve an in-depth
investigation.
This thesis systematically investigates the limitations of conventional temporal
verification in terms of temporal verification effectiveness and efficiency. The
detailed analysis of temporal verification effectiveness and efficiency is conducted
for each step of a temporal verification cycle. There are four steps in total: Step 1 -
defining temporal consistency; Step 2 - assigning temporal constraints; Step 3 -
selecting appropriate checkpoints; and Step 4 - verifying temporal constraints.
Based on the investigation and analysis, we propose some new concepts and develop
a set of innovative methods and algorithms towards more effective and efficient
temporal verification. Comparisons, quantitative evaluations and/or mathematical
proofs are also presented at each step of the temporal verification cycle. These
demonstrate that our new concepts, innovative methods and algorithms can
significantly improve overall temporal verification effectiveness and efficiency.
Specifically, in Step 1, we analyse the limitations of two temporal consistency
states which are defined by conventional verification work. We then propose four
new states towards better temporal verification effectiveness. In Step 2, we analyse
the necessity of a number of temporal constraints in terms of temporal verification
effectiveness. Then we design a novel algorithm for assigning a series of fine-grained
temporal constraints within a few user-set coarse-grained ones. In Step 3, we
discuss the problem of existing representative checkpoint selection strategies in
terms of temporal verification effectiveness and efficiency. The problem is that they
often ignore some necessary checkpoints and/or select some unnecessary ones. To
solve this problem, we develop an innovative strategy and corresponding algorithms
which only select sufficient and necessary checkpoints. In Step 4, we investigate a
phenomenon which is ignored by existing temporal verification work, i.e. temporal
dependency. Temporal dependency means temporal constraints are often dependent
on each other in terms of their verification. We analyse its impact on overall
temporal verification effectiveness and efficiency. Based on this, we develop some
novel temporal verification algorithms which can significantly improve overall
temporal verification effectiveness and efficiency. Finally, we present an extension of our research on handling temporal verification results, since these results are based on our four new temporal consistency states.
The major contributions of this research are that we have provided a set of new
concepts, innovative methods and algorithms for temporal verification in grid
workflow systems. With these, we can significantly improve overall temporal
verification effectiveness and efficiency. This would eventually improve the overall
performance and usability of grid workflow systems because temporal verification
can be viewed as a service or function of grid workflow systems. Consequently, by
deploying the new concepts, innovative methods and algorithms, grid workflow
systems would be able to better support large-scale sophisticated scientific and
business processes in complex e-science and e-business applications in the context
of grid economy.
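As a rough illustration of checkpoint-based temporal verification, the Python sketch below classifies consistency at a checkpoint from the real time elapsed so far and from minimum, mean and maximum estimates of the remaining workflow's duration against an upper-bound constraint. The four-way classification is one plausible reading of the four new states mentioned above; the thesis's own definitions may differ in detail.

```python
def consistency_at_checkpoint(elapsed, rem_min, rem_mean, rem_max, upper_bound):
    """Classify temporal consistency against an upper-bound constraint.
    `elapsed` is the real execution time at the checkpoint; rem_* estimate
    the duration of the remaining activities (illustrative scheme only)."""
    if elapsed + rem_max <= upper_bound:
        return "strong consistency"      # even the worst case meets the bound
    if elapsed + rem_mean <= upper_bound:
        return "weak consistency"        # typical executions still meet it
    if elapsed + rem_min <= upper_bound:
        return "weak inconsistency"      # only favourable executions meet it
    return "strong inconsistency"        # a violation is unavoidable

# 40 time units in, remaining work estimated in [50, 70, 100], bound 105:
print(consistency_at_checkpoint(40, 50, 70, 100, 105))  # 'weak inconsistency'
```

A checkpoint selection strategy then decides at which activities such a check is actually worth running.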
|
78 |
Towards verifiable adaptive control of gas turbine engines
Pakmehr, Mehrdad (20 September 2013)
This dissertation investigates the problem of developing verifiable stable control architectures for gas turbine engines. First, a nonlinear physics-based dynamic model of a twin-spool turboshaft engine which drives a variable pitch propeller is developed. In this model, the states of the engine are the two spool speeds, and the two control inputs to the system are the fuel flow rate and the prop pitch angle. Experimental results are used to verify the dynamic model of the JetCat SPT5 turboshaft engine. Based on the experimental data, performance maps of the engine components, including the propeller, high-pressure compressor, and high- and low-pressure turbines, are constructed. The engine numerical model is implemented in Matlab.
Second, a stable gain scheduled controller is developed for a gas turbine engine that drives a variable pitch propeller. A stability proof is developed for the gain scheduled closed-loop system using global linearization and linear matrix inequality (LMI) techniques. Using convex optimization tools, a single quadratic Lyapunov function is computed for multiple linearizations near equilibrium and non-equilibrium points of the nonlinear closed-loop system. This approach guarantees stability of the closed-loop gas turbine engine system. To verify the stability of the closed-loop system on-line, an optimization problem is proposed which is solvable using convex optimization tools. Through simulations, we show that the developed gain scheduled controller is capable of regulating a turboshaft engine for large thrust commands in a stable fashion with proper tracking performance.
Third, a gain scheduled model reference adaptive control (GS-MRAC) concept for multi-input multi-output (MIMO) nonlinear plants with constraints on the control inputs is developed. Specifically, adaptive state feedback for the output tracking control problem of MIMO nonlinear systems is studied. A gain scheduled reference model is used for generating the desired state trajectories, and the stability of this reference model is also analyzed using convex optimization tools. This approach guarantees stability of the closed-loop gain scheduled gas turbine engine system, which is used as the gain scheduled reference model. An adaptive state feedback control scheme is developed and its stability is proven, in addition to transient and steady-state performance guarantees. The resulting closed-loop system is shown to have ultimately bounded solutions with an a priori adjustable bounded tracking error. The results are then extended to GS-MRAC with constraints on the magnitudes of multiple control inputs, and sufficient conditions for uniform boundedness of the closed-loop system are derived. A semi-global stability result is proven with respect to the level of saturation for open-loop unstable plants, while the stability result is shown to be global for open-loop stable plants. Simulations are performed for three different models of the turboshaft engine: the nominal engine model and two models in which the engine is degraded. Through simulations, we show that the developed GS-MRAC architecture can be used for the tracking problem of a degraded turboshaft engine for large thrust commands with guaranteed stability.
Finally, a decentralized linear parameter dependent representation of the engine model is developed, suitable for decentralized control of the engine with core and fan/prop subsystems. Control theoretic concepts for decentralized gain scheduled model reference adaptive control (D-GS-MRAC) systems are developed. For each subsystem, a linear parameter dependent model is available and a common Lyapunov matrix can be computed using convex optimization tools. With this control architecture, the two subsystems of the engine (i.e., the engine core and the engine prop/fan) can be controlled with independent controllers for large throttle commands in a decentralized manner. Based on this D-GS-MRAC architecture, a "plug and play" (PnP) technology concept for gas turbine engine control systems is investigated, which allows us to match different engine cores with different engine fans/propellers. With this plug and play engine control architecture, engine cores and fans/props could be used with their on-board subordinate controllers ready for integration into a functional propulsion system. Simulation results for three different models of the engine - the nominal engine model, the model with a new prop, and the model with a new engine core - illustrate the feasibility of PnP technology development for gas turbine engine control systems.
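The common-Lyapunov step behind these stability proofs can be sketched numerically. Assuming the cvxpy package, the LMI feasibility problem - find a positive definite P with A_i^T P + P A_i negative definite for all linearizations A_i - looks as follows; the two 2x2 matrices are invented stand-ins for linearizations of the closed-loop engine dynamics, not data from the thesis.

```python
import numpy as np
import cvxpy as cp

# Invented stand-ins for linearizations of the closed-loop dynamics.
A1 = np.array([[-1.0, 0.0], [0.0, -2.0]])
A2 = np.array([[-1.0, 0.5], [0.0, -1.0]])

n, eps = 2, 1e-3
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n)]                # P positive definite
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))  # decrease condition

problem = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
problem.solve()
print(problem.status, P.value)  # 'optimal' plus a common Lyapunov matrix
```

Feasibility certifies that V(x) = x^T P x decreases along every listed linearization; the on-line verification mentioned above amounts to re-solving such a problem as new operating points are added.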
|
79 |
Certified Compilation and Worst-Case Execution Time Estimation
Oliveira Maroneze, André (17 June 2014)
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and the WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation itself is decomposed into three steps: a program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end. Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
|
80 |
Reprezentace stavů programu / Efficient Representation of Program States
Jančík, Pavel (January 2017)
In program verification we try to decide whether or not a program contains errors. A basic prerequisite of all verification approaches is an efficient representation of, and efficient manipulation with, program states. In this thesis we present techniques for identifying information in program states that is no longer relevant, and for removing it. The thesis contains reductions suitable for both explicit and symbolic state representations. Our techniques for the explicit representation were designed specifically for multi-threaded programs; our analyses can find values in dynamically allocated objects, i.e. on the heap, that the program will not read in any of its subsequent steps. Logical formulas in predicate or propositional logic are the prevailing symbolic representation of sets of program states, and Craig interpolants are one of the usual means of obtaining formulas with the desired properties. We present a new way of computing interpolants that uses variable assignments to reduce their size: a variable assignment lets us block those program paths that the interpolant should not take into account, and thereby shrink the interpolant.
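For the explicit-state side, the reduction can be pictured with a tiny Python sketch: heap fields that no future step of any thread will read are reset to a canonical marker, so states that differ only in such dead values collapse into one representative. The field names and the liveness information are invented; computing that liveness is the hard part the thesis addresses.

```python
def canonicalize(heap, live_fields):
    """Replace values the program will never read again by a fixed marker,
    so semantically equivalent states compare (and hash) as equal."""
    return {f: (v if f in live_fields else None) for f, v in heap.items()}

s1 = {"node.next": "0x10", "node.val": 7}
s2 = {"node.next": "0x10", "node.val": 9}
live = {"node.next"}                  # 'node.val' is never read again
print(canonicalize(s1, live) == canonicalize(s2, live))  # True: states merge
```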
|