21

DASE: Document-Assisted Symbolic Execution for Improving Automated Test Generation

Zhang, Lei 17 June 2015 (has links)
Software testing is crucial for uncovering software defects and ensuring software reliability. Symbolic execution has been utilized for automatic test generation to improve testing effectiveness. However, existing test generation techniques based on symbolic execution fail to take full advantage of the rich documentation that specifies programs' input constraints, which could further enhance the effectiveness of test generation. In this work we present a general approach, Document-Assisted Symbolic Execution (DASE), to improve automated test generation and bug detection. DASE leverages natural language processing techniques and heuristics to analyze programs' readily available documentation and extract input constraints. The input constraints are then used as pruning criteria: inputs far from being valid are trimmed off. In this way, DASE guides symbolic execution to focus on the inputs that are semantically more important. We evaluated DASE on 88 programs from 5 mature real-world software suites: GNU Coreutils, GNU findutils, GNU grep, GNU Binutils, and elftoolchain. Compared to symbolic execution without input constraints, DASE increases line coverage, branch coverage, and call coverage by 5.27–22.10%, 5.83–21.25%, and 2.81–21.43%, respectively. In addition, DASE detected 13 previously unknown bugs, 6 of which have already been confirmed by the developers.
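The pruning idea can be sketched in a few lines. The following is a loose Python illustration with hypothetical names throughout (DASE itself is built on KLEE and targets C programs); a documented option set stands in for constraints mined from man pages and --help output.

    # Loose sketch of DASE-style pruning: constraints extracted from
    # documentation filter out symbolic states whose inputs are far
    # from valid. State, concrete_input, fork_children are hypothetical.
    DOC_OPTIONS = {"-a", "-l", "--help", "--version"}

    def is_near_valid(argv):
        """Input constraint from the docs: the first argument
        should be one of the documented options."""
        return bool(argv) and argv[0] in DOC_OPTIONS

    def explore(worklist):
        while worklist:
            state = worklist.pop()
            if not is_near_valid(state.concrete_input()):
                continue                             # prune: far from valid
            worklist.extend(state.fork_children())   # keep exploring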
22

A Theoretical Study of the Synergy and Lazy Annotation Algorithms

Jayaram, Sampath January 2013 (has links) (PDF)
Given a program with assertions, the assertion checking problem is to tell whether there is an execution of the program that violates one of the assertions. One approach to this problem is to explore different paths towards assertion violations, and to learn “blocking” conditions whenever a path is blocked from reaching the violations. The Synergy algorithm of Gulavani et al. [FSE 2006] and the Lazy Annotation algorithm of McMillan [CAV 2010] are two recent algorithms that follow this approach to assertion checking. Each technique has its own advantages. Synergy uses concrete tests, which are very cheap compared to theorem prover calls. The tests also help by giving us the place to perform the refinement (called the frontier) for an abstraction that is too coarse. Synergy uses partition refinement while maintaining its abstraction. The Lazy Annotation algorithm partitions each location into regions that are safe and unsafe. The safe regions are those from which we cannot reach the error states, and the unsafe regions are the remaining ones. The annotations that this algorithm maintains correspond to the safe regions. The advantage that annotations have over partition refinement is that annotations can recover from irrelevant predicates used for annotating, whereas once a partition is refined with an irrelevant predicate, it cannot recover from it. In this work, we make a theoretical study of the algorithms mentioned above. The aim of the study is to answer questions like: Is one algorithm provably better than the other, in terms of the best-case execution (counting the number of refinement steps) on input programs? Is the termination behaviour of one always better than the other? We show that the Synergy and Lazy Annotation algorithms are incomparable, i.e., neither of them is provably better than the other in terms of their best-case execution times. We also show how we can view the two algorithms on a common ground, in the sense that we show how to translate a snapshot of one algorithm into a snapshot of the other. This allows us to import the heuristics of one algorithm into the other, and thereby propose new and potentially improved versions of these algorithms. By viewing them on a common ground, we are also able to view the final proofs generated by the algorithms in either representation. We go on to study the proposed new versions of Synergy and Lazy Annotation, comparing their best-case running times and their termination behaviour. We show that the following pairs of algorithms are incomparable: Mod-Syn (Lazy Annotation-style refinement imported into Synergy) and Synergy, Mod-Syn and Lazy Annotation, and Synergy and SEAL (Synergy heuristics imported into Lazy Annotation). We show that the SEAL algorithm always performs better than the Lazy Annotation algorithm.
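Both algorithms instantiate the same high-level explore-and-block loop: search for a path to a violation, report it if it is concretely executable, and otherwise learn a blocking condition at the frontier. A minimal sketch follows, with every method name hypothetical; neither paper's algorithm is literally this code.

    # Shared "explore and block" skeleton behind Synergy and Lazy
    # Annotation; blocking conditions play the role of partition
    # refinements (Synergy) or annotations (Lazy Annotation).
    def check_assertions(program):
        blocked = set()                      # learned blocking conditions
        while True:
            path = program.path_to_violation(avoiding=blocked)
            if path is None:
                return "SAFE", blocked       # violations proved unreachable
            if path.has_concrete_witness():  # a real test reaches the error
                return "UNSAFE", path
            # Spurious path: refine at the frontier and continue.
            blocked.add(path.blocking_condition_at_frontier())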
23

Code Assertions Verification Using Backward Symbolic Execution

Husák, Robert January 2017 (has links)
In order to prevent, detect, and fix errors in software, various tools are available to programmers, some of which can reason about the behaviour of a program. In the case of the C# programming language, the main representatives are Microsoft FxCop, Code Contracts, and Pex. These tools can indeed help build highly reliable software. However, when a company wants to include them in its software development process, there is a significant overhead involved. Therefore, we created a lightweight assertion verification tool called AskTheCode that helps the user focus on one particular problem at a time. Because of its goal-driven approach, we decided to implement it using backward symbolic execution. Although it can currently handle only basic C# statements and data types, the evaluation against the existing tools shows that it has the potential to eventually provide significant added value to the user once developed further.
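Backward symbolic execution starts from the negated assertion and propagates it backwards through the code towards the entry point. A minimal sketch of one backward step over an assignment, using Z3's Python bindings (AskTheCode itself targets C# via Roslyn; this generic illustration is not its API):

    # Backward step over `y = x + 1; assert y > 0` with Z3
    # (pip install z3-solver). Start from the negated assertion and
    # substitute the assignment backwards to get a condition on inputs.
    from z3 import Int, Solver, Not, substitute, sat

    x, y = Int("x"), Int("y")
    violation = Not(y > 0)                  # goal: make the assertion fail
    wp = substitute(violation, (y, x + 1))  # walk back through y := x + 1

    s = Solver()
    s.add(wp)
    if s.check() == sat:
        print("assertion can fail, e.g. x =", s.model()[x])  # any x <= -1
    else:
        print("assertion holds on this path")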
24

Model-Based Testing over IOSTS enriched with function calls

Boudhiba, Imen 02 March 2017 (has links)
Reactive systems are modeled with various kinds of automata, such as Input Output Symbolic Transition Systems (IOSTS). Symbolic execution over an IOSTS allows test case generation that can be executed on a concrete implementation, in order to determine whether it conforms to its model. In this dissertation, we extend the IOSTS framework with explicit user-defined function calls and analyze the impact of such calls on the whole system and vice versa. The thesis bridges the gap between a model-based approach, in which user-defined functions are abstracted away, and a code-based approach, in which small pieces of code are considered separately regardless of the way they are combined. According to the level of knowledge we have about these functions, they are modeled either by a complete specification, by a partial specification, or just as black boxes provided without any knowledge. First, when functions are partially known, we use function stubs defined by tables containing representative input/output data tuples. The proposed approach borrows from concolic testing the idea of mixing symbolic execution with information obtained from instrumented concrete executions (function tables). Second, if the user is willing to provide further specifications, it would be interesting to use more complete representations for called functions. Hence, we propose to abstract function behaviors by means of pre/post contracts, and we extend symbolic execution by analyzing the functions through their contracts. Finally, when called functions are completely unknown, we present an approach to extract new contracts for them by exploring the constraints coming from the IOSTS symbolic execution. Such contracts reflect the constraints on the functions that make some behavior (requirement) feasible.
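The contract-based treatment of a call amounts to constraining a fresh symbol for the result instead of executing a body. A small sketch with Z3 (an illustration only; the thesis works over IOSTS models, and the contract below is invented):

    # Treating an unknown function f as a pre/post contract during
    # symbolic execution: the call `fx = f(x)` contributes constraints
    # instead of a body. Contract (invented): pre x >= 0, post f(x) > x.
    from z3 import Int, Solver, sat

    def apply_contract(solver, arg, result):
        solver.add(arg >= 0)        # precondition must hold at the call
        solver.add(result > arg)    # postcondition constrains the result

    x, fx = Int("x"), Int("f_x")
    s = Solver()
    s.add(x == 3)                   # path condition before the call
    apply_contract(s, x, fx)        # symbolic effect of fx = f(x)
    s.add(fx < 10)                  # branch taken after the call
    print(s.check())                # sat: this behavior is feasible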
25

Quantitative Analysis of Exploration Schedules for Symbolic Execution

Kaiser, Christoph January 2017 (has links)
Due to the complexity of software, manual testing is not enough to cover all of its relevant behaviours. A different approach to this problem is Symbolic Execution, a software testing technique that explores all possible inputs of a program in the hope of finding all bugs. Due to the often exponential increase in possible program paths, Symbolic Execution usually cannot exhaustively test a program. To nevertheless cover the most important or error-prone areas of a program, search strategies that prioritize these areas are used. Such search strategies navigate the program execution tree, analysing which paths seem interesting enough to execute and which to prune. These strategies are typically grouped into two categories: general-purpose searchers, which have no specific target beyond covering the whole program, and targeted searchers, which can be directed towards specific areas of interest. To analyse how different search strategies in Symbolic Execution affect the finding of errors and how they can be combined to improve the general outcome, the experiments conducted consist of several different searchers and combinations of them, each run on the same set of test targets. This set of test targets contains, among others, one of the most heavily tested sets of open-source tools, the GNU Coreutils. With these, the different strategies are compared in distinct categories such as the total number of errors found or the percentage of covered code. The results of this thesis show the potential of targeted searchers, with an example implementation of the Pathscore-Relevance strategy. Further, the results obtained from the conducted experiments endorse the use of combinations of search strategies. It is also shown that even simple combinations of strategies can be highly effective; for example, interleaving strategies can provide good results even if the underlying searchers might not perform well by themselves.
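Interleaving searchers can be as simple as rotating through a list of strategies, each picking the next state from a shared pool. A minimal sketch with hypothetical state/strategy interfaces (KLEE's interleaved searcher follows the same round-robin idea, but this is not its code):

    # Round-robin interleaving of search strategies over one pool of
    # pending symbolic states. `select` and `step` are hypothetical.
    import itertools

    def interleaved_search(states, strategies, budget):
        for strategy in itertools.cycle(strategies):
            if not states or budget == 0:
                break
            state = strategy.select(states)   # e.g. DFS vs. coverage-guided
            states.remove(state)
            states.extend(state.step())       # run one branch, fork children
            budget -= 1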
26

Detecting Server-Side Web Applications with Unrestricted File Upload Vulnerabilities

Huang, Jin 01 September 2021 (has links)
No description available.
27

RAUK: Automatic Schedulability Analysis of RTIC Applications Using Symbolic Execution

Håkansson, Mark January 2022 (has links)
In this thesis, the proof-of-concept tool RAUK for automatically analyzing RTIC applications for schedulability using symbolic execution is presented. The RTIC framework provides a declarative executable model for building embedded applications, whose behavior is based on established formal methods and policies. Because of this, RTIC applications are amenable to both worst-case execution time (WCET) and scheduling analysis techniques. Internally, RAUK utilizes the symbolic execution tool KLEE to generate test vectors covering all feasible execution paths in all user tasks in the RTIC application. Since KLEE also checks for possible program errors, e.g., arithmetic or array-indexing errors, it can be used via RAUK to verify the robustness of the application in terms of program errors. The test vectors are replayed on the target hardware to record a WCET estimate for each user task. These WCET measurements are used to derive a worst-case response time (WCRT) for each user task, which in turn is used to determine whether the system is schedulable using formal scheduling analysis techniques. The evaluation of this tool shows a good correlation between the results from RAUK and manual measurements of the same tasks, which showcases the viability of this approach. However, the current implementation can add some substantial overhead to the measurements, and sometimes certain types of paths in the application can be completely absent from the analysis. The work in this thesis is based on previous research in this field for WCET estimation using KLEE on an older iteration of the RTIC framework. Our contributions include a focus on an RTIC 1.0 pre-release, a seamless integration with the Rust ecosystem, minimal changes required to the application itself, and an included automatic schedulability analyzer. Currently, RAUK can verify simple RTIC applications for both program errors and schedulability with minimal changes to the application source code. The groundwork is laid out for further improvements that are required to function on larger and more complex applications. Solutions for known problems and future work are discussed in Chapters 6 and 7, respectively.
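The schedulability step rests on the classic response-time recurrence for fixed-priority tasks, R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j, iterated to a fixed point. A sketch of that textbook analysis follows; the recurrence is standard, and whether RAUK implements it exactly this way is not claimed here.

    # Textbook fixed-point response-time analysis for fixed-priority
    # tasks. C values would come from WCETs measured by replaying
    # KLEE-generated test vectors on the target hardware.
    import math

    def wcrt(C, T, i):
        """WCRT of task i; tasks 0..i-1 have higher priority.
        Iteration may diverge on an overloaded task set, so it is capped."""
        R = C[i]
        for _ in range(1000):
            R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
            if R_next == R:
                return R          # schedulable iff R <= task i's deadline
            R = R_next
        return None               # no fixed point: deem unschedulable

    C = [1, 2, 3]                 # WCET estimates per task
    T = [5, 10, 20]               # task periods
    print([wcrt(C, T, i) for i in range(3)])   # [1, 3, 7]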
28

Towards Effective Symbolic Execution

Nowack, Martin 15 April 2024 (has links)
Symbolic execution is an effective method to analyse an application's properties thoroughly, based on the idea of analysing every control flow path an application can take. Although symbolic execution has many different use cases, it suffers from two main challenges: the state space explosion problem and the computationally expensive solving of constraints collected along the paths. The primary goal of this thesis is to tackle both issues and improve the applicability of symbolic execution, i.e., to allow larger and more complex applications to be analysed by symbolic execution. This work focuses on C and C++ as languages for systems programming. Moreover, this work heavily builds on KLEE --- a symbolic execution engine based on the LLVM framework. We took on the challenge of managing the memory used by symbolic execution more efficiently. When an application is tested, symbolic execution tracks its memory. Due to the state space explosion problem, dynamic symbolic execution must handle many states simultaneously. Choosing the right granularity of tracked memory changes positively impacts symbolic execution's performance and memory utilisation. While constraint solving is one computationally expensive part of symbolic execution, incremental solving is an effective method for SMT and SAT solving to reuse existing solutions and speed up the solving process. Unfortunately, utilising this approach is less straightforward in symbolic execution, as it hinders the optimisations and caching strategies used by symbolic execution. We introduce a technique that nevertheless helps to combine these methods efficiently. Furthermore, we researched how a multi-threaded symbolic execution engine can efficiently utilise process-local resources, avoiding inter-process communication. We mainly focus on the shared use of solver queries across threads by separating shared and common query constraints and re-using the results for the latter more efficiently across multiple queries. Besides the general challenges of symbolic execution, we found it difficult to evaluate the improvements we made along with this work. While this was partially caused by our learning process, we found that symbolic execution and its state-space explosion problem make it hard to compare contributions in this field and assess their impact. Therefore, we came up with Deterministic State-Space Exploration, which improves this process.
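Incremental solving lets the solver keep learned facts for a shared prefix of constraints while branch-specific constraints come and go. A generic illustration with Z3's push/pop scopes in its Python API (KLEE's solver chain and the thesis's technique are more involved than this):

    # Incremental SMT solving with Z3 scopes (pip install z3-solver):
    # the common path-condition prefix `x > 0` is solved once, and the
    # solver state is reused for both branches instead of rebuilding.
    from z3 import Int, Solver

    x = Int("x")
    s = Solver()
    s.add(x > 0)          # shared prefix of the path condition

    s.push()              # then-branch
    s.add(x < 10)
    print(s.check())      # sat, reusing work done for the prefix

    s.pop()               # backtrack to the shared prefix
    s.push()              # else-branch
    s.add(x >= 10)
    print(s.check())      # sat, prefix work not repeated
    s.pop()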
29

On Reducing the Trusted Computing Base in Binary Verification

An, Xiaoxin 15 June 2022 (has links)
The translation of binary code to higher-level models has wide applications, including decompilation, binary analysis, and binary rewriting. This calls for high reliability of the underlying trusted computing base (TCB) of the translation methodology. A key challenge is to reduce the TCB by validating its soundness. Both the definition of soundness and the validation method heavily depend on the context: what is in the TCB and how to prove it. This dissertation presents three research contributions. The first two contributions reduce the TCB in binary verification, and the last contribution presents a binary verification process that leverages a reduced TCB. The first contribution targets the validation of OCaml-to-PVS translation -- commonly used to translate instruction-set-architecture (ISA) specifications to PVS -- where the destination language is non-executable. We present a methodology called OPEV to validate the translation between OCaml and PVS, supporting non-executable semantics. The validation includes generating large-scale tests for OCaml implementations, generating test lemmas for PVS, and generating proofs that automatically discharge these lemmas. OPEV incorporates an intermediate type system that captures a large subset of OCaml types, employing a variety of rules to generate test cases for each type. To prove the PVS lemmas, we develop automatic proof strategies and discharge the test lemmas using PVS Proof-Lite, a powerful proof scripting utility of the PVS verification system. We demonstrate our approach in two case studies that include 259 functions selected from the Sail and Lem libraries. For each function, we generate thousands of test lemmas, all of which are automatically discharged. The dissertation's second contribution targets the soundness validation of a disassembly process where the source language does not have well-defined semantics. Disassembly is a crucial step in binary security, reverse engineering, and binary verification. Various studies in these fields use disassembly tools and hypothesize that the reconstructed disassembly is correct. However, disassembly is an undecidable problem. State-of-the-art disassemblers suffer from issues ranging from incorrectly recovered instructions to incorrectly assessing which addresses belong to instructions and which to data. We present DSV, a systematic and automated approach to validate whether the output of a disassembler is sound with respect to the input binary. No source code, debugging information, or annotations are required. DSV defines soundness using a transition relation defined over concrete machine states: a binary is sound if, for all addresses in the binary that can be reached from the binary's entry point, the bytes of the (disassembled) instruction located at an address are the same as the actual bytes read from the binary. Since computing this transition relation is undecidable, DSV uses over-approximation by preventing false positives (i.e., an incorrectly disassembled reachable instruction that is deemed unreachable) and allowing, but minimizing, false negatives. We apply DSV to 102 binaries of GNU Coreutils with eight different state-of-the-art disassemblers from academia and industry. DSV is able to find soundness issues in the output of all disassemblers. The dissertation's third contribution is WinCheck: a concolic model checker that checks memory-related properties of closed-source binaries.
Bugs related to memory accesses are still a major source of security vulnerabilities. Even a single buffer overflow or use-after-free in a large program may be the cause of a software crash, a data leak, or a hijacking of the control flow. Typical static formal verification tools aim to detect these issues at the source code level. WinCheck is a model checker that is directly applicable to closed-source and stripped Windows executables. A key characteristic of WinCheck is that it performs its execution as symbolically as possible while leaving any information related to pointers concrete. This produces a model checker tailored to pointer-related properties, such as buffer overflows, use-after-free, null-pointer dereferences, and reading from uninitialized memory. The technique thus provides a novel trade-off between ease of use, accuracy, applicability, and scalability. We apply WinCheck to ten closed-source binaries available in a Windows 10 distribution, as well as the Windows version of the entire Coreutils library. We conclude that the approach taken is precise -- it produces only a few false negatives -- but may not explore the entire state space due to unresolved indirect jumps. / Doctor of Philosophy / Binary verification is a process that verifies a class of properties, usually security-related properties, on binary files, and does not need access to source code. Since a binary file is composed of byte sequences and is not human-readable, a number of assumptions are usually made in the binary verification process. The assumptions often involve the error-free nature of a set of subsystems used in the verification process and constitute the verification process's trusted computing base (or TCB). The reliability of the verification process therefore depends on how reliable the TCB is. The dissertation presents three research contributions in this regard. The first two contributions reduce the TCB in binary verification, and the last contribution presents a binary verification process that leverages a reduced TCB. The dissertation's first contribution presents a validation of OCaml-to-PVS translations -- commonly used to translate a computer architecture's instruction specifications to PVS, a language that allows mathematical specifications. To build a reliable semantic model of assembly instructions, which is assumed to be in the TCB, it is necessary to validate the translation. The dissertation's second contribution validates the soundness of the disassembly process, which translates a binary file to the corresponding assembly instructions. Since the disassembly process is generally assumed to be trustworthy in many binary verification works, the TCB of binary verification can be reduced by validating the soundness of the disassembly process. With the reduced TCB, the dissertation introduces its third and final contribution, WinCheck: a concolic model checker that validates pointer-related properties of closed-source Windows binaries. The pointer-related properties include the absence of buffer overflows, use-after-free errors, and null-pointer dereferences.
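DSV's soundness criterion is easy to state as a byte-level check once a reachability set is given. A small self-contained sketch (reachability in DSV comes from an over-approximated transition relation; here it is simply an input):

    # DSV-style soundness check: for every reachable address, the
    # disassembler's claimed instruction bytes must equal the bytes
    # actually stored in the binary at that address.
    def disassembly_is_sound(binary_bytes, disassembly, reachable_addrs):
        """disassembly maps address -> instruction bytes claimed by
        the tool; binary_bytes is the raw file content."""
        for addr in reachable_addrs:
            insn = disassembly.get(addr)
            if insn is None:
                return False      # reachable instruction not recovered
            if binary_bytes[addr:addr + len(insn)] != insn:
                return False      # wrong bytes recovered at this address
        return True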
30

Efficient Symbolic Execution of Concurrent Software

Guo, Shengjian 26 April 2019 (has links)
Concurrent software has been widely utilized in computer systems owing to its highly efficient computation. However, testing and verifying concurrent software remain challenging tasks. This is not only because of the non-deterministic thread interferences, which are hard to reason about, but also because of the large state space caused by the simultaneous path and interleaving explosions. That is, the number of program paths in each thread may be exponential in the number of branch conditions, and the number of thread interleavings may be exponential in the number of concurrent operations. This dissertation presents a set of new methods, built upon symbolic execution, a program analysis technique that systematically explores the program state space, for testing concurrent programs. By modeling both functional and non-functional properties of the programs as assertions, these new methods efficiently analyze the viable behaviors of the given concurrent programs. The first method is assertion-guided symbolic execution, a state space reduction technique that identifies and eliminates redundant executions w.r.t. the explored interleavings. The second method is incremental symbolic execution, which generates test inputs only for the program behaviors influenced by small code changes between two program versions. The third method is SYMPLC, a technique with domain-specific reduction strategies for generating tests for multitasking Programmable Logic Controller (PLC) programs written in languages specified by the IEC 61131-3 standard. The last method is adversarial symbolic execution, a technique for detecting concurrency-related side-channel information leaks by analyzing the cache timing behaviors of a concurrent program in symbolic execution. This dissertation evaluates the proposed methods on a diverse set of both synthesized programs and real-world applications. The experimental results show that these techniques can significantly outperform state-of-the-art symbolic execution tools for concurrent software. / Doctor of Philosophy / Software testing is a technique that runs software as a black box on computer hardware multiple times, with different inputs per run, to test whether the software behavior conforms to the functionality designed by the developers. Nowadays, programmers have been increasingly developing multithreaded and multitasking software, e.g., web browsers and web servers, to utilize the highly efficient multiprocessor hardware. This approach significantly improves software performance, since a large computing job can be decomposed into a set of small jobs that are then distributed to concurrently running threads (tasks). However, testing multithreaded (multitask) software is extremely challenging. The most critical problem is the inherent non-determinism. Typically, executing sequential software with the same input data always results in the same output. However, running multithreaded (multitask) software multiple times, even with the same input data, may yield different output in each run. The root cause is that concurrent threads (tasks) may interleave their running progress at any time; thus the internal execution order may be altered unexpectedly, causing runtime errors. Meanwhile, finding such faults is difficult, since the number of all possible interleavings can grow exponentially in the number of concurrent thread (task) operations. This dissertation proposes four methods to test multithreaded/multitask software efficiently.
The first method summarizes the already-tested program behaviors to avoid future testing runs that cannot lead to new faults. The second method only tests program behaviors that are impacted by program changes. The third method tests multitask Programmable Logic Controller (PLC) programs by excluding testing runs that are infeasible w.r.t. the PLC semantics. The last method tests non-functional program properties by systematic concurrency analysis. This dissertation evaluates these methods on a diverse set of benchmarks. The experimental results show that the proposed methods significantly outperform state-of-the-art techniques for concurrent software analysis.
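The interleaving explosion is easy to quantify: two threads executing m and n atomic operations admit binomial(m+n, n) distinct interleavings, so the schedule space blows up even before per-thread branching is counted. A few lines make the growth concrete:

    # Number of interleavings of two threads with n atomic operations
    # each: C(2n, n). Growth is exponential in n.
    from math import comb

    for n in (2, 5, 10, 20):
        print(n, "ops per thread:", comb(2 * n, n), "interleavings")
    # 2 ops per thread: 6
    # 20 ops per thread: 137846528820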
