11

Finding Data Races in Software Binaries with Symbolic Execution

Jackson, Nathan D. 27 May 2020 (has links)
No description available.
12

Building An Abstract-Syntax-Tree-Oriented Symbolic Execution Engine for PHP Programs

Huang, Jin 07 June 2018 (has links)
No description available.
13

Architecture for a Symbolic Execution Environment

Norlén, Joacim January 2022 (has links)
Program testing is an important aspect of software development. Symbolic execution can be used as a tool to automatically verify the correctness of programs for all feasible paths of execution. Moreover, for embedded systems, symbolic execution can be used to generate test cases that estimate run times, helping to determine the worst-case execution time (WCET) and schedulability of systems. This thesis explores an architecture for symbolic execution for use in embedded Rust. Accompanying the architecture are implementation details of a prototype, Symex, that can handle small programs. Symex evaluates all feasible paths of execution, looking for errors and assertion violations, and reports which concrete inputs lead to errors. Included with the prototype is a command-line tool that automatically builds and analyzes Rust projects; an included library provides functions for manipulating symbolic variables to aid the analysis. The method of evaluating all feasible paths works well for evaluating embedded systems, where the aim is typically to keep code complexity low, which in turn makes the software resilient to path explosion. Where path explosion cannot be avoided, the library's functions for manipulating symbolic variables can be used to further constrain the variables and reduce the number of feasible paths. The evaluation shows the architecture is feasible for the intended use case in embedded systems. Furthermore, evaluation of the prototype shows how the system can be used to show the absence of errors, verify functions, and check for functional equivalence. As is inherent to the symbolic execution approach, the system cannot handle programs with a large branching factor.
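The core loop this abstract describes, enumerating every feasible path and reporting a concrete input per path, is easy to sketch outside of Rust. The following Python sketch is a minimal illustration of the idea, not Symex's actual API; it assumes the z3-solver package and a hand-flattened toy program.

```python
from z3 import BitVec, Solver, Not, sat

x = BitVec("x", 32)  # the program's symbolic input

# Toy program under analysis:
#   if x > 100:
#       if x < 200:
#           error()   # the path we want a concrete witness for
paths = [
    ("error path",   [x > 100, x < 200]),
    ("ok, x >= 200", [x > 100, Not(x < 200)]),
    ("ok, x <= 100", [Not(x > 100)]),
]

for name, path_condition in paths:
    s = Solver()
    s.add(*path_condition)
    if s.check() == sat:
        # Report a concrete input that drives execution down this path.
        print(f"{name}: reachable, e.g. x = {s.model()[x]}")
    else:
        print(f"{name}: infeasible, pruned")
```

Constraining a symbolic variable up front (e.g. adding x < 150 as an assumption) shrinks the set of feasible paths, which is the path-explosion mitigation the abstract mentions.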
14

Fuzzing Deeper Logic with Impeding Function Transformation

Rowan Brock Hart (14205404) 02 December 2022 (has links)
Fuzzing, a technique for negative testing of programs using randomly mutated or generated input data, is responsible for the discovery of thousands of bugs in software from web browsers to video players. Advances in fuzzing focus on various methods for enhancing the number of bugs found and reducing the time spent to find them by applying various static, dynamic, and symbolic binary analysis techniques. As a stochastic process, fuzzing is an inherently inefficient method for discovering bugs residing in deep logic of programs due to the compounding complexity of preconditions as paths in programs grow in length. We propose a novel system to overcome this limitation by abstracting away path-constraining preconditions from a statement level to a function level by identifying impeding functions, functions that inhibit control flow from proceeding. REFACE is an end-to-end system for enhancing the capabilities of an existing fuzzer by generating variant binaries that present an easier-to-fuzz interface and expands an ongoing fuzzing campaign with minimal offline overhead. REFACE operates entirely on binary programs, requiring no source code or symbols to run, and is fuzzer-agnostic. This enhancement represents a step forward in a new direction toward abstraction of code that has historically presented a significant barrier to fuzzing and aims to make incremental progress by way of several ancillary dataflow analysis techniques with potential wide applicability. We attain a significant improvement in speed of obtaining maximum coverage, re-discover one known bug, and discover one possible new bug in a binary program during evaluation against an un-modified state-of-the-art fuzzer with no augmentation.
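An impeding function is easiest to see in a source-level analogue. REFACE itself works on stripped binaries, so everything below is a hypothetical Python illustration: a checksum guard makes the deep logic statistically unreachable for a mutational fuzzer, and the variant simply forces the guard to succeed.

```python
import zlib

def deep_logic(payload: bytes) -> None:
    # The code we actually want to fuzz; hides a toy bug.
    if payload.startswith(b"BUG"):
        raise ValueError("crash")

def checksum_ok(data: bytes) -> bool:
    """Impeding function: the first 4 bytes must be the CRC32 of the rest.
    Random mutation satisfies this with probability about 2**-32."""
    return len(data) > 4 and int.from_bytes(data[:4], "big") == zlib.crc32(data[4:])

def parse(data: bytes) -> None:
    if not checksum_ok(data):
        return            # nearly every fuzzer-generated input stops here
    deep_logic(data[4:])

# REFACE-style variant: rewrite the guard so control flow proceeds.
# Crashes found under the variant can later be re-validated by fixing
# up the checksum and replaying against the original program.
def checksum_ok_variant(data: bytes) -> bool:
    return True

parse(b"\x00\x00\x00\x00junk")  # the original stops at the guard
```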
15

Automatic Deobfuscation and Reverse Engineering of Obfuscated Code

Yadegari, Babak January 2016 (has links)
Automatic malware analysis is an essential part of today's computer security practices. Nearly one million malware samples were delivered to analysts daily in 2014 alone, and the number of samples submitted for analysis grows almost exponentially each year. Given the size of the threat we face today and the amount of malicious code emerging every day, the ability to automatically analyze unknown and unwanted software is more important than ever. On the other hand, malware writers adapt their malicious code to new security measures to protect it from being exposed and detected. This is usually achieved by employing obfuscation techniques that complicate the reverse engineering and analysis of the code by adding large amounts of unnecessary and irrelevant computation. Most malicious samples found in the wild are obfuscated and equipped with complicated anti-analysis defenses intended to hide the malicious intent of the malware by defeating analysis and/or increasing analysis time. Deobfuscation (reversing the obfuscation) requires automatic techniques to extract the original logic embedded in the obfuscated code for further analysis; presumably the deobfuscated code requires less analysis time and is easier to analyze than the obfuscated one. Previous approaches in this regard target specific types of obfuscation by making strong assumptions about the underlying protection scheme, leaving opportunities for adversaries to attack. This work addresses this limitation by proposing new program analysis techniques that are effective against code obfuscation while remaining generic, by minimizing assumptions about the underlying code. We found that standard program analysis techniques, including well-known data- and control-flow analyses and symbolic execution, suffer from imprecision due to obfuscation, and we show how to mitigate this loss of precision. Using these more precise program analysis techniques, we propose a deobfuscation technique that succeeds in reversing complex obfuscations such as virtualization-obfuscation and Return-Oriented Programming (ROP).
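A small example of the kind of imprecision obfuscation induces is the opaque predicate: a branch condition that is constant for every input but looks data-dependent. The sketch below, a Python illustration using the z3-solver package rather than the dissertation's actual tooling, proves such a predicate always true, which justifies deleting the dead branch during deobfuscation.

```python
from z3 import BitVec, Solver, Not, unsat

x = BitVec("x", 32)

# A predicate obfuscators like to plant: x*(x+1) is a product of two
# consecutive integers, so its low bit is always 0 (even under wraparound).
opaque = ((x * (x + 1)) & 1) == 0

s = Solver()
s.add(Not(opaque))           # search for any counterexample
if s.check() == unsat:
    # No input falsifies the predicate: the "false" arm is dead code
    # and the conditional can be folded away.
    print("opaquely true; eliminate the dead branch")
```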
16

Automated coverage calculation and test case generation

Morrison, George Campbell 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: This research combines symbolic execution, a formal method of static analysis, with various test adequacy criteria to explore the effectiveness of using symbolic execution for calculating code coverage on a program's existing JUnit test suites. Code coverage is measured with a number of test adequacy criteria, including statement coverage, branch coverage, condition coverage, method coverage, class coverage, and loop coverage. The results of the code coverage calculation are then used to automatically generate JUnit test cases for areas of a program that are not sufficiently covered. The level of redundancy of each test case is also calculated during coverage calculation, thereby identifying fully redundant, and partially redundant, test cases. The combination of symbolic execution and code coverage calculation is extended to perform coverage calculation during a manual execution of a program, allowing testers to measure the effectiveness of manual testing. This is implemented as an Eclipse plug-in, named ATCO, which attempts to take advantage of the Eclipse workspace and extensible user interface environment to improve usability by minimizing the user interaction required to use the tool. The code coverage calculation process uses constraint solving to determine method parameter values that reach specific areas in the program. Constraint solving is an expensive computation, so the tool was parallelised using Java's Concurrency package to reduce its overall execution time.
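The constraint-solving step, finding method parameter values that steer execution into an uncovered area, can be sketched as follows. This is a hedged Python/z3 illustration, not ATCO's Java implementation; the method and branch condition are invented for the example.

```python
from z3 import Int, Solver, And, sat

# Suppose coverage calculation reports this branch uncovered:
#   void m(int a, int b) { if (a > b && a % 3 == 0) { /* uncovered */ } }
a, b = Int("a"), Int("b")
path_to_branch = And(a > b, a % 3 == 0)

s = Solver()
s.add(path_to_branch)
if s.check() == sat:
    model = s.model()
    # These concrete values become arguments in a generated JUnit test.
    print(f"generated test input: m({model[a]}, {model[b]})")
```

Because each uncovered area yields an independent solver query like this one, the queries are natural units of work to parallelise, which is what the tool does with Java's Concurrency package.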
17

Identifying Testing Requirements for Modified Software

Apiwattanapong, Taweesup 09 July 2007 (has links)
Throughout its lifetime, software must be changed for many reasons, such as bug fixing, performance tuning, and code restructuring. Testing modified software is the main activity performed to gain confidence that changes behave as they are intended and do not have adverse effects on the rest of the software. A fundamental problem of testing evolving software is determining whether test suites adequately exercise changes and, if not, providing suitable guidance for generating new test inputs that target the modified behavior. Existing techniques evaluate the adequacy of test suites based only on control- and data-flow testing criteria. They do not consider the effects of changes on program states and, thus, are not sufficiently strict to guarantee that the modified behavior is exercised. Also, because of the lack of this guarantee, these techniques can provide only limited guidance for generating new test inputs. This research has developed techniques that will assist testers in testing evolving software and provide confidence in the quality of modified versions. In particular, this research has developed a technique to identify testing requirements that ensure that the test cases satisfying them will result in different program states at preselected parts of the software. This research has also developed supporting techniques for identifying testing requirements. Such techniques include (1) a differencing technique, which computes differences and correspondences between two software versions, and (2) two dynamic-impact-analysis techniques, which identify parts of software that are likely affected by changes with respect to a set of executions.
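At the textual level, the differencing step, computing differences and correspondences between two versions, resembles what Python's standard difflib provides out of the box. The dissertation's technique works on program representations rather than raw lines, so this is only a simplified analogue:

```python
import difflib

old = ["int f(int x) {", "  return x + 1;", "}"]
new = ["int f(int x) {", "  if (x < 0) return 0;", "  return x + 1;", "}"]

# "equal" opcodes are correspondences; "insert"/"replace"/"delete" are the
# changes whose effects on program state the testing requirements must cover.
for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
    print(tag, old[i1:i2], "->", new[j1:j2])
```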
18

Assertion-based repair of complex data structures

Elkarablieh, Bassem H. 09 August 2012 (has links)
As software systems grow in complexity and size, reliability becomes a major concern. A large share of industrial and academic effort for increasing software reliability is directed towards design, testing, and validation, activities performed before the software is deployed. While such activities are fundamental for achieving high levels of confidence in software systems, bugs still occur after deployment, resulting in costly software failures. This dissertation presents assertion-based repair, a novel approach for error recovery from insidious bugs that occur after the system is deployed. It describes the design and implementation of a repair framework for Java programs and evaluates the efficiency and effectiveness of the approach on repairing data structure errors in both software libraries and open-source stand-alone applications. Our approach introduces a new form of assertion, assertAndRepair, for developers to use when checking the consistency of the data structures manipulated by their programs with respect to a set of desired structural and data properties. The developer provides the properties in a Java boolean method, repOk, which returns a truth value based on whether a given data structure satisfies these properties. Upon an assertion violation due to a faulty structure, instead of terminating the execution, the structure is repaired, i.e., its fields are mutated such that the resulting structure satisfies the desired properties, and the program proceeds with its execution. To aid developers in detecting the causes of the fault, repair logs are generated that provide useful information about the performed mutations. The repair process uses a novel algorithm that performs a systematic search based on symbolic execution to determine valuations for the structure's fields that result in a valid structure. Our experiments on repairing both library data structures and stand-alone applications demonstrate the utility and efficiency of the approach in repairing large structures, enabling programs to recover from crippling errors and proceed with their executions. Assertion-based repair presents a novel post-deployment mechanism that integrates with existing and newly developed software, providing them with the defensive ability to recover from unexpected runtime errors. Programmers already understand the advantages of using assertions and are comfortable with writing them. Providing new analyses and powerful extensions for them presents an attractive direction towards building more reliable software.
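The assertAndRepair/repOk contract is concrete enough to sketch. Below is a hypothetical Python analogue; the dissertation's framework targets Java and drives its search with symbolic execution, whereas this toy repairs a singly linked list by direct field mutation.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def rep_ok(head) -> bool:
    """Structural invariant: acyclic, values strictly increasing."""
    seen, cur = set(), head
    while cur is not None:
        if id(cur) in seen:
            return False                      # cycle detected
        seen.add(id(cur))
        if cur.next is not None and cur.next.value <= cur.value:
            return False                      # ordering violated
        cur = cur.next
    return True

def assert_and_repair(head):
    """Instead of aborting on a violation, mutate the offending
    fields so the structure satisfies rep_ok again."""
    seen, cur = set(), head
    while cur is not None:
        seen.add(id(cur))
        if cur.next is not None and id(cur.next) in seen:
            cur.next = None                   # break the cycle
        elif cur.next is not None and cur.next.value <= cur.value:
            cur.next.value = cur.value + 1    # restore ordering
        cur = cur.next
    assert rep_ok(head)

# A corrupted list 1 -> 5 -> 2 (ordering broken); execution proceeds
# afterwards with the valid structure 1 -> 5 -> 6.
lst = Node(1, Node(5, Node(2)))
assert_and_repair(lst)
```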
19

Enhancing symbolic execution using memoization and incremental techniques

Yang, Guowei, active 2013 20 September 2013 (has links)
The last few years have seen a resurgence of interest in the use of symbolic execution, a program analysis technique developed more than three decades ago to analyze program execution paths. However, symbolic execution remains an expensive technique, and scaling it remains a key technical challenge. Two key factors contribute to its cost: (1) the number of paths that need to be explored and (2) the cost of constraint solving, which is typically required for each explored path. Our insight is that the cost of symbolic execution can be reduced by an incremental approach, which uses static analysis and dynamic analysis to focus on relevant parts of code and reuse previous analysis results, thereby addressing both key cost factors. This dissertation presents Memoized Incremental Symbolic Execution, a novel approach that embodies our insight. Using symbolic execution in practice often requires several successive runs of the technique on largely similar underlying problems, where successive problems differ due to some change, which may be to code, e.g., to fix a bug; to analysis parameters, e.g., to increase the path exploration depth; or to correctness properties, e.g., to check against stronger specifications that are written as assertions in code. Memoized Incremental Symbolic Execution, a three-fold approach, leverages the similarities in the successive problems to reduce the total cost of applying the technique. Our prototype tool-set is built on Symbolic PathFinder. Experimental results show that Memoized Incremental Symbolic Execution enhances the efficacy of symbolic execution.
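The memoization half of the idea can be sketched generically: cache each path condition's satisfiability verdict so a successive run pays the solver cost only for constraints it has not seen before. A hedged Python/z3 sketch, not the Symbolic PathFinder-based tool-set itself:

```python
from z3 import Int, Solver, And

x, y = Int("x"), Int("y")
_memo = {}  # canonical path condition -> cached solver verdict

def check_memoized(path_condition):
    key = path_condition.sexpr()      # canonical s-expression of the formula
    if key not in _memo:
        s = Solver()
        s.add(path_condition)
        _memo[key] = s.check()        # pay the constraint-solving cost once
    return _memo[key]

pc = And(x > 0, y > x, y < 3)
print(check_memoized(pc))  # first run: solver invoked
print(check_memoized(pc))  # re-run on an unchanged path: memo table hit
```

The dissertation's approach works at the level of the whole execution tree rather than a flat table, which is what lets an incremental re-run skip unchanged parts of the exploration entirely.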
20

A verified framework for symbolic execution in the ACL2 theorem prover

Swords, Sol Otis 11 February 2011 (has links)
Mechanized theorem proving is a promising means of formally establishing facts about complex systems. However, in applying theorem proving methodologies to industrial-scale hardware and software systems, a large amount of user interaction is required in order to prove useful properties. In practice, the human user tasked with such a verification must gain a deep understanding of the system to be verified and prove numerous lemmas in order to allow the theorem proving program to approach a proof of the desired fact. Furthermore, proofs that fail during this process are a source of confusion: the proof may fail either because the conjecture was false or because the prover required more help from the user in order to reach the desired conclusion. We have implemented a symbolic execution framework inside the ACL2 theorem prover in order to help address these issues on certain problem domains. Our framework introduces a proof strategy that applies bit-level symbolic execution using BDDs to finite-domain problems. This proof strategy is a fully verified decision procedure for such problems, and on many useful problem domains its capacity vastly exceeds that of exhaustive testing. Our framework also produces counterexamples for conjectures that it determines to be false. Our framework seeks to reduce the amount of user interaction necessary in proving theorems about industrial-scale hardware and software systems. By increasing the automation available in the prover, we allow the user to complete useful proofs while understanding less of the detailed implementation of the system. Furthermore, by producing counterexamples for falsified conjectures, our framework reduces the time the user spends trying to determine why a proof failed.
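Both behaviors the abstract promises, a fully verified decision on finite domains and counterexamples for false conjectures, have a compact analogue in any bit-level decision procedure. The sketch below uses Python and z3 bit-vectors purely as an illustration; the actual framework runs BDD-based symbolic execution inside ACL2.

```python
from z3 import BitVec, Solver, Not, unsat

x, y = BitVec("x", 8), BitVec("y", 8)

def check_conjecture(conjecture):
    """Decide a finite-domain conjecture; if false, return a counterexample."""
    s = Solver()
    s.add(Not(conjecture))
    if s.check() == unsat:
        return "theorem: holds for all 2**16 inputs"
    return f"counterexample: {s.model()}"

print(check_conjecture((x & y) == (y & x)))  # holds on the full domain
print(check_conjecture((x | 1) == x))        # fails, e.g. when x is even
```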
