11

FUZZING DEEPER LOGIC WITH IMPEDING FUNCTION TRANSFORMATION

Rowan Brock Hart (14205404) 02 December 2022 (has links)
Fuzzing, a technique for negative testing of programs using randomly mutated or generated input data, is responsible for the discovery of thousands of bugs in software from web browsers to video players. Advances in fuzzing focus on various methods for increasing the number of bugs found and reducing the time spent finding them by applying static, dynamic, and symbolic binary analysis techniques. As a stochastic process, fuzzing is an inherently inefficient method for discovering bugs residing in the deep logic of programs, because the complexity of preconditions compounds as program paths grow in length. We propose a novel system to overcome this limitation by abstracting path-constraining preconditions from the statement level to the function level, identifying impeding functions: functions that inhibit control flow from proceeding. REFACE is an end-to-end system that enhances the capabilities of an existing fuzzer by generating variant binaries that present an easier-to-fuzz interface, expanding an ongoing fuzzing campaign with minimal offline overhead. REFACE operates entirely on binary programs, requiring no source code or symbols to run, and is fuzzer-agnostic. This enhancement represents a step in a new direction toward abstracting code that has historically presented a significant barrier to fuzzing, and makes incremental progress by way of several ancillary dataflow analysis techniques with potential wide applicability. In evaluation against an unmodified state-of-the-art fuzzer with no augmentation, we attain a significant improvement in the speed of reaching maximum coverage, re-discover one known bug, and discover one possible new bug in a binary program.
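To make the notion of an impeding function concrete, here is a minimal Python sketch (a hypothetical checksum gate, not REFACE's actual transformation): random fuzz inputs almost never pass the check, while a variant with the check stubbed out always reaches the deep logic.

    import os

    def checksum_ok(data: bytes) -> bool:       # the "impeding function"
        return sum(data) % 256 == 0x5A          # ~1 in 256 random inputs pass

    def parse(data: bytes) -> str:
        if not checksum_ok(data):
            return "rejected"                   # fuzzing rarely gets past here
        return "deep logic reached"             # the code we want to exercise

    def checksum_ok_stub(data: bytes) -> bool:  # variant binary: the impeding
        return True                             # function is bypassed

    def parse_variant(data: bytes) -> str:
        if not checksum_ok_stub(data):
            return "rejected"
        return "deep logic reached"

    print(parse(os.urandom(16)))                # almost always "rejected"
    print(parse_variant(os.urandom(16)))        # always reaches deep logic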
12

Building An Abstract-Syntax-Tree-Oriented Symbolic Execution Engine for PHP Programs

Huang, Jin 07 June 2018 (has links)
No description available.
13

An Efficient 2-Phase Strategy to Achieve High Branch Coverage

Prabhu, Sarvesh P. 06 March 2012 (has links)
Symbolic-execution-based test generation is gaining popularity for software testing. The increasing complexity of software poses new challenges for symbolic-execution-based test generation because of the path explosion problem. We present a new 2-phase symbolic-execution-driven strategy that achieves high branch coverage in software quickly. Phase 1 follows a greedy approach that quickly covers as many branches as possible by exploring each branch through its corresponding shortest path prefix. Phase 2 covers the branches left uncovered because the shortest path to the branch was infeasible. In Phase 1, basic conflict-driven learning is used to skip all paths that contain any earlier-encountered conflicting conditions, while in Phase 2, more intelligent conflict-driven learning is used to skip regions that have no feasible path to any unexplored branch. This results in a considerable reduction in unnecessary Satisfiability Modulo Theories (SMT) solver calls. Experimental results show that significant speedup can be achieved, effectively reducing the time to detect a bug and providing higher branch coverage for a fixed timeout period than previous techniques. / Master of Science
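A toy Python sketch of the two-phase flow, under assumed encodings (a path is a tuple of branch conditions, and a trivial contradiction check stands in for the SMT solver): Phase 1 tries each branch's shortest path prefix, learned conflicts let later checks be pruned without solver calls, and Phase 2 retries uncovered branches along longer paths.

    def feasible(conds: frozenset) -> bool:
        # Stand-in for an SMT call: infeasible iff a condition and its
        # negation ("c" and "!c") both appear on the path.
        return not any(c.startswith("!") and c[1:] in conds for c in conds)

    learned_conflicts = []               # conflict-driven learning: bad subsets

    def check_path(conds, stats):
        key = frozenset(conds)
        if any(conflict <= key for conflict in learned_conflicts):
            stats["skipped"] += 1        # pruned without a solver call
            return False
        stats["solver_calls"] += 1
        if feasible(key):
            return True
        learned_conflicts.append(key)    # remember the conflict for later paths
        return False

    paths_to_branch = {                  # candidate paths per target branch
        "b1": [("x>0",), ("x>0", "y>0")],
        "b2": [("x>0", "!x>0"), ("y>0", "z>0")],   # shortest path infeasible
    }
    stats = {"solver_calls": 0, "skipped": 0}
    covered = set()
    for branch, paths in paths_to_branch.items():      # Phase 1: greedy
        if check_path(min(paths, key=len), stats):
            covered.add(branch)
    for branch, paths in paths_to_branch.items():      # Phase 2: the rest
        if branch not in covered and any(
                check_path(p, stats) for p in sorted(paths, key=len)):
            covered.add(branch)
    print(covered, stats)   # {'b1', 'b2'}, with one check pruned by learning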
14

Partitioning Strategies to Enhance Symbolic Execution

Marcellino, Brendan Adrian 11 August 2015 (has links)
Software testing is a fundamental part of the software development process, yet it remains costly, consuming about half of the development cost. The path explosion problem often forces one to consider an extremely large number of paths in order to reach a specific target. Symbolic execution can reduce this cost by using symbolic values and heuristic exploration strategies. Although various exploration strategies have been proposed in the past, the number of Satisfiability Modulo Theories (SMT) solver calls for reaching a target is still large, resulting in longer execution times for programs containing many paths. In this thesis, we present two partitioning strategies to mitigate this problem and thereby reduce unnecessary SMT solver calls. In sequential partitioning, code sections are analyzed sequentially to take advantage of infeasible paths discovered in earlier sections. In dynamic partitioning, applied to code in static single assignment (SSA) form, the code sections are analyzed in a non-consecutive order guided by data dependency metrics within the sections. Experimental results show that both strategies achieve significant speedup by reducing the number of unnecessary solver calls in large programs: more than 1000x over conflict-driven learning. / Master of Science
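A rough sketch of the sequential-partitioning intuition, under a made-up constraint format and a trivial solver stand-in: once an earlier section's accumulated constraints conflict, later sections on that path cost no further solver calls.

    def solve(conjunction):              # stand-in for one SMT solver call
        return not ("x>0" in conjunction and "x<0" in conjunction)

    sections = [                         # path constraints, per code section
        ["x>0", "y==x+1"],               # section A
        ["x<0"],                         # section B: conflicts with A
        ["z>y"],                         # section C: never solved on this path
    ]
    accumulated, calls = [], 0
    for sec in sections:                 # analyze sections in program order
        accumulated += sec
        calls += 1
        if not solve(accumulated):
            print(f"pruned after {calls} calls;",
                  len(sections) - calls, "section(s) skipped")
            break
    else:
        print("path feasible")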
15

Automatic Deobfuscation and Reverse Engineering of Obfuscated Code

Yadegari, Babak January 2016 (has links)
Automatic malware analysis is an essential part of today's computer security practice. Nearly one million malware samples were delivered to analysts daily in 2014 alone, and the number of samples submitted for analysis grows almost exponentially each year. Given the size of the threat we face today and the amount of malicious code emerging every day, the ability to automatically analyze unknown and unwanted software is more critical than ever. Malware writers, in turn, adapt their malicious code to new security measures to keep it from being exposed and detected. This is usually achieved by employing obfuscation techniques that complicate reverse engineering and analysis of the code by adding many unnecessary and irrelevant computations. Most malicious samples found in the wild are obfuscated and equipped with complicated anti-analysis defenses intended to hide the malicious intent of the malware by defeating analysis and/or increasing analysis time. Deobfuscation (reversing the obfuscation) requires automatic techniques to extract the original logic embedded in the obfuscated code for further analysis; the deobfuscated code presumably requires less analysis time and is easier to analyze than the obfuscated one. Previous approaches target specific types of obfuscation by making strong assumptions about the underlying protection scheme, leaving opportunities for adversaries to attack. This work addresses this limitation by proposing new program analysis techniques that are effective against code obfuscation while remaining generic by minimizing assumptions about the underlying code. We find that standard program analysis techniques, including well-known data- and control-flow analyses and symbolic execution, suffer from imprecision due to obfuscation, and we show how to mitigate this loss of precision. Using these more precise program analysis techniques, we propose a deobfuscation technique that succeeds in reversing complex obfuscations such as virtualization obfuscation and Return-Oriented Programming (ROP).
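One common building block behind such deobfuscation, deleting trace computations that never influence the output, can be sketched as a backward liveness pass (a toy stand-in, not the dissertation's actual analyses; the trace format is assumed):

    trace = [                       # (dest, source operands) in execution order
        ("t1", ["input"]),
        ("junk1", ["junk0"]),       # obfuscation noise: never feeds the output
        ("t2", ["t1", "t1"]),
        ("junk2", ["junk1"]),
        ("out", ["t2"]),
    ]
    live = {"out"}
    kept = []
    for dest, srcs in reversed(trace):  # walk backwards, keep only live defs
        if dest in live:
            kept.append((dest, srcs))
            live |= set(srcs)
    kept.reverse()
    print(kept)   # junk1/junk2 dropped: only t1, t2, out remain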
16

Automated coverage calculation and test case generation

Morrison, George Campbell 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: This research combines symbolic execution, a formal method of static analysis, with various test adequacy criteria to explore the effectiveness of using symbolic execution for calculating code coverage on a program's existing JUnit test suites. Code coverage is measured with a number of test adequacy criteria, including statement coverage, branch coverage, condition coverage, method coverage, class coverage, and loop coverage. The results of the code coverage calculation are then used to automatically generate JUnit test cases for areas of a program that are not sufficiently covered. The level of redundancy of each test case is also calculated during coverage calculation, thereby identifying fully redundant, and partially redundant, test cases. The combination of symbolic execution and code coverage calculation is extended to perform coverage calculation during a manual execution of a program, allowing testers to measure the effectiveness of manual testing. This is implemented as an Eclipse plug-in, named ATCO, which attempts to take advantage of the Eclipse workspace and extensible user interface environment to improve the usability of the tool by minimizing the user interaction required to use it. The code coverage calculation process uses constraint solving to determine method parameter values that reach specific areas in the program. Constraint solving is an expensive computation, so the tool was parallelised using Java's Concurrency package to reduce its overall execution time. / AFRIKAANSE OPSOMMING: Hierdie navorsing kombineer simboliese uitvoering, 'n formele metode van statiese analise, met verskeie toets genoegsaamheid kriteria, om die effektiwiteit van die gebruik van simboliese uitvoer te ondersoek vir die berekening van kode dekking op 'n program se bestaande JUnit toets stelle. Kode dekking word gemeet deur verskeie toets genoegsaamheid kriteria, insluitend stelling dekking, tak dekking, kondisie dekking, metode dekking, klas dekking, en lus dekking. Die resultate van die kode dekking berekeninge word dan gebruik om outomaties JUnit toets voorbeelde te genereer vir areas van 'n program wat nie doeltreffend ondersoek word nie. Die vlak van oortolligheid van elke toets voorbeeld word ook bereken gedurende die dekkingsberekening, en daardeur word volledig oortollige, en gedeeltelik oortollige, toets voorbeelde geïdentifiseer. Die kombinasie van simboliese uitvoer en kode dekking berekening is uitgebrei deur die uitvoer van dekking berekeninge van 'n gebruiker-beheerde uitvoer, om sodoende kode dekking van 'n gebruiker-beheerde uitvoer van 'n program te meet. Dit laat toetsers toe om die effektiwiteit van hulle beheerde uitvoer te meet. Bogenoemde word geïmplementeer as 'n Eclipse aanvoegsel, genaamd ATCO, wat poog om voordeel te trek vanuit die Eclipse werkspasie, en die uitbreibare gebruiker oordrag omgewing, om die bruikbaarheid van ATCO te verbeter, deur die vermindering van die gebruiker interaksie wat benodig word om ATCO te gebruik. Die kode dekking berekeningsproses gebruik beperking oplossing om metode invoer waardes te bereken, om spesifieke areas in die program te bereik. Beperking oplossing is 'n duur berekening, so ATCO is geparalleliseer, met behulp van Java se Concurrency pakket, om die algehele uitvoer tyd van die program te verminder.
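The bookkeeping side of branch-coverage calculation can be sketched in a few lines of Python (an assumed instrumentation scheme; ATCO itself analyzes Java/JUnit code via symbolic execution). The uncovered branch sides are exactly what a test generator would then target with constraint solving:

    covered = set()

    def branch(bid, taken):
        covered.add((bid, taken))    # record which side of the branch executed
        return taken

    def classify(x):
        if branch("B1", x > 0):
            return "pos"
        if branch("B2", x == 0):
            return "zero"
        return "neg"

    for test_input in [5, -3]:       # an existing "test suite"
        classify(test_input)

    all_sides = {(b, t) for b in ("B1", "B2") for t in (True, False)}
    missing = all_sides - covered
    print("branch coverage: %d/%d, missing: %s"
          % (len(covered), len(all_sides), sorted(missing)))
    # A generator would now solve for an input reaching ('B2', True): x == 0.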
17

Identifying Testing Requirements for Modified Software

Apiwattanapong, Taweesup 09 July 2007 (has links)
Throughout its lifetime, software must be changed for many reasons, such as bug fixing, performance tuning, and code restructuring. Testing modified software is the main activity performed to gain confidence that changes behave as intended and do not have adverse effects on the rest of the software. A fundamental problem of testing evolving software is determining whether test suites adequately exercise changes and, if not, providing suitable guidance for generating new test inputs that target the modified behavior. Existing techniques evaluate the adequacy of test suites based only on control- and data-flow testing criteria. They do not consider the effects of changes on program states and, thus, are not sufficiently strict to guarantee that the modified behavior is exercised. Because of the lack of this guarantee, these techniques can provide only limited guidance for generating new test inputs. This research has developed techniques that assist testers in testing evolving software and provide confidence in the quality of modified versions. In particular, this research has developed a technique to identify testing requirements that ensure that the test cases satisfying them will result in different program states at preselected parts of the software. This research has also developed supporting techniques for identifying testing requirements, including (1) a differencing technique, which computes differences and correspondences between two software versions, and (2) two dynamic-impact-analysis techniques, which identify parts of the software that are likely affected by changes with respect to a set of executions.
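A toy sketch of the differencing and impact steps, under an assumed representation (each version as a map from method name to body hash, each test to the methods it covers; the real techniques compute far finer-grained correspondences):

    v1 = {"pay": "h1", "log": "h2", "tax": "h3"}
    v2 = {"pay": "h1", "log": "h9", "tax": "h3", "audit": "h4"}

    changed = {m for m in v1.keys() & v2.keys() if v1[m] != v2[m]}
    added = v2.keys() - v1.keys()

    tests = {"t1": {"pay"}, "t2": {"log", "pay"}, "t3": {"tax"}}
    affected = {t for t, cov in tests.items() if cov & changed}
    print("changed:", changed, "added:", added)
    print("tests to re-run or augment:", affected)   # {'t2'}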
18

Assertion-based repair of complex data structures

Elkarablieh, Bassem H. 09 August 2012 (has links)
As software systems grow in complexity and size, reliability becomes a major concern. A large share of industrial and academic effort to increase software reliability is directed toward design, testing, and validation: activities performed before the software is deployed. While such activities are fundamental for achieving high levels of confidence in software systems, bugs still occur after deployment, resulting in costly software failures. This dissertation presents assertion-based repair, a novel approach for error recovery from insidious bugs that occur after the system is deployed. It describes the design and implementation of a repair framework for Java programs and evaluates the efficiency and effectiveness of the approach in repairing data structure errors in both software libraries and open-source stand-alone applications. Our approach introduces a new form of assertion, assertAndRepair, for developers to use when checking the consistency of the data structures manipulated by their programs with respect to a set of desired structural and data properties. The developer provides the properties in a Java boolean method, repOk, which returns a truth value based on whether a given data structure satisfies these properties. Upon an assertion violation due to a faulty structure, instead of terminating the execution, the structure is repaired, i.e., its fields are mutated such that the resulting structure satisfies the desired properties, and the program proceeds with its execution. To aid developers in detecting the causes of the fault, repair logs are generated that provide useful information about the performed mutations. The repair process is performed using a novel algorithm that uses a systematic search based on symbolic execution to determine valuations for the structures' fields that result in a valid structure. Our experiments on repairing both library data structures and stand-alone applications demonstrate the utility and efficiency of the approach in repairing large structures, enabling programs to recover from crippling errors and proceed with their executions. Assertion-based repair presents a novel post-deployment mechanism that integrates with existing and newly developed software, providing them with the defensive ability to recover from unexpected runtime errors. Programmers already understand the advantages of using assertions and are comfortable with writing them. Providing new analyses and powerful extensions for them presents an attractive direction toward building more reliable software. / text
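The assertAndRepair/repOk contract can be illustrated with a Python toy (the framework itself targets Java data structures and searches via symbolic execution; this sketch brute-forces a tiny repair and prints a repair log):

    from itertools import permutations

    def rep_ok(xs):                       # desired property: strictly sorted
        return all(a < b for a, b in zip(xs, xs[1:]))

    def assert_and_repair(xs):
        if rep_ok(xs):
            return xs
        # Systematic search over field valuations; the real algorithm uses
        # symbolic execution rather than brute-force permutation.
        for cand in permutations(xs):
            if rep_ok(list(cand)):
                print("repair log: reordered", xs, "->", list(cand))
                return list(cand)
        raise AssertionError("unrepairable structure")

    data = [3, 1, 2]                      # corrupted by a hypothetical bug
    data = assert_and_repair(data)        # execution proceeds with valid data
    print(data)                           # [1, 2, 3]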
19

Enhancing symbolic execution using memoization and incremental techniques

Yang, Guowei, active 2013 20 September 2013 (has links)
The last few years have seen a resurgence of interest in symbolic execution, a program analysis technique developed more than three decades ago to analyze program execution paths. However, symbolic execution remains an expensive technique, and scaling it remains a key technical challenge. Two key factors contribute to its cost: (1) the number of paths that need to be explored and (2) the cost of constraint solving, which is typically required for each path explored. Our insight is that the cost of symbolic execution can be reduced by an incremental approach that uses static analysis and dynamic analysis to focus on relevant parts of code and to reuse previous analysis results, thereby addressing both key cost factors. This dissertation presents Memoized Incremental Symbolic Execution, a novel approach that embodies this insight. Using symbolic execution in practice often requires several successive runs of the technique on largely similar underlying problems, where successive problems differ due to some change, which may be to code, e.g., to fix a bug; to analysis parameters, e.g., to increase the path exploration depth; or to correctness properties, e.g., to check against stronger specifications written as assertions in code. Memoized Incremental Symbolic Execution, a three-fold approach, leverages the similarities in the successive problems to reduce the total cost of applying the technique. Our prototype tool-set is based on Symbolic PathFinder. Experimental results show that Memoized Incremental Symbolic Execution enhances the efficacy of symbolic execution. / text
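A toy Python sketch of the memoization half of the idea, under an assumed model where a run is a list of path-constraint tuples and feasibility results are cached across successive runs:

    solver_calls = 0
    memo = {}                                # path constraints -> feasibility

    def solve(path):
        global solver_calls
        solver_calls += 1                    # stands in for an expensive SMT call
        return "false" not in path

    def run(paths):
        results = {}
        for p in paths:
            if p not in memo:                # only unseen paths hit the solver
                memo[p] = solve(p)
            results[p] = memo[p]
        return results

    run([("x>0",), ("x>0", "y>0"), ("false",)])              # first run
    print("solver calls after run 1:", solver_calls)         # 3
    run([("x>0",), ("x>0", "y>0"), ("x>0", "y>0", "z>0")])   # changed problem
    print("solver calls after run 2:", solver_calls)         # 4: one new path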
20

A verified framework for symbolic execution in the ACL2 theorem prover

Swords, Sol Otis 11 February 2011 (has links)
Mechanized theorem proving is a promising means of formally establishing facts about complex systems. However, applying theorem-proving methodologies to industrial-scale hardware and software systems requires a large amount of user interaction to prove useful properties. In practice, the human user tasked with such a verification must gain a deep understanding of the system to be verified and prove numerous lemmas in order to allow the theorem-proving program to approach a proof of the desired fact. Furthermore, proofs that fail during this process are a source of confusion: the proof may fail either because the conjecture was false or because the prover required more help from the user to reach the desired conclusion. We have implemented a symbolic execution framework inside the ACL2 theorem prover to help address these issues on certain problem domains. Our framework introduces a proof strategy that applies bit-level symbolic execution using binary decision diagrams (BDDs) to finite-domain problems. This proof strategy is a fully verified decision procedure for such problems, and on many useful problem domains its capacity vastly exceeds that of exhaustive testing. Our framework also produces counterexamples for conjectures that it determines to be false. Our framework seeks to reduce the amount of user interaction necessary to prove theorems about industrial-scale hardware and software systems. By increasing the automation available in the prover, we allow the user to complete useful proofs while understanding less of the system's detailed implementation. Furthermore, by producing counterexamples for falsified conjectures, our framework reduces the time the user spends trying to determine why a proof failed. / text
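The proved-or-counterexample contract of such a decision procedure can be sketched over a small finite domain (exhaustive enumeration stands in for the BDD-based symbolic execution, whose capacity the abstract notes vastly exceeds exhaustive testing; both conjectures are made up):

    from itertools import product

    def commutes(x, y):                  # true conjecture: AND is commutative
        return (x & y) == (y & x)

    def bogus(x, y):                     # false conjecture, for a counterexample
        return (x & (y + 1)) == ((x & y) + 1)

    def decide(conjecture, bits=4):
        for x, y in product(range(2 ** bits), repeat=2):
            if not conjecture(x, y):
                return ("counterexample", {"x": x, "y": y})
        return ("proved", None)

    print(decide(commutes))              # ('proved', None)
    print(decide(bogus))                 # ('counterexample', {'x': 0, 'y': 0})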
