  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
  Our metadata is collected from universities around the world.

Symvex : A Symbolic Execution System for Machine Code

Rönn, Mattias January 2016 (has links)
This thesis is part of an ongoing research project at Linköping University. The goal of the thesis work is to design and implement a prototype for a symbolic execution system that scales well to larger programs and is capable of performing symbolic execution on machine code. To this end, we analyzed the current state of symbolic executors that operate on machine code, to see whether any of them could serve as a base for our prototype and, in particular, whether any of the existing systems scaled well to large software. We found that none of the existing systems worked well with the real-life software in our evaluation. Furthermore, even if it had been possible to fix one of the existing systems, the time required to diagnose the faults in its implementation would most likely have been too great, so we decided to create our own implementation from scratch. We did note, however, that some approaches in the existing systems seemed to work better with large software; in particular, saving as little program state as possible seemed favorable. Armed with the knowledge gained from the analysis, we implemented a system that compares quite well with the existing systems. Our system was able to execute all the real-life programs used in our tests, but suffered from high memory usage on certain programs. We present and discuss several potential ways to mitigate this issue.
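Symvex's internals are not reproduced in this abstract, but the core mechanism it refers to, interpreting machine-style instructions over symbolic values while keeping minimal per-path state, can be sketched in miniature. The instruction set, encoding, and program below are invented for illustration; a real machine-code engine handles a full ISA and sends path conditions to a constraint solver.

```python
# Toy symbolic executor for a miniature register machine (illustration only).
# Instructions: ("li", reg, const), ("add", dst, a, b), ("beqz", reg, target), ("halt",)

SYM = ("sym", "x")  # one symbolic input, preloaded into r0

def step_all(program):
    """Explore every path, forking at branches on symbolic values.
    Returns a list of (path_condition, final_registers) pairs."""
    # Per-path state is just (pc, registers, path condition): keeping this
    # small is the "save as little state as possible" approach.
    worklist = [(0, {"r0": SYM}, [])]
    results = []
    while worklist:
        pc, regs, cond = worklist.pop()
        op = program[pc]
        if op[0] == "halt":
            results.append((cond, regs))
        elif op[0] == "li":
            regs = dict(regs)
            regs[op[1]] = op[2]
            worklist.append((pc + 1, regs, cond))
        elif op[0] == "add":
            regs = dict(regs)
            regs[op[1]] = ("add", regs[op[2]], regs[op[3]])
            worklist.append((pc + 1, regs, cond))
        elif op[0] == "beqz":
            v = regs[op[1]]
            if isinstance(v, int):           # concrete value: no fork
                worklist.append((op[2] if v == 0 else pc + 1, regs, cond))
            else:                            # symbolic value: fork both ways
                worklist.append((op[2], regs, cond + [("==", v, 0)]))
                worklist.append((pc + 1, regs, cond + [("!=", v, 0)]))
    return results

prog = [
    ("beqz", "r0", 3),   # if x == 0 jump to the second halt
    ("li", "r1", 7),
    ("halt",),
    ("li", "r1", 0),
    ("halt",),
]
paths = step_all(prog)
print(len(paths))  # 2: one path for x == 0 and one for x != 0
```

The fork at `beqz` is the source of the path explosion the thesis wrestles with, which is why the amount of state copied per fork matters so much at scale.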

An empirical study of the influence of compiler optimizations on symbolic execution

Dong, Shiyu 18 September 2014 (has links)
Compiler optimizations in the context of traditional program execution are a well-studied research area, and modern compilers typically offer a suite of optimization options. This thesis reports the first study (to our knowledge) of how standard compiler optimizations influence symbolic execution. We study 33 optimization flags of the LLVM compiler infrastructure, which are used by the KLEE symbolic execution engine. Specifically, we study (1) how different optimizations influence the performance of KLEE on the Unix Coreutils, (2) how the influence varies across two different program classes, and (3) how the influence varies across three different back-end constraint solvers. Some of our findings surprised us. For example, KLEE's setting of applying the 33 optimizations in a pre-defined order provides sub-optimal performance for a majority of the Coreutils when using basic depth-first search; moreover, in our experimental setup, applying no optimization performs better for many of the Coreutils.
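The direction of the effect can be illustrated with a toy model (not KLEE's actual pipeline; the expression language and the path counting below are invented): an optimization such as constant folding can eliminate branches whose conditions only appear symbolic, shrinking the path tree before exploration begins.

```python
def fold(e):
    """Constant-fold a tiny expression language: int | ("sym", name) | (op, a, b)."""
    if not isinstance(e, tuple) or e[0] == "sym":
        return e
    op, a, b = e[0], fold(e[1]), fold(e[2])
    if op == "mul" and (a == 0 or b == 0):
        return 0                      # x * 0 folds away even with symbolic x
    if isinstance(a, int) and isinstance(b, int):
        return {"add": a + b, "mul": a * b}[op]
    return (op, a, b)

def is_symbolic(e):
    return isinstance(e, tuple) and (
        e[0] == "sym" or is_symbolic(e[1]) or is_symbolic(e[2]))

def count_paths(branch_conditions):
    """A naive engine forks at every branch whose condition mentions a symbol."""
    return 2 ** sum(is_symbolic(c) for c in branch_conditions)

x = ("sym", "x")
branches = [("mul", x, 0), ("add", x, 1)]   # two branch conditions in the program
unoptimized = count_paths(branches)
optimized = count_paths([fold(c) for c in branches])
print(unoptimized, optimized)  # 4 2
```

The thesis' finding is that this interaction cuts both ways: depending on the order and choice of passes, the transformed code can also explore worse than the original, which is why the fixed pass order was sub-optimal for many Coreutils.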

Compositional symbolic execution with memoized replay

Qiu, Rui 18 September 2014 (has links)
Symbolic execution is a powerful, systematic analysis that has received much visibility in the last decade. Scalability, however, remains a major challenge for symbolic execution. Compositional analysis is a well-known general-purpose methodology for increasing scalability. This thesis introduces a new approach to compositional symbolic execution. Our key insight is that we can summarize each analyzed method as a memoization tree that captures the crucial elements of symbolic execution, and leverage these memoization trees to efficiently replay the symbolic execution of the corresponding methods with respect to their calling contexts. Memoization trees offer a natural way to compose in the presence of heap operations, which cannot be handled by previous work that uses logical formulas as summaries for compositional symbolic execution. Our approach also enables an efficient treatment of error traces by short-circuiting the execution of paths that lead to them. Our preliminary experimental evaluation, based on a prototype implementation in Symbolic PathFinder, shows promising results.
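As a rough sketch of the memoization-tree idea (the data structures here are invented; Symbolic PathFinder's real trees record far more, including the heap effects that make this approach compose where formula summaries do not): a method is symbolically executed once, its branch decisions are stored as a tree, and later calls replay the tree under the caller's known conditions instead of re-executing the body.

```python
def analyze_abs():
    """Pretend symbolic execution of an abs-like method: one branch, two leaves.
    Tree node: ("branch", condition, true_subtree, false_subtree) or ("ret", value)."""
    return ("branch", "x >= 0", ("ret", "x"), ("ret", "-x"))

def replay(tree, context):
    """Replay a memoized tree under a calling context, a set of conditions the
    caller already knows to hold. Contradicted subtrees are skipped."""
    if tree[0] == "ret":
        return [tree[1]]
    _, cond, true_sub, false_sub = tree
    if cond in context:               # toy entailment check; a real engine asks a solver
        return replay(true_sub, context)
    if ("not " + cond) in context:
        return replay(false_sub, context)
    return replay(true_sub, context) + replay(false_sub, context)

memo = analyze_abs()                       # method analyzed once
print(replay(memo, set()))                 # unconstrained caller: ['x', '-x']
print(replay(memo, {"x >= 0"}))            # caller guarantees x >= 0: ['x']
```

Replay from the second caller prunes the infeasible subtree without re-running the method, which is where the compositional savings come from.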

Verification of Cyber Physical Systems

Murali, Dilip Venkateswaran 20 September 2013 (has links)
Due to the increasing complexity of today's cyber-physical systems, defects become inevitable and harder to detect. The software behind such systems is generally huge, often millions of lines of code, and the impact of its failure can be hazardous. The reliability of the system depends on the effectiveness and rigor of the testing procedures, so the software behind such cyber-physical systems must be verified to ensure stability and reliability before the systems are deployed in the field. We investigated the verification of software for Autonomous Underwater Vehicles (AUVs) to ensure the safety of the system at any given time in the field. To accomplish this, we identified useful invariants that can serve as monitors for detecting abnormal behavior of the software; potential invariants were extracted and then validated. The investigation examines whether this method can be performed on existing software verification platforms; this was accomplished on Cloud9, which is built on KLEE, and with Microsoft's VCC tool. Experimental results show that this method can identify new invariants using these two tools, and that the identified invariants can be used to monitor the behavior of the autonomous vehicles and detect abnormality and failures in the system much earlier, thereby improving its reliability. Recommendations for improving software quality are provided. The work also explores safety measures and standards for safety-critical software and autonomous vehicles, and presents metrics for measuring software complexity and quality along with the requirements for certifying AUV software. The study helps in understanding verification issues, guidelines, and certification requirements. / Master of Science
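The invariant-extraction step can be sketched in a likely-invariant style (the telemetry fields and invariant templates below are hypothetical, not the thesis' actual monitors): collect variable observations across runs, propose simple properties, and keep only those that held on every observation as candidate monitors.

```python
def likely_invariants(traces):
    """traces: list of dicts mapping variable name -> observed value.
    Returns simple candidate invariants that held on every observation."""
    names = traces[0].keys()
    found = []
    for v in names:
        vals = [t[v] for t in traces]
        if all(x == vals[0] for x in vals):      # template: v is constant
            found.append(f"{v} == {vals[0]}")
        if all(x >= 0 for x in vals):            # template: v is non-negative
            found.append(f"{v} >= 0")
    return found

# Hypothetical telemetry samples from an AUV control loop:
samples = [
    {"depth": 4.2, "battery": 95},
    {"depth": 7.9, "battery": 93},
    {"depth": 1.0, "battery": 90},
]
print(likely_invariants(samples))  # ['depth >= 0', 'battery >= 0']
```

Candidates produced this way still require validation, as the abstract notes: a property that held on observed traces may be a coincidence of the test inputs rather than a true invariant.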

Control Flow Based Static Execution Time Analysis Using Symbolic Execution

Sundell, Isak January 2022 (has links)
To ensure the correctness of real-time systems, it is important to determine the execution time of tasks: the worst-case execution time of each task must be found in order to determine whether the system is schedulable. This thesis aims to bound the execution time of programs analyzed by KLEE, a symbolic execution engine, by estimating the cycles required on the Cortex-M4 processor. A custom fork of KLEE has been created that outputs additional information about the program under analysis. This information is used by a tool written in Rust that reconstructs the corresponding control flow in optimized assembly code. KLEE analyzes an intermediate representation language, LLVM IR, which is used in the compilation process of many programming languages. One of these languages is Rust, which has been the primary focus of the tool. Testing has been done on applications written with the RTIC framework; the measured cycle counts of these applications were correctly bounded for all test cases.
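The bounding step itself can be sketched as a longest-path computation over a loop-free control-flow graph annotated with per-block cycle costs. The block names and costs below are invented; the thesis' tool derives the graph from KLEE's output and the costs from Cortex-M4 instruction timing.

```python
from functools import lru_cache

# Per-basic-block cycle costs and CFG successors (illustrative values).
COST = {"entry": 3, "then": 8, "else": 5, "exit": 2}
SUCC = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"], "exit": []}

@lru_cache(maxsize=None)
def wcec(block):
    """Worst-case execution cycles from `block` to the end of the CFG:
    the block's own cost plus the most expensive successor subpath."""
    return COST[block] + max((wcec(s) for s in SUCC[block]), default=0)

print(wcec("entry"))  # 3 + max(8, 5) + 2 = 13
```

Loops are the hard part this sketch omits: a real analysis needs loop bounds (here supplied by KLEE's path enumeration) before a finite worst case exists at all.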

Distributed parallel symbolic execution

King, Andrew January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Robby / Software defects cost our economy a significant amount of money. Techniques that can detect software defects before the software begins its operational life-cycle are therefore highly valuable. Unfortunately, as software becomes more ubiquitous, it also becomes more complex, and static analysis of software can be computationally intensive: as software grows more complex, the computational demands of any analysis applied to it increase as well. At the same time, the computational capabilities provided by computers have increased exponentially over the last half century of computing. Historically, this increase came from raising the clock speed of the computer's central processing unit (CPU). In the last several years, engineering limitations have made it increasingly difficult to build CPUs with progressively higher clock speeds. Instead, processor manufacturers now provide increased capability in the form of "multi-core" CPUs, where each processor package contains two or more processing units, enabling the processor to execute more than one task concurrently. This thesis describes the design and implementation of a parallel version of symbolic execution that can take advantage of modern multi-core and multi-processor systems to complete the analysis of software units in a reduced amount of time.
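One common way to parallelize path exploration, fixing the first k branch decisions to split the path tree into disjoint subtrees, can be sketched as follows. This is a generic scheme under invented parameters, not necessarily the partitioning the thesis implements.

```python
from itertools import product
from concurrent.futures import ThreadPoolExecutor

DEPTH = 4  # toy program with 4 symbolic branches -> 2**4 = 16 paths

def explore(prefix):
    """Stand-in for one worker: enumerate every full path extending `prefix`.
    A real worker would run symbolic execution restricted to this subtree."""
    rest = DEPTH - len(prefix)
    return [prefix + tail for tail in product((0, 1), repeat=rest)]

# Fix the first 2 branch decisions: 4 disjoint work units for 4 workers.
prefixes = list(product((0, 1), repeat=2))
with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = list(pool.map(explore, prefixes))

paths = [p for chunk in chunks for p in chunk]
print(len(paths), len(set(paths)))  # 16 16 -> the subtrees are disjoint and complete
```

Because the subtrees share no paths, workers need no coordination beyond receiving a prefix, which is what makes this style of partitioning attractive for distributed analysis.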

Model-Based Protocol Testing in an Erlang Environment

Blom, Johan January 2016 (has links)
Testing is the dominant technique for quality assurance of software systems. It typically consumes considerable resources in development projects and is often performed in an ad hoc manner. This thesis is concerned with model-based testing, an approach that makes testing more systematic and more automated. The general idea in model-based testing is to start from a formal model that captures the intended behavior of the software system to be tested. On the basis of this model, test cases can be generated in a systematic way. Since the model is formal, the generation of test suites can be automated, and with adequate tool support one can automatically quantify the degree to which they exercise the tested software. Despite the significant improvements in model-based testing over the last 20 years, acceptance by industry has so far been limited. A number of commercially available tools exist, but most testing in industry still relies on manually constructed test cases. This thesis addresses this problem by presenting a methodology and associated tool support intended for model-based testing of communication protocol implementations in industry. A major goal was to make the developed tool suitable for industrial usage, which meant considering several problems that are typically not addressed in the literature on model-based testing. The thesis presents several technical contributions to the area of model-based testing, including:
- a new specification language based on the functional programming language Erlang,
- a novel technique for specifying coverage criteria for test suite generation, and
- a technique for automatically generating test suites.
Based on these developments, we have implemented a complete tool chain that generates and executes complete test suites, given a model in our specification language.
The thesis also presents a substantial industrial case study in which our technical contributions and the implemented tool chain are evaluated. Findings from the case study include that test suites generated using (model) coverage criteria have at least as good fault-detection capability as equally large random test suites, and that model-based testing could discover faults in well-tested software whose earlier testing had employed a relaxed validation of requirements.

Effectiveness comparison between Concolic and Random Testing

Lai, Yan-shun 31 October 2011 (has links)
Most software companies today maintain their own test systems, because every piece of software contains bugs, and those bugs can damage a company's property or its information security. Test systems help find such bugs, but some bugs reappear even after they have been fixed. Automated test systems are effective here: they reduce the time and cost wasted by manual testing and address shortcomings of earlier testing methods. This paper considers two kinds of automated testing: concolic testing and random testing. In 2009, some evidence suggested that concolic testing is more effective than random testing, but the demonstration was not sufficient. This paper therefore aims to substantiate the effectiveness comparison between concolic and random testing.
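The comparison can be made concrete on a toy target (the target and numbers are invented, not the paper's benchmarks): random testing rarely hits a narrow equality branch, while a concolic-style loop that negates the observed branch condition and solves it reaches the branch almost immediately. A real concolic tool hands the negated condition to an SMT solver; here the equality is solved by hand.

```python
import random

SECRET = 123456789

def target(x):
    return "bug" if x == SECRET else "ok"

def random_testing(trials, seed=0):
    """Count how often purely random 31-bit inputs reach the bug."""
    rng = random.Random(seed)
    return sum(target(rng.randrange(2**31)) == "bug" for _ in range(trials))

def concolic_testing():
    """Run once concretely, then negate and solve the observed branch condition."""
    x = 0                                  # arbitrary concrete seed input
    if target(x) == "bug":
        return 1
    # Observed branch: (x == SECRET) was false. Negating it asks for an input
    # with x == SECRET; a solver returns SECRET directly.
    x = SECRET
    return 1 if target(x) == "bug" else 0

print(random_testing(10000), concolic_testing())
```

With any fixed seed the random phase essentially never hits the equality (one value out of 2^31), while the concolic loop needs a single branch negation, which is the effectiveness gap the paper sets out to measure.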

Partitioning Strategies to Enhance Symbolic Execution

Marcellino, Brendan Adrian 11 August 2015 (has links)
Software testing is a fundamental part of the software development process, yet it remains costly, consuming about half of the development cost. The path explosion problem often forces one to consider an extremely large number of paths in order to reach a specific target. Symbolic execution can reduce this cost by using symbolic values and heuristic exploration strategies. Although various exploration strategies have been proposed in the past, the number of Satisfiability Modulo Theories (SMT) solver calls needed to reach a target is still large, resulting in long execution times for programs containing many paths. In this thesis, we present two partitioning strategies to mitigate this problem and thereby reduce unnecessary SMT solver calls. In sequential partitioning, code sections are analyzed sequentially so that infeasible paths discovered in earlier sections can be exploited in later ones. In dynamic partitioning on SSA-applied code, the code sections are analyzed in a non-consecutive order guided by data-dependency metrics within the sections. Experimental results show that both strategies significantly reduce the number of unnecessary solver calls in large programs, achieving more than 1000x speedup over conflict-driven learning. / Master of Science
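The sequential-partitioning intuition, reusing conflicts learned in earlier sections to prune later ones without consulting the solver, can be sketched as follows. The feasibility check is a toy stand-in for an SMT call, and the sections and constraints are invented.

```python
solver_calls = 0

def feasible(constraints):
    """Toy 'solver': a path is infeasible iff it asserts both c and not-c."""
    global solver_calls
    solver_calls += 1
    return not any(("not " + c) in constraints for c in constraints)

learned = set()          # conflicting constraint sets discovered so far

def explore(section_paths):
    found = []
    for path in section_paths:
        cs = frozenset(path)
        if any(conflict <= cs for conflict in learned):
            continue                 # pruned: no solver call needed
        if feasible(cs):
            found.append(path)
        else:
            learned.add(cs)          # remember the conflict for later sections
    return found

# Two sections sharing the contradictory pair {"a", "not a"}:
sec1 = [["a", "b"], ["a", "not a"]]
sec2 = [["a", "not a", "c"], ["b", "c"]]
explore(sec1)
before = solver_calls
explore(sec2)
print(solver_calls - before)  # 1: the conflicting section-2 path needed no call
```

Analyzing sections in sequence lets every later section inherit the conflicts of all earlier ones, which is where the reduction in unnecessary solver calls comes from.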

An Efficient 2-Phase Strategy to Achieve High Branch Coverage

Prabhu, Sarvesh P. 06 March 2012 (has links)
Symbolic execution-based test generation is gaining popularity for software testing. The increasing complexity of software programs poses new challenges for symbolic execution-based test generation because of the path explosion problem. We present a new two-phase symbolic-execution-driven strategy that quickly achieves high branch coverage in software. Phase 1 follows a greedy approach that covers as many branches as possible by exploring each branch through its corresponding shortest path prefix. Phase 2 covers the branches left uncovered when the shortest path to a branch was infeasible. In Phase 1, basic conflict-driven learning is used to skip all paths that contain any previously encountered conflicting condition, while in Phase 2, a more intelligent conflict-driven learning is used to skip regions that have no feasible path to any unexplored branch. This results in a considerable reduction in unnecessary SMT solver calls. Experimental results show that significant speedup can be achieved, effectively reducing the time to detect a bug and providing higher branch coverage for a fixed timeout period than previous techniques. / Master of Science
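Phase 1's shortest-path-prefix search can be sketched over a toy CFG. The graph, the targets, and the stubbed edge-feasibility set below are invented; a real implementation would ask an SMT solver whether each prefix's path condition is satisfiable.

```python
from collections import deque

# Toy CFG: node -> successors. Target branches are reached at their nodes.
SUCC = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
INFEASIBLE_EDGES = {("A", "C")}    # pretend the solver rejects this prefix

def shortest_prefix(target):
    """BFS-shortest edge sequence from the entry node to `target`."""
    queue, seen = deque([("A", [])]), {"A"}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for s in SUCC[node]:
            if s not in seen:
                seen.add(s)
                queue.append((s, path + [(node, s)]))

def phase1(targets):
    """Greedily cover each target via its shortest prefix; defer the rest."""
    covered, deferred = [], []
    for t in targets:
        prefix = shortest_prefix(t)
        if any(e in INFEASIBLE_EDGES for e in prefix):
            deferred.append(t)       # Phase 2 will search for another path
        else:
            covered.append(t)
    return covered, deferred

print(phase1(["B", "C", "D"]))  # (['B', 'D'], ['C'])
```

Most branches fall cheaply in Phase 1; only the deferred ones, here "C", pay for the deeper conflict-driven search of Phase 2, which is how the strategy keeps the overall solver-call count down.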
