1

Symvex : A Symbolic Execution System for Machine Code

Rönn, Mattias January 2016 (has links)
This thesis is part of an ongoing research project at Linköping University. The goal of the thesis work is to design and implement a prototype for a symbolic execution system that scales well with larger programs and is capable of performing symbolic execution on machine code. For this reason we analyzed the current state of symbolic executors that can perform symbolic execution on machine code, to see if we could use one of those implementations as a base for our prototype. We wanted to know whether any of the existing systems scaled well with large software. We found that none of the existing systems worked well with the real-life software in our evaluation. Furthermore, even if it had been possible to fix one of the existing systems, the time required to figure out the faults in their implementations would most likely have been too great. For this reason we decided to create an implementation of our own from scratch. However, we did note that some approaches in the existing systems seemed to work better with large software; specifically, saving as little state as possible about the program seemed favorable. Armed with the knowledge gained from the analysis, we managed to implement a system that compared quite well with the existing systems. Our system was able to execute all the real-life programs used in our tests, but unfortunately exhibited high memory usage for certain programs. We present and discuss several potential ways to mitigate this issue.
2

An empirical study of the influence of compiler optimizations on symbolic execution

Dong, Shiyu 18 September 2014 (has links)
Compiler optimization in the context of traditional program execution is a well-studied research area, and modern compilers typically offer a suite of optimization options. This thesis reports the first study (to our knowledge) of how standard compiler optimizations influence symbolic execution. We study 33 optimization flags of the LLVM compiler infrastructure, which are used by the KLEE symbolic execution engine. Specifically, we study (1) how different optimizations influence the performance of KLEE on the Unix Coreutils, (2) how the influence varies across two different program classes, and (3) how the influence varies across three different back-end constraint solvers. Some of our findings surprised us. For example, KLEE's setting of applying the 33 optimizations in a pre-defined order provides sub-optimal performance for a majority of the Coreutils when using basic depth-first search; moreover, in our experimental setup, applying no optimizations at all performs better for many of the Coreutils.
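As a rough, hypothetical illustration of the workflow studied above (not taken from the thesis), a KLEE driver for a small C program looks like the sketch below; `klee_make_symbolic` and `klee_assert` are part of KLEE's C API, while the build commands in the comments are typical invocations whose exact flags may differ across KLEE and LLVM versions.

```c
/* Minimal, hypothetical KLEE driver (not from the thesis) illustrating how
 * optimization of the analyzed bitcode interacts with path exploration.
 *
 * Typical workflow (exact flags vary across KLEE/LLVM versions):
 *   clang -emit-llvm -c -g -O0 demo.c -o demo.bc    # unoptimized bitcode
 *   klee --optimize demo.bc                         # let KLEE apply its optimization passes
 *   klee demo.bc                                    # or analyze the bitcode as-is
 */
#include <klee/klee.h>

static int classify(int x) {
    /* Each branch below becomes a path constraint handed to the solver. */
    if (x < 0)
        return -1;
    if (x % 2 == 0)
        return 0;
    return 1;
}

int main(void) {
    int x;
    klee_make_symbolic(&x, sizeof(x), "x");  /* mark x as a symbolic input */
    int r = classify(x);
    klee_assert(r >= -1 && r <= 1);          /* checked on every explored path */
    return 0;
}
```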
3

Compositional symbolic execution with memoized replay

Qiu, Rui, active 21st century 18 September 2014 (has links)
Symbolic execution is a powerful, systematic analysis that has received much visibility in the last decade. Scalability, however, remains a major challenge for symbolic execution. Compositional analysis is a well-known general-purpose methodology for increasing scalability. This thesis introduces a new approach to compositional symbolic execution. Our key insight is that we can summarize each analyzed method as a memoization tree that captures the crucial elements of its symbolic execution, and leverage these memoization trees to efficiently replay the symbolic execution of the corresponding methods with respect to their calling contexts. Memoization trees offer a natural way to compose in the presence of heap operations, which cannot be handled by previous work that uses logical formulas as summaries for compositional symbolic execution. Our approach also enables an efficient treatment of error traces by short-circuiting the execution of paths that lead to them. Our preliminary experimental evaluation, based on a prototype implementation in Symbolic PathFinder, shows promising results.
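The following is only a speculative sketch of the memoization-tree idea, not the thesis's actual Symbolic PathFinder implementation; the node layout and the `replay` walk are invented for illustration.

```c
/* Speculative sketch (not the thesis's implementation): a memoization tree
 * records the branch decisions observed while a method is symbolically
 * executed, so the method can later be replayed in a new calling context
 * instead of being re-analyzed from scratch. */
#include <stdio.h>

typedef struct MemoNode {
    const char      *constraint;     /* decision taken at this point, e.g. "x > 0" */
    int              leads_to_error; /* error paths can be short-circuited on replay */
    struct MemoNode *children[2];    /* outcomes of a two-way branch (NULL if leaf) */
} MemoNode;

/* Replay a summarized method: in a real system each constraint would be
 * conjoined with the caller's path condition and checked by a solver;
 * here we only print the recorded structure. */
static void replay(const MemoNode *n, int depth) {
    if (!n) return;
    printf("%*s%s%s\n", depth * 2, "", n->constraint,
           n->leads_to_error ? "  [error path]" : "");
    replay(n->children[0], depth + 1);
    replay(n->children[1], depth + 1);
}

int main(void) {
    MemoNode err  = { "y == 0", 1, { NULL, NULL } };
    MemoNode ok   = { "y != 0", 0, { NULL, NULL } };
    MemoNode root = { "x > 0",  0, { &err, &ok } };
    replay(&root, 0);
    return 0;
}
```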
4

Verification of Cyber Physical Systems

Murali, Dilip Venkateswaran 20 September 2013 (has links)
Due to the increasing complexity of today's cyber-physical systems, defects become inevitable and harder to detect. The complexity of such software is generally huge, with millions of lines of code, and the impact of a failure of such systems could be hazardous. The reliability of the system depends on the effectiveness and rigor of the testing procedures. Verification of the software behind such cyber-physical systems is required to ensure stability and reliability before the systems are deployed in the field. We have investigated the verification of the software for Autonomous Underwater Vehicles (AUVs) to ensure safety of the system at any given time in the field. To accomplish this, we identified useful invariants that would serve as monitors for detecting abnormal behavior of the software. Potential invariants were extracted and then had to be validated. The investigation attempts to uncover the possibility of performing this method on existing software verification platforms. This was accomplished with Cloud9, which is built on KLEE, and with Microsoft's VCC tool. Experimental results show that this method of extracting invariants can help identify new invariants using these two tools, and that the identified invariants can be used to monitor the behavior of the autonomous vehicles to detect abnormalities and failures in the system much earlier, thereby improving the reliability of the system. Recommendations for improving software quality were provided. The work also explored safety measures and standards for software in safety-critical systems and autonomous vehicles. Metrics for measuring software complexity and quality, along with the requirements to certify AUV software, were also presented. The study helps in understanding verification issues, guidelines and certification requirements. / Master of Science
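Purely for flavor, a minimal sketch of how an extracted invariant can act as a monitor and be checked with a symbolic executor such as KLEE is shown below; the state variables, limits, and controller step are hypothetical and not taken from the AUV software studied in the thesis.

```c
/* Illustrative only: the state variables, limits, and controller step are
 * hypothetical, not taken from the AUV software studied in the thesis.
 * An extracted invariant becomes a monitor, and symbolic execution searches
 * for reachable states that violate it. */
#include <klee/klee.h>

#define MAX_DEPTH_CM  10000   /* hypothetical rated depth: 100 m */
#define MAX_PITCH_DEG 30      /* hypothetical safe pitch envelope */

struct auv_state {
    int depth_cm;    /* current depth in centimetres */
    int pitch_deg;   /* current pitch in whole degrees */
    int thruster_on; /* nonzero if the dive thruster is active */
};

/* Candidate invariant used as a monitor: never exceed rated depth while
 * pitched beyond the safe envelope. */
static int invariant_holds(const struct auv_state *s) {
    return !(s->depth_cm > MAX_DEPTH_CM && s->pitch_deg > MAX_PITCH_DEG);
}

/* Hypothetical controller step whose effect on the invariant we check. */
static void controller_step(struct auv_state *s) {
    if (s->thruster_on)
        s->depth_cm += 100;   /* dive one metre per step */
}

int main(void) {
    struct auv_state s;
    klee_make_symbolic(&s, sizeof(s), "auv_state");
    klee_assume(s.depth_cm >= 0);
    klee_assume(s.depth_cm <= MAX_DEPTH_CM);
    klee_assume(s.pitch_deg >= -90);
    klee_assume(s.pitch_deg <= 90);

    controller_step(&s);

    /* KLEE reports a concrete counterexample if any path violates this. */
    klee_assert(invariant_holds(&s));
    return 0;
}
```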
5

Control Flow Based Static Execution Time Analysis Using Symbolic Execution

Sundell, Isak January 2022 (has links)
To ensure the correctness of real-time systems, it is important to determine the execution time of tasks. The worst-case execution time of each task needs to be found in order to determine whether the system is schedulable. This thesis aims at bounding the execution time of programs analyzed by KLEE, a symbolic execution engine. This is done by estimating the cycles required on the Cortex-M4 processor. A custom fork of KLEE has been created which outputs additional information about the program under analysis. This information is used by a tool written in Rust which reconstructs the corresponding control flow in optimized assembly code. KLEE analyzes an intermediate representation language, LLVM IR, which is used in the compilation process of many programming languages. One of these languages is Rust, which has been the primary focus of the tool. Testing has been done on applications written with the RTIC framework. The measured cycles of these applications have been correctly bounded for all test cases.
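The toy C fragment below (the thesis itself analyzes Rust compiled to LLVM IR and estimates Cortex-M4 cycles; the function here is invented) only illustrates the underlying idea: each feasible path enumerated by a symbolic executor performs a bounded amount of work, so a per-path cycle estimate yields a bound on the worst-case execution time.

```c
/* Toy illustration (not from the thesis): every feasible path enumerated by
 * a symbolic executor does a bounded amount of work, so a per-path cycle
 * estimate can be turned into a WCET bound over all paths. */
#include <klee/klee.h>

/* Work grows with the command value; the longest feasible path (cmd == 7)
 * is the one a cycle estimator would translate into the WCET figure. */
static int handle_command(unsigned cmd) {
    int work = 0;
    if (cmd > 7)                          /* invalid commands rejected cheaply */
        return -1;
    for (unsigned i = 0; i < cmd; ++i)    /* bounded loop: at most 7 iterations */
        work += (int)i;
    return work;
}

int main(void) {
    unsigned cmd;
    klee_make_symbolic(&cmd, sizeof(cmd), "cmd");
    handle_command(cmd);   /* one explored path per feasible iteration count */
    return 0;
}
```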
6

Distributed parallel symbolic execution

King, Andrew January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Robby / Software defects cost our economy a significant amount of money. Techniques that can detect software defects before the software begins its operational life-cycle are therefore highly valuable. Unfortunately, as software is becoming more ubiquitous, it is also becoming more complex. Static analysis of software can be computationally intensive, and as software becomes more complex, the computational demands of any applied analysis increase as well. While increasingly complex software entails more computationally demanding analysis, the computational capabilities provided by computers have increased exponentially over the last half century of computing. Historically, the increase in computational capability has come from increasing the clock speed of the computer's central processing unit (CPU). In the last several years, engineering limitations have made it increasingly difficult to build CPUs with progressively higher clock speeds. Instead, processor manufacturers now provide increased capability in the form of 'multi-core' CPUs, where each processor package contains two or more processing units, enabling that processor to execute more than one task concurrently. This thesis describes the design and implementation of a parallel version of symbolic execution which can take advantage of modern multi-core and multi-processor systems to complete analysis of software units in a reduced amount of time.
7

Model-Based Protocol Testing in an Erlang Environment

Blom, Johan January 2016 (has links)
Testing is the dominant technique for quality assurance of software systems. It typically consumes considerable resources in development projects, and is often performed in an ad hoc manner. This thesis is concerned with model-based testing, an approach to make testing more systematic and more automated. The general idea in model-based testing is to start from a formal model which captures the intended behavior of the software system to be tested. On the basis of this model, test cases can be generated in a systematic way. Since the model is formal, the generation of test suites can be automated, and with adequate tool support one can automatically quantify to what degree they exercise the tested software. Despite significant improvements in model-based testing over the last 20 years, acceptance by industry has so far been limited. A number of commercially available tools exist, but most testing in industry still relies on manually constructed test cases. This thesis addresses this problem by presenting a methodology and associated tool support intended for model-based testing of communication protocol implementations in industry. A major goal was to make the developed tool suitable for industrial usage, implying that we had to consider several problems that are typically not addressed in the literature on model-based testing. The thesis presents several technical contributions to the area of model-based testing, including (1) a new specification language based on the functional programming language Erlang, (2) a novel technique for specifying coverage criteria for test suite generation, and (3) a technique for automatically generating test suites. Based on these developments, we have implemented a complete tool chain that generates and executes complete test suites, given a model in our specification language. The thesis also presents a substantial industrial case study in which our technical contributions and the implemented tool chain are evaluated. Findings from the case study include that test suites generated using (model) coverage criteria have at least as good fault-detection capability as equally large random test suites, and that model-based testing could discover faults in previously well-tested software where previous testing had employed a relaxed validation of requirements.
8

Effectiveness comparison between Concolic and Random Testing

Lai, Yan-shun 31 October 2011 (has links)
In software development today, companies usually have their own test systems, because every piece of software contains bugs that can damage a company's property or compromise its information security. Test systems help us find these bugs, but some bugs reappear even after they have been fixed. In such cases automated test systems are effective, since they avoid the waste of time and cost of manual testing and address the shortcomings of earlier testing methods. This thesis considers two kinds of automated testing: concolic testing and random testing. In 2009 there was some evidence suggesting that concolic testing is more effective than random testing, but the demonstration was not sufficient. The aim of this thesis is therefore to compare the effectiveness of concolic and random testing.
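A standard toy example (not from the thesis) of why concolic testing can outperform random testing is a branch guarded by a narrow equality: random inputs almost never satisfy it, while a concolic engine collects the branch constraint from one concrete run and asks a solver for an input that reaches the other branch.

```c
/* Toy example (not from the thesis): a branch guarded by a narrow equality.
 * Random 32-bit inputs hit the if-branch with probability ~2^-32, while a
 * concolic engine collects the constraint x == 0xDEADBEEF from a concrete
 * run and asks a solver for an input that covers the missed branch. */
#include <assert.h>
#include <stdint.h>

static int check(uint32_t x) {
    if (x == 0xDEADBEEFu) {
        return 1;   /* a hard-to-reach defect would live on this path */
    }
    return 0;
}

int main(void) {
    /* A random tester feeds arbitrary values and almost always sees only the
     * else-branch; a concolic tester reaches both branches in two runs. */
    assert(check(42u) == 0);
    assert(check(0xDEADBEEFu) == 1);
    return 0;
}
```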
9

Finding Data Races in Software Binaries with Symbolic Execution

Jackson, Nathan D. 27 May 2020 (has links)
No description available.
10

Architecture for a Symbolic Execution Environment

Norlén, Joacim January 2022 (has links)
Program testing is an important aspect of software development. Symbolic execution can be used as a tool to automatically verify the correctness of programs for all feasible paths of execution. Moreover, for embedded systems symbolic execution can be used to generate test cases for estimating run times, which helps determine the worst-case execution time (WCET) and schedulability of systems. This thesis explores an architecture for symbolic execution for use in embedded Rust. Accompanying the architecture are implementation details of a prototype, Symex, that can handle small programs. Symex evaluates all feasible paths of execution looking for errors and assertion failures, and reports which concrete inputs lead to errors. Included with the prototype is a command-line tool to automatically build and analyze Rust projects. The tool allows for easy analysis of projects, and an included library provides functions for manipulating symbolic variables to aid the analysis. The method of evaluating all feasible paths works well for the purpose of evaluating embedded systems, where the aim is typically to keep code complexity low. The low code complexity makes the software resilient to path explosion. For cases where this cannot be avoided, the library functions for manipulating symbolic variables can be used to further constrain the variables and reduce the number of feasible paths. The evaluation shows that the architecture is feasible for the intended use case in embedded systems. Furthermore, evaluation of the prototype shows how the system can be used to show the absence of errors, verify functions, and check for functional equivalence. As is inherent to the symbolic execution approach, the system cannot handle programs with a large branching factor.
