About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Program analysis to support quality assurance techniques for web applications

Halfond, William G. J. 20 January 2010 (has links)
As web applications occupy an increasingly important role in the day-to-day lives of millions of people, testing and analysis techniques that ensure that these applications function with a high level of quality are becoming even more essential. However, many software quality assurance techniques are not directly applicable to modern web applications. Certain characteristics, such as the use of HTTP and generated object programs, can make it difficult to identify software abstractions used by traditional quality assurance techniques. More generally, many of these abstractions are implemented differently in web applications, and the lack of techniques to identify them complicates the application of existing quality assurance techniques to web applications. This dissertation describes the development of program analysis techniques for modern web applications and shows that these techniques can be used to improve quality assurance. The first part of the research focuses on the development of a suite of program analysis techniques that identifies useful abstractions in web applications. The second part of the research evaluates whether these program analysis techniques can be used to successfully adapt traditional quality assurance techniques to web applications, improve existing web application quality assurance techniques, and develop new techniques focused on web application-specific issues. The work in quality assurance techniques focuses on improving three different areas: generating test inputs, verifying interface invocations, and detecting vulnerabilities. The evaluations of the resulting techniques show that the use of the program analyses results in significant improvements in existing quality assurance techniques and facilitates the development of new useful techniques.
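To make the notion of a web application's "interface" concrete, here is a small, hypothetical sketch (not the dissertation's analysis): it assumes an interface recovered for a login component, expressed as parameter names, types, and required flags, and checks an invocation against it. All names and the spec format are invented for illustration.

```python
# Hypothetical interface recovered for a "login" component.
LOGIN_INTERFACE = {
    "user":     {"type": str, "required": True},
    "password": {"type": str, "required": True},
    "remember": {"type": int, "required": False},   # 0 or 1
}

def check_invocation(params: dict, interface: dict) -> list:
    """Return a list of mismatches between an HTTP-style invocation and the interface."""
    errors = [f"missing parameter '{name}'"
              for name, spec in interface.items()
              if spec["required"] and name not in params]
    errors += [f"unknown parameter '{name}'" for name in params if name not in interface]
    errors += [f"parameter '{name}' has wrong type"
               for name, value in params.items()
               if name in interface and not isinstance(value, interface[name]["type"])]
    return errors

print(check_invocation({"user": "alice", "remember": "yes"}, LOGIN_INTERFACE))
# -> ["missing parameter 'password'", "parameter 'remember' has wrong type"]
```

Once such an interface is made explicit, the same specification can drive both invocation verification and test-input generation.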
62

Techniques to facilitate symbolic execution of real-world programs

Anand, Saswat 11 May 2012 (has links)
The overall goal of this research is to reduce the cost of software development and improve the quality of software. Symbolic execution is a program-analysis technique that is used to address several problems that arise in developing high-quality software. Although the symbolic execution technique is well understood, and performing symbolic execution on simple programs is straightforward, it is still not possible to apply the technique to the general class of large, real-world software. A symbolic-execution system can be effectively applied to large, real-world software only if it has at least two features: efficiency and automation. However, efficient and automatic symbolic execution of real-world programs is a lofty goal for both theoretical and practical reasons. Theoretically, achieving this goal requires solving an intractable problem (i.e., solving constraints). Practically, achieving it requires overwhelming effort to implement a symbolic-execution system that can precisely and automatically execute real-world programs symbolically. This research makes three major contributions:
1. Three new techniques that address three important problems of symbolic execution. Compared to existing techniques, the new techniques (a) reduce the manual effort that may be required to symbolically execute programs that either generate complex constraints or contain parts that cannot be symbolically executed due to limitations of a symbolic-execution system, and (b) improve the usefulness of symbolic execution (e.g., expose more bugs in a program) by enabling discovery of more feasible paths within a given time budget.
2. A novel approach that uses symbolic execution to generate test inputs for apps that run on modern mobile devices such as smartphones and tablets.
3. Implementations of the above techniques and empirical results obtained from applying them to real-world programs, demonstrating their effectiveness.
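As a concrete illustration of the underlying idea (a toy sketch, not the dissertation's system), the following snippet builds symbolic expressions for a single integer input, collects a path constraint for each branch of a hypothetical check, and "solves" each path by brute force over a small domain. The class names and the toy solver are invented for illustration; a real system would use an SMT solver.

```python
# Toy symbolic execution: symbolic values record the expression built from
# program operations; each branch contributes a constraint to a path condition.

class Sym:
    """A symbolic integer: an expression tree, kept as a callable plus its text."""
    def __init__(self, fn, text):
        self.fn, self.text = fn, text
    def __add__(self, k): return Sym(lambda v: self.fn(v) + k, f"({self.text} + {k})")
    def __mul__(self, k): return Sym(lambda v: self.fn(v) * k, f"({self.text} * {k})")
    def __gt__(self, k):  return Constraint(lambda v: self.fn(v) > k, f"{self.text} > {k}")

class Constraint:
    def __init__(self, fn, text):
        self.fn, self.text = fn, text
    def negate(self):
        return Constraint(lambda v: not self.fn(v), f"not({self.text})")

def solve(path_condition, domain=range(-100, 101)):
    """Brute-force stand-in for a constraint solver: find an input on the path."""
    for v in domain:
        if all(c.fn(v) for c in path_condition):
            return v
    return None

# Program under test (conceptually):  if x * 3 + 1 > 10: ... else: ...
x = Sym(lambda v: v, "x")
branch = (x * 3 + 1) > 10

for path in ([branch], [branch.negate()]):       # explore both paths
    print([c.text for c in path], "-> satisfying input x =", solve(path))
```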
63

Abstract interpretation of domain-specific embedded languages

Backhouse, Kevin Stuart January 2002 (has links)
A domain-specific embedded language (DSEL) is a domain-specific programming language with no concrete syntax of its own. Defined as a set of combinators encapsulated in a module, it borrows the syntax and tools (such as type-checkers and compilers) of its host language; hence it is economical to design, introduce, and maintain. Unfortunately, this economy is counterbalanced by a lack of room for growth. DSELs cannot match sophisticated domain-specific languages that offer tools for domain-specific error-checking and optimisation. These tools are usually based on syntactic analyses, so they do not work on DSELs. Abstract interpretation is a technique ideally suited to the analysis of DSELs, due to its semantic, rather than syntactic, approach. It is based upon the observation that analysing a program is equivalent to evaluating it over an abstract semantic domain. The mathematical properties of the abstract domain are such that evaluation reduces to solving a mutually recursive set of equations. This dissertation shows how abstract interpretation can be applied to a DSEL by replacing it with an abstract implementation of the same interface; evaluating a program with the abstract implementation yields an analysis result, rather than an executable. The abstract interpretation of DSELs provides a foundation upon which to build sophisticated error-checking and optimisation tools. This is illustrated with three examples: an alphabet analyser for CSP, an ambiguity test for parser combinators, and a definedness test for attribute grammars. Of these, the ambiguity test for parser combinators is probably the most important example, due to the prominence of parser combinators and their rather conspicuous lack of support for the well-known LL(k) test. In this dissertation, DSELs and their signatures are encoded using the polymorphic lambda calculus. This allows the correctness of the abstract interpretation of DSELs to be proved using the parametricity theorem: safety is derived for free from the polymorphic type of a program. Crucially, parametricity also solves a problem commonly encountered by other analysis methods: it ensures the correctness of the approach in the presence of higher-order functions.
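A minimal sketch of this observation (illustrative only, not taken from the dissertation): a tiny parser-combinator "DSEL" is written once against an interface, and an abstract implementation of the same interface computes a simple analysis, here nullability, instead of building a parser. The combinator names and the choice of nullability as the analysis are assumptions made for the example.

```python
# A grammar is written once, parameterised by the combinator implementation.
def grammar(c):
    digit = c.alt(*[c.sym(d) for d in "0123456789"])
    return c.seq(digit, c.opt(c.sym(".")))        # e.g. "7" or "7."

class Concrete:
    """Combinators that build real recognisers: recogniser(s) -> set of remainders."""
    def sym(self, ch):   return lambda s: {s[1:]} if s[:1] == ch else set()
    def seq(self, p, q): return lambda s: {r2 for r1 in p(s) for r2 in q(r1)}
    def alt(self, *ps):  return lambda s: set().union(*[p(s) for p in ps])
    def opt(self, p):    return lambda s: {s} | p(s)

class Nullability:
    """Abstract implementation: a 'parser' is just a boolean -- can it accept ''?"""
    def sym(self, ch):   return False
    def seq(self, p, q): return p and q
    def alt(self, *ps):  return any(ps)
    def opt(self, p):    return True

recognise = grammar(Concrete())
print(recognise("7.rest"))        # concrete evaluation: {'rest', '.rest'}
print(grammar(Nullability()))     # analysis result, no parsing at all: False
```

Running the same grammar with a different abstract implementation (for example, one computing first sets or testing for ambiguity) yields a different analysis without changing the grammar itself, which is exactly the economy the dissertation exploits.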
64

Instrumentation and Evaluation for Dynamic Program Analysis

Marek, Lukáš January 2014 (has links)
Dynamic program analysis provides essential information during the later phases of application development. It helps with debugging, profiling, performance optimization, and vulnerability detection. Despite that, support for creating custom dynamic analysis tools, especially in the domain of managed languages, is rather limited. In this thesis, we present two systems that help improve application observability on the Java platform. DiSL is a language, accompanied by a framework, that allows simple and flexible instrumentation for dynamic program analysis. DiSL provides high-level abstractions to enable quick prototyping even for programmers without knowledge of Java internals. A skilled analysis developer gains full control over the instrumentation process and thus does not have to worry about unwanted allocations or hidden execution overhead. ShadowVM is a platform that provides isolation between the observed application and the analysis environment. To reduce the amount of possible interactions between the analysis and the application, ShadowVM offloads analysis events out of the context of the application. Even though isolation is the primary focus of the platform, ShadowVM introduces a number of techniques to remain comparable in performance and to provide a programming model similar to existing...
65

Automated reasoning in separation logic with inductive definitions

Serban, Cristina 31 May 2018 (has links)
The main contribution of this thesis is a sound and complete proof system for entailments between inductive predicates, which are frequently encountered when verifying programs that work with dynamically allocated recursive data structures. We introduce a generalized proof system for first-order logic, and then adapt it to separation logic, a framework that addresses many of the difficulties posed by reasoning about dynamically allocated heaps. Soundness and completeness are ensured through four semantic restrictions, and we also propose a proof-search semi-algorithm that becomes a decision procedure for the entailment problem when the semantic restrictions hold. This higher-order reasoning about entailments requires first-order decision procedures for the underlying logic when applying inference rules and during proof search. Thus, we provide two decision procedures for separation logic, covering the quantifier-free and the Exists*Forall*-quantified fragments, which were integrated into the open-source, DPLL(T)-based SMT solver CVC4. Finally, we also give an implementation of our proof system for separation logic, which uses these decision procedures. Given some inductive predicate definitions and an entailment query as input, a warning is issued when one or more semantic restrictions are violated. If the entailment is found to be valid, the output is a proof. Otherwise, one or more counterexamples are provided.
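For readers unfamiliar with such entailments, here is a standard textbook-style example (not drawn from the thesis) of an inductive list-segment predicate and an entailment query of the kind the proof system is designed to decide:

```latex
% Inductive list-segment predicate and an entailment between inductive predicates.
\[
  \mathsf{ls}(x, y) ::= (x = y \land \mathrm{emp})
      \;\lor\; \exists z.\; x \mapsto z * \mathsf{ls}(z, y)
\]
\[
  \text{Entailment query:}\qquad
  \mathsf{ls}(x, y) * \mathsf{ls}(y, \mathrm{nil}) \models \mathsf{ls}(x, \mathrm{nil})
\]
```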
66

Analyzing Data Lineage in Database Frameworks

Eliáš, Richard January 2019 (has links)
Large information systems are typically implemented using frameworks and libraries. An important property of such systems is data lineage: the flow of data loaded from one system (e.g., a database), through the program code, and back to another system. We implemented the Java Resolver tool for data lineage analysis of Java programs, based on the Symbolic analysis library, which computes data lineage for simple Java applications. The library supports only the JDBC and I/O APIs to identify the sources and sinks of data flow. We proposed several architecture changes to the library to make it easily extensible by plugins that can add support for new data-processing frameworks. We implemented such plugins for a few frameworks with different approaches to accessing data, including Spring JDBC, MyBatis, and Kafka. Our tests show that this approach works and can be usable in practice.
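As a purely conceptual illustration of the data-lineage notion (this is not the Java Resolver tool or its plugin API), the sketch below tags values read from a hypothetical source with their origin, propagates the tags through computation, and reports them at a sink; all system and query names are made up.

```python
class Tracked:
    """A value paired with the set of sources it was derived from."""
    def __init__(self, value, sources):
        self.value, self.sources = value, set(sources)
    def combine(self, other, op):
        return Tracked(op(self.value, other.value), self.sources | other.sources)

def read_from(system, query, raw_value):
    # Source: the lineage tag records where the data came from.
    return Tracked(raw_value, {f"{system}:{query}"})

def write_to(system, tracked):
    # Sink: report which sources flow into this write.
    print(f"write to {system}: value={tracked.value!r} lineage={sorted(tracked.sources)}")

# Hypothetical flow: two database columns are combined and sent to a message queue.
name  = read_from("db.users",  "SELECT name FROM users",   "alice")
score = read_from("db.scores", "SELECT score FROM scores", 42)
write_to("queue.report", name.combine(score, lambda a, b: f"{a}:{b}"))
# -> write to queue.report: value='alice:42' lineage=['db.scores:...', 'db.users:...']
```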
67

FUZZING HARD-TO-COVER CODE

Hui Peng (10746420) 06 May 2021 (has links)
Fuzzing is a simple yet effective approach to discovering bugs by repeatedly testing the target system with randomly generated inputs. In this thesis, we identify several limitations in state-of-the-art fuzzing techniques: (1) the coverage wall issue: fuzzer-generated inputs cannot bypass complex sanity checks in the target programs and are unable to cover code paths protected by such checks; (2) inability to adapt to interfaces for injecting fuzzer-generated inputs; one important example of such an interface is the software/hardware interface between drivers and their devices; (3) dependency on code coverage feedback, which makes it hard to apply fuzzing to targets where code coverage collection is challenging (due to proprietary components or special software design).

To address the coverage wall issue, we propose T-Fuzz, a novel approach that overcomes the issue from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the coverage wall is reached, a lightweight, dynamic-tracing-based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs to be discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program.

By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, the LAVA-M dataset, and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, we found 4 new bugs in previously-fuzzed programs and libraries.

To address the inability to adapt to interfaces, we propose USBFuzz. We target the USB interface, fuzzing across the software/hardware barrier. USBFuzz uses device emulation to inject fuzzer-generated input into drivers under test, and applies coverage-guided fuzzing to device drivers if code coverage collection is supported by the kernel. At its core, USBFuzz emulates a special USB device that provides data to the device driver (when it performs I/O operations). This allows us to fuzz the input space of drivers from the device's perspective, an angle that is difficult to achieve with real hardware. USBFuzz discovered 53 bugs in Linux (of which 37 are new, and 36 are memory bugs of high security impact, potentially allowing arbitrary read or write in the kernel address space), one bug in FreeBSD, four bugs (resulting in Blue Screens of Death) in Windows, and three bugs (two causing an unplanned restart, one freezing the system) in MacOS.

To break the dependency on code coverage feedback, we propose WebGLFuzzer. To fuzz the WebGL interface (a set of JavaScript APIs in browsers allowing high-performance graphics rendering that takes advantage of GPU acceleration on the device), where code coverage collection is challenging, WebGLFuzzer internally uses a log-guided fuzzing technique. WebGLFuzzer does not depend on code coverage feedback; instead, it makes use of the log messages emitted by browsers to guide its input mutation. Compared with coverage-guided fuzzing, our log-guided fuzzing technique is able to perform more meaningful mutation under the guidance of the log messages. To this end, WebGLFuzzer uses static analysis to identify which argument to mutate or which API call to insert into the current program to fix the internal WebGL program state, given a log message emitted by the browser. WebGLFuzzer is under evaluation and so far it has found 6 bugs, one of which is able to freeze the X-Server.
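The following is a generic, minimal sketch of the coverage-guided mutational loop that approaches like T-Fuzz build on (not T-Fuzz itself); the stand-in `target` contains a 4-byte magic-value check of the kind that causes the coverage wall, and the stall branch marks the point where T-Fuzz would instead detect and remove the failing check and continue fuzzing the transformed program. All names, thresholds, and the toy coverage model are assumptions made for the example.

```python
import random

def target(data: bytes):
    """Stand-in program: returns the set of covered points, raises on the bug."""
    cov = {"entry"}
    if len(data) >= 4:
        cov.add("len_ok")
        if data[:4] == b"FUZZ":                 # hard-to-satisfy sanity check
            cov.add("magic_ok")
            if len(data) > 8:
                raise RuntimeError("bug reached")   # the code we want to cover
    return cov

def mutate(data: bytes) -> bytes:
    buf = bytearray(data)
    buf[random.randrange(len(buf))] = random.randrange(256)
    if random.random() < 0.3:
        buf.append(random.randrange(256))
    return bytes(buf)

corpus, seen_cov, stall = [b"seed"], set(), 0
for _ in range(200_000):
    candidate = mutate(random.choice(corpus))
    try:
        cov = target(candidate)
    except RuntimeError:
        print("crash found with input", candidate)
        break
    if cov - seen_cov:                          # new coverage: keep this input
        seen_cov |= cov
        corpus.append(candidate)
        stall = 0
    else:
        stall += 1
    if stall > 50_000:
        # Coverage wall: a plain mutational fuzzer almost never guesses the magic bytes.
        # T-Fuzz's answer: remove the failing check and fuzz the transformed program.
        print("coverage wall hit; covered so far:", sorted(seen_cov))
        break
```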
68

Cyber-Physical Analysis and Hardening of Robotic Aerial Vehicle Controllers

Taegyu Kim (10716420) 06 May 2021 (has links)
Robotic aerial vehicles (RAVs) have been increasingly deployed in various areas (e.g., commercial, military, scientific, and entertainment). However, RAVs' security and safety issues can arise not only from either the “cyber” domain (e.g., control software) or the “physical” domain (e.g., the vehicle control model), but also from their interplay. Unfortunately, existing work has focused mainly on either “cyber-centric” or “control-centric” approaches, and such a single-domain focus can overlook the security threats caused by the interplay between the cyber and physical domains.

In this thesis, we present cyber-physical analysis and hardening to secure RAV controllers. Through a combination of program analysis and vehicle control modeling, we developed novel techniques to (1) connect the cyber and physical domains and then (2) analyze the individual domains and their interplay. Specifically, we describe how to detect bugs after RAV accidents using provenance (Mayday), how to proactively find bugs using fuzzing (RVFuzzer), and how to patch vulnerable firmware using binary patching (DisPatch). As a result, we have found 91 new bugs in modern RAV control programs; their developers have confirmed 32 cases and patched 11 cases.
69

Verifying Data-Oriented Gadgets in Binary Programs to Build Data-Only Exploits

Sisco, Zachary David 08 August 2018 (has links)
No description available.
70

PRECISION IMPROVEMENT AND COST REDUCTION FOR DEFECT MINING AND TESTING

Sun, Boya 31 January 2012 (has links)
No description available.
