About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Automated coverage calculation and test case generation

Morrison, George Campbell 03 1900
Thesis (MScEng)--Stellenbosch University, 2012.

ENGLISH ABSTRACT: This research combines symbolic execution, a formal method of static analysis, with various test adequacy criteria, to explore the effectiveness of using symbolic execution for calculating code coverage on a program's existing JUnit test suites. Code coverage is measured with a number of test adequacy criteria, including statement coverage, branch coverage, condition coverage, method coverage, class coverage, and loop coverage. The results of the code coverage calculation are then used to automatically generate JUnit test cases for areas of a program that are not sufficiently covered. The level of redundancy of each test case is also calculated during coverage calculation, thereby identifying fully redundant and partially redundant test cases. The combination of symbolic execution and code coverage calculation is extended to perform coverage calculation during a manual execution of a program, allowing testers to measure the effectiveness of manual testing. This is implemented as an Eclipse plug-in, named ATCO, which attempts to take advantage of the Eclipse workspace and extensible user interface environment to improve usability of the tool by minimizing the user interaction required to use it. The code coverage calculation process uses constraint solving to determine method parameter values that reach specific areas in the program. Constraint solving is an expensive computation, so the tool was parallelised using Java's Concurrency package to reduce its overall execution time.

AFRIKAANSE OPSOMMING (translated): This research combines symbolic execution, a formal method of static analysis, with various test adequacy criteria, to investigate the effectiveness of using symbolic execution for calculating code coverage on a program's existing JUnit test suites. Code coverage is measured with several test adequacy criteria, including statement coverage, branch coverage, condition coverage, method coverage, class coverage, and loop coverage. The results of the code coverage calculations are then used to automatically generate JUnit test cases for areas of a program that are not effectively covered. The level of redundancy of each test case is also calculated during the coverage calculation, thereby identifying fully redundant and partially redundant test cases. The combination of symbolic execution and code coverage calculation is extended to perform coverage calculation during a user-controlled execution of a program, so that the code coverage of such an execution can be measured; this allows testers to measure the effectiveness of their manual execution. The above is implemented as an Eclipse plug-in, named ATCO, which aims to take advantage of the Eclipse workspace and its extensible user interface environment to improve ATCO's usability by reducing the user interaction required to use it. The code coverage calculation process uses constraint solving to compute method input values that reach specific areas in the program. Constraint solving is an expensive computation, so ATCO was parallelised, using Java's Concurrency package, to reduce its overall execution time.
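The abstract singles out constraint solving as the expensive step and notes that ATCO was parallelised with Java's Concurrency package. The sketch below is a minimal, hypothetical rendering of that idea only; the names UncoveredBranch, TestInputs, and solve are invented for illustration and are not ATCO's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelCoverageSolver {

    // Placeholder for a location the coverage calculation found uncovered.
    record UncoveredBranch(String methodName, int branchId) {}

    // Placeholder result: argument values that drive execution to the branch.
    record TestInputs(UncoveredBranch target, List<Object> arguments) {}

    // Stand-in for the expensive step: derive the branch's path condition
    // symbolically and ask a constraint solver for satisfying parameter values.
    static TestInputs solve(UncoveredBranch branch) {
        return new TestInputs(branch, List.of(0, "example"));
    }

    static List<TestInputs> solveAll(List<UncoveredBranch> branches)
            throws InterruptedException, ExecutionException {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        try {
            List<Callable<TestInputs>> tasks = new ArrayList<>();
            for (UncoveredBranch branch : branches) {
                tasks.add(() -> solve(branch));   // one solver task per uncovered branch
            }
            List<TestInputs> inputs = new ArrayList<>();
            for (Future<TestInputs> done : pool.invokeAll(tasks)) {
                inputs.add(done.get());           // inputs would feed JUnit test generation
            }
            return inputs;
        } finally {
            pool.shutdown();
        }
    }
}
```

ExecutorService.invokeAll runs the independent solver tasks concurrently and blocks until all complete, which mirrors the motivation given in the abstract for parallelising the constraint-solving step.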
22

Integrating programming languages and databases via program analysis and language design

Wiedermann, Benjamin Alan 23 August 2010
Researchers and practitioners alike have long sought to integrate programming languages and databases. Today's integration solutions focus on the data-types of the two domains, but today's programs lack transparency. A transparently persistent program operates over all objects in a uniform manner, regardless of whether those objects reside in memory or in a database. Transparency increases modularity and lowers the barrier of adoption in industry. Unfortunately, fully transparent programs perform so poorly that no one writes them. The goal of this dissertation is to increase the performance of these programs to make transparent persistence a viable programming paradigm.

This dissertation contributes two novel techniques that integrate programming languages and databases. Our first contribution--called query extraction--is based purely on program analysis. Query extraction analyzes a transparent, object-oriented program that retrieves and filters collections of objects. Some of these objects may be persistent, in which case the program contains implicit queries of persistent data. Our interprocedural program analysis extracts these queries from the program, translates them to explicit queries, and transforms the transparent program into an equivalent one that contains the explicit queries. Query extraction enables programmers to write programs in a familiar, modular style and to rely on the compiler to transform their program into one that performs well.

Our second contribution--called RBI-DB+--is an extension of a new programming language construct called a batch block. A batch block provides a syntactic barrier around transparent code. It also provides a latency guarantee: If the batch block compiles, then the code that appears in it requires only one client-server communication trip. Researchers previously have proposed batch blocks for databases. However, batch blocks cannot be modularized or composed, and database batch blocks do not permit programmers to modify persistent data. We extend database batch blocks to address these concerns and formalize the results.

Today's technologies integrate the data-types of programming languages and databases, but they discourage programmers from using procedural abstraction. Our contributions restore procedural abstraction's use in enterprise applications, without sacrificing performance. We argue that industry should combine our contributions with data-type integration. The result would be a robust, practical integration of programming languages and databases.
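To make the first contribution concrete, the sketch below contrasts a transparent, in-memory filter with the kind of explicit query an extraction-style transformation could produce. It is an illustration under assumed names (Employee, wellPaidTransparent, extractedQuery), not code from the dissertation, and it shows the target as a plain SQL string rather than whatever intermediate form the actual system emits.

```java
import java.util.ArrayList;
import java.util.List;

public class QueryExtractionSketch {

    // Invented example class; some Employee objects may be persistent.
    record Employee(String name, String department, double salary) {}

    // Transparent style: the implicit "query" is just a loop and an if,
    // written the same way whether the objects live in memory or in a database.
    static List<Employee> wellPaidTransparent(List<Employee> all, double min) {
        List<Employee> result = new ArrayList<>();
        for (Employee e : all) {
            if (e.salary() > min && e.department().equals("Sales")) {
                result.add(e);
            }
        }
        return result;
    }

    // After extraction: the same filter expressed as an explicit query the
    // database can evaluate, so only matching rows cross the boundary.
    static String extractedQuery() {
        return "SELECT name, department, salary FROM Employee "
             + "WHERE salary > ? AND department = 'Sales'";
    }
}
```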
23

Probabilistic Program Analysis for Software Component Reliability

Mason, Dave January 2002
Components are widely seen by software engineers as an important technology to address the "software crisis". An important aspect of components in other areas of engineering is that system reliability can be estimated from the reliability of the components. We show how commonly proposed methods of reliability estimation and composition for software are inadequate because of differences between the models and the actual software systems, and we show where the assumptions from system reliability theory cause difficulty when applied to software. This thesis provides an approach to reliability that makes it possible, if not currently plausible, to compose component reliabilities so as to accurately and safely determine system reliability.

Firstly, we extend previous work on input sub-domains, or partitions, such that our sub-domains can be sampled in a statistically sound way. We provide an algorithm to generate the most important partitions first, which is particularly important when there are an infinite number of input sub-domains. We combine analysis and testing to provide useful reliabilities for the various input sub-domains of a system, or component. This provides a methodology for calculating true reliability for a software system for any accurate statistical distribution of input values. Secondly, we present a calculus for probability density functions that permits accurately modeling the input distribution seen by each component in the system - a critically important issue in dealing with reliability of software components. Finally, we provide the system structuring calculus that allows a system designer to take components from component suppliers that have been built according to our rules and to determine the resulting system reliability. This can be done without access to the actual components.

This work raises many issues, particularly about scalability of the proposed techniques and about the ability of the system designer to know the input profile to the level and kind of accuracy required. There are also large classes of components where the techniques are currently intractable, but we see this work as an important first step.
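The thesis develops a full calculus over probability density functions; the snippet below shows only the simpler weighted-sum intuition it builds on, namely that reliability over input sub-domains is the profile-weighted combination of per-sub-domain reliabilities. All names and numbers are illustrative, not taken from the thesis.

```java
public class SubdomainReliability {

    // probability[i]: chance an input falls in sub-domain i (should sum to ~1)
    // reliability[i]: estimated probability of correct behaviour on sub-domain i
    static double overallReliability(double[] probability, double[] reliability) {
        double total = 0.0;
        for (int i = 0; i < probability.length; i++) {
            total += probability[i] * reliability[i];
        }
        return total;
    }

    public static void main(String[] args) {
        double[] operationalProfile = {0.7, 0.2, 0.1};   // how often each sub-domain is exercised
        double[] perSubdomain       = {0.9999, 0.995, 0.90};
        System.out.printf("estimated reliability: %.4f%n",
                overallReliability(operationalProfile, perSubdomain));
    }
}
```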
24

Retrowrite: Statically Instrumenting COTS Binaries for Fuzzing and Sanitization

Sushant Dinesh (6640856) 10 June 2019
End users of closed-source software currently cannot easily analyze the security of programs or patch them if flaws are found. Notably, end users can include developers who use third party libraries. The current state of the art for coverage-guided binary fuzzing or binary sanitization is dynamic binary translation, which results in prohibitive overhead. Existing static rewriting techniques cannot fully recover symbolization information, and so have difficulty modifying binaries to track code coverage for fuzzing or add security checks for sanitizers.

The ideal solution for adding instrumentation is a static rewriter that can intelligently add in the required instrumentation as if it were inserted at compile time. This requires analysis to statically disambiguate between references and scalars, a problem known to be undecidable in the general case. We show that recovering this information is possible in practice for the most common class of software and libraries: 64-bit, position-independent code. Based on our observation, we design a binary-rewriting instrumentation to support American Fuzzy Lop (AFL) and AddressSanitizer (ASan), and show that we achieve compiler levels of performance while retaining precision. Binaries rewritten for coverage-guided fuzzing using RetroWrite are identical in performance to compiler-instrumented binaries and outperform the default QEMU-based instrumentation by 7.5x while triggering more bugs. Our implementation of binary-only AddressSanitizer is 3x faster than Valgrind memcheck, the state-of-the-art binary-only memory checker, and detects 80% more bugs in our security evaluation.
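RetroWrite instruments x86-64 assembly, not Java, but the bookkeeping that AFL-style coverage instrumentation inserts at each basic block is compact enough to sketch in any language. The following Java rendering of the edge-coverage bitmap idea is a conceptual illustration only, not RetroWrite's implementation.

```java
public class EdgeCoverageSketch {
    static final int MAP_SIZE = 1 << 16;
    static final byte[] coverageMap = new byte[MAP_SIZE];
    static int prevLocation = 0;

    // Called (conceptually) at the start of each basic block with a
    // randomly assigned block id, as AFL-style instrumentation does.
    static void onBasicBlock(int currentLocation) {
        int edge = (currentLocation ^ prevLocation) & (MAP_SIZE - 1);
        coverageMap[edge]++;                  // record that this edge was taken
        prevLocation = currentLocation >>> 1; // shift so A->B differs from B->A
    }
}
```

The fuzzer inspects the bitmap after each run to decide whether an input exercised new edges; static rewriting aims to insert the equivalent of onBasicBlock into the binary with compiler-like overhead.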
26

Systematic techniques for efficiently checking Software Product Lines

Kim, Chang Hwan Peter 25 February 2014
A Software Product Line (SPL) is a family of related programs, each of which is defined by a combination of features. By developing related programs together, an SPL simultaneously reduces programming effort and satisfies multiple sets of requirements. Testing an SPL efficiently is challenging because a property must be checked for all the programs in the SPL, the number of which can be exponential in the number of features.

In this dissertation, we present a suite of complementary static and dynamic techniques for efficient testing and runtime monitoring of SPLs; the techniques fall into two categories. The first prunes programs, termed configurations, that are irrelevant to the property being tested. More specifically, for a given test, a static analysis identifies features that can influence the test outcome, so that the test needs to be run only on programs that include these features. A dynamic analysis counterpart also eliminates configurations that do not have to be tested, but does so by checking a simpler property and can be faster and more scalable. In addition, for runtime monitoring, a static analysis identifies configurations that can violate a safety property, and only these configurations need to be monitored.

When no configurations can be pruned, either by design of the test or due to ineffectiveness of program analyses, runtime similarity between configurations, arising from design similarity between configurations of a product line, is exploited. In particular, shared execution runs all the configurations together, executing bytecode instructions common to the configurations just once. Deferred execution improves on shared execution by allowing multiple memory locations to be treated as a single memory location, which can increase the amount of sharing for object-oriented programs and for programs using arrays.

The techniques have been evaluated, and the results demonstrate that the techniques can be effective and can advance the idea that despite the feature combinatorics of an SPL, its structure can be exploited by automated analyses to make testing more efficient.
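As a small illustration of the pruning idea (hypothetical names, not the dissertation's tooling): if the static analysis reports that only a handful of features can influence a test's outcome, the test only needs to run on the combinations of those features rather than on every configuration of the product line.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RelevantFeaturePruning {

    // Enumerate feature assignments over the influencing features only.
    static List<Map<String, Boolean>> relevantConfigurations(List<String> influencing) {
        List<Map<String, Boolean>> configs = new ArrayList<>();
        int n = influencing.size();
        for (int bits = 0; bits < (1 << n); bits++) {
            Map<String, Boolean> config = new HashMap<>();
            for (int i = 0; i < n; i++) {
                config.put(influencing.get(i), (bits & (1 << i)) != 0);
            }
            configs.add(config);
        }
        return configs;
    }

    public static void main(String[] args) {
        // e.g. a 20-feature SPL where the analysis says only two features matter:
        // 4 test runs instead of up to 2^20.
        for (Map<String, Boolean> config : relevantConfigurations(List.of("LOGGING", "ENCRYPTION"))) {
            System.out.println("run test under " + config);
        }
    }
}
```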
27

Optimizing System Performance and Dependability Using Compiler Techniques

Rajagopalan, Mohan January 2006
As systems become more complex, there are increasing demands for improvement with respect to attributes such as performance, dependability, and security. Optimization is defined as the process of making the most effective use of a set of resources with respect to some attribute. Existing optimization techniques, however, have two fundamental limitations. They target individual parts of a system without considering the potentially significant global picture, and they are designed to improve a single attribute at a time. These limitations impose significant restrictions on the kinds of optimization possible, the effectiveness of the techniques, and the ability to improve the optimization process itself.

This dissertation presents holistic system optimization, a new approach to optimization based on taking a broad view of a system. Unlike current approaches, holistic optimizations consider different kinds of interactions at multiple levels in a system, and target a variety of metrics uniformly. A key component of this research has been the use of proven compiler techniques to ensure transparency, automation, and correctness. These techniques have been implemented in Cassyopia, a software prototype of a framework for performing holistic optimization.

The core of this work is three new holistic optimizations, which are also presented. The first describes profile-directed static optimizations designed to improve the performance of event-based programs by spanning boundaries that separate code that raises events from handlers that field them. The second, system call clustering, improves the system call behavior of an entire program by grouping together calls that can be executed in a single boundary crossing. In this case, the optimization spans kernel and user address spaces. Finally, authenticated system calls optimize system security through a novel implementation of an efficient system call monitor. This example demonstrates how the new approach can be used to create new optimizations that not only span address space boundaries but also target attributes such as dependability. All of these optimizations involve the application of standard compiler techniques in non-traditional contexts and demonstrate how systems can be improved beyond what is possible using existing techniques.
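Cassyopia performs system call clustering through compiler transformations at the user/kernel boundary, which cannot be reproduced in a short listing. As a loose analogy only, the sketch below shows the effect being sought -- several small writes, each a separate boundary crossing, replaced by a single gathering write -- using Java NIO's standard FileChannel API.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GatheringWriteSketch {
    public static void main(String[] args) throws IOException {
        ByteBuffer[] parts = {
            ByteBuffer.wrap("header\n".getBytes(StandardCharsets.UTF_8)),
            ByteBuffer.wrap("body\n".getBytes(StandardCharsets.UTF_8)),
            ByteBuffer.wrap("footer\n".getBytes(StandardCharsets.UTF_8)),
        };
        try (FileChannel out = FileChannel.open(Path.of("out.txt"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            out.write(parts); // one gathering write instead of three separate writes
        }
    }
}
```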
28

Compiler Support for Fine-grain Software-only Checkpointing

Zhao, Cheng Yan 13 August 2013
Checkpointing support allows program execution to roll back to an earlier program point, discarding any modifications made since that point. Existing software-based checkpointing methods are mainly libraries that snapshot all of working memory, and hence have prohibitive overhead for many potential applications. In this thesis we present a light-weight, fine-grain checkpointing framework implemented entirely in software through compiler transformations and optimizations. A programmer can specify arbitrary checkpoint regions via a simple API, and the compiler automatically transforms the code to implement the checkpoint at the granularity of individual stores, optimizing to remove redundancy.

We explore three application areas for this support. First, we investigate its application to debugging, in particular by providing the ability to rewind to an arbitrarily-placed point in a buggy program's execution. A study using BugBench applications shows that our compiler-based approach incurs more than 100X less overhead than full-process checkpointing. Second, we demonstrate that compiler-based checkpointing support can be leveraged to free the programmer from manually implementing and maintaining software rollback mechanisms when coding a backtracking algorithm, with runtime overhead of only 15% compared to the manual implementation. Finally, we utilize the efficient software-only checkpointing to support overlapping execution with delinquent loads, demonstrating both control-speculation and data-speculation transformations. We further propose a theoretical speculative timing model and confirm its prediction effectiveness with real-machine workloads.
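The store-granularity checkpointing described above behaves, at runtime, like an undo log: before each store in a checkpoint region the old value is recorded, and rollback replays the records in reverse. The sketch below illustrates only that runtime effect with hand-written calls; in the thesis the compiler inserts the equivalent logging automatically, and the class and method names here are invented.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

public class UndoLogSketch {
    private final Deque<Runnable> undoLog = new ArrayDeque<>();

    // The compiler would insert a call like this before every store it cannot
    // prove redundant; here the caller does it by hand.
    <T> void loggedStore(T oldValue, Consumer<T> restore, Runnable store) {
        undoLog.push(() -> restore.accept(oldValue)); // remember how to undo the store
        store.run();                                  // then perform the store
    }

    void rollback() {
        while (!undoLog.isEmpty()) {
            undoLog.pop().run(); // undo stores in reverse order
        }
    }

    void commit() {
        undoLog.clear(); // discard the log when the region completes normally
    }

    public static void main(String[] args) {
        int[] cell = {1};
        UndoLogSketch checkpoint = new UndoLogSketch();
        checkpoint.loggedStore(cell[0], v -> cell[0] = v, () -> cell[0] = 42);
        checkpoint.rollback();
        System.out.println(cell[0]); // prints 1: the store was undone
    }
}
```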
30

Heavyweight Pattern Mining in Attributed Flow Graphs

Simoes Gomes, Carolina Unknown Date
No description available.
