31

Characterization of Cost Excess in Cloud Applications

January 2012 (has links)
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented; it consists mostly of dynamic measurements but incorporates calculations based on the static analysis approach. This hybrid approach uses operational profiles to increase the precision and usefulness of the calculations. / Dissertation/Thesis / Ph.D. Computer Science 2012
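To make the threshold analysis concrete, here is a minimal sketch assuming a hypothetical transaction-level cost model: the control flow graph is a DAG whose edges carry the monetary cost of the cloud resources they consume, and a depth-first walk collects the paths whose accumulated cost exceeds a budget. The graph, the per-edge dollar amounts, and all names are illustrative; a real WCET-style analysis would first bound loop iterations so the path set is finite.

```python
# Hypothetical control-flow graph of one cloud transaction; each edge is
# (successor, monetary cost in USD of the resources consumed on that edge).
EDGES = {
    "entry":     [("validate", 0.0001), ("reject", 0.00005)],
    "validate":  [("query_db", 0.0004), ("cache_hit", 0.00002)],
    "query_db":  [("render", 0.0003)],
    "cache_hit": [("render", 0.0001)],
    "render":    [("exit", 0.0002)],
    "reject":    [("exit", 0.0)],
    "exit":      [],
}

def paths_exceeding(graph, start, end, threshold):
    """Collect control-flow paths whose summed cost exceeds the threshold."""
    results = []
    def walk(node, path, cost):
        if node == end:
            if cost > threshold:
                results.append((path, cost))
            return
        for succ, edge_cost in graph[node]:
            walk(succ, path + [succ], cost + edge_cost)
    walk(start, [start], 0.0)
    return results

for path, cost in paths_exceeding(EDGES, "entry", "exit", 0.0005):
    print(" -> ".join(path), f"${cost:.5f}")   # the database path exceeds budget
```

Sorting the offending paths by cost, or diffing them against cheaper alternatives, would point to the path sections that contribute most to the excess.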
32

Exploiting structure for scalable software verification

Babić, Domagoj 11 1900 (has links)
Software bugs are expensive. Recent estimates by the US National Institute of Standards and Technology claim that the cost of software bugs to the US economy alone is approximately 60 billion USD annually. As society becomes increasingly software-dependent, bugs also reduce our productivity and threaten our safety and security. Decreasing these direct and indirect costs represents a significant research challenge as well as an opportunity for businesses. Automatic software bug-finding and verification tools have the potential to revolutionize the software engineering industry by improving reliability and decreasing development costs. Since software analysis is in general undecidable, automatic tools have to use various abstractions to make the analysis computationally tractable. Abstraction is a double-edged sword: coarse abstractions, in general, yield easier verification, but also less precise results. This thesis focuses on exploiting the structure of software for abstracting away irrelevant behavior. Programmers tend to organize code into objects and functions, which effectively represent natural abstraction boundaries. Humans use such structural abstractions to simplify their mental models of software and to construct informal explanations of why a piece of code should work. A natural question to ask is: how can automatic bug-finding tools exploit the same natural abstractions? This thesis offers possible answers. More specifically, I present three novel ways to exploit structure at three different steps of the software analysis process. First, I show how symbolic execution can preserve the data-flow dependencies of the original code while constructing compact symbolic representations of programs. Second, I propose structural abstraction, which exploits the structure preserved by the symbolic execution. Structural abstraction solves a long-standing open problem: scalable interprocedural path- and context-sensitive program analysis. Finally, I present an automatic tuning approach that exploits the fine-grained structural properties of software (namely, data- and control-dependency) for faster property checking. This novel approach resulted in a 500-fold speedup over the best previous techniques. Automatic tuning not only redefined the limits of automatic software analysis tools, but has also found its way into other domains (like model checking), demonstrating the generality and applicability of this idea. / Science, Faculty of / Computer Science, Department of / Graduate
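As a rough illustration of the expand-on-demand idea behind structural abstraction, here is a toy sketch: a call site is first kept as an opaque placeholder symbol, and the callee's symbolic summary is inlined only when the property being checked actually constrains the call's result. The names, summary format, and expansion test are assumptions for illustration, not the thesis's implementation.

```python
import itertools

fresh = itertools.count()

class Call:
    """A call site left uninterpreted: just an opaque placeholder symbol."""
    def __init__(self, callee, args):
        self.callee, self.args = callee, args
        self.symbol = f"call{next(fresh)}"

# Per-function symbolic summaries as (path condition, symbolic result) pairs.
SUMMARIES = {
    "abs": lambda x: [(f"{x} >= 0", f"{x}"), (f"{x} < 0", f"-({x})")],
}

def expand_if_needed(call, symbols_in_property):
    """Inline the callee's summary only if the property mentions the call."""
    if call.symbol not in symbols_in_property:
        return [("true", call.symbol)]          # stays abstract: cheap
    return SUMMARIES[call.callee](*call.args)   # refined on demand

c = Call("abs", ["y"])
print(expand_if_needed(c, {"z"}))        # [('true', 'call0')]
print(expand_if_needed(c, {c.symbol}))   # [('y >= 0', 'y'), ('y < 0', '-(y)')]
```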
33

A taxonomical review of software verification techniques: an illustration using discrete-event simulation

Whitner, Richard B. 21 November 2012 (has links)
The use of simulation and modeling as a technique for solving today's complex problems is ever-increasing. Correspondingly, the demands placed on the software which serves as a computer-executable representation of the simulation model are increasing. With the increased complexity of simulation models comes a greater need for model verification, particularly programmed model verification. Unfortunately, current model verification technology is lacking in techniques which satisfy the verification needs. Specifically, there are few guidelines for performing programmed model verification. There is, however, an abundance of software verification techniques which are applicable to simulation model verification. An extensive review of techniques applicable to simulation programmed model verification is presented using simulation and modeling terminology. A taxonomy for programmed model verification methods is developed. The usefulness of the taxonomy is twofold: (1) the taxonomy provides an approach for applying software verification techniques to the problem of programmed model verification, and (2) the breadth of the taxonomy provides a broad spectrum of perspectives from which to assess the credibility of simulation results. A simulation case study demonstrates the usefulness of the taxonomy and some of the verification techniques. By referencing this work, one can determine which techniques to use, and when, throughout the development life cycle, as well as how to perform each technique, how difficult it will be to apply, and how effective it will be. The simulation modeler, as well as the software engineer, will find the taxonomy and techniques valuable tools for guiding verification efforts. / Master of Science
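One way to picture the taxonomy's guidance is as a record per technique that captures when in the life cycle it applies, how hard it is to apply, and how effective it tends to be. The sample entries and labels below are illustrative assumptions, not quoted from the thesis.

```python
# Hypothetical taxonomy records; fields mirror the guidance described above.
TAXONOMY = [
    {"technique": "desk checking",        "class": "informal",
     "phases": ["design", "coding"],      "difficulty": "low",    "effectiveness": "low"},
    {"technique": "assertion checking",   "class": "dynamic",
     "phases": ["coding", "testing"],     "difficulty": "medium", "effectiveness": "medium"},
    {"technique": "inductive assertions", "class": "formal",
     "phases": ["design", "coding"],      "difficulty": "high",   "effectiveness": "high"},
]

def applicable(phase):
    """Select the techniques usable in a given development life-cycle phase."""
    return [t["technique"] for t in TAXONOMY if phase in t["phases"]]

print(applicable("coding"))   # all three apply during coding
```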
34

Exploring Hybrid Dynamic and Static Techniques for Software Verification

Cheng, Xueqi 10 March 2010 (has links)
With the growing importance of software on which human lives increasingly depend, the correctness requirement of the underlying software becomes especially critical. However, the increasing complexity and size of modern software systems pose special challenges to the effectiveness and efficiency of software verification. Two major obstacles are the quality of test generation in terms of error detection in software testing and the state space explosion problem in software formal verification (model checking). In this dissertation, we investigate several hybrid techniques that explore dynamic (with program execution) and static (without program execution) methods, as well as the synergies of multiple approaches, in software verification from the perspectives of testing and model checking. For software testing, a new simulation-based internal variable range coverage metric is proposed with the goal of enhancing the error detection capability of the generated test data when applied as the target metric. For software model checking, we utilize various dynamic analysis methods, such as data mining and swarm intelligence (ant colony optimization), to extract useful high-level information from program execution data. Despite being incomplete, dynamic program execution can still help to uncover important program structure features and variable correlations. The extracted knowledge, such as invariants in different forms, promising control flows, etc., is then used to facilitate code-level program abstraction (under-approximation/over-approximation) and/or state space partitioning, which in turn improves the performance of property verification. To validate the effectiveness of the proposed hybrid approaches, a wide range of experiments on academic and real-world programs were designed and conducted, with results compared against the original as well as the relevant verification methods. Experimental results demonstrate the effectiveness of our methods in improving the quality as well as the performance of software verification. For software testing, the newly proposed coverage metric, constructed from dynamic program execution data, improves the quality of the generated test cases in terms of mutation killing — a widely applied measurement for error detection. For software model checking, the proposed hybrid techniques take full advantage of the complementary benefits of dynamic and static approaches: the lightweight dynamic techniques provide flexibility in extracting valuable high-level information that can be used to guide the scope and direction of the static reasoning process, resulting in significant performance improvements in software model checking. The static techniques, in turn, guarantee the completeness of the verification results, compensating for the weaknesses of the dynamic methods. / Ph. D.
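As a sketch of what an internal variable range coverage metric could look like, the example below bins each variable's declared range and reports the fraction of bins hit by values observed during execution. The binning scheme and all names are assumptions, not the dissertation's exact definition.

```python
def range_coverage(traces, declared_ranges, bins=10):
    """traces: {var: observed values}; declared_ranges: {var: (lo, hi)}."""
    total, covered = 0, 0
    for var, (lo, hi) in declared_ranges.items():
        width = (hi - lo) / bins
        # Which bins of the declared range did the observed values land in?
        hit = {min(int((v - lo) / width), bins - 1)
               for v in traces.get(var, []) if lo <= v <= hi}
        total += bins
        covered += len(hit)
    return covered / total

traces = {"temp": [12.0, 35.5, 71.2], "rate": [0.4]}
ranges = {"temp": (0.0, 100.0), "rate": (0.0, 1.0)}
print(f"{range_coverage(traces, ranges):.0%}")   # 20% of range bins exercised
```

A test generator targeting this metric would then favor inputs that drive variables into the unexercised bins.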
35

Improving Error Discovery Using Guided Model Checking

Rungta, Neha Shyam 12 September 2006 (has links) (PDF)
State exploration in directed software model checking is guided using a heuristic function to move states near errors to the front of the search queue. Distance heuristic functions rank states based on the number of transitions needed to move the current program state into an error location. Lack of calling-context information causes the heuristic function to underestimate the true distance to the error; however, inlining functions at call sites in the control flow graph to capture calling context leads to exponential growth in the computation. This paper presents a new algorithm that implicitly inlines functions at call sites to compute distance data with unbounded calling context, in time polynomial in the number of nodes in the control flow graph. The new algorithm propagates distance data through call sites during a depth-first traversal of the program. We show in a series of benchmark examples that the new heuristic function with unbounded distance data is more efficient than the same heuristic function that inlines functions up to a certain depth.
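The core idea, propagating distance data through call sites rather than inlining callee bodies, can be pictured with a summarized graph and an ordinary shortest-path search. The graph, the summary value, and all names below are hypothetical; the paper's actual algorithm computes the distance data during a depth-first traversal.

```python
import heapq

SUMMARY = {"helper": 2}   # shortest transitions from helper's entry to its exit

# main's control flow graph: the call edge crosses helper via its summary
# distance instead of an inlined copy of helper's body.
CFG = {
    "main.0":     [("main.call", 1)],
    "main.call":  [("main.1", SUMMARY["helper"])],
    "main.1":     [("main.error", 1)],
    "main.error": [],
}

def distance_to(cfg, start, target):
    """Dijkstra over the summarized graph: polynomial in the CFG size."""
    dist, heap = {start: 0}, [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for succ, weight in cfg.get(node, []):
            nd = d + weight
            if nd < dist.get(succ, float("inf")):
                dist[succ] = nd
                heapq.heappush(heap, (nd, succ))
    return float("inf")

print(distance_to(CFG, "main.0", "main.error"))   # 4 transitions to the error
```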
36

A Verified Program for the Enumeration of All Maximal Independent Sets

Merten, Samuel A. January 2016 (has links)
No description available.
37

Investigations in Automating Software Verification

Kirschenbaum, Jason P. 27 July 2011 (has links)
No description available.
38

Abstraction as the Key to Programming, with Issues for Software Verification in Functional Languages

Bronish, Derek 25 June 2012 (has links)
No description available.
39

A language to support verification of embedded software

Swart, Riaan 04 1900 (has links)
Thesis (MSc)--University of Stellenbosch, 2004. / ENGLISH ABSTRACT: Embedded computer systems form part of larger systems such as aircraft or chemical processing facilities. Although testing and debugging of such systems are difficult, reliability is often essential. Development of embedded software can be simplified by an environment that limits opportunities for making errors and provides facilities for detection of errors. We implemented a language and compiler that can serve as the basis for such an experimental environment. Both are designed to make verification of implementations feasible. Correctness and safety were given highest priority, but without sacrificing efficiency wherever possible. The language is concurrent and includes measures for protecting the address spaces of concurrently running processes. This eliminates the need for expensive run-time memory protection and will benefit resource-strapped embedded systems. The target hardware is assumed to provide no special support for concurrency. The language is designed to be small, simple and intuitive, and to promote compile-time detection of errors. Facilities for abstraction, such as modules and abstract data types, support implementation and testing of bigger systems. We have opted for model checking as our verification technique, so our implementation language is similar in design to a modelling language for a widely used model checker. Because of this, the implementation code can be used as input for a model checker. However, since the compiler can still contain errors, there might be discrepancies between the implementation code written in our language and the executable code produced by the compiler. Therefore we are attempting to make verification of executable code feasible. To achieve this, our compiler generates code in a special format, comprising a transition system of uninterruptible actions. The actions limit the scheduling points present in processes and reduce the different interleavings of process code possible in a concurrent system. Requirements that conventional hardware places on this form of code are discussed, as well as how the format influences efficiency and responsiveness.
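The compiled format described above can be pictured as follows: each process is a sequence of atomic actions, and the scheduler may switch processes only between actions, which shrinks the set of interleavings a verifier must explore. The Python sketch below is an illustrative model of that idea under assumed names, not the thesis's actual code format.

```python
import random

shared = {"x": 0}

def make_actions(n):
    # Each list element is one uninterruptible action: it runs to completion
    # before the scheduler is allowed to pick another process.
    return [lambda: shared.__setitem__("x", shared["x"] + 1)
            for _ in range(n)]

processes = {"P": make_actions(3), "Q": make_actions(2)}

rng = random.Random(0)
while any(processes.values()):
    name = rng.choice([p for p, acts in processes.items() if acts])
    processes[name].pop(0)()   # a scheduling point exists only between actions
print(shared["x"])             # 5: every increment executed atomically
```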
40

An algebraic framework for reasoning about security

Rajaona, Solofomampionona Fortunat 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: Stepwise development of a program using refinement ensures that the program correctly implements its requirements. The specification of a system is “refined” incrementally to derive an implementable program. The programming space includes both specifications and implementable code, and is ordered by the refinement relation, which obeys certain mathematical laws. Morgan proposed a modification of this “classical” refinement for systems where the confidentiality of some information is critical: programs distinguish between “hidden” and “visible” variables, and refinement must also satisfy a security requirement. First, we review refinement for classical programs and present Morgan’s approach to ignorance-preserving refinement. We introduce the Shadow Semantics, a programming model that captures essential properties of classical refinement while preserving the ignorance of hidden variables. The model invalidates some classical laws that do not preserve security, while satisfying new laws. Our approach is algebraic: we propose algebraic laws to describe the properties of ignorance-preserving refinement, completing the laws previously proposed, and we show that these laws are sound in the Shadow Semantics. Finally, following the approach of Hoare and He for classical programs, we give a completeness result for the program algebra of ignorance-preserving refinement.
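A standard illustration from this line of work (after Morgan's Shadow semantics) shows why some classical laws must be rejected: classically, demonic choice may be resolved arbitrarily, but doing so over a hidden variable leaks its value.

```latex
% Classically valid refinement: demonic choice may be resolved.
% If h is hidden, the left side keeps an observer ignorant of h's final
% value, while the right side reveals h = 0, so ignorance-preserving
% refinement must reject this step.
\[
  (h := 0) \sqcap (h := 1) \;\sqsubseteq\; (h := 0)
  \qquad \text{(valid classically; invalid when } h \text{ is hidden)}
\]
```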
