231

Sound Modular Extraction of Control Flow Graphs from Java Bytecode

de Carvalho Gomes, Pedro January 2012 (has links)
Control flow graphs (CFGs) are abstract program models that preserve control flow information. They have been widely used for many static analyses over the past decades. Unfortunately, previous work on CFG construction for modern languages such as Java has either neglected advanced features that influence the control flow, or has not provided a correctness argument. This is tolerable for some program analyses, but not for formal methods, where the soundness of CFGs is a mandatory condition for the verification of safety-critical properties. Moreover, when developing open systems, i.e., systems in which at least one component is missing, one may want to extract CFGs to verify the available components. Soundness is even harder to achieve in this scenario, because of the unknown inter-dependencies involving missing components. In this work we present two variants of a CFG extraction algorithm from Java bytecode that consider precise exceptional flow and are sound w.r.t. the JVM behavior. The first algorithm extracts CFGs from fully provided (closed) programs only. It proceeds in two phases. First, the Java bytecode is translated into a stack-less intermediate representation named BIR, which provides an explicit representation of exceptions and is more compact than the original bytecode. Next, we define the transformation from BIR to CFGs, which, among other features, considers the propagation of uncaught exceptions within method calls. We then establish its correctness: the behavior of the extracted CFGs is shown to be a sound over-approximation of the behavior of the original programs. Thus, temporal safety properties that hold for the CFGs also hold for the program. We prove this by suitably combining the properties of the two transformations with those of a previous idealized CFG extraction algorithm, whose correctness has been proven directly. The second variant of the algorithm is defined for open systems. We generalize the extraction algorithm for closed systems to a modular set-up, and resolve inter-dependencies involving missing components by using user-provided interfaces. We establish its correctness by defining a refinement relation between open systems, which constrains the instantiation of missing components. We prove that if the relation holds, then the CFGs extracted from the components of the original open system are sound over-approximations of the CFGs for the same components in the refined system. Thus, temporal safety properties that hold for an open system also hold for closed systems that refine it. We have implemented both algorithms as the ConFlEx tool. It uses Sawja, an external library for the static analysis of Java bytecode, to transform bytecode into BIR and to resolve virtual method calls. We have extended Sawja to support open systems, and improved its exception type analysis. Experimental results show that the algorithm for closed systems generates more precise CFGs than the modular algorithm. This was expected, due to the heavy over-approximations the latter has to perform to be sound. Both algorithms are linear in the number of bytecode instructions, so ConFlEx is efficient for the extraction of CFGs from either open or closed Java bytecode programs.
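To make the kind of artifact concrete: the CFGs described above are graphs over program points whose edges distinguish normal from exceptional transfers of control. The sketch below is only an illustrative C data structure with a toy two-edge example (a division that may throw); it is not the thesis's BIR-based extraction algorithm, and all names and instructions are invented for illustration.

```c
/* Minimal sketch of a control flow graph whose edges are tagged as
 * normal or exceptional transfers of control (illustration only). */
#include <stdio.h>
#include <stdlib.h>

typedef enum { EDGE_NORMAL, EDGE_EXCEPTIONAL } EdgeKind;

typedef struct Edge {
    int target;          /* index of the successor node                 */
    EdgeKind kind;       /* fall-through/jump vs. thrown exception      */
    struct Edge *next;   /* next outgoing edge of the same node         */
} Edge;

typedef struct {
    const char *label;   /* e.g. the bytecode instruction at this point */
    Edge *out;           /* list of outgoing edges                      */
} Node;

static void add_edge(Node *nodes, int from, int to, EdgeKind kind) {
    Edge *e = malloc(sizeof *e);   /* sketch: allocation not checked/freed */
    e->target = to;
    e->kind = kind;
    e->next = nodes[from].out;
    nodes[from].out = e;
}

int main(void) {
    /* Hypothetical three-point method: a division that may throw,
     * its normal successor, and the handler reached on exception. */
    Node nodes[3] = {
        { "idiv",            NULL },
        { "istore_1",        NULL },
        { "handler: athrow", NULL },
    };
    add_edge(nodes, 0, 1, EDGE_NORMAL);       /* successful division  */
    add_edge(nodes, 0, 2, EDGE_EXCEPTIONAL);  /* ArithmeticException  */

    for (int i = 0; i < 3; i++)
        for (Edge *e = nodes[i].out; e; e = e->next)
            printf("%d -> %d (%s)\n", i, e->target,
                   e->kind == EDGE_NORMAL ? "normal" : "exceptional");
    return 0;
}
```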
232

Procedure-Modular Verification of Temporal Safety Properties

Soleimanifard, Siavash January 2012 (has links)
This thesis presents a fully automated technique for procedure-modular verification of control flow temporal safety properties. Procedure-modular verification is a natural instantiation of modular verification where modularity is achieved at the level of procedures. Here it is used for the verification of software systems in the presence of code evolution, multiple method implementations (as arising from software product lines), or even unknown method implementations (as in mobile code for open platforms). The technique is built on top of a previously developed modular verification framework based on maximal model construction. In the framework, program data is abstracted away completely to achieve algorithmic verification. This restricts the class of properties that can be verified. The technique is supported by a fully automated tool called ProMoVer which is described and evaluated on a number of real-life case studies. ProMoVer is quipped with a number of features, such as automatic specification extraction, to facilitate easy usage. Moreover, it provides a proof storage and reuse mechanism for efficiency. An application area which can significantly benefit from modular verification is software product line (SPL) design. In SPL engineering, products are generated from a set of well-defined commonalities and variabilities. The products of an SPL can be described by means of a hierarchical variability model specifying the commonalities and variabilities between the individual products. The number of products generated from a hierarchical model is exponential in the size of the hierarchical model. Therefore, scalable and efficient verification for SPL is only possible by exploiting modular verification techniques. In this thesis, we propose a hierarchical variability model for modeling product families. Then the modular verification technique and ProMoVer are adapted for the SPLs described with this hierarchical model. A natural extension of the modular verification technique is to include program data in a conservative fashion, by encoding data from a finite domain through control. By this, a wider class of properties can be supported. As a first step towards including program data, Boolean values are added to the program model, specification languages, maximal model construction and modular verification principles. / QC 20120507
233

Verification of Functional Requirements of Embedded Automotive C Code / Verifiering av funktionella krav på inbyggd C-kod i motorfordon

Lidström, Christian January 2016 (has links)
Today's vehicles are increasingly controlled by embedded computer systems. Such systems are of safety-critical nature, where an error in the computation could have dire consequences. A common way to ensure that software works is testing, but as the complexity of these systems grows larger it gets harder to ensure enough coverage in the tests. Another way to ensure that software fulfills its requirements is formal verification, where properties of the code are proven mathematically to hold under certain conditions. Formal verification gives a higher level of confidence in the correctness of code than testing alone, but it is not as widely used within the industry. This project has examined whether current state-of-the-art tools for formal verification are ready to be used to verify real-life safety-critical code. To answer this, a case study on a module running in Scania's vehicles was performed. Several of the requirements were successfully verified. The thesis also identifies for what type of code and requirements this is possible, and describes a process for how it can be done. / Idag kontrolleras fordon allt mer av inbyggda datorsystem. Sådana system är säkerhetskritiska, där ett fel kan ha ödesdigra konsekvenser. Ett vanligt sätt att försäkra sig om att mjukvaran fungerar är testning, men när komplexiteten av dessa system växer blir det allt svårare att försäkra sig om att testen har tillräcklig täckning. Ett annat sätt att försäkra sig om att mjukvaran uppfyller dess krav är formell verifiering, där egenskaper hos koden bevisas matematiskt att hålla under vissa villkor. Formell verifiering ger ett högre förtroende för kods korrekthet än vad enbart testning skulle göra, men används ännu inte i lika stor utsträckning inom industrin. Detta projekt har undersökt huruvida moderna verktyg för formell verifiering är mogna att användas för att verifiera riktig säkerhetskritisk kod. För att svara på detta har en fallstudie av en modul i Scanias fordon genomförts. Flera av dess krav lyckades verifieras. Rapporten identifierar också för vilka typer av kod och krav detta är möjligt, och beskriver en process för hur det kan utföras.
234

Synthesis of Specifications and Refinement Maps for Real-Time Object Code Verification

Al-Qtiemat, Eman Mohammad January 2020 (has links)
Formal verification methods have been shown to be very effective in finding corner-case bugs and ensuring the safety of embedded software systems. The use of formal verification requires a specification, which is typically a high-level mathematical model that defines the correct behavior of the system to be verified. However, embedded software requirements are typically described in natural language. Transforming these requirements into formal specifications is currently a big gap. While there is some work in this area, we proposed solutions to address this gap in the context of refinement-based verification, a class of formal methods that have shown to be effective for embedded object code verification. The proposed approach also addresses both functional and timing requirements and has been demonstrated in the context of safety requirements for software control of infusion pumps. The next step in the verification process is to develop the refinement map, which is a mapping function that can relate an implementation state (in this context, the state of the object code program to be verified) with the specification state. Actually, constructing refinement maps often requires deep understanding and intuitions about the specification and implementation, it is shown very difficult to construct refinement maps manually. To go over this obstacle, the construction of refinement maps should be automated. As a first step toward the automation process, we manually developed refinement maps for various safety properties concerning the software control operation of infusion pumps. In addition, we identified possible generic templates for the construction of refinement maps. Recently, synthesizing procedures of refinement maps for functional and timing specifications are proposed. The proposed work develops a process that significantly increases the automation in the generation of these refinement maps. The refinement maps can then be used for refinement-based verification. This automation procedure has been successfully applied on the transformed safety requirements in the first part of our work. This approach is based on the identified generic refinement map templates which can be increased in the future as the application required.
235

A Formal Method to Analyze Framework-Based Software

Larson, Trent N. 01 August 2002 (has links) (PDF)
Software systems are frequently designed using abstractions that make software verification tractable. Specifically, by choosing meaningful, formal abstractions for interfaces and then designing according to those interfaces, one can verify entire systems according to behavioral predicates. While such analysis is impractical for systems in general, framework-based software architectures are a class of system for which formal analysis can be beneficial and practical over the life of the system. We present a method to formally analyze behavioral properties of framework-based software with higher-order logic, and then demonstrate its utility on a significant, modern system.
236

Improving the Synthesis of Annotations for Partially Automated Deductive Verification / Att förbättra syntes av funktionsanteckningar för partiellt automatiserad deduktiv verifiering

Manjikian, Hovig January 2023 (has links)
This work investigates possible improvements to an existing annotation inference tool. The tool is part of a toolchain that aims to automate the process of software verification using formal methods. The purpose of the annotations is to facilitate the use of deductive verification, which is the formal method used in this project for proving that a given program meets its specifications. In the project, two categories of annotations are established. The first category is the category of functional annotations. These annotations describe the behavior of a function or a module. The other category is what we call auxiliary annotations. These annotations describe properties that are necessary for proving the correctness of the functional annotations. The tool that this work tries to improve is dedicated to inferring the auxiliary annotations. To our knowledge, this is the first tool of its kind to automatically infer auxiliary annotations for a complete module given the module’s source code and its interface specifications. The work contributed in four areas: inferring annotations from the interface specifications of a module and propagating these annotations to all the helper functions used in the module; inferring annotations for floating-point constructs; inferring annotations for pointer constructs; and finally, inferring annotations for array constructs. The improved tool was tested on production embedded code used in the heavy automotive industry. The results demonstrated a considerable improvement and were in line with earlier findings. The work confirms the feasibility and usability of auxiliary annotation inference in this scope. / Detta arbete undersöker möjliga förbättringar av ett befintligt verktyg för härledning av annoteringar (annotations). Verktyget är en komponent i en verktygskedja som syftar till att automatisera processen för mjukvaruverifiering med formella metoder. Syftet med annoteringarna är att underlätta användningen av deduktiv verifiering, vilket är den formella metod som används i detta projekt för att bevisa att ett givet program uppfyller dess specifikationer. I projektet fastställs två kategorier av annoteringar. Den första kategorin är kategorin funktionella annoteringar. Dessa annoteringar beskriver beteendet hos en funktion eller en modul. Den andra kategorin är vad vi kallar hjälp annoteringar (auxiliary annotations). Dessa annoteringar beskriver egenskaper som är nödvändiga för att bevisa korrektheten av de funktionella annoteringarna. Verktyget som detta arbete försöker förbättra är dedikerat till att härleda hjälp annoteringar. Arbetet bidrog inom fyra områden: att härleda annoteringar från gränssnittsspecifikationerna (interface specifications) för en modul och sprida dessa annoteringar till alla hjälpfunktioner som används i modulen; härleda annoteringar för flyttalskonstruktioner (floating-point constructs); härleda annoteringar för pekarkonstruktioner; och slutligen, härleda annoteringar för arraykonstruktioner. Det förbättrade verktyget testades på produktionsinbyggdad kod som används inom fordonsindustrin. Resultaten visade en avsevärd förbättring och var i linje med tidigare resultat. Arbetet bekräftar genomförbarheten och användbarheten av hjälpannoteringshärledning i projektets omfattning.
237

An Encoding of the Clock Cycle Semantics of Bluespec SystemVerilog in PVS / ENCODING THE CLOCK CYCLE SEMANTICS OF BSV IN PVS

Moore, Nicholas January 2022 (has links)
The invention of Hardware Description Languages has given hardware designers access to powerful methods of abstraction and organization, previously only available to software developers. A high-powered means of examining properties such as reliability, correctness and safety is the creation of formal, mathematical proofs of correctness. One approach to this is the modelling of the artifact in the logic of some deductive system, such as the higher order logic of the Prototype Verification System (PVS). The ambition of this work is to demonstrate a mechanism by which a class of hardware descriptions may be used to generate such models automatically. We further demonstrate the utility of said models, using them to establish non-trivial correctness properties. We also present a method of generating hardware descriptions, logical models, and proofs from a class of tabular specifications. The language on which this method operates is Bluespec SystemVerilog (BSV), a high-level hardware description language notable for its elegant semantics. The target platform of our translation is the Prototype Verification System (PVS), which features a highly automatic theorem-proving system. The translation algorithm is discussed at length, including the reconciliation of BSV's action-oriented semantics with the Kripke semantics employed by our chosen model in PVS. Five case studies demonstrate our methodology. In studies one and two, function blocks of the IEC 61131-3 Annex F library are verified against tabular specifications, or generated from the same. The remaining case studies are based on the Shakti RISC-V implementation of the RapidIO subsystem. Our final case study demonstrates progress towards the verification of highly abstract and complex properties over the entire translatable subset of the RapidIO library. / Thesis / Doctor of Philosophy (PhD)
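For readers unfamiliar with the term, the Kripke semantics mentioned above refers to modelling the hardware as a Kripke structure; the textbook definition is recalled below (the thesis's PVS encoding may differ in its details).

```latex
% Standard definition of a Kripke structure over atomic propositions AP.
M = (S,\; S_0,\; R,\; L), \qquad
S_0 \subseteq S, \qquad
R \subseteq S \times S \ \text{(left-total)}, \qquad
L : S \to 2^{AP}
```

Here S is the set of states, S_0 the initial states, R the (left-total) transition relation, and L labels each state with the atomic propositions that hold in it; one clock cycle of the hardware corresponds to one step of R.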
238

Parameterized Verification and Synthesis for Distributed Agreement-Based Systems

Nouraldin Jaber (13796296) 19 September 2022 (has links)
Distributed agreement-based systems use common distributed agreement protocols such as leader election and consensus as building blocks for their target functionality: processes in these systems may need to agree on a leader, on the members of a group, on owners of locks, or on updates to replicated data. Such distributed agreement-based systems are common and potentially permit modular, scalable verification approaches that mimic their modular design. Interestingly, while there are many verification efforts that target agreement protocols themselves, little attention has been given to distributed agreement-based systems that build on top of these protocols.

In this work, we aim to develop a fully-automated, modular, and usable parameterized verification approach for distributed agreement-based systems. To do so, we need to overcome the following challenges. First, the fully automated parameterized verification problem, i.e., the problem of algorithmically checking if the system is correct for any number of processes, is a well-known undecidable problem. Second, to enable modular verification that leverages the inherently-modular nature of these agreement-based systems, we need to be able to support abstractions of agreement protocols. Such abstractions can replace the agreement protocols' implementations when verifying the overall system, enabling modular reasoning. Finally, even when the verification is fully automated, a system designer still needs assistance in modeling their distributed agreement-based systems.

We systematically tackle these challenges through the following contributions.

First, we support efficient, decidable verification of distributed agreement-based systems by developing a computational model, the GSP model, for reasoning about distributed (agreement-based) systems that admits decidability and cutoff results. Cutoff results enable practical verification by reducing the parameterized verification problem to the verification problem of a system with a fixed, finite number of processes. The GSP model supports generalized communication primitives and global guards, both of which are essential to enable abstractions of agreement protocols.

Then, we address the usability and modularity aspects by developing a framework, QuickSilver, tailored for modeling and modular parameterized verification of distributed agreement-based systems. QuickSilver provides an intuitive domain-specific language, called Mercury, that is equipped with two agreement primitives capable of abstracting away agreement protocols when modeling agreement-based systems, enabling modular verification. QuickSilver extends the decidability and cutoff results of the GSP model to provide fully automated, efficient parameterized verification for a large class of systems modeled in Mercury.

Finally, we leverage synthesis techniques to further enhance the usability of our approach and propose Cinnabar, a tool that supports synthesis of distributed agreement-based systems with efficiently-decidable parameterized verification. Cinnabar allows a system designer to provide a sketch of their Mercury model and uses a counterexample-guided synthesis procedure to search for model completions that both belong to the efficiently-decidable fragment of Mercury and are correct.

We evaluate our contributions on various interesting distributed agreement-based systems adapted from real-world applications, such as a data store, a lock service, a surveillance system, a pathfinding algorithm for mobile robots, and more.
239

Formal Specification and Verification of Data-Centric Web Services

Moustafa, Iman Saleh 20 April 2012 (has links)
In this thesis, we develop and evaluate a formal model and contracting framework for data-centric Web services. The central component of our framework is a formal specification of a common Create-Read-Update-Delete (CRUD) data store. We show how this model can be used in the formal specification and verification of both basic and transactional Web service compositions. We demonstrate through both formal proofs and empirical evaluations that our proposed framework significantly decreases ambiguity about a service, enhances its reuse, and facilitates detection of errors in service-based implementations. Web Services are reusable software components that make use of standardized interfaces to enable loosely-coupled business-to-business and customer-to-business interactions over the Web. In such environments, service consumers depend heavily on the service interface specification to discover, invoke, and synthesize services over the Web. Data-centric Web services are services whose behavior is determined by their interactions with a repository of stored data. A major challenge in this domain is interpreting the data that must be marshaled between consumer and producer systems. While the Web Services Description Language (WSDL) is currently the de facto standard for Web services, it only specifies a service operation in terms of its syntactical inputs and outputs; it does not provide a means for specifying the underlying data model, nor does it specify how a service invocation affects the data. The lack of data specification potentially leads to erroneous use of the service by a consumer. In this work, we propose a formal contract for data-centric Web services. The goal is to formally and unambiguously specify the service behavior in terms of its underlying data model and data interactions. We address the specification of a single service, a flow of services interacting with a single data store, and also the specification of distributed transactions involving multiple Web services interacting with different autonomous data stores. We use the proposed formal contract to decrease ambiguity about a service behavior, to fully verify a composition of services, and to guarantee correctness and data integrity properties within a transactional composition of services. / Ph. D.
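To make the CRUD-contract idea concrete, the sketch below shows Create and Read operations over a toy in-memory key-value store, with their pre- and postconditions written as runtime asserts. This is only an executable illustration of the contract style; the thesis specifies the data store formally (and also covers Update, Delete, and transactional compositions) rather than checking contracts dynamically, and all names here are invented.

```c
/* Toy CRUD store whose operations carry pre/postconditions as asserts. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define CAP 8

typedef struct {
    int keys[CAP];
    int values[CAP];
    int size;
} Store;

static bool exists(const Store *s, int key) {
    for (int i = 0; i < s->size; i++)
        if (s->keys[i] == key) return true;
    return false;
}

/* Create: precondition - key absent and capacity left;
 *         postcondition - key is now present.           */
static void create(Store *s, int key, int value) {
    assert(!exists(s, key) && s->size < CAP);   /* precondition  */
    s->keys[s->size] = key;
    s->values[s->size] = value;
    s->size++;
    assert(exists(s, key));                     /* postcondition */
}

/* Read: precondition - key present. */
static int read_value(const Store *s, int key) {
    assert(exists(s, key));                     /* precondition  */
    for (int i = 0; i < s->size; i++)
        if (s->keys[i] == key) return s->values[i];
    return 0;  /* unreachable given the precondition */
}

int main(void) {
    Store s = { .size = 0 };
    create(&s, 42, 7);
    printf("read(42) = %d\n", read_value(&s, 42));
    return 0;
}
```

A WSDL interface would expose only the operations' input and output types; the pre/postconditions above are precisely the data-behavior information the proposed formal contract adds.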
240

Supporting Open Source Investigative Journalism with Crowdsourced Image Geolocation

Kohler, Rachel 10 August 2017 (has links)
Journalists rely on image and video verification to support their investigations and often use open source tools to verify user-generated content, but current practice requires that experts be involved in every step of the process. Additionally, lacking custom tools to support verification efforts, experts are often limited to existing, openly available tools, which may or may not support the interactions and information gathering they require. We aim to support the process of geolocating images and videos through crowdsourcing. By enabling crowd workers to participate in the geolocation process, we can provide investigative journalists with efficient and complete verification of image locations. Parallelizing the search speeds up the verification process and provides more extensive coverage, while allowing the expert to follow up on other leads or investigative work. We produced a software prototype called GroundTruth, which enables crowd workers to support investigative journalists in the geolocation of visual media quickly and accurately. Additionally, this work contributes experimental results demonstrating how the crowd can be utilized to support complex sensemaking tasks. / Master of Science
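The abstract does not say how GroundTruth actually divides the geolocation work among crowd workers; purely as a generic illustration of the parallelized-search idea, the sketch below splits a hypothetical search bounding box into grid-cell tasks that could each be assigned to a different worker. The region, grid size, and partitioning strategy are all assumptions, not the tool's design.

```c
/* Generic illustration: partition a search region into per-worker tasks. */
#include <stdio.h>

typedef struct { double min_lat, min_lon, max_lat, max_lon; } Region;

int main(void) {
    Region search_area = { 37.20, -80.45, 37.25, -80.40 };  /* hypothetical */
    int rows = 2, cols = 3;                                 /* 6 worker tasks */
    double dlat = (search_area.max_lat - search_area.min_lat) / rows;
    double dlon = (search_area.max_lon - search_area.min_lon) / cols;

    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            printf("task %d: lat [%.3f, %.3f], lon [%.3f, %.3f]\n",
                   r * cols + c,
                   search_area.min_lat + r * dlat,
                   search_area.min_lat + (r + 1) * dlat,
                   search_area.min_lon + c * dlon,
                   search_area.min_lon + (c + 1) * dlon);
    return 0;
}
```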
