31

Formální verifikace RISC-V procesoru s využitím Questa PropCheck / Formal verification of RISC-V processor with Questa PropCheck

Javor, Adrián January 2020 (has links)
The topic of this master's thesis is formal verification of a RISC-V processor with Questa PropCheck using SystemVerilog assertions. The theoretical part covers the RISC-V architecture, describes the selected components of the Codix Berkelium 5 processor used for formal verification, and also studies the AHB-lite communication protocol as well as formal verification, its methods, and its tools. The experimental part consists of verification planning for the selected components, the subsequent formal verification, analysis of the results, and evaluation of the benefits of formal techniques.
32

Formal Verification Methodologies for NULL Convention Logic Circuits

Le, Son Ngoc January 2020 (has links)
NULL Convention Logic (NCL) is a Quasi-Delay Insensitive (QDI) asynchronous design paradigm that aims to tackle some of the major problems synchronous designs are facing as the industry trend of increased clock rates and decreased feature size continues. The clock in synchronous designs is becoming increasingly difficult to manage and causing more power consumption than ever before. NCL circuits address some of these issues by requiring less power, producing less noise and electro-magnetic interference, and being more robust to Process, Voltage, and Temperature (PVT) variations. With the increase in popularity of asynchronous designs, a formal verification methodology is crucial for ensuring these circuits operate correctly. Four automated formal verification methodologies have been developed, three to ensure delay-insensitivity of an NCL circuit (i.e., prove Input-Completeness, Observability, and Completion-Completeness properties), and one to aid in proving functional equivalence between an NCL circuit and its synchronous counterpart. Note that an NCL circuit can be functionally correct and still not be input-complete, observable, or completion-complete, which could cause the circuit to operate correctly under normal conditions, but malfunction when circuit timing drastically changes (e.g., significantly reduced supply voltage, extreme temperatures). Since NCL circuits are implemented using dual-rail logic (i.e., 2 wires, rail0 and rail1, represent one bit of data), part of the functional equivalence verification involves ensuring that the NCL rail0 logic is the inverse of its rail1 logic. Equivalence verification optimizations and alternative invariant checking methods were investigated and proved to decrease verification times of identical circuits substantially. This work will be a major step toward NCL circuits being utilized more frequently in industry, since it provides an automated verification method to prove correctness of an NCL implementation and equivalence to its synchronous specification, which is the industry standard.
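To make the dual-rail encoding concrete, here is a minimal sketch (in Python, not the thesis's actual tooling) of one dual-rail-encoded bit and the check that the rail0 logic is the Boolean complement of the rail1 logic on DATA inputs; the AND function and all names are illustrative assumptions.

```python
# Minimal sketch (not the thesis's tooling) of NCL dual-rail encoding and the
# rail0/rail1 complement check described above, for a hypothetical 1-bit AND.

from itertools import product

# Dual-rail encoding of one bit: (rail1, rail0). DATA1 = (1, 0), DATA0 = (0, 1),
# NULL = (0, 0); (1, 1) is illegal.
def encode(bit):
    return (1, 0) if bit else (0, 1)

def spec(a, b):            # synchronous Boolean specification: AND
    return a & b

def rail1_logic(a, b):     # rail1 asserts when the output is DATA1
    return a & b

def rail0_logic(a, b):     # rail0 asserts when the output is DATA0
    return (1 - a) | (1 - b)

# Part of functional-equivalence checking: for every DATA input, rail1 must match
# the specification and rail0 must be the Boolean complement of rail1.
for a, b in product((0, 1), repeat=2):
    r1, r0 = rail1_logic(a, b), rail0_logic(a, b)
    assert (r1, r0) == encode(spec(a, b)) and r0 == 1 - r1, (a, b, r1, r0)

print("rail0 logic is the inverse of rail1 logic on all DATA inputs")
```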
33

Formal Verification of Tree Ensembles in Safety-Critical Applications

Törnblom, John January 2020 (has links)
In the presence of data and computational resources, machine learning can be used to synthesize software automatically. For example, machines are now capable of learning complicated pattern recognition tasks and sophisticated decision policies, two key capabilities in autonomous cyber-physical systems. Unfortunately, humans find software synthesized by machine learning algorithms difficult to interpret, which currently limits their use in safety-critical applications such as medical diagnosis and avionic systems. In particular, successful deployments of safety-critical systems mandate the execution of rigorous verification activities, which often rely on human insights, e.g., to identify scenarios in which the system shall be tested. A natural pathway towards a viable verification strategy for such systems is to leverage formal verification techniques, which, in the presence of a formal specification, can provide definitive guarantees with little human intervention. However, formal verification suffers from scalability issues with respect to system complexity. In this thesis, we investigate the limits of current formal verification techniques when applied to a class of machine learning models called tree ensembles, and identify model-specific characteristics that can be exploited to improve the performance of verification algorithms when applied specifically to tree ensembles. To this end, we develop two formal verification techniques specifically for tree ensembles, one fast and conservative technique, and one exact but more computationally demanding. We then combine these two techniques into an abstraction-refinement approach that we implement in a tool called VoTE (Verifier of Tree Ensembles). Using a couple of case studies, we recognize that sets of inputs that lead to the same system behavior can be captured precisely as hyperrectangles, which enables tractable enumeration of input-output mappings when the input dimension is low. Tree ensembles with a high-dimensional input domain, however, seem generally difficult to verify. In some cases though, conservative approximations of input-output mappings can greatly improve performance. This is demonstrated in a digit recognition case study, where we assess the robustness of classifiers when confronted with additive noise.
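The hyperrectangle observation can be illustrated with a small sketch: the input regions that a single decision tree maps to each output are axis-aligned boxes, which can be enumerated exactly when the input dimension is low. The tree, domain, and function names below are invented for illustration and are not VoTE's implementation.

```python
# Illustrative sketch only (not VoTE): enumerating the hyperrectangles induced
# by a small decision tree over a 2-D input domain.

# A node is either ("leaf", value) or ("split", dim, threshold, left, right).
tree = ("split", 0, 0.5,
        ("leaf", "A"),
        ("split", 1, 0.3, ("leaf", "B"), ("leaf", "C")))

def hyperrectangles(node, box):
    """Yield (box, output) pairs, where box is a list of (lo, hi) per dimension."""
    if node[0] == "leaf":
        yield box, node[1]
        return
    _, dim, thr, left, right = node
    lo, hi = box[dim]
    if lo <= thr:                       # sub-region where x[dim] <= thr
        lbox = list(box); lbox[dim] = (lo, min(hi, thr))
        yield from hyperrectangles(left, lbox)
    if hi > thr:                        # sub-region where x[dim] > thr
        rbox = list(box); rbox[dim] = (max(lo, thr), hi)
        yield from hyperrectangles(right, rbox)

domain = [(0.0, 1.0), (0.0, 1.0)]
for box, out in hyperrectangles(tree, domain):
    print(box, "->", out)
```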
34

Formally Verifying the Robustness of Machine Learning Models : A Comparative Study / Formell verifiering av robusthet hos maskininlärningsmodeller : En jämförelsestudie

Lundström, Linnea January 2020 (has links)
Machine learning models have become increasingly popular in recent years, and not without reason. They enable software to become more powerful with less human involvement. As a consequence, however, the actions of the software are hard for a human to understand and anticipate. This prohibits the use of machine learning in systems where safety has to be assured, typically using formal proofs of relevant properties. This thesis is focused on robustness - one of many properties that can impact the safety of a system. There are several tools available that enable formal robustness verification of machine learning models, and a goal of this thesis is to evaluate their performance. A variety of machine learning models are also assessed according to how robust they can be proved to be. A digit recognition problem was used in order to evaluate how sensitive different model types are to perturbations of pixels in an image, and also to assess the performance of applicable verification tools. On this particular problem, we discovered that a Support Vector Machine demonstrates the highest degree of robustness, which could be verified in a sufficiently short time using the tool SAVer. In addition, machine learning models were trained on a data set consisting of Android applications that are labelled either as malware or benign. In this verification problem, we check whether adding permission requests to a malware application can cause it to be labelled as benign. For this problem, a Gradient Boosting Machine proved to be the most robust, with a very short verification time using the tool VoTE. Although not the most robust, Neural Networks were proved to be relatively robust on both problems using the tool ERAN, whereas Random Forests performed the worst in terms of robustness.
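The Android verification problem described above can be pictured with a small, hedged sketch: given a malware sample, enumerate small sets of added permission requests and check whether the classifier's verdict ever flips to benign. The classifier, feature names, and brute-force enumeration below are placeholders; the thesis uses formal tools (SAVer, VoTE, ERAN) rather than enumeration.

```python
# Brute-force sketch of the robustness query described above. The classifier and
# permission names are hypothetical stand-ins, not the thesis's trained models.

from itertools import combinations

PERMISSIONS = ["SEND_SMS", "READ_CONTACTS", "INTERNET", "CAMERA"]

def classify(features):
    # Placeholder classifier: 1 = malware, 0 = benign.
    return 1 if features.get("SEND_SMS") and features.get("READ_CONTACTS") else 0

def robust_to_permission_additions(app, max_added=2):
    """Return (False, perms) if adding up to max_added permissions flips malware -> benign."""
    assert classify(app) == 1, "expected a malware sample"
    absent = [p for p in PERMISSIONS if not app.get(p)]
    for k in range(1, max_added + 1):
        for extra in combinations(absent, k):
            perturbed = dict(app, **{p: 1 for p in extra})
            if classify(perturbed) == 0:
                return False, extra
    return True, None

app = {"SEND_SMS": 1, "READ_CONTACTS": 1}
print(robust_to_permission_additions(app))
```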
35

Symboleo: Specification and Verification of Legal Contracts

Parvizimosaed, Alireza 21 October 2022 (has links)
Contracts are legally binding and enforceable agreements among two or more parties that govern social interactions. They have been used for millennia, including in commercial transactions, employment relationships and intellectual property generation. Each contract determines obligations and powers of contracting parties. The execution of a contract needs to be continuously monitored to ensure compliance with its terms and conditions. Smart contracts are software systems that monitor and control the execution of contracts to ensure compliance. But for such software systems to become possible, contracts need to be specified precisely to eliminate ambiguities, contradictions, and missing clauses. This thesis proposes a formal specification language for contracts named Symboleo. The ontology of Symboleo is founded on the legal concepts of obligation (a kind of duty) and power (a kind of right), complemented with the concepts of event and situation that are suitable for conceptualizing monitoring tasks. The formal semantics of legal concepts is defined in terms of state machines that describe the lifetimes of contracts, obligations, and powers, as well as axioms that precisely describe state transitions. The language supports execution-time operations that enable subcontracting, assignment of rights, and substitution of performance to a third party during the execution of a contract. Symboleo has been applied to the formalization of contracts from three different domains as a preliminary evaluation of its expressiveness. Formal specifications can be algorithmically analyzed to ensure that they satisfy desired properties. Towards this end, the thesis presents two implemented analysis tools. One is a conformance checking tool (SymboleoPC) that ensures that a specification is consistent with the expectations of contracting parties. Expectations are defined for this tool in terms of scenarios (sequences of events) and the expected final outcome (i.e., successful/unsuccessful execution). The other tool (SymboleoPC), which builds on top of an existing model checker (nuXmv), can prove/disprove desired properties of a contract, expressed in temporal logic. These tools have been used for assessing different business contracts. SymboleoPC is also assessed in terms of performance and scalability, with positive results. Symboleo, together with its associated tools, is envisioned as an enabler for the formal verification of contracts to address requirements-level issues, at design time.
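As a rough illustration of the state-machine semantics mentioned above, the following sketch models a simplified obligation lifecycle; the states, events, and transitions are assumptions made for illustration and do not reproduce Symboleo's formal axioms.

```python
# Simplified sketch of an obligation lifecycle state machine (states and
# transitions are illustrative assumptions, not Symboleo's formal semantics).

TRANSITIONS = {
    ("created",    "trigger"):   "in_effect",
    ("in_effect",  "fulfilled"): "discharged",
    ("in_effect",  "violated"):  "violation",
    ("in_effect",  "suspended"): "suspension",
    ("suspension", "resumed"):   "in_effect",
}

class Obligation:
    def __init__(self, debtor, creditor):
        self.debtor, self.creditor = debtor, creditor
        self.state = "created"

    def handle(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = nxt
        return self.state

# A monitor would feed contract events into the state machine and flag violations.
o = Obligation(debtor="seller", creditor="buyer")
for e in ["trigger", "suspended", "resumed", "fulfilled"]:
    print(e, "->", o.handle(e))
```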
36

Arbitration Techniques for SoC Bus Interconnect with Optimized Verification Methodology

Sarpangala, Kishan January 2013 (has links)
No description available.
37

Design Verification for Sequential Systems at Various Abstraction Levels

Zhang, Liang 31 January 2005 (has links)
With the ever-increasing complexity of digital systems, functional verification has become a daunting task for circuit designers. Functional verification alone often surpasses 70% of the total development cost, and the situation has been projected to continue to worsen. The most critical limitations of existing techniques are the capacity issue and the run-time issue. This dissertation addresses the functional verification problem using a unified approach, which utilizes different core algorithms at various abstraction levels. At the logic level, we focus on incorporating a set of novel ideas into existing formal verification approaches. First, we present a number of powerful optimizations to improve the performance and capacity of a typical SAT-based bounded model checking framework. Secondly, we present a novel method for performing dynamic abstraction within a framework for abstraction-refinement based model checking. Experiments on a wide range of industrial designs have shown that the proposed optimizations consistently provide one to two orders of magnitude of speedup and can be extremely useful in enhancing the efficacy of existing formal verification algorithms. At the register transfer level, where formal verification is less likely to succeed, we developed an efficient ATPG-based validation framework, which leverages high-level circuit information and an improved observability-enhanced coverage metric to generate high-quality validation sequences. Experiments show that our approach is able to generate high-quality validation vectors, which achieve both high tag coverage and high bug coverage with extremely low computational cost. / Ph. D.
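For readers unfamiliar with bounded model checking, the following toy sketch illustrates the unrolling idea behind a SAT-based BMC framework, using explicit search in place of a SAT solver; the transition system and property are invented for illustration and are not the dissertation's framework.

```python
# Toy sketch of bounded model checking: unroll a transition relation up to k
# steps and search for a property violation (a real framework encodes this
# unrolling as a SAT instance instead of enumerating traces).

def bmc(init_states, transitions, bad, k):
    """Return a counterexample trace of length <= k reaching a bad state, or None."""
    frontier = [(s,) for s in init_states]
    for _ in range(k + 1):
        for trace in frontier:
            if bad(trace[-1]):
                return trace
        frontier = [trace + (t,) for trace in frontier
                    for t in transitions(trace[-1])]
    return None

# Hypothetical 3-bit counter that is claimed never to reach 6 (it can, so BMC
# finds a counterexample within the bound).
trace = bmc(init_states=[0],
            transitions=lambda s: [(s + 1) % 8],
            bad=lambda s: s == 6,
            k=10)
print("counterexample:", trace)
```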
38

Mining Multinode Constraints and Complex Boolean Expressions for Sequential Equivalence Checking

Goel, Neha 13 August 2010 (has links)
Integrated circuit design has progressed significantly over the last few decades. This increasing complexity of hardware systems poses several challenges to digital hardware verification. Functional verification has become the most expensive and time-consuming task in the overall product development cycle. Almost 70% of the total verification time is consumed by design verification, and this is projected to worsen further. One of the reasons for this complexity is that the synthesis and optimization techniques (automated as well as manual) used to improve performance, area, delay, and other measures have made the final implementation of the design very different from the golden (reference) model. Determining functional correctness between the reference and the implementation using exhaustive simulation is almost always infeasible. An alternative approach is to prove that the optimized design is functionally equivalent to the reference model, which is known to be functionally correct. The most widely used formal method to perform this process is equivalence checking. The success of combinational equivalence checking (CEC) has contributed to aggressive combinational logic synthesis and optimizations for circuits with millions of logic gates. However, without powerful sequential equivalence checking (SEC) techniques, the potential and extent of sequential optimization is quite limited. In other words, the success of SEC can unleash a plethora of aggressive sequential optimizations that can take circuit design to the next level. Currently, SEC remains extremely difficult compared to CEC, due to the huge search space of the problem. Sequential Equivalence Checking remains a challenging problem; in this thesis, we address it using efficient learning techniques. The first approach is to mine missing multi-node patterns from the mining database, verify them, and add those proved true within the unbounded SEC framework. The second approach is to mine powerful and generalized Boolean relationships among flip-flops and internal signals in a sequential circuit using a data mining algorithm. In contrast to traditional learning methods, our mining algorithms can extract illegal state cubes and inductive invariants. These invariants can be arbitrary Boolean expressions and can help in pruning a large don't-care space for equivalence checking. The two approaches are complementary to each other in nature. One computes the subset of illegal states that cannot occur in the normal function mode, and the other approach mines legal constraints that represent the characteristics of the miter circuit and can never be violated. These powerful relations, when added as new constraint clauses to the original formula, help to significantly increase the deductive power of the SAT engine, thereby pruning a larger portion of the search space. Likewise, the memory required and the time taken to solve the SEC problem are reduced. / Master of Science
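The effect of adding mined constraints can be illustrated with a toy sketch: an invariant (here, a blocking clause for an illegal state cube) added to a miter formula turns a spurious difference into an unsatisfiable, hence proven-equivalent, instance. The clauses and the brute-force solver below are illustrative stand-ins for a real SAT engine and miter.

```python
# Toy sketch: a mined invariant, added as extra clauses, prunes assignments a
# SAT-based equivalence check would otherwise explore. Brute force stands in
# for a real SAT engine; all clauses are invented for illustration.

from itertools import product

def satisfiable(clauses, n_vars):
    """Naive SAT check: clauses are lists of signed ints (DIMACS-style literals)."""
    for bits in product((False, True), repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# Miter formula: variable 3 asserts "outputs differ", which here can only happen
# in the state (x1=1, x2=0). SAT means a difference is possible.
miter = [[3], [-3, 1], [-3, -2], [-1, 2, 3]]

# Mining proves the illegal state cube (x1=1, x2=0) can never occur in normal
# operation, so its blocking clause is added as an extra constraint.
invariant = [[-1, 2]]

print("without invariant:", satisfiable(miter, 3))              # SAT: spurious difference
print("with invariant:   ", satisfiable(miter + invariant, 3))  # UNSAT: proven equivalent
```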
39

Automatic Selection of Verification Tools for Efficient Analysis of Biochemical Models

Bakir, M.E., Konur, Savas, Gheorghe, Marian, Krasnogor, N., Stannett, M. 24 April 2018 (has links)
Motivation: Formal verification is a computational approach that checks system correctness (in relation to a desired functionality). It has been widely used in engineering applications to verify that systems work correctly. Model checking, an algorithmic approach to verification, looks at whether a system model satisfies its requirements specification. This approach has been applied to a large number of models in systems and synthetic biology as well as in systems medicine. Model checking is, however, computationally very expensive, and is not scalable to large models and systems. Consequently, statistical model checking (SMC), which relaxes some of the constraints of model checking, has been introduced to address this drawback. Several SMC tools have been developed; however, the performance of each tool significantly varies according to the system model in question and the type of requirements being verified. This makes it hard to know, a priori, which one to use for a given model and requirement, as choosing the most efficient tool for any biological application requires a significant degree of computational expertise, not usually available in biology labs. The objective of this paper is to introduce a method and provide a tool leading to the automatic selection of the most appropriate model checker for the system of interest. Results: We provide a system that can automatically predict the fastest model checking tool for a given biological model. Our results show that one can make predictions of high confidence, with over 90% accuracy. This implies significant performance gain in verification time and substantially reduces the “usability barrier”, enabling biologists to have access to this powerful computational technology. / EPSRC, Innovate UK
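A hedged sketch of the general idea follows: predict the fastest checker from coarse model features with a nearest-neighbour vote. The features, tool names, and training records are invented placeholders rather than the paper's actual predictor.

```python
# Sketch of the idea only: predict the fastest model checker from coarse model
# features. The feature tuples, tool names, and records below are made up.

from collections import Counter
import math

# Each record: (num_species, num_reactions, property_depth) -> fastest tool observed.
training = [
    ((12, 30, 2), "PRISM"),
    ((15, 40, 3), "PRISM"),
    ((220, 900, 2), "PLASMA"),
    ((310, 1200, 4), "PLASMA"),
    ((180, 700, 5), "MC2"),
]

def predict(features, k=3):
    """k-nearest-neighbour vote over the closest training models."""
    nearest = sorted(training, key=lambda rec: math.dist(rec[0], features))[:k]
    return Counter(tool for _, tool in nearest).most_common(1)[0][0]

print(predict((250, 1000, 3)))   # votes among the closest recorded models
```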
40

Enhancing Software Security through Code Diversification Verification, Control-flow Restriction, and Automatic Compartmentalization

Jang, Jae-Won 26 July 2024 (has links)
In today's digital age, computer systems are prime targets for adversaries due to the vast amounts of sensitive information stored digitally. This ongoing cat-and-mouse game between programmers and adversaries forces security researchers to continually develop novel security measures. Widely adopted schemes like NX bits have safeguarded systems against traditional memory exploits such as buffer overflows, but new threats like code-reuse attacks quickly bypass these defenses. Code-reuse attacks exploit existing code sequences, known as gadgets, without injecting new malicious code, making them challenging to counter. Additionally, input-based vulnerabilities pose significant risks by exploiting external inputs to trigger malicious paths. Languages like C and C++ are often considered unsafe due to their tendency to cause issues like buffer overflows and use-after-free errors. Addressing these complex vulnerabilities requires extensive research and a holistic approach. This dissertation initially introduces a methodology for verifying the functional equivalence between an original binary and its diversified version. The Verification of Diversified Binary (VDB) algorithm is employed to determine whether the two binaries—the original and the diversified—maintain functional equivalence. Code diversification techniques modify the binary compilation process to produce functionally equivalent yet different binaries from the same source code. Most code diversification techniques focus on analyzing non-functional properties, such as whether the technique improves security. The objective of this contribution is to enable the use of untrusted diversification techniques in essential applications. Our evaluation demonstrates that the VDB algorithm can verify the functional equivalence of 85,315 functions within binaries from the GNU Coreutils 8.31 benchmark suite. Next, this dissertation proposes a binary-level tool that modifies binaries to protect against control-flow hijacking attacks. Traditional approaches to guard against ROP attacks either introduce significant overhead, require hardware support, or need intimate knowledge of the binary, such as source code. In contrast, this contribution does not rely on source code or on the latest hardware technology (e.g., Intel Control-flow Enforcement Technology). Instead, we show that we can precisely prevent control flow transfers to non-intended paths even without these features. To that end, this contribution proposes a novel control-flow integrity policy based on a deny list called Control-flow Restriction (CFR). CFR determines which control flow transfers are allowed in the binary without requiring source code. Our implementation and evaluation of CFR show that it achieves this goal with an average runtime performance overhead for commercial off-the-shelf (COTS) binaries in the range of 5.5% to 14.3%. In contrast, a state-of-the-art binary-level solution such as BinCFI has an average overhead of 61.5%. Additionally, this dissertation explores leveraging the latest hardware security primitives to compartmentalize sensitive data. Specifically, we use a tagged memory architecture introduced by ARM called the Memory Tagging Extension (MTE), which assigns a metadata tag to a memory location that is associated with pointers referencing that memory location. Although promising, ARM MTE suffers from predictable tag allocation on stack data, vulnerable plain-text metadata tags, and lack of fine-grained memory access control.
Therefore, this contribution introduces Shroud to enhance data security through compartmentalization using MTE and to protect MTE's vulnerable tagged pointers through encryption. Evaluation of Shroud demonstrates its security effectiveness against non-control-data attacks like Heartbleed and Data-Oriented Programming, with performance evaluations showing an average overhead of 4.2% on lighttpd and 2% on UnixBench. Finally, Shroud's overhead was measured on the NPB benchmark, showing an average runtime overhead of 2.57%. The vulnerabilities highlighted by exploits like Heartbleed capitalize on external inputs, underscoring the need for enhanced input-driven security measures. Therefore, this dissertation describes a method to improve upon the limitations of traditional compartmentalization techniques. This contribution introduces an Input-Based Compartmentalization System (IBCS), a comprehensive toolchain that utilizes user input to identify data for memory protection automatically. Based on user inputs, IBCS employs hybrid taint analysis to generate sensitive code paths and further analyzes each tainted data item using novel assembly analyses to identify and enforce selective targets. Evaluations of IBCS demonstrate its security effectiveness through adversarial analysis and report an average overhead of 3% on Nginx. Finally, this dissertation concludes by revisiting the problem of implementing a classical technique known as Software Fault Isolation (SFI) on an x86-64 architecture. Prior works attempting to implement SFI on an x86-64 architecture have suffered from supporting only a limited number of sandboxes, incurring high context-switch overhead, and requiring extensive modifications to the toolchain, jeopardizing maintainability and introducing compatibility issues due to the need for specific hardware. This dissertation describes x86-based Fault Isolation (XFI), an efficient SFI scheme implemented on an x86-64 architecture with minimal modifications needed to the toolchain, while reducing complexity in enforcing SFI policies with low performance (22.48% average) and binary-size (2.65% average) overheads. XFI initializes the sandbox environment for the rewritten binary and, depending on the instructions, enforces data-access and control-flow policies to ensure safe execution. XFI provides the security benefits of a classical SFI scheme and offers additional protection against several classes of side-channel attacks, which can be further extended to enhance its protection capabilities. / Doctor of Philosophy / In today's digital age, cyber attackers frequently target computer systems due to the vast amounts of sensitive information they store. As a result, security researchers must constantly develop new protective measures. Traditional defenses like NX bits have been effective against memory exploits, but new threats like code-reuse attacks, which leverage existing code without introducing new malicious code, present new challenges. Additionally, vulnerabilities in languages like C and C++ further complicate security efforts. Addressing these issues requires extensive research and a comprehensive approach. This dissertation introduces several innovative techniques to enhance computer security. First, it presents a method to verify that a diversified program is functionally equivalent to its original version, ensuring that security modifications do not alter its intended functions. Next, it proposes a technique to prevent control-flow hijacking attacks without requiring source code or advanced hardware.
Then, the dissertation explores leveraging advanced hardware, such as ARM's Memory Tagging Extension, to protect sensitive data, demonstrating robust security against attacks like Heartbleed. Recognizing that adversaries often use external inputs to exploit vulnerabilities, this dissertation introduces Input-Based Compartmentalization to automatically protect memory based on user input. Finally, an efficient implementation of a well-known security technique called Software Fault Isolation on x86-64 architecture ensures safe execution with low overhead. These advancements collectively enhance the robustness of computer systems against modern cyber threats.
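As a conceptual illustration of the deny-list control-flow restriction (CFR) idea described in the abstract, the sketch below checks observed indirect transfers against a deny list; the addresses and the list itself are invented, and the real tool operates by rewriting binaries rather than monitoring transfers in Python.

```python
# Conceptual sketch of a deny-list control-flow policy in the spirit of CFR
# (addresses and the deny list are invented for illustration).

# Indirect control-flow transfers observed in a program run: (source, target).
observed_transfers = [
    (0x401020, 0x401200),   # call through a function pointer
    (0x401350, 0x4016f0),   # return to the expected call site
    (0x401350, 0x402a10),   # return hijacked into a gadget
]

# Deny list: transfers that analysis determined must never occur.
DENY = {
    (0x401350, 0x402a10),
}

def check_transfer(src, dst):
    """Raise if a transfer appears on the deny list; allow it otherwise."""
    if (src, dst) in DENY:
        raise RuntimeError(f"blocked transfer {hex(src)} -> {hex(dst)}")
    return True

for src, dst in observed_transfers:
    try:
        check_transfer(src, dst)
        print(f"allowed {hex(src)} -> {hex(dst)}")
    except RuntimeError as err:
        print(err)
```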
