81

Formální verifikace RISC-V procesoru s využitím Questa PropCheck / Formal verification of RISC-V processor with Questa PropCheck

Javor, Adrián January 2020
The topic of this master's thesis is formal verification of a RISC-V processor with Questa PropCheck using SystemVerilog assertions. The theoretical part covers the RISC-V architecture and describes the selected components of the Codix Berkelium 5 processor used for formal verification, the AHB-lite communication protocol, and formal verification together with its methods and tools. The experimental part consists of verification planning for the selected components, the subsequent formal verification, analysis of the results, and an evaluation of the benefits of formal techniques.
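The thesis itself expresses such properties as SystemVerilog assertions checked by Questa PropCheck; as a language-neutral illustration of the same assertion style, here is a minimal Python trace monitor for a hypothetical AHB-lite-flavoured rule. The signal names and the `max_wait` bound are illustrative assumptions, not properties from the thesis:

```python
# A minimal assertion-style trace monitor for a hypothetical AHB-lite-flavoured
# rule: once the master issues a NONSEQ transfer, the slave must raise HREADY
# within max_wait cycles. Signal names and the bound are illustrative only.

NONSEQ = 2  # AHB-lite HTRANS encoding for a non-sequential transfer

def check_hready_latency(trace, max_wait=4):
    """trace: one dict per clock cycle with keys 'htrans' and 'hready'."""
    pending_since = None
    for cycle, signals in enumerate(trace):
        if pending_since is None and signals["htrans"] == NONSEQ:
            pending_since = cycle              # a transfer is now outstanding
        if pending_since is not None:
            if signals["hready"]:
                pending_since = None           # slave responded in time
            elif cycle - pending_since > max_wait:
                return f"violation at cycle {cycle}"
    return "no violation"

# The slave answers two cycles after the request: the property holds.
trace = [{"htrans": NONSEQ, "hready": 0},
         {"htrans": 0, "hready": 0},
         {"htrans": 0, "hready": 1}]
print(check_hready_latency(trace))  # -> no violation
```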
82

STAMINA: Stochastic Approximate Model-Checker for Infinite-State Analysis

Neupane, Thakur 01 August 2019
Reliable operation of everyday computing systems, from simple coffee machines to complex flight-control systems in aircraft, is necessary to save time, money, and in some cases lives. System testing can check for the presence of unwanted executions but cannot guarantee their absence. Probabilistic model checking techniques have demonstrated significant potential in verifying the performance and reliability of various systems whose executions are defined probabilistically. However, their inability to scale limits their applicability in practice. This thesis presents a new model checker, STAMINA, with efficient and scalable model truncation for probabilistic verification. STAMINA uses a novel model reduction technique that generates finite-state representations of large systems which are amenable to existing probabilistic model checking techniques. The proposed method is evaluated on several benchmark examples. Comparisons with another state-of-the-art tool demonstrate both the accuracy and the efficiency of the presented method.
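The truncation idea can be sketched in a few lines: explore the chain outward from the initial state, stop expanding states whose estimated reachability mass falls below a threshold, and route the lost mass to an absorbing sink so it doubles as a rough error estimate. This is a minimal sketch of the concept only; the threshold name `kappa`, the transition interface, and the toy chain are assumptions, not STAMINA's implementation:

```python
from collections import defaultdict

def truncated_expand(initial, transitions, kappa=1e-6):
    """Expand a discrete-time Markov chain, but stop exploring states whose
    estimated reachability mass falls below kappa; the truncated mass goes
    to an implicit absorbing sink and doubles as a rough error estimate.
    transitions(s) yields (successor, probability) pairs."""
    mass = defaultdict(float)
    mass[initial] = 1.0
    frontier, explored, sink_mass = [initial], set(), 0.0
    while frontier:
        state = frontier.pop()
        if state in explored:
            continue
        explored.add(state)
        for succ, prob in transitions(state):
            p = mass[state] * prob
            if p < kappa:
                sink_mass += p      # too unlikely: absorb instead of expanding
            else:
                mass[succ] += p
                frontier.append(succ)
    return explored, sink_mass      # finite state set + truncation estimate

# Toy infinite chain: from n, move to n+1 w.p. 0.5 or terminate w.p. 0.5.
def toy(n):
    return [] if n == "done" else [(n + 1, 0.5), ("done", 0.5)]

states, err = truncated_expand(0, toy)
print(len(states), err)  # exploration stops once 0.5**n drops below kappa
```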
83

Hardware Trojan Detection in Sequential Logic Designs

Dharmadhikari, Pranav Hemant January 2018
No description available.
84

Bounds for the Maximum-Time Stochastic Shortest Path Problem

Kozhokanova, Anara Bolotbekovna 13 December 2014
A stochastic shortest path problem is an undiscounted infinite-horizon Markov decision process with an absorbing and cost-free target state, where the objective is to reach the target state while optimizing total expected cost. In almost all cases, the objective in solving a stochastic shortest path problem is to minimize the total expected cost of reaching the target state. But in probabilistic model checking, it is also useful to solve a problem where the objective is to maximize the expected cost of reaching the target state. This thesis considers the maximum-time stochastic shortest path problem, a special case of the maximum-cost stochastic shortest path problem in which every action has unit cost. The contribution is an efficient approach to computing high-quality bounds on the optimal solution for this problem. The bounds are useful in themselves, but can also be used by other algorithms to accelerate the search for an optimal solution.
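The quantity being bounded satisfies a Bellman equation: for every non-target state s, V(s) = 1 + max over actions of the expected value of the successor. The sketch below solves it by value iteration on a toy finite chain; it illustrates the optimal values the thesis bounds, not the thesis's bounding algorithm itself, and the MDP encoding is an assumption:

```python
def max_expected_time(states, actions, P, target, iters=1000, tol=1e-9):
    """Value iteration for the maximum-time stochastic shortest path problem:
    every action costs one time step and we maximise expected steps to the
    target. P[s][a] is a list of (successor, probability) pairs."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        delta = 0.0
        for s in states:
            if s == target:
                continue                        # target is absorbing, cost-free
            best = max(1.0 + sum(p * V[t] for t, p in P[s][a])
                       for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    return V

# Toy chain: from state i, the single action moves right w.p. 0.9, stays w.p. 0.1.
states = [0, 1, 2, 3]
P = {i: {"step": [(min(i + 1, 3), 0.9), (i, 0.1)]} for i in range(3)}
V = max_expected_time(states, lambda s: ["step"], P, target=3)
print(round(V[0], 3))  # expected steps from 0 to 3 = 3 / 0.9, about 3.333
```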
85

Improving Error Discovery Using Guided Model Checking

Rungta, Neha Shyam 12 September 2006
State exploration in directed software model checking is guided by a heuristic function that moves states near errors to the front of the search queue. Distance heuristic functions rank states by the number of transitions needed to move the current program state into an error location. Lack of calling-context information causes the heuristic function to underestimate the true distance to the error; however, inlining functions at call sites in the control flow graph to capture calling context leads to exponential growth in the computation. This paper presents a new algorithm that implicitly inlines functions at call sites to compute distance data with unbounded calling context, in time polynomial in the number of nodes in the control flow graph. The new algorithm propagates distance data through call sites during a depth-first traversal of the program. We show on a series of benchmark examples that the new heuristic function with unbounded distance data is more efficient than the same heuristic function inlining functions up to a fixed depth.
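The intraprocedural core of such a distance heuristic is a backward shortest-path computation from the error locations; a minimal sketch follows. The interprocedural call-site propagation that is the paper's actual contribution is omitted, and the CFG encoding is an assumption:

```python
from heapq import heappush, heappop

def distance_to_error(cfg, error_nodes):
    """Backward Dijkstra over a control flow graph: for every node, the least
    number of transitions to any error location. States closer to an error
    then rank earlier in the model checker's search queue.
    cfg: dict mapping node -> list of successor nodes."""
    rev = {n: [] for n in cfg}          # reverse graph, to search backward
    for n, succs in cfg.items():
        for s in succs:
            rev.setdefault(s, []).append(n)
    dist = {e: 0 for e in error_nodes}
    heap = [(0, e) for e in error_nodes]
    while heap:
        d, n = heappop(heap)
        if d > dist.get(n, float("inf")):
            continue                    # stale queue entry
        for pred in rev.get(n, []):
            if d + 1 < dist.get(pred, float("inf")):
                dist[pred] = d + 1
                heappush(heap, (d + 1, pred))
    return dist

cfg = {"entry": ["a"], "a": ["b", "error"], "b": ["a"], "error": []}
print(distance_to_error(cfg, ["error"]))  # {'error': 0, 'a': 1, 'entry': 2, 'b': 2}
```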
86

Machine Code Verification Using The Bogor Framework

Edelman, Joseph R. 22 May 2008
Verification and validation of embedded systems software is tedious and time consuming. Software model checking uses a tool-based approach to automate this process. In order to model software more accurately, it is necessary to provide hardware support that enables the execution of software as it would run on native hardware. Hardware support often requires the creation of model checking tools specific to the instruction set architecture. The creation of software model checking tools is non-trivial. We present a strategy for using an "off-the-shelf" model checking tool, Bogor, to provide support for multiple instruction set architectures. Our strategy supports key hardware features such as instruction execution, exceptional control flow, and interrupt servicing as extensions to Bogor. These extensions work within the tool framework using existing interfaces and require significantly less code than creating an entire model checking tool.
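Bogor extensions themselves are written against Bogor's Java APIs; the sketch below only illustrates the underlying idea in Python: instruction semantics supplied as a transition function that an explicit-state search can explore. The two-register ISA here is entirely hypothetical:

```python
def step(state, program):
    """One transition of a toy two-register machine: the kind of instruction
    semantics a model-checking extension must supply. Hypothetical ISA,
    not Bogor's API. state = (pc, regs) with regs an immutable tuple."""
    pc, regs = state
    op, *args = program[pc]
    regs = list(regs)
    if op == "li":        # load immediate: li rd, imm
        regs[args[0]] = args[1]
    elif op == "add":     # add rd, rs1, rs2
        regs[args[0]] = regs[args[1]] + regs[args[2]]
    elif op == "beqz":    # branch to target if register is zero
        if regs[args[0]] == 0:
            return (args[1], tuple(regs))
    return (pc + 1, tuple(regs))

def explore(program, n_regs=2):
    """Exhaustive exploration with a seen-set: the core loop any
    explicit-state model checker runs over a step function."""
    seen, stack = set(), [(0, (0,) * n_regs)]
    while stack:
        state = stack.pop()
        if state in seen or state[0] >= len(program):
            continue  # already visited, or execution has halted
        seen.add(state)
        stack.append(step(state, program))
    return seen

prog = [("li", 0, 3), ("li", 1, 0), ("add", 1, 1, 0), ("beqz", 0, 2)]
print(len(explore(prog)))  # 4 reachable states before the program halts
```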
87

Verifying Abstract Components Within Concrete Software Environments

Bao, Tonglaga 26 March 2009
In order to model check a software component that is not a standalone program, we need a model of the software that completes the program. This problem is important for software engineers who need to deploy an existing component into a new environment. The model is typically generated by abstracting the surrounding software environment in which the component will be executed. However, abstracting the surrounding software is a difficult and error-prone task, particularly when the surrounding software is a complex artifact that cannot be easily abstracted. In this dissertation, we present a new approach to the problem: we abstract the software component under test and leave the surrounding software concrete. We derive this mixed abstract-concrete model automatically for both sequential and concurrent C programs and verify it using the SPIN model checker. We give verification results for several components under test contained within complex software environments to demonstrate the strengths and weaknesses of our approach. We are able to find errors in components that were too complex for analysis by existing model checking techniques. We prove that this mixed abstract-concrete model can be bisimilar to the original complete software system using an abstraction refinement scheme. We then show how to generate test cases for the component under test using this abstraction refinement process.
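The inversion is easy to picture: replace the component's concrete return value with a nondeterministic choice over an abstract domain, run the concrete environment against every choice, and refine the abstraction when a failure turns out to be spurious. A toy sketch under a hypothetical sign abstraction; nothing here comes from the dissertation's implementation:

```python
def abstract_component(_inputs):
    """Over-approximate the component under test by the set of abstract
    values it may return (a sign abstraction). Hypothetical; the
    dissertation derives such abstractions automatically for C code."""
    return ["NEG", "ZERO", "POS"]

def concrete_environment(result):
    """The surrounding software, executed concretely against each value."""
    if result == "NEG":
        raise AssertionError("environment assumes a non-negative result")

# Explore every behaviour the abstraction allows. A failure that no concrete
# component execution can produce would be spurious and trigger refinement.
for value in abstract_component(None):
    try:
        concrete_environment(value)
        print(value, "-> ok")
    except AssertionError as err:
        print(value, "->", err)
```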
88

Kernel P systems: from modelling to verification and testing

Gheorghe, Marian, Ceterchi, R., Ipate, F., Konur, Savas, Lefticaru, Raluca 13 December 2017
A kernel P system integrates in a coherent and elegant manner some of the most successfully used features of the P systems employed in modelling various applications. It also provides a theoretical framework for analysing these applications and a software environment for simulating and verifying them. In this paper, we illustrate the modelling capabilities of kernel P systems by showing how other classes of P systems can be represented with this formalism and by providing a number of kernel P system models for a sorting algorithm and a broadcasting problem. We also show how formal verification can be used to validate that the given models work as desired. Finally, a test generation method based on automata is extended to non-deterministic kernel P systems. / The work of MG, FI and RL was supported by a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI (project number: PN-II-ID-PCE-2011-3-0688); RCUK
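A P system computes by rewriting multisets of objects inside membranes, applying rules in a maximally parallel way at each step. The sketch below shows one such step for a single membrane; the greedy application order and the toy rule are simplifying assumptions, and this is not the paper's software environment:

```python
from collections import Counter

def max_parallel_step(multiset, rules):
    """One computation step of a single-membrane P system: apply the
    rewriting rules in a maximally parallel way (greedily, in the order
    given), with products only becoming available at the next step.
    multiset: Counter of objects; rules: list of (lhs, rhs) Counter pairs."""
    produced = Counter()
    applied = True
    while applied:
        applied = False
        for lhs, rhs in rules:
            if all(multiset[obj] >= n for obj, n in lhs.items()):
                multiset -= lhs     # consume the left-hand side now
                produced += rhs     # right-hand side appears next step
                applied = True
    return multiset + produced

# Toy evolution rule a -> bb, applied maximally in parallel to a^3.
state = Counter({"a": 3})
state = max_parallel_step(state, [(Counter({"a": 1}), Counter({"b": 2}))])
print(state)  # Counter({'b': 6})
```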
89

Symboleo: Specification and Verification of Legal Contracts

Parvizimosaed, Alireza 21 October 2022
Contracts are legally binding and enforceable agreements among two or more parties that govern social interactions. They have been used for millennia, including in commercial transactions, employment relationships, and intellectual property generation. Each contract determines obligations and powers of the contracting parties. The execution of a contract needs to be continuously monitored to ensure compliance with its terms and conditions. Smart contracts are software systems that monitor and control the execution of contracts to ensure compliance. But for such software systems to become possible, contracts need to be specified precisely to eliminate ambiguities, contradictions, and missing clauses. This thesis proposes a formal specification language for contracts named Symboleo. The ontology of Symboleo is founded on the legal concepts of obligation (a kind of duty) and power (a kind of right), complemented with the concepts of event and situation that are suitable for conceptualizing monitoring tasks. The formal semantics of the legal concepts is defined in terms of state machines that describe the lifetimes of contracts, obligations, and powers, as well as axioms that precisely describe state transitions. The language supports execution-time operations that enable subcontracting, assignment of rights, and substitution of performance to a third party during the execution of a contract. Symboleo has been applied to the formalization of contracts from three different domains as a preliminary evaluation of its expressiveness. Formal specifications can be algorithmically analyzed to ensure that they satisfy desired properties. Towards this end, the thesis presents two implemented analysis tools. One is a conformance checking tool that ensures that a specification is consistent with the expectations of contracting parties; expectations are defined for this tool in terms of scenarios (sequences of events) and the expected final outcome (i.e., successful/unsuccessful execution). The other tool (SymboleoPC), which builds on top of an existing model checker (nuXmv), can prove or disprove desired properties of a contract, expressed in temporal logic. These tools have been used for assessing different business contracts. SymboleoPC is also assessed in terms of performance and scalability, with positive results. Symboleo, together with its associated tools, is envisioned as an enabler for the formal verification of contracts to address requirements-level issues at design time.
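Symboleo's semantics gives each obligation a lifecycle (created, in effect, fulfilled, violated, and so on) driven by events and deadlines. A toy Python rendering of that idea follows; the state names follow the thesis informally, while the triggers, guard, and API are simplifying assumptions:

```python
from enum import Enum, auto

class OblState(Enum):
    CREATED = auto()
    IN_EFFECT = auto()
    FULFILLED = auto()
    VIOLATED = auto()

class Obligation:
    """A toy rendering of an obligation's lifetime as a state machine,
    in the spirit of Symboleo's semantics (simplified for illustration)."""

    def __init__(self, debtor, creditor, deadline):
        self.debtor, self.creditor, self.deadline = debtor, creditor, deadline
        self.state = OblState.CREATED

    def activate(self):
        if self.state is OblState.CREATED:
            self.state = OblState.IN_EFFECT

    def on_event(self, event, time):
        """Monitor step: performance fulfils the obligation; passing the
        deadline without performance violates it."""
        if self.state is not OblState.IN_EFFECT:
            return
        if event == "performed":
            self.state = OblState.FULFILLED
        elif time > self.deadline:
            self.state = OblState.VIOLATED

o = Obligation(debtor="seller", creditor="buyer", deadline=30)
o.activate()
o.on_event("tick", time=31)
print(o.state)  # OblState.VIOLATED
```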
90

Criticism and robustification of latent Gaussian models

Cabral, Rafael 28 May 2023
Latent Gaussian models (LGMs) are perhaps the most commonly used class of statistical models with broad applications in various fields, including biostatistics, econometrics, and spatial modeling. LGMs assume that a set of unobserved or latent variables follow a Gaussian distribution, commonly used to model spatial and temporal dependence in the data. The availability of computational tools, such as R-INLA, that permit fast and accurate estimation of LGMs has made their use widespread. Nevertheless, it is easy to find datasets that contain inherently non-Gaussian features, such as sudden jumps or spikes, that adversely affect the inferences and predictions made from an LGM. These datasets require more general latent non-Gaussian models (LnGMs) that can automatically handle these non-Gaussian features by assuming more flexible and robust non-Gaussian distributions on the latent variables. However, fast implementation and easy-to-use software are lacking, which prevents LnGMs from becoming widely applicable. This dissertation aims to tackle these challenges and provide ready-to-use implementations for the R-INLA package. We view scientific learning as an iterative process involving model criticism followed by model improvement and robustification. Thus, the first step is to provide a framework that allows researchers to criticize and check the adequacy of an LGM without fitting the more expensive LnGM. We employ concepts from Bayesian sensitivity analysis to check the influence of the latent Gaussian assumption on the statistical answers and Bayesian predictive checking to check if the fitted LGM can predict important features in the data. In many applications, this procedure will suffice to justify using an LGM. For cases where this check fails, we provide fast and scalable implementations of LnGMs based on variational Bayes and Laplace approximations. The approximation leads to an LGM that downweights extreme events in the latent variables, reducing their impact and leading to more robust inferences. Each step, the first of LGM criticism and the second of LGM robustification, can be executed in R-INLA, requiring only the addition of a few lines of code. This results in a robust workflow that applied researchers can readily use.
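In standard notation, the three-level hierarchy that defines an LGM, and that an LnGM relaxes at the middle latent level, can be sketched as follows (the notation is generic rather than quoted from the dissertation):

```latex
% Standard latent Gaussian model hierarchy; an LnGM replaces the Gaussian
% middle level with a heavier-tailed latent distribution.
\begin{aligned}
  y_i \mid \mathbf{x}, \boldsymbol{\theta}
      &\sim \pi\bigl(y_i \mid \eta_i(\mathbf{x}), \boldsymbol{\theta}\bigr)
      && \text{(observation model)} \\
  \mathbf{x} \mid \boldsymbol{\theta}
      &\sim \mathcal{N}\bigl(\mathbf{0},\, \mathbf{Q}(\boldsymbol{\theta})^{-1}\bigr)
      && \text{(latent Gaussian field)} \\
  \boldsymbol{\theta} &\sim \pi(\boldsymbol{\theta})
      && \text{(hyperparameter prior)}
\end{aligned}
```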
