41

Verification and Configuration of a Run-Time Reconfigurable Custom Computing Integrated Circuit for DSP Applications

Cherbaka, Mark F. 08 July 1996 (has links)
In recent years, interest in the area of custom computing machines (CCMs) has been on a steady increase. Much of the activity surrounding CCMs has centered on Field-Programmable Gate Array (FPGA) technology and rapid prototyping applications. While higher performance has been a concern in some applications, the solutions are limited by relatively small FPGA bandwidth, density, and throughput, which leads to area, speed, power, and application-specific constraints. In recent months, an integrated circuit known as the VT Colt has been developed to address high-performance digital signal processing (DSP) problems that conventional processors, CCMs, and ASICs cannot handle under the given space and power constraints. The Colt chip takes a data-flow approach to processing and is highly reconfigurable to suit the many computationally demanding tasks that new DSP applications present. A significant portion of the development of the Colt chip is architectural justification, functional verification, and configurability. This thesis explores verification of the Colt chip at various levels of development, including the mapping of arithmetic computations and DSP algorithms that the Colt architecture was designed to solve. / Master of Science
42

Secure and Trusted Verification

Cai, Yixian 06 1900 (has links)
In our setting, verification is a process that checks whether a device's program (implementation) has been produced according to its corresponding requirements specification. Ideally, a client builds the requirements specification of a program and asks a developer to produce the actual program according to the requirements specification it provides. After the program is built, a verifier is asked to verify the program. However, verification methods today usually require detailed knowledge of the program being verified, so sensitive information about the program itself can easily leak during the process. In this thesis, we introduce the notion of secure and trusted verification, which allows the developer to hide non-disclosed information about the program from the verifier during the verification process, and a third party to check the correctness of the verification result. Moreover, we formally study the mutual trust between the verifier and the developer, and define the above notion in the context of both an honest and a malicious developer. We implement this notion in both the honest-developer and the malicious-developer settings, using cryptographic primitives and tabular expressions. Our construction allows the developer to hide the modules of a program while the verifier performs somewhat white-box verification. The security properties of the implementation are also formally discussed, and strong security results are proved to be achieved. / Thesis / Master of Science (MSc)
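As a toy illustration of the hiding idea (a minimal sketch under assumed names, not the thesis' actual construction, which combines cryptographic primitives with tabular expressions): the developer publishes hash commitments to a hidden module's input/output behaviour, and the verifier later checks a requirement against the opened rows without ever seeing the module's code.

```python
import hashlib
import json
import os

def secret_module(x: int) -> int:
    """Developer's hidden implementation (never shown to the verifier)."""
    return abs(x) % 256

def commit(row: dict, nonce: bytes) -> str:
    """Hash commitment to one I/O row; the nonce hides the row until opened."""
    payload = json.dumps(row, sort_keys=True).encode() + nonce
    return hashlib.sha256(payload).hexdigest()

# Developer: publish commitments over an agreed test domain.
nonces = {x: os.urandom(16) for x in range(-8, 9)}
table = {x: {"in": x, "out": secret_module(x)} for x in nonces}
commitments = {x: commit(table[x], nonces[x]) for x in nonces}

# Verifier: a requirement stated on the observable behaviour only.
def requirement(x: int, y: int) -> bool:
    return 0 <= y < 256  # e.g. "output fits in one byte"

# Later, the developer opens the rows; the verifier checks that each opened
# row matches its commitment and that it satisfies the requirement.
for x, row in table.items():
    assert commit(row, nonces[x]) == commitments[x], "opening mismatch"
    assert requirement(row["in"], row["out"]), f"requirement fails at {x}"
print("all committed rows verified")
```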
43

Machine-Learning-Assisted Test Generation to Characterize Failures for Cyber-Physical Systems

Chandar, Abhishek 17 July 2023 (has links)
With the advancements in the Internet of Things (IoT) and innovations in the networking domain, Cyber-Physical Systems (CPS) are rapidly being adopted in domains ranging from autonomous vehicles to manufacturing systems, to improve the efficiency of the overall development of complex physical systems. CPS models allow an easy and cost-effective approach to altering the architecture of the system to yield optimal performance. This is especially crucial in the early stages of development of a physical system. Developing effective testing strategies for CPS models is necessary to ensure that there are no defects during the execution of the system. Typically, a set of requirements is defined from domain expertise to assert the system's behavior on different possible inputs. To effectively test CPS, a large number of test inputs is required to observe the system's behavior across a wide variety of conditions. But real-world CPS models are compute-intensive (i.e., executing the CPS for a given test input takes a significant amount of time), so it is practically impossible to execute CPS models over a large number of test inputs. This leads to sub-optimal fixes based on the identified defects, which may cause costly issues at later stages of development. In this thesis, we aim to improve the efficiency of existing search-based software testing approaches for compute-intensive CPS by combining them with ML; we call these ML-assisted test generation. We investigate two alternative ML-assisted test generation techniques: (1) surrogate-assisted and (2) ML-guided test generation, to efficiently test a given CPS model. Both surrogate-assisted and ML-guided test generation can generate many test inputs, so we propose to build failure models that derive explainable rules from failure-inducing test inputs of the CPS model. Surrogate-assisted test generation uses ML as a replacement for the CPS under test, so that the fitness value of some test inputs is predicted rather than obtained by executing the CPS. A large number of test inputs is generated by combining cheap surrogate predictions with compute-intensive executions of the CPS model to label the test inputs. Specifically, we propose a new surrogate-assisted test generation technique that leverages multiple surrogate models simultaneously and dynamically selects the prediction of the most accurate surrogate; a sketch of this idea follows below. Alternatively, ML-guided test generation estimates the boundary regions that separate test inputs that pass the requirements from test inputs that fail them, and subsequently guides the sampling of test inputs toward these boundary regions. Further, the test data generated by the ML-assisted test generation techniques are used to infer two alternative failure models, namely the Decision Rule Model (DRM) and the Decision Tree Model (DTM), which characterize the failure circumstances of the CPS model. We conduct an empirical evaluation of the accuracy of failure models inferred from test data generated by both ML-assisted test generation techniques. Using a total of 15 different functional requirements from 5 Simulink-based benchmark CPS, we observed that the proposed dynamic surrogate-assisted test generation technique generates failure models with an average accuracy of 83% for DRM and 90% for DTM, a 16.9% improvement in the average accuracy of DRM and a 7.1% improvement in the average accuracy of DTM compared to the random search baseline.
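A minimal sketch of the dynamic surrogate idea, under toy assumptions (the one-dimensional `expensive_cps` function and the polynomial surrogates below are stand-ins for the thesis' Simulink benchmarks and ML models):

```python
import numpy as np

def expensive_cps(x: float) -> float:
    """Stand-in for a compute-intensive simulation; fitness < 0 = failure."""
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = list(rng.uniform(-2, 2, 10))
Y = [expensive_cps(x) for x in X]          # initial (costly) executions

def fit_surrogates(X, Y):
    """Two cheap surrogates: linear and cubic polynomial fits."""
    return [np.poly1d(np.polyfit(X, Y, deg)) for deg in (1, 3)]

for _ in range(20):
    surrogates = fit_surrogates(X, Y)
    # Dynamic selection: score each surrogate by its error on executed data.
    errors = [np.mean([(s(x) - y) ** 2 for x, y in zip(X, Y)])
              for s in surrogates]
    best = surrogates[int(np.argmin(errors))]
    # Screen many candidates cheaply; execute only the most failure-prone one.
    candidates = rng.uniform(-2, 2, 200)
    x_new = float(candidates[np.argmin(best(candidates))])
    X.append(x_new)
    Y.append(expensive_cps(x_new))          # single expensive execution

failures = [(x, y) for x, y in zip(X, Y) if y < 0]
print(f"{len(failures)} failure-inducing inputs found")
```

The failure-inducing inputs collected this way would then feed the DRM/DTM failure-model inference step.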
44

Intersection types and higher-order model checking

Ramsay, Steven J. January 2014 (has links)
Higher-order recursion schemes are systems of equations that are used to define finite and infinite labelled trees. Since, as Ong has shown, the trees defined have a decidable monadic second order theory, recursion schemes have drawn the attention of research in program verification, where they sit naturally as a higher-order, functional analogue of Boolean programs. Driven by applications, fragments have been studied, algorithms developed and extensions proposed; the emerging theme is called higher-order model checking. Kobayashi has pioneered an approach to higher-order model checking using intersection types, from which many recent advances have followed. The key is a characterisation of model checking as a problem of intersection type assignment. This dissertation contributes to both the theory and practice of the intersection type approach. A new, fixed-parameter polynomial-time decision procedure is described for the alternating trivial automaton fragment of higher-order model checking. The algorithm uses a novel, type-directed form of abstraction refinement, in which behaviours of the scheme are distinguished according to the intersection types that they inhabit. Furthermore, by using types to reason about acceptance and rejection simultaneously, the algorithm is able to converge on a solution from two sides. An implementation, Preface, and an extensive body of evidence demonstrate empirically that the algorithm scales well to schemes of several thousand rules. A comparison with other tools on benchmarks derived from current practice and the related literature puts it well beyond the state-of-the-art. A generalisation of the intersection type approach is presented in which higher-order model checking is seen as an instance of exact abstract interpretation. Intersection type assignment is used to characterise a general class of safety checking problems, defined independently of any particular representation (such as automata) for a class of recursion schemes built over arbitrary constants. Decidability of any problem in the class is an immediate corollary. Moreover, the work looks beyond whole-program verification, the traditional territory of model checking, by giving a natural treatment of higher-type properties, which are sets of functions.
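For readers unfamiliar with recursion schemes, a standard textbook-style example (not drawn from the dissertation) of the objects involved:

```latex
% A first-order recursion scheme; S is the start symbol, a, b, c are
% terminal (tree-constructor) symbols.
\[
  S \;\rightarrow\; F\,c
  \qquad\qquad
  F\,x \;\rightarrow\; a\,x\,(F\,(b\,x))
\]
% Unfolding from S defines the infinite tree
%   a c (a (b c) (a (b (b c)) ...)).
% By Ong's theorem its monadic second-order theory is decidable, so
% properties of such trees can be model-checked, e.g. via intersection
% type assignment.
```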
45

Precise verification of C programs

Lewis, Matt January 2014 (has links)
Most current approaches to software verification are one-sided -- a safety prover will try to prove that a program is safe, while a bug-finding tool will try to find bugs. It is rare to find an analyser that is optimised for both tasks, which is problematic since it is hard to know in advance whether a program you wish to analyse is safe or not. The result of taking a one-sided approach to verification is false alarms: safety provers will often claim that safe programs have errors, while bug-finders will often be unable to find errors in unsafe programs. Orthogonally, many software verifiers are designed for reasoning about idealised programming languages that may not have widespread use. A common assumption made by verification tools is that program variables can take arbitrary integer values, while programs in most common languages use fixed-width bitvectors for their variables. This can have a real impact on the verification, leading to incorrect claims by the verifier. In this thesis we will show that it is possible to analyse C programs without generating false alarms, even if they contain unbounded loops, use non-linear arithmetic and have integer overflows. To do this, we will present two classes of analysis based on underapproximate loop acceleration and second-order satisfiability respectively. Underapproximate loop acceleration addresses the problem of finding deep bugs. By finding closed forms for loops, we show that deep bugs can be detected without unwinding the program and that this can be done without introducing false positives or masking errors. We then show that programs accelerated in this way can be optimised by inlining trace automata to reduce their reachability diameter. This inlining allows acceleration to be used as a viable technique for proving safety, as well as finding bugs. In the second part of the thesis, we focus on using second-order logic for program analysis. We begin by defining second-order SAT: an extension of propositional SAT that allows quantification over functions. We show that this problem is NEXPTIME-complete, and that it is polynomial time reducible to finite-state program synthesis. We then present a fully automatic, sound and complete algorithm for synthesising C programs from a specification written in C. Our approach uses a combination of bounded model checking, explicit-state model checking and genetic programming to achieve surprisingly good performance for a problem with such high complexity. We conclude by using second-order SAT to precisely and directly encode several program analysis problems including superoptimisation, de-obfuscation, safety and termination for programs using bitvector arithmetic and dynamically allocated lists.
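A toy sketch of the acceleration idea (the constants and loop below are hypothetical, and real acceleration operates on program syntax rather than Python functions): the loop x := x + k has the exact closed form x0 + n*k after n iterations, so a bug reachable only after millions of iterations is found by solving one inequality instead of unwinding.

```python
X0, K, BOUND = 0, 3, 10_000_000   # hypothetical program constants

def buggy_loop_naive() -> int:
    """Direct simulation: ~3.3 million iterations before the bug fires."""
    x = X0
    while True:
        x += K
        if x > BOUND:             # the "deep" assertion violation
            return x

def accelerated_bug_search():
    """Closed form: after n iterations, x = X0 + n*K (exact for this loop).
    The smallest n with X0 + n*K > BOUND witnesses the bug directly."""
    n = (BOUND - X0) // K + 1     # first iteration count violating the bound
    return n, X0 + n * K

n, x = accelerated_bug_search()
print(f"bug reached at iteration {n} with x = {x}")
assert x == buggy_loop_naive()    # the acceleration is exact: no false alarm
```

Because the closed form is exact (an underapproximation that masks nothing for this loop shape), the reported bug is guaranteed real, matching the thesis' goal of bug-finding without false positives.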
46

Incremental modelling for verified communication architectures

Boehm, Peter January 2011 (has links)
Modern computer systems are advancing from multi-core to many-core designs, and systems-on-chip (SoCs) are becoming increasingly complex while integrating a great variety of components, thus constituting complex distributed systems. Such architectures rely on extremely complex communication protocols to exchange data with the required performance. Arguing formally about the correctness of communication is an acknowledged verification challenge. This thesis presents a generic framework that formalises the idea of incremental modelling and step-wise verification to tackle this challenge: to control the overall complexity, features are added incrementally to a simple initial model, and the complexity of each feature is encapsulated into an independent modelling step. Two main strategies reduce the verification effort. First, models are constructed with verification support in mind, and the verification process is spread over the modelling process. Second, generic correctness results for framework components allow the verification to be reduced to discharging local assumptions when a component is instantiated. Models in the framework are based on abstract state machines formalised in higher-order logic using the Isabelle theorem prover. Two case studies show the utility and breadth of the approach: the ARM AMBA Advanced High-performance Bus protocol, an arbiter-based master-slave bus protocol, represents the family of SoC protocols; the PCI Express protocol, an off-chip point-to-point protocol, illustrates the application of the framework to sophisticated, performance-related features of current and future on-chip protocols. The presented methodology provides an alternative to the traditional monolithic and post-hoc verification approach.
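A toy Python sketch of the incremental-modelling pattern (hypothetical names; the actual framework is formalised in Isabelle/HOL): a one-line base bus model, a feature added as an independent wrapper step, and a local proof obligation attached to that feature.

```python
def base_step(state: dict, req: int) -> dict:
    """Initial model: the slave answers a request in the same step."""
    return {**state, "resp": req, "pending": None}

def with_wait_states(step):
    """Feature added incrementally: responses are delayed by one step."""
    def wrapped(state: dict, req: int) -> dict:
        delayed = state.get("pending")
        new = step(state, req)
        return {**new, "resp": delayed, "pending": req}
    return wrapped

def local_invariant(trace, reqs):
    """Local proof obligation for the feature: every request is answered
    exactly one step later."""
    return all(trace[i + 1]["resp"] == reqs[i] for i in range(len(reqs) - 1))

reqs = [7, 3, 9, 1]
step = with_wait_states(base_step)        # model = base + feature
state, trace = {"resp": None, "pending": None}, []
for r in reqs:
    state = step(state, r)
    trace.append(state)
print("invariant holds:", local_invariant(trace, reqs))
```

Each feature wrapper can be verified against its own invariant in isolation, mirroring how the framework's generic correctness results reduce whole-model verification to discharging local assumptions.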
47

A symbolic execution framework for algorithm-level modelling and verification of computer microarchitecture

Hanna, Ziyad January 2011 (has links)
This dissertation addresses the challenge of modelling and functional verification for complex computer micro-architecture designs. It is evident that the emerging span and complexity of new computer architectures outstrip existing design modelling and verification methods. Several attempts in industry and academia, including High Level Modelling, still do not scale to address this problem. Typically they lack precise and clear language semantics for formal analysis, do not have native support for concurrency, or the design language and methodology do not fit. Therefore, the gap between what current solutions provide and what industry needs is increasing. In this research we aim to leap ahead of the common incremental research in this area, and develop a new framework for algorithm-level modelling and verification. We introduce a high-level and executable Architectural Specification Language (ASL) for modelling the functional behaviour of architectural algorithms. The semantics of our models is based on the theory of Abstract State Machines with synchronous parallel execution and finite choice, which we find naturally suitable for hardware modelling. Our framework is also powered by native symbolic execution algorithms for enabling high-level verification, design exploration, and refinement checks of the high-level models down to the design implementation. We developed a new framework that implements our ideas through ASL and supports symbolic execution. We demonstrate the utility of our language and symbolic execution on examples and case studies in various modelling domains, and show a promising framework and methodology. We believe our approach will make it easier to explore micro-architectural algorithm behavior and easier to validate it using dynamic or formal techniques, thus yielding a promising attack on the modelling and verification problem, and enabling more productive convergence to high-quality implementations.
48

Synthesis of Interactive Reactive Systems

Bozianu, Rodica 12 December 2016 (has links)
We study the problem of automatic synthesis of programs in multi-component architectures such that they satisfy their specifications by construction. The main goal of the thesis is to develop procedures for solving the synthesis problem that may lead to efficient implementations. Each component may have only partial observation of the global state of the multi-component system. The synthesis problem therefore asks for observation-based protocols for the components to be synthesized, ensuring that the specifications hold on all interactions with their environment. The environment may be antagonistic, or may have its own objectives and behave rationally. We first study the synthesis problem when the environment is presumed to be completely antagonistic. For this setting, we propose a "Safraless" procedure for synthesizing one partially informed component against an omniscient environment from KLTL+ specifications; it is implemented in the tool Acacia-K. Secondly, we study the synthesis problem when the components in the environment have their own objectives and are rational. For the more relaxed setting of perfect information, we provide tight complexities for particular omega-regular objectives. For the case of imperfect information, we prove that the rational synthesis problem is undecidable in general, but decidability is regained when a single partially observing component is synthesized against a rational, omniscient, multi-component environment.
49

Off-line signature verification

Larkins, Robert L. January 2009 (has links)
In today’s society, signatures are the most accepted form of identity verification. However, they have the unfortunate side-effect of being easily abused by those who would forge the identity or intent of an individual. This thesis implements and tests current approaches to off-line signature verification with the goal of determining the most beneficial techniques that are available. This investigation also introduces novel techniques that are shown to significantly boost the achieved classification accuracy for both person-dependent (one-class training) and person-independent (two-class training) signature verification learning strategies. The findings presented in this thesis show that many common techniques do not always give a significant advantage, and in some cases they actually detract from the classification accuracy. Using the techniques that are proven to be most beneficial, an effective approach to signature verification is constructed, which achieves approximately 90% and 91% classification accuracy on the standard CEDAR and GPDS signature datasets respectively. These results are significantly better than the majority of previously published results. Additionally, this approach remains relatively stable when a minimal number of training signatures is used, demonstrating its feasibility for real-world situations.
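As a rough sketch of the person-dependent (one-class training) strategy (the random feature vectors and the one-class SVM below are illustrative assumptions, not the thesis' actual features or classifier): train only on one writer's genuine signatures and flag outliers as forgeries.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
# Stand-ins for extracted signature feature vectors (16 features each).
genuine = rng.normal(loc=0.0, scale=1.0, size=(40, 16))    # one writer
forgeries = rng.normal(loc=2.5, scale=1.0, size=(10, 16))  # attempted fakes

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
model.fit(genuine)                        # one-class training: genuine only

pred_genuine = model.predict(genuine)     # +1 = accepted, -1 = rejected
pred_forgery = model.predict(forgeries)
far = np.mean(pred_forgery == 1)          # false accept rate
frr = np.mean(pred_genuine == -1)         # false reject rate
print(f"FAR = {far:.2f}, FRR = {frr:.2f}")
```

The person-independent (two-class) strategy would instead train a single genuine-vs-forgery classifier across many writers.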
50

Behavioral Model Equivalence Checking for Large Analog Mixed Signal Systems

Singh, Amandeep 2011 May 1900 (has links)
This thesis proposes a systematic, hierarchical, optimization-based semi-formal equivalence checking methodology for large analog/mixed-signal systems such as phase-locked loops (PLLs), analog-to-digital converters (ADCs), and input/output (I/O) circuits. I propose to verify the equivalence between a behavioral model and its electrical implementation over a limited, but highly likely, input space defined as the Constrained Behavioral Input Space. Furthermore, I clearly distinguish between the behavioral and electrical domains and define mapping functions between the two domains to allow for calculation of the deviation between the behavioral and electrical implementations. The verification problem is then formulated as an optimization problem, which is solved by interfacing a sequential quadratic programming (SQP) based optimizer with commercial circuit simulation tools such as CADENCE SPECTRE. The proposed methodology is then applied to equivalence checking of a PLL as a test case, and results are shown that support the correctness of the proposed methodology.
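A minimal sketch of the optimization formulation (a closed-form stand-in replaces the SPECTRE simulation, and all functions below are illustrative assumptions): search the constrained input space for the input that maximizes the deviation between the behavioral model and the "electrical" implementation, using SLSQP as the SQP method.

```python
import numpy as np
from scipy.optimize import minimize

def behavioral(v: float) -> float:
    """Behavioral model: ideal saturating amplifier, gain 10."""
    return float(np.clip(10.0 * v, -1.0, 1.0))

def electrical(v: float) -> float:
    """Stand-in for the circuit simulator: soft saturation via tanh."""
    return float(np.tanh(10.0 * v))

def neg_deviation(x) -> float:
    """Negated |deviation|, since SciPy minimizes."""
    v = x[0]
    return -abs(behavioral(v) - electrical(v))

# Constrained Behavioral Input Space: inputs limited to [-0.3, 0.3] volts.
res = minimize(neg_deviation, x0=[0.05], bounds=[(-0.3, 0.3)], method="SLSQP")
print(f"worst-case input = {res.x[0]:+.4f} V, deviation = {-res.fun:.4f}")
# If the worst-case deviation exceeds an agreed tolerance, the behavioral
# model and the implementation are not equivalent over this input space.
```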
