101

SoC Security Verification Using Assertion-Based and Information Flow Tracking Techniques

Achyutha, Shanmukha Murali January 2021 (has links)
No description available.
102

Vérification de propriétés logico-temporelles de spécifications SystemC TLM / Verification of temporal properties for SystemC TLM specifications

Ferro, Luca 11 July 2011 (has links)
Over recent years, the complexity of electronic circuits has grown tremendously, and circuits have spread into many different application areas. This development leads to systems-on-chip that combine components with highly heterogeneous features. Ensuring the correct behavior of each component, as well as validating the behavior of the whole system, is an essential but difficult task. In this context, Assertion-Based Verification (ABV) has gained wide acceptance: in this approach, temporal properties expressed in languages such as PSL or SVA specify the expected behavior of the design. While most existing ABV solutions are restricted to the register transfer level (RTL), the work in this thesis overcomes several of their limitations and provides a mature ABV solution for transaction-level modeling (TLM) in SystemC. An effective technique for constructing checker modules from PSL properties is proposed; it adapts a pioneering RTL approach to SystemC TLM. A specific method for monitoring communication activities at a high level of abstraction is also described. The scope of the technique is significantly extended by providing, for properties written in PSL, both formal and practical support for auxiliary global and local variables, which are essential in higher-level specifications. All of these concepts are implemented in a prototype tool. To demonstrate the applicability of the proposed solution, various experiments were performed on designs of different sizes and complexities. The results show that this dynamic verification methodology remains applicable to designs of realistic size.
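To illustrate the general idea behind such checker modules, here is a minimal sketch, in Python rather than SystemC, of a runtime monitor for a PSL-like obligation "always (write_req -> eventually! write_ack)" observed over a transaction trace. The event names, trace format, and monitor interface are assumptions made for illustration; this is not the thesis's prototype tool.

```python
# Minimal sketch of an ABV-style runtime monitor (illustrative only):
# it observes a stream of transaction events and checks the PSL-like
# property "always (write_req -> eventually! write_ack)".

class EventuallyAckMonitor:
    """Tracks outstanding requests; every request must be acknowledged
    before the end of the trace for the property to hold."""

    def __init__(self):
        self.pending = 0      # requests still waiting for an ack
        self.failed = False

    def observe(self, event: str) -> None:
        if event == "write_req":
            self.pending += 1
        elif event == "write_ack":
            if self.pending == 0:
                self.failed = True   # ack without a matching request
            else:
                self.pending -= 1

    def end_of_trace(self) -> bool:
        # The strong obligation "eventually!" is violated if any
        # request is still pending when the simulation ends.
        return not self.failed and self.pending == 0


trace = ["write_req", "idle", "write_ack", "write_req", "write_ack"]
monitor = EventuallyAckMonitor()
for event in trace:
    monitor.observe(event)
print("property holds:", monitor.end_of_trace())  # property holds: True
```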
103

FORMAL: A SEQUENTIAL ATPG-BASED BOUNDED MODEL CHECKING SYSTEM FOR VLSI CIRCUITS

Qiang, Qiang 10 April 2006 (has links)
No description available.
104

SPIRIT III Data Verification Processing

Garlick, Dean, Wada, Glen, Krull, Pete 10 1900 (has links)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / This paper discusses the functions performed by the Spatial Infrared Imaging Telescope (SPIRIT) III Data Processing Center (DPC) at Utah State University (USU). The SPIRIT III sensor is the primary instrument on the Midcourse Space Experiment (MSX) satellite, and as builder of this sensor system, USU is responsible for developing and operating the associated DPC. The SPIRIT III sensor consists of a six-color long-wave infrared (LWIR) radiometer system, an LWIR spectrographic interferometer, contamination sensors, and housekeeping monitoring systems. The MSX spacecraft recorders can capture more than 8 gigabytes of data per day from this sensor. The DPC is therefore required to provide a 24-hour turnaround to verify and qualify these data by implementing a complex set of sensor and data verification and quality checks. This paper addresses the computing architecture, distributed processing software, and automated data verification processes implemented to meet these requirements.
105

Digital signatures

Swanepoel, Jacques Philip 12 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2015 / In this dissertation we present a novel strategy for automatic handwritten signature verification. The proposed framework employs a writer-independent approach to signature modelling and is therefore capable of authenticating questioned signatures claimed to belong to any writer, provided that at least one authentic sample of said writer's signature is available for comparison.
We investigate both the traditional off-line scenario (where an existing pen-on-paper signature is extracted from a digitised document) and the increasingly popular on-line scenario (where the signature data are recorded automatically during the signing event by means of specialised electronic hardware). The off-line feature extraction technique involves the calculation of several discrete Radon transform (DRT) based projections, whilst on-line signatures are represented in feature space by several spatial and temporal function features. In order to facilitate writer-independent signature analysis, these feature sets are subsequently converted into a dissimilarity-based representation by means of a suitable dichotomy transformation. The classification techniques utilised for signature modelling and verification include quadratic discriminant analysis (QDA) and support vector machines (SVMs). The major contributions of this study are two novel techniques aimed at the construction of a robust writer-independent signature model. The first, a dynamic time warping (DTW) based dichotomy transformation for off-line signature representation, compensates for reasonable intra-class variability by non-linearly aligning DRT-based projections prior to matching. The second, a writer-specific dissimilarity normalisation strategy, improves inter-class separability in dissimilarity space by considering only strictly relevant dissimilarity statistics when normalising the dissimilarity vectors belonging to a specific individual. This normalisation strategy is generic in the sense that it is equally applicable to both off-line and on-line signature model construction. The systems developed in this study are aimed specifically at skilled forgery detection. System proficiency is estimated using a rigorous experimental protocol, and several large signature corpora are considered. In both the off-line and on-line scenarios, the proposed SVM-based system outperforms the proposed QDA-based system. We also show that the systems proposed in this study outperform most existing systems evaluated on the same data sets. More importantly, when compared to state-of-the-art techniques in the literature, we show that incorporating the novel techniques proposed in this study consistently yields a statistically significant improvement in system proficiency.
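As a rough illustration of these two building blocks, the sketch below (an assumption-laden toy, not the author's system) combines a classic dynamic time warping distance with a dichotomy transformation: a pair of samples, each represented by a few 1-D projections, is mapped to a single dissimilarity vector with one DTW distance per projection.

```python
# Illustrative sketch (not the author's code) of two ideas from the
# abstract: dynamic time warping between two 1-D projections, and a
# dichotomy transformation that maps a pair of samples to a single
# dissimilarity vector.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

def dichotomy_transform(sample_x: list, sample_y: list) -> np.ndarray:
    """Map a pair of samples (each a list of 1-D projections) to a
    dissimilarity vector: one DTW distance per projection pair."""
    return np.array([dtw_distance(np.asarray(p), np.asarray(q))
                     for p, q in zip(sample_x, sample_y)])

# Two toy "signatures", each represented by two projections.
genuine = [[0.1, 0.5, 0.9, 0.5], [0.2, 0.4, 0.4, 0.1]]
questioned = [[0.1, 0.6, 0.8, 0.4], [0.2, 0.5, 0.3, 0.1]]
print(dichotomy_transform(genuine, questioned))
```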
106

Knowledge-based support for software selection in information centers: Design criteria, development issues, and empirical evaluation.

Vinze, Ajay Shreekrishna. January 1988 (has links)
An information center (IC) is described as an organization designed to help end users help themselves. ICs are expected to provide several services to end users; these services can be summarized as consultation, distribution, and trouble-shooting. The research focuses on a specific consultation activity: software selection. Providing support for the selection and evaluation of software constitutes 91.5 percent of a typical IC's daily workload. In the last decade, ICs have proved successful in managing software resources for organizations. The initial success of ICs has increased user expectations and demand for the services offered, but because ICs are considered cost centers in most organizations, there is growing pressure for them to accomplish more with fewer resources. The research hypothesis is that the knowledge and methodologies of IC consultants concerning software selection, as well as relevant institutional policies, can be represented in a knowledge base. A knowledge-based system, ICE (Information Center Expert), to assist users with software selection has been developed and evaluated in the study reported here. The development of ICE used two main design criteria: maintainability and transportability. Maintainability was defined as the ability to support frequent updating of the software supported by an IC. This is important because new software tools are introduced in the market at a very rapid rate; to stay competitive, an IC must be able to adapt continually to this dynamic environment. Transportability was considered necessary to make ICE usable in many different ICs, each supporting a different set of software. The transportability feature allows different ICs to individualize the system to meet their own site-specific needs. Validation studies were conducted to test the appropriateness of the recommendations made by ICE, using "blind" validation procedures in which scenarios (in case form) were presented to consultants. The cases were selected to represent problems frequently taken to an IC. Two sets of solutions, those offered by consultants and those provided by ICE, were then presented to experts who were asked to judge the appropriateness of each solution to a case without knowing its source. To test the comparative advantages of using ICE or IC consultants to obtain assistance with software selection, a laboratory experiment was conducted. A hypothetical construct called "Consultation Effectiveness" was used, which included measures of "user satisfaction" with the process, as well as measures of the "task basis" and the "recommendation basis" for evaluating a consultation session.
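The maintainability and transportability criteria suggest keeping the software catalogue and the selection rules as data, separate from the inference code, so each IC can update or localize its knowledge base without reprogramming. The toy sketch below illustrates that separation only; the tool names and attributes are invented and are not taken from ICE.

```python
# Hypothetical sketch of a data-driven selection knowledge base:
# updating or transporting the catalogue requires no code changes.

CATALOGUE = [
    {"name": "StatPack", "task": "statistics", "skill": "novice"},
    {"name": "QueryPro", "task": "reporting",  "skill": "novice"},
    {"name": "ModelLab", "task": "statistics", "skill": "expert"},
]

def recommend(task: str, skill: str) -> list[str]:
    """Return the catalogue tools matching the user's task and skill."""
    return [t["name"] for t in CATALOGUE
            if t["task"] == task and t["skill"] == skill]

print(recommend("statistics", "novice"))  # ['StatPack']
```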
107

Interactive program verification using virtual programs

Topor, Rodney W. January 1975 (has links)
This thesis is concerned with ways of proving the correctness of computer programs. The first part of the thesis presents a new method for doing this. The method, called continuation induction, is based on the ideas of symbolic execution, the description of a given program by a virtual program, and the demonstration that these two programs are equivalent whenever the given program terminates. The main advantage of continuation induction over other methods is that it enables programs using a wide variety of programming constructs, such as recursion, iteration, non-determinism, procedures with side-effects and jumps out of blocks, to be handled in a natural and uniform way. In the second part of the thesis a program verifier which uses both this method and Floyd's inductive assertion method is described. The significance of this verifier is that it is designed to be extensible, and to this end the user can declare new functions and predicates to be used in giving a natural description of the program's intention. Rules describing these new functions can then be used when verifying the program. To actually prove the verification conditions, the system employs automatic simplification, a relatively clever matcher, a simple natural deduction system and, most importantly, the user's advice. A large number of commands are provided for the user in guiding the system to a proof of the program's correctness. The system has been used to verify various programs, including two sorting programs and a program to invert a permutation 'in place'; the proofs of the sorting programs included a proof that the final array was a permutation of the original one. Finally, some observations and suggestions are made concerning the continued development of such interactive verification systems.
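As a toy illustration of the symbolic execution that continuation induction builds on, the sketch below runs a straight-line program over symbolic values; the resulting expressions describe the final state and could then be compared against a virtual-program specification. The representation is an assumption made for illustration, not the thesis's verifier.

```python
# Toy symbolic execution of straight-line assignments (illustrative):
# the program is run over symbolic values, yielding expressions that
# describe its final state.

def symbolic_exec(program, state):
    """program: list of (var, expr) assignments, where expr is a
    function of the current symbolic state returning an expression."""
    for var, expr in program:
        state = {**state, var: expr(state)}  # sequential update
    return state

# Swap x and y via a temporary.
swap = [
    ("t", lambda s: s["x"]),
    ("x", lambda s: s["y"]),
    ("y", lambda s: s["t"]),
]
print(symbolic_exec(swap, {"x": "X", "y": "Y"}))
# {'x': 'Y', 'y': 'X', 't': 'X'} -- matches the virtual program x,y := y,x
```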
108

Verifying temporal properties of systems with applications to Petri nets

Bradfield, Julian Charles January 1991 (has links)
This thesis provides a powerful general-purpose proof technique for the verification of systems, whether finite or infinite. It extends the idea of finite local model-checking, which was introduced by Stirling and Walker: rather than traversing the entire state space of a model, as is done for model-checking in the sense of Emerson, Clarke et al. (checking whether a (finite) model satisfies a formula), local model-checking asks whether a particular state satisfies a formula, and only explores the nearby states far enough to answer that question. The technique used was a tableau method, constructing a tableau according to the formula and the local structure of the model. This tableau technique is here generalized to the infinite case by considering sets of states rather than single states; because the logic used, the propositional modal mu-calculus, separates simple modal and boolean connectives from powerful fixpoint operators (which make the logic more expressive than many other temporal logics), it is possible to give a relatively straightforward set of rules for constructing a tableau. Much of the subtlety is removed from the tableau itself and put into a relation on the state space defined by the tableau; the success of the tableau then depends on the well-foundedness of this relation. This development occupies the second and third chapters: the second considers the modal mu-calculus and explains its power, while the third develops the tableau technique itself. The generalized tableau technique is exhibited on Petri nets, and various standard notions from net theory are shown to play a part in the use of the technique on nets; in particular, the invariant calculus has a major role. The requirement for a finite presentation of tableaux for infinite systems raises the question of the expressive power of the mu-calculus. This is studied in some detail, and it is shown that on reasonably powerful models of computation, such as Petri nets, the mu-calculus can express properties that are not merely undecidable, but not even arithmetical. The concluding chapter discusses some of the many questions still to be answered, such as the incorporation of formal reasoning within the tableau system, and the power required of such reasoning.
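For a finite system, the fixpoint semantics that such tableaux check against can be illustrated directly; the sketch below computes the least fixpoint mu Z. p \/ <>Z ("p is reachable") by Knaster-Tarski iteration. This global, finite-state computation is only an illustration of the semantics: the thesis's tableau method is local and also handles infinite state spaces.

```python
# Least fixpoint of Z -> p_states | pre(Z) over a finite transition
# graph, i.e. the mu-calculus formula  mu Z. p \/ <>Z  ("p is
# reachable"), computed by Knaster-Tarski iteration (illustration
# only; assumes a finite state space).

def reachable(states, trans, p_states):
    """trans: dict mapping each state to its set of successors."""
    z = set()
    while True:
        # pre(z): states with at least one successor in z, i.e. <>Z
        pre = {s for s in states if trans.get(s, set()) & z}
        new_z = set(p_states) | pre
        if new_z == z:
            return z   # fixpoint reached
        z = new_z

states = {"s0", "s1", "s2", "s3"}
trans = {"s0": {"s1"}, "s1": {"s2"}, "s3": {"s3"}}
print(reachable(states, trans, {"s2"}))  # {'s0', 's1', 's2'}
```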
109

Requirement Verification in Modelica for a Small Scale Network Simulation

Li, James Jizhi January 2015 (has links)
The use of computer networks has increased enormously in recent years because of the benefits they provide: faster data distribution, long-distance communication, and industrial system control. For these reasons, industrial systems have started to use computer networks for system control and data transmission. Meanwhile, the limitations of network devices raise many challenges for network system configuration. To obtain a well-optimized network system, we need to study it by performing experiments. However, experiments on real systems can be expensive and dangerous, so we need a model that represents the behaviour of the system. This thesis uses the object-oriented acausal modelling language Modelica to model a local area network system; the development is performed in OpenModelica, an open-source Modelica-based modelling and simulation environment. The simulation results are analysed and verified using a separate requirement verification model.
110

Exploiting model structure in CEGAR verification method / Exploiter la structure des modèles pour la vérification par la méthode CEGAR

Chucri, Farès 27 November 2012 (has links)
This thesis studies abstraction-based verification methods for AltaRica models. To this end, a CEGAR algorithm is proposed that prunes away abstract states and therefore works on an underapproximation of the abstract state space; this underapproximation both accelerates the algorithm and reduces the computational resources required to verify a model. A CEGAR algorithm for a subset of the AltaRica language is also presented, together with a hierarchical abstraction scheme and an efficient method for analysing counterexamples produced from this abstraction. The abstraction scheme allows each component of the hierarchy to be abstracted independently despite the presence of priorities in the model. Finally, the implementation of the pruning-based CEGAR algorithm (PCegar) in the Mec 5 model checker is presented, together with benchmarks on academic models and an industrial model.
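The overall shape of such an algorithm can be sketched as the standard abstract-check-analyse-refine loop, with an extra pruning step standing in for the thesis's underapproximation idea. This skeleton is a generic illustration parameterized by callbacks, not the PCegar implementation.

```python
# Generic CEGAR loop skeleton (illustrative pseudo-structure, not the
# PCegar implementation). The `prune` callback stands in for the
# thesis's idea of discarding abstract states so that the check runs
# on an underapproximation of the abstract state space.

def cegar(model, prop, abstract, check, analyse, refine, prune):
    abstraction = abstract(model)
    while True:
        abstraction = prune(abstraction)          # underapproximate
        counterexample = check(abstraction, prop)
        if counterexample is None:
            return "property holds"
        if analyse(model, counterexample):        # feasible in model?
            return ("property fails", counterexample)
        # Spurious counterexample: refine the abstraction and iterate.
        abstraction = refine(abstraction, counterexample)
```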
