61

First-order lax logic : a framework for abstraction, constraints and refinement

Walton, Matthew January 1998 (has links)
No description available.
62

Exploiting human expert techniques in automated writer identification

Duncan-Drake, Natasha January 2001 (has links)
No description available.
63

Automated test case generation for reactive software systems based on environment models

Imanian, James A. 06 1900 (has links)
The goal of software testing is to expose as many faults as possible. The number of faults detected can often be increased by running large numbers of test cases, so the ability to automatically generate applicable test cases for a System Under Test (SUT) would be a valuable tool. In this thesis an attributed event grammar is designed and used to build a model that describes the environment in which a SUT must operate. This event grammar captures events, their precedence or inclusion relations to other events, and the attributes of the events. An event is defined as an observable action that has a distinct beginning and end. The high-level environment model is then used by a test generator to produce an event trace from which input for the SUT is extracted. Thousands of event traces can be generated. For reactive systems the event trace carries the appropriate time delays between inputs. The feasibility of this approach is demonstrated by implementing a prototype of an automated test generator based on environment models.
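As a loose illustration of the generation step described above (not code from the thesis): a tiny environment grammar whose single production is expanded at random into a timed event trace. The event names, attribute ranges and delay bounds are invented for this sketch.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef struct {
    const char *name;   /* observable action with a distinct begin/end  */
    int delay_ms;       /* attribute: delay before the next input event */
} Event;

/* One production of a tiny environment grammar:
 *   session ::= connect request+ disconnect                            */
static int gen_session(Event *trace, int max)
{
    int n = 0;
    trace[n++] = (Event){ "connect", rand() % 100 };
    int requests = 1 + rand() % 4;                     /* expand request+ */
    for (int i = 0; i < requests && n < max - 1; i++)
        trace[n++] = (Event){ "request", 50 + rand() % 200 };
    trace[n++] = (Event){ "disconnect", 0 };
    return n;
}

int main(void)
{
    srand((unsigned)time(NULL));
    Event trace[16];
    int n = gen_session(trace, 16);
    for (int i = 0; i < n; i++)          /* one timed SUT input per event */
        printf("%-10s delay=%dms\n", trace[i].name, trace[i].delay_ms);
    return 0;
}
```

Each generated trace can then be replayed against the SUT with the recorded delays between inputs.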
64

Analýza souborového systému pomocí Verifying C Compiler / Analysis of a File System Using the Verifying C Compiler

Škorvaga, David January 2015 (has links)
Title: Analysis of a File System Using the Verifying C Compiler Author: Bc. David Škorvaga Department: Department of Distributed and Dependable Systems Supervisor: RNDr. Jan Kofroň, Ph.D. Abstract: Formal verification is a way to improve the reliability of software systems. One approach to formal verification focuses on proving the correctness of annotated source code in an established programming language. The Verifying C Compiler (VCC) is a verifier for concurrent C that accepts annotated C code and automatically verifies its correctness with respect to the given annotation. There have been successful attempts to verify some critical systems, including an operating system kernel. Another critical part of an operating system is its file system. In the thesis, we choose the FatFs file system, a simple device-independent implementation of the FAT file system. We specify a part of it using the VCC annotation language and successfully verify its correctness. Keywords: Formal Verification, File System, VCC
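For readers unfamiliar with VCC, a minimal sketch of the kind of annotated C it consumes: function contracts are written as `_(requires ...)` and `_(ensures ...)` clauses that VCC tries to discharge automatically. The function and its bounds below are invented for this sketch and do not come from FatFs or the thesis.

```c
#include <vcc.h>

/* Illustrative contract only; clamp a cluster index into a valid range. */
unsigned clamp_cluster(unsigned c, unsigned max_cluster)
    _(requires max_cluster > 2)
    _(ensures \result >= 2 && \result <= max_cluster)
{
    if (c < 2)           return 2;
    if (c > max_cluster) return max_cluster;
    return c;
}
```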
65

Some recent philosophical doubts about ordinary statements

Rollins, Calvin Dwight January 1954 (has links)
No description available.
66

A software structuring tool for message-based systems

Rochat, Kim Lawson January 2011 (has links)
Photocopy of typescript. / Digitized by Kansas Correctional Industries
67

Towards effective and efficient temporal verification in grid workflow systems

Chen, Jinjun, n/a January 2007 (has links)
In grid architecture, a grid workflow system is a type of high-level grid middleware which aims to support large-scale sophisticated scientific or business processes in a variety of complex e-science or e-business applications such as climate modelling, disaster recovery, medical surgery, high energy physics, international stock market modelling and so on. Such sophisticated processes often contain hundreds of thousands of computation- or data-intensive activities and take a long time to complete. In reality, they are normally time constrained. Correspondingly, temporal constraints are enforced when they are modelled or redesigned as grid workflow specifications at build-time. The main types of temporal constraints are upper bound, lower bound and fixed-time. Temporal verification is then conducted so that any temporal violations can be identified and handled in time. Conventional temporal verification research and practice have presented some basic concepts and approaches, but they have not paid sufficient attention to overall temporal verification effectiveness and efficiency. In the context of the grid economy, any resources used to execute grid workflows must be paid for, so resources should mainly go to the execution of the grid workflow itself rather than to temporal verification; poor temporal verification effectiveness or efficiency diverts more resources to verification. Hence, temporal verification effectiveness and efficiency become a prominent issue and deserve an in-depth investigation.

This thesis systematically investigates the limitations of conventional temporal verification in terms of effectiveness and efficiency. A detailed analysis is conducted for each step of the temporal verification cycle. There are four steps in total: Step 1 - defining temporal consistency; Step 2 - assigning temporal constraints; Step 3 - selecting appropriate checkpoints; and Step 4 - verifying temporal constraints. Based on this investigation and analysis, we propose new concepts and develop a set of innovative methods and algorithms towards more effective and efficient temporal verification. Comparisons, quantitative evaluations and/or mathematical proofs are presented at each step of the cycle, and these demonstrate that our concepts, methods and algorithms can significantly improve overall temporal verification effectiveness and efficiency. Specifically, in Step 1 we analyse the limitations of the two temporal consistency states defined by conventional verification work and propose four new states towards better verification effectiveness. In Step 2 we analyse how many temporal constraints are actually necessary for verification effectiveness, and design a novel algorithm for assigning a series of fine-grained temporal constraints within a few user-set coarse-grained ones. In Step 3 we discuss the problem with existing representative checkpoint selection strategies: they often ignore some necessary checkpoints and/or select some unnecessary ones. To solve this problem, we develop an innovative strategy and corresponding algorithms which select only sufficient and necessary checkpoints. In Step 4 we investigate a phenomenon ignored by existing temporal verification work, namely temporal dependency: temporal constraints are often dependent on each other in terms of their verification. We analyse its impact on overall verification effectiveness and efficiency and, based on this, develop novel temporal verification algorithms which significantly improve both.

Finally, we present an extension of our research to the handling of temporal verification results, since these results are based on our four new temporal consistency states. The major contributions of this research are a set of new concepts, innovative methods and algorithms for temporal verification in grid workflow systems, with which overall temporal verification effectiveness and efficiency can be significantly improved. This would eventually improve the overall performance and usability of grid workflow systems, because temporal verification can be viewed as a service or function of such systems. Consequently, by deploying the new concepts, methods and algorithms, grid workflow systems would be better able to support large-scale sophisticated scientific and business processes in complex e-science and e-business applications in the context of the grid economy.
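As a rough sketch of the kind of check run at a checkpoint: compare the time consumed so far plus an estimate of the remaining execution time against an upper-bound temporal constraint and classify the outcome. The consistency states, durations and bound below are placeholders, not the four states defined in the thesis.

```c
#include <stdio.h>

/* Placeholder consistency states; the thesis defines its own four states. */
typedef enum { CONSISTENT, AT_RISK, VIOLATED } TemporalState;

/* Upper-bound check at a checkpoint: compare elapsed time plus the
 * estimated remaining duration against the constraint. */
static TemporalState check_upper_bound(double elapsed,
                                       double estimated_remaining,
                                       double upper_bound)
{
    double projected = elapsed + estimated_remaining;
    if (projected <= upper_bound) return CONSISTENT;
    if (elapsed   <= upper_bound) return AT_RISK;   /* may still be recoverable */
    return VIOLATED;                                /* bound already exceeded   */
}

int main(void)
{
    /* checkpoint at 40 time units into a workflow with a 100-unit upper bound */
    TemporalState s = check_upper_bound(40.0, 70.0, 100.0);
    printf("state=%d\n", s);   /* prints 1 (AT_RISK): 110 > 100 but 40 <= 100 */
    return 0;
}
```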
68

Compositional verification of component-based heterogeneous systems

Jin, Yan January 2004 (has links)
"January 2004" / Bibliography: leaves 183-198. / xv, 198 leaves : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / As no single specification or verification method is able to solve all classes of problems, especially with industrial-sized applications, a diversity of modelling languages and analysis techniques specialised and optimized for various domains is needed, along with the ability to use them in combination. The work presented in this thesis has concentrated on developing techniques to support the use of a combination of modelling languages, especially visual languages, for system specification. Also, in order to tackle the main obstacles of model checking and make it more accessible to and usable by practising engineers, this work has focused on providing lightweight but effective methods and tools to alleviate the state space explosion problem in model checking. / Thesis (Ph.D.)--University of Adelaide, School of Computer Science, 2004
69

Design error diagnosis in digital circuits: the case of simple errors

WAHBA, Ayman 07 May 1997 (has links) (PDF)
Automatic diagnosis of design errors is an important problem in the CAD domain. Although automated synthesis tools are used to generate "correct-by-construction" circuit structures, these structures are often modified manually to reflect small changes made to the specification, or to improve certain critical characteristics of the design. Verification tools can reveal the existence of errors, but they give no information about their location or how to correct them. These tools only generate a few counterexamples that expose the error, which designers then use to diagnose their design manually. Manual diagnosis is a very slow and very costly process; the diagnosis time can equal or even exceed the design time. In this thesis we present new algorithms for the automatic localisation and correction of simple design errors in logic circuits under the single-error assumption. The errors treated here are: the replacement of a component in combinational and sequential circuits, and a connection error in combinational circuits. The single-error model requires a frequent-verification strategy, in which the design is verified after each modification, so that the probability of introducing more than one error remains low. Our approach consists of automatically simulating and analysing the circuit under test vectors that we generate specifically to speed up the diagnosis. We have built two prototype tools based on these algorithms: CCDS, the diagnosis tool for combinational circuits, and SCDS, the diagnosis tool for sequential circuits. These tools are currently integrated into the PREVAIL proof environment.
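As a highly simplified illustration of simulation-based localisation under the single-error assumption (the two-gate circuit, gate library and specification below are invented for this sketch; the thesis targets larger circuits and generates test vectors selectively rather than exhaustively): try replacing each gate with each library candidate and keep the replacements that make the design match the specification on every test vector.

```c
#include <stdio.h>

typedef int (*Gate)(int, int);
static int and2(int a, int b) { return a & b; }
static int or2 (int a, int b) { return a | b; }
static int xor2(int a, int b) { return a ^ b; }

/* Specification: out = (a AND b) XOR c.
 * Faulty design: gate 0 was accidentally replaced by OR. */
static int spec(int a, int b, int c) { return (a & b) ^ c; }

static int run(const Gate g[2], int a, int b, int c)
{
    return g[1](g[0](a, b), c);
}

int main(void)
{
    Gate design[2]  = { or2, xor2 };          /* buggy implementation        */
    Gate library[3] = { and2, or2, xor2 };    /* candidate replacement gates */

    for (int gi = 0; gi < 2; gi++)            /* hypothesise an error site   */
        for (int li = 0; li < 3; li++) {      /* hypothesise a replacement   */
            Gate trial[2] = { design[0], design[1] };
            trial[gi] = library[li];
            int ok = 1;
            for (int v = 0; v < 8 && ok; v++) {   /* all 3-input test vectors */
                int a = v & 1, b = (v >> 1) & 1, c = (v >> 2) & 1;
                if (run(trial, a, b, c) != spec(a, b, c))
                    ok = 0;
            }
            if (ok)
                printf("candidate fix: gate %d -> library gate %d\n", gi, li);
        }
    return 0;
}
```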
70

An information-theoretic analysis of phonotactic language verification

Wong, Ka Keung. January 2007 (has links)
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2007. / Includes bibliographical references (leaves 84-88). Also available in electronic version.
