51

Guided random-walk based model checking

Bui, Hoai Thang, Computer Science & Engineering, Faculty of Engineering, UNSW January 2009 (has links)
The ever-increasing use of computer systems in society brings emergent challenges to companies and system designers. The reliability of software and hardware can be financially critical, and lives can depend on it. The growth in size and complexity of software, and increasing concurrency, compounds the problem. The potential for errors is greater than ever before, and the stakes are higher than ever before. Formal methods, particularly model checking, attempt to prove mathematically that a model of the behaviour of a product is correct with respect to certain properties. Certain errors can therefore be proven never to occur in the model. This approach has tremendous potential in system development to provide guarantees of correctness. Unfortunately, in practice, model checking cannot handle the enormous sizes of the models of real-world systems. The reason is that the approach requires an exhaustive search of the model to be conducted. While there are exceptions, in general model checkers are said not to scale well. In this thesis, we deal with this scaling issue by using a guiding technique that avoids searching areas of the model that are unlikely to contain errors. This technique is based on a process of model abstraction in which a new, much smaller model is generated that retains certain important model information but discards the rest. This new model is called a heuristic. While model checking using a heuristic as a guide can be extremely effective, in the worst case (when the guide is of no help) it performs the same as exhaustive search, and hence it also does not scale well in all cases. A second technique is employed to deal with the scaling issue. This technique is based on the concept of random walks. A random walk is simply a 'walk' through the model of the system, carried out by selecting states in the model randomly. Such a walk may encounter an error, or it may not. It is a non-exhaustive technique in the sense that only a manageable number of walks are carried out before the search is terminated. This technique cannot replace conventional model checking, as it can never guarantee the correctness of a model. It can, however, be a very useful debugging tool because it scales well. From this point of view, it relieves the system designer from the difficult task of dealing with the problem of size in model checking. Using random walks, the effort goes instead into looking for errors. The effectiveness of model checking can be greatly enhanced if the above two techniques are combined: a random walk is used to search for errors, but the walk is guided by a heuristic. This, in a nutshell, is the focus of this work. We should emphasise that the random walk approach uses the same formal model as model checking. Furthermore, the same heuristic technique that guides a model checker is used to guide the random walk. Together, guidance and random walks are shown in this work to result in vastly improved performance over conventional model checking. Verification has been sacrificed, of course, but the new technique is able to find errors far more quickly and deal with much larger models.
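To make the combination concrete, here is a minimal sketch of a heuristic-guided random walk over an explicit state graph. It illustrates the general idea only, not the algorithm or tool developed in the thesis; the function names, the bias parameter, and the heuristic interface are all hypothetical.

```python
import random

def guided_random_walk(initial, successors, is_error, heuristic,
                       walks=100, max_steps=10_000, bias=0.8):
    """Run repeated random walks over a state graph, biased toward successors
    the heuristic scores as closer to an error state (lower score = better).
    Returns an error path if one is found, otherwise None (which is NOT a
    proof of correctness)."""
    for _ in range(walks):
        state, path = initial, [initial]
        for _ in range(max_steps):
            if is_error(state):
                return path                    # counterexample: the walk so far
            succs = successors(state)
            if not succs:
                break                          # deadlock; start a fresh walk
            if random.random() < bias:
                # guided step: follow the successor the heuristic prefers
                state = min(succs, key=heuristic)
            else:
                # occasional purely random step avoids getting trapped
                state = random.choice(succs)
            path.append(state)
    return None
```

Because only a bounded number of bounded-length walks are performed, a None result says nothing about correctness; the pay-off is that errors, when present, are often found long before an exhaustive search would reach them.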
53

A symbolic approach to the state graph based analysis of high-level Markov reward models

Lampka, Kai. January 2007 (has links) (PDF)
Erlangen, Nürnberg, Univ., Diss., 2007.
54

Developing concepts and methods for module and integration tests for models of reactive systems

Kyeyune, Yusufu. Unknown Date (has links)
University, Diss., 2000--Dortmund. / File format: PDF.
55

Preprocessing for property checking of sequential circuits on the register transfer level

Brinkmann, Raik. Unknown Date (has links) (PDF)
Techn. University, Diss., 2003--Kaiserslautern.
56

Model-based development of security-critical systems

Wimmel, Guido Oliver. Unknown Date (has links) (PDF)
Techn. University, Diss., 2005--München.
57

Syntactic complexity in the modal μ calculus

Lehtinen, Maria Karoliina January 2017 (has links)
This thesis studies how to eliminate syntactic complexity in Lμ, the modal μ calculus. Lμ is a verification logic in which a least fixpoint operator μ, and its dual ν, add recursion to a simple modal logic. The number of alternations between μ and ν is a measure of complexity called the formula's index: the lower the index, the easier a formula is to model-check. The central question of this thesis is a long-standing one, the Lμ index problem: given a formula, what is the least index of any equivalent formula, that is to say, its semantic index? I take a syntactic approach, focused on simplifying formulas. The core decidability results are (i) alternative, syntax-focused decidability proofs for ML and Π^μ_1, the low-complexity classes of Lμ; and (ii) a proof that Σ^μ_2, the fragment of Lμ with one alternation, is decidable for formulas in the dual class Π^μ_2. Beyond its algorithmic contributions, this thesis aims to deepen our understanding of the index problem and the tools at our disposal. I study disjunctive form and related syntactic restrictions, and how they affect the index problem. The main technical results are that the transformation into disjunctive form preserves Π^μ_2-indices but not Σ^μ_2-indices, and that some properties of binary trees are expressible with a lower index using disjunctive formulas than non-deterministic automata. The latter is part of a thorough account of how the Lμ index problem and the Rabin–Mostowski index problem for parity automata are related. In the final part of the thesis, I revisit the relationship between the index problem and parity games. The syntactic index of a formula is an upper bound on the descriptive complexity of its model-checking parity games. I show that the semantic index of a formula Ψ is bounded above by the descriptive complexity of the model-checking games for Ψ. I then study whether this bound is strict: if a formula Ψ is equivalent to a formula in an alternation class C, does a formula of C suffice to describe the winning regions of the model-checking games of Ψ? I prove that this is the case for ML, Π^μ_1, Σ^μ_2, and the disjunctive fragment of any alternation class. I discuss the practical implications of these results and propose a uniform approach to the index problem, which subsumes the previously described decision procedures for low alternation classes. In brief, this thesis can be read as a guide on how to approach a seemingly complex Lμ formula. Along the way it studies what makes this such a difficult problem and proposes novel approaches to both simplifying individual formulas and deciding further fragments of the alternation hierarchy.
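For readers unfamiliar with the index, a standard textbook example (not taken from the thesis) is the property "p holds infinitely often along some path"; nesting a least fixpoint inside a greatest fixpoint gives it one alternation, placing it at the second level of the hierarchy:

```latex
% "p holds infinitely often on some path": a second-level (\Pi^\mu_2) formula,
% since a least fixpoint \mu Y is nested inside a greatest fixpoint \nu X.
\nu X.\, \mu Y.\, \bigl( (p \wedge \Diamond X) \vee \Diamond Y \bigr)
```

Deciding whether a given formula of this kind is equivalent to one with fewer alternations is exactly the index problem discussed above.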
58

Rabbit: A novel approach to find data-races during state-space exploration

Oliveira, João Paulo dos Santos 30 August 2012 (has links)
Data-races are an important kind of error in concurrent shared-memory programs. Software model checking is a popular approach to find them. This research proposes a novel approach to find races that complements model-checking by efficiently reporting precise warnings during state-space exploration (SSE): Rabbit. It uses information obtained across different paths explored during SSE to predict likely racy memory accesses. We evaluated Rabbit on 33 different scenarios of race, involving a total of 21 distinct application subjects of various sources and sizes. Results indicate that Rabbit reports race warnings very soon compared to the time the model checker detects the race (for 84.8% of the cases it reports a true warning of race in <5s) and that the warnings it reports include very few false alarms. We also observed that the model checker finds the actual race quickly when it uses a guided-search that builds on Rabbit’s output (for 74.2% of the cases it reports the race in <20s).
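As a schematic illustration of warning about likely races from accesses observed across explored paths (this is a generic lockset-style sketch, not Rabbit's actual algorithm, and every name below is hypothetical), a tracker could flag two accesses to the same location from different threads, at least one of them a write, that share no common lock:

```python
from collections import defaultdict

class RaceWarningTracker:
    """Collect shared-memory accesses observed on any explored path and
    report pairs that look like data races: same location, different threads,
    at least one write, no common lock held.  This is a heuristic, so the
    warnings may include false alarms."""

    def __init__(self):
        self.accesses = defaultdict(list)   # location -> [(thread, is_write, locks)]

    def record(self, location, thread, is_write, locks_held):
        warnings = []
        for other_thread, other_write, other_locks in self.accesses[location]:
            if (other_thread != thread
                    and (is_write or other_write)
                    and not (set(locks_held) & other_locks)):
                warnings.append((location, thread, other_thread))
        self.accesses[location].append((thread, is_write, frozenset(locks_held)))
        return warnings   # emitted immediately, long before exploration finishes
```

A model checker would call record whenever an explored transition touches shared state, so warnings can be reported while state-space exploration is still running.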
59

Formal Modeling and Analysis Techniques for High Level Petri Nets

Liu, Su 20 June 2014 (has links)
Petri Nets are a formal, graphical and executable modeling technique for the specification and analysis of concurrent and distributed systems, and have been widely applied in computer science and many other engineering disciplines. Low level Petri nets are simple and useful for modeling control flows but not powerful enough to define data and system functionality. High level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low level Petri nets, HLPNs result in compact system models that are easier to understand. Therefore, HLPNs are more useful in modeling complex systems. There are two issues in using HLPNs - modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied, which are integrated into a framework that is supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Nets (PrT Nets) is used to model a system's behavior, and a first-order linear time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is to develop a software tool to support the formal modeling capabilities in this framework. For analysis, this framework combines three complementary techniques: simulation, explicit state model checking and bounded model checking (BMC). Simulation is a straightforward and speedy method, but only covers some execution paths in a HLPN model. Explicit state model checking covers all the execution paths but suffers from the state explosion problem. BMC is a tradeoff as it provides a certain level of coverage while being more efficient than explicit state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities in this framework. The SAMTools developed for this framework in this dissertation integrates three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
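To illustrate what "structured data as tokens and algebraic expressions as transition formulas" amounts to (a minimal sketch only, unrelated to PIPE+ or SAMTools; the data layout and names are assumptions), a high-level Petri net transition can be fired roughly as follows:

```python
def fire(marking, transition):
    """Fire one HLPN transition if it is enabled, returning a new marking.

    marking:    dict mapping each place to a list of tokens (structured values)
    transition: dict with 'inputs'  (place -> variable name),
                          'guard'   (callable over the variable binding),
                          'outputs' (place -> callable producing an output token)
    Returns None when the transition is not enabled.  For simplicity the
    sketch binds only the first available token of each input place.
    """
    binding, consumed = {}, {}
    for place, var in transition["inputs"].items():
        tokens = marking.get(place, [])
        if not tokens:
            return None                      # missing input token: not enabled
        binding[var] = consumed[place] = tokens[0]
    if not transition["guard"](binding):
        return None                          # guard (transition formula) is false
    new_marking = {p: list(ts) for p, ts in marking.items()}
    for place, token in consumed.items():
        new_marking[place].remove(token)     # consume the bound input tokens
    for place, expr in transition["outputs"].items():
        new_marking.setdefault(place, []).append(expr(binding))
    return new_marking

# Example: move token x from place "p" to place "q", incremented, only if x is even.
m = {"p": [4], "q": []}
t = {"inputs": {"p": "x"},
     "guard": lambda b: b["x"] % 2 == 0,
     "outputs": {"q": lambda b: b["x"] + 1}}
print(fire(m, t))   # {'p': [], 'q': [5]}
```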
60

Propriétés de jeux multi-agents / Multi-agent games properties

Lopes, Arnaud Da Costa 20 September 2011 (has links)
We extend the alternating-time temporal logics ATL and ATL* with strategy contexts and memory constraints: the first extension makes agents commit to their strategies during the evaluation of formulas, contrary to plain ATL, where strategy quantifiers reset previously selected strategies. The second extension allows strategy quantifiers to restrict to memoryless or bounded-memory strategies. We consider expressiveness issues. We show that our logics can express important properties such as equilibria, and we formally compare them with other similar formalisms (ATL, ATL*, Game Logic, Strategy Logic, ...). We address the model-checking problem for our logics: in particular, we provide a PSPACE algorithm for the sublogics involving only memoryless strategies and an EXPSPACE algorithm for the bounded-memory case. In the general case, despite the high expressiveness of these logics, we prove that their model-checking problems remain decidable by designing a tree-automata-based algorithm for model-checking ATLsc on the full class of n-player concurrent game structures.
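As an illustration of what strategy contexts add (a generic example in the spirit of ATLsc, not taken from the thesis; ⟨·A·⟩ is written here for the context-preserving strategy quantifier), nesting quantifiers lets an inner quantifier be evaluated while the outer coalition stays committed to its strategy:

```latex
% Coalition A commits to a strategy; with that strategy fixed, at every point
% coalition B (quantified under A's context) can still enforce reaching p.
\langle\!\cdot\, A \,\cdot\!\rangle\, \mathbf{G}\, \langle\!\cdot\, B \,\cdot\!\rangle\, \mathbf{F}\, p
% Under plain ATL semantics the inner quantifier would reset A's strategy;
% with strategy contexts it is evaluated relative to it.
```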
