111

Support for C++ in GMC

Šebetovský, Jan January 2013
Software is used in ever more aspects of our lives, so its correctness is increasingly important and worth verifying. There are currently few tools for the verification of C++ programs, and most of them cannot verify all required properties. Because of this we decided to extend GMC, which was already able to verify C code, with support for the C++ language. The C++ language is very large, however, so the goal of this work is to implement only the basic language features: inheritance, constructors, destructors, virtual methods and exceptions. Support for all of these features has been implemented except for exceptions, which are supported only partially.
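Of the features listed, virtual methods are the ones whose verification most depends on modeling the language runtime: before exploring a callee, a checker must resolve the dynamic target of the call. The sketch below illustrates that resolution with a vtable-style lookup; the representation is a generic illustration, not GMC's actual encoding.

```python
# Hedged sketch of how a verifier might model C++ virtual dispatch.
# Class names and the vtable encoding are illustrative only.

class ClassInfo:
    def __init__(self, name, base=None, vtable=None):
        self.name = name
        self.base = base
        # each class inherits its base's vtable, then applies its overrides
        self.vtable = dict(base.vtable) if base else {}
        if vtable:
            self.vtable.update(vtable)

# A tiny hierarchy: Circle overrides draw() and inherits area().
shape  = ClassInfo("Shape",  vtable={"draw": "Shape::draw", "area": "Shape::area"})
circle = ClassInfo("Circle", base=shape, vtable={"draw": "Circle::draw"})

def dispatch(dynamic_class, method):
    # resolve the dynamic target of a virtual call, as a model checker
    # must do before exploring the callee's body
    return dynamic_class.vtable[method]

assert dispatch(circle, "draw") == "Circle::draw"  # overridden in Circle
assert dispatch(circle, "area") == "Shape::area"   # inherited from Shape
```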
112

A formal approach to contract verification for high-integrity applications

Zhang, Zhi January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / John M. Hatcliff / High-integrity applications are safety- and security-critical applications developed for a variety of critical tasks. The correctness of these applications must be thoroughly tested or formally verified to ensure their reliability and robustness. The major properties to be verified for the correctness of applications include: (1) functional properties, capturing the expected behaviors of the software; (2) dataflow properties, tracking data dependencies and preventing secret data from leaking to public channels; and (3) robustness properties, the ability of a program to deal with errors during execution. This dissertation presents and explores formal verification and proof techniques, which use rigorous mathematical methods, to verify critical applications in the above three respects. Our research is carried out in the context of SPARK, a programming language designed for the development of safety- and security-critical applications. First, we have formalized in the Coq proof assistant the dynamic semantics of a significant subset of the SPARK 2014 language, with run-time checks as an integral part of the language, since any formal method for program specification and verification depends on an unambiguous semantics. Second, we have formally defined and proved the correctness of run-time check generation and optimization with respect to the SPARK reference semantics, and have built certifying tools within the mechanized proof infrastructure to certify that the run-time checks inserted by the GNAT compiler frontend guarantee the absence of run-time errors. Third, we have proposed a language-based information security policy framework and an associated enforcement algorithm, which is proved sound with respect to the formalized program semantics. We show how the policy framework can be integrated into SPARK 2014 for more advanced information security analysis.
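As a rough illustration of what an inserted run-time check does (written here in Python rather than SPARK/Ada or Coq, with an assumed 32-bit signed range; this is a sketch of the idea, not the dissertation's formalization), a checked addition fails exactly when the mathematical result escapes the machine range:

```python
# Minimal sketch of a range check of the kind the GNAT frontend inserts.
# The 32-bit bounds are an assumption for illustration.
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def checked_add(x: int, y: int) -> int:
    result = x + y  # Python ints don't overflow; the check models Ada's
    if not (INT_MIN <= result <= INT_MAX):
        raise OverflowError("range check failed")  # Ada: Constraint_Error
    return result

# A verified check generator proves: if no exception is raised, the
# result lies within the machine-integer range, i.e. no run-time error.
print(checked_add(2**30, 2**30 - 1))   # 2147483647: within range
# checked_add(2**30, 2**30)            # would raise OverflowError
```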
113

Visualization for Verification Driven Learning in Database Studies

Kallem, Aditya 17 December 2010
This thesis aims at developing a data visualization tool to enhance database learning based on the Verification Driven Learning (VDL) model. The goal of the VDL model is to present abstract concepts in the context of real-world systems to students in the early stages of a computer science program. In this project, a personnel/training management system has been turned into a learning platform by adding a number of features for visualization and quizzing. We have implemented various tactics to visualize data manipulation and data retrieval operations in the database, as well as the message contents in data messaging channels. The results of our development have been used in eight learning cases illustrating the applications of our visualization tool. Each of these learning cases was made by systematically implanting bugs in a functioning component; the students are assigned to identify the bugs while actively learning the structure of the software system.
114

Reasoning about Stateful Network Behaviors

Fayaz, Seyed Kaveh 01 February 2017
Network operators must ensure their networks meet intended traversal policies (e.g., host A can talk to host B, or inbound traffic to host C goes through a firewall and then a NAT). Violations of the policies may result in revenue loss, reputation damage, and security breaches. Today checking whether the intended policies are enforced correctly is stymied by two fundamental sources of complexity: the diversity and stateful nature of the behaviors of real networks. First, we need to account for vast diversity in both the control plane (e.g., different routing protocols and their interactions) and the data plane (e.g., routers, firewalls, and proxies) of the network. Second, we need to reason about a very large space of stateful behaviors in both the control plane (e.g., the current state being characterized by the route advertisements the routers have seen so far) and the data plane (e.g., a firewall’s current state with respect to a TCP session). Prior work on checking network policies is limited to a particular state of the network. Any attempt to reason about the behavior of the network across its state space is hindered by two fundamental challenges: (i) capturing the diversity of the control and data planes, and (ii) exploring the state space of the control and data planes in a scalable manner. This thesis argues for the feasibility of checking the correctness of realistic network policies by addressing the above challenges via two key insights. First, to combat the challenge of diversity, we design unifying abstractions that glue together different routing protocols in the control plane and diverse network appliances (e.g., firewalls, proxies) in the data plane. Second, to explore the state space of the network in a scalable manner, we build tractable models of the control and data planes (e.g., by decomposing logically independent tasks) and design domain-specific optimizations (e.g., by narrowing down the scope of search given the intended policies). Taken together, these two ideas enable systematic reasoning about the correctness of stateful data and control planes. We show the utility and performance of these techniques across a range of realistic settings.
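As a concrete example of the stateful data-plane behavior described above, the sketch below models a firewall that admits inbound packets only when they belong to a TCP session initiated from the inside. The state machine and API are deliberately minimal inventions for illustration, not the thesis's models.

```python
# Minimal sketch (invented API, simplified TCP handshake) of a stateful
# firewall: inbound traffic is admitted only for sessions opened inside.
from enum import Enum, auto

class Session(Enum):
    SYN_SENT = auto()
    ESTABLISHED = auto()

class StatefulFirewall:
    def __init__(self):
        self.sessions = {}  # (inside_host, outside_host) -> Session

    def process(self, src, dst, flags, inbound):
        key = (dst, src) if inbound else (src, dst)
        state = self.sessions.get(key)
        if not inbound:
            # outbound traffic is always allowed and may open a session
            if "SYN" in flags:
                self.sessions[key] = Session.SYN_SENT
            return "ALLOW"
        # inbound traffic must match an in-progress or established session
        if state is Session.SYN_SENT and "SYN+ACK" in flags:
            self.sessions[key] = Session.ESTABLISHED
            return "ALLOW"
        return "ALLOW" if state is Session.ESTABLISHED else "DROP"

fw = StatefulFirewall()
assert fw.process("A", "B", {"SYN"}, inbound=False) == "ALLOW"
assert fw.process("B", "A", {"SYN+ACK"}, inbound=True) == "ALLOW"
assert fw.process("C", "A", {"SYN"}, inbound=True) == "DROP"  # unsolicited
```

Checking a traversal policy against such a model means reasoning over all reachable session states, which is exactly the state-space exploration problem the thesis addresses.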
115

Separation logic: expressiveness, complexity, temporal extension

Brochenin, Rémi 25 September 2013
This thesis studies logics that express properties of programs. These logics were originally intended for the formal verification of programs that manipulate pointers. No directly applicable verification method is proposed here; rather, we give new insight into separation logic, a logic for Hoare triples. Before this work, the complexity and decidability of the satisfiability problem for some essential fragments of this logic were unknown, and its combination with certain other verification methods was little studied. First, we isolate the operator of separation logic that makes it undecidable, and we characterize the expressive power of the logic by comparing it to second-order logics. Second, we extend decidable fragments of separation logic with a temporal logic and with the ability to describe data. This allows us to draw boundaries on the use of separation logic: in particular, on the construction of decidable logics that combine this formalism with a temporal logic or with the ability to describe data.
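For reference, these are the standard store/heap semantics of the logic's two characteristic connectives, the separating conjunction and the separating implication (magic wand); the wand's quantification over all extensions of the current heap is closely tied to the undecidability phenomena studied here. Here $h_1 \uplus h_2$ denotes the union of domain-disjoint heaps and $h' \mathbin{\#} h$ says the domains are disjoint.

```latex
% Separating conjunction: the heap splits into two disjoint parts.
\[
s,h \models \varphi_1 * \varphi_2 \iff
  \exists h_1, h_2.\; h = h_1 \uplus h_2 \,\wedge\,
  s,h_1 \models \varphi_1 \,\wedge\, s,h_2 \models \varphi_2
\]
% Magic wand: quantifies over all disjoint heap extensions.
\[
s,h \models \varphi_1 \mathrel{-\!\!*} \varphi_2 \iff
  \forall h'.\; \bigl(h' \mathbin{\#} h \wedge s,h' \models \varphi_1\bigr)
  \Rightarrow s,\, h \uplus h' \models \varphi_2
\]
```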
116

Foundations for analyzing security APIs in the symbolic and computational model

Künnemann, Robert 07 January 2014
Security-critical applications often store keys on dedicated HSMs or key-management servers so as to separate highly sensitive cryptographic operations from the more vulnerable parts of the network. Access to such devices is given to protocol parties by means of Security APIs, e.g., the RSA PKCS#11 standard, IBM's CCA and the TPM API, all of which protect keys by allowing them to be addressed only indirectly. This thesis has two parts.
The first part deals with formal methods that allow for the identification of secure configurations in which Security APIs improve the security of existing protocols, e.g., in scenarios where parties can be corrupted. A promising paradigm is to regard the Security API as a participant in a protocol and then use traditional protocol analysis techniques. In contrast to network protocols, however, Security APIs often rely on the state of an internal database; this is why current protocol analysis tools do not work well when an unbounded number of keys must be considered. We make a case for the use of multiset rewriting (MSR) as the verification back-end and propose a new process calculus, a variant of the applied pi calculus with constructs for manipulating a global state. We show that this language can be translated to MSR rules while preserving all security properties expressible in a dedicated first-order logic. The translation has been implemented in a prototype tool that uses the Tamarin prover as a back-end, and we apply it to several case studies, among them a simplified fragment of PKCS#11, the YubiKey security token, and an optimistic contract signing protocol. The second part of this thesis aims at identifying security properties that (a) can be established independently of the protocol, (b) allow flaws to be caught at the cryptographic level, and (c) facilitate the analysis of protocols using the Security API. We adapt the more general approach to API security of Kremer et al. to a framework that allows for composition in the form of a universally composable key-management functionality. The novelty, compared to other definitions, is that this functionality is parametric in the operations the Security API provides, which is only possible thanks to universal composability. A Security API is secure if it correctly implements both key management (with respect to our functionality) and all operations that depend on keys (with respect to the functionalities defining those operations). Finally, we present an implementation that is defined with respect to arbitrary functionalities for the operations not concerned with key management, and hence represents a general design pattern for Security APIs.
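To give a flavor of the multiset rewriting style (the fact names below are invented for illustration, not taken from the thesis), a rule for a PKCS#11-like key-generation command consumes freshness facts for the new key and its handle, and produces facts recording the device's internal database; the label above the arrow records the action, against which first-order security properties are stated:

```latex
% Illustrative MSR rule: consume the left multiset, produce the right one.
% Fr(.) marks fresh values; Store/Attr model the device's mutable database.
\[
\bigl[\, \mathsf{Fr}(k),\ \mathsf{Fr}(h) \,\bigr]
\;\xrightarrow{\ \mathsf{NewKey}(h,\,k)\ }\;
\bigl[\, \mathsf{Store}(h, k),\ \mathsf{Attr}(h, \mathit{init}) \,\bigr]
\]
```

Because the database facts persist and are rewritten by later rules, the rule system naturally captures the global mutable state that makes Security APIs hard for classical protocol analyzers.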
117

Call graph reduction by static estimated function execution probability.

January 2009.
Lo, Kwun Kit. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 153-161). Abstracts in English and Chinese. Table of contents:

1 Introduction
  1.1 Existing Approaches in Program Understanding
    1.1.1 Localized Program Understanding
    1.1.2 Whole System Analysis
  1.2 Example of Function Execution Probability Reduction of the Call Graph
  1.3 Organization of the Dissertation
2 Preliminary Study
  2.1 Participants
  2.2 Study Design
  2.3 ispell
    2.3.1 Subject I1 (ispell)
    2.3.2 Subject PG1 (ispell)
    2.3.3 Subject PG2 (ispell)
    2.3.4 Subject I2 (ispell)
    2.3.5 ispell Analysis
  2.4 FreeBSD Kernel Malloc
    2.4.1 Subject I1 (FreeBSD)
    2.4.2 Subject PG1 (FreeBSD)
    2.4.3 Subject PG2 (FreeBSD)
    2.4.4 Subject I2 (FreeBSD)
    2.4.5 FreeBSD Analysis
  2.5 Threats to Validity
  2.6 Summary
3 Approach
  3.1 Building Branch-Preserving Call Graphs
    3.1.1 Branch Reserving Call Graphs
    3.1.2 Branch-Preserving Call Graphs
    3.1.3 Example of BPCG Building Process
  3.2 System Function Removal
  3.3 Function Rating Calculation
    3.3.1 Rating Algorithm Complexity
  3.4 Building the Colored Call Graph
  3.5 Call Graph Reduction
    3.5.1 Remove-high-fan-in-functions Approach (FEPR-fanin)
    3.5.2 Remove-leaf-nodes Approach (FEPR-leaf)
4 Validation
  4.1 Measures
    4.1.1 Inclusion Accuracy (IA)
    4.1.2 Reduction Efficiency (RE)
    4.1.3 Stability (S)
  4.2 Analysis of FEPR Techniques
    4.2.1 Settings
    4.2.2 Inclusion Accuracy (IA)
    4.2.3 Reduction Efficiency (RE)
    4.2.4 Stability (S)
  4.3 Ying and Tarr's Approach
    4.3.1 Settings
    4.3.2 Inclusion Accuracy (IA)
    4.3.3 Reduction Efficiency (RE)
    4.3.4 Stability (S)
  4.4 Centrality Measure Approach
    4.4.1 Inclusion Accuracy (IA)
  4.5 Top-down Search Approach
    4.5.1 Reduction Efficiency (RE)
  4.6 Synthesized Analysis
    4.6.1 Inclusion Accuracy (IA)
    4.6.2 Reduction Efficiency (RE)
    4.6.3 Stability (S)
    4.6.4 Threats to Validity
  4.7 Summary
5 Discussion
  5.1 Flexibility of Analysis
  5.2 Existence of Function Pointers, GOTOs and Early Exits
  5.3 Precision of Branch-Preserving Call Graphs
  5.4 Function Ranking and Recommender System
  5.5 Extending the Approach Beyond C
6 Related Work
  6.1 Existing Approaches in Program Understanding
    6.1.1 Localized Program Understanding
    6.1.2 Whole Program Analysis
  6.2 Branch Prediction and Static Profiling
7 Conclusions
A Call Graphs in Case Studies
B Source Files for BPCG Builder
Bibliography
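To give a flavor of the rating idea behind this reduction (the 0.5 branch discount and the toy graph are illustrative assumptions, not the thesis's exact formula), a function's estimated execution probability can be propagated through the call graph, discounting call sites that sit under conditional branches:

```python
# Toy sketch of static execution-probability rating on a call graph.
from collections import defaultdict

# caller -> [(callee, number of enclosing conditional branches)]
CALL_SITES = {
    "main":  [("parse", 0), ("check", 1), ("report", 1)],
    "parse": [("lex", 0)],
    "check": [("lex", 1)],
}

def rate_functions(root="main"):
    rating = defaultdict(float)
    rating[root] = 1.0
    for caller in ("main", "parse", "check"):  # topological order, no cycles
        for callee, branches in CALL_SITES.get(caller, ()):
            # each guarding branch halves the estimated reach probability
            rating[callee] += rating[caller] * (0.5 ** branches)
    return dict(rating)

print(rate_functions())
# {'main': 1.0, 'parse': 1.0, 'check': 0.5, 'report': 0.5, 'lex': 1.25}
# Ratings can exceed 1.0 when a function has many call sites; a reduction
# step then keeps only the highest-rated functions in the displayed graph.
```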
118

Formal symbolic verification using heuristic search and abstraction techniques

Qian, Kairong, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006
Computing devices are pervading our everyday life and imposing challenges for designers that have the responsibility of producing reliable hardware and software systems. As systems grow in size and complexity, it becomes increasingly difficult to verify whether a design works as intended. Conventional verification methods, such as simulation and testing, exercise only parts of the system and, from these parts, draw conclusions about the correctness of the total design. For complex designs, the parts of the system that can be verified are relatively small. Formal verification aims to overcome this problem. Instead of exercising the system, formal verification builds mathematical models of designs and proves whether properties hold in these models. In doing so, it at least aims to cover the complete design. Model checking is a formal verification method that automatically verifies a model of a design, or generates diagnostic information if the model cannot be verified. It is because of this usability and level of automation that model checking has gained a high degree of success in verifying circuit designs. The major disadvantage of model checking is its poor scalability. This is due to its algorithmic nature: namely, every state of the model needs to be enumerated. In practice, properties of interest may not need the exhaustive enumeration of the model state space. Many properties can be verified (or falsified) by examining a small number of states. In such cases, exhaustive algorithms can be replaced with search algorithms that are directed by heuristics. Methods based on heuristics generally scale well. This thesis investigates non-exhaustive model checking algorithms and focuses on error detection in system verification. The approach is based on a novel integration of symbolic model checking, heuristic search and abstraction techniques to produce a framework that we call abstraction-directed model checking. There are three main components in this framework. First, binary decision diagrams (BDDs) and heuristic search are combined to develop a symbolic heuristic search algorithm. This algorithm is used to detect errors. Second, abstraction techniques are applied in an iterative way. In the initial phase, the model is abstracted, and this model is verified using exhaustive algorithms. If a definitive verification result cannot be obtained, the same abstraction is re-used to generate a search heuristic. The heuristic in turn is used to direct a search algorithm that searches for error states in the concrete model. Third, a model transformation mechanism converts an arbitrary branching-time property to a reachability property. Essentially, this component allows the framework to be applied to a more general class of temporal property. By amalgamating these three components, the framework offers a new verification methodology that speeds up error detection in formal verification. The current implementation of this framework indicates that it can outperform existing standard techniques both in run-time and memory consumption, and scales much better than conventional model checking.
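A minimal sketch of the search component follows, with an explicit successor function standing in for the thesis's BDD-based symbolic representation: the distance to an error state in the abstract model serves as the heuristic that orders a best-first search of the concrete model. The toy state space and heuristic are assumptions for illustration.

```python
# Best-first search for error states, ordered by an abstraction-derived
# heuristic. Explicit-state for readability; the thesis works on BDDs.
import heapq

def heuristic_search(init, successors, is_error, abstract_distance):
    frontier = [(abstract_distance(init), init)]
    seen = {init}
    while frontier:
        _, state = heapq.heappop(frontier)
        if is_error(state):
            return state  # counterexample state found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (abstract_distance(nxt), nxt))
    return None  # no reachable error state

# Toy model: states are integers, the error state is 42, and the
# "abstraction" coarsens states into buckets of width 8.
found = heuristic_search(
    0,
    successors=lambda s: [s + 1, s + 5],
    is_error=lambda s: s == 42,
    abstract_distance=lambda s: abs(42 - s) // 8,
)
print(found)  # 42
```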
119

A flexible framework for leveraging verification tools to enhance the verification technologies available for policy enforcement

Larkin, James Unknown Date
Program verification is vital as more and more users create, download and execute foreign computer programs. Software verification tools provide a means of determining whether a program adheres to a user's security requirements, or security policy. Many verification tools exist for checking different types of policies on different types of programs; currently, however, no single verification tool can determine whether all types of programs satisfy all types of policies. This thesis describes a framework for combining multiple verification tools to determine whether a program satisfies a policy. A user's security requirements are represented at multiple levels of abstraction as Intermediate Execution Environments. Using a sequence of configurations, the security requirements are transformed from the abstract level down to the tool level, possibly for several verification tools, as sketched below. The validity of the framework is shown through a number of case studies.
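A minimal sketch of that lowering pipeline, with invented names and formats: each configuration step rewrites the policy one abstraction level down until it is in a form some backend verification tool accepts.

```python
# Hedged sketch of policy lowering through Intermediate Execution
# Environments (IEEs). All names and formats here are hypothetical.
from typing import Callable, List

Transform = Callable[[str], str]

def to_automaton(policy: str) -> str:
    # abstract requirement -> security-automaton description (illustrative)
    return f"automaton[{policy}]"

def to_backend_format(policy: str) -> str:
    # automaton -> input format of a hypothetical backend tool
    return f"CHECK {policy}"

def lower(policy: str, configuration: List[Transform]) -> str:
    for step in configuration:
        policy = step(policy)  # one abstraction level down per step
    return policy

print(lower("no file write after network read",
            [to_automaton, to_backend_format]))
# -> CHECK automaton[no file write after network read]
```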
120

Advances in space and time efficient model checking of finite state systems

Parashkevov, Atanas. January 2002
Bibliography: leaves 211-220. This thesis examines automated formal verification techniques and their associated space and time implementation complexity when applied to finite state concurrent systems. The focus is on concurrent systems expressed in the Communicating Sequential Processes (CSP) framework. An approach to the compilation of CSP system descriptions into boolean formulae in the form of Ordered Binary Decision Diagrams (OBDDs) is presented, which is then utilised by a basic algorithm that checks a refinement or equivalence relation between a pair of processes in any of the three CSP semantic models. The performance bottlenecks of the basic refinement checking algorithm are identified and addressed with the introduction of a number of novel techniques and algorithms. The algorithms described in this thesis are implemented in the Adelaide Refinement Checking Tool.
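For intuition, refinement in the simplest of the CSP semantic models (traces) requires every trace of the implementation to be a trace of the specification. The bounded-depth sketch below enumerates traces explicitly on a toy vending machine; the thesis's contribution is to perform the corresponding check on OBDD-encoded state spaces instead.

```python
# Bounded traces-refinement check, explicit-state for illustration only.

def traces(transitions, state, depth):
    """All event sequences of length <= depth from `state`."""
    if depth == 0:
        return {()}
    out = {()}
    for event, nxt in transitions.get(state, []):
        out |= {(event,) + t for t in traces(transitions, nxt, depth - 1)}
    return out

# SPEC offers tea or coffee after a coin; IMPL resolves the choice to tea.
SPEC = {0: [("coin", 1)], 1: [("tea", 0), ("coffee", 0)]}
IMPL = {0: [("coin", 1)], 1: [("tea", 0)]}

def trace_refines(spec, impl, depth=6):
    # IMPL refines SPEC iff traces(IMPL) is a subset of traces(SPEC)
    return traces(impl, 0, depth) <= traces(spec, 0, depth)

print(trace_refines(SPEC, IMPL))  # True: every IMPL trace is a SPEC trace
print(trace_refines(IMPL, SPEC))  # False: ('coin', 'coffee') is missing
```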
