1

Formal Modeling and Analysis Techniques for High Level Petri Nets

Liu, Su 20 June 2014 (has links)
Petri nets are a formal, graphical, and executable modeling technique for the specification and analysis of concurrent and distributed systems, and they have been widely applied in computer science and many other engineering disciplines. Low level Petri nets are simple and useful for modeling control flows but are not powerful enough to define data and system functionality. High level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low level Petri nets, HLPNs result in compact system models that are easier to understand, and they are therefore more useful for modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, the framework integrates two formal languages: a type of HLPN called Predicate Transition Nets (PrT nets) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool that supports the formal modeling capabilities of this framework. For analysis, the framework combines three complementary techniques: simulation, explicit-state model checking, and bounded model checking (BMC). Simulation is straightforward and fast but covers only some execution paths in an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit-state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool that supports the formal analysis capabilities of this framework. The SAMTools developed for this framework integrate three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
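To give a concrete flavor of the simulation technique mentioned above, the sketch below plays the token game on a low level place/transition net in Python. It is purely illustrative and is not taken from PIPE+ or the dissertation; PrT nets additionally carry structured tokens and algebraic transition formulas, which this toy omits.

```python
# A minimal sketch of token-game simulation for a simple place/transition net.
# (Illustrative only; the dissertation's PrT nets are richer than this toy.)
import random

class PetriNet:
    def __init__(self, marking, transitions):
        # marking: dict place -> token count
        # transitions: dict name -> (consumed: dict, produced: dict)
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self):
        return [t for t, (pre, _) in self.transitions.items()
                if all(self.marking.get(p, 0) >= n for p, n in pre.items())]

    def fire(self, t):
        pre, post = self.transitions[t]
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] = self.marking.get(p, 0) + n

    def simulate(self, steps):
        # One random execution path: fast, but covers only the paths it happens to take.
        for _ in range(steps):
            choices = self.enabled()
            if not choices:
                break
            self.fire(random.choice(choices))
        return self.marking

# Producer/consumer net: 'produce' adds a token to 'buffer', 'consume' removes it.
net = PetriNet({"idle": 1, "buffer": 0},
               {"produce": ({"idle": 1}, {"idle": 1, "buffer": 1}),
                "consume": ({"buffer": 1}, {})})
print(net.simulate(10))
```

Random firing like this is fast but, as the abstract notes, covers only the execution paths it happens to visit; exhaustively or boundedly exploring the enabled-transition choices is what the model checking components add.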
2

Ontological approach for database integration

Alalwan, Nasser Alwan January 2011 (has links)
Database integration is one of the research areas that have gained a lot of attention from researchers. Its goal is to represent the data from different database sources in one unified form. Achieving database integration requires overcoming two obstacles: the distribution of data and its heterogeneity. The Web addresses the distribution problem; for heterogeneity, several approaches can be used, such as data warehouses and federated databases. The problem with these two approaches is the lack of semantics. Therefore, our approach exploits the Semantic Web methodology. The hybrid ontology method can be applied to solve the database integration problem. In this method two elements are available, the source (database) and the domain ontology, but the local ontology is missing; to ensure the success of the method, the local ontologies must be produced. Our approach obtains the semantics from the logical model of the database to generate the local ontology, and validation and enrichment are then obtained from the semantics of the conceptual model of the database. The approach therefore comprises a generation phase and a validation-enrichment phase. In the generation phase, we utilise reverse engineering techniques to capture the semantics hidden in the SQL definitions. The approach then reproduces the logical model of the database, and our transformation system is applied to generate an ontology. The transformation system generates classes, relationships and axioms. Firstly, class creation involves many rules working together to produce classes; our rules solve problems such as fragmentation and hierarchy, eliminate the superfluous classes arising from multi-valued attribute relations, and handle often-neglected cases such as relationships with additional attributes. The final class creation rule handles generic relation cases. The rules for relationships between concepts are generated while eliminating relationships between integrated concepts. Finally, several rules handle relationship and attribute constraints, which are transformed into axioms in the ontological model. The formal rules of our approach are domain independent, and they produce a generic ontology that is not restricted to a specific ontology language. The rules take into account the gap between the database model and the ontological model; consequently, some database constructs have no equivalent in the ontological model. The second phase consists of validation and enrichment. The best way to validate the transformation result is to exploit the semantics obtained from the conceptual model of the database. In the validation phase, the domain expert identifies missing or superfluous concepts (classes or relationships). In the enrichment phase, generalisation can be applied to classes that share common attributes, and complex or composite attributes can be represented as classes. We implemented the transformation system in a tool called SQL2OWL to show the correctness and functionality of our approach. The evaluation of the system demonstrated the success of the proposed approach and used several techniques.
First, a comparative study was carried out between the results produced by our approach and those of similar approaches. The second evaluation technique is a weighted scoring system that specifies the criteria affecting the transformation system. The final evaluation technique is a scoring scheme: we assess the quality of the transformation system by applying a compliance measure to show the strength of our approach compared to existing approaches. Finally, the measures of success considered are scalability and completeness.
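As a rough illustration of the generation phase, the hypothetical snippet below reverse-engineers a tiny logical model (two tables with a foreign key) into ontology classes and properties. The table layout, rule, and Turtle-like output are invented for illustration and do not reproduce SQL2OWL's actual rule set, which also covers fragmentation, hierarchies, multi-valued attributes, and generic relations.

```python
# A hypothetical miniature of the table-to-class step: plain columns become
# datatype properties, foreign keys become object properties (Turtle-like output).

tables = {
    "Employee": {"columns": {"id": "INT", "name": "VARCHAR", "dept_id": "INT"},
                 "primary_key": ["id"],
                 "foreign_keys": {"dept_id": "Department"}},
    "Department": {"columns": {"id": "INT", "name": "VARCHAR"},
                   "primary_key": ["id"],
                   "foreign_keys": {}},
}

def table_to_owl(name, table):
    """Emit a class, datatype properties for plain columns,
    and object properties for foreign-key columns."""
    lines = [f":{name} a owl:Class ."]
    for col, sql_type in table["columns"].items():
        if col in table["foreign_keys"]:
            target = table["foreign_keys"][col]
            lines.append(f":has{target} a owl:ObjectProperty ; "
                         f"rdfs:domain :{name} ; rdfs:range :{target} .")
        else:
            xsd = "xsd:integer" if sql_type == "INT" else "xsd:string"
            lines.append(f":{name}_{col} a owl:DatatypeProperty ; "
                         f"rdfs:domain :{name} ; rdfs:range {xsd} .")
    return "\n".join(lines)

for name, table in tables.items():
    print(table_to_owl(name, table), "\n")
```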
3

Non local analyses certification with an annotated semantics

Cabon, Gurvan 14 December 2018 (has links)
Because of the increasing quantity of data processed by software, the need for privacy guarantees is legitimate. The non-interference property ensures that a program does not leak private data to a public output. We propose a framework for building an annotated multisemantics able to capture the non-interference property, in order to help formally prove analysers. The framework comes with a proved theorem stating that the annotations correctly capture non-interference. This correctness theorem makes it possible to prove an analyser without relying on the definition of non-interference, only on the annotations.
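As a minimal illustration of the non-interference property itself (not of the annotated multisemantics developed in the thesis), the sketch below brute-forces the definition: a program is non-interferent when two runs that differ only in the secret input produce the same public output. The function names and test values are invented.

```python
# Illustrative sketch: non-interference checked by brute force over small input sets.
# Two runs differing only in the secret must agree on the public output.

def leaky(secret: int, public: int) -> int:
    # Public output depends on the secret: interference.
    return public + (1 if secret > 0 else 0)

def safe(secret: int, public: int) -> int:
    # Public output ignores the secret: non-interferent.
    return public * 2

def non_interferent(prog, secrets, publics) -> bool:
    return all(prog(s1, p) == prog(s2, p)
               for p in publics for s1 in secrets for s2 in secrets)

secrets, publics = [-1, 0, 1], [0, 1, 2]
print(non_interferent(leaky, secrets, publics))  # False
print(non_interferent(safe, secrets, publics))   # True
```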
4

Formal Methods and Tools for Testing Communication Protocol System Security

Shu, Guoqiang 29 July 2008 (has links)
No description available.
5

Evaluation of the Formal Versus the Informal Method of Teaching Arithmetic

Garrett, Atlee Margaret 08 1900 (has links)
This investigation has a three-fold purpose: (1) to determine the results of delaying formal arithmetic until the fourth grade; (2) to ascertain the effects on children's reading abilities when the time ordinarily spent on drill and organized instruction in arithmetic is given over to reading, the diverted time being spent reading arithmetic stories, playing games, and using other arithmetical processes in connection with school activities; and (3) to compare the results of a behavior journal kept for each child in the experimental group and in the control group.
6

Automated reasoning over string constraints

Liang, Tianyi 01 December 2014 (has links)
An increasing number of applications in verification and security rely on, or could benefit from, automatic solvers that can check the satisfiability of constraints over a rich set of data types that includes character strings. Unfortunately, most string solvers today are standalone tools that can reason only about some fragment of the theory of strings and regular expressions, sometimes with strong restrictions on the expressiveness of their input language (such as length bounds on all string variables). These specialized solvers reduce string problems to satisfiability problems over specific data types, such as bit vectors, or to automata decision problems. On the other hand, despite their power and success as back-end reasoning engines, general-purpose Satisfiability Modulo Theories (SMT) solvers have so far provided minimal or no native support for string reasoning. This thesis presents a deductive calculus describing a new algebraic approach that solves constraints over the theory of unbounded strings and regular expressions natively, without reduction to other problems. We provide proofs of refutation soundness and solution soundness of our calculus, and of solution completeness under a fair proof strategy. Moreover, we show that our calculus is a decision procedure for the theory of regular language membership with length constraints. We have implemented our calculus as a string solver for the theory of (unbounded) strings with concatenation, length, and membership in regular languages, and incorporated it into the SMT solver CVC4 to expand its already large set of built-in theories. This work makes CVC4 the first SMT solver able to accept and process a rich set of mixed constraints over strings, integers, reals, arrays, and other data types. In addition, our initial experimental results show that, on string problems, CVC4 is highly competitive with specialized string solvers that have a comparable input language. We believe the approach described in this thesis opens a new direction for string-based formal methods.
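The kind of mixed string/integer constraint described above can be sketched with an SMT solver's Python bindings. The snippet below uses Z3's API purely as a stand-in for illustration (the thesis work extends CVC4, whose usual input is SMT-LIB): it asks for a split of a string into two parts where the first part's length lies in a given integer range.

```python
# Mixed string/integer constraints, solved natively by an SMT solver.
# Z3's Python bindings are used here only as a stand-in for illustration;
# the thesis implements this style of reasoning inside CVC4.
from z3 import String, Int, Length, Concat, StringVal, Solver, sat

x, y = String('x'), String('y')
n = Int('n')

s = Solver()
s.add(Concat(x, y) == StringVal("formalmethods"))  # string concatenation constraint
s.add(Length(x) == n)                               # string length tied to an integer
s.add(n > 3, n < 7)                                 # arithmetic constraint on that integer

if s.check() == sat:
    m = s.model()
    print(m[x], m[y], m[n])
```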
7

Methods and Algorithms for Scalable Verification of Asynchronous Designs

Yao, Haiqiong 01 January 2012 (has links)
Concurrent systems are becoming more complex with the advent of multi-core processors and growing support for concurrent programs. However, errors in concurrent systems are too subtle to detect with traditional testing and simulation. Model checking is an effective method for verifying concurrent systems by exhaustively searching the complete state space exhibited by a system. The main challenge for model checking, however, is state explosion: the state space of a concurrent system grows exponentially in the number of components of the system. The state space explosion problem prevents model checking from being applied to systems of realistic size. After decades of intensive research, a large number of methods have been developed to attack this well-known problem. Compositional verification is one of the promising methods that can scale to large, complex concurrent systems. In compositional verification, the task of verifying an entire system is divided into smaller tasks of verifying each component of the system individually. The correctness of properties of the entire system can be derived from the results of local verification on the individual components. This method avoids building the global state space for the entire system and accordingly alleviates the state space explosion problem. Several issues need to be addressed to facilitate the application of compositional verification. A major focus of automated compositional verification is the generation of over-approximate yet accurate environments for components for local verification. This dissertation addresses this issue by proposing two abstraction refinement methods that iteratively refine the state space of each component with an over-approximate environment. The basic idea of both methods is to examine the interface interactions among different components and remove from each component's state space the behaviors that are not allowed on its interface. After the extra behaviors introduced by the over-approximate environment are removed by the abstraction refinement methods, the initially coarse environments become more accurate. The difference between the two methods lies in how the illegal behaviors generated by the over-approximate environments are identified and removed. For local properties that can be verified on individual components, compositional reasoning can be scaled to large systems by leveraging the proposed abstraction refinement methods. However, for global properties that cannot be checked locally, the state space of the whole system needs to be constructed. To alleviate the state explosion problem when generating the global state space by composing the local state spaces of the individual components, this dissertation also proposes several state space reduction techniques that simplify the state space of each component, helping the compositional minimization method generate a much smaller global state space for the entire system. These state space reduction techniques are sound and complete in that they keep all behaviors on the interface without introducing any extra behaviors; therefore, verification results derived from the reduced global state space are also valid on the original state space of the entire system.
An automated compositional verification framework integrating all the abstraction refinement methods and state space reduction techniques presented in this dissertation has been implemented in the explicit-state model checker Platu. It has been applied in experiments on several non-trivial asynchronous circuit designs to demonstrate its scalability. The experimental results show that our automated compositional verification framework is effective on examples that are too complex for monolithic model checking methods to handle.
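To illustrate why the global state space is expensive to build, the sketch below composes two toy components, modeled as labeled transition systems, by synchronizing on shared interface actions and exploring the reachable product. It is an invented example, not Platu's algorithm; the dissertation's methods work precisely to avoid or shrink this product.

```python
# Illustrative sketch (not Platu): parallel composition of two components, modeled as
# labeled transition systems, synchronizing on shared interface actions. The reachable
# product is the global state space that compositional methods try to avoid building.
from collections import deque

def compose(lts1, lts2, init1, init2):
    """Each LTS maps a state to a list of (action, next_state) pairs."""
    shared = ({a for trs in lts1.values() for a, _ in trs} &
              {a for trs in lts2.values() for a, _ in trs})
    start = (init1, init2)
    seen, frontier, edges = {start}, deque([start]), []
    while frontier:
        s1, s2 = frontier.popleft()
        moves = []
        for a, t1 in lts1.get(s1, []):
            if a in shared:   # shared action: both components move together
                moves += [(a, (t1, t2)) for b, t2 in lts2.get(s2, []) if b == a]
            else:             # local action of component 1: interleave
                moves.append((a, (t1, s2)))
        moves += [(a, (s1, t2)) for a, t2 in lts2.get(s2, []) if a not in shared]
        for a, nxt in moves:
            edges.append(((s1, s2), a, nxt))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen, edges

sender   = {"idle": [("send", "wait")], "wait": [("ack", "idle")]}
receiver = {"ready": [("send", "busy")], "busy": [("ack", "ready")]}
states, edges = compose(sender, receiver, "idle", "ready")
print(len(states), "reachable global states")  # only interface-consistent combinations appear
```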
8

Formal Verification of FPGA Based Systems

Deng, Honghan 10 1900 (has links)
In design verification, although simulation is still a widely used verification technique in FPGA design, formal verification is obtaining greater acceptance as the complexity of designs increases. In the simulation method, for a circuit with n inputs and m registers an exhaustive test vector will have as many as 2^(m+n) elements, making it impractical for many modern circuits. Therefore this method is incomplete, i.e., it may fail to catch some design errors due to the lack of complete test coverage. Formal verification can be introduced as a complement to traditional verification techniques. The primary objectives of this thesis are determining: (i) how to formalize FPGA implementations at different levels of abstraction, and (ii) how to prove their functional correctness. This thesis explores two variations of a formal verification framework by proving the functional correctness of several FPGA implementations of commonly used safety subsystem components using the theorem prover PVS. We formalize components at the netlist level and the Verilog Register Transfer HDL level, preserving their functional semantics. Based on these formal models, we prove correctness conditions for the components using PVS. Finally, we present some techniques which can facilitate the proving process and describe some general strategies which can be used to prove properties of a synchronous circuit design.
Master of Applied Science (MASc)
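As a toy version of the exhaustive check that the abstract argues does not scale, the Python sketch below compares a gate-level (netlist) full adder against its word-level specification over all 2^3 input combinations. It is illustrative only; the thesis performs such correctness proofs in PVS, where no enumeration is needed.

```python
# Illustrative sketch (Python, not PVS): exhaustive equivalence check of a netlist-level
# 1-bit full adder against its word-level specification. Feasible here, but the number
# of cases grows as 2^(m+n), which is why the thesis turns to theorem proving.
from itertools import product

def full_adder_netlist(a, b, cin):
    # Gate-level (netlist) description: two XORs, two ANDs, one OR.
    s1 = a ^ b
    sum_ = s1 ^ cin
    cout = (a & b) | (s1 & cin)
    return sum_, cout

def full_adder_spec(a, b, cin):
    # Word-level specification: arithmetic sum split into sum and carry bits.
    total = a + b + cin
    return total % 2, total // 2

assert all(full_adder_netlist(a, b, c) == full_adder_spec(a, b, c)
           for a, b, c in product((0, 1), repeat=3))
print("netlist matches specification on all 2^3 input combinations")
```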
9

Program analysis for quantitative-reachability properties

Liu, Jiawen 06 September 2024 (has links)
Program analysis studies the execution behaviors of computer programs, including their safety behavior, privacy behavior, resource usage, and so on. An analysis of a program's safety behavior, for instance, asks whether a particular line of code leaks a secret and how much of the secret that line leaks. When studying a program's resource usage, an analysis asks whether a piece of code consumes a certain resource and how much of that resource it uses. Yet another kind of analysis studies a program's privacy behavior by asking whether a specific piece of private data depends on other data and how many times that dependency arises over multiple executions. When studying these behaviors, two kinds of program properties dominate: "how much" and "whether", namely quantitative properties and reachability properties. In other words, we analyze a kind of program property that contains two sub-properties, a quantitative one and a reachability one. A property is a hyperproperty if it has two or more sub-properties. I refer to the class of properties with quantitative and reachability sub-properties as quantitative-reachability hyperproperties. Most existing program analysis methods can analyze only one sub-property of a program's quantitative-reachability hyperproperty. For example, reachability analysis methods only tell us whether some code is executed, whether confidential data is leaked, whether certain data relies on other data, and so on; these are only the reachability sub-properties. They do not address how many times or for how long these properties hold with respect to particular code or data. Quantitative analysis methods, such as program complexity analysis, resource cost analysis, and execution time estimation, only give an upper bound on the overall quantity, i.e., the quantitative sub-property; these quantities are not associated with a specific piece of code, program location, or private datum, which is what the reachability sub-properties concern. This thesis presents a new program analysis methodology for analyzing two representative quantitative-reachability properties. The new methodology mitigates the limitations of both reachability analysis methods and quantitative analysis methods and helps to control a program's execution behavior at a finer granularity. The effectiveness of the new analysis method is validated through prototype implementations and experimental evaluations. The first noteworthy quantitative-reachability property I look into is the adaptivity of programs that implement adaptive data analyses. Data analyses are usually designed to identify properties of the population from which the data are drawn, generalizing beyond the specific data sample. For this reason, data analyses are often designed in a way that guarantees a low generalization error. An adaptive data analysis can be seen as a process composed of multiple queries interrogating some data, where the choice of which query to run next may rely on the results of previous queries. The generalization error of each individual query or analysis can be controlled using an array of well-established statistical techniques. However, when queries are composed arbitrarily, the individual errors can propagate through the chain of queries and result in a high generalization error.
To address this issue, data analysts have designed several techniques that guarantee bounds not only on the generalization errors of single queries but also on the generalization error of the composed analysis. The choice of which of these techniques to use often depends on the chain of queries that an adaptive data analysis can generate, that is, intuitively, on the adaptivity level of the analysis. To help analysts identify which technique to use to control the generalization error, we consider adaptive data analyses implemented as while-like programs and design a program analysis framework for them. In this framework, we first formalize the intuitive notion of adaptivity as a quantitative-reachability property, a key measure for choosing the appropriate technique. Then we design a program analysis algorithm that estimates a sound upper bound on the adaptivity of the program implementing an adaptive data analysis. We also implement this program analysis and show that it can help analyze the adaptivity of several concrete data analyses with different adaptivity structures. As a continuation of this work, and to obtain a more precise bound on a program's adaptivity level, I look at another quantitative-reachability hyperproperty: the number of times a given location inside a procedure is visited during program execution. The upper bound on this hyperproperty is referred to as the reachability-bound, and it can help improve program analysis results when studying other program features. For example, the reachability-bound on each program location can be used by resource cost analysis techniques to compute a precise bound on a program's worst-case resource consumption. Likewise, the accuracy of the adaptivity analysis described above can be improved through a tight reachability-bound on every program location. Some existing program complexity analysis methods can be repurposed to analyze and estimate the reachability-bound, but these methods focus only on the overall quantity and ignore path sensitivity; as a result, the reachability-bounds of locations in different sub-procedures are usually over-approximated. As far as we know, there is no general analysis algorithm that computes the reachability-bound for every program location directly and path-sensitively. To this end, I present a path-sensitive reachability-bound algorithm that exploits path sensitivity to compute a precise reachability-bound for every program location. We implement this algorithm in a prototype and report on an experimental comparison with state-of-the-art tools over four different sets of benchmarks.
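The two quantities the thesis bounds statically can be illustrated dynamically with a toy adaptive analysis, sketched below in Python: each round's query depends on the previous answer, one counter records visits to the query location (the reachability-bound measured at run time), and another records the length of the result-dependent chain (the adaptivity). The data, queries, and counters are invented for illustration and are not the thesis's formal definitions.

```python
# Illustrative sketch only: a toy adaptive data analysis with dynamic counters
# approximating the two quantities the thesis bounds statically.
import random

data = [random.gauss(0.0, 1.0) for _ in range(1000)]
visits = {"query_site": 0}          # how many times the query location is reached
adaptivity_depth = 0                # length of the chain of result-dependent queries

def query(predicate):
    visits["query_site"] += 1
    return sum(1 for x in data if predicate(x)) / len(data)

threshold = 0.0
for round_ in range(5):
    frac = query(lambda x: x > threshold)      # query at the tracked location
    adaptivity_depth += 1                      # the next threshold depends on this answer
    threshold = threshold + (0.1 if frac > 0.5 else -0.1)

print("visits to query site:", visits["query_site"])   # 5
print("adaptivity depth:", adaptivity_depth)            # 5
```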
10

Generation of codes and provable annotations of interior-point algorithms for critical embedded systems

Davy, Guillaume 06 December 2018 (has links)
In industry, the use of optimization is ubiquitous. Optimization consists of computing the best solution subject to a number of constraints. However, this computation is complex, long, and not always reliable. This is why the task has long been confined to the design stages, which left time to do the computation, check that the solution was correct, and redo the computation if necessary. In recent years, thanks to the ever-increasing power of computers, industry has begun to integrate optimization computations at the heart of its systems; that is, optimization computations are carried out continuously within the system, sometimes dozens of times per second. It is therefore impossible to check a solution a posteriori or to restart a computation, which is why it is essential to verify that the optimization program is perfectly correct and bug-free. The objective of this thesis was to develop tools and methods to meet this need. To do this, we used the theory of formal proof, which considers a program as a mathematical object: the object takes inputs and produces a result, and under certain conditions on the inputs we can prove that the result meets our requirements. Our work consisted in choosing an optimization program and formally proving that the result of this program is correct.
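As a hedged sketch of the kind of program under study, the snippet below runs a log-barrier (interior-point style) minimization of x^2 subject to x >= 1, with runtime assertions standing in for the provable annotations (feasibility invariant, postcondition) that the thesis generates and discharges with formal proof. The algorithm, tolerances, and assertions are illustrative choices, not the thesis's generated code.

```python
# Hedged sketch: a log-barrier (interior-point style) minimization of f(x) = x^2
# subject to x >= 1, with runtime assertions standing in for provable annotations.

def barrier_minimize(mu: float = 1.0, shrink: float = 0.5, tol: float = 1e-8) -> float:
    x = 2.0                                    # strictly feasible starting point (x > 1)
    while mu > tol:
        # Damped Newton steps on the barrier objective  x**2 - mu * log(x - 1)
        for _ in range(50):
            grad = 2 * x - mu / (x - 1)
            hess = 2 + mu / (x - 1) ** 2
            step = grad / hess
            while x - step <= 1.0:             # backtrack so the iterate stays feasible
                step *= 0.5
            x -= step
            assert x > 1.0, "invariant: iterate remains strictly feasible"
        mu *= shrink                           # tighten the barrier
    assert abs(x - 1.0) < 1e-6, "postcondition: near the true minimizer of x^2 on x >= 1"
    return x

print(round(barrier_minimize(), 6))            # approximately 1.0
```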
