101 |
Untersuchung der Nebenläufigkeit, Latenz und Konsistenz asynchroner Interaktiver Echtzeitsysteme mittels Profiling und Model Checking / Research on concurrency, latency, and consistency of asynchronous Realtime Interactive Systems using profiling and model checking. Rehfeld, Stephan, January 2016 (has links) (PDF)
In this thesis, the concurrency, consistency, and latency of asynchronous Realtime Interactive Systems are examined using the techniques of profiling and model checking. It is first explained why the asynchronous model is the most promising one for concurrency in a Realtime Interactive System; to this end, it is compared with other models. In addition, a detailed comparison of synchronization technologies, which provide the foundation for consistency, is carried out. Based on these two comparisons and on an examination of other systems, a synchronization concept is developed.
On this basis, concurrency, consistency, and latency are investigated with two techniques. The first technique is profiling, for which several new ways of presenting measured data are developed; these newly developed visualizations are used in the implementation of a profiler. The second technique is model checking, which had not previously been applied in the context of Realtime Interactive Systems. Model checking serves to predict the behavior of a Realtime Interactive System, and these predictions are compared with the measurements from the profiler. / In this thesis the concurrency, latency, and consistency of asynchronous Realtime Interactive Systems (RIS) are analyzed using profiling and model checking. At the beginning, it is described why the Asynchronous Model is the most promising model to increase concurrency in a RIS. To this end, it is compared to several other models. Furthermore, synchronization techniques are compared, which are used to provide consistency in a concurrent application. Based on both results, a synchronization concept is created.
Using this concept, the concurrency, latency, and consistency are analyzed using two techniques. The first technique is profiling; new visualizations are developed for the data gathered by the profiler. The second technique is model checking, which is used here for the first time in the context of a RIS. Model checking is used to predict the behavior of a RIS, and the predictions are compared with the measurements from the profiler.
|
102 |
Symbolische BDD-basierte Modellprüfung asynchroner nebenläufiger Systeme / Symbolic BDD-based Model Checking of Asynchronous Concurrent Systems. Appold, Christian, January 2015 (has links) (PDF)
Today, information and communication systems are ubiquitous and very often consist of several interacting and communicating components. One reason is the widespread use of multi-core processors and the growing amount of concurrent software written to use them efficiently. The dissemination of distributed emerging technologies such as sensor networks or the Internet of Things is also growing. Additionally, many internet protocols are client-server architectures, with clients that execute computations in parallel and servers that can handle requests of several clients in parallel. Systems that consist of several interacting and communicating components are often very complex and, due to their complexity, prone to errors. Errors in systems can have dramatic consequences, especially in safety-critical areas where human life can be endangered by incorrect system behavior. Hence, it is essential to have methods that ensure the proper functioning of such systems.
This thesis aims at improving the verifiability of asynchronous concurrent systems using symbolic model checking based on Binary Decision Diagrams (BDDs). An asynchronous concurrent system is a system that consists of several components, of which only one component can execute a transition at a time. Model checking is a formal verification technique: for a given system description and a set of desired properties, the validity of the properties for the system is decided automatically by software tools called model checkers. The main problem of model checking is the state-space explosion problem. One approach to reducing this problem is symbolic model checking, in which system states and transitions are not stored explicitly as in explicit model checking; instead, sets of states and sets of transitions are stored and manipulated together. The data structure used in this thesis to store those sets is the BDD. BDD-based symbolic model checking has already been used successfully in industry several times. Nevertheless, it still suffers from the state-space explosion problem, and further improvements are necessary to broaden its applicability.
Central operations in BDD-based symbolic model checking are the computation of successor and predecessor states of a given set of states. Those computations are called image computations. They are applied repeatedly in BDD-based symbolic model checking to decide the validity of properties for a given system description. Hence, their efficient execution is crucial for the memory and runtime requirements of a model checker. In an image computation, a BDD for a set of transitions and a BDD for a set of states are combined to compute a set of successor or predecessor states. Often, the size of the BDDs representing the transition relation is also critical for the successful use of model checking. To further improve the applicability of symbolic model checking, we present in this thesis new data structures to store the transition relation of asynchronous concurrent systems. Additionally, we present new image computation algorithms. Both can lead to large runtime and memory reductions for BDD-based symbolic model checking. Asynchronous concurrent systems often contain symmetries. A technique to exploit those symmetries to diminish the state-space explosion problem is symmetry reduction. In this thesis we also present a new efficient algorithm for symmetry reduction in BDD-based symbolic model checking. / In our everyday lives we are constantly in contact with information and communication systems. These often consist of several interacting and communicating components, such as concurrent software for the efficient use of multi-core processors or sensor networks. Systems that consist of several interacting and communicating components are often complex and therefore very error-prone. It is therefore important to have reliable methods that help ensure the correct functioning of such systems.
In this doctoral thesis, new methods were developed to improve the verifiability of asynchronous concurrent systems by applying symbolic model checking with binary decision diagrams (BDDs). An asynchronous concurrent system consists of several components, of which only one component can execute transitions at any given time. Model checking is a formal verification technique in which the validity of a set of properties to be checked is decided automatically for a given system description by software tools called model checkers. The main problem of symbolic model checking is the state-space explosion problem, and further improvements are necessary so that symbolic model checking can be carried out successfully more often.
In BDD-based symbolic model checking, sets of system states and sets of transitions are each represented by BDDs. Its central operations are the computation of successor and predecessor states of given sets of states, which are called image computations. To check the validity of properties for a given system description, image computations are performed repeatedly, so their efficient computation is decisive for a low runtime and low memory consumption of model checking. In an image computation, a BDD representing a set of transitions and a BDD for a set of states are combined in order to compute a set of successor or predecessor states. Often the size of the BDDs representing a system's transition relation is also decisive for the successful applicability of model checking.
This thesis presents new data structures for representing the transition relation of asynchronous concurrent systems in BDD-based symbolic model checking. In addition, new algorithms for performing image computations are presented. Both can lead to large reductions in runtime and memory consumption. Asynchronous concurrent systems often exhibit symmetries, and symmetry reduction is a technique for reducing the state-space explosion problem by exploiting them. This thesis also presents a new efficient algorithm for symmetry reduction in symbolic model checking with BDDs.
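As a concrete illustration of the image computations described above, the sketch below (not taken from the thesis) computes successor and predecessor states and a reachability fixed point for a toy asynchronous system of two components. The sets are explicit Python sets here; a BDD-based model checker performs the same operations symbolically, as a relational product with existential quantification over the state variables.

    # Minimal sketch of image computation on a toy asynchronous system of two
    # components; sets are explicit here, whereas a symbolic model checker
    # would represent the state sets and the transition relation as BDDs.

    def image(states, trans):
        # Successors of 'states' under the transition relation 'trans' (a set of pairs).
        return {t for (s, t) in trans if s in states}

    def preimage(states, trans):
        # Predecessors of 'states' under 'trans'.
        return {s for (s, t) in trans if t in states}

    def reachable(init, trans):
        # Least fixed point of repeated forward image computations.
        reached, frontier = set(init), set(init)
        while frontier:
            frontier = image(frontier, trans) - reached
            reached |= frontier
        return reached

    # Toy asynchronous system: two components, each owning one bit of the
    # state (a, b); only one component moves per transition (interleaving).
    trans = set()
    for a in range(2):
        for b in range(2):
            trans.add(((a, b), (1 - a, b)))   # component 1 flips its bit
            trans.add(((a, b), (a, 1 - b)))   # component 2 flips its bit

    print(reachable({(0, 0)}, trans))         # all four states are reachable
    print(preimage({(1, 1)}, trans))          # {(0, 1), (1, 0)}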
|
103 |
Optimisation of steam reconditioning for regrowth-ash and plantation-grown eucalypt species. Blakemore, Philip, January 2008 (has links)
Doctor of Philosophy / Steam reconditioning to recover collapse in mid- to low-density eucalypt species has been known for over ninety years. Current industrial practice for steam reconditioning has largely been based on a few older studies, which were often poorly documented and based on very small sample sizes. On top of this, many local practices and ‘rules of thumb’ have developed over time, many of which have a questionable scientific basis. This thesis was undertaken to investigate collapse recovery more rigorously, to understand it at a fundamental level, and to try to optimise its application. The most obvious variable that kiln operators have control over is the moisture content of the timber prior to steam reconditioning. Experiments were undertaken to generate a range of moisture gradients (from minimal to more industrially realistic) to evaluate the effect of moisture content on collapse recovery. The optimal moisture content for the core of the boards was found to be about 18–20%, although there was no statistical difference in recoveries between about 17% and 25% moisture content. Below 15% moisture content, recovery dropped off severely and closure of intra-ring internal checks was incomplete, while at 25% moisture content the main drawback was an increased level of normal shrinkage due to the early removal of drying stresses. Above a core moisture content of about 35%, incomplete closure of intra-ring internal checks was again observed; there was little evidence of re-collapse occurring in these high-moisture-content samples. Previously established relationships between density, collapse, and drying rate were again generally observed in these experiments. However, for the first time, an effect of collapse in reducing the fitted drying diffusion coefficients was also observed. It was also observed that, provided the moisture content of the board was in the critical range, most of the collapse recovery was achieved in the time it took to bring the core of the board up to the steaming temperature of close to 100°C. This suggests that for most thicknesses a conservative reconditioning period of two hours at temperature is all that is required. This recommended shortening of the reconditioning cycle could dramatically increase the throughput of timber through the steam reconditioning chambers. Alternatively, where modern final drying kilns are being used, it could mean that the reconditioning treatment could be carried out within the final drying kiln. A finite element model was developed to demonstrate the mechanism by which collapse recovery occurs. The theory tested was that the elastic component that stores the energy to restore the shape of the deformed cell is found primarily in the S1 and S3 layers, whereas the inelastic component is found primarily in the S2 layer. The model developed here provided limited support for this theory.
|
104 |
Towards Formal Verification in a Component-based Reuse Methodology. Karlsson, Daniel, January 2003 (has links)
Embedded systems are becoming increasingly common in our everyday lives. As technology progresses, these systems become more and more complex. Designers handle this increasing complexity by reusing existing components (Intellectual Property blocks). At the same time, the systems must still fulfill strict requirements on reliability and correctness.
This thesis proposes a formal verification methodology which smoothly integrates with component-based system-level design using a divide-and-conquer approach. The methodology assumes that the system consists of several reusable components. Each of these components is already formally verified by its designers and is considered correct provided that the environment satisfies certain properties imposed by the component. What remains to be verified is the glue logic inserted between the components. Each such glue logic is verified separately using model checking techniques.
The verification methodology as well as the underlying theoretical framework and algorithms are presented in the thesis.
Experimental results have shown the efficiency of the proposed methodology and demonstrated that it is feasible to apply it to real-life examples. / Report code: LiU-Tek-Lic-2003:57.
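As a rough, invented illustration of why the glue logic between already-verified components still needs checking, the sketch below explores a tiny glue-logic transition system under simple environment assumptions that stand in for the verified components, and searches for a safety violation by breadth-first reachability. The one-slot buffer, the state encoding, and the property are assumptions made for this example and are not taken from the thesis.

    from collections import deque

    # Hypothetical glue logic: a one-slot buffer between a producer component and a
    # consumer component.  State = (slot_full, data_lost).
    def successors(state):
        full, lost = state
        nxt = [(True, lost or full)]   # environment: producer may put at any time;
                                       # putting into a full slot loses data
        if full:
            nxt.append((False, lost))  # environment: consumer may get when the slot is full
        return nxt

    def check_safety(init, bad):
        # BFS reachability: return a counterexample path to a bad state, or None.
        seen, queue = {init}, deque([[init]])
        while queue:
            path = queue.popleft()
            if bad(path[-1]):
                return path
            for s in successors(path[-1]):
                if s not in seen:
                    seen.add(s)
                    queue.append(path + [s])
        return None

    cex = check_safety((False, False), bad=lambda s: s[1])   # s[1] == data was lost
    print("counterexample:", cex)   # this naive glue logic can lose data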
|
105 |
Type-alpha DPLs. Arkoudas, Konstantine, 05 October 2001
This paper introduces Denotational Proof Languages (DPLs). DPLs are languages for presenting, discovering, and checking formal proofs. In particular, in this paper we discuss type-alpha DPLs, a simple class of DPLs for which termination is guaranteed and proof checking can be performed in time linear in the size of the proof. Type-alpha DPLs allow for lucid proof presentation and for efficient proof checking, but not for proof search. Type-omega DPLs allow for search as well as simple presentation and checking, but termination is no longer guaranteed and proof checking may diverge. We do not study type-omega DPLs here. We start by listing some common characteristics of DPLs. We then illustrate with a particularly simple example: a toy type-alpha DPL called PAR, for deducing parities. We present the abstract syntax of PAR, followed by two different kinds of formal semantics: evaluation and denotational. We then relate the two semantics and show how proof checking becomes tantamount to evaluation. We proceed to develop the proof theory of PAR, formulating and studying certain key notions, such as observational equivalence, that pervade all DPLs. We then present NDL, a type-alpha DPL for classical zero-order natural deduction. Our presentation of NDL mirrors that of PAR, showing how every basic concept that was introduced in PAR resurfaces in NDL. We present sample proofs of several well-known tautologies of propositional logic that demonstrate our thesis that DPL proofs are readable, writable, and concise. Next we contrast DPLs with typed logics based on the Curry-Howard isomorphism, and discuss the distinction between pure and augmented DPLs. Finally we consider the issue of implementing DPLs, presenting an implementation of PAR in SML and one in Athena, and end with some concluding remarks.
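The claim that, for type-alpha DPLs, proof checking is tantamount to evaluation can be made concrete with a toy checker in the spirit of the PAR example: evaluating a proof term either yields the proposition it establishes or fails, in time linear in the size of the proof. The constructors and rule names below are invented for this sketch and do not reproduce the paper's actual syntax or semantics.

    # A toy "deduce parities" proof language: checking a proof is evaluating it.
    # Invented rules, merely in the spirit of the PAR example described above.

    def check(proof):
        tag = proof[0]
        if tag == "zero-even":                    # axiom: even(0)
            return ("even", 0)
        if tag == "succ":                         # from the parity of n, infer the parity of n + 1
            kind, n = check(proof[1])
            return ("odd" if kind == "even" else "even", n + 1)
        if tag == "add":                          # the parity of a sum from the parities of the summands
            (k1, n1), (k2, n2) = check(proof[1]), check(proof[2])
            return ("even" if k1 == k2 else "odd", n1 + n2)
        raise ValueError("ill-formed proof: " + repr(tag))

    p3 = ("succ", ("succ", ("succ", ("zero-even",))))   # proves odd(3)
    p2 = ("succ", ("succ", ("zero-even",)))             # proves even(2)
    print(check(("add", p3, p2)))                        # ('odd', 5)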
|
106 |
Generic Techniques for the verification of infinite-state systems. Legay, Axel, 10 December 2007
Within the context of the verification of infinite-state systems,
'Regular model checking' is the name of a family of techniques in
which states are represented by words or trees, sets of states by
finite automata on these objects, and transitions by finite automata
operating on pairs of state encodings, i.e. finite-state
transducers. In this context, the problem of computing the set of
reachable states of a system can be reduced to the one of computing
the iterative closure of the finite-state transducer representing its
transition relation. This thesis provides several techniques for
computing the transitive closure of a finite-state transducer. One of
the motivations of the thesis is to show the feasibility and
usefulness of this approach through a combination of the necessary
theoretical developments, implementation, and experimentation. For
systems whose states are encoded by words, the iteration technique
proceeds by comparing a finite sequence of successive powers of the
transducer, detecting an 'increment' that is added to move from one
power to the next, and extrapolating the sequence by allowing
arbitrary repetitions of this increment. For systems whose states are
represented by trees, the iteration technique proceeds by computing
the powers of the transducer and progressively collapsing their states
according to an equivalence relation until a fixed point is reached.
The proposed iteration techniques can just as well be exploited to
compute the closure of a given set of states by repeated applications
of the transducer, which has proven to be a very effective way of
using the technique. Various examples have been handled completely
within the automata-theoretic setting.
Other applications of the techniques are the verification of linear
temporal properties as well as the computation of the convex hull of a
finite set of integer vectors.
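A minimal, concrete instance of the reachability problem described above: states are encoded as words, and the set of reachable states is computed by repeatedly applying a word-rewriting transition relation until a fixed point is reached. The token-passing example and the naive iteration below are invented for illustration; the point of the thesis is precisely to avoid such naive iteration by extrapolating the iterated transducer, which is not reproduced here.

    # States are words over {'t', 'n'} (token / no token); the single transition
    # moves the token one position to the right.  For a fixed word length the
    # naive fixed-point loop below terminates; regular model checking targets the
    # parameterized case, computing the reachable set as an automaton by iterating
    # (and accelerating) a finite-state transducer instead.

    def step(word):
        out = set()
        for i in range(len(word) - 1):
            if word[i] == 't' and word[i + 1] == 'n':
                out.add(word[:i] + 'nt' + word[i + 2:])
        return out

    def reachable(init):
        reached, frontier = set(init), set(init)
        while frontier:
            new = set()
            for w in frontier:
                new |= step(w)
            frontier = new - reached
            reached |= frontier
        return reached

    print(sorted(reachable({'tnnnn'})))   # the token can end up in every position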
|
107 |
Malicious DHTML Detection by Model-based Reasoning. Lin, Shih-Fen, 21 August 2007
Dynamic HTML (DHTML), which combines HTML, client-side scripting, and related technologies, is a mechanism for creating dynamic content in a web page. Nowadays, because of the demand for dynamic web pages and the spread of web applications, attackers have gained a new, easily spread, and hard-to-detect intrusion vector: DHTML. Commercial anti-virus software, which commonly relies on pattern matching, remains weak against commonly obfuscated malicious DHTML.
To address this situation, we propose a new detection algorithm, Model-based Reasoning (MoBR), based on the notions of model and reasoning, that is resilient to common obfuscations used by attackers and can correctly determine whether a web page is malicious or not. By describing textual and semantic signatures, we construct the model of a malicious DHTML page through a mechanism of templates. Experimental evaluation on real DHTML demonstrates that our detection algorithm is tolerant to obfuscation and performs much better than commercial anti-virus software. Furthermore, it can detect variants of malicious DHTML with a low false positive rate.
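A very rough sketch of combining text and semantic signatures through a simple template, in the spirit of the approach described above; the signatures, the template rule, and the sample script are all invented, and a real detector would parse and reason about the DOM and script behavior rather than rely on a few regular expressions.

    import re

    # Hypothetical text/semantic signatures combined by a simple template:
    # flag a page when an obfuscation indicator and a payload indicator co-occur.
    TEXT_SIGNATURES = {
        "long_escape_blob": re.compile(r"unescape\(\s*['\"](%u[0-9a-fA-F]{4}){8,}"),
        "char_code_build":  re.compile(r"String\.fromCharCode\("),
    }
    SEMANTIC_SIGNATURES = {
        "dynamic_eval":  re.compile(r"\beval\s*\("),
        "hidden_iframe": re.compile(r"<iframe[^>]*(width|height)\s*=\s*['\"]?0"),
    }

    def matches(signatures, text):
        return {name for name, rx in signatures.items() if rx.search(text)}

    def is_suspicious(html):
        # Template: at least one obfuscation signature AND at least one payload signature.
        return bool(matches(TEXT_SIGNATURES, html)) and bool(matches(SEMANTIC_SIGNATURES, html))

    sample = "<script>eval(unescape('%u9090%u9090%u9090%u9090%u9090%u9090%u9090%u9090'))</script>"
    print(is_suspicious(sample))   # True for this toy sample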
|
108 |
Behavioral Model Equivalence Checking for Large Analog Mixed Signal Systems. Singh, Amandeep, May 2011
This thesis proposes a systematic, hierarchical, optimization-based, semi-formal equivalence checking methodology for large analog/mixed-signal systems such as phase-locked loops (PLLs), analog-to-digital converters (ADCs), and input/output (I/O) circuits. I propose to verify the equivalence between a behavioral model and its electrical implementation over a limited, but highly likely, input space defined as the Constrained Behavioral Input Space. Furthermore, I clearly distinguish between the behavioral and electrical domains and define mapping functions between the two domains to allow the deviation between the behavioral model and the electrical implementation to be calculated. The verification problem is then formulated as an optimization problem, which is solved by interfacing a sequential quadratic programming (SQP) based optimizer with commercial circuit simulation tools such as Cadence Spectre. The proposed methodology is then applied to the equivalence checking of a PLL as a test case, and results are shown which demonstrate the correctness of the proposed methodology.
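The optimization formulation can be sketched as follows: search the constrained input space for the input that maximizes the deviation between the behavioral model and the implementation, using an SQP-type solver. In this sketch both models are plain Python functions standing in for the behavioral description and the circuit simulation (the thesis interfaces the optimizer with Cadence Spectre instead), SciPy's SLSQP routine stands in for the SQP engine, and all numbers are invented.

    import numpy as np
    from scipy.optimize import minimize

    # Stand-ins for the two domains: an ideal behavioral model of a gain stage and
    # an "implementation" that saturates and has a small offset.
    def behavioral(v_in):
        return 2.0 * v_in

    def implementation(v_in):
        return np.tanh(2.0 * v_in) + 0.01

    def objective(x):
        # Negated deviation, because we maximize the deviation with a minimizer.
        return -abs(behavioral(x[0]) - implementation(x[0]))

    # Constrained behavioral input space: |v_in| <= 0.5.  SLSQP is a local method,
    # so a real flow would restart it from several points in the input space.
    res = minimize(objective, x0=[0.3], method="SLSQP", bounds=[(-0.5, 0.5)])
    worst_input, worst_dev = res.x[0], -res.fun
    print(f"worst-case input {worst_input:+.3f}, deviation {worst_dev:.4f}")
    print("equivalent within a 0.1 tolerance:", worst_dev <= 0.1)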
|
109 |
Approximation and Refinement Techniques for Hard Model-checking Problems. Bobaru, Mihaela, 15 July 2009
Formal verification by model checking verifies whether a system satisfies some given correctness properties, and is intractable in general. We focus on several problems originating from the usage of model checking and from the inherent complexity of model checking itself. We propose approximation and iterative refinement techniques and demonstrate that they help in making these problems tractable on practical cases. Vacuity detection is one of these problems; it relates to the trivial satisfaction of properties. A similar problem is query solving, useful in model exploration, when properties of a system are not fully known and are to be discovered rather than checked. Both of these problems have solution spaces structured as lattices and can be solved by model checking using those lattices. The lattices, in the most general formulation of these problems, are too complex to be implemented efficiently. We introduce a general approximation framework for model checking with lattices and instantiate this framework for the two problems, leading to algorithms and implementations that can efficiently obtain partial answers to the problems. We also introduce refinement techniques that consider incrementally larger lattices and compute even the partial answers gradually, to further abate the size explosion of the problems. Another problem we consider is the state-space explosion of model checking. The size of system models is exponential in the number of state variables, which renders model checking intractable. We consider systems composed of several components running concurrently. For such systems, compositional verification checks components individually to avoid composing an entire system. Model checking an individual component uses assumptions about the other components. Smaller assumptions lead to smaller verification problems. We introduce iterative refinement techniques that improve the assumptions generated by previous automated approaches. One technique incrementally refines the interfaces between components in order to obtain smaller assumptions that are sufficient to prove a given property. The smaller assumptions are approximations of the assumption that would be obtained without our interface refinement. Another technique computes assumptions as abstractions of components, as an alternative to current approaches that learn assumptions from counterexamples. Our abstraction refinement has the potential to compute smaller nondeterministic assumptions, in contrast to the deterministic assumptions learned by current approaches. We confirm experimentally the benefits of our new approximation and refinement techniques.
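Vacuity, the trivial satisfaction of properties mentioned above, can be illustrated with a small example: a response property G(req -> F grant) is satisfied vacuously on executions in which req never occurs. The finite traces and the check below are invented for illustration; the thesis works with lattice-valued model checking over whole systems rather than over individual traces.

    # G(req -> F grant) over finite traces; each trace is a list of sets of atomic
    # propositions, and F is non-strict (it includes the current step).
    def holds(trace):
        for i, step in enumerate(trace):
            if "req" in step and not any("grant" in later for later in trace[i:]):
                return False
        return True

    def vacuously(trace):
        # The property holds, but only because the antecedent 'req' never occurs.
        return holds(trace) and all("req" not in step for step in trace)

    t1 = [{"idle"}, {"req"}, {"busy"}, {"grant"}]   # satisfied non-vacuously
    t2 = [{"idle"}, {"idle"}, {"idle"}]             # satisfied, but vacuously
    print(holds(t1), vacuously(t1))                 # True False
    print(holds(t2), vacuously(t2))                 # True True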
|
110 |
Robust Consistency Checking for Modern Filesystems. Sun, Kuei, 19 March 2013
A runtime file system checker protects file-system metadata integrity. It checks the consistency of file system update operations before they are committed to disk, thus preventing corrupted updates from reaching the disk. In this thesis, we describe our experiences with building Brunch, a runtime checker for an emerging Linux file system called Btrfs. Btrfs supports many modern file-system features that pose challenges in designing a robust checker. We find that the runtime consistency checks need to be expressed clearly so that they can be reasoned about and implemented reliably, and thus we propose writing the checks declaratively. This approach reduces the complexity of the checks, ensures their independence, and helps identify the correct abstractions in the checker. It also shows how the checker can be designed to handle arbitrary file system corruption. Our results show that runtime consistency checking is still viable for complex, modern file systems.
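The idea of expressing the runtime consistency checks declaratively, so that each check is an independent predicate over the update about to be committed, can be sketched roughly as follows. The miniature file-system state, the transaction format, and the two rules are invented for the example and are far simpler than Btrfs metadata or the checks in Brunch.

    # Declarative runtime checks: each rule is an independent predicate over the
    # proposed post-commit state; a transaction is rejected if any rule fails.

    def apply_tx(fs, tx):
        new = {"inodes": dict(fs["inodes"]), "extents": dict(fs["extents"])}
        new["inodes"].update(tx.get("inodes", {}))
        new["extents"].update(tx.get("extents", {}))
        return new

    def extents_exist(fs):
        # Every extent referenced by an inode must be present in the extent map.
        return all(e in fs["extents"] for ino in fs["inodes"].values() for e in ino["extents"])

    def refcounts_match(fs):
        # Each extent's recorded refcount equals the number of inodes referencing it.
        refs = {}
        for ino in fs["inodes"].values():
            for e in ino["extents"]:
                refs[e] = refs.get(e, 0) + 1
        return all(ext["refs"] == refs.get(eid, 0) for eid, ext in fs["extents"].items())

    CHECKS = [extents_exist, refcounts_match]

    def commit(fs, tx):
        # Check the candidate state; reject the update if any declarative rule fails.
        candidate = apply_tx(fs, tx)
        failed = [c.__name__ for c in CHECKS if not c(candidate)]
        return (candidate, []) if not failed else (fs, failed)

    fs = {"inodes": {1: {"extents": [100]}}, "extents": {100: {"refs": 1}}}
    bad_tx = {"inodes": {2: {"extents": [100, 200]}}}   # extent 200 missing, refcount stale
    print(commit(fs, bad_tx)[1])                         # ['extents_exist', 'refcounts_match']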
|