211. Saturation methods for global model-checking pushdown systems. Hague, Matthew, January 2009.
Pushdown systems equip a finite state system with an unbounded stack memory, and are thus infinite state. By recording the call history on the stack, these systems provide a natural model for recursive procedure calls. Model-checking for pushdown systems has been well-studied. Tools implementing pushdown model-checking (e.g. Moped) are an essential back-end component of high-profile software model checkers such as SLAM, Blast and Terminator. Higher-order pushdown systems define a more complex memory structure: a higher-order stack is a stack of lower-order stacks. These systems form a robust hierarchy closely related to the Caucal hierarchy and higher-order recursion schemes. This latter connection demonstrates their importance as models for programs with higher-order functions. We study the global model-checking problem for (higher-order) pushdown systems. In particular, we present a new algorithm for computing the winning regions of a parity game played over an order-1 pushdown system. We then show how to compute the winning regions of two-player reachability games over order-n pushdown systems. These algorithms extend the saturation methods of Bouajjani, Esparza and Maler for order-1 pushdown systems, and Bouajjani and Meyer for higher-order pushdown systems with a single control state. These techniques begin with an automaton recognising (higher-order) stacks, and iteratively add new transitions until the automaton becomes saturated. The reachability result, presented at FoSSaCS 2007 and in the LMCS journal, is the main contribution of the thesis. We break the saturation paradigm by adding new states to the automaton during the iteration. We identify the fixed points required for termination by tracking the updates that are applied, rather than by observing the transition structure. We give a number of applications of this result to LTL model-checking, branching-time model-checking, non-emptiness of higher-order pushdown automata and Büchi games. Our second major contribution is the first application of the saturation technique to parity games. We begin with a mu-calculus characterisation of the winning region. This formula alternates greatest and least fixed point operators over a kind of reachability formula. Hence, we can use a version of our reachability algorithm, and modifications of the Büchi techniques, to compute the required result. The main advantages of this approach compared to existing techniques due to Cachat, Serre and Vardi et al. are that it is direct and that it is not immediately exponential in the number of control states, although the worst-case complexity remains the same.
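The order-1 saturation loop that these methods build on is compact enough to sketch. The following Python is a minimal illustration of the Bouajjani–Esparza–Maler pre* construction: the encoding of pushdown rules and P-automaton transitions as tuples is an illustrative assumption, not the thesis's own formulation.

    # Minimal sketch of order-1 saturation (computing pre*). Assumes rules of
    # the form ((p, gamma), (p2, w)) with |w| <= 2, and a P-automaton given as
    # a set of transitions (q, symbol, q2); these encodings are illustrative.
    def saturate(rules, transitions):
        trans = set(transitions)
        changed = True
        while changed:
            changed = False
            for (p, gamma), (p2, w) in rules:
                # All automaton states reachable from p2 reading the word w.
                targets = {p2}
                for symbol in w:
                    targets = {q2 for q in targets
                               for (q1, a, q2) in trans
                               if q1 == q and a == symbol}
                # Saturation step: (p, gamma) can reach whatever (p2, w) can.
                for q in targets:
                    if (p, gamma, q) not in trans:
                        trans.add((p, gamma, q))
                        changed = True
        return trans

Termination is immediate in this order-1 setting because no states are added, only transitions over a fixed alphabet; the thesis's contribution for higher orders is precisely that this simple argument no longer suffices once new states must be introduced.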
212. From interactive to semantic image segmentation. Gulshan, Varun, January 2011.
This thesis investigates two well-defined problems in image segmentation, viz. interactive and semantic image segmentation. Interactive segmentation involves assisting a user in cutting objects out of an image, whereas semantic segmentation involves partitioning the pixels of an image into object categories. We investigate various models and energy formulations for both these problems in this thesis. In order to improve the performance of interactive systems, low-level texture features are introduced as a replacement for the more commonly used RGB features. To quantify the improvement obtained by using these texture features, two annotated datasets of images are introduced (one consisting of natural images, and the other of camouflaged objects). A significant improvement in performance is observed when using texture features for monochrome images and images containing camouflaged objects. We also explore adding mid-level cues such as shape constraints to interactive segmentation by introducing the idea of geodesic star convexity, which extends the existing notion of a star-convexity prior in two important ways: (i) it allows for multiple star centres, as opposed to the single star of the original prior, and (ii) it generalises the shape constraint by allowing for geodesic paths as opposed to Euclidean rays. Global minima of our energy function can be obtained subject to these new constraints. We also introduce Geodesic Forests, which exploit the structure of shortest paths in implementing the extended constraints. These extensions to star convexity allow us to use such constraints in a practical segmentation system. This system is evaluated by means of a “robot user” to measure the amount of interaction required in a precise way, and it is shown that having shape constraints reduces user effort significantly compared to existing interactive systems. We also introduce a new and harder dataset which augments the existing GrabCut dataset with more realistic images and ground truth taken from the PASCAL VOC segmentation challenge. In the latter part of the thesis, we bring in object-category-level information in order to make interactive segmentation tasks easier, and move towards fully automated semantic segmentation. An algorithm to automatically segment humans from cluttered images given their bounding boxes is presented. A top-down segmentation of the human is obtained using classifiers trained to predict segmentation masks from local HOG descriptors. These masks are then combined with bottom-up image information in a local GrabCut-like procedure. This algorithm is later completely automated to segment humans without requiring a bounding box, and is quantitatively compared with other semantic segmentation methods. We also introduce a novel way to acquire large quantities of segmented training data relatively effortlessly using the Kinect. In the final part of this work, we explore various semantic segmentation methods based on learning from bottom-up super-pixelisations. Different methods of combining multiple super-pixelisations are discussed and quantitatively evaluated on two segmentation datasets. We observe that simple combinations of independently trained classifiers on single super-pixelisations perform almost as well as complex methods based on jointly learning across multiple super-pixelisations.
We also explore CRF-based formulations for semantic segmentation, and introduce a novel visual-words-based description of object boundaries into the energy formulation. The object appearance and boundary parameters are trained jointly using structured output learning methods, and the benefit of adding pairwise terms is quantified on two different datasets.
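The geodesic side of the star-convexity constraints described above rests on shortest paths over an image-dependent metric. The sketch below computes geodesic distances from user-marked star centres with Dijkstra's algorithm on a pixel grid; the particular intensity-based edge cost and all parameter names are illustrative assumptions, not the system's actual implementation.

    # Minimal sketch: geodesic distances from star centres on a greyscale
    # image (2-D numpy array). Edge cost mixes unit length with intensity
    # difference, so shortest paths prefer homogeneous regions.
    import heapq
    import numpy as np

    def geodesic_distance(image, centres, gamma=1.0):
        h, w = image.shape
        dist = np.full((h, w), np.inf)
        heap = [(0.0, c) for c in centres]
        for _, (y, x) in heap:
            dist[y, x] = 0.0
        while heap:
            d, (y, x) = heapq.heappop(heap)
            if d > dist[y, x]:
                continue  # stale queue entry
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    step = 1.0 + gamma * abs(float(image[ny, nx]) - float(image[y, x]))
                    if d + step < dist[ny, nx]:
                        dist[ny, nx] = d + step
                        heapq.heappush(heap, (d + step, (ny, nx)))
        return dist

A pixel then lies inside the geodesic star-convex region of a centre when the geodesic path reaching it stays within the object, which is the sense in which geodesic paths generalise the Euclidean rays of the original prior.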
213. Model Checking Systems with Replicated Components using CSP. Mazur, Tomasz Krzysztof, January 2011.
The Parameterised Model Checking Problem (PMCP) asks whether an implementation Impl(t) satisfies a specification Spec(t) for all instantiations of the parameter t. In general, t can determine numerous entities: the number of processes used in a network, the type of data, the capacities of buffers, etc. The main theme of this thesis is the automation of uniform verification for a subclass of the PMCP with a parameter of the first kind, using techniques based on counter abstraction. Counter abstraction works by counting how many, rather than which, node processes are in a given state: for nodes with k local states, an abstract state (c(1), ..., c(k)) models a global state where c(i) processes are in the i-th state. We then use a threshold function z to cap the values of each counter. If, for some i, counter c(i) reaches its threshold z(i), then this is interpreted as there being z(i) or more nodes in the i-th state. The addition of thresholds makes abstract models independent of the instantiation of the parameter. We adapt standard counter abstraction techniques to concurrent reactive systems modelled using the CSP process algebra. We demonstrate how to produce abstract models of systems that do not use node identifiers (i.e. where all nodes are indistinguishable). Every such abstraction is, by construction, refined by all instantiations of the implementation. If the abstract model satisfies the specification, then a positive answer to the particular uniform verification problem can be deduced. We show that by adding node identifiers we make the uniform verification problem undecidable. We demonstrate a sound abstraction method that extends standard counter abstraction techniques to systems that make full use of node identifiers (in specifications and implementations). However, on its own, the method is not enough to give the answer to verification problems for all parameter instantiations. This issue has led us to the development of a type reduction theory, which, for a given verification problem, establishes a function phi that maps all (sufficiently large) instantiations T of the parameter to some fixed type T̂ and allows us to deduce that if Spec(T̂) is refined by phi(Impl(T)), then Spec(T) is refined by Impl(T). We can then combine this with our extended counter abstraction techniques and conclude that if the abstract model satisfies Spec(T̂), then the answer to the uniform verification problem is positive. We develop a symbolic operational semantics for CSP processes that satisfy certain normality requirements, and we provide a set of translation rules that allow us to concretise symbolic transition graphs. The type reduction theory relies heavily on these results. One of the main advantages of our symbolic operational semantics and the type reduction theory is their generality, which makes them applicable in other settings and allows the theory to be combined with abstraction methods other than those used in this thesis. Finally, we present TomCAT, a tool that automates the construction of counter abstraction models, and we demonstrate how our results apply in practice.
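The counting-and-capping step at the heart of this abstraction is simple enough to sketch. The Python below is a minimal illustration assuming global states are given as tuples of local-state indices; this encoding is for illustration only, not the thesis's CSP machinery.

    # Minimal sketch of counter abstraction with thresholds: c[i] == z[i]
    # is read as "z[i] or more nodes are in local state i".
    def abstract_state(global_state, k, z):
        counters = [0] * k
        for local in global_state:
            counters[local] += 1
        return tuple(min(counters[i], z[i]) for i in range(k))

    # Example: 5 nodes over 3 local states, thresholds (2, 2, 2):
    # abstract_state((0, 0, 0, 1, 2), 3, (2, 2, 2)) == (2, 1, 1)

Because the capped tuple ranges over a finite set regardless of how many nodes there are, the abstract model is independent of the parameter instantiation, which is exactly what makes it usable for uniform verification.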
214. Duality theory for optimal mechanism design. Giannakopoulos, Ioannis, January 2015.
In this work we present a general duality-theory framework for revenue maximization in additive Bayesian auctions involving multiple items and many bidders whose values for the goods follow arbitrary continuous joint distributions over some multi-dimensional real interval. Although the single-item case has been resolved in a very elegant way by the seminal work of Myerson [1981], optimal solutions involving more items still remain elusive. The framework extends linear programming duality and complementarity to constraints with partial derivatives. The dual system reveals the natural geometric nature of the problem and highlights its connection with the theory of bipartite graph matchings. We demonstrate the power of the framework by applying it to various special monopoly settings where a seller of multiple heterogeneous goods faces a buyer with independent item values drawn from various distributions of interest, to design both exact and approximately optimal selling mechanisms. Previous optimal solutions were only known for up to two and three goods, and for a very limited range of distributional priors. The duality framework is used not only for proving optimality, but perhaps more importantly, for deriving the optimal mechanisms themselves. Some of our main results include: the proposal of a simple deterministic mechanism, which we call the Straight-Jacket Auction (SJA) and which is defined in a greedy, recursive way through natural geometric constraints, for many uniformly distributed goods, where exact optimality is proven for up to six items and general optimality is conjectured; a scheme of sufficient conditions for exact optimality for two-good settings and general independent distributions; a technique for upper-bounding the optimal revenue for arbitrarily many goods, with an application to uniform and exponential priors; and the proof that deterministically offering all items in a single full bundle is the optimal way of selling multiple exponentially i.i.d. items.
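For orientation, the classical finite linear-programming duality that the framework generalises can be written out; the thesis's actual program is continuous, with partial-derivative constraints, so the pair below is only the starting intuition, not the program solved there.

    % Classical primal-dual pair (finite LP); the framework lifts this to
    % continuous programs whose constraints involve partial derivatives.
    \[
    (\mathrm{P})\quad \max_{x \ge 0} \; c^{\top} x \;\; \mathrm{s.t.} \;\; A x \le b
    \qquad\qquad
    (\mathrm{D})\quad \min_{y \ge 0} \; b^{\top} y \;\; \mathrm{s.t.} \;\; A^{\top} y \ge c
    \]
    % Complementary slackness: at optimality y_i (b - Ax)_i = 0 and
    % x_j (A^{\top} y - c)_j = 0; the dual certificates in the thesis play
    % the analogous role for the continuous revenue-maximization program.

As in the finite case, exhibiting a feasible dual solution with matching objective value certifies optimality of a candidate mechanism, which is how the framework both proves optimality and guides the derivation of the mechanisms themselves.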
215. Higher-order semantics for quantum programming languages with classical control. Atzemoglou, George Philip, January 2012.
This thesis studies the categorical formalisation of quantum computing, through the prism of type theory, in a three-tier process. The first stage of our investigation involves the creation of the dagger lambda calculus, a lambda calculus for dagger compact categories. Our second contribution lifts the expressive power of the dagger lambda calculus to that of a quantum programming language, by adding classical control in the form of complementary classical structures and dualisers. Finally, our third contribution demonstrates how our lambda calculus can be applied to various well-known problems in quantum computation: Quantum Key Distribution, the quantum Fourier transform, and the teleportation protocol.
216. Pictures of processes: automated graph rewriting for monoidal categories and applications to quantum computing. Kissinger, Aleks, January 2011.
This work is about diagrammatic languages, how they can be represented, and what they in turn can be used to represent. More specifically, it focuses on representations and applications of string diagrams. String diagrams are used to represent a collection of processes, depicted as "boxes" with multiple (typed) inputs and outputs, depicted as "wires". If we allow plugging input and output wires together, we can intuitively represent complex compositions of processes, formalised as morphisms in a monoidal category. While string diagrams are very intuitive, existing methods for defining them rigorously rely on topological notions that do not extend naturally to automated computation. The first major contribution of this dissertation is the introduction of a discretised version of a string diagram called a string graph. String graphs form a partial adhesive category, so they can be manipulated using double-pushout graph rewriting. Furthermore, we show how string graphs modulo a rewrite system can be used to construct free symmetric traced and compact closed categories on a monoidal signature. The second contribution is in the application of graphical languages to quantum information theory. We use a mixture of diagrammatic and algebraic techniques to prove a new classification result for strongly complementary observables. Namely, maximal sets of strongly complementary observables of dimension D must be of size no larger than 2, and are in 1-to-1 correspondence with the Abelian groups of order D. We also introduce a graphical language for multipartite entanglement and illustrate a simple graphical axiom that distinguishes the two maximally-entangled tripartite qubit states: GHZ and W. Notably, we illustrate how the algebraic structures induced by these operations correspond to the (partial) arithmetic operations of addition and multiplication on the complex projective line. The third contribution is a description of two software tools developed in part by the author to implement much of the theoretical content described here. The first tool is Quantomatic, a desktop application for building string graphs and graphical theories, as well as performing automated graph rewriting visually. The second is QuantoCoSy, which performs fully automated, model-driven theory creation using a procedure called conjecture synthesis.
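The discretisation step can be made concrete with a small sketch. In a string graph, boxes and wires both become vertices: wire-vertices subdivide the wires of a diagram, must have in- and out-degree at most one, and no edge may join two box-vertices directly. The dataclass encoding below is an illustrative assumption, not Quantomatic's internal representation.

    # Minimal sketch of a string graph as a directed graph with two kinds
    # of vertices; is_wellformed checks the defining degree conditions.
    from dataclasses import dataclass, field

    @dataclass
    class StringGraph:
        box_vertices: dict = field(default_factory=dict)   # name -> box label
        wire_vertices: dict = field(default_factory=dict)  # name -> wire type
        edges: set = field(default_factory=set)            # directed (src, tgt)

        def is_wellformed(self):
            # Wire-vertices have in-degree <= 1 and out-degree <= 1.
            for w in self.wire_vertices:
                if sum(1 for (s, t) in self.edges if t == w) > 1:
                    return False
                if sum(1 for (s, t) in self.edges if s == w) > 1:
                    return False
            # Every edge must touch at least one wire-vertex.
            return all(s in self.wire_vertices or t in self.wire_vertices
                       for (s, t) in self.edges)

Representing wires as chains of vertices is what makes double-pushout rewriting applicable: matching and gluing become ordinary graph operations rather than topological ones.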
217. The regular histories formulation of quantum theory. Priebe, Roman, January 2012.
A measurement-independent formulation of quantum mechanics called ‘regular histories’ (RH) is presented, able to reproduce the predictions of the standard formalism without the need for a quantum-classical divide or the presence of an observer. It applies to closed systems and features no wave-function collapse. Weights are assigned only to histories satisfying a criterion called ‘regularity’. As the set of regular histories is not closed under the Boolean operations, this requires a new concept of weight, called ‘likelihood’. Remarkably, this single change is enough to overcome many of the well-known obstacles to a sensible interpretation of quantum mechanics. For example, Bell’s theorem, which makes essential use of probabilities, places no constraints on the locality properties of a theory based on likelihoods. Indeed, RH is both counterfactually definite and free from action-at-a-distance. Moreover, in RH the meaningful histories are exactly those that can be witnessed at least in principle. Since it is especially difficult to make sense of the concept of probability for histories whose occurrence is intrinsically indeterminable, this makes likelihoods easier to justify than probabilities. Interaction with the environment causes the kinds of histories relevant at the macroscopic scale of human experience to be witnessable and indeed to generate Boolean algebras of witnessable histories, on which likelihoods reduce to ordinary probabilities. Furthermore, a formal notion of inference defined on regular histories satisfies, when restricted to such Boolean algebras, the classical axioms of implication, explaining our perception of a largely classical world. Even in the context of general quantum histories the rules of reasoning in RH are remarkably intuitive. Classical logic must only be amended to reflect the fundamental premise that one cannot meaningfully talk about the occurrence of unwitnessable histories. Crucially, different histories with the same ‘physical content’ can be interpreted in the same way and independently of the family in which they are expressed. RH thereby rectifies a critical flaw of its inspiration, the consistent histories (CH) approach, which requires either an as yet unknown set selection rule or a paradigm shift towards an unconventional picture of reality whose elements are histories-with-respect-to-a-framework. It can be argued that RH compares favourably with other proposed interpretations of quantum mechanics in that it resolves the measurement problem while retaining an essentially classical worldview, without parallel universes, a framework-dependent reality or action-at-a-distance.
218. Finite element simulation of a poroelastic model of the CSF system in the human brain during an infusion test. Eisenträger, Almut, January 2012.
Cerebrospinal fluid (CSF) fills a system of cavities at the centre of the brain, known as ventricles, and the subarachnoid space surrounding the brain and the spinal cord. In addition, CSF is in free communication with the interstitial fluid of the brain tissue. Disturbances in CSF dynamics can lead to diseases that cause severe brain damage or even death. So-called infusion tests are frequently performed in the diagnosis of such diseases. In this type of test, changes in average CSF pressure are related to changes in CSF volume through infusion of known volumes of additional fluid. Traditionally, infusion tests are analysed with single-compartment models, which treat all CSF as part of one compartment and balance fluid inflow, outflow and storage through a single ordinary differential equation. Poroelastic models of the brain, on the other hand, have been used to simulate spatial changes with disease, particularly of the ventricle size, on larger time scales of days, weeks or months. Wirth and Sobey (2008) developed a two-fluid poroelastic model of the brain in which CSF pressure pulsations are linked to arterial blood pressure pulsations. In this thesis, this model is developed further and simulation results are compared to clinical data. First, the functional form of the compliance, which governs the storage of CSF in single-compartment models, is examined by comparison of two different compliance models with clinical data. The derivations of a single-fluid and a two-fluid poroelastic model of the brain in spherical symmetry are laid out in detail and some of the parameters are related to the compliance functions considered earlier. The finite element implementation of the two-fluid model is described and finally simulation results of the average CSF pressure response and the pressure pulsations are compared to clinical data.
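The single-compartment starting point can be made concrete with a short sketch: a fluid balance in which a pressure-dependent compliance stores fluid, an infusion term adds it, and outflow follows a resistance law. All parameter values and the particular form C(p) = 1/(kp) below are illustrative assumptions, not the clinical fits used in the thesis.

    # Minimal sketch of a single-compartment infusion test:
    # C(p) dp/dt = I(t) - (p - p0) / R, with C(p) = 1 / (k * p).
    import numpy as np
    from scipy.integrate import solve_ivp

    R, k, p0 = 10.0, 0.5, 8.0   # outflow resistance, elastance coefficient, reference pressure

    def infusion(t):
        return 1.5 if 60.0 <= t <= 600.0 else 0.0   # constant-rate infusion phase

    def dpdt(t, p):
        compliance = 1.0 / (k * p[0])                # pressure-dependent compliance
        return [(infusion(t) - (p[0] - p0) / R) / compliance]

    sol = solve_ivp(dpdt, (0.0, 900.0), [10.0], max_step=1.0)
    print(f"peak pressure: {sol.y[0].max():.1f}")

In a model of this kind the pressure rises towards a new plateau during infusion and relaxes afterwards, and it is the functional form of the compliance that shapes this response, which is why the thesis compares two compliance models against clinical data before moving to the poroelastic setting.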
219. On learning assumptions for compositional verification of probabilistic systems. Feng, Lu, January 2014.
Probabilistic model checking is a powerful formal verification method that can ensure the correctness of real-life systems that exhibit stochastic behaviour. The work presented in this thesis aims to solve the scalability challenge of probabilistic model checking, by developing, for the first time, fully-automated compositional verification techniques for probabilistic systems. The contributions are novel approaches for automatically learning probabilistic assumptions for three different compositional verification frameworks. The first framework considers systems that are modelled as Segala probabilistic automata, with assumptions captured by probabilistic safety properties. A fully-automated approach is developed to learn assumptions for various assume-guarantee rules, including an asymmetric rule Asym for two-component systems, an asymmetric rule Asym-N for n-component systems, and a circular rule Circ. This approach uses the L* and NL* algorithms for automata learning. The second framework considers systems where the components are modelled as probabilistic I/O systems (PIOSs), with assumptions represented by Rabin probabilistic automata (RPAs). A new (complete) assume-guarantee rule Asym-Pios is proposed for this framework. In order to develop a fully-automated approach for learning assumptions and performing compositional verification based on the rule Asym-Pios, a (semi-)algorithm to check language inclusion of RPAs and an L*-style learning method for RPAs are also proposed. The third framework considers the compositional verification of discrete-time Markov chains (DTMCs) encoded in Boolean formulae, with assumptions represented as Interval DTMCs (IDTMCs). A new parallel operator for composing an IDTMC and a DTMC is defined, and a new (complete) assume-guarantee rule Asym-Idtmc that uses this operator is proposed. A fully-automated approach is formulated to learn assumptions for rule Asym-Idtmc, using the CDNF learning algorithm and a new symbolic reachability analysis algorithm for IDTMCs. All approaches proposed in this thesis have been implemented as prototype tools and applied to a range of benchmark case studies. Experimental results show that these approaches are helpful for automating the compositional verification of probabilistic systems through learning small assumptions, but may suffer from high computational complexity or even undecidability. The techniques developed in this thesis can assist in developing scalable verification frameworks for probabilistic models.
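For orientation, the shape of the asymmetric rule can be written out. The sketch below follows the probabilistic assume-guarantee formulation of Kwiatkowska et al. on which the first framework builds; the exact side conditions are an assumption here and are spelled out in the thesis.

    % Asymmetric assume-guarantee rule (Asym), sketched: if M1 satisfies the
    % probabilistic safety assumption A with probability at least p_A, and
    % M2 under that assumption guarantees G with probability at least p_G,
    % then the composed system satisfies G with probability at least p_G.
    \[
    \frac{\langle \mathrm{true} \rangle \; M_1 \; \langle A \rangle_{\ge p_A}
          \qquad
          \langle A \rangle_{\ge p_A} \; M_2 \; \langle G \rangle_{\ge p_G}}
         {\langle \mathrm{true} \rangle \; M_1 \parallel M_2 \; \langle G \rangle_{\ge p_G}}
    \quad (\mathrm{Asym})
    \]

The learning approaches in the thesis automate the search for a suitable assumption A: candidate assumptions proposed by the L*-style learner are checked against both premises, and counterexamples drive refinement until the rule applies or the property is refuted.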
220. Synthesis and alternating automata over real time. Jenkins, Mark Daniel, January 2012.
Alternating timed automata are a powerful extension of classical Alur-Dill timed automata that are closed under all Boolean operations. They have played a key role, among others, in providing verification algorithms for prominent specification formalisms such as Metric Temporal Logic. Unfortunately, when interpreted over an infinite dense time domain (such as the reals), alternating timed automata have an undecidable language emptiness problem. In this thesis we consider restrictions on this model that restore the decidability of the language emptiness problem. We consider the restricted class of safety alternating timed automata, which can encode a corresponding Safety fragment of Metric Temporal Logic. This thesis connects these two formalisms with insertion channel machines, a model of faulty communication, and demonstrates that the three formalisms are interreducible. We thus prove a non-elementary lower bound for the language emptiness problem for 1-clock safety alternating timed automata and further obtain a new proof of decidability for this problem. Complementing the restriction to safety properties, we consider interpreting the automata over bounded dense time domains. We prove that the time-bounded language emptiness problem is decidable but non-elementary for unrestricted alternating timed automata. The language emptiness problem for alternating timed automata is a special case of a much more general and abstract logical problem: Church's synthesis problem. Given a logical specification S(I,O), Church's problem is to determine whether there exists an operator F that implements the specification in the sense that S(I,F(I)) holds for all inputs I. It is a classical result that the synthesis problem is decidable in the case that the specification and implementation are given in monadic second-order logic over the naturals. We prove that this decidability extends to MSO over the reals with order and furthermore to MSO over every fixed bounded interval of the reals with order and the +1 relation.
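Written out as a formula, Church's problem as stated here asks, for a specification S over input and output behaviours, whether

    \[
    \exists F \;\; \forall I : \; S\bigl(I, F(I)\bigr),
    \]

with decidability depending on the logic in which S is expressed and the domain over which the inputs I range, which is precisely the axis the thesis varies: from MSO over the naturals to MSO over the reals with order, and over fixed bounded real intervals with order and the +1 relation.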