51. Computational properties of spatial logics in the real plane. Griffiths, Aled Alun (January 2008)
Spatial logics are formal languages whose predicate and function symbols are interpreted as geometric relations and properties. In order to use spatial logics to perform automated reasoning on spatial data, we must have formal procedures which can decide the satisfiability of the formulae of these logics. However, first-order spatial logics are typically undecidable and thus admit no such procedure. By restricting the syntax of a spatial logic in certain ways, we can obtain languages which are decidable. This thesis examines a particular type of syntactic restriction which results in a family of spatial logics called topological constraint languages. The thesis is mainly concerned with the various approaches taken to solving the satisfiability problems of these languages, and contributes new complexity results and decision procedures for the satisfiability problems of some of them.
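To illustrate the kind of reasoning such decision procedures must support (an assumed RCC8-style example over regions of the plane, not taken from the thesis): the conjunction of topological constraints

```latex
\mathrm{PO}(a,b) \;\wedge\; \mathrm{TPP}(b,c) \;\wedge\; \mathrm{DC}(a,c)
```

is unsatisfiable, since TPP(b,c) forces b to lie inside c, so any region overlapping b also meets c and cannot be disconnected from it. A satisfiability procedure for a topological constraint language must detect exactly such inconsistencies.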

52. A study of erasure correcting codes. Cai, Jing (January 2010)
This work focuses on erasure codes, particularly high-performance ones, and the related decoding algorithms, especially those of low computational complexity. The work is composed of several pieces, but the main components are developed within two themes. Within the first, ideas of message passing are applied to solve the erasures left after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the binary erasure channel (BEC) is introduced as the recovery algorithm, and Gallager's bit-flipping algorithm is developed further into the guess and multi-guess algorithms, applied in particular to recover the erasures the recovery algorithm leaves unsolved. Within the second, a novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with reduced computational complexity. A further study of the marginal number of erasures correctable by the In-place algorithm establishes a lower bound on the average number of correctable erasures. Following the same spirit of searching for the most likely codeword given the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach maximum-likelihood performance for all linear block codes. To maximise the recovery capability of the In-place algorithm in network transmissions, we propose a product packetisation structure which keeps its computational complexity manageable; combined with this structure, the complexity remains below the quadratic bound. We then extend the scheme to the Rayleigh fading channel, where both errors and erasures must be handled. By concatenating an outer code, such as a BCH code, product-packetised Reed-Solomon (RS) codes under the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
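To make the message-passing idea concrete: on the binary erasure channel, any parity check involving exactly one erased position determines that position, and iterating this "peeling" process amounts to BP decoding on the BEC. Below is a minimal Python sketch under an assumed toy (7,4) Hamming parity-check matrix; it illustrates the recovery step only, not the thesis's matrix-based formulation or the guess/multi-guess extensions.

```python
def peel_decode(H, y):
    """Recover erasures (marked None) by repeatedly applying parity checks
    that involve exactly one erased position (BP/peeling on the BEC)."""
    y = list(y)
    progress = True
    while progress and any(v is None for v in y):
        progress = False
        for row in H:
            idx = [j for j, h in enumerate(row) if h == 1]
            unknown = [j for j in idx if y[j] is None]
            if len(unknown) == 1:  # this check pins down one erasure
                # even parity: the erased bit equals the XOR of the known bits
                y[unknown[0]] = sum(y[j] for j in idx if y[j] is not None) % 2
                progress = True
    return y  # any remaining None is a stopping set: peeling alone fails

# Toy example: (7,4) Hamming code, codeword [0,1,1,0,1,1,0] with two erasures.
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
print(peel_decode(H, [0, 1, None, 0, 1, None, 0]))  # -> [0, 1, 1, 0, 1, 1, 0]
```

When peeling stalls on a stopping set, the guess and multi-guess algorithms mentioned above hypothesise values for a few remaining erasures to restart it; maximum-likelihood decoding instead solves the full linear system.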

53. A study of iterative capacity-approaching codes and their optimal decoding algorithms. Yang, Li (January 2010)
No description available.

54. Applying multi-resolution numerical methods to geodynamics. Davies, David Rhodri (January 2008)
Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled structured grid solution strategies, the unstructured techniques utilized in 2-D would throw away the regular grid and, with it, the major benefits of the current solution algorithms. Alternative avenues towards multi-resolution must therefore be sought. A non-uniform structured method that produces similar advantages to unstructured grids is introduced here, in the context of the pre-existing 3-D spherical mantle dynamics code, TERRA. The method, based upon the multigrid refinement techniques employed in the field of computational engineering, is used to refine and solve on a radially non-uniform grid. It maintains the key benefits of TERRA's current configuration, whilst also overcoming many of its limitations. Highly efficient solutions to non-uniform problems are obtained. The scheme is highly resourceful in terms of RAM, meaning that one can attempt calculations that would otherwise be impractical. In addition, the solution algorithm reduces the CPU-time needed to solve a given problem. Validation tests illustrate that the approach is accurate and robust. Furthermore, by being conceptually simple and straightforward to implement, the method negates the need to reformulate large sections of code. The technique is applied to highly advanced 3-D spherical mantle convection models. Due to its resourcefulness in terms of RAM, the modified code allows one to efficiently resolve thermal boundary layers at the dynamical regime of Earth's mantle.
The simulations presented are therefore at a vigor superior to the highest attained to date in 3-D spherical geometry, achieving Rayleigh numbers of order 10^9. Upwelling structures are examined, focussing upon the nature of deep mantle plumes. Previous studies have shown long-lived, anchored, coherent upwelling plumes to be a feature of low to moderate vigor convection. Since more vigorous convection traditionally shows greater time-dependence, the fixity of upwellings would not logically be expected for non-layered convection at higher vigors. However, such configurations have recently been observed. With hot-spots widely regarded as the surface expression of deep mantle plumes, it is of great importance to ascertain whether or not these conclusions are valid at the dynamical regime of Earth's mantle. Results demonstrate that at these high vigors, steady plumes do arise. However, they do not dominate the planform as in lower vigor cases: they coexist with mobile and ephemeral plumes and display ranging characteristics, which are consistent with hot-spot observations on Earth. Those plumes that do remain steady alter in intensity throughout the simulation, strengthening and weakening over time. Such behavior is caused by an irregular supply of cold material to the core-mantle boundary region, suggesting that subducting slabs are partially responsible for episodic plume magmatism on Earth. With this in mind, the influence of the upper boundary condition upon the planform of mantle convection is further examined. With the modified code, the CPU-time needed to solve a given problem is reduced and, hence, several simulations can be run efficiently, allowing a relatively rapid parameter space mapping of various upper boundary conditions. Results, in accordance with the investigations on upwelling structures, demonstrate that the surface exerts a profound control upon internal dynamics, manifesting itself not only in convective structures, but also in thermal profiles, Nusselt numbers and velocity patterns. Since the majority of geodynamical simulations incorporate a surface condition that is not at all representative of Earth, this is a worrying, yet important conclusion. By failing to address the surface appropriately, geodynamical models, regardless of their sophistication, cannot be truly applicable to Earth. In summary, the techniques developed herein, in both 2- and 3-D, are extremely practical and highly efficient, yielding significant advantages for geodynamical simulations. Indeed, they allow one to solve problems that would otherwise be unfeasible.
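For orientation (a standard definition assumed here, not quoted from the thesis), the Rayleigh number measuring convective vigor in a fluid layer of depth d is

```latex
\mathrm{Ra} \;=\; \frac{\rho\, g\, \alpha\, \Delta T\, d^{3}}{\kappa\, \mu}
```

where ρ is density, g gravitational acceleration, α thermal expansivity, ΔT the temperature drop across the layer, κ thermal diffusivity and μ dynamic viscosity; values of order 10^9, as achieved in the simulations above, lie at the vigorous end of estimates for Earth's mantle.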

55. Hoare logic and VDM: machine-checked soundness and completeness proofs. Kleymann, Thomas (January 1998)
Investigating soundness and completeness of verification calculi for imperative programming languages is a challenging task. Many incorrect results have been published in the past. We take advantage of the computer-aided proof tool LEGO to interactively establish soundness and completeness of both Hoare Logic and the operation decomposition rules of the Vienna Development Method (VDM) with respect to operational semantics. We deal with parameterless recursive procedures and local variables in the context of total correctness. As a case study, we use LEGO to verify the correctness of Quicksort in Hoare Logic. As our main contribution, we illuminate the rôle of auxiliary variables in Hoare Logic. They are required to relate the value of program variables in the final state with the value of program variables in the initial state. In our formalisation, we reflect their purpose by interpreting assertions as relations on states and a domain of auxiliary variables. Furthermore, we propose a new structural rule for adjusting auxiliary variables when strengthening preconditions and weakening postconditions. This rule is stronger than all previously suggested structural rules, including rules of adaptation. With the new treatment, we are able to show that, contrary to common belief, Hoare Logic subsumes VDM in that every derivation in VDM can be naturally embedded in Hoare Logic. Moreover, we establish completeness results uniformly as corollaries of Most General Formula theorems which remove the need to reason about arbitrary assertions.
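To illustrate the role of auxiliary variables (a standard example, assumed rather than quoted from the thesis): an auxiliary variable Z does not occur in the program, but freezes an initial value so the postcondition can refer back to it. The second rule below is one rendering, found in later presentations of this work, of the strengthened rule of consequence, with assertions interpreted as relations on auxiliary variables Z and states s, t.

```latex
% An auxiliary variable Z records the initial value of x:
\{\, x = Z \,\}\;\; x := x + 1 \;\;\{\, x = Z + 1 \,\}

% Strengthened rule of consequence, adjusting auxiliary variables:
\frac{\vdash \{P'\}\; c\; \{Q'\} \qquad
      \forall Z\, s\, t.\;\; P\,Z\,s \;\Rightarrow\;
      \bigl(\forall Z'.\; P'\,Z'\,s \Rightarrow Q'\,Z'\,t\bigr)
      \;\Rightarrow\; Q\,Z\,t}
     {\vdash \{P\}\; c\; \{Q\}}
```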

56. Proving correctness of modular functional programs. Owens, Christopher (January 1999)
One reason for studying and programming in functional programming languages is that they are easy to reason about, yet there is surprisingly little work on proving the correctness of large functional programs. In this dissertation I show how to provide a system for proving the correctness of large programs written in a major functional programming language, ML. ML is split into two parts: the Core (in which the actual programs are written), and Modules (which are used to structure Core programs). The dissertation has three main themes. First, due to the detail and complexity of proofs of programs, a realistic system should use a computer proof assistant, so I discuss how such a system can be coded in a generic proof assistant (I use Paulson's Isabelle). Second, I give a formal proof system for proving properties of programs written in the functional part of Core ML. Third, the Modules language is one of ML's strengths: it allows the programmer to write large programs by controlling the interactions between their parts. In the final part of the dissertation I give a method for proving the correctness of ML Modules programs using the well-known data reification method. Proofs of reification using this method boil down to proofs in the system for the Core.

57. A typed operational semantics for type theory. Goguen, Healfdene (January 1994)
Untyped reduction provides a natural operational semantics for type theory. Normalization results say that such a semantics is sound. However, this reduction does not take type information into account and gives no information about canonical forms for terms. We introduce a new operational semantics, which we call typed operational semantics, which defines a reduction to normal form for terms which are well-typed in the type theory. The central result of the thesis is soundness of the typed operational semantics for the original system. Completeness of the semantics is straightforward. We demonstrate that this equivalence between the declarative and operational presentations of type theory has important metatheoretic consequences: results such as strengthening, subject reduction and strong normalization follow by straightforward induction on derivations in the new system. We introduce these ideas in the setting of the simply typed lambda calculus. We then extend the techniques to Luo's system UTT, which is Martin-Löf's Logical Framework extended by a general mechanism for inductive types, a predicative universe and an impredicative universe of propositions. We also give a proof-irrelevant set-theoretic semantics for UTT.
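The metatheoretic consequences mentioned can be stated concretely for the simply typed lambda calculus (standard formulations, assumed here for illustration):

```latex
% Subject reduction: reduction preserves typing
\Gamma \vdash t : A \;\wedge\; t \longrightarrow t'
  \;\;\Longrightarrow\;\; \Gamma \vdash t' : A

% Strong normalization: no well-typed term admits an infinite reduction
\Gamma \vdash t : A \;\;\Longrightarrow\;\; t \in \mathrm{SN}

% Strengthening: unused variables can be dropped from the context
\Gamma,\, x{:}B \vdash t : A \;\wedge\; x \notin \mathrm{FV}(t)
  \;\;\Longrightarrow\;\; \Gamma \vdash t : A
```

In the typed operational semantics these follow by induction on derivations, since the reduction judgement itself carries typing information.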

58. Correctness-oriented approaches to software development. Gilmore, Stephen (January 1991)
This thesis reports upon the experimental development of a software system. The domain of interest of this study is the use of mathematical reasoning in software development. An experiment is devised in which a modular software system is formally specified in a variety of specification styles. These initial specifications are subsequently refined to efficiently executable implementations. The refinements of the specifications are supported by differing amounts of mathematical reasoning. The issues to be investigated are the effect of increased use of mathematical analysis in software development and the influence of specification and refinement style on the quality of the subsequent implementation. Implementation quality is determined by: correctness with respect to the initial formal specification; clarity of the implementation; and machine efficiency. The products of the development are the analysis of specification, refinement and implementation styles and the software system itself.

59. Universal structure and a categorical framework for type theory. Takeyama, Makoto (January 1995)
This thesis investigates the possibility of a computer-checked language for categories with extra structure; the language is to describe objects and morphisms of those categories and to reason about them. We do so first by developing an abstract analysis of representability. This is followed by the investigation of a categorical framework for studying type theory. Our computer-checked language therefore allows us to reason about the semantics of programming languages and models of logics. In order to provide our computer-checked language, we need to classify categories with extra structure. Traditionally, that has been done in terms of equational structure, or more generally, essentially algebraic structure. That has proved to be somewhat awkward, both conceptually and computationally, so we give an alternative development of categories with extra structure in terms of universal structure, universality being the most important and most central concept of category theory. This unifies many of the concepts of category theory, in particular many of those of greatest interest to computer scientists, such as cartesian closed structure, fibrations with extra structure, limits, colimits, and natural numbers objects. We use our definition of universal structure to develop a computer-checked language in the proof system LEGO. We further give an abstract development of universal structure by showing how the concept of fibrations with extra structure may be seen in terms of universal structure. We continue with an investigation of the use of category theory within computer science. One of the principal uses, perhaps the deepest use, is to provide a semantics for type theory. We give a unified treatment of those categories with extra structure that are needed for the semantics of type theory, thus allowing us to use our computer-checked language to reason about the semantics of programs, via use of the underlying type theory of a programming language. It also allows us a semantic study of logics, via use of their corresponding type theories as given by the Curry-Howard isomorphism. This provides what we call a categorical framework for type theory. It extends the relationship between typed lambda calculus and cartesian closed categories, allowing us, for instance, to account for dependent types, as they appear in some programming languages and proof systems such as LEGO, with its underlying type system the Extended Calculus of Constructions. Finally we illustrate our categorical framework with its universal structures by a study of the fibration of 'deliverables'.
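As one concrete instance of structure captured by universality (standard category theory, included for illustration): cartesian closed structure amounts to an exponential object C^B together with the natural bijection

```latex
\mathrm{Hom}(A \times B,\; C) \;\cong\; \mathrm{Hom}(A,\; C^{B})
```

natural in A, with the evaluation map ev : C^B × B → C as counit. This is the universal property underlying the interpretation of function types in typed lambda calculi, and hence the relationship between typed lambda calculus and cartesian closed categories mentioned above.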

60. Diagrammatic representations in domain-specific languages. Tourlas, Konstantinos (January 2002)
One emerging approach to reducing the labour and costs of software development favours the specialisation of techniques to particular application domains. The rationale is that programs within a given domain often share enough common features and assumptions to enable the incorporation of substantial support mechanisms into domain-specific programming languages and associated tools. Instead of being machine-oriented, algorithmic implementations, programs in many domain-specific languages (DSLs) are rather user-level, problem-oriented specifications of solutions. Taken further, this view suggests that the most appropriate representation of programs in many domains is diagrammatic, in a way which derives from existing design notations in the domain. This thesis conducts an investigation, using mathematical techniques and supported by case studies, of issues arising from the use of diagrammatic representations in DSLs. Its structure is conceptually divided into two parts: the first is concerned with semantic and reasoning issues; the second introduces an approach to describing the syntax and layout of diagrams, in a way which addresses some pragmatic aspects of their use. The empirical context of our work is that of IEC 1131-3, an industry standard programming language for embedded control systems. The diagrammatic syntax of IEC 1131-3 consists of circuit (i.e. box-and-wire) diagrams, emphasising a data-flow view, and variants of Petri net diagrams, suited to a control-flow view. The first contribution of the thesis is the formalisation of the diagrammatic syntax and the semantics of IEC 1131-3 languages, as a prerequisite to the application of algebraic techniques. More generally, we outline an approach to the design of diagrammatic DSLs, emphasising compositionality in the semantics of the language so as to allow the development of simple proof systems for inferring properties which are deemed essential in the domain. The control-flow subset of IEC 1131-3 is carefully evaluated, and is subsequently re-designed, to yield a straightforward proof system for a restricted, yet commonly occurring, class of safety properties. A substantial part of the thesis deals with DSLs in which programs may be represented both textually and diagrammatically, as indeed is the case with IEC 1131-3. We develop a formalisation of the data-flow diagrams in IEC 1131-3.
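To give the data-flow view of box-and-wire diagrams a concrete reading (a hedged sketch under assumed representations, not the thesis's formalisation): a circuit diagram can be modelled as a directed acyclic graph of boxes, with its semantics computed compositionally by evaluating boxes in dependency order.

```python
from graphlib import TopologicalSorter

def eval_diagram(boxes, inputs):
    """Evaluate a box-and-wire diagram compositionally.

    boxes maps a box name to (function, list of input wire names);
    inputs supplies the externally fed wires. Both representations
    are hypothetical, chosen only to illustrate the data-flow view."""
    deps = {name: set(srcs) for name, (_, srcs) in boxes.items()}
    values = dict(inputs)
    for name in TopologicalSorter(deps).static_order():
        if name in values:        # an external input wire
            continue
        fn, srcs = boxes[name]
        values[name] = fn(*(values[s] for s in srcs))
    return values

# Example circuit: out = (a AND b) OR (NOT b)
boxes = {
    "and1": (lambda x, y: x and y, ["a", "b"]),
    "not1": (lambda x: not x,      ["b"]),
    "out":  (lambda x, y: x or y,  ["and1", "not1"]),
}
print(eval_diagram(boxes, {"a": True, "b": False})["out"])  # -> True
```

Petri-net-style control-flow diagrams, being cyclic and stateful, fall outside this acyclic evaluation order; the sketch covers only the data-flow reading.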