11

An investigation into Event-B methodologies and timing constraint modelling

Sulskus, Gintautas January 2017 (has links)
In the domain of formal modelling and verification of real-time safety-critical systems, our focus is on complex (i.e. nested, interdependent and cyclic) timing constraints. We strengthen the palette of modelling tools and techniques for describing and verifying timing properties in such real-time systems. Our contribution builds on Event-B, a formal language for systems modelling based on set theory and predicate logic. The language has the advantage of mechanised proof and tackles system complexity through stepwise refinement. The foundation of our scheme, the notion of a timing interval, is a higher-level abstraction expressed in terms of state machines and a formal timing interval specification. It can formally bind together several related timing requirements, expressed using delay, deadline and expiry concepts. To support the timing interval abstraction through the Event-B refinement method, we present five compositional refinement transformations: Sub-Interval, Alternative, Abort-to-Response, Single-to-Multi and Retry. The timing interval and its refinement transformations use a template-based generative scheme that translates timed models, specified with state machines and timing intervals, into Event-B. We provide a workflow and a recommended convention for modelling and graphically representing a timing interval and its refinement transformations in state machine diagrams. The modelling of a timing interval and its refinement transformations is automated with our tool, tiGen. Finally, we validate our approach and the recommended development workflow in three case studies. The results show that our timing interval can be developed through multiple levels of refinement, and that the process of modelling and proving is largely automated.
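As a rough illustration of the central abstraction, and not the thesis's Event-B encoding, a timing interval that binds delay, deadline and expiry constraints on a trigger/response pair might be sketched as below; the field names and the `satisfied` check are our assumptions.

```ocaml
(* Hedged sketch: a "timing interval" binding several related timing
   requirements (delay, deadline, expiry) between a trigger and a response.
   Illustrative OCaml only; the thesis formalises this in Event-B. *)

type interval = {
  delay    : float option;  (* response must not occur before trigger + delay *)
  deadline : float option;  (* response must occur by trigger + deadline      *)
  expiry   : float option;  (* the interval is void after trigger + expiry    *)
}

(* Check one observed trigger/response pair against the interval. *)
let satisfied iv ~trigger ~response =
  let ok_delay =
    match iv.delay with None -> true | Some d -> response >= trigger +. d in
  let ok_deadline =
    match iv.deadline with None -> true | Some d -> response <= trigger +. d in
  let not_expired =
    match iv.expiry with None -> true | Some e -> response <= trigger +. e in
  ok_delay && ok_deadline && not_expired

let () =
  let iv = { delay = Some 1.0; deadline = Some 5.0; expiry = None } in
  assert (satisfied iv ~trigger:0.0 ~response:2.0);
  assert (not (satisfied iv ~trigger:0.0 ~response:0.5))
```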
12

Safety-Critical Java Level 2 : applications, modelling, and verification

Luckcuck, Matthew January 2016 (has links)
Safety-Critical Java (SCJ) introduces a new programming paradigm for applications that must be certified. To aid certification, SCJ is organised into three compliance levels, which increase in complexity from Level 0 to Level 2. The SCJ language specification (JSR 302) is an Open Group Standard, but it does not include verification techniques. Previous work has addressed verification for Level 0 and Level 1 programs. This thesis supports the much more complex SCJ Level 2 programs, which allow for the programming of highly concurrent multi-processor applications with Java threads and wait and notify mechanisms. The SCJ language specification is clear on what constitutes a Level 2 program but not why it should be used. The utility of Levels 0 and 1 is clear from their features. The scheduling behaviour required by a program is a primary indicator of whether or not Level 0 should be used. However, both Levels 1 and 2 use concurrency and fixed-priority scheduling, so this cannot be used as an indicator to choose between them. This thesis presents the first examination of the utility of the unique features of Level 2 and presents use cases that justify the availability of these features. This thesis presents a technique for modelling SCJ Level 2 programs using the state-rich process algebra Circus. The model abstracts away from resources (for example, memory) and scheduling. An SCJ Level 2 program is represented by a combination of a generic model of the SCJ API (the framework model) and an application-specific model (the application model) of that program. The framework model is reused for each modelled program, whereas the application model is generated afresh. This is the first formal semantics of the SCJ Level 2 paradigm and it provides both top-down and bottom-up benefits. Top-down, it is an essential ingredient in the development of refinement-based reasoning techniques for SCJ Level 2 programs. These can be used to develop Level 2 programs that are correct-by-construction. Bottom-up, the technique can be used as a verification tool for Level 2 programs. This is achieved with the Failures Divergences Refinement checker version 3 (FDR3), after translating the model from Circus to the machine-readable version of CSP (CSPM). FDR3 allows animation and model checking, which can reveal sources of deadlock, livelock, and divergence. The CSPM version of the model fits the same pattern, with a generic model of the API being combined with an application-specific model of the program. Because the model ignores scheduling, these checks are a worst-case analysis and can give false negatives.
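To give a flavour of the verification step, the sketch below is a toy OCaml state-space exploration in the spirit of composing a generic framework model with an application model and checking the result for deadlock, as FDR3 does for the CSPM model; the process representation and the example processes are assumptions of ours, not the thesis's Circus/CSPM semantics.

```ocaml
(* Hedged sketch: processes as explicit transition systems, synchronised
   composition, and a reachability check for deadlock.  A toy model, not
   the thesis's formal semantics. *)

type event = string

(* A process given as an explicit transition function over states. *)
type 'st process = { init : 'st; step : 'st -> (event * 'st) list }

(* Synchronous parallel composition: both sides must agree on every event. *)
let sync p q = {
  init = (p.init, q.init);
  step = (fun (s, t) ->
    List.concat_map
      (fun (e, s') ->
        List.filter_map
          (fun (f, t') -> if e = f then Some (e, (s', t')) else None)
          (q.step t))
      (p.step s));
}

(* Breadth-first search for a reachable state with no outgoing events. *)
let find_deadlock p =
  let seen = Hashtbl.create 97 in
  let rec go = function
    | [] -> None
    | s :: rest ->
      if Hashtbl.mem seen s then go rest
      else begin
        Hashtbl.add seen s ();
        match p.step s with
        | [] -> Some s
        | succs -> go (rest @ List.map snd succs)
      end
  in
  go [ p.init ]

let () =
  (* Framework: after "start" it keeps offering "work"; application: after
     "start" it stops.  Their synchronised composition deadlocks. *)
  let framework =
    { init = 0; step = (function 0 -> [ ("start", 1) ] | _ -> [ ("work", 1) ]) } in
  let application =
    { init = 0; step = (function 0 -> [ ("start", 1) ] | _ -> []) } in
  match find_deadlock (sync framework application) with
  | Some _ -> print_endline "deadlock reachable"
  | None -> print_endline "deadlock-free"
```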
13

A type-based locality analysis for a functional distributed language

Moreira, Alvaro F. January 2000 (has links)
In this thesis we give a type-based analysis for an ML-like distributed language that detects references certain not to escape from one processor to another. We assume a model of distribution based on distributed shared memory. From the programmer's viewpoint, the same reference on different machines refers to the same data object in a single logical store, but data is in fact distributed among the machines. A coherency protocol is then responsible for determining, for each operation on a reference, whether the associated data is available on the current machine, and if not, for retrieving it over the network. The cost of calling the coherency protocol for each store access can be reduced if a locality analysis can determine which references refer to local data only. Assignment and dereference operations using these local references can then be compiled to specialised versions, usually comprising a few machine instructions that manipulate data in the store. The locality analysis we propose takes the form of a conservative extension of the Hindley-Milner polymorphic type discipline in which reference types are tagged with locality information. We prove type soundness of the type system with respect to an operational semantics, and we also show that the type system soundly describes the locality of references, in the sense that a local reference is certain not to escape according to the operational semantics. This result means that a compiler can safely use the locality information provided by the types to replace assignment and dereferencing operations performed on local references with specialised variants, which are less costly than the originals. In order to illustrate how this can be done, we define a target language and give an operational semantics for it at a level of abstraction that differentiates between local and global versions of operations on the store. We define a translation from well-typed source expressions to expressions in the target language, induced by the locality information on types, and we prove that this translation preserves the original behaviour. We then give a sound type reconstruction algorithm and discuss a restricted form of best-locality property that we conjecture the algorithm possesses. We also report on experiments showing that detecting local references has a significant impact on the performance of programs.
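The following hedged OCaml sketch conveys the intuition behind the analysis: reference types carry a locality tag, and operations on references known to be local avoid the coherency protocol. The phantom-type encoding and the hypothetical `ensure_cached` placeholder are ours for illustration; in the thesis the tags are inferred by the type system rather than written by the programmer.

```ocaml
(* Hedged sketch: locality-tagged references.  Local references get
   specialised, protocol-free operations; general references would consult
   the coherency protocol first. *)

type local
type global

(* A reference tagged with a phantom locality parameter. *)
type ('a, 'loc) tref = { cell : 'a ref }

let new_local (x : 'a) : ('a, local) tref = { cell = ref x }

(* Specialised operations: plain store access, no protocol call. *)
let deref_local (r : ('a, local) tref) : 'a = !(r.cell)
let assign_local (r : ('a, local) tref) (x : 'a) = r.cell := x

(* General operations: would consult the coherency protocol first.
   [ensure_cached] is a hypothetical placeholder for that protocol. *)
let ensure_cached (_r : ('a, 'loc) tref) = ()   (* network fetch elided *)
let deref (r : ('a, 'loc) tref) : 'a = ensure_cached r; !(r.cell)
let assign (r : ('a, 'loc) tref) (x : 'a) = ensure_cached r; r.cell := x

let () =
  let r = new_local 41 in
  assign_local r (deref_local r + 1);
  assert (deref r = 42)
```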
14

Types for modules

Russo, Claudio V. January 1998 (has links)
The programming language Standard ML is an amalgam of two, largely orthogonal, languages. The Core language expresses details of algorithms and data structures. The Modules language expresses the modular architecture of a software system. Both languages are statically typed, with their static and dynamic semantics specified by a formal definition. Over the past decade, Standard ML Modules has been the source of inspiration for much research into the type-theoretic foundations of modules languages. Despite these efforts, a proper type-theoretic understanding of its static semantics has remained elusive. In this thesis, we use Type Theory as a guideline to reformulate the unconventional static semantics of Modules, providing a basis for useful extensions to the Modules language. Our starting point is a stylised presentation of the existing static semantics of Modules, parameterised by an arbitrary Core language. We claim that the type-theoretic concepts underlying Modules are type parameterisation, type quantification and subtyping. We substantiate this claim by giving a provably equivalent semantics with an alternative, more type-theoretic presentation. In particular, we show that the notion of type generativity corresponds to existential quantification over types. In contrast to previous accounts, our analysis does not involve first-order dependent types. Our first extension generalises Modules to higher-order, allowing modules to take parameterised modules as arguments, and return them as results. We go beyond previous proposals for higher-order Modules by supporting a notion of type generativity. We give a sound and complete algorithm for type-checking higher-order Modules. Our second extension permits modules to be treated as first-class citizens of an ML-like Core language, greatly extending the range of computations on modules. Each extension arises from a natural generalisation of our type-theoretic semantics. This thesis also addresses two pragmatic concerns. First, we propose a simple approach to the separate compilation of Modules, which is adequate in practice but has theoretical limitations. We suggest a modified syntax and semantics that alleviates these limitations. Second, we study the type inference problem posed by uniting our extensions to higher-order and first-class modules with an implicitly-typed, ML-like Core language. We present a hybrid type inference algorithm that integrates the classical algorithm for ML with the type-checking algorithm for Modules.
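As a hedged illustration, in OCaml rather than Standard ML, the two extensions discussed above correspond roughly to higher-order functors (modules parameterised by parameterised modules) and first-class modules (modules as ordinary values). The code below is ours and only gestures at the thesis's type-theoretic treatment.

```ocaml
(* Hedged sketch of the two Modules extensions, using OCaml syntax. *)

module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

(* A functor: a module parameterised by another module. *)
module MakeMax (O : ORDERED) = struct
  let max x y = if O.compare x y >= 0 then x else y
end

(* A higher-order functor: it takes a functor as its argument. *)
module ApplyToInt
    (F : functor (O : ORDERED) -> sig val max : O.t -> O.t -> O.t end) =
  F (struct type t = int let compare = compare end)

module IntMax = ApplyToInt (MakeMax)

(* A first-class module: a module packed as an ordinary value. *)
let ordered_string : (module ORDERED with type t = string) =
  (module struct type t = string let compare = String.compare end)

let () =
  let module O = (val ordered_string) in
  Printf.printf "%d %s\n" (IntMax.max 3 7)
    (if O.compare "a" "b" < 0 then "ok" else "ko")
```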
15

A state-based join point model and language for AOP

Ali, Noorazean Mohd January 2011 (has links)
To specify its intention clearly, a pointcut definition must identify the join point at which a crosscutting functionality is to be applied, together with the applicability condition of that functionality (if any); to realise modular abstraction of a crosscutting functionality, an advice must abstract only the crosscutting functionality. A state-dependent crosscutting concern is an aspect whose advice is applicable at a matched join point only in certain states. Call graph-based AOP approaches are insufficient for such concerns because they do not allow the state-dependent applicability condition to be specified in the pointcut definition. Event-based AOP approaches expose sequences of event executions as join points, so the state-dependent applicability condition of an aspect's advice can be specified in a pointcut definition by stating the sequence of event executions that brings the base program into the state in which the advice is applicable. However, this means the programmer must know that sequence of event executions. This thesis proposes a state-based AOP approach that exposes state and state-transition join points, allowing a state-dependent applicability condition to be specified in the pointcut definition simply by naming the state or the state transition in which the advice is applicable. The approach is implemented in a proof-of-concept prototype language, TinyState.
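A minimal sketch of the idea, assuming a toy OCaml encoding rather than TinyState itself: join points are states and state transitions of the base program, a pointcut matches them directly, and a state-dependent applicability condition is stated by naming the state in which the advice applies.

```ocaml
(* Hedged sketch of a state-based join point model. *)

type state = string
type transition = { from_st : state; event : string; to_st : state }

type joinpoint = InState of state | OnTransition of transition

type pointcut = joinpoint -> bool
type advice = { pointcut : pointcut; body : joinpoint -> unit }

(* Weave: fire every advice whose pointcut matches the current join point. *)
let weave (aspects : advice list) (jp : joinpoint) =
  List.iter (fun a -> if a.pointcut jp then a.body jp) aspects

(* Base program: a controller stepping through a state machine. *)
let step aspects current event next =
  let t = { from_st = current; event; to_st = next } in
  weave aspects (OnTransition t);
  weave aspects (InState next);
  next

let () =
  (* Advice applicable only in the "Open" state: the state-dependent
     condition is stated directly, no event sequence needed. *)
  let alarm = {
    pointcut = (function InState "Open" -> true | _ -> false);
    body = (fun _ -> print_endline "alarm: door is open");
  } in
  let s = step [ alarm ] "Closed" "unlock" "Unlocked" in
  let _ = step [ alarm ] s "push" "Open" in
  ()
```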
16

Realizability toposes and language semantics

Longley, John R. January 1995 (has links)
Realizability toposes are "models of constructive set theory" based on abstract notions of computability. They arose originally in the study of mathematical logic, but since then various realizability toposes (particularly the effective topos) have found their way into several areas of computer science. This thesis investigates the general theory of realizability toposes, and their application to the semantics and logic of some simple programming languages. In the earlier chapters we study the "pure theory" of realizability toposes. Each realizability topos is constructed from a partial combinatory algebra (PCA), which may be regarded as providing a notion of untyped computation. We introduce a new notion of morphism between PCAs, and show that these exactly correspond to certain functors between the toposes. Using this we are able to establish some previously unknown inequivalences between realizability toposes. Next we develop some "domain theory" in realizability toposes. The search for a theory that works well for a wide class of models leads us to identify a new category of predomains, the well-complete objects, whose properties we study. Finally we consider several versions of the programming language PCF and their semantics. We show how these languages may be adequately interpreted in realizability toposes, and prove a variety of universality and full abstraction results for particular languages with respect to particular models. We also obtain some more model-independent results asserting the "equivalence" between the call-by-name, call-by-value and lazy variants of PCF. We end with a discussion of how our models give rise to simple and intuitive "program logics" for these languages.
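As a hedged aside, one concrete partial combinatory algebra (PCA) can be sketched with closed S/K combinatory terms and a partial application operation; the toy OCaml encoding below is ours, for illustration only.

```ocaml
(* Hedged sketch: a PCA of S/K combinatory terms with Kleene-style
   application.  Application in a PCA is a partial operation; on S/K terms
   it may diverge for some inputs. *)

type term =
  | K | S
  | K1 of term                       (* K applied to one argument *)
  | S1 of term | S2 of term * term   (* S applied to one or two   *)

let rec app (f : term) (x : term) : term =
  match f with
  | K -> K1 x
  | K1 a -> a                              (* K a b = a         *)
  | S -> S1 x
  | S1 a -> S2 (a, x)
  | S2 (a, b) -> app (app a x) (app b x)   (* S a b c = a c (b c) *)

let () =
  (* skk behaves as the identity: (S K K) x = K x (K x) = x *)
  let skk = app (app S K) K in
  assert (app skk K = K)
```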
17

Game semantics and subtyping

Chroboczek, Juliusz January 2003 (has links)
Game Semantics is a relatively new framework for the description of the semantics of programming languages. By combining the mathematical elegance of Denotational Semantics with explicitly operational concepts, Game Semantics has made possible the direct and intuitive modelling of a large range of programming constructs. In this thesis, we show how Game Semantics is able to model subtyping. We start by designing an untyped λ-calculus with ground values that explicitly internalises the notion of typing error. We then equip this calculus with a rich typing system that includes quantification (both universal and existential) as well as recursive types. In the second part, we show how to interpret the untyped calculus; after equipping the domain of the interpretation with an ordering, the liveness ordering, loosely inspired by implication on process specifications, we show that our interpretation is both sound and computationally adequate. In the third part, we introduce a notion of game which we use for interpreting types, and show how the liveness ordering on games is suitable for interpreting subtyping. Finally, we prove that, under the (unproved) assumption that recursive types are compatible with quantification, our interpretation is sound with respect to both subtyping and typing.
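A hedged sketch of the starting point described above, in OCaml: an untyped calculus with ground values in which a typing error is internalised as a distinguished result rather than a crash. The encoding and constructor names are our assumptions.

```ocaml
(* Hedged sketch: an untyped λ-calculus with ground values where ill-typed
   operations evaluate to a distinguished [Wrong] value. *)

type term =
  | Var of string
  | Lam of string * term
  | App of term * term
  | Int of int
  | Add of term * term

type value =
  | VInt of int
  | VClos of string * term * env
  | Wrong                      (* internalised typing error *)
and env = (string * value) list

let rec eval (env : env) (t : term) : value =
  match t with
  | Var x -> (try List.assoc x env with Not_found -> Wrong)
  | Lam (x, b) -> VClos (x, b, env)
  | Int n -> VInt n
  | Add (a, b) ->
    (match eval env a, eval env b with
     | VInt m, VInt n -> VInt (m + n)
     | _ -> Wrong)                  (* adding a non-integer is an error *)
  | App (f, a) ->
    (match eval env f with
     | VClos (x, b, cenv) -> eval ((x, eval env a) :: cenv) b
     | _ -> Wrong)                  (* applying a ground value is an error *)

let () =
  assert (eval [] (App (Int 1, Int 2)) = Wrong);
  assert (eval [] (App (Lam ("x", Add (Var "x", Int 1)), Int 41)) = VInt 42)
```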
18

Trace-based just-in-time compilation for lazy functional programming languages

Schilling, Thomas January 2013 (has links)
This thesis investigates the viability of trace-based just-in-time (JIT) compilation for optimising programs written in the lazy functional programming language Haskell. A trace-based JIT compiler optimises only execution paths through the program, in contrast to method-based compilers, which optimise a whole function at a time. The potential advantages of this approach are shorter compilation times and more natural interprocedural optimisation. Trace-based JIT compilers have previously been used successfully to optimise programs written in dynamically typed languages such as JavaScript, Python, or Lua, and also in statically typed settings such as Java and the Common Language Runtime (CLR). Lazy evaluation poses implementation challenges similar to those of dynamic languages, so trace-based JIT compilation promises to be a viable approach. In this thesis we focus on program performance, but having a JIT compiler available can also simplify the implementation of features like runtime inspection and mobile code. We implemented Lambdachine, a trace-based JIT compiler for most of the pure subset of Haskell. We evaluate Lambdachine's performance using a set of micro-benchmarks and a set of larger benchmarks from the "spectral" category of the Nofib benchmark suite. Excluding garbage collection overheads, Lambdachine is generally about 10 to 20 percent slower than GHC on statically optimised code. We identify the two main causes of this slow-down: trace selection and impeded heap-allocation optimisations due to unnecessary thunk updates.
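To illustrate the core idea of trace-based compilation, the hedged OCaml sketch below interprets a tiny bytecode while recording the executed path as a linear trace, with a guard recorded at each branch; this toy model is ours and is not Lambdachine's actual machinery.

```ocaml
(* Hedged sketch: while interpreting, record the executed path as a linear
   trace; each branch adds a guard that a compiled trace would re-check. *)

type instr =
  | Const of int
  | AddTop2                      (* pop two, push their sum        *)
  | JumpIfZero of int            (* pop; branch to index if zero   *)
  | Halt

type trace_entry = Op of instr | Guard of bool  (* recorded branch outcome *)

(* Interpret from [pc], returning the final stack and the recorded trace. *)
let rec run prog pc stack trace =
  match prog.(pc) with
  | Const n -> run prog (pc + 1) (n :: stack) (Op (Const n) :: trace)
  | AddTop2 ->
    (match stack with
     | a :: b :: rest -> run prog (pc + 1) (a + b :: rest) (Op AddTop2 :: trace)
     | _ -> failwith "stack underflow")
  | JumpIfZero target ->
    (match stack with
     | 0 :: rest -> run prog target rest (Guard true :: trace)
     | _ :: rest -> run prog (pc + 1) rest (Guard false :: trace)
     | [] -> failwith "stack underflow")
  | Halt -> (stack, List.rev trace)

let () =
  let prog = [| Const 0; JumpIfZero 3; Const 99; Const 1; Const 2; AddTop2; Halt |] in
  match run prog 0 [] [] with
  | result :: _, trace ->
    Printf.printf "result %d, trace of %d entries\n" result (List.length trace)
  | [], _ -> print_endline "empty stack"
```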
19

A functional semantics for space and time

Hope, Catherine January 2008 (has links)
In this thesis we focus on the problem of reasoning about the space and time usage of functional programs. Such reasoning is less straightforward than for imperative programs, due to the lack of explicit control flow and memory management inherent in the functional style. Our approach to making such information explicit is to consider the behaviour of an underlying abstract machine for executing functional programs, at which level both control flow and the additional data structures required for execution are manifest.
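A minimal sketch of this approach, assuming a toy abstract machine of our own rather than the one developed in the thesis: evaluation runs on an explicit stack, so the number of machine steps (time) and the maximum stack depth (space) are directly observable.

```ocaml
(* Hedged sketch: an abstract machine with an explicit continuation stack,
   instrumented to report steps taken and maximum stack depth. *)

type expr = Lit of int | Add of expr * expr
type frame = EvalRight of expr | AddTo of int
type control = Eval of expr | Apply of int

let eval (e : expr) : int * int * int =
  (* returns (result, machine steps, maximum stack depth) *)
  let rec go control stack steps maxd =
    let maxd = max maxd (List.length stack) in
    match control, stack with
    | Eval (Lit n), _ -> go (Apply n) stack (steps + 1) maxd
    | Eval (Add (l, r)), _ -> go (Eval l) (EvalRight r :: stack) (steps + 1) maxd
    | Apply n, EvalRight r :: rest -> go (Eval r) (AddTo n :: rest) (steps + 1) maxd
    | Apply n, AddTo m :: rest -> go (Apply (m + n)) rest (steps + 1) maxd
    | Apply n, [] -> (n, steps, maxd)
  in
  go (Eval e) [] 0 0

let () =
  let v, steps, depth = eval (Add (Lit 1, Add (Lit 2, Lit 3))) in
  Printf.printf "value %d in %d steps, max stack depth %d\n" v steps depth
```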
20

Investigating and improving novice programmers’ mental models of programming concepts

Ma, Linxiao January 2007 (has links)
No description available.
