61

Categorical term rewriting : monads and modularity

Lüth, Christoph January 1998 (has links)
Term rewriting systems are widely used throughout computer science, as they provide an abstract model of computation while retaining a comparatively simple syntax and semantics. In order to reason within large term rewriting systems, structuring operations are used to build large term rewriting systems from smaller ones. Of particular interest is whether key properties are modular: that is, if the components of a structured term rewriting system satisfy a property, does the term rewriting system as a whole satisfy it too? A body of literature addresses this problem, but most of the results and proofs depend on strong syntactic conditions and do not easily generalise. Although many specific modularity results are known, a coherent framework which explains the underlying principles behind these results is lacking. This thesis posits that part of the problem is the usual concrete, syntax-oriented semantics of term rewriting systems, and that a semantics is needed which on the one hand elides unnecessary syntactic detail but on the other hand still possesses enough expressive power to model the key concepts arising from the term structure, such as substitutions, layers and redexes. Drawing on the concepts of category theory, such a semantics is proposed, based on the concept of a monad, generalising the very elegant treatment of equational presentations in category theory. The theoretical basis of this work is the theory of enriched monads. It is shown how structuring operations are modelled at the level of monads, and that the semantics is compositional (it preserves the structuring operations). Modularity results can now be obtained directly at the level of combining monads, without recourse to the syntax at all. As an application and demonstration of the usefulness of this approach, two modularity results for the disjoint union of two term rewriting systems are proven: the modularity of confluence (Toyama's theorem) and the modularity of strong normalisation for a particular class of term rewriting systems (non-collapsing term rewriting systems). The proofs in the categorical setting provide a mild generalisation of these results.
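The monadic view of term syntax can be illustrated with a small sketch. In Haskell (which the thesis does not use; this is purely illustrative), terms over a signature with variables form a free monad, and monadic bind is exactly simultaneous substitution:

```haskell
-- A minimal sketch of the "terms as a monad" idea: terms over a
-- signature with variables form a free monad, and monadic bind
-- performs simultaneous substitution.
data Term v = Var v               -- a variable
            | Op String [Term v]  -- an operator applied to subterms
  deriving Show

instance Functor Term where
  fmap f (Var v)   = Var (f v)
  fmap f (Op g ts) = Op g (map (fmap f) ts)

instance Applicative Term where
  pure = Var
  tf <*> tx = tf >>= \f -> fmap f tx

instance Monad Term where
  Var v   >>= s = s v                   -- replace the variable
  Op g ts >>= s = Op g (map (>>= s) ts) -- substitute in each subterm

-- Example: applying the substitution x := 0 to plus(x, s(y)).
example :: Term String
example = Op "plus" [Var "x", Op "s" [Var "y"]]
          >>= \v -> if v == "x" then Op "0" [] else Var v
-- evaluates to Op "plus" [Op "0" [], Op "s" [Var "y"]]
```

On this view, the disjoint union of two term rewriting systems corresponds to a combination of the two term monads, and it is at that level that the modularity proofs of the thesis are carried out.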
62

Dependently typed functional programs and their proofs

McBride, Conor January 2000 (has links)
Research in dependent type theories [ML71a] has, in the past, concentrated on their use in the presentation of theorems and theorem-proving. This thesis is concerned mainly with the exploitation of the computational aspects of type theory for programming, in a context where the properties of programs may readily be specified and established. In particular, it develops technology for programming with dependent inductive families of datatypes and proving those programs correct. It demonstrates the considerable advantage to be gained by indexing data structures with pertinent characteristic information whose soundness is ensured by typechecking, rather than human effort. Type theory traditionally presents safe and terminating computation on inductive datatypes by means of elimination rules which serve as induction principles and, via their associated reduction behaviour, recursion operators [Dyb91]. In the programming language arena, these appear somewhat cumbersome and give rise to unappealing code, complicated by the inevitable interaction between case analysis on dependent types and equational reasoning on their indices, which must appear explicitly in the terms. Thierry Coquand's proposal [Coq92] to equip type theory directly with the kind of pattern matching notation to which functional programmers have become used over the past three decades [Bur69, McB70] offers a remedy to many of these difficulties. However, the status of pattern matching relative to the traditional elimination rules has until now been in doubt. Pattern matching implies the uniqueness of identity proofs, which Martin Hofmann showed underivable from the conventional definition of equality [Hof95]. This thesis shows that the adoption of this uniqueness as axiomatic is sufficient to make pattern matching admissible. A datatype's elimination rule allows abstraction only over the whole inductively defined family. In order to support pattern matching, the application of such rules to specific instances of dependent families has been systematised. The underlying analysis extends beyond datatypes to other rules of a similar second order character, suggesting they may have other roles to play in the specification, verification and, perhaps, derivation of programs. The technique developed shifts the specificity from the instantiation of the type's indices into equational constraints on indices freely chosen, allowing the elimination rule to be applied. Elimination by this means leaves equational hypotheses in the resulting subgoals, which must be solved if further progress is to be made. The first-order unification algorithm for constructor forms in simple types presented in [McB96] has been extended to cover dependent datatypes as well, yielding a completely automated solution to a class of problems which can be syntactically defined. The justification and operation of these techniques requires the machine to construct and exploit a standardised collection of auxiliary lemmas for each datatype. This is greatly facilitated by two technical developments of interest in their own right: a more convenient definition of equality, with a relaxed formation rule allowing elements of different types to be compared, but nonetheless equivalent to the usual equality plus the axiom of uniqueness; and a type theory, OLEG, which incorporates incomplete objects, accounting for their 'holes' entirely within the typing judgments and, novelly, not requiring any notion of explicit substitution to manage their scopes.
A substantial prototype has been implemented, extending the proof assistant LEGO [LP92]. A number of programs are developed by way of example. Chiefly, the increased expressivity of dependent datatypes is shown to capture a standard first-order unification algorithm within the class of structurally recursive programs, removing any need for a termination argument. Furthermore, the use of elimination rules in specifying the components of the program significantly simplifies its correctness proof.
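To give a flavour of the indexing idea in a more familiar setting, here is the standard length-indexed vector example, approximated in Haskell with GADTs (the thesis works in full dependent type theory, where such indexing is native and more general):

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- A standard length-indexed vector: the type tracks the length, so the
-- typechecker rather than the programmer rules out taking the head of
-- an empty vector or zipping vectors of different lengths.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Safe head: the type forbids calling it on an empty vector, so no
-- failure case is needed.
vhead :: Vec ('S n) a -> a
vhead (Cons x _) = x

-- Zipping two vectors of the same length: the "lists of different
-- lengths" failure case cannot arise.
vzip :: Vec n a -> Vec n b -> Vec n (a, b)
vzip Nil         Nil         = Nil
vzip (Cons x xs) (Cons y ys) = Cons (x, y) (vzip xs ys)
```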
63

The proof theory and semantics of intuitionistic modal logic

Simpson, Alex K. January 1994 (has links)
Possible world semantics underlies many of the applications of modal logic in computer science and philosophy. The standard theory arises from interpreting the semantic definitions in the ordinary meta-theory of informal classical mathematics. If, however, the same semantic definitions are interpreted in an intuitionistic meta-theory, then the induced modal logics no longer satisfy certain intuitionistically invalid principles. This thesis investigates the intuitionistic modal logics that arise in this way. Natural deduction systems for various intuitionistic modal logics are presented. From one point of view, these systems are self-justifying, in that a possible world interpretation of the modalities can be read off directly from the inference rules. A technical justification is given by the faithfulness of translations into intuitionistic first-order logic. It is also established that, in many cases, the natural deduction systems induce well-known intuitionistic modal logics, previously given by Hilbert-style axiomatizations. The main benefit of the natural deduction systems over axiomatizations is their susceptibility to proof-theoretic techniques. Strong normalization (and confluence) results are proved for all of the systems. Normalization is then used to establish the completeness of cut-free sequent calculi for all of the systems, and decidability for some of the systems. Lastly, techniques developed throughout the thesis are used to establish that those intuitionistic modal logics proved decidable also satisfy the finite model property. For the logics considered, decidability and the finite model property were previously open problems.
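For orientation, the possible world semantics referred to above can be made concrete by a small evaluator. The Haskell sketch below implements the classical Kripke clauses over a finite model, given only to fix intuitions; the semantics investigated in the thesis interprets the same clauses in an intuitionistic meta-theory:

```haskell
-- A minimal evaluator for modal formulas over a finite Kripke model:
-- Box a holds at a world iff a holds at every accessible world, and
-- Dia a iff a holds at some accessible world. These are the classical
-- clauses.
data Formula = Atom String
             | Formula :/\: Formula
             | Formula :->: Formula
             | Box Formula  -- "necessarily"
             | Dia Formula  -- "possibly"

data Model w = Model
  { worlds :: [w]
  , acc    :: w -> [w]             -- the accessibility relation
  , val    :: w -> String -> Bool  -- which atoms hold at which world
  }

holds :: Model w -> w -> Formula -> Bool
holds m w (Atom p)   = val m w p
holds m w (a :/\: b) = holds m w a && holds m w b
holds m w (a :->: b) = not (holds m w a) || holds m w b
holds m w (Box a)    = all (\v -> holds m v a) (acc m w)
holds m w (Dia a)    = any (\v -> holds m v a) (acc m w)
```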
64

Development of fuzzy system based channel equalisers

Patra, Sarat Kumar January 1998 (has links)
Channel equalisers are used in digital communication receivers to mitigate the effects of inter-symbol interference (ISI) and inter-user interference, in the form of co-channel interference (CCI) and adjacent channel interference (ACI), in the presence of additive white Gaussian noise (AWGN). An equaliser accounts for a large part of the computation in the receiver. Linear equalisers based on adaptive filtering techniques have long been used for this application. Recently, nonlinear signal processing techniques like artificial neural networks (ANN) and radial basis functions (RBF) have shown encouraging results in this application. This thesis presents the development of a nonlinear fuzzy system based equaliser for digital communication receivers. The fuzzy equaliser proposed in this thesis provides a parametric implementation of the symbol-by-symbol maximum a-posteriori probability (MAP) equaliser based on Bayes' theory. This MAP equaliser is also called the Bayesian equaliser. Its decision function uses an estimate of the noise-free received vectors, also called channel states or channel centres. The fuzzy equaliser developed here can be implemented with lower computational complexity than the RBF implementation of the MAP equaliser by using scalar channel states instead of vector channel states. It also provides schemes for trading off performance against complexity, and schemes for subset centre selection. Simulation studies presented in this thesis suggest that the fuzzy equaliser can provide near-optimal performance using only 10%-20% of the Bayesian equaliser channel states. Subsequently, this fuzzy equaliser is modified for CCI suppression and is termed the fuzzy-CCI equaliser. The fuzzy-CCI equaliser provides a performance comparable to the MAP equaliser designed for channels corrupted with CCI, yet its structure is similar to the MAP equaliser that treats CCI as AWGN. A decision feedback form of this equaliser, which uses a subset of channel states based on the feedback state, is derived. Simulation studies presented in this thesis demonstrate that the fuzzy-CCI equaliser can effectively remove CCI without much increase in computational complexity. This equaliser also successfully removes interference from more than one CCI source, whereas the MAP equalisers treating CCI as AWGN fail. The fuzzy-CCI equaliser can be viewed as a fuzzy equaliser with a preprocessor for CCI suppression, and the preprocessor can be removed under high signal-to-interference ratio conditions.
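As a reference point, the decision function of the Bayesian (MAP) equaliser that the fuzzy equaliser parametrises can be sketched as follows for binary signalling: every channel state contributes a Gaussian kernel matched to the AWGN variance, and the detector picks the symbol class with the larger total evidence. This is a generic illustration; the variance and channel states are placeholders, not values from the thesis.

```haskell
-- A sketch of a symbol-by-symbol Bayesian (MAP) decision function for
-- binary signalling: every channel state (noise-free received vector)
-- contributes a Gaussian kernel matched to the AWGN variance, and the
-- detector picks the symbol class with the larger total evidence.
type Sample = [Double]

gaussian :: Double -> Sample -> Sample -> Double
gaussian var r c = exp (negate (dist2 r c) / (2 * var))
  where dist2 xs ys = sum [ (x - y) ^ (2 :: Int) | (x, y) <- zip xs ys ]

-- statesP and statesN: channel states for transmitted +1 and -1.
bayesDecide :: Double -> [Sample] -> [Sample] -> Sample -> Int
bayesDecide var statesP statesN r
  | evidence statesP >= evidence statesN = 1
  | otherwise                            = -1
  where evidence = sum . map (gaussian var r)
```

The complexity the thesis targets is visible here: the cost is proportional to the number of channel states, which is why reducing to scalar states or a subset of centres pays off.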
65

Formal derivation of a class of computers

Wang, Li-Guo January 1995 (has links)
The aim of this thesis is to investigate how to use logic-based specification, construction, and proof methods to formally derive a class of computers. Differing from the traditional concepts of specification, verification and synthesis, the emphasis is on the formal design process and the notion of derivation. A derivation is a stepwise specification, construction and proof process. The specification of a derivation is a set of specifications, described by an abstract syntax capturing the common structure of the specifications in the set. The construction of a derivation is a mapping from the set of specifications to the power set of specifications. The proof of a derivation is a guarantee that each construction is total and each constructed specification is correct. Computers are amongst the most complex hardware devices. Formal verification researchers have often used computers as their goals or case studies, but there are few reports of the formal construction and proof of a computer, and none of the formal derivation of a class of computers. In this thesis we present the concepts and methods of derivation, which include, mainly, abstract-syntax-based specification schemes for hierarchical specifications, formal and nearly mechanical construction, and abstract and general verification for a class of computers. The central concept is the unified derivation model, developed from the predicate model, which describes the whole derivation process. The central method is logic structural refinement, which complements behavioural, data, temporal, and (spatial) structural abstraction. It concentrates on describing the main aspect of construction: logic decomposition. The key point is the concept of an entry predicate, which reveals the basic relationship in logic and time between a specification and its sub-specifications. We use a state-transition-based abstract syntax as the behaviour specification scheme to describe a class of computers at the machine instruction level. Starting from this scheme, we take nine steps of construction and proof to derive structural specifications at the register transfer level. Each individual step, and thereby the whole construction, is proved to be total and correct. Our work develops the basic viewpoint that the hierarchical architecture of a computer can be organized through a dynamic formal construction and proof process, in a three-dimensional structure of logic, space and time. The aspect of relation and construction in logic is fundamental and novel. In addition to the traditional understanding of microprograms, our work introduces three new levels in microprogramming: the global loop, the abstract microprogram structure, and the linear microprogram. There is also a sequence of constructions of time layers, in which the usual verification based on two time layers is only a special case. As a corollary we discuss no-clock, variable-clock, and constant-clock machines. Gordon's computer is taken as a case study to illustrate our concepts and methods. We derive three different implementations and compare them with the original implementation in Gordon's paper.
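To fix the idea of a state-transition behavioural specification at the machine instruction level, here is a toy Haskell sketch: the machine is a state (program counter, accumulator, memory) and each instruction denotes a state transformer. The miniature instruction set is invented for illustration and is not the specification scheme or case-study machine of the thesis.

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

-- A toy state-transition behavioural specification at the machine
-- instruction level: each instruction denotes a state transformer.
data Instr = Load Int | Store Int | Add Int | Jump Int

data MState = MState
  { pc  :: Int          -- program counter
  , ac  :: Int          -- accumulator
  , mem :: Map Int Int  -- memory
  }

step :: (Int -> Instr) -> MState -> MState
step prog s = case prog (pc s) of
  Load a  -> s { ac = Map.findWithDefault 0 a (mem s), pc = pc s + 1 }
  Store a -> s { mem = Map.insert a (ac s) (mem s),    pc = pc s + 1 }
  Add a   -> s { ac = ac s + Map.findWithDefault 0 a (mem s)
               , pc = pc s + 1 }
  Jump a  -> s { pc = a }
```

A derivation in the sense of the thesis would refine such an instruction-level specification, step by step and with proof, towards a register-transfer-level structure.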
66

Canonicity and bi-approximation in non-classical logics

Suzuki, Tomoyuki January 2010 (has links)
Non-classical logics, and their many variants, have developed rapidly alongside the progress of computer science since the 20th century. Many variants of non-classical logics can be represented as ordered algebraic structures, more precisely as lattice expansions. From this point of view, the study of ordered algebraic structures, like lattice expansions or, more generally, poset expansions, can be seen as a universal approach to non-classical logics. Towards such a general study, this dissertation discusses canonicity and bi-approximation in non-classical logics, especially in lattice expansions and poset expansions. Canonicity provides a connection between logical calculi and space-based semantics, e.g. relational semantics, possible world semantics or topological semantics. Such results have traditionally been confined to bounded distributive lattice-based logics, because they rest on Stone representation. Today, thanks to the recent generalisation of canonical extensions, canonicity can be discussed over poset expansions. During our investigation of canonicity over poset expansions, we identify the notion of bi-approximation and apply it to non-classical logics, especially resource-sensitive logics.
67

Occupancy monitoring and prediction in ambient intelligent environment

Akhlaghinia, M. J. January 2010 (has links)
Occupancy monitoring and prediction, as an influential factor in the extraction of occupants' behavioural patterns for the realisation of ambient intelligent environments, is addressed in this research. The proposed occupancy monitoring technique uses occupancy detection sensors with unobtrusive features to monitor occupancy in the environment. Initially, occupancy detection is conducted for a purely single-occupant environment; it is then extended to the multiple-occupant environment, and the associated problems are investigated. Along with occupancy monitoring, the aim is to supply prediction techniques with a suitable occupancy signal as input, which can enhance efforts in developing ambient intelligent environments. By predicting the occupancy pattern of monitored occupants, safety, security, occupant convenience, and energy saving can be improved. Elderly care and supporting people with health problems like dementia and Alzheimer's disease are amongst the applications of such an environment. In this research, environments are considered in different scenarios based on the complexity of the problem, including single-occupant and multiple-occupant scenarios. In the single-occupant scenario, an occupant is monitored in a living or working environment using simple sensory devices instead of visual equipment, without any impact on privacy or normal daily activity. ZigBee wireless communication technology is used to collect signals from sensory devices such as motion detection sensors and door contact sensors. All of these technologies, including sensors, wireless communication, and tagging, are integrated into a wireless sensory agent. Occupancy data is then collected from different areas in the monitored environment by installing a wireless sensory agent in each area. In the multiple-occupant scenario, monitored occupants are tagged to help the sensory signals distinguish them from non-monitored occupants or visitors. By enabling the wireless sensory agents to measure the radio signal strength of data received from the tags associated with occupants, wireless localising sensory agents are formed and used for occupancy data collection in the multiple-occupant scenario. After data collection, suitable occupancy time-series are generated from the collected raw data by applying analysis and suitable occupancy signal representation methods, which make it possible to apply time-series predictors to the reshaped occupancy signal. In addition, an occupancy signal generator is proposed and implemented to generate sufficient occupancy signal data for choosing the best among the prediction techniques. After converting the occupancy of different areas in an environment to an occupancy time-series, the occupancy prediction problem is solved by time-series analysis and prediction techniques for the single-occupant scenario. The proposed technique has made it possible to predict the occupancy signal for 530 seconds in a real environment and up to 900 seconds in a virtual environment. The occupancy signal generator, created from the proposed statistical model, is shown to be able to generate different occupancy signals for different occupant profiles incorporating different environmental layouts. This gives a good understanding of the occupancy pattern in indoor spaces and of the effect of uncertainty factors in the occupancy time-series.
In the multiple-occupant scenario, the tagging technology integrated with the data acquisition system has made it possible to distinguish monitored occupants and separate their occupancy signals. The separated signals can then be treated as individual time-series for prediction. All the proposed techniques and models are tested and validated with real occupancy data collected from different environments.
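To make the notion of occupancy time-series prediction concrete, a naive baseline can be sketched: treat the sequence of occupied areas as a symbol time-series, count the transitions, and predict the most frequent successor of the current area. This first-order Markov toy is an illustration only, not the predictor developed in the thesis.

```haskell
import           Data.List (maximumBy)
import           Data.Map (Map)
import qualified Data.Map as Map
import           Data.Ord (comparing)

-- A naive first-order Markov baseline for occupancy prediction: count
-- area-to-area transitions and predict the most frequent successor of
-- the current area.
type Area = String

transitions :: [Area] -> Map (Area, Area) Int
transitions xs = Map.fromListWith (+) [ (t, 1) | t <- zip xs (drop 1 xs) ]

predictNext :: Map (Area, Area) Int -> Area -> Maybe Area
predictNext counts here
  | null succs = Nothing
  | otherwise  = Just (fst (maximumBy (comparing snd) succs))
  where succs = [ (b, n) | ((a, b), n) <- Map.toList counts, a == here ]
```

For example, predictNext (transitions ["hall","kitchen","hall","lounge","hall","kitchen"]) "hall" yields Just "kitchen", since that transition was observed most often.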
68

A new view of binary trees

Gibbons, Jeremy January 1988 (has links)
We present a new formalism of labelled binary trees, using two partial binary constructors instead of the usual total ternary constructor. This formalism is complemented by some higher-order operators, encapsulating common patterns of computation on trees. We explore their use by deriving solutions to a couple of algorithmic problems on trees. The work proceeds in the Bird-Meertens style. This is a calculus for programs, closely related to applicative programming languages. Expressions are written at the function level, avoiding the use of unnecessary names and being as concise as possible. Derivations are performed by program transformation — the application of correctness-preserving transformations to an initial specification, with the intention of improving its efficiency without changing its meaning.
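For contrast, the conventional presentation that the thesis departs from can be sketched in Haskell: a single total ternary constructor, together with a structural fold of the kind meant by higher-order operators encapsulating common patterns of computation. The thesis's own formalism replaces the ternary constructor with two partial binary ones; the sketch below shows only the familiar starting point and the operator style.

```haskell
-- The conventional presentation: one total ternary constructor, plus a
-- structural fold capturing a common pattern of computation on trees.
data Tree a = Leaf a
            | Node (Tree a) a (Tree a)

-- A structural fold: one argument per constructor.
foldT :: (a -> b) -> (b -> a -> b -> b) -> Tree a -> b
foldT leaf node (Leaf a)     = leaf a
foldT leaf node (Node l a r) = node (foldT leaf node l) a (foldT leaf node r)

-- Operators defined as folds, in the name-avoiding, function-level
-- style of the Bird-Meertens calculus.
size :: Tree a -> Int
size = foldT (const 1) (\l _ r -> l + 1 + r)

depth :: Tree a -> Int
depth = foldT (const 1) (\l _ r -> 1 + max l r)
```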
69

Tabletop tangible interfaces for music performance : design and evaluation

Xambó, Anna January 2015 (has links)
This thesis investigates a new generation of collaborative systems: tabletop tangible interfaces (TTIs) for music performance, or musical tabletops. Musical tabletops are designed for professional musical performance as well as for casual interaction in public settings. These systems support co-located collaboration through a shared interface. However, we still know little about their challenges and opportunities for collaborative musical practice: in particular, how best to support beginners, experts, or both. This thesis explores the nature of collaboration on TTIs for music performance between beginners, experts, or both. Empirical work was done in two stages: 1) an exploratory stage; and 2) an experimental stage. In the exploratory stage we studied the Reactable, a commercial musical tabletop designed for beginners and experts. In particular, we explored its use in two environments: a multi-session study with expert musicians in a casual lab setting; and a field study with casual visitors in a science centre. In the experimental stage we conducted a controlled experiment with mixed groups using a bespoke musical tabletop interface, SoundXY4. The design of this study was informed by the previous stage, which identified a need to better support real-time awareness of the group activity (workspace awareness) in early interactions. For all three studies, the groups' musical improvisation was video-captured unobtrusively with the aim of understanding natural use during group musical practice. The rich video data was carefully analysed, focusing on the nature of social interaction and how workspace awareness was manifested. The findings suggest that musical tabletops can support peer learning over multiple sessions; fluid between-group social interaction in public settings; and a democratic and ecological approach to music performance. The findings also point to how workspace awareness can be enhanced in early interactions with TTIs using auditory feedback with ambisonics spatialisation. The thesis concludes with theoretical, methodological, and practical implications for future research in New Interfaces for Musical Expression (NIME), tabletop studies, and Human-Computer Interaction (HCI).
70

Configurable logic : a dynamically programmable cellular architecture and its VLSI implementation

Kean, Thomas Andrew January 1989 (has links)
At present there are two main paradigms for computation: interpretation of a data stream representing a program by a processing unit (software), and an interconnection of active logic elements (hardware). While both systems can (given reasonable definitions) be shown to be equivalent in terms of which functions they can compute, they have radically different properties: hardware can potentially provide a much higher performance implementation of a single simple algorithm, whereas software can implement a wide variety of extremely complex algorithms. Here we consider a third paradigm for computation: configurable hardware, in which the interconnection between the active logic elements is dependent on a control store. This paradigm can potentially offer many of the performance advantages of hardware while retaining much of the flexibility of software. This thesis examines the general properties of configurable systems and examples of previous designs in order to develop a new cellular-array architecture called Configurable Array Logic (CAL). The implementation in VLSI of this architecture and of a related statically-programmed system, the Configurable Logic Array (CLA), is discussed, and the potential of the CAL system for implementation using Wafer Scale Integration is considered. The CAD system required to allow algorithms expressed in ordinary programming languages to be implemented on the cellular architecture is discussed, and the tools developed during the course of the project are covered. Four example designs using the system are presented: a digital stopwatch, a Data Encryption Standard (DES) encryptor, a unit to compute the 3-4 distance transform (used in image pattern matching), and a sketch of a system using configurable logic to implement cellular-automaton models for fluid-flow simulations. Finally, methods of extending the configurable logic architecture to allow more complex computations, and other directions for further research, are discussed.
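The configurable-hardware paradigm, in which function and interconnection are determined by a control store, can be illustrated with a toy sketch: a cell whose two-input boolean function is selected by four control bits read as a truth table (a 2-input lookup table). This is a generic illustration, not the actual CAL cell or its routing.

```haskell
-- A generic illustration of configurable hardware: a cell whose
-- two-input boolean function is chosen by four control-store bits,
-- read as a truth table (a 2-input lookup table). The point is that
-- the cell's function, like its interconnect, is data.
type ControlBits = (Bool, Bool, Bool, Bool)  -- outputs for 00,01,10,11

lut2 :: ControlBits -> Bool -> Bool -> Bool
lut2 (f00, f01, f10, f11) a b = case (a, b) of
  (False, False) -> f00
  (False, True ) -> f01
  (True , False) -> f10
  (True , True ) -> f11

-- Reconfiguring the same cell: AND, OR and XOR are just different
-- control words.
andCfg, orCfg, xorCfg :: ControlBits
andCfg = (False, False, False, True)
orCfg  = (False, True,  True,  True)
xorCfg = (False, True,  True,  False)
```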
