61

Test data generation : two evolutionary approaches to mutation testing

May, Peter S. January 2007
No description available.
62

A privacy enhancing infrastructure for context-awareness

Osbakk, Patrik January 2007
Context-awareness enables applications and services to better fulfil the needs of users by adapting to their situation and their preferences. However, the use of contextual information is complicated by privacy concerns. A subject's context is personal and needs to be regarded as sensitive. Hence, contextual information must only be used with the consent of the subject and according to their privacy preferences. This thesis examines the development of privacy-friendly context-aware systems. In particular the focus is on (A) improving the overall level of privacy, (B) evaluating access control mechanisms, (C) providing development support, and (D) offering protection to third-party infrastructures. The hypothesis investigated is whether these objectives can be achieved through the use of a privacy enhancing infrastructure. As part of the investigation two conceptual models are presented describing the assumptions made about context and privacy. Also presented is a decentralised privacy enhancing infrastructure developed and implemented to determine the validity of the hypothesis. Along with the infrastructure, mechanisms for privacy protection, including authentication, access control, and anonymity, are discussed. A general data format for context communication in the infrastructure is also presented. Finally, the thesis presents the findings uncovered during the investigation and evaluation of the hypothesis. These include a qualitative analysis of whether the privacy enhancing infrastructure meets the key objectives, a user survey examining the performance of two candidate access control mechanisms, a performance measure of the infrastructure when run on resource-constrained devices, and a comparison with the approaches taken in related work.
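To make the consent-driven release of context concrete, here is a minimal sketch in Python, assuming a broker that discloses a subject's context only to requesters covered by an explicit preference. The broker, the Preference structure, and all names are illustrative assumptions, not the thesis's actual infrastructure or data format.

```python
from dataclasses import dataclass

@dataclass
class Preference:
    requester: str      # who may ask
    context_type: str   # what they may ask for, e.g. "location"

class ContextBroker:
    """Discloses context only when the subject has stated a preference."""
    def __init__(self):
        self._prefs = {}     # subject -> list of Preference
        self._context = {}   # (subject, context type) -> value

    def allow(self, subject, pref):
        self._prefs.setdefault(subject, []).append(pref)

    def publish(self, subject, context_type, value):
        self._context[(subject, context_type)] = value

    def request(self, requester, subject, context_type):
        # release only under a matching preference: consent comes first
        for p in self._prefs.get(subject, []):
            if p.requester == requester and p.context_type == context_type:
                return self._context.get((subject, context_type))
        return None  # no consent recorded, so nothing is disclosed

broker = ContextBroker()
broker.allow("alice", Preference("weather_app", "location"))
broker.publish("alice", "location", "Canterbury")
print(broker.request("weather_app", "alice", "location"))  # Canterbury
print(broker.request("ad_network", "alice", "location"))   # None
```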
63

Towards a multipurpose neural network approach to novelty detection

Haggett, Simon J. January 2008
Novelty detection, the identification of data that is unusual or different in some way, is relevant in a wide range of real-world scenarios, ranging from identifying unusual weather conditions to detecting evidence of damage in mechanical systems. However, utilising novelty detection approaches in a particular scenario presents significant challenges to the non-expert user. They must first select an appropriate approach from the novelty detection literature for their scenario. Then, suitable values must be determined for any parameters of the chosen approach. These challenges are at best time-consuming and at worst prohibitively difficult for the user. Worse still, if no suitable approach can be found in the literature, then the user is left with the impossible task of designing a novelty detector themselves. In order to make novelty detection more accessible, an approach is required which does not pose the above challenges. This thesis presents such an approach, which aims to automatically construct novelty detectors for specific applications. The approach combines a neural network model, recently proposed to explain a phenomenon observed in the neural pathways of the retina, with an evolutionary algorithm that is capable of simultaneously evolving the structure and weights of a neural network in order to optimise its performance in a particular task. The proposed approach was evaluated over a number of very different novelty detection tasks. It was found that, in each task, the approach successfully evolved novelty detectors which outperformed a number of existing techniques from the literature. A number of drawbacks with the approach were also identified, and suggestions were given on ways in which these may potentially be overcome.
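As a rough illustration of the evolutionary half of such an approach, the sketch below evolves only the weights of a fixed, single-unit detector (the thesis evolves network structure as well); the data, fitness function, and mutation scheme are invented for the example.

```python
import random

def detector(weights, x):
    # single linear unit with a threshold: "novel" if activation > 1.0
    return sum(w * xi for w, xi in zip(weights, x)) > 1.0

def fitness(weights, normal, novel):
    # reward correct classification of both normal and novel samples
    hits = sum(not detector(weights, x) for x in normal)
    hits += sum(detector(weights, x) for x in novel)
    return hits

# synthetic task: "normal" data near the origin, "novel" data far away
normal = [[random.gauss(0, 0.3) for _ in range(2)] for _ in range(50)]
novel = [[random.gauss(3, 0.3) for _ in range(2)] for _ in range(50)]

population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
for generation in range(100):
    population.sort(key=lambda w: -fitness(w, normal, novel))
    parents = population[:5]                       # truncation selection
    population = [
        [w + random.gauss(0, 0.1) for w in random.choice(parents)]
        for _ in range(20)
    ]                                              # mutated offspring

best = max(population, key=lambda w: fitness(w, normal, novel))
print(fitness(best, normal, novel), "/ 100 correctly classified")
```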
64

Anytime algorithms for ROBDD symmetry detection and approximation

Kettle, Neil January 2008
Reduced Ordered Binary Decision Diagrams (ROBDDs) provide a dense and memory efficient representation of Boolean functions. When ROBDDs are applied in logic synthesis, the problem arises of detecting both classical and generalised symmetries. The state of the art in symmetry detection is Mishchenko's algorithm, which showed how to detect symmetries in ROBDDs without the need for checking equivalence of all co-factor pairs. This work resulted in a practical algorithm for detecting all classical symmetries in an ROBDD in O(|G|³) set operations, where |G| is the number of nodes in the ROBDD. Mishchenko and his colleagues subsequently extended the algorithm to find generalised symmetries. The extended algorithm retains the same asymptotic complexity for each type of generalised symmetry. Both the classical and generalised symmetry detection algorithms are monolithic in the sense that they only return a meaningful answer when they are left to run to completion. In this thesis we present efficient anytime algorithms for detecting both classical and generalised symmetries, which output pairs of symmetric variables until a prescribed time bound is exceeded. These anytime algorithms are complete in that, given sufficient time, they are guaranteed to find all symmetric pairs. Theoretically these algorithms reside in O(n³+n|G|+|G|³) and O(n³+n²|G|+|G|³) respectively, where n is the number of variables, so that in practice the advantage of anytime generality is not gained at the expense of efficiency. In fact, the anytime approach requires only very modest data structure support and offers unique opportunities for optimisation, so the resulting algorithms are very efficient. The thesis continues by considering another class of anytime algorithms for ROBDDs that is motivated by the dearth of work on approximating ROBDDs. The need for approximation arises because many ROBDD operations result in an ROBDD whose size is quadratic in the size of the inputs. Furthermore, if ROBDDs are used in abstract interpretation, the running time of the analysis is related not only to the complexity of the individual ROBDD operations but also to the number of operations applied. The number of operations is, in turn, constrained by the number of times a Boolean function can be weakened before stability is achieved. This thesis proposes a widening that can be used both to constrain the size of an ROBDD and to ensure that the number of times it is weakened is bounded by some given constant. The widening can be used to either systematically approximate an ROBDD from above (i.e. derive a weaker function) or below (i.e. infer a stronger function). The thesis also considers how randomised techniques may be deployed to improve the speed of computing an approximation by avoiding potentially expensive ROBDD manipulation.
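The anytime idea can be illustrated independently of ROBDDs. The sketch below tests classical symmetry on explicit truth tables, which is exponential in the number of variables and nothing like the O(|G|³) ROBDD algorithms above, but it shows the defining property: symmetric pairs are emitted as soon as they are established, so stopping at the time bound still yields a sound (if incomplete) set of pairs.

```python
import itertools, time

def symmetric(f, n, i, j):
    # classical symmetry: f is invariant under swapping variables i and j
    for bits in itertools.product((0, 1), repeat=n):
        swapped = list(bits)
        swapped[i], swapped[j] = swapped[j], swapped[i]
        if f(bits) != f(tuple(swapped)):
            return False
    return True

def anytime_symmetries(f, n, time_bound):
    deadline = time.monotonic() + time_bound
    for i, j in itertools.combinations(range(n), 2):
        if time.monotonic() > deadline:
            return              # bound exceeded: stop with partial output
        if symmetric(f, n, i, j):
            yield (i, j)        # emit the pair as soon as it is found

# majority-of-three is symmetric in every variable pair
maj = lambda b: (b[0] + b[1] + b[2]) >= 2
print(list(anytime_symmetries(maj, 3, time_bound=1.0)))
```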
65

The temporal spotlight of attention : computational and electrophysiological explorations

Chennu, Srivas January 2009
The study of attention aims to understand how the visual system focuses its resources on salient targets presented amongst competing distractors. In a continuously changing environment, temporal attention must pick out targets presented in between spatially coincident distractors that are offset in time. Cognitive theories have proposed that this task is mediated by a temporal spotlight of attention. This thesis combines evidence from behaviour and electrophysiology (EEG) with theoretical insights from neural network modelling to investigate the interplay between this spotlight and conscious perception. The experiments described in this thesis investigate the electrophysiology of temporal visual perception using the Rapid Serial Visual Presentation (RSVP) paradigm. Building upon behavioural research, we use EEG to investigate the influence of target discriminability, the Attentional Blink (AB) and feature integration on the temporal dynamics of visual perception. These findings characterise the influence of pre-attentional processes on attentional deployment, and the subsequent influence of this deployment on perception and behaviour. In addition, they provide the basis for a complementary computational elucidation. The theoretical component of this thesis is based on the ST2 neural network model. The notion of Transient Attentional Enhancement (TAE) embodied therein is the computational equivalent of the temporal spotlight. Its function is evaluated within the ST2 model and in relation to other modelling approaches. In addition, human ERP (Event-Related Potential) data from the experiments are compared with the model's equivalent activation traces, termed Virtual ERPs. This combination of theory and experiment broadens our understanding of temporal visual perception and, in conjunction, highlights the role of neural modelling in informing EEG research.
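For readers unfamiliar with the paradigm, the sketch below generates a single RSVP trial of the general kind used in such experiments: items stream at a fixed rate with two targets (T1 and T2) embedded at a chosen lag, the manipulation behind the Attentional Blink. The timings, items, and trial structure here are illustrative, not those of the thesis's experiments.

```python
import random, string

def rsvp_trial(stream_len=20, t1_pos=5, lag=3, rate_ms=100):
    # distractors are letters; the two targets are digits at t1_pos and
    # t1_pos + lag -- varying the lag probes the Attentional Blink
    letters = random.sample(string.ascii_uppercase, stream_len)
    targets = {t1_pos: "T1", t1_pos + lag: "T2"}
    return [(str(random.randint(2, 9)) if i in targets else letters[i],
             targets.get(i, "distractor"),
             i * rate_ms)                 # onset time of each frame
            for i in range(stream_len)]

for item, role, onset in rsvp_trial(lag=2):
    print(f"{onset:4d} ms  {item}  {role}")
```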
66

Modelling access control for healthcare information systems : how to control access through policies, human processes and legislation

Ferreira, Ana January 2010
The introduction of Electronic Medical Records (EMR) within healthcare organizations has the main goal of integrating heterogeneous patient information that is usually scattered over different locations. However, there are some barriers that impede the effective integration of EMR within the healthcare practice (e.g., educational, time/costs, security). A focus on improving access control definition and implementation is fundamental to defining proper system workflow and access. The main objectives of this research are: to involve end users in the definition of access control rules; to determine which access control rules are important to those users; to define an access control model that can model these rules; and to implement and evaluate this model. Technical, methodological and legislative reviews were conducted on access control, both in general and in the healthcare domain. Grounded theory was used together with mixed methods to gather users' experiences and needs regarding access control. Focus groups (the main qualitative method) followed by structured questionnaires (the secondary quantitative method) were applied to the healthcare professionals, whilst structured telephone interviews were applied to the patients. A list of access control rules together with the new Break-The-Glass (BTG) RBAC model were developed. A prototype together with a pilot case study was implemented in order to test and evaluate the new model. A research process was developed during this work that allows translating access control procedures in healthcare, from legislation to practice, in a systematic and objective way. With access controls closer to the healthcare practice, the educational, time/costs and security barriers to EMR integration can be minimized. This is achieved by: reducing the time needed to learn, use and alter the system; allowing unanticipated or emergency situations to be tackled in a controlled manner (BTG); and reducing unauthorized and non-justified accesses. All this helps to achieve faster and safer patient treatment.
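A minimal sketch of the Break-The-Glass idea follows; the thesis's BTG-RBAC model is considerably richer, and the roles, permissions, and audit format below are invented for illustration. An ordinary RBAC check is backed by an emergency override that always succeeds but is logged for later review.

```python
audit_log = []

ROLE_PERMISSIONS = {
    "doctor": {"read_record", "write_record"},
    "nurse": {"read_record"},
    "clerk": set(),
}

def access(user, role, permission, break_glass=False, reason=None):
    if permission in ROLE_PERMISSIONS.get(role, set()):
        return True                       # ordinary RBAC decision
    if break_glass and reason:
        # unanticipated/emergency access: allow, but record who asked,
        # for what and why, so the override can be audited afterwards
        audit_log.append((user, role, permission, reason))
        return True
    return False

print(access("bob", "nurse", "write_record"))                     # False
print(access("bob", "nurse", "write_record",
             break_glass=True, reason="cardiac arrest, ward 3"))  # True
print(audit_log)
```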
67

Communicating Haskell processes

Brown, Neil Christopher Charles January 2011
Concurrent programming supports multiple threads of concurrent execution. It is a suitable paradigm for interaction with the outside world, where many inputs (e.g. network packets, key-presses, mouse-clicks) may arrive at any time and need to be dealt with at the same time. Concurrent execution also potentially allows for parallel speed-up; multicore machines are now the standard for new PCs, but the quest to take full advantage of the available parallelism continues. One approach to concurrent programming is process-oriented programming, which uses message-passing and is based on Hoare and Roscoe's Communicating Sequential Processes (CSP). CSP is a process calculus centred on concurrent processes that communicate with each other via synchronous channels. This is used as a formal underpinning for process-oriented programming, either directly (proving behaviours of programs with a CSP model checker) or indirectly (providing sound design principles and patterns). Process-oriented programming has previously been provided in programming languages such as occam-π and libraries for other mainstream languages such as Java and C++. However, as yet process-oriented programming has failed to gain much traction via these implementations; occam-π is rooted in the much older language occam and thus lacks powerful data structures (among other features) which are standard in most modern languages - while languages such as Java and C++, where data is mutable and easily shared, can be an awkward fit to process-oriented programming. Haskell is a functional programming language that is notable for its purity, type-classes, monads and lazy evaluation: modern features that provide interesting and powerful ways to program, including good support for imperative programming. Like other functional languages, it eliminates mutable data - which immediately removes a whole class of problems in concurrent programming. This thesis contends that CSP and process-oriented programming fit well with Haskell. The thesis details the creation of a CSP library for Haskell (Communicating Haskell Processes: CHP) - which features powerful support for process composition - and its further augmentation with capabilities such as support for a new concurrency primitive (conjunction), tracing and the ability to generate formal models of CHP programs. This allows programmers to build concurrent message-passing systems with a strong formal underpinning in a modern programming language, with more features and fewer complications than process-oriented libraries for other languages.
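CHP itself is a Haskell library, so the Python sketch below only illustrates the process-oriented style it supports, not CHP's API: independent processes that share no state and interact solely by passing messages over channels. (Python's queues are buffered, unlike CSP's synchronous channels, so the rendezvous semantics is only approximated.)

```python
import threading, queue

def producer(out):
    for i in range(5):
        out.put(i * i)       # communicate by message, not shared state
    out.put(None)            # sentinel: end of stream

def consumer(inp):
    while (msg := inp.get()) is not None:
        print("received", msg)

channel = queue.Queue()
procs = [threading.Thread(target=producer, args=(channel,)),
         threading.Thread(target=consumer, args=(channel,))]
for p in procs:
    p.start()
for p in procs:
    p.join()
```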
68

Concerning T-degrees of d-r.e. sets and e-reducibility of Sigma₂ sets

Watson, P. R. January 1988
No description available.
69

Views of formal program development

Boiten, E. A. January 1992
A formal specification describes a problem to be solved at a high level - ideally, it specifies what has to be done, but not how. Such descriptive specifications facilitate the derivation of any of the possible solutions, whereas operational specifications suggest only particular ones. Formal derivations in this framework consist of semantics-preserving transformations, i.e. steps that proceed from solutions to the initial specification to other, more defined, more operational, or more efficient ones. Thus, the resulting programs are correct by construction with respect to their initial specifications. This thesis contains a number of case studies aiming at the exploration of new territories in the area of program specification and transformation.

'Improving recursive functions by inverting the order of evaluation' gives a comprehensive survey of one particular transformation strategy (a larger conceptual step in a transformational development that can be described at a more abstract level). This strategy for recursive functions entails the derivation of equivalent functions that use in their recursive evaluations the same arguments in an inverted order. This is an important optimization strategy, in particular for tree-like recursive functions, which are often defined in such a way that several function calls need to be evaluated more than once. By evaluating the function in an inverted order, such multiple evaluations are eliminated.

'Factorization of the factorial' illustrates a number of the transformations in chapter 2, and also demonstrates the state of the art in recursion simplification transformations. Directed by a small set of simple and well-known heuristics, a previously unknown algorithm for computing factorials is derived. Also, a similar development is shown leading to a corresponding program for a simple pipeline architecture.

'A note on similarity of specifications and reusability of transformational developments', written with a co-author (http://www.informatik.uni-ulm.de/pm/mitarbeiter/helmuth.html), explores the possibilities of reuse of transformational developments. Although it has often been claimed that this could be done fully mechanically, the experience with a number of derivations in this chapter indicates that this claim is somewhat preposterous. Only by describing the transformation steps in a very abstract way (using just natural language) and by considering very general specifications can the developments be reused. The central concept is similarity, and several definitions of this informal notion are given, each leading to a particular kind of reuse of derivations. Variants of a derivation of linear search lead to several interesting search algorithms, culminating in derivations by reuse of two complicated string matching algorithms.

'Intersections of sets and bags of extended substructures - a class of problems' generalizes the specification of pattern matching. It describes a class of problems that can be viewed as a generalization of pattern matching problems. The essence of pattern matching is considered to be the intersection of a particular set with a bag (multiset) of extended substructures of a structured object. The set contains the patterns; the extended substructures are possible occurrences, extended with labels that mark their positions in the original object. This leads to the first ideas on an (interesting) theory of (extended) substructures. It is shown how the abstract description of this class of problems lends itself to calculation in a BMF style. Also, clearly exhibiting the basic structure of such problems facilitates connecting them with various solution strategies.

'Solving a combinatorial problem by transformation of abstract data types' gives an application of techniques from the area of formal program development in a different area, viz. combinatorics. By describing a given combinatorial problem in terms of abstract data types with equivalences, and transforming those data types, a reduction to a known problem is obtained. Abstract data types proved to be a more suitable specification mechanism in this case than context-free grammars, since arbitrary equivalences could be introduced on the data types.
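The 'inverting the order of evaluation' strategy of the first chapter can be illustrated on a standard example of our own choosing (it is not taken from the thesis): the tree-recursive Fibonacci definition descends from n towards 0 and re-evaluates the same calls many times, while the transformed version ascends from 0 towards n and computes each value exactly once.

```python
def fib_recursive(n):
    # descends from n towards 0; fib_recursive(k) is recomputed many times
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_inverted(n):
    # ascends from 0 towards n; each value is computed exactly once
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# both definitions compute the same function
assert all(fib_recursive(n) == fib_inverted(n) for n in range(15))
print(fib_inverted(30))  # 832040
```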
70

Evolution transparency for distributed service types

Senivongse, Twittie January 1997
Large software systems are never static, as they exist in an environment that is subject to constant changes in both functionality and technology. Managing this evolution is one of the major challenges in the engineering of large-scale software systems. When a distributed service evolves its interface, other parties in its environment who need to continue using the service will themselves have to evolve correspondingly if the evolved service, although functionally compatible, is not type-compatible with the original one. The autonomous and decentralised nature of distributed components makes such coordinated evolution impractical, since the service provider and client systems may not agree on the evolution. It may not even be possible to track down and alter all client programs which are distributed over the network. The best way to tackle this problem is to provide 'evolution transparency' to give the affected components the illusion that the service does not change. This thesis presents an RM-ODP-based model that hides from clients details of changes occurring to distributed service types over time. By allowing type versioning, the model supports program compatibility by enabling a client program of one type version to access a service instance of another functionally compatible type version, even though the versions are not considered compatible by the supporting type system. The model manages a cross-version binding and maintains semantic information which is used to transform the client's invocation to the format recognised by the accessed service object. With a prototype implementation on ANSAware together with an analysis of its mechanism, the evolution transparency support proves useful; clients are given the flexibility to defer their own evolution, and type substitutability is also extended from syntactic to functional compatibility.
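A minimal sketch of the cross-version binding idea follows; the service, its two interface versions, and the mapping between them are hypothetical, invented to illustrate the mechanism rather than the thesis's RM-ODP model or its ANSAware implementation. The binding presents the old interface to the client and transforms each invocation into the evolved one.

```python
class PrinterServiceV2:
    # evolved interface: one structured argument instead of two values
    def submit(self, job):  # job: {"document": ..., "copies": ...}
        return f"printing {job['copies']} copies of {job['document']}"

class V1Binding:
    """Presents the original v1 interface, delegating to a v2 instance."""
    def __init__(self, service):
        self._service = service

    def print_document(self, document, copies):   # original v1 signature
        # semantic mapping between versions, held by the infrastructure,
        # rewrites the call into the format the evolved service expects
        return self._service.submit({"document": document, "copies": copies})

# the client keeps calling the v1 operation, unaware the service evolved
client_view = V1Binding(PrinterServiceV2())
print(client_view.print_document("thesis.pdf", 2))
```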
