71

Self-similarity and wavelet forms for the compression of still image and video data

Levy, Ian Karl January 1998 (has links)
This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and have stimulated widespread research. These are the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle: that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two, normally disparate, strategies. The wavelet transform is an orientation-selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity, while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently, and this work adds another to the gamut. The tree-structured vector quantizer presented here is on-line and self-structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best that have been published to date. An extension of the two-dimensional wavelet transform to encompass the time dimension is straightforward, and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three-dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
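To make the building blocks concrete, the following sketch (my own illustration in Python with NumPy, not code from the thesis) performs one level of the conventional orthonormal 2D Haar wavelet transform, producing the coarse approximation and the three orientation-selective detail bands that a coder would go on to quantize; the thesis's more orientation-selective transform and its fractal coupling are not reproduced here.

```python
import numpy as np

def haar2d_level(img):
    """One level of the orthonormal 2D Haar wavelet transform.

    Splits the image into a coarse approximation (LL) and three
    orientation-selective detail bands, each a quarter of the input size.
    """
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]  # top-left sample of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right

    ll = (a + b + c + d) / 2.0  # coarse approximation
    lh = (a - b + c - d) / 2.0  # differences across columns (vertical edges)
    hl = (a + b - c - d) / 2.0  # differences across rows (horizontal edges)
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(8, 8))
    bands = haar2d_level(image)
    print([band.shape for band in bands])  # four (4, 4) bands
```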
72

Model checking quantum protocols

Papanikolaou, Nikolaos K. January 2009 (has links)
This thesis describes model checking techniques for protocols arising in quantum information theory and quantum cryptography. We discuss the theory and implementation of a practical model checker, QMC, for quantum protocols. In our framework, we assume that the quantum operations performed in a protocol are restricted to those within the stabilizer formalism; while this particular set of operations is not universal for quantum computation, it allows us to develop models of several useful protocols as well as of systems involving both classical and quantum information processing. We detail the syntax, semantics and type system of QMC’s modelling language, the logic QCTL which is used for verification, and the verification algorithms that have been implemented in the tool. We demonstrate our techniques with applications to a number of case studies.
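In the stabilizer formalism an n-qubit state is represented by n commuting Pauli generators rather than 2^n amplitudes, which is what keeps such models tractable. The sketch below (Python/NumPy, using the standard CHP-style tableau update rules; an illustration of the formalism only, not QMC's implementation) prepares a Bell state from |00⟩ with a Hadamard and a CNOT and prints its stabilizer generators.

```python
import numpy as np

class StabilizerState:
    """Track the n stabilizer generators of an n-qubit state as a binary
    tableau: one row per generator, with X bits, Z bits and a sign bit.
    A sketch of the standard tableau update rules, not QMC's code."""

    def __init__(self, n):
        self.n = n
        self.x = np.zeros((n, n), dtype=np.uint8)
        self.z = np.eye(n, dtype=np.uint8)    # |0...0> is stabilized by Z_i
        self.r = np.zeros(n, dtype=np.uint8)  # sign bits (0 -> +, 1 -> -)

    def h(self, q):
        """Hadamard on qubit q: swap X and Z bits, flip sign where both set."""
        self.r ^= self.x[:, q] & self.z[:, q]
        self.x[:, q], self.z[:, q] = self.z[:, q].copy(), self.x[:, q].copy()

    def cnot(self, c, t):
        """CNOT with control c and target t."""
        self.r ^= self.x[:, c] & self.z[:, t] & (self.x[:, t] ^ self.z[:, c] ^ 1)
        self.x[:, t] ^= self.x[:, c]
        self.z[:, c] ^= self.z[:, t]

    def generators(self):
        pauli = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
        return ["+-"[s] + "".join(pauli[(xi, zi)] for xi, zi in zip(xr, zr))
                for xr, zr, s in zip(self.x, self.z, self.r)]

if __name__ == "__main__":
    st = StabilizerState(2)
    st.h(0)
    st.cnot(0, 1)
    print(st.generators())  # ['+XX', '+ZZ'] -- the Bell state stabilizers
```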
73

Multiresolution image segmentation

Bhalerao, Abhir January 1991 (has links)
Image segmentation is an important area in the general field of image processing and computer vision. It is a fundamental part of the 'low level' aspects of computer vision and has many practical applications such as in medical imaging, industrial automation and satellite imagery. Traditional methods for image segmentation have approached the problem either from localisation in class space using region information, or from localisation in position, using edge or boundary information. More recently, however, attempts have been made to combine both region and boundary information in order to overcome the inherent limitations of using either approach alone. In this thesis, a new approach to image segmentation is presented that integrates region and boundary information within a multiresolution framework. The role of uncertainty is described, which imposes a limit on the simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows the trade-off between position and class resolution and ensures both robustness to noise and efficiency of computation. The segmentation is based on an image model derived from a general class of multiresolution signal models, which incorporates both region and boundary features. A four-stage algorithm is described consisting of: generation of a low-pass pyramid, separate region and boundary estimation processes, and an integration strategy. Both the region and boundary processes consist of scale selection, creation of adjacency graphs, and iterative estimation within a general framework of maximum a posteriori (MAP) estimation and decision theory. Parameter estimation is performed in situ, and the decision processes are both flexible and spatially local, thus avoiding assumptions about global homogeneity or the size and number of regions which characterise some of the earlier algorithms. A method for robust estimation of edge orientation and position is described which casts the problem as multiresolution minimum mean square error (MMSE) estimation. The method exploits the spatial consistency of the outputs of small-kernel gradient operators at different scales to produce more reliable edge position and orientation estimates, and is effective at extracting boundary orientations from data with low signal-to-noise ratios. Segmentation results are presented for a number of synthetic and natural images which show the cooperative method to give accurate segmentations at low signal-to-noise ratios (0 dB) and to be more effective than previous methods at capturing complex region shapes.
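The first of the four stages, low-pass pyramid generation, is simple enough to sketch. The code below (Python/NumPy, my own illustration rather than the thesis's implementation) blurs with a separable 5-tap binomial kernel and subsamples by two at each level; the region/boundary estimation and integration stages are not shown.

```python
import numpy as np

def lowpass_pyramid(image, levels):
    """Build a low-pass pyramid: each level is the previous one blurred
    with a separable 5-tap binomial kernel and subsampled by two."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels):
        img = pyramid[-1]
        # separable blur: convolve the rows, then the columns
        blurred = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), 1, img)
        blurred = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
        pyramid.append(blurred[::2, ::2])  # subsample by 2 in each direction
    return pyramid

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    for level, p in enumerate(lowpass_pyramid(img, 3)):
        print(level, p.shape)  # (64,64), (32,32), (16,16), (8,8)
```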
74

Metric domains for completeness

Matthews, S. G. January 1985 (has links)
Completeness is a semantic non-operational notion of program correctness suggested (but not pursued) by W. W. Wadge. Program verification can be simplified using completeness, firstly by removing the approximation relation from proofs, and secondly by removing partial objects from proofs. The dissertation proves the validity of this approach by demonstrating how it can work in the class of metric domains. We show how the use of Tarski's least fixed point theorem can be replaced by a non-operational unique fixed point theorem for many well-behaved programs. The proof of this theorem is also non-operational. After this we consider the problem of deciding what it means for a function to be "complete". It is shown that combinators such as function composition are not complete, although they are traditionally assumed to be so. Complete versions for these combinators are given. Absolute functions are proposed as a general model for the notion of a complete function. The theory of mategories is introduced as a vehicle for studying absolute functions.
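The replacement of Tarski's least fixed point theorem by a unique fixed point theorem recalls the classical Banach contraction-mapping theorem on complete metric spaces. The sketch below (Python) illustrates only that classical, and deliberately operational, picture of a unique fixed point; it is not the thesis's non-operational theorem for metric domains.

```python
def unique_fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate a contraction mapping f on a complete metric space (here just
    the reals with the usual metric) until its unique fixed point is reached
    to within tol.  An operational illustration of the unique-fixed-point
    idea, not the thesis's non-operational theorem."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("did not converge; f may not be a contraction")

if __name__ == "__main__":
    import math
    # cos is a contraction on [0, 1]; its unique fixed point is ~0.739085
    print(unique_fixed_point(math.cos, 0.5))
```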
75

Algorithmic and complexity aspects of simple coalitional games

Aziz, Haris January 2009 (has links)
Simple coalitional games are a fundamental class of cooperative games and voting games which are used to model coalition formation, resource allocation and decision making in computer science, artificial intelligence and multiagent systems. Although simple coalitional games are well studied in the domain of game theory and social choice, their algorithmic and computational complexity aspects have received less attention until recently. The computational aspects of simple coalitional games are of increased importance as these games are used by computer scientists to model distributed settings. This thesis fits in the wider setting of the interplay between economics and computer science which has led to the development of algorithmic game theory and computational social choice. A unified view of the computational aspects of simple coalitional games is presented here for the first time. Certain complexity results also apply to other coalitional games such as skill games and matching games. The following issues are given special consideration: influence of players, limit and complexity of manipulations in the coalitional games and complexity of resource allocation on networks. The complexity of comparison of influence between players in simple games is characterized. The simple games considered are represented by winning coalitions, minimal winning coalitions, weighted voting games or multiple weighted voting games. A comprehensive classification of weighted voting games which can be solved in polynomial time is presented. An efficient algorithm which uses generating functions and interpolation to compute an integer weight vector for target power indices is proposed. Voting theory, especially the Penrose Square Root Law, is used to investigate the fairness of a real-life voting model. Computational complexity of manipulation in social choice protocols can determine whether manipulation is computationally feasible or not. The computational complexity and bounds of manipulation are considered from various angles including control, false-name manipulation and bribery. Moreover, the computational complexity of computing various cooperative game solutions of simple games in different representations is studied. Certain structural results regarding least core payoffs extend to the general monotone cooperative game. The thesis also studies a coalitional game called the spanning connectivity game. It is proved that whereas computing the Banzhaf values and Shapley-Shubik indices of such games is #P-complete, there is a polynomial-time combinatorial algorithm to compute the nucleolus. The results have interesting significance for optimal strategies for the wiretapping game, a noncooperative game defined on a network.
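The generating-function idea mentioned above is the standard way to count coalitions by weight. The sketch below (Python, my own illustration; a direct O(n² · total-weight) dynamic program, not the interpolation-based algorithm of the thesis) computes the Banzhaf measure of a weighted voting game by counting, for each player, the coalitions for which that player is critical.

```python
from typing import List

def banzhaf_measure(weights: List[int], quota: int) -> List[float]:
    """Banzhaf measure of a weighted voting game [quota; w_1, ..., w_n]:
    for each player i, the number of coalitions of the other players for
    which i is critical, divided by 2^(n-1).

    Counting uses the generating-function idea: the coefficient of x^k in
    the product over j != i of (1 + x^{w_j}) is the number of coalitions
    of the other players with total weight exactly k.
    """
    n = len(weights)
    total = sum(weights)
    measure = []
    for i in range(n):
        # dp[k] = number of coalitions of players other than i with weight k
        dp = [0] * (total + 1)
        dp[0] = 1
        for j, w in enumerate(weights):
            if j == i:
                continue
            for k in range(total - w, -1, -1):
                if dp[k]:
                    dp[k + w] += dp[k]
        # player i is critical when quota - w_i <= coalition weight < quota
        lo = max(0, quota - weights[i])
        swings = sum(dp[lo:quota])
        measure.append(swings / 2 ** (n - 1))
    return measure

if __name__ == "__main__":
    # [5; 3, 2, 1, 1]: prints [0.625, 0.375, 0.125, 0.125]
    print(banzhaf_measure([3, 2, 1, 1], quota=5))
```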
76

Predictive analysis and optimisation of pipelined wavefront applications using reusable analytic models

Mudalige, Gihan Ravideva January 2009 (has links)
Pipelined wavefront computations are a ubiquitous class of high performance parallel algorithms used for the solution of many scientific and engineering applications. In order to aid the design and optimisation of these applications, and to ensure that during procurement the platforms chosen are best suited to these codes, there has been considerable research in analysing and evaluating their operational performance. Wavefront codes exhibit complex computation, communication and synchronisation patterns, and as a result there exists a large variety of such codes and possible optimisations. The problem is compounded by each new generation of high performance computing system, which has often introduced a previously unexplored architectural trait, requiring previous performance models to be rewritten and reevaluated. In this thesis, we address the performance modelling and optimisation of this class of application as a whole. This differs from previous studies in which bespoke models are applied to specific applications. The analytic performance models are generalised and reusable, and we demonstrate their application to the predictive analysis and optimisation of pipelined wavefront computations running on modern high performance computing systems. The performance model is based on the LogGP parameterisation, and uses a small number of input parameters to specify the particular behaviour of most wavefront codes. The new parameters and model equations capture the key structural and behavioural differences among different wavefront application codes, providing a succinct summary of the operations for each application and insights into alternative wavefront application design. The models are applied to three industry-strength wavefront codes and are validated on several systems including a Cray XT3/XT4 and an InfiniBand commodity cluster. Model predictions show high quantitative accuracy (less than 20% error) for all high performance configurations and excellent qualitative accuracy. The thesis presents applications, projections and insights for optimisations using the model, which show the utility of reusable analytic models for performance engineering of high performance computing codes. In particular, we demonstrate the use of the model for: (1) evaluating application configuration and resulting performance; (2) evaluating hardware platform issues including platform sizing and configuration; (3) exploring hardware platform design alternatives and system procurement; and (4) considering possible code and algorithmic optimisations.
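For a flavour of the modelling style only: a pipelined wavefront sweep over a Px × Py processor array proceeds in diagonal steps, so its runtime is roughly the pipeline-fill delay plus the per-step compute and communication cost times the number of steps. The toy estimate below (Python, with hypothetical parameter names and LogGP-style terms of my own choosing) is a much-simplified sketch, not the reusable, validated model developed in the thesis.

```python
def wavefront_sweep_time(px, py, nsteps, w_g, latency, overhead, msg_cost):
    """Toy analytic estimate of one pipelined-wavefront sweep on a
    px-by-py logical processor array.

    Hypothetical parameters (times in seconds):
      nsteps    -- wavefront steps each processor performs once the
                   pipeline is full (e.g. tiles along the sweep direction)
      w_g       -- compute time per wavefront step per processor
      latency, overhead, msg_cost -- LogGP-style terms for sending one
                   boundary message to a downstream neighbour
    """
    per_step = w_g + 2 * (overhead + latency + msg_cost)  # two downstream sends
    fill_steps = (px - 1) + (py - 1)  # steps before the last processor starts
    return (fill_steps + nsteps) * per_step

if __name__ == "__main__":
    # e.g. a 16x16 processor array, 400 wavefront steps, 2 ms compute per step
    t = wavefront_sweep_time(16, 16, nsteps=400, w_g=2e-3,
                             latency=5e-6, overhead=2e-6, msg_cost=10e-6)
    print(f"predicted sweep time ~ {t:.3f} s")
```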
77

Error management in ATLAS TDAQ : an intelligent systems approach

Sloper, John Erik January 2010 (has links)
This thesis is concerned with the use of intelligent system techniques (IST) within a large distributed software system, specifically the ATLAS TDAQ system which has been developed and is currently in use at the European Laboratory for Particle Physics (CERN). The overall aim is to investigate and evaluate a range of IST techniques in order to improve the error management system (EMS) currently used within the TDAQ system via error detection and classification. The thesis work will provide a reference for future research and development of such methods in the TDAQ system. The thesis begins by describing the TDAQ system and the existing EMS, with a focus on the underlying expert system approach, in order to identify areas where improvements can be made using IST techniques. It then discusses measures of evaluating error detection and classification techniques and the factors specific to the TDAQ system. Error conditions are then simulated in a controlled manner using an experimental setup, and datasets are gathered from two different sources. Analysis and processing of the datasets using statistical and IST techniques shows that clusters exist in the data corresponding to the different simulated errors. Different IST techniques are applied to the gathered datasets in order to realise an error detection model. These techniques include Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Cartesian Genetic Programming (CGP), and a comparison of their respective advantages and disadvantages is made. The principal conclusions from this work are that IST can be successfully used to detect errors in the ATLAS TDAQ system and thus can provide a tool to improve the overall error management system. It is of particular importance that the IST can be used without having a detailed knowledge of the system, as the ATLAS TDAQ system is too complex for any single person to understand completely. The results of this research will benefit researchers developing and evaluating IST techniques in similar large-scale distributed systems.
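To illustrate the classification step in isolation: the sketch below (Python, with scikit-learn assumed available; synthetic feature vectors standing in for TDAQ monitoring data, not the thesis's datasets or tuned models) trains an RBF-kernel support vector machine to separate two simulated error classes from a healthy class.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for monitoring feature vectors: two simulated error
# classes plus a "healthy" class, each drawn from its own distribution.
rng = np.random.default_rng(42)
healthy = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.5, size=(200, 3))
error_a = rng.normal(loc=[3.0, 0.5, 0.0], scale=0.5, size=(200, 3))
error_b = rng.normal(loc=[0.5, 3.0, 2.0], scale=0.5, size=(200, 3))

X = np.vstack([healthy, error_a, error_b])
y = np.array([0] * 200 + [1] * 200 + [2] * 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Standardise the features, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```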
78

Autocoding methods for networked embedded systems

Finney, James January 2009 (has links)
The volume and complexity of software are increasing, presenting developers with an ever-increasing challenge to deliver a system within the agreed timescale and budget [1]. With the use of Computer-Aided Software Engineering (CASE) tools for requirements management, component design, and software validation, the risks to the project can be reduced. This project focuses on autocoding CASE tools, the methods used by such tools to generate the code, and the features these tools provide to the user. The Extensible Stylesheet Language Transformations (XSLT)-based autocoding method used by Rapicore in their NetGen embedded network design tool was known to have a number of issues and limitations. The aim of the research was to identify these issues and develop an innovative solution that would support current and future autocoding requirements. Using the literature review and a number of practical projects, the issues with the XSLT-based method were identified. These issues were used to define the requirements against which a more appropriate autocoding method was researched and developed. A more powerful language was researched and selected, and with this language a prototype autocoding platform was designed, developed, validated, and evaluated. The work concludes that the innovative use and integration of programmer-level Extensible Markup Language (XML) code descriptions and PHP scripting has provided Rapicore with a powerful and flexible autocoding platform to support current and future autocoding application requirements of any size and complexity.
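The core idea, generating code from declarative XML descriptions with a general-purpose scripting language rather than XSLT, can be shown in miniature. The thesis pairs XML descriptions with PHP; the sketch below uses Python purely for consistency with the other examples here, and its element and attribute names are invented for illustration rather than taken from NetGen.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML description of a network message; the element and
# attribute names are made up for illustration, not NetGen's schema.
DESCRIPTION = """
<message name="SensorReading">
    <field name="node_id"   type="uint8_t"/>
    <field name="timestamp" type="uint32_t"/>
    <field name="value"     type="int16_t"/>
</message>
"""

def generate_c_struct(xml_text: str) -> str:
    """Turn an XML message description into a C struct definition."""
    message = ET.fromstring(xml_text)
    lines = ["typedef struct {"]
    for field in message.findall("field"):
        lines.append(f"    {field.get('type')} {field.get('name')};")
    lines.append(f"}} {message.get('name')};")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_c_struct(DESCRIPTION))
```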
79

Object code verification

Wahab, Matthew January 1998 (has links)
Object code is a program in a processor language and can be executed directly on a machine. Program verification constructs a formal proof that a program correctly implements its specification. Verifying object code therefore ensures that the program which is to be executed on a machine is correct. However, the nature of processor languages makes it difficult to specify and reason about object code programs in a formal system of logic. Furthermore, a proof of the correctness of an object code program will often be too large to construct manually because of the size of object code programs. The presence of pointers and computed jumps in object code programs constrains the use of automated tools to simplify object code verification. This thesis develops an abstract language which is expressive enough to describe any sequential object code program. The abstract language supports the definition of program logics in which to specify and verify object code programs. This allows the object code programs of any processor language to be verified in a single system of logic. The abstract language is expressive enough that a single command can describe the behaviour of any processor instruction. An object code program can therefore be translated to the abstract language by replacing each instruction with the equivalent command of the abstract language. This ensures that the use of the abstract language does not increase the difficulty of verifying an object code program. The verification of an object code program can be simplified by constructing an abstraction of the program and showing that the abstraction correctly implements the program specification. Methods for abstracting programs of the abstract language are developed which consider only the text of a program. These methods are based on describing a finite sequence of commands as a single, equivalent, command of the abstract language. This is used to define transformations which abstract a program by replacing groups of program commands with a single command. The abstraction of a program formed in this way can be verified in the same system of logic as the original program. Because the transformations consider only the program text, they are suitable for efficient mechanisation in an automated proof tool. By reducing the number of commands which must be considered, these methods can reduce the manual work needed to verify a program. The use of an abstract language allows object code programs to be specified and verified in a system of logic while the use of abstraction to simplify programs makes verification practical. As examples, object code programs for two different processors are modelled, abstracted and verified in terms of the abstract language. Features of processor languages and of object code programs which affect verification and abstraction are also summarised.
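The abstraction step rests on the observation that a finite sequence of commands has the same effect on the machine state as a single command. The toy sketch below (Python; commands modelled as state transformers over a register file, far simpler than the thesis's abstract language and not taken from it) shows three straight-line register assignments collapsing into one composite command.

```python
from functools import reduce
from typing import Callable, Dict

State = Dict[str, int]              # a register file: register name -> value
Command = Callable[[State], State]  # a command transforms one state into another

def assign(reg: str, expr: Callable[[State], int]) -> Command:
    """A command that assigns the value of expr(state) to register reg."""
    def run(state: State) -> State:
        new = dict(state)
        new[reg] = expr(state)
        return new
    return run

def compose(*cmds: Command) -> Command:
    """Collapse a sequence of commands into one equivalent command --
    the essence of the text-level abstraction transformations."""
    return lambda state: reduce(lambda s, c: c(s), cmds, state)

if __name__ == "__main__":
    # r2 := r0 + r1;  r3 := r2 * 2;  r0 := 0   -- three instructions ...
    prog = compose(
        assign("r2", lambda s: s["r0"] + s["r1"]),
        assign("r3", lambda s: s["r2"] * 2),
        assign("r0", lambda s: 0),
    )
    # ... behave as a single command on the state.
    print(prog({"r0": 3, "r1": 4, "r2": 0, "r3": 0}))
    # {'r0': 0, 'r1': 4, 'r2': 7, 'r3': 14}
```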
80

Techniques for the analysis of monotone Boolean networks

Dunne, Paul E. January 1984 (has links)
Monotone Boolean networks are one of the most widely studied restricted forms of combinational networks. This dissertation examines the complexity of such networks realising single-output monotone Boolean functions and develops recent results on their relation to unrestricted networks. Two standard analytic techniques are considered: the inductive gate elimination argument, and replacement rules. In Chapters (3) and (4) the former method is applied to obtain new lower bounds on the monotone network complexity of threshold functions. In Chapter (5) a complete characterisation of all replacement rules, valid when computing some monotone Boolean functions, is given. The latter half of the dissertation concentrates on the relation between the combinational and monotone network complexity of monotone functions, and extends the work of Berkowitz and Wegener on “slice functions”. In Chapter (6) the concept of “pseudo-complementation”, the replacement of instances of negated variables by monotone functions, without affecting computational behaviour, is defined. Pseudo-complements are shown to exist for all monotone Boolean functions and using these a generalisation of slice functions is proposed. Chapter (7) examines the slice functions of some NP-complete predicates. For the predicates considered, it is shown that the “canonical” slice has polynomial network complexity, and that the “central” slice is also NP-complete. This result permits a reformulation of the P = NP? question in terms of monotone network complexity. Finally, Chapter (8) examines the existence of gaps for the combinational and monotone network complexity measures. A natural series of classes of monotone Boolean functions is defined and it is shown that for the “hardest” members of each class there is no asymptotic gap between these measures.
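As a concrete instance of the objects involved: the threshold function T^3_2 (majority of three inputs) is realised by the monotone network (x ∧ y) ∨ (x ∧ z) ∨ (y ∧ z), built from AND and OR gates alone. The short check below (Python, my own illustration and not material from the dissertation) confirms the realisation by exhaustive evaluation.

```python
from itertools import product

def threshold(k, bits):
    """The threshold function T^n_k: 1 iff at least k of the inputs are 1."""
    return int(sum(bits) >= k)

def majority3_network(x, y, z):
    """A 5-gate monotone network for T^3_2 built from AND (&) and OR (|) only."""
    return (x & y) | (x & z) | (y & z)

# Exhaustively confirm that the monotone network realises T^3_2.
assert all(majority3_network(*bits) == threshold(2, bits)
           for bits in product((0, 1), repeat=3))
print("majority-of-3 network agrees with T^3_2 on all 8 inputs")
```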
